General plugins
1 - dcount_intersect plugin
Calculates the intersection between N sets based on hll values (N in the range [2..16]), and returns N dcount values. The plugin is invoked with the evaluate operator.
Syntax
T | evaluate dcount_intersect( hll_1, hll_2 [, hll_3, ...] )
Parameters
Name | Type | Required | Description |
---|---|---|---|
T | string | ✔️ | The input tabular expression. |
hll_i | | ✔️ | The values of set Si calculated with the hll() function. |
Returns
Returns a table with N dcount values, one column per set intersection. Column names are s0, s1, … s(n-1).
Given sets S1, S2, ..., Sn, the return values represent the distinct counts of:
S1,
S1 ∩ S2,
S1 ∩ S2 ∩ S3,
… ,
S1 ∩ S2 ∩ … ∩ Sn
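The progression of intersections can be sketched with exact sets (an illustration only: the plugin operates on HLL sketches, so its results are distinct-count estimates rather than exact values):

```python
# Illustrates which counts dcount_intersect produces: s0 = |S1|,
# s1 = |S1 ∩ S2|, s2 = |S1 ∩ S2 ∩ S3|, and so on. Exact sets stand in
# for the approximate HLL sketches the plugin actually consumes.
def progressive_intersections(*sets):
    counts, acc = [], None
    for s in sets:
        acc = set(s) if acc is None else acc & set(s)
        counts.append(len(acc))
    return counts

evens = {x for x in range(1, 101) if x % 2 == 0}
mod3 = {x for x in range(1, 101) if x % 3 == 0}
mod5 = {x for x in range(1, 101) if x % 5 == 0}
print(progressive_intersections(evens, mod3, mod5))  # [50, 16, 3]
```

This matches the example below: 50 even numbers, 16 multiples of 6, and 3 multiples of 30 in [1..100].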
Examples
// Generate numbers from 1 to 100
range x from 1 to 100 step 1
| extend isEven = (x % 2 == 0), isMod3 = (x % 3 == 0), isMod5 = (x % 5 == 0)
// Calculate conditional HLL values (note that '0' is included in each of them as additional value, so we will subtract it later)
| summarize hll_even = hll(iif(isEven, x, 0), 2),
hll_mod3 = hll(iif(isMod3, x, 0), 2),
hll_mod5 = hll(iif(isMod5, x, 0), 2)
// Invoke the plugin that calculates dcount intersections
| evaluate dcount_intersect(hll_even, hll_mod3, hll_mod5)
| project evenNumbers = s0 - 1, // 100 / 2 = 50
even_and_mod3 = s1 - 1, // lcm(2,3) = 6, therefore: 100 / 6 = 16
even_and_mod3_and_mod5 = s2 - 1 // lcm(2,3,5) = 30, therefore: 100 / 30 = 3
Output
evenNumbers | even_and_mod3 | even_and_mod3_and_mod5 |
---|---|---|
50 | 16 | 3 |
2 - infer_storage_schema plugin
This plugin infers the schema of external data and returns it as a CSL schema string. The string can be used when creating external tables. The plugin is invoked with the evaluate operator.
Authentication and authorization
In the properties of the request, you specify storage connection strings to access. Each storage connection string specifies the authorization method to use for access to the storage. Depending on the authorization method, the principal may need to be granted permissions on the external storage to perform the schema inference.
The following table lists the supported authentication methods and any required permissions by storage type.
Authentication method | Azure Blob Storage / Data Lake Storage Gen2 | Data Lake Storage Gen1 |
---|---|---|
Impersonation | Storage Blob Data Reader | Reader |
Shared Access (SAS) token | List + Read | This authentication method isn’t supported in Gen1. |
Microsoft Entra access token | | |
Storage account access key | | This authentication method isn’t supported in Gen1. |
Syntax
evaluate infer_storage_schema( Options )
Parameters
Name | Type | Required | Description |
---|---|---|---|
Options | dynamic | ✔️ | A property bag specifying the properties of the request. |
Supported properties of the request
Name | Type | Required | Description |
---|---|---|---|
StorageContainers | dynamic | ✔️ | An array of storage connection strings that represent prefix URIs for stored data artifacts. |
DataFormat | string | ✔️ | One of the supported data formats. |
FileExtension | string | | If specified, the function only scans files ending with this file extension. Specifying the extension may speed up the process or eliminate data reading issues. |
FileNamePrefix | string | | If specified, the function only scans files starting with this prefix. Specifying the prefix may speed up the process. |
Mode | string | | The schema inference strategy. A value of: any, last, all. The function infers the data schema from the first found file, from the last written file, or from all files respectively. The default value is last. |
InferenceOptions | dynamic | | More inference options. Valid options: UseFirstRowAsHeader for delimited file formats. For example, 'InferenceOptions': {'UseFirstRowAsHeader': true}. |
Returns
The infer_storage_schema plugin returns a single result table containing a single row/column with the CSL schema string.
Example
let options = dynamic({
'StorageContainers': [
h@'https://storageaccount.blob.core.windows.net/MobileEvents;secretKey'
],
'FileExtension': '.parquet',
'FileNamePrefix': 'part-',
'DataFormat': 'parquet'
});
evaluate infer_storage_schema(options)
Output
CslSchema |
---|
app_id:string, user_id:long, event_time:datetime, country:string, city:string, device_type:string, device_vendor:string, ad_network:string, campaign:string, site_id:string, event_type:string, event_name:string, organic:string, days_from_install:int, revenue:real |
Use the returned schema in external table definition:
.create external table MobileEvents(
app_id:string, user_id:long, event_time:datetime, country:string, city:string, device_type:string, device_vendor:string, ad_network:string, campaign:string, site_id:string, event_type:string, event_name:string, organic:string, days_from_install:int, revenue:real
)
kind=blob
partition by (dt:datetime = bin(event_time, 1d), app:string = app_id)
pathformat = ('app=' app '/dt=' datetime_pattern('yyyyMMdd', dt))
dataformat = parquet
(
h@'https://storageaccount.blob.core.windows.net/MobileEvents;secretKey'
)
3 - infer_storage_schema_with_suggestions plugin
The infer_storage_schema_with_suggestions plugin infers the schema of external data and returns a JSON object. For each column, the object provides the inferred type, a recommended type, and the recommended mapping transformation. The recommended type and mapping are determined by suggestion logic that selects the optimal type as follows:
- Identity columns: If the inferred type for a column is long and the column name ends with id, the suggested type is string, since it provides optimized indexing for identity columns where equality filters are common.
- Unix datetime columns: If the inferred type for a column is long and one of the unix-time to datetime mapping transformations produces a valid datetime value, the suggested type is datetime and the suggested ApplicableTransformationMapping is the one that produced a valid datetime value.
The plugin is invoked with the evaluate operator. To obtain a table schema that uses the inferred schema, without suggestions, for creating or altering Azure Storage external tables, use the infer_storage_schema plugin.
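The two suggestion rules can be sketched as follows. This is a simplified illustration: the plausibility window (years 2000–2100) and the mapping names other than DateTimeFromUnixMilliseconds (which appears in the sample output in this section) are assumptions, not the plugin's actual internals.

```python
from datetime import datetime, timezone

# Sketch of the two suggestion rules described above. The year window
# and the extra DateTimeFromUnix* mapping names are assumptions.
UNIX_MAPPINGS = {
    "DateTimeFromUnixSeconds": 1,
    "DateTimeFromUnixMilliseconds": 1_000,
    "DateTimeFromUnixMicroseconds": 1_000_000,
    "DateTimeFromUnixNanoseconds": 1_000_000_000,
}

def suggest(column_name, inferred_type, sample_value=None):
    # Identity columns: long with a name ending in "id" -> string.
    if inferred_type == "long" and column_name.lower().endswith("id"):
        return "string", "None"
    # Unix datetime columns: long that maps to a plausible datetime.
    if inferred_type == "long" and sample_value is not None:
        for mapping, scale in UNIX_MAPPINGS.items():
            try:
                dt = datetime.fromtimestamp(sample_value / scale, tz=timezone.utc)
            except (OverflowError, OSError, ValueError):
                continue
            if 2000 <= dt.year <= 2100:  # hypothetical plausibility window
                return "datetime", mapping
    return inferred_type, "None"

print(suggest("author_id", "long"))                     # ('string', 'None')
print(suggest("time_millisec", "long", 1547083647000))  # ('datetime', 'DateTimeFromUnixMilliseconds')
```

Both results agree with the example output below: author_id is recommended as string, and time_millisec as datetime via DateTimeFromUnixMilliseconds.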
Authentication and authorization
In the properties of the request, you specify storage connection strings to access. Each storage connection string specifies the authorization method to use for access to the storage. Depending on the authorization method, the principal may need to be granted permissions on the external storage to perform the schema inference.
The following table lists the supported authentication methods and any required permissions by storage type.
Authentication method | Azure Blob Storage / Data Lake Storage Gen2 | Data Lake Storage Gen1 |
---|---|---|
Impersonation | Storage Blob Data Reader | Reader |
Shared Access (SAS) token | List + Read | This authentication method isn’t supported in Gen1. |
Microsoft Entra access token | | |
Storage account access key | | This authentication method isn’t supported in Gen1. |
Syntax
evaluate infer_storage_schema_with_suggestions( Options )
Parameters
Name | Type | Required | Description |
---|---|---|---|
Options | dynamic | ✔️ | A property bag specifying the properties of the request. |
Supported properties of the request
Name | Type | Required | Description |
---|---|---|---|
StorageContainers | dynamic | ✔️ | An array of storage connection strings that represent prefix URIs for stored data artifacts. |
DataFormat | string | ✔️ | One of the supported data formats for ingestion. |
FileExtension | string | | If specified, the function only scans files ending with this file extension. Specifying the extension may speed up the process or eliminate data reading issues. |
FileNamePrefix | string | | If specified, the function only scans files starting with this prefix. Specifying the prefix may speed up the process. |
Mode | string | | The schema inference strategy. A value of: any, last, all. The function infers the data schema from the first found file, from the last written file, or from all files respectively. The default value is last. |
InferenceOptions | dynamic | | More inference options. Valid options: UseFirstRowAsHeader for delimited file formats. For example, 'InferenceOptions': {'UseFirstRowAsHeader': true}. |
Returns
The infer_storage_schema_with_suggestions plugin returns a single result table containing a single row/column with a JSON string.
Example
let options = dynamic({
'StorageContainers': [
h@'https://storageaccount.blob.core.windows.net/MobileEvents;secretKey'
],
'FileExtension': '.json',
'FileNamePrefix': 'js-',
'DataFormat': 'json'
});
evaluate infer_storage_schema_with_suggestions(options)
Example input data
{
"source": "DataExplorer",
"created_at": "2022-04-10 15:47:57",
"author_id": 739144091473215488,
"time_millisec":1547083647000
}
Output
{
"Columns": [
{
"OriginalColumn": {
"Name": "source",
"CslType": {
"type": "string",
"IsNumeric": false,
"IsSummable": false
}
},
"RecommendedColumn": {
"Name": "source",
"CslType": {
"type": "string",
"IsNumeric": false,
"IsSummable": false
}
},
"ApplicableTransformationMapping": "None"
},
{
"OriginalColumn": {
"Name": "created_at",
"CslType": {
"type": "datetime",
"IsNumeric": false,
"IsSummable": true
}
},
"RecommendedColumn": {
"Name": "created_at",
"CslType": {
"type": "datetime",
"IsNumeric": false,
"IsSummable": true
}
},
"ApplicableTransformationMapping": "None"
},
{
"OriginalColumn": {
"Name": "author_id",
"CslType": {
"type": "long",
"IsNumeric": true,
"IsSummable": true
}
},
"RecommendedColumn": {
"Name": "author_id",
"CslType": {
"type": "string",
"IsNumeric": false,
"IsSummable": false
}
},
"ApplicableTransformationMapping": "None"
},
{
"OriginalColumn": {
"Name": "time_millisec",
"CslType": {
"type": "long",
"IsNumeric": true,
"IsSummable": true
}
},
"RecommendedColumn": {
"Name": "time_millisec",
"CslType": {
"type": "datetime",
"IsNumeric": false,
"IsSummable": true
}
},
"ApplicableTransformationMapping": "DateTimeFromUnixMilliseconds"
}
]
}
4 - ipv4_lookup plugin
The ipv4_lookup plugin looks up an IPv4 value in a lookup table and returns rows with matched values. The plugin is invoked with the evaluate operator.
Syntax
T | evaluate ipv4_lookup( LookupTable, SourceIPv4Key, IPv4LookupKey [, ExtraKey1 [.. , ExtraKeyN [, return_unmatched ]]] )
Parameters
Name | Type | Required | Description |
---|---|---|---|
T | string | ✔️ | The tabular input whose column SourceIPv4Key is used for IPv4 matching. |
LookupTable | string | ✔️ | Table or tabular expression with IPv4 lookup data, whose column LookupKey is used for IPv4 matching. IPv4 values can be masked using IP-prefix notation. |
SourceIPv4Key | string | ✔️ | The column of T with IPv4 string to be looked up in LookupTable. IPv4 values can be masked using IP-prefix notation. |
IPv4LookupKey | string | ✔️ | The column of LookupTable with IPv4 string that is matched against each SourceIPv4Key value. |
ExtraKey1 .. ExtraKeyN | string | | Additional column references used for lookup matching. Similar to the join operation: records with equal values are considered matching. Column name references must exist in both the source table T and LookupTable. |
return_unmatched | bool | | A boolean flag that defines whether the result should include all rows or only matching rows (default: false - only matching rows are returned). |
Returns
The ipv4_lookup plugin returns a result of join (lookup) based on the IPv4 key. The schema of the table is the union of the source table and the lookup table, similar to the result of the lookup operator.
If the return_unmatched argument is set to true, the resulting table includes both matched and unmatched rows (filled with nulls).
If the return_unmatched argument is set to false, or omitted (the default value of false is used), the resulting table has as many records as matching results. This variant of lookup has better performance compared to return_unmatched=true execution.
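The prefix-based matching the plugin performs can be sketched with Python's ipaddress module (a minimal illustration of the semantics, not the plugin's optimized implementation):

```python
import ipaddress

# Minimal sketch of IPv4 lookup by IP-prefix match. Lookup rows carry
# networks in CIDR notation; a source IP matches a row when it falls
# inside that row's network. With return_unmatched=True, non-matching
# IPs are emitted with the lookup columns set to null (None).
IP_DATA = [
    ("2.20.183.0/24", "United Kingdom"),
    ("5.8.0.0/19", "Russia"),
]

def ipv4_lookup(ips, lookup, return_unmatched=False):
    nets = [(ipaddress.ip_network(net), name) for net, name in lookup]
    rows = []
    for ip in ips:
        addr = ipaddress.ip_address(ip)
        matched = [(ip, name) for net, name in nets if addr in net]
        if matched:
            rows.extend(matched)
        elif return_unmatched:
            rows.append((ip, None))  # unmatched row, lookup columns null
    return rows

print(ipv4_lookup(["2.20.183.12", "192.165.12.17"], IP_DATA, return_unmatched=True))
# [('2.20.183.12', 'United Kingdom'), ('192.165.12.17', None)]
```

The same membership test (address in network) underlies the masked-value matching described in the parameters table above.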
Examples
IPv4 lookup - matching rows only
// IP lookup table: IP_Data
// Partial data from: https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv
let IP_Data = datatable(network:string, continent_code:string ,continent_name:string, country_iso_code:string, country_name:string)
[
"111.68.128.0/17","AS","Asia","JP","Japan",
"5.8.0.0/19","EU","Europe","RU","Russia",
"223.255.254.0/24","AS","Asia","SG","Singapore",
"46.36.200.51/32","OC","Oceania","CK","Cook Islands",
"2.20.183.0/24","EU","Europe","GB","United Kingdom",
];
let IPs = datatable(ip:string)
[
'2.20.183.12', // United Kingdom
'5.8.1.2', // Russia
'192.165.12.17', // Unknown
];
IPs
| evaluate ipv4_lookup(IP_Data, ip, network)
Output
ip | network | continent_code | continent_name | country_iso_code | country_name |
---|---|---|---|---|---|
2.20.183.12 | 2.20.183.0/24 | EU | Europe | GB | United Kingdom |
5.8.1.2 | 5.8.0.0/19 | EU | Europe | RU | Russia |
IPv4 lookup - return both matching and nonmatching rows
// IP lookup table: IP_Data
// Partial data from:
// https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv
let IP_Data = datatable(network:string,continent_code:string ,continent_name:string ,country_iso_code:string ,country_name:string )
[
"111.68.128.0/17","AS","Asia","JP","Japan",
"5.8.0.0/19","EU","Europe","RU","Russia",
"223.255.254.0/24","AS","Asia","SG","Singapore",
"46.36.200.51/32","OC","Oceania","CK","Cook Islands",
"2.20.183.0/24","EU","Europe","GB","United Kingdom",
];
let IPs = datatable(ip:string)
[
'2.20.183.12', // United Kingdom
'5.8.1.2', // Russia
'192.165.12.17', // Unknown
];
IPs
| evaluate ipv4_lookup(IP_Data, ip, network, return_unmatched = true)
Output
ip | network | continent_code | continent_name | country_iso_code | country_name |
---|---|---|---|---|---|
2.20.183.12 | 2.20.183.0/24 | EU | Europe | GB | United Kingdom |
5.8.1.2 | 5.8.0.0/19 | EU | Europe | RU | Russia |
192.165.12.17 | | | | | |
IPv4 lookup - using source in external_data()
let IP_Data = external_data(network:string,geoname_id:long,continent_code:string,continent_name:string ,country_iso_code:string,country_name:string,is_anonymous_proxy:bool,is_satellite_provider:bool)
['https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv'];
let IPs = datatable(ip:string)
[
'2.20.183.12', // United Kingdom
'5.8.1.2', // Russia
'192.165.12.17', // Sweden
];
IPs
| evaluate ipv4_lookup(IP_Data, ip, network, return_unmatched = true)
Output
ip | network | geoname_id | continent_code | continent_name | country_iso_code | country_name | is_anonymous_proxy | is_satellite_provider |
---|---|---|---|---|---|---|---|---|
2.20.183.12 | 2.20.183.0/24 | 2635167 | EU | Europe | GB | United Kingdom | 0 | 0 |
5.8.1.2 | 5.8.0.0/19 | 2017370 | EU | Europe | RU | Russia | 0 | 0 |
192.165.12.17 | 192.165.8.0/21 | 2661886 | EU | Europe | SE | Sweden | 0 | 0 |
IPv4 lookup - using extra columns for matching
let IP_Data = external_data(network:string,geoname_id:long,continent_code:string,continent_name:string ,country_iso_code:string,country_name:string,is_anonymous_proxy:bool,is_satellite_provider:bool)
['https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv'];
let IPs = datatable(ip:string, continent_name:string, country_iso_code:string)
[
'2.20.183.12', 'Europe', 'GB', // United Kingdom
'5.8.1.2', 'Europe', 'RU', // Russia
'192.165.12.17', 'Europe', '', // Sweden is 'SE' - so it won't be matched
];
IPs
| evaluate ipv4_lookup(IP_Data, ip, network, continent_name, country_iso_code)
Output
ip | continent_name | country_iso_code | network | geoname_id | continent_code | country_name | is_anonymous_proxy | is_satellite_provider |
---|---|---|---|---|---|---|---|---|
2.20.183.12 | Europe | GB | 2.20.183.0/24 | 2635167 | EU | United Kingdom | 0 | 0 |
5.8.1.2 | Europe | RU | 5.8.0.0/19 | 2017370 | EU | Russia | 0 | 0 |
Related content
- Overview of IPv4/IPv6 functions
- Overview of IPv4 text match functions
5 - ipv6_lookup plugin
The ipv6_lookup plugin looks up an IPv6 value in a lookup table and returns rows with matched values. The plugin is invoked with the evaluate operator.
Syntax
T | evaluate ipv6_lookup( LookupTable, SourceIPv6Key, IPv6LookupKey [, return_unmatched ] )
Parameters
Name | Type | Required | Description |
---|---|---|---|
T | string | ✔️ | The tabular input whose column SourceIPv6Key is used for IPv6 matching. |
LookupTable | string | ✔️ | Table or tabular expression with IPv6 lookup data, whose column LookupKey is used for IPv6 matching. IPv6 values can be masked using IP-prefix notation. |
SourceIPv6Key | string | ✔️ | The column of T with IPv6 string to be looked up in LookupTable. IPv6 values can be masked using IP-prefix notation. |
IPv6LookupKey | string | ✔️ | The column of LookupTable with IPv6 string that is matched against each SourceIPv6Key value. |
return_unmatched | bool | | A boolean flag that defines whether the result should include all rows or only matching rows (default: false - only matching rows are returned). |
Returns
The ipv6_lookup plugin returns a result of join (lookup) based on the IPv6 key. The schema of the table is the union of the source table and the lookup table, similar to the result of the lookup operator.
If the return_unmatched argument is set to true, the resulting table includes both matched and unmatched rows (filled with nulls).
If the return_unmatched argument is set to false, or omitted (the default value of false is used), the resulting table has as many records as matching results. This variant of lookup has better performance compared to return_unmatched=true execution.
Examples
IPv6 lookup - matching rows only
// IP lookup table: IP_Data (the data is generated by ChatGPT).
let IP_Data = datatable(network:string, continent_code:string ,continent_name:string, country_iso_code:string, country_name:string)
[
"2001:0db8:85a3::/48","NA","North America","US","United States",
"2404:6800:4001::/48","AS","Asia","JP","Japan",
"2a00:1450:4001::/48","EU","Europe","DE","Germany",
"2800:3f0:4001::/48","SA","South America","BR","Brazil",
"2c0f:fb50:4001::/48","AF","Africa","ZA","South Africa",
"2607:f8b0:4001::/48","NA","North America","CA","Canada",
"2a02:26f0:4001::/48","EU","Europe","FR","France",
"2400:cb00:4001::/48","AS","Asia","IN","India",
"2801:0db8:85a3::/48","SA","South America","AR","Argentina",
"2a03:2880:4001::/48","EU","Europe","GB","United Kingdom"
];
let IPs = datatable(ip:string)
[
"2001:0db8:85a3:0000:0000:8a2e:0370:7334", // United States
"2404:6800:4001:0001:0000:8a2e:0370:7334", // Japan
"2a02:26f0:4001:0006:0000:8a2e:0370:7334", // France
"a5e:f127:8a9d:146d:e102:b5d3:c755:abcd", // N/A
"a5e:f127:8a9d:146d:e102:b5d3:c755:abce" // N/A
];
IPs
| evaluate ipv6_lookup(IP_Data, ip, network)
Output
network | continent_code | continent_name | country_iso_code | country_name | ip |
---|---|---|---|---|---|
2001:0db8:85a3::/48 | NA | North America | US | United States | 2001:0db8:85a3:0000:0000:8a2e:0370:7334 |
2404:6800:4001::/48 | AS | Asia | JP | Japan | 2404:6800:4001:0001:0000:8a2e:0370:7334 |
2a02:26f0:4001::/48 | EU | Europe | FR | France | 2a02:26f0:4001:0006:0000:8a2e:0370:7334 |
IPv6 lookup - return both matching and nonmatching rows
// IP lookup table: IP_Data (the data is generated by ChatGPT).
let IP_Data = datatable(network:string, continent_code:string ,continent_name:string, country_iso_code:string, country_name:string)
[
"2001:0db8:85a3::/48","NA","North America","US","United States",
"2404:6800:4001::/48","AS","Asia","JP","Japan",
"2a00:1450:4001::/48","EU","Europe","DE","Germany",
"2800:3f0:4001::/48","SA","South America","BR","Brazil",
"2c0f:fb50:4001::/48","AF","Africa","ZA","South Africa",
"2607:f8b0:4001::/48","NA","North America","CA","Canada",
"2a02:26f0:4001::/48","EU","Europe","FR","France",
"2400:cb00:4001::/48","AS","Asia","IN","India",
"2801:0db8:85a3::/48","SA","South America","AR","Argentina",
"2a03:2880:4001::/48","EU","Europe","GB","United Kingdom"
];
let IPs = datatable(ip:string)
[
"2001:0db8:85a3:0000:0000:8a2e:0370:7334", // United States
"2404:6800:4001:0001:0000:8a2e:0370:7334", // Japan
"2a02:26f0:4001:0006:0000:8a2e:0370:7334", // France
"a5e:f127:8a9d:146d:e102:b5d3:c755:abcd", // N/A
"a5e:f127:8a9d:146d:e102:b5d3:c755:abce" // N/A
];
IPs
| evaluate ipv6_lookup(IP_Data, ip, network, true)
Output
network | continent_code | continent_name | country_iso_code | country_name | ip |
---|---|---|---|---|---|
2001:0db8:85a3::/48 | NA | North America | US | United States | 2001:0db8:85a3:0000:0000:8a2e:0370:7334 |
2404:6800:4001::/48 | AS | Asia | JP | Japan | 2404:6800:4001:0001:0000:8a2e:0370:7334 |
2a02:26f0:4001::/48 | EU | Europe | FR | France | 2a02:26f0:4001:0006:0000:8a2e:0370:7334 |
| | | | | a5e:f127:8a9d:146d:e102:b5d3:c755:abcd |
| | | | | a5e:f127:8a9d:146d:e102:b5d3:c755:abce |
Related content
- Overview of IPv4/IPv6 functions
6 - preview plugin
Returns a table with up to the specified number of rows from the input record set, and the total number of records in the input record set.
Syntax
T | evaluate preview( NumberOfRows )
Parameters
Name | Type | Required | Description |
---|---|---|---|
T | string | ✔️ | The table to preview. |
NumberOfRows | int | ✔️ | The number of rows to preview from the table. |
Returns
The preview plugin returns two result tables:
- A table with up to the specified number of rows, equivalent to running T | take NumberOfRows.
- A table with a single row/column, holding the number of records in the input record set, equivalent to running T | count.
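The two result tables amount to running take and count over the same input; a minimal sketch:

```python
# Sketch of preview semantics: the first result is `T | take N`,
# the second is `T | count`, computed over the same input rows.
def preview(rows, number_of_rows):
    return rows[:number_of_rows], len(rows)

rows = [{"EventId": i} for i in range(59066)]
head, total = preview(rows, 5)
print(len(head), total)  # 5 59066
```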
Example
StormEvents | evaluate preview(5)
Table1
The following output table only includes the first 6 columns. To see the full result, run the query.
StartTime | EndTime | EpisodeId | EventId | State | EventType | … |
---|---|---|---|---|---|---|
2007-12-30T16:00:00Z | 2007-12-30T16:05:00Z | 11749 | 64588 | GEORGIA | Thunderstorm Wind | … |
2007-12-20T07:50:00Z | 2007-12-20T07:53:00Z | 12554 | 68796 | MISSISSIPPI | Thunderstorm Wind | … |
2007-09-29T08:11:00Z | 2007-09-29T08:11:00Z | 11091 | 61032 | ATLANTIC SOUTH | Waterspout | … |
2007-09-20T21:57:00Z | 2007-09-20T22:05:00Z | 11078 | 60913 | FLORIDA | Tornado | … |
2007-09-18T20:00:00Z | 2007-09-19T18:00:00Z | 11074 | 60904 | FLORIDA | Heavy Rain | … |
Table2
Count |
---|
59066 |
7 - schema_merge plugin
Merges tabular schema definitions into a unified schema.
Schema definitions are expected to be in the format produced by the getschema operator.
The schema merge operation joins columns in input schemas and tries to reduce data types to common ones. If data types can’t be reduced, an error is displayed on the problematic column.
The plugin is invoked with the evaluate operator.
Syntax
T | evaluate schema_merge( PreserveOrder )
Parameters
Name | Type | Required | Description |
---|---|---|---|
PreserveOrder | bool | | When set to true, directs the plugin to validate the column order as defined by the first tabular schema that is kept. If the same column appears in several schemas, its column ordinal must match the column ordinal in the first schema it appeared in. Default value is true. |
Returns
The schema_merge plugin returns output similar to what the getschema operator returns.
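The merge-and-validate behavior can be sketched as follows. This is a simplified illustration only: the real plugin also reduces compatible data types to a common type, and reports errored columns with ordinal -1, both of which are omitted here.

```python
# Simplified sketch of schema_merge: columns are collected in
# first-seen order; with preserve_order=True, a column whose ordinal
# differs from its ordinal in the first schema it appeared in is
# flagged with an error marker. (Type reduction is omitted.)
def schema_merge(schemas, preserve_order=True):
    order, types, first_ordinal = [], {}, {}
    for schema in schemas:
        for ordinal, (name, ctype) in enumerate(schema):
            if name not in types:
                order.append(name)
                types[name] = ctype
                first_ordinal[name] = ordinal
            elif preserve_order and ordinal != first_ordinal[name]:
                types[name] = "ERROR(columns are out of order)"
    return [(name, types[name]) for name in order]

schema1 = [("Uri", "string"), ("HttpStatus", "int")]
schema2 = [("Uri", "string"), ("Referrer", "string"), ("HttpStatus", "int")]
print(schema_merge([schema1, schema2]))
print(schema_merge([schema1, schema2], preserve_order=False))
```

With preserve_order=True, HttpStatus moves from ordinal 1 to ordinal 2 and is flagged, mirroring the second example below; with preserve_order=False it merges cleanly, mirroring the third.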
Examples
Merge with a schema that has a new column appended.
let schema1 = datatable(Uri:string, HttpStatus:int)[] | getschema;
let schema2 = datatable(Uri:string, HttpStatus:int, Referrer:string)[] | getschema;
union schema1, schema2 | evaluate schema_merge()
Output
ColumnName | ColumnOrdinal | DataType | ColumnType |
---|---|---|---|
Uri | 0 | System.String | string |
HttpStatus | 1 | System.Int32 | int |
Referrer | 2 | System.String | string |
Merge with a schema that has different column ordering (the HttpStatus ordinal changes from 1 to 2 in the new variant).
let schema1 = datatable(Uri:string, HttpStatus:int)[] | getschema;
let schema2 = datatable(Uri:string, Referrer:string, HttpStatus:int)[] | getschema;
union schema1, schema2 | evaluate schema_merge()
Output
ColumnName | ColumnOrdinal | DataType | ColumnType |
---|---|---|---|
Uri | 0 | System.String | string |
Referrer | 1 | System.String | string |
HttpStatus | -1 | ERROR(unknown CSL type:ERROR(columns are out of order)) | ERROR(columns are out of order) |
Merge with a schema that has different column ordering, but with PreserveOrder set to false.
let schema1 = datatable(Uri:string, HttpStatus:int)[] | getschema;
let schema2 = datatable(Uri:string, Referrer:string, HttpStatus:int)[] | getschema;
union schema1, schema2 | evaluate schema_merge(PreserveOrder = false)
Output
ColumnName | ColumnOrdinal | DataType | ColumnType |
---|---|---|---|
Uri | 0 | System.String | string |
Referrer | 1 | System.String | string |
HttpStatus | 2 | System.Int32 | int |