Workload groups
1 - Query consistency policy
A workload group’s query consistency policy allows specifying options that control the consistency mode of queries.
The policy object
Each option consists of:
- A typed Value - the value of the limit.
- IsRelaxable - a boolean value that defines if the option can be relaxed by the caller, as part of the request's request properties. Default is true.
The following limits are configurable:
Name | Type | Description | Supported values | Default value | Matching client request property |
---|---|---|---|---|---|
QueryConsistency | QueryConsistency | The consistency mode to use. | Strong, Weak, WeakAffinitizedByQuery, or WeakAffinitizedByDatabase | Strong | queryconsistency |
CachedResultsMaxAge | timespan | The maximum age of cached query results that can be returned. | A non-negative timespan | null | query_results_cache_max_age |
Example
"QueryConsistencyPolicy": {
"QueryConsistency": {
"IsRelaxable": true,
"Value": "Weak"
},
"CachedResultsMaxAge": {
"IsRelaxable": true,
"Value": "05:00:00"
}
}
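A minimal sketch of applying this policy to the default workload group, assuming the .alter-merge workload_group management command (the triple backticks delimit a Kusto multi-line string literal, not formatting):
.alter-merge workload_group default ```
{
  "QueryConsistencyPolicy": {
    "QueryConsistency": {
      "IsRelaxable": true,
      "Value": "Weak"
    },
    "CachedResultsMaxAge": {
      "IsRelaxable": true,
      "Value": "05:00:00"
    }
  }
}```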
Monitoring
You can monitor the age of the metadata snapshot on nodes serving as weak consistency service heads by using the Weak consistency latency metric. For more information, see Query metrics.
Related content
2 - Request limits policy
A workload group’s request limits policy allows limiting the resources used by the request during its execution.
The policy object
Each limit consists of:
- A typed Value - the value of the limit.
- IsRelaxable - a boolean value that defines if the limit can be relaxed by the caller, as part of the request's request properties.
The following limits are configurable:
Property | Type | Description | Supported values | Matching client request property |
---|---|---|---|---|
DataScope | string | The query’s data scope. This value determines whether the query applies to all data or just the hot cache. | All , HotCache , or null | query_datascope |
MaxMemoryPerQueryPerNode | long | The maximum amount of memory (in bytes) a query can allocate. | [1 , 50% of a single node’s total RAM] | max_memory_consumption_per_query_per_node |
MaxMemoryPerIterator | long | The maximum amount of memory (in bytes) a query operator can allocate. | [1 , Min(32212254720 , 50% of a single node’s total RAM)] | maxmemoryconsumptionperiterator |
MaxFanoutThreadsPercentage | int | The percentage of threads on each node to fan out query execution to. When set to 100%, the cluster assigns all CPUs on each node. For example, 16 CPUs on a cluster deployed on Azure D14_v2 nodes. | [1 , 100 ] | query_fanout_threads_percent |
MaxFanoutNodesPercentage | int | The percentage of nodes on the cluster to fan out query execution to. Functions in a similar manner to MaxFanoutThreadsPercentage . | [1 , 100 ] | query_fanout_nodes_percent |
MaxResultRecords | long | The maximum number of records a request is allowed to return to the caller, beyond which the results are truncated. The truncation limit affects the final result of the query, as delivered back to the client. However, the truncation limit doesn’t apply to intermediate results of subqueries, such as those that result from having cross-cluster references. | [1 , 9223372036854775807 ] | truncationmaxrecords |
MaxResultBytes | long | The maximum data size (in bytes) a request is allowed to return to the caller, beyond which the results are truncated. The truncation limit affects the final result of the query, as delivered back to the client. However, the truncation limit doesn’t apply to intermediate results of subqueries, such as those that result from having cross-cluster references. | [1 , 9223372036854775807 ] | truncationmaxsize |
MaxExecutionTime | timespan | The maximum duration of a request. Notes: 1) This can be used to place more limits on top of the default limits on execution time, but not extend them. 2) Timeout processing isn’t at the resolution of seconds, rather it’s designed to prevent a query from running for minutes. 3) The time it takes to read the payload back at the client isn’t treated as part of the timeout. It depends on how quickly the caller pulls the data from the stream. 4) Total execution time can exceed the configured value if aborting execution takes longer to complete. | [00:00:00 , 01:00:00 ] | servertimeout |
CPU resource usage
Queries can use all the CPU resources within the cluster. By default, when multiple queries are running concurrently, the system employs a fair round-robin approach to distribute resources. This strategy is optimal for achieving high performance with ad-hoc queries.
However, there are scenarios where you might want to restrict the CPU resources allocated to a specific query, for instance, a background job that can accommodate higher latencies. The request limits policy provides the flexibility to specify a lower percentage of threads or nodes to be used when executing distributed subquery operations. The default setting is 100%.
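When a limit is marked IsRelaxable, a caller can also lower it per request through the matching client request property. A minimal sketch, assuming a hypothetical table named MyTable and a policy that allows relaxing query_fanout_threads_percent:
set query_fanout_threads_percent = 25;
MyTable
| count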
The default workload group
The default workload group has the following policy defined by default. This policy can be altered.
{
"DataScope": {
"IsRelaxable": true,
"Value": "All"
},
"MaxMemoryPerQueryPerNode": {
"IsRelaxable": true,
"Value": < 50% of a single node's total RAM >
},
"MaxMemoryPerIterator": {
"IsRelaxable": true,
"Value": 5368709120
},
"MaxFanoutThreadsPercentage": {
"IsRelaxable": true,
"Value": 100
},
"MaxFanoutNodesPercentage": {
"IsRelaxable": true,
"Value": 100
},
"MaxResultRecords": {
"IsRelaxable": true,
"Value": 500000
},
"MaxResultBytes": {
"IsRelaxable": true,
"Value": 67108864
},
"MaxExecutiontime": {
"IsRelaxable": true,
"Value": "00:04:00"
}
}
Example
The following JSON represents a custom requests limits policy object:
{
"DataScope": {
"IsRelaxable": true,
"Value": "HotCache"
},
"MaxMemoryPerQueryPerNode": {
"IsRelaxable": true,
"Value": 2684354560
},
"MaxMemoryPerIterator": {
"IsRelaxable": true,
"Value": 2684354560
},
"MaxFanoutThreadsPercentage": {
"IsRelaxable": true,
"Value": 50
},
"MaxFanoutNodesPercentage": {
"IsRelaxable": true,
"Value": 50
},
"MaxResultRecords": {
"IsRelaxable": true,
"Value": 1000
},
"MaxResultBytes": {
"IsRelaxable": true,
"Value": 33554432
},
"MaxExecutiontime": {
"IsRelaxable": true,
"Value": "00:01:00"
}
}
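A sketch of attaching such a policy to a custom workload group, assuming a group named MyWorkloadGroup already exists and using the .alter-merge workload_group management command (only a subset of the limits is shown):
.alter-merge workload_group ['MyWorkloadGroup'] ```
{
  "RequestLimitsPolicy": {
    "DataScope": {
      "IsRelaxable": true,
      "Value": "HotCache"
    },
    "MaxExecutionTime": {
      "IsRelaxable": true,
      "Value": "00:01:00"
    }
  }
}```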
Related content
3 - Request queuing policy
A workload group’s request queuing policy controls queuing of requests for delayed execution, once a certain threshold of concurrent requests is exceeded.
Queuing of requests can reduce the number of throttling errors during times of peak activity. It does so by queuing incoming requests up to a predefined short time period, while polling for available capacity during that time period.
The policy can be defined only for workload groups with a request rate limit policy that limits the maximum number of concurrent requests at the scope of the workload group.
Use the .alter-merge workload group management command to enable request queuing, as sketched after the policy object below.
The policy object
The policy includes a single property:
- IsEnabled: A boolean indicating if the policy is enabled. The default value is false.
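For example, a minimal sketch of enabling queuing for a hypothetical workload group named MyWorkloadGroup with the .alter-merge workload_group command:
.alter-merge workload_group ['MyWorkloadGroup'] ```
{
  "RequestQueuingPolicy": {
    "IsEnabled": true
  }
}```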
Related content
4 - Request rate limit policy
The workload group’s request rate limit policy lets you limit the number of concurrent requests classified into the workload group, per workload group or per principal.
Rate limits are enforced at the level defined by the workload group’s Request rate limits enforcement policy.
The policy object
A request rate limit policy has the following properties:
Name | Supported values | Description |
---|---|---|
IsEnabled | true , false | Indicates if the policy is enabled or not. |
Scope | WorkloadGroup , Principal | The scope to which the limit applies. |
LimitKind | ConcurrentRequests , ResourceUtilization | The kind of the request rate limit. |
Properties | Property bag | Properties of the request rate limit. |
Concurrent requests rate limit
A request rate limit of kind ConcurrentRequests
includes the following property:
Name | Type | Description | Supported Values |
---|---|---|---|
MaxConcurrentRequests | int | The maximum number of concurrent requests. | [0 , 10000 ] |
When a request exceeds the limit on the maximum number of concurrent requests:
- The request's state, as presented by System information commands, will be Throttled.
- The error message will include the origin of the throttling and the capacity that's been exceeded.
The following table shows a few examples of concurrent requests that exceed the maximum limit and the error message that these requests return:
Scenario | Error message |
---|---|
A throttled .create table command that was classified to the default workload group, which has a limit of 80 concurrent requests at the scope of the workload group. | The management command was aborted due to throttling. Retrying after some backoff might succeed. CommandType: ‘TableCreate’, Capacity: 80, Origin: ‘RequestRateLimitPolicy/WorkloadGroup/default’. |
A throttled query that was classified to a workload group named MyWorkloadGroup , which has a limit of 50 concurrent requests at the scope of the workload group. | The query was aborted due to throttling. Retrying after some backoff might succeed. Capacity: 50, Origin: ‘RequestRateLimitPolicy/WorkloadGroup/MyWorkloadGroup’. |
A throttled query that was classified to a workload group named MyWorkloadGroup , which has a limit of 10 concurrent requests at the scope of a principal. | The query was aborted due to throttling. Retrying after some backoff might succeed. Capacity: 10, Origin: ‘RequestRateLimitPolicy/WorkloadGroup/MyWorkloadGroup/Principal/aaduser=9e04c4f5-1abd-48d4-a3d2-9f58615b4724;6ccf3fe8-6343-4be5-96c3-29a128dd9570’. |
- The HTTP response code will be 429. The subcode will be TooManyRequests.
- The exception type will be QueryThrottledException for queries, and ControlCommandThrottledException for management commands.
Resource utilization rate limit
A request rate limit of kind ResourceUtilization
includes the following properties:
Name | Type | Description | Supported Values |
---|---|---|---|
ResourceKind | ResourceKind | The resource to limit. When ResourceKind is TotalCpuSeconds, the limit is enforced based on post-execution reports of CPU utilization of completed requests. Requests that report utilization of 0.005 seconds of CPU or lower aren't counted. The limit (MaxUtilization) represents the total CPU seconds that can be consumed by requests within a specified time window (TimeWindow). For example, a user running ad-hoc queries may have a limit of 1000 CPU seconds per hour. If this limit is exceeded, subsequent queries will be throttled, even if started concurrently, as the cumulative CPU seconds have surpassed the defined limit within the sliding window period. | RequestCount, TotalCpuSeconds |
MaxUtilization | long | The maximum of the resource that can be utilized. | RequestCount: [1 , 16777215 ]; TotalCpuSeconds: [1 , 828000 ] |
TimeWindow | timespan | The sliding time window during which the limit is applied. | [00:00:01 , 01:00:00 ] |
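As an illustration, the 1000-CPU-seconds-per-hour example described in the table could be expressed with a rate limit object like the following sketch (values are illustrative):
{
  "IsEnabled": true,
  "Scope": "Principal",
  "LimitKind": "ResourceUtilization",
  "Properties": {
    "ResourceKind": "TotalCpuSeconds",
    "MaxUtilization": 1000,
    "TimeWindow": "01:00:00"
  }
}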
When a request exceeds the limit on resource utilization:
- The request's state, as presented by System information commands, will be Throttled.
- The error message will include the origin of the throttling and the quota that's been exceeded.
The following table shows a few examples of requests that exceed the resource utilization rate limit and the error message that these requests return:
Scenario | Error message |
---|---|
A throttled request that was classified to a workload group named Automated Requests , which has a limit of 1000 requests per hour at the scope of a principal. | The request was denied due to exceeding quota limitations. Resource: ‘RequestCount’, Quota: ‘1000’, TimeWindow: ‘01:00:00’, Origin: ‘RequestRateLimitPolicy/WorkloadGroup/Automated Requests/Principal/aadapp=9e04c4f5-1abd-48d4-a3d2-9f58615b4724;6ccf3fe8-6343-4be5-96c3-29a128dd9570’. |
A throttled request, that was classified to a workload group named Automated Requests , which has a limit of 2000 total CPU seconds per hour at the scope of the workload group. | The request was denied due to exceeding quota limitations. Resource: ‘TotalCpuSeconds’, Quota: ‘2000’, TimeWindow: ‘01:00:00’, Origin: ‘RequestRateLimitPolicy/WorkloadGroup/Automated Requests’. |
- The HTTP response code will be 429. The subcode will be TooManyRequests.
- The exception type will be QuotaExceededException.
How consistency affects rate limits
With strong consistency, the default limit on maximum concurrent requests depends on the SKU of the cluster, and is calculated as: Cores-Per-Node x 10. For example, a cluster that's set up with Azure D14_v2 nodes, where each node has 16 vCores, will have a default limit of 16 x 10 = 160.
With weak consistency, the effective default limit on maximum concurrent requests depends on the SKU of the cluster and the number of query heads, and is calculated as: Cores-Per-Node x 10 x Number-Of-Query-Heads. For example, a cluster that's set up with Azure D14_v2 nodes and 5 query heads, where each node has 16 vCores, will have an effective default limit of 16 x 10 x 5 = 800.
For more information, see Query consistency.
The default workload group
The default workload group has the following policy defined by default. This policy can be altered.
[
{
"IsEnabled": true,
"Scope": "WorkloadGroup",
"LimitKind": "ConcurrentRequests",
"Properties": {
"MaxConcurrentRequests": < Cores-Per-Node x 10 >
}
}
]
Examples
The following policies allow up to:
- 500 concurrent requests for the workload group.
- 25 concurrent requests per principal.
- 50 requests per principal per hour.
[
{
"IsEnabled": true,
"Scope": "WorkloadGroup",
"LimitKind": "ConcurrentRequests",
"Properties": {
"MaxConcurrentRequests": 500
}
},
{
"IsEnabled": true,
"Scope": "Principal",
"LimitKind": "ConcurrentRequests",
"Properties": {
"MaxConcurrentRequests": 25
}
},
{
"IsEnabled": true,
"Scope": "Principal",
"LimitKind": "ResourceUtilization",
"Properties": {
"ResourceKind": "RequestCount",
"MaxUtilization": 50,
"TimeWindow": "01:00:00"
}
}
]
The following policies will block all requests classified to the workload group:
[
{
"IsEnabled": true,
"Scope": "WorkloadGroup",
"LimitKind": "ConcurrentRequests",
"Properties": {
"MaxConcurrentRequests": 0
}
}
]
Related content
5 - Request rate limits enforcement policy
A workload group’s request rate limits enforcement policy controls how request rate limits are enforced.
The policy object
A request rate limits enforcement policy has the following properties:
Name | Supported values | Default value | Description |
---|---|---|---|
QueriesEnforcementLevel | Cluster , QueryHead | QueryHead | Indicates the enforcement level for queries. |
CommandsEnforcementLevel | Cluster , Database | Database | Indicates the enforcement level for commands. |
Request rate limits enforcement level
Request rate limits can be enforced at one of the following levels:
- Cluster:
  - Rate limits are enforced by the single cluster admin node.
- Database:
  - Rate limits are enforced by the database admin node that manages the database the request was sent to.
  - If there are multiple database admin nodes, the configured rate limit is effectively multiplied by the number of database admin nodes.
- QueryHead:
  - Rate limits for queries are enforced by the query head node that the query was routed to.
  - This option affects queries that are sent with either strong or weak query consistency.
  - Strongly consistent queries run on the database admin node, and the configured rate limit is effectively multiplied by the number of database admin nodes.
  - For weakly consistent queries, the configured rate limit is effectively multiplied by the number of query head nodes.
  - This option doesn't apply to management commands.
Examples
Setup
The cluster has 10 nodes as follows:
- one cluster admin node.
- two database admin nodes (each manages 50% of the cluster’s databases).
- 50% of the tail nodes (5 out of 10) can serve as query heads for weakly consistent queries.
The default workload group is defined with the following policies:
"RequestRateLimitPolicies": [
{
"IsEnabled": true,
"Scope": "WorkloadGroup",
"LimitKind": "ConcurrentRequests",
"Properties": {
"MaxConcurrentRequests": 200
}
}
],
"RequestRateLimitsEnforcementPolicy": {
"QueriesEnforcementLevel": "QueryHead",
"CommandsEnforcementLevel": "Database"
}
Effective rate limits
The effective rate limits for the default workload group are:
- The maximum number of concurrent cluster-scoped management commands is 200.
- The maximum number of concurrent database-scoped management commands is 2 (database admin nodes) x 200 (max per admin node) = 400.
- The maximum number of concurrent strongly consistent queries is 2 (database admin nodes) x 200 (max per admin node) = 400.
- The maximum number of concurrent weakly consistent queries is 5 (query heads) x 200 (max per query head) = 1000.
Related content
6 - Workload groups
Workload groups allow you to group together sets of management commands and queries based on shared characteristics, and apply policies to control per-request limits and request rate limits for each of these groups.
Together with workload group policies, workload groups serve as a resource governance system for incoming requests to the cluster. When a request is initiated, it gets classified into a workload group. The classification is based on a user-defined function defined as part of a request classification policy. The request follows the policies assigned to the designated workload group throughout its execution.
Workload groups are defined at the cluster level, and up to 10 custom groups can be defined in addition to the three built-in workload groups.
Use cases for custom workload groups
The following list covers some common use cases for creating custom workload groups:
Protect against runaway queries: Create a workload group with a requests limits policy to set restrictions on resource usage and parallelism during query execution. For example, this policy can regulate result set size, memory per iterator, memory per node, execution time, and CPU resource usage.
Control the rate of requests: Create a workload group with a request rate limit policy to manage the behavior of concurrent requests from a specific principal or application. This policy can restrict the number of concurrent requests, request count within a time period, and total CPU seconds per time period. While your cluster comes with default limits, such as query limits, you have the flexibility to adjust these limits based on your requirements.
Create shared environments: Imagine a scenario where you have 3 different customer teams running queries and commands on a shared cluster, possibly even accessing shared databases. If you’re billing these teams based on their resource usage, you can create three distinct workload groups, each with unique limits. These workload groups would allow you to effectively manage and monitor the resource usage of each customer team.
Monitor resources utilization: Workload groups can help you create periodic reports on the resource consumption of a given principal or application. For instance, if these principals represent different clients, such reports can facilitate accurate billing. For more information, see Monitor requests by workload group.
Create and manage workload groups
Use the workload group management commands to manage workload groups and their policies:
Workload group policies
The following policies can be defined per workload group:
- Request limits policy
- Request rate limit policy
- Request rate limits enforcement policy
- Request queuing policy
- Query consistency policy
Built-in workload groups
The pre-defined workload groups are:
Default workload group
Requests are classified into the default group under these conditions:
- There are no criteria to classify a request.
- An attempt was made to classify the request into a non-existent group.
- A general classification failure has occurred.
You can:
- Change the criteria used for routing these requests.
- Change the policies that apply to the default workload group.
- Classify requests into the default workload group.
To monitor what gets classified to the default
workload group, see Monitor requests by workload group.
Internal workload group
The internal
workload group is populated with requests that are for internal use only.
You can’t:
- Change the criteria used for routing these requests.
- Change the policies that apply to the internal workload group.
- Classify requests into the internal workload group.
To monitor what gets classified to the internal
workload group, see Monitor requests by workload group.
Materialized views workload group
The $materialized-views
workload group applies to the materialized views materialization process. For more information on how materialized views work, see Materialized views overview.
You can change the following values in the workload group's request limits policy (see the sketch after this list):
- MaxMemoryPerQueryPerNode
- MaxMemoryPerIterator
- MaxFanoutThreadsPercentage
- MaxFanoutNodesPercentage
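For instance, a sketch of adjusting the materialization memory limits with the .alter-merge workload_group command (the values shown are illustrative, not recommendations):
.alter-merge workload_group ['$materialized-views'] ```
{
  "RequestLimitsPolicy": {
    "MaxMemoryPerQueryPerNode": {
      "IsRelaxable": false,
      "Value": 21474836480
    },
    "MaxMemoryPerIterator": {
      "IsRelaxable": false,
      "Value": 10737418240
    }
  }
}```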
Monitor requests by workload group
System commands indicate the workload group into which a request was classified. You can use these commands to aggregate resources utilization by workload group for completed requests.
The same information can also be viewed and analyzed in Azure Monitor insights.
Related content
7 - Request classification policy
7.1 - Request classification policy
The classification process assigns incoming requests to a workload group, based on the characteristics of the requests. Tailor the classification logic by writing a user-defined function, as part of a cluster-level request classification policy.
In the absence of an enabled request classification policy, all requests are classified into the default
workload group.
Policy object
The policy has the following properties (a full policy object is sketched after this list):
- IsEnabled: bool - Indicates if the policy is enabled or not.
- ClassificationFunction: string - The body of the function to use for classifying requests.
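Put together, a policy object might look like the following sketch (the function body is illustrative):
{
  "IsEnabled": true,
  "ClassificationFunction": "iff(request_properties.request_type == 'Query', 'Ad-hoc queries', 'default')"
}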
Classification function
The classification of incoming requests is based on a user-defined function. The results of the function are used to classify requests into existing workload groups.
The user-defined function has the following characteristics and behaviors:
- If IsEnabled is set to true in the policy, the user-defined function is evaluated for every new request.
- The user-defined function gives workload group context for the request for the full lifetime of the request.
- The request is given the default workload group context in the following situations:
  - The user-defined function returns an empty string, default, or the name of a nonexistent workload group.
  - The function fails for any reason.
- Only one user-defined function can be designated at any given time.
Requirements and limitations
A classification function:
- Must return a single scalar value of type string. That is the name of the workload group to assign the request to.
- Must not reference any other entity (database, table, or function). Specifically, it might not use the following functions and operators:
  - cluster()
  - database()
  - table()
  - external_table()
  - externaldata
- Has access to a special dynamic symbol, a property bag named request_properties, with the following properties:
Name | Type | Description | Examples |
---|---|---|---|
current_database | string | The name of the request database. | "MyDatabase" |
current_application | string | The name of the application that sent the request. | "Kusto.Explorer" , "KusWeb" |
current_principal | string | The fully qualified name of the principal identity that sent the request. | "aaduser=1793eb1f-4a18-418c-be4c-728e310c86d3;83af1c0e-8c6d-4f09-b249-c67a2e8fda65" |
query_consistency | string | For queries: the consistency of the query - strongconsistency or weakconsistency . This property is set by the caller as part of the request’s request properties: The client request property to set is: queryconsistency . | "strongconsistency" , "weakconsistency" |
request_description | string | Custom text that the author of the request can include. The text is set by the caller as part of the request’s Client request properties: The client request property to set is: request_description . | "Some custom description" ; automatically populated for dashboards: "dashboard:{dashboard_id};version:{version};sourceId:{source_id};sourceType:{tile/parameter}" |
request_text | string | The obfuscated text of the request. Obfuscated string literals included in the query text are replaced by multiple star (*) characters. Note: only the leading 65,536 characters of the request text are evaluated. | ".show version" |
request_type | string | The type of the request - Command or Query . | "Command" , "Query" |
Examples
A single workload group
iff(request_properties.current_application == "Kusto.Explorer" and request_properties.request_type == "Query",
"Ad-hoc queries",
"default")
Multiple workload groups
case(current_principal_is_member_of('aadgroup=somesecuritygroup@contoso.com'), "First workload group",
request_properties.current_database == "MyDatabase" and request_properties.current_principal has 'aadapp=', "Second workload group",
request_properties.current_application == "Kusto.Explorer" and request_properties.request_type == "Query", "Third workload group",
request_properties.current_application == "Kusto.Explorer", "Third workload group",
request_properties.current_application == "KustoQueryRunner", "Fourth workload group",
request_properties.request_description == "this is a test", "Fifth workload group",
hourofday(now()) between (17 .. 23), "Sixth workload group",
"default")
Management commands
Use the following management commands to manage a cluster’s request classification policy.
Command | Description |
---|---|
.alter cluster request classification policy | Alters cluster’s request classification policy |
.alter-merge cluster request classification policy | Enables or disables a cluster’s request classification policy |
.delete cluster request classification policy | Deletes the cluster’s request classification policy |
.show cluster request classification policy | Shows the cluster’s request classification policy |
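A sketch of attaching the single-workload-group function from the examples above, assuming the .alter cluster policy request_classification command syntax (the string literal sets the policy's IsEnabled property, and the function body follows the <| operator):
.alter cluster policy request_classification '{"IsEnabled": true}' <|
    iff(request_properties.current_application == "Kusto.Explorer" and request_properties.request_type == "Query",
        "Ad-hoc queries",
        "default")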