Request limits policy
A workload group’s request limits policy limits the resources that a request can use during its execution.
The policy object
Each limit consists of:
- Value - the typed value of the limit.
- IsRelaxable - a boolean value that defines whether the caller can relax the limit as part of the request’s client request properties.
The following limits are configurable:
Property | Type | Description | Supported values | Matching client request property
---|---|---|---|---
DataScope | string | The query’s data scope. This value determines whether the query applies to all data or just the hot cache. | All, HotCache, or null | query_datascope
MaxMemoryPerQueryPerNode | long | The maximum amount of memory (in bytes) a query can allocate. | [1, 50% of a single node’s total RAM] | max_memory_consumption_per_query_per_node
MaxMemoryPerIterator | long | The maximum amount of memory (in bytes) a query operator can allocate. | [1, Min(32212254720, 50% of a single node’s total RAM)] | maxmemoryconsumptionperiterator
MaxFanoutThreadsPercentage | int | The percentage of threads on each node to fan out query execution to. When set to 100%, the cluster assigns all CPUs on each node. For example, 16 CPUs on a cluster deployed on Azure D14_v2 nodes. | [1, 100] | query_fanout_threads_percent
MaxFanoutNodesPercentage | int | The percentage of nodes on the cluster to fan out query execution to. Functions in a similar manner to MaxFanoutThreadsPercentage. | [1, 100] | query_fanout_nodes_percent
MaxResultRecords | long | The maximum number of records a request is allowed to return to the caller, beyond which the results are truncated. The truncation limit affects the final result of the query, as delivered back to the client. However, it doesn’t apply to intermediate results of subqueries, such as those that result from cross-cluster references. | [1, 9223372036854775807] | truncationmaxrecords
MaxResultBytes | long | The maximum data size (in bytes) a request is allowed to return to the caller, beyond which the results are truncated. The truncation limit affects the final result of the query, as delivered back to the client. However, it doesn’t apply to intermediate results of subqueries, such as those that result from cross-cluster references. | [1, 9223372036854775807] | truncationmaxsize
MaxExecutionTime | timespan | The maximum duration of a request. Notes: 1) This can be used to place more limits on top of the default limits on execution time, but not to extend them. 2) Timeout processing isn’t at the resolution of seconds; rather, it’s designed to prevent a query from running for minutes. 3) The time it takes to read the payload back at the client isn’t treated as part of the timeout; it depends on how quickly the caller pulls the data from the stream. 4) Total execution time can exceed the configured value if aborting execution takes longer to complete. | [00:00:00, 01:00:00] | servertimeout
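When a limit is marked IsRelaxable, the caller can adjust it per request through the matching client request property, for example with set statements at the start of a query. The query below is an illustrative sketch; MyTable is a hypothetical table name:

```kusto
// Scope this request to the hot cache and cap its result set at 100 records.
// These override the workload group's DataScope and MaxResultRecords limits,
// which is allowed only when those limits are marked IsRelaxable.
set query_datascope = "hotcache";
set truncationmaxrecords = 100;
MyTable
| take 1000
```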
CPU resource usage
Queries can use all the CPU resources within the cluster. By default, when multiple queries are running concurrently, the system employs a fair round-robin approach to distribute resources. This strategy is optimal for achieving high performance with ad-hoc queries.
However, there are scenarios where you might want to restrict the CPU resources allocated to a specific query, such as a background job that can accommodate higher latencies. The request limits policy provides the flexibility to specify a lower percentage of threads or nodes to use when executing distributed subquery operations. The default setting is 100%.
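For example, the following request limits policy fragment, a sketch for a hypothetical background workload group, fans each query out to at most a quarter of the threads and nodes, and marks the limits as non-relaxable so callers can’t raise them per request:

```json
{
  "MaxFanoutThreadsPercentage": { "IsRelaxable": false, "Value": 25 },
  "MaxFanoutNodesPercentage": { "IsRelaxable": false, "Value": 25 }
}
```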
The default workload group
The default workload group has the following policy defined by default. This policy can be altered.
{
"DataScope": {
"IsRelaxable": true,
"Value": "All"
},
"MaxMemoryPerQueryPerNode": {
"IsRelaxable": true,
"Value": < 50% of a single node's total RAM >
},
"MaxMemoryPerIterator": {
"IsRelaxable": true,
"Value": 5368709120
},
"MaxFanoutThreadsPercentage": {
"IsRelaxable": true,
"Value": 100
},
"MaxFanoutNodesPercentage": {
"IsRelaxable": true,
"Value": 100
},
"MaxResultRecords": {
"IsRelaxable": true,
"Value": 500000
},
"MaxResultBytes": {
"IsRelaxable": true,
"Value": 67108864
},
"MaxExecutionTime": {
"IsRelaxable": true,
"Value": "00:04:00"
}
}
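One way to verify which policy is currently in effect for a workload group is the .show workload_group command (shown here for the default workload group):

```kusto
.show workload_group default
```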
Example
The following JSON represents a custom request limits policy object:
{
"DataScope": {
"IsRelaxable": true,
"Value": "HotCache"
},
"MaxMemoryPerQueryPerNode": {
"IsRelaxable": true,
"Value": 2684354560
},
"MaxMemoryPerIterator": {
"IsRelaxable": true,
"Value": 2684354560
},
"MaxFanoutThreadsPercentage": {
"IsRelaxable": true,
"Value": 50
},
"MaxFanoutNodesPercentage": {
"IsRelaxable": true,
"Value": 50
},
"MaxResultRecords": {
"IsRelaxable": true,
"Value": 1000
},
"MaxResultBytes": {
"IsRelaxable": true,
"Value": 33554432
},
"MaxExecutionTime": {
"IsRelaxable": true,
"Value": "00:01:00"
}
}
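A request limits policy is applied as part of a workload group definition, where it appears under the RequestLimitsPolicy property. Assuming a workload group named MyGroup already exists (the name is hypothetical), an .alter-merge workload_group command along these lines would set an abbreviated version of the policy above; this is a sketch, not a complete workload group definition:

````kusto
.alter-merge workload_group MyGroup ```
{
  "RequestLimitsPolicy": {
    "DataScope": { "IsRelaxable": true, "Value": "HotCache" },
    "MaxExecutionTime": { "IsRelaxable": true, "Value": "00:01:00" }
  }
} ```
````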
Related content
- Workload groups
- Client request properties
- .alter-merge workload_group
- .create-or-alter workload_group
- .drop workload_group
- .show workload_group command