REST API
- 1: Authentication over HTTPS
- 2: How to authenticate with Microsoft Authentication Library (MSAL) in apps
- 3: How to ingest data with the REST API
- 4: Query V2 HTTP response
- 5: Query/management HTTP request
- 6: Query/management HTTP response
- 7: Request properties
- 8: REST API overview
- 9: Send T-SQL queries over RESTful web API
- 10: Streaming ingestion HTTP request
- 11: UI deep links
1 - Authentication over HTTPS
To interact with your database over HTTPS, the principal making the request
must authenticate by using the HTTP Authorization
request header.
Syntax
Authorization: Bearer AccessToken
Parameters
Name | Type | Required | Description |
---|---|---|---|
AccessToken | string | ✔️ | A Microsoft Entra access token for the service. |
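As an illustration, the following Python sketch attaches the header to a V2 query request; the cluster URI, database, query, and token value are placeholders, not part of this article:

```python
import json
import urllib.request

# Hypothetical values: substitute your cluster URI and a real Microsoft Entra access token.
cluster_uri = "https://help.kusto.windows.net"
access_token = "<access token>"

body = json.dumps({"db": "Samples", "csl": "StormEvents | count"}).encode("utf-8")
request = urllib.request.Request(
    f"{cluster_uri}/v2/rest/query",
    data=body,
    headers={
        "Authorization": f"Bearer {access_token}",  # the header described above
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the query; not executed here.
```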
Get an access token
There are many different methods to get a Microsoft Entra access token. To learn more, see user authentication and application authentication.
Get an access token for a user principal using the Azure CLI
The following steps return an access token for the user principal making the request. Make sure the user principal has access to the resource you plan to access. For more information, see role-based access control.
Sign in to the Azure CLI.
az login --output table
Find the row where the `Default` column is `true`. Confirm that the subscription in that row is the subscription for which you want to create your Microsoft Entra access token. To find subscription information, see get subscription and tenant IDs in the Azure portal. If you need to switch to a different subscription, run one of the following commands.
Run the following command to get the access token.
az account get-access-token \
  --resource "https://api.kusto.windows.net" \
  --query "accessToken"
Get an access token for a service principal using the Azure CLI
Microsoft Entra service principals represent applications or services that need access to resources, usually in non-interactive scenarios such as API calls. The following steps guide you through creating a service principal and getting a bearer token for this principal.
Sign in to the Azure CLI.
az login --output table
Find the row where the `Default` column is `true`. Confirm that the subscription in that row is the subscription under which you want to create the service principal. To find subscription information, see get subscription and tenant IDs in the Azure portal. If you need to switch to a different subscription, run one of the following commands.
Create a service principal. The following command creates a Microsoft Entra service principal and returns the `appId`, `displayName`, `password`, and `tenantId` for the service principal.
Grant the application principal access to your database. For example, in the context of your database, use the following command to add the principal as a user.
To learn about the different roles and how to assign them, see security roles management.
  -F grant_type=client_credentials \
  -F resource=https://api.kusto.windows.net
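The fragment above belongs to a client-credentials token request against the Microsoft Entra token endpoint. As a hedged sketch, the same form-encoded body can be composed in Python; the tenant ID, client ID, and secret below are hypothetical placeholders, not values produced by this article's commands:

```python
import urllib.parse
import urllib.request

# Hypothetical placeholders; substitute your tenant ID, client ID, and client secret.
tenant_id = "00000000-0000-0000-0000-000000000000"
client_id = "11111111-1111-1111-1111-111111111111"
client_secret = "<client secret>"

token_endpoint = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
form = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "resource": "https://api.kusto.windows.net",
    "client_id": client_id,
    "client_secret": client_secret,
}).encode("utf-8")
request = urllib.request.Request(token_endpoint, data=form, method="POST")
# urllib.request.urlopen(request) would return JSON containing "access_token"; not executed here.
```

The response's `access_token` field is the bearer token used in the `Authorization` header.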
Related content
- Authentication overview
- To learn how to perform On-behalf-of (OBO) authentication or Single Page Application (SPA) authentication, see How to authenticate with Microsoft Authentication Library (MSAL).
2 - How to authenticate with Microsoft Authentication Library (MSAL) in apps
To programmatically authenticate with your cluster, you need to request an access token from Microsoft Entra ID specific to Azure Data Explorer. This access token acts as proof of identity when issuing requests to your cluster. You can use one of the Microsoft Authentication Library (MSAL) flows to create an access token.
This article explains how to use MSAL to authenticate principals to your cluster. The direct use of MSAL to authenticate principals is primarily relevant in web applications that require On-behalf-of (OBO) authentication or Single Page Application (SPA) authentication. For other cases, we recommend using the Kusto client libraries as they simplify the authentication process.
In this article, learn about the main authentication scenarios, the information to provide for successful authentication, and the use of MSAL for authentication.
Authentication scenarios
The main authentication scenarios are as follows:
User authentication: Used to verify the identity of human users.
Application authentication: Used to verify the identity of an application that needs to access resources without human intervention by using configured credentials.
On-behalf-of (OBO) authentication: Allows an application to exchange a token for said application with a token to access a Kusto service. This flow must be implemented with MSAL.
Single page application (SPA) authentication: Allows client-side SPA web applications to sign in users and get tokens to access your cluster. This flow must be implemented with MSAL.
For user and application authentication, we recommend using the Kusto client libraries. For OBO and SPA authentication, the Kusto client libraries can't be used.
Authentication parameters
During the token acquisition process, the client needs to provide the following parameters:
Parameter name | Description |
---|---|
Perform user authentication with MSAL
The following code sample shows how to use MSAL to get an authorization token for your cluster. The authorization is done in a way that launches the interactive sign-in UI. The appRedirectUri
is the URL to which Microsoft Entra ID redirects after authentication completes successfully. MSAL extracts the authorization code from this redirect.
// Placeholder: supply your app registration's client ID (appClientId) and redirect URI (appRedirectUri)
var authClient = PublicClientApplicationBuilder.Create(appClientId)
    .WithRedirectUri(appRedirectUri)
    .Build();
var result = authClient.AcquireTokenInteractive(
new[] { $"{kustoUri}/.default" } // Define scopes for accessing Azure Data Explorer cluster
).ExecuteAsync().Result;
var bearerToken = result.AccessToken;
var request = WebRequest.Create(new Uri(kustoUri));
request.Headers.Set(HttpRequestHeader.Authorization, string.Format(CultureInfo.InvariantCulture, "{0} {1}", "Bearer", bearerToken));
Perform application authentication with MSAL
The following code sample shows how to use MSAL to get an authorization token for your cluster. In this flow, no prompt is presented. The application must be registered with Microsoft Entra ID and have an app key or an X509v2 certificate issued by Microsoft Entra ID. To set up an application, see Provision a Microsoft Entra application.
// Placeholder: supply your app registration's client ID (appClientId) and client secret (appKey)
var authClient = ConfidentialClientApplicationBuilder.Create(appClientId)
    .WithClientSecret(appKey)
    .Build();
var result = authClient.AcquireTokenForClient(
new[] { $"{kustoUri}/.default" } // Define scopes for accessing Azure Data Explorer cluster
).ExecuteAsync().Result;
var bearerToken = result.AccessToken;
var request = WebRequest.Create(new Uri(kustoUri));
request.Headers.Set(HttpRequestHeader.Authorization, string.Format(CultureInfo.InvariantCulture, "{0} {1}", "Bearer", bearerToken));
Perform On-behalf-of (OBO) authentication
On-behalf-of authentication is relevant when your web application or service acts as a mediator between the user or application and your cluster.
In this scenario, an application is sent a Microsoft Entra access token for an arbitrary resource. Then, the application uses that token to acquire a new Microsoft Entra access token for the Azure Data Explorer resource. Then, the application can access your cluster on behalf of the principal indicated by the original Microsoft Entra access token. This flow is called the OAuth 2.0 on-behalf-of authentication flow. It generally requires multiple configuration steps with Microsoft Entra ID, and in some cases might require special consent from the administrator of the Microsoft Entra tenant.
To perform on-behalf-of authentication:
In your server code, use MSAL to perform the token exchange.
// Placeholder: supply your app registration's client ID (appClientId) and client secret (appKey)
var authClient = ConfidentialClientApplicationBuilder.Create(appClientId)
    .WithClientSecret(appKey)
    .Build();
var result = authClient.AcquireTokenOnBehalfOf(
    new[] { $"{kustoUri}/.default" },  // Define scopes for accessing your cluster
    new UserAssertion(incomingToken)   // Placeholder: the token your app received from its caller
).ExecuteAsync().Result;
var accessTokenForAdx = result.AccessToken;
Use the token to run queries. For example:
var request = WebRequest.Create(new Uri(kustoUri));
request.Headers.Set(HttpRequestHeader.Authorization, string.Format(CultureInfo.InvariantCulture, "{0} {1}", "Bearer", accessTokenForAdx));
Perform Single Page Application (SPA) authentication
For authentication for a SPA web client, use the OAuth authorization code flow.
In this scenario, the app is redirected to sign in to Microsoft Entra ID. Then, Microsoft Entra ID redirects back to the app with an authorization code in the URI. Then, the app makes a request to the token endpoint to get the access token. The token is valid for 24 hours, during which the client can reuse it by acquiring the token silently.
Microsoft identity platform has detailed tutorials for different use cases such as React, Angular, and JavaScript.
To set up authentication for a web client:
Configure the app as described in MSAL.js 2.0 with auth code flow.
Use the MSAL.js 2.0 library to sign in a user and authenticate to your cluster.
The following example uses the MSAL.js library to access Azure Data Explorer.
import * as msal from "@azure/msal-browser";

const msalConfig = {
  auth: {
  },
};

const msalInstance = new msal.PublicClientApplication(msalConfig);
const myAccounts = msalInstance.getAllAccounts();

// If no account is logged in, redirect the user to log in.
if (myAccounts === undefined || myAccounts.length === 0) {
  try {
    await msalInstance.loginRedirect({
      scopes: ["https://help.kusto.windows.net/.default"],
    });
  } catch (err) {
    console.error(err);
  }
}
const account = myAccounts[0];
const name = account.name;
window.document.getElementById("main").innerHTML = `Hi ${name}!`;

// Get the access token required to access the specified Azure Data Explorer cluster.
const accessTokenRequest = {
  account,
  scopes: ["https://help.kusto.windows.net/.default"],
};
let acquireTokenResult = undefined;
try {
  acquireTokenResult = await msalInstance.acquireTokenSilent(accessTokenRequest);
} catch (error) {
  if (error instanceof msal.InteractionRequiredAuthError) {
    await msalInstance.acquireTokenRedirect(accessTokenRequest);
  }
}
const accessToken = acquireTokenResult.accessToken;

// Make requests to the specified cluster with the token in the Authorization header.
const fetchResult = await fetch("https://help.kusto.windows.net/v2/rest/query", {
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  method: "POST",
  body: JSON.stringify({
    db: "Samples",
    csl: "StormEvents | count",
  }),
});
const jsonResult = await fetchResult.json();
// The following line extracts the first cell in the result data.
Related content
3 - How to ingest data with the REST API
The Kusto.Ingest library is preferred for ingesting data to your database. However, you can still achieve almost the same functionality without depending on the Kusto.Ingest package. This article shows how, by using queued ingestion to your database for production-grade pipelines.
This article deals with the recommended mode of ingestion. For the Kusto.Ingest library, its corresponding entity is the IKustoQueuedIngestClient interface. Here, the client code interacts with your database by posting ingestion notification messages to an Azure queue. References to the messages are obtained from the Kusto Data Management (also known as the Ingestion) service. Interaction with the service must be authenticated with Microsoft Entra ID.
The following code shows how the Kusto Data Management service handles queued data ingestion without using the Kusto.Ingest library. This example may be useful if full .NET is inaccessible or unavailable because of the environment, or other restrictions.
The code includes the steps to create an Azure Storage client and upload the data to a blob. Each step is described in greater detail, after the sample code.
- Obtain an authentication token for accessing the ingestion service
- Query the ingestion service to obtain:
- Upload data to a blob on one of the blob containers obtained from Kusto in (2)
- Compose an ingestion message that identifies the target database and table and that points to the blob from (3)
- Post the ingestion message we composed in (4) to an ingestion queue obtained in (2)
- Retrieve any error found by the service during ingestion
// A container class for ingestion resources we are going to obtain
internal class IngestionResourcesSnapshot
{
    public IList<string> IngestionQueues { get; set; } = new List<string>();
    public IList<string> TempStorageContainers { get; set; } = new List<string>();
    public string FailureNotificationsQueue { get; set; } = string.Empty;
    public string SuccessNotificationsQueue { get; set; } = string.Empty;
}
public static void IngestSingleFile(string file, string db, string table, string ingestionMappingRef)
{
// Your ingestion service URI
var dmServiceBaseUri = @"{serviceURI}";
// 1. Authenticate the interactive user (or application) to access Kusto ingestion service
var bearerToken = AuthenticateInteractiveUser(dmServiceBaseUri);
// 2a. Retrieve ingestion resources
var ingestionResources = RetrieveIngestionResources(dmServiceBaseUri, bearerToken);
// 2b. Retrieve Kusto identity token
var identityToken = RetrieveKustoIdentityToken(dmServiceBaseUri, bearerToken);
// 3. Upload file to one of the blob containers.
// This example uses the first one, but when working with multiple blobs,
// one should round-robin the containers in order to prevent throttling
var blobName = $"TestData{DateTime.UtcNow:yyyy-MM-dd_HH-mm-ss.FFF}";
var blobUriWithSas = UploadFileToBlobContainer(
file, ingestionResources.TempStorageContainers.First(), blobName,
out var blobSizeBytes
);
// 4. Compose ingestion command
var ingestionMessage = PrepareIngestionMessage(db, table, blobUriWithSas, blobSizeBytes, ingestionMappingRef, identityToken);
// 5. Post ingestion command to one of the previously obtained ingestion queues.
// This example uses the first one, but when working with multiple blobs,
// one should round-robin the queues in order to prevent throttling
PostMessageToQueue(ingestionResources.IngestionQueues.First(), ingestionMessage);
Thread.Sleep(20000);
// 6a. Read success notifications
var successes = PopTopMessagesFromQueue(ingestionResources.SuccessNotificationsQueue, 32);
foreach (var sm in successes)
{
Console.WriteLine($"Ingestion completed: {sm}");
}
// 6b. Read failure notifications
var errors = PopTopMessagesFromQueue(ingestionResources.FailureNotificationsQueue, 32);
foreach (var em in errors)
{
Console.WriteLine($"Ingestion error: {em}");
}
}
Using queued ingestion for production-grade pipelines
Obtain authentication evidence from Microsoft Entra ID
// Authenticates the interactive user and retrieves Microsoft Entra Access token for specified resource
internal static string AuthenticateInteractiveUser(string resource)
{
// Create an authentication client for Microsoft Entra ID:
    // Placeholder: supply your app registration's client ID (appClientId) and redirect URI (redirectUri)
    var authClient = PublicClientApplicationBuilder.Create(appClientId)
        .WithRedirectUri(redirectUri)
        .Build();
// Acquire user token for the interactive user:
var result = authClient.AcquireTokenInteractive(
new[] { $"{resource}/.default" } // Define scopes
).ExecuteAsync().Result;
return result.AccessToken;
}
Retrieve ingestion resources
Manually construct an HTTP POST request to the Data Management service, requesting the return of the ingestion resources. These resources include queues that the DM service is listening on, and blob containers for data uploading. The Data Management service will process any messages containing ingestion requests that arrive on one of those queues.
// Retrieve ingestion resources (queues and blob containers) with SAS from specified ingestion service using supplied access token
internal static IngestionResourcesSnapshot RetrieveIngestionResources(string ingestClusterBaseUri, string accessToken)
{
var ingestClusterUri = $"{ingestClusterBaseUri}/v1/rest/mgmt";
var requestBody = "{ \"csl\": \".get ingestion resources\" }";
var ingestionResources = new IngestionResourcesSnapshot();
using var response = SendPostRequest(ingestClusterUri, accessToken, requestBody);
using var sr = new StreamReader(response.GetResponseStream());
using var jtr = new JsonTextReader(sr);
var responseJson = JObject.Load(jtr);
// Input queues
var tokens = responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'SecuredReadyForAggregationQueue')]");
foreach (var token in tokens)
{
ingestionResources.IngestionQueues.Add((string)token[1]);
}
// Temp storage containers
tokens = responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'TempStorage')]");
foreach (var token in tokens)
{
ingestionResources.TempStorageContainers.Add((string)token[1]);
}
// Failure notifications queue
var singleToken =
responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'FailedIngestionsQueue')].[1]").FirstOrDefault();
ingestionResources.FailureNotificationsQueue = (string)singleToken;
// Success notifications queue
singleToken =
responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'SuccessfulIngestionsQueue')].[1]").FirstOrDefault();
ingestionResources.SuccessNotificationsQueue = (string)singleToken;
return ingestionResources;
}
// Executes a POST request on provided URI using supplied Access token and request body
internal static WebResponse SendPostRequest(string uriString, string authToken, string body)
{
var request = WebRequest.Create(uriString);
request.Method = "POST";
request.ContentType = "application/json";
request.ContentLength = body.Length;
request.Headers.Set(HttpRequestHeader.Authorization, $"Bearer {authToken}");
using var bodyStream = request.GetRequestStream();
using (var sw = new StreamWriter(bodyStream))
{
sw.Write(body);
sw.Flush();
}
bodyStream.Close();
return request.GetResponse();
}
Obtain a Kusto identity token
Ingest messages are handed off to your cluster via a non-direct channel (Azure queue), making it impossible to do in-band authorization validation for accessing the ingestion service. The solution is to attach an identity token to every ingest message. The token enables in-band authorization validation. This signed token can then be validated by the ingestion service when it receives the ingestion message.
// Retrieves a Kusto identity token that will be added to every ingest message
internal static string RetrieveKustoIdentityToken(string ingestClusterBaseUri, string accessToken)
{
var ingestClusterUri = $"{ingestClusterBaseUri}/v1/rest/mgmt";
var requestBody = "{ \"csl\": \".get kusto identity token\" }";
var jsonPath = "Tables[0].Rows[*].[0]";
using var response = SendPostRequest(ingestClusterUri, accessToken, requestBody);
using var sr = new StreamReader(response.GetResponseStream());
using var jtr = new JsonTextReader(sr);
var responseJson = JObject.Load(jtr);
var identityToken = responseJson.SelectTokens(jsonPath).FirstOrDefault();
return (string)identityToken;
}
Upload data to the Azure Blob container
This step is about uploading a local file to an Azure Blob that will be handed off for ingestion. This code uses the Azure Storage SDK. If taking that dependency isn't possible, the same can be achieved with the Azure Blob Service REST API.
// Uploads a single local file to an Azure Blob container, returns blob URI and original data size
internal static string UploadFileToBlobContainer(string filePath, string blobContainerUri, string blobName, out long blobSize)
{
var blobUri = new Uri(blobContainerUri);
var blobContainer = new BlobContainerClient(blobUri);
var blob = blobContainer.GetBlobClient(blobName);
using (var stream = File.OpenRead(filePath))
{
        blob.Upload(BinaryData.FromStream(stream));
blobSize = blob.GetProperties().Value.ContentLength;
}
return $"{blob.Uri.AbsoluteUri}{blobUri.Query}";
}
Compose the ingestion message
Use the Newtonsoft.Json package to compose a valid ingestion request that identifies the target database and table and points to the blob. The message is posted to the Azure queue that the relevant Kusto Data Management service listens on.
Here are some points to consider.
This request is the bare minimum for the ingestion message.
Whenever necessary, CsvMapping or JsonMapping properties must be provided as well. For more information, see the article on ingestion mapping pre-creation.
Section Ingestion message internal structure provides an explanation of the ingestion message structure.
internal static string PrepareIngestionMessage(string db, string table, string dataUri, long blobSizeBytes, string mappingRef, string identityToken)
{
var message = new JObject
{
{ "Id", Guid.NewGuid().ToString() },
{ "BlobPath", dataUri },
{ "RawDataSize", blobSizeBytes },
{ "DatabaseName", db },
{ "TableName", table },
{ "RetainBlobOnSuccess", true }, // Do not delete the blob on success
{ "FlushImmediately", true }, // Do not aggregate
{ "ReportLevel", 2 }, // Report failures and successes (might incur perf overhead)
{ "ReportMethod", 0 }, // Failures are reported to an Azure Queue
{
"AdditionalProperties", new JObject(
new JProperty("authorizationContext", identityToken),
new JProperty("mappingReference", mappingRef),
// Data is in JSON format
new JProperty("format", "multijson")
)
}
};
return message.ToString();
}
Post the ingestion message to the ingestion queue
Finally, post the message that you constructed, to the selected ingestion queue that you previously obtained.
If you are using .NET storage client versions above v12, you must properly encode the message content.
internal static void PostMessageToQueue(string queueUriWithSas, string message)
{
var queue = new QueueClient(new Uri(queueUriWithSas));
queue.SendMessage(message);
}
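As an illustration of the encoding note above, the following Python sketch shows the Base64 round trip for a message body, independent of any storage SDK; the message content is a placeholder:

```python
import base64

# Sketch: Base64-encode an ingestion message body before posting it with a newer queue client.
message = '{"Id": "00000000-0000-0000-0000-000000000000", "DatabaseName": "MyDatabase"}'
encoded = base64.b64encode(message.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8")  # the reader recovers the original JSON
```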
Check for error messages from the Azure queue
After ingestion, we check for failure messages from the relevant queue that the Data Management service writes to. For more information on the failure message structure, see Ingestion failure message structure.
internal static IEnumerable<string> PopTopMessagesFromQueue(string queueUriWithSas, int count)
{
    var queue = new QueueClient(new Uri(queueUriWithSas));
    var messagesFromQueue = queue.ReceiveMessages(maxMessages: count).Value;
    return messagesFromQueue.Select(m => m.MessageText);
}
Ingestion messages - JSON document formats
Ingestion message internal structure
The message that the Kusto Data Management service expects to read from the input Azure Queue is a JSON document in the following format.
{
}
Property | Description |
---|---|
Id | Message identifier (GUID) |
BlobPath | Path (URI) to the blob, including the SAS key granting permissions to read/write/delete it. Permissions are required so that the ingestion service can delete the blob once it has completed ingesting the data. |
RawDataSize | Size of the uncompressed data in bytes. Providing this value allows the ingestion service to optimize ingestion by potentially aggregating multiple blobs. This property is optional, but if not given, the service will access the blob just to retrieve the size. |
DatabaseName | Target database name |
TableName | Target table name |
RetainBlobOnSuccess | If set to `true`, the blob won't be deleted once ingestion is successfully completed. Default is `false` |
FlushImmediately | If set to `true`, any aggregation will be skipped. Default is `false` |
ReportLevel | Success/Error reporting level: 0-Failures, 1-None, 2-All |
ReportMethod | Reporting mechanism: 0-Queue, 1-Table |
AdditionalProperties | Other properties such as `format`, `tags`, and `creationTime`. For more information, see data ingestion properties. |
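The structure in the table above can be sketched in Python as follows; the database, table, blob URI, mapping reference, and token values are placeholders:

```python
import json
import uuid

def prepare_ingestion_message(db, table, blob_uri_with_sas, raw_size_bytes, mapping_ref, identity_token):
    """Compose a queued-ingestion message with the properties listed above (a sketch)."""
    return json.dumps({
        "Id": str(uuid.uuid4()),
        "BlobPath": blob_uri_with_sas,
        "RawDataSize": raw_size_bytes,
        "DatabaseName": db,
        "TableName": table,
        "RetainBlobOnSuccess": True,   # keep the blob after successful ingestion
        "FlushImmediately": True,      # skip aggregation
        "ReportLevel": 2,              # report failures and successes
        "ReportMethod": 0,             # report to an Azure queue
        "AdditionalProperties": {
            "authorizationContext": identity_token,
            "mappingReference": mapping_ref,
            "format": "multijson",
        },
    })

# Placeholder arguments for illustration.
message = prepare_ingestion_message(
    "MyDatabase", "MyTable", "https://contoso.blob.core.windows.net/c/blob?<sas>", 1024,
    "MyJsonMapping", "<identity token>")
```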
Ingestion failure message structure
The failure message that the Kusto Data Management service writes to the failure notifications queue is a JSON document in the following format.
Property | Description |
---|---|
OperationId | Operation identifier (GUID) that can be used to track the operation on the service side |
Database | Target database name |
Table | Target table name |
FailedOn | Failure timestamp |
IngestionSourceId | GUID identifying the data chunk that failed to ingest |
IngestionSourcePath | Path (URI) to the data chunk that failed to ingest |
Details | Failure message |
ErrorCode | The error code. For all the error codes, see Ingestion error codes. |
FailureStatus | Indicates whether the failure is permanent or transient |
RootActivityId | The correlation identifier (GUID) that can be used to track the operation on the service side |
OriginatesFromUpdatePolicy | Indicates whether the failure was caused by an erroneous transactional update policy |
ShouldRetry | Indicates whether the ingestion could succeed if retried as is |
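As an illustration, a client might triage a failure message as in the following Python sketch; the message content and its values are hypothetical:

```python
import json

# A hypothetical failure message; field names follow the table above, values are illustrative.
failure = json.loads("""
{
  "OperationId": "00000000-0000-0000-0000-000000000000",
  "Database": "MyDatabase",
  "Table": "MyTable",
  "Details": "Failed to parse blob",
  "FailureStatus": "Permanent",
  "ShouldRetry": false
}
""")

# Retry only when the service marks the failure as retryable and transient.
should_retry = failure["ShouldRetry"] and failure["FailureStatus"] != "Permanent"
print(should_retry)  # → False
```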
4 - Query V2 HTTP response
HTTP response status line
If the request succeeds, the HTTP response status code is `200 OK`, and the HTTP response body is a JSON array, as explained below.
If the request fails, the HTTP response status code is a `4xx` or `5xx` error, and the reason phrase includes additional information about the failure. The HTTP response body is a JSON object, as explained below.
HTTP response headers
Irrespective of the success/failure of the request, two custom HTTP headers are included with the response:
- `x-ms-client-request-id`: The service returns an opaque string that identifies the request/response pair for correlation purposes. If the request included a client request ID, its value appears here; otherwise, a random string is returned.
- `x-ms-activity-id`: The service returns an opaque string that uniquely identifies the request/response pair for correlation purposes. Unlike `x-ms-client-request-id`, this identifier isn't affected by any information in the request and is unique per response.
HTTP response body (on request failure)
On request failure, the HTTP response body will be a JSON document formatted according to `OneApiErrors` rules. For a description of the `OneApiErrors` format, see section 7.10.2 here.
Below is an example for such a failure.
{
"error": {
"code": "General_BadRequest",
"message": "Request is invalid and cannot be executed.",
"@type": "Kusto.Data.Exceptions.KustoBadRequestException",
"@message": "Request is invalid and cannot be processed: Semantic error: SEM0100: 'table' operator: Failed to resolve table expression named 'aaa'",
"@context": {
"timestamp": "2023-04-18T12:59:27.4855445Z",
"serviceAlias": "HELP",
"machineName": "KEngine000000",
"processName": "Kusto.WinSvc.Svc",
"processId": 12580,
"threadId": 10260,
"clientRequestId": "Kusto.Cli;b90f4260-4eac-4574-a27a-3f302db21404",
"activityId": "9dcc4522-7b51-41db-a7ae-7c1bfe0696b2",
"subActivityId": "d0f30c8c-e6c6-45b6-9275-73dd6b379ecf",
"activityType": "DN.FE.ExecuteQuery",
"parentActivityId": "6e3c8dab-0aaf-4df5-85b5-fc20b0b29a84"
},
"@permanent": true,
"@text": "aaa",
"@database": "Samples",
"@ClientRequestLogger": "",
"innererror": {
"code": "SEM0100",
"message": "'table' operator: Failed to resolve table expression named 'aaa'",
"@type": "Kusto.Data.Exceptions.SemanticException",
"@message": "Semantic error: SEM0100: 'table' operator: Failed to resolve table expression named 'aaa'",
"@context": {
"timestamp": "2023-04-18T12:59:27.4855445Z",
"serviceAlias": "HELP",
"machineName": "KEngine000000",
"processName": "Kusto.WinSvc.Svc",
"processId": 12580,
"threadId": 10260,
"clientRequestId": "Kusto.Cli;b90f4260-4eac-4574-a27a-3f302db21404",
"activityId": "9dcc4522-7b51-41db-a7ae-7c1bfe0696b2",
"subActivityId": "d0f30c8c-e6c6-45b6-9275-73dd6b379ecf",
"activityType": "DN.FE.ExecuteQuery",
"parentActivityId": "6e3c8dab-0aaf-4df5-85b5-fc20b0b29a84"
},
"@permanent": true,
"@errorCode": "SEM0100",
"@errorMessage": "'table' operator: Failed to resolve table expression named 'aaa'"
}
}
}
HTTP response body (on request success)
On request success, the HTTP response body will be a JSON array that encodes the request results.
Logically, the V2 response describes a DataSet object which contains
any number of Tables. These tables can represent the actual data asked for
by the request, or additional information about the execution of the request
(such as an accounting of the resources consumed by the request). Additionally,
the request might fail (due to various conditions) even though a 200 OK
status is returned, in which case the response includes partial response data
plus an indication of the errors.
Physically, the response body's JSON array is a list of JSON objects, each of which is called a frame. The DataSet object is encoded into two frames: DataSetHeader and DataSetCompletion. The first is always the first frame, and the second is always the last frame. Between them are the frames describing the Table objects.
The Table objects can be encoded in two ways:
As a single frame: DataTable. This is the default.
Alternatively, as a "mix" of four kinds of frames: TableHeader (which comes first and describes the table), TableFragment (which describes a table's data), TableProgress (which is optional and provides an estimate of how far into the table's data we are), and TableCompletion (which is the last frame of the table).
The second case is called "progressive mode", and appears only if the client request property results_progressive_enabled is set to true.
In this case, each TableFragment frame describes an update to the data accumulated by all previous such frames for the table, either as an append operation or as a replace operation. (The latter is used, for example, when some long-running aggregation calculation is performed at the "top level" of the query, so an initial aggregation result is replaced by more accurate results later on.)
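The append/replace semantics above can be sketched as follows (a hypothetical helper, not part of any client library; frame objects are plain dicts shaped like the TableFragment frame described below):

```python
def apply_fragment(rows, frame):
    # DataReplace discards previously accumulated rows for the table;
    # DataAppend extends them.
    kind = frame["TableFragmentType"]
    if kind == "DataReplace":
        return list(frame["Rows"])
    if kind == "DataAppend":
        return rows + list(frame["Rows"])
    raise ValueError(f"unknown fragment type: {kind}")

rows = []
frames = [
    {"TableFragmentType": "DataAppend", "Rows": [[1], [2]]},
    {"TableFragmentType": "DataReplace", "Rows": [[10]]},
    {"TableFragmentType": "DataAppend", "Rows": [[11]]},
]
for f in frames:
    rows = apply_fragment(rows, f)
# rows now holds [[10], [11]]: the replace dropped the first two rows.
```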
DataSetHeader
The DataSetHeader
frame is always the first in the dataset and appears exactly once.
{
"Version": string,
"IsProgressive": Boolean
}
Where:
Version is the protocol version. The current version is v2.0.
IsProgressive is a boolean flag that indicates whether this dataset contains progressive frames. A progressive frame is one of:

Frame | Description |
---|---|
TableHeader | Contains general information about the table |
TableFragment | Contains a rectangular data shard of the table |
TableProgress | Contains the progress in percent (0-100) |
TableCompletion | Indicates that this frame is the last one |

The frames above describe a table. If the IsProgressive flag isn't set to true, then every table in the set will be serialized using a single frame:
DataTable: Contains all the information that the client needs about a single table in the dataset.
TableHeader
Queries that are made with the results_progressive_enabled
option set to true may include this frame. Following this table, clients can expect an interleaving sequence of TableFragment
and TableProgress
frames. The final frame of the table is TableCompletion
.
{
"TableId": Number,
"TableKind": string,
"TableName": string,
"Columns": Array,
}
Where:
TableId
is the table’s unique ID.TableKind
is one of:- PrimaryResult
- QueryCompletionInformation
- QueryTraceLog
- QueryPerfLog
- TableOfContents
- QueryProperties
- QueryPlan
- Unknown
TableName
is the table’s name.Columns
is an array describing the table’s schema.
{
"ColumnName": string,
"ColumnType": string,
}
Supported column types are described here.
TableFragment
The TableFragment
frame contains a rectangular data fragment of the table. In addition to the actual data, this frame also contains a TableFragmentType
property that tells the client what to do with the fragment: append it to the existing fragments, or replace them.
{
"TableId": Number,
"FieldCount": Number,
"TableFragmentType": string,
"Rows": Array
}
Where:
TableId
is the table’s unique ID.FieldCount
is the number of columns in the table.TableFragmentType
describes what the client should do with this fragment.TableFragmentType
is one of:- DataAppend
- DataReplace
Rows
is a two-dimensional array that contains the fragment data.
TableProgress
The TableProgress
frame can interleave with the TableFragment
frame described above.
Its sole purpose is to notify the client of the query’s progress.
{
"TableId": Number,
"TableProgress": Number,
}
Where:
TableId
is the table’s unique ID.TableProgress
is the progress in percent (0–100).
TableCompletion
The TableCompletion
frame marks the end of the table transmission. No more frames related to that table will be sent.
{
"TableId": Number,
"RowCount": Number,
}
Where:
TableId
is the table’s unique ID.RowCount
is the total number of rows in the table.
DataTable
Queries that are issued with the EnableProgressiveQuery
flag set to false won’t include any of the frames (TableHeader
, TableFragment
, TableProgress
, and TableCompletion
). Instead, each table in the dataset will be transmitted using the DataTable
frame that contains all the information that the client needs, to read the table.
{
"TableId": Number,
"TableKind": string,
"TableName": string,
"Columns": Array,
"Rows": Array,
}
Where:
TableId
is the table’s unique ID.TableKind
is one of:- PrimaryResult
- QueryCompletionInformation
- QueryTraceLog
- QueryPerfLog
- QueryProperties
- QueryPlan
- Unknown
TableName
is the table’s name.Columns
is an array describing the table’s schema, and includes:
{
"ColumnName": string,
"ColumnType": string,
}
Rows
is a two-dimensional array that contains the table’s data.
The meaning of tables in the response
PrimaryResult - The main tabular result of the query. For each tabular expression statement, one or more tables are generated in-order, representing the results produced by the statement. There can be multiple such tables because of batches and fork operators.
QueryCompletionInformation - Provides additional information about the execution of the query itself, such as whether it completed successfully or not, and what resources were consumed by the query (similar to the QueryStatus table in the v1 response).
QueryProperties - Provides additional values such as client visualization instructions (emitted, for example, to reflect the information in the render operator) and database cursor information.
QueryTraceLog - The performance trace log information (returned when perftrace in client request properties is set to true).
DataSetCompletion
The DataSetCompletion
frame is the final one in the dataset.
{
"HasErrors": Boolean,
"Cancelled": Boolean,
"OneApiErrors": Array,
}
Where:
HasErrors
is true if there were errors while generating the dataset.Cancelled
is true if the request that led to the generation of the dataset was canceled before completion.OneApiErrors
is only returned ifHasErrors
is true. For a description of theOneApiErrors
format, see section 7.10.2 here.
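Putting the frames together, a minimal reader for a non-progressive v2 response might look like this (a sketch; it assumes each frame object carries a FrameType discriminator, which the JSON skeletons above omit but actual service responses include):

```python
def read_v2_dataset(frames):
    # Non-progressive layout: DataSetHeader first, DataTable frames in the
    # middle, DataSetCompletion last. Raise if the completion reports errors.
    completion = frames[-1]
    if completion.get("HasErrors"):
        raise RuntimeError(completion.get("OneApiErrors"))
    tables = {}
    for frame in frames[1:-1]:
        if frame.get("FrameType") == "DataTable":
            tables[frame["TableName"]] = frame["Rows"]
    return tables

frames = [
    {"FrameType": "DataSetHeader", "Version": "v2.0", "IsProgressive": False},
    {"FrameType": "DataTable", "TableId": 0, "TableKind": "PrimaryResult",
     "TableName": "PrimaryResult",
     "Columns": [{"ColumnName": "Text", "ColumnType": "string"}],
     "Rows": [["Hello, World!"]]},
    {"FrameType": "DataSetCompletion", "HasErrors": False, "Cancelled": False},
]
tables = read_v2_dataset(frames)
```

A production reader would also dispatch on TableKind (for example, to separate PrimaryResult tables from QueryProperties) and handle the progressive frame kinds, but the header/body/completion skeleton stays the same.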
5 - Query/management HTTP request
Request verb and resource
Action | HTTP verb | HTTP resource |
---|---|---|
Query | GET | /v1/rest/query |
Query | POST | /v1/rest/query |
Query v2 | GET | /v2/rest/query |
Query v2 | POST | /v2/rest/query |
Management | POST | /v1/rest/mgmt |
For example, to send a management command to a service endpoint, use the following request line:
POST https://help.kusto.windows.net/v1/rest/mgmt HTTP/1.1
See Request headers and Body to learn what to include.
Request headers
The following table contains the common headers used for query and management operations.
Standard header | Description | Required/Optional |
---|---|---|
Accept | The media types the client receives. Set to application/json . | Required |
Accept-Encoding | The supported content encodings. Supported encodings are gzip and deflate . | Optional |
Authorization | The authentication credentials. For more information, see authentication. | Required |
Connection | Whether the connection stays open after the operation. The recommendation is to set Connection to Keep-Alive . | Optional |
Content-Length | The size of the request body. Specify the request body length when known. | Optional |
Content-Type | The media type of the request body. Set to application/json with charset=utf-8 . | Required |
Expect | The expected response from the server. It can be set to 100-Continue . | Optional |
Host | The qualified domain name that the request was sent to. For example, help.kusto.windows.net . | Required |
The following table contains the common custom headers used for query and management operations. Unless noted, these headers are used only for telemetry purposes and don’t affect functionality.
All headers are optional. However, we recommend specifying the x-ms-client-request-id
custom header.
In some scenarios, such as canceling a running query, x-ms-client-request-id
is required since it’s used to identify the request.
Custom header | Description |
---|---|
x-ms-app | The friendly name of the application making the request. |
x-ms-user | The friendly name of the user making the request. |
x-ms-user-id | The same friendly name as x-ms-user . |
x-ms-client-request-id | A unique identifier for the request. |
x-ms-client-version | The friendly version identifier for the client making the request. |
x-ms-readonly | If specified, it forces the request to run in read-only mode which prevents the request from changing data. |
Request parameters
The following parameters can be passed in the request. They’re encoded in the request as query parameters or as part of the body, depending on whether GET or POST is used.
Parameter | Description | Required/Optional |
---|---|---|
csl | The text of the query or management command to execute. | Required |
properties | Request properties that modify how the request is processed and its results. For more information, see Request properties. | Optional |
GET query parameters
When a GET request is used, the query parameters specify the request parameters.
Body
When a POST request is used, the body of the request contains a single UTF-8 encoded JSON document, which includes the values of the request parameters.
Examples
The following example shows the HTTP POST request for a query.
POST https://help.kusto.windows.net/v2/rest/query HTTP/1.1
Request headers
Accept: application/json
Authorization: Bearer ...AzureActiveDirectoryAccessToken...
Accept-Encoding: deflate
Content-Type: application/json; charset=utf-8
Host: help.kusto.windows.net
x-ms-client-request-id: MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1
x-ms-user-id: EARTH\davidbg
x-ms-app: MyApp
Request body
{
"db":"Samples",
"csl":"print Test=\"Hello, World!\"",
"properties":"{\"Options\":{\"queryconsistency\":\"strongconsistency\"},\"Parameters\":{},\"ClientRequestId\":\"MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1\"}"
}
The following example shows how to create a request that sends the previous query, using curl.
Obtain a token for authentication.
Replace
AAD_TENANT_NAME_OR_ID
,AAD_APPLICATION_ID
, andAAD_APPLICATION_KEY
with the relevant values, after setting up Microsoft Entra application authentication.
curl "https://login.microsoftonline.com/AAD_TENANT_NAME_OR_ID/oauth2/token" \ -F "grant_type=client_credentials" \ -F "resource=https://help.kusto.windows.net" \ -F "client_id=AAD_APPLICATION_ID" \ -F "client_secret=AAD_APPLICATION_KEY"
This code snippet provides you with the bearer token.
{ "token_type": "Bearer", "expires_in": "3599", "ext_expires_in":"3599", "expires_on":"1578439805", "not_before":"1578435905", "resource":"https://help.kusto.windows.net", "access_token":"eyJ0...uXOQ" }
Use the bearer token in your request to the query endpoint.
curl -d '{"db":"Samples","csl":"print Test=\"Hello, World!\"","properties":"{\"Options\":{\"queryconsistency\":\"strongconsistency\"}}"}' \ -H "Accept: application/json" \ -H "Authorization: Bearer eyJ0...uXOQ" \ -H "Content-Type: application/json; charset=utf-8" \ -H "Host: help.kusto.windows.net" \ -H "x-ms-client-request-id: MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1" \ -H "x-ms-user-id: EARTH\davidbg" \ -H "x-ms-app: MyApp" \ -X POST https://help.kusto.windows.net/v2/rest/query
Read the response according to the response status codes.
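The same request can be assembled programmatically. The following sketch only builds the headers and body for POST /v2/rest/query (no network call is made); the application name is a placeholder, and note that the properties parameter is itself a JSON-encoded string inside the JSON body, as in the example above:

```python
import json
import uuid

def build_query_request(db, csl, options=None):
    # Builds the (headers, body) pair for POST /v2/rest/query.
    # "MyApp.Query" is an illustrative application name.
    crid = f"MyApp.Query;{uuid.uuid4()}"
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json; charset=utf-8",
        "x-ms-client-request-id": crid,
    }
    # The properties request parameter is serialized as a JSON string
    # nested inside the (already JSON) request body.
    properties = {"Options": options or {}, "Parameters": {},
                  "ClientRequestId": crid}
    body = json.dumps({"db": db, "csl": csl,
                       "properties": json.dumps(properties)})
    return headers, body

headers, body = build_query_request(
    "Samples", 'print Test="Hello, World!"',
    {"queryconsistency": "strongconsistency"})
```

The Authorization and Host headers from the table above still need to be added before sending.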
Set client request properties and query parameters
In the following request body example, the query in the csl
field declares two parameters named n
and d
. The values for those query parameters are specified within the Parameters
field under the properties
field in the request body. The Options
field defines client request properties.
{
"db": "Samples",
"csl": "declare query_parameters (n:long, d:dynamic); StormEvents | where State in (d) | top n by StartTime asc",
"properties": {
"Options": {
"maxmemoryconsumptionperiterator": 68719476736,
"max_memory_consumption_per_query_per_node": 68719476736,
"servertimeout": "50m"
},
"Parameters": {
"n": 10, "d": "dynamic([\"ATLANTIC SOUTH\"])"
}
}
}
For more information, see Supported request properties.
Send show database caching policy command
The following example sends a request to show the Samples
database caching policy.
{
"db": "Samples",
"csl": ".show database Samples policy caching",
"properties": {
"Options": {
"maxmemoryconsumptionperiterator": 68719476736,
"max_memory_consumption_per_query_per_node": 68719476736,
"servertimeout": "50m"
}
}
}
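Note that queries and management commands go to different endpoints (see the verb/resource table above). A small helper can pick the endpoint by checking for the leading dot that marks a management command (a sketch; the base URL is the help cluster used in the examples above):

```python
def endpoint_for(csl, base="https://help.kusto.windows.net"):
    # Management commands start with a dot and go to /v1/rest/mgmt;
    # everything else is a query and goes to /v2/rest/query.
    if csl.lstrip().startswith("."):
        return f"{base}/v1/rest/mgmt"
    return f"{base}/v2/rest/query"
```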
6 - Query/management HTTP response
Response status
The HTTP response status line follows the HTTP standard response codes. For example, code 200 indicates success.
The following status codes are currently in use, although any valid HTTP code may be returned.
Code | Subcode | Description |
---|---|---|
100 | Continue | Client can continue to send the request. |
200 | OK | Request started processing successfully. |
400 | BadRequest | Request is badly formed and failed (permanently). |
401 | Unauthorized | Client needs to authenticate first. |
403 | Forbidden | Client request is denied. |
404 | NotFound | Request references a non-existing entity. |
413 | PayloadTooLarge | Request payload exceeded limits. |
429 | TooManyRequests | Request has been denied because of throttling. |
504 | Timeout | Request has timed out. |
520 | ServiceError | Service found an error while processing the request. |
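Codes 429, 504, and 520 are typically transient, so clients often retry them with backoff. A sketch (the send callable and retry policy are illustrative, not part of the API):

```python
import time

TRANSIENT = {429, 504, 520}  # throttling, timeout, service error

def with_retries(send, max_attempts=4, base_delay=1.0):
    # Calls send() until a non-transient status is returned or attempts
    # are exhausted; sleeps with exponential backoff between attempts.
    for attempt in range(max_attempts):
        status, payload = send()
        if status not in TRANSIENT or attempt == max_attempts - 1:
            return status, payload
        time.sleep(base_delay * (2 ** attempt))

# Demo with a fake sender that is throttled twice, then succeeds:
attempts = {"n": 0}
def fake_send():
    attempts["n"] += 1
    return (429, None) if attempts["n"] < 3 else (200, "ok")

status, payload = with_retries(fake_send, base_delay=0.0)
```

Permanent failures such as 400 (BadRequest) should not be retried; only the request itself can be fixed.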
Response headers
The following custom headers will be returned.
Custom header | Description |
---|---|
x-ms-client-request-id | The unique request identifier sent in the request header of the same name, or some unique identifier. |
x-ms-activity-id | A globally unique correlation identifier for the request. It’s created by the service. |
Response body
If the status code is 200, the response body is a JSON document that encodes the query or management command’s results as a sequence of rectangular tables. See below for details.
If the status code indicates a 4xx or a 5xx error, other than 401, the response body is a JSON document that encodes the details of the failure. For more information, see Microsoft REST API Guidelines.
JSON encoding of a sequence of tables
The JSON encoding of a sequence of tables is a single JSON property bag with the following name/value pairs.
Name | Value |
---|---|
Tables | An array of the Table property bag. |
The Table property bag has the following name/value pairs.
Name | Value |
---|---|
TableName | A string that identifies the table. |
Columns | An array of the Column property bag. |
Rows | An array of the Row array. |
The Column property bag has the following name/value pairs.
Name | Value |
---|---|
ColumnName | A string that identifies the column. |
DataType | A string that provides the approximate .NET Type of the column. |
ColumnType | A string that provides the scalar data type of the column. |
Each Row array holds one value per column, in the same order as the respective Columns array.
Scalar data types that can’t be represented in JSON, such as datetime
and timespan
, are represented as JSON strings.
The following example shows one possible such object, when it contains
a single table called Table_0
that has a single column Text
of type
string
, and a single row.
{
"Tables": [{
"TableName": "Table_0",
"Columns": [{
"ColumnName": "Text",
"DataType": "String",
"ColumnType": "string"
}],
"Rows": [["Hello, World!"]]
    }]
}
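Reading such a response amounts to zipping each row with the column names. A sketch using the example above (the helper name is illustrative):

```python
def table_to_dicts(table):
    # Pairs each row's values with the column names from the same table.
    names = [c["ColumnName"] for c in table["Columns"]]
    return [dict(zip(names, row)) for row in table["Rows"]]

response = {
    "Tables": [{
        "TableName": "Table_0",
        "Columns": [{"ColumnName": "Text", "DataType": "String",
                     "ColumnType": "string"}],
        "Rows": [["Hello, World!"]],
    }]
}
records = table_to_dicts(response["Tables"][0])
# records == [{"Text": "Hello, World!"}]
```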
The meaning of tables in the response
In most cases, management commands return a result with a single table, containing the information generated by the management command. For example, the .show databases
command returns a single table with the details of all accessible databases.
Queries generally return multiple tables. For each tabular expression statement, one or more tables are generated in-order, representing the results produced by the statement.
Three tables are often produced:
An @ExtendedProperties table that provides additional values, such as client visualization instructions (information provided by the render operator), information about the query's effective database cursor, or information about the query's effective use of the query results cache.
For queries sent using the v1 protocol, the table has a single column of type string, whose value is a JSON-encoded string, such as:

Value |
---|
{"Visualization":"piechart",…} |
{"Cursor":"637239957206013576"} |

For queries sent using the v2 protocol, the table has three columns: (1) An integer column called TableId indicating which table in the results set the record applies to; (2) A string column called Key indicating the kind of information provided by the record (possible values: Visualization, ServerCache, and Cursor); (3) A dynamic column called Value providing the Key-determined information.

TableId | Key | Value |
---|---|---|
1 | ServerCache | {"OriginalStartedOn":"2021-06-11T07:48:34.6201025Z",…} |
1 | Visualization | {"Visualization":"piechart",…} |

A QueryStatus table that provides additional information about the execution of the query itself, such as whether it completed successfully or not, and what resources were consumed by the query.
This table has the following structure:

Timestamp | Severity | SeverityName | StatusCode | StatusDescription | Count | RequestId | ActivityId | SubActivityId | ClientActivityId |
---|---|---|---|---|---|---|---|---|---|
2020-05-02 06:09:12.7052077 | 4 | Info | 0 | Query completed successfully | 1 | … | … | … | … |

Severity values of 2 or smaller indicate failure.
A TableOfContents table, which is created last, and lists the other tables in the results.
An example for this table is:

Ordinal | Kind | Name | Id | PrettyName |
---|---|---|---|---|
0 | QueryResult | PrimaryResult | db9520f9-0455-4cb5-b257-53068497605a | |
1 | QueryProperties | @ExtendedProperties | 908901f6-5319-4809-ae9e-009068c267c7 | |
2 | QueryStatus | QueryStatus | 00000000-0000-0000-0000-000000000000 | |
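Since the TableOfContents is created last, a v1 client can read it first (from the end of the Tables array) to locate the primary results. A sketch with a synthetic response (the helper name and table contents are illustrative):

```python
def primary_results(response):
    # The TableOfContents is the last table in a v1 query response; its rows
    # point (by Ordinal) at the tables whose Kind is "QueryResult".
    toc = response["Tables"][-1]
    names = [c["ColumnName"] for c in toc["Columns"]]
    rows = [dict(zip(names, r)) for r in toc["Rows"]]
    return [response["Tables"][int(r["Ordinal"])]
            for r in rows if r["Kind"] == "QueryResult"]

response = {
    "Tables": [
        {"TableName": "Table_0",
         "Columns": [{"ColumnName": "Text", "ColumnType": "string"}],
         "Rows": [["Hello, World!"]]},
        {"TableName": "Table_1",
         "Columns": [{"ColumnName": "Value", "ColumnType": "string"}],
         "Rows": [['{"Cursor":"637239957206013576"}']]},
        {"TableName": "Table_2",  # the TableOfContents
         "Columns": [{"ColumnName": "Ordinal", "ColumnType": "long"},
                     {"ColumnName": "Kind", "ColumnType": "string"},
                     {"ColumnName": "Name", "ColumnType": "string"}],
         "Rows": [[0, "QueryResult", "PrimaryResult"],
                  [1, "QueryProperties", "@ExtendedProperties"]]},
    ]
}
primaries = primary_results(response)
```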
7 - Request properties
Request properties control how a query or command executes and returns results.
Supported request properties
The following table overviews the supported request properties.
Property name | Type | Description |
---|---|---|
best_effort | bool | If set to true , allows fuzzy resolution and connectivity issues of data sources (union legs.) The set of union sources is reduced to the set of table references that exist and are accessible at the time of execution. If at least one accessible table is found, the query executes. Any failure yields a warning in the query status results but doesn’t prevent the query from executing. |
client_max_redirect_count | long | Controls the maximum number of HTTP redirects the client follows during processing. |
client_results_reader_allow_varying_row_widths | bool | If set to true , the results reader tolerates tables whose row width varies across rows. |
deferpartialqueryfailures | bool | If set to true , suppresses reporting of partial query failures within the result set. |
max_memory_consumption_per_query_per_node | long | Overrides the default maximum amount of memory a query can allocate per node. |
maxmemoryconsumptionperiterator | long | Overrides the default maximum amount of memory a query operator can allocate. |
maxoutputcolumns | long | Overrides the default maximum number of columns a query is allowed to produce. |
norequesttimeout | bool | Sets the request timeout to its maximum value. This option can’t be modified as part of a set statement. |
notruncation | bool | Disables truncation of query results returned to the caller. |
push_selection_through_aggregation | bool | If set to true , allows pushing simple selection through aggregation. |
query_bin_auto_at | literal | Specifies the start value to use when evaluating the bin_auto() function. |
query_bin_auto_size | literal | Specifies the bin size value to use when evaluating the bin_auto() function. |
query_cursor_after_default | string | Sets the default parameter value for the cursor_after() function when called without parameters. |
query_cursor_before_or_at_default | string | Sets the default parameter value for the cursor_before_or_at() function when called without parameters. |
query_cursor_current | string | Overrides the cursor value returned by the cursor_current() function. |
query_cursor_disabled | bool | Disables the usage of cursor functions within the query context. |
query_cursor_scoped_tables | dynamic | Lists table names to be scoped to cursor_after_default .. cursor_before_or_at() (upper bound is optional). |
query_datascope | string | Controls the data to which the query applies. Supported values are default , all , or hotcache . |
query_datetimescope_column | string | Specifies the column name for the query’s datetime scope (query_datetimescope_to / query_datetimescope_from ). |
query_datetimescope_from | datetime | Sets the minimum date and time limit for the query scope. If defined, it serves as an autoapplied filter on query_datetimescope_column . |
query_datetimescope_to | datetime | Sets the maximum date and time limit for the query scope. If defined, it serves as an autoapplied filter on query_datetimescope_column . |
query_distribution_nodes_span | int | Controls the behavior of subquery merge. The executing node introduces an extra level in the query hierarchy for each subgroup of nodes, and this option sets the subgroup size. |
query_fanout_nodes_percent | int | Specifies the percentage of nodes for executing fan-out. |
query_fanout_threads_percent | int | Specifies the percentage of threads for executing fan-out. |
query_force_row_level_security | bool | If set to true , enforces row level security rules, even if the policy is disabled. |
query_language | string | Determines how the query text should be interpreted. Supported values are csl , kql , or sql . This option can’t be modified as part of a set statement. |
query_log_query_parameters | bool | Enables query parameters logging for later viewing in the .show queries journal. |
query_max_entities_in_union | long | Overrides the default maximum number of entities in a union. |
query_now | datetime | Overrides the datetime value returned by the now() function. |
query_optimize_fts_at_relop | bool | When set to true , enables an experimental optimization for queries that perform costly free-text search operations. For instance, |where * has "pattern" . |
query_python_debug | bool or int | If set to true , generates a Python debug query for the enumerated Python node. |
query_results_apply_getschema | bool | If set to true , retrieves the schema of each tabular data in the results of the query instead of the data itself. |
query_results_cache_force_refresh | bool | If set to true , forces a cache refresh of query results for a specific query. Must be used in combination with query_results_cache_max_age , and sent via Kusto Data ClientRequestProperties class, not as a set statement. |
query_results_cache_max_age | timespan | Controls the maximum age of the cached query results that the service is allowed to return. |
query_results_cache_per_shard | bool | If set to true , enables per extent query caching. |
query_results_progressive_row_count | long | Provides a hint for how many records to send in each update. Takes effect only if results_progressive_enabled is set. |
query_results_progressive_update_period | timespan | Provides a hint for how often to send progress frames. Takes effect only if results_progressive_enabled is set. |
query_take_max_records | long | Limits query results to a specified number of records. |
query_weakconsistency_session_id | string | Sets the query weak consistency session ID. Takes effect when queryconsistency mode is set to weakconsistency_by_session_id . This option can’t be modified as part of a set statement. |
queryconsistency | string | Controls query consistency. Supported values are strongconsistency , weakconsistency , weakconsistency_by_query , weakconsistency_by_database , or weakconsistency_by_session_id . When using weakconsistency_by_session_id , ensure to also set the query_weakconsistency_session_id property. This option can’t be modified as part of a set statement. |
request_app_name | string | Specifies the request application name to be used in reporting. For example, .show queries. This option can’t be modified as part of a set statement. |
request_block_row_level_security | bool | If set to true , blocks access to tables with row level security policy enabled. |
request_callout_disabled | bool | If set to true , prevents request callout to a user-provided service. |
request_description | string | Allows inclusion of arbitrary text as the request description. |
request_external_data_disabled | bool | If set to true , prevents the request from accessing external data using the externaldata operator or external tables. |
request_external_table_disabled | bool | If set to true , prevents the request from accessing external tables. |
request_impersonation_disabled | bool | If set to true , indicates that the service shouldn’t impersonate the caller’s identity. |
request_readonly | bool | If set to true , prevents write access for the request. This option can’t be modified as part of a set statement. |
request_readonly_hardline | bool | If set to true , the request operates in a strict read-only mode. The request isn’t able to write anything, and any noncompliant functionality, such as plugins, is disabled. This option can’t be modified as part of a set statement. |
request_remote_entities_disabled | bool | If set to true , prevents the request from accessing remote databases and remote entities. |
request_sandboxed_execution_disabled | bool | If set to true , prevents the request from invoking code in the sandbox. |
request_user | string | Specifies the request user to be used in reporting. For example, .show queries. This option can’t be modified as part of a set statement. |
results_error_reporting_placement | string | Determines the placement of errors in the result set. Options are in_data , end_of_table , and end_of_dataset . |
results_progressive_enabled | bool | If set to true , enables the progressive query stream. This option can’t be modified as part of a set statement. |
results_v2_fragment_primary_tables | bool | Causes primary tables to be sent in multiple fragments, each containing a subset of the rows. This option can’t be modified as part of a set statement. |
results_v2_newlines_between_frames | bool | Adds new lines between frames in the results, in order to make it easier to parse them. |
servertimeout | timespan | Overrides the default request timeout. This option can’t be modified as part of a set statement. Instead, modify the option using the dashboard settings. |
truncation_max_records | long | Overrides the default maximum number of records a query is allowed to return to the caller (truncation). |
truncationmaxsize | long | Overrides the default maximum data size a query is allowed to return to the caller (truncation). This option can’t be modified as part of a set statement. |
validatepermissions | bool | Validates the user’s permissions to perform the query without actually running the query. Possible results for this property are: OK (permissions are present and valid), Incomplete (validation couldn’t be completed due to dynamic schema evaluation), or KustoRequestDeniedException (permissions weren’t set). |
How to set request properties
You can set request properties in the following ways:
- The POST body of an HTTP request
- A Kusto Query Language set statement
- The set option method of the ClientRequestProperties class
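The first of these options can be sketched as follows: a minimal Python example that builds a query request body carrying request properties in the `properties.Options` object. The database name, query text, and description are hypothetical placeholders.

```python
import json

def build_query_request(database, query, options=None):
    """Build the POST body for a query request, optionally with request properties."""
    body = {"db": database, "csl": query}
    if options:
        # Request properties go under "properties" -> "Options" in the POST body.
        body["properties"] = {"Options": options}
    return json.dumps(body)

# Example: a read-only query with a description, using options from the table above.
body = build_query_request(
    "MyDatabase",                       # hypothetical database name
    "MyTable | take 10",                # hypothetical query
    options={
        "request_readonly": True,
        "request_description": "nightly sanity check",
    },
)
print(body)
```

The same options could alternatively be set with a `set` statement (for example, `set request_readonly; MyTable | take 10`), except for the options the table marks as not modifiable by a `set` statement.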
Related content
8 - REST API overview
This article describes how to interact with your cluster over HTTPS.
Supported actions
The available actions for an endpoint depend on whether it’s an engine endpoint or a data management endpoint. In the Azure portal cluster overview, the engine endpoint is identified as the Cluster URI and the data management endpoint as the Data ingestion URI.
Action | HTTP verb | URI template | Engine | Data Management | Authentication |
---|---|---|---|---|---|
Query | GET or POST | /v1/rest/query | Yes | No | Yes |
Query | GET or POST | /v2/rest/query | Yes | No | Yes |
Management | POST | /v1/rest/mgmt | Yes | Yes | Yes |
StreamIngest | POST | /v1/rest/ingest | Yes | No | Yes |
UI | GET | / | Yes | No | No |
UI | GET | /{dbname} | Yes | No | No |
Where Action represents a group of related activities:
- The Query action sends a query to the service and gets back the results of the query.
- The Management action sends a management command to the service and gets back the results of the management command.
- The StreamIngest action ingests data to a table.
- The UI action starts up a desktop client or web client for interacting with the service. The action is done through an HTTP Redirect response.
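The Query action from the table above can be sketched with the Python standard library. This is a minimal, hedged example that only constructs the request; the cluster name, database, query, and token are placeholders, and actually sending the request needs a valid Microsoft Entra access token.

```python
import json
import urllib.request

cluster = "https://help.kusto.windows.net"        # engine endpoint (Cluster URI)
token = "...AzureActiveDirectoryAccessToken..."   # placeholder, not a real token

body = json.dumps({"db": "Samples", "csl": "StormEvents | take 5"}).encode("utf-8")
req = urllib.request.Request(
    url=f"{cluster}/v2/rest/query",
    data=body,
    method="POST",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
        "Content-Type": "application/json; charset=utf-8",
    },
)

# Sending is left commented out because it requires a valid token and network access:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.full_url, req.get_method())
```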
Related content
9 - Send T-SQL queries over RESTful web API
This article describes how to use a subset of the T-SQL language to send T-SQL queries via the REST API.
Request structure
To send T-SQL queries to the API, create a POST
request with the following components.
To copy your URI, in the Azure portal, go to your cluster’s overview page, and then select the URI. Replace <your_cluster> with your Azure Data Explorer cluster name.
To copy your URI, see Copy a KQL database URI.
```
Accept: application/json
Content-Type: application/json; charset=utf-8
```
Body: Set the csl property to the text of your T-SQL query, and the client request property query_language to sql.

{ "properties": { "Options": { "query_language": "sql" } } }
Example
The following example shows a request body with a T-SQL query in the csl
field and the query_language
client request property set to sql
.
{
"db": "MyDatabase",
"csl": "SELECT top(10) * FROM MyTable",
"properties": {
"Options": {
"query_language": "sql"
}
}
}
The response is in a format similar to the following.
{
"Tables": [
{
"TableName": "Table_0",
"Columns": [
{
"ColumnName": "rf_id",
"DataType": "String",
"ColumnType": "string"
},
...
],
"Rows": [
[
"b9b84d3451b4d3183d0640df455399a9",
...
],
...
]
}
]
}
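The request body shown above can be built with a short Python sketch. The database and table names are the hypothetical ones from the example; sending the body works the same way as a regular query request.

```python
import json

def tsql_body(database, tsql):
    """Wrap a T-SQL query so the service interprets it as T-SQL rather than KQL."""
    return {
        "db": database,
        "csl": tsql,
        # The query_language client request property switches the parser to T-SQL.
        "properties": {"Options": {"query_language": "sql"}},
    }

print(json.dumps(tsql_body("MyDatabase", "SELECT top(10) * FROM MyTable"), indent=2))
```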
Related content
- Learn more about T-SQL limitations
- Learn more about T-SQL limitations
- See the REST API overview
- See the REST API overview
10 - Streaming ingestion HTTP request
Request verb and resource
Action | HTTP verb | HTTP resource |
---|---|---|
Ingest | POST | /v1/rest/ingest/{database}/{table}?{additional parameters} |
Request parameters
Parameter | Description | Required/Optional |
---|---|---|
{database} | Name of the target database for the ingestion request | Required |
{table} | Name of the target table for the ingestion request | Required |
Additional parameters
Additional parameters are formatted as URL query {name}={value}
pairs, separated by the & character.
Parameter | Description | Required/Optional |
---|---|---|
streamFormat | Specifies the format of the data in the request body. The value should be one of: CSV , TSV , SCsv , SOHsv , PSV , JSON , MultiJSON , Avro . For more information, see Supported Data Formats. | Required |
mappingName | The name of the pre-created ingestion mapping defined on the table. For more information, see Data Mappings. The way to manage pre-created mappings on the table is described here. | Optional, but Required if streamFormat is one of JSON , MultiJSON , or Avro |
For example, to ingest CSV-formatted data into table Logs
in database Test
, use:
POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Csv HTTP/1.1
To ingest JSON-formatted data with pre-created mapping mylogmapping
, use:
POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Json&mappingName=mylogmapping HTTP/1.1
Request headers
The following table contains the common headers for query and management operations.
Standard header | Description | Required/Optional |
---|---|---|
Accept | Set this value to application/json . | Optional |
Accept-Encoding | Supported encodings are gzip and deflate . | Optional |
Authorization | See authentication. | Required |
Connection | Enable Keep-Alive . | Optional |
Content-Length | Specify the request body length, when known. | Optional |
Content-Encoding | Set to gzip when the body is gzip-compressed. | Optional |
Expect | Set to 100-Continue . | Optional |
Host | Set to the domain name to which you sent the request (such as, help.kusto.windows.net ). | Required |
The following table contains the common custom headers for query and management operations. Unless otherwise indicated, the headers are for telemetry purposes only, and have no functionality impact.
Custom header | Description | Required/Optional |
---|---|---|
x-ms-app | The (friendly) name of the application making the request. | Optional |
x-ms-user | The (friendly) name of the user making the request. | Optional |
x-ms-user-id | Same as x-ms-user . | Optional |
x-ms-client-request-id | A unique identifier for the request. | Optional |
x-ms-client-version | The (friendly) version identifier for the client making the request. Required in scenarios where it’s used to identify the request, such as canceling a running query. | Optional/Required |
Body
The body is the actual data to be ingested. The textual formats should use UTF-8 encoding.
Examples
The following example shows the HTTP POST request for ingesting JSON content:
POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Json&mappingName=mylogmapping HTTP/1.1
Request headers:
Authorization: Bearer ...AzureActiveDirectoryAccessToken...
Accept-Encoding: deflate
Accept-Encoding: gzip
Connection: Keep-Alive
Content-Length: 161
Host: help.kusto.windows.net
x-ms-client-request-id: MyApp.Ingest;5c0656b9-37c9-4e3a-a671-5f83e6843fce
x-ms-user-id: alex@contoso.com
x-ms-app: MyApp
Request body:
{"Timestamp":"2018-11-14 11:34","Level":"Info","EventText":"Nothing Happened"}
{"Timestamp":"2018-11-14 11:35","Level":"Error","EventText":"Something Happened"}
The following example shows the HTTP POST request for ingesting the same compressed data.
POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Json&mappingName=mylogmapping HTTP/1.1
Request headers:
Authorization: Bearer ...AzureActiveDirectoryAccessToken...
Accept-Encoding: deflate
Accept-Encoding: gzip
Connection: Keep-Alive
Content-Length: 116
Content-Encoding: gzip
Host: help.kusto.windows.net
x-ms-client-request-id: MyApp.Ingest;5c0656b9-37c9-4e3a-a671-5f83e6843fce
x-ms-user-id: alex@contoso.com
x-ms-app: MyApp
Request body:
... binary data ...
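The compressed-ingestion example above can be sketched in Python: the JSON records are serialized one document per line, gzip-compressed, and sent with Content-Encoding: gzip. The target database, table, and mapping are the ones from the examples; the token is a placeholder.

```python
import gzip
import json

records = [
    {"Timestamp": "2018-11-14 11:34", "Level": "Info", "EventText": "Nothing Happened"},
    {"Timestamp": "2018-11-14 11:35", "Level": "Error", "EventText": "Something Happened"},
]

# The Json stream format expects one JSON document per line, UTF-8 encoded.
raw = "\n".join(json.dumps(r, separators=(",", ":")) for r in records).encode("utf-8")
body = gzip.compress(raw)

url = ("https://help.kusto.windows.net/v1/rest/ingest/Test/Logs"
       "?streamFormat=Json&mappingName=mylogmapping")
headers = {
    "Authorization": "Bearer ...AzureActiveDirectoryAccessToken...",  # placeholder
    "Content-Encoding": "gzip",   # required because the body is compressed
    "Content-Length": str(len(body)),
    "Host": "help.kusto.windows.net",
}
print(len(raw), len(body))
```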
11 - UI deep links
UI deep links are URIs that, when opened in a web browser, result in automatically opening a UI tool (such as Kusto.Explorer or Kusto.WebExplorer) in a way that preselects the desired Kusto cluster (and optionally database).
For example, when a user selects https://help.kusto.windows.net/Samples?query=print%20123,
Kusto.WebExplorer opens connected to the help.kusto.windows.net
cluster, selects the Samples
database as the default database, and runs the associated query.
UI deep links work by having the user browser receive a redirect response when issuing a GET request to the URI, and depend on the browser settings to allow processing of this redirection. (For example, a UI deep link to Kusto.Explorer requires the browser to be configured to allow ClickOnce applications to start.)
The UI deep link must be a valid URI, and has the following format:

https://Cluster/[DatabaseName][?Parameters]
Where:
Cluster is the base address of the cluster itself. This part is mandatory, but can be overridden by specifying the query parameter
uri
in Parameters.DatabaseName is the name of the database in Cluster to use as the database in scope. If this property isn’t set, the UI tool decides which database to use, if at all. (If a query or a command is specified by Parameters, the recommendation is for the correct value for DatabaseName to be included in the URI.)
Parameters can be used to specify other parameters to control the behavior of the UI deep link. Parameters that are supported by all Kusto “official” UI tools are indicated in the following table. Tool-specific parameters are noted later on in this document.
Parameter | Description |
---|---|
web | Selects the UI tool. By default, or if set to 1 , Kusto.WebExplorer is used. If set to 0 , Kusto.Explorer is used. If set to 3 , Kusto.WebExplorer is used with no preexisting tabs. |
query | The text of the query or management command to start with when opening the UI tool. |
querysrc | A URI pointing at a web resource that holds the text of the query or management command to start with when opening the UI tool. |
name | The name of the connection to the cluster. |
autorun | If set to false , requires that the user actively run the query instead of autorunning it when the link is clicked. |

The value of query can use standard HTTP query parameter encoding. Alternatively, it can be encoded using the transformation base64(gzip(text)), which makes it possible to compress long queries or management commands to fit in the default browser URI length limits.
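The base64(gzip(text)) transformation can be sketched as follows, using the help cluster and Samples database from the examples on this page; the query text is a hypothetical long query.

```python
import base64
import gzip
import urllib.parse

def encode_query(text):
    """Compress query text with gzip, then base64-encode it for a deep-link URI."""
    return base64.b64encode(gzip.compress(text.encode("utf-8"))).decode("ascii")

query = "StormEvents | summarize count() by State | top 10 by count_"
# Percent-encode the result so it is safe inside a URI query string.
link = ("https://help.kusto.windows.net/Samples?query="
        + urllib.parse.quote(encode_query(query)))
print(link)
```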
Examples
Here are a few examples for links:
- https://help.kusto.windows.net/ : When a user agent (such as a browser) issues a GET / request, it's redirected to the default UI tool configured to query the help cluster.
- https://help.kusto.windows.net/Samples : When a user agent issues a GET /Samples request, it's redirected to the default UI tool configured to query the Samples database of the help cluster.
- https://help.kusto.windows.net/Samples?query=StormEvents : When a user agent issues a GET /Samples?query=StormEvents request, it's redirected to the default UI tool configured to query the Samples database of the help cluster, and issues the StormEvents query.
Deep linking to Kusto.Explorer
This REST API performs redirection that installs and runs the Kusto.Explorer desktop client tool with specially crafted startup parameters that open a connection to a specific cluster and execute a query against that cluster.
See Deep-linking with Kusto.Explorer for a description of the redirect URI syntax for starting up Kusto.Explorer.
Deep linking to Kusto.WebExplorer
In addition to the query parameters already mentioned, the following parameters might appear in UI deep links to Kusto.WebExplorer:
Parameter | Description |
---|---|
login_hint | Sets the user sign-in name (email) of the user. |
tenant | Sets the Microsoft Entra tenant ID of the user. |
To instruct Kusto.WebExplorer to sign in a user from another Microsoft Entra tenant, specify login_hint and tenant for the user.
Redirection is to the following URI:
https://BaseAddress/clusters/Cluster[/databases/DatabaseName][?Parameters]
Specifying the query or management command in the URI
When the URI query string parameter query
is specified, it must be encoded according to standard URI query string encoding rules. Alternatively, the text of
the query or management command can be compressed by gzip, and then encoded
via base64 encoding. This method allows you to send longer queries or management
commands, since it results in shorter URIs.
Specifying the query or management command by indirection
If the query or management command is long, even encoding it using gzip/base64 might exceed the maximum URI length of the user agent. In that case, provide the URI query string parameter
querysrc
instead; its value is a short URI pointing at a web resource
that holds the query or management command text.
For example, this value can be the URI for a file hosted by Azure Blob Storage.