1 - Go SDK

1.1 - Kusto Go SDK

This article describes Kusto Go SDK.

The Kusto Go Client library provides the capability to query, control, and ingest into your database using Go. This SDK is used for ingesting or querying data. For resource administration, see the GitHub library.

Minimum Requirements

  • go version go1.13

Installation

go get github.com/Azure/azure-kusto-go/kusto

Docs (godoc)

https://godoc.org/github.com/Azure/azure-kusto-go

Repo

2 - Java SDK

2.1 - Kusto Java SDK

This article describes Kusto Java SDK.

The Kusto Java client library provides the capability to query your database using Java. The Kusto Java SDK is available in azure-kusto-java.

3 - Node SDK

3.1 - Kusto Node SDK

This article describes Kusto Node SDK.

The Kusto Node SDK is compatible with Node LTS v6.14 and built with ES6.

Kusto Node Client Library provides the capability to query your database using Node.js.

Kusto Node Ingest Client is a Node library that can send data to your database (i.e. ingest data).

4 - PowerShell

4.1 - Kusto .NET Client Libraries from PowerShell

This article describes how to use Kusto .NET Client Libraries from PowerShell.

PowerShell scripts can use the Kusto client libraries, as PowerShell inherently integrates with .NET libraries. In this article, you learn how to load and use the client libraries to run queries and management commands.

Prerequisites

  • An archiving tool to extract ZIP files, such as 7-Zip or WinRAR.

Get the libraries

To use the Kusto .NET client libraries in PowerShell:

  1. Download Microsoft.Azure.Kusto.Tools.

  2. Right-click on the downloaded package. From the menu, select your archiving tool and extract the package contents. If the archiving tool isn’t visible from the menu, select Show more options. The extraction results in multiple folders, one of which is named tools.

  3. Inside the tools folder, there are different subfolders catering to different PowerShell versions. For PowerShell version 5.1, use the net472 folder. For PowerShell version 7 or later, use any of the version folders. Copy the path of the relevant folder.

  4. From PowerShell, load the libraries from the copied path:

    [System.Reflection.Assembly]::LoadFrom("C:\Downloads\tools\net472\Kusto.Data.dll")

    You should see an output similar to the following:

    GAC    Version     Location
    ---    -------     --------
    False  v4.0.30319  C:\Downloads\tools\net472\Kusto.Data.dll

Once loaded, you can use the libraries to connect to a cluster and database.

Connect to a cluster and database

Authenticate to a cluster and database with one of the following methods:

  • User authentication: Prompt the user to verify their identity in a web browser.
  • Application authentication: Create a Microsoft Entra app and use the credentials for authentication.
  • Azure CLI authentication: Sign in to the Azure CLI on your machine, and Kusto retrieves the token from the Azure CLI.

Select the relevant tab.

User

Once you run your first query or command, this method opens an interactive browser window for user authorization.


$kcsb = New-Object Kusto.Data.KustoConnectionStringBuilder($clusterUrl, $databaseName)

Application

Create an MS Entra app and grant it access to your database. Then, provide the app credentials in place of the $applicationId, $applicationKey, and $authority.


$kcsb = New-Object Kusto.Data.KustoConnectionStringBuilder($clusterUrl, $databaseName)
$kcsb = $kcsb.WithAadApplicationKeyAuthentication($applicationId, $applicationKey, $authority)

Azure CLI

For this method of authentication to work, first sign in to the Azure CLI with the az login command.


$kcsb = New-Object Kusto.Data.KustoConnectionStringBuilder($clusterUrl, $databaseName)
$kcsb = $kcsb.WithAadAzCliAuthentication()

Run a query

Create a query provider and run Kusto Query Language queries.

$queryProvider = [Kusto.Data.Net.Client.KustoClientFactory]::CreateCslQueryProvider($kcsb)
$query = "StormEvents | take 5"
Write-Host "Executing query: '$query' with connection string: '$($kcsb.ToString())'"

# Optional: set a client request ID and set a client request property (e.g. Server Timeout)
$crp = New-Object Kusto.Data.Common.ClientRequestProperties
$crp.ClientRequestId = "MyPowershellScript.ExecuteQuery." + [Guid]::NewGuid().ToString()
$crp.SetOption([Kusto.Data.Common.ClientRequestProperties]::OptionServerTimeout, [TimeSpan]::FromSeconds(30))

# Run the query
$reader = $queryProvider.ExecuteQuery($query, $crp)

# Do something with the result datatable
# For example: print it formatted as a table, sorted by the "StartTime" column in descending order
$dataTable = [Kusto.Cloud.Platform.Data.ExtendedDataReader]::ToDataSet($reader).Tables[0]
$dataView = New-Object System.Data.DataView($dataTable)
$dataView | Sort StartTime -Descending | Format-Table -AutoSize

Output

StartTime            EndTime              EpisodeId  EventId  State           EventType          InjuriesDirect  InjuriesIndirect  DeathsDirect  DeathsIndirect
2007-12-30 16:00:00  2007-12-30 16:05:00  11749      64588    GEORGIA         Thunderstorm Wind  0               0                 0             0
2007-12-20 07:50:00  2007-12-20 07:53:00  12554      68796    MISSISSIPPI     Thunderstorm Wind  0               0                 0             0
2007-09-29 08:11:00  2007-09-29 08:11:00  11091      61032    ATLANTIC SOUTH  Waterspout         0               0                 0             0
2007-09-20 21:57:00  2007-09-20 22:05:00  11078      60913    FLORIDA         Tornado            0               0                 0             0
2007-09-18 20:00:00  2007-09-19 18:00:00  11074      60904    FLORIDA         Heavy Rain         0               0                 0             0

Run a management command

Create a CSL admin provider and run management commands.

The following example runs a management command to check the health of the cluster.

$adminProvider = [Kusto.Data.Net.Client.KustoClientFactory]::CreateCslAdminProvider($kcsb)
$command = [Kusto.Data.Common.CslCommandGenerator]::GenerateDiagnosticsShowCommand()
Write-Host "Executing command: '$command' with connection string: '$($kcsb.ToString())'"
# Run the command
$reader = $adminProvider.ExecuteControlCommand($command)
# Read the results
$reader.Read() # this reads a single row/record. If you have multiple ones returned, you can read in a loop
$isHealthy = $reader.GetBoolean(0)
Write-Host "IsHealthy = $isHealthy"

Output

IsHealthy = True

For more information on how to run management commands with the Kusto client libraries, see Create an app to run management commands.

Example

The following example demonstrates the process of loading the libraries, authenticating, and executing a query on the publicly accessible help cluster.

#  This is an example of the location from where you extract the Microsoft.Azure.Kusto.Tools package
#  Make sure you load the types from a local directory and not from a remote share
#  Make sure you load the version compatible with your PowerShell version (see explanations above)
#  Use `dir "$packagesRoot\*" | Unblock-File` to make sure all these files can be loaded and executed
$packagesRoot = "C:\Microsoft.Azure.Kusto.Tools\tools\net472"
#  Load the Kusto client library and its dependencies
[System.Reflection.Assembly]::LoadFrom("$packagesRoot\Kusto.Data.dll")
#  Define the connection to the help cluster and database
$clusterUrl = "https://help.kusto.windows.net;Fed=True"
$databaseName = "Samples"
# MS Entra user authentication with interactive prompt
$kcsb = New-Object Kusto.Data.KustoConnectionStringBuilder($clusterUrl, $databaseName)
# Run a simple query
$queryProvider = [Kusto.Data.Net.Client.KustoClientFactory]::CreateCslQueryProvider($kcsb)
$query = "StormEvents | take 5"
$crp = New-Object Kusto.Data.Common.ClientRequestProperties
$reader = $queryProvider.ExecuteQuery($query, $crp)

Controlling tracing

Since there’s only one global PowerShell.exe.config file for all PowerShell applications, generally libraries can’t rely on .NET’s app.config model to access their settings. You can still use the programmatic model for tracing. For more information, see controlling tracing.

You can use the following methods instead:

  • Enable tracing to the console:

    $traceListener = New-Object Kusto.Cloud.Platform.Utils.ConsoleTraceListener
    [Kusto.Cloud.Platform.Utils.TraceSourceManager]::RegisterTraceListener($traceListener)
    
  • Create a Kusto.Cloud.Platform.Utils.RollingCsvTraceListener2 object with a single argument of the folder location where traces are written.

5 - Python SDK

5.1 - Kusto Python SDK

This article describes the Kusto Python SDK.

The Kusto Python Client library lets you query your database using Python.

The library is Python 2.x/3.x compatible. It supports all data types using the Python DB API interface.

You can use the library, for example, from Jupyter Notebooks that are attached to Spark clusters, including, but not exclusively, Azure Databricks instances.

Kusto Python Ingest Client is a python library that lets you send, or ingest, data to your database.

6 - R SDK

6.1 - Kusto R SDK

This article describes Kusto R SDK.

The Kusto R library is an open-source project, part of the cloudyr project, that allows you to query your database using R.

The GitHub Repository includes installation instructions and usage examples.

7 - REST API

7.1 - Authentication over HTTPS

This article describes Authentication over HTTPS.

To interact with your database over HTTPS, the principal making the request must authenticate by using the HTTP Authorization request header.

Syntax

Authorization: Bearer AccessToken

Parameters

Name         Type    Required  Description
AccessToken  string  ✔️        A Microsoft Entra access token for the service.
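As a minimal sketch, the header can be attached to a REST query request as follows (Python standard library only; the cluster URL, database, and token are placeholder values, and the request is constructed but not sent):

```python
import json
import urllib.request

# Placeholder values -- substitute your cluster URL, database, and access token
cluster_url = "https://help.kusto.windows.net"
access_token = "eyJ0eXAi..."  # a Microsoft Entra access token

body = json.dumps({"db": "Samples", "csl": "StormEvents | count"}).encode("utf-8")
request = urllib.request.Request(
    url=f"{cluster_url}/v2/rest/query",
    data=body,
    method="POST",
)
# The principal authenticates by passing the token in the Authorization header
request.add_header("Authorization", f"Bearer {access_token}")
request.add_header("Content-Type", "application/json")

print(request.get_header("Authorization"))  # prints: Bearer eyJ0eXAi...
```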

Get an access token

There are many different methods to get a Microsoft Entra access token. To learn more, see user authentication and application authentication.

Get an access token for a user principal using the Azure CLI

The following steps return an access token for the user principal making the request. Make sure the user principal has access to the resource you plan to access. For more information, see role-based access control.

  1. Sign in to the Azure CLI.

    az login --output table
    
  2. Find the row where the column Default is true. Confirm that the subscription in that row is the subscription for which you want to create your Microsoft Entra access token. To find subscription information, see get subscription and tenant IDs in the Azure portal. If you need to switch to a different subscription, run the az account set command.

  3. Run the following command to get the access token.

    az account get-access-token \
      --resource "https://api.kusto.windows.net" \
      --query "accessToken"
    

Get an access token for a service principal using the Azure CLI

Microsoft Entra service principals represent applications or services that need access to resources, usually in non-interactive scenarios such as API calls. The following steps guide you through creating a service principal and getting a bearer token for this principal.

  1. Sign in to the Azure CLI.

    az login --output table
    
  2. Find the row where the column Default is true. Confirm that the subscription in that row is the subscription under which you want to create the service principal. To find subscription information, see get subscription and tenant IDs in the Azure portal. If you need to switch to a different subscription, run the az account set command.

  3. Create a service principal. The following command creates a Microsoft Entra service principal and returns the appId, displayName, password, and tenantId for the service principal.

    az ad sp create-for-rbac --name "my-service-principal"

  4. Grant the service principal access to your database. For example, in the context of your database, use the following command to add the principal as a user.

    .add database {databaseName} users ('aadapp={appId};{tenantId}')

    To learn about the different roles and how to assign them, see security roles management.

  5. Get an access token for the service principal. The {tenantId}, {appId}, and {password} values come from the output of the previous steps.

    curl -X POST https://login.microsoftonline.com/{tenantId}/oauth2/token \
      -F client_id={appId} \
      -F client_secret={password} \
      -F grant_type=client_credentials \
      -F resource=https://api.kusto.windows.net
    
    

7.2 - How to authenticate with Microsoft Authentication Library (MSAL) in apps

This article describes authentication with Microsoft Authentication Library (MSAL).

To programmatically authenticate with your cluster, you need to request an access token from Microsoft Entra ID specific to Azure Data Explorer. This access token acts as proof of identity when issuing requests to your cluster. You can use one of the Microsoft Authentication Library (MSAL) flows to create an access token.

This article explains how to use MSAL to authenticate principals to your cluster. The direct use of MSAL to authenticate principals is primarily relevant in web applications that require On-behalf-of (OBO) authentication or Single Page Application (SPA) authentication. For other cases, we recommend using the Kusto client libraries as they simplify the authentication process.

In this article, learn about the main authentication scenarios, the information to provide for successful authentication, and the use of MSAL for authentication.

Authentication scenarios

The main authentication scenarios are as follows:

For user and application authentication, we recommend using the Kusto client libraries. For OBO and SPA authentication, the Kusto client libraries can't be used.

Authentication parameters

During the token acquisition process, the client needs to provide the following parameters:

Parameter name    Description

Perform user authentication with MSAL

The following code sample shows how to use MSAL to get an authorization token for your cluster. The authorization is done in a way that launches the interactive sign-in UI. The appRedirectUri is the URL to which Microsoft Entra ID redirects after authentication completes successfully. MSAL extracts the authorization code from this redirect.


var kustoUri = "https://<clusterName>.<region>.kusto.windows.net";
// appClientId, appTenantId, and appRedirectUri are placeholders for your app registration values
var authClient = PublicClientApplicationBuilder.Create(appClientId)
    .WithAuthority($"https://login.microsoftonline.com/{appTenantId}")
    .WithRedirectUri(appRedirectUri)
    .Build();

var result = authClient.AcquireTokenInteractive(
    new[] { $"{kustoUri}/.default" } // Define scopes for accessing Azure Data Explorer cluster
).ExecuteAsync().Result;

var bearerToken = result.AccessToken;

var request = WebRequest.Create(new Uri(kustoUri));
request.Headers.Set(HttpRequestHeader.Authorization, string.Format(CultureInfo.InvariantCulture, "{0} {1}", "Bearer", bearerToken));

Perform application authentication with MSAL

The following code sample shows how to use MSAL to get an authorization token for your cluster. In this flow, no prompt is presented. The application must be registered with Microsoft Entra ID and have an app key or an X509v2 certificate issued by Microsoft Entra ID. To set up an application, see Provision a Microsoft Entra application.


var kustoUri = "https://<clusterName>.<region>.kusto.windows.net";
// appClientId, appTenantId, and appKey are placeholders for your app registration values
var authClient = ConfidentialClientApplicationBuilder.Create(appClientId)
    .WithAuthority($"https://login.microsoftonline.com/{appTenantId}")
    .WithClientSecret(appKey) // Or .WithCertificate() to authenticate with an X.509v2 certificate
    .Build();

var result = authClient.AcquireTokenForClient(
    new[] { $"{kustoUri}/.default" } // Define scopes for accessing Azure Data Explorer cluster
).ExecuteAsync().Result;
var bearerToken = result.AccessToken;

var request = WebRequest.Create(new Uri(kustoUri));
request.Headers.Set(HttpRequestHeader.Authorization, string.Format(CultureInfo.InvariantCulture, "{0} {1}", "Bearer", bearerToken));

Perform On-behalf-of (OBO) authentication

On-behalf-of authentication is relevant when your web application or service acts as a mediator between the user or application and your cluster.

In this scenario, an application is sent a Microsoft Entra access token for an arbitrary resource. Then, the application uses that token to acquire a new Microsoft Entra access token for the Azure Data Explorer resource. Then, the application can access your cluster on behalf of the principal indicated by the original Microsoft Entra access token. This flow is called the OAuth 2.0 on-behalf-of authentication flow. It generally requires multiple configuration steps with Microsoft Entra ID, and in some cases might require special consent from the administrator of the Microsoft Entra tenant.

To perform on-behalf-of authentication:

  1. Provision a Microsoft Entra application.

  2. In your server code, use MSAL to perform the token exchange.

    // appClientId, appTenantId, appKey, kustoUri, and tokenFromCaller are placeholder values
    var authClient = ConfidentialClientApplicationBuilder.Create(appClientId)
        .WithAuthority($"https://login.microsoftonline.com/{appTenantId}")
        .WithClientSecret(appKey)
        .Build();

    var result = authClient.AcquireTokenOnBehalfOf(
        new[] { $"{kustoUri}/.default" }, // Define scopes for accessing your cluster
        new UserAssertion(tokenFromCaller) // The access token that was sent to your application
    ).ExecuteAsync().Result;
    var accessTokenForAdx = result.AccessToken;

  3. Use the token to run queries. For example:

    var request = WebRequest.Create(new Uri(kustoUri));
    request.Headers.Set(HttpRequestHeader.Authorization, string.Format(CultureInfo.InvariantCulture, "{0} {1}", "Bearer", accessTokenForAdx));
    

Perform Single Page Application (SPA) authentication

For authentication for a SPA web client, use the OAuth authorization code flow.

In this scenario, the app is redirected to sign in to Microsoft Entra ID. Then, Microsoft Entra ID redirects back to the app with an authorization code in the URI. Then, the app makes a request to the token endpoint to get the access token. The token is valid for 24 hours, during which the client can reuse it by acquiring the token silently.

Microsoft identity platform has detailed tutorials for different use cases such as React, Angular, and JavaScript.

To set up authentication for a web client:

  1. Provision a Microsoft Entra application.

  2. Configure the app as described in MSAL.js 2.0 with auth code flow.

  3. Use the MSAL.js 2.0 library to sign in a user and authenticate to your cluster. Microsoft identity platform has detailed tutorials for different use cases such as React, Angular, and JavaScript.

    The following example uses the MSAL.js library to access Azure Data Explorer.

    import * as msal from "@azure/msal-browser";
    
    const msalConfig = {
      auth: {
      },
    };
    
    const msalInstance = new msal.PublicClientApplication(msalConfig);
    const myAccounts = msalInstance.getAllAccounts();
    
    // If no account is logged in, redirect the user to log in.
    if (myAccounts === undefined || myAccounts.length === 0) {
      try {
        await msalInstance.loginRedirect({
          scopes: ["https://help.kusto.windows.net/.default"],
        });
      } catch (err) {
        console.error(err);
      }
    }
    const account = myAccounts[0];
    const name = account.name;
    window.document.getElementById("main").innerHTML = `Hi ${name}!`;
    
    // Get the access token required to access the specified Azure Data Explorer cluster.
    const accessTokenRequest = {
      account,
      scopes: ["https://help.kusto.windows.net/.default"],
    };
    let acquireTokenResult = undefined;
    try {
      acquireTokenResult = await msalInstance.acquireTokenSilent(accessTokenRequest);
    } catch (error) {
      if (error instanceof msal.InteractionRequiredAuthError) {
        await msalInstance.acquireTokenRedirect(accessTokenRequest);
      }
    }
    
    const accessToken = acquireTokenResult.accessToken;
    
    // Make requests to the specified cluster with the token in the Authorization header.
    const fetchResult = await fetch("https://help.kusto.windows.net/v2/rest/query", {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      method: "POST",
      body: JSON.stringify({
        db: "Samples",
        csl: "StormEvents | count",
      }),
    });
    const jsonResult = await fetchResult.json();
    
    // The following line extracts the first cell in the result data.
    

7.3 - How to ingest data with the REST API

This article describes how to ingest data without Kusto.Ingest library by using the REST API.

The Kusto.Ingest library is preferred for ingesting data to your database. However, you can achieve almost the same functionality without depending on the Kusto.Ingest package, by using queued ingestion to your database for production-grade pipelines. This article shows you how.

This article deals with the recommended mode of ingestion. For the Kusto.Ingest library, its corresponding entity is the IKustoQueuedIngestClient interface. Here, the client code interacts with your database by posting ingestion notification messages to an Azure queue. References to the messages are obtained from the Kusto Data Management (also known as the Ingestion) service. Interaction with the service must be authenticated with Microsoft Entra ID.

The following code shows how the Kusto Data Management service handles queued data ingestion without using the Kusto.Ingest library. This example may be useful if full .NET is inaccessible or unavailable because of the environment, or other restrictions.

The code includes the steps to create an Azure Storage client and upload the data to a blob. Each step is described in greater detail, after the sample code.

  1. Obtain an authentication token for accessing the ingestion service
  2. Query the ingestion service to obtain ingestion resources (queues and blob containers)
  3. Upload data to a blob on one of the blob containers obtained in (2)
  4. Compose an ingestion message that identifies the target database and table and that points to the blob from (3)
  5. Post the ingestion message composed in (4) to an ingestion queue obtained in (2)
  6. Retrieve any error found by the service during ingestion

// A container class for ingestion resources we are going to obtain
internal class IngestionResourcesSnapshot
{
    public IList<string> IngestionQueues { get; set; } = new List<string>();
    public IList<string> TempStorageContainers { get; set; } = new List<string>();
    public string FailureNotificationsQueue { get; set; } = string.Empty;
    public string SuccessNotificationsQueue { get; set; } = string.Empty;
}

public static void IngestSingleFile(string file, string db, string table, string ingestionMappingRef)
{
    // Your ingestion service URI
    var dmServiceBaseUri = @"{serviceURI}";
    // 1. Authenticate the interactive user (or application) to access Kusto ingestion service
    var bearerToken = AuthenticateInteractiveUser(dmServiceBaseUri);
    // 2a. Retrieve ingestion resources
    var ingestionResources = RetrieveIngestionResources(dmServiceBaseUri, bearerToken);
    // 2b. Retrieve Kusto identity token
    var identityToken = RetrieveKustoIdentityToken(dmServiceBaseUri, bearerToken);
    // 3. Upload file to one of the blob containers.
    // This example uses the first one, but when working with multiple blobs,
    // one should round-robin the containers in order to prevent throttling
    var blobName = $"TestData{DateTime.UtcNow:yyyy-MM-dd_HH-mm-ss.FFF}";
    var blobUriWithSas = UploadFileToBlobContainer(
        file, ingestionResources.TempStorageContainers.First(), blobName,
        out var blobSizeBytes
    );
    // 4. Compose ingestion command
    var ingestionMessage = PrepareIngestionMessage(db, table, blobUriWithSas, blobSizeBytes, ingestionMappingRef, identityToken);
    // 5. Post ingestion command to one of the previously obtained ingestion queues.
    // This example uses the first one, but when working with multiple blobs,
    // one should round-robin the queues in order to prevent throttling
    PostMessageToQueue(ingestionResources.IngestionQueues.First(), ingestionMessage);

    Thread.Sleep(20000);

    // 6a. Read success notifications
    var successes = PopTopMessagesFromQueue(ingestionResources.SuccessNotificationsQueue, 32);
    foreach (var sm in successes)
    {
        Console.WriteLine($"Ingestion completed: {sm}");
    }

    // 6b. Read failure notifications
    var errors = PopTopMessagesFromQueue(ingestionResources.FailureNotificationsQueue, 32);
    foreach (var em in errors)
    {
        Console.WriteLine($"Ingestion error: {em}");
    }
}

Using queued ingestion for production-grade pipelines

Obtain authentication evidence from Microsoft Entra ID

// Authenticates the interactive user and retrieves Microsoft Entra Access token for specified resource
internal static string AuthenticateInteractiveUser(string resource)
{
    // Create an authentication client for Microsoft Entra ID:
    // appClientId and appTenantId are placeholders for your app registration values
    var authClient = PublicClientApplicationBuilder.Create(appClientId)
        .WithAuthority($"https://login.microsoftonline.com/{appTenantId}")
        .WithRedirectUri("http://localhost")
        .Build();
    // Acquire user token for the interactive user:
    var result = authClient.AcquireTokenInteractive(
        new[] { $"{resource}/.default" } // Define scopes
    ).ExecuteAsync().Result;
    return result.AccessToken;
}

Retrieve ingestion resources

Manually construct an HTTP POST request to the Data Management service, requesting the return of the ingestion resources. These resources include queues that the DM service is listening on, and blob containers for data uploading. The Data Management service will process any messages containing ingestion requests that arrive on one of those queues.

// Retrieve ingestion resources (queues and blob containers) with SAS from specified ingestion service using supplied access token
internal static IngestionResourcesSnapshot RetrieveIngestionResources(string ingestClusterBaseUri, string accessToken)
{
    var ingestClusterUri = $"{ingestClusterBaseUri}/v1/rest/mgmt";
    var requestBody = "{ \"csl\": \".get ingestion resources\" }";
    var ingestionResources = new IngestionResourcesSnapshot();
    using var response = SendPostRequest(ingestClusterUri, accessToken, requestBody);
    using var sr = new StreamReader(response.GetResponseStream());
    using var jtr = new JsonTextReader(sr);
    var responseJson = JObject.Load(jtr);
    // Input queues
    var tokens = responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'SecuredReadyForAggregationQueue')]");
    foreach (var token in tokens)
    {
        ingestionResources.IngestionQueues.Add((string)token[1]);
    }
    // Temp storage containers
    tokens = responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'TempStorage')]");
    foreach (var token in tokens)
    {
        ingestionResources.TempStorageContainers.Add((string)token[1]);
    }
    // Failure notifications queue
    var singleToken =
        responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'FailedIngestionsQueue')].[1]").FirstOrDefault();
    ingestionResources.FailureNotificationsQueue = (string)singleToken;
    // Success notifications queue
    singleToken =
        responseJson.SelectTokens("Tables[0].Rows[?(@.[0] == 'SuccessfulIngestionsQueue')].[1]").FirstOrDefault();
    ingestionResources.SuccessNotificationsQueue = (string)singleToken;
    return ingestionResources;
}

// Executes a POST request on provided URI using supplied Access token and request body
internal static WebResponse SendPostRequest(string uriString, string authToken, string body)
{
    var request = WebRequest.Create(uriString);
    request.Method = "POST";
    request.ContentType = "application/json";
    request.ContentLength = body.Length;
    request.Headers.Set(HttpRequestHeader.Authorization, $"Bearer {authToken}");
    using var bodyStream = request.GetRequestStream();
    using (var sw = new StreamWriter(bodyStream))
    {
        sw.Write(body);
        sw.Flush();
    }
    bodyStream.Close();
    return request.GetResponse();
}

Obtain a Kusto identity token

Ingest messages are handed off to your cluster via a non-direct channel (Azure queue), making it impossible to do in-band authorization validation for accessing the ingestion service. The solution is to attach an identity token to every ingest message. The token enables in-band authorization validation. This signed token can then be validated by the ingestion service when it receives the ingestion message.

// Retrieves a Kusto identity token that will be added to every ingest message
internal static string RetrieveKustoIdentityToken(string ingestClusterBaseUri, string accessToken)
{
    var ingestClusterUri = $"{ingestClusterBaseUri}/v1/rest/mgmt";
    var requestBody = "{ \"csl\": \".get kusto identity token\" }";
    var jsonPath = "Tables[0].Rows[*].[0]";
    using var response = SendPostRequest(ingestClusterUri, accessToken, requestBody);
    using var sr = new StreamReader(response.GetResponseStream());
    using var jtr = new JsonTextReader(sr);
    var responseJson = JObject.Load(jtr);
    var identityToken = responseJson.SelectTokens(jsonPath).FirstOrDefault();
    return (string)identityToken;
}

Upload data to the Azure Blob container

This step uploads a local file to an Azure Blob, which is then handed off for ingestion. This code uses the Azure Storage SDK. If taking that dependency isn't possible, the same can be achieved with the Azure Blob Service REST API.

// Uploads a single local file to an Azure Blob container, returns blob URI and original data size
internal static string UploadFileToBlobContainer(string filePath, string blobContainerUri, string blobName, out long blobSize)
{
    var blobUri = new Uri(blobContainerUri);
    var blobContainer = new BlobContainerClient(blobUri);
    var blob = blobContainer.GetBlobClient(blobName);
    using (var stream = File.OpenRead(filePath))
    {
        blob.Upload(BinaryData.FromStream(stream)); // synchronous upload; blocking here keeps the method's signature simple
        blobSize = blob.GetProperties().Value.ContentLength;
    }
    return $"{blob.Uri.AbsoluteUri}{blobUri.Query}";
}

Compose the ingestion message

Use the Newtonsoft.Json package again to compose a valid ingestion request that identifies the target database and table and points to the blob. The message is posted to the Azure queue that the relevant Kusto Data Management service listens on.


internal static string PrepareIngestionMessage(string db, string table, string dataUri, long blobSizeBytes, string mappingRef, string identityToken)
{
    var message = new JObject
    {
        { "Id", Guid.NewGuid().ToString() },
        { "BlobPath", dataUri },
        { "RawDataSize", blobSizeBytes },
        { "DatabaseName", db },
        { "TableName", table },
        { "RetainBlobOnSuccess", true }, // Do not delete the blob on success
        { "FlushImmediately", true }, // Do not aggregate
        { "ReportLevel", 2 }, // Report failures and successes (might incur perf overhead)
        { "ReportMethod", 0 }, // Failures are reported to an Azure Queue
        {
            "AdditionalProperties", new JObject(
                new JProperty("authorizationContext", identityToken),
                new JProperty("mappingReference", mappingRef),
                // Data is in JSON format
                new JProperty("format", "multijson")
            )
        }
    };
    return message.ToString();
}

Post the ingestion message to the ingestion queue

Finally, post the message that you constructed to the ingestion queue that you previously obtained.

If you're using .NET storage client versions above v12, you must properly encode the message content.

internal static void PostMessageToQueue(string queueUriWithSas, string message)
{
    var queue = new QueueClient(new Uri(queueUriWithSas));
    queue.SendMessage(message);
}
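The encoding requirement mentioned above can be met by Base64-encoding the message text before sending it. A minimal sketch (Python is used here for illustration; the helper name is hypothetical, not part of any SDK):

```python
import base64

def encode_queue_message(message: str) -> str:
    """Base64-encode a queue message body. Newer storage SDK versions no
    longer encode messages automatically, so encode explicitly before
    posting to the ingestion queue."""
    return base64.b64encode(message.encode("utf-8")).decode("ascii")

encoded = encode_queue_message('{"Id": "1234"}')
```

Decoding the result with a standard Base64 decoder restores the original JSON text.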

Check for error messages from the Azure queue

After ingestion, check for failure messages from the relevant queue that the Data Management service writes to. For more information on the failure message structure, see Ingestion failure message structure.

// Pops up to 'count' failure messages from the queue and returns their text
internal static IEnumerable<string> PopTopMessagesFromQueue(string queueUriWithSas, int count)
{
    var queue = new QueueClient(new Uri(queueUriWithSas));
    var messagesFromQueue = queue.ReceiveMessages(maxMessages: count).Value;
    return messagesFromQueue.Select(m => m.MessageText); // requires System.Linq
}

Ingestion messages - JSON document formats

Ingestion message internal structure

The message that the Kusto Data Management service expects to read from the input Azure Queue is a JSON document in the following format.

{
    "Id": "<message ID (GUID)>",
    "BlobPath": "<blob URI, including SAS token>",
    "RawDataSize": <uncompressed size in bytes>,
    "DatabaseName": "<target database name>",
    "TableName": "<target table name>",
    "RetainBlobOnSuccess": <true|false>,
    "FlushImmediately": <true|false>,
    "ReportLevel": <0|1|2>,
    "ReportMethod": <0|1>,
    "AdditionalProperties": { "<property name>": "<property value>" }
}

  • Id: Message identifier (GUID).
  • BlobPath: Path (URI) to the blob, including the SAS key granting permissions to read/write/delete it. Permissions are required so that the ingestion service can delete the blob once it has completed ingesting the data.
  • RawDataSize: Size of the uncompressed data in bytes. Providing this value allows the ingestion service to optimize ingestion by potentially aggregating multiple blobs. This property is optional; if not given, the service will access the blob just to retrieve the size.
  • DatabaseName: Target database name.
  • TableName: Target table name.
  • RetainBlobOnSuccess: If set to true, the blob won't be deleted once ingestion is successfully completed. Default is false.
  • FlushImmediately: If set to true, any aggregation will be skipped. Default is false.
  • ReportLevel: Success/error reporting level: 0-Failures, 1-None, 2-All.
  • ReportMethod: Reporting mechanism: 0-Queue, 1-Table.
  • AdditionalProperties: Other properties such as format, tags, and creationTime. For more information, see data ingestion properties.

Ingestion failure message structure

The failure message that the Data Management service writes to the failure queue is a JSON document with the following properties.

  • OperationId: Operation identifier (GUID) that can be used to track the operation on the service side.
  • Database: Target database name.
  • Table: Target table name.
  • FailedOn: Failure timestamp.
  • IngestionSourceId: GUID identifying the data chunk that failed to ingest.
  • IngestionSourcePath: Path (URI) to the data chunk that failed to ingest.
  • Details: Failure message.
  • ErrorCode: The error code. For all the error codes, see Ingestion error codes.
  • FailureStatus: Indicates whether the failure is permanent or transient.
  • RootActivityId: The correlation identifier (GUID) that can be used to track the operation on the service side.
  • OriginatesFromUpdatePolicy: Indicates whether the failure was caused by an erroneous transactional update policy.
  • ShouldRetry: Indicates whether the ingestion could succeed if retried as is.
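Based on these fields, a client can decide whether a failed ingestion is worth retrying. A hedged sketch in Python (the field names are as documented above; the function itself is illustrative and not part of any SDK):

```python
import json

def should_retry_ingestion(failure_message: str) -> bool:
    """Decide whether to retry a failed ingestion.

    Retrying only makes sense when the failure is not permanent and the
    service indicates the ingestion could succeed if retried as is.
    """
    failure = json.loads(failure_message)
    is_transient = failure.get("FailureStatus") != "Permanent"
    return bool(is_transient and failure.get("ShouldRetry", False))

# Hypothetical failure message, shaped per the property list above.
sample = json.dumps({
    "OperationId": "00000000-0000-0000-0000-000000000000",
    "Database": "Samples",
    "Table": "StormEvents",
    "FailureStatus": "Transient",
    "ShouldRetry": True,
})
```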

7.4 - Query V2 HTTP response

This article describes Query V2 HTTP response.

HTTP response status line

If the request succeeds, the HTTP response status code is 200 OK. The HTTP response body is a JSON array, as explained below.

If the request fails, the HTTP response status code is a 4xx or 5xx error. The reason phrase will include additional information about the failure. The HTTP response body is a JSON object, as explained below.

HTTP response headers

Irrespective of the success/failure of the request, two custom HTTP headers are included with the response:

  1. x-ms-client-request-id: The service returns an opaque string that identifies the request/response pair for correlation purposes. If the request included a client request ID, its value will appear here; otherwise, some random string is returned.

  2. x-ms-activity-id: The service returns an opaque string that uniquely identifies the request/response pair for correlation purposes. Unlike x-ms-client-request-id, this identifier is not affected by any information in the request, and is unique per response.
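When correlating client-side logs with service-side telemetry, these two headers can be read off any response. A small sketch (plain dict handling, so no live endpoint is assumed; the helper name is illustrative):

```python
def correlation_ids(response_headers: dict) -> tuple:
    """Extract the two correlation headers from a response header map.

    HTTP header names are case-insensitive, so normalize keys first.
    """
    normalized = {k.lower(): v for k, v in response_headers.items()}
    return (normalized.get("x-ms-client-request-id"),
            normalized.get("x-ms-activity-id"))

# Example header map as a client library might surface it.
headers = {
    "X-Ms-Client-Request-Id": "MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1",
    "x-ms-activity-id": "9dcc4522-7b51-41db-a7ae-7c1bfe0696b2",
}
```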

HTTP response body (on request failure)

On request failure, the HTTP response body will be a JSON document formatted according to OneApiErrors rules. For a description of the OneApiErrors format, see section 7.10.2 of the Microsoft REST API Guidelines. Below is an example of such a failure.

{
    "error": {
        "code": "General_BadRequest",
        "message": "Request is invalid and cannot be executed.",
        "@type": "Kusto.Data.Exceptions.KustoBadRequestException",
        "@message": "Request is invalid and cannot be processed: Semantic error: SEM0100: 'table' operator: Failed to resolve table expression named 'aaa'",
        "@context": {
            "timestamp": "2023-04-18T12:59:27.4855445Z",
            "serviceAlias": "HELP",
            "machineName": "KEngine000000",
            "processName": "Kusto.WinSvc.Svc",
            "processId": 12580,
            "threadId": 10260,
            "clientRequestId": "Kusto.Cli;b90f4260-4eac-4574-a27a-3f302db21404",
            "activityId": "9dcc4522-7b51-41db-a7ae-7c1bfe0696b2",
            "subActivityId": "d0f30c8c-e6c6-45b6-9275-73dd6b379ecf",
            "activityType": "DN.FE.ExecuteQuery",
            "parentActivityId": "6e3c8dab-0aaf-4df5-85b5-fc20b0b29a84",
        },
        "@permanent": true,
        "@text": "aaa",
        "@database": "Samples",
        "@ClientRequestLogger": "",
        "innererror": {
            "code": "SEM0100",
            "message": "'table' operator: Failed to resolve table expression named 'aaa'",
            "@type": "Kusto.Data.Exceptions.SemanticException",
            "@message": "Semantic error: SEM0100: 'table' operator: Failed to resolve table expression named 'aaa'",
            "@context": {
                "timestamp": "2023-04-18T12:59:27.4855445Z",
                "serviceAlias": "HELP",
                "machineName": "KEngine000000",
                "processName": "Kusto.WinSvc.Svc",
                "processId": 12580,
                "threadId": 10260,
                "clientRequestId": "Kusto.Cli;b90f4260-4eac-4574-a27a-3f302db21404",
                "activityId": "9dcc4522-7b51-41db-a7ae-7c1bfe0696b2",
                "subActivityId": "d0f30c8c-e6c6-45b6-9275-73dd6b379ecf",
                "activityType": "DN.FE.ExecuteQuery",
                "parentActivityId": "6e3c8dab-0aaf-4df5-85b5-fc20b0b29a84",
            },
            "@permanent": true,
            "@errorCode": "SEM0100",
            "@errorMessage": "'table' operator: Failed to resolve table expression named 'aaa'"
        }
    }
}

HTTP response body (on request success)

On request success, the HTTP response body will be a JSON array that encodes the request results.

Logically, the V2 response describes a DataSet object which contains any number of Tables. These tables can represent the actual data requested, or additional information about the execution of the request (such as an accounting of the resources it consumed). Additionally, the request itself might fail (due to various conditions) even though a 200 OK status is returned; in that case, the response includes partial response data plus an indication of the errors.

Physically, the response body's JSON array is a list of JSON objects, each of which is called a frame. The DataSet object is encoded into two frames: DataSetHeader and DataSetCompletion. The first is always the first frame, and the second is always the last frame. Between them are the frames describing the Table objects.

The Table objects can be encoded in two ways:

  1. As a single frame: DataTable. This is the default.

  2. Alternatively, as a “mix” of four kinds of frames: TableHeader (which comes first and describes the table), TableFragment (which describes a table's data), TableProgress (which is optional and provides an estimate of how far into the table's data we are), and TableCompletion (which is the last frame of the table).

The second case is called “progressive mode”, and will only appear if the client request property results_progressive_enabled is set to true. In this case, each TableFragment frame describes an update to the data accumulated by all previous such frames for the table, either as an append operation, or as a replace operation. (The latter is used, for example, when some long-running aggregation calculation is performed at the “top level” of the query, so an initial aggregation result is replaced by more accurate results later on.)
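The frame sequence described above can be consumed by dispatching on each frame's type. A simplified sketch in Python, handling only the non-progressive DataTable encoding; the frames are distinguished here by a FrameType key, which is how the protocol tags each frame object (the helper name is illustrative):

```python
import json

def collect_tables(response_body: str) -> dict:
    """Walk a V2 response (a JSON array of frames) and collect table rows
    keyed by table name. Only the non-progressive DataTable encoding is
    handled in this sketch; progressive frames would need TableHeader/
    TableFragment/TableProgress/TableCompletion handling as well.
    """
    tables = {}
    for frame in json.loads(response_body):
        kind = frame.get("FrameType")
        if kind == "DataSetHeader":
            assert not frame.get("IsProgressive"), "sketch handles non-progressive only"
        elif kind == "DataTable":
            tables[frame["TableName"]] = frame["Rows"]
        elif kind == "DataSetCompletion" and frame.get("HasErrors"):
            raise RuntimeError(frame.get("OneApiErrors"))
    return tables

# Minimal synthetic response, shaped per the frame descriptions below.
body = json.dumps([
    {"FrameType": "DataSetHeader", "Version": "v2.0", "IsProgressive": False},
    {"FrameType": "DataTable", "TableId": 1, "TableKind": "PrimaryResult",
     "TableName": "PrimaryResult",
     "Columns": [{"ColumnName": "Text", "ColumnType": "string"}],
     "Rows": [["Hello, World!"]]},
    {"FrameType": "DataSetCompletion", "HasErrors": False, "Cancelled": False},
])
```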

DataSetHeader

The DataSetHeader frame is always the first in the dataset and appears exactly once.

{
    "Version": string,
    "IsProgressive": Boolean
}

Where:

  • Version is the protocol version. The current version is v2.0.

  • IsProgressive is a boolean flag that indicates whether this dataset contains progressive frames. A progressive frame is one of:

    Frame            Description
    TableHeader      Contains general information about the table
    TableFragment    Contains a rectangular data shard of the table
    TableProgress    Contains the progress in percent (0-100)
    TableCompletion  Indicates that this frame is the last one

    The frames above describe a table. If the IsProgressive flag isn’t set to true, then every table in the set will be serialized using a single frame:

  • DataTable: Contains all the information that the client needs about a single table in the dataset.

TableHeader

Queries that are made with the results_progressive_enabled option set to true may include this frame. Following this table, clients can expect an interleaving sequence of TableFragment and TableProgress frames. The final frame of the table is TableCompletion.

{
    "TableId": Number,
    "TableKind": string,
    "TableName": string,
    "Columns": Array,
}

Where:

  • TableId is the table’s unique ID.

  • TableKind is one of:

    • PrimaryResult
    • QueryCompletionInformation
    • QueryTraceLog
    • QueryPerfLog
    • TableOfContents
    • QueryProperties
    • QueryPlan
    • Unknown
  • TableName is the table’s name.

  • Columns is an array describing the table’s schema.

{
    "ColumnName": string,
    "ColumnType": string,
}

Supported column types are described here.

TableFragment

The TableFragment frame contains a rectangular data fragment of the table. In addition to the actual data, this frame also contains a TableFragmentType property that tells the client what to do with the fragment. The fragment can be appended to existing fragments, or replace them.

{
    "TableId": Number,
    "FieldCount": Number,
    "TableFragmentType": string,
    "Rows": Array
}

Where:

  • TableId is the table’s unique ID.

  • FieldCount is the number of columns in the table.

  • TableFragmentType describes what the client should do with this fragment. TableFragmentType is one of:

    • DataAppend
    • DataReplace
  • Rows is a two-dimensional array that contains the fragment data.
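The two fragment types can be applied to an accumulating row buffer as follows (a sketch; the accumulator shape is an assumption of this example, not part of the protocol):

```python
def apply_fragment(rows: list, fragment: dict) -> list:
    """Apply a TableFragment to the rows accumulated so far.

    DataAppend extends the accumulated rows; DataReplace discards them
    and starts over with the fragment's rows.
    """
    if fragment["TableFragmentType"] == "DataReplace":
        return list(fragment["Rows"])
    if fragment["TableFragmentType"] == "DataAppend":
        return rows + list(fragment["Rows"])
    raise ValueError(f"unknown fragment type: {fragment['TableFragmentType']}")

rows = []
rows = apply_fragment(rows, {"TableFragmentType": "DataAppend", "Rows": [[1], [2]]})
rows = apply_fragment(rows, {"TableFragmentType": "DataReplace", "Rows": [[42]]})
```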

TableProgress

The TableProgress frame can interleave with the TableFragment frame described above. Its sole purpose is to notify the client of the query’s progress.

{
    "TableId": Number,
    "TableProgress": Number,
}

Where:

  • TableId is the table’s unique ID.
  • TableProgress is the progress in percent (0–100).

TableCompletion

The TableCompletion frame marks the end of the table transmission. No more frames related to that table will be sent.

{
    "TableId": Number,
    "RowCount": Number,
}

Where:

  • TableId is the table’s unique ID.
  • RowCount is the total number of rows in the table.

DataTable

Queries that are issued with the EnableProgressiveQuery flag set to false won't include any of the frames (TableHeader, TableFragment, TableProgress, and TableCompletion). Instead, each table in the dataset will be transmitted using the DataTable frame that contains all the information that the client needs to read the table.

{
    "TableId": Number,
    "TableKind": string,
    "TableName": string,
    "Columns": Array,
    "Rows": Array,
}

Where:

  • TableId is the table’s unique ID.

  • TableKind is one of:

    • PrimaryResult
    • QueryCompletionInformation
    • QueryTraceLog
    • QueryPerfLog
    • QueryProperties
    • QueryPlan
    • Unknown
  • TableName is the table’s name.

  • Columns is an array describing the table’s schema, and includes:

{
    "ColumnName": string,
    "ColumnType": string,
}
  • Rows is a two-dimensional array that contains the table’s data.

The meaning of tables in the response

  • PrimaryResult - The main tabular result of the query. For each tabular expression statement, one or more tables are generated in-order, representing the results produced by the statement. There can be multiple such tables because of batches and fork operators.
  • QueryCompletionInformation - Provides additional information about the execution of the query itself, such as whether it completed successfully or not, and what were the resources consumed by the query (similar to the QueryStatus table in the v1 response).
  • QueryProperties - Provides additional values such as client visualization instructions (emitted, for example, to reflect the information in the render operator) and database cursor information.
  • QueryTraceLog - The performance trace log information (returned when perftrace in client request properties is set to true).

DataSetCompletion

The DataSetCompletion frame is the final one in the dataset.

{
    "HasErrors": Boolean,
    "Cancelled": Boolean,
    "OneApiErrors": Array,
}

Where:

  • HasErrors is true if there were errors while generating the dataset.
  • Cancelled is true if the request that led to the generation of the dataset was canceled before completion.
  • OneApiErrors is only returned if HasErrors is true. For a description of the OneApiErrors format, see section 7.10.2 here.

7.5 - Query/management HTTP request

This article describes Query/management HTTP request.

Request verb and resource

Action        HTTP verb    HTTP resource
Query         GET          /v1/rest/query
Query         POST         /v1/rest/query
Query v2      GET          /v2/rest/query
Query v2      POST         /v2/rest/query
Management    POST         /v1/rest/mgmt

For example, to send a management command (“management”) to a service endpoint, use the following request line:

POST https://help.kusto.windows.net/v1/rest/mgmt HTTP/1.1

See Request headers and Body to learn what to include.

Request headers

The following table contains the common headers used for query and management operations.

  • Accept (Required): The media types the client receives. Set to application/json.
  • Accept-Encoding (Optional): The supported content encodings. Supported encodings are gzip and deflate.
  • Authorization (Required): The authentication credentials. For more information, see authentication.
  • Connection (Optional): Whether the connection stays open after the operation. The recommendation is to set Connection to Keep-Alive.
  • Content-Length (Optional): The size of the request body. Specify the request body length when known.
  • Content-Type (Required): The media type of the request body. Set to application/json with charset=utf-8.
  • Expect (Optional): The expected response from the server. It can be set to 100-Continue.
  • Host (Required): The qualified domain name that the request was sent to. For example, help.kusto.windows.net.

The following table contains the common custom headers used for query and management operations. Unless noted, these headers are used only for telemetry purposes and don’t affect functionality.

All headers are optional. However, we recommend specifying the x-ms-client-request-id custom header. In some scenarios, such as canceling a running query, x-ms-client-request-id is required since it's used to identify the request.

  • x-ms-app: The friendly name of the application making the request.
  • x-ms-user: The friendly name of the user making the request.
  • x-ms-user-id: The same friendly name as x-ms-user.
  • x-ms-client-request-id: A unique identifier for the request.
  • x-ms-client-version: The friendly version identifier for the client making the request.
  • x-ms-readonly: If specified, forces the request to run in read-only mode, which prevents the request from changing data.

Request parameters

The following parameters can be passed in the request. They’re encoded in the request as query parameters or as part of the body, depending on whether GET or POST is used.

  • csl (Required): The text of the query or management command to execute.
  • properties (Optional): Request properties that modify how the request is processed and its results. For more information, see Request properties.

GET query parameters

When a GET request is used, the query parameters specify the request parameters.
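For example, the csl parameter must be URL-encoded into the query string. A sketch using Python's standard library (the cluster URL is the help cluster used elsewhere in this article; the helper name is illustrative):

```python
from urllib.parse import urlencode

def build_get_query_url(cluster: str, csl: str) -> str:
    """Build a GET query URL with the request parameters URL-encoded
    into the query string, per the v1 query endpoint."""
    return f"{cluster}/v1/rest/query?{urlencode({'csl': csl})}"

url = build_get_query_url("https://help.kusto.windows.net",
                          'print Test="Hello, World!"')
```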

Body

When a POST request is used, the body of the request contains a single UTF-8 encoded JSON document, which includes the values of the request parameters.

Examples

The following example shows the HTTP POST request for a query.

POST https://help.kusto.windows.net/v2/rest/query HTTP/1.1

Request headers

Accept: application/json
Authorization: Bearer ...AzureActiveDirectoryAccessToken...
Accept-Encoding: deflate
Content-Type: application/json; charset=utf-8
Host: help.kusto.windows.net
x-ms-client-request-id: MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1
x-ms-user-id: EARTH\davidbg
x-ms-app: MyApp

Request body

{
  "db":"Samples",
  "csl":"print Test=\"Hello, World!\"",
  "properties":"{\"Options\":{\"queryconsistency\":\"strongconsistency\"},\"Parameters\":{},\"ClientRequestId\":\"MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1\"}"
}

The following example shows how to create a request that sends the previous query, using curl.

  1. Obtain a token for authentication.

    Replace AAD_TENANT_NAME_OR_ID, AAD_APPLICATION_ID, and AAD_APPLICATION_KEY with the relevant values, after setting up Microsoft Entra application authentication.

    curl "https://login.microsoftonline.com/AAD_TENANT_NAME_OR_ID/oauth2/token" \
      -F "grant_type=client_credentials" \
      -F "resource=https://help.kusto.windows.net" \
      -F "client_id=AAD_APPLICATION_ID" \
      -F "client_secret=AAD_APPLICATION_KEY"
    

    This code snippet provides you with the bearer token.

    {
      "token_type": "Bearer",
      "expires_in": "3599",
      "ext_expires_in":"3599", 
      "expires_on":"1578439805",
      "not_before":"1578435905",
      "resource":"https://help.kusto.windows.net",
      "access_token":"eyJ0...uXOQ"
    }
    
  2. Use the bearer token in your request to the query endpoint.

    curl -d '{"db":"Samples","csl":"print Test=\"Hello, World!\"","properties":"{\"Options\":{\"queryconsistency\":\"strongconsistency\"}}"}' \
    -H "Accept: application/json" \
    -H "Authorization: Bearer eyJ0...uXOQ" \
    -H "Content-Type: application/json; charset=utf-8" \
    -H "Host: help.kusto.windows.net" \
    -H "x-ms-client-request-id: MyApp.Query;e9f884e4-90f0-404a-8e8b-01d883023bf1" \
    -H "x-ms-user-id: EARTH\davidbg" \
    -H "x-ms-app: MyApp" \
    -X POST https://help.kusto.windows.net/v2/rest/query
    
  3. Read the response according to the response status codes.

Set client request properties and query parameters

In the following request body example, the query in the csl field declares two parameters named n and d. The values for those query parameters are specified within the Parameters field under the properties field in the request body. The Options field defines client request properties.

{
    "db": "Samples",
    "csl": "declare query_parameters (n:long, d:dynamic); StormEvents | where State in (d) | top n by StartTime asc",
    "properties": {
        "Options": {
            "maxmemoryconsumptionperiterator": 68719476736,
            "max_memory_consumption_per_query_per_node": 68719476736,
            "servertimeout": "50m"
        },
        "Parameters": {
            "n": 10, "d": "dynamic([\"ATLANTIC SOUTH\"])"
        }
    }
}

For more information, see Supported request properties.

Send show database caching policy command

The following example sends a request to show the Samples database caching policy.


{
    "db": "Samples",
    "csl": ".show database Samples policy caching",
    "properties": {
        "Options": {
            "maxmemoryconsumptionperiterator": 68719476736,
            "max_memory_consumption_per_query_per_node": 68719476736,
            "servertimeout": "50m"
        }
    }
}

7.6 - Query/management HTTP response

This article describes Query/management HTTP response.

Response status

The HTTP response status line follows the HTTP standard response codes. For example, code 200 indicates success.

The following status codes are currently in use, although any valid HTTP code may be returned.

Code   Subcode            Description
100    Continue           Client can continue to send the request.
200    OK                 Request started processing successfully.
400    BadRequest         Request is badly formed and failed (permanently).
401    Unauthorized       Client needs to authenticate first.
403    Forbidden          Client request is denied.
404    NotFound           Request references a non-existing entity.
413    PayloadTooLarge    Request payload exceeded limits.
429    TooManyRequests    Request has been denied because of throttling.
504    Timeout            Request has timed out.
520    ServiceError       Service found an error while processing the request.

Response headers

The following custom headers will be returned.

  • x-ms-client-request-id: The unique request identifier sent in the request header of the same name, or some unique identifier.
  • x-ms-activity-id: A globally unique correlation identifier for the request. It's created by the service.

Response body

If the status code is 200, the response body is a JSON document that encodes the query or management command’s results as a sequence of rectangular tables. See below for details.

If the status code indicates a 4xx or a 5xx error, other than 401, the response body is a JSON document that encodes the details of the failure. For more information, see Microsoft REST API Guidelines.

JSON encoding of a sequence of tables

The JSON encoding of a sequence of tables is a single JSON property bag with the following name/value pairs.

  • Tables: An array of the Table property bag.

The Table property bag has the following name/value pairs.

  • TableName: A string that identifies the table.
  • Columns: An array of the Column property bag.
  • Rows: An array of the Row array.

The Column property bag has the following name/value pairs.

  • ColumnName: A string that identifies the column.
  • DataType: A string that provides the approximate .NET type of the column.
  • ColumnType: A string that provides the scalar data type of the column.

The Row array has the same order as the respective Columns array; each of its elements is the row's value for the corresponding column. Scalar data types that can't be represented in JSON, such as datetime and timespan, are represented as JSON strings.

The following example shows one possible such object, when it contains a single table called Table_0 that has a single column Text of type string, and a single row.

{
    "Tables": [{
        "TableName": "Table_0",
        "Columns": [{
            "ColumnName": "Text",
            "DataType": "String",
            "ColumnType": "string"
        }],
        "Rows": [["Hello, World!"]]
    }]
}
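A table in this shape can be turned into a list of records by zipping each row with the column names. A minimal sketch (the helper name is illustrative):

```python
import json

def rows_as_dicts(table: dict) -> list:
    """Pair each row's values with the corresponding ColumnName,
    relying on the Row array having the same order as Columns."""
    names = [c["ColumnName"] for c in table["Columns"]]
    return [dict(zip(names, row)) for row in table["Rows"]]

# The example response from above.
response = json.loads("""
{
    "Tables": [{
        "TableName": "Table_0",
        "Columns": [{"ColumnName": "Text", "DataType": "String", "ColumnType": "string"}],
        "Rows": [["Hello, World!"]]
    }]
}
""")
```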

Another example:

Screenshot showing the tree view of a JSON file that contains an array of Table objects.

The meaning of tables in the response

In most cases, management commands return a result with a single table, containing the information generated by the management command. For example, the .show databases command returns a single table with the details of all accessible databases.

Queries generally return multiple tables. For each tabular expression statement, one or more tables are generated in-order, representing the results produced by the statement.

Three tables are often produced:

  • An @ExtendedProperties table that provides additional values, such as client visualization instructions (information provided by the render operator), information about the query's effective database cursor, or information about the query's effective use of the query results cache.

    For queries sent using the v1 protocol, the table has a single column of type string, whose value is a JSON-encoded string, such as:

    Value
    {"Visualization":"piechart",…}
    {"Cursor":"637239957206013576"}

    For queries sent using the v2 protocol, the table has three columns: (1) An integer column called TableId indicating which table in the results set the record applies to; (2) A string column called Key indicating the kind of information provided by the record (possible values: Visualization, ServerCache, and Cursor); (3) A dynamic column called Value providing the Key-determined information.

    TableId  Key            Value
    1        ServerCache    {"OriginalStartedOn":"2021-06-11T07:48:34.6201025Z",…}
    1        Visualization  {"Visualization":"piechart",…}
  • A QueryStatus table that provides additional information about the execution of the query itself, such as whether it completed successfully or not, and what were the resources consumed by the query.

    This table has the following structure:

    Timestamp                    Severity  SeverityName  StatusCode  StatusDescription             Count  RequestId  ActivityId  SubActivityId  ClientActivityId
    2020-05-02 06:09:12.7052077  4         Info          0           Query completed successfully  1

    Severity values of 2 or smaller indicate failure.

  • A TableOfContents table, which is created last, and lists the other tables in the results.

    An example for this table is:

    Ordinal  Kind             Name                 Id                                    PrettyName
    0        QueryResult      PrimaryResult        db9520f9-0455-4cb5-b257-53068497605a
    1        QueryProperties  @ExtendedProperties  908901f6-5319-4809-ae9e-009068c267c7
    2        QueryStatus      QueryStatus          00000000-0000-0000-0000-000000000000

7.7 - Request properties

This article describes request properties.

Request properties control how a query or command executes and returns results.

Supported request properties

The following table overviews the supported request properties.

| Property name | Type | Description |
|---|---|---|
| best_effort | bool | If set to true, allows fuzzy resolution and connectivity issues of data sources (union legs). The set of union sources is reduced to the set of table references that exist and are accessible at the time of execution. If at least one accessible table is found, the query executes. Any failure yields a warning in the query status results but doesn’t prevent the query from executing. |
| client_max_redirect_count | long | Controls the maximum number of HTTP redirects the client follows during processing. |
| client_results_reader_allow_varying_row_widths | bool | If set to true, the results reader tolerates tables whose row width varies across rows. |
| deferpartialqueryfailures | bool | If set to true, suppresses reporting of partial query failures within the result set. |
| max_memory_consumption_per_query_per_node | long | Overrides the default maximum amount of memory a query can allocate per node. |
| maxmemoryconsumptionperiterator | long | Overrides the default maximum amount of memory a query operator can allocate. |
| maxoutputcolumns | long | Overrides the default maximum number of columns a query is allowed to produce. |
| norequesttimeout | bool | Sets the request timeout to its maximum value. This option can’t be modified as part of a set statement. |
| notruncation | bool | Disables truncation of query results returned to the caller. |
| push_selection_through_aggregation | bool | If set to true, allows pushing simple selection through aggregation. |
| query_bin_auto_at | literal | Specifies the start value to use when evaluating the bin_auto() function. |
| query_bin_auto_size | literal | Specifies the bin size value to use when evaluating the bin_auto() function. |
| query_cursor_after_default | string | Sets the default parameter value for the cursor_after() function when called without parameters. |
| query_cursor_before_or_at_default | string | Sets the default parameter value for the cursor_before_or_at() function when called without parameters. |
| query_cursor_current | string | Overrides the cursor value returned by the cursor_current() function. |
| query_cursor_disabled | bool | Disables the usage of cursor functions within the query context. |
| query_cursor_scoped_tables | dynamic | Lists table names to be scoped to cursor_after_default .. cursor_before_or_at() (upper bound is optional). |
| query_datascope | string | Controls the data to which the query applies. Supported values are default, all, or hotcache. |
| query_datetimescope_column | string | Specifies the column name for the query’s datetime scope (query_datetimescope_to / query_datetimescope_from). |
| query_datetimescope_from | datetime | Sets the minimum date and time limit for the query scope. If defined, it serves as an automatically applied filter on query_datetimescope_column. |
| query_datetimescope_to | datetime | Sets the maximum date and time limit for the query scope. If defined, it serves as an automatically applied filter on query_datetimescope_column. |
| query_distribution_nodes_span | int | Controls the behavior of subquery merge. The executing node introduces an extra level in the query hierarchy for each subgroup of nodes, and this option sets the subgroup size. |
| query_fanout_nodes_percent | int | Specifies the percentage of nodes for executing fan-out. |
| query_fanout_threads_percent | int | Specifies the percentage of threads for executing fan-out. |
| query_force_row_level_security | bool | If set to true, enforces row level security rules, even if the policy is disabled. |
| query_language | string | Determines how the query text should be interpreted. Supported values are csl, kql, or sql. This option can’t be modified as part of a set statement. |
| query_log_query_parameters | bool | Enables query parameters logging for later viewing in the .show queries journal. |
| query_max_entities_in_union | long | Overrides the default maximum number of entities a query is allowed to reference in a union. |
| query_now | datetime | Overrides the datetime value returned by the now() function. |
| query_optimize_fts_at_relop | bool | When set to true, enables an experimental optimization for queries that perform costly free-text search operations. For instance, `where * has "pattern"`. |
| query_python_debug | bool or int | If set to true, generates a Python debug query for the enumerated Python node. |
| query_results_apply_getschema | bool | If set to true, retrieves the schema of each tabular data in the results of the query instead of the data itself. |
| query_results_cache_force_refresh | bool | If set to true, forces a cache refresh of query results for a specific query. Must be used in combination with query_results_cache_max_age, and sent via the Kusto Data ClientRequestProperties class, not as a set statement. |
| query_results_cache_max_age | timespan | Controls the maximum age of the cached query results that the service is allowed to return. |
| query_results_cache_per_shard | bool | If set to true, enables per extent query caching. |
| query_results_progressive_row_count | long | Provides a hint for how many records to send in each update. Takes effect only if results_progressive_enabled is set. |
| query_results_progressive_update_period | timespan | Provides a hint for how often to send progress frames. Takes effect only if results_progressive_enabled is set. |
| query_take_max_records | long | Limits query results to a specified number of records. |
| query_weakconsistency_session_id | string | Sets the query weak consistency session ID. Takes effect when queryconsistency mode is set to weakconsistency_by_session_id. This option can’t be modified as part of a set statement. |
| queryconsistency | string | Controls query consistency. Supported values are strongconsistency, weakconsistency, weakconsistency_by_query, weakconsistency_by_database, or weakconsistency_by_session_id. When using weakconsistency_by_session_id, ensure to also set the query_weakconsistency_session_id property. This option can’t be modified as part of a set statement. |
| request_app_name | string | Specifies the request application name to be used in reporting. For example, .show queries. This option can’t be modified as part of a set statement. |
| request_block_row_level_security | bool | If set to true, blocks access to tables with row level security policy enabled. |
| request_callout_disabled | bool | If set to true, prevents request callout to a user-provided service. |
| request_description | string | Allows inclusion of arbitrary text as the request description. |
| request_external_data_disabled | bool | If set to true, prevents the request from accessing external data using the externaldata operator or external tables. |
| request_external_table_disabled | bool | If set to true, prevents the request from accessing external tables. |
| request_impersonation_disabled | bool | If set to true, indicates that the service shouldn’t impersonate the caller’s identity. |
| request_readonly | bool | If set to true, prevents write access for the request. This option can’t be modified as part of a set statement. |
| request_readonly_hardline | bool | If set to true, the request operates in a strict read-only mode. The request isn’t able to write anything, and any noncompliant functionality, such as plugins, is disabled. This option can’t be modified as part of a set statement. |
| request_remote_entities_disabled | bool | If set to true, prevents the request from accessing remote databases and remote entities. |
| request_sandboxed_execution_disabled | bool | If set to true, prevents the request from invoking code in the sandbox. |
| request_user | string | Specifies the request user to be used in reporting. For example, .show queries. This option can’t be modified as part of a set statement. |
| results_error_reporting_placement | string | Determines the placement of errors in the result set. Options are in_data, end_of_table, and end_of_dataset. |
| results_progressive_enabled | bool | If set to true, enables the progressive query stream. This option can’t be modified as part of a set statement. |
| results_v2_fragment_primary_tables | bool | Causes primary tables to be sent in multiple fragments, each containing a subset of the rows. This option can’t be modified as part of a set statement. |
| results_v2_newlines_between_frames | bool | Adds new lines between frames in the results, in order to make it easier to parse them. |
| servertimeout | timespan | Overrides the default request timeout. This option can’t be modified as part of a set statement. Instead, modify the option using the dashboard settings. |
| truncation_max_records | long | Overrides the default maximum number of records a query is allowed to return to the caller (truncation). |
| truncationmaxsize | long | Overrides the default maximum data size a query is allowed to return to the caller (truncation). This option can’t be modified as part of a set statement. |
| validatepermissions | bool | Validates the user’s permissions to perform the query without actually running the query. Possible results for this property are: OK (permissions are present and valid), Incomplete (validation couldn’t be completed due to dynamic schema evaluation), or KustoRequestDeniedException (permissions weren’t set). |

How to set request properties

You can set request properties in the following ways:

  • With a set statement as part of the request text, for properties that support it.
  • Programmatically, using the ClientRequestProperties class of the client libraries.
  • In the properties slot of the REST API request body.
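As a minimal sketch, a REST API request body with request properties in the properties slot can be assembled as follows. The helper name, database, and query values are illustrative; the notruncation and servertimeout properties are taken from the table above.

```python
import json

def build_request_body(database, query, options=None):
    """Build a Kusto REST API request body, placing request
    properties under properties.Options."""
    body = {"db": database, "csl": query}
    if options:
        body["properties"] = {"Options": options}
    return json.dumps(body)

# Example: disable truncation and set a five-minute server timeout.
body = build_request_body(
    "Samples",
    "StormEvents | take 10",
    options={"notruncation": True, "servertimeout": "00:05:00"},
)
print(body)
```

The same Options dictionary maps directly onto what the ClientRequestProperties class serializes on your behalf when you use a client library instead of raw HTTP.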

7.8 - REST API overview

This article describes how to use the REST API.

This article describes how to interact with your cluster over HTTPS.

Supported actions

The available actions for an endpoint depend on whether it’s an engine endpoint or a data management endpoint. In the Azure portal cluster overview, the engine endpoint is identified as the Cluster URI and the data management endpoint as the Data ingestion URI.

| Action | HTTP verb | URI template | Engine | Data Management | Authentication |
|---|---|---|---|---|---|
| Query | GET or POST | /v1/rest/query | Yes | No | Yes |
| Query | GET or POST | /v2/rest/query | Yes | No | Yes |
| Management | POST | /v1/rest/mgmt | Yes | Yes | Yes |
| StreamIngest | POST | /v1/rest/ingest | Yes | No | Yes |
| UI | GET | / | Yes | No | No |
| UI | GET | /{dbname} | Yes | No | No |

Where Action represents a group of related activities:

  • The Query action sends a query to the service and gets back the results of the query.
  • The Management action sends a management command to the service and gets back the results of the management command.
  • The StreamIngest action ingests data to a table.
  • The UI action starts up a desktop client or web client through an HTTP Redirect response, which the user then uses to interact with the service.
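As a sketch, a Query action request against the v2 endpoint can be assembled as follows. The cluster URI, database, query, and bearer token are illustrative placeholders; the request is built but not sent.

```python
import json
import urllib.request

cluster = "https://help.kusto.windows.net"  # placeholder cluster URI
body = json.dumps({"db": "Samples", "csl": "print 123"}).encode("utf-8")

req = urllib.request.Request(
    url=f"{cluster}/v2/rest/query",
    data=body,
    method="POST",
    headers={
        "Accept": "application/json",
        "Content-Type": "application/json; charset=utf-8",
        "Authorization": "Bearer <access_token>",  # placeholder token
    },
)
# urllib.request.urlopen(req) would send the request; omitted here.
print(req.get_method(), req.full_url)
```

The same body shape works against /v1/rest/query; only the response framing differs between the v1 and v2 protocols.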

7.9 - Send T-SQL queries over RESTful web API

This article describes how to use T-SQL queries with the RESTful web API.

This article describes how to use a subset of the T-SQL language to send T-SQL queries via the REST API.

Request structure

To send T-SQL queries to the API, create a POST request with the following components.

To copy your URI, in the Azure portal, go to your cluster’s overview page, and then select the URI. Replace <your_cluster> with your Azure Data Explorer cluster name.

To copy your URI, see Copy a KQL database URI.

  • Headers: Set the Accept and Content-Type headers as follows.

```
Accept:application/json
Content-Type:application/json; charset=utf-8
```
  • Body: Set the csl property to the text of your T-SQL query, and the client request property query_language to sql.

    {
        "properties": {
            "Options": {
                "query_language": "sql"
            }
        }
    }
    

Example

The following example shows a request body with a T-SQL query in the csl field and the query_language client request property set to sql.

{
    "db": "MyDatabase",
    "csl": "SELECT top(10) * FROM MyTable",
    "properties": {
        "Options": {
            "query_language": "sql"
        }
    }
}

The response is in a format similar to the following.

{
    "Tables": [
        {
            "TableName": "Table_0",
            "Columns": [
                {
                    "ColumnName": "rf_id",
                    "DataType": "String",
                    "ColumnType": "string"
                },
                ...
            ],
            "Rows": [
                [
                    "b9b84d3451b4d3183d0640df455399a9",
                    ...
                ],
                ...
            ]
        }
    ]
}
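A short sketch of turning such a v1 response into per-row dictionaries by zipping each table's Columns with its Rows. The payload below is an abbreviated copy of the example response above.

```python
import json

# Abbreviated v1 response, as shown in the example above.
response = json.loads("""
{
  "Tables": [
    {"TableName": "Table_0",
     "Columns": [{"ColumnName": "rf_id", "DataType": "String", "ColumnType": "string"}],
     "Rows": [["b9b84d3451b4d3183d0640df455399a9"]]}
  ]
}
""")

for table in response["Tables"]:
    names = [c["ColumnName"] for c in table["Columns"]]
    # Pair each row's values with the column names, in order.
    records = [dict(zip(names, row)) for row in table["Rows"]]
    print(table["TableName"], records)
```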

7.10 - Streaming ingestion HTTP request

This article describes Streaming ingestion HTTP request.

Request verb and resource

| Action | HTTP verb | HTTP resource |
|---|---|---|
| Ingest | POST | /v1/rest/ingest/{database}/{table}?{additional parameters} |

Request parameters

| Parameter | Description | Required/Optional |
|---|---|---|
| {database} | Name of the target database for the ingestion request | Required |
| {table} | Name of the target table for the ingestion request | Required |

Additional parameters

Additional parameters are formatted as URL query {name}={value} pairs, separated by the & character.

| Parameter | Description | Required/Optional |
|---|---|---|
| streamFormat | Specifies the format of the data in the request body. The value should be one of: CSV, TSV, SCsv, SOHsv, PSV, JSON, MultiJSON, Avro. For more information, see Supported Data Formats. | Required |
| mappingName | The name of the pre-created ingestion mapping defined on the table. For more information, see Data Mappings. The way to manage pre-created mappings on the table is described here. | Optional, but required if streamFormat is one of JSON, MultiJSON, or Avro |

For example, to ingest CSV-formatted data into table Logs in database Test, use:

POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Csv HTTP/1.1

To ingest JSON-formatted data with pre-created mapping mylogmapping, use:

POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Json&mappingName=mylogmapping HTTP/1.1

Request headers

The following table contains the common headers for query and management operations.

| Standard header | Description | Required/Optional |
|---|---|---|
| Accept | Set this value to application/json. | Optional |
| Accept-Encoding | Supported encodings are gzip and deflate. | Optional |
| Authorization | See authentication. | Required |
| Connection | Enable Keep-Alive. | Optional |
| Content-Length | Specify the request body length, when known. | Optional |
| Content-Encoding | Set to gzip when the body is gzip-compressed. | Optional |
| Expect | Set to 100-Continue. | Optional |
| Host | Set to the domain name to which you sent the request (such as help.kusto.windows.net). | Required |

The following table contains the common custom headers for query and management operations. Unless otherwise indicated, the headers are for telemetry purposes only, and have no functionality impact.

| Custom header | Description | Required/Optional |
|---|---|---|
| x-ms-app | The (friendly) name of the application making the request. | Optional |
| x-ms-user | The (friendly) name of the user making the request. | Optional |
| x-ms-user-id | Same as x-ms-user. | Optional |
| x-ms-client-request-id | A unique identifier for the request. | Optional |
| x-ms-client-version | The (friendly) version identifier for the client making the request. Required in scenarios where it’s used to identify the request, such as canceling a running query. | Optional/Required |

Body

The body is the actual data to be ingested. The textual formats should use UTF-8 encoding.

Examples

The following example shows the HTTP POST request for ingesting JSON content:

POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Json&mappingName=mylogmapping HTTP/1.1

Request headers:

Authorization: Bearer ...AzureActiveDirectoryAccessToken...
Accept-Encoding: deflate
Accept-Encoding: gzip
Connection: Keep-Alive
Content-Length: 161
Host: help.kusto.windows.net
x-ms-client-request-id: MyApp.Ingest;5c0656b9-37c9-4e3a-a671-5f83e6843fce
x-ms-user-id: alex@contoso.com
x-ms-app: MyApp

Request body:

{"Timestamp":"2018-11-14 11:34","Level":"Info","EventText":"Nothing Happened"}
{"Timestamp":"2018-11-14 11:35","Level":"Error","EventText":"Something Happened"}

The following example shows the HTTP POST request for ingesting the same compressed data.

POST https://help.kusto.windows.net/v1/rest/ingest/Test/Logs?streamFormat=Json&mappingName=mylogmapping HTTP/1.1

Request headers:

Authorization: Bearer ...AzureActiveDirectoryAccessToken...
Accept-Encoding: deflate
Accept-Encoding: gzip
Connection: Keep-Alive
Content-Length: 116
Content-Encoding: gzip
Host: help.kusto.windows.net
x-ms-client-request-id: MyApp.Ingest;5c0656b9-37c9-4e3a-a671-5f83e6843fce
x-ms-user-id: alex@contoso.com
x-ms-app: MyApp

Request body:

... binary data ...
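The compressed body in the example above can be produced by gzip-compressing the same JSON lines shown in the uncompressed example. A minimal sketch:

```python
import gzip

# The same two JSON records as in the uncompressed example.
lines = (
    '{"Timestamp":"2018-11-14 11:34","Level":"Info","EventText":"Nothing Happened"}\n'
    '{"Timestamp":"2018-11-14 11:35","Level":"Error","EventText":"Something Happened"}\n'
)

# gzip-compress the UTF-8 body; remember to also set the
# Content-Encoding: gzip request header when sending it.
body = gzip.compress(lines.encode("utf-8"))
print(len(body), "bytes after compression")
```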

7.11 - UI deep links

This article describes UI deep links.

UI deep links are URIs that, when opened in a web browser, result in automatically opening a UI tool (such as Kusto.Explorer or Kusto.WebExplorer) in a way that preselects the desired Kusto cluster (and optionally database).

For example, when a user selects https://help.kusto.windows.net/Samples?query=print%20123, Kusto.WebExplorer opens to the help.kusto.windows.net cluster, selects the Samples database as the default database, and runs the associated query.

UI deep links work by having the user browser receive a redirect response when issuing a GET request to the URI, and depend on the browser settings to allow processing of this redirection. (For example, a UI deep link to Kusto.Explorer requires the browser to be configured to allow ClickOnce applications to start.)

The UI deep link must be a valid URI, and has the following format:

https:// Cluster / [DatabaseName] [? Parameters]

Where:

  • Cluster is the base address of the cluster itself. This part is mandatory, but can be overridden by specifying the query parameter uri in Parameters.

  • DatabaseName is the name of the database in Cluster to use as the database in scope. If this property isn’t set, the UI tool decides which database to use, if at all. (If a query or a command is specified by Parameters, the recommendation is for the correct value for DatabaseName to be included in the URI.)

  • Parameters can be used to specify other parameters to control the behavior of the UI deep link. Parameters that are supported by all Kusto “official” UI tools are indicated in the following table. Tool-specific parameters are noted later on in this document.

    | Parameter | Description |
    |---|---|
    | web | Selects the UI tool. By default, or if set to 1, Kusto.WebExplorer is used. If set to 0, Kusto.Explorer is used. If set to 3, Kusto.WebExplorer is used with no preexisting tabs. |
    | query | The text of the query or management command to start with when opening the UI tool. |
    | querysrc | A URI pointing at a web resource that holds the text of the query or management command to start with when opening the UI tool. |
    | name | The name of the connection to the cluster. |
    | autorun | If set to false, requires that the user actively run the query instead of autorunning it when the link is clicked. |

    The value of query can use standard HTTP query parameter encoding. Alternatively, it can be encoded using the transformation base64(gzip(text)), which makes it possible to compress long queries or management commands to fit in the default browser URI length limits.

Examples

Here are a few examples for links:

  • https://help.kusto.windows.net/: When a user agent (such as a browser) issues a GET / request, it’s redirected to the default UI tool configured to query the help cluster.
  • https://help.kusto.windows.net/Samples: When a user agent (such as a browser) issues a GET /Samples request, it’s redirected to the default UI tool configured to query the Samples database of the help cluster.
  • http://help.kusto.windows.net/Samples?query=StormEvents: When a user agent (such as a browser) issues a GET /Samples?query=StormEvents request, it’s redirected to the default UI tool configured to query the Samples database of the help cluster, and the StormEvents query is issued.

Deep linking to Kusto.Explorer

This REST API performs redirection that installs and runs the Kusto.Explorer desktop client tool with specially crafted startup parameters that open a connection to a specific cluster and execute a query against that cluster.

See Deep-linking with Kusto.Explorer for a description of the redirect URI syntax for starting up Kusto.Explorer.

Deep linking to Kusto.WebExplorer

In addition to the query parameters already mentioned, the following parameters might appear in UI deep links to Kusto.WebExplorer:

| Parameter | Description |
|---|---|
| login_hint | Sets the user sign-in name (email) of the user. |
| tenant | Sets the Microsoft Entra tenant ID of the user. |

To instruct Kusto.WebExplorer to sign in a user from another Microsoft Entra tenant, specify login_hint and tenant for the user.

Redirection is to the following URI:

https:// BaseAddress /clusters/ Cluster [/databases/ DatabaseName] [? Parameters]

Specifying the query or management command in the URI

When the URI query string parameter query is specified, it must be encoded according to standard URL query string encoding rules. Alternatively, the text of the query or management command can be compressed by gzip and then encoded with base64. This method allows you to send longer queries or management commands, since it results in shorter URIs.
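The two encodings can be sketched as follows. The query text is illustrative; only the encoding steps matter here.

```python
import base64
import gzip
import urllib.parse

query = "StormEvents | summarize count() by State | top 5 by count_"

# Option 1: standard URL query string encoding of the raw text.
plain = urllib.parse.quote(query)

# Option 2: base64(gzip(text)), which yields shorter URIs for long
# queries or management commands.
compressed = base64.b64encode(gzip.compress(query.encode("utf-8"))).decode("ascii")

print("plain:", plain)
print("gzip+base64:", compressed)
```

Either value can then be placed in the query parameter of the deep link; the UI tool detects and reverses the gzip/base64 transformation.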

Specifying the query or management command by indirection

If the query or management command is long, even encoding it using gzip/base64 might exceed the maximum URI length of the user agent. In that case, use the URI query string parameter querysrc, whose value is a short URI pointing at a web resource that holds the query or management command text.

For example, this value can be the URI for a file hosted by Azure Blob Storage.