ai_chat_completion_prompt plugin (preview)
Prerequisites
- An Azure OpenAI Service resource with at least the Cognitive Services OpenAI User role assigned to the identity being used.
- A Callout Policy configured to allow calls to AI services.
- When using managed identity to access Azure OpenAI Service, configure the Managed Identity Policy to allow communication with the service.
Syntax
```
evaluate ai_chat_completion_prompt (Prompt, ConnectionString [, Options [, IncludeErrorMessages]])
```
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| Prompt | string | ✔️ | The prompt for generating chat completions. The value can be a column reference or a constant scalar. |
| ConnectionString | string | ✔️ | The connection string for the language model, in the format <ModelDeploymentUri>;<AuthenticationMethod>, where <ModelDeploymentUri> is the AI model deployment URI and <AuthenticationMethod> is the authentication method. |
| Options | dynamic | | The options that control calls to the chat model endpoint. See Options. |
| IncludeErrorMessages | bool | | Indicates whether to output errors in a new column of the output table. Default value: false. |
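For illustration only, the ConnectionString format above can be pictured as two parts joined by a semicolon. The following Python sketch (the function name is hypothetical, not part of the plugin) assembles a connection string from a deployment URI and an authentication method:

```python
def build_connection_string(deployment_uri: str, auth_method: str) -> str:
    """Join the model deployment URI and the authentication method with ';',
    matching the <ModelDeploymentUri>;<AuthenticationMethod> format above."""
    return f"{deployment_uri};{auth_method}"

uri = "https://myaccount.openai.azure.com/openai/deployments/gpt4o/chat/completions?api-version=2024-06-01"
print(build_connection_string(uri, "managed_identity=system"))
```

The authentication method part is, for example, managed_identity=system or impersonate, as shown in the Examples section.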
Options
The following table describes the options that control the way the requests are made to the chat model endpoint.
| Name | Type | Description |
|---|---|---|
| RetriesOnThrottling | int | Specifies the number of retry attempts when throttling occurs. Default value: 0. |
| GlobalTimeout | timespan | Specifies the maximum time to wait for a response from the AI chat model. Default value: null. |
| ModelParameters | dynamic | Parameters specific to the AI chat model. Possible values: temperature, top_p, stop, max_tokens, max_completion_tokens, presence_penalty, frequency_penalty, user, seed. Any other specified model parameters are ignored. Default value: null. |
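The note that unrecognized model parameters are ignored behaves like a whitelist filter. A minimal Python sketch of that rule (the filtering happens inside the plugin; this function is purely illustrative):

```python
# The model parameters the plugin recognizes, per the table above.
ALLOWED_MODEL_PARAMETERS = {
    "temperature", "top_p", "stop", "max_tokens", "max_completion_tokens",
    "presence_penalty", "frequency_penalty", "user", "seed",
}

def filter_model_parameters(params: dict) -> dict:
    """Keep only recognized model parameters; anything else is silently dropped."""
    return {k: v for k, v in params.items() if k in ALLOWED_MODEL_PARAMETERS}

print(filter_model_parameters({"temperature": 0.7, "typo_param": 1}))
# {'temperature': 0.7}
```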
Configure Callout Policy
The azure_openai callout policy enables external calls to Azure AI services.
To configure the callout policy to authorize the AI model endpoint domain:
.alter-merge cluster policy callout
```
[
{
"CalloutType": "azure_openai",
"CalloutUriRegex": "https://[A-Za-z0-9-]{3,63}\.(?:openai\\.azure\\.com|cognitiveservices\\.azure\\.com|cognitive\\.microsoft\\.com|services\\.ai\\.azure\\.com)(?:/.*)?",
"CanCall": true
}
]
```
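To sanity-check which endpoints the CalloutUriRegex above authorizes, the pattern can be exercised directly, for example in Python (the raw pattern below is the JSON value with its escaping removed; this check is illustrative and runs outside the cluster):

```python
import re

# The CalloutUriRegex from the policy above, unescaped from JSON.
CALLOUT_URI_REGEX = (
    r"https://[A-Za-z0-9-]{3,63}\."
    r"(?:openai\.azure\.com|cognitiveservices\.azure\.com"
    r"|cognitive\.microsoft\.com|services\.ai\.azure\.com)(?:/.*)?"
)

def is_authorized(uri: str) -> bool:
    """Return True if the URI matches the callout policy pattern."""
    return re.fullmatch(CALLOUT_URI_REGEX, uri) is not None

print(is_authorized("https://myaccount.openai.azure.com/openai/deployments/gpt4o"))  # True
print(is_authorized("https://example.com/openai"))  # False
```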
Configure Managed Identity
When using managed identity to access Azure OpenAI Service, you must configure the Managed Identity policy to allow the system-assigned managed identity to authenticate to Azure OpenAI Service.
To configure the managed identity:
.alter-merge cluster policy managed_identity
```
[
{
"ObjectId": "system",
"AllowedUsages": "AzureAI"
}
]
```
Returns
Returns the following new chat completion columns:
- A column with the _chat_completion suffix that contains the chat completion values.
- If configured to return errors, a column with the _chat_completion_error suffix, which contains error strings or is left empty if the operation is successful.
Depending on the input type, the plugin returns different results:
- Column reference: Returns one or more records with additional columns prefixed by the reference column name. For example, if the input column is named PromptData, the output columns are named PromptData_chat_completion and, if configured to return errors, PromptData_chat_completion_error.
- Constant scalar: Returns a single record with additional columns that are not prefixed. The column names are _chat_completion and, if configured to return errors, _chat_completion_error.
Examples
The following examples generate a chat completion for the prompt 'Provide a summary of AI capabilities' using the Azure OpenAI chat completion model.
Managed Identity
```
let prompt = 'Provide a summary of AI capabilities';
let connectionString = 'https://myaccount.openai.azure.com/openai/deployments/gpt4o/chat/completions?api-version=2024-06-01;managed_identity=system';
evaluate ai_chat_completion_prompt(prompt, connectionString)
```
Impersonation
```
let prompt = 'Provide a summary of AI capabilities';
let connectionString = 'https://myaccount.openai.azure.com/openai/deployments/gpt4o/chat/completions?api-version=2024-06-01;impersonate';
evaluate ai_chat_completion_prompt(prompt, connectionString)
```
The following example sends a separate prompt for each row of the input table to the Azure OpenAI chat completion model.
Managed Identity
```
let connectionString = 'https://myaccount.openai.azure.com/openai/deployments/gpt4o/chat/completions?api-version=2024-06-01;managed_identity=system';
let options = dynamic({
    "RetriesOnThrottling": 1,
    "GlobalTimeout": 2m,
    "ModelParameters": {
        "temperature": 0.7
    }
});
datatable(Prompt: string)
[
    "Provide a summary of AI capabilities",
    "What is the answer to everything?",
    "What is 42?"
]
| evaluate ai_chat_completion_prompt(Prompt, connectionString, options, true)
```
Impersonation
```
let connectionString = 'https://myaccount.openai.azure.com/openai/deployments/gpt4o/chat/completions?api-version=2024-06-01;impersonate';
let options = dynamic({
    "RetriesOnThrottling": 1,
    "GlobalTimeout": 2m,
    "ModelParameters": {
        "temperature": 0.7
    }
});
datatable(Prompt: string)
[
    "Provide a summary of AI capabilities",
    "What is the answer to everything?",
    "What is 42?"
]
| evaluate ai_chat_completion_prompt(Prompt, connectionString, options, true)
```