@@ -5,23 +5,23 @@ description: Latest data plane inference documentation generated from OpenAPI 3.
manager: nitinme
ms.service: azure-ai-openai
ms.topic: include
-ms.date: 07/09/2024
+ms.date: 11/01/2024
---
## Completions
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/completions?api-version=2024-10-21
```
-Creates a completion for the provided prompt, parameters and chosen model.
+Creates a completion for the provided prompt, parameters, and chosen model.
### URI Parameters
| Name | In | Required | Type | Description |
|------|------|----------|------|-----------|
| endpoint | path | Yes | string<br>url | Supported Azure OpenAI endpoints (protocol and hostname, for example: `https://aoairesource.openai.azure.com`. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
-| deployment-id | path | Yes | string | Deployment id of the model which was deployed. |
+| deployment-id | path | Yes | string | Deployment ID of the model which was deployed. |
| api-version | query | Yes | string | API version |
### Request Header
@@ -36,53 +36,33 @@ Creates a completion for the provided prompt, parameters and chosen model.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| prompt | string or array | The prompt(s) to generate completions for, encoded as a string or array of strings.<br>Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt isn'tspecified the model will generate as if from the beginning of a new document. Maximum allowed size of string list is 2048. | No | |
-| max_tokens | integer | The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). Has minimum of 0. | No | 16 |
-| temperature | number | What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (arg max sampling) for ones with a well-defined answer.<br>We generally recommend altering this or top_p but not both. | No | 1 |
-| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br>We generally recommend altering this or temperature but not both. | No | 1 |
-| logit_bias | object | Defaults to null. Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256":-100} to prevent the <|endoftext|> token from being generated. | No | |
-| user | string | A unique identifier representing your end-user, which can help monitoring and detecting abuse | No | |
-| n | integer | How many completions to generate for each prompt. Minimum of 1 and maximum of 128 allowed.<br>Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. | No | 1 |
-| stream | boolean | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. | No | False |
-| logprobs | integer | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.<br>Minimum of 0 and maximum of 5 allowed. | No | None |
-| suffix | string | The suffix that comes after a completion of inserted text. | No | |
-| echo | boolean | Echo back the prompt in addition to the completion | No | False |
-| stop | string or array | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | No | |
-| completion_config | string | | No | |
-| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
-| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
-| best_of | integer | Generates best_of completions server-side and returns the "best" (defined as the one with the highest log probability per token). Results can't be streamed.<br>When used with n, best_of controls the number of candidate completions and n specifies how many to return - best_of must be greater than n.<br>Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. Has maximum value of 128. | No | |
+| prompt | string or array | The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.<br><br>Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document.<br> | Yes | |
+| best_of | integer | Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results can't be streamed.<br><br>When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`.<br><br>**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.<br> | No | 1 |
+| echo | boolean | Echo back the prompt in addition to the completion.<br> | No | False |
+| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.<br> | No | 0 |
+| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion.<br><br>Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.<br><br>As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.<br> | No | None |
+| logprobs | integer | Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the five most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.<br><br>The maximum value for `logprobs` is 5.<br> | No | None |
+| max_tokens | integer | The maximum number of tokens that can be generated in the completion.<br><br>The token count of your prompt plus `max_tokens` can't exceed the model's context length. | No | 16 |
+| n | integer | How many completions to generate for each prompt.<br><br>**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.<br> | No | 1 |
+| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.<br> | No | 0 |
+| seed | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.<br><br>Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.<br> | No | |
+| stop | string or array | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence.<br> | No | |
+| stream | boolean | Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
+| suffix | string | The suffix that comes after a completion of inserted text.<br><br>This parameter is only supported for `gpt-3.5-turbo-instruct`.<br> | No | None |
+| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br><br>We generally recommend altering this or `top_p` but not both.<br> | No | 1 |
+| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both.<br> | No | 1 |
+| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse.<br> | No | |
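
For instance, a minimal sketch of a request that combines several of the parameters above (the prompt text and parameter values are illustrative only):

```HTTP
POST https://{endpoint}/openai/deployments/{deployment-id}/completions?api-version=2024-10-21

{
 "prompt": "Write a tagline for an ice cream shop.",
 "max_tokens": 32,
 "temperature": 0.7,
 "n": 2,
 "stop": ["\n"],
 "logit_bias": { "50256": -100 },
 "seed": 42,
 "user": "user-1234"
}
```
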
### Responses
-| Name | Type | Description | Required | Default |
-|------|------|-------------|----------|---------|
-| id | string | | Yes | |
-| object | string | | Yes | |
-| created | integer | | Yes | |
-| model | string | | Yes | |
-| prompt_filter_results | [promptFilterResults](#promptfilterresults) | Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. | No | |
-| choices | array | | Yes | |
-| usage | object | | No | |
-
-
-### Properties for usage
-
-#### completion_tokens
-
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| completion_tokens | number | | |
-| prompt_tokens | number | | |
-| total_tokens | number | | |
-
**Status Code:** 200
**Description**: OK
|**Content-Type**|**Type**|**Description**|
|:---|:---|:---|
-|application/json | object | |
+|application/json | [createCompletionResponse](#createcompletionresponse) | Represents a completion response from the API. Note: both the streamed and nonstreamed response objects share the same shape (unlike the chat endpoint). |
**Status Code:** default
@@ -96,10 +76,10 @@ Creates a completion for the provided prompt, parameters and chosen model.
### Example
-Creates a completion for the provided prompt, parameters and chosen model.
+Creates a completion for the provided prompt, parameters, and chosen model.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/completions?api-version=2024-10-21
{
"prompt": [
@@ -139,7 +119,7 @@ Status Code: 200
## Embeddings
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version=2024-10-21
```
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
@@ -164,7 +144,7 @@ Get a vector representation of a given input that can be easily consumed by mach
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| input | string or array | Input text to get embeddings for, encoded as a string. To get embeddings for multiple inputs in a single request, pass an array of strings. Each array must not exceed 2048 inputs in length.<br>Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present. | Yes | |
+| input | string or array | Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8,192 tokens for `text-embedding-ada-002`), can't be an empty string, and any array must be 2,048 dimensions or less.| Yes | |
| user | string | A unique identifier representing your end-user, which can help monitor and detect abuse. | No | |
| input_type | string | The input type of embedding search to use. | No | |
| encoding_format | string | The format to return the embeddings in. Can be either `float` or `base64`. Defaults to `float`. | No | |
@@ -210,7 +190,7 @@ Get a vector representation of a given input that can be easily consumed by mach
Return the embeddings for a given prompt.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version=2024-10-21
{
"input": [
@@ -248,6 +228,29 @@ Status Code: 200
0.020790534,
0.00074595667,
0.008397198,
+ -0.00535031,
+ 0.008968075,
+ 0.014351576,
+ -0.014086051,
+ 0.015055214,
+ -0.022211088,
+ -0.025198232,
+ 0.0065186154,
+ -0.036350243,
+ 0.009180495,
+ -0.009698266,
+ 0.009446018,
+ -0.008463579,
+ -0.0040426035,
+ -0.03443847,
+ -0.00091273896,
+ -0.0019217303,
+ 0.002349888,
+ -0.021560553,
+ 0.016515596,
+ -0.015572986,
+ 0.0038666942,
+ -8.432463e-05
]
}
],
@@ -259,10 +262,10 @@ Status Code: 200
}
```
-## Chat completions
+## Chat completions
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21
```
Creates a completion for the chat message
@@ -272,7 +275,7 @@ Creates a completion for the chat message
| Name | In | Required | Type | Description |
|------|------|----------|------|-----------|
| endpoint | path | Yes | string<br>url | Supported Azure OpenAI endpoints (protocol and hostname, for example: `https://aoairesource.openai.azure.com`. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
-| deployment-id | path | Yes | string | Deployment id of the model which was deployed. |
+| deployment-id | path | Yes | string | Deployment ID of the model which was deployed. |
| api-version | query | Yes | string | API version |
### Request Header
@@ -287,35 +290,28 @@ Creates a completion for the chat message
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br>We generally recommend altering this or `top_p` but not both. | No | 1 |
-| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br>We generally recommend altering this or `temperature` but not both. | No | 1 |
-| stream | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
-| stop | string or array | Up to 4 sequences where the API will stop generating further tokens. | No | |
-| max_tokens | integer | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). | No | 4096 |
-| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
-| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
-| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | |
-| user | string | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse. | No | |
-| messages | array | A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb). | No | |
+| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br><br>We generally recommend altering this or `top_p` but not both.<br> | No | 1 |
+| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both.<br> | No | 1 |
+| stream | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
+| stop | string or array | Up to four sequences where the API will stop generating further tokens.<br> | No | |
+| max_tokens | integer | The maximum number of tokens that can be generated in the chat completion.<br><br>The total length of input tokens and generated tokens is limited by the model's context length.| No | |
+| max_completion_tokens | integer | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
+| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.<br> | No | 0 |
+| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.<br> | No | 0 |
+| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion.<br><br>Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.<br> | No | None |
+| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse.<br> | No | |
+| messages | array | A list of messages comprising the conversation so far. | Yes | |
| data_sources | array | The configuration entries for Azure OpenAI chat extensions that use them.<br> This additional specification is only compatible with Azure OpenAI. | No | |
-| n | integer | How many chat completion choices to generate for each input message. | No | 1 |
-| seed | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism isn'tguaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. | No | 0 |
-| logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model. | No | False |
-| top_logprobs | integer | An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. | No | |
-| response_format | object | An object specifying the format that the model must output. Used to enable JSON mode. | No | |
-| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. | No | |
-| tool_choice | [chatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function. | No | |
-| functions | array | Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for. | No | |
-| function_call | string or object | Deprecated in favor of `tool_choice`. Controls how the model responds to function calls. "none" means the model doesn't call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via `{"name":\ "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. | No | |
-
-
-### Properties for response_format
-
-#### Type
-
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| type | [chatCompletionResponseFormat](#chatcompletionresponseformat) | Setting to `json_object` enables JSON mode. This guarantees that the message the model generates is valid JSON. | text |
+| logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. | No | False |
+| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. | No | |
+| n | integer | How many chat completion choices to generate for each input message. Note that you'll be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. | No | 1 |
+| parallel_tool_calls | [ParallelToolCalls](#paralleltoolcalls) | Whether to enable parallel function calling during tool use. | No | True |
+| response_format | [ResponseFormatText](#responseformattext) or [ResponseFormatJsonObject](#responseformatjsonobject) or [ResponseFormatJsonSchema](#responseformatjsonschema) | An object specifying the format that the model must output. Compatible with [GPT-4o](/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models), [GPT-4o mini](/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models), [GPT-4 Turbo](/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models) and all [GPT-3.5](/azure/ai-services/openai/concepts/models#gpt-35) Turbo models newer than `gpt-3.5-turbo-1106`.<br><br>Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema.<br><br>Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br><br>**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.<br> | No | |
+| seed | integer | This feature is in Beta.<br>If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.<br>Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.<br> | No | |
+| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.<br> | No | |
+| tool_choice | [chatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) tool is called by the model. `none` means the model won't call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present. `auto` is the default if tools are present. | No | |
+| function_call | string or [chatCompletionFunctionCallOption](#chatcompletionfunctioncalloption) | Deprecated in favor of `tool_choice`.<br><br>Controls which (if any) function is called by the model.<br>`none` means the model won't call a function and instead generates a message.<br>`auto` means the model can pick between generating a message or calling a function.<br>Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present.<br> | No | |
+| functions | array | Deprecated in favor of `tools`.<br><br>A list of functions the model may generate JSON inputs for.<br> | No | |
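
As a sketch of how `tools` and `tool_choice` fit together, the following request defines a single hypothetical function, `get_current_weather`, that the model may choose to call (the function name and its JSON schema are illustrative, not part of the API):

```HTTP
POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21

{
 "messages": [
  {
   "role": "user",
   "content": "What is the weather like in Seattle today?"
  }
 ],
 "tools": [
  {
   "type": "function",
   "function": {
    "name": "get_current_weather",
    "description": "Get the current weather for a given city",
    "parameters": {
     "type": "object",
     "properties": {
      "location": {
       "type": "string",
       "description": "The city name, for example Seattle"
      }
     },
     "required": [ "location" ]
    }
   }
  }
 ],
 "tool_choice": "auto"
}
```

Similarly, a sketch of the `response_format` Structured Outputs option described in the table above; the `calendar_event` name and its schema are assumed for illustration, and any JSON Schema you supply would take their place:

```HTTP
POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21

{
 "messages": [
  { "role": "system", "content": "Extract the event details from the user's message." },
  { "role": "user", "content": "Team sync with Alex on Friday at 10am." }
 ],
 "response_format": {
  "type": "json_schema",
  "json_schema": {
   "name": "calendar_event",
   "strict": true,
   "schema": {
    "type": "object",
    "properties": {
     "title": { "type": "string" },
     "day": { "type": "string" },
     "time": { "type": "string" }
    },
    "required": [ "title", "day", "time" ],
    "additionalProperties": false
   }
  }
 }
}
```
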
### Responses
@@ -325,7 +321,7 @@ Creates a completion for the chat message
|**Content-Type**|**Type**|**Description**|
|:---|:---|:---|
-|application/json | [createChatCompletionResponse](#createchatcompletionresponse) | |
+|application/json | [createChatCompletionResponse](#createchatcompletionresponse) or [createChatCompletionStreamResponse](#createchatcompletionstreamresponse) | |
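
When `stream` is set to `true`, the 200 response is delivered as `createChatCompletionStreamResponse` chunks over server-sent events rather than as a single JSON object. An abridged, illustrative stream (IDs, timestamps, and the model name are placeholders) looks roughly like:

```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1717000000,"model":"gpt-4o","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1717000000,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Ahoy"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1717000000,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
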
**Status Code:** default
@@ -339,16 +335,16 @@ Creates a completion for the chat message
### Example
-Creates a completion for the provided prompt, parameters and chosen model.
+Creates a completion for the provided prompt, parameters, and chosen model.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21
{
"messages": [
{
"role": "system",
- "content": "you're a helpful assistant that talks like a pirate"
+ "content": "you are a helpful assistant that talks like a pirate"
},
{
"role": "user",
@@ -390,7 +386,7 @@ Status Code: 200
Creates a completion based on Azure Search data and system-assigned managed identity.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21
{
"messages": [
@@ -458,7 +454,7 @@ Status Code: 200
Creates a completion based on Azure Search vector data, previous assistant message, and user-assigned managed identity.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21
{
"messages": [
@@ -496,7 +492,7 @@ POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-
"in_scope": true,
"top_n_documents": 5,
"strictness": 3,
- "role_information": "you're an AI assistant that helps people find information.",
+ "role_information": "You are an AI assistant that helps people find information.",
"fields_mapping": {
"content_fields_separator": "\\n",
"content_fields": [
@@ -559,7 +555,7 @@ Status Code: 200
Creates a completion based on the provided Azure Cosmos DB data source.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/chat/completions?api-version=2024-10-21
{
"messages": [
@@ -636,10 +632,10 @@ Status Code: 200
}
```
-## Transcriptions
+## Transcriptions - Create
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-10-21
```
Transcribes audio into the input language.
@@ -649,7 +645,7 @@ Transcribes audio into the input language.
| Name | In | Required | Type | Description |
|------|------|----------|------|-----------|
| endpoint | path | Yes | string<br>url | Supported Azure OpenAI endpoints (protocol and hostname, for example: `https://aoairesource.openai.azure.com`. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
-| deployment-id | path | Yes | string | Deployment id of the whisper model. |
+| deployment-id | path | Yes | string | Deployment ID of the whisper model. |
| api-version | query | Yes | string | API version |
### Request Header
@@ -688,7 +684,7 @@ Transcribes audio into the input language.
Gets transcribed text and associated metadata from provided spoken audio data.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-10-21
```
@@ -707,7 +703,7 @@ Status Code: 200
Gets transcribed text and associated metadata from provided spoken audio data.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-10-21
"---multipart-boundary\nContent-Disposition: form-data; name=\"file\"; filename=\"file.wav\"\nContent-Type: application/octet-stream\n\nRIFF..audio.data.omitted\n---multipart-boundary--"
@@ -722,10 +718,10 @@ Status Code: 200
}
```
-## Translations
+## Translations - Create
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/audio/translations?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/audio/translations?api-version=2024-10-21
```
Transcribes and translates input audio into English text.
@@ -735,7 +731,7 @@ Transcribes and translates input audio into English text.
| Name | In | Required | Type | Description |
|------|------|----------|------|-----------|
| endpoint | path | Yes | string<br>url | Supported Azure OpenAI endpoints (protocol and hostname, for example: `https://aoairesource.openai.azure.com`. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
-| deployment-id | path | Yes | string | Deployment id of the whisper model which was deployed. |
+| deployment-id | path | Yes | string | Deployment ID of the whisper model which was deployed. |
| api-version | query | Yes | string | API version |
### Request Header
@@ -773,7 +769,7 @@ Transcribes and translates input audio into English text.
Gets English language transcribed text and associated metadata from provided spoken audio data.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/audio/translations?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/audio/translations?api-version=2024-10-21
"---multipart-boundary\nContent-Disposition: form-data; name=\"file\"; filename=\"file.wav\"\nContent-Type: application/octet-stream\n\nRIFF..audio.data.omitted\n---multipart-boundary--"
@@ -794,7 +790,7 @@ Status Code: 200
Gets English language transcribed text and associated metadata from provided spoken audio data.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/audio/translations?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/audio/translations?api-version=2024-10-21
"---multipart-boundary\nContent-Disposition: form-data; name=\"file\"; filename=\"file.wav\"\nContent-Type: application/octet-stream\n\nRIFF..audio.data.omitted\n---multipart-boundary--"
@@ -812,17 +808,17 @@ Status Code: 200
## Image generation
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/images/generations?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/images/generations?api-version=2024-10-21
```
-Generates a batch of images from a text caption on a given DALLE model deployment
+Generates a batch of images from a text caption on a given dall-e model deployment.
### URI Parameters
| Name | In | Required | Type | Description |
|------|------|----------|------|-----------|
| endpoint | path | Yes | string<br>url | Supported Azure OpenAI endpoints (protocol and hostname, for example: `https://aoairesource.openai.azure.com`. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
-| deployment-id | path | Yes | string | Deployment id of the `dall-e` model which was deployed. |
+| deployment-id | path | Yes | string | Deployment ID of the dall-e model which was deployed. |
| api-version | query | Yes | string | API version |
### Request Header
@@ -837,7 +833,7 @@ Generates a batch of images from a text caption on a given DALLE model deploymen
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| prompt | string | A text description of the desired image(s). The maximum length is 4000 characters. | Yes | |
+| prompt | string | A text description of the desired image(s). The maximum length is 4,000 characters. | Yes | |
| n | integer | The number of images to generate. | No | 1 |
| size | [imageSize](#imagesize) | The size of the generated images. | No | 1024x1024 |
| response_format | [imagesResponseFormat](#imagesresponseformat) | The format in which the generated images are returned. | No | url |
@@ -870,7 +866,7 @@ Generates a batch of images from a text caption on a given DALLE model deploymen
Creates images given a prompt.
```HTTP
-POST https://{endpoint}/openai/deployments/{deployment-id}/images/generations?api-version=2024-06-01
+POST https://{endpoint}/openai/deployments/{deployment-id}/images/generations?api-version=2024-10-21
{
"prompt": "In the style of WordArt, Microsoft Clippy wearing a cowboy hat.",
@@ -958,7 +954,7 @@ Status Code: 200
| message | string | | No | |
-### Error
+### error
@@ -1075,7 +1071,7 @@ Inner error with additional details.
|------|------|-------------|--------|
| URL | string | | |
-#### License
+#### license
| Name | Type | Description | Default |
|------|------|-------------|--------|
@@ -1113,7 +1109,7 @@ Information about the content filtering category (hate, sexual, violence, self_h
### contentFilterChoiceResults
-Information about the content filtering category (hate, sexual, violence, self_harm), if it has been detected, as well as the severity level (very_low, low, medium, high-scale that determines the intensity and risk level of harmful content) and if it has been filtered or not. Information about third-party text and profanity, if it has been detected, and if it has been filtered or not. And information about customer blocklist, if it has been filtered and its id.
+Information about the content filtering category (hate, sexual, violence, self_harm), if it has been detected, as well as the severity level (very_low, low, medium, high-scale that determines the intensity and risk level of harmful content) and if it has been filtered or not. Information about third party text and profanity, if it has been detected, and if it has been filtered or not. And information about customer blocklist, if it has been filtered and its id.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
@@ -1141,6 +1137,8 @@ Content filtering results for a single prompt in the request.
Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders.
+No properties defined for this component.
+
### dalleContentFilterResults
@@ -1177,166 +1175,213 @@ Information about the content filtering category (hate, sexual, violence, self_h
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br>We generally recommend altering this or `top_p` but not both. | No | 1 |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br>We generally recommend altering this or `temperature` but not both. | No | 1 |
| stream | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
-| stop | string or array | Up to 4 sequences where the API will stop generating further tokens. | No | |
-| max_tokens | integer | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). | No | 4096 |
+| stop | string or array | Up to four sequences where the API will stop generating further tokens. | No | |
+| max_tokens | integer | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). This value is now deprecated in favor of `max_completion_tokens`, and isn't compatible with o1 series models. | No | 4096 |
+| max_completion_tokens | integer | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | |
| user | string | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse. | No | |
-### createChatCompletionRequest
+### createCompletionRequest
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br>We generally recommend altering this or `top_p` but not both. | No | 1 |
-| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br>We generally recommend altering this or `temperature` but not both. | No | 1 |
-| stream | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
-| stop | string or array | Up to 4 sequences where the API will stop generating further tokens. | No | |
-| max_tokens | integer | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). | No | 4096 |
-| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
-| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
-| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | |
-| user | string | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse. | No | |
-| messages | array | A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb). | No | |
-| data_sources | array | The configuration entries for Azure OpenAI chat extensions that use them.<br> This additional specification is only compatible with Azure OpenAI. | No | |
-| n | integer | How many chat completion choices to generate for each input message. | No | 1 |
-| seed | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism isn'tguaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. | No | 0 |
-| logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model. | No | False |
-| top_logprobs | integer | An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. | No | |
-| response_format | object | An object specifying the format that the model must output. Used to enable JSON mode. | No | |
-| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. | No | |
-| tool_choice | [chatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function. | No | |
-| functions | array | Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for. | No | |
-| function_call | string or object | Deprecated in favor of `tool_choice`. Controls how the model responds to function calls. "none" means the model doesn't call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via `{"name":\ "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. | No | |
+| prompt | string or array | The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.<br><br>Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document.<br> | Yes | |
+| best_of | integer | Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results can't be streamed.<br><br>When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`.<br><br>**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.<br> | No | 1 |
+| echo | boolean | Echo back the prompt in addition to the completion.<br> | No | False |
+| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.<br> | No | 0 |
+| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion.<br><br>Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.<br><br>As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.<br> | No | None |
+| logprobs | integer | Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the five most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.<br><br>The maximum value for `logprobs` is 5.<br> | No | None |
+| max_tokens | integer | The maximum number of tokens that can be generated in the completion.<br><br>The token count of your prompt plus `max_tokens` can't exceed the model's context length. | No | 16 |
+| n | integer | How many completions to generate for each prompt.<br><br>**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.<br> | No | 1 |
+| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.<br> | No | 0 |
+| seed | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.<br><br>Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.<br> | No | |
+| stop | string or array | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence.<br> | No | |
+| stream | boolean | Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
+| suffix | string | The suffix that comes after a completion of inserted text.<br><br>This parameter is only supported for `gpt-3.5-turbo-instruct`.<br> | No | None |
+| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br><br>We generally recommend altering this or `top_p` but not both.<br> | No | 1 |
+| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both.<br> | No | 1 |
+| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse.<br> | No | |
-### Properties for response_format
+### createCompletionResponse
-#### Type
+Represents a completion response from the API. Note: both the streamed and nonstreamed response objects share the same shape (unlike the chat endpoint).
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| type | [chatCompletionResponseFormat](#chatcompletionresponseformat) | Setting to `json_object` enables JSON mode. This guarantees that the message the model generates is valid JSON. | text |
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| id | string | A unique identifier for the completion. | Yes | |
+| choices | array | The list of completion choices the model generated for the input prompt. | Yes | |
+| created | integer | The Unix timestamp (in seconds) of when the completion was created. | Yes | |
+| model | string | The model used for completion. | Yes | |
+| prompt_filter_results | [promptFilterResults](#promptfilterresults) | Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. | No | |
+| system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with.<br><br>Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.<br> | No | |
+| object | enum | The object type, which is always "text_completion"<br>Possible values: text_completion | Yes | |
+| usage | [completionUsage](#completionusage) | Usage statistics for the completion request. | No | |
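
To make the shape concrete, an abridged, illustrative response body following this schema might look like the following (IDs, timestamps, model name, and token counts are placeholders):

```json
{
  "id": "cmpl-1234",
  "object": "text_completion",
  "created": 1717000000,
  "model": "gpt-35-turbo-instruct",
  "choices": [
    {
      "text": " Scoops of happiness in every cone.",
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 10,
    "total_tokens": 19
  }
}
```
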
-### chatCompletionResponseFormat
-Setting to `json_object` enables JSON mode. This guarantees that the message the model generates is valid JSON.
+### createChatCompletionRequest
-**Description**: Setting to `json_object` enables JSON mode. This guarantees that the message the model generates is valid JSON.
-**Type**: string
-**Default**: text
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br><br>We generally recommend altering this or `top_p` but not both.<br> | No | 1 |
+| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both.<br> | No | 1 |
+| stream | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. | No | False |
+| stop | string or array | Up to four sequences where the API will stop generating further tokens.<br> | No | |
+| max_tokens | integer | The maximum number of tokens that can be generated in the chat completion.<br><br>The total length of input tokens and generated tokens is limited by the model's context length.| No | |
+| max_completion_tokens | integer | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
+| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.<br> | No | 0 |
+| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.<br> | No | 0 |
+| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion.<br><br>Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.<br> | No | None |
+| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse.<br> | No | |
+| messages | array | A list of messages comprising the conversation so far. | Yes | |
+| data_sources | array | The configuration entries for Azure OpenAI chat extensions that use them.<br> This additional specification is only compatible with Azure OpenAI. | No | |
+| logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. | No | False |
+| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. | No | |
+| n | integer | How many chat completion choices to generate for each input message. Note that you'll be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. | No | 1 |
+| parallel_tool_calls | [ParallelToolCalls](#paralleltoolcalls) | Whether to enable parallel function calling during tool use. | No | True |
+| response_format | [ResponseFormatText](#responseformattext) or [ResponseFormatJsonObject](#responseformatjsonobject) or [ResponseFormatJsonSchema](#responseformatjsonschema) | An object specifying the format that the model must output. Compatible with [GPT-4o](/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models), [GPT-4o mini](/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models), [GPT-4 Turbo](/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models) and all [GPT-3.5](/azure/ai-services/openai/concepts/models#gpt-35) Turbo models newer than `gpt-3.5-turbo-1106`.<br><br>Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema.<br><br>Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br><br>**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.<br> | No | |
+| seed | integer | This feature is in Beta.<br>If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.<br>Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.<br> | No | |
+| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.<br> | No | |
+| tool_choice | [chatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) tool is called by the model. `none` means the model won't call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present. `auto` is the default if tools are present. | No | |
+| function_call | string or [chatCompletionFunctionCallOption](#chatcompletionfunctioncalloption) | Deprecated in favor of `tool_choice`.<br><br>Controls which (if any) function is called by the model.<br>`none` means the model won't call a function and instead generates a message.<br>`auto` means the model can pick between generating a message or calling a function.<br>Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present.<br> | No | |
+| functions | array | Deprecated in favor of `tools`.<br><br>A list of functions the model may generate JSON inputs for.<br> | No | |
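+
+For illustration, a minimal request body that combines several of the parameters above. The values are placeholders, not recommendations; because `response_format` is set to JSON mode, the system message also instructs the model to produce JSON, as the `response_format` description requires.
+
+```json
+{
+  "messages": [
+    { "role": "system", "content": "You are a helpful assistant. Reply using JSON." },
+    { "role": "user", "content": "List three prime numbers." }
+  ],
+  "temperature": 0.7,
+  "max_tokens": 256,
+  "n": 1,
+  "stream": false,
+  "response_format": { "type": "json_object" },
+  "user": "end-user-1234"
+}
+```
+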
-**Enum Name**: ChatCompletionResponseFormat
-**Enum Values**:
+### chatCompletionFunctions
-| Value | Description |
-|-------|-------------|
-| text | Response format is a plain text string. |
-| json_object | Response format is a JSON object. |
-### chatCompletionFunction
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
+| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
+| parameters | [FunctionParameters](#functionparameters) | The parameters the function accepts, described as a JSON Schema object. See the [guide](/azure/ai-services/openai/how-to/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. <br><br>Omitting `parameters` defines a function with an empty parameter list. | No | |
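+
+As a sketch, a single function definition that follows this shape; the function name and schema are illustrative only.
+
+```json
+{
+  "name": "get_current_weather",
+  "description": "Get the current weather for a given location.",
+  "parameters": {
+    "type": "object",
+    "properties": {
+      "location": { "type": "string", "description": "City and country, for example: Paris, France" },
+      "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
+    },
+    "required": ["location"]
+  }
+}
+```
+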
+### chatCompletionFunctionCallOption
+
+Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.
+
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
-| description | string | The description of what the function does. | No | |
-| parameters | [chatCompletionFunctionParameters](#chatcompletionfunctionparameters) | The parameters the functions accepts, described as a JSON Schema object. See the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. | No | |
+| name | string | The name of the function to call. | Yes | |
-### chatCompletionFunctionParameters
+### chatCompletionRequestMessage
-The parameters the functions accepts, described as a JSON Schema object. See the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
-### chatCompletionRequestMessage
+This component can be one of the following:
+
+
+### chatCompletionRequestSystemMessage
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| role | [chatCompletionRequestMessageRole](#chatcompletionrequestmessagerole) | The role of the messages author. | Yes | |
+| content | string or array | The contents of the system message. | Yes | |
+| role | enum | The role of the messages author, in this case `system`.<br>Possible values: system | Yes | |
+| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
-### chatCompletionRequestMessageRole
+### chatCompletionRequestUserMessage
-The role of the messages author.
-**Description**: The role of the messages author.
-**Type**: string
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| content | string or array | The contents of the user message.<br> | Yes | |
+| role | enum | The role of the messages author, in this case `user`.<br>Possible values: user | Yes | |
+| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
-**Default**:
-**Enum Name**: ChatCompletionRequestMessageRole
+### chatCompletionRequestAssistantMessage
-**Enum Values**:
-| Value | Description |
-|-------|-------------|
-| system | The message author role is system. |
-| user | The message author role is user. |
-| assistant | The message author role is assistant. |
-| tool | The message author role is tool. |
-| function | Deprecated. The message author role is function. |
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| content | string or array | The contents of the assistant message. Required unless `tool_calls` or `function_call` is specified.<br> | No | |
+| refusal | string | The refusal message by the assistant. | No | |
+| role | enum | The role of the messages author, in this case `assistant`.<br>Possible values: assistant | Yes | |
+| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
+| tool_calls | [chatCompletionMessageToolCalls](#chatcompletionmessagetoolcalls) | The tool calls generated by the model, such as function calls. | No | |
+| function_call | object | Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model. | No | |
-### chatCompletionRequestMessageSystem
+### Properties for function_call
+#### arguments
-| Name | Type | Description | Required | Default |
-|------|------|-------------|----------|---------|
-| role | [chatCompletionRequestMessageRole](#chatcompletionrequestmessagerole) | The role of the messages author. | Yes | |
-| content | string | The contents of the message. | No | |
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may generate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | |
+#### name
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| name | string | The name of the function to call. | |
-### chatCompletionRequestMessageUser
+
+### chatCompletionRequestToolMessage
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| role | [chatCompletionRequestMessageRole](#chatcompletionrequestmessagerole) | The role of the messages author. | Yes | |
-| content | string or array | | No | |
+| role | enum | The role of the messages author, in this case `tool`.<br>Possible values: tool | Yes | |
+| content | string or array | The contents of the tool message. | Yes | |
+| tool_call_id | string | Tool call that this message is responding to. | Yes | |
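+
+For illustration, an assistant message that carries a tool call followed by the tool message that answers it; the IDs, function name, and values are placeholders.
+
+```json
+[
+  {
+    "role": "assistant",
+    "content": null,
+    "tool_calls": [
+      {
+        "id": "call_abc123",
+        "type": "function",
+        "function": { "name": "get_current_weather", "arguments": "{\"location\": \"Paris, France\"}" }
+      }
+    ]
+  },
+  {
+    "role": "tool",
+    "tool_call_id": "call_abc123",
+    "content": "{\"temperature_c\": 18, \"condition\": \"cloudy\"}"
+  }
+]
+```
+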
-### chatCompletionRequestMessageContentPart
+### chatCompletionRequestFunctionMessage
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| type | [chatCompletionRequestMessageContentPartType](#chatcompletionrequestmessagecontentparttype) | The type of the content part. | Yes | |
+| role | enum | The role of the messages author, in this case `function`.<br>Possible values: function | Yes | |
+| content | string | The contents of the function message. | Yes | |
+| name | string | The name of the function to call. | Yes | |
-### chatCompletionRequestMessageContentPartType
+### chatCompletionRequestSystemMessageContentPart
-The type of the content part.
-**Description**: The type of the content part.
-**Type**: string
+This component can be one of the following:
-**Default**:
-**Enum Name**: ChatCompletionRequestMessageContentPartType
+### chatCompletionRequestUserMessageContentPart
-**Enum Values**:
-| Value | Description |
-|-------|-------------|
-| text | The content part type is text. |
-| image_url | The content part type is image_url. |
+
+This component can be one of the following:
+
+
+### chatCompletionRequestAssistantMessageContentPart
+
+
+
+This component can be one of the following:
+
+
+### chatCompletionRequestToolMessageContentPart
+
+
+
+This component can be one of the following:
### chatCompletionRequestMessageContentPartText
@@ -1345,8 +1390,8 @@ The type of the content part.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| type | [chatCompletionRequestMessageContentPartType](#chatcompletionrequestmessagecontentparttype) | The type of the content part. | Yes | |
-| text | string | The text content. | No | |
+| type | enum | The type of the content part.<br>Possible values: text | Yes | |
+| text | string | The text content. | Yes | |
### chatCompletionRequestMessageContentPartImage
@@ -1355,42 +1400,33 @@ The type of the content part.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| type | [chatCompletionRequestMessageContentPartType](#chatcompletionrequestmessagecontentparttype) | The type of the content part. | Yes | |
-| url | string | Either a URL of the image or the base64 encoded image data. | No | |
-| detail | [imageDetailLevel](#imagedetaillevel) | Specifies the detail level of the image. | No | auto |
+| type | enum | The type of the content part.<br>Possible values: image_url | Yes | |
+| image_url | object | | Yes | |
-### imageDetailLevel
-
-Specifies the detail level of the image.
-
-**Description**: Specifies the detail level of the image.
-
-**Type**: string
+### Properties for image_url
-**Default**: auto
+#### url
-**Enum Name**: ImageDetailLevel
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| url | string | Either a URL of the image or the base64 encoded image data. | |
-**Enum Values**:
+#### detail
-| Value | Description |
-|-------|-------------|
-| auto | The image detail level is auto. |
-| low | The image detail level is low. |
-| high | The image detail level is high. |
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| detail | string | Specifies the detail level of the image. Learn more in the [Vision guide](/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest%2Csystem-assigned%2Cresource#detail-parameter-settings-in-image-processing-low-high-auto). | auto |
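+
+For illustration, a user message whose `content` is an array of content parts, one text part and one image part; the URL and detail level are placeholders.
+
+```json
+{
+  "role": "user",
+  "content": [
+    { "type": "text", "text": "What is shown in this image?" },
+    {
+      "type": "image_url",
+      "image_url": { "url": "https://example.com/image.png", "detail": "auto" }
+    }
+  ]
+}
+```
+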
-### chatCompletionRequestMessageAssistant
+### chatCompletionRequestMessageContentPartRefusal
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| role | [chatCompletionRequestMessageRole](#chatcompletionrequestmessagerole) | The role of the messages author. | Yes | |
-| content | string | The contents of the message. | No | |
-| tool_calls | array | The tool calls generated by the model, such as function calls. | No | |
-| context | [azureChatExtensionsMessageContext](#azurechatextensionsmessagecontext) | A representation of the additional context information available when Azure OpenAI chat extensions are involved<br> in the generation of a corresponding chat completions response. This context information is only populated when<br> using an Azure OpenAI request configured to use a matching extension. | No | |
+| type | enum | The type of the content part.<br>Possible values: refusal | Yes | |
+| refusal | string | The refusal message generated by the model. | Yes | |
### azureChatExtensionConfiguration
@@ -1410,7 +1446,7 @@ Specifies the detail level of the image.
completions request that should use Azure OpenAI chat extensions to augment the response behavior.
The use of this configuration is compatible only with Azure OpenAI.
-**Description**: A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat<br> Completions request that should use Azure OpenAI chat extensions to augment the response behavior.<br> The use of this configuration is compatible only with Azure OpenAI.
+**Description**: A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.
**Type**: string
@@ -1481,7 +1517,7 @@ The type of Azure Search retrieval query that should be executed when using it a
**Default**:
-**Enum Name**: azureSearchQueryType
+**Enum Name**: AzureSearchQueryType
**Enum Values**:
@@ -1622,7 +1658,7 @@ An abstract representation of a vectorization source for Azure OpenAI On Your Da
Represents the available sources Azure OpenAI On Your Data can use to configure vectorization of data for use with
vector search.
-**Description**: Represents the available sources Azure OpenAI On Your Data can use to configure vectorization of data for use with<br>Vector search.
+**Description**: Represents the available sources Azure OpenAI On Your Data can use to configure vectorization of data for use with<br>vector search.
**Type**: string
@@ -1635,7 +1671,7 @@ vector search.
| Value | Description |
|-------|-------------|
| endpoint | Represents vectorization performed by public service calls to an Azure OpenAI embedding model. |
-| deployment_name | Represents an Ada model deployment name to use. This model deployment must be in the same Azure OpenAI resource, but<br>The on your data feature will use this model deployment via an internal call rather than a public one, which enables vector<br>search even in private networks. |
+| deployment_name | Represents an Ada model deployment name to use. This model deployment must be in the same Azure OpenAI resource, but<br>On Your Data will use this model deployment via an internal call rather than a public one, which enables vector<br>search even in private networks. |
### onYourDataDeploymentNameVectorizationSource
@@ -1652,7 +1688,7 @@ on an internal embeddings model deployment name in the same Azure OpenAI resourc
### onYourDataEndpointVectorizationSource
The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based
-on public embeddings endpoint for Azure OpenAI.
+on a public Azure OpenAI endpoint call for embeddings.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
@@ -1673,9 +1709,9 @@ on public embeddings endpoint for Azure OpenAI.
| intent | string | The detected intent from the chat history, used to pass to the next turn to carry over the context. | No | |
-### Citation
+### citation
-Citation information for a chat completions response message.
+Citation information for a chat completions response message.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
@@ -1699,17 +1735,17 @@ Citation information for a chat completions response message.
### Properties for function
-#### Name
+#### name
| Name | Type | Description | Default |
|------|------|-------------|--------|
| name | string | The name of the function to call. | |
-#### Arguments
+#### arguments
| Name | Type | Description | Default |
|------|------|-------------|--------|
-| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | |
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may generate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | |
### toolCallType
@@ -1737,7 +1773,6 @@ The type of the tool call, in this case `function`.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| role | [chatCompletionRequestMessageRole](#chatcompletionrequestmessagerole) | The role of the messages author. | Yes | |
| tool_call_id | string | Tool call that this message is responding to. | No | |
| content | string | The contents of the message. | No | |
@@ -1748,25 +1783,104 @@ The type of the tool call, in this case `function`.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| role | enum | The role of the messages author, in this case `function`.<br>Possible values: function | Yes | |
+| role | enum | The role of the messages author, in this case `function`.<br>Possible values: function | No | |
| name | string | The name of the function to call. | No | |
| content | string | The contents of the message. | No | |
### createChatCompletionResponse
-
+Represents a chat completion response returned by the model, based on the provided input.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| id | string | A unique identifier for the chat completion. | Yes | |
-| object | [chatCompletionResponseObject](#chatcompletionresponseobject) | The object type. | Yes | |
+| prompt_filter_results | [promptFilterResults](#promptfilterresults) | Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. | No | |
+| choices | array | A list of chat completion choices. Can be more than one if `n` is greater than 1. | Yes | |
| created | integer | The Unix timestamp (in seconds) of when the chat completion was created. | Yes | |
| model | string | The model used for the chat completion. | Yes | |
+| system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with.<br><br>Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.<br> | No | |
+| object | enum | The object type, which is always `chat.completion`.<br>Possible values: chat.completion | Yes | |
| usage | [completionUsage](#completionusage) | Usage statistics for the completion request. | No | |
-| system_fingerprint | string | Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | No | |
-| prompt_filter_results | [promptFilterResults](#promptfilterresults) | Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. | No | |
-| choices | array | | No | |
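+
+A trimmed sketch of a non-streaming response body; the ID, timestamp, model version, and message content are illustrative, and content filter details are omitted.
+
+```json
+{
+  "id": "chatcmpl-abc123",
+  "object": "chat.completion",
+  "created": 1720000000,
+  "model": "gpt-4o-2024-05-13",
+  "system_fingerprint": "fp_abc123",
+  "choices": [
+    {
+      "index": 0,
+      "finish_reason": "stop",
+      "message": { "role": "assistant", "content": "Hello! How can I help you today?" }
+    }
+  ],
+  "usage": { "prompt_tokens": 9, "completion_tokens": 10, "total_tokens": 19 }
+}
+```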
+
+
+### createChatCompletionStreamResponse
+
+Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
+
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| id | string | A unique identifier for the chat completion. Each chunk has the same ID. | Yes | |
+| choices | array | A list of chat completion choices. Can contain more than one element if `n` is greater than 1.<br> | Yes | |
+| created | integer | The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp. | Yes | |
+| model | string | The model to generate the completion. | Yes | |
+| system_fingerprint | string | This fingerprint represents the backend configuration that the model runs with.<br>Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.<br> | No | |
+| object | enum | The object type, which is always `chat.completion.chunk`.<br>Possible values: chat.completion.chunk | Yes | |
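+
+For illustration, one chunk as it might arrive in the server-sent event stream (each chunk is delivered as a `data:` line, and the stream ends with `data: [DONE]`); the values are placeholders.
+
+```json
+{
+  "id": "chatcmpl-abc123",
+  "object": "chat.completion.chunk",
+  "created": 1720000000,
+  "model": "gpt-4o-2024-05-13",
+  "system_fingerprint": "fp_abc123",
+  "choices": [
+    { "index": 0, "delta": { "role": "assistant", "content": "Hel" }, "finish_reason": null }
+  ]
+}
+```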
+
+
+### chatCompletionStreamResponseDelta
+
+A chat completion delta generated by streamed model responses.
+
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| content | string | The contents of the chunk message. | No | |
+| function_call | object | Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model. | No | |
+| tool_calls | array | | No | |
+| role | enum | The role of the author of this message.<br>Possible values: system, user, assistant, tool | No | |
+| refusal | string | The refusal message generated by the model. | No | |
+
+
+### Properties for function_call
+
+#### arguments
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may generate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | |
+
+#### name
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| name | string | The name of the function to call. | |
+
+
+### chatCompletionMessageToolCallChunk
+
+
+
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| index | integer | | Yes | |
+| id | string | The ID of the tool call. | No | |
+| type | enum | The type of the tool. Currently, only `function` is supported.<br>Possible values: function | No | |
+| function | object | | No | |
+
+
+### Properties for function
+
+#### name
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| name | string | The name of the function to call. | |
+
+#### arguments
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may generate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | |
+
+
+### chatCompletionStreamOptions
+
+Options for streaming response. Only set this when you set `stream: true`.
+
+
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| include_usage | boolean | If set, an additional chunk will be streamed before the `data: [DONE]` message. The `usage` field on this chunk shows the token usage statistics for the entire request, and the `choices` field will always be an empty array. All other chunks will also include a `usage` field, but with a null value.<br> | No | |
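+
+A minimal sketch of the relevant request fragment; other required request fields such as `messages` are omitted.
+
+```json
+{
+  "stream": true,
+  "stream_options": { "include_usage": true }
+}
+```
+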
### chatCompletionChoiceLogProbs
@@ -1776,6 +1890,7 @@ Log probability information for the choice.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| content | array | A list of message content tokens with log probability information. | Yes | |
+| refusal | array | A list of message refusal tokens with log probability information. | No | |
### chatCompletionTokenLogprob
@@ -1786,7 +1901,7 @@ Log probability information for the choice.
|------|------|-------------|----------|---------|
| token | string | The token. | Yes | |
| logprob | number | The log probability of this token. | Yes | |
-| bytes | array | A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token. | Yes | |
+| bytes | array | A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there's no bytes representation for the token. | Yes | |
| top_logprobs | array | List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned. | Yes | |
@@ -1796,8 +1911,9 @@ A chat completion message generated by the model.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| role | [chatCompletionResponseMessageRole](#chatcompletionresponsemessagerole) | The role of the author of the response message. | No | |
-| content | string | The contents of the message. | No | |
+| role | [chatCompletionResponseMessageRole](#chatcompletionresponsemessagerole) | The role of the author of the response message. | Yes | |
+| refusal | string | The refusal message generated by the model. | Yes | |
+| content | string | The contents of the message. | Yes | |
| tool_calls | array | The tool calls generated by the model, such as function calls. | No | |
| function_call | [chatCompletionFunctionCall](#chatcompletionfunctioncall) | Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model. | No | |
| context | [azureChatExtensionsMessageContext](#azurechatextensionsmessagecontext) | A representation of the additional context information available when Azure OpenAI chat extensions are involved<br> in the generation of a corresponding chat completions response. This context information is only populated when<br> using an Azure OpenAI request configured to use a matching extension. | No | |
@@ -1820,7 +1936,7 @@ The role of the author of the response message.
### chatCompletionToolChoiceOption
-Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
+Controls which (if any) tool is called by the model. `none` means the model won't call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present. `auto` is the default if tools are present.
This component can be one of the following:
@@ -1831,121 +1947,155 @@ Specifies a tool the model should use. Use to force the model to call a specific
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| type | enum | The type of the tool. Currently, only `function` is supported.<br>Possible values: function | No | |
-| function | object | | No | |
+| type | enum | The type of the tool. Currently, only `function` is supported.<br>Possible values: function | Yes | |
+| function | object | | Yes | |
### Properties for function
-#### Name
+#### name
| Name | Type | Description | Default |
|------|------|-------------|--------|
| name | string | The name of the function to call. | |
+### ParallelToolCalls
+
+Whether to enable parallel function calling during tool use.
+
+No properties defined for this component.
+
+
+### chatCompletionMessageToolCalls
+
+The tool calls generated by the model, such as function calls.
+
+No properties defined for this component.
+
+
### chatCompletionFunctionCall
Deprecated and replaced by `tool_calls`. The name and arguments of a function that should be called, as generated by the model.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| name | string | The name of the function to call. | Yes | |
-| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | Yes | |
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model doesn't always generate valid JSON, and may generate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | Yes | |
-### chatCompletionsResponseCommon
+### completionUsage
+
+Usage statistics for the completion request.
+
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| prompt_tokens | integer | Number of tokens in the prompt. | Yes | |
+| completion_tokens | integer | Number of tokens in the generated completion. | Yes | |
+| total_tokens | integer | Total number of tokens used in the request (prompt + completion). | Yes | |
+| completion_tokens_details | object | Breakdown of tokens used in a completion. | No | |
+
+
+### Properties for completion_tokens_details
+
+#### reasoning_tokens
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| reasoning_tokens | integer | Tokens generated by the model for reasoning. | |
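+
+For illustration, a `usage` object as it might appear in a response from a reasoning-capable deployment; the token counts are placeholders.
+
+```json
+{
+  "prompt_tokens": 52,
+  "completion_tokens": 310,
+  "total_tokens": 362,
+  "completion_tokens_details": { "reasoning_tokens": 128 }
+}
+```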
+
+
+### chatCompletionTool
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| id | string | A unique identifier for the chat completion. | Yes | |
-| object | [chatCompletionResponseObject](#chatcompletionresponseobject) | The object type. | Yes | |
-| created | integer | The Unix timestamp (in seconds) of when the chat completion was created. | Yes | |
-| model | string | The model used for the chat completion. | Yes | |
-| usage | [completionUsage](#completionusage) | Usage statistics for the completion request. | No | |
-| system_fingerprint | string | Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | No | |
+| type | enum | The type of the tool. Currently, only `function` is supported.<br>Possible values: function | Yes | |
+| function | [FunctionObject](#functionobject) | | Yes | |
-### chatCompletionResponseObject
+### FunctionParameters
-The object type.
+The parameters the function accepts, described as a JSON Schema object. See the [guide](/azure/ai-services/openai/how-to/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
-**Description**: The object type.
+Omitting `parameters` defines a function with an empty parameter list.
-**Type**: string
+No properties defined for this component.
-**Default**:
-**Enum Name**: ChatCompletionResponseObject
+### FunctionObject
-**Enum Values**:
-| Value | Description |
-|-------|-------------|
-| chat.completion | The object type is chat completion. |
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
+| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
+| parameters | [FunctionParameters](#functionparameters) | The parameters the function accepts, described as a JSON Schema object. See the [guide](/azure/ai-services/openai/how-to/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. <br><br>Omitting `parameters` defines a function with an empty parameter list. | No | |
+| strict | boolean | Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the `parameters` field. Only a subset of JSON Schema is supported when `strict` is `true`. | No | False |
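+
+As a sketch, a tool entry whose function sets `strict` to `true`; the function name and schema are illustrative only.
+
+```json
+{
+  "type": "function",
+  "function": {
+    "name": "get_current_weather",
+    "description": "Get the current weather for a given location.",
+    "strict": true,
+    "parameters": {
+      "type": "object",
+      "properties": {
+        "location": { "type": "string" },
+        "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
+      },
+      "required": ["location", "unit"],
+      "additionalProperties": false
+    }
+  }
+}
+```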
+
+
+### ResponseFormatText
-### completionUsage
-Usage statistics for the completion request.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| prompt_tokens | integer | Number of tokens in the prompt. | Yes | |
-| completion_tokens | integer | Number of tokens in the generated completion. | Yes | |
-| total_tokens | integer | Total number of tokens used in the request (prompt + completion). | Yes | |
+| type | enum | The type of response format being defined: `text`<br>Possible values: text | Yes | |
-### chatCompletionTool
+### ResponseFormatJsonObject
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| type | [chatCompletionToolType](#chatcompletiontooltype) | The type of the tool. Currently, only `function` is supported. | Yes | |
-| function | object | | Yes | |
+| type | enum | The type of response format being defined: `json_object`<br>Possible values: json_object | Yes | |
-### Properties for function
+### ResponseFormatJsonSchemaSchema
-#### Description
+The schema for the response format, described as a JSON Schema object.
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| description | string | A description of what the function does, used by the model to choose when and how to call the function. | |
+No properties defined for this component.
-#### Name
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | |
+### ResponseFormatJsonSchema
-#### Parameters
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| parameters | [chatCompletionFunctionParameters](#chatcompletionfunctionparameters) | The parameters the functions accepts, described as a JSON Schema object. See the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format. | |
+| Name | Type | Description | Required | Default |
+|------|------|-------------|----------|---------|
+| type | enum | The type of response format being defined: `json_schema`<br>Possible values: json_schema | Yes | |
+| json_schema | object | | Yes | |
-### chatCompletionToolType
-The type of the tool. Currently, only `function` is supported.
+### Properties for json_schema
-**Description**: The type of the tool. Currently, only `function` is supported.
+#### description
-**Type**: string
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| description | string | A description of what the response format is for, used by the model to determine how to respond in the format. | |
-**Default**:
+#### name
-**Enum Name**: ChatCompletionToolType
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| name | string | The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | |
-**Enum Values**:
+#### schema
-| Value | Description |
-|-------|-------------|
-| function | The tool type is function. |
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| schema | [ResponseFormatJsonSchemaSchema](#responseformatjsonschemaschema) | The schema for the response format, described as a JSON Schema object. | |
+
+#### strict
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| strict | boolean | Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the `schema` field. Only a subset of JSON Schema is supported when `strict` is `true`. | False |
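+
+For illustration, a `response_format` value of this type; the schema name and fields are placeholders.
+
+```json
+{
+  "type": "json_schema",
+  "json_schema": {
+    "name": "math_answer",
+    "description": "A structured answer to a math question.",
+    "strict": true,
+    "schema": {
+      "type": "object",
+      "properties": {
+        "answer": { "type": "number" },
+        "explanation": { "type": "string" }
+      },
+      "required": ["answer", "explanation"],
+      "additionalProperties": false
+    }
+  }
+}
+```
+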
### chatCompletionChoiceCommon
@@ -2032,14 +2182,14 @@ Transcription or translation segment.
|------|------|-------------|----------|---------|
| id | integer | Segment identifier. | No | |
| seek | number | Offset of the segment. | No | |
-| start | number | The segment start offset. | No | |
+| start | number | Segment start offset. | No | |
| end | number | Segment end offset. | No | |
| text | string | Segment text. | No | |
| tokens | array | Tokens of the text. | No | |
| temperature | number | Temperature. | No | |
| avg_logprob | number | Average log probability. | No | |
| compression_ratio | number | Compression ratio. | No | |
-| no_speech_prob | number | Probability of 'no speech'. | No | |
+| no_speech_prob | number | Probability of `no speech`. | No | |
### imageQuality
@@ -2129,7 +2279,7 @@ The style of the generated images.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
-| prompt | string | A text description of the desired image(s). The maximum length is 4000 characters. | Yes | |
+| prompt | string | A text description of the desired image(s). The maximum length is 4,000 characters. | Yes | |
| n | integer | The number of images to generate. | No | 1 |
| size | [imageSize](#imagesize) | The size of the generated images. | No | 1024x1024 |
| response_format | [imagesResponseFormat](#imagesresponseformat) | The format in which the generated images are returned. | No | url |