Highlights
The changes in this document set include new features, specification changes, and the removal of existing documentation. In particular, the removal of several important pieces of information may have a significant impact on users, and replacement resources need to be provided.
New features
- New metadata attributes and redirect settings refine the information for AI services and improve convenience for users.
- Updates to the blocklist and content filtering procedures and to the multimodal vision quickstart improve usability and make information easier to find.
Breaking changes
- Many documents were deleted outright, including those on deployment types, endpoints, and the model inference service. These removals raise the bar for finding related information, so the impact is significant.
- The disappearance of the FAQ and key content safety guides reduces users' ability to solve problems on their own.
Other updates
- The shared folder path changed, along with minor adjustments related to the management network.
- Documentation navigation improved through reorganization of the Azure AI Studio table of contents and links.
Insights
Across many of these files, the Azure AI services documentation has clearly undergone a substantial overhaul. New metadata and redirect settings refine the data and improve convenience for users.
On the other hand, the removal of several important pieces of information is the headline change. Specifically, the deployment and endpoint documentation was deleted and the FAQ is gone, so users may find it harder to get the help they need to use and understand these services properly. New guidelines or alternative information sources are essential; notably, the redirect file below maps the deleted pages to new locations under /azure/ai-foundry/model-inference/.
The changes also aim at readability and usability: step-by-step instructions with concrete examples and visually supported guidance were strengthened, helping new users and learners put the AI models and features to effective use.
Overall, this change restructures the Azure AI services documentation, removing stale information and realigning the layout with new features and content; continued support and educational material will be important going forward.
Summary Table
- articles/ai-services/document-intelligence/containers/disconnected.md (minor update): Changed the shared folder path
- articles/ai-services/document-intelligence/containers/install-run.md (minor update): Updates to installing and running Docker containers for Document Intelligence
- articles/ai-services/document-intelligence/train/custom-model.md (minor update): Corrected the training data size for custom models
- articles/ai-services/language-service/conversational-language-understanding/how-to/tag-utterances.md (minor update): Revised the utterance labeling steps for conversational language understanding
- articles/ai-services/language-service/named-entity-recognition/concepts/entity-metadata.md (minor update): Updated entity metadata
- articles/ai-services/language-service/named-entity-recognition/concepts/named-entity-categories.md (breaking change): Restructured the named entity categories
- articles/ai-studio/.openpublishing.redirection.ai-studio.json (minor update): Added redirect settings
- articles/ai-studio/ai-services/concepts/deployment-types.md (breaking change): Removed the deployment types document
- articles/ai-studio/ai-services/concepts/endpoints.md (breaking change): Removed the endpoints document
- articles/ai-studio/ai-services/faq.yml (breaking change): Removed the FAQ for the Azure AI model inference service
- articles/ai-studio/ai-services/how-to/content-safety.md (breaking change): Removed the Content Safety guide
Modified Contents
articles/ai-services/document-intelligence/containers/disconnected.md
Diff
@@ -204,14 +204,14 @@ services:
apikey: ${FORM_RECOGNIZER_KEY}
billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
+ SharedRootFolder: /share
+ Mounts:Shared: /share
Mounts:Output: /logs
Mounts:License: /license
volumes:
- type: bind
source: ${SHARED_MOUNT_PATH}
- target: /shared
+ target: /share
- type: bind
source: ${OUTPUT_MOUNT_PATH}
target: /logs
@@ -233,14 +233,14 @@ services:
apikey: ${FORM_RECOGNIZER_KEY}
billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
+ SharedRootFolder: /share
+ Mounts:Shared: /share
Mounts:Output: /logs
Mounts:License: /license
volumes:
- type: bind
source: ${SHARED_MOUNT_PATH}
- target: /shared
+ target: /share
- type: bind
source: ${OUTPUT_MOUNT_PATH}
target: /logs
Summary
{
"modification_type": "minor update",
"modification_title": "共有フォルダーのパスを変更"
}
Explanation
This change updates the specified folder path from /shared to /share. Specifically, the values of SharedRootFolder and Mounts:Shared in the Docker configuration file were updated, and the target paths of the related volumes were changed to match. The adjustment keeps the folder layout consistent and tidies the configuration across the system. In total, 6 lines were added and 6 lines deleted, for 12 changed lines.
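As a minimal, hypothetical sketch of what this rename means on the host side (the variable names come from the article's compose file; the docker compose commands assume you run them from the directory containing docker-compose.yml):

```bash
# Hypothetical host-side steps after the mount target rename.
# SHARED_MOUNT_PATH still points at a host folder; only the
# in-container target changed from /shared to /share.
export SHARED_MOUNT_PATH="./share"
export OUTPUT_MOUNT_PATH="./output"
mkdir -p "${SHARED_MOUNT_PATH}" "${OUTPUT_MOUNT_PATH}"

# Recreate the containers so the new mount targets take effect.
docker compose down
docker compose up -d
```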
articles/ai-services/document-intelligence/containers/install-run.md
Diff
@@ -1,12 +1,12 @@
---
-title: Install and run Docker containers for Document Intelligence
+title: Install and run Docker containers for Document Intelligence
titleSuffix: Azure AI services
description: Use the Docker containers for Document Intelligence on-premises to identify and extract key-value pairs, selection marks, tables, and structure from forms and documents.
author: laujan
manager: nitinme
ms.service: azure-ai-document-intelligence
ms.topic: how-to
-ms.date: 11/19/2024
+ms.date: 01/22/2025
ms.author: lajanuar
---
@@ -272,13 +272,13 @@ services:
- AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
ports:
- "5000:5050"
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
```
Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
@@ -345,6 +345,7 @@ services:
- apiKey={FORM_RECOGNIZER_KEY}
```
+
### [Custom](#tab/custom)
In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
@@ -385,23 +386,25 @@ In addition to the [prerequisites](#prerequisites), you need to do the following
1. Declare the following environment variables:
- ```text
+```bash
+
+
SHARED_MOUNT_PATH="./share"
OUTPUT_MOUNT_PATH="./output"
FILE_MOUNT_PATH="./files"
DB_MOUNT_PATH="./db"
FORM_RECOGNIZER_ENDPOINT_URI="YourFormRecognizerEndpoint"
FORM_RECOGNIZER_KEY="YourFormRecognizerKey"
NGINX_CONF_FILE="./nginx.conf"
- ```
+```
#### Create an **nginx** file
1. Name this file **nginx.conf**.
1. Enter the following configuration:
-```text
+```bash
worker_processes 1;
events { worker_connections 1024; }
@@ -443,6 +446,10 @@ http {
proxy_pass http://docker-custom/swagger;
}
+ location /api-docs {
+ proxy_pass http://docker-custom/api-docs;
+ }
+
location /formrecognizer/documentModels/prebuilt-layout {
proxy_set_header Host $host:$server_port;
proxy_set_header Referer $scheme://$host:$server_port;
@@ -491,6 +498,9 @@ http {
}
```
+::: moniker-end
+
+:::moniker range="<=doc-intel-3.0.0"
#### Create a **docker compose** file
@@ -506,7 +516,94 @@ services:
container_name: reverseproxy
depends_on:
- layout
- - custom-template
+ - custom-template
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /share
+ Mounts:Shared: /share
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /share
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-template:
+ container_name: azure-cognitive-service-custom-template
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
+ restart: always
+ depends_on:
+ - layout
+ environment:
+ AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /share
+ Mounts:Shared: /share
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /share
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ studio:
+ container_name: form-recognizer-studio
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
+ environment:
+ ONPREM_LOCALFILE_BASEPATH: /onprem_folder
+ STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
+ volumes:
+ - type: bind
+ source: ${FILE_MOUNT_PATH} # path to your local folder
+ target: /onprem_folder
+ - type: bind
+ source: ${DB_MOUNT_PATH} # path to your local folder
+ target: /onprem_db
+ ports:
+ - "5001:5001"
+ user: "1000:1000" # echo $(id -u):$(id -g)
+
+ ```
+::: moniker-end
+
+:::moniker range=">=doc-intel-3.1.0"
+
+#### Create a **docker compose** file
+
+1. Name this file **docker-compose.yml**
+
+2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Studio, and Custom template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
+
+```yml
+version: '3.3'
+services:
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ depends_on:
+ - layout
+ - custom-template
volumes:
- ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
ports:
@@ -559,7 +656,7 @@ services:
studio:
container_name: form-recognizer-studio
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.1
environment:
ONPREM_LOCALFILE_BASEPATH: /onprem_folder
STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
@@ -575,6 +672,7 @@ services:
user: "1000:1000" # echo $(id -u):$(id -g)
```
+::: moniker-end
The custom template container and Layout container can use Azure Storage queues or in memory queues. The `Storage:ObjectStore:AzureBlob:ConnectionString` and `queue:azure:connectionstring` environment variables only need to be set if you're using Azure Storage queues. When running locally, delete these variables.
@@ -635,20 +733,21 @@ $b64String = [System.Convert]::ToBase64String($bytes, [System.Base64FormattingOp
Use the build model API to post the request.
```http
-POST http://localhost:5000/formrecognizer/documentModels:build?api-version=2023-07-31
-
-{
- "modelId": "mymodel",
- "description": "test model",
- "buildMode": "template",
-
- "base64Source": "<Your base64 encoded string>",
- "tags": {
- "additionalProp1": "string",
- "additionalProp2": "string",
- "additionalProp3": "string"
- }
-}
+
+ POST http://localhost:5000/formrecognizer/documentModels:build?api-version=2023-07-31
+
+ {
+ "modelId": "mymodel",
+ "description": "test model",
+ "buildMode": "template",
+
+ "base64Source": "<Your base64 encoded string>",
+ "tags": {
+ "additionalProp1": "string",
+ "additionalProp2": "string",
+ "additionalProp3": "string"
+ }
+ }
```
---
@@ -720,4 +819,4 @@ That's it! In this article, you learned concepts and workflows for downloading,
* [Document Intelligence container configuration settings](configuration.md)
* [Azure container instance recipe](../../../ai-services/containers/azure-container-instance-recipe.md)
-::: moniker-end
+
Summary
{
"modification_type": "minor update",
"modification_title": "ドキュメントインテリジェンス用のDockerコンテナのインストールおよび実行に関する更新"
}
Explanation
This change substantially updates the document on installing and running the Docker containers for Document Intelligence. In total, 128 lines were added and 29 lines were deleted, for 157 changed lines.
The main changes are:
1. Metadata updates: the document title and ms.date were adjusted so the metadata reflects the latest information.
2. Docker service configuration: new services and environment variables were added to the docker-compose.yml content, giving a more detailed configuration that makes it easier to set up and run the containers.
3. Syntax improvements: the environment variable declarations and the nginx configuration now use bash code fences instead of plain text, improving readability.
4. Versioned, tabbed content: the custom document processing steps are now split by moniker range (doc-intel-3.0.0 versus doc-intel-3.1.0), so each documentation version gets its own docker compose configuration, and the Studio container image is bumped to studio:3.1 for the newer range.
This update should make the guide clearer and easier to use, especially for users working with Azure's Document Intelligence features.
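As a hedged illustration, the build request shown in the updated article could be exercised with curl once the containers are running; localhost:5000 assumes the reverse proxy from the compose file, and the base64 placeholder must be filled in with your own training data:

```bash
# Sketch of posting the build model request from the article with curl.
# The endpoint and payload come from the updated document; the base64
# string is a placeholder you must supply yourself.
curl -X POST "http://localhost:5000/formrecognizer/documentModels:build?api-version=2023-07-31" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "mymodel",
        "description": "test model",
        "buildMode": "template",
        "base64Source": "<Your base64 encoded string>",
        "tags": {
          "additionalProp1": "string"
        }
      }'
```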
articles/ai-services/document-intelligence/train/custom-model.md
Diff
@@ -95,7 +95,7 @@ If the language of your documents and extraction scenarios supports custom neura
* For custom model training, the maximum number of pages for training data is 500 for the custom template model and 50,000 for the custom neural model.
-* For custom extraction model training, the total size of training data is 50 MB for template model and 1G-MB for the neural model.
+* For custom extraction model training, the total size of training data is 50 MB for template model and 1GB for the neural model.
* For custom classification model training, the total size of training data is `1GB` with a maximum of 10,000 pages.
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムモデルのトレーニングデータサイズの修正"
}
Explanation
This change corrects the information about training data for custom models. Specifically, the typo "1G-MB" in the total training data size for the custom extraction neural model was corrected to "1GB", making the document accurate and easier to understand.
In total, 1 line was added and 1 line deleted, for 2 changed lines. The fix matters because it states the correct capacity limit for custom model training.
articles/ai-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
Diff
@@ -14,9 +14,9 @@ ms.custom: language-service-clu
# Label your utterances in Language Studio
-Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance that you want to extract as entities.
+Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance that you want to extract as entities.
-Data labeling is a crucial step in development lifecycle; this data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled utterances, you can directly [import it into your project](create-project.md#import-project), but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. Labeled data informs the model how to interpret text, and is used for training and evaluation.
+Data labeling is a crucial step in development lifecycle; this data are used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled utterances, you can directly [import it into your project](create-project.md#import-project), but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. Labeled data informs the model how to interpret text, and is used for training and evaluation.
## Prerequisites
@@ -28,7 +28,7 @@ See the [project development lifecycle](../overview.md#project-development-lifec
## Data labeling guidelines
-After [building your schema](build-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words and sentences will be associated with the intents and entities in your project. You will want to spend time labeling your utterances - introducing and refining the data that will be used to in training your models.
+After [building your schema](build-schema.md) and [creating your project](create-project.md), you need to label your data. Labeling your data is important so your model knows which words and sentences are associated with the intents and entities in your project. Spend time labeling your utterances - introducing and refining the data that is used in training your models.
As you add utterances and label them, keep in mind:
@@ -71,7 +71,7 @@ Use the following steps to label your utterances:
|Option |Description |
|---------|---------|
|Label using a brush | Select the brush icon next to an entity in the right pane, then highlight the text in the utterance you want to label. |
- |Label using inline menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity you want to label these words with. |
+ |Label using inline menu | Highlight the word you want to label as an entity, and a menu appears. Select the entity you want to label these words with. |
6. In the right side pane, under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances per each.
@@ -105,11 +105,11 @@ Before you get started, the suggest utterances feature is only available if your
In the Data Labeling page:
-1. Select the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment.
+1. Select the **Suggest utterances** button. A pane opens up on the right side prompting you to select your Azure OpenAI resource and deployment.
2. On selection of an Azure OpenAI resource, select **Connect**, which allows your Language resource to have direct access to your Azure OpenAI resource. It assigns your Language resource the role of `Cognitive Services User` to your Azure OpenAI resource, which allows your current Language resource to have access to Azure OpenAI's service. If the connection fails, follow these [steps](#add-required-configurations-to-azure-openai-resource) below to add the right role to your Azure OpenAI resource manually.
-3. Once the resource is connected, select the deployment. The recommended model for the Azure OpenAI deployment is `text-davinci-002`.
+3. Once the resource is connected, select the deployment. The recommended model for the Azure OpenAI deployment is `gpt-35-turbo-instruct`.
4. Select the intent you'd like to get suggestions for. Make sure the intent you have selected has at least 5 saved utterances to be enabled for utterance suggestions. The suggestions provided by Azure OpenAI are based on the **most recent utterances** you've added for that intent.
-5. Select **Generate utterances**. Once complete, the suggested utterances will show up with a dotted line around it, with the note *Generated by AI*. Those suggestions need to be accepted or rejected. Accepting a suggestion simply adds it to your project, as if you had added it yourself. Rejecting it deletes the suggestion entirely. Only accepted utterances will be part of your project and used for training or testing. You can accept or reject by clicking on the green check or red cancel buttons beside each utterance. You can also use the `Accept all` and `Reject all` buttons in the toolbar.
+5. Select **Generate utterances**. Once complete, the suggested utterances show up with a dotted line around it, with the note *Generated by AI*. Those suggestions need to be accepted or rejected. Accepting a suggestion simply adds it to your project, as if you had added it yourself. Rejecting it deletes the suggestion entirely. Only accepted utterances are part of your project and used for training or testing. You can accept or reject by clicking on the green check or red cancel buttons beside each utterance. You can also use the `Accept all` and `Reject all` buttons in the toolbar.
:::image type="content" source="../media/suggest-utterances.png" alt-text="A screenshot showing utterance suggestions in Language Studio." lightbox="../media/suggest-utterances.png":::
@@ -153,7 +153,7 @@ After enabling managed identity, assign the role `Cognitive Services User` to yo
:::image type="content" source="../media/add-role-azure-openai.gif" alt-text="Multiple screenshots showing the steps to add the required role to your Azure OpenAI resource." lightbox="../media/add-role-azure-openai.gif":::
-After a few minutes, refresh the Language Studio and you will be able to successfully connect to Azure OpenAI.
+After a few minutes, refresh the Language Studio and you are able to successfully connect to Azure OpenAI.
## Next Steps
* [Train Model](./train-model.md)
Summary
{
"modification_type": "minor update",
"modification_title": "会話型言語理解の発話タグ付け手順の修正"
}
Explanation
This change updates the document on labeling utterances for conversational language understanding: 8 lines were added and 8 lines deleted, for 16 changed lines. The main revisions are:
- Clearer wording: the description of training utterances now reads "what your users use" rather than "what your users will use", in line with the style guidance to avoid future tense.
- Data labeling: the sentence explaining that labeled data feeds the next training step was reworded the same way, as were several step descriptions ("a menu appears", "a pane opens up").
- Updated recommendation: the suggested Azure OpenAI deployment model changed from text-davinci-002 to gpt-35-turbo-instruct.
These changes make the document easier to read and follow, and should especially help new users work effectively with conversational language understanding models.
articles/ai-services/language-service/named-entity-recognition/concepts/entity-metadata.md
Diff
@@ -33,6 +33,7 @@ Examples: "10 years old", "23 months old", "sixty Y.O."
```json
"metadata": {
+ "metadataKind": "AgeMetadata",
"unit": "Year",
"value": 10
}
@@ -344,4 +345,4 @@ Possible values for "unit":
- Celsius
- Fahrenheit
- Kelvin
-- Rankine
\ No newline at end of file
+- Rankine
Summary
{
"modification_type": "minor update",
"modification_title": "エンティティメタデータの更新"
}
Explanation
This change revises the entity metadata document, with 2 lines added and 1 line deleted. The specific changes are:
- New metadata attribute: a metadataKind field with the value AgeMetadata was added to the JSON example, making the kind of age-related metadata explicit.
- End-of-file fix: a trailing newline was added after the final "Rankine" entry in the list of possible temperature units (Rankine itself was already present; only the missing newline changed).
These updates make the entity metadata specification more precise, giving developers greater clarity when consuming this data, particularly for age-related values.
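For orientation, here is a hedged sketch of a request that would return entity metadata like the example above. The :analyze-text route and the EntityRecognition kind are the Language service's standard text analysis API, but the resource name, key, and preview api-version shown are assumptions to check against the NER reference:

```bash
# Hypothetical NER request against a Language resource. For an age
# entity like "10 years old", the response should carry the metadata
# block shown above, including the new metadataKind field.
curl -X POST "https://<your-language-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2024-11-15-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "kind": "EntityRecognition",
        "analysisInput": {
          "documents": [
            { "id": "1", "language": "en", "text": "The patient is 10 years old." }
          ]
        }
      }'
```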
articles/ai-services/language-service/named-entity-recognition/concepts/named-entity-categories.md
Diff
@@ -21,9 +21,9 @@ Use this article to find the entity categories that can be returned by [Named En
# [Generally Available API](#tab/ga-api)
-## Category: Person
+## Type: Person
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -46,9 +46,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: PersonType
+## Type: PersonType
-This category contains the following entity:
+This type contains the following entity:
:::row:::
@@ -72,9 +72,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: Location
+## Type: Location
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -97,13 +97,13 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-#### Subcategories
+#### Subtype
-The entity in this category can have the following subcategories.
+The entity in this type can have the following subtypes.
:::row:::
:::column span="":::
- **Entity subcategory**
+ **Entity subtype**
Geopolitical Entity (GPE)
@@ -156,9 +156,9 @@ The entity in this category can have the following subcategories.
:::column-end:::
:::row-end:::
-## Category: Organization
+## Type: Organization
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -181,13 +181,13 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-#### Subcategories
+#### Subtype
-The entity in this category can have the following subcategories.
+The entity in this type can have the following subtype.
:::row:::
:::column span="":::
- **Entity subcategory**
+ **Entity subtype**
Medical
@@ -240,9 +240,9 @@ The entity in this category can have the following subcategories.
:::column-end:::
:::row-end:::
-## Category: Event
+## Type: Event
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -265,13 +265,13 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-#### Subcategories
+#### Subtypes
-The entity in this category can have the following subcategories.
+The entity in this type can have the following subtype.
:::row:::
:::column span="":::
- **Entity subcategory**
+ **Entity subtype**
Cultural
@@ -324,9 +324,9 @@ The entity in this category can have the following subcategories.
:::column-end:::
:::row-end:::
-## Category: Product
+## Type: Product
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -338,7 +338,7 @@ This category contains the following entity:
:::column span="2":::
**Details**
- Physical objects of various categories.
+ Physical objects of various types.
:::column-end:::
:::column span="2":::
@@ -350,13 +350,13 @@ This category contains the following entity:
:::row-end:::
-#### Subcategories
+#### Subtype
-The entity in this category can have the following subcategories.
+The entity in this type can have the following subtype.
:::row:::
:::column span="":::
- **Entity subcategory**
+ **Entity subtype**
Computing products
:::column-end:::
@@ -374,9 +374,9 @@ The entity in this category can have the following subcategories.
:::column-end:::
:::row-end:::
-## Category: Skill
+## Type: Skill
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -399,9 +399,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: Address
+## Type: Address
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -424,9 +424,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: PhoneNumber
+## Type: PhoneNumber
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -449,9 +449,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: Email
+## Type: Email
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -474,9 +474,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: URL
+## Type: URL
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -499,9 +499,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: IP
+## Type: IP
-This category contains the following entity:
+This type contains the following entity:
:::row:::
:::column span="":::
@@ -524,9 +524,9 @@ This category contains the following entity:
:::column-end:::
:::row-end:::
-## Category: DateTime
+## Type: DateTime
-This category contains the following entities:
+This type contains the following entities:
:::row:::
:::column span="":::
@@ -549,15 +549,15 @@ This category contains the following entities:
:::column-end:::
:::row-end:::
-Entities in this category can have the following subcategories
+Entities in this type can have the following subtypes
-#### Subcategories
+#### Subtypes
-The entity in this category can have the following subcategories.
+The entity in this type can have the following subtypes.
:::row:::
:::column span="":::
- **Entity subcategory**
+ **Entity subtype**
Date
@@ -661,9 +661,9 @@ The entity in this category can have the following subcategories.
:::column-end:::
:::row-end:::
-## Category: Quantity
+## Type: Quantity
-This category contains the following entities:
+This type contains the following entities:
:::row:::
:::column span="":::
@@ -686,13 +686,13 @@ This category contains the following entities:
:::column-end:::
:::row-end:::
-#### Subcategories
+#### Subtypes
-The entity in this category can have the following subcategories.
+The entity in this type can have the following subtypes.
:::row:::
:::column span="":::
- **Entity subcategory**
+ **Entity subtype**
Number
@@ -809,10 +809,6 @@ The entity in this category can have the following subcategories.
# [Preview API](#tab/preview-api)
-## Supported Named Entity Recognition (NER) entity categories
-
-Use this article to find the entity types and the additional tags that can be returned by [Named Entity Recognition (NER)](../how-to-call.md). NER runs a predictive model to identify and categorize named entities from an input document.
-
### Type: Address
Specific street-level mentions of locations: house/building numbers, streets, avenues, highways, intersections referenced by name.
Summary
{
"modification_type": "breaking change",
"modification_title": "ネーミングエンティティカテゴリーの構造が変更"
}
Explanation
This change substantially revises the named entity categories document, with 48 lines added and 52 lines deleted, for 100 changed lines. The main changes are:
Terminology change: "Category" becomes "Type" throughout, and the lead-in sentences are standardized to "This type contains the following entity", aligning the page with the new classification wording.
Subcategory naming: the former "Subcategories" headings are now "Subtype"/"Subtypes", clarifying the hierarchy within entity types.
Structural cleanup: a now-redundant introduction under the Preview API tab ("Supported Named Entity Recognition (NER) entity categories") was removed, leaving each entity type described in a consistent format.
These changes realign the Named Entity Recognition documentation around the Type/Subtype terminology, improving clarity for developers who rely on entity classification. Because headings change across the whole page, readers deep-linking to the old "Category" sections are affected, hence the breaking-change flag.
articles/ai-studio/.openpublishing.redirection.ai-studio.json
Diff
@@ -189,6 +189,51 @@
"source_path_from_root": "/articles/ai-studio/quickstarts/content-safety.md",
"redirect_url": "/azure/ai-studio/concepts/content-filtering",
"redirect_document_id": false
- }
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/model-inference.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/overview",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/how-to/quickstart-github-models.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/how-to/quickstart-github-models",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/how-to/create-model-deployments.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/how-to/create-model-deployments",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/how-to/content-safety.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/how-to/configure-content-safety",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/concepts/quotas-limits.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/quotas-limits",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/concepts/endpoints.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/concepts/endpoints",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/concepts/deployment-types.md",
+ "redirect_url": "/azure/ai-foundry/model-inference/concepts/deployment-types",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/ai-services/faq.yml",
+ "redirect_url": "/azure/ai-foundry/model-inference/faq",
+ "redirect_document_id": false
+ },
+ {
+ "source_path_from_root": "/articles/ai-studio/how-to/data-image-add.md",
+ "redirect_url": "/azure/ai-studio/quickstarts/multimodal-vision",
+ "redirect_document_id": false
+ }
]
}
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "リダイレクト設定の追加"
}
Explanation
This change makes a sizable modification to the JSON file that holds redirect settings: 46 lines were added and 1 line deleted, for 47 changed lines. The main points are:
New redirects: several entries were added; in particular, documents under /articles/ai-studio/ai-services/ now redirect to /azure/ai-foundry/model-inference/. Users who visit the old URLs are automatically routed to the new, related content.
Consistent structure: each redirect entry carries source_path_from_root and redirect_url fields stating the old and new URL explicitly, so the newly added redirects are easy to understand.
This update improves the user experience by enabling a smooth transition from old content to new, which particularly benefits documentation browsing and information lookup.
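Once published, one of the new mappings could be spot-checked like this; the learn.microsoft.com URL shape is an assumption based on the repository layout:

```bash
# Hypothetical check that the old model-inference URL now redirects.
# Expect a 3xx status with a Location header pointing at
# /azure/ai-foundry/model-inference/overview.
curl -sI "https://learn.microsoft.com/en-us/azure/ai-studio/ai-services/model-inference" \
  | grep -iE "^(HTTP|location)"
```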
articles/ai-studio/ai-services/concepts/deployment-types.md
Diff
@@ -1,52 +0,0 @@
----
-title: Understanding deployment types in Azure AI model inference
-titleSuffix: Azure AI services
-description: Learn how to use deployment types in Azure AI model deployments
-author: sdgilley
-manager: scottpolly
-ms.service: azure-ai-studio
-ms.topic: conceptual
-ms.date: 10/24/2024
-ms.author: fasantia
-ms.reviewer: fasantia
-ms.custom: github-universe-2024
----
-
-# Deployment types in Azure AI model inference
-
-Azure AI model inference in Azure AI services provides customers with choices on the hosting structure that fits their business and usage patterns. The service offers two main types of deployment: **standard** and **provisioned**. Standard is offered with a global deployment option, routing traffic globally to provide higher throughput. Provisioned is also offered with a global deployment option, allowing customers to purchase and deploy provisioned throughput units across Azure global infrastructure.
-
-All deployments can perform the exact same inference operations, however the billing, scale, and performance are substantially different. As part of your solution design, you need to make two key decisions:
-
-- **Data residency needs**: global vs. regional resources
-- **Call volume**: standard vs. provisioned
-
-Deployment types support varies by model and model provider.
-
-## Global versus regional deployment types
-
-For standard and provisioned deployments, you have an option of two types of configurations within your resource – **global** or **regional**. Global standard is the recommended starting point.
-
-Global deployments use Azure's global infrastructure, dynamically route customer traffic to the data center with best availability for the customer’s inference requests. This means you get the highest initial throughput limits and best model availability with Global while still providing our uptime SLA and low latency. For high volume workloads above the specified usage tiers on standard and global standard, you may experience increased latency variation. For customers that require the lower latency variance at large workload usage, we recommend purchasing provisioned throughput.
-
-Our global deployments are the first location for all new models and features. Customers with very large throughput requirements should consider our provisioned deployment offering.
-
-## Standard
-
-Standard deployments provide a pay-per-call billing model on the chosen model. Provides the fastest way to get started as you only pay for what you consume. Models available in each region as well as throughput may be limited.
-
-Standard deployments are optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability.
-
-Only Azure OpenAI models support this deployment type.
-
-## Global standard
-
-Global deployments are available in the same Azure AI services resources as nonglobal deployment types but allow you to use Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources.
-
-Customers with high consistent volume may experience greater latency variability. The threshold is set per model. For applications that require the lower latency variance at large workload usage, we recommend purchasing provisioned throughput if available.
-
-## Global provisioned
-
-Global deployments are available in the same Azure AI services resources as nonglobal deployment types but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global provisioned deployments provide reserved model processing capacity for high and predictable throughput using Azure global infrastructure.
-
-Only Azure OpenAI models support this deployment type.
Summary
{
"modification_type": "breaking change",
"modification_title": "デプロイメントタイプに関する文書の削除"
}
Explanation
This change deletes the file deployment-types.md entirely: all 52 lines were removed. The document provided important information about the deployment types available in Azure AI services. The consequences:
Loss of information: the original document explained the deployment type choices for Azure AI model inference and the characteristics of each, in particular the standard and provisioned options, and helped users decide which deployment to choose.
Business impact: with this information gone from this location, Azure users and developers lose a local resource for weighing deployment options and their trade-offs, which can hinder sound deployment strategy.
The deletion is a significant structural change to the documentation. Note, however, that the redirect file above points the old URL at /azure/ai-foundry/model-inference/concepts/deployment-types, so the content appears to have moved rather than vanished; that new location is the replacement source.
articles/ai-studio/ai-services/concepts/endpoints.md
Diff
@@ -1,105 +0,0 @@
----
-title: Use the Azure AI model inference endpoint
-titleSuffix: Azure AI Foundry
-description: Learn about to use the Azure AI model inference endpoint and how to configure it.
-ms.service: azure-ai-studio
-ms.topic: conceptual
-author: sdgilley
-manager: scottpolly
-ms.date: 10/24/2024
-ms.author: sgilley
-ms.reviewer: fasantia
-ms.custom: github-universe-2024
----
-
-# Use the Azure AI model inference endpoint
-
-Azure AI inference service in Azure AI services allows customers to consume the most powerful models from flagship model providers using a single endpoint and credentials. This means that you can switch between models and consume them from your application without changing a single line of code.
-
-The article explains how models are organized inside of the service and how to use the inference endpoint to invoke them.
-
-## Deployments
-
-Azure AI model inference service makes models available using the **deployment** concept. **Deployments** are a way to give a model a name under certain configurations. Then, you can invoke such model configuration by indicating its name on your requests.
-
-Deployments capture:
-
-> [!div class="checklist"]
-> * A model name
-> * A model version
-> * A provisioning/capacity type<sup>1</sup>
-> * A content filtering configuration<sup>1</sup>
-> * A rate limiting configuration<sup>1</sup>
-
-<sup>1</sup> Configurations may vary depending on the model you have selected.
-
-An Azure AI services resource can have as many model deployments as needed and they don't incur in cost unless inference is performed for those models. Deployments are Azure resources and hence they are subject to Azure policies.
-
-To learn more about how to create deployments see [Add and configure model deployments](../how-to/create-model-deployments.md).
-
-## Azure AI inference endpoint
-
-The Azure AI inference endpoint allows customers to use a single endpoint with the same authentication and schema to generate inference for the deployed models in the resource. This endpoint follows the [Azure AI model inference API](../../reference/reference-model-inference-api.md) which is supported by all the models in Azure AI model inference service.
-
-You can see the endpoint URL and credentials in the **Overview** section. The endpoint usually has the form `https://<resource-name>.services.ai.azure.com/models`:
-
-:::image type="content" source="../../media/ai-services/overview/overview-endpoint-and-key.png" alt-text="A screenshot showing how to get the URL and key associated with the resource." lightbox="../../media/ai-services/overview/overview-endpoint-and-key.png":::
-
-You can connect to the endpoint using the Azure AI Inference SDK:
-
-[!INCLUDE [code-create-chat-client](../../includes/ai-services/code-create-chat-client.md)]
-
-See [Supported languages and SDKs](#supported-languages-and-sdks) for more code examples and resources.
-
-### Routing
-
-The inference endpoint routes requests to a given deployment by matching the parameter `name` inside of the request to the name of the deployment. This means that *deployments work as an alias of a given model under certain configurations*. This flexibility allows you to deploy a given model multiple times in the service but under different configurations if needed.
-
-:::image type="content" source="../../media/ai-services/endpoint/endpoint-routing.png" alt-text="An illustration showing how routing works for a Meta-llama-3.2-8b-instruct model by indicating such name in the parameter 'model' inside of the payload request." lightbox="../../media/ai-services/endpoint/endpoint-routing.png":::
-
-For example, if you create a deployment named `Mistral-large`, then such deployment can be invoked as:
-
-[!INCLUDE [code-create-chat-completion](../../includes/ai-services/code-create-chat-completion.md)]
-
-> [!TIP]
-> Deployment routing is not case sensitive.
-
-### Supported languages and SDKs
-
-All models deployed in Azure AI model inference service support the [Azure AI model inference API](https://aka.ms/aistudio/modelinference) and its associated family of SDKs, which are available in the following languages:
-
-| Language | Documentation | Package | Examples |
-|------------|---------|-----|-------|
-| C# | [Reference](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) | [azure-ai-inference (NuGet)](https://www.nuget.org/packages/Azure.AI.Inference/) | [C# examples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) |
-| Java | [Reference](https://aka.ms/azsdk/azure-ai-inference/java/reference) | [azure-ai-inference (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-inference/) | [Java examples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) |
-| JavaScript | [Reference](/javascript/api/overview/azure/ai-inference-rest-readme?view=azure-node-preview&preserve-view=true) | [@azure/ai-inference (npm)](https://www.npmjs.com/package/@azure/ai-inference) | [JavaScript examples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
-| Python | [Reference](https://aka.ms/azsdk/azure-ai-inference/python/reference) | [azure-ai-inference (PyPi)](https://pypi.org/project/azure-ai-inference/) | [Python examples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) |
-
-## Azure OpenAI inference endpoint
-
-Azure OpenAI models also support the Azure OpenAI API. This API exposes the full capabilities of OpenAI models and supports additional features like assistants, threads, files, and batch inference.
-
-Each OpenAI model deployment has its own URL associated with such deployment under the Azure OpenAI inference endpoint. However, the same authentication mechanism can be used to consume it. URLs are usually in the form of `https://<resource-name>.openai.azure.com/openai/deployments/<model-deployment-name>`. Learn more in the reference page for [Azure OpenAI API](../../../ai-services/openai/reference.md)
-
-:::image type="content" source="../../media/ai-services/endpoint/endpoint-openai.png" alt-text="An illustration showing how Azure OpenAI deployments contain a single URL for each deployment." lightbox="../../media/ai-services/endpoint/endpoint-openai.png":::
-
-Each deployment has a URL that is the concatenations of the **Azure OpenAI** base URL and the route `/deployments/<model-deployment-name>`.
-
-> [!IMPORTANT]
-> There is no routing mechanism for the Azure OpenAI endpoint, as each URL is exclusive for each model deployment.
-
-### Supported languages and SDKs
-
-The Azure OpenAI endpoint is supported by the **OpenAI SDK (`AzureOpenAI` class)** and **Azure OpenAI SDKs**, which are available in multiple languages:
-
-| Language | Source code | Package | Examples |
-|------------|---------|-----|-------|
-| C# | [Source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/openai/Azure.AI.OpenAI) | [Azure.AI.OpenAI (NuGet)](https://www.nuget.org/packages/Azure.AI.OpenAI/) | [C# examples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples) |
-| Go | [Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/ai/azopenai) | [azopenai (Go)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai)| [Go examples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai#pkg-examples) |
-| Java | [Source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/openai/azure-ai-openai) | [azure-ai-openai (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-openai/) | [Java examples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/openai/azure-ai-openai/src/samples) |
-| JavaScript | [Source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) | [@azure/openai (npm)](https://www.npmjs.com/package/@azure/openai) | [JavaScript examples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai/samples/) |
-| Python | [Source code](https://github.com/openai/openai-python) | [openai (PyPi)](https://pypi.org/project/openai/) | [Python examples](https://github.com/openai/openai-cookbook) |
-
-## Next steps
-
-- [Deployment types](deployment-types.md)
Summary
{
"modification_type": "breaking change",
"modification_title": "エンドポイントに関する文書の削除"
}
Explanation
This change deletes the file endpoints.md entirely, removing 105 lines. The document detailed how to use and configure the Azure AI model inference endpoint, so its loss has these effects:
Loss of information: the original covered how to invoke models through the single inference endpoint, including the endpoint structure (https://<resource-name>.services.ai.azure.com/models), the deployment concept, and how requests route to a deployment by name.
Business impact: without this page, developers and business users lose guidance on managing and consuming the endpoint, raising the barrier especially for newcomers to Azure's AI models.
As with the other deletions, the redirect file above maps the old URL to /azure/ai-foundry/model-inference/concepts/endpoints, which should be treated as the replacement resource.
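For readers who lose the deleted walkthrough, a minimal hedged sketch of the single-endpoint routing it described follows. The endpoint shape and the Mistral-large deployment name come from the deleted article, while the api-version value is an assumption to verify against the current reference:

```bash
# Hypothetical chat completion against the unified inference endpoint.
# The "model" field selects the deployment by name (routing is not
# case sensitive, per the deleted article).
curl -X POST "https://<resource-name>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview" \
  -H "api-key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Mistral-large",
        "messages": [ { "role": "user", "content": "Hello" } ]
      }'
```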
articles/ai-studio/ai-services/faq.yml
Diff
@@ -1,119 +0,0 @@
-### YamlMime:FAQ
-metadata:
- title: Azure AI model inference service frequently asked questions
- titleSuffix: Azure AI Foundry
- description: Get answers to the most popular questions about Azure AI model service
- #services: cognitive-services
- ms.service: azure-ai-studio
- ms.topic: faq
- author: sdgilley
- manager: scottpolly
- ms.date: 10/24/2024
- ms.author: sgilley
- ms.reviewer: fasantia
-title: Azure AI model inference service frequently asked questions
-summary: |
- If you can't find answers to your questions in this document, and still need help check the [Azure AI services support options guide](../../ai-services/cognitive-services-support-options.md?context=/azure/ai-studio/context/context).
-sections:
- - name: General
- questions:
- - question: |
- What's the difference between Azure OpenAI service and Azure AI model inference service?
- answer: |
- Azure OpenAI service gives customers access to advanced language models from OpenAI. Azure AI model inference service gives customers access to all the flagship models in Azure AI, including Azure OpenAI, Cohere, Mistral AI, Meta Llama, AI21 labs, etc. This access is under the same service, endpoint, and credentials. Customers can seamlessly switch between models without changing their code.
-
- Both Azure OpenAI Service and Azure AI model inference service are part of the Azure AI services family and build on top of the same security and enterprise promise of Azure.
-
- While Azure AI model inference service focus on inference, Azure OpenAI Service can be used with more advanced APIs like batch, fine-tuning, assistants, and files.
- - question: |
- What's the difference between OpenAI and Azure OpenAI?
- answer: |
- Azure AI Models and Azure OpenAI Service give customers access to advanced language models from OpenAI with the security and enterprise promise of Azure. Azure OpenAI codevelops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
-
- Customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. It offers private networking, regional availability, and responsible AI content filtering.
-
- Learn more about the [Azure OpenAI service](../../ai-services/openai/overview.md).
- - question: |
- What's the difference between Azure AI model inference and Azure AI Foundry?
- answer: |
- Azure AI services are a suite of AI services that provide prebuilt APIs for common AI scenarios. One of them is Azure AI model inference service which focuses on inference service of different state-of-the-art models. Azure AI Foundry portal is a web-based tool that allows you to build, train, and deploy machine learning models. Azure AI services can be used in Azure AI Foundry portal to enhance your models with prebuilt AI capabilities.
- - question: |
- What's the difference between Azure AI model inference service and Serverless API model deployments in Azure AI Foundry portal?
- answer: |
- Both technologies allow you to deploy models without requiring compute resources as they are based on the Models as a Service idea. [Serverless API model deployments](../how-to/deploy-models-serverless.md) allow you to deploy a single model under a unique endpoint and credentials. You need to create a different endpoint for each model you want to deploy. On top of that, they are always created in the context of the project and while they can be shared by creating connections from other projects, they live in the context of a given project.
-
- Azure AI model inference service allows you to deploy multiple models under the same endpoint and credentials. You can switch between models without changing your code. They are also in the context of a shared resource, the Azure AI Services resource, which implies you can connect the resource to any project or hub that requires to consume the models you made available. Azure AI model inference service comes with a built-in model routing capability that routes the request to the right model based on the model name you pass in the request.
-
- These two model deployment options have some differences in terms of their capabilities too. You can read about them at [../concepts/deployment-overview.md]
- - name: Models
- questions:
- - question: |
- Why aren't all the models in the Azure AI model catalog supported in Azure AI model inference in Azure AI Services?
- answer: |
- The Azure AI model inference service in AI services supports all the models in the Azure AI catalog with pay-as-you-go billing (per-token). For more information, see [the Models section](model-inference.md#models).
-
- The Azure AI model catalog contains a wider list of models, however, those models require compute quota from your subscription. They also need to have a project or AI hub where to host the deployment. For more information, see [deployment options in Azure AI Foundry portal](../concepts/deployments-overview.md).
- - question: |
- Why I can't add OpenAI o1-preview or OpenA o1-mini-preview to my resource?
- answer: |
- The Azure OpenAI Service o1 models require registration and are eligible only to customers on the Enterprise Agreement Offer. Subscriptions not under the Enterprise Agreement Offer are subject to denial. We onboard eligible customers as we have space. Due to high demand, eligible customers may remain on the waitlist until space is available.
-
- Other models ([see list](../../ai-services/openai/concepts/models.md)) don't require registration. [Learn more about limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/cognitive-services/model-inference/context/context).
- - name: SDKs and programming languages
- questions:
- - question: |
- Which are the supported SDKs and programming languages for Azure AI model inference service?
- answer: |
- You can use Azure Inference SDK with any model that is supported by:
- * The Azure AI Inference SDK
- * The `AzureOpenAI` class in OpenAI SDK
- * The Azure OpenAI SDK
-
- Cohere SDK, Mistral SDK, and model provider-specific SDKs are not supported when connected to Azure AI model inference service.
-
- For more information, see [supported SDKs and programming languages](concepts/endpoints.md).
- - question: |
- Does Azure AI model inference service work with the latest Python library released by OpenAI (version>=1.0)?
- answer: |
- The latest release of the [OpenAI Python library (version>=1.0)](https://pypi.org/project/openai/) supports Azure AI services.
- - question: |
- I'm making a request for a model that Azure AI model inference service supports, but I'm getting a 404 error. What should I do?
- answer: |
- Ensure you created a deployment for the given model and that the deployment name matches **exactly** the value you're passing in `model` parameter. Although routing isn't case sensitive, ensure there's no special punctuation or spaces typos.
- - question: |
- I'm using the azure-ai-inference package for Python and I get a 401 error when I try to authenticate using keys. What should I do?
- answer: |
- Azure AI Services resource requires the version `azure-ai-inference>=1.0.0b5` for Python. Ensure you're using that version.
- - question: |
- I'm using OpenAI SDK and indicated the Azure OpenAI inference endpoint as base URL (https://<resource-name>.openai.azure.com). However, I get a 404 error. What should I do?
- answer: |
- Ensure you're using the correct endpoint for the Azure OpenAI service and the right set of credentials. Also, ensure that you're using the class `AzureOpenAI` from the OpenAI SDK as the authentication mechanism and URLs used are different.
- - question: |
- Does Azure AI model inference service support custom API headers? We append other custom headers to our API requests and are seeing HTTP 431 failure errors.
- answer: |
- Our current APIs allow up to 10 custom headers, which are passed through the pipeline, and returned. We notice some customers now exceed this header count resulting in HTTP 431 errors. There's no solution for this error, other than to reduce header volume. We recommend customers not depend on custom headers in future system architectures.
- - name: Pricing and Billing
- questions:
- - question: |
- How is Azure AI model inference service billed?
- answer: |
- You're billed for inputs and outputs to the APIs, typically in tokens. There are no cost associated with the resource itself or the deployments.
-
- The token price varies per each model and you're billed per 1,000 tokens. You can see the pricing details before deploying a given model.
- - question: |
- Where can I see the bill details?
- answer: |
- Billing and costs are displayed in [Microsoft Cost Management + Billing](/azure/cost-management-billing/understand/download-azure-daily-usage). You can see the usage details in the [Azure portal](https://portal.azure.com).
-
- Billing isn't shown in Azure AI Foundry portal.
- - question: |
- How can I place a spending limit to my bill?
- answer: |
- You can set up a spending limit in the [Azure portal](https://portal.azure.com) under **Cost Management**. This limit prevents you from spending more than the amount you set. Once the spending limit is reached, the subscription is disabled and you can't use the endpoint until the next billing cycle. For more information, see [Tutorial: Create and manage budgets](/azure/cost-management-billing/costs/tutorial-acm-create-budgets).
- - name: Data and Privacy
- questions:
- - question: |
- Do you use my company data to train any of the models?
- answer: |
- Azure AI model inference doesn't use customer data to retrain models. Your data is never shared with model providers.
-additionalContent: |
\ No newline at end of file
Summary
{
"modification_type": "breaking change",
"modification_title": "Azure AIモデル推論サービスに関するFAQの削除"
}
Explanation
This change deletes the file faq.yml entirely, removing 119 lines. The document answered frequently asked questions about the Azure AI model inference service. Effects of the removal:
Reduced customer support: the FAQ addressed common questions directly, covering the differences between Azure OpenAI and the Azure AI model inference service, pricing and billing, and supported SDKs and programming languages. Losing it adds friction for users trying to adopt the service smoothly.
Less self-service: users could resolve questions through the FAQ without contacting support; without it, problem solving takes longer and is less efficient.
Documentation structure: removing the FAQ changes the shape of the Azure AI services documentation as a whole, leaving new users and developers one less intuitive entry point, so other documents must pick up that traffic.
This is a significant update to the usefulness of the Azure AI services documentation; the redirect file above points the old URL at /azure/ai-foundry/model-inference/faq, which should serve as the replacement.
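One concrete answer from the deleted FAQ is worth preserving here: key-based authentication with the Python inference package requires a minimum package version, per the removed text:

```bash
# From the deleted FAQ: an Azure AI Services resource requires
# azure-ai-inference>=1.0.0b5 for key-based auth in Python.
pip install "azure-ai-inference>=1.0.0b5"
```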
articles/ai-studio/ai-services/how-to/content-safety.md
Diff
@@ -1,115 +0,0 @@
----
-title: Use Content Safety in Azure AI Foundry portal
-titleSuffix: Azure AI services
-description: Learn how to use the Content Safety try it out page in Azure AI Foundry portal to experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
-ms.service: azure-ai-studio
-ms.custom:
- - ignite-2024
-ms.topic: how-to
-author: PatrickFarley
-manager: nitinme
-ms.date: 11/09/2024
-ms.author: pafarley
----
-
-# Use Content Safety in Azure AI Foundry portal
-
-Azure AI Foundry includes a Content Safety **try it out** page that lets you use the core detection models and other content safety features.
-
-## Prerequisites
-
-- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
-- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).
-
-
-## Setup
-
-Follow these steps to use the Content Safety **try it out** page:
-
-1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
-1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
-
-:::image type="content" source="../../media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
-
-## Analyze text
-
-1. Select the **Moderate text content** panel.
-1. Add text to the input field, or select sample text from the panels on the page.
-1. Select **Run test**.
- The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
-
-### Use a blocklist
-
-The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
-
-:::image type="content" source="../../media/content-safety/blocklist-panel.png" alt-text="Screenshot of the Use blocklist panel.":::
-
-## Analyze images
-
-The **Moderate image** page provides capability for you to quickly try out image moderation.
-
-1. Select the **Moderate image content** panel.
-1. Select a sample image from the panels on the page, or upload your own image.
-1. Select **Run test**.
- The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
-
-## View and export code
-
-You can use the **View Code** feature in either the **Analyze text content** or **Analyze image content** pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
-
-:::image type="content" source="../../media/content-safety/view-code-option.png" alt-text="Screenshot of the View code button.":::
-
-## Use Prompt Shields
-
-The **Prompt Shields** panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
-
-1. Select the **Prompt Shields** panel.
-1. Select a sample text on the page, or input your own content for testing.
-1. Select **Run test**.
- The service returns the risk flag and type for each sample.
-
-For more information, see the [Prompt Shields conceptual guide](/azure/ai-services/content-safety/concepts/jailbreak-detection).
-
-
-
-## Use Groundedness detection
-
-The Groundedness detection panel lets you detect whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
-
-1. Select the **Groundedness detection** panel.
-1. Select a sample content set on the page, or input your own for testing.
-1. Optionally, enable the reasoning feature and select your Azure OpenAI resource from the dropdown.
-1. Select **Run test**.
- The service returns the groundedness detection result.
-
-
-For more information, see the [Groundedness detection conceptual guide](/azure/ai-services/content-safety/concepts/groundedness).
-
-
-## Use Protected material detection
-
-This feature scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content).
-
-1. Select the **Protected material detection for text** or **Protected material detection for code** panel.
-1. Select a sample text on the page, or input your own for testing.
-1. Select **Run test**.
- The service returns the protected content result.
-
-For more information, see the [Protected material conceptual guide](/azure/ai-services/content-safety/concepts/protected-material).
-
-## Use custom categories
-
-This feature lets you create and train your own custom content categories and scan text for matches.
-
-1. Select the **Custom categories** panel.
-1. Select **Add a new category** to open a dialog box. Enter your category name and a text description, and connect a blob storage container with text training data. Select **Create and train**.
-1. Select a category and enter your sample input text, and select **Run test**.
- The service returns the custom category result.
-
-
-For more information, see the [Custom categories conceptual guide](/azure/ai-services/content-safety/concepts/custom-categories).
-
-
-## Next step
-
-To use Azure AI Content Safety features with your Generative AI models, see the [Content filtering](../../concepts/content-filtering.md) guide.
Summary
{
"modification_type": "breaking change",
"modification_title": "Azure AIの内容安全性機能に関する文書の削除"
}
Explanation
This change removes the file content-safety.md entirely, deleting 115 lines. The document described in detail how to use the content safety features in the Azure AI Foundry portal. The effects include the following.
- Loss of usage instructions: The original document walked through the steps for using the content safety features and the moderation techniques for text and images. Users lose a hands-on guide for filtering inappropriate or harmful content, which degrades the user experience.
- Fewer self-learning opportunities: With the walkthroughs gone, users lose a way to learn the features and to find immediate solutions to specific problems.
- Reduced service transparency: Removing the content safety information makes it harder for users of Azure AI services to understand how these features work, which can undermine trust.
For users interested in Azure's AI services, this removal means losing an important source of information about usage and safety. New documents or resources are expected to take its place.
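The Moderate text flow that the deleted page described maps onto the Azure AI Content Safety API. As a reference point, a minimal sketch using the azure-ai-contentsafety Python package; the endpoint and key are placeholders, and the severity scale (0-Safe through 6-High) matches the deleted page's description:

```python
# Minimal sketch: analyze text with Azure AI Content Safety and print
# the detected category severities (0-Safe, 2-Low, 4-Medium, 6-High).
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<resource-name>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<api-key>"),                      # placeholder
)

result = client.analyze_text(AnalyzeTextOptions(text="Text to moderate."))
for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```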
articles/ai-studio/ai-services/how-to/create-model-deployments.md
Diff
@@ -1,65 +0,0 @@
----
-title: Add and configure models to Azure AI model inference service
-titleSuffix: Azure AI services
-description: Learn how to add and configure new models to the Azure AI model's inference endpoint in Azure AI services.
-ms.service: azure-ai-studio
-ms.topic: how-to
-author: sdgilley
-manager: scottpolly
-ms.date: 10/24/2024
-ms.author: sgilley
-ms.reviewer: fasantia
-recommendations: false
----
-
-# Add and configure models to Azure AI model inference service
-
-You can decide and configure which models are available for inference in the resource's model inference endpoint. When a given model is configured, you can then generate predictions from it by indicating its model name or deployment name on your requests. No further changes are required in your code to use it.
-
-In this article, you learn how to add a new model to the Azure AI model inference service in Azure AI services.
-
-## Prerequisites
-
-To complete this article, you need:
-
-* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Learn more at [Upgrade from GitHub Models to Azure AI Models in AI Services](quickstart-github-models.md).
-* An Azure AI services resource. For more information, see [Create an Azure AI Services resource](../../../ai-services/multi-service-resource.md??context=/azure/ai-studio/context/context).
-
-
-## Add a model
-
-[!INCLUDE [add-model-deployments](../../includes/ai-services/add-model-deployments.md)]
-
-## Use the model
-
-Deployed models in Azure AI services can be consumed using the [Azure AI model's inference endpoint](../concepts/endpoints.md) for the resource.
-
-To use it:
-
-1. Get the Azure AI model's inference endpoint URL and keys from the **deployment page** or the **Overview** page. If you're using Microsoft Entra ID authentication, you don't need a key.
-
- :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png" alt-text="A screenshot showing how to get the URL and key associated with the deployment." lightbox="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png":::
-
-2. Use the model inference endpoint URL and the keys from before when constructing your client. The following example uses the Azure AI Inference package:
-
- [!INCLUDE [code-create-chat-client](../../includes/ai-services/code-create-chat-client.md)]
-
-3. When constructing your request, indicate the parameter `model` and insert the model deployment name you created.
-
- [!INCLUDE [code-create-chat-completion](../../includes/ai-services/code-create-chat-completion.md)]
-
-> [!TIP]
-> When using the endpoint, you can change the `model` parameter to any available model deployment in your resource.
-
-Additionally, Azure OpenAI models can be consumed using the [Azure OpenAI service endpoint](../../../ai-services/openai/supported-languages.md) in the resource. This endpoint is exclusive for each model deployment and has its own URL.
-
-## Model deployment customization
-
-When creating model deployments, you can configure other settings including content filtering and rate limits. To configure more settings, select the option **Customize** in the deployment wizard.
-
-> [!NOTE]
-> Configurations may vary depending on the model you're deploying.
-
-## Next steps
-
-* [Develop applications using Azure AI model inference service in Azure AI services](../concepts/endpoints.md)
Summary
{
"modification_type": "breaking change",
"modification_title": "Azure AIモデル推論サービスにモデルを追加する方法に関する文書の削除"
}
Explanation
This change removes the file create-model-deployments.md entirely, deleting 65 lines. The document explained, step by step, how to add a new model to the Azure AI model inference service and configure it. The impact of the removal is as follows.
- Loss of procedures: The deleted document showed concretely how to add and configure models. New users and developers lose the guidance they need to put models to work efficiently, which can make the service harder to adopt.
- Information asymmetry: With one fewer reference available, it becomes harder to build a solid understanding of the feature, raising the barrier especially for users who are new to Azure AI services.
- Pressure on support: With the document gone, questions and support requests from users are likely to increase, adding load on the support team.
This change has a significant impact on the experience of using Azure AI services. New guidelines or alternative resources are expected.
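The deleted page's key mechanic was that a configured deployment is addressed by passing its name in the `model` parameter of a request, with no other code changes. A minimal sketch of that pattern, assuming the azure-ai-inference Python package referenced by the page's includes; the endpoint URL, key, and deployment name are placeholders:

```python
# Minimal sketch: call a configured deployment through the Azure AI
# model inference endpoint, selecting it via the `model` parameter.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<resource-name>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<api-key>"),  # placeholder
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what a model deployment is."),
    ],
    model="<deployment-name>",  # any deployment configured on the resource
)
print(response.choices[0].message.content)
```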
articles/ai-studio/ai-services/how-to/quickstart-github-models.md
Diff
@@ -1,99 +0,0 @@
----
-title: Upgrade from GitHub Models to Azure AI model inference in Azure AI Services
-titleSuffix: Azure AI Services
-description: Learn how to upgrade your endpoint from GitHub Models to Azure AI Models in AI Services
-ms.service: azure-ai-studio
-ms.topic: how-to
-ms.date: 10/01/2024
-ms.custom: github-universe-2024
-manager: nitinme
-author: mrbullwinkle
-ms.author: fasantia
-recommendations: false
----
-
-# Upgrade from GitHub Models to the Azure AI model inference service
-
-If you want to develop a generative AI application, you can use [GitHub Models](https://docs.github.com/en/github-models/) to find and experiment with AI models for free. The playground and free API usage are [rate limited](https://docs.github.com/en/github-models/prototyping-with-ai-models#rate-limits) by requests per minute, requests per day, tokens per request, and concurrent requests. If you get rate limited, you need to wait for the rate limit that you hit to reset before you can make more requests.
-
-Once you're ready to bring your application to production, you can upgrade your experience by deploying an Azure AI Services resource in an Azure subscription and start using the Azure AI model inference service. You don't need to change anything else in your code.
-
-The following article explains how to get started from GitHub Models in Azure AI Models for Azure AI services.
-
-## Prerequisites
-
-To complete this tutorial, you need:
-
-* A GitHub account with access to [GitHub Models](https://docs.github.com/en/github-models/).
-
-* An Azure subscription. If you don't have one, you are prompted to create or update your Azure account to a pay as you go account when you're ready to deploy your model to production.
-
-## Upgrade to Azure AI Services
-
-The rate limits for the playground and free API usage are intended to help you experiment with models and develop your AI application. Once you're ready to bring your application to production, use a key and endpoint from a paid Azure account. You don't need to change anything else in your code.
-
-To obtain the key and endpoint:
-
-1. In the playground for your model, select **Get API key**.
-
-1. Select **Get production key**.
-
-1. If you don't have an Azure account, select Create my account and follow the steps to create one.
-
-1. If you have an Azure account, select **Sign back in**.
-
-1. If your existing account is a free account, you first have to upgrade to a Pay as you go plan. Once you upgrade, go back to the playground and select **Get API key** again, then sign in with your upgraded account.
-
-1. Once you've signed in to your Azure account, you're taken to [Azure AI Foundry](https://ai.azure.com).
-
-1. At the top of the page, select **Go to your GitHub AI resource** to go to Azure AI Foundry / GitHub](https://ai.azure.com/github). It might take one or two minutes to load your initial model details in Azure AI Foundry portal.
-
-1. The page is loaded with your model's details. Select the **Create a Deployment** button to deploy the model to your account.
-
-1. Once it's deployed, your model's API Key and endpoint are shown in the Overview. Use these values in your code to use the model in your production environment.
-
- :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png" alt-text="A screenshot showing how to get the URL and key associated with the deployment." lightbox="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png":::
-
-At this point, the model you selected is ready to consume.
-
-> [!TIP]
-> Use the parameter `model="<deployment-name>` to route your request to this deployment. *Deployments work as an alias of a given model under certain configurations*. See [Routing](../concepts/endpoints.md#routing) concept page to learn how Azure AI Services route deployments.
-
-## Upgrade your code to use the new endpoint
-
-Once your Azure AI Services resource is configured, you can start consuming it from your code. You need the endpoint URL and key for it, which can be found in the **Overview** section:
-
-:::image type="content" source="../../media/ai-services/overview/overview-endpoint-and-key.png" alt-text="A screenshot showing how to get the URL and key associated with the resource." lightbox="../../media/ai-services/overview/overview-endpoint-and-key.png":::
-
-You can use any of the supported SDKs to get predictions out from the endpoint. The following SDKs are officially supported:
-
-* OpenAI SDK
-* Azure OpenAI SDK
-* Azure AI Inference SDK
-
-See the [supported languages and SDKs](../concepts/endpoints.md#azure-ai-inference-endpoint) section for more details and examples. The following example shows how to use the Azure AI model inference SDK with the newly deployed model:
-
-[!INCLUDE [code-create-chat-client](../../includes/ai-services/code-create-chat-client.md)]
-
-Generate your first chat completion:
-
-[!INCLUDE [code-create-chat-completion](../../includes/ai-services/code-create-chat-completion.md)]
-
-## Explore more features
-
-Azure AI model inference supports more features not available in GitHub Models, including:
-
-* [Explore the model catalog](https://ai.azure.com/github/models) to see other models not available in GitHub Models.
-* Configure [content filtering](../../concepts/content-filtering.md).
-* Configure rate limiting (for specific models).
-* Explore more [deployment SKUs (for specific models)](../concepts/deployment-types.md).
-* Configure [private networking](../../../ai-services/cognitive-services-virtual-networks.md?context=/azure/ai-studio/context/context).
-
-## Got troubles?
-
-See the [FAQ section](../faq.yml) to explore more help.
-
-## Next steps
-
-* [Add more models](create-model-deployments.md) to your endpoint.
-* [Explore the model catalog](https://ai.azure.com/github/models) in Azure AI Foundry portal.
\ No newline at end of file
Summary
{
"modification_type": "breaking change",
"modification_title": "GitHubモデルからAzure AIモデル推論サービスへのアップグレードに関する文書の削除"
}
Explanation
This change removes the file quickstart-github-models.md entirely, deleting 99 lines. The document explained in detail how to upgrade from GitHub Models to the Azure AI model inference service. The effects of the removal are as follows.
- Loss of user guidance: The deleted document laid out the concrete steps for moving from GitHub Models to the Azure AI model inference service. Users lose the information and procedures they need for the migration, which may make it harder to complete.
- Increased need for support: As it becomes harder for users to find the information themselves, inquiries to the support team are likely to rise; users unfamiliar with the new service in particular tend to seek help during migration.
- Inconsistent information: Other resources that referenced the deleted document may now be out of sync with it, hindering users' correct understanding of the Azure features.
For users looking to adopt Azure AI services, this change means the loss of important information. Replacement documents or resources are expected.
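The core promise of the deleted quickstart was that moving from prototyping to production changes only the endpoint and credential, not the calling code. A hedged sketch of that swap, assuming the azure-ai-inference Python package; the GitHub Models endpoint shown is the commonly documented one, and all other values are placeholders:

```python
# Sketch: the same client code serves GitHub Models (free, rate-limited)
# and the Azure AI model inference endpoint; only endpoint/key change.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# Prototyping against GitHub Models (assumed endpoint):
# endpoint, key = "https://models.inference.ai.azure.com", os.environ["GITHUB_TOKEN"]

# Production against Azure AI services (placeholders):
endpoint = "https://<resource-name>.services.ai.azure.com/models"
key = os.environ["AZURE_AI_KEY"]

client = ChatCompletionsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
response = client.complete(
    messages=[{"role": "user", "content": "Hello!"}],
    model="<deployment-name>",  # routes to the chosen deployment
)
print(response.choices[0].message.content)
```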
articles/ai-studio/ai-services/model-inference.md
Diff
@@ -1,46 +0,0 @@
----
-title: What is Azure AI model inference service?
-titleSuffix: Azure AI Foundry
-description: Apply advanced language models to variety of use cases with Azure AI model inference
-manager: nitinme
-author: mrbullwinkle
-ms.author: fasantia
-ms.service: azure-ai-studio
-ms.topic: overview
-ms.date: 08/14/2024
-ms.custom: github-universe-2024
-recommendations: false
----
-
-# What is Azure AI model inference service?
-
-Azure AI models inference service provides access to the most powerful models available in the **Azure AI model catalog**. Coming from the key model providers in the industry including OpenAI, Microsoft, Meta, Mistral, Cohere, G42, and AI21 Labs; these models can be integrated with software solutions to deliver a wide range of tasks including content generation, summarization, image understanding, semantic search, and code generation.
-
-The Azure AI model inference service provides a way to **consume models as APIs without hosting them on your infrastructure**. Models are hosted in a Microsoft-managed infrastructure, which enables API-based access to the model provider's model. API-based access can dramatically reduce the cost of accessing a model and simplify the provisioning experience.
-
-## Models
-
-You can get access to the key model providers in the industry including OpenAI, Microsoft, Meta, Mistral, Cohere, G42, and AI21 Labs. Model providers define the license terms and set the price for use of their models. The following list shows all the models available:
-
-| Model provider | Models |
-| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| AI21 Labs | - AI21-Jamba-1.5-Mini <br/> - AI21-Jamba-1.5-Large </br> |
-| Azure OpenAI | - o1-preview ([Request Access](https://aka.ms/oai/modelaccess)) </br> - o1-mini ([Request Access](https://aka.ms/oai/modelaccess)) </br> - gpt-4o-mini </br> - gpt-4o </br> - text-embedding-3-small </br> - text-embedding-3-large </br> |
-| Cohere | - Cohere-command-r-plus-08-2024 </br> - Cohere-command-r-08-2024 </br> - Cohere-embed-v3-multilingual </br> - Cohere-embed-v3-english </br> - Cohere-command-r-plus </br> - Cohere-command-r </br> |
-| Meta AI | - Meta-Llama-3-8B-Instruct </br> - Meta-Llama-3-70B-Instruct </br> - Meta-Llama-3.1-8B-Instruct</br> - Meta-Llama-3.1-70B-Instruct </br> - Meta-Llama-3.1-405B-Instruct </br> - Llama-3.2-11B-Vision-Instruct </br> - Llama-3.2-90B-Vision-Instruct |
-| Mistral AI | - Mistral-Small </br> - Mistral-Nemo </br> - Mistral-large </br> - Mistral-large-2407 |
-| Microsoft | - Phi-3-mini-4k-instruct </br> - Phi-3-medium-4k-instruct </br> - Phi-3-mini-128k-instruct </br> - Phi-3-medium-128k-instruct </br> - Phi-3-small-8k-instruct </br> - Phi-3-small-128k-instruct </br> - Phi-3.5-vision-instruct </br> - Phi-3.5-mini-instruct </br> - Phi-3.5-MoE-instruct </br> |
-
-You can [decide and configure which models are available for inference](how-to/create-model-deployments.md) in the created resource. When a given model is configured, you can then generate predictions from it by indicating its model name or deployment name on your requests. No further changes are required in your code to use it.
-
-To learn how to add models to the Azure AI model inference resource and use them read [Add and configure models to Azure AI Models in Azure AI model inference](how-to/create-model-deployments.md).
-
-## Pricing
-
-Models that are offered by non-Microsoft providers (for example, Meta AI and Mistral models) are billed through the Azure Marketplace. For such models, you're required to subscribe to the particular model offering in accordance with the [Microsoft Commercial Marketplace Terms of Use](/legal/marketplace/marketplace-terms). Users accept license terms for use of the models. Pricing information for consumption is provided during deployment.
-
-Models that are offered by Microsoft (for example, Phi-3 models and Azure OpenAI models) don't have this requirement, and they are billed via Azure meters as First Party Consumption Services. As described in the [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage), you purchase First Party Consumption Services by using Azure meters, but they aren't subject to Azure service terms.
-
-## Next steps
-
-* [Create your first model deployment in Azure AI model inference](how-to/create-model-deployments.md)
\ No newline at end of file
Summary
{
"modification_type": "breaking change",
"modification_title": "Azure AIモデル推論サービスに関する文書の削除"
}
Explanation
This change removes the file model-inference.md entirely, deleting 46 lines. The document covered the overview of the Azure AI model inference service, the available models, pricing, and how to use the service. The effects are as follows.
- Harder to understand the service: The deleted document explained the capabilities the service offers and how to use its API. Users considering the service now have a harder time learning what it is and how to work with it.
- Loss of model-selection guidance: The deleted content included the list of available models and information on how to choose and configure them. Without it, users will have difficulty finding the right model, which can also affect implementation decisions.
- Unclear pricing: The document explained each model's billing terms as well, so its removal leaves users without a way to learn about costs; this loss matters especially to developers and companies working within a budget.
For users of the Azure AI model inference service, this is a critical loss of information. Replacement documentation or resources are expected.
articles/ai-studio/concepts/content-filtering.md
Diff
@@ -9,7 +9,7 @@ ms.custom:
- build-2024
- ignite-2024
ms.topic: conceptual
-ms.date: 5/21/2024
+ms.date: 01/10/2025
ms.reviewer: eur
ms.author: pafarley
author: PatrickFarley
@@ -59,15 +59,15 @@ The following special filters work for both input and output of generative AI mo
### Other input filters
You can also enable special filters for generative AI scenarios:
-- Jailbreak attacks: Jailbreak Attacks are User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message.
-- Indirect attacks: Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, are a potential vulnerability where third parties place malicious instructions inside of documents that the Generative AI system can access and process.
+- **Jailbreak attacks**: Jailbreak Attacks are User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message.
+- **Indirect attacks**: Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, are a potential vulnerability where third parties place malicious instructions inside of documents that the Generative AI system can access and process.
### Other output filters
You can also enable the following special output filters:
-- Protected material for text: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models.
-- Protected material for code: Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories.
-- Groundedness: The groundedness detection filter detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
+- **Protected material for text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models.
+- **Protected material for code**: Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories.
+- **Groundedness**: The groundedness detection filter detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users.
[!INCLUDE [create-content-filter](../includes/create-content-filter.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "コンテンツフィルタリングに関する文書の改訂"
}
Explanation
This change revises content-filtering.md, adding 6 lines and deleting 6. The main changes are bold emphasis on the names of specific filters and an updated document date.
- Visual emphasis: The descriptions of jailbreak attacks and indirect attacks against generative AI models now lead with bold labels, so the list is easier to scan and the key terms stand out at a glance.
- Clearer terminology: The descriptions of the special input and output filters read more clearly than before; the added emphasis makes the information easier for new users to absorb.
- Updated date: The document date changed from 5/21/2024 to 01/10/2025, signaling that the content reflects current information and giving readers more confidence in it.
Overall, this revision improves the document's scannability and readability, with the aim of communicating the key content filtering information clearly.
articles/ai-studio/concepts/deployments-overview.md
Diff
@@ -29,9 +29,9 @@ Deployment options vary depending on the model type:
Azure AI Foundry offers four different deployment options:
-|Name | Azure OpenAI service | Azure AI model inference service | Serverless API | Managed compute |
+|Name | Azure OpenAI service | Azure AI model inference | Serverless API | Managed compute |
|-------------------------------|----------------------|-------------------|----------------|-----------------|
-| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Azure OpenAI models and Models as a Service](../ai-services/model-inference.md#models) | [Models as a Service](../how-to/model-catalog-overview.md#content-safety-for-models-deployed-via-serverless-apis) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
+| Which models can be deployed? | [Azure OpenAI models](../../ai-services/openai/concepts/models.md) | [Azure OpenAI models and Models as a Service](../../ai-foundry/model-inference/concepts/models.md) | [Models as a Service](../how-to/model-catalog-overview.md#content-safety-for-models-deployed-via-serverless-apis) | [Open and custom models](../how-to/model-catalog-overview.md#availability-of-models-for-deployment-as-managed-compute) |
| Deployment resource | Azure OpenAI resource | Azure AI services resource | AI project resource | AI project resource |
| Best suited when | You are planning to use only OpenAI models | You are planning to take advantage of the flagship models in Azure AI catalog, including OpenAI. | You are planning to use a single model from a specific provider (excluding OpenAI). | If you plan to use open models and you have enough compute quota available in your subscription. |
| Billing bases | Token usage & PTU | Token usage | Token usage<sup>1</sup> | Compute core hours<sup>2</sup> |
@@ -48,7 +48,7 @@ Azure AI Foundry offers four different deployment options:
Azure AI Foundry encourages customers to explore the deployment options and pick the one that best suites their business and technical needs. In general you can use the following thinking process:
-1. Start with the deployment options that have the bigger scopes. This allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. [Azure AI model inference service](../ai-services/model-inference.md) is a deployment target that supports all the flagship models in the Azure AI catalog, including latest innovation from Azure OpenAI.
+1. Start with the deployment options that have the bigger scopes. This allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. [Azure AI model inference](../../ai-foundry/model-inference/overview.md) is a deployment target that supports all the flagship models in the Azure AI catalog, including latest innovation from Azure OpenAI. To get started, follow [Configure your AI project to use Azure AI model inference](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md).
2. When you are looking to use a specific model:
@@ -63,7 +63,8 @@ Azure AI Foundry encourages customers to explore the deployment options and pick
## Related content
-* [Add and configure models to the Azure AI model inference service](../ai-services/how-to/create-model-deployments.md)
+* [Configure your AI project to use Azure AI model inference](../../ai-foundry/model-inference/how-to/quickstart-ai-project.md)
+* [Add and configure models to Azure AI model inference](../ai-services/how-to/create-model-deployments.md)
* [Deploy Azure OpenAI models with Azure AI Foundry](../how-to/deploy-models-openai.md)
* [Deploy open models with Azure AI Foundry](../how-to/deploy-models-open.md)
* [Model catalog and collections in Azure AI Foundry portal](../how-to/model-catalog-overview.md)
Summary
{
"modification_type": "minor update",
"modification_title": "デプロイメントオーバービューに関する文書の修正"
}
Explanation
This change revises deployments-overview.md, adding 5 lines and deleting 4. The main changes are updated links and clarified wording.
- Link fixes: Links related to Azure AI model inference now point to more appropriate targets; for example, "Azure AI model inference service" becomes "Azure AI model inference", and related links are updated to the new ai-foundry paths, so readers reach current information more easily.
- Strengthened guidance: Newly added content makes the recommended workflow more concrete; in particular, a link to "Configure your AI project to use Azure AI model inference" was added so users can find the setup steps they need.
- Better readability: The deployment options table is tidied up and some sentences are phrased more clearly, which aids comprehension.
Overall, this revision improves the relevance and usefulness of the document and presents the deployment options in Azure AI Foundry more clearly.
articles/ai-studio/how-to/configure-managed-network.md
Diff
@@ -786,7 +786,6 @@ The hosts in this section are used to install Visual Studio Code packages to est
| `code.visualstudio.com` | Required to download and install VS Code desktop. This host isn't required for VS Code Web. |
| `update.code.visualstudio.com`<br>`*.vo.msecnd.net` | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
| `marketplace.visualstudio.com`<br>`vscode.blob.core.windows.net`<br>`*.gallerycdn.vsassets.io` | Required to download and install VS Code extensions. These hosts enable the remote connection to compute instances. For more information, see [Get started with Azure AI Foundry projects in VS Code](./develop/vscode.md). |
-| `https://github.com/microsoft/vscode-tools-for-ai/tree/master/azureml_remote_websocket_server/*`<br>`raw.githubusercontent.com` | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |
| `vscode.download.prss.microsoft.com` | Used for Visual Studio Code download CDN |
#### Ports
Summary
{
"modification_type": "minor update",
"modification_title": "管理ネットワークの構成に関するファイルの修正"
}
Explanation
This change revises configure-managed-network.md, deleting 1 line: the table row describing the Visual Studio Code websocket server hosts.
- Removal of the websocket server hosts: The row listing `https://github.com/microsoft/vscode-tools-for-ai/tree/master/azureml_remote_websocket_server/*` and `raw.githubusercontent.com` was deleted. It documented the websocket server bits installed on the compute instance, which relayed requests from the Visual Studio Code desktop client to the Visual Studio Code server running on the instance.
- Simpler content: With the websocket-specific details gone, the host table is shorter and the remaining guidance is easier to read.
This revision clarifies the managed network document by removing information that is no longer needed.
articles/ai-studio/how-to/data-image-add.md
Diff
@@ -1,172 +0,0 @@
----
-title: 'Use your image data with Azure OpenAI Service'
-titleSuffix: Azure AI Foundry
-description: Use this article to learn about using your image data for image generation in Azure AI Foundry portal.
-manager: nitinme
-ms.service: azure-ai-studio
-ms.custom:
- - build-2024
-ms.topic: how-to
-ms.date: 5/21/2024
-ms.reviewer: sgilley
-ms.author: pafarley
-author: PatrickFarley
----
-
-# Azure OpenAI enterprise chat with images using GPT-4 Turbo with Vision
-
-[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-
-Use this article to learn how to provide your own image data for GPT-4 Turbo with Vision, Azure OpenAI's vision model. GPT-4 Turbo with Vision enterprise chat allows the model to generate more customized and targeted answers using retrieval augmented generation based on your own images and image metadata.
-
-> [!TIP]
-> This article is for using your image data on the GPT-4 Turbo with Vision model. See [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md) for a tutorial on how to deploy a chat web app using your text data.
-
-## Prerequisites
-
-- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
-- An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).
-- Be sure that you're assigned at least the [Cognitive Services Contributor role](../../ai-services/openai/how-to/role-based-access-control.md#cognitive-services-contributor) for the Azure OpenAI resource.
-- An Azure AI Search resource. See [create an Azure AI Search service in the portal](/azure/search/search-create-service-portal). If you don't have an Azure AI Search resource, you're prompted to create one when you add your data source later in this guide.
-- An [Azure AI Foundry hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource and Azure AI Search resource added as connections.
-
-
-## Deploy a GPT-4 Turbo with Vision model
-
-1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select the hub you'd like to work in.
-1. On the left nav menu, select **AI Services**. Select the **Try out GPT-4 Turbo** panel.
-1. On the gpt-4 page, select **Deploy**. In the window that appears, select your Azure OpenAI resource. Select `vision-preview` as the model version.
-1. Select **Deploy**.
-1. Next, go to your new model's page and select **Open in playground**. In the chat playground, the GPT-4 deployment you created should be selected in the **Deployment** dropdown.
- :::image type="content" source="../media/quickstarts/multimodal-vision/chat-multi-modal-image-select.png" alt-text="Screenshot of the chat playground with mode and deployment highlighted." lightbox="../media/quickstarts/multimodal-vision/chat-multi-modal-image-select.png":::
-
-## Select your image data source
-
-1. On the left pane, select the **Add your data** tab and select **Add a data source**.
-1. In the window that appears, select a data source option. Each option uses an Azure AI Search index that's trained on your images and can be used for retrieval augmented generation in the chat playground.
- * **Azure AI Search**: If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source.
- * **Azure Blob Storage**: The Azure Blob storage option is especially useful if you have a large number of image files and don't want to manually upload each one. Azure AI Foundry will generate an image search index for you.
- * **Upload image files and metadata**: You can upload image files and metadata using the playground. This option is useful if you have a small number of image files. Azure AI Foundry will generate an image search index for you.
-
-## Add your image data
-
-# [Azure AI Search](#tab/azure-ai-search)
-
-If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. If you don't already have a search index but you'd like to create one on your own, follow the [AI Search vector search repository on GitHub](https://github.com/Azure/cognitive-search-vector-pr), which provides scripts to create an index with your image files.
-
-1. In the playground's **Select or add data source** window, choose **Azure AI Search** and enter your index's details. Select the boxes to acknowledge that deployments and connections incur usage on your account.
-1. Optionally enable the **Use custom field mapping** option. This lets you control the mapping between the custom fields in your search index and the standard fields that Azure OpenAI chat models use during retrieval augmented generation.
-1. Select **Next** and review your settings on the next page. Then select **Save and close**.
-1. In the chat playground, you can see that your data has been added.
-
-# [Azure Blob storage](#tab/azure-blob-storage)
-
-If you have an existing [Azure Blob Storage](/azure/storage/blobs/storage-blobs-introduction) container with images, you can use it to create an image search index. If you want to create a new blob storage account, see the [Azure Blob storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) documentation.
-
-Your Azure Blob storage account must contain both image files and a JSON file with the image file paths and metadata.
-
-Your metadata JSON file must:
-- Have a file name that starts with the word `metadata` (all in lowercase without a space).
-- List no more than 10,000 image files. If you have more files in your container, you can have multiple JSON files each with up to this maximum.
-
-The JSON metadata file should be formatted like this:
-
-```json
-[
- {
- "image_blob_path": "image1.jpg",
- "description": "description of image1"
- },
- {
- "image_blob_path": "image2.jpg",
- "description": "description of image2"
- },
- ...
- {
- "image_blob_path": "image50.jpg",
- "description": "description of image50"
- }
-]
-```
-
-After you have a blob storage container populated with image files and at least one metadata JSON file, you're ready to add the blob storage as a data source.
-
-1. In the playground's **Select or add data source** window, choose **Azure Blob Storage** and enter your data source details. Also choose a name for the Azure AI Search index that will be created.
-
- > [!NOTE]
- > Azure OpenAI needs both a storage account resource and a search resource to access and index your data. Your data is stored securely in your Azure subscription.
- >
- > When adding data to the selected storage account for the first time in Azure AI Foundry portal, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Foundry and Azure OpenAI need access your Azure Blob storage account.
-
- :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-blob.png" alt-text="A screenshot showing the Azure storage account and Azure AI Search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-blob.png":::
-
-1. Select the boxes to acknowledge that deployments and connections incur usage on your account. Then select **Next**.
-
-1. Review the details you entered, and select **Save and close**.
-
- :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-blob-review-finish.png" alt-text="Screenshot of the review and finish page for adding data via Azure blob storage." lightbox="../media/data-add/use-your-image-data/add-your-data-blob-review-finish.png":::
-
-1. Now in the chat playground, you can see that your data ingestion is in progress. Before proceeding, wait until you see the data source and index name in place of the status.
-
-# [Upload image files and metadata](#tab/upload-image-files-and-metadata)
-
-1. In the **Select or add data source** page, select **Upload files** from the **Select data source** dropdown.
-
-1. Enter your data source details. Also choose a name for the Azure AI Search index that will be created.
-
- > [!NOTE]
- > Azure OpenAI needs both a storage account resource and a search resource to access and index your data. Your data is stored securely in your Azure subscription.
- >
- > When adding data to the selected storage account for the first time in Azure AI Foundry portal, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Foundry and Azure OpenAI need access your Azure Blob storage account.
-
- :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-upload.png" alt-text="A screenshot showing the storage account and index selection for image file upload." lightbox="../media/data-add/use-your-image-data/add-image-data-upload.png":::
-
-1. Select the boxes to acknowledge that deployments and connections incur usage on your account. Then select **Next**.
-1. On the **Upload files** page, select **Browse for a file** and select the files you want to upload. If you want to upload more than one file, do so now. You won't be able to add more files later in the same playground session.
-
- The following file types are supported for your image files, up to 16 MB in size:
- * .jpg
- * .png
- * .gif
- * .bmp
- * .tiff
-
-1. Select **Upload** to upload the files to your Azure Blob storage account. Then select **Next**.
-
- :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-uploaded.png" alt-text="Screenshot of the dialog to select and upload files." lightbox="../media/data-add/use-your-image-data/add-your-data-uploaded.png":::
-
-1. On the **Add metadata** page, enter a text description for each image in the corresponding text fields. Then select **Next**.
-
- :::image type="content" source="../media/data-add/use-your-image-data/add-image-metadata.png" alt-text="A screenshot showing the metadata entry field." lightbox="../media/data-add/use-your-image-data/add-image-metadata.png":::
-
-1. Review the details you entered. You can see the names of the storage container and search index that will be created for you. Select **Save and close**.
-
- :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-review-finish.png" alt-text="Screenshot of the review and finish page for adding data." lightbox="../media/data-add/use-your-image-data/add-your-data-review-finish.png":::
-
-1. Now in the chat playground, you can see that your data ingestion is in progress. Before proceeding, wait until you see the data source and index name in place of the status.
-
----
-
-
-## Use your data with your GPT-4 Turbo with Vision model
-
-After you add your image data, you can try out a chat conversation that's grounded on your image data.
-
-1. Use the attachment button in the chat window to upload a new image. Ask a question about its relationship to the other images in your data set.
-
- <!--:::image type="content" source="../media/data-add/use-your-image-data/select-image-for-chat.png" alt-text="Screenshot of the chat playground with the status of data ingestion in view." lightbox="../media/data-add/use-your-image-data/select-image-for-chat.png":::-->
-
-2. The model will respond with an answer that's grounded on your image data.
-
- <!--:::image type="content" source="../media/data-add/use-your-image-data/chat-with-data.png" alt-text="Screenshot of the assistant's reply with grounding data." lightbox="../media/data-add/use-your-image-data/chat-with-data.png":::-->
-
-## Add and remove data sources
-
-Azure OpenAI only allows one data source to be used per a chat session. If you want to add a new data source, you must remove the existing data source first. Do this by selecting **Remove data source** under your data source information.
-
-When you remove a data source, you'll see a warning message. Removing a data source clears the chat session and resets all playground settings.
-
-## Next steps
-
-- Learn how to [create a project in Azure AI Foundry portal](./create-projects.md).
-- [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md)
Summary
{
"modification_type": "breaking change",
"modification_title": "データ画像追加に関する記事の削除"
}
Explanation
This change removes data-image-add.md entirely, losing 172 lines of content. The article explained how to add your own image data in Azure AI Foundry and use it with the GPT-4 Turbo with Vision model. The main effects are as follows.
- Complete loss of content: The article covered the steps for supplying image data to Azure OpenAI's vision model, the prerequisites, and the details of selecting a data source. Users who need these guidelines have lost an important resource.
- Impact on users: Anyone interested in deploying or managing AI models over image data must now look for alternative documentation; with that reference gone, their work may be slowed.
- Structural impact: The removal can affect the overall documentation structure as well as other related documents and workflows.
Overall, this change removes important information about managing image data in Azure AI Foundry, and users will need to find replacement resources.
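One concrete detail worth preserving from the deleted article is the blob storage metadata format: a JSON file whose name starts with `metadata`, listing at most 10,000 image entries. A small hedged sketch of generating such a file for a local folder of images; the folder layout, file extension, and empty descriptions are assumptions:

```python
# Sketch: emit a metadata JSON file in the shape the deleted article
# described (a list of {image_blob_path, description}, <= 10,000 entries,
# file name starting with "metadata").
import json
from pathlib import Path

def write_metadata(image_dir: str, out_path: str = "metadata.json",
                   limit: int = 10_000) -> None:
    images = sorted(Path(image_dir).glob("*.jpg"))[:limit]  # assumed extension
    entries = [
        {"image_blob_path": p.name, "description": ""}  # fill in descriptions
        for p in images
    ]
    Path(out_path).write_text(json.dumps(entries, indent=2))

write_metadata("./images")  # placeholder folder
```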
articles/ai-studio/how-to/deploy-models-managed.md
Diff
@@ -16,7 +16,7 @@ author: msakande
# How to deploy and inference a managed compute deployment with code
-the Azure AI Foundry portal [model catalog](../how-to/model-catalog-overview.md) offers over 1,600 models, and the most common way to deploy these models is to use the managed compute deployment option, which is also sometimes referred to as a managed online deployment.
+The Azure AI Foundry portal [model catalog](../how-to/model-catalog-overview.md) offers over 1,600 models, and the most common way to deploy these models is to use the managed compute deployment option, which is also sometimes referred to as a managed online deployment.
Deployment of a large language model (LLM) makes it available for use in a website, an application, or other production environment. Deployment typically involves hosting the model on a server or in the cloud and creating an API or other interface for users to interact with the model. You can invoke the deployment for real-time inference of generative AI applications such as chat and copilot.
Summary
{
"modification_type": "minor update",
"modification_title": "マネージドコンピュートデプロイメントに関する文の修正"
}
Explanation
This change makes one addition and one deletion in deploy-models-managed.md, for two changed lines in total. The opening sentence about the Azure AI Foundry portal's model catalog was corrected.
- Wording fix: The sentence now begins with a capitalized "The" ("The Azure AI Foundry portal model catalog offers over 1,600 models..."); the meaning is unchanged. Small fixes like this keep the prose clean and avoid distracting readers.
- Why it matters: Because this document covers the key steps and concepts of managed compute deployment, accurate and well-edited wording is important.
This is purely a copy edit; it does not change the substance of the document, but it supports cleaner and more effective communication.
articles/ai-studio/includes/use-blocklists.md
Diff
@@ -14,17 +14,22 @@ ms.custom: include
## Create a blocklist
1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** page on the left nav and select the **Blocklists** tab.
+
:::image type="content" source="../media/content-safety/content-filter/select-blocklists.png" lightbox="../media/content-safety/content-filter/select-blocklists.png" alt-text="Screenshot of the Blocklists page tab.":::
-1. Select **Create a blocklist**. Enter a name for your blocklist, add a description, and select an Azure OpenAI resource to connect it to. Then select **Create Blocklist**.
-1. Select your new blocklist once it's created. On the blocklist's page, select **Add new term**.
-1. Enter the term that should be filtered and select **Add term**. You can also use a regex.
- You can delete each term in your blocklist.
+
+2. Select **Create a blocklist**. Enter a name for your blocklist, add a description, and select an Azure OpenAI resource to connect it to. Then select **Create Blocklist**.
+
+3. Select your new blocklist once it's created. On the blocklist's page, select **Add new term**.
+
+4. Enter the term that should be filtered and select **Add term**. You can also use a regex. You can delete each term in your blocklist.
## Attach a blocklist to a content filter configuration
1. Once the blocklist is ready, go back to the **Safety+ Security** page and select the **Content filters** tab. Create a new content filter configuration. This opens a wizard with several AI content safety components.
+
:::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" lightbox="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the Create content filter button.":::
-1. On the **Input filter** and **Output filter** screens, toggle the **Blocklist** button on. You can then select a blocklist from the list.
+
+2. On the **Input filter** and **Output filter** screens, toggle the **Blocklist** button on. You can then select a blocklist from the list.
There are two types of blocklists: the custom blocklists you created, and prebuilt blocklists that Microsoft provides—in this case a Profanity blocklist (English).
-1. You can now decide which of the available blocklists you want to include in your content filtering configuration. The last step is to review and finish the content filtering configuration by selecting **Next**.
- You can always go back and edit your configuration. Once it’s ready, select a **Create content filter**. The new configuration that includes your blocklists can now be applied to a deployment.
\ No newline at end of file
+
+3. You can now decide which of the available blocklists you want to include in your content filtering configuration. The last step is to review and finish the content filtering configuration by selecting **Next**. You can always go back and edit your configuration. Once it’s ready, select a **Create content filter**. The new configuration that includes your blocklists can now be applied to a deployment.
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "ブロックリスト使用に関する手順の改訂"
}
Explanation
This change updates use-blocklists.md, adding 12 lines and deleting 7, for 19 changes in total. It mainly revises the steps for creating and using a blocklist.
- Clearer steps: The blocklist creation steps are now explicitly numbered and separated by blank lines; in particular, the step of selecting the newly created blocklist is called out, so users can follow the flow without getting lost.
- Visual information: The screenshots of the Blocklists page and the Create content filter button are set off on their own lines, showing the actual interface and making the steps easier to follow.
- Better organization: The steps are organized more cleanly across the two sections, which should make the procedure easier to understand, especially for new users.
This change improves the blocklist guidance so that users can work more effectively in Azure AI Foundry.
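The portal steps above have a programmatic counterpart in the standalone Azure AI Content Safety service. As a hedged reference (this is the Content Safety SDK's blocklist API, not the Foundry portal flow; endpoint, key, and names are placeholders), a minimal sketch with the azure-ai-contentsafety Python package:

```python
# Sketch: create a text blocklist and add a term via the Azure AI
# Content Safety SDK (analogous to the portal's blocklist steps).
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

client = BlocklistClient(
    endpoint="https://<resource-name>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<api-key>"),                      # placeholder
)

name = "my-blocklist"  # placeholder name
client.create_or_update_text_blocklist(
    blocklist_name=name,
    options=TextBlocklist(blocklist_name=name, description="Terms to filter."),
)
client.add_or_update_blocklist_items(
    blocklist_name=name,
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="term-to-block")],
    ),
)
```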
articles/ai-studio/quickstarts/multimodal-vision.md
Diff
@@ -1,31 +1,33 @@
---
-title: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal
+title: Get started vision-enabled chats in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal.
+description: Get started using vision-enabled chats in Azure AI Foundry portal.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
- build-2024
ms.topic: quickstart
-ms.date: 5/21/2024
+ms.date: 01/10/2025
ms.reviewer: eur
ms.author: pafarley
author: PatrickFarley
---
-# Quickstart: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal
+# Quickstart: Get started using vision-enabled chats in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Use this article to get started using [Azure AI Foundry](https://ai.azure.com) to deploy and test the GPT-4 Turbo with Vision model.
+Use this article to get started using [Azure AI Foundry](https://ai.azure.com) to deploy and test a chat completion model with image understanding.
+<!--
GPT-4 Turbo with Vision and [Azure AI Vision](../../ai-services/computer-vision/overview.md) offer advanced functionality including:
- Optical Character Recognition (OCR): Extracts text from images and combines it with the user's prompt and image to expand the context.
- Object grounding: Complements the GPT-4 Turbo with Vision text response with object grounding and outlines salient objects in the input images.
- Video prompts: GPT-4 Turbo with Vision can answer questions by retrieving the video frames most relevant to the user's prompt.
+-->
-Extra usage fees might apply when using GPT-4 Turbo with Vision and Azure AI Vision functionality.
+Extra usage fees might apply when using chat completion models with vision functionality.
## Prerequisites
@@ -35,29 +37,29 @@ Extra usage fees might apply when using GPT-4 Turbo with Vision and Azure AI Vis
## Prepare your media
-You need an image to complete the image quickstarts. You can use this sample image or any other image you have available.
+You need an image to complete this quickstart. You can use this sample image or any other image you have available.
:::image type="content" source="../media/quickstarts/multimodal-vision/car-accident.png" alt-text="Photo of a car accident that can be used to complete the quickstart." lightbox="../media/quickstarts/multimodal-vision/car-accident.png":::
-For video prompts, you need a video that's under three minutes in length.
-
-## Deploy a GPT-4 Turbo with Vision model
+## Deploy a vision-enabled chat model
1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select the hub you'd like to work in.
-1. On the left nav menu, select **AI Services**. Select the **Try out GPT-4 Turbo** panel.
-1. On the gpt-4 page, select **Deploy**. In the window that appears, select your Azure OpenAI resource. Select `vision-preview` as the model version.
+1. On the left nav menu, select **Models + endpoints** and select **+ Deploy model**.
+1. On the model selection page, select a vision-enabled model like GPT-4o. In the window that appears, select a name and deployment type. Make sure your Azure OpenAI resource is connected.
1. Select **Deploy**.
-1. Next, go to your new model's page and select **Open in playground**. In the chat playground, the GPT-4 deployment you created should be selected in the **Deployment** dropdown.
+1. Next, select your new model and select **Open in playground**. In the chat playground, the deployment you created should be selected in the **Deployment** dropdown.
-# [Image prompts](#tab/image-chat)
+<!-- # [Image prompts](#tab/image-chat) -->
-In this chat session, you instruct the assistant to aid in understanding images that you input.
+## Image prompts
-1. In the **System message** text box on the **System message** tab, provide this prompt to guide the assistant: `"You're an AI assistant that helps people find information."` You can tailor the prompt to your image or scenario.
+In this chat session, you instruct the assistant to aid you in understanding images that you input.
+
+1. In the context text box on the **Setup** panel, provide this prompt to guide the assistant: `"You're an AI assistant that helps people find information."` Or, you can tailor the prompt to your image or scenario.
1. Select **Apply changes** to save your changes.
1. In the chat session pane, select the attachment button and then **Upload image**. Choose your image.
-1. Add the following question in the chat field: `"Describe this image"`, and then select the right arrow icon to send.
-1. The right arrow icon is replaced by a Stop button. If you select it, the assistant stops processing your request. For this quickstart, let the assistant finish its reply.
+1. Add the following prompt in the chat field: `"Describe this image"`, and then select the send icon to submit it.
+1. The send icon is replaced by a stop button. If you select it, the assistant stops processing your request. For this quickstart, let the assistant finish its reply.
1. The assistant replies with a description of the image.
<!--:::image type="content" source="../media/quickstarts/multimodal-vision/chat-car-accident-reply-license.png" alt-text="Screenshot of the chat playground with the assistant's reply for basic image analysis." lightbox="../media/quickstarts/multimodal-vision/chat-car-accident-reply-license.png":::-->
1. Ask a follow-up question related to the analysis of your image. You could enter, `"What should I highlight about this image to my insurance company?"`.
@@ -79,7 +81,7 @@ In this chat session, you instruct the assistant to aid in understanding images
Remember to be factual and descriptive, avoiding speculation about the cause of the accident, as the insurance company will conduct its own investigation.
```
-
+<!--
# [Image prompt enhancements](#tab/enhanced-image-chat)
In this chat session, you instruct the assistant to aid in understanding images that you input. Try out the capabilities of the augmented vision model.
@@ -90,7 +92,7 @@ In this chat session, you instruct the assistant to aid in understanding images
1. Add the following question in the chat field: `"Describe this image"`, and then select the right arrow icon to send.
1. The right arrow icon is replaced by a Stop button. If you select it, the assistant stops processing your request. For this quickstart, let the assistant finish its reply.
1. The assistant replies with a description of the image. It uses the Azure AI Vision service to extract more detail from the image you uploaded.
- <!--:::image type="content" source="../media/quickstarts/multimodal-vision/chat-image-read-text.png" alt-text="Screenshot of the chat playground with the model output where the text in the image is read and returned." lightbox="../media/quickstarts/multimodal-vision/chat-image-read-text.png":::-->
+
1. Ask a follow-up question related to the analysis of your image. Enter, `"What should I highlight about this image to my insurance company?" `and then select the right arrow icon to send.
1. You should receive a relevant response similar to what's shown here:
```
@@ -124,6 +126,7 @@ In this chat session, you are instructing the assistant to aid in understanding
1. The assistant should reply with a description of the video.
1. Feel free to ask any follow-up questions related to the analysis of your video.
+
## Limitations
Below are the known limitations of the video prompt enhancements.
@@ -135,6 +138,8 @@ Below are the known limitations of the video prompt enhancements.
- **Language support:** Currently, the system primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics from songs.
---
+-->
+
## View and export code
Summary
{
"modification_type": "minor update",
"modification_title": "マルチモーダルビジョンに関するクイックスタートガイドの更新"
}
Explanation
This change makes a substantial update to the multimodal-vision.md file, adding 25 lines and removing 20, for a total of 45 changed lines. It focuses on how to use chat with vision capabilities in the Azure AI Foundry portal.
Title and description changes: The document's title and description were changed from "GPT-4 Turbo with Vision" to "chat with vision capabilities," making the scope more concrete and the focus of the quickstart clearer to readers.
Improved steps: The steps for deploying a model and running the chat session were restructured so users can follow them end to end; in particular, the prompt is now entered in the **Setup** panel rather than a **System message** tab, the send and stop controls are described by their current names, and the step numbering was cleaned up for readability.
Commented-out sections: Rather than adding new material, the diff wraps the "Image prompt enhancements" tab and the video prompt "Limitations" section (which lists known constraints such as English-only transcript grounding) in an HTML comment (`<!-- ... -->`), so they no longer appear in the rendered quickstart.
Screenshot cleanup: Screenshot references in and around the commented-out sections were removed or left commented out, keeping the published page consistent with the remaining steps.
Overall, these changes refocus the quickstart on the supported vision chat workflow in Azure AI Foundry and help users apply the feature more effectively.
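The quickstart ends with a "View and export code" step that exports the playground session as runnable code. As a rough sketch of what such a call looks like outside the playground, here is a minimal example using the `openai` Python package against an Azure OpenAI vision-capable deployment; the endpoint, key, deployment name, and image file below are hypothetical placeholders, not values taken from this diff.

```python
import base64

from openai import AzureOpenAI  # pip install openai

# Hypothetical connection details -- substitute your own resource values.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-06-01",
)

# Encode a local image so it can be sent inline as a data URL.
with open("car-accident.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="YOUR-VISION-DEPLOYMENT",  # e.g., a gpt-4o deployment name
    messages=[
        # The same system prompt the quickstart suggests.
        {"role": "system", "content": "You're an AI assistant that helps people find information."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        },
    ],
)
print(response.choices[0].message.content)
```

A follow-up question such as "What should I highlight about this image to my insurance company?" would simply be appended to `messages` as another user turn, mirroring the chat session described above.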
articles/ai-studio/toc.yml
Diff
@@ -70,9 +70,6 @@ items:
href: ../ai-services/openai/realtime-audio-quickstart.md?context=/azure/ai-studio/context/context
- name: Analyze images and video with GPT-4 for Vision in the playground
href: quickstarts/multimodal-vision.md
- - name: Use your image data with Azure OpenAI
- href: how-to/data-image-add.md
- displayName: vision, gpt, turbo
- name: Azure AI Speech
items:
- name: Real-time speech to text
@@ -180,19 +177,17 @@ items:
- name: Azure AI model inference
items:
- name: What is the Azure AI model inference service?
- href: ai-services/model-inference.md
+ href: ../ai-foundry/model-inference/overview.md?context=/azure/ai-studio/context/context
- name: Upgrade from GitHub Models
- href: ai-services/how-to/quickstart-github-models.md
+ href: ../ai-foundry/model-inference/how-to/quickstart-github-models.md?context=/azure/ai-studio/context/context
- name: Add and configure models
- href: ai-services/how-to/create-model-deployments.md
+ href: ../ai-foundry/model-inference/how-to/create-model-deployments.md?context=/azure/ai-studio/context/context
- name: Deployment types
- href: ai-services/concepts/deployment-types.md
+ href: ../ai-foundry/model-inference/concepts/deployment-types.md?context=/azure/ai-studio/context/context
- name: Use the inference endpoint
- href: ai-services/concepts/endpoints.md
+ href: ../ai-foundry/model-inference/concepts/endpoints.md?context=/azure/ai-studio/context/context
- name: Quotas and limits
- href: ai-services/concepts/quotas-limits.md
- - name: Azure AI model inference FAQ
- href: ai-services/faq.yml
+ href: ../ai-foundry/model-inference/quotas-limits.md?context=/azure/ai-studio/context/context
- name: Serverless API
items:
- name: Deploy models as serverless API
Summary
{
"modification_type": "minor update",
"modification_title": "AI Studioの目次におけるリンクの整理"
}
Explanation
This change modifies the toc.yml file, adding 6 lines and removing 11, for a total of 17 changed lines. The main purpose is to reorganize and update documentation links related to Azure AI Studio.
Removed links: The "Use your image data with Azure OpenAI" entry was deleted, removing that page from the table of contents; the "Azure AI model inference FAQ" entry was removed as well. This trims stale or superseded entries, though readers who relied on them will need alternative sources.
Updated links: Several Azure AI model inference links now point to new paths, moving from the old ai-services folder to the new ai-foundry/model-inference folder so that users reach the current content.
Added context: Each updated link carries a `?context=/azure/ai-studio/context/context` query string, which keeps readers inside the Azure AI Studio navigation context when they follow cross-service links.
Together, these changes leave the Azure AI Studio table of contents cleaner and better organized, making it easier for users to find the information they need.