Highlights
These changes primarily update the documentation on configuring Azure AI resources and on custom named entity recognition (NER). They introduce the new Azure AI Foundry platform name, refine the wording of the documents, add new images, and restructure existing sections.
New features
- Added a quickstart guide that uses Azure AI Foundry.
- Added new images, such as for deploying trained models, to strengthen the visual guidance.
- Added a new section to the table of contents that provides guidelines for resource setup.
Breaking changes
- The former section on conversational language understanding was updated to cover Azure AI Foundry tasks, and names tied to the older platform were changed.
Other updates
- Wording throughout the documents was clarified and made more consistent.
- Titles and section headers were revised to be easier to understand at a glance.
- The table-of-contents file was restructured, improving the user experience.
Insights
These Azure AI services documentation updates are intended to help users work with resources based on the latest information. The key point is the introduction of the new Azure AI Foundry platform name, now reflected across all related documents, which provides users with a consistent experience when working with Azure's AI capabilities.
The newly added visual content (diagrams and images) should deepen understanding of the technical procedures and enrich the user's experience. Step-by-step screenshots, in particular, ease the learning curve for complex processes such as configuration and deployment.
In addition, the restructured table of contents improves access to information, helping users find what they need quickly and efficiently. Overall, these changes are part of a strategic effort to improve the end-to-end user experience of the Azure AI services.
Summary Table
Modified Contents
articles/ai-services/language-service/concepts/configure-azure-resources.md
Diff
@@ -11,7 +11,7 @@ ms.custom: language-service-question-answering
# Configure your environment for Azure AI resources and permissions
-In this guide, we walk you through configuring your Azure AI resources and permissions for conversational language understanding (CLU) projects. We present two options:
+In this guide, we walk you through configuring your Azure AI resources and permissions for Azure AI Foundry tasks. We present two options:
* [**Option 1: Configure an Azure AI Foundry resource**](#option-1-configure-an-azure-ai-foundry-resource). Azure AI Foundry offers a unified environment for building generative AI applications and using Azure AI services. All essential tools are together in one environment for all stages of AI app development.
@@ -26,8 +26,8 @@ In addition, we show you how to assign the correct roles and permissions within
Before you can set up your resources, you need:
* **An active Azure subscription**. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services).
-* **Requisite permissions**. Make sure the person establishing the account and project is assigned as the Azure AI Account Owner role at the subscription level. Alternatively, having either the **Contributor** or **Cognitive Services Contributor** role at the subscription scope also meets this requirement. For more information, *see* [Role based access control (RBAC)](../../../openai/how-to/role-based-access-control.md#cognitive-services-contributor).
-* An [Azure AI Foundry resource](../../../multi-service-resource.md) or an [Azure AI Language resource](https://portal.azure.com/?Microsoft_Azure_PIMCommon=true#create/Microsoft.CognitiveServicesTextAnalytics).
+* **Requisite permissions**. Make sure the person establishing the account and project is assigned as the Azure AI Account Owner role at the subscription level. Alternatively, having either the **Contributor** or **Cognitive Services Contributor** role at the subscription scope also meets this requirement. For more information, *see* [Role based access control (RBAC)](/azure/ai-foundry/openai/how-to/role-based-access-control#cognitive-services-contributor).
+* An [Azure AI Foundry resource](/azure/ai-services/multi-service-resource) or an [Azure AI Language resource](https://portal.azure.com/?Microsoft_Azure_PIMCommon=true#create/Microsoft.CognitiveServicesTextAnalytics).
* An [Azure OpenAI resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesOpenAI) (optional but required for [option 2](#option-2-configure-azure-language-resource-and-azure-openai-resources))
@@ -37,29 +37,29 @@ Before you can set up your resources, you need:
## Option 1: Configure an Azure AI Foundry resource
-Azure AI Foundry offers a unified platform for building, managing, and deploying AI solutions with a wide array of models and tools. With this integration, you gain access to features like **Quick Deploy** for rapid model **fine-tuning** and **suggest utterances** to expand your training data with generative AI. New features are continually added, making Azure AI Foundry the recommended choice for scalable CLU solutions.
+Azure AI Foundry offers a unified platform for building, managing, and deploying AI solutions with a wide array of models and tools. With this integration, you gain access to features like **Quick Deploy** for rapid model **fine-tuning** and **suggest utterances** to expand your training data with generative AI. New features are continually added, making Azure AI Foundry is the recommended choice for scalable solutions.
1. Navigate to the [Azure portal](https://azure.microsoft.com/#home).
1. Go to your Azure AI Foundry resource (select **All resources** to locate your resource).
1. Next, select **Access Control (IAM)** on the left panel, then select **Add role assignment**.
- :::image type="content" source="../media/configure-resources/add-role-assignment.png" alt-text="Screenshot of add role assignment selector in the Azure portal.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/add-role-assignment.png" alt-text="Screenshot of add role assignment selector in the Azure portal.":::
1. Search and select the **Cognitive Services User** role. Select **Next**.
- :::image type="content" source="../media/configure-resources/cognitive-services-user.png" alt-text="Screenshot of Cognitive Services User from the job function roles list in the Azure portal.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/cognitive-services-user.png" alt-text="Screenshot of Cognitive Services User from the job function roles list in the Azure portal.":::
1. Navigate to the **Members** tab and then select **Managed Identity**.
- :::image type="content" source="../media/configure-resources/managed-identity.png" alt-text="Screenshot of assign member access selector in the Azure portal.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/managed-identity.png" alt-text="Screenshot of assign member access selector in the Azure portal.":::
-1. Select **Select members**, then in the right panel, search for and choose your Azure AI Foundry resource (the one you're using for this project), and choose **Select**.
+1. Choose **Select members**, then in the right panel, search for and choose your Azure AI Foundry resource (the one you're using for this project), and choose **Select**.
1. Finally, select **Review + assign** to confirm your selection.
-1. Your resources are now set up properly. Continue with setting up the fine-tuning task and continue customizing your CLU project.
+1. Your resources are now set up properly. Proceed with initializing the fine-tuning process and optimizing your AI models and solutions for advanced customization and deployment.
## Option 2: Configure Azure Language resource and Azure OpenAI resources
@@ -73,15 +73,15 @@ Azure OpenAI is a cloud-based solution that brings the advanced capabilities of
1. Next, select **Access Control (IAM)** on the left panel, then select **Add role assignment**.
- :::image type="content" source="../media/configure-resources/add-role-assignment.png" alt-text="Screenshot of add role assignment selector in the Azure portal.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/add-role-assignment.png" alt-text="Screenshot of add role assignment selector in the Azure portal.":::
1. Search and select the **Cognitive Services User** role, then select **Next**.
- :::image type="content" source="../media/configure-resources/cognitive-services-user.png" alt-text="Screenshot of Cognitive Services User from the job function roles list in the Azure portal.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/cognitive-services-user.png" alt-text="Screenshot of Cognitive Services User from the job function roles list in the Azure portal.":::
1. Navigate to the **Members** tab and then select **Managed Identity**.
- :::image type="content" source="../media/configure-resources/managed-identity.png" alt-text="Screenshot of assign member access selector in the Azure portal.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/managed-identity.png" alt-text="Screenshot of assign member access selector in the Azure portal.":::
1. Select **Select members**, then in the right panel, search for and choose your Azure AI Foundry resource (the one you're using for this project), and choose **Select**.
@@ -98,12 +98,12 @@ Azure AI Foundry offers a unified platform where you can easily build, manage, a
1. Scroll to the **Connected resources** section of the Management center.
- :::image type="content" source="../media/configure-resources/ai-foundry-management-center.png" alt-text="Screenshot of the management center selector in the Azure AI Foundry.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/ai-foundry-management-center.png" alt-text="Screenshot of the management center selector in the Azure AI Foundry.":::
1. Select the **+ New connection** button.
- :::image type="content" source="../media/configure-resources/new-connection.png" alt-text="Screenshot of the new connection button in the Azure AI Foundry.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/new-connection.png" alt-text="Screenshot of the new connection button in the Azure AI Foundry.":::
1. In the new window, select **Azure AI Language** as the resource type, then find your Azure AI Language resource.
@@ -116,29 +116,29 @@ Azure AI Foundry offers a unified platform where you can easily build, manage, a
1. Select **Add connection**, then select **Close**.
- :::image type="content" source="../media/configure-resources/connect-language-resource.png" alt-text="Screenshot of connect search resource selector in the Azure AI Foundry.":::
+ :::image type="content" source="../conversational-language-understanding/media/configure-resources/connect-language-resource.png" alt-text="Screenshot of connect search resource selector in the Azure AI Foundry.":::
## Import an existing Azure AI project
Azure AI Foundry allows you to connect to your existing Azure AI services resources. This means you can establish a connection within your Azure AI Foundry project to the Azure AI Language resource where your custom models are stored.
-To import an existing Azure AI services project with Azure AI Foundry, you need to create a connection to the Azure AI services resource within your Azure AI Foundry project. For more information, *see* [Connect Azure AI Services projects to Azure AI Foundry](../../../../ai-services/connect-services-ai-foundry-portal.md)
+To import an existing Azure AI services project with Azure AI Foundry, you need to create a connection to the Azure AI services resource within your Azure AI Foundry project. For more information, *see* [Connect Azure AI Services projects to Azure AI Foundry](/azure/ai-services/connect-services-ai-foundry-portal)
## Export a project
-You can download a CLU project as a **config.json** file:
+You can download a project as a **config.json** file:
1. Navigate to your project home page.
1. At the top of the page, select your project from the right page ribbon area.
1. Select **Download config file**.
- :::image type="content" source="../media/create-project/download-config-json.png" alt-text="Screenshot of project drop-down menu with the download config file hyperlink in the Azure AI Foundry.":::
+ :::image type="content" source="../conversational-language-understanding/media/create-project/download-config-json.png" alt-text="Screenshot of project drop-down menu with the download config file hyperlink in the Azure AI Foundry.":::
That's it! Your resources are now set up properly. Continue with setting up the fine-tuning task and customizing your CLU project.
## Next Steps
-[Create a CLU fine-tuning task](train-model.md#train-your-model)
+[Model lifecycle](../concepts/model-lifecycle.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AIリソースの構成ガイドの更新"
}
Explanation
This change updates the guide for configuring Azure AI resources so that the former "conversational language understanding (CLU) projects" framing becomes "Azure AI Foundry tasks." Related sections and descriptions were updated accordingly, the links used for resource setup and role assignment were corrected, and image paths were changed to match the new repository layout. The result emphasizes Azure AI Foundry and lets users configure resources based on the latest information.
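The portal steps quoted in this diff assign the Cognitive Services User role to the Foundry resource's managed identity by hand. For teams that script their setup, the same assignment can be made with the Azure SDK. The sketch below is an illustration only; the subscription ID, resource group, resource name, and principal ID are assumed placeholders to replace with your own values.

```python
# Hedged sketch: assign the "Cognitive Services User" role to a managed identity
# on a Language/Foundry resource, mirroring the portal steps described above.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"  # assumption: your subscription
scope = (  # assumption: the resource to grant access on
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.CognitiveServices/accounts/<language-resource-name>"
)
principal_id = "<managed-identity-object-id>"  # assumption: the Foundry resource's identity

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Look up the built-in "Cognitive Services User" role definition by display name.
role_def = next(
    auth_client.role_definitions.list(scope, filter="roleName eq 'Cognitive Services User'")
)

auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_def.id,
        principal_id=principal_id,
        principal_type="ServicePrincipal",  # managed identities are service principals
    ),
)
```

The Azure CLI offers an equivalent one-liner (`az role assignment create`), which some teams may find simpler than the SDK.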
articles/ai-services/language-service/concepts/developer-guide.md
Diff
@@ -22,7 +22,7 @@ The Language service provides support through a REST API, and client libraries i
## Client libraries (Azure SDK)
-The Language service provides three namespaces for using the available features. Depending on which features and programming language you're using, you'll need to download one or more of the following packages, and have the following framework/language version support:
+The Language service provides three namespaces for using the available features. Depending on which features and programming language you're using, you need to download one or more of the following packages, and have the following framework/language version support:
|Framework/Language | Minimum supported version |
|---------|---------|
@@ -34,13 +34,13 @@ The Language service provides three namespaces for using the available features.
### Azure.AI.TextAnalytics
>[!NOTE]
-> If you're using custom named entity recognition or custom text classification, you will need to create a project and train a model before using the SDK. The SDK only provides the ability to analyze text using models you create. See the following quickstarts for information on creating a model.
+> If you're using custom named entity recognition or custom text classification, you need to create a project and train a model before using the SDK. The SDK only allows for you to analyze text using models you create. See the following quickstarts for information on creating a model.
> * [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md)
> * [Custom text classification](../custom-text-classification/quickstart.md)
The `Azure.AI.TextAnalytics` namespace enables you to use the following Language features. Use the following links for articles to help you send API requests using the SDK.
-* [Custom named entity recognition](../custom-named-entity-recognition/how-to/call-api.md?tabs=client#send-an-entity-recognition-request-to-your-model)
+* [Custom named entity recognition](../custom-named-entity-recognition/how-to/call-api.md?tabs=client)
* [Custom text classification](../custom-text-classification/how-to/call-api.md?tabs=client-libraries#send-a-text-classification-request-to-your-model)
* [Document summarization](../summarization/quickstart.md)
* [Entity linking](../entity-linking/quickstart.md)
@@ -63,7 +63,7 @@ As you use these features in your application, use the following documentation a
### Azure.AI.Language.Conversations
> [!NOTE]
-> If you're using conversational language understanding or orchestration workflow, you'll need to create a project and train a model before using the SDK. The SDK only provides the ability to analyze text using models you create. See the following quickstarts for more information.
+> If you're using conversational language understanding or orchestration workflow, you need to create a project and train a model before using the SDK. The SDK only allows you to analyze text using models you create. For more information, *see*:
> * [Conversational language understanding](../conversational-language-understanding/quickstart.md)
> * [Orchestration workflow](../orchestration-workflow/quickstart.md)
@@ -108,19 +108,19 @@ The conversation analysis authoring API enables you to author custom models and
* [Conversational language understanding](../conversational-language-understanding/quickstart.md?pivots=rest-api)
* [Orchestration workflow](../orchestration-workflow/quickstart.md?pivots=rest-api)
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversational-analysis-authoring) for additional information.
+For more information, *see* the [reference documentation](/rest/api/language/2023-04-01/conversational-analysis-authoring).
### Conversation analysis runtime API
-The conversation analysis runtime API enables you to send requests to custom models you've created for:
+The conversation analysis runtime API enables you to send requests to custom models you create for the following features:
* [Conversational language understanding](../conversational-language-understanding/how-to/call-api.md?tabs=REST-APIs#send-a-conversational-language-understanding-request)
* [Orchestration workflow](../orchestration-workflow/how-to/call-api.md?tabs=REST-APIs#send-an-orchestration-workflow-request)
It additionally enables you to use the following features, without creating any models:
* [Conversation summarization](../summarization/quickstart.md?pivots=rest-api&tabs=conversation-summarization)
* [Personally Identifiable Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=rest-api#examples)
-As you use this API in your application, see the [reference documentation](/rest/api/language) for additional information.
+For more information, *see* the [reference documentation](/rest/api/language).
### Text analysis authoring API
@@ -129,11 +129,10 @@ The text analysis authoring API enables you to author custom models and create/m
* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api)
* [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-authoring) for additional information.
-
+For more information, *see* the [reference documentation](/rest/api/language/2023-04-01/text-analysis-authoring).
### Text analysis runtime API
-The text analysis runtime API enables you to send requests to custom models you've created for:
+The text analysis runtime API enables you to send requests to custom models you create for the following features:
* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api)
* [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
@@ -148,15 +147,15 @@ It additionally enables you to use the following features, without creating any
* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api)
* [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
-As you use this API in your application, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2239169) for additional information.
+For more information, *see* the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2239169).
### Question answering APIs
The question answering APIs enables you to use the [question answering](../question-answering/quickstart/sdk.md?pivots=rest) feature.
#### Reference documentation
-As you use this API in your application, see the following reference documentation for additional information.
+For more information, *see* the following reference documentation:
* [Prebuilt API](/azure/ai-services/language-service/question-answering/how-to/prebuilt) - Use the prebuilt runtime API to answer specified question using text provided by users.
* [Custom authoring API](/azure/ai-services/language-service/question-answering/how-to/authoring) - Create a knowledge base to answer questions.
Summary
{
"modification_type": "minor update",
"modification_title": "開発者ガイドの言語サービスに関する文言の更新"
}
Explanation
This change makes small wording adjustments and updates to the Language service developer guide. Specifically, phrases such as "you'll need to" were changed to "you need to," giving the text a clearer, more consistent voice, and links to related resources and API reference documentation were tidied so they are easier to follow. The goal is to deepen developers' understanding when using the SDK.
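Because the guide's custom NER link now points to the general call-the-API article rather than a specific anchor, a concrete sense of what that SDK call looks like may help. The following is a minimal, hedged sketch using the Python `azure-ai-textanalytics` package; the endpoint, key, project name, and deployment name are placeholders, and it assumes a custom NER model has already been trained and deployed as described in the quickstart.

```python
# Minimal sketch: querying a deployed custom NER model with azure-ai-textanalytics.
# Endpoint, key, project name, and deployment name are assumed placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "The first party of this contract is John Smith, resident of Frederick, Nebraska."
]

# Custom NER runs as a long-running operation against a trained, deployed model.
poller = client.begin_recognize_custom_entities(
    documents,
    project_name="<your-project-name>",
    deployment_name="<your-deployment-name>",
)

for result in poller.result():
    if result.is_error:
        print(f"Error: {result.error.message}")
        continue
    for entity in result.entities:
        print(f"{entity.text!r} -> {entity.category} (confidence {entity.confidence_score:.2f})")
```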
articles/ai-services/language-service/concepts/role-based-access-control.md
Diff
@@ -1,57 +1,57 @@
---
title: Role-based access control for the Language service
titleSuffix: Azure AI services
-description: Learn how to use Azure RBAC for managing individual access to Azure resources.
+description: Learn how to use Azure role based access control (RBAC) for managing individual access to Azure resources.
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: conceptual
-ms.date: 06/30/2025
+ms.date: 09/22/2025
ms.author: lajanuar
---
# Language role-based access control
-Azure AI Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your projects authoring resources. See the [Azure RBAC documentation](/azure/role-based-access-control/) for more information.
+Azure AI Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your projects authoring resources. For more information, *see* the [Azure RBAC documentation](/azure/role-based-access-control/).
<a name='enable-azure-active-directory-authentication'></a>
## Enable Microsoft Entra authentication
To use Azure RBAC, you must enable Microsoft Entra authentication. You can [create a new resource with a custom subdomain](../../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
-## Add role assignment to Language resource
+## Add role assignment to Azure resource
-Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment.
+Azure RBAC can be assigned to an Azure resource. To do so, you can add a role assignment.
1. In the [Azure portal](https://portal.azure.com/), select **All services**.
-1. Select **Azure AI services**, and navigate to your specific Language resource.
+1. Select **Azure AI services**, and navigate to your specific Azure resource.
> [!NOTE]
- > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Complete your configuration by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
1. Select **Access control (IAM)** on the left pane.
1. Select **Add**, then select **Add role assignment**.
1. On the **Role** tab on the next screen, select a role you want to add.
1. On the **Members** tab, select a user, group, service principal, or managed identity.
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+Within a few minutes, the target is assigned to the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
## Language role types
Use the following table to determine access needs for your Language projects.
These custom roles only apply to Language resources.
> [!NOTE]
-> * All prebuilt capabilities are accessible to all roles
-> * *Owner* and *Contributor* roles take priority over the custom language roles
-> * Microsoft Entra ID is only used in case of custom Language roles
-> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in Language studio portal.
+> * All prebuilt capabilities are accessible to all roles.
+> * *Owner* and *Contributor* roles take priority over the custom language roles.
+> * Microsoft Entra ID is only used with custom Language roles.
+> * If you're assigned as a *Contributor* on Azure, your role is shown as *Owner* in Language studio portal.
### Cognitive Services Language Reader
-A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They might want to review the application’s assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.
+A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They might want to review the application's assets to notify the app developers of any changes that need to be made, but don't have direct access to make them. Readers have access to view the evaluation results.
:::row:::
@@ -78,14 +78,14 @@ A user that should only be validating and reviewing the Language apps, typically
Only Export POST operation under:
* [Question Answering Projects](/rest/api/questionanswering/question-answering-projects/export)
All the Batch Testing Web APIs
- *[Language Runtime CLU APIs](/rest/api/language)
+ *[Language Runtime `CLU` APIs](/rest/api/language)
*[Language Runtime Text Analysis APIs](https://go.microsoft.com/fwlink/?linkid=2239169)
:::column-end:::
:::row-end:::
### Cognitive Services Language Writer
-A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn’t have access to deploying this application to the runtime, as they might accidentally reflect their changes in production. They also shouldn’t be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They might also create new applications under this resource, but with the restrictions mentioned.
+A user responsible for building and modifying an application as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploying this application to the runtime, as they might accidentally reflect their changes in production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restriction prevents the role from altering an application currently being used in production. They might also create new applications under this resource, but with the restrictions mentioned.
:::row:::
:::column span="":::
@@ -119,7 +119,7 @@ A user that is responsible for building and modifying an application, as a colla
### Cognitive Services Language Owner
> [!NOTE]
-> If you are assigned as an *Owner* and *Language Owner* you will be shown as *Cognitive Services Language Owner* in Language studio portal.
+> If you're assigned as an Owner and Language Owner,* you considered a *Cognitive Services Language Owner* by the Language studio portal.
These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments
Summary
{
"modification_type": "minor update",
"modification_title": "役割ベースのアクセス制御に関するドキュメントの更新"
}
Explanation
This change updates the documentation on role-based access control (RBAC) for the Azure AI Language service, mainly clarifying wording and adding detail. In particular, the explanations of role assignment and Azure resources were improved so that Azure RBAC is easier to understand. The date was updated and notes in the text were revised to keep the content current. As a result, users can more clearly grasp each role's access rights and how to configure resources, and can manage them effectively.
articles/ai-services/language-service/conversational-language-understanding/how-to/create-project.md
Diff
@@ -29,9 +29,9 @@ A Conversational Language Understanding (CLU) fine-tuning task is a workspace pr
* An [Azure AI Foundry resource](../../../multi-service-resource.md). For more information, *see* [Configure an Azure AI Foundry resource](configure-azure-resources.md#option-1-configure-an-azure-ai-foundry-resource). Alternately, you can use an [Azure AI Language resource](https://portal.azure.com/?Microsoft_Azure_PIMCommon=true#create/Microsoft.CognitiveServicesTextAnalytics).
* A Foundry project created in the Azure AI Foundry. For more information, *see* [Create an AI Foundry project](../../../../ai-foundry/how-to/create-projects.md).
-## Create a CLU fine-tuning task project
+## Fine-tune a CLU model
- To create a CLU fine-tuning task project, you first configure your environment and then create a fine-tuning task, which serves as your workspace for customizing your CLU model.
+ To create a CLU fine-tuning model, you first configure your environment and then create a fine-tuning project, which serves as your workspace for customizing your CLU model.
### [Azure AI Foundry](#tab/azure-ai-foundry)
@@ -160,7 +160,7 @@ To delete the hub along with all its projects:
:::image type="content" source="../media/create-project/hub-details.png" alt-text="Screenshot of the hub details list in the Azure AI Foundry.":::
1. On the right, select **Delete hub**.
-1. The link opens the Azure portal for you to delete the hub there.
+1. The link opens the Azure portal for you to delete the hub.
:::image type="content" source="../media/create-project/delete-hub.png" alt-text="Screenshot of the Delete hub button in the Azure AI Foundry.":::
Summary
{
"modification_type": "minor update",
"modification_title": "CLUモデルの微調整タスクに関するドキュメントの更新"
}
Explanation
This change updates the documentation on fine-tuning for conversational language understanding (CLU). Specifically, the heading changes from "Create a CLU fine-tuning task project" to "Fine-tune a CLU model," a more accurate term, and the description of the model-creation process is clarified so that the flow from configuring the environment to creating the fine-tuning project is easier to follow. This should help users move through their projects more effectively. Finally, the hub-deletion step is slightly simplified, making the article easier to follow overall.
articles/ai-services/language-service/conversational-language-understanding/includes/quickstarts/azure-ai-foundry.md
Diff
@@ -7,10 +7,6 @@ ms.date: 09/15/2025
ms.author: lajanuar
---
-Azure AI Foundry offers a unified platform for building, managing, and deploying AI solutions with a wide array of models and tools. Azure AI Foundry playgrounds are interactive environments within the Azure AI Foundry portal designed for exploring, testing, and prototyping with various AI models and tools.
-
-Use this article to get started with Conversational Language understanding using Azure AI Foundry or the REST API.
-
> [!NOTE]
>
> * If you already have an Azure AI Language or multi-service resource—whether used on its own or through Language Studio—you can continue to use those existing Language resources within the Azure AI Foundry portal.
@@ -21,8 +17,8 @@ Use this article to get started with Conversational Language understanding using
* **Azure subscription**. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services).
* **Requisite permissions**. Make sure the person establishing the account and project is assigned as the Azure AI Account Owner role at the subscription level. Alternatively, having either the **Contributor** or **Cognitive Services Contributor** role at the subscription scope also meets this requirement. For more information, *see* [Role based access control (RBAC)](/azure/ai-foundry/openai/how-to/role-based-access-control#cognitive-services-contributor).
-* [Azure AI Foundry resource](/azure/ai-services/multi-service-resource). For more information, *see* [Configure an Azure AI Foundry resource](../../how-to/configure-azure-resources.md#option-1-configure-an-azure-ai-foundry-resource). Alternately, you can use an [Azure AI Language resource](https://portal.azure.com/?Microsoft_Azure_PIMCommon=true#create/Microsoft.CognitiveServicesTextAnalytics).
-* A Foundry project created in the Azure AI Foundry. For more information, *see* [Create an AI Foundry project](/azure/ai-foundry/how-to/create-projects).
+* [Azure AI Foundry resource](/azure/ai-services/multi-service-resource). For more information, *see* [Configure an Azure AI Foundry resource](../../../concepts/configure-azure-resources.md). Alternately, you can use an [Azure AI Language resource](https://portal.azure.com/?Microsoft_Azure_PIMCommon=true#create/Microsoft.CognitiveServicesTextAnalytics).
+* **A Foundry project created in the Azure AI Foundry**. For more information, *see* [Create an AI Foundry project](/azure/ai-foundry/how-to/create-projects).
## Get started with Azure AI Foundry
@@ -86,7 +82,7 @@ After project creation, the next steps are [schema construction](../../how-to/bu
:::image type="content" source="../../media/quickstarts/review-selections.png" alt-text="Screenshot of the review selections window in the Azure AI Foundry.":::
-## Deploy model
+## Deploy your model
Typically, after training a model, you review its evaluation details. For this quickstart, you can just deploy your model and make it available to test in the Language playground, or by calling the [prediction API](https://aka.ms/clu-apis). However, if you wish, you can take a moment to select **Evaluate your model** from the left-side menu and explore the in-depth telemetry for your model. Complete the following steps to deploy your model within Azure AI Foundry:
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryに関するドキュメントの更新"
}
Explanation
This change updates the quickstart include for Azure AI Foundry. The main changes simplify part of the Azure AI Foundry description and organize the content more clearly. In particular, the information on project creation and required resources is revised, improving the quality of the guidance provided to readers. The section heading also changes from "Deploy model" to "Deploy your model," a friendlier phrasing. These updates make the steps easier to follow and lower the barrier to getting started with Azure AI Foundry. Overall, they are small adjustments aimed at improving the user experience.
articles/ai-services/language-service/conversational-language-understanding/includes/quickstarts/rest-api.md
Diff
@@ -7,7 +7,6 @@ ms.date: 08/14/2025
ms.author: lajanuar
---
-Use this article to get started with Conversational Language Understanding (CLU) using Azure AI Foundry or the REST API.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "REST APIに関するクイックスタート記事の修正"
}
Explanation
This change makes a small adjustment to the REST API quickstart content for conversational language understanding (CLU). Specifically, the opening sentence stating the article's purpose ("using Azure AI Foundry or the REST API") is removed, leaving a simpler lead-in. This sharpens the article's focus and gives readers more direct access to the information they need, improving readability and comprehension overall.
articles/ai-services/language-service/conversational-language-understanding/quickstart.md
Diff
@@ -9,11 +9,15 @@ ms.topic: quickstart
ms.date: 05/01/2025
ms.author: lajanuar
ms.custom: language-service-clu, mode-other
-zone_pivot_groups: language-clu-quickstart
+zone_pivot_groups: foundry-rest-api
---
# Quickstart: Conversational language understanding
+Azure AI Foundry offers a unified platform for building, managing, and deploying AI solutions with a wide array of models and tools. Azure AI Foundry playgrounds are interactive environments within the Azure AI Foundry portal designed for exploring, testing, and prototyping with various AI models and tools.
+
+Use this article to get started with Conversational Language understanding using Azure AI Foundry or the REST API.
+
::: zone pivot="azure-ai-foundry"
[!INCLUDE [Azure AI Foundry quickstart](includes/quickstarts/azure-ai-foundry.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "会話型言語理解に関するクイックスタート記事の更新"
}
Explanation
This change updates the quickstart article on conversational language understanding with Azure AI Foundry. The main addition is background on Azure AI Foundry, clearly describing the platform's capabilities and how to use it. Specifically, it emphasizes that Azure AI Foundry is a unified platform for building, managing, and deploying AI solutions with a wide range of models and tools. The article now also opens with the instruction "Use this article to get started with Conversational Language understanding using Azure AI Foundry or the REST API," making its purpose easier to grasp. Overall, the content is richer and better organized, in a format that is easier to learn from.
articles/ai-services/language-service/custom-named-entity-recognition/concepts/data-formats.md
Diff
@@ -1,5 +1,5 @@
---
-title: Custom NER data formats
+title: Custom named entity recognition (NER) data formats
titleSuffix: Azure AI services
description: Learn about the data formats accepted by custom NER.
author: laujan
@@ -13,11 +13,11 @@ ms.custom: language-service-custom-ner
# Accepted custom NER data formats
-If you are trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
+If you're trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use [Azure AI Foundry](https://ai.azure.com/) to label your documents.
## Labels file format
-Your Labels file should be in the `json` format below to be used in [importing](../how-to/create-project.md#import-project) your labels into a project.
+Your Labels file should be in `json` format for use in [importing](../how-to/create-project.md#import-project) your labels into a project.
```json
{
@@ -95,16 +95,16 @@ Your Labels file should be in the `json` format below to be used in [importing](
| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents into.| |
+| `entities` | | Array containing all the entity types you have in the project. Entity types extracted from your documents.| |
| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
-| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
-| `dataset` | `{DATASET}` | The test set to which this file will go to when split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting) . Possible values for this field are `Train` and `Test`. |`Train`|
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this location should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The test set to which this file goes to when split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting) . Possible values for this field are `Train` and `Test`. |`Train`|
| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
| `offset` | | The start position for the entity text. | `25`|
| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code for most of the documents. For more information, *see* [Language support](../language-support.md).|`en-us`|
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNERデータフォーマットに関するドキュメントの更新"
}
Explanation
This change updates the documentation on data formats for custom named entity recognition (NER). The article title changes from "Custom NER data formats" to "Custom named entity recognition (NER) data formats," making the terminology clearer, and a reference to Azure AI Foundry is added to give more specific guidance on labeling documents.
Several sentences are also polished, and the descriptions of the data format and procedures are simplified, making it easier to understand how to import and label data for custom NER. The data field descriptions are tidied, and the information on multilingual projects in particular is stated more clearly, so users can find what they need quickly. Overall, the consistency and accessibility of the information are improved.
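To make the field table in this diff more concrete, here is a small, hedged sketch that assembles a labels file from the fields it describes (projectName, multilingual, storageInputContainerName, entities, documents, location, dataset, regionOffset, regionLength, category, offset, length, language) and writes it as JSON. The example values come from the table's example column; the surrounding nesting is an assumption for illustration, so confirm it against the schema in the linked data-formats article before importing.

```python
# Hedged sketch of a custom NER labels file built from the fields listed above.
# The metadata/assets nesting is assumed for illustration; confirm against the
# published labels-file schema before importing a project.
import json

labels_file = {
    "metadata": {
        "projectName": "myproject",
        "multilingual": True,
        "storageInputContainerName": "mycontainer",
        "language": "en-us",
    },
    "assets": {
        "entities": [{"category": "Entity1"}],
        "documents": [
            {
                "location": "doc1.txt",   # document sits at the root of the container
                "language": "en-us",
                "dataset": "Train",       # or "Test"
                "entities": [
                    {
                        "regionOffset": 0,
                        "regionLength": 500,
                        "labels": [{"category": "Entity1", "offset": 25, "length": 20}],
                    }
                ],
            }
        ],
    },
}

with open("labels.json", "w", encoding="utf-8") as f:
    json.dump(labels_file, f, indent=2)
```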
articles/ai-services/language-service/custom-named-entity-recognition/concepts/evaluation-metrics.md
Diff
@@ -6,31 +6,31 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: conceptual
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner
---
# Evaluation metrics for custom named entity recognition models
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for model after training to calculate the model performance and evaluation. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for model after training to calculate the model performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data.
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model’s performance. For evaluation, custom NER uses the following metrics:
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom NER uses the following metrics:
-* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
`Precision = #True_Positive / (#True_Positive + #False_Positive)`
-* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted entities are correct.
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was tagged. The recall metric reveals how many of the predicted entities are correct.
`Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+* **F1 score**: The F1 score is a function used when you seek a balance between Precision and Recall.
`F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
>[!NOTE]
-> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
+> Precision, recall, and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
## Model-level and entity-level evaluation metrics
@@ -40,7 +40,7 @@ The definitions of precision, recall, and evaluation are the same for both entit
### Example
-*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There's also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
The model extracting entities from this text could have the following predictions:
@@ -59,8 +59,8 @@ The model would have the following entity-level evaluation, for the *person* ent
| Key | Count | Explanation |
|--|--|--|
| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
-| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should be *city*. |
+| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should be *person*. |
* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
@@ -73,8 +73,8 @@ The model would have the following entity-level evaluation, for the *city* entit
| Key | Count | Explanation |
|--|--|--|
| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
-| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should be *person*. |
+| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should be *city*. |
* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
@@ -86,9 +86,9 @@ The model would have the following evaluation for the model in its entirety:
| Key | Count | Explanation |
|--|--|--|
-| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
-| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
-| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This number is the sum of true positives for all entities. |
+| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should be *person*. *Frederick* was incorrectly predicted as *person* while it should be *city*. This number is the sum of false positives for all entities. |
+| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should be *person*. *Frederick* was incorrectly predicted as *person* while it should be *city*. This number is the sum of false negatives for all entities. |
* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
@@ -100,35 +100,35 @@ So what does it actually mean to have high precision or high recall for a certai
| Recall | Precision | Interpretation |
|--|--|--|
-| High | High | This entity is handled well by the model. |
-| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
-| High | Low | The model extracts this entity well, however it is with low confidence as it is sometimes extracted as another type. |
-| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+| High | High | The model identified the entity. |
+| Low | High | The model can't always extract this entity, but when it does it is with high confidence. |
+| High | Low | The model extracts this entity well; however it is with low confidence as it is sometimes extracted as another type. |
+| Low | Low | The model doesn't identify this entity type because it isn't normally extracted. When it is, it isn't with high confidence. |
## Guidance
-After you trained your model, you will see some guidance and recommendation on how to improve the model. It's recommended to have a model covering all points in the guidance section.
+After you trained your model, you see some guidance and recommendation on how to improve the model. A model that covers all points in the guidance section is recommended.
-* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+* Training set has enough data: When an entity type has fewer than 15 labeled examples in the training data, the model's accuracy drops. This result occurs because it lacks sufficient exposure to those cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
-* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model’s test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
-* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type’s frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
-* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn’t match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it’s being tested. You can check the *data distribution* tab for more guidance.
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
-* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they’re similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
## Confusion matrix
A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+This matrix gives a holistic view of how well the model is performing and what kinds of errors it's making.
You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
-The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag.
+The highlighted diagonal in the following image is the correctly predicted entities, where the predicted tag is the same as the actual tag.
:::image type="content" source="../media/confusion.png" alt-text="A screenshot that shows an example confusion matrix." lightbox="../media/confusion.png":::
@@ -146,5 +146,4 @@ Similarly,
## Next steps
-* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
+[Train a model](../how-to/train-model.md)
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNER評価指標に関するドキュメントの更新"
}
Explanation
This change updates the documentation on evaluation metrics for custom named entity recognition (NER). The main changes clarify specific wording and explanations, making the text easier to read and understand; for example, contractions such as "isn't" are now used in the sentence emphasizing that the model is tested on new data.
The explanations of precision, recall, and the F1 score are also refined and stated more concisely. In particular, the wording around the precision and recall formulas is clarified so that users can understand these metrics easily.
In addition, the guidance section is strengthened with concrete recommendations on how to improve a model, calling attention to how the balance of the training and test data distributions and the number of labeled instances affect model accuracy. Finally, the explanation of the confusion matrix is tidied, detailing how it can be used to visualize model performance. With these changes, users can gain a deeper understanding of custom NER evaluation metrics and how to improve their models.
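The model-level numbers quoted in the diff (3 true positives, 2 false positives, 2 false negatives) can be verified with a few lines of arithmetic; a small check like the following reproduces the 0.6 precision, 0.6 recall, and 0.6 F1 score shown in the updated article.

```python
# Reproduces the model-level evaluation figures from the example in the diff.
def evaluation_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Model-level counts from the contract example: TP=3, FP=2, FN=2.
precision, recall, f1 = evaluation_metrics(3, 2, 2)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")  # 0.60 0.60 0.60
```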
articles/ai-services/language-service/custom-named-entity-recognition/faq.md
Diff
@@ -6,7 +6,7 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: faq
-ms.date: 06/30/2025
+ms.date: 09/18/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner
---
@@ -18,17 +18,17 @@ Find answers to commonly asked questions about concepts, and scenarios related t
## How do I get started with the service?
-See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more detailed information.
+For more information, *see* our [quickstart](./quickstart.md) or [how to create projects](how-to/create-project.md).
## What are the service limits?
-See the [service limits article](service-limits.md) for more information.
+For more information, *see* [service limits](service-limits.md).
## How many tagged files are needed?
-Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently and completely. There is no set number of tagged instances that will make every model perform well. Performance highly dependent on your schema, and the ambiguity of your schema. Ambiguous entity types need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per entity is 50.
+Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently, and completely. There's no set number of tagged instances for a model to perform well. Performance is highly dependent on your schema and its ambiguity. Ambiguous entity types need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per entity is 50.
-## Training is taking a long time, is this expected?
+## How long should it take to train a model?
The training process can take a long time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 chars is 6 hours.
@@ -42,32 +42,46 @@ When you're ready to start [using your model to make predictions](#how-do-i-use-
## What is the recommended CI/CD process?
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its performance](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove labels from your data and train a **new** model and test it as well. View [service limits](service-limits.md) to learn about maximum number of trained models with the same project. When you [train a model](how-to/train-model.md), you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing set where there is no guarantee that the reflected model evaluation is about the same test set, and the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+Here's a list of actions you can take within [Azure AI Foundry](https://ai.azure.com/):
+
+* Train multiple models on the same dataset within a single project.
+* View your model's performance.
+* Deploy and test your model and add or remove labels from your data.
+* Choose how your dataset is split into training and testing sets.<br><br>
+
+Your data can be split randomly into training and testing sets, but this means model evaluation may not be based on the same test set, making results noncomparable. We recommend that you develop your own test set and use it to evaluate both models to accurately measure improvements.<br><br>
+
+Make sure to review service limits to understand the maximum number of trained models allowed per project.
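One hedged way to keep evaluations comparable, as recommended above, is to pin the test set to a fixed manifest and reuse it for every training run. The file names and manifest format below are hypothetical; this is a sketch of the idea, not a feature of the service.

```python
# Illustrative sketch: pin your test set to a fixed manifest so that every model
# you train is evaluated against the same documents. File names are hypothetical.
import json
import random

all_docs = [f"doc{i}.txt" for i in range(100)]

# Do the random split once, persist it, and reuse it for every later training run.
random.seed(42)
test_docs = sorted(random.sample(all_docs, k=20))
with open("test-manifest.json", "w") as f:
    json.dump(test_docs, f, indent=2)

# Later runs load the same manifest instead of re-splitting.
with open("test-manifest.json") as f:
    fixed_test_set = set(json.load(f))
train_docs = [d for d in all_docs if d not in fixed_test_set]
print(len(train_docs), "training docs,", len(fixed_test_set), "test docs")
```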
## Does a low or high model score guarantee bad or good performance in production?
-Model evaluation may not always be comprehensive. This depends on:
-* If the **test set** is too small so the good/bad scores are not representative of model's actual performance. Also if a specific entity type is missing or under-represented in your test set it will affect model performance.
-* **Data diversity** if your data only covers few scenarios/examples of the text you expect in production, your model will not be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
-* **Data representation** if the dataset used to train the model is not representative of the data that would be introduced to the model in production, model performance will be affected greatly.
+Model evaluation may not always be comprehensive. The scope depends on the following factors:
+
+* The size of the **test set**. If the test set is too small, the good/bad scores aren't representative of the model's actual performance. Also, if a specific entity type is missing or underrepresented in your test set, it affects model performance.
+* The **diversity of your data**. If your data only includes a limited number of scenarios or examples of the text you anticipate in production, your model may not encounter every possible situation. As a result, the model could perform poorly when faced with unfamiliar scenarios.
+* The **representation within your data**. If the dataset used to train the model isn't representative of the data that would be introduced to the model in production, model performance is affected greatly.
-See the [data selection and schema design](how-to/design-schema.md) article for more information.
+For more information, *see* [data selection and schema design](how-to/design-schema.md).
## How do I improve model performance?
-* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous, and you should consider merging them both into one entity type for better performance.
+* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class.
+
+If two entity types are frequently predicted as each other, your schema is ambiguous. Consider merging them into a single entity type to improve overall model accuracy.
* [Review test set predictions](how-to/view-model-evaluation.md). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
* Learn more about [data selection and schema design](how-to/design-schema.md).
-* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged entities side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
+* [Review your test set](how-to/view-model-evaluation.md). Review the predicted entities alongside the tagged entities and gain a clearer understanding of your model's accuracy. This comparison can help you determine whether adjustments to the schema or tag set are needed.
## Why do I get different results when I retrain my model?
-* When you [train your model](how-to/train-model.md), you can determine if you want your data to be split randomly into train and test sets. If you do, so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
+* When you [train your model](how-to/train-model.md), you can determine if you want your data to be split randomly into train and test sets. If you do, there's no guarantee that the evaluation runs on the same test set each time, so the results aren't directly comparable.
+
+
+* If you're retraining the same model, your test set is the same, but you might notice slight changes in the model's predictions. This happens when the trained model isn't sufficiently robust, which depends on how representative and distinct your data is and on the quality of your tagging. For best results, make sure your dataset accurately represents the target domain, offers diverse examples, and is tagged consistently and accurately.
-* If you're retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough and this is a factor of how representative and distinct your data is and the quality of your tagged data.
## How do I get predictions in different languages?
@@ -83,13 +97,12 @@ After deploying your model, you [call the prediction API](how-to/call-api.md), u
## Data privacy and security
-Custom NER is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, Custom NER users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
+Your data is only stored in your Azure Storage account. Custom NER only has access to read from it during training. Custom NER users have full control to view, export, or delete any user content either through the [Azure AI Foundry](https://ai.azure.com/) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob). For more information, *see* [Data, privacy, and security for Azure AI Language](/azure/ai-foundry/responsible-ai/language-service/data-privacy).
-Your data is only stored in your Azure Storage account. Custom NER only has access to read from it during training.
## How to clone my project?
-To clone your project you need to use the export API to export the project assets, and then import them into a new project. See the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
+To clone your project, you need to use the export API to export the project assets, and then import them into a new project. See the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
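A rough sketch of that clone flow with Python's `requests` library follows. The operation paths, query parameters, and API version shown are assumptions based on the Language authoring API pattern; confirm the exact routes in the REST API reference linked above before relying on them.

```python
# Rough sketch of cloning a project by exporting its assets and importing them
# into a new project. The operation paths and api-version below are assumptions;
# confirm the exact routes in the REST API reference linked above.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"   # placeholder
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}               # placeholder
api_version = "2022-05-01"                                          # assumption

# 1. Trigger an export job for the source project (asynchronous operation).
export_url = (f"{endpoint}/language/authoring/analyze-text/projects/"
              f"my-source-project/:export?stringIndexType=Utf16CodeUnit"
              f"&api-version={api_version}")
export_job = requests.post(export_url, headers=headers)
print(export_job.status_code, export_job.headers.get("operation-location"))

# 2. Poll the operation-location URL until the job completes, download the
#    exported assets, then POST them to the :import route of the new project.
```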
## Next steps
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNERに関するFAQの更新"
}
Explanation
This change updates the frequently asked questions (FAQ) documentation for custom named entity recognition (NER). The main changes improve readability and clarify information. For example, guidance on model training time, service limits, and improving model performance is presented in a more intuitive way.
Some question headers are also reworded, and answers now state specific information more clearly. Sections such as getting started with the service and improving model performance provide concrete action lists so users can find the information they need quickly.
The data privacy section is also updated to explain more clearly how data is stored in your Azure Storage account and how users can access their own content. This gives users more confidence in how their data is handled when using the custom NER service. Overall, the FAQ is more informative and its answers are better organized.
articles/ai-services/language-service/custom-named-entity-recognition/glossary.md
Diff
@@ -6,7 +6,7 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: conceptual
-ms.date: 06/30/2025
+ms.date: 09/18/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner
---
@@ -17,7 +17,7 @@ Use this article to learn about some of the definitions and terms you may encoun
## Entity
-An entity is a span of text that indicates a certain type of information. The text span can consist of one or more words. In the scope of custom NER, entities represent the information that the user wants to extract from the text. Developers tag entities within their data with the needed entities before passing it to the model for training. For example "Invoice number", "Start date", "Shipment number", "Birthplace", "Origin city", "Supplier name" or "Client address".
+An entity is a span of text that indicates a certain type of information. The text span can consist of one or more words. In the scope of custom NER, entities represent the information that the user wants to extract from the text. Developers tag the needed entities within their data before passing it to the model for training. For example, "Invoice number," "Start date," "Shipment number," "Birthplace," "Origin city," "Supplier name," or "Client address."
For example, in the sentence "*John borrowed 25,000 USD from Fred.*" the entities might be:
@@ -28,7 +28,7 @@ For example, in the sentence "*John borrowed 25,000 USD from Fred.*" the entitie
| Loan Amount | *25,000 USD* |
## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+The F1 score is the harmonic mean of [precision](#precision) and [recall](#recall). Use it when you seek a balance between the two.
## Model
@@ -43,19 +43,19 @@ Measures how precise/accurate your model is. It's the ratio between the correctl
## Project
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+A project is a work area for building your custom ML models based on your data. Your project is only accessible by you and others who have access to the Azure resource being used.
As a prerequisite to creating a custom entity extraction project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
-Within your project you can do the following actions:
+Here's a list of actions you can take:
* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract.
* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
-* **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
-* **Deployment**: After you have reviewed the model's performance and decided it can be used in your environment, you need to assign it to a deployment to use it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-* **Test model**: After deploying your model, test your deployment in [Language Studio](https://aka.ms/LanguageStudio) to see how it would perform in production.
+* **View model evaluation details**: Review your model performance to decide if there's room for improvement, or you're satisfied with the results.
+* **Deployment**: After you review the model's performance and decide that it can be used in your environment, you need to assign it to a deployment to use it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+* **Test model**: After deploying your model, test your deployment in [Azure AI Foundry](https://ai.azure.com/) to see how it would perform in production.
## Recall
-Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was tagged. The recall metric reveals how many of the tagged entities are correctly predicted.
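To make the precision, recall, and F1 definitions concrete, here's a small worked computation with made-up counts; the numbers are purely illustrative.

```python
# Worked example of the glossary metrics using made-up counts.
true_positives = 40    # entities predicted and actually tagged
false_positives = 10   # entities predicted but not tagged
false_negatives = 20   # entities tagged but not predicted

precision = true_positives / (true_positives + false_positives)   # 0.80
recall = true_positives / (true_positives + false_negatives)      # ~0.67
f1 = 2 * precision * recall / (precision + recall)                # ~0.73

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```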
## Next steps
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNER用語集の更新"
}
Explanation
This change updates the glossary documentation for custom named entity recognition (NER). The main changes improve sentence structure and use more consistent wording. For example, the entity description and the F1 score definition are clarified, and certain terms are tidied up. The project entry also presents the actions users can take as an easy-to-follow list.
Specifically, the Project section strengthens the explanation of project access permissions and lists what users can do at a glance. Actions such as labeling your data, building and training your model, and viewing model evaluation details are also described more clearly.
In addition, the sections related to post-training model evaluation are refined, giving users direction for reviewing model performance and finding areas to improve. Overall, the glossary improves the consistency of the information and helps readers better understand the terminology and concepts.
articles/ai-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Diff
@@ -6,7 +6,7 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
ms.devlang: csharp
# ms.devlang: csharp, python
@@ -16,30 +16,16 @@ ms.custom: language-service-custom-ner
# Query your custom model
After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [Prediction API](/rest/api/language/text-analysis-runtime/analyze-text) or through the client libraries (Azure SDK).
+You can query the deployment programmatically using the [Prediction API](/rest/api/language/text-analysis-runtime/analyze-text) or through the client libraries (Azure SDK).
## Test deployed model
-You can use Language Studio to submit the custom entity recognition task and visualize the results.
+Through the Azure AI Foundry, you can retrieve up-to-date information about your projects, test your deployed model, and oversee project management tasks efficiently.
[!INCLUDE [Test model](../../includes/custom/language-studio/test-model.md)]
:::image type="content" source="../media/test-model-results.png" alt-text="A screenshot showing the model test results." lightbox="../media/test-model-results.png":::
-
-## Send an entity recognition request to your model
-
-# [Language Studio](#tab/language-studio)
-
-[!INCLUDE [Get prediction URL](../../includes/custom/language-studio/get-prediction-url.md)]
-
-# [REST API](#tab/rest-api)
-
-First you need to get your resource key and endpoint:
-
-[!INCLUDE [Get keys and endpoint Azure Portal](../../includes/key-endpoint-page-azure-portal.md)]
-
-
### Submit a custom NER task
[!INCLUDE [submit a custom NER task using the REST API](../includes/rest-api/submit-task.md)]
@@ -55,28 +41,28 @@ First you need to get your resource key and endpoint:
[!INCLUDE [Get keys and endpoint Azure Portal](../../includes/get-key-endpoint.md)]
3. Download and install the client library package for your language of choice:
-
+
|Language |Package version |
|---------|---------|
|.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
|Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
|JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
|Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
+
+4. After you install the client library, use the following samples on GitHub to start calling the API.
+
* [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_RecognizeCustomEntities.md)
* [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java)
* [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
* [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py)
-
-5. See the following reference documentation for more information on the client, and return object:
-
+
+5. For more information, *see* the following reference documentation:
+
* [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
* [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
* [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
* [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
-
+
---
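As a minimal sketch of steps 4 and 5 in Python, the following uses the beta `azure-ai-textanalytics` package listed in the table above. The endpoint, key, project name, and deployment name are placeholders, and method and action names can differ between package versions, so verify them against the linked Python sample and reference documentation.

```python
# Minimal sketch based on the Python sample linked above (azure-ai-textanalytics,
# beta versions). Endpoint, key, project, and deployment names are placeholders;
# method and action names can vary between package versions, so confirm them
# against the linked sample and reference documentation.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient, RecognizeCustomEntitiesAction

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["The invoice INV-12345 was issued on March 3 by Contoso Ltd."]

poller = client.begin_analyze_actions(
    documents,
    actions=[RecognizeCustomEntitiesAction(
        project_name="<your-project>",
        deployment_name="<your-deployment>",
    )],
)

# Each input document yields one result per requested action.
for doc_results in poller.result():
    for action_result in doc_results:
        if action_result.is_error:
            print("Error:", action_result.error)
            continue
        for entity in action_result.entities:
            print(entity.text, entity.category, entity.confidence_score)
```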
## Next steps
Summary
{
"modification_type": "minor update",
"modification_title": "API呼び出し手順の更新"
}
Explanation
This change updates the procedure for calling the custom named entity recognition (NER) API. The main changes make the text more concise and clear, particularly the explanation of how to query a deployment programmatically.
The section on testing a model with Language Studio now describes managing projects and updating information through Azure AI Foundry, emphasizing a more efficient way to oversee projects. The way model test results are visualized is also explained more clearly.
In the section on submitting a custom NER task, the steps for getting a resource key and endpoint are simplified, and the information about target programming languages and their package versions is tidied up. Links to concrete code samples are retained, and the overall flow is reorganized to make the API call easier to understand.
These changes let users make API calls quickly and effectively and make the documentation easier to use. Overall, the article is more approachable and is organized so the necessary information can be found smoothly.
articles/ai-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Diff
@@ -1,42 +1,41 @@
---
-title: Create custom NER projects and use Azure resources
+title: Create custom named entity recognition (NER) projects and use Azure resources
titleSuffix: Azure AI services
description: Learn how to create and manage projects and Azure resources for custom NER.
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner, references_regions
---
-# How to create custom NER project
+# How to create custom named entity recognition (NER) project
Use this article to learn how to set up the requirements for starting with custom NER and create a project.
## Prerequisites
-Before you start using custom NER, you will need:
+Before you start using custom NER, you need:
* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
## Create a Language resource
-Before you start using custom NER, you will need an Azure AI Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom named entity recognition.
+Before you start using custom NER, you need an Azure AI Language resource. We recommend that you create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a preexisting resource, and configure it to work with custom named entity recognition.
-You also will need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to extract entities.
+You also need an Azure storage account where you upload your `.txt` documents that are used to train a model to extract entities.
> [!NOTE]
> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you will connect a pre-existing storage account, you should have an owner role assigned to it.
+> * If you connect a preexisting storage account, you should have an owner role assigned to it.
## Create Language resource and connect storage account
You can create a resource in the following ways:
* The Azure portal
-* Language Studio
* PowerShell
> [!Note]
@@ -50,16 +49,16 @@ You can create a resource in the following ways:
> [!NOTE]
-> * The process of connecting a storage account to your Language resource is irreversible, it cannot be disconnected later.
+> * The process of connecting a storage account to your Language resource is irreversible. It can't be disconnected later.
> * You can only connect your language resource to one storage account.
-## Using a pre-existing Language resource
+## Using a preexisting Language resource
[!INCLUDE [use an existing resource](../includes/use-pre-existing-resource.md)]
## Create a custom named entity recognition project
-Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
+Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Only you, and others who have access to the Azure resource being used, can access your project. If you already labeled your data, you can use it to get started by [importing a project](#import-project).
### [Language Studio](#tab/language-studio)
@@ -73,7 +72,7 @@ Once your resource and storage container are configured, create a new custom NER
## Import project
-If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+If you already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
### [Language Studio](#tab/language-studio)
@@ -111,6 +110,6 @@ If you have already labeled data, you can use it to get started with the service
## Next steps
-* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+* You should have an idea of the [project schema](design-schema.md) you use to label your data.
-* After your project is created, you can start [labeling your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
+* After your project is created, you can start [labeling your data](tag-data.md). This process informs your entity extraction model how to interpret text, and is used for training and evaluation.
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNERプロジェクト作成手順の更新"
}
Explanation
This change updates the documentation on how to create a custom named entity recognition (NER) project. The main focus is on adjusting and clarifying the wording, making the step descriptions easier to follow.
Specifically, the project overview and requirements are expressed more clearly, and the prerequisites section improves the explanation of the required Azure resources and storage account. The wording around how to create a resource is also more concise and easier to understand.
In particular, the section on using a preexisting resource adds clear guidance about the role required when connecting an existing storage account. The wording around importing projects and data is also improved so the steps are easier to follow.
Overall, these changes make it easier to grasp the steps needed to create a custom NER project and organize the information that helps with the actual operations.
articles/ai-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
Diff
@@ -1,36 +1,36 @@
---
-title: How to deploy a custom NER model
+title: How to deploy a custom named entity recognition (NER) model
titleSuffix: Azure AI services
description: Learn how to deploy a model for custom NER.
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner
---
# Deploy a model and extract entities from text using the runtime API
-Once you are satisfied with how your model performs, it is ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
## Prerequisites
* A successfully [created project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* Text data that is [uploaded](design-schema.md#data-preparation) to your storage account.
* [Labeled data](tag-data.md) and successfully [trained model](train-model.md)
* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+ For more information, *see* [project development lifecycle](../overview.md#project-development-lifecycle).
## Deploy model
-After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+After you review your model's performance and decide that it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). We recommend that you create a deployment named *production* to which you assign the best model you built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
-# [Language Studio](#tab/language-studio)
+# [Azure AI Foundry](#tab/azure-ai-foundry)
-[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]
+For information on how to deploy your custom model in the Azure AI Foundry, *see* [Deploy your fine-tuned model](/azure/ai-foundry/openai/how-to/fine-tuning-deploy?tabs=portal#deploy-your-fine-tuned-model).
# [REST APIs](#tab/rest-api)
@@ -46,11 +46,17 @@ After you've reviewed your model's performance and decided it can be used in you
## Swap deployments
-After you are done testing a model assigned to one deployment and you want to assign this model to another deployment you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment, and assigning it to the second deployment. Then taking the model assigned to second deployment, and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
+After you're done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
-# [Language Studio](#tab/language-studio)
+# [Azure AI Foundry](#tab/azure-ai-foundry)
-[!INCLUDE [Swap deployments](../includes/language-studio/swap-deployment.md)]
+To replace a deployed model, you can exchange the deployed model with a different model in the same region:
+
+1. Select the model name under **Name** then select **Deploy model**.
+
+1. Select **Swap model**.
+
+ The redeployment takes several minutes to complete. In the meantime, the previously deployed model remains available for use through the prediction API until the swap is complete.
# [REST APIs](#tab/rest-api)
@@ -61,9 +67,20 @@ After you are done testing a model assigned to one deployment and you want to as
## Delete deployment
-# [Language Studio](#tab/language-studio)
+# [Azure AI Foundry](#tab/azure-ai-foundry)
+If you no longer need your project, you can delete it from the Azure AI Foundry.
+
+1. Navigate to the [Azure AI Foundry](https://ai.azure.com/) home page. Initiate the authentication process by signing in, unless you already completed this step and your session is active.
+1. Select the project that you want to delete from the **Keep building with Azure AI Foundry** section.
+1. Select **Management center**.
+1. Select **Delete project**.
+
+To delete the hub along with all its projects:
+
+1. Navigate to the **Overview** tab in the **Hub** section.
-[!INCLUDE [Delete deployment](../includes/language-studio/delete-deployment.md)]
+1. On the right, select **Delete hub**.
+1. The link opens the Azure portal for you to delete the hub.
# [REST APIs](#tab/rest-api)
@@ -75,9 +92,9 @@ After you are done testing a model assigned to one deployment and you want to as
You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-# [Language Studio](#tab/language-studio)
+# [Azure AI Foundry](#tab/azure-ai-foundry)
-[!INCLUDE [Assign resource](../../conversational-language-understanding/includes/language-studio/assign-resources.md)]
+For more information on how to deploy your custom model, *see* [Deploy your fine-tuned model](/azure/ai-foundry/openai/how-to/fine-tuning-deploy?tabs=python#deploy-your-fine-tuned-model).
# [REST APIs](#tab/rest-api)
@@ -87,12 +104,25 @@ You can [deploy your project to multiple regions](../../concepts/custom-features
## Unassign deployment resources
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+When you unassign or remove a deployment resource from a project, you also delete all the deployments made to that resource's region.
-# [Language Studio](#tab/language-studio)
+# [Azure AI Foundry](#tab/azure-ai-foundry)
-[!INCLUDE [Unassign resource](../../conversational-language-understanding/includes/language-studio/unassign-resources.md)]
+If you no longer need your project, you can delete it from the Azure AI Foundry.
+1. Navigate to the [Azure AI Foundry](https://ai.azure.com/) home page. Initiate the authentication process by signing in, unless you already completed this step and your session is active.
+1. Select the project that you want to delete from the **Keep building with Azure AI Foundry** section.
+1. Select **Management center**.
+1. Select **Delete project**.
+
+To delete the hub along with all its projects:
+
+1. Navigate to the **Overview** tab in the **Hub** section.
+
+1. On the right, select **Delete hub**.
+1. The link opens the Azure portal for you to delete the hub.
+
+
# [REST APIs](#tab/rest-api)
[!INCLUDE [Unassign resource](../../custom-text-classification/includes/rest-api/unassign-resources.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNERモデルのデプロイ手順の更新"
}
Explanation
This change updates the procedure for deploying a custom named entity recognition (NER) model. The main points are improved wording and new information that deepens users' understanding when deploying a model.
First, the title is clarified as deploying a custom NER model, which improves comprehension for readers. Instructions for configuring deployments are more specific, and the prerequisites are stated clearly. In particular, the wording about project requirements and uploading text data is improved so it's more intuitive.
In addition, details are added about swapping deployments and deleting projects within Azure AI Foundry, giving users concrete guidelines for managing deployments.
Overall, these changes help the custom NER model deployment process go smoothly and make the content practical and easy to understand. Because the actual operating steps are described in detail, the article's practical value is improved.
articles/ai-services/language-service/custom-named-entity-recognition/how-to/design-schema.md
Diff
@@ -1,19 +1,19 @@
---
-title: Preparing data and designing a schema for custom NER
+title: Preparing data and designing a schema for custom named entity recognition (NER)
titleSuffix: Azure AI services
description: Learn about how to select and prepare data, to be successful in creating custom NER projects.
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner
---
# How to prepare data and define a schema for custom NER
-In order to create a custom NER model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the entity types/categories that you need your model to extract from the text at runtime.
+In order to create a custom NER model, you need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the entity types/categories that you need your model to extract from the text at runtime.
## Schema design
@@ -23,34 +23,34 @@ The schema defines the entity types/categories that you need your model to extra
* Identify the [entities](../glossary.md#entity) you want to extract from the data.
- For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
+ For example, if you're extracting entities from support emails, you might need to extract "Customer name," "Product name," "Request date," and "Contact information."
* Avoid entity types ambiguity.
- **Ambiguity** happens when entity types you select are similar to each other. The more ambiguous your schema the more labeled data you will need to differentiate between different entity types.
+ **Ambiguity** happens when entity types you select are similar to each other. The more ambiguous your schema, the more labeled data you need to differentiate between different entity types.
- For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity since the names of both parties look similar. Avoid ambiguity as it saves time, effort, and yields better results.
+ For example, if you're extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you need to add more examples to overcome ambiguity since the names of both parties look similar. Avoiding ambiguity saves time and effort and yields better results.
-* Avoid complex entities. Complex entities can be difficult to pick out precisely from text, consider breaking it down into multiple entities.
+* Avoid complex entities. Complex entities can be difficult to pick out precisely from text. Consider breaking it down into multiple entities.
- For example, extracting "Address" would be challenging if it's not broken down to smaller entities. There are so many variations of how addresses appear, it would take large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
+ For example, extracting "Address" would be challenging if not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name," "PO Box," "City," "State," and "Zip," the model requires fewer labels per entity.
## Data selection
The quality of data you train your model with affects model performance greatly.
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it likely differs from your real-life data and makes your model less effective when used.
-* Balance your data distribution as much as possible without deviating far from the distribution in real-life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that exemplify the diversity as you would expect to see in real life.
+* Balance your data distribution as much as possible without deviating far from the distribution in real-life.
* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+* Consider where your data comes from. If you're collecting data from one person, department, or part of your scenario, you're likely missing diversity that may be important for your model to learn about.
> [!NOTE]
-> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of most of your documents.
## Data preparation
@@ -61,12 +61,12 @@ As a prerequisite for creating a project, your training data needs to be uploade
You can only use `.txt` documents. If your data is in other format, you can use [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
-You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language studio.
+You can upload an annotated dataset, or you can upload an unannotated one and label your data.
## Test set
-When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set include documents that represent all entities used in your project.
+When defining the testing set, make sure to include example documents that aren't present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
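A quick, hedged way to verify the rule above is to check that no document appears in both splits. The folder layout in this sketch is hypothetical; point it at wherever you keep each set.

```python
# Quick check that no test document also appears in the training set.
# Folder names are hypothetical; point them at wherever you keep each split.
from pathlib import Path

train_docs = {p.name for p in Path("data/train").glob("*.txt")}
test_docs = {p.name for p in Path("data/test").glob("*.txt")}

overlap = train_docs & test_docs
if overlap:
    print("Documents present in both sets:", sorted(overlap))
else:
    print("No overlap between training and testing sets.")
```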
## Next steps
-If you haven't already, create a custom NER project. If it's your first time using custom NER, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
+If you haven't already, create a custom NER project. If it's your first time using custom NER, consider following the [quickstart](../quickstart.md) to create an example project. For more information, *see* the [how-to article](../how-to/create-project.md).
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNER向けデータ準備とスキーマ設計の更新"
}
Explanation
This change updates the documentation on preparing data and designing a schema for a custom named entity recognition (NER) model. The main changes focus on clearer wording and consistent terminology. Specifically, the title is clarified as preparing data and designing a schema for custom named entity recognition (NER).
The text emphasizes how data quality affects model performance and explains schema design considerations with concrete examples, such as avoiding ambiguous entity types and breaking down complex entities. As a result of these improvements, the article is easier to understand and provides information that helps with real projects.
The points to consider during data selection are also clarified, emphasizing the importance of data provenance and diversity. Throughout the document, the concrete advice to users is consistent and provides guidance toward a successful project.
Overall, these changes make the data preparation and schema design steps required for a custom NER project clearer and provide practical information that helps users move their projects forward effectively.
articles/ai-services/language-service/custom-named-entity-recognition/how-to/tag-data.md
Diff
@@ -6,42 +6,42 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: how-to
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner
---
-# Label your data in Language Studio
+# Label your data in Azure Language Studio
-Before training your model you need to label your documents with the custom entities you want to extract. Data labeling is a crucial step in development lifecycle. In this step you can create the entity types you want to extract from your data and label these entities within your documents. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project.
+Before training your model, you need to label your documents with the custom entities you want to extract. Data labeling is a crucial step in development lifecycle. You can create the entity types you want to extract from your data and label these entities within your documents. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project.
-Before creating a custom NER model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
+Before creating a custom NER model, you first need to label your data. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
## Prerequisites
Before you can label your data, you need:
* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* Text data that is [uploaded](design-schema.md#data-preparation) to your storage account.
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+For more information, *see* the [project development lifecycle](../overview.md#project-development-lifecycle).
## Data labeling guidelines
-After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON document in your storage container that you have connected to this project.
+After [preparing your data, designing your schema](design-schema.md), and [creating your project](create-project.md), you need to label your data. Labeling your data is important so your model knows which words are associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you connected to this project.
As you label your data, keep in mind:
* In general, more labeled data leads to better results, provided the data is labeled accurately.
-* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
+* The precision, consistency, and completeness of your labeled data are key factors to determining model performance.
- * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
+ * **Label precisely**: Label each entity to its right type always. Only include what you want extracted. Avoid unnecessary data in your labels.
* **Label consistently**: The same entity should have the same label across all the documents.
- * **Label completely**: Label all the instances of the entity in all your documents. You can use the [auto labelling feature](use-autolabeling.md) to ensure complete labeling.
+ * **Label completely**: Label all the instances of the entity in all your documents. You can use the [autolabeling feature](use-autolabeling.md) to ensure complete labeling.
> [!NOTE]
- > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
+ > There's no fixed number of labels that can guarantee your model performs the best. Model performance is dependent on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
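To sanity-check the ~50-instances-per-entity recommendation, a small sketch like the following can tally labeled instances per entity type. The in-memory label structure is illustrative only and isn't the stored JSON label format.

```python
# Tally labeled instances per entity type and flag anything below the ~50-instance
# recommendation. The in-memory structure is illustrative, not the stored JSON format.
from collections import Counter

labels = [
    {"document": "doc1.txt", "entity": "InvoiceNumber"},
    {"document": "doc1.txt", "entity": "SupplierName"},
    {"document": "doc2.txt", "entity": "InvoiceNumber"},
]

counts = Counter(label["entity"] for label in labels)
for entity, count in counts.most_common():
    note = "" if count >= 50 else "  <-- consider adding more labels"
    print(f"{entity:20} {count}{note}")
```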
## Label your data
@@ -60,7 +60,7 @@ Use the following steps to label your data:
3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
> [!NOTE]
- > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
+ > If you enabled multiple languages for your project, you find a **Language** dropdown in the top menu, which lets you select the language of each document.
4. In the right side pane, **Add entity type** to your project so you can start labeling your data with them.
@@ -71,24 +71,24 @@ Use the following steps to label your data:
|Option |Description |
|---------|---------|
|Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
- |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu appears. Select the entity type you want to assign for this entity. |
- The below screenshot shows labeling using a brush.
+ The following screenshot shows labeling using a brush.
:::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances per each.
-6. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
+6. In the bottom section of the right side pane, you can add the current document you're viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
> [!TIP]
- > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+ > If you're planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+7. Under the **Distribution** pivot, you can view the distribution across training and testing sets. You have two options for viewing:
* *Total instances* where you can view count of all labeled instances of a specific entity type.
* *documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
-7. When you're labeling, your changes will be synced periodically, if they have not been saved yet you will find a warning at the top of your page. If you want to save manually, select **Save labels** button at the bottom of the page.
+7. When you're labeling, your changes are synced periodically. If they aren't saved yet, a warning appears at the top of your page. To save manually, select the **Save labels** button at the bottom of the page.
## Remove labels
@@ -99,8 +99,8 @@ To remove a label
## Delete entities
-To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity will remove all its labeled instances from your dataset.
+To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
## Next steps
-After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
+After you label your data, you can begin [training a model](train-model.md) that will learn based on your data.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure Language Studioでのデータラベリング手順の更新"
}
Explanation
This change updates the documentation on data labeling for custom named entity recognition (NER) models. The main focus is on clarifying wording and making it consistent.
First, the section title is updated from "Language Studio" to "Azure Language Studio," making the specific name explicit. This improvement helps users recognize the resource.
Content-wise, the article emphasizes that data labeling is a crucial step in the development lifecycle and shows concretely how to create entity types and label them within documents. The guidance on key factors such as labeling precision, consistency, and completeness is also refined in detail, helping users better understand the factors that affect model performance.
The step-by-step instructions are also rewritten in clearer, more concise language so users can carry out the operations easily. For example, the steps for labeling, adding documents to a set, and removing labels are reorganized and easier to scan.
Overall, these changes make the data labeling process clearer and provide practical, easy-to-understand information that lets users label entities with confidence and move on to model training.
articles/ai-services/language-service/custom-named-entity-recognition/includes/language-studio/create-project.md
Diff
@@ -3,35 +3,35 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: include
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
---
-1. Sign into the [Language Studio](https://aka.ms/languageStudio). A window will appear to let you select your subscription and Language resource. Select the Language resource you created in the above step.
+1. Sign into the [Language Studio](https://aka.ms/languageStudio). A window appears to let you select your subscription and Language resource. Select the Language resource you created in the previous step.
2. Under the **Extract information** section of Language Studio, select **Custom named entity recognition**.
- :::image type="content" source="../../media/select-custom-ner.png" alt-text="A screenshot showing the location of custom NER in the Language Studio landing page." lightbox="../../media/select-custom-ner.png":::
+ :::image type="content" source="../../media/select-custom-ner.png" alt-text="A screenshot showing the location of custom named entity recognition (NER) in the Language Studio landing page." lightbox="../../media/select-custom-ner.png":::
-3. Select **Create new project** from the top menu in your projects page. Creating a project will let you tag data, train, evaluate, improve, and deploy your models.
+3. Select **Create new project** from the top menu in your projects page. Creating a project lets you tag data, train, evaluate, improve, and deploy your models.
:::image type="content" source="../../media/create-project.png" alt-text="A screenshot of the project creation page." lightbox="../../media/create-project.png":::
-4. After you click, **Create new project**, a window will appear to let you connect your storage account. If you've already connected a storage account, you will see the storage accounted connected. If not, choose your storage account from the dropdown that appears and select **Connect storage account**; this will set the required roles for your storage account. This step will possibly return an error if you are not assigned as **owner** on the storage account.
+4. After you select, **Create new project**, a window will appear to let you connect your storage account. If you already connected a storage account, the connected storage accounted appears in the window. If not, choose your storage account from the dropdown that appears and select **Connect storage account**; this sets the required roles for your storage account. This step can return an error if you aren't assigned as **owner** on the storage account.
>[!NOTE]
> * You only need to do this step once for each new resource you use.
- > * This process is irreversible, if you connect a storage account to your Language resource you cannot disconnect it later.
+ > * This process is irreversible, if you connect a storage account to your Language resource you can't disconnect it later.
> * You can only connect your Language resource to one storage account.
:::image type="content" source="../../media/connect-storage.png" alt-text="A screenshot showing the storage connection screen." lightbox="../../media/connect-storage.png":::
-5. Enter the project information, including a name, description, and the language of the files in your project. If you're using the [example dataset](https://go.microsoft.com/fwlink/?linkid=2175226), select **English**. You won’t be able to change the name of your project later. Select **Next**
+5. Enter the project information, including a name, description, and the language of the files in your project. If you're using the [example dataset](https://go.microsoft.com/fwlink/?linkid=2175226), select **English**. You can't change the name of your project later. Select **Next**
> [!TIP]
> Your dataset doesn't have to be entirely in the same language. You can have multiple documents, each with different supported languages. If your dataset contains documents of different languages or if you expect text from different languages during runtime, select **enable multi-lingual dataset** option when you enter the basic information for your project. This option can be enabled later from the **Project settings** page.
-6. Select the container where you have uploaded your dataset.
-If you have already labeled data make sure it follows the [supported format](../../concepts/data-formats.md) and select **Yes, my files are already labeled and I have formatted JSON labels file** and select the labels file from the drop-down menu. Select **Next**.
+6. Select the container where you uploaded your dataset.
+If you already labeled data make sure it follows the [supported format](../../concepts/data-formats.md) and select **Yes, my files are already labeled and I have formatted JSON labels file** and select the labels file from the drop-down menu. Select **Next**.
7. Review the data you entered and select **Create Project**.
Summary
{
"modification_type": "minor update",
"modification_title": "プロジェクト作成手順の明確化"
}
Explanation
This change updates the documentation on creating a custom named entity recognition (NER) project in Azure Language Studio. Specifically, it clarifies the wording around selecting the Language resource and connecting a storage account, improving readability throughout.
The description of the window that appears during project creation is now more concise and leaves less room for misreading, and the storage account connection process and its caveats are explained in plainer terms. For example, "storage accounted connected" was revised to "connected storage accounted appears," which makes the sentence flow more smoothly.
The notes and options presented when entering project information were also tightened: the statement that the project name can't be changed later is more direct, and the tip about multilingual datasets is called out.
Together these refinements help users create a project smoothly, resolving points of confusion at each step and making the document more practical and accessible.
articles/ai-services/language-service/custom-named-entity-recognition/includes/quickstarts/azure-ai-foundry.md
Diff
@@ -0,0 +1,237 @@
+---
+author: laujan
+manager: nitinme
+ms.service: azure-ai-language
+ms.topic: include
+ms.date: 09/24/2025
+ms.author: lajanuar
+---
+
+> [!NOTE]
+>
+> * This project requires that you have an **Azure AI Foundry hub-based project with an Azure storage account** (not a Foundry project). For more information, *see* [How to create and manage an Azure AI Foundry hub](/azure/ai-foundry/how-to/create-azure-ai-resource)
+> * If you already have an Azure AI Language or multi-service resource—whether used on its own or through Language Studio—you can continue to use those existing Language resources within the Azure AI Foundry portal. For more information, see [How to use Azure AI services in the Azure AI Foundry portal](/azure/ai-services/connect-services-ai-foundry-portal).
+
+## Prerequisites
+
+* An **Azure subscription**. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+* The **Requisite permissions**. Make sure the person establishing the account and project is assigned as the Azure AI Account Owner role at the subscription level. Alternatively, having either the **Contributor** or **Cognitive Services Contributor** role at the subscription scope also meets this requirement. For more information, *see* [Role based access control (RBAC)](/azure/ai-foundry/openai/how-to/role-based-access-control).
+
+* An [**Azure AI Language resource with a storage account**](https://portal.azure.com/?Microsoft_Azure_PIMCommon=true#create/Microsoft.CognitiveServicesTextAnalytics). On the **select additional features** page, select the **Custom text classification, Custom named entity recognition, Custom sentiment analysis & Custom Text Analytics for health** box to link a required storage account with this resource:
+
+ :::image type="content" source="../../media/foundry-next/select-additional-features.png" alt-text="Screenshot of the select additional features option in the Azure AI Foundry.":::
+
+ > [!NOTE]
+ > * You need to have an **owner** role assigned on the resource group to create a Language resource.
+ > * If you're connecting a preexisting storage account, you should have an owner role assigned to it.
+ > * Don't move the storage account to a different resource group or subscription once linked with the Language resource.
+
+
+* **An Azure AI Foundry hub-based project**. For more information about Foundry hub-based project, *see* [Create a hub project for Azure AI Foundry](/azure/ai-foundry/how-to/hub-create-projects).
+
+* **A custom NER dataset uploaded to your storage container**. A custom named entity recognition (NER) dataset is the collection of labeled text documents used to train your custom NER model. You can [download our sample dataset](https://go.microsoft.com/fwlink/?linkid=2175226) for this quickstart. The source language is English.
+
+## Step 1: Configure required roles, permissions, and settings
+
+Let's begin by configuring your resources.
+
+### Enable custom named entity recognition feature
+
+Make sure the **Custom text classification / Custom Named Entity Recognition** feature is enabled in the [Azure portal](https://portal.azure.com/).
+
+1. Go to your Language resource in the [Azure portal](https://portal.azure.com).
+1. From the left side menu, under **Resource Management** section, select **Features**.
+1. Make sure the **Custom text classification / Custom Named Entity Recognition** feature is enabled.
+1. If your storage account isn't assigned, select and connect your storage account.
+1. Select **Apply**.
+
+### Add required roles for your Azure AI Language resource
+
+1. Go to your storage account or Language resource in the [Azure portal](https://portal.azure.com/).
+1. Select **Access Control (IAM)** in the left pane.
+1. Select **Add** to **Add Role Assignments**, and choose the appropriate role for your account.
+
+ * You should have the **Cognitive Services Language Owner** or **Cognitive Services Contributor** role assignment for your Language resource.
+
+1. Within **Assign access to**, select **User, group, or service principal**.
+1. Select **Select members**.
+1. Select ***your user name***. You can search for user names in the **Select** field. Repeat this step for all roles.
+1. Repeat these steps for all the user accounts that need access to this resource.
+
+
+### Add required roles for your storage account
+
+1. Go to your storage account page in the [Azure portal](https://portal.azure.com/).
+1. Select **Access Control (IAM)** in the left pane.
+1. Select **Add** to **Add Role Assignments**, and choose the **Storage blob data contributor** role on the storage account.
+1. Within **Assign access to**, select **Managed identity**.
+1. Select **Select members**.
+1. Select your subscription, and **Language** as the managed identity. You can search for your language resource in the **Select** field.
+
+### Add required user roles
+
+> [!IMPORTANT]
+> If you skip this step, you get a 403 error when you try to connect to your custom project. It's important that your current user has this role to access storage account blob data, even if you're the owner of the storage account.
+>
+
+1. Go to your storage account page in the [Azure portal](https://portal.azure.com/).
+1. Select **Access Control (IAM)** in the left pane.
+1. Select **Add** to **Add Role Assignments**, and choose the **Storage blob data contributor** role on the storage account.
+1. Within **Assign access to**, select **User, group, or service principal**.
+1. Select **Select members**.
+1. Select your User. You can search for user names in the **Select** field.
+
+> [!IMPORTANT]
+> If you have a Firewall or virtual network or private endpoint, be sure to select **Allow Azure services on the trusted services list to access this storage account** under the **Networking tab** in the Azure portal.
+
+ :::image type="content" source="../../media/foundry-next/allow-azure-services.png" alt-text="Screenshot of allow Azure services enabled in Azure AI Foundry.":::
+
+## Step 2: Upload your dataset to your storage container
+
+Next, let's add a container and upload your dataset files directly to the root directory of your storage container. These documents are used to train your model.
+
+1. Add a container to the storage account associated with your language resource. For more information, *see* [create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+
+1. [Download the sample dataset](https://go.microsoft.com/fwlink/?linkid=2175226) from GitHub. The provided sample dataset contains 20 loan agreements:
+
+ * Each agreement includes two parties: a lender and a borrower.
+ * You extract relevant information for: both parties, agreement date, loan amount, and interest rate.
+
+1. Open the .zip file, and extract the folder containing the documents.
+
+1. Navigate to the Azure AI Foundry.
+
+1. If you aren't already signed in, the portal prompts you to do so with your Azure credentials.
+
+1. Once signed in, access your existing Azure AI Foundry hub-based project for this quickstart.
+
+1. Select **Management center** from the left navigation menu.
+
+1. Select **Connected resources** from the **Hub** section of the **Management center** menu.
+
+1. Next choose the workspace blob storage that was set up for you as a connected resource.
+
+1. On the workspace blob storage, select **View in Azure Portal**.
+
+
+1. On the **AzurePortal** page for your blob storage, select **Upload** from the top menu. Next, choose the `.txt` and `.json` files you downloaded earlier. Finally, select the **Upload** button to add the file to your container.
+
+ :::image type="content" source="../../media/foundry-next/upload-blob-files.png" alt-text="A screenshot showing the button for uploading files to the storage account.":::
+
+
+Now that the required Azure resources are provisioned and configured within the Azure portal, let's use these resources in the Azure AI Foundry to create a fine-tuned custom Named Entity Recognition (NER) model.
+
+## Step 3: Connect your Azure AI Language resource
+
+Next we create a connection to your Azure AI Language resource so Azure AI Foundry can access it securely. This connection provides secure identity management and authentication, as well as controlled and isolated access to data.
+
+1. Return to the [Azure AI Foundry](https://ai.azure.com/).
+
+1. Access your existing Azure AI Foundry hub-based project for this quickstart.
+
+1. Select **Management center** from the left navigation menu.
+
+1. Select **Connected resources** from the **Hub** section of the **Management center** menu.
+
+1. In the main window, select the **+ New connection** button.
+
+1. Select **Azure AI Language** from the **Add a connection to external assets** window.
+
+1. Select **Add connection**, then select **Close.**
+
+ :::image type="content" source="../../media/foundry-next/add-connection.png" alt-text="Screenshot of the connection window in Azure AI Foundry.":::
+
+## Step 4: Fine tune your custom NER model
+
+Now, we're ready to create a custom NER fine-tune model.
+
+1. From the **Project** section of the **Management center** menu, select **Go to project**.
+
+1. From the **Overview** menu, select **Fine-tuning**.
+
+1. From the main window, select **the AI Service fine-tuning** tab and then the **+ Fine-tune** button.
+
+1. From the **Create service fine-tuning** window, choose the **Custom named entity recognition** tab, and then select **Next**.
+
+ :::image type="content" source="../../media/foundry-next/create-fine-tuning.png" alt-text="Screenshot of the fine-tuning selection tile in Azure AI Foundry." lightbox="../../media/foundry-next/create-fine-tuning.png":::
+
+1. In the **Create service fine-tuning task** window, complete the fields as follows:
+
+ * **Connected service**. The name of your language service resource should already appear in this field by default. if not, add it from the drop-down menu.
+
+ * **Name**. Give your fine-tuning task project a name.
+
+ * **Language**. English is set as the default and already appears in the field.
+
+ * **Description**. You can optionally provide a description or leave this field empty.
+
+ * **Blob store container**. Select the workspace blob storage container from [Step 2](#step-2-upload-your-dataset-to-your-storage-container) and choose the **Connect** button.
+
+1. Finally, select the **Create** button. It can take a few minutes for the *creating* operation to complete.
+
+## Step 5: Train your model
+
+ :::image type="content" source="../../media/foundry-next/workflow.png" alt-text="Screenshot of fine-tuning workflow in Azure AI Foundry.":::
+
+
+1. From the **Getting Started** menu, choose **Manage data**. In the **Add data for training and testing** window, you see the sample data that you previously uploaded to your Azure Blob Storage container.
+1. Next, from the **Getting Started** menu, select **Train model**.
+1. Select the **+ Train model button**. When the **Train a new model** window appears, enter a name for your new model and keep the default values. Select the **Next** button.
+1. In the **Train a new model** window, keep the default **Automatically split the testing set from training data** enabled with the recommended percentage set at 80% for training data and 20% for testing data.
+1. Review your model configuration then select the **Create** button.
+1. After training a model, you can select **Evaluate model** from the **Getting started** menu. You can select your model from the **Evaluate you model** window and make improvements if necessary.
+
+## Step 6: Deploy your model
+
+Typically, after training a model, you review its evaluation details. For this quickstart, you can just deploy your model and make it available to test in the Language playground, or by calling the [prediction API](https://aka.ms/clu-apis). However, if you wish, you can take a moment to select **Evaluate your model** from the left-side menu and explore the in-depth telemetry for your model. Complete the following steps to deploy your model within Azure AI Foundry.
+
+1. Select **Deploy model** from the left-side menu.
+1. Next, select **➕Deploy a trained model** from the **Deploy your model** window.
+
+ :::image type="content" source="../../media/foundry-next/deploy-trained-model.png" alt-text="Screenshot of the deploy your model window in Azure AI Foundry.":::
+
+1. Make sure the **Create a new deployment** button is selected.
+
+1. Complete the **Deploy a trained model** window fields:
+
+ * **Deployment name**. Name your model.
+ * **Assign a model**. Select your trained model from the drop-down menu.
+ * **Region**. Select a region from the drop-down menu.
+
+1. Finally, select the **Create** button. It may take a few minutes for your model to deploy.
+
+1. After successful deployment, you can view your model's deployment status on the **Deploy your model** page. The expiration date that appears marks the date when your deployed model becomes unavailable for prediction tasks. This date is usually 18 months after a training configuration is deployed.
+
+ :::image type="content" source="../../media/foundry-next/deployed-model.png" alt-text="Screenshot of the deploy your model status window in Azure AI Foundry.":::
+
+## Step 7: Try the Language playground
+
+The Language playground provides a sandbox to test and configure your fine-tuned model before deploying it to production, all without writing code.
+
+1. From the top menu bar, select **Try in playground**.
+1. In the Language Playground window, select the **Custom named entity recognition** tile.
+1. In the **Configuration** section, select your **Project name** and **Deployment name** from the drop-down menus.
+1. Enter an entity and select **Run**.
+1. You can evaluate the results in the **Details** window.
+
+
+That's it, congratulations!
+
+In this quickstart, you created a fine-tuned custom NER model, deployed it in Azure AI Foundry, and tested your model in the Language playground.
+
+## Clean up resources
+
+If you no longer need your project, you can delete it from the Azure AI Foundry.
+
+1. Navigate to the [Azure AI Foundry](https://ai.azure.com/) home page. Initiate the authentication process by signing in, unless you already completed this step and your session is active.
+1. Select the project that you want to delete from the **Keep building with Azure AI Foundry**.
+1. Select **Management center**.
+1. Select **Delete project**.
+
+To delete the hub along with all its projects:
+
+1. Navigate to the **Overview** tab in the **Hub** section.
+
+1. On the right, select **Delete hub**.
+1. The link opens the Azure portal for you to delete the hub there.
Summary
{
"modification_type": "new feature",
"modification_title": "Azure AI Foundryを利用したカスタムNERクイックスタートの追加"
}
Explanation
This change adds a new quickstart guide for building a custom named entity recognition (NER) model with Azure AI Foundry. The guide walks users through quickly configuring Azure resources, then creating and deploying a custom NER model.
The document starts with the prerequisites for the project, covering the Azure subscription, the required permissions, and how to connect a storage account, and then explains in detail how to configure roles and permissions in the Azure portal and how to upload the dataset to a storage container.
The concrete steps include enabling the custom NER feature, assigning roles, uploading the dataset, connecting the Azure AI Language resource, fine-tuning, and then training and deploying the model. Operations performed through the Azure AI Foundry management center are described in particular detail so users can move through the whole workflow smoothly.
The guide closes with testing the model in the Language playground and, if needed, cleaning up the project. Overall, this new document is a valuable resource for quickly building and deploying a custom NER model through Azure AI Foundry.
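As an editorial aside, the upload step described in the new quickstart (Step 2) could also be scripted rather than done through the portal. The following is a minimal sketch, not part of the quickstart itself: it assumes the `azure-storage-blob` package, a connection string exposed through an environment variable, and a hypothetical container name and local folder.

```python
import os
from azure.storage.blob import ContainerClient

# A minimal sketch, assuming the storage account behind your Foundry hub and a
# container you created for the dataset. Names and paths here are hypothetical.
connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumption: set beforehand
container = ContainerClient.from_connection_string(connection_string, "loan-agreements")

dataset_dir = "loan-agreements-dataset"  # hypothetical local folder with the extracted .txt/.json files
for name in os.listdir(dataset_dir):
    if name.endswith((".txt", ".json")):
        with open(os.path.join(dataset_dir, name), "rb") as data:
            # Files go to the root of the container, as the quickstart requires.
            container.upload_blob(name=name, data=data, overwrite=True)
            print("uploaded", name)
```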
articles/ai-services/language-service/custom-named-entity-recognition/includes/quickstarts/language-studio.md
Diff
@@ -1,9 +1,9 @@
----
+<!-- ---
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: include
-ms.date: 06/30/2025
+ms.date: 09/18/2025
ms.author: lajanuar
---
@@ -15,7 +15,7 @@ ms.author: lajanuar
## Create a new Azure AI Language resource and Azure storage account
-Before you can use custom NER, you'll need to create an Azure AI Language resource, which will give you the credentials that you need to create a project and start training a model. You'll also need an Azure storage account, where you can upload your dataset that will be used to build your model.
+Before you can use custom named entity recognition, you'll need to create an Azure AI Language resource, which will give you the credentials that you need to create a project and start training a model. You'll also need an Azure storage account, where you can upload your dataset that will be used to build your model.
> [!IMPORTANT]
> To quickly get started, we recommend creating a new Azure AI Language resource using the steps provided in this article. Using the steps in this article will let you create the Language resource and storage account at the same time, which is easier than doing it later.
@@ -67,4 +67,4 @@ After your model is deployed, you can start using it to extract entities from yo
## Clean up resources
-[!INCLUDE [Delete project using Language Studio](../language-studio/delete-project.md)]
+[!INCLUDE [Delete project using Language Studio](../language-studio/delete-project.md)] -->
Summary
{
"modification_type": "minor update",
"modification_title": "ドキュメントの日付と表現の更新"
}
Explanation
This change makes a few small updates to the Language Studio quickstart include for custom named entity recognition (NER), refreshing the document date and clarifying how the feature is referred to.
The date is updated from June 30, 2025 to September 18, 2025, keeping the metadata current, and "custom NER" is spelled out as "custom named entity recognition," improving readability and making the description of the required resources and steps easier to follow.
In addition, the include is now wrapped in a Markdown comment, effectively disabling its content, which makes future editing and rendering easier to manage. Overall, the revision keeps the content current and clear so users can interpret the information accurately.
articles/ai-services/language-service/custom-named-entity-recognition/includes/quickstarts/rest-api.md
Diff
@@ -15,12 +15,12 @@ ms.author: lajanuar
## Create a new Azure AI Language resource and Azure storage account
-Before you can use custom NER, you'll need to create an Azure AI Language resource, which will give you the credentials that you need to create a project and start training a model. You'll also need an Azure storage account, where you can upload your dataset that will be used in building your model.
+Before you can use custom named entity recognition (NER), you need to create an Azure AI Language resource, which gives you the credentials that you need to create a project and start training a model. You also need an Azure storage account, where you can upload your dataset that is used in building your model.
> [!IMPORTANT]
-> To get started quickly, we recommend creating a new Azure AI Language resource using the steps provided in this article, which will let you create the Language resource, and create and/or connect a storage account at the same time, which is easier than doing it later.
+> To get started quickly, we recommend creating a new Azure AI Language resource. Use the steps provided in this article, to create the Language resource, and create and/or connect a storage account at the same time. Creating both at the same time is easier than doing it later.
>
-> If you have a pre-existing resource that you'd like to use, you will need to connect it to storage account. See [create project](../../how-to/create-project.md#using-a-pre-existing-language-resource) for information.
+> If you have a preexisting resource that you'd like to use, you need to connect it to storage account. See [create project](../../how-to/create-project.md) for information.
[!INCLUDE [create a new resource from the Azure portal](../resource-creation-azure-portal.md)]
@@ -40,7 +40,7 @@ Before you can use custom NER, you'll need to create an Azure AI Language resour
## Create a custom NER project
-Once your resource and storage account are configured, create a new custom NER project. A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Language resource being used.
+Once your resource and storage account are configured, create a new custom NER project. A project is a work area for building your custom ML models based on your data. Your project is accessed you and others who have access to the Language resource being used.
Use the tags file you downloaded from the [sample data](https://github.com/Azure-Samples/cognitive-services-sample-data-files) in the previous step and add it to the body of the following request.
@@ -58,27 +58,27 @@ Use the tags file you downloaded from the [sample data](https://github.com/Azure
## Train your model
-Typically after you create a project, you go ahead and start [tagging the documents](../../how-to/tag-data.md) you have in the container connected to your project. For this quickstart, you have imported a sample tagged dataset and initialized your project with the sample JSON tags file.
+Typically after you create a project, you go ahead and start [tagging the documents](../../how-to/tag-data.md) you have in the container connected to your project. For this quickstart, you imported a sample tagged dataset and initialized your project with the sample JSON tags file.
### Start training job
-After your project has been imported, you can start training your model.
+After your project is imported, you can start training your model.
[!INCLUDE [train model](../rest-api/train-model.md)]
### Get training job status
-Training could take sometime between 10 and 30 minutes for this sample dataset. You can use the following request to keep polling the status of the training job until it is successfully completed.
+Training could take sometime between 10 and 30 minutes for this sample dataset. You can use the following request to keep polling the status of the training job until successfully completed.
[!INCLUDE [get training model status](../rest-api/get-training-status.md)]
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../../how-to/view-model-evaluation.md) and [make improvements](../../how-to/view-model-evaluation.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review it's [evaluation details](../../how-to/view-model-evaluation.md) and [make improvements](../../how-to/view-model-evaluation.md) if necessary. In this quickstart, you just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
### Start deployment job
@@ -94,7 +94,7 @@ Generally after training a model you would review it's [evaluation details](../.
## Extract custom entities
-After your model is deployed, you can start using it to extract entities from your text using the [prediction API](https://aka.ms/ct-runtime-swagger). In the sample dataset you downloaded earlier you can find some test documents that you can use in this step.
+After your model is deployed, you can start using it to extract entities from your text using the [prediction API](https://aka.ms/ct-runtime-swagger). In the sample dataset, downloaded earlier, you can find some test documents that you can use in this step.
### Submit a custom NER task
Summary
{
"modification_type": "minor update",
"modification_title": "REST APIクイックスタートの表現の明確化と文言修正"
}
Explanation
This change revises wording and phrasing in the REST API quickstart for custom named entity recognition (NER). The main goals are clearer explanations and better grammar.
Specifically, "custom NER" is spelled out as "custom named entity recognition (NER)" so the feature name is unambiguous, and several sentences are restructured so they read more naturally; for example, "you'll also need an Azure storage account" becomes "you also need an Azure storage account."
The descriptions of project creation, dataset import, training, and deployment in each section are also simplified so they are easier to follow.
Overall, the update aims to help users move through the quickstart more smoothly and improves the document's readability.
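As an aside for readers following the REST path, the polling described under "Get training job status" might look like the sketch below. This isn't taken from the article: the train-jobs route is assumed to mirror the import-jobs URL pattern shown later in this review, and the status values and header name should be verified against the official REST reference.

```python
import os
import time
import requests

# Hedged sketch of the polling loop; endpoint, project name, and job ID are placeholders.
endpoint = os.environ["LANGUAGE_ENDPOINT"]   # e.g. https://<your-custom-subdomain>.cognitiveservices.azure.com
key = os.environ["LANGUAGE_KEY"]             # the Language resource key
project_name = "LoanAgreements"              # hypothetical project name
job_id = os.environ["TRAIN_JOB_ID"]          # returned by the "start training job" request
api_version = "2022-05-01"

url = (f"{endpoint}/language/authoring/analyze-text/projects/"
       f"{project_name}/train/jobs/{job_id}?api-version={api_version}")
headers = {"Ocp-Apim-Subscription-Key": key}

while True:
    status = requests.get(url, headers=headers, timeout=30).json().get("status")
    print("training status:", status)
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)  # the sample dataset typically trains in 10-30 minutes
```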
articles/ai-services/language-service/custom-named-entity-recognition/includes/rest-api/import-project.md
Diff
@@ -19,7 +19,7 @@ If a project with the same name already exists, the data of that project is repl
|---------|---------|---------|
|`{ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
|`{PROJECT-NAME}` | The name for your project. This value is case-sensitive. | `myProject` |
-|`{API-VERSION}` | The version of the API you are calling. The value referenced here is for the latest version released. See [Model lifecycle](../../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) to learn more about other available API versions. | `2022-05-01` |
+|`{API-VERSION}` | The version of the API you're calling. The value referenced here's for the latest version released. See [Model lifecycle](../../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) to learn more about other available API versions. | `2022-05-01` |
### Headers
@@ -32,7 +32,7 @@ Use the following header to authenticate your request.
### Body
-Use the following JSON in your request. Replace the placeholder values below with your own values.
+Use the following JSON in your request. Replace the placeholder values with your own values.
```json
{
@@ -106,29 +106,29 @@ Use the following JSON in your request. Replace the placeholder values below wit
|Key |Placeholder |Value | Example |
|---------|---------|----------|--|
-| `api-version` | `{API-VERSION}` | The version of the API you are calling. The version used here must be the same API version in the URL. Learn more about other available [API versions](../../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) | `2022-03-01-preview` |
+| `api-version` | `{API-VERSION}` | The version of the API you're calling. The version used here must be the same API version in the URL. Learn more about other available [API versions](../../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) | `2022-03-01-preview` |
| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `myProject` |
| `projectKind` | `CustomEntityRecognition` | Your project kind. | `CustomEntityRecognition` |
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the documents used in your project. If your project is a multilingual project, choose the [language code](../../language-support.md) of the majority of the documents. |`en-us`|
-| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents. See [language support](../../language-support.md#multi-lingual-option) for information on multilingual support. | `true`|
-| `storageInputContainerName` | {CONTAINER-NAME} | The name of your Azure storage container where you have uploaded your documents. | `myContainer` |
-| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents into.| |
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the documents used in your project. If your project is a multilingual project, choose the [language code](../../language-support.md) of most the documents. |`en-us`|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents. See [language support](../../language-support.md#multi-lingual-option) for information on multilingual support.| `true`|
+| `storageInputContainerName` | {CONTAINER-NAME} | The name of your Azure storage container containing your uploaded documents. | `myContainer` |
+| `entities` | | Array containing all the entity types you have in the project and extracted from your documents.| |
| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
-| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
-| `dataset` | `{DATASET}` | The test set to which this file will go to when split before training. See [How to train a model](../../how-to/train-model.md#data-splitting) for more information on how your data is split. Possible values for this field are `Train` and `Test`. |`Train`|
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The test set to which this file goes to when split before training. For more information, *see* [How to train a model](../../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-Once you send your API request, you’ll receive a `202` response indicating that the job was submitted correctly. In the response headers, extract the `operation-location` value. It will be formatted like this:
+Once you send your API request, you receive a `202` response indicating that the job was submitted correctly. In the response headers, extract the `operation-location` value. Here's an example of the format:
```rest
{ENDPOINT}/language/authoring/analyze-text/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version={API-VERSION}
```
-`{JOB-ID}` is used to identify your request, since this operation is asynchronous. You’ll use this URL to get the import job status.
+`{JOB-ID}` is used to identify your request, since this operation is asynchronous. You use this URL to get the import job status.
Possible error scenarios for this request:
-* The selected resource doesn't have [proper permissions](../../how-to/create-project.md#using-a-pre-existing-language-resource) for the storage account.
+* The selected resource doesn't have [proper permissions](../../how-to/create-project.md) for the storage account.
* The `storageInputContainerName` specified doesn't exist.
* Invalid language code is used, or if the language code type isn't string.
* `multilingual` value is a string and not a boolean.
Summary
{
"modification_type": "minor update",
"modification_title": "プロジェクトインポート手順の文言修正"
}
Explanation
This change revises wording in the REST API project import documentation for custom named entity recognition (NER). The main goal is better readability and clearer information.
The descriptions of the parameters required for the API call were improved. For example, the API version wording now uses "you're calling," which reads more naturally, and "the placeholder values below with your own values" was trimmed to "the placeholder values with your own values," removing redundancy.
The individual parameter descriptions were also adjusted, for instance the language code is now described in terms of the language of most of the documents so users can choose appropriately, and the description of the API response was reworded for accuracy.
Finally, the possible error scenarios were reviewed and the in-document links updated. These changes aim to make the document easier to use and understand so technical users can import projects more smoothly.
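To make the placeholder table above easier to picture, here's an abbreviated sketch of a request body assembled from those placeholders. It is not the article's sample: the metadata/assets nesting, the "category" key, and the omitted fields (such as per-document label offsets) are assumptions to verify against the linked data-formats article.

```python
import json

# Abbreviated sketch only; labels/offsets within each document are omitted entirely.
import_body = {
    "metadata": {
        "projectKind": "CustomEntityRecognition",
        "projectName": "myProject",                   # {PROJECT-NAME}, case-sensitive
        "language": "en-us",                          # {LANGUAGE-CODE} of most of the documents
        "multilingual": True,
        "storageInputContainerName": "myContainer",   # {CONTAINER-NAME}
    },
    "assets": {
        "entities": [{"category": "BorrowerName"}],   # hypothetical entity type
        "documents": [
            {
                "location": "doc1.txt",               # {DOCUMENT-NAME} at the container root
                "dataset": "Train",                   # or "Test" when splitting manually
            }
        ],
    },
}

print(json.dumps(import_body, indent=2))
```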
articles/ai-services/language-service/custom-named-entity-recognition/includes/use-pre-existing-resource.md
Diff
@@ -1,43 +1,32 @@
---
titleSuffix: Azure AI services
-description: Learn about the steps for using Azure resources with custom NER.
+description: Learn about the steps for using Azure resources with custom named entity recognition (NER).
author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: include
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.author: lajanuar
---
You can use an existing Language resource to get started with custom NER as long as this resource meets the below requirements:
|Requirement |Description |
|---------|---------|
-|Regions | Make sure your existing resource is provisioned in one of the [supported regions](../service-limits.md#regional-availability). If not, you will need to create a new resource in one of these regions. |
+|Regions | Make sure your existing resource is provisioned in one of the [supported regions](../service-limits.md#regional-availability). If not, you need to create a new resource in one of these regions. |
|Pricing tier | Learn more about [supported pricing tiers](../service-limits.md#language-resource-limits). |
|Managed identity | Make sure that the resource's managed identity setting is enabled. Otherwise, read the next section. |
-To use custom named entity recognition, you'll need to [create an Azure storage account](/azure/storage/common/storage-account-create) if you don't have one already.
+To use custom named entity recognition, you need to [create an Azure storage account](/azure/storage/common/storage-account-create) if you don't have one already.
## Enable identity management for your resource
-# [Azure portal](#tab/portal)
-
Your Language resource must have identity management, to enable it using the [Azure portal](https://portal.azure.com):
1. Go to your Language resource
2. From left hand menu, under **Resource Management** section, select **Identity**
3. From **System assigned** tab, make sure to set **Status** to **On**
-# [Language Studio](#tab/studio)
-
-Your Language resource must have identity management, to enable it using [Language Studio](https://aka.ms/languageStudio):
-
-1. Select the settings icon in the top right corner of the screen
-2. Select **Resources**
-3. Select the check box **Managed Identity** for your Azure AI Language resource.
-
----
### Enable custom named entity recognition feature
@@ -50,7 +39,7 @@ Make sure to enable **Custom text classification / Custom Named Entity Recogniti
5. Select **Apply**.
>[!Important]
-> Make sure that the user making changes has **storage blob data contributor** role assigned for them.
+ > Make sure that the user making changes the **storage blob data contributor** role assigned for them.
### Add required roles
Summary
{
"modification_type": "minor update",
"modification_title": "既存リソースの利用に関する文言修正"
}
Explanation
This change reworks the wording in the documentation on using an existing Azure resource with custom named entity recognition (NER). The main goals are clarity and consistency.
Spelling out "custom named entity recognition (NER)" makes the resource description more precise, and the switch to "you need to create" gives a more direct, easier-to-follow instruction.
The structure was also tidied up: the Language Studio tab for enabling managed identity was removed, leaving only the Azure portal steps, so the section focuses on the information users actually need.
Finally, the callout about the required role is expressed more directly. These small adjustments improve overall usability and raise the quality of the document.
articles/ai-services/language-service/custom-named-entity-recognition/language-support.md
Diff
@@ -6,7 +6,7 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: conceptual
-ms.date: 06/30/2025
+ms.date: 09/24/2025
ms.custom: language-service-custom-ner
ms.author: lajanuar
---
@@ -17,15 +17,15 @@ Use this article to learn about the languages currently supported by custom name
## Multi-lingual option
-With custom NER, you can train a model in one language and use to extract entities from documents in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language but you should enable the multi-lingual option for your project while creating or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
+With custom named entity recognition (NER), you can train a model in one language and use to extract entities from documents in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language but you should enable the multi-lingual option for your project while creating or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
You can train your project entirely with English documents, and query it in: French, German, Mandarin, Japanese, Korean, and others. Custom named entity recognition
makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+Whenever you identify that a particular language isn't performing as well as other languages, you can add more documents for that language in your project. For data labeling in [Azure AI Foundry](https://ai.azure.com/), you can select the language of the document you're adding. When you introduce more documents for that language to the model, the model is introduced to more of the syntax of that language, and learns to predict it better.
-You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better.
+You aren't expected to add the same number of documents for every language. You should build most your project in one language, and only add a few documents in languages you observe aren't performing well. If you develop a project mainly in English, and then begin testing it in French, German, and Spanish, you may notice some differences. Specifically, German may underperform compared to the other two languages. While French and Spanish might yield better results, German could present more challenges or produce less favorable outcomes during testing. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better.
When you add data in another language, you shouldn't expect it to negatively affect other languages.
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムNERに関する多言語サポートの文言修正"
}
Explanation
This change revises wording in the article on multilingual support for custom named entity recognition (NER). The main goals are consistent terminology and clearer phrasing.
In the opening paragraph, the abbreviation "custom NER" is expanded to "custom named entity recognition (NER)" on first use, improving consistency and making the article easier for newcomers to follow.
The discussion of per-language performance was also adjusted: "isn't performing as well as" states the situation more precisely, and the rewritten passage adds a concrete example of how languages can differ in model performance, so users better understand how to approach improving their projects.
Finally, the data labeling guidance now points to Azure AI Foundry, making the relevant resource easier to reach. Overall, these changes support users' practical understanding and improve the article as a guide for working with custom NER.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/add-connection.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: 接続の追加に関する図"
}
Explanation
This change adds a new image file to the custom named entity recognition documentation. The file, add-connection.png, illustrates the process of adding a connection, giving users a more visual source of information.
The screenshot shows the steps for connecting to the platform, which helps convey a process that is hard to follow from text alone.
Visual aids like this are especially useful for technical procedures and serve as a guide when users work through complex configuration. Overall, the added visual content improves the user experience and the quality of the document.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/allow-azure-services.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: Azureサービスを許可する図"
}
Explanation
This change adds a new image, allow-azure-services.png, showing how to allow Azure services access.
The visual makes it easier to understand how to permit trusted Azure services when connecting to the platform, a step that can be hard to grasp from text alone.
Especially where technical configuration is involved, this kind of image is a valuable resource; adding it improves the document's overall usability and helps users find the information they need to use custom NER effectively.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/create-fine-tuning.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: ファインチューニングの作成に関する図"
}
Explanation
This change adds the image create-fine-tuning.png, which depicts the fine-tuning creation process.
The image serves as a visual aid for users setting up a custom model fine-tuning task, making a multi-step operation easier to grasp than text alone.
Visual elements like this deepen understanding where technical explanation is required, letting users reach the information they need quickly and use the custom named entity recognition feature effectively. Overall, the change is a useful step toward a better user experience.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/create-storage-container.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: ストレージコンテナの作成に関する図"
}
Explanation
This change adds the image create-storage-container.png, which walks through creating a storage container.
The image is an important guide for users setting up a container in their Azure environment; presenting the procedure visually makes the process clearer and easier to follow.
It is particularly helpful for newcomers working through the steps for the first time, contributing to a better user experience and encouraging further use of the custom named entity recognition feature.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/deploy-trained-model.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: 訓練済みモデルのデプロイに関する図"
}
Explanation
This change adds the image deploy-trained-model.png, which illustrates how to deploy a trained model.
The image works as a visual guide to the deployment process, helping users understand and carry out the steps more easily.
Showing the workflow before users run it themselves gives them the full picture in advance, which should make the step easier to complete successfully and makes the document more effective at conveying technical content.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/deployed-model.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: デプロイされたモデルに関する図"
}
Explanation
This change adds the image deployed-model.png, which shows the state of a model after deployment.
The image provides important visual information for understanding what a deployed model looks like, helping users check and evaluate their model after deployment.
Visual elements like this aid comprehension of complex technical concepts and make practical work easier, raising the value of the document as a reference for users of custom named entity recognition.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/select-additional-features.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: 追加機能の選択に関する図"
}
Explanation
This change adds the image select-additional-features.png, which shows the options presented when selecting additional features.
The image is a useful visual reference for understanding and choosing among the available features, especially when several options are offered at once.
Adding visual content of this kind deepens understanding of the document as a whole and supports better decisions during technical setup, helping users evaluate their choices and work more efficiently with custom named entity recognition.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/upload-blob-files.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: Blobファイルのアップロードに関する図"
}
Explanation
This change adds the image upload-blob-files.png, which shows how to upload blob files.
The image is intended to make the upload process easier to understand; for technical operations, a visual guide often supports the work better than text alone.
By referring to the screenshot, users can follow the upload steps with confidence, giving users of the custom named entity recognition feature a smoother working experience.
articles/ai-services/language-service/custom-named-entity-recognition/media/foundry-next/workflow.png
Summary
{
"modification_type": "new feature",
"modification_title": "新しい画像の追加: ワークフローに関する図"
}
Explanation
This change adds the image workflow.png, which visualizes the overall workflow described in the document.
The diagram shows how the individual steps of custom named entity recognition fit together, making a complex process easier to follow and understand.
By referring to it, users can grasp intuitively how each step relates to the others, which helps them as they work. The change aims to improve the user experience for technical tasks where visual information matters.
articles/ai-services/language-service/custom-named-entity-recognition/overview.md
Diff
@@ -15,7 +15,7 @@ ms.custom: language-service-custom-ner
Custom named entity recognition (NER) is a cloud-based API service that uses machine learning to help you build models designed for your unique entity recognition requirements. It's one of the specialized features available through [Azure AI Language](../overview.md). With custom NER, you can create AI models that extract domain-specific entities from unstructured text, such as contracts or financial documents. When you start a Custom NER project, you can repeatedly label data, train and evaluate your model, and improve its performance before deploying it. The quality of your labeled data is essential, as it directly impacts the model's accuracy.
-To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+To simplify building and customizing your model, the service offers a custom web platform that can be accessed through the [Azure AI Foundry](https://ai.azure.com/). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
This documentation contains the following article types:
Summary
{
"modification_type": "minor update",
"modification_title": "カスタム命名エンティティ認識のプラットフォーム情報の更新"
}
Explanation
This change updates the overview for custom named entity recognition (NER). Specifically, the platform offered for simplifying model building and customization is now referred to as Azure AI Foundry instead of Language Studio.
Keeping the official name and link current matters because AI services and platforms evolve quickly; the new name also signals the integration with Azure's AI capabilities more clearly, helping users understand the service.
The document continues to provide the information needed to start a custom NER project, now pointing users to the latest platform for building their models.
articles/ai-services/language-service/custom-named-entity-recognition/quickstart.md
Diff
@@ -6,26 +6,28 @@ author: laujan
manager: nitinme
ms.service: azure-ai-language
ms.topic: quickstart
-ms.date: 07/22/2025
+ms.date: 09/18/2025
ms.author: lajanuar
ms.custom: language-service-custom-ner, mode-other
-zone_pivot_groups: usage-custom-language-features
+zone_pivot_groups: foundry-rest-api
---
# Quickstart: Custom named entity recognition
-Use this article to get started with creating a custom NER project where you can train custom models for custom entity recognition. A model artificial intelligence software trained to achieve a specific task. For this system, the models extract named entities and are trained by learning from tagged data.
+This guide provides step-by-step instructions for using custom named entity recognition (NER) with Azure AI Foundry or the REST API. NER lets you detect and categorize entities in unstructured text—like people, places, organizations, and numbers. With custom NER, you can train models to identify entities specific to your business and adapt them as needs evolve.
-In this article, we use Language Studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example, let's build a custom NER model to extract the following relevant entities from loan agreements:
+To get start, [a sample loan agreement](https://go.microsoft.com/fwlink/?linkid=2175226) is provided as a dataset to build a custom NER model and extract these key entities:
-* Date of the agreement
-* Borrower's name, address, city, and state
-* Lender's name, address, city, and state
-* Loan and interest amounts
+* Date of the agreement
+* Borrower's name, address, city, and state
+* Lender's name, address, city, and state
+* Loan and interest amounts
-::: zone pivot="language-studio"
-[!INCLUDE [Language Studio quickstart](includes/quickstarts/language-studio.md)]
+
+::: zone pivot="azure-ai-foundry"
+
+[!INCLUDE [Azure AI Foundry](includes/quickstarts/azure-ai-foundry.md)]
::: zone-end
@@ -35,7 +37,7 @@ In this article, we use Language Studio to demonstrate key concepts of custom Na
::: zone-end
-## Next steps
+## Related content
After you create your entity extraction model, you can [use the runtime API to extract entities](how-to/call-api.md).
Summary
{
"modification_type": "minor update",
"modification_title": "クイックスタートガイドの内容とリソースの更新"
}
Explanation
This change makes several important updates to the custom named entity recognition (NER) quickstart. The guide's purpose is stated more clearly, and the platforms covered are expanded from Language Studio to Azure AI Foundry and the REST API.
The updated guide explains how to create a custom NER project and extract business-specific entities, using a sample loan agreement as the dataset.
The closing heading also changes from "Next steps" to "Related content," making it easier to see what to do next. Overall, these changes strengthen the guidance for getting started with custom NER and keep it aligned with the latest resources.
articles/ai-services/language-service/toc.yml
Diff
@@ -22,6 +22,9 @@ items:
- name: Quotas and limits
href: concepts/data-limits.md
displayName: service limits, rate, usage
+ - name: Configure Azure resources
+ href: conversational-language-understanding/how-to/configure-azure-resources.md
+ displayName: configuration, fine-tuning, azure ai foundry, azure portal
- name: Azure AI Language capabilities
items:
- name: Custom text classification
@@ -149,9 +152,6 @@ items:
href: ../containers/azure-container-instance-recipe.md?context=/azure/ai-services/language-service/context/context
- name: Azure AI containers overview
href: ../cognitive-services-container-support.md
- - name: Configure Azure resources
- href: conversational-language-understanding/how-to/configure-azure-resources.md
- displayName: configuration, fine-tuning, azure ai foundry, azure portal
- name: Create a fine-tuning task project
href: conversational-language-understanding/how-to/create-project.md
displayName: creation, clu project, setup
@@ -314,7 +314,7 @@ items:
- name: Integrate Power BI
href: key-phrase-extraction/tutorials/integrate-power-bi.md
displayName: business intelligence, data visualization
- - name: Named Entity Recognition (NER)
+ - name: Named Entity Recognition
items:
- name: Overview
href: named-entity-recognition/overview.md
@@ -365,48 +365,50 @@ items:
- name: Extract information in Excel using Power Automate
href: named-entity-recognition/tutorials/extract-excel-information.md
displayName: excel integration, power automate, ner automation, extract entities
- - name: Custom
+ - name: Custom Named Entity Recognition
+ items:
+ - name: Overview
+ href: custom-named-entity-recognition/overview.md
+ - name: Quickstart
+ href: custom-named-entity-recognition/quickstart.md
+ - name: Language support
+ href: custom-named-entity-recognition/language-support.md
+ - name: FAQ
+ href: custom-named-entity-recognition/faq.md
+ - name: Glossary
+ href: custom-named-entity-recognition/glossary.md
+ - name: How-to guides
items:
- - name: Overview
- href: custom-named-entity-recognition/overview.md
- - name: Quickstart
- href: custom-named-entity-recognition/quickstart.md
- - name: Language support
- href: custom-named-entity-recognition/language-support.md
- - name: FAQ
- href: custom-named-entity-recognition/faq.md
- - name: How-to guides
+ - name: Create projects
+ href: custom-named-entity-recognition/how-to/create-project.md
+ - name: Data selection and schema design
+ href: custom-named-entity-recognition/how-to/design-schema.md
+ - name: Label data
+ href: custom-named-entity-recognition/how-to/tag-data.md
+ - name: Auto label your data (preview)
+ href: custom-named-entity-recognition/how-to/use-autolabeling.md
+ - name: Label data with Azure Machine Learning
+ href: custom/azure-machine-learning-labeling.md
+ - name: Train a model
+ href: custom-named-entity-recognition/how-to/train-model.md
+ - name: Model performance (preview)
+ href: custom-named-entity-recognition/how-to/view-model-evaluation.md
+ - name: Deploy a model
+ href: custom-named-entity-recognition/how-to/deploy-model.md
+ - name: Extract entities from text
+ href: custom-named-entity-recognition/how-to/call-api.md
+ - name: Back up and recover your models
+ href: custom-named-entity-recognition/fail-over.md
+ - name: Use Custom NER containers
items:
- - name: Create projects
- href: custom-named-entity-recognition/how-to/create-project.md
- - name: Data selection and schema design
- href: custom-named-entity-recognition/how-to/design-schema.md
- - name: Label data
- href: custom-named-entity-recognition/how-to/tag-data.md
- - name: Auto label your data (preview)
- href: custom-named-entity-recognition/how-to/use-autolabeling.md
- - name: Label data with Azure Machine Learning
- href: custom/azure-machine-learning-labeling.md
- - name: Train a model
- href: custom-named-entity-recognition/how-to/train-model.md
- - name: Model performance (preview)
- href: custom-named-entity-recognition/how-to/view-model-evaluation.md
- - name: Deploy a model
- href: custom-named-entity-recognition/how-to/deploy-model.md
- - name: Extract entities from text
- href: custom-named-entity-recognition/how-to/call-api.md
- - name: Back up and recover your models
- href: custom-named-entity-recognition/fail-over.md
- - name: Use Custom NER containers
- items:
- - name: Use Custom NER Docker containers
- href: custom-named-entity-recognition/how-to/use-containers.md
- - name: Configure containers
- href: concepts/configure-containers.md
- - name: Use container instances
- href: ../containers/azure-container-instance-recipe.md?context=/azure/ai-services/language-service/context/context
- - name: Azure AI containers overview
- href: ../cognitive-services-container-support.md
+ - name: Use Custom NER Docker containers
+ href: custom-named-entity-recognition/how-to/use-containers.md
+ - name: Configure containers
+ href: concepts/configure-containers.md
+ - name: Use container instances
+ href: ../containers/azure-container-instance-recipe.md?context=/azure/ai-services/language-service/context/context
+ - name: Azure AI containers overview
+ href: ../cognitive-services-container-support.md
- name: Concepts
items:
- name: Evaluation metrics
@@ -473,7 +475,7 @@ items:
href: orchestration-workflow/service-limits.md
- name: Glossary
href: orchestration-workflow/glossary.md
- - name: Personally Identifiable Information (PII) detection
+ - name: Personally Identifiable Information detection
items:
- name: Overview
href: personally-identifiable-information/overview.md
@@ -569,7 +571,7 @@ items:
href: question-answering/how-to/change-default-answer.md
- name: Configure your environment and Azure resources
href: question-answering/how-to/configure-azure-resources.md
- displayName: configuration, fine-tuning, azure ai foundry, azure portal
+ displayName: configuration, fine-tuning, azure ai foundry, azure portal
- name: Analytics
href: question-answering/how-to/analytics.md
- name: Manage projects
@@ -712,7 +714,7 @@ items:
href: text-analytics-for-health/concepts/relation-extraction.md
- name: Assertion detection
href: text-analytics-for-health/concepts/assertion-detection.md
- - name: Fast Healthcare Interoperability Resources (FHIR) structuring
+ - name: Fast Healthcare Interoperability Resources structuring
href: text-analytics-for-health/concepts/fhir.md
- name: Summarization
items:
Summary
{
"modification_type": "minor update",
"modification_title": "目次ファイルの更新と再構成"
}
Explanation
This change substantially updates and restructures the table-of-contents file (toc.yml) for custom named entity recognition and related features. Specifically, the "Configure Azure resources" entry is repositioned, existing sections are reorganized, and several item names are adjusted.
The "Configure Azure resources" entry now appears earlier in the table of contents, so guidance on setting up resources is surfaced before the individual capabilities. The former "Custom" entry is also renamed "Custom Named Entity Recognition", which describes it more precisely. Under this renamed entry, items such as the overview, quickstart, FAQ, and language support are organized so that users can reach the resources they need more easily.
The how-to guides covering the main processes, such as creating projects, labeling data, training, deploying models, and using containers, are likewise regrouped under this section. These changes organize the information and provide more intuitive navigation; overall, the toc update is an important step toward improving the user experience.