Diff Insight Report - openai

Last updated: 2025-03-04

Notes on Use

This post is a derivative work, produced with generative AI, that translates and summarizes Microsoft's official Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted at MicrosoftDocs/azure-ai-docs.

Generative AI has limitations and may introduce mistranslations or misinterpretations. Treat this post as reference material only, and always consult the original documents for accurate information.

Trademarks used in this post belong to their respective owners. They are used for technical description only and do not imply endorsement or approval by the trademark holders.

View Diff on GitHub


# Highlights
This set of changes centers on adding or correcting links to the Azure AI Foundry portal throughout the documentation, with the goal of improving user convenience and documentation clarity.

New features

  • Added links to the Azure AI Foundry portal across the documentation.
  • Updated the REST API specification version.

Breaking changes

  • None.

Other updates

  • Improved documentation readability and accessibility.
  • Tidied the documents through consistent link additions.

Insights

These changes primarily aim to improve the user experience of the Azure AI documentation. Specifically, adding links to the frequently referenced Azure AI Foundry portal throughout the documentation makes it easier for users to reach the resources and information they need quickly.

With explicit in-text links to the Azure AI Foundry portal, users can reach the portal directly and check configuration steps and details more easily. In particular, inserting links into the configuration and deployment instructions makes the documentation more interactive and practical, so work in progress goes more smoothly. This lowers the friction of using each Azure feature and lets users get to the information they need faster.

In addition, the linked REST API specification has been updated to the latest version, so users can always work from the current guidelines. Although small in scope, this update is an important step toward improving both the usability and the accuracy of the documentation, and it should make Azure AI easier to adopt and flatten the learning curve.

Summary Table

| Filename | Type | Title | Status | A | D | M |
|---|---|---|---|---|---|---|
| assistants.md | minor update | Azure AI Assistants link fix | modified | 1 | 1 | 2 |
| content-filter.md | minor update | Azure AI Foundry portal link fix | modified | 2 | 2 | 4 |
| customizing-llms.md | minor update | Azure AI Foundry portal link fix | modified | 2 | 2 | 4 |
| gpt-with-vision.md | minor update | Azure AI Foundry portal link fix | modified | 3 | 3 | 6 |
| provisioned-migration.md | minor update | Azure AI Foundry portal link added | modified | 1 | 1 | 2 |
| use-your-data.md | minor update | Azure AI Foundry portal links added | modified | 14 | 14 | 28 |
| faq.yml | minor update | Azure AI Foundry portal links added | modified | 8 | 8 | 16 |
| batch.md | minor update | Azure AI Foundry portal links added | modified | 2 | 2 | 4 |
| evaluations.md | minor update | Azure AI Foundry portal link added | modified | 1 | 1 | 2 |
| fine-tuning.md | minor update | Azure AI Foundry portal link added | modified | 1 | 1 | 2 |
| on-your-data-configuration.md | minor update | Azure AI Foundry portal links added | modified | 8 | 8 | 16 |
| provisioned-get-started.md | minor update | Azure AI Foundry portal links added | modified | 3 | 3 | 6 |
| provisioned-throughput-onboarding.md | minor update | Azure AI Foundry portal links added | modified | 2 | 2 | 4 |
| quota.md | minor update | Azure AI Foundry portal links added | modified | 3 | 3 | 6 |
| role-based-access-control.md | minor update | Azure AI Foundry portal links unified | modified | 11 | 11 | 22 |
| use-web-app.md | minor update | Links added and cleaned up | modified | 8 | 8 | 16 |
| working-with-models.md | minor update | Azure AI Foundry portal links added | modified | 2 | 2 | 4 |
| audio-completions-rest.md | minor update | REST API spec version update | modified | 1 | 1 | 2 |
| overview.md | minor update | Azure AI Foundry portal link added | modified | 1 | 1 | 2 |
| quotas-limits.md | minor update | Azure AI Foundry portal links added | modified | 3 | 3 | 6 |
| fine-tune.md | minor update | Azure AI Foundry portal links added | modified | 2 | 2 | 4 |
| whats-new.md | minor update | Azure AI Foundry portal links added | modified | 3 | 3 | 6 |

Modified Contents

articles/ai-services/openai/concepts/assistants.md

Diff
@@ -33,7 +33,7 @@ Assistants API supports persistent automatically managed threads. This means tha
 > [!TIP]
 > There is no additional [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) or [quota](../quotas-limits.md) for using Assistants unless you use the [code interpreter](../how-to/code-interpreter.md) or [file search](../how-to/file-search.md) tools.
 
-Assistants API is built on the same capabilities that power OpenAI’s GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. Start building on the no-code Assistants playground on the Azure AI Foundry portal or start building with the API.
+Assistants API is built on the same capabilities that power OpenAI’s GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. Start building on the no-code Assistants playground on the [Azure AI Foundry portal](https://ai.azure.com/) or start building with the API.
 
 > [!IMPORTANT]
 > Retrieving untrusted data using Function calling, Code Interpreter or File Search with file input, and Assistant Threads functionalities could compromise the security of your Assistant, or the application that uses the Assistant. Learn about mitigation approaches [here](https://aka.ms/oai/assistant-rai).

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Assistants link fix"
}

Explanation

This change revises part of the Azure AI documentation. Specifically, the sentence encouraging users to start building with the Assistants API now links explicitly to the Azure AI Foundry portal. It is a small improvement that makes the resource easier to reach: what was previously a plain mention now points to the actual page, so the information is more precise. Overall, the update consists of only a few changed lines and aims to improve the clarity and accuracy of the sentence.

articles/ai-services/openai/concepts/content-filter.md

Diff
@@ -881,15 +881,15 @@ Customers must understand that while the feature improves latency, it's a trade-
 
 **Customer Copyright Commitment**: Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage. 
 
-To enable Asynchronous Filter in Azure AI Foundry portal, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.
+To enable Asynchronous Filter in [Azure AI Foundry portal](https://ai.azure.com/), follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Filter** in the Streaming section.
 
 ### Comparison of content filtering modes
 
 | Compare | Streaming - Default | Streaming - Asynchronous Filter |
 |---|---|---|
 |Status |GA |Public Preview |
 | Eligibility |All customers |Customers approved for modified content filtering |
-| How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure AI Foundry portal (as part of a content filtering configuration, applied at the deployment level) |
+| How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in [Azure AI Foundry portal](https://ai.azure.com/) (as part of a content filtering configuration, applied at the deployment level) |
 |Modality and availability |Text; all GPT models |Text; all GPT models |
 |Streaming experience |Content is buffered and returned in chunks |Zero latency (no buffering, filters run asynchronously) |
 |Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000-character increments) |

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry portal link fix"
}

Explanation

This change revises part of the Azure AI content filtering documentation. Specifically, the instructions for enabling the Asynchronous Filter were updated so that the Azure AI Foundry portal is now an explicit link, letting users jump straight to the portal and follow the setup steps more smoothly.

The "How to enable" entry in the content filtering mode comparison table received the same link, which improves consistency across the document. These edits are small, but they deepen users' understanding of the feature and make it more convenient to use.

articles/ai-services/openai/concepts/customizing-llms.md

Diff
@@ -62,7 +62,7 @@ A corporate HR department is looking to provide an intelligent assistant that an
 
 ### Getting started
 
-- [Retrieval Augmented Generation in Azure AI Foundry portal - Azure AI Foundry | Microsoft Learn](../../../ai-studio/concepts/retrieval-augmented-generation.md)
+- [Retrieval Augmented Generation in [Azure AI Foundry portal](https://ai.azure.com/) - Azure AI Foundry | Microsoft Learn](../../../ai-studio/concepts/retrieval-augmented-generation.md)
 - [Retrieval Augmented Generation (RAG) in Azure AI Search](/azure/search/retrieval-augmented-generation-overview)
 - [Retrieval Augmented Generation using Azure Machine Learning prompt flow (preview)](/azure/machine-learning/concept-retrieval-augmented-generation)
 
@@ -99,4 +99,4 @@ They fine-tune GPT-4o mini with hundreds of requests and correct responses and p
 - [When to use Azure OpenAI fine-tuning](./fine-tuning-considerations.md)
 - [Customize a model with fine-tuning](../how-to/fine-tuning.md)
 - [Azure OpenAI GPT-4o Turbo fine-tuning tutorial](../tutorials/fine-tune.md)
-- [To fine-tune or not to fine-tune? (Video)](https://www.youtube.com/watch?v=0Jo-z-MFxJs)
\ No newline at end of file
+- [To fine-tune or not to fine-tune? (Video)](https://www.youtube.com/watch?v=0Jo-z-MFxJs)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry portal link fix"
}

Explanation

This change updates part of the document on customizing large language models (LLMs). Specifically, the Azure AI Foundry portal mention inside the Retrieval Augmented Generation link text was turned into an explicit link, so the portal itself is called out directly and is easier to reach.

Separately, the fine-tuning video link is unchanged; the edit simply adds the missing newline at the end of the file. That is a minor hygiene fix, and the main change remains the link correction. Overall, the update is aimed at usability, steering readers more directly to the relevant resource.

articles/ai-services/openai/concepts/gpt-with-vision.md

Diff
@@ -78,14 +78,14 @@ This section describes the limitations of vision-enabled chat models.
 
 - **Maximum input image size**: The maximum size for input images is restricted to 20 MB.
 - **Low resolution accuracy**: When images are analyzed using the "low resolution" setting, it allows for faster responses and uses fewer input tokens for certain use cases. However, this could impact the accuracy of object and text recognition within the image.
-- **Image chat restriction**: When you upload images in Azure AI Foundry portal or the API, there is a limit of 10 images per chat call.
+- **Image chat restriction**: When you upload images in [Azure AI Foundry portal](https://ai.azure.com/) or the API, there is a limit of 10 images per chat call.
 
 <!--
 ### Video support
 
 - **Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
-- **Video file limits**: Both MP4 and MOV file types are supported. In Azure AI Foundry portal, videos must be less than 3 minutes long. When you use the API there is no such limitation.
-- **Prompt limits**: Video prompts only contain one video and no images. In Azure AI Foundry portal, you can clear the session to try another video or images.
+- **Video file limits**: Both MP4 and MOV file types are supported. In [Azure AI Foundry portal](https://ai.azure.com/), videos must be less than 3 minutes long. When you use the API there is no such limitation.
+- **Prompt limits**: Video prompts only contain one video and no images. In [Azure AI Foundry portal](https://ai.azure.com/), you can clear the session to try another video or images.
 - **Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
 - **Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.
 -->

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry portal link fix"
}

Explanation

This change updates the document on vision-enabled chat models. Several sentences that mention the Azure AI Foundry portal now link to it directly, so users can open the portal straight from the text. The document becomes easier to use, and related information easier to reach.

The revised passages cover the per-chat image upload limit and the (currently commented-out) video limits, and every relevant mention of the Azure AI Foundry portal is now a link. Consistency within the document improves, and the guidance for working in the portal is clearer, so users can complete the required steps more quickly. Overall, these are small edits that improve the user experience.
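The limits quoted in the diff above (at most 10 images per chat call, input images capped at 20 MB) can be checked client-side before a request is ever sent. Below is a minimal Python sketch of such a guard; the function name and error messages are illustrative and not part of any Azure SDK:

```python
# Client-side guard for the vision limits quoted in the diff above:
# at most 10 images per chat call, and input images capped at 20 MB.
# The function name and error messages are illustrative, not from any SDK.
MAX_IMAGES_PER_CALL = 10
MAX_IMAGE_BYTES = 20 * 1024 * 1024  # 20 MB

def validate_image_batch(image_sizes_bytes):
    """Raise ValueError if a batch of images would violate the documented limits."""
    if len(image_sizes_bytes) > MAX_IMAGES_PER_CALL:
        raise ValueError(
            f"{len(image_sizes_bytes)} images exceeds the "
            f"{MAX_IMAGES_PER_CALL}-image per-call limit"
        )
    for i, size in enumerate(image_sizes_bytes):
        if size > MAX_IMAGE_BYTES:
            raise ValueError(f"image {i} is {size} bytes, over the 20 MB limit")

validate_image_batch([1_000_000] * 10)  # 10 images of 1 MB each: accepted
```

Failing fast like this avoids paying for a round trip that the service would reject anyway.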

articles/ai-services/openai/concepts/provisioned-migration.md

Diff
@@ -244,7 +244,7 @@ Customers must reach out to their account teams to schedule a managed migration.
 
 ## Managing Provisioned Throughput Commitments
 
-Provisioned throughput commitments are created and managed by selecting **Management center** in the Azure AI Foundry portal's navigation menu > **Quota** > **Manage Commitments**. 
+Provisioned throughput commitments are created and managed by selecting **Management center** in the [Azure AI Foundry portal](https://ai.azure.com/)'s navigation menu > **Quota** > **Manage Commitments**. 
 
 :::image type="content" source="../media/how-to/provisioned-onboarding/notifications.png" alt-text="Screenshot of commitment purchase UI with notifications." lightbox="../media/how-to/provisioned-onboarding/notifications.png":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry portal link added"
}

Explanation

This change updates part of the provisioned migration document by adding a link to the Azure AI Foundry portal in the section that explains how to create and manage provisioned throughput commitments.

Specifically, the navigation path through the Azure AI Foundry portal (Management center > Quota > Manage Commitments) now starts from a direct link to the portal, so users can jump straight to the relevant area. This makes the needed information easier to find and keeps the workflow smooth; it is a small but useful improvement to the overall user experience.

articles/ai-services/openai/concepts/use-your-data.md

Diff
@@ -26,7 +26,7 @@ Azure OpenAI On Your Data enables you to run advanced AI models such as GPT-35-T
 :::image type="content" source="../media/use-your-data/workflow-diagram.png" alt-text="A diagram showing an example workflow.":::
 
 Typically, the development process you'd use with Azure OpenAI On Your Data is:
-1. **Ingest**: Upload files using either Azure AI Foundry portal or the ingestion API. This enables your data to be cracked, chunked and embedded into an Azure AI Search instance that can be used by Azure OpenAI models. If you have an existing [supported data source](#supported-data-sources), you can also connect it directly.
+1. **Ingest**: Upload files using either [Azure AI Foundry portal](https://ai.azure.com/) or the ingestion API. This enables your data to be cracked, chunked and embedded into an Azure AI Search instance that can be used by Azure OpenAI models. If you have an existing [supported data source](#supported-data-sources), you can also connect it directly.
 
 1. **Develop**: After trying Azure OpenAI On Your Data, begin developing your application using the available REST API and SDKs, which are available in several languages. It will create prompts and search intents to pass to the Azure OpenAI service.
 
@@ -39,7 +39,7 @@ Typically, the development process you'd use with Azure OpenAI On Your Data is:
     
     1. **Response generation**: The resulting data is submitted along with other information like the system message to the Large Language Model (LLM) and the response is sent back to the application.
 
-To get started, [connect your data source](../use-your-data-quickstart.md) using Azure AI Foundry portal and start asking questions and chatting on your data.
+To get started, [connect your data source](../use-your-data-quickstart.md) using [Azure AI Foundry portal](https://ai.azure.com/) and start asking questions and chatting on your data.
 
 ## Azure Role-based access controls (Azure RBAC) for adding data sources
 
@@ -140,7 +140,7 @@ Azure OpenAI On Your Data lets you restrict the documents that can be used in re
 
 ### Index field mapping 
 
-If you're using your own index, you'll be prompted in the Azure AI Foundry portal to define which fields you want to map for answering questions when you add your data source. You can provide multiple fields for *Content data*, and should include all fields that have text pertaining to your use case. 
+If you're using your own index, you'll be prompted in the [Azure AI Foundry portal](https://ai.azure.com/) to define which fields you want to map for answering questions when you add your data source. You can provide multiple fields for *Content data*, and should include all fields that have text pertaining to your use case. 
 
 :::image type="content" source="../media/use-your-data/index-data-mapping.png" alt-text="A screenshot showing the index field mapping options in Azure AI Foundry portal." lightbox="../media/use-your-data/index-data-mapping.png":::
 
@@ -193,7 +193,7 @@ You might want to use Azure Blob Storage as a data source if you want to connect
 
 To keep your Azure AI Search index up-to-date with your latest data, you can schedule an automatic index refresh rather than manually updating it every time your data is updated. Automatic index refresh is only available when you choose **Azure Blob Storage** as the data source. To enable an automatic index refresh:
 
-1. [Add a data source](../quickstart.md) using Azure AI Foundry portal.
+1. [Add a data source](../quickstart.md) using [Azure AI Foundry portal](https://ai.azure.com/).
 1. Under **Select or add data source** select **Indexer schedule** and choose the refresh cadence you would like to apply.
 
     :::image type="content" source="../media/use-your-data/indexer-schedule.png" alt-text="A screenshot of the indexer schedule in Azure AI Foundry portal." lightbox="../media/use-your-data/indexer-schedule.png":::
@@ -225,7 +225,7 @@ To modify the schedule, you can use the [Azure portal](https://portal.azure.com/
 
 # [Upload files (preview)](#tab/file-upload)
 
-Using Azure AI Foundry portal, you can upload files from your machine to try Azure OpenAI On Your Data. You also have the option to create a new Azure Blob Storage account and Azure AI Search resource. The service then stores the files to an Azure storage container and performs ingestion from the container. You can use the [quickstart](../use-your-data-quickstart.md) article to learn how to use this data source option.
+Using [Azure AI Foundry portal](https://ai.azure.com/), you can upload files from your machine to try Azure OpenAI On Your Data. You also have the option to create a new Azure Blob Storage account and Azure AI Search resource. The service then stores the files to an Azure storage container and performs ingestion from the container. You can use the [quickstart](../use-your-data-quickstart.md) article to learn how to use this data source option.
 
 :::image type="content" source="../media/quickstarts/add-your-data-source.png" alt-text="A screenshot showing options for selecting a data source in Azure AI Foundry portal." lightbox="../media/quickstarts/add-your-data-source.png":::
 
@@ -309,11 +309,11 @@ Mapping these fields correctly helps ensure the model has better response and ci
 
 ### Use Elasticsearch as a data source via API  
 
-Along with using Elasticsearch databases in Azure AI Foundry portal, you can also use your Elasticsearch database using the [API](../references/elasticsearch.md). 
+Along with using Elasticsearch databases in [Azure AI Foundry portal](https://ai.azure.com/), you can also use your Elasticsearch database using the [API](../references/elasticsearch.md). 
 
 # [MongoDB Atlas (preview)](#tab/mongo-db-atlas)
 
-You can connect your MongoDB Atlas vector index with Azure OpenAI On Your Data for inferencing. You can use it through the Azure AI Foundry portal, API and SDK.
+You can connect your MongoDB Atlas vector index with Azure OpenAI On Your Data for inferencing. You can use it through the [Azure AI Foundry portal](https://ai.azure.com/), API and SDK.
 
 ### Prerequisites 
 
@@ -364,15 +364,15 @@ When you add your MongoDB Atlas data source, you can specify data fields to prop
 
 ## Deploy to a copilot (preview), Teams app (preview), or web app 
 
-After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure AI Foundry portal.
+After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in [Azure AI Foundry portal](https://ai.azure.com/).
 
 :::image type="content" source="../media/use-your-data/deploy-model.png" alt-text="A screenshot showing the model deployment button in Azure AI Foundry portal." lightbox="../media/use-your-data/deploy-model.png":::
 
 This gives you multiple options for deploying your solution.
 
 #### [Copilot (preview)](#tab/copilot)
 
-You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure AI Foundry portal, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
+You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from [Azure AI Foundry portal](https://ai.azure.com/), enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
 
 > [!NOTE]
 > Deploying to a copilot in Copilot Studio (preview) is only available in US regions.
@@ -394,15 +394,15 @@ A Teams app lets you bring conversational experience to your users in Teams to i
 - Your Azure account has been assigned **Cognitive Services OpenAI user** or **Cognitive Services OpenAI Contributor** role of the Azure OpenAI resource you're using, allowing your account to make Azure OpenAI API calls. For more information, see [Azure OpenAI On Your data configuration](../how-to/on-your-data-configuration.md#using-the-api) and [Add role assignment to an Azure OpenAI resource](/azure/ai-services/openai/how-to/role-based-access-control#add-role-assignment-to-an-azure-openai-resource) for instructions on setting this role in the Azure portal. 
 
 
-You can deploy to a standalone Teams app directly from Azure AI Foundry portal. Follow the steps below: 
+You can deploy to a standalone Teams app directly from [Azure AI Foundry portal](https://ai.azure.com/). Follow the steps below: 
 
 1. After you've added your data to the chat model, select **Deploy** and then **a new Teams app (preview)**. 
 
 1. Enter the name of your Teams app and download the resulting .zip file.
 
 1. Extract the .zip file and open the folder in Visual Studio Code.
 
-1. If you chose **API key** in the data connection step, manually copy and paste your Azure AI Search key into the `src\prompts\chat\config.json` file. Your Azure AI Search Key can be found in Azure AI Foundry portal Playground by selecting the **View code** button with the key located under Azure Search Resource Key. If you chose **System assigned managed identity**, you can skip this step. Learn more about different data connection options in the [Data connection](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search#data-connection) section.
+1. If you chose **API key** in the data connection step, manually copy and paste your Azure AI Search key into the `src\prompts\chat\config.json` file. Your Azure AI Search Key can be found in [Azure AI Foundry portal](https://ai.azure.com/) Playground by selecting the **View code** button with the key located under Azure Search Resource Key. If you chose **System assigned managed identity**, you can skip this step. Learn more about different data connection options in the [Data connection](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search#data-connection) section.
 
 1. Open the Visual Studio Code terminal and log into Azure CLI, selecting the account that you assigned **Cognitive Service OpenAI User** role to. Use the `az login` command in the terminal to log in.
 
@@ -468,7 +468,7 @@ A small chunk size like 256 produces more granular chunks. This size also means
 
 ### Runtime parameters
 
-You can modify the following additional settings in the **Data parameters** section in Azure AI Foundry portal and [the API](../references/on-your-data.md). You don't need to reingest your data when you update these parameters. 
+You can modify the following additional settings in the **Data parameters** section in [Azure AI Foundry portal](https://ai.azure.com/) and [the API](../references/on-your-data.md). You don't need to reingest your data when you update these parameters. 
 
 
 |Parameter name  | Description  |
@@ -485,7 +485,7 @@ It's possible for the model to return `"TYPE":"UNCITED_REFERENCE"` instead of `"
 
 You can define a system message to steer the model's reply when using Azure OpenAI On Your Data. This message allows you to customize your replies on top of the retrieval augmented generation (RAG) pattern that Azure OpenAI On Your Data uses. The system message is used in addition to an internal base prompt to provide the experience. To support this, we truncate the system message after a specific [number of tokens](#token-usage-estimation-for-azure-openai-on-your-data) to ensure the model can answer questions using your data. If you are defining extra behavior on top of the default experience, ensure that your system prompt is detailed and explains the exact expected customization. 
 
-Once you select add your dataset, you can use the **System message** section in the Azure AI Foundry portal, or the `role_information` [parameter in the API](../references/on-your-data.md).
+Once you select add your dataset, you can use the **System message** section in the [Azure AI Foundry portal](https://ai.azure.com/), or the `role_information` [parameter in the API](../references/on-your-data.md).
 
 :::image type="content" source="../media/use-your-data/system-message.png" alt-text="A screenshot showing the system message option in Azure AI Foundry portal." lightbox="../media/use-your-data/system-message.png":::
 
@@ -680,7 +680,7 @@ token_output = TokenEstimator.estimate_tokens(input_text)
 
 ## Troubleshooting 
 
-To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or Azure AI Foundry portal. Here are some of the common errors and warnings: 
+To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or [Azure AI Foundry portal](https://ai.azure.com/). Here are some of the common errors and warnings: 
 
 ### Failed ingestion jobs
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry portal links added"
}

Explanation

This change adds links to the Azure AI Foundry portal in many places throughout the Azure OpenAI On Your Data document, making it easy to open the portal directly while following the instructions.

Specifically, the sections covering data ingestion, management, index creation, and deployment each gained a link to the Azure AI Foundry portal. The document becomes more interactive and convenient as a result, and users can get to the information they need faster. Throughout, the steps that involve the portal are clearer, which improves the user experience; the edits are small but raise the document's overall clarity.
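For readers who follow the ingestion and runtime-parameter steps above through code rather than the portal, the request shape can be sketched as a plain body. This is a minimal illustration assuming the documented `data_sources` extension for attaching an Azure AI Search index to a chat-completions call; the endpoint, index name, and key below are hypothetical placeholders, and in real use the body would be posted to a chat-completions deployment:

```python
# Minimal sketch of an Azure OpenAI "On Your Data" request body, assuming
# the documented `data_sources` extension for an Azure AI Search index.
# All endpoint/index/key values are hypothetical placeholders.

def build_on_your_data_body(question, search_endpoint, index_name, search_key):
    """Assemble a chat-completions body that attaches an Azure AI Search data source."""
    return {
        "messages": [{"role": "user", "content": question}],
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": search_endpoint,
                    "index_name": index_name,
                    "authentication": {"type": "api_key", "key": search_key},
                },
            }
        ],
    }

body = build_on_your_data_body(
    "What does my data say about onboarding?",
    "https://example.search.windows.net",  # hypothetical
    "my-index",                            # hypothetical
    "<azure-ai-search-key>",               # hypothetical
)
print(body["data_sources"][0]["type"])  # azure_search
```

Building the body separately from the HTTP call keeps the data-source wiring easy to inspect and unit-test.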

articles/ai-services/openai/faq.yml

Diff
@@ -127,7 +127,7 @@ sections:
         answer: |
           A Limited Access registration form is not required to access most Azure OpenAI models. Learn more on the [Azure OpenAI Limited Access page](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context).
       - question: |
-          My guest account has been given access to an Azure OpenAI resource, but I'm unable to access that resource in the Azure AI Foundry portal. How do I enable access?
+          My guest account has been given access to an Azure OpenAI resource, but I'm unable to access that resource in the [Azure AI Foundry portal](https://ai.azure.com/). How do I enable access?
         answer: | 
           This is expected behavior when using the default sign-in experience for the [Azure AI Foundry](https://ai.azure.com).
           
@@ -139,7 +139,7 @@ sections:
           4. Enter the domain name of the organization that granted your guest account access to the Azure OpenAI resource. 
           5. Now sign-in with your guest account credentials. 
           
-          You should now be able to access the resource via the Azure AI Foundry portal.
+          You should now be able to access the resource via the [Azure AI Foundry portal](https://ai.azure.com/).
           
           Alternatively if you're signed into the [Azure portal](https://portal.azure.com) from the Azure OpenAI resource's Overview pane you can select **Go to Azure AI Foundry** to automatically sign in with the appropriate organizational context.   
   
@@ -177,7 +177,7 @@ sections:
         answer:
           We do offer an Availability SLA for all resources and a Latency SLA for Provisioned-Managed Deployments. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://azure.microsoft.com/support/legal/sla/cognitive-services/v1_1/). 
       - question: |
-          How do I enable fine-tuning? Create a custom model is greyed out in Azure AI Foundry portal.  
+          How do I enable fine-tuning? Create a custom model is greyed out in [Azure AI Foundry portal](https://ai.azure.com/).  
         answer: |
           In order to successfully access fine-tuning, you need Cognitive Services OpenAI Contributor assigned. Even someone with high-level Service Administrator permissions would still need this account explicitly set in order to access fine-tuning. For more information, please review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
       - question: |
@@ -306,9 +306,9 @@ sections:
         answer:
           You can customize your published web app in the Azure portal. The source code for the published web app is [available on GitHub](https://go.microsoft.com/fwlink/?linkid=2244395), where you can find information on changing the app frontend, as well as instructions for building and deploying the app.
       - question: |
-          Will my web app be overwritten when I deploy the app again from the Azure AI Foundry portal?
+          Will my web app be overwritten when I deploy the app again from the [Azure AI Foundry portal](https://ai.azure.com/)?
         answer:
-          Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure AI Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure AI Foundry portal without any change to the appearance or functionality. 
+          Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure AI Search index (if you're using Azure OpenAI on your data), and model settings selected in the [Azure AI Foundry portal](https://ai.azure.com/) without any change to the appearance or functionality. 
   - name: Using your data
     questions:
       - question: |
@@ -318,15 +318,15 @@ sections:
       - question: |
           How can I access Azure OpenAI on your data?  
         answer:
-          All Azure OpenAI customers can use Azure OpenAI on your data via the Azure AI Foundry portal and Rest API.
+          All Azure OpenAI customers can use Azure OpenAI on your data via the [Azure AI Foundry portal](https://ai.azure.com/) and Rest API.
       - question: |
           What data sources does Azure OpenAI on your data support?
         answer:
           Azure OpenAI on your data supports ingestion from Azure AI Search, Azure Blob Storage, and uploading local files. You can learn more about Azure OpenAI on your data from the [conceptual article](./concepts/use-your-data.md) and [quickstart](./use-your-data-quickstart.md).
       - question: |
           How much does it cost to use Azure OpenAI on your data?
         answer:
-          When using Azure OpenAI on your data, you incur costs when you use Azure AI Search, Azure Blob Storage, Azure Web App Service, semantic search and OpenAI models. There's no additional cost for using the "your data" feature in the Azure AI Foundry portal.
+          When using Azure OpenAI on your data, you incur costs when you use Azure AI Search, Azure Blob Storage, Azure Web App Service, semantic search and OpenAI models. There's no additional cost for using the "your data" feature in the [Azure AI Foundry portal](https://ai.azure.com/).
       - question: |
           How can I customize or automate the index creation process?
         answer:
@@ -356,7 +356,7 @@ sections:
         answer:
           You must send queries in the same language of your data. Your data can be in any of the languages supported by [Azure AI Search](/azure/search/search-language-support).
       - question: |
-          If Semantic Search is enabled for my Azure AI Search resource, will it be automatically applied to Azure OpenAI on your data in the Azure AI Foundry portal?
+          If Semantic Search is enabled for my Azure AI Search resource, will it be automatically applied to Azure OpenAI on your data in the [Azure AI Foundry portal](https://ai.azure.com/)?
         answer:
           When you select "Azure AI Search" as the data source, you can choose to apply semantic search. 
           If you select "Azure Blob Container" or "Upload files" as the data source, you can create the index as usual. Afterwards you would reingest the data using the "Azure AI Search" option to select the same index and apply Semantic Search. You will then be ready to chat on your data with semantic search applied.

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルのリンク追加"
}

Explanation

This change adds links to the Azure AI Foundry portal to the answers of several questions in the FAQ YAML file for Azure OpenAI, so users can reach the information they need more quickly.

Specifically, the portal URL is now embedded directly as a link in FAQ answers that describe how to access or work with the Azure AI Foundry portal. Users following the documented steps can jump straight from the document to the portal, which improves convenience as well as the overall readability and clarity of the answers.
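The pattern across these diffs is mechanical: a bare "Azure AI Foundry portal" mention becomes a markdown link to `https://ai.azure.com/`. A minimal Python sketch of that substitution (a hypothetical helper, not part of the actual docs toolchain):

```python
import re

# The canonical link target used throughout these diffs.
PORTAL_NAME = "Azure AI Foundry portal"
PORTAL_LINK = f"[{PORTAL_NAME}](https://ai.azure.com/)"

def link_portal_mentions(text: str) -> str:
    """Turn bare mentions of the portal into markdown links,
    skipping mentions that are already part of a link."""
    # Negative lookbehind/lookahead avoid re-linking "[Azure AI Foundry portal](...)".
    pattern = re.compile(r"(?<!\[)Azure AI Foundry portal(?!\])")
    return pattern.sub(PORTAL_LINK, text)

before = "Yes, from the quota page in the Azure AI Foundry portal."
print(link_portal_mentions(before))
# → Yes, from the quota page in the [Azure AI Foundry portal](https://ai.azure.com/).
```

Because already-linked mentions are skipped, running the helper twice leaves the text unchanged, matching how the docs edits touch only plain-text occurrences.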

articles/ai-services/openai/how-to/batch.md

Diff
@@ -94,7 +94,7 @@ The following aren't currently supported:
 ### Batch deployment
 
 > [!NOTE]
-> In the Azure AI Foundry portal the batch deployment types will appear as `Global-Batch` and `Data Zone Batch`. To learn more about Azure OpenAI deployment types, see our [deployment types guide](../how-to/deployment-types.md).
+> In the [Azure AI Foundry portal](https://ai.azure.com/) the batch deployment types will appear as `Global-Batch` and `Data Zone Batch`. To learn more about Azure OpenAI deployment types, see our [deployment types guide](../how-to/deployment-types.md).
 
 :::image type="content" source="../media/how-to/global-batch/global-batch.png" alt-text="Screenshot that shows the model deployment dialog in Azure AI Foundry portal with Global-Batch deployment type highlighted." lightbox="../media/how-to/global-batch/global-batch.png":::
 
@@ -166,7 +166,7 @@ Yes. Similar to other deployment types, you can create content filters and assoc
 
 ### Can I request additional quota?
 
-Yes, from the quota page in the Azure AI Foundry portal. Default quota allocation can be found in the [quota and limits article](../quotas-limits.md#batch-quota).
+Yes, from the quota page in the [Azure AI Foundry portal](https://ai.azure.com/). Default quota allocation can be found in the [quota and limits article](../quotas-limits.md#batch-quota).
 
 ### What happens if the API doesn't complete my request within the 24 hour time frame?
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルのリンク追加"
}

Explanation

This change adds links to the Azure AI Foundry portal in several places in the Azure OpenAI batch documentation, making the portal easier to reach from the text.

Specifically, the links were inserted in the note describing batch deployment types and in the answer pointing readers to the quota page. Readers can now follow the document's instructions and move directly to the portal to retrieve the related information, making the documentation more interactive and practical.

articles/ai-services/openai/how-to/evaluations.md

Diff
@@ -104,7 +104,7 @@ Testing criteria is used to assess the effectiveness of each output generated by
 
 ## Getting started
 
-1. Select the **Azure OpenAI Evaluation (PREVIEW)** within Azure AI Foundry portal. To see this view as an option may need to first select an existing Azure OpenAI resource in a supported region.
+1. Select the **Azure OpenAI Evaluation (PREVIEW)** within [Azure AI Foundry portal](https://ai.azure.com/). To see this view as an option may need to first select an existing Azure OpenAI resource in a supported region.
 2. Select **New evaluation**
 
     :::image type="content" source="../media/how-to/evaluations/new-evaluation.png" alt-text="Screenshot of the Azure OpenAI evaluation UX with new evaluation selected." lightbox="../media/how-to/evaluations/new-evaluation.png":::

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルのリンク追加"
}

Explanation

This change adds a link to the Azure AI Foundry portal in the Azure OpenAI evaluations documentation, in the step that tells users where to access the feature.

Specifically, the link was embedded in the step for selecting **Azure OpenAI Evaluation (PREVIEW)**, so users can navigate directly to the portal and reach the relevant section. This makes the document easier to act on and lets users start the evaluation process with less friction.

articles/ai-services/openai/how-to/fine-tuning.md

Diff
@@ -239,7 +239,7 @@ In order to successfully access fine-tuning, you need **Cognitive Services OpenA
 
 ### Why did my upload fail?
 
-If your file upload fails in Azure AI Foundry portal, you can view the error message under **Data files** in Azure AI Foundry portal. Hover your mouse over where it says “error” (under the status column) and an explanation of the failure will be displayed.
+If your file upload fails in [Azure AI Foundry portal](https://ai.azure.com/), you can view the error message under **Data files** in [Azure AI Foundry portal](https://ai.azure.com/). Hover your mouse over where it says “error” (under the status column) and an explanation of the failure will be displayed.
 
 :::image type="content" source="../media/fine-tuning/error.png" alt-text="Screenshot of fine-tuning error message." lightbox="../media/fine-tuning/error.png":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルのリンク追加"
}

Explanation

This change adds links to the Azure AI Foundry portal in the fine-tuning documentation. In the instructions for diagnosing a failed file upload, the link is inserted twice in the same sentence.

Linking directly to the Azure AI Foundry portal makes it easier for users to reach the **Data files** view and inspect the error message, so upload failures can be diagnosed and resolved more quickly.

articles/ai-services/openai/how-to/on-your-data-configuration.md

Diff
@@ -169,7 +169,7 @@ This step can be skipped only if you have a [shared private link](#create-shared
 
 You can disable public network access of your Azure OpenAI resource in the Azure portal. 
 
-To allow access to your Azure OpenAI Service from your client machines, like using Azure AI Foundry portal, you need to create [private endpoint connections](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#use-private-endpoints) that connect to your Azure OpenAI resource.
+To allow access to your Azure OpenAI Service from your client machines, like using [Azure AI Foundry portal](https://ai.azure.com/), you need to create [private endpoint connections](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#use-private-endpoints) that connect to your Azure OpenAI resource.
 
 
 ## Configure Azure AI Search
@@ -193,7 +193,7 @@ For more information, see the [Azure AI Search RBAC article](/azure/search/searc
 
 You can disable public network access of your Azure AI Search resource in the Azure portal. 
 
-To allow access to your Azure AI Search resource from your client machines, like using Azure AI Foundry portal, you need to create [private endpoint connections](/azure/search/service-create-private-endpoint) that connect to your Azure AI Search resource.
+To allow access to your Azure AI Search resource from your client machines, like using [Azure AI Foundry portal](https://ai.azure.com/), you need to create [private endpoint connections](/azure/search/service-create-private-endpoint) that connect to your Azure AI Search resource.
 
 
 ### Enable trusted service
@@ -247,7 +247,7 @@ In the Azure portal, navigate to your storage account networking tab, choose "Se
 
 You can disable public network access of your Storage Account in the Azure portal. 
 
-To allow access to your Storage Account from your client machines, like using Azure AI Foundry portal, you need to create [private endpoint connections](/azure/storage/common/storage-private-endpoints) that connect to your blob storage.
+To allow access to your Storage Account from your client machines, like using [Azure AI Foundry portal](https://ai.azure.com/), you need to create [private endpoint connections](/azure/storage/common/storage-private-endpoints) that connect to your blob storage.
 
 
 
@@ -276,9 +276,9 @@ To enable the developers to use these resources to build applications, the admin
 
 |Role| Resource | Description |
 |--|--|--|
-| `Cognitive Services OpenAI Contributor` | Azure OpenAI | Call public ingestion API from Azure AI Foundry portal. The `Contributor` role is not enough, because if you only have `Contributor` role, you cannot call data plane API via Microsoft Entra ID authentication, and Microsoft Entra ID authentication is required in the secure setup described in this article. |
-| `Contributor` | Azure AI Search | List API-Keys to list indexes from Azure AI Foundry portal.|
-| `Contributor` | Storage Account | List Account SAS to upload files from Azure AI Foundry portal.|
+| `Cognitive Services OpenAI Contributor` | Azure OpenAI | Call public ingestion API from [Azure AI Foundry portal](https://ai.azure.com/). The `Contributor` role is not enough, because if you only have `Contributor` role, you cannot call data plane API via Microsoft Entra ID authentication, and Microsoft Entra ID authentication is required in the secure setup described in this article. |
+| `Contributor` | Azure AI Search | List API-Keys to list indexes from [Azure AI Foundry portal](https://ai.azure.com/).|
+| `Contributor` | Storage Account | List Account SAS to upload files from [Azure AI Foundry portal](https://ai.azure.com/).|
 | `Contributor` | The resource group or Azure subscription where the developer need to deploy the web app to | Deploy web app to the developer's Azure subscription.|
 | `Role Based Access Control Administrator` | Azure OpenAI | Permission to configure the necessary role assignment on the Azure OpenAI resource. Enables the web app to call Azure OpenAI. |
 
@@ -302,7 +302,7 @@ Configure your local machine `hosts` file to point your resources host names to
 
 ## Azure AI Foundry portal
 
-You should be able to use all Azure AI Foundry portal features, including both ingestion and inference, from your on-premises client machines.
+You should be able to use all [Azure AI Foundry portal](https://ai.azure.com/) features, including both ingestion and inference, from your on-premises client machines.
 
 ## Web app
 The web app communicates with your Azure OpenAI resource. Since your Azure OpenAI resource has public network disabled, the web app needs to be set up to use the private endpoint in your virtual network to access your Azure OpenAI resource.
@@ -313,7 +313,7 @@ The web app needs to resolve your Azure OpenAI host name to the private IP of th
 1. [Add a DNS record](/azure/dns/private-dns-getstarted-portal#create-an-additional-dns-record). The IP is the private IP of the private endpoint for your Azure OpenAI resource, and you can get the IP address from the network interface associated with the private endpoint for your Azure OpenAI.
 1. [Link the private DNS zone to your virtual network](/azure/dns/private-dns-getstarted-portal#link-the-virtual-network) so the web app integrated in this virtual network can use this private DNS zone.
 
-When deploying the web app from Azure AI Foundry portal, select the same location with the virtual network, and select a proper SKU, so it can support the [virtual network integration feature](/azure/app-service/overview-vnet-integration). 
+When deploying the web app from [Azure AI Foundry portal](https://ai.azure.com/), select the same location with the virtual network, and select a proper SKU, so it can support the [virtual network integration feature](/azure/app-service/overview-vnet-integration). 
 
 After the web app is deployed, from the Azure portal networking tab, configure the web app outbound traffic virtual network integration, choose the third subnet that you reserved for web app.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルへのリンクの追加"
}

Explanation

This change adds links to the Azure AI Foundry portal across multiple sections of the document, letting users reach the portal directly and follow the procedures more easily.

Specifically, the links appear in the explanations of how to access the Azure OpenAI resource, the Azure AI Search resource, and the Storage Account from client machines, as well as in the role-assignment table and the paragraphs on deploying the web app. These additions make the relevant resources easier to reach and improve the overall user experience.

articles/ai-services/openai/how-to/provisioned-get-started.md

Diff
@@ -36,7 +36,7 @@ Creating a new deployment requires available (unused) quota to cover the desired
 
 Then 200 PTUs of quota are considered used, and there are 300 PTUs available for use to create new deployments. 
 
-A default amount of global, data zone, and regional provisioned quota is assigned to eligible subscriptions in several regions. You can view the quota available to you in a region by visiting the Quotas pane in Azure AI Foundry portal and selecting the desired subscription and region. For example, the screenshot below shows a quota limit of 500 PTUs in West US for the selected subscription. Note that you might see lower values of available default quotas. 
+A default amount of global, data zone, and regional provisioned quota is assigned to eligible subscriptions in several regions. You can view the quota available to you in a region by visiting the Quotas pane in [Azure AI Foundry portal](https://ai.azure.com/) and selecting the desired subscription and region. For example, the screenshot below shows a quota limit of 500 PTUs in West US for the selected subscription. Note that you might see lower values of available default quotas. 
 
 :::image type="content" source="../media/provisioned/available-quota.png" alt-text="A screenshot of the available quota in Azure AI Foundry portal." lightbox="../media/provisioned/available-quota.png":::
 
@@ -106,7 +106,7 @@ REST, ARM template, Bicep, and Terraform can also be used to create deployments.
 
 Due to the dynamic nature of capacity availability, it is possible that the region of your selected resource might not have the service capacity to create the deployment of the specified model, version, and number of PTUs. 
 
-In this event, the wizard in Azure AI Foundry portal will direct you to other regions with available quota and capacity to create a deployment of the desired model. If this happens, the deployment dialog will look like this: 
+In this event, the wizard in [Azure AI Foundry portal](https://ai.azure.com/) will direct you to other regions with available quota and capacity to create a deployment of the desired model. If this happens, the deployment dialog will look like this: 
 
 :::image type="content" source="../media/provisioned/deployment-screen-2.png" alt-text="Screenshot of the Azure AI Foundry portal deployment page for a provisioned deployment with no capacity available." lightbox="../media/provisioned/deployment-screen-2.png":::
 
@@ -165,7 +165,7 @@ The inferencing code for provisioned deployments is the same a standard deployme
 
 ## Understanding expected throughput
 The amount of throughput that you can achieve on the endpoint is a factor of the number of PTUs deployed, input size, output size, and call rate. The number of concurrent calls and total tokens processed can vary based on these values. Our recommended way for determining the throughput for your deployment is as follows:
-1. Use the Capacity calculator for a sizing estimate. You can find the capacity calculator in Azure AI Foundry portal under the quotas page and Provisioned tab.  
+1. Use the Capacity calculator for a sizing estimate. You can find the capacity calculator in [Azure AI Foundry portal](https://ai.azure.com/) under the quotas page and Provisioned tab.  
 1. Benchmark the load using real traffic workload. For more information about benchmarking, see the [benchmarking](#run-a-benchmark) section.
 
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルへのリンク追加"
}

Explanation

This change adds links to the Azure AI Foundry portal in several sections, making the information on provisioning resources easier to access.

The main updates add the link to the description of provisioned quota, to the passage where the deployment wizard redirects users to other regions with available quota and capacity, and to the instructions for finding the capacity calculator. Users can now reach the relevant pages directly from the document.

articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md

Diff
@@ -95,7 +95,7 @@ Customers that require long-term usage of provisioned, data zoned provisioned, a
 
 Discounts on top of the hourly usage price can be obtained by purchasing an Azure Reservation for Azure OpenAI Provisioned, Data Zone Provisioned, and Global Provisioned. An Azure Reservation is a term-discounting mechanism shared by many Azure products. For example, Compute and Cosmos DB. For Azure OpenAI Provisioned, Data Zone Provisioned, and Global Provisioned, the reservation provides a discount in exchange for committing to payment for fixed number of PTUs for a one-month or one-year period.  
 
-* Azure Reservations are purchased via the Azure portal, not the Azure AI Foundry portal Link to Azure reservation portal.
+* Azure Reservations are purchased via the Azure portal, not the [Azure AI Foundry portal](https://ai.azure.com/) Link to Azure reservation portal.
 
 * Reservations are purchased regionally and can be flexibly scoped to cover usage from a group of deployments. Reservation scopes include: 
 
@@ -147,4 +147,4 @@ After a reservation is created, it is a best practice monitor it to ensure it is
 
 - [Provisioned Throughput Units (PTU) getting started guide](./provisioned-get-started.md)
 - [Provisioned Throughput Units (PTU) concepts](../concepts/provisioned-throughput.md)
-- [Provisioned Throughput reservation documentation](https://aka.ms/oai/docs/ptum-reservations) 
\ No newline at end of file
+- [Provisioned Throughput reservation documentation](https://aka.ms/oai/docs/ptum-reservations) 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルへのリンクの追加"
}

Explanation

This change adds a link to the Azure AI Foundry portal in the discussion of purchasing Azure Reservations, clarifying where users should go when buying a reservation.

Specifically, the link was added to the bullet explaining that Azure Reservations are purchased via the Azure portal, not the Azure AI Foundry portal. The second hunk is a whitespace-only fix that adds the missing newline at the end of the file.

articles/ai-services/openai/how-to/quota.md

Diff
@@ -44,11 +44,11 @@ The flexibility to distribute TPM globally within a subscription and region has
 
 When you create a model deployment, you have the option to assign Tokens-Per-Minute (TPM) to that deployment. TPM can be modified in increments of 1,000, and will map to the TPM and RPM rate limits enforced on your deployment, as discussed above.
 
-To create a new deployment from within the Azure AI Foundry portal select **Deployments** > **Deploy model** > **Deploy base model** > **Select Model** > **Confirm**.
+To create a new deployment from within the [Azure AI Foundry portal](https://ai.azure.com/) select **Deployments** > **Deploy model** > **Deploy base model** > **Select Model** > **Confirm**.
 
 :::image type="content" source="../media/quota/deployment-new.png" alt-text="Screenshot of the deployment UI of Azure AI Foundry" lightbox="../media/quota/deployment-new.png":::
 
-Post deployment you can adjust your TPM allocation by selecting and editing your model from the **Deployments** page in Azure AI Foundry portal. You can also modify this setting from the **Management** > **Model quota** page.
+Post deployment you can adjust your TPM allocation by selecting and editing your model from the **Deployments** page in [Azure AI Foundry portal](https://ai.azure.com/). You can also modify this setting from the **Management** > **Model quota** page.
 
 > [!IMPORTANT]
 > Quotas and limits are subject to change, for the most up-date-information consult our [quotas and limits article](../quotas-limits.md).
@@ -68,7 +68,7 @@ All other model classes have a common max TPM value.
 
 ## View and request quota
 
-For an all up view of your quota allocations across deployments in a given region, select **Management** > **Quota** in Azure AI Foundry portal:
+For an all up view of your quota allocations across deployments in a given region, select **Management** > **Quota** in [Azure AI Foundry portal](https://ai.azure.com/):
 
 :::image type="content" source="../media/quota/quota-new.png" alt-text="Screenshot of the quota UI of Azure AI Foundry" lightbox="../media/quota/quota-new.png":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundry ポータルへのリンクの追加"
}

Explanation

This change adds links to the Azure AI Foundry portal in several sections, making quota and token management operations easier to follow.

Specifically, the steps for creating a model deployment and adjusting its TPM (tokens-per-minute) allocation now link to the Azure AI Foundry portal, as does the section describing where to view quota allocations across deployments in a region. Users can reach the relevant pages directly while working through the instructions.

articles/ai-services/openai/how-to/role-based-access-control.md

Diff
@@ -50,8 +50,8 @@ If a user were granted role-based access to only this role for an Azure OpenAI r
 
 ✅ View the resource in [Azure portal](https://portal.azure.com) <br>
 ✅ View the resource endpoint under **Keys and Endpoint** <br>
-✅ Ability to view the resource and associated model deployments in Azure AI Foundry portal. <br>
-✅ Ability to view what models are available for deployment in Azure AI Foundry portal. <br>
+✅ Ability to view the resource and associated model deployments in [Azure AI Foundry portal](https://ai.azure.com/). <br>
+✅ Ability to view what models are available for deployment in [Azure AI Foundry portal](https://ai.azure.com/). <br>
 ✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource. <br>
 ✅ Make inference API calls with Microsoft Entra ID.
 
@@ -93,7 +93,7 @@ This role is typically granted access at the resource group level for a user in
 ✅ View resources in the assigned resource group in the [Azure portal](https://portal.azure.com). <br>
 ✅ View the resource endpoint under **Keys and Endpoint** <br>
 ✅ View/Copy/Regenerate keys under **Keys and Endpoint** <br>
-✅ Ability to view what models are available for deployment in Azure AI Foundry portal <br>
+✅ Ability to view what models are available for deployment in [Azure AI Foundry portal](https://ai.azure.com/) <br>
 ✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource <br>
 ✅ Create customized content filters <br>
 ✅ Add data sources to Azure OpenAI On Your Data. **You must also have the [Cognitive Services OpenAI Contributor](#cognitive-services-openai-contributor) role as well**.
@@ -114,27 +114,27 @@ Viewing quota requires the **Cognitive Services Usages Reader** role. This role
 
 This role can be found in the Azure portal under **Subscriptions** > ***Access control (IAM)** > **Add role assignment** > search for **Cognitive Services Usages Reader**. The role must be applied at the subscription level, it does not exist at the resource level.
 
-If you don't wish to use this role, the subscription **Reader** role provides equivalent access, but it also grants read access beyond the scope of what is needed for viewing quota. Model deployment via the Azure AI Foundry portal is also partially dependent on the presence of this role.
+If you don't wish to use this role, the subscription **Reader** role provides equivalent access, but it also grants read access beyond the scope of what is needed for viewing quota. Model deployment via the [Azure AI Foundry portal](https://ai.azure.com/) is also partially dependent on the presence of this role.
 
 This role provides little value by itself and is instead typically assigned in combination with one or more of the previously described roles.
 
 #### Cognitive Services Usages Reader + Cognitive Services OpenAI User
 
 All the capabilities of Cognitive Services OpenAI User plus the ability to:
 
-✅ View quota allocations in Azure AI Foundry portal
+✅ View quota allocations in [Azure AI Foundry portal](https://ai.azure.com/)
 
 #### Cognitive Services Usages Reader + Cognitive Services OpenAI Contributor
 
 All the capabilities of Cognitive Services OpenAI Contributor plus the ability to:
 
-✅ View quota allocations in Azure AI Foundry portal
+✅ View quota allocations in [Azure AI Foundry portal](https://ai.azure.com/)
 
 #### Cognitive Services Usages Reader + Cognitive Services Contributor
 
 All the capabilities of Cognitive Services Contributor plus the ability to:
 
-✅ View & edit quota allocations in Azure AI Foundry portal <br>
+✅ View & edit quota allocations in [Azure AI Foundry portal](https://ai.azure.com/) <br>
 ✅ Create new model deployments or edit existing model deployments (via Azure AI Foundry) <br>
 
 ## Summary
@@ -143,8 +143,8 @@ All the capabilities of Cognitive Services Contributor plus the ability to:
 |-------------|--------------------|------------------------|------------------|-------------------------|
 |View the resource in Azure portal |✅|✅|✅| ➖ |
 |View the resource endpoint under “Keys and Endpoint” |✅|✅|✅| ➖ |
-|View the resource and associated model deployments in Azure AI Foundry portal |✅|✅|✅| ➖ |
-|View what models are available for deployment in Azure AI Foundry portal|✅|✅|✅| ➖ |
+|View the resource and associated model deployments in [Azure AI Foundry portal](https://ai.azure.com/) |✅|✅|✅| ➖ |
+|View what models are available for deployment in [Azure AI Foundry portal](https://ai.azure.com/)|✅|✅|✅| ➖ |
 |Use the Chat, Completions, and DALL-E (preview) playground experiences with any models that have already been deployed to this Azure OpenAI resource.|✅|✅|✅| ➖ |
 |Create or edit model deployments|❌|✅|✅| ➖ |
 |Create or deploy custom fine-tuned models|❌|✅|✅| ➖ |
@@ -162,7 +162,7 @@ All the capabilities of Cognitive Services Contributor plus the ability to:
 
 **Issue:**
 
-When selecting an existing Azure Cognitive Search resource the search indices don't load, and the loading wheel spins continuously. In Azure AI Foundry portal, go to **Playground Chat** > **Add your data (preview)** under Assistant setup. Selecting **Add a data source** opens a modal that allows you to add a data source through either Azure Cognitive Search or Blob Storage. Selecting the Azure Cognitive Search option and an existing Azure Cognitive Search resource should load the available Azure Cognitive Search indices to select from.
+When selecting an existing Azure Cognitive Search resource the search indices don't load, and the loading wheel spins continuously. In [Azure AI Foundry portal](https://ai.azure.com/), go to **Playground Chat** > **Add your data (preview)** under Assistant setup. Selecting **Add a data source** opens a modal that allows you to add a data source through either Azure Cognitive Search or Blob Storage. Selecting the Azure Cognitive Search option and an existing Azure Cognitive Search resource should load the available Azure Cognitive Search indices to select from.
 
 **Root cause** 
 
@@ -188,7 +188,7 @@ For this API call, you need a **subscription-level scope** role. You can use the
 
 **Root cause:**
 
-Insufficient subscription-level access for the user attempting to access the blob storage in Azure AI Foundry portal. The user may **not** have the necessary permissions to call the Azure Management API endpoint: ```https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/listAccountSas?api-version=2022-09-01```
+Insufficient subscription-level access for the user attempting to access the blob storage in [Azure AI Foundry portal](https://ai.azure.com/). The user may **not** have the necessary permissions to call the Azure Management API endpoint: ```https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/listAccountSas?api-version=2022-09-01```
 
 Public access to the blob storage is disabled by the owner of the Azure subscription for security reasons.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Foundryポータルへのリンクの統一"
}

Explanation

This change standardizes references to the Azure AI Foundry portal throughout the article, turning plain-text mentions into hyperlinks so users can reach the portal's features directly.

Specifically, the link now appears in the lists of capabilities for each role (such as viewing model deployments and quota allocations), in the summary table of permissions, and in the troubleshooting entries covering Azure Cognitive Search indices and blob storage access. Presenting the portal consistently as a hyperlink makes the role-based access control guidance easier to follow and apply.

articles/ai-services/openai/how-to/use-web-app.md

Diff
@@ -17,7 +17,7 @@ recommendations: false
 > [!NOTE]
 > The web app and its [source code](https://github.com/microsoft/sample-app-aoai-chatGPT) are provided "as is" and as a sample only. Customers are responsible for all customization and implementation of their web apps. See the support section for the web app on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/blob/main/SUPPORT.md) for more information.
 
-Along with Azure AI Foundry portal, APIs, and SDKs, you can use the customizable standalone web app to interact with Azure OpenAI models by using a graphical user interface. Key features include:
+Along with [Azure AI Foundry portal](https://ai.azure.com/), APIs, and SDKs, you can use the customizable standalone web app to interact with Azure OpenAI models by using a graphical user interface. Key features include:
 * Connectivity with multiple data sources to support rich querying and retrieval-augmented generation, including Azure AI Search, Prompt Flow, and more.
 * Conversation history and user feedback collection through Cosmos DB.
 * Authentication with role-based access control via Microsoft Entra ID.
@@ -191,15 +191,15 @@ To connect to Azure AI Search without redeploying your app, you can modify the f
 - `AZURE_SEARCH_ENABLE_IN_DOMAIN`: Limits responses to queries related only to your data.
     - Data type: boolean, should be set to `True`.
 - `AZURE_SEARCH_CONTENT_COLUMNS`: Specifies the list of fields in your Azure AI Search index that contain the text content of your documents, used when formulating a bot response.
-    - Data type: text, defaults to `content` if deployed from Azure AI Foundry portal,
+    - Data type: text, defaults to `content` if deployed from [Azure AI Foundry portal](https://ai.azure.com/),
 - `AZURE_SEARCH_FILENAME_COLUMN`: Specifies the field from your Azure AI Search index that provides a unique identifier of the source data to display in the UI.
-    - Data type: text, defaults to `filepath` if deployed from Azure AI Foundry portal,
+    - Data type: text, defaults to `filepath` if deployed from [Azure AI Foundry portal](https://ai.azure.com/),
 - `AZURE_SEARCH_TITLE_COLUMN`: Specifies the field from your Azure AI Search index that provides a relevant title or header for your data content to display in the UI.
-    - Data type: text, defaults to `title` if deployed from Azure AI Foundry portal,
+    - Data type: text, defaults to `title` if deployed from [Azure AI Foundry portal](https://ai.azure.com/),
 - `AZURE_SEARCH_URL_COLUMN`: Specifies the field from your Azure AI Search index that contains a URL for the document.
-    - Data type: text, defaults to `url` if deployed from Azure AI Foundry portal,
+    - Data type: text, defaults to `url` if deployed from [Azure AI Foundry portal](https://ai.azure.com/),
 - `AZURE_SEARCH_VECTOR_COLUMNS`: Specifies the list of fields in your Azure AI Search index that contain vector embeddings of your documents, used when formulating a bot response.
-    - Data type: text, defaults to `contentVector` if deployed from Azure AI Foundry portal,
+    - Data type: text, defaults to `contentVector` if deployed from [Azure AI Foundry portal](https://ai.azure.com/),
 - `AZURE_SEARCH_QUERY_TYPE`: Specifies the query type to use: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, or `vectorSemanticHybrid`. This setting takes precedence over `AZURE_SEARCH_USE_SEMANTIC_SEARCH`.
     - Data type: text, we recommend testing with `vectorSemanticHybrid`.
 - `AZURE_SEARCH_PERMITTED_GROUPS_COLUMN`: Specifies the field from your Azure AI Search index that contains Microsoft Entra group IDs, determining document-level access control.
@@ -294,7 +294,7 @@ The JSON to paste in the Advanced edit JSON editor is:
 
 ### Creating and deploying your prompt flow in Azure AI Foundry portal
 
-Follow [this tutorial](/azure/ai-studio/tutorials/deploy-copilot-ai-studio) to create, test, and deploy an inferencing endpoint for your prompt flow in Azure AI Foundry portal.
+Follow [this tutorial](/azure/ai-studio/tutorials/deploy-copilot-ai-studio) to create, test, and deploy an inferencing endpoint for your prompt flow in [Azure AI Foundry portal](https://ai.azure.com/).
 
 ### Enable underlying citations from your prompt flow
 
@@ -404,7 +404,7 @@ If you customized or changed the app's source code, you need to update your app'
 
 ## Deleting your Cosmos DB instance
 
-Deleting your web app doesn't delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option selected on subsequent updates from the Azure AI Foundry portal, the application notifies the user of a connection error. However, the user can continue to use the web app without access to the chat history.
+Deleting your web app doesn't delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option selected on subsequent updates from the [Azure AI Foundry portal](https://ai.azure.com/), the application notifies the user of a connection error. However, the user can continue to use the web app without access to the chat history.
 
 ## Enabling Microsoft Entra ID authentication between services
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Link additions and cleanup"
}

Explanation

This change adds links to parts of the information about the web app in order to make interacting with Azure OpenAI models easier. Users can now reach related resources and useful information with minimal effort.

Specifically, links to the Azure AI Foundry portal were added to the main sections covering the app's features and configuration settings, so users can immediately look up guidelines, customization options, and details on configuring the APIs. The documentation also emphasizes that the web app offers an intuitive graphical interface for interacting with Azure OpenAI models. In addition, the guidance on deleting a Cosmos DB instance was tidied up so the required data-management steps are stated clearly. Overall, the change aims to improve the user experience and make information access more efficient.
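The `AZURE_SEARCH_*` settings in the diff above each fall back to a default when the app was deployed from the Azure AI Foundry portal. As a minimal sketch of that fallback behavior (the `search_setting` helper is illustrative, not part of the sample app's actual code):

```python
import os

# Defaults applied when the app is deployed from the Azure AI Foundry portal,
# as listed in the diff above.
PORTAL_DEFAULTS = {
    "AZURE_SEARCH_CONTENT_COLUMNS": "content",
    "AZURE_SEARCH_FILENAME_COLUMN": "filepath",
    "AZURE_SEARCH_TITLE_COLUMN": "title",
    "AZURE_SEARCH_URL_COLUMN": "url",
    "AZURE_SEARCH_VECTOR_COLUMNS": "contentVector",
}

def search_setting(name, env=None):
    """Return the configured value for a setting, falling back to the portal default."""
    env = os.environ if env is None else env
    return env.get(name) or PORTAL_DEFAULTS.get(name)
```

For example, `search_setting("AZURE_SEARCH_TITLE_COLUMN", {})` yields `"title"` when nothing is configured, while an explicit app setting takes precedence.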

articles/ai-services/openai/how-to/working-with-models.md

Diff
@@ -20,7 +20,7 @@ You can get a list of models that are available for both inference and fine-tuni
 
 ## Model updates
 
-Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down is visible in Azure AI Foundry portal under **Deployments** and **Edit**:
+Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down is visible in [Azure AI Foundry portal](https://ai.azure.com/) under **Deployments** and **Edit**:
 
 :::image type="content" source="../media/models/auto-update-new.png" alt-text="Screenshot of the deploy model UI in the Azure AI Foundry portal." lightbox="../media/models/auto-update-new.png":::
 
@@ -43,7 +43,7 @@ When you select a specific model version for a deployment, this version remains
 
 ## Viewing retirement dates
 
-For currently deployed models, in the Azure AI Foundry portal select **Deployments**:
+For currently deployed models, in the [Azure AI Foundry portal](https://ai.azure.com/) select **Deployments**:
 
 :::image type="content" source="../media/models/deployments-new.png" alt-text="Screenshot of the deployment UI of the Azure AI Foundry portal." lightbox="../media/models/deployments-new.png":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Added links to the Azure AI Foundry portal"
}

Explanation

This change adds links to the Azure AI Foundry portal in the document describing Azure OpenAI models. Specifically, explicit portal links were inserted into the procedures for automatic model updates and for viewing model retirement dates.

Users can now go straight to concrete information on deploying and managing models, which improves convenience. The emphasized procedures, selecting a model version and reviewing currently deployed models, serve as a useful reference while performing those operations. The change aims to make the information easier to grasp and to improve the user experience.

articles/ai-services/openai/includes/audio-completions-rest.md

Diff
@@ -7,7 +7,7 @@ ms.topic: include
 ms.date: 1/21/2025
 ---
 
-[REST API Spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-11-01/inference.json?azure-portal=true) |
+[REST API Spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-10-21/inference.json) |
 
 [!INCLUDE [Audio completions introduction](audio-completions-intro.md)]
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "REST API spec version update"
}

Explanation

This change updates the REST API spec version referenced for Azure OpenAI. Specifically, the link to the inference spec now points to the `2024-10-21` stable release rather than `2024-11-01`, and the `?azure-portal=true` query string was removed.

With this update, users are pointed at the correct stable spec and can look up usage guidelines and endpoint details in the right place. Removing the outdated version reference at the same time keeps the document tidy and the information clear. Keeping the spec link accurate is an important step toward maintaining correct, easy-to-navigate documentation.
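The referenced spec version doubles as the `api-version` query parameter on Azure OpenAI endpoints. A small sketch of building a matching request URL (the resource and deployment names are hypothetical placeholders):

```python
def chat_completions_url(resource, deployment, api_version="2024-10-21"):
    """Build an Azure OpenAI chat completions URL for a given stable api-version.

    The resource and deployment names are hypothetical placeholders; the
    default api-version matches the stable spec linked above.
    """
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")
```

For example, `chat_completions_url("my-resource", "gpt-4o")` produces a URL ending in `api-version=2024-10-21`.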

articles/ai-services/openai/overview.md

Diff
@@ -42,7 +42,7 @@ Start with the [Create and deploy an Azure OpenAI Service resource](./how-to/cre
 1. When you have an Azure OpenAI Service resource, you can deploy a model such as GPT-4o.
 1. When you have a deployed model, you can:
 
-    - Try out the Azure AI Foundry portal playgrounds to explore the capabilities of the models. 
+    - Try out the [Azure AI Foundry portal](https://ai.azure.com/) playgrounds to explore the capabilities of the models. 
     - You can also just start making API calls to the service using the REST API or SDKs.
     
     For example, you can try [real-time audio](./realtime-audio-quickstart.md) and [assistants](./assistants-quickstart.md) in the playgrounds or via code.

Summary

{
    "modification_type": "minor update",
    "modification_title": "Added links to the Azure AI Foundry portal"
}

Explanation

This change adds a link to the Azure AI Foundry portal in the Azure OpenAI overview document. Specifically, a direct portal link was inserted into the sentence describing the playgrounds available for exploring model capabilities.

Readers can now reach the Azure AI Foundry portal more easily and try out the available features. Adding the link also improves the accuracy and convenience of the document and thereby the user experience. Even a small update like this is an important step toward letting users reach the information they need quickly.

articles/ai-services/openai/quotas-limits.md

Diff
@@ -44,8 +44,8 @@ The following sections provide you with a quick guide to the default quotas and
 | Max number of `/chat/completions` functions | 128 |
 | Max number of `/chat completions` tools | 128 |
 | Maximum number of Provisioned throughput units per deployment | 100,000 |
-| Max files per Assistant/thread | 10,000 when using the API or Azure AI Foundry portal. In Azure OpenAI Studio the limit was 20.|
-| Max file size for Assistants & fine-tuning | 512 MB<br/><br/>200 MB via Azure AI Foundry portal |
+| Max files per Assistant/thread | 10,000 when using the API or [Azure AI Foundry portal](https://ai.azure.com/). In Azure OpenAI Studio the limit was 20.|
+| Max file size for Assistants & fine-tuning | 512 MB<br/><br/>200 MB via [Azure AI Foundry portal](https://ai.azure.com/) |
 | Max size for all uploaded files for Assistants |100 GB |
 | Assistants token limit | 2,000,000 token limit |
 | GPT-4o max images per request (# of images in the messages array/conversation history) | 50 |
@@ -145,7 +145,7 @@ M = million | K = thousand
 
 ## gpt-4o audio
 
-The rate limits for each `gpt-4o` audio model deployment are 100K TPM and 1K RPM. During the preview, Azure AI Foundry portal and APIs might inaccurately show different rate limits. Even if you try to set a different rate limit, the actual rate limit will be 100K TPM and 1K RPM.
+The rate limits for each `gpt-4o` audio model deployment are 100K TPM and 1K RPM. During the preview, [Azure AI Foundry portal](https://ai.azure.com/) and APIs might inaccurately show different rate limits. Even if you try to set a different rate limit, the actual rate limit will be 100K TPM and 1K RPM.
 
 | Model|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
 |---|---|:---:|:---:|

Summary

{
    "modification_type": "minor update",
    "modification_title": "Added links to the Azure AI Foundry portal"
}

Explanation

This change adds links to the Azure AI Foundry portal in several places in the Azure OpenAI quotas and limits document. Specifically, portal links were inserted into the passages describing the limits for Assistants and for fine-tuning.

Users can now follow the links directly to the Azure AI Foundry portal and quickly check the related information. The update improves the accessibility of the information and makes it easier for users to find the relevant resources. Link additions like these raise the usefulness of the documentation and improve the user experience.
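The diff records different upload limits per channel (512 MB via the API, 200 MB via the portal). A minimal sketch of a pre-upload check using those figures from the quotas table (the `upload_allowed` helper itself is illustrative):

```python
# Max file size for Assistants & fine-tuning, per the quotas table in the diff:
# 512 MB via the API, 200 MB via the Azure AI Foundry portal.
MAX_UPLOAD_MB = {"api": 512, "portal": 200}

def upload_allowed(size_bytes, channel):
    """True if a file of this size can be uploaded through the given channel."""
    return size_bytes <= MAX_UPLOAD_MB[channel] * 1024 * 1024
```

A 300 MB file would pass the `"api"` check but fail the `"portal"` one, which mirrors why the table calls out both figures.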

articles/ai-services/openai/tutorials/fine-tune.md

Diff
@@ -35,7 +35,7 @@ In this tutorial you learn how to:
 - [Jupyter Notebooks](https://jupyter.org/)
 - An Azure OpenAI resource in a [region where `gpt-4o-mini-2024-07-18` fine-tuning is available](../concepts/models.md). If you don't have a resource the process of creating one is documented in our resource [deployment guide](../how-to/create-resource.md).
 - Fine-tuning access requires **Cognitive Services OpenAI Contributor**.
-- If you don't already have access to view quota and deploy models in Azure AI Foundry portal, then you need [more permissions](../how-to/role-based-access-control.md).
+- If you don't already have access to view quota and deploy models in [Azure AI Foundry portal](https://ai.azure.com/), then you need [more permissions](../how-to/role-based-access-control.md).
 
 > [!IMPORTANT]
 > We recommend reviewing the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) for fine-tuning to familiarize yourself with the associated costs. Testing of this tutorial resulted in 48,000 tokens being billed (4,800 training tokens * 10 epochs of training). Training costs are in addition to the costs that are associated with fine-tuning inference, and the hourly hosting costs of having a fine-tuned model deployed. Once you have completed the tutorial, you should delete your fine-tuned model deployment otherwise you continue to incur the hourly hosting cost.
@@ -892,7 +892,7 @@ print(r.reason)
 print(r.json())
 ```
 
-You can check on your deployment progress in the Azure AI Foundry portal.
+You can check on your deployment progress in the [Azure AI Foundry portal](https://ai.azure.com/).
 
 It isn't uncommon for this process to take some time to complete when dealing with deploying fine-tuned models.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Added links to the Azure AI Foundry portal"
}

Explanation

This change adds links to the Azure AI Foundry portal in several places in the Azure OpenAI fine-tuning tutorial. Specifically, direct portal links were inserted where the tutorial refers readers to the portal for deploying models and checking quota.

Users can now reach the Azure AI Foundry portal easily through the newly added links. The update improves accuracy and convenience, helping users find the information they need quickly. Small fixes like this raise the overall value of the documentation.

articles/ai-services/openai/whats-new.md

Diff
@@ -70,7 +70,7 @@ The `gpt-4o-realtime-preview` model version 2024-12-17 is available for global d
 
 - Added support for [prompt caching](./how-to/prompt-caching.md) with the `gpt-4o-realtime-preview` model.
 - Added support for new voices. The `gpt-4o-realtime-preview` models now support the following voices: "alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse".
-- Rate limits are no longer based on connections per minute. Rate limiting is now based on RPM (requests per minute) and TPM (tokens per minute) for the `gpt-4o-realtime-preview` model. The rate limits for each `gpt-4o-realtime-preview` model deployment are 100K TPM and 1K RPM. During the preview, Azure AI Foundry portal and APIs might inaccurately show different rate limits. Even if you try to set a different rate limit, the actual rate limit will be 100K TPM and 1K RPM.
+- Rate limits are no longer based on connections per minute. Rate limiting is now based on RPM (requests per minute) and TPM (tokens per minute) for the `gpt-4o-realtime-preview` model. The rate limits for each `gpt-4o-realtime-preview` model deployment are 100K TPM and 1K RPM. During the preview, [Azure AI Foundry portal](https://ai.azure.com/) and APIs might inaccurately show different rate limits. Even if you try to set a different rate limit, the actual rate limit will be 100K TPM and 1K RPM.
 
 For more information, see the [GPT-4o real-time audio quickstart](realtime-audio-quickstart.md) and the [how-to guide](./how-to/realtime-audio.md).
 
@@ -200,7 +200,7 @@ Global batch now supports GPT-4o (2024-08-06). See the [global batch getting sta
 
 ### Azure OpenAI Studio UX updates
 
-As of September 19, 2024, when you go to the [Azure OpenAI Studio](https://oai.azure.com/) you no longer see the legacy Azure OpenAI Studio by default. If needed you'll still be able to go back to the previous experience by using the **Switch to the old look** toggle in the top bar of the UI for the next couple of weeks. If you switch back to legacy Azure AI Foundry portal, it helps if you fill out the feedback form to let us know why. We're actively monitoring this feedback to improve the new experience.
+As of September 19, 2024, when you go to the [Azure OpenAI Studio](https://oai.azure.com/) you no longer see the legacy Azure OpenAI Studio by default. If needed you'll still be able to go back to the previous experience by using the **Switch to the old look** toggle in the top bar of the UI for the next couple of weeks. If you switch back to legacy [Azure AI Foundry portal](https://ai.azure.com/), it helps if you fill out the feedback form to let us know why. We're actively monitoring this feedback to improve the new experience.
 
 
 ### GPT-4o 2024-08-06 provisioned deployments
@@ -301,7 +301,7 @@ On August 6, 2024, OpenAI [announced](https://openai.com/index/introducing-struc
 
 Azure customers can test out GPT-4o `2024-08-06` today in the new Azure AI Foundry early access playground (preview).
 
-Unlike the previous early access playground, the Azure AI Foundry portal early access playground (preview) doesn't require you to have a resource in a specific region.
+Unlike the previous early access playground, the [Azure AI Foundry portal](https://ai.azure.com/) early access playground (preview) doesn't require you to have a resource in a specific region.
 
 > [!NOTE]
 > Prompts and completions made through the early access playground (preview) might be processed in any Azure OpenAI region, and are currently subject to a 10 request per minute per Azure subscription limit. This limit might change in the future.

Summary

{
    "modification_type": "minor update",
    "modification_title": "Added links to the Azure AI Foundry portal"
}

Explanation

This change adds links to the Azure AI Foundry portal in several places in the Azure OpenAI what's new document. Specifically, direct portal links were added to the rate limit descriptions and to the information about Azure OpenAI Studio.

Users can now look up related information more easily and reach the Azure AI Foundry portal directly. The update improves clarity and convenience, helping users keep up with the latest features and changes efficiently. Even small refinements like this add to the value of the documentation.
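The preview rate limits quoted in the diff (100K TPM and 1K RPM per `gpt-4o-realtime-preview` deployment) can be turned into a quick client-side sanity check. A minimal sketch, with a hypothetical `within_limits` helper using those figures:

```python
# Fixed preview limits for each gpt-4o-realtime-preview deployment, per the diff:
# 100K tokens per minute (TPM) and 1K requests per minute (RPM).
TPM_LIMIT = 100_000
RPM_LIMIT = 1_000

def within_limits(requests_per_min, avg_tokens_per_request):
    """True if the workload stays under both the RPM and TPM caps."""
    return (requests_per_min <= RPM_LIMIT
            and requests_per_min * avg_tokens_per_request <= TPM_LIMIT)
```

As the diff notes, values shown elsewhere in the portal or APIs may differ during the preview, but these are the caps actually enforced.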