Diff Insight Report - openai

Last updated: 2025-02-22

Usage notes

This post is a derivative work, adapted and summarized with the help of generative AI from Microsoft's official Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted at MicrosoftDocs/azure-ai-docs.

Generative AI has its limits, and this post may contain mistranslations or misinterpretations. Treat it as reference material only, and always consult the original documents for accurate information.

The trademarks used in this post belong to their respective owners. They are used for technical explanation only and do not imply endorsement or approval by the trademark holders.

View Diff on GitHub

Highlights

The changes in this diff are mostly small documentation updates intended to keep the information clear and current. A few sections, however, received substantial revisions, and the API-related changes in particular may directly affect users.

New features

  • The fine-tuning examples and quick reference introduce a new example model, "Clippy," with a variety of sample dialogues that aid understanding.

Breaking changes

  • The parameter reorganization in the latest inference preview API is a significant change; in particular, the information on audio-related settings has been strengthened.

Other updates

  • Most changes are date updates that keep the documents current.
  • Some descriptions and terms have been clarified, improving readability.
  • Specific links have been added, improving access to reference material.

Insights

The goal of this round of changes is mainly to help readers act efficiently on up-to-date information through documentation updates. Several guides and explanations related to the Azure OpenAI Service received date updates, giving readers confidence that the content is current.

The API-related updates are meant to make the guides easier to apply. The addition and reorganization of the audio parameters, in particular, helps users adopt the new audio capabilities smoothly. These changes broaden what the service can deliver and ease technical integration for users.
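
As a concrete illustration of the relocated audio parameters, here is a minimal sketch using the `openai` Python SDK against a hypothetical `gpt-4o-audio-preview` deployment; the endpoint, key, deployment name, and API version are placeholders, not values taken from the diff.

```python
import base64
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- replace with your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2025-01-01-preview",  # check the latest supported preview version
)

# `modalities` selects the output types, and `audio` (required when "audio"
# is requested) carries the voice and format settings described in the diff.
response = client.chat.completions.create(
    model="gpt-4o-audio-preview",  # your deployment name
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Give me a one-sentence weather report."}],
)

# The audio reply arrives base64-encoded alongside a text transcript.
with open("reply.wav", "wb") as f:
    f.write(base64.b64decode(response.choices[0].message.audio.data))
print(response.choices[0].message.audio.transcript)
```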

On the fine-tuning side, the newly introduced "Clippy" chatbot illustrates the breadth of ways Azure OpenAI can be used. Practical examples like this help users learn how to apply the technology more intuitively.

Through these revisions, Azure OpenAI Service underscores its effort to improve the reliability and usability of the information it provides. The changes also suggest how important regular documentation reviews are in this industry for keeping up with technical innovation, and they signal a commitment to ongoing user support.

Summary Table

| Filename | Type | Title | Status | Adds | Dels | Total |
|---|---|---|---|---|---|---|
| content-credentials.md | minor update | Update to the Content Credentials document | modified | 13 | 16 | 29 |
| content-filter.md | minor update | Date update for the content filter document | modified | 1 | 1 | 2 |
| default-safety-policies.md | minor update | Improvements to Azure OpenAI's default content safety policies | modified | 11 | 11 | 22 |
| gpt-4-v-prompt-engineering.md | minor update | Update to the image prompt engineering techniques document | modified | 11 | 12 | 23 |
| gpt-with-vision.md | minor update | Update to the GPT-4 with vision document | modified | 7 | 4 | 11 |
| prompt-transformation.md | minor update | Update to the prompt transformation document | modified | 5 | 8 | 13 |
| safety-system-message-templates.md | minor update | Update to the safety system message templates document | modified | 7 | 7 | 14 |
| dall-e-quickstart.md | minor update | Date update for the DALL-E quickstart | modified | 1 | 1 | 2 |
| gpt-v-quickstart.md | minor update | Update to the GPT-V quickstart | modified | 2 | 2 | 4 |
| dall-e.md | minor update | Update to the guide on using DALL-E models | modified | 6 | 6 | 12 |
| gpt-with-vision.md | minor update | Update on using vision-enabled chat models | modified | 8 | 7 | 15 |
| risks-safety-monitor.md | minor update | Update to the risks and safety monitoring article | modified | 6 | 5 | 11 |
| use-blocklists.md | minor update | Update to the article on using blocklists | modified | 5 | 5 | 10 |
| latest-inference-preview.md | breaking change | Major update to the latest inference preview API | modified | 34 | 36 | 70 |
| dall-e-rest.md | minor update | Date update for the DALL·E REST API guide | modified | 1 | 1 | 2 |
| fine-tuning-openai-in-ai-studio.md | minor update | Fine-tuning model updates and example changes | modified | 11 | 4 | 15 |
| fine-tuning-python.md | minor update | Updates to the fine-tuning examples and quick reference | modified | 10 | 3 | 13 |
| fine-tuning-rest.md | minor update | Updates to the fine-tuning REST API examples | modified | 10 | 3 | 13 |

Modified Contents

articles/ai-services/openai/concepts/content-credentials.md

Diff
@@ -1,18 +1,18 @@
 ---
 title: Content Credentials in Azure OpenAI
 titleSuffix: Azure OpenAI
-description: Learn about the content credentials feature, which lets you verify that an image was generated by an AI model.
+description: Learn about the Content Credentials feature, which lets you verify that an image was generated by an AI model.
 author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
 ms.topic: conceptual 
-ms.date: 8/28/2024
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
 # Content Credentials
 
-With the improved quality of content from generative AI models, there is an increased need for more transparency on the origin of AI-generated content. All AI-generated images from Azure OpenAI Service now include Content Credentials, a tamper-evident way to disclose the origin and history of content. Content Credentials are based on an open technical specification from the [Coalition for Content Provenance and Authenticity (C2PA)](https://www.c2pa.org), a Joint Development Foundation project. 
+With the improved quality of content from generative AI models, there is an increased need for more transparency about the origin of AI-generated content. All AI-generated images from Azure OpenAI Service include Content Credentials, a tamper-evident way to disclose the origin and history of content. Content Credentials are based on an open technical specification from the [Coalition for Content Provenance and Authenticity (C2PA)](https://www.c2pa.org), a Joint Development Foundation project. 
 
 ## What are Content Credentials? 
 
@@ -29,21 +29,18 @@ The manifest contains several key pieces of information:
 
 Content Credentials in the Azure OpenAI Service can help people understand when visual content is AI-generated. For more information on how to responsibly build solutions with Azure OpenAI Service image-generation models, visit the [Azure OpenAI transparency note](/legal/cognitive-services/openai/transparency-note?tabs=text).
 
-## How do I leverage Content Credentials in my solution today?
+## How do I use Content Credentials in my solution today?
 
-Customers may leverage Content Credentials by:
+Customers may use Content Credentials by:
 - Ensuring that their AI generated images contain Content Credentials
-
-No additional set-up is necessary. Content Credentials are automatically applied to all generated images from DALL·E in the Azure OpenAI Service. 
-
+    No additional set-up is necessary. Content Credentials are automatically applied to all generated images from DALL·E in the Azure OpenAI Service. 
 - Verifying that an image has Content Credentials
-  
-There are two recommended ways today to check the Credential of an image generated by Azure OpenAI DALL-E models:
-
-1. **Content Credentials Verify webpage (contentcredentials.org/verify)**: This is a tool that allows users to inspect the Content Credentials of a piece of content. If an image was generated by DALL-E in Azure OpenAI, the tool will display that its Content Credentials were issued by Microsoft Corporation alongside the date and time of issuance.
-    
-  :::image type="content" source="../media/encryption/credential-check.png" alt-text="Screenshot of the content credential verification website.":::
+    There are two recommended ways today to check the credential of an image generated by Azure OpenAI DALL-E models:
 
-  This page shows that an image generated by Azure OpenAI DALL-E has Content Credentials issued by Microsoft.
+    - **Content Credentials Verify webpage (contentcredentials.org/verify)**: This is a tool that allows users to inspect the Content Credentials of a piece of content. If an image was generated by DALL-E in Azure OpenAI, the tool will display that its Content Credentials were issued by Microsoft Corporation alongside the date and time of issuance.
+       :::image type="content" source="../media/encryption/credential-check.png" alt-text="Screenshot of the content credential verification website.":::
     
-2. **Content Authenticity Initiative (CAI) open-source tools**: The CAI provides multiple open-source tools that validate and display C2PA Content Credentials. Find the tool right for your application and [get started here](https://opensource.contentauthenticity.org/).
+       This page shows that an image generated by Azure OpenAI DALL-E has Content Credentials issued by Microsoft.
+        
+    - **Content Authenticity Initiative (CAI) open-source tools**: The CAI provides multiple open-source tools that validate and display C2PA Content Credentials. Find the tool right for your application and [get started here](https://opensource.contentauthenticity.org/).
+    
\ No newline at end of file

Summary

{
    "modification_type": "minor update",
    "modification_title": "コンテンツ認証に関する文書の更新"
}

Explanation

This change is a minor update to the Content Credentials document. The front-matter description was revised for clarity, and the document date was updated from August 28, 2024 to February 20, 2025.

In the text, "the content credentials feature" in the description was capitalized to "the Content Credentials feature," and the word "leverage" was replaced with the plainer "use."

The update emphasizes transparency about the origin of AI-generated images, and the section on how to check an image's Content Credentials was restructured into a clearer nested list, improving the document's consistency.

articles/ai-services/openai/concepts/content-filter.md

Diff
@@ -6,7 +6,7 @@ author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
 ms.topic: conceptual 
-ms.date: 08/22/2024
+ms.date: 02/20/2025
 ms.custom: template-concept, devx-track-python
 manager: nitinme
 ---

Summary

{
    "modification_type": "minor update",
    "modification_title": "コンテンツフィルタに関する日付更新"
}

Explanation

This change is a minor update that refreshes the date on the content filter document. Specifically, the last-updated date moves from August 22, 2024 to February 20, 2025.

The update matters because it signals that the document reflects current information. Nothing else changed; the edit focuses solely on the date, letting readers gauge the document's freshness.

articles/ai-services/openai/concepts/default-safety-policies.md

Diff
@@ -1,27 +1,27 @@
 ---
 title: Azure OpenAI default content safety policies
 titleSuffix: Azure OpenAI
-description: Learn about the default content safety policies that Azure OpenAI uses to flag content.
+description: Learn about the default content safety policies that Azure OpenAI uses to flag content and ensure responsible use of the service.
 author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
-ms.topic: conceptual 
-ms.date: 07/15/2024
+ms.topic: conceptual
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
 # Default content safety policies
 
 
-Azure OpenAI Service includes default safety applied to all models, excluding Azure OpenAI Whisper. These configurations provide you with a responsible experience by default, including [content filtering models](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new), blocklists, prompt transformation, [content credentials](/azure/ai-services/openai/concepts/content-credentials), and others.
+Azure OpenAI Service includes default safety policies applied to all models, excluding Azure OpenAI Whisper. These configurations provide you with a responsible experience by default, including [content filtering models](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new), blocklists, prompt transformation, [content credentials](/azure/ai-services/openai/concepts/content-credentials), and others.
 
-Default safety aims to mitigate risks such as hate and fairness, sexual, violence, self-harm, protected material content and user prompt injection attacks. To learn more about content filtering, visit our documentation describing categories and severity levels [here](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new).
+Default safety aims to mitigate risks in different categories such as hate and fairness, sexual, violence, self-harm, protected material content, and user prompt injection attacks. To learn more about content filtering, visit our documentation describing [categories and severity levels](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new).
 
-All safety is configurable. To learn more about configurability, visit our documentation on [configuring content filtering](/azure/ai-services/openai/how-to/content-filters).
+All safety policies are configurable. To learn more about configurability, see the documentation on [configuring content filtering](/azure/ai-services/openai/how-to/content-filters).
 
-## Text models: GPT-4, GPT-3.5
+## Text models
 
-Text models in the Azure OpenAI Service can take in and generate both text and code. These models leverage Azure’s text content filtering models to detect and prevent harmful content. This system works on both prompt and completion. 
+Text models in the Azure OpenAI Service can take in and generate both text and code. These models leverage Azure’s text content filters to detect and prevent harmful content. This system works on both prompts and completions. 
 
 | Risk Category                             | Prompt/Completion      | Severity Threshold |
 |-------------------------------------------|------------------------|---------------------|
@@ -35,9 +35,9 @@ Text models in the Azure OpenAI Service can take in and generate both text and c
 
 
 
-## Vision models: GPT-4o, GPT-4 Turbo, DALL-E 3, DALL-E 2
+## Vision models
 
-### GPT-4o and GPT-4 Turbo
+### Vision-enabled chat models
 
 | Risk Category                                        | Prompt/Completion      | Severity Threshold |
 |------------------------------------------------------|------------------------|---------------------|
@@ -48,7 +48,7 @@ Text models in the Azure OpenAI Service can take in and generate both text and c
 | Identification of Individuals and Inference of Sensitive Attributes | Prompts                | N/A                 |
 | User prompt injection attack (Jailbreak)             | Prompts                | N/A                 |
 
-### DALL-E 3 and DALL-E 2
+### Image generation models
 
 
 | Risk Category                                     | Prompt/Completion      | Severity Threshold |

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure OpenAIのデフォルトコンテンツセーフティポリシーの改善"
}

Explanation

This change updates the document on Azure OpenAI's default content safety policies. The description was expanded to note that the policies help ensure responsible use of the service, and the document date moves from July 15, 2024 to February 20, 2025.

Specifically, the risk description now reads "mitigate risks in different categories," and "All safety is configurable" became the clearer "All safety policies are configurable."

The model section headings were also generalized (for example, "Text models: GPT-4, GPT-3.5" is now simply "Text models"), and several terms were updated, making the document easier to follow overall. The update should deepen readers' understanding of Azure OpenAI's safety posture.

articles/ai-services/openai/concepts/gpt-4-v-prompt-engineering.md

Diff
@@ -6,28 +6,27 @@ author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
 ms.topic: conceptual 
-ms.date: 06/10/2024
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
 # Image prompt engineering techniques
 
 To unlock the full potential of vision-enabled chat models like GPT-4 Turbo with Vision and GPT-4o, it's essential to tailor the prompts to your specific needs. Here are some guidelines to enhance the accuracy and efficiency of your prompts.
 
-## Fundamentals of crafting your image prompt
+## Fundamentals of writing an image prompt
+
 - **Contextual specificity:** Adding context to the scenario at hand gives the model a better understanding of an appropriate output. This level of specificity aids in focusing on relevant aspects and avoiding extraneous details.
 - **Task-oriented prompts:** Focusing on a specific task helps the model to develop the output while taking that perspective into consideration.  
-- **Handling refusals:** When the model indicates an inability to perform a task, refining the prompt can be an effective solution. More specific prompts can guide the model towards a clearer understanding and better execution of the task. Some tips to keep in mind: 
-
+- **Handle refusals:** When the model indicates an inability to perform a task, refining the prompt can be an effective solution. More specific prompts can guide the model towards a clearer understanding and better execution of the task. Keep these tips in mind:
    - Request explanations for generated responses to enhance transparency in the model's output <br>
    - If using a single-image prompt, place the image before the text <br>
    - Ask the model to describe the image in details first and complete your specific task from the description
-
-- **Adding examples:** Add examples that represent the type of responses you're looking for
+- **Add examples:** Add examples that represent the type of responses you're looking for
 - **Break down requests:** Try breaking down complex requests step-by-step to create manageable sub-goals
 - **Define output format:** Clearly mention the desired format for the output, such as markdown, JSON, HTML, etc. You can also suggest a specific structure, length, or specific attributes about the response.
 
-## Example prompt inputs & outputs
+## Example prompt inputs and outputs
 There are many ways to craft system prompts to tailor the output specifically to your needs. The following sample inputs and outputs showcase how adjusting your prompts can give you different results. Try out the model for yourself using these images and adjusting the system prompt in the [Azure AI Foundry playground](https://ai.azure.com/).
 
 ### Contextual specificity  
@@ -62,7 +61,7 @@ When you receive a refusal output from a model, there are several ways you can t
 
 |Follow up prompt|Model output|
 |--------|-----------|
-| What information do you need? | `I apologize for the confusion earlier. To plan a meal based on the image provided, I would need to know the following: <br> The number of people dining to ensure there's enough food for everyone.<br>Any dietary restrictions or preferences to accommodate (e.g., vegetarian, gluten-free, allergies).<br>The occasion or type of meal being planned (e.g., casual lunch, dinner party).<br>Desired balance of the meal in terms of food groups and nutrition. <br>With this information, I could help suggest how to portion the dishes, create a balanced meal, and ensure the meal meets the needs of all diners.`|
+| What information do you need? | `I apologize for the confusion earlier. To plan a meal based on the image provided, I would need to know the following: `<br> `The number of people dining to ensure there's enough food for everyone.`<br>`Any dietary restrictions or preferences to accommodate (e.g., vegetarian, gluten-free, allergies).`<br>`The occasion or type of meal being planned (e.g., casual lunch, dinner party).`<br>`Desired balance of the meal in terms of food groups and nutrition.` <br>`With this information, I could help suggest how to portion the dishes, create a balanced meal, and ensure the meal meets the needs of all diners.`|
 
 ### Adding examples
 
@@ -74,17 +73,17 @@ When you receive a refusal output from a model, there are several ways you can t
 |--------|-----------|
 | Q: What is this dog breed and what is its average weight & height? :::image type="content" source="../media/concepts/gpt-v/pomeranian.jpg" lightbox="../media/concepts/gpt-v/pomeranian.jpg" alt-text="Photograph of a pomeranian dog sitting on a pool diving board above the water."::: <br> A: Breed: pomeranian; average weight: 3-7 lbs; average height: 8-14 inches <br> :::image type="content" source="../media/concepts/gpt-v/bulldog.jpg" lightbox="../media/concepts/gpt-v/bulldog.jpg" alt-text="Photograph of a bulldog laying in a grass field."::: A: Breed: English bulldog; average weight: 50-55 lbs; average height: 14-16 inches :::image type="content" source="../media/concepts/gpt-v/border-collie.jpg" lightbox="../media/concepts/gpt-v/border-collie.jpg" alt-text="Photograph of a border collie dog laying in a grass field."::: | `A: Breed: Border Collie; average weight: 30-45 lbs; average height: 18-22 inches` |
 
-### Break down requests
+### Breaking down requests
 
 |Basic prompt|Model output|
 |--------|-----------|
-| :::image type="content" source="../media/concepts/gpt-v/grandmas-recipe.jpg" lightbox="../media/concepts/gpt-v/grandmas-recipe.jpg" alt-text="Photograph of a recipe card detailing a recipe for Grandma's Apple Crumb Pie."::: What's in this image? | `The image contains a recipe card titled Grandma's Apple Crumb Pie. It is noted as a Thanksgiving tradition. The ingredients listed are:<br><ul><li>5-7 apples, peeled </li> <li>1 9-inch pie crust</li> <li>	3/4 cups sugar </li> <li>1/2 teaspoon cinnamon </li> <li>A dash of salt </li><li>	3 tablespoons butter</li> </ul><br>Please note that the measurements for sugar, flour, and butter are not fully visible in the image. If needed, the standard measures for a crumb topping typically involve a combination of sugar, flour, and butter mixed together until crumbly`|
+| :::image type="content" source="../media/concepts/gpt-v/grandmas-recipe.jpg" lightbox="../media/concepts/gpt-v/grandmas-recipe.jpg" alt-text="Photograph of a recipe card detailing a recipe for Grandma's Apple Crumb Pie."::: What's in this image? | `The image contains a recipe card titled Grandma's Apple Crumb Pie. It is noted as a Thanksgiving tradition. The ingredients listed are:`<br><ul><li>`5-7 apples, peeled `</li> <li>`1 9-inch pie crust`</li> <li>`	3/4 cups sugar `</li> <li>`1/2 teaspoon cinnamon `</li> <li>`A dash of salt `</li><li>`	3 tablespoons butter`</li> </ul><br>`Please note that the measurements for sugar, flour, and butter are not fully visible in the image. If needed, the standard measures for a crumb topping typically involve a combination of sugar, flour, and butter mixed together until crumbly`|
 
 |Updated prompt|Model output|
 |----------|-----------|
-| :::image type="content" source="../media/concepts/gpt-v/grandmas-recipe.jpg" lightbox="../media/concepts/gpt-v/grandmas-recipe.jpg" alt-text="Photograph of a recipe card detailing a recipe for Grandma's Apple Crumb Pie."::: You are an AI assistant that helps create shopping lists and instructions to prep food based on recipes. Please provide a list of ingredients to purchase, and then provide a list of instructions to prep the ingredients based on the recipe card.  | Sure, here's a shopping list based on the recipe card: <br>Shopping List:</br><ol><li>5-7 apples (suitable for baking, such as Granny Smith or Honeycrisp)</li> <li>1 ready-made pie crust (9-inch) </li><li>Sugar (ensure you have enough for 1/2 cup)</li><li>All-purpose flour (ensure you have enough for 3/4 cup)</li><li>Ground cinnamon</li><li>Salt</li><li>Butter (ensure you have enough for 3 tablespoons)</li></ol><br>Prepping Instructions:</br> <ul><li>Peel the apples: Using a vegetable peeler or a paring knife, remove the skin from 5 to 7 apples.</li><li>Core and slice the apples: Cut the apples into quarters, remove the cores, and then slice the apple quarters into thin slices or chunks, depending on your preference for pie texture.</li><li>Measure the ingredients: Measure out 3/4 cups of sugar, 1/2 teaspoon of cinnamon, a dash of salt (about 1/8 teaspoon), and 3 tablespoons of butter.</li><li>Prepare the butter: Cut the butter into small pieces to be ready to use in the recipe. It's often easier to incorporate if it's cold, so keep it refrigerated until it's needed</li><li>Preheat your oven according to the pie recipe's instructions so it's ready for baking once your pie is assembled.</li><li>Remember to follow any additional preparation or baking instructions provided in the full recipe for your apple crumb pie.</li></ul> |
+| :::image type="content" source="../media/concepts/gpt-v/grandmas-recipe.jpg" lightbox="../media/concepts/gpt-v/grandmas-recipe.jpg" alt-text="Photograph of a recipe card detailing a recipe for Grandma's Apple Crumb Pie."::: You are an AI assistant that helps create shopping lists and instructions to prep food based on recipes. Please provide a list of ingredients to purchase, and then provide a list of instructions to prep the ingredients based on the recipe card.  | `Sure, here's a shopping list based on the recipe card: `<br>`Shopping List:`</br><ol><li>`5-7 apples (suitable for baking, such as Granny Smith or Honeycrisp)`</li> <li>`1 ready-made pie crust (9-inch) `</li><li>`Sugar (ensure you have enough for 1/2 cup)`</li><li>`All-purpose flour (ensure you have enough for 3/4 cup)`</li><li>`Ground cinnamon`</li><li>`Salt`</li><li>`Butter (ensure you have enough for 3 tablespoons)`</li></ol><br>`Prepping Instructions:`</br> <ul><li>`Peel the apples: Using a vegetable peeler or a paring knife, remove the skin from 5 to 7 apples.`</li><li>`Core and slice the apples: Cut the apples into quarters, remove the cores, and then slice the apple quarters into thin slices or chunks, depending on your preference for pie texture.`</li><li>`Measure the ingredients: Measure out 3/4 cups of sugar, 1/2 teaspoon of cinnamon, a dash of salt (about 1/8 teaspoon), and 3 tablespoons of butter.`</li><li>`Prepare the butter: Cut the butter into small pieces to be ready to use in the recipe. It's often easier to incorporate if it's cold, so keep it refrigerated until it's needed`</li><li>`Preheat your oven according to the pie recipe's instructions so it's ready for baking once your pie is assembled.`</li><li>`Remember to follow any additional preparation or baking instructions provided in the full recipe for your apple crumb pie.`</li></ul> |
 
-### Define output format
+### Defining output format
 
 |Basic prompt|Model output|
 |----------|-----------|

Summary

{
    "modification_type": "minor update",
    "modification_title": "画像プロンプトエンジニアリング技術に関する文書の更新"
}

Explanation

This change is a minor update to the document on image prompt engineering techniques. The date moves from June 10, 2024 to February 20, 2025, and the structure and terminology were lightly revised.

Specifically, the heading "Fundamentals of crafting your image prompt" became the clearer "Fundamentals of writing an image prompt," and several guideline items were tightened: for example, "Handling refusals" is now "Handle refusals," with its explanation reorganized.

The example input and output headings were adjusted as well ("Break down requests" became "Breaking down requests," "Define output format" became "Defining output format"), and the model outputs in the example tables were wrapped in code formatting for readability. Overall, the document now offers more concrete, usable advice for refining prompts with vision-enabled models such as GPT-4 Turbo with Vision.

articles/ai-services/openai/concepts/gpt-with-vision.md

Diff
@@ -6,7 +6,7 @@ author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
 ms.topic: conceptual 
-ms.date: 09/24/2024
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
@@ -18,7 +18,7 @@ To try out vision-enabled chat models, see the [quickstart](/azure/ai-services/o
 
 ## Vision-enabled chats
 
-The vision-enabled models answer general questions about what's present in the images or videos you upload.
+The vision-enabled models answer general questions about what's present in the images you upload.
 
 
 ## Special pricing information
@@ -51,7 +51,7 @@ For a typical use case, take an image with both visible objects and text and a 1
 | Output Tokens      | 100 tokens (assumed)    | $0.003       |
 | **Total** |  |**$0.00955** |
 
-
+<!--
 ### Example video price calculation
 
 > [!IMPORTANT]
@@ -68,6 +68,7 @@ For a typical use case, take a 3-minute video with a 100-token prompt input. The
 | **Total**      |      | **$0.03025** |
 
 Additionally, there's a one-time indexing cost of $0.15 to generate the Video Retrieval index for this 3-minute video. This index can be reused across any number of Video Retrieval and GPT-4 Turbo with Vision API calls.
+-->
 
 ## Input limitations
 
@@ -79,16 +80,18 @@ This section describes the limitations of vision-enabled chat models.
 - **Low resolution accuracy**: When images are analyzed using the "low resolution" setting, it allows for faster responses and uses fewer input tokens for certain use cases. However, this could impact the accuracy of object and text recognition within the image.
 - **Image chat restriction**: When you upload images in Azure AI Foundry portal or the API, there is a limit of 10 images per chat call.
 
+<!--
 ### Video support
 
 - **Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
 - **Video file limits**: Both MP4 and MOV file types are supported. In Azure AI Foundry portal, videos must be less than 3 minutes long. When you use the API there is no such limitation.
 - **Prompt limits**: Video prompts only contain one video and no images. In Azure AI Foundry portal, you can clear the session to try another video or images.
 - **Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
 - **Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.
+-->
 
 ## Next steps
 
 - Get started using vision-enabled models by following the [quickstart](/azure/ai-services/openai/gpt-v-quickstart).
-- For a more in-depth look at the APIs, and to use video prompts in chat, follow the [how-to guide](../how-to/gpt-with-vision.md).
+- For a more in-depth look at the APIs, follow the [how-to guide](../how-to/gpt-with-vision.md).
 - See the [completions and embeddings API reference](../reference.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "GPT-4とビジョンに関する文書の更新"
}

Explanation

This change is a small edit to the document on GPT-4 and vision. The main update is the date, which moves from September 24, 2024 to February 20, 2025, along with a few adjustments to specific wording.

Specifically, "what's present in the images or videos you upload" became "what's present in the images you upload," clarifying the scope. The example video price calculation section and the video support section were commented out, removing video-specific guidance from the rendered page.

The input limitations and next-step guidance were tidied accordingly; the link to the how-to guide no longer mentions using video prompts in chat.

With this update, the document concentrates on image processing, giving users clearer guidance on what the vision capabilities currently cover.

articles/ai-services/openai/concepts/prompt-transformation.md

Diff
@@ -6,13 +6,13 @@ author: PatrickFarley
 ms.author: pafarley
 ms.service: azure-ai-openai
 ms.topic: conceptual 
-ms.date: 07/16/2024
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
 # What is prompt transformation?
 
-Prompt transformation is a process in DALL-E 3 image generation that applies a safety and quality system message to your original prompt using a large language model (LLM) call before being sent to the model for image generation. This system message enriches your original prompt with the goal of generating more diverse and higher-quality images, while maintaining intent. 
+Prompt transformation is a process included in DALL-E 3 image generation that applies a safety and quality system message to your original prompt. It uses a large language model (LLM) call to add the message before sending your prompt to the model for image generation. This system message enriches your original prompt with the goal of generating more diverse and higher-quality images while maintaining intent. 
 
 After prompt transformation is applied to the original prompt, content filtering is applied as a secondary step before image generation; for more information, see [Content filtering](./content-filter.md).
 
@@ -21,7 +21,6 @@ After prompt transformation is applied to the original prompt, content filtering
 
 ## Prompt transformation example
 
-
 | **Example text prompt** | **Example generated image without prompt transformation** | **Example generated image with prompt transformation** |
 |---|---|---|
 |"Watercolor painting of the Seattle skyline" | ![Watercolor painting of the Seattle skyline (simple).](../media/how-to/generated-seattle.png) | ![Watercolor painting of the Seattle skyline, with more detail and structure.](../media/how-to/generated-seattle-prompt-transformed.png) |
@@ -39,12 +38,10 @@ Prompt transformation is applied by default to all Azure OpenAI DALL-E 3 request
 
 Like image generation, prompt transformation is non-deterministic due to the nature of large language models. A single original prompt may lead to many image variants.
 
-
 ## View prompt transformations
 
 Your revised or transformed prompt is visible in the API response object as shown here, in the `revised_prompt` field. 
 
-
 ```json
 Input Content:
 {
@@ -69,7 +66,7 @@ Output Content:
 }
 ```
 
+## Next step
 
-## Next steps
-
-* [DALL-E quickstart](/azure/ai-services/openai/dall-e-quickstart)
\ No newline at end of file
+> [!div class="nextstepaction"]
+> [DALL-E quickstart](/azure/ai-services/openai/dall-e-quickstart)

Summary

{
    "modification_type": "minor update",
    "modification_title": "プロンプト変換に関する文書の更新"
}

Explanation

This change is a small edit to the prompt transformation document. The date moves from July 16, 2024 to February 20, 2025, and the definition of prompt transformation was slightly reworded to clarify its role within the DALL-E 3 image generation process.

Specifically, prompt transformation is now described as a process included in DALL-E 3 image generation that applies a safety and quality system message to your original prompt via a large language model (LLM) call before the prompt is sent to the model. The system message enriches the original prompt so that more diverse, higher-quality images are generated while the intent is preserved. The document also reiterates that content filtering is applied as a secondary step.

In addition, stray blank lines around the example table were removed, and the "Next steps" section became a single "Next step" action button linking to the DALL-E quickstart. These changes help readers understand what prompt transformation does and how to make use of it.
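
Since the transformed prompt surfaces in the `revised_prompt` field of the API response, a minimal sketch of inspecting it might look like the following; it assumes the `openai` Python SDK and a DALL-E 3 deployment, with placeholder endpoint, key, and API version.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- replace with your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

result = client.images.generate(
    model="dall-e-3",  # your DALL-E 3 deployment name
    prompt="Watercolor painting of the Seattle skyline",
    n=1,
)

# The transformed prompt actually sent to the image model is returned
# in the `revised_prompt` field of each generated image.
print(result.data[0].revised_prompt)
print(result.data[0].url)
```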

articles/ai-services/openai/concepts/safety-system-message-templates.md

Diff
@@ -4,7 +4,7 @@ titleSuffix: Azure OpenAI Service
 description: This article contains recommended safety system messages for your generative AI systems, to help reduce the propensity of harm in various concern areas.
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 09/20/2024
+ms.date: 02/20/2025
 ms.custom:
 manager: nitinme
 author: PatrickFarley
@@ -14,14 +14,14 @@ ms.author: pafarley
 
 # Safety system message templates
 
-
 This article contains recommended safety system messages for your generative AI systems, to help reduce the propensity of harm in various concern areas. Before you begin evaluating and integrating your safety system messages, visit the [Safety System Message documentation](/azure/ai-services/openai/concepts/system-message) to get started.  
 
-Note that using a safety system message is one of many techniques that can be used for mitigations risks in AI systems, and different from [Azure AI Content Safety](/azure/ai-services/content-safety/overview). 
+> [!NOTE]
+> Using a safety system message is one of many techniques that can be used for mitigations risks in AI systems and is different from [Azure AI Content Safety](/azure/ai-services/content-safety/overview). 
 
 ## Recommended system messages 
 
-Below are examples of recommended system message components you can include to potentially mitigate various harms in your system. 
+Below are examples of recommended system message components you can include to potentially mitigate various harms in your AI system. 
 
 | Category | Component | When this concern area may apply |
 | --- | --- | --- |
@@ -32,7 +32,7 @@ Below are examples of recommended system message components you can include to p
 
 ## Add safety system messages in Azure AI Foundry portal 
 
-The following steps show how to leverage safety system messages in Azure AI Foundry portal.  
+The following steps show how to leverage safety system messages in [Azure AI Foundry portal](https://ai.azure.com/).
 
 1. Go to Azure AI Foundry and navigate to Azure OpenAI and the Chat playground.
     :::image type="content" source="../media/navigate-chat-playground.PNG" alt-text="Screenshot of the Azure AI Foundry portal selection.":::
@@ -47,8 +47,8 @@ The following steps show how to leverage safety system messages in Azure AI Foun
 
 
 > [!NOTE]
-> If you are using a safety system message that is not integrated into the studio by default, simply copy the appropriate component and paste it in the safety system message section, or the system message section. Repeat steps 4 and 5 for optimal performance and safety. 
+> If you're using a safety system message that is not integrated into the studio by default, simply copy the appropriate component and paste it in the safety system message section, or the system message section. Repeat steps 4 and 5 for optimal performance and safety. 
 
 ## Evaluation 
 
-We recommend informing your safety system message approach based on an iterative process of identification and evaluation. Learn more in our [Safety System Message documentation](/azure/ai-services/openai/concepts/system-message). 
\ No newline at end of file
+We recommend adjusting your safety system message approach based on an iterative process of identification and evaluation. Learn more in our [Safety System Message documentation](/azure/ai-services/openai/concepts/system-message). 
\ No newline at end of file

Summary

{
    "modification_type": "minor update",
    "modification_title": "安全システムメッセージテンプレートに関する文書の更新"
}

Explanation

This change is a small edit to the document on safety system message templates for generative AI systems. The date moves from September 20, 2024 to February 20, 2025, and the wording now ties the recommendations more explicitly to "AI systems."

The statement that a safety system message is one of many techniques for mitigating risk, and is different from Azure AI Content Safety, is now formatted as a highlighted note block, making it harder to miss.

The recommended system message examples now refer to harms "in your AI system," and the Azure AI Foundry portal steps gained a direct link to the portal. Minor wording fixes (such as "If you're using") keep the instructions consistent and convenient.

Finally, the evaluation section now recommends adjusting your safety system message approach through an iterative process of identification and evaluation, which reads more naturally. Together these edits help readers understand and apply safety system messages effectively.

articles/ai-services/openai/dall-e-quickstart.md

Diff
@@ -9,7 +9,7 @@ ms.custom: devx-track-python, devx-track-dotnet, devx-track-extended-java, devx-
 ms.topic: quickstart
 author: PatrickFarley
 ms.author: pafarley
-ms.date: 09/06/2024
+ms.date: 02/20/2025
 zone_pivot_groups: openai-quickstart-dall-e
 ---
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "DALL-E クイックスタートの日付更新"
}

Explanation

This change is a small edit that updates the date on the DALL-E quickstart guide from September 6, 2024 to February 20, 2025. The refresh keeps the content current and maintains readers' trust in the information.

There are no other substantive changes; the edit focuses on the date. Keeping dates current is good documentation practice and helps readers act on up-to-date information.

articles/ai-services/openai/gpt-v-quickstart.md

Diff
@@ -9,7 +9,7 @@ ms.custom: devx-track-python, devx-track-js, devx-track-ts
 ms.topic: quickstart
 author: PatrickFarley
 ms.author: pafarley
-ms.date: 10/03/2024
+ms.date: 02/20/2025
 zone_pivot_groups: openai-quickstart-gpt-v
 ---
 
@@ -64,7 +64,7 @@ Get started using GPT-4 Turbo with images with the Azure OpenAI Service.
 
 ## Next steps
 
-* [Get started with multimodal vision chat apps using Azure OpenAI](/azure/developer/ai/get-started-app-chat-vision?tabs=github-codespaces) AI App template
+* [Get started with multimodal vision chat apps using Azure OpenAI AI App template](/azure/developer/ai/get-started-app-chat-vision?tabs=github-codespaces)
 * Learn more about these APIs in the [Vision-enabled models how-to guide](./gpt-v-quickstart.md)
 * [GPT-4 Turbo with Vision frequently asked questions](./faq.yml#gpt-4-turbo-with-vision)
 * [GPT-4 Turbo with Vision API reference](https://aka.ms/gpt-v-api-ref)

Summary

{
    "modification_type": "minor update",
    "modification_title": "GPT-V クイックスタートの更新"
}

Explanation

This change is a small edit to the GPT-V quickstart guide. The main update is the date, which moves from October 3, 2024 to February 20, 2025, so readers can rely on the information being current.

In addition, one list item was reworded so that the link text reads "Azure OpenAI AI App template" in full, clarifying what the link points to. No other sections changed, and the document's purpose and structure remain the same, keeping the getting-started guidance easy to follow.

articles/ai-services/openai/how-to/dall-e.md

Diff
@@ -1,20 +1,20 @@
 ---
-title: How to work with DALL-E models 
+title: How to use DALL-E models 
 titleSuffix: Azure OpenAI Service
-description: Learn about the options for how to use the DALL-E image generation models.
+description: Learn how to generate images with the DALL-E models, and learn about the configuration options that are available.
 author: PatrickFarley
 ms.author: pafarley 
 ms.service: azure-ai-openai
 ms.custom: 
 ms.topic: how-to
-ms.date: 10/02/2024
+ms.date: 02/20/2025
 manager: nitinme
 keywords: 
 zone_pivot_groups: 
 # Customer intent: as an engineer or hobbyist, I want to know how to use DALL-E image generation models to their full capability.
 ---
 
-# How to work with the DALL-E models
+# How to use the DALL-E models
 
 OpenAI's DALL-E models generate images based on user-provided text prompts. This guide demonstrates how to use the DALL-E models and configure their options through REST API calls.
 
@@ -104,7 +104,7 @@ It's also possible that the generated image itself is filtered. In this case, th
 }
 ```
 
-## Writing image prompts
+## Write image prompts
 
 Your image prompts should describe the content you want to see in the image, and the visual style of image.
 
@@ -124,7 +124,7 @@ Specify the size of the generated images. Must be one of `1024x1024`, `1792x1024
 
 ### Style
 
-DALL-E 3 introduces two style options: `natural` and `vivid`. The natural style is more similar to the DALL-E 2 default style, while the vivid style generates more hyper-real and cinematic images.
+DALL-E 3 offers two style options: `natural` and `vivid`. The natural style is more similar to the default style of older models, while the vivid style generates more hyper-real and cinematic images.
 
 The natural style is useful in cases where DALL-E 3 over-exaggerates or confuses a subject that's meant to be more simple, subdued, or realistic.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "DALL-E モデルの使用方法に関する文書の更新"
}

Explanation

This change is a small edit to the guide on using the DALL-E models. The main updates are the title, which changed from "How to work with DALL-E models" to "How to use DALL-E models," and the description, which now reads more concretely: "Learn how to generate images with the DALL-E models, and learn about the configuration options that are available." The document date also moves from October 2, 2024 to February 20, 2025.

Section headings and phrasing were tightened as well. "Writing image prompts" became "Write image prompts," and the description of DALL-E 3's style options was adjusted so the natural style is compared with the default style of older models rather than DALL-E 2 specifically, making the comparison easier to follow.

These changes aim to deepen understanding of DALL-E and help users take full advantage of its capabilities.
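
To make the configuration options concrete, here is a minimal sketch of setting the size and style options the guide describes; it assumes the `openai` Python SDK and a DALL-E 3 deployment, with placeholder endpoint, key, and API version.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- replace with your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

# `size` must be one of the supported dimensions (e.g. 1024x1024, 1792x1024),
# and `style` chooses between the subdued "natural" look and the more
# hyper-real, cinematic "vivid" look described in the guide.
result = client.images.generate(
    model="dall-e-3",  # your deployment name
    prompt="A lighthouse on a rocky shore at dusk",
    size="1792x1024",
    style="natural",  # or "vivid"
    n=1,
)
print(result.data[0].url)
```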

articles/ai-services/openai/how-to/gpt-with-vision.md

Diff
@@ -1,35 +1,36 @@
 ---
 title: How to use vision-enabled chat models
 titleSuffix: Azure OpenAI Service
-description: Learn about the options for using vision-enabled chat models
+description: Learn how to use vision-enabled chat models in Azure OpenAI Service, including how to call the Chat Completion API and process images.
 author: PatrickFarley #dereklegenzoff
 ms.author: pafarley #delegenz
+#customer intent: As a developer, I want to learn how to use vision-enabled chat models so that I can integrate image processing capabilities into my applications.
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 08/21/2024
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
 # Use vision-enabled chat models
 
 
-Vision-enabled chat models are large multimodal models (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are [o1](./reasoning.md), GPT-4o, and GPT-4o-mini, GPT-4 Turbo with Vision.
+Vision-enabled chat models are large multimodal models (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are [o1](./reasoning.md), GPT-4o, GPT-4o-mini, and GPT-4 Turbo with Vision.
 
-The vision-enabled models answer general questions about what's present in the images you upload.
+The vision-enabled models can answer general questions about what's present in the images you upload.
 
 > [!TIP]
 > To use vision-enabled models, you call the Chat Completion API on a supported model that you have deployed. If you're not familiar with the Chat Completion API, see the [Vision-enabled chat how-to guide](/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions).
 
 ## Call the Chat Completion APIs
 
-The following command shows the most basic way to use the GPT-4 Turbo with Vision model with code. If this is your first time using these models programmatically, we recommend starting with our [GPT-4 Turbo with Vision quickstart](../gpt-v-quickstart.md). 
+The following command shows the most basic way to use a vision-enabled chat model with code. If this is your first time using these models programmatically, we recommend starting with our [Chat with images quickstart](../gpt-v-quickstart.md). 
 
 #### [REST](#tab/rest)
 
 Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2024-02-15-preview` where 
 
 - RESOURCE_NAME is the name of your Azure OpenAI resource 
-- DEPLOYMENT_NAME is the name of your GPT-4 Turbo with Vision model deployment 
+- DEPLOYMENT_NAME is the name of your model deployment 
 
 **Required headers**: 
 - `Content-Type`: application/json 
@@ -75,7 +76,7 @@ The following is a sample request body. The format is the same as the chat compl
 #### [Python](#tab/python)
 
 1. Define your Azure OpenAI resource endpoint and key. 
-1. Enter the name of your GPT-4 Turbo with Vision model deployment.
+1. Enter the name of your model deployment.
 1. Create a client object using those values.
 
     ```python

Summary

{
    "modification_type": "minor update",
    "modification_title": "ビジョン対応チャットモデルの使用法更新"
}

Explanation

This change updates the how-to on using vision-enabled chat models in Azure OpenAI Service. The description was revised to state concretely that the article covers how to call the Chat Completion API and process images, and the date moves from August 21, 2024 to February 20, 2025.

References to "GPT-4 Turbo with Vision" were generalized to "vision-enabled chat model," emphasizing that the instructions apply across the current vision-enabled models (o1, GPT-4o, GPT-4o-mini, and GPT-4 Turbo with Vision). The pointer to the Chat Completion API remains prominent, underlining its importance.

The code examples and steps were simplified accordingly. In particular, "the name of your GPT-4 Turbo with Vision model deployment" became the generic "the name of your model deployment," improving applicability across models.

These changes should make the process of using vision-enabled chat models easier to understand and integrate directly.
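
As a rough illustration of the generalized instructions, a minimal sketch of calling the Chat Completion API with an image might look like this; it assumes the `openai` Python SDK, and the endpoint, key, deployment name, and image URL are placeholders.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- replace with your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-15-preview",  # version shown in the article's REST example
)

# A single user message can mix text and image parts; the model then answers
# questions about what the image contains.
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # any vision-enabled deployment
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```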

articles/ai-services/openai/how-to/risks-safety-monitor.md

Diff
@@ -6,7 +6,7 @@ author: PatrickFarley
 ms.author: pafarley 
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 12/05/2024
+ms.date: 02/20/2025
 manager: nitinme
 ---
 
@@ -50,13 +50,13 @@ To use Potentially abusive user detection, you need:
 - A content filter configuration applied to your deployment.
 - You must be sending user ID information in your Chat Completion requests (see the _user_ parameter of the [Completions API](/azure/ai-services/openai/reference#completions), for example).
     > [!CAUTION]
-    > Use GUID strings to identify individual users. Do not include sensitive personal information in the "user" field.
+    > Use GUID strings to identify individual users. Do not include sensitive personal information in the _user_ field.
 - An Azure Data Explorer database set up to store the user analysis results (instructions below).
 
 ### Set up your Azure Data Explorer database
 
 In order to protect the data privacy of user information and manage the permission of the data, we support the option for our customers to bring their own storage to get the detailed potentially abusive user detection insights (including user GUID and statistics on harmful request by category) stored in a compliant way and with full control. Follow these steps to enable it:
-1. In Azure AI Foundry, navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**. 
+1. In [Azure AI Foundry](https://ai.azure.com/), navigate to the model deployment that you'd like to set up user abuse analysis with, and select **Add a data store**. 
 1. Fill in the required information and select **Save**. We recommend you create a new database to store the analysis results.
 1. After you connect the data store, take the following steps to grant permission to write analysis results to the connected database:
     1. Go to your Azure OpenAI resource's page in the Azure portal, and choose the **Identity** tab.
@@ -89,8 +89,9 @@ The potentially abusive user detection relies on the user information that custo
 
 Combine this data with enriched signals to validate whether the detected users are truly abusive or not. If they are, then take responsive action such as throttling or suspending the user to ensure the responsible use of your application.
 
-## Next steps
+## Next step
 
 Next, create or edit a content filter configuration in Azure AI Foundry.
 
-- [Configure content filters with Azure OpenAI Service](/azure/ai-services/openai/how-to/content-filters)
+> [!div class="nextstepaction"]
+> [Configure content filters with Azure OpenAI Service](/azure/ai-services/openai/how-to/content-filters)

Summary

{
    "modification_type": "minor update",
    "modification_title": "リスクと安全性モニタリングの記事の更新"
}

Explanation

This change is a small revision to the article on risks and safety monitoring. The main update is the date, which moves from December 5, 2024 to February 20, 2025, along with some wording adjustments for clarity.

In the caution about user privacy, the field name is now written as _user_, matching the earlier reference to the Completions API parameter; the guidance to identify users with GUID strings and to keep sensitive personal information out of that field is unchanged and remains emphasized.

In the Azure Data Explorer setup steps, "Azure AI Foundry" is now a link to the portal, giving readers direct access to the information they need.

Finally, the "Next steps" heading became the singular "Next step" with a next-step action button, presenting the follow-up action more consistently and making the next concrete step clear.

Overall, these changes improve the article's readability and usability, helping users understand how to set up risks and safety monitoring.
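
Because potentially abusive user detection depends on the _user_ field, here is a minimal sketch of sending a per-user GUID with a chat completion; it assumes the `openai` Python SDK, and all names and the example GUID are placeholders.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- replace with your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

# Use a stable, opaque GUID per end user (created once and stored with the
# user's account) -- never an email address or other personal information.
end_user_guid = "8f3d1a2e-9b4c-4e5f-8a6d-7c2b1e0f9a3d"  # hypothetical example

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",
    messages=[{"role": "user", "content": "Hello!"}],
    user=end_user_guid,  # the field that abuse analysis aggregates on
)
print(response.choices[0].message.content)
```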

articles/ai-services/openai/how-to/use-blocklists.md

Diff
@@ -6,14 +6,14 @@ description: Learn how to use blocklists with Azure OpenAI Service
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 12/05/2024
+ms.date: 02/20/2025
 author: PatrickFarley
 ms.author: pafarley
 ---
 
 # Use a blocklist with Azure OpenAI
 
-The configurable content filters are sufficient for most content moderation needs. However, you might need to filter terms specific to your use case. 
+The [configurable content filters](/azure/ai-services/openai/how-to/content-filters) available in Azure OpenAI are sufficient for most content moderation needs. However, you might need to filter terms specific to your use case. For this, you can use custom blocklists.
 
 ## Prerequisites
 
@@ -63,7 +63,7 @@ The response code should be `201` (created a new list) or `200` (updated an exis
 
 ### Apply a blocklist to a content filter
 
-If you haven't yet created a content filter, you can do so in Azure AI Foundry. See [Content filtering](/azure/ai-services/openai/how-to/content-filters#create-a-content-filter-in-azure-ai-foundry).
+If you haven't yet created a content filter, you can do so in [Azure AI Foundry](https://ai.azure.com/). See [Content filtering](/azure/ai-services/openai/how-to/content-filters#create-a-content-filter-in-azure-ai-foundry).
 
 To apply a **completion** blocklist to a content filter, use the following cURL command: 
 
@@ -178,7 +178,7 @@ In the below example, a GPT-35-Turbo deployment with a blocklist is blocking the
 } 
 ```
 
-If the completion itself is blocked, the response returns `200`, as the completion only cuts off when the blocklist content is matched. The annotations show that a blocklist was matched. 
+If the completion itself is blocked, the response returns `200`, as the completion only cuts off when the blocklist content is matched. The annotations show that a blocklist item was matched. 
 
 ```json
 { 
@@ -257,7 +257,7 @@ If the completion itself is blocked, the response returns `200`, as the completi
 
 ---
 
-## Next steps
+## Related content
 
 - Learn more about Responsible AI practices for Azure OpenAI: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext). 
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ブロックリストの使用法に関する記事の更新"
}

Explanation

This change is a small revision to the article on using blocklists with Azure OpenAI Service. Its main update is the document date, which moves from December 5, 2024 to February 20, 2025.

The introduction was improved: "configurable content filters" now links to the content filtering documentation, and the text states explicitly that custom blocklists are the mechanism for filtering terms specific to your use case, making the needed information easier to reach.

A link was also added to "Azure AI Foundry," so readers can jump straight to the portal when creating a content filter.

The description of matching behavior was slightly corrected: "a blocklist was matched" became "a blocklist item was matched," which is more accurate and precise.

Finally, the "Next steps" section was renamed "Related content," making it easier to find further reading such as the overview of Responsible AI practices for Azure OpenAI.

Overall, these revisions improve the article's readability and help users understand how to use blocklists with the Azure OpenAI Service.
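
The article notes that when a completion is cut off by a blocklist match, the response still returns `200` and the annotations record the match. A rough sketch of inspecting those annotations follows; it assumes the `openai` Python SDK, and the annotation field names (`content_filter_results`, `custom_blocklists`) follow Azure's content filter annotation format and should be verified against an actual response.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- replace with your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",
    messages=[{"role": "user", "content": "Tell me a story."}],
)

choice = response.choices[0].model_dump()
# Azure-specific content filter annotations ride alongside the standard
# fields; `custom_blocklists` reports whether a blocklist item matched.
annotations = choice.get("content_filter_results", {})
blocklist_result = annotations.get("custom_blocklists", {})
if blocklist_result.get("filtered"):
    print("Completion was cut off by a blocklist match:", blocklist_result)
else:
    print(choice["message"]["content"])
```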

articles/ai-services/openai/includes/api-versions/latest-inference-preview.md

Diff
@@ -44,9 +44,6 @@ Creates a completion for the provided prompt, parameters and chosen model.
 | logprobs | integer | Include the log probabilities on the `logprobs` most likely output tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.<br><br>The maximum value for `logprobs` is 5.<br> | No | None |
 | max_tokens | integer | The maximum number of tokens that can be generated in the completion.<br><br>The token count of your prompt plus `max_tokens` can't exceed the model's context length.  | No | 16 |
 | n | integer | How many completions to generate for each prompt.<br><br>**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.<br> | No | 1 |
-| modalities | [ChatCompletionModalities](#chatcompletionmodalities) | Output types that you would like the model to generate for this request.<br>Most models are capable of generating text, which is the default:<br><br>`["text"]`<br><br>The `gpt-4o-audio-preview` model can also be used to generate audio. To<br>request that this model generate both text and audio responses, you can<br>use:<br><br>`["text", "audio"]`<br> | No |  |
-| prediction | [PredictionContent](#predictioncontent) | Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content. | No |  |
-| audio | object | Parameters for audio output. Required when audio output is requested with<br>`modalities: ["audio"]`.  | No |  |
 | presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.<br> | No | 0 |
 | seed | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.<br><br>Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.<br> | No |  |
 | stop | string or array | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence.<br> | No |  |
@@ -57,20 +54,6 @@ Creates a completion for the provided prompt, parameters and chosen model.
 | user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse.<br> | No |  |
 
 
-### Properties for audio
-
-#### voice
-
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| voice | string | Specifies the voice type. Supported voices are `alloy`, `echo`, <br>`fable`, `onyx`, `nova`, and `shimmer`.<br> |  |
-
-#### format
-
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| format | string | Specifies the output audio format. Must be one of `wav`, `mp3`, `flac`,<br>`opus`, or `pcm16`. <br> |  |
-
 ### Responses
 
 **Status Code:** 200
@@ -334,6 +317,23 @@ Creates a completion for the chat message
 | function_call | string or [chatCompletionFunctionCallOption](#chatcompletionfunctioncalloption) | Deprecated in favor of `tool_choice`.<br><br>Controls which (if any) function is called by the model.<br>`none` means the model won't call a function and instead generates a message.<br>`auto` means the model can pick between generating a message or calling a function.<br>Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present.<br> | No |  |
 | functions | array | Deprecated in favor of `tools`.<br><br>A list of functions the model may generate JSON inputs for.<br> | No |  |
 | user_security_context | [userSecurityContext](#usersecuritycontext) | User security context contains several parameters that describe the AI application itself, and the end user that interacts with the AI application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. [Learn more](https://aka.ms/TP4AI/Documentation/EndUserContext) about protecting AI applications using Microsoft Defender for Cloud. | No |  |
+| modalities | [ChatCompletionModalities](#chatcompletionmodalities) | Output types that you would like the model to generate for this request.<br>Most models are capable of generating text, which is the default:<br><br>`["text"]`<br><br>The `gpt-4o-audio-preview` model can also be used to generate audio. To<br>request that this model generate both text and audio responses, you can<br>use:<br><br>`["text", "audio"]`<br> | No |  |
+| prediction | [PredictionContent](#predictioncontent) | Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content. | No |  |
+| audio | object | Parameters for audio output. Required when audio output is requested with<br>`modalities: ["audio"]`.  | No |  |
+
+### Properties for audio
+
+#### voice
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| voice | string | Specifies the voice type. Supported voices are `alloy`, `echo`, <br>`fable`, `onyx`, `nova`, and `shimmer`.<br> |  |
+
+#### format
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| format | string | Specifies the output audio format. Must be one of `wav`, `mp3`, `flac`,<br>`opus`, or `pcm16`. <br> |  |
 
 ### Responses
 
@@ -4597,9 +4597,6 @@ Information about the content filtering category (hate, sexual, violence, self_h
 | logprobs | integer | Include the log probabilities on the `logprobs` most likely output tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.<br><br>The maximum value for `logprobs` is 5.<br> | No | None |
 | max_tokens | integer | The maximum number of tokens that can be generated in the completion.<br><br>The token count of your prompt plus `max_tokens` can't exceed the model's context length.  | No | 16 |
 | n | integer | How many completions to generate for each prompt.<br><br>**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.<br> | No | 1 |
-| modalities | [ChatCompletionModalities](#chatcompletionmodalities) | Output types that you would like the model to generate for this request.<br>Most models are capable of generating text, which is the default:<br><br>`["text"]`<br><br>The `gpt-4o-audio-preview` model can also be used to generate audio. To<br>request that this model generate both text and audio responses, you can<br>use:<br><br>`["text", "audio"]`<br> | No |  |
-| prediction | [PredictionContent](#predictioncontent) | Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content. | No |  |
-| audio | object | Parameters for audio output. Required when audio output is requested with<br>`modalities: ["audio"]`.  | No |  |
 | presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.<br> | No | 0 |
 | seed | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.<br><br>Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.<br> | No |  |
 | stop | string or array | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence.<br> | No |  |
@@ -4610,21 +4607,6 @@ Information about the content filtering category (hate, sexual, violence, self_h
 | user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse.<br> | No |  |
 
 
-### Properties for audio
-
-#### voice
-
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| voice | string | Specifies the voice type. Supported voices are `alloy`, `echo`, <br>`fable`, `onyx`, `nova`, and `shimmer`.<br> |  |
-
-#### format
-
-| Name | Type | Description | Default |
-|------|------|-------------|--------|
-| format | string | Specifies the output audio format. Must be one of `wav`, `mp3`, `flac`,<br>`opus`, or `pcm16`. <br> |  |
-
-
 ### createCompletionResponse
 
 Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
@@ -4674,8 +4656,24 @@ Represents a completion response from the API. Note: both the streamed and non-s
 | tool_choice | [chatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) tool is called by the model. `none` means the model won't call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present. `auto` is the default if tools are present. | No |  |
 | function_call | string or [chatCompletionFunctionCallOption](#chatcompletionfunctioncalloption) | Deprecated in favor of `tool_choice`.<br><br>Controls which (if any) function is called by the model.<br>`none` means the model won't call a function and instead generates a message.<br>`auto` means the model can pick between generating a message or calling a function.<br>Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present.<br> | No |  |
 | functions | array | Deprecated in favor of `tools`.<br><br>A list of functions the model may generate JSON inputs for.<br> | No |  |
-| user_security_context | [userSecurityContext](#usersecuritycontext) | User security context contains several parameters that describe the AI application itself, and the end user that interacts with the AI application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. [Learn more](https://aka.ms/TP4AI/Documentation/EndUserContext) about protecting AI applications using Microsoft Defender for Cloud. | No |  |
+| user_security_context | [userSecurityContext](#usersecuritycontext) | User security context contains several parameters that describe the AI application itself, and the end user that interacts with the AI application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. [Learn more](https://aka.ms/TP4AI/Documentation/EndUserContext) about protecting AI applications using Microsoft Defender for Cloud. | No |  |
+| modalities | [ChatCompletionModalities](#chatcompletionmodalities) | Output types that you would like the model to generate for this request.<br>Most models are capable of generating text, which is the default:<br><br>`["text"]`<br><br>The `gpt-4o-audio-preview` model can also be used to [generate audio](/docs/guides/audio). To<br>request that this model generate both text and audio responses, you can<br>use:<br><br>`["text", "audio"]`<br> | No |  |
+| prediction | [PredictionContent](#predictioncontent) | Configuration for a [Predicted Output](/docs/guides/predicted-outputs), which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content. | No |  |
+| audio | object | Parameters for audio output. Required when audio output is requested with<br>`modalities: ["audio"]`. [Learn more](/docs/guides/audio).<br> | No |  |
+
+### Properties for audio
+
+#### voice
+
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| voice | string | Specifies the voice type. Supported voices are `alloy`, `echo`, <br>`fable`, `onyx`, `nova`, and `shimmer`.<br> |  |
+
+#### format
 
+| Name | Type | Description | Default |
+|------|------|-------------|--------|
+| format | string | Specifies the output audio format. Must be one of `wav`, `mp3`, `flac`,<br>`opus`, or `pcm16`. <br> |  |
 
 ### userSecurityContext
 

Summary

{
    "modification_type": "breaking change",
    "modification_title": "最新の推論プレビューAPIの大幅な更新"
}

Explanation

This change is a substantial update to the documentation for the latest inference preview API of the Azure OpenAI Service. The revision adds 34 lines, deletes 36, and amounts to 70 changed lines in total.

One of the main changes is a cleanup of the API parameter descriptions. In particular, the `modalities`, `prediction`, and `audio` entries are newly added to the chat completions request, with detailed information about the audio output parameters, so users can easily understand how to configure a specific voice type and output audio format, as sketched below.
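
To make the reorganized parameters concrete, here is a minimal sketch (not part of the source document) of a chat completions call that requests both text and audio output via the `openai` Python package; the deployment name, `api_version`, and environment variable names are assumptions.

```python
import base64
import os

from openai import AzureOpenAI

# Endpoint/key variable names and the api_version are assumptions.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-01-01-preview",
)

response = client.chat.completions.create(
    model="gpt-4o-audio-preview",               # deployment name is hypothetical
    modalities=["text", "audio"],               # request text plus audio output
    audio={"voice": "alloy", "format": "wav"},  # values from the new property tables
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

# When audio is requested, the payload arrives base64-encoded on the message.
wav_bytes = base64.b64decode(response.choices[0].message.audio.data)
with open("hello.wav", "wb") as f:
    f.write(wav_bytes)
```

The `voice` and `format` values correspond directly to the `Properties for audio` tables that the diff reintroduces.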

The previously listed audio-related properties were removed and re-added in a reorganized form so that the information remains consistent. Specifically, the audio properties comprise `voice` and `format`, each with a clarified description.

In addition, parameters that no longer belonged under the legacy completions request (`modalities`, `prediction`, and `audio`) were removed from that section and consolidated under the chat completions request, which keeps the document concise and makes the relevant information easier to find. The safety and user-security context around this topic is also emphasized, giving users richer guidance on securing their AI applications.

Overall, this documentation update is based on the latest API specification and aims to make the content clearer and easier to use.

articles/ai-services/openai/includes/dall-e-rest.md

Diff
@@ -8,7 +8,7 @@ ms.service: azure-ai-openai
 ms.custom:
   - ignite-2023
 ms.topic: include
-ms.date: 08/08/2023
+ms.date: 02/20/2025
 ---
 
 Use this guide to get started calling the Azure OpenAI Service image generation REST APIs by using Python.

Summary

{
    "modification_type": "minor update",
    "modification_title": "DALL·E REST APIガイドの日付更新"
}

Explanation

This change updates the date of the guide for the Azure OpenAI Service DALL·E image generation REST API. Specifically, the original date of August 8, 2023 was changed to February 20, 2025. The purpose of the fix is to keep the document current so that users can rely on up-to-date information.

The change itself is not especially significant, but the refreshed date helps readers see that the content reflects the current state of the service. Overall, this is a small fix that maintains the document's consistency.

articles/ai-services/openai/includes/fine-tuning-openai-in-ai-studio.md

Diff
@@ -64,14 +64,21 @@ Your training data and validation data sets consist of input and output examples
 
 The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
 
-If you would like a step-by-step walk-through of fine-tuning a `gpt-35-turbo-0613` model please refer to the [Azure OpenAI fine-tuning tutorial.](../tutorials/fine-tune.md)
+If you would like a step-by-step walk-through of fine-tuning a `gpt-4o-mini-2024-07-18` model please refer to the [Azure OpenAI fine-tuning tutorial.](../tutorials/fine-tune.md)
 
 ### Example file format
 
 ```json
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Is Xbox better than PlayStation?"}, {"role": "assistant", "content": "I apologize, but I cannot provide personal opinions. My primary job is to assist you with any issues related to your Xbox device. Do you have any Xbox-related issues that need addressing?"}]}
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Can I play PC games on my Xbox?"}, {"role": "assistant", "content": "While Xbox consoles don't natively support PC games, they can play an array of Xbox games, many of which are also available on PC. Can I assist you with finding games compatible with your Xbox device?"}]}
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "I'm having trouble connecting my Xbox to the Wi-Fi."}, {"role": "assistant", "content": "No worries, let's go through the network settings on your Xbox. Can you please tell me what happens when you try to connect it to the Wi-Fi?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who discovered Antarctica?"}, {"role": "assistant", "content": "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the biggest ocean?"}, {"role": "assistant", "content": "The Pacific Ocean. It's not like it's a small pond or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest planet?"}, {"role": "assistant", "content": "It's called Jupiter, you might have heard of it...or not."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who painted the Mona Lisa?"}, {"role": "assistant", "content": "Oh, some obscure artist named Leonardo da Vinci. Not that he is famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the boiling point of water?"}, {"role": "assistant", "content": "Exciting fact of the day: it's 100 degrees Celsius. Life-changing, isn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Odyssey'?"}, {"role": "assistant", "content": "Oh, just some poet named Homer. Nothing special, really."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest mammal?"}, {"role": "assistant", "content": "Blue Whale. Quite a big fish isn't it? Oh wait, it's not a fish!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the highest mountain peak?"}, {"role": "assistant", "content": "Everest. Brace yourself, it's not a walk in the park."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who invented the telephone?"}, {"role": "assistant", "content": "A guy named Alexander Graham Bell. Bet you didn't guess that one!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the distance from Earth to the Sun?"}, {"role": "assistant", "content": "About 93 million miles. Just a quick drive, really."}]}
 ```
 
 ### Multi-turn chat file format

Summary

{
    "modification_type": "minor update",
    "modification_title": "ファインチューニングモデルの更新と例の変更"
}

Explanation

This change is a minor update to the Azure OpenAI fine-tuning documentation. The main change is that the model named in the walkthrough was updated from gpt-35-turbo-0613 to gpt-4o-mini-2024-07-18, so the fine-tuning instructions now reflect the latest model.

Several examples in the document were also updated, introducing a new registration for "Clippy", a sarcastic chatbot. The concrete message contents give users a more practical picture of the data used for fine-tuning, and the dialogue examples are more varied and engaging than before; a validation sketch follows below.
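
To make the expected JSONL shape concrete, here is a small validation sketch (not from the original document) that checks a training file of this form before use; the file name `training.jsonl` is a placeholder.

```python
import json

PATH = "training.jsonl"  # placeholder file name

with open(PATH, encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        record = json.loads(line)  # every line must be a standalone JSON object
        messages = record["messages"]
        # Each conversation opens with a system prompt (the Clippy persona above)
        # and every message carries a role and content.
        assert messages[0]["role"] == "system", f"line {lineno}: no system prompt"
        assert all({"role", "content"} <= m.keys() for m in messages), f"line {lineno}"

print("Training file matches the conversational JSONL format.")
```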

With this update, users can learn how to fine-tune using the new model and more easily understand the conversational context of real training dialogues. Overall, the change is intended to improve the learning experience.

articles/ai-services/openai/includes/fine-tuning-python.md

Diff
@@ -65,9 +65,16 @@ If you would like a step-by-step walk-through of fine-tuning a `gpt-35-turbo-061
 ### Example file format
 
 ```json
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Is Xbox better than PlayStation?"}, {"role": "assistant", "content": "I apologize, but I cannot provide personal opinions. My primary job is to assist you with any issues related to your Xbox device. Do you have any Xbox-related issues that need addressing?"}]}
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Can I play PC games on my Xbox?"}, {"role": "assistant", "content": "While Xbox consoles don't natively support PC games, they can play an array of Xbox games, many of which are also available on PC. Can I assist you with finding games compatible with your Xbox device?"}]}
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "I'm having trouble connecting my Xbox to the Wi-Fi."}, {"role": "assistant", "content": "No worries, let's go through the network settings on your Xbox. Can you please tell me what happens when you try to connect it to the Wi-Fi?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who discovered Antarctica?"}, {"role": "assistant", "content": "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the biggest ocean?"}, {"role": "assistant", "content": "The Pacific Ocean. It's not like it's a small pond or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest planet?"}, {"role": "assistant", "content": "It's called Jupiter, you might have heard of it...or not."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who painted the Mona Lisa?"}, {"role": "assistant", "content": "Oh, some obscure artist named Leonardo da Vinci. Not that he is famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the boiling point of water?"}, {"role": "assistant", "content": "Exciting fact of the day: it's 100 degrees Celsius. Life-changing, isn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Odyssey'?"}, {"role": "assistant", "content": "Oh, just some poet named Homer. Nothing special, really."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest mammal?"}, {"role": "assistant", "content": "Blue Whale. Quite a big fish isn't it? Oh wait, it's not a fish!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the highest mountain peak?"}, {"role": "assistant", "content": "Everest. Brace yourself, it's not a walk in the park."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who invented the telephone?"}, {"role": "assistant", "content": "A guy named Alexander Graham Bell. Bet you didn't guess that one!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the distance from Earth to the Sun?"}, {"role": "assistant", "content": "About 93 million miles. Just a quick drive, really."}]}
 ```
 
 ### Multi-turn chat file format

Summary

{
    "modification_type": "minor update",
    "modification_title": "ファインチューニングの例とクイックリファレンスの更新"
}

Explanation

In this change, the Python documentation for Azure OpenAI fine-tuning received several notable updates. The main one is that the example training data now features "Clippy", a sarcastic chatbot, in place of the previous examples paired with gpt-35-turbo-0613; the added reference makes this new style of dialogue easier to follow.

Furthermore, the former Xbox customer-support agent message examples were removed, replaced by Clippy's humorous responses to a wider variety of questions. This enriches the concrete format shown for the fine-tuning dataset and supplies a broader range of conversational contexts to draw on; a usage sketch follows below.
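
As a hedged illustration of how such a dataset would be used from Python, the sketch below uploads the file and starts a fine-tuning job with the `openai` package's Azure client; the `api_version` and environment variable names are assumptions.

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumed; use a version your resource supports
)

# Upload the Clippy-style JSONL dataset for fine-tuning.
training_file = client.files.create(
    file=open("training.jsonl", "rb"), purpose="fine-tune"
)

# Start a fine-tuning job against the model named in the updated docs.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```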

The change aims to help users better understand and apply the new model through the fine-tuning process. Overall, it is a noticeable revision that broadens the reader's perspective and strengthens practical knowledge of fine-tuning.

articles/ai-services/openai/includes/fine-tuning-rest.md

Diff
@@ -64,9 +64,16 @@ If you would like a step-by-step walk-through of fine-tuning a `gpt-35-turbo-061
 ### Example file format
 
 ```json
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Is Xbox better than PlayStation?"}, {"role": "assistant", "content": "I apologize, but I cannot provide personal opinions. My primary job is to assist you with any issues related to your Xbox device. Do you have any Xbox-related issues that need addressing?"}]}
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Can I play PC games on my Xbox?"}, {"role": "assistant", "content": "While Xbox consoles don't natively support PC games, they can play an array of Xbox games, many of which are also available on PC. Can I assist you with finding games compatible with your Xbox device?"}]}
-{"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "I'm having trouble connecting my Xbox to the Wi-Fi."}, {"role": "assistant", "content": "No worries, let's go through the network settings on your Xbox. Can you please tell me what happens when you try to connect it to the Wi-Fi?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who discovered Antarctica?"}, {"role": "assistant", "content": "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the biggest ocean?"}, {"role": "assistant", "content": "The Pacific Ocean. It's not like it's a small pond or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest planet?"}, {"role": "assistant", "content": "It's called Jupiter, you might have heard of it...or not."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who painted the Mona Lisa?"}, {"role": "assistant", "content": "Oh, some obscure artist named Leonardo da Vinci. Not that he is famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the boiling point of water?"}, {"role": "assistant", "content": "Exciting fact of the day: it's 100 degrees Celsius. Life-changing, isn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Odyssey'?"}, {"role": "assistant", "content": "Oh, just some poet named Homer. Nothing special, really."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest mammal?"}, {"role": "assistant", "content": "Blue Whale. Quite a big fish isn't it? Oh wait, it's not a fish!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the highest mountain peak?"}, {"role": "assistant", "content": "Everest. Brace yourself, it's not a walk in the park."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who invented the telephone?"}, {"role": "assistant", "content": "A guy named Alexander Graham Bell. Bet you didn't guess that one!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the distance from Earth to the Sun?"}, {"role": "assistant", "content": "About 93 million miles. Just a quick drive, really."}]}
 ```
 
 ### Multi-turn chat file format

Summary

{
    "modification_type": "minor update",
    "modification_title": "ファインチューニングに関するREST APIの更新"
}

Explanation

This change is a minor update to the Azure OpenAI fine-tuning REST API documentation. The main change is that the example data was switched from the Xbox customer-support agent to "Clippy", a chatbot that answers with a sarcastic streak, making this style of dataset easier for users to learn.

Specifically, examples were added in which Clippy answers a variety of questions with humor, allowing users to grasp the structure and content of fine-tuning data more intuitively. The original message examples were removed in favor of this broader message format; a REST sketch follows below.
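
For the REST flavor of the same flow, here is a minimal sketch using Python's `requests` (kept in Python for consistency with the other examples); the resource endpoint, `api-version`, and file name are placeholders.

```python
import os

import requests

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<resource>.openai.azure.com
headers = {"api-key": os.environ["AZURE_OPENAI_API_KEY"]}
api_version = "2024-10-21"  # assumed; use a version your resource supports

# Upload the Clippy-style JSONL training file.
with open("training.jsonl", "rb") as f:
    upload = requests.post(
        f"{endpoint}/openai/files?api-version={api_version}",
        headers=headers,
        files={"file": ("training.jsonl", f)},
        data={"purpose": "fine-tune"},
    )
upload.raise_for_status()
file_id = upload.json()["id"]

# Create the fine-tuning job that references the uploaded file.
job = requests.post(
    f"{endpoint}/openai/fine_tuning/jobs?api-version={api_version}",
    headers=headers,
    json={"training_file": file_id, "model": "gpt-4o-mini-2024-07-18"},
)
job.raise_for_status()
print(job.json()["id"])
```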

The update is intended to present the fine-tuning approach in a more engaging way and to help users understand datasets they can actually apply. Overall, offering a more diverse set of dialogue formats improves the learning experience.