Diff Insight Report - misc

Last updated: 2024-11-23

Usage Notes

This post is a derivative work, adapted and summarized with generative AI from Microsoft's official Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted at MicrosoftDocs/azure-ai-docs.

Generative AI has limits and may introduce mistranslations or misinterpretations. Treat this post as reference material only, and always consult the original documents for accurate information.

Trademarks used in this post belong to their respective owners. They are used for technical explanation only and do not imply endorsement or recommendation by the trademark holders.


Highlights

This diff consists mainly of minor updates across several documents, chiefly date refreshes and wording improvements. The notable changes are as follows.

New Features

No standout new features are included, but the requirements for QAEvaluator are now stated explicitly.

Breaking Changes

No breaking changes were found.

Other Updates

  • Many date fields were refreshed to their latest values.
  • Several documents received wording fixes that improve consistency and readability.

Insights

These updates are part of the routine maintenance that digital resources require. The date changes serve document lifecycle management and signal freshness to readers. Keeping documentation current matters, especially in technical fields, because it reflects the best practices and safest approaches of the moment. Refreshing dates in this way helps keep the information users rely on trustworthy and up to date.

Meanwhile, the consistency and wording fixes are important improvements for the reader experience, helping users correctly understand the intended content. Technical documentation in particular must avoid ambiguity and redundancy and give clear, concrete direction. Minor updates like these are not direct functional improvements, but they are an important step toward user-friendly documentation.

In addition, the explicit statement of QAEvaluator's requirements communicates an important operational requirement accurately, helping users make the necessary configuration quickly and correctly.

Small day-to-day improvements like these ultimately raise the value users get from the documentation and help maintain and improve the quality of the platform as a whole.

Summary Table

Filename Type Title Status A D M
read.md minor update Change to file format support (Locale: ja_JP) modified 0 1 1
connections.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
encryption-keys-portal.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
safety-evaluations-transparency-note.md minor update Updated date information and text fixes (Locale: ja_JP) modified 10 10 20
vulnerability-management.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
access-on-premises-resources.md minor update Updated date information and minor text fixes (Locale: ja_JP) modified 5 5 10
create-azure-ai-hub-template.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
create-hub-terraform.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
create-secure-ai-hub.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
deploy-models-jais.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
deploy-models-serverless-availability.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
create-hub-project-sdk.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
evaluate-sdk.md minor update Change to QAEvaluator requirements (Locale: ja_JP) modified 1 1 2
index-build-consume-sdk.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
evaluate-prompts-playground.md minor update Fixes to improve text consistency (Locale: ja_JP) modified 4 4 8
secure-data-playground.md minor update Updated document date information (Locale: ja_JP) modified 1 1 2
troubleshoot-secure-connection-project.md minor update Updated date information (Locale: ja_JP) modified 1 1 2
copilot-sdk-create-resources.md minor update Improved wording in the document (Locale: ja_JP) modified 1 1 2

Modified Contents

articles/ai-services/document-intelligence/prebuilt/read.md

Diff
@@ -99,7 +99,6 @@ The searchable PDF capability enables you to convert an analog PDF, such as scan
   >
   > * Currently, the searchable PDF capability is only supported by Read OCR model `prebuilt-read`. When using this feature, please specify the `modelId` as `prebuilt-read`, as other model types will return error for this preview version.
   > * Searchable PDF is included with the 2024-07-31-preview `prebuilt-read` model with no additional cost for generating a searchable PDF output.
->   * Searchable PDF currently only supports PDF files as input. Support for other file types, such as image files, will be available later.
 
 ### Use searchable PDFs
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ファイル形式サポートの変更 (Locale: ja_JP)"
}

Explanation

This change updates the documentation in articles/ai-services/document-intelligence/prebuilt/read.md, specifically the information about the searchable PDF capability. The note stating that searchable PDF currently supports only PDF files as input, with support for other file types such as image files to come later, was removed. As a result, the capability's current limitations are stated more clearly.
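As the surviving note in the diff indicates, the searchable PDF preview is only supported by the `prebuilt-read` Read OCR model; other model types return an error. A minimal sketch of a client-side guard for that constraint (the helper is hypothetical, not part of the Document Intelligence SDK):

```python
# Illustrative guard: the searchable PDF preview is only supported by the
# Read OCR model, so reject any other modelId before calling the service.

SEARCHABLE_PDF_MODELS = {"prebuilt-read"}  # assumption: the only model supported in this preview

def validate_searchable_pdf_request(model_id: str) -> None:
    """Raise ValueError if the preview service would reject this modelId."""
    if model_id not in SEARCHABLE_PDF_MODELS:
        raise ValueError(
            f"modelId must be 'prebuilt-read' for searchable PDF; got {model_id!r}"
        )
```

Checking locally like this simply avoids a round trip that the preview service would reject anyway.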

articles/ai-studio/concepts/connections.md

Diff
@@ -9,7 +9,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: conceptual
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: sgilley
 ms.author: sgilley
 author: sdgilley

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/concepts/connections.md: the ms.date value changed from 5/21/2024 to 11/21/2024 to reflect the document's latest publication or review date and keep its content accurate and relevant. No other metadata changed; the reviewer and author fields remain as-is.
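The change itself is a one-line edit to the `ms.date` field in the article's YAML front matter. A minimal sketch of how such an update could be applied programmatically (an illustrative helper, not Microsoft tooling):

```python
import re

# Illustrative helper: rewrite the ms.date line in a Markdown article's
# YAML front matter, leaving every other line untouched.

def update_ms_date(markdown: str, new_date: str) -> str:
    """Replace the first ms.date value found in the document."""
    return re.sub(r"(?m)^ms\.date:\s*\S+", f"ms.date: {new_date}", markdown, count=1)
```

Run over a folder of articles, a helper like this would produce exactly the kind of single-line diffs seen throughout this report.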

articles/ai-studio/concepts/encryption-keys-portal.md

Diff
@@ -8,7 +8,7 @@ ms.service: azure-ai-services
 ms.custom:
   - ignite-2023
 ms.topic: concept-article
-ms.date: 10/7/2024
+ms.date: 11/21/2024
 ms.reviewer: deeikele
 # Customer intent: As an admin, I want to understand how I can use my own encryption keys with Azure AI Foundry.
 ---

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/concepts/encryption-keys-portal.md: the ms.date value changed from 10/7/2024 to 11/21/2024, reflecting the document's latest publication or review date. No other metadata or content changed, so the document's context is preserved with only the date refreshed.

articles/ai-studio/concepts/safety-evaluations-transparency-note.md

Diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
   - build-2024
 ms.topic: article
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: mithigpe
 ms.author: lagayhar
 author: lgayhardt
@@ -19,23 +19,23 @@ author: lgayhardt
 
 ## What is a Transparency Note
 
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft’s Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
 
-Microsoft’s Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/en-us/ai/responsible-ai).
+Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/en-us/ai/responsible-ai).
 
 ## The basics of Azure AI Foundry safety evaluations
 
 ### Introduction
 
-The Azure AI Foundry portal safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, jailbreak vulnerability. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft’s commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
+The Azure AI Foundry portal safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, jailbreak vulnerability. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft's commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
 
 ### Key terms
 
 - **Hateful and unfair content** refers to any language pertaining to hate toward or unfair representations of individuals and social groups along factors including but not limited to race, ethnicity, nationality, gender, sexual orientation, religion, immigration status, ability, personal appearance, and body size. Unfairness occurs when AI systems treat or represent social groups inequitably, creating or contributing to societal inequities.
 - **Sexual content** includes language pertaining to anatomical organs and genitals, romantic relationships, acts portrayed in erotic terms, pregnancy, physical sexual acts (including assault or sexual violence), prostitution, pornography, and sexual abuse.
 - **Violent content** includes language pertaining to physical actions intended to hurt, injure, damage, or kill someone or something. It also includes descriptions of weapons and guns (and related entities such as manufacturers and associations).
 - **Self-harm-related content** includes language pertaining to actions intended to hurt, injure, or damage one's body or kill oneself.
-- **Jailbreak**, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a ‘DAN’ (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions.  
+- **Jailbreak**, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a 'DAN' (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions.  
 - **Defect rate (content risk)** is defined as the percentage of instances in your test dataset that surpass a threshold on the severity scale over the whole dataset size.
 - **Red-teaming** has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of Large Language Models (LLM), the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hateful speech, incitement or glorification of violence, reference to self-harm-related content or sexual content.
 
@@ -60,7 +60,7 @@ The safety evaluations aren't intended to use for any purpose other than to eval
 We encourage customers to leverage Azure AI Foundry safety evaluations in their innovative solutions or applications. However, here are some considerations when choosing a use case:
 
 - **Safety evaluations should include human-in-the-loop**: Using automated evaluations like Azure AI Foundry safety evaluations should include human reviewers such as domain experts to assess whether your generative AI application has been tested thoroughly prior to deployment to end users.
-- **Safety evaluations do not include total comprehensive coverage**: Though safety evaluations can provide a way to augment your testing for potential content or security risks, it wasn't designed to replace manual red-teaming operations specifically geared towards your application’s domain, use cases, and type of end users.
+- **Safety evaluations do not include total comprehensive coverage**: Though safety evaluations can provide a way to augment your testing for potential content or security risks, it wasn't designed to replace manual red-teaming operations specifically geared towards your application's domain, use cases, and type of end users.
 - Supported scenarios:
     - For adversarial simulation: Question answering, multi-turn chat, summarization, search, text rewrite, ungrounded and grounded content generation.
     - For automated annotation: Question answering and multi-turn chat.
@@ -69,11 +69,11 @@ We encourage customers to leverage Azure AI Foundry safety evaluations in their
     - The hate- and unfairness metric includes some coverage for a limited number of marginalized groups for the demographic factor of gender (for example, men, women, non-binary people) and race, ancestry, ethnicity, and nationality (for example, Black, Mexican, European). Not all marginalized groups in gender and race, ancestry, ethnicity, and nationality are covered. Other demographic factors that are relevant to hate and unfairness don't currently have coverage (for example, disability, sexuality, religion).
     - The metrics for sexual, violent, and self-harm-related content are based on a preliminary conceptualization of these harms that are less developed than hate and unfairness. This means that we can make less strong claims about measurement coverage and how well the measurements represent the different ways these harms can occur. Coverage for these content types includes a limited number of topics relate to sex (for example, sexual violence, relationships, sexual acts), violence (for example, abuse, injuring others, kidnapping), and self-harm (for example, intentional death, intentional self-injury, eating disorders).
 - Azure AI Foundry safety evaluations don't currently allow for plug-ins or extensibility.
-- To keep quality up to date and improve coverage, we'll aim for a cadence of future releases of improvement to the service’s adversarial simulation and annotation capabilities.
+- To keep quality up to date and improve coverage, we'll aim for a cadence of future releases of improvement to the service's adversarial simulation and annotation capabilities.
 
 ### Technical limitations, operational factors, and ranges
 
-- The field of large language models (LLMs) continues to evolve at a rapid pace, requiring continuous improvement of evaluation techniques to ensure safe and reliable AI system deployment. Azure AI Foundry safety evaluations reflect Microsoft’s commitment to continue innovating in the field of LLM evaluation. We aim to provide the best tooling to help you evaluate the safety of your generative AI applications but recognize effective evaluation is a continuous work in progress.
+- The field of large language models (LLMs) continues to evolve at a rapid pace, requiring continuous improvement of evaluation techniques to ensure safe and reliable AI system deployment. Azure AI Foundry safety evaluations reflect Microsoft's commitment to continue innovating in the field of LLM evaluation. We aim to provide the best tooling to help you evaluate the safety of your generative AI applications but recognize effective evaluation is a continuous work in progress.
 - Customization of Azure AI Foundry safety evaluations is currently limited. We only expect users to provide their input generative AI application endpoint and our service will output a static dataset that is labeled for content risk.
 - Finally, it should be noted that this system doesn't automate any actions or tasks, it only provides an evaluation of your generative AI application outputs, which should be reviewed by a human decision maker in the loop before choosing to deploy the generative AI application or system into production for end users.
 
@@ -88,7 +88,7 @@ We encourage customers to leverage Azure AI Foundry safety evaluations in their
 
 ### Evaluation methods
 
-For all supported content risk types, we have internally checked the quality by comparing the rate of approximate matches between human labelers using a 0-7 severity scale and the safety evaluations’ automated annotator also using a 0-7 severity scale on the same datasets. For each risk area, we had both human labelers and an automated annotator label 500 English, single-turn texts. The human labelers and the automated annotator didn't use exactly the same versions of the annotation guidelines; while the automated annotator’s guidelines stemmed from the guidelines for humans, they have since diverged to varying degrees (with the hate and unfairness guidelines having diverged the most). Despite these slight to moderate differences, we believe it's still useful to share general trends and insights from our comparison of approximate matches. In our comparisons, we looked for matches with a 2-level tolerance (where human label matched automated annotator label exactly or was within 2 levels above or below in severity), matches with a 1-level tolerance, and matches with a 0-level tolerance.
+For all supported content risk types, we have internally checked the quality by comparing the rate of approximate matches between human labelers using a 0-7 severity scale and the safety evaluations' automated annotator also using a 0-7 severity scale on the same datasets. For each risk area, we had both human labelers and an automated annotator label 500 English, single-turn texts. The human labelers and the automated annotator didn't use exactly the same versions of the annotation guidelines; while the automated annotator's guidelines stemmed from the guidelines for humans, they have since diverged to varying degrees (with the hate and unfairness guidelines having diverged the most). Despite these slight to moderate differences, we believe it's still useful to share general trends and insights from our comparison of approximate matches. In our comparisons, we looked for matches with a 2-level tolerance (where human label matched automated annotator label exactly or was within 2 levels above or below in severity), matches with a 1-level tolerance, and matches with a 0-level tolerance.
 
 ### Evaluation results
 
@@ -100,7 +100,7 @@ Although our comparisons are between entities that used slightly to moderately d
 
 Measurement and evaluation of your generative AI application are a critical part of a holistic approach to AI risk management. Azure AI Foundry safety evaluations are complementary to and should be used in tandem with other AI risk management practices. Domain experts and human-in-the-loop reviewers should provide proper oversight when using AI-assisted safety evaluations in the generative AI application design, development, and deployment cycle. You should understand the limitations and intended uses of the safety evaluations, being careful not to rely on outputs produced by Azure AI Foundry AI-assisted safety evaluations in isolation.
 
-Due to the non-deterministic nature of the LLMs, you might experience false negative or positive results, such as a high-severity level of violent content scored as "very low" or “low.” Additionally, evaluation results might have different meanings for different audiences. For example, safety evaluations might generate a label for “low” severity of violent content that might not align to a human reviewer’s definition of how severe that specific violent content might be. In Azure AI Foundry portal, we provide a human feedback column with thumbs up and thumbs down when viewing your evaluation results to surface which instances were approved or flagged as incorrect by a human reviewer. Consider the context of how your results might be interpreted for decision making by others you can share evaluation with and validate your evaluation results with the appropriate level of scrutiny for the level of risk in the environment that each generative AI application operates in.
+Due to the non-deterministic nature of the LLMs, you might experience false negative or positive results, such as a high-severity level of violent content scored as "very low" or "low." Additionally, evaluation results might have different meanings for different audiences. For example, safety evaluations might generate a label for "low" severity of violent content that might not align to a human reviewer's definition of how severe that specific violent content might be. In Azure AI Foundry portal, we provide a human feedback column with thumbs up and thumbs down when viewing your evaluation results to surface which instances were approved or flagged as incorrect by a human reviewer. Consider the context of how your results might be interpreted for decision making by others you can share evaluation with and validate your evaluation results with the appropriate level of scrutiny for the level of risk in the environment that each generative AI application operates in.
 
 ## Learn more about responsible AI
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報とテキスト修正 (Locale: ja_JP)"
}

Explanation

This change updates both metadata and text in articles/ai-studio/concepts/safety-evaluations-transparency-note.md. The ms.date value changed from 5/21/2024 to 11/21/2024, and small edits throughout the article unify the punctuation, replacing typographic (curly) apostrophes and quotation marks with straight ASCII ones. These edits aid readability and keep the information current; the substance is unchanged, and the document continues to aim at greater transparency around AI safety evaluations.
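The text edits in this diff replace typographic (curly) apostrophes and quotation marks with their straight ASCII equivalents. A minimal normalizer for that kind of cleanup might look like this (a hypothetical helper, assuming only the four common curly characters need mapping):

```python
# Illustrative normalizer: map common typographic quote characters down to
# their ASCII equivalents, as this diff does by hand.

CURLY_TO_ASCII = {
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark / apostrophe
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
}

def normalize_quotes(text: str) -> str:
    return text.translate(str.maketrans(CURLY_TO_ASCII))
```

Applying such a pass consistently is what keeps strings like "Microsoft's" and "low" uniform across the article.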

articles/ai-studio/concepts/vulnerability-management.md

Diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
   - build-2024
 ms.topic: conceptual
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: deeikele
 ms.author: larryfr
 author: Blackmist

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/concepts/vulnerability-management.md: the ms.date value changed from 5/21/2024 to 11/21/2024 so the document reflects the latest information. No other metadata changed; the update keeps the document current while preserving its consistency.

articles/ai-studio/how-to/access-on-premises-resources.md

Diff
@@ -5,7 +5,7 @@ description: Learn how to configure an Azure AI Foundry managed network to secur
 manager: scottpolly
 ms.service: azure-ai-studio
 ms.topic: how-to
-ms.date: 10/24/2024
+ms.date: 11/22/2024
 ms.reviewer: meerakurup 
 ms.author: larryfr
 author: Blackmist
@@ -45,7 +45,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
     - Azure AI Foundry only supports IPv4 for Application Gateway.
     - With your Azure Virtual Network, select one dedicated subnet for your Application Gateway. No other resources can be deployed in this subnet.
 
-1. From the __Frontends__ tab, Application Gateway doesn’t support private Frontend IP address only so Public IP addresses need to be selected or a new one created. Private IP addresses for the resources that the gateway connects to can be added within the range of the subnet you selected on the Basics tab.
+1. From the __Frontends__ tab, Application Gateway doesn't support private Frontend IP address only so Public IP addresses need to be selected or a new one created. Private IP addresses for the resources that the gateway connects to can be added within the range of the subnet you selected on the Basics tab.
 
 1. From the __Backends__ tab, you can add your backend target to a backend pool. You can manage your backend targets by creating different backend pools. Request routing is based on the pools. You can add backend targets such as a Snowflake database. 
 
@@ -56,7 +56,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
         - If you want end-to-end TLS encryption, select HTTPS listener and upload your own certificate for Application Gateway to decrypt request received by listener. For more information, see [Enabling end to end TLS on Azure Application Gateway](/azure/application-gateway/ssl-overview#end-to-end-tls-encryption).
         - If you want a fully private backend target without any public network access, DO NOT setup a listener on the public frontend IP address and its associated routing rule. Application Gateway only forwards requests that listeners receive at the specific port. If you want to avoid adding public frontend IP listener by mistake, see [Network security rules](/azure/application-gateway/configuration-infrastructure#network-security-groups) to fully lock down public network access.
 
-    - In the __Backend targets__ section, if you want to use HTTPS and Backend server’s certificate is NOT issued by a well-known CA, you must upload the Root certificate (.CER) of the backend server. For more on configuring with a root certificate, see [Configure end-to-end TLS encryption using the portal](/azure/application-gateway/end-to-end-ssl-portal).
+    - In the __Backend targets__ section, if you want to use HTTPS and Backend server's certificate is NOT issued by a well-known CA, you must upload the Root certificate (.CER) of the backend server. For more on configuring with a root certificate, see [Configure end-to-end TLS encryption using the portal](/azure/application-gateway/end-to-end-ssl-portal).
 
 1. Once the Application Gateway resource is created, navigate to the new Application Gateway resource in the Azure portal. Under __Settings__, select, __Private link__ to enable a virtual network to privately access the Application Gateway through a private endpoint connection. The Private link configuration isn't created by default. 
 
@@ -68,7 +68,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
 
 ## Configure private link
 
-1. Now that your Application Gateway’s front-end IP and backend pools are created, you can now configure the private endpoint from the managed virtual network to your Application Gateway. in the [Azure portal](https://portal.azure.com), navigate to your Azure AI Foundry hub's __Networking__ tab. Select __Workspace managed outbound access__, __+ Add user-defined outbound rules__. 
+1. Now that your Application Gateway's front-end IP and backend pools are created, you can now configure the private endpoint from the managed virtual network to your Application Gateway. in the [Azure portal](https://portal.azure.com), navigate to your Azure AI Foundry hub's __Networking__ tab. Select __Workspace managed outbound access__, __+ Add user-defined outbound rules__. 
 1. In the __Workspace Outbound rules__ form, select the following to create your private endpoint:
 
     - Rule name: Provide a name for your private endpoint to Application Gateway.
@@ -77,7 +77,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
     - Resource Type: `Microsoft.Network/applicationGateways`
     - Resource name: The name of your Application Gateway resource.
     - Sub resource: `appGwPrivateFrontendIpIPv4` 
-    - FQDNs: These FQDNs are the aliases that you want to use inside the Azure AI Foundry portal. They're resolved to the managed private endpoint’s private IP address targeting Application Gateway. You might include multiple FQDNs depending on how many resources you would like to connect to with the Application Gateway.
+    - FQDNs: These FQDNs are the aliases that you want to use inside the Azure AI Foundry portal. They're resolved to the managed private endpoint's private IP address targeting Application Gateway. You might include multiple FQDNs depending on how many resources you would like to connect to with the Application Gateway.
 
     > [!NOTE]
     > - If you are using HTTPS listener with certificate uploaded, make sure the FQDN alias matches with the certificate's CN (Common Name) or SAN (Subject Alternative Name) otherwise HTTPS call will fail with SNI (Server Name Indication).

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報と軽微なテキスト修正 (Locale: ja_JP)"
}

Explanation

This change updates articles/ai-studio/how-to/access-on-premises-resources.md with a metadata refresh and minor text fixes. The ms.date value changed from 10/24/2024 to 11/22/2024, and several sentences received purely stylistic edits that unify apostrophe usage. These fixes improve textual consistency and keep the information current; overall, the document's guidance on access management is clearer and easier to read.

articles/ai-studio/how-to/create-azure-ai-hub-template.md

Diff
@@ -6,7 +6,7 @@ manager: scottpolly
 ms.service: azure-ai-studio
 ms.custom: devx-track-arm-template, devx-track-bicep, build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: deeikele
 ms.author: larryfr
 author: Blackmist

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/how-to/create-azure-ai-hub-template.md: the ms.date value changed from 5/21/2024 to 11/21/2024 to keep the document's date current and give users an accurate reference. No other metadata or content changed.

articles/ai-studio/how-to/create-hub-terraform.md

Diff
@@ -2,7 +2,7 @@
 title: 'Use Terraform to create an Azure AI Foundry hub'
 description: In this article, you create an Azure AI Foundry hub, an Azure AI Foundry project, an AI services resource, and more resources.
 ms.topic: how-to
-ms.date: 07/12/2024
+ms.date: 11/21/2024
 titleSuffix: Azure AI Foundry 
 ms.service: azure-ai-studio 
 manager: scottpolly 

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/how-to/create-hub-terraform.md: the ms.date value changed from 07/12/2024 to 11/21/2024 to keep the document's date current and give users an accurate reference. Nothing else changed.

articles/ai-studio/how-to/create-secure-ai-hub.md

Diff
@@ -6,7 +6,7 @@ ms.service: azure-ai-studio
 ms.custom:
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: meerakurup 
 ms.author: larryfr
 author: Blackmist

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/how-to/create-secure-ai-hub.md: the ms.date value changed from 5/21/2024 to 11/21/2024 to keep the document's date current. No other metadata changed, so the document's content is unaffected and it remains a reliable resource.

articles/ai-studio/how-to/deploy-models-jais.md

Diff
@@ -5,7 +5,7 @@ description: Learn how to use Jais chat models with Azure AI Foundry.
 ms.service: azure-ai-studio
 manager: scottpolly
 ms.topic: how-to
-ms.date: 08/08/2024
+ms.date: 11/21/2024
 ms.reviewer: haelhamm
 reviewer: hazemelh 
 ms.author: ssalgado

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/how-to/deploy-models-jais.md: the ms.date value changed from 08/08/2024 to 11/21/2024, bringing the document's date in line with the latest information. No other metadata changed, and the document's overall content and structure are unaffected.

articles/ai-studio/how-to/deploy-models-serverless-availability.md

Diff
@@ -5,7 +5,7 @@ description: Learn about the regions where each model is available for deploymen
 manager: scottpolly
 ms.service: azure-ai-studio
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.author: mopeakande
 author: msakande
 ms.reviewer: fasantia

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change updates metadata in articles/ai-studio/how-to/deploy-models-serverless-availability.md: the ms.date field changed from 5/21/2024 to 11/21/2024. This minor update signals that the information in the document is current; no other metadata changed, so the content itself is unaffected.

articles/ai-studio/how-to/develop/create-hub-project-sdk.md

Diff
@@ -6,7 +6,7 @@ manager: scottpolly
 ms.service: azure-ai-studio
 ms.custom: build-2024, devx-track-azurecli
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: dantaylo
 ms.author: sgilley
 author: sdgilley

Summary

{
    "modification_type": "minor update",
    "modification_title": "更新された日付情報 (Locale: ja_JP)"
}

Explanation

This change shows a metadata update in the articles/ai-studio/how-to/develop/create-hub-project-sdk.md file. In particular, the ms.date field was changed from May 21, 2024 to November 21, 2024. The update brings the document's date information up to date, with the intent of providing users with accurate information. No other metadata was changed, and the document's content is unaffected. Minor updates like this are important for preserving the document's trustworthiness.

articles/ai-studio/how-to/develop/evaluate-sdk.md

Diff
@@ -81,7 +81,7 @@ Built-in evaluators can accept *either* query and response pairs or a list of co
 | `HateUnfairnessEvaluator`        | Required: String | Required: String | N/A           | N/A           |Supported |
 | `IndirectAttackEvaluator`      | Required: String | Required: String | Required: String | N/A           |Supported |
 | `ProtectedMaterialEvaluator`  | Required: String | Required: String | N/A           | N/A           |Supported |
-| `QAEvaluator`      | Required: String | Required: String | Required: String | N/A           | Not supported |
+| `QAEvaluator`      | Required: String | Required: String | Required: String | Required: String           | Not supported |
 | `ContentSafetyEvaluator`      | Required: String | Required: String |  N/A  | N/A           | Supported |
 
 - Query: the query sent in to the generative AI application

Summary

{
    "modification_type": "minor update",
    "modification_title": "QAEvaluatorの要件の変更 (Locale: ja_JP)"
}

Explanation

This change concerns the table in the articles/ai-studio/how-to/develop/evaluate-sdk.md file; specifically, the requirements for QAEvaluator were corrected. The fourth input column in the QAEvaluator row was changed from "N/A" to "Required: String", making explicit that a string is required for that input as well. The change indicates that, like its other inputs, every field for QAEvaluator now requires a string. Overall, the fix improves the clarity of the document and gives users accurate information. No other parts were changed, so the content remains consistent.
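Put concretely, the updated row means a QAEvaluator input set is only complete when all four input fields are non-empty strings. The following sketch is purely illustrative: the `REQUIRED_INPUTS` mapping and `validate_inputs` helper are hypothetical and not part of the azure-ai-evaluation SDK.

```python
# Hypothetical sketch of the per-evaluator input requirements from the table
# above. REQUIRED_INPUTS and validate_inputs are illustrative only.
REQUIRED_INPUTS = {
    # After this change, QAEvaluator requires all four string inputs.
    "QAEvaluator": ["query", "response", "context", "ground_truth"],
    "ContentSafetyEvaluator": ["query", "response"],
    "ProtectedMaterialEvaluator": ["query", "response"],
    "IndirectAttackEvaluator": ["query", "response", "context"],
}

def validate_inputs(evaluator: str, row: dict) -> list[str]:
    """Return the names of required fields that are missing or not strings."""
    return [
        field
        for field in REQUIRED_INPUTS.get(evaluator, [])
        if not isinstance(row.get(field), str)
    ]

# A row without ground_truth now fails validation for QAEvaluator.
row = {"query": "What is 2+2?", "response": "4", "context": "basic arithmetic"}
print(validate_inputs("QAEvaluator", row))  # ['ground_truth']
```

This kind of pre-flight check mirrors what the table documents: the same row would previously have passed, since the fourth input was marked N/A for QAEvaluator.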

articles/ai-studio/how-to/develop/index-build-consume-sdk.md

Diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: dantaylo
 ms.author: sgilley
 author: sdgilley

Summary

{
    "modification_type": "minor update",
    "modification_title": "日付情報の更新 (Locale: ja_JP)"
}

Explanation

This change shows an update to the date metadata in the articles/ai-studio/how-to/develop/index-build-consume-sdk.md file. Specifically, the ms.date field was changed from May 21, 2024 to November 21, 2024. This minor update accurately reflects when the document was created or revised and is intended to provide readers with current information. The change does not affect the document's content and is an improvement that helps maintain trust.

articles/ai-studio/how-to/evaluate-prompts-playground.md

Diff
@@ -8,7 +8,7 @@ ms.custom:
   - ignite-2023
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: mithigpe
 ms.author: lagayhar
 author: lgayhardt
@@ -20,7 +20,7 @@ author: lgayhardt
 
 When you get started with prompt engineering, you should test different inputs one at a time to evaluate the effectiveness of the prompt can be very time intensive. This is because it's important to check whether the content filters are working appropriately, whether the response is accurate, and more. 
 
-To make this process simpler, you can utilize manual evaluation in Azure AI Foundry portal, an evaluation tool enabling you to continuously iterate and evaluate your prompt against your test data in a single interface. You can also manually rate the outputs, the model’s responses, to help you gain confidence in your prompt.  
+To make this process simpler, you can utilize manual evaluation in Azure AI Foundry portal, an evaluation tool enabling you to continuously iterate and evaluate your prompt against your test data in a single interface. You can also manually rate the outputs, the model's responses, to help you gain confidence in your prompt.  
 
 Manual evaluation can help you get started to understand how well your prompt is performing and iterate on your prompt to ensure you reach your desired level of confidence. 
 
@@ -55,7 +55,7 @@ You can also **Import Data** to choose one of your previous existing datasets in
 > [!NOTE]
 > You can add as many as 50 input rows to your manual evaluation. If your test data has more than 50 input rows, we will upload the first 50 in the input column. 
 
-Now that your data is added, you can **Run** to populate the output column with the model’s response. 
+Now that your data is added, you can **Run** to populate the output column with the model's response. 
 
 ## Rate your model responses 
 
@@ -67,7 +67,7 @@ You can provide a thumb up or down rating to each response to assess the prompt
 
 Based on your summary, you might want to make changes to your prompt. You can use the prompt controls above to edit your prompt setup. This can be updating the system message, changing the model, or editing the parameters. 
 
-After making your edits, you can choose to rerun all to update the entire table or focus on rerunning specific rows that didn’t meet your expectations the first time.  
+After making your edits, you can choose to rerun all to update the entire table or focus on rerunning specific rows that didn't meet your expectations the first time.  
 
 ## Save and compare results 
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "文章の一貫性向上のための修正 (Locale: ja_JP)"
}

Explanation

This change improves wording in the articles/ai-studio/how-to/evaluate-prompts-playground.md file, unifying several expressions. Specifically, occurrences such as "model's responses" that used a typographic apostrophe were changed to use a straight apostrophe, improving consistency across the document. In addition, the ms.date field was updated from May 21, 2024 to November 21, 2024, so the document reflects the latest information and readers can confirm an accurate date. These fixes do not affect the overall content and raise the quality of the document.

articles/ai-studio/how-to/secure-data-playground.md

Diff
@@ -5,7 +5,7 @@ description: Learn how to securely use the Azure AI Foundry portal playground ch
 manager: scottpolly
 ms.service: azure-ai-studio
 ms.topic: how-to
-ms.date: 09/13/2024
+ms.date: 11/21/2024
 ms.reviewer: meerakurup 
 ms.author: larryfr
 author: Blackmist

Summary

{
    "modification_type": "minor update",
    "modification_title": "文書の日付情報の更新 (Locale: ja_JP)"
}

Explanation

This change updates the date metadata of the articles/ai-studio/how-to/secure-data-playground.md file. Specifically, the ms.date field was changed from September 13, 2024 to November 21, 2024. This minor update is intended to ensure that the document reflects current information and that readers can interpret the content against an accurate date. The change does not directly affect the document's content and improves the transparency of information that matters to readers.

articles/ai-studio/how-to/troubleshoot-secure-connection-project.md

Diff
@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 11/21/2024
 ms.reviewer: meerakurup
 ms.author: larryfr
 author: Blackmist

Summary

{
    "modification_type": "minor update",
    "modification_title": "日付情報の更新 (Locale: ja_JP)"
}

Explanation

This change updates the date metadata in the articles/ai-studio/how-to/troubleshoot-secure-connection-project.md file. Specifically, the ms.date field was corrected from May 21, 2024 to November 21, 2024. With this minor update the document reflects the latest date information, so readers can understand the content on the basis of accurate information. The change does not affect the document's content and serves to improve the transparency and reliability of the information.

articles/ai-studio/tutorials/copilot-sdk-create-resources.md

Diff
@@ -107,7 +107,7 @@ In the Azure AI Foundry portal, check for an Azure AI Search connected resource.
 1. Use **API key** for **Authentication**.
 
     > [!NOTE]
-    > You can instead use **Microsoft Entra ID** for **Authentication**. If you do this, you must also configure access control for the Azure AI Search service. Assign yourself the **Search Index Data Contributor** and **Search Service Contributor** roles. If you don't know how to do this, or don't have the necessary permissions, use the **API key** for **Authentication**.
+    > You can instead use **Microsoft Entra ID** for **Authentication**. If you do this, you must also configure access control for the Azure AI Search service. Assign the **Search Index Data Contributor** and **Search Service Contributor** roles to your user account. If you don't know how to do this, or don't have the necessary permissions, use the **API key** for **Authentication**.
 
 1. Select **Add connection**.  
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "文書内の表現の改善 (Locale: ja_JP)"
}

Explanation

This change fine-tunes wording in the articles/ai-studio/tutorials/copilot-sdk-create-resources.md file. Specifically, the note about authenticating with Microsoft Entra ID was slightly revised. The earlier sentence read "Assign yourself the Search Index Data Contributor and Search Service Contributor roles."; it now reads "Assign the Search Index Data Contributor and Search Service Contributor roles to your user account." The revision makes the sentence clearer and gives readers a more concrete, easier-to-follow instruction. Overall this is a minor update that does not change the document's substance but contributes to better wording.
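For readers who prefer the command line over the portal, the role assignments the revised note describes can be sketched with the Azure CLI. This is an illustrative sketch, not part of the tutorial; the search service and resource group names are placeholders, and the commands require an authenticated `az` session with sufficient permissions.

```shell
# Resolve the resource ID of the Azure AI Search service (placeholder names).
SCOPE=$(az search service show \
  --name <your-search-service> \
  --resource-group <your-resource-group> \
  --query id --output tsv)

# Object ID of the signed-in user account.
ME=$(az ad signed-in-user show --query id --output tsv)

# Assign both roles mentioned in the note to your user account.
az role assignment create --assignee "$ME" \
  --role "Search Index Data Contributor" --scope "$SCOPE"
az role assignment create --assignee "$ME" \
  --role "Search Service Contributor" --scope "$SCOPE"
```

As the note says, if you lack the permissions to create these role assignments, falling back to API key authentication is the simpler path.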