Diff Insight Report - search

Last updated: 2025-11-22

Usage notes

This post is a derivative work, adapted and summarized with generative AI from official Microsoft Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted in MicrosoftDocs/azure-ai-docs.

Generative AI has limits, and this post may contain mistranslations or misinterpretations. Treat it as reference material only, and always consult the original documents for accurate information.

Trademarks used in this post belong to their respective owners. They appear for technical explanation only and do not imply official approval or endorsement by the trademark holders.

View Diff on GitHub


# Highlights
This update adds new features and makes minor fixes across several documents. In particular, the set gains tutorials and guides that detail how to use and configure Azure services. The most visible change is the expanded coverage of knowledge source configuration through new C# files, but the update also includes improvements such as refreshed version information and more consistent wording.

## New features

  • New guides, backed by C# files, for using Blob, OneLake, search index, SharePoint, and other knowledge sources
  • New guides detailing how to configure each type of knowledge source

## Breaking changes

  • None

## Other updates

  • Date updates across many documents
  • API version information refreshed to the latest releases
  • Clearer descriptions of many features, including semantic ranking and query rewrite
  • Wording fixes that improve consistency and readability
  • Polished titles and descriptions for style and content

# Insights

This update delivers many new documents and improvements to existing ones for Azure AI Search users. In particular, the newly added C#-based guides go deeper into technical detail, showing developers how to properly configure and manage knowledge sources.

The goal is to help users work with a variety of data sources efficiently in Azure AI Search: the guides anticipate the problems practitioners are likely to hit and offer convincing solutions, which should make them an important resource. Ongoing maintenance such as date updates and API version tracking is also basic but important work for keeping the documentation trustworthy.

Overall, this update reads as an effort to improve the user experience and make Azure AI Search features easier and more efficient to use.

# Summary Table

| Filename | Type | Title | Status | A | D | M |
|---|---|---|---|---|---|---|
| agentic-knowledge-source-how-to-blob.md | minor update | Updated agentic knowledge source how-to | modified | 2 | 6 | 8 |
| agentic-knowledge-source-how-to-onelake.md | minor update | Updated OneLake knowledge source how-to | modified | 3 | 6 | 9 |
| agentic-knowledge-source-how-to-search-index.md | minor update | Updated search index knowledge source how-to | modified | 2 | 5 | 7 |
| agentic-knowledge-source-how-to-sharepoint-indexed.md | minor update | Updated indexed SharePoint knowledge source how-to | modified | 2 | 5 | 7 |
| agentic-knowledge-source-how-to-sharepoint-remote.md | minor update | Updated remote SharePoint knowledge source how-to | modified | 2 | 5 | 7 |
| agentic-knowledge-source-how-to-web-manage.md | minor update | Updated web knowledge source management how-to | modified | 1 | 0 | 1 |
| agentic-knowledge-source-how-to-web.md | minor update | Updated web knowledge source how-to | modified | 2 | 5 | 7 |
| agentic-retrieval-how-to-create-knowledge-base.md | minor update | Updated knowledge base creation how-to | modified | 2 | 6 | 8 |
| agentic-retrieval-overview.md | minor update | Revised agentic retrieval overview | modified | 4 | 3 | 7 |
| cognitive-search-concept-image-scenarios.md | minor update | Updated image scenarios concept article | modified | 9 | 9 | 18 |
| cognitive-search-concept-intro.md | minor update | Revised Cognitive Search concepts introduction | modified | 1 | 1 | 2 |
| hybrid-search-overview.md | minor update | Updated hybrid search overview | modified | 34 | 33 | 67 |
| agentic-knowledge-source-how-to-blob-csharp.md | new feature | Added blob knowledge source how-to guide | added | 205 | 0 | 205 |
| agentic-knowledge-source-how-to-onelake-csharp.md | new feature | Added OneLake knowledge source how-to guide | added | 193 | 0 | 193 |
| agentic-knowledge-source-how-to-search-index-csharp.md | new feature | Added search index knowledge source how-to guide | added | 101 | 0 | 101 |
| agentic-knowledge-source-how-to-sharepoint-indexed-csharp.md | new feature | Added indexed SharePoint knowledge source how-to guide | added | 187 | 0 | 187 |
| agentic-knowledge-source-how-to-sharepoint-remote-csharp.md | new feature | Added remote SharePoint knowledge source how-to guide | added | 225 | 0 | 225 |
| agentic-knowledge-source-how-to-web-csharp.md | new feature | Added web knowledge source how-to guide | added | 118 | 0 | 118 |
| agentic-retrieval-how-to-create-knowledge-base-csharp.md | new feature | Added knowledge base creation how-to guide | added | 329 | 0 | 329 |
| agentic-retrieval-how-to-create-knowledge-base-rest.md | minor update | Updated REST API knowledge base guide | modified | 2 | 2 | 4 |
| knowledge-source-check-csharp.md | new feature | Added guide for checking knowledge sources | added | 51 | 0 | 51 |
| knowledge-source-delete-csharp.md | new feature | Added guide for deleting knowledge sources | added | 99 | 0 | 99 |
| knowledge-source-ingestion-parameters-csharp.md | new feature | Added guide for knowledge source ingestion parameters | added | 21 | 0 | 21 |
| knowledge-source-status-csharp.md | new feature | Added guide for monitoring knowledge source status | added | 53 | 0 | 53 |
| skillset-csharp.md | minor update | Updated skillset tutorial | modified | 6 | 6 | 12 |
| skillset-rest.md | minor update | Updated skillset REST tutorial | modified | 6 | 6 | 12 |
| multimodal-search-overview.md | minor update | Updated multimodal search overview | modified | 2 | 2 | 4 |
| samples-python.md | minor update | Updated Python samples documentation | modified | 4 | 2 | 6 |
| samples-rest.md | minor update | Updated REST samples documentation | modified | 11 | 4 | 15 |
| search-api-versions.md | minor update | Updated API versions documentation | modified | 1 | 1 | 2 |
| search-blob-indexer-role-based-access.md | minor update | Updated blob indexer role-based access documentation | modified | 1 | 1 | 2 |
| search-document-level-access-overview.md | minor update | Updated document-level access control overview | modified | 1 | 1 | 2 |
| search-how-to-index-azure-blob-encrypted.md | minor update | Revised encrypted Azure blob indexing documentation | modified | 1 | 1 | 2 |
| search-how-to-index-onelake-files.md | minor update | Updated OneLake files indexing documentation | modified | 4 | 2 | 6 |
| search-how-to-index-sql-database.md | minor update | Revised SQL database indexing documentation | modified | 1 | 4 | 5 |
| search-howto-managed-identities-azure-functions.md | minor update | Updated managed identity setup for Azure Functions | modified | 24 | 24 | 48 |
| search-index-access-control-lists-and-rbac-push-api.md | minor update | Reworded access policy documentation | modified | 1 | 1 | 2 |
| search-indexer-access-control-lists-and-role-based-access.md | minor update | Reworded access policy documentation | modified | 1 | 1 | 2 |
| search-indexer-sensitivity-labels.md | minor update | Revised sensitivity labels documentation | modified | 2 | 2 | 4 |
| search-indexer-tutorial.md | minor update | Reworded search indexer tutorial | modified | 1 | 1 | 2 |
| search-markdown-data-tutorial.md | minor update | Improved Markdown data tutorial | modified | 107 | 98 | 205 |
| search-query-access-control-rbac-enforcement.md | minor update | Fixed title for query-time ACL and RBAC enforcement | modified | 2 | 2 | 4 |
| search-query-sensitivity-labels.md | minor update | Revised sensitivity labels content | modified | 5 | 5 | 10 |
| search-security-rbac.md | minor update | Updated RBAC documentation | modified | 5 | 29 | 34 |
| search-semi-structured-data.md | minor update | Updated semi-structured data procedures | modified | 4 | 2 | 6 |
| semantic-how-to-configure.md | minor update | Updated semantic ranking configuration documentation | modified | 6 | 4 | 10 |
| semantic-how-to-query-rewrite.md | minor update | Updated query rewrite documentation | modified | 6 | 6 | 12 |
| semantic-search-overview.md | minor update | Revised semantic ranking overview | modified | 3 | 3 | 6 |
| tutorial-adls-gen2-indexer-acls.md | minor update | Reworded ACL and RBAC descriptions | modified | 1 | 1 | 2 |
| tutorial-create-custom-analyzer.md | minor update | Revised phone number custom analyzer tutorial | modified | 125 | 113 | 238 |
| tutorial-csharp-create-load-index.md | minor update | Date update for C# index creation and loading tutorial | modified | 1 | 1 | 2 |
| tutorial-multiple-data-sources.md | minor update | Fixed admin key description in multiple data sources tutorial | modified | 1 | 1 | 2 |
| tutorial-optimize-indexing-push-api.md | minor update | Updated push API indexing optimization tutorial | modified | 30 | 28 | 58 |
| tutorial-skillset.md | minor update | Updated last-modified date for skillset tutorial | modified | 1 | 1 | 2 |
| vector-search-index-size.md | minor update | Revised vector index size article | modified | 40 | 31 | 71 |
| vector-search-integrated-vectorization-ai-studio.md | minor update | Revised integrated vectorization with Microsoft Foundry article | modified | 23 | 21 | 44 |
| vector-search-overview.md | minor update | Revised vector search overview article | modified | 5 | 5 | 10 |
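The A, D, and M columns appear to track added lines, deleted lines, and total modified lines: in every row, M equals A plus D. A quick sketch that checks this relationship over a few rows copied from the table (the tuples below are sample data, not an exhaustive copy):

```python
# Sampled rows from the summary table: (filename, added, deleted, modified).
rows = [
    ("agentic-knowledge-source-how-to-blob.md", 2, 6, 8),
    ("hybrid-search-overview.md", 34, 33, 67),
    ("agentic-knowledge-source-how-to-blob-csharp.md", 205, 0, 205),
    ("search-markdown-data-tutorial.md", 107, 98, 205),
]

# If M counts total changed lines, it should equal A + D for every row.
for name, added, deleted, modified in rows:
    assert added + deleted == modified, name

print("M = A + D holds for all sampled rows")
```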

# Modified Contents

articles/search/agentic-knowledge-source-how-to-blob.md

Diff
@@ -7,20 +7,16 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 11/14/2025
+ms.date: 11/19/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create a blob knowledge source from Azure Blob Storage and ADLS Gen2
 
-<!--
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-blob-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-knowledge-source-how-to-blob-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated agentic knowledge source how-to"
}

Explanation

This change makes a few small fixes to agentic-knowledge-source-how-to-blob.md. The main changes are:

  1. Date update: the document's date changed from November 14, 2025 to November 19, 2025.
  2. C# include update: the line for the C# include file now points to the new file path.
  3. Comment removal: the comment block around the C# zone was removed, enabling that section so users can reach the information quickly.

Overall, these are minor changes intended to improve the document's content.

articles/search/agentic-knowledge-source-how-to-onelake.md

Diff
@@ -9,20 +9,17 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2025
 ms.topic: how-to
-ms.date: 11/14/2025
+ms.date: 11/20/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create a OneLake knowledge source
 
-<!--
+
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-onelake-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-knowledge-source-how-to-onelake-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated OneLake knowledge source how-to"
}

Explanation

This change makes small fixes to agentic-knowledge-source-how-to-onelake.md. The main points are:

  1. Date update: the document's date changed from November 14, 2025 to November 20, 2025.
  2. C# include update: the path of the C# include file was changed so that the correct resource is referenced.
  3. Comment removal: the comment block around the C# zone was removed, making the document read more smoothly.

These changes aim to improve the information on offer and make it easier for users to reach the content they need.

articles/search/agentic-knowledge-source-how-to-search-index.md

Diff
@@ -7,19 +7,16 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
+ms.date: 11/19/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create a search index knowledge source
 
-<!--
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-search-index-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-knowledge-source-how-to-search-index-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated search index knowledge source how-to"
}

Explanation

This change makes a few small fixes to agentic-knowledge-source-how-to-search-index.md. The main changes are:

  1. New date added: the date November 19, 2025 was added to the document.
  2. C# include update: the include file path in the C# zone was updated so the correct file is referenced.
  3. Comment removal: the comment block around the C# zone was removed, leaving the document tidier and easier to use.

These changes make it easier for users to find the information they need and improve the document's clarity.

articles/search/agentic-knowledge-source-how-to-sharepoint-indexed.md

Diff
@@ -7,19 +7,16 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
+ms.date: 11/20/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create an indexed SharePoint knowledge source
 
-<!--
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-sharepoint-indexed-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-knowledge-source-how-to-sharepoint-indexed-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated indexed SharePoint knowledge source how-to"
}

Explanation

This change makes small fixes to agentic-knowledge-source-how-to-sharepoint-indexed.md. The main changes are:

  1. New date added: the date November 20, 2025 was added to the document.
  2. C# include update: the include file path in the C# zone was updated so the correct file is referenced.
  3. Comment removal: the comment block around the C# zone was removed, improving the document's consistency and clarity.

These changes make it easier for users to find the information they need and put the document to more effective use.

articles/search/agentic-knowledge-source-how-to-sharepoint-remote.md

Diff
@@ -7,19 +7,16 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
+ms.date: 11/19/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create a remote SharePoint knowledge source
 
-<!--
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-sharepoint-remote-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-knowledge-source-how-to-sharepoint-remote-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated remote SharePoint knowledge source how-to"
}

Explanation

This change makes a few small fixes to agentic-knowledge-source-how-to-sharepoint-remote.md. The main changes are:

  1. New date added: the date November 19, 2025 was added to the document.
  2. C# include update: the include file path in the C# zone was corrected so the right file is referenced.
  3. Comment removal: the comment block around the C# zone was removed, making the document easier to read and understand.

With these fixes, users can find the information they need more easily, and the document's overall value improves.

articles/search/agentic-knowledge-source-how-to-web-manage.md

Diff
@@ -9,6 +9,7 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2025
 ms.topic: how-to
+ms.date: 11/19/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated web knowledge source management how-to"
}

Explanation

This change makes a small fix to agentic-knowledge-source-how-to-web-manage.md. The main change is:

  1. New date added: the date November 19, 2025 was added, making it explicit that the document's content is current.

This update makes it easier for users to confirm that the information is up to date and improves the document's trustworthiness.

articles/search/agentic-knowledge-source-how-to-web.md

Diff
@@ -9,19 +9,16 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2025
 ms.topic: how-to
+ms.date: 11/19/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create a Web Knowledge Source resource
 
-<!--
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-web-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-knowledge-source-how-to-web-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated web knowledge source how-to"
}

Explanation

This change makes a few small fixes to agentic-knowledge-source-how-to-web.md. The main changes are:

  1. New date added: the date November 19, 2025 was added, marking the content as current.
  2. C# include update: the include file path in the C# zone was corrected so the right file is referenced.
  3. Comment removal: the comment block around the C# zone was removed, making the document simpler and clearer.

These fixes make the information easier to reach and raise the overall quality of the document.

articles/search/agentic-retrieval-how-to-create-knowledge-base.md

Diff
@@ -6,20 +6,16 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 11/13/2025
+ms.date: 11/19/2025
 zone_pivot_groups: agentic-retrieval-pivots
 ---
 
 # Create a knowledge base in Azure AI Search
 
-<!--
 ::: zone pivot="csharp"
-[!INCLUDE [C#](includes/how-tos/file-name.md)]
+[!INCLUDE [C#](includes/how-tos/agentic-knowledge-source-how-to-search-index-csharp.md)]
 ::: zone-end
 
-Add C# to agentic-retrieval-pivots in zone-pivot-groups.yml, and then uncomment this section.
--->
-
 ::: zone pivot="python"
 [!INCLUDE [Python](includes/how-tos/agentic-retrieval-how-to-create-knowledge-base-python.md)]
 ::: zone-end

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated knowledge base creation how-to"
}

Explanation

This change makes several fixes to agentic-retrieval-how-to-create-knowledge-base.md. The main changes are:

  1. Date update: the document's date changed from November 13, 2025 to November 19, 2025, signaling that the information is current.
  2. C# include fix: the include file path in the C# zone was updated so an existing file is referenced.
  3. Comment removal: the unneeded comment block around the C# section was removed, improving readability.

As a result, users can understand and follow the knowledge base creation steps against up-to-date information more easily. Overall, the document's quality and usability improve.

articles/search/agentic-retrieval-overview.md

Diff
@@ -31,7 +31,7 @@ Here's what it does:
 
 This high-performance pipeline helps you generate high quality grounding data (or an answer) for your chat application, with the ability to answer complex questions quickly.
 
-Programmatically, agentic retrieval is supported through a new [Knowledge Base object](/rest/api/searchservice/knowledgebases?view=rest-searchservice-2025-11-01-preview&preserve-view=true) in the 2025-11-01-preview and in Azure SDK preview packages that provide the feature. A knowledge base's retrieval response is designed for downstream consumption by other agents and chat apps.
+Programmatically, agentic retrieval is supported through a new [Knowledge Base object](/rest/api/searchservice/knowledge-bases?view=rest-searchservice-2025-11-01-preview&preserve-view=true) in the 2025-11-01-preview and in Azure SDK preview packages that provide the feature. A knowledge base's retrieval response is designed for downstream consumption by other agents and chat apps.
 
 ## Why use agentic retrieval
 
@@ -115,6 +115,7 @@ Currently, Azure portal support for agentic retrieval is limited to the 2025-08-
   + [Blob](agentic-knowledge-source-how-to-blob.md)
   + [OneLake](agentic-knowledge-source-how-to-onelake.md)
   + [Remote SharePoint](agentic-knowledge-source-how-to-sharepoint-remote.md)
+  + [Indexed SharePoint](agentic-knowledge-source-how-to-sharepoint-indexed.md)
   + [Search index](agentic-knowledge-source-how-to-search-index.md)
   + [Web](agentic-knowledge-source-how-to-web.md)
 + [Create a knowledge base](agentic-retrieval-how-to-create-knowledge-base.md)
@@ -130,12 +131,12 @@ Currently, Azure portal support for agentic retrieval is limited to the 2025-08-
 + [Quickstart-Agentic-Retrieval: Python](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Agentic-Retrieval)
 + [Quickstart-Agentic-Retrieval: .NET](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/quickstart-agentic-retrieval)
 + [Quickstart-Agentic-Retrieval: REST](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-agentic-retrieval)
-+ [End-to-end with Azure AI Search and Azure AI Agent Service](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example)
++ [End-to-end with Azure AI Search and Foundry Agent Service](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example)
 
 ### [**REST API references**](#tab/rest-api-references)
 
 + [Knowledge Sources](/rest/api/searchservice/knowledge-sources?view=rest-searchservice-2025-11-01-preview&preserve-view=true)
-+ [Knowledge Bases](/rest/api/searchservice/knowledgebases?view=rest-searchservice-2025-11-01-preview&preserve-view=true)
++ [Knowledge Bases](/rest/api/searchservice/knowledge-bases?view=rest-searchservice-2025-11-01-preview&preserve-view=true)
 + [Knowledge Retrieval](/rest/api/searchservice/knowledge-retrieval/retrieve?view=rest-searchservice-2025-11-01-preview&preserve-view=true)
 
 ### [**Demos**](#tab/demos)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Revised agentic retrieval overview"
}

Explanation

This change makes a few small fixes to agentic-retrieval-overview.md. The main changes are:

  1. Knowledge Base object URL update: in the description of programmatic support for agentic retrieval, the Knowledge Base object URL was corrected so it links to the right resource.
  2. New link added: an item for indexed SharePoint was added to the existing list of knowledge source links, broadening the options.
  3. Service rename: the end-to-end solution description was changed from "Azure AI Agent Service" to "Foundry Agent Service" to reflect the current name.
  4. REST API reference URL fix: in the API references, the Knowledge Bases URL was corrected (knowledgebases to knowledge-bases), making it easier to access.

These fixes help users better understand agentic retrieval features and related resources, improving the document's overall usability.

articles/search/cognitive-search-concept-image-scenarios.md

Diff
@@ -6,7 +6,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 05/01/2025
+ms.date: 11/21/2025
 ms.update-cycle: 180-days
 ms.custom:
   - devx-track-csharp
@@ -15,16 +15,16 @@ ms.custom:
 
 # Extract text and information from images by using AI enrichment
 
-Images often contain useful information that's relevant in search scenarios. You can [vectorize images](search-get-started-portal-image-search.md) to represent visual content in your search index. Or, you can use [AI enrichment and skillsets](cognitive-search-concept-intro.md) to create and extract searchable *text* from images, including:
+Images often contain useful information that's relevant in search scenarios. Azure AI Search doesn't query image content in real time, but you can extract information about an image during indexing and make that content searchable. To represent images in a search index, you can use these approaches:
 
- + [GenAI Prompt](cognitive-search-skill-genai-prompt.md) to pass a prompt to a chat completion skill, requesting a description of image content.
-+ [OCR](cognitive-search-skill-ocr.md) for optical character recognition of text and digits
-+ [Image Analysis](cognitive-search-skill-image-analysis.md) that describes images through visual features
-+ [Custom skills](#passing-images-to-custom-skills) to invoke any external image processing that you want to provide
++ [Vectorize images](search-get-started-portal-image-search.md) to represent visual content as a searchable vector.
++ [Verbalize images](cognitive-search-skill-genai-prompt.md) using the GenAI Prompt skill that sends a verbalization request to a chat completion model to describe the image.
++ [Analyze images](cognitive-search-skill-image-analysis.md) using an image analysis skill to generate a text representation of an image, such as *dandelion* for a photo of a dandelion, or the color *yellow*. You can also extract metadata about the image, such as its size.
++ [Use OCR](cognitive-search-skill-ocr.md) to extract text and from photos or pictures, such as the word *STOP* in a stop sign.
 
-By using OCR, you can extract text and from photos or pictures, such as the word *STOP* in a stop sign. Through image analysis, you can generate a text representation of an image, such as *dandelion* for a photo of a dandelion, or the color *yellow*. You can also extract metadata about the image, such as its size.
+You can also create a [custom skill](#passing-images-to-custom-skills) to invoke any external image processing that you want to provide.
 
-This article covers the fundamentals of working with images in skillsets, and also describes several common scenarios, such as working with embedded images, custom skills, and overlaying visualizations on original images.
+This article focuses on image analysis and OCR, custom skills that provide external processing, working with embedded images, and overlaying visualizations on original images. If verbalization or vectorization is your preferred approach, see [Multimodal search](multimodal-search-overview.md) instead.
 
 To work with image content in a skillset, you need:
 
@@ -102,7 +102,7 @@ Metadata adjustments are captured in a complex type created for each image. You
 
    The default of 2,000 pixels for the normalized images maximum width and height is based on the maximum sizes supported by the [OCR skill](cognitive-search-skill-ocr.md) and the [image analysis skill](cognitive-search-skill-image-analysis.md). The [OCR skill](cognitive-search-skill-ocr.md) supports a maximum width and height of 4,200 for non-English languages, and 10,000 for English. If you increase the maximum limits, processing could fail on larger images depending on your skillset definition and the language of the documents. 
 
-1.  Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings. You can filter out files you don't want.
+1. Optionally, [set file type criteria](search-blob-storage-integration.md#PartsOfBlobToIndex) if the workload targets a specific file type. Blob indexer configuration includes file inclusion and exclusion settings. You can filter out files you don't want.
 
   ```json
   {

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated image scenarios concept article"
}

Explanation

This change revises cognitive-search-concept-image-scenarios.md, mainly to clarify the content and reflect current information. The specific changes are:

  1. Date update: the document's date changed from May 1, 2025 to November 21, 2025.
  2. Clearer processing model: the text now states that image content isn't queried in real time; instead, information is extracted at indexing time and made searchable, which makes the model easier to understand.
  3. Clearer techniques: an approach for representing images as searchable vectors was added, and the descriptions of the relevant skills were reorganized.
  4. Skill usage: the uses of OCR (optical character recognition), image analysis skills, and custom skills are now spelled out, giving users concrete guidance.
  5. Sharper article focus: the article now emphasizes image analysis, OCR, and custom skills, with verbalization and vectorization deferred to the multimodal search article, making it easier for readers to find the right resource.

These revisions help users understand and implement AI-based image scenarios more effectively. Overall, the document's quality and usefulness improve.

articles/search/cognitive-search-concept-intro.md

Diff
@@ -85,7 +85,7 @@ In Azure AI Search, an indexer saves the output it creates. A single indexer run
 | Data store | Required | Location | Description |
 |------------|----------|----------|-------------|
 | [searchable index](search-what-is-an-index.md) | Required | Search service | Used for full-text search and other query forms. Specifying an index is an indexer requirement. Index content is populated from skill outputs, plus any source fields that are mapped directly to fields in the index. |
-| [knowledge store](knowledge-store-concept-intro.md) | Optional | Azure Storage | Used for downstream apps like knowledge mining, data science, and multimodal search. A knowledge store is defined within a skillset. Its definition determines whether your enriched documents are projected as tables or objects (files or blobs) in Azure Storage. For [multimodal search scenarios](multimodal-search-overview.md#how-multimodal-search-works-in-azure-ai-search), you can save extracted images to the knowledge store and reference them at query time, allowing the images to be returned directly to client apps. |
+| [knowledge store](knowledge-store-concept-intro.md) | Optional | Azure Storage | Used for downstream apps like knowledge mining, data science, and multimodal search. A knowledge store is defined within a skillset. Its definition determines whether your enriched documents are projected as tables or objects (files or blobs) in Azure Storage. For [multimodal search scenarios](multimodal-search-overview.md#how-does-multimodal-search-work), you can save extracted images to the knowledge store and reference them at query time, allowing the images to be returned directly to client apps. |
 | [enrichment cache](enrichment-cache-how-to-configure.md) | Optional | Azure Storage | Used for caching enrichments for reuse in subsequent skillset executions. The cache stores imported, unprocessed content (cracked documents). It also stores the enriched documents created during skillset execution. Caching is helpful if you're using image analysis or OCR, and you want to avoid the time and expense of reprocessing image files. |
 
 Indexes and knowledge stores are fully independent of each other. While you must attach an index to satisfy indexer requirements, if your sole objective is a knowledge store, you can ignore the index after it's populated.

Summary

{
    "modification_type": "minor update",
    "modification_title": "Revised Cognitive Search concepts introduction"
}

Explanation

This change makes a couple of small fixes to cognitive-search-concept-intro.md. The specific points are:

  1. Link anchor fix: the "multimodal search scenarios" link now targets the heading "How does multimodal search work" instead of "How multimodal search works in Azure AI Search", so it points at the current section.
  2. Minor sentence restructuring: the description of the knowledge store was adjusted slightly, improving the flow of information; in particular, the reference to extracted images saved in the knowledge store is now explicit.

These changes improve the document's clarity and consistency so readers can better understand the specific features and operations. Overall, the document's quality improves.

articles/search/hybrid-search-overview.md

Diff
@@ -1,41 +1,52 @@
 ---
-title: Hybrid search
+title: Hybrid Search
 titleSuffix: Azure AI Search
-description: Describes concepts and architecture of hybrid query processing and document retrieval. Hybrid queries combine vector search and full text search.
+description: Describes concepts and architecture of hybrid query processing and document retrieval. Hybrid queries combine vector search and full-text search.
 author: robertklee
 ms.author: robertlee
 ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 07/21/2025
+ms.date: 11/21/2025
 ---
 
 # Hybrid search using vectors and full text in Azure AI Search
 
-Hybrid search is a single query request, configured for full text and vector queries, that executes against a search index containing both searchable plain text content and generated embeddings. For query purposes, hybrid search is:
+A hybrid search is a single query request configured for both full-text and vector queries. It executes against a search index that contains searchable, plain-text content and generated embeddings. For query purposes, hybrid search:
 
-+ A single query request that includes both `search` and `vectors` query parameters
-+ Executing in parallel
-+ Merging results from each query using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md)
++ Is a single query request that includes both `search` and `vectors` query parameters.
++ Executes full-text search and vector search in parallel.
++ Merges results from each query using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md).
 
-This article explains the concepts, benefits, and limitations of hybrid search. Links at the end provide instructions and next steps. You can also watch this [embedded video](#why-choose-hybrid-search) for an explanation of how hybrid retrieval contributes to high quality generative search applications.
+This article explains the concepts, benefits, and limitations of hybrid search. Links at the end provide usage instructions and next steps. You can also watch the [embedded video](#why-use-hybrid-search) for an explanation of how hybrid retrieval contributes to high-quality generative search applications.
+
+## Why use hybrid search?
+
+Hybrid search combines the strengths of vector search and keyword search. The advantage of vector search is finding information that's conceptually similar to your search query, even if there are no keyword matches in the inverted index. The advantage of keyword or full-text search is precision, with the ability to apply optional semantic ranking that improves the quality of the initial results. Some scenarios, such as querying over product codes, highly specialized jargon, dates, and people's names, can perform better with keyword search because it can identify exact matches.
+
+Benchmark testing on real-world and benchmark datasets indicates that hybrid retrieval with semantic ranker offers significant benefits in search relevance.
+
+The following video explains how hybrid retrieval gives you optimal grounding data for generating useful AI responses.
+
+> [!VIDEO https://www.youtube.com/embed/Xwx1DJ0OqCk]
 
 ## How does hybrid search work?
 
-In a search index, vector fields containing embeddings coexist with textual and numerical fields, allowing you to formulate hybrid queries that execute in parallel. Hybrid queries can take advantage of existing text-based functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) on your text fields, while executing a similarity search against vectors, all in a single search request.
+In a search index, vector fields containing embeddings coexist with textual and numerical fields, allowing you to formulate hybrid queries that execute simultaneously. Hybrid queries take advantage of existing text-based functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) on your text fields, while executing a similarity search against vectors in a single search request.
 
-Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 for text, and Hierarchical Navigable Small World (HNSW) and exhaustive K Nearest Neighbors (eKNN) for vectors. A [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) algorithm merges the results. The query response provides just one result set, using RRF to rank the unified results.
+Hybrid search combines results from both full-text and vector queries, which use different ranking functions such as BM25 for text, and Hierarchical Navigable Small World (HNSW) and exhaustive K Nearest Neighbors (eKNN) for vectors. An [RRF](hybrid-search-ranking.md) algorithm merges the results. The query response provides just one result set, using RRF to rank the unified results.
 
 ## Structure of a hybrid query
 
-Hybrid search is predicated on having a search index that contains fields of various [data types](/rest/api/searchservice/supported-data-types), including plain text and numbers, geo coordinates if you want geospatial search, and vectors for a mathematical representation of a chunk of text. You can use almost all query capabilities in Azure AI Search with a vector query, except for pure text client-side interactions such as autocomplete and suggestions.
+Hybrid search relies on a search index that contains fields of various [data types](/rest/api/searchservice/supported-data-types), including plain text and numbers, geo coordinates if you want geospatial search, and vectors to mathematically represent a chunk of text. You can use almost all query capabilities in Azure AI Search with a vector query, except for pure text client-side interactions, such as autocomplete and suggestions.
 
-A representative hybrid query might be as follows (notice that the vector queries have placeholder values for brevity):
+A representative hybrid query might look like the following. For brevity, the vector queries have placeholder values.
 
 ```http
 POST https://{{searchServiceName}}.search.windows.net/indexes/hotels-vector-quickstart/docs/search?api-version=2025-09-01
-  content-type: application/JSON
+content-type: application/JSON
+
 {
     "count": true,
     "search": "historic hotel walk to restaurants and shopping",
@@ -69,22 +80,22 @@ POST https://{{searchServiceName}}.search.windows.net/indexes/hotels-vector-quic
 }
 ```
 
-Key points include:
+**Key points:**
 
-+ `search` specifies a single full text search query.
-+ `vectorQueries` for vector queries, which can be multiple, targeting multiple vector fields. If the embedding space includes multi-lingual content, vector queries can find the match with no language analyzers or translation required. If you're also using the semantic ranker, set `k` to 50 to maximize its inputs.
-+ `select` specifies which fields to return in results, which should be text fields that are human readable if you're showing them to users or sending them to an LLM.
-+ `filters` can specify geospatial search or other include and exclude criteria, such as whether parking is included. The geospatial query in this example finds hotels within a 300-kilometer radius of Washington D.C. You can apply the filter at the beginning or end of query processing. If you use the semantic ranker, you probably want post-filtering as the last step but you should test to confirm which behavior is best for your queries.
++ `search` specifies a single full-text search query.
++ `vectorQueries` specifies vector queries, which can be multiple, targeting multiple vector fields. If the embedding space includes multi-lingual content, vector queries can find the match with no language analyzers or translation required. If you're using semantic ranker, set `k` to 50 to maximize its inputs.
++ `select` specifies which fields to return in results, which should be human-readable text fields if you're showing them to users or sending them to a large language model (LLM).
++ `filters` can specify geospatial search or other inclusion and exclusion criteria, such as whether parking is included. The geospatial query in this example finds hotels within a 300-kilometer radius of Washington D.C. You can apply the filter at the beginning or end of query processing. If you're using semantic ranker, you probably want post-filtering as the last step, but you should test to confirm which behavior is best for your queries.
 + `facets` can be used to compute facet buckets over results that are returned from hybrid queries.
-+ `queryType=semantic` invokes semantic ranker, applying machine reading comprehension to surface more relevant search results. Semantic ranking is optional. If you aren't using that feature, remove the last three lines of the hybrid query.
++ `queryType=semantic` invokes [semantic ranker](semantic-search-overview.md), applying machine reading comprehension to surface more relevant search results. Semantic ranking is optional. If you aren't using this feature, remove the last three lines of the hybrid query.
 
-Filters and facets target data structures within the index that are distinct from the inverted indexes used for full text search and the vector indexes used for vector search. As such, when filters and faceted operations execute, the search engine can apply the operational result to the hybrid search results in the response.
+Filters and facets target data structures within the index that are distinct from the inverted indexes used for full-text search and the vector indexes used for vector search. As such, when filters and faceted operations execute, the search engine can apply the operational result to the hybrid search results in the response.
 
Notice how there's no `orderby` in the query. Explicit sort orders override relevance-ranked results, so if you want similarity and BM25 relevance, omit sorting in your query.
 
-A response from the above query might look like this:
+A response from the query might look like the following JSON.
 
-```http
+```json
 {
     "@odata.count": 3,
     "@search.facets": {
@@ -128,17 +139,7 @@ A response from the above query might look like this:
 }
 ```
 
-## Why choose hybrid search?
-
-Hybrid search combines the strengths of vector search and keyword search. The advantage of vector search is finding information that's conceptually similar to your search query, even if there are no keyword matches in the inverted index. The advantage of keyword or full text search is precision, with the ability to apply optional semantic ranking that improves the quality of the initial results. Some scenarios - such as querying over product codes, highly specialized jargon, dates, and people's names - can perform better with keyword search because it can identify exact matches.
-
-Benchmark testing on real-world and benchmark datasets indicates that hybrid retrieval with semantic ranker offers significant benefits in search relevance.
-
-The following video explains how hybrid retrieval gives you optimal grounding data for generating useful AI responses.
-
-> [!VIDEO https://www.youtube.com/embed/Xwx1DJ0OqCk]
-
-## See also
+## Related content
 
 + [Create a hybrid query](hybrid-search-how-to-query.md)
 + [Relevance scoring in hybrid search](hybrid-search-ranking.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "ハイブリッド検索の概要の更新"
}

Explanation

This change makes extensive revisions to the hybrid-search-overview.md file, improving its overall structure and content. The main changes are as follows:

  1. Title and description fixes: The title was changed from "Hybrid search" to "Hybrid Search", and "full text" in the description was corrected to "full-text". This improves accuracy and consistency.

  2. Structural improvements: A new section, "Why choose hybrid search?", was added that explains the benefits of hybrid search in detail. It compares the strengths of vector search and keyword search and describes the concrete advantages.

  3. Clearer content: The explanations of how hybrid search works and how queries are structured were reorganized into simpler, easier-to-understand prose. The example hybrid query and its response format were also reformatted for readability.

  4. Related content heading change: The "See also" heading was renamed to "Related content", making the list of related resources easier to find.

  5. Video link update and formatting: The video link was updated and embedded so that the information is conveyed more intuitively.

These changes make the document easier to use and strengthen its support for readers deepening their understanding of hybrid search. Overall, the flow and logic of the information are improved, giving readers very clear guidance.
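To supplement the Reciprocal Rank Fusion (RRF) description in the diff above, here is a minimal, self-contained sketch of how RRF merges two ranked result lists. The constant `k = 60` and the document IDs are illustrative assumptions; Azure AI Search's internal implementation may differ in its details.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Reciprocal Rank Fusion: score(d) = sum over each ranked list of 1 / (k + rank(d)).
// Documents absent from a list contribute nothing from that list.
static Dictionary<string, double> RrfFuse(List<List<string>> rankings, int k = 60)
{
    var scores = new Dictionary<string, double>();
    foreach (var ranking in rankings)
    {
        for (int rank = 1; rank <= ranking.Count; rank++)
        {
            var doc = ranking[rank - 1];
            scores[doc] = scores.GetValueOrDefault(doc) + 1.0 / (k + rank);
        }
    }
    return scores;
}

// Illustrative rankings: one from BM25 (full-text), one from vector similarity.
var bm25Ranking = new List<string> { "doc1", "doc2", "doc3" };
var vectorRanking = new List<string> { "doc2", "doc4", "doc1" };

var fused = RrfFuse(new List<List<string>> { bm25Ranking, vectorRanking })
    .OrderByDescending(kv => kv.Value)
    .ToList();

foreach (var kv in fused)
{
    Console.WriteLine($"{kv.Key}: {kv.Value:F4}");
}
```

Note how a document that appears near the top of both lists ("doc2") outranks one that is first in only one list, which is the behavior the overview describes for unified hybrid results.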

articles/search/includes/how-tos/agentic-knowledge-source-how-to-blob-csharp.md

Diff
@@ -0,0 +1,205 @@
+---
+manager: nitinme
+author: heidisteen
+ms.author: heidist
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+Use a *blob knowledge source* to index and query Azure blob content in an agentic retrieval pipeline. [Knowledge sources](../../agentic-knowledge-source-overview.md) are created independently, referenced in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md), and used as grounding data when an agent or chatbot calls a [retrieve action](../../agentic-retrieval-how-to-retrieve.md) at query time.
+
+Unlike a [search index knowledge source](../../agentic-knowledge-source-how-to-search-index.md), which specifies an existing and qualified index, a blob knowledge source specifies an external data source, models, and properties to automatically generate the following Azure AI Search objects:
+
++ A data source that represents a blob container.
++ A skillset that chunks and optionally vectorizes multimodal content from the container.
++ An index that stores enriched content and meets the criteria for agentic retrieval.
++ An indexer that uses the previous objects to drive the indexing and enrichment pipeline.
+
+> [!NOTE]
+> If user access is specified at the document (blob) level in Azure Storage, a knowledge source can carry permission metadata forward to indexed content in Azure AI Search. For more information, see [ADLS Gen2 permission metadata](/azure/search/search-indexer-access-control-lists-and-role-based-access) or [Blob RBAC scopes](/azure/search/search-blob-indexer-role-based-access).
+
+## Prerequisites
+
++ Azure AI Search in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md).
+
++ An [Azure Blob Storage](/azure/storage/common/storage-account-create) or [Azure Data Lake Storage (ADLS) Gen2](/azure/storage/blobs/create-data-lake-storage-account) account. 
+
++ A blob container with [supported content types](../../search-how-to-index-azure-blob-storage.md#supported-document-formats) for text content. For optional image verbalization, the supported content type depends on whether your chat completion model can analyze and describe the image file.
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md), but you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
+
+> [!NOTE]
+> Although you can use the Azure portal to create blob knowledge sources, the portal uses the 2025-08-01-preview, which uses the previous "knowledge agent" terminology and doesn't support all 2025-11-01-preview features. For help with breaking changes, see [Migrate your agentic retrieval code](../../agentic-retrieval-how-to-migrate.md).
+
+## Check for existing knowledge sources
+
+[!INCLUDE [Check for existing knowledge sources using C#](knowledge-source-check-csharp.md)]
+
+The following JSON is an example response for a blob knowledge source.
+
+```json
+{
+  "name": "my-blob-ks",
+  "kind": "azureBlob",
+  "description": "A sample blob knowledge source.",
+  "encryptionKey": null,
+  "azureBlobParameters": {
+    "connectionString": "<REDACTED>",
+    "containerName": "blobcontainer",
+    "folderPath": null,
+    "isADLSGen2": false,
+    "ingestionParameters": {
+      "disableImageVerbalization": false,
+      "ingestionPermissionOptions": [],
+      "contentExtractionMode": "standard",
+      "identity": null,
+      "embeddingModel": {
+        "kind": "azureOpenAI",
+        "azureOpenAIParameters": {
+          "resourceUri": "<REDACTED>",
+          "deploymentId": "text-embedding-3-large",
+          "apiKey": "<REDACTED>",
+          "modelName": "text-embedding-3-large",
+          "authIdentity": null
+        }
+      },
+      "chatCompletionModel": {
+        "kind": "azureOpenAI",
+        "azureOpenAIParameters": {
+          "resourceUri": "<REDACTED>",
+          "deploymentId": "gpt-5-mini",
+          "apiKey": "<REDACTED>",
+          "modelName": "gpt-5-mini",
+          "authIdentity": null
+        }
+      },
+      "ingestionSchedule": null,
+      "assetStore": null,
+      "aiServices": {
+        "uri": "<REDACTED>",
+        "apiKey": "<REDACTED>"
+      }
+    },
+    "createdResources": {
+      "datasource": "my-blob-ks-datasource",
+      "indexer": "my-blob-ks-indexer",
+      "skillset": "my-blob-ks-skillset",
+      "index": "my-blob-ks-index"
+    }
+  }
+}
+```
+
+> [!NOTE]
+> Sensitive information is redacted. The generated resources appear at the end of the response.
+
+## Create a knowledge source
+
+Run the following code to [create a blob knowledge source](/dotnet/api/azure.search.documents.indexes.models.azureblobknowledgesource?view=azure-dotnet-preview&preserve-view=true).
+
+```csharp
+// Create a blob knowledge source
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.KnowledgeBases.Models;
+using Azure;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+var chatCompletionParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiGptDeployment,
+    ModelName = aoaiGptModel
+};
+
+var embeddingParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiEmbeddingDeployment,
+    ModelName = aoaiEmbeddingModel
+};
+
+var ingestionParams = new KnowledgeSourceIngestionParameters
+{
+    DisableImageVerbalization = false,
+    ChatCompletionModel = new KnowledgeBaseAzureOpenAIModel(azureOpenAIParameters: chatCompletionParams),
+    EmbeddingModel = new KnowledgeSourceAzureOpenAIVectorizer
+    {
+        AzureOpenAIParameters = embeddingParams
+    }
+};
+
+var blobParams = new AzureBlobKnowledgeSourceParameters(
+    connectionString: connectionString,
+    containerName: containerName
+)
+{
+    IsAdlsGen2 = false,
+    IngestionParameters = ingestionParams
+};
+
+var knowledgeSource = new AzureBlobKnowledgeSource(
+    name: "my-blob-ks",
+    azureBlobParameters: blobParams
+)
+{
+    Description = "This knowledge source pulls from a blob storage container."
+};
+
+await indexClient.CreateOrUpdateKnowledgeSourceAsync(knowledgeSource);
+Console.WriteLine($"Knowledge source '{knowledgeSource.Name}' created or updated successfully.");
+```
+
+### Source-specific properties
+
+You can pass the following properties to create a blob knowledge source.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `name` | The name of the knowledge source, which must be unique within the knowledge sources collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | No | Yes |
+| `Description` | A description of the knowledge source. | String | Yes | No |
+| `encryptionKey` | A [customer-managed key](../../search-security-manage-encryption-keys.md) to encrypt sensitive information in both the knowledge source and the generated objects. | Object | Yes | No |
+| `chatCompletionParams` | Parameters specific to chat completion models used for query planning and optional answer synthesis when the retrieval reasoning effort is low or medium. | Object |  | No |
+| `embeddingParams` | Parameters specific to embedding models used if you want to vectorize chunks of content.  | Object |  | No |
+| `azureBlobParameters` | Parameters specific to blob knowledge sources: `connectionString`, `containerName`, `folderPath`, and `isAdlsGen2`. | Object |  | No |
+| `connectionString` | A key-based [connection string](../../search-how-to-index-azure-blob-storage.md#supported-credentials-and-connection-strings) or, if you're using a managed identity, the resource ID. | String | No | Yes |
+| `containerName` | The name of the blob storage container. | String | No | Yes |
+| `folderPath` | A folder within the container. | String | No | No |
+| `isAdlsGen2` | The default is `False`. Set to `True` if you're using an ADLS Gen2 storage account. | Boolean | No | No |
+
+### `ingestionParameters` properties
+
+[!INCLUDE [C# ingestionParameters properties](knowledge-source-ingestion-parameters-csharp.md)]
+
+## Check ingestion status
+
+[!INCLUDE [C# knowledge source status](knowledge-source-status-csharp.md)]
+
+## Review the created objects
+
+When you create a blob knowledge source, your search service also creates an indexer, index, skillset, and data source. We don't recommend that you edit these objects, as introducing an error or incompatibility can break the pipeline.
+
+After you create a knowledge source, the response lists the created objects. These objects are created according to a fixed template, and their names are based on the name of the knowledge source. You can't change the object names.
+
+We recommend using the Azure portal to validate output creation. The workflow is:
+
+1. Check the indexer for success or failure messages. Connection or quota errors appear here.
+1. Check the index for searchable content. Use Search Explorer to run queries.
+1. Check the skillset to learn how your content is chunked and optionally vectorized.
+1. Check the data source for connection details. Our example uses API keys for simplicity, but you can use Microsoft Entra ID for authentication and role-based access control for authorization.
+
+## Assign to a knowledge base
+
+If you're satisfied with the knowledge source, continue to the next step: specify the knowledge source in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md).
+
+After the knowledge base is configured, use the [retrieve action](../../agentic-retrieval-how-to-retrieve.md) to query the knowledge source.
+
+## Delete a knowledge source
+
+[!INCLUDE [Delete knowledge source using C#](knowledge-source-delete-csharp.md)]

Summary

{
    "modification_type": "new feature",
    "modification_title": "Blob 知識ソースの使用方法ガイドの追加"
}

Explanation

This change adds a new guide on using a blob knowledge source in an Azure agentic retrieval pipeline. The specific points are as follows:

  1. New file created: A new file titled agentic-knowledge-source-how-to-blob-csharp.md was created, explaining in detail how to configure and use a knowledge source based on Azure Blob Storage.

  2. Definition of the knowledge source: The guide covers the knowledge source concept, specifically how to index and query content in blob storage. A blob knowledge source automatically generates a data source for the container, a skillset, an index, and an indexer.

  3. Prerequisites stated: Prerequisites for Azure AI Search are listed, including a region that supports agentic retrieval, a blob storage account, and the latest .NET SDK. Notes on the permissions needed to control user access are also included.

  4. Implementation example: A C# code snippet for creating a blob knowledge source is provided, helping developers understand how to implement it in practice.

  5. Object creation and management guidance: The guide also describes the objects generated after the knowledge source is created (the indexer, index, skillset, and so on) and explains why editing them isn't recommended.

  6. Assignment to a knowledge base: As the next step after configuring the knowledge source, the guide describes how to specify it in a knowledge base.

This new guide strengthens support for users integrating Azure Blob Storage with agentic retrieval and provides information for using knowledge sources more effectively. Overall, this addition enriches the Azure documentation and helps deepen user understanding.
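The diffed guide above notes that the generated objects follow a fixed naming template based on the knowledge source name. The helper below is a hypothetical illustration of that template, mirroring the `createdResources` names in the example response; it isn't part of the SDK.

```csharp
using System;

// Hypothetical helper mirroring the fixed naming template shown in the
// example response ("createdResources"): <name>-datasource, <name>-indexer, etc.
static (string Datasource, string Indexer, string Skillset, string Index)
    GeneratedNames(string knowledgeSourceName) =>
    ($"{knowledgeSourceName}-datasource",
     $"{knowledgeSourceName}-indexer",
     $"{knowledgeSourceName}-skillset",
     $"{knowledgeSourceName}-index");

var names = GeneratedNames("my-blob-ks");

// These are the object names to look for in the Azure portal when validating output.
Console.WriteLine(names.Indexer);
Console.WriteLine(names.Index);
```

Knowing the derived names up front makes the portal validation workflow (check indexer, index, skillset, then data source) quicker, since the objects can't be renamed.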

articles/search/includes/how-tos/agentic-knowledge-source-how-to-onelake-csharp.md

Diff
@@ -0,0 +1,193 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/20/2025
+---
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+Use a *OneLake knowledge source* to index and query Microsoft OneLake files in an agentic retrieval pipeline. [Knowledge sources](../../agentic-knowledge-source-overview.md) are created independently, referenced in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md), and used as grounding data when an agent or chatbot calls a [retrieve action](../../agentic-retrieval-how-to-retrieve.md) at query time.
+
+When you create a OneLake knowledge source, you specify an external data source, models, and properties to automatically generate the following Azure AI Search objects:
+
++ A data source that represents a lakehouse.
++ A skillset that chunks and optionally vectorizes multimodal content from the lakehouse.
++ An index that stores enriched content and meets the criteria for agentic retrieval.
++ An indexer that uses the previous objects to drive the indexing and enrichment pipeline.
+
+The generated indexer conforms to the *OneLake indexer*, whose prerequisites, supported tasks, supported document formats, supported shortcuts, and limitations also apply to OneLake knowledge sources. For more information, see the [OneLake indexer documentation](../../search-how-to-index-onelake-files.md).
+
+## Prerequisites
+
++ Azure AI Search in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md).
+
++ Completion of the [OneLake indexer prerequisites](../../search-how-to-index-onelake-files.md#prerequisites).
+
++ Completion of the [OneLake indexer data preparation](../../search-how-to-index-onelake-files.md#prepare-data-for-indexing).
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md), but you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
+
+## Check for existing knowledge sources
+
+[!INCLUDE [Check for existing knowledge sources using C#](knowledge-source-check-csharp.md)]
+
+The following JSON is an example response for a OneLake knowledge source.
+
+```json
+{
+  "name": "my-onelake-ks",
+  "kind": "indexedOneLake",
+  "description": "A sample indexed OneLake knowledge source.",
+  "encryptionKey": null,
+  "indexedOneLakeParameters": {
+    "fabricWorkspaceId": "<REDACTED>",
+    "lakehouseId": "<REDACTED>",
+    "targetPath": null,
+    "ingestionParameters": {
+      "disableImageVerbalization": false,
+      "ingestionPermissionOptions": [],
+      "contentExtractionMode": "standard",
+      "identity": null,
+      "embeddingModel": {
+        "kind": "azureOpenAI",
+        "azureOpenAIParameters": {
+          "resourceUri": "<REDACTED>",
+          "deploymentId": "text-embedding-3-large",
+          "apiKey": "<REDACTED>",
+          "modelName": "text-embedding-3-large"
+        }
+      },
+      "chatCompletionModel": {
+        "kind": "azureOpenAI",
+        "azureOpenAIParameters": {
+          "resourceUri": "<REDACTED>",
+          "deploymentId": "gpt-5-mini",
+          "apiKey": "<REDACTED>",
+          "modelName": "gpt-5-mini"
+        }
+      },
+      "ingestionSchedule": null,
+      "aiServices": {
+        "uri": "<REDACTED>",
+        "apiKey": "<REDACTED>"
+      }
+    },
+    "createdResources": {
+    "datasource": "my-onelake-ks-datasource",
+    "indexer": "my-onelake-ks-indexer",
+    "skillset": "my-onelake-ks-skillset",
+    "index": "my-onelake-ks-index"
+    }
+  }
+}
+```
+
+> [!NOTE]
+> Sensitive information is redacted. The generated resources appear at the end of the response.
+
+## Create a knowledge source
+
+Run the following code to create a OneLake knowledge source.
+
+```csharp
+// Create an IndexedOneLake knowledge source
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.KnowledgeBases.Models;
+using Azure;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+var chatCompletionParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiGptDeployment,
+    ModelName = aoaiGptModel
+};
+
+var embeddingParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiEmbeddingDeployment,
+    ModelName = aoaiEmbeddingModel
+};
+
+var ingestionParams = new KnowledgeSourceIngestionParameters
+{
+    DisableImageVerbalization = false,
+    ChatCompletionModel = new KnowledgeBaseAzureOpenAIModel(azureOpenAIParameters: chatCompletionParams),
+    EmbeddingModel = new KnowledgeSourceAzureOpenAIVectorizer
+    {
+        AzureOpenAIParameters = embeddingParams
+    }
+};
+
+var oneLakeParams = new IndexedOneLakeKnowledgeSourceParameters(
+    fabricWorkspaceId: fabricWorkspaceId,
+    lakehouseId: lakehouseId)
+{
+    IngestionParameters = ingestionParams
+};
+
+var knowledgeSource = new IndexedOneLakeKnowledgeSource(
+    name: "my-onelake-ks",
+    indexedOneLakeParameters: oneLakeParams)
+{
+    Description = "This knowledge source pulls content from a lakehouse."
+};
+
+await indexClient.CreateOrUpdateKnowledgeSourceAsync(knowledgeSource);
+Console.WriteLine($"Knowledge source '{knowledgeSource.Name}' created or updated successfully.");
+```
+
+### Source-specific properties
+
+You can pass the following properties to create a OneLake knowledge source.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `Name` | The name of the knowledge source, which must be unique within the knowledge sources collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | Yes | Yes |
+| `Description` | A description of the knowledge source. | String | Yes | No |
+| `EncryptionKey` | A [customer-managed key](../../search-security-manage-encryption-keys.md) to encrypt sensitive information in both the knowledge source and the generated objects. | Object | Yes | No |
+| `IndexedOneLakeKnowledgeSourceParameters` | Parameters specific to OneLake knowledge sources: `fabricWorkspaceId`, `lakehouseId`, and `targetPath`. | Object |  | Yes |
+| `fabricWorkspaceId` | The GUID of the workspace that contains the lakehouse. | String | No | Yes |
+| `lakehouseId` | The GUID of the lakehouse. | String | No | Yes |
+| `targetPath` | A folder or shortcut within the lakehouse. When unspecified, the entire lakehouse is indexed. | String | No | No |
+
+### `ingestionParameters` properties
+
+[!INCLUDE [C# ingestionParameters properties](knowledge-source-ingestion-parameters-csharp.md)]
+
+## Check ingestion status
+
+[!INCLUDE [C# knowledge source status](knowledge-source-status-csharp.md)]
+
+## Review the created objects
+
+When you create a OneLake knowledge source, your search service also creates an indexer, index, skillset, and data source. We don't recommend that you edit these objects, as introducing an error or incompatibility can break the pipeline.
+
+After you create a knowledge source, the response lists the created objects. These objects are created according to a fixed template, and their names are based on the name of the knowledge source. You can't change the object names.
+
+We recommend using the Azure portal to validate output creation. The workflow is:
+
+1. Check the indexer for success or failure messages. Connection or quota errors appear here.
+1. Check the index for searchable content. Use Search Explorer to run queries.
+1. Check the skillset to learn how your content is chunked and optionally vectorized.
+1. Check the data source for connection details. Our example uses API keys for simplicity, but you can use Microsoft Entra ID for authentication and role-based access control for authorization.
+
+## Assign to a knowledge base
+
+If you're satisfied with the knowledge source, continue to the next step: specify the knowledge source in a [knowledge base](../../search-agentic-retrieval-how-to-create.md).
+
+For any knowledge base that specifies a OneLake knowledge source, be sure to set `includeReferenceSourceData` to `true`. This step is necessary for pulling the source document URL into the citation.
+
+After the knowledge base is configured, use the [retrieve action](../../agentic-retrieval-how-to-retrieve.md) to query the knowledge source.
+
+## Delete a knowledge source
+
+[!INCLUDE [Delete knowledge source using C#](knowledge-source-delete-csharp.md)]

Summary

{
    "modification_type": "new feature",
    "modification_title": "OneLake 知識ソースの使用方法ガイドの追加"
}

Explanation

This change adds a new guide on using a OneLake knowledge source to index and query Microsoft OneLake files. The main points are as follows:

  1. New file created: A new file named agentic-knowledge-source-how-to-onelake-csharp.md was created, explaining in detail how to configure and use a OneLake knowledge source.

  2. Overview of the knowledge source: The guide explains how the knowledge source uses Microsoft OneLake files for indexing and querying within an agentic retrieval pipeline. Unlike a blob knowledge source, a OneLake knowledge source specifies a lakehouse data source and generates the required objects.

  3. Prerequisites stated: Prerequisites for using Azure AI Search are listed, including availability in a region that supports agentic retrieval, completion of the OneLake indexer preparation, the latest Azure SDK, and the necessary permissions.

  4. Implementation example: A C# code example for creating a OneLake knowledge source is shown, helping developers understand how to proceed with the implementation.

  5. Object creation and management guidance: The guide also details how to manage the objects generated after the knowledge source is created (the indexer, index, skillset, data source, and so on). Editing these objects isn't recommended, and the reason is explained.

  6. Assignment to a knowledge base: As the step after configuring the knowledge source, the guide covers how to specify the OneLake knowledge source in a knowledge base, pointing out that `includeReferenceSourceData` must be set.

This new guide helps users deepen their understanding and supports actual implementation when integrating OneLake with Azure's search service. Overall, it provides clear and comprehensive introductory information that improves user convenience.
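The diffed guide above requires `fabricWorkspaceId` and `lakehouseId` to be the GUIDs of the Fabric workspace and the lakehouse. A quick local check before calling `CreateOrUpdateKnowledgeSourceAsync` can catch typos early. This is an illustrative sketch, and the sample IDs are placeholders.

```csharp
using System;

// Illustrative check: both the Fabric workspace ID and the lakehouse ID must be GUIDs.
static bool IsValidGuid(string value) => Guid.TryParse(value, out _);

var fabricWorkspaceId = "aaaabbbb-0000-cccc-1111-dddd2222eeee"; // placeholder GUID
var lakehouseId = "not-a-guid";                                 // deliberately invalid

Console.WriteLine(IsValidGuid(fabricWorkspaceId)); // True
Console.WriteLine(IsValidGuid(lakehouseId));       // False
```

Validating locally avoids a round trip to the service for an error the client can detect on its own.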

articles/search/includes/how-tos/agentic-knowledge-source-how-to-search-index-csharp.md

Diff
@@ -0,0 +1,101 @@
+---
+manager: nitinme
+author: heidisteen
+ms.author: heidist
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+A *search index knowledge source* specifies a connection to an Azure AI Search index that provides searchable content in an agentic retrieval pipeline. [Knowledge sources](../../agentic-knowledge-source-overview.md) are created independently, referenced in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md), and used as grounding data when an agent or chatbot calls a [retrieve action](../../agentic-retrieval-how-to-retrieve.md) at query time.
+
+## Prerequisites
+
++ Azure AI Search in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md). 
+
++ A search index containing plain text or vector content with a semantic configuration. [Review the index criteria for agentic retrieval](../../agentic-retrieval-how-to-create-index.md#criteria-for-agentic-retrieval). The index must be on the same search service as the knowledge base.
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md), but you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
+
+> [!NOTE]
+> Although you can use the Azure portal to create search index knowledge sources, the portal uses the 2025-08-01-preview, which uses the previous "knowledge agent" terminology and doesn't support all 2025-11-01-preview features. For help with breaking changes, see [Migrate your agentic retrieval code](../../agentic-retrieval-how-to-migrate.md).
+
+## Check for existing knowledge sources
+
+[!INCLUDE [Check for existing knowledge sources using C#](knowledge-source-check-csharp.md)]
+
+The following JSON is an example response for a search index knowledge source. Notice that the knowledge source specifies a single index name and which fields in the index to include in the query.
+
+```json
+{
+  "SearchIndexParameters": {
+    "SearchIndexName": "earth-at-night",
+    "SourceDataFields": [
+      {
+        "Name": "id"
+      },
+      {
+        "Name": "page_chunk"
+      },
+      {
+        "Name": "page_number"
+      }
+    ],
+    "SearchFields": [],
+    "SemanticConfigurationName": "semantic-config"
+  },
+  "Name": "earth-knowledge-source",
+  "Description": null,
+  "EncryptionKey": null,
+  "ETag": "<redacted>"
+}
+```
+
+## Create a knowledge source
+
+Run the following code to create a search index knowledge source.
+
+```csharp
+using Azure.Search.Documents.Indexes.Models;
+
+// Create the knowledge source
+var indexKnowledgeSource = new SearchIndexKnowledgeSource(
+    name: knowledgeSourceName,
+    searchIndexParameters: new SearchIndexKnowledgeSourceParameters(searchIndexName: indexName)
+    {
+        SourceDataFields = { new SearchIndexFieldReference(name: "id"), new SearchIndexFieldReference(name: "page_chunk"), new SearchIndexFieldReference(name: "page_number") }
+    }
+);
+
+await indexClient.CreateOrUpdateKnowledgeSourceAsync(indexKnowledgeSource);
+Console.WriteLine($"Knowledge source '{knowledgeSourceName}' created or updated successfully.");
+```
+
+### Source-specific properties
+
+You can pass the following properties to create a search index knowledge source.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `Name` | The name of the knowledge source, which must be unique within the knowledge sources collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | No | Yes |
+| `Description` | A description of the knowledge source. | String | Yes | No |
+| `EncryptionKey` | A [customer-managed key](../../search-security-manage-encryption-keys.md) to encrypt sensitive information in both the knowledge source and the generated objects. | Object | Yes | No |
+| `SearchIndexParameters` | Parameters specific to search index knowledge sources: `SearchIndexName`, `SemanticConfigurationName`, `SourceDataFields`, and `SearchFields`. | Object | Yes | Yes |
+| `SearchIndexName` | The name of the existing search index. | String | Yes | Yes |
+| `SemanticConfigurationName` | Overrides the default semantic configuration for the search index. | String | Yes | No |
+| `SourceDataFields` | The index fields returned when you specify `IncludeReferenceSourceData` in the knowledge base definition. These fields are used for citations and should be `retrievable`. Examples include the document name, file name, page numbers, or chapter numbers. | Array | Yes | No |
+| `SearchFields` | The index fields to specifically search against. When unspecified, all fields are searched. | Array | Yes | No |
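
As a sketch of how the optional properties in this table map onto the C# model, the following extends the earlier creation example. The `SemanticConfigurationName` value and the `Description` are illustrative, and the property names are inferred from the JSON response shown earlier in this article rather than taken from SDK reference documentation, so verify them against the preview SDK before relying on this.

```csharp
using Azure.Search.Documents.Indexes.Models;

// Illustrative sketch: override the semantic configuration and add a
// description. Property names mirror the JSON response shown earlier.
var describedKnowledgeSource = new SearchIndexKnowledgeSource(
    name: "earth-knowledge-source",
    searchIndexParameters: new SearchIndexKnowledgeSourceParameters(searchIndexName: "earth-at-night")
    {
        SemanticConfigurationName = "semantic-config",
        SourceDataFields =
        {
            new SearchIndexFieldReference(name: "id"),
            new SearchIndexFieldReference(name: "page_number")
        }
    })
{
    Description = "Earth at Night index with an explicit semantic configuration."
};

await indexClient.CreateOrUpdateKnowledgeSourceAsync(describedKnowledgeSource);
```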
+
+## Assign to a knowledge base
+
+If you're satisfied with the knowledge source, continue to the next step: specify the knowledge source in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md).
+
+After the knowledge base is configured, use the [retrieve action](../../agentic-retrieval-how-to-retrieve.md) to query the knowledge source.
+
+## Delete a knowledge source
+
+[!INCLUDE [Delete knowledge source using C#](knowledge-source-delete-csharp.md)]

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on using search index knowledge sources"
}

Explanation

This change adds a new guide on configuring search index knowledge sources that work with Azure AI Search indexes. The main points are:

  1. New file created: A new file named agentic-knowledge-source-how-to-search-index-csharp.md explains in detail how to configure and use a search index knowledge source.

  2. Definition of the knowledge source: A search index knowledge source specifies a connection that provides searchable content within an agentic retrieval pipeline. It serves as the grounding data that agents and chatbots call at query time.

  3. Prerequisites stated: Setting up the knowledge source has several prerequisites, such as using Azure AI Search in a region that supports agentic retrieval and preparing a suitable search index.

  4. Concrete examples provided: C# code examples for creating a search index knowledge source help developers with real implementations.

  5. Object creation and properties: The guide explains the properties required to create a knowledge source (for example, name, description, and encryption key) and how to use them, including the index name and fields to use.

  6. Assignment to a knowledge base: After the knowledge source is configured, the guide describes the next step of specifying it in a knowledge base, which is required for efficient agentic retrieval.

This new guide supports knowledge source management in Azure AI Search and is designed to help users implement search functionality more smoothly. Overall, it provides clear, substantial content for using knowledge sources effectively.

articles/search/includes/how-tos/agentic-knowledge-source-how-to-sharepoint-indexed-csharp.md

Diff
@@ -0,0 +1,187 @@
+---
+manager: nitinme
+author: heidisteen
+ms.author: heidist
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/20/2025
+---
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+Use an *indexed SharePoint knowledge source* to index and query SharePoint content in an agentic retrieval pipeline. [Knowledge sources](../../agentic-knowledge-source-overview.md) are created independently, referenced in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md), and used as grounding data when an agent or chatbot calls a [retrieve action](../../agentic-retrieval-how-to-retrieve.md) at query time.
+
+When you create an indexed SharePoint knowledge source, you specify a SharePoint connection string, models, and properties to automatically generate the following Azure AI Search objects:
+
++ A data source that points to SharePoint sites.
++ A skillset that chunks and optionally vectorizes multimodal content.
++ An index that stores enriched content and meets the criteria for agentic retrieval.
++ An indexer that uses the previous objects to drive the indexing and enrichment pipeline.
+
+## Prerequisites
+
++ Azure AI Search in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md).
+
++ Completion of the [SharePoint indexer prerequisites](../../search-how-to-index-sharepoint-online.md#prerequisites).
+
++ Completion of three SharePoint indexer configuration steps:
+
+  + [Step 1: Enable a managed identity for Azure AI Search](../../search-how-to-index-sharepoint-online.md#step-1-optional-enable-system-assigned-managed-identity)
+  + [Step 2: Choose between delegated or application permissions](../../search-how-to-index-sharepoint-online.md#step-2-decide-which-permissions-the-indexer-requires)
+  + [Step 3: Application registration step for Microsoft Entra ID authentication](../../search-how-to-index-sharepoint-online.md#step-3-create-a-microsoft-entra-application-registration)
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md), but you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
+
+## Check for existing knowledge sources
+
+[!INCLUDE [Check for existing knowledge sources using C#](knowledge-source-check-csharp.md)]
+
+The following JSON is an example response for an indexed SharePoint knowledge source.
+
+```json
+{
+  "name": "my-indexed-sharepoint-ks",
+  "kind": "indexedSharePoint",
+  "description": "A sample indexed SharePoint knowledge source",
+  "encryptionKey": null,
+  "indexedSharePointParameters": {
+    "connectionString": "<redacted>",
+    "containerName": "defaultSiteLibrary",
+    "query": null,
+    "ingestionParameters": {
+      "disableImageVerbalization": false,
+      "ingestionPermissionOptions": [],
+      "contentExtractionMode": "minimal",
+      "identity": null,
+      "embeddingModel": {
+        "kind": "azureOpenAI",
+        "azureOpenAIParameters": {
+          "resourceUri": "<redacted>",
+          "deploymentId": "text-embedding-3-large",
+          "apiKey": "<redacted>",
+          "modelName": "text-embedding-3-large",
+          "authIdentity": null
+        }
+      },
+      "chatCompletionModel": null,
+      "ingestionSchedule": null,
+      "assetStore": null,
+      "aiServices": null
+    },
+    "createdResources": {
+      "datasource": "my-indexed-sharepoint-ks-datasource",
+      "indexer": "my-indexed-sharepoint-ks-indexer",
+      "skillset": "my-indexed-sharepoint-ks-skillset",
+      "index": "my-indexed-sharepoint-ks-index"
+    }
+  },
+  "indexedOneLakeParameters": null
+}
+```
+
+> [!NOTE]
+> Sensitive information is redacted. The generated resources appear at the end of the response.
+
+## Create a knowledge source
+
+Run the following code to create an indexed SharePoint knowledge source.
+
+```csharp
+// Create an IndexedSharePoint knowledge source
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.KnowledgeBases.Models;
+using Azure;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+var chatCompletionParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiGptDeployment,
+    ModelName = aoaiGptModel
+};
+
+var embeddingParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiEmbeddingDeployment,
+    ModelName = aoaiEmbeddingModel
+};
+
+var ingestionParams = new KnowledgeSourceIngestionParameters
+{
+    DisableImageVerbalization = false,
+    ChatCompletionModel = new KnowledgeBaseAzureOpenAIModel(azureOpenAIParameters: chatCompletionParams),
+    EmbeddingModel = new KnowledgeSourceAzureOpenAIVectorizer
+    {
+        AzureOpenAIParameters = embeddingParams
+    }
+};
+
+var sharePointParams = new IndexedSharePointKnowledgeSourceParameters(
+    connectionString: sharePointConnectionString,
+    containerName: "defaultSiteLibrary")
+{
+    IngestionParameters = ingestionParams
+};
+
+var knowledgeSource = new IndexedSharePointKnowledgeSource(
+    name: "my-indexed-sharepoint-ks",
+    indexedSharePointParameters: sharePointParams)
+{
+    Description = "A sample indexed SharePoint knowledge source."
+};
+
+await indexClient.CreateOrUpdateKnowledgeSourceAsync(knowledgeSource);
+Console.WriteLine($"Knowledge source '{knowledgeSource.Name}' created or updated successfully.");
+```
+
+### Source-specific properties
+
+You can pass the following properties to create an indexed SharePoint knowledge source.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `Name` | The name of the knowledge source, which must be unique within the knowledge sources collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | No | Yes |
+| `Description` | A description of the knowledge source. | String | Yes | No |
+| `EncryptionKey` | A [customer-managed key](../../search-security-manage-encryption-keys.md) to encrypt sensitive information in both the knowledge source and the generated objects. | Object | Yes | No |
+| `IndexedSharePointKnowledgeSourceParameters` | Parameters specific to indexed SharePoint knowledge sources: `connectionString`, `containerName`, and `query`. | Object | No | No |
+| `connectionString` | The connection string to a SharePoint site. For more information, see [Connection string syntax](../../search-how-to-index-sharepoint-online.md#connection-string-format). | String | Yes | Yes |
+| `containerName` | The SharePoint library to access. Use `defaultSiteLibrary` to index content from the site's default document library or `allSiteLibraries` to index content from every document library in the site. | String | No | Yes |
+| `query` | Ignore for now. | String | Yes | No |
+
+### `IngestionParameters` properties
+
+[!INCLUDE [C# ingestionParameters properties](knowledge-source-ingestion-parameters-csharp.md)]
+
+## Check ingestion status
+
+[!INCLUDE [C# knowledge source status](knowledge-source-status-csharp.md)]
+
+## Review the created objects
+
+When you create an indexed SharePoint knowledge source, your search service also creates an indexer, index, skillset, and data source. We don't recommend that you edit these objects, as introducing an error or incompatibility can break the pipeline.
+
+After you create a knowledge source, the response lists the created objects. These objects are created according to a fixed template, and their names are based on the name of the knowledge source. You can't change the object names.
+
+We recommend using the Azure portal to validate output creation. The workflow is:
+
+1. Check the indexer for success or failure messages. Connection or quota errors appear here.
+1. Check the index for searchable content. Use Search Explorer to run queries.
+1. Check the skillset to learn how your content is chunked and optionally vectorized.
+1. Check the data source for connection details. Our example uses API keys for simplicity, but you can use Microsoft Entra ID for authentication and role-based access control for authorization.
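
If you prefer checking from code, the indexer portion of this workflow can also be done with `SearchIndexerClient.GetIndexerStatusAsync` from the `Azure.Search.Documents` library. The indexer name below follows the fixed naming template from the example response; substitute your own knowledge source name and endpoint values.

```csharp
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Inspect the generated indexer for success or failure messages.
var indexerClient = new SearchIndexerClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
SearchIndexerStatus status = await indexerClient.GetIndexerStatusAsync("my-indexed-sharepoint-ks-indexer");

Console.WriteLine($"Overall status: {status.Status}");
if (status.LastResult != null)
{
    Console.WriteLine($"Last run: {status.LastResult.Status}, processed: {status.LastResult.ItemCount}, failed: {status.LastResult.FailedItemCount}");
    foreach (var error in status.LastResult.Errors)
    {
        Console.WriteLine($"Error: {error.ErrorMessage}");
    }
}
```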
+
+## Assign to a knowledge base
+
+If you're satisfied with the knowledge source, continue to the next step: specify the knowledge source in a [knowledge base](../../search-agentic-retrieval-how-to-create.md).
+
+For any knowledge base that specifies an indexed SharePoint knowledge source, be sure to set `includeReferenceSourceData` to `true`. This step is necessary for pulling the source document URL into the citation.
+
+After the knowledge base is configured, use the [retrieve action](../../agentic-retrieval-how-to-retrieve.md) to query the knowledge source.
+
+## Delete a knowledge source
+
+[!INCLUDE [Delete knowledge source using C#](knowledge-source-delete-csharp.md)]

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on using indexed SharePoint knowledge sources"
}

Explanation

This change adds a new guide on using an indexed SharePoint knowledge source to index and query SharePoint content. The main points are:

  1. New file created: A new file named agentic-knowledge-source-how-to-sharepoint-indexed-csharp.md explains in detail how to configure and use the SharePoint knowledge source.

  2. Purpose of the knowledge source: An indexed SharePoint knowledge source accesses SharePoint content within an agentic retrieval pipeline and provides searchable data. Knowledge sources are created independently and referenced from a knowledge base.

  3. Prerequisites stated: Several prerequisites apply, including Azure AI Search in a region that supports agentic retrieval, completion of the SharePoint indexer configuration steps, and the necessary permissions.

  4. Concrete examples provided: C# code examples for creating an indexed SharePoint knowledge source serve as a reference for developers implementing it.

  5. Generated objects: The guide notes that creating the knowledge source also generates related objects (a data source, indexer, skillset, and index). It recommends not modifying these objects and describes the precautions needed to keep the pipeline working correctly.

  6. Assignment to a knowledge base: Once the new knowledge source is verified, the guide explains how to specify it in a knowledge base, enabling full use of agentic retrieval.

This new guide is designed to help users integrate SharePoint with Azure AI Search efficiently and provides comprehensive information for implementing search functionality. Overall, it delivers clear, practical content that improves user convenience.

articles/search/includes/how-tos/agentic-knowledge-source-how-to-sharepoint-remote-csharp.md

Diff
@@ -0,0 +1,225 @@
+---
+manager: nitinme
+author: heidisteen
+ms.author: heidist
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/20/2025
+---
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+A *remote SharePoint knowledge source* uses the [Copilot Retrieval API](/microsoft-365-copilot/extensibility/api/ai-services/retrieval/overview) to query textual content directly from SharePoint in Microsoft 365, returning results to the agentic retrieval engine for merging, ranking, and response formulation. There's no search index used by this knowledge source, and only textual content is queried.
+
+At query time, the remote SharePoint knowledge source calls the Copilot Retrieval API on behalf of the user identity, so no connection strings are needed in the knowledge source definition. All content to which a user has access is in-scope for knowledge retrieval. To limit sites or constrain search, set a [filter expression](/sharepoint/dev/general-development/keyword-query-language-kql-syntax-reference). Your Azure tenant and the Microsoft 365 tenant must use the same Microsoft Entra ID tenant, and the caller's identity must be recognized by both tenants.
+
++ You can use filters to scope search by URLs, date ranges, file types, and other metadata.
+
++ SharePoint permissions and Purview labels are honored in requests for content.
+
++ Usage is billed through Microsoft 365 and a Copilot license.
+
+Like any other knowledge source, you specify a remote SharePoint knowledge source in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md) and use the results as grounding data when an agent or chatbot calls a [retrieve action](../../agentic-retrieval-how-to-retrieve.md) at query time.
+
+## Prerequisites
+
++ Azure AI Search in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md). 
+
++ SharePoint in a Microsoft 365 tenant that's under the same Microsoft Entra ID tenant as Azure.
+
++ A personal access token for local development or a user's identity from a client application.
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md), but you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible.
+
+For local development, the agentic retrieval engine uses your access token to call SharePoint on your behalf. For more information about using a personal access token on requests, see [Connect to Azure AI Search](../../search-get-started-rbac.md).
+
+## Limitations
+
+The following limitations in the [Copilot Retrieval API](/microsoft-365-copilot/extensibility/api/ai-services/retrieval/overview) apply to remote SharePoint knowledge sources.
+
++ There's no support for Copilot connectors or OneDrive content. Content is retrieved from SharePoint sites only.
+
++ Limit of 200 requests per user per hour.
+
++ Query character limit of 1,500 characters.
+
++ Hybrid queries are only supported for the following file extensions: .doc, .docx, .pptx, .pdf, .aspx, and .one.
+
++ Multimodal retrieval (nontextual content, including tables, images, and charts) isn't supported.
+
++ Maximum of 25 results from a query.
+
++ Results are returned by the Copilot Retrieval API in no particular order.
+
++ Invalid Keyword Query Language (KQL) filter expressions are ignored and the query continues to execute without the filter.
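
Because an invalid filter silently runs unfiltered and overlong queries fail outright, it can be worth enforcing these limits client-side before calling the retrieve action. The helper below is illustrative and not part of any SDK; the 1,500-character constant comes from the list above.

```csharp
using System;

// Illustrative client-side guard for the documented 1,500-character query limit.
string query = new string('a', 2000);
Console.WriteLine(IsQueryWithinLimit(query)); // prints False for this oversized query

static bool IsQueryWithinLimit(string q) =>
    !string.IsNullOrEmpty(q) && q.Length <= 1500;
```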
+
+## Check for existing knowledge sources
+
+[!INCLUDE [Check for existing knowledge sources using C#](knowledge-source-check-csharp.md)]
+
+The following JSON is an example response for a remote SharePoint knowledge source.
+
+```json
+{
+  "name": "my-sharepoint-ks",
+  "kind": "remoteSharePoint",
+  "description": "A sample remote SharePoint knowledge source",
+  "encryptionKey": null,
+  "remoteSharePointParameters": {
+    "filterExpression": "filetype:docx",
+    "containerTypeId": null,
+    "resourceMetadata": [
+      "Author",
+      "Title"
+    ]
+  }
+}
+```
+
+## Create a knowledge source
+
+Run the following code to create a remote SharePoint knowledge source.
+
+[API keys](../../search-security-api-keys.md) are used for your client connection to Azure AI Search and Azure OpenAI. Your access token is used by Azure AI Search to connect to SharePoint in Microsoft 365 on your behalf. You can only retrieve content that you're permitted to access. For more information about getting a personal access token and other values, see [Connect to Azure AI Search](../../search-get-started-rbac.md).
+
+> [!NOTE]
+> You can also use your personal access token to access Azure AI Search and Azure OpenAI if you [set up role assignments on each resource](../../search-security-rbac.md). Using API keys allows you to omit this step in this example.
+
+```csharp
+// Create a remote SharePoint knowledge source
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.KnowledgeBases.Models;
+using Azure;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+var knowledgeSource = new RemoteSharePointKnowledgeSource(name: "my-remote-sharepoint-ks")
+{
+    Description = "This knowledge source queries .docx files in a trusted Microsoft 365 tenant.",
+    RemoteSharePointParameters = new RemoteSharePointKnowledgeSourceParameters()
+    {
+        FilterExpression = "filetype:docx",
+        ResourceMetadata = { "Author", "Title" }
+    }
+};
+
+await indexClient.CreateOrUpdateKnowledgeSourceAsync(knowledgeSource);
+Console.WriteLine($"Knowledge source '{knowledgeSource.Name}' created or updated successfully.");
+```
+
+### Source-specific properties
+
+You can pass the following properties to create a remote SharePoint knowledge source.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `name` | The name of the knowledge source, which must be unique within the knowledge sources collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | No | Yes |
+| `description` | A description of the knowledge source. | String | Yes | No |
+| `encryptionKey` | A [customer-managed key](../../search-security-manage-encryption-keys.md) to encrypt sensitive information in the knowledge source. | Object | Yes | No |
+| `remoteSharePointParameters` | Parameters specific to remote SharePoint knowledge sources: `filterExpression`, `resourceMetadata`, and `containerTypeId`. | Object | No | No |
+| `filterExpression` | An expression written in the SharePoint [KQL](/sharepoint/dev/general-development/keyword-query-language-kql-syntax-reference), which is used to specify sites and paths to content. | String | Yes |No |
+| `resourceMetadata` | A comma-delimited list of standard metadata fields: author, file name, creation date, content type, and file type. | Array | Yes | No |
+| `containerTypeId` | Container ID for the SharePoint Embedded connection. When unspecified, SharePoint Online is used. | String | Yes | No |
+
+### Filter expression examples
+
+Not all SharePoint properties are supported in the `filterExpression`. For a list of supported properties, see the [API reference](/microsoft-365-copilot/extensibility/api/ai-services/retrieval/copilotroot-retrieval). For more information about the queryable properties you can use in filters, see [queryable properties](/graph/connecting-external-content-manage-schema#queryable).
+
+Learn more about [KQL filters](/microsoft-365-copilot/extensibility/api/ai-services/retrieval/copilotroot-retrieval?pivots=graph-v1#example-7-use-filter-expressions) in the syntax reference.
+
+| Example | Filter expression |
+|---------|-------------------|
+| Filter to a single site by ID | `"filterExpression": "SiteID:\"00aa00aa-bb11-cc22-dd33-44ee44ee44ee\""` |
+| Filter to multiple sites by ID | `"filterExpression": "SiteID:\"00aa00aa-bb11-cc22-dd33-44ee44ee44ee\" OR SiteID:\"11bb11bb-cc22-dd33-ee44-55ff55ff55ff\""` |
+| Filter to files under a specific path | `"filterExpression": "Path:\"https://my-demo.sharepoint.com/sites/mysite/Shared Documents/en/mydocs\""` |
+| Filter to a specific date range | `"filterExpression": "LastModifiedTime >= 2024-07-22 AND LastModifiedTime <= 2025-01-08"` |
+| Filter to files of a specific file type | `"filterExpression": "FileExtension:\"docx\" OR FileExtension:\"pdf\" OR FileExtension:\"pptx\""` |
+| Filter to files of a specific information protection label | `"filterExpression": "InformationProtectionLabelId:\"f0ddcc93-d3c0-4993-b5cc-76b0a283e252\""` |
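
When you build expressions like these in code, a small helper keeps the quoting and `OR` joining consistent. This helper is illustrative only and isn't part of the SDK; note that it doesn't escape quotes embedded in the values.

```csharp
using System;
using System.Linq;

// Join several values for one property into an OR'd KQL expression,
// for example: SiteID:"id1" OR SiteID:"id2"
static string BuildOrFilter(string property, params string[] values) =>
    string.Join(" OR ", values.Select(v => $"{property}:\"{v}\""));

Console.WriteLine(BuildOrFilter("FileExtension", "docx", "pdf", "pptx"));
// FileExtension:"docx" OR FileExtension:"pdf" OR FileExtension:"pptx"
```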
+
+## Assign to a knowledge base
+
+If you're satisfied with the knowledge source, continue to the next step: specify the knowledge source in a [knowledge base](../../search-agentic-retrieval-how-to-create.md).
+
+After the knowledge base is configured, use the [retrieve action](../../agentic-retrieval-how-to-retrieve.md) to query the knowledge source.
+
+## Query a knowledge base
+
+The [retrieve action](../../agentic-retrieval-how-to-retrieve.md) on the knowledge base provides the user identity that authorizes access to content in Microsoft 365.
+
+Azure AI Search uses the access token to call the Copilot Retrieval API on behalf of the user identity. The access token is provided in the retrieve endpoint as a `xMsQuerySourceAuthorization` HTTP header.
+
+```csharp
+using Azure;
+using Azure.Identity;
+using Azure.Search.Documents.KnowledgeBases;
+using Azure.Search.Documents.KnowledgeBases.Models;
+
+// Get access token
+var credential = new DefaultAzureCredential();
+var tokenRequestContext = new Azure.Core.TokenRequestContext(new[] { "https://search.azure.com/.default" });
+var accessToken = await credential.GetTokenAsync(tokenRequestContext);
+string token = accessToken.Token;
+
+// Create knowledge base retrieval client
+var baseClient = new KnowledgeBaseRetrievalClient(
+    endpoint: new Uri(searchEndpoint),
+    knowledgeBaseName: knowledgeBaseName,
+    credential: new AzureKeyCredential(apiKey)
+);
+
+var spMessages = new List<Dictionary<string, string>>
+{
+    new Dictionary<string, string>
+    {
+        { "role", "user" },
+        { "content", @"contoso product planning" }
+    }
+};
+
+// Create retrieval request
+var retrievalRequest = new KnowledgeBaseRetrievalRequest();
+foreach (Dictionary<string, string> message in spMessages) {
+    if (message["role"] != "system") {
+        retrievalRequest.Messages.Add(new KnowledgeBaseMessage(content: new[] { new KnowledgeBaseMessageTextContent(message["content"]) }) { Role = message["role"] });
+    }
+}
+retrievalRequest.RetrievalReasoningEffort = new KnowledgeRetrievalLowReasoningEffort();
+var retrievalResult = await baseClient.RetrieveAsync(retrievalRequest, xMsQuerySourceAuthorization: token);
+
+Console.WriteLine((retrievalResult.Value.Response[0].Content[0] as KnowledgeBaseMessageTextContent).Text);
+```
+
+The response might look like the following:
+
+`Contoso's product planning for the NextGen Camera includes a 2019 launch with a core package design and minor modifications for three product versions, featuring Wi-Fi enabled technology and a new mobile app for photo organization and sharing, aiming for 100,000 users within six months [ref_id:0][ref_id:1]. Research and forecasting are central to their planning, with phase two research focusing on feedback from a diverse user group to shape deliverables and milestones [ref_id:0][ref_id:1].`
+
+The retrieve request also takes a [KQL filter](/microsoft-365-copilot/extensibility/api/ai-services/retrieval/copilotroot-retrieval?pivots=graph-v1#example-7-use-filter-expressions) (`filterExpressionAddOn`) if you want to apply constraints at query time. If you specify a filter on both the knowledge source (`filterExpression`) and the retrieve action (`filterExpressionAddOn`), the two filters are AND'd together.
+
+Queries asking questions about the content itself are more effective than questions about where a file is located or when it was last updated. For example, if you ask, "Where is the keynote doc for Ignite 2024?", you might get "No relevant content was found for your query" because the content itself doesn't disclose its location. A filter on metadata is a better solution for file location or date-specific queries.
+
+A better question to ask is, "What is the keynote doc for Ignite 2024?". The response includes the synthesized answer, query activity, and token counts, plus the URL and other metadata.
+
+```json
+{
+  "resourceMetadata": {
+    "Author": "Nuwan Amarathunga;Nurul Izzati",
+    "Title": "Ignite 2024 Keynote Address"
+  },
+  "rerankerScore": 2.489522,
+  "webUrl": "https://contoso-my.sharepoint.com/keynotes/nuamarth_contoso_com/Documents/Keynote-Ignite-2024.docx",
+  "searchSensitivityLabelInfo": {
+    "displayName": "Confidential\\Contoso Extended",
+    "sensitivityLabelId": "aaaaaaaa-0b0b-1c1c-2d2d-333333333333",
+    "tooltip": "Data is classified and protected. Contoso Full Time Employees (FTE) and non-employees can edit, reply, forward and print. Recipient can unprotect content with the right justification.",
+    "priority": 5,
+    "color": "#FF8C00",
+    "isEncrypted": true
+  }
+}
+```
+
+## Delete a knowledge source
+
+[!INCLUDE [Delete knowledge source using C#](knowledge-source-delete-csharp.md)]

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on using remote SharePoint knowledge sources"
}

Explanation

This change adds a new guide on using a remote SharePoint knowledge source to query textual content in Microsoft 365 directly and pass the results to the agentic retrieval engine. The main points are:

  1. New file created: A new file named agentic-knowledge-source-how-to-sharepoint-remote-csharp.md explains in detail how to configure and use the remote SharePoint knowledge source.

  2. Definition of the knowledge source: The remote SharePoint knowledge source uses the Copilot Retrieval API to query text based on the user's identity and retrieve the required content. It uses no search index and accesses textual content only.

  3. Access and filtering: Because the API is called at query time on behalf of the user's identity, no connection string is needed. Filter expressions can limit the search scope, narrowing results by site or metadata.

  4. Prerequisites stated: Prerequisites for using a remote SharePoint knowledge source include a matching Microsoft Entra ID tenant, the appropriate permissions, and the latest SDK.

  5. Limitations explained: Limitations of the Copilot Retrieval API are detailed, such as the per-user request limit, the query character limit, and the supported file formats.

  6. Concrete examples provided: C# code examples for creating a remote SharePoint knowledge source give developers concrete help with implementation.

  7. Assignment to a knowledge base: The guide also covers how to specify the knowledge source in a knowledge base after it's created and how to work with it, enabling effective use of agentic retrieval.

This new guide clarifies the steps for integrating SharePoint content with Azure AI Search and provides the information users need to build efficient search solutions. Overall, it helps users complete setup quickly and start running queries.

articles/search/includes/how-tos/agentic-knowledge-source-how-to-web-csharp.md

Diff
@@ -0,0 +1,118 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/20/2025
+---
+
+> [!IMPORTANT]
+> + Web Knowledge Source, which uses Grounding with Bing Search and/or Grounding with Bing Custom Search, is a [First Party Consumption Service](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/EAEAS) governed by the [Grounding with Bing terms of use](https://www.microsoft.com/en-us/bing/apis/grounding-legal-enterprise) and the [Microsoft Privacy Statement](https://www.microsoft.com/en-us/privacy/privacystatement).
+>
+> + The [Microsoft Data Protection Addendum](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA) doesn't apply to data sent to Web Knowledge Source. When Customer uses Web Knowledge Source, Customer Data flows outside the Azure compliance and Geo boundary. This also means use of Web Knowledge Source waives all elevated Government Community Cloud security and compliance commitments to include data sovereignty and screened/citizenship-based support, as applicable.
+>
+> + Use of Web Knowledge Source incurs costs; learn more about [pricing](https://www.microsoft.com/en-us/bing/apis/grounding-pricing).
+>
+> + Learn more about how Azure admins can [manage access to use of Web Knowledge Source](../../agentic-knowledge-source-how-to-web-manage.md).
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+*Web Knowledge Source* enables retrieval of real-time web data from Microsoft Bing in an agentic retrieval pipeline. [Knowledge sources](../../agentic-knowledge-source-overview.md) are created independently, referenced in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md), and used as grounding data when an agent or chatbot calls a [retrieve action](../../agentic-retrieval-how-to-retrieve.md) at query time.
+
+Bing Custom Search is always the search provider for Web Knowledge Source. Although you can't specify alternative search providers or engines, you can include or exclude specific *domains*, such as https://learn.microsoft.com. When no domains are specified, Web Knowledge Source has unrestricted access to the entire public internet.
+
+Web Knowledge Source works best alongside other knowledge sources. Use Web Knowledge Source when your proprietary content doesn't provide complete, up-to-date answers or when you want to supplement results with information from a commercial search engine.
+
+When you use Web Knowledge Source, keep the following in mind:
+
++ The response is always a single, formulated answer to the query instead of raw search results from the web.
+
++ Because Web Knowledge Source doesn't support extractive data, your knowledge base must use [answer synthesis](../../agentic-retrieval-how-to-answer-synthesis.md) and [low or medium reasoning effort](../../agentic-retrieval-how-to-create-knowledge-base.md#create-a-knowledge-base). You also can't define answer instructions.
+
+## Prerequisites
+
++ An Azure subscription with [access to Web Knowledge Source](../../agentic-knowledge-source-how-to-web-manage.md). By default, access is enabled. Contact your admin if access is disabled.
+
++ An Azure AI Search service in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md). The service must also be in an [Azure public region](../../search-region-support.md#azure-public-regions), as Web Knowledge Source isn't supported in private or sovereign clouds.
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md), but you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
+
+## Check for existing knowledge sources
+
+[!INCLUDE [Check for existing knowledge sources using C#](knowledge-source-check-csharp.md)]
+
+The following JSON is an example response for a Web Knowledge Source resource.
+
+```json
+{
+  "WebParameters": {
+    "Domains": null
+  },
+  "Name": "my-web-ks",
+  "Description": "A sample Web Knowledge Source.",
+  "EncryptionKey": null
+}
+```
+
+## Create a knowledge source
+
+Run the following code to create a Web Knowledge Source resource.
+
+```csharp
+// Create a Web knowledge source
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+var knowledgeSource = new WebKnowledgeSource(name: "my-web-ks")
+{
+    Description = "A sample Web Knowledge Source.",
+    WebParameters = new WebKnowledgeSourceParameters
+    {
+        Domains = new WebKnowledgeSourceDomains
+        {
+            AllowedDomains = 
+            {
+                new WebKnowledgeSourceDomain(address: "learn.microsoft.com") { IncludeSubpages = true }
+            },
+            BlockedDomains = 
+            {
+                new WebKnowledgeSourceDomain(address: "bing.com") { IncludeSubpages = false }
+            }
+        }
+    }
+};
+
+await indexClient.CreateOrUpdateKnowledgeSourceAsync(knowledgeSource);
+Console.WriteLine($"Knowledge source '{knowledgeSource.Name}' created or updated successfully.");
+```
+
+### Source-specific properties
+
+You can pass the following properties to create a Web Knowledge Source resource.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `Name` | The name of the knowledge source, which must be unique within the knowledge sources collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | Yes | Yes |
+| `Description` | A description of the knowledge source. When unspecified, Azure AI Search applies a default description. | String | Yes | No |
+| `EncryptionKey` | A [customer-managed key](../../search-security-manage-encryption-keys.md) to encrypt sensitive information in the knowledge source. | Object | Yes | No |
+| `WebParameters` | Parameters specific to Web Knowledge Source. Currently, this is only `Domains`. | Object | Yes | No |
+| `Domains` | Domains to allow or block from the search space. By default, the knowledge source uses [Grounding with Bing Search](/azure/ai-foundry/agents/how-to/tools/bing-grounding) to search the entire public internet. When you specify domains, the knowledge source uses [Grounding with Bing Custom Search](/azure/ai-foundry/agents/how-to/tools/bing-custom-search) to restrict results to the specified domains. In both cases, Bing Custom Search is the search provider. | Object | Yes | No |
+| `AllowedDomains` | Domains to include in the search space. For each domain, you must specify its `address` in the `website.com` format. You can also specify whether to include the domain's subpages by setting `IncludeSubpages` to `true` or `false`. | Array | Yes | No |
+| `BlockedDomains` | Domains to exclude from the search space. For each domain, you must specify its `address` in the `website.com` format. You can also specify whether to include the domain's subpages by setting `IncludeSubpages` to `true` or `false`. | Array | Yes | No |
+
+## Assign to a knowledge base
+
+If you're satisfied with the knowledge source, continue to the next step: specify the knowledge source in a [knowledge base](../../agentic-retrieval-how-to-create-knowledge-base.md).
+
+After the knowledge base is configured, use the [retrieve action](../../agentic-retrieval-how-to-retrieve.md) to query the knowledge source.
+
+## Delete a knowledge source
+
+[!INCLUDE [Delete knowledge source using C#](knowledge-source-delete-csharp.md)]
\ No newline at end of file

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on using Web Knowledge Source"
}

Explanation

This change adds a new guide on using Web Knowledge Source to retrieve real-time web data from Microsoft Bing. The main points are as follows:

  1. New file created: A new file, agentic-knowledge-source-how-to-web-csharp.md, explains in detail how to configure and use Web Knowledge Source.

  2. Definition of Web Knowledge Source: Web Knowledge Source searches via Bing within an agentic retrieval pipeline and serves as a knowledge source when users run queries through an agent or chatbot. Search results are returned as a single, formulated answer.

  3. Usage notes: Web Knowledge Source comes with specific conditions and restrictions. For example, extractive data isn't supported, and answer synthesis and certain reasoning effort levels are required. Also, Bing Custom Search is always the search provider; no other search engine can be specified.

  4. Prerequisites: Setting up Web Knowledge Source requires an Azure subscription and an AI Search service that meet specific conditions. Users can also configure settings that allow or block specific domains.

  5. Concrete examples: Example C# code for creating a Web Knowledge Source resource is included, which developers can apply to real implementations.

  6. Assignment to a knowledge base: Instructions also cover how to specify the created knowledge source in a knowledge base and how to run queries from there.

This new guide provides the steps for using web data efficiently through agentic retrieval and comprehensively covers the information users need to integrate Azure AI Search with Bing services. Overall, it's designed so that users can configure the feature easily and achieve effective search functionality.
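One small point from the guide worth illustrating: `AllowedDomains` and `BlockedDomains` entries expect bare addresses in the `website.com` format, with no scheme or path. The following self-contained sketch shows that format check; `IsValidDomain` is a hypothetical helper for illustration only, not part of the Azure SDK.

```csharp
using System;

// Hypothetical helper (not part of the SDK): checks that a domain address
// uses the bare "website.com" format expected by AllowedDomains and
// BlockedDomains (no scheme, no path, at least one dot).
static bool IsValidDomain(string address) =>
    !string.IsNullOrWhiteSpace(address)
    && !address.Contains("://")
    && !address.Contains('/')
    && address.Contains('.');

Console.WriteLine(IsValidDomain("learn.microsoft.com"));          // True
Console.WriteLine(IsValidDomain("https://learn.microsoft.com"));  // False
```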

articles/search/includes/how-tos/agentic-retrieval-how-to-create-knowledge-base-csharp.md

Diff
@@ -0,0 +1,329 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+[!INCLUDE [Feature preview](../previews/preview-generic.md)]
+
+In Azure AI Search, a *knowledge base* is a top-level object that orchestrates [agentic retrieval](../../agentic-retrieval-overview.md). It defines which knowledge sources to query and the default behavior for retrieval operations. At query time, the [retrieve method](../../agentic-retrieval-how-to-retrieve.md) targets the knowledge base to run the configured retrieval pipeline.
+
+A knowledge base specifies:
+
++ One or more knowledge sources that point to searchable content.
++ An optional LLM that provides reasoning capabilities for query planning and answer formulation.
++ A retrieval reasoning effort that determines whether an LLM is invoked and manages cost, latency, and quality.
++ Custom properties that control routing, source selection, output format, and object encryption.
+
+After you create a knowledge base, you can update its properties at any time. If the knowledge base is in use, updates take effect on the next retrieval.
+
+> [!IMPORTANT]
+> 2025-11-01-preview renames the 2025-08-01-preview *knowledge agent* to *knowledge base*. This is a breaking change. We recommend [migrating existing code](../../agentic-retrieval-how-to-migrate.md) to the new APIs as soon as possible.
+
+## Prerequisites
+
++ Azure AI Search in any [region that provides agentic retrieval](../../search-region-support.md). You must have [semantic ranker enabled](../../semantic-how-to-enable-disable.md). If you're using a [managed identity](../../search-how-to-managed-identities.md) for role-based access to deployed models, your search service must be on the Basic pricing tier or higher.
+
++ Azure OpenAI with a [supported LLM](#supported-models) deployment.
+
++ One or more [knowledge sources](../../agentic-knowledge-source-overview.md#supported-knowledge-sources) on your search service.
+
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md). **Search Service Contributor** can create and manage a knowledge base. **Search Index Data Reader** can run queries. Alternatively, you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
+
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
+
+> [!NOTE]
+> Although you can use the Azure portal to create knowledge bases, the portal uses the 2025-08-01-preview, which uses the previous "knowledge agent" terminology and doesn't support all 2025-11-01-preview features. For help with breaking changes, see [Migrate your agentic retrieval code](../../agentic-retrieval-how-to-migrate.md).
+
+### Supported models
+
+Use one of the following LLMs from Azure OpenAI or an equivalent open-source model. For deployment instructions, see [Deploy Azure OpenAI models with Microsoft Foundry](/azure/ai-foundry/how-to/deploy-models-openai).
+
++ `gpt-4o`
++ `gpt-4o-mini`
++ `gpt-4.1`
++ `gpt-4.1-nano`
++ `gpt-4.1-mini`
++ `gpt-5`
++ `gpt-5-nano`
++ `gpt-5-mini`
+
+## Configure access
+
+Azure AI Search needs access to the LLM from Azure OpenAI. We recommend Microsoft Entra ID for authentication and role-based access for authorization. You must be an **Owner or User Access Administrator** to assign roles. If roles aren't feasible, use key-based authentication instead.
+
+### [**Use roles**](#tab/rbac)
+
+1. [Configure Azure AI Search to use a managed identity](../../search-how-to-managed-identities.md).
+
+1. On your model provider, such as Foundry Models, assign **Cognitive Services User** to the managed identity of your search service. If you're testing locally, assign the same role to your user account.
+
+1. For local testing, follow the steps in [Quickstart: Connect without keys](../../search-get-started-rbac.md) to sign in to a specific subscription and tenant. Use `DefaultAzureCredential` instead of `AzureKeyCredential` in each request, which should look similar to the following example:
+
+    ```csharp
+    using Azure.Search.Documents.Indexes;
+    using Azure.Identity;
+    
+    var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new DefaultAzureCredential());
+    ```
+
+### [**Use keys**](#tab/keys)
+
+1. [Copy an Azure AI Search admin API key](../../search-security-api-keys.md#find-existing-keys) from the Azure portal.
+
+1. Use `AzureKeyCredential` to specify the API key in each request, which should look similar to the following example:
+
+    ```csharp
+    using Azure.Search.Documents.Indexes;
+    using Azure;
+    
+    var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+    ```
+
+---
+
+> [!IMPORTANT]
+> Code snippets in this article use API keys. If you use role-based authentication, update each request accordingly. In a request that specifies both approaches, the API key takes precedence.
+
+## Check for existing knowledge bases
+
+Knowing about existing knowledge bases is helpful for either reuse or naming new objects. Any 2025-08-01-preview knowledge agents are returned in the knowledge bases collection.
+
+Run the following code to list existing knowledge bases by name.
+
+```csharp
+// List knowledge bases by name
+using Azure.Search.Documents.Indexes;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+var knowledgeBases = indexClient.GetKnowledgeBasesAsync();
+
+Console.WriteLine("Knowledge Bases:");
+
+await foreach (var kb in knowledgeBases)
+{
+    Console.WriteLine($"  - {kb.Name}");
+}
+```
+
+You can also return a single knowledge base by name to review its JSON definition.
+
+```csharp
+using Azure.Search.Documents.Indexes;
+using System.Text.Json;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+
+// Specify the knowledge base name to retrieve
+string kbNameToGet = "earth-knowledge-base";
+
+// Get a specific knowledge base definition
+var knowledgeBaseResponse = await indexClient.GetKnowledgeBaseAsync(kbNameToGet);
+var kb = knowledgeBaseResponse.Value;
+
+// Serialize to JSON for display
+string json = JsonSerializer.Serialize(kb, new JsonSerializerOptions { WriteIndented = true });
+Console.WriteLine(json);
+```
+
+The following JSON is an example of a knowledge base.
+
+```json
+{
+  "Name": "earth-knowledge-base",
+  "KnowledgeSources": [
+    {
+      "Name": "earth-knowledge-source"
+    }
+  ],
+  "Models": [
+    {}
+  ],
+  "RetrievalReasoningEffort": {},
+  "OutputMode": {},
+  "ETag": "\u00220x8DE278629D782B3\u0022",
+  "EncryptionKey": null,
+  "Description": null,
+  "RetrievalInstructions": null,
+  "AnswerInstructions": null
+}
+```
+
+## Create a knowledge base
+
+A knowledge base drives the agentic retrieval pipeline. In application code, it's called by other agents or chatbots.
+
+A knowledge base connects knowledge sources (searchable content) to an LLM deployment from Azure OpenAI. Properties on the LLM establish the connection, while properties on the knowledge source establish defaults that inform query execution and the response.
+
+Run the following code to create a knowledge base.
+
+```csharp
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.KnowledgeBases.Models;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+
+// Create a knowledge base
+var knowledgeBase = new KnowledgeBase(
+    name: knowledgeBaseName,
+    knowledgeSources: new KnowledgeSourceReference[] { new KnowledgeSourceReference(knowledgeSourceName) }
+)
+{
+    RetrievalReasoningEffort = new KnowledgeRetrievalLowReasoningEffort(),
+    OutputMode = KnowledgeRetrievalOutputMode.AnswerSynthesis,
+    Models = { model }
+};
+await indexClient.CreateOrUpdateKnowledgeBaseAsync(knowledgeBase);
+Console.WriteLine($"Knowledge base '{knowledgeBaseName}' created or updated successfully.");
+```
+
+The following expanded example uses key-based authentication and connects two knowledge sources, retrieval and answer instructions, and an Azure OpenAI model deployment.
+
+```csharp
+// Create a knowledge base
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.KnowledgeBases.Models;
+using Azure;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+var aoaiParams = new AzureOpenAIVectorizerParameters
+{
+    ResourceUri = new Uri(aoaiEndpoint),
+    DeploymentName = aoaiGptDeployment,
+    ModelName = aoaiGptModel
+};
+
+var knowledgeBase = new KnowledgeBase(
+    name: "my-kb",
+    knowledgeSources: new KnowledgeSourceReference[] 
+    { 
+        new KnowledgeSourceReference("hotels-sample-knowledge-source"),
+        new KnowledgeSourceReference("earth-knowledge-source")
+    }
+)
+{
+    Description = "This knowledge base handles questions directed at two unrelated sample indexes.",
+    RetrievalInstructions = "Use the hotels knowledge source for queries about where to stay, otherwise use the earth at night knowledge source.",
+    AnswerInstructions = "Provide a two sentence concise and informative answer based on the retrieved documents.",
+    OutputMode = KnowledgeRetrievalOutputMode.AnswerSynthesis,
+    Models = { new KnowledgeBaseAzureOpenAIModel(azureOpenAIParameters: aoaiParams) },
+    RetrievalReasoningEffort = new KnowledgeRetrievalLowReasoningEffort()
+};
+
+await indexClient.CreateOrUpdateKnowledgeBaseAsync(knowledgeBase);
+Console.WriteLine($"Knowledge base '{knowledgeBase.Name}' created or updated successfully.");
+```
+
+### Knowledge base properties
+
+You can pass the following properties to create a knowledge base.
+
+| Name | Description | Type | Required |
+|--|--|--|--|
+| `name` | The name of the knowledge base, which must be unique within the knowledge bases collection and follow the [naming guidelines](/rest/api/searchservice/naming-rules) for objects in Azure AI Search. | String | Yes |
+| `knowledgeSources` | One or more [supported knowledge sources](../../agentic-knowledge-source-overview.md#supported-knowledge-sources). | Array | Yes |
+| `Description` | A description of the knowledge base. The LLM uses the description to inform query planning. | String | No |
+| `RetrievalInstructions` | A prompt for the LLM to determine whether a knowledge source should be in scope for a query, which is recommended when you have multiple knowledge sources. This field influences both knowledge source selection and query formulation. For example, instructions could append information or prioritize a knowledge source. Instructions are passed directly to the LLM, which means it's possible to provide instructions that break query planning, such as instructions that result in bypassing an essential knowledge source. | String | No |
+| `AnswerInstructions` | Custom instructions to shape synthesized answers. The default is null. For more information, see [Use answer synthesis for citation-backed responses](../../agentic-retrieval-how-to-answer-synthesis.md). | String | No |
+| `OutputMode` | Valid values are `AnswerSynthesis` for an LLM-formulated answer or `ExtractedData` for full search results that you can pass to an LLM as a downstream step. | String | Yes |
+| `Models` | A connection to a [supported LLM](#supported-models) used for answer formulation or query planning. In this preview, `Models` can contain just one model, and the model provider must be Azure OpenAI. Obtain model information from the Foundry portal or a command-line request. Provide the parameters using the [KnowledgeBaseAzureOpenAIModel class](/dotnet/api/azure.search.documents.indexes.models.knowledgebaseazureopenaimodel?view=azure-dotnet-preview). You can use role-based access control instead of API keys for the Azure AI Search connection to the model. For more information, see [How to deploy Azure OpenAI models with Foundry](/azure/ai-foundry/how-to/deploy-models-openai). | Object | No |
+| `RetrievalReasoningEffort` | Determines the level of LLM-related query processing. Valid values are `minimal`, `low` (default), and `medium`. For more information, see [Set the retrieval reasoning effort](../../agentic-retrieval-how-to-set-retrieval-reasoning-effort.md). | Object | No |
+
+## Query a knowledge base
+
+Set up the instructions and messages to send to the LLM.
+
+```csharp
+string instructions = @"
+Use the earth at night index to answer the question. If you can't find relevant content, say you don't know.
+";
+
+var messages = new List<Dictionary<string, string>>
+{
+    new Dictionary<string, string>
+    {
+        { "role", "system" },
+        { "content", instructions }
+    }
+};
+```
+
+Call the `retrieve` action on the knowledge base to verify the LLM connection and return results. For more information about the `retrieve` request and response schema, see [Retrieve data using a knowledge base in Azure AI Search](../../agentic-retrieval-how-to-retrieve.md).
+
+Replace "Where does the ocean look green?" with a query string that's valid for your knowledge sources.
+
+```csharp
+using Azure.Search.Documents.KnowledgeBases;
+using Azure.Search.Documents.KnowledgeBases.Models;
+
+var baseClient = new KnowledgeBaseRetrievalClient(
+    endpoint: new Uri(searchEndpoint),
+    knowledgeBaseName: knowledgeBaseName,
+    tokenCredential: new DefaultAzureCredential()
+);
+
+messages.Add(new Dictionary<string, string>
+{
+    { "role", "user" },
+    { "content", @"Where does the ocean look green?" }
+});
+
+var retrievalRequest = new KnowledgeBaseRetrievalRequest();
+foreach (Dictionary<string, string> message in messages) {
+    if (message["role"] != "system") {
+        retrievalRequest.Messages.Add(new KnowledgeBaseMessage(content: new[] { new KnowledgeBaseMessageTextContent(message["content"]) }) { Role = message["role"] });
+    }
+}
+retrievalRequest.RetrievalReasoningEffort = new KnowledgeRetrievalLowReasoningEffort();
+var retrievalResult = await baseClient.RetrieveAsync(retrievalRequest);
+
+messages.Add(new Dictionary<string, string>
+{
+    { "role", "assistant" },
+    { "content", (retrievalResult.Value.Response[0].Content[0] as KnowledgeBaseMessageTextContent).Text }
+});
+
+// Print the response
+Console.WriteLine("Response:");
+Console.WriteLine((retrievalResult.Value.Response[0].Content[0] as KnowledgeBaseMessageTextContent)!.Text);
+```
+
+**Key points:**
+
++ [KnowledgeBaseRetrievalRequest](/dotnet/api/azure.search.documents.knowledgebases.models.knowledgebaseretrievalrequest?view=azure-dotnet-preview&preserve-view=true) is the input contract for the retrieval request.
+
++ [RetrievalReasoningEffort](/dotnet/api/azure.search.documents.knowledgebases.models.knowledgebaseretrievalrequest.retrievalreasoningeffort?view=azure-dotnet-preview#azure-search-documents-knowledgebases-models-knowledgebaseretrievalrequest-retrievalreasoningeffort&preserve-view=true) is required. Setting it to `minimal` excludes LLMs from the query pipeline; only intents are used as the query input. The default, `low`, supports LLM-based query planning and answer synthesis with messages and context.
+
++ [`knowledgeSourceParams`](/dotnet/api/azure.search.documents.knowledgebases.models.knowledgebaseretrievalrequest.knowledgesourceparams?view=azure-dotnet-preview&preserve-view=true) are used to override default parameters at query time.
+
+The response to the sample query might look like the following example:
+
+```json
+  "response": [
+    {
+      "content": [
+        {
+          "type": "text",
+          "text": "The ocean appears green off the coast of Antarctica due to phytoplankton flourishing in the water, particularly in Granite Harbor near Antarctica’s Ross Sea, where they can grow in large quantities during spring, summer, and even autumn under the right conditions [ref_id:0]. Additionally, off the coast of Namibia, the ocean can also look green due to blooms of phytoplankton and yellow-green patches of sulfur precipitating from bacteria in oxygen-depleted waters [ref_id:1]. In the Strait of Georgia, Canada, the waters turned bright green due to a massive bloom of coccolithophores, a type of phytoplankton [ref_id:5]. Furthermore, a milky green and blue bloom was observed off the coast of Patagonia, Argentina, where nutrient-rich waters from different currents converge [ref_id:6]. Lastly, a large bloom of cyanobacteria was captured in the Baltic Sea, which can also give the water a green appearance [ref_id:9]."
+        }
+      ]
+    }
+  ]
+```
+
+## Delete a knowledge base
+
+If you no longer need the knowledge base or need to rebuild it on your search service, use this request to delete the object.
+
+```csharp
+using Azure.Search.Documents.Indexes;
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+
+await indexClient.DeleteKnowledgeBaseAsync(knowledgeBaseName);
+System.Console.WriteLine($"Knowledge base '{knowledgeBaseName}' deleted successfully.");
+```

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on creating knowledge bases"
}

Explanation

This change adds a new guide on creating knowledge bases in Azure AI Search. The main points are as follows:

  1. New file created: A new file, agentic-retrieval-how-to-create-knowledge-base-csharp.md, details what a knowledge base is and how to configure it.

  2. Knowledge base functionality: A knowledge base is the primary object for agentic retrieval, defining the knowledge sources to query and the default retrieval behavior. This lets the configured retrieval pipeline run at query time.

  3. Configurable properties: A knowledge base includes knowledge sources that point to searchable content, an optional LLM for query planning and answer formulation, a retrieval reasoning effort level, and custom properties.

  4. Notice of a breaking change: In the 2025-11-01 preview, the former "knowledge agent" is renamed to "knowledge base," so migrating existing code to the new APIs is recommended.

  5. Prerequisites explained: Creating a knowledge base requires Azure AI Search, Azure OpenAI, and the knowledge sources to be used, along with appropriate role-based access permissions.

  6. C# code examples: Concrete C# samples for creating a knowledge base are provided so developers can implement one right away.

  7. Deleting a knowledge base: The guide also documents how to delete a knowledge base that's no longer needed.

This guide clarifies the steps needed to set up and manage knowledge bases in Azure AI Search and is an important resource for helping users manage content effectively and use agentic retrieval.
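One detail worth noting from the guide's query sample: the retrieve request carries only user and assistant messages, so system instructions are filtered out client-side before the request is built. That filtering step can be sketched in isolation with plain collections and no SDK calls:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the client-side filtering in the guide's retrieve sample:
// system messages are kept locally for the LLM instructions but are not
// added to the retrieval request.
var messages = new List<Dictionary<string, string>>
{
    new() { ["role"] = "system", ["content"] = "Use the earth at night index to answer the question." },
    new() { ["role"] = "user", ["content"] = "Where does the ocean look green?" }
};

// Only non-system messages go into the request body.
var requestMessages = messages.Where(m => m["role"] != "system").ToList();

Console.WriteLine(requestMessages.Count);       // 1
Console.WriteLine(requestMessages[0]["role"]);  // user
```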

articles/search/includes/how-tos/agentic-retrieval-how-to-create-knowledge-base-rest.md

Diff
@@ -31,9 +31,9 @@ After you create a knowledge base, you can update its properties at any time. If
 
 + One or more [knowledge sources](../../agentic-knowledge-source-overview.md#supported-knowledge-sources) on your search service.
 
-+ Permissions on your search service. **Search Service Contributor** can create and manage a knowledge base. **Search Index Data Reader** can run queries.
++ Permission to create and use objects on Azure AI Search. We recommend [role-based access](../../search-security-rbac.md). **Search Service Contributor** can create and manage a knowledge base. **Search Index Data Reader** can run queries. Alternatively, you can use [API keys](../../search-security-api-keys.md) if a role assignment isn't feasible. For more information, see [Connect to a search service](../../search-get-started-rbac.md).
 
-+ The [2025-11-01-preview](/rest/api/searchservice/operation-groups?view=rest-searchservice-2025-11-01-preview&preserve-view=true) version of the Search Service REST APIs.
++ The latest preview version of the [`Azure.Search.Documents` client library](https://www.nuget.org/packages/Azure.Search.Documents/11.8.0-beta.1) for the .NET SDK.
 
 > [!NOTE]
 > Although you can use the Azure portal to create knowledge bases, the portal uses the 2025-08-01-preview, which uses the previous "knowledge agent" terminology and doesn't support all 2025-11-01-preview features. For help with breaking changes, see [Migrate your agentic retrieval code](../../agentic-retrieval-how-to-migrate.md).

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated the REST API knowledge base guide"
}

Explanation

This change reflects minor updates to the knowledge base creation guide for the Azure AI Search REST API. The changes are as follows:

  1. Revised permissions description: The permissions required to create and use a knowledge base are now described more concretely. Specifically, role-based access is explicitly recommended, and the option of using API keys has been added.

  2. Updated API version information: The reference to the previous REST API version was replaced with a link to the latest preview of the Azure.Search.Documents client library for the .NET SDK, updating the information needed to create a knowledge base.

  3. Improved overall consistency: With these updates, the guide's content is clearer, and users can find the information they need more easily.

This revision aims to keep the knowledge base creation guide for Azure AI Search current and make it a useful resource for its readers.

articles/search/includes/how-tos/knowledge-source-check-csharp.md

Diff
@@ -0,0 +1,51 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+A knowledge source is a top-level, reusable object. Knowing about existing knowledge sources is helpful for either reuse or naming new objects.
+
+Run the following code to list knowledge sources by name and type.
+
+```csharp
+// List knowledge sources by name and type
+using Azure.Search.Documents.Indexes;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+var knowledgeSources = indexClient.GetKnowledgeSourcesAsync();
+
+Console.WriteLine("Knowledge Sources:");
+
+await foreach (var ks in knowledgeSources)
+{
+    Console.WriteLine($"  Name: {ks.Name}, Type: {ks.GetType().Name}");
+}
+```
+
+You can also return a single knowledge source by name to review its JSON definition.
+
+```csharp
+using Azure.Search.Documents.Indexes;
+using System.Text.Json;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+
+// Specify the knowledge source name to retrieve
+string ksNameToGet = "earth-knowledge-source";
+
+// Get its definition
+var knowledgeSourceResponse = await indexClient.GetKnowledgeSourceAsync(ksNameToGet);
+var ks = knowledgeSourceResponse.Value;
+
+// Serialize to JSON for display
+var jsonOptions = new JsonSerializerOptions 
+{ 
+    WriteIndented = true,
+    DefaultIgnoreCondition = System.Text.Json.Serialization.JsonIgnoreCondition.Never
+};
+Console.WriteLine(JsonSerializer.Serialize(ks, ks.GetType(), jsonOptions));
+```

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on checking knowledge sources"
}

Explanation

This change adds a new guide on checking knowledge sources in Azure AI Search. The main contents are as follows:

  1. New file created: A new file, knowledge-source-check-csharp.md, provides code examples for listing knowledge sources and retrieving the details of a specific knowledge source.

  2. Description of knowledge sources: The guide opens by explaining that a knowledge source is a reusable, top-level object and that knowing about existing knowledge sources helps with reusing them and naming new objects.

  3. C# code samples: The guide shows C# code for listing knowledge sources by name and type and for retrieving a specific knowledge source's definition, making it easier for users to manage and inspect knowledge sources.

  4. Display in JSON format: It also shows how to display a knowledge source definition as JSON, which is easier to understand visually.

This guide is a valuable resource for developers working with Azure AI Search, supporting the creation and management of knowledge sources. Its goal is to let users check knowledge sources easily and put them to effective use.
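The display pattern the guide uses, serializing a definition with `System.Text.Json` and indented output, can be sketched in isolation like this; `KnowledgeSourceInfo` is a hypothetical stand-in for the SDK model type, used here only so the snippet is self-contained:

```csharp
using System;
using System.Text.Json;

// Sketch of the guide's display pattern: serialize an object with
// System.Text.Json using indented output for readability.
// KnowledgeSourceInfo is a hypothetical stand-in for an SDK model type.
var ks = new KnowledgeSourceInfo("earth-knowledge-source", "SearchIndexKnowledgeSource");
string json = JsonSerializer.Serialize(ks, new JsonSerializerOptions { WriteIndented = true });
Console.WriteLine(json);

record KnowledgeSourceInfo(string Name, string Type);
```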

articles/search/includes/how-tos/knowledge-source-delete-csharp.md

Diff
@@ -0,0 +1,99 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+Before you can delete a knowledge source, you must delete any knowledge base that references it or update the knowledge base definition to remove the reference. For knowledge sources that generate an index and indexer pipeline, all *generated objects* are also deleted. However, if you used an existing index to create a knowledge source, your index isn't deleted.
+
+If you try to delete a knowledge source that's in use, the action fails and returns a list of affected knowledge bases.
+
+To delete a knowledge source:
+
+1. Get a list of all knowledge bases on your search service.
+
+    ```csharp
+    using Azure.Search.Documents.Indexes;
+    
+    var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+    var knowledgeBases = indexClient.GetKnowledgeBasesAsync();
+    
+    Console.WriteLine("Knowledge Bases:");
+    
+    await foreach (var kb in knowledgeBases)
+    {
+        Console.WriteLine($"  - {kb.Name}");
+    }
+    ```
+
+   An example response might look like the following:
+
+    ```text
+    Knowledge Bases:
+      - earth-knowledge-base
+      - hotels-sample-knowledge-base
+      - my-demo-knowledge-base
+    ```
+
+1. Get an individual knowledge base definition to check for knowledge source references.
+
+    ```csharp
+    using Azure.Search.Documents.Indexes;
+    using System.Text.Json;
+    
+    var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+    
+    // Specify the knowledge base name to retrieve
+    string kbNameToGet = "earth-knowledge-base";
+    
+    // Get a specific knowledge base definition
+    var knowledgeBaseResponse = await indexClient.GetKnowledgeBaseAsync(kbNameToGet);
+    var kb = knowledgeBaseResponse.Value;
+    
+    // Serialize to JSON for display
+    string json = JsonSerializer.Serialize(kb, new JsonSerializerOptions { WriteIndented = true });
+    Console.WriteLine(json);
+    ```
+    
+   An example response might look like the following:
+
+   ```json
+    {
+      "Name": "earth-knowledge-base",
+      "KnowledgeSources": [
+        {
+          "Name": "earth-knowledge-source"
+        }
+      ],
+      "Models": [
+        {}
+      ],
+      "RetrievalReasoningEffort": {},
+      "OutputMode": {},
+      "ETag": "\u00220x8DE278629D782B3\u0022",
+      "EncryptionKey": null,
+      "Description": null,
+      "RetrievalInstructions": null,
+      "AnswerInstructions": null
+    }
+   ```
+
+1. Either delete the knowledge base or [update the knowledge base](/dotnet/api/azure.search.documents.indexes.searchindexclient.createorupdateknowledgebaseasync?view=azure-dotnet-preview&preserve-view=true) to remove the knowledge source if you have multiple sources. This example shows deletion.
+
+    ```csharp
+    using Azure.Search.Documents.Indexes;
+    var indexClient = new SearchIndexClient(new Uri(searchEndpoint), credential);
+    
+    await indexClient.DeleteKnowledgeBaseAsync(knowledgeBaseName);
+    System.Console.WriteLine($"Knowledge base '{knowledgeBaseName}' deleted successfully.");
+    ```
+
+1. Delete the knowledge source.
+
+    ```csharp
+    await indexClient.DeleteKnowledgeSourceAsync(knowledgeSourceName);
+    System.Console.WriteLine($"Knowledge source '{knowledgeSourceName}' deleted successfully.");
+    ```
\ No newline at end of file

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on deleting knowledge sources"
}

Explanation

This change adds a new guide on how to delete knowledge sources in Azure AI Search. The main points are:

  1. Prerequisites for deletion: Before deleting a knowledge source, you must either delete any knowledge base that references it or remove the reference from the knowledge base definition. The guide also notes that the index and indexer pipeline generated by the knowledge source are deleted along with it.

  2. Step-by-step C# code: Concrete steps for deleting a knowledge source are shown, in the following phases:

    • Code to list all knowledge bases on the search service.
    • Code to retrieve an individual knowledge base definition and check for knowledge source references.
    • Code to delete or update the knowledge base.
    • Finally, code to delete the knowledge source itself.
  3. Sample responses: Example program output is provided, including the list of knowledge bases and a specific knowledge base definition in JSON format, so users can see what to expect.

This guide is intended to help developers who use Azure AI Search manage knowledge sources more easily and delete them cleanly when needed.
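The deletion ordering in the guide (check knowledge-base references before removing a source) can be sketched independently of the SDK. The following Python snippet is a hypothetical illustration; the dict shapes mirror the knowledge base JSON shown in the diff, and `blocking_knowledge_bases` is an invented helper, not a real API.

```python
# Hypothetical sketch: decide which knowledge bases must be deleted or
# updated before a knowledge source can safely be removed. The dict shapes
# mirror the JSON definitions shown in the diff; this is not an SDK call.

def blocking_knowledge_bases(knowledge_bases, source_name):
    """Return names of knowledge bases that still reference source_name."""
    return [
        kb["Name"]
        for kb in knowledge_bases
        if any(ks["Name"] == source_name for ks in kb.get("KnowledgeSources", []))
    ]

knowledge_bases = [
    {"Name": "earth-knowledge-base",
     "KnowledgeSources": [{"Name": "earth-knowledge-source"}]},
    {"Name": "mars-knowledge-base",
     "KnowledgeSources": [{"Name": "mars-knowledge-source"}]},
]

# earth-knowledge-base must be deleted or updated first
print(blocking_knowledge_bases(knowledge_bases, "earth-knowledge-source"))
# → ['earth-knowledge-base']
```

Only once this list comes back empty is it safe to call the delete operation for the knowledge source itself.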

articles/search/includes/how-tos/knowledge-source-ingestion-parameters-csharp.md

Diff
@@ -0,0 +1,21 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+For indexed knowledge sources only, you can pass the following `ingestionParameters` properties to control how content is ingested and processed.
+
+| Name | Description | Type | Editable | Required |
+|--|--|--|--|--|
+| `Identity` | A [managed identity](../../search-how-to-managed-identities.md) to use in the generated indexer. | Object | Yes | No |
+| `DisableImageVerbalization` | Enables or disables the use of image verbalization. The default is `False`, which *enables* image verbalization. Set to `True` to *disable* image verbalization. | Boolean | No | No |
+| `ChatCompletionModel` | A chat completion model that verbalizes images or extracts content. Supported models are `gpt-4o`, `gpt-4o-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-5`, `gpt-5-mini`, and `gpt-5-nano`. The [GenAI Prompt skill](../../cognitive-search-skill-genai-prompt.md) will be included in the generated skillset. Setting this parameter also requires that `disable_image_verbalization` is set to `False`. | Object | Only `api_key` and `deployment_name` are editable | No |
+| `EmbeddingModel` | A text embedding model that vectorizes text and image content during indexing and at query time. Supported models are `text-embedding-ada-002`, `text-embedding-3-small`, and `text-embedding-3-large`. The [Azure OpenAI Embedding skill](../../cognitive-search-skill-azure-openai-embedding.md) will be included in the generated skillset, and the [Azure OpenAI vectorizer](../../vector-search-vectorizer-azure-open-ai.md) will be included in the generated index. | Object | Only `api_key` and `deployment_name` are editable | No |
+| `ContentExtractionMode` | Controls how content is extracted from files. The default is `minimal`, which uses standard content extraction for text and images. Set to `standard` for advanced document cracking and chunking using the [Azure Content Understanding skill](../../cognitive-search-skill-content-understanding.md), which will be included in the generated skillset. For `standard` only, the `AiServices` and `AssetStore` parameters are specifiable. | String | No | No |
+| `AiServices` | A Microsoft Foundry resource to access Azure Content Understanding in Foundry Tools. Setting this parameter requires that `ContentExtractionMode` is set to `standard`. | Object | Only `api_key` is editable | Yes |
+| `IngestionSchedule` | Adds scheduling information to the generated indexer. You can also [add a schedule](../../search-howto-schedule-indexers.md) later to automate data refresh. | Object | Yes | No |
+| `IngestionPermissionOptions` | The document-level permissions to ingest from select knowledge sources: either [ADLS Gen2](../../agentic-knowledge-source-how-to-blob.md) or [indexed SharePoint](../../agentic-knowledge-source-how-to-sharepoint-indexed.md). If you specify `user_ids`, `group_ids`, or `rbac_scope`, the generated [ADLS Gen2 indexer](../../search-indexer-access-control-lists-and-role-based-access.md) or [SharePoint indexer](../../search-indexer-sharepoint-access-control-lists.md) will include the ingested permissions. | Array | No | No |

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on knowledge source ingestion parameters"
}

Explanation

This change adds a new guide on ingestion parameters for knowledge sources in Azure AI Search. The main points are:

  1. Parameter overview: The guide describes the ingestionParameters properties available for indexed knowledge sources, which control how content is ingested and processed.

  2. Parameter table: The following properties are documented in detail:

    • Identity: A managed identity used in the generated indexer.
    • DisableImageVerbalization: A Boolean that enables or disables image verbalization.
    • ChatCompletionModel: A chat completion model that verbalizes images or extracts content.
    • EmbeddingModel: A text embedding model that vectorizes text and image content.
    • ContentExtractionMode: Controls how content is extracted from files.
    • AiServices: A Microsoft Foundry resource for accessing Azure Content Understanding.
    • IngestionSchedule: Adds scheduling information to the generated indexer.
    • IngestionPermissionOptions: Options for ingesting document-level permissions from select knowledge sources.
  3. Usage and requirements: Each parameter's purpose, default value, editability, and required status are spelled out, making the table easy for developers to follow.

This guide is intended to give developers who use Azure AI Search more flexible control over knowledge source ingestion.
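The table also encodes a couple of inter-parameter rules: `AiServices` only applies when `ContentExtractionMode` is `standard`, and `ChatCompletionModel` requires image verbalization to stay enabled. A minimal client-side sketch of those two rules, assuming dict-style parameters with the names from the table (the service-side validation may differ):

```python
# Hypothetical sketch of the inter-parameter rules described in the table.
# Parameter names follow the table; actual service validation may differ.

def validate_ingestion_parameters(params):
    errors = []
    # AiServices is only meaningful with advanced (standard) extraction
    if "AiServices" in params and params.get("ContentExtractionMode", "minimal") != "standard":
        errors.append("AiServices requires ContentExtractionMode == 'standard'")
    # ChatCompletionModel requires image verbalization to remain enabled
    if "ChatCompletionModel" in params and params.get("DisableImageVerbalization", False):
        errors.append("ChatCompletionModel requires DisableImageVerbalization == False")
    return errors

print(validate_ingestion_parameters({
    "AiServices": {"api_key": "<key>"},
    "ContentExtractionMode": "minimal",
}))
# → ["AiServices requires ContentExtractionMode == 'standard'"]
```

Checking these constraints before submitting the knowledge source definition gives faster feedback than waiting for a service-side error.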

articles/search/includes/how-tos/knowledge-source-status-csharp.md

Diff
@@ -0,0 +1,53 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 11/19/2025
+---
+
+Run the following code to monitor ingestion progress and health, including [indexer status](/dotnet/api/azure.search.documents.indexes.models.knowledgesourcestatus?view=azure-dotnet-preview&preserve-view=true) for knowledge sources that generate an indexer pipeline and populate a search index.
+
+```csharp
+// Get knowledge source ingestion status
+using Azure.Search.Documents.Indexes;
+using System.Text.Json;
+
+var indexClient = new SearchIndexClient(new Uri(searchEndpoint), new AzureKeyCredential(apiKey));
+
+// Get the knowledge source status
+var statusResponse = await indexClient.GetKnowledgeSourceStatusAsync(knowledgeSourceName);
+var status = statusResponse.Value;
+
+// Serialize to JSON for display
+var json = JsonSerializer.Serialize(status, new JsonSerializerOptions { WriteIndented = true });
+Console.WriteLine(json);
+```
+
+A response for a request that includes ingestion parameters and is actively ingesting content might look like the following example.
+
+```json
+{ 
+  "synchronizationStatus": "active", // creating, active, deleting 
+  "synchronizationInterval" : "1d", // null if no schedule 
+  "currentSynchronizationState" : { // spans multiple indexer "runs" 
+    "startTime": "2025-10-27T19:30:00Z", 
+    "itemUpdatesProcessed": 1100, 
+    "itemsUpdatesFailed": 100, 
+    "itemsSkipped": 1100, 
+  }, 
+  "lastSynchronizationState" : {  // null on first sync 
+    "startTime": "2025-10-27T19:30:00Z", 
+    "endTime": "2025-10-27T19:40:01Z", // this value appears on the activity record on each /retrieve 
+    "itemUpdatesProcessed": 1100, 
+    "itemsUpdatesFailed": 100, 
+    "itemsSkipped": 1100, 
+  }, 
+  "statistics": {  // null on first sync 
+    "totalSynchronization": 25, 
+    "averageSynchronizationDuration": "00:15:20", 
+    "averageItemsProcessedPerSynchronization" : 500 
+  } 
+} 
+```

Summary

{
    "modification_type": "new feature",
    "modification_title": "Added a guide on monitoring knowledge source status"
}

Explanation

This change adds a new guide on monitoring ingestion progress and health for knowledge sources in Azure AI Search. The main points are:

  1. Status-monitoring code example: C# code is provided for checking ingestion progress and health, specifically for knowledge sources that generate an indexer pipeline and populate a search index.

  2. API call: The guide shows how to call the GetKnowledgeSourceStatusAsync method to asynchronously fetch the status of a given knowledge source and pretty-print the result as JSON, making it easy to inspect the current state.

  3. Sample response: An example response shows ingestion and synchronization details, including the synchronization status (for example, active, creating, or deleting), timing, and the counts of processed, failed, and skipped items.

  4. Statistics: Aggregate statistics are also included: the total number of synchronizations, the average synchronization duration, and the average number of items processed per synchronization, giving users the data they need for monitoring and management.

This guide is intended to help developers who use Azure AI Search monitor the state of their knowledge sources effectively.
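Once retrieved, a status payload like the sample above can be reduced to a simple health summary. The following Python sketch is a hypothetical consumer of that payload; the field names mirror the sample JSON in the diff, and `summarize_sync` is an invented helper.

```python
# Hypothetical sketch: derive a health summary from a status payload shaped
# like the sample JSON in the diff. Field names mirror that sample exactly.

def summarize_sync(state):
    """Compute item totals and a failure rate for one synchronization state."""
    processed = state["itemUpdatesProcessed"]
    failed = state["itemsUpdatesFailed"]
    total = processed + failed
    return {
        "total": total,
        "failureRate": failed / total if total else 0.0,
    }

status = {
    "synchronizationStatus": "active",
    "currentSynchronizationState": {
        "itemUpdatesProcessed": 1100,
        "itemsUpdatesFailed": 100,
        "itemsSkipped": 1100,
    },
}

summary = summarize_sync(status["currentSynchronizationState"])
print(summary)  # failureRate is 100 / 1200
```

A summary like this is a natural input for alerting, for example when the failure rate crosses a threshold across consecutive synchronizations.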

articles/search/includes/tutorials/skillset-csharp.md

Diff
@@ -4,7 +4,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: include
-ms.date: 07/11/2025
+ms.date: 11/21/2025
 ms.custom:
   - devx-track-csharp
   - devx-track-dotnet
@@ -65,9 +65,7 @@ Once content is extracted, the [skillset](../../cognitive-search-working-with-sk
 
    1. Copy the connection string for either key one or key two. The connection string is similar to the following example:
 
-      ```http
-      DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net
-      ```
+      `DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net`
 
 ### Foundry Tools
 
@@ -77,9 +75,11 @@ Built-in AI enrichment is backed by Foundry Tools, including Azure Language and
 
 For this tutorial, connections to Azure AI Search require an endpoint and an API key. You can get these values from the Azure portal.
 
-1. Sign in to the [Azure portal](https://portal.azure.com), navigate to the search service **Overview** page, and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
 
-1. Under **Settings** > **Keys**, copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
+1. From the left pane, select **Overview** and copy the endpoint. It should be in this format: `https://my-service.search.windows.net`
+
+1. From the left pane, select **Settings** > **Keys** and copy an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
 ## Set up your environment
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Skillset tutorial update"
}

Explanation

This change updates the C# skillset tutorial for Azure AI Search. The main points are:

  1. Date update: The document's last-updated date changed from July 11, 2025 to November 21, 2025, reflecting the revised content.

  2. Code formatting change: The connection string example was converted from a fenced code block to inline code. The string itself is unchanged:

      `DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net`

  3. Clearer steps: The Azure portal instructions were reorganized, and the guidance for the Settings menu was expanded.

    • Step ordering: Signing in to the Azure portal and selecting the search service is now a distinct step, which makes the overall flow easier to follow.
    • Key selection: Copying an admin key is explained in more detail, including the note that two interchangeable keys are provided for business continuity and that either can be used to add, modify, or delete objects.

These changes are meant to help readers work through the Azure AI Search skillset tutorial more effectively.

articles/search/includes/tutorials/skillset-rest.md

Diff
@@ -4,7 +4,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: include
-ms.date: 07/11/2025
+ms.date: 11/21/2025
 ms.custom:
   - ignite-2023
   - sfi-ropc-nochange
@@ -63,9 +63,7 @@ Download a zip file of the sample data repository and extract the contents. [Lea
 
    1. Copy the connection string for either key one or key two. The connection string is similar to the following example:
 
-      ```http
-      DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net
-      ```
+      `DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net`
 
 ### Foundry Tools
 
@@ -75,9 +73,11 @@ Built-in AI enrichment is backed by Foundry Tools, including Azure Language and
 
 For this tutorial, connections to Azure AI Search require an endpoint and an API key. You can get these values from the Azure portal.
 
-1. Sign in to the [Azure portal](https://portal.azure.com), navigate to the search service **Overview** page, and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
 
-1. Under **Settings** > **Keys**, copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
+1. From the left pane, select **Overview** and copy the endpoint. It should be in this format: `https://my-service.search.windows.net`
+
+1. From the left pane, select **Settings** > **Keys** and copy an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
    :::image type="content" source="../../media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the URL and API keys in the Azure portal.":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Skillset REST tutorial update"
}

Explanation

This change updates the REST skillset tutorial for Azure AI Search. The main updates are:

  1. Date update: The document's last-updated date changed from July 11, 2025 to November 21, 2025.

  2. Connection string formatting: The connection string example was converted from a fenced code block to inline code; the string itself is unchanged:

      `DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;EndpointSuffix=core.windows.net`

  3. Reorganized steps: The Azure portal instructions were tidied up, making the process of selecting the search service clearer.

    • Sign-in step: Signing in and then selecting the search service is emphasized as its own step, producing a smoother flow with the surrounding text.
    • Admin key retrieval: Copying an admin key is explained in more detail, noting that two interchangeable keys are provided for business continuity and that either key can be used on requests to add, modify, or delete objects.

These changes improve the clarity and usability of the tutorial so that users can make better use of Azure AI Search.

articles/search/multimodal-search-overview.md

Diff
@@ -4,7 +4,7 @@ titleSuffix: Azure AI Search
 description: Learn what multimodal search is, how Azure AI Search supports it for text and image content, and where to find detailed concepts, tutorials, and samples.
 ms.service: azure-ai-search
 ms.topic: conceptual
-ms.date: 11/04/2025
+ms.date: 11/21/2025
 author: gmndrg
 ms.author: gimondra
 ---
@@ -35,7 +35,7 @@ Azure AI Search addresses these challenges by integrating images into the same r
 
 Multimodal search is ideal for [retrieval-augmented generation (RAG)](retrieval-augmented-generation-overview.md) scenarios. By interpreting the structural logic of images, multimodal search makes your RAG application or AI agent less likely overlook important visual details. It also provides your users with detailed answers that can be traced back to their original sources, regardless of the source's modality.
 
-## How multimodal search works in Azure AI Search
+## How does multimodal search work?
 
 To simplify the creation of a multimodal pipeline, Azure AI Search offers the **Import data (new)** wizard in the Azure portal. The wizard helps you configure a data source, define extraction and enrichment settings, and generate a multimodal index that contains text, embedded image references, and vector embeddings. For more information, see [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md).
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Multimodal search overview update"
}

Explanation

This change updates the multimodal search overview for Azure AI Search. The main changes are:

  1. Date update: The document's last-updated date changed from November 4, 2025 to November 21, 2025.

  2. Heading change: The heading "How multimodal search works in Azure AI Search" was changed to "How does multimodal search work?". Phrasing the heading as a direct question makes it easier to scan and matches the question-style headings used elsewhere.

These changes make the multimodal search documentation easier to navigate and improve its overall clarity.

articles/search/samples-python.md

Diff
@@ -11,7 +11,7 @@ ms.custom:
   - devx-track-python
   - ignite-2023
 ms.topic: concept-article
-ms.date: 09/23/2025
+ms.date: 11/21/2025
 ---
 
 # Python samples for Azure AI Search
@@ -42,7 +42,7 @@ Code samples from the Azure AI Search team demonstrate features and workflows. T
 | [Quickstart-Semantic-Search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Semantic-Search) | [Quickstart: Semantic ranking](search-get-started-semantic.md) | Add semantic ranking to an index schema and run semantic queries. |
 | [Quickstart-Vector-Search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Vector-Search) | [Quickstart: Vector search](search-get-started-vector.md) | Index and query vector content. |
 | [Tutorial-RAG](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Tutorial-RAG) | [Build a RAG solution using Azure AI Search](tutorial-rag-build-solution.md) | Create an indexing pipeline that loads, chunks, embeds, and ingests searchable content for RAG. |
-| [agentic-retrieval-pipeline-example](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example) | [Tutorial: Build an end-to-end agentic retrieval solution](agentic-retrieval-how-to-create-pipeline.md) | Unlike [Quickstart-Agentic-Retrieval](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Agentic-Retrieval), this sample incorporates Azure AI Agent for request orchestration. |
+| [agentic-retrieval-pipeline-example](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example) | [Tutorial: Build an end-to-end agentic retrieval solution](agentic-retrieval-how-to-create-pipeline.md) | Unlike [Quickstart-Agentic-Retrieval](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Agentic-Retrieval), this sample incorporates Foundry Agent Service for request orchestration. |
 
 ## Accelerators
 
@@ -68,6 +68,8 @@ The following samples are also published by the Azure AI Search team but aren't
 
 | Sample | Description |
 |--|--|
+| [Quickstart-Document-Permissions-Pull-API](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Document-Permissions-Pull-API) | Using an indexer "pull API" approach, flow access control lists from a data source to search results and apply permission filters that restrict access to authorized content. |
+| [Quickstart-Document-Permissions-Push-API](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Document-Permissions-Push-API) | Using the push APIs for indexing a JSON payload, flow embedded permission metadata to indexed documents and search results that are filtered based on user access to authorized content. |
 | [azure-function-search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/azure-function-search) | Use an Azure function to send queries to a search service. You can substitute this Python version for the `api` code used in [Add search to web sites with .NET](tutorial-csharp-overview.md). |
 | [bulk-insert](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/bulk-insert) | [Use the push APIs](search-how-to-load-search-index.md) to upload and index documents. |
 | [index-backup-and-restore.ipynb](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/utilities/index-backup-restore) | Make a local copy of retrievable fields in an index and push those fields to a new index. |

Summary

{
    "modification_type": "minor update",
    "modification_title": "Python samples documentation update"
}

Explanation

This change updates the Python samples documentation for Azure AI Search. The main changes are:

  1. Date update: The document's last-updated date changed from September 23, 2025 to November 21, 2025.

  2. Revised sample description: The description of the agentic retrieval pipeline example now refers to Foundry Agent Service instead of Azure AI Agent, keeping the sample aligned with current product naming.

  3. New samples added: Two new quickstart samples were added to the document.

    • Document permissions pull API: Using an indexer "pull API" approach, flow access control lists from a data source to search results and apply permission filters that restrict access to authorized content.
    • Document permissions push API: Using the push APIs to index a JSON payload, flow embedded permission metadata to indexed documents so that search results are filtered based on user access.

These changes keep the documentation current and give Azure AI Search users practical samples to work from.

articles/search/samples-rest.md

Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: concept-article
-ms.date: 09/23/2025
+ms.date: 11/21/2025
 ---
 
 # REST samples for Azure AI Search
@@ -27,22 +27,29 @@ Code samples from the Azure AI Search team demonstrate features and workflows. T
 | Sample | Article | Description |
 |--|--|--|
 | [quickstart](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart) | [Quickstart: Full-text search](search-get-started-text.md) | Create, load, and query a search index using sample data. |
+| [quickstart-ACL](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-ACL) | [Query-time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md) | Implement query-time access control using role-based access control (RBAC) and access control lists (ACLs). |
 | [quickstart-agentic-retrieval](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-agentic-retrieval) | [Quickstart: Agentic retrieval](search-get-started-agentic-retrieval.md) | Integrate semantic ranking with LLM-powered query planning and answer generation. |
+| [quickstart-RAG](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-RAG) | [Quickstart: Classic generative search (RAG)](search-get-started-rag.md) | Use grounding data from Azure AI Search with a chat completion model from Azure OpenAI. |
+| [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-semantic-search) | [Quickstart: Semantic ranking](search-get-started-semantic.md) | Add semantic ranking to an index schema and run semantic queries. |
 | [quickstart-vectors](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart-vectors) | [Quickstart: Vector search](search-get-started-vector.md) | Index and query vector content. |
-| [skillset-tutorial](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/skillset-tutorial) | [Tutorial: AI-generated searchable content from Azure blobs](tutorial-skillset.md) | Create a skillset that iterates over Azure blobs to extract information and infer structure. |
-| [debug-sessions](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Debug-sessions) | [Tutorial: Fix a skillset using Debug Sessions](cognitive-search-tutorial-debug-sessions.md) | Use REST to create search objects that you later debug in the Azure portal. |
 | [custom-analyzers](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/custom-analyzers) | [Tutorial: Create a custom analyzer for phone numbers](tutorial-create-custom-analyzer.md) | Use an analyzer to preserve patterns and special characters in searchable content. |
+| [debug-sessions](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Debug-sessions) | [Tutorial: Fix a skillset using Debug Sessions](cognitive-search-tutorial-debug-sessions.md) | Create search objects that you later debug in the Azure portal. |
 | [index-json-blobs](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/index-json-blobs) | [Tutorial: Index JSON blobs from Azure Storage](search-semi-structured-data.md) | Create an indexer, data source, and index for nested JSON within a JSON array. Demonstrates the jsonArray parsing model and documentRoot parameters. |
 | [knowledge-store](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/knowledge-store) | [Create a knowledge store using REST](knowledge-store-create-rest.md) | Populate a knowledge store for knowledge mining workflows. |
 | [projections](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/projections) | [Define projections in a knowledge store](knowledge-store-projections-examples.md) | Specify the physical data structures in a knowledge store. |
+| [skillset-tutorial](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/skillset-tutorial) | [Tutorial: AI-generated searchable content from Azure blobs](tutorial-skillset.md) | Create a skillset that iterates over Azure blobs to extract information and infer structure. |
 
 ## Other samples
 
+Currently, there are no other REST samples available.
+
+<!--
 The following samples are also published by the Azure AI Search team but aren't referenced in documentation. Associated README files provide usage instructions.
 
 | Sample | Description |
 |--|--|
-| [skill-examples](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/skill-examples) | Skillset examples in indexer pipelines that include indexes and indexers so you can follow field mappings, output field mappings, and source paths. |
+| | |
+-->
 
 > [!TIP]
 > Use the [samples browser](/samples/browse/?expanded=azure&languages=http&products=azure-cognitive-search) to search for Microsoft code samples on GitHub. You can filter your search by product, service, and language.

Summary

{
    "modification_type": "minor update",
    "modification_title": "REST samples documentation update"
}

Explanation

This change updates the REST samples documentation for Azure AI Search. The main changes are:

  1. Date update: The document's last-updated date changed from September 23, 2025 to November 21, 2025.

  2. New samples added:

    • ACL and RBAC quickstart: Implements query-time access control using role-based access control (RBAC) and access control lists (ACLs).
    • Classic generative search (RAG) quickstart: Combines grounding data from Azure AI Search with a chat completion model from Azure OpenAI.
    • Semantic search quickstart: Adds semantic ranking to an index schema and runs semantic queries.
  3. Description cleanup: The sample table was reordered alphabetically, and some descriptions and links were tightened for clarity.

  4. Other samples section trimmed: The previous "Other samples" table was commented out and replaced with a note that no other REST samples are currently available.

These changes keep the REST reference material for Azure AI Search current and more useful.

articles/search/search-api-versions.md

Diff
@@ -13,7 +13,7 @@ ms.custom:
   - devx-track-python
   - ignite-2023
 ms.topic: conceptual
-ms.date: 07/31/2025
+ms.date: 11/21/2025
 ---
 
 # API versions in Azure AI Search

Summary

{
    "modification_type": "minor update",
    "modification_title": "API versions documentation update"
}

Explanation

This change updates the API versions documentation for Azure AI Search. The only change is:

  1. Date update: The document's last-updated date changed from July 31, 2025 to November 21, 2025.

This update keeps the API version information current and maintains the reliability of the content.

articles/search/search-blob-indexer-role-based-access.md

Diff
@@ -242,7 +242,7 @@ To effectively manage blob deletion, ensure that you have enabled [deletion trac
 ## See also
 
 + [Connect to Azure AI Search using roles](search-security-rbac.md)
-- [Query-Time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md)
+- [Query-time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md)
 - [azure-search-python-samples/Quickstart-Document-Permissions-Push-API](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Quickstart-Document-Permissions-Push-API)
 + [Search over Azure Blob Storage content](search-blob-storage-integration.md)
 + [Configure a blob indexer](search-how-to-index-azure-blob-storage.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Blob indexer role-based access documentation update"
}

Explanation

This change updates the documentation on role-based access for the Azure AI Search blob indexer. The change is small:

  1. Link text capitalization: In the "See also" list, "Query-Time ACL and RBAC enforcement" was changed to "Query-time ACL and RBAC enforcement". The link target is unchanged.

This fix keeps the link text consistent with the sentence-case titling used elsewhere in the documentation.

articles/search/search-document-level-access-overview.md

Diff
@@ -134,7 +134,7 @@ For more information, see [Use Azure AI Search indexers to ingest Microsoft Purv
 
 With native [token-based querying](https://aka.ms/azs-query-preserving-permissions), Azure AI Search validates a user's [Microsoft Entra token](/Entra/identity/devices/concept-tokens-microsoft-Entra-id), trimming result sets to include only documents the user is authorized to access. 
 
-You can achieve automatic trimming by attaching the user's Microsoft Entra token to your query request. For more information, see [Query-Time ACL and RBAC enforcement in Azure AI Search](search-query-access-control-rbac-enforcement.md).
+You can achieve automatic trimming by attaching the user's Microsoft Entra token to your query request. For more information, see [Query-time ACL and RBAC enforcement in Azure AI Search](search-query-access-control-rbac-enforcement.md).
 
 ## Benefits of document-level access control  
   

Summary

{
    "modification_type": "minor update",
    "modification_title": "Document-level access control overview update"
}

Explanation

This change updates the overview of document-level access control in Azure AI Search. The main points are:

  1. Link text capitalization: The link text "Query-Time ACL and RBAC enforcement in Azure AI Search" was changed to sentence case ("Query-time ..."), keeping link styling consistent across the documentation.

  2. No change in meaning: The sentence itself is otherwise unchanged, so the content is unaffected; only the presentation of the link was adjusted.

This update improves consistency and makes it easier for readers to find the related information.

articles/search/search-how-to-index-azure-blob-encrypted.md

Diff
@@ -102,7 +102,7 @@ You should have an Azure Function app that contains the decryption logic and an
 
 1. On your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. For example, if your endpoint URL is `https://mydemo.search.windows.net`, your service name is `mydemo`.
 
-1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
 An API key is required in the header of every request sent to your service. A valid key establishes trust, on a per-request basis, between the application sending the request and the service that handles it.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Fix to the encrypted Azure blob indexing documentation"
}

Explanation

This change revises the documentation on indexing encrypted Azure blobs. The main points are:

  1. Simplified wording: The sentence "You can use either the primary or secondary key on requests for adding, modifying, and deleting objects" was shortened to "You can use either key on requests to add, modify, or delete objects", which is clearer and easier to read.

  2. Consistent phrasing: The revision matches the wording used in other recently updated articles, improving consistency across the documentation.

These fixes keep the instructions for working with encrypted blobs accurate and easy to follow.

articles/search/search-how-to-index-onelake-files.md

Diff
@@ -7,7 +7,7 @@ ms.author: gimondra
 manager: nitinme
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 09/26/2025
+ms.date: 11/21/2025
 ms.custom:
   - build-2024
   - ignite-2024
@@ -60,7 +60,9 @@ This article uses the REST APIs to illustrate each step.
 
 + There's no support to ingest files from **My Workspace** workspace in OneLake since this is a personal repository per user.
 
-+ Microsoft Purview Sensitivity Labels applied via Data Map are not currently supported. If sensitivity labels are applied to artifacts in OneLake using [Microsoft Purview Data Map](/purview/data-map-sensitivity-labels-apply), the indexer may fail to execute properly. To bypass this restriction, an exception must be granted by your organization’s IT team responsible for managing Purview sensitivity labels and Data Map configurations.
++ Microsoft Purview sensitivity labels [applied to Fabric items](/fabric/fundamentals/apply-sensitivity-labels) (such as lakehouses) will cause the indexer to fail if the search service doesn't have the required access. To prevent this behavior, you must either:
+    - Add the AI Search service’s Service Principal Name (SPN) to an existing organization group that grants access under the sensitivity label policy, or
+    - Request an exception from your organization’s IT team responsible for Purview sensitivity label policy configurations, and have them add the SPN directly to the policy.
   
 + Workspace role-based permissions in Microsoft OneLake may affect indexer access to files. Ensure that the Azure AI Search service principal (managed identity) has sufficient permissions over the files you intend to access in the target [Microsoft Fabric workspace](/fabric/fundamentals/workspaces). 
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "OneLakeファイルのインデクシング方法に関するドキュメントの更新"
}

Explanation

This change updates the documentation on indexing OneLake files. The main changes are as follows:

  1. Date update:
    • The document date was moved from 09/26/2025 to 11/21/2025, a routine update that keeps the document current.
  2. Revised guidance on sensitivity labels:
    • The paragraph on Microsoft Purview sensitivity labels was rewritten. It now states that labels applied to Fabric items (such as lakehouses) cause the indexer to fail when the search service lacks the required access, and it lists two remedies: add the AI Search service's Service Principal Name (SPN) to an existing organization group covered by the sensitivity label policy, or request an exception from the IT team responsible for Purview label policy configuration and have the SPN added directly to the policy.

These updates give users who index OneLake files clear, specific information about the required permissions and constraints, helping operations run smoothly.
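For orientation, a OneLake files data source definition has roughly the following shape. This is a sketch, not taken from the diff: the workspace and lakehouse GUIDs are placeholders, and the exact property values should be checked against the OneLake indexer documentation.

```json
{
  "name": "sample-onelake-ds",
  "type": "onelake",
  "credentials": {
    "connectionString": "ResourceId=00000000-0000-0000-0000-000000000000"
  },
  "container": {
    "name": "11111111-1111-1111-1111-111111111111",
    "query": "optional/folder/path"
  }
}
```

The connection string carries the Fabric workspace identifier and the container names the lakehouse; the search service authenticates with its managed identity, which is why the SPN permissions discussed above matter.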

articles/search/search-how-to-index-sql-database.md

Diff
@@ -7,7 +7,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 03/18/2025
+ms.date: 11/21/2025
 ms.update-cycle: 180-days
 ms.custom:
   - ignite-2023
@@ -400,9 +400,6 @@ api-key: admin-key
 
 When using SQL integrated change tracking policy, don't specify a separate data deletion detection policy. The SQL integrated change tracking policy has built-in support for identifying deleted rows. However, for the deleted rows to be detected automatically, the document key in your search index must be the same as the primary key in the SQL table, and the primary key must be non-clustered.
 
-<!-- > [!NOTE]  
-> When using [TRUNCATE TABLE](/sql/t-sql/statements/truncate-table-transact-sql) to remove a large number of rows from a SQL table, the indexer needs to be [reset](/rest/api/searchservice/indexers/reset) to reset the change tracking state to pick up row deletions. -->
-
 <a name="HighWaterMarkPolicy"></a>
 
 ### High water mark change detection policy

Summary

{
    "modification_type": "minor update",
    "modification_title": "SQLデータベースのインデクシング方法に関するドキュメントの修正"
}

Explanation

This change revises the documentation on indexing from Azure SQL Database. The main changes are as follows:

  1. Date update:
    • The document date was moved from 03/18/2025 to 11/21/2025, a routine update that keeps the document current.
  2. Removed note:
    • A commented-out note about TRUNCATE TABLE was deleted. It had explained that the indexer must be reset to pick up row deletions after TRUNCATE TABLE removes a large number of rows; removing the dormant comment tidies the source.

These changes keep the guidance on SQL database indexing, and on the SQL integrated change tracking policy in particular, accurate and easy to follow, helping users manage their indexers efficiently.
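For reference, the SQL integrated change tracking policy discussed above is enabled on the data source definition. The following is a sketch with placeholder names, assuming the 2025-09-01 REST API used elsewhere in this update:

```json
{
  "name": "sample-sql-ds",
  "type": "azuresql",
  "credentials": {
    "connectionString": "<azure-sql-connection-string>"
  },
  "container": { "name": "<table-or-view-name>" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
  }
}
```

Note that no `dataDeletionDetectionPolicy` appears: as the article states, the integrated policy detects deleted rows on its own, provided the index's document key matches the table's primary key.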

articles/search/search-howto-managed-identities-azure-functions.md

Diff
@@ -1,36 +1,36 @@
 ---
 title: Set up an indexer connection to Azure functions using "Easy Auth"
 titleSuffix: Azure AI Search
-description: Learn how to set up an indexer connection to an Azure Function using built-in authentication also known as "Easy Auth".
+description: Learn how to set up an indexer connection to an Azure Function using built-in authentication, which is also known as "Easy Auth".
 author: arv100kri
 ms.author: arjagann
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 01/20/2025
+ms.date: 11/21/2025
 ms.update-cycle: 180-days
 ms.custom:
   - subject-rbac-steps
 ---
 
-# Authenticate to Azure Function App using "Easy Auth" (Azure AI Search)
+# Authenticate to an Azure Function App using "Easy Auth" (Azure AI Search)
 
-This article explains how to set up an indexer connection to an Azure Function app using the [built-in authentication capabilities of App services](/azure/app-service/overview-authentication-authorization), also known as "Easy Auth". Azure Function apps are a great solution for hosting Custom Web APIs that an Azure AI Search service can use either to enrich content ingested during an indexer run, or to vectorize content in a search query if you're using a custom embedding model for [integrated vectorization](vector-search-integrated-vectorization.md).
+This article explains how to set up an indexer connection to an Azure Function app using the [built-in authentication capabilities of Azure App Service](/azure/app-service/overview-authentication-authorization), also known as "Easy Auth." Azure Function apps are a great solution for hosting Custom Web APIs that an Azure AI Search service can use to enrich content ingested during an indexer run or, if you're using a custom embedding model for [integrated vectorization](vector-search-integrated-vectorization.md), vectorize content in a search query.
 
-You can use either a system-assigned or a user-assigned identity of the search service to authenticate against the Azure Function app. This approach requires setting up a Microsoft Entra ID application registration to use as the authentication provider for the Azure Function app, as explained in this article.
+You can use a system-assigned or user-assigned managed identity of the search service to authenticate against the Azure Function app. This approach requires setting up a Microsoft Entra ID application registration to use as the authentication provider for the Azure Function app, which is explained in this article.
 
 ## Prerequisites
 
-* [Create a managed identity](search-how-to-managed-identities.md) for your search service.
+* A [managed identity](search-how-to-managed-identities.md) for your search service.
 
-## Configure Microsoft Entra ID application to use as authentication provider
+## Configure Microsoft Entra ID application as the authentication provider
 
-To use Microsoft Entra ID as an authentication provider to the Azure Function app, an application registration must be created. There are 2 options to do so - either creating one automatically via the Azure Function app itself, or using an already created existing application. To learn more about these steps follow [Azure App services' documentation](/azure/app-service/configure-authentication-provider-aad?tabs=workforce-configuration#choose-the-app-registration.md).
+To use Microsoft Entra ID as an authentication provider to the Azure Function app, an application registration must be created. There are two options: create one automatically via the Azure Function app itself or use an existing application. To learn more about these steps, see the [App Service documentation](/azure/app-service/configure-authentication-provider-aad?tabs=workforce-configuration#choose-the-app-registration.md).
 
-Regardless of either option, ensure that the app registration is configured per the following steps to ensure it being compatible with Azure AI Search.
+Regardless of the option, ensure that the app registration is configured per the following steps to ensure it's compatible with Azure AI Search.
 
 ### Ensure the app registration has application ID URI configured
 
-The app registration should be configured with an application ID URI, which can then be used as the token audience with Azure Function apps and Azure AI Search. Configure it in the format `api://<applicationId>`. This can be done by navigating to the **Overview** section of the app registration and setting the **Application ID URI** field.
+The app registration should be configured with an application ID URI, which can be used as the token audience with Azure Function apps and Azure AI Search. Configure it in the format `api://<applicationId>`. This can be done by navigating to the **Overview** section of the app registration and setting the **Application ID URI** field.
 
 [ ![Screenshot of an app registration configured with application ID URI.](./media/search-howto-managed-identities-azure-functions/app-registration-overview.png) ](./media/search-howto-managed-identities-azure-functions/app-registration-overview.png#lightbox)
 
@@ -42,7 +42,7 @@ Navigate to the **Authentication** section of the app registration and configure
 
 ### (Optional) Configure a client secret
 
-App services recommend utilizing a client secret for the authentication provider application. Authentication still works without client secret, as long as the delegated permissions are set up. To set up a client secret, navigate to the **Certificates & secrets** section of the app registration, and add a **New client secret** as explained [in this article](/entra/identity-platform/quickstart-register-app?tabs=client-secret#add-credentials).
+App Service recommends using a client secret for the authentication provider application. Authentication still works without client secret, as long as the delegated permissions are set up. To set up a client secret, navigate to the **Certificates & secrets** section of the app registration, and add a **New client secret** as explained [in this article](/entra/identity-platform/quickstart-register-app?tabs=client-secret#add-credentials).
 
 [ ![Screenshot of an app registration with option to configure client secret.](./media/search-howto-managed-identities-azure-functions/client-secret.png) ](./media/search-howto-managed-identities-azure-functions/client-secret.png#lightbox)
 
@@ -56,9 +56,9 @@ Once the delegated permissions scope is set up, you should notice in the **API p
 
 [ ![Screenshot of an app registration with delegated permissions.](./media/search-howto-managed-identities-azure-functions/api-permissions.png) ](./media/search-howto-managed-identities-azure-functions/api-permissions.png#lightbox)
 
-## Configure Microsoft Entra ID authentication provider in Azure Function app
+## Configure Microsoft Entra ID authentication provider in the Azure Function app
 
-With the client application registered with the exact specifications above, Microsoft Entra ID authentication for the Azure Function app can be set up by following the [guide from App Services](/azure/app-service/configure-authentication-provider-aad). Navigate to the **Authentication** section of the Azure Function app to set up the authentication details.
+With the client application registered with the previous specifications, Microsoft Entra ID authentication for the Azure Function app can be set up by following the [App Service documentation](/azure/app-service/configure-authentication-provider-aad). Navigate to the **Authentication** section of the Azure Function app to set up the authentication details.
 
 Ensure the following settings are configured to ensure that Azure AI Search can successfully authenticate to the Azure Function app.
 
@@ -77,41 +77,41 @@ The following screenshot highlights these specific settings for a sample Azure F
 * Add Microsoft Entra ID as the authentication provider for the Azure Function app.
 * Either create a new app registration or choose a previously configured app registration. Ensure that it's configured according to the guidelines in the previous section of this document.
 * Ensure that in the **Allowed token audiences** section, the application ID URI of the app registration is specified. It should be in the `api://<applicationId>` format, matching what was configured with the app registration created earlier.
-* If you desire, you can configure additional checks to restrict access specifically to the indexer. 
+* If you desire, you can configure other checks to restrict access specifically to the indexer.
 
 [ ![Screenshot of an Azure Function app with Microsoft Entra ID Authentication provider.](./media/search-howto-managed-identities-azure-functions/identity-provider.png) ](./media/search-howto-managed-identities-azure-functions/identity-provider.png#lightbox)
 
-### Configure additional checks
+### Configure other checks
 
 * Ensure that the **Object (principal) ID** of the specific Azure AI Search service's identity is specified as the **Identity requirement**, by checking the option **Allow requests from specific identities** and entering the **Object (principal) ID** in the identity section.
 
 [ ![Screenshot of the identity section for an Azure AI Search service.](./media/search-howto-managed-identities-azure-functions/search-service-identity.png) ](./media/search-howto-managed-identities-azure-functions/search-service-identity.png#lightbox)
 
-* In **Client application requirement** select the option **Allow requests from specific client application**. You need to look up the Client ID for the Azure AI Search service's identity. To do this, copy over the Object (principal) ID from the previous step and look up in your Microsoft Entra ID tenant. There should be a matching enterprise application whose overview page lists an **Application ID**, which is the GUID that needs to be specified as the client application requirement.
+* In **Client application requirement**, select the option **Allow requests from specific client application**. You need to look up the Client ID for the Azure AI Search service's identity. To do this, copy over the Object (principal) ID from the previous step and look up in your Microsoft Entra ID tenant. There should be a matching enterprise application whose overview page lists an **Application ID**, which is the GUID that needs to be specified as the client application requirement.
 
 [ ![Screenshot of the enterprise application details of an Azure AI Search service's identity.](./media/search-howto-managed-identities-azure-functions/search-identity-entra.png) ](./media/search-howto-managed-identities-azure-functions/search-identity-entra.png#lightbox)
 
-
 >[!NOTE]
 > This step is the most important configuration on the Azure Function app and doing it wrongly can result in the indexer being forbidden from accessing the Azure Function app. Ensure that you perform the lookup of the identity's enterprise application details correctly, and you specify the **Application ID** and **Object (principal) ID** in the right places.
 
-* For the **Tenant requirement**, choose any of the options that aligns with your security posture. Check out the [Azure App service documentation](/azure/app-service/configure-authentication-provider-aad) for more details.
+* For the **Tenant requirement**, choose any of the options that aligns with your security posture. For more information, see the [App Service documentation](/azure/app-service/configure-authentication-provider-aad).
+
+## Set up a connection to the Azure Function app
 
-## Setting up a connection to the Azure Function app
+Depending on whether the connection to the Azure Function app needs to be made in a Custom Web API skill or a Custom Web API vectorizer, the JSON definition is slightly different. In both cases, ensure that you specify the correct URI to the Azure Function app and set the `authResourceId` to be the same value as the **Allowed token audience** configured for the authentication provider.
 
-Depending on whether the connection to the Azure Function app needs to be made in a Custom Web API skill or a Custom Web API vectorizer, the JSON definition is slightly different. In both cases, ensure that you specify the correct URI to the Azure Function app and set the `authResourceId` to be the same value as the **Allowed token audience** configured for the authentication provider. 
+Depending on whether you choose to connect using a system-assigned identity or user-assigned identity, required properties differ slightly.
 
-Depending on whether you choose to connect using a system assigned identity or a user assigned identity, details required will be slightly different. 
+### Use a system-assigned identity
 
-### Using system assigned identity
 Here's an example to call into a function named `test` for the sample Azure Function app, where the system assigned identity of the search service is allowed to authenticate via "Easy Auth".
 
 ```json
 "uri": "https://contoso-function-app.azurewebsites.net/api/test?",
 "authResourceId": "api://00000000-0000-0000-0000-000000000000"
 ```
 
-### Using user assigned identity
+### Use a user-assigned identity
 
 Here's an example to call into the same function, where the specific user assigned identity is allowed to authenticate via "Easy Auth". You're expected to specify the resource ID of the exact user assigned identity to use in the `identity` property of the configuration.
 
@@ -133,7 +133,7 @@ For Custom Web API skills, permissions are validated during indexer run-time. Fo
 
 * If authentication issues persist, ensure that the right identity information - namely Application ID, Object (principal) ID for the Azure AI Search service's identity is specified in the Azure Function app's authentication provider.
 
-## See also
+## Related content
 
 * [Custom Web API skill](cognitive-search-custom-skill-web-api.md)
 * [Custom Web API vectorizer](vector-search-vectorizer-custom-web-api.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure FunctionsにおけるマネージドIDの設定手順の更新"
}

Explanation

This change updates the documentation on setting up an indexer connection to Azure Functions. The main changes are as follows:

  1. Date update:
    • The document date was moved from 01/20/2025 to 11/21/2025 to keep the document current.
  2. Wording fixes:
    • Descriptions and section titles were reworded for fluency and clarity, for example changing "built-in authentication capabilities of App services" to "built-in authentication capabilities of Azure App Service."
  3. Tightened phrasing:
    • Some passages were streamlined, for example "either creating one automatically via the Azure Function app itself, or using an already created existing application" became "create one automatically via the Azure Function app itself or use an existing application," which reads more concisely.
  4. Stronger technical guidance:
    • The steps for configuring the authentication provider and Azure AI Search were made more explicit, particularly around using managed identities and configuring Microsoft Entra ID.
  5. Updated related links:
    • The closing "See also" section was renamed "Related content" and its links were refreshed, making further reading easier to find.

These changes aim to deepen understanding of Azure Functions and managed identities and to serve as a practical reference when users perform the configuration themselves.
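To connect the dots, the `uri` and `authResourceId` values from the article's example slot into a Custom Web API skill definition roughly as follows. The skill name, context, and input/output fields are illustrative placeholders, not part of the original article:

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "name": "easy-auth-enrichment",
  "uri": "https://contoso-function-app.azurewebsites.net/api/test?",
  "authResourceId": "api://00000000-0000-0000-0000-000000000000",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "result", "targetName": "enrichedText" }
  ]
}
```

With a system-assigned identity no identity property is needed; for a user-assigned identity, the article notes that the resource ID of that identity goes in the configuration's `identity` property.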

articles/search/search-index-access-control-lists-and-rbac-push-api.md

Diff
@@ -131,5 +131,5 @@ This example illustrates how the document access rules are resolved based on the
 ## See also
 
 - [Connect to Azure AI Search using roles](search-security-rbac.md)
-- [Query-Time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md)
+- [Query-time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md)
 - [azure-search-python-samples/Quickstart-Document-Permissions-Push-API](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Quickstart-Document-Permissions-Push-API)

Summary

{
    "modification_type": "minor update",
    "modification_title": "アクセスポリシーに関するドキュメントの文言修正"
}

Explanation

This change adjusts wording in the access policy documentation for Azure AI Search:

  1. Consistent capitalization:
    • "Query-Time ACL and RBAC enforcement" was changed to "Query-time ACL and RBAC enforcement," matching the sentence-case style of the other list items.

Small fixes like this improve readability and help users reach the related information quickly.

articles/search/search-indexer-access-control-lists-and-role-based-access.md

Diff
@@ -357,5 +357,5 @@ To manage blob deletion effectively, make sure [deletion tracking](search-how-to
 ## See also
 
 + [Connect to Azure AI Search using roles](search-security-rbac.md)
-+ [Query-Time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md)
++ [Query-time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md)
 + [azure-search-python-samples/Quickstart-Document-Permissions-Push-API](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Quickstart-Document-Permissions-Push-API)

Summary

{
    "modification_type": "minor update",
    "modification_title": "アクセスポリシーに関するドキュメントの文言修正"
}

Explanation

This change adjusts wording in the indexer access control documentation for Azure AI Search:

  1. Consistent capitalization:
    • "Query-Time ACL and RBAC enforcement" was changed to "Query-time ACL and RBAC enforcement" so that the list items share a consistent style and read more cleanly.

Such wording fixes keep the document coherent and make the related links easier to scan.

articles/search/search-indexer-sensitivity-labels.md

Diff
@@ -4,7 +4,7 @@ titleSuffix: Azure AI Search
 description: Learn how to configure Azure AI Search indexers to ingest Microsoft Purview sensitivity labels from supported data sources for document-level security enforcement.  
 ms.service: azure-ai-search  
 ms.topic: how-to  
-ms.date: 11/19/2025  
+ms.date: 11/20/2025  
 author: gmndrg  
 ms.author: gimondra  
 ---
@@ -32,7 +32,7 @@ This functionality is available for the following data sources:
 
 At query time, Azure AI Search evaluates sensitivity labels and enforces [document-level access control](search-document-level-access-overview.md) in accordance with the user's Microsoft Entra ID token and Purview label policies.  
 
-Only users authorized to access content with [extract usage right](/purview/rights-management-usage-rights) under a given label can retrieve corresponding documents in search results. There's a delay in how often the labels are pulled from a document after changed. 
+Only users authorized to access content with [READ usage right](/purview/rights-management-usage-rights) under a given label can retrieve corresponding documents in search results. There's a delay in how often the labels are pulled from a document after changed. 
 
 When configured [on a schedule](search-howto-schedule-indexers.md), the indexer pulls new documents and updates from the data source. It captures:
 - Newly added documents and their associated sensitivity labels

Summary

{
    "modification_type": "minor update",
    "modification_title": "感度ラベルに関するドキュメントの修正"
}

Explanation

This change revises the sensitivity labels documentation for Azure AI Search in a few places:

  1. Date update:
    • The last-updated date was moved from 11/19/2025 to 11/20/2025, signaling that the document is current.
  2. Usage-right terminology:
    • The condition for retrieving labeled documents in search results was changed from "extract usage right" to "READ usage right," making the required right explicit and accurate.
  3. Clarified access guidance:
    • The note about the delay before label changes are picked up from a document is unchanged, but the sharper usage-right wording helps users understand exactly which access right they need.

These changes improve the accuracy and readability of the document so that users can more easily understand the information they need.

articles/search/search-indexer-tutorial.md

Diff
@@ -100,7 +100,7 @@ API calls require the service URL and an access key. A search service is created
 
 1. Sign in to the [Azure portal](https://portal.azure.com). On your service **Overview** page, copy the endpoint URL. An example endpoint might look like `https://mydemo.search.windows.net`.
 
-1. On **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+1. On **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
    :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of Azure portal pages showing the HTTP endpoint and access key location for a search service." border="false":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "検索インデクサーチュートリアルの文言修正"
}

Explanation

This change revises part of the search indexer tutorial for Azure AI Search, focusing on the following:

  1. Clearer wording:
    • The description of using an admin key was changed from "requests for adding, modifying, and deleting objects" to "requests to add, modify, or delete objects," making the sentence more direct and easier to follow.
  2. Consistency:
    • The revised wording aligns with the other items in the list, improving clarity. Such consistency raises the readability of the whole document and improves the user experience.

Even small fixes like this contribute meaningfully to accuracy and comprehension, helping users work through the steps efficiently.

articles/search/search-markdown-data-tutorial.md

Diff
@@ -6,7 +6,7 @@ author: mdonovan
 ms.author: mdonovan
 ms.service: azure-ai-search
 ms.topic: tutorial
-ms.date: 03/28/2025
+ms.date: 11/21/2025
 ms.update-cycle: 180-days
 ms.custom:
   - ignite-2024
@@ -20,7 +20,9 @@ ms.custom:
 
 Azure AI Search can index Markdown documents and arrays in Azure Blob Storage using an [indexer](search-indexer-overview.md) that knows how to read Markdown data.
 
-This tutorial shows you how to index Markdown files indexed using the `oneToMany` Markdown parsing mode. It uses a REST client and the [Search REST APIs](/rest/api/searchservice/) to:
+This tutorial shows you how to index Markdown files indexed using the `oneToMany` Markdown parsing mode and the [Search Service REST APIs](/rest/api/searchservice/).
+
+In this tutorial, you:
 
 > [!div class="checklist"]
 > + Set up sample data and configure an `azureblob` data source
@@ -32,16 +34,17 @@ This tutorial shows you how to index Markdown files indexed using the `oneToMany
 
 + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn).
 
-+ [Azure Storage](/azure/storage/common/storage-account-create).
-
-+ [Azure AI Search](search-what-is-azure-search.md). [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription.
++ An [Azure Storage account](/azure/storage/common/storage-account-create).
++ An [Azure AI Search service](search-create-service-portal.md).
 
-+ [Visual Studio Code](https://code.visualstudio.com/download) with a [REST client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
++ [Visual Studio Code](https://code.visualstudio.com/download) with the [REST Client Extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
 
 > [!NOTE]
-> You can use a free search service for this tutorial. The Free tier limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before you start, make sure you have room on your service to accept the new resources.
+> You can use a free search service for this tutorial. The Free tier limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before you start, make sure your service has room to accept the new resources.
+
+## Prepare sample data
 
-## Create a Markdown document
+### Create a Markdown file
 
 Copy and paste the following Markdown into a file named `sample_markdown.md`. The sample data is a single Markdown file containing various Markdown elements. We chose one Markdown file to stay under the storage limits of the Free tier.
 
@@ -192,19 +195,29 @@ Markdown is a lightweight yet powerful tool for writing documentation. It suppor
 Thank you for reviewing this example!

+### Upload the file and get a connection string
+
+Follow these instructions to upload the sample_markdown.md file to a container in your Azure Storage account. You must also get the storage account connection string. Make a note of the connection string and the container name for later use.
+
## Copy a search service URL and API key

For this tutorial, connections to Azure AI Search require an endpoint and an API key. You can get these values from the Azure portal. For alternative connection methods, see Managed identities.

-1. Sign in to the Azure portal, navigate to the search service Overview page, and copy the URL. An example endpoint might look like https://mydemo.search.windows.net.
+1. Sign in to the Azure portal and select your search service.

-1. Under Settings > Keys, copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
+1. From the left pane, select Overview.
+
+1. Make a note of the URL, which should look like https://my-service.search.windows.net.
+
+1. From the left pane, select Settings > Keys.
+
+1. Make a note of an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests for adding, modifying, and deleting objects.

:::image type="content" source="media/search-markdown-data-tutorial/get-url-key.png" alt-text="Screenshot of the URL and API keys in the Azure portal.":::

## Set up your REST file

-1. Start Visual Studio Code and create a new file.
+1. Create a file in Visual Studio Code.

  1. Provide values for variables used in the request.

@@ -225,25 +238,25 @@ For help with the REST client, see [Quickstart: Full-text search using REST](sea

### Create a data source
-POST {{baseUrl}}/datasources?api-version=2024-11-01-preview  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}}
+POST {{baseUrl}}/datasources?api-version=2025-09-01  HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}}

-    {
-        "name" : "sample-markdown-ds",
-        "description": null,
-        "type": "azureblob",
-        "subtype": null,
-        "credentials": {
-            "connectionString": "{{storageConnectionString}}"
-        },
-        "container": {
-            "name": "{{blobContainer}}",
-            "query": null
-        },
-        "dataChangeDetectionPolicy": null,
-        "dataDeletionDetectionPolicy": null
-    }
+{
+    "name" : "sample-markdown-ds",
+    "description": null,
+    "type": "azureblob",
+    "subtype": null,
+    "credentials": {
+        "connectionString": "{{storageConnectionString}}"
+    },
+    "container": {
+        "name": "{{blobContainer}}",
+        "query": null
+    },
+    "dataChangeDetectionPolicy": null,
+    "dataDeletionDetectionPolicy": null
+}

Send the request. The response should look like:
@@ -253,7 +266,7 @@ HTTP/1.1 201 Created
Transfer-Encoding: chunked
Content-Type: application/json; odata.metadata=minimal; odata.streaming=true; charset=utf-8
ETag: "0x8DCF52E926A3C76"
-Location: https://.search.windows.net:443/datasources('sample-markdown-ds')?api-version=2024-11-01-preview
+Location: https://.search.windows.net:443/datasources('sample-markdown-ds')?api-version=2025-09-01
Server: Microsoft-IIS/10.0
Strict-Transport-Security: max-age=2592000, max-age=15724800; includeSubDomains
Preference-Applied: odata.include-annotations="*"
@@ -310,21 +323,21 @@ This example provides samples of how to index data both with and without field m

### Create an index
-POST {{baseUrl}}/indexes?api-version=2024-11-01-preview  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}}
+POST {{baseUrl}}/indexes?api-version=2025-09-01  HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}}

-    {
-      "name": "sample-markdown-index",  
-      "fields": [
-        {"name": "id", "type": "Edm.String", "key": true, "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
-        {"name": "content", "type": "Edm.String", "key": false, "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
-        {"name": "title", "type": "Edm.String", "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
-        {"name": "h2_subheader", "type": "Edm.String", "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
-        {"name": "h3_subheader", "type": "Edm.String", "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
-        {"name": "ordinal_position", "type": "Edm.Int32", "searchable": false, "retrievable": true, "filterable": true, "facetable": true, "sortable": true}
-      ]
-    }
+{
+  "name": "sample-markdown-index",  
+  "fields": [
+    {"name": "id", "type": "Edm.String", "key": true, "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
+    {"name": "content", "type": "Edm.String", "key": false, "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
+    {"name": "title", "type": "Edm.String", "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
+    {"name": "h2_subheader", "type": "Edm.String", "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
+    {"name": "h3_subheader", "type": "Edm.String", "searchable": true, "retrievable": true, "filterable": true, "facetable": true, "sortable": true},
+    {"name": "ordinal_position", "type": "Edm.Int32", "searchable": false, "retrievable": true, "filterable": true, "facetable": true, "sortable": true}
+  ]
+}

### Index schema in a configuration with no field mappings
@@ -360,34 +373,34 @@ If you use this schema, be sure to adjust later requests accordingly. This will

### Create and run an indexer
-POST {{baseUrl}}/indexers?api-version=2024-11-01-preview  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}}
+POST {{baseUrl}}/indexers?api-version=2025-09-01  HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}}

+{
+  "name": "sample-markdown-indexer",
+  "dataSourceName": "sample-markdown-ds",
+  "targetIndexName": "sample-markdown-index",
+  "parameters" : { 
+    "configuration": { 
+      "parsingMode": "markdown",
+      "markdownParsingSubmode": "oneToMany",
+      "markdownHeaderDepth": "h3"
+      }
+    },
+  "fieldMappings" : [ 
    {
-      "name": "sample-markdown-indexer",
-      "dataSourceName": "sample-markdown-ds",
-      "targetIndexName": "sample-markdown-index",
-      "parameters" : { 
-        "configuration": { 
-          "parsingMode": "markdown",
-          "markdownParsingSubmode": "oneToMany",
-          "markdownHeaderDepth": "h3"
-          }
-        },
-      "fieldMappings" : [ 
-        {
-          "sourceFieldName": "/sections/h1",
-          "targetFieldName": "title",
-          "mappingFunction": null
-        }
-      ]
+      "sourceFieldName": "/sections/h1",
+      "targetFieldName": "title",
+      "mappingFunction": null
    }
+  ]
+}

Key points:

-+ The indexer will only parse headers up to h3. Any lower-level headers (h4,h5,h6) will be treated as plain text and show up in the content field. This is why the index and field mappings only exist up to a depth of h3.
++ The indexer will only parse headers up to h3. Any lower-level headers (h4,h5,h6) are treated as plain text and show up in the content field. This is why the index and field mappings only exist up to a depth of h3.

 + The content and ordinal_position fields require no field mapping because they exist with those names in the enriched content.

@@ -397,14 +410,14 @@ You can start searching as soon as the first document is loaded.

### Query the index
-POST {{baseUrl}}/indexes/sample-markdown-index/docs/search?api-version=2024-11-01-preview  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}}
+POST {{baseUrl}}/indexes/sample-markdown-index/docs/search?api-version=2025-09-01  HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}}
  
-  {
-    "search": "*",
-    "count": true
-  }
+{
+  "search": "*",
+  "count": true
+}

Send the request. This is an unspecified full-text search query that returns all of the fields marked as retrievable in the index, along with a document count. The response should look like:
@@ -438,14 +451,14 @@ Add a search parameter to search on a string.

### Query the index
-POST {{baseUrl}}/indexes/sample-markdown-index/docs/search?api-version=2024-11-01-preview  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}}
+POST {{baseUrl}}/indexes/sample-markdown-index/docs/search?api-version=2025-09-01  HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}}
  
-  {
-    "search": "h4",
-    "count": true,
-  }
+{
+  "search": "h4",
+  "count": true
+}

Send the request. The response should look like:
@@ -491,16 +504,16 @@ Key points:
Add a select parameter to limit the results to fewer fields. Add a filter to further narrow the search.
```http
### Query the index
-POST {{baseUrl}}/indexes/sample-markdown-index/docs/search?api-version=2024-11-01-preview HTTP/1.1
- Content-Type: application/json
- api-key: {{apiKey}}
+POST {{baseUrl}}/indexes/sample-markdown-index/docs/search?api-version=2025-09-01 HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}}

-  {
-    "search": "Markdown",
-    "count": true,
-    "select": "title, content, h2_subheader",
-    "filter": "h2_subheader eq 'Conclusion'"
-  }
+{
+  "search": "Markdown",
+  "count": true,
+  "select": "title, content, h2_subheader",
+  "filter": "h2_subheader eq 'Conclusion'"
+}

```json
@@ -543,20 +556,16 @@ Indexers can be reset to clear execution history, which allows a full rerun. The

```http
### Reset the indexer
-POST {{baseUrl}}/indexers/sample-markdown-indexer/reset?api-version=2024-11-01-preview  HTTP/1.1
-  api-key: {{apiKey}}
-```
+POST {{baseUrl}}/indexers/sample-markdown-indexer/reset?api-version=2025-09-01  HTTP/1.1
+api-key: {{apiKey}}

-```http
### Run the indexer
-POST {{baseUrl}}/indexers/sample-markdown-indexer/run?api-version=2024-11-01-preview  HTTP/1.1
-  api-key: {{apiKey}}
-```
+POST {{baseUrl}}/indexers/sample-markdown-indexer/run?api-version=2025-09-01  HTTP/1.1
+api-key: {{apiKey}}

-```http
### Check indexer status 
-GET {{baseUrl}}/indexers/sample-markdown-indexer/status?api-version=2024-11-01-preview  HTTP/1.1
-  api-key: {{apiKey}}
+GET {{baseUrl}}/indexers/sample-markdown-indexer/status?api-version=2025-09-01  HTTP/1.1
+api-key: {{apiKey}}

## Clean up resources
````

</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Improvements to the Markdown data tutorial"
}
```

### Explanation
This change substantially revises the tutorial on indexing Markdown data in Azure AI Search. The main changes are:

1. **Date update**:
   - The document's last-updated date changed from "03/28/2025" to "11/21/2025", reflecting the availability of new information and guidance.

2. **Clarification and reorganization**:
   - The tutorial content was reorganized and some wording was tightened. In particular, the REST APIs used to index Markdown files are now introduced explicitly, making the tutorial's purpose and steps clearer.

3. **More detailed steps**:
   - The procedures and resource links for obtaining the required Azure account and services were improved, making the content easier to follow for first-time users.

4. **API version update**:
   - The API version changed from "2024-11-01-preview" to "2025-09-01", ensuring that the latest API is used.

5. **Code sample cleanup**:
   - The REST API request samples were reformatted for readability, which is especially helpful when users run the steps themselves.

With these changes, the tutorial on indexing Markdown data with Azure AI Search becomes a more accessible resource, designed so that new users can readily understand advanced material.

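The explanation above covers the API-version bump in the tutorial's indexer calls. As a quick illustration, here's a minimal sketch (the service name and indexer name are illustrative placeholders, not values from the tutorial) of how the reset/run/status URLs look against the stable `2025-09-01` version:

```python
# Minimal sketch of the indexer management URLs shown in the tutorial's
# diff, pinned to the stable 2025-09-01 API version. The service name and
# indexer name below are illustrative placeholders.
API_VERSION = "2025-09-01"

def indexer_url(base_url, indexer, action=None):
    """Build the REST URL for an indexer, optionally with /reset, /run, or /status."""
    path = f"{base_url}/indexers/{indexer}"
    if action:
        path = f"{path}/{action}"
    return f"{path}?api-version={API_VERSION}"

base = "https://my-service.search.windows.net"
reset_url = indexer_url(base, "sample-markdown-indexer", "reset")    # POST
run_url = indexer_url(base, "sample-markdown-indexer", "run")        # POST
status_url = indexer_url(base, "sample-markdown-indexer", "status")  # GET
```

Each request would carry the `api-key` header, as in the tutorial's HTTP samples.
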
## articles/search/search-query-access-control-rbac-enforcement.md{#item-d24df7}

<details>
<summary>Diff</summary>
````diff
@@ -9,9 +9,9 @@ author: mattgotteiner
 ms.author: magottei 
 ---  
 
-# Query-Time ACL and RBAC enforcement in Azure AI Search  
+# Query-time ACL and RBAC enforcement in Azure AI Search
 
-Query-time access control ensures that users only retrieve search results they're authorized to access, based on their identity, group memberships, roles, or attributes. This functionality is essential for secure enterprise search and compliance-driven workflows. 
+Query-time access control ensures that users only retrieve search results they're authorized to access, based on their identity, group memberships, roles, or attributes. This functionality is essential for secure enterprise search and compliance-driven workflows.
 
 Authorized access depends on permission metadata that's ingested during indexing. For indexer data sources that have built-in access models, such as Azure Data Lake Storage (ADLS) Gen2 and SharePoint in Microsoft 365, an indexer can pull in the permission metadata for each document automatically. For other data sources, you must assemble the document payload yourself, and the payload must include both content and the associated permission metadata. You then use the [push APIs](search-index-access-control-lists-and-rbac-push-api.md) to load the index.
 

````
</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Title fix for query-time ACL and RBAC enforcement"
}
```

### Explanation

This change revises the documentation on query-time access control (ACL) and role-based access control (RBAC) enforcement in Azure AI Search. The main changes are:

1. **Title capitalization fix**:
   - The title "Query-Time ACL and RBAC enforcement in Azure AI Search" was changed to "Query-time ACL and RBAC enforcement in Azure AI Search", lowercasing "Time" so the hyphenated compound follows sentence-case style.
2. **Text cleanup**:
   - Trailing whitespace at the ends of sentences was removed, so the document reads more smoothly.
3. **Content consistency**:
   - The document continues to support a basic understanding of ACL and RBAC and why access control matters. In particular, the handling of permission metadata for data sources such as Azure Data Lake Storage and SharePoint remains clearly explained.

These edits tidy the document visually and keep its content consistent, making it a more approachable resource. Helping readers grasp the access control concepts more easily supports broader adoption of Azure AI Search.

## articles/search/search-query-sensitivity-labels.md

<details>
<summary>Diff</summary>
````diff
@@ -13,7 +13,7 @@ ms.author: gimondra
 
 [!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
 
-At query time, Azure AI Search enforces sensitivity label policies defined in [Microsoft Purview](/purview/create-sensitivity-labels). These policies include evaluation of [extract usage rights](/purview/rights-management-usage-rights) tied to each document. As a result, users can only retrieve documents they are allowed to view.
+At query time, Azure AI Search enforces sensitivity label policies defined in [Microsoft Purview](/purview/create-sensitivity-labels). These policies include evaluation of [READ usage rights](/purview/rights-management-usage-rights) tied to each document. As a result, users can only retrieve documents they are allowed to view.
 
 This capability extends [document-level access control](search-document-level-access-overview.md) to align with your organization's [information protection and compliance requirements](/purview/create-sensitivity-labels) managed in Microsoft Purview.
 
@@ -107,10 +107,10 @@ Content-Type: application/json
 
 ## Sensitivity label handling in Azure AI Search
 
-When Azure AI Search indexes document content with sensitivity labels from sources like SharePoint, Azure Blob, and others, it stores both the content and the label metadata. If the user has data extract access, the query returns the indexed content along with the GUID that identifies the sensitivity label applied to the document. This GUID uniquely identifies the label but doesn't include human-readable properties such as the label name or associated permissions.
+When Azure AI Search indexes document content with sensitivity labels from sources like SharePoint, Azure Blob, and others, it stores both the content and the label metadata. The search query returns indexed content along with the GUID that identifies the sensitivity label applied to the document, only if the user has data READ access for that document. This GUID uniquely identifies the label but doesn't include human-readable properties such as the label name or associated permissions. 
 
-The GUID alone is insufficient for user interface scenarios because sensitivity labels often carry other policy controls enforced by [Microsoft Purview Information Protection (MIP)](/purview/sensitivity-labels), such as: print permissions or screenshot and screen capture restrictions.
+Note that the GUID alone is insufficient for scenarios that include user interface because sensitivity labels often carry other policy controls enforced by [Microsoft Purview Information Protection](/purview/sensitivity-labels), such as: print permissions or screenshot and screen capture restrictions. Azure AI Search doesn't surface these capabilities.
 
-Azure AI Search doesn't surface these capabilities. To display label names or enforce UI-specific restrictions, your application must call the Microsoft Purview Information Protection endpoint to retrieve full label metadata and associated permissions.
+To display label names and/or enforce UI-specific restrictions, your application must call the Microsoft Purview Information Protection endpoint to retrieve full label metadata and associated permissions.
 
-You can use the GUID returned by Azure AI Search to resolve the label properties and call the [MIP SDK](/information-protection/develop/setup-configure-mip) to fetch the label name, description, and policy settings. This [end-to-end demo sample](https://aka.ms/Ignite25/aisearch-purview-sensitivity-labels-repo) includes code that shows how to call the endpoint from a user interface. It also demonstrates how to extract the label name and expose it as part of the citations used in your RAG applications or agents.
+You can use the GUID returned by Azure AI Search to resolve the label properties and call the [Purview Labels APIs](/graph/api/sensitivitylabel-get) to fetch the label name, description, and policy settings. This [end-to-end demo sample](https://aka.ms/Ignite25/aisearch-purview-sensitivity-labels-repo) includes code that shows how to call the endpoint from a user interface. It also demonstrates how to extract the label name and expose it as part of the citations used in your RAG applications or agents.
````
</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Corrections to sensitivity label content"
}
```

### Explanation

This change revises the documentation on sensitivity label handling in Azure AI Search. The main changes are:

1. **Clarified usage rights**:
   - "extract usage rights" was changed to "READ usage rights", conveying more accurately which permission is evaluated.
2. **Conditional wording**:
   - The description of query results was revised to make clear that the label GUID is returned only when the user has READ access to the document, underscoring the security and access-control implications.
3. **Consistency**:
   - The explanation of why the GUID alone is insufficient for user-interface scenarios was revised to state the relevant conditions more clearly.
4. **Explicit API guidance**:
   - The text now names the specific API (the Purview Labels APIs) to call for full label metadata and associated permissions, so developers can retrieve the information they need efficiently.

These edits make the sensitivity label guidance clearer and more accurate, helping users and developers understand and apply the feature correctly in Azure AI Search.

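Since the updated article points applications at the Purview Labels APIs for human-readable label properties, a sketch of the client-side join may help. Everything here is hypothetical: the field name `sensitivityLabelId`, the GUID, and the cached metadata are illustrative, and the cache would be populated by your own calls to the Purview Labels APIs.

```python
# `results` mimics search hits that carry a sensitivity-label GUID;
# `label_cache` stands in for metadata an app fetched from the Purview
# Labels APIs. All names and values here are illustrative placeholders.
label_cache = {
    "2dd1fe94-0f6c-4d5e-9a3b-7c1e2f3a4b5c": {"name": "Confidential"},
}

def annotate_citations(results, cache):
    """Attach a display name to each hit, falling back to the bare GUID."""
    annotated = []
    for hit in results:
        guid = hit.get("sensitivityLabelId")
        label = cache.get(guid, {}).get("name") if guid else None
        annotated.append({**hit, "labelName": label or guid or "unlabeled"})
    return annotated

hits = [
    {"title": "Q3 report", "sensitivityLabelId": "2dd1fe94-0f6c-4d5e-9a3b-7c1e2f3a4b5c"},
    {"title": "Public FAQ"},
]
out = annotate_citations(hits, label_cache)
```

The fallback keeps citations usable even when a GUID hasn't been resolved yet, which matches the article's point that Azure AI Search itself returns only the GUID.
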
## articles/search/search-security-rbac.md

<details>
<summary>Diff</summary>
````diff
@@ -5,7 +5,7 @@ description: Use Azure role-based access control for granular permissions on ser
 author: HeidiSteen
 ms.author: heidist
 manager: nitinme
-ms.date: 03/31/2025
+ms.date: 11/21/2025
 ms.service: azure-ai-search
 ms.update-cycle: 180-days
 ms.topic: how-to
@@ -26,7 +26,7 @@ In Azure AI Search, you can assign Azure roles for:
 + [Read-only access for queries](#assign-roles-for-read-only-queries)
 + [Scoped access to a single index](#grant-access-to-a-single-index)
 
-Per-user access over search results (sometimes referred to as *row-level security* or *document-level security*) isn't supported through role assignments. As a workaround, [create security filters](search-security-trimming-for-azure-search.md) that trim results by user identity, removing documents for which the requester shouldn't have access. See this [Enterprise chat sample using RAG](/azure/developer/python/get-started-app-chat-template) for a demonstration.
+Per-user access over search results (sometimes referred to as *row-level security* or *document-level access*) is supported through permission inheritance for Azure Data Lake Storage (ADLS) Gen2 and Azure blob indexes and through security filters for all other platforms (see [Document-level access control](search-document-level-access-overview.md)).
 
 Role assignments are cumulative and pervasive across all tools and client libraries. You can assign roles using any of the [supported approaches](/azure/role-based-access-control/role-assignments-steps) described in Azure role-based access control documentation.
 
@@ -110,8 +110,6 @@ Combine these roles to get sufficient permissions for your use case.
 
 Owners and Contributors grant the same permissions, except that only Owners can assign roles.
 
-<!-- Owners and Contributors can create, read, update, and delete objects in the Azure portal *if API keys are enabled*. the Azure portal uses keys on internal calls to data plane APIs. In you subsequently configure Azure AI Search to use "roles only", then Owner and Contributor won't be able to manage objects in the Azure portal using just those role assignments. The solution is to assign more roles, such as Search Index Data Reader, Search Index Data Contributor, and Search Service Contributor. -->
-
 ## Assign roles
 
 In this section, assign roles for:
@@ -120,28 +118,6 @@ In this section, assign roles for:
 + Development or write-access to a search service
 + Read-only access for queries
 
-<!-- + [Service administration](#assign-roles-for-service-administration)
-
-    | Role | ID|
-    | --- | --- |
-    |`Owner`|8e3af657-a8ff-443c-a75c-2fe8c4bcb635|
-    |`Contributor`|b24988ac-6180-42a0-ab88-20f7382dd24c|
-    |`Reader`|acdd72a7-3385-48ef-bd42-f606fba81ae7|
-
-+ [Development or write-access to a search service](#assign-roles-for-development)
-
-    | Task | Role | ID|
-    | --- | --- | --- |
-    | CRUD operations | `Search Service Contributor`|7ca78c08-252a-4471-8644-bb5ff32d4ba0|
-    | Load documents, run indexing jobs | `Search Index Data Contributor`|8ebe5a00-799e-43f5-93ac-243d3dce84a7|
-    | Query an index | `Search Index Data Reader`|1407120a-92aa-4202-b7e9-c0e197c71c8f|
-
-+ [Read-only access for queries](#assign-roles-for-read-only-queries)
-
-    | Role | ID|
-    | --- | --- |
-    | `Search Index Data Reader` [with PowerShell](search-security-rbac.md?tabs=roles-portal-admin%2Croles-portal%2Croles-portal-query%2Ctest-portal%2Ccustom-role-portal#grant-access-to-a-single-index)|1407120a-92aa-4202-b7e9-c0e197c71c8f| -->
-
 ### Assign roles for service administration
 
 As a service administrator, you can create and configure a search service, and perform all control plane operations described in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. If you're an Owner or Contributor, you can also perform most data plane [Search REST API](/rest/api/searchservice/) tasks in the Azure portal.
@@ -297,7 +273,7 @@ Use a client to test role assignments. Remember that roles are cumulative and in
 
 ### [**REST API**](#tab/test-rest)
 
-This approach assumes Visual Studio Code with a REST client extension.
+This approach assumes Visual Studio Code with a [REST client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
 
 1. Open a command shell for Azure CLI and sign in to your Azure subscription.
 
@@ -406,7 +382,7 @@ For more information on how to acquire a token for a specific environment, see [
 
 ## Test as current user
 
-If you're already a Contributor or Owner of your search service, you can present a bearer token for your user identity for authentication to Azure AI Search. 
+If you're already a Contributor or Owner of your search service, you can present a bearer token for your user identity for authentication to Azure AI Search.
 
 1. Get a bearer token for the current user using the Azure CLI:
 
@@ -450,7 +426,7 @@ If you're already a Contributor or Owner of your search service, you can present
 
 In some scenarios, you might want to limit an application's access to a single resource, such as an index.
 
-the Azure portal doesn't currently support role assignments at this level of granularity, but it can be done with [PowerShell](/azure/role-based-access-control/role-assignments-powershell) or the [Azure CLI](/azure/role-based-access-control/role-assignments-cli).
+The Azure portal doesn't currently support role assignments at this level of granularity, but it can be done with [PowerShell](/azure/role-based-access-control/role-assignments-powershell) or the [Azure CLI](/azure/role-based-access-control/role-assignments-cli).
 
 In PowerShell, use [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment.
````
</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Updates to the RBAC documentation"
}
```

### Explanation

This change revises the documentation on role-based access control (RBAC) in Azure AI Search. The main changes are:

1. **Date update**:
   - The document's last-updated date changed from "03/31/2025" to "11/21/2025", indicating that the information is current.
2. **Wording fixes and clarification**:
   - The description of per-user access was revised, with "row-level security" or "document-level security" replaced by the term "document-level access", emphasizing that this capability is now supported more broadly.
3. **Revised security filter description**:
   - The text now states specifically that per-user access is supported through permission inheritance for Azure Data Lake Storage (ADLS) Gen2 and Azure Blob indexes, and through security filters on all other platforms.
4. **Removal of stale comments**:
   - Old commented-out material (notably the detailed role tables alongside the Owner and Contributor permissions) was removed, leaving a cleaner document.
5. **Consistency**:
   - Information on assigning roles with the Azure CLI and PowerShell is presented consistently, so readers can find what they need.

These edits make the RBAC material clearer and more user friendly, better supporting readers who need to assign the right permissions.

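The explanation mentions index-scoped role assignments via PowerShell or the Azure CLI. As a hedged sketch, the resource scope for a single index can be composed like this; all identifiers are placeholders, and you should verify the scope format against the Azure RBAC documentation before use:

```python
# Hedged sketch: building the resource scope used with New-AzRoleAssignment
# or `az role assignment create` to grant access to a single index.
# Every identifier below is an illustrative placeholder.
def index_scope(subscription, resource_group, service, index):
    """Compose an ARM scope string for one index on a search service."""
    return ("/subscriptions/{s}/resourceGroups/{g}/providers/"
            "Microsoft.Search/searchServices/{svc}/indexes/{idx}").format(
                s=subscription, g=resource_group, svc=service, idx=index)

scope = index_scope("00000000-0000-0000-0000-000000000000",
                    "my-rg", "my-search", "hotels-sample-index")
```

A role such as `Search Index Data Reader` assigned at this scope would apply only to that index, which is the scenario the article's PowerShell/CLI guidance covers.
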
## articles/search/search-semi-structured-data.md

<details>
<summary>Diff</summary>
````diff
@@ -108,9 +108,11 @@ Here's the first nested JSON in the file. The remainder of the file includes 1,5
 
 For this tutorial, connections to Azure AI Search require an endpoint and an API key. You can get these values from the Azure portal. For alternative connection methods, see [Managed identities](search-how-to-managed-identities.md).
 
-1. Sign in to the [Azure portal](https://portal.azure.com), navigate to the search service **Overview** page, and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
 
-1. Under **Settings** > **Keys**, copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
+1. From the left pane, select **Overview** and copy the endpoint. It should be in this format: `https://my-service.search.windows.net`
+
+1. From the left pane, select **Settings** > **Keys** and copy an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
    :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the URL and API keys in the Azure portal.":::
````
</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Updated steps for semi-structured data"
}
```

### Explanation

This change revises the tutorial on working with semi-structured data in Azure AI Search. The main points are:

1. **Clearer connection details**:
   - The information about the endpoint and API key needed to connect to Azure AI Search is stated more clearly.
2. **Updated steps**:
   - The Azure portal procedure was restructured; the first step is simpler, and the items users select at each step are easier to follow intuitively.
3. **More detail**:
   - The admin key guidance now explains why two interchangeable keys are provided (business continuity) and how to use them.
4. **Visual support**:
   - The screenshot showing the endpoint and API keys in the Azure portal is retained, so readers can verify the required values visually.

These edits make the document more user friendly and the guidance for using Azure AI Search clearer, reducing confusion and improving efficiency.

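To make the endpoint-and-key steps concrete, here's a small sketch of how the two values copied from the portal are typically used in a REST request; the endpoint, key, and API version below are placeholders:

```python
# Sketch of the connection values the tutorial has you copy from the
# portal: the endpoint forms the request URL and the admin key goes into
# an `api-key` header. All values shown are placeholders.
endpoint = "https://my-service.search.windows.net"
admin_key = "<your-admin-key>"

headers = {
    "Content-Type": "application/json",
    "api-key": admin_key,  # either of the two interchangeable admin keys
}
url = f"{endpoint}/indexes?api-version=2025-09-01"
```

An HTTP client (such as the REST Client extension or `requests`) would send these headers with each create/update/delete call.
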
## articles/search/semantic-how-to-configure.md

<details>
<summary>Diff</summary>
````diff
@@ -10,7 +10,7 @@ ms.update-cycle: 180-days
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 04/04/2025
+ms.date: 11/21/2025
 ---
 
 # Configure semantic ranker and return captions in search results
@@ -43,7 +43,9 @@ You can specify a semantic configuration on new or existing indexes, using any o
 
 ## Add a semantic configuration
 
-A *semantic configuration* is a section in your index that establishes field inputs for semantic ranking. You can add or update a semantic configuration at any time, no rebuild necessary. If you create multiple configurations, you can specify a default. At query time, specify a semantic configuration on a [query request](semantic-how-to-query-request.md), or leave it blank to use the default.
+Some workloads create a semantic configuration automatically. If you're using [agentic retrieval](agentic-retrieval-overview.md) and a [knowledge source that indexes content](agentic-knowledge-source-overview.md#supported-knowledge-sources) on Azure AI Search, your generated index already has a semantic configuration that works for your content.
+
+For other workloads, you can set up a semantic configuration yourself. A *semantic configuration* is a section in your index that establishes the field inputs used for semantic ranking. You can add or update a semantic configuration at any time, no rebuild necessary. If you create multiple configurations, you can specify a default. At query time, specify a semantic configuration on a [query request](semantic-how-to-query-request.md), or leave it blank to use the default.
 
 You can create up to 100 semantic configurations in a single index.
 
@@ -165,12 +167,12 @@ SearchIndex searchIndex = new(indexName)
 
 [!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
 
-Starting in [2025-03-01-preview REST APIs](/rest/api/searchservice/operation-groups?view=rest-searchservice-2025-03-01-preview&preserve-view=true) and in Azure SDKs that provide the property, you can optionally configure an index to use prerelease semantic ranking models if one is deployed in your region. There's no mechanism for knowing if a prerelease is available, or if it was used on specific query. For this reason, we recommend that you use this property in test environments, and only if you're interested in trying out the very latest semantic ranking models.
+Using [previewREST APIs](/rest/api/searchservice/operation-groups?view=rest-searchservice-2025-11-01-preview&preserve-view=true) and preview Azure SDKs that provide the property, you can optionally configure an index to use prerelease semantic ranking models if one is deployed in your region. There's no mechanism for knowing if a prerelease is available, or if it was used on specific query. For this reason, we recommend that you use this property in test environments, and only if you're interested in trying out the very latest semantic ranking models.
 
 The configuration property is `"flightingOptIn": true`, and it's set in the semantic configuration section of an index. The property is null or false by default. You can set it true on a create or update request at any time, and it affects semantic queries moving forward, assuming the query stipulates a semantic configuration that includes the property.
 
 ```rest
-PUT https://myservice.search.windows.net/indexes('hotels')?allowIndexDowntime=False&api-version=2025-03-01-preview
+PUT https://myservice.search.windows.net/indexes('hotels')?allowIndexDowntime=False&api-version=2025-11-01-preview
 
 {
   "name": "hotels",
````
</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Updated documentation for semantic ranker configuration"
}
```

### Explanation

This change revises the documentation on configuring semantic ranking in Azure AI Search. The main changes are:

1. **Date update**:
   - The document's last-updated date changed from "04/04/2025" to "11/21/2025", showing that the information is current.
2. **Automatically generated semantic configurations**:
   - New text explains that for workloads using agentic retrieval, the index generated on Azure AI Search already includes a semantic configuration. This clarifies that such users may not need to create a configuration themselves.
3. **Reorganized setup steps**:
   - The description of setting up a semantic configuration was reorganized, emphasizing that other workloads can configure it manually. The configuration is positioned as an index section that defines the field inputs used for semantic ranking.
4. **Preview API version update**:
   - The REST API version changed from "2025-03-01-preview" to "2025-11-01-preview", and the caution about prerelease semantic ranking models is retained.

These edits keep the semantic ranking material current and make the configuration steps easier to understand, helping users get the most out of the service.

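For readers setting up a semantic configuration manually, a sketch of the index section involved may help. This is a hedged example: the configuration name and the field names (`HotelName`, `Description`, `Tags`) are illustrative, not taken from the article.

```python
# Hedged sketch of the "semantic" section a manual configuration adds to
# an index definition. Names are illustrative; adapt them to your schema.
semantic_settings = {
    "semantic": {
        "defaultConfiguration": "my-semantic-config",
        "configurations": [
            {
                "name": "my-semantic-config",
                "prioritizedFields": {
                    "titleField": {"fieldName": "HotelName"},
                    "prioritizedContentFields": [{"fieldName": "Description"}],
                    "prioritizedKeywordsFields": [{"fieldName": "Tags"}],
                },
            }
        ],
    }
}
```

Because the section can be added or updated at any time without a rebuild, it would simply be merged into a create-or-update index request.
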
## articles/search/semantic-how-to-query-rewrite.md

<details>
<summary>Diff</summary>
````diff
@@ -11,7 +11,7 @@ ms.custom:
   - ignite-2024
   - references_regions
 ms.topic: how-to
-ms.date: 11/05/2025
+ms.date: 11/21/2025
 ---
 
 # Rewrite queries with semantic ranker in Azure AI Search (Preview)
@@ -37,21 +37,21 @@ Query rewriting is an optional feature. Without query rewriting, the search serv
 
 - [Azure AI Search](search-create-service-portal.md) in any [region that provides query rewrite](search-region-support.md), with [semantic ranker enabled](semantic-how-to-enable-disable.md).
 
-- An existing search index with a [semantic configuration](semantic-how-to-configure.md) and rich text content. The examples in this guide use the [hotels-sample-index](search-get-started-portal.md) sample data to demonstrate query rewriting. You can use your own data and index to test query rewriting.
+- An existing search index with a [semantic configuration](semantic-how-to-configure.md) and rich text content. The examples in this guide use the [hotels-sample-index](search-get-started-portal.md) sample data to demonstrate query rewriting.
 
-- To follow the instructions in this article, you need a web client that supports REST API requests. The examples in this guide were tested with [Visual Studio Code](https://code.visualstudio.com/download) and the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) extension. 
+- To follow the instructions in this article, you need a web client that supports REST API requests. The examples in this article were tested with [Visual Studio Code](https://code.visualstudio.com/download) and the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) extension. 
 
 > [!TIP]
 > Content that includes explanations or definitions work best for semantic ranking. 
 
 ## Make a search request with query rewrites
 
-In this REST API example, use [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-03-01-preview&branch=searchindex202503&preserve-view=true) to formulate the request.
+In this REST API example, use [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-11-01-preview&preserve-view=true) to formulate the request.
 
 1. Paste the following request into a web client as a template. 
 
     ```http
-    POST https://[search-service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2025-03-01-preview
+    POST https://[search-service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2025-11-01-preview
     {
         "search": "newer hotel near the water with a great restaurant",
         "semanticConfiguration":"en-semantic-config",
@@ -202,7 +202,7 @@ Here's an example of a query that includes a vector query with query rewrites. M
 - The "text" value is the same as the "search" value. These values must be identical for query rewriting to work.
 
 ```http
-POST https://[search-service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2025-03-01-preview
+POST https://[search-service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2025-11-01-preview
 {
     "search": "newer hotel near the water with a great restaurant",
     "vectorQueries": [
````
</details>

### Summary

```json
{
    "modification_type": "minor update",
    "modification_title": "Updated documentation for the query rewrite feature"
}
```

### Explanation

This change revises the documentation on the query rewrite feature in Azure AI Search. The main changes are:

1. **Date update**:
   - The document's last-updated date changed from "11/05/2025" to "11/21/2025", showing that the information is current.
2. **Tidier content**:
   - Several sentences were revised for structure and readability, and the description of the rewrite feature is clearer.
3. **API version update**:
   - The REST API version changed from "2025-03-01-preview" to "2025-11-01-preview", with guidance on issuing requests against the latest API.
4. **Clearer instructions**:
   - The query rewrite steps were reorganized and the sample REST API requests updated, so the request format for the new API version is easy to follow.

These edits bring the query rewrite material up to date and make the steps explicit, helping users apply the service more effectively.

articles/search/semantic-search-overview.md

Diff
@@ -10,7 +10,7 @@ ms.update-cycle: 180-days
 ms.custom:
   - ignite-2023
 ms.topic: concept-article
-ms.date: 11/06/2025
+ms.date: 11/19/2025
 ---
 
 # Semantic ranking in Azure AI Search
@@ -150,9 +150,9 @@ Charges for semantic ranker are levied when query requests include `queryType=se
 
 1. [Check regional availability](search-region-support.md).
 
-1. [Sign in to Azure portal](https://portal.azure.com) to verify your search service is Basic or higher.
+1. [Sign in to Azure portal](https://portal.azure.com).
 
-1. [Configure semantic ranker for the search service, choosing a pricing plan](semantic-how-to-enable-disable.md).
+1. [Configure semantic ranker for the search service, choosing a pricing plan](semantic-how-to-enable-disable.md). The free plan is the default.
 
 1. [Configure semantic ranker in a search index](semantic-how-to-configure.md).
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Revisions to the semantic ranking overview"
}

Explanation

This change revises the semantic ranking overview page for Azure AI Search. The main updates are:

  1. Date update:
    • The last-updated date changed from 11/06/2025 to 11/19/2025, so the page reflects its latest review.
  2. Simplified steps:
    • The Azure portal sign-in step no longer tells users to verify that the search service is Basic or higher, making the procedure simpler.
  3. Pricing plan note:
    • The configuration step now states that the free plan is the default, surfacing important pricing information and helping users make a deliberate plan choice.

These revisions make the semantic ranking page more compact and easier to follow, with the goal of helping users take better advantage of Azure AI Search.
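For context on when those charges apply: a request is billed for semantic ranker only when it opts in with `queryType=semantic`. A minimal sketch of such a query (index and configuration names are illustrative assumptions):

```http
POST https://[search-service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2025-09-01
Content-Type: application/json
api-key: [query-key]

{
  "search": "quiet hotel with a great restaurant",
  "queryType": "semantic",
  "semanticConfiguration": "en-semantic-config",
  "answers": "extractive|count-3",
  "captions": "extractive|highlight-true"
}
```

Omitting `queryType=semantic` runs the same query with the default ranking and incurs no semantic ranker charge.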

articles/search/tutorial-adls-gen2-indexer-acls.md

Diff
@@ -187,7 +187,7 @@ After indexer creation and immediate run, the file content along with permission
 
 Now that documents are loaded, you can issue queries against them by using [Documents - Search Post (REST)](/rest/api/searchservice/documents/search-post).
 
-The URI is extended to include a query input, which is specified by using the `/docs/search` operator. The query token is passed in the request header. For more information, see [Query-Time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md).
+The URI is extended to include a query input, which is specified by using the `/docs/search` operator. The query token is passed in the request header. For more information, see [Query-time ACL and RBAC enforcement](search-query-access-control-rbac-enforcement.md).
 
 ```http
 POST  {{endpoint}}/indexes/stateparks/docs/search?api-version=2025-11-01-preview

Summary

{
    "modification_type": "minor update",
    "modification_title": "Wording fix in the ACL and RBAC description"
}

Explanation

This change is a small fix to the tutorial covering ACLs (access control lists) and RBAC (role-based access control) for the ADLS Gen2 indexer. The main updates are:

  1. Wording fix:
    • The link text "Query-Time ACL and RBAC enforcement" was changed to "Query-time ACL and RBAC enforcement", lowercasing "time" for consistent sentence-style capitalization.
  2. Content unchanged:
    • The sentence itself is otherwise the same, so the flow and meaning are preserved, and users can still follow how ACLs and RBAC are enforced at query time.

This is a minor update aimed at consistency across related documents; users still get the same information about request syntax and access control.
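The tutorial's key mechanic is that the caller's query token travels in a request header so the service can trim results to documents the user is permitted to see. A sketch of such a request, assuming the `x-ms-query-source-authorization` header described in the query-time enforcement article (the index name and query are illustrative):

```http
POST {{endpoint}}/indexes/stateparks/docs/search?api-version=2025-11-01-preview
Content-Type: application/json
Authorization: Bearer {{search-token}}
x-ms-query-source-authorization: {{user-token}}

{
  "search": "waterfall hikes",
  "select": "metadata_storage_name"
}
```

Two users issuing the identical query can receive different result sets, because the service evaluates the indexed ACLs against the identity carried in the query token.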

articles/search/tutorial-create-custom-analyzer.md

Diff
@@ -9,16 +9,16 @@ ms.update-cycle: 180-days
 ms.custom:
   - ignite-2023
 ms.topic: tutorial
-ms.date: 03/28/2025
+ms.date: 11/21/2025
 ---
 
 # Tutorial: Create a custom analyzer for phone numbers
 
 In search solutions, strings that have complex patterns or special characters can be challenging to work with because the [default analyzer](search-analyzers.md) strips out or misinterprets meaningful parts of a pattern. This results in a poor search experience where users can't find the information they expect. Phone numbers are a classic example of strings that are difficult to analyze. They come in various formats and include special characters that the default analyzer ignores.
 
-With phone numbers as its subject, this tutorial shows you how to solve patterned data problems using a [custom analyzer](index-add-custom-analyzers.md). This approach can be used as is for phone numbers or adapted for fields with the same characteristics (patterned with special characters), such as URLs, emails, postal codes, and dates.
+With phone numbers as its subject, this tutorial uses the [Search Service REST APIs](/rest/api/searchservice/) to solve patterned data problems using a [custom analyzer](index-add-custom-analyzers.md). This approach can be used as is for phone numbers or adapted for fields with the same characteristics (patterned with special characters), such as URLs, emails, postal codes, and dates.
 
-In this tutorial, you use a REST client and the [Azure AI Search REST APIs](/rest/api/searchservice/) to:
+In this tutorial, you:
 
 > [!div class="checklist"]
 > + Understand the problem
@@ -30,9 +30,9 @@ In this tutorial, you use a REST client and the [Azure AI Search REST APIs](/res
 
 + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn).
 
-+ [Azure AI Search](search-what-is-azure-search.md). [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. For this tutorial, you can use a free service.
++ An [Azure AI Search service](search-create-service-portal.md).
 
-+ [Visual Studio Code](https://code.visualstudio.com/download) with a [REST client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
++ [Visual Studio Code](https://code.visualstudio.com/download) with the [REST Client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
 
 ### Download files
 
@@ -42,13 +42,13 @@ Source code for this tutorial is in the [custom-analyzer.rest](https://github.co
 
 The REST calls in this tutorial require a search service endpoint and an admin API key. You can get these values from the Azure portal.
 
-1. Sign in to the [Azure portal](https://portal.azure.com), go to the **Overview** page, and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
 
-1. Under **Settings** > **Keys**, copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
+1. From the left pane, select **Overview** and copy the endpoint. It should be in this format: `https://my-service.search.windows.net`
 
-   :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the URL and API keys in the Azure portal.":::
+1. From the left pane, select **Settings** > **Keys** and copy an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
-A valid API key establishes trust, on a per-request basis, between the application sending the request and the search service handling it.
+   :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the URL and API keys in the Azure portal.":::
 
 ## Create an initial index
 
@@ -63,101 +63,107 @@ A valid API key establishes trust, on a per-request basis, between the applicati
 
 1. Save the file with a `.rest` file extension.
 
-1. Paste the following example to create a small index called `phone-numbers-index` with two fields: `id` and `phone_number`. You haven't defined an analyzer yet, so the `standard.lucene` analyzer is used by default.
+1. Paste the following example to create a small index called `phone-numbers-index` with two fields: `id` and `phone_number`.
 
     ```http
     ### Create a new index
     POST {{baseUrl}}/indexes?api-version=2025-09-01  HTTP/1.1
-      Content-Type: application/json
-      api-key: {{apiKey}}
+    Content-Type: application/json
+    api-key: {{apiKey}}
 
-      {
-        "name": "phone-numbers-index",  
-        "fields": [
-          {
-            "name": "id",
-            "type": "Edm.String",
-            "key": true,
-            "searchable": true,
-            "filterable": false,
-            "facetable": false,
-            "sortable": true
-          },
-          {
-            "name": "phone_number",
-            "type": "Edm.String",
-            "sortable": false,
-            "searchable": true,
-            "filterable": false,
-            "facetable": false
-          }
-        ]
-      }
+    {
+      "name": "phone-numbers-index",  
+      "fields": [
+        {
+          "name": "id",
+          "type": "Edm.String",
+          "key": true,
+          "searchable": true,
+          "filterable": false,
+          "facetable": false,
+          "sortable": true
+        },
+        {
+          "name": "phone_number",
+          "type": "Edm.String",
+          "sortable": false,
+          "searchable": true,
+          "filterable": false,
+          "facetable": false
+        }
+      ]
+    }
     ```
 
+    You haven't defined an analyzer yet, so the `standard.lucene` analyzer is used by default.
+
 1. Select **Send request**. You should have an `HTTP/1.1 201 Created` response, and the response body should include the JSON representation of the index schema.
 
-1. Load data into the index, using documents that contain various phone number formats. This is your test data.
+1. Load data into the index using documents that contain various phone number formats. This is your test data.
 
     ```http
     ### Load documents
     POST {{baseUrl}}/indexes/phone-numbers-index/docs/index?api-version=2025-09-01  HTTP/1.1
-      Content-Type: application/json
-      api-key: {{apiKey}}
-    
-      {
-        "value": [
-          {
-            "@search.action": "upload",  
-            "id": "1",
-            "phone_number": "425-555-0100"
-          },
-          {
-            "@search.action": "upload",  
-            "id": "2",
-            "phone_number": "(321) 555-0199"
-          },
-          {  
-            "@search.action": "upload",  
-            "id": "3",
-            "phone_number": "+1 425-555-0100"
-          },
-          {  
-            "@search.action": "upload",  
-            "id": "4",  
-            "phone_number": "+1 (321) 555-0199"
-          },
-          {
-            "@search.action": "upload",  
-            "id": "5",
-            "phone_number": "4255550100"
-          },
-          {
-            "@search.action": "upload",  
-            "id": "6",
-            "phone_number": "13215550199"
-          },
-          {
-            "@search.action": "upload",  
-            "id": "7",
-            "phone_number": "425 555 0100"
-          },
-          {
-            "@search.action": "upload",  
-            "id": "8",
-            "phone_number": "321.555.0199"
-          }
-        ]  
-      }
+    Content-Type: application/json
+    api-key: {{apiKey}}
+
+    {
+      "value": [
+        {
+          "@search.action": "upload",
+          "id": "1",
+          "phone_number": "425-555-0100"
+        },
+        {
+          "@search.action": "upload",
+          "id": "2",
+          "phone_number": "(321) 555-0199"
+        },
+        {
+          "@search.action": "upload",
+          "id": "3",
+          "phone_number": "+1 425-555-0100"
+        },
+        {
+          "@search.action": "upload",
+          "id": "4",
+          "phone_number": "+1 (321) 555-0199"
+        },
+        {
+          "@search.action": "upload",
+          "id": "5",
+          "phone_number": "4255550100"
+        },
+        {
+          "@search.action": "upload",
+          "id": "6",
+          "phone_number": "13215550199"
+        },
+        {
+          "@search.action": "upload",
+          "id": "7",
+          "phone_number": "425 555 0100"
+        },
+        {
+          "@search.action": "upload",
+          "id": "8",
+          "phone_number": "321.555.0199"
+        }
+      ]
+    }
     ```
 
 1. Try queries similar to what a user might type. For example, a user might search for `(425) 555-0100` in any number of formats and still expect results to be returned. Start by searching `(425) 555-0100`.
 
     ```http  
     ### Search for a phone number
-    GET {{baseUrl}}/indexes/phone-numbers-index/docs/search?api-version=2025-09-01&search=(425) 555-0100  HTTP/1.1
-      Content-Type: application/json
-      api-key: {{apiKey}}
+    POST {{baseUrl}}/indexes/phone-numbers-index/docs/search?api-version=2025-09-01  HTTP/1.1
+    Content-Type: application/json
+    api-key: {{apiKey}}
+
+    {
+      "search": "(425) 555-0100"
+    }
     ```
 
     The query returns three out of four expected results but also returns two unexpected results.
@@ -192,11 +198,15 @@ A valid API key establishes trust, on a per-request basis, between the applicati
 1. Try again without any formatting: `4255550100`.
 
    ```http  
-    ### Search for a phone number
-    GET {{baseUrl}}/indexes/phone-numbers-index/docs/search?api-version=2025-09-01&search=4255550100  HTTP/1.1
-      Content-Type: application/json
-      api-key: {{apiKey}}
-    ```
+   ### Search for a phone number
+   POST {{baseUrl}}/indexes/phone-numbers-index/docs/search?api-version=2025-09-01  HTTP/1.1
+   Content-Type: application/json
+   api-key: {{apiKey}}
+
+   {
+     "search": "4255550100"
+   }
+   ```
 
    This query does even worse, returning only one of four correct matches.
 
@@ -255,14 +265,15 @@ Azure AI Search provides an [Analyze API](/rest/api/searchservice/indexes/analyz
 Call the Analyze API using the following request:
 
 ```http
+### Test analyzer
 POST {{baseUrl}}/indexes/phone-numbers-index/analyze?api-version=2025-09-01  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}}
+Content-Type: application/json
+api-key: {{apiKey}}
 
-  {
-    "text": "(425) 555-0100",
-    "analyzer": "standard.lucene"
-  }
+{
+  "text": "(425) 555-0100",
+  "analyzer": "standard.lucene"
+}
 ```
 
 The API returns the tokens extracted from the text, using the analyzer you specified. The standard Lucene analyzer splits the phone number into three separate tokens.
@@ -439,18 +450,18 @@ All of the tokens in the output column exist in the index. If your query include
 1. Delete the current index.
 
    ```http
-    ### Delete the index
-    DELETE {{baseUrl}}/indexes/phone-numbers-index?api-version=2025-09-01 HTTP/1.1
-        api-key: {{apiKey}}
+   ### Delete the index
+   DELETE {{baseUrl}}/indexes/phone-numbers-index?api-version=2025-09-01 HTTP/1.1
+   api-key: {{apiKey}}
     ```
 
 1. Recreate the index using the new analyzer. This index schema adds a custom analyzer definition and a custom analyzer assignment on the phone number field.
 
     ```http
     ### Create a new index
     POST {{baseUrl}}/indexes?api-version=2025-09-01  HTTP/1.1
-      Content-Type: application/json
-      api-key: {{apiKey}}
+    Content-Type: application/json
+    api-key: {{apiKey}}
     
     {
         "name": "phone-numbers-index-2",  
@@ -486,8 +497,8 @@ All of the tokens in the output column exist in the index. If your query include
               "phone_char_mapping"
               ]
             }
-          ],
-          "charFilters": [
+        ],
+        "charFilters": [
             {
               "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
               "name": "phone_char_mapping",
@@ -507,24 +518,25 @@ All of the tokens in the output column exist in the index. If your query include
               "name": "custom_ngram_filter",
               "minGram": 3,
               "maxGram": 20
-            }
-          ]
-        }
+          }
+        ]
+    }
     ```
 
 ### Test the custom analyzer
 
 After you recreate the index, test the analyzer using the following request:
 
 ```http
-POST {{baseUrl}}/indexes/tutorial-first-analyzer/analyze?api-version=2025-09-01  HTTP/1.1
-  Content-Type: application/json
-  api-key: {{apiKey}} 
+### Test custom analyzer
+POST {{baseUrl}}/indexes/phone-numbers-index-2/analyze?api-version=2025-09-01  HTTP/1.1
+Content-Type: application/json
+api-key: {{apiKey}} 
 
-  {
-    "text": "+1 (321) 555-0199",
-    "analyzer": "phone_analyzer"
-  }
+{
+  "text": "+1 (321) 555-0199",
+  "analyzer": "phone_analyzer"
+}
 ```
 
 You should now see the collection of tokens resulting from the phone number.

Summary

{
    "modification_type": "minor update",
    "modification_title": "Revision of the custom analyzer for phone numbers tutorial"
}

Explanation

This change is a substantial revision of the tutorial on creating a custom analyzer for phone numbers. The main updates are:

  1. Date update:
    • The last-updated date changed from 03/28/2025 to 11/21/2025, so the document reflects its latest review.
  2. REST API usage:
    • The tutorial now states explicitly that it uses the Search Service REST APIs to solve patterned data problems.
  3. Simpler, clearer steps:
    • Many steps were streamlined, and the prerequisites (an Azure account, an Azure AI Search service, Visual Studio Code with the REST Client extension) are described more clearly.
  4. Improved query examples:
    • Search requests were changed from GET with query-string parameters to POST with a JSON request body, updating how queries are issued.
  5. Index creation details:
    • The JSON for creating the index was reformatted and consistently indented, making it easier to see how fields and analyzers are defined.

With this revision, users can more easily create and apply a custom analyzer for phone numbers and achieve a better search experience. Overall, the tutorial is more usable and easier to understand.
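Following the tutorial's own POST query style, once the custom analyzer's char filter strips formatting and the n-gram filter emits partial tokens, even a fragment of a phone number should match. A sketch of such a query against the recreated index (the search fragment is an illustrative assumption):

```http
### Search with a partial phone number
POST {{baseUrl}}/indexes/phone-numbers-index-2/docs/search?api-version=2025-09-01  HTTP/1.1
Content-Type: application/json
api-key: {{apiKey}}

{
  "search": "5550100"
}
```

Because `custom_ngram_filter` indexes substrings between `minGram` (3) and `maxGram` (20) characters, the fragment matches every document whose normalized number contains it, regardless of the original formatting.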

articles/search/tutorial-csharp-create-load-index.md

Diff
@@ -8,7 +8,7 @@ ms.author: heidist
 ms.service: azure-ai-search
 ms.update-cycle: 180-days
 ms.topic: tutorial
-ms.date: 01/17/2025
+ms.date: 11/21/2025
 ms.custom:
   - devx-track-csharp
   - devx-track-azurecli

Summary

{
    "modification_type": "minor update",
    "modification_title": "Date update for the C# create-and-load-index tutorial"
}

Explanation

This change updates the last-updated date of the tutorial on creating and loading an index with C#:

  1. Date change:
    • The date changed from 01/17/2025 to 11/21/2025, signaling to users that the document was recently reviewed and that they can learn from up-to-date material.

Although this is only a date update, it helps users avoid working from stale information. The tutorial content itself is otherwise unchanged, and the document's reliability is reinforced.

articles/search/tutorial-multiple-data-sources.md

Diff
@@ -124,7 +124,7 @@ To authenticate to your search service, you need the service URL and an access k
 
 1. From the left pane, select **Settings** > **Keys**.
 
-1. Make a note of an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+1. Make a note of an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests for adding, modifying, and deleting objects.
 
 ## Set up your environment
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Clarified the admin key description in the multiple data sources tutorial"
}

Explanation

This change slightly revises the admin key description in the tutorial on multiple data sources:

  1. Clearer admin key wording:
    • "You can use either the primary or secondary key" was shortened to "You can use either key", conveying the same intent (either of the two interchangeable keys works) more concisely.

The change improves precision and helps users understand how admin keys are used, without materially affecting the content. With the sentence made more concise and clear, users can follow the steps more smoothly.

articles/search/tutorial-optimize-indexing-push-api.md

Diff
@@ -1,23 +1,23 @@
 ---
-title: 'C# Tutorial: Optimize Indexing Using the Push API'
+title: 'C# Tutorial: Use the Push API to Optimize Indexing'
 titleSuffix: Azure AI Search
 description: Learn how to efficiently index data using Azure AI Search's push API. This tutorial and sample code are in C#.
 author: gmndrg
 ms.author: gimondra
 ms.service: azure-ai-search
 ms.update-cycle: 180-days
 ms.topic: tutorial
-ms.date: 03/28/2025
+ms.date: 11/21/2025
 ms.custom:
   - devx-track-csharp
   - ignite-2023
 ---
 
 # Tutorial: Optimize indexing using the push API
 
-Azure AI Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index: *pushing* your data into the index programmatically, or *pulling* in your data by pointing an [Azure AI Search indexer](search-indexer-overview.md) to a supported data source.
+Azure AI Search supports two basic methods for [importing data](search-what-is-data-import.md) into a search index: *pushing* your data into the index programmatically or *pulling* your data by pointing an [indexer](search-indexer-overview.md) to a supported data source.
 
-This tutorial explains how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing). This tutorial also explains the key aspects of the application and what factors to consider when indexing data.
+This tutorial explains how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can download and run the [sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing/v11). This tutorial also explains the key aspects of the application and what factors to consider when indexing data.
 
 In this tutorial, you use C# and the [Azure.Search.Documents library](/dotnet/api/overview/azure/search) from the Azure SDK for .NET to:
 
@@ -43,44 +43,46 @@ Source code for this tutorial is in the [optimize-data-indexing/v11](https://git
 
 The following factors affect indexing speeds. For more information, see [Index large data sets](search-howto-large-index.md).
 
-+ **Service tier and number of partitions/replicas**: Adding partitions or upgrading your tier increases indexing speeds.
++ **Pricing tier and number of partitions/replicas**: Adding partitions or upgrading your tier increases indexing speeds.
 + **Index schema complexity**: Adding fields and field properties lowers indexing speeds. Smaller indexes are faster to index.
 + **Batch size**: The optimal batch size varies based on your index schema and dataset.
 + **Number of threads/workers**: A single thread doesn't take full advantage of indexing speeds.
 + **Retry strategy**: An exponential backoff retry strategy is a best practice for optimum indexing.
 + **Network data transfer speeds**: Data transfer speeds can be a limiting factor. Index data from within your Azure environment to increase data transfer speeds.
 
-## Create an Azure AI Search service
+## Create a search service
 
-This tutorial requires an Azure AI Search service, which you can [create in the Azure portal](search-create-service-portal.md). You can also [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. To accurately test and optimize indexing speeds, we recommend using the same tier you plan to use in production.
+This tutorial requires an Azure AI Search service, which you can [create in the Azure portal](search-create-service-portal.md). You can also [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. To accurately test and optimize indexing speeds, we recommend using the same pricing tier you plan to use in production.
 
 ### Get an admin key and URL for Azure AI Search
 
-This tutorial uses key-based authentication. Copy an admin API key to paste into the *appsettings.json* file.
+This tutorial uses key-based authentication. Copy an admin API key to paste into the `appsettings.json` file.
 
-1. Sign in to the [Azure portal](https://portal.azure.com). On your service **Overview** page, copy the endpoint URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
 
-1. On **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+1. From the left pane, select **Overview** and copy the endpoint. It should be in this format: `https://my-service.search.windows.net`
+
+1. From the left pane, select **Settings** > **Keys** and copy an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either key on requests to add, modify, or delete objects.
 
     :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the HTTP endpoint and API key locations.":::
 
 ## Set up your environment
 
-1. Start Visual Studio and open *OptimizeDataIndexing.sln*.
+1. Open the `OptimizeDataIndexing.sln` file in Visual Studio.
 
-1. In Solution Explorer, open *appsettings.json* to provide your service's connection information.
+1. In Solution Explorer, edit the `appsettings.json` file with the connection information you collected in the previous step.
 
-```json
-{
-  "SearchServiceUri": "https://{service-name}.search.windows.net",
-  "SearchServiceAdminApiKey": "",
-  "SearchIndexName": "optimize-indexing"
-}
-```
+    ```json
+    {
+      "SearchServiceUri": "https://{service-name}.search.windows.net",
+      "SearchServiceAdminApiKey": "",
+      "SearchIndexName": "optimize-indexing"
+    }
+    ```
 
 ## Explore the code
 
-After you update *appsettings.json*, the sample program in *OptimizeDataIndexing.sln* should be ready to build and run.
+After you update `appsettings.json`, the sample program in `OptimizeDataIndexing.sln` should be ready to build and run.
 
 This code is derived from the C# section of [Quickstart: Full-text search](search-get-started-text.md), which provides detailed information about the basics of working with the .NET SDK.
 
@@ -92,20 +94,20 @@ This simple C#/.NET console app performs the following tasks:
     + Using multiple threads to increase indexing speeds
     + Using an exponential backoff retry strategy to retry failed items
 
- Before running the program, take a minute to study the code and the index definitions for this sample. The relevant code is in several files:
+Before you run the program, take a minute to study the code and the index definitions for this sample. The relevant code is in several files:
 
-  + *Hotel.cs* and *Address.cs* contain the schema that defines the index
-  + *DataGenerator.cs* contains a simple class to make it easy to create large amounts of hotel data
-  + *ExponentialBackoff.cs* contains code to optimize the indexing process as described in this article
-  + *Program.cs* contains functions that create and delete the Azure AI Search index, indexes batches of data, and tests different batch sizes
+  + `Hotel.cs` and `Address.cs` contain the schema that defines the index
+  + `DataGenerator.cs` contains a simple class to make it easy to create large amounts of hotel data
+  + `ExponentialBackoff.cs` contains code to optimize the indexing process as described in this article
+  + `Program.cs` contains functions that create and delete the Azure AI Search index, indexes batches of data, and tests different batch sizes
 
 ### Create the index
 
 This sample program uses the Azure SDK for .NET to define and create an Azure AI Search index. It takes advantage of the `FieldBuilder` class to generate an index structure from a C# data model class.
 
 The data model is defined by the `Hotel` class, which also contains references to the `Address` class. `FieldBuilder` drills down through multiple class definitions to generate a complex data structure for the index. Metadata tags are used to define the attributes of each field, such as whether it's searchable or sortable.
 
-The following snippets from the *Hotel.cs* file specify a single field and a reference to another data model class.
+The following snippets from the `Hotel.cs` file specify a single field and a reference to another data model class.
 
 ```csharp
 . . .
@@ -116,7 +118,7 @@ public Address Address { get; set; }
 . . .
 ```
 
-In the *Program.cs* file, the index is defined with a name and a field collection generated by the `FieldBuilder.Build(typeof(Hotel))` method, and then created as follows:
+In the `Program.cs` file, the index is defined with a name and a field collection generated by the `FieldBuilder.Build(typeof(Hotel))` method, and then created as follows:
 
 ```csharp
 private static async Task CreateIndexAsync(string indexName, SearchIndexClient indexClient)
@@ -133,7 +135,7 @@ private static async Task CreateIndexAsync(string indexName, SearchIndexClient i
 
 ### Generate data
 
-A simple class is implemented in the *DataGenerator.cs* file to generate data for testing. The purpose of this class is to make it easy to generate a large number of documents with a unique ID for indexing.
+A simple class is implemented in the `DataGenerator.cs` file to generate data for testing. The purpose of this class is to make it easy to generate a large number of documents with a unique ID for indexing.
 
 To get a list of 100,000 hotels with unique IDs, run the following code:
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Content updates to the push API indexing optimization tutorial"
}

Explanation

This change updates the C# tutorial on optimizing indexing with the push API. The main updates are:

  1. Title change:
    • The title changed from "Optimize Indexing Using the Push API" to "Use the Push API to Optimize Indexing", putting the action first.
  2. Date update:
    • The last-updated date changed from 03/28/2025 to 11/21/2025, so the document reflects its latest review.
  3. Clearer content:
    • Descriptions of data import, getting an admin key, environment setup, and the sample application were reworded for clarity.
    • In particular, "service tier" is now consistently called "pricing tier", and the key-retrieval steps follow the standard portal flow.
  4. Code formatting fixes:
    • File names such as appsettings.json now use backticks (code formatting) instead of italics, and code blocks are indented consistently.

As a result, the tutorial reads more clearly and gives more concrete instructions, so users can better understand the indexing optimization steps. Overall consistency and user experience are improved.
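The push model that the tutorial optimizes boils down to batching documents into a single indexing request. In REST terms, one batch looks roughly like this (the index name and fields are illustrative assumptions; the sample itself issues the equivalent call through the `Azure.Search.Documents` SDK):

```http
POST {{baseUrl}}/indexes/optimize-indexing/docs/index?api-version=2025-09-01  HTTP/1.1
Content-Type: application/json
api-key: {{apiKey}}

{
  "value": [
    { "@search.action": "upload", "HotelId": "1", "HotelName": "Hotel One" },
    { "@search.action": "upload", "HotelId": "2", "HotelName": "Hotel Two" }
  ]
}
```

When some documents in a batch fail (for example, under throttling), the response reports per-document statuses; the exponential backoff strategy in `ExponentialBackoff.cs` retries only the failed documents after an increasing delay rather than resending the whole batch.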

articles/search/tutorial-skillset.md

Diff
@@ -6,7 +6,7 @@ author: heidisteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: tutorial
-ms.date: 7/11/2025
+ms.date: 11/21/2025
 zone_pivot_groups: tutorial-create-skillset
 ---
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Date update for the skillset tutorial"
}

Explanation

This change updates the last-updated date of the skillset tutorial:

  1. Date change:
    • The date changed from 7/11/2025 to 11/21/2025, so the document reflects its latest review.

The update signals that the tutorial is actively maintained, which helps users trust the information it provides. It is a small but important part of keeping the documentation reliable.

articles/search/vector-search-index-size.md

Diff
@@ -1,41 +1,45 @@
 ---
-title: Vector index limits
+title: Vector Index Limits
 titleSuffix: Azure AI Search
-description: Explanation of the factors affecting the size of a vector index.
+description: Learn about the factors that affect the size of a vector index.
 author: robertklee
 ms.author: robertlee
 ms.service: azure-ai-search
 ms.update-cycle: 180-days
 ms.topic: conceptual
-ms.date: 03/20/2025
+ms.date: 11/21/2025
 ms.custom:
   - build-2024
   - ignite-2024
   - sfi-image-nochange
 ---
 
-# Vector index size and staying under limits
+# Vector index size and limits
 
 For each vector field, Azure AI Search constructs an internal vector index using the algorithm parameters specified on the field. Because Azure AI Search imposes quotas on vector index size, you should know how to estimate and monitor vector size to ensure you stay under the limits.
 
-> [!NOTE]
-> A note about terminology. Internally, the physical data structures of a search index include raw content (used for retrieval patterns requiring non-tokenized content), inverted indexes (used for searchable text fields), and vector indexes (used for searchable vector fields). This article explains the limits for the internal vector indexes that back each of your vector fields.
+Internally, the physical data structures of a search index include:
 
-> [!TIP]
-> [Vector optimization techniques](vector-search-how-to-configure-compression-storage.md) are now generally available. Use capabilities like narrow data types, scalar and binary quantization, and elimination of redundant storage to reduce your vector quota and storage quota consumption.
++ Raw content (used for retrieval patterns requiring nontokenized content)
++ Inverted indexes (used for searchable text fields)
++ Vector indexes (used for searchable vector fields)
 
-> [!NOTE]
-> Not all algorithms consume vector index size quota. Vector quotas are established based on memory requirements of Approximate Nearest Neighbor (ANN) search. Vector fields created with the Hierarchical Navigable Small World (HNSW) algorithm need to reside in memory during query execution because of the random-access nature of graph-based traversals. Vector fields using the exhaustive K-Nearest Neighbors (KNN) algorithm are loaded into memory dynamically in pages during query execution, and as a result do not consume vector quota.
+This article explains the limits for the internal vector indexes that back each of your vector fields.
+
+> [!TIP]
+> [Vector optimization techniques](vector-search-how-to-configure-compression-storage.md) are generally available. Use capabilities like narrow data types, scalar and binary quantization, and elimination of redundant storage to reduce your vector quota and storage quota consumption.
 
 ## Key points about quota and vector index size
 
 + Vector index size is measured in bytes.
 
-+ The total storage of your service contains all of your vector index files. Azure AI Search maintains different copies of vector index files for different purposes. We offer additional options to reduce the [storage overhead of vector indexes](vector-search-how-to-storage-options.md) by eliminating some of these copies.
++ The total storage of your service contains all of your vector index files. Azure AI Search maintains different copies of vector index files for different purposes. We offer other options to reduce the [storage overhead of vector indexes](vector-search-how-to-storage-options.md) by eliminating some of these copies.
 
 + Vector quotas are enforced on the search service as a whole, per partition. If you add partitions, vector quota also increases. Per-partition vector quotas are higher on newer services. For more information, see [Vector index size limits](search-limits-quotas-capacity.md#vector-index-size-limits).
 
-## How to check partition size and quantity
++ Not all algorithms consume vector index size quota. Vector quotas are established based on memory requirements of Approximate Nearest Neighbor (ANN) search. Vector fields created with the Hierarchical Navigable Small World (HNSW) algorithm need to reside in memory during query execution because of the random-access nature of graph-based traversals. Vector fields using the exhaustive K-Nearest Neighbors (KNN) algorithm are loaded into memory dynamically in pages during query execution and thus don't consume vector quota.
+
+## Check partition size and quantity
 
 If you aren't sure what your search service limits are, here are two ways to get that information:
 
@@ -45,7 +49,7 @@ If you aren't sure what your search service limits are, here are two ways to get
 
 Your vector limit varies depending on your [service creation date](search-how-to-upgrade.md#check-your-service-creation-or-upgrade-date).
 
-## How to get vector index size
+## Check vector index size
 
 A request for vector metrics is a data plane operation. You can use the Azure portal, REST APIs, or Azure SDKs to get vector usage at the service level through service statistics and for individual indexes.
 
@@ -55,7 +59,7 @@ A request for vector metrics is a data plane operation. You can use the Azure po
 
 To get vector index size per index, select **Search management** > **Indexes** to view a list of indexes and the document count, the size of in-memory vector indexes, and total index size as stored on disk.
 
-Recall that vector quota is based on memory constraints. For vector indexes created using the HNSW algorithm, all searchable vector indexes are permanently loaded into memory. For indexes created using the exhaustive KNN algorithm, vector indexes are loaded in chunks, sequentially, during query time. There's no memory residency requirement for exhaustive KNN indexes. The lifetime of the loaded pages in memory is similar to text search and there are no other metrics applicable to exhaustive KNN indexes other than total storage. 
+Recall that vector quota is based on memory constraints. For vector indexes created using the HNSW algorithm, all searchable vector indexes are permanently loaded into memory. For indexes created using the exhaustive KNN algorithm, vector indexes are loaded in chunks, sequentially, during query time. There's no memory residency requirement for exhaustive KNN indexes. The lifetime of the loaded pages in memory is similar to text search and there are no other metrics applicable to exhaustive KNN indexes other than total storage.
 
 The following screenshot shows two versions of the same vector index. One version is created using HNSW algorithm, where the vector graph is memory resident. Another version is created using exhaustive KNN algorithm. With exhaustive KNN, there's no specialized in-memory vector index, so the portal shows 0 MB for vector index size. Those vectors still exist and are counted in overall storage size, but they don’t occupy the in-memory resource that the vector index size metric is tracking.
 
@@ -65,17 +69,17 @@ The following screenshot shows two versions of the same vector index. One versio
 
 To get vector index size for the search service as a whole, select the **Overview** page's **Usage** tab. Portal pages refresh every few minutes so if you recently updated an index, wait a bit before checking results.
 
-The following screenshot is for an older Standard 1 (S1) search service, configured for one partition and one replica. 
+The following screenshot is for an older Standard 1 (S1) search service, configured for one partition and one replica.
 
 + Storage quota is a disk constraint, and it's inclusive of all indexes (vector and nonvector) on a search service.
 
-+ Vector index size quota is a memory constraint. It's the amount of memory required to load all internal vector indexes created for each vector field on a search service. 
++ Vector index size quota is a memory constraint. It's the amount of memory required to load all internal vector indexes created for each vector field on a search service.
 
-The screenshot indicates that indexes (vector and nonvector) consume almost 460 megabytes of available disk storage. Vector indexes consume almost 93 megabytes of memory at the service level. 
+The screenshot indicates that indexes (vector and nonvector) consume almost 460 megabytes of available disk storage. Vector indexes consume almost 93 megabytes of memory at the service level.
 
 :::image type="content" source="media/vector-search-index-size/portal-vector-index-size.png" lightbox="media/vector-search-index-size/portal-vector-index-size.png" alt-text="Screenshot of the Overview page's usage tab showing vector index consumption against quota.":::
 
-Quotas for both storage and vector index size increase or decrease as you add or remove partitions. If you change the partition count, the tile shows a corresponding change in storage and vector quota. 
+Quotas for both storage and vector index size increase or decrease as you add or remove partitions. If you change the partition count, the tile shows a corresponding change in storage and vector quota.
 
 > [!NOTE]
 > On disk, vector indexes aren't 93 megabytes. Vector indexes on disk take up about three times more space than vector indexes in memory. See [How vector fields affect disk storage](#how-vector-fields-affect-disk-storage) for details.
@@ -84,7 +88,7 @@ Quotas for both storage and vector index size increase or decrease as you add or
 
 Data plane REST APIs (all newer APIs provide vector usage statistics):
 
-+ [GET Service Statistics](/rest/api/searchservice/get-service-statistics/get-service-statistics) returns quota and usage for the search service all-up. 
++ [GET Service Statistics](/rest/api/searchservice/get-service-statistics/get-service-statistics) returns quota and usage for the search service all-up.
 + [GET Index Statistics](/rest/api/searchservice/indexes/get-statistics) returns usage for a given index.
 
 Usage and quota are reported in bytes.
@@ -93,8 +97,8 @@ Here's GET Service Statistics:
 
 ```http
 GET {{baseUrl}}/servicestats?api-version=2025-09-01  HTTP/1.1
-    Content-Type: application/json
-    api-key: {{apiKey}}
+Content-Type: application/json
+api-key: {{apiKey}}
 ```
 
 Response includes metrics for `storageSize`, which doesn't distinguish between vector and nonvector indexes. The `vectorIndexSize` statistic shows usage and quota at the service level.  
@@ -135,11 +139,11 @@ You can also send a GET Index Statistics to get the physical size of the index o
 
 ```http
 GET {{baseUrl}}/indexes/vector-healthplan-idx/stats?api-version=2025-09-01  HTTP/1.1
-    Content-Type: application/json
-    api-key: {{apiKey}}
+Content-Type: application/json
+api-key: {{apiKey}}
 ```
 
-Response includes usage information at the index level. This example is based on the index created in the [integrated vectorization quickstart](search-get-started-portal-import-vectors.md) that chunks and vectorizes health plan PDFs. Each chunk contributes to `documentCount`.
+Response includes usage information at the index level. This example is based on the index created in [Quickstart: Vector search](search-get-started-portal-import-vectors.md) that chunks and vectorizes health plan PDFs. Each chunk contributes to `documentCount`.
 
 ```json
 {
@@ -166,7 +170,7 @@ Each vector is usually an array of single-precision floating-point numbers, in a
 
 Vector data structures require storage, represented in the following calculation as the "raw size" of your data. Use this _raw size_ to estimate the vector index size requirements of your vector fields.
 
-The storage size of one vector is determined by its dimensionality. Multiply the size of one vector by the number of documents containing that vector field to obtain the _raw size_: 
+The dimensionality of one vector determines its storage size. Multiply the size of one vector by the number of documents containing that vector field to obtain the _raw size_:
 
 `raw size = (number of documents) * (dimensions of vector field) * (size of data type)`
 
@@ -183,13 +187,13 @@ Every ANN algorithm generates extra data structures in memory to enable efficien
   
 **For the HNSW algorithm, the memory overhead ranges between 1% and 20% for uncompressed float32 (Edm.Single) vectors.**  
   
-As dimensionality increases, the memory overhead percentage decreases. This occurs because the raw size of the vectors increases in size while the additional data structures, which store graph connectivity information, remain a fixed size for a given `m`. As a result, the relative impact of these extra data structures diminishes in relation to the overall vector size.
+As dimensionality increases, the memory overhead percentage decreases. This occurs because the raw size of the vectors increases in size while the other data structures, which store graph connectivity information, remain a fixed size for a given `m`. As a result, the relative impact of these extra data structures diminishes in relation to the overall vector size.
   
 The memory overhead increases with larger values of the HNSW parameter `m`, which specifies the number of bi-directional links created for each new vector during index construction. This happens because each link contributes approximately 8 to 10 bytes per document, and the total overhead scales proportionally with `m`.
   
 The following table summarizes the overhead percentages observed in internal tests for *uncompressed* vector fields:  
   
-| Dimensions | HNSW Parameter (m) | Overhead Percentage |  
+| Dimensions | HNSW parameter (m) | Overhead percentage |  
 |------------|--------------------|---------------------|
 | 96         | 4                  | 20%                 |
 | 200        | 4                  | 8%                  |  
@@ -199,7 +203,7 @@ The following table summarizes the overhead percentages observed in internal tes
 
 These results demonstrate the relationship between dimensions, HNSW parameter `m`, and memory overhead for the HNSW algorithm.
 
-For vector fields which use compression techniques, such as [scalar or binary quantization](vector-search-how-to-quantization.md), the overhead percentage appears to consume a greater percentage of the total vector index size. As the size of the data decreases, the relative impact of the fixed-size data structures used to store graph connectivity information becomes more significant.
+For vector fields that use compression techniques, such as [scalar or binary quantization](vector-search-how-to-quantization.md), the overhead percentage appears to consume a greater percentage of the total vector index size. As the size of the data decreases, the relative impact of the fixed-size data structures used to store graph connectivity information becomes more significant.
 
 ### Overhead from deleting or updating documents within the index
 
@@ -209,18 +213,23 @@ We refer to this as the *deleted documents ratio*. Since the deleted documents r
 
 This is another factor impacting the size of your vector index. Unfortunately, we don't have a mechanism to surface your current deleted documents ratio.
 
-## Estimating the total size for your data in memory
+## Estimate total size of data in memory
 
 Taking the previously described factors into account, to estimate the total size of your vector index, use the following calculation:
 
 **`(raw_size) * (1 + algorithm_overhead (in percent)) * (1 + deleted_docs_ratio (in percent))`**
 
 For example, to calculate the **raw_size**, let's assume you're using a popular Azure OpenAI model, `text-embedding-ada-002` with 1,536 dimensions. This means one document would consume 1,536 `Edm.Single` (floats), or 6,144 bytes since each `Edm.Single` is 4 bytes. 1,000 documents with a single, 1,536-dimensional vector field would consume in total 1000 docs x 1536 floats/doc = 1,536,000 floats, or 6,144,000 bytes.
 
-If you have multiple vector fields, you need to perform this calculation for each vector field within your index and add them all together. For example, 1,000 documents with **two** 1,536-dimensional vector fields, consume 1000 docs x **2 fields** x 1536 floats/doc x 4 bytes/float = 12,288,000 bytes. 
+If you have multiple vector fields, you need to perform this calculation for each vector field within your index and add them all together. For example, 1,000 documents with **two** 1,536-dimensional vector fields, consume 1000 docs x **2 fields** x 1536 floats/doc x 4 bytes/float = 12,288,000 bytes.
 
 To obtain the **vector index size**, multiply this **raw_size** by the **algorithm overhead** and **deleted document ratio**. If your algorithm overhead for your chosen HNSW parameters is 10% and your deleted document ratio is 10%, then we get: `6.144 MB * (1 + 0.10) * (1 + 0.10) = 7.434 MB`.
 
 ## How vector fields affect disk storage
 
-Most of this article provides information about the size of vectors in memory. Read more about the [storage overhead of vector indexes](vector-search-how-to-storage-options.md).
+Most of this article provides information about the size of vectors in memory. For information about the storage overhead of vector indexes, see [Eliminate optional vector instances from storage](vector-search-how-to-storage-options.md).
+
+## Related content
+
++ [Vector search in Azure AI Search](vector-search-overview.md)
++ [Choose an approach for optimizing vector storage and processing](vector-search-how-to-configure-compression-storage.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Content revisions to the vector index size article"
}

Explanation

This change updates the article on vector index size, revising several important points. The main changes are:

  1. Title style change:
    • The article title was changed from "Vector index limits" to "Vector Index Limits" to unify the capitalization style.
  2. Content clarification:
    • The description was changed from "Explanation of the factors affecting the size of a vector index." to "Learn about the factors that affect the size of a vector index.", making it more concise and direct.
    • Explanations of the vector index structures and related internal mechanisms were reworded to be easier to understand.
  3. Date update:
    • The last-modified date was changed from "03/20/2025" to "11/21/2025" so the article reflects the latest information.
  4. Content additions and reorganization:
    • Key summary points about vector indexes were added, organizing the information more systematically.
    • The article was also refactored to remove unnecessary material and focus on the important information.
  5. Related content added:
    • Links to related resources were added at the end of the article, making it easier for readers to continue learning.

These updates make the article more readable and strengthen the content that helps users understand vector index size and its implications.
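The estimation formula shown in the diff, `raw_size * (1 + algorithm_overhead) * (1 + deleted_docs_ratio)`, can be sketched in Python. This is a minimal illustration, not part of the Azure docs; the 10% overhead and 10% deleted-document ratio below are the assumptions used in the article's own worked example:

```python
def estimate_vector_index_size(
    num_documents: int,
    dimensions: int,
    bytes_per_value: int = 4,          # Edm.Single (float32) is 4 bytes
    algorithm_overhead: float = 0.10,  # HNSW overhead: roughly 1%-20%, depending on dimensions and m
    deleted_docs_ratio: float = 0.10,  # fraction of documents deleted/updated but not yet reclaimed
) -> float:
    """Return the estimated in-memory vector index size in bytes."""
    raw_size = num_documents * dimensions * bytes_per_value
    return raw_size * (1 + algorithm_overhead) * (1 + deleted_docs_ratio)

# 1,000 documents, one 1,536-dimensional field (text-embedding-ada-002):
size_bytes = estimate_vector_index_size(1_000, 1_536)
print(f"{size_bytes / 1_000_000:.3f} MB")  # prints "7.434 MB"
```

With the article's inputs (raw size 6,144,000 bytes), this reproduces the 7.434 MB estimate given in the updated text.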

articles/search/vector-search-integrated-vectorization-ai-studio.md

Diff
@@ -1,24 +1,24 @@
 ---
-title: Integrated vectorization with models from Microsoft Foundry
+title: Integrated Vectorization With Models From Microsoft Foundry
 titleSuffix: Azure AI Search
-description: Learn  how to vectorize content during indexing on Azure AI Search with a Microsoft Foundry model.
+description: Learn how to vectorize content during indexing in Azure AI Search with a Microsoft Foundry model.
 author: gmndrg
 ms.author: gimondra
 ms.service: azure-ai-search
 ms.custom:
   - build-2024
 ms.topic: how-to
-ms.date: 10/23/2025
+ms.date: 11/21/2025
 ---
 
 # Use embedding models from the Microsoft Foundry model catalog for integrated vectorization
 
 > [!IMPORTANT]
-> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-05-01-Preview REST API](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) supports this feature.
+> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The latest preview version of [Skillsets - Create Or Update (REST API)](/rest/api/searchservice/skillsets/create-or-update) supports this feature.
 
-In this article, you learn how to access embedding models from the [Foundry model catalog](/azure/ai-foundry/how-to/model-catalog-overview) for vector conversions during indexing and in queries in Azure AI Search.
+In this article, you learn how to access embedding models from the [Microsoft Foundry model catalog](/azure/ai-foundry/how-to/model-catalog-overview) for vector conversions during indexing and query execution in Azure AI Search.
 
-The workflow includes model deployment steps. The model catalog includes embedding models from Microsoft and other companies. Deploying a model is billable according to the billing structure of each provider.
+The workflow requires that you deploy a model from the catalog, which includes embedding models from Microsoft and other companies. Deploying a model is billable according to the billing structure of each provider.
 
 After the model is deployed, you can use it with the [AML skill](cognitive-search-aml-skill.md) for integrated vectorization during indexing or with the [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) for queries.
 
@@ -27,29 +27,29 @@ After the model is deployed, you can use it with the [AML skill](cognitive-searc
 
 ## Prerequisites
 
-+ An [Azure AI Search service](search-create-service-portal.md) in any region and on any tier.
++ An [Azure AI Search service](search-create-service-portal.md) in any region and on any pricing tier.
 
-+ A [Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects).
++ A [Microsoft Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects).
 
 ## Supported embedding models
 
-Supported embedding models from the Foundry model catalog vary by usage method:
+Supported embedding models from the model catalog vary by usage method:
 
 + For the latest list of models supported programmatically, see the [AML skill](cognitive-search-aml-skill.md) and [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) references.
 
 + For the latest list of models supported in the Azure portal, see [Quickstart: Vector search in the Azure portal](search-get-started-portal-import-vectors.md) and [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md).
 
 ## Deploy an embedding model from the model catalog
 
-1. Deploy a supported model to your project using [these instructions](/azure/ai-foundry/how-to/deploy-models-openai).
+1. Follow [these instructions](/azure/ai-foundry/how-to/deploy-models-openai) to deploy a supported model to your project.
 
 1. Make a note of the target URI, key, and model name. You need these values for the vectorizer definition in a search index and for the skillset that calls the model endpoints during indexing.
 
-    If you'd rather to use token authentication than key authentication, you only need to copy the URI and model name. However, make a note of the region to which the model is deployed.
+    If you prefer [token authentication](#connect-using-token-authentication) to key-based authentication, you only need to copy the URI and model name. However, make a note of the region to which the model is deployed.
 
 1. Configure a search index and indexer to use the deployed model.
 
-   + To use the model during indexing, see [steps to enable integrated vectorization](vector-search-integrated-vectorization.md#how-to-use-integrated-vectorization). Be sure to use the [AML skill](cognitive-search-aml-skill.md), not the [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md). The next section describes the skill configuration.
+   + To use the model during indexing, see [How to use integrated vectorization](vector-search-integrated-vectorization.md#how-to-use-integrated-vectorization). Be sure to use the [AML skill](cognitive-search-aml-skill.md), not the [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md). The next section describes the skill configuration.
 
    + To use the model as a vectorizer at query time, see [Configure a vectorizer](vector-search-how-to-configure-vectorizer.md). Be sure to use the [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) for this step.
 
@@ -81,11 +81,11 @@ Supported embedding models from the Foundry model catalog vary by usage method:
    + To use the model as a vectorizer at query time, see [Configure a vectorizer](vector-search-how-to-configure-vectorizer.md). Be sure to use the [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) for this step.
 -->
 
-## Sample AML skill payloads
+## Sample AML skill payload
 
-When you deploy embedding models from the Foundry model catalog, you connect to them using the [AML skill](cognitive-search-aml-skill.md) in Azure AI Search for indexing workloads.
+When you deploy embedding models from the model catalog, you connect to them using the [AML skill](cognitive-search-aml-skill.md) in Azure AI Search for indexing workloads.
 
-This section describes the AML skill definition and index mappings. It includes sample payloads that are already configured to work with their corresponding deployed endpoints. For more technical details on how these payloads work, see the [Skill context and input annotation language](cognitive-search-skill-annotation-language.md).
+This section describes the AML skill definition and index mappings. It includes a sample payload that's already configured to work with its corresponding deployed endpoint. For more information, see [Skill context and input annotation language](cognitive-search-skill-annotation-language.md).
 
 <!-- ### [**Text Input for "Inference" API**](#tab/inference-text)
 
@@ -252,9 +252,9 @@ If you selected a different `embedding_types` in your skill definition, change `
 }
 ```
 
-## Sample Foundry vectorizer payload
+## Sample vectorizer payload
 
-The [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md), unlike the AML skill, is tailored to work only with those embedding models that are deployable via the Foundry model catalog. The main difference is that you don't have to worry about the request and response payload, but you do have to provide the `modelName`, which corresponds to the "Model ID" that you copied after deploying the model in [Foundry portal](https://ai.azure.com/?cid=learnDocs). 
+The [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md), unlike the AML skill, is tailored to work only with embedding models that are deployable via the model catalog. The main difference is that you don't have to worry about the request and response payload. However, you must provide the `modelName`, which corresponds to the "Model ID" that you copied after deploying the model.
 
 Here's a sample payload of how you would configure the vectorizer on your index definition given the properties copied from Foundry.
 
@@ -276,18 +276,20 @@ For Cohere models, you should NOT add the `/v1/embed` path to the end of your UR
 
 ## Connect using token authentication
 
-If you can't use key-based authentication, you can instead configure the AML skill and Foundry vectorizer connection for [token authentication](../machine-learning/how-to-authenticate-online-endpoint.md) via role-based access control on Azure. The search service must have a [system or user-assigned managed identity](search-how-to-managed-identities.md), and the identity must have Owner or Contributor permissions for your AML project workspace. You can then remove the key field from your skill and vectorizer definition, replacing it with the resourceId field. If your AML project and search service are in different regions, also provide the region field.
+If you can't use key-based authentication, you can configure the AML skill and Microsoft Foundry model catalog vectorizer connection for [token authentication](../machine-learning/how-to-authenticate-online-endpoint.md) via role-based access control on Azure.
+
+Your search service must have a [system or user-assigned managed identity](search-how-to-managed-identities.md), and the identity must have **Owner** or **Contributor** permissions for your project. You can then remove the `key` field from your skill and vectorizer definition, replacing it with `resourceId`. If your project and search service are in different regions, also provide the `region` field.
 
 ```json
 "uri": "<YOUR_URL_HERE>",
 "resourceId": "subscriptions/<YOUR_SUBSCRIPTION_ID_HERE>/resourceGroups/<YOUR_RESOURCE_GROUP_NAME_HERE>/providers/Microsoft.MachineLearningServices/workspaces/<YOUR_AML_WORKSPACE_NAME_HERE>/onlineendpoints/<YOUR_AML_ENDPOINT_NAME_HERE>",
-"region": "westus", // Only need if AML project lives in different region from search service
+"region": "westus", // Only needed if project is in different region from search service
 ```
 
 > [!NOTE]
-> Token authentication is not currently supported for Cohere models for this integration; only key authentication is available at this time.  
+> This integration doesn't currently support token authentication for Cohere models. You must use key-based authentication.
 
-## Next steps
+## Related content
 
 + [Configure a vectorizer in a search index](vector-search-how-to-configure-vectorizer.md)
 + [Configure index projections in a skillset](index-projections-concept-intro.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Content revisions to the article on integrated vectorization with Microsoft Foundry"
}

Explanation

This change updates the article on integrated vectorization with Microsoft Foundry, revising the following points.

  1. Title style change:
    • The article title was changed from "Integrated vectorization with models from Microsoft Foundry" to "Integrated Vectorization With Models From Microsoft Foundry" to unify the capitalization style.
  2. Description improvement:
    • The description was changed from "Learn how to vectorize content during indexing on Azure AI Search with a Microsoft Foundry model." to "Learn how to vectorize content during indexing in Azure AI Search with a Microsoft Foundry model.", a more natural phrasing.
  3. Date update:
    • The last-modified date was changed from "10/23/2025" to "11/21/2025", bringing the information up to date.
  4. Workflow clarification:
    • The workflow and procedural steps were reorganized and clarified, and more links to related information were added.
  5. Related content strengthened:
    • The "Next steps" section was renamed "Related content", and additional links were added so readers can find more information easily.
  6. Terminology consistency:
    • Specific terms and phrases were revised for consistency throughout the document.

These updates improve the quality of the writing and help readers understand the integrated vectorization process with Microsoft Foundry more clearly.

articles/search/vector-search-overview.md

Diff
@@ -1,5 +1,5 @@
 ---
-title: Vector search
+title: Vector Search
 titleSuffix: Azure AI Search
 description: Describes concepts, scenarios, and availability of vector capabilities in Azure AI Search.
 author: robertklee
@@ -8,7 +8,7 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 11/06/2025
+ms.date: 11/21/2025
 ai-usage: ai-assisted
 ---
 
@@ -45,7 +45,7 @@ Vector search supports the following scenarios:
 
 + **Vector database**. Azure AI Search stores the data that you query over. Use it as a [pure vector index](vector-store.md) when you need long-term memory or a knowledge base, grounding data for the [retrieval-augmented generation (RAG)](retrieval-augmented-generation-overview.md) architecture, or an app that uses vectors.
 
-## How vector search works in Azure AI Search
+## How does vector search work?
 
 Azure AI Search supports indexing, storing, and querying vector embeddings from a search index. The following diagram shows the indexing and query workflows for vector search.
 
@@ -65,7 +65,7 @@ Azure AI Search supports [hybrid scenarios](hybrid-search-overview.md) that run
 
 ## Availability and pricing
 
-Vector search is available in [all regions](search-region-support.md) and on [all tiers](search-sku-tier.md) at no extra charge.
+Vector search is available in [all regions](search-region-support.md) and on [all tiers](search-sku-tier.md) at no extra charge. However, generating embeddings or using AI enrichment for vectorization might incur charges from the model provider.
 
 For portal and programmatic access to vector search, you can use:
 
@@ -86,7 +86,7 @@ Azure AI Search is deeply integrated across the Azure AI platform. The following
 | Product | Integration |
 |---------|-------------|
 | Azure OpenAI | Azure OpenAI provides embedding models and chat models. Demos and samples target the [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings-models) model. We recommend Azure OpenAI for generating embeddings for text. |
-| Foundry Tools | [Image Retrieval Vectorize Image API (preview)](/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-image-api) supports vectorization of image content. We recommend this API for generating embeddings for images. |
+| Foundry Tools | [Image Retrieval Vectorize Image API](/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-image-api) supports vectorization of image content. We recommend this API for generating embeddings for images. |
 | Foundry Agent Service | In Azure AI Search, you can create an *indexed [knowledge source](agentic-knowledge-source-overview.md)* that points to a search index containing vector fields and a vectorizer. You can then parent the knowledge source to a *[knowledge base](agentic-retrieval-how-to-create-knowledge-base.md)* and [connect the knowledge base to Foundry Agent Service](/azure/ai-foundry/agents/how-to/tools/knowledge-retrieval), providing your agents with vector search results for enhanced knowledge retrieval. |
 | Azure data platforms: Azure Blob Storage, Azure Cosmos DB, Azure SQL, Microsoft OneLake | You can use [indexers](search-indexer-overview.md) to automate data ingestion, and then use [integrated vectorization](vector-search-integrated-vectorization.md) to generate embeddings. Azure AI Search can automatically index vector data from [Azure blob indexers](search-how-to-index-azure-blob-storage.md), [Azure Cosmos DB for NoSQL indexers](search-how-to-index-cosmosdb-sql.md), [Azure Data Lake Storage Gen2](search-how-to-index-azure-data-lake-storage.md), [Azure Table Storage](search-how-to-index-azure-tables.md), and [Microsoft OneLake](search-how-to-index-onelake-files.md). For more information, see [Add vector fields to a search index.](vector-search-how-to-create-index.md). |
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Content revisions to the vector search overview article"
}

Explanation

This change updates the overview article on vector search in Azure AI Search, revising the following points.

  1. Title style change:
    • The article title was changed from "Vector search" to "Vector Search" to unify the capitalization style.
  2. Date update:
    • The last-modified date was changed from "11/06/2025" to "11/21/2025", bringing the information up to date.
  3. Heading improvement:
    • The heading "How vector search works in Azure AI Search" was changed to "How does vector search work?", a question form that reads more naturally.
  4. Content clarification:
    • The explanation of vector search availability was expanded to note that generating embeddings or using AI enrichment for vectorization might incur charges from the model provider.
  5. Table fix:
    • The table entry "Image Retrieval Vectorize Image API (preview)" was simplified to "Image Retrieval Vectorize Image API".

These updates make the article more consistent and readable, improving the reader's understanding of vector search in Azure AI Search.