Diff Insight Report - search

Last updated: 2025-12-12

Usage notes

This post is a derivative work, adapted and summarized with the help of generative AI from Microsoft's official Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted in MicrosoftDocs/azure-ai-docs.

Generative AI has its limits, and this post may contain mistranslations or misinterpretations. Treat it as reference material only, and always consult the original documents for accurate information.

Trademarks used in this post belong to their respective owners. They are used for technical explanation only and do not imply endorsement or recommendation by the trademark holders.

View Diff on GitHub


# Highlights
This change is a large-scale update to the Azure AI Search documentation, covering redirect configuration, text fixes, and the removal of image files and tutorials. The removal of tutorials and how-to guides in particular constitutes a breaking change. It also includes Microsoft Foundry terminology corrections and documentation link updates.

New features

  • Correct naming for the Microsoft Foundry model catalog.

Breaking changes

  • Complete removal of many tutorials and how-to guides.
  • Large-scale removal of image files, leaving gaps in visual information.
  • Tutorials and page navigation removed from the table of contents, so related information is no longer discoverable.

Other updates

  • Links and access targets updated throughout the documentation.
  • Guides for RAG solutions and related techniques removed, slimming down the documentation.
  • Metadata and topic classification updates that improve article consistency.

Insights

This change appears to be aimed at a broad cleanup of the documentation and alignment with the latest technical information. However, the many removals flagged as breaking changes leave existing users without information they previously relied on. Because the removed material was useful to many developers and practitioners, how those resources will be replaced remains an open question.

On the other hand, the terminology and link adjustments, including the Microsoft Foundry renaming, bring consistency and help users reach up-to-date information. Understanding how the documentation has been reorganized also prepares readers to adapt to the new layout and sources of information.

Going forward, the key expectation is replacement resources for the removed features and tutorials. Users will look for new guidance so they can keep making full use of the latest technology, and prompt communication about the gaps left by the removals is essential to maintaining a good experience with Azure AI Search.

Summary Table

| Filename | Type | Title | Status | A | D | M |
|---|---|---|---|---|---|---|
| .openpublishing.redirection.search.json | minor update | Redirect settings updated: tutorial links added | modified | 46 | 1 | 47 |
| cognitive-search-aml-skill.md | minor update | Text fix: Microsoft Foundry spelled out | modified | 3 | 3 | 6 |
| search-get-started-skillset-new-wizard.md | breaking change | Quickstart removed: new wizard | removed | 0 | 208 | 208 |
| search-get-started-skillset-old-wizard.md | breaking change | Quickstart removed: old wizard | removed | 0 | 196 | 196 |
| 1-10-1-parameter-metadata-search.png | breaking change | Image removed: parameter metadata search | removed | 0 | 0 | 0 |
| 1-10-2-parameter-metadata-version.png | breaking change | Image removed: parameter metadata version | removed | 0 | 0 | 0 |
| 1-10-4-parameter-metadata-select.png | breaking change | Image removed: parameter metadata select | removed | 0 | 0 | 0 |
| 1-11-1-test-connector.png | breaking change | Image removed: test connector | removed | 0 | 0 | 0 |
| 1-11-2-test-connector.png | breaking change | Image removed: test connector (2) | removed | 0 | 0 | 0 |
| 1-2-custom-connector.png | breaking change | Image removed: custom connector | removed | 0 | 0 | 0 |
| 1-3-create-blank.png | breaking change | Image removed: create blank | removed | 0 | 0 | 0 |
| 1-5-general-info.png | breaking change | Image removed: general information | removed | 0 | 0 | 0 |
| 1-6-authentication-type.png | breaking change | Image removed: authentication type | removed | 0 | 0 | 0 |
| 1-7-new-action.png | breaking change | Image removed: new action | removed | 0 | 0 | 0 |
| 1-8-1-import-from-sample.png | breaking change | Image removed: import from sample | removed | 0 | 0 | 0 |
| 1-8-2-import-from-sample.png | breaking change | Image removed: import from sample (2) | removed | 0 | 0 | 0 |
| 2-3-connect-connector.png | breaking change | Image removed: connect the connector | removed | 0 | 0 | 0 |
| 2-4-add-controls.png | breaking change | Image removed: add controls | removed | 0 | 0 | 0 |
| 2-5-controls-layout.png | breaking change | Image removed: controls layout | removed | 0 | 0 | 0 |
| 2-6-search-button-event.png | breaking change | Image removed: search button event | removed | 0 | 0 | 0 |
| 2-7-gallery-select-fields.png | breaking change | Image removed: select fields in gallery | removed | 0 | 0 | 0 |
| 2-8-2-final.png | breaking change | Image removed: final result | removed | 0 | 0 | 0 |
| 2-8-3-final.png | breaking change | Image removed: final result (3) | removed | 0 | 0 | 0 |
| retrieval-augmented-generation-overview.md | minor update | Document revised: RAG overview | modified | 10 | 76 | 86 |
| samples-dotnet.md | minor update | Document updated: .NET samples revised | modified | 1 | 10 | 11 |
| samples-python.md | minor update | Document updated: Python samples revised | modified | 1 | 1 | 2 |
| search-blob-storage-integration.md | minor update | Document updated: Blob Storage integration fixes | modified | 2 | 2 | 4 |
| search-get-started-portal-image-search.md | minor update | Document updated: fixes to the portal image search walkthrough | modified | 6 | 6 | 12 |
| search-get-started-portal-import-vectors.md | minor update | Document updated: vector import fixes | modified | 5 | 5 | 10 |
| search-get-started-rag.md | minor update | Document updated: obsolete content removed from the RAG quickstart | modified | 1 | 5 | 6 |
| search-get-started-skillset.md | minor update | Document updated: skillset quickstart steps strengthened | modified | 199 | 8 | 207 |
| search-how-to-define-index-projections.md | minor update | Document updated: index projection reference changes | modified | 4 | 4 | 8 |
| search-how-to-integrated-vectorization.md | minor update | Document updated: model naming fixes for integrated vectorization | modified | 2 | 2 | 4 |
| search-howto-complex-data-types.md | minor update | Document updated: reference links and spelling fixes | modified | 2 | 2 | 4 |
| search-howto-powerapps.md | breaking change | Document removed: tutorial on querying from Power Apps | removed | 0 | 269 | 269 |
| search-pagination-page-layout.md | minor update | Document updated: ASP.NET Core app information removed | modified | 0 | 2 | 2 |
| search-try-for-free.md | minor update | Document updated: topic change and quickstart link fixes | modified | 2 | 2 | 4 |
| search-what-is-an-index.md | minor update | Document updated: quickstart link fix | modified | 1 | 1 | 2 |
| toc.yml | minor update | Table of contents updated: obsolete entries removed | modified | 0 | 20 | 20 |
| tutorial-csharp-create-mvc-app.md | breaking change | Tutorial removed: adding search to an ASP.NET Core MVC app | removed | 0 | 482 | 482 |
| tutorial-rag-build-solution-index-schema.md | breaking change | Tutorial removed: designing a RAG solution index schema | removed | 0 | 215 | 215 |
| tutorial-rag-build-solution-maximize-relevance.md | breaking change | Tutorial removed: maximizing relevance in a RAG solution | removed | 0 | 333 | 333 |
| tutorial-rag-build-solution-minimize-storage.md | breaking change | Tutorial removed: minimizing storage and costs | removed | 0 | 342 | 342 |
| tutorial-rag-build-solution-models.md | breaking change | Tutorial removed: setting up models | removed | 0 | 172 | 172 |
| tutorial-rag-build-solution-pipeline.md | breaking change | Tutorial removed: building an indexing pipeline | removed | 0 | 406 | 406 |
| tutorial-rag-build-solution-query.md | breaking change | Tutorial removed: searching data with an LLM | removed | 0 | 306 | 306 |
| tutorial-rag-build-solution.md | breaking change | Tutorial removed: building a classic RAG solution | removed | 0 | 61 | 61 |
| vector-search-how-to-configure-vectorizer.md | minor update | Microsoft Foundry model naming change | modified | 1 | 1 | 2 |
| vector-search-how-to-generate-embeddings.md | minor update | RAG solution tutorial link removed | modified | 0 | 1 | 1 |
| vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md | minor update | Microsoft Foundry terminology changes | modified | 4 | 4 | 8 |

Modified Contents

articles/search/.openpublishing.redirection.search.json

Diff
@@ -30,7 +30,7 @@
             "redirect_url": "/azure/search/search-get-started-skillset",
             "redirect_document_id": true
         },
-                {
+        {
             "source_path_from_root": "/articles/search/cognitive-search-tutorial-blob.md",
             "redirect_url": "/azure/search/tutorial-skillset",
             "redirect_document_id": true
@@ -595,6 +595,51 @@
             "source_path_from_root": "/articles/search/search-how-to-index-logic-apps-indexers.md",
             "redirect_url": "/azure/search/search-how-to-index-logic-apps",
             "redirect_document_id": true
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution-models.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution-index-schema.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution-pipeline.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution-query.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution-maximize-relevance.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-rag-build-solution-minimize-storage.md",
+            "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/search-howto-powerapps.md",
+            "redirect_url": "/azure/search/search-what-is-azure-search",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/search/tutorial-csharp-create-mvc-app.md",
+            "redirect_url": "/azure/search/tutorial-csharp-overview",
+            "redirect_document_id": false
         }
     ]
   }
\ No newline at end of file

Summary

{
    "modification_type": "minor update",
    "modification_title": "リダイレクト設定の更新: 追加されたチュートリアルリンク"
}

Explanation

This change updates the redirect configuration for the Azure AI Search documentation. Several new redirect entries are added to the file so that links to retired articles keep resolving. The change touches 47 lines in total: 46 additions and 1 deletion.

The main point is the set of new redirect entries for the removed tutorials and how-to articles, which send readers to maintained destinations such as the azure-search-classic-rag sample repository and related overview pages. The single deleted line is an indentation fix, which keeps the structure of the file consistent.
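
For reference, every entry added here follows the same three-field shape. The snippet below is a minimal sketch with a hypothetical source path, not a value from the diff; `redirect_document_id` is `false` in the added entries, presumably because the old document ID shouldn't carry over when the target is an external repository rather than another Learn article.

```json
{
    "source_path_from_root": "/articles/search/tutorial-example-removed.md",
    "redirect_url": "https://github.com/Azure-Samples/azure-search-classic-rag",
    "redirect_document_id": false
}
```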

articles/search/cognitive-search-aml-skill.md

Diff
@@ -43,9 +43,9 @@ We recommend using the [**Import data (new)** wizard](search-get-started-portal-
 
 ## Prerequisites
 
-+ A [Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) or an [AML workspace](../machine-learning/concept-workspace.md) for a custom model that you create.
++ A [Microsoft Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) or an [AML workspace](../machine-learning/concept-workspace.md) for a custom model that you create.
 
-+ For hub-based projects only, a [serverless deployment](/azure/ai-foundry/how-to/deploy-models-serverless) of a [supported model](#skill-parameters) from the Foundry model catalog.
++ For hub-based projects only, a serverless deployment of a [supported model](#skill-parameters) from the Microsoft Foundry model catalog. You can use an [ARM/Bicep template](https://github.com/Azure-Samples/azure-ai-search-multimodal-sample/blob/42b4d07f2dd9f7720fdc0b0788bf107bdac5eecb/infra/ai/modules/project.bicep#L37C1-L38C1) to provision the serverless deployment.
 
 ## @odata.type
 
@@ -57,7 +57,7 @@ Parameters are case sensitive. The parameters you use depend on what [authentica
 
 | Parameter name | Description |
 |--------------------|-------------|
-| `uri` | (Required for [key authentication](#WhatSkillParametersToUse)) The target URI of the serverless deployment from the Foundry model catalog or the [scoring URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). Only the HTTPS URI scheme is allowed. Supported models from the model catalog are:<ul><li>Cohere-embed-v3-english</li><li>Cohere-embed-v3-multilingual</li><li>Cohere-embed-v4</li></ul> |
+| `uri` | (Required for [key authentication](#WhatSkillParametersToUse)) The target URI of the serverless deployment from the Microsoft Foundry model catalog or the [scoring URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). Only the HTTPS URI scheme is allowed. Supported models from the model catalog (serverless deployments only) are:<ul><li>Cohere-embed-v3-english</li><li>Cohere-embed-v3-multilingual</li><li>Cohere-embed-v4</li></ul> |
 | `key` | (Required for [key authentication](#WhatSkillParametersToUse)) The API key of the model provider. |
 | `resourceId` | (Required for [token authentication](#WhatSkillParametersToUse)) The Azure Resource Manager resource ID of the model provider. For an AML online endpoint, use the `subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/onlineendpoints/{endpoint_name}` format. |
 | `region` | (Optional for [token authentication](#WhatSkillParametersToUse)) The region in which the model provider is deployed. Required if the region is different from the region of the search service. |

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメントのテキスト修正: Microsoft Foundry の明示"
}

Explanation

This change edits the text of cognitive-search-aml-skill.md, mainly to make the terminology explicit: "Foundry" becomes "Microsoft Foundry", which clearly identifies the resources being referenced.

The change spans 6 lines in total, with 3 additions and 3 deletions. It tightens the descriptions of hub-based projects and serverless deployments so readers can more easily identify the key resources, and it adds a pointer to an ARM/Bicep template for provisioning the serverless deployment. For developers in particular, this serves as a clearer reference when working with Microsoft resources.
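
To make the parameter discussion concrete, here is a minimal sketch of an AML skill definition that targets a serverless deployment from the Microsoft Foundry model catalog with key authentication, using the `uri`/`key` parameters from the updated table. The endpoint URL, key placeholder, skill name, and input/output names are illustrative assumptions rather than values from the article; check the skill reference for the exact contract of the model you deploy.

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.AmlSkill",
  "name": "foundry-embedding-skill",
  "description": "Calls a Cohere embedding model deployed serverlessly from the Microsoft Foundry model catalog",
  "context": "/document",
  "uri": "https://<your-serverless-deployment>.<region>.models.ai.azure.com/embeddings",
  "key": "<serverless-deployment-api-key>",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "response", "targetName": "embedding" }
  ]
}
```

For token authentication, `resourceId` (and optionally `region`) would replace `uri` and `key`, as described in the parameter table.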

articles/search/includes/quickstarts/search-get-started-skillset-new-wizard.md

Diff
@@ -1,208 +0,0 @@
----
-manager: nitinme
-author: haileytap
-ms.author: haileytapia
-ms.service: azure-ai-search
-ms.topic: include
-ms.date: 09/16/2025
----
-
-> [!IMPORTANT]
-> The **Import data (new)** wizard now supports keyword search, which was previously only available in the **Import data** wizard. We recommend the new wizard for an improved search experience. For more information about how we're consolidating the wizards, see [Import data wizards in the Azure portal](../../search-import-data-portal.md).
-
-In this quickstart, you learn how a skillset in Azure AI Search adds optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition to generate text-searchable content in an index.
-
-You can run the **Import data (new)** wizard in the Azure portal to apply skills that create and transform textual content during indexing. The input is your raw data, usually blobs in Azure Storage. The output is a searchable index containing AI-generated image text, captions, and entities. You can then query generated content in the Azure portal using [**Search explorer**](../../search-explorer.md).
-
-Before you run the wizard, you create a few resources and upload sample files.
-
-## Prerequisites
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn).
-
-+ An Azure AI Search service. [Create a service](../../search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
-
-+ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage on a standard performance (general-purpose v2) account. To avoid bandwidth charges, use the same region as Azure AI Search.
-
-> [!NOTE]
-> This quickstart uses [Foundry Tools](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is small, Foundry Tools is tapped behind the scenes for free processing up to 20 transactions. Therefore, you don't need to create a Microsoft Foundry resource.
-
-## Prepare sample data
-
-In this section, you create an Azure Storage container to store sample data consisting of various file types, including images and application files that aren't full-text searchable in their native formats.
-
-To prepare the sample data for this quickstart:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure Storage account.
-
-1. From the left pane, select **Data storage** > **Containers**.
-
-1. Create a container, and then upload the [sample data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ai-enrichment-mixed-media) to the container.
-
-## Run the wizard
-
-To run the wizard:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
-
-1. On the **Overview** page, select **Import data (new)**.
-
-   :::image type="content" source="../../media/search-import-data-portal/import-data-new-button.png" alt-text="Screenshot that shows how to open the new import wizard in the Azure portal.":::
-
-1. Select **Azure Blob Storage** for the data source.
-
-   :::image type="content" source="../../media/search-get-started-skillset/choose-data-source.png" alt-text="Screenshot of the Azure Blob Storage data source option in the Azure portal." border="true" lightbox="../../media/search-get-started-skillset/choose-data-source.png":::
-
-1. Select **Keyword search**.
-
-   :::image type="content" source="../../media/search-get-started-portal/keyword-search-tile.png" alt-text="Screenshot of the keyword search tile in the Azure portal." border="true" lightbox="../../media/search-get-started-portal/keyword-search-tile.png":::
-
-### Step 1: Create a data source
-
-Azure AI Search requires a connection to a data source for content ingestion and indexing. In this case, the data source is your Azure Storage account.
-
-To create the data source:
-
-1. On the **Connect to your data** page, select your Azure subscription.
-
-1. Select your storage account, and then select the container you created.
-
-   :::image type="content" source="../../media/search-get-started-skillset/connect-to-your-data.png" alt-text="Screenshot of the Connect to your data page in the Azure portal." border="true" lightbox="../../media/search-get-started-skillset/connect-to-your-data.png":::
-
-1. Select **Next**.
-
-If you get `Error detecting index schema from data source`, the indexer that powers the wizard can't connect to your data source. The data source most likely has security protections. Try the following solutions, and then rerun the wizard.
-
-| Security feature | Solution |
-|--------------------|----------|
-| Resource requires Azure roles, or its access keys are disabled. | [Connect as a trusted service](../../search-indexer-howto-access-trusted-service-exception.md) or [connect using a managed identity](../../search-how-to-managed-identities.md). |
-| Resource is behind an IP firewall. | [Create an inbound rule for Azure AI Search and the Azure portal](../../search-indexer-howto-access-ip-restricted.md). |
-| Resource requires a private endpoint connection. | [Connect over a private endpoint](../../search-indexer-howto-access-private.md). |
-
-### Step 2: Add cognitive skills
-
-The next step is to configure AI enrichment to invoke OCR, image analysis, and entity recognition.
-
-OCR and image analysis are available for blobs in Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 and for image content in Microsoft OneLake. Images can be standalone files or embedded images in a PDF or other files.
-
-To add the skills:
-
-1. Select **Extract entities**, and then select the gear icon.
-
-1. Select and save the following checkboxes:
-
-   + **Persons**
-
-   + **Locations**
-
-   + **Organizations**
-
-   :::image type="content" source="../../media/search-get-started-skillset/extract-entities.png" alt-text="Screenshot of the Extract entities options in the Azure portal." lightbox="../../media/search-get-started-skillset/extract-entities.png":::
-
-1. Select **Extract text from images**, and then select the gear icon.
-
-1. Select and save the following checkboxes:
-
-   + **Generate tags**
-
-   + **Categorize content**
-
-   :::image type="content" source="../../media/search-get-started-skillset/extract-text.png" alt-text="Screenshot of the Extract text from images options in the Azure portal." lightbox="../../media/search-get-started-skillset/extract-text.png":::
-
-1. Leave the **Use a free AI service (limited enrichments)** checkbox selected.
-
-   The sample data consists of 14 files, so the free allotment of 20 transactions on Foundry Tools is sufficient.
-
-1. Select **Next**.
-
-### Step 3: Configure the index
-
-An index contains your searchable content. The wizard can usually create the schema by sampling the data source. In this step, you review the generated schema and potentially revise any settings.
-
-For this quickstart, the wizard sets reasonable defaults:  
-
-+ Default fields are based on metadata properties of existing blobs and new fields for the enrichment output, such as `persons`, `locations`, and `organizations`. Data types are inferred from metadata and by data sampling.
-
-  :::image type="content" source="../../media/search-get-started-skillset/index-fields-new-wizard.png" alt-text="Screenshot of the index definition page." border="true" lightbox="../../media/search-get-started-skillset/index-fields-new-wizard.png":::
-
-+ Default document key is `metadata_storage_path`, which is selected because the field contains unique values.
-
-+ Default field attributes are based on the skills you selected. For example, fields created by the Entity Recognition skill (`persons`, `locations`, and `organizations`) are **Retrievable**, **Filterable**, **Facetable**, and **Searchable**. To view and change these attributes, select a field, and then select **Configure field**.
-
-  **Retrievable** fields can be returned in results, while **Searchable** fields support full-text search. Use **Filterable** if you want to use fields in a filter expression.
-  
-  Marking a field as **Retrievable** doesn't mean that the field *must* appear in search results. You can control which fields are returned by using the `select` query parameter.
-
-After you review the index schema, select **Next**.
-
-### Step 4: Skip advanced settings
-
-The wizard offers advanced settings for semantic ranking and index scheduling, which are beyond the scope of this quickstart. Skip this step by selecting **Next**.
-
-### Step 5: Review and create objects
-
-The last step is to review your configuration and create the index, indexer, and data source on your search service. The indexer automates the process of extracting content from your data source, loading the index, and driving skillset execution.
-
-To review and create the objects:
-
-1. Accept the default **Objects name prefix**.
-
-1. Review the object configurations.
-
-   :::image type="content" source="../../media/search-get-started-skillset/review-and-create.png" alt-text="Screenshot of the object configuration page in the Azure portal." border="true" lightbox="../../media/search-get-started-skillset/review-and-create.png":::
-
-   AI enrichment, semantic ranker, and indexer scheduling are either disabled or set to their default values because you skipped their wizard steps.
-
-1. Select **Create** to simultaneously create the objects and run the indexer.
-
-## Monitor status
-
-You can monitor the creation of the indexer in the Azure portal. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
-
-To monitor the progress of the indexer:
-
-1. From the left pane, select **Indexers**.
-
-1. Select your indexer from the list.
-
-1. Select **Success** (or **Failed**) to view execution details.
-
-   :::image type="content" source="../../media/search-get-started-skillset/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true" lightbox="../../media/search-get-started-skillset/indexer-notification.png":::
-
-  In this quickstart, there are a few warnings, including `Could not execute skill because one or more skill input was invalid.` This warning tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. It occurs because the upstream OCR skill didn't recognize any text in the image and couldn't provide a text input to the downstream Entity Recognition skill.
-
-  Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore.
-
-## Query in Search explorer
-
-To query your index:
-
-1. From the left pane, select **Indexes**.
-
-1. Select your index from the list. If the index has zero documents or storage, wait for the Azure portal to refresh.
-
-1. On the **Search explorer** tab, enter a search string, such as `satya nadella`.
-
-The search bar accepts keywords, quote-enclosed phrases, and operators. For example: `"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`
-
-Results are returned as verbose JSON, which can be hard to read, especially in large documents. Here are tips for searching in this tool:
-    
-   + Switch to the JSON view to specify parameters that shape results.
-   + Add `select` to limit the fields in results.
-   + Add `count` to show the number of matches.
-   + Use Ctrl-F to search within the JSON for specific properties or terms.
-
-:::image type="content" source="../../media/search-get-started-skillset/search-explorer-new-wizard.png" alt-text="Screenshot of the Search explorer page." border="true" lightbox="../../media/search-get-started-skillset/search-explorer-new-wizard.png":::
-
-Here's some JSON you can paste into the view:
-    
-```json
-{
-"search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
-"count": true,
-"select": "merged_content, persons"
-}
-```
-
-> [!TIP]
-> Query strings are case sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify the name and case.
\ No newline at end of file

Summary

{
    "modification_type": "breaking change",
    "modification_title": "クイックスタートガイドの削除: 新しいウィザード"
}

Explanation

This change removes the file search-get-started-skillset-new-wizard.md entirely. All 208 lines of content are deleted, and with them everything this quickstart covered.

As a result, the official walkthrough for the new Import data wizard in Azure AI Search disappears, and for the moment users lose the guidance on how to use the new wizard and what it offers. This likely reflects a reorganization or refresh of the documentation, and users may need to rely on other resources or a newer guide instead.

Going forward, the material about the new wizard is likely to be folded into other documents. Because this change affects anyone working with Azure AI Search, keep an eye on the documentation updates and re-check the guidance you rely on.

articles/search/includes/quickstarts/search-get-started-skillset-old-wizard.md

Diff
@@ -1,196 +0,0 @@
----
-manager: nitinme
-author: haileytap
-ms.author: haileytapia
-ms.service: azure-ai-search
-ms.topic: include
-ms.date: 09/16/2025
----
-
-> [!IMPORTANT]
-> The **Import data** wizard will eventually be deprecated. Most of its functionality is available in the **Import data (new)** wizard, which we recommend for most search scenarios. For more information, see [Import data wizards in the Azure portal](../../search-import-data-portal.md).
-
-In this quickstart, you learn how a skillset in Azure AI Search adds optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition to generate text-searchable content in an index.
-
-You can run the **Import data** wizard in the Azure portal to apply skills that create and transform textual content during indexing. The input is your raw data, usually blobs in Azure Storage. The output is a searchable index containing AI-generated image text, captions, and entities. You can then query generated content in the Azure portal using [**Search explorer**](../../search-explorer.md).
-
-Before you run the wizard, you create a few resources and upload sample files.
-
-## Prerequisites
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn).
-
-+ An Azure AI Search service. [Create a service](../../search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
-
-+ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage on a standard performance (general-purpose v2) account. To avoid bandwidth charges, use the same region as Azure AI Search.
-
-> [!NOTE]
-> This quickstart uses [Foundry Tools](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is small, Foundry Tools is tapped behind the scenes for free processing up to 20 transactions. Therefore, you don't need to create a Microsoft Foundry resource.
-
-## Prepare sample data
-
-In this section, you create an Azure Storage container to store sample data consisting of various file types, including images and application files that aren't full-text searchable in their native formats.
-
-To prepare the sample data for this quickstart:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure Storage account.
-
-1. From the left pane, select **Data storage** > **Containers**.
-
-1. Create a container, and then upload the [sample data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ai-enrichment-mixed-media) to the container.
-
-## Run the wizard
-
-To run the wizard:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
-
-1. On the **Overview** page, select **Import data**.
-
-   :::image type="content" source="../../media/search-import-data-portal/import-data-button.png" alt-text="Screenshot of the Import data command." border="true":::
-
-### Step 1: Create a data source
-
-Azure AI Search requires a connection to a data source for content ingestion and indexing. In this case, the data source is your Azure Storage account.
-
-To create the data source:
-
-1. On the **Connect to your data** page, select the **Data Source** dropdown list, and then select **Azure Blob Storage**.
-
-1. Choose an existing connection string for your storage account, and then select the container you created.
-
-1. Enter a name for the data source.
-
-   :::image type="content" source="../../media/search-get-started-skillset/blob-datasource.png" alt-text="Screenshot of the data source definition page." border="true" lightbox="../../media/search-get-started-skillset/blob-datasource.png":::
-
-1. Select **Next: Add cognitive skills (Optional)**.
-
-If you get `Error detecting index schema from data source`, the indexer that powers the wizard can't connect to your data source. The data source most likely has security protections. Try the following solutions, and then rerun the wizard.
-
-| Security feature | Solution |
-|--------------------|----------|
-| Resource requires Azure roles, or its access keys are disabled. | [Connect as a trusted service](../../search-indexer-howto-access-trusted-service-exception.md) or [connect using a managed identity](../../search-how-to-managed-identities.md). |
-| Resource is behind an IP firewall. | [Create an inbound rule for Azure AI Search and the Azure portal](../../search-indexer-howto-access-ip-restricted.md). |
-| Resource requires a private endpoint connection. | [Connect over a private endpoint](../../search-indexer-howto-access-private.md). |
-
-### Step 2: Add cognitive skills
-
-The next step is to configure AI enrichment to invoke OCR, image analysis, and natural-language processing. 
-
-OCR and image analysis are available for blobs in Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 and for image content in Microsoft OneLake. Images can be standalone files or embedded images in a PDF or other files.
-
-To add the skills:
-
-1. Expand the **Attach Cognitive Services** section.
-
-1. Select **Free (Limited enrichments)** to use a free Foundry resource.
-
-   :::image type="content" source="../../media/search-get-started-skillset/cog-search-attach.png" alt-text="Screenshot of the Attach Foundry tab." border="true" lightbox="../../media/search-get-started-skillset/cog-search-attach.png":::
-
-   The sample data consists of 14 files, so the free allotment of 20 transactions on Foundry is sufficient.
-
-1. Expand the **Add enrichments** section.
-
-1. Select the **Enable OCR and merge all text into merged_content field** checkbox.
-
-1. Under **Text Cognitive Skills**, select the following checkboxes:
-
-    + **Extract people names**
-    
-    + **Extract organization names**
-    
-    + **Extract location names**
-
-1. Under **Image Cognitive Skills**, select the following checkboxes: 
-
-    + **Generate tags from images**
-    
-    + **Generate captions from images**
-
-   :::image type="content" source="../../media/search-get-started-skillset/skillset.png" alt-text="Screenshot of the skillset definition page." border="true" lightbox="../../media/search-get-started-skillset/skillset.png":::
-
-1. Select **Next: Customer target index**.
-
-### Step 3: Configure the index
-
-An index contains your searchable content. The wizard can usually create the schema by sampling the data source. In this step, you review the generated schema and potentially revise any settings. 
-
-For this quickstart, the wizard sets reasonable defaults:  
-
-+ Default fields are based on metadata properties of existing blobs and new fields for the enrichment output, such as `people`, `organizations`, and `locations`. Data types are inferred from metadata and by data sampling.
-
-+ Default document key is `metadata_storage_path`, which is selected because the field contains unique values.
-
-+ Default attributes are **Retrievable** and **Searchable**. **Retrievable** fields can be returned in results, while **Searchable** fields support full-text search. The wizard assumes you want these fields to be retrievable and searchable because you created them via a skillset. Select **Filterable** if you want to use fields in a filter expression.
-
-  :::image type="content" source="../../media/search-get-started-skillset/index-fields-old-wizard.png" alt-text="Screenshot of the index definition page." border="true" lightbox="../../media/search-get-started-skillset/index-fields-old-wizard.png":::
-
-  Marking a field as **Retrievable** doesn't mean that the field *must* appear in search results. You can control which fields are returned by using the `select` query parameter.
-      
-After you review the index schema, select **Next: Create an indexer**.
-
-### Step 4: Configure the indexer
-
-The indexer drives the indexing process and specifies the data source name, a target index, and frequency of execution. In this step, the wizard creates several objects, including an indexer that you can reset and run repeatedly.
-
-To configure the indexer:
-
-1. On the **Create an indexer** page, accept the default name.
-
-1. Select **Once** for the schedule.
-
-   :::image type="content" source="../../media/search-get-started-skillset/indexer-def.png" alt-text="Screenshot of the indexer definition page." border="true" lightbox="../../media/search-get-started-skillset/indexer-def.png":::
-
-1. Select **Submit** to simultaneously create and run the indexer.
-
-## Monitor status
-
-You can monitor the creation of the indexer in the Azure portal. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
-
-To monitor the progress of the indexer:
-
-1. From the left pane, select **Indexers**.
-
-1. Select your indexer from the list.
-
-1. Select **Success** (or **Failed**) to view execution details.
-
-   :::image type="content" source="../../media/search-get-started-skillset/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true" lightbox="../../media/search-get-started-skillset/indexer-notification.png":::
-
-  In this quickstart, there are a few warnings, including `Could not execute skill because one or more skill input was invalid.` This warning tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. It occurs because the upstream OCR skill didn't recognize any text in the image and couldn't provide a text input to the downstream Entity Recognition skill.
-
-  Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore.
-
-## Query in Search explorer
-
-To query your index:
-
-1. From the left pane, select **Indexes**.
-
-1. Select your index from the list. If the index has zero documents or storage, wait for the Azure portal to refresh.
-
-1. On the **Search explorer** tab, enter a search string, such as `satya nadella`.
-
-The search bar accepts keywords, quote-enclosed phrases, and operators. For example: `"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`
-
-Results are returned as verbose JSON, which can be hard to read, especially in large documents. Here are tips for searching in this tool:
-    
-   + Switch to the JSON view to specify parameters that shape results.
-   + Add `select` to limit the fields in results.
-   + Add `count` to show the number of matches.
-   + Use Ctrl-F to search within the JSON for specific properties or terms.
-
-:::image type="content" source="../../media/search-get-started-skillset/search-explorer-old-wizard.png" alt-text="Screenshot of the Search explorer page." border="true" lightbox="../../media/search-get-started-skillset/search-explorer-old-wizard.png":::
-
-Here's some JSON you can paste into the view:
-    
-```json
-{
-"search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
-"count": true,
-"select": "content, people"
-}
-```
-
-> [!TIP]
-> Query strings are case sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify the name and case.
\ No newline at end of file

Summary

{
    "modification_type": "breaking change",
    "modification_title": "クイックスタートガイドの削除: 古いウィザード"
}

Explanation

This change removes the file search-get-started-skillset-old-wizard.md entirely, deleting 196 lines of content. The information about the legacy Import data wizard in Azure AI Search is gone, along with the related steps and feature descriptions.

The removed document explained how to generate text-searchable content using OCR, image analysis, language detection, text merging, and entity recognition. It also noted that the old wizard is slated for deprecation and that most of its functionality is available in the new Import data wizard.

Users can therefore no longer rely on the old wizard documentation and need to follow the up-to-date material for the new wizard instead. This can have a noticeable impact on how people work with Azure AI Search, so it's worth tracking future updates to the documentation.

articles/search/media/search-howto-powerapps/1-10-1-parameter-metadata-search.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: パラメータメタデータ検索"
}

Explanation

This change deletes the image file 1-10-1-parameter-metadata-search.png, removing the visual material for the "parameter metadata search" step related to Azure AI Search.

The image most likely illustrated a specific operation or step in the Power Apps walkthrough, so its removal affects how easy the related documentation is to follow. Readers lose the visual context and clarity the screenshot provided and may need to fall back on other resources or instructions.

Because this can get in the way of explaining the affected feature, the related documents should be updated and new images added where needed. From a user-experience standpoint, this removal is a change that deserves attention.

articles/search/media/search-howto-powerapps/1-10-2-parameter-metadata-version.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: パラメータメタデータバージョン"
}

Explanation

This change deletes the image file 1-10-2-parameter-metadata-version.png, removing the visual material that supported the "parameter metadata version" explanation.

The removed screenshot was likely an important reference for users unsure about a particular step or setting, and it helped readers make sense of the surrounding document. Without it, users may need to fill the gap themselves or consult other material.

Since the change affects how the feature and its steps are explained, updating the related documents and considering a replacement image is recommended; missing visuals can degrade the user experience, so caution is warranted.

articles/search/media/search-howto-powerapps/1-10-4-parameter-metadata-select.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: パラメータメタデータ選択"
}

Explanation

This change deletes the image file 1-10-4-parameter-metadata-select.png, removing the visual material for the "parameter metadata select" step and its explanation.

The image likely served as an important source of information, showing the relevant options and settings so that readers could follow along visually. Without it, users may have to consult other resources or documents to understand the step in detail.

Given that impact, updating the related documents and adding a new image is worth considering; the loss of visual information can hurt the user experience and deserves particular attention.

articles/search/media/search-howto-powerapps/1-11-1-test-connector.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: テストコネクタ"
}

Explanation

This change deletes the image file 1-11-1-test-connector.png, removing the visual material for the "test connector" step in the Azure AI Search documentation.

The screenshot likely helped users understand how to set up and exercise the test connector, acting as a visual guide for that part of the walkthrough. Its removal makes the step harder to follow and may force users to supplement the instructions themselves or search for other documentation.

This suggests the related documents will need updates and possibly new images; missing visual information affects the user experience, so care is needed.

articles/search/media/search-howto-powerapps/1-11-2-test-connector.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: テストコネクタ(2)"
}

Explanation

This change deletes the image file 1-11-2-test-connector.png. With it goes the visual material covering how to use and configure the test connector.

The image very likely showed concrete usage and configuration steps, giving users the visual guidance they needed when working with the test connector. Its absence may push users to look for other material or verify their settings manually.

Accordingly, the related documentation should be updated and new images added where necessary; losing this visual guidance can hurt the user experience, so particular care is warranted.

articles/search/media/search-howto-powerapps/1-2-custom-connector.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: カスタムコネクタ"
}

Explanation

This change deletes the image file 1-2-custom-connector.png, removing the visual support needed for the "custom connector" explanation.

The screenshot was presumably an important visual resource for understanding how to set up and use the custom connector, especially where it showed hands-on steps or specific UI. Without it, readers may find the information harder to obtain and need to research elsewhere.

This points to a need to update the related content and add new imagery; the missing visual element affects the user experience and deserves attention.

articles/search/media/search-howto-powerapps/1-3-create-blank.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: ブランク作成"
}

Explanation

This change deletes the image file 1-3-create-blank.png, which is thought to have illustrated the "create blank" step in Azure AI Search's Power Apps walkthrough, so the related visual guidance is lost.

Visual support is especially important for this kind of step; without the image, users may find the procedure harder to follow and need other material to fill the gap.

Because this can hurt the user experience, a replacement visual or a review of the existing content is called for. Given how much visual elements matter here, the change should be handled with particular care.

articles/search/media/search-howto-powerapps/1-5-general-info.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: 一般情報"
}

Explanation

This change deletes the image file 1-5-general-info.png, which provided general information for the scenario, so an important visual resource is lost.

Through this image, users could get an at-a-glance overview of the relevant feature or information. With it removed, obtaining the necessary context becomes harder, and users may have to search other material, reducing efficiency.

If the image was tied to related content or steps, the impact on the user experience could be significant, so providing a replacement visual or strengthening the existing text content is recommended.

articles/search/media/search-howto-powerapps/1-6-authentication-type.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: 認証タイプ"
}

Explanation

This change deletes the image file 1-6-authentication-type.png, which appears to have explained the available authentication types for Azure AI Search, so the related explanation loses its visual support.

Authentication is a key element of security and access control, and users need detailed information to understand it well. Without the image, the topic becomes harder to grasp and may require extra effort to research.

Because a visual means of conveying important authentication information is lost, the remaining material should be reorganized or a new visual added, so that users can still understand the Azure AI Search authentication process correctly.

articles/search/media/search-howto-powerapps/1-7-new-action.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: 新しいアクション"
}

Explanation

This change deletes the image file 1-7-new-action.png, which provided a visual guide to creating a new action. Its removal takes away the visual reinforcement the explanation relied on and may hinder understanding.

The description of the new action matters for putting the feature to use, and losing the visual backdrop may leave users struggling with the procedure; changes like this can become a barrier to adopting new functionality.

Possible follow-ups include covering the step in text or adding other visual resources, so users can still understand how to perform the new action and make use of it.

articles/search/media/search-howto-powerapps/1-8-1-import-from-sample.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: サンプルからのインポート"
}

Explanation

This change deletes the image file 1-8-1-import-from-sample.png, which visually illustrated the steps for importing from a sample and was an important reference. With it gone, the import-from-sample flow loses its visual support, which can affect comprehension.

Importing is a key way for users to bring in data efficiently, and the lack of visuals can be confusing, especially for beginners and new users, making the feature feel harder to use.

To improve the situation, text-based instructions or other kinds of supporting material should be provided so users can still follow the import-from-sample steps correctly.

articles/search/media/search-howto-powerapps/1-8-2-import-from-sample.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: サンプルからのインポートのサポート"
}

Explanation

This change deletes the image file 1-8-2-import-from-sample.png, which visually explained the sample data import procedure used alongside Azure AI Search. Users who lean on visual information lose that support and may find the step harder to follow.

The import feature matters for making effective use of sample data; without the image, beginners and first-time users in particular may struggle to use it properly, which reduces convenience and the overall quality of the supporting documentation.

Going forward, the removed image's content should be covered in text or by other means, with appropriate guidance, so users can keep working through the feature smoothly.

articles/search/media/search-howto-powerapps/2-3-connect-connector.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: コネクタの接続"
}

Explanation

This change deletes the image file 2-3-connect-connector.png, which visually guided the steps for connecting the connector in Power Apps. Users lose that visual help for the connection procedure.

Connectors play a central role in linking the app to other applications and data sources. Without the image, beginners and visually oriented users may find the steps harder to follow, which can slow down app setup and data integration.

To address this, the steps should be described in more detail in text, or replacement images or videos provided, so users can continue to understand and perform the connector connection correctly.

articles/search/media/search-howto-powerapps/2-4-add-controls.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: コントロールの追加"
}

Explanation

This change deletes the image file 2-4-add-controls.png, which visually explained how to add controls in Power Apps. Users can no longer see the concrete steps and operations involved in adding controls.

Adding controls is an important step in building the app's user interface, and visual help is particularly valuable for new users. Without it, less technical users may find designing the app and building its features more difficult.

To mitigate this, the steps should be spelled out in text or new visual content created, so users can continue to add controls correctly and develop their apps smoothly.

articles/search/media/search-howto-powerapps/2-5-controls-layout.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: コントロールのレイアウト"
}

Explanation

This change deletes the image file 2-5-controls-layout.png, which played an important role in showing how to lay out controls in Power Apps. Without it, users lose concrete guidance on placing and configuring controls and may find app design harder.

Because control layout directly affects the app's user experience, visual information is especially useful here; beginners and users unfamiliar with layout design may struggle to follow the steps without it.

Detailed text-based tutorials or other visual content should be provided so users can understand control layout and move through the app design process smoothly; building replacement material also helps round out the overall experience.

articles/search/media/search-howto-powerapps/2-6-search-button-event.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: 検索ボタンのイベント"
}

Explanation

This change deletes the image file 2-6-search-button-event.png, which was an important reference for understanding how the search button's event handling works in Power Apps. Its removal leaves users without a guide for implementing the search button correctly.

The search button event is what lets users query the app's data efficiently, and without a visual explanation, implementing the behavior and troubleshooting it is likely to be harder.

Detailed text instructions should be provided, and new images or other visual content created, so users can understand the event handling, use the feature correctly, and get a better overall application experience.

articles/search/media/search-howto-powerapps/2-7-gallery-select-fields.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: ギャラリーでのフィールド選択"
}

Explanation

This change deletes the image file 2-7-gallery-select-fields.png, which showed how to select fields in a Power Apps gallery and was an important resource, especially for beginners. Without it, users lose an intuitive way to understand field selection in the gallery component.

Galleries are a key element for listing data in an app, and showing their setup visually is very helpful to less technical users; removing the image can make the app harder to work with.

To respond, text explanations or new visual material should be created so users can understand how to operate the gallery component, improving the experience and encouraging use of the app.

articles/search/media/search-howto-powerapps/2-8-2-final.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: 最終結果"
}

Explanation

This change deletes the image file 2-8-2-final.png, which showed the finished result in Power Apps and helped users understand what the completed app should look like. With it removed, users can no longer see an example of the final deliverable.

When the end result isn't shown, new users in particular may find the walkthrough harder to follow, and everyone loses a benchmark for judging whether their own work is on track.

Detailed descriptions or steps should be provided in place of the image, and new visual content created, so users can correctly understand the final outcome and the overall experience improves.

articles/search/media/search-howto-powerapps/2-8-3-final.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "画像の削除: 最終成果物3"
}

Explanation

This change deletes the image file 2-8-3-final.png, which showed one of the final deliverables in Power Apps and was an important resource for understanding the app's finished look and behavior. Users lose a way to confirm the final appearance and functionality of the app.

Showing the end result visually is an important aid, especially for beginners and new users; without it, they lack a reference for checking their work and may lose sight of where the project is heading.

Written descriptions or new visuals should be considered as a substitute, so users can better understand the finished application and the overall user experience improves.

articles/search/retrieval-augmented-generation-overview.md

Diff
@@ -5,9 +5,9 @@ description: Learn how generative AI and retrieval augmented generation (RAG) pa
 author: HeidiSteen
 ms.author: heidist
 manager: nitinme
-ms.date: 10/14/2025
+ms.date: 12/10/2025
 ms.service: azure-ai-search
-ms.topic: conceptual
+ms.topic: article
 ms.custom:
   - ignite-2023
   - ignite-2024
@@ -153,70 +153,6 @@ Here are some tips for maximizing relevance and recall:
 
 In comparison and benchmark testing, hybrid queries with text and vector fields, supplemented with semantic ranking, produce the most relevant results.
 
-<!-- ### Example code for a classic RAG workflow
-
-The following Python code demonstrates the essential components of a basic RAG workflow in Azure AI Search. You need to set up the clients, define a system prompt, and provide a query. The prompt tells the LLM to use just the results from the query, and how to return the results. For more steps based on this example, see this [RAG quickstart](search-get-started-rag.md).
-
-> [!NOTE]
-> For the Azure Government cloud, modify the API endpoint on the token provider to `"https://cognitiveservices.azure.us/.default"`.
-
-```python
-# Set up the query for generating responses
-from azure.identity import DefaultAzureCredential
-from azure.identity import get_bearer_token_provider
-from azure.search.documents import SearchClient
-from openai import AzureOpenAI
-
-credential = DefaultAzureCredential()
-token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
-openai_client = AzureOpenAI(
-    api_version="2024-06-01",
-    azure_endpoint=AZURE_OPENAI_ACCOUNT,
-    azure_ad_token_provider=token_provider
-)
-
-search_client = SearchClient(
-    endpoint=AZURE_SEARCH_SERVICE,
-    index_name="hotels-sample-index",
-    credential=credential
-)
-
-# This prompt provides instructions to the model. 
-# The prompt includes the query and the source, which are specified further down in the code.
-GROUNDED_PROMPT="""
-You are a friendly assistant that recommends hotels based on activities and amenities.
-Answer the query using only the sources provided below in a friendly and concise bulleted manner.
-Answer ONLY with the facts listed in the list of sources below.
-If there isn't enough information below, say you don't know.
-Do not generate answers that don't use the sources below.
-Query: {query}
-Sources:\n{sources}
-"""
-
-# The query is sent to the search engine, but it's also passed in the prompt
-query="Can you recommend a few hotels near the ocean with beach access and good views"
-
-# Retrieve the selected fields from the search index related to the question
-search_results = search_client.search(
-    search_text=query,
-    top=5,
-    select="Description,HotelName,Tags"
-)
-sources_formatted = "\n".join([f'{document["HotelName"]}:{document["Description"]}:{document["Tags"]}' for document in search_results])
-
-response = openai_client.chat.completions.create(
-    messages=[
-        {
-            "role": "user",
-            "content": GROUNDED_PROMPT.format(query=query, sources=sources_formatted)
-        }
-    ],
-    model="gpt-4.1-mini"
-)
-
-print(response.choices[0].message.content)
-``` -->
-
 ## Integration code and LLMs
 
 A RAG solution that includes Azure AI Search can leverage [built-in data chunking and vectorization capabilities](vector-search-integrated-vectorization.md), or you can build your own using platforms like Semantic Kernel, LangChain, or LlamaIndex.
@@ -227,13 +163,11 @@ We recommend the [Azure OpenAI demo](https://github.com/Azure-Samples/azure-sear
 
 There are many ways to get started, including code-first solutions and demos.
 
-For help with choosing between agentic retrieval and classic RAG, try a few quickstarts using your own data to understand the development effort and compare outcomes.
-
 ### [**Docs**](#tab/docs)
 
-+ [Try this agentic retrieval quickstart](search-get-started-rag.md) to walk through the new and recommended approach for RAG.
++ [Try this agentic retrieval quickstart](search-get-started-agentic-retrieval.md) to walk through the new and recommended approach for RAG.
 
-+ [Try this classic RAG quickstart](search-get-started-rag.md) for a demonstration of query integration with chat models over a search index.
++ [Try this tutorial](agentic-retrieval-how-to-create-pipeline.md) for a more comprehensive approach that includes an agent.
 
 + [Review indexing concepts and strategies](search-what-is-an-index.md) to determine how you want to ingest and refresh data. Decide whether to use vector search, keyword search, or hybrid search. The kind of content you need to search over, and the type of queries you want to run, determines index design.
 
@@ -246,7 +180,11 @@ For help with choosing between agentic retrieval and classic RAG, try a few quic
 
 Check out the following GitHub repositories for code, documentation, and video demonstrations where applicable.
 
-+ [RAG Time Journeys](https://github.com/microsoft/rag-time)
++ [RAG chat app with Azure OpenAI and Azure AI Search (Python)](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md)
+
++ [Classic RAG Time Journeys](https://github.com/microsoft/rag-time)
+
++ [azure-search-classic-rag](https://github.com/Azure-Samples/azure-search-classic-rag/blob/main/README.md)
 
 + [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples)
 
@@ -279,8 +217,4 @@ Check out the following GitHub repositories for code, documentation, and video d
 
 ## See also
 
-+ [RAG Experiment Accelerator](https://github.com/microsoft/rag-experiment-accelerator)
-
-+ [Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models](https://ai.meta.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/)
-
-+ [Azure Cognitive Search and LangChain: A Seamless Integration for Enhanced Vector Search Capabilities](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-and-langchain-a-seamless-integration-for/ba-p/3901448)
++ [Retrieval augmented generation and indexes (Foundry)](/azure/ai-foundry/concepts/retrieval-augmented-generation)

Summary

{
    "modification_type": "minor update",
    "modification_title": "文書の修正: RAGの概要"
}

Explanation

This change revises retrieval-augmented-generation-overview.md, updating the content and both removing and adding material. In total there are 86 changed lines: 76 removed and 10 added.

The main revisions are a date update, tightened content, and changed resource links. In particular, an old commented-out code sample is removed and links to newer, recommended resources are added, with the aim of providing a better learning experience.

Concretely, the embedded code walkthrough is dropped, new quickstart and tutorial links are added, and readers are steered toward the newer agentic retrieval approach. This gives users access to the latest techniques and resources, deepens their understanding of RAG, and improves the relevance of the document overall.

articles/search/samples-dotnet.md

Diff
@@ -10,7 +10,7 @@ ms.custom:
   - devx-track-dotnet
   - ignite-2023
 ms.topic: concept-article
-ms.date: 09/23/2025
+ms.date: 12/10/2025
 ---
 
 # C# samples for Azure AI Search
@@ -52,7 +52,6 @@ Code samples from the Azure AI Search team demonstrate features and workflows. T
 | [quickstart-rag](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/quickstart-rag) | [Quickstart: Generative search (RAG)](search-get-started-rag.md) | Use grounding data from Azure AI Search with a chat completion model from Azure OpenAI. |
 | [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/quickstart-semantic-search/) | [Quickstart: Semantic ranking](search-get-started-semantic.md) | Add semantic ranking to an index schema and run semantic queries. |
 | [quickstart-vector-search](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/quickstart-vector-search) | [Quickstart: Vector search](search-get-started-vector.md) | Index and query vector content. |
-| [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | Add basic search, pagination, and other server-side behaviors to an MVC web app, unlike the console applications used in most samples. |
 | [search-website](https://github.com/Azure-Samples/azure-search-static-web-app) | [Tutorial: Add search to web apps](tutorial-csharp-overview.md) | Build an end-to-end search app that uses the push API for bulk upload and a rich client for hosting the app and handling search requests. |
 | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](tutorial-skillset.md) | Create a skillset that iterates over Azure blobs to extract information and infer structure. |
 | [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md) | Merge content from two data sources into one index. |
@@ -61,14 +60,6 @@ Code samples from the Azure AI Search team demonstrate features and workflows. T
 | [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | [Tutorial: Index Azure SQL data](search-indexer-tutorial.md) | Configure an Azure SQL indexer with a schedule, field mappings, and parameters. |
 | [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [Configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Create objects that are encrypted with a customer-managed key. |
 
-## Accelerators
-
-An accelerator is an end-to-end solution that includes code and documentation you can adapt for your own implementation of a specific scenario.
-
-| Sample | Description |
-|--|--|
-| [search-qna-maker-accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator) | [Solution](https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/qna-with-azure-cognitive-search/2081381) that combines Azure AI Search and QnA Maker. See the live [demo site](https://aka.ms/qnaWithAzureSearchDemo). |
-
 ## Demos
 
 A demo repo provides proof-of-concept source code for examples or scenarios shown in demonstrations. Unlike accelerators, demo solutions aren't designed for adaptation.

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: .NET サンプルの修正"
}

Explanation

This change revises "samples-dotnet.md", with 1 addition and 10 deletions for a total of 11 changes.

The main updates are a date change from September 23, 2025 to December 10, 2025 and the removal of entries from the sample list. Specifically, the "create-mvc-app" sample row, which linked to the tutorial for adding search to an ASP.NET Core (MVC) app, was removed, along with the Accelerators section and its QnA Maker accelerator entry. The remaining samples and tutorials are now the emphasized content.

With the removed rows gone, newer resources and features take precedence, and users are expected to learn more modern, practical patterns for Azure AI Search through the remaining samples and tutorials. Removing stale information while keeping the document consistent keeps the focus on current technologies and frameworks, making the page a more reliable, practical reference for developers.

articles/search/samples-python.md

Diff
@@ -41,7 +41,6 @@ Code samples from the Azure AI Search team demonstrate features and workflows. T
 | [Quickstart-RAG](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-RAG) | [Quickstart: Generative search (RAG)](search-get-started-rag.md) | Use grounding data from Azure AI Search with a chat completion model from Azure OpenAI. |
 | [Quickstart-Semantic-Search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Semantic-Search) | [Quickstart: Semantic ranking](search-get-started-semantic.md) | Add semantic ranking to an index schema and run semantic queries. |
 | [Quickstart-Vector-Search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Vector-Search) | [Quickstart: Vector search](search-get-started-vector.md) | Index and query vector content. |
-| [Tutorial-RAG](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Tutorial-RAG) | [Build a RAG solution using Azure AI Search](tutorial-rag-build-solution.md) | Create an indexing pipeline that loads, chunks, embeds, and ingests searchable content for RAG. |
 | [agentic-retrieval-pipeline-example](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/agentic-retrieval-pipeline-example) | [Tutorial: Build an end-to-end agentic retrieval solution](agentic-retrieval-how-to-create-pipeline.md) | Unlike [Quickstart-Agentic-Retrieval](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Agentic-Retrieval), this sample incorporates Foundry Agent Service for request orchestration. |
 
 ## Accelerators
@@ -68,6 +67,7 @@ The following samples are also published by the Azure AI Search team but aren't
 
 | Sample | Description |
 |--|--|
+| [Classic RAG pattern](https://github.com/Azure-Samples/azure-search-classic-rag/blob/main/README.md) | Create an indexing pipeline that uses the [classic search engine](search-what-is-azure-search.md#what-is-classic-search) to load, chunk, embed, and ingest searchable content. |
 | [Quickstart-Document-Permissions-Pull-API](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Document-Permissions-Pull-API) | Using an indexer "pull API" approach, flow access control lists from a data source to search results and apply permission filters that restrict access to authorized content. |
 | [Quickstart-Document-Permissions-Push-API](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Document-Permissions-Push-API) | Using the push APIs for indexing a JSON payload, flow embedded permission metadata to indexed documents and search results that are filtered based on user access to authorized content. |
 | [azure-function-search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/azure-function-search) | Use an Azure function to send queries to a search service. You can substitute this Python version for the `api` code used in [Add search to web sites with .NET](tutorial-csharp-overview.md). |

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: Python サンプルの修正"
}

Explanation

This change revises "samples-python.md", with 1 line added and 1 line deleted, for 2 changes in total.

The main change removes the link to the RAG solution tutorial ("Tutorial-RAG") and adds a new "Classic RAG pattern" sample in its place. The new sample shows how to create an indexing pipeline that uses the classic search engine to load, chunk, embed, and ingest searchable content, as illustrated in the sketch below.

With this change, the document points to current guidance and best practices and helps developers learn a range of solutions, including the classic approach. Overall, the update enriches the Azure AI Search sample catalog and makes the resources easier to put to use.
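
To make the load/chunk/embed/ingest flow concrete, here is a minimal sketch in Python. It assumes the `azure-search-documents` package and an existing index with `id`, `chunk`, and `text_vector` fields; the `embed` helper and the index and field names are hypothetical placeholders, not code taken from the classic RAG sample.

```python
# Minimal sketch of a classic RAG indexing step: chunk text, embed it, and push
# documents to an Azure AI Search index. Field names and embed() are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

def chunk(text: str, size: int = 1000, overlap: int = 100):
    """Split text into fixed-size, overlapping chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(texts):
    """Placeholder: call your embedding model here (for example, text-embedding-3-large)."""
    raise NotImplementedError

search_client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="<chunk-index>",  # hypothetical index with id, chunk, text_vector fields
    credential=AzureKeyCredential("<admin-key>"),
)

source_text = open("document.txt", encoding="utf-8").read()
chunks = chunk(source_text)
vectors = embed(chunks)

docs = [
    {"id": str(i), "chunk": c, "text_vector": v}
    for i, (c, v) in enumerate(zip(chunks, vectors))
]
search_client.upload_documents(documents=docs)
```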

articles/search/search-blob-storage-integration.md

Diff
@@ -6,7 +6,7 @@ manager: nitinme
 author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
-ms.topic: conceptual
+ms.topic: article
 ms.date: 10/06/2025
 ms.update-cycle: 365-days
 ms.custom:
@@ -137,7 +137,7 @@ The output of an indexer is a search index, used for interactive exploration usi
 + [Full query syntax](query-lucene-syntax.md)
 + [Filter expression syntax](query-odata-filter-orderby-syntax.md)
 
-A more permanent solution is to gather query inputs and present the response as search results in a client application. The following C# tutorial explains how to build a search application: [Add search to an ASP.NET Core (MVC) application](tutorial-csharp-create-mvc-app.md).
+A more permanent solution is to gather query inputs and present the response as search results in a client application.
 
 ## Next steps
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: Blobストレージ統合に関する修正"
}

Explanation

This change revises "search-blob-storage-integration.md", with 2 additions and 2 deletions for a total of 4 changes.

The main updates are a metadata change and a content reduction. Specifically, the ms.topic field changed from "conceptual" to "article", clarifying how the document is classified. The sentence pointing to the C# tutorial for building a search application was also removed, leaving a shorter statement.

Readers are still reminded that a more permanent solution is to gather query inputs and present the response as search results in a client application. The change aims to keep the information about integrating Azure AI Search with Blob Storage clear and focused so that users can find what they need quickly.

articles/search/search-get-started-portal-image-search.md

Diff
@@ -52,20 +52,20 @@ For content embedding, choose one of the following methods:
 
 + **Multimodal embeddings:** Uses an embedding model to directly vectorize both text and images.
 
-The following table lists the supported providers and models for each method. Deployment instructions for the models are provided in a [later section](#deploy-models).
+The portal supports the following models for each method. Deployment instructions are provided in a [later section](#deploy-models).
 
 | Provider | Models for image verbalization | Models for multimodal embeddings |
 |--|--|--|
-| [Azure OpenAI in Foundry Models resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | LLMs:<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
-| [Foundry project](/azure/ai-foundry/how-to/create-projects) | LLMs:<br>phi-4<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
-| [Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) | LLMs:<br>phi-4<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large<br>Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> | Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> |
-| [Foundry resource](/azure/ai-services/multi-service-resource) <sup>4</sup> | Embedding model: [Azure Vision in Foundry Tools multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> | [Azure Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> |
+| [Azure OpenAI resource](/azure/ai-foundry/openai/how-to/create-resource?view=foundry-classic&pivots=web-portal&preserve-view=true) <sup>1, 2</sup> | LLMs:<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
+| [Microsoft Foundry project](/azure/ai-foundry/how-to/create-projects?view=foundry-classic&pivots=web-portal&preserve-view=true) | LLMs:<br>phi-4<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
+| [Microsoft Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects?view=foundry-classic&pivots=web-portal&preserve-view=true) | LLMs:<br>phi-4<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large<br>Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> | Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> |
+| [Microsoft Foundry resource](/azure/ai-services/multi-service-resource) <sup>4</sup> | Embedding model: [Azure Vision in Foundry Tools multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> | [Azure Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> |
 
 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
 
 <sup>2</sup> Azure OpenAI resources (with access to embedding models) that were created in the [Foundry portal](https://ai.azure.com/?cid=learnDocs) aren't supported. You must create an Azure OpenAI resource in the Azure portal.
 
-<sup>3</sup> To use this model in the wizard, you must [deploy it as a serverless API deployment](/azure/ai-foundry/how-to/deploy-models-serverless).
+<sup>3</sup> To use this model in the wizard, you must provision it as a serverless API deployment. You can use an [ARM/Bicep template](https://github.com/Azure-Samples/azure-ai-search-multimodal-sample/blob/42b4d07f2dd9f7720fdc0b0788bf107bdac5eecb/infra/ai/modules/project.bicep#L37C1-L38C1) for this task.
 
 <sup>4</sup> For billing purposes, you must [attach your Foundry resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: ポータル画像検索の使用方法に関する修正"
}

Explanation

This change revises "search-get-started-portal-image-search.md", with 6 lines added and 6 deleted.

The main changes tighten the wording around the models supported for each content-embedding method and rework the table. The generic "providers and models" phrasing now states what the portal supports, and the provider column uses explicit resource names ("Azure OpenAI resource", "Microsoft Foundry project", and so on) with updated links. The footnote on serverless API deployment was also revised to explain provisioning and to offer an ARM/Bicep template for the task.

These updates make it easier to understand how to use image search in the Azure portal and to reach the related resources, improving readability and smoothing the implementation process.

articles/search/search-get-started-portal-import-vectors.md

Diff
@@ -42,20 +42,20 @@ The wizard [supports a wide range of Azure data sources](search-import-data-port
 
 ### Supported embedding models
 
-For integrated vectorization, use one of the following embedding models. Deployment instructions are provided in a [later section](#prepare-embedding-model).
+The portal supports the following embedding models for integrated vectorization. Deployment instructions are provided in a [later section](#prepare-embedding-model).
 
 | Provider | Supported models |
 |--|--|
-| [Azure OpenAI in Foundry Models resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | For text:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
+| [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | For text:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
 | [Microsoft Foundry project](/azure/ai-foundry/how-to/create-projects) | For text:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
-| [Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) | For text:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large<br><br>For text and images:<br>Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> |
-| [Foundry resource](/azure/ai-services/multi-service-resource) <sup>4</sup> | For text and images: [Azure Vision in Foundry Tools multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup></li> |
+| [Microsoft Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) | For text:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large<br><br>For text and images:<br>Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> |
+| [Microsoft Foundry resource](/azure/ai-services/multi-service-resource) <sup>4</sup> | For text and images: [Azure Vision in Foundry Tools multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup></li> |
 
 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
 
 <sup>2</sup> Azure OpenAI resources (with access to embedding models) that were created in the [Foundry portal](https://ai.azure.com/?cid=learnDocs) aren't supported. You must create an Azure OpenAI resource in the Azure portal.
 
-<sup>3</sup> To use this model in the wizard, you must [deploy it as a serverless API deployment](/azure/ai-foundry/how-to/deploy-models-serverless).
+<sup>3</sup> To use this model in the wizard, you must provision it as a serverless API deployment. You can use an [ARM/Bicep template](https://github.com/Azure-Samples/azure-ai-search-multimodal-sample/blob/42b4d07f2dd9f7720fdc0b0788bf107bdac5eecb/infra/ai/modules/project.bicep#L37C1-L38C1) for this task.
 
 <sup>4</sup> For billing purposes, you must [attach your Foundry resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: ベクトルインポートに関する修正"
}

Explanation

This change revises "search-get-started-portal-import-vectors.md", with 5 lines added and 5 deleted.

The main change reworks the description of the embedding models for integrated vectorization and makes the provider names more explicit. For example, "Azure OpenAI in Foundry Models resource" is now "Azure OpenAI resource", and the hub-based project and multi-service rows use the full "Microsoft Foundry" naming, which improves clarity for readers. The per-provider model lists remain organized in the same table.

The deployment guidance was also strengthened: to use the Cohere models in the wizard, the footnote now explains that you provision them as a serverless API deployment and offers an ARM/Bicep template as an option, giving users a more flexible path.

With these updates, users can more quickly find the information they need when using the import and vectorization wizard, making the implementation process smoother.

articles/search/search-get-started-rag.md

Diff
@@ -53,8 +53,4 @@ In this quickstart, you send queries to a chat completion model for a conversati
 
 [!INCLUDE [TypeScript quickstart](includes/quickstarts/search-get-started-rag-typescript.md)]
 
-::: zone-end
-
-## Related content
-
-- [Tutorial: Build a RAG solution in Azure AI Search](tutorial-rag-build-solution.md)
\ No newline at end of file
+::: zone-end
\ No newline at end of file

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: RAGの導入に関する不要なコンテンツの削除"
}

Explanation

This change revises "search-get-started-rag.md", with 1 line added and 5 deleted, simplifying the content.

The main change removes the "Related content" section at the end of the document, which linked to the tutorial for building a RAG solution in Azure AI Search. The zone markup needed to keep the document structure intact remains, but the substantive content is reduced.

The result is a leaner, more focused quickstart that lets readers reach the information they need faster, and removing the extra material improves overall readability.

articles/search/search-get-started-skillset.md

Diff
@@ -10,19 +10,210 @@ ms.update-cycle: 180-days
 ms.custom:
   - ignite-2023
 ms.topic: quickstart
-ms.date: 09/16/2025
-zone_pivot_groups: azure-portal-wizards
+ms.date: 12/11/2025
 ---
 
 # Quickstart: Create a skillset in the Azure portal
 
-::: zone pivot="import-data-new"
-[!INCLUDE [Import data (new) instructions](includes/quickstarts/search-get-started-skillset-new-wizard.md)]
-::: zone-end
+> [!IMPORTANT]
+> The **Import data (new)** wizard now supports keyword search, which was previously only available in the **Import data** wizard. We recommend the new wizard for an improved search experience. For more information about how we're consolidating the wizards, see [Import data wizards in the Azure portal](search-import-data-portal.md).
 
-::: zone pivot="import-data"
-[!INCLUDE [Import data instructions](includes/quickstarts/search-get-started-skillset-old-wizard.md)]
-::: zone-end
+In this quickstart, you learn how a skillset in Azure AI Search adds optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition to generate text-searchable content in an index.
+
+You can run the **Import data (new)** wizard in the Azure portal to apply skills that create and transform textual content during indexing. The input is your raw data, usually blobs in Azure Storage. The output is a searchable index containing AI-generated image text, captions, and entities. You can then query generated content in the Azure portal using [**Search explorer**](search-explorer.md).
+
+Before you run the wizard, you create a few resources and upload sample files.
+
+## Prerequisites
+
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn).
+
++ An Azure AI Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
+
++ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage on a standard performance (general-purpose v2) account. To avoid bandwidth charges, use the same region as Azure AI Search.
+
+> [!NOTE]
+> This quickstart uses [Foundry Tools](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is small, Foundry Tools is tapped behind the scenes for free processing up to 20 transactions. Therefore, you don't need to create a Microsoft Foundry resource.
+
+## Prepare sample data
+
+In this section, you create an Azure Storage container to store sample data consisting of various file types, including images and application files that aren't full-text searchable in their native formats.
+
+To prepare the sample data for this quickstart:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure Storage account.
+
+1. From the left pane, select **Data storage** > **Containers**.
+
+1. Create a container, and then upload the [sample data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ai-enrichment-mixed-media) to the container.
+
+## Run the wizard
+
+To run the wizard:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
+
+1. On the **Overview** page, select **Import data (new)**.
+
+   :::image type="content" source="media/search-import-data-portal/import-data-new-button.png" alt-text="Screenshot that shows how to open the new import wizard in the Azure portal.":::
+
+1. Select **Azure Blob Storage** for the data source.
+
+   :::image type="content" source="media/search-get-started-skillset/choose-data-source.png" alt-text="Screenshot of the Azure Blob Storage data source option in the Azure portal." border="true" lightbox="media/search-get-started-skillset/choose-data-source.png":::
+
+1. Select **Keyword search**.
+
+   :::image type="content" source="media/search-get-started-portal/keyword-search-tile.png" alt-text="Screenshot of the keyword search tile in the Azure portal." border="true" lightbox="media/search-get-started-portal/keyword-search-tile.png":::
+
+### Step 1: Create a data source
+
+Azure AI Search requires a connection to a data source for content ingestion and indexing. In this case, the data source is your Azure Storage account.
+
+To create the data source:
+
+1. On the **Connect to your data** page, select your Azure subscription.
+
+1. Select your storage account, and then select the container you created.
+
+   :::image type="content" source="media/search-get-started-skillset/connect-to-your-data.png" alt-text="Screenshot of the Connect to your data page in the Azure portal." border="true" lightbox="media/search-get-started-skillset/connect-to-your-data.png":::
+
+1. Select **Next**.
+
+If you get `Error detecting index schema from data source`, the indexer that powers the wizard can't connect to your data source. The data source most likely has security protections. Try the following solutions, and then rerun the wizard.
+
+| Security feature | Solution |
+|--------------------|----------|
+| Resource requires Azure roles, or its access keys are disabled. | [Connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md) or [connect using a managed identity](search-how-to-managed-identities.md). |
+| Resource is behind an IP firewall. | [Create an inbound rule for Azure AI Search and the Azure portal](search-indexer-howto-access-ip-restricted.md). |
+| Resource requires a private endpoint connection. | [Connect over a private endpoint](search-indexer-howto-access-private.md). |
+
+### Step 2: Add cognitive skills
+
+The next step is to configure AI enrichment to invoke OCR, image analysis, and entity recognition.
+
+OCR and image analysis are available for blobs in Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 and for image content in Microsoft OneLake. Images can be standalone files or embedded images in a PDF or other files.
+
+To add the skills:
+
+1. Select **Extract entities**, and then select the gear icon.
+
+1. Select and save the following checkboxes:
+
+   + **Persons**
+
+   + **Locations**
+
+   + **Organizations**
+
+   :::image type="content" source="media/search-get-started-skillset/extract-entities.png" alt-text="Screenshot of the Extract entities options in the Azure portal." lightbox="media/search-get-started-skillset/extract-entities.png":::
+
+1. Select **Extract text from images**, and then select the gear icon.
+
+1. Select and save the following checkboxes:
+
+   + **Generate tags**
+
+   + **Categorize content**
+
+   :::image type="content" source="media/search-get-started-skillset/extract-text.png" alt-text="Screenshot of the Extract text from images options in the Azure portal." lightbox="media/search-get-started-skillset/extract-text.png":::
+
+1. Leave the **Use a free AI service (limited enrichments)** checkbox selected.
+
+   The sample data consists of 14 files, so the free allotment of 20 transactions on Foundry Tools is sufficient.
+
+1. Select **Next**.
+
+### Step 3: Configure the index
+
+An index contains your searchable content. The wizard can usually create the schema by sampling the data source. In this step, you review the generated schema and potentially revise any settings.
+
+For this quickstart, the wizard sets reasonable defaults:  
+
++ Default fields are based on metadata properties of existing blobs and new fields for the enrichment output, such as `persons`, `locations`, and `organizations`. Data types are inferred from metadata and by data sampling.
+
+  :::image type="content" source="media/search-get-started-skillset/index-fields-new-wizard.png" alt-text="Screenshot of the index definition page." border="true" lightbox="media/search-get-started-skillset/index-fields-new-wizard.png":::
+
++ Default document key is `metadata_storage_path`, which is selected because the field contains unique values.
+
++ Default field attributes are based on the skills you selected. For example, fields created by the Entity Recognition skill (`persons`, `locations`, and `organizations`) are **Retrievable**, **Filterable**, **Facetable**, and **Searchable**. To view and change these attributes, select a field, and then select **Configure field**.
+
+  **Retrievable** fields can be returned in results, while **Searchable** fields support full-text search. Use **Filterable** if you want to use fields in a filter expression.
+  
+  Marking a field as **Retrievable** doesn't mean that the field *must* appear in search results. You can control which fields are returned by using the `select` query parameter.
+
+After you review the index schema, select **Next**.
+
+### Step 4: Skip advanced settings
+
+The wizard offers advanced settings for semantic ranking and index scheduling, which are beyond the scope of this quickstart. Skip this step by selecting **Next**.
+
+### Step 5: Review and create objects
+
+The last step is to review your configuration and create the index, indexer, and data source on your search service. The indexer automates the process of extracting content from your data source, loading the index, and driving skillset execution.
+
+To review and create the objects:
+
+1. Accept the default **Objects name prefix**.
+
+1. Review the object configurations.
+
+   :::image type="content" source="media/search-get-started-skillset/review-and-create.png" alt-text="Screenshot of the object configuration page in the Azure portal." border="true" lightbox="media/search-get-started-skillset/review-and-create.png":::
+
+   AI enrichment, semantic ranker, and indexer scheduling are either disabled or set to their default values because you skipped their wizard steps.
+
+1. Select **Create** to simultaneously create the objects and run the indexer.
+
+## Monitor status
+
+You can monitor the creation of the indexer in the Azure portal. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
+
+To monitor the progress of the indexer:
+
+1. From the left pane, select **Indexers**.
+
+1. Select your indexer from the list.
+
+1. Select **Success** (or **Failed**) to view execution details.
+
+   :::image type="content" source="media/search-get-started-skillset/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true" lightbox="media/search-get-started-skillset/indexer-notification.png":::
+
+  In this quickstart, there are a few warnings, including `Could not execute skill because one or more skill input was invalid.` This warning tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. It occurs because the upstream OCR skill didn't recognize any text in the image and couldn't provide a text input to the downstream Entity Recognition skill.
+
+  Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore.
+
+## Query in Search explorer
+
+To query your index:
+
+1. From the left pane, select **Indexes**.
+
+1. Select your index from the list. If the index has zero documents or storage, wait for the Azure portal to refresh.
+
+1. On the **Search explorer** tab, enter a search string, such as `satya nadella`.
+
+The search bar accepts keywords, quote-enclosed phrases, and operators. For example: `"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`
+
+Results are returned as verbose JSON, which can be hard to read, especially in large documents. Here are tips for searching in this tool:
+    
+   + Switch to the JSON view to specify parameters that shape results.
+   + Add `select` to limit the fields in results.
+   + Add `count` to show the number of matches.
+   + Use Ctrl-F to search within the JSON for specific properties or terms.
+
+:::image type="content" source="media/search-get-started-skillset/search-explorer-new-wizard.png" alt-text="Screenshot of the Search explorer page." border="true" lightbox="media/search-get-started-skillset/search-explorer-new-wizard.png":::
+
+Here's some JSON you can paste into the view:
+    
+```json
+{
+"search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
+"count": true,
+"select": "merged_content, persons"
+}
+```
+
+> [!TIP]
+> Query strings are case sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify the name and case.
 
 ## Takeaways
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: スキルセット作成のクイックスタート手順の強化"
}

Explanation

This change substantially reworks "search-get-started-skillset.md", with 199 lines added and 8 removed, strengthening the quickstart for creating a skillset in the Azure portal.

The main change is a new notice recommending the **Import data (new)** wizard, which now supports keyword search and offers an improved experience over the older wizard. The purpose and flow of a skillset are spelled out in detail, highlighting OCR, image analysis, language detection, text merging, and entity recognition.

Each step now lists concrete prerequisites and sample-data preparation instructions, making the portal workflow easier to follow, and the settings and recommendations for each wizard step are described so that users can build skillsets effectively.

Overall, the update fleshes out the user guide for Azure AI Search, promotes a smoother workflow, and makes the feature easier to apply in practice.
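
For readers who prefer code to Search explorer, the JSON query shown in the new quickstart maps to a call like the following. This is a sketch, assuming the `azure-search-documents` package and the wizard-generated index name (shown as a placeholder); it is not part of the quickstart itself.

```python
# Sketch: the Search explorer JSON query from the quickstart, expressed with the
# Python SDK. Index name and credentials are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="<wizard-generated-index>",
    credential=AzureKeyCredential("<query-key>"),
)

results = client.search(
    search_text='"Satya Nadella" +"Bill Gates" +"Steve Ballmer"',
    select=["merged_content", "persons"],  # same as "select" in the JSON view
    include_total_count=True,              # same as "count": true
)

print("Matches:", results.get_count())
for doc in results:
    print(doc["persons"])
```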

articles/search/search-how-to-define-index-projections.md

Diff
@@ -16,7 +16,7 @@ ms.update-cycle: 180-days
 
 For indexes containing chunked documents, an *index projection* specifies how parent-child content is mapped to fields in a search index for one-to-many indexing. Through an index projection, you can send content to:
 
-- A single index, where the parent fields repeat for each chunk, but the grain of the index is at the chunk level. The [RAG tutorial](tutorial-rag-build-solution-index-schema.md) is an example of this approach.
+- A single index, where the parent fields repeat for each chunk, but the grain of the index is at the chunk level. The [classic RAG example](https://github.com/Azure-Samples/azure-search-classic-rag/blob/main/README.md) shows this approach.
 
 - Two or more indexes, where the parent index has fields related to the parent document, and the child index is organized around chunks. The child index is the primary search corpus, but the parent index could be used for [lookup queries](/rest/api/searchservice/documents/get) when you want to retrieve the parent fields of a particular chunk, or for independent queries.
 
@@ -92,7 +92,7 @@ You can use the Azure portal, REST APIs, or an Azure SDK to [create an index](se
 
 #### [**Python**](#tab/python-create-index)
 
-This example is similar to the [RAG tutorial](tutorial-rag-build-solution-index-schema.md). It's an index schema designed for chunked content extracted from a parent document and combines all parent-child fields in the same index.
+This example is similar to the [classic RAG example](https://github.com/Azure-Samples/azure-search-classic-rag/blob/main/README.md). It's an index schema designed for chunked content extracted from a parent document and combines all parent-child fields in the same index.
 
 ```python
  # Create a search index  
@@ -406,7 +406,7 @@ The indexer definition specifies the components of the pipeline. In the indexer
 
 ## Next step
 
-Data chunking and one-to-many indexing are part of the RAG pattern in Azure AI Search. Continue on to the following tutorial and code sample to learn more about it.
+Data chunking and one-to-many indexing are part of the classic RAG pattern in Azure AI Search. Continue on to the following tutorial and code sample to learn more about it.
 
 > [!div class="nextstepaction"]
-> [How to build a RAG solution using Azure AI Search](tutorial-rag-build-solution.md)
+> [How to build a classic RAG solution using Azure AI Search](https://github.com/Azure-Samples/azure-search-classic-rag/blob/main/README.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: インデックスプロジェクションのリファレンス変更"
}

Explanation

This change revises "search-how-to-define-index-projections.md", with 4 additions and 4 deletions in total. The update improves the original references so that the cited example is clearer.

The revision focuses on the index projection guidance in three places:

  1. Classic RAG example link: references that previously pointed to the "RAG tutorial" now point to the "classic RAG example" repository, updating where readers go for details.

  2. Index schema description: the framing of the Python example's index schema is adjusted to the classic RAG context, keeping the information consistent.

  3. Next step: the closing link that ties data chunking and one-to-many indexing together now points to the classic RAG solution instead of the retired tutorial.

These fixes make it easier for readers to reach current reference material when learning the RAG approach, and the more specific references aid understanding.
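
To show the one-to-many mapping that index projections describe, here is a sketch of the skillset fragment as a Python dictionary (the same shape you would send as JSON in a REST call). The index, field, and source names are illustrative and are not taken from the classic RAG example; check the shape against the API version you use.

```python
# Sketch of an index projection that writes each chunk of a parent document to a
# chunk-level index. Names are illustrative placeholders.
index_projections = {
    "selectors": [
        {
            "targetIndexName": "chunk-index",      # index whose grain is the chunk
            "parentKeyFieldName": "parent_id",     # field that holds the parent document key
            "sourceContext": "/document/pages/*",  # one projection per chunk
            "mappings": [
                {"name": "chunk", "source": "/document/pages/*"},
                {"name": "title", "source": "/document/title"},
            ],
        }
    ],
    "parameters": {"projectionMode": "skipIndexingParentDocuments"},
}
```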

articles/search/search-how-to-integrated-vectorization.md

Diff
@@ -46,9 +46,9 @@ For integrated vectorization, use one of the following embedding models on an Az
 
 | Provider | Supported models |
 |--|--|
-| [Azure OpenAI in Foundry Models](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
+| [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
 | [Microsoft Foundry resource](/azure/ai-services/multi-service-resource) <sup>3</sup> | For text and images: [Azure Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>4</sup></li> |
-<!--| [Foundry model catalog](/azure/ai-foundry/what-is-azure-ai-foundry) | For text:<br>Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br><br>For images:<br>Facebook-DinoV2-Image-Embeddings-ViT-Base<br>Facebook-DinoV2-Image-Embeddings-ViT-Giant<br><br>For text and images:<br>Cohere-embed-v4 |-->
+<!--| [Foundry model catalog](/azure/ai-foundry/what-is-azure-ai-foundry) | For text:<br>Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br><br><br>For text and images:<br>Cohere-embed-v4 |-->
 
 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: 統合型ベクトル化に関するモデルの名称修正"
}

Explanation

This change revises "search-how-to-integrated-vectorization.md", with 2 additions and 2 deletions. The main change is a small correction to the naming of the Azure OpenAI resource row.

Specifically, "Azure OpenAI in Foundry Models" is now "Azure OpenAI resource", bringing the document in line with current naming. This matters for giving users accurate, consistent information.

The commented-out Foundry model catalog row was also adjusted, trimming the image-only embedding entries so that the text and text-and-image listings are tidier. This helps users understand which models are available.

The update is aimed at giving users of the vectorization feature clearer, more accurate information.
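
Since the article covers integrated vectorization, a minimal sketch of how one of the listed Azure OpenAI embedding models is wired in as a vectorizer may help orient readers. The fragment below is written as a Python dict mirroring the REST index definition; the resource names are placeholders, and the property names should be verified against the API version you target.

```python
# Sketch: the vectorSearch.vectorizers fragment of an index definition that points
# at an Azure OpenAI embedding deployment. Resource names are placeholders.
vector_search_fragment = {
    "vectorizers": [
        {
            "name": "openai-vectorizer",
            "kind": "azureOpenAI",
            "azureOpenAIParameters": {
                "resourceUri": "https://<my-account>.openai.azure.com",
                "deploymentId": "<embedding-deployment>",
                "modelName": "text-embedding-3-large",
                "apiKey": "<api-key>",  # or omit and use a managed identity
            },
        }
    ]
}
```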

articles/search/search-howto-complex-data-types.md

Diff
@@ -251,7 +251,7 @@ response = openai_client.chat.completions.create(
 print(response.choices[0].message.content)
 ```
 
-For the end-to-end example, see [Quickstart: Generative search (RAG) with grounding data from Azure AI Search](search-get-started-rag.md).
+For the end-to-end example, see [classic RAG in Azure AI Search](https://github.com/Azure-Samples/azure-search-classic-rag/blob/main/README.md).
 
 ## Select complex fields
 
@@ -365,7 +365,7 @@ var combinedCountryCategoryFilter = "(" + countryFilter + " and " + catgFilter +
 
 ```
 
-If you implement the workaround, be sure to test extentively.
+If you implement the workaround, be sure to test extensively.
 
 ## Next steps
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: リファレンスリンクとスペル修正"
}

Explanation

This change revises "search-howto-complex-data-types.md", with 2 additions and 2 deletions. The goal is to fix a reference link and a spelling error.

The specific changes are:

  1. Reference link update: the link previously titled "Quickstart: Generative search (RAG) with grounding data from Azure AI Search" now points to "classic RAG in Azure AI Search", directing readers to the appropriate end-to-end example.

  2. Spelling fix: "extentively" is corrected to "extensively", improving the accuracy of the text and the credibility of the document.

Together, these changes keep the document accurate and current and improve the reader experience.
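
As a companion to the filter workaround shown in the diff, here is a sketch of composing and running a comparable filter from Python against the hotels-sample index. The field names come from that sample, but the specific filter values and the combination logic are illustrative rather than a verbatim port of the C# snippet.

```python
# Sketch: combine two OData filters over complex fields and run the query.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<query-key>"),
)

country_filter = "Address/Country eq 'USA'"                # complex field
category_filter = "Rooms/any(r: r/Type eq 'Deluxe Room')"  # complex collection
combined = f"({country_filter} and {category_filter})"

for doc in client.search(
    search_text="*",
    filter=combined,
    select=["HotelName", "Address/City"],
):
    print(doc["HotelName"])
```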

articles/search/search-howto-powerapps.md

Diff
@@ -1,269 +0,0 @@
----
-title: 'Tutorial: Query from Power Apps'
-titleSuffix: Azure AI Search
-description: Step-by-step guidance on how to build a Power App that connects to an Azure AI Search index, sends queries, and renders results.
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.topic: tutorial
-ms.date: 04/14/2025
-ms.update-cycle: 365-days
-ms.custom:
-  - ignite-2023
-  - sfi-image-nochange
----
-
-# Tutorial: Query an Azure AI Search index from Power Apps
-
-Use the rapid application development environment of Power Apps to create a custom app for your searchable content in Azure AI Search.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Connect to Azure AI Search
-> * Set up a query request
-> * Visualize results in a canvas app
-
-If you don't have an Azure subscription, open a [free account](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn) before you begin.
-
-## Prerequisites
-
-* [Power Apps account](https://make.powerapps.com) with a [premium license](/power-platform/admin/pricing-billing-skus#licenses), such as a Power Apps per apps plan or a Power Apps per user plan.  
-
-* [Hotels-sample index](search-get-started-portal.md) hosted on your search service.
-
-* [Query API key](search-security-api-keys.md#find-existing-keys).
-
-## 1 - Create a custom connector
-
-A connector in Power Apps is a data source connection. In this step, create a custom connector to connect to a search index in the cloud.
-
-1. [Sign in](https://make.powerapps.com) to Power Apps.
-
-1. On the left, select **Custom Connectors**.
-
-    :::image type="content" source="./media/search-howto-powerapps/1-2-custom-connector.png" alt-text="Custom connector menu" border="true":::
-
-1. Select  **+ New custom connector**, and then select **Create from blank**.
-
-    :::image type="content" source="./media/search-howto-powerapps/1-3-create-blank.png" alt-text="Create from blank menu" border="true":::
-
-1. Give your custom connector a name (for example, *AzureSearchQuery*), and then select **Continue**.
-
-1. Enter information in the General Page:
-
-   * Icon background color (for instance, #007ee5)
-   * Description (for instance, "A connector to Azure AI Search")
-   * In the Host, enter your search service URL (such as `<yourservicename>.search.windows.net`)
-   * For Base URL, enter "/"
-
-    :::image type="content" source="./media/search-howto-powerapps/1-5-general-info.png" alt-text="General information dialogue" border="true":::
-
-1. In the Security Page, set *API Key* as the **Authentication Type**, set both the parameter label and parameter name to *api-key*. For **Parameter location**, select *Header* as shown in the following screenshot.
-
-    :::image type="content" source="./media/search-howto-powerapps/1-6-authentication-type.png" alt-text="Authentication type option" border="true":::
-
-1. In the Definitions Page, select **+ New Action** to create an action that queries the index. Enter the value "Query" for the summary and the name of the operation ID. Enter a description like *"Queries the search index"*.
-
-    :::image type="content" source="./media/search-howto-powerapps/1-7-new-action.png" alt-text="New action options" border="true":::
-
-1. Scroll down. In Requests, select **+ Import from sample** button to configure a query request to your search service:
-
-   * Select the verb `GET`
-
-   * For the URL, enter a sample query for your search index (`search=*` returns all documents, `$select=` lets you choose fields). The API version is required. Fully specified, a URL might look like the following example. Notice that the `https://` prefix is omitted.
-
-     ```http
-     mydemo.search.windows.net/indexes/hotels-sample-index/docs?search=*&$select=HotelName,Description,Address/City&api-version=2025-09-01
-     ```
-
-   * For Headers, type `Content-Type application/json`.
-
-     **Power Apps** uses the syntax in the URL to extract parameters from the query: search, select, and api-version parameters become configurable as you progress through the wizard.
-
-       :::image type="content" source="./media/search-howto-powerapps/1-8-1-import-from-sample.png" alt-text="Import from sample" border="true":::
-
-1. Select **Import** to autofill the Request. Complete setting the parameter metadata by clicking the **...** symbol next to each of the parameters. Select **Back** to return to the Request page after each parameter update.
-
-   :::image type="content" source="./media/search-howto-powerapps/1-8-2-import-from-sample.png" alt-text="Import from sample dialogue" border="true":::
-
-1. For *search*: Set `*` as the **default value**, set **required** as *False* and set **visibility** to *none*. 
-
-    :::image type="content" source="./media/search-howto-powerapps/1-10-1-parameter-metadata-search.png" alt-text="Search parameter metadata" border="true":::
-
-1. For *select*: Set `HotelName,Description,Address/City` as the **default value**, set **required** to *False*, and set **visibility** to *none*.  
-
-    :::image type="content" source="./media/search-howto-powerapps/1-10-4-parameter-metadata-select.png" alt-text="Select parameter metadata" border="true":::
-
-1. For *api-version*: Set `2025-09-01` as the **default value**, set **required** to *True*, and set **visibility** as *internal*.  
-
-    :::image type="content" source="./media/search-howto-powerapps/1-10-2-parameter-metadata-version.png" alt-text="Version parameter metadata" border="true":::
-
-1. For *Content-Type*: Set to `application/json`.
-
-1. After making these changes, toggle to the **Swagger Editor** view. In the parameters section you should see the following configuration:
-
-    ```JSON
-    parameters:
-      - {name: search, in: query, required: false, type: string, default: '*'}
-      - {name: $select, in: query, required: false, type: string, default: 'HotelName,Description,Address/City'}
-      - {name: api-version, in: query, required: true, type: string, default: '2025-09-01',
-        x-ms-visibility: internal}
-      - {name: Content-Type, in: header, required: false, type: string}
-    ```
-
-1. Switch back to the wizard and return to the **3. Definition** step. Scroll down to the Response section. Select **"Add default response"**. This step is critical because it helps Power Apps understand the schema of the response. 
-
-1. Paste a sample response. An easy way to capture a sample response is through Search Explorer in the Azure portal. In Search Explorer, you should enter the same query as you did for the request, but add **$top=2** to constrain results to just two documents: `search=*&$select=HotelName,Description,Address/City&$top=2`. 
-
-   Power Apps only needs a few results to detect the schema. You can copy the following response into the wizard now, assuming you're using the hotels-sample-index.
-
-    ```JSON
-    {
-        "@odata.context": "https://mydemo.search.windows.net/indexes('hotels-sample-index')/$metadata#docs(*)",
-        "value": [
-            {
-                "@search.score": 1,
-                "HotelName": "Happy Lake Resort & Restaurant",
-                "Description": "The largest year-round resort in the area offering more of everything for your vacation – at the best value!  What can you enjoy while at the resort, aside from the mile-long sandy beaches of the lake? Check out our activities sure to excite both young and young-at-heart guests. We have it all, including being named “Property of the Year” and a “Top Ten Resort” by top publications.",
-                "Address": {
-                    "City": "Seattle"
-                }
-            },
-            {
-                "@search.score": 1,
-                "HotelName": "Grand Gaming Resort",
-                "Description": "The Best Gaming Resort in the area.  With elegant rooms & suites, pool, cabanas, spa, brewery & world-class gaming.  This is the best place to play, stay & dine.",
-                "Address": {
-                    "City": "Albuquerque"
-                }
-            }
-        ]
-    }
-    ```
-
-    > [!TIP] 
-    > There's a character limit to the JSON response you can enter, so you might want to simplify the JSON before pasting it. The schema and format of the response is more important than the values themselves. For example, the Description field could be simplified to just the first sentence.
-
-1. Select **Import** to add the default response.
-
-1. Select **Create connector** on the top right to save the definition.
-
-1. Select **Close** to close the connector.
-
-## 2 - Test the connection
-
-When the connector is first created, you need to reopen it from the Custom Connectors list in order to test it. Later, if you make more updates, you can test from within the wizard.
-
-Provide a [query API key](search-security-api-keys.md#find-existing-keys) for this task. Each time a connection is created, whether for a test run or inclusion in an app, the connector needs the query API key used for connecting to Azure AI Search.
-
-1. On the far left, select **Custom Connectors**.
-
-1. Find your connector in the list (in this tutorial, is "AzureSearchQuery").
-
-1. Select the connector, expand the actions list, and select **View Properties**.
-
-    :::image type="content" source="./media/search-howto-powerapps/1-11-1-test-connector.png" alt-text="View Properties" border="true":::
-
-1. In the drop-down list of operations, select **6. Test**.
-
-1. In **Test Operation**, select **+ New Connection**.
-
-1. Enter a query API key. This is an Azure AI Search query for read-only access to an index. You can [find the key](search-security-api-keys.md#find-existing-keys) in the Azure portal. 
-
-1. In Operations, select the **Test operation** button. If you're successful you should see a 200 status, and in the body of the response you should see JSON that describes the search results.
-
-    :::image type="content" source="./media/search-howto-powerapps/1-11-2-test-connector.png" alt-text="JSON response" border="true":::
-
-If the test fails, recheck the inputs. In particular, revisit the sample response and make sure it was created properly. The connector definition should show the expected items for the response.
-
-If you're blocked by a Data Loss Prevention (DLP) policy error, review the error message for guidance. You might be able to modify the policy or make your connector nonblockable.
-
-## 3 - Visualize results
-
-In this step, create a Power App with a search box, a search button, and a display area for the results. The Power App will connect to the recently created custom connector to get the data from Azure Search.
-
-1. On the left, expand **Apps** > **New app** > **Start with a page design**.
-
-1. Select a **Blank canvas** with the **Phone Layout**. Give the app a name, such as "Hotel Finder". Select **Create**. The **Power Apps Studio** appears.
-
-1. In the studio, select the **Data** tab, select **Add data**, and then find the new Connector you have just created. In this tutorial, it's called *AzureSearchQuery*. Select **Add a connection**.
-
-   Enter the query API key.
-
-    :::image type="content" source="./media/search-howto-powerapps/2-3-connect-connector.png" alt-text="connect connector" border="true":::
-
-    Now *AzureSearchQuery* is a data source that is available to be used from your application.
-
-1. On the **Insert tab**, add a few controls to the canvas.
-
-    :::image type="content" source="./media/search-howto-powerapps/2-4-add-controls.png" alt-text="Insert controls" border="true":::
-
-1. Insert the following elements:
-
-   * A Text Label with the value "Query:"
-   * A Text Input element (call it *txtQuery*, default value: "*")
-   * A button with the text "Search" 
-   * A Vertical Gallery called (call it *galleryResults*)
-
-    The canvas should look something like this:
-
-    :::image type="content" source="./media/search-howto-powerapps/2-5-controls-layout.png" alt-text="Controls layout" border="true":::
-
-1. To make the **Search button** issue a query, paste the following action into **OnSelect**:
-
-    ```
-    If(!IsBlank(txtQuery.Text),
-        ClearCollect(azResult, AzureSearchQuery.Query({search: txtQuery.Text}).value))
-    ```
-
-   The following screenshot shows the formula bar for the **OnSelect** action.
-
-    :::image type="content" source="./media/search-howto-powerapps/2-6-search-button-event.png" alt-text="Button OnSelect" border="true":::
-
-   This action causes the button to update a new collection called *azResult* with the result of the search query, using the text in the *txtQuery* text box as the query term.
-
-   > [!NOTE]
-   > Try this if you get a formula syntax error "The function 'ClearCollect' has some invalid functions":
-   > 
-   > * First, make sure the connector reference is correct. Clear the connector name and begin typing the name of your connector. Intellisense should suggest the right connector and verb.
-   > 
-   > * If the error persists, delete and recreate the connector. If there are multiple instances of a connector, the app might be using the wrong one.
-   > 
-
-1. Link the Vertical Gallery control to the *azResult* collection that was created when you completed the previous step. 
-
-   Select the gallery control, and perform the following actions in the properties pane.
-
-   * Set **DataSource** to *azResult*.
-   * Select a **Layout** that works for you based on the type of data in your index. In this case, we used the *Title, subtitle and body* layout.
-   * **Edit Fields**, and select the fields you would like to visualize.
-
-    Since we provided a sample result when we defined the connector, the app is aware of the fields available in your index.
-    
-    :::image type="content" source="./media/search-howto-powerapps/2-7-gallery-select-fields.png" alt-text="Gallery fields" border="true":::   
- 
-1. Press **F5** to preview the app.  
-
-    :::image type="content" source="./media/search-howto-powerapps/2-8-3-final.png" alt-text="Final app" border="true":::    
-
-<!--     Remember that the fields can be set to calculated values.
-
-    For the example, setting using the *"Image, Title and Subtitle"* layout and specifying the *Image* function as the concatenation of the root path for the data and the file name (for instance, `"https://mystore.blob.core.windows.net/multilang/" & ThisItem.metadata_storage_name`) will produce the result below.
-
-    :::image type="content" source="./media/search-howto-powerapps/2-8-2-final.png" alt-text="Final app" border="true":::         -->
-
-## Clean up resources
-
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-
-You can find and manage resources in the Azure portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-
-Remember that a free search service is limited to three indexes, indexers, and data sources. You can delete individual items in the Azure portal to stay under the limit.
-
-## Next steps
-
-Power Apps enables the rapid application development of custom apps. Now that you know how to connect to a search index, learn more about creating a rich visualize experience in a custom Power App.
-
-> [!div class="nextstepaction"]
-> [Power Apps Learning Catalog](/powerapps/learning-catalog/bdm#get-started)

Summary

{
    "modification_type": "breaking change",
    "modification_title": "ドキュメント削除: Power Appsからのクエリ方法に関するチュートリアル"
}

Explanation

This change removes "search-howto-powerapps.md" entirely. With the deletion, the guide that explained how to send queries from Power Apps to an Azure AI Search index and display the results is gone.

The deleted document contained step-by-step instructions, from preparing the query in Power Apps through building a custom app that shows search results, including:

  • How to connect to an Azure AI Search index
  • How to set up the query request
  • How to visualize results in a canvas app

Documents are sometimes removed to consolidate or refresh information, but for existing users this change means losing an important resource. Related information may have been moved to other documents, and users should look to those resources going forward.

Because this is considered a breaking change, users should reassess their procedures and seek out new documentation or help resources.
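
For readers who relied on the removed tutorial, the query its custom connector issued can still be reproduced directly. Here is a sketch using Python and the `requests` library; the URL mirrors the sample request from the deleted page, and the service name and API key are placeholders.

```python
# Sketch: the GET request the Power Apps custom connector sent, reproduced with requests.
# Replace the service name and key with your own values.
import requests

url = "https://mydemo.search.windows.net/indexes/hotels-sample-index/docs"
params = {
    "search": "*",
    "$select": "HotelName,Description,Address/City",
    "api-version": "2025-09-01",
}
headers = {"Content-Type": "application/json", "api-key": "<query-key>"}

response = requests.get(url, params=params, headers=headers, timeout=30)
response.raise_for_status()
for doc in response.json()["value"]:
    print(doc["HotelName"], "-", doc["Address"]["City"])
```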

articles/search/search-pagination-page-layout.md

Diff
@@ -383,6 +383,4 @@ To quickly generate a search page for your client, consider these options:
 
 + [Create demo app](search-create-app-portal.md), in the Azure portal, creates an HTML page with a search bar, faceted navigation, and a thumbnail area if you have images.
 
-+ [Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) is a tutorial and code sample that builds a functional client.
-
 + [Add search to web apps](tutorial-csharp-overview.md) is a C# tutorial and code sample that uses the React JavaScript libraries for the user experience. The app is deployed using Azure Static Web Apps and it implements pagination.

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: ASP.NET Coreアプリに関する情報の削除"
}

Explanation

この変更は、「search-pagination-page-layout.md」文書に対する修正を示しており、2行の削除が行われています。主な変更点は、ASP.NET Core (MVC) アプリにおける検索機能の実装に関するチュートリアルへの参照が削除されたことです。

具体的な変更内容は以下の通りです:

  1. ASP.NET Core (MVC) アプリに関する情報の削除: 削除された行には、「Add search to an ASP.NET Core (MVC) app is a tutorial and code sample that builds a functional client.」という文が含まれていました。この情報は、ASP.NET Coreアプリに検索機能を追加する方法を示すチュートリアルへのリンクであり、ユーザーにとっては有用なリソースでした。

  2. 残されたリソース: Azureポータルでのデモアプリ作成へのリンクと、React JavaScriptライブラリを使用してWebアプリに検索機能を追加するC#チュートリアルへのリンクは、引き続き提供されています。今回の変更で新しいリンクが追加されたわけではありません。

これらの変更により、特定のリソースが削除されることで、文書の内容がより集約されたものになった一方で、ASP.NET Coreに関する情報を求めていたユーザーにとっては不便さが生じる可能性があります。ユーザーは、他のドキュメントやリソースを探して代替情報を見つける必要があります。

articles/search/search-try-for-free.md

Diff
@@ -7,7 +7,7 @@ author: haileytap
 ms.author: haileytapia
 ms.service: azure-ai-search
 ms.update-cycle: 180-days
-ms.topic: conceptual
+ms.topic: article
 ms.date: 11/06/2025
 ms.custom: references_regions
 ---
@@ -94,7 +94,7 @@ Try the Azure portal quickstarts for Azure AI Search or quickstarts that use Vis
 - [Quickstart: Vector search in the Azure portal](search-get-started-portal-import-vectors.md)
 - [Quickstart: Image search in the Azure portal](search-get-started-portal-image-search.md)
 - [Quickstart: Keyword in the Azure portal](search-get-started-portal.md)
-- [Quickstart: Generative search (RAG) using a Python client](search-get-started-rag.md)
+- [Quickstart: Agentic retrieval](search-get-started-agentic-retrieval.md)
 - [Quickstart: Vector search using a REST client](search-get-started-vector.md)
 
 Foundry supports connecting to content in Azure AI Search.

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: トピックの変更とクイックスタートのリンク修正"
}

Explanation

この変更は、「search-try-for-free.md」文書に対する修正を示しており、文書内で4つの変更(2つの追加と2つの削除)が行われています。主な内容は以下の通りです:

  1. トピックの変更: 文書のメタデータが更新され、「ms.topic」が「conceptual」から「article」に変更されました。これは本文の書き換えではなく、記事の分類を改めるメタデータ上の修正です。

  2. クイックスタートリンクの修正: クイックスタートのセクションで、「Quickstart: Generative search (RAG) using a Python client」へのリンクが「Quickstart: Agentic retrieval」へのリンクに置き換えられました。

この変更により、ユーザーは新しいクイックスタートのリソースを参照できるようになり、Azure AI検索機能をより効率的に利用するための情報が強化されました。全体的に、この修正は文書の明確性を向上させ、正確なリソースへのアクセスを提供しています。

articles/search/search-what-is-an-index.md

Diff
@@ -165,7 +165,7 @@ All indexing and query requests target an index. Endpoints are usually one of th
 
    + [Quickstart: REST](search-get-started-text.md)
    + [Quickstart: Full-text search](search-get-started-text.md)
-   + [Quickstart: RAG (using Visual Studio Code and a Jupyter notebook)](search-get-started-rag.md)
+   + [Quickstart: Agentic retrieval](search-get-started-agentic-retrieval.md)
 
 ## Next steps
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ドキュメント更新: クイックスタートリンクの修正"
}

Explanation

この変更は、「search-what-is-an-index.md」文書に対する修正を示しており、2つの変更(1つの追加と1つの削除)が行われています。具体的な内容は以下の通りです:

  1. クイックスタートリンクの修正: クイックスタートの一覧で、「Quickstart: RAG (using Visual Studio Code and a Jupyter notebook)」へのリンクが「Quickstart: Agentic retrieval」へのリンクに置き換えられました。

この修正によって、ユーザーは新しい「エージェント検索」に関するクイックスタートリソースにアクセスでき、Azure の検索機能をより柔軟に活用するための情報が提供されるようになりました。この変更は文書の精度を向上させ、ユーザーにとって有用なリソースへのリンクが適切に反映されています。

articles/search/toc.yml

Diff
@@ -108,8 +108,6 @@ items:
   items:
   - name: Dev tutorials
     items:
-    - name: Add search to ASP.NET Core (MVC)
-      href: tutorial-csharp-create-mvc-app.md
     - name: Add search to static web apps
       items:
       - name: Overview
@@ -120,8 +118,6 @@ items:
         href: tutorial-csharp-deploy-static-web-app.md
       - name: Explore the code
         href: tutorial-csharp-search-query-integration.md
-    - name: Query from Power Apps
-      href: search-howto-powerapps.md
   - name: Indexing tutorials
     items:
     - name: Index any data
@@ -156,22 +152,6 @@ items:
       href: tutorial-document-layout-image-verbalization.md
   - name: Agentic retrieval tutorial
     href: agentic-retrieval-how-to-create-pipeline.md
-  - name: Classic RAG tutorials
-    items:
-    - name: Build a classic RAG solution
-      href: tutorial-rag-build-solution.md
-    - name: Choose models
-      href: tutorial-rag-build-solution-models.md
-    - name: Design an index
-      href: tutorial-rag-build-solution-index-schema.md
-    - name: Build an indexing pipeline
-      href: tutorial-rag-build-solution-pipeline.md
-    - name: Search and generate answers
-      href: tutorial-rag-build-solution-query.md
-    - name: Maximize relevance
-      href: tutorial-rag-build-solution-maximize-relevance.md
-    - name: Minimize storage and costs
-      href: tutorial-rag-build-solution-minimize-storage.md
   - name: Skills tutorials
     items:
     - name: Create a skillset

Summary

{
    "modification_type": "minor update",
    "modification_title": "目次ファイルの更新: 不要な項目の削除"
}

Explanation

この変更は、「toc.yml」ファイルの更新を示しており、20項目が削除されています。具体的な変更内容は以下の通りです:

  1. 不要な項目の削除:
    • 「Dev tutorials」セクションから、以下のリンクが削除されました:
      • 「Add search to ASP.NET Core (MVC)」
      • 「Query from Power Apps」
    • 「Classic RAG tutorials」セクションが完全に削除され、その下にあったすべての関連リンクも削除されました。このセクションには、クラシックRAGソリューションに関連するチュートリアルが含まれていました。

これにより、目次ファイルが整理され、重要な情報に焦点を当てることができるようになりました。この変更は、ユーザーに対して最新のリソースに基づいた明確なナビゲーションを提供することを目的としています。全体として、不要な情報の削除により、文書がより簡潔で理解しやすくなったことが評価されます。

articles/search/tutorial-csharp-create-mvc-app.md

Diff
@@ -1,482 +0,0 @@
----
-title: Add search to ASP.NET Core MVC
-titleSuffix: Azure AI Search
-description: In this Azure AI Search tutorial, learn how to add search to an ASP.NET Core (Model-View-Controller) application.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.devlang: csharp
-ms.custom:
-  - ignite-2023
-ms.topic: tutorial
-ms.date: 12/05/2025
----
-
-# Create a search app in ASP.NET Core
-
-In this tutorial, you create a basic ASP.NET Core (Model-View-Controller) app that runs in localhost and connects to the [hotels-sample-index](search-get-started-portal.md) on your search service. You learn how to:
-
-> [!div class="checklist"]
-> + Create a basic search page
-> + Filter results
-> + Sort results
-
-This tutorial focuses on server-side operations called through the [Search APIs](/dotnet/api/overview/azure/search.documents-readme). Although it's common to sort and filter in client-side script, knowing how to invoke these operations on the server gives you more options when designing the search experience.
-
-You can find sample code for this tutorial in the [azure-search-dotnet-samples](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/create-mvc-app) repository on GitHub. 
-
-## Prerequisites
-
-+ [Visual Studio](https://visualstudio.microsoft.com/downloads/)
-+ [Azure.Search.Documents NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/)
-+ [Azure AI Search](search-create-service-portal.md), any tier, but it must have public network access. 
-+ [Hotel samples index](search-get-started-portal.md)
-
-[Step through the Import data wizard](search-get-started-portal.md) to create the hotels-sample-index on your search service. Or, change the index name in the `HomeController.cs` file.
-
-## Create the project
-
-1. Start Visual Studio and select **Create a new project**.
-
-1. Select **ASP.NET Core Web App (Model-View-Controller)**, and then select **Next**.
-
-1. Enter a project name, and then select **Next**.
-
-1. On the next page, select **.NET 9.0**.
-
-1. Accept the default settings.
-
-1. Select **Create**.
-
-### Add NuGet packages
-
-1. On the **Tools** menu, select **NuGet Package Manager** > **Manage NuGet Packages for the solution**.
-
-1. Browse for `Azure.Search.Documents` and install the latest stable version.
-
-1. Browse for and install the `Microsoft.Spatial` package. The sample index includes a `GeographyPoint` data type. Installing this package avoids runtime errors. Alternatively, remove the "Location" field from the `Hotel` class if you don't want to install the package. That field isn't used in this tutorial.
-
-### Add service information
-
-For the connection, the app presents a query API key to your fully qualified search URL. Both are specified in the `appsettings.json` file.
-
-Modify `appsettings.json` to specify your search service and [query API key](search-security-api-keys.md).
-
-```json
-{
-    "SearchServiceUri": "<YOUR-SEARCH-SERVICE-URL>",
-    "SearchServiceQueryApiKey": "<YOUR-SEARCH-SERVICE-QUERY-API-KEY>"
-}
-```
-
-You can get the service URL and API key from the Azure portal. Because this code is querying an index and not creating one, you can use a query key instead of an admin key.
-
-Make sure to specify a search service that has the `hotels-sample-index`.
-
-## Add models
-
-In this step, you create models that represent the schema of the hotels-sample-index.
-
-1. In Solution Explorer, right-click **Models** and add a new class named "Hotel" with the following code:
-
-   ```csharp
-    using Azure.Search.Documents.Indexes.Models;
-    using Azure.Search.Documents.Indexes;
-    using Microsoft.Spatial;
-    using System.Text.Json.Serialization;
-    
-    namespace HotelDemoApp.Models
-    {
-        public partial class Hotel
-        {
-            [SimpleField(IsFilterable = true, IsKey = true)]
-            public string HotelId { get; set; }
-    
-            [SearchableField(IsSortable = true)]
-            public string HotelName { get; set; }
-    
-            [SearchableField(AnalyzerName = LexicalAnalyzerName.Values.EnLucene)]
-            public string Description { get; set; }
-    
-            [SearchableField(AnalyzerName = LexicalAnalyzerName.Values.FrLucene)]
-            [JsonPropertyName("Description_fr")]
-            public string DescriptionFr { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public string Category { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsFacetable = true)]
-            public string[] Tags { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public bool? ParkingIncluded { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public DateTimeOffset? LastRenovationDate { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public double? Rating { get; set; }
-    
-            public Address Address { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsSortable = true)]
-            public GeographyPoint Location { get; set; }
-    
-            public Rooms[] Rooms { get; set; }
-        }
-    }
-   ```
-
-1. Add a class named "Address" and replace it with the following code:
-
-   ```csharp
-    using Azure.Search.Documents.Indexes;
-
-    namespace HotelDemoApp.Models
-    {
-        public partial class Address
-        {
-            [SearchableField]
-            public string StreetAddress { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public string City { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public string StateProvince { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public string PostalCode { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsSortable = true, IsFacetable = true)]
-            public string Country { get; set; }
-        }
-    }
-   ```
-
-1. Add a class named "Rooms" and replace it with the following code:
-
-   ```csharp
-    using Azure.Search.Documents.Indexes.Models;
-    using Azure.Search.Documents.Indexes;
-    using System.Text.Json.Serialization;
-    
-    namespace HotelDemoApp.Models
-    {
-        public partial class Rooms
-        {
-            [SearchableField(AnalyzerName = LexicalAnalyzerName.Values.EnMicrosoft)]
-            public string Description { get; set; }
-    
-            [SearchableField(AnalyzerName = LexicalAnalyzerName.Values.FrMicrosoft)]
-            [JsonPropertyName("Description_fr")]
-            public string DescriptionFr { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsFacetable = true)]
-            public string Type { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsFacetable = true)]
-            public double? BaseRate { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsFacetable = true)]
-            public string BedOptions { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsFacetable = true)]
-            public int SleepsCount { get; set; }
-    
-            [SimpleField(IsFilterable = true, IsFacetable = true)]
-            public bool? SmokingAllowed { get; set; }
-    
-            [SearchableField(IsFilterable = true, IsFacetable = true)]
-            public string[] Tags { get; set; }
-        }
-    }
-   ```
-
-1. Add a class named "SearchData" and replace it with the following code:
-
-   ```csharp
-    using Azure.Search.Documents.Models;
-
-    namespace HotelDemoApp.Models
-    {
-        public class SearchData
-        {
-            // The text to search for.
-            public string searchText { get; set; }
-    
-            // The list of results.
-            public SearchResults<Hotel> resultList;
-        }
-    }
-   ```
-
-## Modify the controller
-
-For this tutorial, modify the default `HomeController` to contain methods that execute on your search service.
-
-1. In Solution Explorer under **Controllers**, open `HomeController`.
-
-1. Replace the default content with the following code:
-
-   ```csharp
-   using Azure;
-    using Azure.Search.Documents;
-    using Azure.Search.Documents.Indexes;
-    using HotelDemoApp.Models;
-    using Microsoft.AspNetCore.Mvc;
-    using System.Diagnostics;
-    
-    namespace HotelDemoApp.Controllers
-    {
-        public class HomeController : Controller
-        {
-            public IActionResult Index()
-            {
-                return View();
-            }
-    
-            [HttpPost]
-            public async Task<ActionResult> Index(SearchData model)
-            {
-                try
-                {
-                    // Check for a search string
-                    if (model.searchText == null)
-                    {
-                        model.searchText = "";
-                    }
-    
-                    // Send the query to Search.
-                    await RunQueryAsync(model);
-                }
-    
-                catch
-                {
-                    return View("Error", new ErrorViewModel { RequestId = "1" });
-                }
-                return View(model);
-            }
-    
-            [ResponseCache(Duration = 0, Location = ResponseCacheLocation.None, NoStore = true)]
-            public IActionResult Error()
-            {
-                return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
-            }
-    
-            private static SearchClient _searchClient;
-            private static SearchIndexClient _indexClient;
-            private static IConfigurationBuilder _builder;
-            private static IConfigurationRoot _configuration;
-    
-            private void InitSearch()
-            {
-                // Create a configuration using appsettings.json
-                _builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
-                _configuration = _builder.Build();
-    
-                // Read the values from appsettings.json
-                string searchServiceUri = _configuration["SearchServiceUri"];
-                string queryApiKey = _configuration["SearchServiceQueryApiKey"];
-    
-                // Create a service and index client.
-                _indexClient = new SearchIndexClient(new Uri(searchServiceUri), new AzureKeyCredential(queryApiKey));
-                _searchClient = _indexClient.GetSearchClient("hotels-sample-index");
-            }
-    
-            private async Task<ActionResult> RunQueryAsync(SearchData model)
-            {
-                InitSearch();
-    
-                var options = new SearchOptions()
-                {
-                    IncludeTotalCount = true
-                };
-    
-                // Enter Hotel property names to specify which fields are returned.
-                // If Select is empty, all "retrievable" fields are returned.
-                options.Select.Add("HotelName");
-                options.Select.Add("Category");
-                options.Select.Add("Rating");
-                options.Select.Add("Tags");
-                options.Select.Add("Address/City");
-                options.Select.Add("Address/StateProvince");
-                options.Select.Add("Description");
-    
-                // For efficiency, the search call should be asynchronous, so use SearchAsync rather than Search.
-                model.resultList = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);
-    
-                // Display the results.
-                return View("Index", model);
-            }
-            public IActionResult Privacy()
-            {
-                return View();
-            }
-        }
-    }
-   ```
-
-## Modify the view
-
-1. In Solution Explorer, under **Views** > **Home**, open `index.cshtml`.
-
-1. Replace the default content with the following code:
-
-    ```razor
-    @model HotelDemoApp.Models.SearchData;
-    
-    @{
-        ViewData["Title"] = "Index";
-    }
-    
-    <div>
-        <h2>Search for Hotels</h2>
-    
-        <p>Use this demo app to test server-side sorting and filtering. Modify the RunQueryAsync method to change the operation. The app uses the default search configuration (simple search syntax, with searchMode=Any).</p>
-    
-        <form asp-controller="Home" asp-action="Index">
-            <p>
-                <input type="text" name="searchText" />
-                <input type="submit" value="Search" />
-            </p>
-        </form>
-    </div>
-    
-    <div>
-        @using (Html.BeginForm("Index", "Home", FormMethod.Post))
-        {
-            @if (Model != null)
-            {
-                // Show the result count.
-                <p>@Model.resultList.TotalCount Results</p>
-    
-                // Get search results.
-                var results = Model.resultList.GetResults().ToList();
-    
-                {
-                    <table class="table">
-                        <thead>
-                            <tr>
-                                <th>Name</th>
-                                <th>Category</th>
-                                <th>Rating</th>
-                                <th>Tags</th>
-                                <th>City</th>
-                                <th>State</th>
-                                <th>Description</th>
-                            </tr>
-                        </thead>
-                        <tbody>
-                            @foreach (var d in results)
-                            {
-                                <tr>
-                                    <td>@d.Document.HotelName</td>
-                                    <td>@d.Document.Category</td>
-                                    <td>@d.Document.Rating</td>
-                                    <td>@d.Document.Tags[0]</td>
-                                    <td>@d.Document.Address.City</td>
-                                    <td>@d.Document.Address.StateProvince</td>
-                                    <td>@d.Document.Description</td>
-                                </tr>
-                            }
-                        </tbody>
-                      </table>
-                }
-            }
-        }
-    </div>
-    ```
-
-## Run the sample
-
-1. Press **F5** to compile and run the project. The app runs on localhost and opens in your default browser.
-
-1. Select **Search** to return all results.
-
-1. This code uses the default search configuration, supporting the [simple syntax](query-simple-syntax.md) and `searchMode=Any`. You can enter keywords, augment with Boolean operators, or run a prefix search (`pool*`).
-
-In the next several sections, modify the **RunQueryAsync** method in the `HomeController` to add filters and sorting.
-
-## Filter results
-
-Index field attributes determine which fields are searchable, filterable, sortable, facetable, and retrievable. In the hotels-sample-index, filterable fields include Category, Address/City, and Address/StateProvince. This example adds a [$Filter](search-query-odata-filter.md) expression on Category.
-
-A filter always executes first, followed by a query, assuming you specify one.
-
-1. Open the `HomeController` and find the **RunQueryAsync** method. Add [Filter](/dotnet/api/azure.search.documents.searchoptions.filter) to `var options = new SearchOptions()`:
-
-   ```csharp
-    private async Task<ActionResult> RunQueryAsync(SearchData model)
-    {
-        InitSearch();
-
-        var options = new SearchOptions()
-        {
-            IncludeTotalCount = true,
-            Filter = "search.in(Category,'Budget,Suite')"
-        };
-
-        options.Select.Add("HotelName");
-        options.Select.Add("Category");
-        options.Select.Add("Rating");
-        options.Select.Add("Tags");
-        options.Select.Add("Address/City");
-        options.Select.Add("Address/StateProvince");
-        options.Select.Add("Description");
-
-        model.resultList = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);
-
-        return View("Index", model);
-    }
-   ```
-
-1. Run the application.
-
-1. Select **Search** to run an empty query. The filter returns 18 documents instead of the original 50.
-
-For more information about filter expressions, see [Filters in Azure AI Search](search-filters.md) and [OData $filter syntax in Azure AI Search](search-query-odata-filter.md).
-
-## Sort results
-
-In the hotels-sample-index, sortable fields include Rating and LastRenovationDate. This example adds an [$OrderBy](/dotnet/api/azure.search.documents.searchoptions.orderby) expression to the Rating field.
-
-1. Open the `HomeController` and replace the **RunQueryAsync** method with the following version:
-
-   ```csharp
-    private async Task<ActionResult> RunQueryAsync(SearchData model)
-    {
-        InitSearch();
-    
-        var options = new SearchOptions()
-        {
-            IncludeTotalCount = true,
-        };
-    
-        options.OrderBy.Add("Rating desc");
-    
-        options.Select.Add("HotelName");
-        options.Select.Add("Category");
-        options.Select.Add("Rating");
-        options.Select.Add("Tags");
-        options.Select.Add("Address/City");
-        options.Select.Add("Address/StateProvince");
-        options.Select.Add("Description");
-
-        model.resultList = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);
-    
-        return View("Index", model);
-    }
-   ```
-
-1. Run the application. Results are sorted by Rating in descending order.
-
-For more information about sorting, see [OData $orderby syntax in Azure AI Search](search-query-odata-orderby.md).
-
-## Next step
-
-In this tutorial, you created an ASP.NET Core (MVC) project that connects to a search service and calls Search APIs for server-side filtering and sorting.
-
-To add client-side code that responds to user actions, use a React template in your solution: [C# Tutorial: Add search to a website with .NET](tutorial-csharp-overview.md).

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: ASP.NET Core MVCアプリにおける検索機能の追加"
}

Explanation

この変更は、「tutorial-csharp-create-mvc-app.md」ファイルが完全に削除されたことを示しています。このファイルは、ASP.NET Core(MVC)アプリケーションに検索機能を追加するための詳細なチュートリアルを含んでいました。削除された内容は482行に及び、チュートリアルに必要な基本的な説明や手順がすべて含まれていました。具体的には以下の要素が含まれていました:

  1. チュートリアルの内容:
    • ASP.NET Core MVCアプリケーションの作成方法。
    • 検索サービスへの接続方法。
    • 検索結果のフィルタリングやソートの操作方法。
    • 様々なNuGetパッケージの追加手順やappsettings.jsonの設定。
  2. 削除による影響:
    • この削除により、ユーザーはASP.NET Coreアプリに検索機能を組み込むための公式なガイドラインを失うことになります。
    • 新たなチュートリアルやリソースに置き換えられる可能性がありますが、現時点ではこの特定の情報源はなくなったため、代替手段の明示が必要です。

この変更は、既存のドキュメントの整合性やリソースの整理の一環として行われた可能性があり、今後のアップデートや新しいチュートリアルのリリースを期待させるものです。
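
削除されたチュートリアルは C#(ASP.NET Core MVC)でサーバー側のフィルターとソートを示していました。参考として、同じ操作を Python SDK(azure-search-documents)で書いた場合の最小スケッチを以下に示します。エンドポイントやキーは架空の値であり、削除された文書のコードを置き換えるものではありません。

```python
# hotels-sample-index に対するサーバー側フィルターとソートの最小スケッチ(接続情報はすべて仮の値)。
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<your-query-api-key>"),
)

results = client.search(
    search_text="*",
    filter="search.in(Category,'Budget,Suite')",  # フィルターはクエリ本体より先に適用される
    order_by=["Rating desc"],                     # Rating の降順で並べ替え
    select=["HotelName", "Category", "Rating"],
    include_total_count=True,
)

print(results.get_count())
for doc in results:
    print(doc["HotelName"], doc["Rating"])
```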

articles/search/tutorial-rag-build-solution-index-schema.md

Diff
@@ -1,215 +0,0 @@
----
-title: 'Classic RAG tutorial: Design an index'
-titleSuffix: Azure AI Search
-description: Design an index for RAG patterns in Azure AI Search.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.topic: tutorial
-ms.date: 10/14/2025
-
----
-
-# Tutorial: Design an index for classic RAG in Azure AI Search
-
-An index contains searchable text and vector content, plus configurations. In a RAG pattern that uses a chat model for responses, you want an index designed around chunks of content that can be passed to an LLM at query time. 
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> - Learn the characteristics of an index schema built for RAG
-> - Create an index that accommodates vector and hybrid queries
-> - Add vector profiles and configurations
-> - Add structured data
-> - Add filtering
-
-## Prerequisites
-
-[Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and the [Jupyter package](https://pypi.org/project/jupyter/). For more information, see [Python in Visual Studio Code](https://code.visualstudio.com/docs/languages/python).
-
-The output of this exercise is an index definition in JSON. At this point, it's not uploaded to Azure AI Search, so there are no requirements for cloud services or permissions in this exercise.
-
-## Review schema considerations for RAG
-
-In conversational search, LLMs compose the response that the user sees, not the search engine, so you don't need to think about what fields to show in your search results, and whether the representations of individual search documents are coherent to the user. Depending on the question, the LLM might return verbatim content from your index, or more likely, repackage the content for a better answer.
-
-### Organized around chunks
-
-When LLMs generate a response, they operate on chunks of content for message inputs, and while they need to know where the chunk came from for citation purposes, what matters most is the quality of the message inputs and their relevance to the user's question. Whether the chunks come from one document or a thousand, the LLM ingests the information or *grounding data*, and formulates the response using instructions provided in a system prompt.
-
-Chunks are the focus of the schema, and each chunk is the defining element of a search document in a RAG pattern. You can think of your index as a large collection of chunks, as opposed to traditional search documents that probably have more structure, such as fields containing uniform content for a name, descriptions, categories, and addresses.
-
-### Enhanced with generated data
-
-In this tutorial, sample data consists of PDFs and content from the [NASA Earth Book](https://www.nasa.gov/ebooks/earth/). This content is descriptive and informative, with numerous references to geographies, countries, and areas across the world. All of the textual content is captured in chunks, but recurring instances of place names create an opportunity for adding structure to the index. Using skills, it's possible to recognize entities in the text and capture them in an index for use in queries and filters. In this tutorial, we include an [entity recognition skill](cognitive-search-skill-entity-recognition-v3.md) that recognizes and extracts location entities, loading it into a searchable and filterable `locations` field. Adding structured content to your index gives you more options for filtering, improved relevance, and more focused answers.
-
-### Parent-child fields in one or two indexes?
-
-Chunked content typically derives from a larger document. And although the schema is organized around chunks, you also want to capture properties and content at the parent level. Examples of these properties might include the parent file path, title, authors, publication date, or a summary.
-
-An inflection point in schema design is whether to have two indexes for parent and child/chunked content, or a single index that repeats parent elements for each chunk.
-
-In this tutorial, because all of the chunks of text originate from a single parent (NASA Earth Book), you don't need a separate index dedicated to parent-level fields. However, if you're indexing from multiple parent PDFs, you might want a parent-child index pair to capture level-specific fields and then send [lookup queries](/rest/api/searchservice/documents/get) to the parent index to retrieve those fields relevant to each chunk.
-
-### Checklist of schema considerations
-
-In Azure AI Search, an index that works best for RAG workloads has these qualities:
-
-- Returns chunks that are relevant to the query and readable to the LLM. LLMs can handle a certain level of dirty data in chunks, such as mark up, redundancy, and incomplete strings. While chunks need to be readable and relevant to the question, they don't need to be pristine.
-
-- Maintains a parent-child relationship between chunks of a document and the properties of the parent document, such as the file name, file type, title, author, and so forth. To answer a query, chunks could be pulled from anywhere in the index. Association with the parent document providing the chunk is useful for context, citations, and follow up queries.
-
-- Accommodates the queries you want to create. You should have fields for vector and hybrid content, and those fields should be attributed to support specific query behaviors, such as searchable or filterable. You can only query one index at a time (no joins), so your fields collection should define all of your searchable content.
-
-- Your schema should either be flat (no complex types or structures), or you should [format the complex type output as JSON](search-get-started-rag.md#send-a-complex-rag-query) before sending it to the LLM. This requirement is specific to the RAG pattern in Azure AI Search.
-
-> [!NOTE]
-> Schema design affects storage and costs. This exercise is focused on schema fundamentals. In the [Minimize storage and costs](tutorial-rag-build-solution-minimize-storage.md) tutorial, you revisit schemas to learn how narrow data types, compression, and storage options significantly reduce the amount of storage used by vectors.
-
-## Create an index for RAG workloads
-
-A minimal index for LLM is designed to store chunks of content. It typically includes vector fields if you want similarity search for highly relevant results. It also includes nonvector fields for human-readable inputs to the LLM for conversational search. Nonvector chunked content in the search results becomes the grounding data sent to the LLM.
-
-1. Open Visual Studio Code and create a new file. It doesn't have to be a Python file type for this exercise.
-
-1. Here's a minimal index definition for RAG solutions that support vector and hybrid search. Review it for an introduction to required elements: index name, fields, and a configuration section for vector fields.
-
-    ```json
-    {
-      "name": "example-minimal-index",
-      "fields": [
-        { "name": "id", "type": "Edm.String", "key": true },
-        { "name": "chunked_content", "type": "Edm.String", "searchable": true, "retrievable": true },
-        { "name": "chunked_content_vectorized", "type": "Edm.Single", "dimensions": 1536, "vectorSearchProfile": "my-vector-profile", "searchable": true, "retrievable": false, "stored": false },
-        { "name": "metadata", "type": "Edm.String", "retrievable": true, "searchable": true, "filterable": true }
-      ],
-      "vectorSearch": {
-          "algorithms": [
-              { "name": "my-algo-config", "kind": "hnsw", "hnswParameters": { }  }
-          ],
-          "profiles": [ 
-            { "name": "my-vector-profile", "algorithm": "my-algo-config" }
-          ]
-      }
-    }
-    ```
-
-   Fields must include a key field (`"id"` in this example) and should include vector chunks for similarity search and nonvector chunks for inputs to the LLM.
-
-   Vector fields are associated with algorithms that determine the search paths at query time. The index has a vectorSearch section for specifying multiple algorithm configurations. Vector fields also have [specific types](/rest/api/searchservice/supported-data-types#edm-data-types-for-vector-fields) and extra attributes for embedding model dimensions. `Collection(Edm.Single)` is a data type that works for embeddings from commonly used models. For more information about vector fields, see [Create a vector index](vector-search-how-to-create-index.md).
-
-   Metadata fields might be the parent file path, creation date, or content type and are useful for [filters](vector-search-filters.md).
-
-1. Here's the index schema for the [tutorial source code](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Tutorial-RAG/Tutorial-rag.ipynb) and the [Earth Book content](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/nasa-e-book/earth_book_2019_text_pages). 
-
-   Like the basic schema, it's organized around chunks. The `chunk_id` uniquely identifies each chunk. The `text_vector` field is an embedding of the chunk. The nonvector `chunk` field is a readable string. The `title` maps to a unique metadata storage path for the blobs. The `parent_id` is the only parent-level field, and it's a base64-encoded version of the parent file URI. 
-
-   In integrated vectorization workloads like the one used in this tutorial series, the `dimensions` property on your vector fields should be identical to the number of `dimensions` generated by the embedding skill used to vectorize your data. In this series, we use the Azure OpenAI embedding skill, which calls the text-embedding-3-large model on Azure OpenAI. The skill is specified in the next tutorial. We set dimensions to 1024 in both the vector field and in the skill definition.
-
-   The schema also includes a `locations` field for storing generated content that's created by the [indexing pipeline](tutorial-rag-build-solution-pipeline.md).
-
-   ```python
-    from azure.identity import DefaultAzureCredential
-    from azure.identity import get_bearer_token_provider
-    from azure.search.documents.indexes import SearchIndexClient
-    from azure.search.documents.indexes.models import (
-        SearchField,
-        SearchFieldDataType,
-        VectorSearch,
-        HnswAlgorithmConfiguration,
-        VectorSearchProfile,
-        AzureOpenAIVectorizer,
-        AzureOpenAIVectorizerParameters,
-        SearchIndex
-    )
-    
-    credential = DefaultAzureCredential()
-    
-    # Create a search index  
-    index_name = "py-rag-tutorial-idx"
-    index_client = SearchIndexClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-    fields = [
-        SearchField(name="parent_id", type=SearchFieldDataType.String),  
-        SearchField(name="title", type=SearchFieldDataType.String),
-        SearchField(name="locations", type=SearchFieldDataType.Collection(SearchFieldDataType.String), filterable=True),
-        SearchField(name="chunk_id", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True, analyzer_name="keyword"),  
-        SearchField(name="chunk", type=SearchFieldDataType.String, sortable=False, filterable=False, facetable=False),  
-        SearchField(name="text_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), vector_search_dimensions=1024, vector_search_profile_name="myHnswProfile")
-        ]  
-      
-    # Configure the vector search configuration  
-    vector_search = VectorSearch(  
-        algorithms=[  
-            HnswAlgorithmConfiguration(name="myHnsw"),
-        ],  
-        profiles=[  
-            VectorSearchProfile(  
-                name="myHnswProfile",  
-                algorithm_configuration_name="myHnsw",  
-                vectorizer_name="myOpenAI",  
-            )
-        ],  
-        vectorizers=[  
-            AzureOpenAIVectorizer(  
-                vectorizer_name="myOpenAI",  
-                kind="azureOpenAI",  
-                parameters=AzureOpenAIVectorizerParameters(  
-                    resource_url=AZURE_OPENAI_ACCOUNT,  
-                    deployment_name="text-embedding-3-large",
-                    model_name="text-embedding-3-large"
-                ),
-            ),  
-        ], 
-    )  
-      
-    # Create the search index
-    index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search)  
-    result = index_client.create_or_update_index(index)  
-    print(f"{result.name} created")  
-    ```
-
-1. For an index schema that more closely mimics structured content, you would have separate indexes for parent and child (chunked) fields. You would need [index projections](index-projections-concept-intro.md) to coordinate the indexing of the two indexes simultaneously. Queries execute against the child index. Query logic includes a lookup query, using the parent_id to retrieve content from the parent index.
-
-   Fields in the child index:
-
-   - ID
-   - chunk
-   - chunkVector
-   - parent_id
-
-   Fields in the parent index (everything that you want "one of"):
-
-   - parent_id
-   - parent-level fields (name, title, category)
-
-<!-- Objective:
-
-- Design an index schema that generates results in a format that works for LLMs.
-
-Key points:
-
-- schema for rag is designed for producing chunks of content
-- schema should be flat (no complex types or structures)
-- schema determines what queries you can create (be generous in attribute assignments)
-- schema must cover all the queries you want to run. You can only query one index at a time (no joins), but you can create indexes that preserve parent-child relationship, and then use nested queries or parallel queries in your search logic to pull from both.
-- schema has impact on storage/size. Consider narrow data types, attribution, vector configuration.
-- show schema patterns: one for parent-child all-up, one for paired indexes via index projections
-- note metadata for filters
-- TBD: add fields for location and use entity recognition to pull this values out of the PDFs? Not sure how the extraction will work on chunked documents or how it will query, but goal would be to show that you can add structured data to the schema.
-
-Tasks:
-
-- H2 How to create an index for chunked and vectorized data (show examples for parent-child variants)
-- H2 How to define vector profiles and configuration (discuss pros and cons, shouldn't be a rehash of existing how-to)
-- H2 How to add filters
-- H2 How to add structured data (example is "location", top-level field, data aquisition is through the pipeline) -->
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Create an indexing pipeline](tutorial-rag-build-solution-pipeline.md)

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: RAGソリューションのインデックススキーマ設計"
}

Explanation

この変更は、「tutorial-rag-build-solution-index-schema.md」ファイルの完全な削除を示しています。このファイルは、Azure AI SearchにおけるRAG(Retrieval-Augmented Generation)パターン用のインデックススキーマを設計するための詳細なチュートリアルを提供していました。削除された内容は215行に及び、以下の主要な要素が含まれていました:

  1. インデックススキーマの設計:
    • RAGに特化したインデックススキーマの特性や、チャンクされたコンテンツの取り扱いについて説明。
    • ベクトルとハイブリッドクエリのサポートを含むインデックス作成の手順。
  2. チュートリアルの内容:
    • RAGワークフローにおけるインデックス設計の重要性。
    • 実際のインデックス定義をJSON形式で提供し、ユーザーが自らのインデックスを設計できるようにする内容。
    • Azure AI Searchの機能と、データの構造を考慮したクエリ方法。
  3. 削除による影響:
    • このチュートリアルの削除は、ユーザーにとって新たなガイドラインを失うことを意味し、特にRAGパターンに関心がある開発者にとっては重要な情報源を失う結果となります。
    • 将来的に代替のリソースや新しいチュートリアルが提供される可能性がありますが、現在はこの特定の情報が失われています。

この変更は、ドキュメントの簡素化や内容の整理の一環として行われたことが考えられますが、ユーザーにとっては有用な資源がなくなったことを意味しています。新たな情報源やチュートリアルの提供を待つ必要があります。
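
削除されたチュートリアルのスキーマ(chunk と text_vector を中心とするフィールド構成)を前提にすると、クエリ側は次のようなハイブリッド検索になります。以下は azure-search-documents を使った最小限のスケッチで、インデックス名やフィールド名は削除されたシリーズで使われていた値を仮定した参考例です。

```python
# チャンク中心のインデックスに対するハイブリッド(キーワード + ベクトル)クエリの最小スケッチ。
# インデックス名 py-rag-tutorial-idx とフィールド名 title / chunk / text_vector は仮定です。
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="py-rag-tutorial-idx",
    credential=DefaultAzureCredential(),
)

query = "Are there any cloud formations specific to oceans?"
results = client.search(
    search_text=query,  # キーワード検索
    vector_queries=[    # 統合ベクトル化 (vectorizer) によりテキストをそのまま渡せる
        VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")
    ],
    select=["title", "chunk"],
    top=5,
)
for doc in results:
    print(doc["title"])
```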

articles/search/tutorial-rag-build-solution-maximize-relevance.md

Diff
@@ -1,333 +0,0 @@
----
-title: 'Classic RAG tutorial: Tune relevance'
-titleSuffix: Azure AI Search
-description: Learn how to use the relevance tuning capabilities to return high quality results for generative search.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.custom:
-  - ignite-2024
-ms.topic: tutorial
-ms.date: 10/14/2025
----
-
-# Tutorial: Maximize relevance (classic RAG in Azure AI Search)
-
-Azure AI Search provides relevance tuning strategies for improving the relevance of search results in classic RAG solutions.  Relevance tuning can be an important factor in delivering a RAG solution that meets user expectations. 
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-In Azure AI Search, relevance tuning includes L2 semantic ranking and scoring profiles. To implement these capabilities, you revisit the index schema to add configurations for semantic ranking and scoring profiles. You then rerun the queries using the new constructs.
-
-In this tutorial, you modify the existing search index and queries to use:
-
-> [!div class="checklist"]
-> - L2 semantic ranking
-> - Scoring profile for document boosting
-
-This tutorial updates the search index created by the [indexing pipeline](tutorial-rag-build-solution-pipeline.md). Updates don't affect the existing content, so no rebuild is necessary and you don't need to rerun the indexer.
-
-## Prerequisites
-
-- [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and the [Jupyter package](https://pypi.org/project/jupyter/).
-
-- [Azure AI Search](search-create-service-portal.md), Basic tier or higher for managed identity and semantic ranking.
-
-- [Azure OpenAI](/azure/ai-services/openai/how-to/create-resource), with a deployment of text-embedding-3-small and gpt-4o.
-
-## Download the sample
-
-The [sample notebook](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Tutorial-RAG/Tutorial-rag.ipynb) includes an updated index and query request.
-
-## Run a baseline query for comparison
-
-Let's start with a new query, "Are there any cloud formations specific to oceans and large bodies of water?".
-
-To compare outcomes after adding relevance features, run the query against the existing index schema, before you add semantic ranking or a scoring profile.
-
-For the Azure Government cloud, modify the API endpoint on the token provider to `"https://cognitiveservices.azure.us/.default"`.
-
-```python
-from azure.search.documents import SearchClient
-from openai import AzureOpenAI
-
-token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
-openai_client = AzureOpenAI(
-     api_version="2024-06-01",
-     azure_endpoint=AZURE_OPENAI_ACCOUNT,
-     azure_ad_token_provider=token_provider
- )
-
-deployment_name = "gpt-4o"
-
-search_client = SearchClient(
-     endpoint=AZURE_SEARCH_SERVICE,
-     index_name=index_name,
-     credential=credential
- )
-
-GROUNDED_PROMPT="""
-You are an AI assistant that helps users learn from the information found in the source material.
-Answer the query using only the sources provided below.
-Use bullets if the answer has multiple points.
-If the answer is longer than 3 sentences, provide a summary.
-Answer ONLY with the facts listed in the list of sources below. Cite your source when you answer the question
-If there isn't enough information below, say you don't know.
-Do not generate answers that don't use the sources below.
-Query: {query}
-Sources:\n{sources}
-"""
-
-# Focused query on cloud formations and bodies of water
-query="Are there any cloud formations specific to oceans and large bodies of water?"
-vector_query = VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")
-
-search_results = search_client.search(
-    search_text=query,
-    vector_queries= [vector_query],
-    select=["title", "chunk", "locations"],
-    top=5,
-)
-
-sources_formatted = "=================\n".join([f'TITLE: {document["title"]}, CONTENT: {document["chunk"]}, LOCATIONS: {document["locations"]}' for document in search_results])
-
-response = openai_client.chat.completions.create(
-    messages=[
-        {
-            "role": "user",
-            "content": GROUNDED_PROMPT.format(query=query, sources=sources_formatted)
-        }
-    ],
-    model=deployment_name
-)
-
-print(response.choices[0].message.content)
-```
-
-Output from this request might look like the following example.
-
-```
-Yes, there are cloud formations specific to oceans and large bodies of water. 
-A notable example is "cloud streets," which are parallel rows of clouds that form over 
-the Bering Strait in the Arctic Ocean. These cloud streets occur when wind blows from 
-a cold surface like sea ice over warmer, moister air near the open ocean, leading to 
-the formation of spinning air cylinders. Clouds form along the upward cycle of these cylinders, 
-while skies remain clear along the downward cycle (Source: page-21.pdf).
-```
-
-## Update the index for semantic ranking and scoring profiles
-
-In a previous tutorial, you [designed an index schema](tutorial-rag-build-solution-index-schema.md) for RAG workloads. We purposely omitted relevance enhancements from that schema so that you could focus on the fundamentals. Deferring relevance to a separate exercise gives you a before-and-after comparison of the quality of search results after the updates are made.
-
-1. Update the import statements to include classes for semantic ranking and scoring profiles.
-
-   ```python
-    from azure.identity import DefaultAzureCredential
-    from azure.identity import get_bearer_token_provider
-    from azure.search.documents.indexes import SearchIndexClient
-    from azure.search.documents.indexes.models import (
-        SearchField,
-        SearchFieldDataType,
-        VectorSearch,
-        HnswAlgorithmConfiguration,
-        VectorSearchProfile,
-        AzureOpenAIVectorizer,
-        AzureOpenAIVectorizerParameters,
-        SearchIndex,
-        SemanticConfiguration,
-        SemanticPrioritizedFields,
-        SemanticField,
-        SemanticSearch,
-        ScoringProfile,
-        TagScoringFunction,
-        TagScoringParameters
-    )
-    ```
-
-1. Add the following semantic configuration to the search index. This example can be found in the update schema step in the notebook.
-
-    ```python
-    # New semantic configuration
-    semantic_config = SemanticConfiguration(
-        name="my-semantic-config",
-        prioritized_fields=SemanticPrioritizedFields(
-            title_field=SemanticField(field_name="title"),
-            keywords_fields=[SemanticField(field_name="locations")],
-            content_fields=[SemanticField(field_name="chunk")]
-        )
-    )
-    
-    # Create the semantic settings with the configuration
-    semantic_search = SemanticSearch(configurations=[semantic_config])
-    ```
-
-   A semantic configuration has a name and a prioritized list of fields to help optimize the inputs to semantic ranker. For more information, see [Configure semantic ranking](/azure/search/semantic-how-to-configure).
-
-1. Next, add a scoring profile definition. As with semantic configuration, a scoring profile can be added to an index schema at any time. This example is also in the update schema step in the notebook, following the semantic configuration.
-
-    ```python
-    # New scoring profile
-    scoring_profiles = [  
-        ScoringProfile(  
-            name="my-scoring-profile",
-            functions=[
-                TagScoringFunction(  
-                    field_name="locations",  
-                    boost=5.0,  
-                    parameters=TagScoringParameters(  
-                        tags_parameter="tags",  
-                    ),  
-                ) 
-            ]
-        )
-    ]
-    ```
-
-   This profile uses the tag function which boosts the scores of documents where a match was found in the locations field. Recall that the search index has a vector field, and multiple nonvector fields for title, chunks, and locations. The locations field is a string collection, and string collections can be boosted using the tags function in a scoring profile. For more information, see [Add a scoring profile](index-add-scoring-profiles.md) and [Enhancing Search Relevance with Document Boosting (blog post)](https://farzzy.hashnode.dev/enhance-azure-ai-search-document-boosting).
-
-1. Update the index definition on the search service.
-
-   ```python
-   # Update the search index with the semantic configuration
-    index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search, semantic_search=semantic_search, scoring_profiles=scoring_profiles)  
-    result = index_client.create_or_update_index(index)  
-    print(f"{result.name} updated")  
-    ```
-
-## Update queries for semantic ranking and scoring profiles
-
-In a previous tutorial, you [ran queries](tutorial-rag-build-solution-query.md) that execute on the search engine, passing the response and other information to an LLM for chat completion.
-
-This example modifies the query request to include the semantic configuration and scoring profile.
-
-For the Azure Government cloud, modify the API endpoint on the token provider to `"https://cognitiveservices.azure.us/.default"`.
-
-```python
-# Import libraries
-from azure.search.documents import SearchClient
-from openai import AzureOpenAI
-
-token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
-openai_client = AzureOpenAI(
-     api_version="2024-06-01",
-     azure_endpoint=AZURE_OPENAI_ACCOUNT,
-     azure_ad_token_provider=token_provider
- )
-
-deployment_name = "gpt-4o"
-
-search_client = SearchClient(
-     endpoint=AZURE_SEARCH_SERVICE,
-     index_name=index_name,
-     credential=credential
- )
-
-# Prompt is unchanged in this update
-GROUNDED_PROMPT="""
-You are an AI assistant that helps users learn from the information found in the source material.
-Answer the query using only the sources provided below.
-Use bullets if the answer has multiple points.
-If the answer is longer than 3 sentences, provide a summary.
-Answer ONLY with the facts listed in the list of sources below.
-If there isn't enough information below, say you don't know.
-Do not generate answers that don't use the sources below.
-Query: {query}
-Sources:\n{sources}
-"""
-
-# Queries are unchanged in this update
-query="Are there any cloud formations specific to oceans and large bodies of water?"
-vector_query = VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")
-
-# Add query_type semantic and semantic_configuration_name
-# Add scoring_profile and scoring_parameters
-search_results = search_client.search(
-    query_type="semantic",
-    semantic_configuration_name="my-semantic-config",
-    scoring_profile="my-scoring-profile",
-    scoring_parameters=["tags-ocean, 'sea surface', seas, surface"],
-    search_text=query,
-    vector_queries= [vector_query],
-    select="title, chunk, locations",
-    top=5,
-)
-sources_formatted = "=================\n".join([f'TITLE: {document["title"]}, CONTENT: {document["chunk"]}, LOCATIONS: {document["locations"]}' for document in search_results])
-
-response = openai_client.chat.completions.create(
-    messages=[
-        {
-            "role": "user",
-            "content": GROUNDED_PROMPT.format(query=query, sources=sources_formatted)
-        }
-    ],
-    model=deployment_name
-)
-
-print(response.choices[0].message.content)
-```
-
-Output from a semantically ranked and boosted query might look like the following example.
-
-```
-Yes, there are specific cloud formations influenced by oceans and large bodies of water:
-
-- **Stratus Clouds Over Icebergs**: Low stratus clouds can frame holes over icebergs, 
-such as Iceberg A-56 in the South Atlantic Ocean, likely due to thermal instability caused 
-by the iceberg (source: page-39.pdf).
-
-- **Undular Bores**: These are wave structures in the atmosphere created by the collision 
-of cool, dry air from a continent with warm, moist air over the ocean, as seen off the 
-coast of Mauritania (source: page-23.pdf).
-
-- **Ship Tracks**: These are narrow clouds formed by water vapor condensing around tiny 
-particles from ship exhaust. They are observed over the oceans, such as in the Pacific Ocean 
-off the coast of California (source: page-31.pdf).
-
-These specific formations are influenced by unique interactions between atmospheric conditions 
-and the presence of large water bodies or objects within them.
-```
-
-Adding semantic ranking and scoring profiles positively affects the response from the LLM by promoting results that meet scoring criteria and are semantically relevant. 
-
-Now that you have a better understanding of index and query design, let's move on to optimizing for speed and concision. We revisit the schema definition to implement quantization and storage reduction, but the rest of the pipeline and models remain intact.
-
-<!-- ## Update queries for minimum thresholds ** NOT AVAILABLE IN PYTHON SDK
-
-Keyword search only returns results if there's match found in the index, up to a maximum of 50 results by default. In contrast, vector search returns `k`-results every time, even if the matching vectors aren't a close match.
-
-In the vector query portion of the request, add a threshold object and set a minimum value for including vector matches in the results.
-
-Vector scores range from 0.333 to 1.00. For more information, see [Set thresholds to exclude low-scoring results](vector-search-how-to-query.md#set-thresholds-to-exclude-low-scoring-results-preview) and [Scores in a vector search results](vector-search-ranking.md#scores-in-a-vector-search-results).
-
-```python
-# Update the vector_query to include a minimum threshold.
-query="how much of earth is covered by water"
-vector_query = VectorizableTextQuery(text=query, k_nearest_neighbors=1, fields="text_vector", threshold.kind="vectorSimilarity", threshold.value=0.8, exhaustive=True) -->
-
-<!-- ## Update queries for vector weighting
-
-<!-- Using preview features, you can unpack a hybrid search score to review the individual component scores. Based on that information, you can set minimum thresholds to exclude any match that falls below it.
-
-Semantic ranking and scoring profiles operate on nonvector content, but you can tune the vector portion of a hybrid query to amplify or diminish its importance based on how much value it adds to the results. For example, if you run keyword search and vector search independently and find that one of them is outperforming the other, you can adjust the weight on the vector side to higher or lower. This approach gives you more control over query processing.
- -->
-
-<!-- Key points:
-
-- How to measure relevance (?) to determine if changes are improving results
-- Try different algorithms (HNSW vs eKnn)
-- Change query structure (hybrid with vector/non over same content (double-down), hybrid over multiple fields)
-- semantic ranking
-- scoring profiles
-- thresholds for minimum score
-- set weights
-- filters
-- analyzers and normalizers
-- advanced query formats (regular expressions, fuzzy search) -->
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Minimize vector storage and costs](tutorial-rag-build-solution-minimize-storage.md)

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: RAGソリューションの関連性最大化"
}

Explanation

この変更は、「tutorial-rag-build-solution-maximize-relevance.md」ファイルが完全に削除されたことを示しています。削除されたこのファイルは、Azure AI SearchにおけるRAG(Retrieval-Augmented Generation)ソリューションの関連性を最大化するための手法とチュートリアルを提供していました。内容は333行に及び、主に以下の要素が含まれていました:

  1. 関連性調整:
    • 検索結果の関連性を改善するための戦略(L2セマンティックランキングやスコアリングプロファイルの設定など)についての理解を深めさせる内容。
  2. チュートリアルの内容:
    • 既存の検索インデックスとクエリを変更して、新たに追加された関連性機能を活用する方法が解説されていました。
    • デモ用のサンプルノートブックを用いた実際の手順とコード例。
  3. 削除の影響:
    • このチュートリアルの削除により、ユーザーはRAGソリューションの関連性を高めるための重要なリソースを失うこととなり、特に関連性のチューニングに関心のある開発者にとっては欠かせない情報が失われることになります。
    • 将来的に新たなリソースや代替のチュートリアルが提供される可能性があるものの、現在はこの特定のガイドラインが失われています。

この変更は、文書の整理や情報の更新の一環として行われたと考えられますが、ユーザーにとっては有益な情報源が失われた結果となります。新しい情報源や代替リソースの提供が期待されています。

articles/search/tutorial-rag-build-solution-minimize-storage.md

Diff
@@ -1,342 +0,0 @@
----
-title: 'Classic RAG tutorial: Minimize storage and costs'
-titleSuffix: Azure AI Search
-description: Compress vectors using narrow data types and scalar quantization. Remove extra copies of stored vectors to further save on space.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.topic: tutorial
-ms.date: 10/14/2025
-ms.custom: sfi-ropc-nochange
-
----
-
-# Tutorial: Minimize storage and costs (classic RAG in Azure AI Search)
-
-Azure AI Search offers several approaches for reducing the size of vector indexes. These approaches range from vector compression, to being more selective over what you store on your search service.
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-In this tutorial, you modify the existing search index to use:
-
-> [!div class="checklist"]
-> - Narrow data types
-> - Scalar quantization
-> - Reduced storage by opting out of vectors in search results
-
-This tutorial reprises the search index created by the [indexing pipeline](tutorial-rag-build-solution-pipeline.md). All of these updates affect the existing content, requiring you to rerun the indexer. However, instead of deleting the search index, you create a second one so that you can compare reductions in vector index size after adding the new capabilities.
-
-Altogether, the techniques illustrated in this tutorial can reduce vector storage by about half.
-
-The following screenshot compares the [first index](tutorial-rag-build-solution-pipeline.md) from a previous tutorial to the index built in this one.
-
-:::image type="content" source="media/tutorial-rag-solution/side-by-side-comparison.png" lightbox="media/tutorial-rag-solution/side-by-side-comparison.png" alt-text="Screenshot of the original vector index with the index created using the schema in this tutorial.":::
-
-## Prerequisites
-
-This tutorial is essentially a rerun of the [indexing pipeline](tutorial-rag-build-solution-pipeline.md). You need all of the Azure resources and permissions described in that tutorial.
-
-For comparison, you should have an existing *py-rag-tutorial-idx* index on your Azure AI Search service. It should be almost 2 MB in size, and the vector index portion should be 348 KB.
-
-You should also have the following objects:
-
-- py-rag-tutorial-ds (data source)
-
-- py-rag-tutorial-ss (skillset)
-
-## Download the sample
-
-[Download a Jupyter notebook](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Tutorial-RAG/Tutorial-rag.ipynb) from GitHub to send the requests to Azure AI Search. For more information, see [Downloading files from GitHub](https://docs.github.com/get-started/start-your-journey/downloading-files-from-github).
-
-## Update the index for reduced storage
-
-Azure AI Search has multiple approaches for reducing vector size, which lowers the cost of vector workloads. In this step, create a new index that uses the following capabilities:
-
-- Vector compression. Scalar quantization provides this capability.
-
-- Eliminate optional storage. If you only need vectors for queries and not in a response payload, you can drop the vector copy used for search results.
-
-- Narrow data types. You can specify `Collection(Edm.Half)` on the text_vector field to store incoming float32 dimensions as float16, which takes up less space in the index.
-
-All of these capabilities are specified in a search index. After you load the index, compare the difference between the original index and the new one.
-
-1. Name the new index `py-rag-tutorial-small-vectors-idx`.
-
-1. Use the following definition for the new index. The difference between this schema and the previous schema updates in [Maximize relevance](tutorial-rag-build-solution-maximize-relevance.md) are new classes for scalar quantization and a new compressions section, a new data type (`Collection(Edm.Half)`) for the text_vector field, and a new property `stored` set to false.
-
-    ```python
-    from azure.identity import DefaultAzureCredential
-    from azure.identity import get_bearer_token_provider
-    from azure.search.documents.indexes import SearchIndexClient
-    from azure.search.documents.indexes.models import (
-        SearchField,
-        SearchFieldDataType,
-        VectorSearch,
-        HnswAlgorithmConfiguration,
-        VectorSearchProfile,
-        AzureOpenAIVectorizer,
-        AzureOpenAIVectorizerParameters,
-        ScalarQuantizationCompression,
-        ScalarQuantizationParameters,
-        SearchIndex,
-        SemanticConfiguration,
-        SemanticPrioritizedFields,
-        SemanticField,
-        SemanticSearch,
-        ScoringProfile,
-        TagScoringFunction,
-        TagScoringParameters
-    )
-    
-    credential = DefaultAzureCredential()
-    
-    index_name = "py-rag-tutorial-small-vectors-idx"
-    index_client = SearchIndexClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-    fields = [
-        SearchField(name="parent_id", type=SearchFieldDataType.String),  
-        SearchField(name="title", type=SearchFieldDataType.String),
-        SearchField(name="locations", type=SearchFieldDataType.Collection(SearchFieldDataType.String), filterable=True),
-        SearchField(name="chunk_id", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True, analyzer_name="keyword"),  
-        SearchField(name="chunk", type=SearchFieldDataType.String, sortable=False, filterable=False, facetable=False),  
-        SearchField(name="text_vector", type="Collection(Edm.Half)", vector_search_dimensions=1024, vector_search_profile_name="myHnswProfile", stored= False)
-        ]  
-    
-    # Configure the vector search configuration  
-    vector_search = VectorSearch(  
-        algorithms=[  
-            HnswAlgorithmConfiguration(name="myHnsw"),
-        ],  
-        profiles=[  
-            VectorSearchProfile(  
-                name="myHnswProfile",  
-                algorithm_configuration_name="myHnsw",
-                compression_name="myScalarQuantization",
-                vectorizer_name="myOpenAI",  
-            )
-        ],  
-        vectorizers=[  
-            AzureOpenAIVectorizer(  
-                vectorizer_name="myOpenAI",  
-                kind="azureOpenAI",  
-                parameters=AzureOpenAIVectorizerParameters(  
-                    resource_url=AZURE_OPENAI_ACCOUNT,  
-                    deployment_name="text-embedding-3-large",
-                    model_name="text-embedding-3-large"
-                ),
-            ),  
-        ],
-        compressions=[
-            ScalarQuantizationCompression(
-                compression_name="myScalarQuantization",
-                rerank_with_original_vectors=True,
-                default_oversampling=10,
-                parameters=ScalarQuantizationParameters(quantized_data_type="int8"),
-            )
-        ]
-    )
-    
-    semantic_config = SemanticConfiguration(
-        name="my-semantic-config",
-        prioritized_fields=SemanticPrioritizedFields(
-            title_field=SemanticField(field_name="title"),
-            keywords_fields=[SemanticField(field_name="locations")],
-            content_fields=[SemanticField(field_name="chunk")]
-        )
-    )
-    
-    semantic_search = SemanticSearch(configurations=[semantic_config])
-    
-    scoring_profiles = [  
-        ScoringProfile(  
-            name="my-scoring-profile",
-            functions=[
-                TagScoringFunction(  
-                    field_name="locations",  
-                    boost=5.0,  
-                    parameters=TagScoringParameters(  
-                        tags_parameter="tags",  
-                    ),  
-                ) 
-            ]
-        )
-    ]
-    
-    index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search, semantic_search=semantic_search, scoring_profiles=scoring_profiles)  
-    result = index_client.create_or_update_index(index)  
-    print(f"{result.name} created")
-    ```
-
-## Create or reuse the data source
-
-Here's the definition of the data source from the previous tutorial. If you already have this data source on your search service, you can skip creating a new one.
-
-```python
-from azure.search.documents.indexes import SearchIndexerClient
-from azure.search.documents.indexes.models import (
-    SearchIndexerDataContainer,
-    SearchIndexerDataSourceConnection
-)
-
-# Create a data source 
-indexer_client = SearchIndexerClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)
-container = SearchIndexerDataContainer(name="nasa-ebooks-pdfs-all")
-data_source_connection = SearchIndexerDataSourceConnection(
-    name="py-rag-tutorial-ds",
-    type="azureblob",
-    connection_string=AZURE_STORAGE_CONNECTION,
-    container=container
-)
-data_source = indexer_client.create_or_update_data_source_connection(data_source_connection)
-
-print(f"Data source '{data_source.name}' created or updated")
-```
-
-## Create or reuse the skillset
-
-The skillset is also unchanged from the previous tutorial. Here it is again so that you can review it.
-
-```python
-from azure.search.documents.indexes.models import (
-    SplitSkill,
-    InputFieldMappingEntry,
-    OutputFieldMappingEntry,
-    AzureOpenAIEmbeddingSkill,
-    EntityRecognitionSkill,
-    SearchIndexerIndexProjection,
-    SearchIndexerIndexProjectionSelector,
-    SearchIndexerIndexProjectionsParameters,
-    IndexProjectionMode,
-    SearchIndexerSkillset,
-    CognitiveServicesAccountKey
-)
-
-# Create a skillset  
-skillset_name = "py-rag-tutorial-ss"
-
-split_skill = SplitSkill(  
-    description="Split skill to chunk documents",  
-    text_split_mode="pages",  
-    context="/document",  
-    maximum_page_length=2000,  
-    page_overlap_length=500,  
-    inputs=[  
-        InputFieldMappingEntry(name="text", source="/document/content"),  
-    ],  
-    outputs=[  
-        OutputFieldMappingEntry(name="textItems", target_name="pages")  
-    ],  
-)  
-  
-embedding_skill = AzureOpenAIEmbeddingSkill(  
-    description="Skill to generate embeddings via Azure OpenAI",  
-    context="/document/pages/*",  
-    resource_url=AZURE_OPENAI_ACCOUNT,  
-    deployment_name="text-embedding-3-large",  
-    model_name="text-embedding-3-large",
-    dimensions=1536,
-    inputs=[  
-        InputFieldMappingEntry(name="text", source="/document/pages/*"),  
-    ],  
-    outputs=[  
-        OutputFieldMappingEntry(name="embedding", target_name="text_vector")  
-    ],  
-)
-
-entity_skill = EntityRecognitionSkill(
-    description="Skill to recognize entities in text",
-    context="/document/pages/*",
-    categories=["Location"],
-    default_language_code="en",
-    inputs=[
-        InputFieldMappingEntry(name="text", source="/document/pages/*")
-    ],
-    outputs=[
-        OutputFieldMappingEntry(name="locations", target_name="locations")
-    ]
-)
-  
-index_projections = SearchIndexerIndexProjection(  
-    selectors=[  
-        SearchIndexerIndexProjectionSelector(  
-            target_index_name=index_name,  
-            parent_key_field_name="parent_id",  
-            source_context="/document/pages/*",  
-            mappings=[  
-                InputFieldMappingEntry(name="chunk", source="/document/pages/*"),  
-                InputFieldMappingEntry(name="text_vector", source="/document/pages/*/text_vector"),
-                InputFieldMappingEntry(name="locations", source="/document/pages/*/locations"),  
-                InputFieldMappingEntry(name="title", source="/document/metadata_storage_name"),  
-            ],  
-        ),  
-    ],  
-    parameters=SearchIndexerIndexProjectionsParameters(  
-        projection_mode=IndexProjectionMode.SKIP_INDEXING_PARENT_DOCUMENTS  
-    ),  
-) 
-
-cognitive_services_account = CognitiveServicesAccountKey(key=AZURE_AI_FOUNDRY_KEY)
-
-skills = [split_skill, embedding_skill, entity_skill]
-
-skillset = SearchIndexerSkillset(  
-    name=skillset_name,  
-    description="Skillset to chunk documents and generating embeddings",  
-    skills=skills,  
-    index_projection=index_projections,
-    cognitive_services_account=cognitive_services_account
-)
-  
-client = SearchIndexerClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-client.create_or_update_skillset(skillset)  
-print(f"{skillset.name} created")
-```
-
-## Create a new indexer and load the index
-
-Although you could reset and rerun the existing indexer using the new index, it's just as easy to create a new indexer. Having two indexes and indexers preserves the execution history and allows for closer comparisons.
-
-This indexer is identical to the previous indexer, except that it specifies the new index from this tutorial.
-
-```python
-from azure.search.documents.indexes.models import (
-    SearchIndexer
-)
-
-# Create an indexer  
-indexer_name = "py-rag-tutorial-small-vectors-idxr" 
-
-indexer_parameters = None
-
-indexer = SearchIndexer(  
-    name=indexer_name,  
-    description="Indexer to index documents and generate embeddings",
-    target_index_name="py-rag-tutorial-small-vectors-idx",
-    skillset_name="py-rag-tutorial-ss", 
-    data_source_name="py-rag-tutorial-ds",
-    parameters=indexer_parameters
-)  
-
-# Create and run the indexer  
-indexer_client = SearchIndexerClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-indexer_result = indexer_client.create_or_update_indexer(indexer)  
-
-print(f' {indexer_name} is created and running. Give the indexer a few minutes before running a query.')
-```
-
-As a final step, switch to the Azure portal to compare the vector storage requirements for the two indexes. You should see results similar to the following screenshot.
-
-:::image type="content" source="media/tutorial-rag-solution/side-by-side-comparison.png" lightbox="media/tutorial-rag-solution/side-by-side-comparison.png" alt-text="Screenshot of the original vector index with the index created using the schema in this tutorial.":::
-
-The index created in this tutorial uses half-precision floating-point numbers (float16) for the text vectors. This reduces the storage requirements for the vectors by half compared to the previous index that used single-precision floating-point numbers (float32). Scalar compression and the omission of one set of the vectors account for the remaining storage savings. For more information about reducing vector size, see [Choose an approach for optimizing vector storage and processing](vector-search-how-to-configure-compression-storage.md).
-
-Consider revisiting the [queries from the previous tutorial](tutorial-rag-build-solution-query.md) so that you can compare query speed and utility. You should expect some variation in LLM output whenever you repeat a query, but in general the storage-saving techniques you implemented shouldn't degrade the quality of your search results.
-
-## Next step
-
-We recommend this accelerator for your next step:
-
-> [!div class="nextstepaction"]
-> [RAG Experiment Accelerator](https://github.com/microsoft/rag-experiment-accelerator)
\ No newline at end of file

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: ストレージとコストの最小化"
}

Explanation

This change indicates that the file "tutorial-rag-build-solution-minimize-storage.md" has been removed entirely. The file explained techniques for shrinking the storage footprint and reducing the cost of a RAG (Retrieval-Augmented Generation) solution in Azure AI Search. It ran to 342 lines and covered the following key points:

  1. Storage-reduction techniques:
    • Concrete methods for shrinking a vector index, such as vector compression, scalar quantization, and opting out of optional vector storage.
  2. Tutorial content:
    • Step-by-step instructions for creating a new index that implements the storage optimizations.
    • Concrete code examples for storing vector data more efficiently, along with a way to compare the two indexes.
  3. Impact of the removal:
    • Removing this tutorial means an important source of information on storage and cost optimization is gone, a loss that is especially significant for developers.
    • Replacement resources or new tutorials may be provided later, but for now the concrete guidance on storage optimization is no longer available.

This change appears to be part of a document cleanup and resource refresh, but users have lost useful information about reducing storage. New or alternative tutorials are expected.
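
For readers who relied on the removed page, here is a compact sketch of the three storage-reduction settings it described, extracted from the index definition in the diff above; the field, profile, and compression names follow that schema.

```python
from azure.search.documents.indexes.models import (
    ScalarQuantizationCompression,
    ScalarQuantizationParameters,
    SearchField,
)

# 1. Narrow data type: store float32 embeddings as half-precision (Edm.Half).
# 2. stored=False: drop the retrievable copy of the vectors kept for results.
vector_field = SearchField(
    name="text_vector",
    type="Collection(Edm.Half)",
    vector_search_dimensions=1024,
    vector_search_profile_name="myHnswProfile",
    stored=False,
)

# 3. Scalar quantization: compress the vectors used by the HNSW graph to int8,
#    reranking with the original vectors and oversampling to offset the loss.
compression = ScalarQuantizationCompression(
    compression_name="myScalarQuantization",
    rerank_with_original_vectors=True,
    default_oversampling=10,
    parameters=ScalarQuantizationParameters(quantized_data_type="int8"),
)
```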

articles/search/tutorial-rag-build-solution-models.md

Diff
@@ -1,172 +0,0 @@
----
-title: 'Classic RAG tutorial: Set up models'
-titleSuffix: Azure AI Search
-description: Set up an embedding model and chat model for generative search (RAG).
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.topic: tutorial
-ms.custom: references_regions
-ms.date: 10/14/2025
-
----
-
-# Tutorial: Choose embedding and chat models for classic RAG in Azure AI Search
-
-A RAG solution built on Azure AI Search takes a dependency on embedding models for vectorization, and on chat completion models for conversational search over your data.
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> - Learn about the Azure models supported for built-in vectorization
-> - Learn about the Azure models supported for chat completion
-> - Deploy models and collect model information for your code
-> - Configure search engine access to Azure models
-> - Learn about custom skills and vectorizers for attaching non-Azure models
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn) before you begin.
-
-## Prerequisites
-
-- The Azure portal, used to deploy models and configure role assignments in the Azure cloud.
-
-- An **Owner** or **User Access Administrator** role on your Azure subscription, necessary for creating role assignments. You use at least three Azure resources in this tutorial. The connections are authenticated using Microsoft Entra ID, which requires the ability to create roles. Role assignments for connecting to models are documented in this article. If you can't create roles, you can use [API keys](search-security-api-keys.md) instead.
-
-- A model provider, such as [Azure OpenAI](/azure/ai-services/openai/how-to/create-resource), Azure Vision in Foundry Tools via a [Microsoft Foundry resource](/azure/ai-services/multi-service-resource), or the [Foundry model catalog](https://ai.azure.com/?cid=learnDocs). For Azure Vision, ensure that your Foundry resource is in the same region as [Azure AI Search](search-region-support.md) and the [Azure Vision multimodal APIs](/azure/ai-services/computer-vision/overview-image-analysis?tabs=4-0#region-availability).
-
-  We use Azure OpenAI in this tutorial. Other providers are listed so that you know your options for integrated vectorization.
-
-- Azure AI Search, Basic tier or higher provides a [managed identity](search-how-to-managed-identities.md) used in role assignments.
-
-## Review models supporting built-in vectorization
-
-Vectorized content improves the query results in a RAG solution. Azure AI Search supports a built-in vectorization action in an indexing pipeline. It also supports vectorization at query time, converting text or image inputs into embeddings for a vector search. In this step, identify an embedding model that works for your content and queries. If you're providing raw vector data and raw vector queries, or if your RAG solution doesn't include vector data, skip this step.
-
-Vector queries that include a text-to-vector conversion step must use the same embedding model that was used during indexing. The search engine doesn't throw an error if you use different models, but you get poor results.
-
-To meet the same-model requirement, choose embedding models that can be referenced through *skills* during indexing and through *vectorizers* during query execution. The following table lists the skill and vectorizer pairs. To see how the embedding models are used, skip ahead to [Create an indexing pipeline](tutorial-rag-build-solution-pipeline.md) for code that calls an embedding skill and a matching vectorizer. 
-
-Azure AI Search provides skill and vectorizer support for the following embedding models in the Azure cloud.
-
-| Client | Embedding models | Skill | Vectorizer |
-|--------|------------------|-------|------------|
-| Azure OpenAI | text-embedding-ada-002<br>text-embedding-3-large<br>text-embedding-3-small | [AzureOpenAIEmbedding](cognitive-search-skill-azure-openai-embedding.md) | [AzureOpenAIEmbedding](vector-search-vectorizer-azure-open-ai.md) |
-| Azure Vision | multimodal 4.0 <sup>1</sup> | [AzureAIVision](cognitive-search-skill-vision-vectorize.md) | [AzureAIVision](vector-search-vectorizer-ai-services-vision.md) |
-| Foundry model catalog | Cohere-embed-v3-english <sup>1</sup><br>Cohere-embed-v3-multilingual <sup>1</sup><br>Cohere-embed-v4 <sup>1, 2</sup> | [AML](cognitive-search-aml-skill.md) <sup>3</sup> | [Foundry model catalog](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) |
-
-<sup>1</sup> Supports text and image vectorization.
-
-<sup>2</sup> At this time, you can only specify `embed-v-4-0` programmatically through the [AML skill](cognitive-search-aml-skill.md) or [Microsoft Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md), not through the Azure portal. However, you can use the portal to manage the skillset or vectorizer afterward.
-
-<sup>3</sup> Deployed models in the model catalog are accessed over an AML endpoint. We use the existing AML skill for this connection.
-
-You can use other models besides the ones listed here. For more information, see [Use non-Azure models for embeddings](#use-non-azure-models-for-embeddings) in this article.
-
-> [!NOTE]
-> Inputs to an embedding model are typically chunked data. In an Azure AI Search RAG pattern, chunking is handled in the indexer pipeline, covered in [another tutorial](tutorial-rag-build-solution-pipeline.md) in this series.
-
-## Review models used for generative AI at query time
-
-Azure AI Search doesn't have integration code for chat models, so you should choose an LLM that you're familiar with and that meets your requirements. You can modify query code to try different models without having to rebuild an index or rerun any part of the indexing pipeline. Review [Search and generate answers](tutorial-rag-build-solution-query.md) for code that calls the chat model.
-
-The following models are commonly used for a chat search experience:
-
-| Client | Chat models |
-|--------|------------|
-| Azure OpenAI | GPT-4<br>GPT-4o<br>GPT-4.1<br>GPT-5 |
-
-GPT-4 and GPT-5 models are optimized to work with inputs formatted as a conversation.
-
-We use GPT-4o in this tutorial.
-
-## Deploy models and collect information
-
-Models must be deployed and accessible through an endpoint. Both embedding-related skills and vectorizers need the number of dimensions and the model name. 
-
-This tutorial series uses the following models and model providers:
-
-- Text-embedding-3-large on Azure OpenAI for embeddings
-- GPT-4o on Azure OpenAI for chat completion
-
-You must have [**Cognitive Services OpenAI Contributor**]( /azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor) or higher to deploy models in Azure OpenAI.
-
-1. Sign in to the [Foundry portal](https://ai.azure.com/?cid=learnDocs).
-
-1. Select **text-embedding-3-large**, and then select **Use this model**.
-
-1. Specify a deployment name. We recommend **text-embedding-3-large**.
-
-1. Accept the defaults.
-
-1. Select **Deploy**.
-
-1. Repeat the previous steps for **gpt-4o**.
-
-1. Make a note of the model names and endpoint. Embedding skills and vectorizers assemble the full endpoint internally, so you only need the resource URI. For example, given `https://MY-FAKE-ACCOUNT.openai.azure.com/openai/deployments/text-embedding-3-large/embeddings?api-version=2024-06-01`, the endpoint you should provide in skill and vectorizer definitions is `https://MY-FAKE-ACCOUNT.openai.azure.com`.
-
-## Configure search engine access to Azure models
-
-For pipeline and query execution, this tutorial uses Microsoft Entra ID for authentication and roles for authorization. 
-
-Assign yourself and the search service identity permissions on Azure OpenAI. The code for this tutorial runs locally. Requests to Azure OpenAI originate from your system. Also, search results from the search engine are passed to Azure OpenAI. For these reasons, both you and the search service need permissions on Azure OpenAI.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
-
-1. Configure Azure AI Search to [use a system-managed identity](search-how-to-managed-identities.md).
-
-1. Find your Azure OpenAI resource.
-
-1. Select **Access control (IAM)** on the left menu. 
-
-1. Select **Add role assignment**.
-
-1. Select [**Cognitive Services OpenAI User**](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-userpermissions).
-
-1. Select **Managed identity** and then select **Members**. Find the system-managed identity for your search service in the dropdown list.
-
-1. Next, select **User, group, or service principal** and then select **Members**. Search for your user account and then select it from the dropdown list.
-
-1. Make sure you have two security principals assigned to the role.
-
-1. Select **Review and Assign** to create the role assignments.
-
-For access to models on Azure Vision, assign **Cognitive Services OpenAI User**. For Foundry, assign **Azure AI Developer**.
-
-## Use non-Azure models for embeddings
-
-The pattern for integrating any embedding model is to wrap it in a custom skill and custom vectorizer. This section provides links to reference articles. For a code example that calls a non-Azure model, see [custom-embeddings demo](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/custom-vectorizer/readme.md).
-
-| Client | Embedding models | Skill | Vectorizer |
-|--------|------------------|-------|------------|
-| Any | Any | [custom skill](cognitive-search-custom-skill-web-api.md) | [custom vectorizer](vector-search-vectorizer-custom-web-api.md) |
-
-<!-- In this tutorial,  Learn how to set up connections so that Azure AI Search can connect securely during indexing, and at query time for generative AI responses and text-to-vector conversions of query strings.
-
-Objective:
-
-- Identify an embedding model and chat model for your RAG workflow.
-
-Key points:
-
-- Built-in integration for models hosted in the Azure cloud.
-- For chunking, use the native Text Split skill with overlapping text -- or -- for semantic chunking, use Azure Document Intelligence in Foundry Tools.
-- For embedding during indexing, use a skill that points to Azure OpenAI, Azure Vision, or the model catalog. Alternatively, use custom skill with HTTP endpoint to external model.
-- For queries, same embedding models as above, but you're wrapping it in a "vectorizer" instead of a "skill".
-- Use the same embedding model for indexing and text-to-vector queries. If you want to try a different model, it's a rebuild. An indexer pipeline like the one used in this tutorial makes this step easy.
-- For chat, same location requirements and providers, except no Azure Vision. You specify a chat model in your query logic. Unlike embedding, you can swap these around at query time to see what they do.
-
-Tasks:
-
-- H2: Identify the models for which we have skills/vectorizers and provide locations (model catalog, Azure OpenAI, etc). Crosslink to model deployment instructions. Include steps for getting endpoints, model version, deployment name, REST API version.
-- H2: How to use other models (create a custom skill, create a custom vectorizer).
-- H2: How to configure access. Set up an Azure AI Search managed identity, give it permissions on Azure-hosted models. -->
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Design an index](tutorial-rag-build-solution-index-schema.md)
\ No newline at end of file

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: モデルの設定"
}

Explanation

This change indicates that the file "tutorial-rag-build-solution-models.md" has been removed entirely. The file explained how to set up the embedding model and chat model for a RAG (Retrieval-Augmented Generation) solution in Azure AI Search. It ran to 172 lines and consisted mainly of the following:

  1. Model selection and requirements:
    • Detailed the embedding and chat model options for a RAG solution and the criteria for choosing among them.
  2. Implementation steps:
    • Covered concrete procedures such as deploying Azure models, collecting model information, and configuring search engine access to the Azure models.
  3. Using custom skills and vectorizers:
    • Also described how to configure custom skills and vectorizers for non-Azure models, presenting a range of options.
  4. Impact of the removal:
    • With this tutorial removed, important information about setting up the models required for a RAG solution is lost, and developers lose a significant resource for model selection and implementation.
    • Replacement guidance is expected eventually, but for now the loss of this particular tutorial will be felt by many users.

This change may have been made as part of resource cleanup and content updates, but it means the loss of a useful learning and implementation resource. New or updated material is hoped for.
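
To preserve the central point of the removed page, here is a minimal sketch, based on code from the other removed tutorials in this series, of pairing an embedding skill with a matching vectorizer so that indexing and query-time vectorization use the same model. The Azure OpenAI endpoint is a placeholder, and the object names are illustrative.

```python
from azure.search.documents.indexes.models import (
    AzureOpenAIEmbeddingSkill,
    AzureOpenAIVectorizer,
    AzureOpenAIVectorizerParameters,
    InputFieldMappingEntry,
    OutputFieldMappingEntry,
)

# Placeholder endpoint; replace with your Azure OpenAI resource URI.
AZURE_OPENAI_ACCOUNT = "https://<your-openai-resource>.openai.azure.com"

# Indexing side: a skill that embeds each chunk with text-embedding-3-large.
embedding_skill = AzureOpenAIEmbeddingSkill(
    description="Skill to generate embeddings via Azure OpenAI",
    context="/document/pages/*",
    resource_url=AZURE_OPENAI_ACCOUNT,
    deployment_name="text-embedding-3-large",
    model_name="text-embedding-3-large",
    dimensions=1024,
    inputs=[InputFieldMappingEntry(name="text", source="/document/pages/*")],
    outputs=[OutputFieldMappingEntry(name="embedding", target_name="text_vector")],
)

# Query side: a vectorizer that converts query strings with the same model,
# so query vectors and indexed vectors share one embedding space.
vectorizer = AzureOpenAIVectorizer(
    vectorizer_name="myOpenAI",
    kind="azureOpenAI",
    parameters=AzureOpenAIVectorizerParameters(
        resource_url=AZURE_OPENAI_ACCOUNT,
        deployment_name="text-embedding-3-large",
        model_name="text-embedding-3-large",
    ),
)
```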

articles/search/tutorial-rag-build-solution-pipeline.md

Diff
@@ -1,406 +0,0 @@
----
-title: 'Classic RAG tutorial: Build an indexing pipeline'
-titleSuffix: Azure AI Search
-description: Create an indexer-driven pipeline that loads, chunks, embeds, and ingests content for RAG solutions on Azure AI Search.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.topic: tutorial
-ms.date: 10/14/2025
-ms.custom:
-  - ignite-2024
-  - sfi-ropc-nochange
----
-
-# Tutorial: Build an indexing pipeline for classic RAG on Azure AI Search
-
-Learn how to build an automated indexing pipeline for a RAG solution on Azure AI Search. Indexing automation is through an indexer that drives indexing and skillset execution, providing [integrated data chunking and vectorization](vector-search-integrated-vectorization.md) on a one-time or recurring basis for incremental updates.
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> - Provide the index schema from the previous tutorial
-> - Create a data source connection
-> - Create an indexer
-> - Create a skillset that chunks, vectorizes, and recognizes entities
-> - Run the indexer and check results
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn) before you begin.
-
-> [!TIP]
-> You can use the [**Import data (new)** wizard](search-import-data-portal.md) to create your pipeline. Try a quickstart, such as [Image search](search-get-started-portal-image-search.md) or [Vector search](search-get-started-portal-import-vectors.md), to learn more about the pipeline and its moving parts.
-
-## Prerequisites
-
-- [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and the [Jupyter package](https://pypi.org/project/jupyter/). For more information, see [Python in Visual Studio Code](https://code.visualstudio.com/docs/languages/python).
-
-- [Azure Storage](/azure/storage/common/storage-account-create) general purpose account. This exercise uploads PDF files into blob storage for automated indexing.
-
-- [Azure AI Search](search-create-service-portal.md), Basic tier or above for managed identity and semantic ranking. Choose a region that's shared with Foundry Tools.
-
-- [Azure OpenAI](/azure/ai-services/openai/how-to/create-resource), with a deployment of text-embedding-3-large. For more information about embedding models used in RAG solutions, see [Choose embedding models for RAG in Azure AI Search](tutorial-rag-build-solution-models.md).
-
-- [Microsoft Foundry](/azure/ai-services/multi-service-resource), in the same region as Azure AI Search. This resource is used for the Entity Recognition skill that detects locations in your content.
-
-## Download the sample
-
-[Download a Jupyter notebook](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Tutorial-RAG/Tutorial-rag.ipynb) from GitHub to send the requests to Azure AI Search. For more information, see [Downloading files from GitHub](https://docs.github.com/get-started/start-your-journey/downloading-files-from-github).
-
-## Provide the index schema
-
-Open or create a Jupyter notebook (`.ipynb`) in Visual Studio Code to contain the scripts that comprise the pipeline. Initial steps install packages and collect variables for the connections. After you complete the setup steps, you're ready to begin with the components of the indexing pipeline. 
-
-Let's start with the index schema from the [previous tutorial](tutorial-rag-build-solution-index-schema.md). It's organized around vectorized and nonvectorized chunks. It includes a `locations` field that stores AI-generated content created by the skillset.
-
-```python
-from azure.identity import DefaultAzureCredential
-from azure.identity import get_bearer_token_provider
-from azure.search.documents.indexes import SearchIndexClient
-from azure.search.documents.indexes.models import (
-    SearchField,
-    SearchFieldDataType,
-    VectorSearch,
-    HnswAlgorithmConfiguration,
-    VectorSearchProfile,
-    AzureOpenAIVectorizer,
-    AzureOpenAIVectorizerParameters,
-    SearchIndex
-)
-
-credential = DefaultAzureCredential()
-
-# Create a search index  
-index_name = "py-rag-tutorial-idx"
-index_client = SearchIndexClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-fields = [
-    SearchField(name="parent_id", type=SearchFieldDataType.String),  
-    SearchField(name="title", type=SearchFieldDataType.String),
-    SearchField(name="locations", type=SearchFieldDataType.Collection(SearchFieldDataType.String), filterable=True),
-    SearchField(name="chunk_id", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True, analyzer_name="keyword"),  
-    SearchField(name="chunk", type=SearchFieldDataType.String, sortable=False, filterable=False, facetable=False),  
-    SearchField(name="text_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), vector_search_dimensions=1024, vector_search_profile_name="myHnswProfile")
-    ]  
-  
-# Configure the vector search configuration  
-vector_search = VectorSearch(  
-    algorithms=[  
-        HnswAlgorithmConfiguration(name="myHnsw"),
-    ],  
-    profiles=[  
-        VectorSearchProfile(  
-            name="myHnswProfile",  
-            algorithm_configuration_name="myHnsw",  
-            vectorizer_name="myOpenAI",  
-        )
-    ],  
-    vectorizers=[  
-        AzureOpenAIVectorizer(  
-            vectorizer_name="myOpenAI",  
-            kind="azureOpenAI",  
-            parameters=AzureOpenAIVectorizerParameters(  
-                resource_url=AZURE_OPENAI_ACCOUNT,  
-                deployment_name="text-embedding-3-large",
-                model_name="text-embedding-3-large"
-            ),
-        ),  
-    ], 
-)  
-  
-# Create the search index
-index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search)  
-result = index_client.create_or_update_index(index)  
-print(f"{result.name} created")  
-```
-
-## Create a data source connection
-
-In this step, set up the sample data and a connection from Azure AI Search to Azure Blob Storage. The indexer retrieves PDFs from a container. You create the container and upload files in this step.
-
-The original ebook is large, over 100 pages and 35 MB in size. We broke it up into smaller PDFs, one per page of text, to stay under the [document limit for indexers](search-limits-quotas-capacity.md#indexer-limits) of 16 MB per API call and also the [AI enrichment data limits](search-limits-quotas-capacity.md#data-limits-ai-enrichment). For simplicity, we omit image vectorization for this exercise.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and find your Azure Storage account.
-
-1. Create a container and upload the PDFs from [earth_book_2019_text_pages](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/nasa-e-book/earth_book_2019_text_pages).
-
-1. Make sure your [Azure AI Search managed identity](search-how-to-managed-identities.md) has a [**Storage Blob Data Reader**](/azure/role-based-access-control/role-assignments-portal) role assignment on Azure Storage.
-
-1. Next, in Visual Studio Code, define an indexer data source that provides connection information during indexing.
-
-    ```python
-    from azure.search.documents.indexes import SearchIndexerClient
-    from azure.search.documents.indexes.models import (
-        SearchIndexerDataContainer,
-        SearchIndexerDataSourceConnection
-    )
-    
-    # Create a data source 
-    indexer_client = SearchIndexerClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)
-    container = SearchIndexerDataContainer(name="nasa-ebooks-pdfs-all")
-    data_source_connection = SearchIndexerDataSourceConnection(
-        name="py-rag-tutorial-ds",
-        type="azureblob",
-        connection_string=AZURE_STORAGE_CONNECTION,
-        container=container
-    )
-    data_source = indexer_client.create_or_update_data_source_connection(data_source_connection)
-    
-    print(f"Data source '{data_source.name}' created or updated")
-    ```
-
-If you set up a [managed identity for an Azure AI Search connection to Azure Storage](search-howto-managed-identities-storage.md), the data source connection string includes a `ResourceId=` suffix. It should look similar to the following example: `"ResourceId=/subscriptions/FAKE-SUBSCRIPTION-ID/resourceGroups/FAKE-RESOURCE-GROUP/providers/Microsoft.Storage/storageAccounts/FAKE-ACCOUNT;"`
-
-## Create a skillset
-
-Skills are the basis for integrated data chunking and vectorization. At a minimum, you want a Text Split skill to chunk your content, and an embedding skill that creates vector representations of your chunked content.
-
-In this skillset, an extra skill is used to create structured data in the index. The [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md) is used to identify locations, which can range from proper names to generic references, such as "ocean" or "mountain". Having structured data gives you more options for creating interesting queries and boosting relevance.
-
-The AZURE_AI_FOUNDRY_KEY is needed even if you're using role-based access control. Azure AI Search uses the key for billing purposes and it's required unless your workloads stay under the free limit. You can also set up a keyless connection if you're using the most recent preview API or beta packages. For more information, see [Attach a billable resource to a skillset](cognitive-search-attach-cognitive-services.md).
-
-```python
-from azure.search.documents.indexes.models import (
-    SplitSkill,
-    InputFieldMappingEntry,
-    OutputFieldMappingEntry,
-    AzureOpenAIEmbeddingSkill,
-    EntityRecognitionSkill,
-    SearchIndexerIndexProjection,
-    SearchIndexerIndexProjectionSelector,
-    SearchIndexerIndexProjectionsParameters,
-    IndexProjectionMode,
-    SearchIndexerSkillset,
-    CognitiveServicesAccountKey
-)
-
-# Create a skillset  
-skillset_name = "py-rag-tutorial-ss"
-
-split_skill = SplitSkill(  
-    description="Split skill to chunk documents",  
-    text_split_mode="pages",  
-    context="/document",  
-    maximum_page_length=2000,  
-    page_overlap_length=500,  
-    inputs=[  
-        InputFieldMappingEntry(name="text", source="/document/content"),  
-    ],  
-    outputs=[  
-        OutputFieldMappingEntry(name="textItems", target_name="pages")  
-    ],  
-)  
-  
-embedding_skill = AzureOpenAIEmbeddingSkill(  
-    description="Skill to generate embeddings via Azure OpenAI",  
-    context="/document/pages/*",  
-    resource_url=AZURE_OPENAI_ACCOUNT,  
-    deployment_name="text-embedding-3-large",  
-    model_name="text-embedding-3-large",
-    dimensions=1024,
-    inputs=[  
-        InputFieldMappingEntry(name="text", source="/document/pages/*"),  
-    ],  
-    outputs=[  
-        OutputFieldMappingEntry(name="embedding", target_name="text_vector")  
-    ],  
-)
-
-entity_skill = EntityRecognitionSkill(
-    description="Skill to recognize entities in text",
-    context="/document/pages/*",
-    categories=["Location"],
-    default_language_code="en",
-    inputs=[
-        InputFieldMappingEntry(name="text", source="/document/pages/*")
-    ],
-    outputs=[
-        OutputFieldMappingEntry(name="locations", target_name="locations")
-    ]
-)
-  
-index_projections = SearchIndexerIndexProjection(  
-    selectors=[  
-        SearchIndexerIndexProjectionSelector(  
-            target_index_name=index_name,  
-            parent_key_field_name="parent_id",  
-            source_context="/document/pages/*",  
-            mappings=[  
-                InputFieldMappingEntry(name="chunk", source="/document/pages/*"),  
-                InputFieldMappingEntry(name="text_vector", source="/document/pages/*/text_vector"),
-                InputFieldMappingEntry(name="locations", source="/document/pages/*/locations"),  
-                InputFieldMappingEntry(name="title", source="/document/metadata_storage_name"),  
-            ],  
-        ),  
-    ],  
-    parameters=SearchIndexerIndexProjectionsParameters(  
-        projection_mode=IndexProjectionMode.SKIP_INDEXING_PARENT_DOCUMENTS  
-    ),  
-) 
-
-cognitive_services_account = CognitiveServicesAccountKey(key=AZURE_AI_FOUNDRY_KEY)
-
-skills = [split_skill, embedding_skill, entity_skill]
-
-skillset = SearchIndexerSkillset(  
-    name=skillset_name,  
-    description="Skillset to chunk documents and generating embeddings",  
-    skills=skills,  
-    index_projection=index_projections,
-    cognitive_services_account=cognitive_services_account
-)
-  
-client = SearchIndexerClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-client.create_or_update_skillset(skillset)  
-print(f"{skillset.name} created")
-```
-
-## Create and run the indexer
-
-Indexers are the component that sets all of the processes in motion. You can create an indexer in a disabled state, but the default is to run it immediately. In this tutorial, create and run the indexer to retrieve the data from Blob storage, execute the skills, including chunking and vectorization, and load the index.
-
-The indexer takes several minutes to run. When it's done, you can move on to the final step: querying your index.
-
-```python
-from azure.search.documents.indexes.models import (
-    SearchIndexer,
-    FieldMapping
-)
-
-# Create an indexer  
-indexer_name = "py-rag-tutorial-idxr" 
-
-indexer_parameters = None
-
-indexer = SearchIndexer(  
-    name=indexer_name,  
-    description="Indexer to index documents and generate embeddings",  
-    skillset_name=skillset_name,  
-    target_index_name=index_name,  
-    data_source_name=data_source.name,
-    # Map the metadata_storage_name field to the title field in the index to display the PDF title in the search results  
-    field_mappings=[FieldMapping(source_field_name="metadata_storage_name", target_field_name="title")],
-    parameters=indexer_parameters
-)  
-
-# Create and run the indexer  
-indexer_client = SearchIndexerClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential)  
-indexer_result = indexer_client.create_or_update_indexer(indexer)  
-
-print(f' {indexer_name} is created and running. Give the indexer a few minutes before running a query.')    
-```
-
-## Run a query to check results
-
-Send a query to confirm your index is operational. This request converts the text string "`what's NASA's website?`" into a vector for a vector search. Results consist of the fields in the select statement, some of which are printed as output.
-
-There's no chat or generative AI at this point. The results are verbatim content from your search index.
-
-```python
-from azure.search.documents import SearchClient
-from azure.search.documents.models import VectorizableTextQuery
-
-# Vector Search using text-to-vector conversion of the querystring
-query = "what's NASA's website?"  
-
-search_client = SearchClient(endpoint=AZURE_SEARCH_SERVICE, credential=credential, index_name=index_name)
-vector_query = VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")
-  
-results = search_client.search(  
-    search_text=query,  
-    vector_queries= [vector_query],
-    select=["chunk"],
-    top=1
-)  
-  
-for result in results:  
-    print(f"Score: {result['@search.score']}")
-    print(f"Chunk: {result['chunk']}")
-```
-
-This query returns a single match (`top=1`) consisting of the one chunk determined by the search engine to be the most relevant. Results from the query should look similar to the following example:
-
-```
-Score: 0.01666666753590107
-Chunk: national Aeronautics and Space Administration
-
-earth Science
-
-NASA Headquarters 
-
-300 E Street SW 
-
-Washington, DC 20546
-
-www.nasa.gov
-
-np-2018-05-2546-hQ
-```
-
-Try a few more queries to get a sense of what the search engine returns directly so that you can compare it with an LLM-enabled response. Rerun the previous script with this query: `"patagonia geography"` and set `top` to 3 to return more than one response.
-
-Results from this second query should look similar to the following results, which are lightly edited for concision. The output is copied from the notebook, which truncates the response to what you see in this example. You can expand the cell output to review the complete answer.
-
-```
-Score: 0.03306011110544205
-Chunk: 
-
-Swirling Bloom off Patagonia
-Argentina
-
-Interesting art often springs out of the convergence of different ideas and influences. 
-And so it is with nature. 
-
-Off the coast of Argentina, two strong ocean currents converge and often stir up a colorful 
-brew, as shown in this Aqua image from 
-
-December 2010. 
-
-This milky green and blue bloom formed on the continental shelf off of Patagonia, where warmer, 
-saltier waters from the subtropics 
-
-meet colder, fresher waters flowing from the south. Where these currents collide, turbulent 
-eddies and swirls form, pulling nutrients 
-
-up from the deep ocean. The nearby Rio de la Plata also deposits nitrogen- and iron-laden 
-sediment into the sea. Add in some 
-...
-
-while others terminate in water. The San Rafael and San Quintín glaciers (shown at the right) 
-are the icefield’s largest. Both have 
-
-been receding rapidly in the past 30 years.
-```
-
-With this example, it's easier to spot how chunks are returned verbatim, and how keyword and similarity search identify top matches. This specific chunk definitely has information about Patagonia and geography, but it's not exactly relevant to the query. Semantic ranker would promote more relevant chunks for a better answer, but as a next step, let's see how to connect Azure AI Search to an LLM for conversational search.
-
-<!-- Objective:
-
-- Create objects and run the indexer to produce an operational search index with chunked and vectorized content.
-
-Key points:
-
-- Dependency on a supported data source. Use Azure blob storage for this tutorial.
-- Indexer pulls from the data source, pushes to the index.
-- Large PDF files can't be chunked. Indexer shows success, but doesn't even attempt to chunk/ingest the docs. Individual files have to be less than 16 MB.
-- Skillset (example 1) has two skills: text split and embedding. Embedding model is also be used for vectorization at query time (assume text-to-vector conversion).
-- Skillset (example 2) add a custom skill that points to external embedding model, or Azure Document Intelligence in Foundry Tools.
-- Skillset (example 3) add an entity recognition skill to lift locations from raw content into the index?
-- Duplicated content is expected due to overlap and repetition of parent info. It won't affect your LLM.
-
-Tasks:
-
-- H2: Configure access to Azure Storage and upload sample data.
-- H2: Create a data source
-- H2: Create a skillset (choose one skillset)
-- H2: Use alternative skillsets (present the other two skillsets)
-- H2: Create and run the indexer
-- H2: Check your data in the search index (hide vectors) -->
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Chat with your data](tutorial-rag-build-solution-query.md)

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: インデクシングパイプラインの構築"
}

Explanation

This change indicates that the file "tutorial-rag-build-solution-pipeline.md" has been removed entirely. The file was a tutorial on building an automated indexing pipeline for a RAG (Retrieval-Augmented Generation) solution in Azure AI Search. It ran to 406 lines and covered the following key points:

  1. Indexing pipeline overview:
    • Detailed the steps for building the pipeline, including data ingestion, chunking, vectorization, and loading content into the index.
  2. Project prerequisites:
    • Described the resources, tools, and permissions required for Azure AI Search, including how to connect to Azure Storage and Azure OpenAI resources.
  3. Step-by-step procedures:
    • Included concrete steps such as creating the indexer and skillset and verifying the execution results, so users could run the code themselves.
  4. Impact of the removal:
    • Removing this tutorial eliminates the concrete steps and tips needed to build a RAG solution, leaving a gap in guidance for developers implementing automated indexing.
    • Without this information, users may struggle when building a new pipeline.

This change may stem from resource cleanup or a content review, but for current users the loss of a useful resource is likely to make the implementation process harder. New documentation or an alternative is hoped for.
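
For readers who need the gist of the removed pipeline, here is a minimal sketch of the indexer that tied it together, reconstructed from the diff above. It assumes the index, data source, and skillset named below already exist on the service; the endpoint is a placeholder.

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import FieldMapping, SearchIndexer

# Placeholder endpoint; the object names below match the removed tutorial.
AZURE_SEARCH_SERVICE = "https://<your-search-service>.search.windows.net"

indexer = SearchIndexer(
    name="py-rag-tutorial-idxr",
    description="Indexer to index documents and generate embeddings",
    data_source_name="py-rag-tutorial-ds",    # blob container holding the PDFs
    skillset_name="py-rag-tutorial-ss",       # split + embed + entity recognition
    target_index_name="py-rag-tutorial-idx",  # chunked, vectorized index
    # Surface the source file name as the document title in search results.
    field_mappings=[
        FieldMapping(source_field_name="metadata_storage_name", target_field_name="title")
    ],
)

indexer_client = SearchIndexerClient(
    endpoint=AZURE_SEARCH_SERVICE, credential=DefaultAzureCredential()
)
indexer_client.create_or_update_indexer(indexer)  # creating the indexer also runs it
```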

articles/search/tutorial-rag-build-solution-query.md

Diff
@@ -1,306 +0,0 @@
----
-title: 'Classic RAG tutorial: Search using an LLM'
-titleSuffix: Azure AI Search
-description: Learn how to build queries and engineer prompts for LLM-enabled search on Azure AI Search. Queries used in generative search provide the inputs to an LLM chat engine.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.custom:
-  - ignite-2024
-ms.topic: tutorial
-ms.date: 10/14/2025
----
-
-# Tutorial: Search your data using a chat model (classic RAG in Azure AI Search)
-
-The defining characteristic of a RAG solution on Azure AI Search is sending queries to a Large Language Model (LLM) for a conversational search experience over your indexed content. It can be surprisingly easy if you implement just the basics.
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> - Set up clients
-> - Write instructions for the LLM
-> - Provide a query designed for LLM inputs
-> - Review results and explore next steps
-
-This tutorial builds on the previous tutorials. It assumes you have a search index created by the [indexing pipeline](tutorial-rag-build-solution-pipeline.md).
-
-## Prerequisites
-
-- [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and the [Jupyter package](https://pypi.org/project/jupyter/). For more information, see [Python in Visual Studio Code](https://code.visualstudio.com/docs/languages/python).
-
-- [Azure AI Search](search-create-service-portal.md), in a region shared with Azure OpenAI.
-
-- [Azure OpenAI](/azure/ai-services/openai/how-to/create-resource), with a deployment of gpt-4o. For more information, see [Choose models for RAG in Azure AI Search](tutorial-rag-build-solution-models.md)
-
-## Download the sample
-
-You use the same notebook from the previous indexing pipeline tutorial. Scripts for querying the LLM follow the pipeline creation steps. If you don't already have the notebook, [download it](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Tutorial-RAG/Tutorial-rag.ipynb) from GitHub.
-
-## Configure clients for sending queries
-
-The RAG pattern in Azure AI Search is a synchronized series of connections to a search index to obtain the grounding data, followed by a connection to an LLM to formulate a response to the user's question. The same query string is used by both clients.
-
-You're setting up two clients, so you need endpoints and permissions on both resources. This tutorial assumes you set up role assignments for authorized connections, but you should provide the endpoints in your sample notebook:
-
-```python
-# Set endpoints and API keys for Azure services
-AZURE_SEARCH_SERVICE: str = "PUT YOUR SEARCH SERVICE ENDPOINT HERE"
-# AZURE_SEARCH_KEY: str = "DELETE IF USING ROLES, OTHERWISE PUT YOUR SEARCH SERVICE ADMIN KEY HERE"
-AZURE_OPENAI_ACCOUNT: str = "PUT YOUR AZURE OPENAI ENDPOINT HERE"
-# AZURE_OPENAI_KEY: str = "DELETE IF USING ROLES, OTHERWISE PUT YOUR AZURE OPENAI KEY HERE"
-```
-
-## Example script for prompt and query
-
-Here's the Python script that instantiates the clients, defines the prompt, and sets up the query. You can run this script in the notebook to generate a response from your chat model deployment.
-
-For the Azure Government cloud, modify the API endpoint on the token provider to `"https://cognitiveservices.azure.us/.default"`.
-
-```python
-# Import libraries
-from azure.search.documents import SearchClient
-from openai import AzureOpenAI
-
-token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
-openai_client = AzureOpenAI(
-     api_version="2024-06-01",
-     azure_endpoint=AZURE_OPENAI_ACCOUNT,
-     azure_ad_token_provider=token_provider
- )
-
-deployment_name = "gpt-4o"
-
-search_client = SearchClient(
-     endpoint=AZURE_SEARCH_SERVICE,
-     index_name=index_name,
-     credential=credential
- )
-
-# Provide instructions to the model
-GROUNDED_PROMPT="""
-You are an AI assistant that helps users learn from the information found in the source material.
-Answer the query using only the sources provided below.
-Use bullets if the answer has multiple points.
-If the answer is longer than 3 sentences, provide a summary.
-Answer ONLY with the facts listed in the list of sources below. Cite your source when you answer the question
-If there isn't enough information below, say you don't know.
-Do not generate answers that don't use the sources below.
-Query: {query}
-Sources:\n{sources}
-"""
-
-# Provide the search query. 
-# It's hybrid: a keyword search on "query", with text-to-vector conversion for "vector_query".
-# The vector query finds 50 nearest neighbor matches in the search index
-query="What's the NASA earth book about?"
-vector_query = VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")
-
-# Set up the search results and the chat thread.
-# Retrieve the selected fields from the search index related to the question.
-# Search results are limited to the top 5 matches. Limiting top can help you stay under LLM quotas.
-search_results = search_client.search(
-    search_text=query,
-    vector_queries= [vector_query],
-    select=["title", "chunk", "locations"],
-    top=5,
-)
-
-# Newlines could be in the OCR'd content or in PDFs, as is the case for the sample PDFs used for this tutorial.
-# Use a unique separator to make the sources distinct. 
-# We chose repeated equal signs (=) followed by a newline because it's unlikely the source documents contain this sequence.
-sources_formatted = "=================\n".join([f'TITLE: {document["title"]}, CONTENT: {document["chunk"]}, LOCATIONS: {document["locations"]}' for document in search_results])
-
-response = openai_client.chat.completions.create(
-    messages=[
-        {
-            "role": "user",
-            "content": GROUNDED_PROMPT.format(query=query, sources=sources_formatted)
-        }
-    ],
-    model=deployment_name
-)
-
-print(response.choices[0].message.content)
-```
-
-## Review results
-
-In this response, the answer is based on five inputs (`top=5`) consisting of chunks determined by the search engine to be the most relevant. Instructions in the prompt tell the LLM to use only the information in the `sources`, or formatted search results. 
-
-Results from the first query `"What's the NASA earth book about?"` should look similar to the following example.
-
-```
-The NASA Earth book is about the intricate and captivating science of our planet, studied 
-through NASA's unique perspective and tools. It presents Earth as a dynamic and complex 
-system, observed through various cycles and processes such as the water cycle and ocean 
-circulation. The book combines stunning satellite images with detailed scientific insights, 
-portraying Earth’s beauty and the continuous interaction of land, wind, water, ice, and 
-air seen from above. It aims to inspire and demonstrate that the truth of our planet is 
-as compelling as any fiction.
-
-Source: page-8.pdf
-```
-
-It's expected for LLMs to return different answers, even if the prompt and queries are unchanged. Your result might look very different from the example. For more information, see [Learn how to use reproducible output](/azure/ai-services/openai/how-to/reproducible-output).
-
-> [!NOTE]
-> In testing this tutorial, we saw a variety of responses, some more relevant than others. A few times, repeating the same request caused a deterioration in the response, most likely due to confusion in the chat history, possibly with the model registering the repeated requests as dissatisfaction with the generated answer. Managing chat history is out of scope for this tutorial, but including it in your application code should mitigate or even eliminate this behavior.
-
-## Add a filter
-
-Recall that you created a `locations` field using applied AI, populated with places recognized by the Entity Recognition skill. The field definition for locations includes the `filterable` attribute. Let's repeat the previous request, but this time adding a filter that selects on the term *ice* in the locations field. 
-
-A filter introduces inclusion or exclusion criteria. The search engine is still doing a vector search on `"What's the NASA earth book about?"`, but it's now excluding matches that don't include *ice*. For more information about filtering on string collections and on vector queries, see [text filter fundamentals](search-filters.md#text-filter-fundamentals), [Understand collection filters](search-query-understand-collection-filters.md), and [Add filters to a vector query](vector-search-filters.md).
-
-Replace the search_results definition with the following example that includes a filter:
-
-```python
-query="what is the NASA earth book about?"
-vector_query = VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")
-
-# Add a filter that selects documents based on whether locations includes the term "ice".
-search_results = search_client.search(
-    search_text=query,
-    vector_queries= [vector_query],
-    filter="search.ismatch('ice*', 'locations', 'full', 'any')",
-    select=["title", "chunk", "locations"],
-    top=5
-)
-
-sources_formatted = "=================\n".join([f'TITLE: {document["title"]}, CONTENT: {document["chunk"]}, LOCATIONS: {document["locations"]}' for document in search_results])
-```
-
-Results from the filtered query should now look similar to the following response. Notice the emphasis on ice cover.
-
-```
-The NASA Earth book showcases various geographic and environmental features of Earth through 
-satellite imagery, highlighting remarkable landscapes and natural phenomena. 
-
-- It features extraordinary views like the Holuhraun Lava Field in Iceland, captured by 
-Landsat 8 during an eruption in 2014, with false-color images illustrating different elements 
-such as ice, steam, sulfur dioxide, and fresh lava ([source](page-43.pdf)).
-- Other examples include the North Patagonian Icefield in South America, depicted through 
-clear satellite images showing glaciers and their changes over time ([source](page-147.pdf)).
-- It documents melt ponds in the Arctic, exploring their effects on ice melting and 
-- heat absorption ([source](page-153.pdf)).
-  
-Overall, the book uses satellite imagery to give insights into Earth's dynamic systems 
-and natural changes.
-```
-
-## Change the inputs
-
-Increasing or decreasing the number of inputs to the LLM can have a large effect on the response. Try running the same query again after setting `top=8`. When you increase the inputs, the model returns different results each time, even if the query doesn't change. 
-
-Here's one example of what the model returns after increasing the inputs to 8.
-
-```
-The NASA Earth book features a range of satellite images capturing various natural phenomena 
-across the globe. These include:
-
-- The Holuhraun Lava Field in Iceland documented by Landsat 8 during a 2014 volcanic 
-eruption (Source: page-43.pdf).
-- The North Patagonian Icefield in South America, highlighting glacial landscapes 
-captured in a rare cloud-free view in 2017 (Source: page-147.pdf).
-- The impact of melt ponds on ice sheets and sea ice in the Arctic, with images from 
-an airborne research campaign in Alaska during July 2014 (Source: page-153.pdf).
-- Sea ice formations at Shikotan, Japan, and other notable geographic features in various 
-locations recorded by different Landsat missions (Source: page-168.pdf).
-
-Summary: The book showcases satellite images of diverse Earth phenomena, such as volcanic 
-eruptions, icefields, and sea ice, to provide insights into natural processes and landscapes.
-```
-
-Because the model is bound to the grounding data, the answer becomes more expansive as you increase the size of the input. You can use relevance tuning to potentially generate more focused answers.
-
-## Change the prompt
-
-You can also change the prompt to control the format of the output, the tone, and whether you want the model to supplement the answer with its own training data.
-
-Here's another example of LLM output if we refocus the prompt on identifying locations for scientific study.
-
-```python
-# Provide instructions to the model
-GROUNDED_PROMPT="""
-You are an AI assistant that helps scientists identify locations for future study.
-Answer the query concisely, using bulleted points.
-Answer ONLY with the facts listed in the list of sources below.
-If there isn't enough information below, say you don't know.
-Do not generate answers that don't use the sources below.
-Do not exceed 5 bullets.
-Query: {query}
-Sources:\n{sources}
-"""
-```
-
-Output from changing just the prompt, otherwise retaining all aspects of the previous query, might look like this example. 
-
-```
-The NASA Earth book appears to showcase various locations on Earth captured through satellite imagery, 
-highlighting natural phenomena and geographic features. For instance, the book includes:
-
-- The Holuhraun Lava Field in Iceland, detailing volcanic activity and its observation via Landsat 8.
-- The North Patagonian Icefield in South America, covering its glaciers and changes over time as seen by Landsat 8.
-- Melt ponds in the Arctic and their impacts on the heat balance and ice melting.
-- Iceberg A-56 in the South Atlantic Ocean and its interaction with cloud formations.
-
-(Source: page-43.pdf, page-147.pdf, page-153.pdf, page-39.pdf)
-```
-
-> [!TIP]
-> If you're continuing on with the tutorial, remember to restore the prompt to its previous value (`You are an AI assistant that helps users learn from the information found in the source material`).
-
-Changing parameters and prompts affects the response from the LLM. As you explore on your own, keep the following tips in mind:
-
-- Raising the `top` value can exhaust available quota on the model. If there's no quota, an error message is returned or the model might return "I don't know".
-
-- Raising the `top` value doesn't necessarily improve the outcome. In testing with top, we sometimes notice that the answers aren't dramatically better.
-
-- So what might help? Typically, the answer is relevance tuning. Improving the relevance of the search results from Azure AI Search is usually the most effective approach for maximizing the utility of your LLM.
-
-In the next series of tutorials, the focus shifts to maximizing relevance and optimizing query performance for speed and concision. We revisit the schema definition and query logic to implement relevance features, but the rest of the pipeline and models remain intact.
-
-<!-- In this tutorial, learn how to send queries and prompts to a chat model for generative search. The queries that you create for a conversational search are built for prompts and the orchestration layer. The query response is fed into message prompts sent to an LLM like gpt-4o.
-
-Objective:
-
-- Set up clients for chat model and search engine, set up a prompt, point the model to search results.
-
-Key points:
-
-In a RAG app, the query request needs to:
-
-- Target searchable text (vector or nonvector) in the index
-- Return the most relevant results
-- Return any metadata necessary for citations or other client-side requirements
-
-A query request also specifies relevance options, which can include:
-
-- Scoring profile
-- L2 semantic reranking
-- Minimum thresholds
-
-- You can swap out models to see which one works best for your query. No reindexing or upstream modifications required.
-- Basic query (takeaway is prompt, scoping to grounding data, calling two clients)
-- Basic query is hybrid for the purposes of this tutorial
-- Query parent-child, one index
-- Query parent-child, two indexes
-- Filters
-
-Tasks:
-
-- H2 Set up clients and configure access (to the chat model)
-- H2 Query using text, with a filter
-- H2 Query using vectors and text-to-vector conversion at query time (not sure what the code looks like for this)
-- H2 Query parent-child two indexes (unclear how to do this, Carey said query on child, do a lookup query on parent) -->
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Maximize relevance](tutorial-rag-build-solution-maximize-relevance.md)

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: LLMを用いたデータ検索"
}

Explanation

This change shows that the file "tutorial-rag-build-solution-query.md" has been removed entirely. The file was a tutorial that explained how to search your data with a chat model as part of a RAG (Retrieval-Augmented Generation) solution in Azure AI Search. It ran to 306 lines and covered the following key points:

  1. Overview of search with a chat model:
    • It explained how a RAG solution provides a conversational search experience by using a Large Language Model (LLM) to answer user queries.
  2. Client setup steps:
    • It walked through configuring the Azure AI Search and Azure OpenAI clients, writing the instructions sent with a query, and designing queries suited to an LLM.
  3. Sample script:
    • It included the Python script needed to send the query, which users could run to generate a response from their LLM deployment.
  4. Impact of the removal:
    • Removing the tutorial takes away the reference for a key step in building a RAG solution, searching with an LLM, so developers lose concrete guidance on designing and running the query.
    • In particular, with no explicit walkthrough of building effective search queries for a chat model, future development may become more difficult.

This change was likely made as part of a broader cleanup and refresh of the documentation, but losing an important reference in this way may hurt user productivity. Replacement resources or new guidance are needed.
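
As a stopgap for readers who relied on the removed walkthrough, the core pattern it taught can be captured in a short sketch. The following is an illustration only, not a replacement for the tutorial: it assumes an existing index with a `text_vector` field, an Azure OpenAI chat deployment, and the `azure-identity`, `azure-search-documents`, and `openai` packages; the endpoints, index name, and deployment name are placeholders.

```python
# Minimal sketch of the classic RAG query pattern covered by the removed tutorial.
# All endpoint values, the index name, and the deployment name are placeholders.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery
from openai import AzureOpenAI

credential = DefaultAzureCredential()
token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="<your-index>",
    credential=credential,
)
openai_client = AzureOpenAI(
    api_version="2024-06-01",
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    azure_ad_token_provider=token_provider,
)

query = "What's the NASA earth book about?"

# Hybrid retrieval: keyword search plus text-to-vector conversion at query time.
results = search_client.search(
    search_text=query,
    vector_queries=[VectorizableTextQuery(text=query, k_nearest_neighbors=50, fields="text_vector")],
    select=["title", "chunk"],
    top=5,
)

# Format the retrieved chunks as grounding data and pass them to the chat model.
sources = "\n".join(f'{doc["title"]}: {doc["chunk"]}' for doc in results)
prompt = f"Answer only from these sources.\nQuery: {query}\nSources:\n{sources}"

response = openai_client.chat.completions.create(
    model="gpt-4o",  # name of your chat deployment
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```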

articles/search/tutorial-rag-build-solution.md

Diff
@@ -1,61 +0,0 @@
----
-title: Build a classic RAG solution
-titleSuffix: Azure AI Search
-description: Learn how to build a generative search (RAG) app using LLMs and your proprietary grounding data in Azure AI Search.
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: azure-ai-search
-ms.update-cycle: 180-days
-ms.topic: overview
-ms.date: 10/14/2025
-
----
-
-# How to build a classic RAG solution using Azure AI Search
-
-This tutorial series demonstrates the classic pattern for building RAG solutions on Azure AI Search. Classic RAG uses the original query pipeline, with no LLM integration except for at the end of the pipeline when you pass the search results to an LLM for answer formulation.
-
-> [!NOTE]
-> We now recommend [agentic retrieval](agentic-retrieval-overview.md) for RAG workflows, but classic RAG is simpler. If it meets your application requirements, it's still a good choice.
-
-## In this series
-
-In this series, you learn about the components, dependencies, and optimizations for maximizing relevance and minimizing costs.
-
-Sample data is a [collection of PDFs](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/nasa-e-book/earth_book_2019_text_pages) uploaded to Azure Storage. The content is from [NASA's Earth free e-book](https://www.nasa.gov/ebooks/earth/).
-
-Sample code can be found in [this Python notebook](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/Tutorial-RAG/Tutorial-rag.ipynb), but we recommend using the articles in this series for context, insights, and for exploring alternative approaches.
-
-### Exercises in this series
-
-- Choose your models for embeddings and chat
-
-- Design an index for conversational search
-
-- Design an indexing pipeline that loads, chunks, embeds, and ingests searchable content
-
-- Retrieve searchable content using queries and a chat model
-
-- Maximize relevance
-
-- Minimize storage and costs
-
-We omitted a few aspects of a RAG pattern to reduce complexity:
-
-- No management of chat history and context. Chat history is typically stored and managed separately from your grounding data, which means extra steps and code. This tutorial assumes atomic question and answers from the LLM and the default LLM experience.
-
-- No per-user user security over results (what we refer to as "security trimming"). For more information and resources, start with [Security trimming](search-security-trimming-for-azure-search.md) and make sure to review the links at the end of the article.
-
-This series covers the fundamentals of RAG solution development. Once you understand the basics, continue with [accelerators](resource-tools.md) and other [code samples](https://github.com/Azure/azure-search-vector-samples) that provide more abstraction or are otherwise better suited for production environments and more complex workloads.
-
-## Why use Azure AI Search for RAG?
-
-Chat models face constraints on the amount of data they can accept on a request. You should use Azure AI Search because the *quality* of content passed to an LLM can make or break a RAG solution. 
-
-To deliver the highest quality inputs to a chat model, Azure AI Search provides a best-in-class search engine with AI integration and comprehensive relevance tuning. The search engine supports vector similarity search (multiple algorithms), keyword search, fuzzy search, geospatial search, and filters. You can build hybrid query requests that include all of these components, and you can control how much each query contributes to the overall request.
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Choose models](tutorial-rag-build-solution-models.md)
\ No newline at end of file

Summary

{
    "modification_type": "breaking change",
    "modification_title": "チュートリアルの削除: クラシックRAGソリューションの構築"
}

Explanation

This change shows that the file "tutorial-rag-build-solution.md" has been removed entirely. The file was the overview of a tutorial series on building a classic RAG (Retrieval-Augmented Generation) solution with Azure AI Search. It ran to 61 lines and covered the following key points:

  1. Overview of the classic RAG solution:
    • The series presented the basic pattern for building a RAG solution, in which search results are passed to an LLM at the end of the query pipeline to generate the answer.
  2. Key elements of the content:
    • Readers learned about the components and dependencies of a RAG solution, along with the optimizations for maximizing relevance and minimizing cost.
  3. Sample data and code:
    • It introduced a collection of PDFs as the sample data and noted that the sample code is available in a Python notebook.
  4. Impact of the removal:
    • With the tutorial gone, developers lose guidance for building a RAG solution, and readers without a foundational understanding lose a concrete introduction to this important topic.
    • In particular, the explanation of why Azure AI Search is a good fit for RAG and how to integrate it with an LLM is no longer available, which could ultimately affect application quality.

This change was probably made to keep the documentation consistent, but because users can no longer access this material, development may become harder. New guidance or replacement resources are expected.

articles/search/vector-search-how-to-configure-vectorizer.md

Diff
@@ -43,7 +43,7 @@ The following table lists the embedding models that can be used with a vectorize
 | Vectorizer kind | Model names | Model provider | Associated skill |
 |-----------------|------------|----------------|------------------|
 | [`azureOpenAI`](vector-search-vectorizer-azure-open-ai.md) | text-embedding-ada-002<br>text-embedding-3 | Azure OpenAI | [AzureOpenAIEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) |
-| [`aml`](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) | Cohere-embed-v3<br>Cohere-embed-v4 <sup>1</sup> | [Foundry model catalog](vector-search-integrated-vectorization-ai-studio.md)  | [AML skill](cognitive-search-aml-skill.md) |
+| [`aml`](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) | Cohere-embed-v3<br>Cohere-embed-v4 <sup>1</sup> | [Microsoft Foundry model catalog](vector-search-integrated-vectorization-ai-studio.md)  | [AML skill](cognitive-search-aml-skill.md) |
 | [`aiServicesVision`](vector-search-vectorizer-ai-services-vision.md) | [Multimodal embeddings 4.0 API](/azure/ai-services/computer-vision/concept-image-retrieval) | Azure Vision (through a Foundry resource) | [Azure Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) |
 | [`customWebApi`](vector-search-vectorizer-custom-web-api.md) | Any embedding model | Hosted externally | [Custom Web API skill](cognitive-search-custom-skill-web-api.md) |
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "マイクロソフトファウンドリモデリングの名称変更"
}

Explanation

This change shows a small revision to "vector-search-how-to-configure-vectorizer.md". The provider name for the models associated with Azure Machine Learning was updated. Specifically:

  • Details of the change:
    • Text that previously read "Foundry model catalog" now reads "Microsoft Foundry model catalog", identifying the specific Microsoft-hosted model catalog being referenced.

The revision promotes clearer wording and helps readers interpret the reference consistently, which matters particularly for developers and data scientists working on Azure. This minor update removes ambiguity and identifies the resource more precisely.

articles/search/vector-search-how-to-generate-embeddings.md

Diff
@@ -239,4 +239,3 @@ The output is a vector array of 1,536 dimensions.
 + [Understand embeddings in Azure OpenAI in Foundry Models](/azure/ai-services/openai/concepts/understand-embeddings)
 + [Generate embeddings with Azure OpenAI](/azure/ai-services/openai/how-to/embeddings?tabs=console)
 + [Tutorial: Explore Azure OpenAI embeddings and document search](/azure/ai-services/openai/tutorials/embeddings?tabs=command-line)
-+ [Tutorial: Choose a model (RAG solutions in Azure AI Search)](tutorial-rag-build-solution-models.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "RAGソリューションに関するチュートリアルリンクの削除"
}

Explanation

This change shows that a single link was removed from "vector-search-how-to-generate-embeddings.md": the next-steps entry that pointed to "Tutorial: Choose a model (RAG solutions in Azure AI Search)".

The revision was probably intended to tidy up the content and reassess its relevance. Because the link to the RAG tutorial was removed, readers lose a pointer to concrete implementation guidance. While the change keeps the article consistent with the removed tutorials, it also means less information is available on that topic, which may affect some users.

articles/search/vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md

Diff
@@ -23,20 +23,20 @@ If you're using integrated vectorization to create the vector arrays, the skills
 
 ## Prerequisites
 
-+ A [Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) or an [AML workspace](../machine-learning/concept-workspace.md) for a custom model that you create.
++ A [Microsoft Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) or an [AML workspace](../machine-learning/concept-workspace.md) for a custom model that you create.
 
-+ For hub-based projects only, a [serverless deployment](/azure/ai-foundry/how-to/deploy-models-serverless) of a [supported model](#vectorizer-parameters) from the Foundry model catalog.
++ For hub-based projects only, a serverless deployment of a [supported model](#vectorizer-parameters) from the Microsoft Foundry model catalog. You can use an [ARM/Bicep template](https://github.com/Azure-Samples/azure-ai-search-multimodal-sample/blob/42b4d07f2dd9f7720fdc0b0788bf107bdac5eecb/infra/ai/modules/project.bicep#L37C1-L38C1) to provision the serverless deployment.
 
 ## Vectorizer parameters
 
 Parameters are case sensitive. The parameters you use depend on what [authentication your model provider requires](#WhatParametersToUse), if any.
 
 | Parameter name | Description |
 |--------------------|-------------|
-| `uri` | (Required for [key authentication](#WhatParametersToUse)) The target URI of the serverless deployment from the Foundry model catalog or the [scoring URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). Only the HTTPS URI scheme is allowed. |
+| `uri` | (Required for [key authentication](#WhatParametersToUse)) The target URI of the serverless deployment from the Microsoft Foundry model catalog or the [scoring URI of the AML online endpoint](../machine-learning/how-to-authenticate-online-endpoint.md). Only the HTTPS URI scheme is allowed. |
 | `key` | (Required for [key authentication](#WhatParametersToUse)) The API key of the model provider. |
 | `resourceId` | (Required for [token authentication](#WhatParametersToUse)) The Azure Resource Manager resource ID of the model provider. For an AML online endpoint, use the `subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/onlineendpoints/{endpoint_name}` format. |
-| `modelName` | The name of the embedding model from the Foundry model catalog deployed at the specified `uri`. Supported models are:<p><ul><li>Cohere-embed-v3-english</li><li>Cohere-embed-v3-multilingual</li><li>Cohere-embed-v4</li></ul> |
+| `modelName` | The name of the embedding model from the Microsoft Foundry model catalog deployed at the specified `uri`. Supported models (serverless deployments only) are:<p><ul><li>Cohere-embed-v3-english</li><li>Cohere-embed-v3-multilingual</li><li>Cohere-embed-v4</li></ul> |
 | `region` | (Optional for [token authentication](#WhatParametersToUse)) The region in which the model provider is deployed. Required if the region is different from the region of the search service. |
 | `timeout` | (Optional) The timeout for the HTTP client making the API call. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). For example, `PT60S` for 60 seconds. If not set, a default value of 30 seconds is chosen. The timeout can be set to a maximum of 230 seconds and a minimum of 1 second. |
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Microsoft Foundryに関連する用語の変更"
}

Explanation

This change shows that several terms in "vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md" were revised. The main changes are:

  • Terminology changes:
    • The phrase "Foundry model catalog" was changed to "Microsoft Foundry model catalog".
    • Likewise, "Foundry hub-based project" was changed to "Microsoft Foundry hub-based project" in the related prerequisite.
  • Added information:
    • A new sentence notes that the serverless deployment can be provisioned with an ARM/Bicep template, which makes the resource easier to set up and manage.
  • Parameter description updates:
    • The description of `modelName` was revised to clarify that the listed models are supported for serverless deployments only.

These revisions keep the description of Microsoft resources up to date and give users accurate, specific information. They improve the consistency of the documentation as a whole and should make it easier to use Azure Machine Learning and the related services.
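
To make the parameter table above concrete, here is a minimal sketch of what an `aml` vectorizer entry might look like inside an index definition. It is illustrative only: the vectorizer name, resource ID, and region are placeholders, and the exact property names (such as `amlParameters`) should be verified against the Azure AI Search REST reference for your API version.

```python
# Illustrative only: an "aml" vectorizer entry for an index's vectorSearch section,
# expressed as a Python dict that could be serialized to JSON for a REST request.
# All values are placeholders; confirm the schema in the REST reference.
import json

aml_vectorizer = {
    "name": "my-foundry-vectorizer",  # hypothetical vectorizer name
    "kind": "aml",
    "amlParameters": {
        # Token authentication: reference the model provider's ARM resource ID.
        "resourceId": "subscriptions/<guid>/resourceGroups/<rg>/Microsoft.MachineLearningServices/workspaces/<ws>/onlineendpoints/<endpoint>",
        "modelName": "Cohere-embed-v3-english",  # one of the supported serverless models
        "region": "eastus",  # only needed if it differs from the search service region
        "timeout": "PT60S",  # XSD dayTimeDuration; the default is 30 seconds if omitted
    },
}

print(json.dumps(aml_vectorizer, indent=2))
```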