Diff Insight Report - search

Last updated: 2025-07-23

Usage notes

This post is a derivative work, adapted and summarized with generative AI from Microsoft's official Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted at MicrosoftDocs/azure-ai-docs.

Generative AI has limitations, and this post may contain mistranslations or misinterpretations. Treat it as reference material only, and always consult the original documents for accurate information.

Trademarks used in this post belong to their respective owners. They appear here for technical description only and do not imply endorsement or approval by the trademark holders.



# Highlights
- Dates and procedures were updated across multiple documents to reflect the latest information.
- Documentation on hybrid search procedures, filter functionality, and ranking was improved.
- New image files were added, strengthening the visual support for understanding Azure AI Search features.
- Some image files were renamed and formats were unified, improving documentation consistency.
- Unneeded explanations were removed and other changes were made to streamline the documentation.

# New features

- Added images that visually support search-related processes.
- New images explain the AI Vision and text catalog vectorization processes.

# Breaking changes

- The section on `maxTextRecallSize` was removed and the related information consolidated.

# Other updates

- Multiple link and text fixes clarify the information and improve access to it.
- Explanations of hybrid and vector search techniques and of the ranking algorithm were improved.

# Insights

The purpose of this diff is to give users the latest information about Azure AI Search. It covers clarified procedures, updated API versions, and newly introduced filtering and scoring options, helping users work with the service more intuitively and effectively.

In particular, the newly added images play an important role in deepening understanding by visually illustrating text extraction and AI-driven vectorization. This visual support should serve as a guide when adopting the new AI Vision-related features and improve the learning experience.

Link and text changes also make it easier for users to reach the information they need right away, improving the overall Azure AI Search experience. These are strategic improvements that let developers explore and implement the system more easily and extract value faster.

Overall updates such as documentation cleanup deepen understanding of what Azure AI Search offers and help users work with the system more efficiently. Beyond keeping pace with technical change, this effort reads as a practical update aimed at improving the user experience.
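
One of the options covered in the diffs below, `maxTextRecallSize` together with `countAndFacetMode`, controls how many BM25-ranked results are handed to the hybrid (RRF) ranking step and what `count` reports. The following toy sketch illustrates those semantics only; the parameter names mirror the documented ones, but the logic is an illustration, not the service implementation:

```python
def simulate_text_recall(bm25_ranked, max_text_recall_size=1000,
                         count_mode="countAllResults"):
    """Toy model of maxTextRecallSize / countAndFacetMode semantics.

    Only the top-N BM25-ranked documents are passed on to the hybrid
    (RRF) ranking step; "count" reports either every match or only the
    documents inside that recall window.
    """
    window = bm25_ranked[:max_text_recall_size]
    if count_mode == "countRetrievableResults":
        count = len(window)      # count scoped to the recall window
    else:                        # "countAllResults" (the default)
        count = len(bm25_ranked)
    return window, count

matches = [f"doc{i}" for i in range(5000)]
window, count = simulate_text_recall(matches, max_text_recall_size=100,
                                     count_mode="countRetrievableResults")
# here len(window) == 100 and count == 100; with the default mode, count == 5000
```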

# Summary Table

| Filename | Type | Title | Status | A | D | M |
|---|---|---|---|---|---|---|
| hybrid-search-how-to-query.md | minor update | Updates to hybrid search queries | modified | 144 | 116 | 260 |
| hybrid-search-overview.md | minor update | Updates to the hybrid search overview | modified | 5 | 5 | 10 |
| hybrid-search-ranking.md | minor update | Updates to hybrid search ranking | modified | 7 | 7 | 14 |
| extract-text-images.png | new feature | New image for text extraction from images | added | 0 | 0 | 0 |
| vectorize-images.png | minor update | Updated image explaining image vectorization | modified | 0 | 0 | 0 |
| vectorize-text-ai-vision.png | new feature | New image for text vectorization with AI Vision | added | 0 | 0 | 0 |
| vectorize-text-aoai.png | minor update | Renamed text vectorization image | renamed | 0 | 0 | 0 |
| vectorize-text-catalog.png | new feature | New image for text catalog vectorization | added | 0 | 0 | 0 |
| retrieval-augmented-generation-overview.md | minor update | Corrected query parameter information | modified | 1 | 1 | 2 |
| search-api-preview.md | minor update | Updated example of filter targets in hybrid search | modified | 1 | 1 | 2 |
| search-get-started-portal-image-search.md | minor update | Fixes to the image search quickstart guide | modified | 13 | 8 | 21 |
| search-get-started-portal-import-vectors.md | minor update | Updates to the vector import quickstart guide | modified | 94 | 87 | 181 |
| search-region-support.md | minor update | Updates to search region support | modified | 3 | 4 | 7 |
| search-what-is-azure-search.md | minor update | Updated the definition of Azure Search | modified | 5 | 5 | 10 |
| semantic-how-to-query-request.md | minor update | Changed link to a hybrid query example | modified | 1 | 1 | 2 |
| semantic-how-to-query-rewrite.md | minor update | Changed link to a hybrid query example | modified | 1 | 1 | 2 |
| vector-search-how-to-query.md | minor update | Removed the maxTextRecallSize section from the document | modified | 1 | 11 | 12 |
| whats-new.md | minor update | Fixed example of filter target changes in hybrid search | modified | 1 | 1 | 2 |

# Modified Contents

articles/search/hybrid-search-how-to-query.md

Diff
@@ -9,22 +9,19 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 05/08/2025
+ms.date: 07/21/2025
 ---
 
 # Create a hybrid query in Azure AI Search
 
-[Hybrid search](hybrid-search-overview.md) combines text (keyword) and vector queries in a single search request. All subqueries in the request execute in parallel. The results are merged and reordered by new search scores, using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) to return a unified result set. In many cases, [per benchmark tests](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-ai-search-outperforming-vector-search-with-hybrid/ba-p/3929167), hybrid queries with semantic ranking return the most relevant results.
+[Hybrid search](hybrid-search-overview.md) combines text (keyword) and vector queries in a single search request. Both queries execute in parallel. The results are merged and reordered by new search scores, using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) to return a unified result set. In many cases, [per benchmark tests](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-ai-search-outperforming-vector-search-with-hybrid/ba-p/3929167), hybrid queries with semantic ranking return the most relevant results.
 
 In this article, learn how to:
 
-+ Set up a basic request
++ Set up a basic hybrid request
 + Add parameters and filters
 + Improve relevance using semantic ranking or vector weights
-+ Optimize query behaviors by controlling text and vector inputs
-
-> [!NOTE]
-> New in [**2024-09-01-preview**](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true) is the ability to target filters to just the vector subqueries in a hybrid request. This gives you more precision over how filters are applied. For more information, see [targeting filters to vector subqueries](#hybrid-search-with-filters-targeting-vector-subqueries-preview) in this article.
++ Optimize query behaviors by controlling inputs (`maxTextRecallSize`)
 
 ## Prerequisites
 
@@ -38,30 +35,33 @@ In this article, learn how to:
 
 + Search Explorer in the Azure portal (supports both stable and preview API search syntax) has a JSON view that lets you paste in a hybrid request.
 
-+ [**2024-07-01**](/rest/api/searchservice/documents/search-post) stable version or a recent preview API version if you're using preview features like [maxTextRecallSize and countAndFacetMode(preview)](#set-maxtextrecallsize-and-countandfacetmode).
++ Newer stable or preview packages of the Azure SDKs (see change logs for SDK feature support).
+
++ [Stable REST APIs](/rest/api/searchservice/documents/search-post) or a recent preview API version if you're using preview features like [maxTextRecallSize and countAndFacetMode(preview)](#set-maxtextrecallsize-and-countandfacetmode).
 
-  For readability, we use REST examples to explain how the APIs work. You can use a REST client like Visual Studio Code with the REST extension to build hybrid queries. For more information, see [Quickstart: Vector search using REST APIs](search-get-started-vector.md).
+  For readability, we use REST examples to explain how the APIs work. You can use a REST client like Visual Studio Code with the REST extension to build hybrid queries. You can also use the Azure SDKs. For more information, see [Quickstart: Vector search](search-get-started-vector.md).
 
-+ Newer stable or beta packages of the Azure SDKs (see change logs for SDK feature support).
+## Set up a hybrid query
+
+This section explains the basic structure of a hybrid query and how to set one up in either Search Explorer or for execution in a REST client.
+
+Results are returned in plain text, including vectors in fields marked as `retrievable`. Because numeric vectors aren't useful in search results, choose other fields in the index as a proxy for the vector match. For example, if an index has "descriptionVector" and "descriptionText" fields, the query can match on "descriptionVector" but the search result can show "descriptionText". Use the `select` parameter to specify only human-readable fields in the results.
 
-## Set up a hybrid query in Search Explorer
+### [**Azure portal**](#tab/portal)
 
-1. In [Search Explorer](search-explorer.md), make sure the API version is **2024-07-01** or a newer preview API version.
+1. Sign in to the [Azure portal](https://portal.azure.com) and find your search service.
 
-1. Under **View**, select **JSON view** so that you can paste in a vector query. 
+1. Under **Search management** > **Indexes**, select an index that has vectors and non-vector content. [Search Explorer](search-explorer.md) is the first tab.
 
-1. Replace the default query template with a hybrid query, such as the "Run a hybrid query" example starting on line 539 in the [vector quickstart](https://raw.githubusercontent.com/Azure-Samples/azure-search-rest-samples/refs/heads/main/Quickstart-vectors/az-search-quickstart-vectors.rest). For brevity, the vector is truncated in this article. 
+1. Under **View**, switch to **JSON view** so that you can paste in a vector query. 
 
-   A hybrid query has a text query specified in `search`, and a vector query specified under `vectorQueries.vector`.
+1. Replace the default query template with a hybrid query. A basic hybrid query has a text query specified in `search`, and a vector query specified under `vectorQueries.vector`. The text query and vector query can be equivalent or divergent, but it's common for them to share the same intent.
 
-   The text query and vector query can be equivalent or divergent, but it's common for them to share the same intent.
+   This example is from the [vector quickstart](https://raw.githubusercontent.com/Azure-Samples/azure-search-rest-samples/refs/heads/main/Quickstart-vectors/az-search-quickstart-vectors.rest) that has vector and nonvector content, and several query examples. For brevity, the vector is truncated in this article. 
 
     ```json
     {
-        "count": true,
         "search": "historic hotel walk to restaurants and shopping",
-        "select": "HotelId, HotelName, Category, Tags, Description",
-        "top": 7,
         "vectorQueries": [
             {
                 "vector": [0.01944167, 0.0040178085, -0.007816401 ... <remaining values omitted> ], 
@@ -76,16 +76,34 @@ In this article, learn how to:
 
 1. Select **Search**.
 
-> [!TIP]
-> Search results are easier to read if you hide the vectors. In **Query Options**, turn on **Hide vector values in search results**.
+   > [!TIP]
+   > Search results are easier to read if you hide the vectors. In **Query Options**, turn on **Hide vector values in search results**.
+
+1. Here's another version of the query. This one adds a `count` for the number of matches found, a `select` parameter for choosing specific fields, and a `top` parameter to return the top seven results.
 
-## Hybrid query request (REST API)
+   ```json
+    {
+        "count": true,
+        "search": "historic hotel walk to restaurants and shopping",
+        "select": "HotelId, HotelName, Category, Tags, Description",
+        "top": 7,
+        "vectorQueries": [
+            {
+                "vector": [0.01944167, 0.0040178085, -0.007816401 ... <remaining values omitted> ], 
+                "k": 7,
+                "fields": "DescriptionVector",
+                "kind": "vector",
+                "exhaustive": true
+            }
+        ]
+    }
+    ```
 
-A hybrid query combines text search and vector search, where the `search` parameter takes a query string and `vectorQueries.vector` takes the vector query. The search engine runs full text and vector queries in parallel. The union of all matches is evaluated for relevance using Reciprocal Rank Fusion (RRF) and a single result set is returned in the response.
+### [**REST**](#tab/hybrid-rest)
 
-Results are returned in plain text, including vectors in fields marked as `retrievable`. Because numeric vectors aren't useful in search results, choose other fields in the index as a proxy for the vector match. For example, if an index has "descriptionVector" and "descriptionText" fields, the query can match on "descriptionVector" but the search result can show "descriptionText". Use the `select` parameter to specify only human-readable fields in the results.
+The following example shows a hybrid query request using the REST API.
 
-The following example shows a hybrid query configuration.
+This example is from the [vector quickstart](https://raw.githubusercontent.com/Azure-Samples/azure-search-rest-samples/refs/heads/main/Quickstart-vectors/az-search-quickstart-vectors.rest) that has vector and nonvector content, and several query examples. For brevity, the vector is truncated in this article. 
 
 ```http
 POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2024-07-01
@@ -136,7 +154,89 @@ api-key: {{admin-api-key}}
 
 + `top` determines how many matches are returned in the response all-up. In this example, the response includes 10 results, assuming there are at least 10 matches in the merged results.
 
-## Hybrid search with filter
+---
+
+## Set maxTextRecallSize and countAndFacetMode
+
+[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
+
+A hybrid query can be tuned to control how much of each subquery contributes to the combined results. Setting `maxTextRecallSize` specifies how many BM25-ranked results are passed to the hybrid ranking model.
+
+If you use `maxTextRecallSize`, you might also want to set `CountAndFacetMode`. This parameter determines whether the `count` and `facets` should include all documents that matched the search query, or only those documents retrieved within the `maxTextRecallSize` window. The default value is "countAllResults".
+
+We recommend the latest preview REST API version [2025-05-01-preview](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true) for setting these options.
+
+> [!TIP]
+> Another approach for hybrid query tuning is [vector weighting](vector-search-how-to-query.md#vector-weighting), used to increase the importance of vector queries in the request.
+
+1. Use [Search - POST (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true) or [Search - GET (preview)](/rest/api/searchservice/documents/search-get?view=rest-searchservice-2025-05-01-preview&preserve-view=true) to specify preview parameters.
+
+1. Add a `hybridSearch` query parameter object to set the maximum number of documents recalled through the BM25-ranked results of a hybrid query. It has two properties:
+
+   + `maxTextRecallSize` specifies the number of BM25-ranked results to provide to the Reciprocal Rank Fusion (RRF) ranker used in hybrid queries. The default is 1,000. The maximum is 10,000.
+
+   + `countAndFacetMode` reports the counts for the BM25-ranked results (and for facets if you're using them). The default is all documents that match the query. Optionally, you can scope "count" to the `maxTextRecallSize`.
+
+1. Set `maxTextRecallSize`:
+
+   + Decrease `maxTextRecallSize` if vector similarity search is generally outperforming the text-side of the hybrid query.
+
+   + Increase `maxTextRecallSize` if you have a large index, and the default isn't capturing a sufficient number of results. With a larger BM25-ranked result set, you can also set `top`, `skip`, and `next` to retrieve portions of those results.
+
+The following REST examples show two use-cases for setting `maxTextRecallSize`. 
+
+The first example reduces `maxTextRecallSize` to 100, limiting the text side of the hybrid query to just 100 document. It also sets `countAndFacetMode` to include only those results from `maxTextRecallSize`.
+
+```http
+POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-05-01-Preview 
+
+    { 
+      "vectorQueries": [ 
+        { 
+          "kind": "vector", 
+          "vector": [1.0, 2.0, 3.0], 
+          "fields": "my_vector_field", 
+          "k": 10 
+        } 
+      ], 
+      "search": "hello world", 
+      "hybridSearch": { 
+        "maxTextRecallSize": 100, 
+        "countAndFacetMode": "countRetrievableResults" 
+      } 
+    } 
+```
+
+The second example raises `maxTextRecallSize` to 5,000. It also uses top, skip, and next to pull results from large result sets. In this case, the request pulls in BM25-ranked results starting at position 1,500 through 2,000 as the text query contribution to the RRF composite result set.
+
+```http
+POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-05-01-Preview 
+
+    { 
+      "vectorQueries": [ 
+        { 
+          "kind": "vector", 
+          "vector": [1.0, 2.0, 3.0], 
+          "fields": "my_vector_field", 
+          "k": 10 
+        } 
+      ], 
+      "search": "hello world",
+      "top": 500,
+      "skip": 1500,
+      "next": 500,
+      "hybridSearch": { 
+        "maxTextRecallSize": 5000, 
+        "countAndFacetMode": "countRetrievableResults" 
+      } 
+    } 
+```
+
+## Examples of hybrid queries
+
+This section has multiple query examples that illustrate hybrid query patterns.
+
+### Example: Hybrid search with filter
 
 This example adds a filter, which is applied to the `filterable` nonvector fields of the search index.
 
@@ -174,24 +274,24 @@ api-key: {{admin-api-key}}
 
 + When you postfilter query results, the number of results might be less than top-n.
 
-## Hybrid search with filters targeting vector subqueries (preview)
+### Example: Hybrid search with filters targeting vector subqueries (preview)
 
-Using [**2024-09-01-preview**](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true), you can override a global filter on the search request by applying a secondary filter that targets just the vector subqueries in a hybrid request.
+Using a [preview API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true), you can override a global filter on the search request by applying a secondary filter that targets just the vector subqueries in a hybrid request.
 
 This feature provides fine-grained control by ensuring that filters only influence the vector search results, leaving keyword-based search results unaffected. 
 
 The targeted filter fully overrides the global filter, including any filters used for [security trimming](search-security-trimming-for-azure-search.md) or geospatial search.  In cases where global filters are required, such as security trimming, you must explicitly include these filters in both the top-level filter and in each vector-level filter to ensure security and other constraints are consistently enforced.
 
 To apply targeted vector filters:
 
-+ Use the [latest preview Search Documents REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true#request-body) or an Azure SDK beta package that provides the feature.
++ Use the [latest preview Search Documents REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true#request-body) or an Azure SDK beta package that provides the feature.
 
 + Modify a query request, adding a new `vectorQueries.filterOverride` parameter set to an [OData filter expression](search-query-odata-filter.md).
 
-Here's an example of hybrid query that adds a filter override. The global filter "Rating gt 3" is replaced at run time by the filterOvrride.
+Here's an example of hybrid query that adds a filter override. The global filter "Rating gt 3" is replaced at run time by the `filterOverride`.
 
 ```http
-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2024-09-01=preview
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2025-05-01=preview
 
 {
     "vectorQueries": [
@@ -218,7 +318,7 @@ POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/d
 }
 ```
 
-## Semantic hybrid search
+### Example: Semantic hybrid search
 
 Assuming that you [have semantic ranker](semantic-how-to-enable-disable.md) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search and keyword search, with semantic ranking over the merged result set. Optionally, you can add captions and answers. 
 
@@ -261,7 +361,7 @@ api-key: {{admin-api-key}}
 
 + "captions" and "answers" are optional. Values are extracted from verbatim text in the results. An answer is only returned if the results include content having the characteristics of an answer to the query.
 
-## Semantic hybrid search with filter
+### Example: Semantic hybrid search with filter
 
 Here's the last query in the collection. It's the same semantic hybrid query as the previous example, but with a filter.
 
@@ -304,90 +404,18 @@ api-key: {{admin-api-key}}
 
 + Postfilter is applied after query execution. If k=50 returns 50 matches on the vector query side, followed by a post-filter applied to the 50 matches, your results are reduced by the number of documents that meet filter criteria. This leaves you with fewer than 50 documents to pass to semantic ranker. Keep this in mind if you're using semantic ranking. The semantic ranker works best if it has 50 documents as input.
 
-## Set maxTextRecallSize and countAndFacetMode
-
-[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
-
-This section explains how to adjust the inputs to a hybrid query by controlling the amount BM25-ranked results that flow to the hybrid ranking model. Controlling over the BM25-ranked input gives you more options for relevance tuning in hybrid scenarios.
-
-We recommend preview REST API version [2024-05-01-preview](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
-
-> [!TIP]
-> Another option to consider is a supplemental or replacement technique, is [vector weighting](vector-search-how-to-query.md#vector-weighting), which increases the importance of vector queries in the request.
-
-1. Use [Search - POST](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true) or [Search - GET](/rest/api/searchservice/documents/search-get?view=rest-searchservice-2024-05-01-preview&preserve-view=true) in 2024-05-01-preview to specify these parameters.
-
-1. Add a `hybridSearch` query parameter object to set the maximum number of documents recalled through the BM25-ranked results of a hybrid query. It has two properties:
-
-   + `maxTextRecallSize` specifies the number of BM25-ranked results to provide to the Reciprocal Rank Fusion (RRF) ranker used in hybrid queries. The default is 1,000. The maximum is 10,000.
-
-   + `countAndFacetMode` reports the counts for the BM25-ranked results (and for facets if you're using them). The default is all documents that match the query. Optionally, you can scope "count" to the `maxTextRecallSize`.
-
-1. Reduce `maxTextRecallSize` if vector similarity search is generally outperforming the text-side of the hybrid query.
-
-1. Raise `maxTextRecallSize` if you have a large index, and the default isn't capturing a sufficient number of results. With a larger BM25-ranked result set, you can also set `top`, `skip`, and `next` to retrieve portions of those results.
-
-The following REST examples show two use-cases for setting `maxTextRecallSize`. 
-
-The first example reduces `maxTextRecallSize` to 100, limiting the text side of the hybrid query to just 100 document. It also sets `countAndFacetMode` to include only those results from `maxTextRecallSize`.
-
-```http
-POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-05-01-Preview 
-
-    { 
-      "vectorQueries": [ 
-        { 
-          "kind": "vector", 
-          "vector": [1.0, 2.0, 3.0], 
-          "fields": "my_vector_field", 
-          "k": 10 
-        } 
-      ], 
-      "search": "hello world", 
-      "hybridSearch": { 
-        "maxTextRecallSize": 100, 
-        "countAndFacetMode": "countRetrievableResults" 
-      } 
-    } 
-```
-
-The second example raises `maxTextRecallSize` to 5,000. It also uses top, skip, and next to pull results from large result sets. In this case, the request pulls in BM25-ranked results starting at position 1,500 through 2,000 as the text query contribution to the RRF composite result set.
-
-```http
-POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-05-01-Preview 
-
-    { 
-      "vectorQueries": [ 
-        { 
-          "kind": "vector", 
-          "vector": [1.0, 2.0, 3.0], 
-          "fields": "my_vector_field", 
-          "k": 10 
-        } 
-      ], 
-      "search": "hello world",
-      "top": 500,
-      "skip": 1500,
-      "next": 500,
-      "hybridSearch": { 
-        "maxTextRecallSize": 5000, 
-        "countAndFacetMode": "countRetrievableResults" 
-      } 
-    } 
-```
-
 ## Configure a query response
 
-When you're setting up the hybrid query, think about the response structure. The response is a flattened rowset. Parameters on the query determine which fields are in each row and how many rows are in the response. The search engine ranks the matching documents and returns the most relevant results.
+When you're setting up the hybrid query, think about the response structure. The search engine ranks the matching documents and returns the most relevant results. The response is a flattened rowset. Parameters on the query determine which fields are in each row and how many rows are in the response. 
 
 ### Fields in a response
 
 Search results are composed of `retrievable` fields from your search index. A result is either:
 
 + All `retrievable` fields (a REST API default).
-+ Fields explicitly listed in a "select" parameter on the query. 
++ Fields explicitly listed in a `select` parameter on the query. 
 
-The examples in this article used a "select" statement to specify text (nonvector) fields in the response.
+The examples in this article used a `select` statement to specify text (nonvector) fields in the response.
 
 > [!NOTE]
 > Vectors aren't reverse engineered into human readable text, so avoid returning them in the response. Instead, choose nonvector fields that are representative of the search document. For example, if the query targets a "DescriptionVector" field, return an equivalent text field if you have one ("Description") in the response.
@@ -400,22 +428,22 @@ A query might match to any number of documents, as many as all of them if the se
 + `"k": n` results for vector-only queries
 + `"top": n` results for hybrid queries (with or without semantic) that include a "search" parameter
 
-Both "k" and "top" are optional. Unspecified, the default number of results in a response is 50. You can set "top" and "skip" to [page through more results](search-pagination-page-layout.md#paging-results) or change the default.
+Both `k` and `top` are optional. Unspecified, the default number of results in a response is 50. You can set `top` and `skip` to [page through more results](search-pagination-page-layout.md#paging-results) or change the default.
 
 > [!NOTE]
-> If you're using hybrid search in 2024-05-01-preview API, you can control the number of results from the keyword query using [maxTextRecallSize](#set-maxtextrecallsize-and-countandfacetmode). Combine this with a setting for "k" to control the representation from each search subsystem (keyword and vector).
+> If you're using hybrid search in 2024-05-01-preview API, you can control the number of results from the keyword query using [maxTextRecallSize](#set-maxtextrecallsize-and-countandfacetmode). Combine this with a setting for `k` to control the representation from each search subsystem (keyword and vector).
 
-#### Semantic ranker results
+### Semantic ranker results
 
 > [!NOTE]
 > The semantic ranker can take up to 50 results. 
 
-If you're using semantic ranker in 2024-05-01-preview API, it's a best practice to set "k" and "maxTextRecallSize" to sum to at least 50 total.  You can then restrict the results returned to the user with the "top" parameter. 
+If you're using semantic ranker in 2024-05-01-preview or later, it's a best practice to set `k` and `maxTextRecallSize` to sum to at least 50 total.  You can then restrict the results returned to the user with the `top` parameter. 
 
 If you're using semantic ranker in previous APIs do the following:
 
-+ if doing keyword-only search (no vector) set "top" to 50
-+ if doing hybrid search set "k" to 50, to ensure that the semantic ranker gets at least 50 results. 
++ For keyword-only search (no vectors) set `top` to 50
++ For hybrid search set `k` to 50, to ensure that the semantic ranker gets at least 50 results. 
 
 ### Ranking
 
@@ -453,6 +481,6 @@ In this section, compare the responses between single vector search and simple h
 }
 ```
 
-## Next steps
+## Next step
 
-As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
+We recommend reviewing vector demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).

Summary

```json
{
   "modification_type": "minor update",
   "modification_title": "Updates to hybrid search queries"
}
```

Explanation

This diff updates "hybrid-search-how-to-query.md", the Azure AI Search documentation on hybrid search queries. The update adds important information about the structure and configuration of hybrid queries and improves the existing content. The main changes are:

  1. Date update: The article date moved from May 2025 to July 2025, indicating that the content reflects the latest information.

  2. Clearer query procedures: A new section on setting up a hybrid query explains in detail how to build a basic hybrid query through Search Explorer, making the concrete steps easier to follow.

  3. Feature enhancements: The characteristics of newer API versions and how to apply filters to vector subqueries are described in detail, showing how to sharpen hybrid search precision. In particular, new content clarifies how filtering affects hybrid queries.

  4. Updated examples: Query examples and best practices were revised and presented in a user-friendly form, providing information useful for real development and operations.

Overall, this update aims to deepen understanding of hybrid search in Azure AI Search and help users make more effective use of the product.
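
As a rough illustration of the filter-override behavior described in point 3, this Python sketch assembles a hybrid query payload in which a vector-level `filterOverride` (preview) replaces the global filter for the vector subquery only. The field names and filter expressions are placeholders based on the examples in the diff above, and sending the payload to the service is omitted:

```python
def build_hybrid_query(text_query, embedding, k=7):
    """Build a hybrid query payload with a vector-level filter override.

    The top-level "filter" applies to the whole request; the preview
    "filterOverride" replaces it for this vector subquery only.
    """
    return {
        "search": text_query,
        "filter": "Rating gt 3",  # global filter (keyword side keeps this)
        "select": "HotelId, HotelName, Description",
        "top": 7,
        "vectorQueries": [
            {
                "kind": "vector",
                "vector": embedding,
                "fields": "DescriptionVector",
                "k": k,
                # Overrides the global filter for the vector subquery:
                "filterOverride": "Rating gt 4",
            }
        ],
    }

payload = build_hybrid_query("historic hotel walk to restaurants",
                             [0.019, 0.004, -0.008])  # truncated vector
```

In a real request, the payload would be POSTed to the documents search endpoint with a preview `api-version`, as shown in the REST examples above.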

articles/search/hybrid-search-overview.md

Diff
@@ -9,22 +9,22 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 05/27/2025
+ms.date: 07/21/2025
 ---
 
 # Hybrid search using vectors and full text in Azure AI Search
 
-Hybrid search is a single query request, configured for full text and vector search, that executes against a search index containing both searchable plain text content and generated embeddings. For query purposes, hybrid search is:
+Hybrid search is a single query request, configured for full text and vector queries, that executes against a search index containing both searchable plain text content and generated embeddings. For query purposes, hybrid search is:
 
 + A single query request that includes both `search` and `vectors` query parameters
 + Executing in parallel
-+ With merged results in the query response, scored using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md)
++ Merging results from each query using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md)
 
-This article explains the concepts, benefits, and limitations of hybrid search. Links at the end provide instructions and next steps. You can also watch this [embedded video](#why-choose-hybrid-search) for an explanation of how hybrid retrieval contributes to high quality RAG apps.
+This article explains the concepts, benefits, and limitations of hybrid search. Links at the end provide instructions and next steps. You can also watch this [embedded video](#why-choose-hybrid-search) for an explanation of how hybrid retrieval contributes to high quality generative search applications.
 
 ## How does hybrid search work?
 
-In Azure AI Search, vector fields containing embeddings can live alongside textual and numerical fields, allowing you to formulate hybrid queries that execute in parallel. Hybrid queries can take advantage of existing text-based functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) on your text fields, while executing a similarity search against vectors, all in a single search request.
+In a search index, vector fields containing embeddings coexist with textual and numerical fields, allowing you to formulate hybrid queries that execute in parallel. Hybrid queries can take advantage of existing text-based functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) on your text fields, while executing a similarity search against vectors, all in a single search request.
 
 Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 for text, and Hierarchical Navigable Small World (HNSW) and exhaustive K Nearest Neighbors (eKNN) for vectors. A [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) algorithm merges the results. The query response provides just one result set, using RRF to rank the unified results.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Update to the hybrid search overview"
}

Explanation

This diff shows an update to "hybrid-search-overview.md", the document that describes hybrid search in Azure AI Search. The update clarifies and refreshes the content, with the following key points.

  1. Date change: The article date was moved from May 2025 to July 2025 so the content reflects the latest information.

  2. Improved wording: The definition of hybrid search was revised slightly so that the text and vector portions of the query are described more clearly; it is now stated explicitly that hybrid search is a single query request that includes both the `search` and `vectors` query parameters.

  3. Behavior made explicit: The parallel execution of the subqueries and the merging of results via Reciprocal Rank Fusion (RRF) are spelled out, making it easier to understand how hybrid search behaves.

  4. Strengthened content: Terminology used in the article was clarified, and the contribution of hybrid search to generative search applications is now mentioned concretely.

Overall, this update provides the information needed to understand how hybrid search works, along with its benefits and limitations, so that users can apply Azure AI Search more effectively.
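The RRF merge described above can be sketched in a few lines. This is an illustrative implementation, not Azure AI Search's internal code; the smoothing constant `k=60` is the value commonly cited for RRF, and the document ids are placeholders.

```python
def rrf_merge(ranked_lists, k=60):
    """Merge several ranked result lists with Reciprocal Rank Fusion.

    Each input list is ordered best-first. A document's fused score is
    the sum of 1 / (k + rank) over every list it appears in.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first: the single unified result set.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "doc1" tops both lists, so it outranks "doc3" (present twice but lower)
# and "doc2" (present only once).
merged = rrf_merge([["doc1", "doc2", "doc3"], ["doc1", "doc3"]])
```

This mirrors why hybrid search rewards documents that rank well in both the BM25 and vector result sets.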

articles/search/hybrid-search-ranking.md

Diff
@@ -9,17 +9,17 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 03/11/2025
+ms.date: 07/21/2025
 ---
 
 # Relevance scoring in hybrid search using Reciprocal Rank Fusion (RRF)
 
-Reciprocal Rank Fusion (RRF) is an algorithm that evaluates the search scores from multiple, previously ranked results to produce a unified result set. In Azure AI Search, RRF is used whenever there are two or more queries that execute in parallel. Each query produces a ranked result set, and RRF merges and homogenizes the rankings into a single result set for the query response. Examples of scenarios where RRF is always used include [*hybrid search*](hybrid-search-overview.md) and multiple vector queries executing concurrently. 
+Reciprocal Rank Fusion (RRF) is an algorithm that evaluates the search scores from multiple, previously ranked results to produce a unified result set. In Azure AI Search, RRF is used when two or more queries execute in parallel. Namely, for [hybrid queries](hybrid-search-overview.md) and for [multiple vector queries](vector-search-overview.md). Each individual query produces a ranked result set, and RRF merges and homogenizes the rankings into a single result set for the query response. 
 
 RRF is based on the concept of *reciprocal rank*, which is the inverse of the rank of the first relevant document in a list of search results. The goal of the technique is to take into account the position of the items in the original rankings, and give higher importance to items that are ranked higher in multiple lists. This can help improve the overall quality and reliability of the final ranking, making it more useful for the task of fusing multiple ordered search results.
 
 > [!NOTE]
-> New in [**2024-09-01-preview**](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true) is the ability to deconstruct an RRF-ranked search score into its component subscores. This gives you transparency into all-up score composition. For more information, see [unpack search scores (preview)](#unpack-a-search-score-into-subscores-preview) in this article.
+> [Preview APIs](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true) can deconstruct an RRF-ranked search score into its component subscores. This gives you transparency into all-up score composition. For more information, see [unpack search scores (preview)](#unpack-a-search-score-into-subscores-preview) in this article.
 
 ## How RRF ranking works
 
@@ -62,20 +62,20 @@ Semantic ranking occurs after RRF merging of results. Its score (`@search.rerank
 
 ## Unpack a search score into subscores (preview)
 
-Using [**2024-09-01-preview**](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true), you can deconstruct a search score to view its subscores.
+Using the [latest preview API version](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true), you can deconstruct a search score to view its subscores.
 
 For vector queries, this information can help you determine an appropriate value for [vector weighting](vector-search-how-to-query.md#vector-weighting) or [setting minimum thresholds](vector-search-how-to-query.md#set-thresholds-to-exclude-low-scoring-results-preview).
 
 To get subscores:
 
-+ Use the [latest preview Search Documents REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true#request-body) or an Azure SDK beta package that provides the feature.
++ Use the [latest preview Search Documents REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2025-05-01-preview&preserve-view=true#request-body) or an Azure SDK beta package that provides the feature.
 
 + Modify a query request, adding a new `debug` parameter set to either `vector`, `semantic` if using semantic ranker, or `all`.
 
 Here's an example of hybrid query that returns subscores in debug mode:
 
 ```http
-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2024-09-01=preview
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2025-05-01=preview
 
 {
     "vectorQueries": [
@@ -115,7 +115,7 @@ POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/d
 
 ## Weighted scores
 
-Using [**2024-07-01**](/rest/api/searchservice/documents/search-post) and newer preview API versions, you can [weight vector queries](vector-search-how-to-query.md#vector-weighting) to increase or decrease their importance in a hybrid query.
+Using the [stable REST API version](/rest/api/searchservice/documents/search-post) and newer preview API versions, you can [weight vector queries](vector-search-how-to-query.md#vector-weighting) to increase or decrease their importance in a hybrid query.
 
 Recall that when computing RRF for a certain document, the search engine looks at the rank of that document for each result set where it shows up. Assume a document shows up in three separate search results, where the results are from two vector queries and one text BM25-ranked query. The position of the document varies in each result.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Update to hybrid search ranking"
}

Explanation

This diff shows revisions to "hybrid-search-ranking.md", the document covering relevance ranking for hybrid search in Azure AI Search. The update clarifies and refreshes the content, improving mainly the following points.

  1. Date update: The article date was moved from March 2025 to July 2025 so the content reflects the latest information.

  2. Improved wording: The description of the Reciprocal Rank Fusion (RRF) algorithm was revised with clearer terms and phrasing, in particular where it refers to hybrid queries and multiple vector queries.

  3. New capability highlighted: It is now emphasized that the latest preview APIs can deconstruct an RRF-ranked search score into its component subscores, improving transparency into how the overall score is composed.

  4. Debug feature: The document explains how to retrieve subscores by adding a new `debug` parameter to the query request, letting developers analyze search scores in detail.

  5. API version update: The switch from older API versions to the latest preview version is emphasized, steering users toward the newest features.

Together, these changes deepen understanding of relevance scoring in hybrid search and help users apply Azure AI Search effectively.
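As a concrete illustration of the `debug` parameter discussed above, here is a sketch of a hybrid query request body for the preview Search Documents API. The query text and the vector field name (`text_vector`) are placeholder assumptions, not values taken from the source document.

```python
import json

# Hybrid query: full-text "search" plus one vector subquery, with
# "debug" requesting per-query subscores (preview behavior).
body = {
    "search": "sunny beachfront hotels",
    "vectorQueries": [
        {
            "kind": "text",                    # service vectorizes the text
            "text": "sunny beachfront hotels",
            "fields": "text_vector",           # placeholder vector field name
        }
    ],
    "debug": "all",                            # or "vector" / "semantic"
    "top": 10,
}
payload = json.dumps(body)                     # body for the POST request
```

The serialized `payload` would be sent to `.../docs/search?api-version=<preview-version>` as shown in the diff above.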

articles/search/media/search-get-started-portal-import-vectors/extract-text-images.png

Summary

{
    "modification_type": "new feature",
    "modification_title": "New image added for extracting text from images"
}

Explanation

This diff shows the addition of the image file "extract-text-images.png" for the Azure AI Search documentation. The addition gives users visual support for understanding how text is extracted from images. Key points:

  1. Purpose of the image: It visually illustrates the process of extracting text from images when importing vectors through the portal.

  2. Better user guidance: With the image in place, users can grasp the steps and features more intuitively, which improves the usability of the documentation as a whole.

  3. Stronger documentation: Pairing the text with an image reinforces how the information is conveyed and improves learning outcomes.

In this way, the added image is an important element in improving the user experience.

articles/search/media/search-get-started-portal-import-vectors/vectorize-images.png

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated illustration of image vectorization"
}

Explanation

This diff shows a modification to the image file "vectorize-images.png" used in the Azure AI Search documentation. The change likely improves the image's content or quality, refining the visual that helps explain the process. Specifically:

  1. Purpose of the image: The image visually shows the process of vectorizing images through the portal; the improved version should make the steps easier to follow.

  2. Better visual communication: The update likely improves the image's quality and how it conveys information, which plays an important role in deepening users' understanding.

  3. Better documentation overall: Keeping images current increases the reliability of the related documents and makes them an easier resource to use.

In this way, the image update is a small but important step toward better usability.

articles/search/media/search-get-started-portal-import-vectors/vectorize-text-ai-vision.png

Summary

{
    "modification_type": "new feature",
    "modification_title": "New image added for text vectorization with AI Vision"
}

Explanation

This diff shows the addition of the new image file "vectorize-text-ai-vision.png" for the Azure AI Search documentation. The image visually explains the process of vectorizing text with AI Vision. Key points:

  1. Introducing a new feature: The image was added to show how text is vectorized using AI Vision, with the particular goal of making the new capability easier for users to understand.

  2. Visual guidance: The new image strengthens the documentation's visual presentation, making it easier for users to connect the technique to the steps involved.

  3. Overall documentation improvement: The new image reinforces the related content and raises its educational value, making the article a richer resource overall.

In this way, the new image is expected to be an important aid to user understanding.

articles/search/media/search-get-started-portal-import-vectors/vectorize-text-aoai.png

Summary

{
    "modification_type": "minor update",
    "modification_title": "Text vectorization image renamed"
}

Explanation

This diff shows that the file "vectorize-text.png" was renamed to "vectorize-text-aoai.png". The rename matters for the following reasons:

  1. Clearer naming: The new file name reflects the specific feature and technology involved; "aoai" abbreviates Azure OpenAI, so users can immediately tell what the image shows.

  2. Better consistency: The rename keeps the image aligned with other related files and resources; following a consistent naming convention keeps the documentation coherent and avoids confusion when the files are used.

  3. Better user experience: Clear file names help users find the information they need quickly, improving the overall experience.

In this way, the file rename is an important update that promotes documentation quality and user understanding.

articles/search/media/search-get-started-portal-import-vectors/vectorize-text-catalog.png

Summary

{
    "modification_type": "new feature",
    "modification_title": "New image added for text vectorization via the model catalog"
}

Explanation

This diff shows the addition of the new image file "vectorize-text-catalog.png" for the Azure AI Search documentation. The image provides visual guidance on vectorizing text with models from the model catalog. Key points:

  1. New feature addition: The image explains the approach to vectorizing text via the model catalog, helping users understand how to use this capability and what its benefits are.

  2. Stronger learning support: The new image lets users build understanding through visual information, so they can use the content more effectively.

  3. Richer documentation: The new image makes the article more informative and more valuable to users; visual elements complement the content and help hold the reader's interest.

With this change, the new image is expected to deepen users' understanding of text vectorization through the model catalog.

articles/search/retrieval-augmented-generation-overview.md

Diff
@@ -148,7 +148,7 @@ Here are some tips for maximizing relevance and recall:
 
   + [Semantic ranker](semantic-ranking.md) that re-ranks an initial results set, using semantic models from Bing to reorder results for a better semantic fit to the original query.
 
-  + Query parameters for fine-tuning. You can [bump up the importance of vector queries](vector-search-how-to-query.md#vector-weighting) or [adjust the amount of BM25-ranked results](vector-search-how-to-query.md#maxtextsizerecall-for-hybrid-search-preview) in a hybrid query. You can also [set minimum thresholds to exclude low scoring results](vector-search-how-to-query.md#set-thresholds-to-exclude-low-scoring-results-preview) from a vector query.
+  + Query parameters for fine-tuning. You can [boost the importance of vector queries](vector-search-how-to-query.md#vector-weighting) or [adjust the amount of BM25-ranked results](hybrid-search-how-to-query.md#set-maxtextrecallsize-and-countandfacetmode) in a hybrid query response. You can also [set minimum thresholds to exclude low scoring results](vector-search-how-to-query.md#set-thresholds-to-exclude-low-scoring-results-preview) from a vector query.
 
 In comparison and benchmark testing, hybrid queries with text and vector fields, supplemented with semantic ranking, produce the most relevant results.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Corrections to query parameter information"
}

Explanation

This diff shows that information about query parameters in "retrieval-augmented-generation-overview.md" was partially revised. Specifically:

  1. Wording fix: The phrase "bump up the importance" was changed to "boost the importance", a clearer and more natural wording.

  2. Updated link targets: Some links were corrected to point to more accurate sources; the link related to "BM25-ranked results" now targets the hybrid search how-to section on `maxTextRecallSize` and `countAndFacetMode`. This makes the information users are looking for easier to reach and improves the experience.

  3. Consistency: The revised content keeps the documentation consistent while giving a clearer explanation, deepening users' understanding of how to configure query parameters.

With these changes, users get better-quality information for tuning queries and searching effectively, raising the overall quality of the document.
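The tuning knobs referenced above (vector weighting, minimum score thresholds, and capping the BM25-ranked recall) appear in a query body roughly as follows. This is a sketch against the preview API: the field name and the numeric values are illustrative assumptions, not settings from the source document.

```python
# Hybrid query that boosts the vector subquery, drops low-similarity
# vector matches, and caps how many BM25-ranked results feed into RRF.
body = {
    "search": "quiet mountain cabins",
    "vectorQueries": [
        {
            "kind": "text",
            "text": "quiet mountain cabins",
            "fields": "text_vector",          # placeholder field name
            "weight": 2.0,                    # boost this subquery in fusion
            "threshold": {                    # preview: exclude weak matches
                "kind": "vectorSimilarity",
                "value": 0.8,
            },
        }
    ],
    "hybridSearch": {
        "maxTextRecallSize": 100,             # BM25 results passed to RRF
        "countAndFacetMode": "countRetrievableResults",
    },
}
```

Raising `weight` shifts the fused ranking toward vector matches; lowering `maxTextRecallSize` does the same from the text side.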

articles/search/search-api-preview.md

Diff
@@ -44,7 +44,7 @@ Preview features are removed from this list if they're retired or transition to
 | [**Rescoring options for compressed vectors**](vector-search-how-to-quantization.md) | Relevance (scoring) | You can set options to rescore with original vectors instead of compressed vectors. Applies to HNSW and exhaustive KNN vector algorithms, using binary and scalar compression. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-09-01-preview&preserve-view=true).|
 | [**Lower the dimension requirements for MRL-trained text embedding models on Azure OpenAI**](vector-search-how-to-truncate-dimensions.md) | Index | Text-embedding-3-small and Text-embedding-3-large are trained using Matryoshka Representation Learning (MRL). This allows you to truncate the embedding vectors to fewer dimensions, and adjust the balance between vector index size usage and retrieval quality. A new `truncationDimension` provides the MRL behaviors as an extra parameter in a vector compression configuration. This can only be configured for new vector fields. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
 | [**Unpack `@search.score` to view subscores in hybrid search results**](hybrid-search-ranking.md#unpack-a-search-score-into-subscores-preview) | Relevance (scoring) | You can investigate Reciprocal Rank Fusion (RRF) ranked results by viewing the individual query subscores of the final merged and scored result. A new `debug` property unpacks the search score. `QueryResultDocumentSubscores`, `QueryResultDocumentRerankerInput`, and `QueryResultDocumentSemanticField` provide the extra detail. | [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
-| [**Target filters in a hybrid search to just the vector queries**](hybrid-search-how-to-query.md#hybrid-search-with-filters-targeting-vector-subqueries-preview) | Query | A filter on a hybrid query involves all subqueries on the request, regardless of type. You can override the global filter to scope the filter to a specific subquery. A new `filterOverride` parameter provides the behaviors. | [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
+| [**Target filters in a hybrid search to just the vector queries**](hybrid-search-how-to-query.md#example-hybrid-search-with-filters-targeting-vector-subqueries-preview) | Query | A filter on a hybrid query involves all subqueries on the request, regardless of type. You can override the global filter to scope the filter to a specific subquery. A new `filterOverride` parameter provides the behaviors. | [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
 | [**Text Split skill (token chunking)**](cognitive-search-skill-textsplit.md) | Applied AI (skills) | This skill has new parameters that improve data chunking for embedding models. A new `unit` parameter lets you specify token chunking. You can now chunk by token length, setting the length to a value that makes sense for your embedding model. You can also specify the tokenizer and any tokens that shouldn't be split during data chunking. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
 | [**Azure AI Vision multimodal embedding skill**](cognitive-search-skill-vision-vectorize.md) | Applied AI (skills) | A new skill type that calls Azure AI Vision multimodal API to generate embeddings for text or images during indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
 | [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | Applied AI (skills) | AML skill integrates an inferencing endpoint from Azure Machine Learning. In previous preview APIs, it supports connections to deployed custom models in an AML workspace. Starting in the 2024-05-01-preview, you can use this skill in workflows that connect to embedding models in the Azure AI Foundry model catalog. It's also available in the Azure portal, in skillset design, assuming Azure AI Search and Azure Machine Learning services are deployed in the same subscription. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |

Summary

{
    "modification_type": "minor update",
    "modification_title": "Updated example link for filter targeting in hybrid search"
}

Explanation

This diff shows an update in "search-api-preview.md" to the information about targeting filters in hybrid search. The specific changes:

  1. Link fix: The link for the section on targeting filters in a hybrid search was changed to point to a concrete example, making it easier for users to find clear, specific information.

  2. Consistent, clearer description: The description of filter targeting itself is unchanged, but the updated link improves access to related information and keeps the section consistent. Users can better understand how to apply a filter to a specific subquery via the `filterOverride` parameter.

This change improves the documentation on applying filters in hybrid search and makes the information easier to find, so users can use the feature more effectively.
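The `filterOverride` behavior described above can be sketched as a request body: the global `filter` constrains every subquery, except where a vector subquery carries its own override. The field names and OData filter expressions are placeholders, not values from the source document.

```python
# Preview feature: "filterOverride" replaces the global "filter" for
# this one vector subquery only; the text subquery keeps "Rating ge 4".
body = {
    "search": "pet friendly hotels",
    "filter": "Rating ge 4",
    "vectorQueries": [
        {
            "kind": "text",
            "text": "pet friendly hotels",
            "fields": "text_vector",            # placeholder field name
            "filterOverride": "Rating ge 3",    # looser, vector side only
        }
    ],
}
```

A looser override like this lets the vector subquery recall near-miss documents that the stricter global filter would otherwise exclude before fusion.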

articles/search/search-get-started-portal-image-search.md

Diff
@@ -7,7 +7,7 @@ ms.author: haileytapia
 ms.service: azure-ai-search
 ms.update-cycle: 90-days
 ms.topic: quickstart
-ms.date: 07/16/2025
+ms.date: 07/22/2025
 ms.custom:
   - references_regions
 ---
@@ -52,7 +52,7 @@ For content embedding, you can choose either image verbalization (followed by te
 | Method | Description | Supported models |
 |--|--|--|
 | Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
-| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |
+| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |
 
 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
 
@@ -128,6 +128,9 @@ On your Azure OpenAI resource:
 
 The Azure AI Foundry model catalog provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [AML skill](cognitive-search-aml-skill.md).
 
+> [!NOTE]
+> If you're using a hub-based project for multimodal embeddings, skip this step. The wizard requires key-based authentication in this scenario.
+
 On your Azure AI Foundry project:
 
 + Assign **Azure AI Project Manager** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
@@ -197,7 +200,7 @@ Azure AI Search requires a connection to a data source for content ingestion and
 
 To connect to your data:
 
-1. On the **Connect to your data** page, specify your Azure subscription.
+1. On the **Connect to your data** page, select your Azure subscription.
 
 1. Select the storage account and container to which you uploaded the sample data.
 
@@ -233,7 +236,7 @@ To use the Document Layout skill:
 
    :::image type="content" source="media/search-get-started-portal-images/extract-your-content-doc-intelligence.png" alt-text="Screenshot of the wizard page with Azure AI Document Intelligence selected for content extraction." border="true" lightbox="media/search-get-started-portal-images/extract-your-content-doc-intelligence.png":::
 
-1. Specify your Azure subscription and multi-service resource.
+1. Select your Azure subscription and multi-service resource.
 
 1. For the authentication type, select **System assigned identity**.
 
@@ -267,7 +270,7 @@ To use the skills for image verbalization:
 
    1. For the kind, select your LLM provider: **Azure OpenAI** or **AI Foundry Hub catalog models**.
 
-   1. Specify your Azure subscription, resource, and LLM deployment.
+   1. Select your Azure subscription, resource, and LLM deployment.
 
    1. For the authentication type, select **System assigned identity**.
 
@@ -279,7 +282,7 @@ To use the skills for image verbalization:
 
    1. For the kind, select your model provider: **Azure OpenAI**, **AI Foundry Hub catalog models**, or **AI Vision vectorization**.
 
-   1. Specify your Azure subscription, resource, and embedding model deployment.
+   1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
 
    1. For the authentication type, select **System assigned identity**.
 
@@ -305,7 +308,9 @@ To use the skills for multimodal embeddings:
 
    If Azure AI Vision is unavailable, make sure your search service and multi-service resource are both in a [region that supports the Azure AI Vision multimodal APIs](/azure/ai-services/computer-vision/how-to/image-retrieval).
 
-1. Specify your Azure subscription, resource, and embedding model deployment.
+1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
+
+1. If you're using Azure AI Vision, select **System assigned identity** for the authentication type. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using this resource.
 
@@ -321,7 +326,7 @@ The next step is to send images extracted from your documents to Azure Storage.
 
 To store the extracted images:
 
-1. On the **Image output** page, specify your Azure subscription.
+1. On the **Image output** page, select your Azure subscription.
 
 1. Select the storage account and blob container you created to store the images.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "Revisions to the image search quickstart"
}

Explanation

This diff shows updates to the image search quickstart in "search-get-started-portal-image-search.md". Overview of the changes:

  1. Date update: The date was changed from 07/16/2025 to 07/22/2025, reflecting the document's currency.

  2. More consistent terminology: Several occurrences of "Specify your Azure subscription" were changed to "Select your Azure subscription", improving consistency and making the instructions clearer.

  3. New note added: A note now tells users with a hub-based project to skip a specific step, since the wizard requires key-based authentication in that scenario. Users get instructions tailored to their situation.

  4. Stronger explanations: The authentication steps, such as selecting **System assigned identity** when using Azure AI Vision, are defined more clearly, making the settings easier to understand.

  5. Additional content: The steps for multimodal embeddings were also partially updated to provide a better user experience.

These changes update the document so that users can make more effective use of the image search features, improving navigation and understanding overall.

articles/search/search-get-started-portal-import-vectors.md

Diff
@@ -10,7 +10,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: quickstart
-ms.date: 07/17/2025
+ms.date: 07/22/2025
 ---
 
 # Quickstart: Vectorize text in the Azure portal
@@ -185,16 +185,16 @@ This section points you to the content that works for this quickstart. Before yo
 
 ## Prepare embedding model
 
-The wizard can use embedding models deployed from Azure OpenAI, Azure AI Vision, or from the model catalog in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). Before you proceed, make sure you completed the prerequisites for [role-based access](#role-based-access).
+The wizard can use embedding models deployed from Azure OpenAI, Azure AI Vision, or the Azure AI Foundry model catalog. Before you proceed, make sure you completed the prerequisites for [role-based access](#role-based-access).
 
 ### [Azure OpenAI](#tab/model-aoai)
 
 The wizard supports text-embedding-ada-002, text-embedding-3-large, and text-embedding-3-small. Internally, the wizard calls the [AzureOpenAIEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) to connect to Azure OpenAI.
 
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure OpenAI resource.
-
 1. To assign roles:
 
+   1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure OpenAI resource.
+
    1. From the left pane, select **Access control (IAM)**.
 
    1. Select **Add** > **Add role assignment**.
@@ -215,54 +215,37 @@ The wizard supports text-embedding-ada-002, text-embedding-3-large, and text-emb
 
 ### [Azure AI Vision](#tab/model-ai-vision)
 
-The wizard supports Azure AI Vision image retrieval through multimodal embeddings (version 4.0). Internally, the wizard calls the [multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) to connect to Azure AI Vision.
+The wizard supports text and image retrieval through the Azure AI Vision multimodal APIs, which are built into your Azure AI multi-service resource. Internally, the wizard calls the [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) to make the connection.
 
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure AI multi-service resource.
-
-1. To assign roles:
+Since no model deployment is required, you only need to assign roles to your search service identity.
 
-   1. From the left pane, select **Access control (IAM)**.
+To assign roles:
 
-   1. Select **Add** > **Add role assignment**.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your multi-service resource.
 
-   1. Under **Job function roles**, select **Cognitive Services User**, and then select **Next**.
+1. From the left pane, select **Access control (IAM)**.
 
-   1. Under **Members**, select **Managed identity**, and then select **Select members**.
+1. Select **Add** > **Add role assignment**.
 
-   1. Select your subscription and the managed identity of your search service.
+1. Under **Job function roles**, select **Cognitive Services User**, and then select **Next**.
 
-The multimodal embeddings are built into your Azure AI multi-service resource, so there's no model deployment step. You should now be able to select the Azure AI Vision vectorizer in the **Import and vectorize data wizard**.
+1. Under **Members**, select **Managed identity**, and then select **Select members**.
 
-> [!NOTE]
-> If you can't select the Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region. Also make sure the managed identity of your search service has **Cognitive Services User** permissions.
+1. Select your subscription and the managed identity of your search service.
 
 ### [Azure AI Foundry model catalog](#tab/model-catalog)
 
 The wizard supports Azure, Cohere, and Facebook embedding models in the Azure AI Foundry model catalog, but it doesn't currently support the OpenAI CLIP models. Internally, the wizard calls the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
 
-For the model catalog, you should have an [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) with a [hub that's connected to an Azure OpenAI resource and an Azure AI Search service](/azure/ai-foundry/how-to/create-projects#create-a-project).
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure OpenAI resource.
-
-1. To assign roles:
+To complete these steps, you must have a [hub-based project](/azure/ai-foundry/how-to/create-projects) in Azure AI Foundry. Currently, hub-based projects support API keys instead of managed identities for authentication, so there's no role assignment step. You only need to deploy a model from the catalog.
 
-   1. From the left pane, select **Access control (IAM)**.
+To deploy an embedding model:
 
-   1. Select **Add** > **Add role assignment**.
+1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) and select your hub-based project.
 
-   1. Under **Job function roles**, select **Cognitive Services User**, and then select **Next**.
+1. From the left pane, select **Model catalog**.
 
-   1. Under **Members**, select **Managed identity**, and then select **Select members**.
-
-   1. Select your subscription and the managed identity of your search service.
-
-1. To deploy an embedding model:
-
-   1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) and select your project.
-
-   1. From the left pane, select **Model catalog**.
-
-   1. Deploy a [supported embedding model](#supported-embedding-models).
+1. Deploy a [supported embedding model](#supported-embedding-models).
 
 ---
 
@@ -284,39 +267,33 @@ To start the wizard for vector search:
 
 ## Connect to your data
 
-The next step is to connect to a data source to use for the search index.
+In this step, you connect Azure AI Search to a [supported data source](#supported-data-sources) for content ingestion and indexing.
 
 ### [Azure Blob Storage](#tab/connect-data-storage)
 
-1. On the **Connect to your data** page, specify the Azure subscription.
+1. On the **Connect to your data** page, select your Azure subscription.
 
 1. Select the storage account and container that provide the sample data.
 
-1. If you enabled soft delete and optionally added custom metadata in [Prepare sample data](#prepare-sample-data), select the **Enable deletion tracking** checkbox.
+1. If you enabled soft delete and added custom metadata in [Prepare sample data](#prepare-sample-data), select the **Enable deletion tracking** checkbox.
 
    + On subsequent indexing runs, the search index is updated to remove any search documents based on soft-deleted blobs on Azure Storage.
 
    + Blobs support either **Native blob soft delete** or **Soft delete using custom metadata**.
 
    + If you configured your blobs for soft delete, provide the metadata property name-value pair. We recommend **IsDeleted**. If **IsDeleted** is set to **true** on a blob, the indexer drops the corresponding search document on the next indexer run.
 
-   The wizard doesn't check Azure Storage for valid settings or throw an error if the requirements aren't met. Instead, deletion detection doesn't work, and your search index is likely to collect orphaned documents over time.
-
-   :::image type="content" source="media/search-get-started-portal-import-vectors/data-source-blob.png" alt-text="Screenshot of the data source page with deletion detection options.":::
-
-1. Select the **Authenticate using managed identity** checkbox.
-
-   + For the type of managed identity, select **System-assigned**.
+   + The wizard doesn't check Azure Storage for valid settings or throw an error if the requirements aren't met. Instead, deletion detection doesn't work, and your search index is likely to collect orphaned documents over time.
 
-   + The identity should have a **Storage Blob Data Reader** role on Azure Storage.
+1. Select the **Authenticate using managed identity** checkbox. Leave the identity type as **System-assigned**.
 
-   + Don't skip this step. A connection error occurs during indexing if the wizard can't connect to Azure Storage.
+   :::image type="content" source="media/search-get-started-portal-import-vectors/data-source-blob.png" alt-text="Screenshot of the data source page with deletion detection options." lightbox="media/search-get-started-portal-import-vectors/data-source-blob.png":::
 
 1. Select **Next**.
 
 ### [ADLS Gen2](#tab/connect-data-adlsgen2)
 
-1. On the **connect to your data** page, specify the Azure subscription.
+1. On the **Connect to your data** page, select your Azure subscription.
 
 1. Select the storage account and container that provide the sample data.
 
@@ -328,23 +305,17 @@ The next step is to connect to a data source to use for the search index.
 
    + Provide the metadata property you created for deletion detection. We recommend **IsDeleted**. If **IsDeleted** is set to **true** on a blob, the indexer drops the corresponding search document on the next indexer run.
 
-   The wizard doesn't check Azure Storage for valid settings or throw an error if the requirements aren't met. Instead, deletion detection doesn't work, and your search index is likely to collect orphaned documents over time.
-
-   :::image type="content" source="media/search-get-started-portal-import-vectors/data-source-data-lake-storage.png" alt-text="Screenshot of the data source page with deletion detection options.":::
-
-1. Select the **Authenticate using managed identity** checkbox.
-
-   + For the type of managed identity, select **System-assigned**.
+      The wizard doesn't check Azure Storage for valid settings or throw an error if the requirements aren't met. Instead, deletion detection doesn't work, and your search index is likely to collect orphaned documents over time.
 
-   + The identity should have a **Storage Blob Data Reader** role on Azure Storage.
+1. Select the **Authenticate using managed identity** checkbox. Leave the identity type as **System-assigned**.
 
-   + Don't skip this step. A connection error occurs during indexing if the wizard can't connect to Azure Storage.
+   :::image type="content" source="media/search-get-started-portal-import-vectors/data-source-data-lake-storage.png" alt-text="Screenshot of the data source page with deletion detection options." lightbox="media/search-get-started-portal-import-vectors/data-source-data-lake-storage.png":::
 
 1. Select **Next**.
 
 ### [OneLake](#tab/connect-data-onelake)
 
-1. On the **connect to your data** page, select **Lakehouse URL** for the connection type.
+1. On the **Connect to your data** page, select **Lakehouse URL** for the connection type.
 
 1. Paste the URL you copied in [Prepare sample data](#prepare-sample-data).
 
@@ -354,15 +325,13 @@ The next step is to connect to a data source to use for the search index.
 
 ### [Logic Apps](#tab/connect-logic-apps)
 
-The current preview adds support for Logic Apps connectors. For a list of supported connectors and operations:
-
-+ [Use a Logic Apps connector for indexer-based indexing](search-how-to-index-logic-apps-indexers.md)
+The current preview adds support for Logic Apps connectors. For a list of supported connectors and operations, see [Use a Logic Apps connector for indexer-based indexing](search-how-to-index-logic-apps-indexers.md).
 
 ---
 
 ## Vectorize your text
 
-In this step, you specify an embedding model to vectorize chunked data. Chunking is built in and nonconfigurable. The effective settings are:
+During this step, the wizard uses your chosen [embedding model](#supported-embedding-models) to vectorize chunked data. Chunking is built in and nonconfigurable. The effective settings are:
 
 ```json
 "textSplitMode": "pages",
@@ -372,60 +341,98 @@ In this step, you specify an embedding model to vectorize chunked data. Chunking
 "unit": "characters"
 ```
 
-1. On the **Vectorize your text** page, select the source of your embedding model:
+### [Azure OpenAI](#tab/vectorize-text-aoai)
+
+1. On the **Vectorize your text** page, select **Azure OpenAI** for the kind.
 
-   + Azure OpenAI
+1. Select your Azure subscription.
 
-   + Azure AI Foundry model catalog
+1. Select your Azure OpenAI resource, and then select the model you deployed in [Prepare embedding model](#prepare-embedding-model).
+
+1. For the authentication type, select **System assigned identity**.
 
-   + Azure AI Vision (via an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) in the same region as Azure AI Search)
+1. Select the checkbox that acknowledges the billing effects of using these resources.
 
-1. Specify the Azure subscription.
+   :::image type="content" source="media/search-get-started-portal-import-vectors/vectorize-text-aoai.png" alt-text="Screenshot of the Vectorize your text page with Azure OpenAI in the wizard." lightbox="media/search-get-started-portal-import-vectors/vectorize-text-aoai.png":::
 
-1. Depending on your resource, make the following selection:
+1. Select **Next**.
 
-   + For Azure OpenAI, select the model you deployed in [Prepare embedding model](#prepare-embedding-model).
+### [Azure AI Vision](#tab/vectorize-text-ai-vision)
 
-   + For AI Foundry model catalog, select the model you deployed in [Prepare embedding model](#prepare-embedding-model).
+1. On the **Vectorize your text** page, select **AI Vision vectorization** for the kind.
 
-   + For AI Vision multimodal embeddings, select your multi-service resource.
+1. Select your Azure subscription and Azure AI multi-service resource.
 
 1. For the authentication type, select **System assigned identity**.
 
-   + The identity should have a **Cognitive Services User** role on the Azure AI services multi-service resource.
+1. Select the checkbox that acknowledges the billing effects of using these resources.
+
+   :::image type="content" source="media/search-get-started-portal-import-vectors/vectorize-text-ai-vision.png" alt-text="Screenshot of the Vectorize your text page with Azure AI Vision in the wizard." lightbox="media/search-get-started-portal-import-vectors/vectorize-text-ai-vision.png":::
+
+1. Select **Next**.
+
+### [Azure AI Foundry model catalog](#tab/vectorize-text-catalog)
+
+1. On the **Vectorize your text** page, select **AI Foundry Hub catalog models** for the kind.
+
+1. Select your Azure subscription.
+
+1. Select your hub-based project, and then select the model you deployed in [Prepare embedding model](#prepare-embedding-model).
+
+1. Leave the authentication type as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using these resources.
 
-   :::image type="content" source="media/search-get-started-portal-import-vectors/vectorize-text.png" alt-text="Screenshot of the Vectorize your text page in the wizard.":::
+   :::image type="content" source="media/search-get-started-portal-import-vectors/vectorize-text-catalog.png" alt-text="Screenshot of the Vectorize your text page with the Azure AI Foundry model catalog in the wizard." lightbox="media/search-get-started-portal-import-vectors/vectorize-text-catalog.png":::
 
 1. Select **Next**.
 
+---
+
 ## Vectorize and enrich your images
 
 The health-plan PDFs include a corporate logo, but otherwise, there are no images. You can skip this step if you're using the sample documents.
 
-However, if you work with content that includes useful images, you can apply AI in two ways:
+However, if your content includes useful images, you can apply AI in one or both of the following ways:
 
-+ Use a supported image embedding model from the catalog or the Azure AI Vision multimodal embeddings API to vectorize images.
++ Use a supported image embedding model from the Azure AI Foundry model catalog or the Azure AI Vision multimodal embeddings API (via an Azure AI multi-service resource) to vectorize images.
 
-+ Use optical character recognition (OCR) to recognize text in images. This option invokes the [OCR skill](cognitive-search-skill-ocr.md) to read text from images.
++ Use optical character recognition (OCR) to extract text from images. This option invokes the [OCR skill](cognitive-search-skill-ocr.md).
 
-Azure AI Search and your Azure AI resource must be in the same region or configured for [keyless billing connections](cognitive-search-attach-cognitive-services.md).
+### [Vectorize images](#tab/vectorize-images)
 
-1. On the **Vectorize your images** page, specify the kind of connection the wizard should make. For image vectorization, the wizard can connect to embedding models in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) or Azure AI Vision.
+1. On the **Vectorize and enrich your images** page, select the **Vectorize images** checkbox.
 
-1. Specify the subscription.
+1. For the kind, select your model provider: **AI Foundry Hub catalog models** or **AI Vision vectorization**.
 
-1. For the Azure AI Foundry model catalog, specify the project and deployment. For more information, see [Prepare embedding models](#prepare-embedding-model).
+   If Azure AI Vision is unavailable, make sure your search service and multi-service resource are both in a [region that supports the Azure AI Vision multimodal APIs](/azure/ai-services/computer-vision/how-to/image-retrieval).
 
-1. (Optional) Crack binary images, such as scanned document files, and use [OCR](cognitive-search-skill-ocr.md) to recognize text.
+1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
+
+1. If you're using Azure AI Vision, select **System assigned identity** for the authentication type. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using these resources.
 
-   :::image type="content" source="media/search-get-started-portal-import-vectors/vectorize-images.png" alt-text="Screenshot of the Vectorize your images page in the wizard.":::
+   :::image type="content" source="media/search-get-started-portal-import-vectors/vectorize-images.png" alt-text="Screenshot of the Vectorize and enrich your images page in the wizard." lightbox="media/search-get-started-portal-import-vectors/vectorize-images.png":::
 
 1. Select **Next**.
 
+### [Extract text from images](#tab/extract-text-images)
+
+1. On the **Vectorize and enrich your images** page, select the **Extract text from images** checkbox.
+
+1. Select your Azure subscription and multi-service resource.
+
+1. For the authentication type, select **System assigned identity**.
+
+1. Select the checkbox that acknowledges the billing effects of using these resources.
+
+   :::image type="content" source="media/search-get-started-portal-import-vectors/extract-text-images.png" alt-text="Screenshot of the Extract text from images page in the wizard." lightbox="media/search-get-started-portal-import-vectors/extract-text-images.png":::
+
+1. Select **Next**.
+
+---
+
 ## Add semantic ranking
 
 On the **Advanced settings** page, you can optionally add [semantic ranking](semantic-search-overview.md) to rerank results at the end of query execution. Reranking promotes the most semantically relevant matches to the top.
@@ -492,11 +499,11 @@ Search Explorer accepts text strings as input and then vectorizes the text for v
 
 1. Select **Query options**, and then select **Hide vector values in search results**. This step makes the results more readable.
 
-   :::image type="content" source="media/search-get-started-portal-import-vectors/query-options.png" alt-text="Screenshot of the button for query options.":::
+   :::image type="content" source="media/search-get-started-portal-import-vectors/query-options.png" alt-text="Screenshot of the button for query options." lightbox="media/search-get-started-portal-import-vectors/query-options.png":::
 
 1. From the **View** menu, select **JSON view** so you can enter text for your vector query in the `text` vector query parameter.
 
-   :::image type="content" source="media/search-get-started-portal-import-vectors/select-json-view.png" alt-text="Screenshot of the menu command for opening the JSON view.":::
+   :::image type="content" source="media/search-get-started-portal-import-vectors/select-json-view.png" alt-text="Screenshot of the menu command for opening the JSON view." lightbox="media/search-get-started-portal-import-vectors/select-json-view.png":::
 
    The default query is an empty search (`"*"`) but includes parameters for returning the number matches. It's a hybrid query that runs text and vector queries in parallel. It also includes semantic ranking and specifies which fields to return in the results through the `select` statement.
 
@@ -544,7 +551,7 @@ Search Explorer accepts text strings as input and then vectorizes the text for v
 
 1. To run the query, select **Search**.
 
-   :::image type="content" source="media/search-get-started-portal-import-vectors/search-results.png" alt-text="Screenshot of search results.":::
+   :::image type="content" source="media/search-get-started-portal-import-vectors/search-results.png" alt-text="Screenshot of search results." lightbox="media/search-get-started-portal-import-vectors/search-results.png":::
 
    Each document is a chunk of the original PDF. The `title` field shows which PDF the chunk comes from. Each `chunk` is long. You can copy and paste one into a text editor to read the entire value.
 
@@ -566,9 +573,9 @@ Search Explorer accepts text strings as input and then vectorizes the text for v
    }
    ```
 
-## Clean up
+## Clean up resource
 
-Azure AI Search is a billable resource. If you no longer need it, delete it from your subscription to avoid charges.
+This quickstart uses billable Azure resources. If you no longer need the resources, delete them from your subscription to avoid charges.
 
 ## Next step
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "ベクトルインポートのクイックスタートガイドの更新"
}

Explanation

This diff shows that the vector import quickstart guide in "search-get-started-portal-import-vectors.md" has been updated. The main changes are summarized below.

  1. Date update: The document date was changed from "07/17/2025" to "07/22/2025" to reflect the latest information.

  2. Clearer wording: In several places, "specify" was changed to "select," making the instructions more direct. For example, the steps that prompt users to choose an Azure subscription or a resource were revised.

  3. Simplified steps: Several steps were consolidated so that users can move through the procedure more smoothly, and the explanation of connecting to the multimodal APIs was streamlined.

  4. Stronger explanations: The steps for vectorizing images and extracting text with AI Vision were improved, with prerequisites and caveats emphasized, and the guide now states more clearly which kind of resource to select.

  5. New sections and expanded content: New tabbed sections for vectorizing images and extracting text from images were added, broadening how diverse data can be put to use.

  6. Removal and cleanup: Duplicate steps and unnecessary information were removed and the content was reorganized, leaving the document tidier and making the needed information easier to find.

These changes aim to help users run the vector import process more efficiently and improve the overall experience.
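
The effective chunking settings quoted in the diff split text into character-based "pages." As a rough sketch of that behavior, the overlapping splitter below uses illustrative `max_length` and `overlap` values; the wizard's actual settings are built in and nonconfigurable, so these numbers are assumptions, not the real ones:

```python
def split_into_pages(text: str, max_length: int = 2000, overlap: int = 500) -> list[str]:
    """Split text into overlapping character-based chunks ("pages").

    max_length and overlap are hypothetical values for illustration only;
    the Import and vectorize data wizard's built-in chunking settings are
    fixed and not configurable.
    """
    if max_length <= overlap:
        raise ValueError("max_length must be greater than overlap")
    pages = []
    start = 0
    while start < len(text):
        pages.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break
        start += max_length - overlap  # step back by the overlap
    return pages

pages = split_into_pages("x" * 4500, max_length=2000, overlap=500)
print(len(pages))      # → 3
print(len(pages[0]))   # → 2000
```

Consecutive pages share `overlap` characters, which helps keep a passage that straddles a chunk boundary retrievable from either chunk.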

articles/search/search-region-support.md

Diff
@@ -45,8 +45,8 @@ You can create an Azure AI Search service in any of the following Azure public r
 | Canada Central​​ | ✅ | ✅ | ✅ | ✅ | ✅ |
 | Canada East​​ ​|  |  | ✅ | ✅ |  |
 | ​Central US​​ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| East US​ | ✅ | ✅ | ✅ | ✅ |  |
-| East US 2 ​<sup>1</sup>  | ✅ | ✅ | ✅ | ✅ | ✅ |
+| East US​ <sup>1</sup> | ✅ | ✅ | ✅ | ✅ |  |
+| East US 2 | ✅ | ✅ | ✅ | ✅ | ✅ |
 | Mexico Central |  | ✅ |  |  |  |
 | North Central US​ ​| ✅ |  | ✅ | ✅ | ✅ |
 | South Central US​ | ✅ | ✅ | ✅ | ✅ | ✅ |
@@ -55,8 +55,7 @@ You can create an Azure AI Search service in any of the following Azure public r
 | West US 3​ | ✅ | ✅ | ✅ | ✅ | ✅ |
 | West Central US​ ​ | ✅ |  | ✅ | ✅ |  |
 
-<sup>1</sup> This region has capacity constraints on the following tiers: S2, S3, L1, and L2.
-
+<sup>1</sup> This region has capacity constraints in all tiers.
 ### Europe
 
 | Region | AI enrichment | Availability zones | Agentic retrieval | Semantic ranker | Query rewrite |

Summary

{
    "modification_type": "minor update",
    "modification_title": "検索リージョンサポートの更新"
}

Explanation

This diff shows that the information about Azure AI Search region support in "search-region-support.md" has been updated. The main changes are summarized below.

  1. Table fix: The rows for "East US" and "East US 2" in the region table were revised; the footnote marker (<sup>1</sup>) moved from "East US 2" to "East US".

  2. Footnote change: The footnote changed from "This region has capacity constraints on the following tiers: S2, S3, L1, and L2." to "This region has capacity constraints in all tiers." This makes the scope of the limitation explicit for users.

  3. Tidied formatting: A stray blank line was removed, improving readability.

These changes make it easier for users to understand which regions support Azure AI Search. Overall, the guidance is better organized and the important caveats stand out.

articles/search/search-what-is-azure-search.md

Diff
@@ -20,18 +20,18 @@ Azure AI Search is a scalable search infrastructure that indexes heterogeneous c
 
 The service handles both traditional search workloads and modern RAG (retrieval-augmented generation) patterns for conversational AI applications. This makes it suitable for enterprise search scenarios as well as AI-powered customer experiences that require dynamic content generation through chat completion models.
 
-<!-- Azure AI Search is a knowledge retrieval platform that consolidates and organizes information across different types of content. You add your content to a search index. Users, agents, and bots retrieve your content through queries and apps.
-Indexing and query workloads support native integration with AI models from Azure OpenAI, Azure AI Foundry, and Azure Machine Learning. By leveraging an extensibility layer, you can connect workloads to third-party and open-source AI models and tools.
-
-You can use Azure AI Search for regular search needs (like searching through catalogs or documents) or for AI-powered search that can have conversations with users and generate answers based on your content. -->
-
 <!-- Azure AI Search ([formerly known as "Azure Cognitive Search"](whats-new.md#new-service-name)) is an enterprise-ready information retrieval system for your heterogeneous content that you ingest into a search index, and surface to users through queries and apps. It comes with a comprehensive set of advanced search technologies, built for high-performance applications at any scale.
 
 Azure AI Search is the recommended retrieval system for building agent-to-agent (A2A) and RAG-based applications on Azure, with native LLM integrations between Azure OpenAI in Azure AI Foundry Models and Azure Machine Learning, with mechanisms for integrating third-party and open-source models and processes.
 
 Azure AI Search can be used for both traditional search as well as modern information retrieval. Common use cases include catalog or document search, information discovery (data exploration), and  retrieval-augmented generation (RAG) for conversational search.  
  -->
 
+<!-- Azure AI Search is a knowledge retrieval platform that consolidates and organizes information across different types of content. You add your content to a search index. Users, agents, and bots retrieve your content through queries and apps.
+Indexing and query workloads support native integration with AI models from Azure OpenAI, Azure AI Foundry, and Azure Machine Learning. By leveraging an extensibility layer, you can connect workloads to third-party and open-source AI models and tools.
+
+You can use Azure AI Search for regular search needs (like searching through catalogs or documents) or for AI-powered search that can have conversations with users and generate answers based on your content. -->
+
 When you create a search service, you work with the following capabilities:
 
 + A search engine for [agentic search](search-agentic-retrieval-concept.md), [vector search](vector-search-overview.md), [full text](search-lucene-query-architecture.md), [multimodal search](multimodal-search-overview.md), or [hybrid search](hybrid-search-overview.md).

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure Search の定義の更新"
}

Explanation

This diff shows that the description of Azure AI Search in "search-what-is-azure-search.md" has been updated. The main changes are summarized below.

  1. Reordered commented-out content: The commented-out draft that describes Azure AI Search as a knowledge retrieval platform was moved below the other commented-out draft. Because both blocks are comments, the rendered page is unchanged.

  2. Organization of the drafts: The relocated draft emphasizes that Azure AI Search consolidates and organizes different types of content and supports native integration with AI models. The other draft presents it as an enterprise-ready information retrieval system built for high-performance applications.

  3. Feature positioning: The surviving text continues to position Azure AI Search as the recommended retrieval system for building agent-to-agent (A2A) and RAG (retrieval-augmented generation) applications on Azure.

  4. Consistent terminology: The drafts name concrete integrations such as Azure OpenAI and Azure Machine Learning, keeping the terminology unified and easier to follow.

These changes keep the conceptual overview of Azure AI Search organized while its final wording is still being drafted.

articles/search/semantic-how-to-query-request.md

Diff
@@ -357,4 +357,4 @@ If you anticipate consistent throughput requirements near, at, or higher than th
 Semantic ranking can be used in hybrid queries that combine keyword search and vector search into a single request and a unified response.
 
 > [!div class="nextstepaction"]
-> [Hybrid query with semantic ranker](hybrid-search-how-to-query.md#semantic-hybrid-search)
+> [Hybrid query with semantic ranker](hybrid-search-how-to-query.md#example-semantic-hybrid-search)

Summary

{
    "modification_type": "minor update",
    "modification_title": "ハイブリッドクエリの例へのリンク変更"
}

Explanation

This diff shows that a link related to hybrid queries in "semantic-how-to-query-request.md" has been updated. The main changes are as follows.

  1. Link target change: The "Hybrid query with semantic ranker" link now points to the renamed example section (#example-semantic-hybrid-search) instead of the old anchor (#semantic-hybrid-search). The link text itself is unchanged.

  2. Benefit: Pointing directly at the example section makes it easier for users to see concretely how the semantic ranker is used in a hybrid query.

This change improves the readability and usefulness of the documentation, in particular by giving users easier access to a concrete implementation example. Overall, the information is better organized and the learning experience is improved.
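
As a rough illustration of what the linked example section covers, the request body below combines a keyword `search` term, a `vectorQueries` entry, and semantic ranking in one hybrid request. The names `my-semantic-config` and `contentVector` and the stand-in embedding are assumptions for illustration; check the exact schema against the REST API version you target:

```python
# Sketch of a semantic hybrid query body for the Search POST REST API.
# "my-semantic-config" and "contentVector" are hypothetical names, and the
# embedding is a stand-in for a real query vector.
def build_semantic_hybrid_query(query_text: str, embedding: list[float]) -> dict:
    return {
        "search": query_text,                    # keyword (BM25) component
        "vectorQueries": [
            {
                "kind": "vector",
                "vector": embedding,             # vector component
                "fields": "contentVector",       # hypothetical vector field
                "k": 5,
            }
        ],
        "queryType": "semantic",                 # enables semantic reranking
        "semanticConfiguration": "my-semantic-config",
        "select": "title,chunk",
        "top": 5,
    }

body = build_semantic_hybrid_query("what plans cover dental?", [0.01] * 1536)
print(sorted(body.keys()))
```

Both subqueries run in parallel, and semantic ranking then reranks the merged result set.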

articles/search/semantic-how-to-query-rewrite.md

Diff
@@ -262,4 +262,4 @@ In the preceding example:
 Semantic ranking can be used in hybrid queries that combine keyword search and vector search into a single request and a unified response.
 
 > [!div class="nextstepaction"]
-> [Hybrid query with semantic ranker](hybrid-search-how-to-query.md#semantic-hybrid-search)
+> [Hybrid query with semantic ranker](hybrid-search-how-to-query.md#example-semantic-hybrid-search)

Summary

{
    "modification_type": "minor update",
    "modification_title": "ハイブリッドクエリの例へのリンク変更"
}

Explanation

This diff shows that a link related to hybrid queries in "semantic-how-to-query-rewrite.md" has been updated. The main changes are described below.

  1. Link target adjustment: The "Hybrid query with semantic ranker" link now points to the renamed example section (#example-semantic-hybrid-search) rather than the old anchor. The link text itself is unchanged, so readers see the same entry point but land directly on the concrete example.

  2. Practical effect: The new target leads users straight to a worked example of a hybrid query, making the actual usage easier to grasp. This improvement adds to the practical value of the documentation.

These changes raise the quality of the information the documentation provides and, in particular, help readers understand a concrete implementation of the semantic ranker in hybrid queries. Overall, transparency and clarity are strengthened.

articles/search/vector-search-how-to-query.md

Diff
@@ -177,6 +177,7 @@ api-key: {{admin-api-key}}
 
         }
     ]
+}
 ```
 
 ### [**Azure portal**](#tab/portal-vector-query)
@@ -526,17 +527,6 @@ POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?
     }
 ```
 
- <!-- Keep H2 as-is. Direct link from a blog post. Bulk of maxtextsizerecall has moved to hybrid query doc-->
-## MaxTextSizeRecall for hybrid search (preview)
-
-Vector queries are often used in hybrid constructs that include nonvector fields. If you discover that BM25-ranked results are over or under represented in a hybrid query results, you can [set `maxTextRecallSize`](hybrid-search-how-to-query.md#set-maxtextrecallsize-and-countandfacetmode) to increase or decrease the BM25-ranked results provided for hybrid ranking.
-
-You can only set this property in hybrid requests that include both `search` and `vectorQueries` components.
-
-This parameter is in preview. We recommend the  [2024-05-01-preview](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true) REST API version.
-
-For more information, see [Set maxTextRecallSize - Create a hybrid query](hybrid-search-how-to-query.md#set-maxtextrecallsize-and-countandfacetmode).
-
 ## Next steps
 
 As a next step, review vector query code examples in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).

Summary

{
    "modification_type": "minor update",
    "modification_title": "文書からの MaxTextSizeRecall セクションの削除"
}

Explanation

This diff shows that the MaxTextSizeRecall section was removed from "vector-search-how-to-query.md" and a small syntax fix was applied. The changes are summarized below.

  1. Removal of the MaxTextSizeRecall section: The "MaxTextSizeRecall for hybrid search (preview)" section, which explained how to set `maxTextRecallSize` to adjust the number of BM25-ranked results fed into hybrid ranking, was deleted. According to the in-file comment, the bulk of this content moved to the hybrid query document.

  2. Syntax fix and simplification: A missing closing brace was added to complete a JSON example, and with the MaxTextSizeRecall details relocated, the document itself is more concise. Users can now find the parameter where it logically belongs, and readability improves.

This change keeps the document aligned with the current information architecture: moving parameter details into the hybrid query document lets readers focus here on the essentials of vector queries. Overall, it is an adjustment aimed at improving the user experience.
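
For readers following the relocated content, the removed section described `maxTextRecallSize` as a cap on the BM25-ranked results supplied to hybrid ranking. The sketch below only assembles a request body; the `hybridSearch` placement and the field name `contentVector` are assumptions based on the anchors quoted in the diff, so verify the shape against the 2024-05-01-preview REST reference:

```python
# Sketch of a hybrid request body that caps how many BM25-ranked results
# feed into hybrid (RRF) ranking. The "hybridSearch" placement follows the
# preview schema as described in the hybrid query doc, and "contentVector"
# is a hypothetical field name; verify against the 2024-05-01-preview
# REST reference before relying on it.
def build_hybrid_query(query_text, embedding, max_text_recall_size=1000):
    return {
        "search": query_text,              # text (BM25) subquery
        "vectorQueries": [
            {
                "kind": "vector",
                "vector": embedding,
                "fields": "contentVector",
                "k": 50,
            }
        ],
        # Only meaningful when both "search" and "vectorQueries" are present.
        "hybridSearch": {"maxTextRecallSize": max_text_recall_size},
    }

body = build_hybrid_query("health plan deductibles", [0.0] * 8,
                          max_text_recall_size=500)
print(body["hybridSearch"]["maxTextRecallSize"])  # → 500
```

Lowering the value reduces the weight of BM25 recall in the merged results; raising it does the opposite.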

articles/search/whats-new.md

Diff
@@ -78,7 +78,7 @@ Learn about the latest updates to Azure AI Search functionality, docs, and sampl
 | November | Feature | [Portal support for structured data](search-get-started-portal-import-vectors.md). The **Import and vectorize data** wizard now supports Azure SQL, Azure Cosmos DB, and Azure Table Storage. |
 | October | Feature | [Lower the dimension requirements for MRL-trained text embedding models on Azure OpenAI](vector-search-how-to-truncate-dimensions.md). Text-embedding-3-small and Text-embedding-3-large are trained using Matryoshka Representation Learning (MRL). This allows you to truncate the embedding vectors to fewer dimensions, and adjust the balance between vector index size usage and retrieval quality. A new `truncationDimension` in the [2024-09-01-preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-09-01-preview&preserve-view=true) enables access to MRL compression in text embedding models. This can only be configured for new vector fields. |
 | October | Feature | [Unpack `@search.score` to view subscores in hybrid search results](hybrid-search-ranking.md#unpack-a-search-score-into-subscores-preview). You can investigate Reciprocal Rank Fusion (RRF) ranked results by viewing the individual query subscores of the final merged and scored result. A new `debug` property unpacks the search score. `QueryResultDocumentSubscores`, `QueryResultDocumentRerankerInput`, and `QueryResultDocumentSemanticField` provide the extra detail. These definitions are available in the [2024-09-01-preview](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
-| October | Feature | [Target filters in a hybrid search to just the vector queries](hybrid-search-how-to-query.md#hybrid-search-with-filters-targeting-vector-subqueries-preview). A filter on a hybrid query involves all subqueries on the request, regardless of type. You can override the global filter to scope the filter to a specific subquery. The new `filterOverride` parameter is available on hybrid queries using the [2024-09-01-preview](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
+| October | Feature | [Target filters in a hybrid search to just the vector queries](hybrid-search-how-to-query.md#example-hybrid-search-with-filters-targeting-vector-subqueries-preview). A filter on a hybrid query involves all subqueries on the request, regardless of type. You can override the global filter to scope the filter to a specific subquery. The new `filterOverride` parameter is available on hybrid queries using the [2024-09-01-preview](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
 | October | Applied AI (skills) | [Text Split skill (token chunking)](cognitive-search-skill-textsplit.md). This skill has new parameters that improve data chunking for embedding models. A new `unit` parameter lets you specify token chunking. You can now chunk by token length, setting the length to a value that makes sense for your embedding model. You can also specify the tokenizer and any tokens that shouldn't be split during data chunking. The new `unit` parameter and query subscore definitions are found in the [2024-09-01-preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-09-01-preview&preserve-view=true). |
 | October | API | [2024-09-01-preview](/rest/api/searchservice/search-service-api-versions?view=rest-searchservice-2024-09-01-preview&preserve-view=true). Preview release of REST APIs for truncated dimensions in text-embedding-3 models, targeted vector filtering for hybrid queries, RRF subscore details for debugging, and token chunking for Text Split skill.|
 | October | Feature | [Portal support for customer-managed key encryption (CMK)](search-security-manage-encryption-keys.md#step-4-encrypt-content). When you create new objects in the Azure portal, you can now specify CMK-encryption and select an Azure Key Vault to provide the key. |

Summary

{
    "modification_type": "minor update",
    "modification_title": "ハイブリッド検索におけるフィルタのターゲット変更の例を修正"
}

Explanation

This diff shows that the entry about targeting filters in a hybrid search in "whats-new.md" has been updated. The specific changes are as follows.

  1. Link target adjustment: The "Target filters in a hybrid search to just the vector queries" entry now links to the renamed example section (#example-hybrid-search-with-filters-targeting-vector-subqueries-preview) instead of the old anchor. The link text itself is unchanged.

  2. Clearer path to the information: Linking directly to the worked example gives readers a clearer route to learning how the `filterOverride` parameter scopes a filter to a specific vector subquery. The documentation becomes easier to follow and to act on.

Overall, this update raises the concreteness and usefulness of the documentation so that users can take advantage of the new feature more effectively.
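
The `filterOverride` behavior described above can be illustrated with a request body in which a global `filter` applies to every subquery while one vector subquery swaps in its own predicate. The field names `category` and `contentVector` are placeholders, and the schema should be checked against the 2024-09-01-preview reference:

```python
# Sketch: a global filter plus a per-vector-subquery filterOverride
# (2024-09-01-preview). "category" and "contentVector" are hypothetical
# field names used for illustration only.
def build_filtered_hybrid_query(query_text, embedding):
    return {
        "search": query_text,
        # Applies to all subqueries unless a subquery overrides it.
        "filter": "category eq 'benefits'",
        "vectorQueries": [
            {
                "kind": "vector",
                "vector": embedding,
                "fields": "contentVector",
                "k": 10,
                # Scopes this vector subquery to a different predicate.
                "filterOverride": "category eq 'plans'",
            }
        ],
    }

body = build_filtered_hybrid_query("eye exams", [0.0] * 8)
print(body["vectorQueries"][0]["filterOverride"])
```

The keyword subquery still honors the global filter, while the vector subquery uses its override, which is useful when the two retrieval paths should draw on different document subsets.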