Diff Insight Report - search

Last updated: 2024-10-23

Usage notes

This post is a derivative work, adapted and summarized with generative AI from Microsoft's official Azure documentation (licensed under CC BY 4.0 or MIT). The original documents are hosted in MicrosoftDocs/azure-ai-docs.

Generative AI has limitations, and mistranslations or misinterpretations may be present. Treat this post as reference material only, and always consult the original documents for accurate information.

Trademarks used in this post belong to their respective owners. They appear solely for technical explanation and do not imply any official endorsement or recommendation by the trademark holders.

View Diff on GitHub

Highlights

This change set updates a range of Azure AI Search documents to improve their specificity and accuracy. In particular, new images were added for the indexer creation procedure, and refreshed dates on several documents keep the information current. An existing image for the indexer creation procedure was also removed, which is treated as a breaking change.

New features

  • New images illustrating how to create an indexer were added, strengthening the visual guidance.

Breaking changes

  • An existing image illustrating how to create an indexer was removed, reducing the available visual references.

Other updates

  • Many documents received date refreshes and minor corrections, improving the accuracy of the information.
  • Notes about the free service and free plan were added, giving users important information.
  • Document titles were changed and content was reorganized, making the material clearer and easier to read.

Insights

The changes show various Azure AI Search documents receiving small updates to reflect new information. The intent is to make the service easier to use, with particular emphasis on information related to cost management and service administration.

The new images for the indexer creation procedure are an important change that helps users understand the process more concretely, although the impact of removing the existing image also needs to be considered. At the same time, the added notes and clarified information help users make the best use of their resources and avoid unintended charges.

These updates bring direct value to the user experience and aim to make Azure AI Search more effective to use. With consistent, up-to-date documentation, readers can deepen their technical understanding. Taken together, the updates underscore the importance of delivering key information to users in a timely way and improving the overall user experience.

Summary Table

| Filename | Type | Title | Status | A | D | M |
|----------|------|-------|--------|---|---|---|
| cognitive-search-attach-cognitive-services.md | minor update | Update on how to attach Cognitive Services to Azure AI Search | modified | 27 | 23 | 50 |
| cognitive-search-custom-skill-web-api.md | minor update | Documentation update for the custom Web API skill in Azure AI Search | modified | 2 | 2 | 4 |
| index-add-suggesters.md | minor update | Documentation update on configuring suggesters for autocomplete and suggestions | modified | 24 | 24 | 48 |
| attach-existing2.png | minor update | Updated image for attaching Cognitive Services | modified | 0 | 0 | 0 |
| remove-key-save.png | minor update | Updated image for removing the Cognitive Services key | modified | 0 | 0 | 0 |
| select-skillset.png | minor update | Updated image for selecting a Cognitive Services skillset | modified | 0 | 0 | 0 |
| portal-indexer-client-2.png | new feature | New image added for the indexer creation procedure | added | 0 | 0 | 0 |
| portal-indexer-client.png | new feature | New image added for the indexer creation procedure | added | 0 | 0 | 0 |
| portal-indexer-client.png | breaking change | Image removed for the indexer creation procedure | removed | 0 | 0 | 0 |
| samples-dotnet.md | minor update | Content update to the C# samples article | modified | 23 | 23 | 46 |
| search-blob-metadata-properties.md | minor update | Revisions to the article on blob metadata properties | modified | 16 | 16 | 32 |
| search-create-service-portal.md | minor update | Revisions to the article on creating a service in the portal | modified | 2 | 2 | 4 |
| search-how-to-create-indexers.md | minor update | Revised and renamed article on how to create indexers | renamed | 41 | 35 | 76 |
| search-howto-complex-data-types.md | minor update | Revisions to the article on modeling complex data types | modified | 2 | 2 | 4 |
| search-limits-quotas-capacity.md | minor update | Update to the article on search limits, quotas, and capacity | modified | 1 | 1 | 2 |
| search-query-create.md | minor update | Update to the article on creating full-text queries | modified | 28 | 28 | 56 |
| search-query-partial-matching.md | minor update | Update to the article on partial matching and special characters | modified | 3 | 3 | 6 |
| search-sku-tier.md | minor update | Update to the article on SKU tiers | modified | 2 | 1 | 3 |
| search-try-for-free.md | minor update | Added a note about the free trial service | modified | 3 | 0 | 3 |
| service-create-private-endpoint.md | minor update | Revisions to the article on configuring a private endpoint | modified | 65 | 65 | 130 |
| tutorial-csharp-overview.md | minor update | Update to the C# tutorial | modified | 2 | 2 | 4 |

Modified Contents

articles/search/cognitive-search-attach-cognitive-services.md

Diff
@@ -8,48 +8,52 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 01/11/2024
+ms.date: 10/21/2024
 ---
 
 # Attach an Azure AI multi-service resource to a skillset in Azure AI Search
 
-When configuring an optional [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure AI Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [**Azure AI multi-service resource**](/azure/ai-services/multi-service-resource?pivots=azportal). 
+When configuring an optional [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure AI Search, you can enrich a small number of documents free of charge, limited to 20 transactions daily per index. For larger and more frequent workloads, you should attach a billable [**Azure AI multi-service resource**](/azure/ai-services/multi-service-resource?pivots=azportal). 
 
-A multi-service resource references a set of Azure AI services as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/skillsets/create) and allows Microsoft to charge you for using these services:
+A multi-service account provides a collection of Azure AI services, rather than individual services. The account has an associated [resource key](/azure/ai-services/authentication#authenticate-with-a-multi-service-resource-key). This key is specified in an Azure AI Search [**skillset**](/rest/api/searchservice/skillsets/create) and allows Microsoft to charge you for using these services:
 
-+ [Azure AI Vision](/azure/ai-services/computer-vision/overview) for image analysis and optical character recognition (OCR)
++ [Azure AI Vision](/azure/ai-services/computer-vision/overview) for image analysis, optical character recognition (OCR), and multimodal text and image embedding.
 + [Azure AI Language](/azure/ai-services/language-service/overview) for language detection, entity recognition, sentiment analysis, and key phrase extraction
 + [Azure AI Speech](/azure/ai-services/speech-service/overview) for speech to text and text to speech
 + [Azure AI Translator](/azure/ai-services/translator/translator-overview) for machine text translation
 
+The key is used for billing, not connections. You must provide a key in the skillset even if you're using other mechanisms, such as role assignments and managed identities, on the connection.
+
 > [!TIP]
 > Azure provides infrastructure for you to monitor billing and budgets. For more information about monitoring Azure AI services, see [Plan and manage costs for Azure AI services](/azure/ai-services/plan-manage-costs).
 
-## Set the resource key
+## Get the resource key for an Azure AI multi-service account
 
-You can use the Azure portal, REST API, or an Azure SDK to attach a billable resource to a skillset.
+1. Sign in to the [Azure portal](https://portal.azure.com).
 
-If you leave the property unspecified, your search service attempts to use the free enrichments available to your indexer on a daily basis. Execution of billable skills stops at 20 transactions per indexer invocation and a "Time Out" message appears in indexer execution history.
+1. Create an [Azure AI multi-service resource](/azure/ai-services/multi-service-resource?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
 
-### [**Azure portal**](#tab/portal)
+1. Get the resource key from the **Resources** > **Keys and endpoint** page.
 
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Add the resource key to a skillset
 
-1. Create an [Azure AI multi-service resource](/azure/ai-services/multi-service-resource?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
+You can use the Azure portal, REST API, or an Azure SDK to add the key to a skillset.
 
-1. Add the key to a skillset definition:
+If you leave the property unspecified, your search service attempts to use the free enrichments available to your indexer on a daily basis. Execution of billable skills stops at 20 transactions per indexer invocation and a "Time Out" message appears in indexer execution history.
 
-   + If using the [Import data wizard](search-import-data-portal.md), enter the key in the second step, "Add AI enrichments".
+### [**Azure portal**](#tab/portal)
 
-   + If adding the key to a new or existing skillset, provide the key in the **Azure AI services** tab.
+Add the key to a skillset definition:
 
-   :::image type="content" source="media/cognitive-search-attach-cognitive-services/attach-existing2.png" alt-text="Screenshot of the key page." border="true":::
++ If using an [Import data wizard](search-import-data-portal.md), create or select the Azure AI account. The wizard adds the resource key to your skillset definition. 
 
-### [**REST**](#tab/cogkey-rest)
++ For a new or existing skillset, provide the key in skillset definition.
 
-1. Create an [Azure AI multi-service resource](/azure/ai-services/multi-service-resource?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
+  :::image type="content" source="media/cognitive-search-attach-cognitive-services/attach-existing2.png" alt-text="Screenshot of the key page." border="true":::
 
-1. Create or update a skillset, specifying `cognitiveServices` section in the body of the [skillset request](/rest/api/searchservice/skillsets/create):
+### [**REST**](#tab/cogkey-rest)
+
+1. Use the [Create or Update Skillset](/rest/api/searchservice/skillsets/create-or-update) API, specifying `cognitiveServices` section in the body of the request:
 
 ```http
 PUT https://[servicename].search.windows.net/skillsets/[skillset name]?api-version=2024-07-01
@@ -115,19 +119,19 @@ SearchIndexerSkillset skillset = CreateOrUpdateDemoSkillSet(indexerClient, skill
 
 ## Remove the key
 
-Enrichments are billable operations. If you no longer need to call Azure AI services, follow these instructions to remove the multi-region key and prevent use of the external resource. Without the key, the skillset reverts to the default allocation of 20 free transactions per indexer, per day. Execution of billable skills stops at 20 transactions and a "Time Out" message appears in indexer execution history when the allocation is used up.
+Enrichments are billable operations. If you no longer need to call Azure AI services, follow these instructions to remove the multi-service key and prevent use of the external resource. Without the key, the skillset reverts to the default allocation of 20 free transactions per indexer, per day. Execution of billable skills stops at 20 transactions and a "Time Out" message appears in indexer execution history when the allocation is used up.
 
 ### [**Azure portal**](#tab/portal-remove)
 
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service **Overview** page.
+1. Sign in to the [Azure portal](https://portal.azure.com).
 
-1. Under **Skillsets**, select the skillset containing the key you want to remove.
+1. Under **Search management > Skillsets**, select a skillset from the list.
 
    :::image type="content" source="media/cognitive-search-attach-cognitive-services/select-skillset.png" alt-text="Screenshot of the skillset page." border="true" lightbox="media/cognitive-search-attach-cognitive-services/select-skillset.png":::
 
-1. Scroll to the end of the file. 
+1. Scroll to the section in the file containing `"cognitiveServices"`.
 
-1. Remove the key from the JSON and save the skillset.
+1. Delete the key value from the JSON and save the skillset.
 
    :::image type="content" source="media/cognitive-search-attach-cognitive-services/remove-key-save.png" alt-text="Screenshot of the skillset JSON." border="true" lightbox="media/cognitive-search-attach-cognitive-services/remove-key-save.png":::
 
@@ -183,7 +187,7 @@ Enrichments are billable operations. If you no longer need to call Azure AI serv
 
 ## How the key is used
 
-Key-based billing applies when API calls to Azure AI services resources exceed 20 API calls per indexer, per day. 
+Key-based billing applies when API calls to Azure AI services resources exceed 20 API calls per indexer, per day. You can [reset the indexer](search-howto-run-reset-indexers.md) to reset the API count.
 
 The key is used for billing, but not for enrichment operations' connections. For connections, a search service [connects over the internal network](search-security-overview.md#internal-traffic) to an Azure AI services resource that's located in the [same physical region](search-region-support.md). Most regions that offer Azure AI Search also offer other Azure AI services such as Language. If you attempt AI enrichment in a region that doesn't have both services, you'll see this message: "Provided key isn't a valid CognitiveServices type key for the region of your search service."
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "コグニティブ サービスを Azure AI サーチに接続する方法の更新"
}

Explanation

This change updates the documentation on attaching Cognitive Services to Azure AI Search. The main effect is that the explanation of how the attached resource is used has been clarified.

Specifically, the wording around how many documents can be enriched free of charge and the daily transaction limit was revised. The documentation now explains in more detail how to use a multi-service account and how to obtain its resource key, which helps users manage and use the resource more effectively. It also emphasizes the billing implications of the resource key and the procedure for removing it when it is no longer needed.

These changes aim to make the information users need to use Azure AI services effectively and control costs more transparent. Overall, the documentation is now more concrete and practical.
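
For reference, the following is a minimal sketch of the skillset body described above, showing where the multi-service resource key is specified. The `skills` array is left empty for brevity, and the skillset name, description, and key value are placeholders; the shape follows the Create or Update Skillset request quoted in the diff, with the `cognitiveServices` section keyed by `#Microsoft.Azure.Search.CognitiveServicesByKey`.

```json
{
  "name": "my-skillset",
  "description": "Skillset that bills AI enrichment to a multi-service resource",
  "skills": [ ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "description": "Azure AI multi-service resource in the same region as search",
    "key": "<your-multi-service-resource-key>"
  }
}
```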

articles/search/cognitive-search-custom-skill-web-api.md

Diff
@@ -8,7 +8,7 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 07/25/2024
+ms.date: 10/21/2024
 ---
 
 # Custom Web API skill in an Azure AI Search enrichment pipeline
@@ -92,7 +92,7 @@ It always follows these constraints:
 
   * A `recordId` property that is a **unique** string, used to identify that record.
 
-  * A `data` property that is a JSON object. The fields of the `data` property corresponds to the "names" specified in the `inputs` section of the skill definition. The values of those fields are from the `source` of those fields (which could be from a field in the document, or potentially from another skill).
+  * A `data` property that is a JSON object. The fields of the `data` property correspond to the "names" specified in the `inputs` section of the skill definition. The values of those fields are from the `source` of those fields (which could be from a field in the document, or potentially from another skill).
 
 ```json
 {

Summary

{
    "modification_type": "minor update",
    "modification_title": "Azure AI Search のカスタム Web API スキルに関するドキュメント更新"
}

Explanation

This change is a minor update to the documentation for the custom Web API skill in Azure AI Search. The main changes are a refreshed date and a small wording fix.

Specifically, in the description of the data property, the verb "corresponds" was corrected to "correspond", improving accuracy. Small adjustments like this keep the document readable and precise. The date was also updated from July 25, 2024 to October 21, 2024, indicating that the content is current.

With this fix, users can understand the custom skill feature of Azure AI Search more easily and rely on accurate information when using it. Overall, the documentation becomes more trustworthy, which should encourage its use.
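
For context, the constraints quoted in the diff (a unique `recordId` and a `data` object whose fields match the skill's `inputs`) describe the JSON payload a custom Web API skill receives. A minimal sketch of such a payload, with an illustrative record ID and field name:

```json
{
  "values": [
    {
      "recordId": "record1",
      "data": {
        "text": "Sample text taken from the source field of the document"
      }
    }
  ]
}
```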

articles/search/index-add-suggesters.md

Diff
@@ -1,5 +1,5 @@
 ---
-title: Configure a suggester
+title: Configure a suggester for autocomplete and suggestions
 titleSuffix: Azure AI Search
 description: Enable typeahead query actions in Azure AI Search by creating suggesters and formulating requests that invoke autocomplete or autosuggested query terms.
 
@@ -8,46 +8,46 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: conceptual
-ms.date: 01/18/2024
+ms.date: 10/21/2024
 ms.custom:
   - devx-track-csharp
   - devx-track-dotnet
   - ignite-2023
 ---
 
-# Configure a suggester for autocomplete and suggested matches in a query
+# Configure a suggester for autocomplete and suggestions in a query
 
-In Azure AI Search, typeahead (autocomplete) or "search-as-you-type" is enabled through a *suggester*. A suggester is a configuration in an index specifying which fields should be used to populate autocomplete and suggestions. These fields undergo extra tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a City field with a value for "Seattle" will have prefix combinations of "sea", "seat", "seatt", and "seattl" to support typeahead.
+In Azure AI Search, typeahead or "search-as-you-type" is enabled by using a *suggester*. A suggester is a configuration in an index that specifies which fields should be used to populate autocomplete and suggested matches. These fields undergo extra tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a `city` field with a value for *Seattle* has prefix combinations of *sea*, *seat*, *seatt*, and *seattl* to support typeahead.
 
 Matches on partial terms can be either an autocompleted query or a suggested match. The same suggester supports both experiences.
 
 ## Typeahead experiences in Azure AI Search
 
-Typeahead can be *autocomplete*, which completes a partial input for a whole term query, or *suggestions* that invite click through to a particular match. Autocomplete produces a query. Suggestions produce a matching document.
+Typeahead can use *autocomplete*, which completes a partial input for a whole term query, or *suggestions* that invite click through to a particular match. Autocomplete produces a query. Suggestions produce a matching document.
 
-The following screenshot illustrates both. Autocomplete anticipates a potential term, finishing "tw" with "in". Suggestions are mini search results, where a field like hotel name represents a matching hotel search document from the index. For suggestions, you can surface any field that provides descriptive information.
+The following screenshot illustrates both. Autocomplete anticipates a potential term, finishing *tw* with *in*. Suggestions are mini search results, where a field like `hotel name` represents a matching hotel search document from the index. For suggestions, you can surface any field that provides descriptive information.
 
-![Visual comparison of autocomplete and suggested queries](./media/index-add-suggesters/hotel-app-suggestions-autocomplete.png "Visual comparison of autocomplete and suggested queries")
+:::image type="content" source="media/index-add-suggesters/hotel-app-suggestions-autocomplete.png" alt-text="Screenshot showing visual comparison of autocomplete and suggested queries.":::
 
 You can use these features separately or together. To implement these behaviors in Azure AI Search, there's an index and query component. 
 
-+ Add a suggester to a search index definition. The remainder of this article is focused on creating a suggester.
++ Add a suggester to a search index definition. The remainder of this article focuses on creating a suggester.
 
-+ Call a suggester-enabled query, in the form of a Suggestion request or Autocomplete request, using one of the [APIs listed in a later section](#how-to-use-a-suggester).
++ Call a suggester-enabled query, in the form of a suggestion request or autocomplete request, by using one of the APIs listed in [Use a suggester](#how-to-use-a-suggester).
 
-Search-as-you-type is enabled on a per-field basis for string fields. You can implement both typeahead behaviors within the same search solution if you want an experience similar to the one indicated in the screenshot. Both requests target the *documents* collection of specific index and responses are returned after a user provides at least a three character input string.
+Search-as-you-type is enabled on a per-field basis for string fields. You can implement both typeahead behaviors within the same search solution if you want an experience similar to the one indicated in the screenshot. Both requests target the *documents* collection of a specific index, and responses are returned after a user provides at least a three-character input string.
 
 ## How to create a suggester
 
 To create a suggester, add one to an [index definition](/rest/api/searchservice/indexes/create). A suggester takes a name and a collection of fields over which the typeahead experience is enabled. The best time to create a suggester is when you're also defining the field that uses it.
 
 + Use string fields only.
 
-+ If the string field is part of a complex type (for example, a City field within Address), include the parent in the field path: `"Address/City"` (REST and C# and Python), or `["Address"]["City"]` (JavaScript).
++ If the string field is part of a complex type (for example, a City field within Address), include the parent in the field path: `"Address/City"` (REST, C#, and Python), or `["Address"]["City"]` (JavaScript).
 
 + Use the default standard Lucene analyzer (`"analyzer": null`) or a [language analyzer](index-add-language-analyzers.md) (for example, `"analyzer": "en.Microsoft"`) on the field.
 
-If you try to create a suggester using pre-existing fields, the API disallows it. Prefixes are generated during indexing, when partial terms in two or more character combinations are tokenized alongside whole terms. Given that existing fields are already tokenized, you have to rebuild the index if you want to add them to a suggester. For more information, see [How to rebuild an Azure AI Search index](search-howto-reindex.md).
+If you try to create a suggester using preexisting fields, the API disallows it. Prefixes are generated during indexing, when partial terms in two or more character combinations are tokenized alongside whole terms. Given that existing fields are already tokenized, you have to rebuild the index if you want to add them to a suggester. For more information, see [Update or rebuild an index in Azure AI Search](search-howto-reindex.md).
 
 ### Choose fields
 
@@ -61,28 +61,28 @@ To satisfy both search-as-you-type experiences, add all of the fields that you n
 
 ### Choose analyzers
 
-Your choice of an analyzer determines how fields are tokenized and prefixed. For example, for a hyphenated string like "context-sensitive", using a language analyzer results in these token combinations: "context", "sensitive", "context-sensitive". Had you used the standard Lucene analyzer, the hyphenated string wouldn't exist. 
+Your choice of an analyzer determines how fields are tokenized and prefixed. For example, for a hyphenated string like *context-sensitive*, using a language analyzer results in these token combinations: *context*, *sensitive*, *context-sensitive*. Had you used the standard Lucene analyzer, the hyphenated string wouldn't exist. 
 
 When evaluating analyzers, consider using the [Analyze Text API](/rest/api/searchservice/indexes/analyze) for insight into how terms are processed. Once you build an index, you can try various analyzers on a string to view token output.
 
-Fields that use [custom analyzers](index-add-custom-analyzers.md) or [built-in analyzers](index-add-custom-analyzers.md#built-in-analyzers) (except for standard Lucene) are explicitly disallowed to prevent poor outcomes.
+Fields that use [custom analyzers](index-add-custom-analyzers.md) or [built-in analyzers](index-add-custom-analyzers.md#built-in-analyzers), (except for standard Lucene) are explicitly disallowed to prevent poor outcomes.
 
 > [!NOTE]
-> If you need to work around the analyzer constraint, for example if you need a keyword or ngram analyzer for certain query scenarios, you should use two separate fields for the same content. This will allow one of the fields to have a suggester, while the other can be set up with a custom analyzer configuration.
+> If you need to work around the analyzer constraint, for example if you need a keyword or ngram analyzer for certain query scenarios, you should use two separate fields for the same content. This allows one of the fields to have a suggester, while the other can be set up with a custom analyzer configuration.
 
-## Create using the portal
+## Create using the Azure portal
 
 When using **Add Index** or the **Import data** wizard to create an index, you have the option of enabling a suggester:
 
 1. In the index definition, enter a name for the suggester.
 
-1. In each field definition for new fields, select a checkbox in the Suggester column. A checkbox is available on string fields only. 
+1. In each field definition for new fields, select a checkbox in the **Suggester** column. A checkbox is available on string fields only. 
 
 As previously noted, analyzer choice impacts tokenization and prefixing. Consider the entire field definition when enabling suggesters. 
 
 ## Create using REST
 
-In the REST API, add suggesters through [Create Index](/rest/api/searchservice/indexes/create) or [Update Index](/rest/api/searchservice/indexes/create-or-update). 
+In the REST API, add suggesters by using [Create Index](/rest/api/searchservice/indexes/create) or [Update Index](/rest/api/searchservice/indexes/create-or-update). 
 
   ```json
   {
@@ -142,7 +142,7 @@ private static void CreateIndex(string indexName, SearchIndexClient indexClient)
 |Property      |Description      |
 |--------------|-----------------|
 | name        | Specified in the suggester definition, but also called on an Autocomplete or Suggestions request. |
-| sourceFields | Specified in the suggester definition. It's a list of one or more fields in the index that are the source of the content for suggestions. Fields must be of type `Edm.String`. If an analyzer is specified on the field, it must be a named lexical analyzer from [this list](/dotnet/api/azure.search.documents.indexes.models.lexicalanalyzername) (not a custom analyzer). </br></br>As a best practice, specify only those fields that lend themselves to an expected and appropriate response, whether it's a completed string in a search bar or a dropdown list. </br></br>A hotel name is a good candidate because it has precision. Verbose fields like descriptions and comments are too dense. Similarly, repetitive fields, such as categories and tags, are less effective. In the examples, we include "category" anyway to demonstrate that you can include multiple fields. |
+| sourceFields | Specified in the suggester definition. It's a list of one or more fields in the index that are the source of the content for suggestions. Fields must be of type `Edm.String`. If an analyzer is specified on the field, it must be a named lexical analyzer listed on [LexicalAnalyzerName Struct](/dotnet/api/azure.search.documents.indexes.models.lexicalanalyzername) (not a custom analyzer). </br></br>As a best practice, specify only those fields that lend themselves to an expected and appropriate response, whether it's a completed string in a search bar or a dropdown list. </br></br>A hotel name is a good candidate because it has precision. Verbose fields like descriptions and comments are too dense. Similarly, repetitive fields, such as categories and tags, are less effective. In the examples, we include *category* anyway to demonstrate that you can include multiple fields. |
 | searchMode  | REST-only parameter, but also visible in the portal. This parameter isn't available in the .NET SDK. It indicates the strategy used to search for candidate phrases. The only mode currently supported is `analyzingInfixMatching`, which currently matches on the beginning of a term.|
 
 <a name="how-to-use-a-suggester"></a>
@@ -151,7 +151,7 @@ private static void CreateIndex(string indexName, SearchIndexClient indexClient)
 
 A suggester is used in a query. After a suggester is created, call one of the following APIs for a search-as-you-type experience:
 
-+ [Suggestions REST API](/rest/api/searchservice/documents/suggest-post)
++ [Suggest REST API](/rest/api/searchservice/documents/suggest-post)
 + [Autocomplete REST API](/rest/api/searchservice/documents/autocomplete-post)
 + [SuggestAsync method](/dotnet/api/azure.search.documents.searchclient.suggestasync)
 + [AutocompleteAsync method](/dotnet/api/azure.search.documents.searchclient.autocompleteasync)
@@ -170,11 +170,11 @@ POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2024-07-01
 
 ## Sample code
 
-+ [Add search to a web site (C#)](tutorial-csharp-search-query-integration.md) uses an open source Suggestions package for partial term completion in the client app.
+To learn how to use an open source Suggestions package for partial term completion in the client app, see [Explore the .NET search code](tutorial-csharp-search-query-integration.md).
 
-## Next steps
+## Next step
 
-Learn more about requests\ formulation.
+Learn more about request formulation.
 
 > [!div class="nextstepaction"]
-> [Add autocomplete and suggestions to client code](search-add-autocomplete-suggestions.md)
+> [Add autocomplete and search suggestions in client apps](search-add-autocomplete-suggestions.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "オートコンプリートとサジェストのためのサジェスター設定に関するドキュメントの更新"
}

Explanation

This change is a minor update to the documentation on configuring a suggester in Azure AI Search. The main revisions adjust the title and wording for consistency.

Concrete examples include clearer wording about what a suggester does and consistent terminology for "autocomplete" and "suggestions". This makes it easier for users to understand the suggester's role and the behavior they can expect.

In addition, the document date was updated from January 18, 2024 to October 21, 2024, and several passages were revised, which together improve the document's consistency and clarity.

Through these revisions, the documentation better supports users in getting the most out of Azure AI Search. Overall, it covers the information needed to implement the suggester feature effectively.
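
To make the configuration concrete, the following is a minimal sketch of a suggester inside an index definition, using the `name`, `searchMode`, and `sourceFields` properties described in the diff. The index and field names are illustrative.

```json
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true, "analyzer": null }
  ],
  "suggesters": [
    {
      "name": "sg",
      "searchMode": "analyzingInfixMatching",
      "sourceFields": [ "HotelName" ]
    }
  ]
}
```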

articles/search/media/cognitive-search-attach-cognitive-services/attach-existing2.png

Summary

{
    "modification_type": "minor update",
    "modification_title": "コグニティブ サービスの添付に関する画像の更新"
}

Explanation

This change reflects an update to the image file about attaching Cognitive Services in Azure AI Search. No content was added or removed; the image itself may simply have been revised based on the earlier material.

The image visually shows the steps for attaching Cognitive Services to an existing configuration, and it plays an important role in readability and comprehension. Managing and updating images like this is essential to maintaining the overall quality of the documentation.

Overall, this change is part of ensuring that users have accurate, up-to-date information, which supports effective learning and implementation.

articles/search/media/cognitive-search-attach-cognitive-services/remove-key-save.png

Summary

{
    "modification_type": "minor update",
    "modification_title": "コグニティブ サービスのキー削除に関する画像の更新"
}

Explanation

This change indicates an update to the image file about removing the Cognitive Services key in Azure AI Search. There are no concrete additions or deletions; the image was refreshed without any change that affects its substance.

The image is a visual aid showing the steps to remove the key from Cognitive Services, so displaying it accurately matters. By referring to it, users can follow the procedure more easily and carry out the operation effectively.

Maintaining and updating images therefore plays an important role in improving documentation quality and the user experience. This change is intended to provide current information and visual guidance.

articles/search/media/cognitive-search-attach-cognitive-services/select-skillset.png

Summary

{
    "modification_type": "minor update",
    "modification_title": "コグニティブ サービスのスキルセット選択に関する画像の更新"
}

Explanation

This change indicates an update to the image file about selecting a Cognitive Services skillset in Azure AI Search. Nothing was added or removed, and the revision does not affect the substance of the content.

The image visually shows the steps for selecting a skillset, so it needs to present accurate, easy-to-follow information. Keeping it current helps users carry out the operation smoothly.

Overall, this change preserves documentation quality and provides users with the latest information, supporting effective use of Cognitive Services.

articles/search/media/search-how-to-create-indexers/portal-indexer-client-2.png

Summary

{
    "modification_type": "new feature",
    "modification_title": "インデクサー作成方法に関する新しい画像の追加"
}

Explanation

This change adds a new image file showing how to create an indexer in Azure AI Search. The image visually represents the steps for creating an indexer in the portal and helps users understand the process.

The added image shows the interface and steps concretely, which makes the documentation more practical. Visual information is especially important for helping users grasp an operation intuitively.

This change enriches the information in the guide and is positioned as a new feature that improves the learning experience. By referring to the new image, users should be able to understand and carry out the indexer creation steps more smoothly.

articles/search/media/search-how-to-create-indexers/portal-indexer-client.png

Summary

{
    "modification_type": "new feature",
    "modification_title": "インデクサー作成方法に関する新しい画像の追加"
}

Explanation

This change adds a new image file that shows how to create an indexer in Azure AI Search. The image serves as a visual guide for using indexers effectively in the portal.

The newly added image shows the steps and interface involved in creating an indexer, which helps users understand the process. Visual information is particularly valuable for users performing the operation for the first time.

This change improves the quality of the documentation and contributes to a better user experience. The new image should help users move through the indexer creation procedure smoothly and with less effort.

articles/search/media/search-howto-create-indexers/portal-indexer-client.png

Summary

{
    "modification_type": "breaking change",
    "modification_title": "インデクサー作成方法に関する画像の削除"
}

Explanation

This change removes an image file that showed how to create an indexer in Azure AI Search. Because it takes information out of the documentation, it can affect user understanding and experience and deserves attention.

The removed image visually showed the steps for creating an indexer in the portal and was an important resource for understanding the process intuitively. Its removal could raise the learning barrier, especially for users who rely on visual references.

The change has a meaningful impact on the content and structure of the documentation and carries the risk that users won't get all the information they need to create indexers. An appropriate replacement or a new source of information would therefore be desirable.

articles/search/samples-dotnet.md

Diff
@@ -16,7 +16,7 @@ ms.date: 10/18/2024
 
 # C# samples for Azure AI Search
 
-Learn about the C# code samples that demonstrate the functionality and workflow of an Azure AI Search solution. These samples use the [**Azure AI Search client library**](/dotnet/api/overview/azure/search) for the [**Azure SDK for .NET**](/dotnet/azure/), which you can explore through the following links.
+You can explore C# code samples that demonstrate the functionality and workflow of an Azure AI Search solution. These samples use the [**Azure AI Search client library**](/dotnet/api/overview/azure/search) for the [**Azure SDK for .NET**](/dotnet/azure/), which you can access through the following links.
 
 | Target | Link |
 |--------|------|
@@ -28,26 +28,26 @@ Learn about the C# code samples that demonstrate the functionality and workflow
 
 ## SDK samples
 
-Code samples from the Azure SDK development team demonstrate API usage. You can find these samples in [**Azure/azure-sdk-for-net/tree/main/sdk/search/Azure.Search.Documents/samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/) on GitHub.
+Code samples from the Azure SDK development team demonstrate API usage. You can find [these samples on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/).
 
-| Samples | Description |
+| Sample | Description |
 |---------|-------------|
-| ["Hello world", synchronously](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample01a_HelloWorld.md) | Demonstrates how to create a client, authenticate, and handle errors using synchronous methods.|
-| ["Hello world", asynchronously](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample01b_HelloWorldAsync.md) | Demonstrates how to create a client, authenticate, and handle errors using asynchronous methods.  |
-| [Service-level operations](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample02_Service.md) | Demonstrates how to create indexes, indexers, data sources, skillsets, and synonym maps. This sample also shows you how to get service statistics and how to query an index.  |
-| [Index operations](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample03_Index.md) | Demonstrates how to perform an action on existing index, in this case getting a count of documents stored in the index.  |
-| [FieldBuilderIgnore](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample04_FieldBuilderIgnore.md) | Demonstrates a technique for working with unsupported data types.  |
-| [Indexing documents (push model)](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md) | "Push" model indexing, where you send a JSON payload to an index on a service.   |
-| [Encryption key sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample06_EncryptedIndex.md) | Demonstrates using a customer-managed encryption key to add an extra layer of protection over sensitive content.  |
-| [Vector search sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample07_VectorSearch.md) | Shows you how to index a vector field and perform vector search using the Azure SDK for .NET. |
-| [Semantic ranking sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample08_SemanticSearch.md) | Shows you how to configure semantic ranker in an index and invoke semantic queries using the Azure SDK for .NET. |
+| [*Hello world* - synchronous](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample01a_HelloWorld.md) | Demonstrates how to create a client, authenticate, and handle errors using synchronous methods |
+| [*Hello world* - asynchronous](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample01b_HelloWorldAsync.md) | Demonstrates how to create a client, authenticate, and handle errors using asynchronous methods  |
+| [Service-level operations](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample02_Service.md) | Demonstrates how to create indexes, indexers, data sources, skillsets, and synonym maps. This sample also shows you how to get service statistics and how to query an index  |
+| [Index operations](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample03_Index.md) | Demonstrates how to perform an action on existing index, in this case getting a count of documents stored in the index  |
+| [FieldBuilderIgnore](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample04_FieldBuilderIgnore.md) | Demonstrates a technique for working with unsupported data types  |
+| [Indexing documents (push model)](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md) | *Push* model indexing, where you send a JSON payload to an index on a service  |
+| [Encryption key sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample06_EncryptedIndex.md) | Demonstrates using a customer-managed encryption key to add an extra layer of protection over sensitive content  |
+| [Vector search sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample07_VectorSearch.md) | Shows you how to index a vector field and perform vector search using the Azure SDK for .NET |
+| [Semantic ranking sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample08_SemanticSearch.md) | Shows you how to configure semantic ranker in an index and invoke semantic queries using the Azure SDK for .NET |
 
 ## Doc samples
 
 Code samples from the Azure AI Search team demonstrate features and workflows. All of the following samples are referenced in tutorials, quickstarts, and how-to articles that explain the code in detail. You can find these samples in [**Azure-Samples/azure-search-dotnet-samples**](https://github.com/Azure-Samples/azure-search-dotnet-samples) and in [**Azure-Samples/search-dotnet-getting-started**](https://github.com/Azure-Samples/search-dotnet-getting-started/) on GitHub.
 
 > [!TIP]
-> Try the [Samples browser](/samples/browse/?languages=csharp&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.
+> Try the [samples browser](/samples/browse/?languages=csharp&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.
 
 | Code sample | Related article  | Purpose |
 |-------------|------------------|---------|
@@ -70,8 +70,8 @@ An accelerator is an end-to-end solution that includes code and documentation th
 
 | Samples | Repository | Description |
 |---------|------------|-------------|
-| [Search + QnA Maker Accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator) | [search-qna-maker-accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator)| A [solution](https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381) combining the power of Search and QnA Maker. See the live [demo site](https://aka.ms/qnaWithAzureSearchDemo). |
-| [Knowledge Mining Solution Accelerator](/shows/ai-show/knowledge-mining-with-azure-search) | [azure-search-knowledge-mining](https://github.com/azure-samples/azure-search-knowledge-mining/tree/main/) | Includes templates, support files, and analytical reports to help you prototype an end-to-end knowledge mining solution.  |
+| Search + QnA Maker Accelerator | [search-qna-maker-accelerator](https://github.com/Azure-Samples/search-qna-maker-accelerator)| A [solution](https://techcommunity.microsoft.com/t5/azure-ai/qna-with-azure-cognitive-search/ba-p/2081381) combining the power of Search and QnA Maker. See the live [demo site](https://aka.ms/qnaWithAzureSearchDemo) |
+| [Knowledge Mining Solution Accelerator](/shows/ai-show/knowledge-mining-with-azure-search) | [azure-search-knowledge-mining](https://github.com/azure-samples/azure-search-knowledge-mining/tree/main/) | Includes templates, support files, and analytical reports to help you prototype an end-to-end knowledge mining solution  |
 
 ## Demos
 
@@ -80,18 +80,18 @@ A demo repo provides proof-of-concept source code for examples or scenarios show
 | Samples | Repository | Description |
 |---------|------------|-------------|
 | Covid-19 search app | [covid19search](https://github.com/liamca/covid19search) | Source code repository for the Azure AI Search based [Covid-19 Search App](https://covid19search.azurewebsites.net/) |
-| JFK demo | [AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) | Learn more about the [JFK solution](https://www.microsoft.com/ai/ai-lab-jfk-files). |
+| JFK demo | [AzureSearch JFK Files](https://github.com/Microsoft/AzureSearch_JFK_Files) | Learn more about the [JFK solution](https://www.microsoft.com/ai/ai-lab-jfk-files) |
 
 ## Other samples
 
 The following samples are also published by the Azure AI Search team, but aren't referenced in documentation. Associated readme files provide usage instructions.
 
 | Samples | Repository | Description |
 |---------|------------|-------------|
-| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-search-services) |  [azure-search-dotnet-scale](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page.  |
-| [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/check-storage-usage/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Invokes an Azure function that checks search service storage on a schedule. |
-| [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/export-data/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that partitions and export a large index. |
-| [Backup and restore an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/index-backup-restore/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that copies an index from one service to another, and in the process, creates JSON files on your computer with the index schema and documents.|
-| [Index Data Lake Gen2 using Microsoft Entra ID](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/data-lake-gen2-acl-indexing/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Source code demonstrating indexer connections and indexing of Azure Data Lake Gen2 files and folders that are secured through Microsoft Entra ID and role-based access controls. |
-| [Search aggregations](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/search-aggregations/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Proof-of-concept source code that demonstrates how to obtain aggregations from a search index and then filter by them. |
-| [Power Skills](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/README.md) | [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills)  | Source code for consumable custom skills that you can incorporate in your won solutions.  |
+| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-search-services) |  [azure-search-dotnet-scale](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page  |
+| [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/check-storage-usage/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Invokes an Azure function that checks search service storage on a schedule |
+| [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/export-data/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that partitions and export a large index |
+| [Backup and restore an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/index-backup-restore/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that copies an index from one service to another, and in the process, creates JSON files on your computer with the index schema and documents |
+| [Index Data Lake Gen2 using Microsoft Entra ID](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/data-lake-gen2-acl-indexing/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Source code demonstrating indexer connections and indexing of Azure Data Lake Gen2 files and folders that are secured through Microsoft Entra ID and role-based access controls |
+| [Search aggregations](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/search-aggregations/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Proof-of-concept source code that demonstrates how to obtain aggregations from a search index and then filter by them |
+| [Power Skills](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/README.md) | [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills)  | Source code for consumable custom skills that you can incorporate in your won solutions  |

Summary

{
    "modification_type": "minor update",
    "modification_title": "C# サンプル記事の内容更新"
}

Explanation

This change updates the article introducing C# code samples for Azure AI Search. Wording and structure were adjusted, improving overall readability.

The main changes include restructured sentences and revised phrasing, which help users understand the C# sample catalog more efficiently. For example, parts of the introduction were rewritten to be clearer.

Link text and sample descriptions also received small fixes, so users can reach the information they need more quickly. Overall, the update promotes the use of Azure AI Search and improves the learning experience.

articles/search/search-blob-metadata-properties.md

Diff
@@ -1,44 +1,44 @@
 ---
 title: Content metadata properties
 titleSuffix: Azure AI Search
-description: Metadata properties can provide content to fields in a search index. This article lists metadata properties supported in Azure AI Search.
+description: Learn how metadata properties can provide content to fields in a search index. This article lists metadata properties supported in Azure AI Search.
 author: HeidiSteen
 manager: nitinme
 ms.author: heidist
 ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 01/11/2024
+ms.date: 10/21/2024
 ---
 
 # Content metadata properties used in Azure AI Search
 
-Several of the indexer-supported data sources, including Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint, contain standalone files or embedded objects of various content types. Many of those content types have metadata properties that can be useful to index. Just as you can create search fields for standard blob properties like **`metadata_storage_name`**, you can create fields in a search index for metadata properties that are specific to a document format.
+Several indexer-supported data sources, including Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint, contain standalone files or embedded objects of various content types. Many of those content types have metadata properties that can be useful to index. Just as you can create search fields for standard blob properties like `metadata_storage_name`, you can create fields in a search index for metadata properties that are specific to a document format.
 
 ## Supported document formats
 
 Azure AI Search supports blob indexing and SharePoint document indexing for the following document formats:
 
 [!INCLUDE [search-blob-data-sources](./includes/search-blob-data-sources.md)]
 
-## Properties by document format
+## Document format properties
 
 The following table summarizes processing for each document format, and describes the metadata properties extracted by a blob indexer and the SharePoint Online indexer.
 
 | Document format / content type | Extracted metadata | Processing details |
 | --- | --- | --- |
-| CSV (text/csv) |`metadata_content_type`<br/>`metadata_content_encoding`<br/> | Extract text<br/>NOTE: If you need to extract multiple document fields from a CSV blob, see [Indexing CSV blobs](search-howto-index-csv-blobs.md) for details |
+| CSV (text/csv) |`metadata_content_type`<br/>`metadata_content_encoding`<br/> | Extract text<br/>NOTE: If you need to extract multiple document fields from a CSV blob, see [Index CSV blobs](search-howto-index-csv-blobs.md) |
 | DOC (application/msword) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
 | DOCM (application/vnd.ms-word.document.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
 | DOCX (application/vnd.openxmlformats-officedocument.wordprocessingml.document) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
 | EML (message/rfc822) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_to`<br/>`metadata_message_cc`<br/>`metadata_creation_date`<br/>`metadata_subject` |Extract text, including attachments |
 | EPUB (application/epub+zip) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_title`<br/>`metadata_description`<br/>`metadata_language`<br/>`metadata_keywords`<br/>`metadata_identifier`<br/>`metadata_publisher` |Extract text from all documents in the archive |
 | GZ (application/gzip) |`metadata_content_type` |Extract text from all documents in the archive |
-| HTML (text/html or application/xhtml+xml) |`metadata_content_encoding`<br/>`metadata_content_type`<br/>`metadata_language`<br/>`metadata_description`<br/>`metadata_keywords`<br/>`metadata_title` |Strip HTML markup and extract text |
-| JSON (application/json) |`metadata_content_type`<br/>`metadata_content_encoding` |Extract text<br/>NOTE: If you need to extract multiple document fields from a JSON blob, see [Indexing JSON blobs](search-howto-index-json-blobs.md) for details |
-| KML (application/vnd.google-earth.kml+xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML markup and extract text |
-| MSG (application/vnd.ms-outlook) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_from_email`<br/>`metadata_message_to`<br/>`metadata_message_to_email`<br/>`metadata_message_cc`<br/>`metadata_message_cc_email`<br/>`metadata_message_bcc`<br/>`metadata_message_bcc_email`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_subject` |Extract text, including text extracted from attachments. `metadata_message_to_email`, `metadata_message_cc_email` and `metadata_message_bcc_email` are string collections, the rest of the fields are strings.|
+| HTML (text/html or application/xhtml+xml) |`metadata_content_encoding`<br/>`metadata_content_type`<br/>`metadata_language`<br/>`metadata_description`<br/>`metadata_keywords`<br/>`metadata_title` |Strip HTML elements and extract text |
+| JSON (application/json) |`metadata_content_type`<br/>`metadata_content_encoding` |Extract text<br/>NOTE: If you need to extract multiple document fields from a JSON blob, see [Index JSON blobs](search-howto-index-json-blobs.md) |
+| KML (application/vnd.google-earth.kml+xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML elements and extract text |
+| MSG (application/vnd.ms-outlook) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_from_email`<br/>`metadata_message_to`<br/>`metadata_message_to_email`<br/>`metadata_message_cc`<br/>`metadata_message_cc_email`<br/>`metadata_message_bcc`<br/>`metadata_message_bcc_email`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_subject` |Extract text, including text extracted from attachments. `metadata_message_to_email`, `metadata_message_cc_email`, and `metadata_message_bcc_email` are string collections. The rest of the fields are strings.|
 | ODP (application/vnd.oasis.opendocument.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_title` |Extract text, including embedded documents |
 | ODS (application/vnd.oasis.opendocument.spreadsheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
 | ODT (application/vnd.oasis.opendocument.text) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
@@ -48,17 +48,17 @@ The following table summarizes processing for each document format, and describe
 | PPTM (application/vnd.ms-powerpoint.presentation.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
 | PPTX (application/vnd.openxmlformats-officedocument.presentationml.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
 | RTF (application/rtf) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count`<br/> | Extract text|
-| WORD 2003 XML (application/vnd.ms-wordml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date` |Strip XML markup and extract text |
-| WORD XML (application/vnd.ms-word2006ml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Strip XML markup and extract text |
+| WORD 2003 XML (application/vnd.ms-wordml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date` |Strip XML elements and extract text |
+| WORD XML (application/vnd.ms-word2006ml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Strip XML elements and extract text |
 | XLS (application/vnd.ms-excel) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
 | XLSM (application/vnd.ms-excel.sheet.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
 | XLSX (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
-| XML (application/xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML markup and extract text |
+| XML (application/xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML elements and extract text |
 | ZIP (application/zip) |`metadata_content_type` |Extract text from all documents in the archive |
 
-## See also
+## Related content
 
 * [Indexers in Azure AI Search](search-indexer-overview.md)
-* [AI enrichment overview](cognitive-search-concept-intro.md)
-* [Blob indexing overview](search-blob-storage-integration.md)
-* [SharePoint indexing](search-howto-index-sharepoint-online.md)
+* [AI enrichment in Azure AI Search](cognitive-search-concept-intro.md)
+* [Search over Azure Blob Storage content](search-blob-storage-integration.md)
+* [Index data from SharePoint](search-howto-index-sharepoint-online.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "Blobメタデータプロパティに関する記事の修正"
}

Explanation

This change updates the article on blob metadata properties in Azure AI Search. Specifically, wording throughout the document has been refined so that the information comes across more clearly.

The main changes are a review of the phrasing and organization of the descriptions: ambiguous wording has been removed and the information now flows more smoothly. The last-updated date has also been refreshed, so readers are presented with current information.

In addition, table entries and labels have been tidied up to make them easier to reference. The descriptions of format-specific properties have been revised for technical accuracy (for example, "Strip XML markup" is now "Strip XML elements"), and the closing "See also" section has been renamed "Related content" with more descriptive link titles.

Overall, this update raises the quality of the Azure AI Search documentation and aims to help users find and understand the information they need more quickly.

articles/search/search-create-service-portal.md

Diff
@@ -38,7 +38,7 @@ A few service properties are fixed for the lifetime of the service. Before creat
 
 Paid (or billable) search occurs when you choose a billable tier (Basic or higher) when creating the resource on a billable Azure subscription.
 
-To try Azure AI Search for free, [open a trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for nonproduction applications. Generally, you can complete all of the quickstarts and most tutorials, except for those featuring semantic ranker (it requires a billable service).
+To try Azure AI Search for free, [open a trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for nonproduction applications. Generally, you can complete all of the quickstarts and most tutorials, except for those featuring semantic ranker (it requires a billable service). Free services that are inactive for an extended period of time can be deleted by Microsoft to make room for other services.
 
 Alternatively, you can use free credits to try out paid Azure services. With this approach, you can create your search service at **Basic** or higher to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services. 
 
@@ -132,7 +132,7 @@ Basic and Standard are the most common choices for production workloads, but man
 
 :::image type="content" source="media/search-create-service-portal/select-pricing-tier.png" lightbox="media/search-create-service-portal/select-pricing-tier.png" alt-text="Screenshot of Select a pricing tier page." border="true":::
 
-Search services created after April 3, 2024 have larger partitions and higher vector quotas at every billable tier.
+Search services created after April 3, 2024 have larger partitions and higher vector quotas at every billable tier. 
 
 Remember, a pricing tier can't be changed once the service is created. If you need a higher or lower tier, you should re-create the service.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "サービスポータル作成に関する記事の修正"
}

Explanation

This change updates the article on creating an Azure AI Search service in the Azure portal. The main revisions clarify the information about free search services and add a warning that services left inactive for an extended period can be deleted.

Specifically, a sentence has been added stating that free services that sit unused for a long time may be deleted by Microsoft to make room for other services, calling users' attention to this possibility. With this update, users can better understand how free-tier services are managed.

Overall, these changes highlight important points for anyone using Azure AI Search and are intended to help users manage the service appropriately.

articles/search/search-how-to-create-indexers.md

Diff
@@ -11,20 +11,20 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 03/28/2024
+ms.date: 10/10/2024
 ---
 
 # Create an indexer in Azure AI Search
 
-Use an indexer to automate data import and indexing in Azure AI Search. An indexer is a named object on a search service that connects to an external Azure data source, reads data, and passes it to a search engine for indexing. Using indexers significantly reduces the quantity and complexity of the code you need to write if you're using a supported data source. 
+This article focuses on the basic steps of creating an indexer. Depending on the data source and your workflow, more configuration might be necessary.
 
-Indexers support two workflows:
+You can use an indexer to automate data import and indexing in Azure AI Search. An indexer is a named object on a search service that connects to an external Azure data source, reads data, and passes it to a search engine for indexing. Using indexers significantly reduces the quantity and complexity of the code you need to write if you're using a supported data source.
 
-+ Text-based indexing, extract strings and metadata from textual content for full text search scenarios.
+Indexers support two workflows:
 
-+ Skills-based indexing, using built-in or custom skills that add integrated machine learning for analysis over images and large undifferentiated content, extracting or inferring text and structure. Skill-based indexing enables search over content that isn't otherwise easily full text searchable. To learn more, see [AI enrichment in Azure AI Search](cognitive-search-concept-intro.md).
++ **Text-based indexing**: Extract strings and metadata from textual content for full text search scenarios.
 
-This article focuses on the basic steps of creating an indexer. Depending on the data source and your workflow, more configuration might be necessary.
++ **Skills-based indexing**: Use built-in or custom skills that add integrated machine learning for analysis over images and large undifferentiated content, extracting or inferring text and structure. Skills-based indexing enables search over content that isn't otherwise easily full text searchable. To learn more, see [AI enrichment in Azure AI Search](cognitive-search-concept-intro.md).
 
 ## Prerequisites
 
@@ -34,15 +34,15 @@ This article focuses on the basic steps of creating an indexer. Depending on the
 
 + A [search index](search-how-to-create-search-index.md) that can accept incoming data.
 
-+ Be under the [maximum limits](search-limits-quotas-capacity.md#indexer-limits) for your service tier. The Free tier allows three objects of each type and 1-3 minutes of indexer processing, or 3-10 if there's a skillset.
++ Be under the [maximum limits](search-limits-quotas-capacity.md#indexer-limits) for your service tier. The Free tier allows three objects of each type and 1-3 minutes of indexer processing, or 3-10 minutes if there's a skillset.
 
 ## Indexer patterns
 
-When you create an indexer, the definition is one of two patterns: text-based indexing or AI enrichment with skills. The patterns are the same, except that skills-based indexing has more definitions.
+When you create an indexer, the definition is one of two patterns: *text-based indexing* or *skills-based indexing*. The patterns are the same, except that skills-based indexing has more definitions.
 
 ### Indexer example for text-based indexing
 
-Text-based indexing for full text search is the primary use case for indexers, and for this workflow, an indexer looks like this example.
+Text-based indexing for full text search is the primary use case for indexers. For this workflow, an indexer looks like this example.
 
 ```json
 {
@@ -66,23 +66,29 @@ Text-based indexing for full text search is the primary use case for indexers, a
 
 Indexers have the following requirements:
 
-+ A `"name"` property that uniquely identifies the indexer in the indexer collection.
-+ A `"dataSourceName"` property that points to a data source object. It specifies a connection to external data.
-+ A `"targetIndexName"` property that points to the destination search index.
++ A `name` property that uniquely identifies the indexer in the indexer collection
++ A `dataSourceName` property that points to a data source object. It specifies a connection to external data
++ A `targetIndexName` property that points to the destination search index
 
 Other parameters are optional and modify run time behaviors, such as how many errors to accept before failing the entire job. Required parameters are specified in all indexers and are documented in the [REST API reference](/rest/api/searchservice/indexers/create#request-body). 
 
-Data source-specific indexers for blobs, SQL, and Azure Cosmos DB provide extra `"configuration"` parameters for source-specific behaviors. For example, if the source is Blob Storage, you can set a parameter that filters on file extensions: `"parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }`. If the source is Azure SQL, you can set a query time out parameter.
+Data source-specific indexers for blobs, SQL, and Azure Cosmos DB provide extra `configuration` parameters for source-specific behaviors. For example, if the source is Blob Storage, you can set a parameter that filters on file extensions, such as:
+
+```json
+"parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }
+```
+
+If the source is Azure SQL, you can set a query time out parameter.
 
 [Field mappings](search-indexer-field-mappings.md) are used to explicitly map source-to-destination fields if there are discrepancies by name or type between a field in the data source and a field in the search index.
 
-By default, an indexer runs immediately when you create it on the search service. If you don't want indexer execution, set `"disabled"` to true when creating the indexer.
+By default, an indexer runs immediately when you create it on the search service. If you don't want indexer execution, set `disabled` to *true* when creating the indexer.
 
 You can also [specify a schedule](search-howto-schedule-indexers.md) or set an [encryption key](search-security-manage-encryption-keys.md) for supplemental encryption of the indexer definition.
 
 ### Indexer example for skills-based indexing
 
-Indexers also drive [AI enrichment](cognitive-search-concept-intro.md). All of the above properties and parameters for apply, but the following extra properties are specific to AI enrichment: `"skillSetName"`, `"cache"`, `"outputFieldMappings"`. 
+Skills-based indexing uses [AI enrichment](cognitive-search-concept-intro.md) to process content that isn't searchable in its raw form. All of the above properties and parameters apply, but the following extra properties are specific to AI enrichment: `skillSetName`, `cache`, `outputFieldMappings`.
 
 ```json
 {
@@ -100,7 +106,7 @@ Indexers also drive [AI enrichment](cognitive-search-concept-intro.md). All of t
 }
 ```
 
-AI enrichment is its own subject area and is out of scope for this article. For more information, start with [AI enrichment](cognitive-search-concept-intro.md), [Skillsets in Azure AI Search](cognitive-search-working-with-skillsets.md), [Create a skillset](cognitive-search-defining-skillset.md), [Map enrichment output fields](cognitive-search-output-field-mapping.md), and [Enable caching for AI enrichment](search-howto-incremental-index.md).
+AI enrichment is its own subject area and is out of scope for this article. For more information, start with [AI enrichment](cognitive-search-concept-intro.md), [Skillsets in Azure AI Search](cognitive-search-working-with-skillsets.md), [Create a skillset](cognitive-search-defining-skillset.md), [Map enriched output fields](cognitive-search-output-field-mapping.md), and [Enable caching for AI enrichment](search-howto-incremental-index.md).
 
 ## Prepare external data
 
@@ -109,13 +115,13 @@ Indexers work with data sets. When you run an indexer, it connects to your data
 | Source data | Tasks |
 |-------------|-------|
 | JSON documents | Make sure the structure or shape of incoming data corresponds to the schema of your search index. Most search indexes are fairly flat, where the fields collection consists of fields at the same level. However, hierarchical or nested structures are possible through [complex fields and collections](search-howto-complex-data-types.md). |
-| Relational | Provide it as a flattened row set, where each row becomes a full or partial search document in the index. </p> To flatten relational data into a row set, you should create a SQL view, or build a query that returns parent and child records in the same row. For example, the built-in hotels sample dataset is an SQL database that has 50 records (one for each hotel), linked to room records in a related table. The query that flattens the collective data into a row set embeds all of the room information in JSON documents in each hotel record. The embedded room information is a generated by a query that uses a **FOR JSON AUTO** clause. </p> You can learn more about this technique in [define a query that returns embedded JSON](index-sql-relational-data.md#define-a-query-that-returns-embedded-json). This is just one example; you can find other approaches that produce the same result. |
+| Relational | Provide data as a flattened row set, where each row becomes a full or partial search document in the index. <br><br> To flatten relational data into a row set, you should create a SQL view, or build a query that returns parent and child records in the same row. For example, the built-in hotels sample dataset is an SQL database that has 50 records (one for each hotel), linked to room records in a related table. The query that flattens the collective data into a row set embeds all of the room information in JSON documents in each hotel record. The embedded room information is a generated by a query that uses a **FOR JSON AUTO** clause. <br><br> You can learn more about this technique in [define a query that returns embedded JSON](index-sql-relational-data.md#define-a-query-that-returns-embedded-json). This is just one example; you can find other approaches that produce the same result. |
 | Files | An indexer generally creates one search document for each file, where the search document consists of fields for content and metadata. Depending on the file type, the indexer can sometimes [parse one file into multiple search documents](search-howto-index-one-to-many-blobs.md). For example, in a CSV file, each row can become a standalone search document. |
 
 Remember that you only need to pull in searchable and filterable data:
 
-+ Searchable data is text.
-+ Filterable data is alphanumeric.
++ Searchable data is text
++ Filterable data is alphanumeric
 
 Azure AI Search can't search over binary data in any format, although it can extract and infer text descriptions of image files (see [AI enrichment](cognitive-search-concept-intro.md)) to create searchable content. Likewise, large text can be broken down and analyzed by natural language models to find structure or relevant information, generating new content that you can add to a search document.
 
@@ -127,13 +133,13 @@ Indexers require a data source that specifies the type, container, and connectio
 
 1. Make sure you're using a [supported data source type](search-indexer-overview.md#supported-data-sources).
 
-1. [Create a data source](/rest/api/searchservice/data-sources/create) definition. The following list is a few of the more frequently used data sources:
+1. [Create a data source](/rest/api/searchservice/data-sources/create) definition. The following data sources are a few of the more frequently used sources:
 
    + [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
    + [Azure Cosmos DB](search-howto-index-cosmosdb.md)
    + [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
 
-1. If the data source is a database, such as Azure SQL or Cosmos DB, enable change tracking. Azure Storage has built-in change tracking through the `LastModified` property on every blob, file, and table. The above links for the various data sources explain which change tracking methods are supported by indexers.
+1. If the data source is a database, such as Azure SQL or Cosmos DB, enable change tracking. Azure Storage has built-in change tracking through the `LastModified` property on every blob, file, and table. The links for the various data sources explain which change tracking methods are supported by indexers.
 
 ## Prepare an index
 
@@ -157,21 +163,21 @@ When you're ready to create an indexer on a remote search service, you need a se
 
 ### [**Azure portal**](#tab/portal)
 
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com), then find your search service.
 
-1. On the search service Overview page, choose from two options: 
+1. On the search service **Overview** page, choose from two options:
 
-   + [**Import data wizard**](search-import-data-portal.md). The wizard is unique in that it creates all of the required elements. Other approaches require a predefined data source and index.
+   + [**Import data** wizard](search-import-data-portal.md): The wizard is unique in that it creates all of the required elements. Other approaches require a predefined data source and index.
 
-   + **New Indexer**, a visual editor for specifying an indexer definition. 
+       :::image type="content" source="media/search-how-to-create-indexers/portal-indexer-client.png" alt-text="Screenshot that shows the Import data wizard." border="true":::
 
-   The following screenshot shows where you can find these features in the portal. 
+   + **Add indexer**: A visual editor for specifying an indexer definition.
 
-   :::image type="content" source="media/search-howto-create-indexers/portal-indexer-client.png" alt-text="hotels indexer" border="true":::
+       :::image type="content" source="media/search-how-to-create-indexers/portal-indexer-client-2.png" alt-text="Screenshot that shows the Add indexer button." border="true":::
 
 ### [**REST**](#tab/indexer-rest)
 
-Visual Studio Code with a REST client can send indexer requests. Using the app, you can connect to your search service and send [Create Indexer (REST)](/rest/api/searchservice/indexers/create) or [Update indexer](/rest/api/searchservice/indexers/create-or-update) requests. 
+Visual Studio Code with a REST client can send indexer requests. Using the app, you can connect to your search service and send [Create indexer (REST)](/rest/api/searchservice/indexers/create) or [Update indexer](/rest/api/searchservice/indexers/create-or-update) requests. 
 
 ```http
 POST /indexers?api-version=[api-version]
@@ -192,7 +198,7 @@ There are numerous tutorials and examples that demonstrate REST clients for crea
 
 ### [**.NET SDK**](#tab/indexer-csharp)
 
-For Azure AI Search, the Azure SDKs implement generally available features. As such, you can use any of the SDKs to create indexer-related objects. All of them provide a **SearchIndexerClient** that has methods for creating indexers and related objects, including skillsets.
+For Azure AI Search, the Azure SDKs implement generally available features. As such, you can use any of the SDKs to create indexer-related objects. All of them provide a `SearchIndexerClient` that has methods for creating indexers and related objects, including skillsets.
 
 | Azure SDK | Client | Examples |
 |-----------|--------|----------|
@@ -205,7 +211,7 @@ For Azure AI Search, the Azure SDKs implement generally available features. As s
 
 ## Run the indexer
 
-By default, an indexer runs immediately when you create it on the search service. You can override this behavior by setting `"disabled"` to true in the indexer definition. Indexer execution is the moment of truth where you find out if there are problems with connections, field mappings, or skillset construction. 
+By default, an indexer runs immediately when you create it on the search service. You can override this behavior by setting `disabled` to *true* in the indexer definition. Indexer execution is the moment of truth where you find out if there are problems with connections, field mappings, or skillset construction. 
 
 There are several ways to run an indexer:
 
@@ -217,7 +223,7 @@ There are several ways to run an indexer:
 
 Scheduled execution is usually implemented when you have a need for incremental indexing so that you can pick up the latest changes. As such, scheduling has a dependency on change detection.
 
-Indexers are one of the few subsystems that make overt outbound calls to other Azure resources. In terms of Azure roles, indexers don't have separate identities: a connection from the search engine to another Azure resource is made using the [system or user-assigned managed identity](search-howto-managed-identities-data-sources.md) of a search service. If the indexer connects to an Azure resource on a virtual network, you should create a [shared private link](search-indexer-howto-access-private.md) for that connection. For more information about secure connections, see the [Security in Azure AI Search](search-security-overview.md).
+Indexers are one of the few subsystems that make overt outbound calls to other Azure resources. In terms of Azure roles, indexers don't have separate identities; a connection from the search engine to another Azure resource is made using the [system or user-assigned managed identity](search-howto-managed-identities-data-sources.md) of a search service. If the indexer connects to an Azure resource on a virtual network, you should create a [shared private link](search-indexer-howto-access-private.md) for that connection. For more information about secure connections, see [Security in Azure AI Search](search-security-overview.md).
 
 ## Check results
 
@@ -227,25 +233,25 @@ For content verification, [run queries](search-query-create.md) on the populated
 
 ## Change detection and internal state
 
-If your data source supports change detection, an indexer can detect underlying changes in the data and process just the new or updated documents on each indexer run, leaving unchanged content as-is. If indexer execution history says that a run was successful with `0/0` documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
+If your data source supports change detection, an indexer can detect underlying changes in the data and process just the new or updated documents on each indexer run, leaving unchanged content as-is. If indexer execution history says that a run was successful with *0/0* documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
 
 Change detection logic is built into the data platforms. How an indexer supports change detection varies by data source:
 
-+ Azure Storage has built-in change detection, which means an indexer can recognize new and updated documents automatically. Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. An indexer automatically uses this information to determine which documents to update in the index. For more information about deletion detection, see [Delete detection using indexers for Azure Storage in Azure AI Search](search-howto-index-changed-deleted-blobs.md).
++ Azure Storage has built-in change detection, which means an indexer can recognize new and updated documents automatically. Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. An indexer automatically uses this information to determine which documents to update in the index. For more information about deletion detection, see [Change and delete detection using indexers for Azure Storage](search-howto-index-changed-deleted-blobs.md).
 
 + Cloud database technologies provide optional change detection features in their platforms. For these data sources, change detection isn't automatic. You need to specify in the data source definition which policy is used:
 
   + [Azure SQL (change detection)](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#indexing-new-changed-and-deleted-rows)
   + [Azure DB for MySQL (change detection)](search-howto-index-mysql.md#indexing-new-and-changed-rows)
   + [Azure Cosmos DB for NoSQL (change detection)](search-howto-index-cosmosdb.md#indexing-new-and-changed-documents)
   + [Azure Cosmos DB for MongoDB (change detection)](search-howto-index-cosmosdb-mongodb.md#indexing-new-and-changed-documents)
-  + [Azure CosmosDB for Apache Gremlin (change detection)](search-howto-index-cosmosdb-gremlin.md#indexing-new-and-changed-documents)
+  + [Azure Cosmos DB for Apache Gremlin (change detection)](search-howto-index-cosmosdb-gremlin.md#indexing-new-and-changed-documents)
 
 Indexers keep track of the last document it processed from the data source through an internal *high water mark*. The marker is never exposed in the API, but internally the indexer keeps track of where it stopped. When indexing resumes, either through a scheduled run or an on-demand invocation, the indexer references the high water mark so that it can pick up where it left off.
 
 If you need to clear the high water mark to reindex in full, you can use [Reset Indexer](/rest/api/searchservice/indexers/reset). For more selective reindexing, use [Reset Skills](/rest/api/searchservice/skillsets/reset-skills?view=rest-searchservice-2024-05-01-preview&preserve-view=true) or [Reset Documents](/rest/api/searchservice/indexers/reset-docs?view=rest-searchservice-2024-05-01-preview&preserve-view=true). Through the reset APIs, you can clear internal state, and also flush the cache if you enabled [incremental enrichment](search-howto-incremental-index.md). For more background and comparison of each reset option, see [Run or reset indexers, skills, and documents](search-howto-run-reset-indexers.md).
 
-## Next steps
+## Related content
 
 + [Index data from Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
 + [Index data from Azure SQL database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)

Summary

{
    "modification_type": "minor update",
    "modification_title": "インデクサの作成方法に関する記事の改訂とリネーム"
}

Explanation

This change covers the revision and rename of the article on creating an indexer in Azure AI Search. The specific changes include renaming the article and clarifying its content.

The main changes are:

  1. Article rename: the article was renamed so that its title more accurately reflects the content.
  2. Content update: the explanations in the article were revised to be more detailed and clearer, with stronger descriptions of text-based indexing and skills-based indexing in particular.
  3. Procedure cleanup: the information on the indexer creation steps was reorganized, and the prerequisites and individual steps are now easier to follow.
  4. Date update: the last-updated date was also revised to indicate how current the information is.

Overall, this revision aims to deepen understanding of the indexer creation process and help users put the feature to effective use; a hedged sketch of the basic pattern follows.
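To make the pattern described in the diff concrete, here is a minimal sketch of creating a text-based indexer through the Create Indexer REST API using Python's `requests` library. The service name, API key, API version, data source, and index names are placeholders, and the `indexedFileNameExtensions` configuration only applies when the data source is Blob Storage.

```python
import requests

# Placeholder values: substitute your own search service, admin API key,
# data source, and index names before running.
SERVICE = "my-search-service"
API_KEY = "<admin-api-key>"
API_VERSION = "2024-07-01"

indexer = {
    "name": "hotels-indexer",           # unique within the indexer collection
    "dataSourceName": "hotels-ds",      # connection to the external data source
    "targetIndexName": "hotels-index",  # destination search index
    "disabled": False,                  # set to True to create without running
    "parameters": {
        "configuration": {
            # Blob-specific behavior: only index PDF and DOCX files.
            "indexedFileNameExtensions": ".pdf,.docx"
        }
    },
}

response = requests.post(
    f"https://{SERVICE}.search.windows.net/indexers?api-version={API_VERSION}",
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    json=indexer,
)
response.raise_for_status()
print(response.json())
```

As the diff notes, an indexer runs as soon as it's created by default; setting `disabled` to `True` defers execution until you run it explicitly or on a schedule.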

articles/search/search-howto-complex-data-types.md

Diff
@@ -11,7 +11,7 @@ ms.custom:
   - ignite-2023
 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 10/14/2024
+ms.date: 10/21/2024
 ---
 
 # Model complex data types in Azure AI Search
@@ -210,7 +210,7 @@ Queries like this are *uncorrelated* for full-text search, unlike filters. In fi
 
 ## Search complex fields in RAG queries
 
-A RAG pattern passes search results to a chat model for generative AI and conversational search. By default, search results passed to an LLM are a flattened rowset. However, if your index has complex types, your query can provide those fields if you first convert the search results output to JSON, and then pass the JSON to the LLM.
+A RAG pattern passes search results to a chat model for generative AI and conversational search. By default, search results passed to an LLM are a flattened rowset. However, if your index has complex types, your query can provide those fields if you first convert the search results to JSON, and then pass the JSON to the LLM.
 
 A partial example illustrates the technique:
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "複雑なデータタイプのモデル化に関する記事の修正"
}

Explanation

This change is a minor revision of the article on modeling complex data types in Azure AI Search. The main fixes are as follows:

  1. Date update: the article's last-updated date was changed from October 14, 2024 to October 21, 2024 to reflect the latest information.

  2. Sentence fix: the text about indexes that contain complex types was lightly revised for clarity; specifically, "search results output" was shortened to "search results", making the meaning more direct.

Overall, this revision keeps the content current and helps users deepen their understanding of searching over complex data types; a hedged sketch of the RAG technique it mentions follows.
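As a hedged illustration of the technique mentioned above (converting search results to JSON before passing them to an LLM in a RAG pattern), the following sketch queries the hotels sample index over the REST API and serializes the selected fields, including a complex Address field, to a JSON string. The service name, query key, index name, and field names are assumptions.

```python
import json
import requests

SERVICE = "my-search-service"   # placeholder
API_KEY = "<query-api-key>"     # placeholder
INDEX = "hotels-sample-index"   # assumed sample index with a complex Address field

response = requests.post(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs/search?api-version=2024-07-01",
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    json={
        "search": "walking distance to live music",
        "select": "HotelName, Description, Address",  # Address is the complex field
        "top": 5,
    },
)
response.raise_for_status()

# Search results arrive as a flattened rowset; serializing them to JSON keeps
# the nested Address structure intact before it's added to the chat prompt.
sources_json = json.dumps(response.json()["value"], indent=2)
print(sources_json)
```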

articles/search/search-limits-quotas-capacity.md

Diff
@@ -8,7 +8,7 @@ author: HeidiSteen
 ms.author: heidist
 ms.service: azure-ai-search
 ms.topic: conceptual
-ms.date: 09/19/2024
+ms.date: 10/17/2024
 ms.custom:
   - references_regions
   - build-2024

Summary

{
    "modification_type": "minor update",
    "modification_title": "検索制限、クォータ、キャパシティに関する記事の更新"
}

Explanation

This change is a minor revision of the Azure AI Search article on service limits, quotas, and capacity. The main change is as follows:

  1. Date update: the article's last-updated date was changed from September 19, 2024 to October 17, 2024, so users are offered the most recent information.

This change matters for keeping the content current and is intended to ensure that users can access up-to-date information.

articles/search/search-query-create.md

Diff
@@ -1,5 +1,5 @@
 ---
-title: Full-text query how-to
+title: Create a full text query 
 titleSuffix: Azure AI Search
 description: Learn how to construct a query request for full text search in Azure AI Search.
 
@@ -10,16 +10,16 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 11/16/2023
+ms.date: 10/10/2024
 ---
 
-# Create a full-text query in Azure AI Search
+# Create a full text query in Azure AI Search
 
-If you're building a query for [full text search](search-lucene-query-architecture.md), this article provides steps for setting up the request. It also introduces a query structure, and explains how field attributes and linguistic analyzers can impact query outcomes.
+If you're building a query for [full text search](search-lucene-query-architecture.md), this article provides steps for setting up the request. It also introduces a query structure, and explains how field attributes and linguistic analyzers can affect query outcomes.
 
 ## Prerequisites
 
-+ A [search index](search-how-to-create-search-index.md) with string fields attributed as `searchable`.
++ A [search index](search-how-to-create-search-index.md) with string fields attributed as *searchable*.
 
 + Read permissions on the search index. For read access, include a [query API key](search-security-api-keys.md) on the request, or give the caller [Search Index Data Reader](search-security-rbac.md) permissions.
 
@@ -29,7 +29,7 @@ In Azure AI Search, a query is a read-only request against the docs collection o
 
 A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition to the request.
 
-The following [Search POST REST API](/rest/api/searchservice/documents/search-post) call illustrates a query request using the aforementioned parameters.
+The following [Search POST REST API](/rest/api/searchservice/documents/search-post) call illustrates a query request using the mentioned parameters.
 
 ```http
 POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2024-07-01
@@ -44,41 +44,41 @@ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
 }
 ```
 
-**Key points:**
+### Key points
 
-+ **`search`** provides the match criteria, usually whole terms or phrases, with or without operators. Any field that is attributed as "searchable" in the index schema is a candidate for this parameter.
++ **`search`** provides the match criteria, usually whole terms or phrases, with or without operators. Any field that is attributed as *searchable* in the index schema is a candidate for this parameter.
 
-+ **`queryType`** sets the parser: `simple`, `full`. The [default simple query parser](search-query-simple-examples.md) is optimal for full text search. The [full Lucene query parser](search-query-lucene-examples.md) is for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. This parameter can also be set to `semantic` for [semantic ranking](semantic-search-overview.md) for advanced semantic modeling on the query response.
++ **`queryType`** sets the parser: *simple*, *full*. The [default simple query parser](search-query-simple-examples.md) is optimal for full text search. The [full Lucene query parser](search-query-lucene-examples.md) is for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. This parameter can also be set to *semantic* for [semantic ranking](semantic-search-overview.md) for advanced semantic modeling on the query response.
 
-+ **`searchMode`** specifies whether matches are based on "all" criteria (favors precision) or "any" criteria (favors recall) in the expression. The default is "any". If you anticipate heavy use of Boolean operators, which is more likely in indexes that contain large text blocks (a content field or long descriptions), be sure to test queries with the **`searchMode=Any|All`** parameter to evaluate the impact of that setting on boolean search.
++ **`searchMode`** specifies whether matches are based on *all* criteria (favors precision) or *any* criteria (favors recall) in the expression. The default is *any*. If you anticipate heavy use of Boolean operators, which is more likely in indexes that contain large text blocks (a content field or long descriptions), be sure to test queries with the `searchMode=Any|All` parameter to evaluate the impact of that setting on Boolean search.
 
 + **`searchFields`** constrains query execution to specific searchable fields. During development, it's helpful to use the same field list for select and search. Otherwise a match might be based on field values that you can't see in the results, creating uncertainty as to why the document was returned.
 
 Parameters used to shape the response:
 
-+ **`select`** specifies which fields to return in the response. Only fields marked as "retrievable" in the index can be used in a select statement.
++ **`select`** specifies which fields to return in the response. Only fields marked as *retrievable* in the index can be used in a select statement.
 
 + **`top`** returns the specified number of best-matching documents. In this example, only 10 hits are returned. You can use top and skip (not shown) to page the results.
 
 + **`count`** tells you how many documents in the entire index match overall, which can be more than what are returned. 
 
-+ **`orderby`** is used if you want to sort results by a value, such as a rating or location. Otherwise, the default is to use the relevance score to rank results. A  field must be attributed as "sortable" to be a candidate for this parameter.
++ **`orderby`** is used if you want to sort results by a value, such as a rating or location. Otherwise, the default is to use the relevance score to rank results. A field must be attributed as *sortable* to be a candidate for this parameter.
 
 ## Choose a client
 
-For early development and proof-of-concept testing, start with Azure portal or a [REST client](search-get-started-rest.md). Both approaches are interactive, useful for targeted testing, and help you assess the effects of different properties without having to write any code.
+For early development and proof-of-concept testing, start with the Azure portal or a REST client. Both approaches are interactive, useful for targeted testing, and help you assess the effects of different properties without having to write any code.
 
-To call search from within an app, use the **Azure.Document.Search** client libraries in the Azure SDKs for .NET, Java, JavaScript, and Python.
+To call search from within an app, use the `Azure.Document.Search` client libraries in the Azure SDKs for .NET, Java, JavaScript, and Python.
 
 ### [**Azure portal**](#tab/portal-text-query)
 
 In the portal, when you open an index, you can work with Search Explorer alongside the index JSON definition in side-by-side tabs for easy access to field attributes. Check the **Fields** table to see which ones are searchable, sortable, filterable, and facetable while testing queries.
 
 1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
 
-1. Open **Indexes** and select an index.
+1. In your service, select **Indexes** and choose an index.
 
-1. An index opens to the [**Search explorer**](search-explorer.md) tab so that you can query right away. Switch to **JSON view** to specify query syntax. 
+1. An index opens to the [**Search explorer**](search-explorer.md) tab so that you can query right away. Switch to **Edit JSON** to specify query syntax. 
 
    Here's a full text search query expression that works for the Hotels sample index:
 
@@ -100,7 +100,7 @@ In the portal, when you open an index, you can work with Search Explorer alongsi
 
 ### [**REST API**](#tab/rest-text-query)
 
-Use a REST client to set up a request. [Quickstart: Text search using REST](search-get-started-rest.md) has instructions if you need help with getting started.
+Use a REST client to set up a request. If you need help with getting started, see [Quickstart: Text search using REST](search-get-started-rest.md).
 
 The following example calls the REST API for full text search:
 
@@ -118,7 +118,7 @@ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
 
 ### [**Azure SDKs**](#tab/sdk-text-query)
 
-The following Azure SDKs provide a **SearchClient** that has methods for formulating query requests.
+The following Azure SDKs provide a `SearchClient` that has methods for formulating query requests.
 
 | Azure SDK | Client | Examples |
 |-----------|--------|----------|
@@ -133,41 +133,41 @@ The following Azure SDKs provide a **SearchClient** that has methods for formula
 
 If your query is full text search, a query parser is used to process any text that's passed as search terms and phrases. Azure AI Search offers two query parsers. 
 
-+ The simple parser understands the [simple query syntax](query-simple-syntax.md). This parser was selected as the default for its speed and effectiveness in free form text queries. The syntax supports common search operators (AND, OR, NOT) for term and phrase searches, and prefix (`*`) search (as in "sea*" for Seattle and Seaside). A general recommendation is to try the simple parser first, and then move on to full parser if application requirements call for more powerful queries.
++ The simple parser understands the [simple query syntax](query-simple-syntax.md). This parser was selected as the default for its speed and effectiveness in free form text queries. The syntax supports common search operators (AND, OR, NOT) for term and phrase searches, and prefix (`*`) search (as in `sea*` for Seattle and Seaside). A general recommendation is to try the simple parser first, and then move on to full parser if application requirements call for more powerful queries.
 
 + The [full Lucene query syntax](query-Lucene-syntax.md#bkmk_syntax), enabled when you add `queryType=full` to the request, is based on the [Apache Lucene Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html).
 
-Full syntax and simple syntax overlap to the extent that both support the same prefix and boolean operations, but the full syntax provides more operators. In full, there are more operators for boolean expressions, and more operators for advanced queries such as fuzzy search, wildcard search, proximity search, and regular expressions.
+Full syntax and simple syntax overlap to the extent that both support the same prefix and Boolean operations, but the full syntax provides more operators. In full, there are more operators for Boolean expressions, and more operators for advanced queries such as fuzzy search, wildcard search, proximity search, and regular expressions.
 
 ## Choose query methods
 
 Search is fundamentally a user-driven exercise, where terms or phrases are collected from a search box, or from click events on a page. The following table summarizes the mechanisms by which you can collect user input, along with the expected search experience.
 
 | Input | Experience |
 |-------|---------|
-| [Search method](/rest/api/searchservice/documents/search-post) | A user types the terms or phrases into a search box, with or without operators, and clicks Search to send the request. Search can be used with filters on the same request, but not with autocomplete or suggestions. |
-| [Autocomplete method](/rest/api/searchservice/documents/autocomplete-post) | A user types a few characters, and queries are initiated after each new character is typed. The response is a completed string from the index. If the string provided is valid, the user clicks Search to send that query to the service. |
-| [Suggestions method](/rest/api/searchservice/documents/suggest-post) | As with autocomplete, a user types a few characters and incremental queries are generated. The response is a dropdown list of matching documents, typically represented by a few unique or descriptive fields. If any of the selections are valid, the user clicks one and the matching document is returned. |
+| [Search method](/rest/api/searchservice/documents/search-post) | A user types the terms or phrases into a search box, with or without operators, and selects **Search** to send the request. Search can be used with filters on the same request, but not with autocomplete or suggestions. |
+| [Autocomplete method](/rest/api/searchservice/documents/autocomplete-post) | A user types a few characters, and queries are initiated after each new character is typed. The response is a completed string from the index. If the string provided is valid, the user selects **Search** to send that query to the service. |
+| [Suggestions method](/rest/api/searchservice/documents/suggest-post) | As with autocomplete, a user types a few characters and incremental queries are generated. The response is a dropdown list of matching documents, typically represented by a few unique or descriptive fields. If any of the selections are valid, the user selects one and the matching document is returned. |
 | [Faceted navigation](/rest/api/searchservice/documents/search-post#searchrequest) | A page shows clickable navigation links or breadcrumbs that narrow the scope of the search. A faceted navigation structure is composed dynamically based on an initial query. For example, `search=*` to populate a faceted navigation tree composed of every possible category. A faceted navigation structure is created from a query response, but it's also a mechanism for expressing the next query. n REST API reference, `facets` is documented as a query parameter of a Search Documents operation, but it can be used without the `search` parameter.|
 | [Filter method](/rest/api/searchservice/documents/search-post#searchrequest) | Filters are used with facets to narrow results. You can also implement a filter behind the page, for example to initialize the page with language-specific fields. In REST API reference, `$filter` is documented as a query parameter of a Search Documents operation, but it can be used without the `search` parameter.|
 
 ## Effect of field attributes on queries
 
-If you're familiar with [query types and composition](search-query-overview.md), you might remember that the parameters on a query request depend on field attributes in an index. For example, only fields marked as `searchable` and `retrievable` can be used in queries and search results. When setting the `search`, `filter`, and `orderby` parameters in your request, you should check attributes to avoid unexpected results.
+If you're familiar with [query types and composition](search-query-overview.md), you might remember that the parameters on a query request depend on field attributes in an index. For example, only fields marked as *searchable* and *retrievable* can be used in queries and search results. When setting the `search`, `filter`, and `orderby` parameters in your request, you should check attributes to avoid unexpected results.
 
-In the portal screenshot below of the [hotels sample index](search-get-started-portal.md), only the last two fields "LastRenovationDate" and "Rating" are `sortable`, a requirement for use in an `"$orderby"` only clause.
+In the following screenshot of the [hotels sample index](search-get-started-portal.md), only the last two fields **LastRenovationDate** and **Rating** are *sortable*, a requirement for use in an `"$orderby"` only clause.
 
-![Index definition for the hotel sample](./media/search-query-overview/hotel-sample-index-definition.png "Index definition for the hotel sample")
+:::image type="content" source="media/search-query-overview/hotel-sample-index-definition.png" alt-text="Screenshot that shows the index definition for the hotel sample.":::
 
 For field attribute definitions, see [Create Index (REST API)](/rest/api/searchservice/indexes/create).
 
 ## Effect of tokens on queries
 
 During indexing, the search engine uses a text analyzer on strings to maximize the potential for finding a match at query time. At a minimum, strings are lower-cased, but depending on the analyzer, might also undergo lemmatization and stop word removal. Larger strings or compound words are typically broken up by whitespace, hyphens, or dashes, and indexed as separate tokens. 
 
-The point to take away here's that what you think your index contains, and what's actually in it, can be different. If queries don't return expected results, you can inspect the tokens created by the analyzer through the [Analyze Text (REST API)](/rest/api/searchservice/indexes/analyze). For more information about tokenization and the impact on queries, see [Partial term search and patterns with special characters](search-query-partial-matching.md).
+The key point is that what you think your index contains, and what's actually in it, can be different. If queries don't return expected results, you can inspect the tokens created by the analyzer through the [Analyze Text (REST API)](/rest/api/searchservice/indexes/analyze). For more information about tokenization and the effect on queries, see [Partial term search and patterns with special characters](search-query-partial-matching.md).
 
-## Next steps
+## Related content
 
 Now that you have a better understanding of how query requests work, try the following quickstarts for hands-on experience.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "フルテキストクエリ作成に関する記事の更新"
}

Explanation

This change substantially updates the "Create a full text query" article for Azure AI Search. The main changes are:

  1. Title fix: the article title changed from "Full-text query how-to" to "Create a full text query", reflecting the content more specifically.

  2. Date update: the last-updated date moved from November 16, 2023 to October 10, 2024, so the latest information is provided.

  3. Detailed content refinement: many sentences were clarified; in particular, terms such as *searchable* and *retrievable* are now emphasized consistently, and the code snippets and headings were improved for readability.

  4. Renamed related-content section: "Next steps" became "Related content", pointing users more clearly toward follow-on material.

Overall, this update provides the key information users need to build full text queries; a hedged request sketch follows.
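To tie the key points in the diff together, here is a hedged sketch of a full text query sent over the Search POST REST API with Python's `requests` library. The service name, query key, and index are placeholders; the parameters mirror the ones the article calls out (`search`, `queryType`, `searchMode`, `searchFields`, `select`, `top`, `count`, and `orderby` on a sortable field).

```python
import requests

SERVICE = "my-search-service"   # placeholder
API_KEY = "<query-api-key>"     # placeholder
INDEX = "hotels-sample-index"

query = {
    "search": "restaurant on site",    # terms or phrases, with or without operators
    "queryType": "simple",             # default parser; use "full" for Lucene syntax
    "searchMode": "all",               # require all criteria to match (favors precision)
    "searchFields": "HotelName, Description",   # restrict matching to these fields
    "select": "HotelName, Description, Rating",
    "top": 10,                         # number of best-matching documents to return
    "count": True,                     # report total matches across the index
    "orderby": "Rating desc",          # Rating is attributed as sortable
}

response = requests.post(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs/search?api-version=2024-07-01",
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    json=query,
)
response.raise_for_status()

results = response.json()
print(results["@odata.count"], "total matches")
for doc in results["value"]:
    print(doc["HotelName"], doc["Rating"])
```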

articles/search/search-query-partial-matching.md

Diff
@@ -10,14 +10,14 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 02/22/2024
+ms.date: 10/21/2024
 ---
 
 # Partial term search and patterns with special characters (hyphens, wildcard, regex, patterns)
 
 A *partial term search* refers to queries consisting of term fragments, where instead of a whole term, you might have just the beginning, middle, or end of term (sometimes referred to as prefix, infix, or suffix queries). A partial term search might include a combination of fragments, often with special characters such as hyphens, dashes, or slashes that are part of the query string. Common use-cases include parts of a phone number, URL, codes, or hyphenated compound words.
 
-Partial terms and special characters can be problematic if the index doesn't have a token that represents the text fragment you want to search for. During the [lexical analysis phase](search-lucene-query-architecture.md#stage-2-lexical-analysis) of indexing (assuming the default standard analyzer), special characters are discarded, compound words are split up, and whitespace is deleted. If you're searching for a text fragment that was modified during lexical analysis, the query fails because no match is found. Consider this example: a phone number like `+1 (425) 703-6214` (tokenized as `"1"`, `"425"`, `"703"`, `"6214"`) won't show up in a `"3-62"` query because that content doesn't actually exist in the index. 
+Partial terms and special characters can be problematic if the index doesn't have a token representing the text fragment you want to search for. During the [lexical analysis phase](search-lucene-query-architecture.md#stage-2-lexical-analysis) of keyword indexing (assuming the default standard analyzer), special characters are discarded, compound words are split up, and whitespace is deleted. If you're searching for a text fragment that was modified during lexical analysis, the query fails because no match is found. Consider this example: a phone number like `+1 (425) 703-6214` (tokenized as `"1"`, `"425"`, `"703"`, `"6214"`) won't show up in a `"3-62"` query because that content doesn't actually exist in the index. 
 
 The solution is to invoke an analyzer during indexing that preserves a complete string, including spaces and special characters if necessary, so that you can include the spaces and characters in your query string. Having a whole, untokenized string enables pattern matching for "starts with" or "ends with" queries, where the pattern you provide can be evaluated against a term that isn't transformed by lexical analysis. 
 
@@ -31,7 +31,7 @@ Partial terms are specified using these techniques:
 
 + [Regular expression queries](query-lucene-syntax.md#bkmk_regex) can be any regular expression that is valid under Apache Lucene. 
 
-+ [Wildcard operators with prefix matching](query-simple-syntax.md#prefix-search) refers to a generally recognized pattern that includes the beginning of a term, followed by `*` or `?` suffix operators, such as `search=cap*` matching on "Cap'n Jack's Waterfront Inn" or "Gacc Capital". Prefixing matching is supported in both simple and full Lucene query syntax.
++ [Wildcard operators with prefix matching](query-simple-syntax.md#prefix-search) refers to a generally recognized pattern that includes the beginning of a term, followed by `*` or `?` suffix operators, such as `search=cap*` matching on "Cap'n Jack's Waterfront Inn" or "Highline Capital". Prefixing matching is supported in both simple and full Lucene query syntax.
 
 + [Wildcard with infix and suffix matching](query-lucene-syntax.md#bkmk_wildcard) places the `*` and `?` operators inside or at the beginning of a term, and requires regular expression syntax (where the expression is enclosed with forward slashes). For example, the query string (`search=/.*numeric.*/`) returns results on "alphanumeric" and "alphanumerical" as suffix and infix matches.
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "部分一致検索と特殊文字に関する記事の更新"
}

Explanation

This change is a minor revision of the Azure AI Search article on partial term search and patterns with special characters. The main changes are:

  1. Date update: the article's last-updated date moved from February 22, 2024 to October 21, 2024, reflecting the latest information.

  2. Content refinements: the partial term search examples were updated, and the explanations of how phone numbers are tokenized and how special characters are handled are now clearer. One of the example phrases was also replaced with something more concrete ("Gacc Capital" became "Highline Capital").

Overall, the update gives users the key information they need to understand and apply partial term search effectively; a hedged prefix-search sketch follows.
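The diff keeps the `search=cap*` prefix example, so here is a hedged sketch of issuing that wildcard prefix query over the REST API with Python's `requests` library. Prefix matching works with the default simple parser, while infix and suffix wildcards require the full Lucene parser and regular-expression syntax enclosed in forward slashes. The service, key, index, and field names are placeholders.

```python
import requests

SERVICE = "my-search-service"   # placeholder
API_KEY = "<query-api-key>"     # placeholder
INDEX = "hotels-sample-index"   # placeholder

# Prefix search: matches terms that start with "cap", such as "Cap'n" or "Capital".
query = {
    "search": "cap*",
    "queryType": "simple",       # prefix search is supported in simple syntax
    "searchFields": "HotelName",
    "select": "HotelName",
}

# For infix or suffix matching you would instead send, for example:
#   {"search": "/.*numeric.*/", "queryType": "full"}

response = requests.post(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs/search?api-version=2024-07-01",
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    json=query,
)
response.raise_for_status()
for doc in response.json()["value"]:
    print(doc["HotelName"])
```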

articles/search/search-sku-tier.md

Diff
@@ -35,7 +35,7 @@ Tiers include **Free**, **Basic**, **Standard**, and **Storage Optimized**. Stan
 
 :::image type="content" source="media/search-sku-tier/tiers.png" lightbox="media/search-sku-tier/tiers.png" alt-text="Pricing tier chart" border="true":::
 
-**Free** creates a [limited search service](search-limits-quotas-capacity.md#subscription-limits) for smaller projects, like running tutorials and code samples. Internally, system resources are shared among multiple subscribers. You can't scale a free service, run significant workloads, and some premium features aren't available. You can only have one free search service per Azure subscription.
+**Free** creates a [limited search service](search-limits-quotas-capacity.md#subscription-limits) for smaller projects, like running tutorials and code samples. Internally, system resources are shared among multiple subscribers. You can't scale a free service, run significant workloads, and some premium features aren't available. You can only have one free search service per Azure subscription. If the service is inactive for an extended period of time, it might be deleted to free up capacity, especially if the region is under capacity constraints.
 
 The most commonly used billable tiers include:
 
@@ -62,6 +62,7 @@ Currently, several regions are at capacity for specific tiers and can't be used
 | Central US | S2, S3, S3HD, L1, L2 |
 | Central India | S2, S3, S3HD, L1, L2|
 | East US| All tiers|
+| East US 2| Basic, S1|
 | Japan East | S2, S3, S3HD, L1, L2 |
 | Qatar Central | All tiers|
 | South Central US | All tiers |

Summary

{
    "modification_type": "minor update",
    "modification_title": "SKUティアに関する記事の更新"
}

Explanation

This change is a minor revision of the Azure AI Search article on SKU tiers. The main changes are:

  1. Expanded Free tier description: the Free tier description now notes that a service left inactive for an extended period of time might be deleted to free up capacity, especially if the region is under capacity constraints. This gives users a better understanding of the conditions that apply to the Free tier. The table of capacity-constrained regions also gains a row for East US 2 (Basic, S1).

  2. Image and surrounding text: the pricing tier chart and the text around it are unchanged, but the added note improves the overall context.

This update clarifies the SKU tier information and is intended to help users make more appropriate choices when using the Azure service.

articles/search/search-try-for-free.md

Diff
@@ -150,6 +150,9 @@ You can create a search service that doesn't consume credits. Here are some poin
 
 Review the [service limits](search-limits-quotas-capacity.md) for other constraints that apply to the free tier.
 
+> [!NOTE]
+> Free services that remain inactive for an extended period of time might be deleted to free up capacity if the region is under capacity constraints.
+
 ## Next steps
 
 Sign up for an Azure trial subscription:

Summary

{
    "modification_type": "minor update",
    "modification_title": "無料トライアルサービスに関する注意事項の追加"
}

Explanation

This change is a minor revision of the Azure AI Search "Try for free" article. The main changes are:

  1. Note added: a note about free services was inserted, stating that free services that remain inactive for an extended period might be deleted to free up capacity if the region is under capacity constraints. The note encourages users to avoid leaving resources sitting unused.

  2. Structure preserved: the change consists only of the new note, so the rest of the document is unaffected and the information remains consistent.

This update gives users a clearer picture of how the free Azure trial service works and helps them manage it appropriately.

articles/search/service-create-private-endpoint.md

Diff
@@ -9,110 +9,110 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 04/03/2024
+ms.date: 10/08/2024
 ---
 
 # Create a private endpoint for a secure connection to Azure AI Search
 
-In this article, learn how to configure a private connection to Azure AI Search so that it admits requests from clients in a virtual network instead of over a public internet connection:
+This article explains how to configure a private connection to Azure AI Search so that it admits requests from clients in a virtual network instead of over a public internet connection:
 
-+ [Create an Azure virtual network](#create-the-virtual-network) (or use an existing one)
++ [Create an Azure virtual network](#create-the-virtual-network), or use an existing one
 + [Configure a search service to use a private endpoint](#create-a-search-service-with-a-private-endpoint)
 + [Create an Azure virtual machine in the same virtual network](#create-a-virtual-machine)
 + [Test using a browser session on the virtual machine](#connect-to-the-vm)
 
 Other Azure resources that might privately connect to Azure AI Search include Azure OpenAI for "use your own data" scenarios. Azure OpenAI Studio doesn't run in a virtual network, but it can be configured on the backend to send requests over the Microsoft backbone network. Configuration for this traffic pattern is enabled by Microsoft when your request is submitted and approved. For this scenario:
 
 + Follow the instructions in this article to set up the private endpoint.
-+ [Submit a request](/azure/ai-services/openai/how-to/use-your-data-securely#disable-public-network-access-1) for Azure OpenAI Studio to connect using your private endpoint.
++ [Enable trusted service](/azure/ai-services/openai/how-to/use-your-data-securely#enable-trusted-service-1) of your search resource from the Azure portal.
 + Optionally, [disable public network access](#disable-public-network-access) if connections should only originate from clients in virtual network or from Azure OpenAI over a private endpoint connection.
 
 ## Key points about private endpoints
 
-Private endpoints are provided by [Azure Private Link](/azure/private-link/private-link-overview), as a separate billable service. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/).
+Private endpoints are provided by [Azure Private Link](/azure/private-link/private-link-overview), as a separate billable service. For more information about costs, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
 
 Once a search service has a private endpoint, portal access to that service must be initiated from a browser session on a virtual machine inside the virtual network. See [this step](#portal-access-private-search-service) for details.
 
-You can create a private endpoint for a search service in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API version](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).
+You can create a private endpoint for a search service in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or the [Azure CLI](/cli/azure/search).
 
 ## Why use a private endpoint?
 
-[Private Endpoints](/azure/private-link/private-endpoint-overview) for Azure AI Search allow a client on a virtual network to securely access data in a search index over a [Private Link](/azure/private-link/private-link-overview). The private endpoint uses an IP address from the [virtual network address space](/azure/virtual-network/ip-services/private-ip-addresses) for your search service. Network traffic between the client and the search service traverses over the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. For a list of other PaaS services that support Private Link, check the [availability section](/azure/private-link/private-link-overview#availability) in the product documentation.
+[Private endpoints](/azure/private-link/private-endpoint-overview) for Azure AI Search allow a client on a virtual network to securely access data in a search index over a [Private Link](/azure/private-link/private-link-overview). The private endpoint uses an IP address from the [virtual network address space](/azure/virtual-network/ip-services/private-ip-addresses) for your search service. Network traffic between the client and the search service traverses over the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. For a list of other PaaS services that support Private Link, check the [availability section](/azure/private-link/private-link-overview#availability) in the product documentation.
 
-Private endpoints for your search service enable you to:
+Private endpoints for your search service allow you to:
 
 + Block all connections on the public endpoint for your search service.
-+ Increase security for the virtual network, by enabling you to block exfiltration of data from the virtual network.
++ Increase security for the virtual network, by letting you block exfiltration of data from the virtual network.
 + Securely connect to your search service from on-premises networks that connect to the virtual network using [VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways) or [ExpressRoutes](/azure/expressroute/expressroute-locations) with private-peering.
 
 ## Create the virtual network
 
-In this section, you'll create a virtual network and subnet to host the VM that will be used to access your search service's private endpoint.
+In this section, you create a virtual network and subnet to host the VM that will be used to access your search service's private endpoint.
 
 1. From the Azure portal home tab, select **Create a resource** > **Networking** > **Virtual network**.
 
 1. In **Create virtual network**, enter or select the following values:
 
     | Setting | Value |
     | ------- | ----- |
-    | Subscription | Select your subscription.|
-    | Resource group | Select **Create new**, enter a name, such as "myResourceGroup", then select **OK**. |
-    | Name | Enter a name, such as "MyVirtualNetwork". |
-    | Region | Select a region. |
+    | Subscription | Select your subscription |
+    | Resource group | Select **Create new**, enter a name, such as *myResourceGroup*, then select **OK** |
+    | Name | Enter a name, such as *MyVirtualNetwork* |
+    | Region | Select a region |
 
 1. Accept the defaults for the rest of the settings. Select **Review + create** and then **Create**.
 
 ## Create a search service with a private endpoint
 
-In this section, you'll create a new Azure AI Search service with a Private Endpoint.
+In this section, you create a new Azure AI Search service with a private endpoint.
 
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Web** > **Azure AI Search**.
+1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **AI + machine learning** > **AI Search**.
 
-1. In **New Search Service - Basics**, enter or select the following values:
+1. In **Create a search service - Basics**, enter or select the following values:
 
     | Setting | Value |
     | ------- | ----- |
     | **PROJECT DETAILS** | |
-    | Subscription | Select your subscription. |
-    | Resource group | Use the resource group that you created in the previous step.|
+    | Subscription | Select your subscription |
+    | Resource group | Use the resource group that you created in the previous step|
     | **INSTANCE DETAILS** |  |
-    | URL | Enter a unique name. |
-    | Location | Select your region. |
+    | URL | Enter a unique name |
+    | Location | Select your region |
     | Pricing tier | Select **Change Pricing Tier** and choose your desired service tier. Private endpoints aren't supported on the  **Free** tier. You must select **Basic** or higher. |
   
 1. Select **Next: Scale**.
 
 1. Accept the defaults and select **Next: Networking**.
 
-1. In **New Search Service - Networking**, select **Private** for **Endpoint connectivity(data)**.
+1. In **Create a search service - Networking**, select **Private** for **Endpoint connectivity (data)**.
 
 1. Select **+ Add** under **Private endpoint**. 
 
-1. In **Create Private Endpoint**, enter or select values that associate your search service with the virtual network you created:
+1. In **Create private endpoint**, enter or select values that associate your search service with the virtual network you created:
 
     | Setting | Value |
     | ------- | ----- |
-    | Subscription | Select your subscription. |
-    | Resource group | Use the resource group that you created in the previous step. |
-    | Location | Select a region. |
-    | Name | Enter a name, such as "myPrivateEndpoint".  |
-    | Target subresource | Accept the default **searchService**. |
+    | Subscription | Select your subscription |
+    | Resource group | Use the resource group that you created in the previous step |
+    | Location | Select a region |
+    | Name | Enter a name, such as *myPrivateEndpoint*  |
+    | Target subresource | Accept the default **searchService** |
     | **NETWORKING** |  |
-    | Virtual network  | Select the virtual network you created in the previous step. |
-    | Subnet | Select the default. |
+    | Virtual network  | Select the virtual network you created in the previous step |
+    | Subnet | Select the default |
     | **PRIVATE DNS INTEGRATION** |  |
-    | Integrate with private DNS zone  | Accept the default "Yes". |
-    | Private DNS zone  | Accept the default **(New) privatelink.search.windows.net**. |
+    | Enable Private DNS Integration  | Select the checkbox |
+    | Private DNS zone  | Accept the default **(New) privatelink.search.windows.net** |
 
-1. Select **OK**. 
+1. Select **Add**.
 
 1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. 
 
 1. When you see the **Validation passed** message, select **Create**. 
 
 1. Once provisioning of your new service is complete, browse to the resource that you created.
 
-1. Select **Keys** from the left content menu.
+1. Select **Settings** > **Keys** from the left content menu.
 
 1. Copy the **Primary admin key** for later, when connecting to the service.
 
@@ -127,22 +127,22 @@ In this section, you'll create a new Azure AI Search service with a Private Endp
     | Setting | Value |
     | ------- | ----- |
     | **PROJECT DETAILS** | |
-    | Subscription | Select your subscription. |
-    | Resource group | Use the resource group that you created in the previous section.|
+    | Subscription | Select your subscription |
+    | Resource group | Use the resource group that you created in the previous section |
     | **INSTANCE DETAILS** |  |
-    | Virtual machine name | Enter a name, such as "my-vm". |
-    | Region | Select your region. |
-    | Availability options | You can choose **No infrastructure redundancy required**, or select another option if you need the functionality. |
-    | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2**. |
-    | VM architecture | Accept the default **x64**. |
-    | Size | Accept the default **Standard D2S v3**. |
+    | Virtual machine name | Enter a name, such as *my-vm* |
+    | Region | Select your region |
+    | Availability options | You can choose **No infrastructure redundancy required**, or select another option if you need the functionality |
+    | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** |
+    | VM architecture | Accept the default **x64** |
+    | Size | Accept the default **Standard D2S v3** |
     | **ADMINISTRATOR ACCOUNT** |  |
-    | Username | Enter the user name of the administrator. Use an account that's valid for your Azure subscription. You'll want to sign in to the Azure portal from the VM so that you can manage your search service. |
+    | Username | Enter the user name of the administrator. Use an account that's valid for your Azure subscription. Sign in to the Azure portal from the VM so that you can manage your search service. |
     | Password | Enter the account password. The password must be at least 12 characters long and meet the [defined complexity requirements](/azure/virtual-machines/windows/faq?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
-    | Confirm Password | Reenter password. |
+    | Confirm Password | Reenter password |
     | **INBOUND PORT RULES** |  |
-    | Public inbound ports | Accept the default **Allow selected ports**. |
-    | Select inbound ports | Accept the default **RDP (3389)**. |
+    | Public inbound ports | Accept the default **Allow selected ports** |
+    | Select inbound ports | Accept the default **RDP (3389)** |
 
 1. Select **Next: Disks**.
 
@@ -152,12 +152,12 @@ In this section, you'll create a new Azure AI Search service with a Private Endp
 
     | Setting | Value |
     | ------- | ----- |
-    | Virtual network | Select the virtual network you created in a previous step. |
-    | Subnet | Accept the default (10.1.0.0/24).|
-    | NIC network security group | Accept the default "Basic" |
-    | Public IP | Accept the default "(new) myVm-ip". |
-    | Public inbound ports | Select the default "Allow selected ports". |
-    | Select inbound ports | Select "HTTP 80", "HTTPS (443)" and "RDP (3389)".|
+    | Virtual network | Select the virtual network you created in a previous step |
+    | Subnet | Accept the default **10.1.0.0/24** |
+    | Public IP | Accept the default |
+    | NIC network security group | Accept the default **Basic** |
+    | Public inbound ports | Select the default **Allow selected ports** |
+    | Select inbound ports | Select **HTTP 80**, **HTTPS (443)**, and **RDP (3389)** |
 
    > [!NOTE]
    > IPv4 addresses can be expressed in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) format. Remember to avoid the IP range reserved for private networking, as described in [RFC 1918](https://tools.ietf.org/html/rfc1918):
@@ -178,32 +178,32 @@ Download and then connect to the virtual machine as follows:
 
 1. Select **Connect**. After selecting the **Connect** button, **Connect to virtual machine** opens.
 
-1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (`.rdp`) file and downloads it to your computer.
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (*.rdp*) file and downloads it to your computer.
 
-1. Open the downloaded `.rdp` file.
+1. Open the downloaded *.rdp* file.
 
     1. If prompted, select **Connect**.
 
     1. Enter the username and password you specified when creating the VM.
 
         > [!NOTE]
-        > You may need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
+        > You might need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
 
 1. Select **OK**.
 
-1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
+1. You might receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
 
 1. Once the VM desktop appears, minimize it to go back to your local desktop.  
 
 ## Test connections
 
-In this section, you'll verify private network access to the search service and connect privately to the using the Private Endpoint.
+In this section, you verify private network access to the search service and connect privately to the using the Private Endpoint.
 
-When the search service endpoint is private, some portal features are disabled. You'll be able to view and manage service level settings, but portal access to index data and various other components in the service, such as the index, indexer, and skillset definitions, is restricted for security reasons.
+When the search service endpoint is private, some portal features are disabled. You can view and manage service level settings, but portal access to index data and various other components in the service, such as the index, indexer, and skillset definitions, is restricted for security reasons.
 
 1. In the Remote Desktop of *myVM*, open PowerShell.
 
-1. Enter `nslookup [search service name].search.windows.net`.
+1. Enter `nslookup [search service name].search.windows.net`.
 
     You'll receive a message similar to this:
 
@@ -216,13 +216,13 @@ When the search service endpoint is private, some portal features are disabled.
     Aliases:  [search service name].search.windows.net
     ```
 
-1. From the VM, connect to the search service and create an index. You can follow this [quickstart](search-get-started-rest.md) to create a new search index in your service using the REST API. Setting up requests from a Web API test tool requires the search service endpoint (https://[search service name].search.windows.net) and the admin api-key you copied in a previous step.
+1. From the VM, connect to the search service and create an index. You can follow this [quickstart](search-get-started-rest.md) to create a new search index in your service using the REST API. Setting up requests from a Web API test tool requires the search service endpoint `(https://[search service name].search.windows.net)` and the admin api-key you copied in a previous step.
 
 1. Completing the quickstart from the VM is your confirmation that the service is fully operational.
 
 1. Close the remote desktop connection to *myVM*. 
 
-1. To verify that your service isn't accessible on a public endpoint, open a REST client on your local workstation and attempt the first several tasks in the quickstart. If you receive an error that the remote server doesn't exist, you've successfully configured a private endpoint for your search service.
+1. To verify that your service isn't accessible on a public endpoint, open a REST client on your local workstation and attempt the first several tasks in the quickstart. If you receive an error that the remote server doesn't exist, you successfully configured a private endpoint for your search service.
 
 <a id="portal-access-private-search-service"></a>
 
@@ -234,7 +234,7 @@ To work around this restriction, connect to Azure portal from a browser on a vir
 
 1. Follow the [steps to provision a VM that can access the search service through a private endpoint](#create-virtual-machine-private-endpoint).
 
-1. On a virtual machine in your virtual network, open a browser and sign in to the Azure portal. The portal will use the private endpoint attached to the virtual machine to connect to your search service.
+1. On a virtual machine in your virtual network, open a browser and sign in to the Azure portal. The portal uses the private endpoint attached to the virtual machine to connect to your search service.
 
 ## Disable public network access
 
@@ -244,14 +244,14 @@ You can lock down a search service to prevent it from admitting any request from
 
 1. Select **Disabled** on the **Firewalls and virtual networks** tab.
 
-You can also use the [Azure CLI](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update&preserve-view=true), [Azure PowerShell](/powershell/module/az.search/set-azsearchservice), or the [Management REST API](/rest/api/searchmanagement/services/update), setting `public-access` or `public-network-access` to `disabled`.
+You can also use the [Azure CLI](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update&preserve-view=true), [Azure PowerShell](/powershell/module/az.search/set-azsearchservice), or the [Management REST API](/rest/api/searchmanagement/), by setting `public-access` or `public-network-access` to `disabled`.
 
 ## Clean up resources
 
 When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money.
 
 You can delete individual resources or the resource group to delete everything you created in this exercise. Select the resource group on any resource's overview page, and then select **Delete**.
 
-## Next steps
+## Next step
 
-In this article, you created a VM on a virtual network and a search service with a Private Endpoint. You connected to the VM from the internet and securely communicated to the search service using Private Link. To learn more about Private Endpoint, see [What is Azure Private Endpoint?](/azure/private-link/private-endpoint-overview).
+In this article, you created a VM on a virtual network and a search service with a private endpoint. You connected to the VM from the internet and securely communicated to the search service using Private Link. To learn more about private endpoints, see [What is a private endpoint?](/azure/private-link/private-endpoint-overview)

Summary

{
    "modification_type": "minor update",
    "modification_title": "プライベートエンドポイントの設定に関する記事の修正"
}

Explanation

This change is a minor update to the article on creating a private endpoint for a secure connection to Azure AI Search. The main changes are as follows:

  1. Date update: the article's ms.date was changed from 04/03/2024 to 10/08/2024, reflecting the freshness of the content.

  2. Clearer wording: several passages were rephrased, for example the opening sentence now reads "This article explains how to configure a private connection to Azure AI Search...", and the Azure OpenAI guidance now points to enabling the trusted service for the search resource in the Azure portal instead of submitting a request.

  3. Updated procedures: the portal steps were revised to match the current UI (for example, **Create a resource** > **AI + machine learning** > **AI Search**, the renamed **Create a search service** panes, and **Settings** > **Keys**), the settings tables were tightened by removing trailing periods, and terminology such as "private endpoint" is now used consistently throughout.

The goal of this update is to give users clearer and more accurate information about configuring a private endpoint, reduce misunderstandings about the steps, and make the actual setup process go more smoothly.
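
The diff above mentions that public network access can also be turned off outside the portal by setting `public-access` or `public-network-access` to `disabled`. As one illustration, the sketch below does this through the Management REST API (Services - Update) from Python with `requests` and `azure-identity`; the resource names are placeholders, and the `2023-11-01` API version and the exact casing of the `publicNetworkAccess` value are assumptions based on the current management API, not values taken from the article.

```python
# Minimal sketch: lock down a search service so it no longer accepts requests
# from the public internet, matching the "Disable public network access" step.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"    # placeholder
resource_group = "<resource-group>"      # placeholder
service_name = "<search-service-name>"   # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Search"
    f"/searchServices/{service_name}?api-version=2023-11-01"
)

# PATCH (Services - Update) changes only the properties supplied in the body.
response = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"properties": {"publicNetworkAccess": "disabled"}},
)
response.raise_for_status()
print(response.json().get("properties", {}).get("publicNetworkAccess"))
```

After a change like this, data-plane requests from outside the virtual network should fail, which lines up with the verification step in the article where the quickstart only succeeds when run from the VM.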

articles/search/tutorial-csharp-overview.md

Diff
@@ -7,7 +7,7 @@ author: diberry
 ms.author: diberry
 ms.service: azure-ai-search
 ms.topic: tutorial
-ms.date: 08/16/2024
+ms.date: 10/21/2024
 ms.custom:
   - devx-track-csharp
   - devx-track-dotnet
@@ -21,7 +21,7 @@ This tutorial builds a website to search through a catalog of books and then dep
 
 ## What does the sample do?
 
-This sample website provides access to a catalog of 10,000 books. You can search the catalog by entering text in the search bar. While you enter text, the website uses the search index's [\suggestion feature to autocomplete the text. Once the query finishes, the list of books is displayed with a portion of the details. You can select a book to see all the details, stored in the search index, of the book. 
+This sample website provides access to a catalog of 10,000 books. You can search the catalog by entering text in the search bar. While you enter text, the website uses the search index's suggestion feature to autocomplete the text. Once the query finishes, the list of books is displayed with a portion of the details. You can select a book to see all the details, stored in the search index, of the book. 
 
 :::image type="content" source="media/tutorial-csharp-overview/cognitive-search-enabled-book-website-2.png" alt-text="Screenshot of the sample app in a browser window.":::
 

Summary

{
    "modification_type": "minor update",
    "modification_title": "C# チュートリアルの更新"
}

Explanation

This change is a minor update to the overview article for the C# tutorial. The main changes are as follows:

  1. Date update: the article's ms.date was changed from 08/16/2024 to 10/21/2024, reflecting the freshness of the content.

  2. Text cleanup: the description of the sample website was tidied up; in particular, a stray "[\" artifact in front of the phrase "suggestion feature" was removed, improving the flow of the sentence.

  3. Clearer explanation: the description of how the sample website works, including autocomplete suggestions while typing and the detail view for each book, now reads more naturally.

The goal of this update is to present the C# tutorial more clearly and accurately so that readers can follow the steps more easily.
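
To illustrate the suggestion feature the tutorial's search bar relies on, the sketch below issues a suggestions request against the data-plane REST API, the kind of call that runs as a user types. The tutorial itself is a C#/.NET and web sample; this Python sketch is only a minimal, hedged illustration, and the endpoint, query key, index name, and suggester name (`sg`) are placeholders or assumptions rather than values taken from the tutorial.

```python
# Minimal sketch: call the suggestions API that backs search-as-you-type,
# returning matching titles for a partial query string.
import requests

endpoint = "https://<search-service-name>.search.windows.net"  # placeholder
index_name = "<index-name>"   # the book catalog index created by the tutorial
query_key = "<query-api-key>" # placeholder

url = f"{endpoint}/indexes/{index_name}/docs/search.suggest?api-version=2023-11-01"
body = {
    "search": "war",        # the partial text the user has typed so far
    "suggesterName": "sg",  # assumed name of the suggester defined in the index
    "top": 5,
}

response = requests.post(url, headers={"api-key": query_key}, json=body)
response.raise_for_status()

# Each suggestion includes the matched text in "@search.text"
# (plus any fields requested via "select").
for suggestion in response.json()["value"]:
    print(suggestion["@search.text"])
```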