# Highlights
This diff renames the "Import and vectorize data wizard" to the "Import data (new) wizard" across several documents, with the goal of reducing confusion for users working with Azure AI services. It also adds new documents and images, updates links, and corrects inaccurate images and names.
## New features
- New quickstart guides and sample code make it easier to understand how to use the import wizards in Azure AI Search.
- Added images and GIFs strengthen the visual guidance so that users can follow the steps more easily.
## Breaking changes
- Removing "skip-cognitive-skills.png" may leave some steps or features unclear; replacement information or images are needed.
## Other updates
- The name "Import and vectorize data wizard" was standardized to "Import data (new) wizard" across the documents.
- Dates were updated to indicate that the content is current.
- Links were updated and corrected so that readers reach accurate sources.
# Insights
This diff represents a series of improvements aimed at a better user experience on the Azure AI platform. In particular, renaming the "Import and vectorize data wizard" to the "Import data (new) wizard" improves documentation consistency and ensures that users can find and adopt the latest functionality.
The rename reflects new features and processes, so users can distinguish the old and new wizards without mixing them up. It also makes it more likely that first-time users can start working with Azure AI features right away.
Visual guidance was strengthened as well: new images and quickstart guides make the concrete steps easier to follow. The new Import data (new) wizard in particular offers keyword search support and a clearer interface, helping users ingest data more efficiently.
Overall, these changes raise the quality of the documentation and help users take full advantage of the latest Azure AI features. The update emphasizes both platform currency and user experience.
# Summary Table
## Modified Contents
articles/search/cognitive-search-aml-skill.md
Diff
@@ -38,7 +38,7 @@ Starting in the 2024-05-01-preview REST API and the Azure portal, which also tar
During indexing, the **AML** skill can connect to the model catalog to generate vectors for the index. At query time, queries can use a vectorizer to connect to the same model to vectorize text strings for a vector query. In this workflow, you should use the **AML** skill and the model catalog vectorizer together so that the same embedding model is used for indexing and queries. For more information, including a list of supported embedding models, see [Use embedding models from Azure AI Foundry model catalog](vector-search-integrated-vectorization-ai-studio.md).
-We recommend using the [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) to generate a skillset that includes an AML skill for deployed embedding models in Azure AI Foundry. The wizard generates the AML skill definition for inputs, outputs, and mappings, providing an easy way to test a model before writing any code.
+We recommend using the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) to generate a skillset that includes an AML skill for deployed embedding models in Azure AI Foundry. The wizard generates the AML skill definition for inputs, outputs, and mappings, providing an easy way to test a model before writing any code.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "AMLスキルに関するウィザードの名称変更"
}
Explanation
This change renames the "Import and vectorize data wizard" to the "Import data (new) wizard" in the cognitive-search-aml-skill.md file. The previous name no longer matches the Azure portal, so the more accurate name is now recommended. The correction helps users avoid confusion when choosing the latest wizard and makes it easier to generate a skillset for embedding models deployed in Azure AI Foundry. The impact is minor, but it improves documentation consistency and gives readers accurate, easy-to-follow guidance.
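For readers who want to see what the wizard produces, here is a minimal sketch of an AML skill definition as it might appear in a generated skillset. The endpoint URI, key, and input/output names are placeholders; the actual names depend on the scoring schema of the model deployed in Azure AI Foundry.

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.AmlSkill",
  "name": "aml-embedding-skill",
  "description": "Calls a deployed embedding model to vectorize chunked text",
  "context": "/document/pages/*",
  "uri": "https://<your-endpoint>.<region>.inference.ml.azure.com/score",
  "key": "<your-endpoint-key>",
  "timeout": "PT30S",
  "inputs": [
    { "name": "text", "source": "/document/pages/*" }
  ],
  "outputs": [
    { "name": "embedding", "targetName": "vector" }
  ]
}
```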
articles/search/cognitive-search-concept-troubleshooting.md
Diff
@@ -18,7 +18,7 @@ This article contains tips to help you get started with AI enrichment and skills
## Tip 1: Start simple and start small
-Both the [**Import data wizard**](search-get-started-skillset.md) and [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) in the Azure portal support AI enrichment. Without writing any code, you can create and examine all of the objects used in an enrichment pipeline: an index, indexer, data source, and skillset.
+Both the [**Import data** wizard](search-get-started-skillset.md) and the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal support AI enrichment. Without writing any code, you can create and examine all of the objects used in an enrichment pipeline: an index, indexer, data source, and skillset.
Another way to start simply is by creating a data source with just a handful of documents or rows in a table that are representative of the documents that will be indexed. A small data set is the best way to increase the speed of finding and fixing issues.Run your sample through the end-to-end pipeline and check that the results meet your needs. Once you're satisfied with the results, you're ready to add more files to your data source.
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称変更"
}
Explanation
This change corrects the wizard name in the cognitive-search-concept-troubleshooting.md file. Specifically, "Import and vectorize data wizard" becomes "Import data (new) wizard" in the Azure portal guidance, so the article reflects the wizard's current name and highlights the latest functionality. The change is minor, but it improves consistency and gives users correct information, helping them move through AI enrichment and skillset creation more smoothly.
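To illustrate the "start simple and start small" tip that this section discusses, the sketch below shows a hypothetical data source definition that points an indexer at a small subset of blobs. The names and the folder prefix are illustrative only.

```json
{
  "name": "hotels-sample-small",
  "type": "azureblob",
  "credentials": { "connectionString": "<storage-connection-string>" },
  "container": {
    "name": "sample-docs",
    "query": "subset"
  }
}
```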
articles/search/cognitive-search-defining-skillset.md
Diff
@@ -271,7 +271,7 @@ Although skill output can be optionally cached for reuse purposes, it's usually
## Tips for a first skillset
-+ Try the [Import data wizard](search-get-started-portal.md) or [Import and vectorize data wizard](search-get-started-portal-import-vectors.md).
++ Try the [**Import data** wizard](search-get-started-portal.md) or [**Import data (new)** wizard](search-get-started-portal-import-vectors.md).
The wizards automate several steps that can be challenging the first time around. It defines the skillset, index, and indexer, including field mappings and output field mappings. It also defines projections in a knowledge store if you're using one. For some skills, such as OCR or image analysis, the wizard adds utility skills that merge the image and text content that was separated during document cracking.
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称修正"
}
Explanation
This change updates the wizard name in the cognitive-search-defining-skillset.md file. "Import and vectorize data wizard" becomes "Import data (new) wizard", steering readers toward the current wizard. The updated text conveys the wizard's role more clearly for anyone defining a first AI skillset: the wizard generates the skillset, index, and indexer automatically, which simplifies the process for first-time users. Overall, it is a small but useful update that improves the quality of the document.
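As a point of reference, a first skillset generated by either wizard has roughly the shape sketched below. This is a minimal, hedged example assuming a single language detection skill and a key-based Azure AI services connection; all names are placeholders.

```json
{
  "name": "my-first-skillset",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
      "context": "/document",
      "inputs": [ { "name": "text", "source": "/document/content" } ],
      "outputs": [ { "name": "languageCode", "targetName": "language" } ]
    }
  ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "key": "<azure-ai-services-key>"
  }
}
```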
articles/search/cognitive-search-how-to-debug-skillset.md
Diff
@@ -132,7 +132,7 @@ If skills produce output but the search index is empty, check the field mappings
Select one of the mapping options and expand the details view to review source and target definitions.
-+ [**Projection Mappings**](index-projections-concept-intro.md) are found in skillsets that provide integrated vectorization, such as the skills created by the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). These mappings determine parent-child (chunk) field mappings and whether a secondary index is created for just the chunked content
++ [**Projection Mappings**](index-projections-concept-intro.md) are found in skillsets that provide integrated vectorization, such as the skills created by the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md). These mappings determine parent-child (chunk) field mappings and whether a secondary index is created for just the chunked content
+ [**Output Field Mappings**](cognitive-search-output-field-mapping.md) are found in indexers and are used when skillsets invoke built-in or custom skills. These mappings are used to set the data path from a node in the enrichment tree to a field in the search index. For more information about paths, see [enrichment node path syntax](cognitive-search-concept-annotations-syntax.md).
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称更新"
}
Explanation
This change corrects the wizard name in the cognitive-search-how-to-debug-skillset.md file. "Import and vectorize data wizard" becomes "Import data (new) wizard", clearly identifying the new version of the wizard so that users recognize the latest functionality. The bullet notes that projection mappings appear in skillsets that provide integrated vectorization, such as those generated by this wizard, and the adjacent bullet on output field mappings (unchanged in this diff) explains how indexers map nodes in the enrichment tree to fields in the search index. The correction improves the accuracy and usefulness of the document.
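For context, output field mappings live on the indexer rather than the skillset. A hedged sketch of what they look like follows; the object names and field paths are illustrative.

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-data-source",
  "targetIndexName": "my-index",
  "skillsetName": "my-skillset",
  "outputFieldMappings": [
    { "sourceFieldName": "/document/content/persons", "targetFieldName": "persons" },
    { "sourceFieldName": "/document/content/locations", "targetFieldName": "locations" }
  ]
}
```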
articles/search/cognitive-search-skill-azure-openai-embedding.md
Diff
@@ -16,7 +16,7 @@ ms.date: 09/12/2025
The **Azure OpenAI Embedding** skill connects to an embedding model deployed to your [Azure OpenAI](/azure/ai-services/openai/overview) resource or [Azure AI Foundry](/azure/ai-foundry/what-is-azure-ai-foundry) project to generate embeddings during indexing. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed.
-The [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal uses the Azure OpenAI Embedding skill to vectorize content. You can run the wizard and review the generated skillset to see how the wizard builds the skill for embedding models.
+The [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal uses the Azure OpenAI Embedding skill to vectorize content. You can run the wizard and review the generated skillset to see how the wizard builds the skill for embedding models.
> [!NOTE]
> This skill is bound to Azure OpenAI and is charged at the existing [Azure OpenAI Standard price](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing).
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称更新"
}
Explanation
This change updates the wizard name in the cognitive-search-skill-azure-openai-embedding.md file. "Import and vectorize data wizard" becomes "Import data (new) wizard", pointing readers to the current wizard. Users can now more clearly see how the Azure portal uses the Azure OpenAI Embedding skill to vectorize content and can run the wizard to review the generated skillset. The update keeps the description of loading and vectorizing data in the portal current.
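The skill that the wizard generates looks roughly like the sketch below. The resource URI, deployment, dimensions, and field paths are assumptions; check the generated skillset for the exact values.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
  "context": "/document/pages/*",
  "resourceUri": "https://<your-resource>.openai.azure.com",
  "deploymentId": "text-embedding-3-large",
  "modelName": "text-embedding-3-large",
  "dimensions": 1024,
  "inputs": [ { "name": "text", "source": "/document/pages/*" } ],
  "outputs": [ { "name": "embedding", "targetName": "text_vector" } ]
}
```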
articles/search/cognitive-search-skill-document-intelligence-layout.md
Diff
@@ -46,7 +46,7 @@ Supported regions vary by modality and how the skill connects to the Document In
| Approach | Requirement |
|----------|-------------|
-| [Import and vectorize data wizard](search-import-data-portal.md) | Create an Azure AI multi-service resource in one of these regions to get the portal experience: **East US**, **West Europe**, **North Central US**. |
+| [**Import data (new)** wizard](search-import-data-portal.md) | Create an Azure AI multi-service resource in one of these regions to get the portal experience: **East US**, **West Europe**, **North Central US**. |
| Programmatic, using [Microsoft Entra ID authentication (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) for billing | Create Azure AI Search in one of these regions: **East US**, **West Europe**, **North Central US**, **West US 2**. <br>Create the Azure AI multi-service resource in any region listed in the [Product availability by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/table) table.|
| Programmatic, using a [multi-service resource API key](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) for billing | Create your Azure AI Search service and AI multi-service resource in the same region: **East US**, **West Europe**, **North Central US**, **West US 2**. |
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称更新"
}
Explanation
This change updates the wizard name in the cognitive-search-skill-document-intelligence-layout.md file. "Import and vectorize data wizard" becomes "Import data (new) wizard", reflecting the current wizard. Users now get accurate guidance on which regions support the portal experience when creating an Azure AI multi-service resource, and the article continues to describe the connection approaches and regional requirements for the Document Intelligence layout skill.
articles/search/hybrid-search-how-to-query.md
Diff
@@ -24,7 +24,7 @@ In this article, learn how to:
## Prerequisites
-+ A search index containing `searchable` vector and nonvector fields. We recommend the [Import and vectorize data wizard](search-import-data-portal.md) to create an index quickly. Otherwise, see [Create an index](search-how-to-create-search-index.md) and [Add vector fields to a search index](vector-search-how-to-create-index.md).
++ A search index containing `searchable` vector and nonvector fields. We recommend the [**Import data (new)** wizard](search-import-data-portal.md) to create an index quickly. Otherwise, see [Create an index](search-how-to-create-search-index.md) and [Add vector fields to a search index](vector-search-how-to-create-index.md).
+ (Optional) If you want the [semantic ranker](semantic-search-overview.md), your search service must be Basic tier or higher, with [semantic ranker enabled](semantic-how-to-enable-disable.md).
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称更新"
}
Explanation
This change updates the name of the index-creation wizard in the hybrid-search-how-to-query.md file. "Import and vectorize data wizard" becomes "Import data (new) wizard", reflecting the current wizard. Readers can now clearly see the recommended way to create a search index quickly, and the prerequisites, including the conditions for enabling the semantic ranker, remain in place, improving the overall experience.
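To make the prerequisite concrete, a hybrid query against such an index combines a text `search` with `vectorQueries` in a single request body. The sketch below is illustrative: the field names are assumptions, and a real query passes the full embedding (for example, 1,536 values) instead of the truncated vector shown here.

```json
{
  "search": "historic hotel near downtown",
  "select": "HotelName, Description",
  "top": 5,
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [ 0.012, -0.034, 0.051 ],
      "fields": "DescriptionVector",
      "k": 5
    }
  ]
}
```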
articles/search/includes/quickstarts/search-get-started-portal-new-wizard.md
Diff
@@ -0,0 +1,155 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 09/16/2025
+---
+
+> [!IMPORTANT]
+> The **Import data (new)** wizard now supports keyword search, which was previously only available in the **Import data** wizard. We recommend the new wizard for an improved search experience. For more information about how we're consolidating the wizards, see [Import data wizards in the Azure portal](../../search-import-data-portal.md).
+
+In this quickstart, you use the **Import data (new)** wizard and sample data about fictitious hotels to create your first search index. The wizard requires no code to create an index, helping you write interesting queries within minutes.
+
+The wizard creates multiple objects on your search service, including a searchable [index](../../search-what-is-an-index.md), an [indexer](../../search-indexer-overview.md), and a data source connection for automated data retrieval. At the end of this quickstart, you review each object.
+
+## Prerequisites
+
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
++ An Azure AI Search service. [Create a service](../../search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
+
++ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage or Azure Data Lake Storage Gen2 (storage account with a hierarchical namespace) on a standard performance (general-purpose v2) account. To avoid bandwidth charges, use the same region as Azure AI Search.
+
+### Check for network access
+
+For this quickstart, all of the preceding resources must have public access enabled so that the Azure portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](../../search-import-data-portal.md#secure-connections).
+
+### Check for space
+
+Many customers start with a free search service, which is limited to three indexes, three indexers, and three data sources. This quickstart creates one of each, so before you begin, make sure you have room for extra objects.
+
+On the **Overview** page, select **Usage** to see how many indexes, indexers, and data sources you currently have.
+
+ :::image type="content" source="../../media/search-get-started-portal/overview-quota-usage.png" alt-text="Screenshot of the Overview page for an Azure AI Search service instance in the Azure portal, showing the number of indexes, indexers, and data sources." lightbox="../../media/search-get-started-portal/overview-quota-usage.png":::
+
+## Prepare sample data
+
+This quickstart uses a JSON document that contains metadata for 50 fictitious hotels, but you can also use your own files.
+
+To prepare the sample data for this quickstart:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure Storage account.
+
+1. From the left pane, select **Data storage** > **Containers**.
+
+1. Create a container named **hotels-sample**.
+
+1. Upload the [sample JSON document](https://github.com/Azure-Samples/azure-search-sample-data/blob/main/hotels/HotelsData_toAzureBlobs.json) to the container.
+
+## Start the wizard
+
+To start the wizard for this quickstart:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
+
+1. On the **Overview** page, select **Import data (new)**.
+
+ :::image type="content" source="../../media/search-import-data-portal/import-data-new-button.png" alt-text="Screenshot that shows how to open the new import wizard in the Azure portal.":::
+
+1. Select your data source: **Azure Blob Storage** or **Azure Data Lake Storage Gen2**.
+
+ :::image type="content" source="../../media/search-get-started-portal-images/select-data-source.png" alt-text="Screenshot of the options for selecting a data source in the wizard." border="true" lightbox="../../media/search-get-started-portal-images/select-data-source.png":::
+
+1. Select **Keyword search**.
+
+ :::image type="content" source="../../media/search-get-started-portal/keyword-search-tile.png" alt-text="Screenshot of the keyword search tile in the Azure portal." border="true" lightbox="../../media/search-get-started-portal/keyword-search-tile.png":::
+
+## Create and load a search index
+
+In this section, you create and load an index in five steps.
+
+### Connect to a data source
+
+Azure AI Search requires a connection to a data source for content ingestion and indexing. In this case, the data source is your Azure Storage account.
+
+To connect to the sample data:
+
+1. On the **Connect to your data** page, select your Azure subscription.
+
+1. Select your storage account, and then select the **hotels-sample** container.
+
+1. Select **JSON array** for the parsing mode.
+
+ :::image type="content" source="../../media/search-get-started-portal/connect-to-your-data.png" alt-text="Screenshot of the Connect to your data page in the Azure portal." lightbox="../../media/search-get-started-portal/connect-to-your-data.png":::
+
+1. Select **Next**.
+
+### Skip configuration for skills
+
+The wizard supports skillset creation and [AI enrichment](../../cognitive-search-concept-intro.md) during indexing, which are beyond the scope of this quickstart. Skip this step by selecting **Next**.
+
+> [!TIP]
+> For a similar walkthrough that focuses on AI enrichment, see [Quickstart: Create a skillset in the Azure portal](../../search-get-started-skillset.md).
+
+### Configure the index
+
+Based on the structure and content of the sample hotel data, the wizard infers a schema for your search index.
+
+To configure the index:
+
+1. For each of the following fields, select **Configure field**, and then set the respective attributes.
+
+ | Fields | Attributes |
+ |-------|------------|
+ | `HotelId` | Key, Retrievable, Filterable, Sortable, Searchable |
+ | `HotelName`, `Category` | Retrievable, Filterable, Sortable, Searchable |
+ | `Description`, `Description_fr` | Retrievable |
+ | `Tags` | Retrievable, Filterable, Searchable |
+ | `ParkingIncluded`, `IsDeleted`, `LastRenovationDate`, `Rating`, `Location` | Retrievable, Filterable, Sortable |
+ | `Address.StreetAddress`, `Rooms.Description`, `Rooms.Description_fr` | Retrievable, Searchable |
+ | `Address.City`, `Address.StateProvince`, `Address.PostalCode`, `Address.Country`, `Rooms.Type`, `Rooms.BedOptions`, `Rooms.Tags` | Retrievable, Filterable, Facetable, Searchable |
+ | `Rooms.BaseRate`, `Rooms.SleepsCount`, `Rooms.SmokingAllowed` | Retrievable, Filterable, Facetable |
+
+ :::image type="content" source="../../media/search-get-started-portal/configure-index.gif" alt-text="GIF that shows how to configure attributes for fields in the index." lightbox="../../media/search-get-started-portal/configure-index.gif":::
+
+1. Delete the `AzureSearch_DocumentKey` field.
+
+1. Select **Next**.
+
+At a minimum, the index requires a name and a collection of fields. The wizard scans for unique string fields and marks one as the document key, which uniquely identifies each document in the index.
+
+Each field has a name, data type, and attributes that control how the field is used in the index. You can enable or disable the following attributes:
+
+| Attribute | Description | Applicable data types |
+|-----------|-------------|------------------------|
+| Retrievable | Fields returned in a query response. | Strings and integers |
+| Filterable | Fields that accept a filter expression. | Integers |
+| Sortable | Fields that accept an orderby expression. | Integers |
+| Facetable | Fields used in a faceted navigation structure. | Integers |
+| Searchable | Fields used in full-text search. Strings are searchable, but numeric and Boolean fields are often marked as not searchable. | Strings |
+
+Attributes affect storage in different ways. For example, filterable fields consume extra storage, while retrievable fields don't. For more information, see [Example demonstrating the storage implications of attributes and suggesters](../../search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters).
+
+If you want autocomplete or suggested queries, specify language **Analyzers** or **Suggesters**.
+
+### Skip advanced settings
+
+The wizard offers advanced settings for semantic ranking and index scheduling, which are beyond the scope of this quickstart. Skip this step by selecting **Next**.
+
+### Review and create the objects
+
+The last step is to review your configuration and create the index, indexer, and data source on your search service. The indexer automates the process of extracting content from your data source and loading it into the index, enabling keyword search.
+
+To review and create the objects:
+
+1. Change the object name prefix to **hotels-sample**.
+
+1. Review the object configurations.
+
+ :::image type="content" source="../../media/search-get-started-portal/review-and-create.png" alt-text="Screenshot of the object configuration page in the Azure portal." lightbox="../../media/search-get-started-portal/review-and-create.png":::
+
+ AI enrichment, semantic ranker, and indexer scheduling are either disabled or set to their default values because you skipped their wizard steps.
+
+1. Select **Create** to simultaneously create the objects and run the indexer.
Summary
{
"modification_type": "new feature",
"modification_title": "新しいインポートデータウィザードに関するクイックスタート"
}
Explanation
This change adds a new file, search-get-started-portal-new-wizard.md. The quickstart shows how to use the Import data (new) wizard with sample data about fictitious hotels to create a first search index. The new wizard now supports keyword search, which was previously available only in the classic Import data wizard, and is recommended for an improved search experience.
The guide starts with the prerequisites, from creating an Azure account to preparing an Azure Storage account and the sample data, and then walks through using the wizard to create the index, indexer, and data source connection, with screenshots for each step. This addition makes Azure AI Search noticeably easier for new users to pick up.
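The field attributes chosen in the wizard correspond to attributes on the index definition it creates. The following is a rough sketch covering only a few fields; the names come from the quickstart, and the attribute values are assumed from its configuration table.

```json
{
  "name": "hotels-sample",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "searchable": true, "filterable": true, "sortable": true, "retrievable": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true, "filterable": true, "sortable": true, "retrievable": true },
    { "name": "Description", "type": "Edm.String", "searchable": false, "retrievable": true },
    { "name": "Tags", "type": "Collection(Edm.String)", "searchable": true, "filterable": true, "retrievable": true }
  ]
}
```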
articles/search/includes/quickstarts/search-get-started-portal-old-wizard.md
Diff
@@ -0,0 +1,112 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 09/16/2025
+---
+
+> [!IMPORTANT]
+> The **Import data** wizard will eventually be deprecated. Most of its functionality is available in the **Import data (new)** wizard, which we recommend for most search scenarios. For more information, see [Import data wizards in the Azure portal](../../search-import-data-portal.md).
+
+In this quickstart, you use the **Import data** wizard and a built-in sample of fictitious hotel data to create your first search index. The wizard requires no code to create an index, helping you write interesting queries within minutes.
+
+The wizard creates multiple objects on your search service, including a searchable [index](../../search-what-is-an-index.md), an [indexer](../../search-indexer-overview.md), and a data source connection for automated data retrieval. At the end of this quickstart, you review each object.
+
+## Prerequisites
+
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
++ An Azure AI Search service. [Create a service](../../search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
+
+### Check for network access
+
+For this quickstart, which uses built-in sample data, make sure your search service doesn't have [network access controls](../../service-configure-firewall.md). The Azure portal controller uses a public endpoint to retrieve data and metadata from the Microsoft-hosted data source. For more information, see [Secure connections in the import wizards](../../search-import-data-portal.md#secure-connections).
+
+### Check for space
+
+Many customers start with a free search service, which is limited to three indexes, three indexers, and three data sources. This quickstart creates one of each, so before you begin, make sure you have room for extra objects.
+
+On the **Overview** page, select **Usage** to see how many indexes, indexers, and data sources you currently have.
+
+ :::image type="content" source="../../media/search-get-started-portal/overview-quota-usage.png" alt-text="Screenshot of the Overview page for an Azure AI Search service instance in the Azure portal, showing the number of indexes, indexers, and data sources." lightbox="../../media/search-get-started-portal/overview-quota-usage.png":::
+
+## Start the wizard
+
+To start the wizard for this quickstart:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
+
+1. On the **Overview** page, select **Import data**.
+
+ :::image type="content" source="../../media/search-import-data-portal/import-data-button.png" alt-text="Screenshot that shows how to open the Import data wizard in the Azure portal.":::
+
+## Create and load a search index
+
+In this section, you create and load an index in four steps.
+
+### Connect to a data source
+
+The wizard creates a data source connection to sample data that Microsoft hosts on Azure Cosmos DB. The sample data is accessed through a public endpoint, so you don't need an Azure Cosmos DB account or source files for this step.
+
+To connect to the sample data:
+
+1. On the **Connect to your data** page, select the **Data Source** dropdown list, and then select **Samples**.
+
+1. Select **hotels-sample** from the list of built-in samples.
+
+1. Select **Next: Add cognitive skills (Optional)**.
+
+ :::image type="content" source="../../media/search-get-started-portal/import-hotels-sample.png" alt-text="Screenshot that shows how to select the hotels-sample data source in the Import data wizard." lightbox="../../media/search-get-started-portal/import-hotels-sample.png":::
+
+### Skip configuration for skills
+
+The wizard supports skillset creation and [AI enrichment](../../cognitive-search-concept-intro.md) during indexing, which are beyond the scope of this quickstart. Skip this step by selecting **Next: Customize target index**.
+
+> [!TIP]
+> For a similar walkthrough that focuses on AI enrichment, see [Quickstart: Create a skillset in the Azure portal](../../search-get-started-skillset.md).
+
+### Configure the index
+
+Based on the structure and content of the sample hotel data, the wizard infers a schema for your search index.
+
+To configure the index:
+
+1. Accept the system-generated values for the index name (**hotels-sample-index**) and key (**HotelId**).
+
+1. Accept the system-generated values for all field attributes.
+
+1. Select **Next: Create an indexer**.
+
+ :::image type="content" source="../../media/search-get-started-portal/hotels-sample-generated-index.png" alt-text="Screenshot that shows the generated index definition for the hotels-sample data source in the Import data wizard." lightbox="../../media/search-get-started-portal/hotels-sample-generated-index.png":::
+
+At a minimum, the index requires a name and a collection of fields. The wizard scans for unique string fields and marks one as the document key, which uniquely identifies each document in the index.
+
+Each field has a name, data type, and attributes that control how the field is used in the index. You can use the checkboxes to enable or disable the following attributes:
+
+| Attribute | Description | Applicable data types |
+|-----------|-------------|------------------------|
+| Retrievable | Fields returned in a query response. | Strings and integers |
+| Filterable | Fields that accept a filter expression. | Integers |
+| Sortable | Fields that accept an orderby expression. | Integers |
+| Facetable | Fields used in a faceted navigation structure. | Integers |
+| Searchable | Fields used in full-text search. Strings are searchable, but numeric and Boolean fields are often marked as not searchable. | Strings |
+
+Attributes affect storage in different ways. For example, filterable fields consume extra storage, while retrievable fields don't. For more information, see [Example demonstrating the storage implications of attributes and suggesters](../../search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters).
+
+If you want autocomplete or suggested queries, specify language **Analyzers** or **Suggesters**.
+
+### Configure and run the indexer
+
+The last step is to configure and run the indexer, which automates the process of extracting content from your data source and loading it into your index. This step also creates the data source and index objects on your search service.
+
+To configure and run the indexer:
+
+1. Accept the system-generated value for the indexer name (**hotels-sample-indexer**).
+
+1. Accept the default schedule option to run the indexer once and immediately. The sample data is static, so you can't enable change tracking.
+
+1. Select **Submit** to simultaneously create and run the indexer.
+
+ :::image type="content" source="../../media/search-get-started-portal/hotels-sample-indexer.png" alt-text="Screenshot that shows how to configure the indexer for the hotels-sample data source in the Import data wizard." lightbox="../../media/search-get-started-portal/hotels-sample-indexer.png":::
Summary
{
"modification_type": "new feature",
"modification_title": "古いインポートデータウィザードに関するクイックスタート"
}
Explanation
This change adds a new file, search-get-started-portal-old-wizard.md, which explains how to create a search index from built-in fictitious hotel data using the classic Import data wizard. The quickstart emphasizes that no code is required and that interesting queries can be written within minutes.
The guide covers creating an Azure account and an Azure AI Search service, and checking network access and capacity. It then walks through connecting to the built-in hotels-sample data source, configuring the index, and configuring and running the indexer. It also notes that this wizard will eventually be deprecated and recommends the Import data (new) wizard, giving users clear guidance while the classic wizard remains available.
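Once the indexer finishes, the hotels-sample-index can be queried from Search explorer. Here is a hedged example of a JSON query that exercises the default attributes; the field names follow the built-in hotels sample, so adjust them if your generated schema differs.

```json
{
  "search": "waterfront view",
  "filter": "Rating gt 4",
  "orderby": "Rating desc",
  "select": "HotelName, Category, Rating",
  "count": true
}
```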
articles/search/includes/quickstarts/search-get-started-skillset-new-wizard.md
Diff
@@ -0,0 +1,208 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 09/16/2025
+---
+
+> [!IMPORTANT]
+> The **Import data (new)** wizard now supports keyword search, which was previously only available in the **Import data** wizard. We recommend the new wizard for an improved search experience. For more information about how we're consolidating the wizards, see [Import data wizards in the Azure portal](../../search-import-data-portal.md).
+
+In this quickstart, you learn how a skillset in Azure AI Search adds optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition to generate text-searchable content in an index.
+
+You can run the **Import data (new)** wizard in the Azure portal to apply skills that create and transform textual content during indexing. The input is your raw data, usually blobs in Azure Storage. The output is a searchable index containing AI-generated image text, captions, and entities. You can then query generated content in the Azure portal using [**Search explorer**](../../search-explorer.md).
+
+Before you run the wizard, you create a few resources and upload sample files.
+
+## Prerequisites
+
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
++ An Azure AI Search service. [Create a service](../../search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
+
++ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage on a standard performance (general-purpose v2) account. To avoid bandwidth charges, use the same region as Azure AI Search.
+
+> [!NOTE]
+> This quickstart uses [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is small, Azure AI services is tapped behind the scenes for free processing up to 20 transactions. Therefore, you don't need to create an Azure AI services multi-service resource.
+
+## Prepare sample data
+
+In this section, you create an Azure Storage container to store sample data consisting of various file types, including images and application files that aren't full-text searchable in their native formats.
+
+To prepare the sample data for this quickstart:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure Storage account.
+
+1. From the left pane, select **Data storage** > **Containers**.
+
+1. Create a container, and then upload the [sample data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ai-enrichment-mixed-media) to the container.
+
+## Run the wizard
+
+To run the wizard:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
+
+1. On the **Overview** page, select **Import data (new)**.
+
+ :::image type="content" source="../../media/search-import-data-portal/import-data-new-button.png" alt-text="Screenshot that shows how to open the new import wizard in the Azure portal.":::
+
+1. Select **Azure Blob Storage** for the data source.
+
+ :::image type="content" source="../../media/search-get-started-skillset/choose-data-source.png" alt-text="Screenshot of the Azure Blob Storage data source option in the Azure portal." border="true" lightbox="../../media/search-get-started-skillset/choose-data-source.png":::
+
+1. Select **Keyword search**.
+
+ :::image type="content" source="../../media/search-get-started-portal/keyword-search-tile.png" alt-text="Screenshot of the keyword search tile in the Azure portal." border="true" lightbox="../../media/search-get-started-portal/keyword-search-tile.png":::
+
+### Step 1: Create a data source
+
+Azure AI Search requires a connection to a data source for content ingestion and indexing. In this case, the data source is your Azure Storage account.
+
+To create the data source:
+
+1. On the **Connect to your data** page, select your Azure subscription.
+
+1. Select your storage account, and then select the container you created.
+
+ :::image type="content" source="../../media/search-get-started-skillset/connect-to-your-data.png" alt-text="Screenshot of the Connect to your data page in the Azure portal." border="true" lightbox="../../media/search-get-started-skillset/connect-to-your-data.png":::
+
+1. Select **Next**.
+
+If you get `Error detecting index schema from data source`, the indexer that powers the wizard can't connect to your data source. The data source most likely has security protections. Try the following solutions, and then rerun the wizard.
+
+| Security feature | Solution |
+|--------------------|----------|
+| Resource requires Azure roles, or its access keys are disabled. | [Connect as a trusted service](../../search-indexer-howto-access-trusted-service-exception.md) or [connect using a managed identity](../../search-how-to-managed-identities.md). |
+| Resource is behind an IP firewall. | [Create an inbound rule for Azure AI Search and the Azure portal](../../search-indexer-howto-access-ip-restricted.md). |
+| Resource requires a private endpoint connection. | [Connect over a private endpoint](../../search-indexer-howto-access-private.md). |
+
+### Step 2: Add cognitive skills
+
+The next step is to configure AI enrichment to invoke OCR, image analysis, and entity recognition.
+
+OCR and image analysis are available for blobs in Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 and for image content in OneLake. Images can be standalone files or embedded images in a PDF or other files.
+
+To add the skills:
+
+1. Select **Extract entities**, and then select the gear icon.
+
+1. Select and save the following checkboxes:
+
+ + **Persons**
+
+ + **Locations**
+
+ + **Organizations**
+
+ :::image type="content" source="../../media/search-get-started-skillset/extract-entities.png" alt-text="Screenshot of the Extract entities options in the Azure portal." lightbox="../../media/search-get-started-skillset/extract-entities.png":::
+
+1. Select **Extract text from images**, and then select the gear icon.
+
+1. Select and save the following checkboxes:
+
+ + **Generate tags**
+
+ + **Categorize content**
+
+ :::image type="content" source="../../media/search-get-started-skillset/extract-text.png" alt-text="Screenshot of the Extract text from images options in the Azure portal." lightbox="../../media/search-get-started-skillset/extract-text.png":::
+
+1. Leave the **Use a free AI service (limited enrichments)** checkbox selected.
+
+ The sample data consists of 14 files, so the free allotment of 20 transactions on Azure AI services is sufficient.
+
+1. Select **Next**.
+
+### Step 3: Configure the index
+
+An index contains your searchable content. The wizard can usually create the schema by sampling the data source. In this step, you review the generated schema and potentially revise any settings.
+
+For this quickstart, the wizard sets reasonable defaults:
+
++ Default fields are based on metadata properties of existing blobs and new fields for the enrichment output, such as `persons`, `locations`, and `organizations`. Data types are inferred from metadata and by data sampling.
+
+ :::image type="content" source="../../media/search-get-started-skillset/index-fields-new-wizard.png" alt-text="Screenshot of the index definition page." border="true" lightbox="../../media/search-get-started-skillset/index-fields-new-wizard.png":::
+
++ Default document key is `metadata_storage_path`, which is selected because the field contains unique values.
+
++ Default field attributes are based on the skills you selected. For example, fields created by the Entity Recognition skill (`persons`, `locations`, and `organizations`) are **Retrievable**, **Filterable**, **Facetable**, and **Searchable**. To view and change these attributes, select a field, and then select **Configure field**.
+
+ **Retrievable** fields can be returned in results, while **Searchable** fields support full-text search. Use **Filterable** if you want to use fields in a filter expression.
+
+ Marking a field as **Retrievable** doesn't mean that the field *must* appear in search results. You can control which fields are returned by using the `select` query parameter.
+
+After you review the index schema, select **Next**.
+
+### Step 4: Skip advanced settings
+
+The wizard offers advanced settings for semantic ranking and index scheduling, which are beyond the scope of this quickstart. Skip this step by selecting **Next**.
+
+### Step 5: Review and create objects
+
+The last step is to review your configuration and create the index, indexer, and data source on your search service. The indexer automates the process of extracting content from your data source, loading the index, and driving skillset execution.
+
+To review and create the objects:
+
+1. Accept the default **Objects name prefix**.
+
+1. Review the object configurations.
+
+ :::image type="content" source="../../media/search-get-started-skillset/review-and-create.png" alt-text="Screenshot of the object configuration page in the Azure portal." border="true" lightbox="../../media/search-get-started-skillset/review-and-create.png":::
+
+ AI enrichment, semantic ranker, and indexer scheduling are either disabled or set to their default values because you skipped their wizard steps.
+
+1. Select **Create** to simultaneously create the objects and run the indexer.
+
+## Monitor status
+
+You can monitor the creation of the indexer in the Azure portal. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
+
+To monitor the progress of the indexer:
+
+1. From the left pane, select **Indexers**.
+
+1. Select your indexer from the list.
+
+1. Select **Success** (or **Failed**) to view execution details.
+
+ :::image type="content" source="../../media/search-get-started-skillset/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true" lightbox="../../media/search-get-started-skillset/indexer-notification.png":::
+
+ In this quickstart, there are a few warnings, including `Could not execute skill because one or more skill input was invalid.` This warning tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. It occurs because the upstream OCR skill didn't recognize any text in the image and couldn't provide a text input to the downstream Entity Recognition skill.
+
+ Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore.
+
+## Query in Search explorer
+
+To query your index:
+
+1. From the left pane, select **Indexes**.
+
+1. Select your index from the list. If the index has zero documents or storage, wait for the Azure portal to refresh.
+
+1. On the **Search explorer** tab, enter a search string, such as `satya nadella`.
+
+The search bar accepts keywords, quote-enclosed phrases, and operators. For example: `"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`
+
+Results are returned as verbose JSON, which can be hard to read, especially in large documents. Here are tips for searching in this tool:
+
+ + Switch to the JSON view to specify parameters that shape results.
+ + Add `select` to limit the fields in results.
+ + Add `count` to show the number of matches.
+ + Use Ctrl-F to search within the JSON for specific properties or terms.
+
+:::image type="content" source="../../media/search-get-started-skillset/search-explorer-new-wizard.png" alt-text="Screenshot of the Search explorer page." border="true" lightbox="../../media/search-get-started-skillset/search-explorer-new-wizard.png":::
+
+Here's some JSON you can paste into the view:
+
+```json
+{
+"search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
+"count": true,
+"select": "merged_content, persons"
+}
+```
+
+> [!TIP]
+> Query strings are case sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify the name and case.
\ No newline at end of file
Summary
{
"modification_type": "new feature",
"modification_title": "新しいスキルセットウィザードに関するクイックスタート"
}
Explanation
This change adds a new file, search-get-started-skillset-new-wizard.md, which explains how a skillset in Azure AI Search applies optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition to generate text-searchable content in an index.
The quickstart walks through using the Import data (new) wizard to build a search index over AI-generated content. Users run the wizard in the Azure portal against raw data, typically blobs in Azure Storage, and then query the generated content with Search explorer.
The guide covers creating the required resources, uploading sample data, adding the AI skills, and configuring the index and the indexer that drives skillset execution. The new wizard adds capabilities beyond the classic wizard and is intended to provide a richer search experience.
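The entity recognition portion of the generated skillset (persons, locations, organizations) looks roughly like the sketch below. The context path and target names are assumptions based on the fields named in the quickstart; the generated skillset is the authoritative source.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
  "context": "/document/merged_content",
  "categories": [ "Person", "Location", "Organization" ],
  "inputs": [ { "name": "text", "source": "/document/merged_content" } ],
  "outputs": [
    { "name": "persons", "targetName": "persons" },
    { "name": "locations", "targetName": "locations" },
    { "name": "organizations", "targetName": "organizations" }
  ]
}
```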
articles/search/includes/quickstarts/search-get-started-skillset-old-wizard.md
Diff
@@ -0,0 +1,196 @@
+---
+manager: nitinme
+author: haileytap
+ms.author: haileytapia
+ms.service: azure-ai-search
+ms.topic: include
+ms.date: 09/16/2025
+---
+
+> [!IMPORTANT]
+> The **Import data** wizard will eventually be deprecated. Most of its functionality is available in the **Import data (new)** wizard, which we recommend for most search scenarios. For more information, see [Import data wizards in the Azure portal](../../search-import-data-portal.md).
+
+In this quickstart, you learn how a skillset in Azure AI Search adds optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition to generate text-searchable content in an index.
+
+You can run the **Import data** wizard in the Azure portal to apply skills that create and transform textual content during indexing. The input is your raw data, usually blobs in Azure Storage. The output is a searchable index containing AI-generated image text, captions, and entities. You can then query generated content in the Azure portal using [**Search explorer**](../../search-explorer.md).
+
+Before you run the wizard, you create a few resources and upload sample files.
+
+## Prerequisites
+
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
++ An Azure AI Search service. [Create a service](../../search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
+
++ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage on a standard performance (general-purpose v2) account. To avoid bandwidth charges, use the same region as Azure AI Search.
+
+> [!NOTE]
+> This quickstart uses [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is small, Azure AI services is tapped behind the scenes for free processing up to 20 transactions. Therefore, you don't need to create an Azure AI services multi-service resource.
+
+## Prepare sample data
+
+In this section, you create an Azure Storage container to store sample data consisting of various file types, including images and application files that aren't full-text searchable in their native formats.
+
+To prepare the sample data for this quickstart:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure Storage account.
+
+1. From the left pane, select **Data storage** > **Containers**.
+
+1. Create a container, and then upload the [sample data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ai-enrichment-mixed-media) to the container.
+
+## Run the wizard
+
+To run the wizard:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select your search service.
+
+1. On the **Overview** page, select **Import data**.
+
+ :::image type="content" source="../../media/search-import-data-portal/import-data-button.png" alt-text="Screenshot of the Import data command." border="true" lightbox="../../media/search-import-data-portal/import-data-button.png":::
+
+### Step 1: Create a data source
+
+Azure AI Search requires a connection to a data source for content ingestion and indexing. In this case, the data source is your Azure Storage account.
+
+To create the data source:
+
+1. On the **Connect to your data** page, select the **Data Source** dropdown list, and then select **Azure Blob Storage**.
+
+1. Choose an existing connection string for your storage account, and then select the container you created.
+
+1. Enter a name for the data source.
+
+ :::image type="content" source="../../media/search-get-started-skillset/blob-datasource.png" alt-text="Screenshot of the data source definition page." border="true" lightbox="../../media/search-get-started-skillset/blob-datasource.png":::
+
+1. Select **Next: Add cognitive skills (Optional)**.
+
+If you get `Error detecting index schema from data source`, the indexer that powers the wizard can't connect to your data source. The data source most likely has security protections. Try the following solutions, and then rerun the wizard.
+
+| Security feature | Solution |
+|--------------------|----------|
+| Resource requires Azure roles, or its access keys are disabled. | [Connect as a trusted service](../../search-indexer-howto-access-trusted-service-exception.md) or [connect using a managed identity](../../search-how-to-managed-identities.md). |
+| Resource is behind an IP firewall. | [Create an inbound rule for Azure AI Search and the Azure portal](../../search-indexer-howto-access-ip-restricted.md). |
+| Resource requires a private endpoint connection. | [Connect over a private endpoint](../../search-indexer-howto-access-private.md). |
+
+### Step 2: Add cognitive skills
+
+The next step is to configure AI enrichment to invoke OCR, image analysis, and natural-language processing.
+
+OCR and image analysis are available for blobs in Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 and for image content in OneLake. Images can be standalone files or embedded images in a PDF or other files.
+
+To add the skills:
+
+1. Expand the **Attach Cognitive Services** section.
+
+1. Select **Free (Limited enrichments)** to use a free Azure AI services multi-service resource.
+
+ :::image type="content" source="../../media/search-get-started-skillset/cog-search-attach.png" alt-text="Screenshot of the Attach Azure AI services tab." border="true" lightbox="../../media/search-get-started-skillset/cog-search-attach.png":::
+
+ The sample data consists of 14 files, so the free allotment of 20 transactions on Azure AI services is sufficient.
+
+1. Expand the **Add enrichments** section.
+
+1. Select the **Enable OCR and merge all text into merged_content field** checkbox.
+
+1. Under **Text Cognitive Skills**, select the following checkboxes:
+
+ + **Extract people names**
+
+ + **Extract organization names**
+
+ + **Extract location names**
+
+1. Under **Image Cognitive Skills**, select the following checkboxes:
+
+ + **Generate tags from images**
+
+ + **Generate captions from images**
+
+ :::image type="content" source="../../media/search-get-started-skillset/skillset.png" alt-text="Screenshot of the skillset definition page." border="true" lightbox="../../media/search-get-started-skillset/skillset.png":::
+
+1. Select **Next: Customer target index**.
+
+### Step 3: Configure the index
+
+An index contains your searchable content. The wizard can usually create the schema by sampling the data source. In this step, you review the generated schema and potentially revise any settings.
+
+For this quickstart, the wizard sets reasonable defaults:
+
++ Default fields are based on metadata properties of existing blobs and new fields for the enrichment output, such as `people`, `organizations`, and `locations`. Data types are inferred from metadata and by data sampling.
+
++ Default document key is `metadata_storage_path`, which is selected because the field contains unique values.
+
++ Default attributes are **Retrievable** and **Searchable**. **Retrievable** fields can be returned in results, while **Searchable** fields support full-text search. The wizard assumes you want these fields to be retrievable and searchable because you created them via a skillset. Select **Filterable** if you want to use fields in a filter expression.
+
+ :::image type="content" source="../../media/search-get-started-skillset/index-fields-old-wizard.png" alt-text="Screenshot of the index definition page." border="true" lightbox="../../media/search-get-started-skillset/index-fields-old-wizard.png":::
+
+ Marking a field as **Retrievable** doesn't mean that the field *must* appear in search results. You can control which fields are returned by using the `select` query parameter.
+
+After you review the index schema, select **Next: Create an indexer**.
+
+### Step 4: Configure the indexer
+
+The indexer drives the indexing process and specifies the data source name, a target index, and frequency of execution. In this step, the wizard creates several objects, including an indexer that you can reset and run repeatedly.
+
+To configure the indexer:
+
+1. On the **Create an indexer** page, accept the default name.
+
+1. Select **Once** for the schedule.
+
+ :::image type="content" source="../../media/search-get-started-skillset/indexer-def.png" alt-text="Screenshot of the indexer definition page." border="true" lightbox="../../media/search-get-started-skillset/indexer-def.png":::
+
+1. Select **Submit** to simultaneously create and run the indexer.
+
+## Monitor status
+
+You can monitor the creation of the indexer in the Azure portal. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
+
+To monitor the progress of the indexer:
+
+1. From the left pane, select **Indexers**.
+
+1. Select your indexer from the list.
+
+1. Select **Success** (or **Failed**) to view execution details.
+
+ :::image type="content" source="../../media/search-get-started-skillset/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true" lightbox="../../media/search-get-started-skillset/indexer-notification.png":::
+
+ In this quickstart, there are a few warnings, including `Could not execute skill because one or more skill input was invalid.` This warning tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. It occurs because the upstream OCR skill didn't recognize any text in the image and couldn't provide a text input to the downstream Entity Recognition skill.
+
+ Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore.
+
+## Query in Search explorer
+
+To query your index:
+
+1. From the left pane, select **Indexes**.
+
+1. Select your index from the list. If the index has zero documents or storage, wait for the Azure portal to refresh.
+
+1. On the **Search explorer** tab, enter a search string, such as `satya nadella`.
+
+The search bar accepts keywords, quote-enclosed phrases, and operators. For example: `"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`
+
+Results are returned as verbose JSON, which can be hard to read, especially in large documents. Here are tips for searching in this tool:
+
+ + Switch to the JSON view to specify parameters that shape results.
+ + Add `select` to limit the fields in results.
+ + Add `count` to show the number of matches.
+ + Use Ctrl-F to search within the JSON for specific properties or terms.
+
+:::image type="content" source="../../media/search-get-started-skillset/search-explorer-old-wizard.png" alt-text="Screenshot of the Search explorer page." border="true" lightbox="../../media/search-get-started-skillset/search-explorer-old-wizard.png":::
+
+Here's some JSON you can paste into the view:
+
+```json
+{
+"search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
+"count": true,
+"select": "content, people"
+}
+```
+
+> [!TIP]
+> Query strings are case sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify the name and case.
\ No newline at end of file
Summary
{
"modification_type": "new feature",
"modification_title": "古いスキルセットウィザードに関するクイックスタート"
}
Explanation
This change adds a new file, search-get-started-skillset-old-wizard.md, which explains how a skillset in Azure AI Search applies optical character recognition (OCR), image analysis, language detection, text merging, and entity recognition. The quickstart shows how to run the classic Import data wizard in the Azure portal to index data and make the generated content searchable.
The guide details the full process: creating the required resources, preparing sample data, adding cognitive skills, and configuring the indexer. It includes practical information on connecting to the data source and setting up indexing so that users can process data efficiently and build a search index.
It also stresses that most of the classic wizard's functionality is now available in the new wizard and recommends moving to it, giving users clear guidance while still documenting the classic flow.
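The `merged_content` field referenced in this quickstart comes from pairing OCR with a text merge skill. Below is a hedged sketch of those two skills as the wizard might generate them; the paths follow the commonly documented OCR/merge pattern, but verify against your generated skillset.

```json
[
  {
    "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
    "context": "/document/normalized_images/*",
    "inputs": [ { "name": "image", "source": "/document/normalized_images/*" } ],
    "outputs": [ { "name": "text", "targetName": "text" } ]
  },
  {
    "@odata.type": "#Microsoft.Skills.Text.MergeSkill",
    "context": "/document",
    "inputs": [
      { "name": "text", "source": "/document/content" },
      { "name": "itemsToInsert", "source": "/document/normalized_images/*/text" },
      { "name": "offsets", "source": "/document/normalized_images/*/contentOffset" }
    ],
    "outputs": [ { "name": "mergedText", "targetName": "merged_content" } ]
  }
]
```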
articles/search/includes/quickstarts/search-get-started-vector-dotnet.md
Diff
@@ -12,7 +12,7 @@ In this quickstart, you work with a .NET app to create, populate, and query vect
In Azure AI Search, a [vector store](../../vector-store.md) has an index schema that defines vector and nonvector fields, a vector search configuration for algorithms that create the embedding space, and settings on vector field definitions that are evaluated at query time. The [Create Index](/rest/api/searchservice/indexes/create-or-update) REST API creates the vector store.
> [!NOTE]
-> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import and vectorize data wizard**](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import data (new)** wizard](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "ベクトルストレージを扱う.NETクイックスタートの更新"
}
Explanation
この変更は、search-get-started-vector-dotnet.md
ファイルの一部を更新し、Azure AI Searchにおけるベクトルストレージを使用する際のクイックスタートに関する情報を改善するものです。具体的には、クイックスタートガイド内のノートセクションで、データのチャンク化とベクトル化のためのウィザードの名前を更新しています。
元の文では「Import and vectorize data wizard」と記述されていたのが、新しい表現である「Import data (new) wizard」に変更されています。この変更により、現在利用可能な新しいウィザードを正確に反映し、ユーザーが最新の機能を利用できるようにしています。全体として、更新はごくわずかですが、ドキュメントの正確性と使いやすさを向上させる重要な改良です。
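The note above also describes the vector store that the Create Index REST API builds: vector and nonvector fields plus a vector search configuration that defines the embedding space. As a rough illustration (shown with the Python `azure-search-documents` SDK rather than the .NET client this quickstart covers), a minimal index definition might look like the sketch below; the index name, field names, dimensions, and HNSW configuration names are placeholders.

```python
# Minimal sketch of a vector index: one key field, one searchable text field,
# one vector field, and an HNSW vector search configuration.
# Placeholders: service endpoint, admin key, index/field/profile names, dimensions.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration,
    SearchField,
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
    VectorSearch,
    VectorSearchProfile,
)

index = SearchIndex(
    name="demo-vector-index",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="content", type=SearchFieldDataType.String),
        SearchField(
            name="contentVector",
            type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
            searchable=True,
            vector_search_dimensions=1536,  # must match the embedding model's output size
            vector_search_profile_name="hnsw-profile",
        ),
    ],
    vector_search=VectorSearch(
        algorithms=[HnswAlgorithmConfiguration(name="hnsw-config")],
        profiles=[VectorSearchProfile(name="hnsw-profile", algorithm_configuration_name="hnsw-config")],
    ),
)

client = SearchIndexClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"),
)
client.create_index(index)
```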
articles/search/includes/quickstarts/search-get-started-vector-java.md
Diff
@@ -13,7 +13,7 @@ In this quickstart, you use Java to create, load, and query vectors. The code ex
In Azure AI Search, a [vector store](../../vector-store.md) has an index schema that defines vector and nonvector fields, a vector search configuration for algorithms that create the embedding space, and settings on vector field definitions that are evaluated at query time. The [Create Index](/rest/api/searchservice/indexes/create-or-update) REST API creates the vector store.
> [!NOTE]
-> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import and vectorize data wizard**](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import data (new)** wizard](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "Javaでのベクトルストレージに関するクイックスタートの更新"
}
Explanation
この変更は、search-get-started-vector-java.md
ファイルの更新であり、Javaを使用してベクトルを作成、読み込み、クエリするクイックスタートガイドの一部を修正しています。特に、ノートセクションの内容が更新されており、ここではデータのチャンク化とベクトル化のためのウィザードの名称が変更されています。
元の表現では「Import and vectorize data wizard」と記載されていた部分が、「Import data (new) wizard」に修正され、現在利用可能な新しいウィザードを反映しています。この修正により、ユーザーは最新の機能について正確な情報を得ることができ、クイックスタートをより効果的に活用できるようになります。全体として、文書の更新は小規模ではありますが、内容の正確性と利用者の理解を向上させる重要な改善です。
articles/search/includes/quickstarts/search-get-started-vector-javascript.md
Diff
@@ -12,7 +12,7 @@ In this quickstart, you use JavaScript to create, load, and query vectors. The c
In Azure AI Search, a [vector store](../../vector-store.md) has an index schema that defines vector and nonvector fields, a vector search configuration for algorithms that create the embedding space, and settings on vector field definitions that are evaluated at query time. The [Create Index](/rest/api/searchservice/indexes/create-or-update) REST API creates the vector store.
> [!NOTE]
-> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import and vectorize data wizard**](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import data (new)** wizard](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "JavaScriptによるベクトルストレージに関するクイックスタートの更新"
}
Explanation
この変更は、search-get-started-vector-javascript.md
ファイルの一部を更新し、JavaScriptを使用してベクトルを作成、読み込み、クエリするクイックスタートガイドに関する内容の修正を行っています。特に、ノートセクションが更新されており、データのチャンク化とベクトル化を行うウィザードの名称が変更されています。
元の記述では「Import and vectorize data wizard」とされていた部分が、「Import data (new) wizard」に修正され、新しいウィザードが正確に反映されています。この更新により、ユーザーは最新の情報に基づいてより良い体験を得ることができるようになり、クイックスタートの内容がより実用的になります。全体として、この変更は文書の正確性を高める重要なマイナーアップデートです。
articles/search/includes/quickstarts/search-get-started-vector-python.md
Diff
@@ -12,7 +12,7 @@ In this quickstart, you use a Jupyter notebook to create, load, and query vector
In Azure AI Search, a [vector store](../../vector-store.md) has an index schema that defines vector and nonvector fields, a vector search configuration for algorithms that create the embedding space, and settings on vector field definitions that are evaluated at query time. The [Create Index](/rest/api/searchservice/indexes/create-or-update) REST API creates the vector store.
> [!NOTE]
-> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import and vectorize data wizard**](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import data (new)** wizard](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "Pythonによるベクトルストレージに関するクイックスタートの更新"
}
Explanation
この変更は、search-get-started-vector-python.md
というファイルに対するもので、Pythonを使用してベクトルを作成、読み込み、クエリするためのクイックスタートガイドの一部が更新されています。特に、ノートセクションにおいて、データのチャンク化とベクトル化を行うウィザードの名称が修正されました。
元の表現では「Import and vectorize data wizard」と記載されていましたが、これが「Import data (new) wizard」へと変更され、新しいウィザードが正確に示されています。この修正により、ユーザーは最新のウィザードの名称に基づいてより良い情報を得ることができるようになり、クイックスタートがより実用的なものとなります。この変更は文書の質を向上させる小規模ではありますが重要な改善です。
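Because this quickstart "omits the vectorization step and provides inline embeddings", queries supply a precomputed embedding directly. Here's a minimal sketch with the Python SDK; the endpoint, key, index name, and vector field name are placeholders, and the tiny vector shown is only a stand-in for a real model-generated embedding.

```python
# Minimal sketch of a vector query with a precomputed ("inline") embedding.
# Placeholders: service endpoint, query key, index name, vector field name, embedding values.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="<your-vector-index>",
    credential=AzureKeyCredential("<query-key>"),
)

vector_query = VectorizedQuery(
    vector=[0.01, -0.02, 0.03],   # stand-in; use an embedding from the same model as indexing
    k_nearest_neighbors=3,
    fields="contentVector",       # assumed vector field name
)

# Pure vector query (no text); hybrid search would also pass search_text.
results = client.search(search_text=None, vector_queries=[vector_query])
for doc in results:
    print(doc["@search.score"])
```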
articles/search/includes/quickstarts/search-get-started-vector-rest.md
Diff
@@ -13,7 +13,7 @@ In this quickstart, you use the [Azure AI Search REST APIs](/rest/api/searchserv
In Azure AI Search, a [vector store](../../vector-store.md) has an index schema that defines vector and nonvector fields, a vector search configuration for algorithms that create the embedding space, and settings on vector field definitions that are evaluated at query time. The [Create Index](/rest/api/searchservice/indexes/create-or-update) REST API creates the vector store.
> [!NOTE]
-> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import and vectorize data wizard**](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import data (new)** wizard](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "REST APIを使用したベクトルストレージに関するクイックスタートの更新"
}
Explanation
この変更は、search-get-started-vector-rest.md
というファイルに対するもので、Azure AI Search REST APIを使用してベクトルを作成、読み込み、クエリするためのクイックスタートガイドの一部が更新されています。特に、ノートセクションにおいて、データのチャンク化とベクトル化を行うウィザードの名称が修正されています。
元の記述では「Import and vectorize data wizard」とされていた部分が、「Import data (new) wizard」に変更され、新しいウィザードの名称がより正確に反映されています。この更新により、ユーザーは最新の情報を基にしたより良い体験を得ることができ、クイックスタートの内容がより実用的になります。この変更は文書の品質を向上させる大切なマイナーアップデートです。
articles/search/includes/quickstarts/search-get-started-vector-typescript.md
Diff
@@ -12,7 +12,7 @@ In this quickstart, you use TypeScript to create, load, and query vectors. The c
In Azure AI Search, a [vector store](../../vector-store.md) has an index schema that defines vector and nonvector fields, a vector search configuration for algorithms that create the embedding space, and settings on vector field definitions that are evaluated at query time. The [Create Index](/rest/api/searchservice/indexes/create-or-update) REST API creates the vector store.
> [!NOTE]
-> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import and vectorize data wizard**](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> This quickstart omits the vectorization step and provides inline embeddings. If you want to add [built-in data chunking and vectorization](../../vector-search-integrated-vectorization.md) over your own content, try the [**Import data (new)** wizard](../../search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "TypeScriptによるベクトルストレージに関するクイックスタートの更新"
}
Explanation
この変更は、search-get-started-vector-typescript.md
というファイルに対するもので、TypeScriptを使用してベクトルを作成、読み込み、クエリするためのクイックスタートガイドが更新されています。具体的には、ノートセクションにおけるウィザードの名称が修正されました。
元の記述では「Import and vectorize data wizard」と記載されていましたが、これが「Import data (new) wizard」に変更され、新しいウィザードの名称が正確に反映されています。この更新により、ユーザーは最新のウィザードに関する正しい情報を得ることができ、クイックスタートがより実用的なものとなります。この変更は文書の精度とユーザー体験の向上を目的とした小規模なアップデートです。
articles/search/knowledge-store-create-portal.md
Diff
@@ -47,7 +47,7 @@ First, you set up sample data in Azure Storage. Next, you run the **Import data*
1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) and on the Overview page, select **Import data** on the command bar to create a knowledge store in four steps.
- :::image type="content" source="media/search-import-data-portal/import-data-cmd.png" alt-text="Screenshot of the Import data command" border="true":::
+ :::image type="content" source="media/search-import-data-portal/import-data-button.png" alt-text="Screenshot of the Import data command" border="true":::
### Step 1: Create a data source
Summary
{
"modification_type": "minor update",
"modification_title": "ポータルでのナレッジストア作成に関する画像の更新"
}
Explanation
この変更は、knowledge-store-create-portal.md
というファイルに対するもので、Azureポータルにおけるナレッジストア作成の手順に関連する画像が更新されています。具体的には、インポートデータコマンドに関連するスクリーンショットが新しい画像に置き換えられました。
以前は「import-data-cmd.png」という名前の画像が使用されていましたが、これが「import-data-button.png」に変更されました。この更新により、ユーザーは最新の画面を元にした正確な情報を参照できるようになり、より良いガイドラインが提供されます。この変更は、文書のビジュアルの一貫性を保ち、ユーザーが手順をより容易に理解できるようにするためのマイナーアップデートです。
articles/search/media/search-get-started-portal-import-vectors/command-bar.png
Summary
{
"modification_type": "minor update",
"modification_title": "コマンドバーの画像の削除"
}
Explanation
この変更は、command-bar.png
という画像ファイルが削除されたものです。この画像は、search-get-started-portal-import-vectors
に関連するドキュメント内で使用されていましたが、更新により不要と判断され、削除されました。
この削除は、ドキュメントの内容やビジュアルの一貫性を向上させるためのものと考えられます。削除された画像が提供していた情報が他の更新された画像やテキストでカバーされた可能性が高く、ユーザーには最新の情報を届けることが意図されています。この変更は、コンテンツの整理および明確化を目的としたマイナーアップデートです。
articles/search/media/search-get-started-portal/configure-index.gif
Summary
{
"modification_type": "new feature",
"modification_title": "インデックスを設定するためのGIF画像の追加"
}
Explanation
この変更は、configure-index.gif
という新しいGIF画像ファイルが追加されたことに関するものです。この画像は、Azureポータルにおけるインデックス設定の手順を視覚的に説明するために使用されると考えられます。
新しい画像の追加により、ユーザーはインデックス設定を行う際の具体的な操作方法を視覚的に確認できるようになり、理解を深めることが可能になります。この変更は、ユーザーが手順をよりスムーズに進められるようにするための新機能であり、ドキュメントの全体的な質を向上させることを目的としています。
articles/search/media/search-get-started-portal/connect-to-your-data.png
Summary
{
"modification_type": "new feature",
"modification_title": "データ接続に関する画像の追加"
}
Explanation
この変更は、connect-to-your-data.png
という新しい画像ファイルが追加されたことを示しています。この画像は、ユーザーが自分のデータに接続する際のプロセスや手順を視覚的に示すために使用される予定です。
新しい画像を追加することで、ユーザーはデータ接続の手順をより理解しやすくなり、操作の成功率を高めることができます。この追加は、ドキュメントの内容を補完し、ユーザーに対してより良い体験を提供するための新機能であり、全体的なドキュメントの質を向上させることを目指しています。
articles/search/media/search-get-started-portal/keyword-search-tile.png
Summary
{
"modification_type": "new feature",
"modification_title": "キーワード検索タイルの画像の追加"
}
Explanation
この変更は、keyword-search-tile.png
という新しい画像ファイルが追加されたことを示しています。この画像は、ユーザーがキーワード検索の機能を視覚的に理解するための資料として使用されることを意図しています。
追加された画像により、ユーザーはキーワード検索のインターフェースや操作方法を直感的に把握できるようになり、検索機能の利用がよりスムーズになることが期待されます。この新機能は、ドキュメントの情報を補完し、ユーザーエクスペリエンスを向上させる一助となります。
articles/search/media/search-get-started-portal/review-and-create.png
Summary
{
"modification_type": "new feature",
"modification_title": "レビューと作成の画像の追加"
}
Explanation
この変更は、review-and-create.png
という新しい画像ファイルが追加されたことを示しています。この画像は、ユーザーがレビューおよび作成プロセスを理解するための視覚的なサポートを提供することを目的としています。
新しく追加された画像により、ユーザーはレビューおよび作成の手順をより明確に把握できるようになります。これにより、ドキュメントはより使いやすくなり、ユーザーが新しい機能を効果的に活用できるようになることが期待されます。この新機能は、全体的なユーザーエクスペリエンスの向上に寄与します。
articles/search/media/search-get-started-portal/skip-cognitive-skills.png
Summary
{
"modification_type": "breaking change",
"modification_title": "Cognitive Skillsをスキップする画像の削除"
}
Explanation
この変更は、skip-cognitive-skills.png
という画像ファイルが削除されたことを示しています。この画像は、Cognitive Skillsをスキップするための手順に関連していたと考えられます。
画像が削除されたことにより、関連する機能や手順の明確性が損なわれる可能性があります。この変更は、ユーザーがCognitive Skillsをスキップする方法についての理解を深めるための情報が不足することを示唆しており、必要に応じて新しい画像や説明を提供することが望ましいです。この変更は、ドキュメント内での情報の整合性に影響を及ぼす可能性があるため、注意が必要です。
articles/search/media/search-get-started-skillset/choose-data-source.png
Summary
{
"modification_type": "new feature",
"modification_title": "データソース選択の画像の追加"
}
Explanation
この変更は、choose-data-source.png
という新しい画像ファイルが追加されたことを示しています。この画像は、ユーザーがデータソースを選択するプロセスをサポートするために使用されます。
新しく追加された画像により、ユーザーはデータソースの選択方法についての理解を深めることができるでしょう。この視覚的なサポートは、ドキュメントの内容をより明確にし、ユーザーが実際の手順を遂行する際の参考になります。これにより、全体的なユーザーエクスペリエンスが向上し、新しい機能をより効果的に活用できるようになることが期待されます。
articles/search/media/search-get-started-skillset/connect-to-your-data.png
Summary
{
"modification_type": "new feature",
"modification_title": "データに接続するための画像の追加"
}
Explanation
この変更は、connect-to-your-data.png
という新しい画像ファイルが追加されたことを示しています。この画像は、ユーザーが自身のデータに接続する手順を視覚的にサポートすることを目的としています。
新しい画像の追加により、ドキュメント内でデータ接続の手順がより明確になり、ユーザーが具体的なアクションを理解しやすくなります。この視覚的な要素は、手順を実行する際のガイダンスとして機能し、全体的なユーザーエクスペリエンスを向上させることが期待されます。これによって、ユーザーはデータ接続の過程での混乱を軽減し、スムーズに作業を行えるようになるでしょう。
articles/search/media/search-get-started-skillset/extract-entities.png
Summary
{
"modification_type": "new feature",
"modification_title": "エンティティ抽出のための画像の追加"
}
Explanation
この変更は、extract-entities.png
という新しい画像ファイルが追加されたことを示しています。この画像は、エンティティ抽出のプロセスを視覚的に説明するために使用されます。
追加された画像によって、ユーザーはエンティティ抽出の手順をより容易に理解できるようになります。この視覚的なガイドは、ドキュメントの内容を強化し、ユーザーがエンティティ抽出を実行する方法を明示化します。これにより、ユーザーは具体的な手順を踏む際の参考として役立てることができ、エンティティ抽出機能の効果的な利用が促進されると期待されます。全体として、ユーザーエクスペリエンスの向上に寄与する重要な変更です。
articles/search/media/search-get-started-skillset/extract-text.png
Summary
{
"modification_type": "new feature",
"modification_title": "テキスト抽出のための画像の追加"
}
Explanation
この変更は、extract-text.png
という新しい画像ファイルが追加されたことを示しています。この画像は、テキスト抽出プロセスを説明するための視覚的な資料として提供されます。
新たに追加された画像により、ユーザーはテキスト抽出の手順をより理解しやすくなります。この視覚的要素は、ドキュメント内で情報をよりクリアにし、ユーザーがテキスト抽出機能の使用方法を簡単に把握できるようにするためのものです。具体的な手順や方法を示すことによって、ユーザーはテキスト抽出を自信を持って行えるようになり、全体的な体験が向上することが期待されます。
articles/search/media/search-get-started-skillset/index-fields-new-wizard.png
Summary
{
"modification_type": "new feature",
"modification_title": "インデックスフィールド新ウィザードのための画像の追加"
}
Explanation
この変更は、index-fields-new-wizard.png
という新しい画像ファイルが追加されたことを示しています。この画像は、インデックスフィールドを設定する新しいウィザードの使用方法を視覚的に説明するために作成されました。
追加された画像は、ユーザーがインデックスフィールドを簡単に設定できるようにするためのガイドとして機能します。ウォークスルーや手順を視覚的に示すことで、ユーザーが新しいウィザードの操作を理解しやすくなり、手続きがスムーズに進むことを目的としています。この改善により、ユーザー体験が向上し、複雑な設定プロセスが簡素化されることが期待されます。
articles/search/media/search-get-started-skillset/index-fields-old-wizard.png
Summary
{
"modification_type": "minor update",
"modification_title": "インデックスフィールドの旧ウィザード画像の名称変更"
}
Explanation
この変更は、index-fields.png
というファイル名がindex-fields-old-wizard.png
に変更されたことを示しています。このリネーミングは、ユーザーが古いインデックスフィールドウィザードに関する画像を識別しやすくするために行われました。
新しいファイル名は、旧ウィザードを示すものであり、ユーザーが最新の情報を参照できるようにするため、古いバージョンのウィザードのコンテキストを明確にしています。この変更によって、混乱を避け、関連するリソースをより効果的に利用できるようになることが期待されます。ファイル名の変更は、情報を整理し、ユーザーが必要な素材を迅速に見つけられる助けとなります。
articles/search/media/search-get-started-skillset/indexer-notification.png
Summary
{
"modification_type": "minor update",
"modification_title": "インデクサ通知画像の修正"
}
Explanation
この変更は、indexer-notification.png
という画像ファイルが修正されたことを示しています。具体的には、画像自体に加えた具体的な変更内容は記載されていませんが、ファイルは何らかの理由で更新されています。
更新された画像は、インデクサの通知の視覚的な理解を助けるために使用され、ユーザーにとっての明瞭さや情報の質が向上することを目的としています。このようなマイナーな更新は、ドキュメントの正確性及びユーザー体験の改善につながり、最新の情報を提供するために重要です。画像の修正により、利用者がインデクサの動作や通知メッセージを正しく理解できるようになることが期待されます。
articles/search/media/search-get-started-skillset/review-and-create.png
Summary
{
"modification_type": "new feature",
"modification_title": "レビューと作成の画像追加"
}
Explanation
この変更は、review-and-create.png
という新しい画像ファイルが追加されたことを示しています。この画像は、検索スキルセットのレビューおよび作成プロセスに関する視覚情報を提供する目的で使用されています。
新しい画像の追加は、ユーザーがプロセスを理解しやすくするための重要な手助けとなります。これによって、ユーザーはレビュー段階と作成段階での重要なポイントや手順を視覚的に把握できるようになります。視覚的なコンテンツは、特に技術的な内容やプロセスを説明する際に、理解を深める助けとなり、全体的なユーザーエクスペリエンスを向上させることが期待されます。
articles/search/media/search-get-started-skillset/search-explorer-new-wizard.png
Summary
{
"modification_type": "new feature",
"modification_title": "検索エクスプローラーの新ウィザード画像追加"
}
Explanation
この変更は、search-explorer-new-wizard.png
という新しい画像ファイルが追加されたことを示しています。この画像は、検索エクスプローラーの新しいウィザード機能に関する視覚的なガイドを提供することを目的としています。
追加された画像は、ユーザーが新しいウィザードを使って検索エクスプローラーの機能を容易に理解できるようにするための鍵となる情報を視覚的に示す役割を果たします。この新機能は、ユーザーが検索スキルの構築を効率的に行う手助けをし、全体的なエクスペリエンスを改善することを期待されます。視覚的なコンテンツにより、操作手順が明確になるため、初めてのユーザーでも安心して利用できる環境が整います。
articles/search/media/search-get-started-skillset/search-explorer-old-wizard.png
Summary
{
"modification_type": "minor update",
"modification_title": "検索エクスプローラーのウィザード画像のリネーム"
}
Explanation
この変更は、既存の画像ファイルsearch-explorer.png
がsearch-explorer-old-wizard.png
に名前変更されたことを示しています。このリネームは、古いウィザードに関連する画像が新しいものと区別されるようにするために行われました。
リネームによって、より新しい機能やガイドと古い機能との間で明確な区別ができ、ユーザーがどの画像が現在使われているのか、またどの画像が過去のものであるのかを簡単に理解できるようになります。この変更は、ドキュメントやリソースの整合性を保ち、ユーザーの混乱を減らすために重要です。全体的に見て、このリネームは、開発チームが新旧の機能を明確に区別できるようにし、ユーザーエクスペリエンスを向上させるためのものです。
articles/search/media/search-how-to-create-indexers/portal-indexer-client-2.png
Summary
{
"modification_type": "minor update",
"modification_title": "インデクサー作成方法に関する画像の修正"
}
Explanation
この変更は、画像ファイルportal-indexer-client-2.png
が修正されたことを示しています。この画像は、インデクサーの作成方法に関するガイドの一部として使用されており、ユーザーがAzureポータルを使ってインデクサーを設定する手順を視覚的にサポートしています。
修正の内容について具体的な詳細は提供されていませんが、一般的には、画像の質を改善したり、情報の更新に合わせて視認性を向上させるために行われることが考えられます。このような変更により、ユーザーはより正確で信頼性の高い情報を基にインデクサーを作成できるようになり、ドキュメント全体の品質が向上することが期待されます。全体として、この修正は、ユーザーエクスペリエンスを改善し、Azureの機能を最大限に活用できるようにするために重要です。
articles/search/media/search-how-to-create-indexers/portal-indexer-client.png
Summary
{
"modification_type": "minor update",
"modification_title": "インデクサー作成方法に関する別の画像の修正"
}
Explanation
この変更は、画像ファイルportal-indexer-client.png
が修正されたことを示しています。この画像は、Azureポータルを使用してインデクサーを作成する方法に関連しており、ユーザーに対して具体的な視覚情報を提供する役割を果たしています。
具体的な修正内容は詳細には示されていませんが、一般的には、情報の更新やビジュアルの改善を目的としていると考えられます。画像が修正されることで、ユーザーはインデクサー作成の手順をより理解しやすくなり、正確な情報をもとに設定を行うことが可能になります。このような更新は、文書全体のクリアさと質を向上させるために重要であり、ユーザーエクスペリエンスを向上させることに繋がります。全体的に、この修正は、Azureの機能を効果的に使いこなすための一助となることでしょう。
articles/search/media/search-import-data-portal/import-data-button.png
Summary
{
"modification_type": "new feature",
"modification_title": "データインポートボタン画像の追加"
}
Explanation
この変更は、新しい画像ファイルimport-data-button.png
が追加されたことを示しています。この画像は、Azureポータルにおけるデータインポート機能のボタンを視覚的に表現しており、ユーザーがデータをインポートする際の操作を簡単に理解できるように設計されています。
具体的には、ユーザーはこの画像を参照することで、データインポートボタンの位置や外観を把握しやすくなり、実際の操作に対する理解を深めることができます。この新しい画像によって、Azureのインポート機能に関するドキュメントの情報がより視覚的で直感的になり、ユーザーエクスペリエンスが向上することが期待されます。全体として、この変更は、リソースの使用を促進し、操作のしやすさを高めるための重要な要素となります。
articles/search/media/search-import-data-portal/import-data-cmd.png
Summary
{
"modification_type": "minor update",
"modification_title": "データインポートコマンド画像の削除"
}
Explanation
この変更は、画像ファイルimport-data-cmd.png
が削除されたことを示しています。この画像は、Azureポータルにおけるデータインポート機能に関連するコマンドを示していたと思われますが、削除されたことにより、関連する文書やコンテンツから視覚的な情報が一部失われたことになります。
削除の理由としては、該当の画像が古くなった、情報が誤っている、またはより適切な代替手段が用意された可能性があります。その結果、ユーザーが誤解することなく、最新の情報に基づいて操作を行うのを助けるために行われたと考えられます。この更新は、ドキュメントの正確さを維持し、ユーザーに対してクリアな情報を提供するためには重要です。全体として、この変更は、ユーザーエクスペリエンスの向上を目指すための一環です。
articles/search/media/search-import-data-portal/import-data-new-button.png
Summary
{
"modification_type": "new feature",
"modification_title": "新規データインポートボタン画像の追加"
}
Explanation
この変更は、新しい画像ファイルimport-data-new-button.png
が追加されたことを示しています。この画像は、Azureポータルにおけるデータインポート機能の新しいボタンを視覚的に表現しており、ユーザーがデータをインポートする際の操作をより直感的に理解できるように設計されています。
新しいボタン画像の追加により、ドキュメントは最新のインターフェイスを反映し、ユーザーに対して具体的なビジュアルガイドを提供します。これにより、ユーザーは操作手順をより容易に追うことができ、データインポートのプロセスがスムーズになることが期待されます。この更新は、全体的なユーザーエクスペリエンスの向上に寄与し、より効果的なサポート資料を提供するための重要なステップとなります。
articles/search/media/search-import-data-portal/import-wizards.png
Summary
{
"modification_type": "new feature",
"modification_title": "新規インポートウィザード画像の追加"
}
Explanation
この変更は、新しい画像ファイルimport-wizards.png
が追加されたことを示しています。この画像は、Azureポータルでのデータインポート用のウィザードを視覚的に表現しており、ユーザーがインポート手順をより理解しやすくするための重要なビジュアルリソースとなります。
新しいウィザード画像の追加により、ユーザーは複雑なデータインポートのプロセスを視覚的に示されることで、その流れや手順を把握しやすくなることが期待されます。これにより、手順に対する不安感が軽減され、データインポートがスムーズに行えるようになるでしょう。この更新は、Azureポータルのユーザーエクスペリエンスを向上させるための重要なステップであり、利用ガイドの質を高める役割を果たします。
articles/search/media/search-what-is-an-index/add-index.png
Summary
{
"modification_type": "minor update",
"modification_title": "インデックス追加画像の修正"
}
Explanation
この変更は、画像ファイルadd-index.png
が修正されたことを示しています。この画像は、Azureのインデックス追加機能に関連するものであり、ユーザーがその機能を利用する際の手順や操作を視覚的に示しています。
修正内容は具体的には記載されていませんが、画像の更新により、インデックス追加に関する情報がより正確またはわかりやすくなったことが期待されます。この更新は、ユーザーが適切に機能を理解し、使う際の助けとなることを目的としています。全体として、ドキュメントのクオリティを向上させ、より良いユーザーエクスペリエンスを提供するための措置です。
articles/search/media/vector-search-how-to-configure-vectorizer/connect-to-data.png
Summary
{
"modification_type": "minor update",
"modification_title": "データ接続画像の修正"
}
Explanation
この変更は、画像ファイルconnect-to-data.png
が修正されたことを示しています。この画像は、ベクター検索の設定プロセスにおけるデータ接続の手順を視覚的に示すもので、ユーザーがシステムとデータソースをうまく接続する方法を理解するための重要な情報となります。
具体的な変更内容は記載されていませんが、改善された画像により、ユーザーが接続手順をより直感的に理解できるようになったことが期待されます。この更新は、ユーザーエクスペリエンスを向上させ、誤解を避けるための支援を提供することを目的としています。全体として、ドキュメントの品質が向上し、Azureにおけるベクター検索の設定がよりスムーズに行えるようになることを目指しています。
articles/search/media/vector-search-how-to-configure-vectorizer/vectorize-enrich-data.png
Summary
{
"modification_type": "minor update",
"modification_title": "データベクタイズと強化の画像修正"
}
Explanation
この変更は、画像ファイルvectorize-enrich-data.png
が修正されたことを示しています。この画像は、データのベクタイズおよび強化のプロセスを視覚的に表現しており、ユーザーがこの機能を理解しやすくするための重要な要素です。
具体的な変更内容は示されていませんが、更新された画像によって、ユーザーにとっての有用性や理解のしやすさが向上することが期待されます。この修正は、Azureにおけるベクター検索の設定や利用をよりスムーズにするために、情報の正確性やクリアさを高めることを目的としています。全体として、ドキュメントの維持管理や品質向上に寄与する要素といえるでしょう。
articles/search/multimodal-search-overview.md
Diff
@@ -37,7 +37,7 @@ Multimodal search is ideal for [retrieval-augmented generation (RAG)](retrieval-
## How multimodal search works in Azure AI Search
-To simplify the creation of a multimodal pipeline, Azure AI Search offers the **Import and vectorize data** wizard in the Azure portal. The wizard helps you configure a data source, define extraction and enrichment settings, and generate a multimodal index that contains text, embedded image references, and vector embeddings. For more information, see [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md).
+To simplify the creation of a multimodal pipeline, Azure AI Search offers the **Import data (new)** wizard in the Azure portal. The wizard helps you configure a data source, define extraction and enrichment settings, and generate a multimodal index that contains text, embedded image references, and vector embeddings. For more information, see [Quickstart: Multimodal search in the Azure portal](search-get-started-portal-image-search.md).
The wizard follows these steps to create a multimodal pipeline:
Summary
{
"modification_type": "minor update",
"modification_title": "マルチモーダル検索ウィザードの名称変更"
}
Explanation
この変更は、multimodal-search-overview.md
ファイル内のテキストを修正したもので、マルチモーダル検索のウィザードの名称が変更されています。具体的には、“Import and vectorize data”ウィザードが”Import data (new)“ウィザードに改名されました。この修正により、ウィザードの機能と目的をより明確に伝えることができるようになっています。
この変更は、ユーザーにとっての利便性を向上させ、Azureポータル内でのマルチモーダルパイプラインの作成プロセスを明確にすることを目的としています。全体として、このような小さな更新は、文書の正確性とユーザビリティの向上に寄与する重要な要素です。
articles/search/search-api-preview.md
Diff
@@ -32,7 +32,7 @@ Preview features are removed from this list if they're retired or transition to
| [**Agentic retrieval**](search-agentic-retrieval-concept.md) | Query | Create a conversational search experience powered by large language models (LLMs) and your proprietary content. Agentic retrieval breaks down complex user queries into subqueries, runs the subqueries simultaneously, and either extracts grounding data or synthesizes an answer based on documents indexed in Azure AI Search. To get started, see [Quickstart: Agentic retrieval](search-get-started-agentic-retrieval.md).<p>The pipeline involves one or more [knowledge sources](search-knowledge-source-overview.md) and an associated [knowledge agent](search-agentic-retrieval-how-to-create.md), whose [response payload](search-agentic-retrieval-how-to-retrieve.md) provides full transparency into the query plan and reference data. Knowledge sources currently support [search indexes](search-knowledge-source-how-to-index.md) and [Azure blobs](search-knowledge-source-how-to-blob.md). | [Knowledge Sources (preview)](/rest/api/searchservice/knowledge-sources?view=rest-searchservice-2025-08-01-preview&preserve-view=true), [Knowledge Agents (preview)](/rest/api/searchservice/knowledge-agents?view=rest-searchservice-2025-08-01-preview&preserve-view=true), and [Knowledge Retrieval (preview)](/rest/api/searchservice/knowledge-retrieval?view=rest-searchservice-2025-08-01-preview&preserve-view=true). |
| [**Multivector support**](vector-search-multi-vector-fields.md) | Indexing | Index multiple child vectors within a single document field. You can now use vector types in nested fields of complex collections, effectively allowing multiple vectors to be associated with a single document.| [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2025-08-01-preview&preserve-view=true). |
| [**Scoring profiles with semantic ranking**](semantic-how-to-enable-scoring-profiles.md) | Relevance | Semantic ranker adds a new field, `@search.rerankerBoostedScore`, to help you maintain consistent relevance and greater control over final ranking outcomes in your search pipeline. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2025-08-01-preview&preserve-view=true). |
-| [**Logic Apps integration in the portal wizard**](search-how-to-index-logic-apps-indexers.md) | Indexing | Create an automated indexing pipeline that retrieves content using a logic app workflow. Use the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal to build an indexing pipeline based on Logic Apps. | Image and vectorize data wizard in the Azure portal. |
+| [**Logic Apps integration in the portal wizard**](search-how-to-index-logic-apps-indexers.md) | Indexing | Create an automated indexing pipeline that retrieves content using a logic app workflow. Use the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal to build an indexing pipeline based on Logic Apps. | Image and vectorize data wizard in the Azure portal. |
| [**Document-level access control**](search-document-level-access-overview.md) | Security | Flow document-level permissions from blobs in Azure Data Lake Storage (ADLS) Gen2 to searchable documents in an index. Queries can now filter results based on user identity for selected data sources. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2025-08-01-preview&preserve-view=true). |
| [**GenAI Prompt skill**](cognitive-search-skill-genai-prompt.md) | Skills | A new skill that connects to a large language model (LLM) for information, using a prompt you provide. With this skill, you can populate a searchable field using content from an LLM. A primary use case for this skill is *image verbalization*, using an LLM to describe images and send the description to a searchable field in your index. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2025-08-01-preview&preserve-view=true). |
| [**Document Layout skill**](cognitive-search-skill-document-intelligence-layout.md)| Skills | New parameters are available for this skill if you use the 2025-05-01-preview API version or later. The new parameters support image offset metadata that improves the image search experience. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2025-08-01-preview&preserve-view=true). |
Summary
{
"modification_type": "minor update",
"modification_title": "ロジックアプリ統合ウィザードの名称変更"
}
Explanation
この変更は、search-api-preview.md
ファイル内のテキストの修正を示しており、Azureポータルにおけるロジックアプリの統合ウィザードの名称が”Import and vectorize data”から”Import data (new)“に変更されました。この変更は、ユーザーがこれらの機能をより理解しやすくすることを目的としています。
具体的には、ロジックアプリを使用してコンテンツを取得するための自動インデキシングパイプラインの作成プロセスを簡素化することに焦点を当てています。この小さな更新により、ユーザーが新しいウィザードの機能を正確に把握し、適切に利用できるようになることが期待されます。全体として、文書における正確性と明瞭さが向上し、利用者にとっての利便性が高まっています。
articles/search/search-blob-storage-integration.md
Diff
@@ -45,7 +45,7 @@ You can start directly in your Storage Account portal page.
1. In the left navigation page under **Data management**, select **Azure AI Search** to select or create a search service.
-1. Follow the steps in the wizard to extract and optionally create searchable content from your blobs. The workflow is the [**Import data** wizard](search-get-started-skillset.md). The workflow creates an indexer, data source, index, and option skillset on your Azure AI Search service.
+1. Use an [import wizard](search-get-started-skillset.md) to extract and optionally create searchable content from your blobs. The workflow creates an indexer, data source, index, and option skillset on your Azure AI Search service.
:::image type="content" source="media/search-blob-storage-integration/blob-blade.png" alt-text="Screenshot of the Azure AI Search wizard in the Azure Storage portal page." border="true":::
@@ -73,7 +73,7 @@ Textual content of a document is extracted into a string field named "content".
An *indexer* is a data-source-aware subservice in Azure AI Search, equipped with internal logic for sampling data, reading and retrieving data and metadata, and serializing data from native formats into JSON documents for subsequent import.
-Blobs in Azure Storage are indexed using the [blob indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Azure AI Search** command in Azure Storage, the [**Import data** wizards](search-import-data-portal.md), a REST API, or the .NET SDK. In code, you use this indexer by setting the type, and by providing connection information that includes an Azure Storage account along with a blob container. You can subset your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
+Blobs in Azure Storage are indexed using the [blob indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Azure AI Search** command in Azure Storage, an [import wizard](search-import-data-portal.md) in the Azure portal, a REST API, or the .NET SDK. In code, you use this indexer by setting the type, and by providing connection information that includes an Azure Storage account along with a blob container. You can subset your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
An indexer ["cracks a document"](search-indexer-overview.md#document-cracking), opening a blob to inspect content. After connecting to the data source, it's the first step in the pipeline. For blob data, this is where PDF, Office docs, and other content types are detected. Document cracking with text extraction is no charge. If your blobs contain image content, images are ignored unless you [add AI enrichment](cognitive-search-concept-intro.md). Standard indexing applies only to text content.
Summary
{
"modification_type": "minor update",
"modification_title": "データインポートウィザードの表現の変更"
}
Explanation
この変更は、search-blob-storage-integration.md
ファイルの内容を修正したもので、Azure Storageにおけるデータインポートの手順に関連する表現を更新しています。改訂された部分では、“Import data”ウィザードの表現が変更され、より簡潔に「インポートウィザードを使用して」と述べられています。
具体的には、ストレージアカウントポータルページで検索サービスを選択または作成した後、ウィザードを通じてBlobから検索可能なコンテンツを抽出し、オプションで作成する流れが示されています。この修正は、ユーザーが手順を理解しやすくすることを目指しており、提示される情報の明確さを向上させています。
全体的には、文書の表現をより直接的にし、ユーザーがAzure AI検索サービスを通じてのワークフローをよりスムーズに理解できるようにする効果が期待されます。
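The updated passage describes invoking the blob indexer from code by setting the data source type and supplying an Azure Storage account and blob container. A minimal sketch of that flow with the Python `azure-search-documents` SDK is shown below; the names, keys, and connection string are placeholders, and the target index is assumed to already exist.

```python
# Minimal sketch: create a blob data source and an indexer from code instead of an import wizard.
# Placeholders: service endpoint, admin key, storage connection string, container, and index names.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexer,
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

client = SearchIndexerClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"),
)

# Data source: type "azureblob" plus connection information for the storage account and container.
# A virtual directory could be passed via SearchIndexerDataContainer's query parameter to subset blobs.
data_source = SearchIndexerDataSourceConnection(
    name="blob-datasource",
    type="azureblob",
    connection_string="<storage-connection-string>",
    container=SearchIndexerDataContainer(name="<blob-container>"),
)
client.create_data_source_connection(data_source)

# Indexer: connects the data source to an existing target index.
indexer = SearchIndexer(
    name="blob-indexer",
    data_source_name="blob-datasource",
    target_index_name="<existing-index>",
)
client.create_indexer(indexer)  # a newly created indexer typically runs immediately
```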
articles/search/search-features-list.md
Diff
@@ -80,7 +80,7 @@ The following table summarizes features by category. There's feature parity in a
| Category | Features |
|-------------------|----------|
-| Tools for prototyping and inspection | [**Add index**](search-what-is-an-index.md) is an index designer in the Azure portal that you can use to create a basic schema consisting of attributed fields and a few other settings. After saving the index, you can populate it using an SDK or the REST API to provide the data. </br></br>[**Import data wizard**](search-import-data-portal.md) creates indexes, indexers, skillsets, and data source definitions. If your data exists in Azure, this wizard can save you significant time and effort, especially for proof-of-concept investigation and exploration. </br></br>[**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) creates a full indexing pipeline that includes data chunking and vectorization. The wizard creates all of the objects and configuration settings. </br></br>[**Search explorer**](search-explorer.md) is used to test queries and refine scoring profiles.</br></br>[**Create demo app**](search-create-app-portal.md) is used to generate an HTML page that can be used to test the search experience. </br></br>[**Debug Sessions**](cognitive-search-debug-session.md) is a visual editor that lets you debug a skillset interactively. It shows you dependencies, output, and transformations. |
+| Tools for prototyping and inspection | [**Add index**](search-what-is-an-index.md) is an index designer in the Azure portal that you can use to create a basic schema consisting of attributed fields and a few other settings. After saving the index, you can populate it using an SDK or the REST API to provide the data. </br></br>[**Import data** wizard](search-import-data-portal.md) creates indexes, indexers, skillsets, and data source definitions. If your data exists in Azure, this wizard can save you significant time and effort, especially for proof-of-concept investigation and exploration. </br></br>[**Import data (new)** wizard](search-get-started-portal-import-vectors.md) creates a full indexing pipeline that includes data chunking and vectorization. The wizard creates all of the objects and configuration settings. </br></br>[**Search explorer**](search-explorer.md) is used to test queries and refine scoring profiles.</br></br>[**Create demo app**](search-create-app-portal.md) is used to generate an HTML page that can be used to test the search experience. </br></br>[**Debug Sessions**](cognitive-search-debug-session.md) is a visual editor that lets you debug a skillset interactively. It shows you dependencies, output, and transformations. |
| Monitoring and diagnostics | [**Enable monitoring features**](monitor-azure-cognitive-search.md) to go beyond the metrics-at-a-glance that are always visible in the Azure portal. Metrics on queries per second, latency, and throttling are captured and reported in portal pages with no extra configuration required.|
## Programmability
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの名称変更"
}
Explanation
この変更は、search-features-list.md
ファイルの内容を修正したもので、Azureの機能リストにおける「インポートウィザード」の名称が「Import data wizard」から「Import data (new) wizard」に変更されました。この更新は、ユーザーに新しいバージョンのウィザードを明確に識別できるようにすることを目的としています。
具体的には、トピックはプロトタイピングと検査のためのツールに関するものであり、新しいウィザードはデータのチャンク処理とベクトル化を含む完全なインデキシングパイプラインを作成します。これにより、ユーザーはインデックス、インデクサー、スキルセット、データソース定義などを迅速に構成できるようになります。
この変更は、Azure AI検索サービスを利用するユーザーに対して、より正確でアップデートされた情報を提供し、機能を適切に利用できるよう手助けすることを目的としています。結果として、全体的な文書の明確さと利用者の体験が向上することが期待されます。
articles/search/search-file-storage-integration.md
Diff
@@ -26,8 +26,8 @@ To configure and run the indexer, you can use:
+ [Search Service preview REST APIs](/rest/api/searchservice), any preview version.
+ An Azure SDK package, any version.
-+ [Import data wizard](search-get-started-portal.md) in the Azure portal.
-+ [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
++ [**Import data** wizard](search-get-started-portal.md) in the Azure portal.
++ [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの名称変更"
}
Explanation
この変更は、search-file-storage-integration.md
ファイル内のAzureの機能に関する表現を修正したもので、特にインポートウィザードの名称が更新されています。具体的には、従来の「Import data wizard」と「Import and vectorize data wizard」という表現が、それぞれ「Import data wizard」と「Import data (new) wizard」に変更されました。
この修正の目的は、ユーザーが最新のウィザードのバージョンを正確に把握し、識別できるようにすることです。ウィザードは、データのインポートやベクトル化を効率的に行う手段として提供されており、Azureポータル内で簡単に使用することができます。
この変更により、文書の内容がより現代的になり、ユーザーにとっての理解が容易になることが期待されます。また、ユーザーは新しい機能セットを利用する際に、最新の情報にアクセスできるため、全体的なユーザー体験が向上します。
articles/search/search-get-started-portal-image-search.md
Diff
@@ -13,7 +13,7 @@ ms.custom:
# Quickstart: Search for multimodal content in the Azure portal
-In this quickstart, you use the **Import and vectorize data** wizard in the Azure portal to get started with [multimodal search](multimodal-search-overview.md). The wizard simplifies the process of extracting, chunking, vectorizing, and loading both text and images into a searchable index.
+In this quickstart, you use the **Import data (new)** wizard in the Azure portal to get started with [multimodal search](multimodal-search-overview.md). The wizard simplifies the process of extracting, chunking, vectorizing, and loading both text and images into a searchable index.
Unlike [Quickstart: Vector search in the Azure portal](search-get-started-portal-import-vectors.md), which processes simple text-containing images, this quickstart supports advanced image processing for multimodal RAG scenarios.
@@ -179,9 +179,9 @@ To start the wizard for multimodal search:
1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure AI Search service.
-1. On the **Overview** page, select **Import and vectorize data**.
+1. On the **Overview** page, select **Import data (new)**.
- :::image type="content" source="media/search-get-started-portal-import-vectors/command-bar.png" alt-text="Screenshot of the command to open the wizard for importing and vectorizing data.":::
+ :::image type="content" source="media/search-import-data-portal/import-data-new-button.png" alt-text="Screenshot of the command to open the wizard for importing and vectorizing data.":::
1. Select your data source: **Azure Blob Storage** or **Azure Data Lake Storage Gen2**.
@@ -460,7 +460,7 @@ This quickstart uses billable Azure resources. If you no longer need the resourc
## Next steps
-This quickstart introduced you to the **Import and vectorize data** wizard, which creates all of the necessary objects for multimodal search. To explore each step in detail, see the following tutorials:
+This quickstart introduced you to the **Import data (new)** wizard, which creates all of the necessary objects for multimodal search. To explore each step in detail, see the following tutorials:
+ [Tutorial: Verbalize images using generative AI](tutorial-document-extraction-image-verbalization.md)
+ [Tutorial: Verbalize images from a structured document layout](tutorial-document-layout-image-verbalization.md)
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの名称変更"
}
Explanation
この変更は、search-get-started-portal-image-search.md
ファイル内の文言を修正したもので、特に「インポートとベクトル化データ」ウィザードの名称が「Import data (new) wizard」に更新されました。この変更は、ユーザーが新しいウィザードを認識しやすくするためのものです。
具体的な修正点としては、クイックスタートガイドの冒頭、ウィザードを開始する手順、そして次のステップに向けた説明中において、ウィザードの名称が一貫して変更されています。この新しいウィザードは、テキストと画像の両方を検索可能なインデックスに抽出、チャンク化、ベクトル化、ロードするプロセスを簡素化します。また、このクイックスタートは、シンプルなテキストを含む画像処理である「Vector search in the Azure portal」とは異なり、マルチモーダル検索シナリオ向けの高度な画像処理をサポートしています。
この変更により、ユーザーに最新の情報と手順が提供され、Azureポータル内でのマルチモーダル検索の使用がより明確で効果的になります。全体的なユーザー体験の向上が期待されています。
articles/search/search-get-started-portal-import-vectors.md
Diff
@@ -14,7 +14,7 @@ ms.date: 09/12/2025
# Quickstart: Vectorize text in the Azure portal
-In this quickstart, you use the **Import and vectorize data** wizard in the Azure portal to get started with [integrated vectorization](vector-search-integrated-vectorization.md). The wizard chunks your content and calls an embedding model to vectorize the chunks at indexing and query time.
+In this quickstart, you use the **Import data (new)** wizard in the Azure portal to get started with [integrated vectorization](vector-search-integrated-vectorization.md). The wizard chunks your content and calls an embedding model to vectorize the chunks at indexing and query time.
This quickstart uses text-based PDFs from the [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/sustainable-ai-pdf) repo. However, you can use images and still complete this quickstart.
@@ -32,11 +32,11 @@ This quickstart uses text-based PDFs from the [azure-search-sample-data](https:/
### Supported data sources
-The **Import and vectorize data wizard** [supports a wide range of Azure data sources](search-import-data-portal.md#supported-data-sources-and-scenarios). However, this quickstart only covers the data sources that work with whole files, which are described in the following table.
+The wizard [supports a wide range of Azure data sources](search-import-data-portal.md#supported-data-sources-and-scenarios). However, this quickstart only covers the data sources that work with whole files, which are described in the following table.
| Supported data source | Description |
|--|--|
-| [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md) | This data source works with blobs and tables. You must use a standard performance (general-purpose v2) account. Access tiers can be hot, cool, or cold. |
+| [Azure Blob Storage](/azure/storage/common/storage-account-create) | This data source works with blobs and tables. You must use a standard performance (general-purpose v2) account. Access tiers can be hot, cool, or cold. |
| [Azure Data Lake Storage (ADLS) Gen2](/azure/storage/blobs/create-data-lake-storage-account) | This is an Azure Storage account with a hierarchical namespace enabled. To confirm that you have Data Lake Storage, check the **Properties** tab on the **Overview** page.<br><br> :::image type="content" source="media/search-get-started-portal-import-vectors/data-lake-storage.png" alt-text="Screenshot of an Azure Data Lake Storage account in the Azure portal." border="true" lightbox="media/search-get-started-portal-import-vectors/data-lake-storage.png"::: |
| [OneLake](search-how-to-index-onelake-files.md) | This data source is currently in preview. For information about limitations and supported shortcuts, see [OneLake indexing](search-how-to-index-onelake-files.md). |
@@ -60,7 +60,7 @@ For integrated vectorization, you must use one of the following embedding models
### Public endpoint requirements
-For the purposes of this quickstart, all of the preceding resources must have public access enabled so that the Azure portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
+For this quickstart, all of the preceding resources must have public access enabled so that the Azure portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
If private endpoints are already present and you can't disable them, the alternative option is to run the respective end-to-end flow from a script or program on a virtual machine. The virtual machine must be on the same virtual network as the private endpoint. Here's a [Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. The same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) has samples in other programming languages.
@@ -252,9 +252,9 @@ To start the wizard for vector search:
1. Sign in to the [Azure portal](https://portal.azure.com/) and select your Azure AI Search service.
-1. On the **Overview** page, select **Import and vectorize data**.
+1. On the **Overview** page, select **Import data (new)**.
- :::image type="content" source="media/search-get-started-portal-import-vectors/command-bar.png" alt-text="Screenshot of the command to open the wizard for importing and vectorizing data.":::
+ :::image type="content" source="media/search-import-data-portal/import-data-new-button.png" alt-text="Screenshot of the command to open the wizard for importing and vectorizing data.":::
1. Select your data source: **Azure Blob Storage**, **ADLS Gen2**, or **OneLake**.
@@ -576,4 +576,4 @@ This quickstart uses billable Azure resources. If you no longer need the resourc
## Next step
-This quickstart introduced you to the **Import and vectorize data wizard**, which creates all of the necessary objects for integrated vectorization. To explore each step in detail, see [Set up integrated vectorization in Azure AI Search](search-how-to-integrated-vectorization.md).
+This quickstart introduced you to the **Import data (new)** wizard, which creates all of the necessary objects for integrated vectorization. To explore each step in detail, see [Set up integrated vectorization in Azure AI Search](search-how-to-integrated-vectorization.md).
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの名称変更"
}
Explanation
この変更は、search-get-started-portal-import-vectors.md
ファイル内の文言を修正したもので、特に「インポートとベクトル化データ」ウィザードの名称が「Import data (new) wizard」に更新されています。この修正は、ユーザーが新しいウィザードを識別しやすくするために行われました。
変更点としては、クイックスタートガイドの冒頭でのウィザードの正式名称の修正に加え、データソースを記載した表の内容やウィザードを開始する手順などでも同様の名称変更が行われています。また、ウィザードは、コンテンツをチャンク化し、インデックス作成時およびクエリ時に埋め込みモデルを呼び出してベクトル化を行う機能を提供します。
加えて、サポートされるデータソースの説明や、公開アクセスに関する要件についての言い回しも一部修正されていますが、基本的なプロセスや機能に大きな変更はありません。この変更は、ドキュメントの内容を一貫性のあるものにし、ユーザーに最新の情報を提供することを目的としています。全体的なユーザー体験の向上が期待されています。
articles/search/search-get-started-portal.md
Diff
@@ -1,167 +1,57 @@
---
-title: "Quickstart: Keyword Search in the Azure Portal"
+title: "Quickstart: Keyword Search in the Azure portal"
titleSuffix: Azure AI Search
-description: Learn how to create, load, and query your first search index using the Import Data wizard in the Azure portal. This quickstart uses a fictitious hotel dataset for sample data.
+description: Learn how to create, load, and query your first search index using an import wizard in the Azure portal. This quickstart uses a fictitious hotel dataset for sample data.
manager: nitinme
author: haileytap
ms.author: haileytapia
ms.service: azure-ai-search
ms.topic: quickstart
-ms.date: 03/04/2025
+ms.date: 09/16/2025
ms.custom:
- mode-ui
- ignite-2023
- ignite-2024
+zone_pivot_groups: azure-portal-wizards
---
# Quickstart: Create a search index in the Azure portal
-In this quickstart, you create your first Azure AI Search index using the [**Import data** wizard](search-import-data-portal.md) and a built-in sample of fictitious hotel data hosted by Microsoft. The wizard requires no code to create an index, helping you write interesting queries within minutes.
+::: zone pivot="import-data-new"
+[!INCLUDE [Import data (new) instructions](includes/quickstarts/search-get-started-portal-new-wizard.md)]
+::: zone-end
-The wizard creates multiple objects on your search service, including a [searchable index](search-what-is-an-index.md), an [indexer](search-indexer-overview.md), and a data source connection for automated data retrieval. At the end of this quickstart, we review each object.
-
-> [!NOTE]
-> The **Import data** wizard includes options for OCR, text translation, and other AI enrichments that aren't covered in this quickstart. For a similar walkthrough that focuses on applied AI, see [Quickstart: Create a skillset in the Azure portal](search-get-started-skillset.md).
-
-## Prerequisites
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-
-+ An Azure AI Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your current subscription. You can use a free service for this quickstart.
-
-+ Familiarity with the wizard. See [Import data wizards in the Azure portal](search-import-data-portal.md).
-
-### Check for network access
-
-For this quickstart, which uses built-in sample data, make sure your search service doesn't have [network access controls](service-configure-firewall.md). The Azure portal controller uses a public endpoint to retrieve data and metadata from the Microsoft-hosted data source. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
-
-### Check for space
-
-Many customers start with a free search service, which is limited to three indexes, three indexers, and three data sources. This quickstart creates one of each, so before you begin, make sure you have room for extra objects.
-
-On the **Overview** tab, select **Usage** to see how many indexes, indexers, and data sources you currently have.
-
- :::image type="content" source="media/search-get-started-portal/overview-quota-usage.png" alt-text="Screenshot of the Overview page for an Azure AI Search service instance in the Azure portal, showing the number of indexes, indexers, and data sources." lightbox="media/search-get-started-portal/overview-quota-usage.png":::
-
-## Start the wizard
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Go to your search service.
-
-1. On the **Overview** tab, select **Import data** to start the wizard.
-
- :::image type="content" source="media/search-import-data-portal/import-data-cmd.png" alt-text="Screenshot that shows how to open the Import data wizard in the Azure portal.":::
-
-## Create and load a search index
-
-In this section, you create and load an index in four steps:
-
-1. [Connect to a data source](#connect-to-a-data-source)
-1. [Skip configuration for cognitive skills](#skip-configuration-for-cognitive-skills)
-1. [Configure the index](#configure-the-index)
-1. [Configure and run the indexer](#configure-and-run-the-indexer)
-
-### Connect to a data source
-
-The wizard creates a data source connection to sample data that Microsoft hosts on Azure Cosmos DB. The sample data is accessed through a public endpoint, so you don't need an Azure Cosmos DB account or source files for this step.
-
-To connect to the sample data:
-
-1. On **Connect to your data**, expand the **Data Source** dropdown list and select **Samples**.
-
-1. Select **hotels-sample** from the list of built-in samples.
-
-1. Select **Next: Add cognitive skills (Optional)** to continue.
-
- :::image type="content" source="media/search-get-started-portal/import-hotels-sample.png" alt-text="Screenshot that shows how to select the hotels-sample data source in the Import data wizard.":::
-
-### Skip configuration for cognitive skills
-
-Although the wizard supports skillset creation and [AI enrichment](cognitive-search-concept-intro.md) during indexing, cognitive skills are beyond the scope of this quickstart.
-
-To skip this step in the wizard:
-
-1. On **Add cognitive skills**, ignore the AI enrichment configuration options.
-
-1. Select **Next: Customize target index** to continue.
-
- :::image type="content" source="media/search-get-started-portal/skip-cognitive-skills.png" alt-text="Screenshot that shows how to Skip to the Customize target index tab in the Import data wizard.":::
-
-> [!TIP]
-> To get started with AI enrichment, see [Quickstart: Create a skillset in the Azure portal](search-get-started-skillset.md).
-
-### Configure the index
-
-The wizard infers a schema for the hotels-sample index. To configure the index:
-
-1. Accept the system-generated values for the **Index name** (_hotels-sample-index_) and **Key** (_HotelId_).
-
-1. Accept the system-generated values for all field attributes.
-
-1. Select **Next: Create an indexer** to continue.
-
- :::image type="content" source="media/search-get-started-portal/hotels-sample-generated-index.png" alt-text="Screenshot that shows the generated index definition for the hotels-sample data source in the Import data wizard.":::
-
-At a minimum, the search index requires a name and a collection of fields. The wizard scans for unique string fields and marks one as the document key, which uniquely identifies each document in the index.
-
-Each field has a name, a data type, and attributes that control how the field is used in the index. Use the checkboxes to enable or disable the following attributes:
-
-| Attribute | Description | Applicable data types |
-|-----------|-------------|------------------------|
-| Retrievable | Fields returned in a query response. | Strings and integers |
-| Filterable | Fields that accept a filter expression. | Integers |
-| Sortable | Fields that accept an orderby expression. | Integers |
-| Facetable | Fields used in a faceted navigation structure. | Integers |
-| Searchable | Fields used in full text search. Strings are searchable, but numeric and Boolean fields are often marked as not searchable. | Strings |
-
-Attributes affect storage in different ways. For example, filterable fields consume extra storage, while retrievable fields don't. For more information, see [Example demonstrating the storage implications of attributes and suggesters](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters).
-
-If you want autocomplete or suggested queries, specify language **Analyzers** or **Suggesters**.
-
-### Configure and run the indexer
-
-Finally, you configure and run the indexer, which defines an executable process. The data source and index are also created in this step.
-
-To configure and run the indexer:
-
-1. Accept the system-generated value for the **Indexer name** (_hotels-sample-indexer_).
-
-1. For this quickstart, use the default option to run the indexer immediately and only once. The sample data is static, so you can't enable change tracking.
-
-1. Select **Submit** to simultaneously create and run the indexer.
-
- :::image type="content" source="media/search-get-started-portal/hotels-sample-indexer.png" alt-text="Screenshot that shows how to configure the indexer for the hotels-sample data source in the Import data wizard.":::
+::: zone pivot="import-data"
+[!INCLUDE [Import data instructions](includes/quickstarts/search-get-started-portal-old-wizard.md)]
+::: zone-end
## Monitor indexer progress
-You can monitor the creation of the indexer and index in the Azure portal. The **Overview** tab provides links to the resources created in your search service.
+You can monitor the creation of the indexer and index in the Azure portal. The **Overview** page provides links to the objects created on your search service.
To monitor the progress of the indexer:
-1. Go to your search service in the [Azure portal](https://portal.azure.com/).
-
1. From the left pane, select **Indexers**.
- :::image type="content" source="media/search-get-started-portal/indexers-status.png" alt-text="Screenshot that shows the creation of the indexer in progress in the Azure portal.":::
+1. Find **hotels-sample-indexer** in the list.
+
+ :::image type="content" source="media/search-get-started-portal/indexers-status.png" alt-text="Screenshot that shows the creation of the indexer in progress in the Azure portal." lightbox="media/search-get-started-portal/indexers-status.png":::
It can take a few minutes for the results to update. You should see the newly created indexer with a status of **In progress** or **Success**. The list also shows the number of documents indexed.
## Check search index results
-1. Go to your search service in the [Azure portal](https://portal.azure.com/).
-
1. From the left pane, select **Indexes**.
1. Select **hotels-sample-index**. If the index has zero documents or storage, wait for the Azure portal to refresh.
- :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the Azure AI Search service dashboard in the Azure portal.":::
+ :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the Azure AI Search service dashboard in the Azure portal." lightbox="media/search-get-started-portal/indexes-list.png":::
1. Select the **Fields** tab to view the index schema.
1. Check which fields are **Filterable** or **Sortable** so that you know what queries to write.
- :::image type="content" source="media/search-get-started-portal/index-schema-definition.png" alt-text="Screenshot that shows the schema definition for an index in the Azure AI Search service in the Azure portal.":::
+ :::image type="content" source="media/search-get-started-portal/index-schema-definition.png" alt-text="Screenshot that shows the schema definition for an index in the Azure AI Search service in the Azure portal." lightbox="media/search-get-started-portal/index-schema-definition.png":::
## Add or change fields
@@ -173,28 +63,30 @@ Review the index definition options to understand what you can and can't edit du
## Query with Search explorer
-You now have a search index that can be queried using [**Search explorer**](search-explorer.md), which sends REST calls that conform to the [Search POST REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true). This tool supports [simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and [full Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
+You now have a search index that can be queried using [**Search explorer**](search-explorer.md), which sends REST calls that conform to [Documents - Search Post (REST API)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true). This tool supports [simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and [full Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
To query your search index:
1. On the **Search explorer** tab, enter text to search on.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-string.png" alt-text="Screenshot that shows how to enter and run a query in the Search Explorer tool.":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-string.png" alt-text="Screenshot that shows how to enter and run a query in the Search Explorer tool." lightbox="media/search-get-started-portal/search-explorer-query-string.png":::
1. To jump to nonvisible areas of the output, use the mini map.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-results.png" alt-text="Screenshot that shows long results for a query in the Search Explorer tool and the mini-map.":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-results.png" alt-text="Screenshot that shows long results for a query in the Search Explorer tool and the mini-map." lightbox="media/search-get-started-portal/search-explorer-query-results.png":::
1. To specify syntax, switch to the JSON view.
- :::image type="content" source="media/search-get-started-portal/search-explorer-change-view.png" alt-text="Screenshot of the JSON view selector.":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-change-view.png" alt-text="Screenshot of the JSON view selector." lightbox="media/search-get-started-portal/search-explorer-change-view.png":::
## Example queries for hotels-sample index
-The following examples assume the JSON view and the 2024-05-01-preview REST API version.
+The following examples assume the JSON view and 2024-05-01-preview REST API version.
> [!TIP]
-> The JSON view supports intellisense for parameter name completion. Place your cursor inside the JSON view and type a space character to see a list of all query parameters. You can also type a letter, like "s," to see only the query parameters that begin with that letter. Intellisense doesn't exclude invalid parameters, so use your best judgment.
+> The JSON view supports intellisense for parameter name completion. Place your cursor inside the JSON view and enter a space character to see a list of all query parameters. You can also enter a letter, like `s`, to see only the query parameters that begin with that letter.
+>
+> Intellisense doesn't exclude invalid parameters, so use your best judgment.
### Filter examples
@@ -249,7 +141,7 @@ The default syntax is [simple syntax](query-simple-syntax.md), but if you want f
Misspelled query terms, like `seatle` instead of `Seattle`, don't return matches in a typical search. The `queryType=full` parameter invokes the full Lucene query parser, which supports the tilde (`~`) operand. When you use these parameters, the query performs a fuzzy search for the specified keyword and matches on terms that are similar but not an exact match.
-Take a minute to try these example queries on your index. To learn more about queries, see [Querying in Azure AI Search](search-query-overview.md).
+Take a minute to try these example queries on your index. For more information, see [Querying in Azure AI Search](search-query-overview.md).
## Clean up resources
Summary
{
"modification_type": "minor update",
"modification_title": "クイックスタートガイドの更新"
}
Explanation
This change is a substantial update to the search-get-started-portal.md file: the quickstart content is simplified and refocused on the latest import wizards. Steps for the new wizard are emphasized, and parts of the older procedure are removed or replaced.
Among the changes, the "Import data wizard" naming is updated to "Import data (new)", and the recommended steps are now pulled in through [!INCLUDE] syntax that points to wizard-specific include files. The overall flow is streamlined, keeping the essential information while dropping unnecessary detail.
In addition, the date is refreshed, headings are adjusted, and several steps are revised or reorganized, which makes the workflow based on the new wizard clearer. The aim is to help users understand the new features and procedures and to improve the overall experience.
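For readers trying the example queries called out in this diff, here's a minimal request body for the Search explorer JSON view that exercises the full Lucene parser and the tilde fuzzy operator. It's a sketch, not text from the updated article; the selected fields are assumptions based on the hotels-sample index.
```json
{
  "search": "seatle~",
  "queryType": "full",
  "select": "HotelName, Address/City",
  "count": true
}
```
With queryType set to full, the misspelled term can still match documents containing Seattle, which is the behavior the fuzzy search example describes.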
articles/search/search-get-started-skillset.md
Diff
@@ -1,7 +1,7 @@
---
-title: "Quickstart: Create a Skillset in the Azure Portal"
+title: "Quickstart: Create a Skillset in the Azure portal"
titleSuffix: Azure AI Search
-description: Learn how to use the Import Data wizard to generate searchable text from images and unstructured documents. Skills in this quickstart include optical character recognition (OCR), image analysis, and natural language processing.
+description: Learn how to use an import wizard to generate searchable text from images and unstructured documents. Skills in this quickstart include optical character recognition (OCR), image analysis, and natural-language processing.
manager: nitinme
author: haileytap
ms.author: haileytapia
@@ -10,189 +10,43 @@ ms.update-cycle: 180-days
ms.custom:
- ignite-2023
ms.topic: quickstart
-ms.date: 03/04/2025
+ms.date: 09/16/2025
+zone_pivot_groups: azure-portal-wizards
---
# Quickstart: Create a skillset in the Azure portal
-In this quickstart, you learn how a skillset in Azure AI Search adds optical character recognition (OCR), image analysis, language detection, text translation, and entity recognition to generate text-searchable content in a search index.
+::: zone pivot="import-data-new"
+[!INCLUDE [Import data (new) instructions](includes/quickstarts/search-get-started-skillset-new-wizard.md)]
+::: zone-end
-You can run the **Import data** wizard in the Azure portal to apply skills that create and transform textual content during indexing. The input is your raw data, usually blobs in Azure Storage. The output is a searchable index containing AI-generated image text, captions, and entities. You can query generated content in the Azure portal using [**Search explorer**](search-explorer.md).
-
-To prepare, you create a few resources and upload sample files before running the wizard.
-
-## Prerequisites
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-
-+ An Azure AI Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/CognitiveSearch) in your current subscription. For this quickstart, you can use a free service.
-
-+ An Azure Storage account with Azure Blob Storage.
-
-> [!NOTE]
-> This quickstart uses [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for AI transformations. Because the workload is so small, Azure AI services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create an Azure AI services multi-service resource.
-
-## Set up your data
-
-In the following steps, set up a blob container in Azure Storage to store heterogeneous content files.
-
-1. [Download sample data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ai-enrichment-mixed-media) consisting of a small file set of different types.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-
-1. [Create an Azure Storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
-
- + Choose the same region as Azure AI Search to avoid bandwidth charges.
-
- + Choose the StorageV2 (general purpose V2).
-
-1. In Azure portal, open your Azure Storage page and create a container. You can use the default access level.
-
-1. In Container, select **Upload** to upload the sample files. Notice that you have a wide range of content types, including images and application files that aren't full text searchable in their native formats.
-
- :::image type="content" source="media/search-get-started-skillset/sample-data.png" alt-text="Screenshot of source files in Azure Blob Storage." border="false":::
-
-You're now ready to move on the Import data wizard.
-
-## Run the Import data wizard
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). On the Overview page, select **Import data** on the command bar to create searchable content in four steps.
-
- :::image type="content" source="media/search-import-data-portal/import-data-cmd.png" alt-text="Screenshot of the Import data command." border="true":::
-
-### Step 1: Create a data source
-
-1. In **Connect to your data**, choose **Azure Blob Storage**.
-
-1. Choose an existing connection to the storage account and select the container you created. Give the data source a name, and use default values for the rest.
-
- :::image type="content" source="media/search-get-started-skillset/blob-datasource.png" alt-text="Screenshot of the data source definition page." border="true":::
-
- Continue to the next page.
-
-If you get *Error detecting index schema from data source*, the indexer that powers the wizard can't connect to your data source. Most likely, the data source has security protections. Try the following solutions and then rerun the wizard.
-
-| Security feature | Solution |
-|--------------------|----------|
-| Resource requires Azure roles, or its access keys are disabled | [Connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md) or [connect using a managed identity](search-how-to-managed-identities.md) |
-| Resource is behind an IP firewall | [Create an inbound rule for Search and for the Azure portal](search-indexer-howto-access-ip-restricted.md) |
-| Resource requires a private endpoint connection | [Connect over a private endpoint](search-indexer-howto-access-private.md) |
-
-### Step 2: Add cognitive skills
-
-Next, configure AI enrichment to invoke OCR, image analysis, and natural language processing.
-
-OCR and image analysis are available for blobs in Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2, and for image content in OneLake. Images can be standalone files or embedded images in a PDF or other files.
-
-1. For this quickstart, we're using the **Free** Azure AI services resource. The sample data consists of 14 files, so the free allotment of 20 transactions on Azure AI services is sufficient for this quickstart.
-
- :::image type="content" source="media/search-get-started-skillset/cog-search-attach.png" alt-text="Screenshot of the Attach Azure AI services tab." border="true":::
-
-1. Expand **Add enrichments** and make six selections.
-
- Enable OCR to add image analysis skills to wizard page.
-
- Choose entity recognition (people, organizations, locations) and image analysis skills (tags, captions).
-
- :::image type="content" source="media/search-get-started-skillset/skillset.png" alt-text="Screenshot of the skillset definition page." border="true":::
-
- Continue to the next page.
-
-### Step 3: Configure the index
-
-An index contains your searchable content and the **Import data** wizard can usually create the schema by sampling the data source. In this step, review the generated schema and potentially revise any settings.
-
-For this quickstart, the wizard does a good job setting reasonable defaults:
-
-+ Default fields are based on metadata properties of existing blobs, plus the new fields for the enrichment output (for example, `people`, `organizations`, `locations`). Data types are inferred from metadata and by data sampling.
-
-+ Default document key is *metadata_storage_path* (selected because the field contains unique values).
-
-+ Default attributes are **Retrievable** and **Searchable**. **Searchable** allows full text search a field. **Retrievable** means field values can be returned in results. The wizard assumes you want these fields to be retrievable and searchable because you created them via a skillset. Select **Filterable** if you want to use fields in a filter expression.
-
- :::image type="content" source="media/search-get-started-skillset/index-fields.png" alt-text="Screenshot of the index definition page." border="true":::
-
-Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can control search results composition by using the **select** query parameter to specify which fields to include.
-
-Continue to the next page.
-
-### Step 4: Configure the indexer
-
-The indexer drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, including an indexer that you can reset and run repeatedly.
-
-1. In the **Indexer** page, accept the default name and select **Once**.
-
- :::image type="content" source="media/search-get-started-skillset/indexer-def.png" alt-text="Screenshot of the indexer definition page." border="true":::
-
-1. Select **Submit** to create and simultaneously run the indexer.
-
-## Monitor status
-
-Select **Indexers** from the left pane to monitor status, and then select the indexer. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
-
- :::image type="content" source="media/search-get-started-skillset/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true":::
-
-To view details about execution status, select **Success** (or **Failed**) to view execution details.
-
-In this demo, there are a few warnings: *"Could not execute skill because one or more skill input was invalid."* It tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. This warning occurs because the upstream OCR skill didn't recognize any text in the image, and thus couldn't provide a text input to the downstream Entity Recognition skill.
-
-Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore.
-
-## Query in Search explorer
-
-After an index is created, use **Search explorer** to return results.
-
-1. On the left, select **Indexes** and then select the index. **Search explorer** is on the first tab.
-
-1. Enter a search string to query the index, such as `satya nadella`. The search bar accepts keywords, quote-enclosed phrases, and operators: `"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`
-
-Results are returned as verbose JSON, which can be hard to read, especially in large documents. Some tips for searching in this tool include the following techniques:
-
-+ Switch to JSON view to specify parameters that shape results.
-+ Add `select` to limit the fields in results.
-+ Add `count` to show the number of matches.
-+ Use CTRL-F to search within the JSON for specific properties or terms.
-
- :::image type="content" source="media/search-get-started-skillset/search-explorer.png" alt-text="Screenshot of the Search explorer page." border="true":::
-
-Here's some JSON you can paste into the view:
-
- ```json
- {
- "search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
- "count": true,
- "select": "content, people"
- }
- ```
-
-> [!TIP]
-> Query strings are case-sensitive so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify name and case.
+::: zone pivot="import-data"
+[!INCLUDE [Import data instructions](includes/quickstarts/search-get-started-skillset-old-wizard.md)]
+::: zone-end
## Takeaways
-You've now created your first skillset and learned the basic steps of skills-based indexing.
+You've created your first skillset and learned the basic steps of skills-based indexing.
-Some key concepts that we hope you picked up include the dependencies. A skillset is bound to an indexer, and indexers are Azure and source-specific. Although this quickstart uses Azure Blob Storage, other Azure data sources are possible. For more information, see [Indexers in Azure AI Search](search-indexer-overview.md).
+Some key concepts that we hope you picked up include the dependencies. A skillset is bound to an indexer, and indexers are Azure and source-specific. Although this quickstart uses Azure Blob Storage, other Azure data sources are available. For more information, see [Indexers in Azure AI Search](search-indexer-overview.md).
-Another important concept is that skills operate over content types, and when working with heterogeneous content, some inputs are skipped. Also, large files or fields might exceed the indexer limits of your service tier. It's normal to see warnings when these events occur.
+Another important concept is that skills operate over content types, and when you use heterogeneous content, some inputs are skipped. Also, large files or fields might exceed the indexer limits of your service tier. It's normal to see warnings when these events occur.
-Output is routed to a search index, and there's a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the wizard sets up [an enrichment tree](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the wizard, but when you start writing code, these concepts become important.
+The output is routed to a search index, and there's a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the wizard sets up [an enrichment tree](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the wizard, but when you start writing code, these concepts become important.
-Finally, you learned that you can verify content by querying the index. In the end, what Azure AI Search provides is a searchable index, which you can query using either the [simple](/rest/api/searchservice/simple-query-syntax-in-azure-search) or [fully extended query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search). An index containing enriched fields is like any other. You can incorporate standard or [custom analyzers](search-analyzers.md), [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [synonyms](search-synonyms.md), [faceted navigation](search-faceted-navigation.md), geo-search, or any other Azure AI Search feature.
+Finally, you learned that you can verify content by querying the index. Ultimately, Azure AI Search provides a searchable index that you can query using either [simple](/rest/api/searchservice/simple-query-syntax-in-azure-search) or [fully extended query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search). An index containing enriched fields is like any other. You can incorporate standard or [custom analyzers](search-analyzers.md), [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [synonyms](search-synonyms.md), [faceted navigation](search-faceted-navigation.md), geo-search, and other Azure AI Search features.
## Clean up resources
When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-You can find and manage resources in the Azure portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
+You can find and manage resources in the Azure portal by selecting **All resources** or **Resource groups** from the left pane.
If you used a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the Azure portal to stay under the limit.
## Next step
-You can create skillsets using the Azure portal, .NET SDK, or REST API. To further your knowledge, try the REST API by using a REST client and more sample data:
+You can use the Azure portal, REST APIs, or an Azure SDK to create skillsets. Try the REST APIs by using a REST client and more sample data:
> [!div class="nextstepaction"]
> [Tutorial: Use skillsets to generate searchable content in Azure AI Search](tutorial-skillset.md)
Summary
{
"modification_type": "minor update",
"modification_title": "スキルセット作成クイックスタートの更新"
}
Explanation
This change is a large-scale update to the search-get-started-skillset.md file: the documentation for creating a skillset is simplified, and the steps now center on the current import wizards. Older instructions are removed and content for the new wizard is added, reflecting the latest features and flow.
Specifically, the title is adjusted, the date is updated, and the overall description is reorganized so that the key concepts and processes of skillset creation come through clearly. The information about the Azure AI services transaction limit is also clarified, emphasizing that the skills can run within the free allotment.
The prerequisites and procedure sections are tightened as well, and with the detailed steps trimmed, the overall process is easier for beginners to follow. As a result, users can learn and run skills-based indexing with less effort.
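To make the "when you start writing code" point concrete, the following is a hypothetical skillset fragment showing the OCR-plus-entity-recognition pattern this quickstart describes. The skill wiring, names, and categories are assumptions for illustration; the wizard's generated skillset differs in its details.
```json
{
  "name": "demo-skillset",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
      "context": "/document/normalized_images/*",
      "inputs": [ { "name": "image", "source": "/document/normalized_images/*" } ],
      "outputs": [ { "name": "text", "targetName": "text" } ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
      "categories": [ "Person", "Organization", "Location" ],
      "context": "/document/normalized_images/*",
      "inputs": [ { "name": "text", "source": "/document/normalized_images/*/text" } ],
      "outputs": [ { "name": "persons", "targetName": "people" } ]
    }
  ]
}
```
This wiring also illustrates the warning mentioned in the diff: if OCR finds no text in an image, the downstream entity recognition skill has no valid input for that document.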
articles/search/search-how-to-create-indexers.md
Diff
@@ -163,11 +163,11 @@ When you're ready to create an indexer on a remote search service, you need a se
### [**Azure portal**](#tab/portal)
-1. Sign in to the [Azure portal](https://portal.azure.com), then find your search service.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
-1. On the search service **Overview** page, choose from two options:
+1. Choose from the following options:
- + [**Import data** wizard](search-import-data-portal.md): The wizard is unique in that it creates all of the required elements. Other approaches require a predefined data source and index.
+ + [**Import wizards**](search-import-data-portal.md): The wizards are unique in that they create all of the required elements. Other approaches require a predefined data source and index.
:::image type="content" source="media/search-how-to-create-indexers/portal-indexer-client.png" alt-text="Screenshot that shows the Import data wizard." border="true":::
Summary
{
"modification_type": "minor update",
"modification_title": "インデクサー作成手順の更新"
}
Explanation
This change is a minor update to the search-how-to-create-indexers.md file that mainly clarifies the steps for creating an indexer in the Azure portal. The wording of the steps is improved and better organized.
One change replaces "sign in and find your search service" with "sign in and select your search service," a more concrete instruction. The term "Import data wizard" also becomes the plural "import wizards," reflecting that more than one wizard is available and making the different approaches and their requirements clearer.
Overall, these changes are meant to help readers understand and carry out the steps smoothly.
articles/search/search-how-to-create-search-index.md
Diff
@@ -96,7 +96,7 @@ Setting a field as searchable, filterable, sortable, or facetable has an effect
If a field isn't set to be searchable, filterable, sortable, or facetable, the field can't be referenced in any query expression. This is desirable for fields that aren't used in queries, but are needed in search results.
-The REST APIs have default attribution based on [data types](/rest/api/searchservice/supported-data-types), which is also used by the [Import wizards](search-import-data-portal.md) in the Azure portal. The Azure SDKs don't have defaults, but they have field subclasses that incorporate properties and behaviors, such as [SearchableField](/dotnet/api/azure.search.documents.indexes.models.searchablefield) for strings and [SimpleField](/dotnet/api/azure.search.documents.indexes.models.simplefield) for primitives.
+The REST APIs have default attribution based on [data types](/rest/api/searchservice/supported-data-types), which is also used by the [import wizards](search-import-data-portal.md) in the Azure portal. The Azure SDKs don't have defaults, but they have field subclasses that incorporate properties and behaviors, such as [SearchableField](/dotnet/api/azure.search.documents.indexes.models.searchablefield) for strings and [SimpleField](/dotnet/api/azure.search.documents.indexes.models.simplefield) for primitives.
Default field attributions for the REST APIs are summarized in the following table.
@@ -113,7 +113,7 @@ Default field attributions for the REST APIs are summarized in the following tab
String fields can also be optionally associated with [analyzers](search-analyzers.md) and [synonym maps](search-synonyms.md). Fields of type `Edm.String` that are filterable, sortable, or facetable can be at most 32 kilobytes in length. This is because values of such fields are treated as a single search term, and the maximum length of a term in Azure AI Search is 32 kilobytes. If you need to store more text than this in a single string field, you should explicitly set filterable, sortable, and facetable to `false` in your index definition.
-Vector fields must be associated with [dimensions and vector profiles](vector-search-how-to-create-index.md). Retrievable is true by default if you add the vector field using the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal. If you use the REST API, it's false.
+Vector fields must be associated with [dimensions and vector profiles](vector-search-how-to-create-index.md). Retrievable is true by default if you add the vector field using the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal. If you use the REST API, it's false.
Field attributes are described in the following table.
@@ -154,7 +154,7 @@ Index design through the Azure portal enforces requirements and schema rules for
The wizard is an end-to-end workflow that creates an indexer, a data source, and a finished index. It also loads the data. If this is more than what you want, use **Add index** instead.
-The following screenshot highlights where **Add index**, **Import data**, and **Import and vectorize data wizard** appear on the command bar.
+The following screenshot highlights where the **Add index**, **Import data**, and **Import data (new)** wizards appear on the command bar.
:::image type="content" source="media/search-what-is-an-index/add-index.png" alt-text="Screenshot of the options to add an index." border="true":::
Summary
{
"modification_type": "minor update",
"modification_title": "検索インデックス作成手順の更新"
}
Explanation
This change is a minor update to the search-how-to-create-search-index.md file that mostly clarifies steps and terminology. The main changes are as follows.
First, references to the "Import data wizard" are replaced with "import wizards" and "Import data (new)" so that the wording reflects the multiple wizards now available. The terminology change helps readers distinguish what each wizard does.
In the REST API discussion, the default behavior is clarified, in particular the condition under which retrievable defaults to true. This deepens the reader's understanding of field attributes and how they behave.
The figure caption is also updated to match the current interface, keeping the document consistent with what users actually see in the Azure portal.
Overall, these changes improve clarity and help users understand and execute the index creation steps more effectively.
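As a concrete illustration of the retrievable default mentioned above, a vector field created through the REST API (where retrievable defaults to false) might be declared along these lines. This is a hedged sketch: the field name, dimension count, and profile name are placeholders rather than values from the article.
```json
{
  "name": "contentVector",
  "type": "Collection(Edm.Single)",
  "searchable": true,
  "retrievable": false,
  "dimensions": 1536,
  "vectorSearchProfile": "my-vector-profile"
}
```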
articles/search/search-how-to-define-index-projections.md
Diff
@@ -305,7 +305,7 @@ If a parent document is completely deleted from the datasource, the correspondin
### Projected key value
-To ensure data integrity for updated and deleted content, data refresh in one-to-many indexing relies on a *projected key value* on the "many" side. If you're using integrated vectorization or the [Import and vectorize data wizard](search-import-data-portal.md), the projected key value is the `parent_id` field in a chunked or "many" side of the index.
+To ensure data integrity for updated and deleted content, data refresh in one-to-many indexing relies on a *projected key value* on the "many" side. If you're using integrated vectorization or the [**Import data (new)** wizard](search-import-data-portal.md), the projected key value is the `parent_id` field in a chunked or "many" side of the index.
A projected key value is a unique identifier that the indexer generates for each document. It ensures uniqueness and allows for change and deletion tracking to work correctly. This key contains the following segments:
Summary
{
"modification_type": "minor update",
"modification_title": "インデックスプロジェクション定義の手順更新"
}
Explanation
This change is a minor update to the search-how-to-define-index-projections.md file that improves the steps related to the projected key value. Specifically, the reference to the "Import and vectorize data wizard" is corrected to the "Import data (new)" wizard so the text matches the current procedure.
The update helps readers understand how data integrity is maintained for one-to-many indexing and how the projected key value works. The projected key is a unique identifier that the indexer generates for each document, and it's essential for change and deletion tracking to work correctly.
Overall, these changes keep the document current and make the described steps easier to follow.
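For context, the parent_id projected key mentioned above is declared through index projections in the skillset. The following fragment is a rough sketch under assumed names (the target index, key field, and source paths are placeholders), not the exact definition from the article or the wizard's output.
```json
"indexProjections": {
  "selectors": [
    {
      "targetIndexName": "my-chunked-index",
      "parentKeyFieldName": "parent_id",
      "sourceContext": "/document/pages/*",
      "mappings": [
        { "name": "chunk", "source": "/document/pages/*" }
      ]
    }
  ],
  "parameters": { "projectionMode": "skipIndexingParentDocuments" }
}
```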
articles/search/search-how-to-index-logic-apps-indexers.md
Diff
@@ -17,9 +17,9 @@ ms.custom:
[!INCLUDE [Feature preview](./includes/previews/preview-generic.md)]
-Support for Azure Logic Apps integration is now in public preview, available in the Azure portal [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) only. In Azure AI Search, a logic app workflow is used for indexing and vectorization, and it's equivalent to an indexer and data source in Azure AI Search.
+Support for Azure Logic Apps integration is currently in public preview and only available through the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal. In Azure AI Search, a logic app workflow is used for indexing and vectorization, and it's equivalent to an indexer and data source in Azure AI Search.
-You can create a workflow in Azure AI Search using the Import and vectorize data wizard, and then manage the workflow in Azure Logic Apps alongside your other workflows. Behind the scenes, the wizard follows a workflow template that pulls in (ingests) content from a source for indexing in AI Search. The connectors used in this scenario are prebuilt and already exist in Azure Logic Apps, so the workflow template just provides details for those connectors to create connections to the data source, AI Search, and other items to complete the ingestion workflow.
+You can create a workflow in Azure AI Search using the **Import data (new)** wizard, and then manage the workflow in Azure Logic Apps alongside your other workflows. Behind the scenes, the wizard follows a workflow template that pulls in (ingests) content from a source for indexing in AI Search. The connectors used in this scenario are prebuilt and already exist in Azure Logic Apps, so the workflow template just provides details for those connectors to create connections to the data source, AI Search, and other items to complete the ingestion workflow.
> [!NOTE]
> A logic app workflow is a billable resource. For more information, see [Azure Logic Apps pricing](/azure/logic-apps/logic-apps-pricing).
@@ -33,7 +33,7 @@ Azure Logic Apps integration in Azure AI Search adds support for:
+ Scheduled or on-demand indexing
+ Change detection of new and existing documents
-Import and vectorize data wizard inputs include:
+The **Import data (new)** wizard inputs include:
+ A supported data source
+ A supported text embedding model
@@ -77,7 +77,8 @@ End-to-end functionality is available in the following regions, which provide th
### Supported models
-The logic app path through the **Import and vectorize data** wizard supports a selection of embedding models.
+The logic app path through the **Import data (new)** wizard supports a selection of embedding models.
+
Deploy one of the following [embedding models](/azure/ai-services/openai/concepts/models#embeddings) on Azure OpenAI for your end-to-end workflow.
+ text-embedding-3-small
@@ -109,11 +110,11 @@ Currently, the public preview has these limitations:
Follow these steps to create a logic app workflow for indexing content in Azure AI Search.
-1. Start the Import and vectorize data wizard in the Azure portal.
+1. Start the **Import data (new)** wizard in the Azure portal.
1. Choose a [supported Azure Logic Apps connector](#supported-connectors).
- :::image type="content" source="media/logic-apps-connectors/choose-data-source.png" alt-text="Screenshot of the chosen data source page in the Import and vectorize data wizard." lightbox="media/logic-apps-connectors/choose-data-source.png" :::
+ :::image type="content" source="media/logic-apps-connectors/choose-data-source.png" alt-text="Screenshot of the chosen data source page in the Import data (new) wizard." lightbox="media/logic-apps-connectors/choose-data-source.png" :::
1. In **Connect to your data**, provide a name prefix used for the search index and workflow. Having a common name helps you manage them together.
Summary
{
"modification_type": "minor update",
"modification_title": "Logic Appsインデクサーの手順更新"
}
Explanation
This change is a minor update to the search-how-to-index-logic-apps-indexers.md file that refines the guidance for the Azure Logic Apps integration. The main change replaces the term "Import and vectorize data wizard" with the current name, the "Import data (new)" wizard.
The text explains how Logic Apps is used within Azure AI Search, including how a workflow is created and then managed in Azure Logic Apps alongside other workflows. The description of how the wizard follows a workflow template to ingest content is retained, which makes the indexing flow easier to understand.
The wizard inputs and the supported text embedding models are updated as well, so users can confirm the exact requirements. The note about Azure Logic Apps pricing is kept, reminding users to account for cost.
Overall, this change clarifies the Logic Apps integration process and helps users work through it efficiently.
articles/search/search-how-to-index-onelake-files.md
Diff
@@ -23,8 +23,8 @@ To configure and run the indexer, you can use:
+ [2024-05-01-preview REST API](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&tabs=HTTP&preserve-view=true) or a newer preview REST API.
+ An Azure SDK beta package that provides the feature.
-+ [Import data wizard](search-get-started-portal.md) in the Azure portal.
-+ [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
++ [**Import data** wizard](search-get-started-portal.md) in the Azure portal.
++ [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
This article uses the REST APIs to illustrate each step.
@@ -454,6 +454,6 @@ There are five indexer properties that control the indexer's response when error
## Next steps
-Review how the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) works and try it out for this indexer. You can use [integrated vectorization](vector-search-integrated-vectorization.md) to chunk and create embeddings for vector or hybrid search using a default schema.
+Review how the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) works and try it out for this indexer. You can use [integrated vectorization](vector-search-integrated-vectorization.md) to chunk and create embeddings for vector or hybrid search using a default schema.
<!-- + Check out [this Python demo](add a link to demo location) that shows how to set this up using code. -->
Summary
{
"modification_type": "minor update",
"modification_title": "OneLakeファイルのインデクシング手順更新"
}
Explanation
This change is a minor update to the search-how-to-index-onelake-files.md file that improves the description of the OneLake file indexing steps. The main change updates the wizard names from "Import data wizard" and "Import and vectorize data wizard" to the current "Import data" and "Import data (new)" wizards.
This helps users understand more precisely how to configure and run the indexer in the Azure portal: in addition to the REST APIs and the Azure SDK beta packages, both wizards are listed as supported options.
The article also keeps the information about the indexer properties that control the indexer's response to errors. Together, these updates help users move through OneLake file indexing more smoothly.
Overall, the changes keep the guidance current and easier to follow, so users can take advantage of the latest features.
articles/search/search-how-to-index-sql-database.md
Diff
@@ -136,13 +136,17 @@ In this step, specify the data source, index, and indexer.
:::image type="content" source="media/search-how-to-index-sql-database/search-data-source.png" alt-text="Screenshot of the data source creation page in the Azure portal.":::
-1. Start the **Import data** wizard to create the index and indexer.
+1. Use an [import wizard](search-import-data-portal.md) to create the index and indexer.
- 1. On the Overview page, select **Import data**.
- 1. Select the data source you just created, and select **Next**.
- 1. Skip the **Add cognitive skills (Optional)** page.
- 1. On **Customize target index**, name the index, set the key to your primary key in the table, and then group select *Retrievable* and *Searchable* for all fields, and optionally add *Filterable* and *Sortable* for short strings or numeric values.
- 1. On **Create an indexer**, name the indexer and select **Submit**.
+ 1. On the **Overview** page, select **Import data** or **Import data (new)**.
+
+ 1. Select the data source you just created.
+
+ 1. Skip the step for adding AI enrichments.
+
+ 1. Name the index, set the key to your primary key in the table, attribute all fields as **Retrievable** and **Searchable**, and optionally add **Filterable** and **Sortable** for short strings or numeric values.
+
+ 1. Name the indexer and finish the wizard to create the necessary objects.
### [**REST**](#tab/test-sql)
@@ -363,7 +367,7 @@ For Azure SQL indexers, there are two change detection policies:
+ "HighWaterMarkChangeDetectionPolicy" (works for views)
-### SQL Integrated Change Tracking Policy
+### SQL integrated change tracking policy
We recommend using "SqlIntegratedChangeTrackingPolicy" for its efficiency and its ability to identify deleted rows.
@@ -401,7 +405,7 @@ When using SQL integrated change tracking policy, don't specify a separate data
<a name="HighWaterMarkPolicy"></a>
-### High Water Mark Change Detection policy
+### High water mark change detection policy
This change detection policy relies on a "high water mark" column in your table or view that captures the version or time when a row was last updated. If you're using a view, you must use a high water mark policy.
@@ -484,7 +488,7 @@ You can also disable the `ORDER BY [High Water Mark Column]` clause. However, th
}
```
-### Soft Delete Column Deletion Detection policy
+### Soft delete column deletion detection policy
When rows are deleted from the source table, you probably want to delete those rows from the search index as well. If you use the SQL integrated change tracking policy, this is taken care of for you. However, the high water mark change tracking policy doesn’t help you with deleted rows. What to do?
Summary
{
"modification_type": "minor update",
"modification_title": "SQLデータベースインデクシング手順の更新"
}
Explanation
This change is a minor update to the search-how-to-index-sql-database.md file that improves the explanation of indexing an Azure SQL database. The main change is an updated set of steps for creating the indexer.
The wizard reference changes from the "Import data wizard" to "an import wizard," and the "Import data (new)" option is now called out, so users can follow the steps against the current interface.
The step details are also revised: naming the index, setting the key to the table's primary key, and attributing fields are described more clearly. Fields are marked Retrievable and Searchable, with Filterable and Sortable added as needed for short strings or numeric values, which helps users configure the index efficiently.
In addition, the headings for the change detection policies are rewritten in sentence case for a more consistent style, improving readability and making information easier to find.
Overall, these changes make the SQL database indexing process easier to understand and provide actionable steps.
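For reference, the change detection policies whose headings were restyled above are configured on the data source definition. A minimal sketch using the integrated change tracking policy might look like the following; the data source name, connection string, and table name are placeholders.
```json
{
  "name": "my-sql-datasource",
  "type": "azuresql",
  "credentials": { "connectionString": "[your Azure SQL connection string]" },
  "container": { "name": "[your table or view]" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
  }
}
```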
articles/search/search-how-to-integrated-vectorization.md
Diff
@@ -36,7 +36,7 @@ Integrated vectorization works with [all supported data sources](search-indexer-
| Supported data source | Description |
|--|--|
-| [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md) | This data source works with blobs and tables. You must use a standard performance (general-purpose v2) account. Access tiers can be hot, cool, or cold. |
+| [Azure Blob Storage](/azure/storage/common/storage-account-create) | This data source works with blobs and tables. You must use a standard performance (general-purpose v2) account. Access tiers can be hot, cool, or cold. |
| [Azure Data Lake Storage (ADLS) Gen2](/azure/storage/blobs/create-data-lake-storage-account) | This is an Azure Storage account with a hierarchical namespace enabled. To confirm that you have Data Lake Storage, check the **Properties** tab on the **Overview** page.<br><br> :::image type="content" source="media/search-how-to-integrated-vectorization/data-lake-storage-account.png" alt-text="Screenshot of an Azure Data Lake Storage account in the Azure portal." border="true" lightbox="media/search-how-to-integrated-vectorization/data-lake-storage-account.png"::: |
<!--| [OneLake](search-how-to-index-onelake-files.md) | This data source is currently in preview. For information about limitations and supported shortcuts, see [OneLake indexing](search-how-to-index-onelake-files.md). |-->
Summary
{
"modification_type": "minor update",
"modification_title": "インテグレーテッドベクトル化のサポートデータソースのリンク更新"
}
Explanation
This change is a minor update to the search-how-to-integrated-vectorization.md file that fixes a link in the list of data sources supported by integrated vectorization. Specifically, the Azure Blob Storage entry is revised to use the correct link format.
The original entry used a relative link for Azure Blob Storage; the new entry uses the full link, pointing directly to the information about creating a storage account. The change improves the consistency and accuracy of links in the document and helps users find what they need.
The descriptions of the other data sources are unchanged, and the information about Azure Data Lake Storage is preserved, so users can still see which sources work with integrated vectorization.
Overall, this update improves usability by keeping the document accurate and useful.
articles/search/search-how-to-load-search-index.md
Diff
@@ -33,13 +33,13 @@ For more information, see [Data import strategies](search-what-is-data-import.md
## Use the Azure portal
-In the Azure portal, use the [import wizards](search-import-data-portal.md) to create and load indexes in a seamless workflow. If you want to load an existing index, choose an alternative approach.
+In the Azure portal, use an [import wizard](search-import-data-portal.md) to create and load indexes in a seamless workflow. If you want to load an existing index, choose an alternative approach.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
-1. On the **Overview** page, select **Import data** or **Import and vectorize data wizard** on the command bar to create and populate a search index.
+1. On the **Overview** page, select **Import data** or **Import data (new)** on the command bar to create and populate a search index.
- :::image type="content" source="media/search-import-data-portal/import-data-cmd.png" alt-text="Screenshot of the Import data command" border="true":::
+ :::image type="content" source="media/search-import-data-portal/import-wizards.png" alt-text="Screenshot of the Import data command." border="true":::
You can follow these links to review the workflow: [Quickstart: Create an Azure AI Search index](search-get-started-portal.md) and [Quickstart: Integrated vectorization](search-get-started-portal-import-vectors.md).
Summary
{
"modification_type": "minor update",
"modification_title": "Azureポータルでのインポートウィザードのリンク更新"
}
Explanation
This change is a minor update to the search-how-to-load-search-index.md file that refines the description of the import wizards in the Azure portal. The phrasing is adjusted slightly for accuracy and consistency.
In the opening sentence, "the import wizards" becomes "an import wizard," using the singular for a clearer reference. In the step itself, "Import and vectorize data wizard" becomes "Import data (new)," reflecting the current interface and surfacing the new option to users.
The screenshot in the step is also replaced with a new image of the import commands, which makes the step easier to follow visually.
Overall, the change keeps the portal import instructions current and makes the document easier to use.
articles/search/search-how-to-semantic-chunking.md
Diff
@@ -30,7 +30,7 @@ In this article, learn how to:
> + Generate embeddings for each chunk
> + Use index projections to map embeddings to fields in a search index
-For illustration purposes, this article uses the [sample health plan PDFs](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) uploaded to Azure Blob Storage and then indexed using the **Import and vectorize data wizard**.
+For illustration purposes, this article uses the [sample health plan PDFs](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) uploaded to Azure Blob Storage and then indexed using the **Import data (new)** wizard.
## Prerequisites
@@ -59,15 +59,15 @@ The raw inputs must be in a [supported data source](search-indexer-overview.md#s
You can use the Azure portal, REST APIs, or an Azure SDK package to [create a data source](search-howto-indexing-azure-blob-storage.md).
> [!TIP]
-> Upload the [health plan PDF](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) sample files to your supported data source to try out the Document Layout skill and structure-aware chunking on your own search service. The [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) is an easy code-free approach for trying out this skill. Be sure to select the **default parsing mode** to use structure-aware chunking. Otherwise, the [Markdown parsing mode](search-how-to-index-markdown-blobs.md) is used.
+> Upload the [health plan PDF](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) sample files to your supported data source to try out the Document Layout skill and structure-aware chunking on your own search service. The [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) is an easy code-free approach for trying out this skill. Be sure to select the **default parsing mode** to use structure-aware chunking. Otherwise, the [Markdown parsing mode](search-how-to-index-markdown-blobs.md) is used.
## Create an index for one-to-many indexing
Here's an example payload of a single search document designed around chunks. Whenever you're working with chunks, you need a chunk field and a parent field that identifies the origin of the chunk. In this example, parent fields are the text_parent_id. Child fields are the vector and nonvector chunks of the markdown section.
The Document Layout skill outputs headings and content. In this example, `header_1` through `header_3` store document headings, as detected by the skill. Other content, such as paragraphs, is stored in `chunk`. The `text_vector` field is a vector representation of the chunk field content.
-You can use the **Import and vectorize data wizard** in the Azure portal, REST APIs, or an Azure SDK to [create an index](search-how-to-load-search-index.md). The following index is very similar to what the wizard creates by default. You might have more fields if you add image vectorization.
+You can use the **Import data (new)** wizard in the Azure portal, REST APIs, or an Azure SDK to [create an index](search-how-to-load-search-index.md). The following index is very similar to what the wizard creates by default. You might have more fields if you add image vectorization.
If you aren't using the wizard, the index must exist on the search service before you create the skillset or run the indexer.
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの名称更新"
}
Explanation
This change is a minor update to the search-how-to-semantic-chunking.md file that updates how the portal import wizard is referenced. The main change renames the "Import and vectorize data wizard" to the "Import data (new)" wizard, so the article uses the current name and readers can reach the latest interface without confusion.
Several sections that mention the wizard are updated accordingly, which keeps the experience consistent as readers click through the portal.
The article still uses the health plan PDF sample files as a concrete case for trying the Document Layout skill and structure-aware chunking.
Overall, the change improves accuracy and keeps the document easy to use.
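To visualize the one-to-many chunk schema the article describes (a parent ID, detected headings, chunk text, and a vector), here's a hedged sketch of the relevant index fields. Names such as chunk_id, the analyzer choice, and the 1536-dimension profile are assumptions and may differ from what the wizard generates.
```json
{
  "name": "health-plan-chunked-index",
  "fields": [
    { "name": "chunk_id", "type": "Edm.String", "key": true, "analyzer": "keyword" },
    { "name": "text_parent_id", "type": "Edm.String", "filterable": true },
    { "name": "header_1", "type": "Edm.String", "searchable": true },
    { "name": "header_2", "type": "Edm.String", "searchable": true },
    { "name": "header_3", "type": "Edm.String", "searchable": true },
    { "name": "chunk", "type": "Edm.String", "searchable": true },
    { "name": "text_vector", "type": "Collection(Edm.Single)", "searchable": true, "dimensions": 1536, "vectorSearchProfile": "my-vector-profile" }
  ]
}
```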
articles/search/search-howto-complex-data-types.md
Diff
@@ -22,7 +22,7 @@ Complex fields represent either a single object in the document, or an array of
Azure AI Search natively supports complex types and collections. These types allow you to model almost any JSON structure in an Azure AI Search index. In previous versions of Azure AI Search APIs, only flattened row sets could be imported. In the newest version, your index can now more closely correspond to source data. In other words, if your source data has complex types, your index can have complex types also.
-To get started, we recommend the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/hotels), which you can load in the **Import data** wizard in the Azure portal. The wizard detects complex types in the source and suggests an index schema based on the detected structures.
+To get started, we recommend the [hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/hotels), which you can load using an [import wizard](search-get-started-portal.md) in the Azure portal. The wizard detects complex types in the source and suggests an index schema based on the detected structures.
> [!NOTE]
> Support for complex types became generally available starting in `api-version=2019-05-06`.
@@ -369,9 +369,7 @@ If you implement the workaround, be sure to test extentively.
## Next steps
-Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels) in the **Import data** wizard. You need the Azure Cosmos DB connection information provided in the readme to access the data.
-
-With that information in hand, your first step in the wizard is to create a new Azure Cosmos DB data source. Further on in the wizard, when you get to the target index page, you see an index with complex types. Create and load this index, and then execute queries to understand the new structure.
+Use an import wizard with sample data to guide you through creating, loading, and querying an index.
> [!div class="nextstepaction"]
-> [Quickstart: portal wizard for import, indexing, and queries](search-get-started-portal.md)
+> [Quickstart: Create a search index in the Azure portal](search-get-started-portal.md)
Summary
{
"modification_type": "minor update",
"modification_title": "データセットおよびウィザードの説明改善"
}
Explanation
This change is a minor update to the search-howto-complex-data-types.md file that tidies and clarifies the guidance on complex types in Azure AI Search. Both the data set name and the wizard references are improved.
The "Import data wizard" reference becomes "an import wizard" with a link, making it clear that a wizard can load data with complex types and suggest an index schema from the detected structures. The recommendation to start with the hotels data set is kept, with the name lowercased for consistency.
The Next steps section is also simplified: instead of walking through the Azure Cosmos DB connection details, it now points readers to an import wizard with sample data for creating, loading, and querying an index, and the quickstart link text is updated.
Overall, the revision improves clarity and usability when working with complex data types in Azure AI Search.
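As a quick illustration of the complex types the wizard detects in the hotels data set, an address could be modeled roughly like this; the subfields and attribute choices are illustrative assumptions, not the schema the wizard emits.
```json
{
  "name": "Address",
  "type": "Edm.ComplexType",
  "fields": [
    { "name": "StreetAddress", "type": "Edm.String", "searchable": true },
    { "name": "City", "type": "Edm.String", "searchable": true, "filterable": true, "facetable": true },
    { "name": "StateProvince", "type": "Edm.String", "filterable": true, "facetable": true }
  ]
}
```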
articles/search/search-howto-index-cosmosdb.md
Diff
@@ -67,7 +67,7 @@ The Description field provides the most verbose content. You should target this
## Use the Azure portal
-You can use either the **Import data** wizard or **Import and vectorize data wizard** to automate indexing from an SQL database table or view. The data source configuration is similar for both wizards.
+You can use either the **Import data** wizard or the **Import data (new)** wizard to automate indexing from an SQL database table or view. The data source configuration is similar for both wizards.
1. [Start the wizard](search-import-data-portal.md#starting-the-wizards).
@@ -81,17 +81,17 @@ You can use either the **Import data** wizard or **Import and vectorize data wiz
If you [configure Azure AI Search to use a managed identity](search-how-to-managed-identities.md), and you create a [role assignment on Cosmos DB](/azure/cosmos-db/how-to-setup-rbac#built-in-role-definitions) that grants **Cosmos DB Account Reader** and **Cosmos DB Built-in Data Reader** permissions to the identity, your indexer can connect to Cosmos DB using Microsoft Entra ID and roles.
-1. For the **Import and vectorize data wizard**, you can specify options for change and deletion tracking.
+1. For the **Import data (new)** wizard, you can specify options for change and deletion tracking.
[Change detection](#incremental-indexing-and-custom-queries) is supported by default through a `_ts` field (timestamp). If you upload content using the approach described in [Try with sample data](#try-with-sample-data), the collection is created with a `_ts` field.
[Deletion detection](#indexing-deleted-documents) requires that you have a preexisting top-level field in the collection that can be used as a soft-delete flag. It should be a Boolean field (you could name it IsDeleted). Specify `true` as the soft-deleted value. In the search index, add a corresponding search field called *IsDeleted* set to retrievable and filterable.
1. Continue with the remaining steps to complete the wizard:
- + [Import data wizard](search-get-started-portal.md)
+ + [**Import data** wizard](search-get-started-portal.md)
- + [Import and vectorize data wizard](search-get-started-portal-import-vectors.md)
+ + [**Import data (new)** wizard](search-get-started-portal-import-vectors.md)
## Use the REST APIs
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称の更新"
}
Explanation
This change is a minor update to the search-howto-index-cosmosdb.md file that revises the data import wizard references in the Azure portal section. The main change renames the "Import and vectorize data wizard" to the "Import data (new)" wizard, which helps users avoid confusion when looking for the latest experience.
The descriptions and steps for the two wizards are aligned so the comparison between them is clearer, and the notes on change and deletion tracking options are updated to match the new name.
The links at the end of the procedure are also refreshed so users can jump directly to the walkthrough for the wizard they chose, keeping the guidance consistent throughout the document.
Overall, the change is intended to help users take advantage of the latest Azure AI Search features and process data more efficiently with the import wizards.
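To ground the change and deletion tracking options mentioned above, a Cosmos DB data source that uses the `_ts` high water mark and an `IsDeleted` soft-delete flag could look roughly like the following; connection details and names are placeholders, and the exact values should come from your own resources.
```json
{
  "name": "my-cosmosdb-datasource",
  "type": "cosmosdb",
  "credentials": { "connectionString": "AccountEndpoint=[endpoint];AccountKey=[key];Database=[database]" },
  "container": { "name": "[collection name]" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    "highWaterMarkColumnName": "_ts"
  },
  "dataDeletionDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
    "softDeleteColumnName": "IsDeleted",
    "softDeleteMarkerValue": "true"
  }
}
```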
articles/search/search-howto-index-json-blobs.md
Diff
@@ -78,7 +78,7 @@ api-key: [admin key]
### json example (single hotel JSON files)
-The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/hotels/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob Storage and use the [**Import data** wizard](search-get-started-portal.md) to quickly evaluate how this content is parsed into individual search documents.
+The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/hotels/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob Storage and use an [import wizard](search-get-started-portal.md) to quickly evaluate how this content is parsed into individual search documents.
The data set consists of five blobs, each containing a hotel document with an address collection and a rooms collection. The blob indexer detects both collections and reflects the structure of the input documents in the index schema.
@@ -113,7 +113,7 @@ api-key: [admin key]
### jsonArrays example
-The [New York Philharmonic JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ny-philharmonic) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use the [**Import data** wizard](search-get-started-portal.md) to quickly evaluate how this content is parsed into individual search documents.
+The [New York Philharmonic JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/ny-philharmonic) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use an [import wizard](search-get-started-portal.md) to quickly evaluate how this content is parsed into individual search documents.
The data set consists of eight blobs, each containing a JSON array of entities, for a total of 100 entities. The entities vary as to which fields are populated, but the end result is one search document per entity, from all arrays, in all blobs.
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの表記の統一"
}
Explanation
This change is a minor update to the search-howto-index-json-blobs.md file that improves the descriptions for importing JSON data. Specifically, the wizard reference changes from "Import data wizard" to the generic "import wizard", which tightens term consistency across the document.
The wording is unified for both the hotel JSON document data set and the New York Philharmonic JSON data set on GitHub, so users moving between sections are less likely to be confused. The process of uploading the data files to Blob Storage and using a wizard to quickly evaluate how the content is parsed into individual search documents is described clearly.
As a result, the steps for handling structured JSON data read more smoothly, the user experience improves, and the document as a whole becomes more readable. Overall, the change aims to make the guidance for Azure AI Search easier to use.
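For readers who want to see how the parsing choice surfaces outside the wizard, here is a hedged sketch (not part of the diff) of an indexer that parses blobs containing JSON arrays. The indexer, data source, and index names are hypothetical placeholders:

```http
POST https://[service name].search.windows.net/indexers?api-version=2024-07-01
Content-Type: application/json
api-key: [admin key]

{
  "name": "json-array-indexer",
  "dataSourceName": "[blob data source]",
  "targetIndexName": "[index name]",
  "parameters": {
    "configuration": { "parsingMode": "jsonArray" }
  }
}
```

Setting `"parsingMode": "json"` instead treats each blob as a single JSON document, which matches the hotel data set scenario.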
articles/search/search-howto-indexing-azure-blob-storage.md
Diff
@@ -23,8 +23,8 @@ To configure and run the indexer, you can use:
+ [Search Service REST API](/rest/api/searchservice), any version.
+ An Azure SDK package, any version.
-+ [Import data wizard](search-get-started-portal.md) in the Azure portal.
-+ [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
++ [**Import data** wizard](search-get-started-portal.md) in the Azure portal.
++ [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
This article uses the REST APIs to illustrate each step.
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの表記を統一"
}
Explanation
This change is a minor update to the search-howto-indexing-azure-blob-storage.md file that refreshes the description of how to configure and run an indexer for Azure Blob Storage. The wizard names are now written consistently.
Specifically, the entries "Import data wizard" and "Import and vectorize data wizard" become "**Import data** wizard" and "**Import data (new)** wizard". This helps users recognize the current wizard names and versions and avoid confusion.
The article continues to use the REST APIs to illustrate each step of indexing data from Azure Blob Storage, so users can follow the procedure and apply it in practice.
Overall, the change makes the guidance for Azure AI Search clearer and more consistent, which improves the user experience.
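As context for the REST-based steps the article describes, a blob data source definition typically looks something like the following. This is a hedged sketch, not content from the diff; the account, container, and folder names are placeholders:

```http
POST https://[service name].search.windows.net/datasources?api-version=2024-07-01
Content-Type: application/json
api-key: [admin key]

{
  "name": "blob-datasource",
  "type": "azureblob",
  "credentials": { "connectionString": "DefaultEndpointsProtocol=https;AccountName=[account];AccountKey=[key]" },
  "container": { "name": "[container name]", "query": "[optional virtual folder]" }
}
```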
articles/search/search-howto-indexing-azure-tables.md
Diff
@@ -54,7 +54,7 @@ The Description field provides the most verbose content. You should target this
## Use the Azure portal
-You can use either the **Import data** wizard or **Import and vectorize data wizard** to automate indexing from an SQL database table or view. The data source configuration is similar for both wizards.
+You can use either the **Import data** wizard or the **Import data (new)** wizard to automate indexing from an SQL database table or view. The data source configuration is similar for both wizards.
1. [Start the wizard](search-import-data-portal.md#starting-the-wizards).
@@ -68,15 +68,15 @@ You can use either the **Import data** wizard or **Import and vectorize data wiz
If you [configure Azure AI Search to use a managed identity](search-how-to-managed-identities.md), and you create a role assignment on Azure Storage that grants **Reader and Data Access** permissions to the identity, your indexer can connect to table storage using Microsoft Entra ID and roles.
-1. For the **Import and vectorize data wizard**, you can specify options for deletion detection.
+1. For the **Import data (new)** wizard, you can specify options for deletion detection.
Deletion detection requires that you have a preexisting field in the table that can be used as a soft-delete flag. It should be a Boolean field (you could name it IsDeleted). Specify `true` as the soft-delete value. In the search index, add a corresponding search field called *IsDeleted* set to retrievable and filterable.
1. Continue with the remaining steps to complete the wizard:
- + [Import data wizard](search-get-started-portal.md)
+ + [**Import data** wizard](search-get-started-portal.md)
- + [Import and vectorize data wizard](search-get-started-portal-import-vectors.md)
+ + [**Import data (new)** wizard](search-get-started-portal-import-vectors.md)
## Use the REST APIs
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードの名称変更"
}
Explanation
This change is a minor update to the search-howto-indexing-azure-tables.md file that revises the wizard references for indexing from Azure tables, making the names clearer and more consistent.
Specifically, "Import and vectorize data wizard" becomes "Import data (new) wizard", so users can readily identify the latest version of the wizard and avoid confusion. The document presents both wizards as ways to automate indexing from a table or view, with similar data source configuration in each.
The instructions for the "Import data (new)" wizard are also sharpened, including the options for deletion detection, so users can more easily understand the steps for managing data with a soft-delete flag.
Overall, the change improves the clarity of the Azure AI Search documentation and helps users choose an appropriate indexing approach. Consistent terminology like this improves readability and the overall user experience.
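To make the soft-delete guidance concrete, the corresponding search field in the index might be declared as follows. This is an illustrative sketch (not part of the diff); the index and field names other than *IsDeleted* are placeholders:

```http
POST https://[service name].search.windows.net/indexes?api-version=2024-07-01
Content-Type: application/json
api-key: [admin key]

{
  "name": "[index name]",
  "fields": [
    { "name": "Key", "type": "Edm.String", "key": true },
    { "name": "Description", "type": "Edm.String", "searchable": true, "retrievable": true },
    { "name": "IsDeleted", "type": "Edm.Boolean", "retrievable": true, "filterable": true }
  ]
}
```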
articles/search/search-import-data-portal.md
Diff
@@ -1,11 +1,11 @@
---
-title: Import Wizards in the Azure Portal
+title: Import Wizards in the Azure portal
titleSuffix: Azure AI Search
-description: Learn about the import wizards in the Azure portal used to create and load an index, and optionally invoke applied AI for vectorization, natural language processing, language translation, OCR, and image analysis.
+description: Learn about the Azure portal wizards that create and load an index and optionally invoke applied AI for vectorization, natural-language processing, language translation, OCR, and image analysis.
author: HeidiSteen
ms.author: heidist
manager: nitinme
-ms.date: 05/12/2025
+ms.date: 09/16/2025
ms.service: azure-ai-search
ms.topic: concept-article
ms.custom:
@@ -16,257 +16,296 @@ ms.custom:
# Import data wizards in the Azure portal
-Azure AI Search has two wizards that automate indexing and object creation so that you can begin querying immediately. If you're new to Azure AI Search, these wizards are one of the most powerful features at your disposal. With minimal effort, you can create an indexing or enrichment pipeline that exercises most of the functionality of Azure AI Search.
+> [!IMPORTANT]
+> We're consolidating the Azure AI Search wizards. Key changes include:
+>
+> + The **Import and vectorize data** wizard is now called **Import data (new)**.
+> + The **Import data** workflow is now available in **Import data (new)**.
+>
+> The **Import data** wizard will eventually be deprecated. For now, you can still use this wizard, but we recommend the new wizard for an improved search experience that uses the latest frameworks.
+>
+> The wizards don't have identical keyword search workflows. Certain skills and capabilities are only available in the old wizard. For more information about their similarities and differences, continue reading this article.
-+ **Import data wizard** supports nonvector workflows. You can extract text and numbers from raw documents. You can also configure applied AI and built-in skills that infer structure and generate text searchable content from image files and unstructured data.
+Azure AI Search has two wizards that automate indexing, enrichment, and object creation for various search scenarios:
-+ **Import and vectorize data wizard** adds chunking and vectorization. You must specify an existing deployment of an embedding model, but the wizard makes the connection, formulates the request, and handles the response. It generates vector content from text or image content.
++ The **Import data** wizard supports keyword (nonvector) search. You can extract text and numbers from raw documents. You can also configure applied AI and built-in skills to infer structure and generate searchable text from image files and unstructured data.
-If you're using the wizard for proof-of-concept testing, this article explains the internal workings of the wizards so that you can use them more effectively.
++ The **Import data (new)** wizard supports keyword search, RAG, and multimodal RAG. For keyword search, it modernizes the **Import data** workflow but lacks some functionality, such as automatic metadata field creation. For RAG and multimodal RAG, it connects to your embedding model deployment, sends requests, and generates vectors from text or images.
-This isn't a step-by-step article. To use the wizard with sample data, see:
+Despite their differences, the wizards follow similar workflows for content ingestion and indexing. The following table summarizes their capabilities.
-+ [Quickstart: Create a search index](search-get-started-portal.md)
-+ [Quickstart: Create a text translation and entity skillset](search-get-started-skillset.md)
-+ [Quickstart: Create a vector index](search-get-started-portal-import-vectors.md)
-+ [Quickstart: Image search (vectors)](search-get-started-portal-image-search.md)
+| Capability | Import data wizard | Import data (new) wizard |
+|--|--|--|
+| Index creation | ✅ | ✅ |
+| Indexer pipeline creation | ✅ | ✅ |
+| Azure Logic Apps connectors | ❌ | ✅ |
+| Sample data | ✅ | ❌ |
+| Skills-based enrichment | ✅ | ✅ |
+| Vector and multimodal support | ❌ | ✅ |
+| Semantic ranking support | ❌ | ✅ |
+| Knowledge store support | ✅ | ❌ |
+
+This article explains how the wizards work to help you with proof-of-concept testing. For step-by-step instructions using sample data, see [Try the wizards](#try-the-wizards).
## Supported data sources and scenarios
-The wizards support most of the data sources supported by indexers.
+This section describes the available options in each wizard.
+
+### Data sources
+
+The wizards support the following data sources, most of which use [built-in indexers](search-indexer-overview.md#supported-data-sources). Exceptions are noted in the table's footnotes.
-| Data | Import data wizard | Import and vectorize data wizard |
-|------|--------------------|----------------------------------|
-| [ADLS Gen2](search-howto-index-azure-data-lake-storage.md) | ✅ | ✅ |
+| Data source | Import data wizard | Import data (new) wizard |
+|--|--|--|
+| [ADLS Gen2](search-howto-index-azure-data-lake-storage.md) | ✅ | ✅ |
| [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md) | ✅ | ✅ |
-| [Azure File Storage](search-file-storage-integration.md) | ❌ | ❌ |
-| [Azure Table Storage](search-howto-indexing-azure-tables.md) | ✅ | ✅ |
-| [Azure SQL database and managed instance](search-how-to-index-sql-database.md) | ✅ | ✅ |
+| [Azure File Storage](search-how-to-index-logic-apps-indexers.md#supported-connectors) | ❌ | ✅ <sup>1, 2</sup> |
+| [Azure Queues](search-how-to-index-logic-apps-indexers.md#supported-connectors) | ❌ | ✅ <sup>1</sup> |
+| [Azure Table Storage](search-howto-indexing-azure-tables.md) | ✅ | ✅ |
+| [Azure SQL Database and Managed Instance](search-how-to-index-sql-database.md) | ✅ | ✅ |
| [Cosmos DB for NoSQL](search-howto-index-cosmosdb.md) | ✅ | ✅ |
| [Cosmos DB for MongoDB](search-howto-index-cosmosdb-mongodb.md) | ✅ | ✅ |
| [Cosmos DB for Apache Gremlin](search-howto-index-cosmosdb-gremlin.md) | ✅ | ✅ |
| [MySQL](search-howto-index-mysql.md) | ❌ | ❌ |
-| [OneLake](search-how-to-index-onelake-files.md) | ✅ | ✅ |
-| [SharePoint Online](search-howto-index-sharepoint-online.md) | ❌ | ❌ |
-| [SQL Server on virtual machines](search-how-to-index-sql-server.md) | ✅ | ✅ |
+| [OneDrive](search-how-to-index-logic-apps-indexers.md#supported-connectors) | ❌ | ✅ <sup>1</sup> |
+| [OneDrive for Business](search-how-to-index-logic-apps-indexers.md#supported-connectors) | ❌ | ✅ <sup>1</sup> |
+| [OneLake](search-how-to-index-onelake-files.md) | ✅ | ✅ |
+| [Service Bus](search-how-to-index-logic-apps-indexers.md#supported-connectors) | ❌ | ✅ <sup>1</sup> |
+| [SharePoint Online](search-how-to-index-logic-apps-indexers.md#supported-connectors) | ❌ | ✅ <sup>1, 2</sup> |
+| [SQL Server on virtual machines](search-how-to-index-sql-server.md) | ✅ | ✅ |
+
+<sup>1</sup> This data source uses an [Azure Logic Apps connector (preview)](search-how-to-index-logic-apps-indexers.md#supported-connectors) instead of a built-in indexer.
+
+<sup>2</sup> Instead of using a Logic Apps connector, you can use the Search Service REST APIs to programmatically index data from [Azure File Storage](search-file-storage-integration.md) or [SharePoint Online](search-howto-index-sharepoint-online.md).
### Sample data
-Microsoft hosts sample data so that you can omit a data source configuration step on a wizard workflow.
+Microsoft hosts the following sample data so that you can skip the wizard step for data source configuration.
-| Sample data | Import data wizard | Import and vectorize data wizard |
-|-------------|--------------------|----------------------------------|
-| hotels | ✅ | ❌ |
-| real estate | ✅ | ❌ |
+| Sample data | Import data wizard | Import data (new) wizard |
+|--|--|--|
+| Hotels | ✅ | ❌ |
+| Real estate | ✅ | ❌ |
### Skills
-This section lists the skills that might appear in a skillset generated by a wizard. Wizards generate a skillset and output field mappings based on options you select. After the skillset is created, you can modify its JSON definition to add more skills.
-
-Here are some points to keep in mind about the skills in the following list:
+Each wizard generates a skillset and outputs field mappings based on options you select. After the skillset is created, you can modify its JSON definition to add or remove skills.
-+ OCR and image analysis options are available for blobs in Azure Storage and files in OneLake, assuming the default parsing mode. Images are either an image content type (such as PNG or JPG) or an embedded image in an application file (such as PDF).
-+ Shaper is added if you configure a knowledge store.
-+ Text Split and Text Merge are added for data chunking if you choose an embedding model. They are added for other non-embedding skills if the source field granularity is set to pages or sentences.
+The following skills might appear in a wizard-generated skillset.
-| Skills | Import data wizard | Import and vectorize data wizard |
-|------|--------------------|----------------------------------|
-| [AI Vision multimodal](cognitive-search-skill-vision-vectorize.md) | ❌ | ✅ |
-| [Azure OpenAI embedding](cognitive-search-skill-azure-openai-embedding.md) | ❌ | ✅ |
-| [Azure Machine Learning (Azure AI Foundry model catalog)](cognitive-search-aml-skill.md) | ❌ | ✅ |
-| [Document layout](cognitive-search-skill-document-intelligence-layout.md) | ❌ | ✅ |
-| [Entity recognition](cognitive-search-skill-entity-recognition-v3.md) | ✅ | ❌ |
-| [Image analysis (applies to blobs, default parsing, whole file indexing](cognitive-search-skill-image-analysis.md) | ✅ | ❌ |
-| [Keyword extraction](cognitive-search-skill-keyphrases.md) | ✅ | ❌ |
-| [Language detection](cognitive-search-skill-language-detection.md) | ✅ | ❌ |
+| Skill | Import data wizard | Import data (new) wizard |
+|--|--|--|
+| [Azure AI Vision multimodal](cognitive-search-skill-vision-vectorize.md) | ❌ | ✅ <sup>1</sup> |
+| [Azure OpenAI embedding](cognitive-search-skill-azure-openai-embedding.md) | ❌ | ✅ <sup>1</sup> |
+| [Azure Machine Learning (Azure AI Foundry model catalog)](cognitive-search-aml-skill.md) | ❌ | ✅ <sup>1</sup> |
+| [Document layout](cognitive-search-skill-document-intelligence-layout.md) | ❌ | ✅ <sup>1</sup> |
+| [Entity recognition](cognitive-search-skill-entity-recognition-v3.md) | ✅ | ✅ |
+| [Image analysis](cognitive-search-skill-image-analysis.md) <sup>2</sup> | ✅ | ✅ |
+| [Key phrase extraction](cognitive-search-skill-keyphrases.md) | ✅ | ✅ |
+| [Language detection](cognitive-search-skill-language-detection.md) | ✅ | ✅ |
| [Text translation](cognitive-search-skill-text-translation.md) | ✅ | ❌ |
-| [OCR (applies to blobs, default parsing, whole file indexing)](cognitive-search-skill-ocr.md) | ✅ | ✅ |
+| [OCR](cognitive-search-skill-ocr.md) <sup>2</sup> | ✅ | ✅ |
| [PII detection](cognitive-search-skill-pii-detection.md) | ✅ | ❌ |
| [Sentiment analysis](cognitive-search-skill-sentiment.md) | ✅ | ❌ |
-| [Shaper (applies to knowledge store)](cognitive-search-skill-shaper.md) | ✅ | ❌ |
-| [Text Split](cognitive-search-skill-textsplit.md) | ✅ | ✅ |
-| [Text Merge](cognitive-search-skill-textmerger.md) | ✅ | ✅ |
+| [Shaper](cognitive-search-skill-shaper.md) <sup>3</sup> | ✅ | ❌ |
+| [Text Split](cognitive-search-skill-textsplit.md) <sup>4</sup> | ✅ | ✅ |
+| [Text Merge](cognitive-search-skill-textmerger.md) <sup>4</sup> | ✅ | ✅ |
-### Knowledge store
+<sup>1</sup> This skill is available for RAG and multimodal RAG workflows only. Keyword search isn't supported.
-You can [generate a knowledge store](knowledge-store-create-portal.md) for secondary storage of enriched (skills-generated) content. You might want a knowledge store for information retrieval workflows that don't require a search engine.
+<sup>2</sup> This skill is available for Azure Storage blobs and OneLake files, assuming the default parsing mode. Images can be an image content type (such as PNG or JPG) or an embedded image in an application file (such as PDF).
-| Knowledge store | Import data wizard | Import and vectorize data wizard |
-|-----------------|--------------------|----------------------------------|
-| storage | ✅ | ❌ |
+<sup>3</sup> This skill is added when you configure a knowledge store.
-## What the wizards create
+<sup>4</sup> This skill is added for data chunking when you choose an embedding model. For nonembedding skills, it's added when you set the source field granularity to pages or sentences.
-The import wizards create the objects described in the following table. After the objects are created, you can review their JSON definitions in the Azure portal or call them from code.
+### Semantic ranker
-To view these objects after the wizard runs:
+You can [configure semantic ranking](semantic-how-to-configure.md) to improve the relevance of search results.
-1. [Sign in to the Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+| Capability | Import data wizard | Import data (new) wizard |
+|--|--|--|
+| Semantic ranker | ❌ | ✅ |
-1. Select **Search management** on the menu to find pages for indexes, indexers, data sources, and skillsets.
+### Knowledge store
+
+You can [generate a knowledge store](knowledge-store-create-portal.md) for secondary storage of enriched (skills-generated) content. A knowledge store is useful for information retrieval workflows that don't require a search engine.
+
+| Capability | Import data wizard | Import data (new) wizard |
+|--|--|--|
+| Knowledge store | ✅ | ❌ |
+
+## What the wizards create
+
+The following table lists the objects created by the wizards. After the objects are created, you can review their JSON definitions in the Azure portal or call them from code.
| Object | Description |
-|--------|-------------|
-| [Indexer](/rest/api/searchservice/indexers/create) | A configuration object specifying a data source, target index, an optional skillset, optional schedule, and optional configuration settings for error handing and base-64 encoding. |
-| [Data Source](/rest/api/searchservice/data-sources/create) | Persists connection information to a [supported data source](search-indexer-overview.md#supported-data-sources) on Azure. A data source object is used exclusively with indexers. |
-| [Index](/rest/api/searchservice/indexes/create) | Physical data structure used for full text search and other queries. |
-| [Skillset](/rest/api/searchservice/skillsets/create) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Skillsets are also used for integrated vectorization. Unless the volume of work fall under the limit of 20 transactions per indexer per day, the skillset must include a reference to an Azure AI services multi-service resource that provides enrichment. For integrated vectorization, you can use either Azure AI Vision or an embedding model in the Azure AI Foundry model catalog. |
-| [Knowledge store](knowledge-store-concept-intro.md) | Optional. Available only in the **Import data** wizard. Stores enriched skillset output from in tables and blobs in Azure Storage for independent analysis or downstream processing in nonsearch scenarios. |
+|--|--|
+| [Indexer](/rest/api/searchservice/indexers/create) | Configuration object that specifies a data source, target index, optional skillset, optional schedule, and optional configuration settings for error handling and base-64 encoding. |
+| [Data source](/rest/api/searchservice/data-sources/create) | Persists connection information to a [supported data source](search-indexer-overview.md#supported-data-sources) on Azure. A data source object is used exclusively with indexers. |
+| [Index](/rest/api/searchservice/indexes/create) | Physical data structure for full-text search, vector search, and other queries. |
+| [Skillset](/rest/api/searchservice/skillsets/create) | (Optional) Complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Skillsets are also used for integrated vectorization. If the volume of work exceeds 20 transactions per indexer per day, the skillset must include a reference to an Azure AI services multi-service resource that provides enrichment. For integrated vectorization, you can use either Azure AI Vision or an embedding model in the Azure AI Foundry model catalog. |
+| [Knowledge store](knowledge-store-concept-intro.md) | (Optional) Stores enriched skillset output from tables and blobs in Azure Storage for independent analysis or downstream processing in nonsearch scenarios. Available only in the **Import data** wizard. |
+
+To view these objects after the wizards run:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
+1. From the left pane, select **Search management** to find pages for indexes, indexers, data sources, and skillsets.
## Benefits
-Before writing any code, you can use the wizards for prototyping and proof-of-concept testing. The wizards connect to external data sources, sample the data to create an initial index, and then import and optionally vectorize the data as JSON documents into an index on Azure AI Search.
+Before you write any code, you can use the wizards for prototyping and proof-of-concept testing. The wizards connect to external data sources, sample the data to create an initial index, and then import and optionally vectorize the data as JSON documents into an index on Azure AI Search.
-If you're evaluating skillsets, the wizard handles output field mappings and adds helper functions to create usable objects. Text split is added if you specify a parsing mode. Text merge is added if you chose image analysis so that the wizard can reunite text descriptions with image content. Shaper skills are added to support valid projections if you chose the knowledge store option. All of the above tasks come with a learning curve. If you're new to enrichment, the ability to have these steps handled for you allows you to measure the value of a skill without having to invest much time and effort.
+If you're evaluating skillsets, the wizards handle output field mappings and add helper functions to create usable objects. [Text Split](cognitive-search-skill-textsplit.md) is added when you specify a parsing mode. [Text Merge](cognitive-search-skill-textmerger.md) is added when you choose image analysis so that the wizards can reunite text descriptions with image content. [Shaper](cognitive-search-skill-shaper.md) is added to support valid projections when you choose the knowledge store option. All of these tasks come with a learning curve. If you're new to enrichment, having these steps handled for you allows you to measure the value of a skill without investing much time and effort.
-Sampling is the process by which an index schema is inferred, and it has some limitations. When the data source is created, the wizard picks a random sample of documents to decide what columns are part of the data source. Not all files are read, as this could potentially take hours for very large data sources. Given a selection of documents, source metadata, such as field name or type, is used to create a fields collection in an index schema. Depending on the complexity of source data, you might need to edit the initial schema for accuracy, or extend it for completeness. You can make your changes inline on the index definition page.
+Sampling is the process by which an index schema is inferred, which has some limitations. When the data source is created, the wizards pick a random sample of documents to decide what columns are part of the data source. Not all files are read, as doing so could potentially take hours for large data sources. Given a selection of documents, source metadata (such as field name or type) is used to create a fields collection in an index schema. Based on the complexity of the source data, you might need to edit the initial schema for accuracy or extend it for completeness. You can make your changes inline on the index definition page.
-Overall, the advantages of using the wizard are clear: as long as requirements are met, you can create a queryable index within minutes. Some of the complexities of indexing, such as serializing data as JSON documents, are handled by the wizards.
+Overall, the advantages of the wizards are clear: as long as requirements are met, you can create a queryable index within minutes. The wizards handle some of the complexities of indexing, such as serializing data as JSON documents.
## Limitations
-The import wizards aren't without limitations. Constraints are summarized as follows:
+The wizards have the following limitations:
-+ The wizards don't support iteration or reuse. Each pass through the wizard creates a new index, skillset, and indexer configuration. Only data sources can be persisted and reused within the wizard. To edit or refine other objects, either delete the objects and start over, or use the REST APIs or .NET SDK to modify the structures.
++ The wizards don't support iteration or reuse. Each pass through the wizards creates an index, skillset, and indexer configuration. You can reuse data sources only in the **Import data** wizard. After you finish the wizards, you can edit the created objects by using other portal tools, the REST APIs, or the Azure SDKs.
+ Source content must reside in a [supported data source](search-indexer-overview.md#supported-data-sources).
-+ Sampling is over a subset of source data. For large data sources, it's possible for the wizard to miss fields. You might need to extend the schema, or correct the inferred data types, if sampling is insufficient.
++ Sampling occurs over a subset of source data. For large data sources, it's possible for the wizards to miss fields. If sampling is insufficient, you might need to extend the schema or correct the inferred data types.
-+ AI enrichment, as exposed in the Azure portal, is limited to a subset of built-in skills.
++ [AI enrichment](cognitive-search-concept-intro.md), as exposed in the Azure portal, is limited to a subset of built-in skills.
-+ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the **Import data** wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you'll need to create the knowledge store through REST API or the SDKs.
++ A [knowledge store](knowledge-store-concept-intro.md), which is only available through the **Import data** wizard, is limited to a few default projections and uses a default naming convention. To customize projections and names, you must create the knowledge store through the REST APIs or Azure SDKs.
## Secure connections
-The import wizards make outbound connections using the Azure portal controller and public endpoints. You can't use the wizards if Azure resources are accessed over a private connection or through a shared private link.
+The wizards use the Azure portal controller and public endpoints to make outbound connections. You can't use the wizards if Azure resources are accessed over a private connection or through a shared private link.
You can use the wizards over restricted public connections, but not all functionality is available.
+ On a search service, importing the built-in sample data requires a public endpoint and no firewall rules.
- Sample data is hosted by Microsoft on specific Azure resources. The Azure portal controller connects to those resources over a public endpoint. If you put your search service behind a firewall, you get this error when attempting to retrieve the builtin sample data: `Import configuration failed, error creating Data Source`, followed by `"An error has occured."`.
+ Microsoft hosts the sample data on specific Azure resources. The Azure portal controller connects to these resources over a public endpoint. If your search service is behind a firewall, you get the following error when you attempt to retrieve the sample data: `Import configuration failed, error creating Data Source`, followed by `"An error has occured."`.
+ On supported Azure data sources protected by firewalls, you can retrieve data if you have the right firewall rules in place.
The Azure resource must admit network requests from the IP address of the device used on the connection. You should also list Azure AI Search as a trusted service on the resource's network configuration. For example, in Azure Storage, you can list `Microsoft.Search/searchServices` as a trusted service.
-+ On connections to an Azure AI services multi-service account that you provide, or on connections to embedding models deployed in [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) or Azure OpenAI, public internet access must be enabled unless your search service meets the creation date, tier, and region requirements for private connections. For more information about these requirements, see [Make outbound connections through a shared private link](search-indexer-howto-access-private.md).
++ On connections to an Azure AI services multi-service account that you provide, or on connections to embedding models deployed in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) or Azure OpenAI, public internet access must be enabled unless your search service meets the creation date, tier, and region requirements for private connections. For more information, see [Make outbound connections through a shared private link](search-indexer-howto-access-private.md).
- Connections to Azure AI services multi-service are for [billing purposes](cognitive-search-attach-cognitive-services.md). Billing occurs when API calls exceed the free transaction count (20 per indexer run) for built-in skills called by the **Import data** wizard or integrated vectorization in the **Import and vectorize data wizard**.
+ Connections to Azure AI services multi-service accounts are for [billing purposes](cognitive-search-attach-cognitive-services.md). You're billed when API calls for built-in skills (in the **Import data** wizard or the keyword search workflow in the **Import data (new)** wizard) and integrated vectorization (in the **Import data (new)** wizard) exceed the free transaction count (20 per indexer run).
If Azure AI Search can't connect:
- + In the **Import and vectorize data wizard**, the error is `"Access denied due to Virtual Network/Firewall rules."`
+ + In the **Import data (new)** wizard, the error is `"Access denied due to Virtual Network/Firewall rules."`.
+ In the **Import data** wizard, there's no error, but the skillset won't be created.
If firewall settings prevent your wizard workflows from succeeding, consider scripted or programmatic approaches instead.
## Workflow
-The wizard is organized into four main steps:
+Both wizards follow a similar high-level workflow:
1. Connect to a supported Azure data source.
-1. Create an index schema, inferred by sampling source data.
+1. (Optional) Add skills to extract or generate content and structure.
-1. Optionally, it adds skills to extract or generate content and structure. Inputs for creating a knowledge store are collected in this step.
+1. Create an index schema, inferred by sampling source data.
-1. Run the wizard to create objects, optionally vectorize data, load data into an index, set a schedule and other configuration options.
+1. Run the wizard to create objects, optionally vectorize data, load data into an index, set a schedule, and configure other options.
-The workflow is a pipeline, so it's one way. You can't use the wizard to edit any of the objects that were created, but you can use other portal tools, such as the index or indexer designer or the JSON editors, for allowed updates.
+The workflow is a one-way pipeline. You can't use the wizard to edit any of the objects that were created, but you can use other portal tools, such as the index designer, indexer designer, or JSON editors, to make allowed updates.
### Starting the wizards
-Here's how you start the wizards.
+To start the wizards:
-1. In the [Azure portal](https://portal.azure.com), open the search service page from the dashboard or [find your service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in the service list.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your search service.
-1. On the **Overview** page, select **Import data** or **Import and vectorize data wizard**.
+1. On the **Overview** page, select **Import data** or **Import data (new)**.
- :::image type="content" source="media/search-import-data-portal/import-data-cmd.png" alt-text="Screenshot of the import wizard options." border="true":::
+ :::image type="content" source="media/search-import-data-portal/import-wizards.png" alt-text="Screenshot of the import wizard options." border="true":::
- The wizards open fully expanded in the browser window so that you have more room to work.
+ The wizards open fully expanded in the browser window, giving you more room to work.
-1. If you selected **Import data**, you can select the **Samples** option to index a Microsoft-hosted dataset from a supported data source.
+1. If you selected **Import data**, you can select **Samples** to index a Microsoft-hosted dataset from a supported data source.
:::image type="content" source="media/search-what-is-an-index/add-index-import-samples.png" alt-text="Screenshot of the import data page with the samples option selected." border="true":::
-1. Follow the remaining steps in the wizard to create the index and indexer.
+1. Follow the remaining steps to create the index, indexer, and other applicable objects.
-You can also launch **Import data** from other Azure services, including Azure Cosmos DB, Azure SQL Database, SQL Managed Instance, and Azure Blob Storage. Look for **Add Azure AI Search** in the left-navigation pane on the service overview page.
+You can also launch **Import data** from other Azure services, including Azure Cosmos DB, Azure SQL Database, SQL Managed Instance, and Azure Blob Storage. Look for **Add Azure AI Search** in the left pane on the service overview page.
<a name="data-source-inputs"></a>
### Data source configuration in the wizard
-The wizards connect to an external [supported data source](search-indexer-overview.md#supported-data-sources) using the internal logic provided by Azure AI Search indexers, which are equipped to sample the source, read metadata, crack documents to read content and structure, and serialize contents as JSON for subsequent import to Azure AI Search.
+The wizards connect to an external [supported data source](search-indexer-overview.md#supported-data-sources) using the internal logic provided by indexers, which are equipped to sample the source, read metadata, crack documents to read content and structure, and serialize contents as JSON for subsequent import to Azure AI Search.
-You can paste in a connection to a supported data source in a different subscription or region, but the **Choose an existing connection** picker is scoped to the active subscription.
+In the **Import data** wizard, you can paste a connection to a supported data source in a different subscription or region, but the **Choose an existing connection** picker is scoped to the active subscription.
:::image type="content" source="media/search-import-data-portal/choose-connection-same-subscription.png" alt-text="Screenshot of the Connect to your data tab." border="true":::
-Not all preview data sources are guaranteed to be available in the wizard. Because each data source has the potential for introducing other changes downstream, a preview data source will only be added to the data sources list if it fully supports all of the experiences in the wizard, such as skillset definition and index schema inference.
+Not all preview data sources are guaranteed to be available in the wizards. Because each data source has the potential to introduce changes downstream, a preview data source is only added when it fully supports all of the wizard's experiences, such as skillset definition and index schema inference.
-You can only import from a single table, database view, or equivalent data structure, however the structure can include hierarchical or nested substructures. For more information, see [How to model complex types](search-howto-complex-data-types.md).
+You can only import from a single table, database view, or equivalent data structure. However, the structure can include hierarchical or nested substructures. For more information, see [How to model complex types](search-howto-complex-data-types.md).
### Skillset configuration in the wizard
-Skillset configuration occurs after the data source definition because the type of data source informs the availability of certain built-in skills. In particular, if you're indexing files from Blob storage, your choice of parsing mode of those files determine whether sentiment analysis is available.
+Skillset configuration occurs after the data source definition because the type of data source informs the availability of certain built-in skills. For example, if you're indexing files from Azure Blob Storage, the parsing mode you choose for those files determines whether sentiment analysis is available.
-The wizard adds the skills you choose. It also adds other skills that are necessary for achieving a successful outcome. For example, if you specify a knowledge store, the wizard adds a Shaper skill to support projections (or physical data structures).
+The wizards add not only skills you choose but also skills that are necessary for a successful outcome. For example, if you specify a knowledge store in the **Import data** wizard, the wizard adds a Shaper skill to support projections or physical data structures.
-Skillsets are optional and there's a button at the bottom of the page to skip ahead if you don't want AI enrichment.
+Skillsets are optional, and there's a button at the bottom of the page to skip ahead if you don't want AI enrichment.
<a name="index-definition"></a>
### Index schema configuration in the wizard
-The wizards sample your data source to detect the fields and field type. Depending on the data source, it might also offer fields for indexing metadata.
+The wizards sample your data source to detect the fields and field types. Depending on the data source, they might also offer fields for indexing metadata.
Because sampling is an imprecise exercise, review the index for the following considerations:
-1. Is the field list accurate? If your data source contains fields that weren't picked up in sampling, you can manually add any new fields that sampling missed, and remove any that don't add value to a search experience or that won't be used in a [filter expression](search-query-odata-filter.md) or [scoring profile](index-add-scoring-profiles.md).
+1. Is the field list accurate? If your data source contains fields that weren't picked up in sampling, you can manually add the missed fields. You can also remove fields that don't add value to the search experience or won't be used in a [filter expression](search-query-odata-filter.md) or [scoring profile](index-add-scoring-profiles.md).
-1. Is the data type appropriate for the incoming data? Azure AI Search supports the [entity data model (EDM) data types](/rest/api/searchservice/supported-data-types). For Azure SQL data, there's [mapping chart](search-how-to-index-sql-database.md#TypeMapping) that lays out equivalent values. For more background, see [Field mappings and transformations](search-indexer-field-mappings.md).
+1. Is the data type appropriate for the incoming data? Azure AI Search supports the [entity data model (EDM) data types](/rest/api/searchservice/supported-data-types). For Azure SQL data, there's a [mapping chart](search-how-to-index-sql-database.md#TypeMapping) that lays out equivalent values. For more information, see [Field mappings and transformations](search-indexer-field-mappings.md).
-1. Do you have one field that can serve as the *key*? This field must be Edm.string and it must uniquely identify a document. For relational data, it might be mapped to a primary key. For blobs, it might be the `metadata-storage-path`. If field values include spaces or dashes, you must set the **Base-64 Encode Key** option in the **Create an Indexer** step, under **Advanced options**, to suppress the validation check for these characters.
+1. Do you have one field that can serve as the *key*? This field must be an Edm.String that uniquely identifies a document. For relational data, it might be mapped to a primary key. For blobs, it might be the `metadata-storage-path`. If field values include spaces or dashes, you must set the **Base-64 Encode Key** option in the **Create an indexer** step, under **Advanced options**, to suppress the validation check for these characters.
1. Set attributes to determine how that field is used in an index.
- Take your time with this step because attributes determine the physical expression of fields in the index. If you want to change attributes later, even programmatically, you'll almost always need to drop and rebuild the index. Core attributes like **Searchable** and **Retrievable** have a [negligible effect on storage](search-what-is-an-index.md#index-size). Enabling filters and using suggesters increase storage requirements.
+ Take your time with this step because attributes determine the physical expression of fields in the index. If you want to change attributes later, even programmatically, you almost always need to drop and rebuild the index. Core attributes like **Searchable** and **Retrievable** have a [negligible effect on storage](search-what-is-an-index.md#index-size). Enabling filters and using suggesters increase storage requirements.
- + **Searchable** enables full-text search. Every field used in free form queries or in query expressions must have this attribute. Inverted indexes are created for each field that you mark as **Searchable**.
+ + **Searchable** enables full-text search. Every field used in free-form queries or in query expressions must have this attribute. Inverted indexes are created for each field that you mark as **Searchable**.
+ **Retrievable** returns the field in search results. Every field that provides content to search results must have this attribute. Setting this field doesn't appreciably affect index size.
- + **Filterable** allows the field to be referenced in filter expressions. Every field used in a **$filter** expression must have this attribute. Filter expressions are for exact matches. Because text strings remain intact, more storage is required to accommodate the verbatim content.
+ + **Filterable** allows the field to be referenced in filter expressions. Every field used in a **$filter** expression must have this attribute. Filter expressions are for exact matches. Because text strings remain intact, more storage is required to accommodate the verbatim content.
+ **Facetable** enables the field for faceted navigation. Only fields also marked as **Filterable** can be marked as **Facetable**.
+ **Sortable** allows the field to be used in a sort. Every field used in an **$Orderby** expression must have this attribute.
-1. Do you need [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis)? For Edm.string fields that are **Searchable**, you can set an **Analyzer** if you want language-enhanced indexing and querying.
+1. Do you need [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis)? For Edm.String fields that are **Searchable**, you can set an **Analyzer** if you want language-enhanced indexing and querying.
- The default is *Standard Lucene* but you could choose *Microsoft English* if you wanted to use Microsoft's analyzer for advanced lexical processing, such as resolving irregular noun and verb forms. Only language analyzers can be specified in the Azure portal. If you use a custom analyzer or a non-language analyzer like Keyword, Pattern, and so forth, you must create it programmatically. For more information about analyzers, see [Add language analyzers](search-language-support.md).
+ The default is *Standard Lucene*, but you can choose *Microsoft English* if you wanted to use Microsoft's analyzer for advanced lexical processing, such as resolving irregular noun and verb forms. Only language analyzers can be specified in the Azure portal. If you want to use a custom analyzer or non-language analyzer, such as Keyword or Pattern, you must create it programmatically. For more information, see [Add language analyzers](search-language-support.md).
-1. Do you need typeahead functionality in the form of autocomplete or suggested results? Select the **Suggester** the checkbox to enable [typeahead query suggestions and autocomplete](index-add-suggesters.md) on selected fields. Suggesters add to the number of tokenized terms in your index, and thus consume more storage.
+1. Do you need typeahead functionality in the form of autocomplete or suggested results? Select the **Suggester** checkbox to enable [typeahead query suggestions and autocomplete](index-add-suggesters.md) on selected fields. Suggesters add to the number of tokenized terms in your index and thus consume more storage.
### Indexer configuration in the wizard
-The last page of the wizard collects user inputs for indexer configuration. You can [specify a schedule](search-howto-schedule-indexers.md) and set other options that will vary by the data source type.
+The last page of the wizard collects user inputs for indexer configuration. You can [specify a schedule](search-howto-schedule-indexers.md) and set other options that vary by the data source type.
-Internally, the wizard also sets up the following definitions, which aren't visible in the indexer until after it's created:
+Internally, the wizard sets up the following definitions, which aren't visible in the indexer until after it's created.
-+ [field mappings](search-indexer-field-mappings.md) between the data source and index
-+ [output field mappings](cognitive-search-output-field-mapping.md) between skill output and an index
++ [Field mappings](search-indexer-field-mappings.md) between the data source and index.
++ [Output field mappings](cognitive-search-output-field-mapping.md) between the skill output and an index.
## Try the wizards
-The best way to understand the benefits and limitations of the wizard is to step through it. Here are some quickstarts that are based on the wizard.
+The best way to understand the benefits and limitations of the wizards is to step through them. The following quickstarts are based on the wizards.
+ [Quickstart: Create a search index](search-get-started-portal.md)
+ [Quickstart: Create a text translation and entity skillset](search-get-started-skillset.md)
+ [Quickstart: Create a vector index](search-get-started-portal-import-vectors.md)
-+ [Quickstart: Image search (vectors)](search-get-started-portal-image-search.md)
++ [Quickstart: Create a multimodal index](search-get-started-portal-image-search.md)
Summary
{
"modification_type": "major update",
"modification_title": "Azure Portal インポートウィザードの改善"
}
Explanation
This change is a major update to the search-import-data-portal.md file that adds detailed information about the import wizards in the Azure portal and improves the existing content. The update clarifies what each wizard does and how to use it, making the article more useful and easier to understand.
The main changes are as follows:
Title and description fixes: The title becomes "Import Wizards in the Azure portal" and the description is made more specific, which clarifies the role of the wizards.
Introduction of the new wizard: The "Import and vectorize data wizard" is renamed the "Import data (new) wizard", its capabilities are called out, and the differences from the older wizard are explained in detail.
Updated data source support: The data sources available in the old and new wizards are listed in a table, so readers can see at a glance which data source works with which wizard.
Capability comparison: A table comparing the capabilities of the two wizards is added, making the strengths of each easy to understand and helping users pick the wizard that fits their needs.
Sample data: The Microsoft-hosted sample data is highlighted so that users can try the wizards with minimal setup.
Workflow updates: The workflow steps are described concretely, clarifying how a pass through a wizard proceeds.
Overall, this update gives the article much richer information about the import wizards in the Azure portal and is intended to improve the user experience. The new wizard enables more efficient and effective indexing and data handling, so users can get value quickly.
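As a small, hedged illustration of the index schema concepts the updated article walks through (key field, attributes, analyzers, and suggesters), an index definition created outside the wizard might look like this. None of this comes from the diff; the index and field names are placeholders:

```http
POST https://[service name].search.windows.net/indexes?api-version=2024-07-01
Content-Type: application/json
api-key: [admin key]

{
  "name": "wizard-style-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "title", "type": "Edm.String", "searchable": true, "retrievable": true },
    { "name": "content", "type": "Edm.String", "searchable": true, "retrievable": true, "analyzer": "en.microsoft" },
    { "name": "category", "type": "Edm.String", "filterable": true, "facetable": true, "sortable": true }
  ],
  "suggesters": [
    { "name": "sg", "searchMode": "analyzingInfixMatching", "sourceFields": [ "title" ] }
  ]
}
```

Here *en.microsoft* stands in for the Microsoft English analyzer mentioned in the article, and the suggester enables typeahead on the *title* field.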
articles/search/search-indexer-howto-access-private.md
Diff
@@ -405,7 +405,7 @@ After the indexer is created successfully, it should connect to the Azure resour
1. If you haven't done so already, verify that your Azure PaaS resource refuses connections from the public internet. If connections are accepted, review the DNS settings in the **Networking** page of your Azure PaaS resource.
-1. Choose a tool that can invoke an outbound request scenario, such as an indexer connection to a private endpoint. An easy choice is using the **Import data** wizard, but you can also try a REST client and REST APIs for more precision. Assuming that your search service isn't also configured for a private connection, the REST client connection to search can be over the public internet.
+1. Choose a tool that can invoke an outbound request scenario, such as an indexer connection to a private endpoint. An easy choice is using an [import wizard](search-get-started-portal.md), but you can also try a REST client and REST APIs for more precision. Assuming that your search service isn't also configured for a private connection, the REST client connection to search can be over the public internet.
1. Set the connection string to the private Azure PaaS resource. The format of the connection string doesn't change for shared private link. The search service invokes the shared private link internally.
Summary
{
"modification_type": "minor update",
"modification_title": "インポートウィザードへのリンクの追加"
}
Explanation
This change is a minor update to the search-indexer-howto-access-private.md file that clarifies part of the procedure for connecting an Azure indexer to a private endpoint.
Specifically, the mention of the "Import data" wizard in the connection steps is upgraded from plain text to a linked reference, so users can reach the wizard directly and move through the indexing work more smoothly. Concretely, the following change was made:
The standalone reference to the "Import data" wizard is replaced with a link to the relevant page, [import wizard](search-get-started-portal.md). With this change, users can quickly access the wizard details and better understand the index creation steps.
Overall, the fix improves usability and delivers the information about configuring and using indexers more effectively.
articles/search/search-manage.md
Diff
@@ -61,7 +61,7 @@ For integrated vectorization, your search service identity needs the following r
Role assignments can take several minutes to take effect.
-Before you move on to network security, consider testing all points of connection to validate role assignments. Run either the [**Import data** wizard](search-get-started-portal.md) or the [**Import and vectorize data wizard**](search-get-started-portal-image-search.md) to test permissions.
+Before you move on to network security, consider testing all points of connection to validate role assignments. Run an [import wizard](search-get-started-portal.md) to test permissions.
## Configure network security
@@ -101,7 +101,7 @@ To connect to Azure AI Search, developers need:
+ An endpoint or URL from the **Overview** page.
+ An API key from the **Keys** page or a role assignment. We recommend Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader.
-We recommend portal access for the [**Import data** wizard](search-get-started-portal.md), the [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md), and [Search explorer](search-explorer.md). You must be a contributor or higher to run the wizards.
+We recommend portal access for the [import wizards](search-get-started-portal.md) and [Search explorer](search-explorer.md). You must be a contributor or higher to run the wizards.
## Related content
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードへのリンク表現の簡略化"
}
Explanation
This change is a minor update to the search-manage.md file that simplifies the references to the Azure import wizards.
Specifically, the following changed:
Simplified wizard references: The phrases "Import data wizard" and "Import and vectorize data wizard" become "import wizard" and "import wizards", which reads more smoothly and reduces redundancy.
Links preserved: The links themselves are kept, so users can continue to reach the relevant wizards easily.
The update is intended to improve readability so that users can absorb the information quickly, and it encourages use of an import wizard when testing role assignments. Overall, it's a small but meaningful change that improves the consistency and flow of the content.
articles/search/search-what-is-data-import.md
Diff
@@ -88,14 +88,14 @@ Indexers connect an index to a data source (usually a table, view, or equivalent
Use the following tools and APIs for indexer-based indexing:
-+ [Import data wizard or Import and vectorize data wizard](search-import-data-portal.md)
++ Azure portal: [Import wizards](search-import-data-portal.md)
+ REST APIs: [Create Indexer (REST)](/rest/api/searchservice/indexers/create), [Create Data Source (REST)](/rest/api/searchservice/data-sources/create), [Create Index (REST)](/rest/api/searchservice/indexes/create)
+ Azure SDK for .NET: [SearchIndexer](/dotnet/api/azure.search.documents.indexes.models.searchindexer), [SearchIndexerDataSourceConnection](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection), [SearchIndex](/dotnet/api/azure.search.documents.indexes.models.searchindex),
+ Azure SDK for Python: [SearchIndexer](/python/api/azure-search-documents/azure.search.documents.indexes.models.searchindexer), [SearchIndexerDataSourceConnection](/python/api/azure-search-documents/azure.search.documents.indexes.models.searchindexerdatasourceconnection), [SearchIndex](/python/api/azure-search-documents/azure.search.documents.indexes.models.searchindex),
+ Azure SDK for Java: [SearchIndexer](/java/api/com.azure.search.documents.indexes.models.searchindexer), [SearchIndexerDataSourceConnection](/java/api/com.azure.search.documents.indexes.models.searchindexerdatasourceconnection), [SearchIndex](/java/api/com.azure.search.documents.indexes.models.searchindex),
+ Azure SDK for JavaScript: [SearchIndexer](/javascript/api/@azure/search-documents/searchindexer), [SearchIndexerDataSourceConnection](/javascript/api/@azure/search-documents/searchindexerdatasourceconnection), [SearchIndex](/javascript/api/@azure/search-documents/searchindex),
-Indexer functionality is exposed in the [Azure portal], the [REST API](/rest/api/searchservice/indexers/create), and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.searchindexerclient).
+Indexer functionality is exposed in the Azure portal, the [REST API](/rest/api/searchservice/indexers/create), and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.searchindexerclient).
An advantage to using the Azure portal is that Azure AI Search can usually generate a default index schema by reading the metadata of the source dataset.
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの表現の改善"
}
Explanation
This change is a minor revision to the search-what-is-data-import.md file that refines how the data import wizards are described.
Specifically, the following points were changed:
Adjusted wizard naming: The phrase "Import data wizard or Import and vectorize data wizard" becomes "Azure portal: Import wizards," which defines the wizards' role more clearly and ties them to the Azure portal.
Aligned list items: The list of tools and APIs is now consistently formatted, so readers can see at a glance how the portal, REST APIs, and SDKs relate to indexer-based indexing.
Consistent wording: Indexer functionality is described alongside concrete entry points such as the Azure portal, the REST API, and the .NET SDK, and the stray link brackets around "Azure portal" are removed.
The update makes it easier to find and use the right tool from the documentation. For readers who skip the portal entirely, a minimal Create Indexer request using the REST API listed above is sketched after this explanation.
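The following sketch shows what the indexer-based approach might look like through the Create Indexer REST API listed above. It's a minimal request under assumed names: the service, data source, index, and schedule values are placeholders, not values taken from this diff.

```http
POST https://{service-name}.search.windows.net/indexers?api-version=2024-07-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "name": "hotels-blob-indexer",
  "dataSourceName": "hotels-blob-datasource",
  "targetIndexName": "hotels-index",
  "schedule": { "interval": "PT2H" }
}
```

The data source and index referenced here would be created first with the Create Data Source and Create Index APIs from the same list; the indexer then connects the two and runs on the assumed two-hour schedule.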
articles/search/tutorial-rag-build-solution-pipeline.md
Diff
@@ -30,7 +30,7 @@ In this tutorial, you:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
> [!TIP]
-> You can use the [Import and vectorize data wizard](search-import-data-portal.md) to create your pipeline. Try some quickstarts [Image search](search-get-started-portal-image-search.md) or [Vector search](search-get-started-portal-import-vectors.md), to learn more about the pipeline and its moving parts.
+> You can use the [**Import data (new)** wizard](search-import-data-portal.md) to create your pipeline. Try some quickstarts [Image search](search-get-started-portal-image-search.md) or [Vector search](search-get-started-portal-import-vectors.md), to learn more about the pipeline and its moving parts.
## Prerequisites
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称の更新"
}
Explanation
This change is a minor revision to the tutorial-rag-build-solution-pipeline.md file that updates the name of the wizard used to build the pipeline.
Specifically, the following points were changed:
Wizard renamed: The phrase "Import and vectorize data wizard" becomes "Import data (new) wizard," so the current wizard is named explicitly and readers can identify the exact tool to use.
Emphasis added: The new wizard name is bolded so it stands out visually and draws attention to the key information.
The update helps readers find and use the correct wizard quickly and get the most out of the Azure features involved. Small changes like this improve the usefulness and clarity of the documentation.
articles/search/vector-search-filters.md
Diff
@@ -9,7 +9,7 @@ ms.update-cycle: 180-days
ms.custom:
+ ignite-2023
ms.topic: how-to
-ms.date: 08/28/2025
+ms.date: 09/16/2025
---
# Add a filter to a vector query in Azure AI Search
@@ -19,37 +19,31 @@ ms.date: 08/28/2025
>
> `prefilter` and `postfilter` are generally available in the [latest stable REST API version](/rest/api/searchservice/search-service-api-versions).
-In Azure AI Search, you can use a [filter expression](search-filters.md) to add inclusion or exclusion criteria to a vector query. You can also specify a filtering mode that applies the filter:
+In Azure AI Search, you can use a [filter expression](search-filters.md) to add inclusion or exclusion criteria to a [vector query](vector-search-how-to-query.md). You can also specify a filtering mode that applies the filter:
+ Before query execution, known as *prefiltering*.
+ After query execution, known as *postfiltering*.
+ After the global top-`k` results are identified, known as *strict postfiltering* (preview).
This article uses REST for illustration. For code samples in other languages and end-to-end solutions that include vector queries, see the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples) GitHub repository.
-You can also use [Search Explorer](search-get-started-portal-import-vectors.md#check-results) in the Azure portal to query vector content. If you use the JSON view, you can add filters and specify the filter mode.
+You can also use [Search Explorer](search-get-started-portal-import-vectors.md#check-results) in the Azure portal to query vector content. In the JSON view, you can add filters and specify the filter mode.
## How filtering works in vector queries
-When performing Approximate Nearest Neighbor (ANN) search using **Hierarchical Navigable Small World (HNSW)** algorithm, Azure AI Search stores HNSW graphs across multiple shards. Each shard contains a portion of the entire index. The different filtering options control where filter operations are applied within the stages of search, which will affect how the results are filtered down to a subset of items (e.g., by category, tag, or other attributes) and impact latency, recall, and throughput.
+Azure AI Search uses the Hierarchical Navigable Small World (HNSW) algorithm for Approximate Nearest Neighbor (ANN) search, storing HNSW graphs across multiple shards. Each shard contains a portion of the entire index.
-Filters apply to `filterable` *nonvector* fields, either string or numeric, to include or exclude search documents based on filter criteria. Although vector fields themselves aren't filterable, you can use filters on nonvector fields in the same index to include or exclude documents that contain vector fields you're searching on.
+Filters apply to `filterable` *nonvector* fields, either string or numeric, to include or exclude search documents based on filter criteria. Vector fields themselves aren't filterable, but you can use filters on other fields in the same index to narrow the documents considered for vector search. If your index lacks suitable text or numeric fields, check for document metadata that might help with filtering, such as `LastModified` or `CreatedBy` properties.
-If your index lacks suitable text or numeric fields, check for document metadata that might be useful in filtering, such as `LastModified` or `CreatedBy` properties.
+The `vectorFilterMode` parameter controls where filter operations are applied during the stages of search, which affects how the results are filtered to a subset of items (such as by category, tag, or other attributes) and impacts latency, recall, and throughput. There are three modes:
-The `vectorFilterMode` parameter controls when the filter is applied in the vector search process, with `k` setting the maximum number of nearest neighbors to return. Depending on the filter mode and how selective your filter is, fewer than `k` results might be returned.
++ `preFilter` applies the filter *during* HNSW traversal on each shard. This mode maximizes recall but can traverse more of the graph, increasing CPU and latency for highly selective filters.
-Azure AI Search supports three types of filtering during vector search: `preFilter` (default), `postFilter`, and `strictPostFilter`.
++ `postFilter` runs HNSW traversal and filtering on each shard independently, intersects results at the shard level, and then aggregates the top `k` from each shard into a global top `k`. This mode can create false negatives for highly selective filters or small `k` values.
-> [!NOTE]
-> On older indexes created before approximately October 15, 2023, `preFilter` is not available. For these indexes, `postFilter` will be the default. In order to use `preFilter` and other advanced vector features, such as vector compression, you will need to recreate your index. You can test compatibility by sending a vector query with `vectorFilterMode: preFilter` on API version later than `2023-10-01-preview` and observe whether it fails.
-
-In summary, the three approaches are described below:
-* **Pre-filter:** apply the predicate *during* HNSW traversal on each shard. Highest recall for filtered queries, but may traverse more of the graph (higher CPU/latency) when the filter has high selectivity.
-* **Post-filter:** run the HNSW traversal and the filtering independently on each shard, then intersect results at shard level, and aggregate top-k from each shard into a global top-k. For higher selectivity filters or small `k`, this can create false negatives.
-* **Strict post-filter:** run HNSW traversal to find the unfiltered global top-k, then apply the filter. Highest chance of returning false negatives when `k` is small or the filter has high selectivity.
++ `strictPostFilter` (preview) finds the unfiltered global top `k` *before* applying the filter. This mode has the highest risk of returning false negatives for highly selective filters and small `k` values.
-For both *post-filtering* options, instead of controlling the number of results only using `k`, it is recommended to control it using `top` and increase `k`, because this reduces the likelihood of false negatives. It is also recommended to avoid both post-filtering options for high-selectivity filters (which match very few documents) because the initial set of candidates may not surface enough documents which satisfy the filter.
+For more information about these modes, see [Set the filter mode](#set-the-filter-mode).
## Define a filter
@@ -58,7 +52,7 @@ Filters determine the scope of vector queries and are defined using [Documents -
This REST API provides:
+ `filter` for the criteria.
-+ `vectorFilterMode` to specify when the filter should be applied during the query. For supported modes, see the next section.
++ `vectorFilterMode` to specify when the filter is applied during the vector query. For supported modes, see [Set the filter mode](#set-the-filter-mode).
```http
POST https://{search-endpoint}/indexes/{index-name}/docs/search?api-version={api-version}
@@ -89,62 +83,104 @@ api-key: {admin-api-key}
In this example, the vector embedding targets the `contentVector` field, and the filter criteria apply to `category`, a filterable text field. Because the `preFilter` mode is used, the filter is applied before the search engine runs the query, so only documents in the `Databases` category are considered during the vector search.
-## Understanding Pre-Filter, Post-Filter, and Strict Post-Filter in HNSW Vector Search
+## Set the filter mode
-The `vectorFilterMode` parameter determines when and how the filter is applied relative to vector query execution. There are three modes:
+The `vectorFilterMode` parameter determines when and how the filter is applied relative to vector query execution. You can use the following modes:
-+ `preFilter` (default for indexes created after approximately October 15, 2023) - **recommended**
-+ `postFilter` (default for indexes created before approximately October 15, 2023)
++ `preFilter` (recommended)
++ `postFilter`
+ `strictPostFilter` (preview)
-### Pre-filter
+> [!NOTE]
+> `preFilter` is the default for indexes created after approximately October 15, 2023. For indexes created before this date, `postFilter` is the default. To use `preFilter` and other advanced vector features, such as vector compression, you must recreate your index.
+>
+> You can test compatibility by sending a vector query with `"vectorFilterMode": "preFilter"` on the `2023-10-01-preview` REST API version or later. If the query fails, your index doesn't support `preFilter`.
+
+### [preFilter](#tab/prefilter-mode)
+
+Prefiltering applies filters before query execution, which reduces the candidate set for the vector search algorithm. The top-`k` results are then selected from this filtered set.
+
+In a vector query, `preFilter` is the default mode because it favors recall and quality over latency.
-Pre-filtering applies filters before query execution, which reduces the candidate set for the vector search algorithm. The top-`k` results are then selected from this filtered set. In a vector query, `preFilter` is the default mode because it favors recall and quality over latency.
+#### How this mode works
-1. On each shard, during HNSW traversal, apply the filter predicate when considering candidates, expanding the graph traversal until `k` candidates are found.
-1. Pre-filtered local top-k results are produced per shard, which are aggregated into the global top-k.
+1. On each shard, apply the filter predicate *during* HNSW traversal, expanding the graph until `k` candidates are found.
-**Effect:** Traversal expands the search surface to find more filtered candidates (especially if filter is selective), producing the most similar top-k results across all shards. Each shard will identify `k` number of results which satisfy the filter predicate. Pre-filter guarantees `k` results are returned if they exist in the index. For high selectivity filters, this could cause a significant portion of the graph to be traversed, increasing computation cost and latency and reducing throughput. If your filter has a very high selectivity (very few matches), consider using `exhaustive: true` to perform exhaustive search.
+1. Produce the prefiltered local top-`k` results per shard.
+
+1. Aggregate the filtered results into a global top-`k` result set.
+
+#### Effect of this mode
+
+Traversal expands the search surface to find more filtered candidates, especially if the filter is selective. This produces the most similar top-`k` results across all shards. Each shard identifies the `k` results that satisfy the filter predicate.
+
+Prefiltering guarantees that `k` results are returned if they exist in the index. For highly selective filters, this can cause a significant portion of the graph to be traversed, increasing computation cost and latency while reducing throughput. If your filter is highly selective (has very few matches), consider using `exhaustive: true` to perform exhaustive search.
:::image type="content" source="media/vector-search-filters/vector-filter-modes-prefilter.svg" alt-text="Diagram of prefilters." border="true" lightbox="media/vector-search-filters/vector-filter-modes-prefilter.png":::
-### Post-filter
+### [postFilter](#tab/postfilter-mode)
+
+Postfiltering applies filters after query execution, which narrows the search results. This mode processes results within each shard and then merges the filtered results from all shards to produce the top-`k` results. As a result, you might receive documents that match the filter but aren't among the global top-`k` results.
+
+To use this mode in a vector query, use `"vectorFilterMode": "postFilter"`.
-Post-filtering applies filters after query execution, which narrows the search results. This mode processes results within each shard and then merges the filtered results from all shards to produce the top-`k` results. As a result, you might receive documents that match the filter but aren't among the global top-`k` results.
+> [!TIP]
+> For both postfiltering modes, use a higher `k` and the `top` parameter to reduce false negatives. Avoid postfiltering with highly selective filters, as it might not return enough matching documents.
-To use this option in a vector query, use `"vectorFilterMode": "postFilter"`.
+#### How this mode works
-1. On each shard, run HNSW traversal *without considering the filter* to identify the unfiltered local top-k.
-1. Apply the filter predicate on the unfiltered top-k result for each shard. Note this will reduce the contribution from each shard to be potentially fewer than `k` results.
-1. Aggregate into the global top-k results.
+1. On each shard, run HNSW traversal *without considering the filter* to identify the unfiltered local top-`k` results.
-**Effect:** Traversal is performed independently of the filter expression, but because the intersection happens *after* the top-k results are identified, some matching documents that are less similar than the best unfiltered top-k documents will never surface in the search results. For highly selective filters, this can reduce recall or produce false negatives (fewer matching documents returned than actually exist within the index). Latency and throughput is more predictable because traversal cost is not correlated to filter selectivity but rather filter execution cost.
+1. Apply the filter predicate on the unfiltered top-`k` results for each shard. This reduces the contribution from each shard to be potentially fewer than `k` results.
-:::image type="content" source="media/vector-search-filters/vector-filter-modes-postfilter.svg" alt-text="Diagram of post-filters." border="true" lightbox="media/vector-search-filters/vector-filter-modes-postfilter.png":::
+1. Aggregate the filtered results into the global top-`k` results.
-### Strict Post-filter (preview)
+#### Effect of this mode
-Strict post-filtering applies filters after identifying the global top-`k` results. This mode guarantees that the filtered results are always a subset of the unfiltered top `k`.
+Traversal occurs independently of the filter expression. However, because the intersection happens *after* the top-`k` results are identified, some matching documents that are less similar than the top-`k` unfiltered documents never appear in the search results.
+
+For highly selective filters, postfiltering can reduce recall and produce false negatives, meaning fewer matching documents are returned than are in the index. However, latency and throughput are more predictable because the traversal cost isn't correlated with filter selectivity, but rather with filter execution cost.
+
+:::image type="content" source="media/vector-search-filters/vector-filter-modes-postfilter.svg" alt-text="Diagram of postfilters." border="true" lightbox="media/vector-search-filters/vector-filter-modes-postfilter.png":::
+
+### [strictPostFilter (preview)](#tab/strictpostfilter-mode)
+
+Strict postfiltering applies filters after identifying the global top-`k` results. This mode guarantees that the filtered results are always a subset of the unfiltered top `k`.
With strict postfiltering, highly selective filters or small `k` values can return zero results (even if matches exist) because only documents that match the filter within the global top `k` are returned. Don't use this mode if missing relevant results could have serious consequences, such as in healthcare or patent searches.
-To use this option in a vector query, use `"vectorFilterMode": "strictPostFilter"` with the latest preview version of the [Search Service REST APIs](/rest/api/searchservice/search-service-api-versions).
+To use this mode in a vector query, use `"vectorFilterMode": "strictPostFilter"` with the latest preview version of the [Search Service REST APIs](/rest/api/searchservice/search-service-api-versions).
+
+> [!TIP]
+> For both postfiltering modes, use a higher `k` and the `top` parameter to reduce false negatives. Avoid postfiltering with highly selective filters, as it might not return enough matching documents.
+
+#### How this mode works
+
+1. On each shard, run HNSW traversal *without considering the filter* to identify the unfiltered local top-`k` results.
-1. On each shard, run HNSW traversal *without considering the filter* to identify the unfiltered local top-k.
-1. Aggregate the local top-k results per shard into an unfiltered global top-k result set.
-1. Apply the filter to this global top-k. Return the subset that satisfies the filter predicate.
+1. Aggregate the local top-`k` results per shard into an unfiltered global top-`k` result set.
-**Effect:** Applying a filter will *always* reduce the set of results to be fewer than `k` if some documents don't satisfy the filter. If qualifying items are not present in the global top-k, this mode will never surface them. This option can be useful when building a facet and filter navigation experience to prevent additional results from surfacing after applying increasingly selective filters and increase consistency of facet bucket counts and search counts, at the expense of potential false negatives or zero results.
+1. Apply the filter to the global top-`k` result set.
-:::image type="content" source="media/vector-search-filters/vector-filter-modes-strictpostfilter.svg" alt-text="Diagram of strict post-filters." border="true" lightbox="media/vector-search-filters/vector-filter-modes-strictpostfilter.png":::
+1. Return the subset that satisfies the filter predicate.
+
+#### Effect of this mode
+
+Applying a filter *always* reduces the set of results to be fewer than `k` if some documents don't satisfy the filter. If qualifying items aren't present in the global top-`k` results, this mode never surfaces them.
+
+Strict postfiltering is useful for faceted navigation because it ensures that applying more selective filters never increases the number of results. This increases the consistency of facet bucket counts and search counts. However, it can result in false negatives or zero results.
+
+:::image type="content" source="media/vector-search-filters/vector-filter-modes-strictpostfilter.svg" alt-text="Diagram of strict postfilters." border="true" lightbox="media/vector-search-filters/vector-filter-modes-strictpostfilter.png":::
+
+---
-## Comparison table
+### Comparison table
| Mode | Recall (filtered results) | Computational cost | Risk of false negatives | When to use |
-| -------- | -------------: | ------------: | ---------------------: | ----------------------------------- |
-| Pre-filter | Very high | Higher (increases with filter selectivity and complexity) | No false negatives | **(recommended as the default in order to favor recall over speed)** Especially when recall for filtered queries is critical (sensitive search domains), filter is selective, or k is small |
-| Post-filter | Medium-high, reduces with filter selectivity | Similar to unfiltered but increases with filter complexity | Moderate (per-shard misses possible) | Can be an option for higher `k` queries and filters which are not too selective |
-| Strict post-filter | Lowest (degrades the fastest with filter selectivity) | Similar to unfiltered | Highest - can return zero results for small k or selective filters | For faceted search applications where surfacing additional results after applying a filter impacts the user experience. Do not use with small `k`. |
+|--|--|--|--|--|
+| `preFilter` | Very high | Higher (increases with filter selectivity and complexity) | No risk | **Recommended default for all scenarios**, especially when recall is critical (sensitive search domains), when using selective filters, or when using small `k`. |
+| `postFilter` | Medium to high (decreases with filter selectivity) | Similar to unfiltered but increases with filter complexity | Moderate (can miss matches per shard) | An option for filters that aren't too selective and for higher-`k` queries. |
+| `strictPostFilter` | Lowest (decreases most quickly with filter selectivity) | Similar to unfiltered | Highest (can return zero results for selective filters or small `k`) | An option for faceted search applications where surfacing more results after filter application impacts the user experience more than the risk of false negatives. Don't use with small `k`. |
### Benchmark testing of prefiltering and postfiltering
Summary
{
"modification_type": "minor update",
"modification_title": "ベクター検索フィルターに関する情報の強化"
}
Explanation
This change is a substantial revision to the vector-search-filters.md file that strengthens the guidance on using filters with vector search in Azure AI Search.
The main changes are:
Date update: The document date moves from August 28, 2025 to September 16, 2025, indicating that the information is current.
Clearer filter guidance: The description of filter expressions in vector queries is reworked, with more detail on how filters are applied and on the available filter modes. In particular, the behavior and usage of preFilter, postFilter, and strictPostFilter are now explained in their own tabbed sections.
New capability highlighted: The strictPostFilter (preview) option and the choice of filter modes are described in detail, clarifying the benefits and the scenarios where each mode applies.
Important caveats added: The note about default filter modes for indexes created before and after October 2023, and the recommendations for highly selective filters and small k values, are called out explicitly.
The update helps readers understand how each filter mode affects latency, recall, and throughput, and what the risks of false negatives are. To make the parameter concrete, a minimal filtered vector query that sets vectorFilterMode is sketched after this explanation.
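As an illustration of the modes above, the following request applies a prefiltered category filter to a vector query, mirroring the article's own example. The endpoint, index name, field names, and the truncated embedding values are placeholders; treat this as a sketch rather than an exact sample from the documentation.

```http
POST https://{search-endpoint}/indexes/{index-name}/docs/search?api-version=2024-07-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "count": true,
  "select": "title, content, category",
  "filter": "category eq 'Databases'",
  "vectorFilterMode": "preFilter",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [ -0.009, 0.018, -0.021 ],
      "fields": "contentVector",
      "k": 10,
      "exhaustive": false
    }
  ]
}
```

Switching the mode is a one-line change: set "vectorFilterMode" to "postFilter", or to "strictPostFilter" on the latest preview API version, and keep the rest of the request the same.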
articles/search/vector-search-how-to-configure-vectorizer.md
Diff
@@ -51,11 +51,11 @@ The following table lists the embedding models that can be used with a vectorize
## Try a vectorizer with sample data
-The [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) reads files from Azure Blob storage, creates an index with chunked and vectorized fields, and adds a vectorizer. By design, the vectorizer that's created by the wizard is set to the same embedding model used to index the blob content.
+The [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) reads files from Azure Blob storage, creates an index with chunked and vectorized fields, and adds a vectorizer. By design, the vectorizer that's created by the wizard is set to the same embedding model used to index the blob content.
1. [Upload sample data files](/azure/storage/blobs/storage-quickstart-blobs-portal) to a container on Azure Storage. We used some [small text files from NASA's earth book](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/nasa-e-book/earth-txt-10) to test these instructions on a free search service.
-
-1. Run the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md), choosing the blob container for the data source.
+
+1. Run the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md), choosing the blob container for the data source.
:::image type="content" source="media/vector-search-how-to-configure-vectorizer/connect-to-data.png" lightbox="media/vector-search-how-to-configure-vectorizer/connect-to-data.png" alt-text="Screenshot of the connect to your data page.":::
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称変更"
}
Explanation
This change is a minor revision to the vector-search-how-to-configure-vectorizer.md file that updates the name of the data import wizard.
Specifically, the following points were changed:
Wizard renamed: The text that previously read "Import and vectorize data wizard" now reads "Import data (new) wizard." Stating the current wizard name explicitly helps readers recognize and use the correct feature.
Aligned instructions: The updated name also keeps the surrounding steps consistent, so readers can follow the blob-import walkthrough without second-guessing which wizard is meant.
Overall, the revision keeps the wizard-related information current so that readers can use the data import features as intended. To show what the article's premise looks like in an index definition, a sketch of a wizard-style vectorizer configuration follows this explanation.
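The article's point is that the wizard wires a vectorizer into the index so that queries are embedded with the same model used during indexing. The following partial index definition is a sketch of that idea under assumed names: the field list, profile and vectorizer names, Azure OpenAI resource URI, and deployment are placeholders, not output captured from the wizard.

```http
PUT https://{service-name}.search.windows.net/indexes/{index-name}?api-version=2024-07-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "name": "{index-name}",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "chunk", "type": "Edm.String", "searchable": true },
    { "name": "text_vector", "type": "Collection(Edm.Single)", "dimensions": 3072, "searchable": true, "vectorSearchProfile": "my-profile" }
  ],
  "vectorSearch": {
    "algorithms": [ { "name": "my-hnsw", "kind": "hnsw" } ],
    "vectorizers": [
      {
        "name": "my-openai-vectorizer",
        "kind": "azureOpenAI",
        "azureOpenAIParameters": {
          "resourceUri": "https://my-openai-resource.openai.azure.com",
          "deploymentId": "text-embedding-3-large",
          "modelName": "text-embedding-3-large"
        }
      }
    ],
    "profiles": [
      { "name": "my-profile", "algorithm": "my-hnsw", "vectorizer": "my-openai-vectorizer" }
    ]
  }
}
```

Because the profile on text_vector points at the vectorizer, a query that sends plain text to that field is embedded with the same deployment used at indexing time, which is the behavior the wizard sets up by design.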
articles/search/vector-search-how-to-index-binary-data.md
Diff
@@ -37,7 +37,7 @@ The binary data type is generally available starting with API version 2024-07-01
## Limitations
-+ No Azure portal support in the Import and vectorize data wizard.
++ No Azure portal support in the **Import data (new)** wizard.
+ No support for binary fields in the [AML skill](cognitive-search-aml-skill.md) that's used for integrated vectorization of models in the Azure AI Foundry model catalog.
## Add a vector search algorithm and vector profile
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザードの名称更新"
}
Explanation
This change is a minor revision to the vector-search-how-to-index-binary-data.md file that updates the wizard name in the limitations for binary data indexing.
The specific changes are:
Wizard renamed: The phrase "Import and vectorize data wizard" becomes "Import data (new) wizard," reflecting the current name so readers recognize which feature the limitation applies to.
Improved consistency: The change keeps the name aligned with the rest of the documentation set, which makes the limitation easier to interpret.
Overall, the revision keeps the binary data indexing article current. Because the limitation means binary vector fields can't be set up through the portal wizard, a sketch of defining such a field through the REST API follows this explanation.
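This is a sketch of an index with a binary vector field, assuming the Collection(Edm.Byte) type with packedBit encoding and a hamming-metric HNSW algorithm, which is the pairing typically used for binary vectors. The field names, dimension count, and profile names are illustrative assumptions.

```http
PUT https://{service-name}.search.windows.net/indexes/{index-name}?api-version=2024-07-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "name": "{index-name}",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    {
      "name": "contentVectorBinary",
      "type": "Collection(Edm.Byte)",
      "vectorEncoding": "packedBit",
      "dimensions": 1024,
      "searchable": true,
      "vectorSearchProfile": "my-binary-profile"
    }
  ],
  "vectorSearch": {
    "algorithms": [
      { "name": "my-hnsw-hamming", "kind": "hnsw", "hnswParameters": { "metric": "hamming" } }
    ],
    "profiles": [
      { "name": "my-binary-profile", "algorithm": "my-hnsw-hamming" }
    ]
  }
}
```

Here "dimensions" counts bits rather than floats, so a 1,024-bit embedding packs into 128 bytes per document, which is the storage savings that motivates the binary type in the first place.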
articles/search/vector-search-integrated-vectorization-ai-studio.md
Diff
@@ -23,7 +23,7 @@ The workflow includes model deployment steps. The model catalog includes embeddi
After the model is deployed, you can use it for [integrated vectorization](vector-search-integrated-vectorization.md) during indexing, or with the [Azure AI Foundry vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) for queries.
> [!TIP]
-> Use the [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) to generate a skillset that includes an AML skill for deployed embedding models on Azure AI Foundry. AML skill definition for inputs, outputs, and mappings are generated by the wizard, which gives you an easy way to test a model before writing any code.
+> Use the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) to generate a skillset that includes an AML skill for deployed embedding models on Azure AI Foundry. AML skill definition for inputs, outputs, and mappings are generated by the wizard, which gives you an easy way to test a model before writing any code.
## Prerequisites
@@ -33,7 +33,7 @@ After the model is deployed, you can use it for [integrated vectorization](vecto
## Supported embedding models
-Integrated vectorization and the [Import and vectorize data wizard](search-import-data-portal.md) support the following embedding models in the model catalog:
+Integrated vectorization and the [**Import data (new)** wizard](search-import-data-portal.md) support the following embedding models in the model catalog:
| Embedding type | Supported models |
|--|--|
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザード名称の変更"
}
Explanation
This change is a minor revision to the vector-search-integrated-vectorization-ai-studio.md file that updates the name of the data import wizard.
The specific changes are:
Wizard renamed: The phrase "Import and vectorize data wizard" becomes "Import data (new) wizard" in both the tip and the supported-models section, so readers know which wizard generates the AML skillset and which wizard the embedding model table applies to.
Consistent terminology: The updated name keeps the article aligned with the rest of the documentation, which improves usability.
This small fix keeps the article accurate for readers importing data on the Azure AI platform and helps them understand and use the feature correctly.
articles/search/vector-search-integrated-vectorization.md
Diff
@@ -111,7 +111,7 @@ A more common scenario - data chunking and vectorization during indexing:
Optionally, [create secondary indexes](index-projections-concept-intro.md) for advanced scenarios where chunked content is in one index, and nonchunked in another index. Chunked indexes (or secondary indexes) are useful for RAG apps.
> [!TIP]
-> [Try the **Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) in the Azure portal to explore integrated vectorization before writing any code.
+> [Try the **Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal to explore integrated vectorization before writing any code.
### Secure connections to vectorizers and models
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザード名称の変更"
}
Explanation
This change is a minor revision to the vector-search-integrated-vectorization.md file that updates the name of the data import wizard.
The specific changes are:
Wizard renamed: The tip that previously pointed to the "Import and vectorize data wizard" now points to the "Import data (new) wizard," so readers see the current name of the tool for exploring integrated vectorization without code.
Improved accuracy: The change keeps terminology consistent across the documentation and helps readers recognize and adopt the current feature.
Overall, this small fix keeps the data import guidance current and avoids confusion when readers try the wizard. For readers who move past the wizard, a sketch of a skillset that performs integrated vectorization is included after this explanation.
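Integrated vectorization during indexing is typically implemented as a skillset that chunks text and then embeds each chunk. The following request is a sketch under assumed values: the skillset name, Azure OpenAI resource, deployment, and output field names are placeholders rather than anything generated by the wizard in this diff.

```http
PUT https://{service-name}.search.windows.net/skillsets/{skillset-name}?api-version=2024-07-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "name": "{skillset-name}",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
      "textSplitMode": "pages",
      "maximumPageLength": 2000,
      "inputs": [ { "name": "text", "source": "/document/content" } ],
      "outputs": [ { "name": "textItems", "targetName": "pages" } ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
      "context": "/document/pages/*",
      "resourceUri": "https://my-openai-resource.openai.azure.com",
      "deploymentId": "text-embedding-3-large",
      "modelName": "text-embedding-3-large",
      "inputs": [ { "name": "text", "source": "/document/pages/*" } ],
      "outputs": [ { "name": "embedding", "targetName": "text_vector" } ]
    }
  ]
}
```

The split skill produces the chunks and the embedding skill vectorizes each one; index projections (the secondary indexes mentioned in the diff) would then route each chunk into its own search document.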
articles/search/vector-search-overview.md
Diff
@@ -68,7 +68,7 @@ Vector search is available in [all regions](search-region-support.md) and on [al
For portal and programmatic access to vector search, you can use:
-+ The [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
++ The [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) in the Azure portal.
+ The [Search Service REST APIs](/rest/api/searchservice).
+ The Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents), [Python](https://pypi.org/project/azure-search-documents), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents).
+ [Other Azure offerings](#azure-integration-and-related-services), such as Azure AI Foundry.
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザード名称の変更"
}
Explanation
This change is a minor revision to the vector-search-overview.md file that updates the name of the data import wizard used from the Azure portal.
The specific changes are:
Wizard renamed: The phrase "Import and vectorize data wizard" becomes "Import data (new) wizard" in the list of access options, so readers see the current name alongside the REST APIs and SDKs.
Clearer information: The updated name makes it easier to pick the right entry point when getting started with vector search from the portal.
This small fix keeps the overview accurate and consistent and helps readers understand how to reach the vector search capabilities of Azure AI Search.
articles/search/vector-store.md
Diff
@@ -94,9 +94,9 @@ Other fields, such as the `content` field, provide the human-readable equivalent
Metadata fields are useful for filters, especially if they include origin information about the source document. Although you can't filter directly on a vector field, you can set prefilter, postfilter, or strict postfilter (preview) modes to filter before or after vector query execution.
-### Schema generated by the Import and vectorize data wizard
+### Schema generated by the import wizard
-We recommend the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) for evaluation and proof-of-concept testing. The wizard generates the example schema in this section.
+We recommend the [**Import data (new)** wizard](search-get-started-portal-import-vectors.md) for evaluation and proof-of-concept testing. The wizard generates the example schema in this section.
The wizard chunks your content into smaller search documents, which benefits RAG apps that use language models to formulate responses. Chunking helps you stay within the input limits of language models and the token limits of semantic ranker. It also improves precision in similarity search by matching queries against chunks pulled from multiple parent documents. For more information, see [Chunk large documents for vector search solutions](vector-search-how-to-chunk-documents.md).
Summary
{
"modification_type": "minor update",
"modification_title": "ウィザード名称の変更"
}
Explanation
This change is a minor revision to the vector-store.md file that updates the wording around the data import wizard.
The specific changes are:
Wizard renamed: The phrase "Import and vectorize data wizard" becomes "Import data (new) wizard," so the recommended tool for evaluation and proof-of-concept testing carries its current name.
Simplified section title: The heading changes from "Schema generated by the Import and vectorize data wizard" to "Schema generated by the import wizard," which is shorter and easier to scan.
The revision improves consistency and makes the guidance on the wizard-generated, chunked schema easier to follow. A sketch of what such a chunked schema might look like follows this explanation.
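The chunked schema described in this section pairs parent documents with their chunks in a single index. The following field list is a sketch of that pattern: the field names (chunk_id, parent_id, chunk, title, text_vector), attributes, and dimension count are illustrative assumptions rather than the wizard's exact output, and the vectorSearch configuration that "my-profile" refers to is omitted for brevity.

```http
PUT https://{service-name}.search.windows.net/indexes/{index-name}?api-version=2024-07-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "name": "{index-name}",
  "fields": [
    { "name": "chunk_id", "type": "Edm.String", "key": true, "searchable": true, "analyzer": "keyword" },
    { "name": "parent_id", "type": "Edm.String", "filterable": true },
    { "name": "title", "type": "Edm.String", "searchable": true, "filterable": true },
    { "name": "chunk", "type": "Edm.String", "searchable": true },
    { "name": "text_vector", "type": "Collection(Edm.Single)", "dimensions": 3072, "searchable": true, "vectorSearchProfile": "my-profile" }
  ]
}
```

Each chunk becomes its own search document keyed by chunk_id, while parent_id stays filterable so results can be grouped or traced back to the source document, which is the layout RAG apps typically query against.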
articles/search/whats-new.md
Diff
@@ -4,7 +4,7 @@ description: Announcements of new and enhanced features, including a service ren
author: HeidiSteen
ms.author: heidist
manager: nitinme
-ms.date: 09/01/2025
+ms.date: 09/16/2025
ms.service: azure-ai-search
ms.topic: overview
ms.custom:
@@ -20,6 +20,12 @@ Learn about the latest updates to Azure AI Search functionality, docs, and sampl
> [!NOTE]
> Preview features are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place.
+## September 2025
+
+| Item | Type | Description |
+|--|--|--|
+| [Updates to import wizards (Phase 1)](search-import-data-portal.md) | Portal | The Azure portal is undergoing a three-phase rollout to unify the two import wizards. For Phase 1, the **Import and vectorize data** wizard has been renamed to **Import data (new)** and redeveloped to support keyword search, modernizing the legacy **Import data** workflow with an improved interface and user experience. <p>**Import data (new)** isn't a direct replacement for the old wizard. For example, it supports fewer skills for keyword search and doesn't offer built-in sample data.<p>Both wizards are currently available, but **Import data** will be deprecated in a future phase. |
+
## August 2025
| Item | Type | Description |
Summary
{
"modification_type": "minor update",
"modification_title": "新機能の追加と日付の更新"
}
Explanation
This change updates the whats-new.md file with a new feature announcement and a date refresh. The main points are:
Date update: The last-updated date moves from September 1, 2025 to September 16, 2025, keeping the document current.
New content: A September 2025 section was added covering the import wizard updates in the Azure portal. It highlights the following:
- In Phase 1 of a three-phase rollout to unify the two import wizards, the Import and vectorize data wizard was renamed Import data (new) and redeveloped to support keyword search, modernizing the legacy Import data workflow.
- The new wizard isn't a direct replacement for the old one; for example, it supports fewer skills for keyword search and doesn't offer built-in sample data.
- Both wizards are currently available, but Import data will be deprecated in a future phase.
This update gives readers an effective summary of the latest Azure AI Search changes and encourages adoption of the new wizard.