@@ -1,36 +1,66 @@
---
title: "Quickstart: Multimodal Search in the Azure portal"
titleSuffix: Azure AI Search
-description: Learn how to search for multimodal content on an Azure AI Search index in the Azure portal. Run a wizard to generate natural-language descriptions of images and vectorize both text and images, and then use Search Explorer to query your multimodal index.
+description: Learn how to index and search for multimodal content in the Azure portal. Run a wizard to extract and embed both text and images, and then use Search Explorer to query your multimodal index.
author: haileytap
ms.author: haileytapia
ms.service: azure-ai-search
ms.topic: quickstart
-ms.date: 05/22/2025
+ms.date: 06/04/2025
ms.custom:
- references_regions
---
# Quickstart: Search for multimodal content in the Azure portal
-In this quickstart, you use the **Import and vectorize data** wizard in the Azure portal to get started with [multimodal search](multimodal-search-overview.md). The wizard simplifies the process of extracting page text and inline images from documents, describing images in natural language, vectorizing image descriptions and text, and storing images for later retrieval.
+In this quickstart, you use the **Import and vectorize data** wizard in the Azure portal to get started with [multimodal search](multimodal-search-overview.md). The wizard simplifies the process of extracting, chunking, vectorizing, and loading both text and images into a searchable index.
-The sample data consists of a multimodal PDF in the [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/sustainable-ai-pdf) repo, but you can use different files and still follow this quickstart.
+Unlike [Quickstart: Vector search in the Azure portal](search-get-started-portal-import-vectors.md), which processes simple text-containing images, this quickstart supports advanced image processing for multimodal RAG scenarios.
+
+This quickstart uses a multimodal PDF from the [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/sustainable-ai-pdf) repo. However, you can use different files and still complete this quickstart.
## Prerequisites
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-+ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage or Azure Data Lake Storage Gen2 (storage account with a hierarchical namespace) on a standard performance (general-purpose v2) account. Access tiers can be hot, cool, or cold.
++ An [Azure AI Search service](search-create-service-portal.md). We recommend the Basic tier or higher.
-+ An [Azure AI services multi-service account](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) in East US, West Europe, or North Central US.
++ An [Azure Storage account](/azure/storage/common/storage-account-create). Use Azure Blob Storage or Azure Data Lake Storage Gen2 (storage account with a hierarchical namespace) on a standard performance (general-purpose v2) account. Access tiers can be hot, cool, or cold.
-+ An [Azure AI Search service](search-create-service-portal.md) in the same region as your Azure AI multi-service account.
++ A [supported extraction method](#supported-extraction-methods).
-+ An [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource).
++ A [supported embedding method](#supported-embedding-methods).
+ Familiarity with the wizard. See [Import data wizards in the Azure portal](search-import-data-portal.md).
+### Supported extraction methods
+
+For content extraction, you can choose either default extraction via Azure AI Search or enhanced extraction via [Azure AI Document Intelligence](/azure/ai-services/document-intelligence/overview). The following table describes both extraction methods.
+
+| Method | Description |
+|--|--|
+| Default extraction | Extracts location metadata from PDF images only. Doesn't require another Azure AI resource. |
+| Enhanced extraction | Extracts location metadata from text and images for multiple document types. Requires an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>1</sup> in a [supported region](cognitive-search-skill-document-intelligence-layout.md#supported-regions). |
+
+<sup>1</sup> For billing purposes, you must [attach your multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
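
Although the wizard handles this attachment for you, it helps to know what it produces. The following is a minimal sketch of the `cognitiveServices` section of a skillset definition, assuming the keyless (preview) connection; the subdomain URL is a placeholder for your own resource, and the skills array is omitted:

```json
{
  "name": "my-multimodal-skillset",
  "skills": [],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.AIServicesByIdentity",
    "subdomainUrl": "https://my-ai-services.cognitiveservices.azure.com"
  }
}
```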
+
+### Supported embedding methods
+
+For content embedding, you can choose either image verbalization (followed by text vectorization) or multimodal embeddings. Deployment instructions for the models are provided in a [later section](#deploy-models). The following table describes both embedding methods.
+
+| Method | Description | Supported models |
+|--|--|--|
+| Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
+| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual |
+
+<sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
+
+<sup>2</sup> Azure OpenAI resources (with access to embedding models) that were created in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) aren't supported. You must create an Azure OpenAI resource in the Azure portal.
+
+<sup>3</sup> For billing purposes, you must [attach your multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
+
+<sup>4</sup> `phi-4` is only available to Azure AI Foundry projects.
+
### Public endpoint requirements
All of the preceding resources must have public access enabled so that the Azure portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
@@ -45,7 +75,11 @@ If you're starting with the free service, you're limited to three indexes, three
Before you begin, make sure you have permissions to access content and operations. We recommend Microsoft Entra ID authentication and role-based access for authorization. You must be an **Owner** or **User Access Administrator** to assign roles. If roles aren't feasible, you can use [key-based authentication](search-security-api-keys.md) instead.
-Configure access to each resource identified in this section.
+Configure the [required roles](#required-roles) and [conditional roles](#conditional-roles) identified in this section.
+
+### Required roles
+
+Azure AI Search and Azure Storage are required for all multimodal search scenarios.
### [**Azure AI Search**](#tab/search-perms)
@@ -67,28 +101,42 @@ On your Azure AI Search service:
### [**Azure Storage**](#tab/storage-perms)
-Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to these storage containers, which you create in the next section of this quickstart.
+Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to these storage containers, which you create in the next section.
On your Azure Storage account:
+ Assign **Storage Blob Data Contributor** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
-### [**Azure AI services**](#tab/ai-services-perms)
-
-An Azure AI multi-service account provides multiple Azure AI services, including [Azure AI Document Intelligence](/azure/ai-services/document-intelligence/overview) for content extraction and semantic chunking. Your search service requires access to call the [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md).
+---
-On your Azure AI multi-service account:
+### Conditional roles
-+ Assign **Cognitive Services User** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
+The following tabs cover all wizard-compatible resources for multimodal search. Select only the tabs that apply to your chosen [extraction method](#supported-extraction-methods) and [embedding method](#supported-embedding-methods).
### [**Azure OpenAI**](#tab/openai-perms)
-Azure OpenAI provides large language models (LLMs) for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md).
+Azure OpenAI provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md).
On your Azure OpenAI resource:
+ Assign **Cognitive Services OpenAI User** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
+### [**Azure AI Foundry**](#tab/ai-foundry-perms)
+
+The Azure AI Foundry model catalog provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [AML skill](cognitive-search-aml-skill.md).
+
+On your Azure AI Foundry project:
+
++ Assign **Azure AI Project Manager** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
+
+### [**Azure AI services**](#tab/ai-services-perms)
+
+An Azure AI multi-service resource provides multiple Azure AI services, including [Azure AI Document Intelligence](/azure/ai-services/document-intelligence/overview) for content extraction and [Azure AI Vision](/azure/ai-services/computer-vision/overview) for content embedding. Your search service requires access to call the [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md) and [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md).
+
+On your Azure AI multi-service resource:
+
++ Assign **Cognitive Services User** to your [search service identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
+
---
## Prepare sample data
@@ -107,27 +155,20 @@ To prepare the sample data for this quickstart:
## Deploy models
-The wizard requires an LLM to verbalize images and an embedding model to generate vector representations of text and verbalized text content. Both models are available through Azure OpenAI.
-
-To deploy the models for this quickstart:
-
-1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) and select your Azure OpenAI resource.
-
-1. From the left pane, select **Model catalog**.
+The wizard offers several options for content embedding. Image verbalization requires an LLM to describe images and an embedding model to vectorize text and image content, while direct multimodal embeddings require only an embedding model. These models are available through Azure OpenAI and Azure AI Foundry.
-1. Deploy one of the following LLMs:
+> [!NOTE]
+> If you're using Azure AI Vision, skip this step. The multimodal embeddings are built into your Azure AI multi-service resource and don't require model deployment.
- + gpt-4o
-
- + gpt-4o-mini
+To deploy the models for this quickstart:
-1. Deploy one of the following embedding models:
+1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
- + text-embedding-ada-002
+1. Select your Azure OpenAI resource or Azure AI Foundry project.
- + text-embedding-3-small
+1. From the left pane, select **Model catalog**.
- + text-embedding-3-large
+1. Deploy the models required for your chosen [embedding method](#supported-embedding-methods).
## Start the wizard
@@ -165,43 +206,65 @@ To connect to your data:
## Extract your content
-The next step is to select a method for document cracking and chunking.
+Depending on your chosen [extraction method](#supported-extraction-methods), the wizard provides configuration options for document cracking and chunking.
+
+### [**Default extraction**](#tab/document-extraction)
+
+The default method calls the [Document Extraction skill](cognitive-search-skill-document-extraction.md) to extract text content and generate normalized images from your documents. The [Text Split skill](cognitive-search-skill-textsplit.md) is then called to split the extracted text content into pages.
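
The wizard generates these skill definitions for you, but a minimal sketch of the pair, with illustrative parameter values and output names, shows how extraction chains into chunking:

```json
[
  {
    "@odata.type": "#Microsoft.Skills.Util.DocumentExtractionSkill",
    "context": "/document",
    "parsingMode": "default",
    "dataToExtract": "contentAndMetadata",
    "configuration": { "imageAction": "generateNormalizedImages" },
    "inputs": [ { "name": "file_data", "source": "/document/file_data" } ],
    "outputs": [
      { "name": "content", "targetName": "extracted_content" },
      { "name": "normalized_images", "targetName": "normalized_images" }
    ]
  },
  {
    "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
    "context": "/document",
    "textSplitMode": "pages",
    "maximumPageLength": 2000,
    "pageOverlapLength": 500,
    "inputs": [ { "name": "text", "source": "/document/extracted_content" } ],
    "outputs": [ { "name": "textItems", "targetName": "pages" } ]
  }
]
```
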
-Your Azure AI multi-service account provides access to the [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md), which extracts page numbers, bounding polygons, and other location metadata from both text and images. The Document Layout skill also breaks documents into smaller, more manageable chunks.
+To use the Document Extraction skill:
+
+1. On the **Content extraction** page, select **Default**.
+
+ :::image type="content" source="media/search-get-started-portal-images/extract-your-content-doc-extraction.png" alt-text="Screenshot of the wizard page with the default method selected for content extraction." border="true" lightbox="media/search-get-started-portal-images/extract-your-content-doc-extraction.png":::
+
+1. Select **Next**.
+
+### [**Enhanced extraction**](#tab/document-intelligence)
+
+Your Azure AI multi-service resource provides access to [Azure AI Document Intelligence](/azure/ai-services/document-intelligence/overview), which calls the [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md) to recognize document structure and extract text and images relationally. It does so by attaching location metadata, such as page numbers and bounding polygons, to each image. The Document Layout skill also breaks text content into smaller, more manageable chunks.
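
This skill is in preview, and its parameters vary by API version. The following is a trimmed sketch of a Document Layout skill definition, assuming a recent preview API version and illustrative output names; the wizard's generated version adds options for image extraction and chunking:

```json
{
  "@odata.type": "#Microsoft.Skills.Util.DocumentIntelligenceLayoutSkill",
  "context": "/document",
  "outputMode": "oneToMany",
  "markdownHeaderDepth": "h3",
  "inputs": [ { "name": "file_data", "source": "/document/file_data" } ],
  "outputs": [ { "name": "markdown_document", "targetName": "markdownDocument" } ]
}
```
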
To use the Document Layout skill:
1. On the **Content extraction** page, select **AI Document Intelligence**.
-1. Specify your Azure subscription and Azure AI multi-service account.
+ :::image type="content" source="media/search-get-started-portal-images/extract-your-content-doc-intelligence.png" alt-text="Screenshot of the wizard page with Azure AI Document Intelligence selected for content extraction." border="true" lightbox="media/search-get-started-portal-images/extract-your-content-doc-intelligence.png":::
+
+1. Specify your Azure subscription and multi-service resource.
1. For the authentication type, select **System assigned identity**.
1. Select the checkbox that acknowledges the billing effects of using these resources.
- :::image type="content" source="media/search-get-started-portal-images/extract-your-content.png" alt-text="Screenshot of the wizard page for selecting a content extraction method." border="true" lightbox="media/search-get-started-portal-images/extract-your-content.png":::
+ :::image type="content" source="media/search-get-started-portal-images/doc-intelligence-options.png" alt-text="Screenshot of the wizard page with configuration options for Azure AI Document Intelligence selected." border="true" lightbox="media/search-get-started-portal-images/doc-intelligence-options.png":::
1. Select **Next**.
+---
+
## Embed your content
-During this step, the wizard calls two skills to generate descriptive text for images (image verbalization) and vector embeddings for text and images.
+During this step, the wizard uses your chosen [embedding method](#supported-embedding-methods) to generate vector representations of both text and images.
+
+### [**Image verbalization**](#tab/image-verbalization)
-For image verbalization, the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) uses the LLM you deployed to analyze each extracted image and produce a natural-language description.
+The wizard calls one skill to create descriptive text for images (image verbalization) and another skill to create vector embeddings for both text and images.
-For text and image embeddings, the [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md) uses the embedding model you deployed to convert the text chunks and verbalized descriptions into high-dimensional vectors. These vectors enable similarity and hybrid retrieval.
+For image verbalization, the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) uses your deployed LLM to analyze each extracted image and produce a natural-language description.
-To use the GenAI Prompt skill and Azure OpenAI Embedding skill:
+For embeddings, the [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md), [AML skill](cognitive-search-aml-skill.md), or [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) uses your deployed embedding model to convert text chunks and verbalized descriptions into high-dimensional vectors. These vectors enable similarity and hybrid retrieval.
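
As a concrete (but hedged) sketch, here's roughly how the verbalization and embedding pair looks in the generated skillset when you choose Azure OpenAI, assuming a gpt-4o deployment for descriptions and text-embedding-3-large for vectors. The endpoint, prompt, and output names are placeholders, and the input names may vary by preview API version:

```json
[
  {
    "@odata.type": "#Microsoft.Skills.Custom.ChatCompletionSkill",
    "context": "/document/normalized_images/*",
    "uri": "https://my-openai.openai.azure.com/openai/deployments/gpt-4o/chat/completions",
    "inputs": [
      { "name": "systemMessage", "source": "='Describe this image in concise, natural language.'" },
      { "name": "image", "source": "/document/normalized_images/*/data" }
    ],
    "outputs": [ { "name": "response", "targetName": "verbalizedImage" } ]
  },
  {
    "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
    "context": "/document/normalized_images/*",
    "resourceUri": "https://my-openai.openai.azure.com",
    "deploymentId": "text-embedding-3-large",
    "modelName": "text-embedding-3-large",
    "inputs": [ { "name": "text", "source": "/document/normalized_images/*/verbalizedImage" } ],
    "outputs": [ { "name": "embedding", "targetName": "verbalizedImage_vector" } ]
  }
]
```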
+
+To use the skills for image verbalization:
1. On the **Content embedding** page, select **Image Verbalization**.
:::image type="content" source="media/search-get-started-portal-images/image-verbalization-tile.png" alt-text="Screenshot of the Image Verbalization tile in the wizard." border="true" lightbox="media/search-get-started-portal-images/image-verbalization-tile.png":::
1. On the **Image Verbalization** tab:
- 1. For the kind, select **Azure OpenAI**.
+ 1. For the kind, select your LLM provider: **Azure OpenAI** or **AI Foundry Hub catalog models**.
- 1. Specify your Azure subscription, Azure OpenAI resource, and LLM deployment.
+ 1. Specify your Azure subscription, resource, and LLM deployment.
1. For the authentication type, select **System assigned identity**.
@@ -211,9 +274,9 @@ To use the GenAI Prompt skill and Azure OpenAI Embedding skill:
1. On the **Text Vectorization** tab:
- 1. For the kind, select **Azure OpenAI**.
+ 1. For the kind, select your model provider: **Azure OpenAI**, **AI Foundry Hub catalog models**, or **AI Vision vectorization**.
- 1. Specify your Azure subscription, Azure OpenAI resource, and embedding model deployment.
+ 1. Specify your Azure subscription, resource, and embedding model deployment.
1. For the authentication type, select **System assigned identity**.
@@ -223,6 +286,32 @@ To use the GenAI Prompt skill and Azure OpenAI Embedding skill:
1. Select **Next**.
+### [**Multimodal embeddings**](#tab/multimodal-embeddings)
+
+If the raw content of your data includes text, the wizard calls the [AML skill](cognitive-search-aml-skill.md) or [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) to vectorize it. The same embedding skill is used to generate vector representations of images.
+
+The wizard also calls the [Shaper skill](cognitive-search-skill-shaper.md) to enrich the output with metadata, such as page numbers. This metadata is useful for associating vectorized content with its original context in the document.
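
For reference, here's a minimal sketch of the Azure AI Vision multimodal embeddings skill (preview) as it might appear in the generated skillset; the context, model version, and output name are illustrative:

```json
{
  "@odata.type": "#Microsoft.Skills.Vision.VectorizeSkill",
  "context": "/document/normalized_images/*",
  "modelVersion": "2023-04-15",
  "inputs": [ { "name": "image", "source": "/document/normalized_images/*/data" } ],
  "outputs": [ { "name": "vector", "targetName": "image_vector" } ]
}
```

The same skill type accepts a `text` input instead of `image`, which is how text chunks land in the same embedding space as the images.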
+
+To use the skills for multimodal embeddings:
+
+1. On the **Content embedding** page, select **Multimodal Embedding**.
+
+ :::image type="content" source="media/search-get-started-portal-images/multimodal-embedding-tile.png" alt-text="Screenshot of the Multimodal Embedding tile in the wizard." border="true" lightbox="media/search-get-started-portal-images/multimodal-embedding-tile.png":::
+
+1. For the kind, select your model provider: **AI Foundry Hub catalog models** or **AI Vision vectorization**.
+
+ <!-- If it's unavailable, make sure your Azure AI Search service and Azure AI multi-service account are both in a region that [supports the AI Vision multimodal APIs](/azure/ai-services/computer-vision/how-to/image-retrieval). -->
+
+1. Specify your Azure subscription, resource, and embedding model deployment.
+
+1. Select the checkbox that acknowledges the billing effects of using this resource.
+
+ :::image type="content" source="media/search-get-started-portal-images/multimodal-embeddings-options.png" alt-text="Screenshot of the wizard page for vectorizing text and images." border="true" lightbox="media/search-get-started-portal-images/multimodal-embeddings-options.png":::
+
+1. Select **Next**.
+
+---
+
## Store the extracted images
The next step is to send images extracted from your documents to Azure Storage. In Azure AI Search, this secondary storage is known as a [knowledge store](knowledge-store-concept-intro.md).
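
In the skillset definition, the knowledge store appears as a `knowledgeStore` section with a file projection that writes each extracted image to a blob container. A hedged sketch, with a placeholder connection string and container name:

```json
"knowledgeStore": {
  "storageConnectionString": "ResourceId=/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account>;",
  "projections": [
    {
      "tables": [],
      "objects": [],
      "files": [
        { "storageContainer": "extracted-images", "source": "/document/normalized_images/*" }
      ]
    }
  ]
}
```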
@@ -245,14 +334,14 @@ On the **Advanced settings** page, you can optionally add fields to the index sc
| Field | Applies to | Description | Attributes |
|--|--|--|--|
-| content_id | Text and image vectors | String field. Document key for the index. | Searchable, retrievable, sortable, filterable, and facetable. |
-| document_title | Text and image vectors | String field. Human-readable document title, page title, or page number. | Searchable, retrievable, sortable, filterable, and facetable. |
+| content_id | Text and image vectors | String field. Document key for the index. | Retrievable, sortable, and searchable. |
+| document_title | Text and image vectors | String field. Human-readable document title. | Retrievable and searchable. |
| text_document_id | Text vectors | String field. Identifies the parent document from which the text chunk originates. | Retrievable and filterable. |
-| image_document_id | Image vectors | String field. Identifies the parent document from which the image originates. | Searchable, retrievable, sortable, filterable, and facetable. |
-| content_text | Text vectors | String field. Human-readable version of the text chunk. | Searchable, retrievable, sortable, filterable, and facetable. |
-| content_embedding | Image vectors | Collection(Edm.Single). Vector representation of the image verbalization. | Searchable and retrievable. |
-| content_path | Text and image vectors | String field. Path to the content in the storage container. | Retrievable, sortable, filterable, and facetable. |
-| locationMetadata | Text and image vectors | Edm.ComplexType. Contains metadata about the content's location. | Varies by field. |
+| image_document_id | Image vectors | String field. Identifies the parent document from which the image originates. | Retrievable and filterable. |
+| content_text | Text vectors | String field. Human-readable version of the text chunk. | Retrievable and searchable. |
+| content_embedding | Text and image vectors | Collection(Edm.Single). Vector representation of text and images. | Retrievable and searchable. |
+| content_path | Text and image vectors | String field. Path to the content in the storage container. | Retrievable and searchable. |
+| locationMetadata | Image vectors | Edm.ComplexType. Contains metadata about the image's location in the documents. | Varies by field. |
You can't modify the generated fields or their attributes, but you can add fields if your data source provides them. For example, Azure Blob Storage provides a collection of metadata fields.
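
In the index definition, the rows in the preceding table correspond to field definitions like the following trimmed sketch. The vector dimensions and profile name are assumptions that depend on your embedding model and the wizard's defaults:

```json
{
  "fields": [
    { "name": "content_id", "type": "Edm.String", "key": true, "searchable": true, "retrievable": true, "sortable": true },
    { "name": "content_text", "type": "Edm.String", "searchable": true, "retrievable": true },
    {
      "name": "content_embedding",
      "type": "Collection(Edm.Single)",
      "searchable": true,
      "retrievable": true,
      "dimensions": 3072,
      "vectorSearchProfile": "my-vector-profile"
    }
  ]
}
```
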
@@ -264,9 +353,6 @@ To add fields to the index schema:
1. Select a source field from the available fields, enter a field name for the index, and accept (or override) the default data type.
- > [!NOTE]
- > Metadata fields are searchable but not retrievable, filterable, facetable, or sortable.
-
1. If you want to restore the schema to its original version, select **Reset**.
## Schedule indexing
@@ -303,13 +389,11 @@ When the wizard completes the configuration, it creates the following objects:
+ A skillset with the following skills:
- + The [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md) splits documents into text chunks and extracts images with location data.
+ + The [Document Extraction skill](cognitive-search-skill-document-extraction.md) or [Document Layout skill](cognitive-search-skill-document-intelligence-layout.md) extracts text and images from source documents. The [Text Split skill](cognitive-search-skill-textsplit.md) accompanies the Document Extraction skill for data chunking, while the Document Layout skill has built-in chunking.
- + The [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) generates natural-language descriptions (verbalizations) of images.
+ + The [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) verbalizes images in natural language. If you're using direct multimodal embeddings, this skill is absent.
- + The [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md) vectorizes each text chunk.
-
- + The [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md) is called again to vectorize each image verbalization.
+ + The [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md), [AML skill](cognitive-search-aml-skill.md), or [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) is called once for text vectorization and once for image vectorization.
+ The [Shaper skill](cognitive-search-skill-shaper.md) enriches the output with metadata and creates new images with contextual information.
@@ -318,9 +402,9 @@ When the wizard completes the configuration, it creates the following objects:
## Check results
-This quickstart creates a multimodal index that supports [hybrid search](hybrid-search-overview.md) over both text and verbalized images. However, it doesn't support images as query inputs, which requires integrated vectorization using an embedding skill and an equivalent vectorizer. For more information, see [Query with Search explorer](search-explorer.md).
+This quickstart creates a multimodal index that supports [hybrid search](hybrid-search-overview.md) over both text and images. Unless you use direct multimodal embeddings, the index doesn't accept images as query inputs; that capability requires the [AML skill](cognitive-search-aml-skill.md) or [Azure AI Vision multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) paired with an equivalent vectorizer. For more information, see [Configure a vectorizer in a search index](vector-search-how-to-configure-vectorizer.md).
-Hybrid search is a combination of full-text queries and vector queries. When you issue a hybrid query, the search engine computes the semantic similarity between your query and the indexed vectors and ranks the results accordingly. For the index created in this quickstart, the results surface content from the `content_text` field that closely aligns with your query.
+Hybrid search combines full-text queries and vector queries. When you issue a hybrid query, the search engine computes the semantic similarity between your query and the indexed vectors and ranks the results accordingly. For the index created in this quickstart, the results surface content from the `content_text` field that closely aligns with your query.
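
After you open Search Explorer in the following steps, you can also switch to the JSON view and issue a hybrid query directly. Here's a hedged example body against the wizard-generated schema: the `text` vector query kind works because the wizard configures a vectorizer for text queries, and the last three properties apply only if you enabled semantic ranker (the configuration name is a placeholder for whatever the wizard generated):

```json
{
  "search": "energy",
  "count": true,
  "select": "content_text, document_title, content_path",
  "vectorQueries": [
    { "kind": "text", "text": "energy", "fields": "content_embedding", "k": 5 }
  ],
  "queryType": "semantic",
  "semanticConfiguration": "my-semantic-config",
  "answers": "extractive|count-3"
}
```
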
To query your multimodal index:
@@ -340,12 +424,40 @@ To query your multimodal index:
:::image type="content" source="media/search-get-started-portal-images/search-button.png" alt-text="Screenshot of the Search button in Search Explorer." border="true" lightbox="media/search-get-started-portal-images/search-button.png":::
- The results should include text and image content related to `energy` in your index. Highlights from relevant passages and image verbalizations appear in `@search.captions`, helping you quickly identify matches to your query.
+ The JSON results should include text and image content related to `energy` in your index. If you enabled semantic ranker, the `@search.answers` array provides concise, high-confidence [semantic answers](semantic-answers.md) to help you quickly identify relevant matches.
+
+ ```json
+ "@search.answers": [
+ {
+ "key": "a71518188062_aHR0cHM6Ly9oYWlsZXlzdG9yYWdlLmJsb2IuY29yZS53aW5kb3dzLm5ldC9tdWx0aW1vZGFsLXNlYXJjaC9BY2NlbGVyYXRpbmctU3VzdGFpbmFiaWxpdHktd2l0aC1BSS0yMDI1LnBkZg2_normalized_images_7",
+ "text": "A vertical infographic consisting of three sections describing the roles of AI in sustainability: 1. **Measure, predict, and optimize complex systems**: AI facilitates analysis, modeling, and optimization in areas like energy distribution, resource allocation, and environmental monitoring. **Accelerate the development of sustainability solution...",
+ "highlights": "A vertical infographic consisting of three sections describing the roles of AI in sustainability: 1. **Measure, predict, and optimize complex systems**: AI facilitates analysis, modeling, and optimization in areas like<em> energy distribution, </em>resource<em> allocation, </em>and environmental monitoring. **Accelerate the development of sustainability solution...",
+ "score": 0.9950000047683716
+ },
+ {
+ "key": "1cb0754930b6_aHR0cHM6Ly9oYWlsZXlzdG9yYWdlLmJsb2IuY29yZS53aW5kb3dzLm5ldC9tdWx0aW1vZGFsLXNlYXJjaC9BY2NlbGVyYXRpbmctU3VzdGFpbmFiaWxpdHktd2l0aC1BSS0yMDI1LnBkZg2_text_sections_5",
+ "text": "...cross-laminated timber.8 Through an agreement with Brookfield, we aim 10.5 gigawatts (GW) of renewable energy to the grid.910.5 GWof new renewable energy capacity to be developed across the United States and Europe.Play 4 Advance AI policy principles and governance for sustainabilityWe advocated for policies that accelerate grid decarbonization",
+ "highlights": "...cross-laminated timber.8 Through an agreement with Brookfield, we aim <em> 10.5 gigawatts (GW) of renewable energy </em>to the<em> grid.910.5 </em>GWof new<em> renewable energy </em>capacity to be developed across the United States and Europe.Play 4 Advance AI policy principles and governance for sustainabilityWe advocated for policies that accelerate grid decarbonization",
+ "score": 0.9890000224113464
+ },
+ {
+ "key": "1cb0754930b6_aHR0cHM6Ly9oYWlsZXlzdG9yYWdlLmJsb2IuY29yZS53aW5kb3dzLm5ldC9tdWx0aW1vZGFsLXNlYXJjaC9BY2NlbGVyYXRpbmctU3VzdGFpbmFiaWxpdHktd2l0aC1BSS0yMDI1LnBkZg2_text_sections_50",
+ "text": "ForewordAct... Similarly, we have restored degraded stream ecosystems near our datacenters from Racine, Wisconsin120 to Jakarta, Indonesia.117INNOVATION SPOTLIGHTAI-powered Community Solar MicrogridsDeveloping energy transition programsWe are co-innovating with communities to develop energy transition programs that align their goals with broader s.",
+ "highlights": "ForewordAct... Similarly, we have restored degraded stream ecosystems near our datacenters from Racine, Wisconsin120 to Jakarta, Indonesia.117INNOVATION SPOTLIGHTAI-powered Community<em> Solar MicrogridsDeveloping energy transition programsWe </em>are co-innovating with communities to develop<em> energy transition programs </em>that align their goals with broader s.",
+ "score": 0.9869999885559082
+ }
+ ]
+ ```
## Clean up resources
This quickstart uses billable Azure resources. If you no longer need the resources, delete them from your subscription to avoid charges.
-## Next step
+## Next steps
+
+This quickstart introduced you to the **Import and vectorize data** wizard, which creates all of the necessary objects for multimodal search. To explore each step in detail, see the following tutorials:
-This quickstart introduced you to the **Import and vectorize data wizard**, which creates all of the necessary objects for multimodal search. To explore each step in detail, see [Tutorial: Index mixed content using image verbalizations and the Document Layout skill](tutorial-document-layout-image-verbalization.md).
++ [Tutorial: Image verbalization and Document Extraction skill](tutorial-document-extraction-image-verbalization.md)
++ [Tutorial: Image verbalization and Document Layout skill](tutorial-document-layout-image-verbalization.md)
++ [Tutorial: Multimodal embeddings and Document Extraction skill](tutorial-document-extraction-multimodal-embeddings.md)
++ [Tutorial: Multimodal embeddings and Document Layout skill](tutorial-document-layout-multimodal-embeddings.md)