Highlights
The primary purpose of this set of changes is to rename "Azure AI Studio" to "Azure AI Foundry" throughout the documentation. This keeps the service name consistent, ensures users see the current platform name, and avoids confusion when they use the docs.
New features
- A new custom tag, `ignite-2024`, has been added to document metadata, associating the content with the event.
Breaking changes
- No significant functional changes are included, but the titles and descriptions of all affected documents have been updated to the new service name. Searches and references based on the old name are now effectively deprecated.
Other updates
- The service name was changed in each document (from Azure AI Studio to Azure AI Foundry).
- Relevant links and screenshot descriptions now reflect the new platform information.
Insights
These changes introduce a unified service name so that Azure AI documentation presents the service to users consistently. With one name in use everywhere, users can build a more systematic picture of what Azure AI Foundry offers and how to put it to work.
A rename of this kind is more than a search-and-replace: it lays the groundwork for the new user experience. The addition of the `ignite-2024` metadata tag also shows the documentation being tied more closely to Azure's upcoming events and roadmap.
Large-scale renames call for clear notices and support information so that users are not confused. With the documentation kept consistent, Azure AI Foundry users can move their projects forward quickly and effectively on current, accurate information, and the event-related metadata helps them bring new service capabilities into real projects sooner.
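A sweep like this one, adding a single tag to the `ms.custom` front-matter field of dozens of files, is the kind of change that is usually scripted. The sketch below is purely illustrative (the function name and behavior are assumptions; the actual PR was presumably produced by Microsoft's own tooling), but it shows the two cases the diffs above have to handle: appending to an inline comma-separated value, and appending an item to a YAML block list.

```python
import re

def add_ms_custom_tag(doc: str, tag: str) -> str:
    """Append `tag` to the ms.custom field in a markdown file's YAML
    front matter, handling both the inline comma-separated form and
    the block-list form; adds the field if it is missing entirely."""
    parts = doc.split("---\n", 2)
    if len(parts) < 3:
        return doc  # no front matter; leave the file untouched
    lead, fm, body = parts
    lines = fm.splitlines()
    out, i, done = [], 0, False
    while i < len(lines):
        line = lines[i]
        m = re.match(r"ms\.custom:\s*(.*)$", line)
        if m and not done:
            existing = m.group(1).strip()
            if existing:
                # Inline form: "ms.custom: a, b" -> append the new tag
                out.append(line if tag in existing
                           else f"ms.custom: {existing}, {tag}")
            else:
                # Block-list form: copy the existing "- item" lines, then add ours
                out.append(line)
                i += 1
                while i < len(lines) and lines[i].lstrip().startswith("- "):
                    out.append(lines[i])
                    i += 1
                out.append(f"  - {tag}")
                done = True
                continue
            done = True
        else:
            out.append(line)
        i += 1
    if not done:  # field absent: add it as a block list
        out += ["ms.custom:", f"  - {tag}"]
    return lead + "---\n" + "\n".join(out) + "\n---\n" + body
```

Note the sketch assumes standard, unindented front-matter keys; oddities such as the stray leading space fixed in one of the include files below would need extra handling.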
Summary Table
Modified Contents
articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
Diff
@@ -5,6 +5,8 @@ description: Form and document processing, data extraction, and analysis using D
author: laujan
manager: nitinme
ms.service: azure-ai-document-intelligence
+ms.custom:
+ - ignite-2024
ms.topic: quickstart
ms.date: 08/07/2024
ms.author: lajanuar
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Document Intelligence Studio quickstart"
}
Explanation
This change adds two lines to the metadata of the Document Intelligence Studio quickstart guide: an `ms.custom` field with the value `ignite-2024`. The tag marks the content as related to a specific event or conference; minor updates like this are typically made to improve discoverability in navigation and search. The document's content and formatting are otherwise unchanged, with only the two lines added.
articles/ai-services/document-intelligence/studio-overview.md
Diff
@@ -1,17 +1,16 @@
---
-title: Studio experience for Document Intelligence
+title: Document Intelligence Studio
titleSuffix: Azure AI services
-description: Learn how to set up and use either Document Intelligence Studio or AI Studio to test features of Azure AI Document Intelligence.
+description: Learn how to set up Document Intelligence Studio to test Azure AI Document Intelligence features.
author: laujan
manager: nitinme
ms.service: azure-ai-document-intelligence
-ms.topic: how-to
-ms.date: 10/29/2024
+ms.topic: overview
+ms.date: 11/19/2024
ms.author: lajanuar
monikerRange: '>=doc-intel-3.0.0'
---
-
<!-- markdownlint-disable MD033 -->
<!-- markdownlint-disable MD051 -->
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Document Intelligence Studio overview"
}
Explanation
This change revises the metadata of the Document Intelligence Studio overview. The title changed from "Studio experience for Document Intelligence" to "Document Intelligence Studio", and the description was tightened. In addition, `ms.topic` changed from `how-to` to `overview`, shifting the document toward a broader introduction, and `ms.date` moved from 10/29/2024 to 11/19/2024. Together these changes sharpen the document and help users better understand Document Intelligence Studio.
articles/ai-services/language-service/concepts/configure-containers.md
Diff
@@ -7,6 +7,7 @@ author: jboback
manager: nitinme
ms.custom:
- ignite-2023
+ - ignite-2024
ms.service: azure-ai-language
ms.topic: conceptual
ms.date: 11/04/2024
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Language service container configuration document"
}
Explanation
This change adds new custom metadata to the Language service "Configure containers" document: the value `ignite-2024` was appended to the `ms.custom` field, indicating that the document is now also associated with the Ignite 2024 event. The previous `ignite-2023` value is retained alongside it. Tying content to new events this way makes related information easier to find; the document body is otherwise unchanged, with only one line added.
articles/ai-services/language-service/concepts/custom-features/multi-region-deployment.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: conceptual
ms.date: 11/04/2024
ms.author: jboback
-ms.custom: language-service-clu
+ms.custom: language-service-clu, ignite-2024
---
# Deploy custom language projects to multiple regions
Summary
{
"modification_type": "minor update",
"modification_title": "Update the multi-region deployment document"
}
Explanation
This change adds event information to the `ms.custom` field of the Language service multi-region deployment document: the tag `ignite-2024` was appended to the existing `language-service-clu` value, associating the document with the Ignite 2024 event so users can reach event-related content more easily. A minor, usability-focused update.
articles/ai-services/language-service/conversational-language-understanding/how-to/use-containers.md
Diff
@@ -7,6 +7,7 @@ author: jboback
manager: nitinme
ms.service: azure-ai-language
ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 10/07/2024
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the container usage document"
}
Explanation
This change adds custom metadata to the conversational language understanding "Use containers" document: the value `ignite-2024` was added under the `ms.custom` field, associating the document with the Ignite 2024 event. The existing content is untouched; the added line simply records the new event so that related information is easier to reach.
articles/ai-services/language-service/includes/use-language-studio.md
Diff
@@ -7,7 +7,7 @@
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
- ms.custom: include
+ms.custom: include, ignite-2024
---
> [!TIP]
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Language Studio usage document"
}
Explanation
This change updates the Language service "Use Language Studio" include file: `ignite-2024` was appended to the `ms.custom` field, marking the document as related to Ignite 2024, and the stray leading space before `ms.custom` was removed. The original `include` value is preserved. A small change, but it improves both the correctness of the metadata and the discoverability of the content.
articles/ai-services/language-service/language-detection/language-support.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: conceptual
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: language-service-language-detection
+ms.custom: language-service-language-detection, ignite-2024
---
# Language support for Language Detection
Summary
{
"modification_type": "minor update",
"modification_title": "Update the language support document for language detection"
}
Explanation
This change appends the `ignite-2024` tag to the `ms.custom` field of the Language Detection language support document. The field previously held only `language-service-language-detection`; the added tag lets users see that the document is relevant to Ignite 2024. A minor update aimed at improving the content's relevance and discoverability.
articles/ai-services/language-service/named-entity-recognition/concepts/ga-preview-mapping.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: conceptual
ms.date: 11/04/2024
ms.author: jboback
-ms.custom: language-service-ner
+ms.custom: language-service-ner, ignite-2024
---
# Preview API changes
Summary
{
"modification_type": "minor update",
"modification_title": "Update the named entity recognition preview API changes document"
}
Explanation
This change appends the `ignite-2024` tag to the `ms.custom` field of the named entity recognition "Preview API changes" document, which previously held only `language-service-ner`. Associating the document with Ignite 2024 makes its relevance to the event clear; a small update intended to improve relevance and the user experience.
articles/ai-services/language-service/named-entity-recognition/how-to-call.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: how-to
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: language-service-ner
+ms.custom: language-service-ner, ignite-2024
---
Summary
{
"modification_type": "minor update",
"modification_title": "Update the named entity recognition how-to document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the named entity recognition how-to document, which previously held only `language-service-ner`. The tag makes the document's connection to the Ignite 2024 event explicit, keeping its metadata current.
articles/ai-services/language-service/named-entity-recognition/how-to/skill-parameters.md
Diff
@@ -7,6 +7,7 @@ author: jboback
manager: nitinme
ms.service: azure-ai-language
ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/04/2024
ms.author: jboback
@@ -70,4 +71,4 @@ This bit of sample code explains how to use skill parameters.
## Next steps
-* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
\ No newline at end of file
+* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
Summary
{
"modification_type": "minor update",
"modification_title": "Update the skill parameters document"
}
Explanation
This change updates the skill parameters document in two ways. First, `ignite-2024` was added under the `ms.custom` field, so the document's relation to the Ignite 2024 event is explicit. Second, a trailing newline was added at the end of the file (the final list item previously had no newline at end of file). Both are housekeeping fixes that keep the document current and tidy.
articles/ai-services/language-service/native-document-support/use-native-documents.md
Diff
@@ -5,6 +5,8 @@ description: How to use native document with Azure AI Languages Personally Ident
author: laujan
manager: nitinme
ms.service: azure-ai-language
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.author: lajanuar
Summary
{
"modification_type": "minor update",
"modification_title": "Update the native document usage document"
}
Explanation
This change adds an `ms.custom` field with the value `ignite-2024` to the "Use native documents" document, making its relation to the Ignite 2024 event explicit. A minor, metadata-only update that keeps the document's event information current.
articles/ai-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
Diff
@@ -6,6 +6,8 @@ description: This article shows you how to extract PII from chat and spoken tran
author: jboback
manager: nitinme
ms.service: azure-ai-language
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/04/2024
ms.author: jboback
@@ -352,4 +354,3 @@ curl -X GET https://your-language-endpoint/language/analyze-conversations/job
## Service and data limits
[!INCLUDE [service limits article](../includes/service-limits-link.md)]
-
Summary
{
"modification_type": "minor update",
"modification_title": "Update the conversation PII how-to document"
}
Explanation
This change updates the document on extracting PII from conversations in two ways. First, an `ms.custom` field with the value `ignite-2024` was added, associating the document with the Ignite 2024 event so users can recognize event-related content. Second, a trailing blank line at the end of the file was removed, tidying the document's structure. Both are minor fixes that keep the metadata current and the file clean.
articles/ai-services/language-service/personally-identifiable-information/how-to-call.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: how-to
ms.date: 11/04/2024
ms.author: jboback
-ms.custom: language-service-pii
+ms.custom: language-service-pii, ignite-2024
---
Summary
{
"modification_type": "minor update",
"modification_title": "Update the PII how-to document"
}
Explanation
This change updates the how-to document for personally identifiable information: `ignite-2024` was appended to the existing `language-service-pii` value in the `ms.custom` field, making the document's relation to the Ignite 2024 event explicit. A small revision, but one that improves relevance and keeps the metadata consistent.
articles/ai-services/language-service/personally-identifiable-information/includes/quickstarts/csharp-sdk.md
Diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: language-service-pii
+ms.custom: language-service-pii, ignite-2024
---
[Reference documentation](/dotnet/api/azure.ai.textanalytics?preserve-view=true&view=azure-dotnet) | [More samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics)
@@ -97,4 +97,4 @@ Redacted Text: Call our office at ************, or send an email to ************
Recognized 2 PII entities:
Text: 312-555-1234, Category: PhoneNumber, SubCategory: , Confidence score: 0.8
Text: support@contoso.com, Category: Email, SubCategory: , Confidence score: 0.8
-```
\ No newline at end of file
+```
Summary
{
"modification_type": "minor update",
"modification_title": "Update the C# SDK quickstart document"
}
Explanation
This change updates the C# SDK quickstart include: `ignite-2024` was appended to the `ms.custom` field, tying the document to the Ignite 2024 event, and a newline was added after the closing code fence at the end of the file. Small improvements to the metadata and file formatting.
articles/ai-services/language-service/personally-identifiable-information/includes/quickstarts/java-sdk.md
Diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: language-service-pii
+ms.custom: language-service-pii, ignite-2024
---
[Reference documentation](/java/api/overview/azure/ai-textanalytics-readme?preserve-view=true&view=azure-java-stable) | [More samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) | [Package (Maven)](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Java SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the Java SDK quickstart include, associating it with the Ignite 2024 event so that readers can find related resources more easily. The change is minor and does not affect the document's substance.
articles/ai-services/language-service/personally-identifiable-information/includes/quickstarts/nodejs-sdk.md
Diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: devx-track-js
+ms.custom: devx-track-js, ignite-2024
---
[Reference documentation](/javascript/api/overview/azure/ai-language-text-readme) | [More samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-language-text) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Node.js SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the Node.js SDK quickstart include, associating it with the Ignite 2024 event. A minor, metadata-only update that improves the document's relevance without touching its content.
articles/ai-services/language-service/personally-identifiable-information/includes/quickstarts/python-sdk.md
Diff
@@ -4,7 +4,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: language-service-pii
+ms.custom: language-service-pii, ignite-2024
---
[Reference documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?preserve-view=true&view=azure-python) | [More samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) | [Package (PyPi)](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Python SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the Python SDK quickstart include, associating it with the Ignite 2024 event. The document's content and function are unaffected; the update simply improves event relevance and information organization.
articles/ai-services/language-service/personally-identifiable-information/includes/quickstarts/rest-api.md
Diff
@@ -6,7 +6,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: language-service-pii
+ms.custom: language-service-pii, ignite-2024
---
[Reference documentation](https://go.microsoft.com/fwlink/?linkid=2239169)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the REST API quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the REST API quickstart include, associating it with the Ignite 2024 event and making event-related content easier to locate. The core content is unchanged.
articles/ai-services/language-service/personally-identifiable-information/includes/use-language-studio.md
Diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: include
+ms.custom: include, ignite-2024
---
> [!TIP]
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Language Studio usage document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the PII "Use Language Studio" include, associating it with the Ignite 2024 event. A minor update that improves discoverability without changing the main content.
articles/ai-services/language-service/personally-identifiable-information/overview.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: overview
ms.date: 09/27/2024
ms.author: jboback
-ms.custom: language-service-pii, build-2024
+ms.custom: language-service-pii, build-2024, ignite-2024
---
# What is Personally Identifiable Information (PII) detection in Azure AI Language?
Summary
{
"modification_type": "minor update",
"modification_title": "Update the PII detection overview document"
}
Explanation
This change updates the overview of PII detection in Azure AI Language: `ignite-2024` was appended to the `ms.custom` field, alongside `language-service-pii` and `build-2024`, associating the document with the Ignite 2024 event. A metadata-only change with no effect on the document's substance, which continues to cover the importance of PII detection.
articles/ai-services/language-service/summarization/includes/quickstarts/csharp-sdk.md
Diff
@@ -4,6 +4,7 @@ manager: nitinme
ms.service: azure-ai-language
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the C# SDK quickstart document"
}
Explanation
This change adds `ignite-2024` under the `ms.custom` field of the summarization C# SDK quickstart include, associating it with the Ignite 2024 event. A minor update that does not affect functionality or content but helps readers place the document in its event context.
articles/ai-services/language-service/summarization/includes/quickstarts/java-sdk.md
Diff
@@ -4,7 +4,7 @@ manager: nitinme
ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
-ms.custom: devx-track-java
+ms.custom: devx-track-java, ignite-2024
ms.author: jboback
---
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Java SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the summarization Java SDK quickstart include, after the existing `devx-track-java` value, associating it with the Ignite 2024 event. A metadata-only update that gives the document clearer event context.
articles/ai-services/language-service/summarization/includes/quickstarts/nodejs-sdk.md
Diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: devx-track-js
+ms.custom: devx-track-js, ignite-2024
---
[Reference documentation](/javascript/api/overview/azure/ai-language-text-readme?view=azure-node-latest&preserve-view=true) | [Additional samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.2.0-beta.1) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/textanalytics/ai-text-analytics)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Node.js SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the summarization Node.js SDK quickstart include, associating it with the Ignite 2024 event. The rest of the document is unchanged; including the event tag simply improves relevance for readers.
articles/ai-services/language-service/summarization/includes/quickstarts/python-sdk.md
Diff
@@ -3,6 +3,7 @@ author: jboback
ms.service: azure-ai-language
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Python SDK quickstart document"
}
Explanation
This change adds `ignite-2024` to the `ms.custom` section of the summarization Python SDK quickstart include, associating it with the Ignite 2024 event. A minor metadata change with no other effects on the document.
articles/ai-services/language-service/summarization/includes/quickstarts/rest-api.md
Diff
@@ -5,6 +5,7 @@ manager: nitinme
ms.service: azure-ai-language
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the REST API quickstart document"
}
Explanation
This change adds `ignite-2024` to the `ms.custom` section of the summarization REST API quickstart include, associating it with the Ignite 2024 event. A minor metadata change with no other effects on the document.
articles/ai-services/language-service/summarization/includes/use-language-studio.md
Diff
@@ -7,7 +7,7 @@
ms.topic: include
ms.date: 05/07/2024
ms.author: jboback
-ms.custom: include, build-2024
+ms.custom: include, build-2024, ignite-2024
---
> [!TIP]
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Language Studio usage document"
}
Explanation
This change updates the summarization "Use Language Studio" include: `ignite-2024` was appended to the `ms.custom` field, joining the existing `include` and `build-2024` values and associating the document with the Ignite 2024 event so event-related information is easy to reference. Nothing else in the file changes.
articles/ai-services/language-service/summarization/overview.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: overview
ms.date: 05/07/2024
ms.author: jboback
-ms.custom: language-service-summarization, build-2024
+ms.custom: language-service-summarization, build-2024, ignite-2024
---
# What is summarization?
Summary
{
"modification_type": "minor update",
"modification_title": "Update the summarization overview document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the summarization overview, associating the document with the Ignite 2024 event and surfacing event information to readers quickly. A minor adjustment that leaves the rest of the document untouched.
articles/ai-services/language-service/text-analytics-for-health/concepts/fhir.md
Diff
@@ -8,7 +8,7 @@ ms.service: azure-ai-language
ms.topic: conceptual
ms.date: 11/04/2024
ms.author: jboback
-ms.custom: language-service-health
+ms.custom: language-service-health, ignite-2024
---
# Utilizing Fast Healthcare Interoperability Resources (FHIR) structuring in Text Analytics for Health
@@ -86,4 +86,4 @@ You can also use the SDK to make the request for Text Analytics for health to in
## Next steps
-* [How to call the Text Analytics for health](../how-to/call-api.md)
\ No newline at end of file
+* [How to call the Text Analytics for health](../how-to/call-api.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the FHIR document"
}
Explanation
This change updates the FHIR (Fast Healthcare Interoperability Resources) document for Text Analytics for health in two ways: `ignite-2024` was appended to the `ms.custom` field, and a trailing newline was added at the end of the file (the final "Next steps" list item previously lacked one; its content is unchanged). The document now reflects its relevance to the Ignite 2024 event, helping users find up-to-date, related resources.
articles/ai-services/language-service/text-analytics-for-health/includes/quickstarts/csharp-sdk.md
Diff
@@ -2,6 +2,8 @@
author: jboback
manager: nitinme
ms.service: azure-ai-language
+ms.custom:
+ - ignite-2024
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the C# SDK quickstart document"
}
Explanation
This change adds an `ms.custom` section containing the `ignite-2024` tag to the Text Analytics for health C# SDK quickstart include, associating it with the Ignite 2024 event and making event-related resources easier to find.
articles/ai-services/language-service/text-analytics-for-health/includes/quickstarts/java-sdk.md
Diff
@@ -4,7 +4,7 @@ manager: nitinme
ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
-ms.custom: devx-track-java
+ms.custom: devx-track-java, ignite-2024
ms.author: jboback
---
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Java SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the Text Analytics for health Java SDK quickstart include, after the existing `devx-track-java` value, associating the document with the Ignite 2024 event so readers can locate related resources easily.
articles/ai-services/language-service/text-analytics-for-health/includes/quickstarts/nodejs-sdk.md
Diff
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
-ms.custom: devx-track-js
+ms.custom: devx-track-js, ignite-2024
---
[Reference documentation](/javascript/api/overview/azure/ai-language-text-readme) | [More samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-language-text) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Node.js SDK quickstart document"
}
Explanation
This change appends `ignite-2024` to the `ms.custom` field of the Text Analytics for health Node.js SDK quickstart include, after the existing `devx-track-js` value, associating the document with the Ignite 2024 event and strengthening the quickstart's event context.
articles/ai-services/language-service/text-analytics-for-health/includes/quickstarts/python-sdk.md
Diff
@@ -1,6 +1,8 @@
---
author: jboback
ms.service: azure-ai-language
+ms.custom:
+ - ignite-2024
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Python SDK quickstart document"
}
Explanation
This change adds an `ms.custom` section containing `ignite-2024` to the Text Analytics for health Python SDK quickstart include, associating the document with the Ignite 2024 event so readers can find related resources easily.
articles/ai-services/language-service/text-analytics-for-health/includes/quickstarts/rest-api.md
Diff
@@ -2,6 +2,8 @@
author: jboback
manager: nitinme
ms.service: azure-ai-language
+ms.custom:
+ - ignite-2024
ms.topic: include
ms.date: 12/19/2023
ms.author: jboback
Summary
{
"modification_type": "minor update",
"modification_title": "Update the REST API quickstart document"
}
Explanation
This change adds an `ms.custom` section containing `ignite-2024` to the Text Analytics for health REST API quickstart include, associating the document with the Ignite 2024 event and making it a more useful resource in that context.
articles/ai-services/language-service/text-analytics-for-health/overview.md
Diff
@@ -9,7 +9,7 @@ ms.service: azure-ai-language
ms.topic: overview
ms.date: 10/21/2024
ms.author: jboback
-ms.custom: language-service-health
+ms.custom: language-service-health, ignite-2024
---
# What is Text Analytics for health?
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Text Analytics for health overview document"
}
Explanation
This change updates the `ms.custom` field of the Text Analytics for health overview: `ignite-2024` was added alongside the existing `language-service-health` value, associating the document with the Ignite 2024 event and improving its discoverability.
articles/ai-studio/ai-services/concepts/endpoints.md
Diff
@@ -72,7 +72,7 @@ All models deployed in Azure AI model inference service support the [Azure AI mo
|------------|---------|-----|-------|
| C# | [Reference](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) | [azure-ai-inference (NuGet)](https://www.nuget.org/packages/Azure.AI.Inference/) | [C# examples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) |
| Java | [Reference](https://aka.ms/azsdk/azure-ai-inference/java/reference) | [azure-ai-inference (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-inference/) | [Java examples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) |
-| JavaScript | [Reference](https://aka.ms/AAp1kxa) | [@azure/ai-inference (npm)](https://www.npmjs.com/package/@azure/ai-inference) | [JavaScript examples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
+| JavaScript | [Reference](/javascript/api/overview/azure/ai-inference-rest-readme?view=azure-node-preview&preserve-view=true) | [@azure/ai-inference (npm)](https://www.npmjs.com/package/@azure/ai-inference) | [JavaScript examples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Python | [Reference](https://aka.ms/azsdk/azure-ai-inference/python/reference) | [azure-ai-inference (PyPi)](https://pypi.org/project/azure-ai-inference/) | [Python examples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) |
## Azure OpenAI inference endpoint
Summary
{
"modification_type": "minor update",
"modification_title": "Update the AI services endpoints document"
}
Explanation
This change updates the AI services endpoints document: the JavaScript reference link in the SDK table was replaced, swapping the old aka.ms short link for a direct link to the Azure JavaScript API reference. The new link gives developers more accurate, current information and makes the resource easier to reach. Overall, the fix improves the table's clarity and accuracy.
articles/ai-studio/ai-services/content-safety-overview.md
Diff
@@ -1,7 +1,7 @@
---
-title: Content Safety in Azure AI Studio overview
+title: Content Safety in Azure AI Foundry portal overview
titleSuffix: Azure AI Foundry
-description: Learn how to use Azure AI Content Safety in Azure AI Studio to detect harmful user-generated and AI-generated content in applications and services.
+description: Learn how to use Azure AI Content Safety in Azure AI Foundry portal to detect harmful user-generated and AI-generated content in applications and services.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
@@ -12,9 +12,9 @@ ms.author: pafarley
author: PatrickFarley
---
-# Content Safety in Azure AI Studio
+# Content Safety in Azure AI Foundry portal
-Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes various APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try out** page in AI Studio allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
+Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes various APIs that allow you to detect and prevent the output of harmful content. The interactive Content Safety **try out** page in AI Foundry portal allows you to view, explore, and try out sample code for detecting harmful content across different modalities.
## Features
@@ -64,4 +64,4 @@ Refer to the [Content Safety overview](/azure/ai-services/content-safety/overvie
## Next step
-Get started using Azure AI Content Safety in Azure AI Studio by following the [How-to guide](./how-to/content-safety.md).
\ No newline at end of file
+Get started using Azure AI Content Safety in Azure AI Foundry portal by following the [How-to guide](./how-to/content-safety.md).
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "Change from Azure AI Studio to Azure AI Foundry portal"
}
Explanation
This change updates the title and contents of the Content Safety overview document for Azure AI Studio. Specifically, the term "Azure AI Studio" is replaced with "Azure AI Foundry portal," and the terminology is unified throughout the document, making it easier for users to find information related to Azure AI Foundry. The document's description and section headings are also revised consistently, clarifying how to use Content Safety in the Azure AI Foundry portal. The purpose of this update is to ensure users work from the latest platform information.
articles/ai-studio/ai-services/faq.yml
Diff
@@ -34,11 +34,11 @@ sections:
Learn more about the [Azure OpenAI service](../../ai-services/openai/overview.md).
- question: |
- What's the difference between Azure AI model inference and Azure AI studio?
+ What's the difference between Azure AI model inference and Azure AI Foundry?
answer: |
- Azure AI services are a suite of AI services that provide prebuilt APIs for common AI scenarios. One of them is Azure AI model inference service which focuses on inference service of different state-of-the-art models. Azure AI studio is a web-based tool that allows you to build, train, and deploy machine learning models. Azure AI services can be used in Azure AI studio to enhance your models with prebuilt AI capabilities.
+ Azure AI services are a suite of AI services that provide prebuilt APIs for common AI scenarios. One of them is Azure AI model inference service which focuses on inference service of different state-of-the-art models. Azure AI Foundry portal is a web-based tool that allows you to build, train, and deploy machine learning models. Azure AI services can be used in Azure AI Foundry portal to enhance your models with prebuilt AI capabilities.
- question: |
- What's the difference between Azure AI model inference service and Serverless API model deployments in Azure AI studio?
+ What's the difference between Azure AI model inference service and Serverless API model deployments in Azure AI Foundry portal?
answer: |
Both technologies allow you to deploy models without requiring compute resources as they are based on the Models as a Service idea. [Serverless API model deployments](../how-to/deploy-models-serverless.md) allow you to deploy a single model under a unique endpoint and credentials. You need to create a different endpoint for each model you want to deploy. On top of that, they are always created in the context of the project and while they can be shared by creating connections from other projects, they live in the context of a given project.
@@ -52,7 +52,7 @@ sections:
answer: |
The Azure AI model inference service in AI services supports all the models in the Azure AI catalog with pay-as-you-go billing (per-token). For more information, see [the Models section](model-inference.md#models).
- The Azure AI model catalog contains a wider list of models, however, those models require compute quota from your subscription. They also need to have a project or AI hub where to host the deployment. For more information, see [deployment options in Azure AI studio](../concepts/deployments-overview.md).
+ The Azure AI model catalog contains a wider list of models, however, those models require compute quota from your subscription. They also need to have a project or AI hub where to host the deployment. For more information, see [deployment options in Azure AI Foundry portal](../concepts/deployments-overview.md).
- question: |
Why I can't add OpenAI o1-preview or OpenA o1-mini-preview to my resource?
answer: |
@@ -105,7 +105,7 @@ sections:
answer: |
Billing and costs are displayed in [Microsoft Cost Management + Billing](/azure/cost-management-billing/understand/download-azure-daily-usage). You can see the usage details in the [Azure portal](https://portal.azure.com).
- Billing isn't shown in Azure AI studio.
+ Billing isn't shown in Azure AI Foundry portal.
- question: |
How can I place a spending limit to my bill?
answer: |
Summary
{
"modification_type": "minor update",
"modification_title": "FAQ update from Azure AI Studio to Azure AI Foundry portal"
}
Explanation
This change updates the contents of the FAQ YAML file for Azure AI Studio. The main change is that the term "Azure AI Studio" is replaced with "Azure AI Foundry," keeping the new platform name consistent across the FAQ's questions and answers.
Specifically, the questions about the Azure AI model inference service and the Azure AI Foundry portal are reworded, and the corresponding answers are updated to match. Other related references are also adjusted to the new portal name so users continue to get accurate, up-to-date information. Overall, the consistency and accuracy of the documentation are improved.
articles/ai-studio/ai-services/how-to/connect-ai-services.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Azure AI services in AI Studio
+title: How to use Azure AI services in AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to use Azure AI services in AI Studio. You can use existing Azure AI services resources in AI Studio by creating a connection to the resource.
+description: Learn how to use Azure AI services in AI Foundry portal. You can use existing Azure AI services resources in AI Foundry portal by creating a connection to the resource.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
@@ -15,20 +15,20 @@ ms.author: eur
author: eric-urban
---
-# How to use Azure AI services in AI Studio
+# How to use Azure AI services in AI Foundry portal
-You might have existing resources for Azure AI services that you used in the old studios such as Azure OpenAI Studio or Speech Studio. You can pick up where you left off by using your existing resources in AI Studio.
+You might have existing resources for Azure AI services that you used in the old studios such as Azure OpenAI Studio or Speech Studio. You can pick up where you left off by using your existing resources in AI Foundry portal.
-This article describes how to use new or existing Azure AI services resources in an AI Studio project.
+This article describes how to use new or existing Azure AI services resources in an AI Foundry project.
## Usage scenarios
-Depending on the AI service and model you want to use, you can use them in AI Studio via:
-- [Bring your existing Azure AI services resources](#bring-your-existing-azure-ai-services-resources-into-a-project) into a project. You can use your existing Azure AI services resources in an AI Studio project by creating a connection to the resource.
+Depending on the AI service and model you want to use, you can use them in AI Foundry portal via:
+- [Bring your existing Azure AI services resources](#bring-your-existing-azure-ai-services-resources-into-a-project) into a project. You can use your existing Azure AI services resources in an AI Foundry project by creating a connection to the resource.
- The [model catalog](#discover-azure-ai-models-in-the-model-catalog). You don't need a project to browse and discover Azure AI models. Some of the Azure AI services are available for you to try via the model catalog without a project. Some Azure AI services require a project to use in the playgrounds.
- The [project-level playgrounds](#try-azure-ai-services-in-the-project-level-playgrounds). You need a project to try Azure AI services such as Azure AI Speech and Azure AI Language.
- [Azure AI Services demo pages](#try-out-azure-ai-services-demos). You can browse Azure AI services capabilities and step through the demos. You can try some limited demos for free without a project.
-- [Fine-tune](#fine-tune-azure-ai-services-models) models. You can fine-tune a subset of Azure AI services models in AI Studio.
+- [Fine-tune](#fine-tune-azure-ai-services-models) models. You can fine-tune a subset of Azure AI services models in AI Foundry portal.
- [Deploy](#deploy-models-to-production) models. You can deploy base models and fine-tuned models to production. Most Azure AI services models are already deployed and ready to use.
## Bring your existing Azure AI services resources into a project
@@ -44,14 +44,14 @@ When you create a project for the first time, you also create a hub. When you cr
:::image type="content" source="../../media/how-to/projects/projects-create-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../../media/how-to/projects/projects-create-resource.png":::
-For more details about creating a project, see the [create an AI Studio project](../../how-to/create-projects.md) how-to guide or the [create a project and use the chat playground](../../quickstarts/get-started-playground.md) quickstart.
+For more details about creating a project, see the [create an AI Foundry project](../../how-to/create-projects.md) how-to guide or the [create a project and use the chat playground](../../quickstarts/get-started-playground.md) quickstart.
### Connect Azure AI services after you create a project
-To use your existing Azure AI services resources (such as Azure AI Speech) in an AI Studio project, you need to create a connection to the resource.
+To use your existing Azure AI services resources (such as Azure AI Speech) in an AI Foundry project, you need to create a connection to the resource.
-1. Create an AI Studio project. For detailed instructions, see [Create an AI Studio project](../../how-to/create-projects.md).
-1. Go to your AI Studio project.
+1. Create an AI Foundry project. For detailed instructions, see [Create an AI Foundry project](../../how-to/create-projects.md).
+1. Go to your AI Foundry project.
1. Select **Management center** from the left pane.
1. Select **Connected resources** (under **Project**) from the left pane.
1. Select **+ New connection**.
@@ -72,24 +72,24 @@ To use your existing Azure AI services resources (such as Azure AI Speech) in an
You can discover Azure AI models in the model catalog without a project. Some Azure AI services are available for you to try via the model catalog without a project.
-1. Go to the [AI Studio home page](https://ai.azure.com).
+1. Go to the [AI Foundry home page](https://ai.azure.com).
1. Select the tile that says **Model catalog and benchmarks**.
- :::image type="content" source="../../media/explore/ai-studio-home-model-catalog.png" alt-text="Screenshot of the home page in Azure AI Studio with the option to select the model catalog tile." lightbox="../../media/explore/ai-studio-home-model-catalog.png":::
+ :::image type="content" source="../../media/explore/ai-studio-home-model-catalog.png" alt-text="Screenshot of the home page in Azure AI Foundry portal with the option to select the model catalog tile." lightbox="../../media/explore/ai-studio-home-model-catalog.png":::
- If you don't see this tile, you can also go directly to the [Azure AI model catalog page](https://ai.azure.com/explore/models) in AI Studio.
+ If you don't see this tile, you can also go directly to the [Azure AI model catalog page](https://ai.azure.com/explore/models) in AI Foundry portal.
1. From the **Collections** dropdown, select **Microsoft**. Search for Azure AI services models by entering **azure-ai** in the search box.
- :::image type="content" source="../../media/ai-services/models/ai-services-model-catalog.png" alt-text="Screenshot of the model catalog page in Azure AI Studio with the option to search by collection and name." lightbox="../../media/ai-services/models/ai-services-model-catalog.png":::
+ :::image type="content" source="../../media/ai-services/models/ai-services-model-catalog.png" alt-text="Screenshot of the model catalog page in Azure AI Foundry portal with the option to search by collection and name." lightbox="../../media/ai-services/models/ai-services-model-catalog.png":::
1. Select a model to view more details about it. You can also try the model if it's available for you to try without a project.
## Try Azure AI services in the project level playgrounds
In the project-level playgrounds, you can try Azure AI services such as Azure AI Speech and Azure AI Language.
-1. Go to your AI Studio project. If you need to create a project, see [Create an AI Studio project](../../how-to/create-projects.md).
+1. Go to your AI Foundry project. If you need to create a project, see [Create an AI Foundry project](../../how-to/create-projects.md).
1. Select **Playgrounds** from the left pane and then select a playground to use. In this example, select **Try the Speech playground**.
:::image type="content" source="../../media/ai-services/playgrounds/azure-ai-services-playgrounds.png" alt-text="Screenshot of the project level playgrounds that you can use." lightbox="../../media/ai-services/playgrounds/azure-ai-services-playgrounds.png":::
@@ -106,24 +106,24 @@ If you have other connected resources, you can use them in the corresponding pla
You can browse Azure AI services capabilities and step through the demos. You can try some limited demos for free without a project.
-1. Go to the [AI Studio home page](https://ai.azure.com) and make sure you're signed in with the Azure subscription that has your Azure AI services resource.
+1. Go to the [AI Foundry home page](https://ai.azure.com) and make sure you're signed in with the Azure subscription that has your Azure AI services resource.
1. Find the tile that says **Explore Azure AI Services** and select **Try now**.
- :::image type="content" source="../../media/explore/home-ai-services.png" alt-text="Screenshot of the home page in Azure AI Studio with the option to select Azure AI Services." lightbox="../../media/explore/home-ai-services.png":::
+ :::image type="content" source="../../media/explore/home-ai-services.png" alt-text="Screenshot of the home page in Azure AI Foundry portal with the option to select Azure AI Services." lightbox="../../media/explore/home-ai-services.png":::
- If you don't see this tile, you can also go directly to the [Azure AI Services page](https://ai.azure.com/explore/aiservices) in AI Studio.
+ If you don't see this tile, you can also go directly to the [Azure AI Services page](https://ai.azure.com/explore/aiservices) in AI Foundry portal.
1. You should see tiles for Azure AI services that you can try. Select a tile to get to the demo page for that service. For example, select **Language + Translator**.
- :::image type="content" source="../../media/ai-services/overview/ai-services-capabilities.png" alt-text="Screenshot of the landing page to try Azure AI Services try out capabilities in Azure AI Studio." lightbox="../../media/ai-services/overview/ai-services-capabilities.png":::
+ :::image type="content" source="../../media/ai-services/overview/ai-services-capabilities.png" alt-text="Screenshot of the landing page to try Azure AI Services try out capabilities in Azure AI Foundry portal." lightbox="../../media/ai-services/overview/ai-services-capabilities.png":::
The presentation and flow of the demo pages might vary depending on the service. In some cases, you need to select a project or connection to use the service.
## Fine-tune Azure AI services models
-In AI Studio, you can fine-tune some Azure AI services models. For example, you can fine-tune a model for custom speech.
+In AI Foundry portal, you can fine-tune some Azure AI services models. For example, you can fine-tune a model for custom speech.
-1. Go to your AI Studio project. If you need to create a project, see [Create an AI Studio project](../../how-to/create-projects.md).
+1. Go to your AI Foundry project. If you need to create a project, see [Create an AI Foundry project](../../how-to/create-projects.md).
1. Select **Fine-tuning** from the left pane.
1. Select **AI Service fine-tuning**.
@@ -136,7 +136,7 @@ In AI Studio, you can fine-tune some Azure AI services models. For example, you
Once you have a project, several Azure AI services models are already deployed and ready to use.
-1. Go to your AI Studio project.
+1. Go to your AI Foundry project.
1. Select **Management center** from the left pane.
1. Select **Models + endpoints** (under **Project**) from the left pane.
1. Select the **Service deployments** tab to view the list of Azure AI services models that are already deployed.
@@ -155,4 +155,4 @@ However, you can deploy [fine-tuned Azure AI services models](#fine-tune-azure-a
## Related content
- [What are Azure AI services?](../../../ai-services/what-are-ai-services.md?context=/azure/ai-studio/context/context)
-- [Connections in Azure AI Studio](../../concepts/connections.md)
+- [Connections in Azure AI Foundry portal](../../concepts/connections.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update of connection steps from Azure AI Studio to Azure AI Foundry portal"
}
Explanation
This change updates the "How to use Azure AI services" document. The main change is the replacement of "AI Studio" with "AI Foundry portal," keeping the documentation consistent going forward. Specifically, the document's title, description, and step-by-step sections now say "AI Foundry portal" instead of "AI Studio," so users get accurate information about the new platform.
Throughout the document, the instructions for connecting Azure AI services resources to a project are updated consistently. Along with the specific steps and usage scenarios, related links and screenshot descriptions are adjusted to the new portal name, making the content clearer and more useful. The goal of this update is to let users continue their work smoothly on the new platform.
articles/ai-studio/ai-services/how-to/connect-azure-openai.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Azure OpenAI Service in AI Studio
+title: How to use Azure OpenAI Service in AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to use Azure OpenAI Service in AI Studio.
+description: Learn how to use Azure OpenAI Service in AI Foundry portal.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
@@ -15,40 +15,40 @@ ms.author: eur
author: eric-urban
---
-# How to use Azure OpenAI Service in AI Studio
+# How to use Azure OpenAI Service in AI Foundry portal
-You might have existing Azure OpenAI Service resources and model deployments that you created using the old Azure OpenAI Studio or via code. You can pick up where you left off by using your existing resources in AI Studio.
+You might have existing Azure OpenAI Service resources and model deployments that you created using the old Azure OpenAI Studio or via code. You can pick up where you left off by using your existing resources in AI Foundry portal.
This article describes how to:
- Use Azure OpenAI Service models outside of a project.
-- Use Azure OpenAI Service models and an AI Studio project.
+- Use Azure OpenAI Service models and an AI Foundry project.
> [!TIP]
-> You can use Azure OpenAI Service in AI Studio without creating a project or a connection. When you're working with the models and deployments, we recommend that you work outside of a project. Eventually, you want to work in a project for tasks such as managing connections, permissions, and deploying the models to production.
+> You can use Azure OpenAI Service in AI Foundry portal without creating a project or a connection. When you're working with the models and deployments, we recommend that you work outside of a project. Eventually, you want to work in a project for tasks such as managing connections, permissions, and deploying the models to production.
## Use Azure OpenAI models outside of a project
-You can use your existing Azure OpenAI model deployments in AI Studio outside of a project. Start here if you previously deployed models using the old Azure OpenAI Studio or via the Azure OpenAI Service SDKs and APIs.
+You can use your existing Azure OpenAI model deployments in AI Foundry portal outside of a project. Start here if you previously deployed models using the old Azure OpenAI Studio or via the Azure OpenAI Service SDKs and APIs.
To use Azure OpenAI Service outside of a project, follow these steps:
-1. Go to the [AI Studio home page](https://ai.azure.com) and make sure you're signed in with the Azure subscription that has your Azure OpenAI Service resource.
+1. Go to the [AI Foundry home page](https://ai.azure.com) and make sure you're signed in with the Azure subscription that has your Azure OpenAI Service resource.
1. Find the tile that says **Focused on Azure OpenAI Service?** and select **Let's go**.
- :::image type="content" source="../../media/azure-openai-in-ai-studio/home-page.png" alt-text="Screenshot of the home page in Azure AI Studio with the option to select Azure OpenAI Service." lightbox="../../media/azure-openai-in-ai-studio/home-page.png":::
+ :::image type="content" source="../../media/azure-openai-in-ai-studio/home-page.png" alt-text="Screenshot of the home page in Azure AI Foundry portal with the option to select Azure OpenAI Service." lightbox="../../media/azure-openai-in-ai-studio/home-page.png":::
- If you don't see this tile, you can also go directly to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Studio.
+ If you don't see this tile, you can also go directly to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Foundry portal.
1. You should see your existing Azure OpenAI Service resources. In this example, the Azure OpenAI Service resource `contoso-azure-openai-eastus` is selected.
- :::image type="content" source="../../media/ai-services/azure-openai-studio-select-resource.png" alt-text="Screenshot of the Azure OpenAI Service resources page in Azure AI Studio." lightbox="../../media/ai-services/azure-openai-studio-select-resource.png":::
+ :::image type="content" source="../../media/ai-services/azure-openai-studio-select-resource.png" alt-text="Screenshot of the Azure OpenAI Service resources page in Azure AI Foundry portal." lightbox="../../media/ai-services/azure-openai-studio-select-resource.png":::
If your subscription has multiple Azure OpenAI Service resources, you can use the selector or go to **All resources** to see all your resources.
If you create more Azure OpenAI Service resources later (such as via the Azure portal or APIs), you can also access them from this page.
## <a name="project"></a> Use Azure OpenAI Service in a project
-You might eventually want to use a project for tasks such as managing connections, permissions, and deploying models to production. You can use your existing Azure OpenAI Service resources in an AI Studio project.
+You might eventually want to use a project for tasks such as managing connections, permissions, and deploying models to production. You can use your existing Azure OpenAI Service resources in an AI Foundry project.
Let's look at two ways to connect Azure OpenAI Service resources to a project:
@@ -61,13 +61,13 @@ When you create a project for the first time, you also create a hub. When you cr
:::image type="content" source="../../media/how-to/projects/projects-create-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../../media/how-to/projects/projects-create-resource.png":::
-For more details about creating a project, see the [create an AI Studio project](../../how-to/create-projects.md) how-to guide or the [create a project and use the chat playground](../../quickstarts/get-started-playground.md) quickstart.
+For more details about creating a project, see the [create an AI Foundry project](../../how-to/create-projects.md) how-to guide or the [create a project and use the chat playground](../../quickstarts/get-started-playground.md) quickstart.
### Connect Azure OpenAI Service after you create a project
If you already have a project and you want to connect your existing Azure OpenAI Service resources, follow these steps:
-1. Go to your AI Studio project.
+1. Go to your AI Foundry project.
1. Select **Management center** from the left pane.
1. Select **Connected resources** (under **Project**) from the left pane.
1. Select **+ New connection**.
@@ -91,7 +91,7 @@ You can try Azure OpenAI models in the Azure OpenAI Service playgrounds outside
> [!TIP]
> You can also try Azure OpenAI models in the project-level playgrounds. However, while you're only working with the Azure OpenAI Service models, we recommend working outside of a project.
-1. Go to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Studio.
+1. Go to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Foundry portal.
1. Select a playground from under **Resource playground** in the left pane.
:::image type="content" source="../../media/ai-services/playgrounds/azure-openai-studio-playgrounds.png" alt-text="Screenshot of the playgrounds that you can select to use Azure OpenAI Service." lightbox="../../media/ai-services/playgrounds/azure-openai-studio-playgrounds.png":::
@@ -106,9 +106,9 @@ Each playground has different model requirements and capabilities. The supported
## Fine-tune Azure OpenAI models
-In AI Studio, you can fine-tune several Azure OpenAI models. The purpose is typically to improve model performance on specific tasks or to introduce information that wasn't well represented when you originally trained the base model.
+In AI Foundry portal, you can fine-tune several Azure OpenAI models. The purpose is typically to improve model performance on specific tasks or to introduce information that wasn't well represented when you originally trained the base model.
-1. Go to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Studio to fine-tune Azure OpenAI models.
+1. Go to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Foundry portal to fine-tune Azure OpenAI models.
1. Select **Fine-tuning** from the left pane.
:::image type="content" source="../../media/ai-services/fine-tune-azure-openai.png" alt-text="Screenshot of the page to select fine-tuning of Azure OpenAI Service models." lightbox="../../media/ai-services/fine-tune-azure-openai.png":::
@@ -117,16 +117,16 @@ In AI Studio, you can fine-tune several Azure OpenAI models. The purpose is typi
1. Follow the [detailed how to guide](../../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context) to fine-tune the model.
For more information about fine-tuning Azure AI models, see:
-- [Overview of fine-tuning in AI Studio](../../concepts/fine-tuning-overview.md)
+- [Overview of fine-tuning in AI Foundry portal](../../concepts/fine-tuning-overview.md)
- [How to fine-tune Azure OpenAI models](../../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
- [Azure OpenAI models that are available for fine-tuning](../../../ai-services/openai/concepts/models.md?context=/azure/ai-studio/context/context)
## Deploy models to production
-You can deploy Azure OpenAI base models and fine-tuned models to production via the AI Studio.
+You can deploy Azure OpenAI base models and fine-tuned models to production via the AI Foundry portal.
-1. Go to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Studio.
+1. Go to the [Azure OpenAI Service page](https://ai.azure.com/resource/overview) in AI Foundry portal.
1. Select **Deployments** from the left pane.
:::image type="content" source="../../media/ai-services/endpoint/models-endpoints-azure-openai-deployments.png" alt-text="Screenshot of the models and endpoints page to view and create Azure OpenAI Service deployments." lightbox="../../media/ai-services/endpoint/models-endpoints-azure-openai-deployments.png":::
@@ -145,5 +145,5 @@ At some point, you want to develop apps with code. Here are some developer resou
## Related content
-- [Azure OpenAI in AI Studio](../../azure-openai-in-ai-studio.md)
+- [Azure OpenAI in AI Foundry portal](../../azure-openai-in-ai-studio.md)
- [Use Azure AI services resources](./connect-ai-services.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update to the Azure OpenAI Service connection steps"
}
Explanation
This change updates the "How to use Azure OpenAI Service" document. The key change is the consistent use of "AI Foundry portal" in place of "AI Studio," reflecting the current platform name and making the service easier for users to understand.
Specifically, the document's title, description, and usage steps are revised to say "AI Foundry portal," making clear that users can keep working with resources they created previously. The steps for creating projects, deploying models, and managing resources are reworded to match.
This update gives users accurate information and is designed to help them use Azure OpenAI Service effectively. Overall, the documentation is more consistent and the user experience improves.
articles/ai-studio/ai-services/how-to/content-safety.md
Diff
@@ -1,18 +1,20 @@
---
-title: Use Content Safety in Azure AI Studio
+title: Use Content Safety in Azure AI Foundry portal
titleSuffix: Azure AI services
-description: Learn how to use the Content Safety try it out page in Azure AI Studio to experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
+description: Learn how to use the Content Safety try it out page in Azure AI Foundry portal to experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: how-to
author: PatrickFarley
manager: nitinme
ms.date: 11/09/2024
ms.author: pafarley
---
-# Use Content Safety in Azure AI Studio
+# Use Content Safety in Azure AI Foundry portal
-Azure AI Studio includes a Content Safety **try it out** page that lets you use the core detection models and other content safety features.
+Azure AI Foundry includes a Content Safety **try it out** page that lets you use the core detection models and other content safety features.
## Prerequisites
@@ -24,7 +26,7 @@ Azure AI Studio includes a Content Safety **try it out** page that lets you use
Follow these steps to use the Content Safety **try it out** page:
-1. Go to [AI Studio](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
+1. Go to [AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** tab on the left nav and select the **Try it out** tab.
1. On the **Try it out** page, you can experiment with various content safety features such as text and image content, using adjustable thresholds to filter for inappropriate or harmful content.
:::image type="content" source="../../media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::
@@ -110,4 +112,4 @@ For more information, see the [Custom categories conceptual guide](/azure/ai-ser
## Next step
-To use Azure AI Content Safety features with your Generative AI models, see the [Content filtering](../../concepts/content-filtering.md) guide.
\ No newline at end of file
+To use Azure AI Content Safety features with your Generative AI models, see the [Content filtering](../../concepts/content-filtering.md) guide.
Summary
{
"modification_type": "minor update",
"modification_title": "Update on using the Content Safety features"
}
Explanation
This change updates the document on using the Content Safety features in "Azure AI Studio." The main change is the consistent replacement of "AI Studio" with "AI Foundry portal," providing correct information about the feature on the current platform.
Specifically, basic information such as the document's title and description is updated, and the steps for using the Content Safety "try it out" page are revised to match the new platform name. Users can experiment with features that filter inappropriate or harmful text and image content using adjustable thresholds.
In addition, the new custom metadata tag "ignite-2024" is added, tying the content to current event context. This update is designed to help users take advantage of new Azure features and services effectively. Overall, documentation consistency improves and a better user experience is provided.
articles/ai-studio/ai-services/how-to/quickstart-github-models.md
Diff
@@ -44,9 +44,9 @@ To obtain the key and endpoint:
1. If your existing account is a free account, you first have to upgrade to a Pay as you go plan. Once you upgrade, go back to the playground and select **Get API key** again, then sign in with your upgraded account.
-1. Once you've signed in to your Azure account, you're taken to [Azure AI Studio](https://ai.azure.com).
+1. Once you've signed in to your Azure account, you're taken to [Azure AI Foundry](https://ai.azure.com).
-1. At the top of the page, select **Go to your GitHub AI resource** to go to Azure AI Studio / Github](https://ai.azure.com/github). It might take one or two minutes to load your initial model details in AI Studio.
+1. At the top of the page, select **Go to your GitHub AI resource** to go to Azure AI Foundry / Github](https://ai.azure.com/github). It might take one or two minutes to load your initial model details in AI Foundry portal.
1. The page is loaded with your model's details. Select the **Create a Deployment** button to deploy the model to your account.
@@ -96,4 +96,4 @@ See the [FAQ section](../faq.yml) to explore more help.
## Next steps
* [Add more models](create-model-deployments.md) to your endpoint.
-* [Explore the model catalog](https://ai.azure.com/github/models) in Azure AI studio.
\ No newline at end of file
+* [Explore the model catalog](https://ai.azure.com/github/models) in Azure AI Foundry portal.
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "GitHubモデルのクイックスタートガイドの更新"
}
Explanation
This change updates the documentation for the GitHub Models quickstart guide. It mainly reflects the renaming of "Azure AI Studio" to "Azure AI Foundry", keeping the guide consistent for users working with models on the new platform.
Specifically, the destination users are taken to after signing in and the name of the page that shows model details have changed, and the steps now reflect this. The link for exploring the model catalog has also been revised to point to "Azure AI Foundry portal".
With this update, users can try out Azure's latest capabilities based on accurate information and work more smoothly. Overall, the document's consistency and the user experience are improved.
articles/ai-studio/azure-openai-in-ai-studio.md
Diff
@@ -1,7 +1,7 @@
---
-title: Azure OpenAI in Azure AI Studio
+title: Azure OpenAI in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn about using Azure OpenAI models in Azure AI Studio, including when to use a project and when to use without a project.
+description: Learn about using Azure OpenAI models in Azure AI Foundry portal, including when to use a project and when to use without a project.
manager: scottpolly
keywords: Azure AI services, cognitive, Azure OpenAI
ms.service: azure-ai-studio
@@ -10,7 +10,7 @@ ms.date: 11/04/2024
ms.reviewer: shwinne
ms.author: sgilley
author: sdgilley
-ms.custom: ignite-2023, build-2024
+ms.custom: ignite-2023, build-2024, ignite-2024
# customer intent: As a developer, I want to understand the different ways I can work with Azure OpenAI models so that I can build and deploy AI models.
---
@@ -24,15 +24,15 @@ Azure OpenAI Service provides REST API access to OpenAI's powerful language mode
From the [Azure AI Foundry portal](https://ai.azure.com) landing page, use the **Let's go** button in the **Focused on Azure OpenAI Service?** section.
-:::image type="content" source="media/azure-openai-in-ai-studio/home-page.png" alt-text="Screenshot shows Azure AI Studio home page.":::
+:::image type="content" source="media/azure-openai-in-ai-studio/home-page.png" alt-text="Screenshot shows Azure AI Foundry home page.":::
You can also use [https://ai.azure.com/resource](https://ai.azure.com/resource) to directly access Azure OpenAI models outside of a project.
## Focus on Azure OpenAI Service
If you've been using Azure OpenAI Studio, all your work, such as your deployments, content filters, batch jobs or fine-tuned models, is still available. All the features and functionality are still here, though the look and feel of some features are updated.
-:::image type="content" source="media/azure-openai-in-ai-studio/studio-home.png" alt-text="Screenshot shows the new Azure OpenAI in Azure AI Studio." lightbox="media/azure-openai-in-ai-studio/studio-home.png":::
+:::image type="content" source="media/azure-openai-in-ai-studio/studio-home.png" alt-text="Screenshot shows the new Azure OpenAI in Azure AI Foundry portal." lightbox="media/azure-openai-in-ai-studio/studio-home.png":::
Use the left navigation area to perform your tasks with Azure OpenAI models:
@@ -46,14 +46,14 @@ Use the left navigation area to perform your tasks with Azure OpenAI models:
* **Batch jobs**: Create and manage jobs for your global batch deployments.
* Use the resource name in the top left to switch to another recently used resource. Or find all your Azure OpenAI Service resources in the top right-hand corner under **All resources**.
- :::image type="content" source="media/azure-openai-in-ai-studio/all-resources.png" alt-text="Screenshot shows the top right access to all resources in Azure AI Service section of Azure AI Studio." lightbox="media/azure-openai-in-ai-studio/all-resources.png":::
+ :::image type="content" source="media/azure-openai-in-ai-studio/all-resources.png" alt-text="Screenshot shows the top right access to all resources in Azure AI Service section of Azure AI Foundry." lightbox="media/azure-openai-in-ai-studio/all-resources.png":::
## Azure OpenAI in an Azure AI Foundry project
While the previous sections show how to focus on just the Azure OpenAI Service, you can also incorporate other AI services and models from various providers in Azure AI Foundry portal. You can access the Azure OpenAI Service in two ways:
* When you focus on just the Azure OpenAI Service, as described in the previous sections, you don't use a project.
-* Azure AI Foundry portal uses a project to organize your work and save state while building customized AI apps. When you work in a project, you can connect to the service. For more information, see [How to use Azure OpenAI Service in AI Studio](ai-services/how-to/connect-azure-openai.md#project).
+* Azure AI Foundry portal uses a project to organize your work and save state while building customized AI apps. When you work in a project, you can connect to the service. For more information, see [How to use Azure OpenAI Service in AI Foundry portal](ai-services/how-to/connect-azure-openai.md#project).
When you create a project, you can try other models and tools along with Azure OpenAI. For example, the **Model catalog** in a project contains many more models than just Azure OpenAI models. Inside a project, you'll have access to features that are common across all AI services and models.
@@ -77,15 +77,15 @@ Pay attention to the top left corner of the screen to see which context you are
* When you are in the Azure AI Foundry portal landing page, with choices of where to go next, you see **Azure AI Foundry**.
- :::image type="content" source="media/azure-openai-in-ai-studio/ai-studio-no-project.png" alt-text="Screenshot shows top left corner of screen for AI Studio without a project.":::
+ :::image type="content" source="media/azure-openai-in-ai-studio/ai-studio-no-project.png" alt-text="Screenshot shows top left corner of screen for AI Foundry without a project.":::
* When you are in a project, you see **Azure AI Foundry / project name**. The project name allows you to switch between projects.
- :::image type="content" source="media/azure-openai-in-ai-studio/ai-studio-project.png" alt-text="Screenshot shows top left corner of screen for AI Studio with a project.":::
+ :::image type="content" source="media/azure-openai-in-ai-studio/ai-studio-project.png" alt-text="Screenshot shows top left corner of screen for AI Foundry with a project.":::
* When you're working with Azure OpenAI outside of a project, you see **Azure AI Foundry | Azure OpenAI / resource name**. The resource name allows you to switch between Azure OpenAI resources.
- :::image type="content" source="media/azure-openai-in-ai-studio/ai-studio-azure-openai.png" alt-text="Screenshot shows top left corner of screen for AI Studio when using Azure OpenAI without a project.":::
+ :::image type="content" source="media/azure-openai-in-ai-studio/ai-studio-azure-openai.png" alt-text="Screenshot shows top left corner of screen for AI Foundry when using Azure OpenAI without a project.":::
Use the **Azure AI Foundry** breadcrumb to navigate back to the Azure AI Foundry portal home page.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure OpenAIに関するガイドの更新"
}
Explanation
This change reflects an update to the documentation on using Azure OpenAI models in Azure AI Studio. The main change is the unification of the name "Azure AI Studio" to "Azure AI Foundry portal" as part of the terminology cleanup, so that users receive accurate, up-to-date platform information.
Specifically, the document title, description, and procedures now use the name "Azure AI Foundry portal", making it easier for users to understand how to work in the new interface. Captions for related images and links have also all been revised to reflect the new platform.
In addition, "ignite-2024" has been added to the custom metadata to account for event-related updates. With this update, users can build and deploy AI models more effectively based on the latest information about Azure OpenAI models. Overall, it improves accuracy and the user experience.
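Notice that across the diffs only display text changes (titles, descriptions, alt text), while file paths such as `media/azure-openai-in-ai-studio/home-page.png` and values like `ms.service: azure-ai-studio` keep the old name. Because paths use the lowercase hyphenated form ("ai-studio"), a case-sensitive, space-separated replacement cannot touch them. A minimal sketch of such a targeted rename (an illustration only, not the actual tooling used for this PR; `rename_display_text` and the rule list are hypothetical):

```python
# Hypothetical rename pass: replace the product's display name in prose
# while leaving slugs and file paths untouched. Paths use lowercase
# hyphenated segments ("ai-studio"), so these exact-phrase, case-sensitive
# replacements never match them. Rule order matters: the longer phrase
# runs first so "Azure AI Studio" is not half-rewritten by the second rule.
RENAMES = [
    ("Azure AI Studio", "Azure AI Foundry"),
    ("AI Studio", "AI Foundry portal"),
]

def rename_display_text(line: str) -> str:
    for old, new in RENAMES:
        line = line.replace(old, new)
    return line

# Display text is rewritten:
assert rename_display_text("Screenshot shows Azure AI Studio home page.") == \
    "Screenshot shows Azure AI Foundry home page."
# Lowercase path segments are left alone:
assert rename_display_text("media/azure-openai-in-ai-studio/home-page.png") == \
    "media/azure-openai-in-ai-studio/home-page.png"
```

This also explains why the screenshot `source=` attributes in the diffs are unchanged while the neighboring `alt-text` strings are updated.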
articles/ai-studio/concepts/a-b-experimentation.md
Diff
@@ -5,7 +5,9 @@ author: s-polly
ms.author: scottpolly
ms.reviewer: skohlmeier
ms.service: azure-ai-studio
-ms.topic: concept-article
+ms.custom:
+ - ignite-2024
+ms.topic: concept-article
ms.date: 11/22/2024
#CustomerIntent: As an AI application developer, I want to learn about A/B experiments so that I can evaluate and improve my applications.
Summary
{
"modification_type": "minor update",
"modification_title": "A/B実験に関する記事の更新"
}
Explanation
This change reflects an update to the A/B experimentation documentation. Specifically, a new custom property, "ignite-2024", has been added to the metadata, indicating that the document contains information related to the 2024 event.
The changes are small, consisting mainly of this metadata update. It makes the latest event information and related resources easier to find, and because the topic remains marked as "concept-article", the document's purpose stays clearly stated.
Overall, this change improves discoverability and keeps the content relevant, which is helpful for readers, particularly AI application developers.
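The diff above inserts a two-line `ms.custom` block into the article's YAML front matter immediately before `ms.topic`. A rough sketch of that edit as a script (an assumed shape for illustration; `add_custom_tag` is hypothetical, not part of the docs build pipeline):

```python
# Hypothetical helper: insert an "ms.custom" block carrying an event tag
# into YAML front matter, just before the "ms.topic:" line, mirroring
# the placement shown in the diff above.
def add_custom_tag(frontmatter: str, tag: str) -> str:
    out = []
    for line in frontmatter.splitlines():
        if line.startswith("ms.topic:"):
            out.append("ms.custom:")
            out.append(f"  - {tag}")  # one-item YAML block sequence
        out.append(line)
    return "\n".join(out)

before = "ms.service: azure-ai-studio\nms.topic: concept-article\nms.date: 11/22/2024"
print(add_custom_tag(before, "ignite-2024"))
```

A real pass would also need to handle files that already have an `ms.custom` list (the Azure OpenAI diff instead appends `ignite-2024` to an existing inline list), which this sketch does not attempt.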
articles/ai-studio/concepts/ai-resources.md
Diff
@@ -1,13 +1,14 @@
---
title: Manage, collaborate, and organize with hubs
titleSuffix: Azure AI Foundry
-description: This article introduces concepts about Azure AI Studio hubs for your AI Studio projects.
+description: This article introduces concepts about Azure AI Foundry hubs for your AI Foundry projects.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
- ai-learning-hub
+ - ignite-2024
ms.topic: conceptual
ms.date: 11/19/2024
ms.reviewer: deeikele
@@ -17,44 +18,44 @@ author: Blackmist
# Manage, collaborate, and organize with hubs
-Hubs are the primary top-level Azure resource for AI Studio and provide a central way for a team to govern security, connectivity, and computing resources across playgrounds and projects. Once a hub is created, developers can create projects from it and access shared company resources without needing an IT administrator's repeated help.
+Hubs are the primary top-level Azure resource for AI Foundry and provide a central way for a team to govern security, connectivity, and computing resources across playgrounds and projects. Once a hub is created, developers can create projects from it and access shared company resources without needing an IT administrator's repeated help.
Project workspaces that are created using a hub inherit the same security settings and shared resource access. Teams can create project workspaces as needed to organize their work, isolate data, and/or restrict access.
-In this article, you learn more about hub capabilities, and how to set up a hub for your organization. You can see the resources created in the [Azure portal](https://portal.azure.com/) and in [Azure AI Studio](https://ai.azure.com).
+In this article, you learn more about hub capabilities, and how to set up a hub for your organization. You can see the resources created in the [Azure portal](https://portal.azure.com/) and in [Azure AI Foundry](https://ai.azure.com).
## Rapid AI use case exploration without IT bottlenecks
Successful AI applications and models typically start as prototypes, where developers test the feasibility of an idea, or assess the quality of data or a model for a particular task. The prototype is a stepping stone towards project funding or a full-scale implementation.
-When a single platform team is responsible for the setup of cloud resources, the transition from proving the feasibility of an idea to a funded project might be a bottleneck in productivity. Such a team might be the only one authorized to configure security, connectivity or other resources that might incur costs. This situation can cause a huge backlog, resulting in development teams getting blocked on innovating with a new idea. In Azure AI Studio, hubs help mitigate this bottleneck. IT can set up a preconfigured, reusable environment (a hub), for a team one time. Then the team can use that hub to create their own projects for prototyping, building, and operating AI applications.
+When a single platform team is responsible for the setup of cloud resources, the transition from proving the feasibility of an idea to a funded project might be a bottleneck in productivity. Such a team might be the only one authorized to configure security, connectivity or other resources that might incur costs. This situation can cause a huge backlog, resulting in development teams getting blocked on innovating with a new idea. In Azure AI Foundry portal, hubs help mitigate this bottleneck. IT can set up a preconfigured, reusable environment (a hub), for a team one time. Then the team can use that hub to create their own projects for prototyping, building, and operating AI applications.
## Set up and secure a hub for your team
-Get started by [creating your first hub in Azure AI Studio](../how-to/create-azure-ai-resource.md), or use [Azure portal](../how-to/create-secure-ai-hub.md) or [templates](../how-to/create-azure-ai-hub-template.md) for advanced configuration options. You can customize networking, identity, encryption, monitoring, or tags, to meet compliance with your organization’s requirements.
+Get started by [creating your first hub in Azure AI Foundry portal](../how-to/create-azure-ai-resource.md), or use [Azure portal](../how-to/create-secure-ai-hub.md) or [templates](../how-to/create-azure-ai-hub-template.md) for advanced configuration options. You can customize networking, identity, encryption, monitoring, or tags, to meet compliance with your organization’s requirements.
Often, projects in a business domain require access to the same company resources such as vector indices, model endpoints, or repos. As a team lead, you can preconfigure connectivity with these resources within a hub, so developers can access them from any new project workspace without delay on IT.
-[Connections](connections.md) let you access objects in AI Studio that are managed outside of your hub. For example, uploaded data on an Azure storage account, or model deployments on an existing Azure OpenAI resource. A connection can be shared with every project or made accessible to one specific project. Connections can be configured with key-based access or Microsoft Entra ID to authorize access to users on the connected resource. Plus, as an administrator, you can track, audit, and manage connections across projects using your hub.
+[Connections](connections.md) let you access objects in AI Foundry portal that are managed outside of your hub. For example, uploaded data on an Azure storage account, or model deployments on an existing Azure OpenAI resource. A connection can be shared with every project or made accessible to one specific project. Connections can be configured with key-based access or Microsoft Entra ID to authorize access to users on the connected resource. Plus, as an administrator, you can track, audit, and manage connections across projects using your hub.
## Shared Azure resources and configurations
Various management concepts are available on hubs to support team leads and admins to centrally manage a team's environment.
* **Security configuration** including public network access, [virtual networking](#virtual-networking), customer-managed key encryption, and privileged access to whom can create projects for customization. Security settings configured on the hub automatically pass down to each project. A managed virtual network is shared between all projects that share the same hub.
* **Connections** are named and authenticated references to Azure and non-Azure resources like data storage providers. Use a connection as a means for making an external resource available to a group of developers without having to expose its stored credential to an individual.
-* **Compute and quota allocation** is managed as shared capacity for all projects in AI Studio that share the same hub. This quota includes compute instance as managed cloud-based workstation for an individual. The same user can use a compute instance across projects.
+* **Compute and quota allocation** is managed as shared capacity for all projects in AI Foundry portal that share the same hub. This quota includes compute instance as managed cloud-based workstation for an individual. The same user can use a compute instance across projects.
* **AI services access keys** to endpoints for prebuilt AI models are managed on the hub scope. Use these endpoints to access foundation models from Azure OpenAI, Speech, Vision, and Content Safety with one [API key](#azure-ai-services-api-access-keys)
* **Policy** enforced in Azure on the hub scope applies to all projects managed under it.
-* **Dependent Azure resources** are set up once per hub and associated projects and used to store artifacts you generate while working in AI Studio such as logs or when uploading data. For more information, see [Azure AI dependencies](#azure-ai-dependencies).
+* **Dependent Azure resources** are set up once per hub and associated projects and used to store artifacts you generate while working in AI Foundry portal such as logs or when uploading data. For more information, see [Azure AI dependencies](#azure-ai-dependencies).
## Organize work in projects for customization
-A hub provides the hosting environment for [projects](../how-to/create-projects.md) in AI Studio. A project is an organizational container that has tools for AI customization and orchestration. It lets you organize your work, save state across different tools like prompt flow, and collaborate with others. For example, you can share uploaded files and connections to data sources.
+A hub provides the hosting environment for [projects](../how-to/create-projects.md) in AI Foundry portal. A project is an organizational container that has tools for AI customization and orchestration. It lets you organize your work, save state across different tools like prompt flow, and collaborate with others. For example, you can share uploaded files and connections to data sources.
Multiple projects can use a hub, and multiple users can use a project. A project also helps you keep track of billing, and manage access and provides data isolation. Every project uses dedicated storage containers to let you upload files and share it with only other project members when using the 'data' experiences.
-Projects let you create and group reusable components that can be used across tools in AI Studio:
+Projects let you create and group reusable components that can be used across tools in AI Foundry portal:
| Asset | Description |
| --- | --- |
@@ -71,11 +72,11 @@ Projects also have specific settings that only hold for that project:
| Prompt flow runtime | Prompt flow is a feature that can be used to generate, customize, or run a flow. To use prompt flow, you need to create a runtime on top of a compute instance. |
> [!NOTE]
-> In AI Studio you can also manage language and notification settings that apply to all projects that you can access regardless of the hub or project.
+> In AI Foundry portal you can also manage language and notification settings that apply to all projects that you can access regardless of the hub or project.
## Azure AI services API access keys
-The hub allows you to set up connections to existing Azure OpenAI or Azure AI Services resource types, which can be used to host model deployments. You can access these model deployments from connected resources in AI Studio. Keys to connected resources can be listed from the AI Studio or Azure portal. For more information, see [Find Azure AI Studio resources in the Azure portal](#find-azure-ai-studio-resources-in-the-azure-portal).
+The hub allows you to set up connections to existing Azure OpenAI or Azure AI Services resource types, which can be used to host model deployments. You can access these model deployments from connected resources in AI Foundry portal. Keys to connected resources can be listed from the AI Foundry portal or Azure portal. For more information, see [Find Azure AI Foundry resources in the Azure portal](#find-azure-ai-foundry-resources-in-the-azure-portal).
### Virtual networking
@@ -92,15 +93,15 @@ While projects show up as their own tracking resources in the Azure portal, they
Azure AI offers a set of connectors that allows you to connect to different types of data sources and other Azure tools. You can take advantage of connectors to connect with data such as indexes in Azure AI Search to augment your flows.
-Connections can be set up as shared with all projects in the same hub, or created exclusively for one project. To manage connections via Azure AI Studio, go to your project and then select **Management center**. Select **Connected resources** in either the **Hub** or **Project** section to manage shared connections for the project or hub, respectively. As an administrator, you can audit both shared and project-scoped connections on a hub level to have a single pane of glass of connectivity across projects.
+Connections can be set up as shared with all projects in the same hub, or created exclusively for one project. To manage connections via Azure AI Foundry, go to your project and then select **Management center**. Select **Connected resources** in either the **Hub** or **Project** section to manage shared connections for the project or hub, respectively. As an administrator, you can audit both shared and project-scoped connections on a hub level to have a single pane of glass of connectivity across projects.
## Azure AI dependencies
-Azure AI Studio layers on top of existing Azure services including Azure AI and Azure Machine Learning services. While it might not be visible on the display names in Azure portal, AI Studio, or when using the SDK or CLI, some of these architectural details become apparent when you work with the Azure REST APIs, use Azure cost reporting, or use infrastructure-as-code templates such as Azure Bicep or Azure Resource Manager. From an Azure Resource Provider perspective, Azure AI Studio resource types map to the following resource provider kinds:
+Azure AI Foundry layers on top of existing Azure services including Azure AI and Azure Machine Learning services. While it might not be visible on the display names in Azure portal, AI Foundry, or when using the SDK or CLI, some of these architectural details become apparent when you work with the Azure REST APIs, use Azure cost reporting, or use infrastructure-as-code templates such as Azure Bicep or Azure Resource Manager. From an Azure Resource Provider perspective, Azure AI Foundry resource types map to the following resource provider kinds:
[!INCLUDE [Resource provider kinds](../includes/resource-provider-kinds.md)]
-When you create a new hub, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI Studio. If not provided by you, and required, these resources are automatically created.
+When you create a new hub, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI Foundry portal. If not provided by you, and required, these resources are automatically created.
[!INCLUDE [Dependent Azure resources](../includes/dependent-resources.md)]
@@ -114,19 +115,19 @@ If you require to group costs of these different services together, we recommend
You can use [cost management](/azure/cost-management-billing/costs/quick-acm-cost-analysis) and [Azure resource tags](/azure/azure-resource-manager/management/tag-resources) to help with a detailed resource-level cost breakdown, or run [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) on the above listed resources to obtain a pricing estimate. For more information, see [Plan and manage costs for Azure AI services](../how-to/costs-plan-manage.md).
-## Find Azure AI Studio resources in the Azure portal
+## Find Azure AI Foundry resources in the Azure portal
-In the Azure portal, you can find resources that correspond to your project in Azure AI Studio.
+In the Azure portal, you can find resources that correspond to your project in Azure AI Foundry portal.
> [!NOTE]
> This section assumes that the hub and project are in the same resource group.
-1. In [Azure AI Studio](https://ai.azure.com), go to a project and select **Management center** to view your project resources.
+1. In [Azure AI Foundry](https://ai.azure.com), go to a project and select **Management center** to view your project resources.
1. From the management center, select the overview for either your hub or project and then select the link to **Manage in Azure portal**.
- :::image type="content" source="../media/concepts/azureai-project-view-ai-studio.png" alt-text="Screenshot of the AI Studio project overview page with links to the Azure portal." lightbox="../media/concepts/azureai-project-view-ai-studio.png":::
+ :::image type="content" source="../media/concepts/azureai-project-view-ai-studio.png" alt-text="Screenshot of the AI Foundry project overview page with links to the Azure portal." lightbox="../media/concepts/azureai-project-view-ai-studio.png":::
## Next steps
- [Quickstart: Analyze images and video with GPT-4 for Vision in the playground](../quickstarts/multimodal-vision.md)
-- [Learn more about Azure AI Studio](../what-is-ai-studio.md)
+- [Learn more about Azure AI Foundry](../what-is-ai-studio.md)
- [Learn more about projects](../how-to/create-projects.md)
Summary
{
"modification_type": "minor update",
"modification_title": "AI StudioからAI Foundryへの用語の変更"
}
Explanation
This change updates the AI Studio documentation and consists mainly of a terminology change: the name "AI Studio" has been replaced with "AI Foundry", and every reference in the text has been revised for consistency. This makes it easier for users to understand the material under the platform's current name.
Specifically, the article title, description, and the terminology in each section have been unified to "AI Foundry", helping users make proper use of the new platform's features and resources. The metadata also gains "ignite-2024", indicating that the document includes information related to the 2024 event.
Overall, these changes are intended to make it easier for users to manage AI models and collaborate on projects, and to keep the article an important resource for AI application developers.
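One detail worth noting in this diff: renaming the heading "Find Azure AI Studio resources in the Azure portal" also changes its auto-generated anchor, which is why the in-page link `#find-azure-ai-studio-resources-in-the-azure-portal` is updated in the same pass. A sketch of the simplified GitHub-style slug rule (lowercase, drop punctuation, spaces to hyphens; the `slugify` helper is illustrative, and real renderers have additional rules such as deduplicating repeated headings):

```python
import re

# Simplified GitHub-style heading slug: lowercase, strip punctuation,
# collapse whitespace to hyphens. Renaming a heading changes this slug,
# so in-page "#..." links must be rewritten in the same commit or they
# silently break.
def slugify(heading: str) -> str:
    s = heading.strip().lower()
    s = re.sub(r"[^\w\s-]", "", s)   # drop punctuation
    return re.sub(r"\s+", "-", s)    # whitespace -> hyphens

assert slugify("Find Azure AI Foundry resources in the Azure portal") == \
    "find-azure-ai-foundry-resources-in-the-azure-portal"
```

This matches the updated anchor in the diff, which is the kind of consistency check a bulk-rename pass has to perform beyond plain string replacement.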
articles/ai-studio/concepts/architecture.md
Diff
@@ -1,11 +1,12 @@
---
-title: Azure AI Studio architecture
+title: Azure AI Foundry architecture
titleSuffix: Azure AI Foundry
-description: Learn about the architecture of Azure AI Studio.
+description: Learn about the architecture of Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 11/19/2024
ms.reviewer: deeikele
@@ -19,19 +20,19 @@ AI Foundry provides a unified experience for AI developers and data scientists t
[!INCLUDE [new-name](../includes/new-name.md)]
-:::image type="content" source="../media/concepts/ai-studio-architecture.png" alt-text="Diagram of the high-level architecture of Azure AI Studio." lightbox="../media/concepts/ai-studio-architecture.png":::
+:::image type="content" source="../media/concepts/ai-studio-architecture.png" alt-text="Diagram of the high-level architecture of Azure AI Foundry." lightbox="../media/concepts/ai-studio-architecture.png":::
At the top level, AI Foundry provides access to the following resources:
-<!-- The top level AI Studio resources (hub and project) are based on Azure Machine Learning. Connected resources, such as Azure OpenAI, Azure AI services, and Azure AI Search, are used by the hub and project in reference, but follow their own resource management lifecycle. -->
+<!-- The top level AI Foundry resources (hub and project) are based on Azure Machine Learning. Connected resources, such as Azure OpenAI, Azure AI services, and Azure AI Search, are used by the hub and project in reference, but follow their own resource management lifecycle. -->
- **Azure OpenAI**: Provides access to the latest Open AI models. You can create secure deployments, try playgrounds, fine tune models, content filters, and batch jobs. The Azure OpenAI resource provider is `Microsoft.CognitiveServices/account` and the kind of resource is `OpenAI`. You can also connect to Azure OpenAI by using a kind of `AIServices`, which also includes other [Azure AI services](/azure/ai-services/what-are-ai-services).
When using Azure AI Foundry portal, you can directly work with Azure OpenAI without an Azure Studio project or you can use Azure OpenAI through a project.
- For more information, visit [Azure OpenAI in Azure AI Studio](../azure-openai-in-ai-studio.md).
+ For more information, visit [Azure OpenAI in Azure AI Foundry portal](../azure-openai-in-ai-studio.md).
-- **Management center**: The management center streamlines governance and management of AI Studio resources such as hubs, projects, connected resources, and deployments.
+- **Management center**: The management center streamlines governance and management of AI Foundry resources such as hubs, projects, connected resources, and deployments.
For more information, visit [Management center](management-center.md).
- **AI Foundry hub**: The hub is the top-level resource in AI Foundry portal, and is based on the Azure Machine Learning service. The Azure resource provider for a hub is `Microsoft.MachineLearningServices/workspaces`, and the kind of resource is `Hub`. It provides the following features:
@@ -49,7 +50,7 @@ At the top level, AI Foundry provides access to the following resources:
- Project-scoped connections. For example, project members might need private access to data stored in an Azure Storage account without giving that same access to other projects.
- Open source model deployments from catalog and fine-tuned model endpoints.
- :::image type="content" source="../media/concepts/resource-provider-connected-resources.svg" alt-text="Diagram of the relationship between AI Studio resources." :::
+ :::image type="content" source="../media/concepts/resource-provider-connected-resources.svg" alt-text="Diagram of the relationship between AI Foundry resources." :::
For more information, visit [Hubs and projects overview](ai-resources.md).
@@ -66,7 +67,7 @@ Azure AI Foundry is built on the Azure Machine Learning resource provider, and t
When you create a new hub, a set of dependent Azure resources are required to store data, get access to models, and provide compute resources for AI customization. The following table lists the dependent Azure resources and their resource providers:
> [!TIP]
-> If you don't provide a dependent resource when creating a hub, and it's a required dependency, AI Studio creates the resource for you.
+> If you don't provide a dependent resource when creating a hub, and it's a required dependency, AI Foundry creates the resource for you.
[!INCLUDE [Dependent Azure resources](../includes/dependent-resources.md)]
@@ -77,7 +78,7 @@ For information on registering resource providers, see [Register an Azure resour
While most of the resources used by Azure AI Foundry live in your Azure subscription, some resources are in an Azure subscription managed by Microsoft. The cost for these managed resources shows on your Azure bill as a line item under the Azure Machine Learning resource provider. The following resources are in the Microsoft-managed Azure subscription, and don't appear in your Azure subscription:
- **Managed compute resources**: Provided by Azure Batch resources in the Microsoft subscription.
-- **Managed virtual network**: Provided by Azure Virtual Network resources in the Microsoft subscription. If FQDN rules are enabled, an Azure Firewall (standard) is added and charged to your subscription. For more information, see [Configure a managed virtual network for Azure AI Studio](../how-to/configure-managed-network.md).
+- **Managed virtual network**: Provided by Azure Virtual Network resources in the Microsoft subscription. If FQDN rules are enabled, an Azure Firewall (standard) is added and charged to your subscription. For more information, see [Configure a managed virtual network for Azure AI Foundry](../how-to/configure-managed-network.md).
- **Metadata storage**: Provided by Azure Storage resources in the Microsoft subscription.
> [!NOTE]
@@ -95,7 +96,7 @@ Often, projects in a business domain require access to the same company resource
[Connections](connections.md) let you access objects in AI Foundry that are managed outside of your hub. For example, uploaded data on an Azure storage account, or model deployments on an existing Azure OpenAI resource. A connection can be shared with every project or made accessible to one specific project. Connections can be configured to use key-based access or Microsoft Entra ID passthrough to authorize access to users on the connected resource. As an administrator, you can track, audit, and manage connections across the organization from a single view in AI Foundry.
-:::image type="content" source="../media/concepts/connected-resources-spog.png" alt-text="Screenshot of AI Studio showing an audit view of all connected resources across a hub and its projects." :::
+:::image type="content" source="../media/concepts/connected-resources-spog.png" alt-text="Screenshot of AI Foundry showing an audit view of all connected resources across a hub and its projects." :::
### Organize for your team's needs
@@ -109,7 +110,7 @@ Azure AI services including Azure OpenAI provide control plane endpoints for ope
To reduce the complexity of Azure RBAC management, AI Foundry provides a *control plane proxy* that allows you to perform operations on connected Azure AI services and Azure OpenAI resources. Performing operations on these resources through the control plane proxy only requires Azure RBAC permissions on the hub. The Azure AI Foundry service then performs the call to the Azure AI services or Azure OpenAI control plane endpoint on your behalf.
-For more information, see [Role-based access control in Azure AI Studio](rbac-ai-studio.md).
+For more information, see [Role-based access control in Azure AI Foundry portal](rbac-ai-studio.md).
## Attribute-based access control
@@ -153,15 +154,15 @@ A hub can be configured to use a *managed* virtual network. The managed virtual
> [!NOTE]
> If you want to use a virtual network to secure communications between your clients and the hub or project, you must use an Azure Virtual Network that you create and manage. For example, an Azure Virtual Network that uses a VPN or ExpressRoute connection to your on-premises network.
-For more information on how to configure a managed virtual network, see [Configure a managed virtual network for Azure AI Studio](../how-to/configure-managed-network.md).
+For more information on how to configure a managed virtual network, see [Configure a managed virtual network for Azure AI Foundry](../how-to/configure-managed-network.md).
## Azure Monitor
Azure monitor and Azure Log Analytics provide monitoring and logging for the underlying resources used by Azure AI Foundry. Since Azure AI Foundry is built on Azure Machine Learning, Azure OpenAI, Azure AI services, and Azure AI Search, use the following articles to learn how to monitor the services:
| Resource | Monitoring and logging |
| --- | --- |
-| Azure AI Studio hub and project | [Monitor Azure Machine Learning](/azure/machine-learning/monitor-azure-machine-learning) |
+| Azure AI Foundry hub and project | [Monitor Azure Machine Learning](/azure/machine-learning/monitor-azure-machine-learning) |
| Azure OpenAI | [Monitor Azure OpenAI](/azure/ai-services/openai/how-to/monitoring) |
| Azure AI services | [Monitor Azure AI (training)](/training/modules/monitor-ai-services/) |
| Azure AI Search | [Monitor Azure AI Search](/azure/search/monitor-azure-cognitive-search) |
@@ -177,6 +178,6 @@ For more information on price and quota, use the following articles:
Create a hub using one of the following methods:
-- [Azure AI Foundry portal](../how-to/create-azure-ai-resource.md#create-a-hub-in-ai-studio): Create a hub for getting started.
+- [Azure AI Foundry portal](../how-to/create-azure-ai-resource.md#create-a-hub-in-ai-foundry-portal): Create a hub for getting started.
- [Azure portal](../how-to/create-secure-ai-hub.md): Create a hub with your own networking.
- [Bicep template](../how-to/create-azure-ai-hub-template.md).
Summary
{
"modification_type": "minor update",
"modification_title": "AI StudioからAI Foundryへの用語の更新"
}
Explanation
This change unifies terminology in the "AI Studio" documentation, renaming it to "AI Foundry." Specifically, "AI Studio" is replaced with "AI Foundry" throughout the document, and the related descriptions and sections are updated to match.
As a result, the document's title, description, image captions, and the passages covering resources and terminology are all rewritten to say "AI Foundry." This makes it easier for users to reach the right resources and features based on current platform information.
The metadata also gains a new "ignite-2024" tag, tying the content to the 2024 event. Along with the new structure and naming, the descriptions of key features and resources are clarified.
Overall, the update aims to give AI application developers and data scientists up-to-date platform information and a clearer picture of the product, improving the document's consistency and readability.
articles/ai-studio/concepts/concept-model-distillation.md
Diff
@@ -1,7 +1,7 @@
---
-title: Distillation in AI Studio (preview)
+title: Distillation in AI Foundry portal (preview)
titleSuffix: Azure AI Foundry
-description: Learn how to do distillation in Azure AI Studio.
+description: Learn how to do distillation in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -13,11 +13,11 @@ author: ssalgadodev
ms.custom: references_regions
---
-# Distillation in Azure AI Studio (preview)
+# Distillation in Azure AI Foundry portal (preview)
[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
-In Azure AI Studio, you can use distillation to efficiently train a student model.
+In Azure AI Foundry portal, you can use distillation to efficiently train a student model.
## What is distillation?
@@ -33,14 +33,14 @@ The main steps in knowledge distillation are:
## Sample notebook
-Distillation in AI Studio is currently only available through a notebook experience. You can use the [sample notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/distillation) to see how to perform distillation. Model distillation is available for Microsoft models and a selection of OSS (open-source software) models available in the model catalog. In this sample notebook, the teacher model uses the Meta Llama 3.1 405B instruction model, and the student model uses the Meta Llama 3.1 8B instruction model.
+Distillation in AI Foundry portal is currently only available through a notebook experience. You can use the [sample notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/distillation) to see how to perform distillation. Model distillation is available for Microsoft models and a selection of OSS (open-source software) models available in the model catalog. In this sample notebook, the teacher model uses the Meta Llama 3.1 405B instruction model, and the student model uses the Meta Llama 3.1 8B instruction model.
We used an advanced prompt during synthetic data generation. The advanced prompt incorporates chain-of-thought (CoT) reasoning, which results in higher-accuracy data labels in the synthetic data. This labeling further improves the accuracy of the distilled model.
## Related content
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
-- [Deploy Meta Llama 3.1 models with Azure AI Studio](../how-to/deploy-models-llama.md)
-- [Azure AI Studio FAQ](../faq.yml)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
+- [Deploy Meta Llama 3.1 models with Azure AI Foundry](../how-to/deploy-models-llama.md)
+- [Azure AI Foundry FAQ](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "AI StudioからAI Foundryへの用語の変更"
}
Explanation
This change updates the documentation for "AI Studio," primarily by renaming terminology: every occurrence of "AI Studio" in the document is changed to "AI Foundry," and the accompanying descriptions and links are updated as well.
The title and description are replaced with "AI Foundry portal," keeping the content consistent. The sample-notebook section now states explicitly that distillation is performed in the AI Foundry portal, and the related-content links are revised to match the new name.
With this, users can accurately understand and carry out the model-distillation steps based on up-to-date information that reflects the new platform name. The update aims to improve usability and information consistency, and serves as a resource for developers who want to train AI models efficiently.
articles/ai-studio/concepts/concept-synthetic-data.md
Diff
@@ -1,7 +1,7 @@
---
-title: Synthetic data generation in AI Studio
+title: Synthetic data generation in AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to generate a synthetic dataset in Azure AI Studio.
+description: Learn how to generate a synthetic dataset in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -13,9 +13,9 @@ author: ssalgadodev
ms.custom: references_regions
---
-# Synthetic data generation in Azure AI Studio
+# Synthetic data generation in Azure AI Foundry portal
-In Azure AI Studio, you can use synthetic data generation to efficiently produce predictions for your datasets. This article introduces you to the concept of synthetic data generation and how you can use it in machine learning.
+In Azure AI Foundry portal, you can use synthetic data generation to efficiently produce predictions for your datasets. This article introduces you to the concept of synthetic data generation and how you can use it in machine learning.
## What is synthetic data generation?
@@ -35,6 +35,6 @@ To see how to generate synthetic data, you can use the [sample notebook](https:/
## Related content
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
-- [Deploy Meta Llama 3.1 models with Azure AI Studio](../how-to/deploy-models-llama.md)
-- [Azure AI Studio FAQ](../faq.yml)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
+- [Deploy Meta Llama 3.1 models with Azure AI Foundry](../how-to/deploy-models-llama.md)
+- [Azure AI Foundry FAQ](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "AI StudioからAI Foundryへの用語の更新"
}
Explanation
This change updates documentation related to "AI Studio," again primarily renaming terminology: every occurrence of "AI Studio" in the document becomes "AI Foundry portal," with the descriptions and related links updated accordingly.
The title and description are refreshed, and the new "AI Foundry" name is used consistently throughout. Users can therefore correctly understand and apply synthetic data generation techniques with information that reflects the latest naming.
The related-content links are also corrected to the new platform name, preserving the document's consistency. This gives developers looking for information on AI model training and data generation consistent navigation and makes their work more efficient.
articles/ai-studio/concepts/connections.md
Diff
@@ -1,22 +1,23 @@
---
-title: Connections in Azure AI Studio
+title: Connections in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces connections in Azure AI Studio.
+description: This article introduces connections in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 5/21/2024
ms.reviewer: sgilley
ms.author: sgilley
author: sdgilley
---
-# Connections in Azure AI Studio
+# Connections in Azure AI Foundry portal
-Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and non-Microsoft resources within your AI Studio projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same hub.
+Connections in Azure AI Foundry portal are a way to authenticate and consume both Microsoft and non-Microsoft resources within your AI Foundry projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same hub.
## Connections to Azure AI services
@@ -30,7 +31,7 @@ As another example, you can [create a connection](../how-to/connections-add.md)
## Connections to non-Microsoft services
-Azure AI Studio supports connections to non-Microsoft services, including the following:
+Azure AI Foundry supports connections to non-Microsoft services, including the following:
- The [API key connection](../how-to/connections-add.md) handles authentication to your specified target on an individual basis. This is the most common non-Microsoft connection type.
- The [custom connection](../how-to/connections-add.md) allows you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets that or cases where you wouldn't need a credential to access. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you'll have to manage authentication on your own.
@@ -45,7 +46,7 @@ A data connection offers these benefits:
- A common, easy-to-use API that interacts with different storage types including Microsoft OneLake, Azure Blob, and Azure Data Lake Gen2.
- Easier discovery of useful connections in team operations.
-- For credential-based access (service principal/SAS/key), AI Studio connection secures credential information. This way, you won't need to place that information in your scripts.
+- For credential-based access (service principal/SAS/key), AI Foundry connection secures credential information. This way, you won't need to place that information in your scripts.
When you create a connection with an existing Azure storage account, you can choose between two different authentication methods:
@@ -69,7 +70,7 @@ A Uniform Resource Identifier (URI) represents a storage location on your local
| Storage location | URI examples |
|------------------|--------------|
-| Azure AI Studio connection | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
+| Azure AI Foundry connection | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
| Local files | `./home/username/data/my_data` |
| Public http or https server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
| Blob storage | `wasbs://<containername>@<accountname>.blob.core.windows.net/<folder>/` |
@@ -83,9 +84,9 @@ A Uniform Resource Identifier (URI) represents a storage location on your local
Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on a hub level (link to connection rbac).
-Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Studio connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they're stored in the hub's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak.
+Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Foundry connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they're stored in the hub's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak.
## Next steps
-- [How to create a connection in Azure AI Studio](../how-to/connections-add.md)
+- [How to create a connection in Azure AI Foundry portal](../how-to/connections-add.md)
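The URI table in the diff above maps each storage backend to a distinct scheme (`azureml://`, `wasbs://`, `https://`, or a bare local path). A minimal sketch of how client code might tell those forms apart, assuming only that the scheme prefix identifies the backend — the helper below is illustrative and not part of any Azure SDK:

```python
from urllib.parse import urlparse

# Scheme names taken from the URI examples in the quoted table; the mapping
# and the classify_storage_uri helper are hypothetical illustrations.
SCHEMES = {
    "azureml": "AI Foundry connection (datastore)",
    "wasbs": "Blob storage",
    "http": "Public http server",
    "https": "Public https server",
}


def classify_storage_uri(uri: str) -> str:
    """Return a human-readable label for a storage URI based on its scheme."""
    scheme = urlparse(uri).scheme
    if not scheme:
        # No scheme at all: treat it as a local file path.
        return "Local file path"
    return SCHEMES.get(scheme, f"Unknown scheme: {scheme}")


print(classify_storage_uri("azureml://datastores/my_store/paths/folder/file.parquet"))  # → AI Foundry connection (datastore)
print(classify_storage_uri("./home/username/data/my_data"))  # → Local file path
```

A real application would go on to resolve the `azureml://` form against the hub's datastore registry, but that resolution step is service-side and out of scope here.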
Summary
{
"modification_type": "minor update",
"modification_title": "接続に関する用語の更新"
}
Explanation
This change updates the "Azure AI Studio" documentation, renaming the terminology primarily to "Azure AI Foundry portal." The new name is now used consistently throughout, reflecting the latest platform.
Specifically, the descriptions of how connections are used and their related capabilities change from "AI Studio" to "AI Foundry portal"; the features and operations themselves are unchanged, only the name is brought up to date. The sections on managing the various connection types and on their benefits also use the new terminology.
The related-content links are updated as well, so users get clear information on how to create connections and what they offer under the new platform. Developers and users can thus rely on the latest documentation to deepen their understanding of connection management and data usage.
articles/ai-studio/concepts/content-filtering.md
Diff
@@ -1,22 +1,23 @@
---
-title: Azure AI Studio content filtering
+title: Azure AI Foundry content filtering
titleSuffix: Azure AI Foundry
-description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI Studio.
+description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI Foundry portal.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 5/21/2024
ms.reviewer: eur
ms.author: pafarley
author: PatrickFarley
---
-# Content filtering in Azure AI Studio
+# Content filtering in Azure AI Foundry portal
-Azure AI Studio includes a content filtering system that works alongside core models and DALL-E image generation models.
+Azure AI Foundry includes a content filtering system that works alongside core models and DALL-E image generation models.
> [!IMPORTANT]
> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI Service. Learn more about the [Whisper model in Azure OpenAI](../../ai-services/openai/concepts/models.md).
@@ -70,11 +71,11 @@ You can also enable the following special output filters:
## Create a content filter
-For any model deployment in [Azure AI Studio](https://ai.azure.com), you can directly use the default content filter, but you might want to have more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.
+For any model deployment in [Azure AI Foundry](https://ai.azure.com), you can directly use the default content filter, but you might want to have more control. For example, you could make a filter stricter or more lenient, or enable more advanced capabilities like prompt shields and protected material detection.
Follow these steps to create a content filter:
-1. Go to AI Studio and navigate to your project/ hub. Then select the Safety+ Security tab on the left nav and select the Content Filters.
+1. Go to AI Foundry and navigate to your project/ hub. Then select the Safety+ Security tab on the left nav and select the Content Filters.
:::image type="content" source="../media/content-safety/content-filter/create-content-filter.png" alt-text="Screenshot of the button to create a new content filter." lightbox="../media/content-safety/content-filter/create-content-filter.png":::
1. On the **Basic information** page, enter a name for your content filter. Select a connection to associate with the content filter. Then select **Next**.
@@ -96,7 +97,7 @@ Follow these steps to create a content filter:
:::image type="content" source="../media/content-safety/content-filter/create-content-filter-deployment.png" alt-text="Screenshot of the option to select a deployment when creating a content filter." lightbox="../media/content-safety/content-filter/create-content-filter-deployment.png":::
- Content filtering configurations are created at the hub level in AI Studio. Learn more about configurability in the [Azure OpenAI docs](/azure/ai-services/openai/how-to/content-filters).
+ Content filtering configurations are created at the hub level in AI Foundry portal. Learn more about configurability in the [Azure OpenAI docs](/azure/ai-services/openai/how-to/content-filters).
1. On the **Review** page, review the settings and then select **Create filter**.
@@ -110,7 +111,7 @@ The filter creation process gives you the option to apply the filter to the depl
Follow these steps to apply a content filter to a deployment:
-1. Go to [AI Studio](https://ai.azure.com) and select a hub and project.
+1. Go to [AI Foundry](https://ai.azure.com) and select a hub and project.
1. Select **Models + endpoints** on the left pane and choose one of your deployments, then select **Edit**.
:::image type="content" source="../media/content-safety/content-filter/deployment-edit.png" alt-text="Screenshot of the button to edit a deployment." lightbox="../media/content-safety/content-filter/deployment-edit.png":::
@@ -143,6 +144,6 @@ Customers are responsible for ensuring that applications integrating Azure OpenA
## Next steps
- Learn more about the [underlying models that power Azure OpenAI](../../ai-services/openai/concepts/models.md).
-- Azure AI Studio content filtering is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md).
+- Azure AI Foundry content filtering is powered by [Azure AI Content Safety](../../ai-services/content-safety/overview.md).
- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/context/context).
- Learn more about evaluating your generative AI models and AI systems via [Azure AI Evaluation](https://aka.ms/genaiopsevals).
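The filter configured through the portal steps in the diff above ultimately surfaces to applications at inference time. Below is a minimal sketch of how a client might detect a filtered completion, assuming the documented Chat Completions behavior that `choices[*].finish_reason` is set to `"content_filter"` when output is blocked; the helper function and the sample payload are illustrative, not part of an Azure SDK:

```python
def was_content_filtered(response: dict) -> bool:
    """Return True if any choice in the response was cut off by the content filter."""
    return any(
        choice.get("finish_reason") == "content_filter"
        for choice in response.get("choices", [])
    )


# Hypothetical response payload shaped like a Chat Completions result whose
# output was suppressed by the filter.
sample = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": ""},
            "finish_reason": "content_filter",
        }
    ]
}

if was_content_filtered(sample):
    print("Completion was blocked by the content filter")
```

In practice the service may also reject the *prompt* outright with an error rather than returning a truncated completion, so a robust client handles both the error path and this `finish_reason` path.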
Summary
{
"modification_type": "minor update",
"modification_title": "コンテンツフィルタリングに関する用語の更新"
}
Explanation
This change revises the documentation to move "Azure AI Studio" content over to "Azure AI Foundry." Specifically, the terminology is updated so that the descriptions of the content-filtering feature now read "AI Foundry portal" instead of "AI Studio."
The updated content covers the content-filtering capabilities and how to use them: relying on the default content filter for a model deployment, and customizing filter settings step by step. The notes on the Whisper model and on managing configurations are restated so that they accurately reflect the latest platform.
Overall, the change ensures that users can still understand and use content filtering effectively after moving to Azure AI Foundry, keeps the information consistent, and should improve the user experience.
articles/ai-studio/concepts/deployments-overview.md
Diff
@@ -1,22 +1,23 @@
---
-title: Deploy models in Azure AI studio
+title: Deploy models in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn about deploying models in Azure AI studio.
+description: Learn about deploying models in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: concept-article
ms.date: 10/21/2024
ms.reviewer: fasantia
ms.author: mopeakande
author: msakande
---
-# Overview: Deploy AI models in Azure AI Studio
+# Overview: Deploy AI models in Azure AI Foundry portal
-The model catalog in Azure AI studio is the hub to discover and use a wide range of models for building generative AI applications. Models need to be deployed to make them available for receiving inference requests. The process of interacting with a deployed model is called *inferencing*. Azure AI Studio offer a comprehensive suite of deployment options for those models depending on your needs and model requirements.
+The model catalog in Azure AI Foundry portal is the hub to discover and use a wide range of models for building generative AI applications. Models need to be deployed to make them available for receiving inference requests. The process of interacting with a deployed model is called *inferencing*. Azure AI Foundry offer a comprehensive suite of deployment options for those models depending on your needs and model requirements.
## Deploying models
@@ -26,7 +27,7 @@ Deployment options vary depending on the model type:
* **Models as a Service models:** These models don't require compute quota from your subscription. This option allows you to deploy your Model as a Service (MaaS). You use a serverless API deployment and are billed per token in a pay-as-you-go fashion.
* **Open and custom models:** The model catalog offers access to a large variety of models across modalities that are of open access. You can host open models in your own subscription with a managed infrastructure, virtual machines, and the number of instances for capacity management. There's a wide range of models from Azure OpenAI, Hugging Face, and NVIDIA.
-Azure AI studio offers four different deployment options:
+Azure AI Foundry offers four different deployment options:
|Name | Azure OpenAI service | Azure AI model inference service | Serverless API | Managed compute |
|-------------------------------|----------------------|-------------------|----------------|-----------------|
@@ -45,7 +46,7 @@ Azure AI studio offers four different deployment options:
### How should I think about deployment options?
-Azure AI studio encourages customers to explore the deployment options and pick the one that best suites their business and technical needs. In general you can use the following thinking process:
+Azure AI Foundry encourages customers to explore the deployment options and pick the one that best suites their business and technical needs. In general you can use the following thinking process:
1. Start with the deployment options that have the bigger scopes. This allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. [Azure AI model inference service](../ai-services/model-inference.md) is a deployment target that supports all the flagship models in the Azure AI catalog, including latest innovation from Azure OpenAI.
@@ -63,6 +64,6 @@ Azure AI studio encourages customers to explore the deployment options and pick
## Related content
* [Add and configure models to the Azure AI model inference service](../ai-services/how-to/create-model-deployments.md)
-* [Deploy Azure OpenAI models with Azure AI Studio](../how-to/deploy-models-openai.md)
-* [Deploy open models with Azure AI Studio](../how-to/deploy-models-open.md)
-* [Model catalog and collections in Azure AI Studio](../how-to/model-catalog-overview.md)
+* [Deploy Azure OpenAI models with Azure AI Foundry](../how-to/deploy-models-openai.md)
+* [Deploy open models with Azure AI Foundry](../how-to/deploy-models-open.md)
+* [Model catalog and collections in Azure AI Foundry portal](../how-to/model-catalog-overview.md)
Summary
{
"modification_type": "minor update",
"modification_title": "モデルデプロイメントに関する用語の更新"
}
Explanation
This change adapts the "Azure AI Studio" documentation to "Azure AI Foundry," updating the terminology throughout. The main change is that information about model deployment now refers to the "AI Foundry portal" instead of "AI Studio."
Specifically, the document emphasizes that the model catalog in the Azure AI Foundry portal is the central place to discover models and use them in generative AI applications. The descriptions of the deployment process and inferencing are updated to cover the steps and options on the latest platform.
The document also provides guidance on the deployment options, helping users choose the one that best fits their business and technical requirements. Overall, the update delivers consistency and clarity on the new platform and improves usability.
articles/ai-studio/concepts/encryption-keys-portal.md
Diff
@@ -1,7 +1,7 @@
---
-title: Customer-Managed Keys for Azure AI Studio
+title: Customer-Managed Keys for Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn about using customer-managed keys for encryption to improve data security with Azure AI Studio.
+description: Learn about using customer-managed keys for encryption to improve data security with Azure AI Foundry.
author: Blackmist
ms.author: larryfr
ms.service: azure-ai-services
@@ -10,16 +10,16 @@ ms.custom:
ms.topic: concept-article
ms.date: 10/7/2024
ms.reviewer: deeikele
-# Customer intent: As an admin, I want to understand how I can use my own encryption keys with Azure AI Studio.
+# Customer intent: As an admin, I want to understand how I can use my own encryption keys with Azure AI Foundry.
---
-# Customer-managed keys for encryption with Azure AI Studio
+# Customer-managed keys for encryption with Azure AI Foundry
-Customer-managed keys (CMKs) in Azure AI Studio provide enhanced control over the encryption of your data. By using CMKs, you can manage your own encryption keys to add an extra layer of protection and meet compliance requirements more effectively.
+Customer-managed keys (CMKs) in Azure AI Foundry portal provide enhanced control over the encryption of your data. By using CMKs, you can manage your own encryption keys to add an extra layer of protection and meet compliance requirements more effectively.
-## About encryption in Azure AI Studio
+## About encryption in Azure AI Foundry portal
-Azure AI Studio layers on top of Azure Machine Learning and Azure AI services. By default, these services use Microsoft-managed encryption keys.
+Azure AI Foundry layers on top of Azure Machine Learning and Azure AI services. By default, these services use Microsoft-managed encryption keys.
Hub and project resources are implementations of the Azure Machine Learning workspace and encrypt data in transit and at rest. For details, see [Data encryption with Azure Machine Learning](../../machine-learning/concept-data-encryption.md).
@@ -39,11 +39,11 @@ The following data is stored on the managed resources.
|Service|What it's used for|Example|
|-----|-----|-----|
|Azure Cosmos DB|Stores metadata for your Azure AI projects and tools|Index names, tags; Flow creation timestamps; deployment tags; evaluation metrics|
-|Azure AI Search|Stores indices that are used to help query your AI studio content.|An index based off your model deployment names|
-|Azure Storage Account|Stores instructions for how customization tasks are orchestrated|JSON representation of flows you create in AI Studio|
+|Azure AI Search|Stores indices that are used to help query your AI Foundry content.|An index based off your model deployment names|
+|Azure Storage Account|Stores instructions for how customization tasks are orchestrated|JSON representation of flows you create in AI Foundry portal|
>[!IMPORTANT]
-> Azure AI Studio uses Azure compute that is managed in the Microsoft subscription, for example when you fine-tune models or or build flows. Its disks are encrypted with Microsoft-managed keys. Compute is ephemeral, meaning after a task is completed the virtual machine is deprovisioned, and the OS disk is deleted. Compute instance machines used for 'Code' experiences are persistant. Azure Disk Encryption isn't supported for the OS disk.
+> Azure AI Foundry uses Azure compute that is managed in the Microsoft subscription, for example when you fine-tune models or or build flows. Its disks are encrypted with Microsoft-managed keys. Compute is ephemeral, meaning after a task is completed the virtual machine is deprovisioned, and the OS disk is deleted. Compute instance machines used for 'Code' experiences are persistant. Azure Disk Encryption isn't supported for the OS disk.
## (Preview) Service-side storage of encrypted data when using customer-managed keys
@@ -77,15 +77,15 @@ If connecting with Azure AI Services, or variants of Azure AI Services such as A
## Enable customer-managed keys
-Azure AI studio builds on hub as implementation of Azure Machine Learning workspace, Azure AI Services, and lets you connect with other resources in Azure. You must set encryption specifically on each resource.
+Azure AI Foundry builds on hub as implementation of Azure Machine Learning workspace, Azure AI Services, and lets you connect with other resources in Azure. You must set encryption specifically on each resource.
Customer-managed key encryption is configured via Azure portal in a similar way for each Azure resource:
1. Create a new Azure resource in Azure portal.
1. Under the encryption tab, select your encryption key.
:::image type="content" source="../../machine-learning/media/concept-customer-managed-keys/cmk-service-side-encryption.png" alt-text="Screenshot of the encryption tab with the option for service side encryption selected." lightbox="../../machine-learning/media/concept-customer-managed-keys/cmk-service-side-encryption.png":::
-Alternatively, use infrastructure-as-code options for automation. Example Bicep templates for Azure AI Studio are available on the Azure Quickstart repo:
+Alternatively, use infrastructure-as-code options for automation. Example Bicep templates for Azure AI Foundry are available on the Azure Quickstart repo:
1. [CMK encryption for hub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/aistudio-cmk).
1. [Service-side CMK encryption preview for hub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/aistudio-cmk-service-side-encryption).
Summary
{
"modification_type": "minor update",
"modification_title": "顧客管理キーに関する用語の更新"
}
Explanation
This change adapts the documentation from "Azure AI Studio" to "Azure AI Foundry", specifically updating the customer-managed key (CMK) information. The page title, description, and in-text occurrences of "Azure AI Studio" are replaced with "Azure AI Foundry", so the entire article now refers to the new platform.
The updated document emphasizes that customer-managed keys give you stronger control over data encryption: by managing your own encryption keys, you can improve data protection and meet compliance requirements.
It also details the basic encryption mechanism and the per-resource configuration steps, clarifying how users enable customer-managed-key encryption on their Azure resources. Overall, the change organizes the feature and configuration information for the new platform and makes it easier to understand.
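The per-resource setup described in the diff (select your key under the resource's encryption tab) can also be expressed as infrastructure-as-code, as the linked quickstart templates do. As a rough, hypothetical sketch only — the resource name, API version, kind, and key details below are placeholders and are not taken from the actual templates — a Bicep fragment enabling CMK on an Azure AI Services account might look like this:

```bicep
resource aiServices 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: 'my-ai-services'               // placeholder resource name
  location: resourceGroup().location
  kind: 'AIServices'
  sku: { name: 'S0' }
  identity: { type: 'SystemAssigned' } // identity needs access to the vault key
  properties: {
    encryption: {
      keySource: 'Microsoft.KeyVault'  // switch from default Microsoft-managed keys
      keyVaultProperties: {
        keyVaultUri: 'https://my-vault.vault.azure.net/' // placeholder vault
        keyName: 'my-cmk'                                // placeholder key
        keyVersion: '<key-version>'
      }
    }
  }
}
```

For the exact, supported property names and hub-level templates, the Azure Quickstart repo links in the diff above remain the authoritative source.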
articles/ai-studio/concepts/evaluation-approach-gen-ai.md
Diff
@@ -7,6 +7,7 @@ ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 5/21/2024
ms.reviewer: mithigpe
Summary
{
"modification_type": "minor update",
"modification_title": "カスタムメタデータの更新"
}
Explanation
This change affects "evaluation-approach-gen-ai.md", updating part of its custom metadata. Specifically, "ignite-2024" is added to the `ms.custom` field, associating the document with the latest events and topics and increasing its relevance.
Metadata changes like this are important for content management and navigation, and aim to make it easier for users to find the information they need. Overall, the update keeps the content consistent and current.
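Putting the hunks above together, the resulting front matter in an updated article takes roughly this shape (surrounding fields are illustrative; only the `ignite-2024` entry is new):

```yaml
ms.service: azure-ai-studio
ms.custom:
  - ignite-2023   # existing event tag
  - build-2024    # existing event tag
  - ignite-2024   # tag added by this change
ms.topic: conceptual
```

Because `ms.custom` is a YAML list, new event tags append without disturbing existing entries, which is why these diffs are single-line additions.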
articles/ai-studio/concepts/evaluation-metrics-built-in.md
Diff
@@ -8,6 +8,7 @@ ms.custom:
- ignite-2023
- build-2024
- references_regions
+ - ignite-2024
ms.topic: conceptual
ms.date: 11/19/2024
ms.reviewer: mithigpe
@@ -529,6 +530,6 @@ Currently certain AI-assisted evaluators are available only in the following reg
- [Evaluate your generative AI apps via the playground](../how-to/evaluate-prompts-playground.md)
- [Evaluate with the Azure AI evaluate SDK](../how-to/develop/evaluate-sdk.md)
-- [Evaluate your generative AI apps with the Azure AI Studio](../how-to/evaluate-generative-ai-app.md)
+- [Evaluate your generative AI apps with the Azure AI Foundry portal](../how-to/evaluate-generative-ai-app.md)
- [View the evaluation results](../how-to/evaluate-results.md)
-- [Transparency Note for Azure AI Studio safety evaluations](safety-evaluations-transparency-note.md)
\ No newline at end of file
+- [Transparency Note for Azure AI Foundry safety evaluations](safety-evaluations-transparency-note.md)
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "評価指標に関する用語の更新"
}
Explanation
This change updates terminology in "evaluation-metrics-built-in.md". The main change replaces "Azure AI Studio" with "Azure AI Foundry portal", adapting the content to the new platform.
In addition, "ignite-2024" is added to the custom metadata, associating the document with the latest events and topics. The navigation links are also updated, and the Transparency Note title is revised to match the current platform.
Updates like this keep the documentation accurate and relevant so that users get the latest information. Overall, the change preserves consistency while reflecting how the new platform is used.
articles/ai-studio/concepts/fine-tuning-overview.md
Diff
@@ -1,7 +1,7 @@
---
-title: Fine-tuning in Azure AI Studio
+title: Fine-tuning in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces fine-tuning of models in Azure AI Studio.
+description: This article introduces fine-tuning of models in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -78,7 +78,7 @@ It's important to call out that fine-tuning is heavily dependent on the quality
## Supported models for fine-tuning
-Now that you know when to use fine-tuning for your use case, you can go to Azure AI Studio to find models available to fine-tune. Fine-tuning is available in specific Azure regions for some models. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in serverless API endpoints | Azure AI Studio](../how-to/deploy-models-serverless-availability.md) for detailed information.
+Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune. Fine-tuning is available in specific Azure regions for some models. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in serverless API endpoints | Azure AI Foundry](../how-to/deploy-models-serverless-availability.md) for detailed information.
For details about Azure OpenAI models that are available for fine-tuning, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models) or the [Azure OpenAI models table](#fine-tuning-azure-openai-models) later in this guide.
@@ -91,7 +91,7 @@ For the Azure OpenAI Service models that you can fine tune, supported regions f
## Related content
-- [Fine-tune an Azure OpenAI model in Azure AI Studio](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
-- [Fine-tune a Llama 2 model in Azure AI Studio](../how-to/fine-tune-model-llama.md)
-- [Fine-tune a Phi-3 model in Azure AI Studio](../how-to/fine-tune-phi-3.md)
-- [Deploy Phi-3 family of small language models with Azure AI Studio](../how-to/deploy-models-phi-3.md)
+- [Fine-tune an Azure OpenAI model in Azure AI Foundry portal](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
+- [Fine-tune a Llama 2 model in Azure AI Foundry portal](../how-to/fine-tune-model-llama.md)
+- [Fine-tune a Phi-3 model in Azure AI Foundry portal](../how-to/fine-tune-phi-3.md)
+- [Deploy Phi-3 family of small language models with Azure AI Foundry](../how-to/deploy-models-phi-3.md)
Summary
{
"modification_type": "minor update",
"modification_title": "ドキュメントのプラットフォーム名の更新"
}
Explanation
This change updates the platform name in "fine-tuning-overview.md". In the title and description, "Azure AI Studio" is replaced with "Azure AI Foundry portal", adapting the information to the new platform.
Specifically, the information and steps users need when fine-tuning models are revised for the Azure AI Foundry context, and the related content links are likewise updated to point to the new platform.
This change is important for giving users up-to-date information, and it aims to keep the documentation consistent and relevant overall, so that the fine-tuning process and its resources come across more clearly.
articles/ai-studio/concepts/management-center.md
Diff
@@ -1,30 +1,30 @@
---
title: Management center overview
titleSuffix: Azure AI Foundry
-description: "The management center in Azure AI Studio provides a centralized hub for governance and management activities."
+description: "The management center in Azure AI Foundry portal provides a centralized hub for governance and management activities."
author: Blackmist
ms.author: larryfr
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: concept-article #Don't change.
ms.date: 11/18/2024
-
#customer intent: As an admin, I want a central location where I can perform governance and management activities.
-
---
# Management center overview
-The management center is a part of the Azure AI Studio that streamlines governance and management activities. From the management center, you can manage Azure AI Studio hubs, projects, resources, and settings. To visit the management center, open the [Azure AI Studio](https://ai.azure.com) and (while in a project) select the __Management center__ link from the left menu.
+The management center is a part of the Azure AI Foundry portal that streamlines governance and management activities. From the management center, you can manage Azure AI Foundry hubs, projects, resources, and settings. To visit the management center, open the [Azure AI Foundry](https://ai.azure.com) and (while in a project) select the __Management center__ link from the left menu.
-:::image type="content" source="../media/management-center/management-center.png" alt-text="Screenshot of the left menu of Azure AI Studio with the management center selected." lightbox="../media/management-center/management-center.png":::
+:::image type="content" source="../media/management-center/management-center.png" alt-text="Screenshot of the left menu of Azure AI Foundry with the management center selected." lightbox="../media/management-center/management-center.png":::
## Manage hubs and projects
You can use the management center to create and configure hubs and projects within those hubs. Use __All resources__ to view all hubs and projects that you have access to. Use the __Hub__ and __Project__ sections of the left menu to manage individual hubs and projects.
:::image type="content" source="../media/management-center/manage-hub-project.png" alt-text="Screenshot of the all resources, hub, and project sections of the management studio selected." lightbox="../media/management-center/manage-hub-project.png":::
-For more information, see the articles on creating a [hub](../how-to/create-azure-ai-resource.md#create-a-hub-in-ai-studio) and [project](../how-to/create-projects.md).
+For more information, see the articles on creating a [hub](../how-to/create-azure-ai-resource.md#create-a-hub-in-ai-foundry-portal) and [project](../how-to/create-projects.md).
## Manage resource utilization
@@ -40,7 +40,7 @@ Assign roles, manage users, and ensure that all settings comply with organizatio
:::image type="content" source="../media/management-center/user-management.png" alt-text="Screenshot of the user management section of the management center." lightbox="../media/management-center/user-management.png":::
-For more information, see [Role-based access control](rbac-ai-studio.md#assigning-roles-in-ai-studio).
+For more information, see [Role-based access control](rbac-ai-studio.md#assigning-roles-in-ai-foundry-portal).
## Related content
Summary
{
"modification_type": "minor update",
"modification_title": "管理センターに関するプラットフォーム名の更新"
}
Explanation
This change updates the content of "management-center.md", replacing "Azure AI Studio" with "Azure AI Foundry portal" so that the document matches the current platform.
Specifically, every mention in the management center description and related procedures is aligned with the new platform name. The steps for opening the management center and the information on managing hubs and projects are presented in the new context, and link targets are corrected to point at the new portal.
As a result, the document reflects the latest information and supports users in carrying out governance and management activities correctly. Updates like this are important for providing accurate information.
articles/ai-studio/concepts/model-benchmarks.md
Diff
@@ -1,29 +1,30 @@
---
-title: Explore model benchmarks in Azure AI Studio
+title: Explore model benchmarks in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces benchmarking capabilities and the model benchmarks experience in Azure AI Studio.
+description: This article introduces benchmarking capabilities and the model benchmarks experience in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ai-learning-hub
+ - ignite-2024
ms.topic: concept-article
ms.date: 11/11/2024
ms.reviewer: jcioffi
ms.author: mopeakande
author: msakande
---
-# Model benchmarks in Azure AI Studio
+# Model benchmarks in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In Azure AI Studio, you can compare benchmarks across models and datasets available in the industry to decide which one meets your business scenario. You can directly access detailed benchmarking results within the model catalog. Whether you already have models in mind or you're exploring models, the benchmarking data in Azure AI empowers you to make informed decisions quickly and efficiently.
+In Azure AI Foundry portal, you can compare benchmarks across models and datasets available in the industry to decide which one meets your business scenario. You can directly access detailed benchmarking results within the model catalog. Whether you already have models in mind or you're exploring models, the benchmarking data in Azure AI empowers you to make informed decisions quickly and efficiently.
Azure AI supports model benchmarking for select models that are popular and most frequently used. Supported models have a _benchmarks_ icon that looks like a histogram. You can find these models in the model catalog by using the **Collections** filter and selecting **Benchmark results**. You can then use the search functionality to find specific models.
:::image type="content" source="../media/how-to/model-benchmarks/access-model-catalog-benchmark.png" alt-text="Screenshot showing how to filter for benchmark models in the model catalog homepage." lightbox="../media/how-to/model-benchmarks/access-model-catalog-benchmark.png":::
-Model benchmarks help you make informed decisions about the sustainability of models and datasets before you initiate any job. The benchmarks are a curated list of the best-performing models for a task, based on a comprehensive comparison of benchmarking metrics. Azure AI Studio provides the following benchmarks for models, based on model catalog collections:
+Model benchmarks help you make informed decisions about the sustainability of models and datasets before you initiate any job. The benchmarks are a curated list of the best-performing models for a task, based on a comprehensive comparison of benchmarking metrics. Azure AI Foundry provides the following benchmarks for models, based on model catalog collections:
- Benchmarks across large language models (LLMs) and small language models (SLMs)
- Benchmarks across embedding models
@@ -155,5 +156,5 @@ Prompt construction follows best practices for each dataset, as specified by the
## Related content
-- [How to benchmark models in Azure AI Studio](../how-to/benchmark-model-in-catalog.md)
-- [Model catalog and collections in Azure AI Studio](../how-to/model-catalog-overview.md)
+- [How to benchmark models in Azure AI Foundry portal](../how-to/benchmark-model-in-catalog.md)
+- [Model catalog and collections in Azure AI Foundry portal](../how-to/model-catalog-overview.md)
Summary
{
"modification_type": "minor update",
"modification_title": "モデルベンチマークに関するプラットフォーム名の更新"
}
Explanation
This change updates "model-benchmarks.md", correcting the platform name. Specifically, "Azure AI Studio" is changed to "Azure AI Foundry portal", and all related information is adapted to the new platform's context.
The changes cover the description of model benchmarking capabilities and the benchmarks experience, which help users compare models and datasets and choose the one that fits their business scenario. The descriptions of the benchmarks icon and the search functionality are also updated.
With this update, readers get accurate information based on the current platform and can more easily understand the importance and benefits of model benchmarks. Overall, the document is kept consistent with the current state of the product so that users can access information efficiently.
articles/ai-studio/concepts/rbac-ai-studio.md
Diff
@@ -1,37 +1,38 @@
---
-title: Role-based access control in Azure AI Studio
+title: Role-based access control in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces role-based access control in Azure AI Studio.
+description: This article introduces role-based access control in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 9/12/2024
ms.reviewer: deeikele
ms.author: larryfr
author: Blackmist
---
-# Role-based access control in Azure AI Studio
+# Role-based access control in Azure AI Foundry portal
-In this article, you learn how to manage access (authorization) to an Azure AI Studio hub. Azure role-based access control (Azure RBAC) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles.
+In this article, you learn how to manage access (authorization) to an Azure AI Foundry hub. Azure role-based access control (Azure RBAC) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles.
> [!WARNING]
-> Applying some roles might limit UI functionality in Azure AI Studio for other users. For example, if a user's role does not have the ability to create a compute instance, the option to create a compute instance will not be available in studio. This behavior is expected, and prevents the user from attempting operations that would return an access denied error.
+> Applying some roles might limit UI functionality in Azure AI Foundry portal for other users. For example, if a user's role does not have the ability to create a compute instance, the option to create a compute instance will not be available in studio. This behavior is expected, and prevents the user from attempting operations that would return an access denied error.
-## AI Studio hub vs project
+## AI Foundry hub vs project
-In the Azure AI Studio, there are two levels of access: the hub and the project. The hub is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) and where you configure your Azure AI services. Hub access can allow you to modify the infrastructure, create new hubs, and create projects. Projects are a subset of the hub that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the hub.
+In the Azure AI Foundry portal, there are two levels of access: the hub and the project. The hub is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) and where you configure your Azure AI services. Hub access can allow you to modify the infrastructure, create new hubs, and create projects. Projects are a subset of the hub that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the hub.
-:::image type="content" source="../media/concepts/resource-provider-connected-resources.svg" alt-text="Diagram of the relationship between AI Studio resources.":::
+:::image type="content" source="../media/concepts/resource-provider-connected-resources.svg" alt-text="Diagram of the relationship between AI Foundry resources.":::
One of the key benefits of the hub and project relationship is that developers can create their own projects that inherit the hub security settings. You might also have developers who are contributors to a project, and can't create new projects.
## Default roles for the hub
-The AI Studio hub has built-in roles that are available by default.
+The AI Foundry hub has built-in roles that are available by default.
Here's a table of the built-in roles and their permissions for the hub:
@@ -91,7 +92,7 @@ If the built-in Azure AI Developer role doesn't meet your needs, you can create
## Default roles for projects
-Projects in AI Studio have built-in roles that are available by default.
+Projects in AI Foundry portal have built-in roles that are available by default.
Here's a table of the built-in roles and their permissions for the project:
@@ -103,7 +104,7 @@ Here's a table of the built-in roles and their permissions for the project:
| Azure AI Inference Deployment Operator | Perform all actions required to create a resource deployment within a resource group. |
| Reader | Read only access to the project. |
-When a user is granted access to a project (for example, through the AI Studio permission management), two more roles are automatically assigned to the user. The first role is Reader on the hub. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```.
+When a user is granted access to a project (for example, through the AI Foundry portal permission management), two more roles are automatically assigned to the user. The first role is Reader on the hub. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```.
In order to complete end-to-end AI development and deployment, users only need these two autoassigned roles and either the Contributor or Azure AI Developer role on a project.
@@ -209,7 +210,7 @@ The hub has dependencies on other Azure services. The following table lists the
| `Microsoft.MachineLearningServices/workspaces/write` | Create a new workspace or updates the properties of an existing workspace. |
## Sample enterprise RBAC setup
-The following table is an example of how to set up role-based access control for your Azure AI Studio for an enterprise.
+The following table is an example of how to set up role-based access control for your Azure AI Foundry for an enterprise.
| Persona | Role | Purpose |
| --- | --- | ---|
@@ -228,26 +229,26 @@ For example, if you're trying to consume a new Blob storage, you need to ensure
## Manage access with roles
-If you're an owner of a hub, you can add and remove roles for AI Studio. Go to the **Home** page in [AI Studio](https://ai.azure.com) and select your hub. Then select **Users** to add and remove users for the hub. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "joe@contoso.com" for resource group "this-rg" with the following command:
+If you're an owner of a hub, you can add and remove roles for AI Foundry. Go to the **Home** page in [AI Foundry](https://ai.azure.com) and select your hub. Then select **Users** to add and remove users for the hub. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "joe@contoso.com" for resource group "this-rg" with the following command:
```azurecli-interactive
az role assignment create --role "Azure AI Developer" --assignee "joe@contoso.com" --resource-group this-rg
```
## Create custom roles
-If the built-in roles are insufficient, you can create custom roles. Custom roles might have the read, write, delete, and compute resource permissions in that AI Studio. You can make the role available at a specific project level, a specific resource group level, or a specific subscription level.
+If the built-in roles are insufficient, you can create custom roles. Custom roles might have the read, write, delete, and compute resource permissions in that AI Foundry. You can make the role available at a specific project level, a specific resource group level, or a specific subscription level.
> [!NOTE]
> You must be an owner of the resource at that level to create custom roles within that resource.
-The following JSON example defines a custom AI Studio developer role at the subscription level:
+The following JSON example defines a custom AI Foundry developer role at the subscription level:
```json
{
"properties": {
- "roleName": "AI Studio Developer",
- "description": "Custom role for AI Studio. At subscription level",
+ "roleName": "AI Foundry Developer",
+ "description": "Custom role for AI Foundry. At subscription level",
"assignableScopes": [
"/subscriptions/<your-subscription-id>"
],
@@ -298,14 +299,14 @@ For steps on creating a custom role, use one of the following articles:
For more information on creating custom roles in general, visit the [Azure custom roles](/azure/role-based-access-control/custom-roles) article.
-## Assigning roles in AI Studio
+## Assigning roles in AI Foundry portal
-You can add users and assign roles directly from Azure AI Studio at either the hub or project level. In the [management center](management-center.md), select **Users** in either the hub or project section, then select **New user** to add a user.
+You can add users and assign roles directly from Azure AI Foundry at either the hub or project level. In the [management center](management-center.md), select **Users** in either the hub or project section, then select **New user** to add a user.
> [!NOTE]
> You are limited to selecting built-in roles. If you need to assign custom roles, you must use the [Azure portal](/azure/role-based-access-control/role-assignments-portal), [Azure CLI](/azure/role-based-access-control/role-assignments-cli), or [Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell).
-:::image type="content" source="../media/concepts/hub-overview-add-user.png" lightbox="../media/concepts/hub-overview-add-user.png" alt-text="Screenshot of the Azure AI Studio hub overview with the new user button highlighted.":::
+:::image type="content" source="../media/concepts/hub-overview-add-user.png" lightbox="../media/concepts/hub-overview-add-user.png" alt-text="Screenshot of the Azure AI Foundry hub overview with the new user button highlighted.":::
You are then prompted to enter the user information and select a built-in role.
@@ -315,7 +316,7 @@ You are then prompted to enter the user information and select a built-in role.
When configuring a hub to use a customer-managed key (CMK), an Azure Key Vault is used to store the key. The user or service principal used to create the workspace must have owner or contributor access to the key vault.
-If your AI Studio hub is configured with a **user-assigned managed identity**, the identity must be granted the following roles. These roles allow the managed identity to create the Azure Storage, Azure Cosmos DB, and Azure Search resources used when using a customer-managed key:
+If your AI Foundry hub is configured with a **user-assigned managed identity**, the identity must be granted the following roles. These roles allow the managed identity to create the Azure Storage, Azure Cosmos DB, and Azure Search resources used when using a customer-managed key:
- `Microsoft.Storage/storageAccounts/write`
- `Microsoft.Search/searchServices/write`
@@ -329,10 +330,10 @@ When you create a connection that uses Microsoft Entra ID authentication, you mu
| Resource connection | Role | Description |
|----------|------|-------------|
-| Azure AI Search | Contributor | List API-Keys to list indexes from Azure AI Studio. |
+| Azure AI Search | Contributor | List API-Keys to list indexes from Azure AI Foundry. |
| Azure AI Search | Search Index Data Contributor | Required for indexing scenarios |
-| Azure AI services / Azure OpenAI | Cognitive Services OpenAI Contributor | Call public ingestion API from Azure AI Studio. |
-| Azure AI services / Azure OpenAI | Cognitive Services User | List API-Keys from Azure AI Studio. |
+| Azure AI services / Azure OpenAI | Cognitive Services OpenAI Contributor | Call public ingestion API from Azure AI Foundry. |
+| Azure AI services / Azure OpenAI | Cognitive Services User | List API-Keys from Azure AI Foundry. |
| Azure AI services / Azure OpenAI | Cognitive Services Contributor | Allows for calls to the control plane. |
| Azure Blob Storage | Storage Blob Data Contributor | Required for reading and writing data to the blob storage. |
| Azure Data Lake Storage Gen 2 | Storage Blob Data Contributor | Required for reading and writing data to the data lake. |
@@ -362,19 +363,19 @@ When you create a connection to an existing Azure OpenAI resource, you must also
## Scenario: Use Azure Container Registry
-An Azure Container Registry instance is an optional dependency for Azure AI Studio hub. The following table lists the support matrix when authenticating a hub to Azure Container Registry, depending on the authentication method and the __Azure Container Registry's__ [public network access configuration](/azure/container-registry/container-registry-access-selected-networks).
+An Azure Container Registry instance is an optional dependency for Azure AI Foundry hub. The following table lists the support matrix when authenticating a hub to Azure Container Registry, depending on the authentication method and the __Azure Container Registry's__ [public network access configuration](/azure/container-registry/container-registry-access-selected-networks).
| Authentication method | Public network access </br>disabled | Azure Container Registry</br>Public network access enabled |
| ---- | :----: | :----: |
| Admin user | ✓ | ✓ |
-| AI Studio hub system-assigned managed identity | ✓ | ✓ |
-| AI Studio hub user-assigned managed identity </br>with the **ACRPull** role assigned to the identity | | ✓ |
+| AI Foundry hub system-assigned managed identity | ✓ | ✓ |
+| AI Foundry hub user-assigned managed identity </br>with the **ACRPull** role assigned to the identity | | ✓ |
A system-assigned managed identity is automatically assigned to the correct roles when the hub is created. If you're using a user-assigned managed identity, you must assign the **ACRPull** role to the identity.
## Scenario: Use Azure Application Insights for logging
-Azure Application Insights is an optional dependency for Azure AI Studio hub. The following table lists the permissions required if you want to use Application Insights when you create a hub. The person that creates the hub needs these permissions. The person who creates a project from the hub doesn't need these permissions.
+Azure Application Insights is an optional dependency for Azure AI Foundry hub. The following table lists the permissions required if you want to use Application Insights when you create a hub. The person that creates the hub needs these permissions. The person who creates a project from the hub doesn't need these permissions.
| Permission | Purpose |
|------------|-------------|
@@ -486,7 +487,7 @@ The following example defines a role for a developer using [Azure OpenAI Assista
#### Symptoms
-When using the Azure AI Studio chat playground, you receive an error message stating "Principal does not have access to API/Operation". The error may also include an "Apim-request-id".
+When using the Azure AI Foundry portal chat playground, you receive an error message stating "Principal does not have access to API/Operation". The error may also include an "Apim-request-id".
#### Cause
@@ -498,13 +499,13 @@ Assign the following roles to the user or service principal. The role you assign
| Service being accessed | Role | Description |
| --- | --- | --- |
-| Azure OpenAI | Cognitive Services OpenAI Contributor | Call public ingestion API from Azure AI Studio. |
-| Azure OpenAI | Cognitive Services User | List API-Keys from Azure AI Studio. |
+| Azure OpenAI | Cognitive Services OpenAI Contributor | Call public ingestion API from Azure AI Foundry. |
+| Azure OpenAI | Cognitive Services User | List API-Keys from Azure AI Foundry. |
| Azure AI Search | Search Index Data Contributor | Required for indexing scenarios. |
| Azure AI Search| Search Index Data Reader | Inference service queries the data from the index. Only used for inference scenarios. |
## Next steps
-- [How to create an Azure AI Studio hub](../how-to/create-azure-ai-resource.md)
-- [How to create an Azure AI Studio project](../how-to/create-projects.md)
-- [How to create a connection in Azure AI Studio](../how-to/connections-add.md)
+- [How to create an Azure AI Foundry hub](../how-to/create-azure-ai-resource.md)
+- [How to create an Azure AI Foundry project](../how-to/create-projects.md)
+- [How to create a connection in Azure AI Foundry portal](../how-to/connections-add.md)
Summary
{
"modification_type": "minor update",
"modification_title": "役割ベースのアクセス制御に関するプラットフォーム名の更新"
}
Explanation
This change updates "rbac-ai-studio.md", replacing "Azure AI Studio" with "Azure AI Foundry portal" throughout the related content, so the document conforms to the current platform.
The content covers how to manage role-based access control (RBAC), the hub and project access levels, the default roles, and how to create custom roles. The update also makes clear how certain roles affect UI functionality and what to watch for when configuring access rights to resources and projects.
In addition, the platform-specific steps and caveats are revised, with clearer explanations of permission management and custom role creation. This helps users set appropriate roles and permissions and work effectively. Overall, the document provides the key information readers need to understand and apply role-based access control.
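To complement the `az role assignment create` command shown in the diff, a custom role like the subscription-level JSON definition above would typically be registered with Azure CLI along these lines (the file name `ai-foundry-developer.json` is illustrative and holds that JSON definition):

```azurecli-interactive
az role definition create --role-definition ai-foundry-developer.json
```

Once registered, the custom role can be assigned with the same `az role assignment create` pattern as a built-in role, which is why the document directs custom-role assignment to the Azure portal, CLI, or PowerShell rather than the Foundry portal UI.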
articles/ai-studio/concepts/retrieval-augmented-generation.md
Diff
@@ -1,5 +1,5 @@
---
-title: Retrieval augmented generation in Azure AI Studio
+title: Retrieval augmented generation in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
description: This article introduces retrieval augmented generation for use in generative AI applications.
manager: scottpolly
Summary
{
"modification_type": "minor update",
"modification_title": "情報取得を拡張した生成に関するプラットフォーム名の更新"
}
Explanation
This change updates the title of the "retrieval-augmented-generation.md" document, renaming the platform from "Azure AI Studio" to "Azure AI Foundry portal". The fix ensures that the document presents accurate information tied to the current platform.
The change affects only the document title; the technical details and the body content are unchanged. Even so, this small fix plays an important role in keeping the product information that users see up to date. As a result, readers can deepen their understanding of retrieval augmented generation in generative AI applications while trusting that the related information is accurate.
articles/ai-studio/concepts/safety-evaluations-transparency-note.md
Diff
@@ -1,7 +1,7 @@
---
-title: Transparency Note for Azure AI Studio safety evaluations
+title: Transparency Note for Azure AI Foundry safety evaluations
titleSuffix: Azure AI Foundry
-description: Azure AI Studio safety evaluations intended purpose, capabilities, limitations and how to achieve the best performance.
+description: Azure AI Foundry safety evaluations intended purpose, capabilities, limitations and how to achieve the best performance.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -13,7 +13,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Transparency Note for Azure AI Studio safety evaluations
+# Transparency Note for Azure AI Foundry safety evaluations
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
@@ -23,11 +23,11 @@ An AI system includes not only the technology, but also the people who will use
Microsoft’s Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/en-us/ai/responsible-ai).
-## The basics of Azure AI Studio safety evaluations
+## The basics of Azure AI Foundry safety evaluations
### Introduction
-The Azure AI Studio safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, jailbreak vulnerability. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Studio safety evaluations reflect Microsoft’s commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
+The Azure AI Foundry portal safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, jailbreak vulnerability. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft’s commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
### Key terms
@@ -43,23 +43,23 @@ The Azure AI Studio safety evaluations let users evaluate the output of their ge
### System behavior
-Azure AI Studio provisions an Azure OpenAI GPT-4 model and orchestrates adversarial attacks against your application to generate a high quality test dataset. It then provisions another GPT-4 model to annotate your test dataset for content and security. Users provide their generative AI application endpoint that they wish to test, and the safety evaluations will output a static test dataset against that endpoint along with its content risk label (Very low, Low, Medium, High) and reasoning for the AI-generated label.
+Azure AI Foundry provisions an Azure OpenAI GPT-4 model and orchestrates adversarial attacks against your application to generate a high quality test dataset. It then provisions another GPT-4 model to annotate your test dataset for content and security. Users provide their generative AI application endpoint that they wish to test, and the safety evaluations will output a static test dataset against that endpoint along with its content risk label (Very low, Low, Medium, High) and reasoning for the AI-generated label.
### Use cases
#### Intended uses
The safety evaluations aren't intended to use for any purpose other than to evaluate content risks and jailbreak vulnerabilities of your generative AI application:
-- **Evaluating your generative AI application pre-deployment**: Using the evaluation wizard in the Azure AI Studio or the Azure AI Python SDK, safety evaluations can assess in an automated way to evaluate potential content or security risks.
+- **Evaluating your generative AI application pre-deployment**: Using the evaluation wizard in the Azure AI Foundry portal or the Azure AI Python SDK, safety evaluations can assess in an automated way to evaluate potential content or security risks.
- **Augmenting your red-teaming operations**: Using the adversarial simulator, safety evaluations can simulate adversarial interactions with your generative AI application to attempt to uncover content and security risks.
-- **Communicating content and security risks to stakeholders**: Using the Azure AI Studio, you can share access to your Azure AI Studio project with safety evaluations results with auditors or compliance stakeholders.
+- **Communicating content and security risks to stakeholders**: Using the Azure AI Foundry portal, you can share access to your Azure AI Foundry project with safety evaluations results with auditors or compliance stakeholders.
#### Considerations when choosing a use case
-We encourage customers to leverage Azure AI Studio safety evaluations in their innovative solutions or applications. However, here are some considerations when choosing a use case:
+We encourage customers to leverage Azure AI Foundry safety evaluations in their innovative solutions or applications. However, here are some considerations when choosing a use case:
-- **Safety evaluations should include human-in-the-loop**: Using automated evaluations like Azure AI Studio safety evaluations should include human reviewers such as domain experts to assess whether your generative AI application has been tested thoroughly prior to deployment to end users.
+- **Safety evaluations should include human-in-the-loop**: Using automated evaluations like Azure AI Foundry safety evaluations should include human reviewers such as domain experts to assess whether your generative AI application has been tested thoroughly prior to deployment to end users.
- **Safety evaluations do not include total comprehensive coverage**: Though safety evaluations can provide a way to augment your testing for potential content or security risks, it wasn't designed to replace manual red-teaming operations specifically geared towards your application’s domain, use cases, and type of end users.
- Supported scenarios:
- For adversarial simulation: Question answering, multi-turn chat, summarization, search, text rewrite, ungrounded and grounded content generation.
@@ -68,13 +68,13 @@ We encourage customers to leverage Azure AI Studio safety evaluations in their i
- The coverage of content risks provided in the safety evaluations is subsampled from a limited number of marginalized groups and topics:
- The hate- and unfairness metric includes some coverage for a limited number of marginalized groups for the demographic factor of gender (for example, men, women, non-binary people) and race, ancestry, ethnicity, and nationality (for example, Black, Mexican, European). Not all marginalized groups in gender and race, ancestry, ethnicity, and nationality are covered. Other demographic factors that are relevant to hate and unfairness don't currently have coverage (for example, disability, sexuality, religion).
- The metrics for sexual, violent, and self-harm-related content are based on a preliminary conceptualization of these harms that are less developed than hate and unfairness. This means that we can make less strong claims about measurement coverage and how well the measurements represent the different ways these harms can occur. Coverage for these content types includes a limited number of topics relate to sex (for example, sexual violence, relationships, sexual acts), violence (for example, abuse, injuring others, kidnapping), and self-harm (for example, intentional death, intentional self-injury, eating disorders).
-- Azure AI Studio safety evaluations don't currently allow for plug-ins or extensibility.
+- Azure AI Foundry safety evaluations don't currently allow for plug-ins or extensibility.
- To keep quality up to date and improve coverage, we'll aim for a cadence of future releases of improvement to the service’s adversarial simulation and annotation capabilities.
### Technical limitations, operational factors, and ranges
-- The field of large language models (LLMs) continues to evolve at a rapid pace, requiring continuous improvement of evaluation techniques to ensure safe and reliable AI system deployment. Azure AI Studio safety evaluations reflect Microsoft’s commitment to continue innovating in the field of LLM evaluation. We aim to provide the best tooling to help you evaluate the safety of your generative AI applications but recognize effective evaluation is a continuous work in progress.
-- Customization of Azure AI Studio safety evaluations is currently limited. We only expect users to provide their input generative AI application endpoint and our service will output a static dataset that is labeled for content risk.
+- The field of large language models (LLMs) continues to evolve at a rapid pace, requiring continuous improvement of evaluation techniques to ensure safe and reliable AI system deployment. Azure AI Foundry safety evaluations reflect Microsoft’s commitment to continue innovating in the field of LLM evaluation. We aim to provide the best tooling to help you evaluate the safety of your generative AI applications but recognize effective evaluation is a continuous work in progress.
+- Customization of Azure AI Foundry safety evaluations is currently limited. We only expect users to provide their input generative AI application endpoint and our service will output a static dataset that is labeled for content risk.
- Finally, it should be noted that this system doesn't automate any actions or tasks, it only provides an evaluation of your generative AI application outputs, which should be reviewed by a human decision maker in the loop before choosing to deploy the generative AI application or system into production for end users.
## System performance
@@ -84,7 +84,7 @@ We encourage customers to leverage Azure AI Studio safety evaluations in their i
- When accounting for your domain, which might treat some content more sensitively than other, consider adjusting the threshold for calculating the defect rate.
- When using the automated safety evaluations, there might sometimes be an error in your AI-generated labels for the severity of a content risk or its reasoning. There's a manual human feedback column to enable human-in-the-loop validation of the automated safety evaluation results.
-## Evaluation of Azure AI Studio safety evaluations
+## Evaluation of Azure AI Foundry safety evaluations
### Evaluation methods
@@ -94,21 +94,21 @@ For all supported content risk types, we have internally checked the quality by
Overall, we saw a high rate of approximate matches across the self-harm and sexual content risks across all tolerance levels. For violence and for hate and unfairness, the approximate match rate across tolerance levels were lower. These results were in part due to increased divergence in annotation guideline content for human labelers versus automated annotator, and in part due to the increased amount of content and complexity in specific guidelines.
-Although our comparisons are between entities that used slightly to moderately different annotation guidelines (and are thus not standard human-model agreement comparisons), these comparisons provide an estimate of the quality that we can expect from Azure AI Studio safety evaluations given the parameters of these comparisons. Specifically, we only looked at English samples, so our findings might not generalize to other languages. Also, each dataset sample consisted of only a single turn, and so more experiments are needed to verify generalizability of our evaluation findings to multi-turn scenarios (for example, a back-and-forth conversation including user queries and system responses). The types of samples used in these evaluation datasets can also greatly affect the approximate match rate between human labels and an automated annotator – if samples are easier to label (for example, if all samples are free of content risks), we might expect the approximate match rate to be higher. The quality of human labels for an evaluation could also affect the generalization of our findings.
+Although our comparisons are between entities that used slightly to moderately different annotation guidelines (and are thus not standard human-model agreement comparisons), these comparisons provide an estimate of the quality that we can expect from Azure AI Foundry safety evaluations given the parameters of these comparisons. Specifically, we only looked at English samples, so our findings might not generalize to other languages. Also, each dataset sample consisted of only a single turn, and so more experiments are needed to verify generalizability of our evaluation findings to multi-turn scenarios (for example, a back-and-forth conversation including user queries and system responses). The types of samples used in these evaluation datasets can also greatly affect the approximate match rate between human labels and an automated annotator – if samples are easier to label (for example, if all samples are free of content risks), we might expect the approximate match rate to be higher. The quality of human labels for an evaluation could also affect the generalization of our findings.
-## Evaluating and integrating Azure AI Studio safety evaluations for your use
+## Evaluating and integrating Azure AI Foundry safety evaluations for your use
-Measurement and evaluation of your generative AI application are a critical part of a holistic approach to AI risk management. Azure AI Studio safety evaluations are complementary to and should be used in tandem with other AI risk management practices. Domain experts and human-in-the-loop reviewers should provide proper oversight when using AI-assisted safety evaluations in the generative AI application design, development, and deployment cycle. You should understand the limitations and intended uses of the safety evaluations, being careful not to rely on outputs produced by Azure AI Studio AI-assisted safety evaluations in isolation.
+Measurement and evaluation of your generative AI application are a critical part of a holistic approach to AI risk management. Azure AI Foundry safety evaluations are complementary to and should be used in tandem with other AI risk management practices. Domain experts and human-in-the-loop reviewers should provide proper oversight when using AI-assisted safety evaluations in the generative AI application design, development, and deployment cycle. You should understand the limitations and intended uses of the safety evaluations, being careful not to rely on outputs produced by Azure AI Foundry AI-assisted safety evaluations in isolation.
-Due to the non-deterministic nature of the LLMs, you might experience false negative or positive results, such as a high-severity level of violent content scored as "very low" or “low.” Additionally, evaluation results might have different meanings for different audiences. For example, safety evaluations might generate a label for “low” severity of violent content that might not align to a human reviewer’s definition of how severe that specific violent content might be. In Azure AI Studio, we provide a human feedback column with thumbs up and thumbs down when viewing your evaluation results to surface which instances were approved or flagged as incorrect by a human reviewer. Consider the context of how your results might be interpreted for decision making by others you can share evaluation with and validate your evaluation results with the appropriate level of scrutiny for the level of risk in the environment that each generative AI application operates in.
+Due to the non-deterministic nature of the LLMs, you might experience false negative or positive results, such as a high-severity level of violent content scored as "very low" or “low.” Additionally, evaluation results might have different meanings for different audiences. For example, safety evaluations might generate a label for “low” severity of violent content that might not align to a human reviewer’s definition of how severe that specific violent content might be. In Azure AI Foundry portal, we provide a human feedback column with thumbs up and thumbs down when viewing your evaluation results to surface which instances were approved or flagged as incorrect by a human reviewer. Consider the context of how your results might be interpreted for decision making by others you can share evaluation with and validate your evaluation results with the appropriate level of scrutiny for the level of risk in the environment that each generative AI application operates in.
## Learn more about responsible AI
- [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai)
- [Microsoft responsible AI resources](https://www.microsoft.com/ai/tools-practices)
- [Microsoft Azure Learning courses on responsible AI](/ai)
-## Learn more about Azure AI Studio safety evaluations
+## Learn more about Azure AI Foundry safety evaluations
- [Microsoft concept documentation on our approach to evaluating generative AI applications](evaluation-approach-gen-ai.md)
- [Microsoft concept documentation on how safety evaluation works](evaluation-metrics-built-in.md)
Summary
{
"modification_type": "minor update",
"modification_title": "安全評価に関するプラットフォーム名の更新"
}
Explanation
This change updates the "safety-evaluations-transparency-note.md" document, replacing "Azure AI Studio" with "Azure AI Foundry" in the title and throughout the body. The update aligns the document with the latest platform so that accurate information is provided.
The affected content covers the basics of safety evaluations, system behavior, use cases, and technical limitations; terms referring to "Azure AI Studio" are replaced with "Azure AI Foundry" across every section. In particular, the document emphasizes the intended purpose, capabilities, and limitations of safety evaluations, and how to achieve the best performance.
These corrections matter because they let users build a transparent, role-appropriate understanding and use the safety evaluation process effectively on the updated platform. Keeping the document aligned with the current context also improves readers' trust in it.
articles/ai-studio/concepts/trace.md
Diff
@@ -4,10 +4,12 @@ titleSuffix: Azure AI Foundry
description: This article provides an overview of tracing with the Azure AI Inference SDK.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: conceptual
ms.date: 11/19/2024
ms.reviewer: truptiparkar
-ms.author: lagayhar
+ms.author: lagayhar
author: lgayhardt
---
@@ -59,7 +61,7 @@ Trace exporters are responsible for sending trace data to a backend system for s
### Trace visualization
-Trace visualization refers to the graphical representation of trace data. Azure AI integrates with visualization tools like Azure AI Studio Tracing, Aspire dashboard, and Prompty Trace viewer to provide developers with an intuitive way to explore and analyze traces, helping them to quickly identify issues and understand the behavior of their applications.
+Trace visualization refers to the graphical representation of trace data. Azure AI integrates with visualization tools like Azure AI Foundry Tracing, Aspire dashboard, and Prompty Trace viewer to provide developers with an intuitive way to explore and analyze traces, helping them to quickly identify issues and understand the behavior of their applications.
## Conclusion
Summary
{
"modification_type": "minor update",
"modification_title": "トレースに関するプラットフォーム名の更新"
}
Explanation
This change updates the platform name in the "trace.md" document from "Azure AI Studio" to "Azure AI Foundry", along with a few metadata adjustments.
Specifically, the changes include:
- A new `ms.custom` field is added with the value `ignite-2024`, indicating that the content is associated with an upcoming event.
- In the body, the description of trace visualization is updated, renaming "Azure AI Studio Tracing" to "Azure AI Foundry Tracing". This lets users correctly understand information about the latest platform.
These fixes matter for keeping information accurate and relevant when users adopt the latest tools and features. By clearly showing how trace data is visualized and which tools are integrated, the document helps developers understand their applications' behavior and diagnose issues more effectively.
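The metadata addition described above is a small YAML front matter change. As a minimal illustration (not part of any Microsoft tooling; the sample text and helper names are hypothetical), the presence of the `ignite-2024` tag under `ms.custom` could be checked with standard-library Python alone:

```python
import re

# Hypothetical sample mirroring the metadata shape shown in the diff.
SAMPLE = """---
title: Trace concepts
ms.service: azure-ai-studio
ms.custom:
  - ignite-2024
ms.topic: conceptual
---
Body text.
"""

def front_matter(markdown: str) -> str:
    """Return the YAML front matter between the leading '---' fences, or ''."""
    match = re.match(r"---\n(.*?)\n---\n", markdown, re.DOTALL)
    return match.group(1) if match else ""

def custom_tags(markdown: str) -> list[str]:
    """Collect list items under ms.custom (simple line-based scan, no PyYAML)."""
    tags, in_custom = [], False
    for line in front_matter(markdown).splitlines():
        if line.startswith("ms.custom:"):
            in_custom = True
        elif in_custom and line.lstrip().startswith("- "):
            tags.append(line.split("- ", 1)[1].strip())
        else:
            in_custom = False
    return tags

print(custom_tags(SAMPLE))  # ['ignite-2024']
```

A real validator would use a proper YAML parser; the line-based scan here is only enough for the flat front matter shown in the diff.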
articles/ai-studio/concepts/vulnerability-management.md
Diff
@@ -1,7 +1,7 @@
---
title: Vulnerability management
titleSuffix: Azure AI Foundry
-description: Learn how Azure AI Studio manages vulnerabilities in images that the service provides, and how you can get the latest security updates for the components that you manage.
+description: Learn how Azure AI Foundry manages vulnerabilities in images that the service provides, and how you can get the latest security updates for the components that you manage.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -13,19 +13,19 @@ ms.author: larryfr
author: Blackmist
---
-# Vulnerability management for Azure AI Studio
+# Vulnerability management for Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Vulnerability management involves detecting, assessing, mitigating, and reporting on any security vulnerabilities that exist in an organization's systems and software. Vulnerability management is a shared responsibility between you and Microsoft.
-This article discusses these responsibilities and outlines the vulnerability management controls that Azure AI Studio provides. You learn how to keep your service instance and applications up to date with the latest security updates, and how to minimize the window of opportunity for attackers.
+This article discusses these responsibilities and outlines the vulnerability management controls that Azure AI Foundry provides. You learn how to keep your service instance and applications up to date with the latest security updates, and how to minimize the window of opportunity for attackers.
## Microsoft-managed VM images
Microsoft manages host OS virtual machine (VM) images for compute instances and serverless compute clusters. The update frequency is monthly and includes the following details:
-* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates helps ensure that you get all applicable OS-related patches. For Azure AI Studio, the publisher is Canonical for all the Ubuntu images.
+* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates helps ensure that you get all applicable OS-related patches. For Azure AI Foundry, the publisher is Canonical for all the Ubuntu images.
* VM images are updated monthly.
@@ -44,28 +44,28 @@ In addition to the regular release cadence, Microsoft applies hotfixes if vulner
## Microsoft-managed container images
-[Base docker images](https://github.com/Azure/AzureML-Containers) that Microsoft maintains for Azure AI Studio get security patches frequently to address newly discovered vulnerabilities.
+[Base docker images](https://github.com/Azure/AzureML-Containers) that Microsoft maintains for Azure AI Foundry get security patches frequently to address newly discovered vulnerabilities.
Microsoft releases updates for supported images every two weeks to address vulnerabilities. As a commitment, we aim to have no vulnerabilities older than 30 days in the latest version of supported images.
Patched images are released under a new immutable tag and an updated `:latest` tag. Using the `:latest` tag or pinning to a particular image version might be a tradeoff between security and environment reproducibility for your machine learning job.
## Managing environments and container images
-In Azure AI Studio, Docker images are used to provide a runtime environment for [prompt flow deployments](../how-to/flow-deploy.md). The images are built from a base image that Azure AI Studio provides.
+In Azure AI Foundry portal, Docker images are used to provide a runtime environment for [prompt flow deployments](../how-to/flow-deploy.md). The images are built from a base image that Azure AI Foundry provides.
Although Microsoft patches base images with each release, whether you use the latest image might be tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
By default, dependencies are layered on top of base images when you're building an image. After you install more dependencies on top of the Microsoft-provided images, vulnerability management becomes your responsibility.
-Associated with your AI Studio hub is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it when deployment is triggered for the corresponding environment.
+Associated with your AI Foundry hub is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it when deployment is triggered for the corresponding environment.
The hub doesn't delete any image from your container registry. You're responsible for evaluating the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](/azure/defender-for-cloud/defender-for-container-registries-usage) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate remediation responses](/azure/defender-for-cloud/workflow-automation).
## Vulnerability management on compute hosts
-Managed compute nodes in Azure AI Studio use Microsoft-managed OS VM images. When you provision a node, it pulls the latest updated VM image. This behavior applies to compute instance, serverless compute cluster, and managed inference compute options.
+Managed compute nodes in Azure AI Foundry portal use Microsoft-managed OS VM images. When you provision a node, it pulls the latest updated VM image. This behavior applies to compute instance, serverless compute cluster, and managed inference compute options.
Although OS VM images are regularly patched, Microsoft doesn't actively scan compute nodes for vulnerabilities while they're in use. For an extra layer of protection, consider network isolation of your computes.
@@ -116,5 +116,5 @@ Compute nodes are automatically upgraded to the latest VM image version when tha
## Next steps
-* [Azure AI Studio hubs](ai-resources.md)
+* [Azure AI Foundry hubs](ai-resources.md)
* [Create and manage compute instances](../how-to/create-manage-compute.md)
Summary
{
"modification_type": "minor update",
"modification_title": "脆弱性管理に関するプラットフォーム名の更新"
}
Explanation
This change updates the platform name in the "vulnerability-management.md" document from "Azure AI Studio" to "Azure AI Foundry". The change brings the information about managing security vulnerabilities in line with the current context.
The specific changes are:
- The document's description and headings now use "Azure AI Foundry", so users get accurate information about the latest platform.
- The sections on Microsoft-managed VM and container images also use "Azure AI Foundry", and the frequency of security patches and updates on this platform is clarified.
- The explanations of Docker image management and environment maintenance are likewise updated to "Azure AI Foundry", keeping the related information consistent.
As a result, the whole document consistently refers to the latest platform, supporting users' understanding of vulnerability management and encouraging operations based on up-to-date platform information.
articles/ai-studio/faq.yml
Diff
@@ -19,31 +19,31 @@ sections:
- name: General questions
questions:
- question: |
- Who is Azure AI Studio intended for?
+ Who is Azure AI Foundry intended for?
answer: |
- Azure AI Studio is intended for AI software developers - including cloud architects and technical decision-makers who want to create generative AI applications and custom copilot experiences.
+ Azure AI Foundry is intended for AI software developers - including cloud architects and technical decision-makers who want to create generative AI applications and custom copilot experiences.
- question: |
- How can customers access Azure AI Studio?
+ How can customers access Azure AI Foundry?
answer: |
- Customers can explore Azure AI Studio unauthenticated - including its cutting-edge AI capabilities. When they're ready to begin using templates, tools, and the robust model catalog to stitch together their own AI solutions, they'll be prompted to register or sign in to their Azure account. During preview, there's no extra charge for using Azure AI Studio. When deploying solutions, Azure AI services, Azure Machine Learning, and other Azure resources used inside of Azure AI Studio will be billed at their existing rates. Pricing is subject to change when Azure AI Studio is generally available.
+ Customers can explore Azure AI Foundry unauthenticated - including its cutting-edge AI capabilities. When they're ready to begin using templates, tools, and the robust model catalog to stitch together their own AI solutions, they'll be prompted to register or sign in to their Azure account. During preview, there's no extra charge for using Azure AI Foundry. When deploying solutions, Azure AI services, Azure Machine Learning, and other Azure resources used inside of Azure AI Foundry will be billed at their existing rates. Pricing is subject to change when Azure AI Foundry is generally available.
- question: |
- What regions is Azure AI Studio available in?
+ What regions is Azure AI Foundry available in?
answer: |
- Azure AI Studio is available in most regions where Azure AI services are available. For more information, see [region support for Azure AI Studio](reference/region-support.md).
+ Azure AI Foundry is available in most regions where Azure AI services are available. For more information, see [region support for Azure AI Foundry](reference/region-support.md).
- question: |
- Can I integrate Microsoft Fabric data into Azure AI Studio?
+ Can I integrate Microsoft Fabric data into Azure AI Foundry?
answer: |
- Yes. Azure AI Studio supports seamless access to data in the Microsoft Fabric datastore Lakehouse without having to move or copy data. Data from Amazon S3 bucket can be accessed via Fabric shortcuts in Azure AI Studio directly from Amazon S3 location without having to create a copy of the data in Azure.
+ Yes. Azure AI Foundry supports seamless access to data in the Microsoft Fabric datastore Lakehouse without having to move or copy data. Data from Amazon S3 bucket can be accessed via Fabric shortcuts in Azure AI Foundry portal directly from Amazon S3 location without having to create a copy of the data in Azure.
- question: |
- Can I use models other than ChatGPT in Azure AI Studio?
+ Can I use models other than ChatGPT in Azure AI Foundry portal?
answer: |
- Yes. Azure AI Studio includes a robust and growing catalog of frontier and open-source models from OpenAI, Hugging Face, Meta and more that can be applied over your data. You can even compare models by task using open-source datasets and evaluate the model with your own test data to see how the pre-trained model would perform to fit your own use case.
+ Yes. Azure AI Foundry includes a robust and growing catalog of frontier and open-source models from OpenAI, Hugging Face, Meta and more that can be applied over your data. You can even compare models by task using open-source datasets and evaluate the model with your own test data to see how the pre-trained model would perform to fit your own use case.
- question: |
- How is the playground in Azure AI Studio different from the Azure OpenAI Studio playground?
+ How is the playground in Azure AI Foundry portal different from the Azure OpenAI Studio playground?
answer: |
- The playground experiences in both Azure AI Studio and Azure OpenAI Studio are similar; however, Azure AI Studio provides playground experience for models in addition to those provisioned via Azure OpenAI Studio.
+ The playground experiences in both Azure AI Foundry and Azure OpenAI Studio are similar; however, Azure AI Foundry provides playground experience for models in addition to those provisioned via Azure OpenAI Studio.
- question: |
- Will there be multiple varying model benchmarks in Azure AI Studio based on individual projects and data sources?
+ Will there be multiple varying model benchmarks in Azure AI Foundry portal based on individual projects and data sources?
answer: |
In the model benchmarks view, customers can view varying model benchmarks published by Azure AI.
- question: |
@@ -57,7 +57,7 @@ sections:
- question: |
What is the billing model for Model-as-a-Service (MaaS)?
answer: |
- Azure AI Studio offers paygo inference APIs and hosted fine-tuning for [Llama 2 family models](how-to/deploy-models-llama.md). During preview, there's no extra charge for Azure AI Studio outside of typical AI services and other Azure resource charges.
+ Azure AI Foundry offers paygo inference APIs and hosted fine-tuning for [Llama 2 family models](how-to/deploy-models-llama.md). During preview, there's no extra charge for Azure AI Foundry outside of typical AI services and other Azure resource charges.
- question: |
Can all models be secured with content filtering?
answer: |
@@ -86,5 +86,5 @@ sections:
additionalContent: |
## Next steps
- - [Azure AI Studio](what-is-ai-studio.md)
+ - [Azure AI Foundry](what-is-ai-studio.md)
- [Plan and manage costs](./how-to/costs-plan-manage.md)
Summary
{
"modification_type": "minor update",
"modification_title": "FAQにおけるプラットフォーム名の更新"
}
Explanation
This change updates every occurrence of the platform name in the FAQ document "faq.yml" from "Azure AI Studio" to "Azure AI Foundry", so that users see the latest information and form an accurate understanding.
The main changes are:
- Each question and its answer is revised to reference "Azure AI Foundry", so the information users are looking for is correctly associated with the new platform.
- The question wording itself is updated, so details about how to use the platform, its features, and data-integration options are stated under the new name.
With this update, the FAQ section provides accurate, consistent information and becomes easier to use. Unifying the platform name also strengthens brand recognition and accuracy.
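The PR applies the same mechanical substitution across many files. As a minimal sketch, a rename of this kind could be scripted rather than edited by hand; the demo file, path, and text below are illustrative, and GNU sed's `-i` (in-place edit with no backup suffix) is assumed:

```shell
# Create a throwaway doc containing the old platform name.
mkdir -p /tmp/rename-demo
printf 'description: Learn how to use Azure AI Studio.\n' > /tmp/rename-demo/faq.yml

# Apply the "Azure AI Studio" -> "Azure AI Foundry" substitution in place,
# mirroring the per-line edits shown in the diffs above.
sed -i 's/Azure AI Studio/Azure AI Foundry/g' /tmp/rename-demo/faq.yml
cat /tmp/rename-demo/faq.yml
```

In a real repository, the substitution would typically be driven by `grep -rl` over the affected `articles/` tree, with manual review of cases like "Azure AI Foundry portal" where the replacement is not one-to-one.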
articles/ai-studio/how-to/access-on-premises-resources.md
Diff
@@ -1,31 +1,31 @@
---
title: How to access on-premises resources
titleSuffix: Azure AI Foundry
-description: Learn how to configure an Azure AI Studio managed network to securely allow access to your on-premises resources.
+description: Learn how to configure an Azure AI Foundry managed network to securely allow access to your on-premises resources.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
ms.date: 10/24/2024
ms.reviewer: meerakurup
ms.author: larryfr
author: Blackmist
-# Customer intent: As an admin, I want to allow my developers to securely access on-premises resources from Azure AI Studio.
+# Customer intent: As an admin, I want to allow my developers to securely access on-premises resources from Azure AI Foundry.
---
-# Access on-premises resources from your Azure AI Studio's managed network (preview)
+# Access on-premises resources from your Azure AI Foundry's managed network (preview)
-To access your non-Azure resources located in a different virtual network or located entirely on-premises from your Azure AI Studio's managed virtual network, an Application Gateway must be configured. Through this Application Gateway, full end to end access can be configured to your resources.
+To access your non-Azure resources located in a different virtual network or located entirely on-premises from your Azure AI Foundry's managed virtual network, an Application Gateway must be configured. Through this Application Gateway, full end to end access can be configured to your resources.
Azure Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. Azure Machine Learning supports using an application gateway to securely communicate with non-Azure resources. For more on Application Gateway, see [What is Azure Application Gateway](/azure/application-gateway/overview).
-To access on-premises or custom virtual network resources from the managed virtual network, you configure an Application Gateway on your Azure virtual network. The application gateway is used for inbound access to the AI Studio's hub. Once configured, you then create a private endpoint from the Azure AI Studio hub's managed virtual network to the Application Gateway. With the private endpoint, the full end to end path is secured and not routed through the Internet.
+To access on-premises or custom virtual network resources from the managed virtual network, you configure an Application Gateway on your Azure virtual network. The application gateway is used for inbound access to the AI Foundry portal's hub. Once configured, you then create a private endpoint from the Azure AI Foundry hub's managed virtual network to the Application Gateway. With the private endpoint, the full end to end path is secured and not routed through the Internet.
:::image type="content" source="../media/how-to/network/ai-studio-app-gateway.png" alt-text="Diagram of a managed network using Application Gateway to communicate with on-premises resources." lightbox="../media/how-to/network/ai-studio-app-gateway.png":::
## Prerequisites
- Read the [How an application gateway works](/azure/application-gateway/how-application-gateway-works) article to understand how the Application Gateway can secure the connection to your non-Azure resources.
-- Set up your Azure AI Studio hub's managed virtual network and select your isolation mode, either Allow Internet Outbound or Allow Only Approved Outbound. For more information, see [Managed virtual network isolation](configure-managed-network.md).
+- Set up your Azure AI Foundry hub's managed virtual network and select your isolation mode, either Allow Internet Outbound or Allow Only Approved Outbound. For more information, see [Managed virtual network isolation](configure-managed-network.md).
- Get the private HTTP(S) endpoint of the resource to access.
## Supported resources
@@ -42,7 +42,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
1. From the __Basics__ tab:
- Ensure your Application Gateway is in the same region as the selected Azure Virtual Network.
- - Azure AI Studio only supports IPv4 for Application Gateway.
+ - Azure AI Foundry only supports IPv4 for Application Gateway.
- With your Azure Virtual Network, select one dedicated subnet for your Application Gateway. No other resources can be deployed in this subnet.
1. From the __Frontends__ tab, Application Gateway doesn’t support private Frontend IP address only so Public IP addresses need to be selected or a new one created. Private IP addresses for the resources that the gateway connects to can be added within the range of the subnet you selected on the Basics tab.
@@ -68,7 +68,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
## Configure private link
-1. Now that your Application Gateway’s front-end IP and backend pools are created, you can now configure the private endpoint from the managed virtual network to your Application Gateway. in the [Azure portal](https://portal.azure.com), navigate to your Azure AI Studio hub's __Networking__ tab. Select __Workspace managed outbound access__, __+ Add user-defined outbound rules__.
+1. Now that your Application Gateway’s front-end IP and backend pools are created, you can now configure the private endpoint from the managed virtual network to your Application Gateway. in the [Azure portal](https://portal.azure.com), navigate to your Azure AI Foundry hub's __Networking__ tab. Select __Workspace managed outbound access__, __+ Add user-defined outbound rules__.
1. In the __Workspace Outbound rules__ form, select the following to create your private endpoint:
- Rule name: Provide a name for your private endpoint to Application Gateway.
@@ -77,7 +77,7 @@ Follow the [Quickstart: Direct web traffic using the portal](/azure/application-
- Resource Type: `Microsoft.Network/applicationGateways`
- Resource name: The name of your Application Gateway resource.
- Sub resource: `appGwPrivateFrontendIpIPv4`
- - FQDNs: These FQDNs are the aliases that you want to use inside the Azure AI Studio. They're resolved to the managed private endpoint’s private IP address targeting Application Gateway. You might include multiple FQDNs depending on how many resources you would like to connect to with the Application Gateway.
+ - FQDNs: These FQDNs are the aliases that you want to use inside the Azure AI Foundry portal. They're resolved to the managed private endpoint’s private IP address targeting Application Gateway. You might include multiple FQDNs depending on how many resources you would like to connect to with the Application Gateway.
> [!NOTE]
> - If you are using HTTPS listener with certificate uploaded, make sure the FQDN alias matches with the certificate's CN (Common Name) or SAN (Subject Alternative Name) otherwise HTTPS call will fail with SNI (Server Name Indication).
Summary
{
"modification_type": "minor update",
"modification_title": "オンプレミスリソースへのアクセスガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in "access-on-premises-resources.md" from "Azure AI Studio" to "Azure AI Foundry", ensuring the related procedures and guidance apply correctly to the renamed platform.
The main changes are:
- Descriptions and headings now read "Azure AI Foundry", so users see the platform's current name.
- The Application Gateway material is updated accordingly, so users can correctly understand and set up this security configuration.
- The customer-intent comment is likewise updated so the platform name matches.
With this update, the document presents consistent, current platform information, which aids comprehension. Keeping the information accurate also supports smooth onboarding and operation.
articles/ai-studio/how-to/autoscale.md
Diff
@@ -1,7 +1,7 @@
---
title: Autoscale Azure AI limits
titleSuffix: Azure AI Foundry
-description: Learn how you can manage and increase quotas for resources with Azure AI Studio.
+description: Learn how you can manage and increase quotas for resources with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -18,7 +18,7 @@ author: Blackmist
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-This article provides guidance for how you can manage and increase quotas for resources with Azure AI Studio.
+This article provides guidance for how you can manage and increase quotas for resources with Azure AI Foundry.
## Overview
Summary
{
"modification_type": "minor update",
"modification_title": "オートスケールに関するガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in "autoscale.md" from "Azure AI Studio" to "Azure AI Foundry", ensuring the autoscale guidance applies to the renamed platform.
The main changes are:
- The document description now covers managing resources and increasing quotas with "Azure AI Foundry".
- The article introduction also uses the new platform name, so the relevant content is reflected accurately.
With this update, users get consistent information under the new platform name and can avoid confusion. Accurate naming deepens understanding and makes resource management more efficient.
articles/ai-studio/how-to/benchmark-model-in-catalog.md
Diff
@@ -1,33 +1,34 @@
---
-title: How to use model benchmarking in Azure AI Studio
+title: How to use model benchmarking in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: In this article, you learn to compare benchmarks across models and datasets, using the model benchmarks tool in Azure AI Studio.
+description: In this article, you learn to compare benchmarks across models and datasets, using the model benchmarks tool in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ai-learning-hub
+ - ignite-2024
ms.topic: how-to
ms.date: 11/06/2024
ms.reviewer: jcioffi
ms.author: mopeakande
author: msakande
---
-# How to benchmark models in Azure AI Studio
+# How to benchmark models in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn to compare benchmarks across models and datasets, using the model benchmarks tool in Azure AI Studio. You also learn to analyze benchmarking results and to perform benchmarking with your data. Benchmarking can help you make informed decisions about which models meet the requirements for your particular use case or application.
+In this article, you learn to compare benchmarks across models and datasets, using the model benchmarks tool in Azure AI Foundry portal. You also learn to analyze benchmarking results and to perform benchmarking with your data. Benchmarking can help you make informed decisions about which models meet the requirements for your particular use case or application.
## Prerequisites
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio project](create-projects.md).
+- An [Azure AI Foundry project](create-projects.md).
## Access model benchmarks through the model catalog
-Azure AI supports model benchmarking for select models that are popular and most frequently used. Follow these steps to use detailed benchmarking results to compare and select models directly from the AI Studio model catalog:
+Azure AI supports model benchmarking for select models that are popular and most frequently used. Follow these steps to use detailed benchmarking results to compare and select models directly from the AI Foundry model catalog:
[!INCLUDE [open-catalog](../includes/open-catalog.md)]
@@ -60,7 +61,7 @@ When you're in the "Benchmarks" tab for a specific model, you can gather extensi
:::image type="content" source="../media/how-to/model-benchmarks/gpt4o-benchmark-tab-expand.png" alt-text="Screenshot showing benchmarks tab for gpt-4o." lightbox="../media/how-to/model-benchmarks/gpt4o-benchmark-tab-expand.png":::
-By default, AI Studio displays an average index across various metrics and datasets to provide a high-level overview of model performance.
+By default, AI Foundry displays an average index across various metrics and datasets to provide a high-level overview of model performance.
To access benchmark results for a specific metric and dataset:
@@ -84,6 +85,6 @@ The previous sections showed the benchmark results calculated by Microsoft, usin
## Related content
-- [Model benchmarks in Azure AI Studio](../concepts/model-benchmarks.md)
-- [How to evaluate generative AI apps with Azure AI Studio](evaluate-generative-ai-app.md)
-- [How to view evaluation results in Azure AI Studio](evaluate-results.md)
+- [Model benchmarks in Azure AI Foundry portal](../concepts/model-benchmarks.md)
+- [How to evaluate generative AI apps with Azure AI Foundry](evaluate-generative-ai-app.md)
+- [How to view evaluation results in Azure AI Foundry portal](evaluate-results.md)
Summary
{
"modification_type": "minor update",
"modification_title": "モデルベンチマークガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in "benchmark-model-in-catalog.md" from "Azure AI Studio" to "Azure AI Foundry portal", so the information about the model benchmarks tool applies accurately to the renamed platform.
The main changes are:
- The title and description now use "Azure AI Foundry portal", bringing the platform name up to date.
- "Azure AI Studio project" becomes "Azure AI Foundry project", emphasizing the correct name to use.
- The discussion of model benchmark results likewise references "Azure AI Foundry", keeping platform terminology consistent.
With this update, users get correct information about the new platform and can clearly understand the steps and prerequisites for benchmarking models. This avoids confusion and promotes effective use of the system.
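Beyond the rename, this file's diff also adds the `ignite-2024` tag to the `ms.custom` list in the YAML front matter. A minimal sketch of verifying that tagging, with an illustrative (not full) front-matter snippet and a hypothetical temp path:

```shell
# Write a front-matter fragment like the one in the diff above.
cat > /tmp/benchmark-front-matter.md <<'EOF'
---
title: How to use model benchmarking in Azure AI Foundry portal
ms.custom:
  - ai-learning-hub
  - ignite-2024
---
EOF

# Count occurrences of the event tag; a bulk audit could run this
# check across all changed files to confirm the tag landed once each.
grep -c 'ignite-2024' /tmp/benchmark-front-matter.md
```

Such a check is useful after large metadata sweeps, where a tag accidentally added twice (or missed) is easy to overlook in review.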
articles/ai-studio/how-to/built-in-policy-model-deployment.md
Diff
@@ -1,7 +1,7 @@
---
title: Control AI model deployment with built-in policies
titleSuffix: Azure AI Foundry
-description: "Learn how to use built-in Azure policies to control what managed AI Services (MaaS) and Model-as-a-Platform (MaaP) AI models can be deployed in Azure AI Studio."
+description: "Learn how to use built-in Azure policies to control what managed AI Services (MaaS) and Model-as-a-Platform (MaaP) AI models can be deployed in Azure AI Foundry portal."
author: Blackmist
ms.author: larryfr
ms.service: azure-ai-studio
@@ -12,9 +12,9 @@ ms.date: 10/25/2024
---
-# Control AI model deployment with built-in policies in Azure AI Studio
+# Control AI model deployment with built-in policies in Azure AI Foundry portal
-Azure Policy provides built-in policy definitions that help you govern the deployment of AI models in Managed AI Services (MaaS) and Model-as-a-Platform (MaaP). You can use these policies to control what models your developers can deploy in Azure AI Studio.
+Azure Policy provides built-in policy definitions that help you govern the deployment of AI models in Managed AI Services (MaaS) and Model-as-a-Platform (MaaP). You can use these policies to control what models your developers can deploy in Azure AI Foundry portal.
## Prerequisites
@@ -45,7 +45,7 @@ Azure Policy provides built-in policy definitions that help you govern the deplo
To get the model asset ID strings and model publishers' name use the following steps:
- 1. Go to the [Azure AI Studio model catalog](model-catalog-overview.md).
+ 1. Go to the [Azure AI Foundry model catalog](model-catalog-overview.md).
1. For each model you want to allow, select the model to view the details. In the model detail information, copy the **Model ID** value. For example, the value might look like `azureml://registries/azure-openai/models/gpt-35-turbo/versions/3` for GPT-3.5-Turbo model. The provided names are also *Collections* in model catalog. For example, the publisher for "Meta-Llama-3.1-70B-Instruct" model is Meta.
@@ -85,4 +85,4 @@ To update an existing policy assignment with new models, follow these steps:
## Related content
- [Azure Policy overview](/azure/governance/policy/overview)
-- [Azure AI Studio model catalog](model-catalog-overview.md)
+- [Azure AI Foundry model catalog](model-catalog-overview.md)
Summary
{
"modification_type": "minor update",
"modification_title": "AIモデルデプロイメントに関するガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in "built-in-policy-model-deployment.md" from "Azure AI Studio" to "Azure AI Foundry portal", so the information about AI model deployment matches the renamed platform.
The main changes are:
- The title and description now use "Azure AI Foundry portal", correcting the platform name to the latest one.
- "Azure AI Studio model catalog" becomes "Azure AI Foundry model catalog", keeping the terminology consistent for users.
With this update, users get accurate information about the new platform, and the steps for controlling AI model deployment with built-in policies become clear. Accurate information helps administrators and developers use resources more effectively and strengthens governance across the system.
articles/ai-studio/how-to/concept-data-privacy.md
Diff
@@ -1,5 +1,5 @@
---
-title: Data, privacy, and security for use of models through the model catalog in AI Studio
+title: Data, privacy, and security for use of models through the model catalog in AI Foundry portal
titleSuffix: Azure AI Foundry
description: Get details about how data that customers provide is processed, used, and stored when a user deploys a model from the model catalog.
manager: scottpolly
@@ -12,7 +12,7 @@ ms.author: scottpolly
author: s-polly
#Customer intent: As a data scientist, I want to learn about data privacy and security for use of models in the model catalog.
---
-# Data, privacy, and security for use of models through the model catalog in AI Studio
+# Data, privacy, and security for use of models through the model catalog in AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
@@ -21,19 +21,19 @@ This article describes how the data that you provide is processed, used, and sto
> [!IMPORTANT]
> For information about responsible AI in Azure OpenAI and AI services, see [Responsible use of AI](../../ai-services/responsible-use-of-ai-overview.md?context=/azure/ai-studio/context/context).
-## What data is processed for models deployed in Azure AI Studio?
+## What data is processed for models deployed in Azure AI Foundry portal?
-When you deploy models in Azure AI Studio, the following types of data are processed to provide the service:
+When you deploy models in Azure AI Foundry portal, the following types of data are processed to provide the service:
* **Prompts and generated content**. A user submits a prompt, and the model generates content (output) via the operations that the model supports. Prompts might include content added via retrieval-augmented generation (RAG), metaprompts, or other functionality included in an application.
* **Uploaded data**. For models that support fine-tuning, customers can upload their data to a [datastore](../concepts/connections.md#connections-to-datastores) for fine-tuning.
## Generation of inferencing outputs with managed compute
-Deploying models to managed compute deploys model weights to dedicated virtual machines and exposes a REST API for real-time inference. To learn more about deploying models from the model catalog to managed compute, see [Model catalog and collections in Azure AI Studio](model-catalog-overview.md).
+Deploying models to managed compute deploys model weights to dedicated virtual machines and exposes a REST API for real-time inference. To learn more about deploying models from the model catalog to managed compute, see [Model catalog and collections in Azure AI Foundry portal](model-catalog-overview.md).
-You manage the infrastructure for these managed compute resources. Azure data, privacy, and security commitments apply. To learn more about Azure compliance offerings applicable to Azure AI Studio, see the [Azure Compliance Offerings page](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
+You manage the infrastructure for these managed compute resources. Azure data, privacy, and security commitments apply. To learn more about Azure compliance offerings applicable to Azure AI Foundry, see the [Azure Compliance Offerings page](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
Although containers for **Curated by Azure AI** models are scanned for vulnerabilities that could exfiltrate data, not all models available through the model catalog are scanned. To reduce the risk of data exfiltration, you can [help protect your deployment by using virtual networks](configure-managed-network.md). You can also use [Azure Policy](../../ai-services/policy-reference.md) to regulate the models that your users can deploy.
@@ -43,7 +43,7 @@ Although containers for **Curated by Azure AI** models are scanned for vulnerabi
When you deploy a model from the model catalog (base or fine-tuned) by using serverless APIs with pay-as-you-go billing for inferencing, an API is provisioned. The API gives you access to the model that the Azure Machine Learning service hosts and manages. Learn more about serverless APIs in [Model catalog and collections](./model-catalog-overview.md).
-The model processes your input prompts and generates outputs based on its functionality, as described in the model details. Your use of the model (along with the provider's accountability for the model and its outputs) is subject to the license terms for the model. Microsoft provides and manages the hosting infrastructure and API endpoint. The models hosted in this *model as a service* (MaaS) scenario are subject to Azure data, privacy, and security commitments. [Learn more about Azure compliance offerings applicable to Azure AI Studio](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
+The model processes your input prompts and generates outputs based on its functionality, as described in the model details. Your use of the model (along with the provider's accountability for the model and its outputs) is subject to the license terms for the model. Microsoft provides and manages the hosting infrastructure and API endpoint. The models hosted in this *model as a service* (MaaS) scenario are subject to Azure data, privacy, and security commitments. [Learn more about Azure compliance offerings applicable to Azure AI Foundry](https://servicetrust.microsoft.com/DocumentPage/7adf2d9e-d7b5-4e71-bad8-713e6a183cf3).
Microsoft acts as the data processor for prompts and outputs sent to, and generated by, a model deployed for pay-as-you-go inferencing (MaaS). Microsoft doesn't share these prompts and outputs with the model provider. Also, Microsoft doesn't use these prompts and outputs to train or improve Microsoft models, the model provider's models, or any third party's models.
Summary
{
"modification_type": "minor update",
"modification_title": "データプライバシーに関するガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in "concept-data-privacy.md" from "Azure AI Studio" to "Azure AI Foundry portal". The goal is for the information on data privacy, security, and use of the model catalog to match the renamed platform.
The main changes are:
- The title and headings explicitly say "Azure AI Foundry portal", emphasizing the correct platform name.
- The section on data processed for deployments also uses "Azure AI Foundry portal", keeping the platform references consistent.
- The material on deploying from the model catalog and on Azure's data, privacy, and security commitments is updated to the new name as well.
With this update, users get accurate information for the current platform and a deeper understanding of data privacy and security when operating AI models. It also reiterates that Azure's data and privacy commitments continue to apply, which helps maintain user trust.
articles/ai-studio/how-to/configure-managed-network.md
Diff
@@ -1,21 +1,21 @@
---
-title: How to configure a managed network for Azure AI Studio hubs
+title: How to configure a managed network for Azure AI Foundry hubs
titleSuffix: Azure AI Foundry
-description: Learn how to configure a managed network for Azure AI Studio hubs.
+description: Learn how to configure a managed network for Azure AI Foundry hubs.
manager: scottpolly
ms.service: azure-ai-studio
-ms.custom: ignite-2023, build-2024, devx-track-azurecli
+ms.custom: ignite-2023, build-2024, devx-track-azurecli, ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
-ms.reviewer: meerakurup
+ms.reviewer: meerakurup
ms.author: larryfr
author: Blackmist
zone_pivot_groups: azure-ai-studio-sdk-cli
---
-# How to configure a managed network for Azure AI Studio hubs
+# How to configure a managed network for Azure AI Foundry hubs
-We have two network isolation aspects. One is the network isolation to access an Azure AI Studio hub. Another is the network isolation of computing resources for both your hub and project (such as compute instance, serverless and managed online endpoint.) This document explains the latter highlighted in the diagram. You can use hub built-in network isolation to protect your computing resources.
+We have two network isolation aspects. One is the network isolation to access an Azure AI Foundry hub. Another is the network isolation of computing resources for both your hub and project (such as compute instance, serverless and managed online endpoint.) This document explains the latter highlighted in the diagram. You can use hub built-in network isolation to protect your computing resources.
:::image type="content" source="../media/how-to/network/azure-ai-network-outbound.svg" alt-text="Diagram of hub network isolation." lightbox="../media/how-to/network/azure-ai-network-outbound.png":::
@@ -135,7 +135,7 @@ Before following the steps in this article, make sure you have the following pre
## Limitations
-* Azure AI Studio currently doesn't support bringing your own virtual network, it only supports managed virtual network isolation.
+* Azure AI Foundry currently doesn't support bringing your own virtual network, it only supports managed virtual network isolation.
* Once you enable managed virtual network isolation of your Azure AI, you can't disable it.
* Managed virtual network uses private endpoint connection to access your private resources. You can't have a private endpoint and a service endpoint at the same time for your Azure resources, such as a storage account. We recommend using private endpoints in all scenarios.
* The managed virtual network is deleted when the Azure AI is deleted.
@@ -154,7 +154,7 @@ Before following the steps in this article, make sure you have the following pre
* __Create a new hub__:
- 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI Studio from Create a resource menu.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI Foundry from Create a resource menu.
1. Select **+ New Azure AI**.
1. Provide the required information on the __Basics__ tab.
1. From the __Networking__ tab, select __Private with Internet Outbound__.
@@ -337,7 +337,7 @@ To configure a managed virtual network that allows internet outbound communicati
* __Create a new hub__:
- 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI Studio from Create a resource menu.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI Foundry from Create a resource menu.
1. Select **+ New Azure AI**.
1. Provide the required information on the __Basics__ tab.
1. From the __Networking__ tab, select __Private with Approved Outbound__.
@@ -821,7 +821,7 @@ pytorch.org
Private endpoints are currently supported for the following Azure services:
-* AI Studio hub
+* AI Foundry hub
* Azure AI Search
* Azure AI services
* Azure API Management
@@ -846,7 +846,7 @@ When you create a private endpoint, you provide the _resource type_ and _subreso
When you create a private endpoint for hub dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the hub.
-A private endpoint is automatically created for a connection if the target resource is an Azure resource listed previously. A valid target ID is expected for the private endpoint. A valid target ID for the connection can be the Azure Resource Manager ID of a parent resource. The target ID is also expected in the target of the connection or in `metadata.resourceid`. For more on connections, see [How to add a new connection in Azure AI Studio](connections-add.md).
+A private endpoint is automatically created for a connection if the target resource is an Azure resource listed previously. A valid target ID is expected for the private endpoint. A valid target ID for the connection can be the Azure Resource Manager ID of a parent resource. The target ID is also expected in the target of the connection or in `metadata.resourceid`. For more on connections, see [How to add a new connection in Azure AI Foundry portal](connections-add.md).
## Select an Azure Firewall version for allowed only approved outbound (Preview)
@@ -902,4 +902,4 @@ The hub managed virtual network feature is free. However, you're charged for the
## Related content
-- [Create AI Studio hub and project using the SDK](./develop/create-hub-project-sdk.md)
+- [Create AI Foundry hub and project using the SDK](./develop/create-hub-project-sdk.md)
Summary
{
"modification_type": "minor update",
"modification_title": "マネージドネットワーク設定ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in "configure-managed-network.md" from "Azure AI Studio" to "Azure AI Foundry" for hubs, so the network configuration guidance for using Azure AI models corresponds accurately to the renamed platform.
The main changes are:
- The title, description, and headings now read "Azure AI Foundry", reflecting the correct platform name.
- The sections explaining network isolation are updated to the "Azure AI Foundry" naming, making it easier for readers to understand the current platform.
- The documented steps and limitations are clarified, in particular the options new users select when creating resources through the Azure portal.
With this update, users get accurate, consistent information about the new platform and can better follow the managed network setup process in Azure AI Foundry. Updating the platform name also provides guidance in the current context, improving users' operational efficiency.
articles/ai-studio/how-to/configure-private-link.md
Diff
@@ -1,29 +1,29 @@
---
-title: How to configure a private link for an Azure AI Studio hub
+title: How to configure a private link for an Azure AI Foundry hub
titleSuffix: Azure AI Foundry
-description: Learn how to configure a private link for Azure AI Studio hubs. A private link is used to secure communication with the Azure AI Studio hub.
+description: Learn how to configure a private link for Azure AI Foundry hubs. A private link is used to secure communication with the Azure AI Foundry hub.
manager: scottpolly
ms.service: azure-ai-studio
-ms.custom: ignite-2023, devx-track-azurecli, build-2024
+ms.custom: ignite-2023, devx-track-azurecli, build-2024, ignite-2024
ms.topic: how-to
ms.date: 5/21/2024
-ms.reviewer: meerakurup
+ms.reviewer: meerakurup
ms.author: larryfr
author: Blackmist
# Customer intent: As an admin, I want to configure a private link for hub so that I can secure my hubs.
---
-# How to configure a private link for Azure AI Studio hubs
+# How to configure a private link for Azure AI Foundry hubs
-We have two network isolation aspects. One is the network isolation to access an Azure AI Studio hub. Another is the network isolation of computing resources in your hub and projects such as compute instances, serverless, and managed online endpoints. This article explains the former highlighted in the diagram. You can use private link to establish the private connection to your hub and its default resources. This article is for Azure AI Studio (hub and projects). For information on Azure AI services, see the [Azure AI services documentation](/azure/ai-services/cognitive-services-virtual-networks).
+We have two network isolation aspects. One is the network isolation to access an Azure AI Foundry hub. Another is the network isolation of computing resources in your hub and projects such as compute instances, serverless, and managed online endpoints. This article explains the former highlighted in the diagram. You can use private link to establish the private connection to your hub and its default resources. This article is for Azure AI Foundry (hub and projects). For information on Azure AI services, see the [Azure AI services documentation](/azure/ai-services/cognitive-services-virtual-networks).
-:::image type="content" source="../media/how-to/network/azure-ai-network-inbound.svg" alt-text="Diagram of AI Studio hub network isolation." lightbox="../media/how-to/network/azure-ai-network-inbound.png":::
+:::image type="content" source="../media/how-to/network/azure-ai-network-inbound.svg" alt-text="Diagram of AI Foundry hub network isolation." lightbox="../media/how-to/network/azure-ai-network-inbound.png":::
You get several hub default resources in your resource group. You need to configure following network isolation configurations.
- Disable public network access of hub default resources such as Azure Storage, Azure Key Vault, and Azure Container Registry.
- Establish private endpoint connection to hub default resources. You need to have both a blob and file private endpoint for the default storage account.
-- [Managed identity configurations](#managed-identity-configuration) to allow hubs access to your storage account if it's private.
+- If your storage account is private, [assign roles](#private-storage-configuration) to allow access.
## Prerequisites
@@ -41,7 +41,7 @@ Use one of the following methods to create a hub with a private endpoint. Each o
# [Azure portal](#tab/azure-portal)
-1. From the [Azure portal](https://portal.azure.com), go to Azure AI Studio and choose __+ New Azure AI__.
+1. From the [Azure portal](https://portal.azure.com), go to Azure AI Foundry and choose __+ New Azure AI__.
1. Choose network isolation mode in __Networking__ tab.
1. Scroll down to __Workspace Inbound access__ and choose __+ Add__.
1. Input required fields. When selecting the __Region__, select the same region as your virtual network.
@@ -234,15 +234,28 @@ az extension add --name ml
---
-## Managed identity configuration
-A manged identity configuration is required if you make your storage account private. Our services need to read/write data in your private storage account using [Allow Azure services on the trusted services list to access this storage account](/azure/storage/common/storage-network-security#grant-access-to-trusted-azure-services) with following managed identity configurations. Enable the system assigned managed identity of Azure AI Service and Azure AI Search, then configure role-based access control for each managed identity.
+## Private storage configuration
-| Role | Managed Identity | Resource | Purpose | Reference |
-|--|--|--|--|--|
-| `Storage File Data Privileged Contributor` | Azure AI Studio project | Storage Account | Read/Write prompt flow data. | [Prompt flow doc](/azure/machine-learning/prompt-flow/how-to-secure-prompt-flow#secure-prompt-flow-with-workspace-managed-virtual-network) |
-| `Storage Blob Data Contributor` | Azure AI Service | Storage Account | Read from input container, write to pre-process result to output container. | [Azure OpenAI Doc](../../ai-services/openai/how-to/managed-identity.md) |
-| `Storage Blob Data Contributor` | Azure AI Search | Storage Account | Read blob and write knowledge store | [Search doc](/azure/search/search-howto-managed-identities-data-sources). |
+If your storage account is private (uses a private endpoint to communicate with your project), you perform the following steps:
+
+1. Our services need to read/write data in your private storage account using [Allow Azure services on the trusted services list to access this storage account](/azure/storage/common/storage-network-security#grant-access-to-trusted-azure-services) with following managed identity configurations. Enable the system assigned managed identity of Azure AI Service and Azure AI Search, then configure role-based access control for each managed identity.
+
+ | Role | Managed Identity | Resource | Purpose | Reference |
+ |--|--|--|--|--|
+ | `Reader` | Azure AI Foundry project | Private endpoint of the storage account | Read data from the private storage account. |
+ | `Storage File Data Privileged Contributor` | Azure AI Foundry project | Storage Account | Read/Write prompt flow data. | [Prompt flow doc](/azure/machine-learning/prompt-flow/how-to-secure-prompt-flow#secure-prompt-flow-with-workspace-managed-virtual-network) |
+ | `Storage Blob Data Contributor` | Azure AI Service | Storage Account | Read from input container, write to preprocess result to output container. | [Azure OpenAI Doc](../../ai-services/openai/how-to/managed-identity.md) |
+ | `Storage Blob Data Contributor` | Azure AI Search | Storage Account | Read blob and write knowledge store | [Search doc](/azure/search/search-howto-managed-identities-data-sources). |
+
+ > [!TIP]
+ > Your storage account may have multiple private endpoints. You need to assign the `Reader` role to each private endpoint.
+
+1. Assign the `Storage Blob Data reader` role to your developers. This role allows them to read data from the storage account.
+
+1. Verify that the project's connection to the storage account uses Microsoft Entra ID for authentication. To view the connection information, go to the __Management center__, select __Connected resources__, and then select the storage account connections. If the credential type isn't Entra ID, select the pencil icon to update the connection and set the __Authentication method__ to __Microsoft Entra ID__.
+
+For information on securing playground chat, see [Securely use playground chat](secure-data-playground.md).
## Custom DNS configuration
@@ -265,7 +278,7 @@ If you need to configure custom DNS server without DNS forwarding, use the follo
> * Compute instances can be accessed only from within the virtual network.
> * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
-* `<instance-name>.<region>.instances.azureml.ms` - Only used by the `az ml compute connect-ssh` command to connect to computers in a managed virtual network. Not needed if you are not using a managed network or SSH connections.
+* `<instance-name>.<region>.instances.azureml.ms` - Only used by the `az ml compute connect-ssh` command to connect to computers in a managed virtual network. Not needed if you aren't using a managed network or SSH connections.
* `<managed online endpoint name>.<region>.inference.ml.azure.com` - Used by managed online endpoints
@@ -278,7 +291,7 @@ To check AI-PROJECT-GUID, go to the Azure portal, select your project, settings,
## Next steps
-- [Create an Azure AI Studio project](create-projects.md)
-- [Learn more about Azure AI Studio](../what-is-ai-studio.md)
-- [Learn more about Azure AI Studio hubs](../concepts/ai-resources.md)
+- [Create an Azure AI Foundry project](create-projects.md)
+- [Learn more about Azure AI Foundry](../what-is-ai-studio.md)
+- [Learn more about Azure AI Foundry hubs](../concepts/ai-resources.md)
- [Troubleshoot secure connectivity to a project](troubleshoot-secure-connection-project.md)
Summary
{
"modification_type": "minor update",
"modification_title": "プライベートリンク設定ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry hub" throughout the configure-private-link.md document, so that the private link guidance accurately reflects the new platform.
The main changes are:
- The document's title, description, and headings now use "Azure AI Foundry," putting the correct platform name front and center.
- The steps for securing communication between a hub and its resources over a private link have been updated, with the differences between the two platforms made clear.
- The prerequisites and configuration steps have been strengthened; in particular, new instructions were added for private storage configuration and role assignments.
- Several sections explain the benefits of private link and the corresponding requirements in more detail, helping users complete the configuration correctly.
This update lets the document provide accurate information for the new platform and makes it easier to understand how to configure a private link for Azure AI Foundry. Clear steps for securing communication also improve user confidence and operational efficiency.
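The role assignments listed in the table above can also be scripted with the Azure CLI. A minimal sketch, assuming hypothetical resource and account names and that the Search service's system-assigned managed identity is already enabled:

```shell
# Hypothetical resource names; replace with your own.
STORAGE_ID=$(az storage account show \
  --name mystorageaccount --resource-group my-rg --query id -o tsv)

# Principal ID of the Azure AI Search resource's system-assigned identity.
SEARCH_PRINCIPAL=$(az search service show \
  --name my-search --resource-group my-rg \
  --query identity.principalId -o tsv)

# Grant the Search identity blob read/write access, as in the table above.
az role assignment create \
  --assignee "$SEARCH_PRINCIPAL" \
  --role "Storage Blob Data Contributor" \
  --scope "$STORAGE_ID"

# Developers need read access to blob data.
az role assignment create \
  --assignee "dev@example.com" \
  --role "Storage Blob Data Reader" \
  --scope "$STORAGE_ID"
```

If the storage account has multiple private endpoints, repeat the `Reader` assignment for each endpoint resource, as the tip in the diff notes.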
articles/ai-studio/how-to/connections-add.md
Diff
@@ -1,31 +1,32 @@
---
-title: How to add a new connection in Azure AI Studio
+title: How to add a new connection in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to add a new connection in Azure AI Studio.
+description: Learn how to add a new connection in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: larryfr
ms.author: larryfr
author: Blackmist
-# Customer Intent: As an admin or developer, I want to understand how to add new connections in Azure AI Studio.
+# Customer Intent: As an admin or developer, I want to understand how to add new connections in Azure AI Foundry portal.
---
-# How to add a new connection in Azure AI Studio
+# How to add a new connection in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn how to add a new connection in Azure AI Studio.
+In this article, you learn how to add a new connection in Azure AI Foundry portal.
-Connections are a way to authenticate and consume both Microsoft and other resources within your Azure AI Studio projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI Studio hub.
+Connections are a way to authenticate and consume both Microsoft and other resources within your Azure AI Foundry projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI Foundry hub.
## Connection types
-Here's a table of some of the available connection types in Azure AI Studio. The __Preview__ column indicates connection types that are currently in preview.
+Here's a table of some of the available connection types in Azure AI Foundry portal. The __Preview__ column indicates connection types that are currently in preview.
| Service connection type | Preview | Description |
| --- |:---:| --- |
@@ -35,15 +36,15 @@ Here's a table of some of the available connection types in Azure AI Studio. The
| Azure Content Safety | | Azure AI Content Safety is a service that detects potentially unsafe content in text, images, and videos. |
| Azure OpenAI || Azure OpenAI is a service that provides access to OpenAI's models including the GPT-4o, GPT-4o mini, GPT-4, GPT-4 Turbo with Vision, GPT-3.5-Turbo, DALLE-3 and Embeddings model series with the security and enterprise capabilities of Azure. |
| Serverless Model | ✓ | Serverless Model connections allow you to [serverless API deployment](deploy-models-serverless.md). |
-| Microsoft OneLake | | Microsoft OneLake provides open access to all of your Fabric items through Azure Data Lake Storage (ADLS) Gen2 APIs and SDKs.<br/><br/>In Azure AI Studio you can set up a connection to your OneLake data using a OneLake URI. You can find the information that Azure AI Studio requires to construct a __OneLake Artifact URL__ (workspace and item GUIDs) in the URL on the Fabric portal. For information about the URI syntax, see [Connecting to Microsoft OneLake](/fabric/onelake/onelake-access-api). |
+| Microsoft OneLake | | Microsoft OneLake provides open access to all of your Fabric items through Azure Data Lake Storage (ADLS) Gen2 APIs and SDKs.<br/><br/>In Azure AI Foundry portal you can set up a connection to your OneLake data using a OneLake URI. You can find the information that Azure AI Foundry requires to construct a __OneLake Artifact URL__ (workspace and item GUIDs) in the URL on the Fabric portal. For information about the URI syntax, see [Connecting to Microsoft OneLake](/fabric/onelake/onelake-access-api). |
| API key || API Key connections handle authentication to your specified target on an individual basis. For example, you can use this connection with the SerpApi tool in prompt flow. |
| Custom || Custom connections allow you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets that or cases where you wouldn't need a credential to access. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you have to manage authentication on your own. |
## Create a new connection
Follow these steps to create a new connection that's only available for the current project.
-1. Go to your project in Azure AI Studio. If you don't have a project, [create a new project](./create-projects.md).
+1. Go to your project in Azure AI Foundry portal. If you don't have a project, [create a new project](./create-projects.md).
1. Select __Management center__ from the bottom left navigation.
1. Select __Connected resources__ from the __Project__ section.
1. Select __+ New connection__ from the __Connected resources__ section.
@@ -75,11 +76,11 @@ If your hub is configured for [network isolation](configure-managed-network.md),
To create an outbound private endpoint rule to the data source, use the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI Studio hub.
+1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI Foundry hub.
1. Select __Networking__, then __Workspace managed outbound access__.
1. To add an outbound rule, select __Add user-defined outbound rules__. From the __Workspace outbound rules__ sidebar, provide the following information:
- - __Rule name__: A name for the rule. The name must be unique for the AI Studio hub.
+ - __Rule name__: A name for the rule. The name must be unique for the AI Foundry hub.
- __Destination type__: Private Endpoint.
- __Subscription__: The subscription that contains the Azure resource you want to connect to.
- __Resource type__: `Microsoft.Storage/storageAccounts`. This resource provider is used for Azure Storage, Azure Data Lake Storage Gen2, and Microsoft OneLake.
@@ -92,6 +93,6 @@ To create an outbound private endpoint rule to the data source, use the followin
## Related content
-- [Connections in Azure AI Studio](../concepts/connections.md)
+- [Connections in Azure AI Foundry portal](../concepts/connections.md)
- [How to create vector indexes](../how-to/index-add.md)
- [How to configure a managed network](configure-managed-network.md)
Summary
{
"modification_type": "minor update",
"modification_title": "接続追加ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry portal" throughout the connections-add.md document so that it accurately reflects the current platform.
The main changes are:
- The document's title, description, and section headings now consistently use "Azure AI Foundry."
- The discussion of connections has been updated, with clear information about the service connection types available in Azure AI Foundry projects.
- The steps for creating a new connection are spelled out, with an emphasis on working in the Azure AI Foundry portal.
- Differences from the former Azure AI Studio are called out so that users can continue working seamlessly in the new environment.
- Links to related content have been updated for the new platform name, making it easier to reach the right information.
With this update, users get accurate, consistent steps for adding connections in the current Azure AI Foundry environment, which improves efficiency, reduces confusion from the platform rename, and keeps them working from correct information.
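The outbound private endpoint rule described in the portal steps above can also be created from the CLI. A sketch under assumptions: it relies on the `ml` extension's `outbound-rule set` command and placeholder resource names, so check `az ml workspace outbound-rule set --help` in your extension version before relying on the exact flags:

```shell
# Install the Azure ML CLI extension if it isn't present.
az extension add --name ml

# Resource ID of the storage account to reach through the private endpoint.
STORAGE_ID=$(az storage account show \
  --name mystorageaccount --resource-group my-rg --query id -o tsv)

# Create a user-defined outbound private endpoint rule on the hub.
# Rule names must be unique within the hub.
az ml workspace outbound-rule set \
  --resource-group my-rg \
  --workspace-name my-hub \
  --rule my-storage-rule \
  --type private_endpoint \
  --service-resource-id "$STORAGE_ID" \
  --subresource-target blob
```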
articles/ai-studio/how-to/costs-plan-manage.md
Diff
@@ -1,29 +1,30 @@
---
-title: Plan and manage costs for Azure AI Studio
+title: Plan and manage costs for Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to plan for and manage costs for Azure AI Studio by using cost analysis in the Azure portal.
+description: Learn how to plan for and manage costs for Azure AI Foundry by using cost analysis in the Azure portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 11/19/2024
ms.reviewer: siarora
ms.author: larryfr
author: Blackmist
---
-# Plan and manage costs for Azure AI Studio
+# Plan and manage costs for Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-This article describes how you plan for and manage costs for Azure AI Studio. First, you use the Azure pricing calculator to help plan for Azure AI Studio costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs.
+This article describes how you plan for and manage costs for Azure AI Foundry. First, you use the Azure pricing calculator to help plan for Azure AI Foundry costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs.
> [!TIP]
-> Azure AI Studio does not have a specific page in the Azure pricing calculator. Azure AI Studio is composed of several other Azure services, some of which are optional. This article provides information on using the pricing calculator to estimate costs for these services.
+> Azure AI Foundry does not have a specific page in the Azure pricing calculator. Azure AI Foundry is composed of several other Azure services, some of which are optional. This article provides information on using the pricing calculator to estimate costs for these services.
-You use Azure AI services in Azure AI Studio. Costs for Azure AI services are only a portion of the monthly costs in your Azure bill. You're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
+You use Azure AI services in Azure AI Foundry portal. Costs for Azure AI services are only a portion of the monthly costs in your Azure bill. You're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
## Prerequisites
@@ -48,18 +49,18 @@ Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculato
As you add new resources to your project, return to this calculator and add the same resource here to update your cost estimates.
-### Costs that typically accrue with Azure AI Studio
+### Costs that typically accrue with Azure AI Foundry
When you create resources for a hub, resources for other Azure services are also created. They are:
| Service pricing page | Description with example use cases |
| --- | --- |
-| [Azure AI services](https://azure.microsoft.com/pricing/details/cognitive-services/) | You pay to use services such as Azure OpenAI, Speech, Content Safety, Vision, Document Intelligence, and Language. Costs vary for each service and for some features within each service. For more information about provisioning of Azure AI services, see [Azure AI Studio hubs](../concepts/ai-resources.md#azure-ai-services-api-access-keys).|
+| [Azure AI services](https://azure.microsoft.com/pricing/details/cognitive-services/) | You pay to use services such as Azure OpenAI, Speech, Content Safety, Vision, Document Intelligence, and Language. Costs vary for each service and for some features within each service. For more information about provisioning of Azure AI services, see [Azure AI Foundry hubs](../concepts/ai-resources.md#azure-ai-services-api-access-keys).|
| [Azure AI Search](https://azure.microsoft.com/pricing/details/search/) | An example use case is to store data in a [vector search index](./index-add.md). |
-| [Azure Machine Learning](https://azure.microsoft.com/pricing/details/machine-learning/) | Compute instances are needed to run [Visual Studio Code (Web or Desktop)](./develop/vscode.md) and [prompt flow](./prompt-flow.md) via Azure AI Studio.<br/><br/>When you create a compute instance, the virtual machine (VM) stays on so it's available for your work.<br/><br/>Enable idle shutdown to save on cost when the VM is idle for a specified time period.<br/><br/>Or set up a schedule to automatically start and stop the compute instance to save cost when you aren't planning to use it. |
+| [Azure Machine Learning](https://azure.microsoft.com/pricing/details/machine-learning/) | Compute instances are needed to run [Visual Studio Code (Web or Desktop)](./develop/vscode.md) and [prompt flow](./prompt-flow.md) via Azure AI Foundry.<br/><br/>When you create a compute instance, the virtual machine (VM) stays on so it's available for your work.<br/><br/>Enable idle shutdown to save on cost when the VM is idle for a specified time period.<br/><br/>Or set up a schedule to automatically start and stop the compute instance to save cost when you aren't planning to use it. |
| [Azure Virtual Machine](https://azure.microsoft.com/pricing/details/virtual-machines/) | Azure Virtual Machines gives you the flexibility of virtualization for a wide range of computing solutions with support for Linux, Windows Server, SQL Server, Oracle, IBM, SAP, and more. |
| [Azure Container Registry Basic account](https://azure.microsoft.com/pricing/details/container-registry) | Provides storage of private Docker container images, enabling fast, scalable retrieval, and network-close deployment of container workloads on Azure. |
-| [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs/) | Can be used to store [Azure AI Studio project](./create-projects.md) files. |
+| [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs/) | Can be used to store [Azure AI Foundry project](./create-projects.md) files. |
| [Key Vault](https://azure.microsoft.com/pricing/details/key-vault/) | A key vault for storing secrets. |
| [Azure Private Link](https://azure.microsoft.com/pricing/details/private-link/) | Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) over a private endpoint in your virtual network. |
@@ -90,27 +91,27 @@ After you delete a hub in the Azure portal or with Azure CLI, the following reso
## Monitor costs
-As you use Azure AI Studio with hubs, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). You can see the incurred costs in [cost analysis](/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+As you use Azure AI Foundry with hubs, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). You can see the incurred costs in [cost analysis](/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
When you use cost analysis, you view hub costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might occur. If you create budgets, you can also easily see where they're exceeded.
-### Monitor Azure AI Studio project costs
+### Monitor Azure AI Foundry project costs
-You can get to cost analysis from the [Azure portal](https://portal.azure.com). You can also get to cost analysis from the [Azure AI Studio](https://ai.azure.com).
+You can get to cost analysis from the [Azure portal](https://portal.azure.com). You can also get to cost analysis from the [Azure AI Foundry](https://ai.azure.com).
> [!IMPORTANT]
-> Your AI Studio project costs are only a subset of your overall application or solution costs. You need to monitor costs for all Azure resources used in your application or solution. For more information, see [Azure AI Studio hubs](../concepts/ai-resources.md).
+> Your AI Foundry project costs are only a subset of your overall application or solution costs. You need to monitor costs for all Azure resources used in your application or solution. For more information, see [Azure AI Foundry hubs](../concepts/ai-resources.md).
-For the examples in this section, assume that all Azure AI Studio resources are in the same resource group. But you can have resources in different resource groups. For example, your Azure AI Search resource might be in a different resource group than your project.
+For the examples in this section, assume that all Azure AI Foundry resources are in the same resource group. But you can have resources in different resource groups. For example, your Azure AI Search resource might be in a different resource group than your project.
Here's an example of how to monitor costs for a project. The costs are used as an example only. Your costs vary depending on the services that you use and the amount of usage.
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
1. Select your project and then select **Management center** from the left menu.
1. Under the **Project** heading, select **Overview**.
1. Select **View cost for resources** from the **Total cost** section. The [Azure portal](https://portal.azure.com) opens to the resource group for your project.
- :::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Studio portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::
+ :::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Foundry portal portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::
1. Expand the **Resource** column to see the costs for each service that's underlying your [project](../concepts/ai-resources.md#organize-work-in-projects-for-customization). But this view doesn't include costs for all resources that you use in a project.
@@ -126,7 +127,7 @@ Here's an example of how to monitor costs for a project. The costs are used as a
In this example:
- The resource group name is **rg-contosoairesource**.
- - The total cost for all resources and services in the resource group is **$222.97**. In this example, $222.97 is the total cost for your application or solution that you're building with Azure AI Studio. Again, this example assumes that all Azure AI Studio resources are in the same resource group. But you can have resources in different resource groups.
+ - The total cost for all resources and services in the resource group is **$222.97**. In this example, $222.97 is the total cost for your application or solution that you're building with Azure AI Foundry. Again, this example assumes that all Azure AI Foundry resources are in the same resource group. But you can have resources in different resource groups.
- The project name is **contoso-outdoor-proj**.
- The costs that are limited to resources and services in the [project](../concepts/ai-resources.md#organize-work-in-projects-for-customization) total **$212.06**.
@@ -139,7 +140,7 @@ Here's an example of how to monitor costs for a project. The costs are used as a
You can also view resource group costs directly from the Azure portal. To do so:
1. Sign in to [Azure portal](https://portal.azure.com).
1. Select **Resource groups**.
-1. Find and select the resource group that contains your Azure AI Studio resources.
+1. Find and select the resource group that contains your Azure AI Foundry resources.
1. From the left navigation menu, select **Cost analysis**.
:::image type="content" source="../media/cost-management/project-costs/costs-per-resource-group.png" alt-text="Screenshot of the Azure portal cost analysis at the resource group level." lightbox="../media/cost-management/project-costs/costs-per-resource-group.png":::
@@ -148,7 +149,7 @@ For more information, see the [Azure pricing calculator](https://azure.microsoft
### Monitor costs for models offered through the Azure Marketplace
-Models deployed as a service using pay-as-you-go are offered through the Azure Marketplace. The model publishers might apply different costs depending on the offering. Each project in Azure AI Studio has its own subscription with the offering, which allows you to monitor the costs and the consumption happening on that project. Use [Microsoft Cost Management](https://azure.microsoft.com/products/cost-management) to monitor the costs:
+Models deployed as a service using pay-as-you-go are offered through the Azure Marketplace. The model publishers might apply different costs depending on the offering. Each project in Azure AI Foundry portal has its own subscription with the offering, which allows you to monitor the costs and the consumption happening on that project. Use [Microsoft Cost Management](https://azure.microsoft.com/products/cost-management) to monitor the costs:
1. Sign in to [Azure portal](https://portal.azure.com).
Summary
{
"modification_type": "minor update",
"modification_title": "コスト管理ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry" throughout the costs-plan-manage.md document. The goal of the revision is to present up-to-date platform information consistently.
The main changes are:
- The platform name used in the document's title and description is now "Azure AI Foundry," giving readers the correct context.
- The guidance on the Azure pricing calculator and cost planning has been adjusted for Azure AI Foundry, with the steps for estimating costs clearly explained.
- The resource cost information has been carried over from Azure AI Studio, with the details for the new platform emphasized.
- Resource names and pricing entries in some tables have been updated, describing concrete cost management practices for Azure AI Foundry.
- The procedures for monitoring costs and checking per-resource spend have been revised to match the platform.
This update makes it easier for users to follow accurate, current cost management steps for Azure AI Foundry and to track resource usage smoothly. It also reduces confusion from the platform rename, so costs can be managed on the basis of correct information.
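The resource-group cost view described above can also be approximated from the CLI. A rough sketch under assumptions: it uses the legacy `az consumption usage list` command and the example resource group name from the document, and field names such as `pretaxCost` can differ by API version:

```shell
# List usage for a billing window and filter to the resource group that
# holds the hub's resources (example name taken from the document).
# The subscription is whichever one is active via `az account set`.
az consumption usage list \
  --start-date 2024-11-01 --end-date 2024-11-30 \
  --query "[?contains(instanceId, 'rg-contosoairesource')].{resource:instanceName, cost:pretaxCost}" \
  -o table
```

As the document notes, this only covers resources billed inside that resource group; services placed in other resource groups must be queried separately.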
articles/ai-studio/how-to/create-azure-ai-hub-template.md
Diff
@@ -1,7 +1,7 @@
---
-title: Create an Azure AI Studio hub using a Bicep template
+title: Create an Azure AI Foundry hub using a Bicep template
titleSuffix: Azure AI Foundry
-description: Use a Microsoft Bicep template to create a new Azure AI Studio hub.
+description: Use a Microsoft Bicep template to create a new Azure AI Foundry hub.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom: devx-track-arm-template, devx-track-bicep, build-2024
@@ -13,16 +13,16 @@ author: Blackmist
#Customer intent: As a DevOps person, I need to automate or customize the creation of a hub by using templates.
---
-# Use an Azure Resource Manager template to create an Azure AI Studio hub
+# Use an Azure Resource Manager template to create an Azure AI Foundry hub
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Use a [Microsoft Bicep](/azure/azure-resource-manager/bicep/overview) template to create a hub for Azure AI Studio. A template makes it easy to create resources as a single, coordinated operation. A Bicep template is a text document that defines the resources that are needed for a deployment. It might also specify deployment parameters. Parameters are used to provide input values when using the template.
+Use a [Microsoft Bicep](/azure/azure-resource-manager/bicep/overview) template to create a hub for Azure AI Foundry. A template makes it easy to create resources as a single, coordinated operation. A Bicep template is a text document that defines the resources that are needed for a deployment. It might also specify deployment parameters. Parameters are used to provide input values when using the template.
The template used in this article can be found at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/aistudio-basics](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/aistudio-basics). Both the source `main.bicep` file and the compiled Azure Resource Manager template (`main.json`) file are available. This template creates the following resources:
- An Azure resource group (if one doesn't already exist)
-- An Azure AI Studio hub
+- An Azure AI Foundry hub
- Azure Storage Account
- Azure Key Vault
- Azure Container Registry
@@ -125,6 +125,6 @@ To run the Bicep template, use the following commands from the `aistudio-basics`
## Next steps
-- [Create an Azure AI Studio project](create-projects.md)
-- [Learn more about Azure AI Studio](../what-is-ai-studio.md)
+- [Create an Azure AI Foundry project](create-projects.md)
+- [Learn more about Azure AI Foundry](../what-is-ai-studio.md)
- [Learn more about hubs](../concepts/ai-resources.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the Azure AI hub template guide"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry" in create-azure-ai-hub-template.md, so the document reflects the current platform branding and users can work from accurate information.
The main changes are:
- The document title and description now use the "Azure AI Foundry" name.
- The explanation of creating a hub with an Azure Resource Manager template has been aligned with "Azure AI Foundry".
- The description of the Bicep template used in the article now refers to the correct platform name.
- The details of resource creation have been switched from Azure AI Studio to Azure AI Foundry.
- The next-step links have been updated for the new platform, pointing clearly to Azure AI Foundry content.
With this update, users have accurate, current instructions for creating an Azure AI Foundry hub with a Bicep template. It should prevent confusion caused by the rename and encourage use of the correct resources.
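The article points to the `aistudio-basics` quickstart template rather than reproducing it. As a rough illustration only (not the actual quickstart file — the resource type reflects the fact that a hub is an Azure Machine Learning workspace of kind `Hub`, but the API version, parameter names, and property set here are assumptions), a minimal hub definition in Bicep might look like:

```bicep
// Minimal sketch of an Azure AI Foundry hub in Bicep.
// Assumption: a hub is a Microsoft.MachineLearningServices/workspaces
// resource with kind 'Hub'. The API version and the set of properties
// shown are illustrative, not copied from the aistudio-basics template.
param location string = resourceGroup().location
param hubName string = 'my-ai-foundry-hub'
param storageAccountId string // resource ID of an existing Storage Account
param keyVaultId string       // resource ID of an existing Key Vault

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: hubName
  location: location
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: hubName
    storageAccount: storageAccountId
    keyVault: keyVaultId
  }
}
```

A template like this would typically be deployed with `az deployment group create --resource-group <rg> --template-file main.bicep`; for real use, start from the quickstart template linked in the article, which also provisions the dependent resources.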
articles/ai-studio/how-to/create-azure-ai-resource.md
Diff
@@ -1,42 +1,43 @@
---
-title: How to create and manage an Azure AI Studio hub
+title: How to create and manage an Azure AI Foundry hub
titleSuffix: Azure AI Foundry
-description: Learn how to create and manage an Azure AI Studio hub from the Azure portal or from the AI Studio. Your developers can then create projects from the hub.
+description: Learn how to create and manage an Azure AI Foundry hub from the Azure portal or from the AI Foundry portal. Your developers can then create projects from the hub.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: deeikele
ms.author: larryfr
author: Blackmist
-# Customer Intent: As an admin, I need to create and manage an Azure AI Studio hub so that my team can use it to create projects for collaboration.
+# Customer Intent: As an admin, I need to create and manage an Azure AI Foundry hub so that my team can use it to create projects for collaboration.
---
-# How to create and manage an Azure AI Studio hub
+# How to create and manage an Azure AI Foundry hub
-In AI Studio, hubs provide the environment for a team to collaborate and organize work, and help you as a team lead or IT admin centrally set up security settings and govern usage and spend. You can create and manage a hub from the Azure portal or from the AI Studio, and then your developers can create projects from the hub.
+In AI Foundry portal, hubs provide the environment for a team to collaborate and organize work, and help you as a team lead or IT admin centrally set up security settings and govern usage and spend. You can create and manage a hub from the Azure portal or from the AI Foundry portal, and then your developers can create projects from the hub.
-In this article, you learn how to create and manage a hub in AI Studio with the default settings so you can get started quickly. Do you need to customize security or the dependent resources of your hub? Then use [Azure portal](create-secure-ai-hub.md) or [template options](create-azure-ai-hub-template.md).
+In this article, you learn how to create and manage a hub in AI Foundry portal with the default settings so you can get started quickly. Do you need to customize security or the dependent resources of your hub? Then use [Azure portal](create-secure-ai-hub.md) or [template options](create-azure-ai-hub-template.md).
> [!TIP]
-> If you're an individual developer and not an admin, dev lead, or part of a larger effort that requires a hub, you can create a project directly from the AI Studio without creating a hub first. For more information, see [Create a project](create-projects.md).
+> If you're an individual developer and not an admin, dev lead, or part of a larger effort that requires a hub, you can create a project directly from the AI Foundry portal without creating a hub first. For more information, see [Create a project](create-projects.md).
>
-> If you're an admin or dev lead and would like to create your Azure AI Studio hub using a template, see the articles on using [Bicep](create-azure-ai-hub-template.md) or [Terraform](create-hub-terraform.md).
+> If you're an admin or dev lead and would like to create your Azure AI Foundry hub using a template, see the articles on using [Bicep](create-azure-ai-hub-template.md) or [Terraform](create-hub-terraform.md).
-## Create a hub in AI Studio
+## Create a hub in AI Foundry portal
-To create a new hub, you need either the Owner or Contributor role on the resource group or on an existing hub. If you're unable to create a hub due to permissions, reach out to your administrator. If your organization is using [Azure Policy](/azure/governance/policy/overview), don't create the resource in AI Studio. Create the hub [in the Azure portal](#create-a-secure-hub-in-the-azure-portal) instead.
+To create a new hub, you need either the Owner or Contributor role on the resource group or on an existing hub. If you're unable to create a hub due to permissions, reach out to your administrator. If your organization is using [Azure Policy](/azure/governance/policy/overview), don't create the resource in AI Foundry portal. Create the hub [in the Azure portal](#create-a-secure-hub-in-the-azure-portal) instead.
-[!INCLUDE [Create Azure AI Studio hub](../includes/create-hub.md)]
+[!INCLUDE [Create Azure AI Foundry hub](../includes/create-hub.md)]
## Create a secure hub in the Azure portal
-If your organization is using [Azure Policy](/azure/governance/policy/overview), set up a hub that meets your organization's requirements instead of using AI Studio for resource creation.
+If your organization is using [Azure Policy](/azure/governance/policy/overview), set up a hub that meets your organization's requirements instead of using AI Foundry for resource creation.
-1. From the Azure portal, search for `Azure AI Studio` and create a new hub by selecting **+ New Azure AI hub**
+1. From the Azure portal, search for `Azure AI Foundry` and create a new hub by selecting **+ New Azure AI hub**
1. Enter your hub name, subscription, resource group, and location details.
1. For **Azure AI services base models**, select an existing AI services resource or create a new one. Azure AI services include multiple API endpoints for Speech, Content Safety, and Azure OpenAI.
@@ -71,7 +72,7 @@ If your organization is using [Azure Policy](/azure/governance/policy/overview),
### Manage access control
-You can add and remove users from the Azure AI Studio management center. Both the hub and projects within the hub have a **Users** entry in the left-menu that allows you to add and remove users. When adding users, you can assign them built-in roles.
+You can add and remove users from the Azure AI Foundry portal management center. Both the hub and projects within the hub have a **Users** entry in the left-menu that allows you to add and remove users. When adding users, you can assign them built-in roles.
:::image type="content" source="../media/how-to/hubs/studio-user-management.png" alt-text="Screenshot of the users area of the management center for a hub." lightbox="../media/how-to/hubs/studio-user-management.png":::
@@ -98,7 +99,7 @@ At hub creation, select between the networking isolation modes: **Public**, **Pr
At hub creation in the Azure portal, creation of associated Azure AI services, Storage account, Key vault (optional), Application insights (optional), and Container registry (optional) is given. These resources are found on the Resources tab during creation.
-To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the public network access (PNA) flag is disabled when creating the private endpoint connection. For more about Azure AI services connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network.
+To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Foundry portal, create a private endpoint in your virtual network. Ensure the public network access (PNA) flag is disabled when creating the private endpoint connection. For more about Azure AI services connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network.
### Encryption
Projects that use the same hub, share their encryption configuration. Encryption mode can be set only at the time of hub creation between Microsoft-managed keys and Customer-managed keys.
@@ -151,7 +152,7 @@ az ml workspace update -n "myexamplehub" -g "{MY_RESOURCE_GROUP}" -a "APPLICATIO
### Choose how credentials are stored
-Select scenarios in AI Studio store credentials on your behalf. For example when you create a connection in AI Studio to access an Azure Storage account with stored account key, access Azure Container Registry with admin password, or when you create a compute instance with enabled SSH keys. No credentials are stored with connections when you choose Microsoft Entra ID identity-based authentication.
+Select scenarios in AI Foundry portal store credentials on your behalf. For example when you create a connection in AI Foundry portal to access an Azure Storage account with stored account key, access Azure Container Registry with admin password, or when you create a compute instance with enabled SSH keys. No credentials are stored with connections when you choose Microsoft Entra ID identity-based authentication.
You can choose where credentials are stored:
@@ -161,19 +162,19 @@ You can choose where credentials are stored:
After your hub is created, it is not possible to switch between Your Azure Key Vault and using a Microsoft-managed credential store.
-## Delete an Azure AI Studio hub
+## Delete an Azure AI Foundry hub
-To delete a hub from Azure AI Studio, select the hub and then select **Delete hub** from the **Hub properties** section of the page.
+To delete a hub from Azure AI Foundry, select the hub and then select **Delete hub** from the **Hub properties** section of the page.
:::image type="content" source="../media/how-to/hubs/studio-delete-hub.png" alt-text="Screenshot of the delete hub link in hub properties." lightbox="../media/how-to/hubs/studio-delete-hub.png":::
> [!NOTE]
> You can also delete the hub from the Azure portal.
-Deleting a hub deletes all associated projects. When a project is deleted, all nested endpoints for the project are also deleted. You can optionally delete connected resources; however, make sure that no other applications are using this connection. For example, another Azure AI Studio deployment might be using it.
+Deleting a hub deletes all associated projects. When a project is deleted, all nested endpoints for the project are also deleted. You can optionally delete connected resources; however, make sure that no other applications are using this connection. For example, another Azure AI Foundry deployment might be using it.
## Related content
- [Create a project](create-projects.md)
-- [Learn more about Azure AI Studio](../what-is-ai-studio.md)
+- [Learn more about Azure AI Foundry](../what-is-ai-studio.md)
- [Learn more about hubs](../concepts/ai-resources.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the Azure AI hub management guide"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry" in create-azure-ai-resource.md, so the document reflects the latest platform information and users can follow the procedures accurately.
The main changes are:
- The document title and description now use the "Azure AI Foundry" name.
- The steps for creating and managing a hub from the Azure portal or the AI Foundry portal have been aligned with the new platform.
- References to "AI Studio" throughout the document have been replaced with "AI Foundry", including the sections on access management, hub creation, user management, and connection setup.
- The guidance for individual developers creating a project directly from the AI Foundry portal has been reworded to match.
- The links for creating a hub with Bicep or Terraform have also been updated for the new platform.
With this update, users have clear, accurate information about creating and managing resources in Azure AI Foundry, which should keep their work running smoothly, reduce confusion from the platform change, and let them proceed on the basis of correct information.
articles/ai-studio/how-to/create-hub-terraform.md
Diff
@@ -1,6 +1,6 @@
---
-title: 'Use Terraform to create an Azure AI Studio hub'
-description: In this article, you create an Azure AI Studio hub, an Azure AI Studio project, an AI services resource, and more resources.
+title: 'Use Terraform to create an Azure AI Foundry hub'
+description: In this article, you create an Azure AI Foundry hub, an Azure AI Foundry project, an AI services resource, and more resources.
ms.topic: how-to
ms.date: 07/12/2024
titleSuffix: Azure AI Foundry
@@ -13,12 +13,12 @@ ms.custom: devx-track-terraform
content_well_notification:
- AI-contribution
ai-usage: ai-assisted
-#customer intent: As a Terraform user, I want to see how to create an Azure AI Studio hub and its associated resources.
+#customer intent: As a Terraform user, I want to see how to create an Azure AI Foundry hub and its associated resources.
---
-# Use Terraform to create an Azure AI Studio hub
+# Use Terraform to create an Azure AI Foundry hub
-In this article, you use Terraform to create an Azure AI Studio hub, a project, and AI services connection. A hub is a central place for data scientists and developers to collaborate on machine learning projects. It provides a shared, collaborative space to build, train, and deploy machine learning models. The hub is integrated with Azure Machine Learning and other Azure services, making it a comprehensive solution for machine learning tasks. The hub also allows you to manage and monitor your AI deployments, ensuring they're performing as expected.
+In this article, you use Terraform to create an Azure AI Foundry hub, a project, and AI services connection. A hub is a central place for data scientists and developers to collaborate on machine learning projects. It provides a shared, collaborative space to build, train, and deploy machine learning models. The hub is integrated with Azure Machine Learning and other Azure services, making it a comprehensive solution for machine learning tasks. The hub also allows you to manage and monitor your AI deployments, ensuring they're performing as expected.
[!INCLUDE [About Terraform](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)]
@@ -27,8 +27,8 @@ In this article, you use Terraform to create an Azure AI Studio hub, a project,
> * Set up a storage account
> * Establish a key vault
> * Configure AI services
-> * Build an AI Studio hub
-> * Develop an AI Studio project
+> * Build an AI Foundry hub
+> * Develop an AI Foundry project
> * Establish an AI services connection
## Prerequisites
@@ -131,5 +131,5 @@ In this article, you use Terraform to create an Azure AI Studio hub, a project,
## Next steps
> [!div class="nextstepaction"]
-> [See more articles about Azure AI Studio hub](/search/?terms=Azure%20ai%20hub%20and%20terraform)
+> [See more articles about Azure AI Foundry hub](/search/?terms=Azure%20ai%20hub%20and%20terraform)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the Azure AI hub creation guide"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry" in create-hub-terraform.md, so the document reflects the latest platform information and users can proceed with accurate information.
The main changes are:
- The document title and description now use the "Azure AI Foundry" name, so readers immediately associate the content with the new platform.
- The resources created with Terraform — the hub, a project, and the AI services connection — are now described in Azure AI Foundry terms.
- The role of the "Azure AI Foundry hub" is stated explicitly, emphasizing its importance for collaboration on AI projects.
- The prerequisites and next-steps sections have also been updated with the new platform name.
With this update, users have current, accurate steps for creating Azure AI Foundry resources with Terraform, preventing confusion from the platform change and keeping work grounded in the right information.
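The article links to the full Terraform configuration rather than inlining it. As a rough sketch only (the AzAPI-based approach mirrors common Azure quickstart samples, but the resource names, API version, and referenced resource IDs here are illustrative assumptions, and depending on the AzAPI provider version `body` may be a plain HCL object instead of `jsonencode`), a hub resource might look like:

```terraform
# Minimal sketch of an Azure AI Foundry hub in Terraform.
# Assumption: the hub is created via the AzAPI provider as a
# Microsoft.MachineLearningServices/workspaces resource of kind "Hub".
# The referenced resource group, storage account, and key vault are
# assumed to be defined elsewhere in the configuration.
resource "azapi_resource" "hub" {
  type      = "Microsoft.MachineLearningServices/workspaces@2024-04-01"
  name      = "my-ai-foundry-hub"
  location  = azurerm_resource_group.rg.location
  parent_id = azurerm_resource_group.rg.id

  identity {
    type = "SystemAssigned"
  }

  body = jsonencode({
    kind = "Hub"
    properties = {
      friendlyName   = "My AI Foundry hub"
      storageAccount = azurerm_storage_account.sa.id
      keyVault       = azurerm_key_vault.kv.id
    }
  })
}
```

For real use, start from the configuration the article links to, which also wires up the project and AI services connection.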
articles/ai-studio/how-to/create-manage-compute-session.md
Diff
@@ -1,31 +1,32 @@
---
title: Create and manage prompt flow compute sessions
titleSuffix: Azure AI Foundry
-description: In this article, learn how to create and manage compute sessions to run prompt flows in Azure AI Studio.
+description: In this article, learn how to create and manage compute sessions to run prompt flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/07/2024
ms.reviewer: lochen
ms.author: sgilley
author: sdgilley
-# customer intent: Learn how to create and manage prompt flow compute sessions in Azure AI Studio.
+# customer intent: Learn how to create and manage prompt flow compute sessions in Azure AI Foundry portal.
---
-# Create and manage prompt flow compute sessions in Azure AI Studio
+# Create and manage prompt flow compute sessions in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-You need a compute session to run [prompt flows](prompt-flow.md). Use Azure AI Studio to create and manage prompt flow compute sessions.
+You need a compute session to run [prompt flows](prompt-flow.md). Use Azure AI Foundry to create and manage prompt flow compute sessions.
-A prompt flow compute session has computing resources that are required for the application to run, including a Docker image that contains all necessary dependency packages. In addition to flow execution, Azure AI Studio uses the compute session to ensure the accuracy and functionality of the tools incorporated within the flow when you make updates to the prompt or code content.
+A prompt flow compute session has computing resources that are required for the application to run, including a Docker image that contains all necessary dependency packages. In addition to flow execution, Azure AI Foundry uses the compute session to ensure the accuracy and functionality of the tools incorporated within the flow when you make updates to the prompt or code content.
## Prerequisites
-Sign in to [Azure AI Studio](https://ai.azure.com) and select your project.
+Sign in to [Azure AI Foundry](https://ai.azure.com) and select your project.
## Create a compute session
@@ -35,15 +36,15 @@ When you start a compute session, you can use the default settings or customize
By default, the compute session uses the environment defined in `flow.dag.yaml` in the [flow folder](flow-develop.md#authoring-the-flow). It runs on a serverless compute with a virtual machine (VM) size for which you have sufficient quota in your workspace.
-1. Go to your project in Azure AI Studio.
+1. Go to your project in Azure AI Foundry portal.
1. From the left pane, select **Prompt flow** and then select the flow you want to run.
1. From the top toolbar of your prompt flow, select **Start compute session**.
### Start a compute session with advanced settings
In the advanced settings, you can select the compute type. You can choose between serverless compute and compute instance.
-1. Go to your project in Azure AI Studio.
+1. Go to your project in Azure AI Foundry portal.
1. From the left pane, select **Prompt flow** and then select the flow you want to run.
1. From the top toolbar of your prompt flow, select the dropdown arrow on the right side of the **Start compute session** button. Select **Start with advanced settings** to customize the compute session.
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the compute session management guide"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry" in create-manage-compute-session.md, so that users can work from accurate information.
The main changes are:
- The document title and description now refer to the "Azure AI Foundry portal", making the platform-specific steps clearer.
- The text describes how to use Azure AI Foundry to create and manage prompt flow compute sessions.
- References to "Azure AI Studio" in each step have been changed to "Azure AI Foundry", helping users adjust to the new platform name.
- The sign-in and session-start steps consistently use "Azure AI Foundry portal", so the platform in use is always unambiguous.
With this update, users have the right information to manage prompt flow compute sessions effectively in Azure AI Foundry, improving efficiency, preventing confusion from the platform change, and keeping work aligned with the correct procedures.
articles/ai-studio/how-to/create-manage-compute.md
Diff
@@ -1,30 +1,31 @@
---
-title: How to create and manage compute instances in Azure AI Studio
+title: How to create and manage compute instances in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to create and manage compute instances in Azure AI Studio.
+description: This article provides instructions on how to create and manage compute instances in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/07/2024
ms.reviewer: deeikele
ms.author: sgilley
author: sdgilley
---
-# How to create and manage compute instances in Azure AI Studio
+# How to create and manage compute instances in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn how to create a compute instance in Azure AI Studio. You can create a compute instance in the Azure AI Studio.
+In this article, you learn how to create a compute instance in Azure AI Foundry portal. You can create a compute instance in the Azure AI Foundry portal.
You need a compute instance to:
-- Use prompt flow in Azure AI Studio.
+- Use prompt flow in Azure AI Foundry portal.
- Create an index
-- Open Visual Studio Code (Web or Desktop) in Azure AI Studio.
+- Open Visual Studio Code (Web or Desktop) in Azure AI Foundry portal.
You can use the same compute instance for multiple scenarios and workflows. A compute instance can't be shared. It can only be used by a single assigned user. By default, it is assigned to the creator. You can change the assignment to a different user in the security step during creation.
@@ -36,9 +37,9 @@ Compute instances can run jobs securely in a virtual network environment, withou
## Create a compute instance
-To create a compute instance in Azure AI Studio:
+To create a compute instance in Azure AI Foundry portal:
-1. Sign in to [Azure AI Studio](https://ai.azure.com) and select your project. If you don't have a project already, first create one.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select your project. If you don't have a project already, first create one.
1. Select **Management center**
1. Under the **Hub** heading, select **Computes**.
1. Select **New** to create a new compute instance.
@@ -100,7 +101,7 @@ To configure idle shutdown for an existing compute instance follow these steps:
## Start or stop a compute instance
-You can start or stop a compute instance from the Azure AI Studio.
+You can start or stop a compute instance from the Azure AI Foundry portal.
1. From the left menu, select **Management center**.
1. Under the **Hub** heading, select **Computes**.
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the compute instance management guide"
}
Explanation
This change updates the platform name from "Azure AI Studio" to "Azure AI Foundry portal" in create-manage-compute.md, so that users can proceed on the basis of up-to-date information.
The main changes are:
- The document title and description clearly use the "Azure AI Foundry portal" name, so readers are not confused by the rename.
- References to "Azure AI Studio" have been changed to "Azure AI Foundry", keeping the procedure steps consistent.
- The need for a compute instance when using prompt flow is restated in terms of the Azure AI Foundry portal.
- The compute instance creation steps have been updated for the new platform, including the references to the **Management center** and **Hub** sections of the new portal.
With this update, users can follow accurate, appropriate steps when creating and managing compute instances in the Azure AI Foundry portal, which should improve efficiency and make the renamed procedures easy to follow.
articles/ai-studio/how-to/create-projects.md
Diff
@@ -1,45 +1,46 @@
---
-title: Create an Azure AI Studio project in Azure AI Studio
+title: Create an Azure AI Foundry project in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article describes how to create an Azure AI Studio project so you can work with generative AI in the cloud.
+description: This article describes how to create an Azure AI Foundry project so you can work with generative AI in the cloud.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 10/01/2024
ms.reviewer: deeikele
ms.author: sgilley
author: sdgilley
-# customer intent: As a developer, I want to create an Azure AI Studio project so I can work with generative AI.
+# customer intent: As a developer, I want to create an Azure AI Foundry project so I can work with generative AI.
---
-# Create a project in Azure AI Studio
+# Create a project in Azure AI Foundry portal
-This article describes how to create an Azure AI Studio project. A project is used to organize your work and save state while building customized AI apps.
+This article describes how to create an Azure AI Foundry project. A project is used to organize your work and save state while building customized AI apps.
-Projects are hosted by an Azure AI Studio hub. If your company has an administrative team that has created a hub for you, you can create a project from that hub. If you are working on your own, you can create a project and a default hub will automatically be created for you.
+Projects are hosted by an Azure AI Foundry hub. If your company has an administrative team that has created a hub for you, you can create a project from that hub. If you are working on your own, you can create a project and a default hub will automatically be created for you.
-For more information about the projects and hubs model, see [Azure AI Studio hubs](../concepts/ai-resources.md).
+For more information about the projects and hubs model, see [Azure AI Foundry hubs](../concepts/ai-resources.md).
## Prerequisites
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-- For Python SDK or CLI steps, an Azure AI Studio hub. If you don't have a hub, see [How to create and manage an Azure AI Studio hub](create-azure-ai-resource.md).
-- For Azure AI Studio, a hub isn't required. It is created for you when needed.
+- For Python SDK or CLI steps, an Azure AI Foundry hub. If you don't have a hub, see [How to create and manage an Azure AI Foundry hub](create-azure-ai-resource.md).
+- For Azure AI Foundry, a hub isn't required. It is created for you when needed.
## Create a project
Use the following tabs to select the method you plan to use to create a project:
-# [Azure AI Studio](#tab/ai-studio)
+# [Azure AI Foundry](#tab/ai-studio)
-[!INCLUDE [Create Azure AI Studio project](../includes/create-projects.md)]
+[!INCLUDE [Create Azure AI Foundry project](../includes/create-projects.md)]
# [Python SDK](#tab/python)
-The code in this section assumes you have an existing hub. If you don't have a hub, see [How to create and manage an Azure AI Studio hub](create-azure-ai-resource.md) to create one.
+The code in this section assumes you have an existing hub. If you don't have a hub, see [How to create and manage an Azure AI Foundry hub](create-azure-ai-resource.md) to create one.
[!INCLUDE [SDK setup](../includes/development-environment-config.md)]
@@ -62,7 +63,7 @@ The code in this section assumes you have an existing hub. If you don't have a
# [Azure CLI](#tab/azurecli)
-The code in this section assumes you have an existing hub. If you don't have a hub, see [How to create and manage an Azure AI Studio hub](create-azure-ai-resource.md) to create one.
+The code in this section assumes you have an existing hub. If you don't have a hub, see [How to create and manage an Azure AI Foundry hub](create-azure-ai-resource.md) to create one.
1. If you don't have the Azure CLI and machine learning extension installed, follow the steps in the [Install and set up the machine learning extension](/azure/machine-learning/how-to-configure-cli) article.
@@ -74,7 +75,7 @@ The code in this section assumes you have an existing hub. If you don't have a
For more information on authenticating, see [Authentication methods](/cli/azure/authenticate-azure-cli).
-1. Once the extension is installed and authenticated to your Azure subscription, use the following command to create a new Azure AI Studio project from an existing Azure AI Studio hub:
+1. Once the extension is installed and authenticated to your Azure subscription, use the following command to create a new Azure AI Foundry project from an existing Azure AI Foundry hub:
```azurecli
az ml workspace create --kind project --hub-id {my_hub_ARM_ID} --resource-group {my_resource_group} --name {my_project_name}
@@ -84,17 +85,17 @@ The code in this section assumes you have an existing hub. If you don't have a
## View project settings
-# [Azure AI Studio](#tab/ai-studio)
+# [Azure AI Foundry](#tab/ai-studio)
On the project **Overview** page you can find information about the project.
-:::image type="content" source="../media/how-to/projects/project-settings.png" alt-text="Screenshot of an AI Studio project settings page." lightbox = "../media/how-to/projects/project-settings.png":::
+:::image type="content" source="../media/how-to/projects/project-settings.png" alt-text="Screenshot of an AI Foundry project settings page." lightbox = "../media/how-to/projects/project-settings.png":::
- Name: The name of the project appears in the top left corner. You can rename the project using the edit tool.
- Subscription: The subscription that hosts the hub that hosts the project.
- Resource group: The resource group that hosts the hub that hosts the project.
-Select **Management center** to navigate to the project resources in Azure AI Studio.
+Select **Management center** to navigate to the project resources in Azure AI Foundry portal.
Select **Manage in Azure portal** to navigate to the project resources in the Azure portal.
# [Python SDK](#tab/python)
@@ -132,10 +133,10 @@ In addition, a number of resources are only accessible by users in your project
| workspacefilestore | {project-GUID}-code | Hosts files created on your compute and using prompt flow |
> [!NOTE]
-> Storage connections are not created directly with the project when your storage account has public network access set to disabled. These are created instead when a first user accesses AI Studio over a private network connection. [Troubleshoot storage connections](troubleshoot-secure-connection-project.md#troubleshoot-configurations-on-connecting-to-storage)
+> Storage connections are not created directly with the project when your storage account has public network access set to disabled. These are created instead when a first user accesses AI Foundry over a private network connection. [Troubleshoot storage connections](troubleshoot-secure-connection-project.md#troubleshoot-configurations-on-connecting-to-storage)
## Related content
- [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md)
-- [Learn more about Azure AI Studio](../what-is-ai-studio.md)
+- [Learn more about Azure AI Foundry](../what-is-ai-studio.md)
- [Learn more about hubs](../concepts/ai-resources.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the project creation guide"
}
Explanation
This change updates the platform name in "create-projects.md" from "Azure AI Studio" to "Azure AI Foundry portal", so that users work from information that reflects the current platform.
The main changes are:
- The document title and description now use the name "Azure AI Foundry portal", clarifying which platform the guide covers.
- Each step of the project-creation procedure is revised to the new platform name, keeping the terminology consistent.
- Descriptions of projects and hubs likewise use "Azure AI Foundry" phrasing, so users can follow the procedure accurately.
- On the project settings page, the screenshot alt text now reflects the "Azure AI Foundry" name.
- In the Python SDK and CLI sections, instructions and commands that reference the Azure AI Foundry hub are updated so users run the commands appropriate to their environment.
With this update, users creating and managing projects in the Azure AI Foundry portal can work efficiently from current information. Presenting the platform name and related details consistently improves both efficiency and comprehension.
articles/ai-studio/how-to/create-secure-ai-hub.md
Diff
@@ -1,7 +1,7 @@
---
title: Create a secure hub
titleSuffix: Azure AI Foundry
-description: Create an Azure AI Studio hub inside a managed virtual network. The managed virtual network secures access to managed resources such as computes.
+description: Create an Azure AI Foundry hub inside a managed virtual network. The managed virtual network secures access to managed resources such as computes.
ms.service: azure-ai-studio
ms.custom:
- build-2024
@@ -10,14 +10,14 @@ ms.date: 5/21/2024
ms.reviewer: meerakurup
ms.author: larryfr
author: Blackmist
-# Customer intent: As an administrator, I want to create a secure hub and project with a managed virtual network so that I can secure access to the Azure AI Studio hub and project resources.
+# Customer intent: As an administrator, I want to create a secure hub and project with a managed virtual network so that I can secure access to the Azure AI Foundry hub and project resources.
---
-# How to create a secure Azure AI Studio hub and project with a managed virtual network
+# How to create a secure Azure AI Foundry hub and project with a managed virtual network
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-You can secure your Azure AI Studio hub, projects, and managed resources in a managed virtual network. With a managed virtual network, inbound access is only allowed through a private endpoint for your hub. Outbound access can be configured to allow either all outbound access, or only allowed outbound that you specify. For more information, see [Managed virtual network](configure-managed-network.md).
+You can secure your Azure AI Foundry hub, projects, and managed resources in a managed virtual network. With a managed virtual network, inbound access is only allowed through a private endpoint for your hub. Outbound access can be configured to allow either all outbound access, or only allowed outbound that you specify. For more information, see [Managed virtual network](configure-managed-network.md).
> [!IMPORTANT]
> The managed virtual network doesn't provide inbound connectivity for your clients. For more information, see the [Connect to the hub](#connect-to-the-hub) section.
@@ -29,7 +29,7 @@ You can secure your Azure AI Studio hub, projects, and managed resources in a ma
## Create a hub
-1. From the Azure portal, search for `Azure AI Studio` and create a new resource by selecting **+ New Azure AI**.
+1. From the Azure portal, search for `Azure AI Foundry` and create a new resource by selecting **+ New Azure AI**.
1. Enter your hub name, subscription, resource group, and location details. You can also select an existing Azure AI services resource or create a new one.
:::image type="content" source="../media/how-to/network/ai-hub-basics.png" alt-text="Screenshot of the option to set hub basic information." lightbox="../media/how-to/network/ai-hub-basics.png":::
@@ -38,7 +38,7 @@ You can secure your Azure AI Studio hub, projects, and managed resources in a ma
:::image type="content" source="../media/how-to/network/ai-hub-resources.png" alt-text="Screenshot of the Create a hub with the option to set resource information." lightbox="../media/how-to/network/ai-hub-resources.png":::
-1. Select **Next: Networking** to configure the managed virtual network that AI Studio uses to secure its hub and projects.
+1. Select **Next: Networking** to configure the managed virtual network that AI Foundry uses to secure its hub and projects.
1. Select **Private with Internet Outbound**, which allows compute resources to access the public internet for resources such as Python packages.
@@ -71,5 +71,5 @@ The managed virtual network doesn't directly provide access to your clients. Ins
## Next steps
- [Create a project](create-projects.md)
-- [Learn more about Azure AI Studio](../what-is-ai-studio.md)
-- [Learn more about Azure AI Studio hubs](../concepts/ai-resources.md)
+- [Learn more about Azure AI Foundry](../what-is-ai-studio.md)
+- [Learn more about Azure AI Foundry hubs](../concepts/ai-resources.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the secure AI hub creation guide"
}
Explanation
This change updates the platform name in "create-secure-ai-hub.md" from "Azure AI Studio" to "Azure AI Foundry", giving users accurate instructions for the current platform.
The main changes are:
- The document title and description are revised to use the "Azure AI Foundry" name.
- The descriptions of inbound access through a private endpoint and of outbound access configuration now refer to "Azure AI Foundry".
- Each step of the hub-creation procedure is changed from "Azure AI Studio" to "Azure AI Foundry", keeping the resource-creation steps consistent.
- The links in the "Next steps" section are revised to the new platform name so users can reach the related documentation easily.
With this update, users creating a secure AI hub with Azure AI Foundry can follow current, accurate steps. The renaming improves the consistency of the documentation and, in turn, the efficiency of the work.
articles/ai-studio/how-to/custom-policy-model-deployment.md
Diff
@@ -1,7 +1,7 @@
---
title: Control AI model deployment with custom policies
titleSuffix: Azure AI Foundry
-description: "Learn how to use custom Azure Policies to control Azure AI services and Azure OpenAI model deployment with Azure AI Studio."
+description: "Learn how to use custom Azure Policies to control Azure AI services and Azure OpenAI model deployment with Azure AI Foundry."
author: Blackmist
ms.author: larryfr
ms.service: azure-ai-studio
@@ -12,9 +12,9 @@ ms.date: 10/25/2024
---
-# Control AI model deployment with custom policies in Azure AI Studio
+# Control AI model deployment with custom policies in Azure AI Foundry portal
-When using models from Azure AI services and Azure OpenAI with Azure AI Studio, you might need to use custom policies to control what models your developers can deploy. Custom Azure Policies allow you to create policy definitions that meet your organization's unique requirements. This article shows you how to create and assign an example custom policy to control model deployment.
+When using models from Azure AI services and Azure OpenAI with Azure AI Foundry, you might need to use custom policies to control what models your developers can deploy. Custom Azure Policies allow you to create policy definitions that meet your organization's unique requirements. This article shows you how to create and assign an example custom policy to control model deployment.
## Prerequisites
@@ -84,7 +84,7 @@ When using models from Azure AI services and Azure OpenAI with Azure AI Studio,
1. From the **Parameters** tab, set **Allowed AI models** to the list of models that you want to allow. The list should be a comma-separated list of model names and approved versions, surrounded by square brackets. For example, `["gpt-4,0613", "gpt-35-turbo,0613"]`.
> [!TIP]
- > You can find the model names and their versions in the [Azure AI Studio Model Catalog](https://ai.azure.com/explore/models). Select the model to view the details, and then copy the model name and their version in the title.
+ > You can find the model names and their versions in the [Azure AI Foundry Model Catalog](https://ai.azure.com/explore/models). Select the model to view the details, and then copy the model name and their version in the title.
1. Optionally, select the **Non-compliance messages** tab at the top of the page and set a custom message for noncompliance.
1. Select **Review + create** tab and verify that the policy assignment is correct. When ready, select **Create** to assign the policy.
@@ -112,7 +112,7 @@ To update an existing policy assignment with new models, follow these steps:
## Best practices
-- **Obtaining model names**: Use the [Azure AI Studio Model Catalog](https://ai.azure.com/explore/models), then select the model to view details. Use the model name in the title with the policy.
+- **Obtaining model names**: Use the [Azure AI Foundry Model Catalog](https://ai.azure.com/explore/models), then select the model to view details. Use the model name in the title with the policy.
- **Granular scoping**: Assign policies at the appropriate scope to balance control and flexibility. For example, apply at the subscription level to control all resources in the subscription, or apply at the resource group level to control resources in a specific group.
- **Policy naming**: Use a consistent naming convention for policy assignments to make it easier to identify the purpose of the policy. Include information such as the purpose and scope in the name.
- **Documentation**: Keep records of policy assignments and configurations for auditing purposes. Document any changes made to the policy over time.
@@ -123,6 +123,6 @@ To update an existing policy assignment with new models, follow these steps:
## Related content
- [Azure Policy overview](/azure/governance/policy/overview)
-- [Azure AI Studio model catalog](model-catalog-overview.md)
+- [Azure AI Foundry model catalog](model-catalog-overview.md)
- [Azure AI services documentation](/azure/ai-services)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the custom policy model deployment guide"
}
Explanation
This change updates the platform name in "custom-policy-model-deployment.md" from "Azure AI Studio" to "Azure AI Foundry", providing current platform information so users can follow the steps accurately.
The main changes are:
- The document title and description are updated to the new platform name, "Azure AI Foundry".
- The discussion of using models from Azure AI services and Azure OpenAI is adapted to Azure AI Foundry, showing users how to create and assign an appropriate custom policy.
- The tip on finding model names and versions now points to the Azure AI Foundry Model Catalog, and the related links are updated accordingly.
- The instructions on noncompliance messages and on updating a policy assignment are kept consistent with the renamed platform.
- The related content links are likewise revised to the new platform name.
With this update, users controlling AI model deployment with custom policies through Azure AI Foundry work from current information, and unifying the platform name improves the overall consistency of the content.
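The diff above notes that the **Allowed AI models** policy parameter expects a comma-separated list of model names and approved versions inside square brackets, such as `["gpt-4,0613", "gpt-35-turbo,0613"]`. As a minimal sketch of that format (the helper function is ours for illustration, not part of the documented tooling), the parameter value can be assembled from (name, version) pairs like this:

```python
import json

def allowed_models_param(models):
    """Format (name, version) pairs into the policy's 'Allowed AI models'
    value: a JSON array of "name,version" strings."""
    return json.dumps([f"{name},{version}" for name, version in models])

# Matches the example value shown in the guide:
value = allowed_models_param([("gpt-4", "0613"), ("gpt-35-turbo", "0613")])
print(value)  # ["gpt-4,0613", "gpt-35-turbo,0613"]
```

Model names and versions would be copied from the model details page in the Azure AI Foundry Model Catalog, as the guide describes.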
articles/ai-studio/how-to/data-add.md
Diff
@@ -1,23 +1,24 @@
---
-title: How to add and manage data in your Azure AI Studio project
+title: How to add and manage data in your Azure AI Foundry project
titleSuffix: Azure AI Foundry
-description: Learn how to add and manage data in your Azure AI Studio project.
+description: Learn how to add and manage data in your Azure AI Foundry project.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 10/25/2024
ms.author: franksolomon
-author: fbsolo-ms1
+author: fbsolo-ms1
---
-# How to add and manage data in your Azure AI Studio project
+# How to add and manage data in your Azure AI Foundry project
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-This article describes how to create and manage data in Azure AI Studio. Data can be used as a source for indexing in Azure AI Studio.
+This article describes how to create and manage data in Azure AI Foundry portal. Data can be used as a source for indexing in Azure AI Foundry portal.
Data can help when you need these capabilities:
@@ -26,39 +27,39 @@ Data can help when you need these capabilities:
> - **Reproducibility:** Once you create a data version, it is *immutable*. It cannot be modified or deleted. Therefore, jobs or prompt flow pipelines that consume the data can be reproduced.
> - **Auditability:** Because the data version is immutable, you can track the asset versions, who updated a version, and when the version updates occurred.
> - **Lineage:** For any given data, you can view which jobs or prompt flow pipelines consume the data.
-> - **Ease-of-use:** An Azure AI Studio data resembles web browser bookmarks (favorites). Instead of remembering long storage paths that *reference* your frequently-used data on Azure Storage, you can create a data *version* and then access that version of the asset with a friendly name.
+> - **Ease-of-use:** An Azure AI Foundry data resembles web browser bookmarks (favorites). Instead of remembering long storage paths that *reference* your frequently-used data on Azure Storage, you can create a data *version* and then access that version of the asset with a friendly name.
## Prerequisites
To create and work with data, you need:
- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/).
-- An [AI Studio project](../how-to/create-projects.md).
+- An [AI Foundry project](../how-to/create-projects.md).
## Create data
-When you create your data, you need to set the data type. AI Studio supports these data types:
+When you create your data, you need to set the data type. AI Foundry supports these data types:
|Type |**Canonical Scenarios**|
|---------|---------|
|**`file`**<br>Reference a single file | Read a single file on Azure Storage (the file can have any format). |
|**`folder`**<br> Reference a folder | Read a folder of parquet/CSV files into Pandas/Spark.<br><br>Read unstructured data (for example: images, text, or audio) located in a folder. |
-Azure AI Studio shows the supported source paths. You can create a data from a folder or file:
+Azure AI Foundry shows the supported source paths. You can create a data from a folder or file:
-- If you select **folder type**, you can choose the folder URL format. Azure AI Studio shows the supported folder URL formats. You can create a data resource as shown:
+- If you select **folder type**, you can choose the folder URL format. Azure AI Foundry shows the supported folder URL formats. You can create a data resource as shown:
:::image type="content" source="../media/data-add/studio-url-folder.png" alt-text="Screenshot of folder URL format.":::
-- If you select **file type**, you can choose the file URL format. The supported file URL formats are shown in Azure AI Studio. You can create a data resource as shown:
+- If you select **file type**, you can choose the file URL format. The supported file URL formats are shown in Azure AI Foundry portal. You can create a data resource as shown:
:::image type="content" source="../media/data-add/studio-url-file.png" alt-text="Screenshot of file URL format.":::
### Create data: File type
A file (`uri_file`) data resource type points to a *single file* on storage (for example, a CSV file).
-These steps explain how to create a File typed data in Azure AI Studio:
+These steps explain how to create a File typed data in Azure AI Foundry portal:
-1. Navigate to [Azure AI Studio](https://ai.azure.com/).
+1. Navigate to [Azure AI Foundry](https://ai.azure.com/).
1. Select the project where you want to create the data.
@@ -85,9 +86,9 @@ These steps explain how to create a File typed data in Azure AI Studio:
### Create data: Folder type
-A Folder (`uri_folder`) data source type points to a *folder* on a storage resource (for example, a folder containing several subfolders of images). Use these steps to create a Folder type data resource in Azure AI Studio:
+A Folder (`uri_folder`) data source type points to a *folder* on a storage resource (for example, a folder containing several subfolders of images). Use these steps to create a Folder type data resource in Azure AI Foundry portal:
-1. Navigate to [Azure AI Studio](https://ai.azure.com/)
+1. Navigate to [Azure AI Foundry](https://ai.azure.com/)
1. Select the project where you want to create the data.
@@ -118,9 +119,9 @@ A Folder (`uri_folder`) data source type points to a *folder* on a storage resou
### Delete data
> [!IMPORTANT]
-> Data deletion is not supported. Data is immutable in AI Studio. Once you create a data version, it can't be modified or deleted. This immutability provides a level of protection when working in a team that creates production workloads.
+> Data deletion is not supported. Data is immutable in AI Foundry portal. Once you create a data version, it can't be modified or deleted. This immutability provides a level of protection when working in a team that creates production workloads.
-If AI Studio allowed data deletion, it would have the following adverse effects:
+If AI Foundry allowed data deletion, it would have the following adverse effects:
- Production jobs that consume data that is later deleted would fail.
- Machine learning experiment reproduction would become more difficult.
- Job lineage would break, because it would become impossible to view the deleted data version.
@@ -138,33 +139,33 @@ When a data resource is erroneously created - for example, with an incorrect nam
### Archive data
-By default, archiving a data resource hides it from both list queries (for example, in the CLI `az ml data list`) and the data listing in Azure AI Studio. You can still continue to reference and use an archived data resource in your workflows. You can either archive:
+By default, archiving a data resource hides it from both list queries (for example, in the CLI `az ml data list`) and the data listing in Azure AI Foundry portal. You can still continue to reference and use an archived data resource in your workflows. You can either archive:
- *all versions* of the data under a given name
- a specific data version
#### Archive all versions of a data
-At this time, Azure AI Studio doesn't support archiving *all versions* of the data resource under a given name.
+At this time, Azure AI Foundry doesn't support archiving *all versions* of the data resource under a given name.
#### Archive a specific data version
-At this time, Azure AI Studio doesn't support archiving a specific version of the data resource.
+At this time, Azure AI Foundry doesn't support archiving a specific version of the data resource.
### Restore an archived data
You can restore an archived data resource. If all of versions of the data are archived, you can't restore individual versions of the data - you must restore all versions.
#### Restore all versions of a data
-At this time, Azure AI Studio doesn't support restoration of *all versions* of the data under a given name.
+At this time, Azure AI Foundry doesn't support restoration of *all versions* of the data under a given name.
#### Restore a specific data version
> [!IMPORTANT]
> If all data versions were archived, you cannot restore individual versions of the data - you must restore all versions.
-Currently, Azure AI Studio doesn't support restoration of a specific data version.
+Currently, Azure AI Foundry doesn't support restoration of a specific data version.
### Data tagging
@@ -181,9 +182,9 @@ You can add tags to existing data.
You can browse the folder structure and preview the file in the Data details page. We support data preview for the following types:
- Data file types that are supported via preview API: ".tsv", ".csv", ".parquet", ".jsonl".
-- Other file types, Studio UI attempts to preview the file in the browser natively. The supported file types might depend on the browser itself.
+- Other file types, AI Foundry portal attempts to preview the file in the browser natively. The supported file types might depend on the browser itself.
Normally for images, these file image types are supported: ".png", ".jpg", ".gif". Normally, these file types are supported: ".ipynb", ".py", ".yml", ".html".
## Next steps
-- Learn how to [create a project in Azure AI Studio](./create-projects.md).
+- Learn how to [create a project in Azure AI Foundry portal](./create-projects.md).
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the data management guide"
}
Explanation
This change updates the platform name in "data-add.md" from "Azure AI Studio" to "Azure AI Foundry", so users can find accurate, current information on adding and managing data.
The main changes are:
- The document title and description are updated to the new platform name and now describe data management in an Azure AI Foundry project.
- The instructions for creating and managing data are revised to refer to the Azure AI Foundry portal.
- The descriptions of the main capabilities (reproducibility, auditability, lineage, ease of use) consistently use "Azure AI Foundry", and the related steps and tips are adjusted to match.
- The supported data types (`file` and `folder`) are described as Azure AI Foundry support, with the corresponding creation steps.
- The sections on deleting, archiving, and restoring data are revised to refer to "Azure AI Foundry".
- Links that mention the platform name are updated so users follow consistent information.
With this update, users can rely on accurate knowledge for managing and operating data on Azure AI Foundry, and the consistent naming avoids confusion between platform names.
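The diff above stresses that a data version, once created, is immutable: it can be referenced by a friendly name and version but never modified or deleted, which is what makes jobs and lineage reproducible. That create-only contract can be sketched with a small in-memory registry; this is an illustrative toy, not the AI Foundry implementation, and the `azureml://` path is a made-up example:

```python
class DataRegistry:
    """Toy model of immutable data versions: create-only, no update, no delete."""

    def __init__(self):
        self._versions = {}  # (name, version) -> storage path

    def create(self, name, version, path):
        key = (name, version)
        if key in self._versions:
            # Mirrors the documented behavior: an existing version is immutable.
            raise ValueError(f"{name}:{version} already exists and is immutable")
        self._versions[key] = path
        return key

    def get(self, name, version):
        # Consumers resolve a friendly name+version instead of a long storage path.
        return self._versions[(name, version)]

reg = DataRegistry()
reg.create("sales-data", "1", "azureml://datastores/workspaceblobstore/paths/sales.csv")
print(reg.get("sales-data", "1"))
```

A second `create` with the same name and version fails, which is the property that keeps downstream jobs and prompt flow pipelines reproducible.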
articles/ai-studio/how-to/data-image-add.md
Diff
@@ -1,7 +1,7 @@
---
title: 'Use your image data with Azure OpenAI Service'
titleSuffix: Azure AI Foundry
-description: Use this article to learn about using your image data for image generation in Azure AI Studio.
+description: Use this article to learn about using your image data for image generation in Azure AI Foundry portal.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
@@ -28,12 +28,12 @@ Use this article to learn how to provide your own image data for GPT-4 Turbo wit
- An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).
- Be sure that you're assigned at least the [Cognitive Services Contributor role](../../ai-services/openai/how-to/role-based-access-control.md#cognitive-services-contributor) for the Azure OpenAI resource.
- An Azure AI Search resource. See [create an Azure AI Search service in the portal](/azure/search/search-create-service-portal). If you don't have an Azure AI Search resource, you're prompted to create one when you add your data source later in this guide.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource and Azure AI Search resource added as connections.
+- An [AI Foundry hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource and Azure AI Search resource added as connections.
## Deploy a GPT-4 Turbo with Vision model
-1. Sign in to [Azure AI Studio](https://ai.azure.com) and select the hub you'd like to work in.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select the hub you'd like to work in.
1. On the left nav menu, select **AI Services**. Select the **Try out GPT-4 Turbo** panel.
1. On the gpt-4 page, select **Deploy**. In the window that appears, select your Azure OpenAI resource. Select `vision-preview` as the model version.
1. Select **Deploy**.
@@ -45,8 +45,8 @@ Use this article to learn how to provide your own image data for GPT-4 Turbo wit
1. On the left pane, select the **Add your data** tab and select **Add a data source**.
1. In the window that appears, select a data source option. Each option uses an Azure AI Search index that's trained on your images and can be used for retrieval augmented generation in the chat playground.
* **Azure AI Search**: If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source.
- * **Azure Blob Storage**: The Azure Blob storage option is especially useful if you have a large number of image files and don't want to manually upload each one. Azure AI Studio will generate an image search index for you.
- * **Upload image files and metadata**: You can upload image files and metadata using the playground. This option is useful if you have a small number of image files. Azure AI Studio will generate an image search index for you.
+ * **Azure Blob Storage**: The Azure Blob storage option is especially useful if you have a large number of image files and don't want to manually upload each one. Azure AI Foundry will generate an image search index for you.
+ * **Upload image files and metadata**: You can upload image files and metadata using the playground. This option is useful if you have a small number of image files. Azure AI Foundry will generate an image search index for you.
## Add your image data
@@ -96,7 +96,7 @@ After you have a blob storage container populated with image files and at least
> [!NOTE]
> Azure OpenAI needs both a storage account resource and a search resource to access and index your data. Your data is stored securely in your Azure subscription.
>
- > When adding data to the selected storage account for the first time in Azure AI Studio, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Studio and Azure OpenAI need access your Azure Blob storage account.
+ > When adding data to the selected storage account for the first time in Azure AI Foundry portal, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Foundry and Azure OpenAI need access your Azure Blob storage account.
:::image type="content" source="../media/data-add/use-your-image-data/add-image-data-blob.png" alt-text="A screenshot showing the Azure storage account and Azure AI Search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-blob.png":::
@@ -117,7 +117,7 @@ After you have a blob storage container populated with image files and at least
> [!NOTE]
> Azure OpenAI needs both a storage account resource and a search resource to access and index your data. Your data is stored securely in your Azure subscription.
>
- > When adding data to the selected storage account for the first time in Azure AI Studio, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Studio and Azure OpenAI need access your Azure Blob storage account.
+ > When adding data to the selected storage account for the first time in Azure AI Foundry portal, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Foundry and Azure OpenAI need access your Azure Blob storage account.
:::image type="content" source="../media/data-add/use-your-image-data/add-image-data-upload.png" alt-text="A screenshot showing the storage account and index selection for image file upload." lightbox="../media/data-add/use-your-image-data/add-image-data-upload.png":::
@@ -168,5 +168,5 @@ When you remove a data source, you'll see a warning message. Removing a data sou
## Next steps
-- Learn how to [create a project in Azure AI Studio](./create-projects.md).
+- Learn how to [create a project in Azure AI Foundry portal](./create-projects.md).
- [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update platform name in the image data guide"
}
Explanation
This change updates the platform name in "data-image-add.md" from "Azure AI Studio" to "Azure AI Foundry", so users have accurate, current information for working with their image data.
The main changes are:
- The document description now refers to the Azure AI Foundry portal.
- The prerequisites and the steps for adding image data are revised for Azure AI Foundry, including the AI Foundry hub connection requirements.
- The procedure for deploying a GPT-4 Turbo with Vision model now refers to the Azure AI Foundry portal, so users follow the correct steps.
- The instructions for uploading and managing image files use the new platform name consistently.
- The notes on cross-origin resource sharing (CORS) are updated to refer to Azure AI Foundry, with the required steps stated clearly.
These changes give users concrete, reliable steps for using image data in the Azure AI Foundry environment, and the consistent naming avoids misunderstanding and improves the overall experience.
articles/ai-studio/how-to/deploy-models-cohere-command.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Cohere Command chat models with Azure AI Studio
+title: How to use Cohere Command chat models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Cohere Command chat models with Azure AI Studio.
+description: Learn how to use Cohere Command chat models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -108,15 +108,15 @@ The following models are available:
## Prerequisites
-To use Cohere Command chat models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Command chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Command chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -142,7 +142,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Cohere Command chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Cohere Command chat models.
### Create a client to consume the model
@@ -581,15 +581,15 @@ The following models are available:
## Prerequisites
-To use Cohere Command chat models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Command chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Command chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -613,7 +613,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Cohere Command chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Cohere Command chat models.
### Create a client to consume the model
@@ -1068,15 +1068,15 @@ The following models are available:
## Prerequisites
-To use Cohere Command chat models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Command chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Command chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1123,7 +1123,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Cohere Command chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Cohere Command chat models.
### Create a client to consume the model
@@ -1580,15 +1580,15 @@ The following models are available:
## Prerequisites
-To use Cohere Command chat models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Command chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Command chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1605,7 +1605,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Cohere Command chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Cohere Command chat models.
### Create a client to consume the model
@@ -2151,7 +2151,7 @@ For more examples of how to use Cohere models, see the following examples and tu
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-Cohere models deployed as a serverless API are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Cohere models deployed as a serverless API are offered by Cohere through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -2162,6 +2162,6 @@ For more information on how to track costs, see [Monitor costs for models offere
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Platform name update in the Cohere Command chat models usage guide"
}
Explanation
This change updates the platform name used in "deploy-models-cohere-command.md" for the Cohere Command chat models from "Azure AI Studio" to "Azure AI Foundry". With this correction, users get accurate information about the new platform and can follow the current procedure for deploying Cohere Command models.
The main changes are:
- The document title and description now reference Azure AI Foundry, so users learn how to use Cohere Command chat models from accurate information.
- The sections on model deployment and serverless API connections are updated for Azure AI Foundry, with clearer descriptions of the required resources.
- The CORS (cross-origin resource sharing) notes are updated to reference the new platform.
- The steps for using the Azure AI Foundry portal are spelled out, and consistent steps and links reduce the chance of errors.
- The notes on using serverless API endpoints now refer to Cohere Command models and Azure AI Foundry.
With this update, users can follow the Azure AI Foundry-based deployment procedure for Cohere Command chat models smoothly. Keeping the platform name consistent avoids misreading the information and improves the overall experience.
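The tips quoted in the diff claim that the Azure AI model inference API lets you "talk with most models deployed in Azure AI Foundry portal with the same code and structure." As a hedged illustration of that model-agnostic shape, the sketch below builds the chat-completions request body a client would send to a serverless deployment; the helper function is hypothetical (not part of any SDK), and the payload field names mirror the common chat-completions convention as an assumption.

```python
# Minimal sketch: assemble the model-agnostic chat-completions payload
# that the Azure AI model inference route accepts, whichever model
# (Cohere Command, Jais, ...) backs the endpoint. build_chat_request is
# an illustrative helper, not an SDK function.
import json

def build_chat_request(user_prompt, system_prompt=None, temperature=0.7, max_tokens=512):
    """Build a chat-completions request body from prompts and sampling knobs."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return {
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "How many languages are in the world?",
    system_prompt="You are a helpful assistant.",
)
print(json.dumps(payload, indent=2))
```

The same payload would be POSTed to the deployment's chat-completions route with the endpoint's key; only the endpoint URL changes per model, which is the point of the "same code and structure" tip.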
articles/ai-studio/how-to/deploy-models-cohere-embed.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Cohere Embed V3 models with Azure AI Studio
+title: How to use Cohere Embed V3 models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Cohere Embed V3 models with Azure AI Studio.
+description: Learn how to use Cohere Embed V3 models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -14,11 +14,11 @@ ms.custom: references_regions, generated
zone_pivot_groups: azure-ai-model-catalog-samples-embeddings
---
-# How to use Cohere Embed V3 models with Azure AI Studio
+# How to use Cohere Embed V3 models with Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn about Cohere Embed V3 models and how to use them with Azure AI Studio.
+In this article, you learn about Cohere Embed V3 models and how to use them with Azure AI Foundry.
The Cohere family of models includes various models optimized for different use cases, including chat completions, embeddings, and rerank. Cohere models are optimized for various use cases that include reasoning, summarization, and question answering.
[!INCLUDE [models-preview](../includes/models-preview.md)]
@@ -54,15 +54,15 @@ Image embeddings consume a fixed number of tokens per image—1,000 tokens per i
## Prerequisites
-To use Cohere Embed V3 models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Embed V3 models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Embed V3 models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -247,15 +247,15 @@ Image embeddings consume a fixed number of tokens per image—1,000 tokens per i
## Prerequisites
-To use Cohere Embed V3 models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Embed V3 models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Embed V3 models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -442,15 +442,15 @@ Image embeddings consume a fixed number of tokens per image—1,000 tokens per i
## Prerequisites
-To use Cohere Embed V3 models with Azure AI Studio, you need the following prerequisites:
+To use Cohere Embed V3 models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Cohere Embed V3 models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -651,7 +651,7 @@ Cohere Embed V3 models can optimize the embeddings based on its use case.
## Cost and quota considerations for Cohere family of models deployed as serverless API endpoints
-Cohere models deployed as a serverless API are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Cohere models deployed as a serverless API are offered by Cohere through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -664,6 +664,6 @@ Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Platform name update in the Cohere Embed V3 models usage guide"
}
Explanation
This change updates the platform name used in "deploy-models-cohere-embed.md" for the Cohere Embed V3 models from "Azure AI Studio" to "Azure AI Foundry". Users thereby get accurate information grounded in the new platform and a clearer picture of how to use the Cohere Embed V3 models.
The main changes are:
- The document title and description now reference Azure AI Foundry, so users can find the content under the correct platform name.
- The procedures and prerequisites for the Cohere Embed V3 models are updated for Azure AI Foundry, clarifying the required resources and steps.
- The steps for deploying to serverless API endpoints are adjusted for the Azure AI Foundry portal, with updated links and recommended resources.
- The CORS (cross-origin resource sharing) notes are revised for the new platform, keeping the information consistent.
- The Azure Marketplace integration details now reflect Azure AI Foundry, making the cost-management procedure clearer.
These changes make it easier for users to apply the Cohere Embed V3 models effectively on Azure AI Foundry, improving the overall experience. A consistent platform name avoids misunderstandings and supports better learning and execution.
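One concrete figure the diff quotes is that "Image embeddings consume a fixed number of tokens per image—1,000 tokens per image." A client might use that to pre-estimate token usage for an Embed V3 batch before calling the endpoint. The sketch below assumes only that per-image constant from the document; the text estimate (roughly one token per four characters) is a common rule of thumb, not an exact tokenizer, and the helper itself is hypothetical.

```python
# Rough client-side token estimate for a Cohere Embed V3 batch. The
# 1,000-tokens-per-image figure is quoted from the document above; the
# ~1-token-per-4-characters text estimate is a heuristic, not a tokenizer.
TOKENS_PER_IMAGE = 1_000

def estimate_embed_tokens(texts=(), image_count=0):
    """Estimate total tokens for a mixed text/image embedding request."""
    text_tokens = sum(max(1, len(t) // 4) for t in texts)
    return text_tokens + image_count * TOKENS_PER_IMAGE

batch = ["The quick brown fox", "jumps over the lazy dog"]
print(estimate_embed_tokens(batch, image_count=2))  # images dominate: 2,000 tokens plus a little text
```

An estimate like this is mainly useful for staying under per-minute token quotas when batching large numbers of images.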
articles/ai-studio/how-to/deploy-models-cohere-rerank.md
Diff
@@ -1,28 +1,28 @@
---
title: How to deploy Cohere Rerank models as serverless APIs
titleSuffix: Azure AI Foundry
-description: Learn to deploy and use Cohere Rerank models with Azure AI Studio.
+description: Learn to deploy and use Cohere Rerank models with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
ms.date: 07/24/2024
ms.reviewer: shubhiraj
ms.author: mopeakande
author: msakande
-ms.custom: references_regions, build-2024
+ms.custom: references_regions, build-2024, ignite-2024
---
-# How to deploy Cohere Rerank models with Azure AI Studio
+# How to deploy Cohere Rerank models with Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn about the Cohere Rerank models, how to use Azure AI Studio to deploy them as serverless APIs with pay-as-you-go token-based billing, and how to work with the deployed models.
+In this article, you learn about the Cohere Rerank models, how to use Azure AI Foundry to deploy them as serverless APIs with pay-as-you-go token-based billing, and how to work with the deployed models.
[!INCLUDE [models-preview](../includes/models-preview.md)]
## Cohere Rerank models
-Cohere offers two Rerank models in [Azure AI Studio](https://ai.azure.com). These models are available in the model catalog for deployment as serverless APIs:
+Cohere offers two Rerank models in [Azure AI Foundry](https://ai.azure.com). These models are available in the model catalog for deployment as serverless APIs:
* Cohere Rerank 3 - English
* Cohere Rerank 3 - Multilingual
@@ -64,7 +64,7 @@ You can deploy the previously mentioned Cohere models as a service with pay-as-y
### Prerequisites
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Rerank is only available with hubs created in these regions:
+- An [AI Foundry hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Rerank is only available with hubs created in these regions:
* East US
* East US 2
@@ -76,8 +76,8 @@ You can deploy the previously mentioned Cohere models as a service with pay-as-y
For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
-- An [Azure AI Studio project](../how-to/create-projects.md).
-- Azure role-based access controls are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+- An [Azure AI Foundry project](../how-to/create-projects.md).
+- Azure role-based access controls are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
### Create a new deployment
@@ -86,12 +86,12 @@ The following steps demonstrate the deployment of Cohere Rerank 3 - English, but
To create a deployment:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
1. Select **Model catalog** from the left sidebar.
1. Search for *Cohere*.
1. Select **cohere-rerank-3-english** to open the Model Details page.
1. Select **Deploy** to open a serverless API deployment window for the model.
-1. Alternatively, you can initiate a deployment by starting from your project in AI Studio.
+1. Alternatively, you can initiate a deployment by starting from your project in AI Foundry portal.
1. From the left sidebar of your project, select **Models + Endpoints**.
1. Select **+ Deploy model**.
@@ -242,7 +242,7 @@ The `results` object is a dictionary with the following fields:
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-Cohere models deployed as serverless APIs with pay-as-you-go billing are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Cohere models deployed as serverless APIs with pay-as-you-go billing are offered by Cohere through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -252,6 +252,6 @@ For more information on how to track costs, see [monitor costs for models offere
## Related content
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Platform name update in the Cohere Rerank models usage guide"
}
Explanation
This change updates the platform name used in "deploy-models-cohere-rerank.md" for the Cohere Rerank models from "Azure AI Studio" to "Azure AI Foundry". With this correction, users get accurate information about the new platform and a clearer understanding of how to deploy and use the Cohere Rerank models.
The main changes are:
- The document title and description are updated around using the Cohere Rerank models in Azure AI Foundry, so accurate information is presented.
- The serverless API deployment steps are rewritten for Azure AI Foundry, clarifying in particular the required resources and prerequisites.
- The steps and links for creating and managing projects now target Azure AI Foundry rather than Azure AI Studio, keeping the whole document consistent.
- The Azure Marketplace pricing information reflects the integration with Azure AI Foundry.
- The accompanying reference links and related content are updated to Azure AI Foundry material.
With these changes, users can work with the Cohere Rerank models on Azure AI Foundry more effectively. Making the platform name unambiguous is an important part of avoiding confusion and improving the user experience.
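The rerank flow the diff touches ("The `results` object is a dictionary with the following fields") can be sketched end to end with a stub standing in for the deployed cohere-rerank-3-english endpoint: build the query-plus-documents request, then read results ordered by relevance score. The request/response field names here mirror the usual rerank API shape as an assumption, and the scorer is a deliberately naive stand-in, not the model.

```python
# Sketch of a rerank round trip. stub_rerank is a stand-in for a deployed
# rerank endpoint; the field names (query, documents, top_n, results,
# index, relevance_score) follow the usual rerank shape as an assumption.
def build_rerank_request(query, documents, top_n=3):
    return {"query": query, "documents": documents, "top_n": top_n}

def stub_rerank(request):
    """Stand-in scorer: rank documents by naive term overlap with the query."""
    query_terms = set(request["query"].lower().split())
    scored = []
    for index, doc in enumerate(request["documents"]):
        overlap = len(query_terms & set(doc.lower().split()))
        scored.append({"index": index, "relevance_score": overlap / max(1, len(query_terms))})
    scored.sort(key=lambda r: r["relevance_score"], reverse=True)
    return {"results": scored[: request["top_n"]]}

request = build_rerank_request(
    "capital of France",
    ["Paris is the capital of France.", "Bananas are yellow.", "France borders Spain."],
    top_n=2,
)
response = stub_rerank(request)
print([r["index"] for r in response["results"]])  # most relevant document indices first
```

The `index` field is what makes rerank useful in retrieval pipelines: it maps each score back to the caller's original document order.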
articles/ai-studio/how-to/deploy-models-jais.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Jais chat models with Azure AI Studio
+title: How to use Jais chat models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Jais chat models with Azure AI Studio.
+description: Learn how to use Jais chat models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -40,15 +40,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Jais chat models with Azure AI Studio, you need the following prerequisites:
+To use Jais chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Jais chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -74,7 +74,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Jais chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Jais chat models.
### Create a client to consume the model
@@ -291,15 +291,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Jais chat models with Azure AI Studio, you need the following prerequisites:
+To use Jais chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Jais chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -323,7 +323,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Jais chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Jais chat models.
### Create a client to consume the model
@@ -565,15 +565,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Jais chat models with Azure AI Studio, you need the following prerequisites:
+To use Jais chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Jais chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -620,7 +620,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Jais chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Jais chat models.
### Create a client to consume the model
@@ -855,15 +855,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Jais chat models with Azure AI Studio, you need the following prerequisites:
+To use Jais chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Jais chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -880,7 +880,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Jais chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Jais chat models.
### Create a client to consume the model
@@ -1176,7 +1176,7 @@ For more examples of how to use Jais models, see the following examples and tuto
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-Jais models deployed as a serverless API are offered by G42 through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Jais models deployed as a serverless API are offered by G42 through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -1187,6 +1187,6 @@ For more information on how to track costs, see [Monitor costs for models offere
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Jaisチャットモデルの利用ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in the "deploy-models-jais.md" document from "Azure AI Studio" to "Azure AI Foundry" for content related to Jais chat models. With this fix, users get accurate information based on the new platform and can understand more clearly how to deploy and use Jais chat models.
The main changes are as follows:
- The document title and description are updated to reflect Azure AI Foundry, so users are given correct information.
- The prerequisites section for Jais chat models is rewritten to match Azure AI Foundry, clarifying the required steps.
- The deployment steps for serverless API endpoints are updated to follow the Azure AI Foundry portal, with concrete guidance provided.
- The information on the Azure AI model inference API now emphasizes its use within Azure AI Foundry, unifying the usage instructions.
- The details on Azure Marketplace integration are updated to the new platform name, making the cost-management procedures clearer.
- The accompanying reference links and related content are corrected to point to Azure AI Foundry information.
These changes are important for enabling users to work with Jais chat models effectively on Azure AI Foundry; keeping the platform name consistent prevents confusion about the information and improves the overall user experience.
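The tip quoted in the diff above notes that the Azure AI model inference API uses the same code and structure for most models deployed in the Azure AI Foundry portal, including Jais. A minimal sketch of the common chat-completions request body that route accepts, assuming the usual `messages`/`temperature`/`max_tokens` fields; the endpoint URL is a placeholder, and `build_chat_request` is an illustrative helper, not part of any SDK:

```python
# Placeholder serverless endpoint; real deployments get their own URL and key.
ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com"
ROUTE = "/chat/completions"

def build_chat_request(messages, temperature=0.7, max_tokens=256):
    """Assemble the JSON body the chat-completions route expects for one turn."""
    return {
        "messages": messages,        # list of {"role": ..., "content": ...}
        "temperature": temperature,  # sampling temperature
        "max_tokens": max_tokens,    # cap on generated tokens
    }

body = build_chat_request(
    [{"role": "user", "content": "What is the capital of the UAE?"}]
)
```

Because the body shape is shared across models, the same helper can target a Jais, Jamba, or Llama serverless deployment by changing only the endpoint URL.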
articles/ai-studio/how-to/deploy-models-jamba.md
Diff
@@ -1,22 +1,22 @@
---
-title: How to deploy AI21's Jamba family models with Azure AI Studio
+title: How to deploy AI21's Jamba family models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: How to deploy AI21's Jamba family models with Azure AI Studio
+description: How to deploy AI21's Jamba family models with Azure AI Foundry
manager: scottpolly
ms.service: azure-machine-learning
ms.topic: how-to
ms.date: 08/06/2024
ms.author: ssalgado
ms.reviewer: tgokal
reviewer: tgokal
-ms.custom: references_regions
+ms.custom: references_regions, ignite-2024
---
-# How to deploy AI21's Jamba family models with Azure AI Studio
+# How to deploy AI21's Jamba family models with Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn how to use Azure AI Studio to deploy AI21's Jamba family models as a serverless API with pay-as-you-go billing.
+In this article, you learn how to use Azure AI Foundry to deploy AI21's Jamba family models as a serverless API with pay-as-you-go billing.
The Jamba family models are AI21's production-grade Mamba-based large language model (LLM) which leverages AI21's hybrid Mamba-Transformer architecture. It's an instruction-tuned version of AI21's hybrid structured state space model (SSM) transformer Jamba model. The Jamba family models are built for reliable commercial use with respect to quality and performance.
@@ -47,7 +47,7 @@ To get started with Jamba 1.5 mini deployed as a serverless API, explore our int
### Prerequisites
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio project](../how-to/create-projects.md). The serverless API model deployment offering for Jamba family models is only available with projects created in these regions:
+- An [Azure AI Foundry project](../how-to/create-projects.md). The serverless API model deployment offering for Jamba family models is only available with projects created in these regions:
* East US
* East US 2
@@ -59,9 +59,9 @@ To get started with Jamba 1.5 mini deployed as a serverless API, explore our int
For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
- - On the Azure subscription—to subscribe the AI Studio project to the Azure Marketplace offering, once for each project, per offering:
+ - On the Azure subscription—to subscribe the AI Foundry project to the Azure Marketplace offering, once for each project, per offering:
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
- `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
@@ -72,11 +72,11 @@ To get started with Jamba 1.5 mini deployed as a serverless API, explore our int
- `Microsoft.SaaS/resources/read`
- `Microsoft.SaaS/resources/write`
- - On the AI Studio project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
+ - On the AI Foundry project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
- `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
- `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
- For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+ For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
### Create a new deployment
@@ -89,7 +89,7 @@ These steps demonstrate the deployment of `AI21 Jamba 1.5 Large` or `AI21 Jamba
1. Select **Deploy** to open a serverless API deployment window for the model.
-1. Alternatively, you can initiate a deployment by starting from the **Models + endpoints** page in AI Studio.
+1. Alternatively, you can initiate a deployment by starting from the **Models + endpoints** page in AI Foundry portal.
1. From the left navigation pane of your project, select **My assets** > **Models + endpoints**.
1. Select **+ Deploy model** > **Deploy base model**.
@@ -191,7 +191,7 @@ Payload is a JSON formatted string containing the following parameters:
| `temperature` | `float` | N <br>`1` | 0.0 – 2.0 | How much variation to provide in each answer. Setting this value to 0 guarantees the same response to the same question every time. Setting a higher value encourages more variation. Modifies the distribution from which tokens are sampled. We recommend altering this or `top_p`, but not both. |
| `top_p` | `float` | N <br>`1` | 0 < _value_ <=1.0 | Limit the pool of next tokens in each step to the top N percentile of possible tokens, where 1.0 means the pool of all possible tokens, and 0.01 means the pool of only the most likely next tokens. |
| `stop` | `string` OR `list[string]` | N <br> | "" | String or list of strings containing the word(s) where the API should stop generating output. Newlines are allowed as "\n". The returned text won't contain the stop sequence. |
-| `n` | `integer` | N <br>`1` | 1 – 16 | How many responses to generate for each prompt. With Azure AI Studio's Playground, `n=1` as we work on multi-response Playground. |
+| `n` | `integer` | N <br>`1` | 1 – 16 | How many responses to generate for each prompt. With Azure AI Foundry's Playground, `n=1` as we work on multi-response Playground. |
| `stream` | `boolean` | N <br>`False` | `True` OR `False` | Whether to enable streaming. If true, results are returned one token at a time. If set to true, `n` must be 1, which is automatically set.|
| `tools` | `array[tool]` | N | "" | A list of `tools` the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.|
| `response_format` | `object` | N <br>`null` | "" | Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.|
@@ -398,7 +398,7 @@ data: [DONE]
### Cost and quota considerations for Jamba family models deployed as a serverless API
-The Jamba family models are deployed as a serverless API and is offered by AI21 through Azure Marketplace and integrated with Azure AI studio for use. You can find Azure Marketplace pricing when deploying or fine-tuning models.
+The Jamba family models are deployed as a serverless API and is offered by AI21 through Azure Marketplace and integrated with Azure AI Foundry for use. You can find Azure Marketplace pricing when deploying or fine-tuning models.
Each time a workspace subscribes to a given model offering from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
@@ -412,6 +412,6 @@ Models deployed as a serverless API are protected by Azure AI content safety. Wi
## Related content
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
Summary
{
"modification_type": "minor update",
"modification_title": "AI21のJambaファミリーモデルの利用ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in the "deploy-models-jamba.md" document from "Azure AI Studio" to "Azure AI Foundry" for content related to AI21's Jamba family models. With this fix, users get accurate information about deploying the models on the new platform.
The main changes are as follows:
- The document title and description are revised around Azure AI Foundry, making the content about using the Jamba family models clearer.
- The steps for deploying Jamba family models as a serverless API are rewritten in terms of Azure AI Foundry.
- The prerequisites for project creation and deployment are updated for the new platform, including corrected details about using Azure Marketplace.
- The steps and access-permission guidance based on the Azure AI Foundry portal are emphasized, so users can clearly see which permissions they need.
- The concrete deployment steps for using Jamba models are presented with Azure AI Foundry as the focus.
- Finally, the related resources and content links are updated to the new platform's information, keeping the document consistent overall.
These changes let users work with the Jamba family models effectively on Azure AI Foundry; unifying the platform name prevents confusion and contributes to a better overall user experience.
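The Jamba payload table quoted in the diff above states concrete parameter ranges: `temperature` in 0.0 to 2.0, `top_p` in (0, 1], `n` in 1 to 16, and `stream=True` forcing `n=1` automatically. A hedged sketch of client-side validation mirroring those documented constraints; `validate_jamba_payload` is an illustrative helper, not an AI21 or Azure SDK function:

```python
def validate_jamba_payload(payload):
    """Check a Jamba chat request body against the documented parameter ranges."""
    temperature = payload.get("temperature", 1.0)
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    top_p = payload.get("top_p", 1.0)
    if not 0.0 < top_p <= 1.0:
        raise ValueError("top_p must be in (0, 1]")
    n = payload.get("n", 1)
    if not 1 <= n <= 16:
        raise ValueError("n must be between 1 and 16")
    if payload.get("stream", False) and n != 1:
        # Per the table, streaming requires n=1; the service sets this itself,
        # but normalizing locally keeps the request unambiguous.
        payload["n"] = 1
    return payload
```

Validating before sending surfaces range errors locally instead of as HTTP 400 responses from the serverless endpoint.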
articles/ai-studio/how-to/deploy-models-llama.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use the Meta Llama family of models with Azure AI Studio
+title: How to use the Meta Llama family of models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use the Meta Llama family of models with Azure AI Studio.
+description: Learn how to use the Meta Llama family of models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -81,15 +81,15 @@ The following models are available:
## Prerequisites
-To use Meta Llama models with Azure AI Studio, you need the following prerequisites:
+To use Meta Llama models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Meta Llama models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -124,7 +124,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Meta Llama Instruct models - text-only or image reasoning models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Meta Llama Instruct models - text-only or image reasoning models.
### Create a client to consume the model
@@ -406,15 +406,15 @@ The following models are available:
## Prerequisites
-To use Meta Llama models with Azure AI Studio, you need the following prerequisites:
+To use Meta Llama models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Meta Llama models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -447,7 +447,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Meta Llama models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Meta Llama models.
### Create a client to consume the model
@@ -754,15 +754,15 @@ The following models are available:
## Prerequisites
-To use Meta Llama models with Azure AI Studio, you need the following prerequisites:
+To use Meta Llama models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Meta Llama models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -818,7 +818,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Meta Llama chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Meta Llama chat models.
### Create a client to consume the model
@@ -1114,15 +1114,15 @@ The following models are available:
## Prerequisites
-To use Meta Llama models with Azure AI Studio, you need the following prerequisites:
+To use Meta Llama models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Meta Llama chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1148,7 +1148,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Meta Llama chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Meta Llama chat models.
### Create a client to consume the model
@@ -1470,7 +1470,7 @@ For more examples of how to use Meta Llama models, see the following examples an
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-Meta Llama models deployed as a serverless API are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Meta Llama models deployed as a serverless API are offered by Meta through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -1487,6 +1487,6 @@ It is a good practice to start with a low number of instances and scale up as ne
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Meta Llamaファミリーモデルの利用ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in the "deploy-models-llama.md" document from "Azure AI Studio" to "Azure AI Foundry" for content related to Meta's Llama family models. With this fix, users get accurate information based on the new platform and can move through the Llama model deployment steps more smoothly.
The main changes are as follows:
- The document title and description are revised around Azure AI Foundry so users can rely on correct information.
- In the steps for deploying the Llama family models as a serverless API, the underlying platform is switched to Azure AI Foundry.
- The prerequisites for project creation and deployment are revised to match the new platform name, including clearer guidance on accessing resources through Azure Marketplace.
- The steps based on the Azure AI Foundry portal are emphasized, with the required permissions and roles spelled out concretely.
- The concrete deployment steps for using Meta Llama models are written in terms of Azure AI Foundry.
- Related resources and links are updated to the new platform's information, keeping the whole document consistent.
With this change, the information for using the Meta Llama family models on Azure AI Foundry is more consistent, which helps users avoid platform-related misunderstandings and is an important step toward improving the overall user experience.
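The cost-and-quota section quoted in the diff above gives per-deployment limits: 200,000 tokens per minute and 1,000 API requests per minute. An illustrative sketch of that budget arithmetic; a production client would read rate-limit response headers rather than track usage itself, and `MinuteBudget` is a hypothetical name:

```python
TOKENS_PER_MINUTE = 200_000    # per-deployment token limit stated in the doc
REQUESTS_PER_MINUTE = 1_000    # per-deployment request limit stated in the doc

class MinuteBudget:
    """Track token and request usage against the per-minute deployment quota."""

    def __init__(self):
        self.tokens_used = 0
        self.requests_made = 0

    def can_send(self, tokens):
        # A call fits only if both the request count and token count stay in budget.
        return (self.requests_made + 1 <= REQUESTS_PER_MINUTE
                and self.tokens_used + tokens <= TOKENS_PER_MINUTE)

    def record(self, tokens):
        if not self.can_send(tokens):
            raise RuntimeError("per-minute deployment quota exceeded")
        self.requests_made += 1
        self.tokens_used += tokens
```

Since the limit is per deployment and only one deployment per model per project is allowed, a single budget per model is enough; reset it each minute.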
articles/ai-studio/how-to/deploy-models-managed.md
Diff
@@ -1,6 +1,6 @@
---
title: How to deploy and inference a managed compute deployment with code
-titleSuffix: AI Studio
+titleSuffix: AI Foundry
description: Learn how to deploy and inference a managed compute deployment with code.
manager: scottpolly
ms.service: azure-ai-studio
@@ -16,7 +16,7 @@ author: msakande
# How to deploy and inference a managed compute deployment with code
-The AI Studio [model catalog](../how-to/model-catalog-overview.md) offers over 1,600 models, and the most common way to deploy these models is to use the managed compute deployment option, which is also sometimes referred to as a managed online deployment.
+the AI Foundry portal [model catalog](../how-to/model-catalog-overview.md) offers over 1,600 models, and the most common way to deploy these models is to use the managed compute deployment option, which is also sometimes referred to as a managed online deployment.
Deployment of a large language model (LLM) makes it available for use in a website, an application, or other production environment. Deployment typically involves hosting the model on a server or in the cloud and creating an API or other interface for users to interact with the model. You can invoke the deployment for real-time inference of generative AI applications such as chat and copilot.
@@ -26,7 +26,7 @@ In this article, you learn how to deploy models using the Azure Machine Learning
You can deploy managed compute models using the Azure Machine Learning SDK, but first, let's browse the model catalog and get the model ID you need for deployment.
-1. Sign in to [AI Studio](https://ai.azure.com) and go to the **Home** page.
+1. Sign in to [AI Foundry](https://ai.azure.com) and go to the **Home** page.
1. Select **Model catalog** from the left sidebar.
1. In the **Deployment options** filter, select **Managed compute**.
@@ -48,7 +48,7 @@ pip install azure-ai-ml
pip install azure-identity
```
-Use this code to authenticate with Azure Machine Learning and create a client object. Replace the placeholders with your subscription ID, resource group name, and AI Studio project name.
+Use this code to authenticate with Azure Machine Learning and create a client object. Replace the placeholders with your subscription ID, resource group name, and AI Foundry project name.
```python
from azure.ai.ml import MLClient
@@ -153,13 +153,13 @@ print(json.dumps(response_json, indent=2))
## Delete the deployment endpoint
-To delete deployments in AI Studio, select the **Delete** button on the top panel of the deployment details page.
+To delete deployments in AI Foundry portal, select the **Delete** button on the top panel of the deployment details page.
## Quota considerations
-To deploy and perform inferencing with real-time endpoints, you consume Virtual Machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once that happens, you can request for a quota increase.
+To deploy and perform inferencing with real-time endpoints, you consume Virtual Machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for AI Foundry, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once that happens, you can request for a quota increase.
## Next steps
-- Learn more about what you can do in [AI Studio](../what-is-ai-studio.md)
+- Learn more about what you can do in [AI Foundry](../what-is-ai-studio.md)
- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "管理されたコンピュートデプロイメントに関するガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name in the "deploy-models-managed.md" document from "AI Studio" to "AI Foundry" for content related to managed compute deployments. With this fix, users get accurate information based on Azure's new platform, and the steps for model deployment and inference stay consistent.
The main changes are as follows:
- The document title and description are revised to match AI Foundry.
- In the model catalog description, the platform name is changed to AI Foundry and its usage is stated consistently.
- The next-steps section highlights the move to AI Foundry, so users are guided smoothly to further information.
- Within the concrete steps, the sign-in platform name is adjusted to make clear that users work in the AI Foundry portal.
- The instructions for creating an Azure Machine Learning client object are corrected so that the project name refers to AI Foundry.
- The steps for deleting a deployment are also updated to describe the operation in the AI Foundry portal.
With these changes, users can work with managed compute deployments effectively in AI Foundry, and the document preserves platform consistency while offering a better user experience.
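The quota section quoted in the diff above explains that managed compute deployments consume per-region VM core quota: you can keep creating deployments until the regional limit is reached. A hedged sketch of that bookkeeping; the quota figure and VM core counts below are illustrative examples, not actual Azure defaults:

```python
# Hypothetical per-region core quota and per-VM-size core counts.
REGION_CORE_QUOTA = {"eastus": 24}
VM_CORES = {"Standard_DS3_v2": 4, "Standard_ND40rs_v2": 40}

def cores_available(region, deployed_cores):
    """Cores left in the region after existing deployments are counted."""
    return REGION_CORE_QUOTA[region] - deployed_cores

def can_deploy(region, deployed_cores, vm_size, instance_count):
    """A new deployment fits only if its total cores stay within the quota."""
    needed = VM_CORES[vm_size] * instance_count
    return needed <= cores_available(region, deployed_cores)
```

When `can_deploy` returns false for every eligible VM size, the document's guidance applies: request a quota increase for that region rather than retrying.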
articles/ai-studio/how-to/deploy-models-mistral-nemo.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Mistral Nemo chat model with Azure AI Studio
+title: How to use Mistral Nemo chat model with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Mistral Nemo chat model with Azure AI Studio.
+description: Learn how to use Mistral Nemo chat model with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -49,15 +49,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Mistral Nemo chat model with Azure AI Studio, you need the following prerequisites:
+To use Mistral Nemo chat model with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral Nemo chat model can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -83,7 +83,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral Nemo chat model.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral Nemo chat model.
### Create a client to consume the model
@@ -489,15 +489,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Mistral Nemo chat model with Azure AI Studio, you need the following prerequisites:
+To use Mistral Nemo chat model with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral Nemo chat model can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -521,7 +521,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral Nemo chat model.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral Nemo chat model.
### Create a client to consume the model
@@ -948,15 +948,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Mistral Nemo chat model with Azure AI Studio, you need the following prerequisites:
+To use Mistral Nemo chat model with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral Nemo chat model can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1003,7 +1003,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral Nemo chat model.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral Nemo chat model.
### Create a client to consume the model
@@ -1429,15 +1429,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Mistral Nemo chat model with Azure AI Studio, you need the following prerequisites:
+To use Mistral Nemo chat model with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral Nemo chat model can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1454,7 +1454,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral Nemo chat model.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral Nemo chat model.
### Create a client to consume the model
@@ -2029,7 +2029,7 @@ For more examples of how to use Mistral models, see the following examples and t
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-Mistral models deployed as a serverless API are offered by MistralAI through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Mistral models deployed as a serverless API are offered by MistralAI through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -2040,6 +2040,6 @@ For more information on how to track costs, see [Monitor costs for models offere
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Mistral Nemoチャットモデルの利用ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name used throughout "deploy-models-mistral-nemo.md", the guide for the Mistral Nemo chat model, from "Azure AI Studio" to "Azure AI Foundry". With this fix, users get accurate guidance based on the current platform name.
The main changes are:
- The document title and description are revised around Azure AI Foundry, stating clearly how the model is used.
- The prerequisites for using the Mistral Nemo chat model are adjusted for AI Foundry, keeping the related information consistent.
- The steps for deploying to serverless API endpoints switch the platform name to AI Foundry, reflecting the latest information.
- The tools and steps needed to deploy the model are updated to reference the Azure AI Foundry portal, which keeps the guidance consistent.
- The explanation of how to talk to the model through the API is likewise revised to match the AI Foundry portal.
- In the "Next steps" section, the AI Foundry-related links are revised, giving users a path to learn more about the new platform.
With these changes, users can work with the Mistral Nemo chat model more effectively in Azure AI Foundry and, through consistent information, get a better documentation experience.
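The updated tip lines above repeat that the Azure AI model inference API lets you talk to most deployed models "with the same code and structure". As a rough illustration of that shared structure, here is a minimal sketch that builds the request body such a chat-completions call carries; the message shape is the common part, while the helper name and default parameter values are illustrative assumptions, not values from the diff.

```python
import json

# Hypothetical helper: builds the JSON body for a chat-completions request.
# The role/content message shape is the structure shared across models
# such as Mistral Nemo; defaults here are placeholders for illustration.
def build_chat_request(messages, temperature=0.7, max_tokens=256):
    return {
        "messages": [{"role": role, "content": content} for role, content in messages],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    [("system", "You are a helpful assistant."),
     ("user", "How many languages are in the world?")]
)
print(json.dumps(payload, indent=2))
```

Because the body is model-agnostic, switching from one serverless deployment to another only changes the endpoint you post it to, not the payload itself.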
articles/ai-studio/how-to/deploy-models-mistral-open.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Mistral-7B and Mixtral chat models with Azure AI Studio
+title: How to use Mistral-7B and Mixtral chat models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Mistral-7B and Mixtral chat models with Azure AI Studio.
+description: Learn how to use Mistral-7B and Mixtral chat models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -80,7 +80,7 @@ The following models are available:
## Prerequisites
-To use Mistral-7B and Mixtral chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral-7B and Mixtral chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -114,7 +114,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral-7B and Mixtral chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral-7B and Mixtral chat models.
### Create a client to consume the model
@@ -369,7 +369,7 @@ The following models are available:
## Prerequisites
-To use Mistral-7B and Mixtral chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral-7B and Mixtral chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -401,7 +401,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral-7B and Mixtral chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral-7B and Mixtral chat models.
### Create a client to consume the model
@@ -675,7 +675,7 @@ The following models are available:
## Prerequisites
-To use Mistral-7B and Mixtral chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral-7B and Mixtral chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -730,7 +730,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral-7B and Mixtral chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral-7B and Mixtral chat models.
### Create a client to consume the model
@@ -993,7 +993,7 @@ The following models are available:
## Prerequisites
-To use Mistral-7B and Mixtral chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral-7B and Mixtral chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -1018,7 +1018,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral-7B and Mixtral chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral-7B and Mixtral chat models.
### Create a client to consume the model
@@ -1305,6 +1305,6 @@ It is a good practice to start with a low number of instances and scale up as ne
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Mistral-7BおよびMixtralチャットモデルの利用ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name used in "deploy-models-mistral-open.md", the guide for the Mistral-7B and Mixtral chat models, from "Azure AI Studio" to "Azure AI Foundry", so users can access up-to-date platform information.
The main changes are:
- The document title and description are revised around AI Foundry.
- The prerequisites for using the Mistral-7B and Mixtral chat models are adjusted for AI Foundry, strengthening consistency.
- The steps for deploying to serverless API endpoints now use the AI Foundry platform name.
- The explanation of how to interact with the models through the API is updated to match the Azure AI Foundry portal.
- In the "Next steps" section, the AI Foundry-related links are revised, pointing users to details about the new platform.
With these changes, users can work with the Mistral-7B and Mixtral chat models more effectively in Azure AI Foundry, based on current and consistent information.
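The diff lines above describe deploying these models to serverless API endpoints and then consuming them as an API. A minimal sketch of how such an endpoint is typically addressed, assuming a placeholder hostname, route, and api-version (none of these values come from the diff itself):

```python
# Hypothetical sketch: serverless deployments expose chat models under a
# common route, selected by an api-version query parameter. Both the route
# and the default version string here are illustrative assumptions.
def chat_completions_url(base_url: str, api_version: str = "2024-05-01-preview") -> str:
    return f"{base_url.rstrip('/')}/chat/completions?api-version={api_version}"

# Placeholder endpoint hostname, for illustration only.
url = chat_completions_url("https://my-endpoint.eastus2.models.ai.azure.com/")
print(url)
```

Because the route is shared, pointing the same client code at a Mistral-7B or a Mixtral deployment is a matter of changing `base_url`.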
articles/ai-studio/how-to/deploy-models-mistral.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Mistral premium chat models with Azure AI Studio
+title: How to use Mistral premium chat models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Mistral premium chat models with Azure AI Studio.
+description: Learn how to use Mistral premium chat models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -93,15 +93,15 @@ The following models are available:
## Prerequisites
-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -127,7 +127,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral premium chat models.
### Create a client to consume the model
@@ -577,15 +577,15 @@ The following models are available:
## Prerequisites
-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -609,7 +609,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral premium chat models.
### Create a client to consume the model
@@ -1080,15 +1080,15 @@ The following models are available:
## Prerequisites
-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1135,7 +1135,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral premium chat models.
### Create a client to consume the model
@@ -1605,15 +1605,15 @@ The following models are available:
## Prerequisites
-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1630,7 +1630,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Mistral premium chat models.
### Create a client to consume the model
@@ -2205,7 +2205,7 @@ For more examples of how to use Mistral models, see the following examples and t
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-Mistral models deployed as a serverless API are offered by MistralAI through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Mistral models deployed as a serverless API are offered by MistralAI through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -2216,6 +2216,6 @@ For more information on how to track costs, see [Monitor costs for models offere
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Mistralプレミアムチャットモデルの利用ガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name used in "deploy-models-mistral.md", the guide for the Mistral premium chat models, from "Azure AI Studio" to "Azure AI Foundry". With this fix, accurate platform information is provided and users can rely on current guidance when using the models.
The main changes are:
- The document title and description are revised for Azure AI Foundry, improving consistency around the platform name.
- The prerequisites for using the Mistral premium chat models are corrected to reference AI Foundry.
- The deployment steps for serverless API endpoints are updated to show the Azure AI Foundry portal.
- The explanation of how to interact with the models through the API is likewise revised for Azure AI Foundry.
- In the "Next steps" section, the AI Foundry-related links are updated so users can find details about the new platform.
With these changes, the Mistral premium chat models can be used more effectively in Azure AI Foundry, and users get a concise, consistent experience based on current information.
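The unchanged quota paragraph quoted in this file's diff states concrete per-deployment limits: 200,000 tokens per minute and 1,000 API requests per minute. A small back-of-the-envelope helper, using exactly those numbers, shows how to check whether a planned workload fits within one deployment before requesting a limit increase:

```python
# Per-deployment rate limits as stated in the quoted docs.
TOKENS_PER_MINUTE = 200_000
REQUESTS_PER_MINUTE = 1_000

def fits_rate_limit(requests_per_min: int, avg_tokens_per_request: int) -> bool:
    # A workload fits only if both the request-rate and the aggregate
    # token-rate stay under the per-deployment limits.
    total_tokens = requests_per_min * avg_tokens_per_request
    return requests_per_min <= REQUESTS_PER_MINUTE and total_tokens <= TOKENS_PER_MINUTE

print(fits_rate_limit(500, 300))  # 150,000 tokens/min -> True
print(fits_rate_limit(500, 500))  # 250,000 tokens/min -> False
```

Since the docs also note only one deployment per model per project is currently allowed, a workload that fails this check needs an Azure Support request rather than a second deployment.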
articles/ai-studio/how-to/deploy-models-openai.md
Diff
@@ -1,39 +1,40 @@
---
-title: How to deploy Azure OpenAI models with Azure AI Studio
+title: How to deploy Azure OpenAI models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to deploy Azure OpenAI models with Azure AI Studio.
+description: Learn how to deploy Azure OpenAI models with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
- ai-learning-hub
+ - ignite-2024
ms.topic: how-to
ms.date: 11/05/2024
ms.reviewer: fasantia
ms.author: mopeakande
author: msakande
---
-# How to deploy Azure OpenAI models with Azure AI Studio
+# How to deploy Azure OpenAI models with Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn to create Azure OpenAI model deployments in Azure AI Studio.
+In this article, you learn to create Azure OpenAI model deployments in Azure AI Foundry portal.
-Azure OpenAI Service offers a diverse set of models with different capabilities and price points. When you deploy Azure OpenAI models in Azure AI Studio, you can consume the deployments, using prompt flow or another tool. Model availability varies by region. To learn more about the details of each model see [Azure OpenAI Service models](../../ai-services/openai/concepts/models.md).
+Azure OpenAI Service offers a diverse set of models with different capabilities and price points. When you deploy Azure OpenAI models in Azure AI Foundry portal, you can consume the deployments, using prompt flow or another tool. Model availability varies by region. To learn more about the details of each model see [Azure OpenAI Service models](../../ai-services/openai/concepts/models.md).
-To modify and interact with an Azure OpenAI model in the [Azure AI Studio](https://ai.azure.com) playground, first you need to deploy a base Azure OpenAI model to your project. Once the model is deployed and available in your project, you can consume its REST API endpoint as-is or customize further with your own data and other components (embeddings, indexes, and more).
+To modify and interact with an Azure OpenAI model in the [Azure AI Foundry](https://ai.azure.com) playground, first you need to deploy a base Azure OpenAI model to your project. Once the model is deployed and available in your project, you can consume its REST API endpoint as-is or customize further with your own data and other components (embeddings, indexes, and more).
## Prerequisites
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio project](create-projects.md).
+- An [Azure AI Foundry project](create-projects.md).
## Deploy an Azure OpenAI model from the model catalog
-Follow the steps below to deploy an Azure OpenAI model such as `gpt-4o-mini` to a real-time endpoint from the AI Studio [model catalog](./model-catalog-overview.md):
+Follow the steps below to deploy an Azure OpenAI model such as `gpt-4o-mini` to a real-time endpoint from the AI Foundry portal [model catalog](./model-catalog-overview.md):
[!INCLUDE [open-catalog](../includes/open-catalog.md)]
@@ -51,9 +52,9 @@ Follow the steps below to deploy an Azure OpenAI model such as `gpt-4o-mini` to
## Deploy an Azure OpenAI model from your project
-Alternatively, you can initiate deployment by starting from your project in AI Studio.
+Alternatively, you can initiate deployment by starting from your project in AI Foundry portal.
-1. Go to your project in AI Studio.
+1. Go to your project in AI Foundry portal.
1. From the left sidebar of your project, go to **My assets** > **Models + endpoints**.
1. Select **+ Deploy model** > **Deploy base model**.
1. In the **Collections** filter, select **Azure OpenAI**.
@@ -79,16 +80,16 @@ For Azure OpenAI models, the default quota for models varies by model and region
## Quota for deploying and inferencing a model
-For Azure OpenAI models, deploying and inferencing consume quota that is assigned to your subscription on a per-region, per-model basis in units of Tokens-per-Minute (TPM). When you sign up for Azure AI Studio, you receive default quota for most of the available models. Then, you assign TPM to each deployment as it is created, thus reducing the available quota for that model by the amount you assigned. You can continue to create deployments and assign them TPMs until you reach your quota limit.
+For Azure OpenAI models, deploying and inferencing consume quota that is assigned to your subscription on a per-region, per-model basis in units of Tokens-per-Minute (TPM). When you sign up for Azure AI Foundry, you receive default quota for most of the available models. Then, you assign TPM to each deployment as it is created, thus reducing the available quota for that model by the amount you assigned. You can continue to create deployments and assign them TPMs until you reach your quota limit.
Once you reach your quota limit, the only way for you to create new deployments of that model is to:
- Request more quota by submitting a [quota increase form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4xPXO648sJKt4GoXAed-0pURVJWRU4yRTMxRkszU0NXRFFTTEhaT1g1NyQlQCN0PWcu).
- Adjust the allocated quota on other model deployments to free up tokens for new deployments on the [Azure OpenAI Portal](https://oai.azure.com/portal).
-To learn more about quota, see [Azure AI Studio quota](./quota.md) and [Manage Azure OpenAI Service quota](../../ai-services/openai/how-to/quota.md?tabs=rest).
+To learn more about quota, see [Azure AI Foundry quota](./quota.md) and [Manage Azure OpenAI Service quota](../../ai-services/openai/how-to/quota.md?tabs=rest).
## Related content
-- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md)
+- Learn more about what you can do in [Azure AI Foundry](../what-is-ai-studio.md)
- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "Azure OpenAIモデルデプロイガイドのプラットフォーム名更新"
}
Explanation
This change updates the platform name used in "deploy-models-openai.md", the guide for deploying Azure OpenAI models, from "Azure AI Studio" to "Azure AI Foundry", so that users are given accurate and current information.
The main changes are:
- The document title and description are revised to match the AI Foundry portal.
- The prerequisites for using Azure OpenAI models now call for an "Azure AI Foundry project", improving consistency around the environment.
- The steps for deploying an Azure OpenAI model from the model catalog are updated for the AI Foundry portal.
- The walkthrough is framed around Azure AI Foundry, making the new portal's workflow easier to follow.
- The explanation of quota for deploying and inferencing a model is also updated for Azure AI Foundry, giving users platform-specific information.
With these changes, users deploying Azure OpenAI models get clear guidance based on current information and can work through Azure AI Foundry more effectively.
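The quota paragraph quoted in this file's diff explains the bookkeeping in prose: each deployment is assigned Tokens-per-Minute (TPM) from a per-region, per-model pool, reducing what remains until the limit is hit. A toy model of that accounting, with a placeholder regional limit (the real default varies by model and region):

```python
class TpmQuota:
    """Toy model of the per-region, per-model TPM quota described in the doc."""

    def __init__(self, limit_tpm: int):
        self.limit = limit_tpm
        self.assigned = 0

    def deploy(self, tpm: int) -> int:
        # Each new deployment reserves TPM from the shared regional pool;
        # once the pool is exhausted, the doc says to request more quota
        # or free up TPM from other deployments.
        if self.assigned + tpm > self.limit:
            raise ValueError("quota exceeded: request an increase or reallocate TPM")
        self.assigned += tpm
        return self.limit - self.assigned  # remaining quota after this deployment

quota = TpmQuota(240_000)  # placeholder regional limit, for illustration
print(quota.deploy(100_000))  # 140000 remaining
print(quota.deploy(100_000))  # 40000 remaining
```

This mirrors why the doc's only two escape hatches are a quota-increase request or reallocating TPM across existing deployments.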
articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Phi-3.5 chat model with vision with Azure AI Studio
+title: How to use Phi-3.5 chat model with vision with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Phi-3.5 chat model with vision with Azure AI Studio.
+description: Learn how to use Phi-3.5 chat model with vision with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -37,15 +37,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3.5 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3.5 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -80,7 +80,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3.5 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3.5 chat model with vision.
### Create a client to consume the model
@@ -405,15 +405,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3.5 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3.5 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -446,7 +446,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3.5 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3.5 chat model with vision.
### Create a client to consume the model
@@ -802,15 +802,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3.5 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3.5 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -866,7 +866,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3.5 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3.5 chat model with vision.
### Create a client to consume the model
@@ -1196,15 +1196,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3.5 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3.5 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3.5 chat model with vision can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1230,7 +1230,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3.5 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3.5 chat model with vision.
### Create a client to consume the model
@@ -1637,6 +1637,6 @@ It is a good practice to start with a low number of instances and scale up as ne
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Platform name update for the vision capability of the Phi-3.5 chat model"
}
Explanation
This change updates the document "deploy-models-phi-3-5-vision.md", renaming the platform referenced in the content about the vision capability of the Phi-3.5 chat model from "Azure AI Studio" to "Azure AI Foundry". The correction provides accurate platform information so that users can work with the model based on the latest guidance.
The main changes are as follows:
- The document title and description are revised for AI Foundry, improving consistency with the platform.
- The prerequisites for using the Phi-3.5 chat model with vision are updated to reference AI Foundry.
- The instructions for deploying to serverless API endpoints are updated to show how to use the Azure AI Foundry portal.
- The description of interacting with the model through the API is likewise revised for Azure AI Foundry.
- The next-steps section is updated with AI Foundry-related content, making the new platform's features easier to adopt.
These changes make the vision capability of the Phi-3.5 chat model smoother to use in Azure AI Foundry and give users a consistent experience based on up-to-date information.
articles/ai-studio/how-to/deploy-models-phi-3-vision.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Phi-3 chat model with vision with Azure AI Studio
+title: How to use Phi-3 chat model with vision with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Phi-3 chat model with vision with Azure AI Studio.
+description: Learn how to use Phi-3 chat model with vision with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -37,7 +37,7 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -71,7 +71,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 chat model with vision.
### Create a client to consume the model
@@ -357,7 +357,7 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -389,7 +389,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 chat model with vision.
### Create a client to consume the model
@@ -700,7 +700,7 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -755,7 +755,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 chat model with vision.
### Create a client to consume the model
@@ -1040,7 +1040,7 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use Phi-3 chat model with vision with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 chat model with vision with Azure AI Foundry, you need the following prerequisites:
### A model deployment
@@ -1065,7 +1065,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 chat model with vision.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 chat model with vision.
### Create a client to consume the model
@@ -1424,6 +1424,6 @@ It is a good practice to start with a low number of instances and scale up as ne
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Platform name update for the vision capability of the Phi-3 chat model"
}
Explanation
This change updates the document "deploy-models-phi-3-vision.md", renaming the platform referenced in the content about the Phi-3 chat model with vision from "Azure AI Studio" to "Azure AI Foundry". The goal of the correction is to give users accurate, up-to-date platform information and improve the overall experience.
The main changes are as follows:
- The document title and description are revised for AI Foundry, keeping the platform references consistent.
- The prerequisites for using the Phi-3 chat model with vision are updated to reference Azure AI Foundry.
- The steps for deploying the model to serverless API endpoints are revised against the latest AI Foundry information, making the procedure easier to follow.
- The explanation of the Azure AI inference API also highlights compatibility with the AI Foundry portal.
- Links to other related information are updated to point to Azure AI Foundry.
As a result, the documentation for using the Phi-3 chat model's vision capability in Azure AI Foundry is consistent, and users have the information they need for a smoother start on the new platform.
articles/ai-studio/how-to/deploy-models-phi-3.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use Phi-3 family chat models with Azure AI Studio
+title: How to use Phi-3 family chat models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Phi-3 family chat models with Azure AI Studio.
+description: Learn how to use Phi-3 family chat models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -71,15 +71,15 @@ The following models are available:
## Prerequisites
-To use Phi-3 family chat models with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 family chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3 family chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -114,7 +114,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 family chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 family chat models.
### Create a client to consume the model
@@ -399,15 +399,15 @@ The following models are available:
## Prerequisites
-To use Phi-3 family chat models with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 family chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3 family chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -440,7 +440,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 family chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 family chat models.
### Create a client to consume the model
@@ -750,15 +750,15 @@ The following models are available:
## Prerequisites
-To use Phi-3 family chat models with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 family chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3 family chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -814,7 +814,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 family chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 family chat models.
### Create a client to consume the model
@@ -1113,15 +1113,15 @@ The following models are available:
## Prerequisites
-To use Phi-3 family chat models with Azure AI Studio, you need the following prerequisites:
+To use Phi-3 family chat models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
Phi-3 family chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1147,7 +1147,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Phi-3 family chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including Phi-3 family chat models.
### Create a client to consume the model
@@ -1485,6 +1485,6 @@ You can use this [sample notebook](https://github.com/Azure/azureml-examples/blo
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
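The tip repeated throughout these hunks — that the Azure AI model inference API lets you talk to most deployed models "with the same code and structure" — comes down to one request/response schema. As a sketch (a simplified subset of the OpenAI-compatible schema the API exposes, not the full contract), the body and the response parsing are model-independent:

```python
import json

def build_chat_request(system_prompt, user_prompt, temperature=0.7, max_tokens=256):
    """JSON body for a plain-text chat completions call; the same shape
    works for any model behind the Azure AI model inference API."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    })

def first_reply(response_json):
    """Pull the assistant text out of a chat completions response."""
    return json.loads(response_json)["choices"][0]["message"]["content"]

# Shape of a (simulated) response, per the OpenAI-compatible schema:
simulated = json.dumps({"choices": [{"message": {"role": "assistant",
                                                 "content": "Hello!"}}]})
print(first_reply(simulated))  # → Hello!
```

Because the schema is shared, swapping Phi-3-mini for Phi-3-medium (or another serverless model) changes only the endpoint you POST to, not this code.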
Summary
{
"modification_type": "minor update",
"modification_title": "Platform name update for the Phi-3 family chat models"
}
Explanation
This change updates "deploy-models-phi-3.md", renaming the platform for the Phi-3 family chat models from "Azure AI Studio" to "Azure AI Foundry". The correction lets users work with the models based on the latest platform information.
The main changes are as follows:
- The document title and description are updated in line with AI Foundry, keeping the information consistent.
- The prerequisites for using the Phi-3 family chat models are revised to reference Azure AI Foundry.
- The steps for deploying a model to a serverless API endpoint are updated to reference the AI Foundry portal.
- The information about the Azure AI inference API is adjusted to AI Foundry-related content.
- Other related links are corrected to match AI Foundry.
With this, the documentation on using the Phi-3 family chat models reflects the latest information, and users get a more accurate and consistent experience in AI Foundry.
articles/ai-studio/how-to/deploy-models-serverless-availability.md
Diff
@@ -1,7 +1,7 @@
---
title: Region availability for models in Serverless API endpoints
titleSuffix: Azure AI Foundry
-description: Learn about the regions where each model is available for deployment in serverless API endpoints via Azure AI Studio.
+description: Learn about the regions where each model is available for deployment in serverless API endpoints via Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
Summary
{
"modification_type": "minor update",
"modification_title": "Region information update for serverless API endpoint models"
}
Explanation
This change updates "deploy-models-serverless-availability.md", revising the description of the regions where each model can be deployed to serverless API endpoints. Specifically, the platform name mentioned in the document is changed from "Azure AI Studio" to "Azure AI Foundry".
The main change is as follows:
- The document description is revised so that the information about deployment regions refers to AI Foundry, giving users accurate platform information.
With this update, the region availability information reflects the current state of the service, and users get correct information when working in AI Foundry.
articles/ai-studio/how-to/deploy-models-serverless-connect.md
Diff
@@ -6,16 +6,17 @@ manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
ms.date: 5/21/2024
-ms.author: mopeakande
+ms.author: mopeakande
author: msakande
ms.reviewer: fasantia
reviewer: santiagxf
-ms.custom:
- - build-2024
- - serverless
+ms.custom:
+ - build-2024
+ - serverless
+ - ignite-2024
---
-# Consume serverless API endpoints from a different Azure AI Studio project or hub
+# Consume serverless API endpoints from a different Azure AI Foundry project or hub
In this article, you learn how to configure an existing serverless API endpoint in a different project or hub than the one that was used to create the deployment.
@@ -32,17 +33,17 @@ The need to consume a serverless API endpoint in a different project or hub than
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](create-azure-ai-resource.md).
+- An [Azure AI Foundry hub](create-azure-ai-resource.md).
-- An [Azure AI Studio project](create-projects.md).
+- An [Azure AI Foundry project](create-projects.md).
- A model [deployed to a serverless API endpoint](deploy-models-serverless.md). This article assumes that you previously deployed the **Meta-Llama-3-8B-Instruct** model. To learn how to deploy this model as a serverless API, see [Deploy models as serverless APIs](deploy-models-serverless.md).
-- You need to install the following software to work with Azure AI Studio:
+- You need to install the following software to work with Azure AI Foundry:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
- You can use any compatible web browser to navigate [Azure AI Studio](https://ai.azure.com).
+ You can use any compatible web browser to navigate [Azure AI Foundry](https://ai.azure.com).
# [Azure CLI](#tab/cli)
@@ -87,9 +88,9 @@ Follow these steps to create a connection:
1. Connect to the project or hub where the endpoint is deployed:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
- Go to [Azure AI Studio](https://ai.azure.com) and navigate to the project where the endpoint you want to connect to is deployed.
+ Go to [Azure AI Foundry](https://ai.azure.com) and navigate to the project where the endpoint you want to connect to is deployed.
# [Azure CLI](#tab/cli)
@@ -115,9 +116,9 @@ Follow these steps to create a connection:
1. Get the endpoint's URL and credentials for the endpoint you want to connect to. In this example, you get the details for an endpoint name **meta-llama3-8b-qwerty**.
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
- 1. From the left sidebar of your project in AI Studio, go to **My assets** > **Models + endpoints** to see the list of deployments in the project.
+ 1. From the left sidebar of your project in AI Foundry portal, go to **My assets** > **Models + endpoints** to see the list of deployments in the project.
1. Select the deployment you want to connect to.
@@ -140,7 +141,7 @@ Follow these steps to create a connection:
1. Now, connect to the project or hub **where you want to create the connection**:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
Go to the project where the connection needs to be created to.
@@ -168,9 +169,9 @@ Follow these steps to create a connection:
1. Create the connection in the project:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
- 1. From the left sidebar of your project in AI Studio, select **Management center**.
+ 1. From the left sidebar of your project in AI Foundry portal, select **Management center**.
1. From the left sidebar of the management center, select **Connected resources**.
@@ -217,7 +218,7 @@ Follow these steps to create a connection:
1. To validate that the connection is working:
- 1. Return to your project in AI Studio.
+ 1. Return to your project in AI Foundry portal.
1. From the left sidebar of your project, go to **Build and customize** > **Prompt flow**.
@@ -238,5 +239,5 @@ Follow these steps to create a connection:
## Related content
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
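The validation step in the diff above goes through the portal and prompt flow; under the assumption that the endpoint speaks the Azure AI model inference chat completions route (the URL shape, auth header, and the `meta-llama3-8b-qwerty` host below are placeholders to fill in from the deployment's details page), the same URL-and-key pair can be sanity-checked with a direct REST probe:

```python
import json
import urllib.request

def validation_request(target_url, api_key):
    """A one-message probe built from the endpoint URL and key copied off the
    deployment's details page; sending it confirms the pair works outside
    the portal."""
    body = json.dumps({
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 5,
    }).encode("utf-8")
    return urllib.request.Request(
        target_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 # Assumption: bearer auth; some endpoints use `api-key` instead.
                 "Authorization": "Bearer " + api_key},
        method="POST",
    )

# Hypothetical endpoint host and region -- replace with your own values.
req = validation_request(
    "https://meta-llama3-8b-qwerty.<region>.models.ai.azure.com", "<your-key>")
# urllib.request.urlopen(req) would perform the call; it needs a live endpoint.
```

An HTTP 200 with a `choices` array means the connection details are good; a 401 usually means the key was copied from the wrong deployment.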
Summary
{
"modification_type": "minor update",
"modification_title": "Revision of the serverless API endpoint connection guide"
}
Explanation
This change updates "deploy-models-serverless-connect.md", revising the steps for consuming a serverless API endpoint from a different project or hub. It primarily renames the platform from "Azure AI Studio" to "Azure AI Foundry".
The main changes are as follows:
- Mentions of "Azure AI Studio" throughout the document are replaced with "Azure AI Foundry", so users get the latest, most accurate platform information.
- The descriptions of the required software and procedures are also updated to AI Foundry-related content.
- Related links are corrected, adding information about "Azure AI Foundry".
With this update, the documentation on configuring and connecting to serverless API endpoints reflects the latest platform information, strengthening the guidance users need to complete the procedure in AI Foundry.
articles/ai-studio/how-to/deploy-models-serverless.md
Diff
@@ -1,16 +1,16 @@
---
title: Deploy models as serverless APIs
titleSuffix: Azure AI Foundry
-description: Learn to deploy models as serverless APIs, using Azure AI Studio.
+description: Learn to deploy models as serverless APIs, using Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
ms.date: 07/18/2024
-ms.author: mopeakande
+ms.author: mopeakande
author: msakande
ms.reviewer: fasantia
reviewer: santiagxf
-ms.custom: build-2024, serverless, devx-track-azurecli
+ms.custom: build-2024, serverless, devx-track-azurecli, ignite-2024
---
# Deploy models as serverless APIs
@@ -27,17 +27,17 @@ This article uses a Meta Llama model deployment for illustration. However, you c
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](create-azure-ai-resource.md).
+- An [Azure AI Foundry hub](create-azure-ai-resource.md).
-- An [Azure AI Studio project](create-projects.md).
+- An [Azure AI Foundry project](create-projects.md).
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
-- You need to install the following software to work with Azure AI Studio:
+- You need to install the following software to work with Azure AI Foundry:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
- You can use any compatible web browser to navigate [Azure AI Studio](https://ai.azure.com).
+ You can use any compatible web browser to navigate [Azure AI Foundry](https://ai.azure.com).
# [Azure CLI](#tab/cli)
@@ -132,7 +132,7 @@ Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod
1. Create the model's marketplace subscription. When you create a subscription, you accept the terms and conditions associated with the model offer.
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
1. On the model's **Details** page, select **Deploy**. A **Deployment options** window opens up, giving you the choice between serverless API deployment and deployment using a managed compute.
@@ -259,7 +259,7 @@ Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod
1. At any point, you can see the model offers to which your project is currently subscribed:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
1. Go to the [Azure portal](https://portal.azure.com).
@@ -314,7 +314,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
1. Create the serverless endpoint
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
1. To deploy a Microsoft model that doesn't require subscribing to a model offering:
1. Select **Deploy** and then select **Serverless API with Azure AI Content Safety (preview)** to open the deployment wizard.
@@ -328,7 +328,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
:::image type="content" source="../media/deploy-monitor/serverless/deployment-name.png" alt-text="A screenshot showing how to specify the name of the deployment you want to create." lightbox="../media/deploy-monitor/serverless/deployment-name.png":::
> [!TIP]
- > The **Content filter (preview)** option is enabled by default. Leave the default setting for the service to detect harmful content such as hate, self-harm, sexual, and violent content. For more information about content filtering (preview), see [Content filtering in Azure AI Studio](../concepts/content-filtering.md).
+ > The **Content filter (preview)** option is enabled by default. Leave the default setting for the service to detect harmful content such as hate, self-harm, sexual, and violent content. For more information about content filtering (preview), see [Content filtering in Azure AI Foundry portal](../concepts/content-filtering.md).
1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page.
@@ -466,7 +466,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
1. At any point, you can see the endpoints deployed to your project:
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
1. Go to your project.
@@ -515,7 +515,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
1. The created endpoint uses key authentication for authorization. Use the following steps to get the keys associated with a given endpoint.
- # [AI Studio](#tab/azure-ai-studio)
+ # [AI Foundry](#tab/azure-ai-studio)
You can select the deployment, and note the endpoint's _Target URI_ and _Key_. Use them to call the deployment and generate predictions.
@@ -553,15 +553,15 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
## Use the serverless API endpoint
-Models deployed in Azure Machine Learning and Azure AI Studio in Serverless API endpoints support the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
+Models deployed in Azure Machine Learning and Azure AI Foundry in Serverless API endpoints support the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
Read more about the [capabilities of this API](../reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../reference/reference-model-inference-api.md#getting-started).
## Network isolation
-Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the AI Studio Hub that has the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your AI Studio Hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub.
+Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the AI Foundry portal Hub that has the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your AI Foundry Hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub.
-To set the PNA flag for the Azure AI Studio hub:
+To set the PNA flag for the Azure AI Foundry hub:
1. Go to the [Azure portal](https://portal.azure.com).
2. Search for the Resource group to which the hub belongs, and select the **Azure AI hub** from the resources listed for this resource group.
@@ -573,11 +573,11 @@ To set the PNA flag for the Azure AI Studio hub:
You can delete model subscriptions and endpoints. Deleting a model subscription makes any associated endpoint become *Unhealthy* and unusable.
-# [AI Studio](#tab/azure-ai-studio)
+# [AI Foundry](#tab/azure-ai-studio)
To delete a serverless API endpoint:
-1. Go to the [Azure AI Studio](https://ai.azure.com).
+1. Go to the [Azure AI Foundry](https://ai.azure.com).
1. Go to your project.
@@ -659,7 +659,7 @@ You can find the pricing information on the __Pricing and terms__ tab of the dep
#### Cost for non-Microsoft models
-Non-Microsoft models deployed as serverless API endpoints are offered through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or fine-tuning these models.
+Non-Microsoft models deployed as serverless API endpoints are offered through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying or fine-tuning these models.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
@@ -670,7 +670,7 @@ For more information on how to track costs, see [Monitor costs for models offere
## Permissions required to subscribe to model offerings
-Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Owner__, __Contributor__, or __Azure AI Developer__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
+Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Owner__, __Contributor__, or __Azure AI Developer__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
- On the Azure subscription—to subscribe the workspace to the Azure Marketplace offering, once for each workspace, per offering:
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
@@ -687,9 +687,9 @@ Azure role-based access controls (Azure RBAC) are used to grant access to operat
- `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
- `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
-For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
## Related content
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
-* [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)
+* [Fine-tune a Meta Llama 2 model in Azure AI Foundry portal](fine-tune-model-llama.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Fixes to the guide for deploying models as serverless APIs"
}
Explanation
This change updates the "deploy-models-serverless.md" document, which covers the steps for deploying models as serverless APIs. The primary change is that navigation and reference targets were renamed from "Azure AI Studio" to "Azure AI Foundry".
The main changes are:
- In the document's descriptive text, the platform name was updated from "Azure AI Studio" to "Azure AI Foundry".
- The coverage of Azure role-based access control (RBAC) was revised to match the new platform.
- Each step of the walkthrough was updated accordingly, so users have the information they need when working with the new platform.
With this update, users get the latest steps and platform information for deploying models to serverless API endpoints, making it easier to work in Azure AI Foundry.
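The underlying article tells readers to note a deployment's _Target URI_ and _Key_ and use them to call the endpoint. As a rough illustration of that step, here is a minimal Python sketch — the `/chat/completions` route and payload shape are assumptions based on the Azure AI Model Inference API the diff references, and the URI and key are placeholders, not values from the article:

```python
import json

def build_chat_request(target_uri: str, api_key: str, prompt: str) -> dict:
    """Assemble the pieces of a chat-completions call to a serverless API
    endpoint, using the key authentication the article describes."""
    return {
        "url": f"{target_uri.rstrip('/')}/chat/completions",  # assumed route
        "headers": {
            "Authorization": f"Bearer {api_key}",  # the endpoint's Key
            "Content-Type": "application/json",
        },
        "body": json.dumps({"messages": [{"role": "user", "content": prompt}]}),
    }

# Placeholder values; real ones come from the deployment's details page.
req = build_chat_request(
    "https://meta-llama3-8b-qwerty.example.inference.ai.azure.com",
    "<API_KEY>", "Hello")
print(req["url"])
```

The request dict can then be handed to any HTTP client; nothing here is sent over the network, it only shows how the Target URI and Key fit together.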
articles/ai-studio/how-to/deploy-models-timegen-1.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to deploy TimeGEN-1 model with Azure AI Studio
+title: How to deploy TimeGEN-1 model with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to deploy TimeGEN-1 with Azure AI Studio.
+description: Learn how to deploy TimeGEN-1 with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -10,14 +10,14 @@ ms.reviewer: kritifaujdar
reviewer: fkriti
ms.author: mopeakande
author: msakande
-ms.custom: references_regions, build-2024
+ms.custom: references_regions, build-2024, ignite-2024
---
-# How to deploy a TimeGEN-1 model with Azure AI Studio
+# How to deploy a TimeGEN-1 model with Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you learn how to use Azure AI Studio to deploy the TimeGEN-1 model as a serverless API with pay-as-you-go billing.
+In this article, you learn how to use Azure AI Foundry to deploy the TimeGEN-1 model as a serverless API with pay-as-you-go billing.
You filter on the Nixtla collection to browse the TimeGEN-1 model in the [Model Catalog](model-catalog.md).
The Nixtla TimeGEN-1 is a generative, pretrained forecasting and anomaly detection model for time series data. TimeGEN-1 can produce accurate forecasts for new time series without training, using only historical values and exogenous covariates as inputs.
@@ -33,7 +33,7 @@ You can deploy TimeGEN-1 as a serverless API with pay-as-you-go billing. Nixtla
### Prerequisites
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions don't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio project](../how-to/create-projects.md). The serverless API model deployment offering for TimeGEN-1 is only available with projects created in these regions:
+- An [Azure AI Foundry project](../how-to/create-projects.md). The serverless API model deployment offering for TimeGEN-1 is only available with projects created in these regions:
> [!div class="checklist"]
> * East US
@@ -46,7 +46,7 @@ You can deploy TimeGEN-1 as a serverless API with pay-as-you-go billing. Nixtla
For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, visit [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, visit [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
#### Estimate the number of tokens needed
@@ -91,7 +91,7 @@ These steps demonstrate the deployment of TimeGEN-1. To create a deployment:
4. Search for and select **TimeGEN-1** to open its Details page.
1. Select **Deploy** to open a serverless API deployment window for the model.
-1. Alternatively, you can initiate a deployment by starting from the **Models + endpoints** page in AI Studio.
+1. Alternatively, you can initiate a deployment by starting from the **Models + endpoints** page in AI Foundry portal.
1. From the left navigation pane of your project, select **My assets** > **Models + endpoints**.
1. Select **+ Deploy model** > **Deploy base model**.
1. Search for and select **TimeGEN-1**. to open the Model's Details page.
@@ -134,12 +134,12 @@ For more information about use of the APIs, visit the [reference](#reference-for
#### Forecast API
-Use the method `POST` to send the request to the `/forecast_multi_series` route:
+Use the method `POST` to send the request to the `/forecast` route:
__Request__
```rest
-POST /forecast_multi_series HTTP/1.1
+POST /forecast HTTP/1.1
Host: <DEPLOYMENT_URI>
Authorization: Bearer <TOKEN>
Content-type: application/json
@@ -151,8 +151,7 @@ The Payload JSON formatted string contains these parameters:
| Key | Type | Default | Description |
|-----|-----|-----|-----|
-| **DataFrame (`df`)** | `DataFrame` | No default. This value must be specified. | The DataFrame on which the function operates. Expected to contain at least these columns:<br><br>`time_col`: Column name in `df` that contains the time indices of the time series. This column is typically a datetime column with regular intervals - for example, hourly, daily, monthly data points.<br><br>`target_col`: Column name in `df` that contains the target variable of the time series, in other words, the variable we wish to predict or analyze.<br><br>Additionally, you can pass multiple time series (stacked in the dataframe) considering another column:<br><br>`id_col`: Column name in `df` that identifies unique time series. Each unique value in this column corresponds to a unique time series.|
-| **Forecast Horizon (`h`)** | `int` | No default. This value must be specified. | Forecast horizon |
+| **Forecast Horizon (`fh`)** | `int` | No default. This value must be specified. | Forecast horizon |
| **Frequency (`freq`)** | `str` | None |Frequency of the data. By default, the frequency is inferred automatically. For more information, visit [pandas available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |
| **Identifying Column (`id_col`)** | `str` | `unique_id` | Column that identifies each series.|
|**Time Column (`time_col`)**| `str` |`ds` | Column that identifies each timestep; its values can be timestamps or integers. |
@@ -274,7 +273,7 @@ This JSON sample is an example response:
### Cost and quota considerations for TimeGEN-1 deployed as a serverless API
-Nixtla offers TimeGEN-1 deployed as a serverless API through the Azure Marketplace. TimeGEN-1 is integrated with Azure AI Studio for use. You can find more information about Azure Marketplace pricing when you deploy the model.
+Nixtla offers TimeGEN-1 deployed as a serverless API through the Azure Marketplace. TimeGEN-1 is integrated with Azure AI Foundry for use. You can find more information about Azure Marketplace pricing when you deploy the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -284,6 +283,6 @@ Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
## Related content
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Fixes to the TimeGEN-1 model deployment guide"
}
Explanation
This change updates the "deploy-models-timegen-1.md" document so that the steps for deploying the TimeGEN-1 model target Azure AI Foundry rather than Azure AI Studio. It covers the platform rename plus related fixes that keep the steps consistent.
The main changes are:
- The document title and description were updated from "Azure AI Studio" to "Azure AI Foundry".
- The prerequisites and steps covering project creation and permissions were updated for the new platform.
- The API call details were also corrected: the request now targets the `/forecast` route (previously `/forecast_multi_series`) and the forecast-horizon parameter is documented as `fh` instead of `h`.
- The related-content links now point to Azure AI Foundry as well.
These changes give users accurate, up-to-date steps for the current platform and a clearer picture of how to deploy the TimeGEN-1 model.
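The diff above renames the request route to `/forecast` and the horizon parameter to `fh`. A minimal sketch of assembling that request body, using the column defaults from the parameter table (`unique_id`, `ds`) — the exact wire format is an assumption inferred from the table, not a verified call against the service:

```python
import json

def build_forecast_body(rows, fh, freq=None, id_col="unique_id", time_col="ds"):
    """Assemble the JSON body for `POST /forecast` per the updated parameter
    table: `df` (the series rows), the required horizon `fh`, and optional
    `freq`/`id_col`/`time_col` with their documented defaults."""
    body = {"df": rows, "fh": fh, "id_col": id_col, "time_col": time_col}
    if freq is not None:  # by default the service infers the frequency
        body["freq"] = freq
    return json.dumps(body)

# Two sample rows of one series; `y` is the target column.
rows = [{"unique_id": "store-1", "ds": "2024-01-01", "y": 10.0},
        {"unique_id": "store-1", "ds": "2024-01-02", "y": 12.5}]
payload = build_forecast_body(rows, fh=7, freq="D")
print(payload)
```

The payload would be sent with the `Authorization: Bearer <TOKEN>` header shown in the diff's `__Request__` block.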
articles/ai-studio/how-to/deploy-models-tsuzumi.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to use tsuzumi-7b models with Azure AI Studio
+title: How to use tsuzumi-7b models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use tsuzumi-7b models with Azure AI Studio.
+description: Learn how to use tsuzumi-7b models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -36,15 +36,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use tsuzumi-7b models with Azure AI Studio, you need the following prerequisites:
+To use tsuzumi-7b models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
tsuzumi-7b models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -70,7 +70,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including tsuzumi-7b models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including tsuzumi-7b models.
### Create a client to consume the model
@@ -285,15 +285,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use tsuzumi-7b models with Azure AI Studio, you need the following prerequisites:
+To use tsuzumi-7b models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
tsuzumi-7b models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -317,7 +317,7 @@ npm install @azure-rest/ai-inference
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including tsuzumi-7b models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including tsuzumi-7b models.
### Create a client to consume the model
@@ -602,15 +602,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use tsuzumi-7b models with Azure AI Studio, you need the following prerequisites:
+To use tsuzumi-7b models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
tsuzumi-7b models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -657,7 +657,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including tsuzumi-7b models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including tsuzumi-7b models.
### Create a client to consume the model
@@ -936,15 +936,15 @@ You can learn more about the models in their respective model card:
## Prerequisites
-To use tsuzumi-7b models with Azure AI Studio, you need the following prerequisites:
+To use tsuzumi-7b models with Azure AI Foundry, you need the following prerequisites:
### A model deployment
**Deployment to serverless APIs**
tsuzumi-7b models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -961,7 +961,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.
> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including tsuzumi-7b models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry portal with the same code and structure, including tsuzumi-7b models.
### Create a client to consume the model
@@ -1326,7 +1326,7 @@ The following example shows how to handle events when the model detects harmful
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-tsuzumi models deployed as a serverless API are offered by NTTDATA through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+tsuzumi models deployed as a serverless API are offered by NTTDATA through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -1337,6 +1337,6 @@ For more information on how to track costs, see [Monitor costs for models offere
* [Azure AI Model Inference API](../reference/reference-model-inference-api.md)
* [Deploy models as serverless APIs](deploy-models-serverless.md)
-* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
+* [Consume serverless API endpoints from a different Azure AI Foundry project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
Summary
{
"modification_type": "minor update",
"modification_title": "Fixes to the guide for using tsuzumi-7b models"
}
Explanation
This change updates the "deploy-models-tsuzumi.md" document, which covers how to use tsuzumi-7b models. The main points are the platform rename and consistency fixes to the related information.
The main changes are:
- The document title and description now use "Azure AI Foundry" instead of "Azure AI Studio".
- The references to deploying tsuzumi-7b models were updated for the new platform.
- In the serverless API deployment sections, the portal is now Azure AI Foundry, and the required steps are stated clearly.
- The information about the Azure AI Model Inference API was updated likewise, so the steps and prerequisites reflect use on the new platform.
These changes let readers follow accurate steps for the current platform environment and deepen their understanding of deploying and using tsuzumi-7b models.
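The quota note carried over unchanged in this diff states a per-deployment limit of 200,000 tokens per minute and 1,000 API requests per minute. A hedged client-side sketch of checking those limits before sending another request — the helper and its counters are illustrative, not part of any SDK:

```python
def within_serverless_quota(tokens_this_minute: int, requests_this_minute: int,
                            token_limit: int = 200_000,
                            request_limit: int = 1_000) -> bool:
    """Return True if one more request would stay inside the per-deployment
    limits quoted in the article (200k tokens/min, 1,000 requests/min)."""
    return tokens_this_minute < token_limit and requests_this_minute < request_limit

print(within_serverless_quota(150_000, 400))   # under both limits
print(within_serverless_quota(200_000, 400))   # token budget exhausted
```

A real client would reset these counters each minute; the article suggests contacting Microsoft Azure Support if the limits are insufficient.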
articles/ai-studio/how-to/develop/ai-template-get-started.md
Diff
@@ -4,6 +4,8 @@ titleSuffix: Azure AI Foundry
description: This article provides instructions on how to get started with an AI template.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 5/21/2024
ms.reviewer: dantaylo
@@ -28,13 +30,13 @@ Start with our sample applications! Choose the right template for your needs, th
| Template | App host | Tech stack | Description |
| ----------- | ----------| ----------- | ------------|
-| [Azure AI Basic Template with Python](https://github.com/azure-samples/azureai-basic-python) | [Azure AI Studio online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep | The app serves as a straightforward example of integrating Azure AI Services within a basic prompt-based application. This template walks you through building a simple chat app that utilizes models and prompts. It also covers setting up the necessary infrastructure for the app, including creating an Azure AI Studio Hub, configuring projects, and provisioning essential resources such as Azure AI Service, Azure Container Apps, Cognitive Search, and more. <br>You can build, deploy, and test it with a single command. |
-| [Contoso Chat Retail copilot with Azure AI Studio](https://github.com/Azure-Samples/contoso-chat) | [Azure Container Apps](/azure/container-apps/overview) | [Azure Cosmos DB](/azure/cosmos-db/index-overview), [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [Azure AI Search](/azure/search/search-what-is-azure-search), Bicep | A retailer conversation agent that can answer questions grounded in your product catalog and customer order history. This template uses a retrieval augmented generation architecture with cutting-edge models for chat completion, chat evaluation, and embeddings. Build, evaluate, and deploy, an end-to-end solution with a single command. |
-| [Process Automation: speech to text and summarization with Azure AI Studio](https://github.com/Azure-Samples/summarization-openai-python-prompflow) | [Azure AI Studio online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [Azure AI speech to text service](../../../ai-services/speech-service/index-speech-to-text.yml), Bicep | An app for workers to report issues via text or speech, translating audio to text, summarizing it, and specify the relevant department. |
-| [Multi-Modal Creative Writing copilot with Dalle](https://github.com/Azure-Samples/agent-openai-python-prompty) | [Azure AI Studio online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure AI Search](/azure/search/search-what-is-azure-search), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep | demonstrates how to create and work with AI agents. The app takes a topic and instruction input and then calls a research agent, writer agent, and editor agent. |
-| [Assistant API Analytics Copilot with Python and Azure AI Studio](https://github.com/Azure-Samples/assistant-data-openai-python-promptflow) | [Azure AI Studio online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure AI Search](/azure/search/search-what-is-azure-search), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep| A data analytics chatbot based on the Assistants API. The chatbot can answer questions in natural language, and interpret them as queries on an example sales dataset. |
-| [Function Calling with Prompty, LangChain, and Pinecone](https://github.com/Azure-Samples/agent-openai-python-prompty-langchain-pinecone) | [Azure AI Studio online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [LangChain](https://python.langchain.com/v0.1/docs/get_started/introduction), [Pinecone](https://www.pinecone.io/), Bicep | Utilize the new Prompty tool, LangChain, and Pinecone to build a large language model (LLM) search agent. This agent with Retrieval-Augmented Generation (RAG) technology is capable of answering user questions based on the provided data by integrating real-time information retrieval with generative responses. |
-| [Function Calling with Prompty, LangChain, and Elastic Search](https://github.com/Azure-Samples/agent-python-openai-prompty-langchain) | [Azure AI Studio online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [Elastic Search](https://www.elastic.co/elasticsearch), [LangChain](https://python.langchain.com/v0.1/docs/get_started/introduction) , Bicep | Utilize the new Prompty tool, LangChain, and Elasticsearch to build a large language model (LLM) search agent. This agent with Retrieval-Augmented Generation (RAG) technology is capable of answering user questions based on the provided data by integrating real-time information retrieval with generative responses |
+| [Azure AI Basic Template with Python](https://github.com/azure-samples/azureai-basic-python) | [Azure AI Foundry online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep | The app serves as a straightforward example of integrating Azure AI Services within a basic prompt-based application. This template walks you through building a simple chat app that utilizes models and prompts. It also covers setting up the necessary infrastructure for the app, including creating an Azure AI Foundry Hub, configuring projects, and provisioning essential resources such as Azure AI Service, Azure Container Apps, Cognitive Search, and more. <br>You can build, deploy, and test it with a single command. |
+| [Contoso Chat Retail copilot with Azure AI Foundry](https://github.com/Azure-Samples/contoso-chat) | [Azure Container Apps](/azure/container-apps/overview) | [Azure Cosmos DB](/azure/cosmos-db/index-overview), [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [Azure AI Search](/azure/search/search-what-is-azure-search), Bicep | A retailer conversation agent that can answer questions grounded in your product catalog and customer order history. This template uses a retrieval augmented generation architecture with cutting-edge models for chat completion, chat evaluation, and embeddings. Build, evaluate, and deploy, an end-to-end solution with a single command. |
+| [Process Automation: speech to text and summarization with Azure AI Foundry](https://github.com/Azure-Samples/summarization-openai-python-prompflow) | [Azure AI Foundry online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [Azure AI speech to text service](../../../ai-services/speech-service/index-speech-to-text.yml), Bicep | An app for workers to report issues via text or speech, translating audio to text, summarizing it, and specify the relevant department. |
+| [Multi-Modal Creative Writing copilot with Dalle](https://github.com/Azure-Samples/agent-openai-python-prompty) | [Azure AI Foundry online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure AI Search](/azure/search/search-what-is-azure-search), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep | demonstrates how to create and work with AI agents. The app takes a topic and instruction input and then calls a research agent, writer agent, and editor agent. |
+| [Assistant API Analytics Copilot with Python and Azure AI Foundry](https://github.com/Azure-Samples/assistant-data-openai-python-promptflow) | [Azure AI Foundry online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure AI Search](/azure/search/search-what-is-azure-search), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep| A data analytics chatbot based on the Assistants API. The chatbot can answer questions in natural language, and interpret them as queries on an example sales dataset. |
+| [Function Calling with Prompty, LangChain, and Pinecone](https://github.com/Azure-Samples/agent-openai-python-prompty-langchain-pinecone) | [Azure AI Foundry online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [LangChain](https://python.langchain.com/v0.1/docs/get_started/introduction), [Pinecone](https://www.pinecone.io/), Bicep | Utilize the new Prompty tool, LangChain, and Pinecone to build a large language model (LLM) search agent. This agent with Retrieval-Augmented Generation (RAG) technology is capable of answering user questions based on the provided data by integrating real-time information retrieval with generative responses. |
+| [Function Calling with Prompty, LangChain, and Elastic Search](https://github.com/Azure-Samples/agent-python-openai-prompty-langchain) | [Azure AI Foundry online endpoints](/azure/machine-learning/concept-endpoints-online) | [Azure Managed Identity](/entra/identity/managed-identities-azure-resources/overview), [Azure OpenAI Service](../../../ai-services/openai/overview.md), [Elastic Search](https://www.elastic.co/elasticsearch), [LangChain](https://python.langchain.com/v0.1/docs/get_started/introduction) , Bicep | Utilize the new Prompty tool, LangChain, and Elasticsearch to build a large language model (LLM) search agent. This agent with Retrieval-Augmented Generation (RAG) technology is capable of answering user questions based on the provided data by integrating real-time information retrieval with generative responses |
### [C#](#tab/csharp)
@@ -50,4 +52,4 @@ Start with our sample applications! Choose the right template for your needs, th
- [Get started building a chat app using the prompt flow SDK](../../quickstarts/get-started-code.md)
- [Work with projects in VS Code](vscode.md)
-- [Connections in Azure AI Studio](../../concepts/connections.md)
+- [Connections in Azure AI Foundry portal](../../concepts/connections.md)
Summary
{
"modification_type": "minor update",
"modification_title": "AIテンプレートの開始方法に関するガイドの修正"
}
Explanation
This change was made to the "ai-template-get-started.md" document and updates the information on how to use the AI templates. The main changes are the platform rename and keeping the related information consistent.
The specific changes are as follows:
- Throughout the document, the platform name was changed from "Azure AI Studio" to "Azure AI Foundry".
- In the AI template descriptions, the host assigned to each template was updated to reflect the new platform.
- The template details section adds examples of applications built with Azure AI Foundry, which makes the migration to the new platform easier for users to understand.
- The connection-related reference was also updated to "Azure AI Foundry portal".
As a result, users get accurate information aligned with the current platform and a clearer picture of how to start developing with the AI templates.
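One detail worth noting across these diffs: the rename is selective. Prose, titles, and link text change, while machine-facing identifiers such as `ms.service: azure-ai-studio` and `aka.ms/aistudio` short links are deliberately left untouched. The following stand-alone sketch (the helper is invented for illustration, not the tooling actually used for this PR) shows that pattern:

```python
import re

# Protect machine-facing identifiers that the diffs show staying unchanged:
# the "ms.service: azure-ai-studio" metadata key and aka.ms short links.
PROTECTED = re.compile(r"(azure-ai-studio|aka\.ms/aistudio)")

def rename_display_text(line: str) -> str:
    """Rename 'AI Studio' to 'AI Foundry' in display text only (sketch)."""
    if PROTECTED.search(line):
        return line  # leave metadata values and short links as-is
    return line.replace("AI Studio", "AI Foundry")

lines = [
    "title: Develop application with LangChain and Azure AI Studio",
    "ms.service: azure-ai-studio",
    "- [Get started with evaluation samples](https://aka.ms/aistudio/eval-samples)",
]
for line in lines:
    print(rename_display_text(line))
```

Only the first line is rewritten; the metadata value and the short link pass through unchanged, matching what the diffs in this review actually do.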
articles/ai-studio/how-to/develop/connections-add-sdk.md
Diff
@@ -1,11 +1,12 @@
---
-title: How to add a new connection in AI Studio using the Azure Machine Learning SDK
+title: How to add a new connection in AI Foundry portal using the Azure Machine Learning SDK
titleSuffix: Azure AI Foundry
description: This article provides instructions on how to add connections to other resources using the Azure Machine Learning SDK.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 9/12/2024
ms.reviewer: dantaylo
@@ -19,12 +20,12 @@ author: Blackmist
In this article, you learn how to add a new connection using the Azure Machine Learning SDK.
-Connections are a way to authenticate and consume both Microsoft and other resources within your Azure AI Studio projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI Studio hub. For more information, see [Connections in Azure AI Studio](../../concepts/connections.md).
+Connections are a way to authenticate and consume both Microsoft and other resources within your Azure AI Foundry projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI Foundry hub. For more information, see [Connections in Azure AI Foundry portal](../../concepts/connections.md).
## Prerequisites
-- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure AI Studio](https://azure.microsoft.com/free/) today.
-- An Azure AI Studio hub. For information on creating a hub, see [Create AI Studio resources with the SDK](./create-hub-project-sdk.md).
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure AI Foundry](https://azure.microsoft.com/free/) today.
+- An Azure AI Foundry hub. For information on creating a hub, see [Create AI Foundry resources with the SDK](./create-hub-project-sdk.md).
- A resource to create a connection to. For example, an AI Services resource. The examples in this article use placeholders that you must replace with your own values when running the code.
## Set up your environment
@@ -334,4 +335,4 @@ ml_client.connections.delete(name)
- [Get started building a chat app using the prompt flow SDK](../../quickstarts/get-started-code.md)
- [Work with projects in VS Code](vscode.md)
-- [Connections in Azure AI Studio](../../concepts/connections.md)
+- [Connections in Azure AI Foundry portal](../../concepts/connections.md)
Summary
{
"modification_type": "minor update",
"modification_title": "AI Studioにおける接続の追加に関するガイドの修正"
}
Explanation
This change was made to the "connections-add-sdk.md" document and updates the information on adding a new connection in AI Studio using the Azure Machine Learning SDK. The main changes are the platform rename and improved consistency of the content.
The specific changes are as follows:
- In the document title and description, the platform name was changed from "AI Studio" to "AI Foundry portal".
- In the instructions for adding a connection, the project and hub names were also updated to match the new platform.
- The Prerequisites section now mentions Azure AI Foundry, reflecting concrete information in line with the resource setup being used.
- Finally, the reference link about connections was updated to content suited to the new platform.
With this change, users can understand more accurately how to set up connections on the new platform, making application in a real development environment smoother.
articles/ai-studio/how-to/develop/create-hub-project-sdk.md
Diff
@@ -1,7 +1,7 @@
---
title: How to create a hub using the Azure Machine Learning SDK/CLI
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to create an AI Studio hub using the Azure Machine Learning SDK and Azure CLI extension.
+description: This article provides instructions on how to create an AI Foundry hub using the Azure Machine Learning SDK and Azure CLI extension.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom: build-2024, devx-track-azurecli
@@ -16,13 +16,13 @@ author: sdgilley
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
-In this article, you learn how to create the following AI Studio resources using the Azure Machine Learning SDK and Azure CLI (with machine learning extension):
-- An Azure AI Studio hub
+In this article, you learn how to create the following AI Foundry resources using the Azure Machine Learning SDK and Azure CLI (with machine learning extension):
+- An Azure AI Foundry hub
- An Azure AI Services connection
## Prerequisites
-- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure AI Studio](https://azure.microsoft.com/free/) today.
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure AI Foundry](https://azure.microsoft.com/free/) today.
## Set up your environment
@@ -46,7 +46,7 @@ Use the following tabs to select whether you're using the Python SDK or Azure CL
---
-## Create the AI Studio hub and AI Services connection
+## Create the AI Foundry hub and AI Services connection
Use the following examples to create a new hub. Replace example string values with your own values:
@@ -127,7 +127,7 @@ You can use either an API key or credential-less YAML configuration file. For mo
---
-## Create an AI Studio hub using existing dependency resources
+## Create an AI Foundry hub using existing dependency resources
You can also create a hub using existing resources such as Azure Storage and Azure Key Vault. In the following examples, replace the example string values with your own values:
Summary
{
"modification_type": "minor update",
"modification_title": "AI Studioハブの作成に関するガイドの修正"
}
Explanation
This change applies to the "create-hub-project-sdk.md" document and updates the information on creating an AI Studio hub using the Azure Machine Learning SDK and CLI. The main changes relate to the platform rename.
The specific changes are as follows:
- In the document description and the section titles, the name was changed from "AI Studio" to "AI Foundry".
- Throughout the article, the names of the resources being created were updated the same way, so readers can clearly see how to operate on the current platform.
- In the Prerequisites section, the link inviting users to try the new platform was updated.
As a result, users creating a hub with AI Foundry have access to accurate information and steps, and can be expected to work through the process more smoothly.
articles/ai-studio/how-to/develop/evaluate-sdk.md
Diff
@@ -7,6 +7,7 @@ ms.service: azure-ai-studio
ms.custom:
- build-2024
- references_regions
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: minthigpen
@@ -941,4 +942,4 @@ evaluation = client.evaluations.create(
- [Learn more about simulating test datasets for evaluation](./simulator-interaction-data.md)
- [View your evaluation results in Azure AI project](../../how-to/evaluate-results.md)
- [Get started building a chat app using the Azure AI Foundry SDK](../../quickstarts/get-started-code.md)
-- [Get started with evaluation samples](https://aka.ms/aistudio/eval-samples)
\ No newline at end of file
+- [Get started with evaluation samples](https://aka.ms/aistudio/eval-samples)
Summary
{
"modification_type": "minor update",
"modification_title": "評価SDKに関するドキュメントの更新"
}
Explanation
This change was made to the "evaluate-sdk.md" document and updates the information on using the evaluation SDK in Azure AI Studio. The main changes are added metadata and a link fix.
The specific changes are as follows:
- The `ignite-2024` tag was added to the metadata, tying the document to more current information. This tag is used to mark content associated with a particular event or release.
- The link to the evaluation material was updated so it points accurately at the latest resources.
- The added links and information make the evaluation samples easier for users to reach.
As a result, readers can work with the Azure AI Studio evaluation SDK from the most current and relevant information.
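The metadata change itself is mechanical: `ignite-2024` is appended to the existing `ms.custom` list in the YAML front matter. A minimal stdlib sketch of that edit (a plain string pass, not the pipeline actually used for these docs; two-space list indentation is assumed) looks like this:

```python
def add_custom_tag(front_matter: str, tag: str) -> str:
    """Append a tag to the ms.custom block list in YAML front matter (sketch).

    Assumes the list uses '  - item' indentation and is not the last
    section of the text; a real tool would parse the YAML properly.
    """
    out, in_custom = [], False
    for line in front_matter.splitlines():
        if line.startswith("ms.custom:"):
            in_custom = True
            out.append(line)
            continue
        if in_custom and not line.startswith("  - "):
            out.append(f"  - {tag}")  # close the list with the new entry
            in_custom = False
        out.append(line)
    return "\n".join(out)

doc = "ms.custom:\n  - build-2024\n  - references_regions\nms.topic: how-to"
print(add_custom_tag(doc, "ignite-2024"))
```

Run against the front matter shown in the evaluate-sdk diff, this inserts `  - ignite-2024` after `  - references_regions` and before `ms.topic: how-to`, which is exactly the shape of the `+ - ignite-2024` hunk above.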
articles/ai-studio/how-to/develop/index-build-consume-sdk.md
Diff
@@ -23,12 +23,12 @@ In this article, you learn how to create an index and consume it from code. To c
You must have:
-- An [AI Studio hub](../../how-to/create-azure-ai-resource.md) and [project](../../how-to/create-projects.md).
+- An [AI Foundry hub](../../how-to/create-azure-ai-resource.md) and [project](../../how-to/create-projects.md).
- An [Azure AI Search service connection](../../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. If you don't have an Azure AI Search service, you can create one from the [Azure portal](https://portal.azure.com/) or see the instructions [here](/azure/search/search-create-service-portal).
- Models for embedding:
- You can use an ada-002 embedding model from Azure OpenAI. The instructions to deploy can be found [here](../deploy-models-openai.md).
- - OR you can use any another embedding model deployed in your AI Studio project. In this example we use Cohere multi-lingual embedding. The instructions to deploy this model can be found [here](../deploy-models-cohere-embed.md).
+ - OR you can use any another embedding model deployed in your AI Foundry project. In this example we use Cohere multi-lingual embedding. The instructions to deploy this model can be found [here](../deploy-models-cohere-embed.md).
## Build and consume an index locally
@@ -88,9 +88,9 @@ local_index_aoai=build_index(
The above code builds an index locally. It uses environment variables to get the AI Search service and also to connect to the Azure OpenAI embedding model.
-### Build an index locally using other embedding models deployed in your AI Studio project
+### Build an index locally using other embedding models deployed in your AI Foundry project
-To create an index that uses an embedding model deployed in your AI Studio project, we configure the connection to the model using a `ConnectionConfig` as shown below. The `subscription`, `resource_group` and `workspace` refers to the project where the embedding model is installed. The `connection_name` refers to the connection name for the model, which can be found in the AI Studio project settings page.
+To create an index that uses an embedding model deployed in your AI Foundry project, we configure the connection to the model using a `ConnectionConfig` as shown below. The `subscription`, `resource_group` and `workspace` refers to the project where the embedding model is installed. The `connection_name` refers to the connection name for the model, which can be found in the AI Foundry project settings page.
```python
from promptflow.rag.config import ConnectionConfig
@@ -142,14 +142,14 @@ retriever.get_relevant_documents("<your search query>")
retriever=get_langchain_retriever_from_index(local_index_cohere)
retriever.get_relevant_documents("<your search query>")
```
-### Registering the index in your AI Studio project (Optional)
+### Registering the index in your AI Foundry project (Optional)
-Optionally, you can register the index in your AI Studio project so that you or others who have access to your project can use it from the cloud. Before proceeding [install the required packages](#required-packages-for-remote-index-operations) for remote operations.
+Optionally, you can register the index in your AI Foundry project so that you or others who have access to your project can use it from the cloud. Before proceeding [install the required packages](#required-packages-for-remote-index-operations) for remote operations.
#### Connect to the project
```python
-# connect to the AI Studio project
+# connect to the AI Foundry project
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
@@ -185,9 +185,9 @@ client.indexes.create_or_update(
> [!NOTE]
> Environment variables are intended for convenience in a local environment. However, if you register a local index created using environment variables, the index may not function as expected because secrets from environment variables won't be transferred to the cloud index. To address this issue, you can use a `ConnectionConfig` or `connection_id` to create a local index before registering.
-## Build an index (remotely) in your AI Studio project
+## Build an index (remotely) in your AI Foundry project
-We build an index in the cloud in your AI Studio project.
+We build an index in the cloud in your AI Foundry project.
### Required packages for remote index operations
@@ -197,12 +197,12 @@ Install the following packages required for remote index creation.
pip install azure-ai-ml promptflow-rag langchain langchain-openai
```
-### Connect to the AI Studio project
+### Connect to the AI Foundry project
To get started, we connect to the project. The `subscription`, `resource_group` and `workspace` in the code below refers to the project you want to connect to.
```python
-# connect to the AI Studio project
+# connect to the AI Foundry project
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
@@ -245,7 +245,7 @@ embeddings_model_config = IndexModelConfiguration.from_connection(
deployment_name="text-embedding-ada-002")
```
-You can connect to embedding model deployed in your AI Studio project (non Azure OpenAI models) using the serverless connection.
+You can connect to embedding model deployed in your AI Foundry project (non Azure OpenAI models) using the serverless connection.
```python
from azure.ai.ml.entities import IndexModelConfiguration
@@ -392,6 +392,6 @@ print(result["answer"])
## Related content
-- [Create and consume an index from the AI Studio UI](../index-add.md)
+- [Create and consume an index from the AI Foundry portal UI](../index-add.md)
- [Get started building a chat app using the prompt flow SDK](../../quickstarts/get-started-code.md)
- [Work with projects in VS Code](vscode.md)
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "AI Studio SDKドキュメントの名称変更"
}
Explanation
This change applies to the "index-build-consume-sdk.md" document and updates the content on building and consuming an index with the Azure AI Studio SDK. The main change relates to the rename from "AI Studio" to "AI Foundry".
The specific changes are as follows:
- Every occurrence of "AI Studio" in the document was updated to "AI Foundry", so users correctly understand the current platform and can operate accordingly.
- The descriptions in each section and procedure were updated as AI Foundry-related information, with the concrete usage steps and connection settings spelled out.
- Link targets were likewise corrected to resources appropriate to AI Foundry.
As a result, readers using the current AI Foundry platform can work from accurate information and steps, for a smoother experience.
articles/ai-studio/how-to/develop/langchain.md
Diff
@@ -1,23 +1,25 @@
---
-title: Develop application with LangChain and Azure AI studio
+title: Develop application with LangChain and Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: This article explains how to use LangChain with models deployed in Azure AI studio to build advance intelligent applications.
+description: This article explains how to use LangChain with models deployed in Azure AI Foundry portal to build advance intelligent applications.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/04/2024
ms.reviewer: fasantia
ms.author: sgilley
author: sdgilley
---
-# Develop applications with LangChain and Azure AI studio
+# Develop applications with LangChain and Azure AI Foundry
LangChain is a development ecosystem that makes as easy possible for developers to build applications that reason. The ecosystem is composed by multiple components. Most of the them can be used by themselves, allowing you to pick and choose whichever components you like best.
-Models deployed to Azure AI studio can be used with LangChain in two ways:
+Models deployed to Azure AI Foundry can be used with LangChain in two ways:
-- **Using the Azure AI model inference API:** All models deployed to Azure AI studio support the [Azure AI model inference API](../../reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LangChain, install the extensions `langchain-azure-ai`.
+- **Using the Azure AI model inference API:** All models deployed to Azure AI Foundry support the [Azure AI model inference API](../../reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LangChain, install the extensions `langchain-azure-ai`.
- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions may include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with LangChain, install the extension specific for the model you want to use, like `langchain-openai` or `langchain-cohere`.
@@ -28,7 +30,7 @@ In this tutorial, you learn how to use the packages `langchain-azure-ai` to buil
To run this tutorial, you need:
* An [Azure subscription](https://azure.microsoft.com).
-* An Azure AI project as explained at [Create a project in Azure AI Studio](../create-projects.md).
+* An Azure AI project as explained at [Create a project in Azure AI Foundry portal](../create-projects.md).
* A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large` deployment, but use any model of your preference.
* You can follow the instructions at [Deploy models as serverless APIs](../deploy-models-serverless.md).
@@ -48,9 +50,9 @@ To run this tutorial, you need:
## Configure the environment
-To use LLMs deployed in Azure AI studio, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
+To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
-1. Go to the [Azure AI studio](https://ai.azure.com/).
+1. Go to the [Azure AI Foundry](https://ai.azure.com/).
1. Open the project where the model is deployed, if it isn't already open.
1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
1. Copy the endpoint URL and the key.
@@ -176,7 +178,7 @@ chain.invoke({"language": "italian", "text": "hi"})
### Chaining multiple LLMs together
-Models deployed to Azure AI studio support the Azure AI model inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.
+Models deployed to Azure AI Foundry support the Azure AI model inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.
In the following example, we create 2 model clients, one is a producer and another one is a verifier. To make the distinction clear, we are using a multi-model endpoint like the [Azure AI model inference service](../../ai-services/model-inference.md) and hence we are passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
@@ -314,15 +316,3 @@ llm = AzureAIChatCompletionsModel(
* [Develop applications with LlamaIndex](llama-index.md)
* [Use the Azure AI model inference service](../../ai-services/model-inference.md)
* [Reference: Azure AI model inference API](../../reference/reference-model-inference-api.md)
-
-
-
-
-
-
-
-
-
-
-
-
Summary
{
"modification_type": "minor update",
"modification_title": "LangChainおよびAzure AI Foundryに関するドキュメントの更新"
}
Explanation
This change applies to the "langchain.md" document, updating the information on developing applications with LangChain on Azure AI Foundry. The main change is that the platform name referenced in the document was corrected from "Azure AI Studio" to "Azure AI Foundry".
The specific changes are as follows:
- In the document title and description, every "Azure AI Studio" reference was changed to "Azure AI Foundry", so the document now correctly refers to the current platform.
- The information on how models in Azure AI Foundry can work with LangChain, in particular around the model inference API, was updated appropriately, making the steps and dependencies easier for users to follow.
- The environment setup instructions and the steps for obtaining the required information were improved, so users can quickly get oriented on the new platform.
As a result, readers developing LangChain applications with Azure AI Foundry can proceed with correct information and the latest steps.
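The design point the updated article leans on is that every model deployed to Azure AI Foundry speaks the same Azure AI model inference API, so switching models means changing only the deployment reference, with no other code changes. A toy stand-in (this class is invented for illustration and is not the `langchain-azure-ai` API) makes that concrete, using the same producer/verifier pairing as the article's Mistral-Large/Mistral-Small example:

```python
class ChatClient:
    """Toy stand-in for a unified inference client (not the real SDK)."""

    def __init__(self, endpoint: str, model_name: str):
        self.endpoint = endpoint
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        # A real client would call the common inference API here;
        # this stub just echoes which deployment would serve the call.
        return f"[{self.model_name}] {prompt}"

# Swapping models is a one-argument change; the calling code is untouched.
producer = ChatClient("https://<endpoint>", model_name="Mistral-Large")
verifier = ChatClient("https://<endpoint>", model_name="Mistral-Small")
print(producer.complete("draft a summary"))
print(verifier.complete("check the summary"))
```

Because both clients expose the identical `complete` interface, chaining them (as the article does with a producer and a verifier) requires no per-model branching.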
articles/ai-studio/how-to/develop/llama-index.md
Diff
@@ -1,23 +1,25 @@
---
-title: Develop application with LlamaIndex and Azure AI studio
+title: Develop application with LlamaIndex and Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: This article explains how to use LlamaIndex with models deployed in Azure AI studio to build advance intelligent applications.
+description: This article explains how to use LlamaIndex with models deployed in Azure AI Foundry portal to build advance intelligent applications.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/04/2024
ms.reviewer: fasantia
ms.author: sgilley
author: sdgilley
---
-# Develop applications with LlamaIndex and Azure AI studio
+# Develop applications with LlamaIndex and Azure AI Foundry
-In this article, you learn how to use [LlamaIndex](https://github.com/run-llama/llama_index) with models deployed from the Azure AI model catalog in Azure AI studio.
+In this article, you learn how to use [LlamaIndex](https://github.com/run-llama/llama_index) with models deployed from the Azure AI model catalog in Azure AI Foundry portal.
-Models deployed to Azure AI studio can be used with LlamaIndex in two ways:
+Models deployed to Azure AI Foundry can be used with LlamaIndex in two ways:
-- **Using the Azure AI model inference API:** All models deployed to Azure AI studio support the [Azure AI model inference API](../../reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LlamaIndex, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.
+- **Using the Azure AI model inference API:** All models deployed to Azure AI Foundry support the [Azure AI model inference API](../../reference/reference-model-inference-api.md), which offers a common set of functionalities that can be used for most of the models in the catalog. The benefit of this API is that, since it's the same for all the models, changing from one to another is as simple as changing the model deployment being use. No further changes are required in the code. When working with LlamaIndex, install the extensions `llama-index-llms-azure-inference` and `llama-index-embeddings-azure-inference`.
- **Using the model's provider specific API:** Some models, like OpenAI, Cohere, or Mistral, offer their own set of APIs and extensions for LlamaIndex. Those extensions may include specific functionalities that the model support and hence are suitable if you want to exploit them. When working with `llama-index`, install the extension specific for the model you want to use, like `llama-index-llms-openai` or `llama-index-llms-cohere`.
@@ -28,7 +30,7 @@ In this example, we are working with the **Azure AI model inference API**.
To run this tutorial, you need:
* An [Azure subscription](https://azure.microsoft.com).
-* An Azure AI project as explained at [Create a project in Azure AI Studio](../create-projects.md).
+* An Azure AI project as explained at [Create a project in Azure AI Foundry portal](../create-projects.md).
* A model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large` deployment, but use any model of your preference. For using embeddings capabilities in LlamaIndex, you need an embedding model like `cohere-embed-v3-multilingual`.
* You can follow the instructions at [Deploy models as serverless APIs](../deploy-models-serverless.md).
@@ -52,9 +54,9 @@ To run this tutorial, you need:
## Configure the environment
-To use LLMs deployed in Azure AI studio, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
+To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and credentials to connect to it. Follow these steps to get the information you need from the model you want to use:
-1. Go to the [Azure AI studio](https://ai.azure.com/).
+1. Go to the [Azure AI Foundry](https://ai.azure.com/).
1. Open the project where the model is deployed, if it isn't already open.
1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
1. Copy the endpoint URL and the key.
Summary
{
"modification_type": "minor update",
"modification_title": "LlamaIndexおよびAzure AI Foundryに関するドキュメントの更新"
}
Explanation
This change updates llama-index.md, which covers building applications with LlamaIndex on Azure AI Foundry. The main change is that the name "Azure AI Studio" has been replaced with "Azure AI Foundry" throughout the document.
The specific changes are:
- The document title and description now use "Azure AI Foundry", so they reflect the platform's current name.
- The material on how models deployed in Azure AI Foundry work with LlamaIndex, in particular through the Azure AI model inference API, was updated accordingly and is easier to follow.
- The environment-setup steps were also revised so that readers can quickly find the endpoint and credential information they need on the new platform.
With these updates, readers developing LlamaIndex applications against Azure AI Foundry can proceed from accurate, current information and instructions.
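The "Configure the environment" steps in the diff above come down to copying the endpoint URL and key from the model's deployment page. A minimal validation helper sketches that setup; the environment variable names `AZURE_INFERENCE_ENDPOINT` and `AZURE_INFERENCE_CREDENTIAL` are illustrative assumptions, not names mandated by the documentation.

```python
import os
from urllib.parse import urlparse

def load_inference_config(env=os.environ):
    """Read the endpoint URL and key copied from the model's
    Models + endpoints page (hypothetical variable names)."""
    endpoint = env.get("AZURE_INFERENCE_ENDPOINT", "")
    key = env.get("AZURE_INFERENCE_CREDENTIAL", "")
    parsed = urlparse(endpoint)
    if parsed.scheme != "https" or not parsed.netloc:
        raise ValueError(f"endpoint must be an https URL, got: {endpoint!r}")
    if not key:
        raise ValueError("missing credential key")
    return {"endpoint": endpoint, "host": parsed.netloc, "key": key}

# Example with placeholder values instead of real project settings:
cfg = load_inference_config({
    "AZURE_INFERENCE_ENDPOINT": "https://my-project.services.ai.azure.com/models",
    "AZURE_INFERENCE_CREDENTIAL": "example-key",
})
print(cfg["host"])
```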
articles/ai-studio/how-to/develop/simulator-interaction-data.md
Diff
@@ -298,13 +298,13 @@ eval_output = evaluate(
## Generate adversarial simulations for safety evaluation
-Augment and accelerate your red-teaming operation by using Azure AI Studio safety evaluations to generate an adversarial dataset against your application. We provide adversarial scenarios along with configured access to a service-side Azure OpenAI GPT-4 model with safety behaviors turned off to enable the adversarial simulation.
+Augment and accelerate your red-teaming operation by using Azure AI Foundry safety evaluations to generate an adversarial dataset against your application. We provide adversarial scenarios along with configured access to a service-side Azure OpenAI GPT-4 model with safety behaviors turned off to enable the adversarial simulation.
```python
from azure.ai.evaluation.simulator import AdversarialSimulator
```
-The adversarial simulator works by setting up a service-hosted GPT large language model to simulate an adversarial user and interact with your application. An AI Studio project is required to run the adversarial simulator:
+The adversarial simulator works by setting up a service-hosted GPT large language model to simulate an adversarial user and interact with your application. An AI Foundry project is required to run the adversarial simulator:
```python
from azure.identity import DefaultAzureCredential
Summary
{
"modification_type": "minor update",
"modification_title": "Update the simulator interaction data documentation"
}
Explanation
This change updates simulator-interaction-data.md to reflect the rename from Azure AI Studio to Azure AI Foundry. The main change is that every reference to Azure AI Studio has been revised to Azure AI Foundry.
The specific changes are:
- "Azure AI Studio" was changed to "Azure AI Foundry" in the body text, notably in the description of generating adversarial simulations and in the note that an AI Foundry project is required to run the adversarial simulator.
- The text emphasizes using Azure AI Foundry safety evaluations to generate an adversarial dataset against your application, so readers can see concretely how to proceed on the new platform.
With this change, readers working with simulator interaction data in Azure AI Foundry can rely on accurate, up-to-date information.
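The adversarial simulator described above drives a service-hosted GPT model against a *target callback* that wraps your application. A minimal sketch of that contract follows — the callback signature mirrors the shape used with `azure.ai.evaluation.simulator`, but the echo logic is a stand-in for a real application call, not the SDK's behavior:

```python
import asyncio

async def callback(messages, stream=False, session_state=None, context=None):
    """Target the adversarial simulator interacts with. A real
    application would call its chat endpoint here; we append a
    canned assistant reply so the message contract is visible."""
    last_user_turn = messages["messages"][-1]["content"]
    reply = {
        "role": "assistant",
        "content": f"(application reply to: {last_user_turn})",
    }
    messages["messages"].append(reply)
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }

# One simulated turn with a placeholder user message:
result = asyncio.run(
    callback({"messages": [{"role": "user", "content": "hi"}]})
)
print(result["messages"][-1]["role"])
```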
articles/ai-studio/how-to/develop/trace-local-sdk.md
Diff
@@ -6,10 +6,11 @@ manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: truptiparkar
-ms.author: lagayhar
+ms.author: lagayhar
author: lgayhardt
---
@@ -24,8 +25,8 @@ In this article you'll learn how to trace your application with Azure AI Inferen
### Prerequisites
- An [Azure Subscription](https://azure.microsoft.com/).
-- An Azure AI project, see [Create a project in Azure AI Studio](../create-projects.md).
-- An AI model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed through AI Studio.
+- An Azure AI project, see [Create a project in Azure AI Foundry portal](../create-projects.md).
+- An AI model supporting the [Azure AI model inference API](https://aka.ms/azureai/modelinference) deployed through AI Foundry.
- If using Python, you need Python 3.8 or later installed, including pip.
- If using JavaScript, the supported environments are LTS versions of Node.js.
@@ -208,7 +209,7 @@ To trace your own custom functions, you can leverage OpenTelemetry, you'll need
## Attach User feedback to traces
-To attach user feedback to traces and visualize them in AI Studio using OpenTelemetry's semantic conventions, you can instrument your application enabling tracing and logging user feedback. By correlating feedback traces with their respective chat request traces using the response ID, you can use view and manage these traces in AI studio. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in AI Studio for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.
+To attach user feedback to traces and visualize them in AI Foundry portal using OpenTelemetry's semantic conventions, you can instrument your application to enable tracing and log user feedback. By correlating feedback traces with their respective chat request traces using the response ID, you can view and manage these traces in AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.
## Related content
Summary
{
"modification_type": "minor update",
"modification_title": "Update the trace local SDK documentation"
}
Explanation
This change updates trace-local-sdk.md to reflect the rename from Azure AI Studio to Azure AI Foundry. The main change is that "Azure AI Studio" has been replaced with "Azure AI Foundry" throughout the document.
The specific changes are:
- The prerequisites and model-deployment references now point to Azure AI Foundry: the project-creation step links to "Create a project in Azure AI Foundry portal", and the model is described as deployed through AI Foundry.
- The explanation of attaching user feedback to traces and visualizing them was likewise updated to reference the AI Foundry portal, highlighting the workflow on the current platform.
- Some metadata, such as the custom tags and the author field, was also adjusted slightly.
These updates let readers confirm the current, accurate steps for tracing with Azure AI Foundry and work more effectively.
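The feedback-correlation idea in the change above — joining feedback traces to their chat request traces by response ID — is independent of any SDK. A plain-Python sketch of the join, with made-up record shapes standing in for OpenTelemetry spans and events:

```python
# Hypothetical, SDK-free sketch: assume each chat span and each
# feedback event carries a gen_ai.response.id attribute, so
# correlating them is a join on that key.
chat_spans = [
    {"gen_ai.response.id": "resp-1", "prompt": "What is tracing?"},
    {"gen_ai.response.id": "resp-2", "prompt": "Summarize this doc."},
]
feedback_events = [
    {"gen_ai.response.id": "resp-2", "rating": "thumbs_down"},
]

def correlate(spans, events):
    """Attach each feedback rating to the span sharing its response ID."""
    by_id = {s["gen_ai.response.id"]: dict(s) for s in spans}
    for event in events:
        span = by_id.get(event["gen_ai.response.id"])
        if span is not None:
            span["feedback"] = event["rating"]
    return by_id

joined = correlate(chat_spans, feedback_events)
print(joined["resp-2"]["feedback"])
```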
articles/ai-studio/how-to/develop/trace-production-sdk.md
Diff
@@ -1,7 +1,7 @@
---
title: How to enable tracing and collect feedback for a flow deployment
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to enable tracing and collect feedback for a flow deployment in Azure AI Studio.
+description: This article provides instructions on how to enable tracing and collect feedback for a flow deployment in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -27,9 +27,9 @@ In this article, you learn to enable tracing, collect aggregated metrics, and co
## Prerequisites
- The Azure CLI and the Azure Machine Learning extension to the Azure CLI.
-- An AI Studio project. If you don't already have a project, you can [create one here](../../how-to/create-projects.md).
+- An AI Foundry project. If you don't already have a project, you can [create one here](../../how-to/create-projects.md).
- An Application Insights. If you don't already have an Application Insights resource, you can [create one here](/azure/azure-monitor/app/create-workspace-resource).
-- Azure role-based access controls are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, you must have **Owner** or **Contributor** permissions on the selected resource group. For more information, see [Role-based access control in Azure AI Studio](../../concepts/rbac-ai-studio.md).
+- Azure role-based access controls are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, you must have **Owner** or **Contributor** permissions on the selected resource group. For more information, see [Role-based access control in Azure AI Foundry portal](../../concepts/rbac-ai-studio.md).
## Deploy a flow for real-time inference
@@ -42,7 +42,7 @@ You can also [deploy to other platforms, such as Docker container, Kubernetes cl
## Enable trace and collect system metrics for your deployment
-If you're using studio UI to deploy, then you can turn-on **Application Insights diagnostics** in **Advanced settings** > **Deployment** step in the deployment wizard, in which way the tracing data and system metrics are collected to the project linked to Application Insights.
+If you're using the AI Foundry portal to deploy, you can turn on **Application Insights diagnostics** in the **Advanced settings** > **Deployment** step of the deployment wizard, so that tracing data and system metrics are collected to the project linked to Application Insights.
If you're using the SDK or CLI, you can enable it by adding the property `app_insights_enabled: true` to the deployment yaml file; data is then collected to the project linked to Application Insights.
Summary
{
"modification_type": "minor update",
"modification_title": "Update the trace production SDK documentation"
}
Explanation
This change updates trace-production-sdk.md to reflect the rename from Azure AI Studio to Azure AI Foundry. The main change is the replacement of "Azure AI Studio" with "Azure AI Foundry" in the text.
The specific changes are:
- The article description now ties the steps for enabling tracing and collecting feedback to the Azure AI Foundry portal.
- In the prerequisites, "AI Studio project" becomes "AI Foundry project", signaling the move to the new platform.
- The note on Azure role-based access control was updated so readers know they need **Owner** or **Contributor** permissions on the selected resource group, with the link retitled for the AI Foundry portal.
- The deployment instructions now call out the AI Foundry portal UI for turning on Application Insights diagnostics.
These updates make it easier for readers to follow current instructions when enabling tracing and collecting feedback for a flow deployment.
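The SDK/CLI path in the diff above turns tracing on through a single property in the deployment YAML. A minimal fragment for context — the deployment name, endpoint name, and model reference below are placeholders, and only `app_insights_enabled` comes from the documented change:

```yaml
# deployment.yaml -- placeholder names throughout
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-flow-model:1
app_insights_enabled: true   # collect tracing data and system metrics
```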
articles/ai-studio/how-to/develop/visualize-traces.md
Diff
@@ -4,10 +4,12 @@ titleSuffix: Azure AI Foundry
description: This article provides instructions on how to visualize your traces.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: amipatel
-ms.author: lagayhar
+ms.author: lagayhar
author: lgayhardt
---
@@ -52,13 +54,13 @@ os.environ['AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED'] = 'true'
application_insights_connection_string = project.telemetry.get_connection_string()
if not application_insights_connection_string:
print("Application Insights was not enabled for this project.")
- print("Enable it via the 'Tracing' tab in your AI Studio project page.")
+ print("Enable it via the 'Tracing' tab in your AI Foundry project page.")
exit()
configure_azure_monitor(connection_string=application_insights_connection_string)
```
-Finally, run an inferencing call. The call is logged to Azure AI Studio. This code prints a link to the traces.
+Finally, run an inferencing call. The call is logged to Azure AI Foundry. This code prints a link to the traces.
```python
response = chat.complete(
@@ -73,7 +75,7 @@ print("View traces at:")
print(f"https://ai.azure.com/tracing?wsid=/subscriptions/{project.scope['subscription_id']}/resourceGroups/{project.scope['resource_group_name']}/providers/Microsoft.MachineLearningServices/workspaces/{project.scope['project_name']}")
```
-Select the link and begin viewing traces in Azure AI Studio!
+Select the link and begin viewing traces in Azure AI Foundry portal!
### Debug and filter traces
@@ -97,12 +99,12 @@ For more information on how to send Azure AI Inference traces to Azure Monitor a
### View your generative AI spans and traces
-From Azure AI studio project, you can also open your custom dashboard that provides you with insights specifically to help you monitor your generative AI application.
+From Azure AI Foundry project, you can also open your custom dashboard that provides you with insights specifically to help you monitor your generative AI application.
In this Azure Workbook, you can view your Gen AI spans and jump into the Azure Monitor **End-to-end transaction details view** view to deep dive and investigate.
Learn more about using this workbook to monitor your application, see [Azure Workbook documentation](/azure/azure-monitor/visualize/workbooks-create-workbook).
## Related content
-- [Trace your application with Azure AI Inference SDK](./trace-local-sdk.md)
\ No newline at end of file
+- [Trace your application with Azure AI Inference SDK](./trace-local-sdk.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the trace visualization documentation"
}
Explanation
This change updates visualize-traces.md, again mainly to reflect the rename from Azure AI Studio to Azure AI Foundry. Several phrases in the document were revised to "Azure AI Foundry" so the text matches the new platform.
The main points are:
- "AI Studio" was replaced with "AI Foundry" in the document's descriptions and steps, including the instruction to enable Application Insights via the 'Tracing' tab on the project page.
- The descriptions of where the inferencing call is logged and of the link users follow to view traces were updated to Azure AI Foundry.
- The document now gives readers current information on visualizing traces from within the AI Foundry portal, including the custom dashboard for monitoring generative AI applications.
With this update, readers get an accurate picture of how to visualize and inspect traces in the new platform environment.
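The trace link printed at the end of the updated snippet is pure string formatting over the project scope, so it can be sketched and checked in isolation. The subscription, resource group, and project names below are placeholders:

```python
def trace_url(scope):
    """Build the ai.azure.com tracing link shown in the doc's snippet."""
    return (
        "https://ai.azure.com/tracing?wsid=/subscriptions/"
        f"{scope['subscription_id']}/resourceGroups/"
        f"{scope['resource_group_name']}/providers/"
        "Microsoft.MachineLearningServices/workspaces/"
        f"{scope['project_name']}"
    )

url = trace_url({
    "subscription_id": "00000000-0000-0000-0000-000000000000",
    "resource_group_name": "my-rg",
    "project_name": "my-project",
})
print(url)
```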
articles/ai-studio/how-to/develop/vscode.md
Diff
@@ -1,30 +1,31 @@
---
-title: Work with Azure AI Studio projects in VS Code
+title: Work with Azure AI Foundry projects in VS Code
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to get started with Azure AI Studio projects in VS Code.
+description: This article provides instructions on how to get started with Azure AI Foundry projects in VS Code.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 10/30/2024
ms.reviewer: lebaro
ms.author: sgilley
author: sdgilley
-# customer intent: As a Developer, I want to use Azure AI Studio projects in VS Code.
+# customer intent: As a Developer, I want to use Azure AI Foundry projects in VS Code.
---
-# Get started with Azure AI Studio projects in VS Code (Preview)
+# Get started with Azure AI Foundry projects in VS Code (Preview)
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
-Azure AI Studio supports developing in VS Code - Desktop and Web. In each scenario, your VS Code instance is remotely connected to a prebuilt custom container running on a virtual machine, also known as a compute instance.
+Azure AI Foundry supports developing in VS Code - Desktop and Web. In each scenario, your VS Code instance is remotely connected to a prebuilt custom container running on a virtual machine, also known as a compute instance.
-## Launch VS Code from Azure AI Studio
+## Launch VS Code from Azure AI Foundry
-1. Go to [Azure AI Studio](https://ai.azure.com).
-1. Open your project in Azure AI Studio.
+1. Go to [Azure AI Foundry](https://ai.azure.com).
+1. Open your project in Azure AI Foundry portal.
1. On the left menu, select **Code**.
1. Select **VS Code container**.
1. For **Compute**, select an existing compute instance or create a new one.
@@ -45,9 +46,9 @@ Azure AI Studio supports developing in VS Code - Desktop and Web. In each scenar
Our prebuilt development environments are based on a docker container that has Azure AI SDKs, the prompt flow SDK, and other tools. The environment is configured to run VS Code remotely inside of the container. The container is defined in a similar way to [this Dockerfile](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/blob/main/.devcontainer/Dockerfile), and is based on [Microsoft's Python 3.10 Development Container Image](https://mcr.microsoft.com/product/devcontainers/python/about).
-Your file explorer is opened to the specific project directory you launched from in AI Studio.
+Your file explorer is opened to the specific project directory you launched from in AI Foundry portal.
-The container is configured with the Azure AI folder hierarchy (`afh` directory), which is designed to orient you within your current development context, and help you work with your code, data, and shared files most efficiently. This `afh` directory houses your Azure AI Studio projects, and each project has a dedicated project directory that includes `code`, `data`, and `shared` folders.
+The container is configured with the Azure AI folder hierarchy (`afh` directory), which is designed to orient you within your current development context, and help you work with your code, data, and shared files most efficiently. This `afh` directory houses your Azure AI Foundry projects, and each project has a dedicated project directory that includes `code`, `data`, and `shared` folders.
This table summarizes the folder structure:
@@ -64,9 +65,9 @@ This table summarizes the folder structure:
You can create, reference, and work with prompt flows.
-Prompt flows already created in the Azure AI Studio can be found at `shared\Users\{user-name}\promptflow`. You can also create new flows in your `code` or `shared` folder.
+Prompt flows already created in the Azure AI Foundry portal can be found at `shared\Users\{user-name}\promptflow`. You can also create new flows in your `code` or `shared` folder.
-Prompt flow automatically uses the Azure AI Studio connections your project has access to.
+Prompt flow automatically uses the Azure AI Foundry connections your project has access to.
You can also work with the prompt flow extension in VS Code, which is preinstalled in this environment. Within this extension, you can set the connection provider to your project. See [consume connections from Azure AI](https://microsoft.github.io/promptflow/cloud/azureai/consume-connections-from-azure-ai.html).
Summary
{
"modification_type": "minor update",
"modification_title": "Update the documentation on working with Azure AI projects in VS Code"
}
Explanation
This change updates vscode.md to reflect the rename from Azure AI Studio to Azure AI Foundry. Throughout the document, "AI Studio" was replaced with "AI Foundry" to match the new platform.
The main points are:
- The article title and description now refer to "Azure AI Foundry projects", so readers immediately know which platform the article covers.
- The steps for launching VS Code, including the portal URL and menu navigation, were adjusted for the Azure AI Foundry interface.
- The descriptions of the remote container setup, the file explorer behavior, and the `afh` folder hierarchy were updated in the same way.
- The prompt flow material now states that flows automatically use the Azure AI Foundry connections your project has access to, making clear how the feature is reached on the new platform.
These updates give readers accurate, clear guidance for working with projects in VS Code inside the Azure AI Foundry environment.
articles/ai-studio/how-to/disable-local-auth.md
Diff
@@ -1,21 +1,21 @@
---
title: "Disable shared key access to the hub storage account"
titleSuffix: Azure AI Foundry
-description: "Disable shared key access to the default storage account used by your Azure AI Studio hub and projects."
+description: "Disable shared key access to the default storage account used by your Azure AI Foundry hub and projects."
author: Blackmist
ms.author: larryfr
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: ambadal
-
#customer intent: As an admin, I want to disable shared key access to my resources to improve security.
-
---
# Disable shared key access for your hub's storage account (preview)
-An Azure AI Studio hub defaults to use of a shared key to access its default Azure Storage account. With key-based authorization, anyone who has the key and access to the storage account can access data.
+An Azure AI Foundry hub defaults to use of a shared key to access its default Azure Storage account. With key-based authorization, anyone who has the key and access to the storage account can access data.
To reduce the risk of unauthorized access, you can disable key-based authorization, and instead use Microsoft Entra ID for authorization. This configuration uses a Microsoft Entra ID value to authorize access to the storage account. The identity used to access storage is either the user's identity or a managed identity. The user's identity is used to view data in the Azure Machine Learning studio, or to run a notebook while authenticated with the user's identity. The Azure Machine Learning service uses a managed identity to access the storage account - for example, when running a training job as the managed identity.
@@ -94,7 +94,7 @@ When you create a new hub, the creation process can automatically disable shared
# [Azure portal](#tab/portal)
-1. In Azure AI Studio, select __Management center__ from the left menu.
+1. In Azure AI Foundry portal, select __Management center__ from the left menu.
1. Select __All resources__ from the left menu, the dropdown menu next to __+ New project__, and then select __New hub__.
:::image type="content" source="../media/disable-local-auth/create-new-hub.png" alt-text="Screenshot of the new hub dropdown button.":::
@@ -222,11 +222,11 @@ After you create the hub, identify all the users that will use it - for example,
## Update an existing hub
-If you have an existing Azure AI Studio hub, use the steps in this section to update the hub to use Microsoft Entra ID, to authorize access to the storage account. Then, disable shared key access on the storage account.
+If you have an existing Azure AI Foundry hub, use the steps in this section to update the hub to use Microsoft Entra ID, to authorize access to the storage account. Then, disable shared key access on the storage account.
# [Azure portal](#tab/portal)
-1. Go to the Azure portal and select the __AI Studio hub__.
+1. Go to the Azure portal and select the __AI Foundry hub__.
1. From the left menu, select **Properties**. From the bottom of the page, set __Storage account access type__ to __Identity-based__. Select __Save__ from the top of the page to save the configuration.
:::image type="content" source="../media/disable-local-auth/update-existing-hub-identity-based-access.png" alt-text="Screenshot showing selection of Identity-based access." lightbox="../media/disable-local-auth/update-existing-hub-identity-based-access.png":::
@@ -344,7 +344,7 @@ az ml workspace update --name myhub --system-datastores-auth-mode accesskey
# [ARM Template](#tab/armtemplate)
-If you have an existing Azure AI Studio hub, use the steps in this section to update the hub to use Microsoft Entra ID, to authorize access to the storage account. Then, disable shared key access on the storage account.
+If you have an existing Azure AI Foundry hub, use the steps in this section to update the hub to use Microsoft Entra ID, to authorize access to the storage account. Then, disable shared key access on the storage account.
In the following JSON template example, substitute your own values for the following placeholders:
@@ -417,4 +417,4 @@ To work with a storage account with disabled shared key access, you might need t
## Related content
- [Prevent shared key authorization for an Azure Storage account](/azure/storage/common/shared-key-authorization-prevent)
-- [Create an Azure AI Studio hub](develop/create-hub-project-sdk.md)
\ No newline at end of file
+- [Create an Azure AI Foundry hub](develop/create-hub-project-sdk.md)
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "ローカル認証を無効にする方法に関するドキュメントの更新"
}
Explanation
This change updates disable-local-auth.md, replacing "AI Studio" with "AI Foundry" so the document reflects the current platform.
The main points are:
- The document description now refers to the Azure AI Foundry hub and projects, making future references unambiguous.
- The instructions for creating a new hub and for updating an existing hub were revised to reference the Azure AI Foundry portal.
- The explanations of Microsoft Entra ID authorization and storage-account access control were aligned with the new platform name as well.
With these updates, readers have current, accurate steps for disabling shared key access in an Azure AI Foundry environment, helping them tighten control over access to the storage account.
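The CLI example in the diff above shows `az ml workspace update --name myhub --system-datastores-auth-mode accesskey`; switching a hub to identity-based access uses the same flag with a different value. For the ARM template path, a minimal fragment might look like the following — the resource name and `apiVersion` are placeholders, and the property name `systemDatastoresAuthMode` is inferred from the CLI flag, not quoted from the template in the doc:

```json
{
  "type": "Microsoft.MachineLearningServices/workspaces",
  "apiVersion": "2024-04-01",
  "name": "myhub",
  "properties": {
    "systemDatastoresAuthMode": "identity"
  }
}
```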
articles/ai-studio/how-to/disaster-recovery.md
Diff
@@ -1,7 +1,7 @@
---
title: Customer enabled disaster recovery
titleSuffix: Azure AI Foundry
-description: Learn how to plan for disaster recovery for Azure AI Studio.
+description: Learn how to plan for disaster recovery for Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -17,43 +17,43 @@ ms.date: 5/21/2024
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery with Azure AI Studio. Since Azure AI Studio builds on [Azure Machine Learning architecture](/azure/machine-learning/concept-workspace), it's beneficial to reference the foundational architecture.
+To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery with Azure AI Foundry. Since Azure AI Foundry builds on [Azure Machine Learning architecture](/azure/machine-learning/concept-workspace), it's beneficial to reference the foundational architecture.
Microsoft strives to ensure that Azure services are always available. However, unplanned service outages might occur. We recommend having a disaster recovery plan in place for handling regional service outages. In this article, you learn how to:
-* Plan for a multi-regional deployment of Azure AI Studio and associated resources.
+* Plan for a multi-regional deployment of Azure AI Foundry and associated resources.
* Maximize chances to recover logs, notebooks, docker images, and other metadata.
* Design for high availability of your solution.
* Initiate a failover to another region.
> [!IMPORTANT]
-> Azure AI Studio itself does not provide automatic failover or disaster recovery.
+> Azure AI Foundry itself does not provide automatic failover or disaster recovery.
-## Understand Azure services for Azure AI Studio
+## Understand Azure services for Azure AI Foundry
-Azure AI Studio depends on multiple Azure services. Some of these services are provisioned in your subscription. You're responsible for the high-availability configuration of these services. Microsoft manages some services, which are created in a Microsoft subscription.
+Azure AI Foundry depends on multiple Azure services. Some of these services are provisioned in your subscription. You're responsible for the high-availability configuration of these services. Microsoft manages some services, which are created in a Microsoft subscription.
Azure services include:
-* **Azure AI Studio infrastructure**: A Microsoft-managed environment for the Azure AI Studio hub and project. The [underlying architecture](Azure AI Studio architecture doc) is provided by Azure Machine Learning.
+* **Azure AI Foundry infrastructure**: A Microsoft-managed environment for the Azure AI Foundry hub and project. The [underlying architecture](Azure AI Foundry architecture doc) is provided by Azure Machine Learning.
-* **Required associated resources**: Resources provisioned in your subscription during Azure AI Studio hub and project creation. These resources include Azure Storage and Azure Key Vault.
+* **Required associated resources**: Resources provisioned in your subscription during Azure AI Foundry hub and project creation. These resources include Azure Storage and Azure Key Vault.
* Default storage has data such as model, training log data, and references to data assets.
* Key Vault has credentials for Azure Storage and connections.
-* **Optional associated resources**: Resources you can attach to your Azure AI Studio hub. These resources include Azure Container Registry and Application Insights.
+* **Optional associated resources**: Resources you can attach to your Azure AI Foundry hub. These resources include Azure Container Registry and Application Insights.
* Container Registry has a Docker image for training and inferencing environments.
- * Application Insights is for monitoring Azure AI Studio.
+ * Application Insights is for monitoring Azure AI Foundry.
* **Compute instance**: Resource you create after hub deployment. Microsoft-managed model development environments.
-* **Connections**: Azure AI Studio can connect to various other services. You're responsible for cofiguring their high-availability settings.
+* **Connections**: Azure AI Foundry can connect to various other services. You're responsible for configuring their high-availability settings.
The following table shows the Azure services that Microsoft manages and the ones you manage. It also indicates the services that are highly available by default.
| Service | Managed by | High availability by default |
| ----- | ----- | ----- |
-| **Azure AI Studio infrastructure** | Microsoft | |
+| **Azure AI Foundry infrastructure** | Microsoft | |
| **Associated resources** |
| Azure Storage | You | |
| Key Vault | You | ✓ |
@@ -67,28 +67,28 @@ The rest of this article describes the actions you need to take to make each of
## Plan for multi-regional deployment
-A multi-regional deployment relies on creation of Azure AI Studio and other resources (infrastructure) in two Azure regions. If a regional outage occurs, you can switch to the other region. When planning on where to deploy your resources, consider:
+A multi-regional deployment relies on creation of Azure AI Foundry and other resources (infrastructure) in two Azure regions. If a regional outage occurs, you can switch to the other region. When planning on where to deploy your resources, consider:
-* __Regional availability__: If possible, use a region in the same geographic area, not necessarily the one that is closest. To check regional availability for Azure AI Studio, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/).
+* __Regional availability__: If possible, use a region in the same geographic area, not necessarily the one that is closest. To check regional availability for Azure AI Foundry, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/).
* __Azure paired regions__: Paired regions coordinate platform updates and prioritize recovery efforts where needed. However, not all regions support paired regions. For more information, see [Azure paired regions](/azure/reliability/cross-region-replication-azure).
* __Service availability__: Decide whether the resources used by your solution should be hot/hot, hot/warm, or hot/cold.
* __Hot/hot__: Both regions are active at the same time, with one region ready to begin use immediately.
* __Hot/warm__: Primary region active, secondary region has critical resources (for example, deployed models) ready to start. Noncritical resources would need to be manually deployed in the secondary region.
- * __Hot/cold__: Primary region active, secondary region has Azure AI Studio and other resources deployed, along with needed data. Resources such as models, model deployments, or pipelines would need to be manually deployed.
+ * __Hot/cold__: Primary region active, secondary region has Azure AI Foundry and other resources deployed, along with needed data. Resources such as models, model deployments, or pipelines would need to be manually deployed.
> [!TIP]
-> Depending on your business requirements, you may decide to treat different Azure AI Studio resources differently.
+> Depending on your business requirements, you may decide to treat different Azure AI Foundry resources differently.
-Azure AI Studio builds on top of other services. Some services can be configured to replicate to other regions. Others you must manually create in multiple regions. The following table provides a list of services, who is responsible for replication, and an overview of the configuration:
+Azure AI Foundry builds on top of other services. Some services can be configured to replicate to other regions. Others you must manually create in multiple regions. The following table provides a list of services, who is responsible for replication, and an overview of the configuration:
| Azure service | Geo-replicated by | Configuration |
| ----- | ----- | ----- |
-| AI Studio hub and projects | You | Create a hub/projects in the selected regions. |
-| AI Studio compute | You | Create the compute resources in the selected regions. For compute resources that can dynamically scale, make sure that both regions provide sufficient compute quota for your needs. |
-| Key Vault | Microsoft | Use the same Key Vault instance with the Azure AI Studio hub and resources in both regions. Key Vault automatically fails over to a secondary region. For more information, see [Azure Key Vault availability and redundancy](/azure/key-vault/general/disaster-recovery-guidance).|
+| AI Foundry hub and projects | You | Create a hub/projects in the selected regions. |
+| AI Foundry compute | You | Create the compute resources in the selected regions. For compute resources that can dynamically scale, make sure that both regions provide sufficient compute quota for your needs. |
+| Key Vault | Microsoft | Use the same Key Vault instance with the Azure AI Foundry hub and resources in both regions. Key Vault automatically fails over to a secondary region. For more information, see [Azure Key Vault availability and redundancy](/azure/key-vault/general/disaster-recovery-guidance).|
| Storage Account | You | Azure Machine Learning doesn't support __default storage-account__ failover using geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), read-access geo-redundant storage (RA-GRS), or read-access geo-zone-redundant storage (RA-GZRS). Configure a storage account according to your needs and then use it for your hub. All subsequent projects use the hub's storage account. For more information, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy). |
-| Container Registry | Microsoft | Configure the Container Registry instance to geo-replicate registries to the paired region for Azure AI Studio. Use the same instance for both hub instances. For more information, see [Geo-replication in Azure Container Registry](/azure/container-registry/container-registry-geo-replication). |
+| Container Registry | Microsoft | Configure the Container Registry instance to geo-replicate registries to the paired region for Azure AI Foundry. Use the same instance for both hub instances. For more information, see [Geo-replication in Azure Container Registry](/azure/container-registry/container-registry-geo-replication). |
| Application Insights | You | Create Application Insights for the hub in both regions. To adjust the data-retention period and details, see [Data collection, retention, and storage in Application Insights](/azure/azure-monitor/logs/data-retention-archive). |
To enable fast recovery and restart in the secondary region, we recommend the following development practices:
@@ -104,13 +104,13 @@ To enable fast recovery and restart in the secondary region, we recommend the fo
Certain Azure services support availability zones. For regions that support availability zones, if a zone goes down any project pauses and data should be saved. However, the data is unavailable to refresh until the zone is back online.
-For more information, see [Availability zone service and regional support](/azure/reliability/availability-zones-service-support).
+For more information, see [Availability zone service support](/azure/reliability/availability-zones-service-support).
### Deploy critical components to multiple regions
Determine the level of business continuity that you're aiming for. The level might differ between the components of your solution. For example, you might want to have a hot/hot configuration for production pipelines or model deployments, and hot/cold for development.
-Azure AI Studio is a regional service and stores data both service-side and on a storage account in your subscription. If a regional disaster occurs, service data can't be recovered. But you can recover the data stored by the service on the storage account in your subscription given storage redundancy is enforced. Service-side stored data is mostly metadata (tags, asset names, descriptions). Stored on your storage account is typically non-metadata, for example, uploaded data.
+Azure AI Foundry is a regional service and stores data both service-side and on a storage account in your subscription. If a regional disaster occurs, service data can't be recovered. But you can recover the data stored by the service on the storage account in your subscription given storage redundancy is enforced. Service-side stored data is mostly metadata (tags, asset names, descriptions). Stored on your storage account is typically non-metadata, for example, uploaded data.
For connections, we recommend creating two separate resources in two distinct regions and then create two connections for the hub. For example, if AI Services is a critical resource for business continuity, creating two AI Services resources and two connections for the hub, would be a good strategy for business continuity. With this configuration, if one region goes down there's still one region operational.
@@ -120,15 +120,15 @@ For any hubs that are essential to business continuity, deploy resources in two
In the scenario in which you're connecting with data to customize your AI application, typically your datasets could be used in Azure AI but also outside of Azure AI. Dataset volume could be quite large, so for it might be good practice to keep this data in a separate storage account. Evaluate what data replication strategy makes most sense for your use case.
-In AI Studio, make a connection to your data. If you have multiple AI Studio instances in different regions, you might still point to the same storage account because connections work across regions.
+In AI Foundry portal, make a connection to your data. If you have multiple AI Foundry instances in different regions, you might still point to the same storage account because connections work across regions.
## Initiate a failover
### Continue work in the failover hub
-When your primary hub becomes unavailable, you can switch over to the secondary hub to continue development. Azure AI Studio doesn't automatically submit jobs to the secondary hub if there's an outage. Update your code configuration to point to the new hub or project resources. We recommend to avoiding hardcoding hub or project references.
+When your primary hub becomes unavailable, you can switch over to the secondary hub to continue development. Azure AI Foundry doesn't automatically submit jobs to the secondary hub if there's an outage. Update your code configuration to point to the new hub or project resources. We recommend to avoiding hardcoding hub or project references.
-Azure AI Studio can't sync or recover artifacts or metadata between hubs. Dependent on your application deployment strategy, you might have to move or recreate artifacts in the failover hub in order to continue. In case you configure your primary hub and secondary hub to share associated resources with geo-replication enabled, some objects might be directly available to the failover hub. For example, if both hubs share the same docker images, configured datastores, and Azure Key Vault resources.
+Azure AI Foundry can't sync or recover artifacts or metadata between hubs. Dependent on your application deployment strategy, you might have to move or recreate artifacts in the failover hub in order to continue. In case you configure your primary hub and secondary hub to share associated resources with geo-replication enabled, some objects might be directly available to the failover hub. For example, if both hubs share the same docker images, configured datastores, and Azure Key Vault resources.
> [!NOTE]
> Any jobs that are running when a service outage occurs will not automatically transition to the secondary hub. It is also unlikely that the jobs will resume and finish successfully in the primary hub once the outage is resolved. Instead, these jobs must be resubmitted, either in the secondary hub or in the primary (once the outage is resolved).
@@ -141,13 +141,13 @@ If a hub and its existing resources are accidentally deleted, there are some res
| Service | soft delete enabled |
| ------- | ------------------- |
-| Azure AI Studio hub | Unsupported |
-| Azure AI Studio project | Unsupported |
+| Azure AI Foundry hub | Unsupported |
+| Azure AI Foundry project | Unsupported |
| Azure AI Services resource | Yes |
| Azure Storage | See [Recover a deleted storage account](/azure/storage/common/storage-account-recover#recover-a-deleted-account-from-the-azure-portal). |
| Azure Key Vault | Yes |
## Next steps
-* To learn about secure infrastructure deployments with Azure AI Studio, see [Create a secure hub](create-secure-ai-hub.md).
+* To learn about secure infrastructure deployments with Azure AI Foundry, see [Create a secure hub](create-secure-ai-hub.md).
* For information about the SLA, see the [Azure service-level agreements](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).
Summary
{
"modification_type": "minor update",
"modification_title": "障害復旧に関するドキュメントの更新"
}
Explanation
This change applies to the "disaster-recovery.md" document and covers the rename from Azure AI Studio to Azure AI Foundry along with adjustments to related information, so that readers get content tailored to the new platform.
The main changes are as follows:
- The document title and description are updated to match "Azure AI Foundry," keeping the context appropriate.
- Every occurrence of "AI Studio" in the body is replaced with "AI Foundry," maintaining consistency throughout the text.
- Guidance on disaster recovery planning and multi-region deployment is likewise presented in the context of the new platform.
- Information about the underlying Azure services is also revised, updating the sections on related resources and their management.
- Points to consider and recommended development practices are reorganized around information specific to Azure AI Foundry.
These updates give readers accurate, up-to-date information for disaster recovery planning and help them ensure business continuity within Azure AI Foundry. They should also deepen readers' understanding of the resources and procedures to use when implementing on the new platform.
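The failover guidance in the diff above recommends updating your code configuration to point at the secondary hub rather than hardcoding hub or project references. A minimal, hypothetical sketch of that pattern in Python (the environment-variable names and resource names below are illustrative assumptions, not part of any Azure SDK):

```python
import os

# Hypothetical primary/secondary hub pair; names and regions are
# illustrative only, not real resources.
PRIMARY = {"hub": "hub-eastus2", "project": "proj-eastus2"}
SECONDARY = {"hub": "hub-swedencentral", "project": "proj-swedencentral"}

def resolve_target() -> dict:
    """Pick the hub/project from environment variables so a failover
    needs only a configuration change, never a code change."""
    use_secondary = os.environ.get("USE_SECONDARY_HUB", "0") == "1"
    base = SECONDARY if use_secondary else PRIMARY
    # Allow explicit overrides, e.g. for a one-off recovery run.
    return {
        "hub": os.environ.get("AI_HUB_NAME", base["hub"]),
        "project": os.environ.get("AI_PROJECT_NAME", base["project"]),
    }

target = resolve_target()
print(target["hub"], target["project"])
```

With this shape, setting `USE_SECONDARY_HUB=1` in the deployment environment is the only change needed to repoint client code after initiating a failover.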
articles/ai-studio/how-to/evaluate-generative-ai-app.md
Diff
@@ -1,24 +1,24 @@
---
-title: How to evaluate generative AI models and applications with Azure AI Studio
+title: How to evaluate generative AI models and applications with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Evaluate your generative AI models and applications with Azure AI Studio.
+description: Evaluate your generative AI models and applications with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
-ms.custom: ignite-2023, references_regions, build-2024
+ms.custom: ignite-2023, references_regions, build-2024, ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: mithigpe
ms.author: lagayhar
author: lgayhardt
---
-# How to evaluate generative AI models and applications with Azure AI Studio
+# How to evaluate generative AI models and applications with Azure AI Foundry
To thoroughly assess the performance of your generative AI models and applications when applied to a substantial dataset, you can initiate an evaluation process. During this evaluation, your model or application is tested with the given dataset, and its performance will be quantitatively measured with both mathematical based metrics and AI-assisted metrics. This evaluation run provides you with comprehensive insights into the application's capabilities and limitations.
-To carry out this evaluation, you can utilize the evaluation functionality in Azure AI Studio, a comprehensive platform that offers tools and features for assessing the performance and safety of your generative AI model. In AI Studio, you're able to log, view, and analyze detailed evaluation metrics.
+To carry out this evaluation, you can utilize the evaluation functionality in Azure AI Foundry portal, a comprehensive platform that offers tools and features for assessing the performance and safety of your generative AI model. In AI Foundry portal, you're able to log, view, and analyze detailed evaluation metrics.
-In this article, you learn to create an evaluation run against model, a test dataset or a flow with built-in evaluation metrics from Azure AI Studio UI. For greater flexibility, you can establish a custom evaluation flow and employ the **custom evaluation** feature. Alternatively, if your objective is solely to conduct a batch run without any evaluation, you can also utilize the custom evaluation feature.
+In this article, you learn to create an evaluation run against model, a test dataset or a flow with built-in evaluation metrics from Azure AI Foundry UI. For greater flexibility, you can establish a custom evaluation flow and employ the **custom evaluation** feature. Alternatively, if your objective is solely to conduct a batch run without any evaluation, you can also utilize the custom evaluation feature.
## Prerequisites
@@ -29,7 +29,7 @@ To run an evaluation with AI-assisted metrics, you need to have the following re
## Create an evaluation with built-in evaluation metrics
-An evaluation run allows you to generate metric outputs for each data row in your test dataset. You can choose one or more evaluation metrics to assess the output from different aspects. You can create an evaluation run from the evaluation, model catalog or prompt flow pages in AI Studio. Then an evaluation creation wizard appears to guide you through the process of setting up an evaluation run.
+An evaluation run allows you to generate metric outputs for each data row in your test dataset. You can choose one or more evaluation metrics to assess the output from different aspects. You can create an evaluation run from the evaluation, model catalog or prompt flow pages in AI Foundry portal. Then an evaluation creation wizard appears to guide you through the process of setting up an evaluation run.
### From the evaluate page
@@ -113,7 +113,7 @@ AI Quality (NLP) metrics are mathematically based measurements that assess your
:::image type="content" source="../media/evaluations/evaluate/select-metrics-ai-quality-nlp.png" alt-text="Screenshot of the AI quality (NLP) with groundedness, relevance, and coherence metrics selected when creating a new evaluation." lightbox="../media/evaluations/evaluate/select-metrics-ai-quality-nlp.png":::
-For risk and safety metrics, you don't need to provide a connection and deployment. The Azure AI Studio safety evaluations back-end service provisions a GPT-4 model that can generate content risk severity scores and reasoning to enable you to evaluate your application for content harms.
+For risk and safety metrics, you don't need to provide a connection and deployment. The Azure AI Foundry portal safety evaluations back-end service provisions a GPT-4 model that can generate content risk severity scores and reasoning to enable you to evaluate your application for content harms.
You can set the threshold to calculate the defect rate for the content harm metrics (self-harm-related content, hateful and unfair content, violent content, sexual content). The defect rate is calculated by taking a percentage of instances with severity levels (Very low, Low, Medium, High) above a threshold. By default, we set the threshold as "Medium".
@@ -122,7 +122,7 @@ For protected material and indirect attack, the defect rate is calculated by tak
:::image type="content" source="../media/evaluations/evaluate/safety-metrics.png" alt-text="Screenshot of risk and safety metrics curated by Microsoft showing self-harm, protected material, and indirect attack selected." lightbox="../media/evaluations/evaluate/safety-metrics.png":::
> [!NOTE]
-> AI-assisted risk and safety metrics are hosted by Azure AI Studio safety evaluations back-end service and is only available in the following regions: East US 2, France Central, UK South, Sweden Central
+> AI-assisted risk and safety metrics are hosted by Azure AI Foundry safety evaluations back-end service and is only available in the following regions: East US 2, France Central, UK South, Sweden Central
**Data mapping for evaluation**: You must specify which data columns in your dataset correspond with inputs needed in the evaluation. Different evaluation metrics demand distinct types of data inputs for accurate calculations.
@@ -235,7 +235,7 @@ The evaluator library is a centralized place that allows you to see the details
The evaluator library also enables version management. You can compare different versions of your work, restore previous versions if needed, and collaborate with others more easily.
-To use the evaluator library in AI Studio, go to your project's **Evaluation** page and select the **Evaluator library** tab.
+To use the evaluator library in AI Foundry portal, go to your project's **Evaluation** page and select the **Evaluator library** tab.
:::image type="content" source="../media/evaluations/evaluate/evaluator-library-list.png" alt-text="Screenshot of the page to select evaluators from the evaluator library." lightbox="../media/evaluations/evaluate/evaluator-library-list.png":::
@@ -251,4 +251,4 @@ Learn more about how to evaluate your generative AI applications:
- [Evaluate your generative AI apps via the playground](./evaluate-prompts-playground.md)
- [View the evaluation results](./evaluate-results.md)
- Learn more about [harm mitigation techniques](../concepts/evaluation-improvement-strategies.md).
-- [Transparency Note for Azure AI Studio safety evaluations](../concepts/safety-evaluations-transparency-note.md).
+- [Transparency Note for Azure AI Foundry safety evaluations](../concepts/safety-evaluations-transparency-note.md).
Summary
{
"modification_type": "minor update",
"modification_title": "生成AIアプリケーションの評価に関するドキュメントの更新"
}
Explanation
This change updates the document "evaluate-generative-ai-app.md" on evaluating generative AI models and applications, renaming Azure AI Studio to Azure AI Foundry and refreshing related information. Readers therefore receive accurate, up-to-date guidance based on the new platform.
The main changes are as follows:
- The document title and description are updated to "Azure AI Foundry," unifying the context.
- All references to Azure AI Studio are replaced with Azure AI Foundry, including the descriptions of the evaluation features and of how to run evaluations.
- The service behind the AI-assisted metrics used during evaluation now refers to Azure AI Foundry, clarifying the details needed for risk and safety evaluations.
- Instructions for using the evaluator library are likewise revised for Azure AI Foundry, making it easier for users to understand how to work in the new environment.
With these changes, users get the latest Foundry-specific information and procedures for evaluating the performance of generative AI models and applications, providing an improved user experience.
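The evaluation diff above states that the defect rate for content-harm metrics is the percentage of instances whose severity level is at or above a chosen threshold, with "Medium" as the default. A small sketch of that arithmetic (treating the threshold value itself as counting toward the defect rate is an assumption, as is the exact ordering of labels):

```python
# Severity labels in increasing order, as listed in the diff.
SEVERITY_ORDER = ["Very low", "Low", "Medium", "High"]

def defect_rate(severities: list[str], threshold: str = "Medium") -> float:
    """Percentage of instances at or above the severity threshold."""
    cutoff = SEVERITY_ORDER.index(threshold)
    flagged = sum(1 for s in severities if SEVERITY_ORDER.index(s) >= cutoff)
    return 100.0 * flagged / len(severities)

# Two of four rows are Medium or above.
print(defect_rate(["Very low", "Low", "Medium", "High"]))  # 50.0
```

Raising the threshold to "High" makes the metric stricter, so the same results yield a lower defect rate.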
articles/ai-studio/how-to/evaluate-prompts-playground.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to manually evaluate prompts in Azure AI Studio playground
+title: How to manually evaluate prompts in Azure AI Foundry portal playground
titleSuffix: Azure AI Foundry
-description: Quickly test and evaluate prompts in Azure AI Studio playground.
+description: Quickly test and evaluate prompts in Azure AI Foundry portal playground.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,13 +14,13 @@ ms.author: lagayhar
author: lgayhardt
---
-# Manually evaluate prompts in Azure AI Studio playground
+# Manually evaluate prompts in Azure AI Foundry portal playground
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
When you get started with prompt engineering, you should test different inputs one at a time to evaluate the effectiveness of the prompt can be very time intensive. This is because it's important to check whether the content filters are working appropriately, whether the response is accurate, and more.
-To make this process simpler, you can utilize manual evaluation in Azure AI Studio, an evaluation tool enabling you to continuously iterate and evaluate your prompt against your test data in a single interface. You can also manually rate the outputs, the model’s responses, to help you gain confidence in your prompt.
+To make this process simpler, you can utilize manual evaluation in Azure AI Foundry portal, an evaluation tool enabling you to continuously iterate and evaluate your prompt against your test data in a single interface. You can also manually rate the outputs, the model’s responses, to help you gain confidence in your prompt.
Manual evaluation can help you get started to understand how well your prompt is performing and iterate on your prompt to ensure you reach your desired level of confidence.
@@ -80,7 +80,7 @@ You can also compare the thumbs up and down ratings across your different manual
## Next steps
Learn more about how to evaluate your generative AI applications:
-- [Evaluate your generative AI apps with the Azure AI Studio or SDK](./evaluate-generative-ai-app.md)
+- [Evaluate your generative AI apps with the Azure AI Foundry portal or SDK](./evaluate-generative-ai-app.md)
- [View the evaluation results](./evaluate-results.md)
Learn more about [harm mitigation techniques](../concepts/evaluation-improvement-strategies.md).
Summary
{
"modification_type": "minor update",
"modification_title": "プロンプト評価に関するドキュメントの更新"
}
Explanation
This change updates the document "evaluate-prompts-playground.md" on evaluating prompts, renaming Azure AI Studio to Azure AI Foundry and adjusting related information so that readers get up-to-date guidance suited to the new platform.
The main changes are as follows:
- The document title and description are updated to match "Azure AI Foundry," keeping the context consistent.
- All occurrences of "Azure AI Studio" are replaced with "Azure AI Foundry," and the description of the manual evaluation tool in particular is clarified for the new platform.
- The description of the manual evaluation process emphasizes that you can continuously iterate on and evaluate your prompt against test data in a single interface.
- Finally, the next-steps links are updated so that users can reach the new information easily.
These changes give users the latest techniques and information for effectively evaluating prompt performance and designing reliable prompts suited to their goals.
articles/ai-studio/how-to/evaluate-results.md
Diff
@@ -1,22 +1,23 @@
---
-title: How to view evaluation results in Azure AI Studio
+title: How to view evaluation results in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to view evaluation results in Azure AI Studio.
+description: This article provides instructions on how to view evaluation results in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: wenxwei
ms.author: lagayhar
author: lgayhardt
---
-# How to view evaluation results in Azure AI Studio
+# How to view evaluation results in Azure AI Foundry portal
-The Azure AI Studio evaluation page is a versatile hub that not only allows you to visualize and assess your results but also serves as a control center for optimizing, troubleshooting, and selecting the ideal AI model for your deployment needs. It's a one-stop solution for data-driven decision-making and performance enhancement in your AI Studio projects. You can seamlessly access and interpret the results from various sources, including your flow, the playground quick test session, evaluation submission UI, and SDK. This flexibility ensures that you can interact with your results in a way that best suits your workflow and preferences.
+The Azure AI Foundry portal evaluation page is a versatile hub that not only allows you to visualize and assess your results but also serves as a control center for optimizing, troubleshooting, and selecting the ideal AI model for your deployment needs. It's a one-stop solution for data-driven decision-making and performance enhancement in your AI Foundry projects. You can seamlessly access and interpret the results from various sources, including your flow, the playground quick test session, evaluation submission UI, and SDK. This flexibility ensures that you can interact with your results in a way that best suits your workflow and preferences.
Once you've visualized your evaluation results, you can dive into a thorough examination. This includes the ability to not only view individual results but also to compare these results across multiple evaluation runs. By doing so, you can identify trends, patterns, and discrepancies, gaining invaluable insights into the performance of your AI system under various conditions.
@@ -158,6 +159,6 @@ Understanding the built-in metrics is vital for assessing the performance and ef
Learn more about how to evaluate your generative AI applications:
- [Evaluate your generative AI apps via the playground](../how-to/evaluate-prompts-playground.md)
-- [Evaluate your generative AI apps with the Azure AI Studio or SDK](../how-to/evaluate-generative-ai-app.md)
+- [Evaluate your generative AI apps with the Azure AI Foundry portal or SDK](../how-to/evaluate-generative-ai-app.md)
Learn more about [harm mitigation techniques](../concepts/evaluation-improvement-strategies.md).
Summary
{
"modification_type": "minor update",
"modification_title": "評価結果の表示に関するドキュメントの更新"
}
Explanation
This change updates the document "evaluate-results.md" on viewing evaluation results, renaming Azure AI Studio to Azure AI Foundry and refreshing the related information. Readers therefore receive accurate guidance aligned with the new platform.
The main changes are as follows:
- The document title and description are updated to "Azure AI Foundry," keeping the information consistent.
- The description of the evaluation page is adapted to the "Azure AI Foundry portal," emphasizing its role as a hub for data-driven decision-making and performance improvement in AI Foundry projects.
- The statement that users can access and interpret evaluation results from various sources is retained, likewise adjusted for the new platform.
- The link to the evaluation application is updated for the "Azure AI Foundry portal" so that users can access the latest information.
With these changes, readers can efficiently review and analyze evaluation results on the new platform and gain a deeper understanding of how their AI systems perform under various conditions.
articles/ai-studio/how-to/fine-tune-model-llama.md
Diff
@@ -1,7 +1,7 @@
---
-title: Fine-tune Llama models in Azure AI Studio
+title: Fine-tune Llama models in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to fine-tune Meta Llama models in Azure AI Studio.
+description: Learn how to fine-tune Meta Llama models in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -13,15 +13,15 @@ author: ssalgadodev
ms.custom: references_regions, build-2024
---
-# Fine-tune Meta Llama models in Azure AI Studio
+# Fine-tune Meta Llama models in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Azure AI Studio lets you tailor large language models to your personal datasets by using a process known as *fine-tuning*.
+Azure AI Foundry lets you tailor large language models to your personal datasets by using a process known as *fine-tuning*.
Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.
-In this article, you learn how to fine-tune Meta Llama models in [Azure AI Studio](https://ai.azure.com).
+In this article, you learn how to fine-tune Meta Llama models in [Azure AI Foundry](https://ai.azure.com).
The [Meta Llama family of large language models (LLMs)](./deploy-models-llama.md) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-Instruct.
@@ -58,15 +58,15 @@ Fine-tuning of Llama 2 models is currently supported in projects located in West
An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
+- An [Azure AI Foundry hub](../how-to/create-azure-ai-resource.md).
> [!IMPORTANT]
> For Meta Llama 3.1 models, the pay-as-you-go model fine-tune offering is only available with hubs created in **West US 3** regions.
-- An [Azure AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
+- An [Azure AI Foundry project](../how-to/create-projects.md) in Azure AI Foundry portal.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
- - On the Azure subscription—to subscribe the AI Studio project to the Azure Marketplace offering, once for each project, per offering:
+ - On the Azure subscription—to subscribe the AI Foundry project to the Azure Marketplace offering, once for each project, per offering:
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
- `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
@@ -77,25 +77,25 @@ Fine-tuning of Llama 2 models is currently supported in projects located in West
- `Microsoft.SaaS/resources/read`
- `Microsoft.SaaS/resources/write`
- - On the AI Studio project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
+ - On the AI Foundry project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
- `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
- `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
- For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+ For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
# [Meta Llama 2](#tab/llama-two)
An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).
+- An [AI Foundry hub](../how-to/create-azure-ai-resource.md).
> [!IMPORTANT]
> For Meta Llama 2 models, the pay-as-you-go model fine-tune offering is only available with hubs created in the **West US 3** region.
-- An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
+- An [AI Foundry project](../how-to/create-projects.md) in Azure AI Foundry portal.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
- - On the Azure subscription—to subscribe the AI Studio project to the Azure Marketplace offering, once for each project, per offering:
+ - On the Azure subscription—to subscribe the AI Foundry project to the Azure Marketplace offering, once for each project, per offering:
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
- `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
@@ -106,11 +106,11 @@ Fine-tuning of Llama 2 models is currently supported in projects located in West
- `Microsoft.SaaS/resources/read`
- `Microsoft.SaaS/resources/write`
- - On the AI Studio project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
+ - On the AI Foundry project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
- `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
- `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
- For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+ For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
---
### Subscription provider registration
@@ -179,8 +179,8 @@ The supported file type is JSON Lines. Files are uploaded to the default datasto
To fine-tune a LLama 3.1 model:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Foundry portal [model catalog](https://ai.azure.com/explore/models).
1. On the model's **Details** page, select **fine-tune**.
@@ -199,7 +199,7 @@ To fine-tune a LLama 3.1 model:
> [!NOTE]
> If you have your training/validation files in a credential less datastore, you will need to allow workspace managed identity access to their datastore in order to proceed with MaaS finetuning with a credential less storage. On the "Datastore" page, after clicking "Update authentication" > Select the following option:
- 
+ 
Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
- The batch size to use for training. When set to -1, batch_size is calculated as 0.2% of examples in training set and the max is 256.
@@ -210,18 +210,18 @@ To fine-tune a LLama 3.1 model:
1. Review your selections and proceed to train your model.
-Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Llama 3.1 family of large language models with Azure AI Studio](./deploy-models-llama.md).
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Llama 3.1 family of large language models with Azure AI Foundry](./deploy-models-llama.md).
# [Meta Llama 2](#tab/llama-two)
-You can fine-tune a Llama 2 model in Azure AI Studio via the [model catalog](./model-catalog-overview.md) or from your existing project.
+You can fine-tune a Llama 2 model in Azure AI Foundry portal via the [model catalog](./model-catalog-overview.md) or from your existing project.
-To fine-tune a Llama 2 model in an existing Azure AI Studio project, follow these steps:
+To fine-tune a Llama 2 model in an existing Azure AI Foundry project, follow these steps:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
-1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Choose the model you want to fine-tune from the Azure AI Foundry portal [model catalog](https://ai.azure.com/explore/models).
1. On the model's **Details** page, select **fine-tune**.
@@ -248,13 +248,13 @@ To fine-tune a Llama 2 model in an existing Azure AI Studio project, follow thes
1. Review your selections and proceed to train your model.
-Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Llama 2 family of large language models with Azure AI Studio](./deploy-models-llama.md).
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Llama 2 family of large language models with Azure AI Foundry](./deploy-models-llama.md).
---
## Cleaning up your fine-tuned models
-You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Studio](https://ai.azure.com) or from the model details page. Select the fine-tuned model to delete from the Fine-tuning page, and then select the Delete button to delete the fine-tuned model.
+You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Foundry](https://ai.azure.com) or from the model details page. Select the fine-tuned model to delete from the Fine-tuning page, and then select the Delete button to delete the fine-tuned model.
>[!NOTE]
> You can't delete a custom model if it has an existing deployment. You must first delete your model deployment before you can delete your custom model.
@@ -263,7 +263,7 @@ You can delete a fine-tuned model from the fine-tuning model list in [Azure AI S
### Cost and quota considerations for Meta Llama models fine-tuned as a service
-Meta Llama models fine-tuned as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when [deploying](./deploy-models-llama.md) or fine-tuning the models.
+Meta Llama models fine-tuned as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when [deploying](./deploy-models-llama.md) or fine-tuning the models.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
@@ -279,6 +279,6 @@ Models deployed as a service with pay-as-you-go billing are protected by Azure A
## Next steps
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Learn more about deploying Meta Llama models](./deploy-models-llama.md)
- [Azure AI FAQ article](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the Llama model fine-tuning documentation"
}
Explanation
This change updates the Llama fine-tuning document, `fine-tune-model-llama.md`, renaming Azure AI Studio to Azure AI Foundry and bringing the related information up to date, so readers get correct guidance based on the new platform.
The main changes are:
- The document title and description now read "Azure AI Foundry portal", aligning the overall context with the new platform.
- Most occurrences of "Azure AI Studio" in the body are unified to "Azure AI Foundry", making the steps easier to follow against the current environment.
- The descriptions of the fine-tuning process, terms of use, and required resources are preserved as-is, but all of them are placed in the new context.
- The model deployment steps and access-permission information are likewise updated to apply to the new portal.
With these updates, users have clear guidance for effectively fine-tuning and deploying Llama models in the Azure AI Foundry environment. As a result, they gain a better understanding of working on the current platform and can operate more efficiently.
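The hyperparameter note quoted in the diffs above states that a batch size of -1 is resolved to 0.2% of the number of training examples, capped at 256. A minimal sketch of that rule, assuming the docs' description is the whole behavior (the rounding mode and the floor of 1 are my assumptions; the source only states the percentage and the cap):

```python
def resolve_batch_size(batch_size: int, num_training_examples: int) -> int:
    """Resolve the effective training batch size.

    Per the docs: when batch_size is -1, it is computed as 0.2% of the
    examples in the training set, with a maximum of 256. The rounding
    and the minimum of 1 are illustrative assumptions, not documented.
    """
    if batch_size == -1:
        return min(max(1, round(num_training_examples * 0.002)), 256)
    return batch_size
```

For example, a 10,000-example training set would resolve to a batch size of 20, while anything above 128,000 examples would hit the 256 cap.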
articles/ai-studio/how-to/fine-tune-models-tsuzumi.md
Diff
@@ -1,7 +1,7 @@
---
-title: Fine-tune tsuzumi-7b model in Azure AI Studio
+title: Fine-tune tsuzumi-7b model in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to fine-tune tsuzumi-7b in Azure AI Studio.
+description: Learn how to fine-tune tsuzumi-7b in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -10,18 +10,18 @@ ms.reviewer: rasavage
reviewer: shubhirajMsft
ms.author: ssalgado
author: ssalgadodev
-ms.custom: references_regions, build-2024
+ms.custom: references_regions, build-2024, ignite-2024
---
-# Fine-tune tsuzumi-7b model in Azure AI Studio
+# Fine-tune tsuzumi-7b model in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Azure AI Studio lets you tailor large language models to your personal datasets by using a process known as *fine-tuning*.
+Azure AI Foundry lets you tailor large language models to your personal datasets by using a process known as *fine-tuning*.
Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.
-In this article, you learn how to fine-tune an NTTDATA tsuzumi-7b model in [Azure AI Studio](https://ai.azure.com).
+In this article, you learn how to fine-tune an NTTDATA tsuzumi-7b model in [Azure AI Foundry](https://ai.azure.com).
[!INCLUDE [models-preview](../includes/models-preview.md)]
@@ -30,15 +30,15 @@ In this article, you learn how to fine-tune an NTTDATA tsuzumi-7b model in [Azur
## Prerequisites
An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
+- An [Azure AI Foundry hub](../how-to/create-azure-ai-resource.md).
> [!IMPORTANT]
> For Fine-tuning the tsuzumi-7b model, the pay-as-you-go model fine-tune offering is only available with hubs created in **West US 3** regions.
-- An [Azure AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
+- An [Azure AI Foundry project](../how-to/create-projects.md) in Azure AI Foundry portal.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
- - On the Azure subscription—to subscribe the AI Studio project to the Azure Marketplace offering, once for each project, per offering:
+ - On the Azure subscription—to subscribe the AI Foundry project to the Azure Marketplace offering, once for each project, per offering:
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
- `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
- `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
@@ -49,11 +49,11 @@ In this article, you learn how to fine-tune an NTTDATA tsuzumi-7b model in [Azur
- `Microsoft.SaaS/resources/read`
- `Microsoft.SaaS/resources/write`
- - On the AI Studio project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
+ - On the AI Foundry project—to deploy endpoints (the Azure AI Developer role contains these permissions already):
- `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
- `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
- For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+ For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
### Subscription provider registration
@@ -94,8 +94,8 @@ The supported file type is JSON Lines. Files are uploaded to the default datasto
To fine-tune a tsuzumi-7b model:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Foundry portal [model catalog](https://ai.azure.com/explore/models).
1. On the model's **Details** page, select **fine-tune**.
@@ -114,7 +114,7 @@ To fine-tune a tsuzumi-7b model:
> [!NOTE]
> If you have your training/validation files in a credential less datastore, you will need to allow workspace managed identity access to their datastore in order to proceed with MaaS finetuning with a credential less storage. On the "Datastore" page, after clicking "Update authentication" > Select the following option:
- 
+ 
Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
- The batch size to use for training. When set to -1, batch_size is calculated as 0.2% of examples in training set and the max is 256.
@@ -124,12 +124,12 @@ To fine-tune a tsuzumi-7b model:
1. Review your selections and proceed to train your model.
-Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy tsuzumi large language models with Azure AI Studio](./deploy-models-tsuzumi.md).
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy tsuzumi large language models with Azure AI Foundry](./deploy-models-tsuzumi.md).
## Cleaning up your fine-tuned models
-You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Studio](https://ai.azure.com) or from the model details page. Select the fine-tuned model to delete from the Fine-tuning page, and then select the Delete button to delete the fine-tuned model.
+You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Foundry](https://ai.azure.com) or from the model details page. Select the fine-tuned model to delete from the Fine-tuning page, and then select the Delete button to delete the fine-tuned model.
>[!NOTE]
> You can't delete a custom model if it has an existing deployment. You must first delete your model deployment before you can delete your custom model.
@@ -138,7 +138,7 @@ You can delete a fine-tuned model from the fine-tuning model list in [Azure AI S
### Cost and quota considerations for a tsuzumi-7b fine-tuned as a service
-tsuzumi-7b models fine-tuned as a service are offered by NTTDATA through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when [deploying](./deploy-models-tsuzumi.md) or fine-tuning the models.
+tsuzumi-7b models fine-tuned as a service are offered by NTTDATA through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when [deploying](./deploy-models-tsuzumi.md) or fine-tuning the models.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
@@ -150,6 +150,6 @@ Models deployed as a service with pay-as-you-go billing are protected by Azure A
## Next steps
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Learn more about deploying an NTTDATA tsuzumi model](./deploy-models-tsuzumi.md)
- [Azure AI FAQ article](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "Update the tsuzumi-7b model fine-tuning documentation"
}
Explanation
This change updates the tsuzumi-7b fine-tuning document, `fine-tune-models-tsuzumi.md`, renaming Azure AI Studio to Azure AI Foundry and updating the related information accordingly, so readers get the latest information based on the new platform.
The main changes are:
- The document title and description are updated to "Azure AI Foundry portal", aligning the overall context with the new platform.
- The description of the fine-tuning process is kept as-is, while every referenced environment now points to the new portal.
- The information on creating projects and hubs is revised for Azure AI Foundry, with each step described against the current environment.
- The post-fine-tuning deployment steps and the permission information are also updated.
This gives users clear guidance for effectively fine-tuning and deploying the tsuzumi-7b model in the Azure AI Foundry environment. As a result, they come away with a better understanding of operating on the current platform and can move toward more efficient workflows.
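Each of the fine-tuning articles touched above notes that "the supported file type is JSON Lines" for training and validation data. A quick well-formedness check for such a file, as a minimal sketch (the exact per-line schema, e.g. prompt/completion keys versus chat messages, is not specified in this change, so only JSON validity is checked here):

```python
import json

def validate_jsonl(path: str) -> int:
    """Count the valid example lines in a JSON Lines training file.

    Each non-empty line must parse as a standalone JSON value;
    raises ValueError identifying the first malformed line.
    """
    count = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # tolerate blank lines between records
            try:
                json.loads(line)
            except json.JSONDecodeError as exc:
                raise ValueError(f"line {lineno}: {exc}") from exc
            count += 1
    return count
```

Running this before upload catches the common failure mode of a multi-line pretty-printed JSON array being passed off as JSON Lines.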
articles/ai-studio/how-to/fine-tune-phi-3.md
Diff
@@ -1,21 +1,21 @@
---
-title: Fine-tune Phi-3 models in Azure AI Studio
+title: Fine-tune Phi-3 models in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces fine-tuning Phi-3 models in Azure AI Studio.
+description: This article introduces fine-tuning Phi-3 models in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
ms.topic: how-to
ms.date: 7/16/2024
---
-# Fine-tune Phi-3 models in Azure AI Studio
+# Fine-tune Phi-3 models in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Azure AI Studio lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.
+Azure AI Foundry lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.
-In this article, you learn how to fine-tune Phi-3 family of small language models (SLMs) in Azure AI Studio as a service with pay-as you go billing.
+In this article, you learn how to fine-tune Phi-3 family of small language models (SLMs) in Azure AI Foundry portal as a service with pay-as you go billing.
The Phi-3 family of SLMs is a collection of instruction-tuned generative text models. Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across various language, reasoning, coding, and math benchmarks.
@@ -60,15 +60,15 @@ The models underwent a rigorous enhancement process, incorporating both supervis
### Prerequisites
- An Azure subscription. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).
+- An [AI Foundry hub](../how-to/create-azure-ai-resource.md).
> [!IMPORTANT]
> For Phi-3 family models, the pay-as-you-go model fine-tune offering is only available with hubs created in **East US 2** regions.
-- An [AI Studio project](../how-to/create-projects.md).
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group.
+- An [AI Foundry project](../how-to/create-projects.md).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group.
- For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+ For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
### Subscription provider registration
@@ -109,27 +109,27 @@ The supported file type is JSON Lines. Files are uploaded to the default datasto
To fine-tune a Phi-3 model:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Foundry portal [model catalog](https://ai.azure.com/explore/models).
1. On the model's **Details** page, select **fine-tune**.
1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.
-1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model.
-1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-mini-128k-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
+1. On the fine-tune wizard, select the link to **Azure AI Foundry Terms** to learn more about the terms of use. You can also select the **Azure AI Foundry offer details** tab to learn about pricing for the selected model.
+1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-mini-128k-instruct) from Azure AI Foundry. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Foundry offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
> [!NOTE]
- > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+ > Subscribing a project to a particular Azure AI Foundry offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
-1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
+1. Once you sign up the project for the particular Azure AI Foundry offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
1. Enter a name for your fine-tuned model and the optional tags and description.
1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information.
> [!NOTE]
> If the you have your training/validation files in a credential less datastore, you will need to allow workspace managed identity access to your datastore in order to proceed with MaaS fine-tuning with a credential less storage. On the "Datastore" page, after clicking "Update authentication" > Select the following option:
- 
+ 
Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
- The batch size to use for training. When set to -1, batch_size is calculated as 0.2% of examples in training set and the max is 256.
@@ -140,33 +140,33 @@ To fine-tune a Phi-3 model:
1. Review your selections and proceed to train your model.
-Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Foundry](./deploy-models-phi-3.md).
# [Phi-3-medium](#tab/phi-3-medium)
To fine-tune a Phi-3 model:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Foundry portal [model catalog](https://ai.azure.com/explore/models).
1. On the model's **Details** page, select **fine-tune**.
1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.
-1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model.
-1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-medium-128k-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
+1. On the fine-tune wizard, select the link to **Azure AI Foundry Terms** to learn more about the terms of use. You can also select the **Azure AI Foundry offer details** tab to learn about pricing for the selected model.
+1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-medium-128k-instruct) from Azure AI Foundry. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Foundry offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
> [!NOTE]
- > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3-medium-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+ > Subscribing a project to a particular Azure AI Foundry offering (in this case, Phi-3-medium-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
-1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
+1. Once you sign up the project for the particular Azure AI Foundry offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
1. Enter a name for your fine-tuned model and the optional tags and description.
1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information.
> [!NOTE]
> If you have your training/validation files in a credential less datastore, you will need to allow workspace managed identity access to your datastore in order to proceed with MaaS finetuning with a credential less storage. On the "Datastore" page, after clicking "Update authentication" > Select the following option:
- 
+ 
Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
- The batch size to use for training. When set to -1, batch_size is calculated as 0.2% of examples in training set and the max is 256.
@@ -177,34 +177,34 @@ To fine-tune a Phi-3 model:
1. Review your selections and proceed to train your model.
-Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Foundry](./deploy-models-phi-3.md).
# [Phi-3.5](#tab/phi-3-5)
To fine-tune a Phi-3.5 model:
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Foundry portal [model catalog](https://ai.azure.com/explore/models).
1. On the model's **Details** page, select **fine-tune**.
1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.
-1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model.
-1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3.5-mini-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
+1. On the fine-tune wizard, select the link to **Azure AI Foundry Terms** to learn more about the terms of use. You can also select the **Azure AI Foundry offer details** tab to learn about pricing for the selected model.
+1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3.5-mini-instruct) from Azure AI Foundry. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Foundry offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
> [!NOTE]
- > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3.5-mini-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+ > Subscribing a project to a particular Azure AI Foundry offering (in this case, Phi-3.5-mini-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
-1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
+1. Once you sign up the project for the particular Azure AI Foundry offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
1. Enter a name for your fine-tuned model and the optional tags and description.
1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information.
> [!NOTE]
> If you have your training/validation files in a credential less datastore, you will need to allow workspace managed identity access to your datastore in order to proceed with MaaS finetuning with a credential less storage. On the "Datastore" page, after clicking "Update authentication" > Select the following option:
- 
+ 
Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
- The batch size to use for training. When set to -1, batch_size is calculated as 0.2% of examples in training set and the max is 256.
@@ -215,13 +215,13 @@ To fine-tune a Phi-3.5 model:
1. Review your selections and proceed to train your model.
-Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Foundry](./deploy-models-phi-3.md).
---
## Cleaning up your fine-tuned models
-You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Studio](https://ai.azure.com) or from the model details page. Select the fine-tuned model to delete from the Fine-tuning page, and then select the Delete button to delete the fine-tuned model.
+You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Foundry](https://ai.azure.com) or from the model details page. Select the fine-tuned model to delete from the Fine-tuning page, and then select the Delete button to delete the fine-tuned model.
>[!NOTE]
> You can't delete a custom model if it has an existing deployment. You must first delete your model deployment before you can delete your custom model.
@@ -230,7 +230,7 @@ You can delete a fine-tuned model from the fine-tuning model list in [Azure AI S
### Cost and quota considerations for Phi models fine-tuned as a service
-Phi models fine-tuned as a service are offered by Microsoft and integrated with Azure AI Studio for use. You can find the pricing when [deploying](./deploy-models-phi-3.md) or fine-tuning the models under the Pricing and terms tab on deployment wizard.
+Phi models fine-tuned as a service are offered by Microsoft and integrated with Azure AI Foundry for use. You can find the pricing when [deploying](./deploy-models-phi-3.md) or fine-tuning the models under the Pricing and terms tab on deployment wizard.
## Sample notebook
@@ -242,6 +242,6 @@ Models deployed as a service with pay-as-you-go are protected by Azure AI Conten
## Next steps
-- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [What is Azure AI Foundry?](../what-is-ai-studio.md)
- [Learn more about deploying Phi-3 models](./deploy-models-phi-3.md)
- [Azure AI FAQ article](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "Phi-3モデルのファインチューニングに関するドキュメントの更新"
}
Explanation
This change updates the Phi-3 fine-tuning document, "fine-tune-phi-3.md", migrating its content from Azure AI Studio to Azure AI Foundry so that readers have access to current information for the new platform.
The main changes are:
- The title and description now reference the "Azure AI Foundry portal", aligning the overall context with the new platform.
- All website link text was updated to "Azure AI Foundry", and the procedures were revised for the new environment.
- The fine-tuning steps and prerequisites were rewritten to apply to Azure AI Foundry.
- The steps for deploying a fine-tuned model, and the information about roles, were updated.
As a result, users can fine-tune Phi-3 models smoothly in Azure AI Foundry, with a better understanding of how to operate in this context and more efficient work overall.
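The fine-tuning options quoted in the diff include a concrete batch-size rule: when set to -1, the batch size is computed as 0.2% of the examples in the training set, capped at 256. A minimal sketch of that rule (the helper name `effective_batch_size` is illustrative, not part of any SDK):

```python
def effective_batch_size(num_train_examples: int, batch_size: int = -1, cap: int = 256) -> int:
    """Resolve the training batch size per the documented rule.

    When batch_size is -1, it is computed as 0.2% of the training
    examples, capped at `cap` (256 in the documentation quoted above).
    """
    if batch_size == -1:
        return min(max(1, round(num_train_examples * 0.002)), cap)
    return batch_size
```

For example, a 50,000-example training set resolves to a batch size of 100, while a 1,000,000-example set hits the 256 cap.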
articles/ai-studio/how-to/flow-bulk-test-evaluation.md
Diff
@@ -1,7 +1,7 @@
---
title: Submit batch run and evaluate a flow
titleSuffix: Azure AI Foundry
-description: Learn how to submit batch run and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset with Azure AI Studio.
+description: Learn how to submit batch run and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
Summary
{
"modification_type": "minor update",
"modification_title": "バッチ実行とフロー評価に関するドキュメントの更新"
}
Explanation
This change updates the "flow-bulk-test-evaluation.md" document to reflect the migration from Azure AI Studio to Azure AI Foundry. Specifically, the description was revised: the new text emphasizes using Azure AI Foundry to evaluate flow performance.
The main change is:
- The description now reads "Azure AI Foundry" instead of "Azure AI Studio", matching the current platform.
With this fix, users can submit batch runs and use the built-in evaluation methods to assess flow performance in the current Azure AI Foundry environment. It also helps readers clearly distinguish between the platforms and gives them more accurate information.
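As context for the batch-run-plus-evaluation workflow this file describes, here is an illustrative sketch (not prompt flow's actual API) of the kind of roll-up a built-in evaluation method produces from per-row results of a batch run:

```python
from statistics import mean

def aggregate_eval(rows):
    """Aggregate per-row evaluation results from a batch run.

    Each row is a dict like {"score": float, "passed": bool}; the
    return value is the dataset-level summary (pass rate and mean
    score) that an evaluation run surfaces.
    """
    if not rows:
        return {"pass_rate": 0.0, "mean_score": 0.0}
    return {
        "pass_rate": sum(r["passed"] for r in rows) / len(rows),
        "mean_score": mean(r["score"] for r in rows),
    }
```

The row schema here is an assumption for illustration; real built-in evaluators define their own metric names per method.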
articles/ai-studio/how-to/flow-deploy.md
Diff
@@ -1,12 +1,13 @@
---
title: Deploy a flow as a managed online endpoint for real-time inference
titleSuffix: Azure AI Foundry
-description: Learn how to deploy a flow as a managed online endpoint for real-time inference with Azure AI Studio.
+description: Learn how to deploy a flow as a managed online endpoint for real-time inference with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 5/21/2024
ms.reviewer: likebupt
@@ -35,13 +36,13 @@ In this article, you learn how to deploy a flow as a managed online endpoint for
To deploy a prompt flow as an online endpoint, you need:
* An Azure subscription. If you don't have one, create a free account before you begin.
-* An Azure AI Studio project.
+* An Azure AI Foundry project.
## Create an online deployment
Now that you have built a flow and tested it properly, it's time to create your online endpoint for real-time inference.
-Follow the steps below to deploy a prompt flow as an online endpoint in Azure AI Studio.
+Follow the steps below to deploy a prompt flow as an online endpoint in Azure AI Foundry portal.
1. Have a prompt flow ready for deployment. If you don't have one, see [how to build a prompt flow](./flow-develop.md).
1. Optional: Select **Chat** to test if the flow is working correctly. Testing your flow before deployment is recommended best practice.
@@ -77,7 +78,7 @@ Follow the steps below to deploy a prompt flow as an online endpoint in Azure AI
For more information, see the sections below.
> [!TIP]
-> For a guide about how to deploy a base model, see [Deploying models with Azure AI Studio](deploy-models-open.md).
+> For a guide about how to deploy a base model, see [Deploying models with Azure AI Foundry](deploy-models-open.md).
## Settings and configurations
@@ -122,7 +123,7 @@ The authentication method for the endpoint. Key-based authentication provides a
#### Identity type
-The endpoint needs to access Azure resources such as the Azure Container Registry or your AI Studio hub connections for inferencing. You can allow the endpoint permission to access Azure resources via giving permission to its managed identity.
+The endpoint needs to access Azure resources such as the Azure Container Registry or your AI Foundry hub connections for inferencing. You can allow the endpoint permission to access Azure resources via giving permission to its managed identity.
System-assigned identity will be autocreated after your endpoint is created, while user-assigned identity is created by user. [Learn more about managed identities.](/azure/active-directory/managed-identities-azure-resources/overview)
@@ -132,16 +133,16 @@ You notice there's an option whether *Enforce access to connection secrets (prev
##### User-assigned
-When you create the deployment, Azure tries to pull the user container image from the Azure AI Studio hub's Azure Container Registry (ACR) and mounts the user model and code artifacts into the user container from the hub's storage account.
+When you create the deployment, Azure tries to pull the user container image from the Azure AI Foundry hub's Azure Container Registry (ACR) and mounts the user model and code artifacts into the user container from the hub's storage account.
If you created the associated endpoint with **User Assigned Identity**, the user-assigned identity must be granted the following roles before the deployment creation; otherwise, the deployment creation fails.
|Scope|Role|Why it's needed|
|---|---|---|
-|AI Studio project|**Azure Machine Learning Workspace Connection Secrets Reader** role **OR** a customized role with `Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action` | Get project connections|
-|AI Studio project container registry |**ACR pull** |Pull container image |
-|AI Studio project default storage| **Storage Blob Data Reader**| Load model from storage |
-|AI Studio project|**Workspace metrics writer**| After you deploy then endpoint, if you want to monitor the endpoint related metrics like CPU/GPU/Disk/Memory utilization, you need to give this permission to the identity.<br/><br/>Optional|
+|AI Foundry project|**Azure Machine Learning Workspace Connection Secrets Reader** role **OR** a customized role with `Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action` | Get project connections|
+|AI Foundry project container registry |**ACR pull** |Pull container image |
+|AI Foundry project default storage| **Storage Blob Data Reader**| Load model from storage |
+|AI Foundry project|**Workspace metrics writer**| After you deploy then endpoint, if you want to monitor the endpoint related metrics like CPU/GPU/Disk/Memory utilization, you need to give this permission to the identity.<br/><br/>Optional|
See detailed guidance about how to grant permissions to the endpoint identity in [Grant permissions to the endpoint](#grant-permissions-to-the-endpoint).
@@ -177,7 +178,7 @@ If you enable this, tracing data and system metrics during inference time (such
You can grant the required permissions in Azure portal UI by following steps.
-1. Go to the Azure AI Studio project overview page in [Azure portal](https://ms.portal.azure.com/#home).
+1. Go to the Azure AI Foundry project overview page in [Azure portal](https://ms.portal.azure.com/#home).
1. Select **Access control**, and select **Add role assignment**.
:::image type="content" source="../media/prompt-flow/how-to-deploy-for-real-time-inference/access-control.png" alt-text="Screenshot of Access control with add role assignment highlighted." lightbox = "../media/prompt-flow/how-to-deploy-for-real-time-inference/access-control.png":::
@@ -246,6 +247,6 @@ If you aren't going use the endpoint after completing this tutorial, you should
## Next Steps
-- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md)
+- Learn more about what you can do in [Azure AI Foundry](../what-is-ai-studio.md)
- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)
- [Enable trace and collect feedback for your deployment] (./develop/trace-production-sdk.md)
Summary
{
"modification_type": "minor update",
"modification_title": "フローのデプロイに関するドキュメントの更新"
}
Explanation
This change updates the "flow-deploy.md" document to reflect the rebranding from Azure AI Studio to Azure AI Foundry, primarily adjusting the deployment steps and related information for the new platform.
The main changes are:
- The description now references "Azure AI Foundry" instead of "Azure AI Studio", updating it to current platform information.
- In the prerequisites and procedures, "Azure AI Studio project" becomes "Azure AI Foundry project", making the platform references consistent.
- The sections on configuring resource access and granting permissions for deployments were likewise revised for AI Foundry.
- Related links and other information were updated for AI Foundry so that users can work smoothly on the new platform.
With this update, users get accurate, current guidance for deploying flows and running real-time inference in Azure AI Foundry.
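The diff above covers key-based authentication for the deployed endpoint. Once a flow is deployed as a managed online endpoint, scoring it is a plain HTTPS POST with a JSON body and a `Bearer` key header. A dependency-free sketch that builds (but does not send) such a request; the URI, key, and payload fields are placeholders, not real endpoint values:

```python
import json
import urllib.request

def build_scoring_request(scoring_uri: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build a scoring request for a managed online endpoint.

    With key-based authentication the endpoint expects a JSON body and
    an 'Authorization: Bearer <key>' header. Sending is left to the
    caller (urllib.request.urlopen(req)).
    """
    return urllib.request.Request(
        scoring_uri,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

Usage (hypothetical values): `urllib.request.urlopen(build_scoring_request("https://<endpoint>.<region>.inference.ml.azure.com/score", key, {"question": "..."}))`.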
articles/ai-studio/how-to/flow-develop-evaluation.md
Diff
@@ -1,7 +1,7 @@
---
title: Develop an evaluation flow
titleSuffix: Azure AI Foundry
-description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a batch run as an evaluation method in prompt flow with Azure AI Studio.
+description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a batch run as an evaluation method in prompt flow with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,7 +14,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Develop an evaluation flow in Azure AI Studio
+# Develop an evaluation flow in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "評価フローの開発に関するドキュメントの更新"
}
Explanation
This change updates the "flow-develop-evaluation.md" document to reflect the move from Azure AI Studio to Azure AI Foundry. Specifically, the guidance on developing evaluation flows was revised for the new platform.
The main changes are:
- The description now references "Azure AI Foundry" instead of "Azure AI Studio", providing current platform information.
- The document title changed from "Azure AI Studio" to "Azure AI Foundry portal", keeping naming consistent across the platform.
As a result, users can more clearly understand how to customize or create their own evaluation flows in Azure AI Foundry, with consistent guidance on using them as evaluation methods in batch runs.
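An evaluation flow ultimately computes one or more metrics for each input line. A hypothetical exact-match grader illustrates the shape of such a per-line metric node (the function name and normalization are illustrative assumptions, not a prompt flow API):

```python
def exact_match_grader(answer: str, ground_truth: str) -> dict:
    """A minimal per-line evaluation metric: normalize whitespace and
    case on both strings, then compare. Returns a metric dict of the
    kind an evaluation flow emits for each row."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return {"exact_match": 1 if norm(answer) == norm(ground_truth) else 0}
```

A real evaluation flow would typically pair several such metrics (groundedness, relevance, and so on) and aggregate them over the batch run.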
articles/ai-studio/how-to/flow-develop.md
Diff
@@ -7,6 +7,7 @@ ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/08/2024
ms.reviewer: jinzhong
@@ -26,21 +27,21 @@ With prompt flow, you're able to:
- Test, debug, and iterate your flows with ease.
- Create prompt variants and compare their performance.
-In this article, you learn how to create and develop your first prompt flow in Azure AI Studio.
+In this article, you learn how to create and develop your first prompt flow in Azure AI Foundry portal.
## Prerequisites
-- If you don't have an Azure AI Studio project already, first [create a project](create-projects.md).
-- Prompt flow requires a compute session. If you don't have a runtime, you can [create one in Azure AI Studio](./create-manage-compute-session.md).
+- If you don't have an Azure AI Foundry project already, first [create a project](create-projects.md).
+- Prompt flow requires a compute session. If you don't have a runtime, you can [create one in Azure AI Foundry portal](./create-manage-compute-session.md).
- You need a deployed model.
## Create and develop your Prompt flow
You can create a flow by either cloning the samples available in the gallery or creating a flow from scratch. If you already have flow files in local or file share, you can also import the files to create a flow.
-To create a prompt flow from the gallery in Azure AI Studio:
+To create a prompt flow from the gallery in Azure AI Foundry portal:
-1. Sign in to [Azure AI Studio](https://ai.azure.com) and select your project.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select your project.
1. From the collapsible left menu, select **Prompt flow**.
1. Select **+ Create**.
1. In the **Standard flow** tile, select **Create**.
Summary
{
"modification_type": "minor update",
"modification_title": "プロンプトフローの開発に関するドキュメントの更新"
}
Explanation
This change updates the "flow-develop.md" document to reflect the move from Azure AI Studio to Azure AI Foundry, providing current guidance for users creating and developing prompt flows.
The main changes are:
- The description now references the "Azure AI Foundry portal", reflecting up-to-date platform information.
- In the prerequisites and flow-creation steps, every "Azure AI Studio" reference was replaced with "Azure AI Foundry", so users can follow the procedures accurately on the new platform.
- Some metadata was also updated: the custom tag "ignite-2024" was added, and the document date was updated to 11/08/2024.
With this, users have the information they need to start developing prompt flows in Azure AI Foundry and can proceed smoothly with future development work.
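Conceptually, a standard flow is a small DAG of tool nodes whose outputs feed later nodes. A language-agnostic toy sketch of executing such a graph in dependency order (this is an illustration of the idea, not the prompt flow SDK):

```python
def run_flow(nodes, inputs):
    """Run a toy flow: `nodes` maps a node name to (fn, [dependency
    names]); each dependency resolves from an earlier node's result or
    from the flow inputs. Returns all intermediate and final results."""
    results = dict(inputs)
    remaining = dict(nodes)
    while remaining:
        for name, (fn, deps) in list(remaining.items()):
            if all(d in results for d in deps):
                results[name] = fn(*[results[d] for d in deps])
                del remaining[name]
                break
        else:
            raise ValueError("cycle or missing input in flow graph")
    return results
```

For example, a two-node flow where one node uppercases the input and a second node formats a greeting from it runs end to end in one call.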
articles/ai-studio/how-to/flow-tune-prompts-using-variants.md
Diff
@@ -1,7 +1,7 @@
---
title: Tune prompts using variants
titleSuffix: Azure AI Foundry
-description: Learn how to tune prompts using variants in Prompt flow with Azure AI Studio.
+description: Learn how to tune prompts using variants in Prompt flow with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,7 +14,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Tune prompts using variants in Azure AI Studio
+# Tune prompts using variants in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "プロンプトをバリアントを使用して調整する手順の更新"
}
Explanation
This change updates the "flow-tune-prompts-using-variants.md" document, replacing its Azure AI Studio references with Azure AI Foundry so that users can accurately understand how to tune prompts using variants.
The main changes are:
- The description now references "Azure AI Foundry", reflecting the current platform.
- The title changed from "Azure AI Studio" to "Azure AI Foundry portal", keeping platform naming consistent.
As a result, users can learn more clearly how to tune prompts with variants in Azure AI Foundry and work from the latest information.
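Tuning with variants boils down to running the same flow with several prompt versions and comparing their evaluation scores. A small sketch of that comparison step (the function and data shapes are illustrative assumptions):

```python
def pick_best_variant(variant_scores):
    """Given {variant_name: [per-row scores]}, return the variant with
    the highest mean score plus all the means -- the comparison a
    variants batch run surfaces so you can promote the winner."""
    means = {v: sum(s) / len(s) for v, s in variant_scores.items() if s}
    best = max(means, key=means.get)
    return best, means
```

In practice the per-row scores would come from an evaluation run over each variant's batch outputs.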
articles/ai-studio/how-to/healthcare-ai/deploy-cxrreportgen.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to deploy and use CXRReportGen healthcare AI model with AI Studio
+title: How to deploy and use CXRReportGen healthcare AI model with AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use CXRReportGen Healthcare AI Model with Azure AI Studio.
+description: Learn how to use CXRReportGen Healthcare AI Model with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -43,7 +43,7 @@ To use the CXRReportGen model, you need the following prerequisites:
**Deployment to a self-hosted managed compute**
-CXRReportGen model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Studio](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
+CXRReportGen model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
To __deploy the model through the UI__:
Summary
{
"modification_type": "minor update",
"modification_title": "CXRReportGenヘルスケアAIモデルのデプロイに関するドキュメントの更新"
}
Explanation
This change updates the "deploy-cxrreportgen.md" document, revising the deployment content for the CXRReportGen healthcare AI model from Azure AI Studio to Azure AI Foundry so that users get accurate information for the new platform.
The main changes are:
- The document title and description now reference "Azure AI Foundry", reflecting current platform information.
- In the deployment steps, the platform links were changed to Azure AI Foundry; in particular, the instructions for deploying the model through the catalog UI were revised.
This makes it easier for users to understand the procedures and requirements for deploying the CXRReportGen model with Azure AI Foundry, improving convenience for future projects.
articles/ai-studio/how-to/healthcare-ai/deploy-medimageinsight.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to deploy and use MedImageInsight healthcare AI model with AI Studio
+title: How to deploy and use MedImageInsight healthcare AI model with AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use MedImageInsight Healthcare AI Model with Azure AI Studio.
+description: Learn how to use MedImageInsight Healthcare AI Model with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -41,7 +41,7 @@ To use the MedImageInsight model, you need the following prerequisites:
**Deployment to a self-hosted managed compute**
-MedImageInsight model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Studio](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
+MedImageInsight model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
To __deploy the model through the UI__:
Summary
{
"modification_type": "minor update",
"modification_title": "MedImageInsightヘルスケアAIモデルのデプロイに関するドキュメントの更新"
}
Explanation
This change updates the "deploy-medimageinsight.md" document, revising the deployment information for the MedImageInsight healthcare AI model from Azure AI Studio to Azure AI Foundry so that users get accurate information for the new platform.
The main changes are:
- The document title and description now reference "Azure AI Foundry", reflecting current platform information.
- The platform links in the MedImageInsight deployment section were changed to Azure AI Foundry, so the steps for deploying the model through the catalog UI are now stated correctly.
This makes it easier for users to understand the procedures and requirements for deploying the MedImageInsight model with Azure AI Foundry and improves convenience for future projects.
articles/ai-studio/how-to/healthcare-ai/deploy-medimageparse.md
Diff
@@ -1,7 +1,7 @@
---
-title: How to deploy and use MedImageParse healthcare AI model with AI Studio
+title: How to deploy and use MedImageParse healthcare AI model with AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use MedImageParse Healthcare AI Model with Azure AI Studio.
+description: Learn how to use MedImageParse Healthcare AI Model with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -44,7 +44,7 @@ To use the MedImageParse model, you need the following prerequisites:
**Deployment to a self-hosted managed compute**
-MedImageParse model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Studio](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
+MedImageParse model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
To __deploy the model through the UI__:
Summary
{
"modification_type": "minor update",
"modification_title": "MedImageParseヘルスケアAIモデルのデプロイに関するドキュメントの更新"
}
Explanation
This change updates the "deploy-medimageparse.md" document, revising the deployment information for the MedImageParse healthcare AI model from Azure AI Studio to Azure AI Foundry so that users get accurate information for the new platform.
The main changes are:
- The document title and description now reference "Azure AI Foundry", reflecting current platform information.
- The MedImageParse deployment section was also updated, changing the catalog UI deployment links to Azure AI Foundry.
With this update, users can more easily understand the steps and requirements for deploying the MedImageParse model with Azure AI Foundry, improving convenience at run time.
articles/ai-studio/how-to/healthcare-ai/healthcare-ai-models.md
Diff
@@ -1,5 +1,5 @@
---
-title: Foundation models for healthcare in AI Studio
+title: Foundation models for healthcare in AI Foundry portal
titleSuffix: Azure AI Foundry
description: Learn about AI models that are applicable to the health and life science industry.
ms.service: azure-ai-studio
@@ -26,7 +26,7 @@ The healthcare industry is undergoing a revolutionary transformation driven by t
:::image type="content" source="../../media/how-to/healthcare-ai/connect-modalities.gif" alt-text="Models that reason about various modalities come together to support discover, development and delivery of healthcare":::
-The Azure AI model catalog available in [AI Studio](../model-catalog-overview.md) and [Azure Machine Learning studio](../../../machine-learning/concept-model-catalog.md) provides healthcare foundation models that facilitate AI-powered analysis of various medical data types and expand well beyond medical text comprehension into the multimodal reasoning about medical data. These AI models can integrate and analyze data from diverse sources that come in various modalities, such as medical imaging, genomics, clinical records, and other structured and unstructured data sources. The models also span several healthcare fields like dermatology, ophthalmology, radiology, and pathology.
+The Azure AI model catalog available in [AI Foundry](../model-catalog-overview.md) and [Azure Machine Learning studio](../../../machine-learning/concept-model-catalog.md) provides healthcare foundation models that facilitate AI-powered analysis of various medical data types and expand well beyond medical text comprehension into the multimodal reasoning about medical data. These AI models can integrate and analyze data from diverse sources that come in various modalities, such as medical imaging, genomics, clinical records, and other structured and unstructured data sources. The models also span several healthcare fields like dermatology, ophthalmology, radiology, and pathology.
## Microsoft first-party models
@@ -47,6 +47,6 @@ The Azure AI model catalog also provides a curated collection of healthcare mode
## Related content
-- [Model catalog and collections in Azure AI Studio](../model-catalog-overview.md)
+- [Model catalog and collections in Azure AI Foundry portal](../model-catalog-overview.md)
- [How to deploy and inference a managed compute deployment with code](../deploy-models-managed.md)
-- [Overview: Deploy models, flows, and web apps with Azure AI Studio](../../concepts/deployments-overview.md)
\ No newline at end of file
+- [Overview: Deploy models, flows, and web apps with Azure AI Foundry](../../concepts/deployments-overview.md)
\ No newline at end of file
Summary
{
"modification_type": "minor update",
"modification_title": "AI Foundryポータルのためのヘルスケアモデルに関するドキュメントの更新"
}
Explanation
This change updates the "healthcare-ai-models.md" document, revising the information about healthcare foundation models from Azure AI Studio to the Azure AI Foundry portal so that the content is based on the current platform.
The main changes are:
- The document title changed from "AI Studio" to "AI Foundry portal", reflecting the current platform name.
- The description of the Azure AI model catalog was also updated from "AI Studio" to "AI Foundry", giving users accurate information.
- Links in the related content were corrected to point to AI Foundry portal material.
This helps users stay current on how to apply AI models in healthcare and makes it easier to reach resources for AI-driven analysis of diverse medical data.
articles/ai-studio/how-to/index-add.md
Diff
@@ -1,5 +1,5 @@
---
-title: How to build and consume vector indexes in Azure AI Studio
+title: How to build and consume vector indexes in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
description: Learn how to create and use a vector index for performing Retrieval Augmented Generation (RAG).
manager: scottpolly
@@ -15,7 +15,7 @@ ms.author: ssalgado
author: ssalgadodev
---
-# How to build and consume vector indexes in Azure AI Studio
+# How to build and consume vector indexes in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
@@ -24,13 +24,13 @@ In this article, you learn how to create and use a vector index for performing [
## Prerequisites
You must have:
-- An Azure AI Studio project
+- An Azure AI Foundry project
- An Azure AI Search resource
## Create an index from the Chat playground
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Foundry portal.
1. From the menu on the left, select **Playgrounds**.
@@ -65,7 +65,7 @@ You must have:
## Use an index in prompt flow
-1. Sign in to [Azure AI Studio](https://ai.azure.com) and select your project.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select your project.
1. From the collapsible left menu, select **Prompt flow** from the **Build and customize** section.
1. Open an existing prompt flow or select **+ Create** to create a new flow.
1. On the top menu of the flow designer, select **More tools**, and then select ***Index Lookup***.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryポータルに関するベクターインデックスの使用方法の更新"
}
Explanation
This change updates the "index-add.md" document, revising its Azure AI Studio references to the Azure AI Foundry portal so that users have current information on building and consuming vector indexes.
The main changes are:
- The document title and section headings changed from "Azure AI Studio" to "Azure AI Foundry portal", basing the information on the current platform name.
- The prerequisites now call for an Azure AI Foundry project instead of an Azure AI Studio project, so users reference the correct resource.
- The index-creation steps were revised accordingly, including where to sign in and how to select a project.
With this update, users can more easily follow the concrete steps to create and use a vector index in the Azure AI Foundry portal, which improves the experience of implementing AI models.
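The vector index described here backs Retrieval Augmented Generation: at query time, the query embedding is compared against stored chunk embeddings and the closest chunks are returned to ground the model's answer. A dependency-free sketch of that lookup step (Azure AI Search performs this at scale; the helpers and two-dimensional embeddings below are purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=3):
    """`index` is a list of (text, embedding) pairs; return the k
    texts most similar to the query embedding -- the lookup the
    Index Lookup tool performs inside a RAG flow."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

Real embeddings have hundreds or thousands of dimensions and come from an embedding model; the retrieval logic is the same.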
articles/ai-studio/how-to/model-catalog-overview.md
Diff
@@ -1,38 +1,39 @@
---
-title: Explore the model catalog in Azure AI Studio
+title: Explore the model catalog in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces foundation model capabilities and the model catalog in Azure AI Studio.
+description: This article introduces foundation model capabilities and the model catalog in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
- ai-learning-hub
+ - ignite-2024
ms.topic: how-to
ms.date: 5/21/2024
ms.reviewer: jcioffi
ms.author: ssalgado
author: ssalgadodev
---
-# Model catalog and collections in Azure AI Studio
+# Model catalog and collections in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-The model catalog in Azure AI Studio is the hub to discover and use a wide range of models for building generative AI applications. The model catalog features hundreds of models across model providers such as Azure OpenAI Service, Mistral, Meta, Cohere, NVIDIA, and Hugging Face, including models that Microsoft trained. Models from providers other than Microsoft are Non-Microsoft Products as defined in [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) and are subject to the terms provided with the models.
+The model catalog in Azure AI Foundry portal is the hub to discover and use a wide range of models for building generative AI applications. The model catalog features hundreds of models across model providers such as Azure OpenAI Service, Mistral, Meta, Cohere, NVIDIA, and Hugging Face, including models that Microsoft trained. Models from providers other than Microsoft are Non-Microsoft Products as defined in [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) and are subject to the terms provided with the models.
## Model collections
The model catalog organizes models into different collections:
-* **Curated by Azure AI**: The most popular non-Microsoft open-weight and proprietary models packaged and optimized to work seamlessly on the Azure AI platform. Use of these models is subject to the model providers' license terms. When you deploy these models in Azure AI Studio, their availability is subject to the applicable [Azure service-level agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services), and Microsoft provides support for deployment problems.
+* **Curated by Azure AI**: The most popular non-Microsoft open-weight and proprietary models packaged and optimized to work seamlessly on the Azure AI platform. Use of these models is subject to the model providers' license terms. When you deploy these models in Azure AI Foundry portal, their availability is subject to the applicable [Azure service-level agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services), and Microsoft provides support for deployment problems.
Models from partners such as Meta, NVIDIA, and Mistral AI are examples of models available in this collection on the catalog. You can identify these models by looking for a green checkmark on the model tiles in the catalog. Or you can filter by the **Curated by Azure AI** collection.
* **Azure OpenAI models exclusively available on Azure**: Flagship Azure OpenAI models available through an integration with Azure OpenAI Service. Microsoft supports these models and their use according to the product terms and [SLA for Azure OpenAI Service](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
-* **Open models from the Hugging Face hub**: Hundreds of models from the Hugging Face hub for real-time inference with managed compute. Hugging Face creates and maintains models listed in this collection. For help, use the [Hugging Face forum](https://discuss.huggingface.co) or [Hugging Face support](https://huggingface.co/support). Learn more in [Deploy open models with Azure AI Studio](deploy-models-open.md).
+* **Open models from the Hugging Face hub**: Hundreds of models from the Hugging Face hub for real-time inference with managed compute. Hugging Face creates and maintains models listed in this collection. For help, use the [Hugging Face forum](https://discuss.huggingface.co) or [Hugging Face support](https://huggingface.co/support). Learn more in [Deploy open models with Azure AI Foundry](deploy-models-open.md).
You can submit a request to add a model to the model catalog by using [this form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR_frVPkg_MhOoQxyrjmm7ZJUM09WNktBMURLSktOWEdDODBDRjg2NExKUy4u).
@@ -68,7 +69,7 @@ Features | Managed compute | Serverless API (pay-per-token)
Deployment experience and billing | Model weights are deployed to dedicated virtual machines with managed compute. A managed compute, which can have one or more deployments, makes available a REST API for inference. You're billed for the virtual machine core hours that the deployments use. | Access to models is through a deployment that provisions an API to access the model. The API provides access to the model that Microsoft hosts and manages, for inference. You're billed for inputs and outputs to the APIs, typically in tokens. Pricing information is provided before you deploy.
API authentication | Keys and Microsoft Entra authentication. | Keys only.
Content safety | Use Azure AI Content Safety service APIs. | Azure AI Content Safety filters are available integrated with inference APIs. Azure AI Content Safety filters are billed separately.
-Network isolation | [Configure managed networks for Azure AI Studio hubs](configure-managed-network.md). | Managed compute follow your hub's public network access (PNA) flag setting. For more information, see the [Network isolation for models deployed via Serverless APIs](#network-isolation-for-models-deployed-via-serverless-apis) section later in this article.
+Network isolation | [Configure managed networks for Azure AI Foundry hubs](configure-managed-network.md). | Managed compute follow your hub's public network access (PNA) flag setting. For more information, see the [Network isolation for models deployed via Serverless APIs](#network-isolation-for-models-deployed-via-serverless-apis) section later in this article.
### Available models for supported deployment options
@@ -112,7 +113,7 @@ Models available for deployment to managed compute can be deployed to Azure Mach
Learn more about deploying models:
* [Deploy Meta Llama models](deploy-models-llama.md)
-* [Deploy Azure AI Studio open models](deploy-models-open.md)
+* [Deploy Azure AI Foundry open models](deploy-models-open.md)
### Building generative AI apps with managed compute
@@ -140,7 +141,7 @@ Learn more about data processing for MaaS in the [article about data privacy](co
### Billing
-The discovery, subscription, and consumption experience for models deployed via MaaS is in Azure AI Studio and Azure Machine Learning studio. Users accept license terms for use of the models. Pricing information for consumption is provided during deployment.
+The discovery, subscription, and consumption experience for models deployed via MaaS is in Azure AI Foundry portal and Azure Machine Learning studio. Users accept license terms for use of the models. Pricing information for consumption is provided during deployment.
Models from non-Microsoft providers are billed through Azure Marketplace, in accordance with the [Microsoft Commercial Marketplace Terms of Use](/legal/marketplace/marketplace-terms).
@@ -152,38 +153,38 @@ Certain models support also serverless fine-tuning. For these models, you can ta
### RAG with models deployed as serverless APIs
-In Azure AI Studio, you can use vector indexes and retrieval-augmented generation (RAG). You can use models that can be deployed via serverless APIs to generate embeddings and inferencing based on custom data. These embeddings and inferencing can then generate answers specific to your use case. For more information, see [Build and consume vector indexes in Azure AI Studio](index-add.md).
+In Azure AI Foundry portal, you can use vector indexes and retrieval-augmented generation (RAG). You can use models that can be deployed via serverless APIs to generate embeddings and inferencing based on custom data. These embeddings and inferencing can then generate answers specific to your use case. For more information, see [Build and consume vector indexes in Azure AI Foundry portal](index-add.md).
### Regional availability of offers and models
-Pay-per-token billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a project resource in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints | Azure AI Studio](deploy-models-serverless-availability.md) for detailed information.
+Pay-per-token billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a project resource in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints | Azure AI Foundry](deploy-models-serverless-availability.md) for detailed information.
### Content safety for models deployed via serverless APIs
[!INCLUDE [content-safety-serverless-models](../includes/content-safety-serverless-models.md)]
### Network isolation for models deployed via serverless APIs
-Managed computes for models deployed as serverless APIs follow the public network access flag setting of the AI Studio hub that has the project in which the deployment exists. To help secure your managed compute, disable the public network access flag on your AI Studio hub. You can help secure inbound communication from a client to your managed compute by using a private endpoint for the hub.
+Managed computes for models deployed as serverless APIs follow the public network access flag setting of the AI Foundry hub that has the project in which the deployment exists. To help secure your managed compute, disable the public network access flag on your AI Foundry hub. You can help secure inbound communication from a client to your managed compute by using a private endpoint for the hub.
-To set the public network access flag for the AI Studio hub:
+To set the public network access flag for the AI Foundry hub:
* Go to the [Azure portal](https://ms.portal.azure.com/).
-* Search for the resource group to which the hub belongs, and select your AI Studio hub from the resources listed for this resource group.
+* Search for the resource group to which the hub belongs, and select your AI Foundry hub from the resources listed for this resource group.
* On the hub overview page, on the left pane, go to **Settings** > **Networking**.
* On the **Public access** tab, you can configure settings for the public network access flag.
* Save your changes. Your changes might take up to five minutes to propagate.
#### Limitations
-* If you have an AI Studio hub with a managed compute created before July 11, 2024, managed computes added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new managed compute for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.
+* If you have an AI Foundry hub with a managed compute created before July 11, 2024, managed computes added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new managed compute for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.
-* If you have an AI Studio hub with MaaS deployments created before July 11, 2024, and you enable a managed compute on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
+* If you have an AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a managed compute on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
* Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for MaaS deployments in private hubs, because private hubs have the public network access flag disabled.
* Any network configuration change (for example, enabling or disabling the public network access flag) might take up to five minutes to propagate.
## Related content
-* [Explore foundation models in Azure AI Studio](models-foundation-azure-ai.md)
+* [Explore foundation models in Azure AI Foundry portal](models-foundation-azure-ai.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryポータルにおけるモデルカタログの概要の更新"
}
Explanation
This change updates the model-catalog-overview.md document, replacing references to Azure AI Studio with the Azure AI Foundry portal so that users learn about the model catalog's capabilities in the context of the current platform.
The main changes are:
- The document's title and descriptions now read "Azure AI Foundry portal" instead of "Azure AI Studio", aligning the platform name with the latest branding.
- The model catalog descriptions are updated in the same way, so future references point to accurate information.
- Information about model licensing, service-level agreements (SLAs), and support is brought up to date so users can rely on it with confidence.
- Details about the available model types and offered services are revised to describe how they are used in the new portal.
With this update, users can explore the model catalog through the Azure AI Foundry portal and get current information for building generative AI applications.
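The diff above notes that serverless (MaaS) deployments are billed for inputs and outputs to the APIs, typically in tokens, with pricing shown before you deploy. A minimal sketch of that accounting, using hypothetical per-1K-token prices (the real rates appear in Azure AI Foundry portal at deployment time):

```python
# Minimal sketch of pay-per-token cost accounting for a serverless API
# deployment. The per-1K-token prices are hypothetical placeholders;
# actual pricing is shown in Azure AI Foundry portal before you deploy.

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Return the billed cost for one request, given per-1K-token prices."""
    return ((prompt_tokens / 1000) * price_in_per_1k
            + (completion_tokens / 1000) * price_out_per_1k)

# Example: 1,200 input tokens and 300 output tokens at placeholder rates.
cost = estimate_cost(1200, 300, price_in_per_1k=0.001, price_out_per_1k=0.002)
print(f"{cost:.4f}")  # 0.0018
```

The split between input and output rates mirrors how the table in the diff describes token billing; managed compute, by contrast, bills for virtual machine core hours rather than tokens.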
articles/ai-studio/how-to/monitor-quality-safety.md
Diff
@@ -1,12 +1,13 @@
---
title: Monitor quality and token usage of deployed prompt flow applications (preview)
titleSuffix: Azure AI Foundry
-description: Learn how to monitor quality and token usage of deployed prompt flow applications with Azure AI Studio.
+description: Learn how to monitor quality and token usage of deployed prompt flow applications with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 7/31/2024
ms.reviewer: alehughes
@@ -33,7 +34,7 @@ Integrations for monitoring a prompt flow deployment allow you to:
- Monitor prompts, completion, and total token usage across each model deployment in your prompt flow.
- Monitor operational metrics, such as request count, latency, and error rate.
- Use preconfigured alerts and defaults to run monitoring on a recurring basis.
-- Consume data visualizations and configure advanced behavior in Azure AI Studio.
+- Consume data visualizations and configure advanced behavior in Azure AI Foundry portal.
## Prerequisites
@@ -43,11 +44,11 @@ Before following the steps in this article, make sure you have the following pre
- An Azure subscription with a valid payment method. Free or trial Azure subscriptions aren't supported for this scenario. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio project](create-projects.md).
+- An [Azure AI Foundry project](create-projects.md).
- A prompt flow ready for deployment. If you don't have one, see [Develop a prompt flow](flow-develop.md).
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).
# [Python SDK](#tab/python)
@@ -111,7 +112,7 @@ To set up monitoring for your prompt flow application, you first have to deploy
In this section, you learn to deploy your prompt flow with inferencing data collection enabled. For detailed information on deploying your prompt flow, see [Deploy a flow for real-time inference](flow-deploy.md).
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
1. If you're not already in your project, select it.
1. Select **Prompt flow** from the left navigation bar.
1. Select the prompt flow that you created previously.
@@ -205,7 +206,7 @@ credential = DefaultAzureCredential()
# Update your azure resources details
subscription_id = "INSERT YOUR SUBSCRIPTION ID"
resource_group = "INSERT YOUR RESOURCE GROUP NAME"
-project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Studio project name
+project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Foundry project name
endpoint_name = "INSERT YOUR ENDPOINT NAME" # This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
deployment_name = "INSERT YOUR DEPLOYMENT NAME"
aoai_deployment_name ="INSERT YOUR AOAI DEPLOYMENT NAME"
@@ -372,7 +373,7 @@ credential = DefaultAzureCredential()
# Update your azure resources details
subscription_id = "INSERT YOUR SUBSCRIPTION ID"
resource_group = "INSERT YOUR RESOURCE GROUP NAME"
-project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Studio project name
+project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Foundry project name
endpoint_name = "INSERT YOUR ENDPOINT NAME" # This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
deployment_name = "INSERT YOUR DEPLOYMENT NAME"
@@ -449,7 +450,7 @@ credential = DefaultAzureCredential()
# Update your azure resources details
subscription_id = "INSERT YOUR SUBSCRIPTION ID"
resource_group = "INSERT YOUR RESOURCE GROUP NAME"
-project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Studio project name
+project_name = "INSERT YOUR PROJECT NAME" # This is the same as your AI Foundry project name
endpoint_name = "INSERT YOUR ENDPOINT NAME" # This is your deployment name without the suffix (e.g., deployment is "contoso-chatbot-1", endpoint is "contoso-chatbot")
deployment_name = "INSERT YOUR DEPLOYMENT NAME"
aoai_deployment_name ="INSERT YOUR AOAI DEPLOYMENT NAME"
@@ -534,9 +535,9 @@ model_monitor = MonitorSchedule(
ml_client.schedules.begin_create_or_update(model_monitor)
```
-After you create your monitor from the SDK, you can [consume the monitoring results](#consume-monitoring-results) in AI Studio.
+After you create your monitor from the SDK, you can [consume the monitoring results](#consume-monitoring-results) in AI Foundry portal.
## Related content
-- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md).
+- Learn more about what you can do in [Azure AI Foundry](../what-is-ai-studio.md).
- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml).
Summary
{
"modification_type": "minor update",
"modification_title": "デプロイされたプロンプトフローアプリケーションの品質とトークン使用量を監視する方法の更新"
}
Explanation
This change updates the monitor-quality-safety.md document, replacing references to Azure AI Studio with the Azure AI Foundry portal. Users can now learn how to monitor the quality and token usage of deployed prompt flow applications on the current platform.
The main changes are:
- The title and description now read "Azure AI Foundry" instead of "Azure AI Studio", reflecting the correct platform name.
- Instructions on creating projects and on user account roles are adjusted to the Azure AI Foundry portal context.
- In the deployment and monitoring steps, the sign-in destination and the project-name comments are updated so users can follow the procedure without confusion.
- The destination for consuming monitoring results created with the SDK now refers to the Azure AI Foundry portal.
With this update, it is easier for users to follow the current procedure for monitoring prompt flow application quality and managing token usage through the Azure AI Foundry portal.
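The SDK snippets in the diff above repeat the comment that the endpoint name is "your deployment name without the suffix (e.g., deployment is 'contoso-chatbot-1', endpoint is 'contoso-chatbot')". A small helper that encodes that convention — the function name is ours for illustration, not part of any SDK:

```python
import re

def endpoint_from_deployment(deployment_name: str) -> str:
    """Strip a trailing '-<number>' suffix from a deployment name to get the
    endpoint name, per the convention noted in the snippet's comments
    ('contoso-chatbot-1' -> 'contoso-chatbot'). Names without a numeric
    suffix are returned unchanged."""
    return re.sub(r"-\d+$", "", deployment_name)

print(endpoint_from_deployment("contoso-chatbot-1"))  # contoso-chatbot
```

This only covers the suffix pattern the doc's example shows; if your deployments follow a different naming scheme, derive the endpoint name accordingly.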
articles/ai-studio/how-to/online-evaluation.md
Diff
@@ -71,7 +71,7 @@ Complete the following prerequisite steps to set up your environment and authent
2. A [Resource Group](/azure/azure-resource-manager/management/manage-resource-groups-portal) in an Evaluation-supported region.
3. A new [User-assigned Managed Identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) in the same resource group and region. Make a note of the `clientId`; you'll need it later.
4. An [Azure AI Hub](../concepts/ai-resources.md) in the same resource group and region.
-5. An Azure AI project in this hub, see [Create a project in Azure AI Studio](./create-projects.md).
+5. An Azure AI project in this hub, see [Create a project in Azure AI Foundry portal](./create-projects.md).
6. An [Azure Monitor Application Insights resource](/azure/azure-monitor/app/create-workspace-resource).
7. Navigate to the hub page in Azure portal and add Application Insights resource, see [Update Azure Application Insights and Azure Container Registry](./create-azure-ai-resource.md?tabs=portal#update-azure-application-insights-and-azure-container-registry).
8. Azure OpenAI Deployment with GPT model supporting `chat completion`, for example `gpt-4`.
@@ -193,7 +193,7 @@ SAMPLE_NAME = "online_eval_name"
# Name of your generative AI application (will be available in trace data in Application Insights)
SERVICE_NAME = "service_name"
-# Connection string to your Azure AI Studio project
+# Connection string to your Azure AI Foundry project
# Currently, it should be in the format "<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<HubName>"
PROJECT_CONNECTION_STRING = "<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<HubName>"
@@ -211,7 +211,7 @@ KUSTO_QUERY = "let gen_ai_spans=(dependencies | where isnotnull(customDimensions
Next, define a client and an Azure OpenAI GPT deployment (such as `GPT-4`) which will be used to run your Online Evaluation schedule. Also, connect to your Application Insights resource:
```python
-# Connect to your Azure AI Studio Project
+# Connect to your Azure AI Foundry Project
project_client = AIProjectClient.from_connection_string(
credential=DefaultAzureCredential(),
conn_str=PROJECT_CONNECTION_STRING
@@ -228,7 +228,7 @@ app_insights_config = ApplicationInsightsConfiguration(
deployment_name = "gpt-4"
api_version = "2024-08-01-preview"
-# This is your AOAI connection name, which can be found in your AI Studio project under the 'Models + Endpoints' tab
+# This is your AOAI connection name, which can be found in your AI Foundry project under the 'Models + Endpoints' tab
default_connection = project_client.connections._get_connection(
"aoai_connection_name"
)
@@ -245,7 +245,7 @@ Next, configure the evaluators you wish to use:
```python
# RelevanceEvaluator
-# id for each evaluator can be found in your AI Studio registry - please see documentation for more information
+# id for each evaluator can be found in your AI Foundry registry - please see documentation for more information
# init_params is the configuration for the model to use to perform the evaluation
# data_mapping is used to map the output columns of your query to the names required by the evaluator
relevance_evaluator_config = EvaluatorConfiguration(
@@ -333,11 +333,11 @@ In this section, you'll learn how Azure AI integrates with Azure Monitor Applica
If you haven’t set this up, here are some quick steps:
-1. Navigate to your project in [Azure AI Studio](https://ai.azure.com).
+1. Navigate to your project in [Azure AI Foundry](https://ai.azure.com).
1. Select the Tracing page on the left-hand side.
1. Connect your Application Insights resource to your project.
-If you already set up tracing in Azure AI studio, all you need to do is select the link to **Check out your Insights for Generative AI application dashboard**.
+If you already set up tracing in Azure AI Foundry portal, all you need to do is select the link to **Check out your Insights for Generative AI application dashboard**.
Once you have your data streaming into your Application Insights resource, you automatically can see it get populated in this customized dashboard.
Summary
{
"modification_type": "minor update",
"modification_title": "オンライン評価に関する手順の更新"
}
Explanation
This change updates the online-evaluation.md document, revising references to Azure AI Studio to the Azure AI Foundry portal so the online evaluation procedure is easier to follow on the current platform.
The main changes are:
- Steps and prerequisites now say "Azure AI Foundry" instead of "Azure AI Studio", so the correct platform name is used throughout.
- The reference for creating an Azure AI project now points to the Azure AI Foundry portal documentation.
- Code comments about the connection string and about connecting to a project are revised so users can adapt to the new environment.
- The evaluator configuration now clearly refers to the Azure AI Foundry registry, making the required information easier to find.
- Finally, the steps for connecting an Application Insights resource and streaming data are updated so users can easily review the resulting insights on the dashboard.
With this update, users have current information and steps for running online evaluations with Azure AI Foundry.
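The diff above documents the project connection string format as `"<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<HubName>"` before passing it to `AIProjectClient.from_connection_string`. A small sketch that validates and splits a string in that format — the type and function names are ours, and the sample values are placeholders:

```python
from typing import NamedTuple

class ProjectConnection(NamedTuple):
    host_name: str
    subscription_id: str
    resource_group: str
    hub_name: str

def parse_connection_string(conn_str: str) -> ProjectConnection:
    """Split the semicolon-delimited Azure AI Foundry project connection
    string, documented above as
    '<HostName>;<AzureSubscriptionId>;<ResourceGroup>;<HubName>'."""
    parts = conn_str.split(";")
    if len(parts) != 4:
        raise ValueError(f"expected 4 ';'-separated fields, got {len(parts)}")
    return ProjectConnection(*parts)

conn = parse_connection_string(
    "eastus.api.azureml.ms;00000000-0000-0000-0000-000000000000;my-rg;my-hub"
)
print(conn.hub_name)  # my-hub
```

Validating the string up front turns a malformed value into an immediate, readable error instead of a failure deep inside the client call.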
articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Studio
+title: Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Azure OpenAI GPT-4 Turbo with Vision tool for flows in Azure AI Studio.
+description: This article introduces you to the Azure OpenAI GPT-4 Turbo with Vision tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -13,7 +13,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Studio
+# Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
@@ -22,14 +22,14 @@ The prompt flow Azure OpenAI GPT-4 Turbo with Vision tool enables you to use you
## Prerequisites
- An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">You can create one for free</a>.
-- An [AI Studio hub](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in [one of the regions that support GPT-4 Turbo with Vision](../../../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). When you deploy from your project's **Deployments** page, select `gpt-4` as the model name and `vision-preview` as the model version.
+- An [AI Foundry hub](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in [one of the regions that support GPT-4 Turbo with Vision](../../../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). When you deploy from your project's **Deployments** page, select `gpt-4` as the model name and `vision-preview` as the model version.
## Build with the Azure OpenAI GPT-4 Turbo with Vision tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Azure OpenAI GPT-4 Turbo with Vision** to add the Azure OpenAI GPT-4 Turbo with Vision tool to your flow.
- :::image type="content" source="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png" alt-text="Screenshot that shows the Azure OpenAI GPT-4 Turbo with Vision tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png" alt-text="Screenshot that shows the Azure OpenAI GPT-4 Turbo with Vision tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/azure-openai-gpt-4-vision-tool.png":::
1. Select the connection to your Azure OpenAI Service. For example, you can select the **Default_AzureOpenAI** connection. For more information, see [Prerequisites](#prerequisites).
1. Enter values for the Azure OpenAI GPT-4 Turbo with Vision tool input parameters described in the [Inputs table](#inputs). For example, you can use this example prompt:
Summary
{
"modification_type": "minor update",
"modification_title": "Azure OpenAI GPT-4 Turbo with VisionツールのAzure AI Foundryポータルに関する更新"
}
Explanation
This change updates the azure-open-ai-gpt-4v-tool.md document, switching its references from Azure AI Studio to the Azure AI Foundry portal so users know how to work with the Azure OpenAI GPT-4 Turbo with Vision tool in the current environment.
The main changes are:
- The title now reads "Azure AI Foundry portal" instead of "Azure AI Studio", reflecting the correct platform name.
- The description likewise replaces "Azure AI Studio" with "Azure AI Foundry portal", keeping it consistent with the content.
- In the prerequisites, the hub name changes from "AI Studio" to "AI Foundry", making explicit that the steps apply to the new platform.
- The step for creating or opening a flow likewise changes from "Azure AI Studio" to "Azure AI Foundry", so the instructions are accurate for the new environment.
- The screenshot alt text is updated to indicate that the tool is added in the Azure AI Foundry portal.
This update makes it easier for users to follow the current steps for using the Azure OpenAI GPT-4 Turbo with Vision tool through the Azure AI Foundry portal.
articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Content Safety tool for flows in Azure AI Studio
+title: Content Safety tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Content Safety tool for flows in Azure AI Studio.
+description: This article introduces you to the Content Safety tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,29 +14,29 @@ ms.author: lagayhar
author: lgayhardt
---
-# Content safety tool for flows in Azure AI Studio
+# Content safety tool for flows in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
-The prompt flow Content Safety tool enables you to use Azure AI Content Safety in Azure AI Studio.
+The prompt flow Content Safety tool enables you to use Azure AI Content Safety in Azure AI Foundry portal.
Azure AI Content Safety is a content moderation service that helps detect harmful content from different modalities and languages. For more information, see [Azure AI Content Safety](/azure/ai-services/content-safety/).
## Prerequisites
To create an Azure Content Safety connection:
-1. Sign in to [Azure AI Studio](https://ml.azure.com/).
+1. Sign in to [Azure AI Foundry](https://ml.azure.com/).
1. Go to **Project settings** > **Connections**.
1. Select **+ New connection**.
-1. Complete all steps in the **Create a new connection** dialog. You can use an Azure AI Studio hub or Azure AI Content Safety resource. We recommend that you use a hub that supports multiple Azure AI services.
+1. Complete all steps in the **Create a new connection** dialog. You can use an Azure AI Foundry hub or Azure AI Content Safety resource. We recommend that you use a hub that supports multiple Azure AI services.
## Build with the Content Safety tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Content Safety (Text)** to add the Content Safety tool to your flow.
- :::image type="content" source="../../media/prompt-flow/content-safety-tool.png" alt-text="Screenshot that shows the Content Safety tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/content-safety-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/content-safety-tool.png" alt-text="Screenshot that shows the Content Safety tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/content-safety-tool.png":::
1. Select the connection to one of your provisioned resources. For example, select **AzureAIContentSafetyConnection** if you created a connection with that name. For more information, see [Prerequisites](#prerequisites).
1. Enter values for the Content Safety tool input parameters described in the [Inputs table](#inputs).
Summary
{
"modification_type": "minor update",
"modification_title": "コンテンツ安全ツールのAzure AI Foundryポータルに関する更新"
}
Explanation
This change updates the content-safety-tool.md document, replacing references to Azure AI Studio with the Azure AI Foundry portal so users can better understand how to use the Content Safety tool in the new platform environment.
The main changes are:
- The title now reads "Azure AI Foundry portal" instead of "Azure AI Studio", reflecting the current platform name.
- The article description is updated accordingly to describe the Content Safety tool in the Azure AI Foundry portal.
- In the prerequisites, the sign-in destination changes from "Azure AI Studio" to "Azure AI Foundry".
- The steps for creating or opening a flow are changed in the same way, so users know exactly what to do in the new environment.
- The alt text for the screenshot of adding the Content Safety tool to a flow is updated to reference the Azure AI Foundry portal.
With this update, users can easily find current information and steps for working with the Content Safety tool in the Azure AI Foundry portal.
articles/ai-studio/how-to/prompt-flow-tools/embedding-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Embedding tool for flows in Azure AI Studio
+title: Embedding tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Embedding tool for flows in Azure AI Studio.
+description: This article introduces you to the Embedding tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,7 +14,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Embedding tool for flows in Azure AI Studio
+# Embedding tool for flows in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
@@ -25,10 +25,10 @@ The prompt flow Embedding tool enables you to convert text into dense vector rep
## Build with the Embedding tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Embedding** to add the Embedding tool to your flow.
- :::image type="content" source="../../media/prompt-flow/embedding-tool.png" alt-text="Screenshot that shows the Embedding tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/embedding-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/embedding-tool.png" alt-text="Screenshot that shows the Embedding tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/embedding-tool.png":::
1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
1. Enter values for the Embedding tool input parameters described in the [Inputs table](#inputs).
Summary
{
"modification_type": "minor update",
"modification_title": "Embedding tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "embedding-tool.md" document, replacing references to Azure AI Studio with the Azure AI Foundry portal so that users can accurately follow how to use the Embedding tool on the current platform.
The main changes are:
- The title now reads "Azure AI Foundry portal" instead of "Azure AI Studio", reflecting the current platform name.
- The description likewise swaps "Azure AI Studio" for "Azure AI Foundry portal" and states where the tool is used.
- In-body references have been revised to point to the Azure AI Foundry portal.
- The step for creating or opening a flow now links to "Azure AI Foundry" rather than "Azure AI Studio".
- The screenshot alt text now states that the Embedding tool is added to a flow in the Azure AI Foundry portal.
With this update, users can easily follow the latest steps and information for using the Embedding tool in the Azure AI Foundry portal.
articles/ai-studio/how-to/prompt-flow-tools/index-lookup-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Index Lookup tool for flows in Azure AI Studio
+title: Index Lookup tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Index Lookup tool for flows in Azure AI Studio.
+description: This article introduces you to the Index Lookup tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -13,18 +13,18 @@ ms.author: lagayhar
author: lgayhardt
---
-# Index Lookup tool for Azure AI Studio
+# Index Lookup tool for Azure AI Foundry
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
The prompt flow Index Lookup tool enables the use of common vector indices (such as Azure AI Search, Faiss, and Pinecone) for retrieval augmented generation in prompt flow. The tool automatically detects the indices in the workspace and allows the selection of the index to be used in the flow.
## Build with the Index Lookup tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Index Lookup** to add the Index Lookup tool to your flow.
- :::image type="content" source="../../media/prompt-flow/configure-index-lookup-tool.png" alt-text="Screenshot that shows the Index Lookup tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/configure-index-lookup-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/configure-index-lookup-tool.png" alt-text="Screenshot that shows the Index Lookup tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/configure-index-lookup-tool.png":::
1. Enter values for the Index Lookup tool [input parameters](#inputs). The large language model [(LLM) tool](llm-tool.md) can generate the vector input.
1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
Summary
{
"modification_type": "minor update",
"modification_title": "Index Lookup tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "index-lookup-tool.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal so that users can follow the Index Lookup tool instructions on the current platform.
The main changes are:
- The title now reads "Azure AI Foundry portal" instead of "Azure AI Studio", reflecting the new platform name.
- The article description was updated the same way, making clear that the Index Lookup tool is used in the Azure AI Foundry portal.
- The section heading was changed from "Azure AI Studio" to "Azure AI Foundry".
- The step for creating or opening a flow now points to "Azure AI Foundry".
- The screenshot caption now describes adding the Index Lookup tool in the Azure AI Foundry portal.
With this update, users can follow the current steps for using the Index Lookup tool in the Azure AI Foundry portal and work with it effectively.
articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: LLM tool for flows in Azure AI Studio
+title: LLM tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the large language model (LLM) tool for flows in Azure AI Studio.
+description: This article introduces you to the large language model (LLM) tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,7 +14,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# LLM tool for flows in Azure AI Studio
+# LLM tool for flows in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
@@ -29,10 +29,10 @@ Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites)
## Build with the LLM tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ LLM** to add the LLM tool to your flow.
- :::image type="content" source="../../media/prompt-flow/llm-tool.png" alt-text="Screenshot that shows the LLM tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/llm-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/llm-tool.png" alt-text="Screenshot that shows the LLM tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/llm-tool.png":::
1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
1. From the **Api** dropdown list, select **chat** or **completion**.
Summary
{
"modification_type": "minor update",
"modification_title": "LLM tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "llm-tool.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal so the LLM tool guidance reflects the current platform.
The main changes are:
- The title now names the "Azure AI Foundry portal" instead of "Azure AI Studio".
- The description was revised the same way, making clear which platform the LLM tool is used on.
- The heading was likewise updated to "Azure AI Foundry portal".
- The step for creating or opening a flow now refers to "Azure AI Foundry", with the correct link.
- The screenshot description now states that the LLM tool is added to a flow in the Azure AI Foundry portal.
With this update, users get current steps for the LLM tool in the Azure AI Foundry portal and can carry them out effectively.
articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md
Diff
@@ -1,7 +1,7 @@
---
-title: Overview of prompt flow tools in Azure AI Studio
+title: Overview of prompt flow tools in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn about prompt flow tools that are available in Azure AI Studio.
+description: Learn about prompt flow tools that are available in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -13,7 +13,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Overview of prompt flow tools in Azure AI Studio
+# Overview of prompt flow tools in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "Prompt flow tools overview updated for Azure AI Foundry portal"
}
Explanation
This change updates the "prompt-flow-tools-overview.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal so the overview reflects the current platform.
The main changes are:
- The title now names the "Azure AI Foundry portal" instead of "Azure AI Studio".
- The description was revised the same way, making clear which platform the prompt flow tools run on.
- The top-level heading was updated to "Azure AI Foundry portal" as well.
With this update, users get the prompt flow tools overview under the current platform name and can find the right information easily.
articles/ai-studio/how-to/prompt-flow-tools/prompt-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Prompt tool for flows in Azure AI Studio
+title: Prompt tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Prompt tool for flows in Azure AI Studio.
+description: This article introduces you to the Prompt tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,7 +14,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Prompt tool for flows in Azure AI Studio
+# Prompt tool for flows in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
@@ -44,10 +44,10 @@ For more information and best practices, see [Prompt engineering techniques](../
## Build with the Prompt tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ Prompt** to add the Prompt tool to your flow.
- :::image type="content" source="../../media/prompt-flow/prompt-tool.png" alt-text="Screenshot that shows the Prompt tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/prompt-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/prompt-tool.png" alt-text="Screenshot that shows the Prompt tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/prompt-tool.png":::
1. Enter values for the Prompt tool input parameters described in the [Inputs table](#inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
1. Add more tools (such as the [LLM tool](llm-tool.md)) to your flow, as needed. Or select **Run** to run the flow.
Summary
{
"modification_type": "minor update",
"modification_title": "Prompt tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "prompt-tool.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal, providing current platform information for using the Prompt tool.
The main changes are:
- The title now reads "Azure AI Foundry portal" instead of "Azure AI Studio", making the new platform explicit.
- The description was updated the same way, clarifying which platform the Prompt tool is used on.
- The heading was likewise revised to "Azure AI Foundry portal".
- The step for creating or opening a flow now refers to "Azure AI Foundry", with the appropriate link.
- The screenshot description now states that the Prompt tool is added to a flow in the Azure AI Foundry portal.
With this update, users get current information about the Prompt tool in the Azure AI Foundry portal, which helps them carry out the steps correctly.
articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
Diff
@@ -1,29 +1,29 @@
---
-title: Python tool for flows in Azure AI Studio
+title: Python tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Python tool for flows in Azure AI Studio.
+description: This article introduces you to the Python tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
-ms.custom: ignite-2023, devx-track-python, build-2024
+ms.custom: ignite-2023, devx-track-python, build-2024, ignite-2024
ms.topic: how-to
ms.date: 5/21/2024
ms.reviewer: keli19
ms.author: lagayhar
author: lgayhardt
---
-# Python tool for flows in Azure AI Studio
+# Python tool for flows in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
The prompt flow Python tool offers customized code snippets as self-contained executable nodes. You can quickly create Python tools, edit code, and verify results.
## Build with the Python tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ Python** to add the Python tool to your flow.
- :::image type="content" source="../../media/prompt-flow/python-tool.png" alt-text="Screenshot that shows the Python tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/python-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/python-tool.png" alt-text="Screenshot that shows the Python tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/python-tool.png":::
1. Enter values for the Python tool input parameters that are described in the [Inputs table](#inputs). For example, in the **Code** input text box, you can enter the following Python code:
Summary
{
"modification_type": "minor update",
"modification_title": "Python tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "python-tool.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal so the Python tool guidance reflects the current platform.
The main changes are:
- The title now names the "Azure AI Foundry portal" instead of "Azure AI Studio".
- The description was revised the same way, making clear which platform the Python tool is used on.
- The heading was updated to "Azure AI Foundry portal" as well.
- The `ms.custom` metadata gains the `ignite-2024` tag.
- The step for creating or opening a flow now links to "Azure AI Foundry".
- The screenshot description now states that the Python tool is added to a flow in the Azure AI Foundry portal.
With this update, users get current information for the Python tool in the Azure AI Foundry portal and can carry out the steps correctly.
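The Python tool's **Code** box, mentioned in the quoted diff, expects a Python function decorated as a prompt flow tool. A minimal sketch of that shape — the `tool` decorator below is a local stand-in so the snippet runs on its own (in a real flow it would come from the promptflow package), and `my_python_tool` is a hypothetical node name:

```python
# Stand-in for the @tool decorator that the promptflow package provides
# (assumption: in a real flow you would import it instead of defining it).
def tool(fn):
    return fn

@tool
def my_python_tool(message: str) -> str:
    # Flow inputs arrive as function parameters; the return value
    # feeds the next node in the flow.
    return "hello " + message

print(my_python_tool("world"))  # → hello world
```

The function signature defines the node's input parameters in the flow editor, and the return value becomes the node's output.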
articles/ai-studio/how-to/prompt-flow-tools/rerank-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Rerank tool for flows in Azure AI Studio
+title: Rerank tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Rerank tool for flows in Azure AI Studio.
+description: This article introduces you to the Rerank tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -12,18 +12,18 @@ author: lgayhardt
---
-# Rerank tool for flows in Azure AI Studio
+# Rerank tool for flows in Azure AI Foundry portal
The prompt flow Rerank tool improves search quality of relevant documents given a query for retrieval-augment generation (RAG) in prompt flow. This tool works best with [Index Look up tool](index-lookup-tool.md) as a ranker after the initial retrieval.
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
## Use the Rerank tool
-1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in Azure AI Foundry portal. For more information, see [Create a flow](../flow-develop.md).
1. Select **+More tools** > **Rerank tool** to add the Rerank tool to your flow.
- :::image type="content" source="../../media/prompt-flow/rerank-tool.png" alt-text="Screenshot that shows the rerank tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/rerank-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/rerank-tool.png" alt-text="Screenshot that shows the rerank tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/rerank-tool.png":::
1. Enter values for the Rerank tool input parameters.
1. Add more tools to your flow as needed, or select **Run** to run the flow.
Summary
{
"modification_type": "minor update",
"modification_title": "Rerank tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "rerank-tool.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal so the Rerank tool guidance matches the current platform.
The main changes are:
- The title now reads "Azure AI Foundry portal", making the platform change explicit.
- The description was updated from "Azure AI Studio" to "Azure AI Foundry portal", stating clearly which platform the Rerank tool runs on.
- The heading was revised to "Azure AI Foundry portal" accordingly.
- The step for creating or opening a flow now refers to the "Azure AI Foundry portal", with the appropriate link.
- The screenshot description now states that the Rerank tool is added to a flow in the Azure AI Foundry portal.
With this update, users get the latest information about the Rerank tool in the Azure AI Foundry portal and clear instructions for operating it correctly.
articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md
Diff
@@ -1,7 +1,7 @@
---
-title: Serp API tool for flows in Azure AI Studio
+title: Serp API tool for flows in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces you to the Serp API tool for flows in Azure AI Studio.
+description: This article introduces you to the Serp API tool for flows in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,7 +14,7 @@ ms.author: lagayhar
author: lgayhardt
---
-# Serp API tool for flows in Azure AI Studio
+# Serp API tool for flows in Azure AI Foundry portal
[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
@@ -28,7 +28,7 @@ Sign up on the [Serp API home page](https://serpapi.com/).
To create a Serp connection:
-1. Sign in to [Azure AI Studio](https://ml.azure.com/).
+1. Sign in to [Azure AI Foundry](https://ml.azure.com/).
1. Go to **Project settings** > **Connections**.
1. Select **+ New connection**.
1. Add the following custom keys to the connection:
@@ -37,7 +37,7 @@ To create a Serp connection:
- `azureml.flow.module`: `promptflow.connections`
- `api_key`: Your Serp API key. You must select the **is secret** checkbox to keep the API key secure.
- :::image type="content" source="../../media/prompt-flow/serp-custom-connection-keys.png" alt-text="Screenshot that shows adding extra information to a custom connection in AI Studio." lightbox = "../../media/prompt-flow/serp-custom-connection-keys.png":::
+ :::image type="content" source="../../media/prompt-flow/serp-custom-connection-keys.png" alt-text="Screenshot that shows adding extra information to a custom connection in AI Foundry portal." lightbox = "../../media/prompt-flow/serp-custom-connection-keys.png":::
The connection is the model used to establish connections with the Serp API. Get your API key from the Serp API account dashboard.
@@ -47,10 +47,10 @@ The connection is the model used to establish connections with the Serp API. Get
## Build with the Serp API tool
-1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
+1. Create or open a flow in [Azure AI Foundry](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md).
1. Select **+ More tools** > **Serp API** to add the Serp API tool to your flow.
- :::image type="content" source="../../media/prompt-flow/serp-api-tool.png" alt-text="Screenshot that shows the Serp API tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/serp-api-tool.png":::
+ :::image type="content" source="../../media/prompt-flow/serp-api-tool.png" alt-text="Screenshot that shows the Serp API tool added to a flow in Azure AI Foundry portal." lightbox="../../media/prompt-flow/serp-api-tool.png":::
1. Select the connection to one of your provisioned resources. For example, select **SerpConnection** if you created a connection with that name. For more information, see [Prerequisites](#prerequisites).
1. Enter values for the Serp API tool input parameters described in the [Inputs table](#inputs).
Summary
{
"modification_type": "minor update",
"modification_title": "Serp API tool updates for Azure AI Foundry portal"
}
Explanation
This change updates the "serp-api-tool.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal so users have current guidance for the Serp API tool on the new platform.
The main changes are:
- The title now reads "Azure AI Foundry portal", making the platform move explicit.
- The description was updated the same way, stating which platform the Serp API tool runs on.
- The heading was revised from "Azure AI Studio" to "Azure AI Foundry portal".
- The sign-in and flow-creation steps now refer to "Azure AI Foundry", with current links.
- The screenshot descriptions now state that the connection keys and the Serp API tool are configured in the AI Foundry portal.
With this update, users can follow the current procedure for using the Serp API tool in the Azure AI Foundry portal and operate it accurately.
articles/ai-studio/how-to/prompt-flow-troubleshoot.md
Diff
@@ -4,6 +4,8 @@ titleSuffix: Azure AI Foundry
description: This article addresses frequent questions about prompt flow usage.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: reference
author: lgayhardt
ms.author: lagayhar
@@ -94,7 +96,7 @@ If you regenerate your Azure OpenAI key and manually update the connection used
This is because the connections used in the endpoints/deployments won't be automatically updated. Any change for key or secrets in deployments should be done by manual update, which aims to avoid impacting online production deployment due to unintentional offline operation.
-- If the endpoint was deployed in the studio UI, you can just redeploy the flow to the existing endpoint using the same deployment name.
+- If the endpoint was deployed in the AI Foundry portal, you can just redeploy the flow to the existing endpoint using the same deployment name.
- If the endpoint was deployed using SDK or CLI, you need to make some modification to the deployment definition such as adding a dummy environment variable, and then use `az ml online-deployment update` to update your deployment.
### Vulnerability issues in prompt flow deployments
Summary
{
"modification_type": "minor update",
"modification_title": "Prompt flow troubleshooting guide updates"
}
Explanation
This change updates the "prompt-flow-troubleshoot.md" document, replacing Azure AI Studio references with the AI Foundry portal so the troubleshooting guidance for prompt flow stays current.
The main changes are:
- New metadata was added: the `ms.custom` field now includes `ignite-2024`, associating the document with that event.
- The instruction for endpoints deployed through the studio UI now refers to the AI Foundry portal, so users follow steps that match the platform their endpoint was deployed from.
- The troubleshooting steps around redeploying endpoints were clarified accordingly.
With this update, users get accurate, current troubleshooting information for prompt flow and can respond appropriately for their environment.
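For the SDK/CLI case in the quoted diff — where a regenerated Azure OpenAI key requires touching the deployment definition and running `az ml online-deployment update` — a hedged sketch of what that could look like. The endpoint name, deployment name, and environment variable are placeholders, and the `--set` update syntax is an assumption about the `az ml` extension rather than guidance from the source:

```shell
# Hypothetical names: replace <endpoint-name> and <deployment-name>.
# Setting a dummy environment variable changes the deployment definition,
# which forces an update so the refreshed connection key takes effect
# without rebuilding the whole deployment.
az ml online-deployment update \
  --endpoint-name <endpoint-name> \
  --name <deployment-name> \
  --set environment_variables.CONNECTION_REFRESH="$(date +%s)"
```

The dummy variable carries no meaning for the flow; it exists only to make the definition differ from the deployed one.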
articles/ai-studio/how-to/prompt-flow.md
Diff
@@ -1,26 +1,27 @@
---
-title: Prompt flow in Azure AI Studio
+title: Prompt flow in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article introduces prompt flow in Azure AI Studio.
+description: This article introduces prompt flow in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: conceptual
ms.date: 11/19/2024
ms.reviewer: yozen
ms.author: lagayhar
author: lgayhardt
---
-# Prompt flow in Azure AI Studio
+# Prompt flow in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Prompt flow is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating, and deploying your AI applications.
-Prompt flow is available independently as an open-source project on [GitHub](https://github.com/microsoft/promptflow), with its own SDK and [VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Prompt flow is also available and recommended to use as a feature within both [Azure AI Studio](https://ai.azure.com) and [Azure Machine Learning studio](https://ml.azure.com). This set of documentation focuses on prompt flow in Azure AI Studio.
+Prompt flow is available independently as an open-source project on [GitHub](https://github.com/microsoft/promptflow), with its own SDK and [VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Prompt flow is also available and recommended to use as a feature within both [Azure AI Foundry](https://ai.azure.com) and [Azure Machine Learning studio](https://ml.azure.com). This set of documentation focuses on prompt flow in Azure AI Foundry portal.
Definitions:
@@ -32,7 +33,7 @@ Definitions:
## Benefits of prompt flow
-With prompt flow in Azure AI Studio, you can:
+With prompt flow in Azure AI Foundry portal, you can:
- Orchestrate executable flows with LLMs, prompts, and Python tools through a visualized graph.
- Debug, share, and iterate your flows with ease through team collaboration.
@@ -51,7 +52,7 @@ With prompt flow in Azure AI Studio, you can:
- All-in-one platform: Prompt flow streamlines the entire prompt engineering process, from development and evaluation to deployment and monitoring. You can effortlessly deploy their flows as Azure AI endpoints and monitor their performance in real-time, ensuring optimal operation and continuous improvement.
- Enterprise Readiness Solutions: Prompt flow applies robust Azure AI enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
-With prompt flow in Azure AI Studio, you can unleash prompt engineering agility, collaborate effectively, and apply enterprise-grade solutions for successful LLM-based application development and deployment.
+With prompt flow in Azure AI Foundry portal, you can unleash prompt engineering agility, collaborate effectively, and apply enterprise-grade solutions for successful LLM-based application development and deployment.
## Flow development lifecycle
@@ -68,7 +69,7 @@ By following this structured and methodical approach, prompt flow empowers you t
## Flow types
-In Azure AI Studio, you can start a new flow by selecting a flow type or a template from the gallery.
+In Azure AI Foundry portal, you can start a new flow by selecting a flow type or a template from the gallery.
:::image type="content" source="../media/prompt-flow/type-or-gallery.png" alt-text="Screenshot of example flow types and templates from the gallery." lightbox="../media/prompt-flow/type-or-gallery.png":::
@@ -94,17 +95,17 @@ With the flow feature in Prompt flow, you have the power to design, customize, a
Tools are the fundamental building blocks of a flow.
-In Azure AI Studio, tool options include the [LLM tool](../how-to/prompt-flow-tools/llm-tool.md), [Prompt tool](../how-to/prompt-flow-tools/prompt-tool.md), [Python tool](../how-to/prompt-flow-tools/python-tool.md), and more.
+In Azure AI Foundry portal, tool options include the [LLM tool](../how-to/prompt-flow-tools/llm-tool.md), [Prompt tool](../how-to/prompt-flow-tools/prompt-tool.md), [Python tool](../how-to/prompt-flow-tools/python-tool.md), and more.
:::image type="content" source="../media/prompt-flow/tool-options.png" alt-text="Screenshot of tool options in prompt flow editor." lightbox="../media/prompt-flow/tool-options.png":::
Each tool is a simple, executable unit with a specific function. By combining different tools, you can create a flow that accomplishes a wide range of goals. For example, you can use the LLM tool to generate text or summarize an article and the Python tool to process the text to inform the next flow component or result.
One of the key benefit of Prompt flow tools is their seamless integration with third-party APIs and python open source packages. This not only improves the functionality of large language models but also makes the development process more efficient for developers.
-If the prompt flow tools in Azure AI Studio don't meet your requirements, you can [develop your own custom tool and make it a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). To discover more custom tools developed by the open source community, visit [prompt flow custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
+If the prompt flow tools in Azure AI Foundry portal don't meet your requirements, you can [develop your own custom tool and make it a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). To discover more custom tools developed by the open source community, visit [prompt flow custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
## Next steps
-- [Build with prompt flow in Azure AI Studio](flow-develop.md)
+- [Build with prompt flow in Azure AI Foundry portal](flow-develop.md)
- [Get started with prompt flow in VS Code](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)
Summary
{
"modification_type": "minor update",
"modification_title": "Prompt flow updates for Azure AI Foundry portal"
}
Explanation
This change updates the "prompt-flow.md" document, replacing Azure AI Studio references with the Azure AI Foundry portal, keeping the prompt flow documentation consistent and current.
The main changes are:
- The title and description now name the "Azure AI Foundry portal" instead of "Azure AI Studio".
- The `ignite-2024` tag was added to the metadata, tying the document to the latest event and features.
- The description of prompt flow's benefits now focuses on use within the Azure AI Foundry portal.
- The passages on endpoints, flow types, and tool options use "Azure AI Foundry portal" throughout.
- The guidance on developing custom prompt flow tools is unchanged, so users can still follow it to build their own tools.
With this update, users can correctly understand how to use prompt flow within the Azure AI Foundry portal and can reach the related resources easily.
articles/ai-studio/how-to/quota.md
Diff
@@ -1,31 +1,32 @@
---
-title: Manage and increase quotas for resources with Azure AI Studio
+title: Manage and increase quotas for resources with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to manage and increase quotas for resources with Azure AI Studio.
+description: This article provides instructions on how to manage and increase quotas for resources with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: siarora
ms.author: larryfr
author: Blackmist
---
-# Manage and increase quotas for resources with Azure AI Studio
+# Manage and increase quotas for resources with Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing quota for your Azure AI Studio virtual machines and Azure OpenAI models.
+Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing quota for your Azure AI Foundry virtual machines and Azure OpenAI models.
Azure uses limits and quotas to prevent budget overruns due to fraud, and to honor Azure capacity constraints. It's also a good way to control costs for admins. Consider these limits as you scale for production workloads.
In this article, you learn about:
- Default limits on Azure resources
-- Creating Azure AI Studio hub-level quotas.
+- Creating Azure AI Foundry hub-level quotas.
- Viewing your quotas and limits
- Requesting quota and limit increases
@@ -36,26 +37,26 @@ Quotas are applied to each subscription in your account. If you have multiple su
A quota is a credit limit on Azure resources, not a capacity guarantee. If you have large-scale capacity needs, contact Azure support to increase your quota.
> [!NOTE]
-> Azure AI Studio compute has a separate quota from the core compute quota.
+> Azure AI Foundry compute has a separate quota from the core compute quota.
Default limits vary by offer category type, such as free trial, pay-as-you-go, and virtual machine (VM) series (such as Dv2, F, and G).
-## Azure AI Studio quota
+## Azure AI Foundry quota
-The following actions in Azure AI Studio consume quota:
+The following actions in Azure AI Foundry portal consume quota:
- Creating a compute instance.
- Building a vector index.
- Deploying open models from model catalog.
-## Azure AI Studio compute
+## Azure AI Foundry compute
-[Azure AI Studio compute](./create-manage-compute.md) has a default quota limit on both the number of cores and the number of unique compute resources that are allowed per region in a subscription.
+[Azure AI Foundry compute](./create-manage-compute.md) has a default quota limit on both the number of cores and the number of unique compute resources that are allowed per region in a subscription.
- The quota on the number of cores is split by each VM Family and cumulative total cores.
- The quota on the number of unique compute resources per region is separate from the VM core quota, as it applies only to the managed compute resources
-To raise the limits for compute, you can [request a quota increase](#view-and-request-quotas-in-azure-ai-studio) in the [Azure AI Studio](https://ai.azure.com).
+To raise the limits for compute, you can [request a quota increase](#view-and-request-quotas-in-azure-ai-foundry-portal) in the [Azure AI Foundry](https://ai.azure.com).
Available resources include:
- Dedicated cores per region have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores. GPUs also default to zero cores.
@@ -75,11 +76,11 @@ When opening the support request to increase the total compute limit, provide th
1. On the **Additional details** page, provide the subscription ID, region, new limit (between 500 and 2500), and business justification to increase the total compute limits for the region.
1. Select **Create** to submit the support request ticket.
-## Azure AI Studio shared quota
+## Azure AI Foundry shared quota
-Azure AI Studio provides a pool of shared quota that is available for different users across various regions to use concurrently. Depending upon availability, users can temporarily access quota from the shared pool, and use the quota to perform testing for a limited amount of time. The specific time duration depends on the use case. By temporarily using quota from the quota pool, you no longer need to file a support ticket for a short-term quota increase or wait for your quota request to be approved before you can proceed with your workload.
+Azure AI Foundry provides a pool of shared quota that is available for different users across various regions to use concurrently. Depending upon availability, users can temporarily access quota from the shared pool, and use the quota to perform testing for a limited amount of time. The specific time duration depends on the use case. By temporarily using quota from the quota pool, you no longer need to file a support ticket for a short-term quota increase or wait for your quota request to be approved before you can proceed with your workload.
-Use of the shared quota pool is available for testing inferencing for Llama-2, Phi, Nemotron, Mistral, Dolly, and Deci-DeciLM models from the Model Catalog. You should use the shared quota only for creating temporary test endpoints, not production endpoints. For endpoints in production, you should [request dedicated quota](#view-and-request-quotas-in-azure-ai-studio). Billing for shared quota is usage-based, just like billing for dedicated virtual machine families.
+Use of the shared quota pool is available for testing inferencing for Llama-2, Phi, Nemotron, Mistral, Dolly, and Deci-DeciLM models from the Model Catalog. You should use the shared quota only for creating temporary test endpoints, not production endpoints. For endpoints in production, you should [request dedicated quota](#view-and-request-quotas-in-azure-ai-foundry-portal). Billing for shared quota is usage-based, just like billing for dedicated virtual machine families.
## Container Instances
@@ -89,13 +90,13 @@ For more information, see [Container Instances limits](/azure/azure-resource-m
Azure Storage has a limit of 250 storage accounts per region, per subscription. This limit includes both Standard and Premium storage accounts.
-## View and request quotas in Azure AI Studio
+## View and request quotas in Azure AI Foundry portal
-Use quotas to manage compute target allocation between multiple Azure AI Studio hubs in the same subscription.
+Use quotas to manage compute target allocation between multiple Azure AI Foundry hubs in the same subscription.
By default, all hubs share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families for more granular cost control and governance on hubs in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues.
-1. In Azure AI Studio, select **Management center** from the left menu.
+1. In Azure AI Foundry portal, select **Management center** from the left menu.
:::image type="content" source="../media/management-center/management-center.png" alt-text="Screenshot of the management center link.":::
@@ -111,11 +112,11 @@ By default, all hubs share the same quota as the subscription-level quota for VM
- Use the **charts** along the side of the page to view more details about quota usage. The charts are interactive; hovering over a section of the chart displays more information, and selecting the chart filters the list of models. Selecting the chart legend filters the data displayed in the chart.
- Use the **Azure OpenAI Provisioned** link to view information about provisioned models, including a **Capacity calculator**.
- :::image type="content" source="../media/cost-management/model-quota.png" alt-text="Screenshot of the Model quota page in Azure AI Studio." lightbox="../media/cost-management/model-quota.png":::
+ :::image type="content" source="../media/cost-management/model-quota.png" alt-text="Screenshot of the Model quota page in Azure AI Foundry portal." lightbox="../media/cost-management/model-quota.png":::
1. When you select the **VM quota** link, you can view the quota and usage for the virtual machine families in the selected Azure region. To request more quota, select the VM family and then select **Request quota**.
- :::image type="content" source="../media/cost-management/vm-quota.png" alt-text="Screenshot of the VM quota page in Azure AI Studio." lightbox="../media/cost-management/vm-quota.png":::
+ :::image type="content" source="../media/cost-management/vm-quota.png" alt-text="Screenshot of the VM quota page in Azure AI Foundry portal." lightbox="../media/cost-management/vm-quota.png":::
## Next steps
Summary
{
"modification_type": "minor update",
"modification_title": "Update to resource quota management in Azure AI Foundry"
}
Explanation
This change updates the quota.md document, replacing references to Azure AI Studio with Azure AI Foundry. The quota-management guidance now matches the current platform name and stays clear and consistent.
The main changes are:
- The title and description now read "Azure AI Foundry" instead of "Azure AI Studio", making the target platform explicit.
- Instructions for managing virtual machine and Azure OpenAI model quotas in Azure AI Foundry are updated.
- The sections defining quotas and describing how to manage and request them are all adjusted to refer to Azure AI Foundry.
- Specific quota limits and the steps for reaching the management center are likewise revised for Azure AI Foundry.
- The descriptions of dedicated resources and the shared quota pool are updated so that users can manage resources appropriately.
With this update, users have the information they need to manage quotas in Azure AI Foundry and to optimize resources and control costs effectively.
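The hub-level quota model the updated document describes (all hubs drawing cores from a subscription's per-VM-family quota, optionally capped per hub for cost governance) can be sketched in a few lines. The class and names below are purely illustrative, not part of any Azure SDK:

```python
from dataclasses import dataclass, field


@dataclass
class FamilyQuota:
    """Per-VM-family core quota shared by all hubs in a subscription."""
    subscription_limit: int                        # e.g. 24-300 dedicated cores by default
    hub_caps: dict = field(default_factory=dict)   # optional per-hub maximums
    used: dict = field(default_factory=dict)       # cores currently allocated per hub

    def can_allocate(self, hub: str, cores: int) -> bool:
        total_used = sum(self.used.values())
        hub_used = self.used.get(hub, 0)
        cap = self.hub_caps.get(hub)               # None means "subscription limit only"
        within_sub = total_used + cores <= self.subscription_limit
        within_hub = cap is None or hub_used + cores <= cap
        return within_sub and within_hub

    def allocate(self, hub: str, cores: int) -> None:
        if not self.can_allocate(hub, cores):
            raise ValueError("quota exceeded; request an increase")
        self.used[hub] = self.used.get(hub, 0) + cores


# Example: a 24-core VM-family quota split across two hubs, hub-a capped at 16.
q = FamilyQuota(subscription_limit=24, hub_caps={"hub-a": 16})
q.allocate("hub-a", 16)
print(q.can_allocate("hub-a", 1))   # → False (hub cap reached)
print(q.can_allocate("hub-b", 8))   # → True  (subscription still has 8 cores free)
print(q.can_allocate("hub-b", 9))   # → False (would exceed the subscription limit)
```

This is only a mental model for why a request can fail at either level: the hub cap and the subscription limit are checked independently, which mirrors why the document distinguishes hub-level quotas from the subscription-level quota.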
articles/ai-studio/how-to/secure-data-playground.md
Diff
@@ -1,7 +1,7 @@
---
title: Securely use playground chat
titleSuffix: Azure AI Foundry
-description: Learn how to securely use the Azure AI Studio playground chat on your own data.
+description: Learn how to securely use the Azure AI Foundry portal playground chat on your own data.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -13,12 +13,12 @@ zone_pivot_groups: azure-ai-studio-sdk-cli
# Customer intent: As an administrator, I want to make sure that my data is handled securely when used in the playground chat.
---
-# Use your data securely with the Azure AI Studio playground
+# Use your data securely with the Azure AI Foundry portal playground
-Use this article to learn how to securely use Azure AI Studio's playground chat on your data. The following sections provide our recommended configuration to protect your data and resources by using Microsoft Entra ID role-based access control, a managed network, and private endpoints. We recommend disabling public network access for Azure OpenAI resources, Azure AI Search resources, and storage accounts. Using selected networks with IP rules isn't supported because the services' IP addresses are dynamic.
+Use this article to learn how to securely use Azure AI Foundry's playground chat on your data. The following sections provide our recommended configuration to protect your data and resources by using Microsoft Entra ID role-based access control, a managed network, and private endpoints. We recommend disabling public network access for Azure OpenAI resources, Azure AI Search resources, and storage accounts. Using selected networks with IP rules isn't supported because the services' IP addresses are dynamic.
> [!NOTE]
-> AI Studio's managed virtual network settings apply only to AI Studio's managed compute resources, not platform as a service (PaaS) services like Azure OpenAI or Azure AI Search. When using PaaS services, there is no data exfiltration risk because the services are managed by Microsoft.
+> AI Foundry's managed virtual network settings apply only to AI Foundry's managed compute resources, not platform as a service (PaaS) services like Azure OpenAI or Azure AI Search. When using PaaS services, there is no data exfiltration risk because the services are managed by Microsoft.
The following table summarizes the changes made in this article:
@@ -31,35 +31,35 @@ The following table summarizes the changes made in this article:
## Prerequisites
-Ensure that the AI Studio hub is deployed with the __Identity-based access__ setting for the Storage account. This configuration is required for the correct access control and security of your AI Studio Hub. You can verify this configuration using one of the following methods:
+Ensure that the AI Foundry hub is deployed with the __Identity-based access__ setting for the Storage account. This configuration is required for the correct access control and security of your AI Foundry Hub. You can verify this configuration using one of the following methods:
- In the Azure portal, select the hub and then select __Settings__, __Properties__, and __Options__. At the bottom of the page, verify that __Storage account access type__ is set to __Identity-based access__.
- If deploying using Azure Resource Manager or Bicep templates, include the `systemDatastoresAuthMode: 'identity'` property in your deployment template.
- You must be familiar with using Microsoft Entra ID role-based access control to assign roles to resources and users. For more information, visit the [Role-based access control](/azure/role-based-access-control/overview) article.
-## Configure Network Isolated AI Studio Hub
+## Configure Network Isolated AI Foundry Hub
-If you're __creating a new Azure AI Studio hub__, use one of the following documents to create a hub with network isolation:
+If you're __creating a new Azure AI Foundry hub__, use one of the following documents to create a hub with network isolation:
-- [Create a secure Azure AI Studio hub in Azure portal](create-secure-ai-hub.md)
-- [Create a secure Azure AI Studio hub using the Python SDK or Azure CLI](develop/create-hub-project-sdk.md)
+- [Create a secure Azure AI Foundry hub in Azure portal](create-secure-ai-hub.md)
+- [Create a secure Azure AI Foundry hub using the Python SDK or Azure CLI](develop/create-hub-project-sdk.md)
-If you have an __existing Azure AI Studio hub__ that isn't configured to use a managed network, use the following steps to configure it to use one:
+If you have an __existing Azure AI Foundry hub__ that isn't configured to use a managed network, use the following steps to configure it to use one:
1. From the Azure portal, select the hub, then select __Settings__, __Networking__, __Public access__.
1. To disable public network access for the hub, set __Public network access__ to __Disabled__. Select __Save__ to apply the changes.
- :::image type="content" source="../media/how-to/secure-playground-on-your-data/hub-public-access-disable.png" alt-text="Screenshot of Azure AI Studio hub settings with public access disabled.":::
+ :::image type="content" source="../media/how-to/secure-playground-on-your-data/hub-public-access-disable.png" alt-text="Screenshot of Azure AI Foundry hub settings with public access disabled.":::
1. Select __Workspace managed outbound access__ and then select either the __Allow Internet Outbound__ or __Allow Only Approved Outbound__ network isolation mode. Select __Save__ to apply the changes.
- :::image type="content" source="../media/how-to/secure-playground-on-your-data/select-network-isolation-configuration.png" alt-text="Screenshot of the Azure AI Studio hub settings with allow internet outbound selected.":::
+ :::image type="content" source="../media/how-to/secure-playground-on-your-data/select-network-isolation-configuration.png" alt-text="Screenshot of the Azure AI Foundry hub settings with allow internet outbound selected.":::
## Configure Azure AI services Resource
Depending on your configuration, you might use an Azure AI services resource that also includes Azure OpenAI or a standalone Azure OpenAI resource. The steps in this section configure an AI services resource. The same steps apply to an Azure OpenAI resource.
-1. If you don't have an existing Azure AI services resource for your Azure AI Studio hub, [create one](/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
+1. If you don't have an existing Azure AI services resource for your Azure AI Foundry hub, [create one](/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
1. From the Azure portal, select the AI services resource, then select __Resource Management__, __Identity__, and __System assigned__.
1. To create a managed identity for the AI services resource, set the __Status__ to __On__. Select __Save__ to apply the changes.
@@ -75,7 +75,7 @@ Depending on your configuration, you might use an Azure AI services resource tha
1. From the __Basics__ tab, enter a unique name for the private endpoint, network interface, and select the region to create the private endpoint in.
1. From the __Resource__ tab, accept the target subresource of __account__.
- 1. From the __Virtual Network__ tab, select the _Azure Virtual Network_ that the private endpoint connects to. This network should be the same one that your clients connect to, and that the Azure AI Studio hub has a private endpoint connection to.
+ 1. From the __Virtual Network__ tab, select the _Azure Virtual Network_ that the private endpoint connects to. This network should be the same one that your clients connect to, and that the Azure AI Foundry hub has a private endpoint connection to.
1. From the __DNS__ tab, select the defaults for the DNS settings.
1. Continue to the __Review + create__ tab, then select __Create__ to create the private endpoint.
@@ -96,13 +96,13 @@ You might want to consider using an Azure AI Search index when you either want t
To use an existing index, it must have at least one searchable field. Ensure at least one valid vector column is mapped when using vector search.
> [!IMPORTANT]
-> The information in this section is only applicable for securing the Azure AI Search resource for use with Azure AI Studio. If you're using Azure AI Search for other purposes, you might need to configure additional settings. For related information on configuring Azure AI Search, visit the following articles:
+> The information in this section is only applicable for securing the Azure AI Search resource for use with Azure AI Foundry. If you're using Azure AI Search for other purposes, you might need to configure additional settings. For related information on configuring Azure AI Search, visit the following articles:
>
> - [Configure network access and firewall rules](../../search/service-configure-firewall.md)
> - [Enable or disable role-based access control](/azure/search/search-security-enable-roles)
> - [Configure a search service to connect using a managed identity](/azure/search/search-howto-managed-identities-data-sources)
-1. If you don't have an existing Azure AI Search resource for your Azure AI Studio hub, [create one](/azure/search/search-create-service-portal).
+1. If you don't have an existing Azure AI Search resource for your Azure AI Foundry hub, [create one](/azure/search/search-create-service-portal).
1. From the Azure portal, select the AI Search resource, then select __Settings__, __Identity__, and __System assigned__.
1. To create a managed identity for the AI Search resource, set the __Status__ to __On__. Select __Save__ to apply the changes.
@@ -118,7 +118,7 @@ To use an existing index, it must have at least one searchable field. Ensure at
1. From the __Basics__ tab, enter a unique name for the private endpoint, network interface, and select the region to create the private endpoint in.
1. From the __Resource__ tab, select the __Subscription__ that contains the resource, set the __Resource type__ to __Microsoft.Search/searchServices__, and select the Azure AI Search resource. The only available subresource is __searchService__.
- 1. From the __Virtual Network__ tab, select the _Azure Virtual Network_ that the private endpoint connects to. This network should be the same one that your clients connect to, and that the Azure AI Studio hub has a private endpoint connection to.
+ 1. From the __Virtual Network__ tab, select the _Azure Virtual Network_ that the private endpoint connects to. This network should be the same one that your clients connect to, and that the Azure AI Foundry hub has a private endpoint connection to.
1. From the __DNS__ tab, select the defaults for the DNS settings.
1. Continue to the __Review + create__ tab, then select __Create__ to create the private endpoint.
@@ -131,7 +131,7 @@ To use an existing index, it must have at least one searchable field. Ensure at
## Configure Azure Storage (ingestion-only)
-If you're using Azure Storage for the ingestion scenario with the Azure AI Studio playground, you need to configure your Azure Storage Account.
+If you're using Azure Storage for the ingestion scenario with the Azure AI Foundry portal playground, you need to configure your Azure Storage Account.
1. Create a Storage Account resource
1. From the Azure portal, select the Storage Account resource, then select __Security + networking__, __Networking__, and __Firewalls and virtual networks__.
@@ -146,7 +146,7 @@ If you're using Azure Storage for the ingestion scenario with the Azure AI Studi
1. From the __Basics__ tab, enter a unique name for the private endpoint, network interface, and select the region to create the private endpoint in.
1. From the __Resource__ tab, set the __Target sub-resource__ to __blob__.
- 1. From the __Virtual Network__ tab, select the _Azure Virtual Network_ that the private endpoint connects to. This network should be the same one that your clients connect to, and that the Azure AI Studio hub has a private endpoint connection to.
+ 1. From the __Virtual Network__ tab, select the _Azure Virtual Network_ that the private endpoint connects to. This network should be the same one that your clients connect to, and that the Azure AI Foundry hub has a private endpoint connection to.
1. From the __DNS__ tab, select the defaults for the DNS settings.
1. Continue to the __Review + create__ tab, then select __Create__ to create the private endpoint.
@@ -155,22 +155,22 @@ If you're using Azure Storage for the ingestion scenario with the Azure AI Studi
## Configure Azure Key Vault
-Azure AI Studio uses Azure Key Vault to securely store and manage secrets. To allow access to the key vault from trusted services, use the following steps.
+Azure AI Foundry uses Azure Key Vault to securely store and manage secrets. To allow access to the key vault from trusted services, use the following steps.
> [!NOTE]
-> These steps assume that the key vault has already been configured for network isolation when you created your Azure AI Studio Hub.
+> These steps assume that the key vault has already been configured for network isolation when you created your Azure AI Foundry Hub.
1. From the Azure portal, select the Key Vault resource, then select __Settings__, __Networking__, and __Firewalls and virtual networks__.
1. From the __Exception__ section of the page, make sure that __Allow trusted Microsoft services to bypass firewall__ is __enabled__.
## Configure connections to use Microsoft Entra ID
-Connections from Azure AI Studio to Azure AI services and Azure AI Search should use Microsoft Entra ID for secure access. Connections are created from [Azure AI Studio](https://ai.azure.com) instead of the Azure portal.
+Connections from Azure AI Foundry to Azure AI services and Azure AI Search should use Microsoft Entra ID for secure access. Connections are created from [Azure AI Foundry](https://ai.azure.com) instead of the Azure portal.
> [!IMPORTANT]
> Using Microsoft Entra ID with Azure AI Search is currently a preview feature. For more information on connections, visit the [Add connections](connections-add.md#create-a-new-connection) article.
-1. from Azure AI Studio, select __Connections__. If you have existing connections to the resources, you can select the connection and then select the __pencil icon__ in the __Access details__ section to update the connection. Set the __Authentication__ field to __Microsoft Entra ID__, then select __Update__.
+1. from Azure AI Foundry, select __Connections__. If you have existing connections to the resources, you can select the connection and then select the __pencil icon__ in the __Access details__ section to update the connection. Set the __Authentication__ field to __Microsoft Entra ID__, then select __Update__.
1. To create a new connection, select __+ New connection__, then select the resource type. Browse for the resource or enter the required information, then set __Authentication__ to __Microsoft Entra ID__. Select __Add connection__ to create the connection.
Repeat these steps for each resource that you want to connect to using Microsoft Entra ID.
@@ -190,7 +190,7 @@ For more information on assigning roles, see [Tutorial: Grant a user access to r
| Azure AI services/OpenAI | Cognitive Services OpenAI Contributor | Azure AI Search | Allow Search the ability to fine-tune, deploy and generate text |
| Azure Storage Account | Storage Blob Data Contributor | Azure AI Search | Reads blob and writes knowledge store. |
| Azure Storage Account | Storage Blob Data Contributor | Azure AI services/OpenAI | Reads from the input container, and writes the preprocess result to the output container. |
-| Azure Blob Storage private endpoint | Reader | Azure AI Studio project | For your Azure AI Studio project with managed network enabled to access Blob storage in a network restricted environment |
+| Azure Blob Storage private endpoint | Reader | Azure AI Foundry project | For your Azure AI Foundry project with managed network enabled to access Blob storage in a network restricted environment |
| Azure OpenAI Resource for chat model | Cognitive Services OpenAI User | Azure OpenAI resource for embedding model | [Optional] Required only if using two Azure OpenAI resources to communicate. |
> [!NOTE]
@@ -214,17 +214,17 @@ For more information on assigning roles, see [Tutorial: Grant a user access to r
| Azure Storage Account | Storage File Data Privileged Contributor | Developer's Microsoft Entra ID | Needed to Access File Share in Storage for Promptflow data. |
| The resource group or Azure subscription where the developer need to deploy the web app to | Contributor | Developer's Microsoft Entra ID | Deploy web app to the developer's Azure subscription. |
-## Use your data in AI Studio
+## Use your data in AI Foundry portal
-Now, the data you add to AI Studio is secured to the isolated network provided by your Azure AI Studio hub and project. For an example of using data, visit the [build a question and answer copilot](../tutorials/deploy-copilot-ai-studio.md) tutorial.
+Now, the data you add to AI Foundry is secured to the isolated network provided by your Azure AI Foundry hub and project. For an example of using data, visit the [build a question and answer copilot](../tutorials/deploy-copilot-ai-studio.md) tutorial.
## Deploy web apps
For information on configuring web app deployments, visit the [Use Azure OpenAI on your data securely](/azure/ai-services/openai/how-to/use-your-data-securely#web-app) article.
## Limitations
-When using the Chat playground in Azure AI Studio, don't navigate to another tab within Studio. If you do navigate to another tab, when you return to the Chat tab you must remove your data and then add it back.
+When using the Chat playground in Azure AI Foundry portal, don't navigate to another tab within Studio. If you do navigate to another tab, when you return to the Chat tab you must remove your data and then add it back.
## Related content
Summary
{
"modification_type": "minor update",
"modification_title": "Update to data playground security in the Azure AI Foundry portal"
}
Explanation
This change updates the secure-data-playground.md document, replacing references to Azure AI Studio with the Azure AI Foundry portal, so that users can follow current steps for using their own data securely.
The main changes are:
- The title and description now refer to the "Azure AI Foundry portal playground" instead of the "Azure AI Studio playground", making the platform explicit.
- The recommended settings and procedures for data protection are adjusted for Azure AI Foundry, in particular the use of Microsoft Entra ID role-based access control, a managed network, and private endpoints.
- The hub settings and storage-account access sections are revised to match the latest configuration, with concrete steps for the required settings.
- Network settings, access permissions, and resource-creation procedures are likewise updated to reflect Azure AI Foundry.
With this update, users have the information they need to use their data securely in the Azure AI Foundry portal with appropriate security controls in place.
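The prerequisites in the diff above call out including the `systemDatastoresAuthMode: 'identity'` property when deploying the hub with ARM or Bicep templates. A minimal Bicep fragment might look like the following; the resource names and API versions are illustrative placeholders, so check the current `Microsoft.MachineLearningServices/workspaces` schema before use:

```bicep
param location string = resourceGroup().location

// Existing dependent resources (placeholder names).
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: 'mystorageaccount'
}

resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' existing = {
  name: 'my-key-vault'
}

// Illustrative hub deployment with identity-based storage access.
resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: 'my-ai-foundry-hub'        // placeholder name
  location: location
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    storageAccount: storage.id
    keyVault: keyVault.id
    // Required for the secure playground scenario described above:
    systemDatastoresAuthMode: 'identity'
  }
}
```

The key line is `systemDatastoresAuthMode: 'identity'`, which corresponds to the __Identity-based access__ storage setting that the article says you can otherwise verify in the Azure portal.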
articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md
Diff
@@ -1,12 +1,13 @@
---
-title: How to troubleshoot your deployments and monitors in Azure AI Studio
+title: How to troubleshoot your deployments and monitors in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Studio.
+description: This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: how-to
ms.date: 5/21/2024
ms.reviewer: fasantia
@@ -15,15 +16,15 @@ ms.author: mopeakande
author: msakande
---
-# How to troubleshoot your deployments and monitors in Azure AI Studio
+# How to troubleshoot your deployments and monitors in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Studio.
+This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Foundry portal.
## Deployment issues
-For the general deployment error code reference, see [Troubleshooting online endpoints deployment and scoring](/azure/machine-learning/how-to-troubleshoot-online-endpoints) in the Azure Machine Learning documentation. Much of the information there also apply to Azure AI Studio deployments.
+For the general deployment error code reference, see [Troubleshooting online endpoints deployment and scoring](/azure/machine-learning/how-to-troubleshoot-online-endpoints) in the Azure Machine Learning documentation. Much of the information there also apply to Azure AI Foundry deployments.
### Error: Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources
@@ -38,15 +39,15 @@ For more information about managing quota, see:
- [Quota for deploying and inferencing a model](../how-to/deploy-models-openai.md#quota-for-deploying-and-inferencing-a-model)
- [Manage Azure OpenAI Service quota documentation](/azure/ai-services/openai/how-to/quota?tabs=rest)
-- [Manage and increase quotas for resources with Azure AI Studio](quota.md)
+- [Manage and increase quotas for resources with Azure AI Foundry](quota.md)
### Error: `ToolLoadError`
After you deployed a prompt flow, you got the error message: "Tool load failed in 'search_question_from_indexed_docs': (ToolLoadError) Failed to load package tool 'Vector Index Lookup': (HttpResponseError) (AuthorizationFailed)."
To fix this error, take the following steps to manually assign the ML Data scientist role to your endpoint. It might take several minutes for the new role to take effect.
-1. Go to your project in [Azure AI Studio](https://ai.azure.com) and select **Management center** from the left navigation menu to open the settings page.
+1. Go to your project in [Azure AI Foundry](https://ai.azure.com) and select **Management center** from the left navigation menu to open the settings page.
1. Under the **Project** heading, select **Overview**.
1. Under **Quick reference**, select the link to your resource group to open it in the Azure portal.
1. Select **Access control (IAM)** from the left navigation menu in the Azure portal.
@@ -58,7 +59,7 @@ To fix this error, take the following steps to manually assign the ML Data scien
1. Select your endpoint's name.
1. Select **Select**.
1. Select **Review + Assign**.
-1. Return to your project in AI Studio and select **Deployments** from the left navigation menu.
+1. Return to your project in AI Foundry portal and select **Deployments** from the left navigation menu.
1. Select your deployment.
1. Test the prompt flow deployment.
@@ -74,7 +75,7 @@ This error message refers to a situation where the deployment build failed. You
__Option 1: Find the build log for the Azure default blob storage.__
-1. Go to your project in [Azure AI Studio](https://ai.azure.com) and select **Management center** from the left navigation menu to open the settings page.
+1. Go to your project in [Azure AI Foundry](https://ai.azure.com) and select **Management center** from the left navigation menu to open the settings page.
1. Under the **Hub** heading, select **Overview**.
1. In the section for **Connected resources**, select the link to your storage account name. This name should be the name of the storage account listed in the error message you received. You'll be taken to the storage account page in the [Azure portal](https://portal.azure.com).
1. On the storage account page, select **Data Storage** > **Containers** from the left navigation menu.
@@ -84,7 +85,7 @@ __Option 1: Find the build log for the Azure default blob storage.__
__Option 2: Find the build log within Azure Machine Learning studio.__
> [!NOTE]
-> This option to access the build log uses [Azure Machine Learning studio](https://ml.azure.com), which is a different portal than [Azure AI Studio](https://ai.azure.com).
+> This option to access the build log uses [Azure Machine Learning studio](https://ml.azure.com), which is a different portal than [Azure AI Foundry](https://ai.azure.com).
1. Go to [Azure Machine Learning studio](https://ml.azure.com).
2. Select **Endpoints** from the left navigation menu.
@@ -108,5 +109,5 @@ Playground only supports select models, such as Azure OpenAI models and Llama-2.
## Related content
-- [Azure AI Studio overview](../what-is-ai-studio.md)
+- [Azure AI Foundry overview](../what-is-ai-studio.md)
- [Azure AI FAQ](../faq.yml)
Summary
{
"modification_type": "minor update",
"modification_title": "Update to deployment and monitoring troubleshooting in the Azure AI Foundry portal"
}
Explanation
This change updates the troubleshoot-deploy-and-monitor.md document, replacing references to Azure AI Studio with the Azure AI Foundry portal, so that the deployment and monitoring troubleshooting instructions match the current platform.
The main changes are:
- The title and description now read "Azure AI Foundry portal" instead of "Azure AI Studio", making the target platform explicit.
- The deployment error-code references and troubleshooting steps are revised for Azure AI Foundry.
- The step-by-step fixes for specific errors now consistently refer to Azure AI Foundry, including each project-level step.
- References to Azure Machine Learning studio are updated relative to Azure AI Foundry, with a note to avoid confusion between the two portals.
- The related content section now links to Azure AI Foundry material instead of Azure AI Studio.
With this update, users get accurate, current guidance for troubleshooting deployments and monitoring in the Azure AI Foundry portal.
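One practical detail in the `ToolLoadError` fix above is that a new role assignment "might take several minutes" to take effect, so a deployment can keep returning authorization failures immediately after the fix. A generic retry-with-backoff sketch of how a client might wait that out; `AuthError` and `flaky_endpoint` are stand-ins, not Azure SDK names:

```python
import time


class AuthError(Exception):
    """Stand-in for an authorization failure returned by the endpoint."""


def call_with_retry(call_endpoint, attempts=5, initial_delay=1.0):
    """Retry an endpoint call while a new role assignment propagates.

    Waits initial_delay seconds after the first failure, doubling each time.
    """
    delay = initial_delay
    for attempt in range(attempts):
        try:
            return call_endpoint()
        except AuthError:
            if attempt == attempts - 1:
                raise  # role still not effective after all attempts
            time.sleep(delay)
            delay *= 2


# Simulated endpoint that starts working on the third call, as if the
# role assignment took a little while to take effect.
calls = {"n": 0}

def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AuthError("AuthorizationFailed")
    return "ok"

print(call_with_retry(flaky_endpoint, initial_delay=0.01))  # → ok
```

The same pattern applies when re-testing the prompt flow deployment after assigning the ML Data scientist role: retry a few times with increasing waits before concluding the role assignment itself is wrong.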
articles/ai-studio/how-to/troubleshoot-secure-connection-project.md
Diff
@@ -35,7 +35,7 @@ To connect to a project that's secured behind a VNet, use one of the following m
The troubleshooting steps for DNS configuration differ based on whether you're using Azure DNS or a custom DNS. Use the following steps to determine which one you're using:
-1. In the [Azure portal](https://portal.azure.com), select the private endpoint resource for your Azure AI Studio. If you don't remember the name, select your Azure AI Studio resource, __Networking__, __Private endpoint connections__, and then select the __Private endpoint__ link.
+1. In the [Azure portal](https://portal.azure.com), select the private endpoint resource for your Azure AI Foundry. If you don't remember the name, select your Azure AI Foundry resource, __Networking__, __Private endpoint connections__, and then select the __Private endpoint__ link.
:::image type="content" source="../media/how-to/troubleshoot-secure-connection-project/private-endpoint-connections.png" alt-text="Screenshot of the private endpoint connections for the resource." lightbox="../media/how-to/troubleshoot-secure-connection-project/private-endpoint-connections.png":::
@@ -131,6 +131,6 @@ Try the following steps to troubleshoot:
1. In Azure Portal, check the network settings of the storage account that is associated to your hub.
* If public network access is set to __Enabled from selected virtual networks and IP addresses__, ensure the correct IP address ranges are added to access your storage account.
* If public network access is set to __Disabled__, ensure you have a private endpoint configured from your Azure virtual network to your storage account with Target sub-resource as blob. In addition, you must grant the [Reader](/azure/role-based-access-control/built-in-roles#reader) role for the storage account private endpoint to the managed identity.
-2. In Azure Portal, navigate to your AI Studio hub. Ensure the managed virtual network is provisioned and the outbound private endpoint to blob storage is Active. For more on provisioning the managed virtual network, see [How to configure a managed network for Azure AI Studio hubs](configure-managed-network.md).
-3. Navigate to AI Studio > your project > project settings.
+2. In Azure Portal, navigate to your AI Foundry hub. Ensure the managed virtual network is provisioned and the outbound private endpoint to blob storage is Active. For more on provisioning the managed virtual network, see [How to configure a managed network for Azure AI Foundry hubs](configure-managed-network.md).
+3. Navigate to AI Foundry > your project > project settings.
4. Refresh the page. A number of connections should be created including 'workspaceblobstore'.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryポータルでの安全な接続に関するトラブルシューティングの更新"
}
Explanation
This change updates the "troubleshoot-secure-connection-project.md" document, replacing mentions of Azure AI Studio with the Azure AI Foundry portal. The troubleshooting steps for secure connections now reflect the current platform.
The main changes are:
- In the troubleshooting steps, the instructions for selecting the private endpoint resource were changed from "Azure AI Studio" to "Azure AI Foundry," so the correct platform name is used.
- Navigation in specific steps was likewise revised around "Azure AI Foundry," with the related information updated accurately.
- The Azure portal steps for verifying the managed virtual network and related settings were also rewritten in terms of AI Foundry.
With this update, users can more accurately understand and carry out the steps for troubleshooting secure connections through the Azure AI Foundry portal.
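The DNS portion of the troubleshooting above boils down to one question: does the project's private endpoint name resolve to a private address from inside the network? The following standard-library sketch illustrates that check. It is not part of the documented steps, and the hostname you would pass is your own private endpoint FQDN:

```python
import ipaddress
import socket

def resolves_to_private_ip(hostname: str) -> bool:
    """Return True if every IPv4 address for hostname is private.

    An endpoint secured behind a VNet should resolve to a private IP when
    queried from inside the network; a public IP here usually means the
    private DNS zone or record is misconfigured.
    """
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    addresses = {info[4][0] for info in infos}
    return all(ipaddress.ip_address(addr).is_private for addr in addresses)
```

Running it against a public name should return `False`, while a correctly configured private endpoint name (resolved from inside the VNet) should return `True`.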
articles/ai-studio/how-to/use-blocklists.md
Diff
@@ -1,7 +1,7 @@
---
-title: Use blocklists in AI Studio
+title: Use blocklists in AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Learn how to create custom blocklists in Azure AI Studio as part of your content filtering configurations.
+description: Learn how to create custom blocklists in Azure AI Foundry portal as part of your content filtering configurations.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
@@ -13,13 +13,13 @@ author: PatrickFarley
---
-# Use blocklists in Azure AI Studio
+# Use blocklists in Azure AI Foundry portal
-You can create custom blocklists in the Azure AI Studio as part of your content filtering configurations. The following steps show how to create custom blocklists as part of your content filters in Azure AI Studio.
+You can create custom blocklists in the Azure AI Foundry portal as part of your content filtering configurations. The following steps show how to create custom blocklists as part of your content filters in Azure AI Foundry portal.
## Create a blocklist
-1. Go to [AI Studio](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** page on the left nav and select the **Blocklists** tab.
+1. Go to [AI Foundry](https://ai.azure.com/) and navigate to your project/hub. Then select the **Safety+ Security** page on the left nav and select the **Blocklists** tab.
:::image type="content" source="../media/content-safety/content-filter/select-blocklists.png" lightbox="../media/content-safety/content-filter/select-blocklists.png" alt-text="Screenshot of the Blocklists page tab.":::
1. Select **Create a blocklist**. Enter a name for your blocklist, add a description, and select an Azure OpenAI resource to connect it to. Then select **Create Blocklist**.
1. Select your new blocklist once it's created. On the blocklist's page, select **Add new term**.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryポータルでのブロックリストの使用に関する更新"
}
Explanation
This change updates the "use-blocklists.md" document, replacing mentions of "Azure AI Studio" with "Azure AI Foundry portal." Users are now shown the blocklist creation steps for the current platform.
The main changes are:
- The title and description were changed from "Azure AI Studio" to "Azure AI Foundry portal," clarifying the target platform.
- The instructions for creating blocklists were all updated to apply to the Azure AI Foundry portal, with the concrete steps reflected accurately.
- The navigation steps for reaching specific tabs were likewise revised around AI Foundry, providing current guidance.
This update should make it easier for users to understand how to create custom blocklists in the Azure AI Foundry portal and encourage their use in content filtering configurations.
articles/ai-studio/includes/ai-services/add-model-deployments.md
Diff
@@ -10,9 +10,9 @@ author: santiagxf
As opposite to GitHub Models where all the models are already configured, the Azure AI Services resource allows you to control which models are available in your endpoint and under which configuration.
-You can add all the models you need in the endpoint by using [Azure AI Studio for GitHub](https://ai.azure.com/github). In the following example, we add a `Mistral-Large` model in the service:
+You can add all the models you need in the endpoint by using [Azure AI Foundry for GitHub](https://ai.azure.com/github). In the following example, we add a `Mistral-Large` model in the service:
-1. Go to **Model catalog** section in [Azure AI Studio for GitHub](https://ai.azure.com/github).
+1. Go to **Model catalog** section in [Azure AI Foundry for GitHub](https://ai.azure.com/github).
2. Scroll to the model you're interested in and select it.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryポータルでのモデルデプロイメントの追加に関する更新"
}
Explanation
This change updates the "add-model-deployments.md" document, replacing Azure AI Studio wording with the Azure AI Foundry portal. Users now get accurate information about deploying models on the current platform.
The main changes are:
- "Azure AI Studio" was replaced with "Azure AI Foundry portal," correctly reflecting the platform's current name.
- The steps for adding models to an endpoint were updated to be based on the new portal.
With this update, users can clearly understand the steps for deploying models through the Azure AI Foundry portal and work from up-to-date information.
articles/ai-studio/includes/chat-with-data.md
Diff
@@ -7,14 +7,14 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 5/21/2024
-ms.custom: include, build-2024
+ms.custom: include, build-2024, ignite-2024
---
To complete this section, you need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/refs/heads/main/tutorial/data/product-info.zip) to your local machine.
Follow these steps to add your data in the chat playground to help the assistant answer questions about your products. You're not changing the deployed model itself. Your data is stored separately and securely in your Azure subscription.
-1. Go to your project in [Azure AI Studio](https://ai.azure.com).
+1. Go to your project in [Azure AI Foundry](https://ai.azure.com).
1. Select **Playgrounds**.
1. Select **Try the chat playground**.
1. Select your deployed chat model from the **Deployment** dropdown.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryでのデータチャット機能に関する更新"
}
Explanation
This change updates the "chat-with-data.md" document, replacing references to Azure AI Studio with Azure AI Foundry. Users now receive steps based on the current platform.
The main changes are:
- "Azure AI Studio" was changed to "Azure AI Foundry," reflecting the current environment.
- In the metadata, "ignite-2024" was added to the `ms.custom` value, updating the document's custom tags.
As a result, users can accurately follow the steps for adding their data to the chat feature in Azure AI Foundry and can expect support grounded in their product information within a secure environment.
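The tutorial's data step (downloading the Contoso Trek ZIP and locating files such as `product_info_11.md`) could look like the following sketch once the archive has been saved locally. The local path `product-info.zip` is an assumption for illustration, not something the docs prescribe:

```python
import zipfile
from pathlib import Path

def extract_product_data(zip_path: str, dest: str) -> list:
    """Extract the downloaded product-data ZIP and list the Markdown files inside."""
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(dest)  # unpack everything under dest
    return sorted(str(p) for p in Path(dest).rglob("*.md"))
```

The returned list makes it easy to confirm that the product file you need (for example `product_info_11.md`) is present before adding the data in the playground.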
articles/ai-studio/includes/create-env-file-tutorial.md
Diff
@@ -7,7 +7,7 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 11/03/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
Your project connection string is required to call the Azure OpenAI service from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.
@@ -25,7 +25,7 @@ EVALUATION_MODEL="gpt-4o-mini"
If you changed the name of the models you deployed, or you want to try different models, update those names in this `.env` file.
-Find your connection string in the Azure AI Studio project you created in the [AI Studio playground quickstart](../quickstarts/get-started-playground.md). Open the project, then find the connection string on the **Overview** page. Copy the connection string and paste it into the `.env` file.
+Find your connection string in the Azure AI Foundry project you created in the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md). Open the project, then find the connection string on the **Overview** page. Copy the connection string and paste it into the `.env` file.
:::image type="content" source="../media/quickstarts/azure-ai-sdk/connection-string.png" alt-text="Screenshot shows the overview page of a project and the location of the connection string.":::
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryの接続文字列に関する更新"
}
Explanation
This change updates the "create-env-file-tutorial.md" document, replacing Azure AI Studio wording with Azure AI Foundry so the steps accurately describe the current platform.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, applying a new custom tag to the document.
- In the steps for finding the connection string, the project the user accesses was changed from "Azure AI Studio" to "Azure AI Foundry," clearly stating the up-to-date instructions.
This lets users accurately follow how to obtain the connection string in the Azure AI Foundry environment and set it correctly in the environment variables file (the `.env` file).
articles/ai-studio/includes/create-env-file.md
Diff
@@ -7,7 +7,7 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 11/03/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
Your project connection string is required to call the Azure OpenAI service from your code. In this quickstart, you save this value in a `.env` file, which is a file that contains environment variables that your application can read.
@@ -18,7 +18,7 @@ Create a `.env` file, and paste the following code:
PROJECT_CONNECTION_STRING=<your-connection-string>
```
-You find your connection string in the Azure AI Studio project you created in the [AI Studio playground quickstart](../quickstarts/get-started-playground.md). Open the project, then find the connection string on the **Overview** page. Copy the connection string and paste it into the `.env` file.
+You find your connection string in the Azure AI Foundry project you created in the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md). Open the project, then find the connection string on the **Overview** page. Copy the connection string and paste it into the `.env` file.
:::image type="content" source="../media/quickstarts/azure-ai-sdk/connection-string.png" alt-text="Screenshot shows the overview page of a project and the location of the connection string.":::
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryの接続文字列に関する更新"
}
Explanation
This change updates the "create-env-file.md" document, replacing the Azure AI Studio portions with Azure AI Foundry so readers get accurate information based on the current platform.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- In the steps for obtaining the connection string, the name of the project the user accesses was changed from "Azure AI Studio" to "Azure AI Foundry," clearly conveying the instructions for the current environment.
This lets users accurately understand how to obtain the connection string in Azure AI Foundry and save it correctly in the `.env` file.
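A `.env` file is just `KEY=VALUE` lines that the application loads into its environment. The quickstarts typically rely on a package such as python-dotenv for this; the stdlib-only sketch below shows what that loading step amounts to and is illustrative, not the documented approach:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimally parse KEY=VALUE lines from a .env file into os.environ.

    Existing environment variables are left untouched; blank lines and
    '#' comments are skipped.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

After calling `load_env_file()`, the code can read `os.environ["PROJECT_CONNECTION_STRING"]` just as if the variable had been exported in the shell.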
articles/ai-studio/includes/create-hub.md
Diff
@@ -7,15 +7,15 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 11/19/2024
-ms.custom: include, build-2024
+ms.custom: include, build-2024, ignite-2024
---
> [!NOTE]
-> A hub in Azure AI Studio is a one-stop shop where you manage everything your AI project needs, like security and resources, so you can develop and test faster. To learn more about how hubs can help you, see the [Hubs and projects overview](/azure/ai-studio/concepts/ai-resources) article.
+> A hub in Azure AI Foundry portal is a one-stop shop where you manage everything your AI project needs, like security and resources, so you can develop and test faster. To learn more about how hubs can help you, see the [Hubs and projects overview](/azure/ai-studio/concepts/ai-resources) article.
-To create a hub in [Azure AI Studio](https://ai.azure.com), follow these steps:
+To create a hub in [Azure AI Foundry](https://ai.azure.com), follow these steps:
-1. Go to [Azure AI Studio](https://ai.azure.com) and sign in with your Azure account.
+1. Go to [Azure AI Foundry](https://ai.azure.com) and sign in with your Azure account.
1. If you’re not already in a project, select one. It doesn't matter which one you select. If you have no projects, first create one by selecting **+ Create project** at the top of the page.
1. Select the **Management center** from the left menu.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryに関するハブの説明の更新"
}
Explanation
This change updates the "create-hub.md" document, replacing Azure AI Studio references with Azure AI Foundry so the information reflects the current platform.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, applying a new custom tag.
- In the hub description, the name was changed from "Azure AI Studio" to "Azure AI Foundry," bringing the platform name up to date.
- The hub creation steps likewise now point to "Azure AI Foundry," giving users accurate access information.
This lets users correctly understand how to create a hub in the Azure AI Foundry portal and manage their AI projects more quickly.
articles/ai-studio/includes/create-projects.md
Diff
@@ -7,12 +7,12 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 11/19/2024
-ms.custom: include, build-2024
+ms.custom: include, build-2024, ignite-2024
---
-To create a project in [Azure AI Studio](https://ai.azure.com), follow these steps:
+To create a project in [Azure AI Foundry](https://ai.azure.com), follow these steps:
-1. Go to [Azure AI Studio](https://ai.azure.com). If you are in a project, select **Azure AI Studio** at the top left of the page to go to the **Home** page.
+1. Go to [Azure AI Foundry](https://ai.azure.com). If you are in a project, select **Azure AI Foundry** at the top left of the page to go to the **Home** page.
1. Select **+ Create project**.
1. Enter a name for the project.
1. If you have a hub, you'll see the one you most recently used selected.
@@ -31,7 +31,7 @@ Projects live inside a hub. A hub allows you to share configurations like data c
When you create a new hub, you must have **Owner** or **Contributor** permissions on the selected resource group. If you're part of a team and don't have these permissions, your administrator should create a hub for you.
-While you can create a hub as part of the project creation, you have more control and can set more advanced settings for the hub if you create it separately. For example, you can customize network security or the underlying Azure Storage account. For more information, see [How to create and manage an Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
+While you can create a hub as part of the project creation, you have more control and can set more advanced settings for the hub if you create it separately. For example, you can customize network security or the underlying Azure Storage account. For more information, see [How to create and manage an Azure AI Foundry hub](../how-to/create-azure-ai-resource.md).
When you create a new hub as part of the project creation, default settings are provided. If you want to customize these settings, do so before you create the project:
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryに関するプロジェクト作成手順の更新"
}
Explanation
This change updates the "create-projects.md" document, replacing Azure AI Studio information with Azure AI Foundry so users receive current platform information.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, expanding the document's custom tags.
- In the project creation steps, the name was changed from "Azure AI Studio" to "Azure AI Foundry," referring to the correct platform.
- The hub-related explanation was likewise changed from "Azure AI Studio" to "Azure AI Foundry," keeping the information consistent for users.
This helps users understand the project creation steps in Azure AI Foundry and manage the required resources correctly. Detailed instructions for project and hub settings are included, so the setup can be customized for each environment.
articles/ai-studio/includes/deploy-model.md
Diff
@@ -7,15 +7,15 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 10/29/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
To work with a model, you first deploy it into a project. If you don't yet have a project, you create one as part of the deployment step.
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
1. Studio remembers where you were last, so what you do next depends on where you are:
- * If you're new to Azure AI Studio, select **Explore models**.
+ * If you're new to Azure AI Foundry, select **Explore models**.
:::image type="content" source="../media/tutorials/chat/home-page.png" alt-text="Screenshot of the home page if with no projects." lightbox="../media/tutorials/chat/home-page.png":::
@@ -38,4 +38,4 @@ To work with a model, you first deploy it into a project. If you don't yet have
1. Provide a name for your project.
1. Select **Create a project**.
-1. Leave the default **Deployment name**. Select **Connect and deploy**.
\ No newline at end of file
+1. Leave the default **Deployment name**. Select **Connect and deploy**.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryに関するモデルデプロイ手順の更新"
}
Explanation
This change updates the "deploy-model.md" document, replacing Azure AI Studio information with Azure AI Foundry so readers get information based on the current platform.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- In the model deployment steps, the platform users sign in to was changed from "Azure AI Studio" to "Azure AI Foundry."
- All related mentions within the detailed steps were likewise updated to "Azure AI Foundry," keeping the document consistent.
This lets users accurately understand the model deployment steps in Azure AI Foundry and manage the required resources. Each onboarding step is spelled out clearly, so even first-time users can proceed smoothly.
articles/ai-studio/includes/find-deployments.md
Diff
@@ -7,7 +7,7 @@ ms.author: sdgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 10/09/2024
-ms.custom: include,
+ms.custom: include, ignite-2024
---
-You can always find the endpoint's details, URL, and access keys by navigating to your project's **Management center** from the left navigation pane. Then, select **Models + endpoints**.
\ No newline at end of file
+You can always find the endpoint's details, URL, and access keys by navigating to your project's **Management center** from the left navigation pane. Then, select **Models + endpoints**.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Foundryにおけるデプロイメント情報の更新"
}
Explanation
This change updates the "find-deployments.md" document, aligning Azure AI Studio information with Azure AI Foundry so users receive information based on the current platform.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- The instructions for finding an endpoint's details, URL, and access keys remain clearly stated; the content itself is largely unchanged.
With this fix, users can correctly understand how to manage deployments in Azure AI Foundry and reach the information they need through the project's Management center. Because the information is presented consistently, users can work more efficiently.
articles/ai-studio/includes/install-cli.md
Diff
@@ -7,7 +7,7 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 08/29/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
@@ -42,4 +42,4 @@ az login
Alternatively, you can log in manually via the browser with a device code.
```
az login --use-device-code
-```
\ No newline at end of file
+```
Summary
{
"modification_type": "minor update",
"modification_title": "Azure CLIのインストール手順の更新"
}
Explanation
This change updates the "install-cli.md" document, bringing the Azure CLI installation steps up to date. In particular, the steps for accessing the Azure OpenAI service are clarified.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- The steps for installing the Azure CLI in a local development environment and signing in are emphasized, so users can call the Azure OpenAI service with their own user credentials.
- The use of the `az login` command is described concretely, making the steps easy to follow.
- A trailing newline was added to the final code block, improving the Markdown formatting.
This update should make it easier for users to install the Azure CLI and access the service, so their workflow proceeds smoothly.
articles/ai-studio/includes/install-python.md
Diff
@@ -7,7 +7,7 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 08/29/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
First you need to create a new Python environment to use to install the package you need for this tutorial. DO NOT install packages into your global python installation. You should always use a virtual or conda environment when installing python packages, otherwise you can break your global install of Python.
Summary
{
"modification_type": "minor update",
"modification_title": "Python環境のインストール手順の更新"
}
Explanation
This change updates the "install-python.md" document, emphasizing the steps for creating a new Python environment and the related cautions. It helps users understand how to install Python packages properly.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- Creating a new Python environment before installing packages is highlighted as a requirement, with a warning not to install packages directly into the global Python installation. To avoid breaking the global install, always using a virtual or conda environment is recommended.
This update should help users understand best practices for installing Python packages and manage their environments effectively, avoiding problems that a bad install could otherwise cause.
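The virtual-environment requirement above can be satisfied with nothing but the standard library's `venv` module. This sketch shows the idea; in practice you would usually run `python -m venv .venv` from the shell and then activate it before installing packages:

```python
import sys
import venv
from pathlib import Path

def create_project_env(env_dir: str = ".venv") -> Path:
    """Create an isolated virtual environment so packages never touch the global install."""
    venv.create(env_dir)  # add with_pip=True to bootstrap pip into the environment
    # The interpreter lands in bin/ on Linux/macOS and Scripts\ on Windows.
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    name = "python.exe" if sys.platform == "win32" else "python"
    return Path(env_dir) / bindir / name
```

Installing packages with that interpreter (or after activating the environment) keeps the global Python installation intact, which is exactly the practice the document warns you to follow.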
articles/ai-studio/includes/new-name.md
Diff
@@ -7,8 +7,8 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 11/03/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
> [!IMPORTANT]
-> Azure AI Studio is now Azure AI Foundry. We're updating the documentation to reflect this change. In the meantime, you might see references to Azure AI Studio.
\ No newline at end of file
+> Azure AI Studio is now Azure AI Foundry. We're updating the documentation to reflect this change. In the meantime, you might see references to Azure AI Studio.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI Studioの名称変更についての更新"
}
Explanation
This change updates the "new-name.md" document to reflect that Azure AI Studio has been renamed Azure AI Foundry, and the document content has been brought in line with the rename.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- The important notice states that Azure AI Studio is now Azure AI Foundry, and warns that references to Azure AI Studio may remain while the documentation is being updated.
This update makes readers aware of the significant rename and helps them avoid confusion. By providing accurate information, it encourages future users to use the correct service name.
articles/ai-studio/includes/open-catalog.md
Diff
@@ -7,8 +7,8 @@ ms.author: sgilley
ms.service: azure-ai-studio
ms.topic: include
ms.date: 10/29/2024
-ms.custom: include
+ms.custom: include, ignite-2024
---
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
1. If you’re not already in your project, select it.
1. Select **Model catalog** from the left navigation pane.
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI StudioからAzure AI Foundryへの名称変更"
}
Explanation
This change updates the "open-catalog.md" document to reflect that Azure AI Studio has been renamed Azure AI Foundry, so users recognize and use the correct service name.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, updating the document's custom tags.
- The initial sign-in instruction was changed from Azure AI Studio to Azure AI Foundry, emphasizing use of the proper service name.
This update should help users learn the current service name and avoid confusion, and keeping the documentation consistent means clearer instructions overall.
articles/ai-studio/includes/resource-provider-kinds.md
Diff
@@ -12,6 +12,6 @@ ms.custom: include, build-2024
|Resource type|Resource provider|Kind|
|---|---|---|
-|Azure AI Studio hub|`Microsoft.MachineLearningServices/workspace`|`hub`|
-|Azure AI Studio project|`Microsoft.MachineLearningServices/workspace`|`project`|
+|Azure AI Foundry hub|`Microsoft.MachineLearningServices/workspace`|`hub`|
+|Azure AI Foundry project|`Microsoft.MachineLearningServices/workspace`|`project`|
|Azure AI services *or*</br>Azure AI OpenAI Service|`Microsoft.CognitiveServices/account`|`AIServices`</br>`OpenAI`|
Summary
{
"modification_type": "minor update",
"modification_title": "リソースプロバイダーの種類に関する名称変更"
}
Explanation
This change updates the "resource-provider-kinds.md" document to reflect the rename from Azure AI Studio to Azure AI Foundry. The affected resource type entries are revised to match the new name.
The main change is:
- The resource types in the table were changed from "Azure AI Studio hub" and "Azure AI Studio project" to "Azure AI Foundry hub" and "Azure AI Foundry project," so the current service name is reflected accurately. (The `ms.custom` metadata, which already included "build-2024", is unchanged.)
With this update, users get resource information under the correct service name, avoiding misunderstanding and confusion, and the current information helps them use the service correctly.
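The table above makes a point worth illustrating: hubs and projects share the same resource provider and differ only in `kind`. The following sketch shows how a listing of resources (for example, from an ARM query) could be split by kind; the resource names are hypothetical and the type strings are copied from the table:

```python
# Hubs and projects are both Microsoft.MachineLearningServices/workspace
# resources; only the `kind` field tells them apart (per the table above).
RESOURCES = [
    {"name": "contoso-hub", "type": "Microsoft.MachineLearningServices/workspace", "kind": "hub"},
    {"name": "contoso-project", "type": "Microsoft.MachineLearningServices/workspace", "kind": "project"},
    {"name": "contoso-aiservices", "type": "Microsoft.CognitiveServices/account", "kind": "AIServices"},
]

def workspaces_of_kind(resources, kind):
    """Return the names of workspace resources matching a given kind (hub or project)."""
    return [
        r["name"]
        for r in resources
        if r["type"] == "Microsoft.MachineLearningServices/workspace" and r["kind"] == kind
    ]
```

Filtering on `kind` rather than `type` is what distinguishes a hub from a project in any tooling that inspects these resources.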
articles/ai-studio/index.yml
Diff
@@ -3,12 +3,13 @@
title: Azure AI Foundry documentation # < 60 chars
summary: Build cutting-edge, market-ready, responsible applications for your organization with AI.
metadata:
- title: Azure AI Studio documentation
- description: Azure AI Studio helps developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box and pre-built and customizable APIs and models. Example applications include generative AI, natural language processing for conversations, search, monitoring, translation, speech, vision, and decision-making.
+ title: Azure AI Foundry documentation
+ description: Azure AI Foundry helps developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box and pre-built and customizable APIs and models. Example applications include generative AI, natural language processing for conversations, search, monitoring, translation, speech, vision, and decision-making.
ms.service: azure-ai-studio
ms.custom:
- build-2024
- copilot-learning-hub
+ - ignite-2024
ms.topic: landing-page
ms.reviewer: sgilley
ms.author: sgilley
@@ -65,7 +66,7 @@ landingContent:
links:
- text: Get started with the Azure AI SDKs
url: how-to/develop/sdk-overview.md
- - text: Work with AI Studio projects in VS Code
+ - text: Work with AI Foundry projects in VS Code
url: how-to/develop/vscode.md
- linkListType: tutorial
Summary
{
"modification_type": "minor update",
"modification_title": "Azure AI StudioからAzure AI Foundryへの名称変更"
}
Explanation
This change updates the "index.yml" document, changing Azure AI Studio references to Azure AI Foundry. The document title, description, and related links are all revised to match the new name.
The main changes are:
- The document title was changed from "Azure AI Studio documentation" to "Azure AI Foundry documentation."
- The description was likewise rewritten to describe Azure AI Foundry instead of Azure AI Studio.
- "ignite-2024" was added to the `ms.custom` tags, updating the custom information.
- The link text was changed from "Work with AI Studio projects in VS Code" to "Work with AI Foundry projects in VS Code," matching the current service name.
The goal of this update is to keep the service name consistent and help users work from correct information. Avoiding confusion and reflecting current usage should improve the user experience.
articles/ai-studio/quickstarts/get-started-code.md
Diff
@@ -4,7 +4,7 @@ titleSuffix: Azure AI Foundry
description: This article provides instructions on how to build a custom chat app in Python using the Azure AI SDK.
manager: scottpolly
ms.service: azure-ai-studio
-ms.custom: build-2024, devx-track-azurecli, devx-track-python
+ms.custom: build-2024, devx-track-azurecli, devx-track-python, ignite-2024
ms.topic: how-to
ms.date: 11/07/2024
ms.reviewer: dantaylo
@@ -20,7 +20,7 @@ In this quickstart, we walk you through setting up your local development enviro
## Prerequisites
-* Before you can follow this quickstart, complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to deploy a **gpt-4o-mini** model into a project.
+* Before you can follow this quickstart, complete the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md) to deploy a **gpt-4o-mini** model into a project.
## Install the Azure CLI and sign in
@@ -48,7 +48,7 @@ Create a file named **chat.py**. Copy and paste the following code into it.
Your project connection string is required to call the Azure OpenAI service from your code.
-Find your connection string in the Azure AI Studio project you created in the [AI Studio playground quickstart](../quickstarts/get-started-playground.md). Open the project, then find the connection string on the **Overview** page.
+Find your connection string in the Azure AI Foundry project you created in the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md). Open the project, then find the connection string on the **Overview** page.
:::image type="content" source="../media/quickstarts/azure-ai-sdk/connection-string.png" alt-text="Screenshot shows the overview page of a project and the location of the connection string.":::
Summary
{
"modification_type": "minor update",
"modification_title": "AI StudioからAI Foundryへの名称変更"
}
Explanation
This change applies to the "get-started-code.md" document, replacing the Azure AI Studio portions with Azure AI Foundry. The title, service name, and some of the steps are revised under the new name.
The main changes are:
- "ignite-2024" was added to the `ms.custom` metadata, applying a new tag.
- The phrase "AI Studio playground quickstart" was changed to "AI Foundry playground quickstart."
- In the steps for finding the connection string, the project name was likewise rewritten from "Azure AI Studio project" to "Azure AI Foundry project."
The goal is to keep the service name consistent so users can follow the steps with accurate information. Reflecting the updated content helps users easily understand how to use the new service.
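Since this quickstart's code depends on the project connection string being available, a `chat.py`-style script might fail fast when it is missing. This is an illustrative sketch, not the quickstart's actual code; the `PROJECT_CONNECTION_STRING` variable name comes from the quickstart's `.env` file:

```python
import os

def get_project_connection_string() -> str:
    """Read the project connection string the code needs, failing fast with a
    pointer to where it lives (the project's Overview page) if it's not set."""
    conn = os.environ.get("PROJECT_CONNECTION_STRING")
    if not conn:
        raise RuntimeError(
            "PROJECT_CONNECTION_STRING is not set; copy it from your project's "
            "Overview page into your .env file or shell environment."
        )
    return conn
```

Checking the variable once at startup gives a clear error message instead of an opaque failure later when the Azure OpenAI call is made.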
articles/ai-studio/quickstarts/get-started-playground.md
Diff
@@ -1,30 +1,31 @@
---
-title: Use the chat playground in Azure AI Studio
+title: Use the chat playground in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Use this article to learn how to deploy a chat model and use it in the chat playground in Azure AI Studio.
+description: Use this article to learn how to deploy a chat model and use it in the chat playground in Azure AI Foundry portal.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- build-2024
+ - ignite-2024
ms.topic: quickstart
ms.date: 10/22/2024
ms.reviewer: zuramir
ms.author: sgilley
author: sdgilley
-# customer intent: As a developer, I want use the chat playground in Azure AI Studio so I can work with generative AI.
+# customer intent: As a developer, I want use the chat playground in Azure AI Foundry portal so I can work with generative AI.
---
-# Quickstart: Use the chat playground in Azure AI Studio
+# Quickstart: Use the chat playground in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this quickstart, you use [Azure AI Studio](https://ai.azure.com) to deploy a chat model and use it in the chat playground in Azure AI Studio.
+In this quickstart, you use [Azure AI Foundry](https://ai.azure.com) to deploy a chat model and use it in the chat playground in Azure AI Foundry portal.
If you don't have an Azure subscription, <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">create one for free</a>.
## Prerequisites
-- You need permissions to create an Azure AI Studio hub or have one created for you.
+- You need permissions to create an Azure AI Foundry hub or have one created for you.
- If your role is **Contributor** or **Owner**, you can follow the steps in this tutorial.
- If your role is **Azure AI Developer**, the hub must already be created before you can complete this tutorial. Your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the hub. For more information, see [hubs](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md).
@@ -43,7 +44,7 @@ For more information about deploying models, see [how to deploy models](../how-t
## Chat in the playground without your data
-In the [Azure AI Studio](https://ai.azure.com) playground, you can observe how your model responds with and without your data. In this quickstart, you test your model without your data.
+In the [Azure AI Foundry](https://ai.azure.com) playground, you can observe how your model responds with and without your data. In this quickstart, you test your model without your data.
To chat with your deployed model in the chat playground, follow these steps:
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, every reference to "Azure AI Studio" in get-started-playground.md is updated to "Azure AI Foundry," so the new service name is used consistently throughout the document.
The main changes are:
- The title changed from "Use the chat playground in Azure AI Studio" to "Use the chat playground in Azure AI Foundry portal".
- The description was updated to the new portal name, replacing Azure AI Studio with Azure AI Foundry portal.
- A new `ignite-2024` tag was added to `ms.custom`.
- Sentences describing Azure AI Studio permissions were revised to match Azure AI Foundry.
The goal of this update is to align the document with the current service name so that users can follow the steps with accurate information and adopt the renamed service smoothly.
articles/ai-studio/quickstarts/hear-speak-playground.md
Diff
@@ -1,42 +1,43 @@
---
-title: Hear and speak with chat models in the Azure AI Studio chat playground
+title: Hear and speak with chat models in the Azure AI Foundry portal chat playground
titleSuffix: Azure AI Foundry
-description: Hear and speak with chat models in the Azure AI Studio chat playground.
+description: Hear and speak with chat models in the Azure AI Foundry portal chat playground.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: quickstart
ms.date: 11/19/2024
ms.reviewer: eur
ms.author: eur
author: eric-urban
---
-# Quickstart: Hear and speak with chat models in the AI Studio chat playground
+# Quickstart: Hear and speak with chat models in the AI Foundry portal chat playground
-In the chat playground in Azure AI Studio, you can use speech to text and text to speech features to interact with chat models. You can try the same model that you use for text-based chat in a speech-based chat. It's just another way to interact with the model.
+In the chat playground in Azure AI Foundry portal, you can use speech to text and text to speech features to interact with chat models. You can try the same model that you use for text-based chat in a speech-based chat. It's just another way to interact with the model.
In this quickstart, you use Azure OpenAI Service and Azure AI Speech to:
- Speak to the assistant via speech to text.
- Hear the assistant's response via text to speech.
-The speech to text and text to speech features can be used together or separately in the AI Studio chat playground. You can use the playground to test your chat model before deploying it.
+The speech to text and text to speech features can be used together or separately in the AI Foundry portal chat playground. You can use the playground to test your chat model before deploying it.
## Prerequisites
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
-- An [AI Studio project](../how-to/create-projects.md).
+- An [AI Foundry project](../how-to/create-projects.md).
- A deployed [Azure OpenAI](../how-to/deploy-models-openai.md) chat model. This guide is tested with a `gpt-4o-mini` model.
## Configure the chat playground
Before you can start a chat session, you need to configure the chat playground to use the speech to text and text to speech features.
-1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Go to your AI Studio project. If you need to create a project, see [Create an AI Studio project](../how-to/create-projects.md).
+1. Sign in to [Azure AI Foundry](https://ai.azure.com).
+1. Go to your AI Foundry project. If you need to create a project, see [Create an AI Foundry project](../how-to/create-projects.md).
1. Select **Playgrounds** from the left pane and then select a playground to use. In this example, select **Try the chat playground**.
1. Select your deployed chat model from the **Deployment** dropdown.
@@ -95,6 +96,6 @@ To avoid incurring unnecessary Azure costs, you should delete the resources you
## Next steps
-- [Create a project in Azure AI Studio](../how-to/create-projects.md)
+- [Create a project in Azure AI Foundry portal](../how-to/create-projects.md)
- [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md)
- [Learn more about Azure AI Speech](../../ai-services/speech-service/overview.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, every "Azure AI Studio" reference in hear-speak-playground.md is revised to "Azure AI Foundry," with the title, description, and steps updated to the new service name.
The main changes are:
- The title changed from "Hear and speak with chat models in the Azure AI Studio chat playground" to "Hear and speak with chat models in the Azure AI Foundry portal chat playground".
- The description was updated to match the new portal name.
- A new `ignite-2024` tag was added to `ms.custom`, and project-related wording was rewritten for Azure AI Foundry.
- In each step, "AI Studio project" was corrected to "AI Foundry project," so the sign-in and project-creation instructions all use the new name.
This update keeps the document aligned with the current service name, helping users find and act on accurate information in the new portal.
articles/ai-studio/quickstarts/multimodal-vision.md
Diff
@@ -1,7 +1,7 @@
---
-title: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio
+title: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal
titleSuffix: Azure AI Foundry
-description: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio.
+description: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal.
manager: nitinme
ms.service: azure-ai-studio
ms.custom:
@@ -13,11 +13,11 @@ ms.author: pafarley
author: PatrickFarley
---
-# Quickstart: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio
+# Quickstart: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Use this article to get started using [Azure AI Studio](https://ai.azure.com) to deploy and test the GPT-4 Turbo with Vision model.
+Use this article to get started using [Azure AI Foundry](https://ai.azure.com) to deploy and test the GPT-4 Turbo with Vision model.
GPT-4 Turbo with Vision and [Azure AI Vision](../../ai-services/computer-vision/overview.md) offer advanced functionality including:
@@ -31,7 +31,7 @@ Extra usage fees might apply when using GPT-4 Turbo with Vision and Azure AI Vis
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- Once you have your Azure subscription, <a href="/azure/ai-services/openai/how-to/create-resource?pivots=web-portal" title="Create an Azure OpenAI resource." target="_blank">create an Azure OpenAI resource </a>.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource added as a connection.
+- An [AI Foundry hub](../how-to/create-azure-ai-resource.md) with your Azure OpenAI resource added as a connection.
## Prepare your media
@@ -43,7 +43,7 @@ For video prompts, you need a video that's under three minutes in length.
## Deploy a GPT-4 Turbo with Vision model
-1. Sign in to [Azure AI Studio](https://ai.azure.com) and select the hub you'd like to work in.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select the hub you'd like to work in.
1. On the left nav menu, select **AI Services**. Select the **Try out GPT-4 Turbo** panel.
1. On the gpt-4 page, select **Deploy**. In the window that appears, select your Azure OpenAI resource. Select `vision-preview` as the model version.
1. Select **Deploy**.
@@ -129,7 +129,7 @@ In this chat session, you are instructing the assistant to aid in understanding
Below are the known limitations of the video prompt enhancements.
- **Low resolution:** The frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
-- **Video file limits:** Both MP4 and MOV file types are supported. In the Azure AI Studio Playground, videos must be less than 3 minutes long. When you use the API there is no such limitation.
+- **Video file limits:** Both MP4 and MOV file types are supported. In the Azure AI Foundry portal Playground, videos must be less than 3 minutes long. When you use the API there is no such limitation.
- **Prompt limits:** Video prompts only contain one video and no images. In Playground, you can clear the session to try with another video or images.
- **Limited frame selection:** Currently the system selects 20 frames from the entire video, which might not capture all critical moments or details. Frame selection can either be evenly spread through the video or focused by a specific Video Retrieval query, depending on the prompt.
- **Language support:** Currently, the system primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics from songs.
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, every "Azure AI Studio" reference in multimodal-vision.md is replaced with "Azure AI Foundry," so the new service name is used consistently throughout.
The main changes are:
- The title changed from "Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio" to "Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Foundry portal".
- The description was updated from Azure AI Studio to Azure AI Foundry portal.
- In the steps, "AI Studio hub" became "AI Foundry hub," and signing in to Azure AI Studio became signing in to Azure AI Foundry.
- The media-preparation and model-deployment steps were likewise rewritten for the new name.
This update aligns the document with the current service name so that users can work through the steps quickly with accurate information.
articles/ai-studio/reference/reference-model-inference-api.md
Diff
@@ -14,11 +14,11 @@ ms.custom:
- build-2024
---
-# Azure AI Model Inference API | Azure AI Studio
+# Azure AI Model Inference API | Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-The Azure AI Model Inference is an API that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way. Developers can talk with different models deployed in Azure AI Studio without changing the underlying code they are using.
+The Azure AI Model Inference is an API that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way. Developers can talk with different models deployed in Azure AI Foundry portal without changing the underlying code they are using.
## Benefits
@@ -594,25 +594,25 @@ The Azure AI Model Inference API is currently supported in certain models deploy
# [Python](#tab/python)
-The client library `azure-ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+The client library `azure-ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Foundry and Azure Machine Learning studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get yourself started.
# [JavaScript](#tab/javascript)
-The client library `@azure-rest/ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+The client library `@azure-rest/ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Foundry and Azure Machine Learning studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
# [C#](#tab/csharp)
-The client library `Azure.Ai.Inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+The client library `Azure.Ai.Inference` does inference, including chat completions, for AI models deployed by Azure AI Foundry and Azure Machine Learning studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started.
# [REST](#tab/rest)
-Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Foundry and Azure Machine Learning studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
* [Get info](reference-model-inference-info.md): Returns the information about the model deployed under the endpoint.
* [Text embeddings](reference-model-inference-embeddings.md): Creates an embedding vector representing the input text.
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, every mention of "Azure AI Studio" in reference-model-inference-api.md is corrected to "Azure AI Foundry," ensuring a consistent service name throughout the document.
The main changes are:
- The title changed from "Azure AI Model Inference API | Azure AI Studio" to "Azure AI Model Inference API | Azure AI Foundry".
- In the description, "Azure AI Studio" was updated to "Azure AI Foundry portal".
- In the client-library sections, every mention was changed from "Azure AI Studio" to "Azure AI Foundry" so the API usage notes match the new name.
This update keeps the document consistent with the current service name so that developers get accurate, uniform information.
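The common surface described above can be illustrated with a plain REST sketch. The endpoint URL, key, API version, and auth header scheme below are placeholder assumptions (they vary by deployment type); the real client libraries such as `azure-ai-inference` wrap this same route.

```python
import json
import urllib.request


def build_chat_request(endpoint: str, api_key: str, messages: list) -> urllib.request.Request:
    """Build a chat-completions request against the common inference route.

    The payload shape is the same for any model behind the endpoint,
    which is the point of the common API surface. The api-version and
    auth header used here are illustrative assumptions.
    """
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{endpoint}/chat/completions?api-version=2024-05-01-preview",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Hypothetical endpoint; the request is built here but not sent.
req = build_chat_request(
    "https://example-endpoint.models.ai.azure.com",
    "<your-key>",
    [{"role": "user", "content": "Hello"}],
)
```

Because only the endpoint changes between models, switching from one deployed model to another leaves this request-building code untouched.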
articles/ai-studio/reference/reference-model-inference-chat-completions.md
Diff
@@ -14,7 +14,7 @@ ms.custom:
- build-2024
---
-# Reference: Chat Completions | Azure AI Studio
+# Reference: Chat Completions | Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, the title of reference-model-inference-chat-completions.md is updated from "Azure AI Studio" to "Azure AI Foundry"; the document header now uses the new service name consistently.
This is a minor update that keeps the Azure AI Foundry content accurate and consistent across documents, making the service easier for developers to understand.
articles/ai-studio/reference/reference-model-inference-completions.md
Diff
@@ -14,7 +14,7 @@ ms.custom:
- build-2024
---
-# Reference: Completions | Azure AI Studio
+# Reference: Completions | Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, the title of reference-model-inference-completions.md is updated from "Azure AI Studio" to "Azure AI Foundry" to match the new service name.
Specifically, the header changed from "Reference: Completions | Azure AI Studio" to "Reference: Completions | Azure AI Foundry". This keeps the document consistent and gives developers access to current, relevant information.
articles/ai-studio/reference/reference-model-inference-embeddings.md
Diff
@@ -14,7 +14,7 @@ ms.custom:
- build-2024
---
-# Reference: Embeddings | Azure AI Studio
+# Reference: Embeddings | Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, the title of reference-model-inference-embeddings.md is updated from "Azure AI Studio" to "Azure AI Foundry" so the document reflects the current service name.
Specifically, the header changed from "Reference: Embeddings | Azure AI Studio" to "Reference: Embeddings | Azure AI Foundry", keeping the document consistent and letting developers work from accurate, relevant information.
articles/ai-studio/reference/reference-model-inference-images-embeddings.md
Diff
@@ -14,7 +14,7 @@ ms.custom:
- build-2024
---
-# Reference: Image Embeddings | Azure AI Studio
+# Reference: Image Embeddings | Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, the title of reference-model-inference-images-embeddings.md is updated from "Azure AI Studio" to "Azure AI Foundry" so the document reflects the current service name.
Specifically, the header changed from "Reference: Image Embeddings | Azure AI Studio" to "Reference: Image Embeddings | Azure AI Foundry". This keeps the document consistent and lets developers retrieve accurate, relevant information quickly.
articles/ai-studio/reference/reference-model-inference-info.md
Diff
@@ -14,7 +14,7 @@ ms.custom:
- build-2024
---
-# Reference: Info | Azure AI Studio
+# Reference: Info | Azure AI Foundry
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, the title of reference-model-inference-info.md is updated from "Azure AI Studio" to "Azure AI Foundry" so the document reflects the current service name.
Specifically, the header changed from "Reference: Info | Azure AI Studio" to "Reference: Info | Azure AI Foundry". This keeps the document consistent and lets developers and other readers work from accurate, relevant information.
articles/ai-studio/reference/region-support.md
Diff
@@ -1,7 +1,7 @@
---
-title: Azure AI Studio feature availability across clouds regions
+title: Azure AI Foundry feature availability across clouds regions
titleSuffix: Azure AI Foundry
-description: This article lists Azure AI Studio feature availability across clouds regions.
+description: This article lists Azure AI Foundry feature availability across clouds regions.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: conceptual
@@ -12,15 +12,15 @@ author: sdgilley
ms.custom: references_regions, build-2024
---
-# Azure AI Studio feature availability across clouds regions
+# Azure AI Foundry feature availability across clouds regions
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-Azure AI Studio brings together various Azure AI capabilities that previously were only available as standalone Azure services. While we strive to make all features available in all regions where Azure AI Studio is supported at the same time, feature availability may vary by region. In this article, you'll learn what Azure AI Studio features are available across cloud regions.
+Azure AI Foundry brings together various Azure AI capabilities that previously were only available as standalone Azure services. While we strive to make all features available in all regions where Azure AI Foundry is supported at the same time, feature availability may vary by region. In this article, you'll learn what Azure AI Foundry features are available across cloud regions.
## Azure Public regions
-Azure AI Studio is currently available in the following Azure regions. You can create [Azure AI Studio hubs](../how-to/create-azure-ai-resource.md) and Azure AI Studio projects in these regions.
+Azure AI Foundry is currently available in the following Azure regions. You can create [Azure AI Foundry hubs](../how-to/create-azure-ai-resource.md) and Azure AI Foundry projects in these regions.
- Australia East
- Brazil South
@@ -46,14 +46,14 @@ Azure AI Studio is currently available in the following Azure regions. You can c
### Azure Government regions
-Azure AI Studio is currently not available in Azure Government regions or air-gap regions.
+Azure AI Foundry is currently not available in Azure Government regions or air-gap regions.
## Azure OpenAI
For information on the availability of Azure OpenAI models, see [Azure OpenAI Model summary table and region availability](../../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
> [!NOTE]
-> Some models might not be available within the AI Studio model catalog.
+> Some models might not be available within the AI Foundry model catalog.
For more information, see [Azure OpenAI quotas and limits](/azure/ai-services/openai/quotas-limits).
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, region-support.md is updated to reflect the rename from "Azure AI Studio" to "Azure AI Foundry". The main change is that the service name is replaced throughout the document.
Specifically, the title, description, and body text are all updated, including the feature descriptions and the regional-availability information. Readers can now check which Azure AI Foundry features are available in which regions based on current information, and the document as a whole stays consistent and up to date. It also becomes easier for users to look up feature availability for a specific region.
articles/ai-studio/responsible-use-of-ai-overview.md
Diff
@@ -1,7 +1,7 @@
---
-title: Responsible AI for Azure AI Studio
+title: Responsible AI for Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use AI responsibly with Azure AI Studio.
+description: Learn how to use AI responsibly with Azure AI Foundry.
manager: nitinme
keywords: Azure AI services, cognitive
ms.service: azure-ai-studio
@@ -12,7 +12,7 @@ author: PatrickFarley
ms.custom: ignite-2024
---
-# Responsible AI for Azure AI Studio
+# Responsible AI for Azure AI Foundry
This article aims to provide an overview of the resources available to help you use AI responsibly. Our recommended essential development steps are grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), which sets policy requirements that our own engineering teams follow. Much of the content of the Standard follows a pattern, asking teams to Identify, Measure, and Mitigate potential content risks, and plan for how to Operate the AI system as well.
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
In this change, the title and description of responsible-use-of-ai-overview.md are updated from "Azure AI Studio" to "Azure AI Foundry", so the overview of resources for using AI responsibly now uses the new service name.
The main changes are that the title moved from "Responsible AI for Azure AI Studio" to "Responsible AI for Azure AI Foundry", and the description from "Learn how to use AI responsibly with Azure AI Studio" to "Learn how to use AI responsibly with Azure AI Foundry". The article's purpose and its guidance on the recommended development steps grounded in the Microsoft Responsible AI Standard are unchanged; the document simply reflects the current service name so readers get accurate information.
articles/ai-studio/toc.yml
Diff
@@ -30,7 +30,7 @@ items:
displayName: code
- name: Get started using Azure OpenAI Assistants
href: ../ai-services/openai/assistants-quickstart.md?context=/azure/ai-studio/context/context
- - name: Use Azure AI Studio with a screen reader
+ - name: Use Azure AI Foundry with a screen reader
href: tutorials/screen-reader.md
- name: Tutorials
items:
@@ -53,14 +53,14 @@ items:
items:
- name: What are AI services?
href: ../ai-services/what-are-ai-services.md?context=/azure/ai-studio/context/context
- - name: Use Azure AI services in AI Studio
+ - name: Use Azure AI services in AI Foundry portal
href: ai-services/how-to/connect-ai-services.md
- name: Azure OpenAI
items:
- name: What is Azure OpenAI?
href: ../ai-services/openai/overview.md?context=/azure/ai-studio/context/context
displayName: cognitive
- - name: Use Azure OpenAI Service in AI Studio
+ - name: Use Azure OpenAI Service in AI Foundry portal
href: ai-services/how-to/connect-azure-openai.md
- name: Deploy Azure OpenAI models
href: how-to/deploy-models-openai.md
@@ -83,7 +83,7 @@ items:
href: ../ai-services/speech-service/pronunciation-assessment-tool.md?context=/azure/ai-studio/context/context
- name: Hear and speak with chat in the playground
href: quickstarts/hear-speak-playground.md
- - name: Fine-tune in AI Studio for custom speech
+ - name: Fine-tune in AI Foundry portal for custom speech
href: ../ai-services/speech-service/custom-speech-ai-studio.md?context=/azure/ai-studio/context/context
- name: Explore and select AI models
items:
@@ -242,9 +242,9 @@ items:
displayName: code,sdk
- name: Develop generative AI apps
items:
- - name: Develop generative AI apps in AI Studio
+ - name: Develop generative AI apps in AI Foundry portal
items:
- - name: Build apps with prompt flow in AI Studio
+ - name: Build apps with prompt flow in AI Foundry portal
items:
- name: Prompt flow overview
href: how-to/prompt-flow.md
@@ -258,7 +258,7 @@ items:
href: how-to/flow-tune-prompts-using-variants.md
- name: Process images in a flow
href: how-to/flow-process-image.md
- - name: Use prompt flow tools in AI Studio
+ - name: Use prompt flow tools in AI Foundry portal
items:
- name: Prompt flow tools overview
href: how-to/prompt-flow-tools/prompt-flow-tools-overview.md
@@ -284,7 +284,7 @@ items:
href: how-to/prompt-flow-troubleshoot.md
- name: Develop generative AI apps using code
items:
- - name: Work with AI Studio projects in VS Code
+ - name: Work with AI Foundry projects in VS Code
href: how-to/develop/vscode.md
- name: Start with an AI template
href: how-to/develop/ai-template-get-started.md
@@ -311,19 +311,19 @@ items:
href: concepts/evaluation-approach-gen-ai.md
- name: Evaluation and monitoring metrics for generative AI
href: concepts/evaluation-metrics-built-in.md
- - name: Manually evaluate prompts in Azure AI Studio playground
+ - name: Manually evaluate prompts in Azure AI Foundry portal playground
href: how-to/evaluate-prompts-playground.md
- name: Generate synthetic and simulated data for evaluation
href: how-to/develop/simulator-interaction-data.md
- name: Evaluate with the Azure AI Evaluation SDK
href: how-to/develop/evaluate-sdk.md
displayName: code,accuracy,metrics
- - name: Run evaluations from Azure AI Studio UI
+ - name: Run evaluations from Azure AI Foundry UI
href: how-to/evaluate-generative-ai-app.md
- - name: View evaluation results in Azure AI Studio
+ - name: View evaluation results in Azure AI Foundry portal
href: how-to/evaluate-results.md
displayName: accuracy,metrics
- - name: Evaluate flows in AI Studio
+ - name: Evaluate flows in AI Foundry portal
items:
- name: Submit batch run and evaluate a flow
href: how-to/flow-bulk-test-evaluation.md
@@ -356,7 +356,7 @@ items:
items:
- name: Identity & access management
items:
- - name: Role-based access control in Azure AI Studio
+ - name: Role-based access control in Azure AI Foundry portal
href: concepts/rbac-ai-studio.md
- name: Network security
items:
@@ -397,7 +397,7 @@ items:
href: responsible-use-of-ai-overview.md
- name: What is Azure AI Content Safety?
href: ai-services/content-safety-overview.md
- - name: Use Azure AI Content Safety in AI Studio
+ - name: Use Azure AI Content Safety in AI Foundry portal
href: ai-services/how-to/content-safety.md
- name: Content filtering
href: concepts/content-filtering.md
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change replaces "Azure AI Studio" with "Azure AI Foundry" throughout toc.yml, so links to documents and resources match the new name and stay consistent.
Specifically, multiple navigation entries are updated: for example, "Use Azure AI Studio with a screen reader" becomes "Use Azure AI Foundry with a screen reader", and the other entries are revised from "AI Studio" to "AI Foundry" in the same way. This lets users find accurate information about the renamed service and keeps the table of contents consistent.
articles/ai-studio/tutorials/copilot-sdk-build-rag.md
Diff
@@ -9,7 +9,7 @@ ms.date: 11/11/2024
ms.reviewer: lebaro
ms.author: sgilley
author: sdgilley
-ms.custom: [copilot-learning-hub]
+ms.custom: copilot-learning-hub, ignite-2024
#customer intent: As a developer, I want to learn how to use the prompt flow SDK so that I can build a RAG-based chat app.
---
@@ -77,7 +77,7 @@ The search index is used to store vectorized data from the embeddings model. The
python create_search_index.py
```
-1. Once the script is run, you can view your newly created index in the **Data + indexes** page of your Azure AI Studio project. For more information, see [How to build and consume vector indexes in Azure AI Studio](../how-to/index-add.md).
+1. Once the script is run, you can view your newly created index in the **Data + indexes** page of your Azure AI Foundry project. For more information, see [How to build and consume vector indexes in Azure AI Foundry portal](../how-to/index-add.md).
1. If you run the script again with the same index name, it creates a new version of the same index.
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change replaces "Azure AI Studio" with "Azure AI Foundry" in copilot-sdk-build-rag.md, so readers see content that uses the accurate, current service name.
Specifically, the `ms.custom` metadata field is updated to add the `ignite-2024` tag, and the instructions after running the index-creation script now point to the **Data + indexes** page of the "AI Foundry project" rather than the "AI Studio project". With the instructions updated to the new service name, users can more easily follow the steps in the new environment. Overall, these changes keep the information consistent and improve the user experience.
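Since the tutorial's `create_search_index.py` is only referenced here, a hedged sketch of what such a search-index definition can look like may help. The index and field names below are illustrative assumptions in the Azure AI Search REST shape, not the tutorial's actual schema; the 1536 figure is the output dimension of the `text-embedding-ada-002` model the tutorial deploys.

```python
import json

# Illustrative index definition in the Azure AI Search REST shape.
# Field names are hypothetical; the tutorial's create_search_index.py
# defines its own schema for the vectorized document chunks.
index_definition = {
    "name": "rag-tutorial-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
        {
            "name": "contentVector",
            "type": "Collection(Edm.Single)",
            "searchable": True,
            "dimensions": 1536,  # text-embedding-ada-002 output size
            "vectorSearchProfile": "default",
        },
    ],
}

# Rendering the definition as JSON is what a create-index call would send.
print(json.dumps(index_definition, indent=2))
```

Re-running a creation script with the same `name`, as the tutorial notes, produces a new version of the same index rather than a second index.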
articles/ai-studio/tutorials/copilot-sdk-create-resources.md
Diff
@@ -4,6 +4,8 @@ titleSuffix: Azure AI Foundry
description: Build a custom chat app using the Azure AI Foundry SDK. Part 1 of a 3-part tutorial series, which shows how to create the resources you'll need for parts 2 and 3.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: tutorial
ms.date: 11/11/2024
ms.reviewer: lebaro
@@ -39,9 +41,9 @@ This tutorial is part one of a three-part tutorial.
## Create a project
-To create a project in [Azure AI Studio](https://ai.azure.com), follow these steps:
+To create a project in [Azure AI Foundry](https://ai.azure.com), follow these steps:
-1. Go to the **Home** page of [Azure AI Studio](https://ai.azure.com).
+1. Go to the **Home** page of [Azure AI Foundry](https://ai.azure.com).
1. Select **+ Create project**.
1. Enter a name for the project. Keep all the other settings as default.
1. Projects are created in hubs. For this tutorial, create a new hub. If you see **Create a new hub** select it and specify a name. Then select **Next**. (If you don't see **Create new hub**, it's because a new one is being created for you.)
@@ -52,9 +54,9 @@ To create a project in [Azure AI Studio](https://ai.azure.com), follow these ste
## Deploy models
-You need two models to build a RAG-based chat app: an Azure OpenAI chat model (`gpt-4o-mini`) and an Azure OpenAI embedding model (`text-embedding-ada-002`). Deploy these models in your Azure AI Studio project, using this set of steps for each model.
+You need two models to build a RAG-based chat app: an Azure OpenAI chat model (`gpt-4o-mini`) and an Azure OpenAI embedding model (`text-embedding-ada-002`). Deploy these models in your Azure AI Foundry project, using this set of steps for each model.
-These steps deploy a model to a real-time endpoint from the AI Studio [model catalog](../how-to/model-catalog-overview.md):
+These steps deploy a model to a real-time endpoint from the AI Foundry portal [model catalog](../how-to/model-catalog-overview.md):
1. On the left navigation pane, select **Model catalog**.
1. Select the **gpt-4o-mini** model from the list of models. You can use the search bar to find it.
@@ -83,7 +85,7 @@ If you already have an Azure AI Search service, you can skip to the [next sectio
Otherwise, you can create an Azure AI Search service using the Azure portal.
> [!TIP]
-> This step is the only time you use the Azure portal in this tutorial series. The rest of your work is done in Azure AI Studio or in your local development environment.
+> This step is the only time you use the Azure portal in this tutorial series. The rest of your work is done in Azure AI Foundry portal or in your local development environment.
1. [Create an Azure AI Search service](https://portal.azure.com/#create/Microsoft.Search) in the Azure portal.
1. Select your resource group and instance details. You can see details about pricing and pricing tiers on this page.
@@ -95,9 +97,9 @@ Otherwise, you can create an Azure AI Search service using the Azure portal.
If you already have an Azure AI Search connection in your project, you can skip to [Install the Azure CLI and sign in](#installs).
-In the Azure AI Studio, check for an Azure AI Search connected resource.
+In the Azure AI Foundry portal, check for an Azure AI Search connected resource.
-1. In [AI Studio](https://ai.azure.com), go to your project and select **Management center** from the left pane.
+1. In [AI Foundry](https://ai.azure.com), go to your project and select **Management center** from the left pane.
1. In the **Connected resources** section, look to see if you have a connection of type **Azure AI Search**.
1. If you have an Azure AI Search connection, you can skip ahead to the next section.
1. Otherwise, select **New connection** and then **Azure AI Search**.
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change replaces the name "Azure AI Studio" with "Azure AI Foundry" in the copilot-sdk-create-resources.md file. It keeps the documentation consistent and makes the procedures easier to follow under the current service name.
The specific changes cover the project-creation steps and the model-deployment instructions. Each item in those procedures is updated to the new name; for example, the instructions for creating a project and for checking for an Azure AI Search connected resource all now say "Azure AI Foundry".
In addition, "ignite-2024" is added to the metadata section, so the document's meta information also reflects upcoming events and related content. Overall, the change aims to make the information clearer for users.
articles/ai-studio/tutorials/copilot-sdk-evaluate.md
Diff
@@ -4,6 +4,8 @@ titleSuffix: Azure AI Foundry
description: Evaluate and deploy a custom chat app with the prompt flow SDK. This tutorial is part 3 of a 3-part tutorial series.
manager: scottpolly
ms.service: azure-ai-studio
+ms.custom:
+ - ignite-2024
ms.topic: tutorial
ms.date: 11/06/2024
ms.reviewer: lebaro
@@ -68,7 +70,7 @@ The script also logs the evaluation results to the cloud project so that you can
:::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/evaluate.py" id="evaluate_wrapper":::
-1. Finally, add code to run the evaluation, view the results locally, and gives you a link to the evaluation results in AI Studio:
+1. Finally, add code to run the evaluation, view the results locally, and gives you a link to the evaluation results in AI Foundry portal:
:::code language="python" source="~/azureai-samples-nov2024/scenarios/rag/custom-rag-app/evaluate.py" id="run_evaluation":::
@@ -78,7 +80,7 @@ Since the evaluation script calls the model many times, you might want to increa
In Part 1 of this tutorial series, you created an **.env** file that specifies the name of the evaluation model, `gpt-4o-mini`. Try to increase the tokens per minute limit for this model, if you have available quota. If you don't have enough quota to increase the value, don't worry. The script is designed to handle limit errors.
-1. In your project in Azure AI Studio, select **Models + endpoints**.
+1. In your project in Azure AI Foundry portal, select **Models + endpoints**.
1. Select **gpt-4o-mini**.
1. Select **Edit**.
1. If you have quota to increase the **Tokens per Minute Rate Limit**, try increasing it to 30.
@@ -135,22 +137,22 @@ If you weren't able to increase the tokens per minute limit for your model, you
12 Sorry, I only can answer queries related to ou... ... 12
[13 rows x 8 columns]
-('View evaluation results in AI Studio: '
+('View evaluation results in AI Foundry portal: '
'https://xxxxxxxxxxxxxxxxxxxxxxx')
```
-### View evaluation results in AI Studio
+### View evaluation results in AI Foundry portal
-Once the evaluation run completes, follow the link to view the evaluation results on the **Evaluation** page in the Azure AI Studio.
+Once the evaluation run completes, follow the link to view the evaluation results on the **Evaluation** page in the Azure AI Foundry portal.
-:::image type="content" source="../media/tutorials/develop-rag-copilot-sdk/eval-studio-overview.png" alt-text="Screenshot shows evaluation overview in Azure AI Studio.":::
+:::image type="content" source="../media/tutorials/develop-rag-copilot-sdk/eval-studio-overview.png" alt-text="Screenshot shows evaluation overview in Azure AI Foundry portal.":::
You can also look at the individual rows and see metric scores per row, and view the full context/documents that were retrieved. These metrics can be helpful in interpreting and debugging evaluation results.
-:::image type="content" source="../media/tutorials/develop-rag-copilot-sdk/eval-studio-rows.png" alt-text="Screenshot shows rows of evaluation results in Azure AI Studio.":::
+:::image type="content" source="../media/tutorials/develop-rag-copilot-sdk/eval-studio-rows.png" alt-text="Screenshot shows rows of evaluation results in Azure AI Foundry portal.":::
-For more information about evaluation results in AI Studio, see [How to view evaluation results in AI Studio](../how-to/evaluate-results.md).
+For more information about evaluation results in AI Foundry portal, see [How to view evaluation results in AI Foundry portal](../how-to/evaluate-results.md).
## Iterate and improve
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change standardizes the name "Azure AI Studio" to "Azure AI Foundry" in the copilot-sdk-evaluate.md file. It improves consistency across the documentation, and by accurately reflecting the current service name it helps users find information more easily.
Specifically, the links to evaluation results and the related procedures now all say "Azure AI Foundry". Examples include the step that runs the evaluation and displays results locally, and the instructions for selecting models and endpoints. The link to the evaluation results, and the page where they are displayed, are likewise now associated with the "AI Foundry" portal.
The metadata section also gains a new "ignite-2024" entry. These changes make the information clearer and easier to adapt to in the new environment; overall, the intent is to provide a better user experience by reflecting the latest service.
articles/ai-studio/tutorials/deploy-chat-web-app.md
Diff
@@ -1,25 +1,26 @@
---
-title: "Tutorial: Deploy an enterprise chat web app in the Azure AI Studio playground"
+title: "Tutorial: Deploy an enterprise chat web app in the Azure AI Foundry portal playground"
titleSuffix: Azure AI Foundry
-description: Use this article to deploy an enterprise chat web app in the Azure AI Studio playground.
+description: Use this article to deploy an enterprise chat web app in the Azure AI Foundry portal playground.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
- ignite-2023
- build-2024
+ - ignite-2024
ms.topic: tutorial
ms.date: 11/14/2024
ms.reviewer: tgokal
ms.author: sgilley
author: sdgilley
-# customer intent: As a developer, I want to deploy an enterprise chat web app in the Azure AI Studio playground so that I can use my own data with a large language model.
+# customer intent: As a developer, I want to deploy an enterprise chat web app in the Azure AI Foundry portal playground so that I can use my own data with a large language model.
---
# Tutorial: Deploy an enterprise chat web app
[!INCLUDE [feature-preview](../includes/feature-preview.md)]
-In this article, you deploy an enterprise chat web app that uses your own data with a large language model in AI Studio.
+In this article, you deploy an enterprise chat web app that uses your own data with a large language model in AI Foundry portal.
Your data source is used to help ground the model with specific data. Grounding means that the model uses your data to help it understand the context of your question. You're not changing the deployed model itself. Your data is stored separately and securely in your original data source
@@ -33,7 +34,7 @@ The steps in this tutorial are:
## Prerequisites
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
-- A [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create this resource if you haven't already.
+- A [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md) to create this resource if you haven't already.
- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product data.
@@ -43,25 +44,25 @@ The steps in this tutorial are:
## Add your data and try the chat model again
-In the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) (that's a prerequisite for this tutorial), observe how your model responds without your data. Now you add your data to the model to help it answer questions about your products.
+In the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md) (that's a prerequisite for this tutorial), observe how your model responds without your data. Now you add your data to the model to help it answer questions about your products.
[!INCLUDE [Chat with your data](../includes/chat-with-data.md)]
## Deploy your web app
-Once you're satisfied with the experience in Azure AI Studio, you can deploy the model as a standalone web application.
+Once you're satisfied with the experience in Azure AI Foundry portal, you can deploy the model as a standalone web application.
### Find your resource group in the Azure portal
-In this tutorial, your web app is deployed to the same resource group as your [AI Studio hub](../how-to/create-secure-ai-hub.md). Later you configure authentication for the web app in the Azure portal.
+In this tutorial, your web app is deployed to the same resource group as your [AI Foundry hub](../how-to/create-secure-ai-hub.md). Later you configure authentication for the web app in the Azure portal.
-Follow these steps to navigate from Azure AI Studio to your resource group in the Azure portal:
+Follow these steps to navigate from Azure AI Foundry to your resource group in the Azure portal:
-1. Go to your project in [Azure AI Studio](https://ai.azure.com). Then select **Management center** from the left pane.
+1. Go to your project in [Azure AI Foundry](https://ai.azure.com). Then select **Management center** from the left pane.
1. Under the **Project** heading, select **Overview**.
1. Select the resource group name to open the resource group in the Azure portal. In this example, the resource group is named `rg-contoso`.
- :::image type="content" source="../media/tutorials/chat/resource-group-manage-page.png" alt-text="Screenshot of the resource group in the Azure AI Studio." lightbox="../media/tutorials/chat/resource-group-manage-page.png":::
+ :::image type="content" source="../media/tutorials/chat/resource-group-manage-page.png" alt-text="Screenshot of the resource group in the Azure AI Foundry portal." lightbox="../media/tutorials/chat/resource-group-manage-page.png":::
1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the hub. Keep this page open in a browser tab. You return to it later.
@@ -77,7 +78,7 @@ To deploy the web app:
1. Complete the steps in the previous section to [add your data](#add-your-data-and-try-the-chat-model-again) to the playground.
> [!NOTE]
- > You can deploy a web app with or without your own data, but at least you need a deployed model as described in the [AI Studio playground quickstart](../quickstarts/get-started-playground.md).
+ > You can deploy a web app with or without your own data, but at least you need a deployed model as described in the [AI Foundry playground quickstart](../quickstarts/get-started-playground.md).
1. Select **Deploy > ...as a web app**.
@@ -123,10 +124,10 @@ By default, the web app will only be accessible to you. In this tutorial, you ad
You're almost there! Now you can test the web app.
1. Wait 10 minutes or so for the authentication settings to take effect.
-1. Return to the browser tab containing the chat playground page in Azure AI Studio.
+1. Return to the browser tab containing the chat playground page in Azure AI Foundry portal.
1. Select **Launch** to launch the deployed web app. If prompted, accept the permissions request.
- *If the authentication settings haven't yet taken effect, close the browser tab for your web app and return to the chat playground in Azure AI Studio. Then wait a little longer and try again.*
+ *If the authentication settings haven't yet taken effect, close the browser tab for your web app and return to the chat playground in Azure AI Foundry portal. Then wait a little longer and try again.*
1. In your web app, you can ask the same question as before ("How much are the TrailWalker hiking shoes"), and this time it uses information from your data to construct the response. You can expand the **reference** button to see the data that was used.
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change standardizes the name "Azure AI Studio" to "Azure AI Foundry" in the deploy-chat-web-app.md file, so that the entire document reflects the current service name and avoids confusing users.
Specifically, the tutorial title, description, steps, and notes all now say "Azure AI Foundry". For example, the instructions for deploying the web app and for working with models and endpoints are updated to the new name, and "ignite-2024" is added to the metadata section so the content reflects the latest information.
With this update, users can work more smoothly on the new platform and better understand the resource references and procedures. Overall, the change is an important step in adapting the documentation to the current service environment.
articles/ai-studio/tutorials/screen-reader.md
Diff
@@ -1,7 +1,7 @@
---
-title: Get started using Azure AI Studio with a screen reader
+title: Get started using Azure AI Foundry with a screen reader
titleSuffix: Azure AI Foundry
-description: This quickstart guides you in how to get oriented and navigate Azure AI Studio with a screen reader.
+description: This quickstart guides you in how to get oriented and navigate Azure AI Foundry with a screen reader.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
@@ -14,15 +14,15 @@ ms.author: sgilley
author: sdgilley
---
-# QuickStart: Get started using AI Studio with a screen reader
+# QuickStart: Get started using AI Foundry with a screen reader
-This article is for people who use screen readers such as [Microsoft's Narrator](https://support.microsoft.com/windows/complete-guide-to-narrator-e4397a0d-ef4f-b386-d8ae-c172f109bdb1#WindowsVersion=Windows_11), JAWS, NVDA or Apple's Voiceover. In this quickstart, you'll be introduced to the basic structure of Azure AI Studio and discover how to navigate around efficiently.
+This article is for people who use screen readers such as [Microsoft's Narrator](https://support.microsoft.com/windows/complete-guide-to-narrator-e4397a0d-ef4f-b386-d8ae-c172f109bdb1#WindowsVersion=Windows_11), JAWS, NVDA or Apple's Voiceover. In this quickstart, you'll be introduced to the basic structure of Azure AI Foundry and discover how to navigate around efficiently.
-## Getting oriented in Azure AI Studio
+## Getting oriented in Azure AI Foundry portal
-Most Azure AI Studio pages are composed of the following landmark structure:
+Most Azure AI Foundry pages are composed of the following landmark structure:
-- Banner (contains Azure AI Studio app title, settings, and profile information)
+- Banner (contains Azure AI Foundry app title, settings, and profile information)
- Might sometimes contain a breadcrumb navigation element
- Navigation
- The contents of the navigation are different depending on whether you have selected a hub or project in the studio
@@ -42,12 +42,12 @@ Once you have created or selected a project, you can access more capabilities su
Once you have created or selected a project, you can also use the **Recent projects picker** button within the navigation to change project at any time.
-For more information about the navigation, see [What is Azure AI Studio](../what-is-ai-studio.md).
+For more information about the navigation, see [What is Azure AI Foundry](../what-is-ai-studio.md).
## Projects
-To work within the Azure AI Studio, you must first [create a project](../how-to/create-projects.md):
-1. In [Azure AI Studio](https://ai.azure.com), select **Home** from the navigation.
+To work within the Azure AI Foundry portal, you must first [create a project](../how-to/create-projects.md):
+1. In [Azure AI Foundry](https://ai.azure.com), select **Home** from the navigation.
1. Press the **Tab** key until you hear *New project* and select this button.
1. Enter the information requested in the **Create a project** dialog.
@@ -74,7 +74,7 @@ Prompt flow is a tool to create executable flows, linking LLMs, prompts, and Pyt
Once you have created or selected a project, go to the navigation landmark. Press the down arrow until you hear *Prompt flow* and select this link.
-The prompt flow UI in Azure AI Studio is composed of the following main sections: the command toolbar, flow (includes list of the flow nodes), files, and graph view. The flow, files, and graph sections each have their own H2 headings that can be used for navigation.
+The prompt flow UI in Azure AI Foundry portal is composed of the following main sections: the command toolbar, flow (includes list of the flow nodes), files, and graph view. The flow, files, and graph sections each have their own H2 headings that can be used for navigation.
### Flow
@@ -124,5 +124,5 @@ If you're a government, commercial, or enterprise customer, contact the enterpri
## Related content
-* Learn how you can build generative AI applications in the [Azure AI Studio](../what-is-ai-studio.md).
+* Learn how you can build generative AI applications in the [Azure AI Foundry](../what-is-ai-studio.md).
* Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml).
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change updates the name "Azure AI Studio" to "Azure AI Foundry" in the screen-reader.md file, so that the whole document reflects the current platform name and gives users accurate information.
Specifically, the tutorial title, description, and steps all now use "Azure AI Foundry". For example, the basic structure of Azure AI Foundry and how to navigate it are explained for users of screen readers, and the instructions for creating projects and navigating are revised to match the new name.
The update helps users understand how to work on the new platform and keeps the content consistent with earlier information. Overall, it is an important step toward user-friendly documentation that reflects the latest service.
articles/ai-studio/what-is-ai-studio.md
Diff
@@ -10,7 +10,7 @@ ms.date: 10/31/2024
ms.reviewer: sgilley
ms.author: sgilley
author: sdgilley
-ms.custom: ignite-2023, build-2024
+ms.custom: ignite-2023, build-2024, ignite-2024
#customer intent: As a developer, I want to understand what Azure AI Foundry is so that I can use it to build AI applications.
---
@@ -88,6 +88,6 @@ But for full functionality there are some requirements:
## Related content
-- [Quickstart: Use the chat playground in Azure AI Studio](quickstarts/get-started-playground.md)
+- [Quickstart: Use the chat playground in Azure AI Foundry portal](quickstarts/get-started-playground.md)
- [Build a custom chat app in Python using the Azure AI SDK](quickstarts/get-started-code.md)
- [Create a project](./how-to/create-projects.md)
Summary
{
"modification_type": "minor update",
"modification_title": "Renamed AI Studio to AI Foundry"
}
Explanation
This change updates the name "Azure AI Studio" to "Azure AI Foundry" in the what-is-ai-studio.md file. Its main purpose is to present the current platform name clearly to users.
Specifically, "ignite-2024" is added to the metadata section so that upcoming event information is reflected, and the related-content links are updated. For example, the link text "Quickstart: Use the chat playground in Azure AI Studio" now reads "Azure AI Foundry portal", matching the current platform.
Overall this is a small adjustment, but an important one: it helps users recognize the correct service and follow the right guidance. With the update, users get more accurate information to support building AI applications.