Exam Questions Flashcards

(309 cards)

1
Q

You plan to use a Language Understanding application named app1 that is deployed to a container. App1 was developed by using a Language Understanding authoring resource named lu1.

App1 has the following versions:

| Version | Trained date | Published date |
|---------|--------------|----------------|
| V1.2    | None         | None           |
| V1.1    | 2020-10-01   | None           |
| V1.0    | 2020-09-01   | 2020-09-15     |

You need to create a container that uses the latest deployable version of app1.

Which three actions should you perform in sequence?
(Move the appropriate actions from the list of Actions to the Answer Area and arrange them in the correct order.)

Actions:

  • Run a container that has version set as an environment variable.
  • Export the model by using the Export as JSON option.
  • Select v1.1 of app1.
  • Run a container and mount the model file.
  • Select v1.0 of app1.
  • Export the model by using the Export for containers (GZIP) option.
  • Select v1.2 of app1.


A
  1. Select v1.1 of app1.
  2. Export the model by using the Export for containers (GZIP) option.
  3. Run a container and mount the model file.

Explanation:
v1.2 has never been trained, so it cannot be exported; v1.1 is the latest trained version and is therefore the latest deployable version. Containers require the model to be exported in the GZIP (container) format and mounted when the container runs.
2
Q

You have 100 chatbots, each with its own Language Understanding (LUIS) model.
You frequently need to add the same phrases to each model.

You need to programmatically update the LUIS models to include the new phrases.

How should you complete the code?
Drag the appropriate values to complete the code snippet.

Each value may be used once, more than once, or not at all.
Each correct selection is worth one point.

Values:

  • AddPhraseListAsync
  • Phraselist
  • PhraselistCreateObject
  • Phrases
  • SavePhraseListAsync
  • UploadPhraseListAsync

Code (Answer Area):

var phraselistId = await client.Features.[Value 1](
    appId, versionId, new [Value 2]
    {
        EnabledForAllModels = false,
        IsExchangeable = true,
        Name = "PL1",
        Phrases = "item1,item2,item3,item4,item5"
    });


A

Answer:

  1. AddPhraseListAsync
  2. PhraselistCreateObject

Explanation:

  • AddPhraseListAsync is the method used to add a phrase list feature.
  • PhraselistCreateObject represents the structure needed to define the new phrase list.
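For reference, a minimal sketch of the completed call, assuming the LUIS authoring SDK (Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring) and an already-authenticated LUISAuthoringClient; the helper method name is illustrative only:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring.Models;

// Adds the shared phrase list to one LUIS app version; loop over the 100 app IDs
// to update every chatbot's model programmatically.
static async Task<int?> AddSharedPhraseListAsync(
    LUISAuthoringClient client, Guid appId, string versionId)
{
    return await client.Features.AddPhraseListAsync(
        appId, versionId, new PhraselistCreateObject
        {
            EnabledForAllModels = false,
            IsExchangeable = true,
            Name = "PL1",
            Phrases = "item1,item2,item3,item4,item5"
        });
}
```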
3
Q

You need to build a chatbot that meets the following requirements:

  • Supports chit-chat, knowledge base, and multilingual models
  • Performs sentiment analysis on user messages
  • Selects the best language model automatically

What should you integrate into the chatbot?

A. QnA Maker, Language Understanding, and Dispatch
B. Translator, Speech, and Dispatch
C. Language Understanding, Text Analytics, and QnA Maker
D. Text Analytics, Translator, and Dispatch


A

Answer:
D. Text Analytics, Translator, and Dispatch

Explanation:

  • Text Analytics handles sentiment analysis.
  • Translator supports multilingual capabilities.
  • Dispatch helps automatically select and route to the correct language model or service (LUIS, QnA Maker, etc.).
4
Q

Question:

Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English.

You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort.

Which Azure service should you use?

A. Custom Vision
B. Personalizer
C. Form Recognizer
D. Computer Vision


A

Answer:
C. Form Recognizer

Explanation:
Form Recognizer is designed to extract key-value pairs, tables, and text from documents like receipts, minimizing development effort with prebuilt models.

5
Q

You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR).
The solution must meet the following requirements:

  • Use a single key and endpoint to access multiple services.
  • Consolidate billing for future services that you might use.
  • Support the use of Computer Vision in the future.

How should you complete the HTTP request to create the new resource?
(Select the appropriate options in the answer area.)

Answer Area:

[Dropdown: PATCH / POST / PUT] https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/CS1?api-version=2017-04-18

{
  "location": "West US",
  "kind": "[Dropdown: CognitiveServices / ComputerVision / TextAnalytics]",
  "sku": {
    "name": "S0"
  },
  "properties": {},
  "identity": {
    "type": "SystemAssigned"
  }
}
A

Answer:

  • HTTP Method: PUT
  • Kind: CognitiveServices

Explanation:

  • PUT is the correct HTTP verb to create or update a resource.
  • CognitiveServices allows access to multiple services (e.g., Text Analytics and Computer Vision) using a single endpoint and key, meeting all the stated requirements.

PUT is correct because:

  • You are creating or replacing a resource at a known URI.
  • In Azure Resource Manager (ARM) operations, PUT is the standard method for creating or updating a resource.
  • POST is used when the server determines the resource URI (e.g., for custom actions or when the resource ID is not known in advance).
  • PATCH is used for partial updates.

6
Q

Question:

You are developing a new sales system that will process the video and text from a public-facing website.

You plan to notify users that their data has been processed by the sales system.

Which responsible AI principle does this help meet?

A. Transparency
B. Fairness
C. Inclusiveness
D. Reliability and safety

A

Answer:
A. Transparency

Explanation:
Informing users that their data is being processed supports the principle of transparency, helping build trust and understanding of AI system behavior and data use.

7
Q

Question:

You have an app that manages feedback.

You need to ensure that the app can detect negative comments using the Sentiment Analysis API in Azure AI Language.
The solution must also ensure that the managed feedback remains on your company’s internal network.

Which three actions should you perform in sequence?
(Move the appropriate actions to the answer area and arrange them in the correct order.)

Actions:

  • Identify the Language service endpoint URL and query the prediction endpoint.
  • Provision the Language service resource in Azure.
  • Run the container and query the prediction endpoint.
  • Deploy a Docker container to an on-premises server.
  • Deploy a Docker container to an Azure container instance.

Answer Area:
[Step 1]
[Step 2]
[Step 3]

A

Answer:

  1. Provision the Language service resource in Azure.
  2. Deploy a Docker container to an on-premises server.
  3. Run the container and query the prediction endpoint.

Explanation:

  • You first need to provision the service to obtain the container image and required keys.
  • To keep data on-premises, you should deploy the container locally.
  • Finally, you run the container and query the prediction endpoint to perform sentiment analysis within your network.
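A minimal sketch of the final step, assuming the Azure.AI.TextAnalytics client library and a sentiment container listening on http://localhost:5000 (host and port are assumptions); the key comes from the Azure resource provisioned in step 1, but the feedback text itself never leaves the internal network:

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

// Point the Language client at the on-premises container endpoint instead of
// the cloud endpoint; documents are analyzed locally while the container
// reports only usage metrics to Azure for billing.
var client = new TextAnalyticsClient(
    new Uri("http://localhost:5000"),                  // on-premises container endpoint (assumed)
    new AzureKeyCredential("<language-resource-key>"));

DocumentSentiment result = client.AnalyzeSentiment("The support I received was terrible.");
Console.WriteLine($"Sentiment: {result.Sentiment}");   // e.g., Negative
```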
8
Q

Question:

You plan to use containerized versions of the Anomaly Detector API on local devices for testing and in on-premises datacenters.

You need to ensure that the containerized deployments meet the following requirements:

  • Prevent billing and API information from being stored in the command-line histories of the devices that run the container.
  • Control access to the container images by using Azure role-based access control (RBAC).

Which four actions should you perform in sequence?
(Move the appropriate actions from the list to the answer area and arrange them in the correct order.)

Actions:

  • Create a custom Dockerfile.
  • Pull the Anomaly Detector container image.
  • Distribute a docker run script.
  • Push the image to an Azure container registry.
  • Build the image.
  • Push the image to Docker Hub.

Answer Area:

[Step 1]
[Step 2]
[Step 3]
[Step 4]

A
  1. Pull the Anomaly Detector container image.
  2. Create a custom Dockerfile.
  3. Build the image.
  4. Push the image to an Azure container registry.

🔍 Explanation:

  • Step 1: Pull the base image from Microsoft to get the official container content.
  • Step 2: Create a Dockerfile where you might modify environment variables, set configuration, or enhance the image securely (without hardcoding billing keys).
  • Step 3: Build the image based on your custom Dockerfile.
  • Step 4: Push the image to Azure Container Registry (ACR) so you can use Azure RBAC to control access to the image.

Not chosen:

  • docker run script isn’t necessary yet, and distributing it may expose billing info unless handled very carefully.
  • Docker Hub doesn’t support Azure RBAC, so it doesn’t meet the access control requirement.


9
Q

Question:

You plan to deploy a containerized version of an Azure Cognitive Services service to be used for text analysis.

You configure https://contoso.cognitiveservices.azure.com as the endpoint URI for the service, and you plan to pull the latest version of the Text Analytics Sentiment Analysis container.

You intend to run the container on an Azure virtual machine using Docker.

How should you complete the following Docker command?
(Select the appropriate options in the answer area.)

Answer Area:

docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
<Dropdown: image> \
Eula=accept \
Billing=<Dropdown: billing value> \
ApiKey=xxxxxxxxxxxxxxxxxxxxxxxx

Image Options:

Billing Options:

A

Answer:

docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
Eula=accept \
Billing=https://contoso.cognitiveservices.azure.com \
ApiKey=xxxxxxxxxxxxxxxxxxxxxxxx

Explanation:

  • The correct image for sentiment analysis is mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment.
  • The Billing parameter must point to the Cognitive Services endpoint you’ve configured — in this case: https://contoso.cognitiveservices.azure.com.
10
Q

Question:

You have the following C# method for creating Azure Cognitive Services resources programmatically:

static void create_resource(CognitiveServicesManagementClient client, string resource_name, string kind, string account_tier, string location)
{
    CognitiveServicesAccount parameters = 
        new CognitiveServicesAccount(null, null, kind, location, resource_name, 
        new CognitiveServicesAccountProperties(), new Sku(account_tier));

    var result = client.Accounts.Create(resource_group_name, account_tier, parameters);
}

You need to call the method to create a free Azure resource in the West US region.

The resource will be used to generate captions of images automatically.

Which code should you use?

A. create_resource(client, "res1", "ComputerVision", "F0", "westus")
B. create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus")
C. create_resource(client, "res1", "ComputerVision", "S0", "westus")
D. create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus")

A

Answer:
A. create_resource(client, “res1”, “ComputerVision”, “F0”, “westus”)

Explanation:

  • ComputerVision is the correct kind for generating image captions.
  • F0 is the free pricing tier.
  • “westus” is the specified location.
    Therefore, A is the correct call to meet all requirements.
11
Q

Question:

You successfully run the following HTTP request:

POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18  
Body: { "keyName": "Key2" }

What is the result of the request?

A. A key for Azure Cognitive Services was generated in Azure Key Vault.
B. A new query key was generated.
C. The primary subscription key and the secondary subscription key were rotated.
D. The secondary subscription key was reset.

A

Answer:
D. The secondary subscription key was reset.

Explanation:
Calling the regenerateKey API with "keyName": "Key2" targets Key2, which refers to the secondary key. This resets that specific key, allowing key rotation without service disruption.
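A sketch of issuing this same ARM call from code, assuming plain HttpClient and an Azure AD bearer token for https://management.azure.com/ obtained elsewhere (for example via Azure.Identity); the URL and body are taken directly from the question:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Resets Key2 (the secondary subscription key) of the contoso1 account.
static async Task RegenerateSecondaryKeyAsync(string armToken)
{
    const string url =
        "https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1" +
        "/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1" +
        "/regenerateKey?api-version=2017-04-18";

    using var http = new HttpClient();
    http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armToken);

    var body = new StringContent("{ \"keyName\": \"Key2\" }", Encoding.UTF8, "application/json");
    HttpResponseMessage response = await http.PostAsync(url, body);

    // A successful call returns the account keys with Key2 regenerated.
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}
```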

12
Q

Question:

You build a custom Form Recognizer model.

You receive sample files to use for training the model, as shown in the following table:

| Name  | Type | Size   |
|-------|------|--------|
| File1 | PDF  | 20 MB  |
| File2 | MP4  | 100 MB |
| File3 | JPG  | 20 MB  |
| File4 | PDF  | 100 MB |
| File5 | GIF  | 1 MB   |
| File6 | JPG  | 40 MB  |

Which three files can you use to train the model?
(Each correct answer presents a complete solution. Each correct selection is worth one point.)

A. File1
B. File2
C. File3
D. File4
E. File5
F. File6

A

Answer:
A. File1
C. File3
F. File6

Explanation:
Form Recognizer supports training with:

  • PDF, JPG, PNG, and TIFF formats.
  • Files must be ≤ 50 MB in size.

✅ Valid:

  • File1: PDF, 20 MB
  • File3: JPG, 20 MB
  • File6: JPG, 40 MB

❌ Invalid:

  • File2: MP4 — unsupported format
  • File4: PDF — too large (100 MB)
  • File5: GIF — unsupported format
13
Q

Question:

A customer uses Azure Cognitive Search.

The customer plans to enable server-side encryption and use customer-managed keys (CMK) stored in Azure.

What are three implications of the planned change?
(Each correct answer presents a complete solution. Each correct selection is worth one point.)

A. The index size will increase.
B. Query times will increase.
C. A self-signed X.509 certificate is required.
D. The index size will decrease.
E. Query times will decrease.
F. Azure Key Vault is required.

A

Answer:
A. The index size will increase
B. Query times will increase
F. Azure Key Vault is required

Explanation:

  • Enabling CMK encryption typically increases index size due to encryption metadata.
  • Query times may increase slightly due to encryption/decryption overhead.
  • Azure Key Vault is required to manage and store the customer-managed keys used for encryption.
  • A self-signed X.509 certificate is not required in this scenario.
14
Q

Question:

You are developing a new sales system that will process the video and text from a public-facing website.

You plan to notify users that their data has been processed by the sales system.

Which responsible AI principle does this help meet?

A. Transparency
B. Fairness
C. Inclusiveness
D. Reliability and safety

A

Answer:
A. Transparency

Explanation:
Notifying users that their data has been processed promotes transparency, which ensures users are aware of how their data is collected, used, and managed by AI systems.

15
Q

You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is located in a virtual network named vnet1.

You plan to create a new Azure Cognitive Search service named service1.

You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.

Proposed solution:
You deploy service1 and a public endpoint to a new virtual network, and you configure Azure Private Link.

Does this meet the goal?

A. Yes
B. No

A

Answer:
B. No

Explanation:
Although Azure Private Link enables private access to Azure services, using a public endpoint does not prevent traffic from potentially going over the public internet.
To meet the goal, the public network access must be disabled, and the private endpoint must be correctly configured within the same virtual network or peered VNets to ensure completely private communication.

16
Q

Question:

You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is on an Azure virtual network named vnet1.

You plan to create a new Azure Cognitive Search service named service1.

You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.

Proposed solution:
You deploy service1 with a public endpoint, and you configure an IP firewall rule.

Does this meet the goal?

A. Yes
B. No

A

Answer:
B. No

Explanation:
Configuring a public endpoint with an IP firewall rule restricts which IP addresses can access the service but does not prevent traffic from routing over the public internet.
To avoid public internet routing, you must use Private Link or a private endpoint, not just IP restrictions.

17
Q

Question:

You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is on an Azure virtual network named vnet1.

You plan to create a new Azure Cognitive Search service named service1.

You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.

Proposed solution:
You deploy service1 with a public endpoint, and you configure a network security group (NSG) for vnet1.

Does this meet the goal?

A. Yes
B. No

A

Answer:
B. No

Explanation:
A Network Security Group (NSG) controls inbound and outbound traffic within a virtual network, but it does not eliminate routing through the public internet when accessing a service via its public endpoint.
To keep traffic off the public internet, you must use Private Endpoint or Azure Private Link. NSG alone does not meet the requirement.

18
Q

Question:

You plan to perform predictive maintenance.

You collect IoT sensor data from 100 industrial machines for a year.
Each machine has 50 different sensors generating data at one-minute intervals.
In total, you have 5,000 time series datasets.

You need to identify unusual values in each time series to help predict machinery failures.

Which Azure service should you use?

A. Azure AI Computer Vision
B. Cognitive Search
C. Azure AI Document Intelligence
D. Azure AI Anomaly Detector

A

Answer:
D. Azure AI Anomaly Detector

Explanation:
Azure AI Anomaly Detector is specifically designed to detect anomalies in time series data, making it ideal for identifying unusual patterns that could indicate machinery failures in IoT scenarios.

19
Q

HOTSPOT -
You are developing a streaming Speech to Text solution that will use the Speech SDK and MP3 encoding.
You need to develop a method to convert speech to text for streaming MP3 data.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer Area:

Audio Format:

  • AudioConfig.SetProperty
  • AudioStreamFormat.GetCompressedFormat
  • AudioStreamFormat.GetWaveFormatPCM
  • PullAudioInputStream

Recognizer:

  • KeywordRecognizer
  • SpeakerRecognizer
  • SpeechRecognizer
  • SpeechSynthesizer

Code:

var audioFormat = [Audio Format option](AudioStreamContainerFormat.MP3);

var speechConfig = SpeechConfig.FromSubscription("18c51887-3a69-47a8-aedc-a547457f08a1", "westus");

var audioConfig = AudioConfig.FromStreamInput(pushStream, audioFormat);

using (var recognizer = new [Recognizer option](speechConfig, audioConfig))
{
    var result = await recognizer.RecognizeOnceAsync();
    var text = result.Text;
}
A

Answer Section:

  1. Audio Format - AudioStreamFormat.GetCompressedFormat:
    To handle MP3 data, you need to choose the compressed format for streaming. GetCompressedFormat is the correct choice for MP3 encoding.
  2. Recognizer - SpeechRecognizer:
    For converting speech to text, you should use SpeechRecognizer. It is designed for speech-to-text tasks, whereas other recognizers like KeywordRecognizer and SpeakerRecognizer serve different purposes (e.g., keyword spotting or speaker identification). SpeechSynthesizer is used for text-to-speech, not speech recognition.
20
Q

Question:

You are developing an internet-based training solution for remote learners.
Your company identifies that during the training, some learners leave their desk for long periods or become distracted.

You need to use a video and audio feed from each learner’s computer to detect whether the learner is present and paying attention. The solution must minimize development effort and identify each learner.

Which Azure Cognitive Services should you use for each requirement?

  1. From a learner’s video feed, verify whether the learner is present.
  2. From a learner’s facial expression in the video feed, verify whether the learner is paying attention.
  3. From a learner’s audio feed, detect whether the learner is talking.

Options: Face, Speech, Text Analytics

A

Answer:

  1. From a learner’s video feed, verify whether the learner is present: Face
  2. From a learner’s facial expression in the video feed, verify whether the learner is paying attention: Face
  3. From a learner’s audio feed, detect whether the learner is talking: Speech

Explanation:

  • The Face API detects and identifies people, and can analyze presence and facial expressions (e.g., attention, emotions).
  • The Speech API can detect when someone is speaking, even with streaming audio.
  • Text Analytics is for processing text, not suited for real-time video or audio input.
21
Q

Question:

You plan to provision a QnA Maker service in a new resource group named RG1.
In RG1, you create an App Service plan named AP1.

Which two Azure resources are automatically created in RG1 when you provision the QnA Maker service?
(Each correct answer presents part of the solution. Each correct selection is worth one point.)

A. Language Understanding
B. Azure SQL Database
C. Azure Storage
D. Azure Cognitive Search
E. Azure App Service

A

Answer:
D. Azure Cognitive Search
E. Azure App Service

Explanation:
When you provision QnA Maker, it automatically creates:

  • An Azure Cognitive Search resource to index and search knowledge base content.
  • An Azure App Service to host the QnA Maker runtime endpoint used for querying.

Note: Azure QnA Maker is now deprecated and replaced by Azure Language Studio (Question Answering), but this remains valid for legacy questions.

22
Q

Question:

You are building a language model by using a Language Understanding (classic) service.
You create a new Language Understanding (classic) resource.

You need to add more contributors.

What should you use?

A. A conditional access policy in Azure Active Directory (Azure AD)
B. The Access control (IAM) page for the authoring resources in the Azure portal
C. The Access control (IAM) page for the prediction resources in the Azure portal

A

Answer:
B. The Access control (IAM) page for the authoring resources in the Azure portal

Explanation:
To add contributors who can create, edit, and manage models, you must assign them roles on the authoring resource, not the prediction resource.
Authoring is where models are developed, so IAM permissions apply there for contributor roles.

23
Q

You have an Azure Cognitive Search service.

During the past 12 months, query volume steadily increased.
You discover that some search query requests are being throttled.

You need to reduce the likelihood that search query requests are throttled.

Solution: You add indexes.

Does this meet the goal?

A. Yes
B. No


A

Answer:
B. No

Explanation:
Adding indexes increases the number of searchable datasets but does not increase query capacity or throughput.
To reduce throttling, you should consider scaling up the service (e.g., adding replicas or partitions), which directly affects query handling capacity.

24
Q

Question:

You need to develop an automated call handling system that can respond to callers in their own language.
The system will support only French and English.

Which Azure Cognitive Services should you use to meet each requirement?

Requirements:

  1. Detect the incoming language
  2. Respond in the caller’s own language

Available Services:

  • Speaker Recognition
  • Speech to Text
  • Text Analytics
  • Text to Speech
  • Translator
A

Answer:

  1. Detect the incoming language: Speech to Text
  2. Respond in the caller’s own language: Text to Speech

Why this is correct:

  1. Speech to Text with AutoDetectSourceLanguageConfig
  • Azure Speech to Text includes automatic spoken language identification through AutoDetectSourceLanguageConfig.
  • This allows the service to detect whether the caller is speaking English or French directly from the audio stream, without a separate detection step via Translator or Text Analytics (see the sketch below).
  • Official doc: Azure Speech language identification.
  2. Text to Speech
  • Still the right service to generate spoken responses in the caller’s language.
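A minimal sketch of the language-detection piece, assuming the Speech SDK (Microsoft.CognitiveServices.Speech) and microphone input; the placeholder key, region, and helper method name are assumptions, and the candidate languages are limited to English and French as in the scenario:

```csharp
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

static async Task<string> DetectCallerLanguageAsync()
{
    var speechConfig = SpeechConfig.FromSubscription("<speech-key>", "<region>");

    // Restrict automatic language identification to the two supported languages.
    var autoDetectConfig = AutoDetectSourceLanguageConfig.FromLanguages(
        new[] { "en-US", "fr-FR" });

    using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
    using var recognizer = new SpeechRecognizer(speechConfig, autoDetectConfig, audioConfig);

    var result = await recognizer.RecognizeOnceAsync();

    // The detected language ("en-US" or "fr-FR") drives which voice Text to Speech uses.
    var detected = AutoDetectSourceLanguageResult.FromResult(result);
    return detected.Language;
}
```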
25
Question:

You have receipts accessible from a URL. You need to extract data from the receipts by using Form Recognizer and the SDK, and the solution must use a prebuilt model.

Which client and method should you use?

A. The FormRecognizerClient client and the StartRecognizeContentFromUri method
B. The FormTrainingClient client and the StartRecognizeContentFromUri method
C. The FormRecognizerClient client and the StartRecognizeReceiptsFromUri method
D. The FormTrainingClient client and the StartRecognizeReceiptsFromUri method

Answer:
C. The FormRecognizerClient client and the StartRecognizeReceiptsFromUri method

Explanation:

  • FormRecognizerClient is used to analyze forms and documents by using prebuilt models (such as receipts, invoices, and ID documents).
  • StartRecognizeReceiptsFromUri is the correct method to analyze receipt data from a URL by using the prebuilt receipt model.
  • FormTrainingClient is used only to build and manage custom models, not to call prebuilt ones.
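A sketch under the assumption that the Azure.AI.FormRecognizer 3.x .NET client library is used; in that SDK the method carries an Async suffix (StartRecognizeReceiptsFromUriAsync), and the field names shown are the prebuilt receipt model's standard keys:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

static async Task AnalyzeReceiptAsync(string endpoint, string key, Uri receiptUri)
{
    var client = new FormRecognizerClient(new Uri(endpoint), new AzureKeyCredential(key));

    // Prebuilt receipt model: no training required.
    RecognizeReceiptsOperation operation = await client.StartRecognizeReceiptsFromUriAsync(receiptUri);
    Response<RecognizedFormCollection> result = await operation.WaitForCompletionAsync();

    foreach (RecognizedForm receipt in result.Value)
    {
        if (receipt.Fields.TryGetValue("MerchantName", out FormField merchant))
            Console.WriteLine($"Vendor: {merchant.ValueData?.Text}");
        if (receipt.Fields.TryGetValue("Total", out FormField total))
            Console.WriteLine($"Total: {total.ValueData?.Text}");
    }
}
```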
26
Question:

You have a collection of 50,000 scanned documents that contain text. You plan to make the text available through Azure Cognitive Search.

You need to configure an enrichment pipeline to perform OCR and Text Analytics. The solution must minimize costs.

What should you attach to the skillset?

A. A new Computer Vision resource
B. A free (Limited enrichments) Cognitive Services resource
C. An Azure Machine Learning pipeline
D. A new Cognitive Services resource that uses the S0 pricing tier

Answer:
D. A new Cognitive Services resource that uses the S0 pricing tier

Explanation:

  • Azure Cognitive Search skillsets require a Cognitive Services resource to enable AI enrichment such as OCR and Text Analytics.
  • The free tier is limited to 20 documents per day, which is insufficient for 50,000 documents.
  • Computer Vision alone supports only OCR, not Text Analytics (e.g., key phrase extraction, sentiment).
  • Azure Machine Learning is not required for standard OCR and text enrichment and would increase cost and complexity.
  • The S0 pricing tier provides scalable access to multiple Cognitive Services, including both OCR and Text Analytics, making it the correct and cost-effective choice at this scale.
27
Question:

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests are being throttled.

You need to reduce the likelihood that search query requests are throttled.

Solution: You add indexes.

Does this meet the goal?

A. Yes
B. No

Answer:
B. No

Explanation:
Adding indexes does not affect query capacity or reduce throttling. To reduce throttling in Azure Cognitive Search, increase the number of replicas, which scale out the query workload. Indexes define what data can be searched, not how much load the service can handle.
28
Question:

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests are being throttled.

You need to reduce the likelihood that search query requests are throttled.

Solution: You enable customer-managed key (CMK) encryption.

Does this meet the goal?

A. Yes
B. No

Answer:
B. No

Explanation:
Enabling customer-managed key (CMK) encryption affects data security and compliance, not performance or throughput. To reduce query throttling, scale the search service by increasing the number of replicas (which handle query load), not by modifying encryption settings.
29
Question:

You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is on an Azure virtual network named vnet1.

You plan to create a new Azure Cognitive Search service named service1.

You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.

Solution: You deploy service1 and a public endpoint to a new virtual network, and you configure Azure Private Link.

Does this meet the goal?

A. Yes
B. No

Answer:
B. No

Explanation:
This solution does not meet the goal because it uses a public endpoint for service1, which means traffic would still be exposed to the public internet. To ensure that app1 can connect directly to service1 without routing traffic over the public internet, deploy service1 with a private endpoint instead, using Azure Private Link. This approach creates a secure, private connection over the Azure backbone.
30
Question:

You have a Language Understanding resource named lu1.
You build and deploy an Azure bot named bot1 that uses lu1.

You need to ensure that bot1 adheres to the Microsoft responsible AI principle of inclusiveness.

How should you extend bot1?

A. Implement authentication for bot1
B. Enable active learning for lu1
C. Host lu1 in a container
D. Add Direct Line Speech to bot1

Answer:
D. Add Direct Line Speech to bot1

Explanation:
The inclusiveness principle aims to make AI accessible to all users, including those with visual or motor impairments or low literacy. Adding Direct Line Speech enables voice interaction with the bot, allowing users who may struggle with text-based interfaces to engage through spoken input and output, which directly supports inclusiveness.
31
Question:

You are building an app that will process incoming email and direct messages and route them to either French or English language support teams.

Which Azure Cognitive Services API should you use?
(Select the appropriate host and endpoint to route messages to the correct support team based on language.)

Answer:

  • Host: eastus.api.cognitive.microsoft.com
  • Endpoint: /text/analytics/v3.1/languages

Explanation:

  • To detect the language of incoming messages (so that you can route them to the correct team), use the Text Analytics language detection operation.
  • The host eastus.api.cognitive.microsoft.com is a valid region-based endpoint for the Azure Cognitive Services APIs.
  • The path /text/analytics/v3.1/languages identifies the language detection operation.
  • The Translator API endpoints (e.g., /translate?to=en) perform translation, not language detection, and portal.azure.com is a management portal, not an API endpoint.
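The same operation through the SDK, as a sketch assuming the Azure.AI.TextAnalytics client library rather than the raw REST endpoint; the key placeholder and sample message are illustrative:

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

// Detect the language of an incoming message so it can be routed to the
// French or English support team.
var client = new TextAnalyticsClient(
    new Uri("https://eastus.api.cognitive.microsoft.com"),
    new AzureKeyCredential("<language-resource-key>"));

DetectedLanguage language = client.DetectLanguage("Bonjour, j'ai un problème avec ma commande.");
string team = language.Iso6391Name == "fr" ? "French support" : "English support";
Console.WriteLine($"{language.Name} ({language.Iso6391Name}) -> {team}");
```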
32
Question:

You have an Azure Cognitive Search instance that indexes purchase orders by using Form Recognizer.

You need to analyze the extracted information by using Microsoft Power BI. The solution must minimize development effort.

What should you add to the indexer?

A. A projection group
B. A table projection
C. A file projection
D. An object projection

Answer:
B. A table projection

Explanation:
Table projections structure the enriched data output from a skillset into a tabular format that can be easily consumed by Power BI or other analytics tools. They reduce development effort by avoiding the need to flatten or restructure JSON data manually, making this the most suitable and efficient option for integrating with Power BI.
33
Question:

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests are being throttled.

You need to reduce the likelihood that search query requests are throttled.

Solution: You add replicas.

Does this meet the goal?

A. Yes
B. No

Answer:
A. Yes

Explanation:
In Azure Cognitive Search, replicas handle the query load (read operations). Adding more replicas increases query throughput, which directly helps reduce throttling when query volume increases. This solution correctly addresses the problem.
34
Question:

You have an Azure Cognitive Search solution and a collection of blog posts that include a category field.

You need to index the posts. The solution must meet the following requirements:

  • Include the category field in the search results.
  • Ensure that users can search for words in the category field.
  • Ensure that users can perform drill-down filtering based on category.

Which index attributes should you configure for the category field?

A. Searchable, sortable, and retrievable
B. Searchable, facetable, and retrievable
C. Retrievable, filterable, and sortable
D. Retrievable, facetable, and key

Answer:
B. Searchable, facetable, and retrievable

Explanation:

  • Searchable: Allows users to search for text within the field (e.g., keywords in the category).
  • Facetable: Enables drill-down filtering (e.g., selecting from a list of categories).
  • Retrievable: Ensures the field’s value is included in the search results.

This combination satisfies all listed requirements.
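A sketch of the corresponding field definition, assuming the Azure.Search.Documents client library; the index name and key field are illustrative, and retrievable corresponds to IsHidden = false:

```csharp
using Azure.Search.Documents.Indexes.Models;

// Category field for the blog-post index: full-text searchable, usable as a
// facet for drill-down filtering, and returned in results (not hidden).
var categoryField = new SearchableField("category")
{
    IsFacetable = true,
    IsHidden = false   // retrievable: the value is included in search results
};

var index = new SearchIndex("blog-posts")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        categoryField
    }
};
```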
35
HARD

Question:

You have an Azure IoT Hub that receives sensor data from machinery.

You need to build an app that performs the following actions:

  • Perform anomaly detection across multiple correlated sensors
  • Identify the root cause of process stops
  • Send incident alerts

The solution must minimize development time.

Which Azure service should you use?

A. Azure Metrics Advisor
B. Form Recognizer
C. Azure Machine Learning
D. Anomaly Detector

Answer:
A. Azure Metrics Advisor

Explanation:

  • Azure Metrics Advisor is specifically designed for real-time monitoring, anomaly detection, and root cause analysis of time-series data from services such as IoT Hub.
  • It supports multi-sensor correlation and incident detection and has built-in alerting mechanisms.
  • Compared to Anomaly Detector, Metrics Advisor offers a more comprehensive, managed solution that reduces development time.
  • Form Recognizer is for document data extraction, and Azure Machine Learning requires custom model development, so neither is ideal for quick deployment.
36
Question:

You have an app that analyzes images using the Computer Vision API.

You need to configure the app to provide an output for users who are vision impaired. The solution must provide the output in complete sentences.

Which API call should you perform?

A. readInStreamAsync
B. analyzeImagesByDomainInStreamAsync
C. tagImageInStreamAsync
D. describeImageInStreamAsync

Answer:
D. describeImageInStreamAsync

Explanation:

  • The describeImageInStreamAsync method returns natural language descriptions of images (e.g., “A group of people standing near a fountain”), which are suitable for screen readers and users with visual impairments.
  • Other methods (e.g., tagImageInStreamAsync) return tags or text data, not full-sentence descriptions.
  • This makes option D the correct choice for accessibility-focused output.
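A minimal sketch, assuming the Microsoft.Azure.CognitiveServices.Vision.ComputerVision client library and a local image stream; the helper method name and parameters are illustrative:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

static async Task DescribeForScreenReaderAsync(string endpoint, string key, Stream imageStream)
{
    var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
    {
        Endpoint = endpoint
    };

    // Returns natural-language captions (complete sentences) suitable for
    // users who are vision impaired.
    ImageDescription description = await client.DescribeImageInStreamAsync(imageStream);

    foreach (ImageCaption caption in description.Captions)
        Console.WriteLine($"{caption.Text} (confidence {caption.Confidence:P0})");
}
```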
37
HARD

Question:

You have a Custom Vision service project that performs object detection. The project currently uses the General domain for classification and contains a trained model.

You need to export the model for use on a network that is disconnected from the internet.

Which three actions should you perform in sequence?
(Drag and drop the correct three actions into the answer area and arrange them in the correct order.)

Available Actions:

  • Change the classification type
  • Export the model
  • Retrain the model
  • Change Domains to General (compact)
  • Create a new classification model

Answer (in order):

  1. Change Domains to General (compact).
  2. Retrain the model.
  3. Export the model.

Explanation:

  • Compact domains are required to export models in a format that can run offline (e.g., ONNX, TensorFlow).
  • After changing to a compact domain, the model must be retrained, because the domain change invalidates previous training.
  • Once retrained, the model can be exported for offline use.

This sequence satisfies the requirement for offline deployment on a network that is disconnected from the internet.
38
Question:

You are building an AI solution that will use Sentiment Analysis results from surveys to calculate bonuses for customer service staff.

You need to ensure that the solution meets the Microsoft responsible AI principles.

What should you do?

A. Add a human review and approval step before making decisions that affect the staff’s financial situation.
B. Include the Sentiment Analysis results when surveys return a low confidence score.
C. Use all the surveys, including surveys by customers who requested that their account be deleted and their data be removed.
D. Publish the raw survey data to a central location and provide the staff with access to the location.

Answer:
A. Add a human review and approval step before making decisions that affect the staff’s financial situation.

Explanation:

  • Responsible AI principles such as fairness, accountability, and reliability require human oversight when AI influences outcomes with financial or personal impact.
  • Option A introduces a human-in-the-loop, which aligns with the principle of accountability and reduces the risk of over-reliance on potentially biased or low-confidence AI predictions.

Why the other options are incorrect:

  • B. Including results with low confidence scores increases the risk of inaccurate or unfair outcomes, violating the principle of reliability.
  • C. Using deleted customer data violates data privacy and consent, going against ethical AI use and GDPR compliance.
  • D. Publishing raw data raises privacy and data governance concerns; raw data may include sensitive or personally identifiable information.

Adding human review is the most responsible and compliant approach.
39
hard

Question:

You have an Azure subscription that contains an Azure AI Service resource named CSAccount1 and a virtual network named VNet1. CSAccount1 is connected to VNet1.

You need to ensure that only specific resources can access CSAccount1. The solution must meet the following requirements:

  • Prevent external access to CSAccount1.
  • Minimize administrative effort.

Which two actions should you perform? (Each correct answer presents part of the solution.)

A. In VNet1, enable a service endpoint for CSAccount1.
B. In CSAccount1, configure the Access Control (IAM) settings.
C. In VNet1, modify the virtual network settings.
D. In VNet1, create a virtual subnet.
E. In CSAccount1, modify the virtual network settings.

Answer:
A. In VNet1, enable a service endpoint for CSAccount1.
E. In CSAccount1, modify the virtual network settings.

Explanation:

  • A. Enabling a service endpoint in VNet1 allows secure communication between CSAccount1 and the resources inside VNet1 over the Azure backbone network, which prevents external access. This also minimizes administrative effort by providing secure access through the existing virtual network.
  • E. Modifying the virtual network settings in CSAccount1 lets you restrict access to specific VNets and subnets, ensuring that only resources within VNet1 can access CSAccount1 while minimizing manual configuration.
  • Note that A alone is not sufficient: the service endpoint provides connectivity, but the virtual network settings on CSAccount1 (E) are what actually restrict which networks can reach the resource.

Why B is incorrect:

  • IAM settings restrict access for specific users and identities, but they do not control network-level access to the service. The virtual network settings enforce network-level restrictions, which is why the service endpoint plus the resource’s network settings are the correct combination.

Together, these actions restrict access to CSAccount1 to VNet1 while minimizing administrative effort.
40
Question:

You have an Azure subscription that contains a Language service resource named ta1 and a virtual network named vnet1.

You need to ensure that only resources in vnet1 can access ta1.

What should you configure?

A. A network security group (NSG) for vnet1
B. Azure Firewall for vnet1
C. The virtual network settings for ta1
D. A Language service container for ta1

Answer:
C. The virtual network settings for ta1

Explanation:

  • C. The virtual network settings for ta1 control whether ta1 is accessible from specific virtual networks. By configuring vnet1 in ta1’s settings, you restrict access to resources within vnet1 only, ensuring that no external resources can access ta1.

Why the other options are incorrect:

  • A. NSGs control traffic flow at the subnet or network interface level within a virtual network, but an NSG cannot restrict access to a specific Azure service such as ta1.
  • B. Azure Firewall can manage traffic across VNets and control access to Azure services, but it is not used to directly configure access for a Language service resource; the service’s own virtual network settings handle that.
  • D. Language service containers deploy the service in a containerized environment; they do not manage access restrictions to a service from a specific VNet.
41
Question:

You are developing a monitoring system that will analyze engine sensor data, such as rotation speed, angle, temperature, and pressure. The system must generate an alert in response to atypical values.

What should you include in the solution?

A. Application Insights in Azure Monitor
B. Metric alerts in Azure Monitor
C. Multivariate Anomaly Detection
D. Univariate Anomaly Detection

Answer:
C. Multivariate Anomaly Detection

Explanation:

  • Multivariate Anomaly Detection is designed to analyze multiple sensor data streams simultaneously. Because the monitoring system involves multiple parameters (rotation speed, angle, temperature, and pressure), it is ideal for detecting anomalies across multiple related variables.
  • Multivariate anomaly detection considers the relationships between features (e.g., how changes in temperature may affect pressure), making it more accurate for systems with multiple interdependent measurements such as engine sensors.

Why the other options are incorrect:

  • A. Application Insights is primarily used for monitoring and diagnosing applications; it is not designed for anomaly detection in sensor data.
  • B. Metric alerts monitor individual metrics such as CPU usage or disk space. They can alert when thresholds are exceeded but are not effective at detecting anomalies in multivariate sensor data.
  • D. Univariate anomaly detection analyzes individual variables rather than the relationships between multiple data points, so it is only suitable for single metrics (e.g., temperature alone).
42
hard

Question:

You have an app named App1 that uses an Azure Cognitive Services model to identify anomalies in a time series data stream.

You need to run App1 in a location that has limited connectivity. The solution must minimize costs.

What should you use to host the model?

A. Azure Kubernetes Service (AKS)
B. Azure Container Instances
C. A Kubernetes cluster hosted in an Azure Stack Hub integrated system
D. The Docker Engine

Answer:
B. Azure Container Instances

Explanation:

  • Azure Container Instances (ACI) provide an easy way to run containers without managing the underlying infrastructure. ACI is cost-effective for scenarios with limited connectivity and low administrative overhead, offering a serverless container solution for quickly running containerized applications with minimal configuration, which makes it the most suitable option for running App1 while minimizing costs.

Why the other options are incorrect:

  • A. AKS is a managed Kubernetes service that can run containerized applications at scale, but it carries more management overhead and higher costs than ACI.
  • C. Azure Stack Hub provides hybrid cloud capabilities but adds significant infrastructure overhead and is not cost-effective for smaller, isolated environments; it also requires managing a Kubernetes environment.
  • D. Docker Engine can run containers locally, but it lacks the scalability, management, and integration features provided by ACI or AKS.
43
hard

Question:

You have an Azure Cognitive Search resource named Search1 that is used by multiple apps.

You need to secure Search1. The solution must meet the following requirements:

  • Prevent access to Search1 from the internet.
  • Limit the access of each app to specific queries.

What should you do? (Select the appropriate options in the answer area. Each correct answer is worth one point.)

To prevent access from the internet:

  • Create a private endpoint
  • Configure an IP firewall
  • Use Azure roles

To limit access to queries:

  • Use Azure roles
  • Create a private endpoint
  • Use key authentication

Answer:

  1. To prevent access from the internet: Create a private endpoint
  2. To limit access to queries: Use Azure roles

Explanation:

  • Prevent access to Search1 from the internet: Creating a private endpoint restricts access to the Azure backbone network, ensuring that the service is not publicly accessible over the internet.
  • Limit the access of each app to specific queries: Azure roles let you control who can access specific features of Search1 by defining permissions for different roles (such as read-only or contributor) at a granular level.

Why the other options are incorrect:

  • Configure an IP firewall: Limits access based on IP addresses, but it is not the most secure option for preventing internet access when private endpoints are available.
  • Use key authentication: Authenticates access to Search1 but does not provide fine-grained control over specific query types or app access; Azure roles provide more flexible and scalable access control.
44
Question:

You are building a solution that will detect anomalies in sensor data from the previous 24 hours.

You need to ensure that the solution scans the entire dataset at the same time for anomalies.

Which type of detection should you use?

A. Batch
B. Streaming
C. Change points

Answer:
A. Batch

Explanation:

  • Batch detection processes an entire dataset at once. Because you want to scan the whole dataset from the past 24 hours at the same time, batch is the correct choice.
  • B. Streaming is used for processing continuous data in real time as it arrives, not historical data.
  • C. Change point detection identifies sudden shifts in the data over time; it is not meant for scanning an entire dataset for anomalies all at once.
45
HARD

Question:

You are building an app that will scan confidential documents and use the Language service to analyze the contents.

You provision an Azure Cognitive Services resource.

You need to ensure that the app can make requests to the Language service endpoint. The solution must ensure that confidential documents remain on-premises.

Which three actions should you perform in sequence?
(Drag and drop the correct actions into the answer area and arrange them in the correct order.)

Actions:

  • Run the container and specify an App ID and Client Secret.
  • Provision an on-premises Kubernetes cluster that is isolated from the internet.
  • Pull an image from the Microsoft Container Registry (MCR).
  • Run the container and specify an API Key and the Endpoint URL of the Cognitive Services resource.
  • Provision an on-premises Kubernetes cluster that has internet connectivity.
  • Pull an image from Docker Hub.
  • Provision an Azure Kubernetes Service (AKS) resource.

Answer (in order):

  1. Provision an on-premises Kubernetes cluster that is isolated from the internet.
  2. Pull an image from the Microsoft Container Registry (MCR).
  3. Run the container and specify an API Key and the Endpoint URL of the Cognitive Services resource.

Explanation:

  • Step 1: The key requirement is that confidential documents remain on-premises, so the Kubernetes cluster should not have internet access that would allow data to leave the premises.
  • Step 2: MCR is where the official images for Azure Cognitive Services containers are published, so the image is pulled from MCR to run on-premises.
  • Step 3: After setting up the cluster and pulling the image, you run the container and configure it with the API Key and Endpoint URL of the Cognitive Services resource.

Why the other options are incorrect:

  • Provisioning a cluster with internet connectivity contradicts the requirement to keep confidential documents on-premises.
  • Docker Hub is not the source for official Azure Cognitive Services containers; the proper source is MCR.
  • AKS is a managed Kubernetes service in Azure, which would not keep the documents on-premises.
46
**Question:** You have an **Azure subscription** with the following configurations: * **Subscription ID**: `8d3591aa-96b4-4737-ad09-009fb1ed35ad` * **Tenant ID**: `3ed5f572-cb54-3ced-ae12-c5c177f39a12` You plan to create a resource that will perform **sentiment analysis** and **optical character recognition (OCR)**. You need to use an **HTTP request** to create the resource in the subscription. The solution must use a **single key and endpoint**. **How should you complete the request?** **Answer Area:** Select the appropriate options in the two dropdowns. **Dropdown 1:** * subscriptions/8d3591aa-96b4-4737-ad09-009fb1ed35ad * tenant/3ed5f572-cb54-3ced-ae12-c5c177f39a12 * subscriptions/3ed5f572-cb54-3ced-ae12-c5c177f39a12 * tenant/8d3591aa-96b4-4737-ad09-009fb1ed35ad **Dropdown 2:** * Microsoft.ApiManagement * Microsoft.CognitiveServices * Microsoft.ContainerService * Microsoft.KeyVault ---
**Answer:** 1. **Dropdown 1**: **subscriptions/8d3591aa-96b4-4737-ad09-009fb1ed35ad** 2. **Dropdown 2**: **Microsoft.CognitiveServices** --- **Explanation:** * **Dropdown 1**: **subscriptions/8d3591aa-96b4-4737-ad09-009fb1ed35ad** corresponds to the **Subscription ID** that identifies where the resource will be created. * **Dropdown 2**: **Microsoft.CognitiveServices** is the correct resource provider for **Cognitive Services** like sentiment analysis and OCR. This combination ensures the **correct subscription** and **resource provider** for creating the **Cognitive Services** resource via the **HTTP request**. ---
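For reference, a minimal sketch of the HTTP request built from those two selections, assuming an Azure Resource Manager bearer token in an `ARM_ACCESS_TOKEN` environment variable, a hypothetical resource group `rg1`, and a hypothetical account name `myaccount`; the route and body shape follow the `Microsoft.CognitiveServices/accounts` REST reference, and `kind: CognitiveServices` creates a multi-service resource with a single key and endpoint.

```python
import os
import requests

subscription_id = "8d3591aa-96b4-4737-ad09-009fb1ed35ad"
resource_group = "rg1"                      # hypothetical resource group
token = os.environ["ARM_ACCESS_TOKEN"]      # assumed ARM bearer token

# Multi-service (kind=CognitiveServices) resource: one key and endpoint
# covering sentiment analysis (Language) and OCR (Vision).
url = (
    "https://management.azure.com/"
    f"subscriptions/{subscription_id}/resourceGroups/{resource_group}/"
    "providers/Microsoft.CognitiveServices/accounts/myaccount"
    "?api-version=2023-05-01"
)
body = {
    "location": "westus",
    "kind": "CognitiveServices",
    "sku": {"name": "S0"},
    "properties": {},
}

response = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
response.raise_for_status()
print(response.json()["properties"].get("provisioningState"))
```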
47
--- **Question:** You have an Azure subscription and you want to deploy an instance of the **Anomaly Detector** service using **Docker** on an on-premises server. You need to ensure that the containerized service interacts correctly with your Azure subscription for **usage tracking and billing** purposes. **Which parameter should you include in the Docker run command?** **Options:** A. Fluentd B. Billing C. Http Proxy D. Mounts ---
**Answer:** **B. Billing** --- **Explanation:** * **Billing**: The **Billing** parameter is required to link the containerized **Anomaly Detector** service to your **Azure subscription** for usage tracking and billing purposes. It ensures that the service can interact with Azure for resource consumption and cost management. * **Why other options are incorrect:** * **A. Fluentd**: **Fluentd** is used for log aggregation and isn’t necessary for running the **Anomaly Detector** container. * **C. Http Proxy**: An **HTTP proxy** is used for network routing and isn’t required for the setup of **Anomaly Detector** in a Docker container. * **D. Mounts**: The **mounts** parameter refers to the file system mounts in Docker, but it’s not relevant for the **Anomaly Detector** container setup in this context. --- **Final Answer Recap:** **B. Billing** This is the required parameter to authenticate the container and associate it with your **Azure subscription** for **billing and usage tracking**.
48
Hard --- **Question:** You are building an app that will use the **Speech service**. You need to ensure that the app can authenticate to the service by using a **Microsoft Azure Active Directory (Azure AD)** token, part of **Microsoft Entra**. **Which two actions should you perform?** Each correct answer presents part of the solution. **Options:** A. Enable a virtual network service endpoint. B. Configure a custom subdomain. C. Request an X.509 certificate. D. Create a private endpoint. E. Create a Conditional Access policy. ---
**Answer:**

**B. Configure a custom subdomain.**
**D. Create a private endpoint.**

---

**Explanation:**

* **B. Configure a custom subdomain**: To authenticate by using Microsoft Entra ID (Azure AD) tokens, the Speech resource must have a custom subdomain; token-based authentication is not available on the shared regional endpoints.
* **D. Create a private endpoint**: A private endpoint gives the Speech resource a private IP address in your virtual network for secure communication. This configuration is required for authenticating with Microsoft Entra tokens.

---

**Why the other options are incorrect:**

* **A. Enable a virtual network service endpoint**: This controls network connectivity but does not specifically relate to Entra token authentication.
* **C. Request an X.509 certificate**: X.509 certificates are not required for Azure AD token authentication with the Speech service.
* **E. Create a Conditional Access policy**: Conditional Access policies manage user sign-in conditions; they are not needed to set up token-based authentication in this scenario.

---

**Final Answer Recap:**
**B. Configure a custom subdomain** and **D. Create a private endpoint** enable secure authentication to the Speech service by using Microsoft Entra tokens.
49
hard You plan to deploy an Azure OpenAI resource using an Azure Resource Manager (ARM) template. You need to ensure that the resource can respond to 600 requests per minute. How should you complete the template? Answer Area: Select the appropriate options in the dropdowns. --- Dropdown 1: * capacity * count * maxValue * size Dropdown 2: * 1 * 60 * 100 * 600 --- ``` { "type": "Microsoft.CognitiveServices/accounts/deployments", "apiVersion": "2023-05-01", "name": "arm-aoai-sample-resource/arm-je-std-deployment", "dependsOn": [ "[resourceId('Microsoft.CognitiveServices/accounts', 'arm-aoai-sample-resource')]" ], "sku": { "name": "Standard", "Dropdown 1": "Dropdown 2" }, "properties": { "model": { "format": "OpenAI" } } } ```**
**Answer:**

**Dropdown 1: capacity**
**Dropdown 2: 100**

---

**Explanation:**

* **Dropdown 1: capacity**
  In the deployment `sku` block, **capacity** defines the throughput quota assigned to the deployment, measured in **tokens per minute (TPM)**. **Each capacity unit corresponds to 1,000 TPM**, and Azure OpenAI maps quota at a ratio of **6 requests per minute (RPM) per 1,000 TPM**.

* **Dropdown 2: 100**
  To support **600 RPM**, you need 600 ÷ 6 = **100 capacity units**, which corresponds to **100,000 TPM**.

---

**Why the other options are incorrect:**

* **Dropdown 1: count**: **count** is not the property that controls deployment throughput; **capacity** is.
* **Dropdown 1: maxValue**: **maxValue** is not a valid property of the deployment `sku` block.
* **Dropdown 2: 1**: **1** capacity unit corresponds to **1,000 TPM**, which supports only about **6 RPM**, far short of the **600 RPM** requirement.
* **Dropdown 2: 60**: **60** units provide **60,000 TPM**, which supports only about **360 RPM**, still not enough for **600 RPM**.

---

**Final Answer Recap:**
**Dropdown 1: capacity**, **Dropdown 2: 100**. These settings give the deployment **100,000 TPM**, which corresponds to the required **600 requests per minute**.
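A minimal sketch of the quota arithmetic, assuming the documented ratio of 6 RPM per 1,000 TPM (one capacity unit):

```python
import math

# Assumed ratio from Azure OpenAI quota guidance:
# 1 capacity unit = 1,000 TPM, and 1,000 TPM maps to 6 RPM.
RPM_PER_CAPACITY_UNIT = 6

def required_capacity(target_rpm: int) -> int:
    """Return the number of capacity units needed for a target RPM."""
    return math.ceil(target_rpm / RPM_PER_CAPACITY_UNIT)

print(required_capacity(600))  # 100 units -> 100,000 TPM
```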
50
Question: You have an Azure OpenAI resource named A1 that hosts three deployments of the GPT 3.5 model. Each deployment is optimized for a unique workload. You plan to deploy three apps. Each app will access A1 by using the REST API and will use the deployment that was optimized for the app's intended workload. You need to provide each app with access to A1 and the appropriate deployment. The solution must ensure that only the apps can access A1. What should you use to provide access to A1, and what should each app use to connect to its appropriate deployment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --- ### Answer Area: Provide access to A1 by using: * An API key * A bearer token * A shared access signature (SAS) token Connect to the deployment by using: * An API key * A deployment endpoint * A deployment name * A deployment type ---
**Correct Answer:** * **Provide access to A1 by using:** **An API key** * **Connect to the deployment by using:** **A deployment endpoint** --- **Explanation:** * **API Key**: The **API key** is used for authentication with the Azure OpenAI service. It is the primary method for ensuring secure and authorized access to the **A1** resource, as discussed in the official documentation. * **Deployment Endpoint**: Each **deployment** of the **GPT-3.5** model in **A1** has a unique **deployment endpoint**. The **endpoint** ensures that each app connects to the specific deployment optimized for its intended workload. Therefore, using the **deployment endpoint** is essential to access the correct resource. --- **Why Other Options Are Incorrect:** * **Bearer token**: While **bearer tokens** are used for some API services, in this case, **Azure OpenAI** uses an **API key** for authentication. * **Deployment Name**: The **deployment name** identifies the specific model deployment within the **A1** resource. However, the app does not directly use the **deployment name** itself to connect to the model. Instead, it connects via the **deployment endpoint**, which includes the **deployment name** in the URL. Thus, the **deployment endpoint** is the correct choice. --- **Final Answer Recap:** * **Provide access to A1 by using:** **An API key** * **Connect to the deployment by using:** **A deployment endpoint** These are the correct selections because the **API key** authenticates the apps, and the **deployment endpoint** ensures each app accesses the correct deployment of the GPT model.
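To illustrate how the two selections fit together, here is a minimal sketch of a REST call, assuming a placeholder endpoint for A1, an `AZURE_OPENAI_API_KEY` environment variable, and a hypothetical deployment name `gpt35-support`; each app would target its own deployment-specific endpoint, which embeds the deployment name in the path, and authenticate with the resource's API key.

```python
import os
import requests

endpoint = "https://a1.openai.azure.com"          # hypothetical endpoint for resource A1
api_key = os.environ["AZURE_OPENAI_API_KEY"]      # API key authenticates the app
deployment = "gpt35-support"                      # hypothetical deployment for this app's workload

# The deployment-specific endpoint embeds the deployment name in its path.
url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=2024-02-01"

response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```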
51
**Question:** You build a bot by using the **Microsoft Bot Framework SDK**. You start the bot on a local computer. You need to validate the functionality of the bot. **What should you do before you connect to the bot?** --- **Options:** A. Run the Bot Framework Emulator. B. Run the Bot Framework Composer. C. Register the bot with Azure Bot Service. D. Run Windows Terminal. ---
**Answer:** **A. Run the Bot Framework Emulator.** --- **Explanation:** * **Run the Bot Framework Emulator**: The **Bot Framework Emulator** is a tool designed specifically to test and validate the functionality of a bot locally before deploying it to Azure. It allows you to simulate a real-world conversation with the bot and ensures that the bot works as expected during local development. --- **Why Other Options Are Incorrect:** * **Run the Bot Framework Composer**: **Bot Framework Composer** is a development tool for building bots, but it is not used for validating the functionality of a bot. It is primarily used for creating the bot and managing dialogs. * **Register the bot with Azure Bot Service**: While registering the bot with **Azure Bot Service** is important for deployment and making the bot available in production, it is not necessary for validating the bot's functionality locally. The **Bot Framework Emulator** is used to test the bot's functionality on the local machine before deploying it to the cloud. * **Run Windows Terminal**: **Windows Terminal** is a terminal emulator and is not directly related to the bot validation process. While you can use it for running other tools, it's not specific to bot functionality validation. --- **Final Answer Recap:** **A. Run the Bot Framework Emulator** is the correct choice for testing and validating the functionality of a bot locally before connecting it to any other services.
52
**Question:** You have an **Azure OpenAI** model named **A1**. You are building a web app named **App1** by using the **Azure OpenAI SDK**. You need to configure **App1** to connect to **A1**. **What information must you provide?** --- **Options:** A. the endpoint, key, and model name B. the deployment name, key, and model name C. the deployment name, endpoint, and key D. the endpoint, key, and model type ---
**Answer:** **C. the deployment name, endpoint, and key** --- **Explanation:** * **Deployment name**: The **deployment name** is used to identify a specific deployment of the **Azure OpenAI** model, which could be different from the model name itself. Each deployment can be configured differently based on your workload needs, so it's important to reference the correct deployment. * **Endpoint**: The **endpoint** provides the specific URL for accessing the **Azure OpenAI** resource. This is necessary to connect **App1** to **A1**. * **Key**: The **API key** is required to authenticate **App1** with **A1**, ensuring that the request to access the service is authorized. --- **Why Other Options Are Incorrect:** * **A. the endpoint, key, and model name**: While the **endpoint** and **key** are correct, the **model name** alone is insufficient to connect to a specific deployment of the model. You need the **deployment name** to access the correct model deployment. * **B. the deployment name, key, and model name**: The **model name** is not necessary to configure the connection. Instead, the **deployment name** and **endpoint** are needed for a proper connection. * **D. the endpoint, key, and model type**: The **model type** is not required for establishing a connection. The **deployment name** should be used instead to access the right model deployment. --- **Final Answer Recap:** **C. the deployment name, endpoint, and key** is the correct information needed to configure **App1** to connect to **A1**.
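A minimal sketch of wiring those three values together with the Azure OpenAI client from the `openai` Python package; the endpoint, key variable, and deployment name below are placeholders.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://a1.openai.azure.com",   # endpoint of A1 (placeholder)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],     # key for A1 (assumed env variable)
    api_version="2024-02-01",
)

# The deployment name selects which deployment of A1 the request goes to.
completion = client.chat.completions.create(
    model="my-gpt-deployment",                      # hypothetical deployment name
    messages=[{"role": "user", "content": "Hello from App1"}],
)
print(completion.choices[0].message.content)
```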
53
**Question:** You are building a solution in Azure that will use **Azure Cognitive Service for Language** to process sensitive customer data. You need to ensure that only specific Azure processes can access the Language service. The solution must minimize administrative effort. **What should you include in the solution?** --- **Options:** A. IPsec rules B. Azure Application Gateway C. a virtual network gateway D. virtual network rules ---
**Answer:** **D. virtual network rules** --- **Explanation:** * **Virtual Network Rules**: By configuring **virtual network rules**, you can ensure that only resources within a specified **Azure Virtual Network (VNet)** can access the **Azure Cognitive Services for Language** resource. This approach leverages **private networking** for secure access while minimizing administrative overhead by using built-in **Azure networking features**. This solution ensures that only specific Azure processes (such as those within the same VNet) can interact with the service. --- **Why Other Options Are Incorrect:** * **A. IPsec rules**: While **IPsec** can provide encrypted tunnels for secure communication, it is typically used for **VPNs** or inter-network connections, not for fine-grained access control within **Azure services** like Cognitive Services. **Virtual network rules** are more appropriate for this use case. * **B. Azure Application Gateway**: The **Azure Application Gateway** is a load balancer that manages traffic to web applications, but it doesn't control direct access to Azure Cognitive Services resources. **Virtual network rules** are the correct solution for securing access to the service. * **C. a virtual network gateway**: A **virtual network gateway** is used for setting up **VPNs** and hybrid cloud configurations, but it is not the most direct solution for restricting access to a service like **Azure Cognitive Services for Language**. **Virtual network rules** provide the necessary access control within **Azure**. --- **Final Answer Recap:** **D. virtual network rules** are the correct choice for ensuring that only specific Azure processes can access the **Azure Cognitive Services for Language** resource while minimizing administrative effort.
54
**Question:** You have a **Microsoft OneDrive** folder that contains a **20-GB video file** named **File1.avi**. You need to **index** File1.avi by using the **Azure Video Indexer** website. **What should you do?** --- **Options:** A. Upload File1.avi to the [www.youtube.com](http://www.youtube.com) webpage, and then copy the URL of the video to the Azure AI Video Indexer website. B. Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website. C. From OneDrive, create a download link, and then copy the link to the Azure AI Video Indexer website. D. From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website. ---
**Answer:** **D. From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website.** --- **Explanation:** * **Option D** is the correct choice. You can generate a **sharing link** for the file stored in OneDrive and then use that link to directly provide the Azure Video Indexer website with access to the video. This method allows you to index videos stored in OneDrive without having to download or manually upload the file, making it an efficient and seamless process. --- **Why Other Options Are Incorrect:** * **A. Upload File1.avi to the [www.youtube.com](http://www.youtube.com) webpage, and then copy the URL of the video to the Azure AI Video Indexer website.** This option requires uploading the video to YouTube, which is unnecessary and can result in potential privacy concerns or delays. Azure Video Indexer can work with files directly from storage services like OneDrive, without needing to use YouTube. * **B. Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website.** While this is a valid approach, it involves unnecessary downloading and uploading of a large 20-GB file. This can be inefficient, especially if the file is already accessible on OneDrive. * **C. From OneDrive, create a download link, and then copy the link to the Azure AI Video Indexer website.** A **download link** provides access to the file for downloading, but Azure Video Indexer requires a **sharing link** to directly process the video from OneDrive. A sharing link ensures the Azure Video Indexer service can interact with the file directly. --- **Final Answer Recap:** **D. From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website.** This is the most efficient way to index the video directly from OneDrive to the Azure Video Indexer.
55
**Question:** You are building an **internet-based training solution**. The solution requires that a **user's camera and microphone remain enabled**. You need to **monitor a video stream** of the user and **detect when the user asks an instructor a question**. The solution must **minimize development effort**. **What should you include in the solution?** --- **Options:** A. speech-to-text in the Azure AI Speech service B. language detection in Azure AI Language Service C. the Face service in Azure AI Vision D. object detection in Azure AI Custom Vision ---
**Answer:** **A. speech-to-text in the Azure AI Speech service** --- **Explanation:** * **Speech-to-text in the Azure AI Speech service**: The **Speech-to-Text** service transcribes spoken words into text. By enabling speech recognition, you can monitor when the user **speaks** (such as asking a question), which is crucial for detecting specific actions (like asking a question) in the video stream. This solution directly addresses the need to detect when the user asks a question by **converting speech to text**. --- **Why Other Options Are Incorrect:** * **B. Language detection in Azure AI Language Service**: Language detection can identify the language being spoken, but it does not help in detecting specific actions like asking a question or understanding the context of the speech. You would need **speech-to-text** to convert spoken words into text before analyzing the content. * **C. The Face service in Azure AI Vision**: The **Face service** is used for detecting and recognizing faces in images, not for processing audio or detecting specific spoken actions like asking a question. It doesn’t address the need to monitor the **audio** or **detect speech**. * **D. Object detection in Azure AI Custom Vision**: Object detection is used for identifying objects in images or videos, such as recognizing physical objects or scenes. It does not relate to **audio detection** or understanding **speech patterns** when the user asks a question. --- **Final Answer Recap:** **A. speech-to-text in the Azure AI Speech service** is the best option to minimize development effort while enabling the system to detect when a user asks an instructor a question.
56
**Question:**

You are developing an application that will use the Computer Vision client library. The application has the following code:

```
public async Task AnalyzeImage(ComputerVisionClient client, string localImage)
{
    List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
    {
        VisualFeatureTypes.Description,
        VisualFeatureTypes.Tags,
    };

    using (Stream imageStream = File.OpenRead(localImage))
    {
        try
        {
            ImageAnalysis results = await client.AnalyzeImageInStreamAsync(imageStream, features);

            foreach (var caption in results.Description.Captions)
            {
                Console.WriteLine($"{caption.Text} with confidence {caption.Confidence}");
            }

            foreach (var tag in results.Tags)
            {
                Console.WriteLine($"{tag.Name} ({tag.Confidence})");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
```

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

---

**Answer Area:**

**Statements**

1. The code will perform face recognition.
   * Yes
   * No
2. The code will list tags and their associated confidence.
   * Yes
   * No
3. The code will read a file from the local file system.
   * Yes
   * No

---
**Answer Key:**

1. **No**
   * The code performs **description** and **tags** analysis, not face recognition. The features `VisualFeatureTypes.Description` and `VisualFeatureTypes.Tags` are requested, and face recognition is not part of the analysis.
2. **Yes**
   * The code processes tags and their associated confidence by using `results.Tags`; `tag.Name` and `tag.Confidence` are printed in the loop.
3. **Yes**
   * The code uses `File.OpenRead(localImage)` to open and read the image file from the local file system.

---
57
**Question:** You are developing a method that uses the Azure AI Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code: ``` def read_file_url(computervision_client, url_file): read_response = computervision_client.read(url_file, raw=True) read_operation_location = read_response.headers["Operation-Location"] operation_id = read_operation_location.split("/")[-1] read_result = computervision_client.get_read_result(operation_id) for page in read_result.analyze_result.read_results: for line in page.lines: print(line.text) ``` During testing, you discover that the call to the `get_read_result` method occurs before the read operation is complete. You need to prevent the `get_read_result` method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. --- **Answer Area:** **Actions** 1. **A.** Remove the operation\_id parameter. 2. **B.** Add code to verify the `read_results.status` value. 3. **C.** Add code to verify the status of the `read_operation_location` value. 4. **D.** Wrap the call to `get_read_result` within a loop that contains a delay. ---
**Answer Key:**

**B. Add code to verify the `read_results.status` value.**
**D. Wrap the call to `get_read_result` within a loop that contains a delay.**

---

**Explanation:**

* **B**: Checking the status value returned by `get_read_result` confirms whether the asynchronous read operation has finished. Only when the status indicates success should the results be processed.
* **D**: Wrapping the call to `get_read_result` in a loop that contains a delay lets the code poll the service until the operation completes, instead of reading the result too early.

**Why the other options are incorrect:**

* **A. Remove the operation_id parameter**: The `operation_id` is required to retrieve the results of the specific read operation, so it cannot be removed.
* **C. Add code to verify the status of the `read_operation_location` value**: `read_operation_location` is only the URL of the operation; the completion status is reported on the read result itself, not on this header value.

---
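A minimal sketch of the corrected method, following the polling pattern used in the Computer Vision SDK quickstart (the `"notStarted"`/`"running"`/`"succeeded"` status values and the one-second delay are taken from that pattern and assumed to apply to the client used here):

```python
import time

def read_file_url(computervision_client, url_file):
    # Start the asynchronous Read (OCR) operation.
    read_response = computervision_client.read(url_file, raw=True)
    read_operation_location = read_response.headers["Operation-Location"]
    operation_id = read_operation_location.split("/")[-1]

    # Poll until the operation is no longer queued or running (actions B and D).
    while True:
        read_result = computervision_client.get_read_result(operation_id)
        if read_result.status not in ["notStarted", "running"]:
            break
        time.sleep(1)  # delay between polls

    if read_result.status == "succeeded":
        for page in read_result.analyze_result.read_results:
            for line in page.lines:
                print(line.text)
```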
58
**Question:** You have a **Computer Vision** resource named **contoso1** that is hosted in the **West US** Azure region. You need to use **contoso1** to make a different size of a product photo by using the **smart cropping** feature. How should you complete the API URL? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --- **Answer Area:** * **Dropdown 1:** * `https://api.projectoxford.ai` * `https://contoso1.cognitiveservices.azure.com` * `https://westus.api.cognitive.microsoft.com` * **Dropdown 2:** * `areaOfInterest` * `detect` * `generateThumbnail` ---
**Answer:** 1. **Dropdown 1:** * **Selected option:** `https://contoso1.cognitiveservices.azure.com` * This is the correct endpoint URL for accessing the **contoso1** Computer Vision resource hosted in the West US region. 2. **Dropdown 2:** * **Selected option:** `generateThumbnail` * The `generateThumbnail` operation is used for resizing and cropping the image with the **smart cropping** feature, which is the requirement here. --- This setup ensures that you are using the correct **endpoint** and the appropriate **operation** for resizing and cropping the product photo.
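A minimal sketch of calling the thumbnail operation with smart cropping enabled, assuming the key is stored in a `COMPUTER_VISION_KEY` environment variable and the image URL is a placeholder; the `width`, `height`, and `smartCropping` query parameters follow the Computer Vision v3.2 REST reference.

```python
import os
import requests

endpoint = "https://contoso1.cognitiveservices.azure.com"
key = os.environ["COMPUTER_VISION_KEY"]  # assumed environment variable

# generateThumbnail resizes the image and, with smartCropping=true,
# crops around the area of interest.
url = f"{endpoint}/vision/v3.2/generateThumbnail"
params = {"width": 500, "height": 500, "smartCropping": "true"}
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"url": "https://example.com/product-photo.jpg"}  # hypothetical image URL

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()

with open("thumbnail.jpg", "wb") as f:
    f.write(response.content)  # the API returns the resized, cropped image bytes
```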
59
**Question:** You are developing a webpage that will use the Azure Video Analyzer for Media (formerly Video Indexer) service to display videos of internal company meetings. You embed the Player widget and the Cognitive Insights widget into the page. You need to configure the widgets to meet the following requirements: * Ensure that users can search for keywords. * Display the names and faces of people in the video. * Show captions in the video in English (United States). How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. **Answer Area:** Options: * en-US * false * people,keywords * people,search * search * true **Cognitive Insights Widget URL:** `https://www.videoindexer.ai/embed/insights/{accountId}/{videoId}/?widgets={value}&controls={value}` **Player Widget URL:** `https://www.videoindexer.ai/embed/player/{accountId}/{videoId}/?showcaptions={value}&captions={value}` ---
**Answer:**

**Cognitive Insights Widget:**

* `widgets=people,keywords`
* `controls=search`

**Player Widget:**

* `showcaptions=true`
* `captions=en-US`

---

* `widgets=people,keywords` displays the names and faces of people in the video along with the extracted keywords.
* `controls=search` enables the search control so that users can search for keywords.
* `showcaptions=true` with `captions=en-US` shows captions in the video in English (United States).
60
**Question:** You train a Custom Vision model to identify a company’s products by using the Retail domain. You plan to deploy the model as part of an app for Android phones. You need to prepare the model for deployment. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **Select and Place:** **Actions:** 1. Change the model domain. 2. Retrain the model. 3. Test the model. 4. Export the model.
**Answer:**

1. **Change the model domain** - Only compact domains can be exported, so the first step is to switch the project from the Retail domain to **Retail (compact)**.
2. **Retrain the model** - After changing the domain, retrain the model so that a new iteration exists under the compact domain.
3. **Export the model** - Export the retrained iteration to a mobile-friendly format (for example, TensorFlow for Android) for use in the app.

---

This sequence is required because an iteration trained under a non-compact domain cannot be exported directly; the domain change and retraining must happen before the export step.
61
**Question:** You are developing an application to recognize employees' faces by using the Face Recognition API. Images of the faces will be accessible from a URI endpoint. The application has the following code. ``` def add_face(subscription_key, person_group_id, person_id, image_uri): headers = { 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': subscription_key } body = { 'url': image_uri } conn = http.client.HTTPSConnection('westus.api.cognitive.microsoft.com') conn.request('POST', f'/face/v1.0/persongroups/{person_group_id}/persons/{person_id}/persistedFaces', f'{body}', headers) response = conn.getresponse() response_data = response.read() ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. * The code will add a face image to a person object in a person group. * The code will work for up to 10,000 people. * `add_face` can be called multiple times to add multiple face images to a person object. ---
**Answer Section:** * **The code will add a face image to a person object in a person group.** * **Yes** The code adds a face image to the specified `person_id` in the given `person_group_id` by calling the Face API and uploading the image via the specified URL. * **The code will work for up to 10,000 people.** * **Yes** The **Face API allows the creation of person groups that can contain up to 10,000 person objects**. The API supports adding faces to these objects. * **`add_face` can be called multiple times to add multiple face images to a person object.** * **Yes** The `add_face` function can be invoked multiple times to add different face images to a single person object, provided the same `person_id` and `person_group_id` are used. ---
62
You are developing an application that will recognize faults in components produced on a factory production line. The components are specific to your business. You need to use the Custom Vision API to help detect common faults. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **Actions:** * Train the classifier model. * Upload and tag images. * Initialize the training dataset. * Train the object detection model. * Create a project. ---
1. **Create a project** The first step in using Custom Vision is to create a project, where you will define the type of model (classification or object detection) and specify the domain that fits your needs. 2. **Upload and tag images** Once the project is created, you need to upload images and tag them to provide the model with labeled training data. This step is essential for the model to learn from examples. 3. **Train the classifier model** After uploading and tagging the images, the next step is to train the model using the uploaded dataset. This allows the model to learn to classify images, for instance, identifying faulty and non-faulty components.
63
**Question Section:** HOTSPOT - You are building a model that will be used in an iOS app. You have images of cats and dogs. Each image contains either a cat or a dog. You need to use the Custom Vision service to detect whether the images are of a cat or a dog. How should you configure the project in the Custom Vision portal? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: **Answer Area:** **Project Types:** * Classification * Object Detection **Classification Types:** * Multiclass (Single tag per image) * Multilabel (Multiple tags per image) **Domains:** * Audit * Food * General * General (compact) * Landmarks * Landmarks (compact) * Retail * Retail (compact) ---
**Answer Section:**

1. **Project Type - Classification**: Correct because the task is to classify each image into one of two categories (cat or dog), which is a classification problem.
2. **Classification Type - Multiclass (Single tag per image)**: Correct because each image contains either a cat or a dog, so it receives exactly one tag.
3. **Domain - General (compact)**: Correct because the model must run in an iOS app; compact domains are optimized for export to edge and mobile devices, offering the necessary size and performance efficiency.
64
**Question Section:** You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company's website. You need to be able to search for videos based on who is present in the video. What should you do? A. Create a person model and associate the model to the videos. B. Create person objects and provide face images for each object. C. Invite the entire staff of the company to Video Indexer. D. Edit the faces in the videos. E. Upload names to a language model. ---
**Answer Section:** **A. Create a person model and associate the model to the videos.** This is the correct approach. Azure Video Indexer supports creating person models, which can then be associated with videos. By associating face images with person models, you enable facial recognition that allows searching for specific individuals across multiple videos. Why other options are incorrect: * **B. Create person objects and provide face images for each object**: This option is part of the process, but it doesn't fully answer the question. The **person model** needs to be created and associated with the videos for the search to work properly, not just creating person objects. * **C. Invite the entire staff of the company to Video Indexer**: Inviting the staff is unnecessary. The key is associating faces with person models to enable recognition, not inviting individuals to use the service. * **D. Edit the faces in the videos**: Editing faces is not needed. Video Indexer uses face detection and recognition, and there's no need for manual editing of the faces in the videos. * **E. Upload names to a language model**: Uploading names to a language model is related to text processing and is not relevant to facial recognition or searching for people based on who is present in the video.
65
**Question Section:** You use the Custom Vision service to build a classifier. After training is complete, you need to evaluate the classifier. Which two metrics are available for review? Each correct answer presents a complete solution. (Choose two.) NOTE: Each correct selection is worth one point. **Answer Area:** A. recall B. F-score C. weighted accuracy D. precision E. area under the curve (AUC) ---
**Answer Section:** **A. recall** **D. precision** **Explanation:** * **Recall** and **precision** are standard evaluation metrics for classification tasks. * **Recall** measures how many of the actual positive instances were correctly identified by the classifier. * **Precision** measures how many of the instances classified as positive are actually positive. Why the other options are incorrect: * **B. F-score**: While the F-score is a metric that combines precision and recall, it's not listed as an available metric in the Custom Vision service for evaluation. * **C. Weighted accuracy**: This is not one of the metrics provided directly by the Custom Vision service for model evaluation. * **E. Area under the curve (AUC)**: While AUC is a common metric for evaluating classifiers, it's not available in the Custom Vision service by default.
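To make the two metrics concrete, here is a small sketch computing them from raw counts; the counts are made-up values, not output of the Custom Vision service.

```python
# Hypothetical evaluation counts for one tag
true_positives = 40   # predicted positive and actually positive
false_positives = 10  # predicted positive but actually negative
false_negatives = 5   # actual positives the model missed

precision = true_positives / (true_positives + false_positives)  # 0.8
recall = true_positives / (true_positives + false_negatives)     # ~0.889

print(f"precision={precision:.3f}, recall={recall:.3f}")
```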
66
**Question Section:** DRAG DROP - You are developing a call to the Face API. The call must find similar faces from an existing list named **employeefaces**. The **employeefaces** list contains 60,000 images. How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. **Select and Place:** **Values** * "faceListId" * "LargeFaceListId" * "matchFace" * "matchPerson" **Answer Area:** ``` { "faceId": "18c51a87-3a69-47a8-aedc-a547457f08a1", "xxxxxxxx": "employeefaces", "maxNumOfCandidatesReturned": 1, "mode": xxxxxxxxx } ``` ---
**Answer Section:** 1. **"LargeFaceListId"** should be placed in the **"faceListId"** field. The **LargeFaceListId** is used to specify the list of images when the list contains a large number of faces (like the **employeefaces** list, which has 60,000 images). This value is appropriate for large-scale face matching. 2. **"matchFace"** should be placed in the **"mode"** field. The **mode** field is used to specify the type of matching you want to perform. In this case, since you are looking for similar faces, **"matchFace"** is the correct choice to match faces from the list. Why other options are incorrect: * **"faceListId"**: This option is not appropriate in this case as **LargeFaceListId** is better suited for large datasets, like the one in this scenario. * **"matchPerson"**: This option is typically used for identifying specific people, which is not the goal here. The goal is to find similar faces, so **matchFace** is more suitable.
67
hard

**Question Section:**

**QUESTION 49**
Drag and Drop Question

You are developing a photo application that will find photos of a person based on a sample image by using the Face API.
You need to create a POST request to find the photos.
How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

**Select and Place:**

**Values:**

* detect
* findsimilars
* group
* identify
* matchFace
* matchPerson
* verify

**Answer Area:**

```
POST {Endpoint}/face/v1.0/xxxxx

Request Body
{
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "largeFaceListId": "sample_list",
    "maxNumOfCandidatesReturned": 10,
    "mode": xxxxx
}
```

---
**Answer Section:**

**POST URL: findsimilars** ✅
**mode: matchPerson** ✅

---

**Explanation:**

* **findsimilars** is the Face API operation that finds faces similar to a given `faceId` within a face list or large face list.
* **matchPerson** mode matches faces that belong to the same person, which tolerates broader variation (for example, different angles and lighting) and is appropriate when searching for photos of a particular person.
68
**Question Section:**

**HOTSPOT**

You are using the Computer Vision API to detect logos in an image. The API returns a collection of brands with their associated bounding boxes.
You want to display the name of each detected brand along with the coordinates of the top-left corner of the bounding box.

You have the following code segment:

```
foreach (var brand in brands)
{
    if (brand.Confidence >= .75)
    {
        Console.WriteLine($"Logo of {brand.Name} between ({brand.Rectangle.X}, {brand.Rectangle.Y}) and ({brand.Rectangle.X + brand.Rectangle.W}, {brand.Rectangle.Y + brand.Rectangle.H})");
    }
}
```

For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**.
NOTE: Each correct selection is worth one point.

**Statements:**

1. The code will display the name of each detected brand with a confidence equal to or higher than 75 percent.
2. The code will display coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands.
3. The code will display coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands.

---
**Answer Section:**

1. **The code will display the name of each detected brand with a confidence equal to or higher than 75 percent.**
   **Yes**
   The code checks whether **brand.Confidence** is greater than or equal to 0.75 and, if so, displays the brand's name.

2. **The code will display coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands.**
   **Yes**
   The code prints **brand.Rectangle.X** and **brand.Rectangle.Y**, which are the coordinates of the top-left corner of the bounding box.

3. **The code will display coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands.**
   **Yes**
   The code also prints **brand.Rectangle.X + brand.Rectangle.W** and **brand.Rectangle.Y + brand.Rectangle.H**, which are the coordinates of the bottom-right corner of the bounding box.
69
Hard **Question Section:** **HOTSPOT** You develop an application that uses the Face API. You need to add multiple images to a person group. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: **Answer Area:** ``` Parallel.For(0, PersonCount, async i => { Guid personId = persons[i].PersonId; string personImageDir = $"/path/to/person/{i}/images"; foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg")) { using ( XXXXXX t = File.OpenRead(imagePath)) { await faceClient.PersonGroupPerson.XXXXX (personGroupId, personId, t); } } }); ``` **Dropdown 1:** * File * Stream * Uri * Url **Dropdown 2:** * AddFaceFromStreamAsync * AddFaceFromUrlAsync * CreateAsync * GetAsync ---
**Answer Section:** 1. **Dropdown 1 - Stream**: The code needs to use a **Stream** object to pass the image file into the API. The `File.OpenRead(imagePath)` method returns a stream that can be used to send the image data to the Face API. 2. **Dropdown 2 - AddFaceFromStreamAsync**: Since the image data is coming from a stream, **AddFaceFromStreamAsync** is the correct method to use for adding the face to the person group. Why other options are incorrect: * **Dropdown 1 - File**: This is not a correct option for passing the image to the Face API. A **Stream** is required, not just the file object itself. * **Dropdown 1 - Uri**: The image data is being read from a file on the disk, not from a URI, so this option is not appropriate. * **Dropdown 1 - Url**: This option would be relevant if the image were being retrieved from an online URL, but it’s not the case here. * **Dropdown 2 - AddFaceFromUrlAsync**: This method is used when the image is being accessed via a URL, not a local file stream. * **Dropdown 2 - CreateAsync**: This is used for creating a new person group or person, not for adding faces to a person group. * **Dropdown 2 - GetAsync**: This method is used for retrieving details about a person or person group, not for adding faces.
70
**Question Section:**

Your company uses an Azure Cognitive Services solution to detect faces in uploaded images. The method to detect the faces uses the following code:

```
static async Task DetectFaces(string imageFilePath)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

    string requestParameters = "detectionModel=detection_01&returnFaceId=true&returnFaceLandmarks=false";
    string uri = endpoint + "/face/v1.0/detect?" + requestParameters;
    HttpResponseMessage response;

    byte[] byteData = GetImageAsByteArray(imageFilePath);

    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(uri, content);
        string contentString = await response.Content.ReadAsStringAsync();
        ProcessDetection(contentString);
    }
}
```

You discover that the solution frequently fails to detect faces in blurred images and in images that contain sideways faces.
You need to increase the likelihood that the solution can detect faces in blurred images and images that contain sideways faces.
What should you do?

**Answer Options:**

A. Use a different version of the Face API.
B. Use the Computer Vision service instead of the Face service.
C. Use the Identify method instead of the Detect method.
D. Change the detection model.

---
**Answer Section:** **D. Change the detection model.** This is the correct solution. The detection model can be adjusted for better accuracy in detecting faces under different conditions. The **detectionModel** parameter in the request specifies the model to use, and changing it can improve detection in blurred or sideways images. The **detection\_01** model might not be optimal for all cases, so switching to a more robust model, such as **detection\_02**, could improve detection. Why other options are incorrect: * **A. Use a different version of the Face API**: Using a different version of the API is unlikely to address the problem of detecting blurred or sideways faces, as it's more related to the model used for face detection. * **B. Use the Computer Vision service instead of the Face service**: The **Computer Vision** service is primarily used for general image analysis (e.g., object detection, OCR), not specifically for face detection. The **Face API** is the correct service for detecting faces in images. * **C. Use the Identify method instead of the Detect method**: The **Identify** method is used for recognizing known individuals based on a face list or person group. The **Detect** method is used for detecting faces in images. This option does not apply to improving detection in blurred or sideways images.
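A minimal sketch of the same detect call with a newer detection model, assuming a Face endpoint and key in `FACE_ENDPOINT` and `FACE_KEY` environment variables and a hypothetical local image file; `detection_03` is the documented model with improved accuracy on small, blurry, and rotated faces.

```python
import os
import requests

endpoint = os.environ["FACE_ENDPOINT"]   # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["FACE_KEY"]             # assumed environment variables

params = {
    "detectionModel": "detection_03",    # newer model, better on blurry and rotated faces
    "returnFaceId": "true",
    "returnFaceLandmarks": "false",
}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/octet-stream",
}

with open("photo.jpg", "rb") as f:       # hypothetical local image
    image_bytes = f.read()

response = requests.post(f"{endpoint}/face/v1.0/detect",
                         params=params, headers=headers, data=image_bytes)
response.raise_for_status()
print(response.json())                   # list of detected faces with faceId values
```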
71
**Question Section:** You have the following Python function for creating Azure Cognitive Services resources programmatically: ``` def create_resource(resource_name, kind, account_tier, location): parameters = CognitiveServicesAccount(sku=Sku(name=account_tier), kind=kind, location=location, properties={}) result = client.accounts.create(resource_group_name, resource_name, parameters) ``` You need to call the function to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically. Which code should you use? **A.** create_resource("res1", "ComputerVision", "F0", "westus") **B.** create_resource("res1", "CustomVision.Prediction", "F0", "westus") **C.** create_resource("res1", "ComputerVision", "S0", "westus") **D.** create_resource("res1", "CustomVision.Prediction", "S0", "westus") ---
**Answer Section:** **A. create_resource("res1", "ComputerVision", "F0", "westus")** **Explanation:** * **"ComputerVision"** is the correct **kind** for generating captions of images automatically using Azure Cognitive Services. * **"F0"** is the **account\_tier** for the free tier of the Computer Vision resource. * **"westus"** is the correct **location** in the West US Azure region. Why the other options are incorrect: * **B. create\_resource("res1", "CustomVision.Prediction", "F0", "westus")**: **"CustomVision.Prediction"** is used for prediction tasks in custom vision models, not for generating captions of images. * **C. create\_resource("res1", "ComputerVision", "S0", "westus")**: **"S0"** is a paid tier, not a free tier. The question specifies the need for a **free** resource. * **D. create\_resource("res1", "CustomVision.Prediction", "S0", "westus")**: This option is not valid because **CustomVision.Prediction** is used for custom image classification tasks, not for captioning images. Additionally, **S0** is a paid tier.
72
**Question Section:** You are developing a method that uses the Azure AI Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code: ``` def read_file_url(computervision_client, url_file): read_response = computervision_client.read(url_file, raw=True) read_operation_location = read_response.headers["Operation-Location"] operation_id = read_operation_location.split("/")[-1] read_result = computervision_client.get_read_result(operation_id) for page in read_result.analyze_result.read_results: for line in page.lines: print(line.text) ``` During testing, you discover that the call to the **get\_read\_result** method occurs before the read operation is complete. You need to prevent the **get\_read\_result** method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. **Answer Options:** A. Remove the operation\_id parameter. B. Add code to verify the read\_result.status value. C. Add code to verify the status of the read\_operation\_location value. D. Wrap the call to get\_read\_result within a loop that contains a delay. ---
**Answer Section:** 1. **B. Add code to verify the read\_result.status value.** **Explanation**: You need to ensure that the OCR operation has completed successfully before retrieving the result. The **status** field in the **read\_result** can be used to verify if the operation is complete. Once the status indicates completion, you can proceed to call the **get\_read\_result** method. 2. **D. Wrap the call to get\_read\_result within a loop that contains a delay.** **Explanation**: A delay should be introduced to ensure the program waits until the OCR operation completes. You can wrap the **get\_read\_result** call in a loop and check for the completion status of the operation periodically, allowing the system to wait for the OCR process to finish before trying to retrieve the results. Why other options are incorrect: * **A. Remove the operation\_id parameter.** The **operation\_id** is necessary to identify the specific operation for the **get\_read\_result** method. Removing it would make the request invalid. * **C. Add code to verify the status of the read\_operation\_location value.** The **read\_operation\_location** header gives the location of the operation, but it is not directly responsible for checking whether the operation has completed. Verifying the **read\_result.status** value is the correct approach.
73
hard **Question Section:** **HOTSPOT** You are building an app that will enable users to upload images. The solution must meet the following requirements: * Automatically suggest alt text for the images. * Detect inappropriate images and block them. * Minimize development effort. You need to recommend a computer vision endpoint for each requirement. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **Answer Area:** **Generate alt text:** * [https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate](https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate) * [https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image](https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image) * [https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description](https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description) **Detect inappropriate content:** * [https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate](https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate) * [https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image](https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image) * [https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description](https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description) * [https://westus.api.cognitive.microsoft.com/vision/v3.2/describe?maxCandidates=1](https://westus.api.cognitive.microsoft.com/vision/v3.2/describe?maxCandidates=1) ---
**Answer Section:** 1. **Generate alt text:** **[https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description](https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description)** As per the feedback, this endpoint is used for both generating descriptions of images (alt text) and detecting inappropriate content. It analyzes the image and returns descriptions based on various visual features (such as descriptions of the content) and also evaluates adult or racy content. 2. **Detect inappropriate content:** **[https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description](https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description)** This same endpoint can be used for detecting inappropriate content by including the **Adult** visual feature. It will return properties such as `isAdultContent`, `isRacyContent`, and their respective confidence scores (e.g., `adultScore`, `racyScore`), which are useful for blocking inappropriate images. Why the other options are incorrect: * **Generate alt text:** * **[https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate](https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate)**: This is used for content moderation (e.g., detecting adult content), but not specifically for generating alt text or image descriptions. * **[https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image](https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image)**: This is for custom image classification and not for generating alt text. * **Detect inappropriate content:** * **[https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate](https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate)**: This endpoint does indeed detect inappropriate content, but it is not as versatile for other image analysis tasks, such as generating descriptions for alt text. * **[https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image](https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image)**: This is a custom vision endpoint, not designed for detecting inappropriate content like the **Analyze** API from the Computer Vision service. * **[https://westus.api.cognitive.microsoft.com/vision/v3.2/describe?maxCandidates=1](https://westus.api.cognitive.microsoft.com/vision/v3.2/describe?maxCandidates=1)**: This endpoint is for describing images in natural language but doesn't offer the explicit adult content detection required for blocking inappropriate images. --- Thus, the correct endpoint for both **generating alt text** and **detecting inappropriate content** is **[https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description](https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description)**.
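A minimal sketch of a single call to that endpoint serving both requirements, assuming a key in a `COMPUTER_VISION_KEY` environment variable and a placeholder image URL; the `description` and `adult` response properties follow the Computer Vision v3.2 Analyze reference.

```python
import os
import requests

endpoint = "https://westus.api.cognitive.microsoft.com"
key = os.environ["COMPUTER_VISION_KEY"]              # assumed environment variable

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Adult,Description"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/upload.jpg"},  # hypothetical uploaded image
)
response.raise_for_status()
analysis = response.json()

# Requirement 1: suggest alt text from the top caption.
captions = analysis["description"]["captions"]
alt_text = captions[0]["text"] if captions else "image"

# Requirement 2: block inappropriate images.
adult = analysis["adult"]
blocked = adult["isAdultContent"] or adult["isRacyContent"]

print(alt_text, blocked)
```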
74
**Question Section:** You need to build a solution that will use optical character recognition (OCR) to scan sensitive documents by using the Computer Vision API. The solution must **NOT** be deployed to the public cloud. What should you do? **A.** Build an on-premises web app to query the Computer Vision endpoint. **B.** Host the Computer Vision endpoint in a container on an on-premises server. **C.** Host an exported Open Neural Network Exchange (ONNX) model on an on-premises server. **D.** Build an Azure web app to query the Computer Vision endpoint. ---
**Answer Section:** **B. Host the Computer Vision endpoint in a container on an on-premises server.** This is the correct choice. Microsoft offers the **Azure Cognitive Services Containers** that can be deployed on-premises to avoid sending data to the public cloud. By hosting the Computer Vision service in a container on an on-premises server, you can run the OCR solution locally, ensuring that sensitive documents do not leave the internal network. Why the other options are incorrect: * **A. Build an on-premises web app to query the Computer Vision endpoint:** This would still require using the **cloud-based** Computer Vision API, which contradicts the requirement of **NOT deploying to the public cloud**. The data would still be sent to the cloud for processing. * **C. Host an exported Open Neural Network Exchange (ONNX) model on an on-premises server:** While ONNX models can be hosted on-premises, the **Computer Vision API** itself does not use ONNX models for OCR. You would need to implement a separate custom OCR model, and it would not offer the same capabilities as the pre-trained Computer Vision OCR model. * **D. Build an Azure web app to query the Computer Vision endpoint:** This solution would deploy the app in the **cloud**, which is against the requirement of avoiding the public cloud for processing sensitive documents.
75
**Question Section:** You have an Azure Cognitive Search solution and a collection of handwritten letters stored as JPEG files. You plan to index the collection. The solution must ensure that queries can be performed on the contents of the letters. You need to create an indexer that has a skillset. Which skill should you include? **A.** Key phrase extraction **B.** Optical character recognition (OCR) **C.** Document extraction **D.** Image analysis ---
**Answer Section:** **B. Optical character recognition (OCR)** This is the correct skill to include in your indexer. OCR allows the extraction of text from images, including handwritten letters in JPEG files. By using the OCR skill, Azure Cognitive Search can extract the content of the handwritten letters, making it searchable. Why the other options are incorrect: * **A. Key phrase extraction**: This skill is used to identify important terms or phrases within text, but it is not applicable for extracting text from images. It works with text that already exists in the document, not from images. * **C. Document extraction**: This skill is generally used to extract structured data from documents like forms or PDFs, but it is not directly related to extracting text from images or handwritten letters. * **D. Image analysis**: While this skill provides insights into the content of images (e.g., recognizing objects or identifying faces), it does not focus on extracting text, which is the primary requirement here for making the letters searchable.
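A hedged sketch of what the skillset definition could look like when created through the Azure Cognitive Search REST API. The service name, keys, and skillset name are placeholders; the OCR skill consumes the normalized images that the indexer generates from the JPEG files:

```python
import requests

# Hypothetical service name and keys for illustration.
search_service = "https://<your-search-service>.search.windows.net"
admin_key = "<admin-api-key>"

skillset = {
    "name": "letters-skillset",
    "description": "Extract text from scanned handwritten letters",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
            "context": "/document/normalized_images/*",
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "text", "targetName": "extractedText"}]
        }
    ],
    "cognitiveServices": {
        "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
        "key": "<cognitive-services-key>"
    }
}

resp = requests.put(
    f"{search_service}/skillsets/letters-skillset",
    params={"api-version": "2020-06-30"},
    headers={"api-key": admin_key, "Content-Type": "application/json"},
    json=skillset,
)
print(resp.status_code)
# The indexer that uses this skillset must also set "imageAction": "generateNormalizedImages"
# so that image content from the JPEG files is passed to the OCR skill.
```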
76
Hard **Question Section:** **HOTSPOT** You have a library that contains thousands of images. You need to tag the images as photographs, drawings, or clipart. Which service endpoint and response property should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **Answer Area:** **Service endpoint:** * Computer Vision analyze images * Computer Vision object detection * Custom Vision image classification * Custom Vision object detection **Property:** * categories * description * imageType * metadata * objects ---
**Answer Section:** 1. **Service endpoint - Computer Vision analyze images** This endpoint is the correct one to use for analyzing images and extracting general information about them, such as whether they are photographs, drawings, or clipart. It can analyze image content and return various features, including image type. 2. **Property - imageType** The **imageType** property will give you the classification of the image, including whether it is a photograph, drawing, or clipart. This is the most suitable property for identifying the type of image in the library. Why the other options are incorrect: * **Computer Vision object detection**: This endpoint is designed for detecting specific objects within an image, but it is not suitable for identifying general image categories such as photographs or drawings. * **Custom Vision image classification**: While this can classify images based on a custom model, it is not as generic as the **Computer Vision analyze images** endpoint for general image-type classification. * **Custom Vision object detection**: Similar to the **object detection** option above, this is focused on detecting specific objects within images and is not intended for general image categorization. * **categories**: The **categories** property is useful for categorizing the image, but it is not specific to distinguishing between photographs, drawings, or clipart. * **description**: This provides a textual description of the image content, but does not focus on identifying the specific image type (e.g., photograph, drawing, clipart). * **metadata**: Metadata contains information about the image, such as creation time or file size, and is not useful for tagging the image type. * **objects**: This property would return detected objects within the image, which is more suited for object detection tasks, not identifying image types like photographs or drawings.
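A minimal Python sketch of reading the **imageType** property from the analyze endpoint. The endpoint, key, and image URL are placeholders, and the thresholds on `clipArtType` and `lineDrawingType` are illustrative choices rather than fixed rules:

```python
import requests

endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
key = "<computer-vision-key>"

resp = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "ImageType"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/library-image.png"},
).json()

# imageType returns clipArtType (0-3, where 0 means non-clipart)
# and lineDrawingType (0 or 1).
image_type = resp["imageType"]
if image_type["lineDrawingType"] == 1:
    tag = "drawing"
elif image_type["clipArtType"] >= 2:
    tag = "clipart"
else:
    tag = "photograph"
print(tag)
```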
77
You have an app that captures live video of exam candidates. You need to use the Face service to validate that the subjects of the videos are real people. What should you do? A. Call the face detection API and retrieve the face rectangle by using the FaceRectangle attribute. B. Call the face detection API repeatedly and check for changes to the FaceAttributes.HeadPose attribute. C. Call the face detection API and use the FaceLandmarks attribute to calculate the distance between pupils. D. Call the face detection API repeatedly and check for changes to the FaceAttributes.Accessories attribute.
**B. Call the face detection API repeatedly and check for changes to the FaceAttributes.HeadPose attribute.** **Explanation:** * The **FaceAttributes.HeadPose** attribute provides the position of the subject's head (pitch, roll, and yaw). By checking for **changes in head pose**, you can confirm that the face is not a static image but is likely a real person whose head is moving, which is an indication of a live subject. Why the other options are incorrect: * **A. Call the face detection API and retrieve the face rectangle by using the FaceRectangle attribute**: The **FaceRectangle** attribute is used to locate the face in an image, but it does not provide any information about whether the subject is real or if the face is live. * **C. Call the face detection API and use the FaceLandmarks attribute to calculate the distance between pupils**: **FaceLandmarks** can be used to detect key facial points, but calculating the distance between pupils won't definitively validate if the person is real, and it's not a reliable indicator for determining liveliness. * **D. Call the face detection API repeatedly and check for changes to the FaceAttributes.Accessories attribute**: The **Accessories** attribute indicates whether the person is wearing accessories such as glasses or a hat, but it does not help validate whether the person is a real, live subject.
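A rough sketch of the idea in Python using the Face REST API: detect the face in two frames captured a moment apart and require the head pose to change. The endpoint, key, frame file names, and the yaw threshold are placeholders chosen for illustration:

```python
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<face-key>"

def head_pose(image_bytes):
    """Detect the first face in a frame and return its head pose (pitch, roll, yaw)."""
    faces = requests.post(
        f"{endpoint}/face/v1.0/detect",
        params={"returnFaceAttributes": "headPose"},
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"},
        data=image_bytes,
    ).json()
    return faces[0]["faceAttributes"]["headPose"] if faces else None

# frame_t0.jpg and frame_t1.jpg are two frames from the live video, captured a moment apart.
with open("frame_t0.jpg", "rb") as f0, open("frame_t1.jpg", "rb") as f1:
    pose_before = head_pose(f0.read())
    pose_after = head_pose(f1.read())

# A static photo produces an identical pose in every frame; a live subject's yaw drifts.
is_live = pose_before and pose_after and abs(pose_before["yaw"] - pose_after["yaw"]) > 2
print(is_live)
```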
78
hard **Question Section:** **HOTSPOT** You make an API request and receive the results shown in the following exhibits. *Note: A large JSON file is referenced here, but not displayed.* Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. **Answer Area:** 1. The API **\[answer choice]** faces. * **detects** * **finds similar** * **recognizes** * **verifies** 2. A face that can be used in person enrollment is at position **\[answer choice]** within the photo. * **118, 754** * **497, 191** * **797, 201** * **1167, 249** ---
**Answer Section:** 1. **The API detects faces.** **Correct Answer: "detects"** The API's primary role in this case is to **detect** faces within an image, which matches the behavior of the face detection endpoint. 2. **A face that can be used in person enrollment is at position 797, 201 within the photo.** **Correct Answer: "797, 201"** The **faceRectangle** field in the response provides the coordinates for the top-left corner of the detected face (i.e., **"LEFT": 797**, **"TOP": 201**). This corresponds to the position of the first face that is suitable for **person enrollment**. --- The reasoning behind the answers: * The **"detects"** option corresponds to the face detection API endpoint that is used to identify faces in images. * The **"797, 201"** position corresponds to the **faceRectangle** coordinates provided in the response, indicating the location of the detected face. This is the position used for **person enrollment** when the quality of the image is sufficient.
79
You have an Azure subscription that contains an AI enrichment pipeline in Azure Cognitive Search and an Azure Storage account that has 10 GB of scanned documents and images. You need to index the documents and images in the storage account. The solution must minimize how long it takes to build the index. What should you do? A. From the Azure portal, configure parallel indexing. B. From the Azure portal, configure scheduled indexing. C. Configure field mappings by using the REST API. D. Create a text-based indexer by using the REST API.
**Answer Section:** **A. From the Azure portal, configure parallel indexing.** **Explanation:** To minimize the time it takes to build the index, you should use **parallel indexing**. This approach allows the Azure Cognitive Search indexer to process multiple documents concurrently, speeding up the indexing process. Parallel indexing is especially beneficial when working with large collections of documents and images, as it leverages multiple threads to process the data in parallel. Why the other options are incorrect: * **B. From the Azure portal, configure scheduled indexing:** Scheduled indexing is useful when you want to index data at specific intervals (e.g., daily or weekly), but it does not optimize the speed of the indexing process. It focuses more on when to perform the indexing rather than how quickly the index is built. * **C. Configure field mappings by using the REST API:** Field mappings are important for customizing the structure of the index, but they don't directly affect the speed of the indexing process. This step focuses on how data is mapped and indexed, not on the performance of indexing. * **D. Create a text-based indexer by using the REST API:** While creating a text-based indexer using the REST API is part of setting up the index, it doesn't specifically address minimizing the time taken to build the index. The speed is more impacted by parallel indexing, which improves the performance during indexing.
80
**Question Section:** **DRAG DROP** You need to analyze video content to identify any mentions of specific company names. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. --- **Actions List:** 1. Add the specific company names to the exclude list. 2. Add the specific company names to the include list. 3. From Content model customization, select **Language**. 4. Sign in to the Custom Vision website. 5. Sign in to the Azure Video Analyzer for Media website. 6. From Content model customization, select **Brands**. ---
**Answer Section:** **Correct Sequence:** 1. **Sign in to the Azure Video Analyzer for Media website.** First, you need to sign in to the platform to begin configuring the analysis. 2. **From Content model customization, select Brands.** After signing in, you select **Brands** in the Content model customization section to focus on identifying mentions of specific brands or company names. 3. **Add the specific company names to the include list.** Lastly, you add the specific company names to the **include list** to ensure that only mentions of those companies are included in the analysis. --- This sequence ensures that you are correctly signed in to the right platform, set the focus on brands, and include the company names in the analysis for detection.
81
**Question Section:** You have a mobile app that manages printed forms. You need the app to send images of the forms directly to Forms Recognizer to extract relevant information. For compliance reasons, the image files must not be stored in the cloud. In which format should you send the images to the Form Recognizer API endpoint? **A.** raw image binary **B.** form URL encoded **C.** JSON ---
**Answer Section:** **A. raw image binary** To ensure that the image files are sent directly to the **Forms Recognizer API** without being stored in the cloud, you should send the images in **raw image binary** format. This allows the API to process the images directly from the request without storing them on the server. Why the other options are incorrect: * **B. form URL encoded**: This is typically used for sending form data, not for sending raw image files. * **C. JSON**: JSON is used for structured data and is not the correct format for directly sending image files. **Correct Answer: A. raw image binary.**
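A hedged sketch of sending the raw bytes straight from the device to the Form Recognizer v2.1 Layout API (a prebuilt or custom model endpoint follows the same pattern). The resource endpoint, key, and file name are placeholders:

```python
import requests, time

endpoint = "https://<your-form-recognizer>.cognitiveservices.azure.com"  # placeholder
key = "<form-recognizer-key>"

with open("form.jpg", "rb") as f:
    image_bytes = f.read()  # raw image binary, never written to cloud storage by the app

# Send the bytes directly in the request body.
submit = requests.post(
    f"{endpoint}/formrecognizer/v2.1/layout/analyze",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"},
    data=image_bytes,
)
result_url = submit.headers["Operation-Location"]  # poll this URL for the analysis result

while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)
print(result["status"])
```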
82
**Question Section:** You plan to build an app that will generate a list of tags for uploaded images. The app must meet the following requirements: * Generate tags in a user's preferred language. * Support English, French, and Spanish. * Minimize development effort. You need to build a function that will generate the tags for the app. Which Azure service endpoint should you use? **A.** Content Moderator Image Moderation **B.** Custom Vision image classification **C.** Computer Vision Image Analysis **D.** Custom Translator ---
**Answer Section:** **C. Computer Vision Image Analysis** The **Computer Vision Image Analysis** endpoint is the most suitable service for generating tags for images. This service provides features such as object detection, image classification, and tagging based on pre-built models, minimizing development effort. Additionally, it supports multiple languages, including English, French, and Spanish, allowing you to generate tags in a user's preferred language. Why the other options are incorrect: * **A. Content Moderator Image Moderation**: This service is primarily used for detecting and filtering out inappropriate content in images, not for generating tags. * **B. Custom Vision image classification**: This service is used for training custom models to classify images into specific categories, but it would require more effort to set up compared to the **Computer Vision Image Analysis** endpoint, which already has pre-built models for image tagging. * **D. Custom Translator**: This service is used for translation between different languages, not for generating tags for images. **Correct Answer: C. Computer Vision Image Analysis.**
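A minimal sketch using the Image Analysis v3.2 tag operation with the `language` query parameter. The endpoint, key, and image URL are placeholders, and the set of supported language codes should be confirmed against the current documentation:

```python
import requests

endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
key = "<computer-vision-key>"

def get_tags(image_url: str, language: str = "en"):
    """Return image tags in the user's preferred language (e.g. 'en', 'fr', 'es')."""
    resp = requests.post(
        f"{endpoint}/vision/v3.2/tag",
        params={"language": language},
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
        json={"url": image_url},
    ).json()
    return [t["name"] for t in resp["tags"]]

print(get_tags("https://example.com/photo.jpg", language="fr"))
```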
83
DUPLICATE QUESTION? --- **Question Section:** **HOTSPOT** You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands. You have the following code segment:

```
foreach (var brand in brands)
{
    if (brand.Confidence >= .75)
    {
        Console.WriteLine($"Logo of {brand.Name} between ({brand.Rectangle.X}, {brand.Rectangle.Y}) and ({brand.Rectangle.X + brand.Rectangle.W}, {brand.Rectangle.Y + brand.Rectangle.H})");
    }
}
```

For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. NOTE: Each correct selection is worth one point. ---
**Answer Section:** 1. **The code will display the name of each detected brand with a confidence equal to or higher than 75 percent.** **Yes** The code checks whether **brand.Confidence** is greater than or equal to **0.75** and, if so, displays the brand name together with its bounding-box coordinates. 2. **The code will display coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands.** **Yes** The code prints **brand.Rectangle.X** and **brand.Rectangle.Y**, which are the coordinates of the top-left corner of the bounding box. 3. **The code will display coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands.** **Yes** The code also prints **brand.Rectangle.X + brand.Rectangle.W** and **brand.Rectangle.Y + brand.Rectangle.H**, which are exactly the coordinates of the bottom-right corner of the bounding box. --- Final Answer: 1. **Yes** 2. **Yes** 3. **Yes**
84
HARD **Question Section:** **DRAG DROP** You have a factory that produces cardboard packaging for food products. The factory has intermittent internet connectivity. The packages are required to include four samples of each product. You need to build a Custom Vision model that will identify defects in packaging and provide the location of the defects to an operator. The model must ensure that each package contains the four products. Which project type and domain should you use? To answer, drag the appropriate options to the correct targets. Each option may be used once, more than once, or not at all. **Answer Area:** **Options** * Food * General * General (compact) * Image classification * Logo * Object detection **Fill in** * Project Type: xxxxxx * Domain: xxxxxx ---
**Answer Section:** 1. **Project type - Object detection**: For detecting the location of defects in packaging and ensuring the four products are included, the correct **project type** is **Object detection**. This type of model can locate the position of each product and identify defects in the image. 2. **Domain - General (compact)**: Since the factory has **intermittent internet connectivity**, using the **General (compact)** domain is ideal. This domain is optimized for edge devices and situations where you need the model to work offline, making it suitable for the factory environment with limited internet access. Why other options are incorrect: * **Food (Project type)**: While relevant for identifying food items, **Food** is not the right **project type** for detecting defects or ensuring product inclusion. * **General (Domain)**: While **General** could work for image classification, the **General (compact)** domain is better suited for offline use. * **Image classification (Project type)**: This is more suited for classifying images into categories, but **Object detection** is needed to locate defects and verify products. * **Logo (Project type)**: **Logo** is focused on detecting logos, which is not applicable in this case.
85
**hard question** https://www.examtopics.com/discussions/microsoft/view/112135-exam-ai-102-topic-2-question-31-discussion/ **HOTSPOT** You are building a model to detect objects in images. The performance of the model based on training data is shown in the following exhibit. *Note: An image with precision, recall, and mAP values is referenced here.* Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. **NOTE: Each correct selection is worth one point.** **Answer Area:** 1. The percentage of false positives is **\[answer choice]**. * 0 * 25 * 50 * 75 * 100 2. The value for the number of true positives divided by the total number of true positives and false negatives is **\[answer choice]**. * 0 * 25 * 50 * 75 * 100 ---
**Answer Section:** 1. **The percentage of false positives is 0.** **Explanation:** The model shows **100% precision**, which means that all the positive predictions made by the model are correct (i.e., there are no false positives). This means the percentage of false positives is **0**. 2. **The value for the number of true positives divided by the total number of true positives and false negatives is 25.** **Explanation:** The **recall** is shown as **25%**, which directly correlates to the formula for recall (True Positives / (True Positives + False Negatives)). Therefore, the value for the number of true positives divided by the total number of true positives and false negatives is **25**.
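As a quick check of the arithmetic, with illustrative counts rather than the actual values behind the exhibit:

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Example consistent with the exhibit: 100% precision forces fp = 0,
# and 25% recall means only a quarter of the actual objects were found.
tp, fp, fn = 1, 0, 3
print(precision(tp, fp))  # 1.0  -> 0% false positives
print(recall(tp, fn))     # 0.25 -> 25%
```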
86
You are building an app that will include one million scanned magazine articles. Each article will be stored as an image file. You need to configure the app to extract text from the images. The solution must minimize development effort. What should you include in the solution? A. Computer Vision Image Analysis B. the Read API in Computer Vision C. Form Recognizer D. Azure Cognitive Service for Language
**Answer Section:** **B. the Read API in Computer Vision** The **Read API** in **Computer Vision** is the ideal solution for extracting text from scanned images. It is designed specifically for Optical Character Recognition (OCR) and works with a variety of image formats. This API is easy to use and will minimize development effort since it provides an out-of-the-box solution for extracting printed or handwritten text from images. Why the other options are incorrect: * **A. Computer Vision Image Analysis**: While this service provides a wide range of image analysis capabilities (such as object detection), it is not specifically designed for OCR or extracting text from images. * **C. Form Recognizer**: This is a great service for extracting structured data from forms (such as invoices, receipts, etc.), but it is not primarily intended for general text extraction from scanned articles. * **D. Azure Cognitive Service for Language**: This service is used for natural language processing tasks (like sentiment analysis, entity recognition, etc.), but it does not handle OCR or text extraction from images. **Correct Answer: B. the Read API in Computer Vision**.
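A hedged sketch of the asynchronous Read API flow in Python (endpoint, key, and file name are placeholders): submit the scanned article, then poll the `Operation-Location` URL until the result is ready:

```python
import requests, time

endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
key = "<computer-vision-key>"

# Submit the scanned article; the Read API processes it asynchronously.
with open("article.jpg", "rb") as f:
    submit = requests.post(
        f"{endpoint}/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
operation_url = submit.headers["Operation-Location"]

# Poll until the operation completes, then collect the recognized lines of text.
while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

lines = [line["text"]
         for page in result["analyzeResult"]["readResults"]
         for line in page["lines"]]
print("\n".join(lines))
```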
87
**Question Section:** You have a **20-GB video file** named **File1.avi** that is stored on a local drive. You need to **index File1.avi** by using the **Azure Video Indexer website**. **What should you do first?** **A.** Upload File1.avi to an Azure Storage queue **B.** Upload File1.avi to the Azure Video Indexer website **C.** Upload File1.avi to Microsoft OneDrive **D.** Upload File1.avi to the [www.youtube.com](http://www.youtube.com) webpage ---
**Answer Section:** ✅ **C. Upload File1.avi to Microsoft OneDrive** **Explanation:** The **Azure Video Indexer website** has a **direct upload limit of 2 GB** for video files. Since **File1.avi is 20 GB**, you cannot upload it directly through the browser. Instead, you must: 1. Upload the large video file to a supported **cloud storage** location such as **OneDrive**, **Azure Blob Storage**, or **Dropbox**. 2. Then provide a **shared URL** from that location to Video Indexer. **OneDrive** is a supported integration that allows Azure Video Indexer to ingest and index larger files through a link. --- Why the other options are incorrect: * **A. Azure Storage queue**: Queues are for messaging, **not file storage**. * **B. Azure Video Indexer website**: Direct upload won't work for files over 2 GB. * **D. YouTube**: YouTube links are **not supported** as video sources for indexing due to licensing and access restrictions. --- **Correct Answer: C. Upload File1.avi to Microsoft OneDrive** ✅
88
--- **Question Section:** **HOTSPOT** You are building an app that will share user images. You need to configure the app to meet the following requirements: * Uploaded images must be scanned and any text must be extracted from the images. * Extracted text must be analyzed for the presence of profane language. * The solution must minimize development effort. What should you use for each requirement? To answer, select the appropriate options in the answer area. **NOTE: Each correct selection is worth one point.** --- **Answer Area:** 1. **Text extraction:** * Azure AI Language * Azure AI Computer Vision * Content Moderator * Azure AI Custom Vision * Azure AI Document Intelligence 2. **Profane language detection:** * Azure AI Language * Azure AI Computer Vision * Content Moderator * Azure AI Custom Vision * Azure AI Document Intelligence ---
**Answer Section:** 1. **Text extraction: Azure AI Computer Vision** **Explanation:** The **Azure AI Computer Vision** service includes Optical Character Recognition (OCR), which is specifically designed for extracting text from images, making it the most appropriate choice for this requirement. 2. **Profane language detection: Content Moderator** **Explanation:** **Content Moderator** is the best service for detecting profane language in text. This service includes functionality for text moderation, including detecting offensive or profane content in extracted text. --- Final Answer: 1. **Text extraction: Azure AI Computer Vision** 2. **Profane language detection: Content Moderator**
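A minimal sketch of the profanity check in Python, assuming the text has already been extracted by the Computer Vision OCR step; the Content Moderator endpoint and key are placeholders:

```python
import requests

cm_endpoint = "https://<your-content-moderator>.cognitiveservices.azure.com"  # placeholder
cm_key = "<content-moderator-key>"

def contains_profanity(extracted_text: str) -> bool:
    """Screen OCR output for profane terms using the Content Moderator text API."""
    resp = requests.post(
        f"{cm_endpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen",
        params={"classify": "True", "language": "eng"},
        headers={"Ocp-Apim-Subscription-Key": cm_key, "Content-Type": "text/plain"},
        data=extracted_text.encode("utf-8"),
    ).json()
    # "Terms" lists any matched profane terms; it is absent or empty for clean text.
    return bool(resp.get("Terms"))
```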
89
**Question Section:** You are building an app that will share user images. You need to configure the app to perform the following actions when a user uploads an image: * Categorize the image as either a photograph or a drawing. * Generate a caption for the image. The solution must minimize development effort. Which two services should you include in the solution? Each correct answer presents part of the solution. **NOTE: Each correct selection is worth one point.** **A.** object detection in Azure AI Computer Vision **B.** content tags in Azure AI Computer Vision **C.** image descriptions in Azure AI Computer Vision **D.** image type detection in Azure AI Computer Vision **E.** image classification in Azure AI Custom Vision ---
**Answer Section:** 1. **D. image type detection in Azure AI Computer Vision** **Explanation:** The **image type detection** feature in **Azure AI Computer Vision** helps to classify an image as either a photograph or a drawing. This aligns with the requirement to categorize the image based on its type. 2. **C. image descriptions in Azure AI Computer Vision** **Explanation:** The **image descriptions** feature in **Azure AI Computer Vision** generates captions for images. This meets the requirement to generate a caption for the image. --- Final Answer: 1. **D. image type detection in Azure AI Computer Vision** 2. **C. image descriptions in Azure AI Computer Vision**
90
You are building an app that will use the Azure AI Video Indexer service. You plan to train a language model to recognize industry-specific terms. You need to upload a file that contains the industry-specific terms. Which file format should you use? A. XML B. TXT C. XLS D. PDF
**Answer Section:** **B. TXT** **Explanation:** The **Azure AI Video Indexer** service allows you to upload a file containing industry-specific terms in a simple text format. The most common format for such files is **TXT** (plain text), which allows the service to easily process the list of terms for training the language model. Why the other options are incorrect: * **A. XML**: While XML is used for structured data, it's not commonly used for training language models with terms in this context. * **C. XLS**: Excel files (XLS) are not directly supported for uploading industry terms for language model training in Azure Video Indexer. * **D. PDF**: PDF files are not typically used for providing training data for language models, as they are not as easily parsed for specific terms compared to TXT files. **Correct Answer: B. TXT**
91
**Question Section:** **DRAG DROP** You have an app that uses Azure AI and a custom-trained classifier to identify products in images. You need to add new products to the classifier. The solution must meet the following requirements: * Minimize how long it takes to add the products. * Minimize development effort. Which five actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. --- **Actions:** * Label the sample images. * From Vision Studio, open the project. * Publish the model. * From the Custom Vision portal, open the project. * Retrain the model. * Upload sample images of the new products. * From the Azure Machine Learning studio, open the workspace. ---
**Answer Area:** 1. **From the Custom Vision portal, open the project.** 2. **Upload sample images of the new products.** 3. **Label the sample images.** 4. **Retrain the model.** 5. **Publish the model.** --- **Explanation:** 1. **Open the project** in the **Custom Vision portal** to access the existing classifier. 2. **Upload the new product images** to the project. 3. **Label the sample images** to ensure that they are properly annotated. 4. **Retrain the model** with the newly labeled images. 5. **Publish the model** to apply the changes and make the model ready for use. This process minimizes development effort by using existing tools and automating the model retraining pipeline.
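The same five steps can also be scripted. A hedged sketch using the Custom Vision training SDK for Python (`azure-cognitiveservices-vision-customvision`); the endpoint, keys, project name, tag name, and file name are placeholders:

```python
import time
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)
from msrest.authentication import ApiKeyCredentials

# Hypothetical resource details for illustration.
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("https://<your-resource>.cognitiveservices.azure.com/", credentials)

# Open the existing project and create a tag for the new product.
project = next(p for p in trainer.get_projects() if p.name == "product-classifier")
new_tag = trainer.create_tag(project.id, "new-product")

# Upload and label a sample image of the new product.
with open("new_product_01.jpg", "rb") as f:
    batch = ImageFileCreateBatch(images=[
        ImageFileCreateEntry(name="new_product_01.jpg", contents=f.read(), tag_ids=[new_tag.id])
    ])
trainer.create_images_from_files(project.id, batch)

# Retrain, wait for training to finish, then publish the new iteration.
iteration = trainer.train_project(project.id)
while iteration.status != "Completed":
    time.sleep(5)
    iteration = trainer.get_iteration(project.id, iteration.id)
trainer.publish_iteration(project.id, iteration.id, "products-v2", "<prediction-resource-id>")
```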
92
**Question Section:** **HOTSPOT** You are developing an application that will use the Azure AI Vision client library. The application has the following code:

```
def analyze_image(local_image):
    with open(local_image, "rb") as image_stream:
        image_analysis = client.analyze_image_in_stream(
            image=image_stream,
            visual_features=[
                VisualFeatureTypes.tags,
                VisualFeatureTypes.description
            ]
        )

        for caption in image_analysis.description.captions:
            print(f"\n{caption.text} with confidence {caption.confidence}")

        for tag in image_analysis.tags:
            print(f"\n{tag.name} with confidence {tag.confidence}")
```

For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE: Each correct selection is worth one point.**
1. **The code will perform face recognition.**
2. **The code will list tags and their associated confidence.**
3. **The code will read an image file from the local file system.**
---
**Answer Section:** 1. **The code will perform face recognition.** **No** The code does not use any specific feature or API related to **face recognition**. It only analyzes the **tags** and **descriptions** of the image, not faces. 2. **The code will list tags and their associated confidence.** **Yes** The code extracts **tags** from the image and prints each **tag** along with its **confidence** score, as seen in the `for tag in image_analysis.tags:` loop. 3. **The code will read an image file from the local file system.** **Yes** The code opens and reads an image from the **local file system** using `open(local_image, "rb")` to get the image stream. This allows it to be analyzed by the Azure Vision API. --- Final Answer: 1. **No** 2. **Yes** 3. **Yes**
93
Question Section: You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact. A conversational expert provides you with the following list of phrases to use for training: * Find contacts in London. * Who do I know in Seattle? * Search for contacts in Ukraine. You need to implement the phrase list in Language Understanding. **Solution: You create a new pattern in the FindContact intent.** Does this meet the goal? A. Yes B. No
**Answer Section:** **B. No** **Explanation:** A pattern is not the right mechanism for this training data. The three phrases should be added as example utterances to the **FindContact** intent (optionally marking the locations as an entity) so the model can be trained on them. Patterns are templates for improving prediction when utterances follow a fixed structure, which is not what is needed here. **Correct Answer: B. No**
94
Question Section: You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact. A conversational expert provides you with the following list of phrases to use for training: * Find contacts in London. * Who do I know in Seattle? * Search for contacts in Ukraine. You need to implement the phrase list in Language Understanding. **Solution: You create a new intent for location.** Does this meet the goal? A. Yes B. No
**Answer Section:** **B. No** **Explanation:** Creating a **new intent for location** would not be the most efficient way to meet the goal. **Location** should be treated as an **entity** within the **FindContact** intent, rather than a separate intent. The **FindContact** intent is used for searching for contacts, and **location** (such as "London," "Seattle," and "Ukraine") should be captured as an **entity**. You can use **patterns** in the **FindContact** intent to differentiate the various phrases (like "Find contacts in London" or "Who do I know in Seattle") and extract the **location** as an entity, minimizing the need for multiple intents. Why **A. Yes** is incorrect: * Creating a new intent for location would overcomplicate the model and is unnecessary. The correct approach is to treat location as an entity within the **FindContact** intent. **Correct Answer: B. No**
95
You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact. A conversational expert provides you with the following list of phrases to use for training: * Find contacts in London. * Who do I know in Seattle? * Search for contacts in Ukraine. You need to implement the phrase list in Language Understanding. **Solution: You create a new entity for the domain.** Does this meet the goal? A. Yes B. No
Correct Answer: B. No Explanation: The goal is to implement the training phrases to help the FindContact intent recognize various user utterances. Creating a new entity for the domain is not the correct approach because: * The "domain" is already covered by the intent (FindContact). * The correct approach is to add the example utterances to the FindContact intent to train the model. * Optionally, you could create an entity for location (e.g., London, Seattle), but creating an entity just for the domain is unnecessary and doesn't help classify the intent. --- Final Answer: B. No ✅
96
You develop an application to identify species of flowers by training a Custom Vision model. You receive images of new flower species. You need to add the new images to the classifier. **Solution: You add the new images, and then use the Smart Labeler tool.** Does this meet the goal? A. Yes B. No ---**
**Answer Section:** **B. No** **Explanation:** The **Smart Labeler** tool in **Custom Vision** is designed to suggest labels for images that the model has already been trained on. If the new flower species are not part of the trained categories, the **Smart Labeler** cannot generate suggestions for them. Therefore, in this scenario, the **Smart Labeler** will not be able to label the new flower species images effectively, since they represent new tags that the model hasn't seen before. The correct approach would be to manually label the new images first and then retrain the model to include those new flower species in the classification. Why **A. Yes** is incorrect: * The **Smart Labeler** cannot handle new categories (tags) that have not been previously trained. It works well for suggesting labels within pre-trained categories, but not for entirely new tags that the model hasn't encountered. **Correct Answer: B. No**
97
Question Section: You develop an application to identify species of flowers by training a Custom Vision model. You receive images of new flower species. You need to add the new images to the classifier. **Solution: You add the new images and labels to the existing model. You retrain the model, and then publish the model.** Does this meet the goal? A. Yes B. No ---**
**Answer Section:** **A. Yes** **Explanation:** The solution described is correct. To include new flower species in the **Custom Vision** model, you can add the new images and their corresponding labels to the existing model. After adding the new data, you need to **retrain** the model to recognize these new species. Once the retraining is completed, **publishing** the model will make it available for use. This is the correct and standard workflow for updating a Custom Vision model when you want to add new classes or improve the model with new images. Why **B. No** is incorrect: * This solution meets the goal perfectly by following the correct steps: adding images, retraining, and publishing the model. **Correct Answer: A. Yes**
98
Question Section: You develop an application to identify species of flowers by training a Custom Vision model. You receive images of new flower species. You need to add the new images to the classifier. **Solution: You create a new model, and then upload the new images and labels. Does this meet the goal?** A. Yes B. No
**Answer Section:** **B. No** **Explanation:** Creating a **new model** is not necessary if you're only adding new species to an existing classifier. You can simply **add the new images and labels** to the current model, **retrain** it, and then **publish** the updated model. Creating a new model would be inefficient because the goal is to improve the existing classifier, not to start over with a new one. Why **A. Yes** is incorrect: * Creating a new model is not required for adding new images to the classifier. The correct solution is to update the existing model, retrain it, and publish it. **Correct Answer: B. No**
99
hard Question Section: HOTSPOT You are developing a service that records lectures given in English (United Kingdom). You have a method named AppendToTranscriptFile that takes translated text and a language identifier. You need to develop code that will provide transcripts of the lectures to attendees in their respective language. The supported languages are English, French, Spanish, and German. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --- Answer Area:

```
static async Task TranslateSpeechAsync()
{
    var config = SpeechTranslationConfig.FromSubscription("69cad5cc-0ab3-4704-bdff-afbf4aa07d85", "uksouth");

    var lang = new List<string> xxxxxxxx;
    config.SpeechRecognitionLanguage = "en-GB";
    lang.ForEach(config.AddTargetLanguage);

    using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
    using var recognizer = new xxxxxxxxxxxxx(config, audioConfig);

    var result = await recognizer.RecognizeOnceAsync();
    if (result.Reason == ResultReason.TranslatedSpeech)
}
```

Dropdown 1 (For `lang`):
* Option A: `{ "en-GB" }`
* Option B: `{ "fr", "de", "es" }`
* Option C: `{ "French", "Spanish", "German" }`
* Option D: `{ "language}`

Dropdown 2 (For the recognizer type):
* Option A: IntentRecognizer
* Option B: SpeechRecognizer
* Option C: SpeechSynthesizer
* Option D: TranslationRecognizer
**Answer Section:** 1. **For Dropdown 1 (language list configuration):** * **Correct Answer**: **Option B**: `{ "fr", "de", "es" }` **Explanation**: This option correctly specifies the supported target languages for translation (French, German, Spanish). 2. **For Dropdown 2 (recognizer type):** * **Correct Answer**: **Option D**: **TranslationRecognizer** **Explanation**: Since the goal is to provide translations of speech, the appropriate recognizer for handling this task is **TranslationRecognizer**, which is specifically designed to handle speech translation. --- **Final Answer:** 1. **Dropdown 1**: **Option B**: `{ "fr", "de", "es" }` 2. **Dropdown 2**: **Option D**: **TranslationRecognizer**
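A rough Python equivalent of the completed snippet, using the Speech SDK (`azure-cognitiveservices-speech`). The subscription key is a placeholder, and appending to the transcript file is reduced to a print call:

```python
import azure.cognitiveservices.speech as speechsdk

# Key is a placeholder; the region matches the question's resource ("uksouth").
config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<speech-key>", region="uksouth")
config.speech_recognition_language = "en-GB"
for lang in ["fr", "de", "es"]:
    config.add_target_language(lang)

audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=config, audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    for lang, text in result.translations.items():
        # In the real service this is where AppendToTranscriptFile(text, lang) would be called.
        print(lang, text)
```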
100
**Question Section:** **DRAG DROP** You train a **Custom Vision** model used in a mobile app. You receive **1,000 new images** that do **not** have any associated data. You need to use the images to retrain the model. The solution must **minimize how long it takes** to retrain the model. **Which three actions should you perform in the Custom Vision portal?** To answer, move the appropriate actions from the list to the answer area and arrange them in the correct order. **Actions:** * Upload the images by category * Get suggested tags * Upload all the images * Group the images locally into category folders * Review the suggestions and confirm the tags * Tag the images manually ---
**Answer Section:** 1. **Upload all the images.** **Explanation**: The first step is to upload all the new images to the Custom Vision portal. You should upload them at once, ideally grouped by category, to save time during the tagging process. 2. **Get suggested tags.** **Explanation**: After uploading the images, the **Smart Labeler** feature in Custom Vision can be used to automatically suggest tags for the new images based on previously trained models. This saves time compared to manually tagging each image. 3. **Review the suggestions and confirm the tags.** **Explanation**: Once the suggested tags are generated by the system, you should review and confirm them for accuracy. This step is essential to ensure that the tags are appropriate before retraining the model with the newly labeled images. --- **Final Answer:** 1. **Upload all the images.** 2. **Get suggested tags.** 3. **Review the suggestions and confirm the tags.**
101
hard **Question Section:** You are building a Conversational Language Understanding model for an e-commerce chatbot. Users can speak or type their billing address when prompted by the chatbot. You need to construct an entity to capture billing addresses. Which entity type should you use? **A.** machine learned **B.** Regex **C.** list **D.** Pattern.any ---
**Answer Section:** **A. machine learned** **Explanation:** For capturing a **billing address** in a Conversational Language Understanding (LUIS) model, the **machine learned** entity type is the most appropriate. A **machine learned entity** can automatically learn and extract specific types of information, like an address, from user input based on the model's training. This is ideal for dynamic or variable inputs such as billing addresses, which can vary significantly in format. Why the other options are incorrect: * **B. Regex**: Regex entities are great for specific patterns (e.g., phone numbers, dates), but they are not flexible enough to capture the wide range of possible billing address formats. * **C. list**: The **list** entity type is useful for fixed sets of values, like product categories or common phrases, but is not suitable for the dynamic and variable format of addresses. * **D. Pattern.any**: This type is used for matching any token in a pattern and is not specialized for extracting structured data like an address. **Correct Answer: A. machine learned**
102
hard Question Section: You are building an Azure WebJob that will create knowledge bases from an array of URLs. You instantiate a QnAMakerClient object that has the relevant API keys and assign the object to a variable named client. You need to develop a method to create the knowledge bases. Which two actions should you include in the method? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Create a list of FileDTO objects that represents data from the WebJob. B. Call the client.Knowledgebase.CreateAsync method. C. Create a list of QnADTO objects that represents data from the WebJob. D. Create a CreateKbDTO object ---**
**Answer Section:** 1. **B. Call the client.Knowledgebase.CreateAsync method.** **Explanation:** The **CreateAsync** method of **QnAMakerClient** is used to create a new knowledge base. This method takes the necessary data and creates the knowledge base asynchronously, which is a crucial step in developing the knowledge base creation process. 2. **D. Create a CreateKbDTO object** **Explanation:** The **CreateKbDTO** object is required to define the structure and properties of the knowledge base that will be created. It includes fields like the name of the knowledge base, the URLs to extract information from, and other configurations. Why the other options are incorrect: * **A. Create a list of FileDTO objects**: This is not required for creating a knowledge base. The **FileDTO** is typically used for managing files for QnA Maker, but it's not the appropriate data structure for defining a knowledge base. * **C. Create a list of QnADTO objects**: While **QnADTO** is important for adding question-answer pairs to the knowledge base, the task is focused on creating the knowledge base, which requires a **CreateKbDTO** object, not individual QnADTO objects. --- **Final Answer:** 1. **B. Call the client.Knowledgebase.CreateAsync method.** 2. **D. Create a CreateKbDTO object.**
103
**Question Section:** **HOTSPOT** You are developing an application that includes language translation. The application will translate text retrieved by using a function named **getTextToBeTranslated**. The text can be in one of many languages. The content of the text must remain within the **Americas Azure geography**. You need to develop code to translate the text to a single language. How should you complete the code? To answer, select the appropriate options in the answer area. **NOTE**: Each correct selection is worth one point. --- **Answer Area:**

```
var endpoint = **xxxxxx**;
var apiKey = "FF956C6883B21B38691ABD2000A4C606";
var text = getTextToBeTranslated();
var body = '[{"Text":"' + text + '"}]';
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
**xxxxxx**
HttpResponseMessage response;
var content = new StringContent(body, Encoding.UTF8, "application/json");
var response = await client.PutAsync(uri, content);
```

**Dropdown 1** (For `endpoint`):
* **Option A**: `https://api.cognitive.microsofttranslator.com/translate`
* **Option B**: `https://api.cognitive.microsofttranslator.com/transliterate`
* **Option C**: `https://api-apc.cognitive.microsofttranslator.com/detect`
* **Option D**: `https://api-nam.cognitive.microsofttranslator.com/detect`
* **Option E**: `https://api-nam.cognitive.microsofttranslator.com/translate`

**Dropdown 2** (For `uri`):
* **Option A**: `var uri = endpoint + "?from=en";`
* **Option B**: `var uri = endpoint + "?suggestedFrom=en";`
* **Option C**: `var uri = endpoint + "?to=en";`

---
**Answer Section:** 1. **Dropdown 1 (For `endpoint`)** **Correct Answer**: **Option E**: `https://api-nam.cognitive.microsofttranslator.com/translate` **Explanation**: The **translate** operation converts the text into the target language, and the **api-nam** regional endpoint keeps the request within the Americas Azure geography, as required. 2. **Dropdown 2 (For `uri`)** **Correct Answer**: **Option C**: `var uri = endpoint + "?to=en";` **Explanation**: The **to=en** parameter specifies the single target language for the translation (English in this case). --- **Final Answer:** 1. **Dropdown 1**: **Option E**: `https://api-nam.cognitive.microsofttranslator.com/translate` 2. **Dropdown 2**: **Option C**: `var uri = endpoint + "?to=en";`
104
**Question Section:** You are building a conversational language understanding model. You need to enable active learning. What should you do? **A.** Add show-all-intents=true to the prediction endpoint query. **B.** Enable speech priming. **C.** Add log=true to the prediction endpoint query. **D.** Enable sentiment analysis. ---
**Answer Section:** **C. Add log=true to the prediction endpoint query.** **Explanation:** Active learning in **Language Understanding (LUIS)** is the process of continuously improving the model by logging and analyzing user interactions. Enabling the **log** parameter by adding `log=true` to the prediction endpoint query allows you to log the predicted intents and their confidence scores, which is crucial for tracking, evaluating, and improving the model's performance over time. Why the other options are incorrect: * **A. Add show-all-intents=true to the prediction endpoint query**: This option shows multiple possible intents, but it is not directly related to enabling active learning. * **B. Enable speech priming**: Speech priming is related to enhancing voice recognition for speech models, not for enabling active learning in a language understanding model. * **D. Enable sentiment analysis**: Sentiment analysis assesses the sentiment of text but is not directly related to the process of enabling active learning for improving the model's intent detection. **Correct Answer: C. Add log=true to the prediction endpoint query.**
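A minimal sketch of a prediction request with logging enabled, in the style of the LUIS v3 prediction API. The endpoint, app ID, key, and query text are placeholders:

```python
import requests

# Hypothetical prediction resource details for illustration.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
app_id = "<app-id>"
key = "<prediction-key>"

resp = requests.get(
    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params={
        "subscription-key": key,
        "query": "where is my order",
        "log": "true",  # store the utterance so it can surface for active learning review
    },
).json()
print(resp["prediction"]["topIntent"])
```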
105
hard **Question Section:** **HOTSPOT** You run the following command:

```
docker run --rm -it -p 5000:5000 --memory 10g --cpus 2 \
  mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```

For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE**: Each correct selection is worth one point. --- **Answer Area:**
1. **Going to [http://localhost:5000/status](http://localhost:5000/status) will query the Azure endpoint to verify whether the API key used to start the container is valid.**
2. **The container logging provider will write log data.**
3. **Going to [http://localhost:5000/swagger](http://localhost:5000/swagger) will provide the details to access the documentation for the available endpoints.**
---
**Answer Section:** 1. **Yes** **Explanation**: The **/status** endpoint typically verifies the health of the service and can also check whether the API key is valid. In many service containers, this endpoint serves both purposes — verifying the status and confirming the API key's validity. 2. **No** **Explanation**: The **docker run** command provided does not explicitly configure a logging provider for persistent logging. Without further configuration, logs would only be available in the container's standard output and not written to a file or external logging service. 3. **Yes** **Explanation**: The **/swagger** endpoint is often used to provide an interactive UI for API documentation, including available endpoints and their details. It’s common for containers exposing RESTful APIs to provide Swagger UI at this endpoint for easy exploration of the API. --- **Final Answer:** 1. **Yes** 2. **No** 3. **Yes**
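A small sketch of probing the two endpoints mentioned above once the container is running. The port comes from the `-p 5000:5000` mapping in the command; the exact Swagger path may vary by container version:

```python
import requests

# /status checks the container health and validates the API key against the billing endpoint.
print(requests.get("http://localhost:5000/status").status_code)

# /swagger serves the interactive documentation for the container's REST endpoints.
print(requests.get("http://localhost:5000/swagger").status_code)
```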
106
**Question Section:** You are building a Language Understanding model for an e-commerce platform. You need to construct an entity to capture billing addresses. Which entity type should you use for the billing address? **A.** machine learned **B.** Regex **C.** geographyV2 **D.** Pattern.any **E.** list ---
**Answer Section:** **A. machine learned** **Explanation:** A **machine learned entity** is the most appropriate choice for capturing billing addresses in a conversational AI model. Since addresses can vary significantly in format (e.g., street names, building numbers, city names), a machine learned entity allows the system to automatically learn and capture this dynamic data based on training. You can also break down the billing address into smaller sub-entities like street name, city, state, and zip code, which a machine learned entity can handle effectively. Why the other options are incorrect: * **B. Regex**: While **Regex** is useful for extracting data based on specific patterns, billing addresses vary widely in format and structure, so **Regex** would not be reliable or flexible enough to handle all possible variations of billing addresses. * **C. geographyV2**: The **geographyV2** entity type is pre-trained for recognizing geographic locations (e.g., cities, countries), but it is not specifically tailored for structured billing addresses. The **machine learned** entity is more flexible for extracting complex, structured information like an address. * **D. Pattern.any**: This is a placeholder entity used in pattern-based templates, not for capturing structured data like a billing address. * **E. list**: A **list** entity is used for recognizing a set of predefined values or synonyms, but a billing address is dynamic and variable, making a **list** entity unsuitable. **Correct Answer: A. machine learned**
107
**Question Section:** You need to upload speech samples to a Speech Studio project for use in training. How should you upload the samples? **A.** Combine the speech samples into a single audio file in the .wma format and upload the file. **B.** Upload a .zip file that contains a collection of audio files in the .wav format and a corresponding text transcript file. **C.** Upload individual audio files in the FLAC format and manually upload a corresponding transcript in Microsoft Word format. **D.** Upload individual audio files in the .wma format. ---
**Answer Section:** **B. Upload a .zip file that contains a collection of audio files in the .wav format and a corresponding text transcript file.** **Explanation:** The **Speech Studio** project for training speech models in **Azure** accepts a **.zip** file that contains a collection of **audio files** in the **.wav** format and a **corresponding text transcript file**. This is the most efficient and correct method to upload speech samples for training, as it organizes the audio files and transcripts together in a standardized format for easy processing. Why the other options are incorrect: * **A. Combine the speech samples into a single audio file in the .wma format and upload the file**: This is not the recommended method for uploading speech samples for training, and **.wma** is not the preferred audio format for Speech Studio. * **C. Upload individual audio files in the FLAC format and manually upload a corresponding transcript in Microsoft Word format**: While **FLAC** is a supported audio format, **Microsoft Word** is not the standard format for transcripts. **Text files** or a **CSV** format would be more appropriate. * **D. Upload individual audio files in the .wma format**: **.wma** is not a commonly used or recommended format for speech training in Speech Studio. The **.wav** format is preferred. **Correct Answer: B. Upload a .zip file that contains a collection of audio files in the .wav format and a corresponding text transcript file.**
108
hard **Question Section:** You are developing a method for an application that uses the Translator API. The method will receive the content of a webpage, and then translate the content into Greek (el). The result will also contain a transliteration that uses the Roman alphabet. You need to create the URI for the call to the Translator API. You have the following URI: `https://api.cognitive.microsofttranslator.com/translate?api-version=3.0` Which three additional query parameters should you include in the URI? Each correct answer presents part of the solution. **A.** toScript=Cyrl **B.** from=el **C.** textType=html **D.** to=el **E.** textType=plain **F.** toScript=Latn ---
**Answer Section:** 1. **D. to=el** **Explanation**: The **to** parameter specifies the target language for the translation. In this case, you want the translation to be in **Greek (el)**, so you need to set the **to** parameter to **el**. 2. **F. toScript=Latn** **Explanation**: The **toScript=Latn** parameter specifies that the transliteration should use the **Roman alphabet (Latn)**. This is necessary to provide the Romanized version of the Greek text. 3. **C. textType=html** **Explanation**: Since you are translating the content of a webpage, the **textType=html** parameter tells the Translator API that the content being sent for translation is in HTML format. This is important to ensure that the HTML tags are correctly handled during translation. --- **Final Answer:** 1. **D. to=el** 2. **F. toScript=Latn** 3. **C. textType=html**
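A hedged Python sketch of the full request; the key, region, and HTML snippet are placeholders:

```python
import requests

key = "<translator-key>"        # placeholder
region = "<translator-region>"  # placeholder

params = {
    "api-version": "3.0",
    "to": "el",           # translate into Greek
    "toScript": "Latn",   # also return a Roman-alphabet transliteration
    "textType": "html",   # the input is webpage HTML
}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
}
body = [{"Text": "<p>Hello, world!</p>"}]

resp = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params=params, headers=headers, json=body,
).json()

translation = resp[0]["translations"][0]
print(translation["text"])                      # Greek translation, HTML preserved
print(translation["transliteration"]["text"])   # Romanized (Latn) version
```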
109
hard **Question Section:** You have a chatbot that was built by using the Microsoft Bot Framework. You need to debug the chatbot endpoint remotely. Which two tools should you install on a local computer? Each correct answer presents part of the solution. **A.** Fiddler **B.** Bot Framework Composer **C.** Bot Framework Emulator **D.** Bot Framework CLI **E.** ngrok **F.** nginx ---
**Answer Section:** 1. **C. Bot Framework Emulator** **Explanation**: The **Bot Framework Emulator** allows you to test and debug your chatbot locally. It provides a user interface to interact with the bot and track its messages. It also lets you simulate the bot's behavior in a local environment before deploying it to production. 2. **E. ngrok** **Explanation**: **ngrok** is a tool that creates a secure tunnel to your local server from the public internet. This is particularly useful when you want to test or debug your chatbot endpoint remotely while running it locally. **ngrok** will expose your local bot's endpoint to the internet, enabling remote debugging. --- --- ❌ **Why Other Options Are Incorrect** **A. Fiddler** * Fiddler is a **web debugging proxy**, good for inspecting HTTP traffic, but it does **not expose local endpoints** or simulate bot conversations. * It’s not required for **bot endpoint debugging**. **B. Bot Framework Composer** * Composer is a **bot development tool** used to design dialog flows visually. * It's useful for **building** bots, but **not needed** just to **debug an existing bot endpoint remotely**. **D. Bot Framework CLI** * CLI tool for managing resources (e.g., creating QnA Maker KBs, skill manifests). * Does **not provide remote debugging or testing capabilities**. **F. nginx** * A web server and reverse proxy. * It is **not designed for exposing localhost to the internet** in the same seamless way that **ngrok** does, nor is it used for **bot-specific debugging**. --- ✅ **Final Answer:** **C. Bot Framework Emulator** **E. ngrok**
110
Question Section: You create a web app named app1 that runs on an Azure virtual machine named vm1. vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet. Solution: **You deploy service1 and a private endpoint to vnet1.** Does this meet the goal? A. Yes B. No ---
**Answer Section:** **A. Yes** **Explanation**: Deploying **service1** and a **private endpoint** to **vnet1** ensures that **app1**, running on **vm1**, can connect directly to the Azure Cognitive Search service over the Azure private network. This avoids routing traffic over the public internet, meeting the requirement of keeping the traffic private and secure within the virtual network. A **private endpoint** allows for private connectivity to Azure services, such as Azure Cognitive Search, without exposing them to the public internet. By deploying the private endpoint within **vnet1**, the traffic between **app1** and **service1** remains within the private network. **Correct Answer: A. Yes**
111
**Question Section:** **DRAG DROP** You are building a retail chatbot that will use a **QnA Maker** service. You upload an internal support document to train the model. The document contains the following question: **"What is your warranty period?"** Users report that the chatbot returns the default QnA Maker answer when they ask the following question: **"How long is the warranty coverage?"** The chatbot returns the correct answer when the users ask the following question: **"What is your warranty period?"** Both questions should return the same answer. You need to increase the accuracy of the chatbot responses. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. --- **Answer Area:** 1. **Add a new question and answer (QnA) pair** 2. **Retrain the model** 3. **Add additional questions to the document** 4. **Republish the model** 5. **Add alternative phrasing to the question and answer (QnA) pair** ---
**Answer Section:** 1. **Add alternative phrasing to the question and answer (QnA) pair** * **Explanation**: Adding alternative phrasing helps the model recognize different ways users may ask the same question. In this case, the question "How long is the warranty coverage?" should be added as an alternative phrasing for the question "What is your warranty period?" to ensure both questions trigger the correct answer. 2. **Retrain the model** * **Explanation**: After adding the alternative phrasing, you need to retrain the model so that it can learn the new question variations and correctly associate them with the right answer. 3. **Republish the model** * **Explanation**: Once the model is retrained, you need to republish the model for the changes to take effect. This will make the new question-answer pair and alternative phrasing available in the live chatbot. --- **Final Answer:** 1. **Add alternative phrasing to the question and answer (QnA) pair** 2. **Retrain the model** 3. **Republish the model**
112
**Question Section:** You are training a Language Understanding model for a user support system. You create the first intent named **GetContactDetails** and add 200 examples. You need to decrease the likelihood of a false positive. What should you do? **A.** Enable active learning. **B.** Add a machine learned entity. **C.** Add additional examples to the GetContactDetails intent. **D.** Add examples to the None intent. ---
**Answer Section:** **D. Add examples to the None intent.** **Explanation:** The **None intent** (also known as the **Fallback intent**) is used to catch user inputs that do not match any other defined intents. Adding examples to the **None intent** helps the model learn to recognize and classify when a user's input does not match a known intent. This reduces the likelihood of **false positives**, where the model incorrectly classifies input as part of the **GetContactDetails** intent when it should not be. Why the other options are incorrect: * **A. Enable active learning**: While **active learning** can help improve the model over time by selecting examples that need manual correction, it is not a direct solution to decreasing false positives. * **B. Add a machine learned entity**: Machine learned entities are used to recognize specific pieces of information in user input (e.g., dates, locations) but do not directly address reducing false positives in intent classification. * **C. Add additional examples to the GetContactDetails intent**: Adding more examples to an intent can help increase accuracy for that specific intent, but it doesn't necessarily decrease false positives for cases where the user's input doesn't belong to that intent. Adding examples to the **None intent** is a better solution. **Correct Answer: D. Add examples to the None intent.**
113
HARD **Question Section:** **DRAG DROP** You are building a Language Understanding model for purchasing tickets. You have the following utterance for an intent named **PurchaseAndSendTickets**. **Purchase \[2 audit business] tickets to \[Paris] \[next Monday] and send tickets to \[[email@domain.com](mailto:email@domain.com)]** You need to select the entity types. The solution must use built-in entity types to minimize training data whenever possible. Which entity type should you use for each label? To answer, drag the appropriate entity types to the correct labels. Each entity type may be used once, more than once, or not at all. **Select and Place:** * **Entity Types**: * **Email** * **List** * **Regex** * **GeographyV2** * **Machine learned** **Answer Area:** Paris: * **xxxx** email@domain.com: * **xxxx** 2 audit business]: * **xxxx** ---
1. **\[Paris]**: **GeographyV2** **Explanation**: Recognizes locations (cities, countries, etc.), and **Paris** is a city. 2. **\[[email@domain.com](mailto:email@domain.com)]**: **Email** **Explanation**: This clearly identifies an email address. 3. **\[2 audit business]**: **Machine learned** **Explanation**: This would be an appropriate choice for a custom quantity and type of ticket, especially if the list of values isn't fixed or predefined. **Machine learned** entities can dynamically capture this kind of data, especially if there are multiple variations of ticket types or quantities that might evolve over time. If the quantities and types are predefined, a **List** could work, but in this case, **Machine learned** makes sense if you want to dynamically classify ticket information. --- **Final Answer Update:** 1. **\[Paris]**: **GeographyV2** 2. **\[[email@domain.com](mailto:email@domain.com)]**: **Email** 3. **\[2 audit business]**: **Machine learned**
114
You have the following C# method: ``` static void create_resource(string resource_name, string kind, string account_tier, string location) { CognitiveServicesAccount parameters = new CognitiveServicesAccount(null, null, kind, location, resource_name, new CognitiveServicesAccountProperties(), new Sku(account_tier)); var result = cog_svc_client.Accounts.Create(resource_group_name, account_tier, parameters); } ``` You need to deploy an Azure resource to the East US Azure region. The resource will be used to perform sentiment analysis. How should you call the method? A. create\_resource("res1", "ContentModerator", "S0", "eastus") B. create\_resource("res1", "TextAnalytics", "S0", "eastus") C. create\_resource("res1", "ContentModerator", "Standard", "East US") D. create\_resource("res1", "TextAnalytics", "Standard", "East US")
**Answer Section:** **B. create\_resource("res1", "TextAnalytics", "S0", "eastus")** **Explanation:** * **TextAnalytics** is the correct resource kind for performing **sentiment analysis**. * **S0** is the correct SKU for a standard (paid) tier Cognitive Service resource. * **"eastus"** is the correct regional format expected by Azure APIs (lowercase, no space). **D** looks close, but `"East US"` would not be valid in the actual deployment context — Azure expects `"eastus"` as the value. --- **Correct Answer: B. create\_resource("res1", "TextAnalytics", "S0", "eastus")**
115
**Question Section:** You build a Conversational Language Understanding model by using the Language Services portal. You export the model as a JSON file as shown in the following sample: ``` { "text": "average amount of rain by month at chicago last year", "intent": "Weather.CheckWeatherValue", "entities": [ { "entity": "Weather.WeatherRange", "startPos": 0, "endPos": 6, "children": [] }, { "entity": "Weather.WeatherCondition", "startPos": 18, "endPos": 21, "children": [] }, { "entity": "Weather.Historic", "startPos": 23, "endPos": 30, "children": [] } ] } ``` To what does the **Weather.Historic** entity correspond in the utterance? **A.** by month **B.** chicago **C.** rain **D.** last year ---
**Answer Section:** **A. by month** **Explanation:** The character span from position **23 to 30** in the utterance `"average amount of rain by month at chicago last year"` corresponds to the phrase **"by month"**, which maps to the **Weather.Historic** entity. --- **Correct Answer: A. by month**
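As a quick sanity check, the span can be confirmed in code. Below is a minimal C# sketch; the positions are taken straight from the exported JSON and treated as inclusive character indexes:

```
using System;

var text = "average amount of rain by month at chicago last year";

// startPos and endPos in the exported JSON are inclusive character indexes,
// so the span length is endPos - startPos + 1.
int startPos = 23, endPos = 30;
Console.WriteLine(text.Substring(startPos, endPos - startPos + 1)); // prints: by month
```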
116
**Question Section:** You are examining the **Text Analytics** output of an application. The text analyzed is: > *"Our tour guide took us up the Space Needle during our trip to Seattle last week."* The response contains the data shown in the following table: | Text | Category | ConfidenceScore | | ------------ | ---------- | --------------- | | Tour guide | PersonType | 0.45 | | Space Needle | Location | 0.38 | | Trip | Event | 0.78 | | Seattle | Location | 0.78 | | Last week | DateTime | 0.80 | **Which Text Analytics API is used to analyze the text?** **A.** Entity Linking **B.** Named Entity Recognition **C.** Sentiment Analysis **D.** Key Phrase Extraction ---
**Answer Section:** **B. Named Entity Recognition** **Explanation:** The **Named Entity Recognition (NER)** API is responsible for identifying and categorizing entities such as **PersonType**, **Location**, **Event**, and **DateTime** in the text. This is exactly what is shown in the table — classification of text spans into entity categories with confidence scores. * **Entity Linking** (A) would provide links to a knowledge base like Wikipedia. * **Sentiment Analysis** (C) would determine the emotional tone (positive/negative/neutral). * **Key Phrase Extraction** (D) would extract important phrases, not categorize them into entity types. --- **Correct Answer: B. Named Entity Recognition**
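For context, a minimal sketch of calling NER through the `Azure.AI.TextAnalytics` SDK; the endpoint and key below are placeholders for your own Language resource:

```
using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

Response<CategorizedEntityCollection> response = client.RecognizeEntities(
    "Our tour guide took us up the Space Needle during our trip to Seattle last week.");

// Each entity carries the category and confidence score shown in the table above.
foreach (CategorizedEntity entity in response.Value)
{
    Console.WriteLine($"{entity.Text}\t{entity.Category}\t{entity.ConfidenceScore:0.00}");
}
```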
117
**Question Section:** You need to measure the public perception of your brand on social media by using natural language processing. Which Azure service should you use? **A.** Language service **B.** Content Moderator **C.** Computer Vision **D.** Form Recognizer ---
**Answer Section:** **A. Language service** **Explanation:** The **Azure Language service** (formerly Text Analytics) provides natural language processing (NLP) capabilities including **sentiment analysis**, which is used to assess public perception in text data such as social media posts, reviews, or comments. It can determine whether text expresses positive, negative, or neutral sentiments. Why the other options are incorrect: * **B. Content Moderator**: Used for detecting offensive content (e.g., profanity), not general public sentiment. * **C. Computer Vision**: Used for analyzing visual content (images, video), not textual sentiment. * **D. Form Recognizer**: Extracts structured data from forms and documents, not suitable for analyzing free-form text or public opinion. --- **Correct Answer: A. Language service**
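As an illustration, a minimal sentiment-analysis sketch using the `Azure.AI.TextAnalytics` SDK; the endpoint, key, and sample post are placeholders:

```
using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// Score a single social media post; batch methods exist for larger volumes.
DocumentSentiment sentiment = client.AnalyzeSentiment(
    "Loving the new release from this brand, great job!").Value;

Console.WriteLine($"Overall: {sentiment.Sentiment} " +
    $"(pos {sentiment.ConfidenceScores.Positive:0.00}, " +
    $"neg {sentiment.ConfidenceScores.Negative:0.00})");
```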
118
HOTSPOT You are developing an application that includes language translation. The application will translate text retrieved by using a function named `get_text_to_be_translated`. The text can be in one of many languages. The content of the text must remain within the Americas Azure geography. You need to develop code to translate the text to a single language. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ``` api_key = "FF956C6883B21B38691ABD200A4C606" text = get_text_to_be_translated() headers = { 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': api_key } body = [{ 'Text': text }] conn = http.client.HTTPSConnection("xxxxx") conn.request("POST", "xxxxx", str(body), headers) response = conn.getresponse() response_data = response.read() ``` Answer Area: First dropdown options (endpoint): * `"api.cognitive.microsofttranslator.com"` * `"api-apc.cognitive.microsofttranslator.com"` * `"api-nam.cognitive.microsofttranslator.com"` Second dropdown options (path): * `"/translate?from=en"` * `"/translate?suggestedFrom=en"` * `"/translate?to=en"` * `"/detect?to=en"` * `"/detect?from=en"` ---**
**Answer Section:** 1. **"api-nam.cognitive.microsofttranslator.com"** ✅ Because you are required to keep the content within the **Americas** region. 2. **"/translate?to=en"** ✅ Because you need to **translate** the content **to English**. The `?to=en` specifies the target language. --- **Correct Answer:** * **Endpoint**: `"api-nam.cognitive.microsofttranslator.com"` * **Path**: `"/translate?to=en"`
119
**Question Section:** You have the following data sources: * **Finance**: On-premises Microsoft SQL Server database * **Sales**: Azure Cosmos DB using the Core (SQL) API * **Logs**: Azure Table storage * **HR**: Azure SQL database You need to ensure that you can **search all the data** by using the **Azure Cognitive Search REST API**. What should you do? **A.** Migrate the data in HR to Azure Blob storage. **B.** Migrate the data in HR to the on-premises SQL server. **C.** Export the data in Finance to Azure Data Lake Storage. **D.** Ingest the data in Logs into Azure Sentinel. ---
**Answer Section:** **C. Export the data in Finance to Azure Data Lake Storage.** **Explanation:** Azure Cognitive Search **cannot index data directly from on-premises sources** like an on-prem SQL Server unless it's made accessible through a cloud-based service. To make the **Finance** data searchable via Azure Cognitive Search, it must be **migrated or exported to a supported data source** — such as **Azure Blob Storage** or **Azure Data Lake Storage** — that Azure Cognitive Search can natively index through a data source connection. Why the other options are incorrect: * **A.** Migrating HR data to Blob storage is unnecessary since **Azure SQL Database** is already a supported data source for Azure Cognitive Search. * **B.** Moving HR data to on-prem SQL Server doesn't solve the issue; it actually moves data away from a cloud-supported source. * **D.** Ingesting Logs into Azure Sentinel is unrelated to Azure Cognitive Search — Sentinel is for security analytics, not content indexing or search. --- **Correct Answer: C. Export the data in Finance to Azure Data Lake Storage.**
120
**Question Section:** You have a **Language service** resource that performs the following: * Sentiment analysis * Named Entity Recognition (NER) * Personally Identifiable Information (PII) identification You need to **prevent the resource from persisting input data** once the data is analyzed. Which query parameter in the Language service API should you configure? **A.** model-version **B.** piiCategories **C.** showStats **D.** loggingOptOut ---
**Answer Section:** **D. loggingOptOut** **Explanation:** The **`loggingOptOut=true`** query parameter ensures that the input data **is not logged or stored** by Microsoft when using Azure Cognitive Services like the Language service. This is critical for protecting sensitive information and meeting compliance requirements. Why the other options are incorrect: * **A. model-version**: Specifies which model version to use for analysis, not related to data logging or privacy. * **B. piiCategories**: Filters or limits which types of PII entities to return, but does not prevent logging. * **C. showStats**: Adds statistics (like document counts and transaction counts) to the response; it does not affect data logging behavior. --- **Correct Answer: D. loggingOptOut**
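For illustration, when the service is called through the `Azure.AI.TextAnalytics` SDK instead of raw REST, the same opt-out is expressed as a request option; my understanding is that `DisableServiceLogs` maps to the `loggingOptOut` query parameter. A rough sketch, with placeholder endpoint, key, and sample text:

```
using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// DisableServiceLogs asks the service not to persist the input text,
// the SDK-level counterpart of sending loggingOptOut=true on the REST call.
var options = new AnalyzeSentimentOptions { DisableServiceLogs = true };

var result = client.AnalyzeSentiment(
    "My card number is 4111 1111 1111 1111.", language: "en", options: options);

Console.WriteLine(result.Value.Sentiment);
```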
121
Question Section: You have an Azure Cognitive Services model named Model1 that identifies the **intent** of text input. You develop an app in C# named App1. You need to configure App1 to use Model1. Which package should you add to App1? A. Universal.Microsoft.CognitiveServices.Speech B. SpeechServicesToolkit C. Azure.AI.Language.Conversations D. Xamarin.Cognitive.Speech ---**
**Answer Section:** **C. Azure.AI.Language.Conversations** **Explanation:** The **Azure.AI.Language.Conversations** package is the correct SDK for working with **Conversational Language Understanding** models in Azure Cognitive Services, including models that detect **intent** and **entities** from text input. This SDK allows you to integrate with services like **Language Studio** or **LUIS**'s successor in Azure Language service. Why the other options are incorrect: * **A. Universal.Microsoft.CognitiveServices.Speech**: This is for speech recognition and not specifically for intent detection from text. * **B. SpeechServicesToolkit**: Not a standard Azure SDK and not specifically used for Language service intent detection. * **D. Xamarin.Cognitive.Speech**: This is mobile-specific and related to speech input, not intent recognition from text. --- **Correct Answer: C. Azure.AI.Language.Conversations**
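A rough sketch of calling a deployed conversation project with this package, following the SDK's protocol-method style; the endpoint, key, project name, and deployment name are all placeholders:

```
using System;
using System.Text.Json;
using Azure;
using Azure.AI.Language.Conversations;
using Azure.Core;

var client = new ConversationAnalysisClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// Request body for a deployed conversation (CLU) project; names are hypothetical.
var data = new
{
    kind = "Conversation",
    analysisInput = new
    {
        conversationItem = new { id = "1", participantId = "1", text = "Book two tickets to Paris" }
    },
    parameters = new
    {
        projectName = "Model1",
        deploymentName = "production"
    }
};

Response response = client.AnalyzeConversation(RequestContent.Create(data));

using JsonDocument result = JsonDocument.Parse(response.ContentStream);
JsonElement prediction = result.RootElement.GetProperty("result").GetProperty("prediction");
Console.WriteLine($"Top intent: {prediction.GetProperty("topIntent").GetString()}");
```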
122
**Question Section:** **HOTSPOT** You are building content for a video training solution. You need to create narration to accompany the video content. The solution must use **Custom Neural Voice**. What should you use to **create a custom neural voice**, and which service should you use to **generate the narration**? To answer, select the appropriate options in the answer area. **NOTE:** Each correct answer is worth one point. **Answer Area:** **Custom neural voice:** * Microsoft Bot Framework Composer * The Azure portal * The Language Understanding portal * The Speech Studio portal **Narration:** * Language Understanding * Speaker Recognition * Speech-to-text * Text-to-speech ---
**Answer Section:** 1. **Custom neural voice:** ✅ **The Speech Studio portal** 2. **Narration:** ✅ **Text-to-speech** **Explanation:** * **Speech Studio** is the correct tool for creating and managing **Custom Neural Voice** models. * The narration is produced using the **Text-to-speech** service, which converts text input into spoken audio using the custom voice.
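For the narration step, generating audio with the Speech SDK is a single call once the voice name is set. A minimal sketch; the key, region, and voice name are placeholders (a prebuilt voice stands in for the deployed custom neural voice, which would also require setting its endpoint ID):

```
using System;
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<speech-key>", "<speech-region>");

// A prebuilt neural voice is used as a placeholder; a deployed Custom Neural Voice
// would set its own voice name and the deployment's EndpointId instead.
config.SpeechSynthesisVoiceName = "en-US-JennyNeural";

// With no AudioConfig supplied, the audio plays on the default speaker.
using var synthesizer = new SpeechSynthesizer(config);
SpeechSynthesisResult result =
    await synthesizer.SpeakTextAsync("Welcome to module one of the video training.");

Console.WriteLine(result.Reason); // SynthesizingAudioCompleted on success
```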
123
**Question Section:** **HOTSPOT** You are building a call handling system that will receive calls from French-speaking and German-speaking callers. The system must perform the following tasks: * Capture inbound voice messages as text. * Replay messages in English on demand. Which Azure Cognitive Services services should you use? To answer, select the appropriate options in the answer area. **NOTE**: Each correct selection is worth one point. --- **Answer Area:** **To capture messages:** * Speaker Recognition * Speech-to-text * Text-to-speech * Translator **To replay messages:** * Speech-to-text only * Speech-to-text and Language * Speaker Recognition and Language * Text-to-speech and Language * Text-to-speech and Translator ---
**Answer Section:** 1. **To capture messages:** ✅ **Speech-to-text** **Explanation**: Speech-to-text is the correct service for converting spoken audio (in French or German) into written text. 2. **To replay messages:** ✅ **Text-to-speech and Translator** **Explanation**: To replay the messages in English, you need to: * Translate the captured text (French/German ➝ English) using **Translator**, and * Convert the translated English text back to speech using **Text-to-speech**. --- **Correct Answers:** * **To capture messages:** Speech-to-text * **To replay messages:** Text-to-speech and Translator
124
**Question Section:** You are building a social media extension that will convert text to speech. The solution must meet the following requirements: * Support messages of up to 400 characters. * Provide users with multiple voice options. * Minimize costs. You create an Azure Cognitive Services resource. Which **Speech API endpoint** provides users with the available voice options? **A.** `https://uksouth.api.cognitive.microsoft.com/speechtotext/v3.0/models/base` **B.** `https://uksouth.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices` **C.** `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list` **D.** `https://uksouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` ---
**Answer Section:** **C. [https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list](https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list)** **Explanation:** This endpoint returns a list of **available voices** supported by the **Text-to-Speech (TTS)** service in Azure. It includes metadata such as voice name, locale, gender, and style — allowing users to choose from multiple voice options. Why the other options are incorrect: * **A.** This is related to **speech-to-text**, not text-to-speech, and provides information on speech recognition models, not voices. * **B.** This is a custom endpoint related to **long audio synthesis** in the **Custom Voice** feature, which is more advanced and costlier — not suitable when aiming to minimize costs. * **D.** This endpoint is used to **synthesize speech** from text but requires specifying a custom deployment ID — not for retrieving the list of available voices. --- **Correct Answer: C. [https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list](https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list)**
125
hard **Question Section:** You develop a custom question answering project in Azure Cognitive Service for Language. The project will be used by a chatbot. You need to configure the project to **engage in multi-turn conversations**. What should you do? **A.** Add follow-up prompts **B.** Enable active learning **C.** Add alternate questions **D.** Enable chit-chat ---
**Answer Section:** **A. Add follow-up prompts** **Explanation:** To enable **multi-turn conversations** in a **custom question answering** project, you need to add **follow-up prompts**. These prompts allow the chatbot to guide the user through a series of related questions and answers, enabling conversational flow beyond a single Q\&A pair. Why the other options are incorrect: * **B. Enable active learning**: Helps improve the model based on user feedback but doesn't support multi-turn conversation. * **C. Add alternate questions**: Helps match variations of a single question but not for chaining multiple questions. * **D. Enable chit-chat**: Adds prebuilt conversational responses but is not related to structured multi-turn Q\&A. --- **Correct Answer: A. Add follow-up prompts**
126
hard Question Section: HOTSPOT You are building a solution that students will use to find references for essays. You use the following code to start building the solution: ``` using Azure; using System; using Azure.AI.TextAnalytics; private static readonly AzureKeyCredential credentials = new AzureKeyCredential(""); private static readonly Uri endpoint = new Uri(""); static void EntityLinker(TextAnalyticsClient client) { var response = client.RecognizeLinkedEntities( "Our tour guide took us up the Space Needle during our trip to Seattle last week."); } ``` For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area: 1. The code will detect the language of documents. 2. The `url` attribute returned for each linked entity will be a Bing search link. 3. The `matches` attribute returned for each linked entity will provide the location in a document where the entity is referenced. ---**
**Answer Section:** 1. **The code will detect the language of documents** — ❌ **No** * **Explanation**: The method `RecognizeLinkedEntities` does not detect language; it assumes the language or uses a default. To detect language, you'd use `DetectLanguage()`. 2. **The `url` attribute returned for each linked entity will be a Bing search link** — ❌ **No** * **Explanation**: The `url` for linked entities typically points to a **Wikipedia** page or similar knowledge base, not a Bing search result. 3. **The `matches` attribute returned for each linked entity will provide the location in a document where the entity is referenced** — ✅ **Yes** * **Explanation**: The `matches` collection includes the exact matched text and its **offset** and **length**, which indicates where in the document the entity appears. --- **Correct Answers:** 1. No 2. No 3. Yes
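Extending the code from the question, a sketch of how the `Url` and `Matches` properties surface in the SDK; the endpoint and key are placeholders:

```
using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

var response = client.RecognizeLinkedEntities(
    "Our tour guide took us up the Space Needle during our trip to Seattle last week.");

foreach (LinkedEntity entity in response.Value)
{
    // Url points to the knowledge-base article (Wikipedia), not a Bing search result.
    Console.WriteLine($"{entity.Name} -> {entity.Url} ({entity.DataSource})");

    // Each match reports where the entity appears in the input text.
    foreach (LinkedEntityMatch match in entity.Matches)
    {
        Console.WriteLine($"  \"{match.Text}\" at offset {match.Offset}, length {match.Length}");
    }
}
```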
127
**Question Section:** You train a **Conversational Language Understanding** model to understand the natural language input of users. You need to evaluate the **accuracy of the model** before deploying it. What are two methods you can use? **NOTE:** Each correct selection is worth one point. **A.** From the language authoring REST endpoint, retrieve the model evaluation summary. **B.** From Language Studio, enable Active Learning, and then validate the utterances logged for review. **C.** From Language Studio, select Model performance. **D.** From the Azure portal, enable log collection in Log Analytics, and then analyze the logs. ---
**Answer Section:** ✅ **A. From the language authoring REST endpoint, retrieve the model evaluation summary.** **Explanation:** The **Language authoring REST API** allows you to retrieve an **evaluation summary**, which includes metrics such as precision, recall, and F1-score for the model. This is one of the primary ways to programmatically evaluate model accuracy. ✅ **C. From Language Studio, select Model performance.** **Explanation:** **Language Studio** provides a built-in **Model performance** section where you can review the accuracy of your trained model through visual metrics such as confusion matrices and scores. This is a direct, UI-based method to evaluate the model. --- Why the other options are incorrect: * **B.** Active Learning is useful **after deployment**, when reviewing misclassified or low-confidence utterances based on real usage. It helps improve the model but is **not used to evaluate accuracy before deployment**. * **D.** Enabling log collection via Log Analytics is also a **post-deployment monitoring** feature. It's not used to evaluate model performance **before** deployment. --- **Correct Answers:** ✅ **A.** ✅ **C.**
128
**Question Section:** **DRAG DROP** You develop an app in C# named **App1** that performs **speech-to-speech translation**. You need to configure App1 to **translate English to German**. How should you complete the `SpeechTranslationConfig` object? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. **NOTE:** Each correct selection is worth one point. --- **Values:** * `addTargetLanguage` * `speechSynthesisLanguage` * `speechRecognitionLanguage` * `voiceName` --- **Answer Area:** ``` var translationConfig = SpeechTranslationConfig.FromSubscription(SPEECH_SUBSCRIPTION_KEY, SPEECH_SERVICE_REGION); translationConfig.__________ = "en-US"; translationConfig.__________("de"); ``` ---
**Answer Section:** 1. `translationConfig.speechRecognitionLanguage = "en-US";` ✅ **speechRecognitionLanguage** **Explanation:** This sets the source language that the app will recognize. Since you're translating from English, the recognition language should be **"en-US"**. 2. `translationConfig.addTargetLanguage("de");` ✅ **addTargetLanguage** **Explanation:** This adds **German** as the target language for translation. `"de"` is the correct language code for German. --- **Final Answer:** ``` translationConfig.speechRecognitionLanguage = "en-US"; translationConfig.addTargetLanguage("de"); ```
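In context, a slightly fuller sketch wiring the config into a `TranslationRecognizer`; the key and region are placeholders, and note that the C# SDK properties are PascalCase (`SpeechRecognitionLanguage`, `AddTargetLanguage`):

```
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

var translationConfig = SpeechTranslationConfig.FromSubscription("<speech-key>", "<speech-region>");

translationConfig.SpeechRecognitionLanguage = "en-US"; // source language: English
translationConfig.AddTargetLanguage("de");             // target language: German

// Uses the default microphone; one utterance is recognized and translated.
using var recognizer = new TranslationRecognizer(translationConfig);
TranslationRecognitionResult result = await recognizer.RecognizeOnceAsync();

Console.WriteLine($"Recognized: {result.Text}");
Console.WriteLine($"German:     {result.Translations["de"]}");
```

The `Translations` dictionary is keyed by target language code, so adding more target languages simply adds more entries to look up.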
129
**Question Section:** You have an Azure subscription that contains a multi-service Azure Cognitive Services Translator resource named **Translator1**. You are building an app that will translate text and documents by using **Translator1**. You need to create the REST API request for the app. **Which headers should you include in the request?** **A.** the access control request, the content type, and the content length **B.** the subscription key and the client trace ID **C.** the resource ID and the content language **D.** the subscription key, the subscription region, and the content type ---
**Answer Section:** **D. the subscription key, the subscription region, and the content type** **Explanation:** When making a REST API call to the **Azure Translator service**, you must include the following headers: * `Ocp-Apim-Subscription-Key`: Your **subscription key** for authentication. * `Ocp-Apim-Subscription-Region`: The **Azure region** where your Translator resource is deployed. * `Content-Type`: Typically `"application/json"` for translation requests. These headers ensure the request is **authenticated**, routed to the correct regional endpoint, and the service knows how to parse the request body. Why the other options are incorrect: * **A.** These are generic headers for CORS and HTTP protocol control, not specific to Translator API authentication. * **B.** While `client trace ID` is optional for tracking, it's not required for basic requests. * **C.** `Resource ID` and `content language` are not used in the required request headers for translation. --- **Correct Answer: D. the subscription key, the subscription region, and the content type**
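A rough `HttpClient` sketch showing where the three headers go; the key, region, and sample text are placeholders, and the global endpoint is used here for brevity:

```
using System;
using System.Net.Http;
using System.Text;

using var http = new HttpClient();

var uri = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de";
using var request = new HttpRequestMessage(HttpMethod.Post, uri);

// Subscription key and region headers; the Content-Type header travels on the StringContent.
request.Headers.Add("Ocp-Apim-Subscription-Key", "<translator-key>");
request.Headers.Add("Ocp-Apim-Subscription-Region", "<resource-region>");
request.Content = new StringContent(
    "[{\"Text\":\"Hello, world\"}]", Encoding.UTF8, "application/json");

HttpResponseMessage response = await http.SendAsync(request);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```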
130
**Question Section:** **DRAG DROP** You are building a transcription service for technical podcasts. Testing reveals that the service fails to transcribe technical terms accurately. You need to improve the accuracy of the service. Which five actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **NOTE:** Each correct selection is worth one point. --- **Actions:** * Deploy the model * Create a Custom Speech project * Upload training datasets * Create a speech-to-text model * Create a Speaker Recognition model * Train the model * Create a Conversational Language Understanding model ---
**Answer Area (Correct Order):** 1. **Create a Custom Speech project** 2. **Upload training datasets** 3. **Create a speech-to-text model** 4. **Train the model** 5. **Deploy the model** --- **✅ Explanation:** Datasets must be uploaded **before** creating the model, because the model configuration requires selecting which datasets to use. This ensures the model is trained on the correct data before it's deployed.
131
**Question Section:** You are building a retail kiosk system that will use a **custom neural voice**. You acquire audio samples and **consent from the voice talent**. You need to **create a voice talent profile**. **What should you upload to the profile?** **A.** a .zip file that contains 10-second .wav files and the associated transcripts as .txt files **B.** a five-minute .flac audio file and the associated transcript as a .txt file **C.** a .wav or .mp3 file of the voice talent consenting to the creation of a synthetic version of their voice **D.** a five-minute .wav or .mp3 file of the voice talent describing the kiosk system ---
**Answer Section:** **C. a .wav or .mp3 file of the voice talent consenting to the creation of a synthetic version of their voice** **Explanation:** When creating a **voice talent profile** for **Custom Neural Voice** in Azure, the **first requirement** is uploading a **recorded consent statement** from the voice talent. This is a mandatory step for ethical use and legal compliance. The voice talent must read a specific script provided by Microsoft and **explicitly consent** to the creation of a synthetic voice. Why other options are incorrect: * **A**: This is part of the **training data**, not the voice talent profile creation step. * **B**: Same — used for training, but not for **profile creation**. * **D**: This content isn't relevant to the required **legal consent** statement. --- **Correct Answer: C. a .wav or .mp3 file of the voice talent consenting to the creation of a synthetic version of their voice**
132
DRAG DROP You have a Language Understanding solution that runs in a Docker container. You download the Language Understanding container image from the Microsoft Container Registry (MCR). You need to deploy the container image to a host computer. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: Each correct selection is worth one point. --- Actions: * From the host computer, move the package file to the Docker input directory. * From the Language Understanding portal, export the solution as a package file. * From the host computer, build the container and specify the output directory. * From the host computer, run the container and specify the input directory. * From the Language Understanding portal, retrain the model.
**Answer Area (Correct Sequence):** 1. ✅ **From the Language Understanding portal, export the solution as a package file.** 2. ✅ **From the host computer, move the package file to the Docker input directory.** 3. ✅ **From the host computer, run the container and specify the input directory.** --- **Explanation:** 1. **Export the solution** from the Language Understanding (LUIS) portal so that it can be used by the container. 2. The exported **package file** needs to be placed in the **input folder** that the Docker container can access. 3. You then **run the container**, passing the path to the input directory, so it loads the model for processing. The options related to **building the container** and **retraining the model** are **not necessary** for deployment — the container is already downloaded, and the model is pre-trained. You are only configuring and running it.
133
**Question Section:** **HOTSPOT** You are building a **text-to-speech app** that will use a **custom neural voice**. You need to create an SSML file for the app. The solution must ensure that the voice profile meets the following requirements: * Expresses a **calm tone** * Imitates the voice of a **young adult female** How should you complete the code? To answer, select the appropriate options in the answer area. **NOTE**: Each correct selection is worth one point. **Answer Area (SSML snippet with dropdowns):** ``` How can I assist you? ``` **Dropdown Options (both dropdowns share these 5 options):** * role * style * styledegree * type * voice ---
**Answer Section:** 1. **First dropdown:** ✅ **role** **Explanation**: The role attribute is used to specify the persona or character of the voice, such as **YoungAdultFemale**, which matches the requirement to imitate a young adult female voice. 2. **Second dropdown:** ✅ **style** **Explanation**: The style attribute sets the speaking style of the voice. **gentle** is a valid speaking style in the Azure neural voice library that matches the requirement for a calm tone. --- **Correct Answers:** * **role** = "YoungAdultFemale" * **style** = "gentle"
134
Question Section: HOTSPOT You have a collection of press releases stored as PDF files. You need to extract text from the files and perform sentiment analysis. Which service should you use for each task? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --- Answer Area: **Extract text:** * Azure Cognitive Search * Computer Vision * Form Recognizer **Perform sentiment analysis:** * Azure Cognitive Search * Computer Vision * Form Recognizer * Language ---
**Answer Section:**

1. **Extract text:** ✅ **Form Recognizer**
   **Explanation:** Form Recognizer (now part of Azure AI Document Intelligence) is the recommended service for extracting text from both scanned (image-based) and text-based PDFs, including multi-page files. It handles structured and semi-structured documents such as forms, receipts, and printed text.

2. **Perform sentiment analysis:** ✅ **Language**
   **Explanation:** The Azure AI Language service (formerly Text Analytics) is designed for natural language processing tasks such as **sentiment analysis**, **key phrase extraction**, and **named entity recognition**, and scores sentiment at both the sentence and document level.

---

**Correct Answers:**

* **Extract text:** Form Recognizer
* **Perform sentiment analysis:** Language
135
hard You have a text-based chatbot. You need to enable content moderation by using the Text Moderation API of Content Moderator. Which two service responses should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. personal data B. the adult classification score C. text classification D. optical character recognition (OCR) E. the racy classification score
**Answer Section:** ✅ **A. personal data** ✅ **C. text classification** Explanation: The Text Moderation API of Azure Content Moderator provides features for: * Detecting personally identifiable information (PII) such as emails, phone numbers, and addresses (personal data), and * Performing text classification to identify potentially offensive or harmful content. These two capabilities are directly relevant for moderating user messages in a text-based chatbot. Other options, such as adult classification score and racy classification score, apply to image content, and OCR is also specific to image-based text extraction. --- Correct Answer: A. personal data, C. text classification
136
**Question Section:** You are developing a text processing solution. You develop the following method: ``` static void GetKeyPhrases(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.ExtractKeyPhrases(text); Console.WriteLine("Key phrases:"); foreach (string keyphrase in response.Value) { Console.WriteLine($"\t{keyphrase}"); } } ``` You call the method by using the following code: ``` GetKeyPhrases(textAnalyticsClient, "the cat sat on the mat"); ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. --- **Answer Area:** 1. **The call will output key phrases from the input string to the console.** 2. **The output will contain the following words: the, cat, sat, on, and mat.** 3. **The output will contain the confidence level for key phrases.** ---
**Answer Section:** 1. ✅ **Yes** * The method calls `ExtractKeyPhrases()` and iterates through the returned key phrases, printing each to the console. 2. ❌ **No** * Key phrase extraction does **not** return every word; it returns meaningful **noun phrases**, not common stopwords like "the" or "on". So not all listed words will appear. 3. ❌ **No** * The key phrase extraction response **does not include confidence scores**. It only returns a list of strings representing the extracted key phrases. --- **Correct Answers:** 1. ✅ Yes 2. ❌ No 3. ❌ No
137
**Question Section:** You are building an Azure web app named **App1** that will translate text from **English to Spanish**. You need to use the **Text Translation REST API** to perform the translation. The solution must ensure that you have **data sovereignty in the United States**. How should you complete the URI? To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- https://____________ / ______________?api-version=3.0&to=es **Answer Area (with dropdowns):** **Base URI:** * `api-nam.cognitive.microsofttranslator.com` * `app.cognitive.microsofttranslator.com` * `api-eur.cognitive.microsofttranslator.com` * `api.cognitive.services.azure.com` * `eastus.api.cognitive.microsoft.com` **Endpoint path:** * `detect` * `languages` * `text-to-speech` * `translate` ---
**Answer Section:**

* ✅ **Base URI:** `api-nam.cognitive.microsofttranslator.com`
  **Explanation:** This endpoint ensures **data residency in the Americas (including the U.S.)**, satisfying the requirement for **U.S. data sovereignty**.

* ✅ **Endpoint path:** `translate`
  **Explanation:** This is the correct REST API endpoint for performing **text translation** between languages.

---

**Correct Answer:**

* **Base URI:** `api-nam.cognitive.microsofttranslator.com`
* **Path:** `translate`

🔹 What is `api-nam.cognitive.microsofttranslator.com`?

This is a **regional endpoint** for the **Azure Translator** service, where:

* **`api`** = API endpoint
* **`nam`** = **North America**
* The full domain (`api-nam.cognitive.microsofttranslator.com`) ensures that **data is processed and stored in the North America region**, helping meet **data residency and sovereignty requirements**.

---

🔹 When and Why Do You Use It?

You use `api-nam.cognitive.microsofttranslator.com` when:

* You are using the **Azure Cognitive Services Translator (Text Translation API)**
* You need to ensure **data sovereignty**, i.e., that your data **stays within the U.S. or North America**
* Your Translator resource was created in a region **within the Americas** (e.g., East US, West US, Brazil South)

> ✅ It is specifically required when **regulatory compliance** (e.g., HIPAA, FedRAMP) mandates that your data must not leave the **Americas**.

---

🔹 How Does It Compare to Other Endpoints?

| Endpoint | Region | Use When... |
| ------------------------------------------- | ----------------------------- | ------------------------------------------------ |
| `api.cognitive.microsofttranslator.com` | **Global** (default endpoint) | You don't have regional data residency concerns |
| `api-nam.cognitive.microsofttranslator.com` | **North America** | You need U.S./Americas **data residency** |
| `api-eur.cognitive.microsofttranslator.com` | **Europe** | You need **EU data sovereignty** |
| `api-ase.cognitive.microsofttranslator.com` | **Asia Pacific** | You need **Asia-Pacific data residency** |

---

🔹 In Practice

When calling the **Translator Text API**, you build a URL like:

```
https://api-nam.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es
```

This ensures:

* **Data is processed in the U.S.**
* You're compliant with local **data governance** policies

---

✅ Summary

Use `api-nam.cognitive.microsofttranslator.com` when:

* You are calling the Translator Text API
* Your Azure Translator resource is deployed in a **U.S./Americas region**
* You must comply with **data residency** rules in the **United States**
138
Question Section: DRAG DROP You have a Docker host named Host1 that contains a container base image. You have an Azure subscription that contains a custom speech-to-text model named model1. You need to run model1 on Host1. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: Each correct selection is worth one point. --- Actions: * Retrain the model. * Request approval to run the container. * Export model1 to Host1. * Run the container. * Configure disk logging.
**Correct Answer Area (in order):** 1. ✅ **Request approval to run the container** 2. ✅ **Export model1 to Host1** 3. ✅ **Run the container** --- **Explanation:** To run a **custom speech-to-text model** in a container: 1. **Request approval**: Microsoft requires approval before you can use custom speech models in a container for legal and licensing reasons. 2. **Export the model**: Once approved, export the custom model from Azure to the local environment (Host1). 3. **Run the container**: Start the container on Host1, configured to use the exported model. Other options: * **Retrain the model**: Not needed unless you're modifying the model. * **Configure disk logging**: Optional and not part of the core required deployment steps. --- ✅ **Final Sequence:** 1. Request approval to run the container 2. Export model1 to Host1 3. Run the container
139
Question Section: You build a language model by using Conversational Language Understanding. The language model is used to search for information on a contact list by using an intent named FindContact. A conversational expert provides you with the following list of phrases to use for training: * Find contacts in London. * Who do I know in Seattle? * Search for contacts in Ukraine. You need to implement the phrase list in Conversational Language Understanding. **Solution: You create a new utterance for each phrase in the FindContact intent.** Does this meet the goal? A. Yes B. No ---**
**Answer Section:** ✅ **A. Yes** **Explanation:** Adding multiple **utterances** that reflect how users may phrase their request is exactly how **Conversational Language Understanding (CLU)** is trained. By assigning each of these phrases to the **FindContact** intent, you're teaching the model how this intent may be expressed in different ways. This helps the model generalize and correctly identify the intent when similar queries are made by users. There is no need to create separate intents or entities **just to input these phrases** — assigning them as utterances to the correct intent is standard and recommended practice. --- **Correct Answer: A. Yes**
140
**Question Section:** **DRAG DROP** You have a **question answering project** in **Azure Cognitive Service for Language**. You need to **move the project to a Language service instance in a different Azure region**. **Which three actions should you perform in sequence?** To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **NOTE:** Each correct selection is worth one point. --- **Actions:** * From the new Language service instance, train and publish the project. * From the new Language service instance, import the project file. * From the new Language service instance, enable custom text classification. * From the original Language service instance, export the existing project. * From the new Language service instance, regenerate the keys. * From the original Language service instance, train and publish the model. ---
**Correct Answer Area (in order):** 1. ✅ **From the original Language service instance, export the existing project.** 2. ✅ **From the new Language service instance, import the project file.** 3. ✅ **From the new Language service instance, train and publish the project.** --- **Explanation:** To move a **question answering project** across Azure regions, follow this typical migration process: 1. **Export** the existing project from the current (original) Language resource. 2. **Import** that project file into the new Language resource located in the desired region. 3. **Train and publish** the project in the new environment so it becomes usable. Other options like **enabling custom text classification** or **regenerating keys** are not part of the standard project migration process for question answering projects. --- ✅ **Final Sequence:** 1. From the original Language service instance, export the existing project. 2. From the new Language service instance, import the project file. 3. From the new Language service instance, train and publish the project.
141
**Question Section:** **DRAG DROP** You are building a customer support chatbot. You need to configure the bot to identify the following: * Code names for internal product development * Messages that include credit card numbers The solution must **minimize development effort**. **Which Azure Cognitive Service for Language feature should you use for each requirement?** To answer, drag the appropriate features to the correct requirements. Each feature may be used once, more than once, or not at all. **NOTE:** Each correct selection is worth one point. --- **Features:** * Custom named entity recognition (NER) * Key phrase extraction * Language detection * Named Entity Recognition (NER) * Personally Identifiable Information (PII) detection * Sentiment analysis --- **Statements:** 1. **Identify code names for internal product development:** `__________` 2. **Identify messages that include credit card numbers:** `__________` ---
**Answer Section:** 1. **Identify code names for internal product development:** ✅ **Custom named entity recognition (NER)** **Explanation:** Code names are often domain-specific and not part of pre-trained models, so you need **custom NER** to recognize them. 2. **Identify messages that include credit card numbers:** ✅ **Personally Identifiable Information (PII) detection** **Explanation:** PII detection in Azure Language service includes detection of sensitive data types like **credit card numbers**, emails, SSNs, etc. --- ✅ **Correct Answers:** * **Code names:** Custom named entity recognition (NER) * **Credit card numbers:** Personally Identifiable Information (PII) detection
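For the credit card requirement, a minimal PII-detection sketch with the `Azure.AI.TextAnalytics` SDK; the endpoint, key, and sample message are placeholders. (The custom NER side is configured and trained in Language Studio rather than in application code.)

```
using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<your-language-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

PiiEntityCollection entities = client.RecognizePiiEntities(
    "My card number is 4111 1111 1111 1111, please update my account.").Value;

// RedactedText returns the message with detected PII masked out.
Console.WriteLine($"Redacted: {entities.RedactedText}");
foreach (PiiEntity entity in entities)
{
    Console.WriteLine($"{entity.Text} -> {entity.Category} ({entity.ConfidenceScore:0.00})");
}
```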
142
Question Section: HOTSPOT You are building an app by using the Speech SDK. The app will translate speech from French to German by using natural language processing. You need to define the source language and the output language. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --- Answer Area (code snippet): ``` var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion); speechTranslationConfig._____ ("fr"); speechTranslationConfig._______("de"); ``` Dropdown Options (for both lines): * AddTargetLanguage * SpeechRecognitionLanguage * SpeechSynthesisLanguage * TargetLanguages * VoiceName ---**
**Answer Section:** 1. ✅ **speechTranslationConfig.SpeechRecognitionLanguage = "fr";** **Explanation:** The **SpeechRecognitionLanguage** property is used to specify the **source (input) language** of the speech, which in this case is **French ("fr")**. 2. ✅ **speechTranslationConfig.AddTargetLanguage("de");** **Explanation:** You use **AddTargetLanguage** to specify the **target (output) language** for translation, which in this case is **German ("de")**. --- **Correct Answers:** * **"fr"** → **SpeechRecognitionLanguage** * **"de"** → **AddTargetLanguage** This ensures the app recognizes speech in French and translates it to German.
143
**Question Section:** **DRAG DROP** You have a collection of **Microsoft Word documents and PowerPoint presentations in German**. You need to create a solution to **translate the files to French**. The solution must meet the following requirements: * Preserve the original formatting of the files * Support the use of a custom glossary You create: * A blob container for **German files** (source) * A blob container for **French files** (target) You upload the original German files to the source container. --- **Which three actions should you perform in sequence to complete the solution?** To answer, move the appropriate actions from the list to the answer area and arrange them in the correct order. **NOTE:** Each correct selection is worth one point. --- **Available Actions:** * Perform an asynchronous translation by using the list of files to be translated. * Perform an asynchronous translation by using the document translation specification. * Generate a list of files to be translated. * Upload a glossary file to the container for German files. * Upload a glossary file to the container for French files. * Define a document translation specification that has a French target. ---
**Correct Answer Area (in order):** 1. ✅ **Upload a glossary file to the container for French files** 2. ✅ **Define a document translation specification that has a French target** 3. ✅ **Perform an asynchronous translation by using the document translation specification** --- **Explanation:** 1. **Upload glossary**: The glossary must be uploaded to the **target language container** (French) to be applied during translation. 2. **Define translation specification**: This includes source/target container URIs and the glossary reference. 3. **Execute translation**: The document translation operation uses the specification to perform translation while preserving formatting and applying glossary terms. --- ✅ **Final Sequence:** 1. Upload a glossary file to the container for French files 2. Define a document translation specification that has a French target 3. Perform an asynchronous translation by using the document translation specification
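A hedged sketch of the same flow with the `Azure.AI.Translation.Document` SDK, showing how the specification ties together the source container, target container, and glossary; every URI below is a placeholder SAS URL, and the resource name and key are assumptions:

```
using System;
using Azure;
using Azure.AI.Translation.Document;

var client = new DocumentTranslationClient(
    new Uri("https://<your-translator-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// Placeholder SAS URIs for the German (source) container, the French (target)
// container, and the glossary file referenced by the translation run.
var glossary = new TranslationGlossary(
    new Uri("https://<account>.blob.core.windows.net/french/glossary.tsv?<sas>"), "TSV");

var input = new DocumentTranslationInput(
    new Uri("https://<account>.blob.core.windows.net/german?<sas>"),
    new Uri("https://<account>.blob.core.windows.net/french?<sas>"),
    "fr",
    glossary);

// Asynchronous (batch) document translation preserves formatting and applies the glossary.
DocumentTranslationOperation operation = await client.StartTranslationAsync(input);
await operation.WaitForCompletionAsync();

Console.WriteLine($"Documents succeeded: {operation.DocumentsSucceeded}");
```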
144
**Question Section:**

**HOTSPOT**

You are developing a text analysis application that uses the **Azure.AI.TextAnalytics** SDK.

You define the following C# function:

```
static void MyFunction(TextAnalyticsClient textAnalyticsClient, string text)
{
    var response = textAnalyticsClient.ExtractKeyPhrases(text);
    Console.WriteLine("Key phrases:");
    foreach (string keyPhrase in response.Value)
    {
        Console.WriteLine($"\t{keyPhrase}");
    }
}
```

You call the function using the following line of code:

```
MyFunction(textAnalyticsClient, "the quick brown fox jumps over the lazy dog");
```

**Which output will you receive?** To answer, select **Yes** if the output will include the phrase. Otherwise, select **No**.

**NOTE:** Each correct selection is worth one point.

---

**Answer Area:**

| Statements | Yes | No |
| ----------------------------------------- | --- | -- |
| The output will include "quick brown fox" | ⭕ | ⭕ |
| The output will include "lazy dog" | ⭕ | ⭕ |

---
**Answer Section:** * **"quick brown fox"** – ✅ **Yes** * **"lazy dog"** – ✅ **Yes** **Explanation:** The **ExtractKeyPhrases** API from Azure Text Analytics is designed to return meaningful **noun phrases**. In the sentence **"the quick brown fox jumps over the lazy dog"**, both **"quick brown fox"** and **"lazy dog"** are likely to be identified as key phrases due to their structure and importance within the sentence. > Note: Exact results can vary slightly based on the language model version, but these phrases are commonly returned. --- ✅ **Correct Answers:** * "quick brown fox" → Yes * "lazy dog" → Yes
145
**Question Section:** You have the following C# method: ``` static void create_resource(string resource_name, string kind, string account_tier, string location) { CognitiveServicesAccount parameters = new CognitiveServicesAccount(null, null, kind, location, resource_name, new CognitiveServicesAccountProperties(), new Sku(account_tier)); var result = cog_svc_client.Accounts.Create(resource_group_name, account_tier, parameters); } ``` You need to deploy an Azure resource to the **East US** Azure region. The resource will be used to perform **sentiment analysis**. **How should you call the method?** **A.** `create_resource("res1", "ContentModerator", "S0", "eastus")` **B.** `create_resource("res1", "TextAnalytics", "S0", "eastus")` **C.** `create_resource("res1", "ContentModerator", "Standard", "East US")` **D.** `create_resource("res1", "TextAnalytics", "Standard", "East US")` ---
**Answer Section:** ✅ **Correct Answer: B. `create_resource("res1", "TextAnalytics", "S0", "eastus")`** **Explanation:** * **Kind:** `"TextAnalytics"` is the correct service kind for performing **sentiment analysis**. * **SKU:** `"S0"` is the valid SKU for standard tier of Text Analytics. * **Location:** `"eastus"` is the correct format for specifying Azure region in **API requests** (no spaces, all lowercase). Why the others are incorrect: * **A** and **C** use `"ContentModerator"` which is a different service, not used for sentiment analysis. * **C** and **D** use `"Standard"` and `"East US"` — although human-readable, Azure APIs expect `"S0"` for SKU and `"eastus"` for location. --- **Final Answer: B. `create_resource("res1", "TextAnalytics", "S0", "eastus")`** ✅
146
**Question Section:** **DRAG DROP** You develop an app in C# named **App1** that performs **speech-to-speech translation**. You need to configure App1 to translate **English to German**. How should you complete the `SpeechTranslationConfig` object? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. **NOTE:** Each correct selection is worth one point. --- **Values:** * addTargetLanguage * speechSynthesisLanguage * speechRecognitionLanguage * voiceName --- **Answer Area:** ``` var translationConfig = SpeechTranslationConfig.FromSubscription(SPEECH_SUBSCRIPTION_KEY, SPEECH_SERVICE_REGION); translationConfig.__________ = "en-US"; translationConfig.__________("de"); ``` ---
**Answer Section:** 1. ✅ **speechRecognitionLanguage** = `"en-US"` * This sets the **source language** that the app listens for — in this case, **English**. 2. ✅ **addTargetLanguage** = `"de"` * This adds **German** as the **target language** for the translation. --- **Final Answers:** * **"en-US"** → `speechRecognitionLanguage` * **"de"** → `addTargetLanguage` These settings ensure that App1 recognizes speech in English and translates it into German.
147
**Question Section:** **HOTSPOT** You are developing a **streaming Speech to Text** solution that will use the **Speech SDK** and **MP3 encoding**. You need to develop a method to **convert speech to text** for streaming MP3 data. [https://www.examtopics.com/discussions/microsoft/view/134937-exam-ai-102-topic-3-question-60-discussion/]source How should you complete the code? To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Answer Area (Code Snippet with Dropdowns):** ``` audio_format = speechsdk.audio.__________(compressed_stream_format=speechsdk.AudioStreamContainerFormat.MP3) stream = speechsdk.audio.PullAudioInputStream(stream_format=audio_format, pull_stream_callback=callback) speech_config = speechsdk.SpeechConfig("18c518a7-3a69-47a8-aedc-a54745f708a1", "westus") audio_config = speechsdk.audio.AudioConfig(stream=stream) recognizer = speechsdk.__________(speech_config=speech_config, audio_config=audio_config) result = recognizer.recognize_once() text = result.text ``` --- **Dropdown Options for audio\_format:** * AudioConfig.SetProperty * AudioStreamFormat * GetWaveFormatPCM * PullAudioInputStream **Dropdown Options for recognizer:** * KeywordRecognizer * SpeakerRecognizer * SpeechRecognizer * SpeechSynthesizer
✅ **Correct choice:** `AudioStreamFormat` **Explanation:** You use `speechsdk.audio.AudioStreamFormat(compressed_stream_format=...)` to create a stream format that supports **MP3**. This is the correct way to define the format of compressed audio streams for speech recognition. --- ✅ **Correct choice:** `SpeechRecognizer` **Explanation:** To convert **speech to text**, you need `SpeechRecognizer`. Other recognizers serve different purposes: * **KeywordRecognizer** is for wake word detection. * **SpeakerRecognizer** is for speaker verification/identification. * **SpeechSynthesizer** is for text-to-speech. --- **Final Answers:** * **audio\_format**: `AudioStreamFormat` * **recognizer**: `SpeechRecognizer` This configuration supports streaming MP3 audio input and transcription using Azure’s Speech SDK.
148
**Question Section:** **HOTSPOT** You are building a **chatbot**. You need to use the **Content Moderator API** to identify **aggressive and sexually explicit language**. (Source: https://www.examtopics.com/discussions/microsoft/view/134937-exam-ai-102-topic-3-question-60-discussion/) **Which three settings should you configure?** To answer, select the appropriate settings in the answer area. **NOTE:** Each correct selection is worth one point. **Available Settings:** * Resource Name * autocorrect * PII * listId * classify * language * Content-Type: text/plain * Ocp-Apim-Subscription-Key ---
**Answer Section:** ✅ **Resource Name** ✅ **classify** ✅ **Ocp-Apim-Subscription-Key** --- **Explanation:** * **Resource Name** is required to specify the target Content Moderator resource in your Azure subscription, forming part of the endpoint URI. * **classify** must be set to **true** to enable detection of **sexually explicit**, **offensive**, and **aggressive content** based on classification categories. * **Ocp-Apim-Subscription-Key** is mandatory for authenticating the API request. Other parameters like `autocorrect`, `PII`, and `listId` are unrelated to detecting aggressive or explicit language and are not required in this scenario. --- **Correct Answer:** * **Resource Name** * **classify** * **Ocp-Apim-Subscription-Key** ✅✅✅
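As a rough sketch of how these three settings surface in a raw request (the endpoint path, query string, and sample text below are assumptions based on the Content Moderator text moderation API):

```
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ModerateTextDemo
{
    static async Task Main()
    {
        // The resource name forms part of the endpoint URI (placeholder values).
        string resourceEndpoint = "https://<resource-name>.cognitiveservices.azure.com";
        // classify=True enables the category scores used to flag sexually
        // explicit and aggressive language.
        string url = $"{resourceEndpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen?classify=True";

        using var client = new HttpClient();
        // The subscription key header authenticates the request.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<subscription-key>");

        // StringContent sets the Content-Type: text/plain header for the body.
        var content = new StringContent("Sample message to screen.", Encoding.UTF8, "text/plain");

        HttpResponseMessage response = await client.PostAsync(url, content);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```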
149
**Question Section:** You are developing an app that will use the **Speech** and **Language** APIs. You need to provision resources for the app. The solution must ensure that **each service is accessed by using a single endpoint and credential**. **Which type of resource should you create?** **A.** Azure AI Language **B.** Azure AI Speech **C.** Azure AI Services **D.** Azure AI Content Safety ---
**Answer Section:** ✅ **C. Azure AI Services** **Explanation:** To use **multiple Azure Cognitive Services** (like Speech and Language) with **a single endpoint and credential**, you should create a **multi-service Azure AI Services resource** (formerly known as a Cognitive Services resource). This allows you to use various services—such as Speech, Language, Vision, and Decision APIs—without needing to provision and manage separate resources and keys for each. --- Why the other options are incorrect: * **A. Azure AI Language**: Only provides access to Language services, not Speech. * **B. Azure AI Speech**: Only provides access to Speech services, not Language. * **D. Azure AI Content Safety**: A separate service for moderating content; unrelated to general Speech or Language APIs. --- **Correct Answer: C. Azure AI Services** ✅
150
**Question Section:** You are building a **chatbot**. You need to ensure that the bot will **recognize the names of your company’s products and codenames**. The solution must **minimize development effort**. **Which Azure Cognitive Service for Language service should you include in the solution?** **A.** custom text classification **B.** entity linking **C.** custom Named Entity Recognition (NER) **D.** key phrase extraction ---
**Answer Section:** ✅ **C. custom Named Entity Recognition (NER)** **Explanation:** To identify **specific entities** such as **internal product names and codenames**, which are not typically recognized by prebuilt models, you should use **custom Named Entity Recognition (NER)**. This allows you to **train a model** to recognize domain-specific terms that are unique to your organization. While it requires some training data, it significantly **minimizes development effort** compared to implementing entity recognition manually and is the **most accurate solution for recognizing structured named entities**. --- Why the other options are incorrect: * **A. custom text classification**: Useful for categorizing overall input into labels (e.g., topic intent), not for extracting specific terms like product names. * **B. entity linking**: Matches known entities to external knowledge bases like Wikipedia — not suitable for custom or internal terms. * **D. key phrase extraction**: Identifies general important terms in text, but does not extract or categorize specific entities accurately. --- **Correct Answer: C. custom Named Entity Recognition (NER)** ✅
151
**Question Section:** You have an Azure subscription that contains an Azure App Service app named **App1**. You provision a **multi-service Azure Cognitive Services resource** named **CSAccount1**. You need to configure **App1** to access **CSAccount1**. The solution must **minimize administrative effort**. **What should you use to configure App1?** **A.** a system-assigned managed identity and an X.509 certificate **B.** the endpoint URI and an OAuth token **C.** the endpoint URI and a shared access signature (SAS) token **D.** the endpoint URI and subscription key ---
**Answer Section:** ✅ **D. the endpoint URI and subscription key** **Explanation:** The simplest and most commonly used method to authenticate with Azure Cognitive Services (including multi-service resources like **CSAccount1**) is to use the **endpoint URI** and a **subscription key**. This approach: * Requires **minimal setup and configuration** * Is ideal for **development or lightweight production use** * Can be easily added to the app settings of App1 (e.g., as environment variables) --- Why the other options are incorrect: * **A. system-assigned managed identity and X.509 certificate**: Managed identities can be used for authentication in some Azure services, but **Cognitive Services primarily uses keys** or Azure RBAC with identity, and adding an X.509 certificate introduces more administrative overhead. * **B. OAuth token**: This is more complex to implement and is typically used when accessing services via **Azure Active Directory**, which requires extra setup. * **C. SAS token**: Not applicable to Cognitive Services authentication. SAS is used primarily for **Azure Storage** services. --- **Correct Answer: D. the endpoint URI and subscription key** ✅
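A minimal sketch of how App1 might consume the endpoint URI and subscription key from its app settings (the setting names and the Text Analytics client shown here are illustrative assumptions):

```
using System;
using Azure;
using Azure.AI.TextAnalytics;

class CognitiveClientDemo
{
    static void Main()
    {
        // Hypothetical App Service app settings exposed as environment variables.
        string endpoint = Environment.GetEnvironmentVariable("CSACCOUNT1_ENDPOINT");
        string key = Environment.GetEnvironmentVariable("CSACCOUNT1_KEY");

        // The multi-service endpoint URI plus the subscription key is all the
        // client needs to authenticate against CSAccount1.
        var client = new TextAnalyticsClient(new Uri(endpoint), new AzureKeyCredential(key));

        DocumentSentiment sentiment = client.AnalyzeSentiment("The service was quick to set up.");
        Console.WriteLine(sentiment.Sentiment);
    }
}
```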
152
**Question Section:** You have an Azure subscription that contains a multi-service Azure Cognitive Services Translator resource named **Translator1**. You are building an app that will translate text and documents by using **Translator1**. You need to create the **REST API request** for the app. **Which headers should you include in the request?** **A.** the access control request, the content type, and the content length **B.** the subscription key and the client trace ID **C.** the resource ID and the content language **D.** the subscription key, the subscription region, and the content type ---
**Answer Section:** ✅ **D. the subscription key, the subscription region, and the content type** **Explanation:** When calling the **Translator Text API** via REST, the request must include the following **headers**: * `Ocp-Apim-Subscription-Key`: Your Translator resource's subscription key * `Ocp-Apim-Subscription-Region`: The Azure region where your resource is deployed * `Content-Type`: Typically `"application/json"` for sending text or document content These headers ensure: * Proper **authentication** * Correct **regional routing** * Accurate **data formatting** --- Why the other options are incorrect: * **A.** Access control and content length are not required for authentication or core functionality with Translator API. * **B.** Client trace ID is optional (used for debugging); the subscription region is missing. * **C.** Resource ID and content language are not valid REST headers for Translator. --- **Correct Answer: D. the subscription key, the subscription region, and the content type** ✅
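A minimal C# sketch showing the three headers on a Translator v3 text request (the key, region, and target language are placeholders):

```
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TranslateRequestDemo
{
    static async Task Main()
    {
        string route = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de";
        string body = "[{\"Text\":\"Hello, world\"}]";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, route);

        // The three required headers from the answer above.
        request.Headers.Add("Ocp-Apim-Subscription-Key", "<translator-key>");
        request.Headers.Add("Ocp-Apim-Subscription-Region", "<resource-region>");
        request.Content = new StringContent(body, Encoding.UTF8, "application/json"); // sets Content-Type

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```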
153
**Question Section:** **HOTSPOT** You are developing a **text processing solution**. You have the following function: ``` static void GetKeywords(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.RecognizeEntities(text); Console.WriteLine("Key words:"); foreach (CategorizedEntity entity in response.Value) { Console.WriteLine($"\t{entity.Text}"); } } ``` You call the function using this input: ``` GetKeywords(textAnalyticsClient, "Our tour of Paris included a visit to the Eiffel Tower"); ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. --- **Answer Area:** 1. **The output will include the following words: *our* and *included*.** 2. **The output will include the following words: *Paris*, *Eiffel*, and *Tower*.** 3. **The function will output all the key phrases from the input string to the console.** ---
**Answer Section:** 1. ❌ **No** * The function uses `RecognizeEntities()`, which identifies **named entities**, not common words like "our" and "included". 2. ✅ **Yes** * "Paris" and "Eiffel Tower" are both **named locations** and likely to be recognized as entities by the Text Analytics API. "Eiffel Tower" might be returned as one entity or two depending on the language model version. 3. ❌ **No** * The function does **not** extract key phrases; it performs **entity recognition**, so it won’t output key phrases. --- ✅ **Correct Answers:** 1. No 2. Yes 3. No
154
**Question Section:** **HOTSPOT** You are developing a text processing solution. You develop the following method: ``` static void GetKeyPhrases(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.ExtractKeyPhrases(text); Console.WriteLine("Key phrases:"); foreach (string keyphrase in response.Value) { Console.WriteLine($"\t{keyphrase}"); } } ``` You call the method by using the following code: ``` GetKeyPhrases(textAnalyticsClient, "the cat sat on the mat"); ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. --- **Answer Area:** 1. **The call will output key phrases from the input string to the console.** 2. **The output will contain the following words: the, cat, sat, on, and mat.** 3. **The output will contain the confidence level for key phrases.** ---
**Answer Section:** 1. ✅ **Yes** * The function calls `ExtractKeyPhrases` and prints the key phrases to the console. This is the expected behavior. 2. ❌ **No** * The `ExtractKeyPhrases` method returns **noun phrases**, not every individual word. Common stopwords like "the", "on", and "and" are usually excluded. The exact output may include **"cat"**, **"mat"**, or **"cat sat"**, depending on how the model extracts meaningful chunks — but not all five listed words. 3. ❌ **No** * The key phrase extraction API does **not return confidence scores** for each key phrase — only a list of key phrases. --- ✅ **Correct Answers:** 1. Yes 2. No 3. No
155
**Question Section:** **HOTSPOT** You are developing a service that records lectures given in **English (United Kingdom)**. You have a method named `AppendToTranscriptFile` that takes **translated text** and a **language identifier**. You need to develop code that will provide **transcripts** of the lectures to attendees in their respective language. The supported languages are **English, French, Spanish, and German**. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- ``` var config = SpeechTranslationConfig.FromSubscription("69cad5cc-0ab3-4704-bdff-afbf4aa07d85", "uksouth"); var lang = new List() { xxxxx // Dropdown 1 }; config.SpeechRecognitionLanguage = "en-GB"; lang.ForEach(config.AddTargetLanguage); using var audioConfig = AudioConfig.FromDefaultMicrophoneInput(); using var recognizer = new xxxxx(config, audioConfig); // Dropdown 2 var result = await recognizer.RecognizeOnceAsync(); if (result.Reason == ResultReason.TranslatedSpeech) { AppendToTranscriptFile(result.Text, result.Translation.Language); } ``` --- **Dropdown 1 Options (to specify target languages):** * `{"en-GB"}` * `{"fr", "de", "es"}` * `{"French", "Spanish", "German"}` * `languages` **Dropdown 2 Options (recognizer type):** * `IntentRecognizer` * `SpeakerRecognizer` * `SpeechSynthesizer` * `TranslationRecognizer` ---
**Answer Section:** * **Dropdown 1:** ✅ `{"fr", "de", "es"}` **Explanation:** You must use **language codes** (not full names) to define translation targets. `"fr"`, `"de"`, and `"es"` correspond to French, German, and Spanish. * **Dropdown 2:** ✅ `TranslationRecognizer` **Explanation:** Since the goal is to perform **speech-to-text translation**, the `TranslationRecognizer` class is specifically designed to handle both **speech recognition** and **language translation**. --- ✅ **Correct Answers:** * Dropdown 1: `{"fr", "de", "es"}` * Dropdown 2: `TranslationRecognizer`
156
Hard **Question:** You are developing an app that will use the **text-to-speech** capability of the **Azure AI Speech service**. The app will be used in **motor vehicles**. You need to **optimize the quality of the synthesized voice output**. **Which Speech Synthesis Markup Language (SSML) attribute should you configure?** **A.** the style attribute of the `mstts:express-as` element **B.** the effect attribute of the `voice` element **C.** the pitch attribute of the `prosody` element **D.** the level attribute of the `emphasis` element ---
**Answer: ✅ B. the effect attribute of the voice element** **Explanation:** The `effect="eq_car"` setting is provided by Azure to optimize speech playback in **automotive environments**, making the voice clearer and better suited for noisy settings like car cabins. It directly addresses the challenge of clarity — unlike `style`, `pitch`, or `emphasis`, which are intended for emotional tone and speech prosody. --- **✅ Correct Answer: B**
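A hedged SSML sketch passed through the Speech SDK (the key, region, and voice name are placeholder assumptions; `eq_car` is the equalizer effect intended for in-car playback):

```
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class CarVoiceDemo
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("<speech-key>", "<speech-region>");
        using var synthesizer = new SpeechSynthesizer(config);

        // effect="eq_car" on the voice element applies an equalizer profile
        // tuned for playback inside a vehicle cabin.
        string ssml =
            "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
            "  <voice name='en-US-JennyNeural' effect='eq_car'>" +
            "    Your destination is two kilometers ahead on the right." +
            "  </voice>" +
            "</speak>";

        await synthesizer.SpeakSsmlAsync(ssml);
    }
}
```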
157
**Question Section:** You are designing a **content management system**. You need to ensure that the **reading experience is optimized** for users who have **reduced comprehension and learning differences**, such as **dyslexia**. The solution must **minimize development effort**. **Which Azure service should you include in the solution?** **A.** Azure AI Immersive Reader **B.** Azure AI Translator **C.** Azure AI Document Intelligence **D.** Azure AI Language ---
**Answer Section:** ✅ **A. Azure AI Immersive Reader** **Explanation:** **Azure AI Immersive Reader** is designed specifically to improve **reading comprehension and accessibility** for users with learning differences such as **dyslexia**, ADHD, or language learners. It provides features like: * Text-to-speech * Syllable breakdown * Line focus * Picture dictionary * Translation into multiple languages It’s **easy to integrate** with minimal development effort and is the most targeted solution for accessibility needs in reading-based experiences. --- Why the other options are incorrect: * **B. Azure AI Translator** – Focuses only on language translation, not reading support or accessibility. * **C. Azure AI Document Intelligence** – Used for extracting data from forms and documents, not for enhancing readability. * **D. Azure AI Language** – Provides NLP services like entity recognition and sentiment analysis, but does not support accessible reading features. --- **✅ Correct Answer: A. Azure AI Immersive Reader**
158
**Question Section:** You have an **Azure Cognitive Services model** named **Model1** that identifies the **intent of text input**. You develop an app in C# named **App1**. You need to configure **App1** to use **Model1**. **Which package should you add to App1?** **A.** Universal.Microsoft.CognitiveServices.Speech **B.** SpeechServicesToolkit **C.** Azure.AI.Language.Conversations **D.** Xamarin.Cognitive.Speech ---
**Answer Section:** ✅ **C. Azure.AI.Language.Conversations** **Explanation:** The **Azure.AI.Language.Conversations** package is the correct SDK to integrate **Conversational Language Understanding (CLU)** models, which are designed to identify **intents and entities** from natural language input. This package supports building intelligent chatbots and other NLP-based applications using your **intent recognition model** (like Model1) within **Azure Cognitive Services for Language**. --- Why the other options are incorrect: * **A. Universal.Microsoft.CognitiveServices.Speech**: Targets speech-to-text functionality, not intent recognition from text. * **B. SpeechServicesToolkit**: Not an official Azure SDK; not applicable for intent recognition. * **D. Xamarin.Cognitive.Speech**: Geared toward mobile (Xamarin) speech capabilities, not for processing text input to detect intents. --- **Correct Answer: C. Azure.AI.Language.Conversations** ✅
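A minimal sketch of calling a Conversational Language Understanding deployment with this package (the endpoint, key, project name, and deployment name are placeholder assumptions):

```
using System;
using System.Text.Json;
using Azure;
using Azure.Core;
using Azure.AI.Language.Conversations;

class IntentDemo
{
    static void Main()
    {
        // Placeholder endpoint and key (assumptions).
        var client = new ConversationAnalysisClient(new Uri("<language-endpoint>"),
                                                    new AzureKeyCredential("<language-key>"));

        // Hypothetical project and deployment names standing in for Model1.
        var data = new
        {
            analysisInput = new
            {
                conversationItem = new
                {
                    text = "Check the status of my order",
                    id = "1",
                    participantId = "user1"
                }
            },
            parameters = new
            {
                projectName = "Model1",
                deploymentName = "production",
                stringIndexType = "Utf16CodeUnit"
            },
            kind = "Conversation"
        };

        Response response = client.AnalyzeConversation(RequestContent.Create(data));

        // Read the top intent from the JSON response.
        using JsonDocument result = JsonDocument.Parse(response.ContentStream);
        JsonElement prediction = result.RootElement.GetProperty("result").GetProperty("prediction");
        Console.WriteLine($"Top intent: {prediction.GetProperty("topIntent").GetString()}");
    }
}
```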
159
**Question Section:** **HOTSPOT** You are building an app that will **answer customer calls** about the **status of an order**. The app will: * Convert spoken customer input into a query, * Query a database for order details, and * Provide the customer with a **spoken response**. You need to identify which **Azure AI service APIs** to use. The solution must **minimize development effort**. **Which object should you use for each requirement?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Answer Area:** **Convert customer calls into text queries:** Dropdown options: * SpeechRecognizer * SpeechSynthesizer * TranslationRecognizer * VoiceProfileClient **Provide customers with the order details:** Dropdown options: * SpeechRecognizer * SpeechSynthesizer * TranslationRecognizer * VoiceProfileClient ---
**Answer Section:** 1. ✅ **Convert customer calls into text queries:** **SpeechRecognizer** **Explanation:** The **SpeechRecognizer** class is used to convert **spoken audio into text** — perfect for turning customer speech into a database query. 2. ✅ **Provide customers with the order details:** **SpeechSynthesizer** **Explanation:** The **SpeechSynthesizer** class is used to **convert text to speech**, making it ideal for reading the response back to the customer. --- ✅ **Final Answers:** * **Convert customer calls into text queries:** SpeechRecognizer * **Provide customers with the order details:** SpeechSynthesizer
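A minimal sketch combining the two objects (the key, region, and the `LookUpOrder` helper are placeholder assumptions; the database query is stubbed):

```
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class OrderStatusBot
{
    // Hypothetical stand-in for the order database query.
    static string LookUpOrder(string query) => "Order 1234 shipped yesterday.";

    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("<speech-key>", "<speech-region>");

        // 1. Convert the caller's speech into a text query.
        using var recognizer = new SpeechRecognizer(config);
        SpeechRecognitionResult spoken = await recognizer.RecognizeOnceAsync();

        // 2. Query the order database (stubbed here).
        string answer = LookUpOrder(spoken.Text);

        // 3. Speak the order details back to the caller.
        using var synthesizer = new SpeechSynthesizer(config);
        await synthesizer.SpeakTextAsync(answer);
    }
}
```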
160
**Question Section** You plan to implement an **Azure AI Search** resource that will use a **custom skill based on sentiment analysis**. You need to **create a custom model** and **configure Azure AI Search** to use the model. **Which five actions should you perform in sequence?** To answer, move the appropriate actions from the list to the answer area and arrange them in the correct order. --- **Available Actions** * Create an endpoint for the model. * Rerun the indexer to enrich the index. * Create an Azure Machine Learning workspace. * Create and train the model in the Azure Machine Learning studio. * Provision an Azure AI Services resource and obtain the endpoint. * Connect the custom skill to the endpoint. ---
✅ **Correct Sequence (Answer Area)** 1. **Create an Azure Machine Learning workspace.** 2. **Create and train the model in the Azure Machine Learning studio.** 3. **Create an endpoint for the model.** 4. **Connect the custom skill to the endpoint.** 5. **Rerun the indexer to enrich the index.** --- ✅ **Explanation** 1. **Create an Azure Machine Learning workspace** * This is the foundational environment where you'll manage and train your machine learning models. 2. **Create and train the model in Azure Machine Learning studio** * You develop and test the sentiment analysis model using your workspace. 3. **Create an endpoint for the model** * After training, expose the model via a REST API so it can be called externally. 4. **Connect the custom skill to the endpoint** * Azure Cognitive Search can then call your custom model as a skill using the REST endpoint. 5. **Rerun the indexer to enrich the index** * Now that the custom skill is connected, rerun the indexer to apply enrichment using the sentiment model. --- ✅ Final Answer Recap 1. Create an Azure Machine Learning workspace 2. Create and train the model in the Azure Machine Learning studio 3. Create an endpoint for the model 4. Connect the custom skill to the endpoint 5. Rerun the indexer to enrich the index
161
**Question Section:** **HOTSPOT** You have a collection of **press releases stored as PDF files**. You need to **extract text** from the files and **perform sentiment analysis**. **Which service should you use for each task?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Dropdown Options for Each Task:** **Extract text:** * Computer Vision * Azure Cognitive Search * Form Recognizer **Perform sentiment analysis:** * Azure Cognitive Search * Computer Vision * Form Recognizer * Language ---
**Answer Section:** 1. **Extract text:** ✅ **Form Recognizer** * **Explanation:** Azure **Form Recognizer** (also known as Azure AI Document Intelligence) is designed to extract structured and unstructured text from documents, including PDFs. It supports both scanned and digital text-based documents and preserves layout. 2. **Perform sentiment analysis:** ✅ **Language** * **Explanation:** Azure **Language** (formerly Text Analytics) provides sentiment analysis, key phrase extraction, and entity recognition. It's the correct choice for analyzing the tone of the extracted text. --- ✅ **Final Answers:** * **Extract text:** Form Recognizer * **Perform sentiment analysis:** Language
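A minimal sketch chaining the two services (the endpoints, keys, and PDF URL are placeholders; `prebuilt-read` is assumed here for plain text extraction from the PDFs):

```
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;
using Azure.AI.TextAnalytics;

class PressReleasePipeline
{
    static async Task Main()
    {
        // 1. Extract text from the PDF with Form Recognizer / Document Intelligence.
        var docClient = new DocumentAnalysisClient(new Uri("<doc-intelligence-endpoint>"),
                                                   new AzureKeyCredential("<doc-intelligence-key>"));
        AnalyzeDocumentOperation operation = await docClient.AnalyzeDocumentFromUriAsync(
            WaitUntil.Completed, "prebuilt-read", new Uri("<pdf-blob-url>"));
        string text = operation.Value.Content;

        // 2. Run sentiment analysis on the extracted text with the Language service.
        // (Very long documents would need to be split into smaller chunks first.)
        var languageClient = new TextAnalyticsClient(new Uri("<language-endpoint>"),
                                                     new AzureKeyCredential("<language-key>"));
        DocumentSentiment sentiment = await languageClient.AnalyzeSentimentAsync(text);
        Console.WriteLine($"Overall sentiment: {sentiment.Sentiment}");
    }
}
```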
162
**Question Section:** **HOTSPOT** You are developing a text processing solution. You develop the following method: ``` static void GetKeyPhrases(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.ExtractKeyPhrases(text); Console.WriteLine("Key phrases:"); foreach (string keyphrase in response.Value) { Console.WriteLine($"\t{keyphrase}"); } } ``` You call the method by using the following code: ``` GetKeyPhrases(textAnalyticsClient, "the cat sat on the mat"); ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. --- **Answer Area:** 1. The call will output key phrases from the input string to the console. 2. The output will contain the following words: the, cat, sat, on, and mat. 3. The output will contain the confidence level for key phrases. --- | Statements | Yes | No |
**Answer Section:** 1. **The call will output key phrases from the input string to the console.** ✅ **Yes** The `ExtractKeyPhrases` method extracts key phrases and the function prints them to the console. 2. **The output will contain the following words: the, cat, sat, on, and mat.** ❌ **No** Key phrase extraction filters out common stopwords like "the" and "on". The output is likely to contain meaningful phrases like "cat", "mat", or "cat sat". 3. **The output will contain the confidence level for key phrases.** ❌ **No** The `ExtractKeyPhrases` method does not return confidence scores for each key phrase — only the list of phrases. --- ✅ **Correct Answers:** * Yes * No * No
163
**Question Section:** **DRAG DROP** You have a web app that uses **Azure Cognitive Search**. When reviewing billing for the app, you discover much higher than expected charges. You suspect that the **query key is compromised**. You need to **prevent unauthorized access** to the search endpoint and ensure that users only have **read-only access** to the document collection. The solution must **minimize app downtime**. **Which three actions should you perform in sequence?** To answer, move the appropriate actions from the list to the answer area and arrange them in the correct order. **Actions:** * Add a new query key * Regenerate the secondary admin key * Change the app to use the secondary admin key * Change the app to use the new key * Regenerate the primary admin key * Delete the compromised key ---
**Correct Sequence (Answer Area):** 1. Add a new query key 2. Change the app to use the new key 3. Delete the compromised key --- **Explanation:** 1. **Add a new query key** – You generate a fresh, uncompromised query key that will be used moving forward. 2. **Change the app to use the new key** – Update the application so it no longer uses the potentially compromised key. 3. **Delete the compromised key** – Once the new key is active in your app, remove the old one to prevent unauthorized access. This sequence avoids app downtime and mitigates the security risk. Regenerating admin keys is unnecessary here because the issue is with a query key, not an admin key.
164
Question Section: You have an existing Azure Cognitive Search service. You have an Azure Blob Storage account that contains millions of scanned documents stored as images and PDFs. You need to make the scanned documents available to search as quickly as possible. What should you do? A. Split the data into multiple blob containers. Create a Cognitive Search service for each container. Within each indexer definition, schedule the same runtime execution pattern. B. Split the data into multiple blob containers. Create an indexer for each container. Increase the search units. Within each indexer definition, schedule a sequential execution pattern. C. Create a Cognitive Search service for each type of document. D. Split the data into multiple virtual folders. Create an indexer for each folder. Increase the search units. Within each indexer definition, schedule the same runtime execution pattern.
**Answer: ✅ D. Split the data into multiple virtual folders. Create an indexer for each folder. Increase the search units. Within each indexer definition, schedule the same runtime execution pattern.** --- **Explanation:** * To **maximize throughput** and **minimize indexing time** with large volumes of data, it is best to: * Split content into **virtual folders** (prefix-based organization within a blob container). * Create **multiple indexers**, each targeting a different virtual folder. * **Increase search units** to allow for **parallel processing** of indexers. * **Schedule indexers to run simultaneously** for maximum concurrency. * This approach keeps everything under a **single Cognitive Search service**, which simplifies management and avoids unnecessary resource duplication. --- Why other options are incorrect: * **A**: Creating **multiple Cognitive Search services** increases complexity and cost unnecessarily. Azure best practices recommend using **one service** with scalable capacity. * **B**: Splitting into multiple containers is acceptable, but scheduling indexers **sequentially** contradicts the goal of processing "as quickly as possible." * **C**: Creating a search service **per document type** is inefficient and unnecessary. One search service can handle multiple content types. --- **✅ Correct Answer: D**
165
You need to implement a table projection to generate a physical expression of an Azure Cognitive Search index. Which three properties should you specify in the skillset definition JSON configuration table node? (Choose three.) NOTE: Each correct selection is worth one point. A. tableName B. generatedKeyName C. dataSource D. dataSourceConnection E. source
**Answer Section:** ✅ **A. tableName** ✅ **B. generatedKeyName** ✅ **E. source** --- **Explanation:** In an Azure Cognitive Search **skillset** that uses a **table projection**, the `tableProjection` configuration within the skillset JSON requires the following key properties: 1. **`tableName`** * Specifies the name of the table to generate in memory during skillset execution. * This is the name used to refer to the table in the skill inputs. 2. **`generatedKeyName`** * Specifies the name of the key column that will be generated for each row in the projected table. * This key is necessary for joining the table rows back to the main document. 3. **`source`** * Specifies the root source object (usually an array field from the input document) that is used to generate rows in the table. --- Why the other options are incorrect: * **C. dataSource** * Refers to the indexer's data source, not part of the `tableProjection` configuration. * **D. dataSourceConnection** * Not a valid property in the `tableProjection` context. This is relevant for data source definitions, not for skillset configuration. --- **Correct Answers: A, B, E** ✅✅✅
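For context, a sketch of where the three properties sit inside a skillset's knowledge store definition (the table name, key name, and source path below are illustrative):

```
"knowledgeStore": {
  "storageConnectionString": "<storage-connection-string>",
  "projections": [
    {
      "tables": [
        {
          "tableName": "KeyPhrases",
          "generatedKeyName": "keyPhraseId",
          "source": "/document/keyPhrases/*"
        }
      ],
      "objects": [],
      "files": []
    }
  ]
}
```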
166
**Question Section:** **HOTSPOT** You are creating an **enrichment pipeline** that will use **Azure Cognitive Search**. The knowledge store contains: * **Unstructured JSON data** * **Scanned PDF documents** that contain text **Which projection type should you use for each data type?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Dropdown options (for each question):** * File projection * Object projection * Table projection --- **Answer Area:** **JSON data:** xxxxxxx **Scanned data:** xxxxxxxxx ---
**Explanation:** * **JSON data** is best suited for **object projection**, which stores enriched data in a structured format (fields and values) that reflects the JSON schema. * **Scanned documents (like PDFs)** are typically stored using **file projection**, which stores content in its original binary format, useful for preserving the actual file and referencing it later. --- **Correct Answers:** * JSON data → Object projection * Scanned data → File projection ✅✅
167
You are building an Azure Cognitive Search custom skill. You have the following custom skill schema definition: ``` { "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill", "description": "My custom skill description", "uri": "https://contoso-webskill.azurewebsites.net/api/process", "context": "/document/organizations/*", "inputs": [ { "name": "companyName", "source": "/document/organizations/*" } ], "outputs": [ { "name": "companyDescription" } ] } ``` For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. --- Answer Area: Statement 1: `companyDescription` is available for indexing. Statement 2: The definition calls a web API as part of the enrichment process. Statement 3: The enrichment step is called only for the first organization under `/document/organizations/`.
Statement 1: **`companyDescription` is available for indexing.** → **Yes** Explanation: Any field listed under `"outputs"` in a custom skill is added to the enrichment pipeline and becomes available for indexing. Statement 2: **The definition calls a web API as part of the enrichment process.** → **Yes** Explanation: The skill uses `"#Microsoft.Skills.Custom.WebApiSkill"` and includes a `"uri"` — this means it calls a web API during enrichment. Statement 3: **The enrichment step is called only for the first organization under `/document/organizations/`.** → **No** Explanation: The use of `"*"` in `/document/organizations/*` means the skill is applied to **each object** in the `organizations` array — not just the first one. --- **Correct Answers:** * companyDescription is available for indexing → Yes * The definition calls a web API as part of the enrichment process → Yes * The enrichment step is called only for the first organization under `/document/organizations/` → No
168
**Question Section:** You have the following data sources: * **Finance:** On-premises Microsoft SQL Server database * **Sales:** Azure Cosmos DB using the Core (SQL) API * **Logs:** Azure Table storage * **HR:** Azure SQL database You need to ensure that you can **search all the data** by using the **Azure Cognitive Search REST API**. **What should you do?** **A.** Configure multiple read replicas for the data in Sales **B.** Mirror Finance to an Azure SQL database **C.** Ingest the data in Logs into Azure Data Explorer **D.** Ingest the data in Logs into Azure Sentinel ---
**Answer Section:** **Correct Answer: B. Mirror Finance to an Azure SQL database** **Explanation:** Azure Cognitive Search can natively index data from several **cloud-based sources**, including: * Azure SQL Database * Azure Cosmos DB * Azure Blob Storage * Azure Table Storage However, **on-premises SQL Server** is not a supported data source directly. To make **Finance data** (which is on-premises SQL Server) searchable using Azure Cognitive Search, you must **mirror or migrate it to a supported cloud-based source**, such as **Azure SQL Database**. Why the other options are incorrect: * **A.** Configuring read replicas for Sales (Azure Cosmos DB) does not address the issue of indexing Finance (on-prem) data. * **C.** Ingesting Logs into Azure Data Explorer is unrelated to enabling indexing in Cognitive Search. * **D.** Azure Sentinel is used for security and monitoring — not indexing data for search. --- **Final Answer: B. Mirror Finance to an Azure SQL database** ✅
169
**Question Section:** You are developing a solution to generate a **word cloud** based on the **reviews of a company’s products**. **Which Text Analytics REST API endpoint should you use?** A. `keyPhrases` B. `sentiment` C. `languages` D. `entities/recognition/general` ---
**Answer Section:** **Correct Answer: A. `keyPhrases`** **Explanation:** The `keyPhrases` endpoint in Azure Text Analytics is used to extract **important phrases and terms** from text. These key phrases typically represent the most meaningful parts of a sentence and are ideal for generating **word clouds**, which visually highlight the most common or relevant phrases in a body of text. Why the others are incorrect: * **B. `sentiment`**: Analyzes the tone of the text (positive, negative, neutral), not suitable for building a word cloud. * **C. `languages`**: Detects the language of the input text but does not extract content. * **D. `entities/recognition/general`**: Identifies known entities like people, places, and organizations, but may not provide the general terms you’d want in a word cloud. --- **Final Answer: A. `keyPhrases`** ✅
170
hard You are developing an application that will use Azure Cognitive Search for internal documents. You need to implement document-level filtering for Azure Cognitive Search. Which three actions should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Send Azure AD access tokens with the search request. B. Retrieve all the groups. C. Retrieve the group memberships of the user. D. Add allowed groups to each index entry. E. Create one index per group. F. Supply the groups as a filter for the search requests.
**Answer Section:** **Correct Answers:** * ✅ **C. Retrieve the group memberships of the user** * ✅ **D. Add allowed groups to each index entry** * ✅ **F. Supply the groups as a filter for the search requests** --- **Explanation:** To implement **document-level filtering** (also known as **security trimming**) in **Azure Cognitive Search**, follow this common pattern: 1. **Tag each document** in the index with a list of **allowed groups or user roles**. → **D. Add allowed groups to each index entry** 2. At query time, **retrieve the group memberships** of the current user. → **C. Retrieve the group memberships of the user** 3. **Apply a filter** in the search query that only returns documents where the user's group is listed. → **F. Supply the groups as a filter for the search requests** --- Why the other options are incorrect: * **A. Send Azure AD access tokens with the search request** ❌ Not directly used by Azure Cognitive Search — it doesn’t automatically process tokens for document-level filtering. * **B. Retrieve all the groups** ❌ Unnecessary — you only need to retrieve the **user’s group memberships**, not all groups in the system. * **E. Create one index per group** ❌ Inefficient and unscalable. Filtering is the recommended approach. --- **Final Answers: C, D, F** ✅✅✅
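A minimal sketch of the query-time filtering step (the index field name `group_ids`, the sample group IDs, and the `title` field are assumptions; the user's group memberships would come from Microsoft Graph or a similar lookup):

```
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

class SecureSearchDemo
{
    static void Main()
    {
        var searchClient = new SearchClient(new Uri("https://<search-service>.search.windows.net"),
                                            "internal-docs", new AzureKeyCredential("<query-key>"));

        // Group IDs retrieved for the signed-in user (step C), supplied as a filter (step F).
        // The hypothetical group_ids field holds the allowed groups stamped on each document (step D).
        string userGroups = "11111111-1111-1111-1111-111111111111,22222222-2222-2222-2222-222222222222";
        var options = new SearchOptions
        {
            Filter = $"group_ids/any(g: search.in(g, '{userGroups}', ','))"
        };

        SearchResults<SearchDocument> results = searchClient.Search<SearchDocument>("quarterly report", options);
        foreach (SearchResult<SearchDocument> result in results.GetResults())
        {
            Console.WriteLine(result.Document["title"]); // "title" is an illustrative field name
        }
    }
}
```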
171
hard Question Section: You have an Azure Cognitive Search solution and an enrichment pipeline that performs Sentiment Analysis on social media posts. You need to define a knowledge store that will include both the social media posts and the Sentiment Analysis results. Which two fields should you include in the definition? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. storageContainer B. storageConnectionString C. files D. tables E. objects
debated ✅ **Correct Answer: D (tables) and E (objects)** **Why D (tables) is correct:** * `tables` is one of the valid **projection types** in a knowledge store. * It is used to store **structured data**, such as **Sentiment Analysis results**, in a tabular form (like CSV or Azure Table Storage). * Example from official doc: > "Use tables to store flat row-column data, such as entities, key phrases, or sentiment scores." > Source: [Microsoft Docs – Knowledge Store](https://learn.microsoft.com/en-us/azure/search/knowledge-store-concept-intro?tabs=portal#projections) **Why E (objects) is correct:** * `objects` is also a valid **projection type**. * It stores **semi-structured data**, such as the **enriched JSON representation** of a social media post and its analysis. * Example from doc: > "Objects are projected as JSON structures. These projections often include the full document with all enrichment outputs." > Source: [Microsoft Docs – Knowledge Store](https://learn.microsoft.com/en-us/azure/search/knowledge-store-concept-intro?tabs=portal#projections) --- ❌ **Why B (storageConnectionString) is incorrect as an answer in this context:** * `storageConnectionString` is required in the **skillset definition** to connect to the Azure Blob Storage **where the knowledge store will be created**. * However, **it is not part of the knowledge store projection definition itself** (which is what the question is asking about). * It’s a configuration detail, not a **field that defines** what **data** goes into the knowledge store. Example from docs: > "A knowledge store is defined inside a skillset definition and has two components: > > * A connection string to Azure Storage. > * Projections that determine whether the knowledge store consists of tables, objects, or files." > — [https://learn.microsoft.com/en-us/azure/search/knowledge-store-concept-intro](https://learn.microsoft.com/en-us/azure/search/knowledge-store-concept-intro) Thus, **B is necessary for setup**, but **not part of the fields defining what’s stored**. --- ❌ **Why A (storageContainer) is incorrect:** * `storageContainer` is not a valid field in the **knowledge store** definition or projection. * The actual location where the projections are stored is inferred from the connection string and projection type (`tables`, `objects`, etc.). * This is often a misconception, possibly confused with general Azure Storage concepts. --- ❌ **Why C (files) is incorrect:** * `files` is another **projection type**, valid in some contexts. * But for this case (social media posts + sentiment analysis), it's **less appropriate** because: * `files` is used to generate and store **enriched content as text files** (e.g., `.txt`, `.html`, `.pdf`). * Social media posts and structured sentiment scores are better suited to `objects` (for raw/enriched JSON) and `tables` (for structured results). * Using `files` would not be wrong **technically**, but it's **not optimal or standard** for this use case. --- Summary: * ✅ `D (tables)` and `E (objects)` are **projection types** that define **what kind of enriched data is stored**. * ❌ `B (storageConnectionString)` is a **setup field**, not a **data definition** field. * ❌ `A (storageContainer)` is **not a valid field** in the knowledge store definition. * ❌ `C (files)` is a **less appropriate projection** for this scenario. **Correct answer: D and E.**
172
Hard **Question Section:** You create a **knowledge store** for **Azure Cognitive Search** by using the following JSON configuration: ``` "knowledgeStore": { "storageConnectionString": "DefaultEndpointsProtocol=...;", "projections": [ { "tables": [ { "tableName": "unrelatedDocument", "generatedKeyName": "documentId", "source": "/document/piiShape" }, { "tableName": "unrelatedKeyPhrases", "generatedKeyName": "keyPhraseId", "source": "/document/piiShape/keyPhrases" } ], "objects": [], "files": [] }, { "tables": [], "objects": [ { "storageContainer": "unrelatedocrtext", "source": null, "sourceContext": "/document/normalized_images/*/text", "inputs": [ { "name": "ocrText", "source": "/document/normalized_images/*/text" } ] }, { "storageContainer": "unrelatedocrlayout", "source": null, "sourceContext": "/document/normalized_images/*/layoutText", "inputs": [ { "name": "ocrLayoutText", "source": "/document/normalized_images/*/layoutText" } ] } ], "files": [] } ] } ``` Use the dropdown menus to select the answer choice that completes each statement based on the information in the graphic. **NOTE:** Each correct selection is worth one point. --- **Statements:** 1. **There will be \[answer choice]** * no projection groups * one projection group * two projection groups * four projection groups 2. **Normalized images will \[answer choice]** * not be projected * be projected to Azure Blob storage * be projected to Azure File storage * be saved to an Azure table store ---
**Answer Section:** 1. **There will be:** ✅ **two projection groups** * Explanation: The JSON shows **two separate objects in the `projections` array**, each representing a distinct projection group. 2. **Normalized images will:** ✅ **be projected to Azure Blob storage** * Explanation: The second projection group contains `objects` with `sourceContext` paths targeting `/document/normalized_images/*`. Since `objects` projections save JSON blobs to **Azure Blob Storage**, this confirms the destination. --- **Correct Answers:** * There will be: **two projection groups** * Normalized images will: **be projected to Azure Blob storage** ✅✅
173
**Question Section:** You plan to create an index for an **Azure Cognitive Search** service by using the **Azure portal**. The Cognitive Search service will connect to an **Azure SQL database**. The Azure SQL database contains a table named **UserMessages**. Each row in **UserMessages** has a field named **MessageCopy** that contains the **text of social media messages** sent by a user. Users will: * Perform **full text searches** against the **MessageCopy** field * See the **MessageCopy values** returned in the search results You need to configure the **properties of the index** for the **MessageCopy** field to support this solution. **Which attributes should you enable for the field?** **A.** Sortable and Retrievable **B.** Filterable and Retrievable **C.** Searchable and Facetable **D.** Searchable and Retrievable ---
**Answer Section:** ✅ **D. Searchable and Retrievable** **Explanation:** * **Searchable**: Enables **full-text search** on the field using the Azure Cognitive Search engine’s tokenizer and linguistic analysis. This is required for users to **search within the contents** of `MessageCopy`. * **Retrievable**: Ensures the field is **included in the search results** returned to the user. Since the users need to **see the MessageCopy** value, it must be retrievable. --- Why the other options are incorrect: * **A. Sortable and Retrievable**: Sortable allows ordering by value but **does not enable full-text search**. * **B. Filterable and Retrievable**: Filterable allows exact match filtering (e.g., MessageCopy = "foo") but **not full-text search**. * **C. Searchable and Facetable**: Facetable is for grouping and aggregating values (e.g., counts), not relevant for full-text search or display. --- **Correct Answer: D. Searchable and Retrievable** ✅
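A minimal sketch of the field definition using the .NET SDK, where `IsHidden = false` corresponds to the Retrievable attribute (the index name, key field, and endpoint are illustrative):

```
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

class CreateIndexDemo
{
    static void Main()
    {
        var indexClient = new SearchIndexClient(new Uri("https://<search-service>.search.windows.net"),
                                                new AzureKeyCredential("<admin-key>"));

        var index = new SearchIndex("usermessages")
        {
            Fields =
            {
                new SimpleField("Id", SearchFieldDataType.String) { IsKey = true },
                // SearchableField enables full-text search; IsHidden = false keeps
                // the field retrievable so MessageCopy appears in search results.
                new SearchableField("MessageCopy") { IsHidden = false }
            }
        };

        indexClient.CreateOrUpdateIndex(index);
    }
}
```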
174
**Question Section:** You have the following data sources: * **Finance:** On-premises Microsoft SQL Server database * **Sales:** Azure Cosmos DB using the Core (SQL) API * **Logs:** Azure Table storage * **HR:** Azure SQL database You need to ensure that you can **search all the data** by using the **Azure Cognitive Search REST API**. **What should you do?** **A.** Export the data in Finance to Azure Data Lake Storage **B.** Configure multiple read replicas for the data in Sales **C.** Ingest the data in Logs into Azure Data Explorer **D.** Migrate the data in HR to Azure Blob storage ---
**Answer Section:** **Correct Answer: A. Export the data in Finance to Azure Data Lake Storage** **Explanation:** Azure Cognitive Search can directly index **cloud-native** data sources such as: * Azure SQL Database * Azure Cosmos DB * Azure Blob Storage * Azure Table Storage * Azure Data Lake Storage However, **on-premises SQL Server** (used by Finance) is **not a supported direct data source**. To enable indexing, you need to **move the Finance data into a supported cloud source**, such as **Azure Data Lake Storage** or **Azure SQL Database**. --- Why the other options are incorrect: * **B. Configure multiple read replicas for the data in Sales**: This doesn’t help Azure Cognitive Search access the data — the service already supports Azure Cosmos DB natively. * **C. Ingest the data in Logs into Azure Data Explorer**: Azure Cognitive Search does **not** natively support Azure Data Explorer. Azure Table Storage is already a supported source. * **D. Migrate the data in HR to Azure Blob storage**: **HR is already in Azure SQL Database**, which is natively supported by Azure Cognitive Search. No migration is needed. --- **Final Answer: A. Export the data in Finance to Azure Data Lake Storage** ✅
175
hard **Question Section:** You plan to provision **Azure Cognitive Services** resources by using the following method: ``` CognitiveServicesAccount parameters = new CognitiveServicesAccount(null, null, kind, location, name, new CognitiveServicesAccountProperties(), new Sku(tier)); result = client.Accounts.Create(resource_group_name, tier, parameters); ``` You need to create a **Standard tier** resource that will **convert scanned receipts into text**. **How should you call the method?** To answer, select the appropriate **kind** and **location/tier** options in the answer area. **NOTE:** Each correct selection is worth one point. --- ``` provision_resource("res1", "XXXXXX", "XXXXX", "XXXXX"); ``` **Dropdown Options (Kind):** * FormRecognizer * ComputerVision * CustomVision.Prediction * CustomVision.Training **Dropdown Options (Tier/Location):** * "eastus", "S1" * "eastus", "S0" * "S0", "eastus" * "S1", "eastus" ---
**Answer Section:** * **Kind:** FormRecognizer * **Tier/Location:** `"eastus"`, `"S0"` --- **Explanation:** Azure Cognitive Services resources (including **Form Recognizer**) offer pricing tiers such as: * **F0**: Free tier (limited usage) * **S0**: **Standard** paid tier There is **no official S1 SKU** for **Form Recognizer**; the correct Standard SKU is **`S0`**. Although `"S1"` appears as a dropdown option in the exhibit (likely a reused UI component), it is **not valid for Form Recognizer**. You can verify this on [Microsoft's official Form Recognizer pricing page](https://azure.microsoft.com/en-us/pricing/details/form-recognizer/), which lists **only `F0` and `S0`**. Example call: ``` provision_resource("res1", "FormRecognizer", "eastus", "S0"); ``` --- ✅ **Final Answer:** * **Kind:** FormRecognizer * **Tier/Location:** `"eastus"`, `"S0"` This correctly provisions a **Standard-tier** Form Recognizer resource for **scanned receipt processing**.
176
**Question Section:** **HOTSPOT** You have an app named **App1** that uses **Azure AI Document Intelligence** to analyze **medical records** and provide **pharmaceutical dosage recommendations** for patients. You send a request to App1 and receive the following response (partial): ``` { "status": "succeeded", "analyzeResult": { "apiVersion": "2023-07-31", "modelId": "prebuilt-healthInsuranceCard.us", "content": "Blood Pressure 118/72", "pages": [ { "words": [ { "content": "Blood", "confidence": 0.766 }, { "content": "Pressure", "confidence": 0.716 }, { "content": "118/72", "confidence": 0.761 } ] } ], "documents": [ { "docType": "healthInsuranceCard.us", "confidence": 1 } ] } } ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. --- **Statements:** * The chosen model is suitable for the intended use case * The text content was recognized with greater than 70 percent confidence * The form elements were recognized with greater than 70 percent confidence
**Answer Area:** Statement 1: **The chosen model is suitable for the intended use case.** → No * The model used is `prebuilt-healthInsuranceCard.us`, which is designed to extract data from **health insurance cards**, not **medical records or dosage recommendations**. Statement 2: **The text content was recognized with greater than 70 percent confidence.** → Yes * All listed words have confidence scores: * "Blood" = 0.766 * "Pressure" = 0.716 * "118/72" = 0.761 * All scores are **above 70%**, so this is true. Statement 3: **The form elements were recognized with greater than 70 percent confidence.** → No * The response contains **no recognized form field values** for the document, so no form elements were recognized at any confidence level, and the statement is false. ---
177
**Question Section:** **HOTSPOT** You have an Azure subscription that contains an **Azure AI Document Intelligence** resource named **DI1**. You build an app named **App1** that analyzes **PDF files for handwritten content** by using **DI1**. You need to ensure that **App1 will recognize handwritten content**. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Code Snippet:** ``` Uri fileUri = new Uri(""); AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(WaitUntil.Completed, ____________, fileUri); // Dropdown 1 AnalyzeResult result = operation.Value; foreach (DocumentStyle style in result.Styles) { bool isHandwritten = style.IsHandwritten.HasValue && style.IsHandwritten == true; if (isHandwritten && style.Confidence > ____________) // Dropdown 2 { Console.WriteLine($"Handwritten content found:"); foreach (DocumentSpan span in style.Spans) { ... } } } ``` --- **Dropdown 1 Options (model ID):** * "prebuilt-document" * "prebuilt-contract" * "prebuilt-read" **Dropdown 2 Options (confidence threshold):** * 0.1 * 0.75 * 1.0 ---
**Answer Section:** **Dropdown 1:** `prebuilt-read` * The `prebuilt-read` model is designed to extract **text and style information**, including **handwriting detection**, from documents. * This model supports detecting whether a region is **handwritten or printed**. **Dropdown 2:** `0.75` * A confidence threshold of `0.75` is reasonable and commonly used to determine if the detection is **reliable enough** to act on. * Lower thresholds like `0.1` would likely include unreliable results, and `1.0` would be too strict. --- **Correct Answers:** * Model: `prebuilt-read` * Confidence: `0.75` ✅✅
178
**Question Section:** You have an app named **App1** that uses a **custom Azure AI Document Intelligence model** to recognize contract documents. You need to ensure that the model supports an **additional contract format**. The solution must **minimize development effort**. **What should you do?** **A.** Lower the confidence score threshold of App1 **B.** Create a new training set and add the additional contract format to the new training set. Create and train a new custom model **C.** Add the additional contract format to the existing training set. Retrain the model **D.** Lower the accuracy threshold of App1 ---
**Answer Section:** ✅ **Correct Answer: C. Add the additional contract format to the existing training set. Retrain the model** **Explanation:** To support an **additional format** in a custom Azure AI Document Intelligence model (formerly Form Recognizer), you should: * **Add new sample documents** (representing the new format) to the **existing training set**, and * **Retrain** the model. This approach: * Leverages the existing labeled data and training effort * Requires **minimal development work** compared to building a new model * Ensures the model learns to handle the new format alongside previously supported formats --- Why the other options are incorrect: * **A. Lower the confidence score threshold of App1** ❌ Might increase false positives — it does **not improve model understanding** of new formats. * **B. Create a new training set and train a new model** ❌ Involves **more effort** than retraining an existing model, and results in a new model to manage. * **D. Lower the accuracy threshold of App1** ❌ There is no direct setting in Azure AI Document Intelligence for "accuracy threshold." Accuracy is determined by the model's training. --- **Correct Answer: C. Add the additional contract format to the existing training set. Retrain the model** ✅
179
**Question Section:** **HOTSPOT** You have an Azure subscription. You need to deploy an **Azure AI Document Intelligence** resource. **How should you complete the Azure Resource Manager (ARM) template?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- (Source: https://www.examtopics.com/discussions/microsoft/view/135063-exam-ai-102-topic-4-question-20-discussion/) **ARM Template Snippet (with dropdowns):** ``` "type": "__________/accounts", // Dropdown 1 ... "kind": "__________" // Dropdown 2 ``` **Dropdown 1 Options:** * Microsoft.CognitiveSearch * Microsoft.CognitiveServices * Microsoft.MachineLearning * Microsoft.MachineLearningServices **Dropdown 2 Options:** * AiBuilder * CognitiveSearch * FormRecognizer * OpenAI ---
**Answer Section:** * **Dropdown 1:** Microsoft.CognitiveServices * **Dropdown 2:** FormRecognizer --- **Explanation:** To deploy a **Document Intelligence** (formerly Form Recognizer) resource via ARM: * The correct `type` is **Microsoft.CognitiveServices/accounts** because Document Intelligence is part of Azure Cognitive Services. * The correct `kind` is **FormRecognizer**, which is the specific kind value for deploying a Document Intelligence resource. This combination provisions a **Form Recognizer** resource under Azure Cognitive Services using ARM templates. --- **Correct Answers:** * `"type"` → Microsoft.CognitiveServices * `"kind"` → FormRecognizer ✅✅
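For context, a sketch of the complete resource block (the `apiVersion`, resource name, location, and SKU shown here are illustrative assumptions):

```
{
  "type": "Microsoft.CognitiveServices/accounts",
  "apiVersion": "2023-05-01",
  "name": "di1",
  "location": "eastus",
  "kind": "FormRecognizer",
  "sku": {
    "name": "S0"
  },
  "properties": {}
}
```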
180
**Question Section:** You are building an app named **App1** that will use **Azure AI Document Intelligence** to extract the following data from **scanned documents**: * Shipping address * Billing address * Customer ID * Amount due * Due date * Total tax * Subtotal You need to identify which model to use for App1. The solution must **minimize development effort**. **Which model should you use?** **A.** custom extraction model **B.** contract **C.** invoice **D.** general document ---
**Answer Section:** ✅ **Correct Answer: C. invoice** **Explanation:** The **prebuilt invoice model** in Azure AI Document Intelligence is specifically designed to extract common financial and billing fields such as: * **Billing and shipping address** * **Customer ID** * **Amount due** * **Subtotal** * **Total tax** * **Due date** This matches the exact requirements listed, and since it is **prebuilt**, it requires **minimal to no training effort**, making it the best option for reducing development time. --- Why the other options are incorrect: * **A. custom extraction model** ❌ Would require manual labeling and training, which contradicts the requirement to **minimize development effort**. * **B. contract** ❌ The contract model is for extracting fields from legal contracts, not invoices or billing data. * **D. general document** ❌ Can extract general content and layout, but it doesn't extract invoice-specific fields like total tax or due date with high accuracy. --- **Final Answer: C. invoice** ✅
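A minimal sketch of calling the prebuilt invoice model (the endpoint, key, and document URL are placeholders; the field names listed in the comment are a subset of what the model can return):

```
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

class InvoiceDemo
{
    static async Task Main()
    {
        var client = new DocumentAnalysisClient(new Uri("<doc-intelligence-endpoint>"),
                                                new AzureKeyCredential("<doc-intelligence-key>"));

        AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
            WaitUntil.Completed, "prebuilt-invoice", new Uri("<scanned-invoice-url>"));

        foreach (AnalyzedDocument invoice in operation.Value.Documents)
        {
            // Fields such as CustomerId, AmountDue, DueDate, TotalTax, SubTotal,
            // BillingAddress, and ShippingAddress are returned by the prebuilt model.
            if (invoice.Fields.TryGetValue("AmountDue", out DocumentField amountDue))
            {
                Console.WriteLine($"Amount due: {amountDue.Content} (confidence {amountDue.Confidence})");
            }
        }
    }
}
```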
181
Question Section: You build a bot by using the Microsoft Bot Framework SDK and the Azure Bot Service. You plan to deploy the bot to Azure. You register the bot by using the Bot Channels Registration service. Which two values are required to complete the deployment? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. botId B. tenantId C. appId D. objectId E. appSecret
**Answer Section:** ✅ **C. appId** ✅ **E. appSecret** --- **Explanation:** When registering a bot using the **Bot Channels Registration** service, you must provide the **Microsoft Entra (formerly Azure AD) app credentials**, which include: * **appId**: The **Application (client) ID** of the Azure AD app registration. This uniquely identifies the bot’s identity. * **appSecret**: The **client secret** associated with the app registration, used to authenticate the bot with Azure Bot Service and other APIs. These two values are **required** for the bot to authenticate properly and securely communicate with the Bot Framework and other services. --- Why the other options are incorrect: * **A. botId** – This is often used internally or for naming but is **not required for authentication**. * **B. tenantId** – Might be used for multi-tenant scenarios but is **not required for basic bot registration**. * **D. objectId** – Refers to the Azure AD object but is **not needed for bot registration or deployment**. --- **Final Answer: C. appId and E. appSecret** ✅✅
182
**Question Section:** **HOTSPOT** You are building a chatbot by using the **Microsoft Bot Framework Composer**. You have the dialog design shown in the exhibit (visual dialog design and expression for `user.name`; [source](https://www.examtopics.com/discussions/microsoft/view/56834-exam-ai-102-topic-5-question-2-discussion/)). For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. --- **Statements:** 1. `user.name` is an entity. 2. The dialog asks for a user name and a user age and assigns appropriate values to the `user.name` and `user.age` properties. 3. The chatbot attempts to take the first non-null entity value for `userName` or `personName` and assigns the value to `user.name`. ---
1. `user.name` is an entity → **No** * `user.name` is a **property**, not an entity. It refers to a variable in the bot's memory scope (`user`) where the captured name will be stored. 2. The dialog asks for a user name and a user age and assigns appropriate values to the `user.name` and `user.age` properties → **Yes** * The bot asks for both a name and an age and stores the responses in the `user.name` and `user.age` variables, as shown in the visual dialog flow. 3. The chatbot attempts to take the first non-null entity value for `userName` or `personName` and assigns the value to `user.name` → **Yes** * The expression used in the "Value" field is `=coalesce(@userName, @personName)`, which returns the **first non-null** value from either of those entities and assigns it to `user.name`. --- **Correct Answers:** 1. No 2. Yes 3. Yes ✅✅✅
183
**Question Section** You are building a multilingual chatbot. You need to send a different answer for positive and negative messages. Which two Text Analytics APIs should you use? Each correct answer presents part of the solution. (Choose two.) NOTE: Each correct selection is worth one point. Options: * A. Linked entities from a well-known knowledge base * B. Sentiment Analysis * C. Key Phrases * D. Detect Language * E. Named Entity Recognition ---
**Answer Section** **Correct Answers:** * **B. Sentiment Analysis** * **D. Detect Language** **Explanation:** * **B. Sentiment Analysis**: This API evaluates the sentiment of the input text and classifies it as positive, neutral, or negative. It is **directly required** to determine whether a message is positive or negative, allowing the chatbot to tailor responses accordingly. * **D. Detect Language**: Since the chatbot is multilingual, it must first detect the language of incoming messages to process them appropriately or pass them to the correct translation or analysis pipeline. This makes **language detection essential**. **Why other options are incorrect:** * **A. Linked entities from a well-known knowledge base**: This identifies entities in the text and links them to known data sources like Wikipedia. It is useful for enriching content but **not for detecting sentiment** or language. * **C. Key Phrases**: This extracts important terms but does **not help determine sentiment** or **language**. * **E. Named Entity Recognition**: Identifies people, places, organizations, etc., in text but does **not indicate sentiment** or **language**.
184
**Question Section** **DRAG DROP –** You plan to build a chatbot to support task tracking. You create a Language Understanding service named **lu1**. You need to build a Language Understanding model to integrate into the chatbot. The solution must **minimize development time** to build the model. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **Actions**: * Train the application. * Publish the application. * Add a new application. * Add example utterances. * Add the prebuilt domain ToDo.
**Answer Section** **Correct Order**: 1. Add a new application. 2. Add the prebuilt domain ToDo. 3. Train the application. 4. Publish the application. **Explanation**: * **Add a new application**: You start by creating a new LUIS application to house the language understanding logic. * **Add the prebuilt domain ToDo**: To **minimize development time**, you use a prebuilt domain that includes intents and utterances related to tasks. * **Train the application**: You train the app so that the model can learn from the domain's intents and utterances. * **Publish the application**: Once trained, you publish the model so it can be used in your chatbot. **Why not "Add example utterances"?** Using a **prebuilt domain** already includes necessary intents and utterances, making it unnecessary to manually add example utterances — especially when minimizing development time.
185
**Question Section** You are building a bot on a local computer by using the Microsoft Bot Framework. The bot will use an existing Language Understanding model. You need to translate the Language Understanding model locally by using the Bot Framework CLI. What should you do first? Options: * A. From the Language Understanding portal, clone the model. * B. Export the model as an .lu file. * C. Create a new Speech service. * D. Create a new Language Understanding service. ---
**Answer Section** **Correct Answer:** * **B. Export the model as an .lu file.** **Explanation:** To work with a LUIS (Language Understanding) model locally using the **Bot Framework CLI (bf cli)**, you need to export the model as a `.lu` (Language Understanding) file. This format allows local manipulation, testing, and translation of the model using tools like `bf luis:translate`. **Why the other options are incorrect:** * **A. Clone the model**: Cloning within the portal duplicates the model in the cloud but doesn't help with local translation. * **C. Create a new Speech service**: This is unrelated to LUIS or translation tasks. * **D. Create a new Language Understanding service**: Not necessary if the model already exists — the question implies you're using an existing model.
186
**Question Section** **DRAG DROP –** You are using a Language Understanding service to handle natural language input from the users of a web-based customer agent. The users report that the agent frequently responds with the following generic response: *"Sorry, I don’t understand that."* You need to improve the ability of the agent to respond to requests. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **Actions:** * Add prebuilt domain models as required. * Validate the utterances logged for review and modify the model. * Migrate authoring to an Azure resource authoring key. * Enable active learning. * Enable log collection by using Log Analytics. * Train and republish the Language Understanding model. ---
**Answer Section** **Correct Sequence:** 1. **Enable active learning.** 2. **Validate the utterances logged for review and modify the model.** 3. **Train and republish the Language Understanding model.** **Explanation:** * **Enable active learning**: This allows the system to automatically collect low-confidence utterances, helping identify gaps in understanding. * **Validate the utterances logged for review and modify the model**: Review the collected user inputs to improve intent and entity recognition by updating the model. * **Train and republish the Language Understanding model**: After making updates, retrain and republish to make the improved model available for the chatbot. **Why other options are not selected:** * **Add prebuilt domain models as required**: This helps at the initial setup stage, but the scenario is about improving an existing model. * **Migrate authoring to an Azure resource authoring key**: Relevant only if transitioning from a legacy key to Azure resource-based authoring. * **Enable log collection by using Log Analytics**: Useful for advanced telemetry, but not directly necessary for training the model based on user inputs.
187
**Question Section** You build a conversational bot named **bot1**. You need to configure the bot to use a **QnA Maker** application. From the **Azure Portal**, where can you find the information required by **bot1** to connect to the QnA Maker application? Options: * A. Access control (IAM) * B. Properties * C. Keys and Endpoint * D. Identity ---
**Answer Section** **Correct Answer:** * **C. Keys and Endpoint** **Explanation:** To connect your bot to a QnA Maker application, you need details like the **endpoint URL**, **authorization key**, and **resource region**. These are found in the **"Keys and Endpoint"** section of the QnA Maker resource in the Azure Portal. **Why other options are incorrect:** * **A. Access control (IAM)**: Manages role-based access for users, not application connection details. * **B. Properties**: May show general info like resource IDs but not connection-specific keys or endpoints. * **D. Identity**: Refers to the system-assigned or user-assigned managed identity, not the connection credentials for QnA Maker.
188
**Question Section**

**HOTSPOT –** You are building a chatbot by using the Microsoft Bot Framework SDK. You use an object named **UserProfile** to store user profile information and an object named **ConversationData** to store information related to a conversation.

You create the following state accessors to store both objects in state:

```
var userStateAccessors = _userState.CreateProperty<UserProfile>(nameof(UserProfile));
var conversationStateAccessors = _conversationState.CreateProperty<ConversationData>(nameof(ConversationData));
```

The state storage mechanism is set to **Memory Storage**.

For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. Each correct selection is worth one point.

**Statements**

1. The code will create and maintain the **UserProfile** object in the underlying storage layer.
2. The code will create and maintain the **ConversationData** object in the underlying storage layer.
3. The **UserProfile** and **ConversationData** objects will persist when the Bot Framework runtime terminates.

---
**Answer Section** 1. **Yes** – The code will create and maintain the **UserProfile** object in the underlying storage layer. * ✔️ True. Since `CreateProperty()` is called and `UserState` is configured, it will maintain the object in the state system. 2. **Yes** – The code will create and maintain the **ConversationData** object in the underlying storage layer. * ✔️ True. `ConversationState` is also properly configured and will maintain the object. 3. **No** – The **UserProfile** and **ConversationData** objects will persist when the Bot Framework runtime terminates. * ❌ False. Since **Memory Storage** is used, all state is stored **in-memory** and will be lost when the application is restarted or terminated. To persist data, you'd need Blob Storage, Cosmos DB, etc.
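For context, a minimal sketch of how this wiring typically looks with the Bot Framework SDK for .NET. The `UserProfile` and `ConversationData` classes here are illustrative; swapping `MemoryStorage` for a durable `IStorage` implementation (for example Blob Storage or Cosmos DB) is what would make statement 3 true.

```
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

public class UserProfile { public string Name { get; set; } }
public class ConversationData { public int TurnCount { get; set; } }

public class StateDemoBot : IBot
{
    // MemoryStorage keeps everything in process memory - lost when the app stops.
    private readonly IStorage _storage = new MemoryStorage();
    private readonly UserState _userState;
    private readonly ConversationState _conversationState;

    public StateDemoBot()
    {
        _userState = new UserState(_storage);
        _conversationState = new ConversationState(_storage);
    }

    public async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default)
    {
        var userAccessor = _userState.CreateProperty<UserProfile>(nameof(UserProfile));
        var convAccessor = _conversationState.CreateProperty<ConversationData>(nameof(ConversationData));

        // GetAsync creates the object on first use; the factory supplies the default value.
        UserProfile profile = await userAccessor.GetAsync(turnContext, () => new UserProfile(), cancellationToken);
        ConversationData data = await convAccessor.GetAsync(turnContext, () => new ConversationData(), cancellationToken);

        data.TurnCount++;

        // SaveChangesAsync writes the objects back to whichever storage layer is configured.
        await _userState.SaveChangesAsync(turnContext, false, cancellationToken);
        await _conversationState.SaveChangesAsync(turnContext, false, cancellationToken);
    }
}
```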
189
**Question Section** You are building a chatbot for a Microsoft Teams channel by using the **Microsoft Bot Framework SDK**. The chatbot will use the following code. For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **Statements:** 1. **OnMembersAddedAsync** will be triggered when a user joins the conversation 2. When a new user joins the conversation, the existing users in the conversation will see a chatbot greeting 3. **OnMembersAddedAsync** will be initiated when a user sends a message ---
**Answer Section** 1. **Yes** – **OnMembersAddedAsync** will be triggered when a user joins the conversation * ✔️ Correct. This method is part of the Bot Framework's activity handlers and is specifically triggered when a new member is added to the conversation. 2. **No** – When a new user joins the conversation, the existing users in the conversation will see a chatbot greeting * ❌ Incorrect. **OnMembersAddedAsync** is triggered for the **newly added member**, and typically the bot greets **only the new user**, not existing ones, unless explicitly coded otherwise. 3. **No** – **OnMembersAddedAsync** will be initiated when a user sends a message * ❌ Incorrect. This method is only triggered on **member-added** events, not message activities. Sending a message triggers **OnMessageActivityAsync** instead.
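A minimal sketch of the pattern being described, assuming a standard `ActivityHandler` bot; the greeting text is illustrative.

```
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class GreetingBot : ActivityHandler
{
    // Fires on conversationUpdate activities when members are added - not on messages.
    protected override async Task OnMembersAddedAsync(
        IList<ChannelAccount> membersAdded,
        ITurnContext<IConversationUpdateActivity> turnContext,
        CancellationToken cancellationToken)
    {
        foreach (var member in membersAdded)
        {
            // Skip the bot itself so only newly added users are greeted.
            if (member.Id != turnContext.Activity.Recipient.Id)
            {
                await turnContext.SendActivityAsync(
                    MessageFactory.Text($"Welcome, {member.Name}!"),
                    cancellationToken);
            }
        }
    }

    // Regular user messages arrive here instead.
    protected override Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext,
        CancellationToken cancellationToken)
        => turnContext.SendActivityAsync(MessageFactory.Text("You said: " + turnContext.Activity.Text), cancellationToken);
}
```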
190
**Question Section**

**HOTSPOT –** You are reviewing the design of a chatbot. The chatbot includes a **language generation (LG)** file that contains the following fragment:

```
# Greet(user)
- ${Greeting()}, ${user.name}
```

For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. NOTE: Each correct selection is worth one point.

**Statements:**

1. `${user.name}` retrieves the user name by using a prompt.
2. `Greet()` is the name of the language generation template.
3. `${Greeting()}` is a reference to a template in the language generation file.

---
**Answer Section** 1. **No** – `${user.name}` retrieves the user name by using a prompt. * ❌ This expression accesses the `name` property from the `user` object in memory, not via a prompt. The value must already exist in the memory scope. 2. **Yes** – `Greet()` is the name of the language generation template. * ✔️ Correct. The `# Greet(user)` line defines a template named `Greet` with a parameter called `user`. 3. **Yes** – `${Greeting()}` is a reference to a template in the language generation file. * ✔️ Correct. This syntax calls another LG template named `Greeting`, returning its generated content.
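To make the relationship concrete, a small illustrative LG file is shown below; the `Greeting` variations are invented for this example. `${Greeting()}` expands to one of the `Greeting` variations and `${user.name}` is read from memory, so `Greet(user)` could render as "Hello, Anna".

```
# Greeting
- Hello
- Hi there

# Greet(user)
- ${Greeting()}, ${user.name}
```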
191
**Question Section**

**HOTSPOT –** You are building a bot that will use **Language Understanding**. You have a **LUDown** file that contains the following content:

```
## Confirm
- confirm
- ok
- yes

## ExtractName
- call me steve !
- i am anna
- (i'm|i am) {@personName:Any}[.]
- my name is {@personName:Any}[.]

## Logout
- forget me
- log out

## SelectItem
- choose last
- choose the {@DirectionalReference=bottom left}
- choose {@DirectionalReference=top right}
- i like {@DirectionalReference=left} one

## SelectNone
- none

@ ml DirectionalReference
@ prebuilt personName
```

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. Each correct selection is worth one point.

**Statements:**

1. **SelectItem** is [answer choice].
   * Options: a domain, an entity, an intent, an utterance
2. **Choose {@DirectionalReference=top right}** is [answer choice].
   * Options: a domain, an entity, an intent, an utterance

---
**Answer Section** 1. **SelectItem is an intent.** * ✔️ Correct. `SelectItem` is defined using `## SelectItem` and contains utterances. This is the syntax for **defining an intent** in LUDown. 2. **Choose {@DirectionalReference=top right} is an utterance.** * ✔️ Correct. This is a **sample utterance** within the `SelectItem` intent. It includes an **entity assignment** (`DirectionalReference=top right`), but the line itself is an **utterance**. **Explanation:** * Lines beginning with `##` define **intents**. * Lines under an intent are **utterances** that train the intent. * Expressions like `{@DirectionalReference=top right}` show **labeled entities** within those utterances. * `@ ml DirectionalReference` defines **DirectionalReference** as a machine learning **entity**. * `@ prebuilt personName` refers to a **prebuilt entity** from LUIS.
192
**Question Section** You are building a chatbot by using the **Microsoft Bot Framework Composer** as shown in the exhibit. https://www.examtopics.com/discussions/microsoft/view/75647-exam-ai-102-topic-5-question-14-discussion/ The chatbot contains a dialog named **GetUserDetails**. GetUserDetails contains a **TextInput** control that prompts users for their name. The user input will be stored in a property named `name`. You need to ensure that you can **dispose of the property when the last active dialog ends**. **Which scope should you assign to `name`?** Options: A. dialog B. user C. turn D. conversation ---
**Answer Section** **Correct Answer:** * **A. dialog** **Explanation:** * The **dialog scope** is used for temporary data that is only needed during the lifespan of the dialog. Once the dialog ends, the data stored in `dialog.name` will automatically be discarded. * This is exactly what's needed when you want the data (like the user's name) to **not persist beyond the dialog's lifetime**. **Why other options are incorrect:** * **B. user**: Persists data across all conversations and dialogs for a user. Not appropriate for short-lived values. * **C. turn**: Lives only during a single turn (a single request/response). Too short-lived. * **D. conversation**: Persists across the entire conversation, which can include multiple dialogs — not automatically discarded after one dialog ends.
193
**Question Section** You are building a chatbot that will provide information to users as shown in the following exhibit. GRAPHIC UNAVAILABLE Use the drop-down menus to select the answer choices that complete each statement based on the information presented. **Statements:** 1. The chatbot is showing \[**answer choice**]. * Options: an Adaptive Card, a Hero Card, a Thumbnail Card 2. The card includes \[**answer choice**]. * Options: an action set, an image, an image group, media ---
**Answer Section** **Correct Answers:** * The chatbot is showing → **an Adaptive Card** * The card includes → **an image** --- ✅ Explanation: * The exhibit displays **rich structured data**, including: * Multiple passenger names * Flight itinerary with detailed formatting (airports, times, stop types) * Total price * This layout and customization go beyond what **Hero** or **Thumbnail cards** can handle — it clearly indicates the use of an **Adaptive Card**, which supports: * Multiple text blocks * Custom layouts * Images * Data binding * The **plane icon** between the airports is an **image**, not a media object or image group. --- **Final Answer:** * The chatbot is showing → **an Adaptive Card** * The card includes → **an image**
194
**DRAG DROP –** You build a bot by using the **Microsoft Bot Framework SDK**. You need to test the bot **interactively on a local machine**. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **NOTE:** More than one order of answer choices is correct. You will receive credit for any of the correct orders you select. **Actions:** * Open the Bot Framework Composer. * Connect to the bot endpoint. * Register the bot with the Azure Bot Service. * Build and run the bot. * Open the Bot Framework Emulator. ---
**Answer Section** **Correct Sequence:** 1. **Build and run the bot.** 2. **Open the Bot Framework Emulator.** 3. **Connect to the bot endpoint.** **Explanation:** * **Build and run the bot**: Starts the local bot code and enables it to listen on a localhost port. * **Open the Bot Framework Emulator**: This tool is used to test bots locally by sending messages. * **Connect to the bot endpoint**: You must specify the local bot endpoint (usually `http://localhost:3978/api/messages`) in the emulator to begin the conversation. **Why other options are not included:** * **Open the Bot Framework Composer**: Not necessary unless you're using Composer to design dialogs. The question specifies you're using the **Bot Framework SDK** directly. * **Register the bot with the Azure Bot Service**: Not required for **local** testing. This is needed for cloud deployment.
195
**Question Section** You are designing a conversational interface for an app that will be used to make vacation requests. The interface must gather the following data: * The start date of a vacation * The end date of a vacation * The amount of required paid time off The solution must **minimize dialog complexity**. **Which type of dialog should you use?** Options: A. adaptive B. skill C. waterfall D. component ---
**Answer Section** **Correct Answer:** * **C. waterfall** **Explanation:** * **Waterfall dialogs** are best suited for **linear, step-by-step conversations**, like collecting structured data inputs (start date, end date, PTO amount). * Each step prompts for one piece of information and passes control to the next step — which **minimizes complexity** for straightforward tasks. **Why other options are incorrect:** * **A. Adaptive**: More flexible and dynamic, but also **more complex**, better for scenarios with interruptions and conditional branching. * **B. Skill**: Refers to a reusable bot service, not a dialog type. Skills help when you want to modularize functionality across bots, not simplify dialog steps. * **D. Component**: A reusable dialog unit, which could contain a **waterfall**, but adds structure — not necessarily simplifying the dialog itself for this simple case.
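For context, a minimal sketch of such a waterfall using the Bot Framework SDK for .NET. The prompt IDs, wording, and the surrounding `ComponentDialog` are illustrative choices, not part of the question; the point is that each step collects exactly one value and then hands off to the next step.

```
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class VacationRequestDialog : ComponentDialog
{
    public VacationRequestDialog() : base(nameof(VacationRequestDialog))
    {
        AddDialog(new DateTimePrompt("startDatePrompt"));
        AddDialog(new DateTimePrompt("endDatePrompt"));
        AddDialog(new NumberPrompt<int>("ptoPrompt"));

        // Each step asks for exactly one value, then passes control to the next step.
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            async (step, ct) => await step.PromptAsync("startDatePrompt",
                new PromptOptions { Prompt = MessageFactory.Text("When does your vacation start?") }, ct),
            async (step, ct) =>
            {
                step.Values["start"] = step.Result;
                return await step.PromptAsync("endDatePrompt",
                    new PromptOptions { Prompt = MessageFactory.Text("When does it end?") }, ct);
            },
            async (step, ct) =>
            {
                step.Values["end"] = step.Result;
                return await step.PromptAsync("ptoPrompt",
                    new PromptOptions { Prompt = MessageFactory.Text("How many hours of paid time off do you need?") }, ct);
            },
            async (step, ct) =>
            {
                await step.Context.SendActivityAsync("Thanks, your vacation request has been recorded.", cancellationToken: ct);
                return await step.EndDialogAsync(cancellationToken: ct);
            }
        }));

        InitialDialogId = nameof(WaterfallDialog);
    }
}
```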
196
**Question Section** You create five bots by using **Microsoft Bot Framework Composer**. You need to make a **single bot** available to users that **combines the bots**. The solution must support **dynamic routing** to the bots based on **user input**. **Which three actions should you perform?** Each correct selection is worth one point. Options: A. Create a composer extension B. Change the Recognizer/Dispatch type C. Create an Orchestrator model D. Enable WebSockets E. Create a custom recognizer JSON file F. Install the Orchestrator package ---
**Answer Section** **Correct Answers:** * **B. Change the Recognizer/Dispatch type** * **C. Create an Orchestrator model** * **F. Install the Orchestrator package** **Explanation:** To dynamically route user input to the appropriate bot (or sub-dialog) in Composer, you use **Orchestrator**, which is a powerful recognizer for **dispatching intents across multiple language models and skills**. * **B. Change the Recognizer/Dispatch type**: You must change the recognizer type in Composer to **Orchestrator** to allow intent dispatching across multiple bots or dialogs. * **C. Create an Orchestrator model**: This model is used to train routing logic across different intents and bots. * **F. Install the Orchestrator package**: Composer requires this package to support Orchestrator-based recognition. **Why other options are incorrect:** * **A. Create a composer extension**: Not required for routing logic. Extensions are used for adding new UI or behaviors in Composer. * **D. Enable WebSockets**: Not relevant to bot routing; it's used for communication protocol enhancement. * **E. Create a custom recognizer JSON file**: This is more relevant for older LUIS dispatch models or specialized configurations — **not necessary** with Orchestrator in Composer.
197
**Question Section** You have a chatbot. You need to test the bot by using the **Bot Framework Emulator**. The solution must ensure that you are **prompted for credentials when you sign in to the bot**. **Which three settings should you configure?** Select the appropriate settings in the Emulator Settings pane. [source](https://www.examtopics.com/discussions/microsoft/view/112152-exam-ai-102-topic-5-question-48-discussion/) ---
**Answer Section** **Correct Settings:** 1. ✅ **Enter the local path to ngrok** 2. ✅ **Enable "Run ngrok when the Emulator starts up"** 3. ✅ **Enable "Use version 1.0 authentication token"** --- **Explanation:** To test authentication flows **locally**, including being **prompted for credentials**, you must: * **Provide the path to ngrok**: The emulator uses this tunneling tool to expose your localhost bot endpoint over HTTPS, which is required for OAuth flows. * **Enable ngrok to run at startup**: Ensures the tunnel is automatically available when launching the Emulator — critical for seamless testing. * **Use version 1.0 authentication token**: This option ensures the Emulator simulates how the Bot Framework handles auth tokens in production. 🔗 Source: [Microsoft Docs – Using authentication tokens with the Emulator](https://learn.microsoft.com/en-us/azure/bot-service/bot-service-debug-emulator?view=azure-bot-service-4.0&tabs=csharp#using-authentication-tokens)
198
**Question Section** **DRAG DROP –** You have a chatbot that uses a **QnA Maker application**. You enable **active learning** for the knowledge base used by the QnA Maker application. You need to **integrate user input into the model**. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **Actions:** * Add a task to the Azure resource. * Approve and reject suggestions. * Publish the knowledge base. * Modify the automation task logic app to run an Azure Resource Manager template that creates the Azure Cognitive Services resource. * For the knowledge base, select Show active learning suggestions. * Save and train the knowledge base. * Select the properties of the Azure Cognitive Services resource. ---
**Answer Section** **Correct Sequence:** 1. **For the knowledge base, select Show active learning suggestions.** 2. **Approve and reject suggestions.** 3. **Save and train the knowledge base.** 4. **Publish the knowledge base.** --- **Explanation:** To integrate real user feedback into the QnA Maker knowledge base using **active learning**, you follow this process: 1. **Show active learning suggestions**: This displays user-submitted phrasing that QnA Maker identifies as potentially needing refinement. 2. **Approve and reject suggestions**: You manually vet which suggestions should be added to improve the KB. 3. **Save and train the knowledge base**: This updates the model with your approved changes. 4. **Publish the knowledge base**: Makes the trained updates live so your bot can use them. Other options like modifying Azure resources or automation logic are not relevant to this specific **active learning + model update workflow**.
199
**Question Section** You need to **enable speech capabilities** for a chatbot. **Which three actions should you perform?** Each correct selection is worth one point. Options: A. Enable WebSockets for the chatbot app B. Create a Speech service C. Register a Direct Line Speech channel D. Register a Cortana channel E. Enable CORS for the chatbot app F. Create a Language Understanding service ---
**Answer Section** **Correct Answers:** 1. ✅ **A. Enable WebSockets for the chatbot app** 2. ✅ **B. Create a Speech service** 3. ✅ **C. Register a Direct Line Speech channel** --- **Explanation:** To enable **speech capabilities** in a chatbot: * **A. Enable WebSockets for the chatbot app**: Required for real-time, low-latency communication between the speech-enabled client and the bot. * **B. Create a Speech service**: This service provides speech-to-text and text-to-speech capabilities needed for spoken interactions. * **C. Register a Direct Line Speech channel**: This special channel connects the Speech service with your bot, enabling speech-based input/output. **Why other options are incorrect:** * **D. Register a Cortana channel**: Deprecated — Cortana integration is no longer supported for new bots. * **E. Enable CORS for the chatbot app**: Not necessary for enabling speech. It’s required when dealing with browser-based cross-origin requests. * **F. Create a Language Understanding service**: Useful for intent recognition, but **not mandatory** for enabling **speech** capabilities. You can use speech without LUIS.
200
**Question Section** You use the **Microsoft Bot Framework Composer** to build a chatbot that enables users to purchase items. You need to ensure that users can **cancel in-progress transactions**. The solution must **minimize development effort**. **What should you add to the bot?** Options: A. a language generator B. a custom event C. a dialog trigger D. a conversation activity ---
**Answer Section** **Correct Answer:** * ✅ **C. a dialog trigger** **Explanation:** * **Dialog triggers** in Composer allow you to **interrupt the current dialog** and respond to specific user inputs (like "cancel"). * You can easily configure a trigger for utterances like "cancel", "nevermind", or "stop" and then **terminate or roll back the active dialog**, making it a **low-effort, high-impact solution**. **Why other options are incorrect:** * **A. a language generator**: Controls output responses; not used for controlling dialog flow or cancellations. * **B. a custom event**: Useful for handling external signals, but adds more complexity than needed for a simple cancel flow. * **D. a conversation activity**: Refers to a broader concept in the Bot Framework SDK; not specific or easy to use for this cancellation pattern in Composer.
201
**Question Section** You create a bot by using the **Microsoft Bot Framework SDK**. You need to configure the bot to **respond to events** by using **custom text responses**. **What should you use?** Options: A. an adaptive card B. an activity handler C. a dialog D. a skill ---
**Answer Section** **Correct Answer:** * ✅ **B. an activity handler** **Explanation:** * An **activity handler** is a class in the Bot Framework SDK that allows you to **handle incoming activities** (e.g., messages, events, conversation updates). * You can override methods like `OnEventActivityAsync` to respond to **event-type activities** with **custom text responses** or other actions. **Why other options are incorrect:** * **A. an adaptive card**: Used to **display rich content** (text, buttons, inputs), not to handle events or control logic. * **C. a dialog**: Manages **conversation flow** but is not directly triggered by events — dialogs are typically invoked after an activity is processed. * **D. a skill**: A bot-to-bot component used to extend functionality, not intended for handling events directly within a bot.
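A minimal sketch of overriding the event handler in an `ActivityHandler`-based bot; the event name and reply text are invented for the example.

```
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class EventBot : ActivityHandler
{
    // Called for activities of type "event" rather than "message".
    protected override async Task OnEventActivityAsync(
        ITurnContext<IEventActivity> turnContext,
        CancellationToken cancellationToken)
    {
        // Respond with custom text based on the event name.
        if (turnContext.Activity.Name == "orderShipped")
        {
            await turnContext.SendActivityAsync(
                MessageFactory.Text("Good news: your order is on its way."),
                cancellationToken);
        }
        else
        {
            await turnContext.SendActivityAsync(
                MessageFactory.Text($"Received event: {turnContext.Activity.Name}"),
                cancellationToken);
        }
    }
}
```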
202
Question Section Scenario: You have a chatbot that uses question answering in Azure Cognitive Service for Language. Users report that the chatbot's responses lack formality when answering spurious (casual/chitchat) questions. You need to ensure that the chatbot provides formal responses to spurious questions. --- **Solution: From Language Studio, you change the chitchat source to qna\_chitchat\_friendly.tsv, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No
**Answer Section** **Correct Answer:** * **B. No** **Explanation:** * The **`qna_chitchat_friendly.tsv`** file contains **casual and friendly responses**, which are **less formal**, not more. * To provide **formal responses**, you should instead use the **`qna_chitchat_professional.tsv`** file as your chitchat source. * Therefore, switching to the **friendly** version does **not meet** the goal of increasing formality. **Correct action would be:** Use **`qna_chitchat_professional.tsv`**, then retrain and republish.
203
Question Section Scenario: You have a chatbot that uses question answering in Azure Cognitive Service for Language. Users report that the chatbot's responses lack formality when answering spurious questions. You need to ensure that the chatbot provides formal responses to spurious questions. --- **Solution: From Language Studio, you modify the question and answer pairs for the custom intents, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
✅ **Correct Answer: B. No** --- **Why?** * **Spurious questions** refer to **chitchat**, small talk, or unstructured input (e.g., “Tell me a joke”, “Are you real?”). * These are not handled via **custom intents or user-authored QnA pairs** unless you’ve explicitly built them that way — which the question does not state. * **Custom intents** are used in **Conversational Language Understanding (CLU)** and are unrelated to the **prebuilt chitchat QnA datasets**. * The **formality** of chitchat responses is managed through the **import of curated chitchat datasets**, such as: * `qna_chitchat_friendly.tsv` (casual tone) * `qna_chitchat_professional.tsv` (formal tone) > 🔗 Microsoft Documentation: > [https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/question-answering/how-to/chit-chat](https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/question-answering/how-to/chit-chat) So, modifying question-answer pairs for **custom intents** does **not** control the tone or formality of responses to **chitchat inputs**. --- 🛠️ What *would* meet the goal? Using the **`qna_chitchat_professional.tsv`** file and importing it into your knowledge base — **that** provides formal responses to spurious questions. --- ✅ Final Answer: **B. No** **Modifying custom intent Q\&A pairs doesn’t control chitchat tone — use the correct chitchat dataset instead.**
204
Question Section Scenario: You have a chatbot that uses question answering in Azure Cognitive Service for Language. Users report that the chatbot’s responses lack formality when answering spurious questions. You need to ensure that the chatbot provides formal responses to those questions. --- **Solution: From Language Studio, you change the chitchat source to qna\_chitchat\_professional.tsv, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
**Answer Section** **Correct Answer:** ✅ **A. Yes** **Explanation:** * The **qna\_chitchat\_professional.tsv** file is one of the predefined chitchat datasets provided by Microsoft. * It is **specifically designed** to respond to common/spurious user inputs (like "Who are you?", "Tell me a joke", etc.) in a **formal and professional tone**. * Importing this file into your QnA knowledge base ensures your bot uses **formal responses** to those types of questions. * After retraining and republishing, the updated tone is available to the bot. ✅ **Therefore, this solution meets the goal.**
205
**Question Section** You create five bots by using **Microsoft Bot Framework Composer**. You need to make a **single bot available to users** that **combines the bots**. The solution must support **dynamic routing** to the bots based on **user input**. **Which three actions should you perform?** Each correct selection is worth one point. Options: A. Create an Orchestrator model B. Change the Recognizer/Dispatch type C. Create a composer extension D. Enable WebSockets E. Create a custom recognizer JSON file F. Install the Orchestrator package ---
**Answer Section** **Correct Answers:** * ✅ **A. Create an Orchestrator model** * ✅ **B. Change the Recognizer/Dispatch type** * ✅ **F. Install the Orchestrator package** --- **Explanation:** To enable **dynamic routing** between multiple bots (or skills) in **Bot Framework Composer**, Microsoft recommends using **Orchestrator**, a language understanding routing engine. Here's why the selected answers are correct: * **A. Create an Orchestrator model**: This model allows intent recognition across multiple bots and dispatches control to the right one. * **B. Change the Recognizer/Dispatch type**: You must change the recognizer for the root bot/dialog to use **Orchestrator** for routing. * **F. Install the Orchestrator package**: This package enables Orchestrator functionality in Composer. --- **Why the other options are incorrect:** * **C. Create a composer extension**: Extensions are for UI or behavior customization in Composer — not required for routing between bots. * **D. Enable WebSockets**: WebSockets are used for real-time communication but have nothing to do with dialog routing. * **E. Create a custom recognizer JSON file**: This is used for custom LUIS implementations, not needed with Orchestrator, which Composer supports natively.
206
Question Section Scenario: You are building a chatbot that uses question answering in Azure Cognitive Service for Language. You have a PDF named Doc1.pdf containing a product catalogue and a price list. You upload Doc1.pdf and train the model. During testing: * The chatbot correctly responds to: *"What is the price of \[product]?"* * The chatbot fails to respond to: *"How much does \[product] cost?"* You need to ensure that the chatbot responds correctly to both questions. --- **Solution: From Language Studio, you add alternative phrasing to the question and answer pair, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
**Answer Section** **Correct Answer:** ✅ **A. Yes** **Explanation:** * In **Azure Question Answering**, each QnA pair can have **multiple alternative phrasings** for a question. * If a valid question like *"How much does it cost?"* isn't matching the correct answer, you can manually **add it as an alternative phrasing** to an existing QnA pair. * Once you do this and **retrain and republish**, the model will recognize and respond correctly to both phrasings. This is a **standard and supported approach** to improve answer matching for **semantically similar questions**. --- ✅ Final Answer: **A. Yes** **Adding alternative phrasings and retraining the model ensures broader question coverage and meets the goal.**
207
Question Section Scenario: You are building a chatbot that uses question answering in Azure Cognitive Service for Language. You have a PDF named Doc1.pdf that contains a product catalogue and a price list. You upload Doc1.pdf and train the model. During testing: * The chatbot responds correctly to: *"What is the price of \[product]?"* * The chatbot fails to respond to: *"How much does \[product] cost?"* You need to ensure that the chatbot responds correctly to both questions. --- **Solution: From Language Studio, you enable chit-chat, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
**Answer Section** **Correct Answer:** ❌ **B. No** **Explanation:** * **Chit-chat** is a predefined set of casual conversation Q\&A pairs (e.g., "How are you?", "Tell me a joke"). * It is **not related to domain-specific questions** like product prices. * Enabling chit-chat **does not improve matching for variations of factual/product-specific queries** from a PDF document. * To fix this issue, you should instead **add alternate phrasings** to the relevant QnA pair manually (e.g., include "How much does \[product] cost?"). --- ✅ Final Answer: **B. No** **Enabling chit-chat will not help with semantically similar domain-specific questions. The solution does not meet the goal.**
208
Question Section Scenario: You are building a chatbot that uses question answering in Azure Cognitive Service for Language. You have a PDF named Doc1.pdf that contains a product catalogue and a price list. You upload Doc1.pdf and train the model. During testing: * The chatbot responds correctly to: *"What is the price of \[product]?"* * The chatbot fails to respond to: *"How much does \[product] cost?"* You need to ensure that the chatbot responds correctly to both questions. --- **Solution: From Language Studio, you create an entity for price, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
**Answer Section** **Correct Answer:** ❌ **B. No** **Explanation:** * **Question Answering in Azure Cognitive Service for Language** (formerly QnA Maker) **does not use entities** the way intent-based models like **LUIS** or **Conversational Language Understanding** do. * Creating an entity for “price” has **no effect** on how questions are matched to answers in a **document-based QnA knowledge base**. * To fix the issue, you should **add alternate question phrasings** (e.g., "How much does it cost?") to the appropriate QnA pair. --- ✅ Final Answer: **B. No** **Creating an entity does not help improve response matching in Question Answering. The solution does not meet the goal.**
209
Question Section Scenario: You are building a chatbot that uses question answering in Azure Cognitive Service for Language. You have a PDF named Doc1.pdf that contains a product catalogue and a price list. You upload Doc1.pdf and train the model. During testing: * The chatbot responds correctly to: *"What is the price of \[product]?"* * The chatbot fails to respond to: *"How much does \[product] cost?"* You need to ensure that the chatbot responds correctly to both questions. --- **Solution: From Language Studio, you create an entity for cost, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
**Answer Section** **Correct Answer:** ❌ **B. No** **Explanation:** * **Azure Question Answering** does **not use entities** (like "cost" or "price") for matching user queries to answers. * It works by matching the **natural language of the question** to the QnA pairs extracted from your document or manually entered. * Creating an entity has **no impact** on the chatbot's ability to recognize that *"How much does it cost?"* means the same as *"What is the price of?"* * The correct solution would be to **add alternative phrasings** to the relevant QnA pair to improve coverage. --- ✅ Final Answer: **B. No** **Entities are not used in document-based Question Answering; this solution does not meet the goal.**
210
**Question Section**

You have a **Conversational Language Understanding** model. You export the model as a **JSON file**. The following is a sample of the file:

```
{
  "text": "average amount of rain by month in Chicago last year",
  "intent": "Weather.CheckWeatherValue",
  "entities": [
    {
      "entity": "Weather.WeatherRange",
      "startPos": 0,
      "endPos": 6,
      "children": []
    },
    {
      "entity": "Weather.WeatherCondition",
      "startPos": 18,
      "endPos": 21,
      "children": []
    },
    {
      "entity": "Weather.Historic",
      "startPos": 23,
      "endPos": 30,
      "children": []
    }
  ]
}
```

**What represents the `Weather.Historic` entity in the sample utterance?**

Options:

A. last year
B. by month
C. amount of
D. average

---
**Answer Section**

**Correct Answer:** ✅ **B. by month**

**Explanation:**

* The `Weather.Historic` entity is labeled with `startPos: 23` and `endPos: 30` (inclusive character positions).
* In the utterance "average amount of rain by month in Chicago last year", characters 23 through 30 are **"by month"**.
* For comparison, `Weather.WeatherRange` (0–6) maps to "average" and `Weather.WeatherCondition` (18–21) maps to "rain", which confirms the position-based reading.
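A quick way to sanity-check the span in plain C# (nothing exam-specific): the inclusive `startPos`/`endPos` bounds cover an 8-character substring.

```
using System;

class EntitySpanCheck
{
    static void Main()
    {
        string text = "average amount of rain by month in Chicago last year";
        int startPos = 23, endPos = 30;   // Weather.Historic, inclusive bounds
        string span = text.Substring(startPos, endPos - startPos + 1);
        Console.WriteLine(span);          // prints: by month
    }
}
```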
211
**Question Section** You are building a chatbot by using **Microsoft Bot Framework Composer**. You need to configure the chatbot to **present a list of available options**, and the solution must **ensure that an image is provided for each option**. **Which two features should you use?** Each correct selection is worth one point. Options: A. an Azure function B. an adaptive card C. an entity D. a dialog E. an utterance ---
**Answer Section** **Correct Answers:** * ✅ **B. an adaptive card** * ✅ **D. a dialog** --- **Explanation:** * **B. Adaptive card**: Adaptive Cards allow you to create **rich UI elements**, including lists, buttons, and **images**. This is the best way to visually present **options with images** in a chatbot. * **D. Dialog**: A **dialog** in Bot Framework Composer is used to structure **conversation flows**. You would use a dialog to handle the logic that presents the options and responds to user selection. --- **Why other options are incorrect:** * **A. Azure function**: Useful for processing logic or fetching data, but **not responsible for presenting UI** or rendering images in responses. * **C. Entity**: Entities help extract values from user input — they **don’t control visual output**. * **E. Utterance**: An utterance is a sample of what the user says. It’s used for **training recognition**, not for presenting UI or rendering options. --- ✅ Final Answer: **B. an adaptive card** and **D. a dialog**
212
**Question Section** You have a chatbot that was built by using **Microsoft Bot Framework** and deployed to **Azure**. You need to configure the bot to support **voice interactions**. The solution must support **multiple client apps**. **Which type of channel should you use?** Options: A. Microsoft Teams B. Direct Line Speech C. Cortana ---
**Answer Section** **Correct Answer:** ✅ **B. Direct Line Speech** --- **Explanation:** * **B. Direct Line Speech** is designed specifically to enable **voice interaction** between users and bots. It integrates with the **Azure Speech service** to provide **speech-to-text and text-to-speech**, allowing users to talk to bots using **natural language** across **multiple client platforms** (desktop, mobile, etc.). --- **Why the other options are incorrect:** * **A. Microsoft Teams**: Supports text-based chat and rich cards, but is not optimized or widely used for **voice-first interactions** with bots. * **C. Cortana**: Deprecated. Microsoft has discontinued support for Cortana as a Bot Framework channel. --- ✅ Final Answer: **B. Direct Line Speech** This channel enables voice interactions across multiple clients and is fully supported with the Bot Framework.
213
**Question Section** You are building a bot by using **Microsoft Bot Framework**. You need to configure the bot to **respond to spoken requests**. The solution must **minimize development effort**. **What should you do?** Options: A. Deploy the bot to Azure and register the bot with a Direct Line Speech channel B. Integrate the bot with Cortana by using the Bot Framework SDK C. Create an Azure function that will call the Speech service and connect the bot to the function D. Deploy the bot to Azure and register the bot with a Microsoft Teams channel ---
**Answer Section** **Correct Answer:** ✅ **A. Deploy the bot to Azure and register the bot with a Direct Line Speech channel** --- **Explanation:** * **A. Direct Line Speech** is the **recommended channel** for enabling voice interactions with minimal development effort. It **automatically integrates with the Azure Speech service**, handling **speech-to-text** and **text-to-speech** conversion, and directly connects to your bot. --- **Why other options are incorrect:** * **B. Cortana**: Deprecated and no longer supported as a Bot Framework channel. * **C. Azure Function with Speech service**: Possible, but adds **unnecessary complexity**. It requires manually managing speech input/output and routing to the bot, which goes against the goal to *minimize effort*. * **D. Microsoft Teams**: Primarily supports **text-based** conversations and not optimized for native **voice interactions**. --- ✅ Final Answer: **A. Deploy the bot to Azure and register the bot with a Direct Line Speech channel** This provides built-in speech capabilities with minimal setup effort.
214
Question Section Scenario: You have a chatbot that uses question answering in Azure Cognitive Service for Language. Users report that the chatbot’s responses lack formality when answering spurious questions (i.e., small talk, off-topic input). You need to ensure that the chatbot provides formal responses to spurious questions. --- **Solution: From Language Studio, you remove all the chitchat question and answer pairs, and then retrain and republish the model.** Does this meet the goal? Options: A. Yes B. No ---**
**Answer Section** **Correct Answer:** ❌ **B. No** --- **Explanation:** * **Removing** all chitchat Q\&A pairs will prevent the bot from answering spurious (casual) questions altogether. * This may result in **"I don’t understand"** type fallback messages instead of formal answers. * The goal is to **ensure formal responses**, not eliminate them. * The correct approach would be to **replace the current chitchat source** with one that uses a formal tone — e.g., **`qna_chitchat_professional.tsv`** — or **manually revise** the Q\&A pairs to formal language. --- ✅ Final Answer: **B. No** **Deleting chitchat removes responses instead of improving their tone. It does not meet the goal.**
215
You are building a chatbot. You need to use the **Content Moderator** service to identify messages that contain **sexually explicit language**. Which section in the response from the service will contain the **category score**, and which **category** will be assigned to the message? To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. ---
**Answer Section** * **Section:** ✅ **Classification** * **Category:** ✅ **1** --- **Explanation:** * The **Classification** section of the Content Moderator API response contains scores for detecting **adult**, **racy**, and **offensive content**. * **Category 1** refers to **sexually explicit (adult) content**, which is the focus in this question. * Categories: * **1 = Sexually explicit content** * **2 = Sexually suggestive content** * **3 = Offensive language (general profanity)** --- ✅ Final Answers: * **Section:** Classification * **Category:** 1
216
**Question Section** You are building a chatbot for a **travel agent**. The bot will ask users for a **destination** and must **repeat the question** until a **valid input** is received, or the **user closes the conversation**. **Which type of dialog should you use?** Options: A. prompt B. input C. adaptive D. QnA Maker ---
**Answer Section** **Correct Answer:** ✅ **A. prompt** --- **Explanation:** * A **prompt** is a built-in dialog type in the **Bot Framework SDK** (and also used in **Composer**) that asks the user for input, **validates** the response, and **repeats** the question automatically if the input is invalid. * Prompts are specifically designed to **loop until valid input is received**, making them perfect for tasks like requesting a destination. **Why other options are incorrect:** * **B. input**: Not a defined dialog type in the Bot Framework SDK. May be confused with "Input.Text" in Composer, but that doesn't handle validation loops like prompts. * **C. adaptive**: Refers to **Adaptive Dialogs**, which support complex flows but require more configuration and are not the minimal-effort solution for this scenario. * **D. QnA Maker**: Used for **knowledge-based question answering**, not for structured input collection or validation. --- ✅ Final Answer: **A. prompt** This is the most appropriate dialog type for repeating questions until valid input is received.
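For context, a minimal sketch of a prompt that keeps re-asking until the input passes validation, using the Bot Framework SDK for .NET. The destination list, validator, and prompt wording are illustrative; the key pieces are the validator and the `RetryPrompt`, which the SDK re-sends automatically whenever validation fails.

```
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class DestinationDialog : ComponentDialog
{
    private static readonly string[] KnownDestinations = { "Paris", "Tokyo", "New York" };

    public DestinationDialog() : base(nameof(DestinationDialog))
    {
        // The validator decides whether the answer counts as a valid destination.
        AddDialog(new TextPrompt("destinationPrompt", ValidateDestinationAsync));

        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            async (step, ct) => await step.PromptAsync("destinationPrompt", new PromptOptions
            {
                Prompt = MessageFactory.Text("Where would you like to travel?"),
                // Sent automatically each time validation fails, until the user gives valid input.
                RetryPrompt = MessageFactory.Text("Sorry, I don't know that destination. Where would you like to travel?")
            }, ct),
            async (step, ct) =>
            {
                await step.Context.SendActivityAsync($"Great, let's look at trips to {step.Result}.", cancellationToken: ct);
                return await step.EndDialogAsync(cancellationToken: ct);
            }
        }));

        InitialDialogId = nameof(WaterfallDialog);
    }

    private static Task<bool> ValidateDestinationAsync(PromptValidatorContext<string> prompt, CancellationToken ct)
        => Task.FromResult(prompt.Recognized.Succeeded &&
                           KnownDestinations.Contains(prompt.Recognized.Value.Trim(), System.StringComparer.OrdinalIgnoreCase));
}
```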
217
**Question Section** You are building a chatbot. You need to configure the chatbot to **query a knowledge base**. **Which dialog class should you use?** Options: A. AdaptiveDialog B. QnAMakerDialog C. ComponentDialog D. SkillDialog ---
**Answer Section** **Correct Answer:** ✅ **B. QnAMakerDialog** --- **Explanation:** * **QnAMakerDialog** is a **built-in dialog class** in the **Microsoft Bot Framework SDK** designed to interact with a **QnA Maker knowledge base** (or **Azure Question Answering**). * It allows your bot to **query a knowledge base** and return the **most relevant answer** to the user input with minimal setup. **Why other options are incorrect:** * **A. AdaptiveDialog**: Useful for building dynamic, rule-driven conversations, but not specifically designed for querying QnA knowledge bases. * **C. ComponentDialog**: Used to group and manage multiple dialogs as a reusable component — not specialized for QnA scenarios. * **D. SkillDialog**: Used to invoke skills (other bots), not to directly query a knowledge base. --- ✅ Final Answer: **B. QnAMakerDialog** This class is specifically designed to enable knowledge base queries in a bot.
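For context, a minimal sketch of wiring a `QnAMakerDialog` into a bot with the Bot Framework SDK for .NET. The knowledge base ID, endpoint key, and host name are placeholders that would come from the resource's Keys and Endpoint values; constructor arguments are shown positionally.

```
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.AI.QnA.Dialogs;

public class SupportDialog : ComponentDialog
{
    public SupportDialog() : base(nameof(SupportDialog))
    {
        // Arguments: knowledge base ID, endpoint key, QnA host name.
        var qnaDialog = new QnAMakerDialog(
            "<knowledge-base-id>",
            "<endpoint-key>",
            "https://<your-resource>.azurewebsites.net/qnamaker");

        AddDialog(qnaDialog);

        // Start with the QnA dialog whenever this component begins.
        InitialDialogId = qnaDialog.Id;
    }
}
```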
218
**Question Section**

You have a **chatbot** built using the **Microsoft Bot Framework SDK**. You need to ensure that the **conversation resets** if the **user fails to respond for 10 minutes**.

You write the following code:

```
await turn_context._______________
await self.conversation_state._________(turn_context)
```

**How should you complete the code?**

Options:

A. `send_activity("Timeout reached")`
   `delete(turn_context)`

B. `end_dialog()`
   `clear_state(turn_context)`

C. `send_activity("Session expired due to inactivity.")`
   `delete_state(turn_context)`

D. `send_activity("Goodbye")`
   `save_changes(turn_context)`

---
**Answer Section** **Correct Answer:** ✅ **C.** `send_activity("Session expired due to inactivity.")` `delete_state(turn_context)` --- **Explanation:** * To **reset the conversation** after an inactivity timeout (like 10 minutes), you typically **inform the user** and then **clear the conversation state**. * `send_activity(...)` sends a message back to the user, such as "Session expired due to inactivity." * `delete_state(turn_context)` clears any stored conversation or user state associated with the current context, **effectively resetting the session**. **Why other options are incorrect:** * **A.** `delete(...)` is not a valid method on `conversation_state`. * **B.** `end_dialog()` is used within dialogs, not directly on `turn_context`. * **D.** `save_changes()` persists the current state, which is the opposite of resetting it. ---
219
**Question Section** You develop a **Conversational Language Understanding** model by using **Language Studio**. During testing, users receive **incorrect responses** to requests that **do NOT relate to the capabilities of the model**. You need to ensure that the model can **identify spurious requests**. **What should you do?** Options: A. Enable active learning B. Add entities C. Add examples to the None intent D. Add examples to the custom intents ---
**Answer Section** **Correct Answer:** ✅ **C. Add examples to the None intent** --- **Explanation:** * The **None intent** is used in Conversational Language Understanding (CLU) to handle **irrelevant or out-of-scope user input**. * If the model doesn't have **training examples** in the **None intent**, it may incorrectly classify unrelated messages as one of the defined custom intents. * Adding representative examples of **spurious or irrelevant queries** to the **None intent** trains the model to **correctly reject or ignore** them. --- **Why other options are incorrect:** * **A. Enable active learning**: Helps refine intent recognition based on real usage, but **does not directly solve** the issue of misclassifying spurious input. * **B. Add entities**: Entities help extract information from **recognized intents**, not in detecting **irrelevant** ones. * **D. Add examples to the custom intents**: Helps improve recognition **within scope**, but will not help the model **reject unrelated input**. --- ✅ Final Answer: **C. Add examples to the None intent** This is the standard way to teach the model how to recognize and discard spurious queries.
220
**Question Section** You have a **Speech resource** and a **bot built using Microsoft Bot Framework Composer**. You need to **add support for speech-based channels** to the bot. **Which three actions should you perform?** Each correct selection is worth one point. Options: A. Configure the language and voice settings for the Speech resource B. Add the endpoint and key of the Speech resource to the bot C. Add language understanding to dialogs D. Add Orchestrator to the bot E. Add Speech to the bot responses F. Remove the setSpeak configuration ---
**Answer Section** **Correct Answers:** * ✅ **A. Configure the language and voice settings for the Speech resource** * ✅ **B. Add the endpoint and key of the Speech resource to the bot** * ✅ **E. Add Speech to the bot responses** --- **Explanation:** To enable **speech capabilities** in a Composer bot: * **A. Configure the language and voice settings for the Speech resource**: This defines how the speech service will synthesize responses (e.g., language and voice profile). * **B. Add the endpoint and key of the Speech resource to the bot**: Necessary for your bot to authenticate and connect with the Speech service for speech recognition and synthesis. * **E. Add Speech to the bot responses**: Composer supports adding a `Speak` property to responses (e.g., `Speak = "Hello"`), which ensures the bot outputs **spoken content** in addition to text. --- **Why other options are incorrect:** * **C. Add language understanding to dialogs**: Useful for intent recognition, but **not required specifically for speech support**. * **D. Add Orchestrator to the bot**: Helps with intent routing across dialogs, but again, **not necessary for adding speech support**. * **F. Remove the setSpeak configuration**: This would disable the bot's ability to speak — the **opposite** of what’s required. --- ✅ Final Answers: * **A. Configure the language and voice settings for the Speech resource** * **B. Add the endpoint and key of the Speech resource to the bot** * **E. Add Speech to the bot responses**
221
**Question Section** **DRAG DROP –** You build a bot by using the **Microsoft Bot Framework SDK**. You need to **test the bot interactively on a local machine**. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. **Actions:** * Open the Bot Framework Composer * Connect to the bot endpoint * Register the bot with the Azure Bot Service * Build and run the bot * Open the Bot Framework Emulator **Answer Area:** 1. \[ ? ] 2. \[ ? ] 3. \[ ? ] ---
**Answer Section** **Correct Sequence:** 1. ✅ **Build and run the bot** 2. ✅ **Open the Bot Framework Emulator** 3. ✅ **Connect to the bot endpoint** --- **Explanation:** To test a **local bot** built using the **Bot Framework SDK**, follow these steps: 1. **Build and run the bot** – This starts the bot’s local server (usually at `http://localhost:3978/api/messages`). 2. **Open the Bot Framework Emulator** – This tool lets you interact with the bot locally. 3. **Connect to the bot endpoint** – In the Emulator, you provide the local bot endpoint to begin the conversation. --- **Why the other options are not used:** * **Open the Bot Framework Composer**: Not needed if you're building/testing the bot with **Bot Framework SDK**, not Composer. * **Register the bot with the Azure Bot Service**: Only required for **cloud deployment**, **not for local testing**. --- ✅ Final Answer: 1. **Build and run the bot** 2. **Open the Bot Framework Emulator** 3. **Connect to the bot endpoint**
222
**Question Section** You have a bot that was built by using the **Microsoft Bot Framework Composer**, as shown in the exhibit: [ExamTopics Discussion Link](https://www.examtopics.com/discussions/microsoft/view/112150-exam-ai-102-topic-5-question-46-discussion/) --- **Use the drop-down menus to select the answer choice that completes each statement based on the information presented.** **NOTE:** Each correct selection is worth one point. --- **Statements:** 1. If a user asks *"what is the weather like in New York"*, the bot will **\[answer choice]**. 2. The **GetWeather** dialog uses a **\[answer choice]** trigger. **Dropdown Options for Statement 1:** * Change to a different dialog * Identify New York as a city entity * Identify New York as a state entity * Respond with the weather in Seattle **Dropdown Options for Statement 2:** * Custom events * Dialog events * Language Understanding Intent recognized * QnA Intent recognized ---
**Answer Section** 1. ✅ **Identify New York as a city entity** 2. ✅ **Language Understanding Intent recognized** --- **Explanation:** * The **GetWeather** dialog defines a **city entity**, seen in the trigger phrases (e.g., `{city=Seattle}`), and uses `@ml city` which refers to a **machine-learned entity** named `city`. * Therefore, if the user says “what is the weather like in New York,” the bot will **identify “New York” as a city entity**. * The **trigger type** for the GetWeather dialog is based on recognizing the **#GetWeather** intent, as shown in the Composer configuration. * This confirms the trigger type is **Language Understanding Intent recognized**, not a QnA or custom event. --- ✅ Final Answers: 1. **Identify New York as a city entity** 2. **Language Understanding Intent recognized**
223
**Question Section** You are building a **flight booking bot** by using the **Microsoft Bot Framework SDK**. The bot will ask users for the **departure date**. The bot must **repeat the question** until a **valid date** is given, or the users **cancel the transaction**. **Which type of dialog should you use?** Options: A. prompt B. adaptive C. waterfall D. action ---
**Answer Section** **Correct Answer:** ✅ **A. prompt** --- **Explanation:** * A **prompt** (e.g., `DateTimePrompt`) is designed to **ask the user for input**, **validate** the response, and **repeat the question** if the input is invalid. * Prompts are the **best-fit solution** when you need to **collect a single, validated value** like a **date**, and you want built-in retry behavior. --- **Why other options are incorrect:** * **B. adaptive**: Refers to **adaptive dialogs**, which are more dynamic and rule-based. They can handle complex scenarios, but are **not needed** just to repeat a simple question. * **C. waterfall**: A **sequence of steps**, which can use prompts, but the prompt itself is the part responsible for retry logic. * **D. action**: Not a recognized dialog type in the Bot Framework SDK (this might refer to Composer actions, but not applicable here). --- ✅ Final Answer: **A. prompt** Use a prompt dialog (like `DateTimePrompt`) to gather and validate the departure date with retry support.
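As an illustrative sketch (not from the exam material), this is roughly how a `DateTimePrompt` with a retry prompt and validator could be registered inside a waterfall; the dialog ID, prompt text, and validator rule are assumptions:

```
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class BookingDialog : ComponentDialog
{
    private const string DeparturePromptId = "departureDatePrompt"; // illustrative ID

    public BookingDialog() : base(nameof(BookingDialog))
    {
        // The prompt automatically re-asks (using RetryPrompt) until the validator
        // returns true or the user cancels the dialog.
        AddDialog(new DateTimePrompt(DeparturePromptId, ValidateDepartureDateAsync));
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[] { AskDepartureDateAsync }));
        InitialDialogId = nameof(WaterfallDialog);
    }

    private static async Task<DialogTurnResult> AskDepartureDateAsync(
        WaterfallStepContext step, CancellationToken cancellationToken)
    {
        return await step.PromptAsync(DeparturePromptId, new PromptOptions
        {
            Prompt = MessageFactory.Text("When would you like to depart?"),
            RetryPrompt = MessageFactory.Text("That doesn't look like a valid date. Please try again."),
        }, cancellationToken);
    }

    private static Task<bool> ValidateDepartureDateAsync(
        PromptValidatorContext<IList<DateTimeResolution>> prompt, CancellationToken cancellationToken)
    {
        // Accept the turn only if the recognizer produced at least one date/time resolution.
        return Task.FromResult(prompt.Recognized.Succeeded && prompt.Recognized.Value.Count > 0);
    }
}
```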
224
**Question Section** You are building a chatbot by using the **Microsoft Bot Framework SDK**. You use an object named `UserProfile` to store user profile information and an object named `ConversationData` to store information related to a conversation. You create the following state accessors to store both objects in state: ``` var userStateAccessors = _userState.CreateProperty<UserProfile>(nameof(UserProfile)); var conversationStateAccessors = _conversationState.CreateProperty<ConversationData>(nameof(ConversationData)); ``` The **state storage mechanism is set to Memory Storage**. For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. 1. **The code will create and maintain the `UserProfile` object in the underlying storage layer.** 2. **The code will create and maintain the `ConversationData` object in the underlying storage layer.** 3. **The `UserProfile` and `ConversationData` objects will persist when the Bot Framework runtime terminates.** ---
**Answer Section** 1. **The code will create and maintain the `UserProfile` object in the underlying storage layer.** ✅ **Yes** Explanation: `UserProfile` is created and managed using `UserState`, which manages user-scoped state. It will be maintained in memory for the duration of the bot runtime. 2. **The code will create and maintain the `ConversationData` object in the underlying storage layer.** ✅ **Yes** Explanation: Similarly, `ConversationData` is tied to `ConversationState`, and it will also be stored and managed correctly in memory. 3. **The `UserProfile` and `ConversationData` objects will persist when the Bot Framework runtime terminates.** ❌ **No** Explanation: **Memory Storage** is **volatile** and does **not persist** beyond the runtime. Data will be lost when the bot restarts. --- ✅ Final Answers: * The code will create and maintain the `UserProfile` object in the underlying storage layer → **Yes** * The code will create and maintain the `ConversationData` object in the underlying storage layer → **Yes** * The `UserProfile` and `ConversationData` objects will persist when the Bot Framework runtime terminates → **No**
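As an illustrative, hedged sketch (class shapes, property names, and the turn handler are assumptions, not part of the exam code), the following shows how `MemoryStorage` backs both state objects and how `SaveChangesAsync` writes the turn's changes back:

```
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

public class UserProfile { public string Name { get; set; } }        // illustrative shape
public class ConversationData { public int TurnCount { get; set; } } // illustrative shape

public class StateSetupExample
{
    // Volatile storage: everything here is lost when the bot process stops.
    // Swapping in a persistent IStorage implementation (for example the Blob or
    // Cosmos DB storage packages) would keep the same state across restarts
    // without changing any of the accessor code below.
    private readonly IStorage _storage = new MemoryStorage();
    private readonly UserState _userState;
    private readonly ConversationState _conversationState;

    public StateSetupExample()
    {
        _userState = new UserState(_storage);
        _conversationState = new ConversationState(_storage);
    }

    public async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    {
        var profileAccessor = _userState.CreateProperty<UserProfile>(nameof(UserProfile));
        var conversationAccessor = _conversationState.CreateProperty<ConversationData>(nameof(ConversationData));

        UserProfile profile = await profileAccessor.GetAsync(turnContext, () => new UserProfile(), cancellationToken);
        ConversationData data = await conversationAccessor.GetAsync(turnContext, () => new ConversationData(), cancellationToken);

        data.TurnCount++;

        // Persist this turn's changes back to the underlying storage layer.
        await _userState.SaveChangesAsync(turnContext, false, cancellationToken);
        await _conversationState.SaveChangesAsync(turnContext, false, cancellationToken);
    }
}
```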
225
**Question Section** You are building a **chatbot** by using the **Microsoft Bot Framework SDK**. The bot will be used to **accept food orders from customers** and allow them to **customize each food item**. You need to configure the bot to **ask the user for additional input** based on the **type of item ordered**. The solution must **minimize development effort**. **Which two types of dialogs should you use?** Each correct selection is worth one point. Options: A. adaptive B. action C. waterfall D. prompt E. input ---
**Answer Section** **Correct Answers:** * ✅ **C. waterfall** * ✅ **D. prompt** --- **Explanation:** * **Waterfall dialog (C)**: Best suited for **multi-step, guided interactions** like customizing a food order (e.g., "What topping would you like? What size?"). Each step in the waterfall handles one part of the process. * **Prompt dialog (D)**: Used within waterfall steps to **collect user input** and validate it (e.g., `TextPrompt`, `ChoicePrompt`, `NumberPrompt`). Prompts handle **retries and input validation automatically**, which helps **minimize development effort**. --- **Why other options are incorrect:** * **A. Adaptive**: More complex and flexible, but requires additional setup and may not minimize development effort compared to waterfall + prompt. * **B. Action**: Not a defined dialog type in the **Bot Framework SDK** (may refer to Composer actions, but not relevant here). * **E. Input**: Not a recognized dialog type in the Bot Framework SDK. --- ✅ Final Answer: * **C. Waterfall** * **D. Prompt**
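For illustration only, a minimal sketch of a waterfall that branches its `ChoicePrompt` on the ordered item; the item names, prompt ID, and choice lists are assumptions:

```
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Choices;

public class OrderDialog : ComponentDialog
{
    public OrderDialog() : base(nameof(OrderDialog))
    {
        // ChoicePrompt handles re-prompting and validation of the selection automatically.
        AddDialog(new ChoicePrompt("customizationPrompt"));
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            AskCustomizationAsync,
            ConfirmAsync,
        }));
        InitialDialogId = nameof(WaterfallDialog);
    }

    private static async Task<DialogTurnResult> AskCustomizationAsync(
        WaterfallStepContext step, CancellationToken cancellationToken)
    {
        // The ordered item is passed in as dialog options; branch the follow-up question on it.
        var item = (string)step.Options;
        var choices = item == "pizza"
            ? new List<string> { "Margherita", "Pepperoni", "Veggie" }
            : new List<string> { "Small", "Medium", "Large" };

        return await step.PromptAsync("customizationPrompt", new PromptOptions
        {
            Prompt = MessageFactory.Text($"How would you like your {item}?"),
            RetryPrompt = MessageFactory.Text("Please pick one of the listed options."),
            Choices = ChoiceFactory.ToChoices(choices),
        }, cancellationToken);
    }

    private static async Task<DialogTurnResult> ConfirmAsync(
        WaterfallStepContext step, CancellationToken cancellationToken)
    {
        var selection = (FoundChoice)step.Result;
        await step.Context.SendActivityAsync(MessageFactory.Text($"Got it: {selection.Value}."), cancellationToken);
        return await step.EndDialogAsync(selection.Value, cancellationToken);
    }
}
```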
226
**Question Section** You have a **monitoring solution** that uses the **Azure AI Anomaly Detector service**. You provision a server named **Server1** that has **intermittent internet access**. You need to **deploy the Azure AI Anomaly Detector** to Server1. **Which four actions should you perform in sequence?** (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.) **Actions:** * Query the prediction endpoint on Server1. * From Server1, run the docker `push` command. * Install the Docker Engine on Server1. * Query the prediction endpoint of the Azure AI Anomaly Detector in Azure. * From Server1, run the docker `run` command. * From Server1, run the docker `pull` command. ---
**Answer Section** **Correct Sequence:** 1. ✅ **Install the Docker Engine on Server1** 2. ✅ **From Server1, run the docker `pull` command** 3. ✅ **From Server1, run the docker `run` command** 4. ✅ **Query the prediction endpoint on Server1** --- **Explanation:** * **Step 1:** Docker must be installed to support containerized services. * **Step 2:** Use `docker pull` to download the **Anomaly Detector container image** from the registry (requires internet access). * **Step 3:** Use `docker run` to start the Anomaly Detector as a container. * **Step 4:** Once the container is running, you can **query the local endpoint** to perform predictions. **Note:** The `push` command is for uploading images, which is **not needed** in this scenario. Querying Azure's hosted endpoint is also **irrelevant** since the goal is to run it **locally on Server1**. --- ✅ Final Answer: 1. Install the Docker Engine on Server1 2. From Server1, run the docker `pull` command 3. From Server1, run the docker `run` command 4. Query the prediction endpoint on Server1
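A hedged sketch of steps 2 to 4 as shell commands; the image path follows the published Cognitive Services container registry pattern, but the tag, resource limits, and exact local API route may differ by container version, and the endpoint/key placeholders come from your own Anomaly Detector resource:

```
# Pull the Anomaly Detector container image while Server1 has internet access.
docker pull mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest

# Run the container; Eula, Billing, and ApiKey are required so usage can be metered
# against the Azure resource even though inference happens locally.
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
  Eula=accept \
  Billing={ANOMALY_DETECTOR_ENDPOINT_URI} \
  ApiKey={ANOMALY_DETECTOR_KEY}

# Query the local prediction endpoint on Server1 (not the Azure-hosted endpoint), e.g.
# http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect
```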
227
**Question Section** You have an **Azure subscription**. The subscription contains an **Azure OpenAI resource** that hosts a **GPT-4 model** named **Model1** and an app named **App1**. **App1 uses Model1**. You need to ensure that **App1 will NOT return answers that include hate speech**. **What should you configure for Model1?** Options: A. the Frequency penalty parameter B. abuse monitoring C. a content filter D. the Temperature parameter ---
**Answer Section** **Correct Answer:** ✅ **C. a content filter** --- **Explanation:** * **Content filters** in **Azure OpenAI** are specifically designed to **block harmful content**, including **hate speech**, violence, self-harm, and sexual content. * You can **customize or enable filters** for different categories and severity levels (e.g., block high/medium/low risk content). **Why other options are incorrect:** * **A. Frequency penalty**: Reduces the chance of repeating the same text — has **nothing to do with harmful content** like hate speech. * **B. Abuse monitoring**: Monitors for misuse, but **doesn’t block content** in real-time — it’s used for **logging and auditing**, not filtering. * **D. Temperature**: Controls **randomness/creativity** in responses, not safety or content moderation. --- ✅ Final Answer: **C. a content filter** Use content filters to **prevent hate speech** and other harmful outputs from being returned by the model.
228
**Question Section** You have an **Azure subscription** that contains an **Azure OpenAI resource** hosting a **GPT-3.5 Turbo** model named **Model1**. You configure Model1 to use the following **system message**: > “You are an AI assistant that helps people solve mathematical puzzles. Explain your answers as if the request is by a 4-year-old.” **Which type of prompt engineering technique is this an example of?** Options: A. few-shot learning B. affordance C. chain of thought D. priming ---
**Answer Section** **Correct Answer:** ✅ **D. priming** --- **Explanation:** * The use of a **system message** to **set the behavior, tone, or persona** of the model before user input is received is known as **priming**. * In this case, the system prompt tells the model: * *What role it is playing* (“AI assistant that helps with puzzles”) * *How it should respond* (“Explain as if the request is by a 4-year-old”) This is **priming** the model to follow specific behavioral instructions for all future interactions. --- **Why the other options are incorrect:** * **A. Few-shot learning**: Involves giving **specific examples** in the prompt to teach the model how to respond — this prompt doesn’t include examples. * **B. Affordance**: A design principle referring to the **design cues** that suggest how something should be used — not a prompt engineering term in this context. * **C. Chain of thought**: Refers to prompting the model to **show intermediate reasoning steps** — not just setting a behavior. --- ✅ Final Answer: **D. priming** This is a classic example of **priming** the model using a system message to influence its tone and behavior.
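For illustration, a minimal sketch of priming with the Azure.AI.OpenAI .NET SDK (matching the client style shown in other cards); the endpoint, key, and deployment name are placeholders, and newer SDK versions move the deployment name onto the options object:

```
using System;
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),
    new AzureKeyCredential("<key>"));

var options = new ChatCompletionsOptions()
{
    Messages =
    {
        // Priming: the system message fixes the persona and tone before any user input arrives.
        new ChatMessage(ChatRole.System,
            "You are an AI assistant that helps people solve mathematical puzzles. " +
            "Explain your answers as if the request is by a 4-year-old."),
        new ChatMessage(ChatRole.User, "Why is 9 + 1 the same as 1 + 9?"),
    },
    MaxTokens = 200,
};

Response<ChatCompletions> response = client.GetChatCompletions("<deployment-name>", options);
Console.WriteLine(response.Value.Choices[0].Message.Content);
```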
229
**Question Section** You build a chatbot by using **Azure OpenAI Studio**. You need to ensure that the **responses are more deterministic and less creative**. **Which two parameters should you configure?** To answer, select the appropriate parameters based on the information provided in the following exhibit: [ExamTopics Discussion Link](https://www.examtopics.com/discussions/microsoft/view/134944-exam-ai-102-topic-7-question-3-discussion/) ---
**Answer Section** **Correct Parameters:** * ✅ **Temperature** * ✅ **Top P** --- **Explanation:** To **reduce randomness** and make responses more **deterministic**: * **Temperature** controls how random or creative the model is. A lower temperature (e.g., **0.0–0.3**) makes responses more focused and deterministic. * **Top P** (nucleus sampling) determines the probability mass of tokens considered. Lowering this (e.g., **<0.9**) also reduces variability in responses. By adjusting **both Temperature and Top P to lower values**, you ensure the model is **less creative and more consistent** in its responses. --- ✅ Final Answers: * **Temperature** * **Top P**
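A hedged sketch of lowering both values with the Azure.AI.OpenAI .NET SDK, where the Top P setting is exposed as `NucleusSamplingFactor`; the endpoint, key, deployment name, and chosen values are illustrative:

```
using System;
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),
    new AzureKeyCredential("<key>"));

var options = new ChatCompletionsOptions()
{
    Messages = { new ChatMessage(ChatRole.User, "Summarize the company travel policy.") },
    Temperature = 0.2f,            // low temperature: focused, repeatable wording
    NucleusSamplingFactor = 0.5f,  // Top P: sample only from the most probable tokens
    MaxTokens = 400,
};

Console.WriteLine(
    client.GetChatCompletions("<deployment-name>", options).Value.Choices[0].Message.Content);
```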
230
**Question Section** You are building a chatbot for a **travel agent**. The chatbot will use the **Azure OpenAI GPT-3.5** model and will be used to **make travel reservations**. You need to **maximize the accuracy** of the responses from the chatbot. **What should you do?** Options: A. Configure the model to include data from the travel agent's database. B. Set the Top P parameter for the model to 0. C. Set the Temperature parameter for the model to 0. D. Modify the system message used by the model to specify that the answers must be accurate. ---
**Answer Section** **Correct Answer:** ✅ **C. Set the Temperature parameter for the model to 0.** --- **Explanation:** * The **Temperature** parameter controls how **random or creative** the model is. * A **lower value (like 0)** makes the model **more deterministic and focused**, leading to **more accurate and consistent responses**. * A **higher value (like 1)** increases creativity and randomness, which is **not ideal** when accuracy is the priority. --- **Why other options are incorrect:** * **A. Configure the model to include data from the travel agent's database**: Azure OpenAI models like GPT-3.5 **don’t have direct access** to external databases. You would need **additional integration** (e.g., retrieval-augmented generation), which isn’t implied in the question. * **B. Set the Top P parameter to 0**: Top P must be **greater than 0**; setting it to 0 is **invalid**. * **D. Modify the system message to say answers must be accurate**: While helpful for **guidance**, a system message alone does **not guarantee accuracy** — it helps steer tone and intent, but **not determinism**. --- ✅ Final Answer: **C. Set the Temperature parameter for the model to 0.** This ensures **maximum accuracy** and consistency in responses.
231
**Question Section** You build a **chatbot** that uses the **Azure OpenAI GPT-3.5** model. You need to **improve the quality of the responses** from the chatbot. The solution must **minimize development effort**. **What are two ways to achieve the goal?** Each correct answer presents a complete solution. Options: A. Fine-tune the model B. Provide grounding content C. Add sample request/response pairs D. Retrain the language model by using your own data E. Train a custom large language model (LLM) ---
**Answer Section** **Correct Answers:** ✅ **B. Provide grounding content** ✅ **C. Add sample request/response pairs** --- **Explanation:** * **B. Provide grounding content**: This refers to **retrieval-augmented generation (RAG)** — supplementing the model with **relevant context** (e.g., from documents, FAQs, or databases). This improves response quality **without retraining or fine-tuning** and is **low-effort** to implement via tools like Azure Cognitive Search. * **C. Add sample request/response pairs**: This is an example of **few-shot learning**. Including these pairs in the **prompt** helps the model understand how to respond, improving accuracy and tone **without model training** — just prompt engineering. --- **Why other options are incorrect:** * **A. Fine-tune the model**: Fine-tuning **can improve quality**, but it requires **data preparation, cost, time**, and **greater development effort** — so it **doesn't meet** the “minimize development effort” requirement. * **D. Retrain the language model using your own data**: Not possible with GPT-3.5. Only **fine-tuning** is allowed — full retraining is not supported or practical for most users. * **E. Train a custom large language model (LLM)**: This involves **massive resources**, infrastructure, and expertise — it’s the **opposite** of low development effort. --- ✅ Final Answers: * **B. Provide grounding content** * **C. Add sample request/response pairs**
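To make the two techniques concrete, a hedged sketch (all content and placeholders are illustrative, not from the exam material): one sample request/response pair acts as a few-shot example, and grounding content travels with the real question:

```
using System;
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),
    new AzureKeyCredential("<key>"));

var options = new ChatCompletionsOptions()
{
    Messages =
    {
        new ChatMessage(ChatRole.System,
            "You are a support assistant. Answer only from the grounding content supplied in the user message."),

        // Few-shot: one sample request/response pair demonstrates the expected format and tone.
        new ChatMessage(ChatRole.User, "Example question: How do I reset my password?"),
        new ChatMessage(ChatRole.Assistant, "Open Settings > Security and select Reset password."),

        // Grounding content is passed along with the real question so the answer is based on it.
        new ChatMessage(ChatRole.User,
            "Grounding content: <excerpt retrieved from the product FAQ>\n\n" +
            "Question: How do I change my billing address?"),
    },
    Temperature = 0.3f,
    MaxTokens = 400,
};

Console.WriteLine(
    client.GetChatCompletions("<deployment-name>", options).Value.Choices[0].Message.Content);
```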
232
**Question Section** You have an Azure subscription that contains an **Azure OpenAI resource** named `AI1`. You build a chatbot that will use `AI1` to provide **generative answers** to specific questions. You need to ensure that the **responses are more creative and less deterministic**. **How should you complete the code?** To answer, select the appropriate options in the **dropdown menus**. --- **Code snippet:** ``` response = openai.ChatCompletion.create( engine="dgw-aoai-gpt35", messages = [{"role": "_____", "content": ""}], _____ = 1, max_tokens=800, stop=None ) ``` --- **Dropdown Options:** **For `role`:** * assistant * function * system * user **For the parameter being set to `1`:** * Frequency\_penalty * Presence\_penalty * temperature * token\_selection\_biases ---
**Answer Section** ✅ **Role:** `"user"` ✅ **Parameter:** `temperature` --- **Completed Code:** ``` response = openai.ChatCompletion.create( engine="dgw-aoai-gpt35", messages = [{"role": "user", "content": ""}], temperature = 1, max_tokens=800, stop=None ) ``` --- **Explanation:** * **`role: "user"`**: This specifies that the message is coming from the user. It’s used to pass the user's input to the model. * **`temperature = 1`**: This setting increases the **creativity** and **randomness** of the output. A higher temperature (closer to 1) encourages **more varied and generative responses**, which is the goal in this scenario. --- ✅ Final Answer: ``` role = "user" temperature = 1 ```
233
**Question Section** You have an **Azure subscription** that contains an **Azure OpenAI resource** named **AI1**. You plan to build an app named **App1** that will write **press releases** by using **AI1**. You need to **deploy an Azure OpenAI model** for App1. The solution must **minimize development effort**. **Which three actions should you perform in sequence** in **Azure OpenAI Studio**? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. ---
**Answer Section** **Correct Sequence:** 1. ✅ **Create a deployment that uses the GPT-35 Turbo model.** 2. ✅ **Apply the Marketing Writing Assistant system message template.** 3. ✅ **Deploy the solution to a new web app.** --- **Explanation:** * **Step 1: Create a deployment that uses the GPT-35 Turbo model** GPT-3.5 Turbo is well-suited for text generation tasks like writing press releases. * **Step 2: Apply the Marketing Writing Assistant system message template** Azure OpenAI Studio includes prebuilt **system message templates** (like the **Marketing Writing Assistant**) to reduce development time. * **Step 3: Deploy the solution to a new web app** This makes the solution accessible and functional for end users. --- **Why other options are incorrect:** * **Create a deployment that uses the text-embedding-ada-002 model**: Embedding models are used for **search and similarity**, not text generation. * **Apply the Default system message template**: The **Marketing Writing Assistant** is more appropriate for press releases and minimizes customization. --- ✅ Final Answer: 1. Create a deployment that uses the GPT-35 Turbo model. 2. Apply the Marketing Writing Assistant system message template. 3. Deploy the solution to a new web app.
234
**Question Section** You have an **Azure subscription** that contains an **Azure OpenAI resource** named `AI1`. You build a chatbot that will use `AI1` to provide **generative answers** to specific questions. You need to ensure that the responses are **more creative and less deterministic**. **How should you complete the code?** To answer, select the appropriate options in the answer area. --- **Code Snippet:** ``` new ChatCompletionsOptions() { Messages = { new ChatMessage(ChatRole.______, @""), }, ________ = (float)1.0, MaxTokens = 800, }; ``` --- **Dropdown Options:** **For `ChatRole`:** * ChatRole.Assistant * ChatRole.Function * ChatRole.System * ChatRole.User **For the parameter being set to `1.0`:** * ChatRole.User * PresencePenalty * Temperature * TokenSelectionBiases ---
DEBATED **Answer Section** ✅ **ChatRole:** `ChatRole.User` ✅ **Parameter:** `Temperature` --- **Completed Code:** ``` new ChatCompletionsOptions() { Messages = { new ChatMessage(ChatRole.User, @""), }, Temperature = (float)1.0, MaxTokens = 800, }; ``` --- **Explanation:** * **ChatRole.User** is the correct role when the message is coming from the user, which initiates the conversation with the model. * **Temperature** controls how creative or deterministic the responses are: * A higher value like `1.0` results in **more creative and varied outputs**. * A lower value like `0.0` results in **deterministic and repetitive responses**. This setup meets the goal of making the bot **more creative and less deterministic**. --- ✅ Final Answers: * **ChatRole.User** * **Temperature**
235
**Question Section** You have an **Azure subscription** that contains an **Azure OpenAI resource**. You configure a model with the following settings: * **Temperature**: 1 * **Top probabilities (Top P)**: 0.5 * **Max response tokens**: 100 You ask the model a question and receive the following response: ``` { "choices": [ { "finish_reason": "stop", "index": 0, "message": { "content": "The founders of Microsoft are Bill Gates and Paul Allen. They co-founded the company in 1975.", "role": "assistant" } } ], "created": 1679044554, "id": "chatcmpl-6us5rafxyj9hkeE5e3GdJ4qR6BDsO1", "model": "gpt-3.5-turbo-0301", "object": "chat.completion", "usage": { "completion_tokens": 86, "prompt_tokens": 37, "total_tokens": 123 } } ``` For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. **NOTE:** Each correct selection is worth one point. **Statements:** 1. The subscription will be charged 86 tokens for the execution of the session. 2. The text completion was truncated because the Max response tokens value was exceeded. 3. The `prompt_tokens` value will be included in the calculation of the Max response tokens value. ---
**Answer Section** 1. **The subscription will be charged 86 tokens for the execution of the session.** ❌ **No** → Azure OpenAI charges based on **total tokens used**, not just the completion. Here, the total is 123 tokens (`37 + 86`). 2. **The text completion was truncated because the Max response tokens value was exceeded.** ❌ **No** → The `finish_reason` is `"stop"`, which means the model finished **naturally**. Truncation would result in `finish_reason = "length"`. 3. **The `prompt_tokens` value will be included in the calculation of the Max response tokens value.** ❌ **No** → `max_tokens` applies **only to completion tokens**, not to the prompt. Prompt tokens count toward the model's **context length**, not its `max_tokens`. --- ✅ Final Answers: * Statement 1 → **No** * Statement 2 → **No** * Statement 3 → **No**
236
**Question Section** You have an Azure subscription that contains an **Azure OpenAI resource** named `AI1`. You plan to develop a **console app** that will **answer user questions**. You need to **call AI1** and **output the results to the console**. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Answer Area (Code):** ``` openai.api_key = key openai.api_base = endpoint response = ___________( engine=deployment_name, prompt="What is Microsoft Azure?" ) print(___________) ``` **Dropdown Options:** * Function: * `openai.ChatCompletion.create` * `openai.Embedding.create` * `openai.Image.create` * Output: * `response.choices[0].text` * `response.id` * `response.text` ---
**Answer Section** ✅ **Function to call:** `openai.ChatCompletion.create` ✅ **Output to print:** `response.choices[0].text` --- **Explanation:** * To generate **text-based answers**, you use `openai.ChatCompletion.create` or `openai.Completion.create` depending on whether you’re using chat-style models (`gpt-3.5-turbo`, `gpt-4`) or base completions (`text-davinci-003`, etc.). Since this is about answering questions, **`ChatCompletion.create` is correct**. * The `choices` array holds the model’s response(s). Note that `ChatCompletion.create` normally takes a `messages` list and returns `response.choices[0].message.content`; the `prompt`/`choices[0].text` pattern shown here matches the legacy `Completion.create` API, but among the listed options, `ChatCompletion.create` and `response.choices[0].text` are the intended selections. * `Embedding.create` is for generating vector embeddings (not answering questions), and `Image.create` is for image generation — both are incorrect for this task. --- ✅ Final Answer: ``` response = openai.ChatCompletion.create( engine=deployment_name, prompt="What is Microsoft Azure?" ) print(response.choices[0].text) ```
237
**Question Section** You have an **Azure subscription** that contains an **Azure OpenAI resource** named `AI1`. You plan to develop a **console app** that will **answer user questions**. You need to **call AI1** and **output the results to the console**. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Code:** ``` OpenAIClient client = new OpenAIClient(new Uri(endpoint), new AzureKeyCredential(key)); Response response = client.__________(deploymentName, "What is Microsoft Azure?"); Console.WriteLine(__________); ``` **Dropdown Options:** * For method: * `GetCompletions` * `GetEmbeddings` * `GetImageGenerations` * For output: * `response.Value.Choices[0].Text` * `response.Value.Id` * `response.Value.PromptFilterResults` ---
**Answer Section** ✅ **Method to call:** `GetCompletions` ✅ **Output to print:** `response.Value.Choices[0].Text` --- **Explanation:** * To generate a **text response** using Azure OpenAI in a .NET console app, you use the method `**GetCompletions**`. This is part of the Azure.AI.OpenAI SDK for models like `text-davinci-003`. * **`GetEmbeddings`** is for retrieving vector embeddings (not for answering questions), and **`GetImageGenerations`** is used with image models — both are incorrect in this context. * The correct way to access the **generated text** is through `**response.Value.Choices[0].Text**`. The other options (`response.Value.Id`, `PromptFilterResults`) do not return the model’s actual answer content. --- ✅ Final Answer: ``` Response response = client.GetCompletions(deploymentName, "What is Microsoft Azure?"); Console.WriteLine(response.Value.Choices[0].Text); ```
238
**Question Section** You have an **Azure subscription**. You need to create a new resource that will **generate fictional stories** in response to user prompts. The solution must ensure that the resource uses a **customer-managed key (CMK)** to **protect data**. **How should you complete the script?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Script to Complete:** ``` az cognitiveservices account create -n myresource -g myResourceGroup --kind _______ --sku S -l WestEurope \ --api-properties '{ " _______": { "keySource": "Microsoft.KeyVault", "keyVaultProperties": { "keyName": "KeyName", "keyVersion": "secretVersion", "keyVaultUri": "https://issue23056kv.vault.azure.net/" } } }' \ --assign-identity ``` **Answer Area Options:** * **Kind:** * AIservices * LanguageAuthoring * OpenAI * **Parameter to use for CMK setup:** * api-properties * assign-identity * encryption ---
**Answer Section** ✅ **--kind:** `OpenAI` ✅ **`encryption`** (the JSON property inside the `--api-properties` block) --- **Explanation:** * You are provisioning an **Azure OpenAI** resource, so the correct `--kind` is **`OpenAI`**. * To ensure it uses a **customer-managed key (CMK)**, you must specify: * `"keySource": "Microsoft.KeyVault"` * `"keyVaultProperties"` with Key Vault URI, key name, and version * These encryption settings must go inside the `--api-properties` parameter block, under the **`"encryption"`** property. * `--assign-identity` is also required to allow Azure OpenAI to access the Key Vault via **managed identity**. --- ✅ Final Answers: * `--kind`: **OpenAI** * Inside `--api-properties`: Use the **`encryption`** property to configure CMK.
239
**Question Section** You are developing a **smart e-commerce project**. You need to design the **skillset** to include the **contents of PDFs in searches**. The solution must support: * Extracting text from PDFs (including scanned documents) * Translating the content to **English, Spanish, and Portuguese** * Storing the enriched content for **further processing** **How should you complete the skillset design diagram?** To answer, drag the appropriate services to the correct stages. Each service may be used once, more than once, or not at all. --- **Stages:** * **Source** * **Cracking** * **Preparation** * **Destination** **Available Services:** * Azure Blob Storage * Azure Files * Azure Cosmos DB * Computer Vision API * Translator API * Custom Vision API * Conversational Language Understanding API ---
**Answer Section** **Correct Mapping:** * **Source** → ✅ **Azure Blob Storage** * **Cracking** → ✅ **Computer Vision API** * **Preparation** → ✅ **Translator API** * **Destination** → ✅ **Azure Files** --- ✅ **Explanation** 🔹 **Source → Azure Blob Storage** * This is the most common storage source used by Azure Cognitive Search for indexing unstructured data like PDFs. * 🔗 [Docs: Azure Cognitive Search - supported data sources](https://learn.microsoft.com/en-us/azure/search/search-what-is-data-source) 🔹 **Cracking → Computer Vision API** * This service performs **OCR** to extract text from scanned PDFs and images. * It's a standard step in **document cracking**. * 🔗 [Docs: Computer Vision OCR](https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr) 🔹 **Preparation → Translator API** * Text from PDFs must be available in **multiple languages** (English, Spanish, Portuguese). * The **Translator API** enables this multilingual support. * 🔗 [Docs: Translator in skillsets](https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-text-translation) 🔹 **Destination → Azure Files** * Although Cosmos DB is commonly used for structured, real-time, queryable data, the scenario here involves **storing enriched data** for **further processing** — making **Azure Files** a valid choice. * Azure Files supports **knowledge store projections** in enrichment pipelines. * 🔗 [Docs: Knowledge store using Azure Files](https://learn.microsoft.com/en-us/azure/search/knowledge-store-projection-overview) --- ❌ **Why Other Options Are Incorrect** * **Custom Vision API**: Used for image classification **with training**, not OCR or text extraction. * **Conversational Language Understanding API**: Intended for parsing **chatbot utterances**, not documents. * **Azure Cosmos DB**: Excellent for structured metadata and real-time queries, but **not required** here as the scenario emphasizes **storage for later processing**, not real-time search or querying. --- ✅ Final Answer ``` Source → Azure Blob Storage Cracking → Computer Vision API Preparation → Translator API Destination → Azure Files ``` This configuration is best aligned with Microsoft Learn documentation and the scenario's goals.
240
**Question Section** You build a **QnA Maker resource** to meet the chatbot requirements. **Which RBAC role should you assign to each group?** To answer, select the appropriate options from the dropdowns in the answer area. NOTE: Each correct selection is worth one point. **Groups:** * Management-Accountants * Consultant-Accountants * Agent-CustomerServices **Options:** * Owner * Contributor * Cognitive Services User * Cognitive Services QnA Maker Read * Cognitive Services QnA Maker Editor ---
DEBATED - IDK THE ANSWER ### **Answer Section** **Management-Accountants** → **Cognitive Services QnA Maker Editor** **Consultant-Accountants** → **Cognitive Services QnA Maker Read** **Agent-CustomerServices** → **Cognitive Services User** --- ✅ **Explanation** 1. **Management-Accountants → Cognitive Services QnA Maker Editor** This group likely needs **full control** over the QnA knowledge base: editing, publishing, and maintaining content. The **Editor** role allows all of these capabilities **without granting unnecessary full subscription access** like "Owner" or "Contributor". 2. **Consultant-Accountants → Cognitive Services QnA Maker Read** This group likely needs to **review content** but not modify it. The **QnA Maker Read** role grants access to view the content in QnA Maker, making it suitable for **reviewers**. 3. **Agent-CustomerServices → Cognitive Services User** This group most likely **uses** the QnA Maker resource indirectly via the chatbot during customer interactions. They don't interact with the QnA Maker UI or maintain the KB directly, so **Cognitive Services User** is appropriate. It provides **execute/query permissions**, which are sufficient for bot runtime access. --- ✅ Final Answers Recap (in line format): * Management-Accountants: Cognitive Services QnA Maker Editor * Consultant-Accountants: Cognitive Services QnA Maker Read * Agent-CustomerServices: Cognitive Services User An alternative reading assigns Agent-CustomerServices the QnA Maker Read role, but **Cognitive Services User** is typically preferred for groups that only consume the model via the API or chatbot.
241
**Question Section** You are planning the **product creation project**. You need to build the **REST endpoint** to create **multilingual product descriptions**. **How should you complete the URI?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Complete this URI:** ``` https://?api-version=3.0&to=es&to=pt ``` **HOST options:** * api.cognitive.microsofttranslator.com * api-nam.cognitive.microsofttranslator.com * westus.tts.speech.microsoft.com * wvics.cognitiveservices.azure.com/translator **ROUTE options:** * /detect * /languages * /text-to-speech * /translate ---
**Answer Section** Correct answers: * HOST: `api-nam.cognitive.microsofttranslator.com` * ROUTE: `/translate` --- **Explanation** * **`api-nam.cognitive.microsofttranslator.com`** is the **regional endpoint for North America**, and is required when the scenario **explicitly mandates** that data processing must occur within **U.S. data centers**. * [Reference – Microsoft Translator Regional Endpoints](https://learn.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-reference#base-urls) * **`/translate`** is the correct path for performing **text translations** using the Translator Text API. --- **Why other options are incorrect:** * `api.cognitive.microsofttranslator.com`: This is the **global endpoint**, which may **route traffic outside the U.S.** for failover scenarios. Does **not guarantee regional data residency**. * `westus.tts.speech.microsoft.com`: For **speech** services — not used for translation. * `wvics.cognitiveservices.azure.com/translator`: Not a valid base URI for the Translator API. * `/detect`, `/languages`, `/text-to-speech`: Valid endpoints, but do **not perform translation**. Only `/translate` translates text. --- **Final Answer:** ``` https://api-nam.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es&to=pt ```
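A hedged sketch of calling that URI from .NET; the header names follow the documented Translator v3 pattern, while the key, region, and sample text are placeholders:

```
using System;
using System.Net.Http;
using System.Text;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<translator-key>");
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Region", "<resource-region>");

// Regional North America endpoint, translating to Spanish and Portuguese.
var uri = "https://api-nam.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es&to=pt";
var body = new StringContent("[{\"Text\": \"Hand-crafted leather wallet\"}]", Encoding.UTF8, "application/json");

HttpResponseMessage response = await http.PostAsync(uri, body);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```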
242
**Question Section** You need to develop code to **upload images** for the **product creation project**. The solution must meet the **accessibility requirements** by providing **automatic alt text** for images. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Code Snippet:** ``` public static async Task SuggestAltText(ComputerVisionClient client, ______ image) { List features = new List() { ________ }; ImageAnalysis results = await client.AnalyzeImageAsync(image, features); var c = results.________.Captions[0]; if (c.Confidence > 0.5) return c.Text; } ``` --- **Dropdown Options:** * For the `image` parameter type: * Dictionary * Stream * string * For the item in `results`: * VisualFeatureTypes.Description, * VisualFeatureTypes.ImageType, * VisualFeatureTypes.Objects, * VisualFeatureTypes.Tags * For the property in `results`: * results.Brands * results.Description * results.Metadata * results.Objects ---
**Answer Section** **Correct Answers:** * `image` parameter → ✅ **string** * Visual feature to request → ✅ **VisualFeatureTypes.Description** * `results._____.Captions[0]` → ✅ **Description** --- ✅ Explanation * **Parameter Type – `string`**: The `AnalyzeImageAsync` method expects a **URL string** when analyzing an image from the web (not a stream or dictionary). This is appropriate for a cloud-based image upload or external image link scenario. * **`VisualFeatureTypes.Description`**: This feature returns natural language captions of the image, making it the best fit for generating accessibility-focused alt text. * **`results.Description.Captions[0]`**: The **Description** property contains **Captions**, which include the **natural language description** of the image with an associated **confidence score** — ideal for **alt text generation**. --- ✅ Final Completed Code (Snippet) ``` public static async Task<string> SuggestAltText(ComputerVisionClient client, string image) { List<VisualFeatureTypes> features = new List<VisualFeatureTypes>() { VisualFeatureTypes.Description, VisualFeatureTypes.ImageType, VisualFeatureTypes.Objects, VisualFeatureTypes.Tags }; ImageAnalysis results = await client.AnalyzeImageAsync(image, features); var c = results.Description.Captions[0]; if (c.Confidence > 0.5) return c.Text; return null; } ``` This solution meets **accessibility requirements** by automatically generating **image descriptions** suitable for use in alt text.
243
**Question Section** You are developing a solution for the **Management-Bookkeepers** group to meet the document processing requirements. The solution must contain the following components: * A **Form Recognizer** resource * An **Azure web app** that hosts the **Form Recognizer sample labeling tool** The Management-Bookkeepers group needs to **create a custom table extractor** by using the sample labeling tool. **Which three actions should the Management-Bookkeepers group perform in sequence?** To answer, move the appropriate actions from the list to the answer area and arrange them in the correct order. --- **Available Actions:** * Train a custom model * Label the sample documents * Create a new project and load sample documents * Create a composite model ---
**Answer Section** **Correct Sequence:** 1. Create a new project and load sample documents 2. Label the sample documents 3. Train a custom model --- **Explanation** * **Step 1: Create a new project and load sample documents** Before labeling or training, users must first **initialize a labeling project** and load their sample files into the Form Recognizer labeling tool. * **Step 2: Label the sample documents** Once documents are loaded, users manually **label the fields and tables** that need to be extracted. This is essential for training the model. * **Step 3: Train a custom model** After labeling is complete, the user can submit the data to **train a custom table extraction model** in Azure Form Recognizer. --- **Why not "Create a composite model"?** * A **composite model** is only needed when you want to **combine multiple trained models**, which is not required here. The scenario focuses on **a single custom table extractor**, so this step is **not applicable**. --- **Final Answer Recap (in order):** * Create a new project and load sample documents * Label the sample documents * Train a custom model
244
**Question Section** You need to develop an **extract solution for receipt images**. The solution must meet the document processing requirements and the **technical requirements**. You upload the receipt images to the **Form Recognizer API** for analysis, and the API returns the following JSON: ``` "documentResults": [ { "docType": "prebuilt:receipt", "pageRange": [1, 1], "fields": { "ReceiptType": { "type": "string", "valueString": "Itemized", "confidence": 0.672 }, "MerchantName": { "type": "string", "valueString": "Tailwind", "text": "Tailwind", "boundingBox": [], "page": 1, "confidence": 0.913, "elements": [ "#/readResults/0/lines/0/words/0" ] } } } ] ``` **Which expression should you use to trigger a manual review of the extracted information by a member of the Consultant-Bookkeeper group?** --- **Options:** A. `documentResults.docType == "prebuilt:receipt"` B. `documentResults.fields.*.confidence < 0.7` C. `documentResults.fields.ReceiptType.confidence > 0.7` D. `documentResults.fields.MerchantName.confidence < 0.7` ---
**Answer Section** **Correct Answer:** **B. `documentResults.fields.*.confidence < 0.7`** --- **Explanation:** * The **goal** is to **trigger a manual review** **when confidence is low**, regardless of which specific field is uncertain. * The expression `documentResults.fields.*.confidence < 0.7` evaluates whether **any field** in the response has a **confidence below 0.7** — this includes `ReceiptType`, which in this example has a confidence of `0.672`. **Why the other options are incorrect:** * **A. `documentResults.docType == "prebuilt:receipt"`** This only checks the document type, not the **accuracy or confidence** of extracted data. * **C. `documentResults.fields.ReceiptType.confidence > 0.7`** This would skip review **if confidence is high**, but we want to review **when it's low**. * **D. `documentResults.fields.MerchantName.confidence < 0.7`** Only checks one field (`MerchantName`), which in this case has **high confidence** — it wouldn't trigger review even if other fields are low. --- **Final Answer:** **B. `documentResults.fields.*.confidence < 0.7`** This ensures **any low-confidence field** triggers a **manual review**, which aligns with quality control requirements.
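As an illustrative sketch (not part of the exam answer), the same rule can be enforced in code with the Azure.AI.FormRecognizer v3 client by checking every extracted field's confidence; the endpoint, key, file name, and review message are assumptions:

```
using System;
using System.IO;
using System.Linq;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

var client = new FormRecognizerClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<key>"));

using FileStream receipt = File.OpenRead("receipt.jpg");
RecognizeReceiptsOperation operation = await client.StartRecognizeReceiptsAsync(receipt);
Response<RecognizedFormCollection> result = await operation.WaitForCompletionAsync();

foreach (RecognizedForm form in result.Value)
{
    // Mirrors documentResults.fields.*.confidence < 0.7:
    // any low-confidence field routes the receipt to manual review.
    bool needsReview = form.Fields.Values.Any(field => field.Confidence < 0.7f);

    Console.WriteLine(needsReview
        ? "Low-confidence field detected - queue for Consultant-Bookkeeper review."
        : "All fields extracted with sufficient confidence.");
}
```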
245
**Question Section** You are developing the **smart e-commerce project**. You need to implement **autocomplete** as part of the **Azure Cognitive Search** solution. **Which three actions should you perform?** Each correct selection is worth one point. --- **Options:** A. Make API queries to the autocomplete endpoint and include `suggesterName` in the body. B. Add a suggester that has the three product name fields as source fields. C. Make API queries to the search endpoint and include the product name fields in the `searchFields` query parameter. D. Add a suggester for each of the three product name fields. E. Set the `searchAnalyzer` property for the three product name variants. F. Set the `analyzer` property for the three product name variants. ---
**Answer Section** **Correct Answers:** * **A.** Make API queries to the autocomplete endpoint and include `suggesterName` in the body * **B.** Add a suggester that has the three product name fields as source fields * **F.** Set the `analyzer` property for the three product name variants --- **Explanation:** ✅ A. Make API queries to the autocomplete endpoint and include `suggesterName` in the body * This is how the autocomplete feature is triggered. You must call the `/autocomplete` endpoint (not `/search`) and include the `suggesterName`. * Reference: [Azure Docs – Autocomplete](https://learn.microsoft.com/en-us/azure/search/search-add-autocomplete-suggestions) ✅ B. Add a suggester that has the three product name fields as source fields * A **suggester** defines which fields are indexed for **autocomplete or suggest**. * You can use **one suggester with multiple fields**, but **not multiple suggesters** per index. * Reference: [Add a suggester to an Azure Cognitive Search index](https://learn.microsoft.com/en-us/azure/search/index-add-suggesters) ✅ F. Set the `analyzer` property for the three product name variants * Autocomplete uses a **specific analyzer**, usually something like `standard.lucene` or `edgeNGram`, during **indexing**. * Setting the `analyzer` (not `searchAnalyzer`) is correct here because it affects **how the field is indexed for suggestions**. * Reference: [Analyzers in Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/index-add-suggesters#analyzers-and-suggesters) --- ❌ Why the other options are incorrect: * **C. Make API queries to the search endpoint**: This is for **full search**, not autocomplete. Autocomplete uses a **different endpoint**. * **D. Add a suggester for each of the three product name fields**: You can only define **one suggester per index**, though that suggester can include multiple fields. * **E. Set the `searchAnalyzer` property**: This applies to **query-time search**, not autocomplete. Autocomplete uses the **index-time analyzer**. --- **Final Answer:** A. Make API queries to the autocomplete endpoint and include `suggesterName` in the body B. Add a suggester that has the three product name fields as source fields F. Set the `analyzer` property for the three product name variants
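A hedged sketch of the query side using the Azure.Search.Documents SDK; the index name, suggester name (`sg`), and partial search text are assumptions:

```
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var searchClient = new SearchClient(
    new Uri("https://<search-service>.search.windows.net"),
    "products-index",                       // illustrative index name
    new AzureKeyCredential("<query-key>"));

// Calls the autocomplete endpoint; "sg" is the suggester that lists the product name fields.
Response<AutocompleteResults> results = await searchClient.AutocompleteAsync(
    "wirel",
    "sg",
    new AutocompleteOptions
    {
        Mode = AutocompleteMode.OneTermWithContext,
        Size = 5,
    });

foreach (AutocompleteItem item in results.Value.Results)
{
    Console.WriteLine(item.Text);
}
```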
246
**Question Section** You are developing the **document processing workflow**. You need to identify which **API endpoints** to use to **extract text from financial documents**. The solution must meet the **document processing requirements**, including extracting **tables** and **text** from documents that may vary by office format. **Which two API endpoints should you identify?** Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. --- **Options:** A. `/vision/v3.1/read/analyzeResults` B. `/formrecognizer/v2.0/custom/models/{modelId}/analyze` C. `/formrecognizer/v2.0/prebuilt/receipt/analyze` D. `/vision/v3.1/describe` E. `/vision/v3.1/read/analyze` ---
**Answer Section** **Correct Answers:** B. `/formrecognizer/v2.0/custom/models/{modelId}/analyze` C. `/formrecognizer/v2.0/prebuilt/receipt/analyze` --- **Explanation:** * **B. `/formrecognizer/v2.0/custom/models/{modelId}/analyze`** This endpoint is used when you have **distinct formats for financial documents** across offices. A **custom trained model** can be tailored to each format and will extract **structured data**, including **tables and fields**. * **C. `/formrecognizer/v2.0/prebuilt/receipt/analyze`** This endpoint is appropriate if **receipt images** are part of the financial documentation. It extracts key receipt fields like merchant, total, and tax — supporting **financial document processing** as described in the requirements. --- **Why the other options are incorrect:** * **A. `/vision/v3.1/read/analyzeResults`** This is a **follow-up polling URL**, not a primary endpoint to call. * **D. `/vision/v3.1/describe`** Used for **image descriptions**, not for text or table extraction. * **E. `/vision/v3.1/read/analyze`** This performs **basic OCR** only. It returns plain text but **not structured outputs** like tables or key-value pairs — which are required in the scenario. --- **Final Answer:** B. `/formrecognizer/v2.0/custom/models/{modelId}/analyze` C. `/formrecognizer/v2.0/prebuilt/receipt/analyze`
247
**Question Section** You are developing the **knowledgebase** by using **Azure Cognitive Search**. You need to build a skill that will extract **named entities** such as **people, locations, and organizations**. The skill should meet the latest standards and use the **current version** of Azure's entity recognition capability. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Code Snippet to Complete:** ``` { "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill", "categories": [ ___ ], "defaultLanguageCode": "en", "includeTypelessEntities": true, "minimumPrecision": 0.7, "inputs": [ { "name": "text", "source": "/document/content" } ], "outputs": [ { "name": ___ } ] } ``` **Dropdown Options for `categories`:** * `[]` * `["Person", "Organization", "Location"]` * `["Email", "Persons", "Organizations"]` **Dropdown Options for final output `name`:** * `"entities"` * `"categories"` * `"namedEntities"` ---
**Answer Section** **Correct Answers:** * `categories`: `["Person", "Organization", "Location"]` * Output `name`: `"namedEntities"` --- **Explanation** * `"Person"`, `"Organization"`, `"Location"` are **valid categories** for V3 of the EntityRecognitionSkill. Older plural forms (like `"Persons"`) are **deprecated**. * `"namedEntities"` is now the **standard output field** for the recognized entities in V3. * The old `"entities"` value is no longer supported in this version. --- **Final Answer Recap:** * First dropdown: `["Person", "Organization", "Location"]` * Second dropdown: `"namedEntities"`
248
**Question Section** You are developing the **knowledgebase** by using **Azure Cognitive Search**. You need to **process wiki content** to meet the **technical requirements**. **What should you include in the solution?** --- **Options:** A. An indexer for Azure Blob storage attached to a skillset that contains the language detection skill and the text translation skill B. An indexer for Azure Blob storage attached to a skillset that contains the language detection skill C. An indexer for Azure Cosmos DB attached to a skillset that contains the document extraction skill and the text translation skill D. An indexer for Azure Cosmos DB attached to a skillset that contains the language detection skill and the text translation skill ---
**Answer Section** **Correct Answer:** D. An indexer for Azure Cosmos DB attached to a skillset that contains the language detection skill and the text translation skill --- **Explanation** To process **wiki content** for inclusion in a multilingual **Azure Cognitive Search knowledgebase**, you need to: 1. **Source**: Use **Azure Cosmos DB** if the wiki content is stored as structured or semi-structured JSON documents (typical for CMS or application data). * Cosmos DB is a common data source for text-based knowledge systems. 2. **Language Detection + Translation**: * Use the **language detection skill** to determine the language of the content. * Use the **text translation skill** to ensure the knowledgebase contains **uniformly translated content** (typically into English or another base language). 3. **Why not Blob Storage?** * Blob storage is used for **binary or flat documents** (PDFs, DOCX, etc.). * Wiki content is typically **structured**, so **Cosmos DB** is more appropriate. 4. **Why not document extraction?** * **Document extraction skill** is used to extract content from file formats like PDFs or Office docs. * It’s not relevant if the data is already stored as **structured text** in Cosmos DB. --- **Final Answer:** D. An indexer for Azure Cosmos DB attached to a skillset that contains the language detection skill and the text translation skill
249
**Question Section** You are developing the **knowledgebase** by using **Azure Cognitive Search**. You need to meet the **knowledgebase requirements for searching equivalent terms**. **What should you include in the solution?** --- **Options:** A. synonym map B. a suggester C. a custom analyzer D. a built-in key phrase extraction skill ---
**Answer Section** **Correct Answer:** A. synonym map --- **Explanation** * A **synonym map** allows Azure Cognitive Search to treat **different words as equivalent** during search operations. * For example, if the user searches for “TV”, it can also match “television” if both are defined as synonyms. * This is the **correct and recommended feature** for enabling **searching of equivalent terms**. --- **Why the other options are incorrect:** * **B. a suggester**: Used for **autocomplete functionality**, not for treating terms as synonyms or equivalents. * **C. a custom analyzer**: Used for **custom text processing**, such as tokenization or stemming, but not specifically for defining term equivalency. * **D. a built-in key phrase extraction skill**: Used to extract important phrases from documents during indexing. It does **not affect search equivalency**. --- **Final Answer:** A. synonym map
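A hedged sketch of creating a synonym map with the Azure.Search.Documents SDK and attaching it to a field; the map name, synonym rules, and field wiring are illustrative:

```
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexClient = new SearchIndexClient(
    new Uri("https://<search-service>.search.windows.net"),
    new AzureKeyCredential("<admin-key>"));

// Solr-format rules: terms on the same line are treated as equivalent at query time.
var synonymMap = new SynonymMap(
    "product-synonyms",
    "tv, television, flat screen\nlaptop, notebook, portable computer");
await indexClient.CreateOrUpdateSynonymMapAsync(synonymMap);

// The map is then referenced from the searchable fields it should apply to,
// e.g. when (re)defining the index:
// searchField.SynonymMapNames.Add("product-synonyms");
```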
250
**Question Section** You are developing the **shopping on-the-go** project. You are configuring **access to the QnA Maker resources**. **Which role should you assign to AllUsers and LeadershipTeam?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. ---
**Answer Section** * **AllUsers** → QnA Maker Read * **LeadershipTeam** → QnA Maker Editor --- **Explanation** * **AllUsers**: Users who need to **query** the QnA Maker knowledge base (e.g., during app usage or chatbot interaction) only need the **QnA Maker Read** role. This allows them to retrieve answers but not modify content. * **LeadershipTeam**: Members responsible for **maintaining or updating** the QnA knowledge base should be assigned the **QnA Maker Editor** role, which provides full permissions to add, edit, delete, and publish knowledge base content. --- **Why other options are incorrect:** * **Owner / Contributor**: Too broad; these grant full access to the Azure resource rather than QnA-specific tasks such as editing or querying the knowledge base. * **Cognitive Services User**: General permission to call Cognitive Services APIs, but not specific to QnA Maker functions. --- ✅ Final Answers: * AllUsers: QnA Maker Read * LeadershipTeam: QnA Maker Editor This setup follows Microsoft's **least privilege** principle and aligns with **best practices** for QnA Maker access control.
251
**Question Section** You are developing the **shopping on-the-go project**. You need to build the **Adaptive Card** for the chatbot. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Code Snippet:** ``` { "version": "1.3", "body": [ { "type": "TextBlock", "size": "Medium", "weight": "Bolder", "text": "${[Dropdown 1]}" }, { "type": "TextBlock", "$when": "${[Dropdown 2]}", "color": "Attention" }, { "type": "Image", "url": "${image.uri}", "size": "Medium", "altText": "${[Dropdown 3]}" } ] } ``` --- **Dropdown 1 Options:** * `if(language == 'en','en', name)` * `name` * `name.en` * `name[language]` **Dropdown 2 Options:** * `${stockLevel != 'OK'}` * `${stockLevel = 'OK'}` * `${stockLevel == 'OK'}` * `${stockLevel.OK}` **Dropdown 3 Options:** * `image.altText.en` * `image.altText.language` * `image.altText["language"]` * `image.altText[language]` ---
**Answer Section** **Correct Answers:** * **Dropdown 1:** `name[language]` * **Dropdown 2:** `${stockLevel != 'OK'}` * **Dropdown 3:** `image.altText[language]` --- **Explanation** * **Dropdown 1: `name[language]`** This syntax dynamically selects the appropriate value for the current language, which is ideal for localization in adaptive cards. * **Dropdown 2: `${stockLevel != 'OK'}`** The condition ensures that the warning message only displays if the stock level is **not OK**, which aligns with `color: Attention`. * **Dropdown 3: `image.altText[language]`** This retrieves the correct localized alt text based on the language setting, improving accessibility. --- **Final Answers Recap:** * Dropdown 1: `name[language]` * Dropdown 2: `${stockLevel != 'OK'}` * Dropdown 3: `image.altText[language]`
252
**Question Section** You are developing the **chatbot**. You create the following components: * A **QnA Maker** resource * A **chatbot** using the **Azure Bot Framework SDK** You need to **integrate the components** to meet the chatbot requirements. **Which property should you use?** --- **Options:** A. `QnAMakerOptions.StrictFilters` B. `QnADialogResponseOptions.CardNoMatchText` C. `QnAMakerOptions.RankerType` D. `QnAMakerOptions.ScoreThreshold` ---
**Answer Section** **Correct Answer:** D. `QnAMakerOptions.ScoreThreshold` --- **Explanation:** * **`QnAMakerOptions.ScoreThreshold`** defines the **minimum confidence score** required for a QnA answer to be considered a match. * If the QnA Maker returns answers below this threshold, the bot can choose to **respond with a fallback message**. * This is **crucial for controlling quality** and ensuring your chatbot only returns answers when it has high enough confidence. --- **Why other options are incorrect:** * **A. `StrictFilters`**: Used to apply metadata-based filters on QnA results, but not directly related to evaluating **confidence of matches**. * **B. `CardNoMatchText`**: Used to define the **text shown** when the user selects "None of the above" in a card UI — not part of core QnA filtering or answer selection. * **C. `RankerType`**: Controls whether to use **Default** or **QuestionOnly** ranker — useful for relevance tuning, but again, not the primary way to **filter low-confidence answers**. --- **Final Answer:** D. `QnAMakerOptions.ScoreThreshold`
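For context, a minimal sketch (C#, Microsoft.Bot.Builder.AI.QnA) showing where `ScoreThreshold` is set; the endpoint values are placeholders.

```
using Microsoft.Bot.Builder.AI.QnA;

// Placeholder values for the QnA Maker resource.
var endpoint = new QnAMakerEndpoint
{
    KnowledgeBaseId = "<knowledge-base-id>",
    EndpointKey = "<endpoint-key>",
    Host = "https://<qna-resource>.azurewebsites.net/qnamaker"
};

// Only treat answers with a confidence score of 0.5 or higher as matches.
var options = new QnAMakerOptions
{
    ScoreThreshold = 0.5F,
    Top = 1
};

var qnaMaker = new QnAMaker(endpoint, options);
// Inside a bot turn handler:
// var results = await qnaMaker.GetAnswersAsync(turnContext, options);
```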
253
**Question Section** You are developing the **chatbot**. You create the following components: * A **QnA Maker** resource * A **chatbot** by using the **Azure Bot Framework SDK** You need to **add an additional component** to meet the **technical requirements** and **chatbot requirements**. **What should you add?** --- **Options:** A. Microsoft Translator B. Language Understanding C. Orchestrator D. chatdown ---
**Answer Section** **Correct Answer:** **B. Language Understanding** --- **Explanation** * **Language Understanding (LUIS or Conversational Language Understanding)** enables the chatbot to **interpret user intent and extract entities**, which is essential when the chatbot must handle **more than just QnA**. * If the **technical requirements** include **handling user intent**, **contextual conversations**, or **routing between intents and QnA**, then **LUIS** is the natural complement to QnA Maker. --- **Why the other options are incorrect:** * **A. Microsoft Translator** Used for **language translation**, but not required unless the chatbot needs to support **multiple spoken/written languages**. * **C. Orchestrator** Used for **dispatching between multiple recognizers**, like LUIS and QnA Maker, but it’s **not required unless you are using multiple overlapping services**. It's an advanced routing tool, not a foundational piece. * **D. chatdown** A tool for **generating conversation scripts and sample dialogs** in `.chat` format — useful for testing, but not an operational component. --- **Final Answer:** **B. Language Understanding**
254
**Question Section** You are developing a **new sales system** that will process **video and text** from a **public-facing website**. You plan to **monitor the system** to ensure that it provides **equitable results**, **regardless of the user's location or background**. **Which two Responsible AI principles provide guidance to meet the monitoring requirements?** Each correct answer presents part of the solution. **NOTE:** Each correct selection is worth one point. --- **Options:** A. transparency B. fairness C. inclusiveness D. reliability and safety E. privacy and security ---
**Answer Section** **Correct Answers:** * **B. fairness** * **C. inclusiveness** --- **Explanation** * **B. Fairness**: The principle of **fairness** ensures that AI systems treat **all users equitably**, and do **not discriminate** based on race, gender, geography, or background. This directly aligns with your requirement to **monitor for equitable outcomes**. * **C. Inclusiveness**: The principle of **inclusiveness** ensures that AI systems are **usable and effective for people from diverse backgrounds**, including underrepresented groups. It supports the goal of making sure results are **equitable and accessible to all**. --- **Why the other options are incorrect:** * **A. Transparency**: This refers to users' ability to **understand how the AI works** or how decisions are made — helpful for accountability, but not directly related to monitoring for **equitable results**. * **D. Reliability and Safety**: Focuses on ensuring the system performs reliably and doesn't cause unintended harm — important, but not directly tied to **equality of outcomes** across users. * **E. Privacy and Security**: Deals with **protecting user data** — not about monitoring for fairness or demographic equity. --- **Final Answer:** **B. fairness** **C. inclusiveness**
255
**Question Section** You have an **Azure subscription** that contains an **Azure AI Video Indexer** account. You need to **add a custom brand and logo** to the indexer and **configure an exclusion** for the custom brand. **How should you complete the REST API call?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Complete the following JSON:** ``` { "referenceUrl": "https://www.contoso.com/Contoso", "id": 97974, "name": "Contoso", "accountId": "ContosoAccountId", "lastModifierUserName": "SampleUserName", "created": "2023-04-25T14:59:52.7433333", "lastModified": "2023-04-25T14:59:52.7433333", "[Dropdown 1]": [Dropdown 2] } ``` --- **Dropdown 1 Options:** * `enabled` * `tags` * `state` * `useBuiltIn` **Dropdown 2 Options:** * `"Excluded"` * `"Included"` * `false` * `true` ---
**Answer Section** **Correct Pairings:** * `enabled`: `false` **or alternatively:** * `tags`: `["Excluded"]` --- **Explanation** * **`enabled: false`** is the simplest and most direct way to **exclude a brand** from being recognized in Azure Video Indexer. * **`tags: ["Excluded"]`** is another valid way, especially if your API or UI context prefers tags over flags. * Using **`state: "Excluded"`** is not recognized in the [official REST API spec](https://learn.microsoft.com/en-us/azure/azure-video-indexer/customize-brands-model-how-to?tabs=customizeapi#exclude-brands-from-the-model) — it might be inferred or confused from the UI implementation, but not used in API payloads. --- **Final Answer:** ``` Dropdown 1: enabled Dropdown 2: false ``` **Alternative correct pair (also supported):** ``` Dropdown 1: tags Dropdown 2: ["Excluded"] ```
256
Hard **Question Section** You plan to deploy a **containerized version** of an **Azure Cognitive Services** service that will be used for **sentiment analysis**. You configure `https://contoso.cognitiveservices.azure.com` as the **endpoint URI** for the service. You need to run the container on an **Azure virtual machine** using **Docker**. **How should you complete the command?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Command Template:** ``` docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \ \ Billing= \ ApiKey=xxxxxxxxxxxxxxxxxxxxxxxx ``` --- **Dropdown 1 Options:** * [http://contoso.blob.core.windows.net](http://contoso.blob.core.windows.net) * [http://contoso.cognitiveservices.azure.com](http://contoso.cognitiveservices.azure.com) * mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrases * mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment **Dropdown 2 Options:** * [http://contoso.blob.core.windows.net](http://contoso.blob.core.windows.net) * [https://contoso.cognitiveservices.azure.com](https://contoso.cognitiveservices.azure.com) * mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase * mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment ---
**Answer Section** **Correct Answers:** * Dropdown 1: `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment` * Dropdown 2: `https://contoso.cognitiveservices.azure.com` --- **Explanation** * **Dropdown 1 (Image name):** To run the sentiment analysis container, you must pull the **correct image** from Microsoft’s container registry. The correct container image is: `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment` * **Dropdown 2 (Billing endpoint):** This should be set to the **resource endpoint** from your Azure Cognitive Services resource, which in this case is: `https://contoso.cognitiveservices.azure.com` This endpoint is used to **validate billing and access** against your Azure subscription. (In practice, the container only starts if the command also includes `Eula=accept`, which this question omits.) --- **Final Answer Recap:** ``` docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \ mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \ Billing=https://contoso.cognitiveservices.azure.com \ ApiKey=xxxxxxxxxxxxxxxxxxxxxxxx ```
257
**Question Section** You are developing a system that will **monitor temperature data from a data stream**. The system must **generate an alert in response to atypical values**. The solution must **minimize development effort**. **What should you include in the solution?** --- **Options:** A. Multivariate Anomaly Detection B. Azure Stream Analytics C. metric alerts in Azure Monitor D. Univariate Anomaly Detection ---
**Answer Section** **Correct Answer:** **D. Univariate Anomaly Detection** --- **Explanation** * Since the input is **temperature data only**, this is a **single-variable (univariate)** time series. * **Univariate Anomaly Detection** is designed specifically for detecting **outliers or atypical values** in a **single stream of numeric data** (e.g., temperature). * It can be used with **minimal configuration**, and Azure provides built-in REST APIs and integration options to get started quickly — which supports the goal of **minimizing development effort**. --- **Why the other options are incorrect:** * **A. Multivariate Anomaly Detection**: Used when you have **multiple correlated metrics** (e.g., temperature, humidity, and pressure). Overkill for a single variable. * **B. Azure Stream Analytics**: A powerful tool for stream processing, but requires more **setup, SQL-style queries**, and **custom alerting logic** — more development effort than needed. * **C. Metric alerts in Azure Monitor**: Used for **predefined metrics** from Azure resources (e.g., CPU usage) — not for **custom data streams** unless ingested as custom metrics first, which adds complexity. --- **Final Answer:** **D. Univariate Anomaly Detection**
258
**Question Section** You have 1,000 scanned images of **hand-written survey responses**. The surveys do **NOT** have a **consistent layout**. You have an Azure subscription that contains an **Azure AI Document Intelligence** resource named `AIdoc1`. You open **Document Intelligence Studio** and create a new project. You need to **extract data from the survey responses**, and the solution must **minimize development effort**. **To where should you upload the images**, and **which type of model** should you use? Each correct selection is worth one point. --- **Answer Area** **Dropdown 1: Upload to:** * An Azure Cosmos DB account * An Azure Files share * An Azure Storage account **Dropdown 2: Model type:** * Custom neural * Custom template * Identity document (ID) ---
**Answer Section** **Correct Answers:** * Upload to: **An Azure Storage account** * Model type: **Custom neural** --- **Explanation** * **Upload to → An Azure Storage account**: * **Azure Document Intelligence Studio** requires that training and testing data be uploaded to an **Azure Blob Storage account**. * This is the only supported option when working with **scanned documents or forms**. * **Model type → Custom neural**: * Since the **hand-written surveys have no consistent layout**, the **Custom neural model** is the best fit. * It uses **machine learning to generalize across variable formats**, unlike **Custom template**, which is only effective when documents follow a **fixed structure**. * The **Identity document (ID)** model is prebuilt and only used for IDs like passports and driver’s licenses — not survey responses. --- **Final Answer Recap:** * Upload to: **An Azure Storage account** * Model type: **Custom neural**
259
**Question Section** You are building an **Azure AI Language Understanding** solution. You discover that **many intents have similar utterances** containing **airport names or airport codes**. You need to **minimize the number of utterances** used to train the model. **Which type of custom entity should you use?** --- **Options:** A. Pattern.any B. machine-learning C. regular expression D. list ---
**Answer Section** **Correct Answer:** **D. list** --- **Explanation:** * **List entities** are designed for **well-defined sets of values**, like **airport names** or **airport codes**, where you can specify a list of terms and their **synonyms or variations**. * Using a list entity allows the model to **generalize across utterances** that include any item from the list, reducing the number of utterances you need to explicitly include in training. * This is the **best choice** for scenarios where known terms (like a list of airports) repeat across intents. --- **Why other options are incorrect:** * **A. Pattern.any**: Captures **any text matching a placeholder** in a pattern, but **doesn’t help generalize** across known entities like airports. * **B. machine-learning**: Requires **multiple labeled utterances** and is useful when the values are **diverse and unpredictable**, which increases training effort — not ideal when values are known. * **C. regular expression**: Useful for **patterned input** (like phone numbers or dates), but not suitable for matching **named entities** like airports unless they follow a strict format (which they usually don’t). --- **Final Answer:** **D. list**
260
**Question Section** You have an **Azure subscription**. You need to deploy an **Azure AI Search** resource that will **recognize geographic locations**. **Which built-in skill should you include in the skillset for the resource?** --- **Options:** A. AzureOpenAIEmbeddingSkill B. DocumentExtractionSkill C. EntityRecognitionSkill D. EntityLinkingSkill ---
**Answer Section** **Correct Answer:** **C. EntityRecognitionSkill** --- **Explanation:** * **EntityRecognitionSkill** is used to extract named entities (like **locations**, **people**, **organizations**, etc.) from text in Azure AI Search. * It can identify geographic locations such as **cities**, **countries**, and **landmarks**, making it ideal for scenarios where you need to **detect place names** in documents or content. --- **Why the other options are incorrect:** * **A. AzureOpenAIEmbeddingSkill**: Used to **generate vector embeddings** from text for semantic search — not for entity extraction. * **B. DocumentExtractionSkill**: Extracts text from **structured file formats** (like PDFs or Office documents), not for recognizing named entities. * **D. EntityLinkingSkill**: Links known entities (like "London") to **external knowledge sources** such as Wikipedia — useful for enrichment, but **not required just to recognize geographic entities**. --- **Final Answer:** **C. EntityRecognitionSkill**
261
**Question Section** You deploy a **web app** that serves as a **management portal** for **Azure Cognitive Search indexing**. The app is using the **primary admin key**. During a **security review**, you detect **unauthorized changes** to the search index and suspect that the **primary key is compromised**. You need to **prevent unauthorized access** to the **index management endpoint**, and the solution must **minimize downtime**. **What should you do next?** --- **Options:** A. Regenerate the primary admin key, change the app to use the secondary admin key, and then regenerate the secondary admin key. B. Change the app to use a query key, and then regenerate the primary admin key and the secondary admin key. C. Regenerate the secondary admin key, change the app to use the secondary admin key, and then regenerate the primary key. D. Add a new query key, change the app to use the new query key, and then delete all the unused query keys. ---
**Answer Section** **Correct Answer:** **C. Regenerate the secondary admin key, change the app to use the secondary admin key, and then regenerate the primary key.** --- **Explanation** Azure Cognitive Search provides **two admin keys**: **primary** and **secondary**, to support **key rotation with minimal downtime**. If you suspect that one of the admin keys (e.g., **primary**) is compromised: 1. **First**, ensure you have an alternative admin key available (the **secondary**) to avoid downtime. 2. **Then**, **regenerate the compromised key** (in this case, the **primary**). 3. Once the app has switched over and stabilized using the other key, **you can safely rotate both keys**. So the correct **safe sequence** is: * **Regenerate the secondary key** (ensure it’s secure and fresh) * **Update the app to use the secondary key** * **Then regenerate the compromised primary key** This ensures **continuous access** while removing the compromised key. --- **Why the other options are incorrect:** * **A. Regenerate the primary key first**: Risky — if the app is still using it, regenerating immediately may **break access**. * **B. Change the app to use a query key**: **Query keys are read-only** — they do not allow index management. Not suitable for admin operations. * **D. Add a new query key**: Again, **query keys cannot be used for index management** — they’re intended only for search/query scenarios. --- **Final Answer:** **C. Regenerate the secondary admin key, change the app to use the secondary admin key, and then regenerate the primary key.**
262
**Question Section** You are developing an app that will use the **Azure AI Vision API** to **analyze an image**. You need to configure the request that will be used by the app to **identify whether an image is clipart or a line drawing**. **How should you complete the request?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Request Template:** ``` "https://.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=&details=(string)&language=en" ``` --- **Dropdown 1 (HTTP method):** * GET * PATCH * POST **Dropdown 2 (visualFeatures):** * description * imageType * objects * tags ---
**Answer Section** **Correct Answers:** * HTTP method: **POST** * visualFeatures: **imageType** --- **Explanation** * **HTTP method → POST** * The Azure AI Vision `analyze` API uses the **POST** method to send image data in the request body for analysis. * GET is only used when submitting image URLs directly in query params, which is uncommon and limited in functionality. * **visualFeatures → imageType** * The `imageType` feature is used specifically to identify if an image is a **clipart**, **line drawing**, or **photograph**. * This is exactly what's needed to detect **clipart or line drawings**. --- **Final Answer:** * Dropdown 1: **POST** * Dropdown 2: **imageType**
263
**Question Section** You have a local folder that contains the files shown in the following table:

| Name | Format | Length (mins) | Size (MB) |
| ----- | ------ | ------------- | --------- |
| File1 | WMV | 34 | 400 |
| File2 | AVI | 90 | 1,200 |
| File3 | MOV | 300 | 980 |
| File4 | MP4 | 80 | 1,800 |

You need to **analyze the files** by using **Azure AI Video Indexer**. **Which files can you upload to the Video Indexer website?** NOTE: Each correct selection is worth one point. --- **Options:** A. File1 and File3 only B. File1, File2, File3, and File4 C. File1, File2, and File3 only D. File1 and File2 only E. File1, File2, and File4 only ---
**Answer Section** **Correct Answer:** **B. File1, File2, File3, and File4** --- **Explanation** According to the [Azure AI Video Indexer support matrix](https://learn.microsoft.com/en-us/azure/azure-video-indexer/avi-support-matrix): * **Maximum file size** for web upload: **2 GB (2,048 MB)** * **Maximum duration** for upload: **6 hours (360 minutes)** * **Supported formats** include: **WMV**, **AVI**, **MOV**, and **MP4** File breakdown: * **File1** → 34 min, 400 MB → ✅ * **File2** → 90 min, 1,200 MB → ✅ * **File3** → 300 min, 980 MB → ✅ * **File4** → 80 min, 1,800 MB → ✅ All files meet the duration, size, and format criteria. --- **Final Answer:** **B. File1, File2, File3, and File4**
264
**Question Section** You have an Azure subscription that contains an **Azure AI Content Safety** resource named **CS1**. You create a test image that contains a **circle**. You submit the test image to CS1 by using the `curl` command and the following command-line parameters: ``` --data-raw '{ "image": { "content": "" }, "categories": [ "Violence" ], "outputType": "EightSeverityLevels" }' ``` **What should you expect as the output?** --- **Options:** A. 0 B. 0.0 C. 7 D. 100 ---
**Answer Section** **Correct Answer:** **A. 0** --- **Explanation** * You are sending a **non-violent, neutral image** (a simple circle) to the **Azure AI Content Safety API**. * You are requesting a **Violence category score**, and specifying the output type as `"EightSeverityLevels"`, which returns a value from **0 (least severe)** to **7 (most severe)**. * Since a **circle does not contain any violent content**, the API will return a **severity level of 0**. > **Note:** Had the `outputType` been `"FourSeverityLevels"` or `"Float"`, the response could be `0.0`, but in this case, it is explicitly `"EightSeverityLevels"`. --- **Final Answer:** **A. 0**
265
**Question Section** You are building an **Azure WebJob** that will create **knowledge bases** from an array of URLs. You instantiate a **QnAMakerClient** object that has the relevant API keys and assign the object to a variable named `client`. You need to develop a method to **create the knowledge bases**. **Which two actions should you include in the method?** Each correct answer presents part of the solution. **NOTE:** Each correct selection is worth one point. --- **Options:** A. Create a list of `FileDTO` objects that represents data from the WebJob. B. Call the `client.Knowledgebase.CreateAsync` method. C. Create a list of `QnADTO` objects that represents data from the WebJob. D. Create a `CreateKbDTO` object. ---
**Answer Section** **Correct Answers:** **B. Call the `client.Knowledgebase.CreateAsync` method.** **D. Create a `CreateKbDTO` object.** --- **Explanation** To create a knowledge base programmatically using the **QnA Maker API (via QnAMakerClient)**, you need to: 1. **Define a `CreateKbDTO` object**, which contains all the necessary components to create a KB (including QnAs, URLs, files, and metadata). 2. **Call the `client.Knowledgebase.CreateAsync` method**, passing in the `CreateKbDTO` to actually create the KB on the QnA Maker service. You do **not** need to manually construct a list of `FileDTO` or `QnADTO` unless you're feeding the knowledge base with **files** or **manual QnA entries**. Since the prompt says the data is from **URLs**, that information is passed directly via the `CreateKbDTO` object's `urls` property. --- **Why the other options are incorrect:** * **A. FileDTO**: Used if you're uploading **files** (e.g., PDFs or DOCX) to populate the knowledge base. Not needed here since you're working with **URLs**. * **C. QnADTO**: Used for manually adding **questions and answers** into the knowledge base. Again, this is not needed when using **URLs** as data sources. --- **Final Answer:** **B. Call the `client.Knowledgebase.CreateAsync` method.** **D. Create a `CreateKbDTO` object.**
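As an illustration, a sketch of the two actions in C# (Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker); the knowledge base name is a placeholder, and the returned operation still has to be polled until creation completes.

```
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker;
using Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker.Models;

// 'client' is the QnAMakerClient from the scenario; 'urls' is the array of source URLs.
static async Task<string> CreateKnowledgeBaseAsync(IQnAMakerClient client, IList<string> urls)
{
    // Describe the knowledge base; the URLs are passed directly as data sources.
    var createKb = new CreateKbDTO
    {
        Name = "WebJob knowledge base",   // placeholder name
        Urls = urls
    };

    // Start the asynchronous creation operation.
    var operation = await client.Knowledgebase.CreateAsync(createKb);

    // The operation ID can then be polled via client.Operations.GetDetailsAsync
    // until the knowledge base has been created.
    return operation.OperationId;
}
```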
266
Question Section You have an Azure subscription that contains: * An Azure OpenAI resource named `AI1` * An Azure AI Content Safety resource named `CS1` You build a chatbot that: * Uses `AI1` to provide generative answers * Uses `CS1` to check input and output for objectionable content You need to optimize the content filter configurations by running tests on sample questions. --- **Solution: From Content Safety Studio, you use the Safety metaprompt feature to run the tests.** --- Does this meet the requirement? A. Yes B. No ---**
**Answer Section** **Correct Answer:** **B. No** --- **Explanation** * The **Safety metaprompt** feature in Content Safety Studio is used to **guide AI model behavior** by adjusting prompts to be more safety-conscious or restrictive. * It is **not intended** for **testing content filtering configurations** using **sample inputs and outputs** for detection evaluation. --- ✅ What should be used instead? To **test and optimize content filtering**, you should: * Use the **Playground** in **Content Safety Studio**, which lets you: * Submit **sample content** * Adjust filter thresholds * View **severity level results** * This allows you to **evaluate how CS1 flags different inputs/outputs**, making it the correct tool for **filter tuning and evaluation**. --- **Final Answer:** **B. No**
267
Question Section You have an Azure subscription that contains: * An Azure OpenAI resource named `AI1` * An Azure AI Content Safety resource named `CS1` You build a chatbot that: * Uses `AI1` to provide generative answers * Uses `CS1` to check input and output for objectionable content You need to optimize the content filter configurations by running tests on sample questions. --- **Solution: From Content Safety Studio, you use the Moderate text content feature to run the tests.** --- Does this meet the requirement? A. Yes B. No ---**
**Answer Section** **Correct Answer:** **A. Yes** --- **Explanation** * The **Moderate text content** feature in **Content Safety Studio** allows you to: * Submit **sample inputs (text)** to be analyzed * View **category classifications** (e.g., violence, hate, sexual content) * Review **severity levels** * Adjust thresholds as needed to **optimize filtering behavior** * This feature is **designed specifically** to help test and refine how the **Content Safety service** responds to various kinds of content. * Therefore, using **Moderate text content** to run tests on sample chatbot questions is the **correct way** to optimize filter configurations. --- **Final Answer:** **A. Yes**
268
Question Section You have an Azure subscription that contains: * An Azure OpenAI resource named `AI1` * An Azure AI Content Safety resource named `CS1` You build a chatbot that: * Uses `AI1` to generate answers to specific questions * Uses `CS1` to check input and output for objectionable content You need to optimize the content filter configurations by running tests on sample questions. --- **Solution: From Content Safety Studio, you use the Protected material detection feature to run the tests.** --- Does this meet the requirement? A. Yes B. No ---**
**Answer Section** **Correct Answer:** **B. No** --- **Explanation** * The **Protected material detection** feature in Azure AI Content Safety is used to identify **copyrighted or intellectual property-protected content** (e.g., brand names, lyrics, published media). * It is **not used** to test or optimize **general content filter configurations** such as those for **hate, violence, self-harm, or sexual content**. * The correct feature for **testing and tuning safety filters on sample content** (such as chatbot prompts and responses) is the **Moderate text content** feature. --- **Final Answer:** **B. No**
269
**Question Section** You have an Azure subscription that contains an **Azure OpenAI** resource named `AI1`. You build a **chatbot** that uses `AI1` to provide **generative answers** to specific questions. You need to ensure that the chatbot **checks all input and output for objectionable content**. **Which type of resource should you create first?** --- **Options:** A. Microsoft Defender Threat Intelligence (Defender TI) B. Azure AI Content Safety C. Log Analytics D. Azure Machine Learning ---
**Answer Section** **Correct Answer:** **B. Azure AI Content Safety** --- **Explanation** * To **detect and filter objectionable or harmful content** (e.g., hate speech, violence, sexual content) in **text or image** data processed by your chatbot, you must use **Azure AI Content Safety**. * This service is specifically designed to **analyze input and output content** and **assign severity levels** based on safety categories. * It can be easily integrated into **chatbots and generative AI solutions** to ensure responsible AI behavior. --- **Why the other options are incorrect:** * **A. Microsoft Defender Threat Intelligence (Defender TI)**: Focuses on **cyber threat intelligence** and **security monitoring**, not content safety or moderation. * **C. Log Analytics**: Used for **monitoring and logging**, but does **not filter or detect objectionable content**. * **D. Azure Machine Learning**: Used to build/train/deploy **custom ML models**, but not necessary here since you're using **OpenAI and content safety as prebuilt services**. --- **Final Answer:** **B. Azure AI Content Safety**
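To illustrate how the chatbot could call the Content Safety resource, here is a minimal sketch using the Azure.AI.ContentSafety client library (C#); the endpoint and key are placeholders, and the exact response shape can vary by SDK version.

```
using System;
using Azure;
using Azure.AI.ContentSafety;

// Placeholder endpoint and key for the Content Safety resource.
var client = new ContentSafetyClient(
    new Uri("https://<content-safety-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<key>"));

// Screen a chatbot input (or output) before it is used.
var result = client.AnalyzeText(new AnalyzeTextOptions("Text to screen for objectionable content"));

foreach (var category in result.Value.CategoriesAnalysis)
{
    Console.WriteLine($"{category.Category}: severity {category.Severity}");
}
```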
270
**Question Section** In **Azure OpenAI Studio**, you are prototyping a **chatbot** using **Chat playground**. You need to configure the chatbot to meet the following requirements: * **Reduce the repetition of words** in conversations. * **Reduce the randomness** of each response. **Which two parameters should you modify?** To answer, select the appropriate parameters in the answer area. **NOTE:** Each correct answer is worth one point. https://www.examtopics.com/discussions/microsoft/view/143643-exam-ai-102-topic-7-question-15-discussion/ ---
**Answer Section** **Correct Answers:** * **Temperature** * **Frequency penalty** --- **Explanation** * **Temperature**: * Controls the **randomness** of the model's output. * **Lowering** the temperature (e.g., from 0.9 to 0.2 or 0.3) will make responses **more deterministic** and **less random**, addressing the second requirement. * **Frequency penalty**: * Penalizes repeated use of the **same tokens** (words or phrases) within a response. * Increasing the frequency penalty helps **reduce repetition**, addressing the first requirement. --- **Why other options are incorrect:** * **Presence penalty**: Encourages or discourages the model from introducing **new topics**, but does not directly address repetition or randomness. * **Top P**: Another randomness control, but **Temperature** is the more direct and primary control parameter for determinism. --- **Final Answer:** * **Temperature** * **Frequency penalty**
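Outside the playground, the same two parameters appear on the chat completion request. A hedged sketch using the Azure.AI.OpenAI client library (assuming a recent 1.0.0-beta release; the deployment name, endpoint, and key are placeholders):

```
using System;
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<openai-resource>.openai.azure.com/"),
    new AzureKeyCredential("<key>"));

var options = new ChatCompletionsOptions
{
    DeploymentName = "<chat-deployment>",   // placeholder deployment name
    Temperature = 0.2f,                     // lower temperature -> less random responses
    FrequencyPenalty = 1.0f,                // higher penalty -> less word repetition
    Messages = { new ChatRequestUserMessage("Summarize today's new arrivals.") }
};

var response = client.GetChatCompletions(options);
Console.WriteLine(response.Value.Choices[0].Message.Content);
```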
271
**Question Section** You have an **Azure OpenAI model**. You have **500 prompt-completion pairs** that will be used as **training data to fine-tune the model**. You need to **prepare the training data**. **Which format should you use for the training data file?** --- **Options:** A. CSV B. XML C. JSONL D. TSV ---
**Answer Section** **Correct Answer:** **C. JSONL** --- **Explanation** * The required format for fine-tuning an Azure OpenAI model is **JSONL (JSON Lines)**. * Each line in the `.jsonl` file must be a valid JSON object, typically in the format: ``` {"prompt": "", "completion": ""} ``` * This format is efficient for line-by-line streaming and aligns with how Azure OpenAI processes fine-tuning data. --- **Why the other options are incorrect:** * **A. CSV**: Not supported for fine-tuning. Does not allow complex structured data per line like JSONL does. * **B. XML**: Not compatible with the fine-tuning input schema. * **D. TSV**: Like CSV, it is a plain-text format and **not supported** by Azure OpenAI for training input. --- **Final Answer:** **C. JSONL**
272
**Question Section** You have an Azure subscription that contains an **Azure OpenAI** resource named `OpenAI1` and a user named `User1`. You need to ensure that **User1 can upload datasets** to `OpenAI1` and **fine-tune the existing models**. The solution must follow the **principle of least privilege**. **Which role should you assign to User1?** --- **Options:** A. Cognitive Services OpenAI Contributor B. Cognitive Services Contributor C. Cognitive Services OpenAI User D. Contributor ---
**Answer Section** **Correct Answer:** **A. Cognitive Services OpenAI Contributor** --- **Explanation** * The **Cognitive Services OpenAI Contributor** role is the **least-privileged built-in role** that allows a user to: * **Upload training data (datasets)** * **Submit fine-tuning jobs** * **Manage and deploy fine-tuned models** * This role is specifically scoped to **Azure OpenAI operations** and avoids granting broader permissions unrelated to OpenAI. --- **Why other options are incorrect:** * **B. Cognitive Services Contributor**: Grants **broad permissions** to manage any type of Cognitive Services resource, not just OpenAI. This **violates the principle of least privilege**. * **C. Cognitive Services OpenAI User**: Allows a user to **use deployed OpenAI models** but **not to fine-tune models or manage datasets**. * **D. Contributor**: Grants full management access to **all resources** in the assigned scope. This is **too permissive** for the task and violates **least privilege**. --- **Final Answer:** **A. Cognitive Services OpenAI Contributor**
273
**Question Section** You have an **Azure subscription** and **10,000 ASCII files**. You need to **identify files that contain specific phrases**, and the solution must use **cosine similarity**. **Which Azure OpenAI model should you use?** --- **Options:** A. text-embedding-ada-002 B. GPT-4 C. GPT-35 Turbo D. GPT-4-32k ---
**Answer Section** **Correct Answer:** **A. text-embedding-ada-002** --- **Explanation** * The **text-embedding-ada-002** model is purpose-built for generating **vector embeddings** of text, which can then be compared using **cosine similarity**. * This is the **ideal model** for: * Semantic search * Document similarity * Clustering * Recommendation systems * You would: 1. Embed the target phrase using `text-embedding-ada-002` 2. Embed each of the 10,000 ASCII files 3. Use **cosine similarity** to find files with similar semantic meaning --- **Why other options are incorrect:** * **B. GPT-4 / C. GPT-35 Turbo / D. GPT-4-32k** These are **text generation models**, not optimized for vector embedding or similarity comparison. While they can process text, they are **not efficient or cost-effective** for similarity-based search tasks. --- **Final Answer:** **A. text-embedding-ada-002**
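The comparison step itself is plain vector math. A small helper for cosine similarity between two embedding vectors (such as the 1,536-dimension vectors produced by text-embedding-ada-002):

```
using System;

// Cosine similarity between two embedding vectors of equal length.
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot  += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
}
```

Each file's embedding is compared against the embedding of the target phrase, and the files with the highest similarity scores are returned.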
274
You have the following data sources: * Finance: On-premises Microsoft SQL Server database * Sales: Azure Cosmos DB using the Core (SQL) API * Logs: Azure Table storage * HR: Azure SQL database You need to ensure that you can search all the data by using the Azure AI Search REST API. What should you do? A. Migrate the data in HR to Azure Blob storage. B. Migrate the data in HR to the on-premises SQL server. C. Export the data in Finance to Azure Data Lake Storage. D. Migrate the data in Sales to the MongoDB API.
**Answer Section** **Correct Answer:** **C. Export the data in Finance to Azure Data Lake Storage** --- **Explanation** Azure AI Search requires **indexers** or **custom ingestion pipelines** to bring in data for indexing. Let's assess each data source: * **Azure SQL Database (HR)**: ✅ **Natively supported** by Azure Cognitive Search indexers * **Azure Cosmos DB (Sales) using Core (SQL) API**: ✅ **Natively supported** * **Azure Table Storage (Logs)**: ✅ **Supported via built-in indexer** * **On-premises SQL Server (Finance)**: ❌ **Not directly accessible** to Azure Search * You must first **move or expose the data in a cloud-accessible location** like **Azure Blob Storage** or **Azure Data Lake Storage**, then index it. ✅ Therefore, to make the **Finance data searchable**, the correct step is to **export it to Azure Data Lake Storage**, which Azure Cognitive Search can access. --- **Why other options are incorrect:** * **A. Migrate the data in HR to Azure Blob storage**: Azure SQL DB is already natively supported — this adds unnecessary effort. * **B. Migrate the data in HR to the on-premises SQL server**: On-prem SQL is **not accessible** directly by Azure Search; this would make it worse. * **D. Migrate the data in Sales to the MongoDB API**: Cosmos DB Core (SQL) API is **supported**, but MongoDB API is **not supported** by Azure Search indexers. --- **Final Answer:** **C. Export the data in Finance to Azure Data Lake Storage**
275
You are building an app that will process scanned expense claims and extract and label the following data: * Merchant information * Time of transaction * Date of transaction * Taxes paid * Total cost You need to recommend an Azure AI Document Intelligence model for the app. The solution must minimize development effort. What should you use? A. the prebuilt Read model B. a custom template model C. a custom neural model D. the prebuilt receipt model
**Answer Section** **Correct Answer:** **D. the prebuilt receipt model** --- **Explanation** * The **prebuilt receipt model** in Azure AI Document Intelligence is specifically designed to extract structured information from **scanned receipts**, including: * Merchant name * Transaction date and time * Line items * Taxes * Total amount * It is a **ready-to-use** model, requiring **no training**, which means it **minimizes development effort**. --- **Why the other options are incorrect:** * **A. the prebuilt Read model**: Extracts **raw text only** (OCR), without structure or labels — you'd have to build all logic yourself. * **B. a custom template model**: Requires the documents to have a **fixed layout** and manual label training — more effort and not suitable for variable receipt formats. * **C. a custom neural model**: Suitable for **complex documents with no fixed layout**, but still requires **manual labeling and training**, which adds effort. --- **Final Answer:** **D. the prebuilt receipt model**
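A minimal sketch of calling the prebuilt receipt model with the Azure.AI.FormRecognizer 4.x client library (C#); the endpoint, key, and receipt URL are placeholders, and the field names follow the documented prebuilt-receipt schema.

```
using System;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var client = new DocumentAnalysisClient(
    new Uri("https://<document-intelligence-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<key>"));

// Analyze a scanned receipt with the prebuilt receipt model (no training required).
var operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed,
    "prebuilt-receipt",
    new Uri("https://<storage-account>.blob.core.windows.net/receipts/claim001.jpg"));

foreach (var document in operation.Value.Documents)
{
    if (document.Fields.TryGetValue("MerchantName", out var merchant))
        Console.WriteLine($"Merchant: {merchant.Content}");
    if (document.Fields.TryGetValue("Total", out var total))
        Console.WriteLine($"Total: {total.Content}");
}
```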
276
hard **Question Section** You are building a **language learning solution**. You need to recommend which **Azure services** can be used to perform the following tasks: * **Analyze lesson plans** submitted by teachers and **extract key fields**, such as lesson times and required texts. * **Analyze learning content** and **provide students with pictures** that represent commonly used words or phrases in the text. The solution must **minimize development effort**. --- **Which Azure service should you recommend for each task?** **NOTE:** Each correct selection is worth one point. --- **Answer Area:** * **Analyze lesson plans:** Options: * Azure Cognitive Search * Azure AI Custom Vision * Azure AI Document Intelligence * Immersive Reader * **Analyze learning content:** Options: * Azure Cognitive Search * Azure AI Custom Vision * Azure AI Document Intelligence * Immersive Reader ---
**Answer Section** **Correct Answers:** * **Analyze lesson plans:** Azure AI Document Intelligence * **Analyze learning content:** Immersive Reader --- **Explanation** * **Azure AI Document Intelligence** (formerly Form Recognizer) is designed to extract **structured information** from documents. It’s ideal for analyzing lesson plans and extracting key fields like **times, objectives, and required materials** — **minimizing development effort** by using prebuilt or custom models. * **Immersive Reader** is designed to **enhance reading and comprehension**. It can analyze text and **support language learners** by displaying images for **key words**, reading text aloud, and breaking down grammar — all **with minimal configuration or coding**. --- **Final Answer Recap:** * **Analyze lesson plans:** Azure AI Document Intelligence * **Analyze learning content:** Immersive Reader
277
**Question Section** You have an **Azure AI Search** resource named `Search1`. You have an app named `App1` that uses `Search1` to **index content**. You need to **add a custom skill** to `App1` to ensure that the app can **recognize and retrieve properties from invoices** using `Search1`. **What should you include in the solution?** --- **Options:** A. Azure AI Immersive Reader B. Azure OpenAI C. Azure AI Document Intelligence D. Azure AI Custom Vision ---
**Answer Section** **Correct Answer:** **C. Azure AI Document Intelligence** --- **Explanation** * **Azure AI Document Intelligence** (formerly Form Recognizer) is the correct solution for extracting **key-value pairs, tables, and fields from structured and semi-structured documents** like **invoices**, receipts, and forms. * You can integrate Document Intelligence as a **custom skill** in an **Azure AI Search skillset**, allowing your indexer to extract invoice properties and enrich the search index. --- **Why other options are incorrect:** * **A. Azure AI Immersive Reader**: Designed for improving **text comprehension and accessibility**, not for structured data extraction from documents. * **B. Azure OpenAI**: Great for generating or analyzing text, but **not optimized for extracting structured fields** from invoices. * **D. Azure AI Custom Vision**: Used for **image classification or object detection**, not for reading structured content like invoice fields. --- **Final Answer:** **C. Azure AI Document Intelligence**
278
**Question Section** You have an **Azure subscription**. You plan to build a solution that will **analyze scanned documents** and **export relevant fields** to a database. You need to recommend an **Azure AI Document Intelligence model** for the following types of documents: * **Expenditure request authorization forms** * **Structured and unstructured survey forms** * **Structured employment application forms** The solution must **minimize development effort and costs**. **Which type of model should you recommend for each document type?** To answer, select the appropriate options from the dropdown menus. **NOTE:** Each correct selection is worth one point. --- **Answer Area** **Expenditure request authorization forms:** \[Dropdown Options: * Custom neural * Custom template * Prebuilt contract * Prebuilt invoice * Prebuilt layout] **Structured and unstructured survey forms:** \[Dropdown Options: * Custom neural * Custom template * Prebuilt contract * Prebuilt invoice * Prebuilt layout] **Structured employment application forms:** \[Dropdown Options: * Custom neural * Custom template * Prebuilt contract * Prebuilt invoice * Prebuilt layout] ---
debated **Answer Section** **Correct Answers:** * **Expenditure request authorization forms** → **Prebuilt layout** * **Structured and unstructured survey forms** → **Custom neural** * **Structured employment application forms** → **Custom template** --- **Explanation** * **Prebuilt layout** is suitable for **structured forms** where no training is needed and where you're extracting key-value pairs and tables — ideal for expense forms. * **Custom neural** is designed for **flexibility**, supporting documents with **inconsistent or unstructured layouts**, such as a wide variety of survey forms. * **Custom template** is cost-efficient and best for **structured forms** with a **predictable layout**, such as employment applications, and requires minimal labeling.
279
**Question Section** You have an Azure subscription that contains an **Azure AI Document Intelligence** resource named **DI1**. DI1 uses the **Standard S0 pricing tier**. You have the files shown in the following table: | Name | Size | Description | | ---------- | ------ | ------------------------------------ | | File1.pdf | 800 MB | Contains scanned images | | File2.jpg | 1 KB | An image that has 25 × 25 pixels | | File3.tiff | 5 MB | An image that has 5000 × 5000 pixels | **Which files can you analyze by using DI1?** --- **Options:** A. File1.pdf only B. File2.jpg only C. File3.tiff only D. File2.jpg and File3.tiff only E. File1.pdf, File2.jpg, and File3.tiff ---
**Answer Section** **Correct Answer:** **C. File3.tiff only** --- **Explanation** According to the [Azure AI Document Intelligence file requirements (Standard S0 tier)](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/overview), the following constraints apply: ✅ File support constraints: * **Maximum file size**: **200 MB** * **Minimum image dimension**: **50 x 50 pixels** * **Maximum image dimension**: **10,000 x 10,000 pixels** * **Supported file types**: `.pdf`, `.jpg`, `.jpeg`, `.png`, `.tiff`, `.bmp` --- 🔍 File-by-file analysis: * **File1.pdf** ❌ **800 MB exceeds** the 200 MB size limit → **Rejected** * **File2.jpg** ❌ **25 x 25 pixels** is **smaller than** the required 50 x 50 pixels → **Rejected** * **File3.tiff** ✅ 5 MB < 200 MB, and dimensions 5000 x 5000 are within limits → **Accepted** --- **Final Answer:** **C. File3.tiff only**
280
hard **Question Section** You have an Azure subscription that contains: * An **Azure AI Document Intelligence** resource named **DI1** * A **storage account** named **sa1**, which includes: * A **blob container** named **blob1** * An **Azure Files share** named **share1** You plan to build a **custom model** named **Model1** in DI1. You create **sample forms and JSON files** for Model1. You need to **train Model1** and **retrieve the model ID**. --- **Which four actions should you perform in sequence?** To answer, move the appropriate actions to the answer area and arrange them in the correct order. **NOTE:** More than one order may be correct. You will receive credit for any valid order. ---
**Answer Section – Correct Sequence:** 1. **Upload the forms and JSON files to blob1.** 2. **Retrieve the access key for sa1.** 3. **Create a shared access signature (SAS) URL for blob1.** 4. **Call the Build model REST API function.** --- **Explanation** To train a custom model in Azure AI Document Intelligence: 1. **Upload training files** (forms + label JSONs) to a **blob container** (in this case, blob1). 2. To create a **SAS token**, you need the **storage access key**. 3. With the SAS token, you generate a **SAS URL** to grant the Document Intelligence service access to the files. 4. Use the **Build model REST API** and supply the SAS URL to initiate model training. Retrieving model info using the **Get info** or **Get model REST API function** would only be done **after** the model has been trained. --- ✅ Final Answer: 1. **Upload the forms and JSON files to blob1** 2. **Retrieve the access key for sa1** 3. **Create a shared access signature (SAS) URL for blob1** 4. **Call the Build model REST API function**
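The same sequence via the SDK (the Build model call in step 4 corresponds to the Build model REST API function); a sketch assuming the Azure.AI.FormRecognizer 4.x library, with a placeholder SAS URL for blob1:

```
using System;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var adminClient = new DocumentModelAdministrationClient(
    new Uri("https://<di1-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<key>"));

// SAS URL for blob1, generated by using the sa1 access key (placeholder token).
var trainingDataUri = new Uri("https://sa1.blob.core.windows.net/blob1?<sas-token>");

// Build Model1 from the labeled forms and JSON files in blob1, then read the model ID.
var operation = await adminClient.BuildDocumentModelAsync(
    WaitUntil.Completed,
    trainingDataUri,
    DocumentBuildMode.Template,
    modelId: "Model1");

Console.WriteLine($"Model ID: {operation.Value.ModelId}");
```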
281
hard **Question Section** You have an Azure subscription that contains an **Azure AI Document Intelligence** resource named `AIdoc1`. You have an app named `App1` that uses `AIdoc1` to **analyze business cards** by calling the **business card model v2.1**. You need to **update App1** to ensure the app can **interpret QR codes**. The solution must **minimize administrative effort**. **What should you do first?** --- **Options:** A. Upgrade the business card model to v3.0 B. Implement the read model C. Deploy a custom model D. Implement the contract model ---
**Answer Section** **Correct Answer:** **A. Upgrade the business card model to v3.0** --- **Explanation** * The **business card model v3.0** includes **built-in support for interpreting QR codes**. * Upgrading to this version provides QR code parsing **out of the box**, eliminating the need to: * Build a custom model (C) * Implement OCR manually (B) * Use unrelated models like contract model (D) This approach also **minimizes administrative effort**, as required by the scenario — no need to manually label data or train a new model. --- **Why the other options are incorrect:** * **B. Implement the read model**: This provides raw OCR output (text only), not structured data from QR codes. * **C. Deploy a custom model**: More complex and unnecessary since the **prebuilt v3.0 model** already supports the feature. * **D. Implement the contract model**: Used for extracting fields from **contracts**, not business cards or QR codes. --- **Final Answer:** **A. Upgrade the business card model to v3.0**
282
**Question Section** You are building a **social media extension** that will **convert text to speech**. The solution must meet the following requirements: * Support **messages up to 400 characters** * Provide users with **multiple voice options** * **Minimize costs** You create an **Azure Cognitive Services** resource. **Which Speech API endpoint provides users with the available voice options?** --- **Options:** A. `https://uksouth.api.cognitive.microsoft.com/speechtotext/v3.0/models/base` B. `https://uksouth.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices` C. `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list` D. `https://uksouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` ---
**Answer Section** **Correct Answer:** **C. `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list`** --- **Explanation** * This endpoint is part of the **Text-to-Speech API** and returns a list of **available voices**, including: * Language * Gender * Voice name * Locale * Style (e.g., cheerful, angry) * It supports **standard and neural voices**, and helps developers let users **select from multiple voice options**. --- **Why the other options are incorrect:** * **A.** `/speechtotext/v3.0/models/base`: Refers to **speech-to-text**, not text-to-speech, and is unrelated to voice selection. * **B.** `/longaudiosynthesis/voices`: Used for **long audio synthesis**, which is designed for **large batches or files**, not short social media messages. Also requires a **custom voice deployment** and has **higher costs**. * **D.** `/cognitiveservices/v1?deploymentId=...`: Used for **sending text-to-speech synthesis requests**, not for **listing voices**. --- **Final Answer:** **C. `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list`**
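Calling the voices list endpoint only requires the resource key in the `Ocp-Apim-Subscription-Key` header; a short sketch with `HttpClient` (placeholder key):

```
using System;
using System.Net.Http;
using System.Threading.Tasks;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<speech-resource-key>");

// Returns a JSON array of voices (name, locale, gender, voice type, supported styles).
string json = await http.GetStringAsync(
    "https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list");

Console.WriteLine(json);
```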
283
hard **Question Section** You are building a solution that students will use to **find references for essays**. You use the following **Azure Text Analytics** code to get started: https://www.examtopics.com/discussions/microsoft/view/104894-exam-ai-102-topic-3-question-38-discussion/ ``` var response = client.RecognizeLinkedEntities( "Our tour guide took us up the Space Needle during our trip to Seattle last week."); ``` You are calling the **RecognizeLinkedEntities** method from the **Azure.AI.TextAnalytics** SDK. For each of the following statements, select **Yes** if the statement is true. Otherwise, select **No**. --- **Statements:** 1. The code will detect the language of documents. 2. The `url` attribute returned for each linked entity will be a **Bing search link**. 3. The `matches` attribute returned for each linked entity will provide the **location in a document** where the entity is referenced. ---
**Answer Section** * **Statement 1: "The code will detect the language of documents." → No** * The `RecognizeLinkedEntities()` method assumes **English by default** unless explicitly specified. * To detect language, you need to call the **DetectLanguage()** method or pass a language parameter. * **Statement 2: "The `url` will be a Bing search link." → No** * The `url` returned by the **Linked Entity Recognition** feature links to **Bing knowledge sources** like **Wikipedia**, **not to Bing search result pages**. * **Statement 3: "The `matches` attribute gives location of entity reference in text." → Yes** * The `matches` array includes details like **offsets and lengths**, telling you **where in the original text** the entity was found. --- ✅ Final Answers: 1. The code will detect the language of documents. → **No** 2. The `url` will be a Bing search link. → **No** 3. The `matches` attribute gives location of entity reference. → **Yes**
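To see the `url` and `matches` attributes in practice, a short continuation of the scenario code (Azure.AI.TextAnalytics, C#); `client` is the TextAnalyticsClient from the question.

```
using System;
using Azure.AI.TextAnalytics;

// 'client' is the TextAnalyticsClient from the scenario.
var response = client.RecognizeLinkedEntities(
    "Our tour guide took us up the Space Needle during our trip to Seattle last week.");

foreach (LinkedEntity entity in response.Value)
{
    // Url points to the knowledge source article (for example, Wikipedia), not a Bing search page.
    Console.WriteLine($"{entity.Name} ({entity.DataSource}): {entity.Url}");

    foreach (LinkedEntityMatch match in entity.Matches)
    {
        // Offset and Length locate the reference in the original text.
        Console.WriteLine($"  \"{match.Text}\" at offset {match.Offset}, length {match.Length}");
    }
}
```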
284
You have an Azure subscription that contains an Azure Cognitive Service for Language resource. You need to identify the URL of the REST interface for the Language service. Which blade should you use in the Azure portal? A. Identity B. Keys and Endpoint C. Networking D. Properties
**Answer Section** **Correct Answer:** **B. Keys and Endpoint** --- **Explanation** * The **Keys and Endpoint** blade in the Azure portal shows: * The **endpoint URL** to use when calling the **REST API** for your Cognitive Services resource. * The **API keys** required to authenticate requests. * This is the **standard location** to retrieve both the **endpoint** and **access keys** for any **Azure Cognitive Service**, including the **Language** resource. --- **Why the other options are incorrect:** * **A. Identity**: Used to manage **managed identities** for the resource, not for retrieving REST endpoints. * **C. Networking**: Used to configure **virtual networks, IP restrictions**, or **private endpoints**, not to view the public API URL. * **D. Properties**: Provides basic metadata like **resource ID**, **location**, and **SKU**, but **not the REST endpoint**. --- **Final Answer:** **B. Keys and Endpoint**
285
**Question Section** You are building a **chatbot**. You need to configure the chatbot to **query a knowledge base**. **Which dialog class should you use?** --- **Options:** A. QnAMakerDialog B. AdaptiveDialog C. SkillDialog D. ComponentDialog --- **Answer Section** **Correct Answer:** **A. QnAMakerDialog** ---
**Explanation** * **QnAMakerDialog** is a **specialized dialog class** in the **Microsoft Bot Framework SDK** that is designed to: * Connect your bot to a **QnA Maker knowledge base** * Automatically handle **user queries**, **retrieve answers**, and manage **fallbacks** * It simplifies integration with QnA Maker by: * Handling dialog flow for question/answer exchange * Supporting confidence scoring and response customization --- **Why other options are incorrect:** * **B. AdaptiveDialog**: Used for **rule-based conversational flows**, not specifically for querying a knowledge base. * **C. SkillDialog**: Used to connect to **remote bots (skills)**, not for querying QnA knowledge bases. * **D. ComponentDialog**: Used for **composing multiple dialogs**, but does not provide built-in support for QnA Maker. --- **Final Answer:** **A. QnAMakerDialog**
286
hard **Question Section** You are building a **Conversational Language Understanding** (CLU) model. You need to **enable active learning**. **What should you do?** --- **Options:** A. Add `show-all-intents=true` to the prediction endpoint query B. Enable speech priming C. Add `log=true` to the prediction endpoint query D. Enable sentiment analysis ---
**Answer Section** **Correct Answer:** **C. Add `log=true` to the prediction endpoint query** --- **Explanation** * **Active learning** in Conversational Language Understanding allows you to **review real user utterances** that the model was **uncertain about**, so you can **relabel and improve** the model. * To collect those utterances, you must **log prediction data**. * This is done by setting **`log=true`** in the **prediction endpoint query**, which enables the service to **store those utterances** for later review in **Language Studio**. --- **Why the other options are incorrect:** * **A. `show-all-intents=true`**: Returns all intent scores in a prediction — **not related to enabling logging or active learning**. * **B. Enable speech priming**: Used to optimize recognition for expected vocabulary in **speech** input — unrelated to active learning. * **D. Enable sentiment analysis**: Adds sentiment scoring to responses — does **not affect training or active learning**. --- **Final Answer:** **C. Add `log=true` to the prediction endpoint query**
287
hard **Question Section** You are developing the **smart e-commerce project**. You need to implement **autocompletion** as part of the **Cognitive Search** solution. **Which three actions should you perform?** Each correct answer presents part of the solution. **NOTE:** Each correct selection is worth one point. --- **Options:** A. Make API queries to the autocomplete endpoint and include `suggesterName` in the body. B. Add a suggester that has the three product name fields as source fields. C. Make API queries to the search endpoint and include the product name fields in the `searchFields` query parameter. D. Add a suggester for each of the three product name fields. E. Set the `searchAnalyzer` property for the three product name variants. F. Set the `analyzer` property for the three product name variants. ---
**Answer Section** **Correct Answers:** **A. Make API queries to the autocomplete endpoint and include `suggesterName` in the body.** **B. Add a suggester that has the three product name fields as source fields.** **F. Set the `analyzer` property for the three product name variants.** --- **Explanation** * **A. Make API queries to the autocomplete endpoint and include `suggesterName` in the body** * This is required to use the autocomplete feature in Azure Cognitive Search. The `suggesterName` tells the service which suggester to use. * **B. Add a suggester that has the three product name fields as source fields** * You can only have **one suggester per index**, but that suggester can contain **multiple fields**. This makes autocomplete work across all desired product name fields. * **F. Set the `analyzer` property for the three product name variants** * The **index-time analyzer** (set via `analyzer`) determines how text is tokenized and stored for suggestions. This is essential for making autocomplete function correctly. --- **Why the other options are incorrect:** * **C. Make API queries to the search endpoint** * This is used for **full text search**, not for autocomplete. Autocomplete uses the `/autocomplete` endpoint. * **D. Add a suggester for each of the three product name fields** * Only **one suggester** is allowed per index. That suggester can reference multiple fields. * **E. Set the `searchAnalyzer` property** * `searchAnalyzer` is used **at query time**, mainly for search relevance, not for indexing text for autocomplete. --- **Final Answer:** **A, B, F**
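As a companion to the answer, a minimal Python sketch of querying the autocomplete endpoint, assuming an index named `products` that already defines a suggester named `sg` over the three product name fields (all names and keys are placeholders):

```
# Minimal sketch: call the autocomplete endpoint through the Python SDK.
# suggester_name mirrors the suggesterName value sent in the REST request body.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="products",
    credential=AzureKeyCredential("<query-key>"),
)

results = client.autocomplete(search_text="lapt", suggester_name="sg")
for item in results:
    print(item["text"])  # suggested completions such as "laptop"
```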
288
**Question Section** You are building an **internet-based training solution**. The solution requires that a user's **camera and microphone remain enabled**. You need to **monitor a video stream** of the user and **verify that the user is alone and not collaborating with another person**. The solution must **minimize development effort**. **What should you include in the solution?** --- **Options:** A. Speech-to-text in the Azure AI Speech service B. Object detection in Azure AI Custom Vision C. Spatial Analysis in Azure AI Vision D. Object detection in Azure AI Custom Vision ---
**Answer Section** **Correct Answer:** **C. Spatial Analysis in Azure AI Vision** --- **Explanation** * **Azure AI Vision Spatial Analysis** provides prebuilt capabilities to detect: * The **number of people** in a camera frame * **Presence** and **movement** patterns * Proximity of individuals in real time * It’s optimized for scenarios like: * Monitoring rooms for **occupancy** * Ensuring a user is **alone** * Detecting **unauthorized collaboration** * It works with live video streams and requires **minimal development effort** due to its **prebuilt models** and **no need for training**. --- **Why the other options are incorrect:** * **A. Speech-to-text in Azure AI Speech** * Transcribes what is being said — it doesn’t detect **who is present** on video or ensure the user is alone. * **B & D. Object detection in Azure AI Custom Vision** * Requires **manual model training** and labeling to detect specific scenarios like multiple people. * It’s more effort-intensive and less suitable than using Spatial Analysis’s **prebuilt capabilities**. --- **Final Answer:** **C. Spatial Analysis in Azure AI Vision**
289
hard **Question Section** You are examining the **Text Analytics** output of an application. The text analyzed is: > *"Our tour guide took us up the Space Needle during our trip to Seattle last week."* The response contains the data shown below: | Text | Category | ConfidenceScore | | ------------ | ---------- | --------------- | | Tour guide | PersonType | 0.45 | | Space Needle | Location | 0.38 | | Trip | Event | 0.78 | | Seattle | Location | 0.78 | | Last week | DateTime | 0.80 | **Which Text Analytics API is used to analyze the text?** --- **Options:** A. Entity Linking B. Named Entity Recognition C. Sentiment Analysis D. Key Phrase Extraction ---
**Answer Section** **Correct Answer:** **B. Named Entity Recognition** --- **Explanation** * **Named Entity Recognition (NER)** identifies and categorizes named entities in text, such as: * **PersonType**, **Location**, **DateTime**, **Event**, etc. * The table in the output shows entities **categorized and scored**, which is the result of the **NER API**. --- **Why the other options are incorrect:** * **A. Entity Linking**: * Maps known entities in text to **external data sources** (like Wikipedia). * Would include **URLs** and **data sources**, not just entity categories. * **C. Sentiment Analysis**: * Detects the **sentiment** (positive, negative, neutral) of a sentence — doesn't return entity data. * **D. Key Phrase Extraction**: * Returns **important phrases**, not categorized entities or confidence scores per entity. --- **Final Answer:** **B. Named Entity Recognition**
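For comparison, a minimal Python sketch that would produce output like the table above (endpoint and key are placeholders):

```
# Minimal sketch: Named Entity Recognition with the Text Analytics / Language SDK.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["Our tour guide took us up the Space Needle during our trip to Seattle last week."]
result = client.recognize_entities(docs)[0]

for entity in result.entities:
    print(entity.text, entity.category, entity.confidence_score)
```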
290
hard **Question Section** You have an Azure subscription that contains an **Azure AI Document Intelligence** resource named **DI1**. You create a **PDF document named Test.pdf** that contains **tabular data**. You need to **analyze Test.pdf** by using **DI1**. **How should you complete the command?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Command Snippet to Complete** ``` curl -v -i POST "{endpoint}/formrecognizer/documentModels/{model}?api-version=2023-07-31" \ -H "Content-Type: application/json" \ -H ": {yourKey}" \ --data-ascii '{ "urlSource": "test.pdf" }' ``` --- **Dropdown Options:** **Dropdown 1 (Model name):** * prebuilt-contract * prebuilt-document * prebuilt-layout * prebuilt-read **Dropdown 2 (Header name):** * Ocp-Apim-Subscription-Key * Secret * Subscription-Key * Key1 ---
**Answer Section** **Correct Answers:** * **Model name:** `prebuilt-layout` * **Header name:** `Ocp-Apim-Subscription-Key` --- **Explanation** * **Model name → `prebuilt-layout`** * The **prebuilt-layout model** is designed for extracting **text**, **tables**, and **structure** from documents like PDFs. * Since the document contains **tabular data**, this is the most appropriate prebuilt model. * **Header name → `Ocp-Apim-Subscription-Key`** * This is the standard header name required by Azure APIs to pass the **subscription key** when calling Cognitive Services. --- **Why the other options are incorrect:** * **prebuilt-contract**, **prebuilt-read**, **prebuilt-document**: * `prebuilt-contract` is for contracts * `prebuilt-read` is for OCR (text only, no structure or tables) * `prebuilt-document` includes key-value pairs and more advanced extraction, but **layout** is better suited for simple tabular documents and lower cost. * **Secret**, **Subscription-Key**, **Key1**: * Not valid header names recognized by Azure’s REST API for authentication. --- **Final Answer:** * **Model name:** `prebuilt-layout` * **Header name:** `Ocp-Apim-Subscription-Key`
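With the two selections applied, the command from the question reads as follows (note that the live analyze operation normally appends `:analyze` after the model name, which the exam snippet omits):

```
curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-layout?api-version=2023-07-31" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {yourKey}" \
  --data-ascii '{ "urlSource": "test.pdf" }'
```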
291
hard **Question Section** You build a bot named **app1** using the **Microsoft Bot Framework**. You prepare **app1** for deployment. You need to **deploy app1 to Azure**. **How should you complete the command?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Command Template:** ``` az ________ deployment source ________ --resource-group "RG1" --name "app1" --src "app1.zip" ``` --- **Dropdown 1 Options (service):** * bot * functionapp * vm * webapp **Dropdown 2 Options (deployment source):** * config * config-local-git * config-zip ---
**Answer Section** **Correct Answers:** * **Service:** `webapp` * **Deployment source method:** `config-zip` --- **Explanation** * **Service → `webapp`** * When deploying a bot built with the **Bot Framework**, you typically host it as an **Azure App Service (Web App)**. * The `az webapp` CLI command is used to deploy web-based applications like bots. * **Deployment source method → `config-zip`** * Since the deployment package is a `.zip` file (`app1.zip`), `config-zip` is the correct method. * This tells Azure to deploy the contents of a zip archive to the web app. --- **Why other options are incorrect:** * **`bot`**: Refers to bot service metadata, **not used for deployment** of app code. * **`functionapp`, `vm`**: Not relevant for a **Bot Framework Web App** deployment. * **`config-local-git`**: Used for Git-based deployments, not zip packages. * **`config`**: Too generic — `config-zip` is the appropriate specific method. --- **Final Answer:** * **Dropdown 1:** `webapp` * **Dropdown 2:** `config-zip`
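With both selections applied, the completed deployment command is:

```
az webapp deployment source config-zip --resource-group "RG1" --name "app1" --src "app1.zip"
```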
292
**Question Section** You have an Azure subscription that contains a **storage account** named **sa1** and an **Azure AI Document Intelligence** resource named **DI1**. You need to **create and train a custom model** in DI1 by using **Document Intelligence Studio**. The solution must **minimize development effort**. **Which four actions should you perform in sequence?** To answer, move the appropriate actions to the answer area and arrange them in the correct order. --- **Answer Area (Correct Sequence):** 1. **Create a custom model project and link the project to sa1.** 2. **Upload five sample documents.** 3. **Apply labels to the sample documents.** 4. **Train and test the model.** --- ✅ **Explanation** To create a custom model in Document Intelligence Studio **with minimal effort**, you typically use the **Custom Template** model, which: * Requires **only 5 sample documents** * Uses a **no-code/low-code** UI to **label fields** * Does **not require** model training scripts or JSON file authoring --- 🔄 **Step-by-Step Breakdown:** 1. **Create a custom model project and link to sa1** * This initializes your labeling project and points it to your Azure Blob Storage container (sa1). 2. **Upload five sample documents** * Minimum required for training a template-based model. 3. **Apply labels to the sample documents** * Label fields like Name, Date, Total, etc., via the Studio UI. 4. **Train and test the model** * Document Intelligence Studio trains the model and provides testing tools. --- ❌ Why Other Actions Are Incorrect: * **Upload 50 sample documents**: Only required for **neural models**, which are more complex and not needed when minimizing development effort. * **Upload JSON files that contain layout and labels**: Used for **manual training workflows** or API-based approaches — not required in Document Intelligence Studio's built-in labeling tool. --- ✅ Final Answer Recap: 1. Create a custom model project and link the project to sa1 2. Upload five sample documents 3. Apply labels to the sample documents 4. Train and test the model
293
hard **Question Section** You are developing a **text processing solution**. You have the following function: ``` static void GetKeyWords(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.RecognizeEntities(text); Console.WriteLine("Key words:"); foreach (CategorizedEntity entity in response.Value) { Console.WriteLine($"\t{entity.Text}"); } } ``` You call the function and use the following string as the second argument: > "Our tour of London included a visit to Buckingham Palace" **What will the function return?** --- **Options:** A. London and Buckingham Palace only B. Tour and visit only C. London and Tour only D. Our tour of London included visit to Buckingham Palace ---
**Answer Section** **Correct Answer:** **A. London and Buckingham Palace only** --- **Explanation** The function is using the **RecognizeEntities()** method from the **Azure Text Analytics SDK**. This API identifies **named entities** in a piece of text and returns them along with their categories (e.g., Location, Person, Organization, etc.). In the sentence: > *"Our tour of London included a visit to Buckingham Palace"* The recognizable **named entities** are: * **London** → Location * **Buckingham Palace** → Location or Organization (depending on the model) Words like *“tour”* and *“visit”* are **common nouns**, not named entities, and will **not** be returned by `RecognizeEntities`. --- **Final Answer:** **A. London and Buckingham Palace only**
294
hard

**Question Section**

You develop a Python app named **App1** that performs **speech-to-speech translation**. You need to configure **App1** to **translate English to German**.

**How should you complete the `SpeechTranslationConfig` object?** To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all.

**NOTE:** Each correct selection is worth one point.

---

**Values:**

* `add_target_language`
* `speech_synthesis_language`
* `speech_recognition_language`
* `voice_name`

---

**Answer Area:**

```
def translate_speech_to_text():
    translation_config = speechsdk.translation.SpeechTranslationConfig(
        subscription=speech_key, region=service_region)
    translation_config.________ = "en-US"
    translation_config.________ = ("de")
```

---
**Correct Answers:**

```
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("de")
```

---

✅ Explanation:

* **`speech_recognition_language = "en-US"`**
  * Sets the language of the **input speech**, which is English.
* **`add_target_language("de")`**
  * Specifies the **target language** (German) for translation.

---

**Why the other options are incorrect:**

* **`speech_synthesis_language`**:
  * Controls the language of the **output voice**, but is not needed when configuring the basic translation target.
* **`voice_name`**:
  * Only required for selecting a specific **voice model** for synthesized output, which is not necessary for this task.

---

**Final Answer:**

```
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("de")
```
295
**Question Section** You have the following **C# function**: ``` static void MyFunction(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.ExtractKeyPhrases(text); Console.WriteLine("Key phrases:"); foreach (string keyphrase in response.Value) { Console.WriteLine($"{keyphrase}"); } } ``` You call the function with: ``` MyFunction(textAnalyticsClient, "the quick brown fox jumps over the lazy dog"); ``` **Which output will you receive?** --- **Options:** A. ``` The quick - The lazy ``` B. ``` the quick brown fox jumps over the lazy dog ``` C. ``` jumps over ``` D. ``` quick brown fox lazy dog ``` ---
**Answer Section** **Correct Answer:** **D. quick brown fox**     **lazy dog** --- **Explanation** * This code uses **Azure Text Analytics** to extract **key phrases** from a sentence. * The method `ExtractKeyPhrases()` analyzes the text and returns **phrases of semantic significance**. * For the input: > *"the quick brown fox jumps over the lazy dog"* The most meaningful chunks in terms of **content** are: * **quick brown fox** * **lazy dog** These are the phrases that a **key phrase extraction model** is likely to return, as they represent distinct entities or concepts. --- **Why other options are incorrect:** * **A:** “The quick -” and “The lazy” are unlikely outputs and not valid key phrases. * **B:** The full sentence is not returned as a key phrase. * **C:** “jumps over” is a verb phrase and less likely to be extracted as a key **noun phrase**. --- **Final Answer:** **D. quick brown fox**     **lazy dog**
296
hard **Question Section** You have an **Azure subscription**. You plan to build a solution that will **analyze scanned documents** and **export relevant fields** to a database. You need to recommend which **Azure AI service** to deploy for the following types of documents: * **Internal expenditure request authorization forms** * **Supplier invoices** The solution must **minimize development effort**. **Which Azure AI service should you recommend for each document type?** To answer, select the appropriate options from the dropdown menus. --- **Dropdown Options (for both fields):** * An Azure AI Document Intelligence custom model * An Azure AI Document Intelligence pre-built model * Azure AI Custom Vision * Azure AI Immersive Reader * Azure AI Vision **Answer Area** **Internal expenditure request authorization forms:** xxxxxxxx **Supplier invoices:** xxxxxxxx --- ---
**Answer Section** **Correct Answers:** * **Internal expenditure request authorization forms:** **An Azure AI Document Intelligence custom model** * **Supplier invoices:** **An Azure AI Document Intelligence pre-built model** --- **Explanation** * **Custom Model** is appropriate for **internal forms** that vary in layout and do not match standardized formats. Training a model on sample documents ensures accurate field extraction. * **Prebuilt Invoice Model** is optimized for extracting data from **supplier invoices** and supports common invoice fields **without needing training**, minimizing development effort. ---
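As an illustration of the prebuilt invoice path, a minimal Python sketch using the Document Intelligence (Form Recognizer) SDK; the endpoint, key, and invoice URL are placeholders:

```
# Minimal sketch: extract fields from a supplier invoice with the prebuilt model.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-di-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-invoice", "https://<storage-account>/invoices/supplier-invoice-001.pdf"
)
invoice = poller.result().documents[0]

for name, field in invoice.fields.items():
    print(name, field.value, field.confidence)
```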
297
**Question Section** You have a **chatbot** that uses **Azure OpenAI** to generate responses. You need to **upload company data** by using **Chat Playground**. The solution must ensure that the chatbot **uses the data to answer user questions**. How should you complete the code? To answer, select the appropriate options from the dropdown menus. **NOTE:** Each correct selection is worth one point. --- ``` var options = new __________(); options.Messages = { new ChatMessage(ChatRole.User, "What are the differences between Azure Machine Learning and Azure AI services?") }; AzureExtensionsOptions = new AzureChatExtensionsOptions() { Extensions = { new __________ { SearchEndpoint = new Uri(searchEndpoint), SearchKey = new AzureKeyCredential(searchKey), IndexName = searchIndex } } }; ``` --- **Dropdown Options:** **First blank (`var options = new ___();`)** * ChatCompletionsOptions * CompletionsOptions * StreamingChatCompletions **Second blank (`new ___`)** * AzureChatExtensionConfiguration * AzureChatExtensionOptions * AzureCognitiveSearchChatExtensionConfiguration ---
**Answer Section** **Correct Answers:** * **Dropdown 1:** `ChatCompletionsOptions()` * **Dropdown 2:** `AzureCognitiveSearchChatExtensionConfiguration` --- ✅ Explanation 🔹 First Dropdown: `ChatCompletionsOptions()` * Used to configure **chat-based completions** in Azure OpenAI. * Since the chatbot uses **Chat Playground** and processes **messages**, this is the correct type. * `CompletionsOptions()` is used for **text completions**, not chat. * `StreamingChatCompletions()` is for streaming tokens — not required in this case. 🔹 Second Dropdown: `AzureCognitiveSearchChatExtensionConfiguration` * This class is used when configuring a **Cognitive Search extension** to work with the chat model. * It allows the model to **query an Azure Cognitive Search index** (RAG-style interaction). * This ensures that company data is used to answer questions. * `AzureChatExtensionConfiguration` is a generic base and not specific to Cognitive Search. * `AzureChatExtensionOptions` is not valid here. --- ✅ Final Answer: * `ChatCompletionsOptions()` * `AzureCognitiveSearchChatExtensionConfiguration`
298
**Question Section** You have an **Azure subscription** that is linked to a **Microsoft Entra tenant**. * The **subscription ID** is `x11xx11x-x11x-xxxx-xxxx-x1111xxx11x1` * The **tenant ID** is `1y1y1yyy-1y1y-y1y1-y1y1-y1y1y11111y1` The subscription contains an **Azure OpenAI** resource named `OpenAI1` that has a **primary API key** of `1111a1111a11a111aaa11a1a11a11a11aa`. OpenAI1 has a **deployment** named `embeddings1` that uses the **text-embedding-ada-002** model. You need to **query OpenAI1 and retrieve embeddings** for text input. **How should you complete the code?** To answer, select the appropriate options in the answer area. **NOTE:** Each correct selection is worth one point. --- **Answer Area** ``` Uri endpoint = new Uri("https://openai1.openai.azure.com"); AzureKeyCredential credentials = new AzureKeyCredential("________"); OpenAIClient openAIClient = new (endpoint, credentials); EmbeddingsOptions embeddingOptions = new EmbeddingsOptions(input_text_string); var returnValue = openAIClient.GetEmbeddings("________", embeddingOptions); foreach (float item in returnValue.Value.Data[0].Embedding) { Console.WriteLine(item); } ``` --- **Dropdown Options:** **For the credential key:** * `x11xx11x-x11x-xxxx-xxxx-x1111xxx11x1` * `1111a1111a11a111aaa11a1a11a11a11aa` * `1y1y1yyy-1y1y-y1y1-y1y1-y1y1y11111y1` **For the deployment name:** * `embeddings1` * `OpenAI1` * `text-embedding-ada-002` ---
**Correct Answers:** * **Credential key:** `1111a1111a11a111aaa11a1a11a11a11aa` * **Deployment name:** `embeddings1` --- ✅ Explanation: * The **API key** used in the `AzureKeyCredential` constructor must be the **primary key** of the Azure OpenAI resource, not the subscription or tenant ID. * The **deployment name** is required to invoke the model — in this case, the deployment named `embeddings1` wraps the `text-embedding-ada-002` model. You don't pass the model name directly to the API call — you reference the **deployment name**. --- **Final Answer:** * Credential key: `1111a1111a11a111aaa11a1a11a11a11aa` * Deployment name: `embeddings1`
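For comparison, an equivalent call from Python, sketched here assuming the pre-1.0 `openai` package configured for Azure (the same style used elsewhere in these cards); the `api_version` value is an assumption:

```
# Sketch: query the embeddings1 deployment by deployment name, not by model name.
import openai

openai.api_type = "azure"
openai.api_base = "https://openai1.openai.azure.com"
openai.api_key = "1111a1111a11a111aaa11a1a11a11a11aa"
openai.api_version = "2023-05-15"  # assumed API version

response = openai.Embedding.create(engine="embeddings1", input="input text")
print(response["data"][0]["embedding"])
```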
299
hard You have an Azure subscription. You need to build an app that will compare documents for semantic similarity. The solution must meet the following requirements: * Return numeric vectors that represent the tokens of each document. * Minimize development effort. Which Azure OpenAI model should you use? A. GPT-3.5 B. GPT-4 C. embeddings D. DALL-E
**Answer Section** **Correct Answer:** **C. embeddings** --- **Explanation** * The **embeddings model** (e.g., `text-embedding-ada-002`) is specifically designed to: * Convert text into **numeric vectors** * Capture **semantic meaning** * Allow for **similarity comparison** using techniques like **cosine similarity** * It is **optimized for use cases** like: * Semantic search * Document comparison * Clustering * Recommendations * Requires **minimal development effort**: Just send a text string and receive a vector. --- **Why the other options are incorrect:** * **A. GPT-3.5 / B. GPT-4**: * These are **text generation models**, not intended for generating embeddings. * While technically capable with plugins or wrappers, they **do not return vectors directly** for similarity tasks. * **D. DALL-E**: * This model is for **image generation**, not text similarity. --- **Final Answer:** **C. embeddings**
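Once each document has been converted to a vector by the embeddings model, semantic similarity is typically computed with cosine similarity. A minimal sketch (the vectors below are placeholders; real `text-embedding-ada-002` vectors have 1,536 dimensions):

```
# Minimal sketch: cosine similarity between two embedding vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

doc1_vector = [0.12, -0.05, 0.33]  # placeholder embedding
doc2_vector = [0.10, -0.01, 0.30]  # placeholder embedding
print(cosine_similarity(doc1_vector, doc2_vector))  # closer to 1.0 means more similar
```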
300
**Question Section** You have an Azure subscription that contains an **Azure OpenAI** resource named **AI1** and a user named **User1**. You need to ensure that **User1** can perform the following actions in **Azure OpenAI Studio**: * Identify **resource endpoints** * View **models available for deployment** * **Generate text and images** by using deployed models The solution must follow the **principle of least privilege**. **Which role should you assign to User1?** --- **Options:** A. Cognitive Services OpenAI User B. Cognitive Services Contributor C. Contributor D. Cognitive Services OpenAI Contributor ---
**Answer Section** **Correct Answer:** **A. Cognitive Services OpenAI User** --- **Explanation** * The **Cognitive Services OpenAI User** role is the **least-privileged role** that allows a user to: * Access **Azure OpenAI Studio** * Use **deployed models** (e.g., GPT, DALL-E) to **generate text and images** * View **resource endpoints** and available models ✅ It is specifically designed for **end users** who will **interact with models** in OpenAI Studio, **without granting them management or deployment permissions**. --- **Why the other options are incorrect:** * **B. Cognitive Services Contributor** * Grants **broad access** to all Cognitive Services resources — **too much privilege** for this use case. * **C. Contributor** * Grants full **resource-level permissions** across the subscription scope — **not least privilege**. * **D. Cognitive Services OpenAI Contributor** * Grants permissions to **manage deployments**, upload data, and fine-tune models. * More than necessary for **just using deployed models**. --- **Final Answer:** **A. Cognitive Services OpenAI User**
301
**Question Section**

You have a **chatbot** that uses **Azure OpenAI** to generate responses. You need to **upload company data using Chat playground**. The solution must ensure that the **chatbot uses the data to answer user questions**.

**How should you complete the code?** To answer, select the appropriate options from the dropdowns.

**NOTE:** Each correct selection is worth one point.

---

```
completion = openai.__________.create(
    messages=[{"role": "user", "content": "What are the differences between Azure Machine Learning and Azure AI services?"}],
    deployment_id=os.environ.get("AOAIDeploymentId"),
    dataSources=[
        {
            "type": "__________",
            "parameters": {
                "endpoint": os.environ.get("SearchEndpoint"),
                "key": os.environ.get("SearchKey"),
                "indexName": os.environ.get("SearchIndex"),
            }
        }
    ]
)
```

---

**Dropdown 1 options (completion = openai.\_\_\_\_\_\_):**

* ChatCompletion
* Completion
* Embedding

**Dropdown 2 options ("type": "\_\_\_\_\_\_"):**

* AzureCognitiveSearch
* AzureDocumentIntelligence
* BlobStorage

---
**Correct Answers:** * **Dropdown 1:** ChatCompletion * **Dropdown 2:** AzureCognitiveSearch --- ✅ **Explanation** * **ChatCompletion** is used when building **chat-based interactions** using Azure OpenAI with message-based prompts (`role: user`). * **AzureCognitiveSearch** is the correct `type` when using a **search index** (Retrieval Augmented Generation) to enhance answers with **external company data**. The other types (`Completion`, `Embedding`, `AzureDocumentIntelligence`, `BlobStorage`) are not applicable in this scenario. --- ✅ Final Answers: * `completion = openai.ChatCompletion.create(...)` * `"type": "AzureCognitiveSearch"`
302
hard **Question Section** You create a bot by using the **Microsoft Bot Framework SDK**. You need to configure the bot to **respond to events** by using **custom text responses**. **What should you use?** --- **Options:** A. a dialog B. an activity handler C. an adaptive card D. a skill ---
**Answer Section** **Correct Answer:** **B. an activity handler** --- **Explanation** * An **activity handler** is a class in the Bot Framework SDK (typically derived from `ActivityHandler`) that is used to handle various **activity types**, such as: * `Message` * `ConversationUpdate` * `Event` * `Typing`, etc. * To respond to an **event activity** (like a custom event), you can **override `OnEventActivityAsync()`** in the activity handler and send back a **custom text response**. --- **Why other options are incorrect:** * **A. a dialog** * Manages **conversation flow**, but not specifically tied to handling **events** unless invoked via an activity. * **C. an adaptive card** * A **UI rendering component** used to present rich content — not for handling logic or events. * **D. a skill** * A separate bot exposed for reuse across other bots. Not used directly for handling events in the same bot. --- **Final Answer:** **B. an activity handler**
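A minimal Python sketch of the idea, assuming the Bot Framework SDK for Python (`botbuilder-core`); the reply text is a placeholder:

```
# Minimal sketch: override the event handler to send a custom text response.
from botbuilder.core import ActivityHandler, TurnContext

class EventBot(ActivityHandler):
    async def on_event_activity(self, turn_context: TurnContext):
        # turn_context.activity.name carries the name of the custom event
        await turn_context.send_activity(
            f"Received event: {turn_context.activity.name}"
        )
```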
303
**Question Section** You are building a **chatbot**. You need to configure the bot to **guide users through a product setup process**. **Which type of dialog should you use?** --- **Options:** A. component B. action C. waterfall D. adaptive ---
**Answer Section** **Correct Answer:** **C. waterfall** --- **Explanation** * A **Waterfall dialog** is designed to guide the user through a **multi-step process**, where each step may prompt the user, collect input, and pass information to the next step. * It is ideal for **linear, structured interactions**, such as a **product setup process**, onboarding, or form filling. --- **Why the other options are incorrect:** * **A. component** * A **wrapper** for bundling multiple dialogs together — not the dialog itself used for step-by-step user interaction. * **B. action** * Not a dialog type in the Bot Framework SDK. Might refer to Composer's actions, but not relevant here. * **D. adaptive** * Used for **dynamic dialog flows** with rule-based triggers and conditions. More flexible but not needed for a **straightforward guided flow**. --- **Final Answer:** **C. waterfall**
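A minimal Python sketch of a two-step waterfall that walks a user through a setup flow, assuming the Bot Framework SDK for Python (`botbuilder-dialogs`); the dialog IDs and prompt text are placeholders:

```
# Minimal sketch: a waterfall dialog that prompts for input step by step.
from botbuilder.core import MessageFactory
from botbuilder.dialogs import (
    ComponentDialog, WaterfallDialog, WaterfallStepContext, DialogTurnResult
)
from botbuilder.dialogs.prompts import TextPrompt, PromptOptions

class SetupDialog(ComponentDialog):
    def __init__(self):
        super().__init__("SetupDialog")
        self.add_dialog(TextPrompt("TextPrompt"))
        self.add_dialog(
            WaterfallDialog("SetupSteps", [self.ask_model_step, self.finish_step])
        )
        self.initial_dialog_id = "SetupSteps"

    async def ask_model_step(self, step: WaterfallStepContext) -> DialogTurnResult:
        # Step 1: ask which product is being set up.
        return await step.prompt(
            "TextPrompt",
            PromptOptions(prompt=MessageFactory.text("Which product are you setting up?")),
        )

    async def finish_step(self, step: WaterfallStepContext) -> DialogTurnResult:
        # Step 2: use the previous answer and end the dialog.
        await step.context.send_activity(f"Great, let's set up the {step.result}.")
        return await step.end_dialog()
```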
304
You have a chatbot that was built by using Microsoft Bot Framework and deployed to Azure. You need to configure the bot to support voice interactions. The solution must support multiple client apps. Which type of channel should you use? A. Cortana B. Microsoft Teams C. Direct Line Speech
**Answer Section** **Correct Answer:** **C. Direct Line Speech** --- **Explanation** * **Direct Line Speech** is the correct channel for: * **Enabling voice interactions** with bots * **Integrating speech recognition (speech-to-text)** and **synthesis (text-to-speech)** * Supporting **multiple client apps**, such as web, mobile, and desktop * Providing a seamless experience by using the **Speech SDK** * It connects your bot to the **Azure Speech service** and works with **Microsoft Bot Framework bots**. --- **Why the other options are incorrect:** * **A. Cortana** * **Deprecated** and no longer supported as a bot channel. * **B. Microsoft Teams** * Supports **text-based** interaction and some voice/video integration, but it is not designed for **speech-first or voice-interactive experiences across multiple apps**. --- **Final Answer:** **C. Direct Line Speech**
305
hard

**Question Section**

You are designing a **conversation flow** to be used in a **chatbot**. You need to **test the conversation flow** by using the **Microsoft Bot Framework Emulator**.

**How should you complete the `.chat` file?** To answer, select the appropriate options from the dropdown menus.

**NOTE:** Each correct selection is worth one point.

---

**.chat File Code (with blanks)**

```plaintext
user=User1
bot=watchbot

user: I want a new watch.
bot: [ { ________________ } [Delay=3000] ]
bot: I can help you with that! Let me see what I can find.
bot: Here's what I found.
bot: [AttachmentLayout=______________]
[Attachment=https://contoso.blob.core.windows.net/watch01.jpg]
[Attachment=https://contoso.blob.core.windows.net/watch02.jpg]

user: I like the first one.
bot: Sure, pulling up more information.
[Attachment=cards/watchProfileCard.json]
bot: [AttachmentLayout=______________]
```

---

**Dropdown Options**

**Dropdown 1 (activity type):**

* Attachment
* ConversationUpdate
* Typing

**Dropdown 2 (initial attachment layout):**

* adaptivecard
* carousel
* thumbnail

**Dropdown 3 (final attachment layout):**

* adaptivecard
* carousel
* list

---
**Answer Section** **Correct Answers:** * First blank → **Typing** * Second blank → **carousel** * Third blank → **adaptivecard** --- ✅ Explanation * **Typing** → Simulates the bot "thinking" before responding * **carousel** → Best for browsing **multiple item options** horizontally * **adaptivecard** → Ideal for presenting **detailed information** on a selected item in a structured format (as in the watch profile JSON) --- ✅ Final Answer Recap * **First blank:** Typing * **Second blank:** carousel * **Third blank:** adaptivecard
306
You have an Azure subscription that contains an Azure OpenAI resource named AI1. You build a chatbot that uses AI1 to provide generative answers to specific questions. You need to ensure that questions intended to circumvent built-in safety features are blocked. Which Azure AI Content Safety feature should you implement? A. Monitor online activity B. Jailbreak risk detection C. Moderate text content D. Protected material text detection
**Answer Section** **Correct Answer:** **B. Jailbreak risk detection** --- **Explanation** * **Jailbreak risk detection** is a feature in **Azure AI Content Safety** designed to identify **attempts to bypass or manipulate AI model safety mechanisms**, such as: * Prompt injection attacks * Reworded malicious prompts * Evasion techniques to make the model output unsafe or restricted content * This directly addresses the need to **block questions intended to circumvent built-in safety features**. --- **Why the other options are incorrect:** * **A. Monitor online activity**: * Not a feature of Azure AI Content Safety — irrelevant to chatbot content filtering. * **C. Moderate text content**: * Useful for **flagging harmful content** (e.g., hate, violence, etc.), but **not designed specifically to detect jailbreak-style prompt manipulation**. * **D. Protected material text detection**: * Focuses on identifying **copyrighted or protected content**, not prompt hacking or safety circumvention. --- **Final Answer:** **B. Jailbreak risk detection**
307
hard **Question Section** You have a **custom Azure OpenAI model**. You have the following files: | Name | Size | | ---------- | ------ | | File1.tsv | 80 MB | | File2.xml | 25 MB | | File3.pdf | 50 MB | | File4.xlsx | 200 MB | You need to **prepare training data** by using the **OpenAI CLI data preparation tool**. **Which files can you upload to the tool?** --- **Options:** A. File1.tsv only B. File2.xml only C. File3.pdf only D. File4.xlsx only E. File1.tsv and File4.xlsx only F. File1.tsv, File2.xml, and File4.xlsx only G. File1.tsv, File2.xml, File3.pdf, and File4.xlsx ---
**Answer Section** **Correct Answer:** **E. File1.tsv and File4.xlsx only** --- ✅ Explanation The Azure OpenAI CLI data prep tool supports only the following input formats: * `.tsv` * `.csv` * `.jsonl` * `.xlsx` It does **not** support `.xml` or `.pdf` files directly. Also, each file must be **under 512 MB**, and both File1 and File4 meet this size requirement. File-by-file breakdown: * **File1.tsv (80 MB)** → ✅ Supported and within size limits * **File2.xml (25 MB)** → ❌ XML is **not** a supported file type * **File3.pdf (50 MB)** → ❌ PDF is **not** a supported file type * **File4.xlsx (200 MB)** → ✅ Supported and within size limits --- ✅ Final Answer: **E. File1.tsv and File4.xlsx only**
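For reference, a sketch of invoking the data preparation tool on one of the supported files (this assumes the pre-1.0 `openai` Python package, which ships the CLI):

```
openai tools fine_tunes.prepare_data -f File1.tsv
```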
308
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1. You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content. You need to optimize the content filter configurations by running tests on sample questions. Solution: **From Content Safety Studio, you use the Monitor online activity feature to run the tests.** Does this meet the requirement? A. Yes B. No
**Answer Section** **Correct Answer:** **B. No** --- **Explanation** * The **Monitor online activity** feature in Content Safety Studio is used for real-time monitoring and analysis of live content streams; it is not designed for running tests on sample questions to optimize content filter configurations. * To meet the requirement, use the **Test** or **Analyze** features in Content Safety Studio, which are specifically designed for testing and adjusting content filter configurations on sample data. --- **Final Answer:** **B. No**
309
hard **Question Section** You are planning the **product creation project**. You need to recommend a **process for analyzing videos**. **Which four actions should you perform in sequence?** To answer, move the appropriate actions from the list to the answer area and arrange them in the correct order. **Available Actions:** * Index the video by using the Azure Video Analyzer for Media (previously Video Indexer) API. * Upload the video to blob storage. * Analyze the video by using the Computer Vision API. * Extract the transcript from Microsoft Stream. * Send the transcript to the Language Understanding API as an utterance. * Extract the transcript from the Azure Video Analyzer for Media (previously Video Indexer) API. * Translate the transcript by using the Translator API. * Upload the video to file storage. ---
✅ Correct Sequence: 1. **Upload the video to blob storage.** 2. **Index the video by using the Azure Video Analyzer for Media (previously Video Indexer) API.** 3. **Extract the transcript from the Azure Video Analyzer for Media (previously Video Indexer) API.** 4. **Send the transcript to the Language Understanding API as an utterance.** --- ✅ Explanation: 1. **Upload the video to blob storage** * The video must be stored in an accessible location (e.g., Azure Blob Storage) before it can be processed. 2. **Index the video using Azure Video Analyzer for Media** * This step performs audio and video analysis, including speaker identification, transcript generation, and content tagging. 3. **Extract the transcript from Azure Video Analyzer** * After indexing, you can retrieve the **text transcript** that was generated from the video. 4. **Send the transcript to the Language Understanding API (LUIS/CLUA)** * This enables semantic understanding of the extracted transcript for downstream processing or automation (e.g., identifying user intent). --- ❌ Why other actions are incorrect: * **Analyze the video using Computer Vision API**: This is mainly for image/frame analysis — not suited for full video and transcript generation. * **Extract the transcript from Microsoft Stream**: Not applicable here — we're using Azure Video Analyzer, not Microsoft Stream. * **Translate the transcript**: Translation isn't required as part of the outlined goal. * **Upload to file storage**: Blob storage is preferred and directly supported by Video Indexer; file storage is irrelevant here. --- ✅ Final Answer (in order): 1. Upload the video to blob storage 2. Index the video by using the Azure Video Analyzer for Media (previously Video Indexer) API 3. Extract the transcript from the Azure Video Analyzer for Media (previously Video Indexer) API 4. Send the transcript to the Language Understanding API as an utterance