Exam Questions Flashcards
(309 cards)
You plan to use a Language Understanding application named app1 that is deployed to a container. App1 was developed by using a Language Understanding authoring resource named lu1.
App1 has the following versions:

| Version | Trained date | Published date |
|---------|--------------|----------------|
| V1.2 | None | None |
| V1.1 | 2020-10-01 | None |
| V1.0 | 2020-09-01 | 2020-09-15 |
You need to create a container that uses the latest deployable version of app1.
Which three actions should you perform in sequence?
(Move the appropriate actions from the list of Actions to the Answer Area and arrange them in the correct order.)
Actions:
- Run a container that has version set as an environment variable.
- Export the model by using the Export as JSON option.
- Select v1.1 of app1.
- Run a container and mount the model file.
- Select v1.0 of app1.
- Export the model by using the Export for containers (GZIP) option.
- Select v1.2 of app1.
Answer:
- Select v1.1 of app1.
- Export the model by using the Export for containers (GZIP) option.
- Run a container and mount the model file.
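As a rough illustration of the final step, here is a Python helper that assembles the docker run command for a LUIS container with a mounted model. The image name, mount paths, endpoint, and key are assumptions for illustration, not values from the question:

```python
# Sketch of "Run a container and mount the model file". The /input mount
# target, image name, and placeholder endpoint/key are assumptions.
def build_luis_run_command(model_dir: str, endpoint: str, api_key: str) -> str:
    """Return a docker run command that mounts an exported GZIP model."""
    return (
        "docker run --rm -it -p 5000:5000 "
        f"--mount type=bind,src={model_dir},target=/input "
        "mcr.microsoft.com/azure-cognitive-services/luis "
        "Eula=accept "
        f"Billing={endpoint} "
        f"ApiKey={api_key}"
    )

cmd = build_luis_run_command(
    "/models/app1", "https://lu1.cognitiveservices.azure.com", "xxxx"
)
```

The exported v1.1 GZIP file would be placed in the mounted directory before the container starts.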
You have 100 chatbots, each of which has its own Language Understanding (LUIS) model.
You frequently need to add the same phrases to each model.
You need to programmatically update the LUIS models to include the new phrases.
How should you complete the code?
Drag the appropriate values to complete the code snippet.
Each value may be used once, more than once, or not at all.
Each correct selection is worth one point.
Values:
- AddPhraseListAsync
- Phraselist
- PhraselistCreateObject
- Phrases
- SavePhraseListAsync
- UploadPhraseListAsync
Code (Answer Area):
var phraselistId = await client.Features.[value](
    appId,
    versionId,
    new [value]
    {
        EnabledForAllModels = false,
        IsExchangeable = true,
        Name = "PL1",
        Phrases = "item1,item2,item3,item4,item5"
    });
Answer:
- AddPhraseListAsync
- PhraselistCreateObject
Explanation:
- AddPhraseListAsync is the method used to add a phrase list feature.
- PhraselistCreateObject represents the structure needed to define the new phrase list.
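Since the same phrases must go into 100 models, the call above would run in a loop over app IDs. A Python sketch of the payload that mirrors the PhraselistCreateObject in the answer (the key names follow LUIS authoring REST conventions and are assumptions for illustration):

```python
import json

# Sketch: JSON body mirroring the PhraselistCreateObject passed to
# AddPhraseListAsync. Key casing is an assumption for illustration.
def build_phraselist_body(name, phrases, exchangeable=True, all_models=False):
    return {
        "name": name,
        "phrases": ",".join(phrases),  # comma-separated, as in the C# snippet
        "isExchangeable": exchangeable,
        "enabledForAllModels": all_models,
    }

body = build_phraselist_body("PL1", ["item1", "item2", "item3", "item4", "item5"])
payload = json.dumps(body)

# Programmatic update across many models: same body, different app IDs.
app_ids = ["app-001", "app-002"]  # hypothetical IDs
requests_to_send = [(app_id, payload) for app_id in app_ids]
```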
You need to build a chatbot that meets the following requirements:
- Supports chit-chat, knowledge base, and multilingual models
- Performs sentiment analysis on user messages
- Selects the best language model automatically
What should you integrate into the chatbot?
A. QnA Maker, Language Understanding, and Dispatch
B. Translator, Speech, and Dispatch
C. Language Understanding, Text Analytics, and QnA Maker
D. Text Analytics, Translator, and Dispatch
Answer:
D. Text Analytics, Translator, and Dispatch
Explanation:
- Text Analytics handles sentiment analysis.
- Translator supports multilingual capabilities.
- Dispatch helps automatically select and route to the correct language model or service (LUIS, QnA Maker, etc.).
Question:
Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English.
You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort.
Which Azure service should you use?
A. Custom Vision
B. Personalizer
C. Form Recognizer
D. Computer Vision
Answer:
C. Form Recognizer
Explanation:
Form Recognizer is designed to extract key-value pairs, tables, and text from documents like receipts, minimizing development effort with prebuilt models.
You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR).
The solution must meet the following requirements:
- Use a single key and endpoint to access multiple services.
- Consolidate billing for future services that you might use.
- Support the use of Computer Vision in the future.
How should you complete the HTTP request to create the new resource?
(Select the appropriate options in the answer area.)
Answer Area:
[Dropdown: PATCH / POST / PUT] https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/CS1?api-version=2017-04-18

{
  "location": "West US",
  "kind": "[Dropdown: CognitiveServices / ComputerVision / TextAnalytics]",
  "sku": { "name": "S0" },
  "properties": {},
  "identity": { "type": "SystemAssigned" }
}
Answer:
- HTTP Method: PUT
- Kind: CognitiveServices

Explanation:
- PUT is the correct HTTP verb to create or update a resource.
- CognitiveServices allows access to multiple services (e.g., Text Analytics and Computer Vision) using a single endpoint and key, meeting all the stated requirements.
PUT is correct because:
You are creating or replacing a resource at a known URI.
In Azure Resource Manager (ARM) operations, PUT is the standard method for creating or updating a resource.
POST is used when the server determines the resource URI (e.g., for custom actions or when the resource ID is not known in advance).
PATCH is used for partial updates.
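Putting the two selected values together, a Python sketch of the full ARM request (the subscription ID is the placeholder from the question):

```python
import json

# Sketch: assembling the ARM PUT request with the selected answers.
subscription = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder from the question
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription}/resourceGroups/RG1"
    "/providers/Microsoft.CognitiveServices/accounts/CS1"
    "?api-version=2017-04-18"
)
body = {
    "location": "West US",
    "kind": "CognitiveServices",  # multi-service kind: one key/endpoint, consolidated billing
    "sku": {"name": "S0"},
    "properties": {},
    "identity": {"type": "SystemAssigned"},
}
request = {"method": "PUT", "url": url, "body": json.dumps(body)}
```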
Question:
You are developing a new sales system that will process the video and text from a public-facing website.
You plan to notify users that their data has been processed by the sales system.
Which responsible AI principle does this help meet?
A. Transparency
B. Fairness
C. Inclusiveness
D. Reliability and safety
Answer:
A. Transparency
Explanation:
Informing users that their data is being processed supports the principle of transparency, helping build trust and understanding of AI system behavior and data use.
Question:
You have an app that manages feedback.
You need to ensure that the app can detect negative comments using the Sentiment Analysis API in Azure AI Language.
The solution must also ensure that the managed feedback remains on your company’s internal network.
Which three actions should you perform in sequence?
(Move the appropriate actions to the answer area and arrange them in the correct order.)
Actions:
- Identify the Language service endpoint URL and query the prediction endpoint.
- Provision the Language service resource in Azure.
- Run the container and query the prediction endpoint.
- Deploy a Docker container to an on-premises server.
- Deploy a Docker container to an Azure container instance.
Answer Area:
[Step 1]
[Step 2]
[Step 3]
Answer:
- Provision the Language service resource in Azure.
- Deploy a Docker container to an on-premises server.
- Run the container and query the prediction endpoint.
Explanation:
- You first need to provision the service to obtain the container image and required keys.
- To keep data on-premises, you should deploy the container locally.
- Finally, you run the container and query the prediction endpoint to perform sentiment analysis within your network.
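A sketch of the step-3 command as it might run on the on-premises server. The endpoint and key placeholders are assumptions; Billing and ApiKey come from the Language resource provisioned in step 1, while the feedback text itself stays on the internal network:

```python
# Sketch: docker run command for the sentiment container on-premises.
# Only billing/usage metadata is reported to Azure; the analyzed text is not.
def build_sentiment_run_command(endpoint: str, api_key: str) -> str:
    return (
        "docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 "
        "mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment "
        "Eula=accept "
        f"Billing={endpoint} "
        f"ApiKey={api_key}"
    )

cmd = build_sentiment_run_command(
    "https://contoso.cognitiveservices.azure.com", "xxxx"  # hypothetical values
)
```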
Question:
You plan to use containerized versions of the Anomaly Detector API on local devices for testing and in on-premises datacenters.
You need to ensure that the containerized deployments meet the following requirements:
- Prevent billing and API information from being stored in the command-line histories of the devices that run the container.
- Control access to the container images by using Azure role-based access control (RBAC).
Which four actions should you perform in sequence?
(Move the appropriate actions from the list to the answer area and arrange them in the correct order.)
Actions:
- Create a custom Dockerfile.
- Pull the Anomaly Detector container image.
- Distribute a docker run script.
- Push the image to an Azure container registry.
- Build the image.
- Push the image to Docker Hub.
Answer Area:
[Step 1]
[Step 2]
[Step 3]
[Step 4]
Answer:
- Pull the Anomaly Detector container image.
- Create a custom Dockerfile.
- Build the image.
- Push the image to an Azure container registry.
🔍 Explanation:
- Step 1: Pull the base image from Microsoft to get the official container content.
- Step 2: Create a Dockerfile where you might modify environment variables, set configuration, or enhance the image securely (without hardcoding billing keys).
- Step 3: Build the image based on your custom Dockerfile.
- Step 4: Push the image to Azure Container Registry (ACR) so you can use Azure RBAC to control access to the image.
Not chosen:
- A docker run script isn’t necessary yet, and distributing it may expose billing info unless handled very carefully.
- Docker Hub doesn’t support Azure RBAC, so it doesn’t meet the access control requirement.
Question:
You plan to deploy a containerized version of an Azure Cognitive Services service to be used for text analysis.
You configure https://contoso.cognitiveservices.azure.com as the endpoint URI for the service, and you plan to pull the latest version of the Text Analytics Sentiment Analysis container.
You intend to run the container on an Azure virtual machine using Docker.
How should you complete the following Docker command?
(Select the appropriate options in the answer area.)
Answer Area:
docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
  <Dropdown: image> \
  Eula=accept \
  Billing=<Dropdown: billing value> \
  ApiKey=xxxxxxxxxxxxxxxxxxxxxxxx
Image Options:
- http://contoso.blob.core.windows.net
- https://contoso.cognitiveservices.azure.com
- mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase
- mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment
Billing Options:
- http://contoso.blob.core.windows.net
- https://contoso.cognitiveservices.azure.com
- mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase
- mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment
Answer:
docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
  Eula=accept \
  Billing=https://contoso.cognitiveservices.azure.com \
  ApiKey=xxxxxxxxxxxxxxxxxxxxxxxx
Explanation:
- The correct image for sentiment analysis is mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment.
- The Billing parameter must point to the Cognitive Services endpoint you’ve configured, in this case https://contoso.cognitiveservices.azure.com.
Question:
You have the following C# method for creating Azure Cognitive Services resources programmatically:
static void create_resource(CognitiveServicesManagementClient client, string resource_name, string kind, string account_tier, string location)
{
    CognitiveServicesAccount parameters = new CognitiveServicesAccount(
        null, null, kind, location, resource_name,
        new CognitiveServicesAccountProperties(), new Sku(account_tier));
    var result = client.Accounts.Create(resource_group_name, resource_name, parameters);
}
You need to call the method to create a free Azure resource in the West US region.
The resource will be used to generate captions of images automatically.
Which code should you use?
A. create_resource(client, "res1", "ComputerVision", "F0", "westus")
B. create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus")
C. create_resource(client, "res1", "ComputerVision", "S0", "westus")
D. create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus")
Answer:
A. create_resource(client, "res1", "ComputerVision", "F0", "westus")
Explanation:
- ComputerVision is the correct kind for generating image captions.
- F0 is the free pricing tier.
- "westus" is the specified location.
Therefore, A is the correct call to meet all requirements.
Question:
You successfully run the following HTTP request:
POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18

Body:
{
  "keyName": "Key2"
}
What is the result of the request?
A. A key for Azure Cognitive Services was generated in Azure Key Vault.
B. A new query key was generated.
C. The primary subscription key and the secondary subscription key were rotated.
D. The secondary subscription key was reset.
Answer:
D. The secondary subscription key was reset.
Explanation:
Calling the regenerateKey API with "keyName": "Key2" targets Key2, which refers to the secondary key. This resets that specific key, allowing key rotation without service disruption.
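The same request assembled in Python, using the subscription ID and resource names from the question:

```python
import json

# Sketch: building the regenerateKey request from the question.
# "Key1" is the primary key; "Key2" is the secondary key.
sub = "18c51a87-3a69-47a8-aedc-a54745f708a1"
url = (
    "https://management.azure.com"
    f"/subscriptions/{sub}/resourceGroups/RG1"
    "/providers/Microsoft.CognitiveServices/accounts/contoso1"
    "/regenerateKey?api-version=2017-04-18"
)
body = {"keyName": "Key2"}
request = {"method": "POST", "url": url, "body": json.dumps(body)}
```

A typical rotation: switch clients to Key1, regenerate Key2, then reverse the roles later.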
Question:
You build a custom Form Recognizer model.
You receive sample files to use for training the model, as shown in the table below:

| Name | Type | Size |
| ----- | ---- | ------ |
| File1 | PDF | 20 MB |
| File2 | MP4 | 100 MB |
| File3 | JPG | 20 MB |
| File4 | PDF | 100 MB |
| File5 | GIF | 1 MB |
| File6 | JPG | 40 MB |

Which three files can you use to train the model?
(Each correct answer presents a complete solution. Each correct selection is worth one point.)

A. File1
B. File2
C. File3
D. File4
E. File5
F. File6
Answer:
A. File1
C. File3
F. File6
Explanation:
Form Recognizer supports training with:
- PDF, JPG, PNG, and TIFF formats.
- Files must be ≤ 50 MB in size.
✅ Valid:
- File1: PDF, 20 MB
- File3: JPG, 20 MB
- File6: JPG, 40 MB
❌ Invalid:
- File2: MP4 — unsupported format
- File4: PDF — too large (100 MB)
- File5: GIF — unsupported format
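The two constraints from the explanation can be applied mechanically, which is a quick way to check this kind of question:

```python
# Sketch: filtering the table by the two training constraints above
# (supported format, and file size at or below 50 MB).
SUPPORTED = {"PDF", "JPG", "PNG", "TIFF"}
MAX_MB = 50

files = [
    ("File1", "PDF", 20), ("File2", "MP4", 100), ("File3", "JPG", 20),
    ("File4", "PDF", 100), ("File5", "GIF", 1), ("File6", "JPG", 40),
]

trainable = [name for name, ftype, size_mb in files
             if ftype in SUPPORTED and size_mb <= MAX_MB]
# trainable == ["File1", "File3", "File6"]
```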
Question:
A customer uses Azure Cognitive Search.
The customer plans to enable server-side encryption and use customer-managed keys (CMK) stored in Azure.
What are three implications of the planned change?
(Each correct answer presents a complete solution. Each correct selection is worth one point.)
A. The index size will increase.
B. Query times will increase.
C. A self-signed X.509 certificate is required.
D. The index size will decrease.
E. Query times will decrease.
F. Azure Key Vault is required.
Answer:
A. The index size will increase
B. Query times will increase
F. Azure Key Vault is required
Explanation:
- Enabling CMK encryption typically increases index size due to encryption metadata.
- Query times may increase slightly due to encryption/decryption overhead.
- Azure Key Vault is required to manage and store the customer-managed keys used for encryption.
- A self-signed X.509 certificate is not required in this scenario.
Question:
You are developing a new sales system that will process the video and text from a public-facing website.
You plan to notify users that their data has been processed by the sales system.
Which responsible AI principle does this help meet?
A. Transparency
B. Fairness
C. Inclusiveness
D. Reliability and safety
Answer:
A. Transparency
Explanation:
Notifying users that their data has been processed promotes transparency, which ensures users are aware of how their data is collected, used, and managed by AI systems.
You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is located in a virtual network named vnet1.
You plan to create a new Azure Cognitive Search service named service1.
You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.
Proposed solution:
You deploy service1 and a public endpoint to a new virtual network, and you configure Azure Private Link.
Does this meet the goal?
A. Yes
B. No
Answer:
B. No
Explanation:
Although Azure Private Link enables private access to Azure services, using a public endpoint does not prevent traffic from potentially going over the public internet.
To meet the goal, the public network access must be disabled, and the private endpoint must be correctly configured within the same virtual network or peered VNets to ensure completely private communication.
Question:
You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is on an Azure virtual network named vnet1.
You plan to create a new Azure Cognitive Search service named service1.
You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.
Proposed solution:
You deploy service1 with a public endpoint, and you configure an IP firewall rule.
Does this meet the goal?
A. Yes
B. No
Answer:
B. No
Explanation:
Configuring a public endpoint with an IP firewall rule restricts which IP addresses can access the service but does not prevent traffic from routing over the public internet.
To avoid public internet routing, you must use Private Link or a private endpoint, not just IP restrictions.
Question:
You create a web app named app1 that runs on an Azure virtual machine named vm1.
vm1 is on an Azure virtual network named vnet1.
You plan to create a new Azure Cognitive Search service named service1.
You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.
Proposed solution:
You deploy service1 with a public endpoint, and you configure a network security group (NSG) for vnet1.
Does this meet the goal?
A. Yes
B. No
Answer:
B. No
Explanation:
A Network Security Group (NSG) controls inbound and outbound traffic within a virtual network, but it does not eliminate routing through the public internet when accessing a service via its public endpoint.
To keep traffic off the public internet, you must use Private Endpoint or Azure Private Link. NSG alone does not meet the requirement.
Question:
You plan to perform predictive maintenance.
You collect IoT sensor data from 100 industrial machines for a year.
Each machine has 50 different sensors generating data at one-minute intervals.
In total, you have 5,000 time series datasets.
You need to identify unusual values in each time series to help predict machinery failures.
Which Azure service should you use?
A. Azure AI Computer Vision
B. Cognitive Search
C. Azure AI Document Intelligence
D. Azure AI Anomaly Detector
Answer:
D. Azure AI Anomaly Detector
Explanation:
Azure AI Anomaly Detector is specifically designed to detect anomalies in time series data, making it ideal for identifying unusual patterns that could indicate machinery failures in IoT scenarios.
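A sketch of the shape of a batch-detection request body for one sensor's time series. The field names follow the Anomaly Detector REST API, and the sensor values are fabricated for illustration:

```python
import json
from datetime import datetime, timedelta

# Sketch: one-minute-interval readings from a single sensor, with one
# obvious spike (95.0) that anomaly detection should flag.
start = datetime(2024, 1, 1)
series = [
    {"timestamp": (start + timedelta(minutes=i)).isoformat() + "Z",
     "value": float(v)}
    for i, v in enumerate([20.1, 20.3, 20.2, 95.0, 20.4])
]
payload = {"series": series, "granularity": "minutely"}
body = json.dumps(payload)
```

In the scenario above, a request like this would be issued per time series, once for each of the 5,000 sensor streams.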
HOTSPOT -
You are developing a streaming Speech to Text solution that will use the Speech SDK and MP3 encoding.
You need to develop a method to convert speech to text for streaming MP3 data.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area:
Audio Format:
- AudioConfig.SetProperty
- AudioStreamFormat.GetCompressedFormat
- AudioStreamFormat.GetWaveFormatPCM
- PullAudioInputStream
Recognizer:
- KeywordRecognizer
- SpeakerRecognizer
- SpeechRecognizer
- SpeechSynthesizer
Code:
var audioFormat = xxxxxxxxxxxxx(AudioStreamContainerFormat.MP3);
var speechConfig = SpeechConfig.FromSubscription("18c51887-3a69-47a8-aedc-a547457f08a1", "westus");
var audioConfig = AudioConfig.FromStreamInput(pushStream, audioFormat);

using (var recognizer = new xxxxxxxxxxx(speechConfig, audioConfig))
{
    var result = await recognizer.RecognizeOnceAsync();
    var text = result.Text;
}
Answer Section:
- Audio Format - AudioStreamFormat.GetCompressedFormat: To handle MP3 data, you need the compressed format for streaming. GetCompressedFormat is the correct choice for MP3 encoding.
- Recognizer - SpeechRecognizer: For converting speech to text, use SpeechRecognizer. It is designed for speech-to-text tasks, whereas KeywordRecognizer and SpeakerRecognizer serve different purposes (keyword spotting and speaker identification). SpeechSynthesizer is used for text-to-speech, not speech recognition.
Question:
You are developing an internet-based training solution for remote learners.
Your company identifies that during the training, some learners leave their desk for long periods or become distracted.
You need to use a video and audio feed from each learner’s computer to detect whether the learner is present and paying attention. The solution must minimize development effort and identify each learner.
Which Azure Cognitive Services should you use for each requirement?
- From a learner’s video feed, verify whether the learner is present.
- From a learner’s facial expression in the video feed, verify whether the learner is paying attention.
- From a learner’s audio feed, detect whether the learner is talking.
Options: Face, Speech, Text Analytics
Answer:
- From a learner’s video feed, verify whether the learner is present → Face
- From a learner’s facial expression in the video feed, verify whether the learner is paying attention → Face
- From a learner’s audio feed, detect whether the learner is talking → Speech
Explanation:
- The Face API detects and identifies people, and can analyze presence and facial expressions (e.g., attention, emotions).
- The Speech API can detect when someone is speaking, even with streaming audio.
- Text Analytics is for processing text, not suited for real-time video or audio input.
Question:
You plan to provision a QnA Maker service in a new resource group named RG1.
In RG1, you create an App Service plan named AP1.
Which two Azure resources are automatically created in RG1 when you provision the QnA Maker service?
(Each correct answer presents part of the solution. Each correct selection is worth one point.)
A. Language Understanding
B. Azure SQL Database
C. Azure Storage
D. Azure Cognitive Search
E. Azure App Service
Answer:
D. Azure Cognitive Search
E. Azure App Service
Explanation:
When you provision QnA Maker, it automatically creates:
- An Azure Cognitive Search resource to index and search knowledge base content.
- An Azure App Service to host the QnA Maker runtime endpoint used for querying.
Note: Azure QnA Maker is now deprecated and replaced by Azure Language Studio (Question Answering), but this remains valid for legacy questions.
Question:
You are building a language model by using a Language Understanding (classic) service.
You create a new Language Understanding (classic) resource.
You need to add more contributors.
What should you use?
A. A conditional access policy in Azure Active Directory (Azure AD)
B. The Access control (IAM) page for the authoring resources in the Azure portal
C. The Access control (IAM) page for the prediction resources in the Azure portal
Answer:
B. The Access control (IAM) page for the authoring resources in the Azure portal
Explanation:
To add contributors who can create, edit, and manage models, you must assign them roles on the authoring resource, not the prediction resource.
Authoring is where models are developed, so IAM permissions apply there for contributor roles.
You have an Azure Cognitive Search service.
During the past 12 months, query volume steadily increased.
You discover that some search query requests are being throttled.
You need to reduce the likelihood that search query requests are throttled.
Solution: You add indexes.
Does this meet the goal?
A. Yes
B. No
Answer:
B. No
Explanation:
Adding indexes increases the number of searchable datasets but does not increase query capacity or throughput.
To reduce throttling, you should consider scaling up the service (e.g., adding replicas or partitions), which directly affects query handling capacity.
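The capacity arithmetic behind that recommendation: an Azure Cognitive Search service's billable search units are replicas times partitions, and it is the replica count that adds query throughput. A minimal sketch:

```python
# Sketch: search units (SU) = replicas x partitions.
# Replicas add query throughput (reduces throttling);
# partitions add storage and indexing capacity.
def search_units(replicas: int, partitions: int) -> int:
    return replicas * partitions

# Scaling from 1 replica to 3 to absorb the higher query volume:
before = search_units(1, 1)  # 1 SU
after = search_units(3, 1)   # 3 SU
```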
Question:
You need to develop an automated call handling system that can respond to callers in their own language.
The system will support only French and English.
Which Azure Cognitive Services should you use to meet each requirement?
Requirements:
- Detect the incoming language
- Respond in the caller’s own language
Available Services:
- Speaker Recognition
- Speech to Text
- Text Analytics
- Text to Speech
- Translator
✅ Answer:
- Detect the incoming language: Speech to Text
- Respond in the caller’s own language: Text to Speech
🔍 Why this is now correct:
- Speech to Text with AutoDetectSourceLanguageConfig: Azure’s Speech to Text now includes automatic spoken language identification using AutoDetectSourceLanguageConfig. This allows the service to detect whether the caller is speaking English or French directly from the audio stream, without needing a separate transcription step via Translator or Text Analytics. (Official doc: Azure Speech Language Identification)
- Text to Speech: Still the right service to generate spoken responses in the caller’s language.