test1 Flashcards

https://freedumps.certqueen.com/?s=DP-100 (69 cards)

1
Q

You are developing a hands-on workshop to introduce Docker for Windows to attendees. You need to ensure that workshop attendees can install Docker on their devices. Which two prerequisite components should attendees install on the devices? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Microsoft Hardware-Assisted Virtualization Detection Tool
Kitematic
BIOS-enabled virtualization
VirtualBox
Windows 10 64-bit Professional

A
1
Q

You are implementing a machine learning model to predict stock prices. The model uses a PostgreSQL database and requires GPU processing. You need to create a virtual machine that is pre-configured with the required tools. What should you do?

Create a Data Science Virtual Machine (DSVM) Windows edition.
Create a Geo AI Data Science Virtual Machine (Geo-DSVM) Windows edition.
Create a Deep Learning Virtual Machine (DLVM) Linux edition.
Create a Deep Learning Virtual Machine (DLVM) Windows edition.
Create a Data Science Virtual Machine (DSVM) Linux edition.

A
2
Q

You must store data in Azure Blob Storage to support Azure Machine Learning. You need to transfer the data into Azure Blob Storage. What are three possible ways to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Bulk Insert SQL Query
AzCopy
Python script
Azure Storage Explorer
Bulk Copy Program (BCP)

A
3
Q

You are moving a large dataset from Azure Machine Learning Studio to a Weka environment. You need to format the data for the Weka environment. Which module should you use?

Convert to CSV
Convert to Dataset
Convert to ARFF
Convert to SVMLight

A
4
Q

You are solving a classification task. You must evaluate your model on a limited data sample by using k-fold cross-validation. You start by configuring a k parameter as the number of splits. You need to configure the k parameter for the cross-validation. Which value should you use?

k=0.5
k=0
k=5
k=1

A
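Study note: a minimal scikit-learn sketch of 5-fold cross-validation; the iris data and logistic regression model are stand-ins, not part of the question.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # stand-in for the limited data sample
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)  # k=5 splits
print('mean accuracy: %.3f' % scores.mean())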
5
Q

You are creating a machine learning model. You have a dataset that contains null rows. You need to use the Clean Missing Data module in Azure Machine Learning Studio to identify and resolve the null and missing data in the dataset. Which parameter should you use?

Replace with mean
Remove entire column
Remove entire row
Hot Deck

A
6
Q

You are performing feature engineering on a dataset. You must add a feature named CityName and populate the column value with the text London. You need to add the new feature to the dataset. Which Azure Machine Learning Studio module should you use?

Edit Metadata
Preprocess Text
Execute Python Script
Latent Dirichlet Allocation

A
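Study note: a hedged sketch of adding such a constant-valued column inside the Execute Python Script module; azureml_main is the entry-point convention of Studio (classic).

def azureml_main(dataframe1=None, dataframe2=None):
    # populate the new CityName feature with the constant text 'London'
    dataframe1['CityName'] = 'London'
    return dataframe1,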
7
Q

You are creating a binary classification by using a two-class logistic regression model. You need to evaluate the model results for imbalance. Which evaluation metric should you use?

A. Relative Absolute Error
B. AUC Curve
C. Mean Absolute Error
D. Relative Squared Error
E. Accuracy
F. Root Mean Square Error

A
8
Q

You are building a machine learning model for translating English language textual content into French language textual content. You need to build and train the machine learning model to learn the sequence of the textual content. Which type of neural network should you use?

Multilayer Perceptions (MLPs)
Convolutional Neural Networks (CNNs)
Recurrent Neural Networks (RNNs)
Generative Adversarial Networks (GANs)

A
9
Q

You create a binary classification model. You need to evaluate the model performance. Which two metrics can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

relative absolute error
precision
accuracy
mean absolute error
coefficient of determination

A
10
Q

HOTSPOT -
You have an Azure blob container that contains a set of TSV files. The Azure blob container is registered as a datastore for an Azure Machine Learning service
workspace. Each TSV file uses the same data schema.
You plan to aggregate data for all of the TSV files together and then register the aggregated data as a dataset in an Azure Machine Learning workspace by using the
Azure Machine Learning SDK for Python.
You run the following code.
from azureml.core.workspace import Workspace
from azureml.core.datastore import Datastore
from azureml.core.dataset import Dataset
import pandas as pd
datastore_paths = (datastore, './data/*.tsv')
myDataset_1 = Dataset.File.from_files(path=datastore_paths)
myDataset_2 = Dataset.Tabular.from_delimited_files(path=datastore_paths, separator='\t')
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

The myDataset_1 dataset can be converted into a pandas dataframe by using the following method: myDataset_1.to_pandas_dataframe()

The myDataset_1.to_path() method returns an array of file paths for all of the TSV files in the dataset.

The myDataset_2 dataset can be converted into a pandas dataframe by using the following method: myDataset_2.to_pandas_dataframe()

A
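Study note: a hedged sketch of the API difference these statements test, continuing the snippet in the question. Only a TabularDataset exposes to_pandas_dataframe(); a FileDataset exposes file-level methods such as to_path().

file_paths = myDataset_1.to_path()       # FileDataset: array of paths for the matched TSV files
df = myDataset_2.to_pandas_dataframe()   # TabularDataset: parses the delimited files into a dataframe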
11
Q

You create a multi-class image classification deep learning model that uses a set of labeled images. You create a script file named train.py that uses the PyTorch 1.3 framework to train the model.

You must run the script by using an estimator. The code must not require any additional Python libraries to be installed in the environment for the estimator. The time required for model training must be minimized.

You need to define the estimator that will be used to run the script.

Which estimator type should you use?

TensorFlow
PyTorch
SKLearn
Estimator

A
12
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You create a model to forecast weather conditions based on historical data.

You need to create a pipeline that runs a processing script to load data from a datastore and pass the processed data to a machine learning model training script.

Solution: Run the following code:

Does the solution meet the goal?

Yes
No

A
13
Q

You create a multi-class image classification deep learning model that uses the PyTorch deep learning framework.

You must configure Azure Machine Learning Hyperdrive to optimize the hyperparameters for the classification model.

You need to define a primary metric to determine the hyperparameter values that result in the model with the best accuracy score.

Which three actions must you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Set the primary_metric_goal of the estimator used to run the bird_classifier_train.py script to maximize.
Add code to the bird_classifier_train.py script to calculate the validation loss of the model and log it as a float value with the key loss.
Set the primary_metric_goal of the estimator used to run the bird_classifier_train.py script to minimize.
Set the primary_metric_name of the estimator used to run the bird_classifier_train.py script to accuracy.
Set the primary_metric_name of the estimator used to run the bird_classifier_train.py script to loss.
Add code to the bird_classifier_train.py script to calculate the validation accuracy of the model and log it as a float value with the key accuracy.

A
14
Q

You are working with a time series dataset in Azure Machine Learning Studio.

You need to split your dataset into training and testing subsets by using the Split Data module.

Which splitting mode should you use?

Regular Expression Split
Split Rows with the Randomized split parameter set to true
Relative Expression Split
Recommender Split

A
15
Q

DRAG DROP -

An organization uses Azure Machine Learning service and wants to expand their use of machine learning.
You have the following compute environments. The organization does not want to create another compute environment.

Environment name Compute type
nb_server Compute Instance
aks_cluster Azure Kubernetes Service
mlc_cluster Machine Learning Compute
You need to determine which compute environment to use for the following scenarios.
Which compute types should you use? To answer, drag the appropriate compute environments to the correct scenarios. Each compute environment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Select and Place:

Environments

nb_server

aks_cluster

mlc_cluster

Answer Area - Scenario

Run an Azure Machine Learning Designer training pipeline.

Deploying a web service from the Azure Machine Learning designer.

Environment

[Drop-down for 1st scenario]
[Drop-down for 2nd scenario]

A
16
Q

You register a model that you plan to use in a batch inference pipeline.

The batch inference pipeline must use a ParallelRunStep step to process files in a file dataset. The script that the ParallelRunStep step runs must process six input files each time the inferencing function is called.

You need to configure the pipeline.

Which configuration setting should you specify in the ParallelRunConfig object for the ParallelRunStep step?

process_count_per_node="6"
node_count="6"
mini_batch_size="6"
error_threshold="6"

A
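Study note: a hedged sketch of a ParallelRunConfig in which every mini batch hands six files to the scoring script; the directory, script, environment, and compute names are placeholders.

from azureml.pipeline.steps import ParallelRunConfig

parallel_run_config = ParallelRunConfig(
    source_directory='scripts',        # placeholder folder containing the entry script
    entry_script='batch_scoring.py',   # placeholder scoring script
    mini_batch_size='6',               # six files per call to the inferencing function
    error_threshold=10,
    output_action='append_row',
    environment=batch_env,             # assumed Environment object
    compute_target=compute_target,     # assumed compute target
    node_count=2)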
17
Q

HOTSPOT -
You create an experiment in Azure Machine Learning Studio. You add a training dataset that contains 10,000 rows. The first 9,000 rows represent class 0 (90 percent).
The remaining 1,000 rows represent class 1 (10 percent).
The training set is imbalanced between the two classes. You must increase the number of training examples for class 1 to 4,000 by using 5 data rows. You add the
Synthetic Minority Oversampling Technique (SMOTE) module to the experiment.
You need to configure the module.
Which values should you use? To answer, select the appropriate options in the dialog box in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer Area
🔽 SMOTE
Label column
Selected columns: All labels
[Launch column selector button]
SMOTE percentage
Dropdown options:
0
300
3000
4000
Number of nearest neighbors
Dropdown options:
0
1
5
4000
Random seed: 0

A
18
Q

DRAG DROP -
You configure a Deep Learning Virtual Machine for Windows.
You need to recommend tools and frameworks to perform the following:
✑ Build deep neural network (DNN) models
✑ Perform interactive data exploration and visualization
Which tools and frameworks should you recommend? To answer, drag the appropriate tools to the correct tasks. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:

Tools
Vowpal Wabbit
PowerBI Desktop
Azure Data Factory
Microsoft Cognitive Toolkit

Answer Area
Task Tool
Build DNN models [Tool]
Enable interactive data exploration and visualization [Tool]

A
19
Q

HOTSPOT -
You are working on a classification task. You have a dataset indicating whether a student would like to play soccer and associated attributes. The dataset includes the
following columns:
Name Description
IsPlaySoccer Values can be 1 and 0.
Gender Values can be M or F.
PrevExamMarks Stores values from 0 to 100
Height Stores values in centimeters
Weight Stores values in kilograms

You need to classify variables by type.
Which variable should you add to each category? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Category Variables

Categorical variables:
Gender, IsPlaySoccer
Gender, PrevExamMarks, Height, Weight
PrevExamMarks, Height, Weight
IsPlaySoccer
Continuous variables:
Gender, IsPlaySoccer
Gender, PrevExamMarks, Height, Weight
PrevExamMarks, Height, Weight
IsPlaySoccer

A
20
Q

DRAG DROP -
You create a training pipeline using the Azure Machine Learning designer. You upload a CSV file that contains the data from which you want to train your model.
You need to use the designer to create a pipeline that includes steps to perform the following tasks:
✑ Select the training features using the pandas filter method.
✑ Train a model based on the naive_bayes.GaussianNB algorithm.
✑ Return only the Scored Labels column by using the query SELECT [Scored Labels] FROM t1;
Which modules should you use? To answer, drag the appropriate modules to the appropriate locations. Each module name may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Modules
Create Python Model

Train Model

Two Class Neural Network

Execute Python Script

Apply SQL Transformation

Select Columns in Dataset

Answer Area (Pipeline)
training-data →

Select Columns in Dataset (likely module to apply) →

Split Data →

One output goes to Train Model

The other goes directly to Score Model

Train Model ←

Receives input from the selected module (likely Two Class Neural Network)

Outputs to Score Model

Score Model →

Final step: (Empty module box — possibly Evaluate Model)

A
21
Q

You register a file dataset named csv_folder that references a folder. The folder includes multiple comma-separated values (CSV) files in an Azure storage blob container. You plan to use the following code to run a script that loads data from the file dataset.

You create and instantiate the following variables:

You have the following code:

You need to pass the dataset to ensure that the script can read the files it references.

Which code segment should you insert to replace the code comment?

inputs=[file_dataset.as_named_input('training_files').to_pandas_dataframe()],
inputs=[file_dataset.as_named_input('training_files').as_mount()],
script_params={'--training_files': file_dataset},
inputs=[file_dataset.as_named_input('training_files')],

A
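Study note: a hedged sketch of passing a file dataset to a generic estimator as a mounted input; the folder, script, and compute names are placeholders.

from azureml.train.estimator import Estimator

estimator = Estimator(
    source_directory='script_folder',   # placeholder
    entry_script='train.py',            # placeholder
    compute_target='aml-cluster',       # placeholder
    inputs=[file_dataset.as_named_input('training_files').as_mount()])
# inside the script, the mounted path is available via run.input_datasets['training_files']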
22
Q

HOTSPOT -
You are developing a linear regression model in Azure Machine Learning Studio. You run an experiment to compare different algorithms.
The following image displays the results dataset output:
Results Table
Algorithm Mean Absolute Error Root Mean Squared Error Relative Absolute Error Relative Squared Error
Bayesian Linear 3.276025 4.655442 0.511436 0.282138
Neural Network 2.676538 3.621476 0.417847 0.17073
Boosted Decision Tree 2.168847 2.878077 0.338589 0.107831
Linear 6.350005 8.720718 0.99133 0.99002
Decision Forest 2.390206 3.315164 0.373146 0.14307
Use the drop-down menus to select the answer choice that answers each question based on the information presented in the image.

NOTE: Each correct selection is worth one point.

Answer Area
Question 1:
Which algorithm minimizes differences between actual and predicted values?

Options:
Bayesian Linear Regression
Neural Network Regression
Boosted Decision Tree Regression
Linear Regression
Decision Forest Regression

Question 2:
Which approach should you use to find the best parameters for a Linear Regression model for the Online Gradient Descent method?

Options:
Set the Decrease learning rate option to True.
Set the Decrease learning rate option to False.
Set the Create trainer mode option to Parameter Range.
Increase the number of epochs.
Decrease the number of epochs.

A
23
Q

You use Azure Machine Learning designer to create a real-time service endpoint. You have a single Azure Machine Learning service compute resource. You train the model and prepare the real-time pipeline for deployment. You need to publish the inference pipeline as a web service.

Which compute type should you use?

HDInsight
Azure Databricks
Azure Kubernetes Services
the existing Machine Learning Compute resource
a new Machine Learning Compute resource

A
24
You are creating a new experiment in Azure Machine Learning Studio. You have a small dataset that has missing values in many columns. The data does not require the application of predictors for each column. You plan to use the Clean Missing Data module to handle the missing data. You need to select a data cleaning method. Which method should you use?

Synthetic Minority Oversampling Technique (SMOTE)
Replace using Probabilistic PCA
Replace using MICE
Normalization
25
You use Azure Machine Learning Studio to build a machine learning experiment. You need to divide data into two distinct datasets. Which module should you use?

Partition and Sample
Assign Data to Clusters
Group Data into Bins
Test Hypothesis Using t-Test
26
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You train and register a machine learning model. You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model. You need to deploy the web service.

Solution: Create an AciWebservice instance. Set the value of the ssl_enabled property to True. Deploy the model to the service.

Does the solution meet the goal?

Yes
No
27
HOTSPOT -
You must use an Azure Data Science Virtual Machine (DSVM) as a compute target. You need to attach an existing DSVM to the workspace by using the Azure Machine Learning SDK for Python. How should you complete the following code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer Area

from azureml.core.compute import RemoteCompute, ComputeTarget
compute_target_name = "dsvm"
config = RemoteCompute.__________(resource_id='', ssh_port=22, username='', private_key_file='./.ssh/id_rsa')
compute = ComputeTarget.__________(ws, compute_target_name, config)
compute.wait_for_completion(show_output=True)

First dropdown (RemoteCompute.__________): attach_configuration, get_credentials, detach
Second dropdown (ComputeTarget.__________): detach, create, attach
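Study note: a hedged sketch of how attaching an existing DSVM typically looks with the azureml-core SDK; the resource ID and user name are placeholders.

from azureml.core.compute import RemoteCompute, ComputeTarget

attach_config = RemoteCompute.attach_configuration(resource_id='<vm-resource-id>',
                                                   ssh_port=22,
                                                   username='<ssh-user>',
                                                   private_key_file='./.ssh/id_rsa')
compute = ComputeTarget.attach(ws, 'dsvm', attach_config)  # ws: existing Workspace object
compute.wait_for_completion(show_output=True)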
28
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You train a classification model by using a logistic regression algorithm. You must be able to explain the model's predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values.

Solution: Create a MimicExplainer.

Does the solution meet the goal?

Yes
No
29
HOTSPOT -
You are evaluating a Python NumPy array that contains six data points defined as follows:
data = [10, 20, 30, 40, 50, 60]
You must generate the following output by using the k-fold algorithm implementation in the Python Scikit-learn machine learning library:
train: [10 40 50 60], test: [20 30]
train: [20 30 40 60], test: [10 50]
train: [10 20 30 50], test: [40 60]
You need to implement a cross-validation to generate the output. How should you complete the code segment? To answer, select the appropriate code segment in the dialog box in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

from numpy import array
from sklearn.model_selection import [dropdown 1]
data = array([10, 20, 30, 40, 50, 60])
kfold = KFold(n_splits=[dropdown 2], shuffle=True, random_state=1)
for train, test in kfold.split([dropdown 3]):
    print('train: %s, test: %s' % (data[train], data[test]))

Dropdown 1 options: K-Means, k-fold, CrossValidation, ModelSelection
Dropdown 2 options: 1, 2, 3, 6
Dropdown 3 options: data, k-fold, array, train, test
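Study note: a runnable sketch of the completed segment; three splits over six points give the three train/test pairs of sizes 4 and 2 (exact fold membership depends on the shuffle seed).

from numpy import array
from sklearn.model_selection import KFold

data = array([10, 20, 30, 40, 50, 60])
kfold = KFold(n_splits=3, shuffle=True, random_state=1)  # 3 folds, test size 2
for train, test in kfold.split(data):
    print('train: %s, test: %s' % (data[train], data[test]))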
30
HOTSPOT
You are using a decision tree algorithm. You have trained a model that generalizes well at a tree depth equal to 10. You need to select the bias and variance properties of the model with varying tree depth values. Which properties should you select for each tree depth? To answer, select the appropriate options in the answer area.

Answer Area

Tree depth 5: Bias (High / Low / Identical), Variance (High / Low / Identical)
Tree depth 15: Bias (High / Low / Identical), Variance (High / Low / Identical)
31
You create a multi-class image classification deep learning model that uses a set of labeled images. You create a script file named train.py that uses the PyTorch 1.3 framework to train the model. You must run the script by using an estimator. The code must not require any additional Python libraries to be installed in the environment for the estimator. The time required for model training must be minimized. You need to define the estimator that will be used to run the script. Which estimator type should you use?

TensorFlow
PyTorch
SKLearn
Estimator
32
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You create a model to forecast weather conditions based on historical data. You need to create a pipeline that runs a processing script to load data from a datastore and pass the processed data to a machine learning model training script.

Solution: Run the following code:

Does the solution meet the goal?

Yes
No
33
HOTSPOT -
You deploy a model in Azure Container Instance. You must use the Azure Machine Learning SDK to call the model API. You need to invoke the deployed model using native SDK classes and methods. How should you complete the command? To answer, select the appropriate options in the answer areas.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

from azureml.core import Workspace
from azureml.core.webservice import [dropdown 1]
import json

ws = Workspace.from_config()
service_name = "mlmodel1-service"
service = Webservice(name=service_name, workspace=ws)
x_new = [[2, 101.5, 1, 24, 21], [1, 89.7, 4, 41, 21]]
input_json = json.dumps({"data": x_new})
[dropdown 2]

Dropdown 1 options: requests, Webservice, LocalWebservice
Dropdown 2 options:
predictions = service.run(input_json)
predictions = requests.post(service.scoring_uri, input_json)
predictions = service.deserialize(ws, input_json)
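Study note: a hedged sketch of invoking an ACI-deployed model through the SDK's Webservice class; the service name and input values follow the question.

from azureml.core import Workspace
from azureml.core.webservice import Webservice
import json

ws = Workspace.from_config()
service = Webservice(name='mlmodel1-service', workspace=ws)
x_new = [[2, 101.5, 1, 24, 21], [1, 89.7, 4, 41, 21]]
predictions = service.run(json.dumps({'data': x_new}))  # run() posts to the scoring endpoint
print(predictions)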
34
You are training machine learning models in Azure Machine Learning. You use Hyperdrive to tune the hyperparameters. In previous model training and tuning runs, many models showed similar performance. You need to select an early termination policy that meets the following requirements:
* accounts for the performance of all previous runs when evaluating the current run
* avoids comparing the current run with only the best performing run to date
Which two early termination policies should you use? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

Bandit
Median stopping
Default
Truncation selection
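Study note: a hedged sketch of one policy that evaluates the current run against all previous runs, using the Hyperdrive SDK.

from azureml.train.hyperdrive import MedianStoppingPolicy

# stop a run when its best primary metric falls below the running median of all runs
policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)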
35
You are building a recurrent neural network to perform a binary classification. The training loss, validation loss, training accuracy, and validation accuracy of each training epoch have been provided. You need to identify whether the classification model is overfitted. Which of the following is correct?

The training loss increases while the validation loss decreases when training the model.
The training loss decreases while the validation loss increases when training the model.
The training loss stays constant and the validation loss decreases when training the model.
The training loss stays constant and the validation loss stays at a constant value close to the training loss value when training the model.
36
You create a datastore named training_data that references a blob container in an Azure Storage account. The blob container contains a folder named csv_files in which multiple comma-separated values (CSV) files are stored. You have a script named train.py in a local folder named ./script that you plan to run as an experiment using an estimator. The script includes the following code to read data from the csv_files folder:

import os
import argparse
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from azureml.core import Run

run = Run.get_context()
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data reference')
args = parser.parse_args()
data_folder = args.data_folder
csv_files = os.listdir(data_folder)
training_data = pd.concat((pd.read_csv(os.path.join(data_folder, csv_file)) for csv_file in csv_files))
# code goes here to load the training data and train a logistic regression model

You have the following script:

from azureml.core import Workspace, Datastore, Experiment
from azureml.train.sklearn import SKLearn

ws = Workspace.from_config()
exp = Experiment(workspace=ws, name='csv_training')
ds = Datastore.get(ws, datastore_name='training_data')
data_ref = ds.as_dataset()

You need to configure the estimator for the experiment so that the script can read the data from a data reference named data_ref that references the csv_files folder in the training_data datastore. Which code should you use to configure the estimator?

A.
estimator = SKLearn(source_directory='./script',
                    inputs=[data_ref.as_named_input('data-folder').to_pandas_dataframe()],
                    compute_target='local',
                    entry_script='train.py')

B.
script_params = {'--data-folder': data_ref.as_mount()}
estimator = SKLearn(source_directory='./script',
                    script_params=script_params,
                    compute_target='local',
                    entry_script='train.py')

C.
estimator = SKLearn(source_directory='./script',
                    inputs=[data_ref.as_named_input('data-folder').as_mount()],
                    compute_target='local',
                    entry_script='train.py')

D.
script_params = {'--data-folder': data_ref.as_download(path_on_compute='csv_files')}
estimator = SKLearn(source_directory='./script',
                    script_params=script_params,
                    compute_target='local',
                    entry_script='train.py')

E.
estimator = SKLearn(source_directory='./script',
                    inputs=[data_ref.as_named_input('data-folder').as_download(path_on_compute='csv_files')],
                    compute_target='local',
                    entry_script='train.py')
37
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are a data scientist using Azure Machine Learning Studio. You need to normalize values to produce an output column into bins to predict a target column.

Solution: Apply a Quantiles normalization with a QuantileIndex normalization.

Does the solution meet the goal?

Yes
No
38
You have a comma-separated values (CSV) file containing data from which you want to train a classification model. You are using the Automated Machine Learning interface in Azure Machine Learning studio to train the classification model. You set the task type to Classification. You need to ensure that the Automated Machine Learning process evaluates only linear models. What should you do?

Add all algorithms other than linear ones to the blocked algorithms list.
Set the Exit criterion option to a metric score threshold.
Clear the option to perform automatic featurization.
Clear the option to enable deep learning.
Set the task type to Regression.
39
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()

The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment.

Solution: Replace the comment with the following code:

run.log_table('Label Values', label_vals)

Does the solution meet the goal?

Yes
No
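Study note: for contrast, a hedged sketch of the call usually used to record a list of values as a run metric; log_table expects a dictionary, while log_list takes a list.

# record the unique label values as a list metric on the run
run.log_list('Label Values', label_vals.tolist())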
40
You use the Azure Machine Learning SDK in a notebook to run an experiment using a script file in an experiment folder. The experiment fails. You need to troubleshoot the failed experiment. What are two possible ways to achieve this goal? Each correct answer presents a complete solution.

Use the get_metrics() method of the run object to retrieve the experiment run logs.
Use the get_details_with_logs() method of the run object to display the experiment run logs.
View the log files for the experiment run in the experiment folder.
View the logs for the experiment run in Azure Machine Learning studio.
Use the get_output() method of the run object to retrieve the experiment run logs.
41
You use the Azure Machine Learning SDK to run a training experiment that trains a classification model and calculates its accuracy metric. The model will be retrained each month as new data is available. You must register the model for use in a batch inference pipeline. You need to register the model and ensure that the models created by subsequent retraining experiments are registered only if their accuracy is higher than the currently registered model. What are two possible ways to achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

Specify a different name for the model each time you register it.
Register the model with the same name each time regardless of accuracy, and always use the latest version of the model in the batch inferencing pipeline.
Specify the model framework version when registering the model, and only register subsequent models if this value is higher.
Specify a property named accuracy with the accuracy metric as a value when registering the model, and only register subsequent models if their accuracy is higher than the accuracy property value of the currently registered model.
Specify a tag named accuracy with the accuracy metric as a value when registering the model, and only register subsequent models if their accuracy is higher than the accuracy tag value of the currently registered model.
42
You create a binary classification model. You need to evaluate the model performance. Which two metrics can you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

relative absolute error
precision
accuracy
mean absolute error
coefficient of determination
43
You deploy a model as an Azure Machine Learning real-time web service using the following code. The deployment fails. You need to troubleshoot the deployment failure by determining the actions that were performed during deployment and identifying the specific action that failed. Which code segment should you run?

service.get_logs()
service.state
service.serialize()
service.update_deployment_state()
44
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:
* /data/2018/Q1.csv
* /data/2018/Q2.csv
* /data/2018/Q3.csv
* /data/2018/Q4.csv
* /data/2019/Q1.csv
All files store data in the following format:
id,f1,f2,label
1,1,2,0
2,1,1,1
3,2,1,0
You run the following code:
You need to create a dataset named training_data and load the data from all files into a single data frame by using the following code:
Solution: Run the following code:
Does the solution meet the goal?

Yes
No
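Study note: a hedged sketch of loading all of those CSV files into one dataframe with a tabular dataset; the datastore name and wildcard pattern are assumptions.

from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()
ds = Datastore.get(ws, 'workspaceblobstore')   # assumed datastore name
paths = [(ds, 'data/*/*.csv')]                 # matches the Q*.csv files in both year folders
training_data = Dataset.Tabular.from_delimited_files(path=paths)
df = training_data.to_pandas_dataframe()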
45
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are creating a model to predict the price of a student's artwork depending on the following variables: the student's length of education, degree type, and art form. You start by creating a linear regression model. You need to evaluate the linear regression model.

Solution: Use the following metrics: Mean Absolute Error, Root Mean Absolute Error, Relative Absolute Error, Relative Squared Error, and the Coefficient of Determination.

Does the solution meet the goal?

Yes
No
46
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning to run an experiment that trains a classification model. You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:

hyperdrive = HyperDriveConfig(estimator=your_estimator,
                              hyperparameter_sampling=your_params,
                              policy=policy,
                              primary_metric_name='AUC',
                              primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                              max_total_runs=6,
                              max_concurrent_runs=4)

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted. You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.

Solution: Run the following code:

from sklearn.metrics import roc_auc_score
import logging
# code to train model omitted
auc = roc_auc_score(y_test, y_predicted)
logging.info("AUC: " + str(auc))

Does the solution meet the goal?

A. Yes
B. No
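Study note: for contrast, a hedged sketch of the logging pattern Hyperdrive can read; the metric must be logged to the run under the same name as primary_metric_name, not written to the Python logging stream.

from azureml.core import Run
from sklearn.metrics import roc_auc_score

run = Run.get_context()
auc = roc_auc_score(y_test, y_predicted)  # y_test, y_predicted as defined in the scenario
run.log('AUC', auc)                       # name matches primary_metric_name='AUC'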
47
HOTSPOT -
You are a lead data scientist for a project that tracks the health and migration of birds. You create a multi-image classification deep learning model that uses a set of labeled bird photos collected by experts. You plan to use the model to develop a cross-platform mobile app that predicts the species of bird captured by app users. You must test and deploy the trained model as a web service. The deployed model must meet the following requirements:
✑ An authenticated connection must not be required for testing.
✑ The deployed model must perform with low latency during inferencing.
✑ The REST endpoints must be scalable and should have a capacity to handle large numbers of requests when multiple end users are using the mobile application.
You need to verify that the web service returns predictions in the expected JSON format when a valid REST request is submitted. Which compute resources should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer Area

Test: ds-workstation notebook VM, aks-compute cluster, cpu-compute cluster, gpu-compute cluster
Production: ds-workstation notebook VM, aks-compute cluster, cpu-compute cluster, gpu-compute cluster
48
You are determining if two sets of data are significantly different from one another by using Azure Machine Learning Studio. Estimated values in one set of data may be more than or less than reference values in the other set of data. You must produce a distribution that has a constant Type I error as a function of the correlation. You need to produce the distribution. Which type of distribution should you produce?

Paired t-test with a two-tail option
Unpaired t-test with a two-tail option
Paired t-test with a one-tail option
Unpaired t-test with a one-tail option
49
You are analyzing a dataset containing historical data from a local taxi company. You are developing a regression model. You must predict the fare of a taxi trip. You need to select performance metrics to correctly evaluate the regression model. Which two metrics can you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

an F1 score that is high
an R-Squared value close to 1
an R-Squared value close to 0
a Root Mean Square Error value that is high
a Root Mean Square Error value that is low
an F1 score that is low
50
HOTSPOT -
A coworker registers a datastore in a Machine Learning services workspace by using the following code:

Datastore.register_azure_blob_container(workspace=ws,
                                        datastore_name='demo_datastore',
                                        container_name='demo_datacontainer',
                                        account_name='demo_account',
                                        account_key='0A0A0A-0A0A00A-0A00A0A0A0A0A',
                                        create_if_not_exists=True)

You need to write code to access the datastore from a notebook. How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

import azureml.core
from azureml.core import Workspace, Datastore
ws = Workspace.from_config()
datastore = Datastore.get(ws, 'demo_datastore')

Dropdown options:
First dropdown: Workspace, Datastore, Experiment, Run
Second dropdown: ws, run, experiment, log
Third dropdown: demo_datastore, demo_datacontainer, demo_account, Datastore
51
HOTSPOT -
You are using Azure Machine Learning to train machine learning models. You need a compute target on which to remotely run the training script. You run the following Python code:

from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

the_cluster_name = "NewCompute"
config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2', max_nodes=3)
the_cluster = ComputeTarget.create(ws, the_cluster_name, config)

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

The compute is created in the same region as the Machine Learning service workspace.
The compute resource created by the code is displayed as a compute cluster in Azure Machine Learning studio.
The minimum number of nodes will be zero.
52
HOTSPOT -
You use an Azure Machine Learning workspace. You create the following Python code:

from azureml.core import ScriptRunConfig

src = ScriptRunConfig(source_directory=project_folder, script='train.py', environment=myenv)

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Statements
The default environment will be created.
The training script will run on local compute.
A script run configuration runs a training script named train.py located in a directory defined by the project_folder variable.
53
You use the following code to run a script as an experiment in Azure Machine Learning: You must identify the output files that are generated by the experiment run. You need to add code to retrieve the output file names. Which code segment should you add to the script?

files = run.get_properties()
files = run.get_file_names()
files = run.get_details_with_logs()
files = run.get_metrics()
files = run.get_details()
54
You must use the Azure Machine Learning SDK to interact with data and experiments in the workspace. You need to configure the config.json file to connect to the workspace from the Python environment. You create the following config.json file. Which two additional parameters must you add to the config.json file in order to connect to the workspace? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

subscription_Id
Key
resource_group
region
Login
55
Topic 1, Case Study 1

Overview

You are a data scientist in a company that provides data science for professional sporting events. Models will use global and local market data to meet the following business goals:
* Understand sentiment of mobile device users at sporting events based on audio from crowd reactions.
* Assess a user's tendency to respond to an advertisement.
* Customize styles of ads served on mobile devices.
* Use video to detect penalty events.

Current environment

Requirements

* Media used for penalty event detection will be provided by consumer devices. Media may include images and videos captured during the sporting event and shared using social media. The images and videos will have varying sizes and formats.
* The data available for model building comprises seven years of sporting event media. The sporting event media includes: recorded videos, transcripts of radio commentary, and logs from related social media feeds captured during the sporting events.
* Crowd sentiment will include audio recordings submitted by event attendees in both mono and stereo formats.

Advertisements

* Ad response models must be trained at the beginning of each event and applied during the sporting event.
* Market segmentation models must optimize for similar ad response history.
* Sampling must guarantee mutual and collective exclusivity of local and global segmentation models that share the same features.
* Local market segmentation models will be applied before determining a user's propensity to respond to an advertisement.
* Data scientists must be able to detect model degradation and decay.
* Ad response models must support non-linear boundaries of features.
* The ad propensity model uses a cut threshold of 0.45, and retraining occurs if weighted Kappa deviates from 0.1 +/- 5%.
* The ad propensity model uses cost factors shown in the following diagram:
* The ad propensity model uses proposed cost factors shown in the following diagram:
Performance curves of current and proposed cost factor scenarios are shown in the following diagram:

Penalty detection and sentiment

* Data scientists must build an intelligent solution by using multiple machine learning models for penalty event detection.
* Data scientists must build notebooks in a local environment using automatic feature engineering and model building in machine learning pipelines.
* Notebooks must be deployed to retrain by using Spark instances with dynamic worker allocation.
* Notebooks must execute with the same code on new Spark instances to recode only the source of the data.
* Global penalty detection models must be trained by using dynamic runtime graph computation during training.
* Local penalty detection models must be written by using BrainScript.
* Experiments for local crowd sentiment models must combine local penalty detection data.
* Crowd sentiment models must identify known sounds such as cheers and known catch phrases. Individual crowd sentiment models will detect similar sounds.
* All shared features for local models are continuous variables.
* Shared features must use double precision. Subsequent layers must have aggregate running mean and standard deviation metrics available.

Segments

During the initial weeks in production, the following was observed:
* Ad response rates declined.
* Drops were not consistent across ad styles.
* The distribution of features across training and production data are not consistent.
Analysis shows that of the 100 numeric features on user location and behavior, the 47 features that come from location sources are being used as raw features. A suggested experiment to remedy the bias and variance issue is to engineer 10 linearly uncorrelated features.

Penalty detection and sentiment findings

* Initial data discovery shows a wide range of densities of target states in the training data used for crowd sentiment models.
* All penalty detection models show inference phases using a Stochastic Gradient Descent (SGD) are running too slow.
* Audio samples show that the length of a catch phrase varies between 25%-47%, depending on region.
* The performance of the global penalty detection models shows lower variance but higher bias when comparing training and validation sets. Before implementing any feature changes, you must confirm the bias and variance using all training and validation cases.

DRAG DROP -
You need to define an evaluation strategy for the crowd sentiment models. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Actions
Add new features for retraining supervised models.
Filter labeled cases for retraining using the shortest distance from centroids.
Evaluate the changes in correlation between model error rate and centroid distance.
Impute unavailable features with centroid-aligned models.
Filter labeled cases for retraining using the longest distance from centroids.
Remove features before retraining supervised models.

Answer Area
56
You need to implement a feature engineering strategy for the crowd sentiment local models. What should you do?

Apply an analysis of variance (ANOVA).
Apply a Pearson correlation coefficient.
Apply a Spearman correlation coefficient.
Apply a linear discriminant analysis.
57
HOTSPOT
You have a Python data frame named salesData in the following format:

   shop    2017  2018
0  Shop X  34    25
1  Shop Y  65    76
2  Shop Z  48    55

The data frame must be unpivoted to a long data format as follows:

   shop    year  value
0  Shop X  2017  34
1  Shop Y  2017  65
2  Shop Z  2017  48
3  Shop X  2018  25
4  Shop Y  2018  76
5  Shop Z  2018  55

You need to use the pandas.melt() function in Python to perform the transformation. How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer Area

import pandas as pd
salesData = pd.melt( ___ , id_vars= ___ , value_vars= ___ )

Dropdown options:
First blank: dataFrame, pandas, salesData, year
Second blank (id_vars): shop, year, value, Shop X, Shop Y, Shop Z
Third blank (value_vars): 'shop', 'year', ['year'], ['2017', '2018']
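Study note: a runnable sketch of the unpivot; var_name and value_name (not part of the question's blanks) name the output columns.

import pandas as pd

salesData = pd.DataFrame({'shop': ['Shop X', 'Shop Y', 'Shop Z'],
                          '2017': [34, 65, 48],
                          '2018': [25, 76, 55]})
longData = pd.melt(salesData, id_vars='shop', value_vars=['2017', '2018'],
                   var_name='year', value_name='value')
print(longData)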
58
HOTSPOT
You create an Azure Machine Learning workspace. You need to detect data drift between a baseline dataset and a subsequent target dataset by using the DataDriftDetector class. How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

from azureml.core import Workspace, Dataset
from datetime import datetime

ws = Workspace.from_config()
dset = Dataset.get_by_name(ws, 'target')
baseline = dset.time_before(datetime(2021, 2, 1))
features = ['windAngle', 'windSpeed', 'temperature', 'stationName']
monitor = DataDriftDetector.__________(ws, 'drift-monitor', baseline, dset,
    compute_target='cpu-cluster', frequency='Week', feature_list=None,
    drift_threshold=.6, latency=24)

First dropdown options: backfill, create_from_datasets, create_from_model

monitor = DataDriftDetector.get_by_name(ws, 'drift-monitor')
monitor = monitor.update(feature_list=features)
complete = monitor.__________(datetime(2021, 1, 1), datetime.today())

Second dropdown options: backfill, list, update
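Study note: a hedged sketch of a completed drift monitor with the azureml-datadrift package, continuing the snippet in the question (ws, baseline, and dset as defined above).

from datetime import datetime
from azureml.datadrift import DataDriftDetector

monitor = DataDriftDetector.create_from_datasets(
    ws, 'drift-monitor', baseline, dset,
    compute_target='cpu-cluster', frequency='Week',
    feature_list=None, drift_threshold=.6, latency=24)
backfill_run = monitor.backfill(datetime(2021, 1, 1), datetime.today())  # runs over a past date range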
58
You need to implement a scaling strategy for the local penalty detection data. Which normalization type should you use?

Streaming
Weight
Batch
Cosine
59
DRAG DROP
You are developing a machine learning solution by using the Azure Machine Learning designer. You need to create a web service that applications can use to submit data feature values and retrieve a predicted label. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions
Create and run a batch inference pipeline.
Create and run a real-time inference pipeline.
Deploy a service to an inference cluster.
Create and run a training pipeline.

Answer area
1
2
3
60
DRAG DROP
You manage an Azure Machine Learning workspace. You train a model named model1. You must identify the features to modify for a differing model prediction result. You need to configure the Responsible AI (RAI) dashboard for model1. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions
Add the error analysis component to the Responsible AI Insights dashboard.
Use the Gather Responsible AI Insights dashboard component to present the dashboard.
Add the counterfactuals component to the Responsible AI Insights dashboard.
Load and configure the Responsible AI Insights dashboard constructor component.
Add the explanation component to the Responsible AI Insights dashboard.
Add the causal component to the Responsible AI Insights dashboard.

Answer Area
61
DRAG DROP -
You need to define a modeling strategy for ad response. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Actions
Implement a K-Means Clustering model.
Use the raw score as a feature in a Score Matchbox Recommender model.
Use the cluster as a feature in a Decision Jungle model.
Use the raw score as a feature in a Logistic Regression model.
Implement a Sweep Clustering model.

Answer area
61
You need to select an environment that will meet the business and data requirements. Which environment should you use?

Azure HDInsight with Spark MLlib
Azure Cognitive Services
Azure Machine Learning Studio
Microsoft Machine Learning Server
61
You need to implement a model development strategy to determine a user's tendency to respond to an ad. Which technique should you use?

Use a Relative Expression Split module to partition the data based on centroid distance.
Use a Relative Expression Split module to partition the data based on distance travelled to the event.
Use a Split Rows module to partition the data based on distance travelled to the event.
Use a Split Rows module to partition the data based on centroid distance.
61
You need to resolve the local machine learning pipeline performance issue. What should you do?

Increase Graphic Processing Units (GPUs).
Increase the learning rate.
Increase the training iterations.
Increase Central Processing Units (CPUs).
62
You need to implement a new cost factor scenario for the ad response models as illustrated in the performance curve exhibit. Which technique should you use?

Set the threshold to 0.5 and retrain if weighted Kappa deviates +/- 5% from 0.45.
Set the threshold to 0.05 and retrain if weighted Kappa deviates +/- 5% from 0.5.
Set the threshold to 0.2 and retrain if weighted Kappa deviates +/- 5% from 0.6.
Set the threshold to 0.75 and retrain if weighted Kappa deviates +/- 5% from 0.15.
63
HOTSPOT -
You create a Python script named train.py and save it in a folder named scripts. The script uses the scikit-learn framework to train a machine learning model. You must run the script as an Azure Machine Learning experiment on your local workstation. You need to write Python code to initiate an experiment that runs the train.py script. How should you complete the code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core import Workspace

ws = Workspace.from_config()
py_sk = Environment('sklearn-training')
pkgs = CondaDependencies.create(pip_packages=['scikit-learn', 'azureml-defaults'])
py_sk.python.conda_dependencies = pkgs

script_config = ScriptRunConfig(
    __________ = 'scripts',    # Dropdown options: script, source_directory, resume_from, arguments
    __________ = 'train.py',   # Dropdown options: script, arguments, environment, compute_target
    __________ = py_sk         # Dropdown options: arguments, resume_from, environment, compute_target
)

experiment = Experiment(workspace=ws, name='training-experiment')
run = experiment.submit(config=script_config)
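Study note: a hedged sketch of the completed run configuration, continuing the snippet in the question.

script_config = ScriptRunConfig(source_directory='scripts',
                                script='train.py',
                                environment=py_sk)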
64
DRAG DROP -
You need to define an evaluation strategy for the crowd sentiment models. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

Actions
Define a cross-entropy function activation.
Add cost functions for each target state.
Evaluate the classification error metric.
Evaluate the distance error metric.
Add cost functions for each component metric.
Define a sigmoid loss function activation.

Answer Area