test2 Flashcards

https://www.dumpsbase.com/freedumps/?s=DP+100 (69 cards)

1
Q

Topic 2, Case Study 2

Case study

Overview

You are a data scientist for Fabrikam Residences, a company specializing in quality private and commercial property in the United States. Fabrikam Residences is considering expanding into Europe and has asked you to investigate prices for private residences in major European cities. You use Azure Machine Learning Studio to measure the median value of properties. You produce a regression model to predict property prices by using the Linear Regression and Bayesian Linear Regression modules.

Datasets

There are two datasets in CSV format that contain property details for two cities, London and Paris, with the following columns:

The two datasets have been added to Azure Machine Learning Studio as separate datasets and included as the starting point of the experiment.

Dataset issues

The AccessibilityToHighway column in both datasets contains missing values. The missing data must be replaced with new data that is modeled conditionally on the other variables in the data before the missing values are filled in.

Columns in each dataset contain missing and null values. The dataset also contains many outliers. The Age column has a high proportion of outliers. You need to remove the rows that have outliers in the Age column. The MedianValue and AvgRoomsinHouse columns both hold data in numeric format. You need to select a feature selection algorithm to analyze the relationship between the two columns in more detail.

Model fit

The model shows signs of overfitting. You need to produce a more refined regression model that reduces the overfitting.

Experiment requirements

You must set up the experiment to cross-validate the Linear Regression and Bayesian Linear Regression modules to evaluate performance.

In each case, the predictor of the dataset is the column named MedianValue. An initial investigation showed that the datasets are identical in structure apart from the MedianValue column. The smaller Paris dataset contains the MedianValue in text format, whereas the larger London dataset contains the MedianValue in numerical format. You must ensure that the datatype of the MedianValue column of the Paris dataset matches the structure of the London dataset.

You must prioritize the columns of data for predicting the outcome. You must use non-parametric statistics to measure the relationships.

You must use a feature selection algorithm to analyze the relationship between the MedianValue and AvgRoomsinHouse columns.

Model training

Given a trained model and a test dataset, you need to compute the permutation feature importance scores of feature variables. You need to set up the Permutation Feature Importance module to select the correct metric to investigate the model’s accuracy and replicate the findings.

You want to configure hyperparameters in the model learning process to speed up the learning phase. In addition, this configuration should cancel the lowest-performing runs at each evaluation interval, thereby directing effort and resources towards models that are more likely to be successful.

You are concerned that the model might not efficiently use compute resources in hyperparameter tuning. You are also concerned that hyperparameter tuning might increase the overall tuning time. Therefore, you need to implement an early stopping criterion on models that provides savings without terminating promising jobs.
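As a study note: in the Azure ML SDK, a conservative early-termination setup matching this requirement could look like the sketch below (the interval values here are illustrative assumptions, not taken from the case study).

from azureml.train.hyperdrive import MedianStoppingPolicy

# The median stopping policy cancels runs whose best primary metric falls
# below the running median of averages across runs, which saves compute
# without cutting off promising jobs. Interval values are assumptions.
early_termination_policy = MedianStoppingPolicy(evaluation_interval=1,
                                                delay_evaluation=5)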

Testing

You must produce multiple partitions of a dataset based on sampling using the Partition and Sample module in Azure Machine Learning Studio. You must create three equal partitions for cross-validation. You must also configure the cross-validation process so that the rows in the test and training datasets are divided evenly by properties that are near each city’s main river. The data that identifies that a property is near a river is held in the column named NextToRiver. You want to complete this task before the data goes through the sampling process.

When you train a Linear Regression module using a property dataset that shows data for property prices for a large city, you need to determine the best features to use in a model. You can choose standard metrics provided to measure performance before and after the feature importance process completes. You must ensure that the distribution of the features across multiple training models is consistent.

Data visualization

You need to provide the test results to the Fabrikam Residences team. You create data visualizations to aid in presenting the results.

You must produce a Receiver Operating Characteristic (ROC) curve to conduct a diagnostic test evaluation of the model. You need to select appropriate methods for producing the ROC curve in Azure Machine Learning Studio to compare the Two-Class Decision Forest and the Two-Class Decision Jungle modules with one another.
DRAG DROP -
You need to define an evaluation strategy for the crowd sentiment models.
DRAG DROP -
You need to define an evaluation strategy for the crowd sentiment models.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct
order.
Select and Place:
Actions

Define a cross-entropy function activation.
Add cost functions for each target state.
Evaluate the classification error metric.
Evaluate the distance error metric.
Add cost functions for each component metric.
Define a sigmoid loss function activation.

Answer Area

A
2
Q

DRAG DROP -
You need to correct the model fit issue.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct
order.
Select and Place:

Actions

Add the Ordinal Regression module.
Add the Two-Class Averaged Perceptron module.
Augment the data.
Add the Bayesian Linear Regression module.
Decrease the memory size for L-BFGS.
Add the Multiclass Decision Jungle module.
Configure the regularization weight.

Answer Area

A
3
Q

HOTSPOT -
You need to set up the Permutation Feature Importance module according to the model training requirements.
Which properties should you select? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Tune Model Hyperparameters
Specify parameter sweeping mode: Random sweep
Maximum number of runs on random sweep: 5
Random seed: 0
Label column
Selected columns
Column names: MedianValue

Launch column selector
Metric for measuring performance for classification

F-score
Precision
Recall
Accuracy
Metric for measuring performance for regression

Root of mean squared error
R-squared
Mean zero one error
Mean absolute error

A
4
Q

HOTSPOT -
You need to configure the Permutation Feature Importance module for the model training requirements.
What should you do? To answer, select the appropriate options in the dialog box in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer Area
Permutation Feature Importance
Random seed
0
500

Regression - Root Mean Square Error
Regression - R-squared
Regression - Mean Zero One Error
Regression - Mean Absolute Error

A
5
Q

HOTSPOT -
You need to configure the Edit Metadata module so that the structures of the datasets match.
Which configuration options should you select? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Properties

Project
Edit Metadata
Column
Selected columns:
Column names: MedianValue
Launch column selector

Floating point
DateTime
TimeSpan
Integer

Unchanged
Make Categorical
Make Uncategorical

Fields
5

A
6
Q

HOTSPOT -
You need to identify the methods for dividing the data according to the testing requirements.
Which properties should you select? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Properties

Project
Partition and Sample
Assign to Folds
Sampling
Head
Partition or sample mode
Use replacement in the partitioning (uncheck)
Randomized split (checked)
Random seed
0

True
False
Partition evenly
Partition with custom partitions
Specify the partitioner method
Partition evenly

Specify number of folds to split evenly into
3

A
7
Q

HOTSPOT -
You need to replace the missing data in the AccessibilityToHighway columns.
How should you configure the Clean Missing Data module? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Properties

Project
Clean Missing Data
Columns to be cleaned
Selected columns:
Column names: AccessibilityToHighway
Launch column selector
Minimum missing value ratio
0
Maximum missing value ratio
1
Cleaning mode:
Replace using MICE
Replace with Mean
Replace with Median
Replace with Mode

Cols with all missing values:
Propagate
Remove

Generate missing value indicator column
Number of iterations

5

A
8
Q

You need to visually identify whether outliers exist in the Age column and quantify the outliers before the outliers are removed.
Which three Azure Machine Learning Studio modules should you use? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Create Scatterplot
B. Summarize Data
C. Clip Values
D. Replace Discrete Values
E. Build Counting Transform

A
9
Q

DRAG DROP -
You are building an experiment using the Azure Machine Learning designer.
You split a dataset into training and testing sets. You select the Two-Class Boosted Decision Tree as the algorithm.
You need to determine the Area Under the Curve (AUC) of the model.
Which three modules should you use in sequence? To answer, move the appropriate modules from the list of modules to the answer area and arrange them in the correct order.
Select and Place:
Modules

Export Data
Tune Model Hyperparameters
Cross Validate Model
Evaluate Model
Score Model
Train Model

Answer Area

A
10
Q

You need to select a feature extraction method.

Which method should you use?

Spearman correlation
Mutual information
Mann-Whitney test
Pearson’s correlation

A
11
Q

DRAG DROP -
You create an image classification model in Azure Machine Learning Studio.
You need to deploy the model as a containerized web service.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions

Start the container
Create a container image
Create an Azure Batch AI account
Get the HTTP endpoint of the web service
Register the container image
Train the model

Answer Area

A
12
Q

You need to select a feature extraction method.

Which method should you use?

Mutual information
Mood’s median test
Kendall correlation
Permutation Feature Importance

A
13
Q

DRAG DROP -
You create an image classification model in Azure Machine Learning Studio.
You need to deploy the model as a containerized web service.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions

Start the container
Create a container image
Create an Azure Batch AI account
Get the HTTP endpoint of the web service
Register the container image
Train the model

Answer Area

A
14
Q

DRAG DROP -
You need to implement early stopping criteria as stated in the model training requirements.
Which three code segments should you use to develop the solution? To answer, move the appropriate code segments from the list of code segments to the answer area
and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive the credit for any of the correct orders you select.
Select and Place:

Code segments

early_termination_policy = TruncationSelectionPolicy(evaluation_interval=1, truncation_percentage=20, delay_evaluation=5)

import BanditPolicy

import TruncationSelectionPolicy

early_termination_policy = BanditPolicy(slack_factor=0.1, evaluation_interval=1, delay_evaluation=5)

from azureml.train.hyperdrive

early_termination_policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)

import MedianStoppingPolicy

Answer Area
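For orientation only, one syntactically valid assembly of three of the segments above is sketched here (a study aid, not a graded answer key):

# Sketch: the import statement is split across two drag-drop segments above.
from azureml.train.hyperdrive import MedianStoppingPolicy

early_termination_policy = MedianStoppingPolicy(evaluation_interval=1,
                                                delay_evaluation=5)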

A
15
Q

Topic 3, Mix Questions

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning to run an experiment that trains a classification model.

You want to use Hyperdrive to find parameters that optimize the AUC metric for the model.

You configure a HyperDriveConfig for the experiment by running the following code:

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted.

You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.

Solution: Run the following code:

Does the solution meet the goal?

Yes
No

A
16
Q

You are a data scientist creating a linear regression model.

You need to determine how closely the data fits the regression line.

Which metric should you review?

Coefficient of determination
Recall
Precision
Mean absolute error
Root Mean Square Error

A
17
Q

You train and register a model in your Azure Machine Learning workspace.

You must publish a pipeline that enables client applications to use the model for batch inferencing. You must use a pipeline with a single ParallelRunStep step that runs a Python inferencing script to get predictions from the input data.

You need to create the inferencing script for the ParallelRunStep pipeline step.

Which two functions should you include? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

run(mini_batch)
main()
batch()
init()
score(mini_batch)
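For context, a ParallelRunStep entry script is built around an init() function that loads the model once per worker and a run(mini_batch) function that scores each batch. A minimal sketch (the model name and scoring logic are placeholders):

import joblib
import pandas as pd
from azureml.core.model import Model

def init():
    # Runs once per worker process: load the registered model into memory.
    global model
    model_path = Model.get_model_path('my-model')  # hypothetical model name
    model = joblib.load(model_path)

def run(mini_batch):
    # Runs once per mini-batch: return one result per input file.
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        results.append(str(model.predict(data)))
    return results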

A
18
Q

You are evaluating a completed binary classification machine learning model.

You need to use the precision as the evaluation metric.

Which visualization should you use?

scatter plot
coefficient of determination
Receiver Operating Characteristic (ROC) curve
Gradient descent

A
19
Q

You run an experiment that uses an AutoMLConfig class to define an automated machine learning task with a maximum of ten model training iterations. The task will attempt to find the best performing model based on a metric named accuracy.

You submit the experiment with the following code:
from azureml.core.experiment import Experiment
automl_experiment = Experiment(ws, 'automl_experiment')
automl_run = automl_experiment.submit(automl_config, show_output=True)

You need to create Python code that returns the best model that is generated by the automated machine learning task.

Which code segment should you use?

best_model = automl_run.get_details()
best_model = automl_run.get_output()[1]
best_model = automl_run.get_file_names()[1]
best_model = automl_run.get_metrics()
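As a reference point, AutoMLRun.get_output() returns a (best_run, fitted_model) tuple, which is why indexing appears in the options (a sketch of the usual pattern):

# get_output() returns the best run and the corresponding fitted model.
best_run, fitted_model = automl_run.get_output()
best_model = automl_run.get_output()[1]  # same model, accessed by index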

A
20
Q

You plan to run a script as an experiment using a Script Run Configuration. The script uses modules from the scipy library as well as several Python packages that are not typically installed in a default conda environment.

You plan to run the experiment on your local workstation for small datasets and scale out the experiment by running it on more powerful remote compute clusters for larger datasets.

You need to ensure that the experiment runs successfully on local and remote compute with the least administrative effort.

What should you do?

Create and register an Environment that includes the required packages. Use this Environment for all experiment runs.
Always run the experiment with an Estimator by using the default packages.
Do not specify an environment in the run configuration for the experiment. Run the experiment by using the default environment.
Create a config.yaml file defining the conda packages that are required and save the file in the experiment folder.
Create a virtual machine (VM) with the required Python configuration and attach the VM as a compute target. Use this compute target for all experiment runs.
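For reference, creating and registering a reusable Environment follows this shape (a sketch; the environment name and package list are illustrative assumptions):

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Define dependencies once; the registered environment can then be reused
# in run configurations for both local and remote compute targets.
env = Environment(name='experiment-env')  # hypothetical name
env.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['scipy', 'scikit-learn'])
env.register(workspace=ws)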

A
21
Q

You are a data scientist building a deep convolutional neural network (CNN) for image classification.

The CNN model you built shows signs of overfitting.

You need to reduce overfitting and converge the model to an optimal fit.

Which two actions should you perform? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Reduce the amount of training data.
Add an additional dense layer with 64 input units.
Add L1/L2 regularization.
Use training data augmentation.
Add an additional dense layer with 512 input units.

A
22
Q

You are creating a new Azure Machine Learning pipeline using the designer.

The pipeline must train a model using data in a comma-separated values (CSV) file that is published on a website. You have not created a dataset for this file.

You need to ingest the data from the CSV file into the designer pipeline using the minimal administrative effort.

Which module should you add to the pipeline in Designer?

Convert to CSV
Enter Data Manually
Import Data
Dataset

A
23
Q

You are building a regression model for estimating the number of calls during an event.

You need to determine whether the feature values achieve the conditions to build a Poisson regression model.

Which two conditions must the feature set contain? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

The label data must be a negative value.
The label data can be positive or negative.
The label data must be a positive value.
The label data must be non-discrete.
The data must be whole numbers.

A
24
Q

You create an Azure Machine Learning compute resource to train models.

The compute resource is configured as follows:

✑ Minimum nodes: 2

✑ Maximum nodes: 4

You must decrease the minimum number of nodes and increase the maximum number of nodes to the following values:

✑ Minimum nodes: 0

✑ Maximum nodes: 8

You need to reconfigure the compute resource.

What are three possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Use the Azure Machine Learning studio.
Run the update method of the AmlCompute class in the Python SDK.
Use the Azure portal.
Use the Azure Machine Learning designer.
Run the refresh_state() method of the BatchCompute class in the Python SDK
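For reference, the AmlCompute update pattern looks roughly like this (a sketch; the cluster name is a placeholder):

from azureml.core.compute import AmlCompute

# Retrieve the existing cluster and change its scale settings in place.
compute_target = AmlCompute(workspace=ws, name='train-cluster')  # hypothetical name
compute_target.update(min_nodes=0, max_nodes=8)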

A
25
You create a binary classification model. The model is registered in an Azure Machine Learning workspace. You use the Azure Machine Learning Fairness SDK to assess the model fairness. You develop a training script for the model on a local machine.

You need to load the model fairness metrics into Azure Machine Learning studio. What should you do?

A. Implement the download_dashboard_by_upload_id function
B. Implement the create_group_metric_set function
C. Implement the upload_dashboard_dictionary function
D. Upload the training script
26
You create a binary classification model by using Azure Machine Learning Studio. You must tune hyperparameters by performing a parameter sweep of the model. The parameter sweep must meet the following requirements:

✑ iterate all possible combinations of hyperparameters
✑ minimize computing resources required to perform the sweep

You need to perform a parameter sweep of the model. Which parameter sweep mode should you use?

A. Random sweep
B. Sweep clustering
C. Entire grid
D. Random grid
27
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are creating a new experiment in Azure Machine Learning Studio. One class has a much smaller number of observations than the other classes in the training set. You need to select an appropriate data sampling strategy to compensate for the class imbalance.

Solution: You use the Scale and Reduce sampling mode.

Does the solution meet the goal?

Yes
No
28
HOTSPOT -
You plan to use Hyperdrive to optimize the hyperparameters selected when training a model. You create the following code to define options for the hyperparameter experiment:

import azureml.train.hyperdrive.parameter_expressions as pe
from azureml.train.hyperdrive import GridParameterSampling, HyperDriveConfig

param_sampling = GridParameterSampling({
    "max_depth": pe.choice(6, 7, 8, 9),
    "learning_rate": pe.choice(0.05, 0.1, 0.15)})

hyperdrive_run_config = HyperDriveConfig(
    estimator=estimator,
    hyperparameter_sampling=param_sampling,
    policy=None,
    primary_metric_name="auc",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=50,
    max_concurrent_runs=4)

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Answer Area

There will be 50 runs for this hyperparameter tuning experiment.
You can use the policy parameter in the HyperDriveConfig class to specify a security policy.
The experiment will create a run for every possible value for the learning rate parameter between 0.05 and 0.15.
29
You write five Python scripts that must be processed in the order specified in Exhibit A, which allows the same modules to run in parallel but waits for modules with dependencies. You must create an Azure Machine Learning pipeline using the Python SDK, because you want the script that creates the pipeline to be tracked in your version control system. You have created five PythonScriptSteps and have named the variables to match the module names.

step_1_a   step_1_b
     \     /
    step_2_a      step_2_b
          \       /
           step_3

You need to create the pipeline shown. Assume all relevant imports have been done. Which Python code segment should you use?

A. p = Pipeline(ws, steps=[[[step_1_a, step_1_b], step_2_a], step_2_b, step_3])

B. pipeline_steps = {
       "Pipeline": {
           "run": step_3,
           "run_after": [{
               "run": step_2_a,
               "run_after": [{"run": step_1_a}, {"run": step_1_b}]
           }, {"run": step_2_b}]
       }
   }
   p = Pipeline(ws, steps=pipeline_steps)

C. step_2_a.run_after(step_1_b)
   step_2_a.run_after(step_1_a)
   step_3.run_after(step_2_b)
   step_3.run_after(step_2_a)
   p = Pipeline(ws, steps=[step_1_a, step_1_b, step_2_a, step_2_b, step_3])

D. p = Pipeline(ws, steps=[step_1_a, step_1_b, step_2_a, step_2_b, step_3])

A. Option A
B. Option B
C. Option C
D. Option D
30
You create an Azure Machine Learning workspace. You must create a custom role named DataScientist that meets the following requirements:

✑ Role members must not be able to delete the workspace.
✑ Role members must not be able to create, update, or delete compute resources in the workspace.
✑ Role members must not be able to add new users to the workspace.

You need to create a JSON file for the DataScientist role in the Azure Machine Learning workspace. The custom role must enforce the restrictions specified by the IT Operations team. Which JSON code segment should you use?

A.
{
  "Name": "DataScientist",
  "IsCustom": true,
  "Description": "Project Data Scientist role",
  "Actions": ["*"],
  "NotActions": [
    "Microsoft.MachineLearningServices/workspaces/*/delete",
    "Microsoft.MachineLearningServices/workspaces/computes/*/write",
    "Microsoft.MachineLearningServices/workspaces/computes/*/delete",
    "Microsoft.Authorization/*/write"
  ],
  "AssignableScopes": [
    "/subscriptions//resourceGroups/ml-rg/providers/Microsoft.MachineLearningServices/workspaces/ml-ws"
  ]
}

B.
{
  "Name": "DataScientist",
  "IsCustom": true,
  "Description": "Project Data Scientist role",
  "Actions": ["*"],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions//resourceGroups/ml-rg/providers/Microsoft.MachineLearningServices/workspaces/ml-ws"
  ]
}

C.
{
  "Name": "DataScientist",
  "IsCustom": true,
  "Description": "Project Data Scientist role",
  "Actions": [
    "Microsoft.MachineLearningServices/workspaces/*/delete",
    "Microsoft.MachineLearningServices/workspaces/computes/*/write",
    "Microsoft.MachineLearningServices/workspaces/computes/*/delete",
    "Microsoft.Authorization/*/write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions//resourceGroups/ml-rg/providers/Microsoft.MachineLearningServices/workspaces/ml-ws"
  ]
}

D.
{
  "Name": "DataScientist",
  "IsCustom": true,
  "Description": "Project Data Scientist role",
  "Actions": [],
  "NotActions": ["*"],
  "AssignableScopes": [
    "/subscriptions//resourceGroups/ml-rg/providers/Microsoft.MachineLearningServices/workspaces/ml-ws"
  ]
}
31
You are building a binary classification model by using a supplied training set. The training set is imbalanced between two classes. You need to resolve the data imbalance.

What are three possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Penalize the classification.
Resample the dataset using undersampling or oversampling.
Generate synthetic samples in the minority class.
Use accuracy as the evaluation metric of the model.
Normalize the training feature set.
32
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning Studio to perform feature engineering on a dataset. You need to normalize values to produce a feature column grouped into bins.

Solution: Apply an Entropy Minimum Description Length (MDL) binning mode.

Does the solution meet the goal?

Yes
No
33
HOTSPOT
You are tuning a hyperparameter for an algorithm. The following table shows a data set with different hyperparameter, training error, and validation error values.

Hyperparameter (H)   Training error (TE)   Validation error (VE)
1                    105                   95
2                    200                   85
3                    250                   100
4                    105                   100
5                    400                   50

Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic.

Answer Area

Which H value should you select based on the data?
Answer choices: 1, 2, 3, 4, 5

What H value displays the poorest training result?
Answer choices: 1, 2, 3, 4, 5
34
You are creating a machine learning model. You need to identify outliers in the data.

Which two visualizations can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

box plot
scatter
random forest diagram
Venn diagram
ROC curve
35
You are solving a classification task. The dataset is imbalanced. You need to select an Azure Machine Learning Studio module to improve the classification accuracy.

Which module should you use?

Fisher Linear Discriminant Analysis
Filter Based Feature Selection
Synthetic Minority Oversampling Technique (SMOTE)
Permutation Feature Importance
36
You write a Python script that processes data in a comma-separated values (CSV) file. You plan to run this script as an Azure Machine Learning experiment. The script loads the data and determines the number of rows it contains using the following code:

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('./data.csv')
rows = len(data)
# record row_count metric here

You need to record the row count as a metric named row_count that can be returned using the get_metrics method of the Run object after the experiment run completes.

Which code should you use?

run.upload_file('row_count', './data.csv')
run.log('row_count', rows)
run.tag('row_count', rows)
run.log_table('row_count', rows)
run.log_row('row_count', rows)
37
HOTSPOT
You need to use the Python language to build a sampling strategy for the global penalty detection models. How should you complete the code segment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area

import pytorch as deeplearninglib
import tensorflow as deeplearninglib
import cntk as deeplearninglib

train_sampler = deeplearninglib.DistributedSampler(penalty_video_dataset)
train_sampler = deeplearninglib.log_uniform_candidate_sampler(penalty_video_dataset)
train_sampler = deeplearninglib.WeightedRandomSampler(penalty_video_dataset)
train_sampler = deeplearninglib.all_candidate_sampler(penalty_video_dataset)

train_loader = (train_sampler, penalty_video_dataset)

optimizer = deeplearninglib.optim.SGD(model.parameters(), lr=0.01)
optimizer = deeplearninglib.train.GradientDescentOptimizer(learning_rate=0.10)

model = deeplearninglib.parallel.DistributedDataParallel(model)
model = deeplearninglib.nn.parallel.DistributedDataParallelCPU(model)
model = deeplearninglib.keras.Model([...])
model = deeplearninglib.keras.Sequential([...])

...
train_sampler.set_epoch(epoch)
for data, target in train_loader:
    data, target = data.to(device), target.to(device)
38
You are developing deep learning models to analyze semi-structured, unstructured, and structured data types. You have the following data available for model building:

✑ Video recordings of sporting events
✑ Transcripts of radio commentary about events
✑ Logs from related social media feeds captured during sporting events

You need to select an environment for creating the model. Which environment should you use?

Azure Cognitive Services
Azure Data Lake Analytics
Azure HDInsight with Spark MLib
Azure Machine Learning Studio
39
HOTSPOT
You publish a batch inferencing pipeline that will be used by a business application. The application developers need to know which information should be submitted to and returned by the REST interface for the published pipeline.
You need to identify the information required in the REST request and returned as a response from the published pipeline. Which values should you use in the REST request and to expect in the response? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area

Request header:
JSON containing the run ID
JSON containing the pipeline ID
JSON containing the experiment name
JSON containing an OAuth bearer token

Request body:
JSON containing the run ID
JSON containing the pipeline ID
JSON containing the experiment name
JSON containing an OAuth bearer token

Response:
JSON containing the run ID
JSON containing a list of predictions
JSON containing the experiment name
JSON containing a path to the parallel_run_step.txt output file
40
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You train a classification model by using a logistic regression algorithm. You must be able to explain the model's predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions.
You need to create an explainer that you can use to retrieve the required global and local feature importance values.

Solution: Create a TabularExplainer.

Does the solution meet the goal?

Yes
No
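For context, the azureml-interpret TabularExplainer supports both global and local explanations; a minimal sketch (variable names are placeholders):

from interpret.ext.blackbox import TabularExplainer

# TabularExplainer selects an appropriate SHAP-based explainer for the model.
explainer = TabularExplainer(model, X_train, features=feature_names)
global_explanation = explainer.explain_global(X_test)     # overall importance
local_explanation = explainer.explain_local(X_test[0:5])  # per-prediction importance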
41
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are a data scientist using Azure Machine Learning Studio. You need to normalize values to produce an output column into bins to predict a target column.

Solution: Apply an Equal Width with Custom Start and Stop binning mode.

Does the solution meet the goal?

Yes
No
42
You are a data scientist working for a bank and have used Azure ML to train and register a machine learning model that predicts whether a customer is likely to repay a loan. You want to understand how your model is making selections and must be sure that the model does not violate government regulations such as denying loans based on where an applicant lives.
You need to determine the extent to which each feature in the customer data is influencing predictions. What should you do?

Enable data drift monitoring for the model and its training dataset.
Score the model against some test data with known label values and use the results to calculate a confusion matrix.
Use the Hyperdrive library to test the model with multiple hyperparameter values.
Use the interpretability package to generate an explainer for the model.
Add tags to the model registration indicating the names of the features in the training dataset.
43
Your team is building a data engineering and data science development environment. The environment must support the following requirements:

✑ support Python and Scala
✑ compose data storage, movement, and processing services into automated data pipelines
✑ the same tool should be used for the orchestration of both data engineering and data science
✑ support workload isolation and interactive workloads
✑ enable scaling across a cluster of machines

You need to create the environment. What should you do?

Build the environment in Apache Hive for HDInsight and use Azure Data Factory for orchestration.
Build the environment in Azure Databricks and use Azure Data Factory for orchestration.
Build the environment in Apache Spark for HDInsight and use Azure Container Instances for orchestration.
Build the environment in Azure Databricks and use Azure Container Instances for orchestration.
44
DRAG DROP
You have an Azure Machine Learning workspace that contains a training cluster and an inference cluster. You plan to create a classification model by using the Azure Machine Learning designer. You need to ensure that client applications can submit data as HTTP requests and receive predictions as responses.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions:
Create a real-time inference pipeline and run the pipeline on the compute cluster.
Create a batch inference pipeline and run the pipeline on the compute cluster.
Deploy a service to the compute cluster.
Create a pipeline that trains a classification model and run the pipeline on the compute cluster.
Deploy a service to the inference cluster.

Answer Area
45
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

An IT department creates the following Azure resource groups and resources:

Resource group: ml_resources
Resources:
✑ an Azure Machine Learning workspace named amlworkspace
✑ an Azure Storage account named amlworkspace12345
✑ an Application Insights instance named amlworkspace54321
✑ an Azure Key Vault named amlworkspace67890
✑ an Azure Container Registry named amlworkspace09876

Resource group: general_compute
Resource: a virtual machine named mlvm with the following configuration:
✑ Operating system: Ubuntu Linux
✑ Software installed: Python 3.6 and Jupyter Notebooks
✑ Size: NC6 (6 vCPUs, 1 vGPU, 56 GB RAM)

The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace. You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed. You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics.

Solution: Install the Azure ML SDK on the Surface Book. Run Python code to connect to the workspace and then run the training script as an experiment on local compute.

Does the solution meet the goal?

Yes
No
46
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Python script named train.py in a local folder named scripts. The script trains a regression model by using scikit-learn. The script includes code to load a training data file which is also located in the scripts folder. You must run the script as an Azure ML experiment on a compute cluster named aml-compute.
You need to configure the run to ensure that the environment includes the required packages for model training. You have instantiated a variable named aml-compute that references the target compute cluster.

Solution: Run the following code:

from azureml.train.estimator import Estimator

sk_est = Estimator(source_directory='./scripts',
                   compute_target=aml-compute,
                   entry_script='train.py',
                   conda_packages=['scikit-learn'])

Does the solution meet the goal?

Yes
No
47
You have a comma-separated values (CSV) file containing data from which you want to train a classification model. You are using the Automated Machine Learning interface in Azure Machine Learning studio to train the classification model. You set the task type to Classification.
You need to ensure that the Automated Machine Learning process evaluates only linear models. What should you do?

Add all algorithms other than linear ones to the blocked algorithms list.
Set the Exit criterion option to a metric score threshold.
Clear the option to perform automatic featurization.
Clear the option to enable deep learning.
Set the task type to Regression.
48
HOTSPOT
A biomedical research company plans to enroll people in an experimental medical treatment trial. You create and train a binary classification model to support selection and admission of patients to the trial. The model includes the following features: Age, Gender, and Ethnicity. The model returns different performance metrics for people from different ethnic groups.
You need to use Fairlearn to mitigate and minimize disparities for each category in the Ethnicity feature. Which technique and constraint should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Technique:
Grid search
Outlier detection
Dimensionality reduction

Constraint:
Demographic parity
False-positive rate parity
49
You use the Two-Class Neural Network module in Azure Machine Learning Studio to build a binary classification model. You use the Tune Model Hyperparameters module to tune accuracy for the model.
You need to select the hyperparameters that should be tuned using the Tune Model Hyperparameters module. Which two hyperparameters should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Number of hidden nodes
Learning Rate
The type of the normalizer
Number of learning iterations
Hidden layer specification
50
DRAG DROP
You have several machine learning models registered in an Azure Machine Learning workspace. You must use the Fairlearn dashboard to assess fairness in a selected model.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions:
Select a binary classification or regression model.
Select a metric to be measured.
Select a multiclass classification model.
Select a model feature to be evaluated.
Select a clustering model.

Answer Area
51
You retrain an existing model. You need to register the new version of a model while keeping the current version of the model in the registry. What should you do?

A. Register a model with a different name from the existing model and a custom property named version with the value 2.
B. Register the model with the same name as the existing model.
C. Save the new model in the default datastore with the same name as the existing model. Do not register the new model.
D. Delete the existing model and register the new one with the same name.
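For reference, registering under an existing name is what creates a new version while keeping the old one; a sketch (the model name and path are placeholders):

from azureml.core.model import Model

# Registering with the same model_name increments the version number;
# the previous version remains available in the registry.
model = Model.register(workspace=ws,
                       model_name='existing-model-name',  # hypothetical name
                       model_path='./outputs/model.pkl')  # hypothetical path
print(model.version)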
52
HOTSPOT
You collect data from a nearby weather station. You have a pandas dataframe named weather_df that includes the following data:

Temperature  Observation_time  Humidity  Pressure  Visibility  Days_since_last_observation
74           2019/10/2 00:00   0.62      29.87     3           0.5
89           2019/10/2 12:00   0.70      28.88     10          0.5
72           2019/10/3 00:00   0.64      30.00     8           0.5
80           2019/10/3 12:00   0.66      29.75     7           0.5

The data is collected every 12 hours: noon and midnight. You plan to use automated machine learning to create a time-series model that predicts temperature over the next seven days. For the initial round of training, you want to train a maximum of 50 different models. You must use the Azure Machine Learning SDK to run an automated machine learning experiment to train these models.
You need to configure the automated machine learning run. How should you complete the AutoMLConfig definition? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area

automl_config = AutoMLConfig(
    task="<regression | forecasting | classification | deep learning>",
    training_data=weather_df,
    label_column_name="<humidity | pressure | visibility | temperature | days_since_last | observation_time>",
    time_column_name="<humidity | pressure | visibility | temperature | days_since_last | observation_time>",
    max_horizon=<2 | 6 | 7 | 12 | 14 | 50>,
    iterations=<2 | 6 | 7 | 12 | 14 | 50>,
    iteration_timeout_minutes=5,
    primary_metric="r2_score")
53
DRAG DROP
You have an Azure Machine Learning workspace that contains a CPU-based compute cluster and an Azure Kubernetes Services (AKS) inference cluster. You create a tabular dataset containing data that you plan to use to create a classification model.
You need to use the Azure Machine Learning designer to create a web service through which client applications can consume the classification model by submitting new data and getting an immediate prediction as a response.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions:
Create and run a batch inference pipeline on the compute cluster.
Deploy a real-time endpoint on the inference cluster.
Create and run a real-time inference pipeline on the compute cluster.
Create and run a training pipeline that prepares the data and trains a classification model on the compute cluster.
Use the automated ML user interface to train a classification model on the compute cluster.
Create and start a Compute Instance.

Answer Area
54
A set of CSV files contains sales records. All the CSV files have the same data schema. Each CSV file contains the sales record for a particular month and has the filename sales.csv. Each file is stored in a folder that indicates the month and year when the data was recorded. The folders are in an Azure blob container for which a datastore has been defined in an Azure Machine Learning workspace. The folders are organized in a parent folder named sales to create the following hierarchical structure:

/sales
  /01-2019
    /sales.csv
  /02-2019
    /sales.csv
  /03-2019
    /sales.csv
  ...

At the end of each month, a new folder with that month's sales file is added to the sales folder. You plan to use the sales data to train a machine learning model based on the following requirements:

✑ You must define a dataset that loads all of the sales data to date into a structure that can be easily converted to a dataframe.
✑ You must be able to create experiments that use only data that was created before a specific previous month, ignoring any data that was added after that month.
✑ You must register the minimum number of datasets possible.

You need to register the sales data as a dataset in the Azure Machine Learning service workspace. What should you do?

Create a tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file every month. Register the dataset with the name sales_dataset each month, replacing the existing dataset and specifying a tag named month indicating the month and year it was registered. Use this dataset for all experiments.
Create a tabular dataset that references the datastore and specifies the path 'sales/*/sales.csv', register the dataset with the name sales_dataset and a tag named month indicating the month and year it was registered, and use this dataset for all experiments.
Create a new tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file every month. Register the dataset with the name sales_dataset_MM-YYYY each month with appropriate MM and YYYY values for the month and year. Use the appropriate month-specific dataset for experiments.
Create a tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file. Register the dataset with the name sales_dataset each month as a new version and with a tag named month indicating the month and year it was registered. Use this dataset for all experiments, identifying the version to be used based on the month tag as necessary.
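For context, the wildcard-path-plus-versioning pattern described in the last option looks roughly like this in the SDK (a sketch; the datastore variable and tag value are assumptions):

from azureml.core import Dataset

# The wildcard picks up every month's folder; registering with
# create_new_version=True keeps earlier versions available for experiments
# that must ignore data added after a given month.
ds = Dataset.Tabular.from_delimited_files(path=(datastore, 'sales/*/sales.csv'))
ds = ds.register(workspace=ws, name='sales_dataset',
                 create_new_version=True,
                 tags={'month': '03-2019'})  # hypothetical tag value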
55
You are solving a classification task. You must evaluate your model on a limited data sample by using k-fold cross-validation. You start by configuring a k parameter as the number of splits.
You need to configure the k parameter for the cross-validation. Which value should you use?

k=1
k=10
k=0.5
k=0.9
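As a quick illustration of k-fold cross-validation with k=10 (a sketch using scikit-learn; the model and data variables are placeholders):

from sklearn.model_selection import cross_val_score

# With k=10, the data is split into ten folds; each fold serves once as the
# validation set while the remaining nine folds train the model.
scores = cross_val_score(model, X, y, cv=10)
print(scores.mean())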
56
HOTSPOT
You have an Azure Machine Learning workspace named workspace1 that is accessible from a public endpoint. The workspace contains an Azure Blob storage datastore named store1 that represents a blob container in an Azure storage account named account1. You configure workspace1 and account1 to be accessible by using private endpoints in the same virtual network.
You must be able to access the contents of store1 by using the Azure Machine Learning SDK for Python. You must be able to preview the contents of store1 by using Azure Machine Learning studio.
You need to configure store1. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area

Access the contents of store1 by using the Azure Machine Learning SDK for Python:
Set store1 as the default datastore.
Disable data validation for store1.
Update authentication for store1.
Regenerate the keys of account1.

Preview the contents of store1 by using Azure Machine Learning studio:
Set store1 as the default datastore.
Disable data validation for store1.
Update authentication for store1.
Regenerate the keys of account1.
57
DRAG DROP
You are creating an experiment by using Azure Machine Learning Studio. You must divide the data into four subsets for evaluation. There is a high degree of missing values in the data. You must prepare the data for analysis.
You need to select appropriate methods for producing the experiment. Which three modules should you run in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.

Actions:
Build Counting Transform
Missing Values Scrubber
Feature Hashing
Clean Missing Data
Replace Discrete Values
Import Data
Latent Dirichlet Transformation
Partition and Sample

Answer Area
58
HOTSPOT
You are creating a machine learning model in Python. The provided dataset contains several numerical columns and one text column. The text column represents a product's category. The product category will always be one of the following:

✑ Bikes
✑ Cars
✑ Vans
✑ Boats

You are building a regression model using the scikit-learn Python package. You need to transform the text data to be compatible with the scikit-learn Python package. How should you complete the code segment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area

from sklearn import linear_model

import <pandas | numpy | scipy> as df

dataset = df.read_csv("data\\ProductSales.csv")
ProductCategoryMapping = {"Bikes": 1, "Cars": 2, "Boats": 3, "Vans": 4}
dataset['ProductCategoryMapping'] = dataset['ProductCategory'].<map | reduce | transpose>(ProductCategoryMapping)

regr = linear_model.LinearRegression()
X_train = dataset[['ProductCategoryMapping', 'ProductSize', 'ProductCost']]
y_train = dataset[['Sales']]
regr.fit(X_train, y_train)
59
DRAG DROP
You have a model with a large difference between the training and validation error values. You must create a new model and perform cross-validation. You need to identify a parameter set for the new model using Azure Machine Learning Studio.
Which module should you use for each step? To answer, drag the appropriate modules to the correct steps. Each module may be used once or more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Modules:
Two-Class Boosted Decision Tree
Partition and Sample
Tune Model Hyperparameters
Split Data

Steps:
Define the parameter scope
Define the cross-validation settings
Define the metric
Train, evaluate, and compare
60
You plan to provision an Azure Machine Learning Basic edition workspace for a data science project. You need to identify the tasks you will be able to perform in the workspace.
Which three tasks will you be able to perform? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Create a Compute Instance and use it to run code in Jupyter notebooks.
Create an Azure Kubernetes Service (AKS) inference cluster.
Use the designer to train a model by dragging and dropping pre-defined modules.
Create a tabular dataset that supports versioning.
Use the Automated Machine Learning user interface to train a model.
61
DRAG DROP
You need to correct the model fit issue. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions:
Add the Ordinal Regression module.
Add the Two-Class Averaged Perceptron module.
Augment the data.
Add the Bayesian Linear Regression module.
Decrease the memory size for L-BFGS.
Add the Multiclass Decision Jungle module.
Configure the regularization weight.

Answer Area
62
You create and register a model in an Azure Machine Learning workspace. You must use the Azure Machine Learning SDK to implement a batch inference pipeline that uses a ParallelRunStep to score input data using the model. You must specify a value for the ParallelRunConfig compute_target setting of the pipeline step.
You need to create the compute target. Which class should you use?

BatchCompute
AdlaCompute
AmlCompute
AksCompute
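For reference, provisioning an AmlCompute cluster, the class a ParallelRunConfig compute_target expects, follows this shape (a sketch; the VM size and cluster name are assumptions):

from azureml.core.compute import AmlCompute, ComputeTarget

# AmlCompute is the scalable cluster type used for batch inference pipelines.
config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS3_V2',
                                               min_nodes=0, max_nodes=4)
compute_target = ComputeTarget.create(ws, 'batch-cluster', config)  # hypothetical name
compute_target.wait_for_completion(show_output=True)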
63
You have a Python script that executes a pipeline. The script includes the following code:

from azureml.core import Experiment
pipeline_run = Experiment(ws, 'pipeline_test').submit(pipeline)

You want to test the pipeline before deploying the script. You need to display the pipeline run details written to the STDOUT output when the pipeline completes.
Which code segment should you add to the test script?

pipeline_run.get_metrics()
pipeline_run.wait_for_completion(show_output=True)
pipeline_param = PipelineParameter(name="stdout", default_value="console")
pipeline_run.get_status()
64
You are creating a new experiment in Azure Machine Learning Studio. You have a small dataset that has missing values in many columns. The data does not require the application of predictors for each column. You plan to use the Clean Missing Data module to handle the missing data.
You need to select a data cleaning method. Which method should you use?

Synthetic Minority Oversampling Technique (SMOTE)
Replace using MICE
Replace using Probabilistic PCA
Normalization
65
You are analyzing a dataset containing historical data from a local taxi company. You are developing a regression model. You must predict the fare of a taxi trip.
You need to select performance metrics to correctly evaluate the regression model. Which two metrics can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

an F1 score that is high
an R-Squared value close to 1
an R-Squared value close to 0
a Root Mean Square Error value that is high
a Root Mean Square Error value that is low
an F1 score that is low
66
You train and register a model in your Azure Machine Learning workspace. You must publish a pipeline that enables client applications to use the model for batch inferencing. You must use a pipeline with a single ParallelRunStep step that runs a Python inferencing script to get predictions from the input data.
You need to create the inferencing script for the ParallelRunStep pipeline step. Which two functions should you include? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. run(mini_batch)
B. main()
C. batch()
D. init()
E. score(mini_batch)
67
You register a model that you plan to use in a batch inference pipeline. The batch inference pipeline must use a ParallelRunStep step to process files in a file dataset. The script that the ParallelRunStep step runs must process six input files each time the inferencing function is called.
You need to configure the pipeline. Which configuration setting should you specify in the ParallelRunConfig object for the ParallelRunStep step?

process_count_per_node= "6"
node_count= "6"
mini_batch_size= "6"
error_threshold= "6"
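For context, mini_batch_size is the ParallelRunConfig setting that controls how many files from a file dataset are passed to each call of the scoring script (a sketch; all other values are placeholder assumptions):

from azureml.pipeline.steps import ParallelRunConfig

# For a FileDataset, mini_batch_size sets the number of files handed to each
# run() call of the entry script; the remaining values are illustrative.
parallel_run_config = ParallelRunConfig(
    source_directory='./scripts',   # hypothetical
    entry_script='score.py',        # hypothetical
    mini_batch_size='6',
    error_threshold=10,
    output_action='append_row',
    environment=env,
    compute_target=compute_target,
    node_count=2)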
68
You create a script that trains a convolutional neural network model over multiple epochs and logs the validation loss after each epoch. The script includes arguments for batch size and learning rate. You identify a set of batch size and learning rate values that you want to try. You need to use Azure Machine Learning to find the combination of batch size and learning rate that results in the model with the lowest validation loss.
What should you do?

Run the script in an experiment based on an AutoMLConfig object
Create a PythonScriptStep object for the script and run it in a pipeline
Use the Automated Machine Learning interface in Azure Machine Learning studio
Run the script in an experiment based on a ScriptRunConfig object
Run the script in an experiment based on a HyperDriveConfig object
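As a reference for this tuning pattern, a HyperDriveConfig that searches batch size and learning rate while minimizing the logged validation loss might look like this (a sketch; the metric name, argument names, and candidate values are assumptions):

from azureml.train.hyperdrive import (HyperDriveConfig, PrimaryMetricGoal,
                                      RandomParameterSampling, choice)

# Search over candidate batch sizes and learning rates; minimize the
# validation loss the training script logs after each epoch.
param_sampling = RandomParameterSampling({
    '--batch-size': choice(16, 32, 64),           # hypothetical values
    '--learning-rate': choice(0.001, 0.01, 0.1)}) # hypothetical values
hyperdrive_config = HyperDriveConfig(run_config=script_config,  # assumed ScriptRunConfig
                                     hyperparameter_sampling=param_sampling,
                                     primary_metric_name='validation_loss',
                                     primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
                                     max_total_runs=20,
                                     max_concurrent_runs=4)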
69
You are building a regression model for estimating the number of calls during an event. You need to determine whether the feature values achieve the conditions to build a Poisson regression model.
Which two conditions must the feature set contain? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

The label data must be a negative value.
The label data can be positive or negative.
The label data must be a positive value.
The label data must be non-discrete.
The data must be whole numbers.