Classification Model With Azure Machine Learning Designer Flashcards

1
Q

Classification model with Azure Machine Learning designer:
Classification is a supervised machine learning technique used to predict categories or classes.
Learn how to create classification models using Azure Machine Learning designer.

A
2
Q

Classification:
Classification is an example of a supervised machine learning technique in which you train a model using data that includes both the features and known values for the label, so that the model learns to fit the feature combinations to the label.
Then, after training has been completed, you can use the trained model to predict labels for new items for which the label is unknown.

A

You can use the Microsoft Azure Machine Learning designer to create classification models by using a drag-and-drop visual interface, without needing to write any code.

3
Q

Identify classification machine learning scenarios:
Classification is a form of machine learning that is used to predict which category, or class, an item belongs to.
This machine learning technique can be applied to binary and multi-class scenarios. For example, a health clinic might use the characteristics of a patient (such as age, weight, blood pressure, and so on) to predict whether the patient is at risk of diabetes.
In this case, the characteristics of the patient are the features, and the label is a binary classification of either 0 or 1, representing non-diabetic or diabetic.
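As an illustrative sketch only (the Designer itself is no-code): assuming scikit-learn and made-up patient data, a binary classifier for this diabetes scenario could be trained and used like this.

from sklearn.linear_model import LogisticRegression

# Hypothetical patient features: [age, weight in kg, systolic blood pressure]
X_train = [[25, 60, 110], [47, 95, 140], [52, 88, 150],
           [31, 70, 118], [60, 102, 160], [38, 64, 115]]
y_train = [0, 1, 1, 0, 1, 0]   # label: 0 = non-diabetic, 1 = diabetic

# Fit a simple binary classifier, then predict the label for a new patient
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.predict([[45, 85, 135]]))         # predicted class (0 or 1)
print(model.predict_proba([[45, 85, 135]]))   # predicted probability for each class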

A
4
Q

Like regression, classification is an example of a supervised machine learning technique in which you train a model using data that includes both the features and known values for the label, so that the model learns to fit the feature combinations to the label.
Then, after training has been completed, you can use the trained model to predict labels for new items for which the label is unknown.

A
5
Q

Scenarios for classification machine learning models:
Using clinical data to predict whether a patient will become sick or not
Using historical data to predict whether the text sentiment is positive, negative, or neutral.
Using characteristics of small businesses to predict if a new venture will succeed.

A
6
Q

Confusion Matrix: Possible outcomes.
True positive: the model predicts the patient has diabetes, and the patient does have diabetes.
False positive: the model predicts the patient has diabetes, but the patient does not actually have diabetes.
False negative: the model predicts the patient does not have diabetes, but the patient actually does have diabetes.
True negative: the model predicts the patient does not have diabetes, and the patient does not have diabetes.
See answer for more details

A

Confusion matrix:
The confusion matrix is a tool used to assess the quality of a classification model's predictions. It compares predicted labels against actual labels.
In a binary classification model, where you are predicting one of two possible values, the confusion matrix is a 2x2 grid showing the predicted and actual value counts for classes 1 and 0.
It categorises the model's results into the four types of outcomes listed in the question above (true positives, false positives, false negatives, and true negatives).
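As a minimal sketch of these counts (assuming scikit-learn and made-up labels rather than anything from the Designer workflow), the four outcomes fall into a 2x2 grid like this:

from sklearn.metrics import confusion_matrix

# Hypothetical actual and predicted labels (1 = diabetic, 0 = non-diabetic)
y_actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

# Rows are actual classes, columns are predicted classes (order: 0, then 1)
cm = confusion_matrix(y_actual, y_predicted)
tn, fp, fn, tp = cm.ravel()
print(cm)
print(f"TP={tp}, FP={fp}, FN={fn}, TN={tn}")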

7
Q

Multi-class classification:
For a multi-class classification model, where there are more than two possible classes, the same approach is used to tabulate each possible combination of actual and predicted value counts. So a model with three possible classes would result in a 3x3 matrix with a diagonal line of cells where the predicted and actual labels match.
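A minimal sketch of the multi-class case, again assuming scikit-learn and made-up labels:

from sklearn.metrics import confusion_matrix

# Hypothetical three-class example (classes 0, 1 and 2)
y_actual    = [0, 1, 2, 2, 0, 1, 2, 1, 0]
y_predicted = [0, 2, 2, 2, 0, 1, 1, 1, 0]

# 3x3 grid; the diagonal holds the cases where the predicted and actual labels match
print(confusion_matrix(y_actual, y_predicted))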

A

Metrics that can be derived from the confusion matrix include the following (see the sketch after this list):
Accuracy: the fraction of correct predictions; the number of true positives plus true negatives, divided by the total number of predictions.
Precision: the fraction of cases classified as positive that are actually positive; the number of true positives divided by the number of true positives plus false positives.
Recall: the fraction of positive cases correctly identified; the number of true positives divided by the number of true positives plus false negatives.
F1 score: an overall metric that essentially combines precision and recall.
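A minimal sketch of these metrics, assuming scikit-learn and the same made-up labels as before:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical labels (1 = positive class, 0 = negative class)
y_actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

print("Accuracy :", accuracy_score(y_actual, y_predicted))   # (TP + TN) / total predictions
print("Precision:", precision_score(y_actual, y_predicted))  # TP / (TP + FP)
print("Recall   :", recall_score(y_actual, y_predicted))     # TP / (TP + FN)
print("F1 score :", f1_score(y_actual, y_predicted))         # harmonic mean of precision and recall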

8
Q

Of these metrics, accuracy may be the most intuitive: the number of correct predictions (true positives plus true negatives) divided by the total number of predictions.
However, you need to be careful about using accuracy as a measurement of how well a model performs. Using a model that predicts 15% of patients have diabetes when actually 25% of patients have diabetes (that is, 10 true positives, 5 false positives, 15 false negatives and 70 true negatives out of 100 patients), we can calculate the following metrics (checked in the sketch after this list):
The accuracy of the model is (10 + 70) / 100 = 80%.
The precision of the model is 10 / (10 + 5) = 67%.
The recall of the model is 10 / (10 + 15) = 40%.
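A minimal sketch reproducing these figures from the counts implied above (TP = 10, FP = 5, FN = 15, TN = 70):

# Counts implied by the example above: 100 patients in total
tp, fp, fn, tn = 10, 5, 15, 70

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # 0.80
precision = tp / (tp + fp)                    # 0.666...
recall    = tp / (tp + fn)                    # 0.40

print(f"Accuracy : {accuracy:.0%}")
print(f"Precision: {precision:.0%}")
print(f"Recall   : {recall:.0%}")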

A
9
Q

Choosing a threshold:
A classification model predicts the probability for each possible class. In other words, the model calculates a likelihood for each predicted label.
In the case of a binary classification model, the predicted probability is a value between 0 and 1.
By default, a predicted probability at or above 0.5 results in a class prediction of 1, while a prediction below this threshold means that there is a greater probability of a negative prediction (remember that the probabilities for all classes add up to 1), so the predicted class would be 0. Designer has a useful threshold slider for reviewing how the model performance would change depending on the set threshold.
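A minimal sketch of applying a threshold to predicted probabilities (NumPy, made-up values):

import numpy as np

# Hypothetical predicted probabilities for the positive class (1 = diabetic)
probabilities = np.array([0.91, 0.48, 0.62, 0.05, 0.50, 0.33])

# Default threshold of 0.5: probabilities at or above it become class 1
default_labels = (probabilities >= 0.5).astype(int)   # [1, 0, 1, 0, 1, 0]

# Lowering the threshold to 0.3 classifies more cases as positive
lower_labels = (probabilities >= 0.3).astype(int)     # [1, 1, 1, 0, 1, 1]

print(default_labels, lower_labels)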

A
10
Q

ROC curve and AUC metric:
Another term for recall is true positive rate, and it has a corresponding metric named false positive rate, which measures the number of negative cases incorrectly identified as positive compared with the number of actual negative cases.
Plotting these metrics against each other for every possible threshold between 0 and 1 results in a curve known as the ROC curve. ROC stands for receiver operating characteristic, but most data scientists just call it a ROC curve.
In an ideal model, the curve would go all the way up the left side and across the top, so that it covers the full area of the chart.
The larger the area under the curve, or AUC metric, which can be any value from 0 to 1, the better the model is performing. You can review the ROC curve in the evaluation results.
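A minimal sketch of computing the ROC curve and AUC, assuming scikit-learn and made-up labels and probabilities:

from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical actual labels and predicted probabilities for the positive class
y_actual        = [0, 0, 1, 1, 0, 1, 0, 1]
y_probabilities = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.7]

# False positive rate and true positive rate at each candidate threshold
fpr, tpr, thresholds = roc_curve(y_actual, y_probabilities)
auc = roc_auc_score(y_actual, y_probabilities)

print("FPR:", fpr)
print("TPR:", tpr)
print("AUC:", auc)   # 0.5 is roughly random guessing; 1.0 is a perfect model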

A
11
Q

Deploying a predictive service:
You have the ability to deploy a service that can be used in real-time.
In order to automate your model into a service that makes continuous predictions, you need to create and deploy an inference pipeline.

A
12
Q

Inference pipeline:
To deploy your pipeline, you must first convert the training pipeline into a real-time inference pipeline. This process removes training components and adds web service inputs and outputs to handle requests.
The inference pipeline performs the same data transformations as the first pipeline, but for new data.
Then it uses the trained model to infer, or predict, labels based on its features. This model will form the basis for a predictive service that you can publish for applications to use.
You can create an inference pipeline by selecting the menu above a completed job.

A
13
Q

Deployment: after creating an inference pipeline, you can deploy it as an endpoint. In the endpoints page, you can view deployment details, test your pipeline service with sample data, and find credentials to connect your pipeline service to a client application. On the Test tab, you can test your deployed service with sample data in a JSON format. The Test tab is a tool you can use to quickly check and see if your model is behaving as expected. Typically, it is helpful to test the service before connecting it to an application.
You can find credentials for your service on the Consume tab. These credentials are used to connect your trained machine learning model, as a service, to a client application.
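As an illustrative sketch only, a client application might call the deployed real-time endpoint over HTTP. The endpoint URL, key and input field names below are placeholders, not values from this module; the real ones come from your service's Consume tab and the schema of your own pipeline.

import json
import urllib.request

# Placeholder values - copy the REST endpoint URL and key from the Consume tab
endpoint_url = "https://<your-endpoint>/score"   # hypothetical URL
api_key = "<your-api-key>"                       # hypothetical key

# Hypothetical input row matching the features the pipeline expects
payload = {"Inputs": {"input1": [{"Age": 52, "Weight": 90, "BloodPressure": 140}]}}

request = urllib.request.Request(
    endpoint_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",    # key from the Consume tab
    },
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))           # predicted label and probability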

A
14
Q

Knowledge check:
You are using an Azure Machine Learning designer pipeline to train and test a binary classification model. You review the model's performance metrics in an Evaluate Model module and note that it has an AUC score of 0.3. What can you conclude about the model?

A

The model performs worse than random guessing. An AUC of 0.5 is what you'd expect from random predictions for a binary model, so this model therefore performs worse than guessing.
