Path6.Mod1.b - Deploy and Consume Models - Managed Online Endpoint w/out MLflow Flashcards

1
Q

Deploying to an Online Endpoint without MLflow requires three things. What are they, and what is the obvious disadvantage?

A
  • Model Artifacts - Stored on a local path or as a Registered Model.
  • A Scoring Script
  • An Environment

Note the disadvantage without MLflow: no auto-generated resources (you must supply the scoring script and environment yourself).
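
For the Model Artifacts requirement, a minimal sketch (names and paths assumed) of registering local artifacts as a custom model:

# Sketch: register local model files as a custom model (no MLflow flavor)
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

model = Model(
    path="./model",                # assumed local folder with the model files
    type=AssetTypes.CUSTOM_MODEL,  # custom model, since MLflow is not used
    name="local-model-example",    # assumed registered name
)
ml_client.models.create_or_update(model)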

2
Q

Code for creating an Environment with a Docker Image + Conda dependencies

A

The conda.yml file:

name: basic-env-cpu
channels:
  - conda-forge
dependencies:
  - python=3.7
  - scikit-learn
  - pandas
  - numpy
  - matplotlib

Then create the Environment with Python:

from azure.ai.ml.entities import Environment

# Build the Environment from a base Docker image plus the Conda file above
env = Environment(
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04",
    conda_file="./src/conda.yml",
    name="deployment-environment",
    description="Environment created from a Docker image plus Conda environment.",
)
ml_client.environments.create_or_update(env)
3
Q

The ManagedOnlineDeployment class, what it's for and important parameters

A

When managing Online Endpoints without MLflow you also need a Managed Deployment. Important parameters:
- instance_type: VM size
- instance_count: Number of instances to use
- model: The Model to deploy to the Endpoint
- environment: The execution Environment. Can be a string name or an Environment instance
- code_configuration: An instance of CodeConfiguration

4
Q

The CodeConfiguration class, what it's for and important parameters

A

The CodeConfiguration is used to provide the ManagedOnlineDeployment instance with the location of the Scoring Script:
- code: the path to the folder containing the Scoring Script
- scoring_script: the name of the script file itself
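
The Scoring Script itself must define init() and run(). A minimal sketch, assuming a pickled scikit-learn model saved as model.pkl:

# score.py - sketch of a scoring script for a non-MLflow deployment
import json
import os

import joblib

def init():
    # Runs once when the deployment starts: load the model into memory.
    # AZUREML_MODEL_DIR points at the deployed model artifacts.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
    model = joblib.load(model_path)

def run(raw_data):
    # Runs on every scoring request: parse the JSON payload, predict, return.
    data = json.loads(raw_data)["data"]
    predictions = model.predict(data)
    return predictions.tolist()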

5
Q

Example code using ManagedOnlineDeployment with an Environment, CodeConfiguration and Model

A

Putting it together:

from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    Model,
)

model = Model(path="../model-1/model/sklearn_regression_model.pkl")
env = Environment(
    conda_file="../model-1/environment/conda.yml",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
)

blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="endpoint-example",
    model=model,  # add the Model
    environment=env,  # add the Environment
    code_configuration=CodeConfiguration(
        code="../model-1/onlinescoring", scoring_script="score.py"
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)

ml_client.online_deployments.begin_create_or_update(blue_deployment).result()
6
Q

Managed Online Deployments without MLflow deploy multiple models to Endpoints the same way Managed Online Endpoints do with MLflow (T/F)

A

True. The same traffic configuration (mapping Deployment names to percentages) and the same deletion:

# blue deployment takes 100% of the traffic
endpoint.traffic = {"blue": 100}
ml_client.begin_create_or_update(endpoint).result()

ml_client.online_endpoints.begin_delete(name="endpoint-example")
7
Q

Two ways to Test Online Endpoints

A
  • One way is through ML Studio > Endpoints > Endpoint Details > Test tab (you saw this in AI-900 and even AZ-900)
  • The other way is through the Python SDK via endpoint invoke. First, put some sample data in sample-data.json (each inner array is one case):

{
  "data": [
      [0.1, 2.3, 4.1, 2.0],
      [0.2, 1.8, 3.9, 2.1],
      ...
  ]
}

Then call online_endpoints.invoke:

response = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name="blue",
    request_file="sample-data.json",
)

# response is the raw string returned by the endpoint
print("Yes" if response[1] == "1" else "No")

Lastly, if you have the Endpoint already deployed, you can test it with Postman or a similar HTTP tool.
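
A minimal sketch of that HTTP route, mirroring what Postman would send (assumes key-based auth plus the ml_client and endpoint name from above):

# Sketch: call the scoring URI directly over HTTP, as Postman would
import requests

endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
key = ml_client.online_endpoints.get_keys(name=online_endpoint_name).primary_key

response = requests.post(
    endpoint.scoring_uri,
    json={"data": [[0.1, 2.3, 4.1, 2.0]]},
    headers={"Authorization": f"Bearer {key}"},
)
print(response.json())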

8
Q

mirror_traffic parameter:
- What it does when set
- The upper limit and what happens when you violate it

A
  • This Managed Endpoint parameter lets you mirror (i.e., copy) a percentage of live traffic to a specified Deployment. That Deployment is the Shadow Deployment.
  • 50% is the maximum mirror percentage; exceed it and your endpoint traffic gets throttled.
9
Q

mirror_traffic parameter: CLI and Python versions of how to use it

A

CLI version. Mirrors 10% to the ‘green’ Deployment:
az ml online-endpoint update --name $endpoint_name --mirror-traffic "green=10"

Python version. Mirrors 10% to the ‘green’ Deployment:

endpoint.mirror_traffic = {"green": 10}
ml_client.begin_create_or_update(endpoint).result()
10
Q
  • You can set mirror_traffic for Kubernetes Online Endpoints (T/F)
  • You can mirror traffic to multiple Deployments in an Endpoint (T/F)
  • A Deployment can receive both live traffic and mirror traffic (T/F)
  • When invoking an Endpoint by naming the Deployment, Azure will not mirror traffic to the Shadow Deployment (T/F)
A
  • False. It’s not supported for Kubernetes
  • False. You only mirror to ONE Deployment
  • False. They can receive one or the other, but NOT both
  • True. Shadowing aka Mirror Traffic only works when you DON’T specify a Deployment
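
A minimal sketch illustrating the last two points (assumes mirror_traffic = {"green": 10} is already set on the Endpoint):

# Unnamed invoke: routed by live traffic, with 10% also copied to "green"
mirrored = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    request_file="sample-data.json",
)

# Named invoke: goes straight to "blue"; nothing is mirrored
direct = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name="blue",
    request_file="sample-data.json",
)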