Azure OpenAI Service Flashcards

1
Q

Introduction to Azure OpenAI Service
Suppose you want to build a support application that summarises text and suggests code. To build this app, you want to utilise the capability you see in ChatGPT, a chatbot built by the OpenAI research company that takes in natural language input from a user and returns a machine-created, human-like response.

Generative AI models power ChatGPT's ability to produce new content, such as text, code, and images, based on a natural language prompt. Generative AI models are a subset of deep learning algorithms. These algorithms support various workloads across vision, speech, language, decision, search, and more.

Azure OpenAI Service brings these generative models to the Azure platform, enabling you to develop powerful AI solutions that benefit from the security, scalability, and integration with other services provided by the Azure cloud platform.
These models are available for building applications through a REST API, various SDKs, and a Studio interface.

A
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Access Azure OpenAI Service

A

The first step in building a generative AI solution with Azure OpenAI is to provision an Azure OpenAI resource in your Azure subscription. Azure OpenAI Service is currently in limited access; users need to apply for access.

3
Q

Create an Azure OpenAI Service resource in the Azure portal. When you create an Azure OpenAI Service resource, you need to provide a subscription, resource group name, region, unique instance name, and select a pricing tier.

A
4
Q

To create an Azure OpenAI Service resource from the CLI, refer to this example and replace the following variables with your own:

A

MyOpenAIResource: replace with a unique name for your resource
OAIResourceGroup: replace with your resource group name
eastus: replace with the region to deploy your resource
subscriptionID: replace with your subscription ID
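A command along these lines creates the resource with the variables above (a sketch based on the Azure CLI's Cognitive Services commands; the `--kind` and `--sku` values shown are assumptions and may differ for your subscription):

```bash
az cognitiveservices account create \
  -n MyOpenAIResource \
  -g OAIResourceGroup \
  -l eastus \
  --kind OpenAI \
  --sku s0 \
  --subscription subscriptionID
```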

5
Q

Use Azure OpenAI Studio: Azure OpenAI Studio provides access to:
Model management
Deployment
Experimentation
Customisation
Learning resources

A

You can access Azure OpenAI Studio through the Azure portal after creating a resource, or at the website by logging in with your Azure OpenAI resource instance.
During the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.

6
Q

Once you have Azure OpenAI Studio open, you see a call-to-action button at the top of the screen. If you don't have any deployments yet, the button reads "Create new deployment".
Get started by selecting the button, which brings you to the Deployments page.
Deployments is one of several navigation options that appear on the left-hand side of the screen.

A

Once in the studio, your next steps are:
Choosing a base model
Deploying a base model
Testing the model. An easy way to do this is in one of the Studio playgrounds.
Experimenting with prompts and parameters to see their effects on the completion, or generated output.

7
Q

Explore types of generative AI models:
To begin building with Azure OpenAI, you need to choose a base model and deploy it.
Microsoft provides base models and the option to create customised base models. This module covers the currently available out-of-the-box base models.

A
8
Q

Model families:
OpenAI generative models are grouped by family and capability.
The family groupings are by workload.
The base models within the families are distinguished by how well they can complete the workload.

A

Family:
GPT-4
GPT-3
Codex
Embeddings

9
Q

GPT-4:
Models that generate natural language and code. These models are currently in preview. For access, existing Azure OpenAI customers can apply by filling out a form.
Base models within the family:
gpt-4, gpt-4-32k

A
10
Q

GPT-3: models that can understand and generate natural language.
Base models within the family:
text-davinci-003, text-curie-001, text-babbage-001, text-ada-001, gpt-35-turbo

A
11
Q

Codex:
Models that can understand and generate code, including translating natural language to code.
Base models within the family:
code-davinci-002, code-cushman-001

A
12
Q

Embeddings:
Embeddings are broken down into three families of models for different functionality:
similarity, text search, and code search.

A
13
Q

Note:
The underlying generative AI capability in OpenAI's ChatGPT is the gpt-35-turbo model, which belongs to the GPT-3 family. This model is in preview.

A
14
Q

Choosing a model:
The models within a family differ by speed, cost, and how well they complete tasks. In general, models with "davinci" in the name are stronger than models with "curie", "babbage", or "ada" in the name, but may be slower.
You can learn about the differences and latest models offered in the documentation.
Pricing is determined by tokens and by model type.

A
15
Q

Azure OpenAI Studio navigation:
In Azure OpenAI Studio, the Models page lists the available base models and provides an option to create customised models. Models with a "succeeded" status have been successfully trained and can be selected for deployment.

A
16
Q

Deploy generative AI models:
You first need to deploy a model to make API calls that return completions to prompts. When you create a new deployment, you need to indicate which base model to deploy. You can only deploy one instance of each model. There are several ways you can deploy your base model.

A
17
Q

Deploy using Azure OpenAI Studio:
In Azure OpenAI Studio's Deployments page, you can create a new deployment by selecting a model name from the menu. The available base models come from the list on the Models page.

A
18
Q

From the Deployments page in the studio, you can also see information about all your deployments, including deployment name, model name, model version, status, date created, and more.

A
19
Q

Deploy using Azure CLI:
You can also deploy a model using the console. Using this example, replace the following variables with your own resource values:
myResourceGroupName: replace with your resource group name
myResourceName: replace with your resource name
MyModel: replace with a unique name for your model
text-curie-001: replace with the base model you wish to deploy
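A deployment command of roughly this shape would use the variables above (a sketch drawn from the Azure CLI's Cognitive Services deployment command; the `--model-version`, `--model-format`, and scale-settings flags are assumptions and may vary by CLI version):

```bash
az cognitiveservices account deployment create \
  -g myResourceGroupName \
  -n myResourceName \
  --deployment-name MyModel \
  --model-name text-curie-001 \
  --model-version "1" \
  --model-format OpenAI \
  --scale-settings-scale-type "Standard"
```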

A
20
Q

Deploy using the REST API:
You can deploy a model using the REST API. In the request body, you specify the base model you wish to deploy.

A
21
Q

Use prompts to get completions from models:
Once the model is deployed, you can test how it completes prompts. A prompt is the text portion of a request that is sent to the deployed model's completions endpoint.
Responses are referred to as completions, which can come in the form of text, code, and other formats.

A

Prompts can be grouped into types of requests based on task.
Task types include:
Classifying content
Generating new content
Holding a conversation
Transformation (translation and conversion)
Summarising content
Picking up where you left off
Giving factual responses

22
Q

Classifying content example: Tweet: I enjoyed the trip. Sentiment:
Completion example: positive
Generating new content example: List ways of travelling
Completion example: bike, car
Holding a conversation example: A friendly AI assistant
Completion example: responds as a friendly assistant
Transformation (translation and conversion) example: English: Hello French:
Completion example: bonjour
Summarising content example: Provide a summary of the content (with the text underneath)
Completion example: The content shared methods of machine learning.
Picking up where you left off example: One way to grow tomatoes
Completion example: is to plant seeds (completes the sentence)
Giving factual responses example: How many moons does the Earth have?
Completion example: 1

A
23
Q

Completion quality:
Several factors affect the quality of completions you'll get from a generative AI solution:
The way the prompt is engineered.
The model parameters.
The data the model is trained on, which can be adapted through model fine-tuning with customisation.

You have more control over the completions returned by training a custom model than through prompt engineering and parameter adjustment.

A
24
Q

Making calls:
You can start making calls to your deployed model via the REST API, Python, C#, or from the studio.
If your deployed model has a ChatGPT or GPT-4 model base, use the Chat Completions documentation, which has different request endpoints and required variables than the other base models.
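As a sketch of what a REST call involves, the snippet below builds the request URL and body for a deployed model's completions endpoint (the resource name, deployment name, and api-version shown are placeholder assumptions; the URL shape follows the pattern used by Azure OpenAI deployments):

```python
# Sketch: constructing a completions request for a deployed Azure OpenAI model.
# The resource name, deployment name, and api-version below are assumptions.
endpoint = "https://my-openai-resource.openai.azure.com"
deployment = "MyModel"
api_version = "2022-12-01"

url = f"{endpoint}/openai/deployments/{deployment}/completions?api-version={api_version}"
headers = {"api-key": "<YOUR-API-KEY>", "Content-Type": "application/json"}
body = {
    "prompt": "One way to grow tomatoes",
    "max_tokens": 32,
    "temperature": 0.7,
}
# Sending it would then be e.g.: requests.post(url, headers=headers, json=body)
print(url)
```

Authentication uses the resource's API key in the `api-key` header; the deployment name, not the base model name, identifies which model serves the request.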

A
25
Q

Test models in Azure OpenAI Studio's playgrounds:
Playgrounds are useful interfaces in Azure OpenAI Studio that you can use to experiment with your deployed models without needing to develop your own client application. Azure OpenAI Studio offers multiple playgrounds with different parameter tuning options.

A
26
Q

Completions playground:
The Completions playground lets you make calls to your deployed models through a text-in, text-out interface and adjust parameters.
You need to select the deployment name of your model under Deployments. Optionally, you can use the provided examples to get started, and then you can enter your own prompts.

A

Completions playground parameters:
Temperature
Max length (tokens)
Stop sequences
Top probabilities
Frequency penalty
Presence penalty
Pre-response text
Post-response text

27
Q

Temperature: Controls randomness. Lowering the temperature means the model produces more repetitive and deterministic responses. Increasing the temperature results in more unexpected or creative responses. Try adjusting temperature or Top P, but not both.

A
28
Q

Max length (tokens): Set a limit on the number of tokens per model response. The API supports a maximum of 4000 tokens shared between the prompt (including system message, examples, message history, and user query) and the model's response. One token is roughly 4 characters for typical English text.
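The 4-characters-per-token rule of thumb can be sketched as a quick budget check (a rough heuristic only, not a real tokeniser; the 4000-token ceiling is the shared limit from the card above):

```python
TOKEN_LIMIT = 4000  # shared between the prompt and the model's response

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for typical English text."""
    return max(1, round(len(text) / 4))

def max_response_budget(prompt: str) -> int:
    """Tokens left for the model's response after the prompt is counted."""
    return TOKEN_LIMIT - estimate_tokens(prompt)

print(estimate_tokens("Hello, world!"))  # 13 characters, roughly 3 tokens
```

A real application would use the model's actual tokeniser, since the 4-character estimate drifts for code, non-English text, and unusual punctuation.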

A
29
Q

Stop sequences:
Make responses stop at a desired point, such as the end of a sentence or list. Specify up to four sequences where the model will stop generating further tokens in a response. The returned text won't contain the stop sequence.
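In an API call, stop sequences travel in the request body; a minimal sketch (the `stop` parameter name follows the completions API, and the example sequences are illustrative):

```python
# Sketch: a completions request body using stop sequences.
# Generation halts before emitting any listed sequence; at most four are allowed.
body = {
    "prompt": "List three fruits:\n1.",
    "max_tokens": 50,
    "stop": ["4.", "\n\n"],  # stop at a fourth list item or at a blank line
}
print(len(body["stop"]))
```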

A
30
Q

Top probabilities (Top P): Similar to temperature, this controls randomness but uses a different method. Lowering Top P narrows the model's token selection to likelier tokens. Increasing Top P lets the model choose from tokens with both high and low likelihood. Try adjusting temperature or Top P, but not both.

A
31
Q

Frequency penalty:
Reduces the chance of repeating a token proportionally based on how often it has appeared in the text so far. This decreases the likelihood of repeating the exact same text in a response.

A
32
Q

Presence penalty:
Reduces the chance of repeating any token that has appeared in the text at all so far. This increases the likelihood of introducing new topics in a response.

A
33
Q

Pre-response text: Insert text after the user's input and before the model's response. This can help prepare the model for a response.

A
34
Q

Post-response text:
Insert text after the model's generated response to encourage further user input, as when modelling a conversation.

A
35
Q

Chat playground:
The Chat playground is based on a conversational, message-in message-out interface.
You can initialise the session with a system message to set up the chat context.
In the Chat playground you're able to add few-shot examples. The term few-shot refers to providing a few examples to help the model learn what it needs to do. You can think of it in contrast to zero-shot, which refers to providing no examples.

A
36
Q

In Assistant setup, you can provide few-shot examples of what the user input may be and what the assistant response should be. The assistant tries to mimic the responses you include here, in the tone, rules, and format you've defined in your system message.
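The system message plus few-shot examples map onto a chat-style message list; a minimal sketch (the role names follow the chat completions message format, and the content strings are illustrative):

```python
# Sketch: a chat session seeded with a system message and one few-shot example.
messages = [
    # System message: sets up the chat context and the assistant's rules.
    {"role": "system", "content": "You are a friendly assistant that answers briefly."},
    # Few-shot example: a sample user input and the response style to mimic.
    {"role": "user", "content": "How many moons does the Earth have?"},
    {"role": "assistant", "content": "One."},
    # The actual user query follows the examples.
    {"role": "user", "content": "List ways of travelling."},
]
print(len(messages))  # prints 4
```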

A
37
Q

Chat playground parameters:
The Chat playground includes the Temperature parameter and others not available in the Completions playground. These include:
Max response
Top P
Past messages included

A

Max response: Set a limit on the number of tokens per model response.
Top P: Similar to temperature, controls randomness but uses a different method.
Past messages included: Select the number of past messages to include in each new API request. Including past messages helps give the model context for new user queries. Setting this number to 10 will include five user queries and five system responses.
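The "past messages included" behaviour can be sketched as a small history-trimming helper (an illustration of the idea only, not the Studio's actual implementation; role names follow the chat message format):

```python
def trim_history(messages, past_messages=10):
    """Keep the system message plus only the most recent `past_messages`
    conversation turns, mimicking the 'past messages included' parameter."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    return system + turns[-past_messages:]

# With past_messages=10, six exchanges (12 turns) trim down to the last
# five user queries and five assistant responses.
history = [{"role": "system", "content": "Be helpful."}]
for i in range(6):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, past_messages=10)
print(len(trimmed))  # 1 system message + 10 turns = 11
```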

38
Q

The current token count is viewable from the Chat playground. Since API calls are priced by token, and it's possible to set a max response token limit, you'll want to keep an eye on the current token count to make sure the conversation doesn't exceed the max response token count.

A