Build Natural Language Solutions With Azure OpenAI Service Flashcards
Introduction: Azure OpenAI provides a platform for developers to add artificial intelligence functionality to their applications with the help of both Python and C# SDKs and REST APIs. The platform has various AI models available, each specializing in different tasks, which can be deployed through the Azure OpenAI service.
Integrate Azure OpenAI into your app:
Azure OpenAI offers both C# and Python SDKs and a REST API that developers can use to add AI functionality to their applications. Generative AI capabilities in Azure OpenAI are provided through models.
The models available in the Azure OpenAI service belong to different families, each with their own focus. To use one of these models, you need to deploy it through the Azure OpenAI service.
Important:
Azure OpenAI has been released with limited access to support the responsible use of the service.
Users need to apply for access and be approved before they can create an Azure OpenAI resource.
Create an Azure OpenAI resource:
An Azure OpenAI resource can be deployed through both the command-line interface (CLI) and the Azure portal. Creating the Azure OpenAI resource through the Azure portal is similar to creating individual Cognitive Services resources, and it is part of Cognitive Services.
Navigate to the Azure portal, search for Azure OpenAI, select it, and click Create.
Enter the appropriate values for the empty fields and create the resource.
The possible regions for Azure OpenAI are currently limited; choose the region closest to your physical location.
Once the resource has been created, you'll have keys and an endpoint that you can use in your app.
Choose and deploy a model:
Each model family excels at different tasks, and there are different capabilities of models within each family. Models break down into three main families:
Text, or Generative Pre-trained Transformer (GPT):
Models that understand and generate natural language and some code. These models are best at general tasks, conversations, and chat formats.
Code:
Code models are built on top of GPT models and trained on millions of lines of code.
These models can understand and generate code, including interpreting comments or natural language to generate code.
Embeddings:
These models can understand and use embeddings, which are a special format of data that can be used by machine learning models and algorithms.
The model family and capability are indicated in the name of the base model, such as text-davinci-003, which specifies that it's a text model with Davinci-level capability and identifier 3.
Details on model capability levels and naming conventions can be found on the Azure OpenAI models documentation page.
To deploy a model for you to use, navigate to Azure OpenAI Studio and go to the Deployments page.
Authentication and specification of the deployed model:
When you deploy a model in Azure OpenAI, you choose a deployment name to give it.
When configuring your app, you need to specify your resource endpoint, key, and deployment name to specify which model to send a request to.
This enables users to deploy various models within the same resource and make requests to the appropriate model depending on the task.
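For example, here is a minimal sketch of how these three values come together, assuming the openai Python package (version 1.x) with its Azure client; the endpoint, key, deployment name, and API version shown are placeholders:

```python
# A minimal sketch, assuming the openai Python package (version 1.x) with Azure support.
# The endpoint, key, deployment name, and API version are placeholders, not real values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://sample.openai.azure.com/",  # your resource endpoint
    api_key="YOUR_API_KEY",                             # a key from the Keys and Endpoint page
    api_version="2024-02-01",                           # an available API version
)

# The deployment name (not the base model name) selects which deployed model gets the request.
response = client.chat.completions.create(
    model="YOUR_DEPLOYMENT_NAME",
    messages=[{"role": "user", "content": "What is Azure OpenAI?"}],
)
print(response.choices[0].message.content)
```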
Prompt engineering:
How the input prompt is written plays a large part in how the AI model will respond.
For example, if prompted with a simple request such as "What is Azure OpenAI?", you will often get a generic answer similar to using a search engine.
However, if you give it more detail about what you want in the response, you will get a more specific answer.
An example would be: "Classify the following news headlines into one of the following categories: Business, Tech, Politics, etc.":
Headline 1: ...
Headline 2: ...
...and a category will be provided for each.
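As a rough sketch of that more detailed, few-shot style prompt (the headline text and category list here are placeholders, not content from the source):

```python
# A sketch of a detailed classification prompt; the headlines and categories are placeholders.
prompt = """Classify the following news headlines into one of the following categories:
Business, Tech, Politics, etc.

Headline 1: <headline text>
Category: <category>

Headline 2: <headline text>
Category:"""
```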
Note:
It is never safe to assume that answers from an AI model are factual or correct. Teams or individuals tasked with developing and deploying AI systems should work to identify, measure, and mitigate harm. It is your responsibility to verify any responses from an AI model and to use AI responsibly.
Available endpoints:
Azure OpenAI can be accessed via a REST API or an SDK, currently available for Python and C#.
The endpoints available for interacting with a deployed model are used differently, and certain endpoints can only use certain models. The available endpoints are:
Completion:
The model takes an input prompt and generates one or more predicted completions.
Chat completion:
The model takes input in the form of a chat conversation (where roles are specified with the message they send), and the next chat completion is generated.
Embeddings:
The model takes input and returns a vector representation of that input.
For example, the input for completion might be a prompt like "What is Azure OpenAI?", or it might include some role tags or other prompt engineering elements, as sketched below.
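A minimal illustration of the difference in input shape between the completions and chat completions endpoints (the content strings are placeholders):

```python
# Sketch of the two input shapes; the content strings are placeholders.
completion_input = "What is Azure OpenAI?"  # completions: a single prompt string

chat_input = [  # chat completions: a list of role-tagged messages
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Azure OpenAI?"},
]
```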
Use the Azure OpenAI REST API:
Azure OpenAI offers a REST API for interacting with models and generating responses that developers can use to add AI functionality to their applications.
Note:
Before interacting with the API, you must create an Azure OpenAI resource in the Azure portal, deploy a model in that resource, and retrieve your endpoint and keys.
Placeholder names:
YOUR_ENDPOINT_NAME
YOUR_API_KEY
YOUR_DEPLOYMENT_NAME
YOUR_ENDPOINT_NAME: the endpoint is found in the Keys and Endpoint section of the Azure portal. It's the base endpoint of your resource, such as https://sample.openai.azure.com/.
YOUR_API_KEY: keys are found in the Keys and Endpoint section of the Azure portal. You can use either key from your resource.
YOUR_DEPLOYMENT_NAME:
This deployment name is the name provided when you deployed your model in Azure OpenAI Studio.
Completions:
Once you deploy a model in your Azure OpenAI resource, you can send a prompt to the service using a POST request.
One endpoint is completions, which generates the completion of your prompt.
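As an illustrative sketch (not an exact copy of the documentation's request), a POST to the completions endpoint using the Python requests package might look like this; the placeholder values and api-version are assumptions:

```python
# A minimal sketch of a completions request, assuming the Python requests package.
# YOUR_ENDPOINT_NAME, YOUR_API_KEY, YOUR_DEPLOYMENT_NAME, and the api-version are placeholders.
import requests

url = (
    "https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/"
    "YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15"
)
headers = {"Content-Type": "application/json", "api-key": "YOUR_API_KEY"}
body = {"prompt": "What is Azure OpenAI?", "max_tokens": 100}

response = requests.post(url, headers=headers, json=body)
print(response.json())
```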
The response from the API will be similar to the following JSON (the values here are illustrative placeholders), which contains id, object, created, and model fields plus a choices array whose entries contain text, index, logprobs, and finish_reason:

```json
{
    "id": "<response id>",
    "object": "text_completion",
    "created": 1679001781,
    "model": "text-davinci-003",
    "choices": [
        {
            "text": "<generated completion text>",
            "index": 0,
            "logprobs": null,
            "finish_reason": "stop"
        }
    ]
}
```
The completion text to look for is within choices[0].text. Notice that the response also includes finish_reason, which in this example is stop.
Other possibilities for finish_reason include length, which means the response used up the max_tokens specified in the request, or content_filter, which means the system detected that harmful content was generated from the prompt.
If harmful content is included in the prompt, the API request returns an error.
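A small, self-contained sketch of how a client might interpret the finish_reason field of a decoded completions response (the helper function name is hypothetical):

```python
# Hypothetical helper that interprets finish_reason from a decoded completions response (a dict).
def describe_finish_reason(response_body: dict) -> str:
    choice = response_body["choices"][0]
    reason = choice["finish_reason"]
    if reason == "stop":
        return "The model finished the completion naturally."
    if reason == "length":
        return "The completion used up the max_tokens specified in the request."
    if reason == "content_filter":
        return "The system detected that harmful content was generated from the prompt."
    return f"Unrecognized finish_reason: {reason}"
```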
Chat completions:
Similar to completions, chat/completions generates a completion of your prompt, but it works best when that prompt is a chat exchange.
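A sketch of a chat/completions request, again using the Python requests package with the same placeholder values and assumed api-version as above:

```python
# A minimal sketch of a chat completions request, assuming the Python requests package.
# The placeholder values and api-version are assumptions, not exact documented values.
import requests

url = (
    "https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/"
    "YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
)
headers = {"Content-Type": "application/json", "api-key": "YOUR_API_KEY"}
body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Azure OpenAI?"},
    ]
}

response = requests.post(url, headers=headers, json=body)
# For chat completions, the reply text is in choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```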
Embeddings:
Embeddings are helpful for specific formats that are easily consumed by machine learning models. To generate embeddings from the input text, POST a request to the embeddings endpoint.
When generating embeddings, be sure to use a model in Azure OpenAI meant for embeddings. Those models start with text-embedding or text-similarity, depending on what functionality you're looking for.
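A sketch of an embeddings request, using the same requests-based pattern and placeholder values (the input string is just an example):

```python
# A minimal sketch of an embeddings request, assuming the Python requests package and a
# deployment of an embeddings model (for example, one whose name starts with text-embedding).
import requests

url = (
    "https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/"
    "YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15"
)
headers = {"Content-Type": "application/json", "api-key": "YOUR_API_KEY"}
body = {"input": "Example text to represent as a vector."}

response = requests.post(url, headers=headers, json=body)
# The vector representation of the input is in data[0].embedding.
vector = response.json()["data"][0]["embedding"]
print(len(vector), "dimensions")
```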