AI-900 Flashcards

1
Q

This is often the foundation for an AI system, and is the way we “teach” a computer model to make predictions and draw conclusions from data.

A

Machine learning

3
Q

Capabilities within AI to interpret the world visually through cameras, video, and images.

A

Computer vision

4
Q

Capabilities within AI for a computer to interpret written or spoken language, and respond in kind.

A

Natural language processing

5
Q

Capabilities within AI that deal with managing, processing, and using high volumes of data found in forms and documents.

A

Document intelligence

6
Q

Capabilities within AI to extract information from large volumes of often unstructured data to create a searchable knowledge store.

A

Knowledge mining

7
Q

Capabilities within AI that create original content in a variety of formats including natural language, image, code, and more.

A

Generative AI

8
Q

the foundation for most AI solutions

A

Machine Learning

9
Q

How does machine learning work?

A

Machines learn from data

10
Q

Machine learning models try to capture the relationship between …

A

Data

11
Q

Microsoft Azure provides the …

A

Azure Machine Learning service

12
Q

Azure Machine Learning is

A

a cloud-based platform for creating, managing, and publishing machine learning models.

13
Q

Azure Machine Learning studio offers multiple authoring experiences, such as:

A

Automated machine learning: this feature enables non-experts to quickly create an effective machine learning model from data.

Azure Machine Learning designer: a graphical interface enabling no-code development of machine learning solutions.

Data metric visualization: analyze and optimize your experiments with visualization.

Notebooks: write and run your own code in managed Jupyter Notebook servers that are directly integrated in the studio.

14
Q

Automated machine learning:

A

this feature enables non-experts to quickly create an effective machine learning model from data.

15
Q

Azure Machine Learning designer:

A

a graphical interface enabling no-code development of machine learning solutions.

16
Q

Data metric visualization:

A

analyze and optimize your experiments with visualization

17
Q

Notebooks:

A

write and run your own code in managed Jupyter Notebook servers that are directly integrated in the studio.

18
Q

an area of AI that deals with visual processing.

A

Computer Vision

19
Q

The … app is a great example of the power of computer vision. Designed for the blind and low vision community, it harnesses the power of AI to open up the visual world and describe nearby people, text, and objects.

A

Seeing AI

20
Q

What are some common computer vision tasks?

A

Image classification, Object detection, Semantic segmentation, Image analysis

21
Q

… an advanced machine learning technique in which individual pixels in the image are classified according to the object to which they belong. For example, a traffic monitoring solution might overlay traffic images with “mask” layers to highlight different vehicles using specific colors.

A

Semantic segmentation

22
Q

… involves training a machine learning model to classify images based on their contents. For example, in a traffic monitoring solution you might use an image classification model to classify images based on the type of vehicle they contain, such as taxis, buses, cyclists, and so on.

A

Image classification

23
Q

machine learning models are trained to classify individual objects within an image, and identify their location with a bounding box. For example, a traffic monitoring solution might use object detection to identify the location of different classes of vehicle.

A

Object detection

24
Q

You can create solutions that combine machine learning models with advanced … techniques to extract information from images, including “tags” that could help catalog the image or even descriptive captions that summarize the scene shown in the image.

A

image analysis

25
… is a specialized form of object detection that locates human faces in an image. This can be combined with classification and facial geometry analysis techniques to recognize individuals based on their facial features.
Face detection
26
… is a technique used to detect and read text in images. You can use OCR to read text in photographs (for example, road signs or store fronts) or to extract information from scanned documents such as letters, invoices, or forms.
Optical character recognition
27
Use … to develop computer vision solutions. The service features are available for use and testing in the … and other programming languages.
Azure AI Vision, Azure Vision Studio
28
features of Azure AI Vision include:
Image Analysis: capabilities for analyzing images and video, and extracting descriptions, tags, objects, and text.
Face: capabilities that enable you to build face detection and facial recognition solutions.
Optical Character Recognition (OCR): capabilities for extracting printed or handwritten text from images, enabling access to a digital version of the scanned text.
29
… is the area of AI that deals with creating software that understands written and spoken language.
Natural language processing (NLP)
30
NLP software can:
Analyze and interpret text in documents, email messages, and other sources.
 Interpret spoken language, and synthesize speech responses.
 Automatically translate spoken or written phrases between languages.
 Interpret commands and determine appropriate actions.

31
You can use … and … to build natural language processing solutions.
Microsoft's Azure AI Language, Azure AI Speech
32
features of Azure AI Language include …
understanding and analyzing text, training conversational language models that can understand spoken or text-based commands, and building intelligent applications.

33
Azure AI Speech features include …
speech recognition and synthesis, real-time translations, conversation transcriptions, and more.

34
You can explore Azure AI Language features in the … and Azure AI Speech features in the ….
Azure Language Studio, Azure Speech Studio
35
… is the area of AI that deals with managing, processing, and using high volumes of a variety of data found in forms and documents. Document intelligence enables you to create software that can automate processing for contracts, health documents, financial forms, and more.
Document Intelligence 
36
You can use Microsoft's …  to build solutions that manage and accelerate data collection from scanned documents.
Azure AI Document Intelligence
37
Features of Azure AI Document Intelligence help…
automate document processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
38
You can use … to add intelligent document processing for invoices, receipts, health insurance cards, tax forms, and more. You can also use … to create custom models with your own labeled datasets. The service features are available for use and testing in the … and other programming languages.
prebuilt models, Azure AI Document Intelligence, Document Intelligence Studio
39
… is the term used to describe solutions that involve extracting information from large volumes of often unstructured data to create a searchable knowledge store.
Knowledge mining 
40
One Microsoft knowledge mining solution is …, a private, enterprise search solution that has tools for building indexes. The indexes can then be used for internal only use, or to enable searchable content on public facing internet assets.
Azure Cognitive Search
41
… can utilize the built-in AI capabilities of Azure AI services such as image processing, document intelligence, and natural language processing to extract data. The product's AI capabilities make it possible to index previously unsearchable documents and to extract and surface insights from large amounts of data quickly.
Azure Cognitive Search
42
… describes a category of capabilities within AI that create original content. People typically interact with generative AI that has been built into chat applications. Generative AI applications take in natural language input, and return appropriate responses in a variety of formats including natural language, image, code, and audio.
Generative AI
43
In Microsoft Azure, you can use the … to build generative AI solutions.
Azure OpenAI service
44
… is Microsoft's cloud solution for deploying, customizing, and hosting generative AI models. It brings together the best of OpenAI's cutting edge models and APIs with the security and scalability of the Azure cloud platform.
Azure OpenAI Service
45
Azure OpenAI supports many foundation model choices that can serve different needs. The service features are available for use and testing in the Azure …  and other programming languages. You can use the Azure OpenAI Studio user interface to manage, develop, and customize generative AI models.
OpenAI Studio
46
The Challenges or Risks of AI include:
Bias can affect results
Errors may cause harm
Data could be exposed
Solutions may not work for everyone
Users must trust a complex system
Who's liable for AI-driven decisions?
47
Six principles that guide AI software development at Microsoft, designed to ensure that AI applications provide amazing solutions to difficult problems without any unintended negative consequences.
Fairness, Reliability and safety, Privacy and security, Inclusiveness, Transparency, Accountability
48
AI systems should treat all people fairly. For example, suppose you create a machine learning model to support a loan approval application for a bank. The model should predict whether the loan should be approved or denied without bias. This bias could be based on gender, ethnicity, or other factors that result in an unfair advantage or disadvantage to specific groups of applicants.

Azure Machine Learning includes the capability to interpret models and quantify the extent to which each feature of the data influences the model's prediction. This capability helps data scientists and developers identify and mitigate bias in the model.

Another example is Microsoft's implementation of Responsible AI with the Face service, which retires facial recognition capabilities that can be used to try to infer emotional states and identity attributes. These capabilities, if misused, can subject people to stereotyping, discrimination, or unfair denial of services.
Fairness
49
AI systems should perform …. For example, consider an AI-based software system for an autonomous vehicle; or a machine learning model that diagnoses patient symptoms and recommends prescriptions. Unreliability in these kinds of systems can result in substantial risk to human life. AI-based software application development must be subjected to rigorous testing and deployment management processes to ensure that they work as expected before release.
Reliably and safely
50
AI systems should be … and respect …. The machine learning models on which AI systems are based rely on large volumes of data, which may contain personal details that must be kept private. Even after the models are trained and the system is in production, privacy and security need to be considered. As the system uses new data to make predictions or take action, both the data and decisions made from the data may be subject to privacy or security concerns.
Secure, privacy
51
Through …, AI systems should empower everyone and engage people. AI should bring benefits to all parts of society, regardless of physical ability, gender, sexual orientation, ethnicity, or other factors.
Inclusiveness
52
To achieve …, AI systems should be understandable. Users should be made fully aware of the purpose of the system, how it works, and what limitations may be expected.
Transparency
53
People should be … for AI systems. Designers and developers of AI-based solutions should work within a framework of governance and organizational principles that ensure the solution meets ethical and legal standards that are clearly defined.
Accountable
54
Machine learning is in many ways the intersection of two disciplines: … and …
data science and software engineering
55
The goal of machine learning is to use data to create a … model that can be incorporated into a software application or service. To achieve this goal requires collaboration between data scientists who explore and prepare the data before using it to train a machine learning model, and software developers who integrate the models into applications where they're used to predict new data values (a process known as … ).
predictive, inferencing
56
Fundamentally, a machine learning model is a software application that encapsulates a  … to calculate an output value based on one or more input values. The process of defining that … is known as  …. After the … has been defined, you can use it to predict new values in a process called ….
function, function, training, function, inferencing
57
The training data consists of past observations. In most cases, the observations include the observed … or  … of the thing being observed, and the known … of the thing you want to train a model to predict (known as the …).

attributes, features, value, label
58
You'll often see the features referred to using the shorthand variable name …, and the label referred to as …. Usually, an observation consists of multiple feature values, so x is actually a … (an array with multiple values), like this: [x1,x2,x3,...].

x, y, vector
59
An … is applied to the data to try to determine a relationship between the … and the …, and generalize that relationship as a calculation that can be performed on … to calculate …
algorithm, features, label, x, y
60
The specific algorithm used depends on the kind of … problem you're trying to solve (more about this later), but the basic principle is to try to fit the data to a function in which the values of the features can be used to calculate the…
predictive, label.

61
The result of the algorithm is a … that encapsulates the calculation derived by the algorithm as a function - let's call it f. In mathematical notation:
 y = f(x)
model
62
The model is essentially a software program that encapsulates the … produced by the training process. You can input a set of …, and receive as an output a prediction of the corresponding …. Because the output from the model is a prediction that was calculated by the function, and not an observed value, you'll often see the output from the function shown as …
function, feature values, label, ŷ 
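A trained model really is just a function mapping feature values (x) to a predicted label (ŷ). A minimal Python sketch of the idea; the "learned" weights below are invented purely for illustration:

```python
# A trained model is essentially a function: y_hat = f(x)
def f(x):
    # x is a feature vector, e.g. [x1, x2, x3]
    weights = [0.5, -1.2, 3.0]   # hypothetical values produced by training
    bias = 4.0                   # hypothetical intercept
    return sum(w * xi for w, xi in zip(weights, x)) + bias

y_hat = f([2.0, 1.0, 0.5])       # inferencing: predict a label for new features
print(y_hat)                     # the output is a prediction (ŷ), not an observed value
```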
63
… is a general term for machine learning algorithms in which the training data includes both feature values and known label values.
Supervised machine learning
64
Supervised machine learning is used to train … by determining a relationship between the … and … in past observations, so that unknown … can be predicted for features in future cases.

models, features and labels, labels
65
… is a form of supervised machine learning in which the label predicted by the model is a numeric value.
Regression
66
… is a form of supervised machine learning in which the label represents a categorization, or class. There are two common … scenarios.

Classification, classification
67
In … classification, the label determines whether the observed item is (or isn't) an instance of a specific class. Or put another way, … classification models predict one of two mutually exclusive outcomes.
binary, binary
68
In … classification, the model predicts a …/… or …/… outcome for a single possible class.

Binary, true/false, positive/negative
69
… classification extends binary classification to predict a label that represents one of multiple possible classes.
Multiclass,
70
In most scenarios that involve a known set of multiple classes, multiclass classification is used to predict … labels
mutually exclusive
71
… machine learning involves training models using data that consists only of feature values without any known labels.
Unsupervised
72
… machine learning algorithms determine relationships between the features of the observations in the training data.

Unsupervised
73
There are some … algorithms that you can use to train multilabel classification models, in which there may be more than one valid label for a single observation.

Multiclass,
75
Unsupervised machine learning algorithms determine … between the features of the observations in the training data.

relationships
76
The most common form of unsupervised machine learning is ….
clustering
77
A … algorithm identifies similarities between observations based on their …, and groups them into discrete clusters.
clustering, features,
78
… is similar to multiclass classification in that it categorizes observations into discrete groups. The difference is that when using classification, you already know the classes to which the observations in the training data belong.
Clustering
79
In clustering, there's no previously known … … and the algorithm groups the data observations based purely on similarity of features.
cluster label,
80
In some cases, … is used to determine the set of classes that exist before training a classification model.
clustering
81
… models are trained to predict numeric label values based on training data that includes both features and known labels.
Regression
82
The process for training a regression model (or indeed, any … machine learning model) involves multiple iterations in which you use an appropriate algorithm (usually with some parameterized settings) to train a model, evaluate the model's … …, and refine the model by repeating the training process with different … and … until you achieve an acceptable level of predictive accuracy.

supervised, predictive performance, algorithms and parameters
83
Four key elements of the training process for supervised machine learning models
Split the training data (randomly) to create a dataset with which to train the model while holding back a subset of the data that you'll use to validate the trained model.
 Use an algorithm to fit the training data to a model. In the case of a regression model, use a regression algorithm such as linear regression.
 Use the validation data you held back to test the model by predicting labels for the features.
 Compare the known actual labels in the validation dataset to the labels that the model predicted. Then aggregate the differences between the predicted and actual label values to calculate a metric that indicates how accurately the model predicted for the validation data.
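A minimal sketch of this train/validate/evaluate loop using scikit-learn with synthetic data (the library choice and the data are assumptions for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Synthetic data: one feature (x) and a numeric label (y)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.5 * X[:, 0] + rng.normal(0, 1, size=100)

# 1. Split the data, holding back a validation subset
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# 2. Fit the training data to a model with a regression algorithm
model = LinearRegression().fit(X_train, y_train)

# 3. Predict labels for the held-back validation features
y_pred = model.predict(X_val)

# 4. Compare predicted and actual labels to calculate an evaluation metric
print("MAE:", mean_absolute_error(y_val, y_pred))
```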

84
After each train, validate, and evaluate iteration, you can repeat the process with different … and … until an acceptable evaluation metric is achieved.

algorithms and parameters
85
A … … algorithm works by deriving a function that produces a straight line through the intersections of the x and y values while minimizing the average distance between the line and the plotted points.
linear regression,
86
The … is the difference between the predicted (…) values and the actual (…) values from the validation dataset.
variance, ŷ, y,
87
Based on the differences between the predicted and actual values, you can calculate some common metrics that are used to evaluate a regression model. They include:
Mean Absolute Error (MAE)
Mean Squared Error (MSE)
Root Mean Squared Error (RMSE)
Coefficient of determination (R2)
88
The absolute error for each prediction, the distance either above or below the predicted outcome, can be summarized for the whole validation set as the …
mean absolute error (MAE)
89
it may be more desirable to have a model that is consistently wrong by a small amount than one that makes fewer, but larger errors. One way to produce such a metric is to "amplify" larger errors by squaring the individual errors and calculating the mean of the squared values. This metric is known as the … … …
mean squared error (MSE)
90
The mean squared error helps take the magnitude of errors into account, but because it squares the error values, the resulting metric no longer represents the quantity measured by the label. If we want to measure the error in terms of the number of ice creams, we need to calculate the … of the MSE; which produces a metric called, unsurprisingly, the … … … …
square root, Root Mean Squared Error
91
The … … … (more commonly referred to as R2 or R-Squared) is a metric that measures the proportion of variance in the validation results that can be explained by the model, as opposed to some anomalous aspect of the validation data.
coefficient of determination
92
The calculation for … is more complex than for the previous metrics. It compares the sum of squared differences between predicted and actual labels with the sum of squared differences between the actual label values and the mean of actual label values: … = 1 − ∑(y−ŷ)² ÷ ∑(y−ȳ)²
R2, R2
93
The important point is that the result of R2 is a value between … and … that describes the proportion of variance explained by the model. In simple terms, the closer to … this value is, the better the model is fitting the validation data
0 and 1, 1
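A hedged sketch of how these four regression metrics can be computed with numpy and scikit-learn (library choice is an assumption); y_true and y_pred stand in for the validation labels and the model's predictions:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([1, 2, 3, 4, 5])             # actual label values (y)
y_pred = np.array([1.2, 1.9, 3.4, 3.6, 5.1])   # predicted values (ŷ)

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                  # square root of the MSE
r2 = r2_score(y_true, y_pred)        # proportion of variance explained by the model

print(mae, mse, rmse, r2)
```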
94
In most real-world scenarios, a data scientist will use an iterative process to repeatedly train and evaluate a model, varying:
Feature selection and preparation (choosing which features to include in the model, and calculations applied to them to help ensure a better fit).
Algorithm selection (we explored linear regression in the previous example, but there are many other regression algorithms).
Algorithm parameters (numeric settings to control algorithm behavior, more accurately called hyperparameters to differentiate them from the x and y parameters).

95
Instead of calculating numeric values like a regression model, the algorithms used to train classification models calculate … … for class assignment, and the evaluation metrics used to assess model performance compare the … classes to the … classes.
probability values, predicted, actual
96
In most real scenarios, the data observations used to train and validate the binary model consist of … feature (x) values and a y value that is either … or ….
multiple, 1 or 0
97
To train a binary classification model, use an algorithm to fit the training data to a … that calculates the probability of the class label being …
function, true
98
… is measured as a value between … and …, such that the total probability for all possible classes is ….
Probability, 0.0 and 1.0, 1.0
99
There are many algorithms that can be used for binary classification, such as logistic regression, which derives a … (S-shaped) function with values between … and ….
sigmoid, 0.0 and 1.0
100
Despite its name, in machine learning, … is used for classification, not regression. The important point is the logistic nature of the function it produces, which describes an S-shaped curve between a lower and upper value (0.0 and 1.0 when used for … …).
logistic regression, binary classification
101
The … … … used to train binary classification models describes the probability of y being true (y=1) for a given value of x. Mathematically, you can express the function like this: f(x) = P(y=1 | x)
logistic regression function
102
For logistic regression models, with three of six observations in the training data, we know that y is definitely true, so the probability for those observations that y=1 is … and for the other three, we know that y is definitely false, so the probability that y=1 is …. The S-shaped curve describes the probability distribution so that plotting a value of x on the line identifies the corresponding probability that y is 1.
1.0, 0.0
103
The diagram for a logistic regression model also includes a horizontal line to indicate the threshold at which a model based on this function will predict true (1) or false (0). The threshold lies at the … for y (P(y) = 0.5). For any values at this point or above, the model will predict true (1); while for any values below this point it will predict false (0). For example, for a patient with a blood glucose level of 90, the function would result in a probability value of 0.9. Since 0.9 is higher than the threshold of 0.5, the model would predict true (1) - in other words, the patient is predicted to have diabetes.
mid-point
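A small sketch of the sigmoid function and the 0.5 threshold used to turn a probability into a binary prediction. The coefficient and intercept are made up for illustration; a real logistic regression learns them from the training data, so the probabilities below won't match the 0.9 in the flashcard exactly:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Hypothetical learned parameters for a single feature (e.g. blood glucose level)
w, b = 0.1, -8.0

def predict_proba(x):
    return sigmoid(w * x + b)        # P(y=1 | x), a value between 0.0 and 1.0

def predict(x, threshold=0.5):
    return 1 if predict_proba(x) >= threshold else 0

print(predict_proba(90), predict(90))   # probability above 0.5 -> predicts true (1)
```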
104
The first step in calculating evaluation metrics for a binary classification model is usually to create a matrix of the number of … and … predictions for each possible class label:
correct and incorrect
105
A confusion matrix shows the prediction totals where:
ŷ=0 and y=0: True negatives (TN)
ŷ=1 and y=0: False positives (FP)
ŷ=0 and y=1: False negatives (FN)
ŷ=1 and y=1: True positives (TP)
106
The arrangement of the confusion matrix is such that correct (true) predictions are shown in a … line from … to … Often, color-intensity is used to indicate the number of predictions in each cell, so a quick glance at a model that predicts well should reveal a deeply shaded diagonal trend.
diagonal, top-left, bottom-right.
107
The simplest metric you can calculate from the confusion matrix is …  - the proportion of predictions that the model got ….
accuracy, right
Accuracy = (TN+TP) ÷ (TN+FN+FP+TP)
108
Accuracy is calculated as:
(TN+TP) ÷ (TN+FN+FP+TP)
(2+3) ÷ (2+1+0+3) = 5 ÷ 6 = 0.83
So for our validation data, the diabetes classification model produced correct predictions 83% of the time.
109
What is the problem with relying on accuracy as the only evaluation metric?
Suppose 11% of the population has diabetes. You could create a model that always predicts 0, and it would achieve an accuracy of 89%, even though it makes no real attempt to differentiate between patients by evaluating their features. What we really need is a deeper understanding of how the model performs at predicting 1 for positive cases and 0 for negative cases.
110
… is a metric that measures the proportion of positive cases that the model identified correctly. In other words, compared to the number of patients who have diabetes, how many did the model predict to have diabetes? The formula is: TP ÷ (TP+FN)
Recall
111
… is a similar metric to recall, but measures the proportion of predicted positive cases where the true label is actually positive. In other words, what proportion of the patients predicted by the model to have diabetes actually have diabetes? The formula is: TP ÷ (TP+FP)
Precision
112
… is an overall metric that combines recall and precision. The formula is: (2 x Precision x Recall) ÷ (Precision + Recall)
F1-score
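Using the confusion matrix counts from the earlier diabetes example (TN=2, FP=0, FN=1, TP=3), a quick sketch of how accuracy, recall, precision, and F1 are derived:

```python
TN, FP, FN, TP = 2, 0, 1, 3   # counts from the validation confusion matrix

accuracy  = (TN + TP) / (TN + FN + FP + TP)                  # 5/6 ≈ 0.83
recall    = TP / (TP + FN)                                   # 3/4 = 0.75
precision = TP / (TP + FP)                                   # 3/3 = 1.0
f1        = (2 * precision * recall) / (precision + recall)  # ≈ 0.86

print(accuracy, recall, precision, f1)
```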
113
Another name for recall is the … … …, and there's an equivalent metric called the … … … that is calculated as FP ÷ (FP+TN).
true positive rate (TPR), false positive rate (FPR)
114
TPR and FPR metrics are often used to evaluate a model by plotting a … … … curve that compares the TPR and FPR for every possible threshold value between 0.0 and 1.0.
receiver operating characteristic (ROC)
115
The … curve for a perfect model would go straight up the TPR axis on the left and then across the FPR axis at the top. Since the plot area for the curve measures 1x1, the area under this perfect curve would be 1.0 (meaning that the model is correct 100% of the time). In contrast, a diagonal line from the bottom-left to the top-right represents the results that would be achieved by randomly guessing a binary label; producing an area under the curve of 0.5. In other words, given two possible class labels, you could reasonably expect to guess correctly 50% of the time.
ROC
116
As a supervised machine learning technique, … … follows the same iterative train, validate, and evaluate process as regression and binary classification in which a subset of the training data is held back to validate the trained model.
Multiclass classification
117
Multiclass classification algorithms are used to calculate probability values for multiple class labels, enabling a model to predict the … … class for a given observation.
most probable
118
To train a multiclass classification model, we need to use an algorithm to fit the training data to a function that calculates a probability value for each possible class. There are two kinds of algorithm you can use to do this:
One-vs-Rest (OvR) algorithms Multinomial algorithms

119
… algorithms train a binary classification function for each class, each calculating the probability that the observation is an example of the target class. Each function calculates the probability of the observation being a specific class compared to any other class.
One-vs-Rest
f0(x) = P(y=0 | x)
f1(x) = P(y=1 | x)
f2(x) = P(y=2 | x)
Each algorithm produces a sigmoid function that calculates a probability value between 0.0 and 1.0. A model trained using this kind of algorithm predicts the class for the function that produces the highest probability output.

120
A … algorithm, creates a single function that returns a multi-valued output. The output is a vector (an array of values) that contains the probability distribution for all possible classes - with a probability score for each class which when totaled adds up to 1.0: f(x) =[P(y=0|x), P(y=1|x), P(y=2|x)]
multinomial
121
An example of a Multinomial function is a … function, which could produce an output like the following example: [0.2, 0.3, 0.5] The elements in the vector represent the probabilities for classes 0, 1, and 2 respectively; so in this case, the class with the highest probability is 2.
softmax
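A minimal numpy sketch of a softmax function turning raw scores into a probability distribution that sums to 1.0:

```python
import numpy as np

def softmax(scores):
    exps = np.exp(scores - np.max(scores))   # subtract the max for numerical stability
    return exps / exps.sum()

probs = softmax(np.array([0.9, 1.3, 1.8]))
print(probs, probs.sum())   # roughly [0.2, 0.3, 0.5]; the probabilities total 1.0
```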
122
You can evaluate a multiclass classifier by calculating … classification metrics for each individual class. Alternatively, you can calculate … metrics that take all classes into account.
binary, aggregate
123
The confusion matrix for a multiclass classifier is similar to that of a binary classifier, except that it shows the number of predictions … … … of predicted (ŷ) and actual class labels (y)
for each combination
124
To calculate the overall accuracy, recall, and precision metrics, you use the total of the … … … and … metrics:
Overall accuracy = (TN+TP) ÷ (TN+FN+FP+TP)
Overall recall = TP ÷ (TP+FN)
Overall precision = TP ÷ (TP+FP)
(TP: True Positive, FP: False Positive, TN: True Negative, FN: False Negative)

TP, TN, FP, and FN
125
The overall F1 score is based on … … and … …
Overall precision and overall recall
Overall F1 = (2 x Precision x Recall) ÷ (Precision + Recall)
126
Clustering is a form of … machine learning in which observations are grouped into clusters based on similarities in their data values, or features. This kind of machine learning is considered … because it doesn't make use of previously known label values to train a model. In a clustering model, the … is the cluster to which the observation is assigned, based only on its features.
unsupervised, unsupervised, label
For example, suppose a botanist observes a sample of flowers and records the number of leaves and petals on each flower. There are no known labels in the dataset, just two features. The goal is not to identify the different types (species) of flower; just to group similar flowers together based on the number of leaves and petals.
127
There are multiple algorithms you can use for clustering. One of the most commonly used algorithms is K-Means clustering, which consists of the following steps:
The feature (x) values are vectorized to define n-dimensional coordinates (where n is the number of features). In the flower example, we have two features: number of leaves (x1) and number of petals (x2). So, the feature vector has two coordinates that we can use to conceptually plot the data points in two-dimensional space ([x1, x2]).
You decide how many clusters you want to use to group the flowers - call this value k. For example, to create three clusters, you would use a k value of 3. Then k points are plotted at random coordinates. These points become the center points for each cluster, so they're called centroids.
 Each data point (in this case a flower) is assigned to its nearest centroid.
 Each centroid is moved to the center of the data points assigned to it based on the mean distance between the points.
 After the centroid is moved, the data points may now be closer to a different centroid, so the data points are reassigned to clusters based on the new closest centroid.
 The centroid movement and cluster reallocation steps are repeated until the clusters become stable or a predetermined maximum number of iterations is reached.
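A compact sketch of K-Means clustering with scikit-learn (a library assumption), using two features per observation such as leaves and petals:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two features per observation, e.g. [number of leaves, number of petals]
X = np.array([[0, 5], [0, 6], [1, 3], [1, 3], [2, 1], [2, 2]])

# k = 3 clusters; the algorithm places centroids and iterates until assignments stabilize
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)            # cluster assigned to each observation
print(kmeans.cluster_centers_)   # final centroid coordinates
```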

128
Since there's no known label with which to compare the predicted cluster assignments, evaluation of a clustering model is based on how well the resulting clusters are … … … ….
separated from one another
129
There are multiple metrics that you can use to evaluate cluster separation, including:

Average distance to cluster center: How close, on average, each point in the cluster is to the centroid of the cluster.
 Average distance to other center: How close, on average, each point in the cluster is to the centroid of all other clusters.
 Maximum distance to cluster center: The furthest distance between a point in the cluster and its centroid.
 Silhouette: A value between -1 and 1 that summarizes the ratio of distance between points in the same cluster and points in different clusters (The closer to 1, the better the cluster separation).
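Silhouette is often the headline separation metric; a short sketch computing it with scikit-learn for the clusters produced by a K-Means model (library and data are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.array([[0, 5], [0, 6], [1, 3], [1, 3], [2, 1], [2, 2]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Closer to 1 means points are much nearer their own cluster than to other clusters
print(silhouette_score(X, labels))
```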
130
… … is an advanced form of machine learning that tries to emulate the way the human brain learns.
Deep learning
131
The key to deep learning is the creation of an … … … that simulates electrochemical activity in biological neurons by using mathematical functions.
artificial neural network
132
Artificial neural networks are made up of multiple layers of neurons - essentially defining a … … …. This architecture is the reason the technique is referred to as deep learning and the models produced by it are often referred to as deep neural networks (DNNs). You can use deep neural networks for many kinds of machine learning problem, including regression and classification, as well as more specialized models for natural language processing and computer vision.
deeply nested function
133
Just like other machine learning techniques discussed in this module, deep learning involves fitting training data to a function that can predict a label (y) based on the value of one or more features (x). The function (f(x)) is the … … of a nested function in which each layer of the neural network encapsulates functions that operate on x and the weight (w) values associated with them. The algorithm used to train the model involves iteratively feeding the … … (x) in the training data forward through the layers to calculate output values for ŷ, validating the model to evaluate how far off the calculated ŷ values are from the known y values (which quantifies the level of error, or loss, in the model), and then modifying the weights (w) to reduce the loss. The trained model includes the final weight values that result in the most accurate predictions.
outer layer, feature values
134
This is an example of a classification problem, in which the machine learning model must predict the most probable class to which an observation belongs. A classification model accomplishes this by predicting a label that consists of the probability for … ….
each class
In other words, y is a vector of three probability values; one for each of the possible classes: [P(y=0|x), P(y=1|x), P(y=2|x)].

135
The process for inferencing a predicted penguin class using a deep learning network is:

The feature vector for a penguin observation is fed into the input layer of the neural network, which consists of a neuron for each x value. In this example, the following x vector is used as the input: [37.3, 16.8, 19.2, 30.0]
 The functions for the first layer of neurons each calculate a weighted sum by combining the x value and w weight, and pass it to an activation function that determines if it meets the threshold to be passed on to the next layer.
 Each neuron in a layer is connected to all of the neurons in the next layer (an architecture sometimes called a fully connected network) so the results of each layer are fed forward through the network until they reach the output layer.
 The output layer produces a vector of values; in this case, using a softmax or similar function to calculate the probability distribution for the three possible classes of penguin. In this example, the output vector is: [0.2, 0.7, 0.1]
 The elements of the vector represent the probabilities for classes 0, 1, and 2. The second value is the highest, so the model predicts that the species of the penguin is 1 (Gentoo).
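A toy numpy sketch of this kind of feed-forward pass: one hidden layer with made-up weights, an activation function, and a softmax output layer. All values are invented for illustration and do not reproduce the penguin model:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    exps = np.exp(z - z.max())
    return exps / exps.sum()

x = np.array([37.3, 16.8, 19.2, 30.0])      # input feature vector

# Hypothetical weights: 4 inputs -> 3 hidden neurons -> 3 output classes
W1 = np.array([[ 0.02, -0.01,  0.03],
               [ 0.01,  0.04, -0.02],
               [-0.03,  0.02,  0.01],
               [ 0.02,  0.01,  0.02]])
W2 = np.array([[ 0.5, -0.2,  0.1],
               [-0.3,  0.6,  0.2],
               [ 0.1,  0.3, -0.4]])

hidden = relu(x @ W1)            # weighted sums passed through an activation function
output = softmax(hidden @ W2)    # probability distribution over the 3 classes

print(output, output.argmax())   # the highest-probability class is the prediction
```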

136
Azure Machine Learning provides the following features and capabilities to support machine learning workloads:

Centralized storage and management of datasets for model training and evaluation.
 On-demand compute resources on which you can run machine learning jobs, such as training a model.
 Automated machine learning (AutoML), which makes it easy to run multiple training jobs with different algorithms and parameters to find the best model for your data.
 Visual tools to define orchestrated pipelines for processes such as model training or inferencing.
 Integration with common machine learning frameworks such as MLflow, which make it easier to manage model training, evaluation, and deployment at scale.
 Built-in support for visualizing and evaluating metrics for responsible AI, including model explainability, fairness assessment, and others.

137
The primary resource required for Azure Machine Learning is an Azure Machine Learning…
workspace
138
Azure Machine Learning …; a browser-based portal for managing your machine learning resources and jobs.

studio
139
In Azure Machine Learning studio, you can (among other things):

Import and explore data.
Create and use compute resources.
Run code in notebooks.
Use visual tools to create jobs and pipelines.
Use automated machine learning to train models.
View details of trained models, including evaluation metrics, responsible AI information, and training parameters.
Deploy trained models for on-request and batch inferencing.
Import and manage models from a comprehensive model catalog.

140
… … imitates human behavior by relying on machines to learn and execute tasks without explicit directions on what to output.

Artificial Intelligence
141
… … algorithms take in data like weather conditions and fit models to the data, to make predictions like how much money a store might make in a given day.

Machine learning
142
… … models use layers of algorithms in the form of artificial neural networks to return results for more complex use cases. Many Azure AI services are built on deep learning models. You can check out this article to learn more about the difference between machine learning and deep learning.
Deep learning
143
… … models can produce new content based on what is described in the input. The OpenAI models are a collection of generative AI models that can produce language, code, and images.
Generative AI
144
Generative AI includes:
Generating natural language
Generating code
Generating images
145
OpenAI consists of four components:

Pre-trained generative AI models
 Customization capabilities; the ability to fine-tune AI models with your own data
 Built-in tools to detect and mitigate harmful use cases so users can implement AI responsibly
 Enterprise-grade security with role-based access control (RBAC) and private networks

146
OpenAI supports many common AI workloads and solves for some new ones.
 Common AI workloads include machine learning, computer vision, natural language processing, conversational AI, anomaly detection, and knowledge mining.
 Other AI workloads Azure OpenAI supports can be categorized by tasks they support, such as:

Generating Natural Language
Text completion: generate and edit text
Embeddings: search, classify, and compare text
 Generating Code: generate, edit, and explain code
 Generating Images: generate and edit images

147
Azure AI services encompass all of what were previously known as … … and Azure Applied AI Services.

Cognitive Services
148
Azure AI services are tools for solving AI …
workloads
149
There are several overlapping capabilities between Azure AI Language service and Azure OpenAI Service, such as translation, … …, and keyword extraction
sentiment analysis
150
… is the process of optimizing a model's performance.
Tuning
151
Azure OpenAI Service may be more beneficial for use-cases that require highly customized … …, or for exploratory research.
generative models
152
When making business decisions about what type of model to use, it's important to understand how time and compute needs factor into machine learning training. In order to produce an effective machine learning model, the model needs to be trained with a substantial amount of cleaned data. The 'learning' portion of training requires a computer to identify an algorithm that best fits the data. The complexity of the task the model needs to solve for and the desired level of model performance all factor into the … required to run through possible solutions for a best fit algorithm.
time
153
… models that represent the latest generative models for natural language and code.
GPT-4 
154
… models that can generate natural language and code responses based on prompts.

GPT-3.5 
155
… models that convert text to numeric vectors for analysis - for example comparing sources of text for similarity.

Embeddings
156
… models that generate images based on natural language descriptions
DALL-E 
157
… models always have a probability of reflecting true values. Higher performing models, such as models that have been fine-tuned for specific tasks, do a better job of returning responses that reflect true values. It is important to review the output of generative AI models.

Generative AI
158
In the Azure OpenAI Studio, you can experiment with OpenAI models in …. In the … …, you can type in prompts, configure parameters, and see responses without having to code.

playgrounds, Completions playground
159
In the  … playground, you can use the assistant setup to instruct the model about how it should behave. The assistant will try to mimic the responses you include in tone, rules, and format you've defined in your system message.

Chat
160
… … learning models are trained on words or chunks of characters known as tokens. For example, the word "hamburger" gets broken up into the tokens ham, bur, and ger, while a short and common word like "pear" is a single token. These tokens are mapped into vectors for a machine learning model to use for training. When a trained … … model takes in a user's input, it also breaks down the input into tokens.
Natural language, natural language
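A rough illustration of tokenization using the open-source tiktoken library (an assumption — the flashcards don't name a tokenizer, and exact token splits vary between models):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("hamburger")
print(tokens)                                               # list of integer token IDs
print([enc.decode_single_token_bytes(t) for t in tokens])   # the text chunk behind each ID
```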
161
… … … models are excellent at both understanding and creating natural language.
Generative pre-trained transformer (GPT)
162
GPT tries to infer, or guess, the context of the user's question based on the...
prompt
163
Natural language tasks include:

Summarizing text
Classifying text
Generating names or phrases
Translation
Answering questions
Suggesting content
164
… models have been trained on both natural language and billions of lines of code from public repositories.
GPT
165
What's unique about the … model family is that it's more capable across more languages than GPT models.
Codex
166
… can also summarize functions that are already written, explain SQL queries or tables, and convert a function from one programming language into another.
GPT
167
OpenAI Codex is:
OpenAI Codex is an artificial intelligence model developed by OpenAI. It parses natural language and generates code in response. It powers GitHub Copilot, a programming autocompletion tool for select IDEs, like Visual Studio Code and Neovim. The main difference between Codex and ChatGPT is that Codex focuses on code generation, while ChatGPT is designed for conversational text generation. When analyzing their computational performance, we can see that Codex is significantly faster than ChatGPT when performing code generation. Both are owned by OpenAI.
168
GitHub … integrates the power of OpenAI Codex into a plugin for developer environments like Visual Studio Code.

Copilot
169
In addition to natural language capabilities, generative AI models can edit and create images. The model that works with images is called ….
DALL-E
170
Image capabilities generally fall into the three categories of:
image creation, editing an image, and creating variations of an image.

171
DALL-E can edit the image as requested by changing its style, adding or removing items, or generating new content to add. Edits are made by uploading the original image and specifying a transparent … that indicates what area of the image to edit
Mask
172
… … … are AI capabilities that can be built into web or mobile applications, in a way that's straightforward to implement.
Azure AI services
173
The Azure AI … … service can be used to detect harmful content within text or images, including violent or hateful content, and report on its severity.
Content Safety
174
The Azure AI … service can be used to summarize text, classify information, or extract key phrases.
Language
175
Azure AI … service provides powerful speech to text and text to speech capabilities, allowing speech to be accurately transcribed into text, or text to natural sounding voice audio.
Speech
176
Azure AI services are based on three principles that dramatically improve speed-to-market:

Prebuilt and ready to use Accessed through APIs Available on Azure

177
Developers can access AI services through … …, client libraries, or integrate them with tools such as Logic Apps and Power Automate.
REST APIs
178
AI Services are managed in the same way as other Azure services, such as platform as a service (PaaS), infrastructure as a service (IaaS), or a … … service
managed database
179
The Azure platform and … … provide a consistent framework for all your Azure services, from creating or deleting resources, to availability and billing.
Resource Manager
180
There are two types of AI service resources … or …
multi-service or single-service.
181
… resource: a resource created in the Azure portal that provides access to multiple Azure AI services with a single key and endpoint. Use the resource Azure AI services when you need several AI services or are exploring AI capabilities. When you use an Azure AI services resource, all your AI services are billed together.
Multi-service
182
… resources: a resource created in the Azure portal that provides access to a single Azure AI service, such as Speech, Vision, Language, etc. Each Azure AI service has a unique key and endpoint. These resources might be used when you only require one AI service or want to see cost information separately.
Single-service
183
To create an Azure AI services resource, sign in to the Azure portal with … access and select Create a resource.
Contributor
184
Once you create an Azure AI service resource, you can build applications using the … …, software development kits (SDKs), or visual studio interfaces.
REST API
185
There are different studios for different Azure AI services, such as … …, Language Studio, Speech Studio, and the Content Safety Studio.
Vision Studio
186
Before you can use an AI service resource, you must associate it with the … you want to use on the Settings page. Select the resource, and then select Use Resource.
studio
187
Most Azure AI services are accessed through a … …, although there are other ways. The API defines what information is passed between two software components: the Azure AI service and whatever is using it.
RESTful API
188
Part of what an … does is to handle authentication. Whenever a request is made to use an AI services resource, that request must be authenticated. For example, your subscription and AI service resource is verified to ensure you have sufficient permissions to access it. This authentication process uses an endpoint and a resource key.
API
189
The … … protects the privacy of your resource.
resource key
190
When you write code to access the AI service, the keys and endpoint must be included in the … ….
authentication header
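A hedged sketch of calling an Azure AI service over REST with Python's requests library. The endpoint, key, route, and API version below are placeholders for illustration; check the service's reference documentation for the exact request shape:

```python
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
key = "<your-resource-key>"                                            # placeholder

headers = {
    "Ocp-Apim-Subscription-Key": key,   # resource key sent in the authentication header
    "Content-Type": "application/json",
}

# Illustrative Language-service sentiment analysis request body
body = {
    "kind": "SentimentAnalysis",
    "analysisInput": {"documents": [{"id": "1", "language": "en", "text": "I love Azure AI!"}]},
}

response = requests.post(
    f"{endpoint}/language/:analyze-text",
    params={"api-version": "2023-04-01"},   # illustrative version string
    headers=headers,
    json=body,
)
print(response.status_code, response.json())
```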
191
… … is a technique that uses mathematics and statistics to create a model that can predict unknown values.
Machine learning
192
Mathematically, you can think of machine learning as a way of defining a … (let's call it f) that operates on one or more … of something (which we'll call x) to calculate a predicted … (y) - like this: f(x) = y
function, features, label
193
The specific operation that the f function performs on x to calculate y depends on a number of factors, including the type of … you're trying to create and the specific algorithm used to train the model.
model
194
The … … … approach requires you to start with a dataset with known label values. Two types of supervised machine learning tasks include regression and classification.
supervised machine learning
195
… is used to predict a continuous value; like a price, a sales total, or some other measure.

Regression
196
… is used to determine a class label; an example of a binary class label is whether a patient has diabetes or not; an example of multi-class labels is classifying text as positive, negative, or neutral.
Classification
197
The … machine learning approach starts with a dataset without known label values. One type of unsupervised machine learning task is clustering.
unsupervised
198
… is used to determine labels by grouping similar information into label groups; like grouping measurements from birds into species.
Clustering
199
To use Azure Machine Learning, you first create a … resource in your Azure subscription.
workspace
200
You can then use this workspace to manage data, code, … , and other artifacts related to your machine learning workloads.
models
201
After you have created an Azure Machine Learning workspace, you can develop solutions with the Azure Machine Learning service either with developer tools or the Azure Machine Learning studio … …
web portal.
202
Azure Machine Learning … is a web portal for machine learning solutions in Azure.
studio
203
Azure Machine Learning includes an automated machine learning capability that automatically tries multiple pre-processing techniques and model-training algorithms in …. 
 These automated capabilities use the power of cloud … to find the best performing supervised machine learning model for your data.
parallel, compute
204
Automated machine learning allows you to train models without extensive data science or programming knowledge. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and … tuning.
hyperparameter
205
In Azure Machine Learning, operations that you run are called  …. You can configure multiple settings for your job before starting an automated machine learning ….
jobs, run
206
The run configuration provides the information needed to specify your training … and Azure Machine Learning environment in your run … and run a training job.
script, configuration
207
You can think of the steps in a machine learning process as:
Prepare data
Train model
Evaluate performance
Deploy a predictive service
208
… …: Identify the features and label in a dataset. Pre-process, or clean and transform, the data as needed.

Prepare data:
209
… …: Split the data into two groups, a training and a validation set. Train a machine learning model using the training data set. Test the machine learning model for performance using the validation data set.

Train model:
210
… …: Compare how close the model's predictions are to the known labels.

Evaluate performance:
211
Deploy a … …: After you train a machine learning model, you can deploy the model as an application on a server or device so that others can use it.

predictive service:
212
In Azure Machine Learning, data for model training and other operations is usually encapsulated in an object called a … ….
 data asset.
213
The automated machine learning capability in Azure Machine Learning supports supervised machine learning models - in other words, models for which the training data includes known label values. You can use automated machine learning to train models for:

Classification (predicting categories or classes)
Regression (predicting numeric values)
Time series forecasting (predicting numeric values at a future point in time)

214
In Automated Machine Learning, you can select configurations for the primary metric, type of model used for training, exit criteria, and … ….
concurrency limits
215
Additional Machine Learning configuration settings include:
Primary metric
Explain best model
Use all supported models
Blocked models
Training job time
Metric score threshold
Max concurrent iterations
216
… will split data into a training set and a validation set.
AutoML
217
The best model is identified based on the … metric you specified.
evaluation
218
If you used … … to stop the job, the "best" model the job generated might not be the best possible model, just the best one found within the time allowed for this exercise.

exit criteria
219
A technique called … is used to calculate the evaluation metric. After the model is trained using a portion of the data, the remaining portion is used to iteratively test, or …, the trained model. The metric is calculated by comparing the predicted value from the test with the actual known value, or label.

cross-validation, cross-validate
220
The difference between the predicted and actual value, known as the … , indicates the amount of error in the model.
residuals
221
The performance metric … … … … (RMSE), is calculated by squaring the errors across all of the test cases, finding the mean of these squares, and then taking the square root.
root mean squared error
222
With root mean squared error, the … this value is, the more accurate the model's predictions.
smaller
223
The … root mean squared error (NRMSE) standardizes the RMSE metric so it can be used for comparison between models which have variables on different scales.

normalized
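A small sketch of RMSE and a normalized RMSE. Normalizing by the range of the actual values is one common convention; that choice is an assumption here, since other normalizations (for example by the mean) are also used:

```python
import numpy as np

y_true = np.array([10, 20, 30, 40, 50])   # actual label values
y_pred = np.array([12, 18, 33, 41, 47])   # predicted values

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
nrmse = rmse / (y_true.max() - y_true.min())   # assumption: normalize by the range

print(rmse, nrmse)
```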
224
The … … shows the frequency of residual value ranges. Residuals represent variance between predicted and true values that can't be explained by the model, in other words, errors. You should hope to see the most frequently occurring residual values clustered around zero. You want small errors with fewer errors at the extreme ends of the scale.

Residual Histogram
225
The … vs. … chart should show a diagonal trend in which the predicted value correlates closely to the true value. The dotted line shows how a perfect model should perform. The closer the line of your model's average predicted value is to the dotted line, the better its performance. A histogram below the line chart shows the distribution of true values.

Predicted, True
226
In Azure Machine Learning, you can deploy a service as an … … … (ACI) or to an … … … (AKS) cluster.
Azure Container Instances, Azure Kubernetes Service
227
For production scenarios, an … deployment is recommended, for which you must create an … … … ….
AKS, inference cluster compute target
228
For testing, you can use an … service, which is a suitable deployment target for testing and does not require you to create an inference cluster.
ACI
229
Regression is a supervised machine learning technique used to predict numeric values. Learn how to create regression models using Azure … … ….
Machine Learning designer
230
… predicts a numeric label or outcome based on variables, or features
Regression
231
Regression is an example of a  … machine learning technique in which you train a model using data that includes both the features and known values for the label, so that the model learns to fit the feature combinations to the label. Then, after training has been completed, you can use the trained model to predict labels for new items for which the label is unknown.
supervised
232
A few scenarios of … machine learning are:
 Using characteristics of houses, such as square footage and number of rooms, to predict home prices. Using characteristics of farm conditions, such as weather and soil quality, to predict crop yield.
 Using characteristics of a past campaign, such as advertising logs, to predict future advertisement clicks.
Regression
233
At its core, Azure Machine Learning is a service for training and managing machine learning models, for which you need … resources on which to run the training process.
compute
234
Compute targets are … resources on which you can run model training and data exploration processes.
cloud-based
235
You can manage the compute targets for your data science activities in Azure … … studio.
Machine Learning
236
There are four kinds of compute resource you can create:
Compute Instances:
 Compute Clusters:
 Kubernetes Clusters:
 Attached Compute:
237
… …: Development workstations that data scientists can use to work with data and models.

Compute Instances:
238
… … : Scalable clusters of virtual machines for on-demand processing of experiment code.

Compute clusters
239
… … : Deployment targets for predictive services that use your trained models. You can access previous versions of "inference clusters" here.

Kubernetes clusters: Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
240
… … : Links to existing Azure compute resources, such as Virtual Machines or Azure Databricks clusters.
Attached compute:
241
In Azure Machine Learning studio, there are several ways to author regression machine learning models. One way is to use a visual interface called  … that you can use to train, test, and deploy machine learning models. The drag-and-drop interface makes use of clearly defined inputs and outputs that can be shared, reused, and version controlled.
designer
242
Each designer project, known as a … , has a left panel for navigation and a canvas on your right hand side. To use designer, identify the building blocks, or components, needed for your model, place and connect them on your canvas, and run a machine learning job.
pipeline
243
Pipelines let you organize, manage, and reuse complex machine learning workflows across projects and users. A pipeline starts with the … from which you want to train the model.
dataset
244
Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a … ….
pipeline job
245
An Azure Machine Learning … encapsulates one step in a machine learning pipeline.
component
246
You can think of a component as a … … and as a building block for Azure Machine Learning pipelines.
programming function
247
In a pipeline project, you can access data assets and components from the left panel's  … … tab
Asset Library
248
You can create data assets on the Data page from local files, a datastore, … …, and Open Datasets.
web files
249
Machine Learning data assets will appear along with standard sample datasets in designer's … Library. 
Asset
250
An Azure Machine Learning (ML) job executes a task against a specified … target.
compute
251
Once a job is created, Azure ML maintains a … … for the job.
run record
252
In your designer project, you can access the status of a pipeline job using the  … … tab on the left pane. 
Submitted jobs
253
The steps to train and evaluate a regression machine learning model:
Prepare data:
 Train model:
 Evaluate performance:
 Deploy a predictive service:
254
… … : Identify the features and label in a dataset. Pre-process, or clean and transform, the data as needed.

Prepare data:
255
… … : Split the data into two groups, a training and a validation set. Train a machine learning model using the training data set. Test the machine learning model for performance using the validation data set.

Train model:
256
… … : Compare how close the model's predictions are to the known labels.

Evaluate Performance:
257
… … … … : After you train a machine learning model, you need to convert the training pipeline into a real-time inference pipeline. Then you can deploy the model as an application on a server or device so that others can use it.

Deploy a predictive service:
258
Azure Machine Learning designer has several pre-built components that can be used to prepare data for training. These components enable you to clean data, … … , join tables, and more. 
normalize features
259
To train a regression model, you need a dataset that includes historical features, characteristics of the entity for which you want to make a prediction, and known … ….
label values
260
The … is the quantity you want to train a model to predict.
Label
261
It's common practice to train the model using a subset of the data, while holding back some data with which to … the trained model. This enables you to compare the labels that the model predicts with the actual known labels in the original dataset.
test
262
You will use designer's … … component to generate the predicted label value.
Score Model
263
Once you connect all the … , you will want to run an experiment, which will use the data asset on the canvas to train and score a model.
components
264
There are many performance metrics and methodologies for evaluating how well a model makes predictions. You can review evaluation metrics on the completed job page by right-clicking on the … … component.
Evaluate model
265
… … … : The average difference between predicted values and true values. This value is based on the same units as the label, in this case dollars. The lower this value is, the better the model is predicting.

Mean Absolute Error (MAE)
266
… … … … : The square root of the mean squared difference between predicted and true values. The result is a metric based on the same unit as the label (dollars). When compared to the MAE (above), a larger difference indicates greater variance in the individual errors (for example, with some errors being very small, while others are large).

Root mean squared error (RMSE):
267
… … … : A relative metric between 0 and 1 based on the square of the differences between predicted and true values. The closer to 0 this metric is, the better the model is performing. Because this metric is relative, it can be used to compare models where the labels are in different units.

Relative squared error (RSE):
268
… … … : A relative metric between 0 and 1 based on the absolute differences between predicted and true values. The closer to 0 this metric is, the better the model is performing. Like RSE, this metric can be used to compare models where the labels are in different units.

Relative absolute error (RAE):
269
.: This metric is more commonly referred to as R-Squared, and summarizes how much of the variance between predicted and true values is explained by the model. The closer to 1 this value is, the better the model is performing.
Coefficient of determination (R2):
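The regression metrics above can all be computed directly from predicted and true label values. Below is a small illustrative sketch with NumPy; the dollar values are invented, and RMSE is computed exactly as shown in the earlier card.

```python
import numpy as np

# Hypothetical true and predicted label values in dollars (illustrative only)
y_true = np.array([120.0, 150.0, 90.0, 200.0, 175.0])
y_pred = np.array([110.0, 160.0, 100.0, 190.0, 180.0])

mae = np.mean(np.abs(y_pred - y_true))    # Mean Absolute Error, in dollars

# The relative metrics compare the model's errors to a baseline that always
# predicts the mean of the true values, which makes them unit-free
rse = np.sum((y_pred - y_true) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
rae = np.sum(np.abs(y_pred - y_true)) / np.sum(np.abs(y_true - y_true.mean()))
r2 = 1 - rse                              # Coefficient of determination (R-Squared)

print(f"MAE={mae:.2f}  RSE={rse:.3f}  RAE={rae:.3f}  R2={r2:.3f}")
```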
270
To … your pipeline, you must first convert the training pipeline into a real-time inference pipeline. This process removes training components and adds web service inputs and outputs to handle requests.
Deploy
271
The inference pipeline performs the same data transformations as the first pipeline for new … .
Data
272
Once an … … performs its data transformations, it uses the trained model to … , or predict, label values based on its features. This model will form the basis for a predictive service that you can publish for applications to use.
Inference pipeline, infer
273
You can create an inference pipeline by selecting the menu above a … job. 
completed
274
After creating the inference pipeline, you can deploy it as an ….
endpoint
275
In the … page, you can view deployment details, test your pipeline service with sample data, and find credentials to connect your pipeline service to a client application.
endpoints
276
The Deployment state on the Details tab will indicate … when deployment is successful.
Healthy
277
On the … tab, you can test your deployed service with sample data in a JSON format. The test tab is a tool you can use to quickly check to see if your model is behaving as expected. Typically it is helpful to test the service before connecting it to an application.
Test
278
You can find credentials for your service on the … tab. These credentials are used to connect your trained machine learning model as a service to a client application.
Consume
279
Classification is a … machine learning technique used to predict categories or classes. Learn how to create classification models using Azure Machine Learning designer.
supervised
280
Classification is an example of a supervised machine learning technique in which you train a model using data that includes both the … and known … for the label, so that the model learns to fit the feature combinations to the label.
features, values
281
With a classification model, after training has been completed, you can use the trained model to predict … for new items for which the … is unknown.
labels, label
282
Classification models can be applied to … and multi-class scenarios
binary
283
Classification is an example of a … machine learning technique in which you train a model using data that includes both the features and known values for the label, so that the model learns to fit the feature combinations to the label. Then, after training has been completed, you can use the trained model to predict labels for new items for which the label is unknown.
supervised
284
Using historical data to predict whether text sentiment is positive, negative, or neutral is an example of a … classification model.

Multi-class
285
Using characteristics of small businesses to predict if a new venture will succeed is an example of a … classification model.
Binary
286
You can think of the steps to train and evaluate a classification machine learning model as:

Prepare data: Identify the features and label in a dataset. Pre-process, or clean and transform, the data as needed.
 Train model: Split the data into two groups, a training and a validation set. Train a machine learning model using the training data set. Test the machine learning model for performance using the validation data set.
 Evaluate performance: Compare how close the model's predictions are to the known labels.
 Deploy a predictive service: After you train a machine learning model, you need to convert the training pipeline into a real-time inference pipeline. Then you can deploy the model as an application on a server or device so that others can use it.

287
You'll use designer's … Model component to generate the predicted class label value. Once you connect all the components, you'll want to run an experiment, which will use the … … on the canvas to train and … a model.

Score, data asset, score
288
You can review evaluation metrics on the completed job page by right-clicking on the … model component.

Evaluate
289
The … … is a tool used to assess the quality of a classification model's predictions. It compares predicted labels against actual labels.

confusion matrix
290
In a binary classification model where you're predicting one of two possible values, the confusion matrix is a 2x2 grid showing the predicted and actual value counts for classes 1 and 0. It categorizes the model's results into four types of outcomes. Using our diabetes example these outcomes can look like:

True Positive:
 False Positive:
 False Negative:
 True Negative:
291
… … : The model predicts the patient has diabetes, and the patient does actually have diabetes.

True positive:
292
… … : The model predicts the patient has diabetes, but the patient doesn't actually have diabetes.

False positive:
293
… … : The model predicts the patient doesn't have diabetes, but the patient actually does have diabetes.

False negative
294
… … : The model predicts the patient doesn't have diabetes, and the patient actually doesn't have diabetes.

True negative:
295
Suppose you have data for 100 patients. You create a model that predicts a patient does have diabetes 15% of the time, so it predicts 15 people have diabetes and predicts 85 people do not have diabetes. In actuality, suppose 25 people actually do have diabetes and 75 people actually do not have diabetes. This information can be presented in a confusion matrix such as the one below:

                        Actually True (1)   Actually False (0)
Predicted True  (1)            10                   5
Predicted False (0)            15                  70
296
For a multi-class classification model (where there are more than two possible classes), the same approach is used to tabulate each possible combination of actual and predicted value counts - so a model with three possible classes would result in a … matrix with a diagonal line of cells where the predicted and actual labels match.
3x3
297
Metrics that can be derived from the confusion matrix include:

Accuracy:
 Precision:
 Recall:
 F1 Score:
298
… : The number of correct predictions (true positives + true negatives) divided by the total number of predictions.

Accuracy
299
… : The number of the cases classified as positive that are actually positive: the number of true positives divided by (the number of true positives plus false positives).

Precision:
300
… : The fraction of positive cases correctly identified: the number of true positives divided by (the number of true positives plus false negatives).

Recall:
301
… … : An overall metric that essentially combines precision and recall.

F1 score:
302
The accuracy of the model in the example is:
(10+70) / 100 = 80%
303
The precision of the model in our example is:
10 / (10 + 5) = 67%
304
The recall of the model in our example is
10 / (10 + 15) = 40%
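These calculations can be reproduced directly from the four confusion matrix counts in the worked example above (TP=10, FP=5, FN=15, TN=70). A short sketch:

```python
# Counts taken from the confusion matrix example above
tp, fp, fn, tn = 10, 5, 15, 70

accuracy = (tp + tn) / (tp + fp + fn + tn)           # 80 / 100 = 0.80
precision = tp / (tp + fp)                           # 10 / 15  = 0.67
recall = tp / (tp + fn)                              # 10 / 25  = 0.40
f1 = 2 * precision * recall / (precision + recall)   # combines precision and recall

print(f"Accuracy={accuracy:.2f}  Precision={precision:.2f}  Recall={recall:.2f}  F1={f1:.2f}")
```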
305
In the case of a … classification model, the predicted probability is a value between 0 and 1. By default, a predicted probability including or above 0.5 results in a class prediction of 1, while a prediction below this threshold means that there's a greater probability of a negative prediction (remember that the probabilities for all classes add up to 1), so the predicted class would be 0.
Binary
306
Designer has a useful  … slider for reviewing how the model performance would change depending on the set threshold. 
threshold
307
Another term for … is True positive rate, and it has a corresponding metric named … … rate, which measures the number of negative cases incorrectly identified as positive compared to the number of actual negative cases.
recall, False positive
308
Plotting these metrics against each other for every possible binary threshold value between 0 and 1 results in a curve, known as the … curve, which stands for … … … , but most data scientists just call it a … curve.
ROC, receiver operating characteristic, ROC
309
In an ideal model, the curve would go all the way up the left side and across the top, so that it covers the full area of the chart. The larger the area under the curve, or AUC metric (which can be any value from 0 to 1), the … the model is performing. You can review the ROC curve in Evaluation Results.
better
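If you want to reproduce a ROC curve and the AUC metric outside of designer, scikit-learn provides roc_curve and roc_auc_score. A minimal sketch, assuming made-up true labels and predicted probabilities:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true binary labels and predicted probabilities for class 1
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.55, 0.90])

# False positive rate and true positive rate (recall) at every threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

print("FPR:", fpr)
print("TPR:", tpr)
print(f"AUC: {auc:.3f}")   # the closer to 1, the better the model is performing
```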
310
 … is a form of machine learning that is used to group similar items into clusters based on their features.
Clustering
311
Clustering is an example of … machine learning, in which you train a model to separate items into clusters based purely on their characteristics, or features. There is no previously known cluster value (or label) from which to train the model.
unsupervised
312
Clustering machine learning models can be built using Azure … Learning
Machine
313
Like supervised models, you can think of the steps to train and evaluate a clustering machine learning model as:

Prepare data: Identify the features in a dataset. Pre-process, or clean and transform, the data as needed.
 Train model: Split the data into two groups, a training and a validation set. Train a machine learning model using the training data set. Test the machine learning model for performance using the validation data set.
 Evaluate performance: These metrics can help data scientists assess how well the model separates the clusters.
 Deploy a predictive service: After you train a machine learning model, you need to convert the training pipeline into a real-time inference pipeline. Then you can deploy the model as an application on a server or device so that others can use it.

314
To train a clustering model, you need a dataset that includes multiple observations of the items you want to cluster, including … features that can be used to determine similarities between individual cases that will help separate them into clusters
numeric
315
To train a clustering model, you need to apply a clustering algorithm to the data, using only the features that you have selected for clustering. You'll train the model with a subset of the data, and use the rest to test the trained model.
 The … … algorithm groups items into the number of clusters, or centroids, you specify - a value referred to as …
K-Means Clustering, K
316
The K-Means algorithm works by:

Initializing K coordinates as randomly selected points called centroids in n-dimensional space (where n is the number of dimensions in the feature vectors).
 Plotting the feature vectors as points in the same space, and assigning each point to its closest centroid.
 Moving the centroids to the middle of the points allocated to it (based on the mean distance).
 Reassigning the points to their closest centroid after the move.
 Repeating steps 3 and 4 until the cluster allocations stabilize or the specified number of iterations has completed.
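A bare-bones sketch of those steps in Python/NumPy, using a hypothetical k_means function and toy two-dimensional feature vectors; in practice you would normally rely on a library implementation such as scikit-learn.

```python
import numpy as np

def k_means(points, k, iterations=10, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialize K centroids at randomly selected points
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # 2. Assign each point (feature vector) to its closest centroid
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assignments = distances.argmin(axis=1)
        # 3. Move each centroid to the mean of the points allocated to it
        centroids = np.array([
            points[assignments == c].mean(axis=0) if np.any(assignments == c) else centroids[c]
            for c in range(k)
        ])
        # 4./5. Points are reassigned on the next pass until allocations stabilize
    return assignments, centroids

features = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1],
                     [8.0, 9.0], [7.5, 9.5], [8.2, 8.8]])
labels, centers = k_means(features, k=2)
print(labels)    # cluster assignment for each point
print(centers)   # final centroid coordinates
```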

317
You will use designer's … … … … component to group the data into clusters. Once you connect all the components, you will want to run an experiment, which will use the data asset on the canvas to train a model.

Assign Data to Clusters
318
You can review evaluation metrics on the completed job page by right-clicking on the  …model component.
Evaluate
319
When the experiment run has finished, select Job details. Right click on the  … Model module and select Preview data, then select Evaluation results.
Evaluate
320
These metrics can help data scientists assess how well the model separates the clusters. They include a row of metrics for each cluster, and a summary row for a combined evaluation. The metrics in each row are:

Average Distance to Other Center:
 Average Distance to Cluster Center:
 Number of Points:
 Maximal Distance to Cluster Center:
321
… …. … … : This indicates how close, on average, each point in the cluster is to the centroids of all other clusters.

Average Distance to Other Center
322
… … … … … : This indicates how close, on average, each point in the cluster is to the centroid of the cluster.

Average Distance to Cluster Center
323
… … … : The number of points assigned to the cluster.

Number of Points
324
… … … … … : The maximum of the distances between each point and the centroid of that point’s cluster. If this number is high, the cluster may be widely dispersed. This statistic in combination with the Average Distance to Cluster Center helps you determine the cluster’s spread.

Maximal Distance to Cluster Center
325
… … is an area of artificial intelligence (AI) in which software systems are designed to perceive the world visually, through cameras, images, and video. There are multiple specific types of computer vision problem that AI engineers and data scientists can solve using a mix of custom machine learning models and platform-as-a-service (PaaS) solutions - including many AI services in Microsoft Azure.
Computer vision
326
To a computer, an image is an … of numeric pixel values.
array
327
In reality, most digital images are … and consist of … layers (known as channels) that represent red, green, and blue (RGB) color hues.
multidimensional, three
328
A common way to perform image processing tasks is to apply … that modify the pixel values of the image to create a visual effect.
filters
329
A filter is defined by one or more arrays of pixel values, called … …. For example, you could define a filter with a 3x3 kernel as shown in this example:

-1 -1 -1
-1  8 -1
-1 -1 -1

The kernel is then convolved across the image, calculating a weighted sum for each 3x3 patch of pixels and assigning the result to a new image. It's easier to understand how the filtering works by exploring a step-by-step example.
filter kernels
330
Let's start with the … image we explored previously:

0   0   0   0   0   0   0
0   0   0   0   0   0   0
0   0  255 255 255  0   0
0   0  255 255 255  0   0
0   0  255 255 255  0   0
0   0   0   0   0   0   0
0   0   0   0   0   0   0

First, we apply the filter kernel to the top left patch of the image, multiplying each pixel value by the corresponding weight value in the kernel and adding the results:

(0 x -1) + (0 x -1) + (0 x -1) + (0 x -1) + (0 x 8) + (0 x -1) + (0 x -1) + (0 x -1) + (255 x -1) = -255

The result (-255) becomes the first value in a new array. Then we move the filter kernel along one pixel to the right and repeat the operation:

(0 x -1) + (0 x -1) + (0 x -1) + (0 x -1) + (0 x 8) + (0 x -1) + (0 x -1) + (255 x -1) + (255 x -1) = -510

Again, the result is added to the new array, which now contains two values: -255, -510. The process is repeated until the filter has been convolved across the entire image.
grayscale
331
The filter is convolved across the image, calculating a new array of values. Some of the values might be outside of the 0 to 255 pixel value range, so the values are adjusted to fit into that range. Because of the shape of the filter, the outside edge of pixels isn't calculated, so a padding value (usually 0) is applied. The resulting array represents a new image in which the filter has transformed the original image. In this case, the filter has had the effect of highlighting the edges of shapes in the image.

Because the filter is convolved across the image, this kind of image manipulation is often referred to as convolutional filtering. The filter used in this example is a particular type of filter (called a Laplace filter) that highlights the edges of objects in an image. There are many other kinds of filter that you can use to create blurring, sharpening, color inversion, and other effects.
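The same convolution can be reproduced in a few lines of Python/NumPy. The 7x7 image and 3x3 kernel below are copied from the example above; the first two output values match the worked calculation (-255 and -510).

```python
import numpy as np

image = np.array([
    [0, 0,   0,   0,   0, 0, 0],
    [0, 0,   0,   0,   0, 0, 0],
    [0, 0, 255, 255, 255, 0, 0],
    [0, 0, 255, 255, 255, 0, 0],
    [0, 0, 255, 255, 255, 0, 0],
    [0, 0,   0,   0,   0, 0, 0],
    [0, 0,   0,   0,   0, 0, 0],
])

kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
])

# Convolve the kernel across the image: a weighted sum for each 3x3 patch
# (no padding here, so the raw output is 5x5)
out = np.zeros((5, 5))
for row in range(5):
    for col in range(5):
        patch = image[row:row + 3, col:col + 3]
        out[row, col] = np.sum(patch * kernel)

print(out[0, 0], out[0, 1])      # -255.0 -510.0, matching the worked example
print(np.clip(out, 0, 255))      # adjust values back into the 0-255 pixel range
```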
332
the goal of computer vision is often to … …, or at least actionable insights, from images; which requires the creation of machine learning models that are trained to recognize features based on large volumes of existing images.
extract meaning
333
One of the most common machine learning model architectures for computer vision is a … … … (CNN).
convolutional neural network
334
CNNs use … to extract numeric feature maps from images, and then feed the feature values into a … … … to generate a label prediction. For example, in an image classification scenario, the label represents the main subject of the image (in other words, what is this an image of?). You might train a CNN model with images of different kinds of fruit (such as apple, banana, and orange) so that the label that is predicted is the type of fruit in a given image.
filters, deep learning model
335
During the training process for a CNN, filter kernels are initially defined using randomly generated … …. Then, as the training process progresses, the model's predictions are evaluated against known label values, and the filter weights are adjusted to improve accuracy. Eventually, the trained fruit image classification model uses the filter weights that best extract features that help identify different kinds of fruit.
weight values
336
how a CNN for an image classification model works:
Images with known labels (for example, 0: apple, 1: banana, or 2: orange) are fed into the network to train the model.
 One or more layers of filters is used to extract features from each image as it is fed through the network. The filter kernels start with randomly assigned weights and generate arrays of numeric values called feature maps.
 The feature maps are flattened into a single dimensional array of feature values.
 The feature values are fed into a fully connected neural network.
 The output layer of the neural network uses a softmax or similar function to produce a result that contains a probability value for each possible class, for example [0.2, 0.5, 0.3].
 During training the output probabilities are compared to the actual class label - for example, an image of a banana (class 1) should have the value [0.0, 1.0, 0.0]. The difference between the predicted and actual class scores is used to calculate the loss in the model, and the weights in the fully connected neural network and the filter kernels in the feature extraction layers are modified to reduce the loss.
 The training process repeats over multiple epochs until an optimal set of weights has been learned. Then, the weights are saved and the model can be used to predict labels for new images for which the label is unknown.
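A simplified sketch of that flow using PyTorch (the framework choice is an assumption; the cards don't prescribe one). The FruitClassifier class and the 64x64 input size are hypothetical, but the structure follows the steps above: convolutional filter layers extract feature maps, the maps are flattened, and a fully connected layer plus softmax produces one probability per class.

```python
import torch
import torch.nn as nn

class FruitClassifier(nn.Module):
    """Hypothetical 3-class image classifier (0: apple, 1: banana, 2: orange)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learnable filter kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                               # reduce feature map size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                  # flatten maps into a 1D array
            nn.Linear(32 * 16 * 16, num_classes),          # fully connected layer -> class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FruitClassifier()
batch = torch.rand(4, 3, 64, 64)                # four hypothetical 64x64 RGB images
probs = torch.softmax(model(batch), dim=1)      # a probability value for each possible class
print(probs.shape)                              # torch.Size([4, 3])
```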

337
CNN architectures usually include multiple convolutional filter layers and additional layers to … the size of feature maps, constrain the extracted values, and otherwise manipulate the feature values. These layers have been omitted in this simplified example to focus on the key concept, which is that filters are used to extract numeric features from images, which are then used in a neural network to predict image labels.

reduce
338
… have been at the core of computer vision solutions for many years. While they're commonly used to solve image classification problems as described previously, they're also the basis for more complex computer vision models. For example, object detection models combine CNN feature extraction layers with the identification of regions of interest in images to locate multiple classes of object in the same image.


CNNs
339
In another AI discipline, … … …, another type of neural network architecture, called a transformer, has enabled the development of sophisticated models for language.
natural language processing (NLP)
340
Transformers work by processing huge volumes of data, and encoding language … (representing individual words or phrases) as vector-based embeddings (arrays of numeric values).
tokens
341
… work by processing huge volumes of data, and encoding language tokens (representing individual words or phrases) as vector-based embeddings (arrays of numeric values).
Transformers
342
You can think of an embedding as representing a set of dimensions that each represent some … … of the token. The embeddings are created such that tokens that are commonly used in the same context are closer together dimensionally than unrelated words.
semantic attribute
343
Words are encoded as multi-dimensional …, and plotted in a 3D space. Tokens that are semantically similar are encoded in similar positions, creating a semantic language model that makes it possible to build sophisticated NLP solutions for text analysis, translation, language generation, and other tasks. Encoders in transformer networks create vectors with many more dimensions, defining complex semantic relationships between tokens based on linear algebraic calculations. The math involved is complex, as is the architecture of a transformer model. Our goal here is just to provide a conceptual understanding of how encoding creates a model that encapsulates relationships between entities.
vectors
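A toy illustration of the idea: tokens used in similar contexts get embeddings that sit close together, which you can check with cosine similarity. The three-dimensional vectors below are invented purely for illustration; real encoders produce vectors with many more dimensions.

```python
import numpy as np

# Made-up 3-dimensional embeddings for illustration only
embeddings = {
    "flower":     np.array([0.90, 0.10, 0.20]),
    "plant":      np.array([0.85, 0.15, 0.25]),
    "skateboard": np.array([0.10, 0.90, 0.70]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["flower"], embeddings["plant"]))       # high: related tokens
print(cosine_similarity(embeddings["flower"], embeddings["skateboard"]))  # lower: unrelated tokens
```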
344
The Microsoft … model is a multi-modal model. Trained with huge volumes of captioned images from the Internet, it includes both a language encoder and an image encoder. Florence is an example of a foundation model. In other words, a pre-trained general model on which you can build multiple adaptive models for specialist tasks.
Florence
345
The success of transformers as a way to build language models has led AI researchers to consider whether the same approach would be effective for image data. The result is the development of … models, in which the model is trained using a large volume of captioned images, with no fixed labels. An image encoder extracts features from images based on pixel values and combines them with text embeddings created by a language encoder. The overall model encapsulates relationships between natural language token embeddings and image features, as shown here:
multi-modal
346
You can use Florence as a foundation model for adaptive models that perform:
Image classification: Identifying to which category an image belongs.
 Object detection: Locating individual objects within an image.
 Captioning: Generating appropriate descriptions of images.
 Tagging: Compiling a list of relevant text tags for an image.

347
The architecture for computer vision models can be complex; and you require significant volumes of training … and compute power to perform the training process.
images
348
Microsoft's Azure AI Vision service provides prebuilt and customizable computer vision models that are based on the … foundation model and provide various powerful capabilities.
Florence
349
To use Azure AI Vision, you need to create a resource for it in your Azure subscription. You can use either of the following resource types:

Azure AI Vision: A specific resource for the Azure AI Vision service. Use this resource type if you don't intend to use any other Azure AI services, or if you want to track utilization and costs for your Azure AI Vision resource separately.
 Azure AI services: A general resource that includes Azure AI Vision along with many other Azure AI services; such as Azure AI Language, Azure AI Custom Vision, Azure AI Translator, and others. Use this resource type if you plan to use multiple AI services and want to simplify administration and development.
350
After you've created a suitable resource in your subscription, you can submit images to the Azure AI Vision service to perform a wide range of analytical tasks.
 Azure AI Vision supports multiple … … capabilities, including:
 Optical character recognition (OCR) - extracting text from images.
 Generating captions and descriptions of images.
 Detection of thousands of common objects in images.
 Tagging visual features in images
 These tasks, and more, can be performed in Azure AI Vision Studio.
image analysis
351
Azure AI Vision service can use … … … capabilities to detect text in images. For example, consider the following image of a nutrition label on a product in a grocery store:

optical character recognition (OCR)
352
Azure AI Vision has the ability to analyze an image, evaluate the objects that are detected, and generate a … … … … … that can describe what was detected in the image. For example, consider the following image
human-readable phrase or sentence
353
Azure AI Vision can identify thousands of … … in images.
common objects
354
Predictions include a … …  that indicates the probability the model has calculated for the predicted objects. In addition to the detected object labels and their probabilities, Azure AI Vision returns bounding box coordinates that indicate the top, left, width, and height of the object detected. You can use these coordinates to determine where in the image each object was detected.
confidence score
355
Azure AI Vision can suggest … for an image based on its contents. These … can be associated with the image as metadata that summarizes attributes of the image and can be useful if you want to index an image along with a set of key terms that might be used to search for images with specific attributes or contents.
tags, tags
356
Azure AI Vision builds … models on the pre-trained foundation model, meaning that you can train sophisticated models by using relatively few training images.

custom
357
An image … model is used to predict the category, or class of an image
classification
358
… … models detect and classify objects in an image, returning bounding box coordinates to locate each object. In addition to the built-in object detection capabilities in Azure AI Vision, you can train a custom object detection model with your own images.
Object detection
359
Azure AI Vision includes numerous capabilities for understanding image content and context and extracting information from images. Azure … … … allows you to try out many of the capabilities of image analysis
AI Vision Studio
360
Some potential uses for image classification include:

Product identification: performing visual searches for specific products in online searches or even in-store using a mobile device.
 Disaster investigation: identifying key infrastructure for major disaster preparation efforts. For example, identifying bridges and roads in aerial images can help disaster relief teams plan ahead in regions that are not well mapped.
 Medical diagnosis: evaluating images from X-ray or MRI devices could quickly classify specific issues found as cancerous tumors, or many other medical conditions related to medical imaging diagnosis.

361
To create an image classification model, you need data that consists of … and their labels.
features
362
An image classification model is trained to match the patterns in the pixel values to a set of class …. After the model has been trained, you can use it with new sets of features to predict unknown label values.

labels
363
Most modern image classification solutions are based on … … … that make use of convolutional neural networks (CNNs) to uncover patterns in the pixels that correspond to particular classes.
deep learning techniques
364
Training an effective CNN is a complex task that requires considerable expertise in … … and … …
data science, machine learning.
365
Creating an image classification solution with Azure AI Custom Vision consists of two main tasks:
First you must use existing images to train the model, and then you must publish the model so that client applications can use it to generate predictions.
366
To create an image classification system you need a resource in your Azure subscription. You can use the following types of resource:
Custom Vision: A dedicated resource for the custom vision service, which can be a training resource, a prediction resource, or both.
 Azure AI services: A general resource that includes Azure AI Custom Vision along with many other Azure AI services. You can use this type of resource for training, prediction, or both.

367
If you choose to create a Custom Vision resource, you will be prompted to choose … , … , or both - and it's important to note that if you choose "both", then two resources are created - one for training and one for prediction.

training, prediction
368
It's also possible to take a mix-and-match approach in which you use a dedicated Custom Vision resource for training, but deploy your model to an Azure AI services resource for prediction. For this to work, the training and prediction resources must be created in the same ….

region
369
To train a … model, you must upload images to your training resource and label them with the appropriate class labels. Then, you must train the model and evaluate the training results. You can perform these tasks in the Custom Vision portal, or if you have the necessary coding experience you can use one of the Azure AI Custom Vision service programming language-specific software development kits (SDKs). One of the key considerations when using images for classification is to ensure that you have sufficient images of the objects in question, and that those images show the object from many different angles.
classification
370
Model training is an iterative process in which the Azure AI Custom Vision service repeatedly trains the model using some of the data, but holds some back to evaluate the model. At the end of the training process, the performance of the trained model is indicated by the following evaluation metrics:

Precision:
 Recall:
 Average Precision (AP):
371
… : What percentage of the class predictions made by the model were correct? For example, if the model predicted that 10 images are oranges, of which eight were actually oranges, then the precision is 0.8 (80%).

Precision:
372
… : What percentage of class predictions did the model correctly identify? For example, if there are 10 images of apples, and the model found 7 of them, then the recall is 0.7 (70%).

Recall
373
… … … : An overall metric that takes into account both precision and recall.

Average Precision (AP)
374
When you publish the model, you can assign it a name (the default is "IterationX", where X is the … … … you have trained the model).

number of times
375
To use your model, client application developers need the following information:

Project ID: The unique ID of the Custom Vision project you created to train the model.
 Model name: The name you assigned to the model during publishing.
 Prediction endpoint: The HTTP address of the endpoints for the prediction resource to which you published the model (not the training resource).
 Prediction key: The authentication key for the prediction resource to which you published the model (not the training resource).
376
The Azure AI Vision service provides useful pre-built models for working with images, but you'll often need to train your own model for computer vision. For example, suppose a wildlife conservation organization wants to track sightings of animals by using motion-sensitive cameras. The images captured by the cameras could then be used to verify the presence of particular species in a particular area and assist with conservation efforts for endangered species. To accomplish this, the organization would benefit from an image classification model that is trained to identify different species of animal in the captured photographs.
377
In Azure, you can use the … …  service to train an image classification model based on existing images. There are two elements to creating an image classification solution. First, you must train a model to recognize different classes using existing images. Then, when the model is trained you must publish it as a service that can be consumed by applications.
Custom Vision
378
… … is a form of computer vision in which artificial intelligence (AI) agents can identify and locate specific types of object in an image or camera feed.
Object detection
379
Some sample applications of object detection include:

 Checking for building safety: Evaluating the safety of a building by analyzing footage of its interior for fire extinguishers or other emergency equipment.
 Driving assistance: Creating software for self-driving cars or vehicles with lane assist capabilities. The software can detect whether there is a car in another lane, and whether the driver's car is within its own lanes.
 Detecting tumors: Medical imaging such as an MRI or x-rays that can detect known objects for medical diagnosis
380
An object detection model returns the following information:

 The class of each object identified in the image.
 The probability score of the object classification (which you can interpret as the confidence of the predicted class being correct)
 The coordinates of a bounding box for each object.

381
… … is a machine learning based form of computer vision in which a model is trained to categorize images based on the primary subject matter they contain. … … goes further than this to classify individual objects within the image, and to return the coordinates of a bounding box that indicates the object's location.
Image classification, Object detection
382
You can create an object detection machine learning model by using advanced deep learning techniques. However, this approach requires significant … and a large volume of … ….
expertise, training data
383
Creating an object detection solution with Azure AI Custom Vision consists of three main tasks:
Upload and tag images
 Train the model
 Publish the trained model so client applications can use it to generate predictions
384
Azure AI Computer Vision can use … … to suggest classes and bounding boxes for images you add to the training dataset.
smart tagging
385
To train the model, you can use the Custom Vision … , or if you have the necessary coding experience you can use one of Azure AI Custom Vision's programming language-specific software development kits (SDKs).
portal
386
Azure AI Custom Vision can use the following evaluation metrics to judge the performance of the trained model:
Precision: What percentage of the class predictions made by the model were correct? For example, if the model predicted that 10 images are oranges, of which eight were actually oranges, then the precision is 0.8 (80%).
 Recall: What percentage of class predictions did the model correctly identify? For example, if there are 10 images of apples, and the model found 7 of them, then the recall is 0.7 (70%).
 Mean Average Precision (mAP): An overall metric that takes into account both precision and recall across all classes.

387
To use your model, client application developers need the following information:

Project ID: The unique ID of the Custom Vision project you created to train the model.
 Model name: The name you assigned to the model during publishing.
 Prediction endpoint: The HTTP address of the endpoints for the prediction resource to which you published the model (not the training resource).
 Prediction key: The authentication key for the prediction resource to which you published the model (not the training resource).
388
Face detection, analysis, and recognition are important capabilities for artificial intelligence (AI) solutions. The Azure AI … service in Azure makes it easy to integrate these capabilities into your applications.
Face
389
… … and analysis is an area of artificial intelligence (AI) which uses algorithms to locate and analyze human faces in images or video content.

Face detection
390
Face detection involves identifying regions of an image that contain a human face, typically by returning … … coordinates that form a rectangle around the face.
Bounding box,
391
With … …, machine learning models can be trained to return additional information about a face, such as the locations of facial landmarks like the nose, eyes, eyebrows, lips, and others.

Face analysis
392
A further application of … … is to train a machine learning model to identify known individuals from their facial features. This is known as facial recognition, and uses multiple images of an individual to train the model.
facial analysis
393
Microsoft Azure provides multiple Azure AI services that you can use to detect and analyze faces, including:

Azure AI Vision, which offers face detection and some basic face analysis, such as returning the bounding box coordinates around a face in an image.
 Azure AI Video Indexer, which you can use to detect and identify faces in a video.
 Azure AI Face, which offers pre-built algorithms that can detect, recognize, and analyze faces.

394
The Azure Face service can return the rectangle coordinates for any human faces that are found in an image, as well as a series of attributes related to those faces, such as:

Accessories: indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
 Blur: how blurred the face is, which can be an indication of how likely the face is to be the main focus of the image.
 Exposure: such as whether the image is underexposed or overexposed. This applies to the face in the image and not the overall image exposure.
 Glasses: whether or not the person is wearing glasses.
 Head pose: the face's orientation in a 3D space.
 Mask: indicates whether the face is wearing a mask.
 Noise: refers to visual noise in the image. If you have taken a photo with a high ISO setting for darker settings, you would notice this noise in the image. The image looks grainy or full of tiny dots that make the image less clear.
 Occlusion: determines if there might be objects blocking the face in the image.

395
To support Microsoft's Responsible AI Standard, Azure AI Face and Azure AI Vision have a Limited Access policy.
 Anyone can use the Face service to:

 Detect the location of faces in an image.
 Determine if a person is wearing glasses.
 Determine if there's occlusion, blur, noise, or over/under exposure for any of the faces.
 Return the head pose coordinates.

396
The Limited Access policy requires customers to submit an intake form to access additional Azure AI Face service capabilities including:

The ability to compare faces for similarity.
 The ability to identify named individuals in an image.

397
To use the Face service, you must create one of the following types of resource in your Azure subscription:

Face: Use this specific resource type if you don't intend to use any other Azure AI services, or if you want to track utilization and costs for Face separately.
 Azure AI services: A general resource that includes Azure AI Face along with many other Azure AI services such as Azure AI Content Safety, Azure AI Language, and others. Use this resource type if you plan to use multiple Azure AI services and want to simplify administration and development.

398
There are some considerations that can help improve the accuracy of the detection in the images:
Image format - supported images are JPEG, PNG, GIF, and BMP.
 File size - 6 MB or smaller.
 Face size range - from 36 x 36 pixels up to 4096 x 4096 pixels. Smaller or larger faces will not be detected.
 Other issues - face detection can be impaired by extreme face angles, extreme lighting, and occlusion (objects blocking the face such as a hand).
399
To test the face detection capabilities of the Azure AI Face service, you will use Azure… …
Vision Studio
400
To use the Face detect capabilities you will create an Azure AI services … resource.
multi-service
401
… … … enables artificial intelligence (AI) systems to read text in images, enabling applications to extract information from photographs, scanned documents, and other sources of digitized text.
Optical character recognition (OCR)
402
The ability for computer systems to process written and printed text is an area of artificial intelligence (AI) where computer … intersects with … … processing.
vision, natural language
403
The ability to extract text from images is handled by Azure AI … service.
Vision
404
One of the services in Azure AI Vision is the … API. You can think of the … API as an OCR engine that powers text extraction from images, PDFs, and TIFF files.
Read, Read
405
Organizations can use Azure AI … … to automate data extraction across document types, such as receipts, invoices, and more.
Document Intelligence
406
Typically after a document is scanned, someone will still need to manually enter the extracted text into a …. Azure AI Document Intelligence identifies the content's structure and saves the data as key/value pairs.

database.
407
Azure AI Document Intelligence applies advanced machine learning to extract text, key-value pairs, tables, and structures from documents automatically and accurately. It combines state-of-the-art optical character recognition (OCR) with predictive models that can interpret form data by:

Matching field names to values.
 Processing tables of data.
 Identifying specific types of field, such as dates, telephone numbers, addresses, totals, and others.

408
Azure AI Document Intelligence supports automated document processing through:

Prebuilt models that are trained to recognize and extract data for common scenarios such as IDs, receipts, and invoices.
 Custom models, which enable you to extract what are known as key/value pairs and table data from forms. 

Custom models are trained using your own data, which helps to tailor this model to your specific forms. Starting with a few samples of your forms, you can train the custom model. After the first training exercise, you can evaluate the results and consider if you need to add more samples and re-train.

409
The next hands-on exercise will only step through a prebuilt receipt model. If you would like to train a custom model you can refer to the Azure AI Document Intelligence documentation for quickstarts.

410
To use Azure AI Document Intelligence, you need to either create a  … … … or an Azure AI services resource in your Azure subscription. Both resource types give access to Azure AI Document Intelligence.

Form Recognizer resource
411
Currently the prebuilt receipt model is designed to recognize common receipts in English that are common to the USA. Examples are receipts used at restaurants, retail locations, and gas stations. The model is able to extract key information from the receipt slip:

time of transaction
 date of transaction
 merchant information
 taxes paid
 receipt totals
 other pertinent information that may be present on the receipt
 all text on the receipt is recognized and returned as well
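As a rough sketch of how the prebuilt receipt model can be called from code, the azure-ai-formrecognizer Python SDK exposes a DocumentAnalysisClient. The endpoint, key, file name, and printed fields below are placeholders, and the exact SDK surface may differ between versions.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder endpoint and key for your Document Intelligence / Form Recognizer resource
client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a local receipt image with the prebuilt receipt model
with open("receipt.jpg", "rb") as receipt:
    poller = client.begin_analyze_document("prebuilt-receipt", document=receipt)
result = poller.result()

for doc in result.documents:
    merchant = doc.fields.get("MerchantName")
    total = doc.fields.get("Total")
    if merchant:
        print("Merchant:", merchant.value, f"(confidence {merchant.confidence:.2f})")
    if total:
        print("Total:", total.value, f"(confidence {total.confidence:.2f})")
```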

412
Use the following guidelines to get the best results when using a custom model.

Images must be JPEG, PNG, BMP, PDF, or TIFF formats
 File size must be less than 50 MB
 Image size between 50 x 50 pixels and 10000 x 10000 pixels
 For PDF documents, no larger than 17 inches x 17 inches
413
… … is a process where you evaluate different aspects of a document or phrase, in order to gain insights into the content of that text.
Analyzing text
414
There are some commonly used techniques that can be used to build software to analyze text, including:

Statistical analysis of terms used in the text. For example, removing common "stop words" (words like "the" or "a", which reveal little semantic information about the text), and performing frequency analysis of the remaining words (counting how often each word appears) can provide clues about the main subject of the text.
 Extending frequency analysis to multi-term phrases, commonly known as N-grams (a two-word phrase is a bi-gram, a three-word phrase is a tri-gram, and so on).
 Applying stemming or lemmatization algorithms to normalize words before counting them - for example, so that words like "power", "powered", and "powerful" are interpreted as being the same word.
 Applying linguistic structure rules to analyze sentences - for example, breaking down sentences into tree-like structures such as a noun phrase, which itself contains nouns, verbs, adjectives, and so on.
 Encoding words or terms as numeric features that can be used to train a machine learning model. For example, to classify a text document based on the terms it contains. This technique is often used to perform sentiment analysis, in which a document is classified as positive or negative.
 Creating vectorized models that capture semantic relationships between words by assigning them to locations in n-dimensional space. This modeling technique might, for example, assign values to the words "flower" and "plant" that locate them close to one another, while "skateboard" might be given a value that positions it much further away.

415
Azure AI Language service can help simplify application development by using pre-trained models that can:
Determine the language of a document or text (for example, French or English).
 Perform sentiment analysis on text to determine a positive or negative sentiment.
 Extract key phrases from text that might indicate its main talking points.
 Identify and categorize entities in the text. Entities can be people, places, organizations, or even everyday items such as dates, times, quantities, and so on.

416
Azure AI … is a part of the Azure AI services offerings that can perform advanced natural language processing over raw text.

Language
417
Use the language detection capability of Azure AI Language to identify the language in which text is written. You can submit multiple documents at a time for analysis. For each document submitted to it, the service will detect:

The language name (for example "English").
 The ISO 639-1 language code (for example, "en").
 A score indicating a level of confidence in the language detection.
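A minimal sketch of calling language detection with the azure-ai-textanalytics Python SDK; the endpoint, key, and sample documents are placeholders, and the exact SDK surface may vary by version.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key for your Azure AI Language resource
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "The restaurant was fantastic!",
    "Le service était excellent.",
]

for doc in client.detect_language(documents):
    if not doc.is_error:
        lang = doc.primary_language
        print(lang.name, lang.iso6391_name, lang.confidence_score)
```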

418
Notice that the language detected for review 3 is English, despite the text containing a mix of English and French. The language detection service will focus on the predominant language in the text. The service uses an algorithm to determine the predominant language, such as length of phrases or total amount of text for the language compared to other languages in the text. The predominant language will be the value returned, along with the language code. The confidence score may be less than … as a result of the mixed language text.


1
419
using Azure AI Language to analyze the text ":-)", results in a value of unknown for the language name and the language identifier, and a score of NaN (which is used to indicate  … … …).

not a number
420
The text analytics capabilities in Azure AI Language can evaluate text and return sentiment scores and labels for each ….
sentence
421
The AI Language service returns a sentiment score in the range of 0 to 1. Values closer to 1 represent a … sentiment. Scores that are close to the middle of the range (0.5) are considered neutral or indeterminate.

Positive,
422
A list of words in a sentence that has no structure could result in an … score.
indeterminate
423
If you pass text in French but tell the service the language code is en for English, the service will return a score of precisely …

0.5.

424
… … … is the concept of evaluating the text of a document, or documents, and then identifying the main talking points of the document(s). Consider the restaurant scenario discussed previously. Depending on the volume of surveys that you have collected, it can take a long time to read through the reviews. Instead, you can use the key phrase extraction capabilities of the Language service to summarize the main points.

Key phrase extraction
425
Key phrase extraction can provide some context to this review by extracting the following phrases:
 attentive service, great food, birthday celebration, fantastic experience, table, friendly hostess, dinner, ambiance, place
 Not only can you use … … to determine that this review is positive, you can use the key phrases to identify important elements of the review.

sentiment analysis
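Sentiment analysis and key phrase extraction can be called through the same azure-ai-textanalytics SDK shown in the earlier language-detection sketch; note that the current SDK returns a sentiment label with per-class confidence scores rather than a single 0-1 score. A hedged sketch with placeholder credentials and review text:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "We had dinner here for a birthday celebration and had a fantastic experience. "
    "The food was great and the hostess was friendly and attentive."
]

# Sentiment: the SDK returns a label (positive/neutral/negative) plus confidence scores
for doc in client.analyze_sentiment(reviews):
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores.positive)

# Key phrases: the main talking points of each document
for doc in client.extract_key_phrases(reviews):
    if not doc.is_error:
        print(doc.key_phrases)
```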
426
You can provide Azure AI Language with unstructured text and it will return a list of …  in the text that it recognizes. Azure AI Language can also provide links to more information about that entity on the web. An entity is essentially an item of a particular type or a category; and in some cases, subtype, such as those as shown in the following table.

entities
427
Azure AI Language also supports entity linking to help disambiguate entities by linking to a specific reference. For recognized entities, the service returns a URL for a relevant … article.

Wikipedia
428
To enable speech interaction, the AI system must support two capabilities:

Speech recognition - the ability to detect and interpret spoken input.
 Speech synthesis - the ability to generate spoken output.

429
Speech … is concerned with taking the spoken word and converting it into data that can be processed - often by transcribing it into a text representation.
recognition
430
Speech patterns are analyzed in the audio to determine recognizable patterns that are mapped to words. To accomplish this feat, the software typically uses multiple types of models, including:

An acoustic model that converts the audio signal into phonemes (representations of specific sounds).
 A language model that maps phonemes to words, usually using a statistical algorithm that predicts the most probable sequence of words based on the phonemes.

431
The recognized words are typically converted to text, which you can use for various purposes, such as:

Providing closed captions for recorded or live videos
Creating a transcript of a phone call or meeting
Automated note dictation
Determining intended user input for further processing

432
Speech synthesis is in many respects the reverse of speech recognition. It is concerned with vocalizing data, usually by converting text to speech. A speech synthesis solution typically requires the following information:

The text to be spoken. The voice to be used to vocalize the speech.

433
To synthesize speech, the system typically  … the text to break it down into individual words, and assigns phonetic sounds to each word. It then breaks the phonetic transcription into prosodic units (such as phrases, clauses, or sentences) to create phonemes that will be converted to audio format. These phonemes are then synthesized as audio by applying a voice, which will determine parameters such as pitch and timbre; and generating an audio wave form that can be output to a speaker or written to a file.

tokenizes
434
You can use the output of speech synthesis for many purposes, including:

Generating spoken responses to user input.
Creating voice menus for telephone systems.
Reading email or text messages aloud in hands-free scenarios.
Broadcasting announcements in public locations, such as railway stations or airports.
435
Microsoft Azure offers both speech recognition and speech synthesis capabilities through Azure AI Speech service, which includes the following application programming interfaces (APIs):

The Speech to text API
The Text to speech API

436
You can use the Azure AI Speech to text API to perform real-time or … transcription of audio into a text format. The audio source for transcription can be a real-time audio stream from a microphone or an audio file.

batch
437
The model that is used by the Speech to text API is based on the Universal Language Model that was trained by Microsoft. The data for the model is Microsoft-owned and deployed to Microsoft Azure. The model is optimized for two scenarios, … and …. You can also create and train your own custom models, including acoustics, language, and pronunciation, if the pre-built models from Microsoft do not provide what you need.

conversational, dictation
438
Real-time speech to text allows you to transcribe text in audio streams. You can use real-time transcription for presentations, demos, or any other scenario where a person is speaking.
In order for real-time transcription to work, your application needs to be listening for incoming audio from a microphone or another audio input source, such as an audio file. Your … … streams the audio to the service, which returns the transcribed text.


application code
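A minimal real-time speech to text sketch using the Speech SDK for Python (azure-cognitiveservices-speech); the key and region are placeholders, and the audio source is the default microphone:

import azure.cognitiveservices.speech as speechsdk

# Placeholder Azure AI Speech resource details
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Listen for a single utterance and print the transcription
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)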
439
Not all speech to text scenarios are real time. You may have audio recordings stored on a file share, a remote server, or even on Azure storage. You can point to audio files with a … … … URI and asynchronously receive transcription results.

shared access signature (SAS)
440
Batch transcription should be run in an … manner because the batch jobs are scheduled on a best-effort basis. Normally a job will start executing within minutes of the request but there is no estimate for when a job changes into the running state.

asynchronous
441
When you use the text to speech API, you can specify the … to be used to vocalize the text. This capability offers you the flexibility to personalize your speech synthesis solution and give it a specific character.

voice
442
The service includes multiple pre-defined voices with support for multiple languages and regional pronunciation, including standard voices as well as neural voices that leverage … … to overcome common limitations in speech synthesis with regard to intonation, resulting in a more natural sounding voice. You can also develop custom voices and use them with the text to speech API.
neural networks
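A minimal text to speech sketch with the same Speech SDK; the key, region, and voice name are placeholders (any supported neural voice name could be substituted):

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # example neural voice

# With no audio config supplied, output goes to the default speaker
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Your order is ready for collection.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized successfully.")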
443
A … translation is one where each word is translated to the corresponding word in the target language. This approach presents some issues. In one case, there may not be an equivalent word in the target language. In another, a literal translation can change the meaning of the phrase or fail to capture the correct context.
literal
444
Artificial intelligence systems must be able to understand, not only the words, but also the … … in which they are used. In this way, the service can return a more accurate translation of the input phrase or phrases. The grammar rules, formal versus informal, and colloquialisms all need to be considered.
semantic context
445
… translation can be used to translate documents from one language to another, translate email communications that come from foreign governments, and even provide the ability to translate web pages on the Internet. Many times you will see a Translate option for posts on social media sites, or the Bing search engine can offer to translate entire web pages that are returned in search results.
Text
446
… translation is used to translate between spoken languages, sometimes directly (speech-to-speech translation) and sometimes by translating to an intermediary text format (speech-to-text translation).
Speech
447
Microsoft provides Azure AI services that support translation. Specifically, you can use the following services:

The Azure AI Translator service, which supports text-to-text translation.
 The Azure AI Speech service, which enables speech to text and speech-to-speech translation.

448
Azure AI Translator is easy to integrate in your applications, websites, tools, and solutions. The service uses a … … … model for translation, which analyzes the semantic context of the text and renders a more accurate and complete translation as a result.

Neural Machine Translation (NMT)
449
Azure AI Translator supports text-to-text translation between more than … languages. When using the service, you must specify the language you are translating from and the language you are translating to using ISO 639-1 language codes, such as en for English, fr for French, and zh for Chinese. 

60
450
Alternatively, you can specify … … of languages by extending the language code with the appropriate 3166-1 cultural code - for example, en-US for US English, en-GB for British English, or fr-CA for Canadian French.

cultural variants
451
When using Azure AI Translator, you can specify one … language with multiple … languages, enabling you to simultaneously translate a source document into multiple languages.

from, to
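As an illustration of one from-language with multiple to-languages, here is a sketch calling the Translator REST API with Python requests; the key and region are placeholders, and the region header is only needed for regional or multi-service resources:

import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": ["fr", "zh-Hans"]}  # one source, two targets
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",        # placeholder
    "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder
    "Content-Type": "application/json",
}
body = [{"Text": "Hello, how are you today?"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
for translation in response.json()[0]["translations"]:
    print(translation["to"], translation["text"])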
452
Azure AI Translator's application programming interface (API) offers some optional configuration to help you fine-tune the results that are returned, including:
Profanity filtering. Without any configuration, the service will translate the input text without filtering out profanity. Profanity levels are typically culture-specific, but you can control profanity translation by either marking the translated text as profane or by omitting it in the results.
Selective translation. You can tag content so that it isn't translated. For example, you may want to tag code, a brand name, or a word/phrase that doesn't make sense when localized.


453
Azure AI Speech includes the following APIs:

Speech to text - used to transcribe speech from an audio source to text format.
 Text to speech - used to generate spoken audio from a text source.
 Speech Translation - used to translate speech in one language to text or speech in another.

454
You can use the  … … … to translate spoken audio from a streaming source, such as a microphone or audio file, and return the translation as text or an audio stream. This enables scenarios such as real-time closed captioning for a speech or simultaneous two-way translation of a spoken conversation.

Speech Translation API
455
As with Azure AI Translator, you can specify one source language and one or more … languages to which the source should be translated with Azure AI Speech. You can translate speech into over 60 languages.

target
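A sketch of speech-to-text translation with the Speech SDK; the key and region are placeholders, the source is the default microphone, and French is used as the single target language:

import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<your-key>", region="<your-region>")
translation_config.speech_recognition_language = "en-US"  # source in language-culture format
translation_config.add_target_language("fr")              # target language

recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print(result.text)                # recognized English text
    print(result.translations["fr"])  # French translation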
456
The source language must be specified using the … … and … … format, such as es-US for American Spanish. This requirement helps ensure that the source is understood properly, allowing for localized pronunciation and linguistic idioms.

extended language, culture code
457
To work with conversational language understanding, you need to take into account three core concepts: …, …, and ….
utterances, entities, and intents.
458
An … is an example of something a user might say, and which your application must interpret.
utterance
459
An … is an item to which an utterance refers.
entity
460
An … represents the purpose, or goal, expressed in a user's utterance.
intent
461
If the intent is to turn a device on, then in your conversational language understanding application you might define a TurnOn intent that is related to the given ….
utterances
462
A conversational language understanding application defines a model consisting of intents and entities. … are used to train the model to identify the most likely … and the … to which it should be applied based on a given input.
Utterances, intent, entities
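A hypothetical example of how a labeled utterance relates intents and entities, shown as a plain Python structure (the text, intent name, and entity categories are all invented for illustration):

# One training example for a conversational language understanding model
labeled_utterance = {
    "text": "Turn on the light in the kitchen",   # utterance: what the user says
    "intent": "TurnOn",                           # intent: the user's goal
    "entities": [
        {"category": "device", "text": "light"},  # entity: the item referred to
        {"category": "room", "text": "kitchen"},
    ],
}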
463
There are numerous … used for each of the intents. The intent should be a concise way of grouping the … tasks.
utterances, utterance
464
You should consider always using the … intent to help handle utterances that do not map to any of the utterances you have entered.
None
465
In a conversational language understanding application, the None intent is created but left empty on purpose. The None intent is a … intent and can't be deleted or renamed.
required
466
After defining the entities and intents with sample utterances in your conversational language understanding application, you can train a language model to predict intents and entities from user input - even if it doesn't match the sample utterances exactly. You can then use the model from a client application to retrieve … and respond appropriately.
predictions
467
For each of the authoring and prediction tasks, you need a resource in your Azure subscription. You can use the following types of resource:
Language: A resource that enables you to build apps with industry-leading natural language understanding capabilities without machine learning expertise. You can use a language resource for authoring and prediction.
Azure AI services: A general resource that includes conversational language understanding along with many other Azure AI services. You can only use this type of resource for prediction.
468
After you've created an … resource, you can use it to author and train a conversational language understanding application by defining the entities and intents that your application will predict as well as utterances for each intent that can be used to train the predictive model.
Authoring
469
Conversational language understanding provides a comprehensive collection of … … that include pre-defined intents and entities for common scenarios; which you can use as a starting point for your model. You can also create your own entities and intents.
prebuilt domains
470
When you create entities and intents, you can do so in any ….
order
471
You can write code to define the elements of your model, but in most cases it's easiest to author your model using the Azure AI Language …
Portal
472
Best practice is to use the Azure AI Language portal for authoring and to use the … for runtime predictions.
SDK
473
Define intents based on … a user would want to perform with your application. For each intent, you should include a variety of utterances that provide examples of how a user might express the intent. If an intent can be applied to multiple entities, be sure to include sample utterances for each potential entity; and ensure that each entity is identified in the utterance.
actions
474
There are four types of entities:
Machine-Learned: Entities that are learned by your model during training from context in the sample utterances you provide.
List: Entities that are defined as a hierarchy of lists and sublists. For example, a device list might include sublists for light and fan. For each list entry, you can specify synonyms, such as lamp for light.
RegEx: Entities that are defined as a regular expression that describes a pattern - for example, you might define a pattern like [0-9]{3}-[0-9]{3}-[0-9]{4} for telephone numbers of the form 555-123-4567 (see the sketch after this list).
Pattern.any: Entities that are used with patterns to define complex entities that may be hard to extract from sample utterances.
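The RegEx entity type above is a plain regular expression match; a quick Python sketch shows how the example telephone pattern behaves (the sample utterance is invented):

import re

phone_pattern = r"[0-9]{3}-[0-9]{3}-[0-9]{4}"
utterance = "Call me on 555-123-4567 or 555-987-6543 after 5pm."
print(re.findall(phone_pattern, utterance))  # ['555-123-4567', '555-987-6543']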
475
After you have defined the intents and entities in your model, and included a suitable set of sample utterances; the next step is to train the model. Training is the process of using your sample utterances to teach your model to match natural language expressions that a user might say to probable intents and entities. After training the model, you can test it by submitting text and reviewing the predicted intents. Training and testing is an … process. After you train your model, you test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates, retrain, and test again.
iterative
476
When you are satisfied with the results from the training and testing, you can publish your Conversational Language Understanding application to a … resource for consumption. Client applications can use the model by connecting to the endpoint for the prediction resource, specifying the appropriate authentication key; and submit user input to get predicted intents and entities. The predictions are returned to the client application, which can then take appropriate action based on the predicted intent.
prediction
477
Create a custom question answering knowledge base with Azure AI Language and create a bot with Azure AI … … that answers user questions.
Bot Service
478
To implement a bot conversation solution, you need:
A knowledge base of question and answer pairs - usually with some built-in natural language processing model to enable questions that can be phrased in multiple ways to be understood with the same semantic meaning.
A bot service that provides an interface to the knowledge base through one or more channels.
479
You can easily create a user support bot solution on Microsoft Azure using a combination of two core services:
Azure AI Language: includes a custom question answering feature that enables you to create a knowledge base of question and answer pairs that can be queried using natural language input. (Note: the question answering capability in Azure AI Language is a newer version of the QnA Maker service, which is still available as a separate service.)
Azure AI Bot Service: provides a framework for developing, publishing, and managing bots on Azure.
480
The first challenge in creating a user support bot is to use Azure AI … to create a knowledge base. You can use the Language Studio's custom question answering feature to create, train, publish, and manage knowledge bases.
Language
481
You can write code to create and manage knowledge bases using the Azure AI Language REST API or SDK. However, in most scenarios it is easier to use the … Studio.
Language
482
To create a knowledge base, you must first provision a … resource in your Azure subscription
Language
483
After provisioning a Language resource, you can use the Language Studio's custom question answering feature to create a knowledge base that consists of …-…-… pairs.
question-and-answer
484
These questions and answers can be:
Generated from an existing FAQ document or web page. Entered and edited manually.
485
In many cases, a knowledge base is created using a combination of all of these techniques; starting with a base dataset of questions and answers from an existing … document and extending the knowledge base with additional … entries.
FAQ, manual
486
Questions in the knowledge base can be assigned alternative phrasing to help consolidate questions with the same ….
meaning
487
After creating a set of question-and-answer pairs, you must save it. This process analyzes your literal questions and answers and applies a built-in natural language processing model to match appropriate answers to questions, even when they are not phrased exactly as specified in your question definitions. Then you can use the built-in … interface in the Language Studio to test your knowledge base by submitting questions and reviewing the answers that are returned.
test
488
When you're satisfied with your knowledge base, deploy it. Then you can use it over its REST interface. To access the knowledge base, client applications require:
The knowledge base ID
The knowledge base endpoint
The knowledge base authorization key
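A sketch of querying a deployed knowledge base from Python with the azure-ai-language-questionanswering package; the endpoint, key, project name, and deployment name are placeholders (the newer SDK identifies the knowledge base by project and deployment name rather than a numeric ID):

from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient

client = QuestionAnsweringClient(
    "https://<your-language-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

output = client.get_answers(
    question="How can I reset my password?",
    project_name="<your-project>",   # placeholder knowledge base project
    deployment_name="production",
)
for answer in output.answers:
    print(answer.confidence, answer.answer)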
489
After you've created and deployed a knowledge base, you can deliver it to users through a ….
bot
490
You can create a custom … by using the Microsoft Bot Framework SDK to write code that controls conversation flow and integrates with your knowledge base. However, an easier approach is to use the automatic bot creation functionality, which enables you to create a bot for your deployed knowledge base and publish it as an Azure AI Bot Service application with just a few clicks.

bot
491
After creating your bot, you can manage it in the Azure portal, where you can:

Extend the bot's functionality by adding custom code.
Test the bot in an interactive test interface.
Configure logging, analytics, and integration with other services.

492
For simple updates, you can edit bot code directly in the Azure portal. However, for more comprehensive customization, you can download the … … and edit it locally, republishing the bot directly to Azure when you're ready.
source code
493
When your bot is ready to be delivered to users, you can connect it to … …; making it possible for users to interact with it through web chat, email, Microsoft Teams, and other common communication media.

multiple channels
494
… … is an artificial intelligence technique used to determine whether values in a series are within expected parameters.

Anomaly detection
495
Azure AI Anomaly Detector is a cloud-based service that helps you monitor and detect … in your historical time series and real-time data.
abnormalities
496
Anomalies are values that are … the expected values or range of values.

outside
497
In the graphic depicting the time series data, there's a light shaded area that indicates the boundary, or sensitivity range. The solid blue line is used to indicate the … …. When a measured value is outside of the shaded boundary, an orange dot is used to indicate that the value is considered an anomaly. The sensitivity boundary is a parameter that you can specify when calling the service. It allows you to adjust the boundary settings to tweak the results.

measured values
498
Accurate anomaly detection leads to prompt troubleshooting, which helps to avoid revenue … and maintain brand reputation.

loss
499
Azure AI Anomaly Detector doesn't require you to know machine learning. You can use the … … to integrate Azure AI Anomaly Detector into your applications with relative ease.
REST API
500
The Azure AI Anomaly Detector service uses the concept of a "… …" strategy. The main parameter you need to customize is "sensitivity," which ranges from 1 to 99 and adjusts the outcome to fit the scenario. The service can detect anomalies in historical time series data and also in real-time data, such as streaming input from IoT devices, sensors, or other streaming input sources.

one parameter
501
By default, the upper and lower boundaries for anomaly detection are calculated using concepts known as …, …, and ….
expectedValue, upperMargin, and lowerMargin. The upper and lower boundaries are calculated using these three values. If a value exceeds either boundary, it will be identified as an anomaly.
502
You can adjust the boundaries by applying a marginScale to the upper and lower margins as demonstrated by the following formula.
upperBoundary = expectedValue + (100 - marginScale) * upperMargin
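A worked example of the boundary calculation with invented numbers, assuming the lower boundary mirrors the upper one and that marginScale plays the role of the sensitivity parameter:

expected_value = 32.5
upper_margin = 5.0
lower_margin = 5.0
margin_scale = 99  # higher sensitivity gives tighter boundaries

upper_boundary = expected_value + (100 - margin_scale) * upper_margin  # 32.5 + 1 * 5.0 = 37.5
lower_boundary = expected_value - (100 - margin_scale) * lower_margin  # 32.5 - 1 * 5.0 = 27.5
print(upper_boundary, lower_boundary)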
503
Azure AI Anomaly Detector accepts data in JSON format. You can use any numerical data that you have recorded over time. The key aspects of the data being sent include the …, the …, and the … that was recorded for that timestamp. An example of a JSON object that you might send to the API is shown in this code sample. The granularity is set as hourly and is used to represent temperatures in degrees Celsius that were recorded at the timestamps indicated.
granularity, a timestamp, and a value

{
  "granularity": "hourly",
  "series": [
    { "timestamp": "2021-03-02T01:00:00Z", "value": -10.56 },
    { "timestamp": "2021-03-02T02:00:00Z", "value": -8.30 },
    { "timestamp": "2021-03-02T03:00:00Z", "value": -10.30 },
    { "timestamp": "2021-03-02T04:00:00Z", "value": 5.95 }
  ]
}
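A sketch of sending a batch of time series data to the Anomaly Detector REST API with Python requests; the resource endpoint and key are placeholders, and the series is truncated for brevity (in practice the service expects a longer series):

import requests

endpoint = "https://<your-anomaly-detector-resource>.cognitiveservices.azure.com"  # placeholder
url = endpoint + "/anomalydetector/v1.0/timeseries/entire/detect"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

body = {
    "granularity": "hourly",
    "sensitivity": 95,  # the single main parameter, from 1 to 99
    "series": [
        {"timestamp": "2021-03-02T01:00:00Z", "value": -10.56},
        {"timestamp": "2021-03-02T02:00:00Z", "value": -8.30},
        # ... more hourly readings ...
    ],
}

response = requests.post(url, headers=headers, json=body)
result = response.json()
print(result.get("isAnomaly"))  # one boolean per data point in the batch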
504
The Anomaly Detector service will support a maximum of … data points; however, sending this many data points in the same JSON object can result in latency for the response. You can improve the response by breaking your data points into smaller chunks (windows) and sending them in a sequence.
8640
505
The same JSON object format for anomaly detection is used in a streaming scenario. The main difference is that you will send a single value in each …. The streaming detection method will compare the current value being sent and the previous value sent.
request
506
If your data has missing … in the sequence, consider the following recommendations. If sampling occurs every few minutes and less than 10% of the expected number of points are missing, the impact should be negligible on the detection results.
If you have more than 10% missing, there are options to help "fill" the data set. Consider using a linear interpolation method to fill in the missing values and complete the data set. This will fill gaps with evenly distributed values.

values
507
Azure AI Anomaly Detector will provide the best results if your time series data is … distributed. If the data is more randomly distributed, you can use an aggregation method to create a more even distribution data set.
evenly
508
Azure AI Anomaly Detector supports batch processing of time series data and … anomaly detection for real-time data.
last-point
509
… … involves applying the algorithm to an entire data series at one time. The concept of time series data involves evaluation of a data set as a batch. Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed using the same model.
Batch detection
510
Batch detection is best used when your data contains:
Flat trend time series data with occasional spikes or dips
Seasonal time series data with occasional anomalies
Seasonality is considered to be a pattern in your data that occurs at regular intervals. Examples would be hourly, daily, or monthly patterns. When you use seasonal data, specifying a period for that pattern can help to reduce the latency in detection.

511
When you use the … detection mode, Azure AI Anomaly Detector creates a single statistical model based on the entire data set passed to the service. From this model, each data point in the data set is evaluated and anomalies are identified.
batch
512
Consider a pharmaceutical company that stores medications in storage facilities where the temperature in the facilities needs to remain within a specific range. To evaluate whether the medication remained stored in a safe temperature range in the past three months, we need to know:

the maximum allowable temperature
the minimum allowable temperature
the acceptable duration of time for temperatures to be outside the safe range

513
If you are interested in evaluating compliance over historical readings, you can extract the required time series data, package it into a JSON object, and send it to Azure AI Anomaly Detector for evaluation. You will then have a … view of the temperature readings over time.
historical
514
Real-time detection uses streaming data by comparing previously seen data points to the … data point to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target (current) point is an anomaly. By calling the service with each new data point you generate, you can monitor your data as it's created.
last
515
… detection is most useful for monitoring critical storage requirements that must be acted on immediately.
Streaming
516
Use Azure … …, a cloud search service that has tools for building user-managed indexes, to make your data searchable.
Cognitive Search
517
… … is the term used to describe solutions that involve extracting information from large volumes of often unstructured data.
Knowledge mining
518
Cognitive search indexes can be used for internal only use, or to enable searchable content on … … internet assets.

public-facing
519
Azure Cognitive Search results contain only your data, which can include text inferred or extracted from images, or new entities and key phrases detected through … …
text analytics.
520
Cognitive search is a … … … … solution.
Platform as a Service (PaaS)
521
Azure Cognitive Search exists to complement existing technologies and provides a programmable search engine built on … …, an open-source software library.
Apache Lucene
522
Cognitive search is a highly available platform offering a … uptime SLA available for cloud and on-premises assets.
99.9%
523
Azure Cognitive Search comes with the following features:
Data from any source
Full text search and analysis
AI powered search
Multi-lingual
Geo-enabled
Configurable user experience
524
…: Azure Cognitive Search accepts data from any source provided in JSON format, with auto crawling support for selected data sources in Azure.

Data from any source
525
…: Azure Cognitive Search offers full text search capabilities supporting both simple query and full Lucene query syntax.

Full text search and analysis
526
…: Azure Cognitive Search has Azure AI capabilities built in for image and text analysis from raw content.

AI powered search
527
…: Azure Cognitive Search offers linguistic analysis for 56 languages to intelligently handle phonetic matching or language-specific linguistics. Natural language processors available in Azure Cognitive Search are also used by Bing and Office.

Multi-lingual
528
…: Azure Cognitive Search supports geo-search filtering based on proximity to a physical location.

Geo-enabled
529
…: Azure Cognitive Search has several features to improve the user experience including autocomplete, autosuggest, pagination, and hit highlighting.
Configurable user experience
530
A typical Azure Cognitive Search solution starts with a data source that contains the … … you want to search. This could be a hierarchy of folders and files in Azure Storage, or text in a database such as Azure SQL Database or Azure Cosmos DB. The data format that Cognitive Search supports is JSON. Regardless of where your data originates, if you can provide it as a JSON document, the search engine can index it.
data artifacts
531
If your data resides in a supported data source, you can use an indexer to automate … …, including JSON serialization of source data in native formats.
data ingestion
532
An indexer connects to a data source, … the data, and passes to the search engine for indexing. Most indexers support change detection, which makes data refresh a simpler exercise.
serializes
533
Besides automating data ingestion, indexers also support … …. You can attach a skillset that applies a sequence of AI skills to enrich the data, making it more searchable. A comprehensive set of built-in skills, based on Azure AI services APIs, can help you derive new fields – for example by recognizing entities in text, translating text, evaluating sentiment, or predicting appropriate captions for images. Optionally, enriched content can be sent to a … …, which stores output from an AI enrichment pipeline in tables and blobs in Azure Storage for independent analysis or downstream processing.
AI enrichment, knowledge store
534
Whether you write application code that pushes data to an index - or use an indexer that automates data ingestion and adds AI enrichment - the fields containing your content are … in an index, which can be searched by client applications. The fields are used for searching, filtering, and sorting to generate a set of results that can be displayed or otherwise used by the client application.
persisted
535
AI … refers to embedded image and natural language processing in a pipeline that extracts text and information from content that can't otherwise be indexed for full text search.
enrichment
536
AI processing is achieved by adding and combining skills in a skillset. A skillset defines the operations that … and … data to make it searchable. These AI skills can be either built-in skills, such as text translation or Optical Character Recognition (OCR), or custom skills that you provide.
extract and enrich
537
… … are based on pretrained models from Microsoft, which means you can't train the model using your own training data.
Built-in skills
538
Skills that call the Azure AI services APIs have a dependency on those services and are billed at the Azure AI services …-…-…-… price when you attach a resource. Other skills are metered by Azure Cognitive Search, or are utility skills that are available at no charge.
pay-as-you-go
539
Built-in skills fall into these categories:
Natural language processing skills: with these skills, unstructured text is mapped as searchable and filterable fields in an index.
Image processing skills: creates text representations of image content, making it searchable using the query capabilities of Azure Cognitive Search.
540
Natural language processing skills include:
Key Phrase Extraction: uses a pre-trained model to detect important phrases based on term placement, linguistic rules, proximity to other terms, and how unusual the term is within the source data.
Text Translation Skill: uses a pre-trained model to translate the input text into various languages for normalization or localization use cases.
541
Image processing skills include:
Image Analysis Skill: uses an image detection algorithm to identify the content of an image and generate a text description.
Optical Character Recognition Skill: allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents: invoices, bills, financial reports, articles, and more.
542
An Azure Cognitive … … can be thought of as a container of searchable documents. Conceptually, you can think of an index as a table, where each row represents a document. Tables have columns, and the columns can be thought of as equivalent to the fields in a document. Columns have data types, just as the fields do on the documents. An index is a persistent collection of JSON documents and other content used to enable search functionality. The documents within an index can be thought of as rows in a table; each document is a single unit of searchable data in the index.
Search index
543
The index includes a definition of the structure of the data in these documents, called its schema.
544
Azure Cognitive Search needs to know how you would like to search and display the fields in the documents. You specify that by assigning …, or behaviors, to these fields.
attributes
545
For each field in the document, the index stores its …, the data …, and supported … for the field, such as: is the field searchable? Can the field be sorted?
name, type, behaviors
546
The most efficient indexes use only the behaviors that are needed. If you forget to set a required behavior on a field when designing, the only way to get that feature is to … the index.
rebuild
547
In order to index the documents in Azure Storage, they need to be exported from their original file type to …. In order to export data in any format to JSON, and load it into an index, we use an …
JSON, indexer.
548
To create search documents, you can either generate JSON documents with application code or you can use Azure's … to export incoming documents into JSON
indexer
549
Azure Cognitive Search lets you create and load JSON documents into an index with two approaches:
Push method: JSON data is pushed into a search index via either the REST API or the .NET SDK. Pushing data has the most flexibility as it has no restrictions on the data source type, location, or frequency of execution.
Pull method: Search service indexers can pull data from popular Azure data sources, and if necessary, export that data into JSON if it isn't already in that format.
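A sketch of the push method using the azure-search-documents SDK for Python; the service endpoint, admin key, index name, and document fields are placeholders, and the index is assumed to already exist with a matching schema:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="reviews",                        # placeholder index name
    credential=AzureKeyCredential("<admin-key>"),
)

documents = [
    {"id": "1", "content": "Great food and attentive service.", "rating": 5},
    {"id": "2", "content": "The wifi was slow and the cafe was busy.", "rating": 2},
]

# Push the JSON documents directly into the index
result = search_client.upload_documents(documents=documents)
print([(r.key, r.succeeded) for r in result])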
550
Azure Cognitive Search's indexer is a crawler that extracts searchable text and metadata from an external Azure data source and populates a search index using field-to-field mappings between source data and your index. Using the indexer is sometimes referred to as a 'pull model' approach because the service pulls data in without you having to write any code that adds data to an index. An indexer maps source fields to their matching fields in the index.
551
The search service's overview page has a … that lets you quickly see the health of the search service. On the dashboard, you can see how many documents are in the search service, how many indexes have been used, and how much storage is in use.
dashboard
552
When loading new documents into an index, the progress can be monitored by clicking on the index's associated …
indexer
553
Once the index is ready for querying, you can then use … to verify the results. An index is ready when the first document is successfully loaded.
Search explorer
554
Indexers only import new or updated documents, so it is normal to see zero documents indexed. The Search explorer can perform quick searches to check the contents of an index, and ensure that you are getting expected search results. Having this tool available in the portal enables you to easily check the index by reviewing the results that are returned as … documents.
JSON
555
You have to drop and recreate indexes if you need to make changes to field definitions. Adding new fields is supported, with all existing documents having null values. You'll find it faster using a code-based approach to iterate your designs, as working in the portal requires the index to be deleted, recreated, and the … details to be manually filled out.
schema
556
An approach to updating an index without affecting your users is to create a new index under a different …. You can use the same indexer and data source. After importing data, you can switch your app to use the new index.
name
557
A knowledge store is persistent storage of enriched content. The purpose of a knowledge store is to store the data generated from AI enrichment in a container. For example, you may want to save the results of an AI skillset that generates captions from ….
images
558
Recall that skillsets move a document through a sequence of enrichments that invoke transformations, such as recognizing entities or translating text. The outcome can be a search index, or projections in a knowledge store. The two outputs, search index and knowledge store, are mutually exclusive products of the same pipeline; derived from the same inputs, but resulting in output that is structured, stored, and used in different ….
applications
559
While the focus of an Azure Cognitive Search solution is usually to create a searchable index, you can also take advantage of its data extraction and enrichment capabilities to persist the enriched data in a knowledge store for further analysis or ….
processing
560
A knowledge store can contain one or more of three types of projection of the extracted data:
Table projections are used to structure the extracted data in a relational schema for querying and visualization
Object projections are JSON documents that represent each data entity
File projections are used to store extracted images in JPG format
561
Before using an indexer to create an index, you'll first need to make your data available in a supported data source. Supported data sources include:
Cosmos DB (SQL API)
Azure SQL (database, managed instance, and SQL Server on an Azure VM)
Azure Storage (Blob Storage, Table Storage, ADLS Gen2)
562
Once your data is in an Azure data source, you can begin using Azure Cognitive Search. Contained within the Azure Cognitive Search service in Azure portal is the Import data wizard, which automates processes in the Azure portal to create various objects needed for the search engine. You can see it in action when creating any of the following objects using the Azure portal:
Data Source: Persists connection information to source data, including credentials. A data source object is used exclusively with indexers.
 Index: Physical data structure used for full text search and other queries.
 Indexer: A configuration object specifying a data source, target index, an optional AI skillset, optional schedule, and optional configuration settings for error handling and base-64 encoding.
 Skillset: A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Except for very simple and limited structures, it includes a reference to an Azure AI services resource that provides enrichment.
 Knowledge store: Stores output from an AI enrichment pipeline in tables and blobs in Azure Storage for independent analysis or downstream processing.

563
To use Azure Cognitive Search, you'll need an Azure Cognitive Search resource. You can create a resource in the Azure portal. Once the resource is created, you can manage components of your service from the resource  … page in the portal.
Overview
564
You can build Azure search indexes using the Azure portal or programmatically with the … … or software development kits (SDKs).
REST API
565
Index and query design are closely linked. After we build the index, we can perform queries. A crucial component to understand is that the … of the index determines what queries can be answered.
schema
566
Azure Cognitive Search queries can be submitted as an HTTP or REST API request, with the response coming back as …. Queries can specify what fields are searched and returned, how search results are shaped, and how the results should be filtered or sorted. A query that doesn't specify the field to search will execute against all the searchable fields within the index.
JSON
567
Azure Cognitive Search supports two types of syntax: simple and full Lucene. Simple syntax covers all of the common query scenarios, while full Lucene is useful for advanced scenarios.
568
A query request is a list of words (search terms) and query operators (simple or full) describing what you would like to see returned in a result set.
569
Consider this simple search example: coffee (-"busy" + "wifi")
This query is trying to find content about coffee, excluding busy and including wifi. Breaking the query into components, it's made up of a search term (coffee), plus two verbatim phrases, "busy" and "wifi", and operators (-, +, and ( )). The search terms can be matched in the search index in any order or location in the content. The two phrases will only match exactly what is specified, so wi-fi would not be a match. Finally, a query can contain a number of operators. In this example, the - operator tells the search engine that these phrases should NOT be in the results. The parentheses group terms together and set their precedence. By default, the search engine will match any of the terms in the query. Content containing just coffee would be a match. In this example, using -"busy" would lead to the search results including all content that doesn't have the exact string "busy" in it.
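A sketch of submitting that same simple-syntax query from Python with the azure-search-documents SDK; the service endpoint, key, and index name are placeholders:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="reviews",                          # placeholder index name
    credential=AzureKeyCredential("<query-key>"),
)

# Simple query syntax: match "coffee", exclude the exact phrase "busy", require "wifi"
results = search_client.search(search_text='coffee (-"busy" + "wifi")', query_type="simple")
for doc in results:
    print(doc)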
This query is trying to find content about coffee, excluding busy and including wifi. Breaking the query into components, it's made up of search terms, (coffee), plus two verbatim phrases, "busy" and "wifi", and operators (-, +, and ( )). The search terms can be matched in the search index in any order or location in the content. The two phrases will only match with exactly what is specified, so wi-fi would not be a match. Finally, a query can contain a number of operators. In this example, the - operator tells the search engine that these phrases should NOT be in the results. The parenthesis group terms together, and set their precedence. By default, the search engine will match any of the terms in the query. Content containing just coffee would be a match. In this example, using -"busy" would lead to the search results including all content that doesn't have the exact string "busy" in it.