AI-LEARNING Flashcards

1
Q

provides a computer with data, rather than explicit instructions. Using these data, the computer learns to recognize patterns and becomes able to execute tasks on its own.

A

Machine Learning

2
Q

given a data set of input-output pairs, learn a function to map inputs to outputs

A

Supervised Learning

3
Q

a task where the function maps an input to a discrete output.

A

Classification

4
Q

algorithm that, given an input, chooses the class of the nearest data point to that input

A

Nearest-Neighbor Classification

5
Q

algorithm that, given an input, chooses the most common class out of the k nearest data points to that input

A

K-Nearest-Neighbor Classification
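
A minimal sketch of k-nearest-neighbor classification in Python (the distance metric and the shape of the data are illustrative assumptions):

    import math
    from collections import Counter

    def knn_classify(point, data, k=3):
        # data: list of (features, label) pairs; point: a feature tuple
        by_distance = sorted(data, key=lambda d: math.dist(point, d[0]))
        # vote among the k nearest points and return the most common class
        nearest_labels = [label for _, label in by_distance[:k]]
        return Counter(nearest_labels).most_common(1)[0][0]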

6
Q

Drawback of the k-nearest-neighbors algorithm

A

With a naive approach, the algorithm has to measure the distance from every single point in the dataset to the point in question, which is computationally expensive.

7
Q

Solution to the drawback of the k-nearest-neighbors algorithm

A

Use data structures that enable finding neighbors more quickly, or prune irrelevant observations.
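
For instance, scikit-learn's KDTree is one such structure; it is built once and then answers nearest-neighbor queries without scanning every point (the sample points are made up):

    from sklearn.neighbors import KDTree

    points = [[0, 0], [1, 1], [5, 5], [6, 5]]
    tree = KDTree(points)                  # build the tree once
    dist, idx = tree.query([[5, 4]], k=2)  # 2 nearest neighbors without a full scan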

8
Q

for each data point, we adjust the weights to make our function more accurate.

A

Perceptron Learning Rule
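
A sketch of a single update under this rule (the learning rate alpha and the hard-threshold prediction are assumptions; the bias is folded into the inputs):

    def perceptron_update(weights, x, actual, alpha=0.1):
        # predict with a hard threshold on the weighted sum
        predicted = 1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else 0
        # shift each weight in the direction that reduces the error
        return [w + alpha * (actual - predicted) * xi
                for w, xi in zip(weights, x)]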

9
Q

a sequence of numbers

A

Vector

10
Q

The weights and values in Perceptron Learning are represented using?

A

Vectors

11
Q

Drawback of Perceptron Learning

A

data are messy, and it is rare that one can draw a line that neatly divides the observations into two classes without any mistakes

12
Q

unable to express uncertainty, since its output can only be equal to 0 or 1.

A

Hard Threshold

13
Q

uses a logistic function that can yield a real number between 0 and 1, expressing confidence in the estimate

A

Soft Threshold
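
A minimal sketch of a soft threshold, applying the logistic (sigmoid) function to the weighted sum:

    import math

    def soft_threshold(weights, x):
        # logistic function of the weighted sum: a confidence between 0 and 1
        z = sum(w * xi for w, xi in zip(weights, x))
        return 1 / (1 + math.exp(-z))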

14
Q

they are designed to find the maximum margin separator

A

Support Vector Machines

15
Q

A boundary that maximizes the distance to the nearest data points on either side of it

A

Maximum Margin Separator

16
Q

Benefit of Support Vector Machines

A

they can represent decision boundaries with more than two dimensions, as well as non-linear decision boundaries
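
As an illustration, scikit-learn's SVC class fits a support vector machine; the toy data and the choice of a non-linear kernel here are assumptions:

    from sklearn.svm import SVC

    X = [[0, 0], [1, 1], [2, 0], [3, 3]]  # toy inputs
    y = [0, 0, 1, 1]                      # toy class labels
    clf = SVC(kernel="rbf")               # a non-linear (radial basis function) kernel
    clf.fit(X, y)
    print(clf.predict([[2, 2]]))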

17
Q

supervised learning task of learning a function mapping an input point to a continuous value

A

Regression

18
Q

a loss function that counts a loss of 1 when the prediction is incorrect and 0 when it is correct

A

0-1 Loss Function

19
Q

functions that can be used when predicting a continuous value

A

L1 and L2 loss functions

20
Q

L1 Loss Function Formula

A

|actual - predicted|

21
Q

L2 Loss Function Formula

A

(actual - predicted)^2

22
Q

L1 vs L2

A

L2 penalizes outliers more harshly than L1 because it squares the difference
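
The three loss functions above, written out as a sketch in Python:

    def loss_0_1(actual, predicted):
        return 0 if actual == predicted else 1  # 0-1 loss for classification

    def loss_l1(actual, predicted):
        return abs(actual - predicted)          # L1: absolute error

    def loss_l2(actual, predicted):
        return (actual - predicted) ** 2        # L2: squared error, harsher on outliers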

23
Q

when a model fits the training data so well that it fails to generalize to other data sets.

A

Overfitting

24
Q

process of penalizing hypotheses that are more complex to favor simpler, more general hypotheses.

A

Regularization

25
Q

This is used to avoid overfitting

A

Regularization

26
Q

a constant that we can use to modulate how strongly to penalize for complexity in our cost function

A

Lambda (λ)
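
A sketch of a cost function with this penalty, assuming a linear model and using the sum of squared weights as the complexity measure (a ridge-style choice; other measures work too):

    def regularized_cost(weights, data, lam=0.5):
        # empirical loss: squared error of a linear model over (inputs, output) pairs
        loss = sum((y - sum(w * xi for w, xi in zip(weights, x))) ** 2
                   for x, y in data)
        # lambda modulates how strongly complexity is penalized
        return loss + lam * sum(w ** 2 for w in weights)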

27
Q

A technique for testing whether a model is overfitted by splitting the data into two sets (a training set and a testing set)

A

Holdout Cross-Validation

28
Q

downside of holdout cross validation

A

we don’t get to train the model on the held-out data, since it is reserved for evaluation purposes.

29
Q

we divide the data into k sets and run the training k times, each time leaving out one set and using it as the test set. We end up with k different evaluations of our model, which we can average to get an estimate of how the model generalizes, without losing any data.

A

k-Fold Cross Validation
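
A sketch using scikit-learn's cross_val_score helper (the toy data, the classifier, and k = 5 are assumptions):

    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X = [[i] for i in range(10)]  # toy inputs
    y = [0] * 5 + [1] * 5         # toy labels, five per class
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)
    print(scores.mean())          # average of the k = 5 evaluations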

30
Q

given a set of rewards or punishments, learn what actions to take in the future

A

Reinforcement Learning

31
Q

model for decision-making, representing states, actions, and their rewards

A

Markov Decision Processes

32
Q

Reinforcement Learning as a Markov Decision Process

A
  • Set of states S
  • Set of actions Actions(s)
  • Transition model P(s' | s, a)
  • Reward function R(s, a, s')
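
A tiny sketch of how these components might be represented in Python (the states, probabilities, and rewards are made-up values):

    states = ["s0", "s1"]
    actions = {"s0": ["stay", "go"], "s1": ["stay"]}
    # transition model P(s' | s, a): a distribution over next states
    transitions = {("s0", "go"):   {"s1": 0.9, "s0": 0.1},
                   ("s0", "stay"): {"s0": 1.0},
                   ("s1", "stay"): {"s1": 1.0}}
    # reward function R(s, a, s'); unlisted triples give no reward
    rewards = {("s0", "go", "s1"): 1.0}
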
33
Q

method for learning a function Q(s, a) that outputs an estimate of the value of performing action a in state s

A

Q-Learning

34
Q

Q-Learning Process

A
  • The model starts with all estimated values equal to 0 (Q(s,a) = 0 for all s, a).
  • When an action is taken and a reward is received, the function does two things:
    1. It estimates the value of Q(s, a) based on the current reward and expected future rewards, and
    2. Updates Q(s, a) to take into account both the old estimate and the new estimate. This gives us an algorithm that is capable of improving upon its past knowledge without starting from scratch.
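
A minimal sketch of this update (the learning rate alpha and discount factor gamma are illustrative assumptions; Q is a dictionary defaulting to 0):

    def q_update(Q, state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
        # best current estimate of future rewards from the next state
        future = max((Q.get((next_state, a), 0) for a in actions), default=0)
        old = Q.get((state, action), 0)
        # blend the old estimate with the new reward-based estimate
        Q[(state, action)] = old + alpha * (reward + gamma * future - old)
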
35
Q

an algorithm that completely discounts future estimated rewards, instead always choosing the action a in the current state s that has the highest Q(s, a).

A

Greedy Decision-Making

36
Q

Explore vs. Exploit tradeoff

A

A greedy algorithm always exploits, taking the actions that are already established to lead to good outcomes. However, it will always follow the same path to the solution, never finding a better one. Exploration, on the other hand, means that the algorithm may use a previously unexplored route on its way to the target, allowing it to discover more efficient solutions along the way.

37
Q

In this type of algorithm, we set ε equal to how often we want to move randomly. With probability 1-ε, the algorithm chooses the best move (exploitation). With probability ε, the algorithm chooses a random move (exploration).

A

ε (epsilon) greedy algorithm
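
A sketch of ε-greedy action selection (Q is assumed to be a dictionary of state-action values defaulting to 0):

    import random

    def epsilon_greedy(state, actions, Q, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(actions)  # explore: pick a random move
        # exploit: pick the move with the highest estimated value
        return max(actions, key=lambda a: Q.get((state, a), 0))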

38
Q

Technique used in Q-learning when a game has many states and actions, making it computationally demanding to store an estimate for every state-action pair.

A

Function Approximation
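
A common form is linear approximation: instead of storing Q(s, a) for every state-action pair, estimate it from a numeric feature description, so similar pairs get similar values. A sketch (the feature vector is a hypothetical stand-in for whatever describes a state-action pair):

    def q_estimate(weights, features):
        # approximate Q(s, a) as a weighted sum of features describing the pair,
        # so similar state-action pairs receive similar value estimates
        return sum(w * f for w, f in zip(weights, features))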

39
Q

given input data without any additional feedback/information, learn patterns.

A

Unsupervised Learning

40
Q

organizing a set of objects into groups in such a way that similar objects tend to be in the same group

A

Clustering

41
Q

Applications of Clustering

A
  • Genetic Research
  • Image Segmentation
  • Market Research
  • Medical Imaging
  • Social Network Analysis
42
Q

algorithm for clustering data by repeatedly assigning points to clusters and updating those clusters’ centers

A

k-means Clustering
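
A minimal sketch of those two repeated steps (the fixed iteration count is a simplification; real implementations run until assignments stop changing):

    import math
    import random

    def k_means(points, k, iterations=10):
        # start with k random points as the initial cluster centers
        centers = random.sample(points, k)
        for _ in range(iterations):
            # assignment step: each point joins the cluster of its nearest center
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
                clusters[nearest].append(p)
            # update step: move each center to the mean of its cluster's points
            for i, cluster in enumerate(clusters):
                if cluster:
                    centers[i] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
        return centers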