Lecture 1 Flashcards

1
Q

Pros and cons of AML?

A

Pro:
Emerging research field
Democratization of ML
Less manual work
Supports data scientists rather than replacing them

Con:
General AI

2
Q

What is AML?

A

Algorithm selection (combined with hyperparameter optimization)
Hyperparameter optimization
Workflow Synthesis
Neural Architecture Search
Few-shot learning

3
Q

Hyperparameter Optimization

A

λ* = argmin_λ Loss(Algo_λ, TrainSet, ValidSet)

4
Q

Algorithm Selection

A

Algo* = argmin_{Algo ∈ A} Loss(Algo, TrainSet, ValidSet)

5
Q

CASH

A

Combined Algorithm Selection and Hyperparameter optimization: minimize over the algorithm and its hyperparameters jointly
(Algo*, λ*) = argmin_{Algo ∈ A, λ ∈ Λ_Algo} Loss(Algo_λ, TrainSet, ValidSet)
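As a toy illustration (not from the lecture), the simplest attack on CASH is plain random search that samples the algorithm and a configuration from that algorithm's own space jointly; the algorithm names, toy loss, and `cash_random_search` helper below are all made up:

```python
import random

def cash_random_search(algorithms, loss, n_trials=50, seed=0):
    """Random-search sketch of CASH: jointly sample an algorithm and a
    configuration lambda from that algorithm's own space, keep the best pair."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        algo = rng.choice(sorted(algorithms))
        lam = {name: sample(rng) for name, sample in algorithms[algo].items()}
        score = loss(algo, lam)          # stands in for Loss(Algo_lambda, ...)
        if best is None or score < best[0]:
            best = (score, algo, lam)
    return best  # (lowest validation loss, chosen algorithm, chosen lambda)
```

The point of the joint argmin: each algorithm brings its own hyperparameter space Λ_Algo, so the sampler must first pick the algorithm and only then sample from that algorithm's space.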

6
Q

Search Algorithms

A

Grid search
Random search
Bayesian optimization
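A minimal sketch (my own toy example, not from the slides) of how grid and random search enumerate configurations; the hyperparameter names `lr` and `depth` are made up:

```python
import itertools
import random

# Grid search: exhaustively try every combination of the listed values.
grid = {"lr": [0.01, 0.1], "depth": [2, 4, 6]}
grid_configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
# 2 * 3 = 6 configurations; cost grows exponentially with the number of hyperparameters.

# Random search: sample each hyperparameter independently per trial,
# so the trial budget is chosen freely and log-scales are easy to use.
rng = random.Random(0)
random_configs = [
    {"lr": 10 ** rng.uniform(-3, 0), "depth": rng.randint(2, 8)}
    for _ in range(6)
]
```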

7
Q

Bayesian Optimization

A

Goal: find x with low f(x) when f is expensive to evaluate
Fit a surrogate model (e.g., a Gaussian process) to the observations so far
Find promising points with an acquisition function (e.g., expected improvement)
Evaluate f at the best candidate and repeat
(Inherently sequential, so not easily parallelizable)
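The loop above can be sketched end to end. This is a deliberately crude stand-in (a nearest-neighbour surrogate instead of a Gaussian process, random candidates instead of a proper acquisition optimizer), so treat every name below as an illustrative assumption:

```python
import math
import random

def surrogate(X, y, x):
    """Crude stand-in for a Gaussian process: predictive mean from the
    nearest observed point, uncertainty growing with the distance to it."""
    d, fx = min((abs(x - xi), yi) for xi, yi in zip(X, y))
    return fx, d  # (mu, sigma)

def expected_improvement(mu, sigma, f_star):
    if sigma == 0.0:
        return max(f_star - mu, 0.0)
    z = (f_star - mu) / sigma
    pdf = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_star - mu) * cdf + sigma * pdf

def bayesian_optimization(f, bounds, n_init=3, n_iter=20, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [rng.uniform(lo, hi) for _ in range(n_init)]
    y = [f(x) for x in X]
    for _ in range(n_iter):
        f_star = min(y)
        # Pick the candidate with the highest EI (here from random candidates).
        cands = [rng.uniform(lo, hi) for _ in range(200)]
        x = max(cands, key=lambda c: expected_improvement(*surrogate(X, y, c), f_star))
        X.append(x)
        y.append(f(x))  # evaluate, then refit -> the loop is sequential
    return min(zip(y, X))  # (best value found, its location)
```

The final comment is the card's point about parallelism: each new point depends on all previous evaluations, so the outer loop cannot be trivially parallelized.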

8
Q

Expected improvement

A

z = (f_star - mu) / sigma
return (f_star - mu) * norm.cdf(z) + sigma * norm.pdf(z)
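Cleaned up into a self-contained function (pure-Python normal pdf/cdf via `math.erf`, in place of the `scipy.stats.norm` object the snippet above assumes):

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_star):
    """EI for minimization: expected amount by which a point with predictive
    mean mu and standard deviation sigma improves on the incumbent f_star."""
    if sigma == 0.0:                      # no predictive uncertainty
        return max(f_star - mu, 0.0)
    z = (f_star - mu) / sigma
    return (f_star - mu) * norm_cdf(z) + sigma * norm_pdf(z)
```

At mu == f_star the formula reduces to sigma * norm_pdf(0), so higher uncertainty alone already makes a point attractive; that is the exploration half of the exploration/exploitation trade-off.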

9
Q

Configuration Spaces

A

Categorical
Numerical
Conditional Hyperparameters
(Each has a range and a sampling strategy; some, e.g. the random seed, should not be optimized)

10
Q

One-armed bandit

A

Every (algorithm, hyperparameter configuration) pair is a bandit
Pulling a lever corresponds to a training run, which costs budget

11
Q

Successive Halving

A

With each successive portion of the budget spent, the number of configurations still being trained is halved (so the budget per survivor grows)
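A minimal sketch under assumed names (`evaluate(config, budget)` is a hypothetical function returning a validation loss; `eta=2` gives the literal halving from the card):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Spend the budget in stages: train every surviving configuration with
    the current per-configuration budget, keep the best 1/eta of them, then
    multiply the budget by eta and repeat until one survivor is left."""
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        ranked = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = ranked[: max(1, len(survivors) // eta)]
        budget *= eta
    return survivors[0]
```

Because early rounds rank configurations on partial training, a slow starter whose learning curve would later cross the others can be eliminated in round one, which is exactly the "learning curves can cross" con on the later card.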

12
Q

Budget types

A

Run time
Observations (number of training samples)
Attributes (number of features)
A suitable hyperparameter value (e.g., epochs, ensemble size)

13
Q

Successive halving: pros

A

Simple
Parallelizable
Converges to optimum
Strong theoretical foundation
Good results

14
Q

Successive halving: cons

A

Only an extension of random search
Not data efficient
Hand-designed bandit strategy
Learning curves can cross, so good solutions might be dropped early
