W1-Machine Learning Modeling Pipelines in Production Flashcards

1
Q

Neural Architecture Search (NAS) is at the heart of AutoML. There are three main parts to Neural Architecture Search: a ____, a ____, and a ____.

A

search space
search strategy
performance estimation strategy
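The three parts can be sketched as a tiny loop in plain Python. This is a hypothetical toy, not a real NAS system: the search space entries, the random-search strategy, and the proxy scoring function are all invented for illustration.

```python
import random

# 1. Search space: the range of architectures that can be represented.
#    (Made-up choices for illustration.)
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "units": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    # 2. Search strategy: here, the simplest one -- random search.
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def estimate_performance(arch):
    # 3. Performance estimation strategy: a cheap proxy score standing
    #    in for expensive training + validation accuracy (toy formula).
    return arch["num_layers"] * arch["units"]

def nas(trials=10, seed=0):
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)),
               key=estimate_performance)

best = nas()
print(best)
```

Real systems replace random search with reinforcement learning, evolutionary methods, or gradient-based strategies, and replace the proxy score with one of the estimation strategies covered in the later cards.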

2
Q

The search space defines the range of architectures which can be represented. To reduce the size of the search problem, we need to limit the search space to the architectures which are best suited to the problem that we’re trying to model. This helps reduce the search space, but it also means that ____ will be introduced, which might prevent Neural Architecture Search from finding architectural blocks that go beyond current human knowledge.

A

a human bias

3
Q

Neural Architecture Search is a subfield of AutoML. True/False

A

True

Some researchers mistakenly equate AutoML with Neural Architecture Search (NAS). A clear distinction is needed: NAS refers specifically to automating architecture engineering, which is only one part of the broader AutoML process.

4
Q

There are two main types of search spaces, ____ and ____.

A

macro, micro
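The difference between the two can be sketched with made-up operation encodings: a macro search space describes the whole network layer by layer, while a micro (cell-based) search space describes one small cell that is stacked repeatedly to form the network.

```python
import random

# Toy operation choices -- invented for illustration only.
LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool"]

def sample_macro(num_layers, rng):
    # Macro: choose an operation for every layer of the full network.
    return [rng.choice(LAYER_CHOICES) for _ in range(num_layers)]

def sample_micro(cell_size, num_stacks, rng):
    # Micro: choose operations for ONE cell, then stack the same cell.
    cell = [rng.choice(LAYER_CHOICES) for _ in range(cell_size)]
    return cell * num_stacks

rng = random.Random(0)
macro_net = sample_macro(6, rng)
micro_net = sample_micro(2, 3, rng)
print(macro_net)
print(micro_net)
```

The micro search space is much smaller (only the cell is searched), which is why cell-based spaces are popular in practice.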

5
Q

Search strategies in Neural Architecture Search need to estimate the performance of generated architectures so that they can propose better ones. What are three approaches used for this?

A
  1. Lower fidelity estimates
  2. Learning curve extrapolation
  3. Weight inheritance or network morphism.

The simplest approach is to train each generated architecture and measure its validation accuracy, but this is too computationally heavy to be used in practice.

6
Q

How does Lower fidelity estimate for performance estimation strategy work?

A

Lower-fidelity (lower-precision) estimates reduce training time by reframing the problem to make it easier to solve: for example, training on a subset of the data, using lower-resolution images, or using fewer filters per layer and fewer cells. This greatly reduces the computational cost, but it tends to underestimate performance.

Underestimation would be acceptable if the relative ranking of architectures did not change under lower-fidelity estimates. Unfortunately, recent research has shown that this is not the case.
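The ranking problem can be shown with a toy model. All numbers below are invented: assume each architecture's validation accuracy follows a simple saturating curve toward a ceiling, so fast learners look good after a few epochs even when their ceiling is lower.

```python
# Toy model of lower-fidelity estimation (invented numbers):
# accuracy grows with epochs toward a per-architecture ceiling.

def accuracy(ceiling, speed, epochs):
    # Simple saturating learning curve.
    return ceiling * (1 - (1 - speed) ** epochs)

arch_a = {"ceiling": 0.95, "speed": 0.10}  # slow learner, high ceiling
arch_b = {"ceiling": 0.90, "speed": 0.50}  # fast learner, lower ceiling

cheap = {name: accuracy(a["ceiling"], a["speed"], epochs=3)
         for name, a in [("a", arch_a), ("b", arch_b)]}
full = {name: accuracy(a["ceiling"], a["speed"], epochs=100)
        for name, a in [("a", arch_a), ("b", arch_b)]}

# The cheap 3-epoch estimate ranks B above A, while full training
# ranks A above B -- the relative ranking changed.
print(cheap["b"] > cheap["a"], full["a"] > full["b"])  # → True True
```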

7
Q

How does Learning Curve Extrapolation for performance estimation strategy work?

A

Learning Curve Extrapolation is based on the assumption that you have mechanisms to predict the learning curve reliably, so that extrapolation is a sensible and valid choice. Based on a few initial iterations and the available knowledge, the method extrapolates the initial learning curves and terminates all architectures that are predicted to perform poorly.
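A minimal sketch of the idea, with invented curves and a naive linear extrapolation (real systems fit far more sophisticated curve models): observe the first few validation scores of each candidate, extrapolate to a later epoch, and terminate candidates whose predicted score is too low.

```python
# Hypothetical learning curves: a few early validation scores
# per candidate architecture (all numbers invented).

def extrapolate(scores, target_epoch):
    # Naive linear extrapolation from the last two observed points.
    slope = scores[-1] - scores[-2]
    return scores[-1] + slope * (target_epoch - len(scores))

curves = {
    "arch_1": [0.40, 0.50, 0.58],  # still improving quickly
    "arch_2": [0.35, 0.37, 0.38],  # already flattening out
}

# Keep only candidates predicted to reach the threshold by epoch 10;
# the rest are terminated early to save compute.
survivors = [name for name, scores in curves.items()
             if extrapolate(scores, target_epoch=10) >= 0.7]
print(survivors)  # → ['arch_1']
```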

8
Q

How does Network Morphism work?

A

Another method for speeding up architecture search initializes the weights of novel architectures based on the weights of other architectures that have been trained before, similar to the way that transfer learning works. One way of achieving this is referred to as network morphism.

In simple terms, network morphism transforms one network into another while preserving the function it computes, so the child network starts from the parent's learned behavior instead of training from scratch.
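A function-preserving transformation can be demonstrated in a few lines of NumPy. This is a toy sketch in the spirit of Net2Net-style widening, not a real NAS weight-inheritance implementation: a hidden unit is duplicated and its outgoing weights are split in half between the original and the copy, so the wider network computes exactly the same function.

```python
import numpy as np

def forward(x, w1, w2):
    # One ReLU hidden layer (toy network).
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

def widen(w1, w2, unit):
    # Duplicate hidden `unit`: copy its incoming weights, then split
    # its outgoing weights in half between the original and the copy,
    # so the output of the network is unchanged.
    w1_new = np.hstack([w1, w1[:, unit:unit + 1]])
    w2_new = np.vstack([w2, w2[unit:unit + 1, :]])
    w2_new[unit] *= 0.5
    w2_new[-1] *= 0.5
    return w1_new, w2_new

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
w1 = rng.normal(size=(3, 5))   # parent weights, "trained before"
w2 = rng.normal(size=(5, 2))

w1_big, w2_big = widen(w1, w2, unit=2)

# The morphed (wider) network inherits the parent's behavior exactly.
assert np.allclose(forward(x, w1, w2), forward(x, w1_big, w2_big))
```

Because the child starts from a state equivalent to the trained parent, it only needs fine-tuning rather than training from scratch, which is what makes this a speed-up for architecture search.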

9
Q

What are the steps of classifying images using AutoML Vision (Google Cloud)?

A
  1. Set up AutoML.
  2. Create the dataset in AutoML Vision by first uploading images to Cloud Storage, then creating the dataset.
  3. AutoML performs the actual training.
  4. AutoML selects the best model for the task.
  5. Finally, deploy the model trained by AutoML Vision and test it by generating predictions with the deployed model.