Week 6: Optimization Flashcards
Give examples of functions whose minimum point is not a stationary point. Draw graphs.
One discontinuous function (e.g., with a jump at its minimum point) and one function on a bounded (closed) interval whose minimum is attained at the boundary, where the derivative need not be zero.
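A minimal plotting sketch of two such functions (the specific functions are my own illustrative choices, assuming numpy and matplotlib are available):

```python
# Left panel: a discontinuous function whose minimum sits at a jump.
# Right panel: a function on a bounded interval minimized at the boundary.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 400)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Discontinuous: f(x) = x^2 for x != 0, but f(0) = -1 (an isolated jump).
# The minimum is at x = 0, yet f is not continuous there, so the notion
# of a stationary point does not apply.
ax1.plot(x, x**2)
ax1.plot(0, 0, "o", mfc="white", mec="C0")   # open circle: value excluded
ax1.plot(0, -1, "ro")                        # the jump value at the minimum
ax1.set_title("discontinuous: min at a jump")

# Bounded domain: g(x) = x on [0, 1]; the minimum is at the boundary
# x = 0, where g'(0) = 1 != 0 (not a stationary point).
xb = np.linspace(0, 1, 100)
ax2.plot(xb, xb)
ax2.plot(0, 0, "ro")
ax2.set_title("boundary minimum on [0, 1]")

plt.tight_layout()
plt.show()
```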
What defines a stationary point?
Its first derivative (gradient) is zero.
Why would we not be able to use a decaying exponential function as an objective function?
Because, e.g., f(x) = e^(-x) only approaches its infimum as x goes to infinity; the minimum is never attained at any finite point, so there is no interior minimum.
How come we can limit ourselves to minimization (and not also max.) when we do optimization?
Since we can always rewrite a maximization problem as minimizing the negative of the objective function: max f(theta) = -min(-f(theta)), and the argmax of f is the argmin of -f.
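A minimal sketch of the trick, assuming SciPy is available; the quadratic f below is a toy example of my own:

```python
# Maximize f(x) = -(x - 3)^2 + 5 by minimizing its negation.
from scipy.optimize import minimize

f = lambda x: -(x[0] - 3.0)**2 + 5.0

# minimize -f  <=>  maximize f; the argmin of -f is the argmax of f.
res = minimize(lambda x: -f(x), x0=[0.0])
print(res.x[0])    # ~3.0, the maximizer of f
print(-res.fun)    # ~5.0, the maximum value of f
```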
What is a typical value for the learning rate gamma?
0.01 or 0.05.
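A minimal gradient-descent sketch using gamma = 0.01; the objective f(x) = (x - 2)^2 is a toy example of my own:

```python
# Gradient descent on f(x) = (x - 2)^2 with learning rate gamma = 0.01.
gamma = 0.01                 # typical small value, as on the card
x = 0.0                      # arbitrary starting point
for _ in range(1000):
    grad = 2.0 * (x - 2.0)   # f'(x)
    x = x - gamma * grad     # step against the gradient
print(x)                     # ~2.0, the minimizer
```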
Name the two ways in which optimization is used in machine learning.
1) For training a model: we minimize the cost function J(theta), and the optimization variables are the model parameters theta. 2) For tuning hyperparameters, which are set before training: we then optimize the objective function with the hyperparameters as the optimization variables (e.g., minimizing validation error), as sketched below.
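A hedged sketch of both roles, assuming scikit-learn is available; the choice of Ridge, the grid of alpha values, and the synthetic data are my own:

```python
# (1) Training: fitting Ridge minimizes its cost over the model
#     parameters theta, for a fixed hyperparameter alpha.
# (2) Tuning: an outer loop optimizes over the hyperparameter alpha,
#     using held-out validation error as the objective.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_alpha, best_err = None, np.inf
for alpha in [0.01, 0.1, 1.0, 10.0]:             # hyperparameter grid
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)   # (1) inner optimization
    err = np.mean((model.predict(X_val) - y_val) ** 2)
    if err < best_err:                           # (2) outer optimization
        best_alpha, best_err = alpha, err
print(best_alpha)
```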
Why is a convex function a good function to optimize?
Since every local minimum of a convex function is also a global minimum (and a strictly convex function has a unique minimizer).
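For reference, the defining inequality of convexity (standard material, not stated on the card):

```latex
f\big(\lambda x + (1-\lambda)\, y\big) \;\le\; \lambda f(x) + (1-\lambda) f(y),
\qquad \text{for all } x, y \text{ and } \lambda \in [0, 1].
```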
Give examples of convex cost functions?
The cost functions for linear regression, logistic regression, and L1-regularized linear regression (LASSO).
Give an example of a non-convex function.
The cost function for a deep neural network.
State the optimization problem, i.e., the minimization of the cost function, for linear regression.
\hat{\theta} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \left( \mathbf{x}_i^\top \theta - y_i \right)^2 = \arg\min_{\theta} \frac{1}{n} \left\| \mathbf{X}\theta - \mathbf{y} \right\|_2^2
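A minimal sketch of solving this least-squares problem numerically with NumPy; the synthetic data are my own:

```python
# Solve theta_hat = argmin_theta (1/n) ||X theta - y||_2^2
# via numpy's least-squares routine (equivalently, the normal equations).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.05 * rng.normal(size=100)

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat)   # close to theta_true
```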
Why is coordinate descent particularly fast and efficient for optimizing an L1-regularized linear regression model?
Since L1 regularization drives many coefficients to exactly zero, many of the updates in the coordinate descent will simply set theta_j = 0, due to the sparsity of the optimal theta.hat.
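A minimal coordinate-descent sketch for the lasso with soft-thresholding updates (my own illustrative implementation; the objective scaling 1/(2n), the regularization strength, and the data are assumptions):

```python
# Coordinate descent for the lasso objective
#   (1/(2n)) ||X theta - y||_2^2 + lam * ||theta||_1.
# Each coordinate update is a closed-form soft-threshold; when the
# optimum is sparse, many updates just set theta_j = 0.
import numpy as np

def soft_threshold(a, t):
    return np.sign(a) * max(abs(a) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=100):
    n, p = X.shape
    theta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # (1/n) X_j^T X_j
    for _ in range(n_iters):
        for j in range(p):
            # partial residual, excluding feature j's current contribution
            r = y - X @ theta + X[:, j] * theta[j]
            rho = X[:, j] @ r / n
            theta[j] = soft_threshold(rho, lam) / col_sq[j]
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
theta_true = np.zeros(10)
theta_true[:3] = [2.0, -1.0, 0.5]
y = X @ theta_true + 0.05 * rng.normal(size=200)
print(lasso_cd(X, y, lam=0.1))   # near theta_true (with shrinkage), zeros elsewhere
```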