Background 2 Flashcards
Algorithms
Describe line search algorithms:
In each iteration, choose a direction p_k and then search along this direction from the current iterate x_k by approximately solving the one-dimensional problem min_{α>0} f(x_k + αp_k) for a step length.
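A minimal sketch of this iteration in Python; `choose_direction` and `choose_step` are hypothetical callables standing in for whatever the particular method (steepest descent, Newton, etc.) supplies:

```python
import numpy as np

def line_search_minimize(f, grad, x0, choose_direction, choose_step,
                         tol=1e-8, max_iter=1000):
    """Generic line-search loop: x_{k+1} = x_k + alpha_k * p_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stationary point reached
            break
        p = choose_direction(x, g)           # e.g. -g for steepest descent
        alpha = choose_step(f, grad, x, p)   # approx. minimize f(x + alpha*p)
        x = x + alpha * p
    return x
```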
Describe trust region methods:
Construct a model function m_k whose behaviour near the current point x_k is similar to that of the actual objective f.
The search for a minimizer of m_k is restricted to some region around x_k;
the step is an approximate minimizer of the model in this region.
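A sketch of the standard accept/reject and radius-update logic; the thresholds 0.25 and 0.75 are conventional illustrative values, and `solve_subproblem` is a hypothetical callable returning an approximate minimizer of the model within the region:

```python
import numpy as np

def trust_region_step(f, x, model, solve_subproblem, delta, delta_max=10.0):
    """One trust-region iteration: accept/reject the step, resize the region."""
    p = solve_subproblem(model, delta)               # approx. min of m_k, ||p|| <= delta
    actual = f(x) - f(x + p)                         # actual reduction in f
    predicted = model(np.zeros_like(p)) - model(p)   # reduction predicted by m_k
    rho = actual / predicted                         # agreement between model and f
    if rho < 0.25:
        delta = 0.25 * delta                         # poor model: shrink region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta = min(2.0 * delta, delta_max)          # good model at boundary: expand
    x_new = x + p if rho > 0 else x                  # accept step only if f decreased
    return x_new, delta
```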
What is steepest descent direction?
What is the characteristic of this direction ?
The steepest descent direction is the negative of the gradient, −∇f(x_k).
The direction is orthogonal to the contours (level sets) of the function.
What is steepest descent method?
what is its convergence rate?
A line search method that searches along p_k = −∇f(x_k), the negative of the gradient, at every step.
It converges at a linear rate; on ill-conditioned problems the convergence factor r is close to 1, so progress is very slow.
Advantage and disadvantage of the steepest descent method ?
Advantage:
- It only requires calculation of the gradient, not of second derivatives.
- The negative of the gradient is guaranteed to produce a decrease in f.
Disadvantage:
- The method may converge very slowly on ill-conditioned (poorly scaled) problems, zigzagging toward the solution, as the sketch below illustrates.
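A sketch of the slow linear rate on an ill-conditioned quadratic f(x) = ½xᵀAx, where the exact minimizing step length along −g is available in closed form; the matrix below is an illustrative example:

```python
import numpy as np

def steepest_descent_quadratic(A, x0, tol=1e-8, max_iter=10000):
    """Steepest descent on f(x) = 0.5 * x^T A x with the exact step length."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = A @ x                        # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            return x, k
        alpha = (g @ g) / (g @ (A @ g))  # exact minimizer along -g
        x = x - alpha * g
    return x, max_iter

# Ill-conditioned example: condition number 1000, so the convergence factor r
# is close to 1 and the iterates zigzag slowly toward the minimizer at 0.
A = np.diag([1.0, 1000.0])
x, iters = steepest_descent_quadratic(A, [1.0, 1.0])
print(iters)   # large iteration count reflects the slow linear rate
```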
What is Newton direction ?
It is derived from the second-order Taylor series approximation of f: p_k^N = −(∇²f_k)^{−1} ∇f_k,
assuming the Hessian ∇²f_k is positive definite.
Advantage and disadvantage of Newton method ? What is the convergence rate?
Advantage:
- avoids slow convergence on ill-conditioned problems
- fast rate of local convergence: it converges at a quadratic rate
Disadvantage:
- the Hessian is expensive to compute.
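A minimal sketch of the Newton iteration with unit step length, assuming user-supplied gradient and Hessian callables and a Hessian that stays positive definite near the solution:

```python
import numpy as np

def newton_method(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method: solve (grad^2 f_k) p = -grad f_k, step x_{k+1} = x_k + p."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)  # Newton direction (positive definite Hessian)
        x = x + p                         # full step; quadratic local convergence
    return x
```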
What is quasi-Newton direction ?
Uses an approximation of the Hessian instead of the true Hessian,
built up from the knowledge gained from the gradient at each step.
Advantage of quasi-Newton method ?
and its convergence rate ?
- Does not require computation of the Hessian.
- Still attains a superlinear rate of convergence (a sketch of the update follows below).
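A sketch of the BFGS update, the best-known quasi-Newton formula, which maintains an inverse-Hessian approximation H from gradient differences alone; this shows only the update, not a full solver:

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H.

    s = x_{k+1} - x_k (step taken), y = grad f_{k+1} - grad f_k (gradient change).
    """
    rho = 1.0 / (y @ s)                 # requires the curvature condition y^T s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # H_{k+1} = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T
    return V @ H @ V.T + rho * np.outer(s, s)
```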
Conjugate Gradient Strategies vs steepest descent ?
search direction?
convergence rates?
advantage ?
- More effective than the steepest descent direction.
- Each new search direction depends only on data from the previous step.
- Does not attain the fast convergence rates of Newton or quasi-Newton methods.
- Has the advantage of not requiring storage of matrices (illustrated in the sketch below).
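A sketch of the linear conjugate gradient method for f(x) = ½xᵀAx − bᵀx (equivalently, solving Ax = b) with A symmetric positive definite; note that only a few vectors are stored, no extra matrices:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    """Linear CG: each direction is conjugate to the previous ones w.r.t. A."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x                  # residual = negative gradient
    p = r.copy()                   # first direction is steepest descent
    rs = r @ r
    for _ in range(len(b)):        # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new direction uses only the previous step
        rs = rs_new
    return x
```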
Golden section search ?
A derivative-free method for one-dimensional minimization: maintain a bracketing interval known to contain the minimizer of a unimodal function, and shrink it by a constant factor (the inverse golden ratio, ≈ 0.618) at each iteration by comparing function values at two interior points.
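A minimal sketch, assuming f is unimodal on the initial bracket [a, b]:

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Shrink the bracket [a, b] by the constant factor 1/phi each iteration."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimizer lies in [a, d]
            b, d, fd = d, c, fc              # reuse old c as the new d
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                # minimizer lies in [c, b]
            a, c, fc = c, d, fd              # reuse old d as the new c
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```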
Backtracking line search ?
Start with an initial step length ᾱ (ᾱ = 1 for Newton-type methods) and repeatedly contract it, α ← ρα with ρ ∈ (0, 1), until the sufficient decrease (Armijo) condition f(x_k + αp_k) ≤ f(x_k) + cα∇f_kᵀp_k is satisfied, for some constant c ∈ (0, 1).
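A minimal sketch with conventional default parameters (ρ = 0.5, c = 10⁻⁴); p is assumed to be a descent direction:

```python
import numpy as np

def backtracking_line_search(f, grad, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient decrease condition holds."""
    fx = f(x)
    slope = grad(x) @ p          # directional derivative; negative for descent p
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho             # contract the step
    return alpha
```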
Trust region linear model ?
m_k(p) = f_k + ∇f_kᵀp, minimized subject to ‖p‖ ≤ Δ_k; its solution is the steepest descent direction scaled to the region boundary, p = −(Δ_k/‖∇f_k‖)∇f_k.
Trust region quadratic model ?
m_k(p) = f_k + ∇f_kᵀp + ½pᵀB_kp, minimized subject to ‖p‖ ≤ Δ_k, where B_k is the Hessian ∇²f_k or an approximation to it.
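A sketch computing the Cauchy point, the minimizer of the quadratic model along the steepest descent direction within the region; this is the standard cheap approximate solution of the subproblem:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimize m(p) = g^T p + 0.5 p^T B p along -g, subject to ||p|| <= delta.

    Assumes g is nonzero (the caller checks for convergence first).
    """
    gnorm = np.linalg.norm(g)
    gBg = g @ (B @ g)
    if gBg <= 0:
        tau = 1.0                                  # model unbounded along -g: go to boundary
    else:
        tau = min(1.0, gnorm**3 / (delta * gBg))   # interior minimizer, capped at boundary
    return -tau * (delta / gnorm) * g
```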