EGD - Exam 2 Flashcards
In dynamic optimization, what is the difference between optimizing in continuous time and in discrete time?
A variable measured in continuous time is defined at every instant and can be differentiated with respect to time, so its dynamics are described by differential equations. A variable measured in discrete time is defined only at separate periods and cannot be differentiated; its dynamics are described by difference equations instead.
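A minimal sketch of the distinction, using a hypothetical capital-accumulation law of motion (the names s, delta, alpha and the functional form are assumptions for illustration, not from the cards): in discrete time the dynamics are a difference equation stepping period by period, while in continuous time they are a differential equation, which can be approximated numerically with small Euler steps.

```python
# Hypothetical example: capital accumulation with saving rate s,
# depreciation delta, and production f(k) = k**alpha.
s, delta, alpha = 0.3, 0.1, 0.5
f = lambda k: k ** alpha

# Discrete time: a difference equation, k_{t+1} - k_t = s*f(k_t) - delta*k_t.
def discrete_path(k0, periods):
    k = [k0]
    for _ in range(periods):
        k.append(k[-1] + s * f(k[-1]) - delta * k[-1])
    return k

# Continuous time: the differential equation k_dot = s*f(k) - delta*k,
# approximated by Euler steps of size dt (dt -> 0 recovers continuous time).
def continuous_path(k0, T, dt=0.01):
    k, t = k0, 0.0
    while t < T:
        k += dt * (s * f(k) - delta * k)
        t += dt
    return k

print(discrete_path(1.0, 5)[-1])
print(continuous_path(1.0, 5.0))
```

Both versions approach the same steady state k* = (s/delta)^(1/(1-alpha)); the difference is only whether time moves in unit jumps or flows continuously.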
What is optimal control?
It is an approach to solving dynamic optimization problems in which one of the constraints is a differential equation.
Describe a typical optimal control problem in economic growth theory.
▶ Economic agent chooses (controls) a sequence of values for certain variables over time (control variables).
▶ She/he wants to maximize an objective function (utility/profits) subject to some constraints.
▶ Such constraints are dynamic: they describe how the state of the economy evolves over time, via state variables.
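The setup in these bullets can be sketched with a tiny hypothetical "cake-eating" problem, solved in discrete time by backward induction rather than continuous-time optimal control (the grid, horizon, and log utility are assumptions for the sketch): the agent controls consumption c_t, the state variable is the remaining cake k_t, and the dynamic constraint k_{t+1} = k_t - c_t plays the role of the state equation.

```python
import math

# Hypothetical cake-eating problem: maximize sum_t log(c_t) over T periods,
# subject to k_{t+1} = k_t - c_t, with k_0 given and 0 < c_t <= k_t.
T = 3          # number of periods
K0 = 1.0       # initial cake size (state variable k_0)
N = 201        # grid points for the state

grid = [i * K0 / (N - 1) for i in range(N)]

def solve():
    # Backward induction: V[j] is next period's value at state grid[j];
    # policy[t][i] is optimal consumption at period t with state grid[i].
    V = [0.0] * N            # terminal value: leftover cake is worthless
    policy = []
    for _ in range(T):
        V_new = [-math.inf] * N
        pol = [0.0] * N
        for i in range(1, N):
            k = grid[i]
            for j in range(i):               # choose next state grid[j] < k
                c = k - grid[j]              # control: consumption this period
                val = math.log(c) + V[j]
                if val > V_new[i]:
                    V_new[i], pol[i] = val, c
            # V_new[i] now holds the maximized value at state grid[i]
        V = V_new
        policy.insert(0, pol)
    return V, policy

V, policy = solve()
print(policy[0][-1])  # first-period consumption from a full cake, near K0/3
```

With log utility the optimal plan spreads the cake evenly, c_t ≈ k_0/T, which the grid solution recovers up to the grid's resolution; in the continuous-time version the same trade-off would be handled by optimal control with a differential-equation constraint.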
Define the general problem of dynamic optimization and optimal control.
The maximization of the utility function V(0), defined as the integral from initial time 0 to final time T of the instantaneous objective function u, which depends on a control variable c_t, a state variable k_t, and time t. This maximization is subject to:
a) the equation of motion of the state variable: its time derivative k̇_t is a function of k_t, c_t, and t;
b) k_0, which is given;
c) the condition that the final value of k, k_T,