Nonlinear Programming Flashcards
The concept of __________ is now well rooted as a principle underlying the analysis of many complex decision or allocation problems. It offers a certain degree of philosophical elegance that is hard to dispute, and it often offers an indispensable
degree of operational simplicity.
Optimization
Using this _________, one approaches a complex decision problem, involving the selection of values for a number of interrelated variables, by focusing attention on a single objective designed to quantify
performance and measure the quality of the decision. This one objective is maximized
(or minimized, depending on the formulation) subject to the constraints that may limit
the selection of decision variable values.
Optimization philosophy
Optimization problems are usually divided into two major categories:
Linear and Nonlinear Programming
These categories are distinguished by the presence or
absence of nonlinear functions in either the objective function or constraints and lead to very distinct solution methods.
Linear and nonlinear programming
_________ programming assumes linearity. It is a technique that assumes that the relationships between the decision variables, the constraints, and the objective
function are linear, meaning that they can be represented by straight lines or planes.
Linear
_______ programming embraces
the complexities of reality. It is a mathematical technique for
determining the optimal solution to many business problems.
Nonlinear
LP problems can be solved using various algorithms, such as the
Simplex method, interior-point method, or branch-and-bound method
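As a minimal sketch (assuming SciPy is installed), a small LP can be handed to `scipy.optimize.linprog`, which uses an interior-point/simplex-style solver under the hood; the problem below is illustrative, not from the text:

```python
# Maximize z = 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # the optimum lies at a vertex of the feasible polyhedron
```

Here the optimum is at the corner point (4, 0) with z = 12, consistent with the card below on LP optima occurring at vertices.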
It can also be used to model
problems such as resource allocation, production planning, transportation, scheduling,
and network flow.
Linear Programming
It is more general than linear programming, since it can handle nonlinear
objective functions and constraints. The ____ techniques are based on a reduced-gradient method utilizing the Lagrange multiplier, or on the penalty-function optimization approach.
Nonlinear Programming (NLP)
In this method, the _________ (the
reduced gradient) is used to choose a search direction in the iterative procedure.
first partial derivatives of the equation
_______ programming is a technique that allows for the relationships between
the decision variables, the constraints, and the objective function to be nonlinear,
meaning that they can be represented by curves or surfaces.
Nonlinear
NLP problems can be more complex and challenging to solve than LP problems, and they may require different algorithms, such as the
gradient method, Newton's method, or penalty method
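As a minimal sketch (assuming SciPy), one such algorithm, sequential quadratic programming (SLSQP), solves a small NLP; the example problem is my own, not from the text:

```python
# Minimize x^2 + y^2 subject to x + y = 1; the analytic optimum is (0.5, 0.5).
from scipy.optimize import minimize

objective = lambda v: v[0]**2 + v[1]**2
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] - 1}
res = minimize(objective, x0=[2.0, 0.0], method="SLSQP", constraints=[constraint])
print(res.x)  # approximately [0.5, 0.5]
```

Unlike the LP case, the solver iterates from a starting guess rather than hopping between vertices.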
_________ can be used to model problems such as portfolio optimization, machine learning, engineering design, and economics.
Nonlinear Programming
Relatively straightforward and fixed. Optimal solutions are found at the corner points of the convex polyhedral feasible region.
Linear Programming
Varies. Optimal solutions can lie anywhere along the boundary of the feasible region or anywhere within its interior.
Nonlinear Programming
Very easily amenable to linear algebraic transformation. The objective function has degree one.
Linear Programming
Must be dealt with extreme care, as complex situations can arise. Objective functions have degree greater than one.
Nonlinear Programming
Optimum always attained at constraint boundaries. A local optimum is also a global optimum.
Linear programming
Optima may be in the interior as well as at the boundaries of constraints. A local optimum is not necessarily a global optimum.
Nonlinear programming
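The local-versus-global distinction can be seen in a minimal plain-Python sketch: gradient descent on a nonconvex function with two basins lands in different local minima depending on the starting point (the function is my own illustrative choice):

```python
# f(x) = (x^2 - 1)^2 + 0.3x has two local minima; only one is global.
def f(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.05, steps=200):
    # plain gradient descent: follow the negative gradient
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left, right = descend(-1.5), descend(1.5)
print(left, right)         # two different local minima
print(f(left) < f(right))  # only the left basin holds the global minimum
```

A convex problem (as in LP) would not show this behavior: every local optimum would also be global.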
If either f (x1, x2, ……, xn) or some gi (x1, x2, …… xn), or both, are nonlinear, then the problem of determining the n-tuple (x1, x2, …. xn) which makes z a minimum
or maximum and satisfies both (b) and (c) above is called a
General nonlinear programming problem (GNLPP)
This is one of the reasons why all nonlinear programming problems cannot be grouped under the same title.
There is no known algorithm to effectively and efficiently solve a given general nonlinear programming problem. A method that is found to be useful in one problem may not be useful in another.
Solving a nonlinear programming problem depends on
● Number of decision variables
● Concavity of the function
● Presence of constraints
● Complexity of objective function
Steps in Identifying the Method to Be Used
- Define variables
- Define requirements (constrained/unconstrained)
- Form the objective function
Nonlinear Programming Problem Classification
Unconstrained and Constrained
Unconstrained
● One Dimensional Search
● Quadratic Forms
○ Derivatives
○ Taylor Series
○ Conditions for maximum/minimum
● Gradient Search
● Newton’s Method
● Quasi-Newton Method
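Newton's method from the list above can be sketched in a few lines of plain Python (the example function is mine, chosen so the answer is known analytically):

```python
# Newton's method for unconstrained 1-D minimization: iterate
# x <- x - f'(x)/f''(x), i.e. root-finding applied to f'.
# Here f(x) = x**4/4 - x, whose stationary point f'(x) = x**3 - 1 = 0 is x = 1.
def newton_minimize(fprime, fsecond, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:  # stop when the Newton step is tiny
            break
    return x

x_star = newton_minimize(lambda x: x**3 - 1, lambda x: 3 * x**2, x=2.0)
print(x_star)  # approximately 1.0
```

Quasi-Newton methods replace the exact second derivative with a running approximation, which is why they appear as a separate bullet.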
Constrained
● Equality
○ Lagrange Multiplier Method
● Inequality
○ Karush-Kuhn-Tucker
● Penalty Method
It is a powerful technique used in optimization problems, in which the objective is to maximize or minimize a function while adhering to one or more constraints.
Lagrange Multiplier Method
He is an Italian-born mathematician who introduced the Lagrange Multiplier Method in 1804.
Joseph-Louis Lagrange
The fundamental idea behind Lagrange Multiplier Method is to
convert a constrained problem into a form that allows the derivative test of an unconstrained problem to be applied
This method is particularly functional for constrained optimization situations.
Lagrange Multiplier Method
It is used for determining the local maxima and minima of a function of the form f(x, y, z) under equality constraints of this type g(x, y, z) = k or g(x, y, z) =0.
Lagrange Multiplier Method
Denoted by L, it is a function created by combining the objective function with the constraints. An example of a mathematical representation of a ______ is:
L(x, y, λ) = f(x, y) + λg(x, y)
Lagrangian Function
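A minimal sketch (assuming SymPy is available) of forming a Lagrangian and finding its critical point; the example problem, min x² + y² subject to x + y = 1, is my own:

```python
# Build L(x, y, lam) = f + lam*g and solve grad L = 0 symbolically.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x**2 + y**2          # objective function
g = x + y - 1            # equality constraint written as g = 0
L = f + lam * g          # the Lagrangian function

# Setting all first partial derivatives of L to zero gives the critical point.
solutions = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(solutions)  # x = 1/2, y = 1/2, lam = -1
```

The multiplier value lam = −1 is exactly the "cost" of the constraint described in the cards that follow.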
It represents the quantity to be maximized or minimized, such as _________ or any other measure of interest. In mathematical notation, the objective function is frequently denoted as:
f (x1, x2, x3…., xn), where x1, x2, x3…., xn are the decision variables.
profit, cost, utility
Refers to the limitations or requirements that must be met in an optimization task. These ________ restrict the feasible values of the decision
variables and frequently arise from real-world restrictions, physical limitations, or
regulatory obligations.
Constraints
Lagrange multipliers are denoted by λ, called ________. They are scalars that indicate the “cost” of violating each constraint. They are used to adjust the objective function to account for the constraints, allowing the optimization problem to be solved effectively.
Lambda
These are the points where the Lagrangian function’s derivative, or gradient, is zero. They are potential solutions to the optimization problem and essential for identifying viable solutions that optimize the objective function while also satisfying the constraints.
Critical Points
CASE 1: ONE CONSTRAINT AND TWO VARIABLES (Steps)
Step 1: Find the Lagrange’s Function
Step 2: Find the stationary point ‘z0’ (a, b) and the Lagrange multiplier ‘λ’ by using…
Step 3: Find the Determinant ‘∆’
Step 4: Checking of maxima or minima
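The four steps above can be sketched with SymPy (assuming it is available); the worked example, max x·y subject to x + y = 10, is my own illustration, not the one from the lesson:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f, g = x * y, x + y - 10

# Step 1: the Lagrangian.
L = f + lam * g

# Step 2: stationary point and multiplier from grad L = 0.
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)[0]

# Step 3: the determinant of the bordered Hessian.
H = sp.Matrix([
    [0,             sp.diff(g, x),    sp.diff(g, y)],
    [sp.diff(g, x), sp.diff(L, x, x), sp.diff(L, x, y)],
    [sp.diff(g, y), sp.diff(L, y, x), sp.diff(L, y, y)],
])
delta = H.det().subs(sol)

# Step 4: for two variables and one constraint, delta > 0 indicates
# a constrained maximum and delta < 0 a constrained minimum.
print(sol, delta)  # x = 5, y = 5, lam = -5, delta = 2 > 0: a maximum
```

Here Δ = 2 > 0, so (5, 5) is the constrained maximum of x·y, as expected by symmetry.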
CASE 2: ONE CONSTRAINT AND THREE VARIABLES (Steps)
Step 1: Find Lagrange’s Function
Step 2: Find stationary point ′z0′ (a,b,c) and ‘λ’ by using…
Step 3: Note: The computation for delta is another lesson…
Step 4: Since ∆ is negative,
∴ Z0 gives a minimum value of the objective function
Step 5: Put x1, x2, x3 into Equation 4; put λ = −2 into Equations 1, 2, & 3
CASE 3: TWO CONSTRAINTS AND TWO VARIABLES (Steps)
Step 1: Find the Lagrange’s function
Step 2: Find stationary point Z0 (a, b) and λ1, λ2 by using
Step 3: Find the bordered Hessian matrix and ∆4
Step 4: Determine the rules in identifying maxima or minima
Step 5: Substitute
CASE 4: TWO CONSTRAINTS AND THREE VARIABLES (Steps)
Step 1: Find the Lagrange’s function
Step 2: Find stationary point Z0 (a, b, c) and λ1, λ2 by using
Step 3: Find the bordered Hessian matrix and ∆5
The optimality conditions for a constrained local optimum are called the
Karush-Kuhn-Tucker (KKT) conditions, also called the Kuhn-Tucker conditions
The Kuhn-Tucker Conditions were originally named after ______ & ________, who first published the
conditions in 1951, and ________, who stated the necessary conditions in his master’s thesis in 1939.
Harold W. Kuhn and Albert W. Tucker and William Karush
These are the first-derivative tests, or first-order necessary conditions, for a solution in nonlinear programming to be optimal in a mathematical optimization problem, provided that a set of regularity conditions is satisfied. They play an important role in constrained optimization theory and algorithm development.
KKT Conditions
The Kuhn-Tucker necessary conditions for the problem Maximize Z = f(x) subject to the constraints gi(x) ≤ 0, i = 1, 2, … , m are also sufficient conditions if
f(x) is concave and gi(x) are convex functions of x
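The sufficiency statement above can be checked numerically in a minimal sketch (assuming SciPy); the problem, max Z = −(x−2)² − (y−2)² subject to x + y − 2 ≤ 0, is my own concave/convex example:

```python
# Solve the problem, then verify the KKT conditions at the solution.
import numpy as np
from scipy.optimize import minimize

neg_f = lambda v: (v[0] - 2)**2 + (v[1] - 2)**2               # minimize -f
cons = {"type": "ineq", "fun": lambda v: -(v[0] + v[1] - 2)}  # SciPy wants fun >= 0
res = minimize(neg_f, x0=[0.0, 0.0], method="SLSQP", constraints=[cons])
x, y = res.x  # expected optimum: (1, 1), on the constraint boundary

# KKT stationarity for a maximum: grad f = mu * grad g with mu >= 0,
# plus complementary slackness (here g is active, so mu may be positive).
grad_f = np.array([-2 * (x - 2), -2 * (y - 2)])
grad_g = np.array([1.0, 1.0])
mu = grad_f[0] / grad_g[0]
print(res.x, mu)  # mu is nonnegative and g(x) = 0, so the KKT conditions hold
```

Because f is concave and g is convex, satisfying KKT here is not just necessary but sufficient for global optimality, exactly as the card states.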
CONDITIONS FOR:
A. One inequality constraint and two variables
B. Two inequality constraints and two variables