Nonlinear Programming Flashcards

1
Q

The concept of __________ is now well rooted as a principle underlying the analysis of many complex decision or allocation problems. It offers a certain degree of philosophical elegance that is hard to dispute, and it often offers an indispensable
degree of operational simplicity.

A

Optimization

2
Q

Using this _________, one approaches a complex decision problem, involving the selection of values for a number of interrelated variables, by focusing attention on a single objective designed to quantify
performance and measure the quality of the decision. This one objective is maximized
(or minimized, depending on the formulation) subject to the constraints that may limit
the selection of decision variable values.

A

Optimization philosophy

3
Q

Optimization problems are usually divided into two major categories:

A

Linear and Nonlinear Programming

4
Q

These categories are distinguished by the presence or
absence of nonlinear functions in either the objective function or constraints and lead to very distinct solution methods.

A

Linear and nonlinear programming

5
Q

_________ programming assumes linearity. It is a technique that assumes that the relationships between the decision variables, the constraints, and the objective
function are linear, meaning that they can be represented by straight lines or planes.

A

Linear

6
Q

_______ programming embraces
the complexities of reality. It is a mathematical technique for
determining the optimal solution to many business problems.

A

Nonlinear

7
Q

LP problems can be solved using various algorithms, such as the

A

Simplex method, interior-point method, or branch-and-bound method

8
Q

It can also be used to model
problems such as resource allocation, production planning, transportation, scheduling,
and network flow.

A

Linear Programming

9
Q

It is more accurate than linear programming because it can be applied to nonlinear objective functions and constraints. The ____ techniques are based on a reduced-gradient method utilizing the Lagrange multiplier, or on the penalty-function optimization approach.

A

Nonlinear Programming (NLP)

10
Q

In this method, the _________ (the reduced gradient) is used to choose a search direction in the iterative procedure.

A

first partial derivatives of the equation

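
The role of the first partial derivatives as a search direction can be sketched in a few lines. This is an illustrative example, not from the source; the function f(x, y) = (x − 3)² + (y + 1)² and the fixed step size are arbitrary choices.

```python
# Minimal gradient-search sketch: minimize f(x, y) = (x - 3)**2 + (y + 1)**2.
# The gradient (vector of first partial derivatives) gives the search
# direction; we step against it until the gradient is nearly zero.

def grad(x, y):
    # First partial derivatives of f with respect to x and y.
    return (2 * (x - 3), 2 * (y + 1))

def gradient_search(x=0.0, y=0.0, step=0.1, tol=1e-8, max_iter=10_000):
    for _ in range(max_iter):
        gx, gy = grad(x, y)
        if gx * gx + gy * gy < tol:          # gradient ~ 0: stationary point
            break
        x, y = x - step * gx, y - step * gy  # move against the gradient
    return x, y

x_star, y_star = gradient_search()
print(x_star, y_star)  # converges near the unconstrained minimizer (3, -1)
```

The same loop underlies reduced-gradient methods, where the search direction is additionally projected to respect the constraints.
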
11
Q

_______ programming is a technique that allows for the relationships between
the decision variables, the constraints, and the objective function to be nonlinear,
meaning that they can be represented by curves or surfaces.

A

Nonlinear

12
Q

NLP problems can be more complex and challenging to solve than LP problems, and they may require different algorithms, such as the

A

gradient method, Newton's method, or penalty method

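
As a sketch of one of these algorithms, Newton's method for a single variable repeatedly jumps to the minimizer of a local quadratic model. The test function below is an arbitrary illustration, not from the source.

```python
# Newton's method sketch for a one-variable NLP: find a stationary point of
# f(x) = x**4 - 3*x**3 + 2 (an illustrative function). Each iteration is
#   x_new = x - f'(x) / f''(x),
# i.e. the minimizer of the local quadratic (second-order Taylor) model.

def f1(x):   # first derivative f'(x)
    return 4 * x**3 - 9 * x**2

def f2(x):   # second derivative f''(x)
    return 12 * x**2 - 18 * x

def newton(x=3.0, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        step = f1(x) / f2(x)
        x -= step
        if abs(step) < tol:   # step size ~ 0: converged
            break
    return x

print(newton())  # converges to 9/4 = 2.25, where f'(x) = 0
```
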
13
Q

_________ can be used to model problems such as portfolio optimization, machine learning, engineering design, and economics.

A

Nonlinear Programming

14
Q

Relatively straightforward and fixed. Optimal solutions are found at the finitely many corner points of the convex polyhedron.

A

Linear Programming

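
This corner-point property can be illustrated by brute force: for a tiny LP, evaluating the objective at each vertex of the feasible polyhedron finds the optimum. The problem data below are invented for illustration.

```python
# LP corner-point sketch: because an LP optimum lies at a vertex of the
# feasible polyhedron, a small problem can be solved by enumerating the
# corner points directly. Illustrative problem:
#   maximize 3x + 2y  subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0.

# Vertices of the feasible region (intersections of active constraints).
corners = [(0, 0), (3, 0), (3, 1), (0, 4)]

def objective(p):
    x, y = p
    return 3 * x + 2 * y

best = max(corners, key=objective)
print(best, objective(best))  # (3, 1) with value 11
```

The simplex method exploits exactly this fact, but walks from vertex to adjacent vertex instead of enumerating them all.
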
15
Q

Varies. Optimal solutions can be anywhere along the boundaries of the feasible region or anywhere within the feasible region.

A

Nonlinear Programming

16
Q

Very easily amenable to linear algebraic transformation. The objective function has degree one.

A

Linear Programming

17
Q

Must be dealt with extreme care, as it can result in complex situations. The objective function has degree greater than one.

A

Nonlinear Programming

18
Q

Optimum always attained at constraint boundaries. A local optimum is also a global optimum

A

Linear programming

19
Q

Optima may be in the interior as well as at boundaries of constraints. A local optimum is not necessarily a global optimum

A

Nonlinear programming

20
Q

If either f(x1, x2, …, xn) or some gi(x1, x2, …, xn), or both, are nonlinear, then the problem of determining the n-tuple (x1, x2, …, xn) that makes z a minimum or maximum and satisfies both (b) and (c) above is called a

A

General nonlinear programming problem (GNLPP)

21
Q

This is one of the reasons why all the non-linear programming problems cannot be grouped under the same title.

A

There is no known algorithm to effectively and efficiently solve a given general nonlinear programming problem. A method that is found to be useful in one problem may not be useful in another.

22
Q

Solving a nonlinear programming problem depends on

A

● Number of decision variables
● Concavity of the function
● Presence of constraints
● Complexity of objective function

23
Q

Steps in Identifying the Method to Be Used

A
  1. Define variables
  2. Define requirements (constrained/unconstrained)
  3. Form the objective function
24
Q

Nonlinear Programming Problem Classification

A

Unconstrained and Constrained

25
Q

Unconstrained

A

● One Dimensional Search
● Quadratic Forms
○ Derivatives
○ Taylor Series
○ Conditions for maximum/minimum
● Gradient Search
● Newton’s Method
● Quasi-Newton Method

26
Q

Constrained

A

● Equality
○ Lagrange Multiplier Method
● Inequality
○ Karush-Kuhn-Tucker
● Penalty Method
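
A minimal sketch of the penalty method, under invented problem data (minimize x² subject to x ≥ 1, not from the source): the constraint is folded into the objective as a quadratic penalty, and the penalty weight μ is increased until the unconstrained minimizers approach the constrained optimum.

```python
# Penalty-method sketch: minimize f(x) = x**2 subject to x >= 1 by
# minimizing the penalized objective
#   P(x) = x**2 + mu * max(0, 1 - x)**2
# for growing weights mu. Each unconstrained subproblem is solved with
# Newton steps, which is exact here because P is piecewise quadratic.

def minimize_penalized(mu, x=0.0, iters=50):
    for _ in range(iters):
        viol = max(0.0, 1.0 - x)                # constraint violation
        g = 2 * x - 2 * mu * viol               # P'(x)
        h = 2 + (2 * mu if viol > 0 else 0)     # P''(x)
        x -= g / h                              # Newton step
    return x

for mu in (1, 10, 100, 1000):
    # The subproblem minimizer is mu / (1 + mu), which tends to x = 1.
    print(mu, minimize_penalized(mu))
```
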

27
Q

It is a powerful technique used in optimization problems, in which the objective is to maximize or minimize a function while adhering to one or more constraints.

A

Lagrange Multiplier Method

28
Q

He is the Italian mathematician who introduced the Lagrange Multiplier Method in 1804.

A

Joseph-Louis Lagrange

29
Q

The fundamental idea behind Lagrange Multiplier Method is to

A

convert a constrained problem into a form that allows the derivative test of an unconstrained problem to be applied

30
Q

This method is particularly functional for constrained optimization situations.

A

Lagrange Multiplier Method

31
Q

It is used for determining the local maxima and minima of a function of the form f(x, y, z) under equality constraints of the type g(x, y, z) = k or g(x, y, z) = 0.

A

Lagrange Multiplier Method

32
Q

It is denoted by L; it is a function created by combining the objective function with the constraints. An example of a mathematical representation of a ______ is:

L(x, y, λ) = f(x, y) + λg(x, y)

A

Lagrangian Function
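
A quick numeric check of this construction, using an invented example (maximize f(x, y) = xy subject to x + y = 10): at the constrained optimum, every partial derivative of the Lagrangian vanishes.

```python
# Form L(x, y, lam) = f(x, y) + lam * g(x, y) for the illustrative problem
#   maximize f(x, y) = x*y  subject to  g(x, y) = x + y - 10 = 0.
# The critical point is (x, y) = (5, 5) with lam = -5.

def f(x, y):
    return x * y

def g(x, y):
    return x + y - 10

def lagrangian_grad(x, y, lam):
    # Partial derivatives of L = f + lam * g, computed by hand.
    dL_dx = y + lam      # dL/dx
    dL_dy = x + lam      # dL/dy
    dL_dlam = g(x, y)    # dL/dlam recovers the constraint
    return (dL_dx, dL_dy, dL_dlam)

print(lagrangian_grad(5, 5, -5))  # (0, 0, 0): a critical point of L
```
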

33
Q

It indicates the quantity intended to be maximized or minimized, such as _________ or any other measure of interest. In mathematical notation, the objective function is frequently denoted as:

f(x1, x2, x3, …, xn), where x1, x2, x3, …, xn are the decision variables.

A

profit, cost, utility

34
Q

Refers to the limitations or requirements that must be met in the optimization task. These ________ limit the values available to the decision variables and frequently arise from real-world constraints, physical limitations, or regulatory obligations.

A

Constraints

35
Q

The Lagrange multiplier is denoted by λ, called ________. These are scalars that indicate the "cost" of violating each constraint. They are used to adjust the objective function to account for constraints, allowing the optimization problem to be solved effectively.

A

Lambda

36
Q

These are the points where the Lagrangian function's derivative, or gradient, is zero. They are potential solutions to the optimization problem and essential for identifying viable solutions that optimize the objective function while also satisfying the constraints.

A

Critical Points

37
Q

CASE 1: ONE CONSTRAINT AND TWO VARIABLES (Steps)

A

Step 1: Find the Lagrange's function
Step 2: Find the stationary point z0 (a, b) and the Lagrange multiplier λ by setting the partial derivatives of L to zero
Step 3: Find the determinant ∆
Step 4: Check for a maximum or minimum
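
The steps above can be sketched numerically for an invented example (optimize f(x, y) = xy subject to x + y = 10). Step 3 forms the bordered-Hessian determinant ∆ at the stationary point, and Step 4 reads off maximum or minimum from its sign (for one constraint and two variables, ∆ > 0 indicates a maximum, ∆ < 0 a minimum).

```python
# Case 1 sketch (one constraint, two variables): classify the stationary
# point (5, 5), lam = -5 of L = x*y + lam*(x + y - 10) via the determinant
# of the bordered Hessian.

def det3(m):
    # Determinant of a 3x3 matrix given as nested lists.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Border: constraint gradient (g_x, g_y) = (1, 1).
# Interior: second derivatives L_xx = 0, L_xy = 1, L_yy = 0.
bordered = [
    [0, 1, 1],   # [0,    g_x,  g_y ]
    [1, 0, 1],   # [g_x,  L_xx, L_xy]
    [1, 1, 0],   # [g_y,  L_xy, L_yy]
]

delta = det3(bordered)
print(delta)  # 2 > 0, so (5, 5) gives a constrained maximum
```
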

38
Q

CASE 2: ONE CONSTRAINT AND THREE VARIABLES (Steps)

A

Step 1: Find the Lagrange's function
Step 2: Find the stationary point z0 (a, b, c) and λ by setting the partial derivatives of L to zero
Step 3: Find the determinant ∆
Step 4: Check the sign of ∆ (if ∆ is negative, z0 gives a minimum of the objective function)
Step 5: Substitute the stationary values back into the original equations

39
Q

CASE 3: TWO CONSTRAINTS AND TWO VARIABLES (Steps)

A

Step 1: Find the Lagrange's function
Step 2: Find the stationary point z0 (a, b) and λ1, λ2 by setting the partial derivatives of L to zero
Step 3: Find the bordered Hessian matrix and its determinant ∆4
Step 4: Apply the rules for identifying a maximum or minimum
Step 5: Substitute

40
Q

CASE 4: TWO CONSTRAINTS AND THREE VARIABLES (Steps)

A

Step 1: Find the Lagrange's function
Step 2: Find the stationary point z0 (a, b, c) and λ1, λ2 by setting the partial derivatives of L to zero
Step 3: Find the bordered Hessian matrix and its determinant ∆5

41
Q

The optimality conditions for a constrained local optimum are called the

A

Karush-Kuhn-Tucker (KKT) conditions, also called the Kuhn-Tucker conditions

42
Q

The Kuhn-Tucker conditions were originally named after ______ and ________, who first published the conditions in 1951, and ________, who stated the necessary conditions in his master's thesis in 1939.

A

Harold W. Kuhn and Albert W. Tucker and William Karush

43
Q

These are the first-derivative tests, or first-order necessary conditions, for a solution in nonlinear programming to be optimal, provided that a set of regularity conditions is satisfied. They play an important role in constrained optimization theory and algorithm development.

A

KKT Conditions
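
The KKT conditions can be checked numerically at a candidate point. The problem below (maximize Z = −(x − 2)² subject to x − 1 ≤ 0) is an invented illustration; the conditions coded are stationarity, primal feasibility, dual feasibility, and complementary slackness. Because the objective is concave and the constraint convex, these conditions are also sufficient here.

```python
# KKT check sketch: maximize Z = -(x - 2)**2 subject to g(x) = x - 1 <= 0.
# Candidate optimum: x = 1 with multiplier lam = 2.

def df(x):
    return -2 * (x - 2)   # f'(x)

def g(x):
    return x - 1          # constraint, required to satisfy g(x) <= 0

def dg(x):
    return 1.0            # g'(x)

def kkt_holds(x, lam, tol=1e-9):
    stationarity = abs(df(x) - lam * dg(x)) < tol   # f'(x) = lam * g'(x)
    primal = g(x) <= tol                            # g(x) <= 0
    dual = lam >= -tol                              # lam >= 0
    slackness = abs(lam * g(x)) < tol               # lam * g(x) = 0
    return stationarity and primal and dual and slackness

print(kkt_holds(1.0, 2.0))   # True: all four conditions hold
print(kkt_holds(2.0, 0.0))   # False: x = 2 violates primal feasibility
```
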

44
Q

The Kuhn-Tucker necessary conditions for the problem Maximize Z = f(x) subject to the constraints gi(x) ≤ 0, i = 1, 2, … , m are also sufficient conditions if

A

f(x) is concave and gi(x) are convex functions of x

45
Q

CONDITIONS FOR:

A

A. One inequality constraint and two variables
B. Two inequality constraints and two variables