Theory Flashcards
What is Linear Programming (LP)?
Linear Programming is a mathematical method for solving optimization problems where the objective function and constraints are linear. The goal is to find the maximum or minimum of the objective function, subject to given constraints.
What is the general form of an LP problem? (This is the form that appears in the simplex tableaux.)
The general form of an LP problem is:
Minimize or Maximize c^T x
Subject to Ax ≤ b
and x ≥ 0
where A is a matrix of coefficients,
b is a vector of constraints,
c is a vector of costs or coefficients, and
x is a vector of decision variables
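As a concrete illustration, here is a minimal sketch that solves a small LP in exactly this form with SciPy's linprog; the matrix A, vector b, and cost vector c below are made-up numbers for illustration.

```python
# Minimal sketch: solving an LP in the standard form above with SciPy.
# The concrete numbers are made up for illustration.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])          # minimize c^T x (equivalently, maximize x1 + 2*x2)
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])          # coefficient matrix A
b = np.array([4.0, 6.0])            # right-hand side b

# linprog minimizes c^T x subject to A_ub @ x <= b_ub, with x >= 0 by default.
res = linprog(c, A_ub=A, b_ub=b)
print(res.x, res.fun)               # optimal x (here [3, 1]) and optimal objective value (-5)
```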
What does the Fundamental Theorem of LP state?
The Fundamental Theorem of Linear Programming states that if an optimal solution exists, it will occur at an extreme point of the feasible region, which is the convex set defined by the constraints.
What is a convex set?
A set is convex if, for any two points within the set, the line segment connecting them lies entirely within the set.
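Formally, the same definition written out (standard notation, stated here in LaTeX):

```latex
% Standard definition: a set S is convex iff every convex combination of two of
% its points stays inside S.
\[
S \text{ is convex} \iff \forall x, y \in S,\ \forall \lambda \in [0,1]:\
\lambda x + (1-\lambda)\, y \in S .
\]
```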
What is the Simplex Algorithm?
The Simplex Algorithm is a method for solving linear programming problems by moving along the edges of the feasible region to find the optimal extreme point (vertex) that maximizes or minimizes the objective function.
What are slack variables in LP?
Slack variables are added to convert inequalities into equalities in a linear programming problem. They represent the difference between the left-hand side and the right-hand side of the inequality constraints.
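As a small illustration, the sketch below (made-up numbers, assuming NumPy) appends one slack variable per inequality, turning Ax ≤ b into [A | I][x; s] = b with s ≥ 0.

```python
# Minimal sketch: converting A x <= b into equalities [A | I] [x; s] = b
# by appending one slack variable per inequality. Numbers are illustrative.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

m, n = A.shape
A_eq = np.hstack([A, np.eye(m)])    # [A | I]: one slack column per constraint
# Now A_eq @ [x, s] = b with x >= 0 and s >= 0; each slack s_i = b_i - (A @ x)_i.
print(A_eq)
```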
What is the dual problem in LP?
The dual problem in LP is derived from the primal problem: if the primal is a minimization, the dual is a maximization (and vice versa), with constraints built from the primal's data. By strong duality, whenever the primal has an optimal solution, the optimal value of the dual equals the optimal value of the primal.
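A minimal sketch of this relationship, assuming SciPy and made-up numbers: the primal here is max c^T x subject to Ax ≤ b, x ≥ 0, its dual is min b^T y subject to A^T y ≥ c, y ≥ 0, and both should report the same optimal value.

```python
# Minimal sketch: primal  max c^T x  s.t. A x <= b, x >= 0
#            and its dual  min b^T y  s.t. A^T y >= c, y >= 0,
# checking that the two optimal values coincide (strong duality). Numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# Primal: linprog minimizes, so negate c to maximize.
primal = linprog(-c, A_ub=A, b_ub=b)
# Dual: A^T y >= c is rewritten as -A^T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c)

print(-primal.fun, dual.fun)   # both should print the same optimal value (5.0 here)
```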
What is an infeasible LP problem?
An LP problem is infeasible when no solution satisfies all of the constraints, meaning the feasible set is empty.
What is an unbounded LP problem?
An LP problem is unbounded if the objective function can be made arbitrarily large (or small in minimization problems) without violating any of the constraints.
What is computational complexity in the context of optimization?
Computational complexity refers to the difficulty of solving an optimization problem in terms of the time and resources required, often classified into categories such as P (polynomial time) and NP-hard (non-deterministic polynomial-time hard).
What does it mean for a solution to be sound in optimization problems?
A solution is sound if every solution found by the algorithm is a valid solution, meaning it satisfies all the constraints of the problem.
What does it mean for a solution to be complete in optimization problems?
A solution is complete if the algorithm can find a solution whenever one exists. In other words, it is guaranteed to find a solution if the problem is solvable.
Can an algorithm be sound but not complete?
Yes, an algorithm can be sound but not complete. This means it only finds valid solutions, but it might not always find a solution even if one exists.
Can an algorithm be complete but not sound?
No
If an algorithm were complete but not sound, it would report solutions whenever they exist but would also include incorrect ones. Its output would therefore be unreliable: it would not truly “find” solutions in a meaningful way, since algorithms are expected to return correct answers.
In summary, an algorithm can be:
- Sound but not complete (always correct but might miss some solutions),
- Both sound and complete (correctly finds all solutions), or
- Neither sound nor complete (may give incorrect answers and miss solutions).
Is it possible for a solution to be neither sound nor complete?
Yes, it is possible for an algorithm to be neither sound nor complete, meaning it might produce invalid solutions and fail to find all valid solutions when they exist.
LP: What does it mean when there are no negative values in the bottom row?
There are two cases when the c^T row has no negative values:
1. The origin is not in the feasible set, which happens when there are surplus variables that take negative values at the origin; or
2. all variables have non-negative values at the current basic solution.
In the first case we need to run phase 1 to get into the feasible set. In the second case we cannot improve the objective function value, and so we have found the optimum.
When solving LP problems, we typically (but not necessarily) select the pivot column by the largest negative value in the bottom row. Why?
We typically let the pivot column be the one with the largest (most negative) value in the c^T row, because increasing that variable changes the objective function value the most per unit, and this rule has a good chance of finding the optimum while examining only a few extreme points.
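A minimal sketch of that selection rule, under the assumption that the tableau is stored as a NumPy array whose last row is the c^T row (reduced costs) and whose last column is the right-hand side; the numbers are made up for illustration.

```python
# Minimal sketch of the pivot selection rule described above.
# Assumed layout: last row = reduced costs (c^T row), last column = right-hand side.
import numpy as np

tableau = np.array([
    [ 1.0,  1.0, 1.0, 0.0, 4.0],
    [ 1.0,  3.0, 0.0, 1.0, 6.0],
    [-1.0, -2.0, 0.0, 0.0, 0.0],   # reduced costs: negatives mean improvement is possible
])

costs = tableau[-1, :-1]
col = int(np.argmin(costs))                 # most negative reduced cost -> pivot column
if costs[col] >= 0:
    print("optimal: no negative reduced costs")
else:
    # Minimum ratio test picks the pivot row among rows with a positive pivot-column entry.
    column = tableau[:-1, col]
    rhs = tableau[:-1, -1]
    ratios = np.where(column > 0, rhs / np.where(column > 0, column, 1), np.inf)
    row = int(np.argmin(ratios))
    print(f"pivot on row {row}, column {col}")
```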
For the A* algorithm, we learned that the heuristic should be admissible and monotone, explain what those properties mean and why they are important
A heuristic is admissible if it never overestimates the true cost to reach the goal from any node in the search space. In other words, it always provides a cost estimate that is less than or equal to the actual lowest cost from the current node to the goal.
A heuristic is monotone (consistent) if h(a) <= Cost(a -> b) + h(b) for every node a and every successor b; compare the triangle inequality.
Consistency ensures that as A* progresses, the total estimated cost along any path will only increase or stay the same, preventing the need to revisit nodes and making A* more efficient.
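A minimal sketch that checks both properties on a tiny explicit graph; the graph, edge costs, and heuristic values below are made-up illustrations.

```python
# Minimal sketch: checking admissibility and consistency of a heuristic h
# on a small explicit graph. Graph, costs, and h values are made up for illustration.

# edge costs: graph[u] = list of (neighbour, cost); G is the goal
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("G", 5)],
    "C": [("G", 1)],
    "G": [],
}
h = {"A": 3, "B": 3, "C": 1, "G": 0}     # heuristic estimates to the goal G

# True cost-to-goal, computed here by brute force (fine for a tiny acyclic graph).
def true_cost(node, goal="G"):
    if node == goal:
        return 0
    return min((c + true_cost(v, goal) for v, c in graph[node]), default=float("inf"))

admissible = all(h[u] <= true_cost(u) for u in graph)                   # never overestimates
consistent = all(h[u] <= c + h[v] for u in graph for v, c in graph[u])  # triangle-style condition
print(admissible, consistent)            # expected: True True
```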
If the heuristic cost at the goal node is 0, then one of those properties implies the other, which one and why?
Monotonicity implies admissibility (assuming that h(goal)=0)
monotone -> admissible
not admissible -> not monotone (the contrapositive)
Note: h can be admissible without being monotone
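A short derivation sketch of the implication (telescoping consistency along a cheapest path to the goal):

```latex
% Let n = n_0, n_1, \dots, n_k = \mathrm{goal} be a cheapest path from n to the goal,
% with true cost h^*(n). Applying consistency at each step and telescoping:
\begin{align*}
h(n) &\le c(n_0, n_1) + h(n_1) \\
     &\le c(n_0, n_1) + c(n_1, n_2) + h(n_2) \\
     &\le \dots \le \sum_{i=0}^{k-1} c(n_i, n_{i+1}) + h(\mathrm{goal}) = h^*(n) + 0 .
\end{align*}
% Hence h never overestimates the true cost, i.e. h is admissible.
```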
A* is said to be optimal, what does that really mean?
Saying that A* is optimal means that no other algorithm using the same information (the same heuristic) can be guaranteed to expand fewer nodes than A* does while still finding an optimal solution.
Describe the P vs NP problem
NP represents the class of problems for which given a solution, we can in polynomial (with respect to the size of the input) time verify that this is indeed a correct solution.
P represents the class of problems for which we can compute a solution in polynomial time. We know that P is included in NP, but is P = NP?
This we do not know. No one expects the equality to hold (and if it did, our world would arguably be a bleaker place), but no one knows for sure.
There is something special with the set of problems that we call NP-complete. What is so special about them?
NP-complete problems are a special kind of NP problem that are also very hard (we call this NP-hard).
Here’s what’s interesting about them:
- All NP-complete problems are connected: You can convert one NP-complete problem into another quickly (in a reasonable amount of time). This means that, in a way, all NP-complete problems are different versions of the same tough problem.
- Why they matter for P vs. NP: Every NP problem can be turned into an NP-complete problem, but the reverse isn’t true. So, if we could find a fast (polynomial-time) way to solve even one NP-complete problem, we would suddenly be able to solve all NP problems quickly. That would prove that P = NP.
NP problems are ones where, if you have a solution, you can quickly check it’s correct — but finding that solution from scratch can be very hard.
What is the key difference between Mixed Integer Linear Programming (MILP) and Constraint Programming (CP)?
MILP focuses on optimizing a linear objective function with constraints that may include both continuous and integer variables, while CP focuses on satisfying constraints, typically used in combinatorial problems, without necessarily optimizing a linear objective function.
How can you convert a model from MILP to CP?
To convert from MILP to CP, you replace the linear objective function with constraints and remove the requirement for an objective function. You may need to reformulate constraints to fit into CP’s structure of variable domains and satisfaction of constraints rather than optimization.
In what situations is Mixed Integer Linear Programming (MILP) preferable over Constraint Programming (CP)?
MILP is preferable when the problem involves linear relationships and the goal is to optimize an objective function, especially in scenarios where precision is required, such as resource allocation, financial optimization, and logistics.
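A minimal MILP sketch, assuming SciPy >= 1.9 (which provides scipy.optimize.milp); the toy model below is made up: maximize 3*x1 + 2*x2 with x1 integer and x1 + x2 <= 4.5.

```python
# Minimal sketch: a tiny MILP solved with SciPy's milp (requires SciPy >= 1.9).
# Made-up toy model: maximize 3*x1 + 2*x2, x1 integer, subject to x1 + x2 <= 4.5, x >= 0.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-3.0, -2.0])                                  # milp minimizes, so negate to maximize
constraints = LinearConstraint(np.array([[1.0, 1.0]]), -np.inf, 4.5)
integrality = np.array([1, 0])                              # x1 integer, x2 continuous
bounds = Bounds(0, np.inf)                                  # x >= 0

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x, -res.fun)                                      # expected: [4. 0.5] 13.0
```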
In what situations is Constraint Programming (CP) preferable over Mixed Integer Linear Programming (MILP)?
CP is better suited for combinatorial problems, scheduling, and problems with complex constraints that are hard to express linearly, or where feasibility is the primary concern over optimization.
What is the A* algorithm, and where is it typically used?
The A* algorithm is a heuristic search algorithm used to find the shortest path in graph-based problems. It combines the actual cost from the start with an estimated cost to the goal, making it efficient for pathfinding and routing.
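A minimal A* sketch on an explicit graph, assuming the heuristic is given as a dictionary; the graph, edge costs, and heuristic values are made up for illustration.

```python
# Minimal A* sketch on an explicit graph with a given heuristic h.
# Graph, costs, and heuristic values are made up for illustration.
import heapq

def a_star(graph, h, start, goal):
    """graph[u] = list of (neighbour, edge_cost); h[u] = heuristic estimate to goal."""
    frontier = [(h[start], 0, start, [start])]      # entries are (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("G", 5)], "C": [("G", 1)], "G": []}
h = {"A": 3, "B": 3, "C": 1, "G": 0}
print(a_star(graph, h, "A", "G"))     # expected: (['A', 'B', 'C', 'G'], 4)
```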
What is Dijkstra’s algorithm, and how does it differ from A*?
Dijkstra’s algorithm is a graph search algorithm that finds the shortest path from a start node to all other nodes by expanding the least costly paths first. Unlike A*, it doesn’t use a heuristic and explores nodes based purely on path cost.
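Seen this way, Dijkstra behaves like A* with h ≡ 0. A self-contained sketch on the same made-up graph as above:

```python
# Minimal Dijkstra sketch: shortest distances from a start node to every reachable node.
# Same made-up graph as in the A* sketch above.
import heapq

def dijkstra(graph, start):
    dist = {start: 0}
    frontier = [(0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbour, cost in graph[node]:
            new_d = d + cost
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(frontier, (new_d, neighbour))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("G", 5)], "C": [("G", 1)], "G": []}
print(dijkstra(graph, "A"))     # expected: {'A': 0, 'B': 1, 'C': 3, 'G': 4}
```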
What are heuristic algorithms, and why are they important in optimization?
Heuristic algorithms are problem-solving methods that use practical approaches to finding good-enough solutions efficiently, especially when exact methods are computationally expensive. They are important for solving large, complex problems quickly.
What does it mean if a linear programming problem is infeasible?
A problem is infeasible if no satisfying assignment exists, meaning the feasible set defined by the constraints and variable domains is empty because the constraints conflict.
Give an example of an infeasible linear programming problem.
Objective: Maximize z = x + y
Subject to the constraints:
x + y ≤ 1
x + y ≥ 3
x ≥ 0
y ≥ 0
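Feeding this model to a solver confirms the infeasibility; a minimal sketch assuming SciPy, where linprog reports status 2 for an infeasible problem:

```python
# Minimal sketch: the example above reported as infeasible by SciPy's linprog.
from scipy.optimize import linprog

c = [-1.0, -1.0]                 # maximize x + y  ->  minimize -(x + y)
A_ub = [[ 1.0,  1.0],            #  x + y <= 1
        [-1.0, -1.0]]            #  x + y >= 3  rewritten as  -x - y <= -3
b_ub = [1.0, -3.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # x, y >= 0 by default
print(res.status, res.message)           # status 2: problem is infeasible
```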
What does it mean if a linear programming problem is unbounded?
A problem is unbounded when there is no bounded solution, meaning the objective function can grow arbitrarily large or small because the constraints are not tight enough to limit the solution space.
Give an example of an unbounded linear programming problem.
An example of an unbounded problem is:
maximize x1 + 3*x2
subject to x1+x2 >= 4,
x1, x2 >= 0
Here, there is no upper limit on the values x1 and x2 can take, so the objective can grow without bound.
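The same check can be done with a solver; a minimal sketch assuming SciPy, where linprog reports status 3 for an unbounded problem:

```python
# Minimal sketch: the example above reported as unbounded by SciPy's linprog.
from scipy.optimize import linprog

c = [-1.0, -3.0]                  # maximize x1 + 3*x2  ->  minimize -(x1 + 3*x2)
A_ub = [[-1.0, -1.0]]             # x1 + x2 >= 4  rewritten as  -x1 - x2 <= -4
b_ub = [-4.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # x1, x2 >= 0 by default
print(res.status, res.message)           # status 3: problem is unbounded
```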
What does it mean for a solution to be feasible in linear programming?
A solution is feasible if it satisfies all the constraints, meaning the solution lies within the feasible region defined by the constraints.
What is a unique feasible solution in linear programming?
A unique optimal solution occurs when exactly one feasible point attains the optimum; it appears at an extreme point where constraints intersect.
What is a non-unique feasible solution in linear programming?
A non-unique feasible solution occurs when there are infinitely many optimal solutions along a constraint boundary, typically when the objective function is parallel to one of the constraints.
Can a non-unique solution still include extreme points?
Yes, even in a non-unique solution, at least one solution will occur at an extreme point of the feasible region.
When is a feasible set convex?
A feasible set is convex if all the constraints of the optimization problem are linear: each linear constraint defines a half-space (or hyperplane), and the intersection of convex sets is convex.