Spring 2022/23 Paper (Long Questions) Flashcards
An algorithm A is a c-approximation algorithm for a minimization problem P. What does it mean in terms of the optimal function value S* of P and the solution S returned by the algorithm A?
This type of algorithm provides a solution that may not be optimal but is within a predictable bound of the optimal: for every instance, the solution S returned by A satisfies f(S) ≤ c · S*, where S* is the optimal objective value and c ≥ 1. Such algorithms are especially valuable for NP-Hard problems, where finding the exact optimal solution is computationally infeasible for large instances. The value c quantifies the worst case: how far from the optimal the algorithm's solution might be.
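A classic example is the maximal-matching algorithm for minimum vertex cover, which is a 2-approximation: the edges it picks are pairwise disjoint, and any cover (including an optimal one) must contain at least one endpoint of each picked edge. A minimal Python sketch (the edge-list representation is chosen just for illustration):

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover:
    greedily pick an uncovered edge and add BOTH endpoints.
    The picked edges form a matching, and any cover needs at
    least one endpoint per matched edge, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of this edge
    return cover

# Example: the path 1-2-3-4; the optimal cover {2, 3} has size 2,
# so the returned cover of size 4 respects the factor-2 bound.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```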
Explain what is meant by NP-Completeness. Is Linear Programming NP-Hard? Is Integer-Linear Programming NP-Hard?
NP-Completeness: A problem is considered NP-complete if it satisfies two conditions:
1) It is in NP (a proposed solution can be verified in polynomial time)
2) It is NP-Hard (every problem in NP reduces to it in polynomial time, i.e., it is at least as hard as the hardest problems in NP)
Is Linear Programming NP-Hard?
No: Linear Programming is not NP-Hard (assuming P ≠ NP). LP can be solved in polynomial time by algorithms such as the ellipsoid method and interior-point methods. The simplex method, while very fast in practice, has exponential worst-case running time, so it is not what establishes the polynomial-time bound.
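In practice an LP is simply handed to a solver. A minimal sketch using SciPy's linprog (the specific objective and constraints are made up for illustration):

```python
from scipy.optimize import linprog

# maximize 2x1 + 3x2  subject to  x1 + 2x2 <= 8,  3x1 + 2x2 <= 12,  x >= 0
# linprog minimizes, so the objective is negated.
res = linprog(c=[-2, -3],
              A_ub=[[1, 2], [3, 2]],
              b_ub=[8, 12],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)  # optimal vertex [2, 3], objective value 13
```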
Is Integer-Linear Programming NP-Hard?
Yes: Integer-Linear Programming is NP-Hard because it includes, as special cases, combinatorial problems that are known to be NP-complete, such as the 0/1 knapsack problem and the travelling salesman problem. The integrality constraints are what make ILP significantly harder to solve than LP.
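For example, 0/1 knapsack is exactly the ILP max Σᵢ vᵢxᵢ subject to Σᵢ wᵢxᵢ ≤ W, xᵢ ∈ {0, 1}. A brute-force sketch over the integer feasible set (exponential time, consistent with NP-hardness; the function name and instance are illustrative):

```python
from itertools import product

def knapsack_ilp_bruteforce(values, weights, capacity):
    """0/1 knapsack stated as an ILP:
      max  sum(v[i] * x[i])
      s.t. sum(w[i] * x[i]) <= capacity,  x[i] in {0, 1}.
    Brute force over all 2^n integer points -- exponential time."""
    best_value, best_x = 0, None
    for x in product((0, 1), repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
            value = sum(v * xi for v, xi in zip(values, x))
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x

print(knapsack_ilp_bruteforce([60, 100, 120], [10, 20, 30], 50))  # (220, (0, 1, 1))
```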
State the definition for an optimization problem to be convex. Moreover, give a simple example for both (a) a convex optimization problem, and (b) a non-convex optimization problem.
An optimization problem is said to be convex if:
> The objective function is a convex function
> The feasible region is a convex set
This means that for the objective function f, for any two points x and y in the domain, and for any θ with 0 ≤ θ ≤ 1:
f(θx + (1-θ)y) ≤ θf(x) + (1-θ)f(y)
For the feasible region, it implies that for any two points in the region, the line segment connecting them also lies entirely within the region.
A Convex Optimization Problem:
Minimizing f(x) = x², subject to x ≥ 0
A Non-Convex Optimization Problem:
Minimizing f(x) = x³, subject to -1 ≤ x ≤ 1
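A quick numerical check of the defining inequality separates the two examples; passing it cannot prove convexity, but a single violation disproves it (the sample grid and function name are arbitrary choices):

```python
def is_convex_on_samples(f, xs, thetas):
    """Check f(θx + (1-θ)y) <= θf(x) + (1-θ)f(y) on sample pairs."""
    for x in xs:
        for y in xs:
            for t in thetas:
                lhs = f(t * x + (1 - t) * y)
                rhs = t * f(x) + (1 - t) * f(y)
                if lhs > rhs + 1e-12:
                    return False
    return True

xs = [i / 10 - 1 for i in range(21)]      # grid on [-1, 1]
thetas = [0.1, 0.25, 0.5, 0.75, 0.9]
print(is_convex_on_samples(lambda x: x**2, xs, thetas))  # True: x^2 is convex
print(is_convex_on_samples(lambda x: x**3, xs, thetas))  # False: e.g. x=-1, y=0, θ=0.5
```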
How does the local search algorithm for optimization work? Is simplex a local search algorithm? If so, what neighborhood function is used in the simplex algorithm?
Local Search Algorithm:
A Local Search Algorithm starts with an initial solution and iteratively moves to a neighboring solution that improves the objective value, until no more improvements can be made.
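A generic sketch of this loop for a minimization problem (the neighbors and objective callables are problem-specific placeholders):

```python
def local_search(initial, neighbors, objective):
    """Generic local search for minimization: repeatedly move to any
    improving neighbor; stop at a local optimum (no neighbor is better)."""
    current = initial
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            if objective(candidate) < objective(current):
                current = candidate   # accept the improving move
                improved = True
                break                 # restart from the new solution
    return current

# Toy usage: minimize f(x) = (x - 3)^2 over the integers, stepping by ±1.
result = local_search(0,
                      neighbors=lambda x: [x - 1, x + 1],
                      objective=lambda x: (x - 3) ** 2)
print(result)  # 3
```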
Is Simplex a Local Search Algorithm:
Yes, the simplex algorithm can be viewed as a local search algorithm: it starts from an initial basic feasible solution (a vertex of the polytope formed by the constraints) and repeatedly moves along edges to an improving adjacent vertex until no improving move remains.
Neighborhood Function in Simplex:
The neighborhood of a basic feasible solution is the set of adjacent vertices of the polytope, i.e., the basic feasible solutions reachable by a single pivot (swapping one basic variable with one non-basic variable).
Conclusion:
Unlike local search on general problems, simplex is guaranteed to find the global optimum: linear programs are convex, so a vertex with no improving neighbor is not just locally but globally optimal.
Does the simplex algorithm always terminate? Is it possible for the simplex algorithm to get stuck in a cycle if there are many degenerate solutions with equal objective function value?
Does the Simplex Algorithm Always Terminate?
Generally, yes: the simplex algorithm is designed to terminate. It iteratively moves from one vertex (corner point) of the feasible region to an adjacent one that improves the objective function, aiming to find the optimal solution; since the polytope has only finitely many vertices, it terminates as long as no basis is ever revisited.
However, cycling is possible: under degeneracy several different bases represent the same vertex with equal objective value, and a careless pivoting rule can revisit the same sequence of bases forever. Practical implementations therefore incorporate anti-cycling strategies such as Bland's rule (always choose the eligible variable with the smallest index) or lexicographic pivoting, ensuring that the algorithm terminates at an optimal solution or correctly declares the problem unbounded or infeasible.
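A minimal tableau-simplex sketch using Bland's smallest-index rule as the anti-cycling safeguard (assumes the standard form max c^T x s.t. Ax ≤ b, x ≥ 0 with b ≥ 0; the function name and tolerances are illustrative):

```python
import numpy as np

def simplex_bland(c, A, b):
    """Tableau simplex for max c^T x s.t. Ax <= b, x >= 0 (b >= 0),
    with Bland's smallest-index rule, which provably prevents cycling."""
    m, n = A.shape
    # Tableau: [A | I | b] plus an objective row; slacks form the first basis.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[m, :n] = -c                       # reduced costs for maximization
    basis = list(range(n, n + m))
    while True:
        # Entering variable: SMALLEST index with a negative reduced cost.
        enter = next((j for j in range(n + m) if T[m, j] < -1e-9), None)
        if enter is None:
            break                       # optimal: no improving pivot exists
        rows = [i for i in range(m) if T[i, enter] > 1e-9]
        if not rows:
            raise ValueError("LP is unbounded")
        # Ratio test; ties broken by smallest basic-variable index.
        leave = min(rows, key=lambda i: (T[i, -1] / T[i, enter], basis[i]))
        T[leave] /= T[leave, enter]     # pivot to the adjacent vertex
        for i in range(m + 1):
            if i != leave:
                T[i] = T[i] - T[i, enter] * T[leave]
        basis[leave] = enter
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], T[m, -1]

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6
x, value = simplex_bland(np.array([3.0, 2.0]),
                         np.array([[1.0, 1.0], [1.0, 3.0]]),
                         np.array([4.0, 6.0]))
print(x, value)  # expected: x = [4, 0], optimal value 12
```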
Describe ADAM for solving optimization problems. Why is it a popular optimization algorithm in many machine learning tasks?
ADAM (Adaptive Moment Estimation) is a first-order, gradient-based optimization algorithm. It maintains exponential moving averages of the gradient (the first moment) and of the squared gradient (the second moment), corrects both for their initialization bias, and uses them to compute an individual, adaptive step size for every parameter.
ADAM is popular due to its robustness and effectiveness across a wide range of machine learning problems, particularly in training deep neural networks, where the loss landscape can be highly non-convex and gradient magnitudes can vary dramatically. This robustness, coupled with the ease of configuration and efficiency, makes ADAM a go-to optimizer in many deep learning frameworks and applications.
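A minimal sketch of the Adam update rule (following Kingma & Ba, 2015; the hyperparameter defaults are the ones from the paper, and the quadratic test objective is invented for illustration):

```python
import numpy as np

def adam(grad, x0, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Adam: per-parameter step sizes from bias-corrected exponential
    moving averages of the gradient (m) and squared gradient (v)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)   # first moment (mean of gradients)
    v = np.zeros_like(x)   # second moment (mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)   # bias correction for the zero init
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy usage: minimize f(x) = ||x - [1, -2]||^2, whose gradient is 2(x - [1, -2]).
target = np.array([1.0, -2.0])
print(adam(lambda x: 2 * (x - target), x0=[0.0, 0.0], lr=0.1, steps=2000))
```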