Algorithms Flashcards
Fundamentals of Algorithm Analysis
The analysis of algorithms involves evaluating algorithms based on their efficiency and performance. Here are some key fundamentals:
Time Complexity: Measures the amount of time an algorithm takes to run as a function of the input size. Common notations include O (Big O), Ω (Big Omega), and Θ (Big Theta).
Space Complexity: Measures the amount of memory space an algorithm requires to run as a function of the input size. It includes both auxiliary space (temporary space) and space used by the input.
Worst-case, Best-case, and Average-case Complexity: Algorithms can have different performance for different inputs. The worst-case complexity is the maximum time or space required for any input of size n, while the best-case complexity is the minimum. Average-case complexity is the expected cost over all inputs of size n, usually assuming some distribution over inputs (often uniform).
Asymptotic Analysis: Focuses on how the runtime or space requirements of an algorithm grow with the size of the input. It helps identify the most significant factors affecting performance.
Recurrence Relations: Often used to analyze recursive algorithms, recurrence relations describe the runtime of a function in terms of its own values for smaller inputs.
Amortized Analysis: Used for algorithms where some operations are more expensive than others but are balanced over time, leading to an average cost per operation that is less than the worst-case scenario.
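Amortized analysis can be made concrete with the classic example of a dynamic array that doubles its capacity when full. The sketch below (a toy class, not Python's built-in list, though CPython lists use a similar strategy) counts the element copies caused by resizing: any single append that triggers a resize is expensive, but the total copying over n appends stays below 2n, so the amortized cost per append is O(1).

```python
class DynamicArray:
    """Toy growable array that doubles capacity when full,
    used to illustrate amortized analysis."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]
        self.copies = 0  # total element copies performed during resizes

    def append(self, value):
        if self.size == self.capacity:
            # Resize: allocate double the capacity and copy every element.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = value
        self.size += 1


arr = DynamicArray()
n = 1000
for i in range(n):
    arr.append(i)

# Resizes copy 1 + 2 + 4 + ... + 512 = 1023 elements in total,
# which is below 2n = 2000: O(1) amortized per append.
print(arr.copies)  # → 1023
```

The design choice matters: growing by a constant amount instead of doubling would make the total copying quadratic, raising the amortized cost per append to O(n).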
Understanding these fundamentals helps in designing and choosing algorithms that are efficient and scalable for different problem sizes.
Linear Search
Linear search, also known as sequential search, is a simple algorithm that searches for a target value within a list or array by checking each element one by one until a match is found or the whole list has been traversed. Here’s a detailed explanation:
Algorithm:
Start from the first element of the list.
Compare the target value with each element of the list sequentially.
If the target value matches an element, return the index of that element.
If the end of the list is reached without finding a match, return a special value (e.g., -1) to indicate that the target value is not in the list.
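The steps above can be sketched in Python (a minimal version; the function name and the -1 sentinel follow the description):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for index, element in enumerate(items):
        if element == target:
            return index  # match found: report its position
    return -1  # end of list reached without a match


# Searching for 7 in [4, 2, 7, 1, 9, 5] finds it at index 2.
print(linear_search([4, 2, 7, 1, 9, 5], 7))  # → 2
print(linear_search([4, 2, 7, 1, 9, 5], 8))  # → -1
```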
Complexity Analysis:
Time Complexity: In the worst-case scenario, the algorithm may need to iterate through all n elements of the list to find the target value, resulting in a time complexity of O(n), where n is the number of elements in the list.
Space Complexity: Linear search has a space complexity of O(1) because it requires only a constant amount of extra space for variables regardless of the size of the input list.
Example:
Let’s say we have a list [4, 2, 7, 1, 9, 5] and we want to search for the value 7.
Start from the first element (4), compare it with 7 (not a match).
Move to the next element (2), compare it with 7 (not a match).
Continue this process until reaching 7 (a match) at index 2.
Return the index 2 as the result.
Advantages and Disadvantages:
Advantages: Simple to implement, works on unsorted lists, and requires minimal memory.
Disadvantages: Inefficient for large lists, especially when compared to more efficient algorithms like binary search (which requires a sorted list).
Use Cases:
Linear search is suitable for small lists, or for data that is unsorted or changes frequently, where the cost of keeping the list sorted for binary search would outweigh the savings.
It is often used as a subroutine in more complex algorithms or when the simplicity of the algorithm outweighs its inefficiency for a particular problem.
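For contrast with the binary search mentioned above, here is a minimal sketch (assuming the input list is already sorted in ascending order):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Requires ascending order; each step halves the search interval,
    giving O(log n) time versus linear search's O(n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1


print(binary_search([1, 2, 4, 5, 7, 9], 7))  # → 4
```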
Runtime of an Algorithm
The runtime of an algorithm refers to the amount of time it takes to complete its execution as a function of the input size. It is a measure of the algorithm’s efficiency and performance. Runtime can be influenced by factors such as the algorithm’s design, the size of the input, the speed of the computer’s processor, and any external factors affecting the algorithm’s execution (e.g., network latency).
In algorithm analysis, runtime is often expressed using Big O notation (O(n)), which describes the upper bound on the growth rate of the algorithm’s runtime relative to the size of the input. Other notations, such as Big Omega (Ω(n)) and Big Theta (Θ(n)), can also be used to describe different aspects of the algorithm’s performance.
Big O, Omega, Theta
Big O Notation (O-notation): Big O notation is used to describe the upper bound of the growth rate of a function. It provides an asymptotic upper limit for the function, often used to express the worst-case time or space complexity of an algorithm. For example, if an algorithm has a time complexity of O(n^2), its runtime will not exceed a quadratic function of the input size n, up to a constant factor and for sufficiently large n.
Big Omega Notation (Ω-notation): Big Omega notation is used to describe the lower bound of the growth rate of a function. It provides an asymptotic lower limit: the function grows at least this fast, up to a constant factor, for sufficiently large inputs. For example, if an algorithm has a time complexity of Ω(n), its runtime grows at least linearly with the input size n. Note that Ω is a bound on a function, not a synonym for best-case analysis: worst-case, best-case, and average-case runtimes can each be given their own Ω bound.
Big Theta Notation (Θ-notation): Big Theta notation describes both the upper and lower bounds of the growth rate of a function. It provides a tight bound: the function is both O and Ω of the same rate, so its growth is pinned down exactly up to constant factors. For example, if an algorithm has a time complexity of Θ(n), its runtime grows linearly with the input size n, neither asymptotically faster nor slower. (Θ gives a tight bound on a function; it is not a synonym for average-case analysis.)
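As a worked illustration (the function and constants here are chosen for the example), f(n) = 3n^2 + 2n + 1 is Θ(n^2), since constants c_1 = 3, c_2 = 6, and n_0 = 1 witness both bounds:

```latex
% f(n) = 3n^2 + 2n + 1 is \Theta(n^2): take c_1 = 3, c_2 = 6, n_0 = 1.
\begin{align*}
  \underbrace{3n^2}_{c_1 n^2}
  \;\le\; 3n^2 + 2n + 1
  \;\le\; 3n^2 + 2n^2 + n^2
  = \underbrace{6n^2}_{c_2 n^2}
  \quad \text{for all } n \ge 1,
\end{align*}
% hence c_1 n^2 \le f(n) \le c_2 n^2 for n \ge n_0,
% i.e. f(n) = O(n^2) and f(n) = \Omega(n^2), so f(n) = \Theta(n^2).
```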