BigO Flashcards
What are the four most common operations performed on data structures? These are also the four criteria used by BigO notation to score efficiency.
- Accessing an element
- Searching for an element
- Inserting an element
- Deleting an element
BigO allows us to create a ‘report card’ based on the four criteria. This allows us to know what a certain data structure is…
- Good at,
- Bad at, and
- When it should be used.
What equation is used by BigO to calculate a data structure’s score in each of the four areas?
The Time Complexity equation.
How does a Time Complexity equation work?
By taking the size of the data set as an integer (n), and returning the number of operations the computer must conduct before the function can finish.
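A Time Complexity equation can be sketched as an ordinary function that takes n and returns an operation count. A minimal illustration (the function names here are made up for this example):

```python
import math

# Worst-case operation counts for two common search strategies.
def linear_search_ops(n):
    # Linear search may check every element: f(n) = n
    return n

def binary_search_ops(n):
    # Binary search halves the set each step: f(n) = log2(n), rounded up
    return math.ceil(math.log2(n)) if n > 1 else 1

print(linear_search_ops(1_000_000))   # 1000000 operations
print(binary_search_ops(1_000_000))   # 20 operations
```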
What is the N value in BigO?
N is the size of the data set. (The number of elements contained within the data structure.)
When judging data structures, which scenario do we always use…
The worst case scenario. I.e. The longest time it could possibly take. E.g. The item we are searching for in an array is in the last possible location we attempt to check.
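The worst-case search described above can be sketched like this (an illustrative example that counts comparisons alongside the result):

```python
def linear_search(items, target):
    """Return (index, operations), where operations counts comparisons made."""
    ops = 0
    for i, value in enumerate(items):
        ops += 1
        if value == target:
            return i, ops
    return -1, ops

data = [3, 8, 1, 9, 4]
# Worst case: the target sits in the last slot, so all n elements are checked.
print(linear_search(data, 4))   # (4, 5): found at index 4 after 5 operations
```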
Why is it called BigO notation?
Because the function is represented by a big ‘O’ followed by a set of parentheses. E.g. O()
How do we pronounce the following…
O(2),
O(5)
O of 2
O of 5
O being pronounced “oh” like the letter, and NOT zero.
What does O(7) mean?
It takes 7 operations of the computer before our function can finish.
O(4) is an example of Constant Time. What does constant time mean?
No matter the size of our data set, it will always take the same amount of time to run. These aren’t very common, though, as most of the time our integer (n) is going to have some effect on how many operations it takes.
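A simple illustration of constant time: indexing into a Python list takes one operation whether the list holds ten elements or ten million (a minimal sketch):

```python
small = list(range(10))
large = list(range(10_000_000))

# Indexing by position is O(1): the size of the list does not matter.
print(small[5])            # 5
print(large[5_000_000])    # 5000000
```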
Why does BigO measure efficiency in terms of number of operations and not time?
Because measuring by time is biased towards better hardware. Number of operations levels the playing field.
BigO isn’t the only input you should consider when deciding which data structure to implement in your program … Why?
Many data structures have specific quirks or features which separate them from the rest and may provide additional functionality that make them extremely useful.
What are the 7 most common Time Complexity equations, from most efficient to least efficient? Also which are considered Good and which bad?
- O(1): GOOD
- O(log n): GOOD
- O(n): GOOD
- O(n log n): BAD
- O(n^2) (n squared): BAD
- O(2^n) (2 to the power of n): BAD
- O(n!) (n factorial): VERY BAD
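A quick way to feel these growth rates is to evaluate each one at the same n. A small illustrative script (the counts show the trend, not real measurements):

```python
import math

n = 10
complexities = {
    "O(1)":       1,
    "O(log n)":   math.log2(n),
    "O(n)":       n,
    "O(n log n)": n * math.log2(n),
    "O(n^2)":     n ** 2,
    "O(2^n)":     2 ** n,
    "O(n!)":      math.factorial(n),
}
for name, ops in complexities.items():
    print(f"{name:<10} ~{ops:,.0f} operations")
# Even at n = 10, O(n!) already needs over 3.6 million operations.
```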
What is the absolute best “score” a data structure can achieve?
O(1)
This means no matter what the size of the data set is, the task will be completed in a single operation.
When is O(log n) most efficient?
As the data set grows. The number of operations grows far more slowly than n, so the larger n gets, the better O(log n) looks relative to O(n).
Think binary search: for an n of 10, the first operation cuts the set in half, ruling out 5 elements; for an n of 1,000,000, the first operation rules out 500,000 elements!
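The halving above can be sketched as a minimal binary search (assumes the input is sorted):

```python
def binary_search(sorted_items, target):
    """Return the index of target, or -1 if absent.

    Each step rules out half of the remaining elements, so a sorted
    list of 1,000,000 elements needs at most ~20 steps.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = list(range(1, 1_000_001))      # 1,000,000 sorted elements
print(binary_search(data, 777_777))   # 777776
```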