5: Complexity Flashcards
What is an oracle?
A black box attached to a TM that answers (decides) a fixed, possibly difficult, problem in a single call.
What is a computation resource? List 5.
An asset consumed by a computation, used to measure how demanding the computation is. Five examples:
time; space; energy; the number of comparisons; the number of calls to an oracle.
How do you measure time and space complexity in TMs?
Time = the number of transitions made by the TM.
Space = the number of tape cells touched by the TM.
What is an instance of a problem?
An input word that falls within (the language of) a problem.
Consider a positive non-decreasing function f : N → N, a language L (over some alphabet A), and n the length of the input word.
True/false: DTIME(f(n)) ⊋ {L | L is decided by some (deterministic) TM in at most f(n) transitions}
False. DTIME(f(n)) = {L | L is decided by some (deterministic) TM in at most f(n) transitions}
What is a complexity class?
A set of computational problems (languages), grouped by the time or space complexity required to solve them.
How do you determine the complexity class(es) of languages?
Consider how the number of transitions and tape cells (or stack size) needed grows asymptotically with the input length for machines deciding the language. You can do this for deterministic and/or nondeterministic machines, and determine whether the language can be decided in constant, logarithmic, linear, polynomial, exponential, etc., time or space.
What does ∈^? mean?
A ∈^? B asks whether A is an element of the set B (membership posed as a question).
What does the big O notation mean?
- If f(n) = O(g(n)), then g is at least as big as f
- f is at most a constant times g
- ∃c > 0, n0. ∀n > n0. f(n) < c·g(n)
What does the little o notation mean?
- If f(n) = o(g(n)), then f is infinitely smaller than g
- lim_{n→∞} f(n)/g(n) = 0
- ∀c > 0. ∃n0. ∀n > n0. f(n) < c·g(n)
What does the big omega notation mean?
- If f(n) = Ω(g(n)), then f is at least as big as g
- f is at least a constant times g
- g = O(f(n))
What does the little omega notation mean?
- If f(n) = ω(g(n)), then g is infinitely smaller than f
- lim_{n→∞} g(n)/f(n) = 0
- g = o(f(n))
What does the big theta notation mean?
- If f(n) = Θ(g(n)), then f and g are the same size
- f and g are within a constant factor of each other
- f = O(g(n)) and f = Ω(g(n))
Consider a positive non-decreasing function f : N → N, a language L (over some alphabet A), and n the length of the input word.
True/false: DSPACE(f(n)) ⊋ {L | L is decided by some (deterministic) TM in at most f(n) tape cells}
False. DSPACE(f(n)) = {L | L is decided by some (deterministic) TM in at most f(n) tape cells}
Are input tape cells (or whole tapes dedicated to holding input in multi-tape TMs) counted as part of space complexity? Why?
No. Space complexity counts only the working cells needed to solve the problem; the cells that merely hold the input are not counted. (Otherwise sublinear classes such as DSPACE(log n) could not exist.)
When is g(n) O(f(n))?
There exist constants c and n0 > 0 such that for all n > n0, g(n) ≤ c·f(n). Intuitively, g(n) grows no faster than f(n) for large n. This is an asymptotic upper bound.
When is g(n) o(f(n))?
For every c > 0 there exists an n0 > 0 such that for all n > n0, g(n) ≤ c·f(n). Intuitively, g(n) grows strictly slower than f(n) for large n. This is a strict asymptotic upper bound.
When is g(n) Ω(f(n))?
There exist constants c and n0 > 0 such that for all n > n0, g(n) ≥ c·f(n). Intuitively, g(n) grows no slower than f(n) for large n. This is an asymptotic lower bound.
When is g(n) ω(f(n))?
For every c > 0 there exists an n0 > 0 such that for all n > n0, g(n) ≥ c·f(n). Intuitively, g(n) grows strictly faster than f(n) for large n. This is a strict asymptotic lower bound.
When is g(n) Θ(f(n))?
There exist constants c1, c2, and n0 > 0 such that for all n > n0, c1·f(n) ≤ g(n) ≤ c2·f(n). Intuitively, g(n) grows like f(n) for large n. This is an asymptotically tight bound.
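A rough numerical illustration of these bounds (the functions f, g and the witnesses c1, c2, n0 below are made up for the example, and checking finitely many n is of course not a proof):

```python
# Hand-picked example: g(n) = 3n^2 + n is Theta(f(n)) for f(n) = n^2,
# witnessed by c1 = 3, c2 = 4, n0 = 1.
f = lambda n: n ** 2
g = lambda n: 3 * n ** 2 + n

c1, c2, n0 = 3, 4, 1
for n in range(n0 + 1, 10_000):
    assert c1 * f(n) <= g(n) <= c2 * f(n)  # c1*f(n) <= g(n) <= c2*f(n)
print("witnesses for g = Theta(f) verified up to n = 9999")
```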
What is the Time Speedup Lemma?
Let L ∈ DTIME(f(n)). Then, for any c > 0,
L ∈ DTIME(f′(n)) where f′(n) = c·f(n) + O(n).
By increasing the number of states and the alphabet size (by a constant factor depending on c), the new TM can do the work of k transitions of the original TM in a single transition; the O(n) term is the time needed to re-encode the input into the larger alphabet. See the sketch below.
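A toy sketch of the speedup idea (the transition table below is a made-up stand-in; a real TM buys the k-fold composition by enlarging its state set and alphabet):

```python
# Precompute the k-fold composition of a finite transition table, so ONE
# lookup in the new table does the work of k lookups in the old one.

def compose_table(table, k):
    """Return the transition table iterated k times."""
    new = {}
    for s in table:
        t = s
        for _ in range(k):
            t = table[t]
        new[s] = t
    return new

table = {0: 1, 1: 2, 2: 3, 3: 0}   # toy 4-state cycle
table4 = compose_table(table, 4)
assert table4[1] == 1               # one new transition = four old ones
```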
What is the Tape Compression Lemma?
Let L ∈ DSPACE(f(n)). Then, for any c > 0,
L ∈ DSPACE(f′(n)) where f′(n) = c·f(n) + O(1).
By increasing the number of states and the alphabet size (by a constant factor depending on c), the new TM can pack k cells of the original TM into each of its own cells. See the sketch below.
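A toy sketch of tape compression, with Python tuples standing in for the enlarged alphabet:

```python
# Pack k adjacent tape cells into a single symbol of a larger alphabet
# (here a tuple), cutting the number of cells used by roughly a factor of k.

def compress(tape, k):
    """Group a list of tape symbols into blocks of k."""
    return [tuple(tape[i:i + k]) for i in range(0, len(tape), k)]

tape = list("abbabaab")
packed = compress(tape, 4)          # 2 cells instead of 8
assert len(packed) == 2 and packed[0] == ("a", "b", "b", "a")
```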
Let f be a positive non-decreasing function f : N → N,
let L be a language (over some alphabet A),
and let n be the length of the input.
True/false: NTIME(f(n)) = {L | there is some nondeterministic TM M such that for any word w ∈ L, any computation path takes at most f(n) transitions}
True.
Let f be a positive non-decreasing function f : N → N,
let L be a language (over some alphabet A),
and let n be the length of the input.
True/false: NSPACE(f(n)) = {L | there is some nondeterministic TM M such that for any word w ∈ L, any computation path uses at most f(n) additional tape cells}
True.
Let f be a positive non-decreasing function f : N → N,
let L be a language (over some alphabet A),
and let n be the length of the input.
True/false: NSPACE(f(n)) = {L | there is some certificate c such that L is decided by a deterministic TM which takes c as additional input, using at most f(n) additional tape cells}
True.
Let f be a positive non-decreasing function f : N → N,
let L be a language (over some alphabet A),
and let n be the length of the input.
True/false: NTIME(f(n)) = {L | there is some certificate c such that L is decided by a deterministic TM which takes c as additional input, using at most f(n) transitions}
True.
How can you define DTIME?
Consider a positive non-decreasing function f : N → N, a language L (over some alphabet A), and n the length of the input word.
DTIME(f(n)) = {L | L is decided by some (deterministic) TM in at most f(n) transitions}
How can you define DSPACE?
Consider a positive non-decreasing function f : N → N, a language L (over some alphabet A), and n the length of the input word.
DSPACE(f(n)) = {L | L is decided by some (deterministic) TM in at most f(n) tape cells}
How can you define NTIME?
Let f be a positive non-decreasing function f : N → N,
let L be a language (over some alphabet A),
and let n be the length of the input.
NTIME(f(n)) = {L | there is some certificate c such that L is decided by a deterministic TM which takes c as additional input, using at most f(n) transitions}
AND
NTIME(f(n)) = {L | there is some nondeterministic TM M such that for any word w ∈ L, any computation path takes at most f(n) transitions}
How can you define NSPACE?
Let f be a positive non-decreasing function f : N → N,
let L be a language (over some alphabet A),
and let n be the length of the input.
NSPACE(f(n)) = {L | there is some nondeterministic TM M such that for any word w ∈ L, any computation path uses at most f(n) additional tape cells}
AND
NSPACE(f(n)) = {L | there is some certificate c such that L is decided by a deterministic TM which takes c as additional input, using at most f(n) additional tape cells}
Within which time and space complexities can you copy an input from an input tape to a working tape in a multi-tape TM? How could you improve this for the language L = { 0^(2^k) | k ≥ 0 }?
Copying across would take DTIME(2n·log2(n)) and DSPACE(n).
You could instead keep a binary counter on the working tape, incrementing it every time you read a new input symbol. Since the language is a number of 0s equal to a power of 2, after reading them all the counter will be a 1 followed only by 0s. This improves the space complexity to DSPACE(log2(n)), and the time complexity stays within O(n·log2(n)) (each of the n increments touches at most log2(n) counter bits, and amortised only O(1) of them). See the sketch below.
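A minimal sketch of the counter approach (the function name is ours, and Python's int stands in for the O(log n)-cell binary counter tape):

```python
# Decide { 0^(2^k) : k >= 0 } by counting the 0s in binary instead of
# copying the input to a working tape.

def is_power_of_two_zeros(word):
    counter = 0
    for symbol in word:
        if symbol != "0":
            return False
        counter += 1                # one increment per input symbol read
    # a power of two in binary is a single 1 followed only by 0s
    return counter > 0 and bin(counter).count("1") == 1

assert is_power_of_two_zeros("0" * 8)       # 8 = 2^3: accept
assert not is_power_of_two_zeros("0" * 6)   # 6 is not a power of 2: reject
```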
When finding complexity classes with nondeterministic computation, do we take the worst, best, or average computation path, in terms of complexity?
Use the worst-case paths.
What is the Time Hierarchy Theorem?
The idea that you can increase the computational power of a system by increasing its time budget.
For a time-constructible function g(n),
let f(n)·log(f(n)) = o(g(n)).
Then DTIME(g(n)) \ DTIME(f(n)) ≠ ∅
So there are problems that can be solved in g(n) time but cannot be solved in f(n) time, provided g(n) is time-constructible and grows sufficiently faster than f(n).
What does it mean for a function to be time-constructible?
A function f(n) is called time-constructible if there is a TM which, given the string 1^n as input, stops with f(n) on its tape (in binary) after f(n) transitions (time). If a function is o(n) then it cannot be time-constructible, since the machine cannot even read its whole input in that time.
What is diagonalisation, and Cantorβs diagonalisation method?
Diagonalisation is when you take a matrix (or an enumerated set) and build a new row whose i-th element differs from the i-th element of the i-th row (for binary entries, take the complement of the bit).
Cantor applied diagonalisation to show that the set of all infinite binary sequences is uncountable. Assume it is countable; then its elements can be enumerated as s0, s1, s2, …. The row built by diagonalisation differs from every si in position i, so it appears nowhere in the enumeration, contradicting the assumption. Hence the set is uncountable. See the sketch below.
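A finite sketch of the diagonal construction:

```python
# Build a row whose i-th bit differs from the i-th bit of the i-th row,
# so it cannot equal any listed row.

def diagonalise(rows):
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonalise(rows)                         # [1, 0, 1, 0]
assert all(d[i] != row[i] for i, row in enumerate(rows))
assert d not in rows                          # the diagonal row is new
```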
What is the proof for the Time Hierarchy Theorem?
Proof sketch (diagonalisation): using the time-constructibility of g, build a TM D that, on input ⟨M⟩, simulates M on ⟨M⟩ for g(n) of its own steps and then answers the opposite of M. By construction L(D) ∈ DTIME(O(g(n))). If some TM M decided L(D) in f(n) time, the simulation would finish within g(n) steps (simulating f(n) steps costs O(f(n)·log f(n)) = o(g(n)), which is where the log factor comes from), so D would disagree with M on input ⟨M⟩, a contradiction.
What is the Space Hierarchy Theorem?
Any increase in asymptotic space increases a system's computational power.
For a space-constructible function g(n), let f(n) = o(g(n)).
Then DSPACE(g(n)) \ DSPACE(f(n)) ≠ ∅
So there are problems that can be solved in g(n) space but cannot be solved in f(n) space, provided g(n) is space-constructible and f(n) = o(g(n)).
What does it mean for a function to be space-constructible?
A function f(n) is called space-constructible if there is a TM which, given the string 1^n as input, stops with f(n) on its tape (in binary) while using f(n) tape cells (space).
What is the proof for the Space Hierarchy Theorem?
Proof sketch (diagonalisation): using the space-constructibility of g, mark off g(n) tape cells. On input ⟨M⟩, simulate M on ⟨M⟩ within the marked space, rejecting if the simulation tries to leave it or runs longer than c^g(n) steps (to catch looping), and answer the opposite of M. The resulting language is in DSPACE(O(g(n))) = DSPACE(g(n)) by tape compression, but it cannot be in DSPACE(f(n)): any f(n)-space machine fits inside the marked space for large n, so the diagonal machine disagrees with it on its own encoding.
What are the consequences of the Time and Space Hierarchy Theorems?
- DTIME(n ^ 2) is strictly more powerful than DTIME(n)
- DTIME(n ^ 1042) is strictly more powerful than DTIME(n ^ 1041)
- If d > c >= 1, DTIME(n ^ d) is strictly more powerful than DTIME(n ^ c).
- DTIME(n ^ π) is strictly more powerful than DTIME(n ^ e) since π > e.
- DSPACE(n * log(n)) is strictly more powerful than DSPACE(n).
- DTIME(2 ^ n) is strictly more powerful than DTIME(n ^ 3)
- Much more time (or space) means more problems solvable
How are DSPACE and NSPACE related?
DSPACE(f(n)) ⊆ NSPACE(f(n))
How are DTIME and NTIME related?
DTIME(f(n)) ⊆ NTIME(f(n))
How do you mathematically define deterministic polynomial time complexities?
P = ∪_{k≥1} DTIME(n^k)
So P is the union, over all k ≥ 1, of the languages decidable in deterministic time n^k.
How do you mathematically define deterministic exponential time complexities?
EXP = ∪_{k≥1} DTIME(2^(n^k))
So time bounded by 2 raised to a polynomial in the input length.
Why and how are the polynomial and exponential time complexity classes different?
In polynomial time, the input length n is raised to a constant k; in exponential time, the bound is 2 raised to n^k, so the exponent itself grows with the input.
How do you mathematically define deterministic logarithmic space complexities?
L = DSPACE(log n)
What is NPSPACE?
Nondeterministic polynomial space: NPSPACE = ∪_{k≥1} NSPACE(n^k).
How can you mathematically relate NPSPACE and DSPACE?
By Savitch's theorem, NPSPACE = PSPACE = ∪_{k≥1} DSPACE(n^k): squaring a polynomial space bound gives another polynomial.
True/false: P ⊆ EXP.
True.
Prove that P ⊆ EXP.
For every k and all n ≥ 1, n^k ≤ 2^(n^k), so DTIME(n^k) ⊆ DTIME(2^(n^k)). Taking the union over k ≥ 1 gives P ⊆ EXP. (The inclusion is in fact strict, by a diagonalisation as in the Time Hierarchy Theorem.)
How are the larger complexity classes related?
P ⊆ EXP (strict, by the Time Hierarchy Theorem)
NP ⊆ NEXP (strict, by the Nondeterministic Time Hierarchy Theorem)
L ⊆ PSPACE ⊆ EXPSPACE (strict, by the Space Hierarchy Theorem)
P ⊆ NP
PSPACE ⊆ NPSPACE
EXP ⊆ NEXP
What is the Non-deterministic Time Hierarchy Theorem?
Let f(n+1)·log f(n+1) = o(g(n)).
Then NTIME(g(n)) \ NTIME(f(n)) ≠ ∅
What is the Non-deterministic Space Hierarchy Theorem?
Let f(n) = o(g(n)).
Then NSPACE(g(n)) \ NSPACE(f(n)) ≠ ∅
How can you relate time and space complexities?
NTIME(f(n)) ⊆ DSPACE(f(n))
AND
NSPACE(f(n)) ⊆ ∪_c DTIME(c^(f(n)))
Prove NTIME(f(n)) ⊆ DSPACE(f(n))
Proof sketch: deterministically try every sequence of nondeterministic choices, one at a time. Each computation path makes at most f(n) transitions, so simulating it touches at most f(n) cells, and the current choice sequence itself fits in O(f(n)) cells; the simulation space is erased and reused between paths. Accept iff some path accepts: O(f(n)) space in total. See the sketch below.
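A sketch of this simulation (step and accepting are hypothetical stand-ins for a machine's transition function and accepting test):

```python
# Enumerate choice sequences of length t one at a time, rerunning the
# machine and reusing the same space for every path.

from itertools import product

def nondet_accepts(start, step, accepting, t, branching=2):
    for choices in product(range(branching), repeat=t):  # O(t) space each
        config = start
        for c in choices:                                # <= t steps per path
            config = step(config, c)
        if accepting(config):
            return True                                  # some path accepts
    return False

# toy machine: a configuration is an int; accept iff 3 is reachable in 4 steps
assert nondet_accepts(0, lambda cfg, c: cfg + c, lambda cfg: cfg == 3, t=4)
```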
How can you relate space and time, determinism and non-determinism?
DTIME(f(n)) ⊆ NTIME(f(n)) ⊆ DSPACE(f(n))
P ⊆ NP ⊆ PSPACE
AND
DTIME(f(n)) ⊆ NTIME(f(n)) ⊆ DSPACE(f(n))
⊆ NSPACE(f(n)) ⊆ DTIME(2^(O(f(n))))
P ⊆ NP ⊆ PSPACE ⊆ NPSPACE ⊆ EXP
Prove NSPACE(f(n)) ⊆ ∪_c DTIME(c^(f(n)))
Proof sketch: a machine using at most f(n) cells has at most |Q|·f(n)·|Γ|^(f(n)) ≤ c^(f(n)) distinct configurations, for some constant c (assuming f(n) ≥ log n). Acceptance is then reachability in the configuration graph, decidable by breadth-first search in time polynomial in the number of configurations, i.e. at most c′^(f(n)) for a slightly larger constant c′. See the sketch below.
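A sketch of the reachability check (successors is a hypothetical stand-in for the machine's nondeterministic transition relation):

```python
# Acceptance as reachability in the configuration graph, by BFS; each
# configuration is visited at most once.

from collections import deque

def reachable_accepting(start, successors, accepting):
    seen, queue = {start}, deque([start])
    while queue:
        cfg = queue.popleft()
        if accepting(cfg):
            return True
        for nxt in successors(cfg):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# toy graph on configurations 0..7
succ = lambda cfg: [(cfg + 1) % 8, (cfg * 2) % 8]
assert reachable_accepting(0, succ, lambda cfg: cfg == 5)
```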
How can you relate Non-deterministic vs Deterministic Space?
Savitch's theorem: let f(n) ≥ log n. Then
NSPACE(f(n)) ⊆ DSPACE((f(n))^2)
Prove NSPACE(f(n)) ⊆ DSPACE((f(n))^2)
Proof sketch (Savitch): define REACH(c1, c2, t) = "configuration c2 is reachable from c1 in at most t steps", computed by trying every middle configuration cm and recursively checking REACH(c1, cm, t/2) and REACH(cm, c2, t/2). Starting with t = c^(f(n)) (the number of configurations), the recursion depth is log t = O(f(n)) and each level stores one configuration of size O(f(n)), giving O((f(n))^2) space overall. See the sketch below.
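A sketch of the recursive reachability procedure (configs and one_step are hypothetical stand-ins for the configuration set and one-step relation):

```python
# Savitch-style reachability: try every middle configuration and recurse on
# both halves; recursion depth is log2(t).

def reach(c1, c2, t, configs, one_step):
    if t <= 1:
        return c1 == c2 or one_step(c1, c2)
    return any(
        reach(c1, mid, t // 2, configs, one_step)
        and reach(mid, c2, t - t // 2, configs, one_step)
        for mid in configs          # try every possible middle configuration
    )

configs = range(8)
one_step = lambda a, b: b == (a + 1) % 8    # toy successor relation
assert reach(0, 5, 8, configs, one_step)    # 0 -> ... -> 5 in <= 8 steps
```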
How can you relate Nondeterministic vs Deterministic Time?
NP ⊆ EXP
P ⊆ NP
Can you get additional power by adding a constant amount of time and/or space?
No.
What happens when adding a βlargeβ amount of time and/or space?
Adding a βlargeβ amount of time and/or space gives a strict hierarchy of complexity classes.
Can we relate complexity classes (time vs space)?
Yes, some.
True/false: All regular languages are in DSPACE(1).
True.
What is the significance of all regular languages being in DSPACE(1)?
You don't need any tape cells beyond those holding the input word: every regular language is recognised by a DFA, whose only memory is its finite state, so a TM can simulate the DFA in its finite control with a single left-to-right sweep of the input. See the sketch below.
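A sketch of such a simulation, with a made-up DFA for "an even number of 1s":

```python
# Simulate a DFA with one left-to-right pass and zero working cells:
# the only state kept is the automaton's current state.

def dfa_accepts(word, delta, start, finals):
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in finals

delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
assert dfa_accepts("1001", delta, "even", {"even"})      # two 1s: accept
assert not dfa_accepts("10", delta, "even", {"even"})    # one 1: reject
```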