4.4 Theory of Computation Flashcards
Procedural abstraction
Procedural abstraction represents a computational method.
o The result of abstracting away the actual values used in any particular computation is a
computational pattern or computational method - a procedure
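A minimal Python sketch of the idea (the procedure name is illustrative): the specific values in 3*3 + 4*4 are abstracted away, leaving the computational pattern as a procedure with parameters.

```python
# Specific computations such as 3*3 + 4*4 or 5*5 + 6*6 share a pattern.
# Abstracting away the actual values leaves that pattern as a procedure.
def sum_of_squares(a, b):
    return a * a + b * b

print(sum_of_squares(3, 4))  # 25
print(sum_of_squares(5, 6))  # 61
```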
Abstraction by generalisation (or categorisation)
Abstraction by generalisation (or categorisation) is a grouping by common characteristics to arrive at a hierarchical relationship of the ‘is a kind of’ type.
Functional abstraction
Functional abstraction - the particular computation method is hidden.
o The result of a procedural abstraction is a procedure, not a function. To get a function requires yet
another abstraction, which disregards the particular computation method. This is functional
abstraction.
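A sketch of the distinction (both function names are illustrative): the two procedures below compute the same function, the sum 1 + 2 + ... + n, and functional abstraction means the caller does not care which computation method is used.

```python
# Two different procedures (computation methods) for the same function.
def sum_to_n_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_formula(n):
    return n * (n + 1) // 2

# The function is the input/output behaviour; the method is hidden detail.
print(sum_to_n_loop(10), sum_to_n_formula(10))  # 55 55
```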
Data abstraction
Data abstraction - details of how data are actually represented are hidden, allowing new kinds of data
objects to be constructed from previously defined types of data objects
o Data abstraction is a methodology that enables us to isolate how a compound data object is used
from the details of how it is constructed. For example, a stack could be implemented as an array
and a pointer for top of stack.
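One possible sketch of that stack example in Python (the names and fixed capacity are illustrative): callers only use push/pop/peek and never see the array-plus-top-pointer representation.

```python
class Stack:
    def __init__(self, capacity=100):
        self._items = [None] * capacity  # the "array"
        self._top = -1                   # pointer to top of stack

    def push(self, value):
        self._top += 1
        self._items[self._top] = value

    def pop(self):
        value = self._items[self._top]
        self._top -= 1
        return value

    def peek(self):
        return self._items[self._top]

s = Stack()
s.push("a")
s.push("b")
print(s.pop())   # b
print(s.peek())  # a
```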
Problem abstraction/reduction
Problem abstraction/reduction - details are removed until the problem is represented in a way that is
possible to solve, because the problem reduces to one that has already been solved.
Representational abstraction
Representational abstraction is a representation
arrived at by removing unnecessary details
Decomposition
Decomposition - procedural decomposition means breaking a problem into a number of sub-problems, so
that each sub-problem accomplishes an identifiable task, which might itself be further subdivided.
Composition
Composition - build a composition abstraction by combining procedures to form compound procedures.
o build data abstractions by combining data objects to form compound data, for example tree data
structure
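A small illustrative sketch: distance is a compound procedure built by combining square and math.sqrt, and the tuple-based tree is compound data built from simpler data objects.

```python
import math

# Composition of procedures: square and sqrt combined into distance.
def square(x):
    return x * x

def distance(x1, y1, x2, y2):
    return math.sqrt(square(x2 - x1) + square(y2 - y1))

# Composition of data: a binary tree node built from a value and two
# (possibly empty) sub-trees.
tree = ("root", ("left", None, None), ("right", None, None))

print(distance(0, 0, 3, 4))  # 5.0
```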
Finite State Machine vs Turing Machine
The difference between a Finite State Machine and a Turing Machine, is that a Turing Machine has a memory in
the form of a tape. A Finite State Machine cannot store data.
Finite State Machine
Finite state machines are used to design computer programs and sequential logic circuits. An FSM is not a real
machine but a design tool that shows how a ‘machine’ could react to external events.
A Finite state machine can only be in any one of a finite number of states at any given moment. An FSM changes
between states when there is a trigger condition or some input.
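A minimal sketch of an FSM as a transition table (the turnstile states and inputs are illustrative): the pair (current state, input) determines the next state.

```python
# Transition table: (current state, input) -> next state.
transitions = {
    ("LOCKED",   "coin"): "UNLOCKED",
    ("LOCKED",   "push"): "LOCKED",
    ("UNLOCKED", "push"): "LOCKED",
    ("UNLOCKED", "coin"): "UNLOCKED",
}

state = "LOCKED"                        # start state
for event in ["push", "coin", "push"]:  # external events / inputs
    state = transitions[(state, event)]
    print(event, "->", state)
```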
Mealy Machine
A Mealy machine is simply an FSM with an output. The outputs are determined by both the FSM’s current state and
the current input. As with a standard FSM, there can only be one possible transition for each given state and input.
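A sketch of a Mealy machine (this particular change-detector machine is illustrative): each (state, input) pair gives both the next state and an output.

```python
# (state, input) -> (next state, output); the output depends on both.
# This machine outputs 1 whenever the input bit differs from the last bit seen.
table = {
    ("SEEN0", "0"): ("SEEN0", "0"),
    ("SEEN0", "1"): ("SEEN1", "1"),
    ("SEEN1", "0"): ("SEEN0", "1"),
    ("SEEN1", "1"): ("SEEN1", "0"),
}

state = "SEEN0"          # start state
for bit in "001101":
    state, out = table[(state, bit)]
    print(bit, "->", out)
```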
A set/set notation
A set is an unordered collection of values or symbols in which each value or symbol occurs only once. Repeated
values are not allowed. A set is defined by listing each member of the set between curly braces, for example A = {1, 2, 3}.
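Python sets behave the same way, which makes a quick check easy:

```python
# Repeated values collapse: a set holds each value only once, unordered.
A = {3, 1, 2, 3, 2}
print(A)        # {1, 2, 3} (printed order may vary)
print(len(A))   # 3 - the cardinality of the set
print(2 in A)   # True - membership test
```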
A finite set
A finite set is a set whose elements can be counted off by natural numbers up to a particular number.
Cardinality
The number of elements in a finite set.
Infinite Set
A set which has no end value and is represented by a “…” at the end (or sometimes the beginning) of the set. Infinite sets cannot be counted off up to a certain number.
Countable set
A countable set can be counted off against a subset of the natural numbers. This includes the finite sets, but also countably infinite sets.
Countably infinite set
A countably infinite set is a set where it is possible to count the elements off against the set of natural numbers.
Cartesian product of sets
The Cartesian product of two sets A and B is the set of all ordered pairs (a, b) where a is a member of set A and b is a member
of set B.
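For example, with A = {1, 2} and B = {x, y}, A × B = {(1, x), (1, y), (2, x), (2, y)}. A quick check with Python's itertools:

```python
from itertools import product

A = {1, 2}
B = {"x", "y"}

# Every ordered pair (a, b) with a from A and b from B.
print(set(product(A, B)))
# four pairs (printed order may vary): (1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')
```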
Subset and Proper subset
The symbol for Subset is ⊆ and the symbol for a proper subset is ⊂
If every member of set A is also a member of set B then we can say that “A is a subset of B”.
We would write this as A ⊆ B
If set A is a subset of, but not equal to set B, then we can say that “A is a proper subset of B”.
We would write this as A ⊂ B
Intersection in sets
The symbol for an intersection is ∩. The intersection of two sets contains all of the members
which both sets have in common.
Union in sets
The symbol for a union is ∪. Taking the union ‘adds’ two sets together: the result is a new set which
contains everything that is in either A or B (or both).
Difference in sets
The symbol for difference is \ (although a − symbol is sometimes used). The difference of two sets is
everything which is in the first set but not in the second set.
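The four set operations above map directly onto Python's set operators, which gives a compact way to check small examples:

```python
A = {1, 2, 3}
B = {1, 2, 3, 4, 5}

print(A <= B)   # True - subset         (A ⊆ B)
print(A < B)    # True - proper subset  (A ⊂ B)
print(A & B)    # {1, 2, 3}       - intersection (A ∩ B)
print(A | B)    # {1, 2, 3, 4, 5} - union        (A ∪ B)
print(B - A)    # {4, 5}          - difference   (B \ A)
```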
Regular expression
A regular expression is simply a way of describing a set; regular expressions allow particular types of language to be
described in a convenient shorthand notation.
Why use a regular expression?
* Used in software for searching files or commands
* Also used for filtering text, firewalls, finding virus definitions, etc…
| - OR (alternation)
? - 0 or 1
* - 0 or more of the preceding elements
+ - 1 or more of the preceding elements
( ) - Grouping (and controlling the order in which the other operators apply)
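A few of these operators in Python's re module (the patterns are illustrative):

```python
import re

print(re.fullmatch(r"colou?r", "color") is not None)    # True - ? means 0 or 1
print(re.fullmatch(r"colou?r", "colour") is not None)   # True
print(re.fullmatch(r"(ab)+c*", "ababcc") is not None)   # True - grouping, + and *
print(re.fullmatch(r"cat|dog", "dog") is not None)      # True - | means OR
```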
Regular language
A language is called regular if it can be represented by a regular expression. A regular language is any language that a
Finite State Machine (FSM) will accept
Backus-Naur Form (BNF)
BNF is a formal mathematical way of defining syntax totally unambiguously.
BNF consists of
* A set of terminal symbols
* A set of non-terminal symbols
* A set of production rules
e.g. <DIGIT> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Why can BNF work in some places Regular expressions can’t?
Regular expressions can be used to help us describe a simple language by specifying patterns of strings which match
conditions. Many aspects of languages can be defined using regular expressions, but this can be lengthy (lots of regex
rules) and time consuming. There are also some aspects of syntax which cannot be defined using regular
expressions at all; a classic example is nested brackets.
BNF statements allow us to define tricky syntax such as nested brackets, which may depend on a recursive
definition; regular expressions are unable to do this.
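For instance, a recursive BNF rule such as <BRACKETS> ::= ( <BRACKETS> ) | <EMPTY> handles arbitrarily deep nesting. A minimal (illustrative) checker that follows that rule directly:

```python
# Follows the recursive rule <BRACKETS> ::= ( <BRACKETS> ) | <EMPTY>
def is_nested(s):
    if s == "":                                   # <EMPTY>
        return True
    if s.startswith("(") and s.endswith(")"):     # ( <BRACKETS> )
        return is_nested(s[1:-1])
    return False

print(is_nested("((()))"))   # True
print(is_nested("(()"))      # False
```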
Comparing algorithms
Algorithms can be compared by expressing their complexity as a function relative to the size of the problem. The size
of the problem is the key issue.
Some algorithms are more efficient time-wise or space-wise than other algorithms. Efficiently implementing
automated abstractions means designing data models and algorithms to run quickly while taking up the minimal
amount of resources such as memory.
The ultimate goal should always be to design algorithms which will
1. Run as quickly as possible (Time)
2. Take up the least amount of memory as possible (Space)
Time complexity – how much time (i.e. CPU cycles) is needed for the algorithm to solve a given problem
Space complexity – the amount of resources (e.g. memory) required to solve a given problem.
When comparing algorithms to determine which is better, the size of the problem should always be considered – the
algorithms should be working on the same number of items of data (n).
Order of complexity
O(1) Constant time
* Determining if a number is even or odd.
* Accessing a specific index in an array.
O(log2(n)) Logarithmic time
* Binary Search
O(n) Linear time
* Linear Search
O(n^2) Polynomial (quadratic) time
* Bubble Sort
O(2^n) Exponential time
* Brute force of all possible combinations
* Fibonacci Recursively
* Algorithms with O(2^n) complexity are often recursive.
Deriving the time complexity of an algorithm is a case of adding up how many operations are executed as a function
of the size of the input, always taking the worst-case situation (see the sketch below).
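An illustrative comparison: counting the worst-case operations of a linear search against a nested-loop duplicate check.

```python
def linear_search(items, target):
    for item in items:               # at most n iterations -> O(n)
        if item == target:
            return True
    return False

def has_duplicate(items):
    n = len(items)
    for i in range(n):               # n iterations
        for j in range(i + 1, n):    # up to n - 1 comparisons each
            if items[i] == items[j]:
                return True
    return False                     # ~ n*(n-1)/2 comparisons -> O(n^2)
```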
Tractable vs Intractable problems
- Tractable - problems that have a polynomial (or less) time solution are called tractable problems.
- Intractable - problems that have no polynomial (or less) time solution are called intractable problems.
- Heuristic methods are often used when tackling intractable problems (this means taking an ‘educated guess’!).
Some problems simply cannot be solved algorithmically.
The halting problem
The halting problem is the unsolvable problem of determining whether any program will eventually stop when given
a particular input. This is significant because the halting problem demonstrates that there are some problems that
cannot be solved by a computer.
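An informal sketch of the standard argument (the halts decider here is hypothetical and cannot actually be written): if a general halts(program, data) decider existed, we could build a program that contradicts it.

```python
# 'halts' is a hypothetical decider: halts(program, data) -> True/False.
# No such function can exist; this sketch shows why.
def make_paradox(halts):
    def paradox(program):
        if halts(program, program):   # if the decider says "halts"...
            while True:               # ...loop forever
                pass
        return                        # ...otherwise halt immediately
    return paradox

# Asking halts(paradox, paradox) has no consistent answer:
# "True" makes paradox loop forever, "False" makes it halt.
```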
Turing machines
A Turing machine provides a formal model of computation and a definition of what is computable.
A Turing machine can be viewed as a computer with a single fixed program, expressed using:
* A finite set of states in a state transition diagram
* A finite alphabet of symbols
* An infinite tape with marked-off squares
* A sensing read-write head that can travel along the tape, one square at a time.
One of the states is called the start state, and states that have no outgoing transitions are called halting
(accept/stop) states.
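A minimal simulator sketch (the flip-the-bits machine and the "_" blank symbol are illustrative): the transition rules map (state, symbol read) to (symbol to write, head move, next state), and a state with no outgoing rules ("halt" here) stops the machine.

```python
# This machine flips every bit on the tape, then halts at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

def run(tape, state="start"):
    tape = list(tape) + ["_"]          # a finite slice of the infinite tape
    head = 0                           # read-write head position
    while state != "halt":
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

print(run("10110"))   # 01001
```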
Universal Turing machines
A universal Turing machine is a Turing machine that can execute any other Turing machine: a description of that
machine and its input are supplied on the tape and the universal machine simulates it. This is similar to a modern
computer, which can run a computer program (an operating system) which can in turn run other computer programs
(user and system software).