Lecture 2 - Fundamentals of algorithms and problem solving Flashcards

1
Q

Process of designing algorithms:

A
  • Understanding the problem
  • Ascertaining the capabilities of the device
  • Choosing between exact and approximate problem solving
  • Deciding on an appropriate language and data structure
  • Implementing an initial solution
  • Improving on your initial solution (generating more solutions)
2
Q

Factors that need to be taken into consideration when thinking of a solution:

A

Very important factors:
• How often are you going to need the algorithm?
• What is the typical problem size you are trying to solve?

Less important factors:
• What language are you going to use to implement your algorithm?
• What is the cost-benefit of an efficient algorithm?

3
Q

How do you analyze an algorithm?

A

By investigating an algorithm’s efficiency with respect to two resources:
• Running time
• Memory space

4
Q

Why is it not always possible to study an algorithm’s efficiency in terms of its input size?

A

There are algorithms whose input requires more than one parameter (e.g. graph algorithms, where both the number of vertices and the number of edges matter)

The input size may not be as well-defined as one would wish (e.g. matrix multiplication)

5
Q

What are the problems of using a standard unit for time measurement?

A

We’d face serious drawbacks:
• Dependence on the speed of a particular computer
• Dependence on the quality of the program implementing the algorithm
• Dependence on the quality of the compiler used to generate the executable
• Difficulty of clocking the time precisely

6
Q

How is time efficiency measured in an algorithm?

A

The standard approach is to identify the basic operation(s) and count how many times it is executed.
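An illustrative sketch (my own Python, not from the lecture) of counting the basic operation — here the key comparison — while searching a list:

```python
def linear_search_count(a, key):
    """Return (index, comparison_count); the comparison is the basic operation."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1          # count one execution of the basic operation
        if x == key:
            return i, comparisons
    return -1, comparisons

# Searching for the last element of a 5-item list costs 5 comparisons.
print(linear_search_count([3, 1, 4, 1, 5], 5))  # (4, 5)
```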

7
Q

What are the basic operations for matrix multiplication?

A

multiplications and additions

8
Q

What is the basic operation for sorting?

A

comparisons

9
Q

What is the established framework for counting operations?

A

Count the number of times the algorithm’s basic operation is executed for inputs of size n (where n is clearly defined)

10
Q

Machine-independent algorithm design depends on what hypothetical computer?

A

The Random Access Machine or RAM

11
Q

What are the characteristics of the RAM?

A
  • Each simple operation (+, *, -, /, =, memory access) takes 1 time step;
  • The running time of the algorithm is the total count of these operations;
  • Loops and subroutines are the compositions of many single-step operations;
  • The model takes no notice of whether an item is in cache or on the disk.
12
Q

The Algorithm Design Manual (Steven Skiena): RAM advantages

A
  • Under the RAM model, we measure the run time of an algorithm by counting up the number of steps it takes on a given problem instance.
  • By assuming that our RAM executes a given number of steps per second, the operation count converts easily to the actual run time.
13
Q

RAM Disadvantages:

A
  • A common complaint is that it is too simple, that these assumptions make the conclusions and analysis too coarse to believe in practice.
  • Multiplying two numbers takes more time than adding two numbers on most processors, which violates the first assumption of the model.
  • Memory access times differ greatly depending on whether data sits in cache or on the disk, thus violating the third assumption.
14
Q

What type of functions are 2^n and n!?

A

Exponential. Solutions with these running times are only practical for very small input sizes. Problems having solutions in this class are called “intractable”.

15
Q

Common growth functions:

A
  • ~1: Constant Time.
  • ~log n: Logarithmic.
  • ~n: Linear.
  • ~n log n: Linearithmic (sometimes loosely called polylogarithmic).
  • n^2: Quadratic.
  • n^3: Cubic.
  • n^k, where k is a constant: Polynomial.
  • 2^n: Exponential.
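To see why the ordering of this list matters, a quick sketch (my own Python; n = 64 is an arbitrary choice) tabulating each function:

```python
import math

n = 64
growth = {
    "1": 1,
    "log n": math.log2(n),
    "n": n,
    "n log n": n * math.log2(n),
    "n^2": n ** 2,
    "n^3": n ** 3,
    "2^n": 2 ** n,
}
for name, value in growth.items():
    print(f"{name:>8}: {value:,.0f}")
```

Even at n = 64 the exponential entry is astronomically larger than the cubic one.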
16
Q

Explain a constant time growth function:

A

Normally the amount of time that an instruction takes under the RAM model. It does not depend on the input size.

17
Q

Explain a Logarithmic growth function:

A

It occurs in algorithms that transform a bigger problem into a smaller version whose input size is a fraction of the original problem. Common in searching and in some tree algorithms.
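A sketch of a classic logarithmic-time algorithm — binary search, which halves the remaining interval on every probe (illustrative Python; names are my own):

```python
def binary_search(a, key):
    """Search a sorted list; each probe halves the interval: O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1
```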

18
Q

Explain a Linear growth function:

A

Algorithms that are forced to pass through all elements of the input (of size n) a constant number of times yield linear running time.
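For example, finding the maximum — a single pass over the input, hence linear time (an illustrative Python sketch, not from the slides):

```python
def find_max(a):
    """One pass over all n elements: n - 1 comparisons, O(n) time."""
    best = a[0]
    for x in a[1:]:
        if x > best:
            best = x
    return best
```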

19
Q

Explain an Exponential growth function:

A

Unfortunately quite a few known solutions to practical problems are in this category. This is as bad as testing all possible answers to a problem. When algorithms fall into this category, algorithm designers go in search of approximation algorithms.
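A sketch of what “testing all possible answers” looks like in practice — brute-force subset sum, which enumerates all 2^n subsets (illustrative Python; the function name is my own):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Try every subset of nums: 2^n candidates, exponential running time."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)    # first subset found that hits the target
    return None
```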

20
Q

Explain a Quadratic growth function:

A

A subset of the polynomial solutions. Quadratic solutions are still acceptable and relatively efficient for small- to medium-scale problem sizes. Typical of algorithms that have to analyse all pairs of elements of the input.
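A typical all-pairs pattern, sketched in Python (my own example): duplicate detection by comparing every pair of elements:

```python
def has_duplicate(a):
    """Examines every pair (i, j): about n^2 / 2 comparisons, O(n^2)."""
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):   # each unordered pair exactly once
            if a[i] == a[j]:
                return True
    return False
```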

21
Q

Explain a Cubic growth function:

A

Not very efficient but still polynomial. A classical example of an algorithm in this class is matrix multiplication.
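The classical triple-loop matrix multiplication, as an illustrative Python sketch (square matrices assumed for brevity):

```python
def mat_mul(a, b):
    """Classical multiplication of two n x n matrices:
    n^3 multiplications and n^2 * (n - 1) additions -> O(n^3)."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]   # one multiply + one add
    return c
```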

22
Q

What algorithm is this?

Algorithm(A[0..n], K)
    A[n] = K                  // sentinel: guarantees the loop terminates
    i = 0
    while (A[i] != K) do
        i = i + 1
    if (i < n)
        return i
    else
        return -1
A

Sequential search algorithm.
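The pseudocode above can be sketched in Python (an illustrative translation; copying the list before appending the sentinel is my own detail, to avoid mutating the caller's input):

```python
def sequential_search(a, key):
    """Sentinel sequential search: appending the key means the while
    loop always stops without an explicit bounds check."""
    a = a + [key]             # sentinel copy; plays the role of A[n] = K
    i = 0
    while a[i] != key:
        i += 1
    return i if i < len(a) - 1 else -1
```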

23
Q

The worst-case efficiency of an algorithm is its?

A

efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size
[Levitin 2003]

The worst case is the input configuration that forces the algorithm to perform the most operations. E.g. for a list of size 1, the worst case and best case are the same.

24
Q

What is Best Case efficiency?

A

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or
inputs) of size n for which the algorithm runs the fastest among all the inputs of that size [Levitin 2003]

Best configuration of a problem of size n that will give me the best performance.

25
Q

Why is Best Case useful?

A

One can take advantage of algorithms that run really fast in the best case if the inputs to which the algorithm will be applied are close to the best-case input:
• For instance, insertion sort of a list of size n performs in Cbest(n) when the list is already sorted.
• If the list is close to being completely sorted, the best-case performance does not degenerate much.
• This means that insertion sort may be a good option for lists that are known to be nearly sorted.
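A sketch (my own Python, instrumented with a comparison counter) showing that insertion sort performs only n − 1 comparisons on an already-sorted list:

```python
def insertion_sort(a):
    """Return (sorted copy, comparison count). On sorted input each
    element needs exactly one comparison, so Cbest(n) = n - 1."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1      # the basic operation
            if a[j] <= key:
                break             # best case: stops immediately
            a[j + 1] = a[j]       # shift the larger element right
            j -= 1
        a[j + 1] = key
    return a, comparisons
```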

26
Q

What is the Average case?

A

The running time for a “typical” or random input of size n.