study Flashcards

1
Q

What is an algorithm?

A

A description of a finite sequence of basic steps for achieving a specified result

2
Q

f(n) = O(g(n))

A

Big-Oh notation: f(n) = O(g(n)) means that, for large enough n, f(n) is bounded above by a constant multiple of g(n). It is typically used to describe the worst-case time or space an algorithm takes as a function of the input size.

3
Q

Function growth rates

A

1, log n, √n, n, n log n, n², n³, 2ⁿ, 3ⁿ, etc.

4
Q

Common things to look for when assessing growth rate

A

A simple loop from 0 to n (with no internal loops) constitutes O(n) complexity, i.e. linear complexity. A nested loop of the same kind constitutes O(n²) complexity, i.e. quadratic complexity. A loop in which the controlling parameter is divided by two at each step (and which terminates when it reaches 1) gives O(log n) (logarithmic complexity). The divide and conquer paradigm (later!), which breaks the problem into two instances of size n/2 that must be combined in linear time, gives O(n log n).
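
A minimal Java sketch of these three loop shapes (the method names are mine, just for illustration):

// O(n): a single loop over n items
static long linear(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) count++;
    return count;
}

// O(n²): a nested loop of the same kind
static long quadratic(int n) {
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) count++;
    return count;
}

// O(log n): the controlling parameter is halved at each step
static long logarithmic(int n) {
    long count = 0;
    for (int i = n; i > 1; i /= 2) count++;
    return count;
}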

5
Q

Recursion

A

To say that a method uses recursion is to say that it is defined in terms of itself, and that it calls itself until it hits a base case. One benefit of recursion is that it is a simpler and more elegant way of implementing naturally recursive data structures such as trees. However, a disadvantage is that it is less efficient than the iterative approach in terms of memory and runtime, as the same method is on the call stack multiple times.
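
A minimal sketch of a recursive method in Java, assuming the usual factorial example rather than anything specific from the course:

// Hypothetical example: factorial defined in terms of itself.
static long factorial(int n) {
    if (n <= 1) return 1;        // base case: stops the recursion
    return n * factorial(n - 1); // recursive case: calls itself on a smaller input
}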

6
Q

Array Insertion

A

Done by copying all elements, from the index position you want to insert at onwards, one place to the right, and then writing the value into the now-empty index. In the worst case, if the index is at the beginning of the array, you are doing n operations, so this is O(n).
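
A sketch of the shifting idea, assuming the array has spare capacity and count tracks the number of used slots (both names are mine):

// Insert 'value' at 'index' in an array with 'count' used slots (count < a.length assumed).
static void insert(int[] a, int count, int index, int value) {
    for (int i = count; i > index; i--) {
        a[i] = a[i - 1];   // shift elements one place to the right
    }
    a[index] = value;      // place the new value in the freed slot
}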

7
Q

Linear Search

A

When you know nothing about the order in which the values are stored, you have to use linear search, in which you inspect the actual values from index 0 to length − 1, and if one matches the value you’re looking for you return its index. In the worst case (the value is not found) you search through every element, so it is O(n).
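
A sketch of linear search over an int array (illustrative only):

// Returns the index of 'target' in 'a', or -1 if it is not present: O(n) in the worst case.
static int linearSearch(int[] a, int target) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == target) return i;
    }
    return -1;
}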

8
Q

Binary Search

A

Check the midpoint of a sorted range, which either finds the value or gives us a new range to look through that is half the size. You repeat that recursively. The complexity here is O(log n).
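
An iterative sketch of binary search over a sorted int array (illustrative only):

// Binary search over a sorted array: O(log n).
static int binarySearch(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;      // check the midpoint (avoids int overflow)
        if (a[mid] == target) return mid;
        if (a[mid] < target) lo = mid + 1; // search the right half
        else hi = mid - 1;                 // search the left half
    }
    return -1;
}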

9
Q

Comparable interface

A

We can implement the Comparable interface to define an order for user-defined types, which makes them usable for sorting, etc. To implement the Comparable interface you must include a compareTo method that returns a negative integer if this object is less than the argument, a positive integer if it is greater, and 0 if they are equal.
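
A sketch of a hypothetical user-defined type (Person is my example, not from the course) implementing Comparable:

// A user-defined type with a natural order by age.
public class Person implements Comparable<Person> {
    private final String name;
    private final int age;

    public Person(String name, int age) { this.name = name; this.age = age; }

    @Override
    public int compareTo(Person other) {
        return Integer.compare(this.age, other.age); // negative, zero or positive
    }
}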

10
Q

Why there are so many sorting algorithms

A

Simplicity vs speed: simple algorithms can win on small amounts of data. Worst-case vs average-case performance: knowing when each matters.

11
Q

Selection Sort

A

Find the smallest item in the list and swap it with the first item, then find the smallest item in the remainder of the list and swap it with the first item of the remainder, and so on. In effect the same step is repeated on ever-smaller subarrays (it can be written recursively or iteratively). Time complexity is O(n²).
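
An iterative sketch of selection sort (illustrative):

// Selection sort: O(n²) comparisons.
static void selectionSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        int min = i;
        for (int j = i + 1; j < a.length; j++) {
            if (a[j] < a[min]) min = j;              // find the smallest item in the remainder
        }
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // swap it to the front of the remainder
    }
}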

12
Q

Insertion sort

A

Process the array one element at a time. Insert each new element into its correct position among the previously sorted elements. Time required is O(n²), as for each element we may search through the whole sorted prefix and then shift elements up to the end of the current subarray.

13
Q

Better Insertion Sort

A

When we insert the element from position i, the previous elements are already sorted and those greater than a[i] need to be moved forward. So read backwards from position i, moving elements forward until the necessary hole opens up in which to place a[i], which we need to store in advance since it will be overwritten in the first step. If a is nearly sorted this works really well and performance is closer to O(n) than O(n²).
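
A sketch of this shifting version of insertion sort (illustrative):

// Insertion sort that shifts elements instead of repeatedly swapping.
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int value = a[i];               // store a[i] before it is overwritten
        int j = i - 1;
        while (j >= 0 && a[j] > value) {
            a[j + 1] = a[j];            // move larger elements forward
            j--;
        }
        a[j + 1] = value;               // drop the stored value into the hole
    }
}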

14
Q

‘True’ Random numbers (bits, integers)

A

Are generated by observation of some unpredictable physical process. This is a slow and computationally expensive process that, until recently, required special-purpose hardware.

15
Q

Pseudo-random numbers

A

Numbers generated as a sequence by a specific deterministic mathematical algorithm. Although they appear unpredictable when observed, if the initial seed is known as well as the algorithm, you can predict them with 100% accuracy. So very fast, but not actually random at all. In Java there are three ways to access pseudo-randomness: Math.random(), which returns a double value greater than or equal to 0.0 and less than 1.0;

the java.util.Random class, which is used to generate a stream of pseudo-random numbers using a 48-bit seed; and the java.security.SecureRandom class, which provides a cryptographically strong random number generator.
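
A small sketch showing all three (the class name RandomDemo is mine):

import java.security.SecureRandom;
import java.util.Random;

public class RandomDemo {
    public static void main(String[] args) {
        double d = Math.random();                  // in [0.0, 1.0)
        int die = new Random().nextInt(6) + 1;     // pseudo-random integer 1..6 (48-bit seed generator)
        int strong = new SecureRandom().nextInt(); // cryptographically strong generator
        System.out.println(d + " " + die + " " + strong);
    }
}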

16
Q

The complexity of choose a card method

A

Since we might need to look at every card in the deck to pick the last one, the complexity is at worst O(n).

17
Q

Collection in Java

A


In an object-oriented context, it’s an object that gathers and organizes other objects, and defines the way in which those elements can be accessed and managed. It specifies and limits the ways in which the user may interact with the collection.

18
Q

Types of Collection

A

In Java, there are linear and non-linear collections. In a linear collection the organization is in a straight line and we have natural notions of precedence and indexing (such as arrays). In a non-linear collection the organization might be more complex (as in a tree or network) or not present at all.

The way in which elements are organized is usually determined by the sequence in which they are added to the collection, as well as any other inherent relationship such as an ordering.

19
Q

Abstract Data Types (ADTs)

A

An ADT is a data type whose intended use is specified by its interface. A data structure is the collection of programming structures (methods, data fields) used to implement an ADT (or collection). The data structure that implements an ADT can be changed without changing the interface, and therefore in a way that does not affect any client programs.

Concerns of efficiency come about in evaluating which data structures to use when implementing an ADT, and will generally not be one-sided, as there are costs and benefits associated with a particular choice of implementation.

20
Q

Abstraction

A

A method of hiding certain details at certain times. As a result, the user can focus on more important issues and not be concerned with the messy details. Often, abstractions focus on the interface to a process or structure rather than the structure itself. For collections, abstractions provide a powerful method of ensuring consistent and clean access and manipulation without having to worry about irrelevant details.

21
Q

APIs

A


Application programming interfaces, into which much of the Java library is organized. An API is typically a collection of ADTs (interfaces) together with some specific data structures to implement them. One such is the Java Collections API, which represents specific types of collections. We should still bother to learn beyond the API, though, as the collection type we want might not be there, or we may have a special implementation in mind because of special conditions. We need to understand the issues involved.

22
Q

Generics

A

Because type checking occurs at compile time in Java, a key question about collections is: collections of what?

In early versions of Java any sort of collection was fine provided it held Objects, but this forced casting or silly wrapper classes. As of Java 5.0, generic types are specified with collections. It means we can specify the type of object the collection will hold: access to the collection will return objects of that type, and only objects of that type can be added to the collection.

23
Q

Stack ADT

A

A last-in, first-out collection. An analogy is a stack of papers to process, or a stack of dishes to wash. The basic operations are push, which adds an item to the stack, and pop, which removes and returns the top item of the stack. These are usually extended with peek, which examines the top item without removing it, and convenience methods to return the size and test for emptiness.

24
Q

Using a stack to sort

A

Stacks can be used to (partially) sort an incoming stream of objects, the algorithm being as follows: compare the next incoming item with the top of the stack; if it is smaller, push it onto the stack; otherwise pop the stack (sending popped items to the output) until the top is larger than the input (or the stack is empty), and then push it onto the stack. When all the input has been processed, pop each remaining item off the stack.
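
A sketch of the idea using java.util.Deque as the stack; I assume popped items go straight to an output list, which the card leaves implicit:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

static List<Integer> stackSort(List<Integer> input) {
    Deque<Integer> stack = new ArrayDeque<>();
    List<Integer> output = new ArrayList<>();
    for (int x : input) {
        // pop until the top is larger than the incoming item (or the stack is empty)
        while (!stack.isEmpty() && stack.peek() <= x) output.add(stack.pop());
        stack.push(x);
    }
    while (!stack.isEmpty()) output.add(stack.pop()); // flush what remains
    return output;  // partially sorted: a sequence of increasing runs
}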

25
Q

Implementation of the Stack ADT

A

Can use an array to keep track of the stack contents, and a variable count to keep track of the size of the stack. One problem is that you have to declare the array size and it cannot change; this can be combated by using an ArrayList, or by copying the contents of the array over to a new array of double the size once it gets full.
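
A sketch of the core of an array-backed stack with doubling (class and method names are mine):

public class ArrayStack<T> {
    private T[] stack;
    private int count;

    @SuppressWarnings("unchecked")
    public ArrayStack() { stack = (T[]) new Object[10]; count = 0; }

    public void push(T element) {
        if (count == stack.length) expandCapacity(); // grow only when full
        stack[count++] = element;
    }

    @SuppressWarnings("unchecked")
    private void expandCapacity() {
        T[] larger = (T[]) new Object[stack.length * 2];
        System.arraycopy(stack, 0, larger, 0, count); // copy contents to a new, larger array
        stack = larger;
    }
}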

26
Q

Performance analysis of the Stack implementation

A

For space complexity, we initially allocate O(1) space because of the default capacity; later, when we expand capacity, we never have more than twice as much available as needed, so that’s O(n), where n is the maximum number of items we ever hold in the stack.

Time for pop, peek, isEmpty are all clearly O(1), and the push operation is also O(1), except when we need to expand the stack. Although it seems unfair to associate the cost of stack expansion to the single item that caused it, we do have to say that the push operation can be O(n).

However, we can practically think of the cost of expansion as being spread over all elements present when it happens. Since there are n elements present and the expansion is O(n), it means that in an amortized sense the push operation is still O(1).

The peek and pop operations need to do something on empty stacks. One solution would be to have them return null in these cases, which is actually not bad, but in the spirit of ADTs these operations should throw exceptions and the user should use a try/catch.

Each ADT tends to come with its own subclasses of exception to describe the exceptions it might generate.

27
Q

Stack Interface:

A
public void push(T element);
public T pop();
public T peek();
public boolean isEmpty();
public int size();
public String toString();
28
Q

References as links

A

In Java, a reference refers to a memory location in which the complete data for an object is stored. In other languages these are called pointers and a clear distinction is drawn between objects and references to objects.

A reference to an object of the same or a closely related type is often called a link, and a collection of linked objects is called a linked data structure.

29
Q

Simple linked list

A

A linked list consists of a sequence of nodes connected via links. Each node except the last has a successor, and each node except the first has a predecessor. Each node contains a single element and links (i.e. references) to its successor and/or predecessor. The key difference with an array is that by manipulating the links we can change the structure of the list, not just the values it stores. A linked list is a dynamic data structure.

30
Q

Dynamic vs Static Data Structure

A

A static data structure is one whose memory is fixed at the time it is created and cannot be changed; for instance, with arrays we cannot change the structure, only its contents. A dynamic data structure is one whose structure CAN change; for instance, with linked lists, by changing the links we can change the structure of the list. We sometimes use static structures, because they tend to be more efficient, in the background of apparently dynamic structures (for example using an array to model a stack). Generally, this requires finesse to cope with the clash between the static and dynamic requirements (e.g. when the stack grows beyond the capacity of its backing array).

31
Q

Singly Linked List

A

In which each node stores only one link, to the next element in the list, with the actual list object containing a reference to the first node in the list, or a null reference if the list is empty.
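
A minimal sketch of such a list (class and field names are mine):

public class SinglyLinkedList<T> {
    private static class Node<T> {
        T element;
        Node<T> next;          // link to the successor (null at the end of the list)
        Node(T element) { this.element = element; }
    }

    private Node<T> first;     // null when the list is empty

    public void addFirst(T element) {
        Node<T> node = new Node<>(element);
        node.next = first;     // changing links changes the structure of the list
        first = node;
    }
}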

32
Q

Singly Linked List toString

A

Uses the idea of list traversal: start at the first node and then, while the current node is not null, do something and move on to the next node. We can add an iterator that allows us to do this generally.

33
Q

Iterators

A

One of the useful features of many classes in the collections framework is that they can be the targets of ‘foreach’ statements such as for (String s : titles), where titles is, say, an ArrayDeque<String>. Any class can do this provided it implements the Iterable interface, which in turn requires an iterator() method that returns an object implementing the Iterator interface. Iterators must support three methods: hasNext(), which returns a boolean indicating whether there is anything more to return; next(), which returns the next object from the iterator; and remove(), which removes the last item returned (although it frequently throws an UnsupportedOperationException).

These are often defined anonymously, i.e. the code that defines the behaviour of the iterator is given directly where the iterator is constructed (not as a separate, named type). Iterators over dynamic data structures are generally allowed, and even expected, to behave unpredictably if the structure is modified while the iterator is active.
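
A sketch of an anonymously defined iterator for a singly linked list like the one above (class layout is mine):

import java.util.Iterator;

public class IterableLinkedList<T> implements Iterable<T> {
    private static class Node<T> { T element; Node<T> next; }
    private Node<T> first;

    @Override
    public Iterator<T> iterator() {
        return new Iterator<T>() {          // defined anonymously, where it is constructed
            private Node<T> current = first;
            public boolean hasNext() { return current != null; }
            public T next() { T e = current.element; current = current.next; return e; }
            // remove() inherits a default implementation that throws UnsupportedOperationException
        };
    }
}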

34
Q

Doubly linked lists

A

In a singly linked list we can access the next item directly via its reference (O(1)) and its preceding item only by sneaking up on it via list traversal (O(n)). In a doubly linked list you aim to make both O(1) by keeping a second reference. It is just two intertwined singly linked lists, so most of the code is reusable with modification. It is a bit tricky to get working, however, as modifying the prev reference of one node means that the next reference of the node it points to must be changed as well.

35
Q

Queue ADT

A

A queue is an abstract data type that represents item processing in a first in, first out manner. The basic operations are like that of a stack, except that the “remove” and “add” methods (pop and push for stacks) operate on opposite ends of the data instead of the same end.

This is a useful abstraction for the breadth first search, in which the order you visit is the order you add, as well as event or order processing and process scheduling + simulation. The priority queue is a common extension.

Essentially, in a stack you remove AND add things from/to the same end, whereas in a queue you remove things from the beginning and add things to the end.

Event or order processing is when you want to process things in the order they arrive; the same goes for process scheduling: you have a bunch of processes that need to be done in order, so you add them to a queue to make sure the necessary pre-processes are done first.

36
Q

Queue as Singly Linked list

A

A singly linked list seems quite an obvious match for representing the queue ADT as a data structure. By adding a reference to the last element you can make sure that both the enqueue and the dequeue operations are O(1).

37
Q

Queue as Circular Array

A

It would be sensible to implement a queue as an array if we knew our queue would never grow beyond a fixed size. One approach is to dequeue by returning the element at position 0 and moving everything else down one space, but that makes dequeue O(n). Can we arrange for both dequeue and enqueue to be O(1)?

If we think of the array elements as being arranged on a circle, then we need only keep track of ‘first’ and ‘last’ indices. Enqueue involves storing at the last index and incrementing it, whereas Dequeue involves returning the element at the first index and incrementing it. We need to remember to wrap around at the end.
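
A sketch of a fixed-capacity circular-array queue (names are mine); enlargement is deliberately left out, as the next card discusses:

public class CircularArrayQueue<T> {
    private T[] queue;
    private int first = 0, count = 0;

    @SuppressWarnings("unchecked")
    public CircularArrayQueue(int capacity) { queue = (T[]) new Object[capacity]; }

    public void enqueue(T element) {
        if (count == queue.length) throw new IllegalStateException("queue is full");
        int last = (first + count) % queue.length;   // wrap around at the end
        queue[last] = element;
        count++;
    }

    public T dequeue() {
        if (count == 0) throw new IllegalStateException("queue is empty");
        T element = queue[first];
        queue[first] = null;
        first = (first + 1) % queue.length;          // wrap around at the end
        count--;
        return element;
    }
}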

38
Q

Enlargement problems

A

If we forget to enlarge the array storing a circular queue when the capacity is exceeded, we mess up completely. There are two possibilities: impose a hard limit on the size of the queue and throw an exception when we overrun (so we want a constructor that sets the capacity to something other than the default), or bite the bullet and write the enlarge method for the array.

39
Q

Trees

A

Whilst stacks, queues and lists are all linear data structures, with a left-to-right ordering of elements, a tree is a more complex, non-linear data structure in which elements are arranged in a hierarchy. A tree consists of a set of nodes and edges that connect nodes to one another. Each node has a particular level and there is a single node at the top level called the root of the tree. Each node except the root has a single parent, which lies one level higher in the tree. A node may have multiple children, and a node without any children is called a leaf. If two nodes have a common parent they are called siblings, and a node that is neither a leaf nor the root is called an internal node.

40
Q

Tree Paths

A

A path in a tree is a sequence of nodes where each node in the sequence is a child of the preceding node. The length of a path is the number of edges in it (so one less than the number of nodes). For every node in the tree there is a unique path from the root to that node, and the nodes lying on this path are called its ancestors. The level of a node is the length of the path from the root to the node, so the root is at level 0. Nodes that can be reached from a node by following a path starting from it are called descendants. A subtree consists of a node together with all of its descendants and the edges that connect them.

41
Q

Recursion in trees

A

A descendant of a node is either one of its children or a descendant of one of its children. An ancestor of a node is either its parent or an ancestor of its parent. A tree is either a single node, or a root node together with a collection of trees whose roots are its children.

42
Q

Tree Classifications!

A

The order of a tree is the maximum number of children any node has; if the order of a tree is at most k, then it is called a k-ary tree. If every non-leaf node has the same number of children, the tree is called full.

43
Q

Binary Tree

A

One in which every node has at most two children. A full and balanced binary tree of depth d has at least 2^d and at most 2^(d+1) − 1 nodes. A non-full or poorly balanced binary tree could have as few as d + 1 nodes (where every node has a single child). Keeping trees as full and balanced as possible will be important for efficiency, as we search them by following paths.

44
Q

A balanced tree

A

A tree is called balanced when all of its leaves are within one level of one another.

45
Q

Height or Depth of a Tree

A

The length of the longest path in the tree (which necessarily goes from the root to a leaf). A full and balanced k-ary tree with n nodes has depth O(log_k n).

46
Q

Tree traversals

A

A traversal of a data structure means visiting all of its elements in some order, or more generally making a “visit” to each element in order to take some action. There are three recursive tree traversals: preorder, inorder and postorder. There is also level-order (non-recursive), in which the nodes at each level are accessed from left to right.

47
Q

Preorder traversal

A

Visit the root then (preorder) traverse its left subtree then (preorder) traverse its right subtree

48
Q

Inorder traversal

A

(Inorder) traverse the left subtree of the root, then visit the root, and then (inorder) traverse its right subtree.

49
Q

Postorder traversal

A

(post order) traverse the left subtree of the root, then (post order) traverse its right subtree and visit the root.
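
A sketch of the three recursive traversals over a simple linked node type (TreeNode is my name, not the course’s):

static class TreeNode<T> {
    T element;
    TreeNode<T> left, right;
}

static <T> void preorder(TreeNode<T> node) {
    if (node == null) return;
    System.out.println(node.element);  // visit the root first
    preorder(node.left);
    preorder(node.right);
}

static <T> void inorder(TreeNode<T> node) {
    if (node == null) return;
    inorder(node.left);
    System.out.println(node.element);  // visit the root between the subtrees
    inorder(node.right);
}

static <T> void postorder(TreeNode<T> node) {
    if (node == null) return;
    postorder(node.left);
    postorder(node.right);
    System.out.println(node.element);  // visit the root last
}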

50
Q

Implementation of a Tree

A

Many possibilities! We could store nodes in an array with pre-computed indices: the root goes at index 0, and the children of the node at index k go at indices 2k + 1 and 2k + 2 (this just works, although it can waste a lot of space).

An alternative array storage simply puts the nodes in an array as needed, but includes in their structure the indices of their children (if any) and possibly the parent’s index as well.

The obvious, recursive representation is that each node contains references, i.e. links, to its children and possibly its parent. These links can be thought of as the left and right subtrees rooted at the children.

51
Q

Binary Search Tree

A

A BST is a binary tree whose nodes contain elements of some ordered type. If the value stored at a node is V, then all nodes in its left subtree store values smaller than V and all nodes in its right subtree store values larger than V. So, if we follow a path, each time we pass to a left child the value will go down, and each time we pass to a right child it will go up.
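
A sketch of BST search using the TreeNode type from the traversal sketch above:

// Follow left when the target is smaller, right when it is larger.
static <T extends Comparable<T>> boolean search(TreeNode<T> node, T target) {
    if (node == null) return false;               // fell off the tree: not present
    int cmp = target.compareTo(node.element);
    if (cmp == 0) return true;
    return (cmp < 0) ? search(node.left, target) : search(node.right, target);
}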

52
Q

Operations on a BST

A

There are three fundamental operations on BSTs: Search, Add and Delete.

The interface!

public interface BinarySearchTree<T extends Comparable<T>> {

    public boolean search(T element);
    public void add(T element);
    public void remove(T element);
}

53
Q

Performance issues in trees

A

The main factor is the tree’s balance or lack of it (the range of depths of its nodes).

The worst case is a single branch (average depth n/2) and the best case a full, balanced tree (maximum depth approximately log₂ n).

54
Q

Rotations

A

One approach to maintaining balance is to allow the tree structure to be modified while preserving its BST characteristics. One family of such modifications is called rotations.
A right rotation helps to balance the tree when there are long paths in the left subtree of the left child of the root. We can deal similarly with long paths in the right subtree, but the mixed case is more difficult as it requires two rotations.

55
Q

Heaps (ADT)

A

A binary tree is complete if it is full and balanced and all of its leaves are as far left as possible. A heap is a binary tree in which every element is greater than or equal to both of its children. The operations that a heap should support are: adding an item, returning the maximum value in the heap and removing the maximum value from the heap.

56
Q

Usefulness of a Heap

A

There are two main uses for a heap: They can be used for priority queues, a data structure in which the item with the highest priority is always the next one processed.

Or as part of the heap sort algorithm which sorts data by adding it item by item to a heap and then simply removing from the heap until nothing is left.

57
Q

Heap Algorithms

A

We add an item by placing it at the first vacant leaf position and letting it float up the branch towards the root so long as it is larger than its parent.

Returning the max value is trivial, as that is always the root.

Removing the max value is interesting.

58
Q

Removing the Maximum in a heap

A

The maximum is always the root of the tree, so finding it is never an issue. The issue is reconstructing the tree, while maintaining the heap property, after the root element is removed. The key idea is in some sense the reverse of addition.

Answer: replace the root value with the value of the last leaf, then re-establish the heap property by exchanging the root with the larger of its children, and repeat this recursively downwards.
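
A sketch of this removal over an array-backed max-heap of ints, using the array index trick from the heap-implementation card; names are mine:

// Remove the maximum from a max-heap of 'size' elements (size >= 1 assumed); returns the new size.
static int removeMax(int[] heap, int size) {
    heap[0] = heap[size - 1];                 // replace the root with the value of the last leaf
    size--;
    int i = 0;
    while (true) {
        int left = 2 * i + 1, right = 2 * i + 2, larger = i;
        if (left < size && heap[left] > heap[larger]) larger = left;
        if (right < size && heap[right] > heap[larger]) larger = right;
        if (larger == i) break;               // heap property re-established
        int tmp = heap[i]; heap[i] = heap[larger]; heap[larger] = tmp;
        i = larger;                           // keep sinking downwards
    }
    return size;
}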

59
Q

The Heap ADT

A

public interface Heap<T extends Comparable<T>> {

    public void add(T element);
    public T get();
    public T remove();
}

60
Q

Heaps implementations

A

You could use a linked binary tree to implement a heap. To facilitate both upwards and downwards navigation, we shall probably want to maintain links to parent nodes as well as children. To determine where the next item is to be added, and where the last item is for removal, we probably want a data field for the last item.

OR we could use an array, with the index trick (the children of the element at index i are at 2i + 1 and 2i + 2 respectively). Navigation is then trivial if we keep track of the current size, and the position of the last item (and the next insertion point) is also known. The only drawback is the need to resize if the heap gets too large.

If using an array, you would need a method to expand the capacity if/when needed, a method to swap the values of two positions, and a method to find the index of the larger child or tell us there isn’t one.

61
Q

Priority Queues

A

In a priority queue, each element is added with an associated priority. When an element is removed, it’s the element with the highest priority, and if more than one shares the highest priority it should be the earliest arrival that is removed. So if all elements have the same priority then it behaves like a normal queue, while if elements are added in strictly an increasing order of priority then it behaves like a stack.

62
Q

Priority Queue Implementation

A

A heap is the ideal backing for a priority queue. We just need to do a bit of bundling together of items and priorities. We suppose the priorities are supplied as integers.

63
Q

Heapsort (1964)

A

An in-place, comparison-based, array sorting algorithm that has guaranteed worst-case O(n log n) behaviour. The basic idea (a sketch follows the steps below):

  • Organise the elements of the array into a heap structure.
  • Exchange the first (largest) and last element.
  • Restore the heap structure (except for the final element)
  • Repeat last two steps until finished.
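
A sketch of heapsort with bottom-up construction via a sink helper (names are mine):

// In-place heapsort: build a max-heap, then repeatedly move the maximum to the end.
static void heapSort(int[] a) {
    int n = a.length;
    for (int i = n / 2 - 1; i >= 0; i--) sink(a, i, n);  // organise the array into a heap (bottom-up)
    for (int end = n - 1; end > 0; end--) {
        int tmp = a[0]; a[0] = a[end]; a[end] = tmp;     // exchange the first (largest) and last element
        sink(a, 0, end);                                 // restore the heap, excluding the final element
    }
}

static void sink(int[] a, int i, int size) {
    while (true) {
        int left = 2 * i + 1, right = 2 * i + 2, larger = i;
        if (left < size && a[left] > a[larger]) larger = left;
        if (right < size && a[right] > a[larger]) larger = right;
        if (larger == i) return;                         // heap property holds here
        int tmp = a[i]; a[i] = a[larger]; a[larger] = tmp;
        i = larger;
    }
}
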
64
Q

Organizing the heap for a heapsort algorithm

A

There are two choices, top down or bottom up. The first mimics the algorithm from the previous lecture, effectively treating a growing initial segment of the array as a heap and adding one element at a time, letting it float as high as necessary.

The second imagines the tree structure already in place over the whole array, and fixes violations of the heap property beginning from the lowest non-leaf nodes and moving upwards.

The first is easier to conceptualize but is O(n log n)

The second is actually O(n)

65
Q

The difference between top-down and bottom-up heap construction

A

In the top-down version, where elements float up the heap, the elements in the larger, lower levels of the heap may float farthest. In particular, each element at the bottom level, which makes up half the heap, might need to float to the top (log n away), requiring O(n log n) steps.

In the bottom-up version, the elements in the larger levels are sinking down, and have a shorter distance to travel. In fact, at most n/2^i elements need to sink a distance of i, so the total number of steps is O(n).

66
Q

Graphs

A

A graph consists of a set of vertices connected by edges. There is at most one edge between any two vertices, and the two endpoints of an edge are distinct (no loops). Edges are symmetric (i.e. two-way streets) and two vertices are neighbours if there is an edge between them.

67
Q

Representing graphs

A

There are many possible data structures to represent a graph. One option is to store it as an array of vertex objects, where each vertex object includes an ArrayList data field holding its neighbours.

68
Q

Generating graphs to test algorithms

A

Direct creation is painful: one line of code per edge, and not practical for graphs with lots of vertices or edges. You CAN read a graph from a file, or you can do something random (a fixed edge probability between any two vertices, or adding a certain number of edges at each vertex).

69
Q

Distance!

A

The distance between two vertices in a graph is the smallest number of edges required to get from one to the other. If it’s impossible, we just say the distance is -1 or something along those lines.

70
Q

Why random graphs?

A

Original motivation is theoretical: if for some model of a random graph the probability of X being true is strictly positive then there must be graphs for which X is true.

There is a surprisingly large collection of interesting properties X for which this is still the only known way to prove existence.

The current motivation is pragmatic: an enormous number of situations have an underlying graph or network. Collecting real and complete data about these graphs is very expensive, so random networks can be used for simulation.

In that context it’s important that the characteristics of the simulation match those of the real data.

71
Q

Erdos-Renyi (1959) graph

A

A graph is built from two parameters: the number of vertices (n) and the probability of two vertices being neighbours (p).

Each possible edge is considered independently and included with probability p.

A very useful model for theory, although not so applicable in practice. A slight variation is to take the second parameter to be the number of edges E, and to take the set of edges to be a random subset of size E from among the n(n−1)/2 possible edges.
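
A sketch of generating such a graph as an adjacency list (class and method names are mine):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ErdosRenyi {
    public static List<List<Integer>> generate(int n, double p, Random rng) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {    // each of the n(n-1)/2 possible edges
                if (rng.nextDouble() < p) {      // included independently with probability p
                    adj.get(i).add(j);
                    adj.get(j).add(i);           // edges are symmetric
                }
            }
        }
        return adj;
    }
}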

72
Q

Watts-Strogatz (1998) graph

A

Three parameters: N the number of vertices, K the degree of each vertex (assumed even) and B with 0 < B < 1, a magic number.

Start with a circular (ring) graph in which every vertex is a neighbour of the immediately preceding K/2 vertices and the immediately following K/2 vertices.

For each edge (i, j) with i < j, replace it with an edge (i, k), where k is chosen at random, with probability B, subject to ensuring there are no loops or duplicated edges. This produces graphs with a much higher clustering coefficient than the ER model for the same average degree, while a relatively large choice of B still ensures that the distance between vertices tends to be small (this is the small-world phenomenon, frequently observed in complex networks). The degree distribution is peaked sharply at K, though, which is not common in complex networks.

73
Q

Barabasi- Albert graph

A

Models a network growing in time by the addition of vertices. Vertices exhibit preferential attachment, i.e. new vertices are more likely to become neighbours of vertices that already have relatively high degree. Formally, each new node connects to an existing node with probability proportional to the current degree of that node divided by the sum of the degrees of all nodes so far. This gives a power-law distribution of degrees (the number of vertices of degree k is proportional to k^(−3) in the most basic model) and a small-world model. However, the clustering coefficient is relatively low.