Part 1: Deterministic Data Models Flashcards

1
Q

Data to Model

A
  • Models are descriptions of the data; they encode our assumptions
  • Models are a ‘generalisation’ of the data
2
Q

Choice of model

A

Often dictated by the practicality of the method, as well as by our assumptions about the data - no need to play God

3
Q

A model aims to achieve

A

maximum discrimination

4
Q

Linear Classifier

A

Only one parameter needed

5
Q

Deterministic Models

A

produce an output without a confidence measure

- (Do not encode uncertainty in the data)

6
Q

(Deterministic Model) - Line Fitting

A

The best-fitting line is the one that minimises a distance measure from the points to the line. Can use the Method of Least Squares.

7
Q

Method of Least-Squares

A
R(a, b) = \sum_i (y_i - (a + b x_i))^2
- (This is known as the residual)

a_{LS} = \bar{y} - b_{LS} \bar{x}

b_{LS} = \frac{\sum_i x_i y_i - N \bar{x} \bar{y}}{\sum_i x_i^2 - N \bar{x}^2}

Note: \bar{x}, \bar{y} are the means of the x and y values

Method :

  1. Minimise the residual by taking the partial derivatives with respect to a and b, and setting them to zero (using the chain rule).
  2. Solve for a_{LS} and b_{LS}.
  3. The fitted line is y = a_{LS} + b_{LS} x.
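
A minimal sketch of these formulas in NumPy, assuming some illustrative data (the arrays and values below are made up for demonstration):

```python
import numpy as np

# Illustrative data (assumed): noisy samples around a straight line
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
N = len(x)

x_bar, y_bar = x.mean(), y.mean()

# b_LS = (sum_i x_i y_i - N x_bar y_bar) / (sum_i x_i^2 - N x_bar^2)
b_ls = (np.sum(x * y) - N * x_bar * y_bar) / (np.sum(x ** 2) - N * x_bar ** 2)

# a_LS = y_bar - b_LS x_bar
a_ls = y_bar - b_ls * x_bar

# Residual R(a, b) = sum_i (y_i - (a + b x_i))^2 at the minimum
residual = np.sum((y - (a_ls + b_ls * x)) ** 2)

print(a_ls, b_ls, residual)
```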
8
Q

(Method of Least-Squares) - Outliers

A

Have a disproportionate effect, because the residual is defined in terms of squared differences

9
Q

(Method of Least-Squares) - Matrix Form

A
R(a, b) = \| \mathbf{y} - X \mathbf{a} \|^2  (the same residual, written in matrix form)

where..
\mathbf{y} = (y_1, \dots, y_N)^T is the column vector of target values
X = \begin{bmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_N \end{bmatrix}  (first column all ones, second column the x values)
\mathbf{a} = \begin{bmatrix} a \\ b \end{bmatrix}

\mathbf{y} - X \mathbf{a} = \begin{bmatrix} y_1 - a - b x_1 \\ \vdots \\ y_N - a - b x_N \end{bmatrix}

\begin{bmatrix} a_{LS} \\ b_{LS} \end{bmatrix} = (X^T X)^{-1} X^T \mathbf{y}

…Look at slide 103
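
A sketch of the same fit in its matrix form, assuming NumPy; np.linalg.solve is applied to the normal equations rather than inverting X^T X explicitly, which is the usual numerically safer choice:

```python
import numpy as np

# Illustrative data (assumed), as in the previous sketch
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix X: first column all ones, second column the x values
X = np.column_stack([np.ones_like(x), x])

# Solve (X^T X) a = X^T y, i.e. a_LS = (X^T X)^{-1} X^T y
a_ls, b_ls = np.linalg.solve(X.T @ X, X.T @ y)

# Residual R = ||y - X a||^2
residual = np.sum((y - X @ np.array([a_ls, b_ls])) ** 2)

print(a_ls, b_ls, residual)
```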

10
Q

K-D Least Squares - Matrix Form

A

\mathbf{a}_{LS} = (X^T X)^{-1} X^T \mathbf{y}

where X^T X is a (K+1) \times (K+1) square matrix (one intercept plus one coefficient per input dimension)
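
A sketch of the K-dimensional case, assuming NumPy and K = 2 illustrative input features; the design matrix gains one column per feature in addition to the column of ones:

```python
import numpy as np

# Illustrative data (assumed): N = 5 points with K = 2 input dimensions
X_raw = np.array([[0.0, 1.0],
                  [1.0, 0.5],
                  [2.0, 1.5],
                  [3.0, 2.0],
                  [4.0, 2.5]])
y = np.array([1.0, 3.2, 5.9, 8.1, 10.8])

# Design matrix: column of ones followed by the K feature columns
X = np.column_stack([np.ones(len(y)), X_raw])

# a_LS = (X^T X)^{-1} X^T y; here X^T X is (K+1) x (K+1), i.e. 3 x 3
a_ls = np.linalg.solve(X.T @ X, X.T @ y)

print(a_ls)  # intercept followed by one coefficient per input dimension
```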

11
Q

General Least Square - Matrix Form

A

The matrix formulation also allows the least-squares method to be extended to polynomial fitting.

For a polynomial of degree p (i.e. p + 1 coefficients):

y_i = a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_p x_i^p

\mathbf{a}_{LS} = (X^T X)^{-1} X^T \mathbf{y}

where X^T X is a (p+1) \times (p+1) square matrix
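
A sketch of the polynomial extension, assuming NumPy; the degree p and the data are illustrative. The design matrix has columns 1, x, x^2, ..., x^p (a Vandermonde matrix):

```python
import numpy as np

# Illustrative data (assumed) and polynomial degree p
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.0, 1.6, 2.9, 5.1, 8.2, 12.4, 17.3])
p = 2

# Design matrix with columns x^0, x^1, ..., x^p
X = np.vander(x, N=p + 1, increasing=True)

# a_LS = (X^T X)^{-1} X^T y; X^T X is (p+1) x (p+1)
a_ls = np.linalg.solve(X.T @ X, X.T @ y)

print(a_ls)      # coefficients a_0, a_1, ..., a_p
print(X @ a_ls)  # fitted values
```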

12
Q

Generalisation and overfitting - Least Squares, Polynomial Lines

A

Think of how polynomial lines of different order have different shapes; the order p has a big impact on the separation of the data (a higher order fits the training points more closely but generalises less well).

See slide 108
