Spectral Methods Flashcards

1
Q

What are spectral techniques?

A

A family of techniques for studying the structure of matrices, with applications in image processing, data science, machine learning, game theory, and more.

2
Q

Identity Matrix

A

The identity matrix has all its entries equal to 0 except those where the row and column index are the same (aka the diagonal), which are 1.
For any matrix A, the multiplication of A by the identity matrix (on either side) equals A. In other words:
A . I = I . A = A
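A minimal numpy sketch of this property (the matrix values are illustrative, not from the cards):

```python
import numpy as np

# A 3x3 identity matrix: 1s on the diagonal, 0s elsewhere.
I = np.eye(3)

# Hypothetical example matrix to illustrate A @ I == I @ A == A.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])

assert np.allclose(A @ I, A)
assert np.allclose(I @ A, A)
```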

3
Q

Inverse Matrix

A

Denoted as A^-1: the matrix that satisfies A . A^-1 = A^-1 . A = I.
Some matrices do not have an inverse, and are called singular matrices.
Matrices which do are called non-singular.
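A quick numpy check of the defining property (the matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)   # raises LinAlgError if A is singular

# The defining property of the inverse: A @ A^-1 = A^-1 @ A = I.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```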

4
Q

Determinant

A

The real value whose properties determine whether an inverse matrix exists: A^-1 exists if and only if det(A) is not equal to 0.
The formula depends on the size of the matrix. For example, for a 2x2 matrix with entries a, b, c, d (row by row), the determinant is ad - bc. If this does not equal 0, the matrix has an inverse.
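The 2x2 formula checked in numpy (the entries are an illustrative example):

```python
import numpy as np

A = np.array([[3.0, 8.0],
              [4.0, 6.0]])

# For a 2x2 matrix [[a, b], [c, d]], det = ad - bc.
det = np.linalg.det(A)
assert np.isclose(det, 3 * 6 - 8 * 4)   # nonzero, so A is invertible
```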

5
Q

Eigenvalue Problem

A

We essentially want to find a scalar value λ and an n-vector x such that multiplying the nxn matrix A by x gives the same result as multiplying x by λ.
The scalar value is the eigenvalue and the n-vector is the eigenvector. This can be represented as:
A . x_λ = λ . x_λ
This equation is mainly used to find eigenvectors once the eigenvalues are known.
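A numpy sketch verifying A . x_λ = λ . x_λ for each eigenpair (the matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check A @ x = lambda * x for each eigenpair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```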

6
Q

Eigenvalue Problem Rewritten

A

We can rewrite:
A . x_λ = λ . x_λ
to be:
(A - λ . I) . x_λ = 0
Also, A - λ . I is just A with the eigenvalue subtracted from each diagonal entry. This is because the identity matrix has 1s on the diagonal and 0s elsewhere, so multiplying it by the eigenvalue puts λ on the diagonal and leaves 0 everywhere else.
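A small numpy check of both facts (the matrix, with eigenvalues 1 and 3, is an illustrative example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues are 1 and 3

lam = 3.0
shifted = A - lam * np.eye(2)

# Only the diagonal entries changed:
assert np.allclose(shifted, [[2 - lam, 1.0], [1.0, 2 - lam]])

# The eigenvector for lambda = 3 is (1, 1); (A - lam*I) sends it to 0.
x = np.array([1.0, 1.0])
assert np.allclose(shifted @ x, np.zeros(2))
```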

7
Q

Finding the (dominant) eigenvalue

A

Before we can use the equation above, we need to actually find an eigenvalue. More often than not, we are only interested in the dominant eigenvalue, i.e. the eigenvalue with the largest absolute value. This is found using the power method with the Rayleigh quotient.

8
Q

Power Method with Rayleigh Quotient

A

Power Method:
- Choose an initial guess at the eigenvector; here this will always be <1,1,1,...,1>.
- Set i = 0.
- Compute x_{i+1} = A . x_i; on the first step this is A . <1,1,1,...,1>.
- Introduce scaling to keep the result from growing: divide the new vector by its largest entry. Let's say we compute A . <1,1,1,...,1> to be <0.5, 0.4, 0.6, 0.7, ...>, and 0.7 is the largest; we then divide each value by 0.7, which scales the vector down.
- Set i = i + 1.
- Continue until i >= some user-defined maximum.

Rayleigh Quotient:
- Take the vector x found above and multiply it by A again.
- Take the dot product of that result with x; call the result f.
- Divide f by the dot product of x with itself.
The equation looks like this:
λ ≈ (x^T . (A . x)) / (x^T . x)
where x is the scaled vector from the power method and x^T is its transpose.
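The steps above can be sketched in numpy (the 2x2 matrix, with dominant eigenvalue 3, is an illustrative example):

```python
import numpy as np

def dominant_eigenvalue(A, max_iter=100):
    """Power method with scaling, then a Rayleigh quotient estimate."""
    x = np.ones(A.shape[0])           # initial guess <1, 1, ..., 1>
    for _ in range(max_iter):
        x = A @ x
        x = x / np.max(np.abs(x))     # scale by the largest entry
    # Rayleigh quotient: (x^T A x) / (x^T x).
    return (x @ (A @ x)) / (x @ x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 1 and 3
assert np.isclose(dominant_eigenvalue(A), 3.0)
```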

9
Q

Using Eigenvalues with edges of graphs

A

For a graph with edge set E, n nodes, and an adjacency matrix whose spectrum is (λ_1, λ_2, ..., λ_n):

|E| = (SUM{k=1..n} (λ_k)^2) / 2
Essentially, this gives us the number of edges.

In a very simple summary: square all of the eigenvalues, sum them up, and divide by two.
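A numpy sketch using the triangle graph (3 nodes, 3 edges) as a hypothetical example:

```python
import numpy as np

# Adjacency matrix of a triangle graph: 3 nodes, 3 edges.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

eigenvalues = np.linalg.eigvalsh(A)   # eigvalsh: A is symmetric

# Square the eigenvalues, sum, divide by 2.
num_edges = np.sum(eigenvalues**2) / 2
assert np.isclose(num_edges, 3)
```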

10
Q

Using Eigenvalues with closed paths in graphs

A

The sum of the k-th powers of the eigenvalues of an adjacency matrix equals the trace of A^k, which is the number of closed walks of length k in the graph.
For example, to count triangles: cube the eigenvalues and add them up to get the number of closed walks of length 3, then divide by 3! = 6, since each triangle is traversed as a closed walk in 6 ways (3 starting vertices × 2 directions).
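Checking this on the triangle graph (a hypothetical example with exactly one triangle):

```python
import numpy as np

# Triangle graph: exactly one triangle.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

eigenvalues = np.linalg.eigvalsh(A)

# Sum of cubed eigenvalues = trace(A^3) = closed walks of length 3.
closed_walks = np.sum(eigenvalues**3)
assert np.isclose(closed_walks, np.trace(A @ A @ A))

# Each triangle is counted 3! = 6 times, so triangles = sum / 6.
assert np.isclose(closed_walks / 6, 1)
```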

11
Q

Bipartite graphs

A

Graphs which have a symmetric spectrum: if λ is an eigenvalue, so is -λ.
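A quick numpy check on the path graph with 3 nodes, which is bipartite (a hypothetical example):

```python
import numpy as np

# Path graph 1-2-3 is bipartite: parts {1, 3} and {2}.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

eigenvalues = np.sort(np.linalg.eigvalsh(A))

# Spectrum is symmetric about 0: -sqrt(2), 0, sqrt(2).
assert np.allclose(eigenvalues, -eigenvalues[::-1])
```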

12
Q

Page Ranking

A

Let's say we have a collection of items -> {p1, p2, p3, p4, ...}
and we wish to order these by a certain measure.
We could use eigenvalues to do this.

We could first create a graph, in which if p_1 links to p_2, then <p_1, p_2> is an edge.
Each node has a score associated with it, expressed as the total score of the pages that link to it.
The issue is that a page would then be considered more relevant simply for having many incoming links, no matter what it contains.
So instead we divide each contribution by the linking page's number of outgoing links:
r_k = SUM {over all pages p_i that link to page p_k} (r_i / t_i)
where:
r_k -> the score assigned to page p_k
r_i -> the score of linking page p_i
t_i -> the number of outgoing links on page p_i
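A minimal sketch of iterating this scoring rule, assuming a hypothetical 3-page web:

```python
import numpy as np

# Hypothetical 3-page web: page 0 links to 1 and 2; page 1 links to 2;
# page 2 links to 0.
links = {0: [1, 2], 1: [2], 2: [0]}
n = 3

r = np.ones(n) / n                      # start with equal scores
for _ in range(100):
    new_r = np.zeros(n)
    for i, targets in links.items():
        t_i = len(targets)              # outgoing links of page i
        for k in targets:
            new_r[k] += r[i] / t_i      # r_k = sum over linkers of r_i / t_i
    r = new_r

# Scores stay a probability vector since every page has outgoing links.
assert np.isclose(r.sum(), 1.0)
assert np.all(r > 0)
```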

13
Q

Page Ranking Matrix

A

We can define the page-ranking matrix M entrywise:
if page p_j links to page p_i, the entry M_ij is 1 divided by the number of outgoing links on p_j; otherwise it is 0. The ranking scores then form an eigenvector of M with eigenvalue 1.
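Building this matrix for the same hypothetical 3-page web and extracting the eigenvector for eigenvalue 1:

```python
import numpy as np

# Hypothetical web: page 0 links to 1 and 2; page 1 links to 2;
# page 2 links to 0. M[k, i] = 1 / t_i if page i links to page k.
links = {0: [1, 2], 1: [2], 2: [0]}
n = 3

M = np.zeros((n, n))
for i, targets in links.items():
    for k in targets:
        M[k, i] = 1.0 / len(targets)

# The ranking vector is the eigenvector of M for eigenvalue 1.
eigenvalues, eigenvectors = np.linalg.eig(M)
idx = np.argmax(eigenvalues.real)
assert np.isclose(eigenvalues[idx].real, 1.0)

r = eigenvectors[:, idx].real
r = r / r.sum()                    # normalise scores to sum to 1
assert np.isclose(r.sum(), 1.0)
```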

14
Q

Singular Value Decomposition

A

Takes any matrix A and factorises it as A = U . S . V^T, where S is a diagonal matrix of singular values and U and V are formed from eigenvectors (of A . A^T and A^T . A respectively).
By discarding the small values in S, and the corresponding parts of U and V, we allow compression: the amount of data used to store an image, for example, drops sharply with little to no visible change, since the large singular values and their vectors in U and V carry the important structure.
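A numpy sketch of SVD-based compression, using a hypothetical rank-1 matrix so that one singular value carries all the information:

```python
import numpy as np

# Hypothetical rank-1 "image": an outer product, so a single singular
# value (and one column of U and V) describes it completely.
A = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))   # 4x3 matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the largest singular value (rank-1 approximation).
k = 1
A_approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

assert np.allclose(A, A_approx)        # rank-1 matrix recovered exactly
assert np.isclose(S[1:], 0).all()      # remaining singular values ~0
```

For a real image the discarded singular values are small but nonzero, so the reconstruction is approximate rather than exact.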
