Introduction Flashcards

1
Q

Column vector

A

A d × 1 matrix

2
Q

Row vector

A

A 1 × d matrix

3
Q

The dot product is a commutative operation

A

x · y = Σᵢ xᵢ yᵢ = Σᵢ yᵢ xᵢ = y · x, so the order of the operands does not matter
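A quick NumPy check of commutativity; the vectors here are arbitrary illustrative values:

```python
import numpy as np

# Two arbitrary 3-dimensional vectors (illustrative values)
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 0.5])

# The dot product is the same regardless of operand order
forward = np.dot(x, y)   # 1*4 + 2*(-1) + 3*0.5 = 3.5
backward = np.dot(y, x)

assert forward == backward
```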
4
Q

Squared norm or Euclidean norm

A

‖x‖² = x · x = Σᵢ xᵢ²; the Euclidean norm ‖x‖ is its square root
5
Q

Lp-norm

A

‖x‖_p = (Σᵢ |xᵢ|^p)^(1/p); p = 2 recovers the Euclidean norm, and p = 1 gives the Manhattan norm
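A sketch of the Lp-norm for a few values of p, using NumPy's `np.linalg.norm` alongside the raw formula (the vector is an illustrative value):

```python
import numpy as np

x = np.array([3.0, -4.0])

# p = 2 recovers the Euclidean norm: sqrt(9 + 16) = 5
l2 = np.linalg.norm(x, ord=2)

# p = 1 sums absolute values: |3| + |-4| = 7
l1 = np.linalg.norm(x, ord=1)

# Manual Lp formula for p = 3, matching (sum |x_i|^p)^(1/p)
p = 3
l3 = np.sum(np.abs(x) ** p) ** (1.0 / p)
```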
6
Q

The (squared) Euclidean distance between x and y

A

‖x − y‖² = (x − y) · (x − y) = Σᵢ (xᵢ − yᵢ)²
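The distance formula, checked numerically on a 3-4-5 style example (illustrative values):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])

# Squared distance as a dot product of the difference with itself
sq_dist = np.dot(x - y, x - y)   # (1-4)^2 + (2-6)^2 = 25

# The Euclidean distance is its square root
dist = np.sqrt(sq_dist)          # 5.0
```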
7
Q

Dot products satisfy the Cauchy-Schwarz inequality, according to which the dot product between a pair of vectors is bounded above by the product of their lengths

A

|x · y| ≤ ‖x‖ ‖y‖
8
Q

The cosine function between two vectors

A

cos(θ) = (x · y) / (‖x‖ ‖y‖), where θ is the angle between x and y
9
Q

The cosine law

A

‖x − y‖² = ‖x‖² + ‖y‖² − 2 ‖x‖ ‖y‖ cos(θ)
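The Cauchy-Schwarz bound, the cosine, and the cosine law can be checked together on a pair of illustrative vectors:

```python
import numpy as np

x = np.array([2.0, 0.0])
y = np.array([1.0, 1.0])

dot = np.dot(x, y)
nx, ny = np.linalg.norm(x), np.linalg.norm(y)

# Cauchy-Schwarz: |x . y| <= ||x|| ||y||
assert abs(dot) <= nx * ny

# Cosine of the angle between x and y (here 45 degrees, so 1/sqrt(2))
cos_theta = dot / (nx * ny)

# Cosine law: ||x - y||^2 = ||x||^2 + ||y||^2 - 2 ||x|| ||y|| cos(theta)
lhs = np.dot(x - y, x - y)
rhs = nx ** 2 + ny ** 2 - 2 * nx * ny * cos_theta
assert np.isclose(lhs, rhs)
```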
10
Q

One can represent the point [10, 15] in a new coordinate system defined by the orthonormal directions [3/5, 4/5] and [−4/5, 3/5] by computing the dot product of [10, 15] with each of these directions.

A

Therefore, the new coordinates [x’, y’] are defined as follows:

x’ = 10 ∗ (3/5) + 15 ∗ (4/5) = 18

y’ = 10 ∗ (−4/5) + 15 ∗ (3/5) = 1

One can express the original vector using the new axes and coordinates as follows:

[10, 15] = x’[3/5, 4/5] + y’[−4/5, 3/5]

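The worked example above can be verified numerically; the reconstruction step confirms that orthonormal coordinates recover the original point:

```python
import numpy as np

point = np.array([10.0, 15.0])
u = np.array([3 / 5, 4 / 5])    # first orthonormal direction
v = np.array([-4 / 5, 3 / 5])   # second orthonormal direction

# New coordinates are dot products with each direction
x_new = np.dot(point, u)        # 10*(3/5) + 15*(4/5) = 18
y_new = np.dot(point, v)        # 10*(-4/5) + 15*(3/5) = 1

# The original point is recovered from the new axes and coordinates
recovered = x_new * u + y_new * v
assert np.allclose(recovered, point)
```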
11
Q

An example of a multiplication of a 3×2 matrix A = [aij] with a 2-dimensional column vector

x = [x1, x2]ᵀ

A

Ax = [a11 x1 + a12 x2, a21 x1 + a22 x2, a31 x1 + a32 x2]ᵀ, a 3-dimensional column vector
12
Q

An example of the multiplication of a 3-dimensional row vector v = [v1, v2, v3] with the 3 × 2 matrix A

A

vA = [v1 a11 + v2 a21 + v3 a31, v1 a12 + v2 a22 + v3 a32], a 2-dimensional row vector
13
Q

The multiplication of an n×d matrix A with a d-dimensional column vector x to create an n-dimensional column vector Ax is a weighted sum

A

Ax = x1 a1 + x2 a2 + ⋯ + xd ad, where aj denotes the jth column of A; the result is a weighted sum of the columns of A
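A sketch of the weighted-sum view with an illustrative 3 × 2 matrix:

```python
import numpy as np

# A 3 x 2 matrix and a 2-dimensional column vector (illustrative values)
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, 100.0])

# Matrix-vector product
Ax = A @ x

# The same result, written as a weighted sum of the columns of A
weighted = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(Ax, weighted)
```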
14
Q

Outer Product

A

The outer product of an n-dimensional column vector x and a d-dimensional column vector y is the n × d matrix x yᵀ whose (i, j)th entry is xᵢ yⱼ
15
Q

The outer product is not commutative; the order of the operands matters

A

x yᵀ is an n × d matrix, whereas y xᵀ is d × n; in general x yᵀ ≠ y xᵀ, although (x yᵀ)ᵀ = y xᵀ
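The shape mismatch (and the transpose relationship) is easy to see with `np.outer`; the vectors are illustrative:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # n = 3
y = np.array([4.0, 5.0])        # d = 2

# Outer product x y^T is n x d; reversing the operands gives d x n
xyT = np.outer(x, y)
yxT = np.outer(y, x)

assert xyT.shape == (3, 2)
assert yxT.shape == (2, 3)

# The two results are transposes of one another, not equal
assert np.allclose(xyT.T, yxT)
```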
16
Q

An example of a matrix multiplication

A

For an n × k matrix A and a k × d matrix B, the product AB is the n × d matrix whose (i, j)th entry is (AB)ᵢⱼ = Σₛ aᵢₛ bₛⱼ
17
Q

Types of matrices

A

Common types include rectangular, square, symmetric (A = Aᵀ), diagonal, identity, and upper- and lower-triangular matrices

18
Q

Inverse of a 2 × 2 matrix

A

For A = [[a, b], [c, d]] with ad − bc ≠ 0, the inverse is A⁻¹ = (1 / (ad − bc)) [[d, −b], [−c, a]]

19
Q

An orthogonal matrix is a square matrix whose inverse is its transpose

A

AᵀA = AAᵀ = I, so A⁻¹ = Aᵀ; equivalently, the columns (and rows) of A are orthonormal
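A quick check using a rotation matrix, a standard example of an orthogonal matrix:

```python
import numpy as np

theta = 0.3
# A 2 x 2 rotation matrix is orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Its transpose acts as its inverse on both sides
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Q @ Q.T, np.eye(2))
```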

20
Q

Infinite geometric series of matrices

A

(I − A)⁻¹ = I + A + A² + A³ + ⋯ , valid when the powers Aᵏ shrink to zero as k grows
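A numerical sketch of the series, assuming a matrix whose powers shrink to zero (illustrative entries):

```python
import numpy as np

# A matrix whose powers decay to zero
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])

# Partial sums I + A + A^2 + ... approach (I - A)^{-1}
partial = np.zeros((2, 2))
term = np.eye(2)
for _ in range(50):
    partial = partial + term
    term = term @ A

assert np.allclose(partial, np.linalg.inv(np.eye(2) - A))
```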

21
Q

Frobenius Norm (energy)

A

Defined as the square root of the sum of the absolute squares of the elements of a matrix: ‖A‖_F = √(Σᵢⱼ aᵢⱼ²)

22
Q

The trace of a square matrix A

A

Is defined as the sum of its diagonal entries: tr(A) = Σᵢ aᵢᵢ

23
Q

The energy of a rectangular matrix A is equal to the trace of either AAᵀ or AᵀA

A

‖A‖_F² = tr(AAᵀ) = tr(AᵀA)
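Both trace identities can be checked on an illustrative rectangular matrix:

```python
import numpy as np

# A rectangular matrix (illustrative values)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

energy = np.sum(A ** 2)   # squared Frobenius norm: 1+4+9+16+25+36 = 91

# Both trace identities give the same energy
assert np.isclose(energy, np.trace(A @ A.T))
assert np.isclose(energy, np.trace(A.T @ A))
```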

24
Q

Pre-multiplying a matrix X with an elementary matrix corresponding to an interchange results in an interchange of the rows of X

25
Q

Post-multiplication of matrix X with an elementary matrix results in the exactly analogous operation on the columns of X to create X'

26
Q

Permutation matrix and its transpose

A

Are inverses of one another because they have orthonormal columns

27
Q

The point [a cos(α), a sin(α)] has magnitude a and makes a counter-clockwise angle of α with the X-axis. One can multiply it with the rotation matrix [[cos(θ), −sin(θ)], [sin(θ), cos(θ)]] to yield a counter-clockwise rotation of the vector by angle θ

A

The result is [a cos(α + θ), a sin(α + θ)]

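A numerical sketch: rotating a point of magnitude a at angle α by θ adds θ to the angle while preserving the magnitude (all values illustrative):

```python
import numpy as np

a, alpha, theta = 2.0, np.pi / 6, np.pi / 3

# Point of magnitude a at counter-clockwise angle alpha
p = np.array([a * np.cos(alpha), a * np.sin(alpha)])

# Standard counter-clockwise rotation matrix by angle theta
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotating adds theta to the angle while keeping the magnitude
rotated = R @ p
expected = np.array([a * np.cos(alpha + theta), a * np.sin(alpha + theta)])
assert np.allclose(rotated, expected)
```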
28
Q

Elementary matrices for geometric operations

29
Q

A fundamental result of linear algebra is that any square matrix can be shown to be a product of rotation/reflection/scaling matrices

A

By using a technique called singular value decomposition

30
Q

Matrix factorization using the squared Frobenius norm

A

The squared Frobenius norm is the sum of the squares of the entries in the residual matrix (D − UVᵀ)

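A minimal sketch of the objective on an illustrative rank-1 matrix, where the residual vanishes exactly:

```python
import numpy as np

# Rank-1 factorization residual (illustrative D, U, V)
D = np.array([[1.0, 2.0],
              [2.0, 4.0]])
U = np.array([[1.0],
              [2.0]])
V = np.array([[1.0],
              [2.0]])

residual = D - U @ V.T
# Objective: squared Frobenius norm of the residual
objective = np.sum(residual ** 2)
assert objective == 0.0   # this D is exactly rank 1, so the fit is perfect
```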
31
Q

The d-dimensional vector of partial derivatives is referred to as the gradient

A

∇F(w) = [∂F/∂w1, …, ∂F/∂wd]ᵀ

32
Q

Univariate Taylor expansion of the function f(w) at w = a

A

f(w) = f(a) + (w − a) f′(a) + (w − a)² f″(a)/2! + ⋯ = Σₙ f⁽ⁿ⁾(a) (w − a)ⁿ / n!

33
Q

Taylor expansion of the exponential function exp(w) at 0

A

exp(w) = 1 + w + w²/2! + w³/3! + ⋯

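Partial sums of the series converge quickly to exp(w); a quick check with the standard library:

```python
import math

w = 0.5
# Partial sums of 1 + w + w^2/2! + w^3/3! + ... approach exp(w)
approx = sum(w ** n / math.factorial(n) for n in range(20))
assert abs(approx - math.exp(w)) < 1e-12
```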
34
Q

Multivariable Taylor expansion of functions F(w) with d-dimensional arguments of the form w = [w1 … wd]ᵀ. The Taylor expansion of the function F(w) about w = a = [a1 … ad]ᵀ can be written as follows:

A

F(w) = F(a) + (w − a)ᵀ ∇F(a) + (1/2) (w − a)ᵀ H(a) (w − a) + (higher-order terms), where H(a) is the Hessian matrix of second-order partial derivatives evaluated at a

35
Q

Second-order Taylor approximation can be written in vector form

A

F(w) ≈ F(a) + (w − a)ᵀ ∇F(a) + (1/2) (w − a)ᵀ H(a) (w − a)

36
Q

Normal equation using calculus

A

Setting the gradient of ‖Dw − y‖² to zero yields DᵀD w = Dᵀ y, so w = (DᵀD)⁻¹ Dᵀ y

37
Q

Gradient descent with the normal equation

A

w ← w − α Dᵀ (Dw − y), where α is the learning rate

38
Q

Gradient descent in compact form

A

w ← w − α ∇F(w)

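A sketch tying the last few cards together: gradient descent on an illustrative least-squares problem converges to the normal-equation solution (the data and step size here are illustrative choices):

```python
import numpy as np

# Least-squares setup (illustrative data): minimize ||D w - y||^2
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 3))
y = rng.normal(size=20)

# Closed-form solution from the normal equation D^T D w = D^T y
w_exact = np.linalg.solve(D.T @ D, D.T @ y)

# Gradient descent: w <- w - alpha * D^T (D w - y)
w = np.zeros(3)
alpha = 0.01
for _ in range(5000):
    w = w - alpha * D.T @ (D @ w - y)

assert np.allclose(w, w_exact)
```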
39
Q

Transpose of AB

A

(AB)ᵀ = Bᵀ Aᵀ

40
Q

The product of matrices AB can be expressed as the sum of the outer products of the columns of A and the corresponding rows of B

A

AB = Σₖ Aₖ Bₖ, where Aₖ is the kth column of A and Bₖ is the kth row of B

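The column-row outer-product view, checked on illustrative 2 × 2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# AB as a sum of outer products of columns of A with rows of B
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
assert np.allclose(outer_sum, A @ B)
```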