Introduction Flashcards

1
Q

Column vector

A

A d × 1 matrix

2
Q

Row vector

A

A 1 × d matrix

3
Q

The dot product is a commutative operation

A

The dot product x · y = x1y1 + x2y2 + … + xdyd depends symmetrically on x and y, so x · y = y · x
4
Q

Squared norm or Euclidean norm

A

The squared norm of x is ‖x‖² = x · x = x1² + … + xd². Its square root ‖x‖ is the Euclidean norm, i.e., the length of the vector
5
Q

Lp-norm

A

‖x‖p = (|x1|^p + … + |xd|^p)^(1/p); the Euclidean norm is the special case p = 2
6
Q

The (squared) Euclidean distance between x and y

A

‖x − y‖² = (x1 − y1)² + … + (xd − yd)² = ‖x‖² + ‖y‖² − 2 x · y; the Euclidean distance itself is the square root of this quantity
7
Q

Dot products satisfy the Cauchy-Schwarz inequality, according to which the dot product between a pair of vectors is bounded above by the product of their lengths

A

|x · y| ≤ ‖x‖ ‖y‖, with equality exactly when x and y are collinear
8
Q

The cosine function between two vectors

A

cos(θ) = (x · y) / (‖x‖ ‖y‖), where θ is the angle between x and y
9
Q

The cosine law

A

‖x − y‖² = ‖x‖² + ‖y‖² − 2 ‖x‖ ‖y‖ cos(θ), where θ is the angle between x and y
10
Q

You are given the orthonormal directions [3/5, 4/5] and [−4/5, 3/5]. One can represent the point [10, 15] in a new coordinate system defined by the directions [3/5, 4/5] and [−4/5, 3/5] by computing the dot product of [10, 15] with each of these vectors.

A

Therefore, the new coordinates [x’, y’] are defined as follows:

x’ = 10 ∗ (3/5) + 15 ∗ (4/5) = 18

y’ = 10 ∗ (−4/5) + 15 ∗ (3/5) = 1

One can express the original vector using the new axes and coordinates as follows:

[10, 15] = x’[3/5, 4/5] + y’[−4/5, 3/5]

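A quick numerical check of the coordinate change above (a sketch using NumPy, which is assumed to be available):

```python
import numpy as np

# The two orthonormal directions and the point from the card above.
u = np.array([3/5, 4/5])
v = np.array([-4/5, 3/5])
p = np.array([10, 15])

# The new coordinates are the dot products with each direction.
x_new = p @ u   # 10*(3/5) + 15*(4/5) = 18
y_new = p @ v   # 10*(-4/5) + 15*(3/5) = 1

# Reconstruct the original point from the new axes and coordinates.
reconstructed = x_new * u + y_new * v
```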
11
Q

An example of a multiplication of a 3×2 matrix A = [aij] with a 2-dimensional column vector

x = [x1, x2]T

A

Ax = [a11x1 + a12x2, a21x1 + a22x2, a31x1 + a32x2]T, a 3-dimensional column vector
12
Q

An example of the multiplication of a 3-dimensional row vector v = [v1, v2, v3] with the 3 × 2 matrix A

A

vA = [v1a11 + v2a21 + v3a31, v1a12 + v2a22 + v3a32], a 2-dimensional row vector
13
Q

The multiplication of an n×d matrix A with a d-dimensional column vector x to create an n-dimensional column vector Ax is a weighted sum

A

Ax = x1a1 + x2a2 + … + xdad, where aj denotes the jth column of A. In other words, Ax is a weighted sum of the columns of A, with the entries of x as the weights
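The weighted-sum view of matrix-vector multiplication can be verified numerically; the matrix and vector below are illustrative choices:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])   # a 3 x 2 matrix
x = np.array([10., -1.])   # a 2-dimensional vector

# Direct matrix-vector product.
direct = A @ x

# The same result as a weighted sum of the columns of A,
# with the entries of x as the weights.
weighted_sum = x[0] * A[:, 0] + x[1] * A[:, 1]
```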
14
Q

Outer Product

A

The outer product of an n-dimensional column vector x and a d-dimensional column vector y is the n × d matrix xyT whose (i, j)th entry is xiyj
15
Q

The outer product is not commutative; the order of the operands matters

A

xyT and yxT have different shapes in general (n × d versus d × n) and are transposes of one another
16
Q

An example of a matrix multiplication

A

For example, with A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]]:

AB = [[1 ∗ 5 + 2 ∗ 7, 1 ∗ 6 + 2 ∗ 8], [3 ∗ 5 + 4 ∗ 7, 3 ∗ 6 + 4 ∗ 8]] = [[19, 22], [43, 50]]

Each entry (i, j) of AB is the dot product of the ith row of A with the jth column of B
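The entry-wise definition of matrix multiplication can be checked against NumPy's built-in product (the matrices are the same illustrative 2 × 2 pair as above):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Each entry (i, j) of AB is the dot product of row i of A
# with column j of B.
C = np.array([[A[i] @ B[:, j] for j in range(2)] for i in range(2)])
```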
17
Q

Types of matrices

A

Common special types include square matrices, symmetric matrices (A = AT), diagonal matrices, the identity matrix I, upper- and lower-triangular matrices, and rectangular diagonal matrices
18
Q

Inverse of a 2 × 2 matrix

A

For A = [[a, b], [c, d]], the inverse is A⁻¹ = (1/(ad − bc)) ∗ [[d, −b], [−c, a]], provided the determinant ad − bc is nonzero
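The closed-form 2 × 2 inverse can be checked against NumPy's general-purpose inverse; the matrix below is an illustrative choice:

```python
import numpy as np

def inverse_2x2(M):
    # Closed-form inverse of a 2 x 2 matrix [[a, b], [c, d]],
    # valid when the determinant ad - bc is nonzero.
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[4., 7.],
              [2., 6.]])   # determinant 4*6 - 7*2 = 10
A_inv = inverse_2x2(A)
```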
19
Q

An orthogonal matrix is a square matrix whose inverse is its transpose

A

A square matrix A is orthogonal when AAT = ATA = I, i.e., A⁻¹ = AT. Its columns (and rows) are mutually orthogonal unit vectors
20
Q

Infinite geometric series of matrices

A

(I − A)⁻¹ = I + A + A² + A³ + …, which holds when the series converges (for example, when every eigenvalue of A has absolute value less than 1)
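A numerical sketch of the geometric series identity, using a small illustrative matrix whose eigenvalues have absolute value below 1 so the series converges:

```python
import numpy as np

# A triangular matrix with eigenvalues 0.2 and 0.3, both below 1
# in absolute value, so I + A + A^2 + ... converges.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])

# Partial sum of the geometric series I + A + A^2 + ...
S = np.zeros((2, 2))
term = np.eye(2)
for _ in range(100):
    S = S + term
    term = term @ A

inverse = np.linalg.inv(np.eye(2) - A)
```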
21
Q

Frobenius Norm (energy)

A

Defined as the square root of the sum of the absolute squares of the elements of a matrix: ‖A‖F = √(Σi Σj aij²)

22
Q

The trace of a square matrix A

A

Is defined as the sum of its diagonal entries: tr(A) = a11 + a22 + … + add

23
Q

The energy of a rectangular matrix A is equal to the trace of either AAT or ATA

A

‖A‖F² = tr(AAT) = tr(ATA), since each trace sums the squares of all entries of A
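The energy/trace identity can be verified on a small rectangular matrix (an illustrative choice):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # a rectangular 2 x 3 matrix

energy = np.sum(A ** 2)        # squared Frobenius norm
trace_AAT = np.trace(A @ A.T)
trace_ATA = np.trace(A.T @ A)
```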
24
Q

Pre-multiplying a matrix X with an elementary matrix corresponding to an interchange results in an interchange of the rows of X

A

For example, swapping rows 1 and 2 of the identity matrix gives an elementary matrix E, and EX equals X with its first two rows interchanged
25
Q

Post-multiplication of a matrix X with elementary matrices results in exactly analogous operations on the columns of X to create X’

A

Post-multiplying by an interchange, addition, or scaling elementary matrix interchanges, adds, or scales the corresponding columns of X
26
Q

Permutation matrix and its transpose

A

Are inverses of one another because they have orthonormal columns

27
Q

The point [a cos(α), a sin(α)] has magnitude a and makes a counter-clockwise angle of α with the X-axis. One can multiply it with the 2 × 2 rotation matrix for angle θ to yield a counter-clockwise rotation of the vector by θ

A

The rotation matrix is R = [[cos(θ), −sin(θ)], [sin(θ), cos(θ)]], and R[a cos(α), a sin(α)]T = [a cos(α + θ), a sin(α + θ)]T
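A numerical sketch of the rotation: rotating a point at angle α by θ lands at angle α + θ with the same magnitude (the values of a, α, and θ are illustrative):

```python
import numpy as np

def rotation(theta):
    # Counter-clockwise rotation matrix for angle theta.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, alpha, theta = 2.0, np.pi / 6, np.pi / 3
p = np.array([a * np.cos(alpha), a * np.sin(alpha)])

# Rotating p by theta gives a point at angle alpha + theta,
# with the same magnitude a.
rotated = rotation(theta) @ p
expected = np.array([a * np.cos(alpha + theta), a * np.sin(alpha + theta)])
```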
28
Q

Elementary matrices for geometric operations

A

Rotation matrices, reflection matrices (e.g., [[1, 0], [0, −1]] reflects across the X-axis), and scaling matrices (diagonal matrices that stretch or shrink each axis)
29
Q

A fundamental result of linear algebra is that any square matrix can be shown to be a product of rotation/reflection/scaling matrices

A

By using a technique called singular value decomposition

30
Q

Matrix factorization using the squared Frobenius norm

A

The squared Frobenius norm of the residual matrix (D − UVT) is the sum of the squares of its entries

31
Q

The d-dimensional vector of partial derivatives is referred to as the gradient

A

∇F(w) = [∂F/∂w1, …, ∂F/∂wd]T
32
Q

Univariate Taylor expansion of the function F(w) at w = a

A

F(w) = Σr≥0 (w − a)^r F^(r)(a) / r! = F(a) + (w − a) F′(a) + (w − a)² F″(a)/2! + …
33
Q

Taylor expansion of the exponential function at 0

exp(w)

A

exp(w) = Σr≥0 w^r / r! = 1 + w + w²/2! + w³/3! + …
34
Q

Multivariable Taylor expansion of functions F(w) with d-dimensional arguments of the form w = [w1 …wd]T. The Taylor expansion of the function F(w) about w = a = [a1 …ad]T can be written as follows:

A

F(w) = F(a) + (w − a)T ∇F(a) + (1/2)(w − a)T H(a)(w − a) + (higher-order terms), where ∇F(a) is the gradient and H(a) is the Hessian matrix of second partial derivatives evaluated at a
35
Q

Second-order Taylor approximation can be written in vector form

A

F(w) ≈ F(a) + (w − a)T ∇F(a) + (1/2)(w − a)T H(a)(w − a)
36
Q

Normal equation using calculus

A

For the least-squares objective J(w) = ‖Xw − y‖², setting the gradient 2XT(Xw − y) to zero yields the normal equation XTXw = XTy, with solution w = (XTX)⁻¹XTy when XTX is invertible
37
Q

Gradient descent with the normal equation

A

Instead of solving the normal equation directly, one can iteratively update w ← w − α XT(Xw − y), where α > 0 is the learning rate; each update follows the negative gradient of the least-squares objective
38
Q

Gradient descent in compact form

A

w ← w − α ∇J(w), where J is the objective function and α is the learning rate
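A minimal sketch of the update w ← w − α ∇J(w) applied to least-squares regression; the synthetic data, learning rate, and iteration count are illustrative choices:

```python
import numpy as np

# Synthetic, noiseless least-squares problem (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
alpha = 0.01                        # learning rate (illustrative choice)
for _ in range(2000):
    gradient = X.T @ (X @ w - y)    # gradient of (1/2) * ||Xw - y||^2
    w = w - alpha * gradient
```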
39
Q

Transpose of AB

A

(AB)T = BTAT — the transpose of a product is the product of the transposes in reverse order
40
Q

The product of matrices AB can be expressed as the outer product of the columns of A and the rows of B

A

If A is n × d with columns a1, …, ad and B is d × m with rows b1T, …, bdT, then AB = a1b1T + a2b2T + … + adbdT, a sum of d outer products
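The outer-product expansion of a matrix product can be verified numerically (the matrices below are illustrative choices):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])               # 3 x 2
B = np.array([[7., 8., 9., 10.],
              [11., 12., 13., 14.]])   # 2 x 4

# Sum over k of the outer product of the k-th column of A
# with the k-th row of B.
outer_sum = sum(np.outer(A[:, k], B[k]) for k in range(A.shape[1]))
```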