SVD Flashcards

(26 cards)

1
Q

What problem did the Netflix Prize aim to solve?

A

Predict user preferences based on historical ratings using collaborative filtering.

2
Q

What technique underlies many recommendation systems?

A

Matrix factorization using SVD (Singular Value Decomposition).

3
Q

What does matrix factorization aim to do?

A

Break a matrix into structured components that capture underlying patterns.

4
Q

What is the general form of the SVD for a matrix X?

A

X = S V Dᵀ

5
Q

What do S, V, and D represent in the SVD?

A

S: left singular vectors, V: singular values (diagonal), D: right singular vectors.

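The decomposition on this card can be checked directly in NumPy; the matrix below is made up for illustration, and note that `np.linalg.svd` returns the singular values as a 1-D array rather than as the diagonal matrix V:

```python
import numpy as np

# A small, hypothetical data matrix.
X = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# np.linalg.svd returns the left singular vectors, the singular
# values (as a 1-D array, sorted descending), and the transpose
# of the right singular vectors.
S, sigma, Dt = np.linalg.svd(X, full_matrices=False)

# Rebuild X = S V Dᵀ, with V the diagonal matrix of singular values.
V = np.diag(sigma)
X_rebuilt = S @ V @ Dt

assert np.allclose(X, X_rebuilt)
```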
6
Q

What is the rank of a matrix in terms of SVD?

A

The number of non-zero singular values.

7
Q

What does linear dependence between columns mean?

A

At least one column can be written as a combination of others.

8
Q

What happens to singular values for a rank-deficient matrix?

A

Some singular values are zero and can be dropped in approximations.

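Cards 6–8 can be illustrated with a small rank-deficient example (the matrix is made up; in floating point, "zero" singular values are compared against a tolerance):

```python
import numpy as np

# The third column is the sum of the first two, so the columns are
# linearly dependent and the matrix is rank-deficient.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

sigma = np.linalg.svd(X, compute_uv=False)

# Rank = number of singular values above a small tolerance.
rank = int(np.sum(sigma > 1e-10))
print(rank)  # 2, matching np.linalg.matrix_rank(X)
```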
9
Q

What does a truncated SVD (rank-k) give you?

A

The best rank-k approximation to the matrix under Frobenius norm.

10
Q

What does the Eckart–Young theorem state?

A

Truncated SVD gives the optimal low-rank approximation under Frobenius norm.

11
Q

What is the Frobenius norm used for in SVD?

A

To measure reconstruction error between original and approximated matrix.

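Cards 9–11 together can be verified numerically: the Frobenius error of the rank-k truncation equals the root of the sum of the squared discarded singular values, which is the Eckart–Young optimum (random data here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))

S, sigma, Dt = np.linalg.svd(X, full_matrices=False)

# Keep only the top-k singular components.
k = 2
X_k = S[:, :k] @ np.diag(sigma[:k]) @ Dt[:k, :]

# Eckart–Young: the Frobenius error of the best rank-k approximation
# is sqrt(sum of the squared discarded singular values).
err = np.linalg.norm(X - X_k, "fro")
assert np.isclose(err, np.sqrt(np.sum(sigma[k:] ** 2)))
```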
12
Q

What does each term σᵢ·sᵢ·dᵢᵀ in the SVD represent?

A

A rank-1 matrix, the outer product of the i-th columns of S and D weighted by σᵢ, capturing one mode of variation in the data.

13
Q

What happens to image storage using SVD compression?

A

It reduces storage size by keeping only top-k singular components.

14
Q

How do you choose k for SVD image compression?

A

Use the elbow method or energy preservation (e.g. retain 99% of variance).

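A sketch of the energy-preservation rule from cards 13–14, using a smooth synthetic gradient as a stand-in for a grayscale image (a real image would be loaded from disk):

```python
import numpy as np

# Hypothetical "image": a smooth 64x64 gradient, img[i, j] = rows[i] + rows[j].
rows = np.linspace(0.0, 1.0, 64)
img = np.add.outer(rows, rows)

S, sigma, Dt = np.linalg.svd(img, full_matrices=False)

# Smallest k that retains 99% of the energy
# (cumulative sum of squared singular values).
energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
k = int(np.searchsorted(energy, 0.99) + 1)

# Storing the rank-k factors takes k*(m + n + 1) numbers
# instead of m*n for the full image.
m, n = img.shape
saved = m * n - k * (m + n + 1)
```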
15
Q

In recommender systems, what does the matrix X represent?

A

A user-item ratings matrix, often sparse and incomplete.

16
Q

How does SVD help with recommendation tasks?

A

By uncovering latent factors and predicting missing entries via low-rank reconstruction.

17
Q

What are the steps for SVD-based recommendation?

A

Factorize, truncate, reconstruct, and recommend top items based on scores.
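The four steps on this card can be sketched as follows. The ratings matrix is made up, and treating 0 as "unrated" is a deliberate simplification: a production system would handle missing entries more carefully (e.g. mean-imputation or gradient-based factorization).

```python
import numpy as np

# Hypothetical user-item ratings; 0 marks an unrated item.
R = np.array([[5.0, 4.0, 0.0, 1.0],
              [4.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 4.0],
              [1.0, 0.0, 4.0, 5.0]])

# 1. Factorize.
S, sigma, Dt = np.linalg.svd(R, full_matrices=False)

# 2.–3. Truncate to k latent dimensions and reconstruct.
k = 2
R_hat = S[:, :k] @ np.diag(sigma[:k]) @ Dt[:k, :]

# 4. Recommend: for a given user, rank unrated items by predicted score.
user = 0
unrated = np.where(R[user] == 0)[0]
best = unrated[np.argmax(R_hat[user, unrated])]
```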

18
Q

What do latent dimensions in recommender systems represent?

A

Hidden features like genre preference or viewing style.

19
Q

How is SVD connected to PCA?

A

PCA is the SVD of a mean-centered data matrix.

20
Q

What does XᵀX equal in terms of SVD?

A

XᵀX = D V² Dᵀ, which is the eigendecomposition of XᵀX; when X is mean-centered, this matrix is proportional to the covariance matrix.

21
Q

What do the columns of D in PCA represent?

A

Principal components (eigenvectors of the covariance matrix).

22
Q

What do the singular values in PCA correspond to?

A

The square roots of the eigenvalues of the covariance matrix, up to the 1/(n−1) normalization: σᵢ² = (n−1)λᵢ for mean-centered data.
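The PCA connection in cards 19–22 can be checked numerically (random data, purely illustrative): after mean-centering, the right singular vectors are the principal components and σᵢ²/(n−1) recovers the covariance eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))

# Mean-center, then take the SVD: the rows of Dt are the
# principal components.
Xc = X - X.mean(axis=0)
S, sigma, Dt = np.linalg.svd(Xc, full_matrices=False)

# Eigenvalues of the covariance matrix equal sigma**2 / (n - 1).
n = Xc.shape[0]
cov = Xc.T @ Xc / (n - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sort descending to match sigma
assert np.allclose(eigvals, sigma ** 2 / (n - 1))
```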

23
Q

Why is SVD more general than eigen-decomposition?

A

It applies to any rectangular matrix, not just symmetric square matrices.

24
Q

What does each σᵢ (singular value) tell us?

A

The strength or importance of the corresponding mode of variation.

25
Q

When is a matrix said to be full rank?

A

When all columns (or rows) are linearly independent.

26
Q

What does dimensionality reduction using SVD preserve?

A

The most significant components of variation in the data.