4) Iterative Methods for Solving Linear Systems Flashcards

1
Q

What is the Reaction-Diffusion Problem

A
2
Q

How do you implement the Centred Finite Difference approximation

A
3
Q

Describe the matrix in the Centred Finite Difference Scheme if r(x) and the boundary conditions are 0

A
4
Q

Describe the eigenvalues of the finite difference approximation matrix using a cos approximation

A
5
Q

What is Jacobi Iteration

A
6
Q

What is Gauss-Seidel Iteration

A
7
Q

What defines the error in Jacobi or Gauss-Seidel iterative methods

A
8
Q

How many operations are required to perform one iteration of the Gauss-Seidel method on Ax = b

A

O(n^2) for a dense n × n matrix
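
For a dense matrix this comes from the row-by-row updates; a minimal Python sketch of one Gauss-Seidel sweep (an illustration, not the course's code) makes the two nested length-n loops, and hence the O(n^2) operation count, explicit:

    import numpy as np

    def gauss_seidel_sweep(A, b, x):
        # One Gauss-Seidel sweep: update each component of x in turn,
        # using the newest available values. Two nested loops of length n
        # give O(n^2) work per iteration for a dense matrix A.
        n = len(b)
        for i in range(n):
            s = 0.0
            for j in range(n):
                if j != i:
                    s += A[i, j] * x[j]   # x[j] already updated for j < i
            x[i] = (b[i] - s) / A[i, i]
        return x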

9
Q

What is the residual

A
10
Q

What condition determines a solution in Jacobi or Gauss-Seidel methods

A

In both Jacobi and Gauss-Seidel methods, a vector x is a solution to the linear system Ax = b if and only if it is a fixed point of the iteration. This means that x satisfies the equation x = Tx + c
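
As a short worked illustration (assuming the standard splitting A = D + L + U used for these methods): for Jacobi, T = −D^−1(L + U) and c = D^−1 b, so

    x = Tx + c  ⟺  Dx = −(L + U)x + b  ⟺  (D + L + U)x = b  ⟺  Ax = b,

and fixed points of the iteration are exactly the solutions of the linear system; the same argument works for Gauss-Seidel with T = −(D + L)^−1 U and c = (D + L)^−1 b.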

11
Q

Describe the proof that in both Jacobi and Gauss-Seidel methods, a vector x is a solution to Ax=b if and only if it is a fixed point of the iteration

A
12
Q

How is the error progression formulated in Jacobi and Gauss-Seidel iterations

A
13
Q

What is a vector norm

A
14
Q

What are the three most commonly used vector norms

A
15
Q

What does it mean for a sequence of vectors to converge with respect to a vector norm

A
16
Q

What is the relationship between the infinity norm and the 2-norm of a vector in R^n

A
17
Q

Describe the proof of

A
18
Q

What is the relationship between convergence in the 2-norm and convergence in the infinity norm

A
19
Q

Describe the proof that convergence in the 2-norm is equivalent to convergence in the infinity norm

A
20
Q

What is a matrix norm

A
21
Q

What is the definition of a matrix norm induced by a vector norm

A
22
Q

Describe the proof that

A
23
Q

If ∥·∥ is a vector norm, then what do we know about ∥A∥

A
24
Q

Let ∥A∥ be the induced matrix norm, show that ∥A + B∥ ≤ ∥A∥ + ∥B∥

A
25
Q

Let ∥A∥ be the induced matrix norm, show that ∥AB∥ ≤ ∥A∥∥B∥

A
26
Q

What are the matrix norms induced by the vector 1-norm and ∞-norm

A
27
Q

What is the spectral radius of a matrix

A
28
Q

How do you find the eigenvalues of a matrix

A
29
Q

What is the 2-norm of a matrix

A
30
Q

What is the 2-norm of a symmetric matrix

A
31
Q

Assume D + L is non-singular and consider the Gauss-Seidel iteration matrix T = −(D + L)^−1 U.
Show that λ is an eigenvalue of T if and only if det(λ(D + L) + U) = 0

A
32
Q

Describe the proof that

A
33
Q

Show that for the induced matrix norm ∥A∥, we always have ∥A∥ ≥ ρ(A).

A

Let λ be an eigenvalue of A with eigenvector x, so Ax = λx.
Then ∥Ax∥ = ∥λx∥ = |λ|∥x∥.
Using the defining property of the induced matrix norm, ∥Ax∥ ≤ ∥A∥∥x∥.
Hence |λ|∥x∥ ≤ ∥A∥∥x∥.
Since eigenvectors are non-zero, ∥x∥ ≠ 0, and so |λ| ≤ ∥A∥.
ρ(A) is the maximum of |λ| over all eigenvalues, so ρ(A) ≤ ∥A∥.
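
A quick numeric sanity check (a numpy sketch, not part of the original card) comparing the induced ∞-norm with the spectral radius on a small symmetric matrix:

    import numpy as np

    A = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])
    rho = max(abs(np.linalg.eigvals(A)))   # spectral radius: eigenvalues are 1 and 3
    norm_inf = np.linalg.norm(A, np.inf)   # induced infinity-norm = maximum row sum = 3
    print(rho, norm_inf)                   # both 3.0 (up to rounding), so rho(A) <= ||A||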

34
Q

What is the relationship between the error at the k-th iteration and the initial error in an iterative method for solving Ax = b using x^(k+1) = T x^k + c

A
35
Q

Describe the proof that

A
36
Q

What are the conditions for the sequence x^k in an iterative method to converge to a fixed point x satisfying x = Tx + c, with respect to the chosen vector norm ∥·∥

A
  • Vector norm condition: if ∥T∥ < 1, the iterates x^k converge to the fixed point for any starting point x^0
  • Spectral radius condition: the iterates x^k converge for all starting points x^0 if and only if ρ(T) < 1 (see the sketch below)
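
A minimal numpy sketch (with an assumed 3 × 3 test system, for illustration only) that checks the spectral radius condition for the Jacobi iteration matrix and then runs x^(k+1) = T x^k + c:

    import numpy as np

    A = np.array([[4.0, 1.0, 1.0],
                  [1.0, 4.0, 1.0],
                  [1.0, 1.0, 4.0]])
    b = np.array([6.0, 6.0, 6.0])

    D = np.diag(np.diag(A))
    T = -np.linalg.inv(D) @ (A - D)       # Jacobi iteration matrix -D^-1 (L + U)
    c = np.linalg.inv(D) @ b
    rho = max(abs(np.linalg.eigvals(T)))  # spectral radius of T
    print(rho)                            # 0.5 < 1, so the iterates converge

    x = np.zeros(3)
    for _ in range(50):
        x = T @ x + c                     # x^(k+1) = T x^k + c
    print(x)                              # approaches the exact solution [1, 1, 1]
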
37
Q

Describe the proof that if ∥T∥ < 1 then the sequence x^k, k ≥ 0, converges to a fixed point x satisfying x = Tx + c, with respect to the chosen vector norm ∥·∥

A
38
Q

Describe the proof that the x^k iterates converge for all starting points x^0 if and only if ρ(T) < 1

A
39
Q

What is Gershgorin’s Circle Theorem

A
40
Q

Describe the proof of Gershgorin’s Circle Theorem

A
41
Q

What does it mean to be diagonally dominant

A
42
Q

If a matrix A is diagonally dominant, what does that imply about the convergence of iterative methods applied to it

A

Let A be diagonally dominant. Then the Jacobi method converges to a solution of the system Ax = b for any starting point x^0
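
A sketch of why (essentially the idea behind the proof asked for on the next card): with the Jacobi iteration matrix T = −D^−1(L + U), row i of T has entries −a_ij / a_ii for j ≠ i, so diagonal dominance (|a_ii| > Σ_{j≠i} |a_ij| in every row) gives

    ∥T∥_∞ = max_i Σ_{j≠i} |a_ij| / |a_ii| < 1,

and convergence for any starting point x^0 then follows from the ∥T∥ < 1 criterion of card 36.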

43
Q

Describe the proof that if A is diagonally dominant then the Jacobi method converges to a solution of the system Ax = b for any starting point x^0

A
44
Q

What is the condition number of a matrix A

A

Let ∥·∥ be a matrix norm and let A be an invertible matrix.
The condition number of A is defined as cond(A) := ∥A∥ · ∥A^−1∥
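
A standard small example (an illustration, not from the card): in the 2-norm, the diagonal matrix A = diag(1, 10^−4) has ∥A∥ = 1 and ∥A^−1∥ = 10^4, so cond(A) = 10^4 even though A is invertible.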

45
Q

Explain the importance of the condition number of a linear system

A
  • The condition number measures the sensitivity of a linear system to changes in the input data: a high condition number means that small perturbations in the data can produce large changes in the solution (see the bound sketched below)
  • A high condition number suggests a higher likelihood of significant errors due to rounding and truncation
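
One standard way to make this quantitative (a well-known bound for induced norms, included here as an illustration): if Ax = b and A(x + δx) = b + δb, then

    ∥δx∥ / ∥x∥ ≤ cond(A) · ∥δb∥ / ∥b∥,

so cond(A) is the worst-case factor by which relative errors in the data are amplified in the solution.
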
46
Q

What is the relationship between the error, residual, and condition number in iterative methods

A
47
Q

Describe the proof that cond(A) ≥ 1 for any matrix A

A