01 - Recap & Optimization Flashcards

1
Q

Explain the camera model and its intrinsics.

A

Projection matrix P:
m = K [R | t] M = P M
where
K - intrinsic matrix (calibration)
[R | t] - extrinsic matrix, the top three rows of the pose H = [R t; 0 1]
R - rotation matrix
t - translation (position)
R, t - together: the pose
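
A minimal MATLAB-style sketch of building P and projecting a point (all numbers are made-up example values):

K = [800 0 320; 0 800 240; 0 0 1];  % hypothetical intrinsics: f = 800 px, principal point (320, 240)
R = eye(3);             % camera looking along the z-axis
t = [0; 0; 5];          % world origin 5 units in front of the camera
P = K * [R, t];         % 3x4 projection matrix
M = [0.1; -0.2; 1; 1];  % homogeneous 3D point
m = P * M;              % homogeneous image point
m = m / m(3)            % dehomogenise to get pixel coordinates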

2
Q

What is the homogeneous coordinate transform matrix and its inverse?

A

H = [R t; 0 1]  ->  H^-1 = [R^T  -R^T t; 0 1]
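
A quick numeric check of the inverse formula, with a made-up pose:

theta = 0.3;            % rotation about the z-axis
R = [cos(theta) -sin(theta) 0; sin(theta) cos(theta) 0; 0 0 1];
t = [1; 2; 3];
H    = [R, t; 0, 0, 0, 1];
Hinv = [R', -R'*t; 0, 0, 0, 1];
disp(H * Hinv)          % prints the 4x4 identity, up to rounding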

3
Q

In the general multi-view case, which unknowns are involved and which can be recovered under which assumptions?

A

The unknowns in m = K [R t] M = P M are K, the pose (R, t), and the 3D points M.
We can recover:
K - via camera calibration
P (R, t) - via pose estimation
We cannot recover exactly:
the 3D points (only good estimates)
K and P from image measurements alone (only estimates)

4
Q

What are suitable criteria for triangulation?

A

Two calibrated cameras and a pair of corresponding image points.

The distance to the point should not be larger than the baseline (or not more than about 3 times the baseline; sources disagree on the exact factor).

5
Q

What is optimal triangulation and how is it optimal?

A

There is the DLT for two cameras (a linear system) and the minimiser of the sum of squared reprojection errors for n cameras (non-linear): each reprojection error has a delta x and a delta y, so with multiple cameras we square all of them and take the sum (see the sketch below).
The DLT minimises only an algebraic error; the least-squares minimiser is statistically optimal.
For this we assume Gaussian noise.
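
A minimal MATLAB-style sketch of the cost being minimised, assuming hypothetical cell arrays P{i} (3x4 projection matrices) and m{i} (2x1 measured image points) for n cameras:

function e = reproj_error(P, m, X)
  % sum of squared reprojection errors of the homogeneous 3D point X
  e = 0;
  for i = 1:numel(P)
    p = P{i} * X;        % project into camera i
    p = p(1:2) / p(3);   % dehomogenise
    d = p - m{i};        % delta x and delta y
    e = e + d' * d;      % accumulate the squared residual
  end
end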

6
Q

How is triangulation affected by noise and imaging geometry?

A

The quality of the 3D points is influenced by the quality of the orientation parameters and of the measured image coordinates.
Generally we want to make sure that the points are not too far away from the cameras.

7
Q

Explain the pros and cons of the different triangulation methods.

A

Geometric: simple and quick, but not optimal / inaccurate.
DLT: better, but does not handle the non-linearity very well; works for any number of corresponding views but is not projectively invariant.
Optimal: handles the non-linear problem and is statistically optimal, but assumes Gaussian noise.

8
Q

How do we solve Ax = b problems in the under- and over-determined cases?

A

Under-determined: no or infinitely many solutions; if solvable, one usually picks the minimum-norm solution (via the pseudoinverse).
Over-determined: least squares, i.e. minimise ||Ax - b||^2 (see the sketch below).
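
A MATLAB-style sketch of both cases, with made-up data:

% Over-determined (3 equations, 2 unknowns): least squares
A = [1 1; 1 2; 1 3]; b = [1; 2; 2];
x_ls = A \ b;            % minimises ||A*x - b||^2

% Under-determined (1 equation, 3 unknowns): infinitely many solutions
A2 = [1 2 3]; b2 = 6;
x_mn = pinv(A2) * b2;    % pinv picks the minimum-norm solution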

9
Q

How do we solve Ax=0 problems?

A

Precondition: the 2D points and the projection matrices are known.
Build the matrix A from the correspondences; the solution is the right singular vector of the smallest singular value:
[U, S, V] = svd(A);   % A = U*S*V'
X = V(:, end);        % last column of V minimises ||A*X|| subject to ||X|| = 1
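
For two cameras this is the DLT triangulation; a sketch assuming projection matrices P1, P2 and measured pixel coordinates (u1, v1), (u2, v2) are given:

A = [u1 * P1(3,:) - P1(1,:);
     v1 * P1(3,:) - P1(2,:);
     u2 * P2(3,:) - P2(1,:);
     v2 * P2(3,:) - P2(2,:)];
[U, S, V] = svd(A);
X = V(:, end);           % homogeneous 3D point
X = X / X(4)             % dehomogenise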

10
Q

When do we need non-linear optimization and what are the limitations/problems of non-linear optimization?

A

When we have more than 2 cameras
Computation time (→ too few iterations to find optimal solution)
Needs a good initial guess to find the global minimum

11
Q

Explain the gradient descent algorithm and its expected convergence.

A

First-order method: repeatedly step in the direction of the negative gradient, x ← x − step * ∇f(x). It converges towards a local minimum (linearly, for a suitable step size); see the sketch below.
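
A minimal sketch on a made-up quadratic:

% Gradient descent on f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2
grad = @(x) [2*(x(1) - 1); 20*(x(2) + 2)];
x = [0; 0];              % initial guess
step = 0.04;             % fixed step size, must be small enough
for k = 1:200
  x = x - step * grad(x);    % step along the negative gradient
end
disp(x)                  % close to the minimum (1, -2)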

12
Q

Explain Newton’s method and its expected convergence

A
  • Quadratic approximation of the objective
  • Uses second-order derivative information (the Hessian matrix)
  • Updates the point by subtracting the inverse Hessian (2nd derivatives) times the gradient (1st derivatives): x ← x − H^-1 ∇f(x) (see the sketch below)
  • Good locally, but also does not promise a global minimum; generally faster than gradient descent
  • Problem with this method: inverting the Hessian is expensive, therefore approximation methods (quasi-Newton and conjugate gradient) are used
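
A one-variable sketch of the Newton update on a made-up function (the Hessian reduces to the scalar second derivative):

% Newton's method on f(x) = x^4 - 3*x^2 + x
fp  = @(x) 4*x^3 - 6*x + 1;   % first derivative (gradient)
fpp = @(x) 12*x^2 - 6;        % second derivative (Hessian)
x = 2;                        % initial guess near a local minimum
for k = 1:10
  x = x - fp(x) / fpp(x);     % x <- x - H^-1 * gradient
end
disp(x)                       % a local minimum of f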
13
Q

What is the Gauss-Newton method?

A
  • Variant of Newton's method, designed for problems where the objective function is a sum of squares → non-linear least-squares problems
  • Second derivatives are not needed; the Hessian matrix is approximated by J^T J (see the sketch below)
  • Not good for large residuals, highly non-linear functions and ill-conditioned problems; there it is better to use LM
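
A sketch of a one-parameter non-linear least-squares fit y ≈ exp(a*x), with made-up noise-free data:

xd = [0; 0.5; 1; 1.5; 2];
yd = exp(0.7 * xd);               % true parameter a = 0.7
a = 0;                            % initial guess
for k = 1:20
  r = exp(a * xd) - yd;           % residual vector
  J = xd .* exp(a * xd);          % Jacobian dr/da
  a = a - (J' * J) \ (J' * r);    % GN step: Hessian approximated by J'*J
end
disp(a)                           % converges to 0.7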
14
Q

Explain the Levenberg-Marquardt algorithm

A
  • Includes a damping factor to be more stable, converge better and handle ill-conditioned problems
  • Interpolates between gradient descent (which always moves in a decreasing direction) and Gauss-Newton (which doesn't necessarily move in a decreasing direction, but is faster when it does); see the sketch below
    • Small residuals, small alpha → Gauss-Newton
    • Big residuals, big alpha → gradient descent
      • slower but safer
  • Stopping criteria:
    • maximum number of iterations
    • function value is below a threshold
    • change of the function value is below a threshold
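
A sketch of the same one-parameter fit as in the Gauss-Newton card; the damping factor alpha is decreased after a good step (→ Gauss-Newton) and increased after a bad one (→ gradient descent), stopping here simply after a maximum number of iterations:

xd = [0; 0.5; 1; 1.5; 2];
yd = exp(0.7 * xd);
cost = @(a) sum((exp(a * xd) - yd).^2);
a = 0; alpha = 1e-2;                  % initial guess and damping factor
for k = 1:50
  r = exp(a * xd) - yd;
  J = xd .* exp(a * xd);
  da = -(J' * J + alpha) \ (J' * r);  % in general: J'*J + alpha*eye(n)
  if cost(a + da) < cost(a)
    a = a + da; alpha = alpha / 10;   % accept, trust Gauss-Newton more
  else
    alpha = alpha * 10;               % reject, damp towards gradient descent
  end
end
disp(a)                               % converges to 0.7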