3D Pipeline Flashcards

(84 cards)

1
Points in 3D space are represented by…
Three coordinates: length, width and depth, or x, y and z.
2
Linear transformations are encoded as…
3 x 3 matrices
3
Homogeneous coordinates are used for affine transformations, resulting in vectors of shape […] and matrices of shape […].
4-dimensional homogeneous vectors and 4 x 4 matrices
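
A minimal NumPy sketch of cards 3 and 4: an affine transformation (here an illustrative scale plus translation) packed into a 4 x 4 matrix acting on 4-dimensional homogeneous vectors. The specific numbers are made up for the example.

    import numpy as np

    # Affine transform: uniform scale by 2, then translate by (1, 2, 3).
    # The linear part (scale/rotate/shear) sits in the upper-left 3 x 3 block,
    # the translation in the last column.
    M = np.array([
        [2.0, 0.0, 0.0, 1.0],
        [0.0, 2.0, 0.0, 2.0],
        [0.0, 0.0, 2.0, 3.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

    p = np.array([1.0, 1.0, 1.0, 1.0])   # point (1, 1, 1) with homogeneous w = 1
    v = np.array([1.0, 1.0, 1.0, 0.0])   # direction vector: w = 0, so translation is ignored

    print(M @ p)   # [3. 4. 5. 1.] -> scaled and translated
    print(M @ v)   # [2. 2. 2. 0.] -> only scaled
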
4
Scaling and translation are the same as in 2D, however […] is not. (pick 1)
Rotation, shearing
5
Rotation is a different process than in 2D because…
In 2D, we rotate around the origin of the coordinate system, however in 3D we rotate around a chosen axis
6
The inverse of a rotation is denoted mathematically as R^-1, but since rotation matrices are orthogonal, it may also be given as…
The transpose of the original matrix
7
Any 3D rotation can be expressed as a sequence of…
Three rotations around the main axes
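
A small NumPy sketch touching cards 6 and 7: the three main-axis rotation matrices composed into an arbitrary rotation, with a check that the transpose really is the inverse. The angles are arbitrary.

    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

    # Any 3D rotation can be written as a product of rotations about the main axes.
    R = rot_z(0.3) @ rot_y(0.7) @ rot_x(1.1)

    # Rotation matrices are orthogonal, so the transpose is the inverse.
    print(np.allclose(R.T @ R, np.eye(3)))       # True
    print(np.allclose(R.T, np.linalg.inv(R)))    # True
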
8
In order to rotate a vector v around a non-global axis, we first need to… (HINT: similar to rotation around a point)
Align the rotation axis with one of the coordinate system's main axes (transforming v along with it), to be reversed later
9
We can use Euler-Rodrigues to calculate the matrix for a rotation given an axis u such that u is… (HINT: Vector3.forward is [0, 0, 1])
A unit vector
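
A hedged NumPy sketch of card 9: building a rotation matrix from a unit axis u and an angle using the Rodrigues axis-angle formula (the result the card calls Euler-Rodrigues). The example axis and angle are chosen only to make the output easy to check.

    import numpy as np

    def axis_angle_matrix(u, theta):
        """Rotation by angle theta (radians) around unit axis u (Rodrigues formula)."""
        u = np.asarray(u, dtype=float)
        u = u / np.linalg.norm(u)            # guard: u must be a unit vector
        ux, uy, uz = u
        K = np.array([[0, -uz, uy],          # cross-product (skew-symmetric) matrix of u
                      [uz, 0, -ux],
                      [-uy, ux, 0]])
        return np.eye(3) * np.cos(theta) + np.sin(theta) * K + (1 - np.cos(theta)) * np.outer(u, u)

    # Rotating "forward" ([0, 0, 1]) by 90 degrees around the y axis gives [1, 0, 0].
    R = axis_angle_matrix([0, 1, 0], np.pi / 2)
    print(np.round(R @ np.array([0.0, 0.0, 1.0]), 6))   # [1. 0. 0.]
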
10
Given an affine transformation A, a normal of a 3D object n must be calculated as…
The inverse of the transpose of A multiplied by n, or (A^T)^-1 n
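
A short NumPy sketch of card 10: under a non-uniform scale, transforming the normal with the same matrix breaks perpendicularity, while the inverse transpose keeps the normal correct. The vectors and scale factors are illustrative.

    import numpy as np

    A = np.diag([2.0, 1.0, 1.0])          # non-uniform scale (linear part of the affine transform)
    t = np.array([1.0, -1.0, 0.0])        # tangent vector lying in the surface
    n = np.array([1.0, 1.0, 0.0])         # normal of that surface (perpendicular to t)

    t_new = A @ t                         # tangents transform with A itself
    naive = A @ n                         # WRONG: transforming the normal with A
    correct = np.linalg.inv(A).T @ n      # RIGHT: (A^-1)^T n, equivalently (A^T)^-1 n

    print(t_new @ naive)                  # 3.0 -> no longer perpendicular
    print(t_new @ correct)                # 0.0 -> still perpendicular
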
11
In order to convert nD coordinates to mD, we use a type of geometry called…
Projective geometry
12
When converting from 3D to 2D, we map points from 3D space to a…
Projection plane
13
The point where the camera is located and from which it points is called…
The Centre of Projection
14
The projection plane is a…
Plane which contains 3D world points mapped to a 2D local space
15
The rendering stage of the pipeline contains steps such as… (pick 3)
Shading, transformation, lighting, rasterization (pick 3)
16
Normals are…
Vectors perpendicular to each face of a model, indicating the direction the face is facing
17
Normals are used for… (pick 2)
Anything related to viewer-facing improvements: bump/displacement mapping, lighting, surface smoothing, subsurface divisions (pick 2). (Don't use subsurface divisions as a mental example; it's too complicated.)
18
The 3D pipeline transforms 3D objects by […] vertices onto a […].
Projecting vertices onto a screen.
19
After projecting vertices onto a screen, the 3D pipeline then handles…
Lighting and shading.
20
The final step of the 3D pipeline is… (HINT: to be output to the LED screen)
Rasterisation
21
The two types of projection are…
Perspective and parallel.
22
While perspective projection uses projection lines that converge at the center of projection, parallel projection…
Uses projection lines that are parallel to each other, projecting objects in 3D space onto a fixed-size projection plane.
23
While parallel projection uses projection lines that are parallel to each other, perspective projection…
Uses projection lines that converge at the center of projection, projecting objects onto a plane defined at a chosen point.
24
The distance a parallel projection may cover is…
Infinite, as the lines are parallel
25
In perspective projection, as objects get further away, they appear...
Smaller.
26
While world space represents the overall coordinate system in our scene, view space represents...
The coordinate system of all objects relative to the camera, or center of projection.
27
What is the purpose of the canonical view space?
Canonical view space is a coordinate system that is a copy of view space, except all points are normalised.
28
What is the difference between position and orientation?
Position is a 3D point in world space, while orientation is where the camera is pointing, represented as a 3D normal.
29
Instead of defining a Center of Projection for an orthographic projection, we instead define...
A Direction of Projection (DOP)
30
All x, y and z components in an orthographic view space are [...] mapped to normalised device coordinates.
Linearly
31
The pinhole camera model represents...
A box with a small aperture on one side, and a light sensitive surface on the other
32
The focal length in a perspective projection model is...
The distance between the center of projection and the plane of projection
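
A tiny Python sketch of the pinhole model from cards 31-32: by similar triangles, a camera-space point projects onto the plane at focal length f as x' = f·x/z and y' = f·y/z. The focal length and points are made-up values; note how the same offset appears smaller at a greater depth (card 25).

    def project(point, f):
        """Perspective-project a camera-space point onto the plane at distance f (pinhole model)."""
        x, y, z = point
        return (f * x / z, f * y / z)

    print(project((1.0, 1.0, 2.0), f=1.0))   # (0.5, 0.5)
    print(project((1.0, 1.0, 4.0), f=1.0))   # (0.25, 0.25) -> further away, appears smaller
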
33
In perspective projection, a 3D point within a truncated pyramid frustum is mapped to...
The canonical view space
34
Near and far clipping distances in frustum projection refer to...
The minimum distance an object must be from the camera to be rendered, and the maximum distance at which it is still rendered, respectively
35
Defining the field of view generates the left and right [...] planes.
Clipping
36
Viewport transformation describes the process of...
Converting the normalised distance values (canonical) to viewport pixel coordinates
37
To transform from local to world space, we use a [...] matrix.
Model matrix
38
To transform from world to view space, we use a [...] matrix.
View matrix
39
To transform from view to clip space, we use a [...] matrix. (HINT: we use frustums for this)
Projection matrix
40
To transform from clip space to screen space, we use a...
Viewport transformation
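
A hedged NumPy sketch of cards 37-40: a vertex is carried from local space to the screen by the model, view and projection matrices, followed by the perspective divide and a viewport transformation. The matrices are deliberately trivial (identity model and view, a symmetric OpenGL-style perspective projection), and the 800 x 600 viewport is an assumption for the example.

    import numpy as np

    def perspective(fov_y, aspect, near, far):
        """Symmetric perspective (frustum) projection matrix, OpenGL-style."""
        f = 1.0 / np.tan(fov_y / 2.0)
        return np.array([
            [f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0],
        ])

    model = np.eye(4)    # local -> world (identity: the object sits at the world origin)
    view = np.eye(4)     # world -> view  (identity: camera at the origin, looking down -z)
    proj = perspective(np.radians(60), 800 / 600, near=0.1, far=100.0)

    v_local = np.array([0.5, 0.5, -2.0, 1.0])      # a vertex in local space (homogeneous)
    v_clip = proj @ view @ model @ v_local         # local -> world -> view -> clip space
    ndc = v_clip[:3] / v_clip[3]                   # perspective divide -> normalised device coords

    # Viewport transformation: NDC in [-1, 1] -> pixel coordinates of an 800 x 600 viewport.
    width, height = 800, 600
    x_pix = (ndc[0] + 1) / 2 * width
    y_pix = (ndc[1] + 1) / 2 * height
    print(x_pix, y_pix)
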
41
Local coordinates are the coordinates of each [...], while global coordinates consist only of the [...] of this local primitive.
Vertex, position (and orientation)
42
MIP mapping is good for... (pick 2)
Better texture quality at distance, better performance, prevention of texture popping artefacts
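
A minimal NumPy sketch of how a MIP chain can be built by repeatedly averaging 2 x 2 texel blocks; in practice the driver does this for you (e.g. glGenerateMipmap), so this is only illustrative and assumes a square, power-of-two texture.

    import numpy as np

    def build_mip_chain(texture):
        """Each level halves the previous one by averaging 2 x 2 texel blocks."""
        levels = [texture]
        while levels[-1].shape[0] > 1:
            t = levels[-1]
            half = (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0
            levels.append(half)
        return levels

    tex = np.random.rand(8, 8)             # a tiny 8 x 8 single-channel texture
    for level in build_mip_chain(tex):
        print(level.shape)                 # (8, 8), (4, 4), (2, 2), (1, 1)
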
43
The goal of texture mapping is to map a [...] onto a [...].
2D image onto a 3D object.
44
In OpenGL, textures are stored as images, and each pixel is also called a...
Texture element, or texel
45
Each vertex of the mesh is associated with a [...] on the texture.
Normalised point
46
A texture atlas, also known as a sprite sheet, is...
An image containing multiple smaller images, typically packed together to reduce dimensions
47
Pelting is a technique used to map textures onto [...], by [...].
Complex, organic shapes, by flattening the model's surface into a 2D plane
48
To map a texture to the screen, we use...
Forward mapping
49
To map the screen to a texture coordinate, we use...
Inverse mapping
50
The benefit of using inverse instead of forward mapping for texture rendering is...
Inverse mapping allows us to only render what is being displayed on screen, rather than rendering the whole object's texture
51
At the rendering step, each fragment's texture coordinates are interpolated from...
The triangle it belongs to
52
When interpolating texture coordinates, we often find ourselves between texels. To solve this, we can choose either...
The nearest texel, or a weighted average of the neighbours.
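
A small NumPy sketch of card 52: sampling a texture at a non-integer texel position either by snapping to the nearest texel or by taking a weighted average of the four neighbours (bilinear interpolation). Edge clamping is omitted for brevity, and the toy texture is just a ramp of values.

    import numpy as np

    tex = np.arange(16, dtype=float).reshape(4, 4)   # toy 4 x 4 single-channel texture

    def sample_nearest(tex, u, v):
        """u, v are continuous texel coordinates (not normalised, for simplicity)."""
        return tex[int(round(v)), int(round(u))]

    def sample_bilinear(tex, u, v):
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        du, dv = u - u0, v - v0
        # Weighted average of the four surrounding texels.
        return (tex[v0, u0]         * (1 - du) * (1 - dv) +
                tex[v0, u0 + 1]     * du       * (1 - dv) +
                tex[v0 + 1, u0]     * (1 - du) * dv +
                tex[v0 + 1, u0 + 1] * du       * dv)

    print(sample_nearest(tex, 1.3, 2.6))    # 13.0 (snaps to texel (1, 3))
    print(sample_bilinear(tex, 1.3, 2.6))   # 11.7 (blend of the texels around (1.3, 2.6))
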
53
Environment mapping represents the distant 'environment' as...
A texture, usually a cube or sphere
54
Sphere mapping maps the environment onto...
A distant sphere surrounding the scene
55
Cube mapping maps the environment onto...
The six faces of a distant cube
56
Cube mapping and sphere mapping are performed using...
A framebuffer
57
In OpenGL, textures have their own texture binding, called...
GL_TEXTURE_2D (bound to a texture unit such as GL_TEXTURE0 or GL_TEXTURE1)
58
In OpenGL, cube maps have their own texture binding, called...
GL_TEXTURE_CUBE_MAP, alongside _POSITIVE_X, _POSITIVE_Y, and so on
59
We can use a framebuffer to render an image, such as a cube map face, without it being immediately...
Displayed on screen
60
Bump mapping lets you simulate [...] variations by adjusting the [...] of surface normals.
Height variations, by adjusting the direction of surface normals.
61
A heightmap is a texture provided to the shader that defines, for each pixel on the model...
How high up it should be
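
A hedged NumPy sketch of cards 60-61: deriving a perturbed normal for each texel of a heightmap from its height differences (finite differences). A real shader would then use these adjusted normals during lighting; the ramp heightmap here is only an illustration.

    import numpy as np

    def normals_from_heightmap(h, strength=1.0):
        """Approximate per-texel normals from a heightmap using finite differences."""
        dx = np.gradient(h, axis=1) * strength                # slope in x
        dy = np.gradient(h, axis=0) * strength                # slope in y
        n = np.stack([-dx, -dy, np.ones_like(h)], axis=-1)    # a flat surface points along +z
        return n / np.linalg.norm(n, axis=-1, keepdims=True)  # normalise each normal

    height = np.outer(np.linspace(0, 1, 4), np.ones(4))       # a simple ramp rising along y
    print(normals_from_heightmap(height)[2, 2])               # normal tilted away from the slope
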
62
Bump mapping is typically applied [before/after] shading.
Before
63
The process of shadowing is done to determine...
Volumes created by shadows
64
A shadow volume is represented as a [3D object/2D plane].
3D object
65
A point is considered 'in shadow' if it is within...
The shadow volume
66
To handle situations where multiple shadow volumes influence a single point, we use [...], which increment once for each...
Shadow counters, which increment once for each shadow volume the point falls inside
67
Shadow mapping is the process of creating...
A shadow map texture
68
A shadow map is used as...
A lookup table to determine if the light from a light source is covered by an object or not
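
A simplified NumPy sketch of how a shadow map acts as a lookup table (cards 67-68): a point is transformed into the light's clip space, its depth is compared with the depth stored in the map, and a small bias avoids self-shadowing. The light matrix and map contents are placeholders, not a full renderer.

    import numpy as np

    def in_shadow(point_world, light_view_proj, shadow_map, bias=1e-3):
        """True if the map records something closer to the light than this point."""
        p = light_view_proj @ np.append(point_world, 1.0)   # world -> light clip space
        ndc = p[:3] / p[3]                                   # perspective divide
        uv = (ndc[:2] + 1) / 2                               # [-1, 1] -> [0, 1] texture coords
        depth = (ndc[2] + 1) / 2                             # this point's depth seen from the light
        h, w = shadow_map.shape
        stored = shadow_map[int(uv[1] * (h - 1)), int(uv[0] * (w - 1))]
        return depth - bias > stored                         # something nearer blocks the light

    # Placeholder data: a simple light transform and a map that is "empty" (depth 1.0)
    # except for an occluder covering the lower-left quadrant.
    light_view_proj = np.diag([1.0, 1.0, -1.0, 1.0])
    shadow_map = np.ones((64, 64))
    shadow_map[:32, :32] = 0.3

    print(in_shadow(np.array([-0.5, -0.5, -0.8]), light_view_proj, shadow_map))  # True
    print(in_shadow(np.array([0.5, 0.5, -0.8]), light_view_proj, shadow_map))    # False
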
69
Shadow mapping may be performed [frame-by-frame/statically/both].
Both - performing it statically is called baking
70
Shadow maps are generated [for each light/for each observer], from [a fixed perspective/its total perspective].
For each light, from its total perspective
71
Shadow maps are generated by using a [view/projection/clip] matrix from the perspective of the light.
Clip matrix
72
Some examples of situations where faces may not be rendered are...
If the face is outside of the frustum, if the face is not facing towards the viewer, or if the face is hidden behind some other object.
73
Culling is a technique that optimises a scene by...
Removing faces that don't need to be rendered due to visibility or other reasons
74
View-Frustum culling is a technique that uses the frustum to...
Remove any objects that fall outside of the camera frustum
75
Clipping is performed in the vertex [post/pre]-processing stage in the rendering pipeline.
Post. It occurs after objects have been projected, but before rasterisation.
76
Back-face culling is a technique that culls faces if...
The face is not pointing towards the camera
77
In back-face culling, a face is culled if the dot product of its normal and the vector from the viewpoint is [lesser/greater] than zero.
Greater. (A dot product greater than zero means the face's normal points away from the viewer, so the face cannot be seen.)
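
A short NumPy sketch of cards 76-77: a face is culled when the dot product of its normal with the vector from the viewpoint to the face is greater than zero. The eye position and normals are arbitrary example values.

    import numpy as np

    def is_back_face(face_normal, face_point, eye):
        view_dir = face_point - eye                # vector from the viewpoint to the face
        return np.dot(face_normal, view_dir) > 0   # > 0 -> facing away -> cull it

    eye = np.array([0.0, 0.0, 5.0])
    face_point = np.array([0.0, 0.0, 0.0])
    front = np.array([0.0, 0.0, 1.0])              # normal pointing towards the camera
    back = np.array([0.0, 0.0, -1.0])              # normal pointing away from the camera

    print(is_back_face(front, face_point, eye))    # False -> keep
    print(is_back_face(back, face_point, eye))     # True  -> cull
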
78
Occlusion culling is a technique used when...
Multiple objects or polygons share the same screen coordinates, but have different depths (one object is in front of another)
79
One way to handle occlusion culling is by using Painter's algorithm, where we...
Draw objects from farthest to nearest, and sort these objects by depth
80
One way to handle occlusion culling is by using the Z-Buffer algorithm, where we...
Create a buffer containing the closest known depth in this rendering pass-over, and when we find an object with a closer depth, we update it
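
A toy NumPy sketch contrasting cards 79-80: Painter's algorithm sorts whole objects by depth and draws them far to near, while the Z-buffer keeps the closest depth seen so far at every pixel and only overwrites when something nearer arrives. "Drawing" here just records which object owns a pixel; the objects and depths are made up.

    import numpy as np

    # Each "object" covers the whole 4 x 4 screen at a constant depth (smaller = nearer).
    objects = [("far_wall", 10.0), ("box", 4.0), ("character", 2.0)]

    # Painter's algorithm (object-space): sort by depth, draw from farthest to nearest.
    frame = np.full((4, 4), "", dtype=object)
    for name, depth in sorted(objects, key=lambda o: o[1], reverse=True):
        frame[:, :] = name                  # nearer objects simply overwrite farther ones
    print(frame[0, 0])                      # character

    # Z-buffer (image-space): per pixel, keep whatever is closest so far.
    z_buffer = np.full((4, 4), np.inf)
    frame = np.full((4, 4), "", dtype=object)
    for name, depth in objects:             # draw order no longer matters
        mask = depth < z_buffer
        z_buffer[mask] = depth
        frame[mask] = name
    print(frame[0, 0])                      # character
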
81
The Z-Buffer algorithm is a [image-space/object-space] approach, while Painter's algorithm is known as a [image-space/object-space] approach.
Image-space, object-space
82
Z-fighting occurs when two surfaces are...
Very close in depth, causing pixels to alternately be drawn from either surface, resulting in visual artifacts
83
Alpha values are used in rendering transparency to...
Measure the proportion of the transparent object that should be rendered compared to the objects behind it
84
Transparency uses a technique called blending, where we...
Combine colours from multiple layers based on their alpha values
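
A final NumPy sketch of cards 83-84: standard "over" blending combines a source colour with what is already in the framebuffer according to the source alpha. The RGB values are arbitrary.

    import numpy as np

    def blend_over(src_rgb, src_alpha, dst_rgb):
        """Standard alpha blending: result = src * alpha + dst * (1 - alpha)."""
        return src_rgb * src_alpha + dst_rgb * (1 - src_alpha)

    background = np.array([0.0, 0.0, 1.0])          # opaque blue already in the framebuffer
    red_glass = np.array([1.0, 0.0, 0.0])           # transparent red surface in front of it

    print(blend_over(red_glass, 0.25, background))  # [0.25 0.   0.75] -> mostly blue
    print(blend_over(red_glass, 0.75, background))  # [0.75 0.   0.25] -> mostly red
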