Computer Graphics Flashcards

(291 cards)

1
Q

What is Modelling?

A

Effective representation and efficient computational modification of geometric shapes.

2
Q

What is Animation and Simulation?

A

Generation and representation of dynamic imagery on a computer.

3
Q

What is Image Synthesis?

A

Display of models and scenes on a computer.

4
Q

What Is Visualisation?

A

Methods to visually represent the information content of large-scale, multi-dimensional data sets.

5
Q

What is a Raster Display?

A

A display that consists of pixels with a defined colour and intensity. It also contains a framebuffer.

6
Q

What is the RGB colour theory?

A

The theory that every colour can be made up of red, green and blue

7
Q

What is a Framebuffer?

A

A 2D array in memory where each entry corresponds to a pixel on the screen.

8
Q

What is double buffering?

A

Using one buffer for drawing and another for display, then swapping them to avoid flicker.

9
Q

What are Primitives?

A

Simple geometric elements such as a line, point, boundary or polygon, that make up more complex shapes and images. They are specified by location and dimension.

10
Q

What are Complex Objects?

A

Objects built from primitives, but with visual attributes such as colour, line-style, and fill style.

11
Q

What are local and world coordinates?

A

Local coordinates define objects individually, while world coordinates describe each object's place in the overall scene.

12
Q

What is a clipping window in computer graphics?

A

The area of a scene, defined in world coordinates, that you choose to display.

13
Q

What is a viewport?

A

An area on the display device (screen) where the clipped portion of the scene is rendered, defined in device coordinates.

14
Q

Outline the main stages of the 2D Graphics Pipeline.

A

Model Scene: Construct objects from primitives in local coordinates and place them into world coordinates.

View Scene: Specify window (scene area) and viewport (display area), then clip and map the scene.

Output: Rasterise primitives to pixels using attributes for display.

15
Q

What are Device Coordinates?

A

Coordinates specifying pixel locations on the physical device or screen, typically measured in pixels or physical units.

16
Q

What is a Linear Transformation?

A

A transformation represented by multiplying vectors by matrices, such as rotation, scaling, and shear.

17
Q

What linear transformation is
[ cosθ -sinθ ]
[ sinθ cosθ ]
and what would the effect be?

A

Rotation by θ
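As an illustration (not part of the card), the rotation matrix above can be applied to a point in plain Python:

```python
import math

def rotate2d(p, theta):
    """Rotate point p = (x, y) about the origin by angle theta (radians),
    i.e. multiply by the matrix [[cos t, -sin t], [sin t, cos t]]."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)
```

For example, rotating (1, 0) by 90 degrees gives approximately (0, 1).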

18
Q

What linear transformation is
[ λ1 0 ]
[ 0 λ2 ]
and what would the effect be?

A

Scaling x by λ1, and y by λ2

19
Q

What linear transformation is
[ 1 λ ]
[ 0 1 ]
and what would the effect be?

A

Shear, and it adds λy to every x value

20
Q

What is the key difference between linear and affine transformations?

A

Affine transformations include translation; linear ones do not.

21
Q

How do you combine two affine transformations (M1, t1) and (M2, t2)?

A

Applying (M1, t1) first and then (M2, t2) gives M = M2M1 and t = M2t1 + t2.
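A minimal pure-Python sketch (names are illustrative, not from the cards) checking that the combined transform (M2M1, M2t1 + t2) matches applying the two transforms in sequence:

```python
def apply_affine(M, t, p):
    """Apply x' = Mx + t, with M a 2x2 matrix as nested tuples."""
    x, y = p
    return (M[0][0] * x + M[0][1] * y + t[0],
            M[1][0] * x + M[1][1] * y + t[1])

def matmul2(A, B):
    """2x2 matrix product A*B."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def compose(M2, t2, M1, t1):
    """Combined transform for: first (M1, t1), then (M2, t2)."""
    M = matmul2(M2, M1)
    t = apply_affine(M2, t2, t1)  # M2*t1 + t2
    return M, t
```

Composing a scale-then-shear this way gives the same result as applying them one after the other.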

22
Q

What is an affine transformation

A

A linear transformation plus a translation

23
Q

What are Homogeneous Coordinates?

A

They add a 3rd component to 2D points (x, y) → (x, y, 1), which allows affine transformations to be handled with a single matrix multiplication.

24
Q

What is the advantage of using homogeneous coordinates?

A

Allows all affine transformations to be combined in one matrix multiplication.

25
What is the form of a 2D affine transformation matrix in homogeneous coordinates?
[m11 m12 tx]
[m21 m22 ty]
[ 0   0   1]
26
What does it mean that matrix multiplication is non-commutative?
Generally, AB≠BA. The order of transformations matters, e.g., rotating then translating differs from translating then rotating.
27
Which transformations commute?
- Rotations (only in 2D)
- Translations
- Scalings
- Shears
Different types of transformation do not commute with each other.
28
How do you rotate around a point (not the origin)?
Translate the point to origin → Rotate → Translate back.
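The translate-rotate-translate recipe above can be sketched in plain Python (illustrative, not from the cards):

```python
import math

def rotate_about(p, c, theta):
    """Rotate p about centre c: translate c to the origin, rotate, translate back."""
    x, y = p[0] - c[0], p[1] - c[1]            # translate so c is at the origin
    co, si = math.cos(theta), math.sin(theta)
    xr, yr = co * x - si * y, si * x + co * y  # rotate about the origin
    return (xr + c[0], yr + c[1])              # translate back
```

Rotating (2, 1) about (1, 1) by 90 degrees gives approximately (1, 2).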
29
How can lines and rectangles be transformed?
By only transforming the vertices/corners. This is because affine transformations preserve straightness and parallelism.
30
How are homogeneous coordinates used for 3D transformations?
To encode affine transformations as 4x4 matrices.
31
What are the differences between 2D and 3D transformations?
Scaling and translation extend naturally from 2D, while rotation differs: rotations become non-commutative in 3D.
32
Which 3D rotation is this?
[cosθ -sinθ 0 0]
[sinθ  cosθ 0 0]
[  0     0  1 0]
[  0     0  0 1]
Around the z axis. Top left for z.
33
Which 3D rotation is this?
[1   0     0   0]
[0 cosθ -sinθ  0]
[0 sinθ  cosθ  0]
[0   0     0   1]
Around the x axis. Bottom right for x.
34
Which 3D rotation is this?
[ cosθ 0 sinθ 0]
[   0  1   0  0]
[-sinθ 0 cosθ 0]
[   0  0   0  1]
Around the y axis. Corners for y.
35
How do you get the inverse of a rotation?
By calculating the transpose of the rotation matrix.
36
How do you rotate an object around an arbitrary axis?
First, align the arbitrary axis to a principal axis, rotate around the principal axis, and undo the alignment with inverse rotation.
36
How to transpose a matrix?
First row becomes first column, first column becomes first row etc.
37
What are the three Euler angles?
Roll for Z, Pitch for X, Yaw for Y
38
What is the cross-product matrix [u] such that [u]b = u × b?
[  0  -uz  uy ]
[ uz    0 -ux ]
[-uy   ux   0 ]
39
What is the Euler-Rodrigues formula?
R = I + sinθ⋅[u] + (1−cosθ)⋅[u]², where θ is the angle, I is the identity matrix, and [u] is the cross-product matrix of the unit axis u, so that [u]v = u × v. R is the resulting rotation matrix. Equivalently, R = cosθ⋅I + sinθ⋅[u] + (1−cosθ)⋅uuᵀ.
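A pure-Python sketch (illustrative, not from the card) using the form R = I + sinθ⋅[u] + (1−cosθ)⋅[u]², which is equivalent to the cosθ⋅I + (1−cosθ)⋅uuᵀ form since [u]² = uuᵀ − I:

```python
import math

def cross_matrix(u):
    """Cross-product matrix [u] with [u]v = u x v."""
    ux, uy, uz = u
    return [[0.0, -uz, uy], [uz, 0.0, -ux], [-uy, ux, 0.0]]

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rodrigues(u, theta):
    """R = I + sin(theta)[u] + (1 - cos(theta))[u]^2, for a unit axis u."""
    U = cross_matrix(u)
    U2 = matmul3(U, U)
    s, c = math.sin(theta), math.cos(theta)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    return [[I[i][j] + s * U[i][j] + (1 - c) * U2[i][j]
             for j in range(3)] for i in range(3)]
```

With u = (0, 0, 1) and θ = 90°, this reproduces the z-axis rotation matrix.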
40
How do normals transform differently?
Positions transform as x' = Ax + t, but applying A to a normal does not keep it perpendicular to the surface under shear or non-uniform scaling, so normals need a different transformation.
41
How do you transform a normal vector?
Multiply by the inverse of the transpose of A (equivalently, the transpose of the inverse), where x' = Ax + t is the affine transformation applied to positions.
42
What is Parallel projection
A projection where all projection lines (rays) are parallel and intersect the projection plane perpendicularly. There is no center of projection.
43
What is perspective projection?
A projection where projection lines converge at a single point called the center of projection, mimicking how the human eye perceives depth.
44
What are the key visual effects created by perspective projection?
Objects farther away appear smaller, Objects closer appear larger, Parallel lines may converge (e.g., train tracks)
45
What is the main visual difference between parallel and perspective projection?
Parallel projection preserves object sizes and parallelism, while perspective projection distorts size and angles to mimic human vision.
46
Why is parallel projection useful?
It's ideal for technical or engineering drawings because it preserves true sizes and angles, allowing for accurate measurements.
47
Why is perspective projection preferred in visual graphics and games?
Because it reproduces natural depth cues, such as distant objects appearing smaller, giving scenes a realistic, immersive look.
48
In perspective and parallel projection, where is the center of projection located?
Perspective: at a finite distance in front of the projection plane. Parallel: at infinity, so all projection rays are parallel.
49
What does the orthographic projection matrix do?
It scales and shifts a 3D axis-aligned box (view volume) into a normalized cube in clip space, preserving sizes and parallel lines.
50
What is the direction of projection (DOP) in orthographic projection?
A vector that is perpendicular to the projection plane.
51
What is the Orthographic Projection Matrix
[2/(r-l)     0        0      -(r+l)/(r-l)]
[   0     2/(t-b)     0      -(t+b)/(t-b)]
[   0        0    -2/(f-n)   -(f+n)/(f-n)]
[   0        0        0            1     ]
52
How does the pinhole model work:
A simple model where light enters through a small hole and projects an inverted image onto a plane.
53
What is the basic projection formula for the pinhole model (no inversion)?
[y1] = (f/x3) [x1]
[y2]          [x2]
i.e. y1 = f⋅x1/x3 and y2 = f⋅x2/x3, where f is the focal length and x3 the depth.
54
What is the homogeneous matrix for the pinhole model?
[y1]   [f 0 0 0] [x1]
[y2] = [0 f 0 0] [x2]
[y3]   [0 0 1 0] [x3]
                 [1 ]
Then divide y1 and y2 by y3.
55
What is the homogeneous matrix for the pinhole model allowing for pixel size and pixel location?
[y1]   [f/s1   0   o1 0] [x1]
[y2] = [  0  f/s2  o2 0] [x2]
[y3]   [  0    0   1  0] [x3]
                         [1 ]
Then divide y1 and y2 by y3.
56
What is a frustum in perspective projection?
A truncated pyramid volume between the near and far clipping planes.
57
What does the frustum projection matrix do?
Maps 3d coordinates in eye space to normalised device coordinates (NDC).
58
What is the general form of the frustum projection matrix.
[2n/(r-l)     0       (r+l)/(r-l)      0      ]
[   0      2n/(t-b)   (t+b)/(t-b)      0      ]
[   0         0      -(f+n)/(f-n)  -2fn/(f-n) ]
[   0         0           -1           0      ]
59
What is viewport transformation?
After projection, the scene lies in the canonical cube. Viewport transformation maps the canonical cube to screen pixel coordinates.
60
What are the key coordinate systems in rendering pipeline order?
Local space -> World space -> View space -> Clip space -> Screen space
61
How do you transform from view space to clip space?
Multiply the view-space coordinates by the projection matrix (perspective or orthographic), producing homogeneous clip-space coordinates.
62
What is Morphing?
The term morphing stands for metamorphosing and refers to an animation technique in which one graphical object is gradually turned into another.
62
What is Warping?
The term warping refers to the geometric transformation of graphical objects (images, surfaces or volumes) from one coordinate system to another coordinate system.
63
What is the goal of Morphing?
To find an average between two objects.
64
How is linear interpolation used in morphing?
Weighted average over time: P' = aP+(1−a)Q
65
What is cross-dissolve in morphing?
A simple interpolation of pixel colours between two images: I(t)=tI1+(1−t)I2 ​
66
Why is cross-dissolve alone not effective for face morphing?
It doesn’t account for differences in shape, so the visual transition looks unnatural if images aren’t aligned.
67
What is the solution when cross-dissolve alone doesn't work?
Warp the shapes into alignment first, then cross-dissolve.
68
What is Image Filtering?
Image filtering changes the range (i.e. the pixel values) of an image, so the colors of the image are altered without changing the pixel positions.
69
What is Parametric warping?
A global transformation applied uniformly across the image, often expressed as a matrix p′=Mp
70
Give examples of parametric transformations.
Scaling, rotation, shear—represented by matrices acting on 2D coordinates.
71
What is Uniform and Non-Uniform scaling?
Uniform scaling uses the same scale factor for all components; non-uniform scaling uses different factors per component.
72
What is forward warping?
Each source pixel is sent to a new destination position, which may leave gaps (holes).
73
What is inverse warping?
For each destination pixel, find its corresponding source location using the inverse transformation.
74
What if a pixel lands "between” two pixels in forward warping?
Add ”contribution” to several pixels, normalise later
75
What if a pixel lands "between” two pixels in inverse warping?
Re-sample color value from interpolated (pre-filtered) source image
76
What does Beier–Neely warping use to define transformations?
Pairs of corresponding lines (features) between source and destination images.
77
In Beier–Neely warping, what are u and v?
u: Distance along a line segment (fractional position) v: Perpendicular distance from the pixel to the line
78
In Beier–Neely warping, how is a destination pixel p mapped to a source pixel p′?
Use (u,v) to map p to the corresponding position on the source's image line
79
Why are multiple line pairs used in Beier–Neely warping?
To improve accuracy: each line pair suggests a source location, and their contributions are combined as a weighted average.
80
What is the weighting formula in Beier–Neely warping?
wi = (li^p / (a + di))^b, where li is the line length, di is the distance from the pixel to the line, a controls smoothing, b controls how fast influence decays with distance, and p controls the influence of line length.
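A sketch of this weighting (illustrative; the default parameter values are assumptions, not from the card): longer lines and nearer lines get more influence.

```python
def line_weight(length, dist, a=1.0, b=2.0, p=0.5):
    """Beier-Neely line weight w = (l^p / (a + d))^b."""
    return (length ** p / (a + dist)) ** b
```

The weight falls off as the pixel moves away from the line, and grows with the line's length.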
81
What is piecewise affine warping used for?
To enable flexible, localised deformations by transforming each triangle separately.
82
How is the image domain divided in piecewise affine warping?
Into triangles formed from a convex hull of control points.
83
How is a pixel p inside a triangle represented in piecewise affine warping?
p = αx1 + βx2 + γx3, where α + β + γ = 1; the barycentric coordinates α, β, γ weight the three corners of the pixel's triangle.
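A pure-Python sketch (illustrative, not from the card) computing the barycentric coordinates of a 2D point inside a triangle:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (alpha, beta, gamma) of p in triangle abc,
    so that p = alpha*a + beta*b + gamma*c and alpha + beta + gamma = 1."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    alpha = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    beta = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return alpha, beta, 1.0 - alpha - beta
```

Multiplying each corner by its coordinate and summing reconstructs the original point.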
83
What is a key limitation of piecewise affine warping?
It creates continuous but not always smooth deformations - visible seams may appear between triangles.
84
How are continuous objects modelled?
Using polygonal approximations.
85
What is a polygon mesh in computer graphics?
A set of connected polygons (usually triangles or quads) that approximate the surface of a 3D object.
86
What are the main components of a polygon mesh?
Vertices (x, y, z), edges (connections between vertices), and faces (polygons).
87
Why do we only model the surface of objects in graphics?
Because only the surface interacts with light during rendering.
88
What is a surface normal?
A vector perpendicular to a surface, used in lighting calculations.
89
How are vertex normals typically calculated?
By averaging the normals of all faces that share the vertex.
90
What are the conditions for a valid polygon mesh?
Faces intersect only at edges/vertices, each edge is shared by exactly two faces (manifold), and normals are consistent.
91
What are the main two types of mesh?
Triangle and quadrilateral (quad) meshes.
92
How are curved surfaces represented in graphics?
By approximating them with many small flat polygons (usually triangles).
93
Where do we place more polygons in a mesh?
In regions with high curvature, to capture shape detail.
94
What shape is commonly used to approximate curves like circles?
Regular polygons with many small sides.
95
What are the three OpenGL triangle primitives?
GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN.
96
What is an explicit triangle mesh representation?
Each triangle stores all three of its vertices directly, even if shared.
97
What is a drawback of explicit representation?
It is redundant - shared vertices are repeated multiple times.
98
What is GL_TRIANGLES in OpenGL?
A primitive where every group of three vertices defines a separate triangle.
99
What is GL_TRIANGLE_STRIP in OpenGL?
A primitive where each new vertex after the first two creates a new triangle, reusing the previous two vertices.
100
What is GL_TRIANGLE_FAN in OpenGL?
A primitive where all triangles share the first vertex; each new pair of vertices forms a new triangle with the first.
101
What is a Vertex Buffer Object (VBO)?
A GPU-stored array of vertex data, allowing faster rendering without re-uploading.
101
What is shared vertex representation?
A mesh format where vertices are stored once, and triangles reference them by index.
102
What are the memory benefits of shared vertex representation?
Less redundancy — requires 3m floats (vertices) + 3n ints (indices), instead of 9n floats.
103
What attributes can each vertex store?
Position (x, y, z), color (r, g, b), normal (nx, ny, nz).
104
What is flat shading?
Filling each triangle with a single constant color, emphasizing polygon edges.
105
What is smooth shading?
Interpolating color (or light) values across each triangle face for a more realistic appearance.
106
What are the three main types of light sources in graphics?
Point lights, directed (directional) lights, and spotlights.
107
How does light attenuation work for point lights?
Intensity decreases with distance: attenuation = 1/d^2
108
What are the possible interactions when light hits a surface?
Absorption, diffuse reflection, specular reflection, refraction.
109
What is Global Illumination?
A comprehensive light model that includes direct and indirect light, shadows, reflections, and caustics.
110
Why isn’t global illumination used in real-time graphics?
It is computationally expensive and too slow for real-time rendering
111
What is local illumination?
A lighting model that considers only direct light from sources to surfaces, ignoring indirect effects.
112
What is Lambert's cosine law?
Id = I ⋅ kd ⋅ (N⋅L), where I is the incoming light intensity, kd is the material's diffuse reflection coefficient (0-1), L is the unit vector to the light source, and N is the unit surface normal.
113
What is the Phong Specular Reflection model?
Is = I ⋅ ks ⋅ (r⋅v)^n, where I is the light intensity, ks is the specular reflection coefficient, r is the mirror direction, v is the viewing direction, and n is the specular reflection exponent.
114
What is the specular reflection exponent?
Sometimes called the shininess or Phong exponent, it controls how tight or broad the specular highlight is. A low n gives a broad, dull highlight; a high n gives a sharp, focused highlight. As n approaches infinity, it approaches a perfect mirror reflection.
115
What is the Phong Reflection Model in terms of theory?
A combination of the diffuse (Lambert's), Specular (Phong), and ambient term (Constant base lighting).
116
What is the Phong Reflection Model equation
Ir = ka⋅Ia + I[kd(N⋅L) + ks(r⋅v)^n]
ka⋅Ia = ambient term
kd(N⋅L) = diffuse reflection
ks(r⋅v)^n = specular reflection
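A minimal pure-Python sketch of the combined model (illustrative, not from the cards); the mirror direction r = 2(N⋅L)N − L is the standard construction and is an assumption here, not stated on the card:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(N, L, V, I, Ia, ka, kd, ks, n):
    """Ir = ka*Ia + I*(kd*(N.L) + ks*(R.V)^n); all direction vectors unit length."""
    ndl = max(dot(N, L), 0.0)                     # clamp back-facing light
    R = tuple(2 * ndl * Nc - Lc for Nc, Lc in zip(N, L))  # mirror direction
    rdv = max(dot(R, V), 0.0)
    return ka * Ia + I * (kd * ndl + ks * rdv ** n)
```

Looking straight along the reflection gives full diffuse plus full specular; looking from the side leaves only ambient and diffuse.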
117
What method is used to calculate Diffuse Reflection?
Lambert's cosine law
118
What method is used to calculate Specular Reflection?
Phong Specular Reflection Model
119
How does the Phong Reflection Model work for multiple sources
The diffuse and specular reflection are summed.
120
What is the problem with the Phong Specular Reflection model?
Calculating r⋅v for specular reflection per pixel is expensive.
121
What is the Blinn-Phong model?
An optimized version of the Phong reflection model that uses a halfway vector to compute specular highlights more efficiently
122
What vector replaces the reflection vector r in Blinn-Phong?
The halfway vector H = (L + V) / ||L + V||, where ||L + V|| is the magnitude of the vector L + V.
123
What is the full Blinn-Phong reflection equation?
Ir = ka⋅Ia + I[kd(N⋅L) + ks(N⋅H)^n']
123
What is the Blinn-Phong specular reflection formula?
Is = I * ks * (N⋅H)^n'
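The halfway-vector construction and the Blinn-Phong specular term can be sketched in plain Python (illustrative, not from the cards):

```python
import math

def halfway(L, V):
    """H = (L + V) / ||L + V||, with L and V unit vectors."""
    h = tuple(l + v for l, v in zip(L, V))
    mag = math.sqrt(sum(c * c for c in h))
    return tuple(c / mag for c in h)

def blinn_phong_specular(N, H, I, ks, n):
    """Is = I * ks * (N.H)^n'."""
    ndh = max(sum(a * b for a, b in zip(N, H)), 0.0)
    return I * ks * ndh ** n
```

When L and V coincide with N, the highlight is at its brightest.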
124
When is N⋅H maximised in Blinn-Phong?
When N aligns with H, the brightest specular reflection is produced.
125
What is flat shading?
One normal per polygon. Quick, but looks faceted.
126
What is Gouraud Shading
Compute illumination at vertices, interpolate across face. Efficient, good for diffuse, but misses specular highlights.
127
What is Phong Shading?
Interpolate normals, compute illumination per pixel. Much smoother - catches specular highlights.
127
When should Phong be used and when should Gouraud be used?
Use Phong for surfaces with specular reflection (large ks). Use Gouraud for diffuse surfaces (ks ≈0).
128
What is the speed difference between Phong and Gouraud shading?
Gouraud is 4 to 5 times faster.
129
What does the rasterizer do?
Converts primitives (e.g., triangles) into fragments (potential pixels).
129
What is the role of the vertex shader in the pipeline?
Processes individual vertices - applies transformations and passes data.
130
What is the role of the fragment shader?
Computes the final color of each pixel-sized fragment.
131
What is the correct order of the stages in the modern OpenGL rendering pipeline (using shaders)?
Vertex Specification - vertex data in buffers
Vertex Shader - transforms vertices to clip space
Primitive Assembly - forms triangles, lines, etc.
Clipping - removes geometry outside the view frustum
Rasterisation - converts primitives to fragments
Fragment Shader - computes fragment colour, depth, etc.
Tests & Blending - depth test, alpha blending
Framebuffer Output - writes final pixels to the screen/image
132
What are shaders?
Shaders are small GPU programs written in GLSL (OpenGL Shading Language) that control how objects are rendered.
133
What are vec3 and mat4 in GLSL?
A 3D float vector and a 4x4 float matrix, respectively.
133
Can GLSL functions be recursive?
No - recursion is not allowed.
134
What does the in keyword mean in GLSL?
It denotes per-instance input to a shader (e.g., per-vertex input to vertex shader).
135
What does the out keyword mean?
Output from a shader stage, passed to the next stage.
136
What is a uniform variable in GLSL?
A global variable set by the CPU that is constant across all shader invocations (e.g., matrices, light position).
137
What is gl_Position?
A built-in output that stores the transformed vertex position in clip space.
138
Why do we need the PVM matrix in vertex shaders?
To transform vertex positions from model space to clip space.
139
what are the default inputs and outputs in a Vertex Shader?
Input: the integer index of the current vertex (gl_VertexID). Output: vec4 gl_Position.
140
What is a VBO in OpenGL and what is it used for?
A VBO (Vertex Buffer Object) is a block of GPU memory that stores vertex data (e.g., positions, colors, normals, texture coordinates).
141
What is a VAO in OpenGL and why is it important?
A VAO (Vertex Array Object) stores the configuration of how vertex data is read from VBOs. It’s like a recipe for drawing objects. Once set up, binding the VAO lets OpenGL know how to feed data to the shaders without redefining everything.
142
What does a fragment shader receive as input?
Interpolated outputs from the vertex shader (e.g., color, normals, texture coords).
143
What does a fragment shader typically output?
The final color of the fragment (e.g., out vec4 final_color).
144
What Can Be Added in a Fragment Shader?
Lighting models such as Phong/Blinn-Phong, Texture mapping, Normal mapping.
145
What are texels?
Individual pixels of a texture image.
146
What is texture mapping?
Applying a 2D image (texture) onto a 3D object surface using coordinates (s, t) ∈ [0,1].
147
What are common texture coordinate names?
(s, t) or (u, v).
148
What are the equations for a sphere of Radius R?
x = R⋅sinθ⋅cosϕ
y = R⋅sinθ⋅sinϕ
z = R⋅cosθ
θ is the polar angle from the z-axis, and ϕ is the azimuthal angle in the xy-plane.
149
How do you map a 2d texture around a sphere?
s = arctan(y/x) / 2π
t = arccos(z/R) / π
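A pure-Python sketch of this mapping (illustrative, not from the card). It uses atan2 rather than arctan(y/x) so all quadrants are handled, and wraps s into [0, 1); both refinements are assumptions beyond the card's formula:

```python
import math

def sphere_tex_coords(p, R):
    """Map a point p = (x, y, z) on a sphere of radius R to (s, t) in [0, 1]."""
    x, y, z = p
    s = (math.atan2(y, x) / (2 * math.pi)) % 1.0  # azimuthal angle -> s
    t = math.acos(z / R) / math.pi                # polar angle -> t
    return s, t
```

The north pole maps to t = 0 and the equator to t = 0.5.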
150
What do GL_REPEAT and GL_MIRRORED_REPEAT do?
Both repeat the texture pattern when s or t leaves [0, 1]; GL_MIRRORED_REPEAT flips the texture on alternate repeats.
151
What is the difference between GL_CLAMP_TO_EDGE and GL_CLAMP_TO_BORDER?
CLAMP_TO_EDGE: Clamps to the last texel. CLAMP_TO_BORDER: Uses a default border color.
152
What is Pelting?
Unwraps the mesh like peeling an orange - allows texture painting.
153
What is a texture atlas?
Stores multiple flattened regions into one texture image.
154
What is the difference between GL_NEAREST and GL_LINEAR?
GL_NEAREST returns the texel closest to (s,t), GL_LINEAR returns the weighted average of the four neighbours.
155
What Is Barycentric Interpolation?
Barycentric interpolation blends values (such as texture coordinates) across a triangle using weights that express the point's position relative to the three vertices (equivalently, the areas of the opposite sub-triangles). It ensures smooth, consistent rendering.
155
How are texture coordinates interpolated per fragment?
Using barycentric interpolation between triangle vertices.
156
What is forward mapping?
Maps from texture (s, t) to screen (x, y).
157
What is inverse mapping?
Maps from screen (x, y) to texture (s, t); more efficient for rasterisation
158
Why can aliasing occur in texture mapping?
When multiple texels map to one pixel or vice versa.
159
What is low-pass filtering, and in what aliasing scenario is it applied?
Low-pass filtering removes high-frequency detail that cannot be accurately sampled, typically by averaging neighbouring texels. It is applied when the pre-image of a pixel covers many texels (minification).
159
What are opacity maps?
Same concept as texture maps, but controls transparency. Usually use the alpha channel of the texture.
160
What is environment mapping?
Simulating reflections using a pre-rendered texture of the environment.
161
What is cube mapping?
Using 6 textures (one per cube face) to represent the environment in all directions.
162
What is sphere mapping?
Using a sphere to represent the environment. Inadequate texel resolution near the boundaries of the map leads to distortions.
163
What is the formula for the reflection vector?
r=d−2(n⋅d)n r = reflected direction d = incoming direction from viewer to surface as unit vector n = surface normal at point of reflection
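The reflection formula can be sketched in plain Python (illustrative, not from the card):

```python
def reflect(d, n):
    """r = d - 2(n.d)n, with d the incoming unit direction and n the unit normal."""
    ndd = sum(a * b for a, b in zip(n, d))
    return tuple(dc - 2 * ndd * nc for dc, nc in zip(d, n))
```

A ray hitting a surface head-on bounces straight back; a 45-degree hit reflects symmetrically about the normal.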
164
What is Bump Mapping?
A technique that simulates small surface bumps by modifying normals to affect lighting, without changing the actual geometry.
165
What does bump mapping affect?
Surface normals used for lighting, not the mesh or silhouette.
166
What is a heightmap (elevation map) in bump mapping?
A grayscale texture that defines how far each surface point should appear raised or lowered from the actual mesh.
167
How are normals computed from a heightmap?
Using cross products of vectors derived from height differences between nearby texels.
168
What is a normal map?
Stores precomputed normals as RGB colours, which are then directly used in bump mapping.
169
What are limitations of bump mapping?
Does not change silhouette, can't self-shadow, limited to small-scale detail.
170
What is a shadow volume?
A 3D volume behind an object where light is blocked - pixels inside it are in shadow.
171
How does the shadow volume algorithm work?
Count camera ray intersections with shadow volumes. Increment counter when entering shadow volume, and decrement counter when exiting shadow volume. If the count is odd, the point is in shadow; if even, it’s lit.
172
What OpenGL buffer is used for shadow volumes?
The stencil buffer, to track entry and exit into shadow volumes per fragment.
173
What are the steps of the stencil shadow volume method?
Render the scene with ambient light only. Build shadow volumes. Use stencil buffer to track shadowed regions. Render the scene with lighting only where stencil = 0.
174
What are the limitations of shadow volumes?
Expensive, no soft shadows, and requires exact silhouette detection.
174
What is shadow mapping?
A technique that determines whether a fragment is in shadow by checking if it’s visible from the light source using a depth map.
175
What are the two main stages in shadow mapping?
Render the scene from the light’s point of view to create a depth map. Render from the camera view, and for each fragment, compare its light-space depth to the depth map
176
What does the depth map store in shadow mapping?
The distance from the light to the closest surface it sees in each direction.
177
How do you decide if a fragment is in shadow?
Compare the fragment’s light-space depth dp to the depth stored in the shadow map ds: If dp > ds then it is in shadow, else it is lit.
178
What is shadow acne?
A self-shadowing artifact caused by floating-point precision errors when the fragment’s depth equals the stored depth.
179
How can shadow acne be reduced?
Apply a depth bias to slightly nudge the fragment’s depth before comparison: dp = 0.99dp
180
How is a fragment’s position transformed to match the shadow map?
xs = X * ps * vs * vv^-1 * xv The position (xv) in camera view space is converted to world space (vv^-1), then converted to light view space (vs), then projected into light clip space (ps). A bias matrix is applied to map from clip space to texture space (X).
181
What is the goal of visibility determination in graphics?
To identify and render only the parts of objects that are visible from the camera's viewpoint.
182
What does "clipping" mean in graphics rendering?
Clipping removes geometry that lies outside the camera’s view frustum or display boundaries.
183
What is view-frustum culling?
Discarding objects that lie completely outside the camera’s viewing volume.
184
How can OpenGL perform back-face culling?
Using glEnable(GL_CULL_FACE) and glCullFace(GL_BACK).
184
What is back-face culling?
The process of not rendering the faces of objects that point away from the camera.
185
How can you test whether a face is rendered?
Discard if: (Ppoint - Pview) ⋅ N >= 0
186
What is winding order used for in OpenGL?
To define the front face of a polygon, either clockwise (GL_CW) or counter-clockwise (GL_CCW). The default is counter-clockwise.
187
What is Occlusion Culling?
Occlusion culling determines which objects (or parts of them) are hidden behind others and removes them from rendering, saving computational resources.
187
What is the main idea behind the Painter's Algorithm?
Objects are rendered from farthest to nearest so nearer objects paint over the distant ones.
188
How can object depth be estimated in the Painter’s Algorithm?
Using the object's center of mass or bounding box depth after model-view transformation.
189
What issue arises when using the Painter’s Algorithm with intersecting or cyclically overlapping objects?
The algorithm may fail because objects can't always be sorted into a clear back-to-front order.
190
What data structure improves efficiency in the Painter's Algorithm for complex scenes?
A BSP tree (Binary Space Partitioning tree), which organizes space to support efficient depth ordering.
191
What is a Z-buffer?
A buffer that stores the depth (z-value) of the closest surface rendered at each screen pixel.
192
How does the Z-buffering algorithm work?
Transform each new triangle to screen space. For each pixel it covers, compute the depth z, compare it with the current value in the Z-buffer, and keep whichever is closer to the camera.
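A minimal pure-Python sketch of the depth test (illustrative; the fragment representation is an assumption, not from the cards):

```python
def zbuffer_render(width, height, fragments):
    """fragments: list of (x, y, depth, colour). Keep the nearest fragment
    per pixel, where a smaller depth means closer to the camera."""
    depth = [[float('inf')] * width for _ in range(height)]
    colour = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # depth test: new fragment is closer
            depth[y][x] = z
            colour[y][x] = c
    return colour
```

Because each fragment is tested independently, no pre-sorting of geometry is needed.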
193
Why is Z-buffering preferred for complex scenes?
Because it handles overlapping geometry and doesn’t require objects to be sorted before rendering.
194
What is Z-fighting?
A rendering artifact that occurs when two surfaces have nearly the same depth value, causing flickering.
195
Why does Z-buffering struggle with transparent objects?
Because it only keeps one depth value per pixel and cannot blend multiple overlapping transparent surfaces.
196
How is transparency usually handled with Z-buffering?
Opaque objects are rendered first; then transparent objects are sorted back-to-front and rendered with blending enabled.
197
What is rasterisation in computer graphics?
The process of converting ideal geometric primitives into pixels on a raster display.
198
What is a fragment in OpenGL?
All the data needed to compute the final color and properties of a pixel.
199
Why do interpolated normals need renormalization?
Because linear interpolation can distort their magnitude and make them no longer unit vectors.
200
What is scanline interpolation and how is it used to fill triangles in rasterisation?
Scanline interpolation fills a triangle row by row: for each scanline, find its intersections with the triangle's edges, interpolate the vertex attributes at those intersections, then interpolate horizontally along the span between them.
201
What is the parity rule for point-in-polygon tests?
A point is inside a polygon if a ray from it crosses the edges an odd number of times.
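A sketch of the parity (even-odd) test in Python: cast a ray from the point in the +x direction and toggle an inside flag at each edge crossing (the function name and argument layout are illustrative):

```python
def point_in_polygon(px, py, poly):
    """Even-odd rule: cast a ray towards +x and count edge crossings."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge straddles the ray's y-level, and the crossing lies right of the point?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside   # odd number of crossings so far => inside
    return inside
```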
202
What is the winding number rule?
A point is inside if the polygon winds around it a non-zero number of times (clockwise and anticlockwise crossings count with opposite signs).
203
What are the key steps of the scanline polygon fill algorithm?
Find edge intersections, sort them, and fill between alternate pairs.
204
What does the naïve midpoint algorithm do?
Rounds the y-value of a line equation to choose the closest pixel for each x.
205
What is the drawback of the naïve midpoint algorithm?
It uses slow floating point operations and rounding.
206
How does Bresenham’s algorithm improve line drawing?
Uses only integer math and an error term to choose pixels efficiently.
207
How does Bresenham's algorithm work?
A decision variable d starts at 2Δy − Δx. Each step in x adds 2Δy to d; whenever d becomes positive, y is incremented and 2Δx is subtracted from d.
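A minimal integer-only sketch for the first octant (0 ≤ slope ≤ 1, x0 < x1); the general algorithm adds the adjustments from the next card:

```python
def bresenham(x0, y0, x1, y1):
    """Bresenham line for the first octant: integer math only."""
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx           # decision variable
    y = y0
    points = []
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if d > 0:             # ideal line has passed the midpoint: step up
            y += 1
            d -= 2 * dx
        d += 2 * dy
    return points
```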
208
What adjustments are needed for general lines in Bresenham's algorithm?
Handle negative slopes, steep gradients (m > 1), and arbitrary start points.
209
How does Bresenham's circle algorithm work?
Calculates one octant using integer math and mirrors the result for a full circle.
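A sketch of the midpoint (Bresenham-style) circle in Python, tracing one octant with an integer decision variable and mirroring each point eight ways (helper name and return type are illustrative):

```python
def circle_points(r):
    """Midpoint circle: compute one octant, mirror into all eight."""
    pts = set()
    x, y = 0, r
    d = 1 - r                  # decision variable
    while x <= y:
        for sx, sy in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            pts.add((sx, sy))
        if d < 0:              # midpoint inside the circle: keep y
            d += 2 * x + 3
        else:                  # midpoint outside: step y inwards
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return pts
```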
210
What is a drawback of column replication for thick lines?
Thickness varies with slope and can cause gaps.
211
Why are circular pens better for thick lines?
They provide consistent thickness and smoother joins.
212
What is aliasing in computer graphics?
A visual distortion that occurs when a signal is undersampled, making high-frequency content appear incorrect or misleading.
213
What causes aliasing?
Undersampling a signal that contains high frequencies beyond the Nyquist limit.
214
What is the Nyquist limit?
Half the sampling frequency; signals must be sampled at least twice their highest frequency to avoid aliasing.
215
What is the wagon-wheel effect?
A temporal aliasing phenomenon where a spinning wheel appears to move backward when filmed at a low frame rate.
216
How can a signal be represented in frequency space?
As a sum of sine waves using Fourier analysis.
217
What does it mean for a signal to be "band-limited"?
It contains no frequencies higher than a certain maximum — essential for exact reconstruction from samples.
218
What is the goal of anti-aliasing?
To reduce or eliminate aliasing artifacts in digital images by filtering out high frequencies.
219
What are the two main anti-aliasing strategies?
Pre-filtering and post-filtering.
220
What is area sampling?
Averaging the colours of everything within a pixel's area to simulate eye or camera blur.
221
What’s the difference between area sampling and weighted area sampling?
Weighted sampling gives more importance to the pixel center for smoother intensity transitions.
222
What is supersampling anti-aliasing (SSAA)?
Renders the image at higher resolution and averages multiple samples per pixel - accurate but computationally expensive.
223
What is FXAA (Fast Approximate Anti-Aliasing)?
A fast, shader-based method that smooths edges in post-processing based on detected contrast.
224
What is MSAA (Multisample Anti-Aliasing)?
A technique that samples geometry (not shading) multiple times per pixel, efficiently reducing edge aliasing.
225
How are anti-aliased lines drawn?
By assigning different intensities to nearby pixels based on distance from the ideal line.
226
How does anti-aliased Bresenham’s algorithm work?
It modifies the standard algorithm to light adjacent pixels with fractional intensity (e.g., 1.0, 0.5, 0.1).
227
Why does texture aliasing happen?
When many texels map to one pixel (minification) or when a single texel is stretched across many pixels (magnification).
228
What is MIP mapping?
Precomputed, lower-resolution versions of a texture used for distant surfaces to reduce aliasing.
229
What is trilinear filtering?
Interpolating between texels and between MIP map levels for smoother texture transitions.
230
What is anisotropic filtering?
When textures are seen at a sharp angle, this method samples more in one direction to avoid blurring/stretching.
231
What is the problem with general curves as functions?
You cannot express loops or spirals as y = f(x), because a single x value can correspond to multiple y values.
231
How do you calculate the normal of a triangular face with vertices A, B, and C?
Take two edge vectors, edge 1 = A − C and edge 2 = B − C, and compute their cross product a × b = (a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁), where a = edge 1 and b = edge 2.
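The same computation as a small Python sketch (the `face_normal` helper is illustrative; the result is unnormalised):

```python
def face_normal(a, b, c):
    """Normal of triangle (a, b, c): cross product of two edge vectors."""
    e1 = [a[i] - c[i] for i in range(3)]   # edge 1 = A - C
    e2 = [b[i] - c[i] for i in range(3)]   # edge 2 = B - C
    return (e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0])
```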
232
What is a parametric curve?
A curve where each coordinate (x, y, z) is defined as a function of a parameter t.
233
What is a parametric surface?
A surface defined by two parameters u and v such that x = x(u,v), y = y(u,v), z = z(u,v).
234
What is the parametric form of a line between two points A and B?
x(t) = xA + (xB - xA)t and similarly for y and z.
235
How is a circle represented parametrically?
x(t) = r cos(2π t), y(t) = r sin(2π t)
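Both parametric forms above can be sketched directly in Python (the helper names are illustrative):

```python
import math

def line_point(a, b, t):
    """Point on the segment A -> B at parameter t in [0, 1]."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(len(a)))

def circle_point(r, t):
    """Point on a circle of radius r; t in [0, 1] covers one revolution."""
    return (r * math.cos(2 * math.pi * t), r * math.sin(2 * math.pi * t))
```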
236
What types of polynomials are used in parametric curves?
Linear, quadratic and cubic polynomials
237
What defines a Bézier curve?
A weighted sum of control points using Bernstein basis functions.
238
Name three key properties of Bézier curves.
Pass through the first and last control points. Stay within the convex hull of control points. Global control: one point affects the entire curve.
239
What does C¹ continuity mean?
Curves meet and have the same tangent direction at the join.
239
What does C⁰ continuity mean?
Curves meet (same position).
240
How do you ensure smooth joins between Bézier curves?
Make adjacent control points collinear across the join.
241
What is De Casteljau’s algorithm used for?
Evaluating Bézier curves recursively through linear interpolation.
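A minimal sketch of De Casteljau's algorithm in Python: repeatedly lerp between adjacent control points until one point remains (the `de_casteljau` helper is illustrative):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Replace each adjacent pair with its interpolation at t.
        pts = [tuple(a + (b - a) * t for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```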
242
What is Chaikin’s algorithm used for?
To iteratively smooth a polygonal curve by generating new points at ¼ and ¾ along each edge.
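One Chaikin refinement step can be sketched as follows, assuming an open polyline given as (x, y) tuples (the `chaikin` helper is illustrative; apply it repeatedly to converge towards the smooth curve):

```python
def chaikin(points):
    """One Chaikin step: replace each edge with points at 1/4 and 3/4 along it."""
    out = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        out.append((0.75 * x1 + 0.25 * x2, 0.75 * y1 + 0.25 * y2))  # 1/4 point
        out.append((0.25 * x1 + 0.75 * x2, 0.25 * y1 + 0.75 * y2))  # 3/4 point
    return out
```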
243
What type of curve does Chaikin’s algorithm converge to?
A uniform quadratic B-spline, which is C¹ continuous.
244
How does the C² subdivision scheme differ from Chaikin’s?
It inserts midpoints and adjusts old points using a weighted average that gives C² continuity, forming a cubic B-spline.
245
What is the 2D generalization of Chaikin’s algorithm?
The Doo-Sabin subdivision method, which works on polygon meshes.
246
What are “extraordinary polygons” in Doo-Sabin subdivision?
Polygons that are not quadrilaterals, requiring special weights for smoothness.
247
What does Catmull-Clark subdivision produce?
A smooth surface from a polygon mesh, often used in animation and modeling.
248
Why are subdivision surfaces useful in animation?
They allow smooth deformation with low-res control meshes and are compatible with skeletal animation.
249
How are new face points computed in Catmull-Clark?
As the centroid of the original vertices of the face.
250
How are edge points calculated in Catmull-Clark?
As the average of the edge's two endpoints and the face points (centroids) of the two adjacent faces.
251
What is tessellation in graphics?
The process of subdividing surfaces into smaller polygons at runtime for smoother rendering.
252
What are the two main tessellation shader stages in OpenGL?
The Tessellation Control Shader (TCS), which sets tessellation levels, and the Tessellation Evaluation Shader (TES), which computes vertex positions.
253
What does the Tessellation Control Shader do?
Controls how finely to subdivide each patch and ensures continuity between patches.
254
What does the Tessellation Evaluation Shader do?
Evaluates the tessellated patch and generates final vertex positions.
255
What are outer tessellation levels?
They define how many segments each edge of a patch is split into.
256
What are inner tessellation levels?
They define how many subdivisions occur within the patch area.
257
If outer = 4, how many segments will each edge become?
4 segments → 5 vertices on that edge
258
What is global illumination?
A lighting model that includes both direct and indirect light contributions (e.g. bounced light, caustics).
259
What are caustics?
Focused patterns of light created by reflection or refraction through curved surfaces like glass or water.
260
Why can't we simulate perfect global illumination directly?
Because it is computationally too expensive to trace every possible light path.
261
How does Whitted ray tracing work?
It sends rays from the eye into the scene and spawns new rays on intersection for reflection, refraction, and shadows.
262
What is the notation for light paths?
L = light source, E = eye, D = diffuse reflection, S = specular reflection.
263
What light paths does Whitted ray tracing simulate?
Paths like LS*E and LDS*E
264
What are the limitations of Whitted ray tracing?
No indirect lighting, soft shadows, caustics, or diffuse interreflections.
265
What is path tracing?
A rendering method that simulates full global illumination by tracing many random light paths per pixel.
265
What does each path in path tracing represent?
A possible light interaction sequence from the light source to the camera through various bounces.
266
What does path tracing simulate that Whitted ray tracing does not?
Indirect lighting, multiple diffuse bounces, soft shadows, caustics, global light transport.
267
What is Monte Carlo sampling in path tracing?
A method to approximate the rendering equation using random samples.
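The core idea, stripped of rendering detail, is just averaging a function over random samples. A toy sketch in Python, estimating a simple 1D integral rather than the rendering equation (all names are illustrative):

```python
import random

def mc_estimate(f, sampler, n=10000):
    """Monte Carlo: average f over n random samples to approximate an integral."""
    return sum(f(sampler()) for _ in range(n)) / n

# Estimate the integral of x^2 over [0, 1]; the true value is 1/3.
random.seed(0)
approx = mc_estimate(lambda x: x * x, random.random, n=50000)
```

In path tracing the "sampler" draws random ray directions and f evaluates the light carried along each path; importance sampling (next card) biases the sampler towards directions that contribute most.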
268
Why is importance sampling used?
To reduce noise and improve convergence by sampling important directions (e.g., towards lights or glossy highlights) more often.
269
Why is path tracing slower than ray tracing?
It requires many more rays per pixel to converge to a clean image.
270
What happens as you increase the number of rays in path tracing?
Noise decreases and realism increases.
271
What’s the main advantage of path tracing?
It can simulate physically accurate lighting, including complex indirect effects.
272
What is Radiosity?
A global illumination method specialised for diffuse-diffuse lighting interactions, useful for static scenes.
273
Is radiosity view-dependent or view-independent?
View-independent — it computes the lighting for the entire scene regardless of the camera position.
273
What is a key assumption in radiosity?
All surfaces reflect light diffusely (Lambertian), not specularly.
274