Image Manipulation Flashcards

1
Q

Types of image enhancement?

A

Spatial Domain methods –> based on direct manipulation of pixels in image
Point, local and global image operations are all spatial domain methods

Frequency Domain methods –> Based on modifying the Fourier transform of the image
Filtering is implemented in frequency domain by multiplication

Combination Methods –> combo of spatial domain methods and frequency domain methods

2
Q

How do you calculate the brightness of a grayscale image?

A

Average intensity of all pixels in the image
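As a minimal NumPy sketch (the image values here are made up for illustration):

```python
import numpy as np

# Hypothetical 2x2 grayscale image with 8-bit intensities
img = np.array([[0, 100],
                [200, 100]], dtype=np.uint8)

# Brightness = average intensity over all pixels
brightness = img.mean()  # (0 + 100 + 200 + 100) / 4 = 100.0
```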

3
Q

Exposure

A

Amount of light that enters the lens of the camera.
Types of exposure:
1. Overexposure
2. Underexposure
3. Long Exposure (captures a subject over an extended period of time)

4
Q

What is the difference between brightness and exposure?

A

Exposure is the amount of light that enters the lens of the camera, while brightness is how bright an object appears in an image

Brightness is a product of exposure

5
Q

What is good contrast?

A

Widely spread intensity values and a large difference between the max and min intensity values

6
Q

What is HDR (High Dynamic Range)

A

Technique that produces images with a larger dynamic range of luminosity than SDR (Standard Dynamic Range)

7
Q

What is dynamic range

A

Range of lightest and darkest tones in an image

8
Q

What is wide dynamic range?

A

When you are able to see details in both light and dark areas

9
Q

How do you obtain an HDR image?

A

Use photographs of a scene taken with different exposure values and combine them.
After the HDR image has been merged, it has to be converted back to 8-bit to view on standard displays

10
Q

What are the characteristics of image operations?

A

Point –> Output value at specific coordinate is dependent only on the input value at the same coordinate.

Local –> Output value at a given coordinate is dependent on the input values of the neighborhood of that same coordinate

Global –> Output value at given coordinate is dependent on all the values of the input image

11
Q

Point operations

A

Type of image operation:
Changes a pixel’s intensity value based on some function f.
The new pixel's intensity depends on:
The pixel's previous intensity
The mapping function

Examples of point operations:
Histogram Equalization
Gamma correction

12
Q

What is an image negative?

A

Produced by subtracting each pixel from the maximum intensity value.

e.g. for an 8-bit image, max intensity is 2^8 - 1 = 255.
So, subtract each pixel’s intensity value from 255
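A one-line NumPy sketch of the negative, on a made-up 2x2 image:

```python
import numpy as np

img = np.array([[0, 64],
                [128, 255]], dtype=np.uint8)

# Subtract every pixel from the maximum intensity (255 for 8-bit)
negative = 255 - img  # 0 -> 255, 64 -> 191, 128 -> 127, 255 -> 0
```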

13
Q

What is the Power Law Transformation / Gamma correction?

A

Point operation.
It adjusts the brightness of an image using gamma correction:

O = 255 × (I/255)^G

O = output image [0, 255]
I = input image
G = gamma (controls brightness of image)

If Gamma < 1, darker input values are mapped to brighter output values
If Gamma > 1, brighter input values are mapped to darker output values
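The transform above can be sketched in NumPy (example intensities are illustrative):

```python
import numpy as np

def gamma_correct(img, g):
    """Power-law transform: O = 255 * (I / 255) ** g."""
    normalized = img.astype(np.float64) / 255.0
    return (255.0 * normalized ** g).astype(np.uint8)

img = np.array([[64, 128, 192]], dtype=np.uint8)
brighter = gamma_correct(img, 0.5)  # gamma < 1: outputs >= inputs
darker = gamma_correct(img, 2.0)    # gamma > 1: outputs <= inputs
```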

14
Q

Histogram Equalization

A

Technique for adjusting image intensities to enhance contrast. Transforms an image so that its histogram is more evenly distributed across the entire range of values
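A common way to implement this is via the cumulative histogram; a minimal sketch (the low-contrast test image is made up):

```python
import numpy as np

def equalize(img):
    """Remap intensities via the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Stretch the CDF so output intensities span the full [0, 255] range
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

# Hypothetical low-contrast image: all intensities cluster near 100
low = np.array([[100, 100, 101],
                [102, 103, 103]], dtype=np.uint8)
out = equalize(low)  # output now spans the full range
```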

15
Q

Contrast Stretching

A

Image normalization with a piecewise linear transformation function
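The simplest (single-piece) case maps the observed min/max onto the full range; a sketch with illustrative values:

```python
import numpy as np

def stretch(img, lo=0, hi=255):
    """Linearly map [img.min(), img.max()] onto [lo, hi]."""
    i_min, i_max = img.min(), img.max()
    scaled = (img.astype(np.float64) - i_min) / (i_max - i_min)
    return (lo + scaled * (hi - lo)).astype(np.uint8)

img = np.array([[50, 100, 150]], dtype=np.uint8)
out = stretch(img)  # 50 -> 0, 150 -> 255
```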

16
Q

Piecewise Transformation

A

Spatial Domain method used for enhancing a group of pixels in a defined range.

17
Q

Local Operations

A

Types of local operations:
Linear Filters –> Output pixel value is determined as a weighted sum of input pixel values.
The entries in the weight kernel are called filter coefficients.

Non-linear Filters –> Use the kernel to obtain the neighbouring pixel values, then use an ordering mechanism to produce the output pixel

18
Q

Kernel / Filter / Spatial Mask

A

A rectangular matrix of values used in the convolution process to modify an image

19
Q

Convolution

A

Involves multiplying the kernel values with the corresponding pixel values in the image. The resulting values are added together and the sum replaces the value of the central pixel in the output image.
The process is repeated for every pixel in the image
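The process above can be sketched naively in NumPy (a "valid" convolution, i.e. no padding, so the output shrinks):

```python
import numpy as np

def convolve(img, kernel):
    """Naive 2D 'valid' convolution."""
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Multiply kernel with the window, sum, store at the centre
            out[y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out

# Averaging a constant image with a 3x3 box kernel leaves it unchanged
result = convolve(np.ones((3, 3)), np.ones((3, 3)) / 9)
```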

20
Q

Why is padding necessary when applying a filter?

A

Filtered images often suffer from boundary effects, because near the border the kernel overhangs the image edge and has no pixel values to read. Padding supplies those missing values

21
Q

Padding

A

Types of Padding:
Zero –> Set all pixels outside the source image to 0
Constant –> Set all pixels outside the source image to a specified border value
Clamp –> Repeat edge pixels indefinitely
Mirror –> Reflect pixels across the image edge
Extend
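NumPy's np.pad modes roughly correspond to these types (the mapping of names here is an assumption; terminology varies between sources):

```python
import numpy as np

row = np.array([1, 2, 3])

zero     = np.pad(row, 2)                     # zero padding
constant = np.pad(row, 2, constant_values=9)  # constant border value
clamp    = np.pad(row, 2, mode='edge')        # repeat edge pixels
mirror   = np.pad(row, 2, mode='reflect')     # reflect across the edge
```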

22
Q

Types of linear filters:

A

Box Filter –> Averages the pixel values in a KxK window. Each pixel is computed as the average of its surrounding pixels (low-pass filter)

Gaussian Filter –> Uses a Gaussian kernel (square matrix of values that follow a Gaussian curve) (low-pass filter)
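Both kernels can be built directly (K and sigma below are illustrative choices):

```python
import numpy as np

K, sigma = 3, 1.0

# Box kernel: every neighbour in the K x K window gets equal weight
box = np.ones((K, K)) / (K * K)

# Gaussian kernel: weights follow a 2D Gaussian, normalised to sum to 1
ax = np.arange(K) - K // 2
xx, yy = np.meshgrid(ax, ax)
gauss = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
gauss /= gauss.sum()  # centre weight is largest, corners smallest
```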

23
Q

Low pass filters

A

Used to remove high spatial frequency noise from an image

24
Q

Non-Linear filters

A

As the kernel is shifted around the image, the pixels in the window section of the image are reordered, and the output image is produced from these reordered pixels.
Types of Non-Linear Filters:
1. Median Filter
2. Bilateral Filter

25
Q

Median Filter

A

A type of non-linear filter

Sorts the pixel values in a given neighbourhood and takes the median of these values. The median replaces the original pixel value in the output image.

Performs well on images that contain salt-and-pepper noise.

As window size increases, the median gets closer to the local mean, causing more smearing and edge distortion
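A minimal sketch (clamp padding at the borders is an implementation choice here):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

# A single salt-noise pixel (255) in a flat image is removed entirely
noisy = np.full((3, 3), 10, dtype=np.uint8)
noisy[1, 1] = 255
clean = median_filter(noisy)
```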

26
Q

Bilateral Filters

A

A type of local, non-linear, edge-preserving, noise-reducing filter.

Similar to Gaussian Convolution, this filter is defined as a weighted average of pixels, except it also considers the variation of intensities to preserve edges.

Two pixels are close together in bilateral filtering iff:
1. they occupy nearby spatial locations
2. Have a similar intensity value

Controlled by two parameters:
Spatial Parameter
Range Parameter
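A brute-force sketch of these ideas (sigma values and clamp padding are illustrative choices, not a reference implementation):

```python
import numpy as np

def bilateral(img, k=5, sigma_s=2.0, sigma_r=25.0):
    """Weights combine spatial distance (sigma_s, the spatial parameter)
    and intensity difference (sigma_r, the range parameter)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    ax = np.arange(k) - pad
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            window = padded[y:y + k, x:x + k]
            # Range weight: similar intensities count more, so pixels on
            # the far side of an edge contribute little (edge-preserving)
            rng = np.exp(-(window - img[y, x])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = (weights * window).sum() / weights.sum()
    return out
```

On a constant image every range weight is 1, so the filter reduces to a plain Gaussian blur and the output is unchanged.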

27
Q

What is the main difference between Gaussian Filters and Bilateral Filters

A

Gaussian filters are local, linear filters, while bilateral filters are local, non-linear filters.
Also, a Gaussian filter weights the neighbourhood pixels based on their spatial distance alone, while a bilateral filter uses Gaussian functions to weight them based on both their spatial distance and their individual intensity values

28
Q

What is the main advantage of using frequency domain filtering over spatial domain filtering:

A

The process is simplified to a multiplication, while in spatial domain filtering it is a convolution.

Gives you control over the whole image

simpler and computationally cheaper
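The multiplication property can be sketched with NumPy's FFT (this is circular convolution; the random image and box kernel are illustrative):

```python
import numpy as np

# Convolution in the spatial domain equals pointwise multiplication in
# the frequency domain
img = np.random.default_rng(0).random((8, 8))
kernel = np.zeros((8, 8))
kernel[:3, :3] = 1 / 9.0  # 3x3 box filter, zero-padded to the image size

filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
```

Since the kernel sums to 1, this averaging filter preserves the total intensity of the image.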

29
Q

Fourier transform

A

Used to convert an image to the frequency domain.
The concept behind the Fourier transform is that any waveform can be constructed as a sum of sine and cosine waves of different frequencies

Amplitude = maximum value of the sine function

Phase = the relative position of a wave with respect to one where y(0) = 0

30
Q

Name the three steps for edge detection algorithms

A

Filtering –> low-pass filtering is commonly used to improve the performance of an edge detector with respect to noise

Enhancement –> Emphasizes pixels where there is a significant change in the local intensity values. Performed by computing the gradient magnitude

Detection –> indicates that an edge is present near a pixel in an image

31
Q

Modelling

A

Mathematical specification of shape and appearance.
e.g. A 3D object might be described as a set of ordered 3D points along with some interpolation rule to connect the points and a reflection model to describe how light interacts with the object

32
Q

Rendering

A

the creation of shaded images from 3D computer models.
involves considering how each object contributes to each pixel.
Two ways:

Object-Order Rendering –> For each object, all pixels that it influences are found and updated

Image-Order Rendering –> For each pixel all the objects that influence it are found and the pixel value is computed. (Uses Ray tracing algorithm)

33
Q

Animation

A

The creation of the illusion of motion through sequences of images

34
Q

What is Ray Tracing

A

An image-order algorithm used for making renderings of 3D shapes.

  1. Computes one pixel at a time. Each pixel has an “eye ray”.
  2. The “eye rays” are traced into the scene from the view point to the pixel, and tested for intersections with any objects in the scene. If the eye ray intersects any object, this intersection point is recorded.
  3. Ray tracing algorithm then determines the pixel colour and how the surface at that intersection point interacts with light. (shading computation renders shadows, light and refractions).
35
Q

Shading Models

A

Used to capture the process of light reflection.
3 variables in light reflection:
1. Light Direction –> Unit vector pointing towards the light source
2. View Direction –> Unit vector pointing towards the camera
3. Surface Normal –> Unit vector perpendicular to the surface at the point where reflection is occurring

36
Q

Lambertian shading model:

A
Lambertian –> the amount of energy from a light source that falls on an area of a surface depends on the angle of the surface to the light

Used for objects that aren’t shiny (e.g. wood)

Lambert’s cosine law –> illumination is proportional to the cosine of the angle between the surface normal and the light direction

View independent –> surface colour doesn’t depend on the direction from which you view it. Appears matte.
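The cosine law can be sketched directly (albedo and light intensity defaults are illustrative):

```python
import numpy as np

def lambertian(n, l, albedo=1.0, light=1.0):
    """Diffuse shading: colour = albedo * light * max(0, n . l)."""
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return albedo * light * max(0.0, float(np.dot(n, l)))

head_on = lambertian(np.array([0., 0., 1.]), np.array([0., 0., 1.]))  # full
grazing = lambertian(np.array([0., 0., 1.]), np.array([1., 0., 0.]))  # none
```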

37
Q

Blinn-Phong Shading Model

A

View dependent shading model

Aims to produce reflection that is at its brightest when view direction and light direction are symmetrically positioned across the surface normal. (Mirror reflection)

38
Q

How can you tell how close a reflection configuration is to the mirror configuration?

A

By comparing the half vector h (the bisector of the angle between the view direction and the light direction) to the surface normal.
h is computed by summing vectors v and l and dividing the result by its magnitude.

If h is near the surface normal, the specular component should be bright. Otherwise it is dim.

Achieved by computing the dot product between h and n, then raising the result to the power of the Phong exponent
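A sketch of this specular term (the exponent p=32 is an arbitrary example):

```python
import numpy as np

def blinn_phong_specular(n, v, l, p=32):
    """Specular term: max(0, n . h) ** p, with h = (v + l) / |v + l|."""
    h = v + l
    h = h / np.linalg.norm(h)  # half vector bisects v and l
    return max(0.0, float(np.dot(n, h))) ** p

n = np.array([0., 0., 1.])
mirror = blinn_phong_specular(n, n, n)  # h == n: brightest case
off = blinn_phong_specular(n, n, np.array([1., 0., 1.]) / np.sqrt(2))
```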

39
Q

What is the Phong Exponent?

A

Controls the apparent shininess of the surface (a higher exponent gives a smaller, sharper specular highlight)