Lecture 9 Flashcards

(30 cards)

1
Q

Why do we do image post processing?

A

Get an enhanced image by adjusting the contrast, brightness, saturation and sharpness

Extract useful info from the image, or make the image smaller

2
Q

What influences post processing contrast adjustment?

A

Bit depth
- The more shades of grey available, the better the contrast and tonal range

Examples of contrast adjustment include:

  • grey scale transformation (LUT)
  • windowing
  • thresholding
3
Q

Explain what’s grey scale transformation in contrast adjustment

A

Achieved by using a conversion curve known as the LUT

LUT maps each input digital grey scale value to an output display value in steps of a just-noticeable difference (JND), so that the human eye can perceive changes in pixel intensity as differences in shades of grey

4
Q

Why do we use LUT?

A

LUTs save memory/storage space when only a limited number of intensities or colours is needed

Images with high bit depth require more storage space

5
Q

How exactly does LUT work in order to adjust contrast in a computer?

A

The data stored in the computer are pixel values. Adding a fixed value to these pixel values increases pixel intensity, and subtracting a fixed value reduces it. CONTRAST IS CHANGED BY CHANGING PIXEL VALUES

Increase PI = brighter image
Decrease PI = darker image

Initially, the radiographer picks a LUT suitable for the body part (eg chest, brain, bone), and this sets the desired contrast characteristics for the image
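A minimal sketch of this, assuming NumPy and an 8-bit image; the offset of 40 and the toy pixel values are illustrative, not from the lecture:

```python
import numpy as np

# Hypothetical LUT for an 8-bit image: add a fixed value of 40 to every
# pixel value, clamping at the 255 limit. Increasing pixel intensity
# brightens the image; a subtracting LUT would darken it.
offset = 40
lut = np.clip(np.arange(256) + offset, 0, 255).astype(np.uint8)

# Toy 2x2 "image": the contrast/brightness change is a per-pixel table lookup.
image = np.array([[0, 100], [200, 250]], dtype=np.uint8)
adjusted = lut[image]  # each input pixel value indexes its output value
```

Because the mapping is precomputed once for all 256 possible values, applying it to an image of any size is a single indexing operation.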

6
Q

Explain how brightness levels are adjusted with post processing

A

Adjusted by adding or subtracting a fixed value evenly across every pixel in the image

If the limit of the intensity range is reached (totally black or white), the pixel value is clamped at that limit value
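A small sketch of this clamping behaviour, assuming NumPy; the subtraction value of 60 is illustrative:

```python
import numpy as np

# Brightness adjustment: subtract a fixed value evenly from every pixel.
# Pixels that would pass the lower intensity limit (0, totally black) are
# clamped to that limit value.
image = np.array([[10, 100], [200, 250]], dtype=np.int16)  # wide type avoids wrap-around
darkened = np.clip(image - 60, 0, 255).astype(np.uint8)
```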

7
Q

Explain what’s inversion of an image in post processing

A

Inversion involves the flipping of pixel values (ie. black region becomes white). Used in angiography and theatre image intensifiers.
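For an 8-bit image this flipping is just a subtraction from the grey-scale maximum; a minimal sketch assuming NumPy:

```python
import numpy as np

# Inversion: each pixel value is flipped about the 8-bit maximum,
# so black (0) becomes white (255) and vice versa.
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
inverted = 255 - image
```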

8
Q

Explain what’s windowing in terms of image enhancement for post processing and its relevance to contrast and brightness

A

Process of selecting a segment of the total pixel value range, which can be used to change contrast and brightness of certain parts of image

Pixel intensities within a segment of the dynamic range are selected and displayed over the shades of grey (white to black range). So those pixel values below or above the selected segment will appear black or white and do not add to the contrast of the image

Window width = range of pixel values selected from the array (relevant to contrast)
Window level = centre of the range (relevant to brightness)
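A sketch of the width/level mapping, assuming NumPy; the level and width values are illustrative. Pixels inside the window are stretched over the full grey scale, while pixels below or above it are clipped to black or white:

```python
import numpy as np

def window(image, level, width):
    # Select the segment [level - width/2, level + width/2] of the pixel
    # value range and map it linearly onto the full 0..255 grey scale.
    lo, hi = level - width / 2, level + width / 2
    out = (image.astype(float) - lo) / (hi - lo) * 255
    # Values outside the window do not add to contrast: clip to black/white.
    return np.clip(out, 0, 255).astype(np.uint8)

image = np.array([[50, 100], [150, 200]], dtype=np.uint8)
windowed = window(image, level=100, width=100)  # window covers 50..150
```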

9
Q

Distinguish between contrast and brightness

A

Contrast is the difference in optical density that makes an object distinguishable from its background

Brightness is the perception provided by the luminance of a visual target as whiteness or darkness

10
Q

Explain the effect of changing the window

A

Setting window to lower segment of grey scale range produces better contrast in lighter areas (ie. mediastinum)

Setting window to higher segment produces good contrast in darker areas like lungs

11
Q

Explain the effect of changing the window width

A

Decreasing the window width increases the brightness interval between two consecutive pixel values, thereby increasing contrast

12
Q

Explain the effect of changing window level

A

Image brightness is the average intensity of all pixels in image

Increasing window level = darker image
Decreasing window level = brighter image

13
Q

What’s histogram equalisation and what is it used for during image enhancement?

A

Corrects over- and under-exposure, poor brightness and contrast, and dynamic range.
Accomplishes this by spreading out the intensity values across the whole range (grey scale)

So the wider the spread of intensity values, the bigger the difference between min and max intensity values, THEREFORE BETTER CONTRAST
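A minimal histogram-equalisation sketch, assuming NumPy and an 8-bit image; the cumulative histogram (CDF) is used as the intensity mapping, which spreads a narrow band of intensities across the whole grey scale:

```python
import numpy as np

def equalise(image, levels=256):
    # Histogram of pixel intensities, then its cumulative sum (CDF).
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    # Normalise the CDF to [0, 1] and use it as the intensity mapping.
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[image] * (levels - 1)).astype(np.uint8)

# Toy low-contrast image: intensities crowded into the band 100..103.
image = np.array([[100, 101], [102, 103]], dtype=np.uint8)
equalised = equalise(image)  # intensities now spread out towards 0..255
```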

14
Q

What exactly is the dynamic range?

A

Very bright and dark portions of digital image which may have exceeded the bit depth of detector

The histogram equalisation can be used to tailor dynamic range to the bit depth

15
Q

Explain what’s thresholding in terms of post image processing

A

Separating pixel values into 2 ranges and assigning a fixed value to the pixels in each range. It is used to extract or alter image info

You can create a binary image ie. binary contrast adjustment, where pixel intensities are mapped to either black or white according to whether they lie above or below the selected threshold

Or you can do grey scale splicing which separates pixels into more than 2 ranges and assigns a fixed value to each range
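Both variants can be sketched in a few lines, assuming NumPy; the threshold of 100 and the splice range edges are illustrative:

```python
import numpy as np

image = np.array([[30, 90], [120, 200]], dtype=np.uint8)

# Binary contrast adjustment: map each pixel to black or white according
# to whether it lies above or below the selected threshold.
threshold = 100
binary = np.where(image > threshold, 255, 0).astype(np.uint8)

# Grey scale splicing: separate pixels into more than 2 ranges and assign
# a fixed output value to each range.
range_index = np.digitize(image, bins=[64, 128, 192])        # index 0..3
spliced = np.array([0, 85, 170, 255], dtype=np.uint8)[range_index]
```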

16
Q

Explain spatial resolution in terms of post processing mechanics

A

Spatial resolution of image is relevant to sampling

Which is the process of converting electrical signals to a digital format, which then determines its spatial resolution

A signal = continuous wave of varying frequencies

Detectors average data values over a small time interval so that each sample is representative of the average value of the signal.

So the number of samples obtained per second (the sampling frequency) influences the spatial resolution of the image

17
Q

Explain intensity resolution in terms of post processing mechanics

A

Relevant to quantisation, which is the process of digitising the range of the signal.

It determines grey levels assigned to each pixel and the intensity resolution of digital image

Quantisation can be used to reduce image data by reducing the number of grey levels (fewer bits per pixel), ie. compressing the image, at the cost of quality

but it can also be used to correct posterisation by providing more shades of grey to produce continuous tone

18
Q

What are the limitations of quantisation?

A

Vital info may be omitted

Posterisation can occur

19
Q

What’s posterisation

A

Too few bits per pixel, ie. a limited number of different greys available and used

Intensity values limited to 8 shades (3 bits) cause posterisation. It affects areas of low spatial frequency the most

20
Q

What’s a condition that must be adhered to in terms of sampling?

A

There is a minimum sampling rate at which a signal must be sampled to avoid complications…this is referred to as the Nyquist rate (twice the maximum frequency present in the signal)

And we have to make sure to always sample faster than this rate

Complication = aliasing artefacts visible
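A small numeric sketch of under-sampling, assuming NumPy; the frequencies are illustrative. A 5 Hz sine sampled at 8 Hz (below the 10 Hz Nyquist rate) produces exactly the same samples as a lower-frequency alias:

```python
import numpy as np

f_signal = 5.0                 # Hz, highest frequency present in the signal
nyquist_rate = 2 * f_signal    # must sample faster than this (10 Hz)
fs_bad = 8.0                   # Hz, under-sampled

t = np.arange(0, 1, 1 / fs_bad)
samples = np.sin(2 * np.pi * f_signal * t)

# The energy folds down to fs_bad - f_signal = 3 Hz: the very same samples
# are produced by a 3 Hz sine of opposite phase, so the two signals are
# indistinguishable after sampling.
alias = -np.sin(2 * np.pi * (fs_bad - f_signal) * t)
aliasing_occurs = bool(np.allclose(samples, alias))
```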

21
Q

Explain what’s aliasing artefacts

A

Error due to the signal being sampled at less than twice the highest frequency present in the signal

High-frequency components of the signal overlap with low-frequency components, giving a jagged effect

Aliasing is worse in the absence of filtering

22
Q

Explain how aliasing within an image can be corrected?

A

Anti aliasing interpolation

Corrects aliasing by giving the surrounding pixels an intermediate value, which gives a smoother edged and higher resolution appearance

23
Q

What’s image decimation?

A

Reducing the signal sampling rate (below NF)

Which reduces dimensions of image
Eg. Decimation of 2 = final image is half the size of original

But this introduces aliasing
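A sketch of a decimation factor of 2, assuming NumPy: keep every second sample in each direction, halving both image dimensions (aliasing is the risk if no low-pass filtering precedes this):

```python
import numpy as np

# 4x4 toy image of pixel values 0..15.
image = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Decimation of 2: keep every 2nd row and every 2nd column,
# so the final image is half the size of the original in each dimension.
decimated = image[::2, ::2]
```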

24
Q

What’s image interpolation?

A

Opposite of image decimation, Increasing signal sampling rate

Which increases dimensions of image
Eg. Interpolation of 2 = final image is double the size of original

Hence it preserves data accuracy and enhances image quality

But it has long computation times and ring artefacts (edges) are present

25

Q

What are the factors that the maximum info content of an image (file size) depends on?

A

Image dimensions (n x n) = no. of pixels in rows and columns (matrix)
Intensity resolution (m) = no. of possible intensities for a pixel (bit depth)

Therefore MIF = (n x n) x m

The larger the MIF, the more space needed for storage and the slower the image is to transfer electronically
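Plugging illustrative numbers into the card’s formula (the 1024 x 1024 matrix and 12-bit depth are assumptions, not from the lecture):

```python
# Maximum information content: MIF = (n x n) x m
n = 1024          # image matrix is n x n pixels
m = 12            # intensity resolution: bits per pixel (bit depth)

mif_bits = (n * n) * m
mif_bytes = mif_bits // 8   # 8 bits per byte
```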
26

Q

Explain what’s image compression

A

An encoding process that reduces the overall size of image data

Many images do not need the same data precision in all regions
Eg. the black background of a medical image
Those regions are redundant data, so image compression reduces the overall data by shrinking this redundant data

We do this because there’s a growing need for storage, efficient data transmission, teleradiology applications, real-time teleconsultation and PACS
27

Q

What’re the potential reasons for redundancies that can occur in digital images?

A

Coding = grey levels are coded using more codes than necessary. Compressed using LUTs and variable-length coding (mapping source symbols to a variable number of bits)

Interpixel = the info in each pixel is relatively small, and neighbouring pixels tend to have the same info and are not independent. The intensity of a pixel can be predicted from the intensity of its neighbours. Compressed using run length coding (storing a run of the same data value as a single data value and count)

Psychovisual = uses more grey levels than necessary. Compress the image to remove less important visual info. The eye can only resolve 32 grey levels locally, and 8 bit images show no posterisation to the eye. Compressed using reduced intensity or colour precision
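Run length coding for interpixel redundancy can be sketched in plain Python; the pixel row is illustrative:

```python
def rle_encode(pixels):
    # Store each run of identical values as a single [value, count] pair.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    # Expand each [value, count] pair back into the original run.
    return [value for value, count in runs for _ in range(count)]

# A row with long runs of equal neighbouring pixels compresses well.
row = [0, 0, 0, 0, 255, 255, 0, 0]
encoded = rle_encode(row)
```

Decoding reproduces the row exactly, which is why run length coding is a lossless technique.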
28

Q

Summarise the process of image compression

A

There are 2 ways in which we classify image compression: lossy and lossless. As a summary relevant to both:

The original image is encoded and compressed into data for storage or transmission (a string of 0s and 1s)
Then this data is decoded and decompressed into the resultant image

We can work out the compression ratio by dividing the original image size by the compressed image size
The compression ratio tells us the ratio of the original size of the image to its compressed size
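The ratio calculation with illustrative sizes (the byte counts are assumptions, not from the lecture):

```python
# Compression ratio = original image size / compressed image size.
original_bytes = 524288      # e.g. a 512 x 512 image at 16 bits per pixel
compressed_bytes = 131072    # size after compression
compression_ratio = original_bytes / compressed_bytes  # 4:1
```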
29

Q

Explain what’s lossless image compression

A

It’s reversible - the image reconstructed from the compressed data is identical to the original, therefore low compression ratio

Compressor: original img - transform - entropy coding
Channel
Decompressor: entropy decoding - inverse transform - restored img

The transform phase removes interpixel redundancy and packs info efficiently
The entropy encoding phase removes the coding redundancy

The image looks smooth even when zoomed
30

Q

Explain what’s the lossy image compression technique

A

It’s irreversible - the reconstructed image differs from the original - high compression ratio

Compressor: original img - transform - quantisation - entropy coding
Channel
Decompressor: entropy decoding - de-quantisation - inverse transform - restored img

A 3 step process:
The transform phase removes interpixel redundancy and packs info efficiently
The quantisation phase is a many-to-one mapping that replaces a set of values with one representative value. It removes psychovisual redundancy and packs info into a few bits
The entropy encoding phase removes the coding redundancy

The resultant image looks more pixelated when zoomed into a region