Lecture 9 Flashcards
(30 cards)
Why do we do image post processing?
Get an enhanced image by adjusting the contrast, brightness, saturation and sharpness
Extract useful info from the image or make the image smaller
What influences post processing contrast adjustment?
Bit depth
- The more shades of grey, the better the contrast and tonal range
Eg of contrast adjustment include
- grey scale transformation LUT
- windowing
- threshold
Explain what’s grey scale transformation in contrast adjustment
Achieved by using a conversion curve known as the LUT
The LUT maps each input digital grey scale value to an output display value spaced by just-noticeable differences (JNDs), so that the human eye can perceive each change in pixel intensity as a different shade of grey
Why do we use LUT?
LUTs save memory/storage space when only a limited number of intensities or colours is needed
Images with high bit depth require more storage space
How exactly does LUT work in order to adjust contrast in a computer?
The data stored in the computer are initially pixel values; adding a fixed value to these pixel values increases pixel intensity, and subtracting the fixed value reduces it. CONTRAST IS CHANGED BY CHANGING PIXEL VALUES
Increase PI = bright image
Decrease PI = darker image
Initially, the radiographer picks a LUT suitable for a body part (eg chest, brain, bone), and this sets the desired contrast characteristics for the image
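The LUT idea can be sketched in a few lines of Python/NumPy; the S-shaped curve below is a made-up illustration of a contrast-boosting LUT, not a clinical one:

```python
import numpy as np

# Hypothetical 8-bit contrast-boosting LUT: a sigmoid curve that darkens
# shadows and brightens highlights. Real systems ship body-part-specific
# LUTs (chest, bone, ...); this curve is just an illustration.
x = np.arange(256, dtype=np.float64)
lut = (255 / (1 + np.exp(-(x - 128) / 25))).astype(np.uint8)

# Applying the LUT is a single table lookup per pixel:
image = np.array([[30, 128, 220]], dtype=np.uint8)
enhanced = lut[image]  # each input grey value is remapped through the curve
```

Dark pixels are pushed darker and bright pixels brighter, which is exactly how a fixed curve changes contrast without touching the stored raw data.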
Explain how brightness levels are adjusted with post processing
Adjusted by adding or subtracting current pixel value evenly throughout the image
If the limits of the intensity range are reached (totally black or white), the pixel value is clipped at the limit value
Explain what’s inversion of an image in post processing
Inversion involves the flipping of pixel values (ie. black region becomes white). Used in angiography and theatre image intensifiers.
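For an 8-bit image, inversion is just a subtraction from the top of the range:

```python
import numpy as np

# Inversion flips each pixel value about the top of the range:
# output = 255 - input for 8-bit data, so black becomes white.
image = np.array([[0, 100, 255]], dtype=np.uint8)
inverted = 255 - image
```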
Explain what’s windowing in terms of image enhancement for post processing and its relevance to contrast and brightness
Process of selecting a segment of the total pixel value range, which can be used to change contrast and brightness of certain parts of image
Pixel intensities within a segment of the dynamic range are selected and displayed over the shades of grey (white to black range). So those pixel values below or above the selected segment will appear black or white and do not add to the contrast of the image
Window width = range of numbers in the array of pixel values selected (relevant to contrast)
Window level = centre of range (relevant to brightness)
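Windowing can be sketched as a linear remap of the chosen segment onto the full grey scale (the 12-bit-style sample values below are made up for illustration):

```python
import numpy as np

# Windowing: map pixel values inside [level - width/2, level + width/2]
# linearly onto the full grey scale; values below the window display as
# black, values above as white.
def window(image, level, width, out_max=255):
    lo = level - width / 2
    hi = level + width / 2
    scaled = (image.astype(np.float64) - lo) / (hi - lo) * out_max
    return np.clip(scaled, 0, out_max).astype(np.uint8)

# A narrow window centred on level 1000 boosts contrast around that value;
# pixels outside the window (500 and 1500 here) render as pure black/white.
image = np.array([[500, 900, 1000, 1100, 1500]])
display = window(image, level=1000, width=400)
```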
Distinguish the difference esteem contrast and brightness
Contrast is the difference in optical density that makes an object distinguishable from its background
Brightness is the perception provided by the luminance of a visual target as whiteness or darkness
Explain the effect of changing the window
Setting window to lower segment of grey scale range produces better contrast in lighter areas (ie. mediastinum)
Setting window to higher segment produces good contrast in darker areas like lungs
Explain the effect of changing the window width
Decreasing the window width increases the brightness interval between two consecutive pixel values, thereby increasing contrast
Explain the effect of changing window level
Image brightness is the average intensity of all pixels in image
Increasing window level = darker image
Decreasing window level = brighter image
What’s histogram equalisation and what is it used for during image enhancement?
Corrects over- and under-exposure, poor brightness and contrast, and excessive dynamic range.
Accomplishes this by spreading out the intensity values through the range (grey scale)
So the wider the spread of intensity values, the greater the difference between min and max intensity values, THEREFORE BETTER CONTRAST
What exactly is the dynamic range?
The range between the very bright and very dark portions of a digital image, which may exceed the bit depth of the detector
The histogram equalisation can be used to tailor dynamic range to the bit depth
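The textbook histogram equalisation algorithm (not any vendor's exact implementation) builds a cumulative distribution of grey levels and uses it as a LUT:

```python
import numpy as np

# Histogram equalisation sketch: the cumulative distribution function (CDF)
# of grey levels becomes a LUT that spreads intensities over the full range.
def equalise(image, levels=256):
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalise to [0, 1]
    lut = (cdf * (levels - 1)).astype(np.uint8)
    return lut[image]

# A low-contrast image crammed into [100, 130] gets stretched out:
flat = np.array([[100, 110, 120, 130]], dtype=np.uint8)
stretched = equalise(flat)
```

After equalisation the four grey levels span nearly the whole 0-255 range, which is the "wider spread = better contrast" idea from the card above.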
Explain what’s thresholding in terms of post image processing
Separating pixel values into 2 ranges and assigning a fixed value to the pixels in each range. It is used to extract or alter image info
You can create a binary image ie. binary contrast adjustment, where pixel intensities are mapped to either black or white according to whether they lie above or below the selected threshold
Or you can do grey scale splicing which separates pixels into more than 2 ranges and assigns a fixed value to each range
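Both variants are one-liners in NumPy (the threshold of 128 and the four band values are arbitrary examples):

```python
import numpy as np

# Binary thresholding: pixels above the threshold map to white, the rest
# to black. Grey scale splicing generalises this to several bands, each
# assigned a fixed value (np.digitize picks the band for each pixel).
image = np.array([[10, 80, 150, 220]], dtype=np.uint8)

binary = np.where(image > 128, 255, 0).astype(np.uint8)

bands = np.digitize(image, bins=[64, 128, 192])   # band index 0..3 per pixel
spliced = np.array([0, 85, 170, 255], dtype=np.uint8)[bands]
```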
Explain spatial resolution in terms of post processing mechanics
Spatial resolution of image is relevant to sampling
Sampling is the process of converting electrical signals to a digital format, and it determines the spatial resolution
A signal = continuous wave of varying frequencies
Detectors average data values over a small time interval so that the sample is representative of av. value of signal.
So the average number of samples obtained per second (sampling frequencies) influences the spatial resolution of image
Explain intensity resolution in terms of post processing mechanics
Relevant to quantisation, which is the process of digitising the range of the signal.
It determines grey levels assigned to each pixel and the intensity resolution of digital image
Quantisation can be used to compress the image by reducing the number of grey levels (fewer bits per pixel), which reduces image quality
but it can also be used to correct posterisation by providing more shades of grey to produce continuous tone
What are the limitations of quantisation?
Vital info may be omitted
Posterisation can occur
What’s posterisation
Too few bits per pixel ie. a limited number of different greys available
Intensity values limited to 8 shades (3 bits) causes posterisation. It affects areas of low spatial frequency the most
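Quantising 8-bit data down to 3 bits can be sketched like this; a smooth gradient collapses into 8 flat bands, which is the posterisation effect:

```python
import numpy as np

# Quantise an 8-bit image down to a given bit depth by snapping every
# pixel to the bottom of its grey band. At 3 bits only 8 shades remain,
# so smooth gradients band visibly (posterisation).
def quantise(image, bits):
    step = 256 // (2 ** bits)          # width of each grey band
    return (image // step) * step      # snap every pixel to its band

gradient = np.arange(256, dtype=np.uint8)  # a full smooth 0..255 ramp
poster = quantise(gradient, bits=3)        # only 0, 32, 64, ..., 224 survive
```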
What’s a condition that must be adhered to in terms of sampling?
There is a minimum sampling rate at which a signal must be sampled to avoid complications; this is referred to as the Nyquist frequency
And we have to make sure to always sample faster than this frequency (ie. at more than twice the maximum frequency present in the signal)
Complication = aliasing artefacts visible
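A quick numerical illustration of why under-sampling aliases (the 10 Hz tone and 12 samples/s rate are arbitrary example values):

```python
import numpy as np

# Sampling a 10 Hz sine demands a rate above 2 * 10 = 20 samples/s.
# At only 12 samples/s the tone aliases down to |12 - 10| = 2 Hz: the
# samples are indistinguishable from those of a genuine 2 Hz sine
# (up to a sign flip), so the two frequencies overlap.
f = 10.0
t = np.arange(0, 1, 1 / 12)                # under-sampled: 12 samples/s
undersampled = np.sin(2 * np.pi * f * t)
alias = np.sin(2 * np.pi * (12 - f) * t)   # a real 2 Hz sine at the same instants
```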
Explain what’s aliasing artefacts
Error due to the signal being sampled at less than twice the highest frequency present in the signal
High frequency components of the signal overlap with low frequency components, giving a jagged effect
Aliasing is worse in the absence of filtering
Explain how aliasing within an image can be corrected?
Anti-aliasing interpolation
Corrects aliasing by giving the surrounding pixels intermediate values, which gives a smoother-edged, higher-resolution appearance
What’s image decimation?
Reducing the signal sampling rate (below NF)
Which reduces dimensions of image
Eg. Decimation of 2 = final image is half the size of original
But this introduces aliasing
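Decimation by 2 is simply keeping every second sample in each direction:

```python
import numpy as np

# Decimation by 2: keep every second sample in each direction, halving
# the image dimensions. Without a low-pass (anti-aliasing) filter first,
# high frequencies fold back as aliasing artefacts.
image = np.arange(16).reshape(4, 4)
decimated = image[::2, ::2]   # 4x4 -> 2x2
```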
What’s image interpolation?
Opposite of image decimation: increasing the signal sampling rate
Which increases dimensions of image
Eg. Interpolation of 2 = final image is double the size of original
Hence it can preserve data accuracy and enhance image quality
But it has long computation times, and ringing artefacts can appear at edges
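Interpolation by 2 in its simplest (nearest-neighbour) form just repeats each pixel; smoother schemes like bilinear or bicubic compute intermediate values instead, at higher computational cost:

```python
import numpy as np

# Nearest-neighbour interpolation by 2: each pixel is repeated in both
# directions so the image dimensions double (2x2 -> 4x4 here).
image = np.array([[0, 100], [200, 255]], dtype=np.uint8)
upscaled = np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)
```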