Module 3 Assessment Flashcards

(35 cards)

1
Q

New values created through the algebraic combination of two or more bands in order to emphasize particular Earth features are known as:

A

Spectral Indices

2
Q

Which of the following is NOT a commonly used spectral index?

a. NDVI
b. NDWI
c. EVI
d. BRDF

A

d. BRDF
That’s right, the BRDF describes the angular reflectance properties of a surface and is not a spectral index

3
Q

Assess whether the following statement is true or false: Most spectral vegetation indices are built around the concept of the “red edge” which describes the strong reflectance of green wavelength light compared to red and blue wavelengths of light by vegetation

A

False

The statement describes vegetation’s green reflectance peak, not the red edge. The “red edge” refers to the sharp increase in vegetation reflectance between the red and near-infrared wavelengths.

4
Q

Many spectral indices use a mathematical approach that contrasts two bands while controlling for their overall brightness. This approach is known as:

a. Band Ratio
b. Normalized Difference
c. Band Differencing with Thresholds
d. Linear Combination

A

b. Normalized Difference

Band ratios do indeed quantify the contrast between two bands, but do not control for overall brightness. The correct answer is “normalized difference”. Differencing two bands quantifies the contrast and dividing by their sum controls for their overall brightness. This is used, for example, in the NDVI and NDWI indices
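
The pattern is simple to express in code. A minimal sketch using NumPy, with made-up reflectance values (the band values are illustrative, not from any particular sensor):

```python
import numpy as np

def normalized_difference(b1, b2):
    """Generic normalized difference: differencing the two bands quantifies
    their contrast, and dividing by their sum controls for overall brightness."""
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    return (b1 - b2) / (b1 + b2)

# NDVI contrasts NIR and red reflectance (illustrative pixel values)
nir = np.array([0.45, 0.50, 0.10])
red = np.array([0.05, 0.08, 0.09])
ndvi = normalized_difference(nir, red)
print(ndvi)  # dense vegetation approaches 1; bare soil/water sits near or below 0
```

The same function computes NDWI by swapping in the green and NIR bands.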

5
Q

This is a popular linear transformation of spectral bands designed to create new bands that are less correlated and have clear physical meaning

a. Change Vector Analysis
b. Tasseled Cap
c. Random Forest
d. Enhanced Vegetation Index

A

b. Tasseled Cap

Yes, the tasseled cap transformation is a linear combination of multispectral values that is akin to the axes rotation accomplished by principal components analysis.
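
Mechanically, such a transformation is just a matrix multiply of each pixel’s band vector by a coefficient matrix. The sketch below uses a made-up, illustrative 2-band rotation; real tasseled cap coefficients are sensor-specific and published per instrument:

```python
import numpy as np

# Hypothetical coefficient matrix for illustration only.
# Rows: new components; columns: input bands.
coeffs = np.array([
    [0.6, 0.8],    # sums the bands -> a "brightness"-like component
    [-0.8, 0.6],   # contrasts the bands -> a "greenness"-like component
])

pixels = np.array([  # each row is one pixel: [red, nir] reflectance (made up)
    [0.05, 0.45],
    [0.20, 0.25],
])

# The linear transformation is an axes rotation, akin to PCA
transformed = pixels @ coeffs.T
print(transformed)
```

Note the two coefficient rows are orthonormal, so this really is a rotation of the band space, which is what makes the output components less correlated.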

6
Q

An analytical technique that attempts to assign labels to pixels using an algorithm and examples of known categories (i.e. training data) is called:

a. Unsupervised classification
b. Principal Components Analysis
c. Supervised classification
d. Hierarchical Clustering

A

c. Supervised classification

Yes, supervised classification can be conducted when the analyst can provide the algorithms with examples of each category or class that is desired in the output map.

7
Q

Techniques which identify clusters or groupings that occur naturally in the data, without the need for training data are known as:

a. Unsupervised Classification
b. Supervised Classification
c. Artificial Neural Networks
d. Random Forest

A

a. Unsupervised Classification

ANNs, when used in image classification, rely on labeled training data in order to “learn” over time. The correct answer is “unsupervised classification”, the family of approaches that can identify clusters or groupings based on intrinsic characteristics of the data.

8
Q

Assess whether the following statement is true or false: Unsupervised classification techniques do not result in semantic classes

A

True

Correct, unsupervised classification can only provide data classes; the analyst must then supply the semantic meaning

9
Q

Which of the following is not true of spectral signatures as they relate to image classification?

a. Ancillary data may help to discriminate when different classes have the same or similar spectral signatures
b. To be successful, a classification schema should be devised such that the spectral signatures for each class show no within-class variability
c. Classes with distinct theoretical spectral signatures may nonetheless be indistinguishable based on which portions of the EM spectrum our sensor has observed
d. Even pixels with identical surface states (e.g. the same land cover/use) will often have different spectral signatures

A

b. To be successful, a classification schema should be devised such that the spectral signatures for each class show no within-class variability

This is false. There will almost always be some variability in the spectral signatures, even for classes that are narrowly defined and relatively homogeneous throughout the image.

10
Q

Which of the following is not a land cover/use product available for use by the scientific community?

a. ESA World Cover
b. Copernicus DEM (e.g. GLO30)
c. National Land Cover Database (NLCD)
d. MODIS Global Land Cover Product

A

b. Copernicus DEM (e.g. GLO30)

This is a global digital elevation model product and not a land cover/use product.

11
Q

Which of the following is not a parametric classification technique:

a. Parallelepiped Classifier
b. Minimum Distance to Means Classifier
c. Maximum Likelihood Classification
d. Artificial neural networks

A

d. Artificial neural networks

Correct, ANNs do not require assumptions about the statistical distribution of the predictors or the calculation of their parameters

12
Q

In contrast to minimum distance to means, the maximum likelihood classifier takes into account:

a. uneven variance in predictors between classes
b. Non-normal distributions in predictors
c. uneven variance in predictors between classes and covariance amongst predictors
d. covariance amongst predictors

A

c. uneven variance in predictors between classes and covariance amongst predictors

Yes, MLC takes both uneven variance and covariance into account
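
A sketch of how this works: MLC scores each pixel against a per-class mean vector and covariance matrix via the multivariate normal log-likelihood, and the covariance matrix is what lets it account for uneven variance and for covariance among predictors. The class statistics below are toy values for illustration, and equal priors are assumed:

```python
import numpy as np

def mlc_log_likelihood(x, mean, cov):
    """Multivariate normal log-likelihood for one pixel and one class."""
    k = len(mean)
    diff = x - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + diff @ inv @ diff)

# Toy per-class statistics in a 2-band feature space (illustrative only):
# (mean vector, covariance matrix) estimated from training data in practice
classes = {
    "forest": (np.array([0.08, 0.45]),
               np.array([[0.001, 0.0004], [0.0004, 0.004]])),
    "water":  (np.array([0.03, 0.02]),
               np.array([[0.0005, 0.0], [0.0, 0.0005]])),
}

pixel = np.array([0.07, 0.40])
best = max(classes, key=lambda c: mlc_log_likelihood(pixel, *classes[c]))
print(best)
```

Minimum distance to means would use only the mean vectors; the covariance matrix here is the extra information MLC exploits.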

13
Q

In which situation would using prior probabilities in a maximum likelihood classification approach be most useful?

a. When there is strong covariance amongst one or more of the predictors within some or all classes
b. When there are two similar classes but one of them is much rarer than the other
c. When we suspect that some of the training data has inaccurate labels
d. When all classes have an equal amount of area in the study region

A

b. When there are two similar classes but one of them is much rarer than the other

Yes, when we have rare classes, especially when they’re easily confused with other, less rare classes, using prior probabilities can help keep these classes from being overrepresented in the output

14
Q

In which case would MLC most likely not work well due to a violation of its core assumption(s)?

a. When there is strong covariance among one or more predictors in the classification. For example: NDVI and NIR reflectance will usually increase in tandem for vegetated classes like forest
b. When two classes have a high degree of overlap in their predictors and other characteristics. For example: shrublands and woody savannas.
c. When one class has much greater variability in one or more predictors than the others. For example: a forest class containing many different types and densities of forests
d. When a particular class has one or more predictors showing non-normal, such as bimodal, distributions. For example: dark water and shallow/turbid water

A

d. When a particular class has one or more predictors showing non-normal, such as bimodal, distributions. For example: dark water and shallow/turbid water

Correct, in this case the normality assumption would be violated and the parameters used to compute the covariance matrix would be invalid

15
Q

Which is not a normal part of the workflow for an unsupervised classification approach?

a. Compiling a set of predictors (spectral, derived indices, and ancillary data) that we expect will lead to the ability to differentiate real surface states
b. Refining the feature set to better distinguish between categories/classes/clusters that contain more than one surface type that we hope to differentiate
c. Merging or collapsing multiple data/spectral classes into semantic classes for which a label can be determined
d. Designing an appropriate sampling scheme for the training data to ensure the variability in each class is appropriately captured

A

d. Designing an appropriate sampling scheme for the training data to ensure the variability in each class is appropriately captured

Correct, unsupervised classification requires no training data, so designing a training-data sampling scheme is part of the supervised workflow instead.

16
Q

Which of the following is a feature of ISODATA but not K-means approaches to clustering?

a. An iterative process whereby the mean centers of potential clusters are refined until an exit condition is met
b. Starting with randomly assigned cluster centers
c. Can be conducted without any training data at all
d. The ability to split, merge, and delete clusters

A

d. The ability to split, merge, and delete clusters

Yes, ISODATA is identical to K-means, but with an additional step wherein potential clusters are split if they are too highly variable in one or more dimensions, merged if their centers are very close, or deleted altogether if they do not contain enough points/pixels/entities.
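
The extra housekeeping step can be sketched roughly as follows. The thresholds and data are made up, and a full ISODATA implementation would also split high-variance clusters and iterate; this shows only the merge/delete logic that K-means lacks:

```python
import numpy as np

def isodata_merge_delete(points, centers, min_members=2, merge_dist=0.2):
    """One ISODATA-style housekeeping pass (illustrative thresholds):
    delete under-populated clusters, then merge centers that are very close."""
    # Assign each point to its nearest center (the K-means-style step)
    labels = np.argmin(
        np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
    # Delete clusters with too few members
    keep = [i for i in range(len(centers)) if np.sum(labels == i) >= min_members]
    centers = centers[keep]
    # Merge groups of centers closer than merge_dist
    merged, used = [], set()
    for i in range(len(centers)):
        if i in used:
            continue
        group = [i]
        for j in range(i + 1, len(centers)):
            if j not in used and np.linalg.norm(centers[i] - centers[j]) < merge_dist:
                group.append(j)
                used.add(j)
        merged.append(centers[group].mean(axis=0))
    return np.array(merged)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
ctrs = np.array([[0.0, 0.0], [0.12, 0.0], [1.05, 1.0]])
merged = isodata_merge_delete(pts, ctrs, min_members=1)
print(merged)  # the two nearby centers collapse into one
```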

17
Q

All of the following are true of validation data for land cover classification accuracy assessment except which one?

a. The validation data must be collected according to a probabilistic scheme, such as a stratified random approach
b. The validation data must contain samples from every category/class, but the number of samples may not be equal
c. The precision of the resulting accuracy measurements will be proportional to the size (number of samples) of our validation data
d. The training data may be used for validation as long as you are using a machine learning or other nonparametric classification technique

A

d. The training data may be used for validation as long as you are using a machine learning or other nonparametric classification technique

Validation data must be independent of the training data regardless of the classification technique; reusing training samples inflates the apparent accuracy.

18
Q

This quantity reflects the total proportion of validation samples that were correctly classified in the map

a. Overall accuracy
b. Cohen’s Kappa
c. Producer’s accuracy
d. User’s accuracy

A

a. Overall accuracy

The user’s accuracy reflects the probability that something labeled as class X on the map is actually class X in the real world. The total proportion correct in the validation sample is the overall accuracy.
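
Computed from a confusion matrix (here with made-up counts, using the assumed convention rows = map labels, columns = reference labels), overall accuracy is simply the diagonal total divided by the grand total:

```python
# Confusion matrix with made-up counts:
# rows = map (classified) labels, columns = reference (validation) labels
cm = [
    [45, 3, 2],   # forest
    [4, 38, 1],   # water
    [6, 2, 54],   # urban
]

diagonal = sum(cm[i][i] for i in range(len(cm)))   # correctly classified samples
total = sum(sum(row) for row in cm)                # all validation samples
overall_accuracy = diagonal / total
print(round(overall_accuracy, 3))
```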

19
Q

A measure of overall accuracy that corrects for chance agreement is:

a. Cohen’s Kappa
b. F1 Score
c. Quantity disagreement
d. Allocation disagreement

A

a. Cohen’s Kappa

Yes, Cohen’s Kappa is related to overall accuracy, but it accounts for chance agreement, or the fact that even some randomly assigned labels would be correct
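
A sketch of the computation, using a made-up confusion matrix (rows = map labels, columns = reference labels): the chance-agreement term is estimated from the row and column marginals.

```python
cm = [  # made-up counts: rows = map labels, columns = reference labels
    [45, 3, 2],
    [4, 38, 1],
    [6, 2, 54],
]
k = len(cm)
n = sum(sum(row) for row in cm)

p_o = sum(cm[i][i] for i in range(k)) / n                 # observed agreement
row_marg = [sum(row) for row in cm]
col_marg = [sum(cm[r][j] for r in range(k)) for j in range(k)]
p_e = sum(r * c for r, c in zip(row_marg, col_marg)) / n**2   # chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))
```

Because p_e is subtracted from both numerator and denominator, kappa is always lower than the overall accuracy whenever chance agreement is nonzero.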

20
Q

For a particular class, the total errors of omission are captured by which accuracy measure?

a. User’s accuracy
b. Overall accuracy
c. Cohen’s Kappa
d. Producer’s accuracy

A

d. Producer’s accuracy

Yes, the producer’s accuracy is the probability that something of class X in the real world is labeled as class X, and therefore entities of the validation data that were incorrectly classified as belonging to another class have been omitted from their true class. So, producer’s accuracy captures the total omission error.
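
With the confusion matrix oriented rows = map labels, columns = reference labels (an assumed convention; sources differ), producer’s accuracy is the diagonal cell over its column total, and user’s accuracy is the diagonal cell over its row total. Made-up counts:

```python
cm = [  # made-up counts: rows = map labels, columns = reference labels
    [45, 3, 2],
    [4, 38, 1],
    [6, 2, 54],
]
names = ["forest", "water", "urban"]

producers, users = {}, {}
for i, name in enumerate(names):
    col_total = sum(cm[r][i] for r in range(len(cm)))  # reference samples of class i
    row_total = sum(cm[i])                             # map pixels labeled class i
    producers[name] = cm[i][i] / col_total  # 1 - omission error
    users[name] = cm[i][i] / row_total      # 1 - commission error
    print(f"{name}: producer's={producers[name]:.3f}, user's={users[name]:.3f}")
```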

21
Q

Confusion matrix question

22
Q

Confusion matrix question

23
Q

Confusion matrix question

24
Q

Producer’s accuracy question

A

Yes, since everything is classified as water and producer’s accuracy for the water class is the probability that real water is correctly classified as water, the producer’s accuracy for water is 100%

The correct answer is ‘True’.

25
Q

A sampling design whereby locations are spatially random, but where the analyst can prescribe the number of samples in each category is:

a. Stratified random
b. Simple random
c. Cluster sampling
d. Systematic sampling

A

a. Stratified random

Correct, the stratified random sampling design has random locations but allows the user to ensure that a certain number of samples fall within each category.
26
Q

For the special case of binary classifications (e.g. event detection), the accuracy measure which balances precision and recall is called the:

a. False Positive Rate
b. False Negative Rate
c. Cohen's Kappa
d. F1 Score

A

d. F1 Score

Correct, the F1 score is defined as twice the product of precision and recall divided by their sum.
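
A minimal worked example with made-up detection counts:

```python
# Made-up binary event-detection counts
tp, fp, fn = 40, 10, 5   # true positives, false positives, false negatives

precision = tp / (tp + fp)   # analogous to user's accuracy for the event class
recall = tp / (tp + fn)      # analogous to producer's accuracy for the event class
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))
```
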
27
Q

Some researchers have demonstrated that the Kappa score can often lead to confusing or nonsensical results. Instead, they have argued for:

a. Focusing analysis only on the producer's and user's accuracies instead
b. Calculation of the new quantities: quantity and allocation disagreement
c. Reducing the occurrences of chance agreement by mapping many classes
d. Using the F1 score instead

A

b. Calculation of the new quantities: quantity and allocation disagreement

While it's true that increasing the number of classes would reduce the probability of assigning a label correctly purely by chance, the schema is driven by the application and the separability of the classes, not by accuracy considerations. Dr. Pontius, in particular, has demonstrated the problems with Kappa and presented quantity and allocation disagreement metrics to better capture these distinct aspects of classification errors.
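
Following this decomposition, total disagreement splits into a quantity component (the map's class proportions differ from the reference's) and an allocation component (the remainder, due to spatial misallocation). A sketch with made-up counts:

```python
cm = [  # made-up counts: rows = map labels, columns = reference labels
    [45, 3, 2],
    [4, 38, 1],
    [6, 2, 54],
]
k = len(cm)
n = sum(sum(row) for row in cm)

total_disagreement = 1 - sum(cm[i][i] for i in range(k)) / n
row_marg = [sum(cm[i]) / n for i in range(k)]                       # map proportions
col_marg = [sum(cm[r][j] for r in range(k)) / n for j in range(k)]  # reference proportions
# Quantity disagreement: mismatch in class proportions between map and reference
quantity = sum(abs(r - c) for r, c in zip(row_marg, col_marg)) / 2
# Allocation disagreement: whatever disagreement remains
allocation = total_disagreement - quantity
print(round(quantity, 4), round(allocation, 4))
```
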
28
Q

This is an approach to data classification/labelling that uses a series of simple decision rules that progressively split the data into more and more homogeneous groups:

a. Classification and Regression Trees (CART)
b. Artificial Neural Networks (ANN)
c. ISODATA
d. Maximum Likelihood Classifier

A

a. Classification and Regression Trees (CART)

Yes, CARTs make decisions by building "trees" composed of branching decision rules.
29
Q

"Overfitting" in data classification/machine learning is defined as:

a. The technique whereby an "overall" fit to the data is achieved by allowing multiple algorithms to "vote" on the outcome
b. A technique whereby some training examples are randomly withheld each time the model is fit
c. The tendency for algorithms to perform much better on their training dataset compared to test/validation sets
d. The procedure whereby a model is refit to a permuted and replicated version of the training data to reduce misclassification

A

c. The tendency for algorithms to perform much better on their training dataset compared to test/validation sets

Yes, this is particularly a problem for machine learning methods that can continuously add complexity in order to fit their training data exactly, but will then usually fail to generalize well.
30
Q

An analysis technique based on an ensemble of many independently fit classification or regression trees is known as:

a. Convolutional Neural Network
b. Artificial Neural Network (ANN)
c. Random Forest
d. CART

A

c. Random Forest

Correct, Random Forest fits many independent classification or regression trees, each with randomization of the training data and features, and then determines the final outcome through voting.
31
Q

Is the following statement true or false: "As long as the training data are representative of the application domain, then a conservative estimate of the random forest's classification accuracy can be obtained without a separate validation dataset, via the out-of-bag (OOB) accuracy measure"

A

True

Correct, each tree is fit on a bootstrap sample of the training data, so the withheld (out-of-bag) observations provide an independent accuracy check for that tree.
32
Q

A method of classifying data that uses interconnected nodes producing simple outputs, and which learns over time by minimizing output error on training data is known as:

a. ISODATA
b. Random Forest (RF)
c. Artificial Neural Network (ANN)
d. Gradient Descent

A

c. Artificial Neural Network (ANN)

Correct. ANNs use a network of interconnected nodes, or neurons, that receive inputs from other nodes, perform simple mathematical operations to produce a single output, and then pass it along to other nodes. The ANN "learns" by adjusting the weights along each of the connection pathways.
33
Q

Backpropagation and gradient descent are techniques related to:

a. The "learning" or training process for an ANN
b. The process whereby spatial dimensionality reduction is achieved for object recognition in images
c. The randomization or "bagging" process in random forests
d. The process of minimizing overfitting in machine learning models

A

a. The "learning" or training process for an ANN

Correct. Backpropagation of errors refers to the process of calculating the error contribution of each weight in the network, and gradient descent refers to the process whereby the weight adjustments are determined so as to minimize the output error.
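
A minimal sketch of the idea: a single sigmoid neuron trained by gradient descent on the OR function (toy data; in a multi-layer network, backpropagation extends the same error-derivative bookkeeping to every weight):

```python
import math
import random

random.seed(0)
# Toy labeled examples (stand-in for training pixels): OR truth table
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # connection weights
b = 0.0                                            # bias term
lr = 1.0                                           # learning rate (illustrative)

def forward(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid activation

for epoch in range(3000):
    for x, target in data:
        out = forward(x)
        err = out - target
        grad = err * out * (1 - out)   # d(squared error)/dz for a sigmoid unit
        w[0] -= lr * grad * x[0]       # step each weight against its gradient
        w[1] -= lr * grad * x[1]
        b -= lr * grad

preds = [round(forward(x)) for x, _ in data]
print(preds)
```
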
34
Q

All of these are techniques used to minimize overfitting in artificial neural networks except which one?

a. Increasing the number of hidden layers
b. Limiting the training period
c. Dropout (i.e. eliminating nodes and/or connections)
d. Adding noise to the predictors in the training data

A

a. Increasing the number of hidden layers

Correct, increasing the number of hidden layers would likely worsen, not reduce, overfitting.
35
Q

These are a special class of artificial neural networks that use specially designed layers that reduce data dimensionality through focal-area mathematical manipulation. They are particularly well-suited to object recognition in situations where the pixels are much smaller than the objects to be identified.

a. Convolutional Neural Networks (CNN)
b. Recurrent Neural Networks
c. Generative Adversarial Networks
d. Random Forests

A

a. Convolutional Neural Networks (CNN)

Correct. CNNs use a variety of convolutional and pooling layers that condense lots of information from a focal area into one or a few pieces of information.
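
The focal-area manipulations can be sketched with plain NumPy: a small convolution produces a feature map, and max pooling then collapses each focal window to a single value, reducing the spatial dimensionality. The image and kernel values are made up:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D "valid" convolution (cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(image, size=2):
    """Non-overlapping max pooling: each size x size focal window -> one value."""
    h, w = image.shape[0] // size, image.shape[1] // size
    return image[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # stand-in 6x6 single-band image
edge_kernel = np.array([[1.0, -1.0]])            # toy horizontal-contrast filter
feat = conv2d_valid(img, edge_kernel)            # 6x5 feature map
pooled = max_pool(feat)                          # 3x2: dimensionality reduced
print(feat.shape, pooled.shape)
```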