L10 - CNN’s 2 Flashcards

1
Q
  1. What is a Feature Map?
A
  1. A map of how well the filter matches the image at every spot.
    1. Represents where that particular pattern was found.
    2. Every filter results in 1 feature map
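The idea above can be sketched as a filter sliding over an image, recording the match at each position; one feature map per filter. A minimal NumPy sketch (the image and filter values here are made up for illustration):

```python
import numpy as np

def feature_map(image, kernel):
    """Slide the filter over the image and record the match at every spot."""
    h, w = kernel.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Elementwise product, summed: large value = strong pattern match
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

img = np.array([[1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.]])
diag = np.array([[1., 0.],
                 [0., 1.]])   # illustrative filter for a diagonal pattern
fmap = feature_map(img, diag)
print(fmap.shape)  # (3, 3): one value per position the filter visited
```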
2
Q
  1. What is the ReLU function? What’s its purpose?
A
  1. An activation function that introduces non-linearity into a neural network
    1. Enables modelling of more complex relationships in data
    2. Determines quality of the filter match
3
Q
  1. How does the ReLU activation function operate?
A
  1. Converts all negative values to 0
    1. Leaves all positive values
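The operation above is a one-liner in NumPy (the feature-map values here are illustrative):

```python
import numpy as np

def relu(x):
    # Negative values become 0; positive values pass through unchanged
    return np.maximum(0, x)

fmap = np.array([[-0.5, 2.0],
                 [ 3.1, -4.0]])
print(relu(fmap))  # [[0.  2. ]
                   #  [3.1 0. ]]
```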
4
Q
  1. What is the ReLU applied to?
A
  1. A feature map, resulting in a feature map with values of only 0 and above
5
Q
  1. What is downsampling?
A
  1. The process of reducing the dimensionality of data
6
Q
  1. Why is downsampling important in Computer Vision?
A
  1. Images are very data intensive, thus, have high dimensionality.
    1. Reducing dimensionality improves performance and makes processing more manageable
7
Q
  1. What are some issues with downsampling?
A
  1. Information loss -> Reducing dimensionality removes information from the image, which can lead to poor model performance if vital details and patterns are lost.
    1. Overfitting -> Reducing the data may make the model prone to learning noise rather than the underlying patterns of the image.
    2. Can introduce bias
8
Q
  1. What is considered the correct technique of downsampling?
A
  1. Pooling
9
Q
  1. What is Pooling?
A
  1. Technique for reducing dimensionality of image data.
    1. Applied after ReLU function has been applied to the feature map in order to further decrease the dimensions.
10
Q
  1. What is the general idea of pooling?
A
  1. Aggregates a group of pixels via their average or max.
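A minimal NumPy sketch of 2x2 max pooling: keep the strongest match in each block, halving each spatial dimension (the feature-map values are made up for illustration):

```python
import numpy as np

def max_pool(fmap, size=2):
    """Max pooling: take the max of each size-by-size block of the feature map."""
    h, w = fmap.shape
    # Trim any edge rows/columns that don't fill a complete block
    trimmed = fmap[:h - h % size, :w - w % size]
    blocks = trimmed.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fmap = np.array([[1., 2., 0., 1.],
                 [4., 3., 1., 0.],
                 [0., 0., 2., 5.],
                 [1., 2., 3., 4.]])
print(max_pool(fmap))  # [[4. 1.]
                       #  [2. 5.]]
```

Swapping `.max(...)` for `.mean(...)` gives average pooling, the other common variant.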
11
Q
  1. When is pooling applied?
A
  1. After ReLU function has been applied to the feature map.
12
Q
  1. What is the end result of downsampling?
A
  1. A pooled feature map that is small enough so it becomes a traditional ML problem.
13
Q
  1. What is Padding and Stride?
A
  1. Padding -> Increases data dimensions so the filter can better pick up border features of the image.
    1. Stride -> Moves the filter a custom amount over the image.
14
Q
  1. What are the effects of a higher stride or lower stride?
A
  1. Higher -> Reduces feature map size, picks up less information, reduces computational cost.
    1. Lower -> Increases feature map size, picks up more info, higher computational cost.
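The size effects above follow from the standard output-size formula, output = floor((n + 2p - f) / s) + 1 for input size n, filter size f, padding p, and stride s. A small sketch with illustrative numbers:

```python
def output_size(n, f, p=0, s=1):
    """Spatial size of a feature map for input n, filter f, padding p, stride s."""
    return (n + 2 * p - f) // s + 1

# Higher stride shrinks the feature map (less info, lower compute cost):
print(output_size(32, 3, p=0, s=1))  # 30
print(output_size(32, 3, p=0, s=2))  # 15
# Padding recovers the border: p=1 keeps a 3x3 filter's output at the input size
print(output_size(32, 3, p=1, s=1))  # 32
```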
15
Q
  1. What is the purpose of an activation function? What are the 6 main ones?
A
  1. To introduce non-linearity into a model, enabling the identification of more complex relationships.
    1. ReLU, Leaky ReLU, Tanh, Maxout, Sigmoid, ELU
16
Q
  1. What is the Softmax Function? How does it work?
A
  1. An activation function used in neural networks for multi-class classification problems.
    1. Transforms a real number vector into probabilities of belonging to certain classes.
    2. For example: class scores are produced by the network's fully connected layer; the Softmax function is applied to those scores to obtain the probability of the input belonging to each class.
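A minimal NumPy sketch of the Softmax step, with made-up class scores standing in for the fully connected layer's output:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalise the exponentials
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # illustrative raw class scores
probs = softmax(logits)
print(probs.round(3))          # [0.659 0.242 0.099]
print(round(probs.sum(), 6))   # 1.0 -- a valid probability distribution
```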
17
Q
  1. Define the learning rate of a model…
A
  1. How much the model updates its parameters at every step.
18
Q
  1. What is the trade off between large and low learning rates?
A
  1. Large -> May overshoot optimal values
    1. Small -> Takes a long time to find the optimum and can get stuck at a local optimum.
19
Q
  1. What technique can we use to ensure our learning model uses both large and small values appropriately?
A
  1. Decay the learning rate at every step -> Start with a large learning rate for speed, then decrease it at each step so we don't overshoot the optimum.
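One common form of this schedule is exponential decay; a small sketch with illustrative hyperparameter values:

```python
def decayed_lr(initial_lr, decay_rate, step):
    """Exponential learning-rate decay: large early steps for speed,
    progressively smaller steps to avoid overshooting the optimum."""
    return initial_lr * decay_rate ** step

# Learning rate shrinks as training progresses
for step in (0, 10, 100):
    print(step, decayed_lr(0.1, 0.99, step))
```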