DL CNNs & RNNs Flashcards

Flashcards in DL CNNs & RNNs Deck (34)

How to calculate number of parameters (weights and biases) in a CNN?

((filter width * filter height) * (num of input channels) + 1 for bias) * num of new filters
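The formula above can be sketched as a small helper function (the function name is made up for illustration):

```python
def conv_param_count(filter_h, filter_w, in_channels, num_filters):
    """Parameters in one conv layer: filter_h * filter_w weights per
    input channel per filter, plus one bias per filter."""
    return (filter_h * filter_w * in_channels + 1) * num_filters

# e.g. 128 filters of size 3x3 over 64 input channels:
print(conv_param_count(3, 3, 64, 128))  # (3*3*64 + 1) * 128 = 73856
```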


What is dropout?

Randomly setting a fraction of input units to 0 at each update during training time - helps prevent overfitting


How do we combat exploding and vanishing gradients?

1. Normalization of inputs + careful initialization of weights
2. Regularization
3. Gradient clipping


Tips for Choosing Initial weights

1. Never set to all zero
2. Try small random values, e.g. between -0.2 and 0.2 (or scaled by fan-in)
3. Biases are often initialized to 0.01 or similar small value
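A minimal sketch of these tips with NumPy (the function name and exact scale are illustrative; one common choice scales the standard deviation by 1/sqrt(fan-in)):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out):
    """Never all-zero weights: small random values scaled by fan-in.
    Biases start at a small constant such as 0.01."""
    W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))
    b = np.full(fan_out, 0.01)
    return W, b

W, b = init_layer(256, 128)
```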


What is regularization?

“Regularization is any modification we make to a
learning algorithm that is intended to reduce its
generalization error but not its training error.”
Goodfellow et al, 2016


Regularization methods for regression

1. Lasso - L1, encourages sparseness in weight matrices
2. Ridge - L2 (weight decay/parameter shrinkage)
3. Elastic net - combines lasso and ridge
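The three penalties can be sketched as terms added to the data loss (a hypothetical helper, with mean squared error standing in for the data term):

```python
import numpy as np

def regularized_loss(mse, w, l1=0.0, l2=0.0):
    """Elastic-net-style penalty: the L1 term (lasso) encourages sparse
    weights, the L2 term (ridge) shrinks weights toward zero; using both
    together is the elastic net."""
    return mse + l1 * np.sum(np.abs(w)) + l2 * np.sum(w ** 2)

w = np.array([0.5, -1.0, 2.0])
loss = regularized_loss(1.0, w, l1=0.1, l2=0.1)
```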


Inverted dropout

During training, randomly drop out units according to a dropout probability at each training epoch, scaling the surviving activations up by 1 / keep probability so that no rescaling is needed at test time
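A minimal NumPy sketch of inverted dropout (the function name is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def inverted_dropout(a, keep_prob):
    """Zero out each unit with probability 1 - keep_prob, then divide by
    keep_prob so the expected activation is unchanged; at test time the
    layer is used as-is, with no rescaling."""
    mask = rng.random(a.shape) < keep_prob
    return a * mask / keep_prob

a = np.ones((4, 4))
out = inverted_dropout(a, keep_prob=0.8)   # entries are 0 or 1/0.8 = 1.25
```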


How does dropout work?

Spreads out the weights - the network cannot rely on any one input too much


Disadv. of dropout

Introduces another hyperparameter - dropout probability - often one for each layer


What's another type of regularization apart from inverted dropout?

- Dataset augmentation
- Synthesize examples by flipping, rotating, cropping, distorting
- Makes the model more robust
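The flip/rotate transformations can be sketched with NumPy (a toy helper; real pipelines also crop and distort):

```python
import numpy as np

def augment(img):
    """Synthesize extra training examples from one image: horizontal
    flip, vertical flip, and a 90-degree rotation."""
    return [np.fliplr(img), np.flipud(img), np.rot90(img)]

img = np.arange(9).reshape(3, 3)
variants = augment(img)   # three new label-preserving examples
```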


What is early stopping?

Allowing a model to train until it overfits, then rolling back to the point at which the error curves on the training and validation sets begin to diverge
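Early stopping can be sketched as tracking the best validation loss and stopping after it fails to improve for a few steps (function name and `patience` parameter are illustrative):

```python
def train_with_early_stopping(steps, val_losses, patience=3):
    """Stop once validation loss has not improved for `patience` steps;
    return the step with the best validation loss (the rollback point)."""
    best_step, best_loss, bad_steps = 0, float("inf"), 0
    for step in range(steps):
        loss = val_losses[step]          # stand-in for a real eval
        if loss < best_loss:
            best_step, best_loss, bad_steps = step, loss, 0
        else:
            bad_steps += 1
            if bad_steps >= patience:
                break
    return best_step

# validation error falls, then rises as the model starts to overfit
losses = [1.0, 0.8, 0.7, 0.72, 0.75, 0.9, 1.1]
print(train_with_early_stopping(len(losses), losses))  # 2
```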


1:1 RNN

Vanilla network without RNN

Image classification


1:many RNN

Image captioning


many:1 RNN

Sentiment analysis


many:many RNN

Machine translation


Synced many:many RNN

Video classification (synced sequence input and output)


Why Convolutions?

The main advantages of using convolutions are parameter sharing and sparsity of connections. Parameter sharing is helpful because it reduces the number of weight parameters in a layer without losing accuracy. Additionally, the convolution operation breaks the input features into a smaller feature space, so each output value depends only on a small number of inputs and can be computed and adjusted quickly.



What kind of data are CNNs designed for?

Designed to process data that come in the form of multiple arrays, for example, colour images composed of three 2-D arrays containing pixel intensities in the three colour channels


Key features of CNNs

- local connections
- shared weights
- pooling
- use of many layers

- roots in the neocognitron


Role of convolutional layer

- detect local conjunctions of features from the previous layer


Role of pooling layer

To merge semantically similar features into one


Success of CNNs

ImageNet 2012

Halved error rates of competing approaches
Efficient use of GPUs, ReLUs
New regularizations - dropout
Techniques to generate more training examples by deforming existing ones


RNNs are good for what tasks

Those that involve sequential input (speech and language)

- process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence


Useful things about CNNs

1. Partial connectivity (i.e. sparse connections) - not all the units in layer i are connected to all the units in layer i + 1

2. Weight sharing - different parts of the network are forced to use the same weights


Four key ideas behind CNNs that take adv of the properties of natural signals

1. local connections
2. shared weights
3. pooling
4. use of many layers


Convolutional layer

- units in a conv layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank

- the result of this local weighted sum is then passed through a non-linearity such as a ReLU

- All units in a feature map share the same filter bank

- Different feature maps in a layer use different filter banks

Why this architecture:

1. Local groups of values are often highly correlated, forming distinctive motifs that are easily detected
2. local statistics of images are invariant to location (a motif can appear anywhere on the image) - hence the idea that units at different locations share the same weights

Mathematically, the filtering operation performed by a feature map is a discrete convolution (hence the name)
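The local weighted sum described above can be sketched directly in NumPy as a valid 2-D sliding-window operation (strictly a cross-correlation, which is what deep-learning libraries call "convolution"; the function name is illustrative):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution: slide the filter k over the input x and
    take a local weighted sum at each position."""
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))           # a simple 2x2 summing filter
out = conv2d(x, k)
print(out.shape)              # (3, 3)
```

Because the same `k` (the shared filter bank) is applied at every position, the layer has only `k.size` weights regardless of the input size.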


Role of conv layer

To detect local conjunctions of features from the previous layer


Role of pooling

To merge semantically similar features into one

Reduces dimensions of the representation and creates an invariance to small shifts and distortions
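A minimal sketch of non-overlapping max pooling in NumPy, showing the dimension reduction (the function name is made up for illustration):

```python
import numpy as np

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each
    size x size patch, shrinking each spatial dimension by `size`."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool(x)    # 4x4 input becomes 2x2
```

Keeping only each patch's maximum is what makes the representation insensitive to small shifts within a patch.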


How is training in a CNN done?

Backpropagation works the same as in a regular deep network

Allows all the weights in all the filter banks to be trained


Why do deep NNs suit natural signals?

Deep NNs exploit the property that many natural signals are compositional hierarchies in which higher-level features are obtained by composing lower-level ones.

Images -> local combos of edges form motifs -> parts -> objects

Same with speech

Pooling allows representations to vary very little when elements in the previous layer vary in position and appearance