DL CNNs & RNNs Flashcards

Flashcards in DL CNNs & RNNs Deck (34)

1

How to calculate number of parameters (weights and biases) in a CNN?

(filter width * filter height * number of input channels + 1 for the bias) * number of filters in the new layer
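
A quick worked check of this formula in Python, with made-up layer sizes (3x3 filters, 64 input channels, 128 output filters):

```python
filter_w, filter_h = 3, 3
in_channels = 64      # channels/filters in the previous layer
out_filters = 128     # filters in the new layer

params = (filter_w * filter_h * in_channels + 1) * out_filters  # +1 for each filter's bias
print(params)  # (3*3*64 + 1) * 128 = 73856
```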

2

What is dropout?

Randomly setting a fraction of input units to 0 at each update during training time - helps prevent overfitting

3

How do we combat exploding and vanishing gradients?

1. Normalization of inputs + careful initialization of weights
2. Regularization
3. Gradient clipping
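
A minimal sketch of gradient clipping by global norm, assuming the gradients are plain NumPy arrays and an illustrative max_norm (deep learning frameworks ship equivalent utilities):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    # rescale a list of gradient arrays so their combined L2 norm is at most max_norm
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > max_norm:
        grads = [g * (max_norm / (global_norm + 1e-8)) for g in grads]
    return grads
```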

4

Tips for Choosing Initial weights

1. Never set to all zero
2. Try small random values, e.g. somewhere between -0.2 and 0.2 (or scaled by fan-in)
3. Biases are often initialized to 0.01 or a similar small value
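
A minimal sketch of these tips for a single dense layer (NumPy; the layer shape and the 0.2 cap are illustrative):

```python
import numpy as np

def init_dense_layer(fan_in, fan_out, seed=0):
    rng = np.random.default_rng(seed)
    # small random weights: uniform in [-limit, limit], capped at 0.2 or scaled by fan-in
    limit = min(0.2, 1.0 / np.sqrt(fan_in))
    W = rng.uniform(-limit, limit, size=(fan_in, fan_out))   # never all zeros
    b = np.full(fan_out, 0.01)                               # small constant bias
    return W, b

W, b = init_dense_layer(fan_in=256, fan_out=128)
```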

5

What is regularization?

“Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.” (Goodfellow et al., 2016)

6

Regularization methods for regression

1. Lasso - L1, encourages sparseness in weight matrices
2. Ridge - L2 (weight decay/parameter shrinkage)
3. Elastic net - combines Lasso and Ridge
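
A minimal sketch of the three penalties using scikit-learn (the alpha and l1_ratio values are placeholders):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge, ElasticNet

X, y = make_regression(n_samples=100, n_features=20, noise=0.1, random_state=0)

Lasso(alpha=0.1).fit(X, y)                     # L1: encourages sparse weights
Ridge(alpha=0.1).fit(X, y)                     # L2: weight decay / parameter shrinkage
ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)  # combines L1 and L2
```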

7

Inverted dropout

During training, randomly drop out units according to a dropout probability at each training update, then scale the surviving activations up by 1/(keep probability) so that no rescaling is needed at test time
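
A minimal NumPy sketch, assuming `activations` is the output of one layer and keep_prob = 1 - dropout probability:

```python
import numpy as np

def inverted_dropout(activations, keep_prob=0.8, seed=0):
    # drop units with probability (1 - keep_prob), then rescale by 1/keep_prob
    # so the expected activation is unchanged and test time needs no scaling
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob
```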

8

How does dropout work?

Spreads out the weights - the network cannot rely on any one input too much

9

Disadv. of dropout

Introduces another hyperparameter - dropout probability - often one for each layer

10

What's another type of regularization apart from inverted dropout?

- Dataset augmentation
- Synthesize examples by flipping, rotating, cropping, distorting
- makes the model more robust to these variations
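
A minimal sketch of such augmentation by flipping and cropping, assuming images are H x W x C NumPy arrays (the 4-pixel crop margin is arbitrary):

```python
import numpy as np

def augment(image, seed=0):
    # synthesize a new training example from an H x W x C image
    rng = np.random.default_rng(seed)
    if rng.random() < 0.5:
        image = image[:, ::-1, :]                      # horizontal flip
    h, w, _ = image.shape
    top, left = rng.integers(0, 5), rng.integers(0, 5)
    return image[top:top + h - 4, left:left + w - 4]   # random crop, 4 pixels smaller
```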

11

What is early stopping?

Allowing a model to overfit and then rolling back to the point at which the error curves on the training and test sets begin to diverge
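
A minimal early-stopping sketch; the `model` interface (train_one_epoch, heldout_error, get_weights/set_weights) is hypothetical:

```python
def train_with_early_stopping(model, patience=5, max_epochs=100):
    # keep the weights from the epoch with the lowest held-out error and
    # stop once the error has not improved for `patience` epochs
    best_err, best_weights, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train_one_epoch()
        err = model.heldout_error()
        if err < best_err:
            best_err, best_weights, bad_epochs = err, model.get_weights(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    model.set_weights(best_weights)   # roll back to the best point
    return model
```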

12

1:1 RNN

Vanilla network without RNN

Image classification

13

1:M RNN

Image captioning

14

M:1 RNN

Sentiment analysis

15

M:M RNN

Machine translation

16

M:M RNN

Video classification (synced sequence input and output)

17

Why Convolutions?

The main advantages of using convolutions are parameter sharing and sparsity of connections. Parameter sharing is helpful because it reduces the number of weight parameters in a layer without losing accuracy. Additionally, the convolution operation breaks the input down into small local patches, so each output value depends on only a small number of inputs and can be adjusted quickly.
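
A rough worked comparison of the parameter savings, with hypothetical layer sizes (32x32x3 input, 28x28x6 output, 5x5 filters):

```python
# connecting every input to every output vs. sharing one small filter per output channel
fully_connected = (32 * 32 * 3) * (28 * 28 * 6)   # ~14.5 million weights
convolutional   = (5 * 5 * 3 + 1) * 6             # 456 parameters
```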

18

CNNs

Designed to process data that come in the form of multiple arrays, for example, colour images composed of three 2-D arrays containing the pixel intensities in the three colour channels

19

Key features of CNNs

- local connections
- shared weights
- pooling
- use of many layers

- roots in the neocognitron

20

Role of convolutional layer

- detect local conjunctions of features from the previous layer

21

Role of pooling layer

To merge semantically similar features into one

22

Success of CNNs

ImageNet 2012

Halved error rates of competing approaches
Efficient use of GPUs, ReLUs
New regularizations - dropout
Techniques to generate more training examples by deforming existing ones

23

RNNs are good for what tasks

Those that involve sequential input (speech and language)

- process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence
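
A minimal sketch of one vanilla RNN update, where the state vector h carries the history of the sequence (the shapes and random weights are illustrative):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # one element of the sequence updates the hidden 'state vector'
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(10, 16))   # input-to-hidden weights
W_hh = 0.1 * rng.normal(size=(16, 16))   # hidden-to-hidden weights
b_h, h = np.zeros(16), np.zeros(16)

for x_t in rng.normal(size=(5, 10)):     # sequence of 5 inputs, each of dimension 10
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```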

24

Useful things about CNNs

1. Partial connectivity (i.e. sparse connections) - not all the units in layer i are connected to all the units in layer i + 1

2. Weight sharing - different parts of the network are forced to use the same weights

25

Four key ideas behind CNNs that take adv of the properties of natural signals

1. local connections
2. shared weights
3. pooling
4. use of many layers

26

Convolutional layer

- units in a conv layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank

- the result of this local weighted sum is then passed through a non-linearity such as a ReLU

- All units in a feature map share the same filter bank

- Different feature maps in a layer use different filter banks

Why this architecture:

1. Local groups of values are often highly correlated, forming distinctive motifs that are easily detected
2. Local statistics of images are invariant to location (a motif can appear anywhere on the image) - hence the idea that units at different locations share the same weights

Mathematically, the filtering operation performed by a feature map is a discrete convolution (hence the name)
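
A minimal NumPy sketch of this filtering operation (cross-correlation style, 'valid' padding), with the same filter shared at every location:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # slide one shared kernel over every local patch of a 2-D image
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```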

27

Role of conv layer

To detect local conjunctions of features from the previous layer

28

Role of pooling

To merge semantically similar features into one

Reduces dimensions of the representation and creates an invariance to small shifts and distortions
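
A minimal NumPy sketch of 2x2 max pooling with stride 2 (the window size is illustrative):

```python
import numpy as np

def max_pool2d(feature_map, size=2):
    # keep the strongest response in each size x size patch, halving the spatial dims
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size                 # drop any ragged edge
    patches = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return patches.max(axis=(1, 3))
```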

29

How is training in a CNN done?

Backpropagation is the same as in a regular deep network

Allows all the filter banks to be trained

30

Deep NNs exploit the property that many natural signals are compositional hierarchies in which higher-level features are obtained by composing lower-level ones.

Images -> local combos of edges form motifs -> parts -> objects

Same with speech

Pooling allows representations to vary very little when elements in the previous layer vary in position and appearance