10.2 Deep learning Flashcards

1
Q

A neural network embedding maps instances from a ____-dimensional input space to a ____-dimensional representation space.

A
  • high / low

Embeddings generally map instances from a high-dimensional input space, such as raw pixels, to a low-dimensional space that captures useful features of the input, in which learning is easier.
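For illustration, a minimal PyTorch sketch of such a mapping (the 784-dimensional input, the hidden size, and the 32-dimensional embedding are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Map a high-dimensional instance (e.g., a flattened 28x28 image = 784 pixels)
# to a low-dimensional 32-value representation.
embedder = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 32),   # 32-dimensional representation space
)

x = torch.rand(1, 784)    # one instance in the high-dimensional input space
z = embedder(x)
print(z.shape)            # torch.Size([1, 32])
```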

2
Q

Which operation in a convolutional neural network produces an output that is smaller than the input?

A
  • All of these

All of these will produce an output smaller than the input. The output of a valid (unpadded) convolution with kernel size > 1 is always smaller than the input image, regardless of the stride, and max pooling with stride > 1 also produces an output smaller than its input.
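A small PyTorch sketch makes the output shapes concrete (the 28x28 input and the kernel/stride settings are illustrative assumptions):

```python
import torch
import torch.nn as nn

x = torch.rand(1, 1, 28, 28)                      # one 28x28 single-channel image

conv = nn.Conv2d(1, 1, kernel_size=3, stride=1)   # valid convolution: no padding
pool = nn.MaxPool2d(kernel_size=2, stride=2)

print(conv(x).shape)   # torch.Size([1, 1, 26, 26]) -- smaller even with stride 1
print(pool(x).shape)   # torch.Size([1, 1, 14, 14]) -- smaller because stride > 1
```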

3
Q

If a deep neural network is overfitting, which of these options would help prevent that? (select all that are correct)

A
  • Reduce training epochs
  • Add dropout after some layers

Dropout is a regularization technique that can be added to neural networks to reduce overfitting, and training for fewer epochs (early stopping) can also help. Changing the layer type is not generally helpful for preventing overfitting (unless it reduces the number of weights to train), and adding more layers may make the problem worse: more parameters to train means a greater chance of overfitting to the training data.
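As a sketch, dropout can be added after some layers like this (the layer sizes and the dropout rate of 0.5 are illustrative assumptions):

```python
import torch.nn as nn

# Dropout randomly zeroes a fraction of activations during training,
# which discourages co-adaptation of units and reduces overfitting.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout after the first hidden layer
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout after the second hidden layer
    nn.Linear(64, 10),
)
# Reducing training epochs corresponds to early stopping: halt training once
# validation loss stops improving instead of running a fixed large epoch count.
```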

4
Q

Which is not generally true of the current state-of-the-art deep neural networks?

A
  • They show human-like generalization to tasks for which they were not trained

State-of-the-art networks often meet or exceed human performance on the tasks for which they are trained, but they often don’t generalize like humans (for example, they are sensitive to adversarial image attacks). Neural networks learn non-linear functions of their input data due to their non-linear activation functions. Current state-of-the-art networks need large datasets for training due to their very large number of parameters.
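To illustrate the role of the non-linear activation, a small PyTorch sketch (layer sizes are illustrative assumptions): without an activation, two stacked linear layers collapse into a single linear map.

```python
import torch
import torch.nn as nn

linear_only = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 1))            # no activation
non_linear  = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1)) # with ReLU

# The linear-only stack is equivalent to one linear layer with combined weights,
# so it can only represent linear functions of the input.
W1, b1 = linear_only[0].weight, linear_only[0].bias
W2, b2 = linear_only[1].weight, linear_only[1].bias

x = torch.rand(2, 4)
combined = x @ (W2 @ W1).T + (W2 @ b1 + b2)
print(torch.allclose(linear_only(x), combined))   # True: still just a linear function
```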
