Week 7 - Deep Learning Architectures & Training Flashcards
(9 cards)
What is the concept of Transfer Learning?
Transfer Learning is a method that leverages a pre-trained network for new tasks without needing vast amounts of new training data.
How does Transfer Learning work?
- A network is initially trained on a large, general-purpose dataset like ImageNet
- If the target dataset is small, freeze the pre-trained network’s weights and re-train only the classifier layer
- For a medium-sized dataset, start from the pre-trained weights, then re-train the whole network or just the higher layers with a reduced learning rate (see the sketch below)
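A minimal PyTorch sketch of both regimes, assuming torchvision’s ResNet-18 as the pre-trained network; the 10-class head and the learning rates are illustrative assumptions, not values from the course:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on the large, general-purpose ImageNet dataset
# (the `weights` argument assumes torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Small target dataset: freeze all pre-trained weights...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the classifier layer so that only it is re-trained
# (10 output classes is an arbitrary example).
model.fc = nn.Linear(model.fc.in_features, 10)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Medium-sized target dataset: instead keep everything trainable and
# fine-tune the whole network with a reduced learning rate, e.g.:
#   for param in model.parameters():
#       param.requires_grad = True
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```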
What are some properties of the ImageNet dataset?
- 1.2 million high-resolution images
- 1000 different classes
What are some historical examples of Classification CNNs?
- AlexNet
- VGG
- ResNet (Residual Network)
What are some historical examples of Segmentation CNNs?
- U-Net
- SegNet
- nnU-Net
What are some historical examples of Object Detection CNNs?
- R-CNN
- Fast R-CNN
- Mask R-CNN
- YOLO
How can Eye Tracking be used effectively in Computer Vision?
Deep learning tends to perform better when the relevant regions of an image are marked before training.
Eye tracking is a cheap and easy way to record where someone has been looking. Once this gaze data is filtered, it can be passed to a suitable deep learning system, for example as a saliency map or attention prior (a minimal sketch follows).
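One plausible way to turn filtered fixations into a model input is a Gaussian saliency heatmap. The function name, the sigma value, and the use of SciPy are assumptions for illustration, not the course’s prescribed pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_heatmap(fixations, image_shape, sigma=25.0):
    """Convert filtered (x, y) gaze fixations into a saliency heatmap
    that can be fed to a network as an extra channel or attention prior.

    fixations:   iterable of (x, y) pixel coordinates
    image_shape: (height, width) of the target image
    sigma:       spread of each fixation in pixels (assumed value)
    """
    heatmap = np.zeros(image_shape, dtype=np.float32)
    for x, y in fixations:
        # Accumulate one point per fixation, clipped to the image bounds.
        xi = int(np.clip(x, 0, image_shape[1] - 1))
        yi = int(np.clip(y, 0, image_shape[0] - 1))
        heatmap[yi, xi] += 1.0
    # Smooth the points into a continuous map and normalise to [0, 1].
    heatmap = gaussian_filter(heatmap, sigma=sigma)
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return heatmap

# Example: three fixations on a 480x640 image.
mask = gaze_to_heatmap([(100, 200), (110, 210), (400, 300)], (480, 640))
```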
What can help to ‘pad out’ a dataset that isn’t large enough for training?
Creating synthetic data, which can be used to augment the original dataset with ‘new’ samples.
What are some factors to consider when rendering synthetic images?
- Accuracy of material
- Lighting
- Background
- How to create annotations (see the toy sketch below)
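A toy NumPy sketch of why synthetic rendering is attractive for the last point: because the scene is generated programmatically, the annotation (here a bounding box) is known exactly at render time. Everything here, from the shape of the ‘objects’ to the dataset size, is an illustrative assumption rather than a real rendering pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_image(size=128):
    """Render a toy synthetic image (noisy background plus one bright
    rectangle) and return it with an automatically derived bounding box.
    A real renderer would also vary materials, lighting and backgrounds;
    the 'annotation for free' idea is the point here.
    """
    img = rng.uniform(0.0, 0.3, (size, size)).astype(np.float32)  # background
    w, h = rng.integers(16, 48, 2)                                # object size
    x0 = int(rng.integers(0, size - w))
    y0 = int(rng.integers(0, size - h))
    img[y0:y0 + h, x0:x0 + w] = rng.uniform(0.7, 1.0)             # the "object"
    bbox = (x0, y0, x0 + w, y0 + h)  # annotation known exactly at render time
    return img, bbox

# Pad out a small real dataset with 500 synthetic samples.
synthetic = [synth_image() for _ in range(500)]
```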