Class Twelve Flashcards

1
Q

What is a Convolutional Neural Network (CNN)?

A

A Convolutional Neural Network (CNN) is a type of deep learning model designed for processing and analyzing visual data, such as images or videos. It uses convolutional layers to automatically extract relevant features from the input data.
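A minimal PyTorch sketch of such a model (the layer sizes and 28x28 grayscale input are illustrative assumptions, not part of the card):

    import torch
    import torch.nn as nn

    # A tiny CNN for 28x28 grayscale images; sizes are illustrative.
    class TinyCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local feature extractors
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 28x28 -> 14x14
            )
            self.classifier = nn.Linear(16 * 14 * 14, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    logits = TinyCNN()(torch.randn(8, 1, 28, 28))  # batch of 8 images
    print(logits.shape)                            # torch.Size([8, 10])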

2
Q

What are the advantages of using CNNs for image processing?

A

Advantages of using CNNs include their ability to capture spatial relationships, hierarchical feature extraction, parameter sharing, and translation invariance. They are particularly effective in tasks such as image classification, object detection, and image segmentation.

3
Q

What are convolutional layers in a CNN?

A

Convolutional layers are the key building blocks of CNNs. They consist of filters or kernels that perform convolutions on the input data, enabling the network to extract local features and learn hierarchical representations.
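As a concrete illustration, a single hand-crafted 3x3 edge kernel convolved over an image yields a feature map that highlights local structure (in a CNN these kernel weights are learned rather than fixed):

    import torch
    import torch.nn.functional as F

    img = torch.randn(1, 1, 28, 28)              # (batch, channels, height, width)
    # Hand-crafted vertical-edge kernel; real CNN filters are learned.
    kernel = torch.tensor([[[[-1., 0., 1.],
                             [-2., 0., 2.],
                             [-1., 0., 1.]]]])   # (out_ch, in_ch, kH, kW)
    feature_map = F.conv2d(img, kernel, padding=1)
    print(feature_map.shape)                     # torch.Size([1, 1, 28, 28])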

4
Q

How do pooling layers contribute to CNNs?

A

Pooling layers in CNNs reduce the spatial dimensions of the feature maps while retaining the most relevant information. They help improve translation invariance, reduce computational complexity, and control overfitting.
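A quick sketch of the spatial reduction (shapes are illustrative):

    import torch
    import torch.nn as nn

    x = torch.randn(1, 16, 28, 28)           # 16 feature maps of size 28x28
    pooled = nn.MaxPool2d(kernel_size=2)(x)  # keep the max in each 2x2 window
    print(pooled.shape)                      # torch.Size([1, 16, 14, 14]): spatial dims halved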

5
Q

What is the purpose of the fully connected layers in a CNN?

A

Fully connected layers in a CNN integrate the extracted features from the convolutional layers and make final predictions or classifications. They provide high-level representations and enable the network to learn complex decision boundaries.
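A minimal sketch of this flatten-and-classify step (shapes are illustrative):

    import torch
    import torch.nn as nn

    feature_maps = torch.randn(8, 16, 14, 14)  # output of the conv/pool stack
    flat = feature_maps.flatten(1)             # (8, 16*14*14) = (8, 3136)
    logits = nn.Linear(3136, 10)(flat)         # 10-way classification head
    print(logits.shape)                        # torch.Size([8, 10])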

6
Q

What considerations should you have when designing a convolutional layer?

A

Filter size: a neuron's weights can be viewed as a small image the size of its receptive field. (Filters are also known as convolution kernels.)
Stride of filter: determines how many pixel steps the filter takes when moving from one position to the next (a stride of 1 is typical).
Padding for input layer: zero padding pads the borders of the image with a defined layer of zeros, which controls the spatial size of the output (see the sketch below).
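Together these choices fix the output size: for input width W, filter size K, padding P, and stride S, the output width is floor((W - K + 2P) / S) + 1. A sketch with illustrative values:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=8,
                     kernel_size=5, stride=2, padding=2)  # K=5, S=2, P=2
    out = conv(torch.randn(1, 3, 32, 32))
    # floor((32 - 5 + 2*2) / 2) + 1 = 16
    print(out.shape)  # torch.Size([1, 8, 16, 16])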

7
Q

What are the challenges of training deep CNNs?

A

Challenges of training deep CNNs include overfitting, vanishing/exploding gradients, computational complexity, and the need for large amounts of labeled training data.

8
Q

What are the differences between feed-forward and backpropagation CNN models?

A

The main difference between feed-forward and backpropagation CNN models lies in the learning process. In feed-forward models, information flows only in one direction, from input to output, without any adjustment of parameters based on error. In backpropagation models, the learning process involves both a forward pass for prediction and a backward pass for updating the model’s parameters using the calculated error signal. This allows the model to learn from the discrepancies between predictions and desired outputs, iteratively improving its performance through parameter updates.
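A minimal sketch of one such iteration in PyTorch (the model and data are stand-ins):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in for a CNN
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))

    logits = model(x)                        # forward pass: input -> prediction
    loss = nn.CrossEntropyLoss()(logits, y)  # error between prediction and true label
    loss.backward()                          # backward pass: gradients of loss w.r.t. parameters
    opt.step()                               # parameter update from the error signal
    opt.zero_grad()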

9
Q

What is an adversarial attack in the context of deep learning?

A

An adversarial attack refers to the deliberate manipulation of input data to deceive or mislead a deep learning model. The goal is to generate perturbations that are imperceptible to humans but can cause the model to make incorrect predictions.

10
Q

What is the objective of an adversarial attack?

A

The objective of an adversarial attack is to exploit the vulnerabilities of deep learning models and expose their susceptibility to small perturbations in the input data. This helps identify weaknesses in the model’s decision-making process.

11
Q

What are some common methods for generating adversarial examples?

A

Common methods for generating adversarial examples include the Fast Gradient Sign Method (FGSM), the Projected Gradient Descent (PGD) attack, and optimization-based approaches such as the Carlini-Wagner attack. These methods aim to find perturbations that maximize the model’s prediction error.

12
Q

How can adversarial attacks be used to evaluate and improve deep learning models?

A

Adversarial attacks provide insights into the vulnerabilities and limitations of deep learning models. By analyzing and understanding how models can be fooled, researchers can develop robust defenses and improve the model’s generalization capabilities.

13
Q

What are some defense mechanisms against adversarial attacks?

A

Defense mechanisms against adversarial attacks include adversarial training, where models are trained using adversarial examples, defensive distillation, input preprocessing techniques, and the use of certified robust models. These techniques aim to enhance the model’s robustness against adversarial perturbations.
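For instance, adversarial training can be sketched as one optimization step over a 50/50 mix of clean and perturbed examples; the perturb argument is an assumed attack function such as the FGSM sketch under card 20:

    import torch
    import torch.nn as nn

    loss_fn = nn.CrossEntropyLoss()

    def adversarial_training_step(model, opt, x, y, perturb):
        # perturb(model, x, y) crafts adversarial versions of the batch (e.g. FGSM).
        x_adv = perturb(model, x, y)
        inputs = torch.cat([x, x_adv])    # 50/50 mix of clean and adversarial inputs
        targets = torch.cat([y, y])
        loss = loss_fn(model(inputs), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()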

14
Q

Can adversarial attacks be applied to other domains beyond images?

A

Yes, adversarial attacks can be applied to other domains beyond images, including natural language processing (NLP) tasks such as text classification and sentiment analysis. The goal is to generate subtle modifications to input text that can manipulate the model’s predictions.

15
Q

What are the ethical implications of adversarial attacks?

A

Adversarial attacks raise ethical concerns, as they can be used maliciously to deceive deep learning models, leading to potential security risks, privacy breaches, and misinformation. Research in adversarial robustness aims to address these concerns and improve the overall security of deep learning systems.

16
Q

How do adversarial attacks relate to the robustness of deep learning models?

A

Adversarial attacks highlight the lack of robustness in deep learning models and expose their vulnerability to small perturbations. By studying adversarial examples, researchers can develop techniques to enhance the model’s robustness and improve its real-world performance.

17
Q

Can adversarial attacks be completely eliminated?

A

Completely eliminating adversarial attacks is challenging due to the fundamental properties of deep learning models. However, ongoing research focuses on developing more robust models and defense mechanisms to mitigate the impact of adversarial attacks.

18
Q

What are the different types of adversarial attacks?

A
  • White-Box: the attacker has full access to the training method (data, network initialization, algorithm, hyperparameters).
  • Black-Box: the attacker does not have complete access to the network's training method.
19
Q

What is a GAN?

A

A GAN (Generative Adversarial Network) is a special type of machine learning model that can create new things, like images or text. It has two main parts: a generator and a discriminator. The generator tries to make fake samples that look real, while the discriminator tries to tell the difference between real and fake samples. The two compete against each other and improve over time; the goal is for the generator to become very good at producing realistic samples. GANs are used to generate realistic images, produce new text, or augment datasets. [Training uses binary cross-entropy loss.]

20
Q

What is one example of an adversarial attack method?

A

The Fast Gradient Sign Method (FGSM) is an effective method for generating adversarial images (sketched below):
1. Take the input image -> make a prediction (using the CNN).
2. Compute the loss of the prediction based on the true class label.
3. Calculate the gradients of the loss with respect to the input image.
4. Compute the gradient sign -> use it to construct the adversarial image (output).
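A minimal PyTorch sketch of those four steps (the epsilon value is an illustrative assumption):

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, eps=0.03):
        x = x.clone().detach().requires_grad_(True)
        logits = model(x)                        # 1. make a prediction
        loss = nn.CrossEntropyLoss()(logits, y)  # 2. loss against the true label
        loss.backward()                          # 3. gradient of loss w.r.t. the input image
        x_adv = x + eps * x.grad.sign()          # 4. step along the gradient sign
        return x_adv.clamp(0, 1).detach()        # keep pixels in a valid range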

21
Q

How does a GAN actually work?

A

Each training iteration is divided into two phases (sketched below):
* Discriminator training: a batch of real images (label 1) is sampled from the training set and completed with an equal number of fake images (label 0) produced by the generator. [Uses binary cross-entropy loss.]
* Generator training: produce another batch of fake images, and once again the discriminator is used to tell whether the images are fake or real. No real images are added to this batch, and all labels are set to 1 (real), so the generator improves by fooling the discriminator.
* The generator never actually sees any real images.
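A minimal sketch of one such iteration (network sizes and learning rates are illustrative; images are treated as flattened 784-dim vectors):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())  # generator: noise -> fake image
    D = nn.Sequential(nn.Linear(784, 1))                 # discriminator: image -> real/fake logit
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()                         # binary cross-entropy loss
    real = torch.rand(32, 784)                           # stand-in for a batch of real images

    # Phase 1: discriminator training -- real images labeled 1, fakes labeled 0.
    fake = G(torch.randn(32, 64)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Phase 2: generator training -- fakes only, all labels set to 1 (real).
    fake = G(torch.randn(32, 64))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()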