ML Security Flashcards

1
Q

-The science of making machines smart, i.e., having machines perform human tasks (examples: visual recognition, natural language processing)

A. Artificial Intelligence (AI)
B. Machine Learning (ML)
C. Deep Learning (DL)

A

A. Artificial Intelligence (AI) - The science of making machines smart: the ability of machines to perform human tasks (examples: visual recognition, natural language processing).

2
Q

-One of many approaches to AI that uses a system capable of learning from experience. Makes decisions based on data rather than algorithm.

A. Artificial Intelligence (AI)
B. Machine Learning (ML)
C. Deep Learning (DL)

A

B. Machine Learning (ML)

-One of many approaches to AI that uses a system capable of learning from experience. Makes decisions based on data rather than algorithm.

3
Q

-A set of techniques for implementing machine learning that recognizes patterns of patterns. (for example: image recognition). Identifies object boundary, type, structure.

A. Artificial Intelligence (AI)
B. Machine Learning (ML)
C. Deep Learning (DL)

A

C. Deep Learning (DL)

A set of techniques for implementing machine learning that recognizes patterns of patterns. (for example: image recognition)

4
Q

Different applications work with different data.

A
5
Q

What is an AI Threat?

A. Hackers can break self-driving systems through stickers on stop signs
B. Hackers can bypass facial recognition
C. Hackers can break web platforms and filters via social media
D. Home assistants such as Nest can be broken
E. All the above

A

E. All of the above are AI threats.

a. Self-Driving Car Threat:
Hackers can break the system through stickers on stop signs

b. Classification / Image Threat:
Hackers can bypass facial recognition

c. Social Media Threat:
Hackers can break web platforms and filters via social media

d. Home Automation Threat:
Home assistants such as Nest can be broken

6
Q

To which algorithm category do the following belong?

-Classification
-Regression

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

-Classification
-Regression

A. Supervised

7
Q

To which algorithm category do the following belong?

-Clustering
-Dimensionality Reduction

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

-Clustering
-Dimensionality Reduction

B. Unsupervised

8
Q

To which algorithm category do the following belong?

-Generative models

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

-Generative models

C. Semi-Supervised

9
Q

To which algorithm category do the following belong?

-reinforcement learning

D. Reinforcement Learning

A

-reinforcement learning

D. Reinforcement Learning

10
Q

How are AI attacks classified?

A. confidentiality, availability, and integrity (triad)
B. Espionage, sabotage, and fraud
C. Availability, fraud, and integrity
D.A and B

A

D. A and B

AI attacks are classified both by the CIA triad:

A. confidentiality, availability, and integrity

and by attacker goal:

B. espionage, sabotage, and fraud

11
Q

What are the steps to start an AI Security Project?

i. Identify an AI object and a task
ii. Understand the algorithm category and the algorithm itself
iii. Choose an AI attack relevant to your task and algorithm

A. 3,2,1
B. 2,1,3
C. 1,2,3
D. 3,1,2

A

Steps to start an AI Security Project:

C. 1,2,3

i. Identify an AI object and a task
ii. Understand the algorithm category and the algorithm itself
iii. Choose an AI attack relevant to your task and algorithm

12
Q

True or False:

AI threats are similar / mostly the same, but their approaches are different

A

True

AI threats are similar / mostly the same, but their approaches are different

Reasoning: The difference comes in Algorithms

13
Q

Steps to Set up your Environment:

i. Determine whether or not you have an NVIDIA GPU
ii. Choose an operating system (Ubuntu recommended)
iii. Follow the provided guidelines

A. 3,2,1
B. 1,2,3
C. 2, 1, 3,
D. 3,1,2,

A

Steps to Set up your Environment:

i. Determine whether or not you have an NVIDIA GPU
ii. Choose an operating system (Ubuntu recommended)
iii. Follow the provided guidelines

B. 1,2,3

14
Q

Which attack cannot be used for breaking integrity of AI?

A. backdoor
b. adversarial
c. inference attack
d. poisoning

A

c. inference attack

Inference attacks don't break functionality; they extract critical data.

REASONING:

Adversarial attacks - break integrity by causing misclassification
Poisoning - poisoning breaks integrity
Backdoor - backdoor attacks break integrity

15
Q

What is the most important hardware for this course?

a. CPU
b. GPU
c. RAM
d. HDD

A

most important hardware
b. GPU

16
Q

The model is trained on a labeled data set. Examples are classification and regression:

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

A. Supervised

Supervised - The model is trained on a labeled data set. Examples are classification and regression.

17
Q

Model is attempting to automatically find structure in the data by extracting useful features and analyzing its structure. Examples: Clustering, Association, Dimension Reduction (Generalization)

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

B. Unsupervised

Unsupervised - Model is attempting to automatically find structure in the data by extracting useful features and analyzing its structure. Examples: Clustering, Association, Dimension Reduction (Generalization)

18
Q

Imagine a road sign detection system aiming to classify signs. A supervised learning approach is usually used. Examples of certain groups are known, and all classes should be defined in the beginning. This method is:

A. Classification
B. Regression
C. Clustering

A

A. Classification

Classification - Imagine a road sign detection system aiming to classify signs. A supervised learning approach is usually used. Examples of certain groups are known, and all classes should be defined in the beginning.

19
Q

The knowledge about the existing data is utilized to have an idea about new data (Past explains future). Ex. is stock price prediction.

A. Classification
B. Regression
C. Clustering

A

B. Regression

Regression - The knowledge about the existing data is utilized to have an idea about new data (Past explains future). Ex. is stock price prediction.

20
Q

An unsupervised learning approach is usually used. Examples of certain groups exist, but information about the classes in the data is unknown.

A. Classification
B. Regression
C. Clustering

A

C. Clustering

Clustering - An unsupervised learning approach is usually used. Examples of certain groups exist, but information about the classes in the data is unknown.

Algorithms: KNN (K-Nearest Neighbor), K-Means, Mixture Model (LDA)

21
Q

Necessary if you deal with complex systems with unlabeled data and many potential features (e.g., facial recognition)

A. Classification
B. Dimension Reduction (Generalization)
C. Clustering
D. Generative Models

A

B. Dimension Reduction (Generalization)

Dimension Reduction - Necessary if you deal with complex systems with unlabeled data and many potential features (e.g., facial recognition)

22
Q

_______ designed to simulate the actual data, and not decisions, based on previous data.

Generates AI data based on previous data.

A. Classification
B. Dimension Reduction (Generalization)
C. Clustering
D. Generative Models

A

D. Generative Models

Generative Models - Generate AI data based on previous data; designed to simulate the actual data, and not decisions, based on previous data.

23
Q

________ A behavior that depends on the changing environment.

A. Reinforcement Learning
B. Dimension Reduction (Generalization)
C. Active Learning
D. Generative Models

A

A. Reinforcement Learning -A behavior that depends on the changing environment.

Reinforcement Learning
(Behavior should react to the changing environment. Trial and Error.)

24
Q

_____ A subclass of reinforcement learning, which helps correct errors, in addition to the environment changes

A. Reinforcement Learning
B. Dimension Reduction (Generalization)
C. Active Learning
D. Generative Models

A

C. Active Learning

Active Learning - A subclass of reinforcement learning, which helps correct errors, in addition to the environment changes

Acts as a teacher who can help correct errors in addition to environment changes

25
Q

_________ are inputs to machine learning models that result in an incorrect output.

A. adversarial example
B. king penguin
C. starfish
D. baseball

A

A. adversarial example

adversarial example - inputs to machine learning models that result in an incorrect output.

Reasoning:
b. King penguin - is an adversarial example
c. Starfish - is an adversarial example
d. Baseball - is an adversarial example

26
Q

________ causes ML models to make a false prediction.

A. adversarial example
B. king penguin
C. starfish
D. baseball

A

A. adversarial example

Adversarial example - causes ML models to make a false prediction.

27
Q

___________ tries to move inputs across the decision boundary?

A. adversarial example
B. king penguin
C. adversarial attacks
D. baseball

A

C. adversarial attacks

ADVERSARIAL ATTACKS- tries to move inputs across the decision boundary.

28
Q

How AI Attacks Work:

What do AI Attacks calculate?

A. How much input changes affect the outputs
B. How much output changes affect the inputs
C. Decision boundary
D. Neither

A

A. How much input changes affect the outputs.

AI attacks work by calculating how much INPUT changes AFFECT the OUTPUT.

29
Q

What do you need to calculate AI Attacks?

a. Gradient
b. Loss function
c. Optimal Perturbations measuring Lp Norms
d. All the above

A

d. All the above

What you need to calculate AI Attacks:
1. Gradient
2. Loss Function
3. Optimal Perturbations measuring Lp Norms

30
Q

______ defines how good a given model is at making predictions for a given scenario.

a. Gradient
b. Loss function
c. Optimal Perturbations measuring Lp Norms
d. None of the Above

A

b. Loss function

Loss Function - Defines how good a given model is at making predictions for a given scenario

31
Q

What method has the following characteristics:
-it has its own curve and gradients
-slope of the curve indicates the appropriate way of updating the parameters to make the model more accurate in case of prediction

a. Gradient
b. Loss function
c. Optimal Perturbations measuring Lp Norms
d. None of the Above

A

b. Loss function

-it has its own curve and gradients
-slope of the curve indicates the appropriate way of updating the parameters to make the model more accurate in case of prediction

32
Q

____ a fancy word for derivative, represented as a vector. Means rate of change.

a. Gradient
b. Loss function
c. Optimal Perturbations measuring Lp Norms
d. None of the Above

A

a. Gradient

Gradient - a fancy word for derivative, represented as a vector. Means rate of change.
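
Example (Python sketch of the loss-and-gradient idea above; the toy PyTorch model, input values, and label are illustrative assumptions, not course material):

    import torch
    import torch.nn as nn

    # Toy classifier: 4 input features -> 3 classes (purely illustrative).
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 4, requires_grad=True)  # input we want the gradient for
    y = torch.tensor([2])                      # assumed "true" label of this input

    loss = loss_fn(model(x), y)  # how wrong the model is on (x, y)
    loss.backward()              # backpropagate to get d(loss)/d(input)

    print(x.grad)         # rate of change of the loss w.r.t. each input feature
    print(x.grad.sign())  # the sign of this gradient is what FGSM-style attacks use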

33
Q

_____ attacks try to move inputs across the decision boundary.

a. Gradient
b. Loss function
c. Optimal Perturbations measuring Lp Norms
d. None of the Above

A

c. Optimal Perturbations measuring Lp Norms

Perturbation - attacks try to move inputs across the decision boundary.

34
Q

____ denotes the maximum change for all pixels in the adversarial examples

a. L∞
b. u
c. L0
d. none of above

A

a. L∞

L∞ denotes the maximum change for all pixels in the adversarial examples. (Used in perturbation measurement)

35
Q

_____ number of pixels changed in the adversarial examples.

a. L∞
b. u
c. L0
d. none of above

A

c. L0

L0 - the number of pixels changed in the adversarial examples. (Used in perturbation measurement)

36
Q

Topic “If ML Algorithms have Vulnerabilities”

Example: a malefactor implements bypass techniques against a spam filter. All algorithms that ML models are based on (from SVMs to random forests and neural networks) are vulnerable to different kinds of adversarial inputs. This type of attack targets which form of AI?

a. Classification
b. Random Forests
c. K-Means
d. Regression

A

a. Classification

Adversarial Classification -

An attack where a malefactor implements bypass techniques against a spam filter. All algorithms that ML models are based on (from SVMs to random forests and neural networks) are vulnerable to different kinds of adversarial inputs.

37
Q

Which type of ML algorithms has few examples of practical attacks?

a. Classification
b. Random Forests
c. K-Means
d. Regression

A

d. Regression

Regression- a type of ML Algorithms that has FEW EXAMPLES of PRACTICAL attacks.

Source: “Adversarial Regression with Multiple Learners 2018”

38
Q

True / False:
Most attacks used in Classification can be used in Regression?

A

TRUE

MOST attacks used in Classification CAN BE USED in Regression

Reasoning: Condition Based Instance and Null Analysis

39
Q

Which type of ML algorithm (for example, auto-encoders) is prone to attacks such as input reconstruction and spoofing?

Given an input image, the model encodes it into a lower-dimensional representation and then uses that representation to reconstruct the original image.

a. Classification
b. Generative Models
c. K-Means
d. Regression

A

b. Generative Models

Generative Models (GANs) and auto-encoders - are prone to attacks such as input reconstruction and spoofing.

Given an input image, the model encodes it into a lower-dimensional representation and then uses that representation to reconstruct the original image.

40
Q

Which type of ML algorithm can be used for malware detection?

a. Classification
b. Generative Models
c. K-Means
d. Clustering

A

d. Clustering

Clustering - used for malware detection.
Clustering algorithm is K-Nearest Neighbors (KNN)

Note: Training data comes from the wild.
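
Example (Python sketch of the clustering idea; the scikit-learn call is real, but the synthetic feature vectors and cluster count are assumptions for illustration):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Stand-in feature vectors (e.g., API-call counts for benign and malicious files).
    benign = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
    malware = rng.normal(loc=4.0, scale=1.0, size=(20, 5))
    X = np.vstack([benign, malware])

    # Group the samples into 2 clusters; no labels are used (unsupervised).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(labels))  # cluster sizes: the small cluster is the suspicious one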

41
Q

______ is the most common dimensionality reduction algorithms?

A. PCA
B. Clustering
C. Generalization
D. MNIST

A

A. PCA

PCA- is the most common dimensionality reduction algorithm.

42
Q

Which type of ML algorithm is sensitive to outliers that can be exploited by contaminating training data?

A. PCA
B. Clustering
C. Generalization
D. MNIST

A

A. PCA

PCA - sensitive to outliers that can be exploited by contaminating training data.
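
Example (Python sketch of why this matters; the synthetic 2-D data and the injected points are assumptions for illustration):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Clean 2-D data lying mostly along the x-axis.
    clean = rng.normal(size=(200, 2)) * np.array([3.0, 0.3])

    # A few poisoned points placed far off-axis by the attacker.
    poison = np.full((10, 2), [0.0, 30.0])

    pc_clean = PCA(n_components=1).fit(clean).components_[0]
    pc_poisoned = PCA(n_components=1).fit(np.vstack([clean, poison])).components_[0]

    print("clean principal direction:   ", np.round(pc_clean, 2))
    print("poisoned principal direction:", np.round(pc_poisoned, 2))
    # The learned direction tilts toward the injected outliers, which is what a
    # contamination attack exploits.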

43
Q

What does this example show (insert image)

A

It allows dramatically decreasing the detection rate for DoS attacks

44
Q

______ which type of algorithm is used for Facial Recognition? An example of this is using your face to unlock your iphone.

A. PCA
B. Clustering
C. Generalization
D. MNIST

A

A. PCA

PCA- algorithm is used for Facial Recognition. An example of this is using your face to unlock your iphone.

45
Q

In RL, the framework known as DQN uses a DNN for feature selection and Q-function approximation.

A
46
Q

What are the steps of a Deep Reinforcement Learning Attack (DQN)?

i. attacker observes current state and transitions in environment
ii. attacker estimates best action according to adversarial policy
iii. attacker crafts perturbation to induce adversarial action
iv. attacker applies perturbation
v. perturbed input is revealed to target
vi. attacker waits for the target's action

A. 1,2,3,4,5,6
B. 6,5,4,3,2,1
C. 4,3,2,5,6,1
D. 2,5,3,4,6,1

A

steps of a Deep Reinforcement Learning Attack (DQN)?

A. 1,2,3,4,5,6

i. attacker observes current state and transitions in environment
ii. attacker estimates best action according to adversarial policy
iii. attacker crafts perturbation to induce adversarial action
iv. attacker applies perturbation
v. perturbed input is revealed to target
vi. attacker waits for the target's action

47
Q

What is the most wide spread attack method?

a. LBFGS
b. FGSM (Fast Gradient Sign Method)
c. DQN
d. none of the above

A

b. FGSM (Fast Gradient Sign Method)

FGSM - the most widespread attack method.

48
Q

_____ attack does the following:
1. Takes the label of the least likely class predicted by network
2. The computed perturbation is subtracted from the original image
3. This maximizes the probability that the network predicts target as the label of the adversarial example

a. LBFGS
b. FGSM (Fast Gradient Sign Method)
c. DQN
d. none of the above

A

b. FGSM (Fast Gradient Sign Method)

FGSM works using the following steps:

  1. Takes the label of the least likely class predicted by the network
  2. The computed perturbation is subtracted from the original image
  3. This maximizes the probability that the network predicts the target as the label of the adversarial example
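
Example (minimal FGSM sketch in Python; it shows the common untargeted form, which adds eps * sign(gradient), while the targeted "least-likely class" variant described above subtracts the perturbation computed for the target label; the toy model, input, and eps are assumptions):

    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps):
        """One-step FGSM: move x in the direction that increases the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Untargeted variant: ADD eps * sign(gradient). The targeted variant
        # described above SUBTRACTS the perturbation computed for the target label.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep values in a valid input range

    # Illustrative toy model and flattened "image" with values in [0, 1].
    model = nn.Sequential(nn.Linear(16, 10))
    x = torch.rand(1, 16)
    y = torch.tensor([3])
    x_adv = fgsm(model, x, y, eps=0.1)
    print((x_adv - x).abs().max())  # L-infinity size of the change, at most eps
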
49
Q

_____ attack method was very time consuming, especially for larger images and practically non-applicable

a. LBFGS
b. FGSM (Fast Gradient Side Method)
c. DQN
d. none of the above

A

a. LBFGS

L-BFGS - this attack method was very time-consuming, especially for larger images, and practically non-applicable

50
Q

Which ML task category is required if you deal with complex systems with unlabeled data and many potential features?

a. classification
b. clustering
c. reinforcement learning
d. dimensionality reduction

A

d. dimensionality reduction

Dimensionality Reduction - the ML category required if you deal with complex systems with unlabeled data and many potential features.

51
Q

How do you measure Adversarial Attacks?

A. using Gradient
B. using Loss Function
C. using L-p norm
D. using the size of ML Model

A

C. using L-p norm

L-p norm used to measure changes for adversarial attacks

52
Q

Which ML task category has the biggest number of research papers?

A. Clustering
B. Reinforcement Learning
C. Classification
D. Regression

A

C. Classification

Classification - has the largest number of research papers, around 300.

53
Q

Why is the FGSM method better than the L-BFGS method?

A. Requires less information
B. FGSM is more accurate
C. More universal
D. The FGSM method is faster

A

D. The FGSM method is faster

Reasoning-
Not C: L-BFGS is more universal but slower and less accurate

54
Q

Which dataset is better for testing practical attacks?

A.CIFAR
B. MNIST
C. LFW
D. ImageNet

A

B. MNIST

MNIST is the dataset best for testing practical attacks. The MNIST dataset is the smallest one, and all tests will be less time-consuming with lower computation cost

55
Q

What are the reasons to Hack AI?

A. AI is eating software
B. Expansion of technology related to cybersecurity
C. Vulnerable to various cyber attacks like any other algorithms
D. All Above

A

D. All Above

Hack AI
-AI is eating software
-Expansion of tech related to cybersecurity
-vulnerability to various cyber attacks like any other algorithms

56
Q

Autonomous cars use image classification, such as identification of road signs.

______ can lead to horrible accidents

A. Spoofing of road signs

A

Autonomous cars use image classification, such as identification of road signs.

Spoofing of road signs - can lead to horrible accidents

57
Q

What are AI risks in the Cybersecurity Industry?

A. Bypass spam filters
B. Bypass threat detection solutions
C. Bypass AI-based Malware Detection tools
D. All Above

A

AI risks in Cybersecurity Industry

D. All Above
-Bypass spam filter
-Bypass threat detection solutions
-bypass AI based malware detection tools

58
Q

What are AI risks in the Retail Industry?

A. bypass Facial recognition

A

AI Risks in Retail Industry:

A. bypass Facial recognition
(bypassed with makeup, surgery, etc.)

59
Q

How is AI used in retail?

a. Behavior retail of clients
b. Optimize business processes
c. all above

A

c. all above

AI use in retail:
1. Behavior retail of clients
2. Optimize business processes

60
Q

How is AI used in the Smart Home industry?

Amazon Echo recognizes noise as a command. The voice is recognized as certain instructions.

a. forge voice commands

A

AI used in Smart Home Industry

a. forge voice commands

61
Q

How is AI used in the Web and Social Media industry?

a. Fool sentiment analysis of movie reviews, hotels etc.

A

How is AI used in the Web and Social Media industry?

  1. Fool sentiment analysis of movie reviews, hotels etc.

Misinterpret a comment

62
Q

How is AI used in Finance?

a. trick anomaly and fraud detection engines

A

How is AI used in Finance?

  1. trick anomaly and fraud detection engines
63
Q

What are ways to prevent Frauds using ML?

a. learn customer behavior
b. analysis of aggregated data
c. analysis of social graphs
d. automation of routine processes
e. control use of ID information
f. ALL ABOVE

A

f. ALL ABOVE

-learn customer behavior
- analysis of aggregated data
-analysis of social graphs
- automation of routine processes
- control use of ID information

64
Q

Confidentiality is associated with:

a. Gather System Insights
b. Disable AI System Functionality
c. Modify AI logic

A

Confidentiality is associated with:

a. Gather System Insights
-Obtain insights into the system
-utilize the received info or plot more advanced attacks

65
Q

Which triad is the following:
(A malicious person deals with a ML system that is an Image Recognition System. They get to learn more about the internals or the datasets from this system)

a. confidentiality
b. availability
c. integrity

A

a. confidentiality

(A malicious person deals with a ML system that is an Image Recognition System. They get to learn more about the internals or the datasets from this system)

Reasoning-
Confidentiality because they are gathering information about the system and that information can be used to plot attacks.

NOT: Integrity because they did not change logic
NOT: Availability because they did not disable anything

66
Q

Availability is associated with:

a. Gather System Insights
b. Disable AI System Functionality
c. Modify AI logic

A

b. Disable AI System Functionality

Availability = Disable AI System Functionality

67
Q

Which triad is the following:
-Flood AI with requests, which demand more time
-Flood with incorrect classified objects to increase manual work
-Modify a model by retraining it with wrong examples
-Use computing power of an AI model for solving your own tasks

a. confidentiality
b. availability
c. integrity

A

b. availability

-Flood AI with requests, which demand more time
-Flood with incorrect classified objects to increase manual work
-Modify a model by retraining it with wrong examples
-Use computing power of an AI model for solving your own tasks

68
Q

Integrity is associated with:

a. Gather System Insights
b. Disable AI System Functionality
c. Modify AI logic

A

c. Modify AI logic

Integrity = Modify AI Logic

69
Q

Which triad is the following:
-Ex. Make autonomous cars, believe that there is a cat on the road, when in fact it is a car.
-2 different ways to interact with a system at the learning or production stage
1) poisoning
2) evasion

a. confidentiality
b. availability
c. integrity

A

c. integrity

This attack targets integrity because the AI's logic is modified so the car thinks it sees a cat when it is really a car.

2 types of integrity (modify ai logic)
1. Poisoning - attackers poison some data in the training dataset
2. Evasion- attackers exploit vulnerabilities of an algorithm by showing modified picture at the production stage

70
Q

Which integrity interaction is this?
________ attackers alter some data in the training dataset

a. poisoning
b. evasion
c. modify ai logic

A

a. poisoning

POISONING - attackers poison / alter some data in the training dataset

An attack on integrity

71
Q

Which integrity interaction is this?
______ attackers exploit vulnerabilities of an algorithm by showing the modified picture at the production stage

a. poisoning
b. evasion
c. modify ai logic

A

b. evasion

EVASION - attackers exploit vulnerabilities of an algorithm by showing the modified picture at the production stage

An attack on integrity

72
Q

_______ a procedure where someone is trying to exploit an ML model by injecting malicious data into the training dataset.

a. poisoning
b. evasion
c. modify ai logic

A

a. poisoning

Poisoning - a procedure where someone is trying to exploit an ML model by injecting malicious data into the training dataset.
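
Example (Python sketch of a simple data-injection poisoning; the scikit-learn calls are real, but the synthetic dataset, classifier choice, and number of injected points are assumptions for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

    # Poisoning sketch: copy some class-0 training points, relabel the copies
    # as class 1, and append them to the training set.
    rng = np.random.default_rng(0)
    src = X_tr[y_tr == 0]
    inject = src[rng.choice(len(src), size=80, replace=False)]
    X_poisoned = np.vstack([X_tr, inject])
    y_poisoned = np.concatenate([y_tr, np.ones(80, dtype=int)])

    poisoned_acc = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned).score(X_te, y_te)
    print(f"clean accuracy:    {clean_acc:.2f}")
    print(f"poisoned accuracy: {poisoned_acc:.2f}")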

73
Q

_________ attacks change the classification boundary while
_________ attacks change input examples

a. Poisoning, Adversarial
b. Adversarial, Poisoning
c. Poisoning, Evasion
d. Evasion, Poisoning

A

a. Poisoning, Adversarial

Poisoning attacks - change the classification boundary WHILE
Adversarial attacks - change input examples

74
Q

True or False

If points are added to the training data, the decision boundary will change

A

True

If points are added to the training data, the decision boundary will change

75
Q

______ attack allows an adversary to modify solely the labels in supervised learning datasets but for arbitrary data points

A. Label modification
B. Poisoning
C. Evasion
D. Data Injection

A

A. Label modification

Label modification attack - allows an adversary (enemy) to modify solely the labels in supervised learning datasets, for arbitrary data points

76
Q

______ An adversary (enemy) does not have access to the training data nor to the learning algorithm, but has the ability to add new data to the training set

A. Label modification
B. Poisoning
C. Data Injection
D. Adversarial

A

C. Data Injection

Data Injection - An adversary (enemy) does not have access to the training data nor to the learning algorithm, but has the ability to add new data to the training set

77
Q

_______ An adversary does not have access to the learning algorithm but has full access to the training data

A. Label modification
B. Data Modification
C. Data Injection
D. Adversarial

A

B. Data Modification

Data modification - An adversary does not have access to the learning algorithm but has full access to the training data.

78
Q

______ An adversary has the ability to meddle with the learning algorithm and such attacks are viewed as logic corruption.

A. Label modification
B. Data Modification
C. Data Injection
D. Logic Corruption

A

D. Logic Corruption

Logic Corruption - An adversary has the ability to meddle with the learning algorithm and such attacks are viewed as logic corruption

79
Q

______ An attacker intends to explore the system such as model or dataset, that can further come in handy.

A. Label modification
B. Data Modification
C. Data Injection
D. Logic Corruption
E. Privacy Attack (Inference Attack)

A

E. Privacy Attack (Inference Attacks)

Privacy Attack - An Attacker intends to explore the system such as Model or dataset, that can further come in handy

80
Q

These attacks are done at the production stage.
These attacks are achievable at training, if the training data is injected, we can learn how the algorithm works based on the given data.
The goal is to break Confidentiality

A. Label modification
B. Data Modification
C. Data Injection
D. Logic Corruption
E. Privacy Attack (Inference Attack)

A

E. Privacy Attack (Inference Attack)

Privacy Attack - An Attacker intends to explore the system such as Model or dataset, that can further come in handy

Characteristics:
These attacks are done at the production stage.
These attacks are achievable at training, if the training data is injected, we can learn how the algorithm works based on the given data.
The goal is to break Confidentiality

81
Q

Type of attacker: Example with particular property was in a dataset.

A. Membership inference
B. Attribute Inference
C. Input Inference
D. Parameter Inference

A

B. Attribute Inference

Attribute inference- Example with particular property was in a dataset.

82
Q

Type of attacker: Particular example was in dataset

A. Membership inference
B. Attribute Inference
C. Input Inference
D. Parameter Inference

A

A. Membership inference

Membership inference- Particular example was in dataset
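
Example (Python sketch of a naive membership-inference test: an overfit model tends to be more confident on records it was trained on, so thresholding its confidence leaks membership; the model, threshold, and synthetic data are assumptions for illustration):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_member, X_outside, y_member, y_outside = train_test_split(X, y, test_size=0.5, random_state=0)

    # Target model deliberately allowed to overfit its training (member) records.
    target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

    conf_members = target.predict_proba(X_member).max(axis=1)    # records seen in training
    conf_outsiders = target.predict_proba(X_outside).max(axis=1)  # records never seen

    # Naive attack: claim "was in the training set" when confidence exceeds a threshold.
    threshold = 0.9
    print("members flagged:    ", (conf_members > threshold).mean())
    print("non-members flagged:", (conf_outsiders > threshold).mean())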

83
Q

Type of attacker: Extract an example from the dataset

A. Membership inference
B. Attribute Inference
C. Input Inference
D. Parameter Inference

A

C. Input Inference

Input Inference - Extract an example from the dataset

84
Q

Type of attacker: Obtain ML model parameters

A. Membership inference
B. Attribute Inference
C. Input Inference
D. Parameter Inference

A

D. Parameter Inference

Parameter Inference - Obtain ML model parameters

85
Q

______ Attack’s main goal is to inject additional behavior in such a way that backdoors operate after retraining the system

A. Poisoning
B. Backdoor
C. Evasion
D. Parameter Inference

A

B. Backdoor

Backdoor - Main goal is to inject additional behavior in such a way that the backdoors operate after retraining the system

86
Q

Why use backdoors?
1. Neural networks are large structures with millions of neurons. Backdoors are needed to make minor changes to a small set of neurons.
2. Production models are trained with tremendous data and computing power. It is impossible for small companies to recreate them, so they usually retrain existing models.
3. Malefactors can hack a server that stores public models and upload their own model containing a backdoor. The NN model will keep the backdoor until the model is retrained.
A

Why use backdoors?
1. Neural networks are large structures with millions of neurons. Backdoors are needed to make minor changes to a small set of neurons.
2. Production models are trained with tremendous data and computing power. It is impossible for small companies to recreate them, so they usually retrain existing models.
3. Malefactors can hack a server that stores public models and upload their own model containing a backdoor. The NN model will keep the backdoor until the model is retrained.
87
Q

_____ attacks are lesser-known than adversarial attacks

a. listed
b. backdoor
c. adversarial
d. parameter

A

a. listed

Listed attacks are lesser-known than adversarial attacks

88
Q

Which industry is one of the most critical in terms of AI attacks?

a. Transportation
b. Energy
c. Entertainment
d. Oil and Gas

A

a. Transportation

The transportation industry is the most critical because AI is taking this industry by storm and any error related to security may affect human lives

89
Q

An attack on __ is an attack where a hacker’s aim is to get information on ML Models insights

a. safety
b. availability
c. integrity
d. confidentiality

A

d. confidentiality

confidentiality - an attack where a hacker’s aim is to get information on ML Models insights

90
Q

How is an attack subtype called if an adversary does not have any access to the training data as well as to the learning algorithm but instead it has an ability to add new data to the training set?

a. Label modification
b. Data injection
c. Logic corruption
d. Data modification

A

b. Data injection

Data injection - adversary ability to add new data to the training set

91
Q

What algorithms can be used for detecting poisoning attacks?

a. clustering
b. decision trees
c. neural networks
d. KNN

A

a. Clustering

Clustering is used to detect poisoning attacks.

92
Q

Is parameter inference privacy attack implemented in CypherCat?

True / False

A

False

Parameter Inference Privacy Attack is not implemented in Cypher Cat

93
Q

What algorithm is required for backdoor detection?

a. classification
b. outlier detection
c. segmentation
d. regression

A

b. outlier detection

94
Q

What are 3 things you need to consider when you want to analyze a security of AI

a. architecture, algorithm, and dataset
b. architecture, SVM, and dataset
c. training data, algorithm, dataset
d. none of the above

A

a. architecture, algorithm, and dataset

3 things to consider when analyze security
1. Architecture
2. Algorithm
3. Dataset

95
Q

Linear Regression
SVM
MLP
CNN (Convolution Neural Network)

These are all examples of
a. algorithm
b. dataset
c. architecture

A

c. architecture

Linear Regression
SVM
MLP
CNN (Convolution Neural Network)

96
Q

_______ is a type of architecture that has multiple layers of neural networks, each is responsible for its own set of features

a. algorithm
b. dataset
c. architecture

A

c. architecture

a type of architecture that has multiple layers of neural networks, each is responsible for its own set of features

97
Q

Which type of algorithm is the following:
-simple architecture
-slow for training
-model is large
-avoid in practice

a. VGG (Visual Geometry Group)
b. ResNet (Residual networks)
c. Inception

A

a. VGG (Visual Geometry Group)

VGG (Visual Geometry Group)
-simple architecture
-slow for training
-model is large
-avoid in practice

98
Q

Which type of algorithm is the following:
-deep neural network
-addresses the problem of vanishing gradients

a. VGG (Visual Geometry Group)
b. ResNet (Residual networks)
c. Inception

A

b. ResNet (Residual Networks)

an algorithms
-deep neural network
-addressed the problem of vanishing gradients

99
Q

Which type of algorithm is the following:
-developed by Google
-4 versions available
-Inception V3 and Inception V4 (image classification)

a. VGG (Visual Geometry Group)
b. ResNet (Residual networks)
c. Inception

A

c. Inception
-developed by Google
-4 versions available
-Inception V3 and Inception V4 (image classification)
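
Example (Python sketch: torchvision ships reference implementations of the architectures named above; this assumes a recent torchvision, and weights=None avoids downloading pretrained parameters):

    import torchvision.models as models

    vgg = models.vgg16(weights=None)               # VGG: simple, but large and slow to train
    resnet = models.resnet50(weights=None)         # ResNet: residual connections vs. vanishing gradients
    inception = models.inception_v3(weights=None)  # Inception v3: image classification

    print(sum(p.numel() for p in vgg.parameters()))     # ~138M parameters
    print(sum(p.numel() for p in resnet.parameters()))  # ~25M parameters: why VGG is called "large"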

100
Q

Which type of dataset is the following:
-MNIST / CIFAR: play while practicing
-MNIST / CIFAR: run tests faster
-ImageNet: needs a lot of memory on your computer

A
101
Q

Which type of dataset would you use based on the following task:
“Want to develop a production based solution and Attacks / Defenses.”

a. MNIST
b. CIFAR
c. ImageNet

A

c. ImageNet

ImageNet - the dataset that has solutions for attacks / defenses; also the way to go if you want to develop a production-based solution.

102
Q

Which type of dataset would you use based on the following tasks:
“run tests faster”, “play while practicing”

a. MNIST
b. CIFAR
c. ImageNet
d. both a and b

A

d. both a and b

Both the MNIST and CIFAR datasets have the advantages of running tests faster and letting you play while practicing.

103
Q

Which type of dataset would you use based on the following task:
“needs a lot of memory”

a. MNIST
b. CIFAR
c. ImageNet

A

c. ImageNet

A disadvantage of ImageNet is that you will need a lot of memory.
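
Example (Python sketch of loading the two small datasets with torchvision; they download on first use, while ImageNet must be obtained manually and is far larger; the root directory is an assumption):

    from torchvision import datasets, transforms

    to_tensor = transforms.ToTensor()

    # Small datasets suited to quick experiments; downloaded on first use.
    mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
    cifar = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)

    print(len(mnist), mnist[0][0].shape)  # 60000 images of shape (1, 28, 28)
    print(len(cifar), cifar[0][0].shape)  # 50000 images of shape (3, 32, 32)
    # torchvision.datasets.ImageNet requires a manual download and far more memory/disk.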

104
Q

What questions must be answered about adversarial attacks?

a. goals
b. perturbation and iterations
c. environment and constraints
d. knowledge
e. all the above

A

e. all above

Questions that need to be answered about adversarial attacks to obtain the most information:
- Attacker's goal
- Perturbation
- Environment
- Iterations
- Constraints
- Knowledge

105
Q

Which Adversarial Attack Goal is the following:
“Change a class to a particular target”

a. targeted misclassification
b. source / target misclassification
c. confidence reduction
d. misclassification
e. all above

A

a. targeted misclassification

Targeted misclassification - “Change a class to a particular target”

106
Q

Which Adversarial Attack Goal is the following: “Change a class without any specific target”

a. targeted misclassification
b. source / target misclassification
c. confidence reduction
d. misclassification
e. all above

A

d. misclassification

“Change a class without any specific target”

107
Q

Which Adversarial Attack Goal is the following: “Don't change a class but impact the confidence greatly”

a. targeted misclassification
b. source / target misclassification
c. confidence reduction
d. misclassification
e. all above

A

c. confidence reduction

Confidence reduction - “Don't change a class but impact the confidence greatly”

108
Q

Which Adversarial Attack Goal is the following: “Change a class without any specific target”

a. targeted misclassification
b. source / target misclassification
c. confidence reduction
d. misclassification
e. all above

A

d. misclassification

misclassification - Change a class without any specific target”

109
Q

Which Adversarial Attack Perturbation is the following:
“Adversarial perturbation can only be applied to 1 source”

a. individual
b. universal

A

a. individual

110
Q

Which Adversarial Attack Perturbation is the following:
“Adversarial perturbation can be applied to many sources”

a. individual
b. universal

A

b. universal

111
Q

Which Adversarial Attack Perturbation is the following:
“Adversarial attack can only be applied to digital world”

a. individual
b. universal
c. digital
d. physical

A

c. digital

Example: an attacker has a digital photo (profile picture); with small perturbations to multiple pixels they can fool facial recognition in the digital world.

112
Q

Which Adversarial Attack Perturbation is the following:
“Adversarial attack applied to physical world”

a. individual
b. universal
c. digital
d. physical

A

d. physical

A camera takes a photo and sends it to the ML system. The camera quality is insufficient, and the image is smoothed before being sent to the system. This smoothing destroys the adversarial perturbation. This shows that what works in the digital world cannot always be reproduced in the physical world.

113
Q

Single-step attacks require just one step.
What are examples of single-step attacks?

a. FGSM
b. RSSA
c. BIM
d. Both A and B

A

d. Both A and B

FGSM and RSSA are both single step attacks.
(Fast and less accurate)

114
Q

Iterative attacks require multiple iterations.
What are examples of Iterative attacks?

a. BIM
b. DeepFool
c. FGSM
d. both A and B

A

d. both A and B

BIM and DeepFool both are iterative attacks require multiple iterations. (More accurate but very slow)

115
Q

________ This Adversarial Attack Constraint - measures the Euclidean distance between adversarial example and the original sample

a. L∞
b. L2
c. L1
d. L0

A

Adversarial Attack Constraint

b. L2

L2 - measures the Euclidean distance between adversarial example and the original sample

116
Q

_______ This Adversarial Attack Constraint -measures distance between 2 points (number of dimensions that have different values) and number of pixels changed)

a. L∞
b. L2
c. L1
d. L0

A

Adversarial Attack Constraint

d. L0

L0 - measures the distance between 2 points as the number of dimensions that have different values (i.e., the number of pixels changed)

117
Q

______ This Adversarial Attack Constraint - Distance is equivalent to the sum of the absolute value of each dimension, which is also known as the Manhattan distance

a. L∞
b. L2
c. L1
d. L0

A

Adversarial Attack Constraint
c. L1

L1 - Distance is equivalent to the sum of the absolute value of each dimension, which is also known as the Manhattan distance

118
Q

______ This Adversarial Attack Constraint - Denotes the maximum change for all pixels in adversarial examples

a. L∞
b. L2
c. L1
d. L0

A

Adversarial Attack Constraint

a. L∞

L∞ - denotes the maximum change for all pixels in adversarial examples
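
Example (Python sketch computing the four constraints above for a perturbation delta = x_adv - x; the numbers are made up for illustration):

    import numpy as np

    x = np.array([0.10, 0.20, 0.30, 0.40])      # original sample
    x_adv = np.array([0.10, 0.25, 0.30, 0.32])  # adversarial example
    delta = x_adv - x                           # the perturbation

    l0 = np.count_nonzero(delta)      # L0: how many features/pixels changed
    l1 = np.abs(delta).sum()          # L1: Manhattan distance
    l2 = np.sqrt((delta ** 2).sum())  # L2: Euclidean distance
    linf = np.abs(delta).max()        # L-infinity: largest single change

    print(l0, l1, l2, linf)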

119
Q

_______ Everything about the network is known including all weights and all data on which this network was trained

a. White-box
b. Grey-box
c. Black-box

A

a. White-box

White-box- Everything about the network is known including all weights and all data on which this network was trained

120
Q

______ An attacker may know details about the dataset or the type of neural network, its structure, the number of layers, and so on

a. White-box
b. Grey-box
c. Black-box

A

b. Grey-box

An attacker may know details about the dataset or the type of neural network, its structure, the number of layers, and so on

121
Q

________ An attacker can only send information to the system and obtain a simple result about a class

a. White-box
b. Grey-box
c. Black-box

A

c. Black-box

An attacker can only send information to the system and obtain a simple result about a class
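
Example (Python sketch of the black-box setting: the attacker's only interface is a query function returning a class label; the victim model and synthetic data are assumptions used just to make the sketch runnable):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=300, n_features=5, random_state=0)
    victim = SVC().fit(X, y)  # internals hidden from the attacker

    def black_box(query):
        """All the attacker receives: a single class label per query."""
        return int(victim.predict(query.reshape(1, -1))[0])

    probe = np.zeros(5)  # the attacker can only probe input-by-input
    print(black_box(probe))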

122
Q

Steps on “How to Choose an Attack”

i. Understand Knowledge Level + Goal
ii. Understand Constraint + Environment
iii. Iterations + Perturbations

a. 1,2,3
b. 3,2,1
c. 2,1,3

A

Steps on “How to Choose an Attack”
a. 1,2,3

i. Understand Knowledge Level + Goal
ii. Understand Constraint + Environment
iii. Iterations + Perturbations

123
Q

Attack quality depends on AI model hyperparameters

True
False

A

True

AI Attack quality depends on AI model hyperparameters such as, number of layers, activation functions etc.

124
Q

Iterative attacks are better than single-step attacks because they are faster

True
False

A

False

Iterative attacks are slower than Single-Step attacks

125
Q

FGSM is faster than DeepFool

True
False

A

True

FGSM is faster than DeepFool

126
Q

Grey-box attack is an attack where an attacker doesn’t know anything about the model and the dataset

True
False

A

False

A grey-box attack is an attack where the attacker knows a little about the model and the dataset

127
Q

Decision-based attacks are harder than score-based ones

True
False

A

True

Decision-based attacks are harder than the score-based ones because they are based on less information about the system

128
Q

What are the 4 different ways to measure attacks?

  1. misclassification
  2. imperceptibility
  3. robustness
  4. speed
A

misclassification
imperceptibility
robustness
speed

129
Q

What are one of the ways to measure for attacks:
“how good the attack is against all examples”

a. misclassification
b. imperceptibility
c. robustness
d. speed

A

a. misclassification

130
Q

What are one of the ways to measure for attacks:
“how hard is it to recognize an attack”

a. misclassification
b. imperceptibility
c. robustness
d. speed

A

b. imperceptibility

“how hard is it to recognize an attack”

131
Q

What are one of the ways to measure for attacks:
“how resistant to modification this adversarial example is”

a. misclassification
b. imperceptibility
c. robustness
d. speed

A

c. robustness
“how resistant to modification this adversarial example is”

132
Q

What are one of the ways to measure for attacks:
“how fast the computation is”

a. misclassification
b. imperceptibility
c. robustness
d. speed

A

d. speed
“how fast the computation is”

133
Q

What are the 3 measures of Misclassification?
1. Misclassification Ratio (MR)
2. Average Confidence of Adversarial Class (ACAC)
3. Average Confidence of True Class (ACTC)

A

The 3 measures of Misclassification:
1. Misclassification Ratio (MR)
2. Average Confidence of Adversarial Class (ACAC)
3. Average Confidence of True Class (ACTC)

134
Q

Which Misclassification measure is the following:
“the percentage of adversarial examples, which are successfully misclassified as relating to an arbitrary class”

a. Misclassification ratio (MR)
b. Average Confidence of Adversarial Class (ACAC)
c. Average Confidence of True Class (ACTC)

A

a. Misclassification ratio (MR)

“the percentage of adversarial examples, which are successfully misclassified as relating to an arbitrary class”

135
Q

Which Misclassification measure is the following:
“The average prediction confidence toward the incorrect class”

a. Misclassification ratio (MR)
b. Average Confidence of Adversarial Class (ACAC)
c. Average Confidence of True Class (ACTC)

A

Misclassification Measure

b. Average Confidence of Adversarial Class (ACAC)
The average prediction confidence toward the incorrect class”

136
Q

Which Misclassification measure is the following:

“Averaging the prediction confidence of true classes for AEs, ACTC is used to further evaluate the extent to which the attacks escape from the ground truth”

a. Misclassification ratio (MR)
b. Average Confidence of Adversarial Class (ACAC)
c. Average Confidence of True Class (ACTC)

A

c. Average Confidence of True Class (ACTC)

“Averaging the prediction confidence of true classes for AEs, ACTC is used to further evaluate the extent to which the attacks escape from the ground truth”
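
Example (Python sketch computing MR, ACAC, and ACTC from a model's predicted probabilities on adversarial examples; the probability values and true classes are made up for illustration):

    import numpy as np

    # Each row: predicted class probabilities for one adversarial example.
    probs = np.array([
        [0.10, 0.80, 0.10],  # true class 0 -> predicted 1 (attack succeeded)
        [0.60, 0.30, 0.10],  # true class 0 -> predicted 0 (attack failed)
        [0.05, 0.15, 0.80],  # true class 1 -> predicted 2 (attack succeeded)
    ])
    true_classes = np.array([0, 0, 1])
    pred_classes = probs.argmax(axis=1)

    success = pred_classes != true_classes
    mr = success.mean()                                  # Misclassification Ratio
    acac = probs[success, pred_classes[success]].mean()  # avg confidence in the wrong class
    actc = probs[success, true_classes[success]].mean()  # avg confidence left in the true class
    print(f"MR={mr:.2f}  ACAC={acac:.2f}  ACTC={actc:.2f}")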

137
Q

What are the 3 measures of Imperceptibility?

  1. Average Lp Distortion (ALDp)
  2. Average Structural Similarity (ASS) [image specific]
  3. Perturbation Sensitivity Distance (PSD) [image-specific]
A

The 3 measures of Imperceptibility:

  1. Average Lp Distortion (ALDp)
  2. Average Structural Similarity (ASS) [image specific]
  3. Perturbation Sensitivity Distance (PSD) [image-specific]
138
Q

Which Imperceptibility measure is the following:

“As the average normalized Lp distortion for all successful adversarial examples”

a. Average Lp Distortion (ALDp)
b. Average Structural Similarity (ASS) [image-specific]
c. Perturbation Sensitivity Distance (PSD) [image-specific]

A

Measure of Imperceptibility:

a. Average Lp Distortion (ALDp)-

“As the average normalized Lp distortion for all successful adversarial examples”

139
Q

Which Imperceptibility measure is the following:

“Structural similarity is considered to be more consistent with human visual perception than Lp similarity”

a. Average Lp Distortion (ALDp)
b. Average Structural Similarity (ASS) [image-specific]
c. Perturbation Sensitivity Distance (PSD) [image-specific]

A

A measure of Imperceptibility

b. Average Structural Similarity (ASS) [image-specific]

“Structural similarity is considered to be more consistent with human visual perception than Lp similarity”

140
Q

Which Imperceptibility measure is the following:

“Based on the contrast masking theory, this measure is proposed to evaluate human perception of perturbations”

a. Average Lp Distortion (ALDp)
b. Average Structural Similarity (ASS) [image-specific]
c. Perturbation Sensitivity Distance (PSD) [image-specific]

A

A measure of Imperceptibility

c. Perturbation Sensitivity Distance (PSD) [image-specific]

“Based on the contrast masking theory, this measure is proposed to evaluate human perception of perturbations”

141
Q

What are the 3 measures of Robustness?

  1. Noise Tolerance Estimation (NTE)
  2. Robustness to Gaussian Blur (RGB)
  3. Robustness to Image Compression (RIC) [image-specific]
A

The 3 measures of Robustness:

  1. Noise Tolerance Estimation (NTE)
  2. Robustness to Gaussian Blur (RGB)
  3. Robustness to Image Compression (RIC) [image-specific]
142
Q

Which Robustness measure is the following:

“Noise tolerance reflects the amount of noise that AEs can tolerate while keeping the misclassified class unchanged”

a. Noise Tolerance Estimation (NTE)
b. Robustness to Gaussian Blur (RGB)
c. Robustness to Image Compression (RIC) [image-specific]

A

a. Noise Tolerance Estimation (NTE)

“Noise tolerance reflects the amount of noise that AEs can tolerate while keeping the misclassified class unchanged”

143
Q

Which Robustness measure is the following:

“Gaussian Blur is widely used as a pre-processing stage in computer vision algorithms to reduce noise in images”

a. Noise Tolerance Estimation (NTE)
b. Robustness to Gaussian Blur (RGB)[image-specific]
c. Robustness to Image Compression (RIC) [image-specific]

A

Robustness measure:

b. Robustness to Gaussian Blur (RGB) [image-specific]
“Gaussian Blur is widely used as a pre-processing stage in computer vision algorithms to reduce noise in images”

144
Q

Which Robustness measure is the following:

“Image-specific measure similar to RGB”

a. Noise Tolerance Estimation (NTE)
b. Robustness to Gaussian Blur (RGB)[image-specific]
c. Robustness to Image Compression (RIC) [image-specific]

A

Robustness measure:

“Image-specific measure similar to RGB”

c. Robustness to Image Compression (RIC) [image-specific]

145
Q

5 Measures of Speed:
-Single CPU
-Single GPU
-Parallel CPU
-Parallel GPU
-Memory consumption

A

5 Measures of Speed:
-Single CPU
-Single GPU
-Parallel CPU
-Parallel GPU
-Memory consumption

146
Q

What are the steps to choose Metrics for Better Attacks?

i. Misclassification
ii. Imperceptibility
iii. Robustness

a. 1,2,3
b. 2,1,3
c. 3,2,1
d. none

A

Steps to choose Metrics for Better Attacks

a. 1,2,3

i. Misclassification
ii. Imperceptibility
iii. Robustness

147
Q

_______ attacks produce much smaller changes and bypass defensive distillation

a. advanced attacks
b. list attacks
c. listed attacks
d. adversarial attacks

A

a. advanced attacks

advanced attacks produce much smaller changes and bypass defensive distillation

148
Q

Which attack provides 3 different attack options (L0, L2, L∞) and also handles box constraints, using the Adam optimizer?

a. CW Attack
b. L-BFGS
c. FGSM

A

a. CW Attack

CW Attack logic
- provides 3 different attack options: (L0, L2, L∞)
- also handles box constraints, using the Adam optimizer

149
Q

Why use DeepFool Attack over L-BFGS, FGSM, and CW Attack?

a. CW attack is slow
b. L-BFGS and FGSM perturbations are big
c. We need faster solutions with smaller perturbations.
d. All above are true why need DeepFool

A

d. All above are true why need DeepFool

Need DeepFool Attack
1. L-BFGS and FGSM perturbations are big
2. CW Attack is slow
3. Need faster solutions with smaller perturbations

150
Q

Which attack was the first method specifically for deep networks?

a. DeepFool
b. CW
c. L-BFGS
d. FGSM

A

a. DeepFool

DeepFool- attack was the first method specifically for deep networks

151
Q

What is a big advantage to DeepFool?

a. Faster than CW
b. Finds the closest decision boundary to a given X
c. step by steps calculate the best pixels
d. none of the above

A

b. Finds the closest decision boundary to a given X

The biggest advantage to DeepFool is that
DeepFool -Finds the closest decision boundary to a given X

steps:
1. Step by step calculate the best pixels to change
2. Algorithm perturbs the image by a small vector
3. Vector takes the resulting image to the boundary of the polyhedron that is obtained by linearizing the boundaries of the image region.

152
Q

_______ attack is a universal approach to analysis of model security against adversarial examples

a. PGD (Projected Gradient Descent) attack
b.DeepFool
c. CW
d. L-BFGS

A

a. PGD attack

PGD attack is a universal approach to analysis of model security against adversarial examples

153
Q

Among white-box defense that appeared in ICLR-2018 and CVPR-2018 _______ was the only defense that has not been successfully attacked so far.

a. PGD adversarial
b. Deep Fool
c. CW
d. L-BFGS

A

a. PGD adversarial

The only defense that has not been successfully attacked so far.

154
Q

_____ is a variation of the BIM method, but instead of directly clipping X_adv + δ(X_adv) to [X_min, X_max], it performs a projection of δ(X_adv) onto the L_p ball with radius ε_total.

a. PGD (Projected Gradient Descent)
b. BIM
c. PGD Adversarial

A

a. PGD (Projected Gradient Descent)

PGD is a variation of the BIM method, but instead of directly clipping X_adv + δ(X_adv) to [X_min, X_max], it performs a projection of δ(X_adv) onto the L_p ball with radius ε_total.
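
Example (minimal L-infinity PGD sketch in Python: each step is an FGSM-style move, after which the accumulated perturbation is projected back onto the eps-ball and the input kept in a valid range; the toy model, step size, and eps are assumptions):

    import torch
    import torch.nn as nn

    def pgd_linf(model, x, y, eps=0.1, alpha=0.02, steps=10):
        """Iterate FGSM-style steps, then project delta back onto the eps-ball around x."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = nn.functional.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()        # gradient ascent step
                delta.clamp_(-eps, eps)                   # projection onto the L-inf ball
                delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta a valid input
            delta.grad.zero_()
        return (x + delta).detach()

    # Illustrative toy model and input.
    model = nn.Sequential(nn.Linear(16, 10))
    x, y = torch.rand(1, 16), torch.tensor([3])
    print((pgd_linf(model, x, y) - x).abs().max())  # stays within eps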

155
Q

_____ wants the closest similarity to another class with minimum perturbation for a source input

a. PGD (Projected Gradient Descent)
b. BIM
c. PGD Adversarial
d. PGD

A

d. PGD (Projected Gradient Descent)

-wants the closest similarity to another class with minimum perturbation for a source input

156
Q

_____ goal is to find model parameters so that the “adversarial loss” given by inner attack problem is minimized.

a. PGD (Projected Gradient Descent)
b. BIM
c. PGD Adversarial
d. PGD

A

d. PGD

goal is to find model parameters so that the “adversarial loss” given by inner attack problem is minimized.

157
Q

BIM attack is better than FGSM because:

a. its faster
b. its more precise
c. use less resources
d. can be optimized to work on GPU

A

b. its more precise

Both BIM and FGSM work on GPU

158
Q

CW attack was invented to

a. bypass adversarial training defense
b. invent the fastest attack
c. use less resources
d. bypass defensive distillation

A

d. bypass defensive distillation

The Main Idea of CW attack was created to bypass defensive distillation protection

159
Q

Which attack is the most similar to DeepFool by the Imperceptibility metric?

a. BIM
b. FGSM
c. CW
d. PGD

A

c. CW

BIM is different from PGD according to the Imperceptibility metrics.

160
Q

Why is PGD better than BIM in practice?

a. can find same Adversarial examples much faster
b. always more precise
c. more robust
d. faster

A

a. can find same Adversarial examples much faster

Note: BIM usually calculates attacks faster than PGD.

161
Q

Which attack has the worst robustness?

a. FGSM
b. CW
c. BIM
d. PGD

A

b. CW

FGSM is not the best attack but robustness is quite ok.

162
Q

What is the best approach to protect AI solutions?

a. PPDR (predict, prevent, detect, respond)
b. PPRD (predict, prevent, respond, detect)
c. RDPP (respond, detect, predict, prevent)

A

a. PPDR (predict, prevent, detect, respond)

163
Q

Out of the PPDR Model which part uses the following information: “Protects a model production - testing and verification?”

a. predict
b. prevent
c. respond
d. detect

A

a. predict

“Protects a model production - testing and verification”

164
Q

Out of the PPDR Model which part uses the following information: “Preventing attacks at the production stage by different model modifications?”

a. predict
b. prevent
c. respond
d. detect

A

b. prevent

“Preventing attacks at the production stage by different model modifications”

165
Q

Out of the PPDR Model which part uses the following information: “Active reaction to attacks such as modification of model responses?”

a. predict
b. prevent
c. respond
d. detect

A

c. respond

“Active reaction to attacks such as modification of model responses”

166
Q

In the PPDR model, which stage is described by the following: “If an input is adversarial, don’t let this data into a model”?

a. predict
b. prevent
c. respond
d. detect

A

d. detect
“If an input is adversarial, don’t let this data into a model.”

167
Q

Which approach in PREDICTION is described by the following:
“This sub-category collects all defenses that somehow modify the training procedure to minimize the chances of potential attacks”?

a. modified training
b. verification

A

PREDICTION Method

a. modified training

This sub-category collects all defenses that somehow modify the training procedure to minimize the chances of potential attacks.

168
Q

Which approach in PREDICTION is described by the following:
“This sub-category is NOT an actual defense but a health check that tries to explore all the potential ways to attack a model and, as a result, presents the worst-case scenarios”?

a. modified training
b. verification

A

PREDICTION Method

b. verification

Verification is NOT an actual defense but a health check that tries to explore all the potential ways to attack a model and, as a result, presents the worst-case scenarios.

169
Q

Which approach in PREVENTION is described by the following:

“This sub-category modifies an input in order to corrupt or smooth objects (compression, purification, randomization, and many other approaches)”?

a. modified input
b. modified model

A

PREVENTION Method

a. modified input

“This sub-category modifies an input in order to corrupt or smooth objects (compression, purification, randomization, and many other approaches).”

170
Q

Which approach in PREVENTION is described by the following:

“Modifying an ML model in order to prevent attacks (changing hyperparameters, activation functions, layers, or combining multiple models together)”?

a. modified input
b. modified model

A

PREVENTION method

b. modified model

“Modifying an ML model in order to prevent attacks (changing hyperparameters, activation functions, layers, or combining multiple models together).”

171
Q

Which approach in DETECTION is described by the following:

“Detecting potential attacks on ML models by learning the initial distribution”?

a. Supervised Detection
b. Unsupervised Detection

A

DETECTION method

a. Supervised Detection

“Detecting potential attacks on ML models by learning the initial distribution.”

172
Q

Which approach in DETECTION is described by the following:

“(1) Detecting potential attacks on ML models without initial training; (2) learning the behavior of all inputs and detecting outliers”?

a. Supervised Detection
b. Unsupervised Detection

A

DETECTION method

b. Unsupervised Detection

“(1) Detecting potential attacks on ML models without initial training; (2) learning the behavior of all inputs and detecting outliers.”
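As a concrete (hypothetical) illustration of unsupervised detection, an off-the-shelf outlier detector such as scikit-learn's IsolationForest can be fit on the inputs a model normally sees and used to flag anomalous ones, with no labeled adversarial examples at all. The synthetic data below is a placeholder.

```python
# Unsupervised detection sketch: flag anomalous inputs with an outlier detector.
# No labeled adversarial data is used; the detector only learns what "normal"
# inputs look like. The feature vectors here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_inputs = rng.normal(0.0, 1.0, size=(1000, 20))    # placeholder clean traffic
suspicious_inputs = rng.normal(6.0, 1.0, size=(5, 20))   # placeholder outliers

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_inputs)

# predict() returns -1 for outliers (potentially adversarial) and +1 for inliers.
print(detector.predict(suspicious_inputs))
print(detector.predict(normal_inputs[:5]))
```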

173
Q

Which approach in RESPONSE is described by the following:

“Detecting outliers and deleting them from the training data in order to save the model from retraining and poisoning attacks”?

a. Retraining
b. Counterattack

A

RESPONSE METHOD

a. Retraining

“Detecting outliers and deleting them from the training data in order to save the model from retraining and poisoning attacks.”

174
Q

Which approach in RESPONSE is described by the following:

“Responding to potential attacks by detecting attack attempts and replying in such a way that attacks will continue heading in the wrong direction”?

a. Retraining
b. Counterattack

A

RESPONSE METHOD

b. Counterattack

“Responding to potential attacks by detecting attack attempts and replying in such a way that attacks will continue heading in the wrong direction”

175
Q

In what priority order should you detect and measure?

a. predict, prevent, detect
b. detect, prevent, predict
c. prevent, detect, predict

A

a. predict, prevent, detect

176
Q

Adversarial training, regularization, and distillation are examples of which method:

a. modified training
b. modified model
c. modified input
d. none of the above

A

a. modified training

PREDICTION measure
Examples: adversarial training, regularization, distillation

Pro/con: very time consuming

177
Q

Reconstruction, Compression, and Purification
are examples of which method:

a. modified training
b. modified model
c. modified input
d. none of the above

A

c. modified input

PREVENTION measure

Examples: reconstruction, compression, and purification

Pro/con: very good but application-specific
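A small sketch of a modified-input defense in the compression/purification spirit: squeeze the input (bit-depth reduction plus a median filter) before it ever reaches the model. The squeezing levels and the random image are placeholder assumptions.

```python
# Modified-input defense sketch: "purify" the input before classification.
# Bit-depth reduction and median filtering are simple examples of the
# compression/purification idea; the parameters below are placeholders.
import numpy as np
from scipy.ndimage import median_filter

def squeeze_input(image, bits=4, filter_size=2):
    """Return a smoothed, reduced-precision copy of an image with values in [0, 1]."""
    levels = 2 ** bits - 1
    reduced = np.round(image * levels) / levels       # bit-depth reduction
    return median_filter(reduced, size=filter_size)   # local smoothing

x = np.random.rand(28, 28)      # placeholder image
x_defended = squeeze_input(x)
# model.predict(x_defended) would then run on the purified input (hypothetical model).
```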

178
Q

Binary classifier and Additional Output
are examples of which method:

a. add-on detection
b. modified model
c. modified input
d. none of the above

A

a. add-on detection

DETECTION measure

Examples: binary classifier and additional output

Pro/con: very diverse with respect to quality and speed
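A hypothetical add-on detection sketch: a separate binary classifier is trained to distinguish clean from adversarial inputs and is run in front of the main model. The features and labels below are synthetic placeholders standing in for real clean/adversarial examples.

```python
# Add-on detection sketch: a binary classifier decides whether an input looks
# clean (0) or adversarial (1) before the main model is allowed to answer.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 20))
adversarial = rng.normal(0.5, 1.5, size=(500, 20))

X = np.vstack([clean, adversarial])
y = np.concatenate([np.zeros(500), np.ones(500)])

detector = LogisticRegression(max_iter=1000).fit(X, y)

def guarded_predict(model_predict, x):
    """Forward x to the real model only if the detector calls it clean."""
    if detector.predict(x.reshape(1, -1))[0] == 1:
        return None   # reject or flag the input instead of classifying it
    return model_predict(x)
```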

179
Q

Adversarial training is an example of which PPDR stage?

a. predict
b. prevent
c. detect
d. respond

A

a. predict

Adversarial training is an example of a prediction measure.

180
Q

What is the most model-specific defense?

a. Verification
b. Input modification
c. Detection
d. Model modification

A

d. Model modification

Model modification is the most model-specific defense.

181
Q

What is the best adversarial training defense among those tested in this video?

a. NAT
b. EAT
c. PAT
d. EIT

A

b. EAT

182
Q

What is the WORST metric for modified input defense?

a. CRS
b. CRR
c. CAV
d. CCV

A

c. CAV

CAV - the worst metric for modified input defense

183
Q

Which defense has CCV = 0?

a. NAT
b. Thermometer Encoding
c. EIT
d. Region-based Classification

A

d. Region-based Classification

The RC (Region-based Classification) defense shows the minimum CCV rate.

184
Q

In what order should you perform the following steps to start an AI security project?

1. Select attacks
2. Select defenses
3. Test attacks vs. defenses

a. 1, 2, 3
b. 3, 2, 1
c. 2, 1, 3
d. 2, 3, 1

A

Steps to start an AI security project:

a. 1, 2, 3

1. Select attacks
2. Select defenses
3. Test attacks vs. defenses

185
Q

How do we know which attack to run?

- Which application are you targeting?
- What task will it solve?
- What is the algorithm category?
- What is the attacker’s goal? etc.

A

We know which attack to run by asking:

- Which application are you targeting?
- What task will it solve?
- What is the algorithm category?
- What is the attacker’s goal? etc.

186
Q

How do we know which defense to run?

- Which attack are you targeting?
- By what category?
- What is the algorithm category?
- What are the restrictions?

A

We know which defense to run by asking:

- Which attack are you targeting?
- By what category?
- What is the algorithm category?
- What are the restrictions?

187
Q

Combining application + defense mechanism

- Don’t rely on only one defense; use an ensemble defense
- Use multiple datasets
- Use multiple hyperparameters
- Use multiple attacks

A

Combine testing: application + defense mechanism

- Don’t rely on only one defense; use an ensemble defense
- Use multiple datasets
- Use multiple hyperparameters
- Use multiple attacks

188
Q

True or False:
Face recognition can be fooled with the help of special glasses.

A

True

Face recognition can be fooled with the help of special glasses.

189
Q

____ is the way we read or hear a language.

A

speech perception

Speech perception is the way we read or hear a language.

190
Q

What is the first step in an AI security project?

a. Identify the AI object, task, and threats
b. Choose attacks
c. Choose defenses
d. Calculate metrics

A

a. Identify the AI object, task, and threats

191
Q

How can the AI backdoor problem be used for good?

a. captcha protection
b. watermarks
c. password protection
d. privacy protection

A

b. watermarks

Reasoning:
Backdoors CAN be used for watermarks
Backdoors cannot be used for privacy protection
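A hypothetical sketch of the backdoor-as-watermark idea: the owner stamps a secret trigger onto a small fraction of the training data with a chosen label, and later proves ownership by checking that a suspect model still obeys the trigger. The trigger pattern, label, and data handling below are placeholder assumptions.

```python
# Backdoor-as-watermark sketch: poison a small set of training inputs with a
# secret trigger so the owner can later verify a model by its trigger response.
# Model, trigger pattern, and data are all placeholders.
import torch

trigger = torch.zeros(1, 28, 28)
trigger[:, -4:, -4:] = 1.0       # secret corner patch (placeholder)
watermark_label = 7              # label the trigger should force (placeholder)

def add_watermark(x_batch, y_batch, fraction=0.05):
    """Stamp the trigger onto a small fraction of a batch and relabel it."""
    n = max(1, int(fraction * len(x_batch)))
    x_batch, y_batch = x_batch.clone(), y_batch.clone()
    x_batch[:n] = torch.clamp(x_batch[:n] + trigger, 0.0, 1.0)
    y_batch[:n] = watermark_label
    return x_batch, y_batch

def verify_ownership(model, x_probe):
    """Ownership check: how often does the suspect model obey the secret trigger?"""
    triggered = torch.clamp(x_probe + trigger, 0.0, 1.0)
    preds = model(triggered).argmax(dim=1)
    return (preds == watermark_label).float().mean().item()
```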

192
Q

How many articles on AI security methods have been published on arXiv so far?

A

1000+

193
Q

What is the last step in an AI security project?

a. defense testing
b. attack testing
c. metric evaluation
d. threat modeling

A

c. metric evaluation