ML Security Flashcards

(193 cards)

1
Q

-Science of making things smart or human tasks performed by machines (example: visual recognition, Natural Language processing)

A. Artificial Intelligence (AI)
B. Machine Learning (ML)
C. Deep Learning (DL)

A

A. Artificial Intelligence (AI) - The science of making things smart, or human tasks performed by machines (examples: visual recognition, natural language processing). The ability of machines to perform human tasks.

2
Q

-One of many approaches to AI that uses a system capable of learning from experience. Makes decisions based on data rather than algorithm.

A. Artificial Intelligence (AI)
B. Machine Learning (ML)
C. Deep Learning (DL)

A

B. Machine Learning (ML)

-One of many approaches to AI that uses a system capable of learning from experience. Makes decisions based on data rather than algorithm.

3
Q

-A set of techniques for implementing machine learning that recognizes patterns of patterns. (for example: image recognition). Identifies object boundary, type, structure.

A. Artificial Intelligence (AI)
B. Machine Learning (ML)
C. Deep Learning (DL)

A

C. Deep Learning (DL)

A set of techniques for implementing machine learning that recognizes patterns of patterns. (for example: image recognition)

4
Q

Different applications work with different data.

A
5
Q

What is an AI Threat?

A. Hackers can break the system through stickers on stop signs
B. Hackers can bypass facial recognition
C. Hackers can break web platforms and filters via social media.
D. Home assistants like Nest can be broken
E. All the above

A

E. All the above are AI Threats.

a. Self-Driving Car Threat:
Hackers can break the system through stickers on stop signs

b. Classification / Image Threat:
Hackers can bypass facial recognition

c. Social Media Threat:
Hackers can break web platforms and filters via social media.

d. Home Automation Threat:
Home assistants like Nest can be broken

6
Q

What algorithm categories are the following categories?

-Classification
-Regression

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

-Classification
-Regression

A. Supervised

7
Q

What algorithm categories are the following categories?

-Clustering
-Dimensionality Reduction

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

-Clustering
-Dimensionality Reduction

B. Unsupervised

8
Q

What algorithm categories are the following categories?

-Generative models

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

-Generative models

C. Semi-Supervised

9
Q

What algorithm categories are the following categories?

-reinforcement learning

D. Reinforcement Learning

A

-reinforcement learning

D. Reinforcement Learning

10
Q

How are AI attacks classified?

A. confidentiality, availability, and integrity (triad)
B. Espionage, sabotage, and fraud
C. Availability, fraud, and integrity
D.A and B

A

D. A and B - AI attacks are classified in both ways:

A. Confidentiality, availability, and integrity (the CIA triad)

and

B. Espionage, sabotage, and fraud

11
Q

What are the steps to start an AI Security Project?

I. Identify an AI object and a task
ii. understand algorithm category and algorithm itself
iii. choose an ai attack relevant to your task and algorithm

A. 3,2,1
B. 2,1,3
C. 1,2,3
D. 3,1,2

A

Steps to start an AI Security Project:

C. 1,2,3

I. Identify an AI object and a task
ii. understand algorithm category and algorithm itself
iii. choose an ai attack relevant to your task and algorithm

12
Q

True or False:

AI Threats are similar / mostly the same, but their approaches are different

A

True

AI Threats are similar / mostly the same, but their approaches are different

Reasoning: The difference comes in Algorithms

13
Q

Steps to Set up your Environment:

i. have nvidia gpu or not
ii. choose operating system (recommend Ubuntu)
iii. follow guidelines provided

A. 3,2,1
B. 1,2,3
C. 2, 1, 3,
D. 3,1,2,

A

Steps to Set up your Environment:

i. have nvidia gpu or not
ii. choose operating system (recommend Ubuntu)
iii. follow guidelines provided

B. 1,2,3

14
Q

Which attack cannot be used for breaking integrity of AI?

A. backdoor
b. adversarial
c. inference attack
d. poisoning

A

c. inference attack

Inference attacks don't break functionality; they extract critical data.

REASONING:

Adversarial attacks - break integrity by misclassification
Poisoning - poisoning breaks integrity
Backdoor - backdoor attacks break integrity

15
Q

What is the most important hardware for this course?

a. CPU
b. GPU
c. RAM
d. HDD

A

most important hardware
b. GPU

16
Q

Model is trained on a labeled dataset. Examples are classification and regression:

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

A. Supervised

Supervised - The model is trained on a labeled dataset. Examples are classification and regression.

17
Q

Model is attempting to automatically find structure in the data by extracting useful features and analyzing its structure. Examples: Clustering, Association, Dimension Reduction (Generalization)

A. Supervised
B. Unsupervised
C. Semi-Supervised
D. Reinforcement Learning

A

B. Unsupervised

Unsupervised - Model is attempting to automatically find structure in the data by extracting useful features and analyzing its structure. Examples: Clustering, Association, Dimension Reduction (Generalization)

18
Q

Imagine a road sign detection system aiming to classify signs. A supervised learning approach is usually used. Examples of certain groups are known, and all classes should be defined in the beginning. This method is:

A. Classification
B. Regression
C. Clustering

A

A. Classification

Classification - Imagine a road sign detection system aiming to classify signs. A supervised learning approach is usually used. Examples of certain groups are known, and all classes should be defined in the beginning.

19
Q

The knowledge about the existing data is utilized to have an idea about new data (Past explains future). Ex. is stock price prediction.

A. Classification
B. Regression
C. Clustering

A

B. Regression

Regression - The knowledge about the existing data is utilized to have an idea about new data (Past explains future). Ex. is stock price prediction.

20
Q

An unsupervised learning approach is usually used. Examples exist, but information about classes in the data is unknown.

A. Classification
B. Regression
C. Clustering

A

C. Clustering

Clustering - An unsupervised learning approach is usually used. Examples exist, but information about classes in the data is unknown.

Algorithms: KNN (K-Nearest Neighbor), K-Means, Mixture Model (LDA)

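The card above lists KNN, K-Means, and mixture models as typical clustering algorithms. A minimal sketch, assuming scikit-learn and synthetic data, of K-Means grouping unlabeled points; note that no labels are passed to fit(), which is what makes the method unsupervised:

```python
# Minimal K-Means sketch on synthetic, unlabeled data (scikit-learn assumed available).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate unlabeled 2-D points drawn from three blobs; the true labels are discarded.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# K-Means finds structure without any class labels (unsupervised).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("cluster centers:\n", kmeans.cluster_centers_)
```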
21
Q

Necessary if you deal with complex systems with unlabeled data and many potential features (facial recognition)

A. Classification
B. Dimension Reduction (Generalization)
C. Clustering
D. Generative Models

A

B. Dimension Reduction (Generalization)

Dimension Reduction - Necessary if you deal with complex systems with unlabeled data and many potential features (facial recognition)

22
Q

_______ designed to simulate the actual data, not decisions, based on previous data.

AI data based on previous data.

A. Classification
B. Dimension Reduction (Generalization)
C. Clustering
D. Generative Models

A

D. Generative Models

Generative Models - Generate AI data based on previous data; designed to simulate the actual data, not decisions.

23
Q

________ A behavior that depends on the changing environment.

A. Reinforcement Learning
B. Dimension Reduction (Generalization)
C. Active Learning
D. Generative Models

A

A. Reinforcement Learning -A behavior that depends on the changing environment.

Reinforcement Learning
(Behavior should react to the changing environment. Trial and Error.)

24
Q

_____ A subclass of reinforcement learning, which helps correct errors, in addition to the environment changes

A. Reinforcement Learning
B. Dimension Reduction (Generalization)
C. Active Learning
D. Generative Models

A

C. Active Learning

Active Learning - A subclass of reinforcement learning, which helps correct errors, in addition to the environment changes

Acts as a teacher who can help correct errors in addition to environment changes

25
_________ are inputs to machine learning models that result in an incorrect output. A. adversarial example B. king penguin C. starfish D. baseball
A. adversarial example - adversarial examples are inputs to machine learning models that result in an incorrect output. Reasoning: b. king penguin, c. starfish, and d. baseball are each an adversarial example.
26
________ - Causes ML models to create a false prediction. A. adversarial example B. king penguin C. starfish D. baseball
A. adversarial example - An adversarial example causes ML models to create a false prediction.
27
___________ tries to move inputs across the decision boundary? A. adversarial example B. king penguin C. adversarial attacks D. baseball
C. adversarial attacks ADVERSARIAL ATTACKS- tries to move inputs across the decision boundary.
28
How AI Attacks Work: What do AI attacks calculate? A. How much input changes affect the outputs. B. How much output changes affect the inputs C. Decision boundary D. Neither
A. How much input changes affect the outputs. AI attacks work by calculating how much INPUT changes AFFECT the OUTPUT.
29
What do you need to calculate AI Attacks? a. Gradient b. Loss function c. Optimal Perturbations measuring Lp Norms d. All the above
d. All the above What you need to calculate AI Attacks: 1. Gradient 2. Loss Function 3. Optimal Perturbations measuring Lp Norms
30
______ defines how good a given model is at making predictions for a given scenario. a. Gradient b. Loss function c. Optimal Perturbations measuring Lp Norms d. None of the Above
b. Loss function Loss Function - Defines how good a given model is at making predictions for a given scenario
31
What method has the following characteristics: -it has its own curve and gradients -slope of the curve indicates the appropriate way of updating the parameters to make the model more accurate in case of prediction a. Gradient b. Loss function c. Optimal Perturbations measuring Lp Norms d. None of the Above
b. Loss function -it has its own curve and gradients -slope of the curve indicates the appropriate way of updating the parameters to make the model more accurate in case of prediction
32
____ a fancy word for derivative, also known as a vector. Means rate of change. a. Gradient b. Loss function c. Optimal Perturbations measuring Lp Norms d. None of the Above
a. Gradient Gradient - a fancy word for derivative, also known as a vector. Means rate of change.
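Cards 29-32 name the ingredients of an attack calculation: a loss function, its gradient, and a perturbation budget. A minimal sketch, assuming PyTorch and a hypothetical toy classifier, of computing the gradient of the loss with respect to the input, which is the rate of change attacks exploit:

```python
# Sketch: gradient of the loss w.r.t. the INPUT (the core ingredient of adversarial attacks).
# Hypothetical toy model; PyTorch assumed available.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)             # input we will perturb
y = torch.tensor([3])                                        # true label

loss = F.cross_entropy(model(x), y)   # loss: how good the prediction is
loss.backward()                       # gradient: rate of change of loss w.r.t. each pixel
print(x.grad.shape)                   # same shape as the input: one value per pixel
```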
33
_____ attacks try to move inputs across the decision boundary. a. Gradient b. Loss function c. Optimal Perturbations measuring Lp Norms d. None of the Above
c. Optimal Perturbations measuring Lp Norms - Perturbation attacks try to move inputs across the decision boundary.
34
____ denotes the maximum change for all pixels in the adversarial examples a. l∞ (l-infinity) b. u c. l0 d. none of above
a. l∞ (l-infinity) - l∞ denotes the maximum change for all pixels in the adversarial examples. (Used in Perturbation)
35
_____ number of pixels changed in the adversarial examples. a. l∞ (l-infinity) b. u c. l0 d. none of above
c. l0 - l0 is the number of pixels changed in the adversarial examples. (Used in Perturbation)
36
Topic "If ML Algorithms have Vulnerabilities" Ex. malefactor is implementing bypass techniques is a "spam", sending out. All algorithms on ML models are based (from SVMs to random forests and neural networks) which are vulnerable to different kinds of adversairal inputs. This type of attack was targets what form of AI? a. Classification b. Random Forests c. K-Means d. Regression
a. Classification Adversarial Classification - Is an attack where malefactor is implementing bypass techniques is a "spam", sending out. All algorithms on ML models are based (from SVMs to random forests and neural networks) which are vulnerable to different kinds of adversairal inputs.
37
Which type of ML algorithms has few examples of practical attacks? a. Classification b. Random Forests c. K-Means d. Regression
d. Regression Regression- a type of ML Algorithms that has FEW EXAMPLES of PRACTICAL attacks. Source: "Adversarial Regression with Multiple Learners 2018"
38
True / False: Most attacks used in Classification can be used in Regression?
TRUE MOST attacks used in Classification CAN BE USED in Regression Reasoning: Condition Based Instance and Null Analysis
39
Which type of ML algorithm (e.g. auto-encoders) is prone to attacks such as input reconstruction and spoofs? (The model encodes an input image into a lower-dimensional representation, then uses that to reconstruct the original image.) a. Classification b. Generative Models c. K-Means d. Regression
b. Generative Models - Generative Models (GANs) and auto-encoders are prone to attacks such as input reconstruction and spoofs: the model encodes an input image into a lower-dimensional representation, then uses that to reconstruct the original image.
40
Which type of ML algorithm can be used for malware detection? a. Classification b. Generative Models c. K-Means d. Clustering
d. Clustering Clustering - used for malware detection. Clustering algorithm is K-Nearest Neighbors (KNN) Note: Training data comes from the wild.
41
______ is the most common dimensionality reduction algorithms? A. PCA B. Clustering C. Generalization D. MNIST
A. PCA PCA- is the most common dimensionality reduction algorithm.
42
Which type of ML algorithm is sensitive to outliers that can be exploited by contaminating training data? A. PCA B. Clustering C. Generalization D. MNIST
A. PCA PCA - sensitive to outliers that can be exploited by contaminating training data.
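A minimal sketch, assuming scikit-learn and synthetic data, of PCA reducing dimensionality, plus a single extreme outlier to illustrate why contaminated training data matters for PCA-based systems:

```python
# PCA sketch: project high-dimensional samples onto a few principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 samples, 64 features (e.g. flattened patches)

pca = PCA(n_components=5).fit(X)
X_low = pca.transform(X)                # shape (200, 5)
print(X_low.shape, pca.explained_variance_ratio_.round(3))

# A single extreme outlier can noticeably shift the learned components,
# which is why contaminating training data can degrade PCA-based detectors.
X_poisoned = np.vstack([X, 100 * np.ones((1, 64))])
pca_poisoned = PCA(n_components=5).fit(X_poisoned)
print("first PC shift due to one outlier:",
      float(np.abs(pca.components_[0] - pca_poisoned.components_[0]).max()))
```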
43
What does this example show? (image omitted in this export)
It allows dramatically decreasing the detection rate for DoS attacks
44
______ which type of algorithm is used for Facial Recognition? An example of this is using your face to unlock your iphone. A. PCA B. Clustering C. Generalization D. MNIST
A. PCA PCA- algorithm is used for Facial Recognition. An example of this is using your face to unlock your iphone.
45
RL framework known as DQN, using a DNN for feature selection and Q-function approximation. Hence enable
46
What are the steps of a Deep Reinforcement Learning Attack (DQN)? i. attacker observes current state and transitions in environment ii. attacker estimates best action according to adversarial policy iii. attacker crafts perturbation to induce adversarial action iv. attacker applies perturbation v. perturbed input is revealed to target vi. attacker waits for target's action A. 1,2,3,4,5,6 B. 6,5,4,3,2,1 C. 4,3,2,5,6,1 D. 2,5,3,4,6,1
Steps of a Deep Reinforcement Learning Attack (DQN): A. 1,2,3,4,5,6 - i. attacker observes current state and transitions in environment ii. attacker estimates best action according to adversarial policy iii. attacker crafts perturbation to induce adversarial action iv. attacker applies perturbation v. perturbed input is revealed to target vi. attacker waits for target's action
47
What is the most widespread attack method? a. LBFGS b. FGSM (Fast Gradient Sign Method) c. DQN d. none of the above
b. FGSM (Fast Gradient Sign Method) - the most widespread attack method.
48
_____ attack does the following: 1. Takes the label of the least likely class predicted by the network 2. The computed perturbation is subtracted from the original image 3. This maximizes the probability that the network predicts the target as the label of the adversarial example a. LBFGS b. FGSM (Fast Gradient Sign Method) c. DQN d. none of the above
b. FGSM (Fast Gradient Sign Method) - FGSM works using the following steps: 1. Takes the label of the least likely class predicted by the network 2. The computed perturbation is subtracted from the original image 3. This maximizes the probability that the network predicts the target as the label of the adversarial example
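A minimal sketch of the untargeted FGSM step, assuming PyTorch and a hypothetical toy model; the targeted "least-likely class" variant described above subtracts a perturbation computed against the target label instead of adding one against the true label:

```python
# FGSM sketch (untargeted): x_adv = x + eps * sign(grad_x loss(model(x), y)).
# Hypothetical toy model; PyTorch assumed available.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = x + eps * grad.sign()          # one step in the direction that increases the loss
    return x_adv.clamp(0, 1).detach()      # keep pixels in a valid range

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([7])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())             # bounded by eps (an L-infinity constraint)
```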
49
_____ attack method was very time consuming, especially for larger images, and practically non-applicable a. LBFGS b. FGSM (Fast Gradient Sign Method) c. DQN d. none of the above
a. LBFGS LBFGS - attack method was very time consuming, especially for larger images and practically non-applicable
50
Which ML task category is required if you deal with complex systems with unlabeled data and many potential features? a. classification b. clustering c. reinforcement learning d. dimensionality reduction
d. dimensionality reduction - Dimensionality Reduction is the ML category required if you deal with complex systems with unlabeled data and many potential features.
51
How do you measure Adversarial Attacks? A. using Gradient B. using Loss Function C. using L-p norm D. using the size of ML Model
C. using L-p norm L-p norm used to measure changes for adversarial attacks
52
Which ML task category has the biggest number of research papers? A. Clustering B. Reinforcement Learning C. Classification D. Regression
C. Classification - Classification has the largest number of research papers (300+).
53
Why is the FGSM method better than the L-BFGS method? A. Requires less information B. FGSM is more accurate C. More universal D. The FGSM method is faster
D. The FGSM method is faster. Reasoning: not C - L-BFGS is more universal but slower and less accurate.
54
Which dataset is better for testing practical attacks? A. CIFAR B. MNIST C. LFW D. ImageNet
B. MNIST MNIST is the dataset best for testing practical attacks. The MNIST dataset is the smallest one, and all tests will be less time-consuming with lower computation cost
55
What are the reasons to Hack AI? A. AI is eating software B. Expansion of technology related to Cybersecurity C. Vulnerable to various cyber attacks like any other algorithms D. All Above
D. All Above Hack AI -AI is eating software -Expansion of tech related to cybersecurity -vulnerability to various cyber attacks like any other algorithms
56
Autonomous cars use image classification, such as identification of road signs. ______ can lead to horrible accidents. A. Spoofing of road signs
Autonomous cars use image classification, such as identification of road signs. A. Spoofing of road signs - can lead to horrible accidents.
57
What are AI risks in the Cybersecurity Industry? A. Bypass spam filters B. Bypass threat detection solutions C. Bypass AI-based Malware Detection tools D. All Above
AI risks in Cybersecurity Industry D. All Above -Bypass spam filter -Bypass threat detection solutions -bypass AI based malware detection tools
58
What are AI risks in the Retail Industry? A. bypass Facial recognition
AI Risks in Retail Industry: A. bypass Facial recognition (used w/ makeup, surgery etc.)
59
How is AI used in Retail? a. Behavior analysis of clients b. Optimize business processes c. all above
c. all above - AI use in retail: 1. Behavior analysis of clients 2. Optimize business processes
60
How is AI used in the Smart Home Industry? Amazon Echo recognizes noise as a command; this voice is recognized as certain instructions. a. forge voice commands
AI used in Smart Home Industry a. forge voice commands
61
How is AI used in the Web and Social Media Industry? a. Fool sentiment analysis of movie reviews, hotels etc.
How AI is used in the Web and Social Media Industry: 1. Fool sentiment analysis of movie reviews, hotels etc. 2. Misinterpret a comment
62
How is AI used in Finance? a. trick anomaly and fraud detection engines
How AI is used in Finance: 1. trick anomaly and fraud detection engines
63
What are ways to prevent Frauds using ML? a. learn customer behavior b. analysis of aggregated data c. analysis of social graphs d. automation of routine processes e. control use ID information f. ALL ABOVE
f. ALL ABOVE -learn customer behavior - analysis of aggregated data -analysis of social graphs - automation of routine processes - control use ID information
64
Confidentiality is associated with: a. Gather System Insights b. Disable AI System Functionality c. Modify AI logic
Confidentiality is associated with: a. Gather System Insights -Obtain insights into the system -utilize the received info or plot more advanced attacks
65
Which triad is the following: (A malicious person deals with a ML system that is an Image Recognition System. They get to learn more about the internals or the datasets from this system) a. confidentiality b. availability c. integrity
a. confidentiality (A malicious person deals with a ML system that is an Image Recognition System. They get to learn more about the internals or the datasets from this system) Reasoning- Confidentiality because they are gathering information about the system and that information can be used to plot attacks. NOT: Integrity because they did not change logic NOT: Availability because they did not disable anything
66
Availability is associated with: a. Gather System Insights b. Disable AI System Functionality c. Modify AI logic
b. Disable AI System Functionality Availability = Disable AI System Functionality
67
Which triad is the following: -Flood AI with requests, which demand more time -Flood with incorrect classified objects to increase manual work -Modify a model by retraining it with wrong examples -Use computing power of an AI model for solving your own tasks a. confidentiality b. availability c. integrity
b. availability -Flood AI with requests, which demand more time -Flood with incorrect classified objects to increase manual work -Modify a model by retraining it with wrong examples -Use computing power of an AI model for solving your own tasks
68
Integrity is associated with: a. Gather System Insights b. Disable AI System Functionality c. Modify AI logic
c. Modify AI logic Integrity = Modify AI Logic
69
Which triad is the following: -Ex. Make autonomous cars believe that there is a cat on the road, when in fact it is a car. -2 different ways to interact with a system at the learning or production stage: 1) poisoning 2) evasion a. confidentiality b. availability c. integrity
c. integrity This attack is integrity because you modified the car to think it was a cat when it was really a car. 2 types of integrity (modify ai logic) 1. Poisoning - attackers poison some data in the training dataset 2. Evasion- attackers exploit vulnerabilities of an algorithm by showing modified picture at the production stage
70
Which integrity interaction is this? ________ attackers alter some data in the training dataset a. poisoning b. evasion c. modify ai logic
a. poisoning POISONING - attackers poison/alter some data in the training dataset. An attack on integrity.
71
Which integrity interaction is this? ______ attackers exploit vulnerabilities of an algorithm by showing the modified picture at the production stage a. poisoning b. evasion c. modify ai logic
b. evasion EVASION - attackers exploit vulnerabilities of an algorithm by showing the modified picture at the production stage. An attack on integrity.
72
_______ a procedure where someone is trying to exploit ML model, by injecting malicious data into the training dataset. a. poisoning b. evasion c. modify ai logic
a. poisoning Poisoning - a procedure where someone is trying to exploit ML model, by injecting malicious data into the training dataset.
73
_________ attacks change the classification boundary while _________ attacks change input examples a. Poisoning, Adversarial b. Adversarial, Poisoning c. Poisoning, Evasion d. Evasion, Poisoning
a. Poisoning, Adversarial - Poisoning attacks change the classification boundary WHILE Adversarial attacks change input examples
74
True or False: If points are added to the training data, the decision boundary will change
True - If points are added to the training data, the decision boundary will change
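A minimal sketch, assuming scikit-learn and synthetic data, showing that injecting a few mislabeled points into the training set shifts a linear decision boundary:

```python
# Sketch: adding mislabeled (poisoned) points to the training set moves a linear decision boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=200, centers=2, random_state=1)
clean = LogisticRegression().fit(X, y)

# Inject a handful of points near class 0 but labeled as class 1 (label-flipping poisoning).
X_poison = X[y == 0][:10] + 0.1
y_poison = np.ones(10, dtype=int)
poisoned = LogisticRegression().fit(np.vstack([X, X_poison]), np.hstack([y, y_poison]))

print("clean boundary weights:   ", clean.coef_, clean.intercept_)
print("poisoned boundary weights:", poisoned.coef_, poisoned.intercept_)
```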
75
______ attack allows an adversary to modify solely the labels in supervised learning datasets but for arbitrary data points A. Label modification B. Poisoning C. Evasion D. Data Injection
A. Label modification - A label modification attack allows an adversary (enemy) to modify solely the labels in supervised learning datasets, but for arbitrary data points
76
______ An adversary (enemy) does not have access to the training data nor to the learning algorithm, but has the ability to add new data to the training set A. Label modification B. Poisoning C. Data Injection D. Adversarial
C. Data Injection Data Injection - An adversary (enemy) does not have access to the training data nor to the learning algorithm, but has the ability to add new data to the training set
77
_______ An adversary does not have access to the learning algorithm but has full access to the training data A. Label modification B. Data Modification C. Data Injection D. Adversarial
B. Data Modification Data modification - An adversary does not have access to the learning algorithm but has full access to the training data.
78
______ An adversary has the ability to meddle with the learning algorithm and such attacks are viewed as logic corruption. A. Label modification B. Data Modification C. Data Injection D. Logic Corruption
D. Logic Corruption Logic Corruption - An adversary has the ability to meddle with the learning algorithm and such attacks are viewed as logic corruption
79
______ An attacker intends to explore the system such as model or dataset, that can further come in handy. A. Label modification B. Data Modification C. Data Injection D. Logic Corruption E. Privacy Attack (Inference Attack)
E. Privacy Attack (Inference Attacks) Privacy Attack - An Attacker intends to explore the system such as Model or dataset, that can further come in handy
80
These attacks are done at the production stage. These attacks are achievable at training, if the training data is injected, we can learn how the algorithm works based on the given data. The goal is to break Confidentiality A. Label modification B. Data Modification C. Data Injection D. Logic Corruption E. Privacy Attack (Inference Attack)
E. Privacy Attack (Inference Attack) Privacy Attack - An Attacker intends to explore the system such as Model or dataset, that can further come in handy Characteristics: These attacks are done at the production stage. These attacks are achievable at training, if the training data is injected, we can learn how the algorithm works based on the given data. The goal is to break Confidentiality
81
Type of attacker: Example with particular property was in a dataset. A. Membership inference B. Attribute Inference C. Input Inference D. Parameter Inference
B. Attribute Inference Attribute inference- Example with particular property was in a dataset.
82
Type of attacker: Particular example was in dataset A. Membership inference B. Attribute Inference C. Input Inference D. Parameter Inference
A. Membership inference Membership inference- Particular example was in dataset
83
Type of attacker: Extract an example from the dataset A. Membership inference B. Attribute Inference C. Input Inference D. Parameter Inference
C. Input Inference Input Inference - Extract an example from the dataset
84
Type of attacker: Obtain ML model parameters A. Membership inference B. Attribute Inference C. Input Inference D. Parameter Inference
D. Parameter Inference Parameter Inference - Obtain ML model parameters
85
______ Attack's main goal is to inject additional behavior in such a way that backdoors operate after retraining the system A. Poisoning B. Backdoor C. Evasion D. Parameter Inference
B. Backdoor Backdoor - Main goal is to inject additional behavior in such a way that the backdoors operate after retraining the system
86
Why Use Backdoors? 1. NNs are large structures with millions of neurons; backdoors make it possible to do minor changes to a small set of neurons 2. Production models are trained with tremendous data and computing power; it is impossible for small companies to recreate them, so they usually retrain existing models 3. Malefactors can hack a server that stores public models and upload their own model containing a backdoor; the NN model will keep the backdoor until the model is retrained
Why Use Backdoors: 1. NNs are large structures with millions of neurons; backdoors make it possible to do minor changes to a small set of neurons 2. Production models are trained with tremendous data and computing power; it is impossible for small companies to recreate them, so they usually retrain existing models 3. Malefactors can hack a server that stores public models and upload their own model containing a backdoor; the NN model will keep the backdoor until the model is retrained
87
_____ attacks are lesser-known than adversarial attacks a. listed b. backdoor c. adversarial d. parameter
a. listed Listed attacks are lesser-known than adversarial attacks
88
Which industry is one of the most critical in terms of AI attacks? a. Transportation b. Energy c. Entertainment d. Oil and Gas
a. Transportation The transportation industry is the most critical because AI is taking this industry by storm and any error related to security may affect human lives
89
An attack on __ is an attack where a hacker's aim is to get information on ML Models insights a. safety b. availability c. integrity d. confidentiality
d. confidentiality confidentiality - an attack where a hacker's aim is to get information on ML Models insights
90
How is an attack subtype called if an adversary does not have any access to the training data as well as to the learning algorithm but instead it has an ability to add new data to the training set? a. Label modification b. Data injection c. Logic corruption d. Data modification
b. Data injection Data injection - adversary ability to add new data to the training set
91
What algorithms can be used for detecting poisoning attacks? a. clustering b. decision trees c. neural networks d. KNN
a. Clustering - clustering is used to detect poisoning attacks
92
Is parameter inference privacy attack implemented in CypherCat? True / False
False - The Parameter Inference privacy attack is not implemented in CypherCat
93
What algorithm is required for backdoor detection? a. classification b. outlier detection c. segmentation d. regression
b. outlier detection
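A minimal sketch, assuming scikit-learn and synthetic feature vectors, of outlier detection with an Isolation Forest; a real backdoor defense would typically inspect activations or labels rather than raw inputs, so this only illustrates the outlier-detection idea:

```python
# Outlier-detection sketch: flag unusual samples with IsolationForest.
# Purely illustrative; the "trigger" samples below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 32))      # benign feature vectors
trigger = rng.normal(5, 1, size=(10, 32))      # hypothetical backdoored samples, far from the rest
X = np.vstack([normal, trigger])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)                    # -1 = outlier, +1 = inlier
print("flagged as outliers:", int((flags == -1).sum()))
```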
94
What are 3 things you need to consider when you want to analyze the security of AI? a. architecture, algorithm, and dataset b. architecture, SVM, and dataset c. training data, algorithm, dataset d. none of the above
a. architecture, algorithm, and dataset 3 things to consider when analyze security 1. Architecture 2. Algorithm 3. Dataset
95
Linear Regression, SVM, MLP, CNN (Convolutional Neural Network) - These are all examples of: a. algorithm b. dataset c. architecture
c. architecture - Linear Regression, SVM, MLP, and CNN (Convolutional Neural Network) are all examples of architecture.
96
_______ is a type of architecture that has multiple layers of neural networks, each is responsible for its own set of features a. algorithm b. dataset c. architecture
c. architecture a type of architecture that has multiple layers of neural networks, each is responsible for its own set of features
97
Which type of algorithm is the following: -simple architecture -slow for training -model is large -avoid in practice a. VGG (Visual Geometry Group) b. ResNet (Residual networks) c. Inception
a. VGG (Visual Geometry Group) VGG (Visual Geometry Group) -simple architecture -slow for training -model is large -avoid in practice
98
Which type of algorithm is the following: -deep neural network -addresses the problem of vanishing gradients a. VGG (Visual Geometry Group) b. ResNet (Residual networks) c. Inception
b. ResNet (Residual Networks) - an algorithm with a deep neural network that addresses the problem of vanishing gradients
99
Which type of algorithm is the following: -developed by Google -4 versions available -Inception V3 and Inception V4 (image classification) a. VGG (Visual Geometry Group) b. ResNet (Residual networks) c. Inception
c. Inception -developed by Google -4 versions available -Inception V3 and Inception V4 (image classification)
100
Which dataset has which property? - MNIST / CIFAR: play while practicing - MNIST / CIFAR: run tests faster - ImageNet: needs a lot of memory on your computer
101
Which type of dataset would you use based on the following task: "Want to develop a production-based solution and Attacks / Defenses." a. MNIST b. CIFAR c. ImageNet
c. ImageNet - ImageNet is the way to go if you want to develop a production-based solution and test attacks / defenses.
102
Which type of dataset would you use based on the following tasks: "run tests faster", "play while practicing"? a. MNIST b. CIFAR c. ImageNet d. both a and b
d. both a and b - BOTH MNIST and CIFAR datasets have the advantages of running tests faster and playing while practicing.
103
Which type of dataset would you use based on the following task: "need a lot of memory"? a. MNIST b. CIFAR c. ImageNet
c. ImageNet - A disadvantage of ImageNet is that you will need a lot of memory.
104
What questions must be answered about adversarial attacks? a. goals b. perturbation and iterations c. environment and constraints d. knowledge e. all the above
e. all the above - Questions that need to be answered about adversarial attacks to obtain the utmost information: - Attacker's goal - Perturbation - Environment - Iterations - Constraints - Knowledge
105
Which Adversarial Attack Goal is the following: "Change a class to a particular target" a. targeted misclassification b. source / target misclassification c. confidence reduction d. misclassification e. all above
a. targeted misclassification - Targeted misclassification: change a class to a particular target.
106
Which Adversarial Attack Goal is the following: "Change a class without any specific target" a. targeted misclassification b. source / target misclassification c. confidence reduction d. misclassification e. all above
d. misclassification "Change a class without any specific target"
107
Which Adversarial Attack Goal is the following: "Dont change a class but impact the confidence greatly" a. targeted misclassification b. source / target misclassification c. confidence reduction d. misclassification e. all above
c. confidence reduction "dont change a class but impact the confidence greatly"
108
Which Adversarial Attack Goal is the following: "Change a class without any specific target" a. targeted misclassification b. source / target misclassification c. confidence reduction d. misclassification e. all above
d. misclassification misclassification - Change a class without any specific target"
109
Which Adversarial Attack Perturbation is the following: "Adversarial perturbation can only be applied to 1 source" a. individual b. universal
a. individual
110
Which Adversarial Attack Perturbation is the following: "Adversarial perturbation can be applied to many sources" a. individual b. universal
b. universal
111
Which Adversarial Attack Perturbation is the following: "Adversarial attack can only be applied to digital world" a. individual b. universal c. digital d. physical
c. digital - ex. an attacker has a digital photo (profile picture); with a small perturbation to multiple pixels they can fool facial recognition in the digital world
112
Which Adversarial Attack Perturbation is the following: "Adversarial attack applied to physical world" a. individual b. universal c. digital d. physical
d. physical - A camera takes a photo and sends it to the ML system. If the camera quality is insufficient, it smooths the image before it is sent to the system. This smoothing destroys the adversarial perturbation, which shows that what works in the digital world cannot always be reproduced in the physical world.
113
Single-step attacks require just one step. What are single-step attack examples? a. FGSM b. RSSA c. BIM d. Both A and B
d. Both A and B FGSM and RSSA are both single step attacks. (Fast and less accurate)
114
Iterative attacks require multiple iterations. What are examples of Iterative attacks? a. BIM b. DeepFool c. FGSM d. both A and B
d. both A and B - BIM and DeepFool are both iterative attacks requiring multiple iterations. (More accurate but very slow.)
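A minimal sketch of the iterative BIM attack, assuming PyTorch and a hypothetical toy model: repeated small FGSM steps with the accumulated perturbation kept inside an L-infinity budget:

```python
# BIM sketch: iterate small FGSM steps and clip the accumulated perturbation to an L-infinity ball.
import torch
import torch.nn as nn
import torch.nn.functional as F

def bim(model, x, y, eps=0.1, alpha=0.01, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # stay within the eps budget
        x_adv = x_adv.clamp(0, 1)
    return x_adv

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(2, 1, 28, 28), torch.tensor([1, 4])
print((bim(model, x, y) - x).abs().max())  # at most eps
```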
115
________ This Adversarial Attack Constraint - measures the Euclidean distance between the adversarial example and the original sample a. L∞ b. L2 c. L1 d. L0
Adversarial Attack Constraint b. L2 L2 - measures the Euclidean distance between adversarial example and the original sample
116
_______ This Adversarial Attack Constraint - measures the distance between 2 points (the number of dimensions that have different values, i.e. the number of pixels changed) a. L∞ b. L2 c. L1 d. L0
Adversarial Attack Constraint d. L0 L0- measures distance between 2 points (number of dimensions that have different values) and number of pixels changed)
117
______ This Adversarial Attack Constraint - Distance is equivalent to the sum of the absolute value of each dimension, which is also known as the Manhattan distance a. L∞ b. L2 c. L1 d. L0
Adversarial Attack Constraint c. L1 L1 - Distance is equivalent to the sum of the absolute value of each dimension, which is also known as the Manhattan distance
118
______ This Adversarial Attack Constraint - Denotes the maximum change for all pixels in adversarial examples a. L∞ b. L2 c. L1 d. L0
Adversarial Attack Constraint a. L∞ - L∞ denotes the maximum change for all pixels in adversarial examples
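A minimal numpy sketch, with a hypothetical image and perturbation, computing the four constraints defined in cards 115-118:

```python
# Compute the L0, L1, L2, and L-infinity distances between an image and its adversarial version.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((28, 28))
x_adv = x.copy()
x_adv[:3, :3] += 0.05          # perturb a small patch

delta = (x_adv - x).ravel()
print("L0   (pixels changed):     ", int(np.count_nonzero(delta)))
print("L1   (Manhattan distance): ", float(np.abs(delta).sum()))
print("L2   (Euclidean distance): ", float(np.linalg.norm(delta)))
print("Linf (max pixel change):   ", float(np.abs(delta).max()))
```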
119
_______ Everything about the network is known including all weights and all data on which this network was trained a. White-box b. Grey-box c. Black-box
a. White-box White-box- Everything about the network is known including all weights and all data on which this network was trained
120
______ An attacker may know details about the dataset or a type of neural network, its structure, the number of layers, and so on a. White-box b. Grey-box c. Black-box
b. Grey-box - An attacker may know details about the dataset or a type of neural network, its structure, the number of layers, and so on
121
________ An attacker can only send information to the system and obtain a simple result about a class a. White-box b. Grey-box c. Black-box
c. Black-box An attacker can only send information to the system and obtain a simple result about a class
122
Steps on "How to Choose an Attack" i. Understand Knowledge Level + Goal ii. Understand Constrain + Environment iii. Iterations + Perturbations a. 1,2,3 b. 3,2,1 c. 2,1,3
Steps on "How to Choose an Attack" a. 1,2,3 i. Understand Knowledge Level + Goal ii. Understand Constrain + Environment iii. Iterations + Perturbations
123
Attack quality depends on AI model hyperparameters True False
True AI Attack quality depends on AI model hyperparameters such as, number of layers, activation functions etc.
124
Iterative attacks are better than single-step attacks because they are faster True False
False Iterative attacks are slower than Single-Step attacks
125
FGSM is faster than DeepFool True False
True FGSM is faster than DeepFool
126
Grey-box attack is an attack where an attacker doesn't know anything about the model and the dataset True False
False - A Grey-box attack is an attack where an attacker knows a little about the model and the dataset
127
Decision-based attacks are harder than score-based ones True False
True Decision-based attacks are harder than the score-based ones because they are based on less information about the system
128
What are the 4 different ways to measure attacks? 1. misclassification 2. imperceptibility 3. robustness 4. speed
misclassification imperceptibility robustness speed
129
What are one of the ways to measure for attacks: "how good the attack is against all examples" a. misclassification b. imperceptibility c. robustness d. speed
a. misclassification
130
What are one of the ways to measure for attacks: "how hard is it to recognize an attack" a. misclassification b. imperceptibility c. robustness d. speed
b. imperceptibility "how hard is it to recognize an attack"
131
What are one of the ways to measure for attacks: "how resistant to modification this adversarial example is" a. misclassification b. imperceptibility c. robustness d. speed
c. robustness "how resistant to modification this adversarial example is"
132
What are one of the ways to measure for attacks: "how fast the computation is" a. misclassification b. imperceptibility c. robustness d. speed
d. speed "how fast the computation is"
133
What are the 3 measures of Misclassification? 1. Misclassification Ratio (MR) 2. Average Confidence of Adversarial Class (ACAC) 3. Average Confidence of True Class (ACTC)
The 3 measures of Misclassification: 1. Misclassification Ratio (MR) 2. Average Confidence of Adversarial Class (ACAC) 3. Average Confidence of True Class (ACTC)
134
Which Misclassification measure is the following: "the percentage of adversarial examples, which are successfully misclassified as relating to an arbitrary class" a. Misclassification ratio (MR) b. Average Confidence of Adversarial Class (ACAC) c. Average Confidence of True Class (ACTC)
a. Misclassification ratio (MR) "the percentage of adversarial examples, which are successfully misclassified as relating to an arbitrary class"
135
Which Misclassification measure is the following: "The average prediction confidence toward the incorrect class" a. Misclassification ratio (MR) b. Average Confidence of Adversarial Class (ACAC) c. Average Confidence of True Class (ACTC)
Misclassification Measure  b. Average Confidence of Adversarial Class (ACAC) The average prediction confidence toward the incorrect class"
136
Which Misclassification measure is the following: "Averaging the prediction confidence of true classes for AEs, ACTC is used to further evaluate the extent to which the attacks escape from the ground truth" a. Misclassification ratio (MR) b. Average Confidence of Adversarial Class (ACAC) c. Average Confidence of True Class (ACTC)
c. Average Confidence of True Class (ACTC) "Averaging the prediction confidence of true classes for AEs, ACTC is used to further evaluate the extent to which the attacks escape from the ground truth"
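A minimal numpy sketch, with hypothetical softmax outputs, computing MR, ACAC, and ACTC as defined in cards 133-136:

```python
# Sketch: MR, ACAC and ACTC from softmax probabilities on adversarial examples (hypothetical values).
import numpy as np

probs = np.array([[0.1, 0.7, 0.2],     # softmax output per adversarial example
                  [0.6, 0.3, 0.1],
                  [0.2, 0.2, 0.6]])
true_labels = np.array([0, 0, 2])       # ground-truth classes

pred = probs.argmax(axis=1)
misclassified = pred != true_labels

mr = misclassified.mean()                                        # Misclassification Ratio
acac = probs[misclassified, pred[misclassified]].mean()          # avg confidence of the (wrong) predicted class
actc = probs[misclassified, true_labels[misclassified]].mean()   # avg confidence left on the true class
print(f"MR={mr:.2f}  ACAC={acac:.2f}  ACTC={actc:.2f}")
```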
137
What are the 3 measures of Imperceptibility? 1. Average Lp Distortion (ALDp) 2. Average Structural Similarity (ASS) [image-specific] 3. Perturbation Sensitivity Distance (PSD) [image-specific]
The 3 measures of Imperceptibility: 1. Average Lp Distortion (ALDp) 2. Average Structural Similarity (ASS) [image-specific] 3. Perturbation Sensitivity Distance (PSD) [image-specific]
138
Which Imperceptibility measure is the following: "As the average normalized Lp distortion for all successful adversarial examples" a. Average Lp Distortion (ALDp) b. Average Structural Similarity (ASS) [image-specific] c. Perturbation Sensitivity Distance (PSD) [image-specific]
Measure of Imperceptibility: a. Average Lp Distortion (ALDp)- "As the average normalized Lp distortion for all successful adversarial examples"
139
Which Imperceptibility measure is the following: "Structural similarity is considered to be more consistent with human visual perception than Lp similarity" a. Average Lp Distortion (ALDp) b. Average Structural Similarity (ASS) [image-specific] c. Perturbation Sensitivity Distance (PSD) [image-specific]
A measure of Imperceptibility: b. Average Structural Similarity (ASS) [image-specific] - "Structural similarity is considered to be more consistent with human visual perception than Lp similarity"
140
Which Imperceptibility measure is the following: "Based on the contrast masking theory, this measure is proposed to evaluate human perception of perturbations" a. Average Lp Distortion (ALDp) b. Average Structural Similarity (ASS) [image-specific] c. Perturbation Sensitivity Distance (PSD) [image-specific]
A measure of Imperceptibility c. Perturbation Sensitivity Distance (PSD) [image-specific] "Based on the contrast masking theory, this measure is proposed to evaluate human perception of perturbations"
141
What are the 3 measures of Robustness? 1. Noise Tolerance Estimation (NTE) 2. Robustness to Gaussian Blur (RGB) 3. Robustness to Image Compression (RIC) [image-specific]
The 3 measures of Robustness: 1. Noise Tolerance Estimation (NTE) 2. Robustness to Gaussian Blur (RGB) 3. Robustness to Image Compression (RIC) [image-specific]
142
Which Robustness measure is the following: "Noise tolerance reflects the amount of noises that AEs can tolerate while keeping the misclassified class unchanged" a. Noise Tolerance Estimation (NTE) b. Robustness to Gaussian Blur (RGB) c. Robustness to Image Compression (RIC) [image-specific]
a. Noise Tolerance Estimation (NTE) "Noise tolerance reflects the amount of noises that AEs can tolerate while keeping the misclassified class unchanged"
143
Which Robustness measure is the following: "Gaussian Blur is widely used as a pre-processing stage in computer vision algorithms to reduce noise in images" a. Noise Tolerance Estimation (NTE) b. Robustness to Gaussian Blur (RGB)[image-specific] c. Robustness to Image Compression (RIC) [image-specific]
Robustness measure: b. Robustness to Gaussian Blur (RGB) [image-specific] - "Gaussian Blur is widely used as a pre-processing stage in computer vision algorithms to reduce noise in images"
144
Which Robustness measure is the following: "Image-specific measure similar to RGB" a. Noise Tolerance Estimation (NTE) b. Robustness to Gaussian Blur (RGB)[image-specific] c. Robustness to Image Compression (RIC) [image-specific]
Robustness measure: "Image-specific measure similar to RGB" c. Robustness to Image Compression (RIC) [image-specific]
145
5 Measures of Speed: -Single CPU -Single GPU -Parallel CPU -Parallel GPU -Memory consumption
5 Measures of Speed: -Single CPU -Single GPU -Parallel CPU -Parallel GPU -Memory consumption
146
What are the steps to choose Metrics for Better Attacks? i. Misclassification ii. Imperceptibility iii. Robustness a. 1,2,3 b. 2,1,3 c. 3,2,1 d. none
Steps to choose Metrics for Better Attacks a. 1,2,3 i. Misclassification ii. Imperceptibility iii. Robustness
147
_______ attacks produce much smaller changes and bypass defensive distillation a. advanced attacks b. list attacks c. listed attacks d. adversarial attacks
a. advanced attacks advanced attacks produce much smaller changes and bypass defensive distillation
148
Which attack provides 3 different attack options (L0, L2, L∞) and also handles box constraints, using an optimizer such as Adam? a. CW Attack b. L-BFGS c. FGSM
a. CW Attack - CW Attack logic: provides 3 different attack options (L0, L2, L∞); also handles box constraints, using an optimizer such as Adam
149
Why use the DeepFool Attack over L-BFGS, FGSM, and CW Attack? a. CW attack is slow b. L-BFGS and FGSM perturbations are big c. We need faster solutions with smaller perturbations. d. All of the above are reasons to use DeepFool
d. All of the above are reasons to use DeepFool: 1. L-BFGS and FGSM perturbations are big 2. CW Attack is slow 3. Need faster solutions with smaller perturbations
150
Which attack was the first method specifically for deep networks? a. DeepFool b. CW c. L-BFGS d. FGSM
a. DeepFool DeepFool- attack was the first method specifically for deep networks
151
What is a big advantage to DeepFool? a. Faster than CW b. Finds the closest decision boundary to a given X c. step by steps calculate the best pixels d. none of the above
b. Finds the closest decision boundary to a given X The biggest advantage to DeepFool is that DeepFool -Finds the closest decision boundary to a given X steps: 1. Step by step calculate the best pixels to change 2. Algorithm perturbs the image by a small vector 3. Vector takes the resulting image to the boundary of the polyhedron that is obtained by linearizing the boundaries of the image region.
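For a linear binary classifier the closest point on the decision boundary is known in closed form; this is the building block DeepFool applies iteratively after linearizing the boundaries. A minimal numpy sketch with hypothetical weights:

```python
# DeepFool building block for a LINEAR binary classifier f(x) = w.x + b:
# the minimal L2 perturbation that reaches the boundary is r = -f(x) * w / ||w||^2.
import numpy as np

w = np.array([1.0, -2.0])   # hypothetical weights
b = 0.5
x = np.array([3.0, 1.0])    # hypothetical input

f = w @ x + b                     # scaled signed distance from the boundary
r = -f * w / (np.linalg.norm(w) ** 2)
x_adv = x + 1.001 * r             # small overshoot to actually cross the boundary

print("f(x)     =", f)
print("f(x_adv) =", w @ x_adv + b)  # sign flips: the example is now on the other side
```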
152
_______ attack is a universal approach to analysis of model security against adversarial examples a. PGD (Project Gradient Descent) attack b.DeepFool c. CW d. L-BFGS
a. PGD attack PGD attack is a universal approach to analysis of model security against adversarial examples
153
Among the white-box defenses that appeared in ICLR-2018 and CVPR-2018, _______ was the only defense that has not been successfully attacked so far. a. PGD adversarial b. DeepFool c. CW d. L-BFGS
a. PGD adversarial - PGD adversarial training was the only defense that has not been successfully attacked so far.
154
_____ is a variation of the BIM method, but instead of directly clipping X_adv plus the perturbation to [X_min, X_max], it performs a projection of the perturbation onto the Lp-ball with radius ε_total. a. PGD (Projected Gradient Descent) b. BIM c. PGD Adversarial
a. PGD (Projected Gradient Descent) - a variation of the BIM method; instead of directly clipping X_adv plus the perturbation to [X_min, X_max], it projects the perturbation onto the Lp-ball with radius ε_total.
155
_____ wants the closest similarity to another class with minimum perturbation for a source input a. PGD (Projected Gradient Descent) b. BIM c. PGD Adversarial d. PGD
d. PGD (Projected Gradient Descent) -wants the closest similarity to another class with minimum perturbation for a source input
156
_____ goal is to find model parameters so that the "adversarial loss" given by inner attack problem is minimized. a. PGD (Projected Gradient Descent) b. BIM c. PGD Adversarial d. PGD
d. PGD goal is to find model parameters so that the "adversarial loss" given by inner attack problem is minimized.
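A minimal numpy sketch of the projection step that distinguishes PGD from plain clipping, shown for the L-infinity and L2 balls with a hypothetical perturbation:

```python
# PGD projection step: after each gradient step, project the accumulated perturbation
# back onto the eps-ball (shown here for the L-infinity and L2 cases).
import numpy as np

def project_linf(delta, eps):
    return np.clip(delta, -eps, eps)

def project_l2(delta, eps):
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

delta = np.array([0.30, -0.05, 0.20])   # hypothetical accumulated perturbation
print(project_linf(delta, eps=0.1))      # each coordinate clipped to [-0.1, 0.1]
print(project_l2(delta, eps=0.1))        # rescaled so the L2 norm is at most 0.1
```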
157
BIM attack is better than FGSM because: a. it's faster b. it's more precise c. it uses fewer resources d. it can be optimized to work on GPU
b. it's more precise - Both BIM and FGSM work on GPU
158
CW attack was invented to a. bypass adversarial training defense b. invent the fastest attack c. use less resources d. bypass defensive distillation
d. bypass defensive distillation The Main Idea of CW attack was created to bypass defensive distillation protection
159
Which attack is the most similar to DeepFool by the Imperceptibility metric? a. BIM b. FGSM c. CW d. PGD
c. CW - Note: BIM is different from PGD according to imperceptibility metrics.
160
Why PGD is better than BIM in practice? a. can find same Adversarial examples much faster b. always more precise c. more robust d. faster
a. can find same Adversarial examples much faster Note: BIM usually calculating attacks faster than PGD
161
Which attack has the worst robustness? a. FGSM b. CW c. BIM d. PGD
b. CW - Note: FGSM is not the best attack, but its robustness is quite OK.
162
What is the best approach to protect AI solutions? a. PPDR (predict, prevent, detect, respond) b. PPRD (predict, prevent, respond, detect) c. RDPP (respond, detect, predict, prevent)
a. PPDR (predict, prevent, detect, respond)
163
Out of the PPDR Model which part uses the following information: "Protects a model production - testing and verification?" a. predict b. prevent c. respond d. detect
a. predict "Protects a model production - testing and verification"
164
Out of the PPDR Model which part uses the following information: "Preventing attacks at the production stage by different model modifications?" a. predict b. prevent c. respond d. detect
b. prevent "Preventing attacks at the production stage by different model modifications"
165
Out of the PPDR Model which part uses the following information: "Active reaction to attacks such as modification of model responses?" a. predict b. prevent c. respond d. detect
c. respond "Active reaction to attacks such as modification of model responses"
166
Out of the PPDR Model which part uses the following information: "If an input is adversarial, don't let this data into a model?" a. predict b. prevent c. respond d. detect
d. detect "If an input is adversarial, don't let this data into a model?"
167
which approach in PREDICTION is the following: "sub-category collects all defense that somehow modifies the training procedure to minimize the chances of potential attacks?" a. modified training b. verification
PREDICTION Method a. modified training A sub-category collects all defense that somehow modifies the training procedures to minimize the chances of potential attacks
168
which approach in PREDICTION is the following: "sub-category NOT an actual defense but a health-check trying to explore all the potential ways to attack a model and as a result present the worst case scenarios" a. modified training b. verification
PREDICTION Method b. verification verification - "sub-category NOT an actual defense but a health-check trying to explore all the potential ways to attack a model and as a result present the worst case scenarios"
169
which approach in PREVENTION is the following: "sub-category modifying an input in order to corrupt or smooth objects (compression, purification, randomization, and many other approaches)" a. modified input b. modified model
PREVENTION Method a. modified input "sub-category modifying an input in order to corrupt or smooth objects (compression, purification, randomization, and many other approaches)"
170
which approach in PREVENTION is the following: "Modifying a ML model in order to prevent attacks (changing hyperparameters, activation functions, layers, or combining multiple models together)" a. modified input b. modified model
PREVENTION method b. modified model - "Modifying a ML model in order to prevent attacks (changing hyperparameters, activation functions, layers, or combining multiple models together)"
171
which approach in DETECTION is the following: "Detecting potential attacks on ML models by learning initial distribution" a. Supervised Detection b. Unsupervised Detection
DETECTION method a. Supervised Detection "Detecting potential attacks on ML models by learning initial distribution"
172
which approach in DETECTION is the following: "(1) Detecting potential attacks on ML models without initial training. ; (2) It Learns behavior from all inputs and detects outliers. a. Supervised Detection b. Unsupervised Detection
DETECTION method b. Unsupervised Detection "(1) Detecting potential attacks on ML models without initial training. ; (2) It Learns behavior from all inputs and detects outliers."
173
which approach in RESPONSE is the following: "Detecting outliers and deleting them from training in order to save the model from retraining and poisoning attacks" a. Retraining b. Counterattack
RESPONSE METHOD a. Retraining - "Detecting outliers and deleting them from training in order to save the model from retraining and poisoning attacks"
174
which approach in RESPONSE is the following: "Responding to potential attacks by detecting attack attempts and replying in such a way that attacks will continue heading in the wrong direction" a. Retraining b. Counterattack
RESPONSE METHOD b. Counterattack "Responding to potential attacks by detecting attack attempts and replying in such a way that attacks will continue heading in the wrong direction"
175
How to Detect and Measure Priority a. predict, prevent, detect b. detect, prevent, predict c. prevent, detect, predict
a. predict, prevent, detect
176
Adversarial training, regularization, and distillation are examples of which method: a. modified training b. modified model c. modified input d. none of the above
a. modified training PREDICTION Measure Examples: Adversarial training, regularization, distillation PRO/CON- very time consuming
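A minimal sketch of one adversarial-training step, assuming PyTorch and a hypothetical toy model: FGSM examples are crafted on the fly and mixed into the batch, which is why this "modified training" defense is time consuming:

```python
# Adversarial-training sketch: each step trains on clean AND FGSM-perturbed versions of the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))  # hypothetical batch

# Craft FGSM examples on the fly.
x_req = x.clone().requires_grad_(True)
grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
x_adv = (x + 0.1 * grad.sign()).clamp(0, 1).detach()

# Train on the mixed batch (the extra attack computation is what makes this time consuming).
loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```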
177
Reconstruction, Compression, and Purification are examples of which method: a. modified training b. modified model c. modified input d. none of the above
c. modified input PREVENTION Examples: Reconstruction, compression, and purification PRO/CON: very good but application specific
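A minimal numpy sketch of one "modified input" preprocessing step, bit-depth reduction (a simple form of feature squeezing), with a hypothetical image and perturbation:

```python
# Modified-input defense sketch: reduce pixel bit depth before the image reaches the model,
# so part of a tiny adversarial perturbation is rounded away. Purely illustrative.
import numpy as np

def squeeze_bit_depth(x, bits=4):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels   # quantize pixels in [0, 1] to 2^bits levels

rng = np.random.default_rng(0)
x = rng.random((28, 28))
x_adv = np.clip(x + rng.uniform(-0.02, 0.02, x.shape), 0, 1)  # stand-in for a small perturbation

print("pixels differing before squeezing:     ", float(np.mean(x_adv != x)))
print("pixels still differing after squeezing:", float(np.mean(squeeze_bit_depth(x_adv) != squeeze_bit_depth(x))))
```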
178
Binary classifier and Additional Output are examples of which method: a. add-on detection b. modified model c. modified input d. none of the above
a. add-on detection - examples: Binary Classifier and Additional Output. Pro / Con: Very diverse with respect to quality and speed
179
Adversarial training is? a. predict b. prevent c. detect d. respond
a. predict - Adversarial training is an example of prediction (modified training)
180
What is the most model-specific defense a. Verification b. Input modification c. Detection d. Model modification
d. Model modification model modification defense is the most model-specific
181
What is the best Adversarial training defense from those which were tested in this video a. NAT b. EAT c. PAT d. EIT
b. EAT
182
What is the WORST metric for modified input defense? a. CRS b. CRR c. CAV d. CCV
c. CAV CAV - the worst metric for modified input defense
183
Which defense has CCV = 0? a. NAT b. Thermometer Encoding c. EIT d. Region-based Classification
d. Region-based Classification - the RC defense shows the minimum CCV rate.
184
Steps to Start AI Security Project i. select attacks ii. select defenses iii. test attacks vs. defenses a. 1,2,3 b. 3,2,1 c. 2,1,3 d. 2,3,1
Steps to Start AI Security Project: a. 1,2,3 i. select attacks ii. select defenses iii. test attacks vs. defenses
185
How do we know which application to run? -Which application you are targeting -What task it will solve -What is the algorithm category -What is the attacker's goal etc.
Know the application to run by asking questions: -Which application you are targeting -What task it will solve -What is the algorithm category -What is the attacker's goal etc.
186
How do we know which defense to run? -Which attack you are targeting -By what category -What is the algorithm category -What are the restrictions
We know which Defense to run by asking the Question: -Which attack you are targeting -By what category -What is the algorithm category -What are the restrictions
187
Combining Application + Defense Mechanism: -Don't rely on only 1 defense -Ensemble defenses -Use multiple datasets -Use multiple hyperparameters -Use multiple attacks
Combine testing: Application + Defense Mechanism -Don't rely on only 1 defense -Ensemble defenses -Use multiple datasets -Use multiple hyperparameters -Use multiple attacks
188
True or False: Face Recognition could be cheated with the help of special glasses
True Face Recognition could be cheated with the help of special glasses
189
____ the way we read or hear a language
speech perception speech perception- the way we read or hear a language
190
What is the first step in AI security project? a. Identify AI object Task, Threats b. Chose attacks c. Choose Defense d. calculate metrics
a. Identify AI object Task, Threats
191
How AI backdoors problem can be used for good a. captcha protection b. watermarks c. password protection d. privacy protection
b. watermarks Reasoning: Backdoors CAN be used for watermarks Backdoors cannot be used for privacy protection
192
How many AI security articles and methods have been published on arXiv so far?
1000+
193
What is the last step in AI security project a. defense testing b. attack testing c. metric evaluation d. threat modeling
c. metric evaluation