21. Artificial Neural Networks 2 Flashcards

(20 cards)

1
Q

What does an ANN consist of?

A

A collection of neurons (nodes) organized in layers and connected by weighted links.

2
Q

What is the structure of a feedforward ANN?

A

Composed of an input layer, one or more hidden layers, and an output layer.

3
Q

What is a perceptron?

A

A single-layer neural network with binary output based on weighted input summation.

4
Q

How is a perceptron output computed?

A

Using the activation function: y = f(w·x + b), where w are weights, x inputs, and b the bias.
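The formula above can be sketched in a few lines of Python; the step threshold and the AND weights are illustrative, not from the deck:

```python
def step(z, threshold=0.0):
    """Step activation: 1 if the weighted sum exceeds the threshold, else 0."""
    return 1 if z > threshold else 0

def perceptron(x, w, b):
    """Compute y = f(w·x + b) for a single neuron with step activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return step(z)

# Example: hand-picked weights implementing logical AND.
w, b = [1.0, 1.0], -1.5
print(perceptron([1, 1], w, b))  # 1
print(perceptron([1, 0], w, b))  # 0
```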

5
Q

What activation function does a perceptron use?

A

A step function that outputs 1 if input > threshold, else 0.

6
Q

What are limitations of a perceptron?

A

It cannot solve problems that are not linearly separable, such as XOR.

7
Q

What is the difference between perceptron and multi-layer perceptron (MLP)?

A

MLP has one or more hidden layers, allowing it to solve non-linear problems.
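A minimal illustration: a two-layer network with hand-chosen (not learned) weights computes XOR, which a single perceptron cannot. The hidden layer builds OR and AND, and the output combines them:

```python
def step(z):
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND
    return step(h_or - h_and - 0.5)  # output: OR but not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_mlp(a, b))
```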

8
Q

What are the common activation functions in MLP?

A

Sigmoid, tanh, and ReLU.
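The three activations, sketched in plain Python for reference:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # squashes to (0, 1)

def tanh(z):
    return math.tanh(z)                # squashes to (-1, 1)

def relu(z):
    return max(0.0, z)                 # passes positives, zeroes negatives

print(sigmoid(0.0), tanh(0.0), relu(-2.0), relu(3.0))  # 0.5 0.0 0.0 3.0
```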

9
Q

How does learning occur in an MLP?

A

Via backpropagation, adjusting weights using the gradient of the error.

10
Q

What is backpropagation?

A

A supervised learning technique using gradient descent to minimize output error.

11
Q

What are key components in backpropagation?

A

Forward pass, error computation, backward pass, weight update.
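The four steps can be sketched for a single sigmoid neuron trained on one example with squared error; the initial weight, input, and learning rate below are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.5, 0.0, 0.1
x, target = 1.0, 1.0

# 1. Forward pass
z = w * x + b
y = sigmoid(z)

# 2. Error computation (squared error)
error = 0.5 * (y - target) ** 2

# 3. Backward pass: chain rule, dE/dw = (y - t) * y(1 - y) * x
delta = (y - target) * y * (1.0 - y)
grad_w, grad_b = delta * x, delta

# 4. Weight update (gradient descent)
w -= lr * grad_w
b -= lr * grad_b
```

Since the output undershoots the target here, the gradient step nudges the weight upward.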

12
Q

What is the role of the learning rate?

A

Controls how much the weights are adjusted during training.

13
Q

What happens if the learning rate is too high or too low?

A

Too high: training becomes unstable and may diverge; too low: convergence is slow.
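Both failure modes show up even on the simplest case, gradient descent on f(x) = x², whose gradient is 2x; the learning rates below are chosen just to illustrate:

```python
def descend(lr, steps=20, x=1.0):
    """Run `steps` gradient-descent updates on f(x) = x^2."""
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x^2 is 2x
    return x

print(abs(descend(1.1)))    # too high: magnitude grows each step (diverges)
print(abs(descend(0.001)))  # too low: barely moves toward the minimum at 0
print(abs(descend(0.4)))    # moderate: rapidly approaches 0
```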

14
Q

What is overfitting in ANN?

A

When the model learns training data too well, failing to generalize to new data.

15
Q

How to prevent overfitting?

A

Use regularization, dropout, or early stopping.
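Early stopping, for instance, halts training once validation error stops improving for a set number of epochs. A hedged sketch with simulated validation errors (the values and `patience` setting are illustrative):

```python
def train_with_early_stopping(val_errors, patience=2):
    """Return (best_epoch, best_error), stopping after `patience` epochs
    without improvement in validation error."""
    best, best_epoch, waited = float("inf"), -1, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break  # no improvement for `patience` epochs
    return best_epoch, best

# Validation error improves, then rises as the model starts to overfit.
errors = [0.9, 0.6, 0.4, 0.35, 0.37, 0.41, 0.45]
print(train_with_early_stopping(errors))  # (3, 0.35)
```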

16
Q

What is the vanishing gradient problem?

A

When gradients become too small during backpropagation, slowing or stopping learning.
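One way to see why: the sigmoid derivative y(1 − y) peaks at 0.25, and backpropagation multiplies one such factor per layer, so the gradient shrinks geometrically with depth (best case, ignoring weights):

```python
sigmoid_grad_max = 0.25  # maximum of y * (1 - y), attained at y = 0.5
for layers in (1, 5, 10):
    print(layers, sigmoid_grad_max ** layers)
```

After 10 sigmoid layers the factor is already below one millionth, so early layers barely learn.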

17
Q

Which activation functions are prone to vanishing gradients?

A

Sigmoid and tanh.

18
Q

Which activation function helps reduce vanishing gradients?

A

ReLU (Rectified Linear Unit).

19
Q

What is the purpose of bias in a neuron?

A

Allows the activation function to be shifted left or right.

20
Q

What is the universal approximation theorem?

A

An MLP with one hidden layer can approximate any continuous function on a compact domain to arbitrary accuracy, given sufficiently many hidden neurons.