topic 5 Flashcards

(34 cards)

1
Q

what is an artificial neuron

A
  • an individual building block of a neural network
  • made up of weights, a bias and an activation function
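As a quick illustration (not part of the cards), a single artificial neuron can be sketched in a few lines of Python, assuming a ReLU activation for the example:

```python
def neuron(inputs, weights, bias):
    # transfer (summation) function: weighted sum of the inputs plus the bias
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # activation function (ReLU chosen here for illustration)
    return max(0.0, z)

# with all-zero inputs the bias still lets the neuron produce output
neuron([0.0, 0.0], [0.5, -0.3], bias=1.0)  # 1.0
```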
2
Q

what are weights

A
  • give importance to features that contribute more towards learning
  • represent strength of connections between neurons
3
Q

what is transfer (summation) function

A

combines the weighted inputs into a single value so that the activation function can be applied

4
Q

what is activation function

A

introduces non-linearity to the network
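A minimal sketch of two common activation functions (ReLU and sigmoid are example choices here, not from the card); without a non-linearity like these, stacked layers would collapse into a single linear map:

```python
import math

def relu(z):
    # zero for negative inputs, identity for positive: a non-linear "kink" at 0
    return max(0.0, z)

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))
```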

5
Q

what is bias

A

shifts the input to the activation function, allowing a neuron to produce a non-zero output even when all inputs are zero

6
Q

what is a layer

A

combination of multiple neurons stacked together in a row
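A layer applied to shared inputs is just one weighted sum per neuron; a hypothetical sketch (the weights and biases are made-up examples):

```python
def layer(inputs, weight_rows, biases):
    # one fully connected layer: each row of weights is one neuron
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weight_rows, biases)]

# two inputs feeding a layer of two neurons
layer([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5])  # [1.5, 1.5]
```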

7
Q

what is input layer

A

each neuron corresponds to a feature in the input dataset

8
Q

what are hidden layers

A

intermediate layers that perform computations and extract features from the data

9
Q

what is the output layer

A

maps the learned features from the hidden layers to the final output

10
Q

examples of supervised learning algorithms

A
  • backpropagation
  • gradient descent
  • stochastic gradient descent
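As a toy illustration of gradient descent (minimising f(w) = (w - 3)² is my example, not from the card), repeatedly stepping against the gradient converges to the minimum:

```python
def gradient_descent(start, lr=0.1, steps=100):
    # minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
    w = start
    for _ in range(steps):
        w -= lr * 2 * (w - 3)  # step opposite the gradient
    return w
```

Starting from w = 0, the iterate approaches the minimiser w = 3.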
11
Q

examples of unsupervised learning algorithms

A
  • autoencoders
  • generative adversarial networks (GANs)
12
Q

examples of reinforcement learning algorithms

A
  • Q-learning
  • policy gradient networks
13
Q

types of connection pattern

A
  • feedforward (graphs have no loops)
  • recurrent (loops occur because of feedback)
14
Q

types of weights

A
  • fixed (not changed at all)
  • adaptive (weights are updated throughout training)
15
Q

types of memory unit

A
  • static (memoryless, current output depends on current input)
  • dynamic (output depends on the current input as well as previous outputs)
16
Q

what are feedforward neural networks

A
  • data flows from input layer to output layer without loops
    LOOK AT NOTES FOR PIC OF IT
17
Q

What is a multi-layer perceptron

A
  • an FFNN with fully connected neurons and non-linear activation functions
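A hedged sketch of an MLP forward pass: stacked fully connected layers with a ReLU after each (the layer sizes and weights are arbitrary examples):

```python
def mlp(x, layers):
    # forward pass: each layer is (weight_rows, biases); ReLU between layers
    for weight_rows, biases in layers:
        x = [max(0.0, sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(weight_rows, biases)]
    return x

# a tiny 2-2-1 network
mlp([1.0, -1.0],
    [([[1.0, 1.0], [1.0, -1.0]], [0.0, 0.0]),  # hidden layer
     ([[1.0, 1.0]], [0.0])])                   # output layer
```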
18
Q

what are convolutional neural networks

A
  • extract features from images, such as edges, textures and shapes; these features are used to recognise objects and patterns, and to classify images
19
Q

What are recurrent neural networks

A
  • process sequential data, where order of inputs matters
  • contain loops that allow info to pass from one step to the next
  • suitable for tasks that involve time-series data or sequences of information
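The loop in the bullets above can be sketched as a single-unit recurrent step with hypothetical scalar weights; the hidden state h carries information from earlier steps forward, so input order matters:

```python
import math

def rnn(sequence, w_in=1.0, w_rec=0.5, b=0.0):
    # process the sequence one step at a time; h is the hidden state (the "loop")
    h = 0.0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h + b)
    return h
```

Feeding the same values in a different order gives a different final state, unlike a plain feedforward sum.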
20
Q

what does encoder - decoder model do

A
  • an encoder compresses an input sequence into a context vector, which a decoder then expands into an output sequence
21
Q

what is an encoder

A
  • reads the input sequence
  • summarises the info in a fixed-length (context) vector
  • passes it along as input for the decoder
22
Q

what is a decoder

A
  • interprets context vector and generates output sequence
23
Q

autoencoders

A
  • encoder-decoder models in which the input and output domains are the same
24
Q

large language models

A

designed for natural language processing tasks

25
Q

foundation models

A

ML models trained on broad/vast data
26
Q

generative AI

A

subset of artificial intelligence that creates content
27
Q

generative adversarial network

A
  • consists of 2 competing networks, a generator and a discriminator
  • generator = creates realistic data
  • discriminator = distinguishes real from fake data
28
Q

diffusion model

A

adds noise to training data and then learns how to reverse the process
29
Q

what's fine-tuning

A

adapting pre-trained models to specific tasks by training them on smaller, targeted datasets
30
Q

invariance

A

the output stays the same no matter how the input is symmetry-transformed
31
Q

equivariance

A

the output undergoes exactly the same symmetry transformation as the one applied to the input
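A toy numeric illustration of invariance versus equivariance, using a cyclic shift as the symmetry transformation (my example, not from the cards): summing a sequence is invariant, while elementwise doubling is equivariant:

```python
def shift(xs):
    # symmetry transformation: cyclic shift of a sequence by one position
    return xs[-1:] + xs[:-1]

def total(xs):
    # invariant: shifting the input leaves the output unchanged
    return sum(xs)

def double(xs):
    # equivariant: shifting the input shifts the output the same way
    return [2 * x for x in xs]
```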
32
Q

message passing

A

sharing info between nodes in a graph along the edges that connect them
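A minimal sketch of one message-passing round, assuming each node simply averages its neighbours' scalar features (real GNNs use learned update functions):

```python
def message_pass(features, edges):
    # one round: each node's new feature is the mean of its neighbours'
    # current features (kept as-is if it has no neighbours)
    neighbours = {n: [] for n in features}
    for a, b in edges:  # undirected edges
        neighbours[a].append(b)
        neighbours[b].append(a)
    return {n: (sum(features[m] for m in ns) / len(ns) if ns else features[n])
            for n, ns in neighbours.items()}

# a 3-node path graph: 0 - 1 - 2
message_pass({0: 1.0, 1: 2.0, 2: 3.0}, [(0, 1), (1, 2)])
```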
33
Q

pooling

A
  • a process that aggregates node representations to generate a single graph-level representation
  • lets GNNs learn and predict on the graph as a whole
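A minimal mean-pooling sketch over per-node feature vectors (the vectors are hypothetical examples; other aggregations like sum or max are also common):

```python
def mean_pool(node_features):
    # aggregate node representations into one graph-level vector
    n = len(node_features)
    dim = len(node_features[0])
    return [sum(f[i] for f in node_features) / n for i in range(dim)]

# two nodes, each with a 2-dimensional representation
mean_pool([[1.0, 2.0], [3.0, 4.0]])  # [2.0, 3.0]
```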
34
Q

geometric deep learning

A

neural network architectures that incorporate and process symmetry info