Task 3- M&M Flashcards

1
Q

Keyword: BCI

-> What is it?

A

BCIs use brain activity to control external devices, thereby enabling severely disabled patients to interact with their environment

2
Q

Which two kinds of BCIs are there?

A
  • Assistive BCIs are designed to enable paralyzed patients to communicate or to control external robotic devices, such as prosthetics
  • Rehabilitative BCIs (also called restorative or neurofeedback-based BCI systems) are designed to facilitate recovery of neural function
  • Independently of this distinction, any BCI can be invasive or non-invasive
3
Q

Keyword: Connectionism

A
  • Connectionist modelling is inspired by information processing in the brain (neurally inspired)
  • Several layers of processing units
  • More emphasis on connections than on any single node
  • Unit: a neuron or group of neurons
  • Each unit sums the input from units in the previous layer, performs a computation (e.g. deciding whether it is above a threshold) and passes the result to units in the next layer
  • The influence of one layer on the next depends on the strengths of the connections between them
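The layered flow described above can be sketched in a few lines of code (illustrative only; the weights, threshold value and layer sizes are made up for the example):

```python
# A unit sums weighted input from the previous layer, applies a threshold
# decision, and passes the result on to the next layer.

def unit_output(prev_activities, weights, threshold=0.5):
    """One processing unit: weighted sum of previous-layer activity,
    then a binary threshold decision."""
    net = sum(a * w for a, w in zip(prev_activities, weights))
    return 1.0 if net > threshold else 0.0

def layer_output(prev_activities, weight_matrix, threshold=0.5):
    """A layer is many units, each with its own weight vector."""
    return [unit_output(prev_activities, row, threshold) for row in weight_matrix]

prev = [1.0, 0.0, 1.0]
weights = [[0.4, 0.2, 0.3],   # unit 1: net = 0.7 -> above threshold, fires
           [0.1, 0.9, 0.1]]   # unit 2: net = 0.2 -> below threshold, silent
print(layer_output(prev, weights))  # [1.0, 0.0]
```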
4
Q

What is the aim of connectionism?

A

To see whether models based on these principles can perform the computations which we know the brain can perform

5
Q

What are the principles on which connectionism is built?

A
  • Neuron passing info to other neurons
  • Learning changes strength of connections between neurons
  • Cognitive processes: basic computations are performed in parallel by many neurons
  • Information: distributed across many neurons and many connections
6
Q

What are the 5 assumptions on which connectionist models are based?

A
  1. Neurons integrate information: they get input from other neurons -> send output -> play a functional role like that of real neurons
  2. Neurons pass information about the level of their input: the activity level is transmitted as a single value; the higher the input, the higher the activity level
  3. Brain structure is layered: activity passes through a sequence of physically independent structures: input layer, middle layer, output layer; hidden units: units in the middle layer, which neither receive input directly nor produce the network’s response
  4. The influence of one neuron on another depends on the strength of the connection between them, also called the weight of the connection
  5. Learning is achieved by changing the strengths of connections between neurons
7
Q

What are the symbols and elementary equations?

  • activity
  • weight
  • input
  • netinput
  • activation function
  • learning by weight change
  • bias
A
  1. Activity: symbol a
  2. Weight: symbol w
  3. Input: input from unit j to unit i: input(i,j) = a(j) * w(i,j), where w(i,j) is the weight of the connection
  4. Netinput: the net input to unit i is found by summing the input from all the units which send input to it
  5. Activation function (see below)
  6. Learning by weight change: the ability to learn from experience -> delta rule (see below)
  7. Bias: receives no input itself; its activity is always set at +1; the input it provides can be positive or negative, depending on its weight; the effect of the bias is to make the unit it is connected to active if the weight is positive, or inactive if the weight is negative; a positive bias represents the base firing rate of a neuron with a high spontaneous firing rate; a negative bias can be seen as a threshold (other input must exceed it before the unit will fire)
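The netinput and bias symbols above can be written out directly (a minimal sketch; the numeric values are illustrative):

```python
# netinput(i) = sum over j of a(j) * w(i,j), plus the bias term.
# The bias unit's activity is fixed at +1, so its contribution is just
# its weight; a negative bias weight acts as a threshold the other
# input must overcome.

def netinput(activities, weights, bias_weight=0.0):
    """Net input to unit i: sum of a(j) * w(i,j) plus +1 * w_bias."""
    return sum(a * w for a, w in zip(activities, weights)) + 1.0 * bias_weight

# With a bias weight of -0.5, the weighted input (0.62) is reduced to 0.12:
print(netinput([0.5, 0.8], [0.6, 0.4], bias_weight=-0.5))
```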
8
Q

Keyword: activation functions

A

To know what activity level unit i achieves, we need to know the relation between the net input and the activity level it produces; there are several possible activation functions

9
Q

What are the 4 types of activation functions?

A
  1. Linear function: linear relation between net input and activity level
  2. Threshold linear function: has a threshold, like real neurons; silent below it, linear above it
  3. Binary threshold function: treats neurons as two-state devices (either on or off); once the threshold is reached, the neuron is maximally activated
  4. Sigmoid function: the range of possible activity is set, arbitrarily, from 0 to 1. When the net input is large and negative the unit has an activity level close to zero. As the input becomes less negative the activity increases, gradually at first and then more steeply, before levelling off and asymptoting at the maximum value (similar to real neurons: a lower threshold, a roughly linear range, and a maximum firing rate)
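The four functions can be sketched as one-liners (the threshold values and slopes are arbitrary choices for the example):

```python
import math

def linear(net):
    # 1. Linear: activity is simply the net input.
    return net

def threshold_linear(net, threshold=0.0):
    # 2. Threshold linear: silent below the threshold, linear above it.
    return max(0.0, net - threshold)

def binary_threshold(net, threshold=0.0):
    # 3. Binary threshold: two-state, on or off; maximally active past threshold.
    return 1.0 if net > threshold else 0.0

def sigmoid(net):
    # 4. Sigmoid: rises smoothly from 0 to 1, asymptoting at both extremes.
    return 1.0 / (1.0 + math.exp(-net))

print(sigmoid(-10), sigmoid(0), sigmoid(10))  # near 0, exactly 0.5, near 1
```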
10
Q

Keyword: Graceful degradation

  • …is an example of?
  • What does it mean?
  • What is the effect of it?
A
  • example of fault tolerance
  • ability of brains and connectionist models to continue to produce a reasonable approximation to the correct answer following damage (rather than undergoing catastrophic failure)
  • minor damage -> causes a small change in the response to many inputs, rather than a total loss of function
11
Q

How is graceful degradation possible?

A

Due to distributed knowledge representation and computation throughout the brain/network
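This can be demonstrated in a short sketch (all values illustrative): when knowledge is spread over many connections, zeroing a few of them shifts the output slightly instead of destroying it.

```python
import random

def output(activities, weight_matrix):
    """Linear output units: each sums weighted input from all input units."""
    return [sum(a * w for a, w in zip(activities, row)) for row in weight_matrix]

random.seed(0)
inputs = [1.0] * 20
weights = [[0.05] * 20 for _ in range(3)]  # knowledge spread over 20 connections per unit

intact = output(inputs, weights)           # each unit: 20 * 0.05, i.e. about 1.0

# "Minor damage": cut 2 of the 20 connections to each output unit.
for row in weights:
    for j in random.sample(range(20), 2):
        row[j] = 0.0

damaged = output(inputs, weights)          # each unit: 18 * 0.05, i.e. about 0.9
print(intact, damaged)
```

The damaged network still answers roughly correctly; a lookup-table representation, by contrast, would lose those entries entirely.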

12
Q

Keyword: delta rule

A
  • if the response of an output unit is incorrect -> the network can be changed so that it is more likely to produce the correct response next time
  • any change in connection weights will change the activity level of units in the next layer
  • an output unit whose activity is too low can be corrected by increasing the weights of connections from units in the previous layer which provide a positive input to it, and by decreasing the weights of the connections which provide a negative input (and vice versa if its activity is too high)
  • -> generalized by backpropagation to networks with hidden layers
  • -> error term: desired minus actual output
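The rule above can be written in one line (a minimal sketch; the learning rate and example values are made up): each weight moves in proportion to the error and to the activity of the sending unit.

```python
def delta_rule_update(weights, inputs, desired, actual, lr=0.1):
    """Delta rule: w_new = w + lr * (desired - actual) * a(j)."""
    error = desired - actual
    return [w + lr * error * a for w, a in zip(weights, inputs)]

# The output fired (actual 1.0) when it should have stayed silent (desired 0.0),
# so weights from the active sending units are decreased.
print(delta_rule_update([0.2, 0.8], [1.0, 1.0], desired=0.0, actual=1.0))
```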
13
Q

How are the human nervous system and the ANN similar to each other? (6 similarities)

A

• Both still perform reasonably well after minor damage to components of the system, or if their input is noisy or inaccurate (graceful degradation)
• Both allow memory retrieval by content
• Knowledge representation is distributed across many processing units -> information and computation are spread across the network
• Computations take place in parallel across these distributed representations
• Prototype theory: both form categories and learn through generalization
• Both learn from errors

14
Q

How do BCIs work?

A
  • brain signals that are detected are amplified, filtered and decoded using online classification algorithms.
  • brain signals are classified according to relevant characteristics, filtered and smoothed before being fed back to users as a reward -> increases the probability that they will reproduce the rewarded brain response.
  • After processing and decoding brain signals, the output of the BCI can be used to control movement of a prosthesis, orthosis, wheelchair, robot or cursor, or to direct electrical stimulation of muscles or the brain
15
Q

How does (machine) learning take place in ANN?

A

Takes place by changing the weights of the connections leading to all output units which have an incorrect level of activity -> delta rule
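Putting this together, a complete (if tiny) training loop looks as follows: a single binary-threshold unit with a bias learns logical OR by repeated delta-rule updates. This is an illustrative sketch, not a model from the source; the learning rate, epoch count and task are arbitrary.

```python
def step(net):
    """Binary threshold activation."""
    return 1.0 if net > 0 else 0.0

patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # inputs -> OR
weights = [0.0, 0.0]
bias = 0.0          # bias unit: activity fixed at +1
lr = 0.1

for _ in range(20):                      # epochs
    for inputs, desired in patterns:
        actual = step(sum(a * w for a, w in zip(inputs, weights)) + bias)
        error = desired - actual         # delta rule error term
        weights = [w + lr * error * a for w, a in zip(weights, inputs)]
        bias += lr * error               # the bias input is always +1

results = [step(sum(a * w for a, w in zip(p, weights)) + bias) for p, _ in patterns]
print(results)  # [0.0, 1.0, 1.0, 1.0] -- the unit has learned OR
```

Because OR is linearly separable, this single-layer update converges; tasks like XOR need hidden units and full backpropagation.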

16
Q

Backpropagation

A

An algorithm for training feedforward neural networks by supervised learning; it propagates the output error backwards through the network so that the weights of connections to hidden units can also be adjusted