Exam 2021 and resit Flashcards

1
Q

Difference between cognition and intelligence

A

Cognition is a global process at the system level that integrates many different processing modalities.
Specific cognitive skills such as intelligence, learning, and memory are constituents and synergies of a cognitive system.

2
Q

Difference between bio-inspired and purely computational models

A

Bio-inspired models: Implement cognitive functions by replicating known or hypothesized mechanisms of cognitive processing from biological organisms.
Computational models: Implement cognitive functions based on a functional view of the system without any reference to biology.

3
Q

Name and explain 2 Properties of dynamical systems

A

Dissipation: the number of reachable states reduces over time
Non-equilibrium system: stable functions require external energy supply
Non-linearity: Complex behavior can emerge from a small set of state parameters
Collective variables: the system is represented by a small set of state variables.

4
Q

3 types of cognitive architecture

A

symbolic, emergent, hybrid

5
Q

Explain how ACT-R implements parallel information processing and how it implements serial information processing

A

Parallel Information Processing:

  • information processing within modules can be parallel to support high-dimensional data streams
  • different modules process data in parallel independently from each other

Serial Information Processing:

  • every module buffer can store only a single chunk
  • in each cycle of the production system exactly one rule fires
6
Q

Describe the buffer test and buffer action of the following production rule:

(p type
   =goal>
      isa    goal
      state  enter-number
==>
   +manual>
      cmd    press-key
      key    "2"
)

A

Buffer test: check whether the goal buffer holds a chunk of type goal whose state slot is enter-number.
Buffer action: if so, send a buffer request to the manual module to press the key "2".

7
Q

Explain the purpose of MapSpikeSource and MapSpikeSink devices of a transfer function in the NR platform

A

MapSpikeSink: reads out spikes from the SNN and converts them into float values
MapSpikeSource: Creates spikes from a float value that is fed into the SNN

8
Q

Names of the three main components of biological neuron

A

Soma, dendrites, axon

9
Q

2 visual processing streams and which information is computed along them

A

Dorsal stream: carries information about the location of the object (Where?); in a more recent view, also about how motor actions are performed (How?)
Ventral stream: carries information about the identity of the object (What?)

10
Q

2 main encoding schemes for SNNs and their main advantages

A

rate encoding: robustness against noise

time encoding: fast reaction time
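As a rough illustration of rate encoding, the sketch below encodes a float value into a Poisson-like spike train and decodes it back from the firing rate. It is a minimal sketch only; the function names and parameters are made up for the example.

```python
import numpy as np

def rate_encode(value, max_rate=100.0, duration=0.5, dt=0.001, rng=None):
    """Encode a value in [0, 1] as a spike train whose firing rate scales with the value."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p_spike = value * max_rate * dt               # spike probability per time bin
    return rng.random(int(duration / dt)) < p_spike

def rate_decode(spikes, max_rate=100.0, dt=0.001):
    """Estimate the encoded value from the observed firing rate."""
    return spikes.sum() / (len(spikes) * dt) / max_rate

spikes = rate_encode(0.7)
print(rate_decode(spikes))   # roughly 0.7; the estimate gets noisier for shorter durations
```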

11
Q

State the basic principle of Hebbian learning

A

When the axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process takes place in one or both cells such that A’s efficiency in firing B is increased.
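A minimal Python sketch of this principle for a simple linear rate neuron; `hebbian_step` and its parameters are illustrative assumptions, not part of the lecture material.

```python
import numpy as np

def hebbian_step(w, x, eta=0.01):
    """One Hebbian update: weights grow with the correlation of pre- and postsynaptic activity."""
    y = w @ x                  # postsynaptic activity of a linear rate neuron
    return w + eta * y * x     # delta_w = eta * y * x: "cells that fire together wire together"

w = np.full(3, 0.1)
x = np.array([1.0, 0.5, 0.0])
for _ in range(10):
    w = hebbian_step(w, x)
print(w)   # weights grow only where the input is active; unbounded without a saturation mechanism
```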

12
Q

Fundamental properties of Hebbian learning?

A

Saturation: avoid unbounded growth of synaptic weights
Competition -> selection: avoid all weights converging to the same value
Locality: weight changes depend only on locally available variables

13
Q

Describe the statistical process that takes place when Oja’s rule is applied to a dataset

A
  • the weight vector converges to the first principal component of the input data, i.e. the rule maximizes the variance of the neuron’s output (a form of PCA)
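A small numerical sketch of that process, assuming zero-mean correlated 2-D Gaussian data (the data, constants, and names are invented for the example): applying Oja’s rule sample by sample drives the weight vector toward the leading eigenvector of the data covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated zero-mean 2-D data; the first principal component is roughly the [1, 1] direction
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=5000)

w = rng.normal(size=2)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term minus a decay term that keeps |w| bounded

print(w / np.linalg.norm(w))     # close to the leading eigenvector, i.e. the direction of maximal variance
```
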
14
Q

Name and describe 3 learning paradigms

A
  • unsupervised learning: learning of statistical regularities in the input data without labels
  • supervised learning: learning the mapping between input and output data from predefined labels
  • reinforcement learning: learning which action to choose in a certain situation in order to maximize an external reward signal through interaction with the environment.

15
Q

return

A

sum of future discounted rewards in a Markov reward process
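For illustration, a short computation of the discounted return for a finite reward sequence; the helper name is made up for the example.

```python
def discounted_return(rewards, gamma=0.9):
    """G_t = r_{t+1} + gamma * r_{t+2} + gamma^2 * r_{t+3} + ... for a finite reward sequence."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 0.0, 0.0, 10.0], gamma=0.9))   # 1 + 0.9**3 * 10 = 8.29
```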

16
Q

action policy

A

a function that describes the probability of selecting a specific action conditioned on a state: pi(a|s)
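One common concrete instance is an epsilon-greedy policy derived from action values; the sketch below is an illustration with invented names, not a definition from the lecture.

```python
import numpy as np

def epsilon_greedy(q_values, state, epsilon=0.1, rng=None):
    """pi(a|s): pick the greedy action for this state, but explore randomly with probability epsilon."""
    rng = rng if rng is not None else np.random.default_rng(0)
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values[state])))   # exploratory action
    return int(np.argmax(q_values[state]))               # greedy action

q = {"s0": np.array([0.1, 0.5, 0.2])}
print(epsilon_greedy(q, "s0"))   # usually action 1, occasionally a random action
```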

17
Q

state-value function

A

The state value is the expected cumulative discounted reward when following a specific policy from a specific state.

18
Q

reward

A

numerical signal that implicitly expresses the agent’s goal by encouraging/punishing goal directed/undirected state transitions.

19
Q

action-value function

A

q is the expected cumulative and discounted reward following a specific policy when selecting a specific action in a particular state.

20
Q

approximate RL

A

the agent predicts values with the help of non-linear function approximators.

21
Q

Dynamic Programming (bootstrapping? sampling?)

A

bootstraps, no sampling

-> high width of updating, low depth

22
Q

Monte Carlo

A

sampling, no bootstrapping

low width, high depth

23
Q

Temporal Difference

A

sampling, bootstrapping

low width, low depth
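A minimal tabular TD(0) state-value update (variable names are made up), showing how a single sampled transition is combined with bootstrapping on the next state's estimated value:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """TD(0): update V(s) from one sampled transition, bootstrapping on the current estimate V(s_next)."""
    td_target = r + gamma * V[s_next]
    V[s] += alpha * (td_target - V[s])
    return V

V = {"s0": 0.0, "s1": 0.0}
V = td0_update(V, "s0", r=1.0, s_next="s1")
print(V["s0"])   # 0.1: V(s0) moved one step toward the sampled TD target
```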

24
Q

Explain GPI

A

GPI (Generalized Policy Iteration): almost every RL algorithm alternates between two steps to find an optimal policy

  • policy evaluation: evaluation of a value function depending on a policy
  • policy improvement: improving the policy by acting greedily on the value function

Both depend on each other so both steps need to be performed iteratively until an optimal policy is found.
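As a toy illustration of GPI, the sketch below alternates exact policy evaluation and greedy improvement on a hand-made two-state MDP; all names, transitions, and rewards are invented for the example.

```python
# A tiny deterministic MDP, invented just to illustrate GPI: P[s][a] = (next_state, reward)
P = {0: {0: (0, 0.0), 1: (1, 1.0)},
     1: {0: (0, 0.0), 1: (1, 2.0)}}
gamma = 0.9

def evaluate(policy, V, sweeps=50):
    """Policy evaluation: repeatedly apply the Bellman expectation backup for the current policy."""
    for _ in range(sweeps):
        for s in P:
            s_next, r = P[s][policy[s]]
            V[s] = r + gamma * V[s_next]
    return V

def improve(V):
    """Policy improvement: act greedily with respect to the current value estimates."""
    return {s: max(P[s], key=lambda a: P[s][a][1] + gamma * V[P[s][a][0]]) for s in P}

policy, V = {0: 0, 1: 0}, {0: 0.0, 1: 0.0}
for _ in range(5):            # alternate the two GPI steps until the policy stabilizes
    V = evaluate(policy, V)
    policy = improve(V)
print(policy)                 # {0: 1, 1: 1}: the greedy policy always takes the rewarding action
```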

25
Q

Name and describe the 3 cognitive capabilities

A

Self-reliance: A cognitive system must be able to act in and interact with its environment self-reliantly, purposefully, and independently.
Perception and action: A cognitive system must be able to perceive its environment, make sense of its perceptions, and predict future events.
Adaptation: A cognitive system must be able to adapt to changes of itself, of others, and within the environment.

26
Q

Describe the main characteristics of enactive emergent systems and explain the two principles that they are based on.

A

An enactive cognitive system makes sense of its environment by interacting with it.

Ontogeny: development. The system is structurally coupled with the environment through sense-making and generates its own specific epistemology by capturing the regularities of interaction.

Phylogeny: The interaction between the system and its environment is structurally determined by the innate embodied physical and cognitive capabilities.

27
Q

Define a chunk type for multiplication and implement the equation 11 * 3 as declarative knowledge

A

;; define the chunk type
(chunk-type multiplication factor1 factor2 product)

;; add the fact 11 * 3 = 33 to declarative memory
(add-dm
   (multiply113 isa multiplication
       factor1 11
       factor2 3
       product 33))

28
Q

four main modules of ACT-R

A

Manual, Visual, Intentional, Declarative

29
Q

Thalamus

A

relay and distribution of sensory and motor signals to the different regions of the cortex

30
Q

white vs. grey matter

A

the cerebral cortex, also called grey matter, is the folded outer layer of the cerebrum that is mainly comprised of cell bodies.
The inner part of the cerebrum, the white matter, is a core of nerve fibers that connect the cortical regions.

31
Q

Models that are analog

A

General analog model

McCulloch-Pitts model
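For concreteness, a McCulloch-Pitts unit is a binary threshold on a weighted input sum; the sketch below wires one up as an AND gate, with weights and threshold chosen by hand for the example.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# an AND gate realized as a threshold unit
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts((a, b), weights=(1, 1), threshold=2))
```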

32
Q

SNN models

A

Hodgkin-Huxley model
Leaky integrate-and-fire model
AdEx model
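A minimal leaky integrate-and-fire simulation as an illustration of the second model; the constants and names are chosen ad hoc, not taken from the lecture.

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065, R=1e7):
    """Leaky integrate-and-fire: Euler-integrate the membrane potential, spike and reset at threshold."""
    v, spike_times = v_rest, []
    for step, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * i_t)   # leak toward rest plus driven input
        if v >= v_thresh:                           # threshold crossing: emit a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

spikes = simulate_lif(np.full(2000, 2e-9))          # constant 2 nA input current for 0.2 s
print(len(spikes))                                  # stronger input -> more spikes (a rate code)
```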

33
Q

2 benefits of reducing a neuron model from 4D to 2D

A

less computational complexity, analytical tractability, analyses in the phase plane become possible

34
Q

Why is time-to-first spike more energy efficient on neuromorphic hardware?

A

because a shorter time period needs to be simulated and fewer spikes need to be sent via the communication grid compared to rate-based encoding.

35
Q

Describe BCM theory

A

BCM rules include a sliding threshold on postsynaptic activity: if the cell’s activity is above the threshold, the synaptic weight increases (LTP); if it is below, the weight decreases (LTD).
A second equation governs the dynamics of this threshold, which adapts to the cell’s recent average activity.
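A toy implementation of this idea; the exact update form, constants, and names below are simplified assumptions for illustration only.

```python
import numpy as np

def bcm_step(w, theta, x, eta=0.01, tau_theta=50.0):
    """One BCM update: LTP above the sliding threshold theta, LTD below it."""
    y = w @ x
    w = w + eta * y * (y - theta) * x            # weight grows if y > theta, shrinks if y < theta
    theta = theta + (y**2 - theta) / tau_theta   # the threshold tracks the average squared activity
    return w, theta

rng = np.random.default_rng(0)
w, theta = np.full(2, 0.5), 0.1
for _ in range(1000):
    w, theta = bcm_step(w, theta, rng.random(2))
print(w, theta)   # weights stay bounded because the sliding threshold rises with activity
```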

36
Q

structural plasticity

A

physical creation and deletion of synapses