Midterm 3 Flashcards

(60 cards)

1
Q

What are the patterns of laminar interconnections?

A

feedforward connections from LGN to V1; feedforward connections from V1 to V2; feedback connections from V2 to V1

2
Q

Information and processing in the eye

A

rod and cone receptors -> horizontal cells -> bipolar cells -> ganglion cells -> out through optic nerve fibers, then to the LGN in the thalamus

light is transduced into neural spikes, which carry the information through this pathway

3
Q

What does the superior colliculus do?

A

sends visual information on to motor areas and the cerebellum, guiding eye movements and orienting. It is the 'inner eyes'

4
Q

What are the five steps of visual computation?

A

1) rod and cone cells (light detection)
2) horizontal and bipolar cells (preprocessing)
3) ganglion cells (preprocessing)
4) lateral geniculate nucleus (relay station)
5) visual cortex (conscious perception)

5
Q

Rods vs cones

A

rods: discriminate brightness in low illumination; contribute to peripheral vision

cones: discriminate colors; contribute to central vision

6
Q

What is our visual spectrum?

A

400 nm (violet) to 700 nm (red)

7
Q

What are the three types of cone cells (color detectors)?

A

S-cones (short wavelength: blue)
M-cones (medium wavelength: green)
L-cones (long wavelength: red)

8
Q

Why did we develop the system of trichromacy (three cone cells)?

A

it is evolutionarily advantageous to be able to distinguish a wide range of colors. Because the cones' wavelength sensitivities overlap, their combined responses encode many different colors

9
Q

What is the sensory binding problem?

A

How does the brain combine different sensory features into one unified, coherent object?

10
Q

What are the elements of visual computation?

A
  1. receptive field
  2. on-center/off-surround RF
  3. edge detection
  4. orientation detector
  5. location-invariant orientation detector
11
Q

What is the receptive field of a neuron?

A

the area of the retina in which stimulation triggers activity in that neuron

12
Q

What is an on-center/off-surround receptive field?

A

on-center cells: the surrounding inputs to the ganglion cell are inhibitory and dampen the signal from the center. The most excitatory activation (biggest response) occurs when light is concentrated in the center of the receptive field

off-center/on-surround cells have the opposite pattern -> biggest response when light is concentrated in the surround

13
Q

How do the ganglion cells detect edges?

A

the on-center/off-surround RF detects changes in light intensity, and such a change in intensity is what defines an edge (see the sketch below)
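
A minimal sketch of this idea, using an invented 1-D luminance profile and a hand-picked zero-sum center-surround kernel:

```python
import numpy as np

# Dark region (0.0) then bright region (1.0): one edge in the middle
luminance = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)

# On-center/off-surround kernel: excitatory center, inhibitory surround.
# The weights sum to zero, so uniform illumination gives no net response.
kernel = np.array([-0.5, 1.0, -0.5])

# Each output value is one model ganglion cell's response at that position
response = np.convolve(luminance, kernel, mode="valid")
print(response)  # ~0 over uniform regions; a trough/peak pair marks the edge
```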

14
Q

How do orientation detectors work?

A

a neuron has a preferred orientation
selectivity: it responds to (detects) only a relatively narrow range of orientations
graded responses for nearby orientations (neighboring cells share similar orientation preferences); see the sketch below
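
A minimal sketch of such graded tuning, assuming a Gaussian tuning curve and invented values for the preferred orientation (45 degrees) and tuning width (20 degrees):

```python
import math

PREFERRED = 45.0   # degrees; invented preferred orientation
WIDTH = 20.0       # degrees; invented tuning width

def response(stimulus_deg):
    # Angular distance on the 180-degree orientation circle
    d = abs(stimulus_deg - PREFERRED) % 180.0
    d = min(d, 180.0 - d)
    return math.exp(-d**2 / (2 * WIDTH**2))   # peak response = 1.0

for theta in (45, 55, 90):
    print(theta, round(response(theta), 2))   # 1.0, 0.88, 0.08
```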

15
Q

How were the first orientation cells discovered?

A

first detected in cats, where the excitatory vs. inhibitory areas of a selective neuron correspond to its orientation preference

David Hubel and Torsten Wiesel used single-cell recordings in the primary visual cortex of cats

They won the Nobel Prize in Physiology or Medicine (1981)

16
Q

How do orientation detector networks work?

A

'Boolean AND' neuron: a cortical simple cell fires only when all of its LGN input cells are activated. The LGN inputs' receptive fields are aligned along the simple cell's preferred orientation, and cortical cells are organized according to their orientation preference (see the sketch below).
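
A minimal sketch of the AND computation, with the number of LGN inputs and the threshold chosen for illustration:

```python
def simple_cell(lgn_inputs):
    # 'Boolean AND': fire only if every aligned LGN input is active.
    # A threshold of n - 0.5 is exceeded only when all n inputs are 1.
    return 1 if sum(lgn_inputs) > len(lgn_inputs) - 0.5 else 0

print(simple_cell([1, 1, 1]))  # 1: a full edge at the preferred orientation
print(simple_cell([1, 0, 1]))  # 0: partial input, no response
```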

17
Q

How do location-invariant orientation detectors work?

A

detecting the same oriented feature regardless of its exact location (e.g., as an object moves). Many ganglion-cell receptive fields feed LGN neurons, which send signals to cortical simple cells; simple cells with the same orientation preference but different locations then activate a single cortical complex cell. This is a Boolean OR computation (see the sketch below).
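
A minimal sketch of the OR computation over simple cells (illustrative only):

```python
def complex_cell(simple_cell_outputs):
    # 'Boolean OR': fire if ANY simple cell tuned to the preferred
    # orientation (each covering a different location) detects the feature.
    return 1 if any(simple_cell_outputs) else 0

# The same oriented edge at three different positions in the visual field:
print(complex_cell([1, 0, 0]))  # 1
print(complex_cell([0, 0, 1]))  # 1 -> same response: location invariance
print(complex_cell([0, 0, 0]))  # 0: nothing at the preferred orientation
```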

18
Q

What is the visual computational pathway?

A

Retina -> LGN -> V1 -> V2 … -> IT -> anterior IT

in anterior IT, learning generalizes over orientation, location, and form

19
Q

connectionist model approach

A

Brains are parallel, distributed, analog computers, not based on symbolic logic

20
Q

father of neural network modeling (modern AI)

A

David Rumelhart

21
Q

Is a neural network a universal Turing machine?

A

Yes, it can solve any computable problem.

Further, if equipped with appropriate algorithms, the neural network can be made into an intelligent computing machine that solves problems in fast and finite time.

22
Q

Recent learning algorithms for neural networks

A

– Backpropagation learning rule (D. Rumelhart et al., 1986)
– Hebbian learning rule (D. Hebb, 1949)
– Kohonen's self-organizing feature map (1982)
– Boltzmann machine (1986)
– Deep learning network (G. Hinton et al., 2006)

23
Q

Who won the 2024 Nobel Prize in Physics?

A

Geoffrey Hinton & John Hopfield

Hopfield: content-addressable memory (Hopfield networks); Hinton: Boltzmann machines and deep belief nets

24
Q

When was traditional vs modern AI?

A

traditional: 1950-2008 (symbolic system models)
modern: 2008-present (connectionist/neural networks)

25
Q

Mathematics of single-neuron computation

A

Simplifying assumptions:
– Ignore ion-current dynamics and spike timing
– Discrete time "cycles" (~10 ms)
– "Activation" models firing rate (0-100 spikes/s)
– "Weights" model synaptic efficacies

If the sum of weights x inputs exceeds the threshold, the output activation is binary (fires); see the sketch below.

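A minimal sketch of this simplified unit, with invented weights, inputs, and threshold:

```python
import numpy as np

weights = np.array([0.8, -0.4, 0.6])   # synaptic efficacies (hypothetical)
inputs = np.array([1.0, 0.5, 1.0])     # presynaptic activations (hypothetical)
threshold = 0.9

net = np.dot(weights, inputs)              # sum of weights x inputs
activation = 1 if net > threshold else 0   # all-or-none binary output
print(net, activation)                     # 1.2 -> fires (1)
```
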
26
Q

Neural network model of word identification

A

– Word superiority effect: TAKE (identified faster) vs. DAKE (slower)
– Context sensitivity
– Pattern completion

27
Q

Examples of pattern completion by a Hopfield network

A

Noise-tolerant memory: denoising pattern completion (content-addressable memory); see the sketch below.

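A minimal sketch of this denoising behavior, assuming +/-1 units, a single stored pattern, Hebbian (outer-product) weights, and synchronous updates:

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # stored memory
W = np.outer(pattern, pattern).astype(float)       # Hebbian weights
np.fill_diagonal(W, 0)                             # no self-connections

probe = pattern.copy()
probe[[0, 3]] *= -1                                # corrupt two units (noise)

state = probe
for _ in range(5):                                 # let the network settle
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))              # True: memory restored
```

Addressing the memory by (a noisy piece of) its own content is exactly what "content-addressable" means.
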
28
Q

Simple Neural Networks: McCulloch-Pitts (M-P) Nets

A

Binary output neurons, S = 1 (fire) or 0 (silent), with an all-or-none step function f.

A model of artificial neurons that computes Boolean logic functions:

Y = f(W1·X1 + W2·X2 − Θ)

where Y, X1, X2 take binary values (0 or 1); W1, W2, and the threshold Θ take continuous values; and f is the step function. Example: Boolean AND computation.

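For example (standard illustrative values, not given on the card): with W1 = W2 = 1 and Θ = 1.5, Y = f(X1 + X2 − 1.5) is 1 only when X1 = X2 = 1, i.e., Boolean AND; lowering Θ to 0.5 instead yields Boolean OR.
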
29
Q

Knowledge representation, learning, and acquisition of new knowledge in an M-P network

A

Illustrated by the hiring-rule problem:
– Knowledge is in the connection weights (and thresholds)
– Learning occurs through weight modification
– New knowledge is acquired by recruiting other neurons

30
Q

McCulloch & Pitts

A

1943
– First neural network model
– Binary "neurons"
– Boolean logic functions
– No learning

31
Q

D. Hebb

A

1949
– McGill psychobiologist
– Proposed a learning rule (the "Hebb Learning Rule")
– Unsupervised learning (i.e., learning without a teacher; e.g., learning natural categories)

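In its simplest standard form (the card itself gives no equation), the rule strengthens a connection when pre- and postsynaptic units are active together: Δw = η · x_pre · y_post, where η is a learning rate ("cells that fire together wire together").
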
32
Q

F. Rosenblatt

A

1957
– Perceptron: a 2-layer network with the Delta Learning Rule (see the sketch below)

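A minimal sketch of Delta-Rule learning on a linearly separable problem (Boolean AND); the learning rate, epoch count, and starting weights are invented:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)   # Boolean AND targets

w, b, eta = np.zeros(2), 0.0, 0.1          # weights, bias, learning rate

for _ in range(20):                        # epochs
    for x, target in zip(X, t):
        y = 1.0 if w @ x + b > 0 else 0.0  # thresholded output
        error = target - y
        w += eta * error * x               # delta rule: w <- w + eta*(t - y)*x
        b += eta * error

print([int(w @ x + b > 0) for x in X])     # [0, 0, 0, 1] -> AND learned
```
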
33
Q

Can the perceptron solve the convex problem?

A

No. The convexity problem is not linearly separable, and a 2-layer perceptron can only compute linearly separable functions; a third (hidden) layer is needed.

34
Q

First AI revolution

A

Backpropagation Learning Rule (1986)
– Led by David Rumelhart & James McClelland
– Discovered the Backpropagation (BP) Learning Rule for multi-layer networks
– "Any problem" can now be solved!
– Multi-layer networks are Universal Turing Machines ("can compute whatever is computable")
– The biological plausibility of BP has not been established

35
Q

Second AI revolution

A

Deep Neural Networks (DNN) (2006)
– Led by G. Hinton (U. Toronto)
– Possibility of realizing Strong AI

36
Q

Deep neural net

A

A deep neural/learning/belief network is a multi-layer net with many, many hidden layers (e.g., 5-100). DNNs represent the most successful modeling framework to date for solving real-world problems in speech recognition, text-based image retrieval, and language translation.

37
Q

What launched the second revolution?

A

A DNN for hand-written character recognition: Hinton et al. (Neural Computation, 2006)

38
Q

Hierarchical feature representations in DNNs

A

Object features are learned and represented in a hierarchical manner such that each hidden layer represents a different level of features: the higher the layer, the higher-level the feature.

39
Q

How do the two DNNs in AlphaGo interact with each other?

A

DNN1: selects the next "best" move
DNN2: evaluates the odds of winning the match

C2 consciousness (self-reflection, meta-cognition)??

Context: Google DeepMind's match against Lee Sedol

40
Q

Generative Adversarial Networks (GAN)

A

A type of deep learning model that uses an adversarial process, pitting a generator against a discriminator, to learn to generate realistic synthetic data. The generator creates fake data samples, and the discriminator attempts to distinguish them from real data. Example: face-generating AI (see the sketch below).

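A minimal sketch of the adversarial training loop, assuming a toy 1-D Gaussian "real" distribution and tiny PyTorch networks (all sizes and hyperparameters are invented; real face-generating GANs are vastly larger):

```python
import torch
import torch.nn as nn

# Generator maps noise to 1-D samples; discriminator outputs P(real)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: N(4, 1.25^2)
    fake = G(torch.randn(64, 8))             # generator's fake samples

    # Discriminator step: label real as 1, fake as 0
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes "real"
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # drifts toward ~4.0, ~1.25
```
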
41
Q

Godfather of modern AI

A

Geoffrey Hinton

42
Q

Deep Dream

A

DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately overprocessed images.

43
Q

Three approaches to robotics AI

A

1) Model-based (intelligence with representation; symbolic processing), e.g., ACT-R (John Anderson, CMU)
2) Model-free/behavior-based (intelligence without representation; reflexive processing), e.g., BigDog, Roomba (Rodney Brooks, formerly MIT)
3) Neural networks

44
Q

Model-based Robotics/AI Systems: Intelligence with Representation

A

– ACT-R
– SHAKEY the robot

45
Q

What is ACT-R?

A

ACT-R (Adaptive Control of Thought – Rational) is a computational modeling framework developed in the early 1990s by John Anderson at CMU (and refined multiple times since then). ACT-R is a symbolic processing approach to the study of cognition in the tradition of the General Problem Solver (GPS: Newell & Simon, 1959), as opposed to the connectionist (i.e., neural network) approach.

46
Q

What is the current ACT-R?

A

ACT-R 6.0: brain mapping & an instructable production system

47
Q

Organization of ACT-R

A

A modular architecture: modules are connected serially and activated one after another.

sensors -> cognition layer -> ACT-R buffers -> perception layer -> environment -> actuators

48
Q

SHAKEY the robot

A

Developed by an SRI team in the 1960s-70s; the world's first mobile robot capable of perceiving and reasoning about its environment. It could perform tasks requiring planning, route-finding, and rearranging simple objects. Symbol manipulation.

49
Q

Does ACT-R have an explainability problem like AI does?

A

No; we know exactly what it is doing and why.

50
Q

Cogito ergo sum

A

"I think, therefore I am." (Descartes)
– The world is not present, but instead is re-presented in an internal model of the external world (a "virtual world")
– Cognition (information processing) involves only the manipulation of internal (symbolic) representations
– Although an external world exists, it is irrelevant for understanding cognition

51
Q

Behavior-based Robotics/AI Systems: Intelligence without Representation

A

– Cricket robot
– ALLEN the robot
– BigDog
– Roomba
– Atlas humanoid robot

52
Q

Father of behavior-based robotics

A

Rodney Brooks: "The world outside is its own best model."

53
Q

Cricket robot

A

– Sensory inputs from microphones
– Motor outputs to wheels
– Neurons mediate between input and output
– Behavior is jointly determined by the environment and the neuronal connectivity

The speed of the left wheel is proportional to the intensity of the sound at the right microphone; the speed of the right wheel is proportional to the intensity of the sound at the left microphone.

Note the tight coupling between the robot and its environment: the robot does not build an internal model of the world, it just reacts to sound intensities.

54
Q

A better cricket robot

A

– Additional inputs from photoreceptors
– Excitatory auditory neurons
– Inhibitory visual neurons: only allow motion in the dark
– Motor neurons integrate and "decide" (see the sketch below)

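A minimal sketch combining both cricket-robot cards' wiring: crossed excitatory auditory connections plus an inhibitory visual veto. The gain, dark threshold, and sensor readings are invented:

```python
def wheel_speeds(left_mic, right_mic, light_level, gain=1.0, dark_threshold=0.2):
    if light_level > dark_threshold:   # inhibitory visual neurons:
        return 0.0, 0.0                # only allow motion in the dark
    left_wheel = gain * right_mic      # left wheel driven by RIGHT microphone
    right_wheel = gain * left_mic      # right wheel driven by LEFT microphone
    return left_wheel, right_wheel

# Sound louder on the left -> right wheel spins faster -> robot turns left,
# toward the sound. No internal world model; it just reacts.
print(wheel_speeds(left_mic=0.9, right_mic=0.3, light_level=0.0))  # (0.3, 0.9)
print(wheel_speeds(left_mic=0.9, right_mic=0.3, light_level=0.8))  # (0.0, 0.0)
```
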
55
Q

Subsumption architecture

A

Made of layers of autonomous sub-systems that operate simultaneously in parallel, unlike a modular architecture (e.g., ACT-R), which is made of functional sub-systems called modules that operate serially. Parallel decomposition by behaviors.

56
Q

ALLEN the robot

A

Developed by Rodney Brooks in the 1980s. Three levels of behavior:
Layer 1: Avoid contact with other objects (including objects coming at you)
Layer 2: Wander around aimlessly without hitting obstacles
Layer 3: Explore "interesting" places to visit

57
Q

ALLEN's subsumption architecture

A

– Each layer adds new functionality
– Lower layers serve as foundations for upper layers (see the sketch below)

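A minimal sketch of ALLEN's three layers as prioritized behaviors (sensor fields and actions are invented for the demo; Brooks' real architecture runs the layers as parallel finite-state machines):

```python
def layer1_avoid(sensors):
    # Layer 1 reflex: avoid contact with objects (including ones coming at you)
    return "back away" if sensors["obstacle_near"] else None

def layer2_wander(sensors):
    # Layer 2: wander aimlessly (without hitting obstacles, thanks to layer 1)
    return "wander"

def layer3_explore(sensors):
    # Layer 3: head for "interesting" places when one is sensed
    return "explore interesting place" if sensors["interesting_place"] else None

def act(sensors):
    # The avoidance reflex fires whenever triggered; otherwise the highest
    # layer with something to do subsumes (overrides) the layers below it.
    for layer in (layer1_avoid, layer3_explore, layer2_wander):
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle_near": True, "interesting_place": True}))    # back away
print(act({"obstacle_near": False, "interesting_place": True}))   # explore interesting place
print(act({"obstacle_near": False, "interesting_place": False}))  # wander
```
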
58
Q

Where is there subsumption architecture in the brain?

A

Perception-Action Cycles (PACs) in the basal ganglia

59
Q

Who made BigDog and when?

A

Boston Dynamics, 2005-2010

60
Q

Who made Atlas and when?

A

Boston Dynamics, 2025