Final Exam Flashcards

1
Q

What makes a robot?

A

The interactive aspect is important. A robot senses, thinks, acts, and communicates.

2
Q

Which kinds of robots are there?

A

Physical manipulators and Social manipulators.

3
Q

Social manipulator

A

Social manipulators manipulate the social world. They communicate using the same interaction modalities used between people, and they have limited mobility and manipulation capabilities.

4
Q

Physical manipulator

A

Physical manipulators manipulate the physical world. They have good manipulation abilities to interact with the world; they do not always have to be mobile.

5
Q

Autonomy for living agents

A

The degree to which the agent determines its own goals.

6
Q

Autonomy for robots

A

The degree to which there is no direct user control. Goals are pre-determined by programming.

7
Q

Cognitive architecture

A

Embodiments of scientific hypotheses about aspects of human cognition that are relatively constant over time and independent of task.

8
Q

Agents

A

A system that is situated in some environment and capable of autonomous action in this environment in order to meet its goals.

9
Q

Cognitive model

A

Cognitive architecture + knowledge

10
Q

Cycle in agents

A

perceive environment -> think -> act

11
Q

Natural motivations in humans

A

Beliefs, Goals, and Intentions

12
Q

Asimov’s 3 laws of robotics

A
  1. A robot may not injure a human or, through inaction, allow a human to come to harm.
  2. A robot must obey orders except when they conflict with the 1st law.
  3. A robot must protect itself as long as this does not conflict with the 1st and 2nd laws.
13
Q

Intelligence

A
  • autonomy = operate without human intervention and have some control over actions and internal state
  • social ability = interact with other agents
  • reactivity = perceive and respond to environment
  • pro-activeness = exhibit goal-directed behavior
14
Q

Intentional system (1st and 2nd order)

A

Behavior can be predicted by attributing intentional notions/mental states.

A 1st-order system has beliefs and desires and rational acumen; a 2nd-order system additionally has beliefs and desires concerning the beliefs and desires of itself and other agents.

15
Q

Symbolic reasoning agents

A

knowledge-based system, symbolic representation of world, decision via symbolic reasoning, behavior according to rules

16
Q

Abstract architecture

A

For symbolic agents. Abstract representations of knowledge such as symbols, predicates, or logical formulas; for example environments, actions, and runs (alternating sequences of states and actions).

17
Q

Planning

A

Reason about sequences of actions and possible outcomes.

18
Q

Deductive agent

A

Agent that acts by deducing appropriate action from logical formulas describing the current state and set of rules.

19
Q

3 problems with symbolic reasoning agents

A

Frame problem, Transduction problem and Representation/reasoning problem.

20
Q

Frame problem

A

Figuring out which statements are necessary and sufficient to describe the environment for a symbolic agent.

21
Q

Representation/reasoning problem

A

Figuring out how to symbolically represent info about complex world and processes (symbolic framework) and how to reason with it.

22
Q

Transduction problem

A

Figuring out how to translate the real world into a symbolic description that is accurate and adequate.

23
Q

Practical reasoning in agents

A

The process of figuring out what action to do.

24
Q

Rational agents

A

Committed to doing what they intend/plan, provided it is feasible.

25
Q

BDI architectures

A

Beliefs, Desires, Intentions controller. Associated with symbolic agents. Agents are modeled based on beliefs, desires, and intentions. Takes into account that everything has a time cost.
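A toy BDI cycle can make this concrete. This is a minimal sketch: the goals ("recharge", "explore") and the plan table are invented examples, not a course implementation.

```python
# Toy BDI cycle (illustrative sketch; goals and plan table are invented):
def bdi_step(beliefs, desires, intentions):
    # Deliberation, part 1: generate options from beliefs and intentions.
    options = [d for d in desires if beliefs.get(d) != "achieved"]
    # Deliberation, part 2: filter -- commit to the first open option.
    intentions = [options[0]] if options else []
    # Means-ends reasoning: map the committed intention to a plan/action.
    plans = {"recharge": "go_to_dock", "explore": "move_randomly"}
    action = plans[intentions[0]] if intentions else "idle"
    return intentions, action

beliefs = {"recharge": "achieved"}
intentions, action = bdi_step(beliefs, ["recharge", "explore"], [])
# "recharge" is already achieved, so the agent commits to "explore"
```

In a real BDI loop this step would run repeatedly, with bounded time per cycle.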

26
Q

Deliberation

A

Deciding what state of affairs we want to achieve; these become intentions.

27
Q

Two components of deliberation

A

Option generation based on current beliefs and intentions, and filtering which options to commit to (these become the new intentions).

28
Q

Means-ends reasoning

A

Deciding how to achieve intentions and re-plan when needed.

29
Q

Desires

A

Like goals (reasons for doing things) and/or options for the agent.

30
Q

Intentions

A

Desires to which the agent is committed.

31
Q

What are the roles of intentions?

A

Drive means-ends reasoning, persist, constrain future deliberation, and influence beliefs.

32
Q

Beliefs

A

Assumptions; the current state of the world according to the agent.

33
Q

Intention-belief inconsistency

A

Having an intention which you believe you will not achieve.

34
Q

Intention-belief incompleteness

A

Having an intention without believing that the necessary prerequisites will happen.

35
Q

Blind commitment (intentions)

A

Continue to maintain an intention until it has been achieved.

36
Q

Single-minded commitment (intentions)

A

Maintain an intention until the agent believes that it has either been achieved or is no longer possible to achieve.

37
Q

Reactive robots

A

Intelligent behavior emerges from the interaction of simpler behavior systems. Perception is critical: actions are decided very quickly based on percepts. No symbolic representation and reasoning.

38
Q

Open-minded commitment (intentions)

A

Maintain an intention as long as it is still optimal.

39
Q

Affordances

A

Being able to directly perceive the action possibilities of objects.

40
Q

Ecological niche

A

Goals, world and sensorimotor possibilities

41
Q

Reflexive behavior

A

Relevant in reactive robots: reflexes, taxes, fixed-action patterns, and sequencing of innate behaviors.

42
Q

Reflexes

A

Simple involuntary response to a specific event/stimulus, proportional to its duration and intensity (hardwired).

43
Q

Taxes

A

Movement in relation to a stimulus at a particular orientation; navigation based on taxes.

44
Q

Fixed-action patterns

A

Action sequence of rigid order, continues until completion. Not result of prior learning.

45
Q

Sequencing of innate behaviors

A

Behavior coordination mechanisms through external environmental stimuli.

46
Q

Concurrent behaviors + types

A

Multiple behaviors are active concurrently: equilibrium, dominance, or cancellation.

47
Q

Brooks’ key propositions for reactive robots

A
  1. Intelligence is emergent and can be generated.
  2. The world is its own best model; you have to sense it appropriately and often enough.
48
Q

Subsumption architecture

A

Paradigm by Brooks for reactive robots. Layered control structures where simpler behaviors have precedence over complex ones (hierarchy).
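The layering can be sketched as a priority-ordered list of behaviors, where a triggered higher-precedence layer suppresses the ones below it. A minimal sketch; the two behaviors are invented examples, not Brooks' original code.

```python
# Minimal subsumption-style sketch (illustrative): behaviors are checked in
# precedence order; the first one whose trigger fires suppresses all layers
# below it.

def avoid_obstacle(percepts):
    # Survival-level behavior: fires only when an obstacle is close.
    return "turn_away" if percepts.get("obstacle_close") else None

def wander(percepts):
    # Lowest layer: always applicable.
    return "move_forward"

LAYERS = [avoid_obstacle, wander]  # earlier = higher precedence

def select_action(percepts):
    for behavior in LAYERS:
        action = behavior(percepts)
        if action is not None:
            return action  # this layer subsumes everything below it

action_free = select_action({})                           # wandering
action_blocked = select_action({"obstacle_close": True})  # avoidance takes over
```

Note there is no world model here: the action follows directly from the current percepts.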

49
Q

Advantages of reactive agents

A
  1. iterative construction
  2. behaviors from reactive to pro-active
  3. simple rule-like behaviors
  4. only few hard-coded assumptions, good in dynamic environments
50
Q

Disadvantages of reactive agents

A
  1. hard to engineer overall behavior that needs to emerge from simple behaviors
  2. they avoid internal model/symbolic representation (which are sometimes needed)
51
Q

Hybrid agents

A

Combining symbolic and reactive agents with layered architecture.

52
Q

2 systems of hybrid agents

A
  1. deliberative system = symbolic world model, develop plans and decisions
  2. reactive system = reacting to events without complex reasoning (has precedence)
53
Q

PID controller

A

Proportional, Integral, Derivative. Tries to keep a certain variable at a set value.
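The three terms can be combined in a minimal discrete-time PID controller. This is a sketch; the gains and the trivial first-order plant in the usage example are illustrative assumptions, not values from the course.

```python
class PID:
    """Minimal discrete-time PID controller (illustrative sketch)."""

    def __init__(self, kp, ki, kd, set_point):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0      # history of the error (I-term)
        self.prev_error = None   # previous error, for the rate of change (D-term)

    def update(self, measurement, dt):
        error = self.set_point - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: drive a simple first-order plant toward set point 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, set_point=1.0)
state, dt = 0.0, 0.1
for _ in range(200):
    state += pid.update(state, dt) * dt  # actuate the toy plant
# state has settled close to the set point
```

Tuning the three gains trades off speed, oscillation, and steady-state error, as the later P/I/D pro-and-con cards describe.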

54
Q

Proportional control

A

K_P * e(t), where e(t) is the error signal.

55
Q

Derivative control

A

The rate of change of the error: K_D * de(t)/dt.

56
Q

Critical damping

A

Decreases the error quickly and corrects it to the set point; never oscillates.

57
Q

Integral control

A

The history of the error: K_I * ∫ e(t) dt, the integral of the error over time.

58
Q

Kalman filter

A

Estimates the state of the system as precisely as possible.

59
Q

Why is a Kalman filter needed?

A

It is a good way to obtain a reasonably good guess about the actual state of the system when all the information we have is noisy (as with a PID controller).

60
Q

What assumption do we have while using a Kalman filter?

A

That the noise is Gaussian (normally) distributed.

61
Q

How does Kalman filter work?

A

Combine the estimate of the current predicted state and the measured state by multiplying their distributions; the result is the new estimate. The new mean is a combination of the means of both estimates, weighted by their uncertainties.
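In one dimension, multiplying the two Gaussians works out to the closed form below. A sketch; the example numbers are invented.

```python
def kalman_update(mean_pred, var_pred, mean_meas, var_meas):
    """Fuse a predicted state and a measurement (both 1-D Gaussians) by
    multiplying their densities. The new mean is a precision-weighted
    average of the two means; the new variance is smaller than either."""
    new_mean = (var_meas * mean_pred + var_pred * mean_meas) / (var_pred + var_meas)
    new_var = (var_pred * var_meas) / (var_pred + var_meas)
    return new_mean, new_var

# Prediction says 2.0 (uncertain), sensor says 3.0 (more certain):
mean, var = kalman_update(mean_pred=2.0, var_pred=4.0, mean_meas=3.0, var_meas=1.0)
# -> mean 2.8 (pulled toward the more certain measurement), var 0.8
```

The more certain source dominates: here the measurement has the smaller variance, so the fused mean lands closer to it.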

62
Q

Kinematics

A

The study of how things move.

63
Q

Forward kinematics

A

From control signal to position of the end effector.
From robot configuration in joint space to location in task space.
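For a planar two-link arm, forward kinematics has a simple closed form. A sketch; the link lengths `l1`, `l2` are illustrative assumptions.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Joint space -> task space for a planar 2-link arm: given the joint
    angles (radians), return the (x, y) position of the end effector."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both angles zero: arm fully stretched along the x-axis.
x, y = forward_kinematics(0.0, 0.0)  # -> (2.0, 0.0)
```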

64
Q

Inverse kinematics

A

From position of end effector to control signal.
From location in task space to configuration in joint space.
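For the same kind of planar two-link arm, an analytic inverse-kinematics solution exists, and it directly shows the "none or multiple solutions" issue: `acos` picks only one of the two elbow configurations, and out-of-reach targets have no solution at all. A sketch with illustrative link lengths.

```python
import math

def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    """Task space -> joint space for a planar 2-link arm (one elbow
    configuration only). Returns (theta1, theta2), or None when the
    target is out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None  # no solution: target unreachable
    theta2 = math.acos(c2)  # the mirrored elbow (-theta2) also solves it
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

sol = inverse_kinematics(2.0, 0.0)          # stretched arm -> (0.0, 0.0)
unreachable = inverse_kinematics(5.0, 0.0)  # beyond reach -> None
```

The inverse trigonometry (`acos`, `atan2`) here is exactly what makes IK non-linear and harder than FK.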

65
Q

Task space

A

Frame of reference in the world, typically with cartesian x,y,z coordinates.

66
Q

Joint space

A

State of joints of a robot as an angle with respect to its own frame of reference.

67
Q

Why is inverse kinematics much more complex than forward kinematics?

A
  • there could be no solution, or multiple solutions, for the required control signal
  • non-linear; inverse trigonometry needed
  • not all joints work fully in all configurations
68
Q

When forward kinematics is also challenging (non rigid bodies), how do we learn those kinematics?

A

We learn them through motor babbling, demonstration or prediction.

69
Q

Motor babbling

A

Learning a mapping of which commands cause which actions, by keeping track of the sensory consequences of the motor commands sent.
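A minimal sketch of the idea, with an invented actuator (`true_plant`) standing in for the unknown body: send random commands, record each sensory consequence, then use the recorded pairs as a learned inverse mapping.

```python
import random

def true_plant(command):
    # The actuator response, unknown to the robot itself.
    return 2.0 * command

def babble(n_trials=100, seed=0):
    """Motor babbling: try random commands, record (command, outcome) pairs."""
    rng = random.Random(seed)
    mapping = []
    for _ in range(n_trials):
        cmd = rng.uniform(-1.0, 1.0)  # random motor command
        mapping.append((cmd, true_plant(cmd)))  # observed sensory consequence
    return mapping

def choose_command(mapping, desired_outcome):
    # Nearest-neighbour lookup over the babbled experience.
    return min(mapping, key=lambda co: abs(co[1] - desired_outcome))[0]

mapping = babble()
cmd = choose_command(mapping, desired_outcome=1.0)  # should be near 0.5
```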

70
Q

Learning from demonstration

A

Demonstrate desirable movement to robot.

71
Q

Learning to predict consequences of actions

A

Online/offline prediction: already knowing the response to an action given a percept.

72
Q

Forward model

A

Allows you to predict the outcomes of possible actions without carrying them out (good for planning).

73
Q

Inverse model

A

Allows you to determine what actions will achieve a specific goal.

74
Q

HAMMER

A

Hierarchical Attentive Multiple Models for Execution and Recognition: a robot control architecture with forward and inverse models.

75
Q

Morphological computation

A

The computations of the body.

76
Q

Artificial agents

A

Agents that do what they are programmed to do, but without constant remote control.

77
Q

What are the benefits in adopting the intentional stance to artificial systems?

A
  1. Humans naturally attribute mental states to systems anyway
  2. Low level explanations are not enough for complex artificial agents acting in complex environments
  3. Makes sense for programmers to think of these notions to capture what intended behavior is
78
Q

Physical stance

A

Explain behavior through laws of physics.

79
Q

Design stance

A

Explain behavior through knowledge of purpose of the system.

80
Q

Intentional stance

A

Explain behavior through terms of mental properties.

81
Q

Environment in abstract architecture

A

Triplet: Env = (E, e0, t)
E = set of possible states
e0 = initial state
t = state transformer, maps a run ending in action to possible next states

82
Q

Implementation of abstract architecture (5 steps)

A
  1. start in initial internal state
  2. observe environment -> percepts
  3. update internal state
  4. select appropriate action
  5. repeat
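The five steps can be written as a small concrete loop. Illustrative only: the list-of-numbers "environment" and the stop-at-3 rule are invented stand-ins.

```python
def run_agent(environment, steps=10):
    state = None                      # 1. start in the initial internal state
    history = []
    for _ in range(steps):            # 5. repeat
        percept = environment.pop(0)  # 2. observe environment -> percept
        state = percept               # 3. update internal state
        action = "stop" if state >= 3 else "continue"  # 4. select action
        history.append(action)
        if action == "stop":
            break
    return history

history = run_agent([1, 2, 3, 4])  # stops once it has perceived 3
```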
83
Q

Synthesis problem

A

Given a task environment, automatically find an agent that can solve it. We want agents that perform well in a given environment.

84
Q

Core problem symbolic agents

A

They rely on a complete description of the world in some formal language.

85
Q

Practical reasoning for humans

A

Deliberation and means-ends reasoning (= planning)

86
Q

Strategies in intention reconsideration

A

Bold agents (never reconsider) and cautious agents (reconsider after every action).

87
Q

Embodied cognition

A

Bodily interaction with the environment is primary to cognition.

88
Q

IRM

A

Innate Releasing Mechanism = releaser of control signal that can be triggered.

89
Q

Brooks’ reactive robots

A

Basic behaviors interact through inhibition and suppression. Reactive paradigm (sense -> act). No representational model of the world.

90
Q

What is needed for a PID controller?

A
  • set point: goal state of variable
  • way of measuring the error
  • way of reducing the error
91
Q

D-term

A

Based on the D-term in a PID controller, you can predict future values of the error, assuming the current trend continues.

92
Q

Damping

A

Combine P-term with D-term to dampen influence of P-term.

93
Q

Pro and con of P-gain

A

Pro: can make system more accurate and respond more rapidly
Con: can lead to oscillatory movement

94
Q

Pro and con of D-gain

A

Pro: can reduce oscillation in system
Con: can slow response down

95
Q

Pro and con I-gain

A

Pro: can eliminate constant errors
Con: will most likely destabilize the system

96
Q

How does HAMMER work?

A
  1. the inverse model receives information about the current state + target goal
  2. it outputs the motor commands needed to achieve the target goal
  3. the forward model provides an estimate of the upcoming states after those commands
  4. the prediction error is returned to the inverse model
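The loop can be sketched for a one-dimensional toy system. Illustrative only: the linear inverse/forward models and the imperfect actuator are invented stand-ins, not the actual HAMMER implementation.

```python
def inverse_model(state, goal):
    # Steps 1-2: motor command needed to move from state to goal.
    return goal - state

def forward_model(state, command):
    # Step 3: predicted upcoming state after issuing the command.
    return state + command

def hammer_step(state, goal, execute):
    command = inverse_model(state, goal)
    predicted = forward_model(state, command)
    actual = execute(state, command)   # act in the (imperfect) world
    error = actual - predicted         # step 4: prediction error, fed back
    return actual, error

# An actuator that only delivers 90% of the commanded motion:
state, err = hammer_step(0.0, 5.0, execute=lambda s, c: s + 0.9 * c)
# actual state 4.5, prediction error -0.5
```

The prediction error would then be used to correct the inverse model on the next cycle.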