Exam Questions Flashcards

(21 cards)

1
Q

Compatibilism about free will means…whereas incompatibilism means….

A
  • Compatibilism: Free will is compatible with determinism.
  • Incompatibilism: Free will and determinism cannot both be true.
2
Q

What is the simulation hypothesis? (Each version)

A

We might be in a computer simulation.
- Metaphysical: We are in a simulation.
- Epistemic: We can’t know if we are or not.
- Probabilistic (Bostrom): It’s likely we are.

3
Q

What are the two responses to the skeptical argument regarding simulations?

A
  • Dismiss (doesn’t matter practically)
  • Accept (but say we still have knowledge and morality either way)
4
Q

What is the mind-body problem?

A

How can the mind (non-physical) and body (physical) interact?

5
Q

What are dualism, physicalism, mind-brain identity theory, and functionalism?

A

Theories of mind:
- Dualism: Mind ≠ body (they’re separate)
- Physicalism: Mind = physical brain
- Mind-Brain Identity: Each mental state = brain state
- Functionalism: Mind = what the brain does (functions/roles)

6
Q

What are platonism, psychologism, and fictionalism about mathematical objects?

A

Theories of mathematical objects:
- Platonism: Numbers exist independently (abstract realm)
- Psychologism: Math = human thoughts
- Fictionalism: Numbers are useful fictions (like stories)

7
Q

What are utilitarianism and deontology?

A
  • Utilitarianism: Do what maximizes happiness or good outcomes
  • Deontology: Follow rules/duties no matter the outcome
8
Q

What does utilitarianism recommend in cases like the Trolley case? What about deontology?

A
  • Utilitarianism: Pull the lever (save more lives)
  • Deontology: Don’t pull (killing is always wrong)
9
Q

What implications does the simulation hypothesis have for the theory of mind?

A

If we’re in a simulation, minds might not depend on biological brains.

10
Q

What is the core reason for thinking that mental states are not just brain states?

A

Because mental states have meanings and feel a certain way (qualia), which goes beyond mere neurons firing.

11
Q

What is the consequence argument — what view does it support about free will?

A

If determinism is true, our actions follow from the past and the laws of nature, which we cannot control, so we have no free will.
(Supports incompatibilism.)

12
Q

Just because you did something (an action) does that mean you are morally responsible for it?

A

Action ≠ moral responsibility.
Merely doing something doesn’t make you responsible for it — you also need free will and genuine choice.

13
Q

Does utilitarianism allow the use of lethal autonomous weapons? What about deontology?

A
  • Utilitarianism: Okay if it saves more lives
  • Deontology: Not okay — violates moral rules (e.g., dignity, intention)
14
Q

What’s the difference between a moral agent and a moral patient?

A
  • Agent: Can act morally (e.g., adult human)
  • Patient: Deserves moral consideration (e.g., baby, animal)
15
Q

What is ethical subjectivism? (What is egoism, relativism, and emotivism?)

A

Ethical theories:
- Subjectivism: Morality is based on personal opinion
- Egoism: Right = what benefits me
- Relativism: Right/wrong depends on culture
- Emotivism: Moral claims = emotional reactions (e.g., “Boo!” or “Yay!”)

16
Q

How can egoism be applied?

A

If helping someone makes you feel good or benefits you, egoism says it’s the right thing to do.

17
Q

What supports relativism in ethics?

A

Different cultures have different moral values, which suggests there is no universal moral standard.

18
Q

What is the main puzzle or paradox for our thinking about the ethics of AI?

A

How do we assign moral responsibility to AI or those who build it?

19
Q

What is the value alignment problem for the ethics of AI?

A

How do we build AI systems whose goals match human values? In other words, how do we ensure that an AI interprets our values correctly?

20
Q

What do simulations and ‘experience machines’ reveal about our values? (which values are incompatible with plugging into the experience machine?)

A

They show that we value more than pleasure: real relationships, truth, and genuine achievement are incompatible with plugging in.

21
Q

What argument in the philosophy of mathematics helps us to resolve the dispute between platonism and its rivals (fictionalism, etc.)?

A

The Indispensability Argument:
We should believe in mathematical objects because they are indispensable to our best scientific theories.