Exam questions Flashcards
(21 cards)
Compatibilism about free will means… whereas incompatibilism means…
- Compatibilism: Free will is compatible with determinism.
- Incompatibilism: Free will and determinism cannot both be true.
What is the simulation hypothesis? (Each version)
We might be in a computer simulation.
- Metaphysical: We are in a simulation.
- Epistemic: We can’t know if we are or not.
- Probabilistic (Bostrom): It’s likely we are, since advanced civilizations would run many ancestor simulations.
What are the two responses to the skeptical argument regarding simulations?
- Dismiss (doesn’t matter practically)
- Accept (but say we still have knowledge and morality either way)
What is the mind-body problem?
How can the mind (non-physical) and body (physical) interact?
What are dualism, physicalism, mind-brain identity theory, and functionalism?
Theories of mind:
- Dualism: Mind ≠ body (they’re separate)
- Physicalism: Mind = physical brain
- Mind-Brain Identity: Each mental state = brain state
- Functionalism: Mental states are defined by what they do (their functional roles), not by what they’re made of
What are platonism, psychologism, and fictionalism about mathematical objects?
Theories of mathematical objects:
- Platonism: Numbers exist independently (abstract realm)
- Psychologism: Math = human thoughts
- Fictionalism: Numbers are useful fictions (like stories)
What are utilitarianism and deontology?
- Utilitarianism: Do what maximizes happiness or good outcomes
- Deontology: Follow rules/duties no matter the outcome
What does utilitarianism recommend in cases like the Trolley case? What about deontology?
- Utilitarianism: Pull the lever (save more lives)
- Deontology: Don’t pull (actively killing someone violates a duty, even to save more lives)
What implications does the simulation hypothesis have for the theory of mind?
If we’re in a simulation, minds might not depend on biological brains.
What is the core reason for thinking that mental states are not just brain states?
Because mental states have meaning (intentionality) and a felt quality (qualia), which seem to be more than just neurons firing.
What is the consequence argument — what view does it support about free will?
If determinism is true, our actions are consequences of the laws of nature and the distant past. Neither of those is up to us, so our actions are not up to us, and we lack free will.
(Supports incompatibilism.)
Just because you did something (an action) does that mean you are morally responsible for it?
Action ≠ Moral Responsibility
Merely doing something doesn’t make you morally responsible; responsibility also requires free will and genuine choice.
Does utilitarianism allow the use of lethal autonomous weapons? What about deontology?
- Utilitarianism: Okay if it saves more lives
- Deontology: Not okay — violates moral rules (e.g., dignity, intention)
What’s the difference between a moral agent and patient?
- Agent: Can act morally and be held responsible (e.g., adult human)
- Patient: Deserves moral consideration / can be wronged (e.g., baby, animal)
What is ethical subjectivism? (What is egoism, relativism, and emotivism?)
Ethical theories:
- Subjectivism: Morality is based on personal opinion
- Egoism: Right = what benefits me
- Relativism: Right/wrong depends on culture
- Emotivism: Moral claims = emotional reactions (e.g., “Boo!” or “Yay!”)
How can egoism be applied?
If helping someone benefits you (e.g., it makes you feel good), egoism says it’s the right thing to do.
What supports relativism in ethics?
Different cultures have different moral values — no universal standard.
What is the main puzzle or paradox for our thinking about the ethics of AI?
How do we assign moral responsibility to AI or those who build it?
What is the value alignment problem for the ethics of AI?
How do we build AI systems whose goals match human values, i.e., how do we ensure that the AI interprets our values correctly?
What do simulations and ‘experience machines’ reveal about our values? (which values are incompatible with plugging into the experience machine?)
They show we value more than pleasure, like real relationships, truth, and achievement.
What argument in the philosophy of mathematics helps us to resolve the dispute between platonism and its rivals (fictionalism, etc.)?
Indispensability Argument:
We should believe in mathematical objects because they are indispensable to our best scientific theories, and we should believe in what those theories require.