Lecture 6 Normative Ethics and AI Flashcards

(6 cards)

1
Q

What are the main theories of normative ethics relevant to AI?

A
  • Utilitarianism: the right action is the one that produces the most good overall
  • goodness as pleasure (hedonism)
  • goodness as well-being (welfarism)
    Objection: utilitarianism lacks the resources to distinguish acts (e.g., killing) from omissions (e.g., letting die)
  • Deontology: consequences of actions are not the only moral consideration -> an action can be wrong even if it produces net benefits
  • an action is wrong if it violates duties we owe to others or ourselves, and right if it does not
  • categorical imperative (first formulation): act only on those reasons that everyone could act on (universalization)
  • categorical imperative (second formulation): treat others as ends, as having value in their own right, rather than merely as means to an end
2
Q

What should the core values of AI design be?

A
  • privacy
  • transparency
  • fairness (mitigating favoritism/discrimination)
3
Q

What is the responsibility gap?

A

A responsibility gap emerges when it is unclear who the bearer of moral responsibility is, or whether there is one at all.

4
Q

What is the obligation gap (backward- vs. forward-looking responsibility)?

A
  • backward-looking responsibility: who is responsible -> who is praiseworthy or blameworthy?
  • forward-looking responsibility: who is responsible -> who is under an obligation?
5
Q

What is the value alignment problem?

A

How to ensure that AI gets its interpretation of our values right?

  • AI makes decisions for us or about us
  • AI should therefore be encoded with values and moral principles
  • But which ones? Which moral principles should be encoded within AI?
  • a utilitarian AI would maximize overall well-being, even at the expense of some individuals
  • a deontological AI would seek to treat human beings as ends in themselves, but how could, e.g., an autonomous car do that if it were to prioritize the driver simply in virtue of being the car's owner?
6
Q

What are echo chambers and epistemic bubbles?

A
  • echo chambers: social structures or communities centering on shared core beliefs in which outsider criticisms are actively discredited or ignored
  • epistemic bubbles: social structures or communities in which outside views or criticisms are simply not seen or heard