Machine ethics Flashcards
(11 cards)
Machine decisional capacities
Two characteristics:
- Trust
- Vulnerability
Machine decisional capacities
- Vulnerability
Two characteristics:
→ possibility of suffering damage
→ uncertainty
Humans vs machine
→ Decision
when interacting, humans are vulnerable to each other's decisions
- who decides when two computers interact?
→ Interaction
human interaction creates behaviour
- is it the same for machines?
→ Cultural factors
a key factor
Machine decisional capacities
- Trust
An attitude that helps achieve goals under uncertainty and vulnerability
Two requisites
→ Functional (necessary but not sufficient)
→ Ethical (necessary and sufficient)
Machine decisional capacities
- Trust
Functional requisites
Performative factors → factors that create trust in a machine
→ Reliability
→ Low number of false alarms
→ Transparency
→ Capacity to execute complex operations
→ Low level of danger
Machine decisional capacities
- Trust
Ethical requisites
Trust built on upright, honest behaviour
Five principles
→ Non-maleficence
→ Beneficence
→ Autonomy
→ Justice
→ Explicability
Machine decisional capacities
- Trust
Ethical requisites
1) Non-maleficence
AI should not cause harm to people
Potential risks
→ Reputational risks
biased systems may cause harm to people’s reputation
→ Economic and legal risks
if an AI is too successful, it may become a monopoly
→ Environmental risks
→ Social risks
→ Political risks
controlling AI
Machine decisional capacities
- Trust
Ethical requisites
2) Beneficence
→ AI should actively seek advantage for others
→ Benefits should outweigh risks and disadvantages
Machine decisional capacities
- Trust
Ethical requisites
3) Autonomy
→ machine and information ethics
→ Two perspectives
- Human autonomy
(does AI help or limit it?)
- Machine autonomy
(is it possible to program a machine to be ethically autonomous?)
→ Limits of machine autonomy
→ Interaction with machines
→ Bioethical perspective
→ Kant’s perspective
Machine decisional capacities
- Trust
Ethical requisites
3) Autonomy
→ Interaction with machines
- AI and machines should respect human scopes, values and desires
- "autonomy" may also mean that a machine can operate without the need for human interaction
→ Limits
- AI should not help humans pursue illegal or unethical goals
- it should not cause harm to others
Machine decisional capacities
- Trust
Ethical requisites
3) Autonomy
→ Bioethical perspective
→ the right to make up one's own mind
→ the right to decide and act on the basis of one's personal values
→ respecting other people's autonomy
- proactive behaviour
- not merely a respectful attitude, but acting respectfully
(right = ontological status → a feature that makes the moral subject possible)
Machine decisional capacities
- Trust
Ethical requisites
3) Autonomy
→ Kant’s perspective
Autonomy
- the capacity to make an ethical choice (to ethically regulate ourselves)
- possessing will and rationality (no morality is possible without them)
→ Not anthropocentric (applies to all rational beings)
- Machine learning (machines learn from experience) ≠ Kant
- Are there similarities?