Morley et al. (2021) – Operationalizing AI ethics: barriers, enablers and next steps Flashcards

1
Q

Abstract of the article (summarized as a single card, since the paper is survey research)

A

By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part,
this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research
sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most
common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical
implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by
conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What
motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what
assistance do AI practitioners want and need when translating ethical principles into practice?

2
Q
  • A fifth of the survey respondents indicated they had access to a code of conduct, and 24% indicated that they would find such a resource useful, thus….
A

The AI ethics community is thus not yet meeting the needs of AI practitioners, despite the plethora of resources that have been produced

3
Q

There is recognition that pro-ethical design can improve social impact, particularly from the perspective of avoiding bias

A
  • From a risk-based approach, this is interpreted as meaning avoiding harm (e.g. avoiding privacy infringement) rather than actively doing good either for society or for the environment
4
Q

Normative cascade

A

→ When AI practitioners find themselves facing the same grey issue repeatedly, and society finds itself suffering from the consequences of related poor decisions, it is likely that legislation will be developed to provide the AI practitioners with the reassurance they seek
- Because such legislation only emerges after repeated harm, it is unrealistic to expect laws to be the ‘solution’ to all ethical dilemmas

5
Q

Encouraging cultural shift

A
  • AI practitioners need to be encouraged to develop an understanding of the ethical implications of the products that they design, by incorporating ethics theories into mandatory courses provided to all involved employees
  • AI ethics researchers, in collaboration with journalists and public engagement specialists, should focus on making AI ethics relatable (to AI practitioners and the public)
  • Policymakers and legislators need to push against the false logic of the Collingridge dilemma → the idea that, when trying to govern emerging technologies, they face a double-bind problem of information and power whereby impacts cannot be easily predicted until a specific technology is extensively developed and widely used, but technology cannot be controlled or changed once it has become entrenched
6
Q

Utility

A

Until AI practitioners have gained more experience considering ethical implications and translating these considerations into pro-ethical design decisions, they will require more specific guidance on what good looks like

7
Q

AI ethics frameworks

A

No matter how much they come to be relied upon, they must always be seen as guardrails, designed to stop AI practitioners from crossing social red lines but not specifying exactly how to do so in each individual instance

  • Instead, AI practitioners should take an approach inspired by Habermas’s discourse ethics, where the aim of AI ethics frameworks is to guide open discussions in which all sides of an argument are listened to and considered until a decision that is acceptable to all can be reached