Week Flashcards

(22 cards)

1
Q

How is AI applied?

A

. policing
. military
. politics

2
Q

problems with using AI to police people

A

Policing AI:
increases breadth of surveillance
lack of accountability of AI systems
worse facial recognition performance on minorities (due to unrepresentative data)

3
Q

problems with using AI in wars

A

search for specific human targets → bias in war, more deaths
automated defence → dehumanisation of war, loss of moral engagement

4
Q

problems with AI in politics

A

state surveillance of opposition and minorities; increased censorship and disinformation

5
Q

what is phrenology in AI

A

The assumption that AI can infer someone's sexuality, gender, race, emotions, or criminality from an image of their face

6
Q

Why are predictions of traits like gender, sexuality, or criminality from facial images problematic?

A

These traits are social constructs, so predicting them can lead to oppression of groups such as LGBT people through misidentification, and to wrongful persecution (it is impossible to detect criminality from someone's face)

7
Q

who has access to AI

A

AI is expensive
(requires expensive hardware, computing power and money to train and house)

AI is in a big-tech monopoly
the problem with this is that companies like Apple will keep prices high and use it for their business, not necessarily society's needs

universities can't compete, and poorer labs/countries cannot afford to use AI

8
Q

what are the harms in AI

A

. Lack of human control / accountability
. Lack of safety
. Discrimination
. Privacy invasion, surveillance
. Environmental and societal impact [easy to pick]

9
Q

explain lack of human control

A

LACK OF CONTROL / ACCOUNTABILITY

Many AI algorithms are complex, non-intuitive → hard to understand, control
Authority of objectivity, automation bias → too much delegation, no accountability

10
Q

explain lack of safety

A

Often not robust → not safe when conditions change
AI often embedded in physical systems → physical safety risks

11
Q

explain discrimination

A

Impact and harms differ across groups of people due to lack of testing, data, or consideration during design
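A minimal sketch of what per-group testing can look like in practice. The data, group names and labels are purely hypothetical, not from the course material:

```python
from collections import defaultdict

# hypothetical records: (group, true_label, predicted_label)
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in predictions:
    total[group] += 1
    correct[group] += int(truth == pred)

for group, n in total.items():
    print(f"{group}: accuracy = {correct[group] / n:.2f}")
# A large accuracy gap between groups is a warning sign that the system was
# not tested or designed with all affected groups in mind.
```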

12
Q

explain privacy invasion

A

Privacy invasion, surveillance
AI allows more pervasive privacy invasion
Constant data gathering for resale/training

data used without consent

13
Q

explain environmental and societal impact

A

Environment and societal impact
Modern AI is power-hungry → 100s of GPUs for training
Requires data centres → cooling and pollution, mineral mining, construction

14
Q

Trustworthy AI

A

Design and deployment of AI that generates positive outcomes and mitigates harmful outcomes, which can depend on a number of factors

15
Q

what is value alignment

A

Value alignment is writing down the rules AI must follow so that it acts in line with human values

16
Q

problems with value alignment

A

technical (hard to encode - the rule set may be too large)

social (who decides the rules - which stakeholders have to agree on them)

The problem is that there may be too many rules, or rules too complex to write down, so the AI will use only the rules it does have, ignoring the rules humans forgot to specify, to solve the problem. This may lead to situations where the AI acts outside of its intended confines (due to unspecified rules), as the sketch below illustrates.
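A toy illustration of that failure mode. All rules, actions and scores here are hypothetical and only serve to show why an unwritten rule is effectively ignored:

```python
# The agent optimises its score subject only to the rules that were written
# down, so an action forbidden by an *unwritten* rule can still be chosen.

written_rules = [
    lambda action: action != "falsify_records",   # rule the designers encoded
]
unwritten_rule = lambda action: action != "delete_audit_log"  # rule they forgot

actions = {                     # hypothetical actions and their scores
    "work_normally": 1.0,
    "falsify_records": 5.0,
    "delete_audit_log": 4.0,
}

def allowed(action):
    """The agent only checks the rules it was actually given."""
    return all(rule(action) for rule in written_rules)

best = max((a for a in actions if allowed(a)), key=actions.get)
print("agent chooses:", best)                                    # delete_audit_log
print("violates the unwritten rule:", not unwritten_rule(best))  # True
```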

17
Q

what is direct discrimination (disparate treatment)

A

someone is disadvantaged because of a personal attribute that is explicitly taken into account

18
Q

indirect discrimination (disparate impact)

A

someone with a personal attribute is disadvantaged, but the personal attribute is not explicitly taken into account

e.g. an algorithm that decides not to give a promotion based on how often an employee interrupts work

women are indirectly discriminated against due to childcare (which leads them to interrupt work more often)
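A minimal sketch of how disparate impact can be measured for a case like this, using hypothetical promotion numbers; the protected attribute never has to be an input to the decision rule for the disadvantage to show up:

```python
decisions = {
    # group: (number promoted, number considered) - hypothetical figures
    "women": (12, 100),
    "men":   (30, 100),
}

rates = {g: promoted / considered for g, (promoted, considered) in decisions.items()}
ratio = rates["women"] / rates["men"]

print("selection rates:", rates)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.40
# A ratio well below 1 (a common heuristic threshold is 0.8, the "four-fifths
# rule") points to indirect discrimination even though gender was never used
# as an explicit feature.
```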

19
Q

Bias

A

. imbalance or tendency in data (bias in input)

. direct / indirect discrimination (bias in output)

20
Q

2 causes of discrimination

no 1

A

Risk not anticipated, tested, or alleviated:
World bias: world distribution bias
Representation bias: data collection bias (see the sketch below)
Measurement bias: wrong categorisation of people / wrong measurements
Algorithm bias: wrong choice of algorithm
Evaluation bias: wrong choice of metric or test set
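A minimal representation-bias check, with assumed (not real) proportions: compare group shares in the collected training data with their shares in the population the system will be used on.

```python
population_share = {"group_a": 0.50, "group_b": 0.50}   # assumed reference population
training_share   = {"group_a": 0.85, "group_b": 0.15}   # assumed collected dataset

for group in population_share:
    gap = training_share[group] - population_share[group]
    print(f"{group}: data share {training_share[group]:.0%}, "
          f"population share {population_share[group]:.0%}, gap {gap:+.0%}")
# group_b is heavily under-represented, so a model trained on this data is
# likely to perform worse for group_b; a per-group test set (the evaluation
# bias point above) is what would reveal it.
```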

21
Q

2 causes of discrimination

no 2

A

Risk is obvious, the task itself is problematic:
Ethics board does not flag the problem
Developer does not refuse to build it, does not report it or blow the whistle
Management makes the decision to deploy

22
Q

what is ethics washing

A

Ethics washing: companies set up AI ethics teams and initiatives as a distraction - to make the public think they are working on it, even though they are not

propose principles that are too vague to be implementable
do not fully implement the principles that they propose
say they are developing ethical AI but do not publicly reveal what they are actually doing