Week Flashcards
(22 cards)
How is AI applied?
. policing
. military
. politics
Problems with using AI to police people
Policing AI:
increased breadth of surveillance
lack of accountability of AI systems
worse facial-recognition performance on minorities (underrepresented in training data)
Problems with using AI in wars
searching for specific human targets → bias in war, more deaths
automated defence → dehumanisation of war, loss of moral engagement
Problems with AI in politics
state surveillance of opposition and minorities; increased censorship and disinformation
What is phrenology in AI?
The assumption that AI can infer someone's sexuality, gender, race, emotion, or criminality from an image of their face.
Why are predictions of traits like gender, sexuality, or criminality from facial images problematic?
These are social constructs, so predictions can lead to
oppression of groups like LGBT people through misidentification, and wrongful persecution (as it is impossible to detect criminality from someone's face)…
Who has access to AI?
AI is expensive
(requires expensive hardware, computation power, and money to train and house)
AI is in a big-tech monopoly
the problem with this is that companies like Apple will keep prices high and use AI for their business, not necessarily society's needs
universities can't compete, and poorer labs/countries cannot afford to use AI
What are the harms of AI?
.Lack of human control/ accountability
.Lack of safety
.Discrimination
.Privacy invasion, surveillance
.Environmental and societal impact [easy to pick]
Explain lack of human control
LACK OF CONTROL / ACCOUNTABILITY
Many AI algorithms are complex and non-intuitive → hard to understand and control
Authority of objectivity, automation bias → too much delegation, no accountability
Explain lack of safety
Often not robust → not safe when conditions change
AI often embedded in physical systems → physical safety risks
Explain discrimination
Impact and harms are different for different groups of people due to lack of testing/data/consideration during design
Explain privacy invasion
Privacy invasion, surveillance
AI allows more pervasive privacy invasion
Constant data gathering for resale/training
data used without consent
Explain environmental and societal impact
Environmental and societal impact
Modern AI is power hungry → 100s of GPUs used in training
Requires data centres → cooling and pollution, mineral mining, construction
Trustworthy AI
Design and deployment of AI that generates positive outcomes and mitigates harmful outcomes; this can depend on a number of factors.
What is value alignment?
Value alignment is writing down the rules AI must follow so that it acts in accordance with human values.
Problems with value alignment
Technical (hard to encode; the rule set may be too large)
Social (who decides the rules? stakeholders must agree on them)
The problem is that there may be too many rules, or rules too complex to write down, so
the AI will follow only the rules it has, ignoring the rules it doesn't have (rules humans forgot to specify) when solving a problem. This may lead to situations where the AI acts outside its confines (due to unspecified rules).
What is direct discrimination (disparate treatment)?
Someone is disadvantaged because of a personal attribute.
What is indirect discrimination (disparate impact)?
Someone with a personal attribute is disadvantaged, but the attribute is not explicitly taken into account.
e.g. an algorithm decides not to give a promotion based on how often an employee interrupts work;
women are indirectly discriminated against because childcare leads them to interrupt work more often.
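The promotion example can be made concrete with a quick disparate-impact check. The data, the promotion rule, and the 0.8 threshold below are illustrative assumptions, not from the cards: the rule never looks at gender, yet selection rates end up unequal.

```python
# Hypothetical toy example of disparate impact: a promotion rule that
# never uses gender directly, but selects on "work interruptions" -- a
# feature that correlates with gender because of unequal childcare duties.
candidates = [
    # (gender, interruptions_per_week, promoted_by_rule)
    ("F", 6, False), ("F", 5, False), ("F", 2, True), ("F", 1, True),
    ("M", 1, True),  ("M", 2, True),  ("M", 5, False), ("M", 1, True),
]

def selection_rate(group):
    """Fraction of the given group that the rule promotes."""
    rows = [c for c in candidates if c[0] == group]
    return sum(c[2] for c in rows) / len(rows)

# Disparate-impact ratio: selection rate of the disadvantaged group
# divided by that of the advantaged group.
ratio = selection_rate("F") / selection_rate("M")
print(round(ratio, 2))  # → 0.67
```

A ratio below 0.8 (the common "four-fifths rule" heuristic) flags possible disparate impact even though gender was never an input to the rule.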
Bias
.imbalance or tendency in data (bias in input)
.direct/indirect discrimination (bias in output)
2 causes of discrimination: no. 1
Risk not anticipated, tested, alleviated
World bias: bias in the real-world distribution
Representation bias: data collection bias
Measurement bias: wrong categorisation of people/ wrong measurements
Algorithm bias: wrong choice of algorithm
Evaluation bias: wrong choice of metric, test set
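Evaluation bias from the list above can be sketched in a few lines: a toy model scored only by overall accuracy looks fine while failing the minority group, which is why per-group metrics matter. All counts below are invented for illustration.

```python
# Hypothetical sketch of evaluation bias: overall accuracy hides a
# model that is wrong on almost every minority-group example.
data = [
    # (group, true_label, predicted_label)
    *[("majority", 1, 1)] * 90,   # 90 correct majority examples
    *[("minority", 1, 0)] * 8,    # 8 wrong minority examples
    *[("minority", 1, 1)] * 2,    # 2 correct minority examples
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(t == p for _, t, p in rows) / len(rows)

overall  = accuracy(data)                                     # 0.92
minority = accuracy([r for r in data if r[0] == "minority"])  # 0.20
print(overall, minority)
```

Choosing "overall accuracy on this test set" as the metric is exactly the wrong choice the card names: a disaggregated evaluation exposes the failure immediately.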
2 causes of discrimination: no. 2
Risk is obvious, problematic task
Ethics board not flagging a problem
Developer does not refuse to build it, does not report it or blow the whistle
Management makes decision to deploy
What is ethics washing?
Ethics washing: companies set up AI ethics teams and initiatives as a distraction, to make the public think they are working on it even though they are not. They:
. propose principles that are too vague to be implementable
. do not fully implement the principles they propose
. say they are developing ethical AI but do not publicly reveal what they are actually doing