1 Flashcards

1
Q

what is the definition of AI

A

AI is difficult to define and there isn't one set definition.
- Mark Coeckelbergh described AI as 'intelligence displayed or simulated by code or machines'.

1
Q

Definition of Robot

A

There is no set definition, as there are so many different types.
- There is no official EU definition, but the European Parliament agreed that a robot features attributes such as being a self-learning machine that gains autonomy by exchanging data and has the ability to adapt to its environment.

2
Q

concerns of AI in general

A
  • bias and discrimination
  • inequality
  • privacy
  • feedback loops
2
Q

Where does BIAS arise in robots and AI

A
  • predictive policing - crime data is scarce, so police go where crime has already been recorded and the feedback loop repeats (see the sketch below); there is a tendency towards racial profiling
  • in the training data of the machines (for example, ImageNet uses a lot of US data, which is not representative), so existing bias continues to be reflected
  • the developers are disproportionately Western white men
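
A minimal, illustrative sketch (all numbers invented) of that feedback loop: patrols are allocated according to past recorded arrests rather than the true crime rate, so an initial skew keeps growing.

    # Toy simulation of a predictive-policing feedback loop (illustrative only).
    # Assumption: two districts share the SAME true crime rate, but district A
    # starts with more recorded arrests (historic over-policing).
    import random

    random.seed(0)
    TRUE_CRIME_RATE = 0.05            # identical in both districts
    recorded = {"A": 30, "B": 10}     # skewed historical arrest counts

    for year in range(10):
        total = sum(recorded.values())
        # Patrols are allocated in proportion to *recorded* arrests, not true crime.
        patrols = {d: int(1000 * recorded[d] / total) for d in recorded}
        for d in recorded:
            # Arrests scale with patrol presence, so the initial skew feeds back.
            recorded[d] += sum(random.random() < TRUE_CRIME_RATE
                               for _ in range(patrols[d]))

    print(recorded)   # district A ends up with far more recorded "crime" than B
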
3
Q

Where do privacy concerns arise

A
  • labour and state surveillance
  • deepfakes
  • Internet of Things
  • predictive policing
  • affect recognition
4
Q

What solutions are available to tackle bias in predictive policing

A
  • bias testing using audits to quantify how the system is run (by a second party, otherwise it is treated as a checkbox exercise) - see the sketch below
  • changing the training data and those affecting the outcomes, such as the police; the police force is known for discrimination and bias within the force
  • EDUCATE - on ways to tackle crime, not just on who can arrest the most people; that is not justice. Stop using the system until it is accurate and not biased.
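
One way an audit can quantify how the system is run is a simple disparate-impact check on its outputs; a minimal sketch (the records and the 0.8 'four-fifths' threshold are assumptions, not taken from these cards):

    # Minimal disparate-impact audit: compare adverse-outcome rates across groups.
    # The records below are invented purely for illustration.
    records = [
        {"group": "X", "flagged": True},  {"group": "X", "flagged": True},
        {"group": "X", "flagged": False}, {"group": "Y", "flagged": True},
        {"group": "Y", "flagged": False}, {"group": "Y", "flagged": False},
    ]

    def flag_rate(group):
        rows = [r for r in records if r["group"] == group]
        return sum(r["flagged"] for r in rows) / len(rows)

    ratio = flag_rate("Y") / flag_rate("X")
    # A common (and contested) rule of thumb: a ratio below 0.8 suggests disparate impact.
    verdict = "investigate" if ratio < 0.8 else "no disparity flagged"
    print(f"flag-rate ratio Y/X = {ratio:.2f} -> {verdict}")
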
5
Q

what are the issues with the solutions for tackling bias in predictive policing

A

The solutions are education, bias testing using audits, and amending the training data.
- AUDITS = how exactly is bias tested? This raises questions about what is morally right and what is effective at tackling crime; the balance between protecting the public and not being discriminatory must be considered (balancing Article 8 ECHR, private life, against national security).
- ACCURACY = would amending the data make the system less accurate?

6
Q

what types of state surveillance are there and what are the problems

A

TYPES
- mass state surveillance
- affect recognition
- biometric policing programme in Greece (lawfulness being investigated)
- spyware
- predictive policing

PROBLEMS
- balance between Article 8 and national security
- transparency
- privacy

7
Q

what is Disinformation and what are the concerns

A

Disinformation is intentionally misleading information.
- It is a danger to public health, national security and racial equality.

CONCERNS
- AI amplifies it through algorithms that create echo chambers (see the sketch below)
- Facebook's algorithms are designed to maximise likes, which can end up spreading disinformation
- DEEPFAKES
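
A minimal sketch of that amplification mechanism: a feed ranked purely by predicted engagement pushes the most engaging item to the top regardless of accuracy (the posts and scores are invented for illustration):

    # Toy engagement-ranked feed: items are ordered only by predicted likes,
    # so a sensational false claim can outrank accurate reporting.
    posts = [
        {"text": "accurate but dull report", "predicted_likes": 12,  "is_false": False},
        {"text": "sensational false claim",  "predicted_likes": 480, "is_false": True},
        {"text": "careful fact-check",       "predicted_likes": 35,  "is_false": False},
    ]

    feed = sorted(posts, key=lambda p: p["predicted_likes"], reverse=True)
    for post in feed:
        label = " [FALSE]" if post["is_false"] else ""
        print(post["predicted_likes"], "-", post["text"] + label)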

8
Q

what are predictive privacy and anonymisation and what are the concerns

A
  • predictive privacy concerns the prediction of personal information without the person's knowledge
  • anonymisation is the removal of personally identifiable data

CONCERNS
- anonymisation is used to persuade users to consent
- BUT there are ways to re-identify individuals, and it is hard for data to be truly anonymous (see the sketch below)
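
A minimal sketch of the re-identification concern: stripping names still leaves quasi-identifiers (postcode, birth year) that can be joined against another dataset. All records here are invented.

    # "Anonymised" health records: names removed, quasi-identifiers kept.
    health = [
        {"postcode": "EH8 9YL", "birth_year": 1990, "diagnosis": "asthma"},
        {"postcode": "EH8 9YL", "birth_year": 1985, "diagnosis": "diabetes"},
    ]
    # A public dataset (e.g. an electoral roll) sharing the same quasi-identifiers.
    public = [
        {"name": "Alice", "postcode": "EH8 9YL", "birth_year": 1990},
    ]

    # Re-identification by joining on the quasi-identifiers.
    for person in public:
        for record in health:
            if (person["postcode"], person["birth_year"]) == (record["postcode"], record["birth_year"]):
                print(person["name"], "is likely the", record["diagnosis"], "patient")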

9
Q

what are Deepfakes and what are the concerns

A

Visual deepfakes create enormous harm: political issues and character assassination.
Textual deepfakes create fake news; GROVER is said to be more convincing than humans. Deepfakes warp social communication and feed into the algorithms.

Tackling deepfakes is the same as tackling disinformation:
1- transparency
2- show that it is fake
3- accountability - but does this compensate the victim?

10
Q

how to tackle disinformation

A

1- transparency - publicise the algorithms. ISSUE: limited value to the average user
2- intelligibility - so users can see that content is fake. ISSUE: freedom of speech
3- accountability - hold platforms accountable. ISSUE: does this really compensate for the harm?

LEGISLATIVE
- making it illegal outright would just mean there are no regulatory aspects; this is a concern for deepfakes especially
1- the US has no laws regulating algorithms
2- the EU is introducing new legislation (e.g. France's law on the manipulation of information, though it has had limited impact)
- increase awareness

11
Q

what is the US and EU law on allowing the use of copyrighted works (specifically TDM)

A

US - no law on 'scraping' copyrighted works; instead, 4 factors are considered to decide whether the use is fair use (this makes it EASIER to use copyrighted works for TDM):
1- the purpose of the use
2- the nature of the copyrighted work
3- the amount and substantiality of the portion used
4- whether the work is reasonably available / the effect on the market

EU - COPYRIGHT in the DIGITAL SINGLE MARKET DIRECTIVE 2019 (CDSM)
1- Article 3: TDM for scientific research
2- Article 4: the most common exception - allows TDM of lawfully accessible works, including for commercial purposes, so long as the rightsholder has not 'OPTED OUT' (see the sketch below)
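
Under Article 4, lawfulness turns on whether the rightsholder has reserved (opted out of) TDM. A minimal sketch of a scraper respecting a machine-readable reservation; the tdm_reservation field is a hypothetical opt-out signal used for illustration, not wording from the Directive.

    # Sketch: only mine works whose metadata does not reserve (opt out of) TDM.
    works = [
        {"url": "https://example.org/a", "tdm_reservation": False, "text": "..."},
        {"url": "https://example.org/b", "tdm_reservation": True,  "text": "..."},
    ]

    corpus  = [w["text"] for w in works if not w["tdm_reservation"]]
    skipped = [w["url"]  for w in works if w["tdm_reservation"]]
    print(f"mining {len(corpus)} work(s); skipped (opted out): {skipped}")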

12
Q

what are some issues within copyright and AI

A

1- TDM - 'any automated analytical technique aimed at analysing text and data to generate new information'
2- scraping of copyrighted works

13
Q

what is the OPT OUT approach and should it be allowed

A

- OpenAI allows creators to opt out of their work being used in training data.
FOR - protects creators' rights and ensures a fair balance; protects creators' work.
AGAINST - creators have to submit an individual copy of each image they don't want used (which could be hundreds).

Recommendation - Communia states there needs to be transparency as to how creators' work is being used.

14
Q

for a 'work' to be protected by copyright, what is it classed as

A

CURRENT law
- EU LAW is silent on what counts as a 'work' and on whether AI can be an author or its output count as a work.

  • ARTICLE 2 of the Berne Convention, protecting literary and artistic works, states that works fall within the 'literary, scientific and artistic domain'.
15
Q

what constitutes an author, and can AI be included?

A

The Berne Convention doesn't define 'author' but suggests it is a human/natural person, and therefore doesn't include AI.
- In many Member States there is a presumption of authorship for the person indicated on the work, unless proven otherwise.

16
Q

BERNE Convention - what is it and what is the test for what counts as a 'work' and an 'author'

A

It deals with the protection of works and the rights of authors over how, and by whom, their work is controlled and used.

The test:
1- literary, scientific or artistic domain
2- originality / the author's creative stamp
3- human intellectual effort (by a human)
4- expression

17
Q

what is GDPR and when is data processing allowed.

A

The General Data Protection Regulation unified data protection laws across Europe.

Processing of data is only lawful in certain circumstances, including:
1- consent
2- performance of a contract
3- public interest
4- legitimate interests

Data minimisation applies to this: collection, purpose and storage limitations mean no more data is processed than necessary (see the sketch below).
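
A minimal sketch of the data-minimisation idea in practice: the controller declares which fields are necessary for a stated purpose and discards everything else before storage (the purpose and field names are assumptions for illustration).

    # Sketch: keep only the fields necessary for the declared purpose.
    NECESSARY_FIELDS = {
        "order_fulfilment": {"name", "address", "items"},   # assumed purpose
    }

    def minimise(raw_record, purpose):
        allowed = NECESSARY_FIELDS[purpose]
        # Drop everything not needed for this purpose (e.g. birth date, browsing history).
        return {k: v for k, v in raw_record.items() if k in allowed}

    raw = {"name": "Alice", "address": "1 High St", "items": ["book"],
           "birth_date": "1990-01-01", "browsing_history": ["..."]}
    print(minimise(raw, "order_fulfilment"))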

18
Q

an aspect of GDPR is data minimisation. has it been successful at protecting individuals and what are the proposals

A

Data minimisation means that the data controller should limit collection to what is directly relevant and necessary.

1- definition difficulty - what do 'proportionate' and 'necessary' mean?
2- limited case law to determine whether it has had a good effect - e.g. the Swedish case in Stockholm

Further improvements
The US proposes:
1- banning biometric data collection on children and in workplaces
2- my suggestion is to put more focus on the implications of the data and not just the collection process

19
Q

what are MUHLHOFF's suggestions for better privacy in regard to state surveillance

A

1- only allow consent when the processing solely affects that one user
2- render the consent exception insufficient where data will be linked to other people's data
3- establish collective rights so that groups can also assert rights

20
Q

what are types of state surveillance and concerns

A
  • mass state surveillance
  • spyware
  • affect recognition
  • predictive policing
  • predicting vulnerable users

CONCERNS
- riddled with flaws
- invasion of privacy
- balancing Article 8 and national security
- transparency
- analysis of data can create a negative feedback loop
- bias and discrimination - can target users most likely to … without needing access to their data

ESTABLISH COLLECTIVE RIGHTS

21
Q

workplace surveillance - what are the concerns?

A

Workplace surveillance includes monitoring of workers' computer activity and webcams.

CONCERNS
- low-wage workers are targeted more
- the systems are flawed but are still used to make decisions
- it can be good for increasing productivity, but there are concerns over privacy

22
Q

how is the EU planning to deal with workplace surveillance and is it enough

A

PLATFORM WORK DIRECTIVE
1- access to data for workers
2- transparency

Is it enough?
- it should include collective rights, and not just for platform workers
- it includes a human in the loop, but this may become a checkbox exercise and shifts responsibility onto low-paid workers
- the GDPR is also designed around violations of the law, not around informing workers of all the information collected

23
Q

what are some things that can be done to improve the workplace surveillance situation

A

PROPOSALS
- EU PLATFORM WORK DIRECTIVE

  • ban surveillance of office bathrooms and off-duty hours, and ban algorithmic wage discrimination

Legislation and proposals need to focus less on how data is collected and more on the outcome of the data collection for the user; let workers access it.

Collective rights, not just individual rights, and amend proposals to include all workers, not just platform workers.

24
Q

why should AI be regulated

A
  • human rights - privacy
  • discrimination - bias in systems, predictive policing
  • harmful in the workplace and in crime detection
25
Q

HOW should AI be regulated

A

Amend current law:
1- USE CONTRACT / TORT / PRODUCT LIABILITY so that when something goes wrong there is regulation
2- there is no AI-specific legislation at present, but there is the AI Proposal

AI Proposal:
1- be as neutral as possible to cover new techniques
2- cover as much as possible
3- take a risk-based approach

AUDITS - ensuring companies are complying with regulations and expanding transparency.
- Auditing needs improving in order to hold companies accountable.

26
Q

what is the risk based approach

A
  1. unacceptable risk - prohibited
  2. high risk - permitted with a conformity assessment
  3. limited risk - permitted with transparency obligations
  4. minimal or no risk - permitted with no restrictions (see the sketch below)
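
A minimal sketch of the four tiers as a lookup from risk level to regulatory consequence; the tier names follow the card above, while the example systems are assumed illustrations, not quotations from the Proposal.

    # The risk-based approach expressed as a simple lookup table.
    RULES = {
        "unacceptable": "prohibited",
        "high":         "permitted with conformity assessment",
        "limited":      "permitted with transparency obligations",
        "minimal":      "permitted with no restrictions",
    }

    # Hypothetical classification of example systems (illustration only).
    examples = {
        "social scoring system": "unacceptable",
        "CV-screening tool":     "high",
        "customer chatbot":      "limited",
        "spam filter":           "minimal",
    }

    for system, risk in examples.items():
        print(f"{system}: {risk} risk -> {RULES[risk]}")
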
27
Q

what is the basis of contract law and what are the issues that arise when applying it to AI and Robots when something goes wrong

A

Contract liability involves 2 or more parties agreeing to something. The party in breach has to justify the breach. The statute of limitations is very long (10 years).

The issues:
1- it is difficult to establish who the parties to the contract are, i.e. those who have done the harm
2- burden of proof - how do you prove something that isn't visible?

28
Q

what is the basis of tort law and what are the issues that arise when applying it to AI and Robots when something goes wrong

A

Tort arises when someone unlawfully causes damage to another. The creditor (the injured party) has to prove everything. The statute of limitations is 2-5 years.

1- is there a causal link, especially if the harm wasn't physical?
2- burden of proof - how is this possible? There is no strict liability.

Improvements
- strict liability for high-risk AI products!

29
Q

what is the basis of product liability and what are the issues that arise when applying it to AI and Robots when something goes wrong

A

Product liability is used when there is something defective about the product you bought. The burden of proof is all on the creditor.

1- defect - can something one person disagrees with be a defect? What counts as a defect in regard to AI?
2- the person must prove damage - when AI malfunctions, what level of damage does it cause?

30
Q

what can be done to improve the current use of the law for defects in AI and robots/ what are the current proposals

A

- the amended Product Liability Directive seeks to change the definitions of product, damage and defect to be more inclusive of the digital environment
- there is also a new proposal for an AI Liability Directive, which aims to assist with the burden of proof issues

31
Q

what are the different levels of legislation that can be implemented and which is best

A

International
- too broad
- sets standards across countries
- hard to enforce and make specific

European level
- harmonisation and standardisation
- the EU can only act within the powers conferred on it
- not specific enough

National level
- more specific
- no standardised law across countries