Elliott Chapter 8 – Ethics of artificial intelligence Flashcards

1
Q

Historical and intellectual background of AI

A
  • Artificial intelligence: any artificial computational system that shows intelligent behavior, i.e., complex behavior that is conducive to reaching goals
  • AI gets under our skin further than other technologies: because the goal of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings
  • The main capabilities of an AI agent include: sensing, modelling, planning, action, perception, text analysis, natural language processing (NLP), logical reasoning, game playing, decision support systems, data analytics and predictive analytics
  • The latest EU policy documents suggest that ‘trustworthy AI’ should be lawful, ethical and technically robust, and list the following seven requirements: human oversight,
    technical robustness, privacy and data governance, transparency, fairness, well-being and accountability
2
Q

Privacy has several well-recognized aspects

A

Information privacy
- Privacy as an aspect of personhood
- Control over information about oneself
- The right to secrecy

3
Q

What do the main data-collection practices of the five biggest companies appear to be based on?

A
  • Deception
  • Exploiting human weakness
  • Furthering procrastination
  • Generating addiction
  • Manipulation
4
Q

The primary focus of social media, gaming, and most of the internet is?

A

To gain, maintain and direct attention > which in turn secures the supply of data

5
Q

What is Device fingerprinting?

A

A technique that identifies a device (and hence its user) by the distinctive combination of its hardware and software characteristics, without needing cookies
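The idea behind fingerprinting can be sketched in a few lines: observable traits of a device are combined into a single stable identifier. This is a minimal illustration only; the attribute names and values below are invented, and real fingerprinting systems use many more signals.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Combine observable device traits into a stable identifier."""
    # Sort keys so the same device always yields the same fingerprint.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/Amsterdam",
    "fonts": "Arial,DejaVu Sans,Liberation Serif",
}
# The same traits always produce the same ID, so the device can be
# re-identified across visits even with cookies disabled.
print(device_fingerprint(device))
```

Note that changing even one trait (e.g., the screen resolution) yields a different fingerprint, which is why fingerprints combine many attributes that rarely change together.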

6
Q

Surveillance capitalism

A

A concept which denotes the widespread collection and
commodification of personal data by corporations (the business model of the internet)

7
Q

Privacy debates

A

There is ever-growing data collection about users and populations; the data collectors know more about us than we know about ourselves

Users are manipulated into providing data, unable to escape this data collection and without knowledge of data access and use

8
Q

Manipulation

A

Manipulation mostly aims at the user’s money

With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these individuals

Dark patterns > such manipulation is the business model of much of the gambling and gaming industries, as well as of low-cost airlines
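The targeting step described above can be made concrete with a toy sketch: given prior data on how each user segment responded to different message framings, an algorithm simply serves each user the framing with the highest observed response rate. All segment names and numbers here are invented for illustration.

```python
# Response rates per user segment, assumed to be learned from prior data.
response_rates = {
    "impulsive":    {"scarcity": 0.31, "social_proof": 0.12, "discount": 0.22},
    "price_driven": {"scarcity": 0.09, "social_proof": 0.11, "discount": 0.35},
}

def pick_message(segment: str) -> str:
    """Choose the framing most likely to influence this segment."""
    rates = response_rates[segment]
    return max(rates, key=rates.get)

print(pick_message("impulsive"))     # scarcity
print(pick_message("price_driven"))  # discount
```

The ethical worry is precisely this optimization target: the message is chosen for its influence on the user, not for its truthfulness or the user’s interests.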

9
Q

Opacity

A

Opacity and bias are central issues in ‘data ethics’ or ‘big data ethics’

Opaque AI is a “black box” system, where the technology can’t explain itself or why it’s operating in a certain way

  • Data analysis is often used in predictive analytics in business, healthcare and other fields, to foresee future developments
  • If a system uses machine learning, it will be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is
  • Bias in decision systems and datasets is exacerbated by this opacity
  • There is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans but cannot explain its decisions
10
Q

Bias

A

* Bias = typically surfaces when unfair judgements are made because the individual making the judgement is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group

  • Confirmation bias = people tend to interpret information as confirming what they already believe
  • Statistical bias > the mere creation of a dataset involves the danger that it might be used for a different kind of issue and then turn out to be biased for that kind
11
Q

Deception and robots

A

Human-robot interaction (HRI) > an academic field in its own right that pays significant attention to ethical matters, the dynamics of perception from both sides, and both the different interests present in and the intricacy of the social context, including co-working

  • Humans attribute mental properties to objects, and empathize with them, especially when the appearance of these objects is similar to that of living beings
  • Human-to-human interaction has three domains which robots cannot replace > care, love and sex
  • Care robots > are performing tasks in a behavioral sense in care environments, not in the sense that a human cares for the patient
12
Q

Autonomy and responsibility

A

Autonomous vehicles
- How autonomous should vehicles behave? And how should responsibility and risk be distributed in the complicated system the vehicles operate in?

  • Military robots
    Would using autonomous weapons in war make wars worse, or make wars less bad?
13
Q

Singularity: Artificial general intelligence

A

Artificial general intelligence (AGI) = computers given the right programs can be literally said to understand and have other cognitive states

  • The idea of the singularity is that if the trajectory of AI towards AGI reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence (super-intelligent)
14
Q

Ultra-intelligent machine

A
  • Ultra-intelligent machine = a machine that can far surpass all the intellectual activities of any man however clever
15
Q

Control problem (also called: value alignment)

A

How we humans can remain in control of an AI system once it is super-intelligent

The problem of how we can make sure an AI system will turn out to be positive, in the sense we humans perceive it (also called: value alignment)

16
Q

four types of machine agents

A
  1. Ethical impact agents = e.g., robot jockeys
  2. Implicit ethical agents = e.g., a safe autopilot
  3. Explicit ethical agents = using formal methods to estimate utility
  4. Full ethical agents = can make explicit ethical judgments and are generally competent to reasonably justify them. An average adult human is a full ethical agent
  • Ethical agents have responsibilities, while ethical patients have rights, because harm to them matters
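An explicit ethical agent (type 3 above) can be sketched as a program that evaluates candidate actions with an explicit utility function and picks the one with the highest expected utility. This is a minimal illustration of the idea; the actions, probabilities, and utilities below are invented.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice for an autonomous vehicle, with invented numbers:
# each action has possible outcomes with probabilities and utilities.
actions = {
    "brake":  [(0.9, 10), (0.1, -50)],   # usually safe, small chance of harm
    "swerve": [(0.5, 20), (0.5, -80)],   # higher payoff but much riskier
}

# The agent chooses the action whose expected utility is highest.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # brake
```

Unlike a full ethical agent, such a system can compute and report its utility estimates, but it cannot step outside its given utility function to justify why those utilities are the right ones.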
17
Q

Community of artificial consciousness

A

Researchers see a significant concern about whether it would be ethical to create such consciousness, since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off