Personhood Flashcards
(18 cards)
Structure of the Essay
Philosophical issue, define personhood, ethical and existential implications, thesis statement
Position 1: Alan Turing, imitation game, argument for Alan Turing, argument from consciousness and Lady Lovelace
Position 2: John Searle, Chinese room thought experiment, Takeaways from Chinese room thought experiment, brain simulator reply, response to brain simulator reply.
Position 3: Functionalism, Dreyfus, Mary’s room
Compare and contrast, explore why Dreyfus’ position is more plausible
Implications
Alan Turing: If machines can gain personhood, then they deserve to be treated like people. Ethical implications: other creatures and beings are entitled to intrinsic value, which inevitably causes us to consider animals as potential beings with intrinsic value. Existential implications: machines gaining personhood would suggest that we humans might have been created by other beings as well; if so, what would our purpose be?
John Searle: Ethical: humans remain the only beings in the universe with intrinsic value (consider the Universal Declaration of Human Rights).
Functionalist Approach: It shifts ethical and legal discussions toward capacity-based models of rights and responsibilities; people who lack the capacity to perform certain tasks (consider a person in a coma) wouldn't be considered persons.
Philosophical Issue
Can machines gain personhood?
Define personhood
It is important to define personhood. To some it is consciousness, to others it is creativity, for others it is having emotions. One potential definition is that personhood is the potential for a being to have all of these attributes, and that once personhood is obtained, it cannot be lost. For example, a baby has personhood because it has the potential to have emotions, to think creatively and to obtain consciousness; a person in a coma retains personhood even though they are not conscious now, because they had already gained personhood before the coma.
Thesis Statement
I contend that it is impossible for machines to gain personhood.
Alan Turing
English mathematician and pioneer of computing who wrote the 1950 paper "Computing Machinery and Intelligence", which opens with the question "Can machines think?"
Imitation game
Alan Turing suggested an imitation game involving three players. Player A is a computer, Player B is a human and Player C is an interrogator. They can only communicate through written notes or some other channel that does not reveal whether they are a human or a machine. Player A's role is to trick the interrogator into making the wrong identification, while Player B attempts to assist the interrogator in making the right one. Both the computer and the human try to convince the judge that they are human. If the judge cannot consistently tell which is which, the computer wins the game.
Argument for Alan Turing
One might suggest that if it is possible to create an AI that appears to have human-like intelligence, it doesn't matter whether the machine can actually think, because we would be unable to tell the difference. Thus it would be possible for machines to gain personhood.
Argument from consciousness and Lady Lovelace
One of the main counterarguments to this position is the argument from consciousness: a computer can "artificially signal" that it feels emotions or has intentions, but it doesn't actually feel them. Similarly, Lady Lovelace argued that the Analytical Engine can do whatever we know how to order it to perform, but it has no creativity or independent learning, two vital aspects of intelligence. Since intelligence is a key aspect of personhood, if it is impossible for machines to gain intelligence and the ability to think independently, then it is impossible for them to gain personhood.
John Searle
A philosopher who believes it is impossible for machines to gain personhood is John Searle, who devised the Chinese Room thought experiment.
Chinese room Thought Experiment
A man who doesn’t understand Chinese sits in a room with a rulebook for manipulating Chinese characters. When given a question, he uses the rules to produce appropriate Chinese responses, fooling outsiders into thinking he understands. But he’s only following instructions—no comprehension, just symbol processing.
Takeaways from Chinese room thought experiment
Syntax doesn't suffice for semantics. There is a vital distinction between understanding and simulating understanding.
Connection: the person in the Chinese room is analogous to a computer running a program.
Brain Simulator Reply
Consider a computer that operates quite differently from a scripted AI program: instead, it parallels the actual sequence of nerve firings that occur in the brain of a native Chinese speaker when that person understands Chinese, every nerve, every firing. Since the computer then works in the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese.
Response to Brain simulator reply
Suppose that in the room the man has a huge set of valves and water pipes arranged in the same way as the neurons in a native Chinese speaker's brain. The program now tells the man which valves to open in response to the input. In this scenario there is still no understanding of Chinese; therefore, simulating brain activity does not amount to understanding Chinese. Since it is impossible for machines to understand information, it is impossible for them to gain intelligence, and thus impossible for them to gain personhood.
Functionalism
Functionalism is the view that human personhood may be defined by a set of functions or abilities.
Dreyfus
He rejects the notion at the heart of AI: that all knowledge can be represented by a system of symbols and rules. He draws a distinction between "knowing that" and "knowing how". For example, one might know that pushing the pedals forward propels a bike, and that holding onto the handlebars and turning them when appropriate will steer it. However, this does not mean that they know how to ride a bike. We might be able to explain to someone how to ride a bike, but this sense of "how" does not imply that they have the ability to do so.
Dreyfus argues that it is only through our immersion in the world that we can "know how" and gain "common sense knowledge". He argues that human beings are not "individual, agential and rational. Instead, we are embedded, absorbed and embodied".
When we recognise a face, we do not consciously use symbolic reasoning the way a computer does. Instead, it is a physical and embodied experience.
Mary’s room thought experiment
In Mary's room thought experiment, Mary knows all the physical information about the colour red but has never seen it. If you believe that she learns something new when she finally sees colour, then you believe that there is more to experience than physical information. Therefore there is a difference between physical and phenomenological truths, linking back to the rejection of the notion that all knowledge can be represented by a system of symbols and rules.
Thinking Fast and Slow
In the book "Thinking, Fast and Slow", Daniel Kahneman provides a vast amount of scientific evidence to show that human beings use two very different methods to solve problems: "System 1", which is unconscious, fast and intuitive, and "System 2", which is slow, logical and deliberate.
The computer may excel at utilising "System 2"; however, it is the ability to use both "System 1" and "System 2" that makes humans intelligent and creative. Therefore, if computers cannot develop the ability to use "System 1", they cannot be considered intelligent, and thus cannot gain personhood.