Functionalism Flashcards
How can the functionalist position be expressed?
- All mental states can be characterised in terms of functional roles which can be multiply realised,
- What makes something a mental state is not its internal composition or constitution, but rather the role or function that it has in the system of which it is a part (a minimal code sketch of this follows below).
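To make multiple realisability concrete, here is a minimal Python sketch (my own illustration; the class names and behaviour rules are invented): 'pain' is defined purely by its functional role, an input-output profile, which two very different substrates can each occupy.

```python
from abc import ABC, abstractmethod

class PainRole(ABC):
    """The functional role itself: what makes a state 'pain' is its
    causal profile (inputs -> outputs), not the stuff realising it."""
    @abstractmethod
    def on_damage(self, intensity: float) -> str:
        ...

class CarbonBrain(PainRole):
    def on_damage(self, intensity: float) -> str:
        # Neural realiser (hypothetical behaviour rule).
        return "wince" if intensity < 0.5 else "cry out"

class SiliconController(PainRole):
    def on_damage(self, intensity: float) -> str:
        # Electronic realiser: different substrate, same role.
        return "wince" if intensity < 0.5 else "cry out"

# Both realisers occupy the same functional role, so on the
# functionalist account both are in the same mental state when damaged:
for subject in (CarbonBrain(), SiliconController()):
    print(type(subject).__name__, subject.on_damage(0.9))
```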
How can we define a functional state?
- In terms of the causal relations that it has to: environmental effects on the body (inputs); other types of mental states; and bodily behaviour (outputs),
- Unlike behaviourism, what is important is not just the outward stimulus and the outward response; rather, what is important also essentially includes the internal causal relations.
What is machine-state functionalism?
- An early theory developed by Putnam in response to the problems of behaviourism,
- Anything with a mind can be seen as having a certain kind of computational mechanism that constitutes its mind,
- We are probabilistic automata, meaning that the programming of our mental software does not necessarily determine one unique output for a given input but may instead specify a range of possible outcomes,
- There would be a probability that we would change into another state and give a certain output, but not a deterministic certainty. However, this is taken to be a difference of degree, rather than of type (see the sketch below).
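A minimal sketch of a probabilistic automaton in Python (my own illustration, not Putnam's formalism; the states, stimuli, and probabilities are invented): a (state, input) pair yields a probability distribution over (next state, output) pairs rather than one deterministic transition.

```python
import random

# Machine table: (state, stimulus) -> weighted (next_state, output) pairs.
MACHINE_TABLE = {
    ("calm", "insult"):   [(("angry", "frown"), 0.7), (("calm", "shrug"), 0.3)],
    ("angry", "apology"): [(("calm", "smile"), 0.6), (("angry", "frown"), 0.4)],
}

def step(state: str, stimulus: str) -> tuple[str, str]:
    """Sample one transition: returns (next_state, output_behaviour)."""
    outcomes = MACHINE_TABLE[(state, stimulus)]
    transitions = [t for t, _ in outcomes]
    weights = [w for _, w in outcomes]
    (next_state, output), = random.choices(transitions, weights=weights)
    return next_state, output

# Same state, same input -- yet the output is only probable, not certain:
print(step("calm", "insult"))  # ('angry', 'frown') roughly 70% of the time
```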
How does the inverted spectrum apply to functionalism?
- Suppose that I and a duplicate of me were born with inverted qualia, so that anything I see as yellow, the duplicate sees as what would be my blue,
- This applies to all colour qualia, so that the entire spectrum is inverted,
- As we were raised in the same contexts, we use the same colour words for the same objects, so we would both call a banana ‘yellow’, though the duplicate has what I would call my blue quale,
- In functionalism, it is perfectly conceivable and possible for all our functional states to be identical, so for the input-stimuli, the internal causal relations of my cognitive system, and the output behaviour to be exactly the same,
- So even though my duplicate and I see things with different qualia, we both describe them the same way, and all the associations have also been transferred, so that there is no external way of telling a difference between us,
- My duplicate and I would be identical on the functionalist account even though there is a fundamental mental difference because of the different qualia.
Functionalist response to the inverted spectrum criticism?
- Might claim that the inverted spectrum is ruled out by definition; however, this would presuppose the correctness of the functionalist theory,
- Could argue from a verificationist angle: qualia are neither empirically observable nor known a priori (analytically), so talk of them is incoherent or meaningless,
- Could argue that if my duplicate and I were functionally identical as supposed then we must perceive the same ‘qualia’ in response to stimulus, as qualia are simply expressions of functional responses to environmental stimuli and behavioural responses.
What is the functional zombies argument?
- Block argues that functionalism is guilty of ‘liberalism’,
- That it allows too many things to be conscious,
- Here, he argues that functionalism would maintain that there could be systems that are functional duplicates of us, and so which should be conscious, but which we would be highly reluctant to ascribe conscious, subjective, qualitative experience to,
- He asks that we imagine another human being, just like us, but with the head hollowed out and filled with tiny beings who collectively perform the same roles/functions that our brain plays,
- Functionally, this being would seem to be identical to me, with the same experiences. Yet, the critic suggests, this is clearly wrong. This argument is, though, reliant on intuition.
Ned Block’s China thought experiment?
- Asks that we suppose that the 1.398 billion people who live in China start engaging in collective behaviour that is equivalent to some kind of machine-table or program that we associate with consciousness. Suppose that everyone is equipped with radios or phones for immediate communication, similar to neurons and their synaptic connections,
- It would then seem that the functional system of China as a whole must be conscious and have experiences such as pain, etc.; but this seems counterintuitive (see the sketch below).
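A toy sketch of the China-brain setup (again my own illustration; the states, rules, and messages are invented): each citizen memorises one rule of a machine table and is reached by 'radio', so the population collectively realises the same transition structure a single brain might.

```python
# Each 'citizen' implements exactly one memorised rule of the table.
def citizen_1(signal: str) -> tuple[str, str]:
    return ("state_B", "output_x")

def citizen_2(signal: str) -> tuple[str, str]:
    return ("state_A", "output_y")

# The 'radio network': which citizen answers for which (state, input).
NETWORK = {
    ("state_A", "ping"): citizen_1,
    ("state_B", "ping"): citizen_2,
}

def china_step(state: str, stimulus: str) -> tuple[str, str]:
    """Collectively the population realises a machine table, even though
    no individual realises (or understands) the whole system."""
    return NETWORK[(state, stimulus)](stimulus)

print(china_step("state_A", "ping"))  # ('state_B', 'output_x')
```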
Putnam’s response to the China thought experiment?
- Argues that we should exclude systems whose constituent elements themselves have minds, which is the case in both of Block’s examples,
- He writes in ‘The Nature of Mental States’ that “No organism capable of feeling pain possesses a decomposition into parts which separately possess” the functional ability to feel conscious pain.
Problems with Putnam’s response?
- Seems ad hoc, doesn’t seem to give a concrete, positive argument for why a group of smaller constituent organisms couldn’t form a greater, higher-level conscious system,
- The argument seems to have been designed and introduced simply to halt the problematic counterarguments that Block provides.
John Searle on conscious experience?
- Argues that conscious experience requires organic matter to emerge, and therefore that artificial life, based on non-organic matter like silicon, is not possible,
- This blocks the most classic cases of AI from genuinely having minds,
- Provides a distinction between the functionalist understanding of the mind and that of computers; a rejection of the understanding of mental function as a sort of software.
Searle’s Chinese room thought experiment?
- Asks us to imagine a person who only speaks English locked in a room which has only a single slot that allows them to communicate with the outside world,
- The room also contains an extremely effective and comprehensive rule book for speaking Mandarin, so that anyone who wanted to look up the vocabulary or the grammatical rules for speaking ‘proper’ Mandarin could do so, and they also have a set of every Mandarin symbol,
- A Mandarin speaker outside the room wants to communicate with the person inside the room to tell whether they also speak Mandarin,
- The rules and symbols themselves are meaningless to the person in the room, but, given the sophistication of the rule book, it would seem that they could give the actual Mandarin speaker outside the impression that they speak Mandarin.
Searle’s Chinese room against AI?
- Searle takes this as strong evidence that computers do not have intrinsic intentionality (the man in the room is the computer): what the computer does, or would do, when it is speaking/communicating is simply to match certain inputs to outputs according to certain predetermined rules, without a proper understanding of the meanings of the symbols that it takes in as input,
- The computer, or the person in the room, follows the rules of the language, its grammar or syntax, but this does not mean that they understand the semantics of the language, the meanings of its expressions,
- Understanding syntax is not sufficient for knowledge of semantics. The computer is simply operating blindly and unconsciously, following its program,
- This marks another distinction between the mind and the functionalist representation of it: on functionalism we would be like the computer, and yet we are able to understand semantics (a toy illustration follows below).
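To make the syntax-only point vivid, here is a toy Python sketch (my own illustration, not Searle's; the phrases are placeholders): a purely rule-driven responder that maps input strings to output strings by lookup, with nothing in the program that could count as grasping what the symbols mean.

```python
# A toy 'rule book': input patterns mapped to canned replies. Nothing
# here represents meaning, only which output follows which input.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room_reply(incoming: str) -> str:
    """Purely syntactic lookup: match the input string, emit the paired
    output string. No step consults what any symbol refers to."""
    return RULE_BOOK.get(incoming, "对不起，我不明白。")  # fallback reply

# To a Mandarin speaker outside, the replies may look competent, yet the
# program manipulates uninterpreted tokens throughout.
print(room_reply("你好吗？"))
```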
The ‘Systems reply’ against Searle? Searle’s response?
- Argues that while the person in the room does not understand Mandarin, the system composed of the whole room, including the person, does understand Mandarin,
- Searle’s response was that the person could internalise all the rules, leave the room, and give the impression of functionally speaking Mandarin, while not really speaking it because they do not understand the meanings of the symbols and expressions, only when to use them,
- The functionalist might respond that this is nonsensical: if someone really did manage to internalise the book, could they be said not to speak Mandarin? How feasible is it that they would know how and when to use the rules, but not the meanings of the symbols and expressions governed by these linguistic rules?
What is the sophistication reply against Searle?
- Accepts that neither the person in the room, nor the room, have intrinsic intentionality, and, therefore, do not understand Mandarin,
- Maintains that more sophisticated types of machines would be able to understand Mandarin,
- May argue that Searle relies on an overly simplistic mechanism, a person mechanically reading and applying the rule book in the room; linguistic understanding involves more complex systems of rules and mechanisms, and these could be used in more sophisticated machines.
Alternative response to Searle?
- Could reject the assumption that there is a fundamental difference between when we attempt to tell whether another human being can speak a language, and when we are attempting to tell whether the room does,
- The only basis we have for concluding that someone speaks Mandarin is that they follow the rules of speaking Mandarin, as determined in the linguistic communities which speak Mandarin.
What is the robot reply?
- However, what if the machine or computer (e.g., the room in the Chinese Room case) were given a more complex system for interacting with the outside world? What if it were upgraded and given a means of moving around, sensing more data in the world, and interacting with the world and other people?
- In this case, would it not learn the meanings of the symbols by being able to associate them with objects and facts in the world, so that it can learn the references and meanings of symbols and expressions by observing how other human beings use the terms? Now the machine would be more like other organisms.
Searle’s response to the robot reply?
- That the machine would still only be taking in the new data inputs as information and converting them into lines of code or signals that the person in the room, or the machine itself as a whole, applies rules to and reacts to but is incapable of understanding,
- Searle maintains that intrinsic intentionality would still be missing. The person or whole system would still simply be reacting in a blind, mechanical way to syntactical information (see the sketch below).
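A toy extension of the rule-book sketch above (my own illustration, not Searle's; the encoding and replies are invented) showing why, on Searle's view, adding sensors changes nothing: the sensor data is merely encoded into another uninterpreted token before hitting the same kind of rule table.

```python
def encode_sensor(pixels: list[int]) -> str:
    """Convert raw sensor data into a symbol string; the encoding is
    just more syntax, not an act of understanding."""
    return "BRIGHT" if sum(pixels) / len(pixels) > 128 else "DARK"

# The same kind of rule table, now keyed on sensor-derived tokens.
ROBOT_RULES = {
    "BRIGHT": "拉上窗帘。",  # placeholder reply
    "DARK": "打开灯。",      # placeholder reply
}

def robot_reply(pixels: list[int]) -> str:
    # Sensor -> token -> lookup: every step is still symbol shuffling.
    return ROBOT_RULES[encode_sensor(pixels)]

print(robot_reply([200, 220, 180]))  # '拉上窗帘。'
```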