What Do Mini-Man Robots Have to Do with It?
The two questions "who can have a mind?" and "how do we know?" are particularly troubling in philosophy, and the answers vary depending on which theory is espoused. Some theories are problematic because they make it difficult to tell conclusively who can think; others because they make it difficult to tell who cannot. This is one of the problems functionalism encounters: some philosophers believe it attributes mentality to too many beings. This may not be true, but before that discussion can begin, it is necessary to explain what functionalism is and how a functionalist defends it.
What is Functionalism?
In "The Nature of Mental States", Hilary Putnam argues that pain, and sensations like it, are not brain states. Rather, he proposes that pain is a functional state of a particular organism and not a chemical relationship between the person's brain and their body. This theory is called "functionalism". To explain functionalism, Putnam describes the human system as a "Probabilistic Automaton", like a Turing Machine. He defines a Probabilistic Automaton as a system that accepts "sensory inputs" and produces "motor outputs".
The inputs and outputs are governed by a "Machine Table", or in the case of living things, their "Functional Organization". This table has instructions for every possible combination of states and inputs, and this determines the "transitional probability" of each output that could come from said combinations (Putnam 75). Pain, in this case, would be a functional state. For example, a person's system could be in any given initial state (calm or angry) at time t. A sensory input (like a punch), or a combination of inputs, puts the system into the state "pain". And depending on the probability of certain outputs (or "dispositions" to behave) resulting from the combination of the initial state, the sensory input, and the resulting state "pain", the system will produce a motor, or behavioral, output. This is functionalism, and Putnam believes that this is how humans, and some non-humans, operate.
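The punch example above can be sketched as a small probabilistic automaton. Everything in this sketch is invented for illustration (Putnam gives no concrete table); the states "calm" and "angry", the input "punch", and the probabilities are assumptions chosen to mirror the example.

```python
import random

# Illustrative machine table: (current state, sensory input) ->
# list of (transition probability, next state, motor output).
# All entries are invented; this is a sketch, not Putnam's own table.
MACHINE_TABLE = {
    ("calm", "punch"):  [(0.8, "pain", "flinch"), (0.2, "pain", "cry out")],
    ("angry", "punch"): [(0.9, "pain", "strike back"), (0.1, "pain", "walk away")],
}

def step(state, sensory_input):
    """Pick the next state and motor output by transition probability."""
    outcomes = MACHINE_TABLE[(state, sensory_input)]
    r = random.random()
    cumulative = 0.0
    for prob, next_state, output in outcomes:
        cumulative += prob
        if r < cumulative:
            return next_state, output
    return outcomes[-1][1], outcomes[-1][2]  # guard against rounding

next_state, behavior = step("calm", "punch")
# Both rows of the "punch" squares lead to the functional state "pain";
# only the behavioral disposition (the motor output) is probabilistic.
```

Note that in this sketch the state "pain" is defined entirely by its place in the table, which is the functionalist point: nothing about the physical substrate appears anywhere in it.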
Though functionalism seems very broad, Putnam says that there are two reasons to deny the brain-state thesis and believe pain, and other psychological states, are functional states rather than brain states. First, he argues that if pain is a brain state, then there has to be one "physical-chemical state such that any organism (not just a mammal) is in pain if and only if (a) it possesses a brain of a suitable physical-chemical structure; and (b) its brain is in that physical-chemical state" (Putnam 77).
The problem, Putnam says, is that this state must be possible in the brains of mammals, reptiles, and mollusks; and it must also not be possible in the brains of any animals that cannot feel pain (Putnam 77). Secondly, Putnam says that the brain-state hypothesis is hopeless because it is not simply attempting to say that pain is a brain state, but that all psychological states are. This is problematic, he argues, because "if we can find even one psychological predicate which can clearly be applied to both a mammal and an octopus (say "hungry"), but whose physical-chemical "correlate" is different in the two cases, the brain-state theory has collapsed" (Putnam 77). This is a lot for brain-state theorists to overcome, and it is why Putnam speaks of the hypothesis as little more than an exercise in futility.
Putnam also gives arguments in favor of functionalism. First, he notes that behavior is the feature on which people typically base judgments about mentality, yet behavior tells us little about the physical processes that produced it. However, he says, "similarities in the behavior of two systems are at least a reason to suspect similarities in the functional organization of the two systems, and a much weaker reason to suspect similarities in the actual physical details" (Putnam 77). In short, functionalism is the more explanatory position. Beyond this, Putnam argues that functionalism is more realistic across the board: he believes philosophy is more likely to discover universal, species-independent psychological laws before it finds universal neurophysiological laws of the same nature.
He says this partly because basic "transition probabilities" can already be applied across species. When an animal exhibits a certain behavior, one can reasonably infer its initial and subsequent states: "Thus, we would not count an animal as thirsty if its 'unsatiated' behavior did not seem to be directed toward drinking and was not followed by [the state] 'satiation for liquid'" (Putnam 77). These behaviors, however, would tell us little to nothing about the neurophysiological processes behind them. And even if we did learn something about brain states, it is unlikely that those states would persist across every species. Ultimately, functional states beat brain states because, at least for the moment, they make more sense.
Did Functionalism Get Blocked?
Functionalism does not describe any quality that separates man from machine, so it is hard to tell whether there is a difference. One philosopher who tests this ambiguity is Ned Block. In his essay "Troubles with Functionalism", he argues that functionalism is too liberal in its willingness to attribute "mentality" to things that only seem conscious. His thought experiment in that essay is a large part of why one might believe functionalism entails that it is possible to build a robot or computer with mentality. Yet even if functionalism entails this, the entailment does not undermine the theory.
Block creates a "homunculi-headed robot" to support his claim that functionalism is too liberal. This hypothetical robot would look and function like a normal human; but instead of having a brain, it would have a head full of miniature men who press buttons that control the body. "Each [little man] has a very simple task: to implement a "square" of a reasonably adequate machine table that describes you. On one wall is a bulletin board on which is posted a state card, i.e., a card that bears a symbol designating one of the states specified in the machine table" (Block 96). The state card on the wall tells the men which button to press to transition the body into the next state. This system, Block argues, would seem just like you "because the functional organization they have been trained to realize is yours." With an understanding of a person's basic functional organization, a system of little men could perfectly imitate any person.
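Block's division of labor can be sketched in code: each little man is responsible for exactly one "square" of the machine table, acting only when the posted state card and the incoming input match his square. The class names, states, and table entries below are all invented for illustration; Block specifies no such detail.

```python
# A toy sketch of Block's homunculi-head. Each "little man" implements one
# square of the machine table: he acts only when the bulletin-board state
# card and the sensory input match his square. All entries are invented.
class Homunculus:
    def __init__(self, state, sensory_input, next_state, output):
        self.square = (state, sensory_input)  # the one square he handles
        self.next_state = next_state          # the card he posts next
        self.output = output                  # the button he presses

    def matches(self, state_card, sensory_input):
        return self.square == (state_card, sensory_input)

class HomunculiHead:
    def __init__(self, homunculi, initial_card):
        self.homunculi = homunculi
        self.state_card = initial_card        # the bulletin-board card

    def receive(self, sensory_input):
        for man in self.homunculi:
            if man.matches(self.state_card, sensory_input):
                self.state_card = man.next_state
                return man.output             # the motor output produced
        return None                           # no square covers this case

head = HomunculiHead(
    [Homunculus("calm", "punch", "pain", "flinch"),
     Homunculus("pain", "apology", "calm", "nod")],
    initial_card="calm",
)
```

The sketch makes Block's worry vivid: from the outside, `receive` behaves exactly like the machine table of the person it describes, yet each component does nothing but a single table lookup.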
This homunculi-headed system seems like a punch in functionalism's gut, since it meets much of Putnam's criteria. Surely a functionalist would be forced to attribute mentality to this robot, right? It has a sophisticated functional organization that is exactly the same as yours. Block makes a fair point: there is some "prima facie" doubt that comes with attributing mentality to this robot, and most would be reluctant to say immediately that it can think as a person can. But his refutation has a problem that Putnam anticipated and that neither Block nor Putnam answers clearly.
The problem has multiple parts: 1) Block's robot needs to be able to feel the sensations you do if it has the same functional organization as you. 2) Block's robot cannot feel pain. 3) Since Block's robot cannot feel pain, it cannot have the same functional organization as you. To the first point: if the robot has the same machine table as you, then it can feel pain, since your table mandates that you feel it, and anything with the same functional organization must also be able to feel pain as a functional state. To the second: the robot cannot feel, because it can be broken down into components that have their own functional organizations.
This part is a bit more complex. Putnam says, "Every organism capable of feeling pain possesses at least one Description of a certain kind (i.e., being capable of feeling pain is possessing an appropriate kind of Functional Organization)" (Putnam 76). A Description of a system is a true statement about how that system possesses distinct states, sensory inputs, and motor outputs (Putnam 75). He adds that "no organism capable of feeling pain possesses a decomposition into parts which separately possess Descriptions of the kind referred to [above]" (Putnam 76). This is important, and it presents a worthy defense against Block's argument. Block's homunculi-headed robot would not be able to feel; it would only be able to simulate the outputs produced by functional states. To Putnam, pain is a functional state, but he also notes that it is a sensation.
The body must feel pain, but because this body is run by little men, it is unclear how the pain would be felt or transmitted. Would they all feel it? Sure, the body would be able to signal damage to the system, but how would the pain be absorbed? Would certain little men feel it in the part of the body that was affected, or would there just be a state card simulating how a real body responds to pain? Moreover, each of these little men has his own functional organization and Description, which, according to Putnam, disqualifies the entire system. Block's robot has no central nervous system or means of feeling. It would only be able to exhibit a pain-like state rather than actual pain.
And this brings us to the third point. Block's robot does not have a brain and nervous system; it has a head of little men who simulate the functioning of a normal person. And though they can follow a machine table to produce outputs like the ones you would produce in a pain state, there is no way for the robot truly to "feel". Since it cannot feel, there is no way for the homunculi-headed robot, or any system like it (see the Chinese-government example Block provides on page 96), to have the same functional organization as a real person. Why does this matter? Block's claim that a person's functional organization can be replicated this way is off-base, and his robot does not meet functionalism's criteria. So it cannot really undermine functionalism.
Even though I believe there is a small caveat to Block's argument, I think it shows something about the possibility of computers having mentality. Block's mini-man robot could think within the functionalist framework; it just could not feel. It had the same transition probabilities, states, inputs, and outputs as any other person. But feeling is not required for thinking, is it? I don't think so. The Turing Machine had its own functional organization, just with fewer transition probabilities.
Could a future computer not be programmed, like Block's robot but without the little men, to draw on a plethora of possible behavioral dispositions and act on them depending on the situation? Plausibly. In any case, Block does not seem to undermine functionalism. There may not be a computer today with the external sensory capabilities to recognize damage and react accordingly, but there could be; a robot could conceivably recognize a broken part and take itself somewhere to be repaired. Ultimately, there is no quality that functionalism attributes to people and animals and means to exclude from machines. In sum: people can think and, one day, so will computers.
- Block, Ned. "Troubles with Functionalism." Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers, Oxford University Press, 2002, pp. 94-98.
- Putnam, Hilary. "The Nature of Mental States." Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers, Oxford University Press, 2002, pp. 73-79.