Human Cognition and the Chinese Room

Today I’m going to use a famous philosophical thought experiment to help elucidate one of the most significant problems in cognitive science. Before I get into it though, let me set the stage. There are various competing theories of cognition in the field of cognitive science. Some explain human cognition and thought through the manipulation of symbols, others through networks of interconnected units. While there are many different approaches, there is one thing all these theories have in common: they are trying to explain how human thought and intelligence emerge out of the interaction of very stupid processes. How is it that an assortment of firing neurons can lead to consciousness? To explain the relevance of this question, just think about a computer. Your computer can calculate math equations faster than you could ever dream of. It can handily beat you at chess. You ask your search engine a question and it pops up with a ton of relevant answers. There are even programs, around since at least the 1960s (ELIZA being the classic example), that can hold full-fledged conversations with you. Now, none of us confuse these programs for real people, and none of us think our computers are intelligent, at least not in the way we ascribe intelligence to humans. They are not aware, they are not conscious. They are just running programs. Some input comes in, calculations are done, output is spit out.

But what is so different about the way computers function and how we function? Neuroscience has broken down human cognition and laid bare the simple and stupid processes that lie at the heart of it. You’re walking down the street, and suddenly you hear someone yell, “Watch out! Duck!” You quickly glance up and see something flying towards you. You duck and narrowly avoid a baseball that zooms past your head. Seems like an intelligent action, right? You heard someone say something, you understood their words and directed your attention based on them. What you saw next you interpreted as a danger to you, so you moved out of the way. At every step of the process you were aware and conscious and acted intelligently. You understood what was happening. But when we look at what actually happens inside your head, it’s not at all so straightforward. The entire process of hearing, seeing, and moving is governed by disparate modules in the brain which communicate by sending electrical signals back and forth. In essence you have millions of tiny computers running in your head, processing input and spitting out output, and there is no one place in the brain where all this information comes together so that YOU can become aware of it…so how is it that you are aware?

Imagine this. Imagine we built a robot with all the advanced technology we have. The robot has the most advanced computer acting as its brain, and it’s got itself a whole humanoid body, with ears and eyes that actually work. The ears take in sound waves, map them onto phonemes, compare them against dictionary entries, and trigger an incredibly complex series of actions when certain phrases are heard. It has cameras for eyes and is able to pass what the cameras see to a central processing area that controls movement. When it hears “Watch out! Duck!”, it maps the phrase and looks up what to do (when you hear “watch out”, look up); the central processor receives an instruction to direct the eyes upward, and the cameras swivel and record a fast-approaching object. This information is sent to the central processor, which has instructions to move the body when it is in the path of fast-approaching objects. This is all highly complex, but nothing in there is really a stretch. But here is the question. In both situations, the same thing happened. You and the robot acted in exactly the same way. But how many of you would say the robot was intelligent? That the robot understood what was going on? Few of us. And yet, what is so different about us? Why is it that you have this privileged understanding that a robot can’t have when, under the hood, we both have a bunch of stupid little computers mindlessly running their little programs (in our case neurons firing according to strict rules)?
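Just to make the “stupid little computers” point concrete, here is a minimal sketch of what such a pipeline might look like in code. Everything in it (the function names, the phrase table, the speed threshold) is made up for illustration; the only point is that every stage is a blind lookup or a simple rule, with nothing in it that looks like understanding.

```python
# A toy version of the robot's pipeline; all names and values are hypothetical.

# Rule table: heard phrase -> attention action (a pure lookup, no understanding)
PHRASE_ACTIONS = {
    "watch out! duck!": "look_up",
}

def hear(sound: str) -> str:
    """Map the sound to a phrase and look up the scripted response."""
    phrase = sound.lower()                      # stand-in for phonetic mapping
    return PHRASE_ACTIONS.get(phrase, "ignore")

def see(camera_frame: dict) -> bool:
    """Report whether a fast-approaching object is in the frame."""
    return camera_frame.get("approach_speed", 0) > 10   # arbitrary threshold

def central_processor(sound: str, camera_frame: dict) -> str:
    """Wire the stages together: hear -> direct eyes upward -> check frame -> move."""
    if hear(sound) == "look_up" and see(camera_frame):
        return "duck"             # instruction passed on to the motor system
    return "keep_walking"

print(central_processor("Watch out! Duck!", {"approach_speed": 25}))   # -> duck
```

Every function here is as dumb as a filing cabinet, and yet the visible behavior (hear the warning, look up, duck) matches yours exactly.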

This is the problem John Searle was contemplating when he created the Chinese Room thought experiment. Searle was criticizing people who were trying to develop artificial intelligence by pointing out that creating a robot that could interact with the world in seemingly intelligent ways would not necessarily make it conscious; it would not be aware. If a machine could convincingly simulate intelligent conversation, it wouldn’t actually “understand” that conversation, because it would just be manipulating symbols based on predefined rules. Here is how he put it. Imagine there is a man in a room. Through one end of the room, slips of paper are passed in. These slips of paper contain Chinese characters (questions or statements). The man in the room has filing cabinets full of instructions that allow him to match up the Chinese characters passed into the room and that tell him how to compose a new set of characters and pass them out of the room. The end result is that this process simulates a conversation in Chinese with the people outside the room. Make this process as complicated as you want. To account for the fact that the same sentence can often lead to varying responses, have the slips of paper contain a serial number that acts as the “current state of the system”, causing the man in the room to go to a different filing cabinet based on the current state, and thus produce a different response. The point is that even though a conversation is occurring, the man in the room in no way understands Chinese. Seemingly intelligent behavior does not necessitate real intelligence.
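If it helps to see just how mechanical this is, here is a toy sketch of the rulebook in code. The states and phrases are invented for illustration, and the “current state of the system” is just a label; every step is a blind table lookup on the shapes of the characters, with no meaning attached anywhere.

```python
# A toy version of the rulebook; states and example exchanges are hypothetical.

# (current state, characters passed in) -> (characters to pass back out, next state)
RULEBOOK = {
    ("S0", "你好"): ("你好！", "S1"),
    ("S1", "你会说中文吗？"): ("会一点。", "S2"),
}

def mans_procedure(state: str, slip_of_paper: str) -> tuple:
    """Match the incoming squiggles against the cabinets and copy out the listed reply."""
    return RULEBOOK.get((state, slip_of_paper), ("……", state))

state = "S0"
for incoming in ["你好", "你会说中文吗？"]:
    outgoing, state = mans_procedure(state, incoming)
    print(outgoing)   # a fluent-looking reply, produced with zero comprehension
```

Scale the table up as far as you like; the lookup never turns into understanding, which is exactly Searle’s point.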

There are a few responses to this. One of them, the so-called systems reply, states that while the man himself does not understand Chinese, the room as a whole does (man, cabinets, paper, pens, and all). This seems like a pretty silly response: how exactly can a room be intelligent? Can anyone honestly suggest the room understands anything? But Searle plays along and says, okay, try this: take the entire contents of the room, all the instructions, and have the man memorize every single one (for the sake of the experiment, imagine this can be done). Now the entire system is inside the man. People pass him pieces of paper, he writes meaningless symbols on other pieces of paper based on the rules he memorized, and hands them back. To other people, it would seem that this man understands Chinese, but this is obviously not the case.

A more substantial reply to Searle, sometimes called the robot reply, invokes the fact that language is understood by embodied agents, not by rooms that pieces of paper are slipped into. Language emerges from the complex social nature of our interaction with other people and with things in the world. These replies say to embed the Chinese room in a robot, give it sensory organs and a way to interact with other people, and then who is to say it’s not intelligent? Well, Searle, that’s who. He argues that the behavior is seemingly intelligent, but since we know that under the hood what is going on is meaningless symbol manipulation, there is no true representation or intelligence. Whether the robot is interacting or not doesn’t change that fact.

It seems we’ve hit an impasse. We can’t figure out how to grant a machine intelligence, since simply manipulating symbols does not bestow intelligence and does not make those symbols meaningful. And yet these same arguments seem to hold up against us intelligent humans, and we have to begin wondering at this point: what exactly are the criteria (the necessary and sufficient conditions) for understanding language? We can’t just say that humans understand language because we are conscious…because that just pushes the question back: how does consciousness arise from simple biological processes? So what do you think? What are some ways for us to get out of this problem? How can we account for our human ability to understand language, to ‘represent’ the meaning of words, as cognitive scientists would say? The implications are far-reaching, both for our ability to construct intelligent robotic systems and, more importantly, for understanding the very fundamental nature of human cognition. My next post will explore some ways of thinking about this question that I find more promising. In the meantime, what are your thoughts? How would you reply to Searle’s Chinese Room thought experiment?

Read the follow-up post on language acquisition and development!