Language Acquisition and Development
In my last post I discussed the Chinese Room, a thought experiment about language use and human cognition. The thrust of the argument is that seemingly intelligent behavior does not necessitate conscious understanding or awareness. Even if a program, a robot, or an English-speaking human locked inside a room can convincingly engage in a conversation by formally manipulating symbols (manipulating the syntax), there is no real language understanding: the system involved has no semantic knowledge, no knowledge of meaning. This has implications not only for the possibility of AI, but also for constraining and directing theories of human cognition in cognitive science.
There is a way out of this problem, though: we know that humans can represent the semantic content of language, so there must be some way for a system to acquire it. I want to argue that examining how language learning and acquisition actually occur in humans is one way of tackling the question. Since we know that it does in fact happen, it's a good place to start.
Imagine I give you a Chinese–English dictionary (phonetic, not symbol-based): you would be able to learn conversational Chinese to some degree, right? Now what if I gave you a Chinese–Chinese dictionary instead? You would never learn anything from that book. This is important. When you first hear a new word, the word is meaningless to you. It is completely devoid of salience until someone defines it for you (setting aside context, which usually helps us narrow down meaning). Once the word is defined, it is no longer meaningless, because you can associate it with all the other words you know and thus give it meaning. This is an important point to consider. We can assign meaning to new words because we already have words with meaning assigned to them, and we can place a new word within a network of associations that bestow meaning upon it (which is what using a dictionary amounts to). But what about before you have any words at all to build on? Before a toddler speaks, it has no concept of what all the sounds the people around it keep emitting mean. It's all just noise: symbols with no meaning at all. Yet children learn to speak somehow. The syntactic content gains semantic salience. How?
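The dictionary point can be made concrete with a toy sketch. The entries and romanized names below are invented purely for illustration; the point is structural: a dictionary that defines every symbol only in terms of other symbols in the same dictionary gives a learner nothing external to bottom out in.

```python
# A toy "Chinese-Chinese dictionary": every word is defined only in
# terms of other words in the same dictionary (entries are made up).
dictionary = {
    "gou": ["dongwu", "mao"],
    "dongwu": ["gou", "mao"],
    "mao": ["dongwu", "gou"],
}

def ground(word, seen=None):
    """Follow definitions, looking for anything outside the symbol
    system. With no external anchor, we only ever revisit symbols."""
    seen = seen if seen is not None else set()
    if word in seen:
        return None  # we've gone in a circle: no grounding found
    seen.add(word)
    for defining_word in dictionary.get(word, []):
        result = ground(defining_word, seen)
        if result is not None:
            return result
    return None

print(ground("gou"))  # → None: the definitions never bottom out
```

Every path through the definitions loops back on itself, which is the Chinese–Chinese dictionary in miniature: pure syntax, no semantics.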
Well, an obvious answer is that through continual interaction with the world these repeated symbols gain meaning. Consider a toddler about to put its hand on the stove whose mother yells, "No! Hot stove!" The toddler has no idea what those sounds mean, but a memory is formed associating them with the act it was about to perform. Through further interaction, those auditory signals gain salience for the child. Or consider: every time someone sees the family dog, they look at it, and among the other words the child hears, the sound "Rover" recurs. Time after time the child hears this sound, and eventually it comes to understand that "Rover" is a way to gain the attention of the large furry thing in the house (i.e., Rover comes running when the sound is made). The auditory signal represents certain opportunities for interacting with Rover. Each and every one of us develops from a child without any understanding of a single word into an intelligent, language-using individual, and it is through interaction that this occurs. Humans and animals of all sorts can learn to respond appropriately to novel signals through a process of interaction, as the history of conditioning in psychology has shown.
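The "Rover" story resembles what is sometimes called cross-situational learning: the child tracks which things tend to be present whenever a word is heard, and the association with the right referent gradually wins out. A minimal sketch, with invented class and data names:

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy word learner: counts co-occurrences between heard words
    and whatever is present in the scene when they are heard."""

    def __init__(self):
        # counts[word][thing] = number of times `thing` was in the
        # scene while `word` was being said
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, scene):
        # Each utterance is paired with whatever is currently perceived.
        for word in words:
            for thing in scene:
                self.counts[word][thing] += 1

    def best_guess(self, word):
        # The referent most often present when the word was heard.
        referents = self.counts[word]
        return max(referents, key=referents.get) if referents else None

learner = CrossSituationalLearner()
learner.observe(["look", "rover"], {"dog", "ball"})
learner.observe(["rover", "come"], {"dog"})
learner.observe(["ball"], {"ball", "dog"})
print(learner.best_guess("rover"))  # → dog
```

Any single scene is ambiguous ("rover" co-occurs with both the dog and the ball at first), but across repeated interactions the spurious associations wash out, which is just the "time after time" dynamic described above.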
Searle's basic premise is that any theory arguing for intelligence in non-biological systems ignores the fact that representation and understanding happen in the brain; not in a computer, but in a biological organ. Quite right, but he still seems to treat this as basically an input/output system and is simply arguing that a computational black box cannot generate meaning. In doing so, he ignores the integral relationships between the brain and the body, between the body and the environment, and the interaction and development the organism goes through. Searle considers the embedded, interactive robot argument that I went over in my last post, picturing a gigantic robot with Searle inside its head. The robot transduces auditory or visual signals into symbols, which Searle manipulates and feeds back to the robot, which transduces them into auditory signals and outputs them to the people it is "conversing" with. Searle still doesn't understand anything, and thus, on this account, neither does the robot. Remove Searle and you are again left with a robot unable to understand, unable to represent. We need an understander to do the understanding.
I think Searle has failed to think through some implications of his argument that bear on the nature of meaning. Language isn't a tool we use to ask and answer a pointless stream of questions (if it were, it's unlikely we could ever learn to use it properly). Language is a tool for interacting with others in our environments in a much broader sense. If Searle were actually looking out of the robot's eyes, seeing what it sees and hearing what it hears, watching the responses and reactions to the robot's outputs, it's quite possible that over time he would begin to associate interactive opportunities with certain input streams of language, based on the events surrounding previous occurrences of those streams and the events that followed from his answers. Communication is directive; it is expectational. It serves a function in guiding behavior. A child doesn't learn language by answering questions while locked in a room, but by interacting with the people using the language and with his or her environment: by following hand gestures and facial features, and by tracking the results of those interactions. While Searle might not come to a perfect understanding of Chinese, it seems clear that through these interactions he would learn something; some meaning could be attributed to the sounds and symbols.
Imagine you are with someone who speaks Chinese, and they say "follow me," but all you hear is a meaningless stream of sound. Then they gesture (the universal "follow me" hand gesture) for you to follow. It wouldn't take long to associate their sounds with the meaning "follow me." But what if you've never seen the gesture? Well, you can sit there staring at them, doing nothing, but that won't help. You can try different things: move backwards, sit down, hop up and down, spin around. But this person is probably only going to stop gesturing when you start moving forward. And now that you have that memory, the next time you see the gesture you can think, "What happened last time I saw that? What brought that interaction to a successful conclusion?" Too often philosophical thought experiments view a problem as a snapshot in time, and this leads to mistaken conclusions, because the issue at hand can really only be understood by taking into account temporality, development, and ongoing interaction. Of course a man in a room manipulating symbols doesn't really understand Chinese; but, importantly, a child could never actually acquire language in that sort of situation either.
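The trial-and-error episode above can be sketched as a simple learn-from-success loop (all the names and the success test here are illustrative, not a model of any real learner): try actions until one brings the interaction to a successful conclusion, store the winning action against the signal, and replay it the next time the signal appears.

```python
import random

def respond_to(signal, memory, actions, succeeds):
    """On a familiar signal, replay what worked before; otherwise
    try actions at random until one succeeds, then remember it."""
    if signal in memory:
        return memory[signal]          # we've solved this one before
    candidates = list(actions)
    random.shuffle(candidates)         # naive trial and error
    for action in candidates:
        if succeeds(action):           # the interlocutor stops gesturing
            memory[signal] = action    # store the successful association
            return action
    return None                        # nothing worked this time

memory = {}
actions = ["stand still", "move backward", "sit down", "move forward"]
# Only moving forward ends the "follow me" gesturing (assumed here).
first = respond_to("follow-me gesture", memory, actions,
                   lambda a: a == "move forward")
second = respond_to("follow-me gesture", memory, actions,
                    lambda a: a == "move forward")
# Both calls end in "move forward"; the second comes straight from memory.
```

The point of the sketch is the temporal structure the thought experiment leaves out: the second encounter is different from the first precisely because the first one happened and was remembered.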
This is not the first time I've written about the importance of interaction in our conscious experience, but interaction alone is not enough for meaning. Interaction is part of a temporal process that also includes memory and learning. A system has to be able to interact with its environment, form memories of those interactions, and learn what sorts of affordances those interactions represent based on their results. This certainly doesn't answer all our questions, since language acquisition takes place in a system (a human being) already able to represent and understand aspects of its environment. You "see" the world, you perceive the world around you, you are not a mindless robot; language is built on top of this system. That isn't a question I'm going to tackle today, but hopefully I've made language understanding a little less mysterious while pointing out the flaws in the Chinese Room thought experiment. These ideas would also necessarily constrain any theories or possible implementations of inorganic language-using systems.