Book Review: Neuroscience and Philosophy: Brain, Mind, and Language
This book is a good ol’ fashioned philosophers’ slugfest at its best. But I’m getting ahead of myself…
T.H. Huxley famously said:
how it is that any thing so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp.
In recent decades the field of neuroscience has claimed to have gone a long way toward answering this question. We have studied the inner workings of the brain and have been able to correlate neuronal activity in certain areas of the brain with specific cognitive processes. For instance, knocking out the activity of certain areas prevents a person from engaging in the cognitive processes correlated with (controlled by?) those areas. We now say in passing that the visual cortex is responsible for vision, the auditory cortex is responsible for hearing, memories are stored in the hippocampus, and fear is in your amygdala. And so it is that the brain, or parts of it, sees or hears, thinks or believes, hopes and fears, plans and decides.
Along come Maxwell Bennett and Peter Hacker, a neuroscientist and a philosopher, with a book expressing their displeasure with this pervasive feature of neuroscientific terminology and its use not only by neuroscientists but by neuro-friendly philosophers as well. They argue that it’s not specific areas of the brain that see or hear or feel or remember; it is people who do! These are attributes of human beings, not of their brains. The brain is simply not an appropriate subject for psychological predicates, and making it one has, according to Bennett and Hacker, serious consequences for both neuroscience and philosophy. It leads us down dead ends and makes research difficult, if not futile. We think we’ve answered Huxley’s riddle from above, when in reality we’re no closer to answering the fundamental question he posed.
Bennett and Hacker place the blame for this state of affairs squarely on the shoulders of Descartes. Ask any neuroscientist and they will strongly deny the plausibility of a dualistic theory of mind. And yet the predicates that used to be ascribed to an immaterial mind, neuroscientists unreflectively ascribe to the brain instead. They have replaced mind/body dualism with brain/body dualism, retaining the same basic structure. And so there is talk of maps in the brain, symbols, representations, information, and so on. But the fact that certain features of the visual field can be mapped onto the firing of groups of neurons in a particular brain region does not mean that these maps actually “exist” in the brain, or that the brain actually “uses” these maps to “formulate hypotheses” about what is visible.
by speaking about the brain’s thinking and reasoning, about one hemisphere’s knowing something and not informing the other, about the brain’s making decisions without the person’s knowing, about rotating mental images in mental space, and so forth, neuroscientists are fostering a form of mystification and cultivating a neuro-mythology that are altogether deplorable.
Now, with that out of the way, here’s where things get interesting. In their original book, Bennett and Hacker devote two whole sections to criticizing the views of Daniel Dennett and John Searle, respectively. Two years after its publication, they were invited to participate in an “Authors and Critics” session at a meeting of the American Philosophical Association, where their critics were, you guessed it, Dennett and Searle. Dennett and Searle dug into them over the course of that session through prepared remarks and questions, and some time later Bennett and Hacker published their own “reply to the rebuttals” stemming from the conference. The book I’m discussing here is that fight, laid out for our amusement and education. The first part is a few specific selections from Bennett and Hacker’s original book, followed by Dennett’s and Searle’s responses. Dennett and Searle don’t pull any punches in their criticisms, and Bennett and Hacker then fire right back. So the book ends up being worth it both for the show and for the great philosophical work presented by a group of great thinkers.
Dennett and Searle (writing separately) make for strange bedfellows, given that Searle’s Biological Naturalism and Dennett’s functionalism (of the computational variety?) stand in strong opposition to each other. Searle’s formulation of the Chinese Room was a direct response to computational functionalist accounts of mind, while Dennett’s Intentional Stance, taken to its extreme, ascribes beliefs and desires to lawn mowers and thermostats. (I’ve shared my own take on the Chinese Room elsewhere.) And yet both agree that Bennett and Hacker are way off. Go figure: two philosophers defending neuroscience against the attack of a neuroscientist! But these are two great (scientifically minded) philosophers, and their criticisms do the work of forcing the reader to question their support for what at first seemed so intuitively appealing in Bennett and Hacker’s arguments.
One specific example that sticks out is Searle’s criticism of Bennett and Hacker’s position on the location of conscious experience. If you cut your foot, then to Bennett and Hacker the answer to the question “Where is your pain?” is obvious: it’s in your foot! But Searle points out that there are people missing limbs who feel pain in their phantom limb. For such people, who no longer have a foot, where is the pain located? Bennett and Hacker say the pain is located where the person’s foot would have been. Searle finds this ridiculous: the pain would then have to be said to be in the bed or under the sheet; it would have to exist in a location where no part of the person’s physical body exists. Searle argues that considerations like these, which neuroscience can explain, lead to the conclusion that it only makes sense to say the pain exists in the person’s brain, and this is as true of someone with a real foot as of someone with a phantom foot. There’s an intuitive appeal here. Damage to your foot is just that: damage. For there to be pain, a signal has to be sent up to the brain; stop that signal from reaching the brain, and the person feels no pain. So it does seem that the pain isn’t really *in* the foot. Rather, damage to the foot causes you to have an experience of pain in which it feels to you as though the pain is in the foot.
I happen to think that all of the responsible parties are wrong about the location of subjective experience. I’m not entirely convinced that it even makes sense to ascribe a location to subjective experience. Saying that the pain is “in your brain” is precisely what Bennett and Hacker criticize, and even given the data surrounding something like phantom limb syndrome, I’m sympathetic to this aspect of their criticism. As Dan Dennett himself so humorously and engagingly points out in his essay “Where Am I?”, the question of *where* the self exists is not so easily answered, and our intuitions often pull us in contradictory directions. So while it’s clear that saying the pain is in your foot ends up being indefensible, saying the pain is in your brain seems just as sloppy. What’s in your brain is a neurological process. Where the pain is, and even what the pain is, doesn’t seem to be answered by a neuroscientific understanding of the processes that underlie it.
This was a great read, both because you get to see some philosophical heavyweights duking it out, and because they cover some fascinating topics in neuroscience, philosophy of mind, and philosophy of language. I didn’t explicitly cover much of this last topic, but the specter of Wittgenstein was summoned regularly, and interestingly, by all parties involved. How these philosophers can all be so influenced by Wittgenstein, while simultaneously disagreeing with each other so much, was an amusement in its own right.
3 Responses
3:34 am
Dear Greg,
Thanks very much for sending me notice of this book and for your very interesting comments on it. I gave a talk to the Bath RLSI on Jan 8th in which I offered “A Pragmatic Conception of Consciousness” that addresses all of the issues you raise. My proposal, in brief, is that consciousness is not a state, either spiritual or physical, but an activity, so that whoever engages in it can be said to be acting consciously. This entails that consciousness is a virtual attribute, i.e., that we are conscious by virtue of engaging in that activity, and that we can be conscious of, and only of, whatever can become an object of that activity, such as sensations, images, events, actions, assertions, reasons, numbers, persons, etc.
The activity I propose is discourse. Sensations are not forms of consciousness but objects of it, and discourse enables us to direct our own attention and that of others onto our sensations and, unlike animals without language, to keep that attention focused on a sensation (e.g., an ache in a limb) even when it has ceased, so that we can wonder what caused it, whether it will return and, if so, whether we can bear it, what can be done about it, and so on. This applies whether the limb is real or a phantom: unlike other animals, we can be hypochondriacs.
I say that “consciousness is attention directed by discourse”. The attention of non-linguistic animals is drawn now here, now there; it’s at the mercy of external and internal events, and cannot be focused intentionally as ours can by our discourse. This entails that animals have sensation without consciousness, which sounds like a contradiction in terms, but only if we are already committed to the current, common-sense view that sensation is a form of consciousness, and I want to overthrow that consensus.
Thanks again,
Chris
12:44 am
Hi Chris, interesting thoughts, thanks for sharing. In your first paragraph it sounded like you were saying something very similar to what the embodied cognition people say about the necessary conditions for consciousness. And I’m on board with a lot of what comes out of embodied cognition research. You lost me after that though.
It’s not clear to me why that activity has to be one of discourse. Why does consciousness only emerge through linguistic manipulation of concepts? Why can’t non-verbal communication do the job? Why can’t interaction more generally do the job? I’m also not clear on the distinction you make between sensation and consciousness. So non-linguistic animals have sensation, they have some sort of subjective experience, but they are not conscious? You’re right that it sounds like a contradiction in terms. What is your definition of consciousness such that subjective experience, qualia, whatever, can exist without it? At that point, why not just say there’s a meaningful difference between the consciousness of language-using animals and that of non-linguistic animals? That it’s richer, etc. Why cleave it off?
9:10 am
Firstly, why does it have to be discourse?
My approach is to recognise that “consciousness” is a word we use to refer to our experience, but that there is no agreement between us on what it is, and to propose a strategy for establishing the agreement we seek. I propose to analyse what is implicit in our use of the word and to draw conclusions from that.
1. If we treat each other as interlocutors, we are implicitly treating each other as capable of authorizing speech-acts (and therefore also non-speech-acts) for which we can hold each other responsible.
2. If we treat each other as authoritative and responsible, we are implicitly treating each other as “conscious” (whatever we mean by “conscious”).
3. Therefore, if we treat each other as interlocutors, we are implicitly treating each other as “conscious”. (If I ask you, “Are you conscious?” and you reply “No!” we both know you’re joking, i.e., exploiting ironically the self-evident absurdity of that denial.)
4. However, there is no absurdity in denying the consciousness of non-linguistic animals because we cannot have with them the interlocutory relationship which we speakers have with each other and which implicitly commits us to regarding each other as “conscious”. This makes logical space for the question to be asked whether non-linguistic animals are “conscious”, and this question is comprehensively begged if we assume dogmatically, as common sense currently requires us to, that all sentiently responsive individuals are “conscious”.
5. The reason for doubting that what used to be called “dumb” animals are “conscious” is that the external, semantic-pragmatic relations between interlocutors which commit us to treating each other as “conscious” are so radically different in kind from the internal, electro-chemical relations between neurons on which sentient responsiveness is based, that neither can reasonably be regarded as a reduced or extended version of the other. The only basis for claiming that dumb animals are “conscious” is that they are sentiently responsive, but I have shown that this is not a sufficient reason for the claim, i.e., not one that I am bound to accept.
6. My claim is (a) that what we mean by “conscious” is derived from our own experience, but (b) that our experience of “consciousness” is shaped by our interlocutory relations, and (c) that we have no warrant for projecting our experience onto organisms who are incapable of those relations.
7. Discourse enables us to keep attention, our own and that of others, focused on objects, concrete or abstract, whether they are present to us or absent or even merely imaginary, and I want to focus on how this ability, which we internalize as we learn to participate in discourse, shapes our experience. We experience only what we are conscious of, and we are conscious only of what we pay attention to, so experience is not a given: it is something we create through discourse. E.g., listening to BBC Radio 3, I am constantly impressed by how the acquisition of a musical vocabulary enables presenters to focus on features of musical texture and structure which completely escape someone like myself, who responds to music intuitively but lacks that vocabulary.
Second, what about qualia?
“Likeness” is an experience accessible only to speakers who can use the word “like” (or its equivalents in other languages). Thinking of Thomas Nagel’s question: I think we can imagine what it would be “like” for us to have echo-location (blind people have some experience of what it might be like), but not what it would be like to be a bat, because, strictly speaking, it’s not “like” anything to be a bat. “Likeness” doesn’t exist for bats. Claiming that bats have “subjective experience” is another way of claiming that they are “conscious”, which, as I have shown, I am not bound to accept.