What is a memory?
Today I want to talk about memory: what's wrong with some of our standard conceptions of how memory works, and even, fundamentally, what memory is. After decades of research in the cognitive sciences we seem to know more about memory than ancient philosophers thought imaginable, and we're learning new things every day (see my post on the role of theta waves in memory formation). A quick visit to the Wikipedia page on memory will provide ample information on the encoding, storage, and retrieval of memories. It discusses the difference between long-term memory and short-term memory, and talks about working memory. It elucidates the difference between implicit and explicit memory, and it even gets into the different brain regions involved in memory. Finally, it discusses cellular biology, the real underpinnings of memory formation. We've decided that memories are "encoded" by synaptic change, that these cellular changes "store" our memories. All we need is some sort of cue, the correct neurons proceed to fire, and bam, we've retrieved our memory from storage! But not a single sentence on this Wikipedia page is devoted to considering just how the heck changes in the strength of connections between cells cause memory formation! Where exactly is this memory? If you open up a brain and look inside you won't see any memories. You don't open up a neuron like a storage container and have memories come spilling out. What about a network of them makes this story any different?
Let's consider this idea that a pattern of neurons encodes a memory. What does that mean exactly? Let's use a computer analogy for a second. A computer encodes, say, an image as a pattern of 0s and 1s on a hard drive. But those 0s and 1s are completely meaningless to the computer (take a look at my Chinese Room post if you want to explore this idea further). It doesn't *see* an image or *know* that one pattern of 0s and 1s is a picture of a dog and another is a picture of a cat. It's just bits, a meaningless stream of information. The central processor passes that pattern to a program, and that program causes a specific array of pixels on a computer screen to light up. And only then, when observed by someone, does that "encoding" gain any meaning…it never means anything to the computer. An encoding needs to be interpreted by a knower for it to represent something. So how is it that a pattern of neurons that encodes a memory can become meaningful to you?
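The point that bits mean nothing to the machine can be made concrete with a small sketch (the byte values and variable names here are my own, purely illustrative): one and the same byte pattern can be read as text, as a number, or as pixel intensities, and nothing in the bytes themselves selects one reading over the others; only the interpreting program does.

```python
import struct

# A hypothetical 4-byte pattern; the values are arbitrary, chosen for illustration.
raw = bytes([72, 105, 33, 0])

# Reading 1: interpret the first three bytes as Latin-1 text.
as_text = raw[:3].decode("latin-1")          # "Hi!"

# Reading 2: interpret all four bytes as a little-endian 32-bit integer.
as_number = struct.unpack("<I", raw)[0]      # 2189640

# Reading 3: interpret them as a 2x2 grid of grayscale pixel intensities.
as_pixels = [[raw[0], raw[1]], [raw[2], raw[3]]]

print(as_text, as_number, as_pixels)
```

The bytes never "were" text or a number or an image; each meaning exists only relative to an interpreter, which is exactly the worry about calling synaptic patterns "encodings" of memories.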
You're not aware of the pattern of neuronal firing; in fact, you're not aware of any of the physical processes in your brain. You are just aware of a memory. But there's also no little man inside the brain who takes that pattern of neuronal firing and understands what it is: no little guy who sees the neuronal pattern and goes, "ahhhh…that's the time I went skiing in Vermont," or who, like a computer, takes that pattern and projects it on a screen in some sort of internal theater in your mind. It's not like there's a "you" who experiences this memory. There's not a YOU and a MEMORY. There's just you in the state of experiencing what we call a memory. Your conscious experience, your sense of self, emerges from the same neuronal firing, the same physical processes that "encode" a memory. How any amount of neuronal firing, how any physical process at all, can cause conscious experience is a mystery.
You experience all sorts of different things. And whether it's the experience of remembering, or perceptual experiences (seeing, hearing), or the feeling of pain or anger or love or excitement, for all these aspects of consciousness there is a brain state that underlies them. 'Encodings', as the standard literature would put it. But brain states are just physical processes. It's just a big web of interconnected cells passing signals electrochemically. Why should any amount of physical process lead to conscious experience? Atoms aren't conscious. Stars aren't conscious. Cells aren't conscious. What is? Algae? Trees? Worms? Mice? Cats? Apes? Humans? Why are some physical processes conscious, and others not? There's nothing in our laws of physics that would predict it, neurons or no neurons. And all of science so far seems ill-equipped to address this question. Neuroscience can point out all sorts of important correlations between certain aspects of cognition and consciousness, but it has no tools for even trying to address the phenomenal (subjective experience) aspects of those processes. Psychology can illuminate and describe all sorts of aspects of human behavior and thought, but it already starts from a level above the physical processes. Together these sciences can tell us which brain states are conscious and which aren't, but this just raises the question all the more: why are some types of neuronal firing unconscious, and other types conscious? It's all the same neurons, right?
I was talking about memory though, and so you might be thinking, how does all this explicitly relate to memory? Well, once we realize that the encoding framework has serious theoretical issues for consciousness, we can think about how this affects our conception of memory. How exactly does a pattern of neuronal firing relate to the experience of a memory? What is the necessary relationship between that pattern and the memory? For instance, let’s say neuroscience can determine the exact pattern of connections that constitute a memory by using the most advanced neuroimaging techniques, and subtracting out everything that is deemed to be not “necessary” to the memory. Some set of neurons IS your memory.
So if that same exact pattern of firing is later stimulated, will you have the same "exact" memory experience? What if you had artificially stimulated that same pattern *before* the memory was ever formed? What would you subjectively experience? We know we can implant memories through psychological means (through interactions with words and pictures), but can we implant them through direct neural manipulation? Is that even a coherent concept? If we artificially strengthen certain synapses, and create new connections on other neurons, what would be the mental change that occurs? If you find yourself skeptical that creating some random neuronal connections in the brain could cause someone to have specific memories they never experienced, you're probably right to be (though until the experiment is run we can't be sure). But this has to tell us something about the incoherence of neurons actually 'storing' memories.
What about when you have a lot of very similar experiences? Are the various experiences literally very similar neural patterns, where a few neurons veer right instead of left? Or can similar experiences be underlain by completely different patterns of firing?
Psychologists often talk about multiple memories merging to stabilize a memory. But is that really what is going on? How does an encoding for a "general" memory come about, if each new experience is explicitly encoded as its own memory? Meaning, if one experience encodes pattern A, another pattern B, and a third pattern C, and you then later have a general memory that is neither A nor B nor C, what exactly makes that pattern a general memory if it itself was never encoded? If memories are encoded by patterns of neuronal firing, how can a memory arise from a new and different pattern? What is the difference between thinking about your ski trip last week, your skiing experiences over the last few years, and the idea of skiing in general? Are they different encodings? Or is there something else fundamentally going on here?
I don’t have the answers; I barely even know how to pose the questions, though some of the things I bring up could conceivably be researched. What I’m arguing is that we might need an entirely new conception of how memory works if we ever want to make real breakthroughs in understanding ourselves. We have long known that memories are not faithful replicas of past events. Memories are reconstructed at the time of recall. And that act of recall at the same time necessarily changes them. That general memory I described above may not be a memory at all, but an entirely new mental experience constructed in the moment, a story I’m creating, which we label “memory”. Even your specific memory of a party last weekend is itself an entirely new mental experience every time you go through the act of “remembering” it. And even though scientists for the most part have long ago left behind the naïve concepts of memory as a filing cabinet, we are still stuck in a paradigm that may prove insurmountable because figuring out how to even ask the right questions is perplexing.
13 Responses
4:30 am
[…] Greg at cognitivephilosophy.net has a fascinating post about neurological descriptions of memory, its relationship to consciousness, and …. […]
8:48 pm
Hi Greg,
Daniel C. Dennett addresses these very issues by claiming that the neuronal firings that constitute brain activity are not "instantiating" memories or consciousness but are, indeed, pure consciousness itself. An active computational parallel-processing brain does not produce sentient awareness (for whom?) but is instead, when functioning, in a perpetual reactive state to its environment. Our special consciousness is, of course, dependent on our capacity to use language and syntax to validate the user illusion in the creation of a fictive self, so as to make this mode of inquiry even possible. How the brute physical reductionist chemistry of the soft grey matter accomplishes this is still a profound riddle. But it is important to believe that it is not intractable and that at some point we will ultimately come up with answers.
9:56 pm
Hi James. For the most part I tend to really enjoy Dennett's work, though I find he's much better at pointing out and illuminating flaws in other theories, and at giving us a better framework to think about consciousness issues, than at offering his own replacement theory. In general, I do agree with him that we are not conscious of things (memories in this case) which are patterns of neural activity, but that our consciousness, our being in the state of remembering, is emergent from those underlying processes. And I also agree that how any amount of physical process can lead to this is still a mystery. I'm sometimes unsure of whether Dennett thinks this problem is worth contemplating. In his earlier writings, he seemed to be of the mind that ascription of mental states is the best we can hope for, that there is nothing else going on. Later on he started talking about real patterns and functional organization, and I do think that's much closer to the mark. But when he says that once we explain the function, there's nothing left to explain, I can't figure out whether he truly means that qualia are an illusion in the sense that there is nothing to explain, or that once we explain the function we will understand how what we currently call qualia comes about. I would agree with the latter, but I don't find his framework capable of addressing it.
I think if he ditched the computationalist framework (and further the whole idea of the brain as an information processor) he'd be much better off. But that's so far a minority opinion in cognitive science and philosophy (though gaining ground strongly; see the dynamic systems and embodied cognition approaches and anything to do with sensorimotor coupling (O'Regan and Noe had an absolutely fantastic article in Behavioral and Brain Sciences some years back, "A sensorimotor account of vision and visual consciousness", if you can get your hands on it)). The problem is, many in these new emerging fields, while able to account for many aspects of cognition that classic models cannot, eschew the entire idea of representation altogether, and don't really have an answer for subjective experience, the explanatory gap. I'm optimistic that a better explanation of function emerging from these new frameworks can fit right into the holes of Dennett's theory.
10:43 pm
Greg, thank you so much for responding so quickly to my short reply to your provocative and interesting insights into memory and consciousness. I honestly enjoy the high level of discourse you encourage through the well-thought-out arguments you present on such important issues concerning what being a clever, self-conscious mammal entails.
My background is in the visual arts, and in 1991, perplexed by what could motivate me to paint images of such enormous complexity (thousands of spectators in sports stadiums), I stumbled upon Dennett's remarkable and entertaining "Consciousness Explained" and instantly became hooked on cognitive philosophy. After devouring all the referenced authors, from Searle to Minsky to Churchland and Pinker, I came away still convinced that Dennett's hunches and his analogies to memes (a la Dawkins) and computers seemed to offer the most satisfactory paradigms.
But you fall into a classic trap when you question subjective experience, and make an especially weird complaint when challenging Dennett's exhaustively argued notion that there is simply no need for additional explanations beyond the level of functional or physical patterns and neurological firings. Searle vehemently argues that Dennett avoids the "hard problem" of real qualia-driven awareness and ineffability, like the flavor of chocolate, etc., but I don't think you buy into Searle's mysterian approach. Yet you seem baffled by Dennett's "framework". You are very astute, and I slowly and carefully read what you write, so please elaborate on this cryptic framework and how it dilutes the logic that posits no need for explanations above and beyond the evolutionary constraints on a healthy primate brain.
It is amazing that you bring up the article by O'Regan and Noe, as I read it with enthusiasm and even made a copy of it. It was brilliant and original, and it in no way challenged Dennett's assumptions but amplified them with its original emphasis on locomotion and coordinated movement.
To conclude: yes, there are holes in Dennett's theory, and he would be the first to admit it, but not in his premise that subjective experience requires no extra explanation above its still dimly understood physical brain states. Like the poetic user illusion of creating a self by essentially maintaining a useful internal dialogue ["a running narrative fiction"], subjective human emotional states are instantiated through language, rehearsal, and imitation.
6:50 pm
Hey Jim (or James, let me know which!), that’s so exciting that you’ve read that O’Regan and Noe paper, so few people have heard about it. I feel like years from now that paper will be considered a classic breakthrough in our understanding of cognition.
It's not that I question Dennett's argument that no extra explanations beyond the functional explanations are necessary; it's that I don't think Dennett's theory itself accounts for those explanations. Dennett's recourse to function is mainly through evolution. But evolution describes the process of selection that allows functional changes to take place; it itself is not an explanation of how that function allows experience to arise (this is a remnant of his early intentional stance stuff, I think). Dennett is working within the "brain as an information processor" framework, and if I remember correctly subscribes to the "computational" approach (though even if he preferred connectionism, the same problems would exist). The brilliance of the dynamic systems and embodied cognition movements has been to reject the entire conception of information processing in favor of 'sensorimotor' accounts of cognition. O'Regan and Noe even go so far as to explicitly reject the idea that subjective experience arises out of brain states, since neural activity alone is not sufficient; it's the interaction of the environment and sensorimotor systems from which experience emerges. The brain is an integral and necessary component, but it is not "the brain itself" that realizes that experience; rather, it is a system engaging in sensorimotor exploratory behavior. Experience is not a passive representation, but the act of doing.
This is not to contradict many of Dennett's overall points. O'Regan and Noe indicate their theory would fit right into Dennett's, and Dennett even said as much in his response to them. But as far as I know he has not, in his writings, explicitly embraced the dynamic systems approach of rejecting information processing. It could be because there still IS a problem with the sensorimotor account, namely the rejection of representation altogether. While I agree with rejecting naive accounts of "re-presentation", I'm not sure the same can be done for the emergence of a point of view in general. It seems to me the sensorimotor account is a better mechanistic explanation, but it still cannot explain the phenomenal aspects of experience.
I’ve recently been reading someone who finds a balance between these views, embracing dynamic systems and embodied cognition while accounting for the emergence of representation. Mark Bickhard. In correspondence he’s indicated that depending on how he were to take Dennett’s writing, his theory either fits right into Dennett’s holes, or they disagree fundamentally, but he’s not sure which. I can send you some of his stuff if you’re interested, or I can even send along a paper I wrote that was a survey of the issue of “representation” in cognitive science. Let me know.
I do have one significant disagreement with Dennett, and that is his reliance on language to account for representation. Because he has not really embraced or internalized the sensorimotor accounts, he doesn’t yet see how representation can occur in animals which can’t form “concepts”. But the sensorimotor account embraces implicit presuppositions of an organism, functional presuppositions, and thus can account for representing in non language using animals. Though I do have sympathy for views which place an importance on language in regards to issues of the self and reflective thought, I think Dennett goes too far.
Btw, thanks for commenting! I’m glad you’re enjoying the blog.
10:34 pm
Hello Greg,
Just a quick response, as life is short and detailed explanations back and forth can quickly get out of hand; at some point we run the risk of talking past each other. If you haven't read "Kinds of Minds" (c1996) by Dennett, I believe it will address some of the problems I detect in your theories concerning dynamic systems and sensorimotor accounts of consciousness and, most importantly, allow you to at least reconsider some of your "significant disagreements" with Dennett on the importance of language and evolution in representational thinking. Steven Pinker's impressive "How the Mind Works" (c1997) is also clear and brilliant on the subject.
You are a deep thinker and I admire you for taking risks in your theories concerning what and how sentience produces subjective experience, but please be careful not to downplay the tremendous crane of evolution and language acquisition when talking about cognition.
I am not a graduate student in the contentious field of Cognitive Science, but I have read and reread many books on the subject, and I am aware of the debates and egos while staying safely on the sidelines. I would suggest (gently and politely) that you re-evaluate your views on O'Regan and Noe's dismissal of brain states or neural correlates in their integration of sensorimotor systems with a constantly changing environment, for you must have misconstrued their observations. They would not (how could they?) explain experience and perception, and especially sensorimotor responses to the environment, without having it ultimately "brain based" and pressured by the demands of evolutionary biology.
Please do send me your paper on Representation in cognitive science. It is a subject dear to my heart as a visual artist.
Your admirer and blog fan,
Jim Sparks
3:44 pm
Hey Jim, yes, we do run the risk of talking past each other sometimes. So I do want to stress that my (and O'Regan and Noe's, and others') criticisms of the role of brain states or evolution are not attempting to downplay the integral and important role these things play. The role of evolution is absolutely essential in leading to organisms able to represent, and to experience consciousness. And obviously the evolution of the brain and the role of neuronal firing are also absolutely essential. The disagreements are rather with the nature of that role. What is it about neuronal firing that allows consciousness to emerge? Saying that we evolved neural subsystems to allow for different cognitive processes is, in itself, not enough of an explanation of "function" to explain qualia. That's what I mean when I say I agree with Dennett's recourse to evolution, but I'm not sure his explanation of the "function" will get us to representation. This is also why I disagree with his stance on language (yes, I've read Kinds of Minds, which I thought was on the whole excellent): for him experience requires concept manipulation, and that requires language. I would agree that "reflective thought" and the kind of higher-order consciousness we're used to require language, but if representation is emergent from (or in) function, then a better understanding of function would point to the fact that non-linguistic animals do experience the world, if they are capable of representing the world.
On the “brain state” front here are some quotes from the O’Regan and Noe paper:
Under this view it is not the brain state in and of itself, or even patterns of neuronal firing in and of themselves, but the interactions that the neuronal firing allows to take place, from which experience emerges. As I've mentioned, I think O'Regan and Noe are still missing something, and I think Bickhard has found it in his account of representation. I'll send along the paper shortly!
7:56 pm
Hi Greg,
Thanks for the quotes from O'Regan and Noe and your clear, concise thoughts on the matter. I am encouraged by your intuitions that something is missing, and I eagerly await Bickhard's insights into how the brain deals with representation.
I might add that the quotes you selected would be applauded by Chalmers, McGinn, and Searle, who would welcome the emergent properties of vision and the whole dispensing with emphasis on brain states in sensorimotor "contingencies". To me it all sounds incoherent and short-sighted, or, as Dennett would say, it is mistaking "a failure of imagination for an insight into necessity". In any event, you inspired me to reread "Sweet Dreams: Philosophical Obstacles to a Science of Consciousness" by Dennett, published in 2005. The book convincingly explodes the myth of qualia and the flaws of David Chalmers' hard problem of "real" conscious sensations that are irreducible to "mere" brain activity. Have you read this gem of a little book (178 pages)? If you haven't, I must warn you that he is brutal in his rebuttals and arguments and presents his functionalist "framework" for a science of heterophenomenological third-person approaches to consciousness in a very persuasive light. It is hard to put down, and, for you, I feel it might open up possibilities to, at the very least, expand the boundaries of philosophical debate on mind/brain controversies. The book is simply brilliant!
Fondly and with great admiration,
Jim
10:33 pm
Hey Jim, I'm actually not sure how much Chalmers and Searle would applaud those quotes (I'm not very familiar with McGinn, so I can't make pronouncements). It's worth noting that those quotes alone, out of the context of their broader theory, might be jumped upon, but the framework they set up wouldn't seem to mesh with Searle, and definitely not with Chalmers. The sensorimotor account is explicitly a naturalistic, maybe even a physicalistic, account, and O'Regan and Noe discount property dualism early on. Searle might be a bit trickier, since I'm not sure he believes consciousness is multiply realizable, and these guys would, I imagine, say that anything that can take advantage of sensorimotor contingencies in the proper way, i.e. replicate the functionality, would allow consciousness to emerge.
I have read Sweet Dreams (you'll actually be hard-pressed to find any of Dennett's books that I haven't read, and I've made my way through a fair number of his papers as well), and I actually have a review of it somewhere in which I express some frustration with Dennett's arguments. I've been thinking about adapting it for a blog post, so you'll probably see it some time soon. Again, I agree that the answer lies in functionalism, and I agree that the account of functionalism will lie in our evolutionary history and our neuronal firing; I just find Dennett's theory of what the actual neuronal firing does and how it realizes consciousness to be lacking (even though I do have sympathy for the global workspace theory in general). I'll get those Bickhard papers sent over shortly! We'll see if we can convert you. 🙂
3:48 pm
Thanks, Greg, for encapsulating so clearly your position relative to Dennett's theories. I can relate to what you say and accept some limitations, too, on just how sentience, whether emergent or not, can flourish in its special vividness and acuity. I fully expect in the not-too-distant future that advanced computers will assist in modeling the chemical and nano-scale aspects of consciousness, as the futurist Ray Kurzweil so seductively and confidently advertises in The Age of Spiritual Machines. Roger Penrose is convinced it originates in sub-neuronal microtubules (well, why not?). Dennett, as you know, is readable and entertaining, but there is no fact of the matter in cognitive philosophy, which makes it so stimulating.
Looking forward to those Bickhard papers. Do other responders engage in this level of give and take correspondence? Jim
6:00 pm
You certainly take the cake on that front, Jim. But I wish more did; I really enjoy it.
2:07 am
I am a freshman chemical engineering student at the University of Delaware, and I would like to say that this conversation truly stimulated me intellectually. I have never been able to pose these ideas as clearly as you have written them here. Your thought process is clear as a bell. Contemplation of cognitive philosophy has left me sleepless for nights ever since I was a child. In turn, I plan to dedicate my life's work to uncovering answers to some of these problems. I am going to continue to read papers on cognitive philosophy to find a problem I would like to research, but if you have ever thought of questions you would love to have answered by the ever-growing grasp of science, I would love to hear them. Seriously, my life is still so young that I could begin my research in anything and focus my studies anywhere. It would be best for me to find what I want to do now. It's never too early to start.
Thank you for your ideas,
Adam Moyer
10:48 pm
Hi Adam, thanks for the kind words!
If you're interested in the kinds of things I talk about on this blog, I'd suggest doing some research into questions in "cognitive science" as opposed to "cognitive philosophy". As far as I'm aware the latter isn't actually an official academic discipline; it was just a phrase I thought was a neat way of indicating that I'd be writing about these issues surrounding the mind/brain from a philosophical perspective.
If your plan is to stick with chemical engineering, you might be interested in taking some courses in philosophy of science and seeing if any of that interests you. I'm not particularly aware, though, of which issues in cognitive science segue with chemical engineering, as much of what is done in cognitive science starts from the biological level. If the kinds of things I talk about here are truly what you want to pursue, a change in major might be in order, and that could be anything from biology, to neuroscience, to psychology or philosophy, depending on what questions you're interested in and what is available at your school. Depending on how much the engineering aspect interests you, there are tracks that could lead you toward working with robots and artificial intelligence.