The Brain and the Cultural Animal
Those of you who have read my articles and essays on MySpace or on other blogs where I have participated (or that I used to keep) may recognize the content of the following article. If you have read it already, feel free to skip it. I include it here, in modified form, because the articles in this series will eventually form part of a book I am going to write in a more concise, formal and academic manner. The second reason to include it is that it is a stepping stone to a broader picture regarding memetics, economics, politics, ethics and spirituality.
(c) 2009-2010, Pedro M. Rosario Barbosa
Some Rights Reserved
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
"C-sharp is green." What the heck?! What does it mean that C-sharp (a sound) is green? The identification of a sound with a color can happen to a particular kind of synesthete. When he or she hears a sound, it will evoke a color. There is a "blending of the senses", which is the meaning of the word "synesthesia" (σύν = together; αἴσθησις = sensation).
One very interesting form of synesthesia is the number-color synesthesia. Some synesthetes would claim that two is red, or that five is green, or that seven is yellow. The relationship between number and color can vary among synesthetes, though.
This was a very strange phenomenon for scientists, and they came up with all kinds of theories about it. One was that synesthetes were speaking metaphorically. For example, men sometimes refer to attractive women as "hot babes"; they are neither hot nor babies. Vilayanur Ramachandran, who has studied this phenomenon extensively, did not agree with this sort of explanation. If it were a metaphor, why wouldn't synesthetes recognize what they say as metaphor? They seem to understand the notion of metaphor perfectly well. It may be argued, for instance, that their mental relationship between two separate sensations is everyday-metaphoric, as when we say that "cheese is sharp". But Ramachandran asked: "Why then would you use a tactile adjective to describe taste? Is this not a mix of categories?"
Another explanation was that number-color synesthetes had played with colored magnets as children: they may subconsciously remember that seven was purple or that five was green, and so on, so that when they look at numbers, they remember the colors of the magnets they used to play with. This didn't make any sense either. Rama knew very well that the first research ever done on synesthesia was carried out by Sir Francis Galton, a half-cousin of Charles Darwin, who discovered that synesthesia runs in families. So Ramachandran reasoned: "Would we have to assume now that playing with magnets runs in families too? This doesn't make any sense. There has to be another explanation."
First, he wanted to show that the phenomenon is real, so he devised a visual experiment. Here it is:
Now, this experiment looks confusing, but here is the idea: look at the fives and the twos. They are mirror images of each other, which is why the display looks so messy. Measure the time it takes you to find a regular figure such as a triangle or a circle. I am not a synesthete, so I had a hard time finding it; it took me a minute or so. A number-color synesthete, however, would find it relatively fast (in a few seconds), and would say something like: "Look! It's a red triangle against a green background. It's jumping out at me." And he or she would be correct.
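The structure of this kind of display can be sketched in a few lines of Python. This is only an illustrative mock-up, not the actual stimulus from Ramachandran and Hubbard's experiments: it scatters 5s over a grid and embeds a small triangle made of 2s (the mirror-image digit that non-synesthetes find so hard to segregate). The grid size, digit density, and triangle placement are all arbitrary choices of mine.

```python
import random

# Mock-up of a grapheme pop-out display: a field of scattered 5s
# with a hidden triangle made of 2s. Non-synesthetes must search
# serially; grapheme-color synesthetes reportedly see the triangle
# "jump out" in color.
random.seed(42)

SIZE = 15
grid = [["5" if random.random() < 0.4 else " " for _ in range(SIZE)]
        for _ in range(SIZE)]

# Embed a small triangle of 2s: rows of width 1, 3, 5, 7 below an apex.
apex_row, apex_col = 4, 7
for dr in range(4):
    for dc in range(-dr, dr + 1):
        grid[apex_row + dr][apex_col + dc] = "2"

for row in grid:
    print(" ".join(row))
```

Printing the grid and timing yourself on it gives a rough feel for how hard the serial search is when the target and distractor digits share the same strokes.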
This could not be explained by remembered magnets or by metaphor. The phenomenon is very real: these synesthetes are literally seeing colors along with the numbers. But why does such a phenomenon exist? Ramachandran's hypothesis is that the way the brain is arranged can explain it. Look at this picture of the brain.
There is a sector of the brain called the "fusiform gyrus" containing a region called the "grapheme area" (green in the illustration of the brain), which associates certain shapes with numbers. For example, if you had grown up in the time of the Roman Empire, your brain would have associated the number seven with the sign "VII" in the grapheme area. In our society, which uses Hindu-Arabic numerals, we associate the number seven with the sign "7", and that is what the grapheme area records. The usual number-color synesthete will see a color when he or she sees "7", but not when shown "VII", which indicates that the grapheme area is involved in the phenomenon.
Now, you will notice in the illustration a red area almost touching the grapheme area. This red region is called "V4", the primary region where visual color is processed in the brain. Ramachandran theorized that the most prominent form of number-color synesthesia is due to closer-than-usual contact between the grapheme area and V4: when a number-color synesthete looks at a number, it inevitably evokes a color.
From Synesthesia to Higher-Level Abstraction
However, there is another kind of number-color synesthete. Some will not only say that 1 is red, 2 is green, and 3 is yellow, but will also see I as red, II as green, and III as yellow. They will likewise say that January is red, February is green, and March is yellow; or that Monday is red, Tuesday is green, and so on. The common denominator of these cases is that this kind of synesthesia apparently treats number in a more abstract and ordinal manner.
This means that the "crosswiring" in these cases does not occur between V4 and the grapheme area, but at a higher level of brain processing.
According to Ramachandran’s theory, a higher-level process related to numbers takes place in a region known as the temporal-parietal-occipital junction (or TPO junction), far from the fusiform gyrus. The reason he suspected this is that when that region of the brain is damaged, the patient suffers from dyscalculia: he or she is unable to add or multiply except through sheer memorization, has a hard time subtracting, and finds dividing practically impossible.
However, the TPO junction is next to the angular gyrus, the place where we conceptualize colors (its cells deal with color processing and carry out a higher-level process than the number-grapheme area). Ramachandran makes a very important point in this whole discussion:
This may seem counter-intuitive, but just think of something like a number. There is nothing more abstract than a number. Five pigs, five donkeys, five hairs, even five tones — all very different, but with fiveness in common (Ramachandran, 2003, pp. 83-84).
For him, the TPO junction plays a role in that abstraction; hence he calls these synesthetes "higher synesthetes". Due to the interconnectivity we see in the illustration of the brain above (Illustration 1), in some cases there can be mixes of lower-level and higher-level synesthesia.
Being Closet Synesthetes and the Origins of Language
Now, Ramachandran reaches a key point in his exposition on synesthesia: he is going to show us that we are all closet synesthetes, because we are in "denial" about our synesthesia.
Wolfgang Köhler designed this experiment, in which audiences are asked which figure they would associate with the sounds "booba" and "kiki". Generally, 98% of people will say that the figure on the left is "kiki" while the other is "booba". Now, why does this happen? Because our brain is biased to associate certain images with certain sounds, and the bias is not arbitrary. Think, for instance, about the way your mouth and tongue are shaped when you pronounce the word "booba": the tongue and lips curve. The same goes for "kiki": the corners of the figure associated with "kiki" make us think of shattering crystal, evoking sharpness, and the way the tongue behaves when we pronounce "kiki" seems sharp too.
This is Ramachandran’s starting point for his proposed theory of the origins of language. If you think about the "booba" and "kiki" experiments, although the relationship between sound and vision is not arbitrary, we have to admit that the two really have nothing in common, because visual appearance is different from sound. The relationship is carried out in our brain; it is the result of how our brain is set up.
Yet, to be able to develop language, we have to conceptualize; we need to abstract. Our primordial conceptuation or abstraction takes place in our mind, where, perhaps, the TPO junction and the angular gyrus play a role. This is possible because the angular gyrus, which deals with abstract concepts, is located at the "crossroads" (as Ramachandran describes it) between the parietal lobe of the brain (which deals with touch and proprioception), the temporal lobe (which deals with hearing), and the occipital lobe (which deals with vision). One of the things that has been discovered about people with systemic damage to the angular gyrus is not just that they cannot perform simple arithmetic, but also that the majority cannot do the "booba/kiki" experiment.
Another thing they cannot do is understand metaphor. Think about it: what is a metaphor? A metaphor is the association of two very different concepts belonging to very different conceptual realms. An example of a metaphor can be found in Shakespeare’s Romeo and Juliet:
It is the East,
and Juliet is the Sun.
As Ramachandran notes, with a bit of humor, this passage does not say that Juliet is a big ball of fire, but that Juliet is "bright" or "radiant" like the Sun. People with systemic damage to the angular gyrus will not get that. They will say: "But Juliet is not a giant sphere made out of plasma in a state of fusion."
Why does this happen? Our brain is also predisposed to conceptualize, even when we are not aware of it. There have been stroke patients who lose concepts; they are not able to grasp them anymore. Some of them lose the concept of "tool": they can no longer conceive of something (a screwdriver, for instance) as a tool. So, if our brain is predisposed to concepts, then it can associate those concepts in certain ways. The visual and auditory relationship shown in the "booba/kiki" experiment illustrates the beginning of abstraction by association. Metaphor illustrates that not only sounds and vision can be abstracted and associated, but also that concepts already formed in our mind can be associated among themselves.
The process of conceptuation is important prior to language development. If we are not able to form abstract concepts, we cannot refer to objects collectively through concepts. We would not be able to say that "Fifi is a dog" if we had no concept of what a dog is.
Now, there is lexicon involved with language too. For that purpose let’s look at the following illustration:
We develop words to refer to objects. But how were the first words formed? Ramachandran theorizes that we have a biological bias to associate certain sounds with certain visual shapes. The first words must have been sounds that our brain associated with certain traits of objects. This is all evolution needs, because once that dynamic begins, there is a "bootstrapping" from the original bias: we become able to associate words, create neologisms, establish associations among words, and so on. As languages evolve, the words we have now in different languages are the result of a long evolution, ever since certain primitive communities first made language out of that original linguistic (visual and sound) bias.
Lexicon is not everything. With our lips we tend to mimic, in certain ways, the objects we are looking at. For instance, Ramachandran wants you to notice the shape of your mouth when you say "teeny weeny", "diminutive", "un peu". Now do the same with "enormous", "large", "grand". In both cases your lips, tongue and mouth shape themselves to "synesthetically" mimic the size of the object you are looking at.
In our brain, this is due to the part of Broca’s area (which deals with language) that controls the mouth, working along with the visual processing of the fusiform gyrus, which also communicates with the auditory cortex. But this is not all. Remember this illustration we showed in Part V?
Remember when we talked about Darwin’s observation that some people clench and unclench their jaws when cutting with scissors, and that this is probably because in our primordial motor cortex (on the right in the illustration) the jaw is close to the thumb and index finger? We know that the face of the homunculus is inverted: the jaw is close to the fingers and hands, while the eye of the homunculus is farther away. Ramachandran calls the phenomenon of the mouth and jaw mimicking the motion of the hands "synkinesia".
Now, think about what this implies linguistically. Haven’t you noticed that you sometimes tend to gesture with your hands as you speak, or when you describe things that you see? For instance, when you describe something tiny, don’t you bring your thumb and index finger together as you say "teeny weeny", "un peu", "pequeñito", "diminutive"? Your fingers imitate what your mouth is doing, which in turn synesthetically mimics the object being referred to.
So, Ramachandran is talking about a multi-directional bootstrapping going on in the brain.
Social Signs which Confirm the Synesthetic Bootstrapping Theory of Language
There are signs that Ramachandran’s "synesthetic bootstrapping theory" of language is going in the right direction, although I think it is not yet complete. I am going to use two examples. These are my examples, not Ramachandran’s, but they illustrate how well formulated this theory is.
The Emergence of Sign Language
At the end of the 1970s, the Sandinista government in Nicaragua created a program to teach hearing-impaired children a conventional sign language. However, in the process of carrying out the program, the children immediately began communicating in a new language of their own, originally a sort of "pidgin" (as linguists call it): a language not yet formalized with a proper grammar. This was known as "Lenguaje de Signos Nicaragüense". One generation later, the children had developed a completely new syntax and a formal language, today called "Idioma de Signos Nicaragüense".
As you can see here, there is a connection between the motor areas of the brain that deal with facial expressions and those that deal with hand motion: synkinesia at its best. We see two simultaneous activities here: a synesthetic and a synkinetic mimicking through facial expressions and hand gestures. However, due to their hearing impairment, these children are not able to enunciate as effectively as hearing people (though many do a great job of enunciating as effectively as they can).
This reveals some things about language. If we remember Part VII, consciousness is realized when a human being grows within a community: we develop a moral conscience, and we develop a conception of selves distinct from our own. Apparently, not only consciousness but also language development is realized in community. María No Name was not raised in a community of hearing-impaired people. The hearing-impaired children of Managua were formed within a hearing-impaired community, which first created a pidgin and then, in a single generation, developed a whole new sign language.
If you look at the way the children mimic visual objects in their use of sign language, they reveal primordial biases. This overturns the way average people conceive of language, as something that takes hundreds or thousands of years to form. Quite the contrary: languages can appear spontaneously, and if a pidgin is created in one generation, a formalized language can appear in the next. This has been shown to be true of creole languages in the South Pacific (including the Philippines and Hawai’i), and in the case of African slaves brought to America from different nations, who interacted among themselves and with Native peoples, along with Spanish, English, French and Portuguese speakers (depending on the region).
The Origins of Writing
The origins of writing show how a theory of spoken language can be applied to writing. Look at the following two illustrations. The first shows the "evolution" of the letter "aleph" in Hebrew; the second, the evolution in Sumerian from a pictographic "star" to the cuneiform sign for "god".
In the first illustration, on the left, we find the very first primitive character for "aleph". As you can see, it is the shape of an ox’s head, with an eye and horns. At the extreme right you see the modern Hebrew version of aleph, almost unrecognizable from the original. In the middle, if you take a good look at the third character (from left to right), you will notice that it closely resembles a leaning letter "A". In fact, the Greek and Hebrew alphabets have a "common ancestor" (so to speak). The word "aleph" also evolved into the word "alpha", which is represented exactly like its Latin version: "A". It, too, is unrecognizable from the original.
The second illustration represents the evolution of a pictographic character from ancient Sumer, which represents an idea, not a letter. This character gradually evolved to represent the concept of "god", since gods live in the sky.
In both of these examples, we can see an initial bias between what is visualized and what is written. The best example of this is the famous hieroglyphics of Ancient Egypt. In both cases, at the beginning of the use of letters to be pronounced when read, or at the beginning of the use of pictographs, there is an effort to mimic in writing what the objects visually look like. From then on, through a long process and for many different cultural reasons, the original characters changed until they became almost unrecognizable.
This parallels the way Ramachandran says language originally evolved from an initial bias among vision, sound, and mouth movements. The languages we have today bear little resemblance to the originals, but they seem to derive from that original bias, the original mimicking among vision, sound and writing.
A Complete Theory of Language is Needed
There is something lacking in Ramachandran’s account of language: we need to separate two sorts of abstraction. Following Edmund Husserl, we can call one "sensible abstraction", by which we abstract from sensible objects and form material concepts (i.e., concepts which refer to sensible or material objects). However, there is another sort of abstraction, which we will call "categorial abstraction". The confusion between these two forms of abstraction is due to the often empiricist or psychologistic conception of numbers as somehow abstracted from sensible objects themselves.
In reality, sensible objects do not appear to us as just individual objects; they are organized as states-of-affairs: there is a computer in front of me, there is a book beside me, there is this glass of water on the table, and so on. The way these objects appear depends greatly on how they are "related" by our mental acts: as a set of objects, as seven objects in front of me, as the first thing I find, and so on. Notice that "set", "seven", "first", etc. are not sensibly given to us; they do not appear in a sensible manner. They are abstract categories which are constituted using sensible objects as a basis. This is what Husserl called "categorial intuition". When our mind gets rid of the sensible component of the set, or the seven, or the first (categorial abstraction), we are able to handle numbers in pure abstraction, without taking any material concepts into consideration. We no longer say that four chairs added to five chairs give us nine chairs, but simply that "4+5=9", without any reference to sensible objects.
A similar, but not identical, form of abstraction is involved in the way we formulate propositions about these objects and states-of-affairs, in the way propositions are organized hierarchically so that they make sense, and in the way they can be organized logically. Mathematical logic has shown that you can also get rid of the sensible components of propositions and discover new a priori logical laws and theorems without making any reference to sensible objects.
Formal categories (sets, cardinal numbers, ordinal numbers, relations, etc.) and meaning categories (disjunction, conjunction, forms of plural, etc.) involve a very different process from merely abstracting from sensible experience, as Ramachandran suggests. Rama is on the right path, but he needs to account for this too. Only in this way will we be able to explain not only the process of conceptuation from sensible experience, but also Chomsky’s proposal of the syntactic tree and the nativist conception of language.
Pinker, S. (1994). The language instinct: how the mind creates language. NY: Harper Perennial.
Pinker, S. (1997). How the mind works. NY: W. W. Norton & Company.
Pinker, S. (2002). The blank slate: the modern denial of human nature. US: Viking Penguin.
Pinker, S. (2007). The stuff of thought: language as a window into human nature. US: Viking Penguin.
Ramachandran, V. S. (2003). The emerging mind. UK: BBC — Profile Books.
Ramachandran, V. S. & Hubbard, E. M. (2001). Synaesthesia — a window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34.
One of the very big problems we find regarding consciousness is the appearance of the ego and the fact that it has some sort of life-experience (as phenomenologists would say). This has been the holy grail of neurobiology for many years now, and we still have no adequate theory to address it.
Some scientists take an exotic path to solve this problem, as in the case of Stuart Hameroff, who tries to address the issue using quantum mechanics. Here is his explanation (an interview of an hour and nine minutes):
I am no neurobiologist, so I will not criticize Hameroff on that basis, but I can do so on a philosophical basis. For instance, I notice that he posits the physical existence of a platonic realm through which we have access to rational aspects of consciousness in the universe. According to the theory suggested by David Bohm, in order to avoid the Copenhagen interpretation of quantum physics, we have to recognize the existence of hidden variables and non-local interconnectedness. Hameroff also uses Roger Penrose’s proposal that when superpositions happen there is a split in reality, but these splits are unstable, so they end up collapsing. Hameroff and Penrose suggest that this platonic dominion is the realm of non-local interconnectedness at the Planck scale, where we find logical and mathematical truths, ethical norms and values, and the aspiration for a deeper meaning. In particular, in the microtubules of the brain, superposition is being carried out all the time, which actually connects us to this platonic realm due to quantum behavior (including the travel of quantum information backwards in time).
As a philosopher and a realist (particularly a platonist), I am especially concerned about this, because it starts from the premise that platonic abstract truths and relations are physical, hence reducing relations-of-ideas (as Hume would call them) to matters-of-fact. Philosophically speaking, this would still not explain why logical and mathematical truths are true in every possible world, while physical laws apply to this world but not necessarily to any other.
It also raises the problem of what could ground the claim that such physical logico-mathematical laws or ethical values are indeed correct. The only way is to posit a non-physical and non-causal abstract realm, which leads us back to square one: if the physical platonic realm is based on the non-causal platonic realm, then how is the physical one legitimized or unconditionally valid in any way?
Second, many have suggested epistemological models for the level of abstraction that would enable us to recognize abstract concepts or objects. Guillermo Rosado Haddock, my former thesis director and now friend, has proposed Edmund Husserl’s epistemology of mathematics (see Rosado, 2000), which is essentially platonist but whose epistemology is naturalizable in principle. A similar proposal has been made by Jerrold Katz with his realistic epistemology, which can also be naturalized (Katz, 1998, pp. 45-51). In my judgment, the Husserlian mathematical epistemology is the more satisfactory and complete of the two, and as Husserl formulated it, it can be naturalized: neurobiologists could, in principle, discover the natural mechanisms of the brain that let us perceive abstract structures and relations on the basis of the objects shown to us. Much of Husserl’s view of "elementary experience" has been confirmed again and again in psychology and neurobiology. A "physical platonic" realm, or a platonic space embedded in the universe, is not needed for this.
The same goes for the discovery of elementary ethics and values, which in reality can be explained through the diverse conflicts that developed within human groups. These conflicts led to the setting of several basic moral rules to be followed, to the recognition of another person as a person, another rational being (here is a phenomenological aspect), and then to the universalization of that recognition to all rational beings. As we see in different stages of society throughout history, group morals limited ethical behavior to the group, and then this requirement was expanded to include larger groups, until we became able to empathize with humanity as a whole. We will talk about this in a later article, but the origin of our grasp of ethical values seems to be evolutionary. The same can be said about the meaning-values which serve as one basis of all religious beliefs and spiritual paths.
Finally, from the perspective of philosophy of science, perhaps the most serious flaw in Hameroff’s proposal is that it is not testable. There is no way to show experimentally that there is a consciousness embedded in the universe, much less a physical platonic realm of truths and values. Unless an experiment can be devised that would confirm his theory, it will remain a metaphysical proposal (in the Popperian sense of the word "metaphysical").
At a personal level, I feel that quantum physics has become a sort of quick answer to certain mysteries of the universe. The reasoning goes: if quantum phenomena look as weird as X, then in some way X and quanta must be related. The quantum world is strange indeed, but the problem with always looking for answers in quantum physics is that in the end it does not explain anything; it just restates, in the form of an extensive chain of equations, that both X and quanta are weird. The same criticism applies to the holonomic view of the brain.
Memory Power: Sometimes You Have It All Over the Brain, Sometimes You Don’t
Sure enough, some of the holonomic view of the brain is valid, but not all of it. Karl Pribram was busy trying to understand how the human brain works, specifically the faculty of memory. Is memory located in some particular part of the brain? As it turns out, his experiments suggested that memory is dispersed throughout the brain like a hologram. When you shine a laser beam through a holographic film, it shows you a three-dimensional object. The funny thing about the film is that you can tear it to pieces and still have the complete image in each one of them: take any piece and shine the laser through it, and it still shows the three-dimensional object. The whole information of the image is in each part of the film. Pribram figured memory would work more or less like that: we could lose any part of the brain, and still the whole of memory would be retained. The brain can reportedly store up to ten trillion bytes of memory, and the holonomic model seems to help us understand how.
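The "every fragment contains the whole" property of holograms can be demonstrated numerically. The sketch below is a rough analogy, not a model of Pribram's theory: it lets the Fourier plane play the role of the holographic film, since every point of a Fourier transform receives a contribution from every point of the scene. A small surviving patch of the transform therefore still reconstructs the entire image, only blurrier.

```python
import numpy as np

# Synthetic "scene": a bright square on a dark background.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0

# The Fourier plane stands in for the holographic film: every point
# of the transform depends on every point of the scene.
film = np.fft.fft2(scene)

# "Tear the film": keep only a small patch of low spatial frequencies
# (the four corners of the unshifted FFT array) and discard the rest.
fragment = np.zeros_like(film)
k = 8
fragment[:k, :k] = film[:k, :k]
fragment[:k, -k:] = film[:k, -k:]
fragment[-k:, :k] = film[-k:, :k]
fragment[-k:, -k:] = film[-k:, -k:]

# Reconstruct from the fragment alone.
reconstruction = np.abs(np.fft.ifft2(fragment))

# The whole square is still there, just blurred: the surviving patch
# carries information about the entire scene, not a piece of it.
inside = reconstruction[24:40, 24:40].mean()
outside = reconstruction[:16, :16].mean()
print(inside > outside)
```

The point of the comparison at the end is that the reconstruction is still brightest where the square was, even though most of the "film" was thrown away; the analogy breaks down, as the text notes, for functions like short-term memory that are localized rather than distributed.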
However, the holonomic model does not explain everything in the brain because, as we have seen before, our brain is a set of organs producing higher-level modules which interact with each other in order to work in coordination. Something similar happens with memory. Have you seen this movie?
Awwww! A romantic comedy! Who didn’t like that movie? Drew Barrymore plays a girl with a peculiar condition: ever since an accident, she can remember recent events only briefly, until the very next day, when she wakes up and completely forgets the day before, along with every day since the accident. The character played by Adam Sandler has to make a videotape to bring her up to the present every single day. Hollywood fantasy … right? Weeeelllll … maybe not entirely! Yes, "Goldfield Syndrome", her affliction in the movie, does not exist. But remember "Ten-Second Tom"? That was hilarious! … Except that something very similar (yet not the same) does happen. Meet Clive Wearing!
Born in 1938, he was a conductor and musicologist with an entire career ahead of him, until a virus damaged part of his brain. Ever since, his span of immediate memory has been about 30 seconds. He does not remember anything about his life; he only has short-term memory. The reason is that although long-term memory is spread throughout the brain, short-term memory is not: it is located in the hippocampus within the limbic system, our evolutionary inheritance from the earliest mammals. Wearing’s brain is unable to transform short-term memory into long-term memory, so he forgets everything within a short period of time. The virus severely damaged his hippocampus.
Still, Wearing’s case is illuminating in one specific respect: he never forgot how to speak. Apparently the memory required for speech is essentially different from the memory that stores the events of his life. It is possible that the memory for language and speech is located in the temporal lobes, where we find Wernicke’s area, a key location for human speech and language. Oh, and he can still play the piano! His procedural memory is also unharmed, and it works extremely well (in the case of his piano playing, beautifully).
Even though we sometimes feel that memory works like a tape recorder or a giant hard drive that "records" everything we do, that is not really how our brain works. Memory works through the connections of events in our brain, through the way neurons arrange themselves so that we can connect different events. In cases like Wearing’s, hippocampal amnesiacs are not only unable to remember anything; they also live in fragmented and disconnected moments. They do not have enough sense of continuity to project a possible scenario for the future.
We must also keep in mind that recalling is not a matter of rewinding our memory to a certain moment and replaying whole events in our mind. When we remember, we literally reconstruct an event from the bits and pieces that we actually store in our brain. This is why, for most of us, a recalled event is vaguer than one we are presently living in the flesh.
This is also why we have false memories. Michael Gazzaniga, renowned neuroscientist and member of the Law and Neuroscience Project, often points out how fragile the memories of rape victims can be. There are many documented cases where victims swear they recognize their rapists, yet DNA analysis or other conclusive evidence shows the alleged rapists to be innocent. Still, the whole legal system is built on the assumption that memory is more reliable than it really is.
People who try to recall lost experiences through hypnosis may not intentionally distort the experiences they are trying to recall, but their brain can do it. The same happens with people who use L. Ron Hubbard’s Dianetics to engage in auditing: most of the "recalled" experiences may not be genuine memories but memories constructed during the auditing process, and this could worsen memory rather than enhance it.
What Makes Consciousness Possible?
Memory is not merely our ability to recall; it is an integral part of what our consciousness is. If we have a self at all, it is in great part because our experience of time is linear: there is a past, a present, and a future. If there is no memory at all, there cannot be any consciousness. Wearing’s case teaches us that, at the very least, there must be a minimum of short-term memory for consciousness to be possible and for a self to appear as the component of conscious mental acts that remains ideally the same despite the flow of time.
Memory, short-term and long-term, is not the only component that makes consciousness possible. What else is needed at a neurobiological level for it to exist? First, as we have said, there are different parts or organs of our brain that interact with each other and, functionally speaking, produce modules, which in turn interact with each other too. So we are back at the problem of emergence, asking: which modular interactions or layers of mental processes are necessary for consciousness to emerge?
António Damásio is a neurobiologist who, in my opinion, has proposed one of the best theories on the origin of consciousness. We should think of consciousness as a multilayered building, much like the step pyramids of Egypt, a Babylonian ziggurat, or a Mayan pyramid. From the base upwards, the order is the following:
- Proto-Self: According to Damásio, all animals have this basic trait, and it can be described as a sort of preconscious biological precedent to consciousness. He defines it this way: "The proto-self is a coherent collection of neural patterns which map, moment by moment, the state of the physical structure of the organism in its dimensions" (Damásio, 1999, p. 154). This mapping does not take place in one part of the brain, but at multiple levels and in multiple places, from the brain stem to the cerebral cortex (Damásio, 1999, p. 154). We are not aware of this proto-self, and the vast majority of animals have it without any consciousness at all. This suggests that the proto-self emerged at an early evolutionary stage of our ancestors.
- Core Consciousness: "Core consciousness occurs when the brain’s representation devices generate an imaged, nonverbal account of how the organism’s own state is affected by the organism’s processing of an object, and when this process enhances the image of the causative object, thus placing it saliently in a spatial and temporal context" (Damásio, 1999, p. 169). What does this mean? It means that when we think, we tend to think in images. Contrary to the dogmas that many people still cling to, we do not think in language; our primary form of thought is nonverbal imaging. As a result of processing images, we pay attention to an object (any object) that our mind considers somehow relevant. Core consciousness includes two sorts of selves: the Transient Core Self, which lets us be aware of our existence and of the effects of the world on us through sensory experience; and the Autobiographical Self, which operates thanks to a certain memory capacity that we all have, producing a "fleeting feeling of knowing" that is created anew in short periods of time.
- Extended Consciousness: This part of our consciousness depends on the capacity of memory, especially long-term memory "for facts". Acts of knowing "objects" consist in attending to objects in light of our personal past. Understanding these objects within a temporal framework enables us to substantiate our identity and our personhood (Damásio, 1999, p. 196). Extended consciousness is precisely the result of the ability to retain in our mind numerous sets of experiences, to recall them at will, and to have a sense of obtaining knowledge from experience (counting on past experiences). This part of consciousness is one that primates in general have developed, and that humans have developed better than any other primate. There is a reason for that: humanity is endowed with language and intelligence, which help us enhance extended consciousness.
A long time ago, Edmund Husserl pointed out the importance of memory in phenomenology: without it there would be no consciousness, nor any possible knowledge of anything in this world. Today neurobiologists state basically the same thing on naturalist foundations.
The Problem of the Self
Yet, as Damásio already knows, this does not explain everything about consciousness. For example, the phenomenon philosophers call "qualia" represents one of the biggest challenges to both neurobiology and cognitive science. Qualia refers to our ability to experience sensations. All Damásio does in his theory is explain how the self comes to be; he never explains how we experience the world around us. Not only can my mind perceive colors, sounds, and so on, but there is also a self which actually senses them and is pleased by them.
Daniel Dennett has been one of those philosophers who try to show that the problem of qualia is a pseudo-problem. In his essay "Quining Qualia", he practically denies their existence altogether. Other cognitive scientists and psychologists are not as easily persuaded by Dennett’s arguments. They are very clever arguments, but qualia are a real phenomenon, and contrary to what Dennett believes, Descartes was right in pointing out that "thought" (understood in the Cartesian sense) is the only fact that cannot be doubted. Whether the self is a substance that thinks is another very different story, though. Husserl, whom many consider the last Cartesian, recognized the existence of ideality (non-causal abstract content), and also recognized Cartesian thinking (cogito) as the only matter-of-fact that we can be certain of. He also argued that, because of the essence of the cogito, this matter-of-fact requires a thinking subject (a self) and an object (cogitatum). So the act of intending an object (intentionality) requires a self that intends.
What Husserl argues from the point of view of intentionality, Ramachandran argues from the point of view of qualia. For him, qualia evolved thanks to the fact that there are layers of neural processes encoding sensory representations, which are processed in the higher executive structure of the brain. Remember that our frontal lobes make up the Executive Brain (or, as Michael Dowd would call it, "The Higher Porpoise"). Ramachandran suggests that during this process we reach a metarepresentation of what were originally sensory representations; these metarepresentations create a more economical description of what we are sensing, and they "have" qualia, so to speak. From the point of view of natural selection, qualia highlight what is important within the whole set of sensory representations and what is not. The most advanced process of metarepresentation is unique to humans, or at least different from that of the primates related to us: in the case of the chimpanzee, metarepresentations are not as sophisticated as ours.
However, qualia come with a price. What are qualia? They are experiences. And if they are, then something or someone is experiencing them, since there cannot be experiences "loose, floating in the air", so to speak. As qualia evolved, so did the self. The self is the mind’s correlate to the development of qualia: the self has experiences.
Now, what is the self? Ramachandran does not define it, but he suggests five essential characteristics of what we mean by "self":
- Continuity, or the sense of an unbroken flow in our experiences, with the rudimentary feeling of past, present and future.
- The experience of being one sole "self" despite all the disparity of experiences, beliefs, memory, thoughts and so on.
- There is a sense that the self is joined together with the body.
- The sense of agency, that the self is somehow "driving" or "managing" the body.
- In the very specific case of humans, the self is also self-aware … it is aware of itself.
This does not mean that the self is a substance inhabiting the body, but the result of these brain processes.
Is the Self an Illusion?
If I had a nickel for each neuroscientist who has come to the conclusion that the "self" is an illusion, I would be rich. From Daniel Dennett to Steven Pinker to António Damásio and beyond, many have stated that the self is a mere illusion, a make-believe produced by the brain as a result of processes that the self is not aware of. Even Francisco Varela, Evan Thompson and Eleanor Rosch use Buddhist reasonings to deny the existence of the self, as if it dissolved in the midst of the argument.
I am no neuroscientist, but as a philosopher I beg to differ. I’m not going to defend the Cartesian statement that the self is a "thinking substance", since it is not a substance at all. However, I will argue that the self has an ideal reality, that is, an abstract unity that remains the same despite the flow of experiences. The fact that it comes to be from brain processes does not mean that it is less real. If it were pure fiction, it would be practically impossible even to talk about our selves at all, about how the self comes to be, or about how our brain and our mind relate to it.
I think that one philosopher who can help us understand this is Karl Popper, who held a semi-platonist view of a non-causal abstract reality. He basically uses the Fregean distinction of three realms (or "worlds", as he calls them): the first realm (world 1) is the world of physical objects; the second realm (world 2) is the world of psychological representations and subjective experiences; and the third realm (world 3) is an abstract cultural realm filled with the objective creations of world 2, including propositions, problems, numbers, information, theories, and so on.
For Popper, the self is a world 3 entity: something abstract, but real. In his philosophy, however, world 3 is usually created culturally by "selves", and selves cannot create themselves. His view is that selves are creations of language: we acquire language, and then we start being selves. However, this cannot be the case since, as we have seen, there can be core selves without developed language.
In his argument he also confuses "self" with "self-awareness": he says that animals don’t have "selves" because they are not self-aware. However, as Fernandes (1985) has pointed out, you cannot be self-aware without having a self to be aware of.
It seems to me that the solution to the problem is that the processes in the brain go constantly from lower-level processes to higher, more abstract processes. Remember, the mind is nothing more than a product of the brain, a network of modules that carry out all sorts of functions. Each module, even when it is processed by different parts of the brain, creates abstractions. As we shall see in later articles, the brain is a concept-making, conjectural and theory-making machine. So, the self is a form of objective abstraction created by lower-level mental processes. This is its reality. It might well be that the self is a world 3, abstract, non-substantial being generated by processes that happen in the brain (world 1), but mediated by different levels of abstract mental processes.
Those who deny the existence of selves, or state that they are illusions, usually do so on grounds of either physicalism or naturalism. Yet even on a moderate physicalist view, like Quine’s for instance, it is consistent to think this way: abstract reality can arise from matter, but cannot exist without matter. What a physicalist will never accept is the existence of an abstract reality divorced from the physical universe; that would be platonism.
Denying selves also carries several dangers, one of which is the denial of the self’s agency. Yes, there can be cases where a person thinks he or she is making a well-thought-out decision when in fact the decision arises at a deeper mental level. This is the case in anosognosia, a person’s mental refusal to recognize his or her problem; it is not a voluntary self-delusion, but something that happens at a deeper level of the mind. This also happens when, for example, a man approaches a woman supposedly to ask what time it is (and he may well believe he is actually doing it for that specific reason), but in reality it is because he is attracted to her, and his R-Complex is pushing him to mate in some way.
However, we are moral animals. With the exception of people whose brains are seriously affected in some way, in general we all make decisions that we know are good or bad, and we can control our bodies to a certain extent despite the whole web of complex impulses our mind pushes on us. The R-Complex may be pushing me to mate with a woman, but that does not justify rape or sexual harassment. In fact, I can act against that instinct because I know it would be wrong to follow it. It is not that we as selves are not in control of anything! Our selves are just not in control of subconscious and unconscious processes.
And last, but not least, the denial of selves is also the denial of the inherent dignity of every human being as a rational being. Yes, the self is the result of processes in the brain, but it is not reducible to those processes. As Michael Dowd would say, this is a case where the whole is more than the sum of its parts … and I would add "and also more than the sum of its mental processes". Each self in each person can be viewed as a rational being, capable of making moral choices, not only with self-awareness, but also with an inherent sense of dignity: of acting in a dignified way or being degraded.
To deny the existence of the self just because there is a whole set of complex biological processes behind it is a very big non sequitur.
Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin Books.
Damasio, A. (1999). The feeling of what happens: body and emotion in the making of consciousness. San Diego, US: Harcourt.
Dennett, D. (2002). Quining qualia. In D. J. Chalmers (ed.) Philosophy of mind: classical and contemporary readings. (pp. 226-246). NY: Oxford University Press.
Dowd, M. (2007). Thank God for evolution: how the marriage of science and religion will transform your life and our world. US: Plume.
Fernandes, S. L. de C. (1985). Foundations of objective knowledge: the relations of Popper’s Theory of Knowledge to that of Kant. Dordrecht: D. Reidel Publishing Company.
Gazzaniga, M. S. (2006). The ethical brain: the science of our moral dilemmas. US: Harper Perennial.
Greene, A. J. (2010, July/August). Making connections: the essence of memory is linking one thought to another. Scientific American Mind. 22-29.
Hubbard, L. R. (1950). Dianetics: the modern science of mental health. CA: Bridge Publications.
Huston, T. & Pitney, J. (2010, Spring/Summer). Finding spirit in the fabric of space & time: an exploration of quantum consciousness with Stuart Hameroff, MD. EnlightenNext: the Magazine for Evolutionaries, 46 (Spring/Summer), 44-57.
Pinker, S. (1997). How the mind works. NY: W. W. Norton & Company.
Pinker, S. (2007, January 19). The brain: the mystery of consciousness. Time. http://www.time.com/time/magazine/article/0,9171,1580394,00.html
Popper, K. (1994). Knowledge and the body-mind problem: in defence of interaction. London: Routledge.
Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.
Ramachandran, V. S. & Blakeslee, S. (1998). Phantoms in the brain: probing the mysteries of the human mind. NY: Harper Perennial.
Rosado Haddock, G. E. (2000). Husserl’s epistemology of mathematics and the foundation of platonism in mathematics. In C. O. Hill & G. E. Rosado Haddock (eds.) Husserl or Frege? Meaning, objectivity and mathematics. (pp. 221-239). US: Open Court.
Talbot, M. (1991). The holographic universe. US: Harper Perennial.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: cognitive science and human experience. MA: The MIT Press.
The Realm of Consciousness
In philosophy of the mind, discussions basically center around three subjects: the brain, the mind and consciousness. There are many perspectives on these subjects, many of them discussed for centuries. Philosophy of the mind began when the concept of the soul started to be treated philosophically. We could trace that important discussion to Pythagoras, whose philosophy of the soul was later adopted by Plato. For both of them, the soul is the rational aspect of our being, our intellective and true being. According to them, there are two very different worlds. One is an ideal realm, what Plato called "the world of forms": a world of abstract entities which serve as archetypes for everything that exists in the physical world. For both Pythagoras and Plato, forms or ideas are changeless, perfect, and eternal. On the other hand, we have a world of physical bodies, which are far from perfect: they change constantly, and they come to be and cease to be in temporality.
Through reason, a faculty that only humans have, we are able to discover all of these abstract objects and truths. We cannot discover them by relying on our senses, because the senses deceive and confuse us; the passions associated with them will not help us see truth more clearly. They will not help us be rational. We discover the true nature of our souls as non-physical: they belong to the realm of forms, which is intelligible, and our soul is able to understand it because it belongs to that realm.
Christianity later adopted this view when it integrated Middle Platonism, and elaborated it further in its Neo-Platonistic period. Much later, however, St. Thomas Aquinas wanted to reconcile previous Christian thought about the soul with Aristotelianism, especially as inspired by Aristotle’s De Anima. According to Aristotle, the soul is the form of the body; in other words, it exists because of the body and is non-separable from it. If the body dies, the soul ceases to be as well. In fact, in Aristotle there is no real separation between body and soul. The soul is not an inner substance or spectator that pulls the strings and drives the body. This is a dramatic departure from any Platonic view of the soul, and it was incompatible with Christian thought.
To reconcile Aristotelian thinking with the Christian view of the soul, St. Thomas Aquinas distinguished between two souls: the first is the animal soul, the one all animals have (i.e., living beings that move, that have anima); the second is the rational soul, which is what humans are endowed with. For St. Thomas Aquinas, the rational soul is the substantial form of the body, a doctrine which has been an article of faith in Catholicism since 1311 (the Council of Vienne). Unlike Aristotle, he did consider the soul separable from the body, though not itself a substance (the rational soul was made to be part of the body).
Finally, there is René Descartes. In his Discourse on Method and Metaphysical Meditations, Descartes argued for the body/mind duality. If we carry out his methodical doubt (placing into question everything about which we can have even the slightest doubt), we reach the conclusion that our thoughts (cogito) must exist: even if we want to deny that we are thinking, we are thinking. And as a correlate to our acts of thinking (cogitationes), the "self" (ego) carrying them out must exist too. His methodical doubt made clear that everything physical, including the body, can be doubted, and in many ways cannot be understood, mostly due to the confusion that arises out of the senses. However, the ego can be recognized with all clarity and distinctness, as something simple. As a result, Descartes asked what the "self" is (what am I?), and his answer was that the self is something that thinks: something that feels, sees, loves, hates, and so on. In other words, the ego is a thinking substance. The soul can be thought of as a disembodied entity which happens to inhabit the brain; for Descartes, that place in the brain is the pineal gland.
It is no surprise to anyone that much of what we call "Philosophy of the Mind" is a response to Descartes. A Philosophy of the Mind anthology would be totally incomplete without Descartes’ own writings about the body/mind problem, and no introductory text on the Philosophy of the Mind would be complete without at least some discussion of Descartes’ ideas.
Many years later, psychology and epistemology came to be, and from them we have two very different fields (among others): cognitive science and evolutionary psychology. Evolutionary psychology, which bases itself on evolution, is just becoming fashionable within psychology. Cognitive science centers on the principles of the brain that let us have cognition of the world. Cognitive science has usually had a phobia of evolutionary psychology, but as cognitive science is being complemented by neurobiology, evolutionary psychology is gradually being adopted by it.
Evolutionary psychology can tell us a lot about who we are. As we have seen in our Part V, once we understand how our brain evolved, we are able to understand much of our behavior as humans.
Premises of our Exposition on Consciousness
There is no full agreement in neurobiology or cognitive science regarding the proper model for understanding our brain. Of course, there are those who make no distinction between the brain and the mind; these hold a monist view of the mind. Scientists and philosophers such as Daniel Dennett, V. S. Ramachandran and António Damásio hold this brain/mind view. There are others who distinguish the brain and the mind conceptually, but for whom the mind is not conceived as a "soul" apart from the brain; it is instead the result of brain operations. This is the way Steven Pinker, John C. Eccles, and Karl Popper regard the mind, although none of them holds the Cartesian view.
The radical Plato-Descartes view of the absolute separation of body and mind is not seriously held by the majority of neurobiologists and cognitive scientists. Why? For the same reason the vast majority of scientists reject Intelligent Design (ID): in the end it does not offer a natural solution to the body/mind problem, much less to the problem of consciousness.
For purposes of our discussion, I will recognize a series of aspects about the mind and consciousness.
1. The Recognition of the Existence of the Unconscious and the Conscious
We have to recognize that the mind carries out unconscious processes. This does not mean that we will adopt Freudian psychoanalysis, a theory that has fallen into disrepute in the second half of the twentieth century. For more on the subject, I recommend the following references: Crews (1999), Grünbaum (1985), and Webster (1996). Still, this does not mean that Freud’s theory was totally wrong; there are aspects of his theory of the unconscious which are still valid today (Ramachandran & Blakeslee, 1998, pp. 153-154):
- Denial: Especially in phenomena such as anosognosia (when a patient denies an obvious impairment; for example, a patient thinks his or her paralyzed arm is all right, and tries to rationalize the paralysis).
- Repression: When a problem is recognized, but later denied in some way.
- Reaction formation: When the patient tries to assert the opposite of what he or she suspects of being true of him/herself. For example, when a gay man tries to appear as manly as he possibly can.
- Rationalization: Patients with anosognosia will try to explain away their symptoms as being perfectly normal.
- Humor: Humor can be used as a defense mechanism.
- Projection: When one fails to recognize one’s own impairment or disability and attributes it to someone else.
But theories such as the Oedipus Complex, for instance, are slowly being abandoned in psychology. Many of the symptoms attributed to the Oedipus Complex are in reality symptoms of something else. For instance, Capgras Delusion was for a long time attributed to this complex. Today we know that the reason for the delusion is that the connection between the visual cortex and the amygdala has been accidentally severed. Why is this important? Because it shows how emotions play a role in our recognition of our loved ones. Since the nerves from the eyes to the visual cortex are intact, a person with Capgras Delusion can recognize a face. He can recognize the face of his mother, for instance. However, since the connection between the visual cortex and the amygdala is cut, he does not feel anything resembling the feelings he associates with his mother. Therefore, he will say: "She is not my mother, she is an impostor."
Of course, because the pathway from the hearing organs to the brain is a different one, he can recognize his mom when she calls him on the phone; but when he sees her, he says: "No, she’s not my mother."
2. The Modular Theory of the Brain/Mind
The prevalent theoretical model of the brain is the modular theory. This makes sense within the framework of evolution via exaptations. As our ancestors evolved, they kept developing organs or modules with specific functions. For example, in Part V of our exposition, we talked about thirty different regions in the neocortex that let us see. One of those regions has to do with seeing motion: if we lose it, we don’t see motion anymore; if we lose the color region, we can only see in grayscale; and so on. So, there are different modules that make vision possible. The same can be said of all other operations of the brain.
3. Dual View of the Brain/Mind
Of course, I’m not a neurobiologist or anything of the sort. However, for purposes of the discussion I will assume a mild dual view of the brain and the mind. The brain is the composite physical organ within our skull. The mind is, as Pinker would describe it, "what the brain does", though "not everything that it does" (like giving off heat) (Pinker, 1997, p. 24). This view regards the mind as a system of organs that interact among themselves (Pinker, 1997, p. 27).
There is a debate regarding how to understand the modular structure of the brain or of the mind (depending on the case). For instance, Pinker tries to explain religious experience in humans by positing the existence of a "God module" resulting from activity in a specific area of the brain. Ramachandran, on the other hand, believes that religious experience is the result of interactions between different areas of the brain, whose result would be something similar to a functional module; the press wrongly attributed to him the discovery of the "God module" (or, as many people humorously call it, "the G-Spot of the brain"). Ramachandran’s position is the one I will adopt in this series of articles.
4. The Computational Theory of the Brain
Today, the computational theory of the brain is widely accepted in both cognitive science and neurobiology in general. The debate about this model still rages on, but most scientists now have no problem accepting it as an adequate model for understanding how our brain circuitry exchanges information. However, those who regard the mind as "what the brain does" actually focus more on the computation arising out of mental modules. Pinker (1997) clarifies something regarding this view of the mind:
The computational theory of the mind is not the same thing as the despised "computer metaphor." As many critics have pointed out, computers are serial, doing one thing at a time; brains are parallel, doing millions of things at once. Computers are fast; brains are slow. Computer parts are reliable; brain parts are noisy. Computers have a limited number of connections; brains have trillions. Computers are assembled according to a blueprint; brains must assemble themselves. Yes, and computers come in putty-colored boxes and have AUTOEXEC.BAT files and run screen-savers with flying toasters, and brains do not. The claim is not that the brain is like commercially available computers. Rather, the claim is that brains and computers embody intelligence for some of the same reasons. To explain how birds fly, we invoke principles of lift and drag and fluid mechanics that also explain how airplanes fly. That does not commit us to an Airplane Metaphor for birds, complete with jet engines and complimentary beverage service (pp. 26-27).
There Are Still Problems …
Now, the computational theory of the mind, like every other proposal made in neurobiology, faces a problem: sentience, or what philosophers call "qualia". This is our ability not only to receive stimuli and react to them, but to actually experience those stimuli and have an inner life. I won’t solve this problem in these articles; I will just assume that qualia are the result of our brain’s evolution.
Aristóteles. (1994). Acerca del alma. Madrid: Editorial Gredos.
Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin.
Descartes, R. (1985). Meditations on first philosophy. US: Cambridge University Press.
Crews, F. (Ed.). (1999). Unauthorized Freud: doubters confront a legend. US: Penguin.
Cushing, J. T. (1998). Philosophical concepts in physics: the historical relation between philosophy and scientific theories. UK: Cambridge University Press.
Grünbaum, A. (1985). The foundations of psychoanalysis: a philosophical critique. US: University of California Press.
Maslin, K. T. (2007). An introduction to the Philosophy of the Mind. 2nd Ed. US: Polity.
Pinker, S. (1997). How the mind works. NY: W. W. Norton & Company.
Popper, K. (1994). Knowledge and the body-mind problem. London & NY: Routledge.
Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.
Ramachandran, V. S. & Blakeslee, S. (1998). Phantoms in the brain: probing the mysteries of the human mind. NY: Harper Perennial.
Webster, R. (1996). Why Freud was wrong: sin, science and psychoanalysis. US: Basic Books.