The Brain and the Cultural Animal

Small Foreword

Those of you who have read my articles and essays on MySpace, or on other blogs I participate in (or used to have), may recognize the content of the following article. If you have read it already, feel free to skip it. I include it here, in modified form, because the articles in this series will eventually form part of a book I am going to write in a more concise, formal, and academic manner. The second reason to include it is that it is a stepping stone to a broader picture regarding memetics, economics, politics, ethics, and spirituality.

Creative Commons License
(c) 2009-2010, Pedro M. Rosario Barbosa
Some Rights Reserved
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.


The imperfect way our brain is arranged, through a series of exaptations, is what gives us an aesthetic sense and, with it, aesthetic experience. We have seen the following points already:

  • Synesthesia is a door to understanding how our mind associates certain sensations. We are all, as Ramachandran states, "closet synesthetes", because our brain is set up in such a way that it can establish non-arbitrary associations among sensations.
  • Our primordial way of thinking is through images, which provoke in us a reaction or an association with other sensations.
  • Synesthesia is also a key not only to understanding associations among sensations, but also conceptual associations. People with damage to the angular gyrus fail the "booba/kiki" experiment, and are also unable to understand metaphors, which are essentially associations among concepts.
  • Conceptuation, the associations among concepts, and elementary computations in our mind make language development possible. Language is an integral part of human nature. It is not that language produces concepts or a world-view; it is the other way around: basic, elementary conceptuation by our mind is what makes language possible.
  • Artists, using associations among sensations and concepts, discover ways to hyper-stimulate the mind. This hyper-stimulation is not necessarily reduced to associations among sensations, but includes all sorts of associations among concepts, be they cultural or biologically predisposed.

All of these factors, and perhaps more, are the basic foundations of our aesthetic experience. In the case of the plastic arts, images and sensations play a major role in this experience, because they are what is shown to us visually, producing a set of responses from our own mind.

What about literature? Our primordial experience with literature is not a set of sensations; we only read words. Of course, a good-looking book or ebook reader may enhance our pleasure in reading, but what about the actual content? Does that not defeat the assertion that we think in images, or that images play a significant role in aesthetic experience?

Although our primary experience when reading literature is not necessarily one of images, imagination (the ability to reproduce images in our mind) does play a role while we read. Edmund Husserl discovered that phantasy, i.e. the reproduction of an image in our minds, plays a significant role in making sense of our primary sensations. If you reach into your pocket or purse and touch an object there, your mind will immediately try to reproduce an image in order to figure out which object you are touching.

When you read literature in general, such as a novel or poetry, what makes it work is the fact that such works evoke in you a series of sensations, associations among concepts and values, and hyper-stimulation in many ways. Some of these are biological in origin; others are environmental factors, especially coming from culture.

Further Along the Paths of Synesthesia and Metaphor

There is something very interesting about synesthesia that we have not discussed yet: what happens when you combine sensations the wrong way. Take, for instance, number-color synesthetes. It is true that when they look at a number, they see a color. But what happens if you show them a colored number? For a synesthete who sees green when looking at a five, a green five poses no problem. Show him a purple five, or a red five, however, and he is immediately disgusted by what he sees. These synesthetes see colors in "layers" (so to speak), so they perceive the real color before them and the synesthetically induced color simultaneously. But then their reaction is: "Oh my God! That’s hideous!" Why would that be?

It seems that the way our brain is arranged predisposes us to find some arrangements of sensations pleasing, while finding others displeasing or ugly. The same thing happens with metaphors: we find some arrangements of metaphors pleasing, while others seem displeasing. In the case of poetry and rhetoric, not only the metaphors or analogies, but the way they sound when read is pleasing or not depending on how they are arranged. Finally, this is true in music: the arrangement of certain sounds can be pleasing while others are not.

This relationship between synesthesia and the way metaphors, concepts, sounds, etc. are arranged is not accidental. One of the key traits of synesthesia is that it almost always runs in "one direction", so to speak. For example, in the case of number-color synesthetes, numbers evoke colors, but colors won’t evoke numbers. Ramachandran documents only one case where a color evoked a number (but not the reverse). In the same way, metaphors work in only "one direction". He refers to thorough research in this area by the linguist George Lakoff and the philosopher Mark Johnson. For example, we speak of a "loud shirt", but not of a "red sound"; we say "sharp taste", but not "sour touch". Like synesthetic associations, these are not purely random. We speak of a smell being "sweet" because, in our brain, the sectors for smell and taste are close together.

It should not surprise us, then, that synesthetes are about eight times more common in the fields of art and literature than in the general population.

Some Principles Behind Poetic Art

Lover: Your lips are a scarlet thread
and your words enchanting.
Your cheeks, behind your veil,
are halves of pomegranate.
Your neck is the Tower of David
built on layers,
hung around with a thousand bucklers,
and each the shield of a hero (Song of Songs 4:3-4)

Question: Did the Lover just call his Beloved a "giraffe"? I understand the "scarlet thread": red lips evoke beauty, elegance and, biologically speaking, sexual attraction to the Beloved. The same goes for the description of the cheeks. But "your neck is the Tower of David"?! Really? If you said that to a woman today, she would ask what the heck THAT is supposed to mean.

Fray Luis de León found himself facing this very problem, and would pay dearly for it much later. He was a great theologian, a mystic, and an amazing poet. He lived at the time of St. Theresa of Ávila and St. John of the Cross, two of the most eminent figures of the Mystical Period of Spanish literature, great poets, and two great Catholic saints. The Inquisition kept an eye on all of them. St. Theresa of Ávila had the Inquisition looking over her shoulder constantly: she was suspect because she came from a Jewish family and wanted to reform the Carmelite religious order. St. John of the Cross was put in jail, and escaped (no one knows how).

Fray Luis de León, on the other hand, did spend time in jail and did not escape mysteriously. What were the charges? Writing a work on the Song of Songs of Solomon. The writing was not meant to be published: it was a translation from Hebrew into Spanish of that very beautiful book of the Bible, whose translation was forbidden because of its erotic connotations. A nun, of all people, had privately asked him to translate it and explain its meaning. In the introduction, Fray Luis de León explains that in Hebrew many of these metaphors and similes make full sense, but when translated into Spanish they sound strange. If comparing a neck to a tower seems strange to us, it was not so to the Hebrew mind. What the author means is something like this kind of neck:


Now, if you notice her neck, it is long and sensual, and her necklace looks like a set of bucklers, shields surrounding "the tower", a familiar scene for the Jews of the time, especially for a king like Solomon. Fray Luis de León did not use Nefertiti’s bust as an example to illustrate what these expressions mean, but he did say that part of the explanation for this expression is that the simile alluded to her height. Then he makes an interesting observation: the Tower of David is fortified with shields, an indirect reference to the Beloved’s necklace. And he did give some advice if you seek to understand strange passages in the Song of Songs: imagine what the author is describing within the context of passionate love, and then you’ll get it!

Here we find a clear illustration of how both biology and culture interact.

Human Universals

Much of the history of anthropology consists of studying different societies in order to understand their cultures and to see their similarities to, and differences from, Western societies. However, many have distorted their studies to present other societies not merely as "too" different from ours, but almost as the "complete opposite" of ours.

I remember the first time I found myself struggling with this view, when a teacher wanted to analyze the beliefs of many of the African slaves who came to America. He showed us a Cuban movie called La Última Cena (The Last Supper), in which a master decides to invite his slaves to a supper. During the discussion, The Last Supper brings up an African legend (I don’t know which nation the legend came from) telling how Lie wanted to kill Truth, and how Lie wanted to present itself as Truth by wearing Truth’s own face over its own. I remember one of the students remarking how astoundingly different African culture was from ours, and that this tale about Truth and Lie was evidence for it.

On the other hand, my mind was screaming: "No, it’s NOT that different!" In fact, I didn’t find any differences at all. In Christianized Western Europe and in American culture, we can understand this parable perfectly. There are many similar stories in the Bible, for example, the classic wolf in sheep’s clothing (Matt. 7:15). When you look at the story of Truth and Lie, you realize that far from being completely alien to our way of thinking, it rests on transcultural concepts and world-views:

  • Even when we have no formal definition of truth or lie, all cultures share the concepts of truth and lies. Truth is correlated with states of affairs, and telling the truth exhibits the virtue of truthfulness, a quality admired in all cultures. A lie is when a person purposely refrains from telling the truth in order to deceive, and this is a vice despised in all cultures.
  • The reason we understand the story so well is because the notion of disguise is also shared by all cultures. Many use disguises due to cultural significance (from the mythical reproduction of ancient stories before the eyes of the spectator, to celebrating Halloween). However, disguises are often used to deceive, which is also a transcultural phenomenon.
  • Finally, there is the notion of presenting something that is false as if it were true, which is essentially what a lie is. This is also a transcultural phenomenon.

Why are all of these points transcultural? Because one basic need of every organism, humans included, is the need to know the state of affairs, the facts that surround it. Every single culture around the world has this need and values it, because it is a means for survival. Lies are also part of the animal kingdom. Some animals survive because they are able to deceive their predators. Some animals even deceive members of their own species, when doing so gives an individual an advantage over others. This translates to human culture as well, where lies can be used for good or bad purposes. Yet lies are not in themselves something that cultures consider moral, which is why even today the use of lies for good purposes is such a hot-button issue in the field of ethics, and in society in general.

There is no basis for believing that there are no human universals, that is, concepts and world-views shared across cultures. As many recent anthropologists have found, anthropological positions which seem to establish that no concepts or world-views are similar to our own are either unfounded, misleading, or seriously flawed. Margaret Mead’s work was long used as the reference for understanding the natives of Samoa, because it showed them to be amazingly different from us regarding punishment, sexual jealousy, and rape. Today we know better, mostly because of a critique by Derek Freeman, who examined the data provided by Mead, along with data obtained by the U.S. government in Samoa, and found that all of these data (including Mead’s) contradict her main conclusions.

The same thing happens with colors. If I had a nickel for every person I have met who says that color categorization is not transculturally shared because Eskimos have about four hundred words for white! Well, this statement is false in two respects: first, the original claim was not four hundred words for the color "white", but for "snow"; second, not even that is true, since the Inuit have only two words for snow. This is why the whole affair is called the "Great Eskimo Vocabulary Hoax". The reason so many people are willing to believe it is the prejudice that all cultures foreign to Western European society must be completely different.

Yet, when it comes to colors, other cultures are not all that different from ours. If you want to find out how people from different cultures learn colors, you just have to go to your nearest neighborhood pharmacy or store and buy a set of eight crayons, Crayola brand! Steven Pinker explains:

Indeed, humans the world over (and babies and monkeys, for that matter) color their perceptual world using the same palette, and this constrains the vocabularies they develop. Although languages may disagree about the wrappers in the sixty-four crayon box –the burnt umbers, the turquoises, the fuchsias– they agree much more on the wrappers of the eight-crayon box –the fire-engine reds, grass greens, lemon yellows. Speakers of different languages unanimously pick these shades as the best examples of their color words, as long as the language has a color word in that general part of the spectrum. And where languages do differ in their color words, they differ predictably, not according to the idiosyncratic taste of some word-coiner. Languages are organized a bit like the Crayola product line, the fancier ones adding colors to the more basic ones. If a language has only two color words, they are for black and white (usually encompassing dark and light respectively). If it has three, they are for black, white, and red; if four, black, white, red, and either yellow or green. Five adds in both yellow and green; six, blue; seven, brown; more than seven, purple, pink, orange, or gray. But the clinching experiment was carried out in the New Guinea highlands with the Grand Valley Dani, a people speaking one of the black-and-white languages. The psychologist Eleanor Rosch found that the Dani were quicker at learning a new color category that was based on fire-engine red than a category based on an off-red. The way we see colors determines how we learn words for them, not vice-versa (Pinker, 1994, p. 52).
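The implicational hierarchy Pinker summarizes lends itself to a compact encoding. The sketch below is purely illustrative (the function name and return format are mine, not anything from Pinker): given the number of basic color terms a language has, it returns which colors the hierarchy predicts, separating the terms it fixes from those it leaves open (the fourth term, for instance, may be either yellow or green).

```python
# Illustrative encoding of the implicational hierarchy of basic color
# terms described in the quoted passage. The function name and return
# format are assumptions for this sketch.

def basic_color_terms(n):
    """For a language with n basic color terms, return (certain, open):
    the colors the hierarchy fixes, and the candidates for the slots
    it leaves open (e.g., the 4th term is either yellow or green)."""
    if n < 2:
        raise ValueError("the hierarchy starts at two terms (black and white)")
    certain = {"black", "white"}
    if n >= 3:
        certain.add("red")
    if n >= 5:                      # five terms include both yellow and green
        certain |= {"yellow", "green"}
    if n >= 6:
        certain.add("blue")
    if n >= 7:
        certain.add("brown")
    open_slots = set()
    if n == 4:                      # the 4th term is yellow OR green
        open_slots = {"yellow", "green"}
    elif n > 7:                     # beyond seven: purple, pink, orange, gray
        open_slots = {"purple", "pink", "orange", "gray"}
    return certain, open_slots
```

For example, `basic_color_terms(6)` predicts black, white, red, yellow, green, and blue, matching the progression in the quote.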

We should also consider the Pirahã people, studied by Daniel Everett. He claimed that the Pirahã violate Chomsky’s theory of Universal Grammar (UG), as well as the conviction that there are human universals. Yet a thorough study of his own data, plus data gathered by other linguists, social scientists, and anthropologists, seems to contradict Everett’s own statements. For instance, he claimed that the Pirahã are unable to believe anything that is not empirically shown to them, which is supposedly why they do not believe in gods, spirits, or the afterlife. Yet, according to Everett himself, spirits and the spirit-world play a large role in their lives. The linguists Andrew Nevins, David Pesetsky, and Cilene Rodrigues contradict Everett’s statements about the Pirahã language, showing in a paper how it does conform to Chomsky’s UG in many ways; Everett responded to that paper, and they responded to his response in turn. 😛

Last, but not least, we cannot leave out the study by the anthropologist Donald E. Brown regarding "human universals", that is, notions and concepts shared by all cultures, such as (but not limited to): war, beauty, affection, magic, age, anthropomorphization, antonyms, marriage, meal times, beliefs about the afterlife, measuring, body adornment, childcare, metaphor, moral sense, classification of fauna and flora, music, dance, songs, tools, numerals (counting), pain, pleasure, crying, death rituals, subgroups in a community, divination, poetry or rhetoric, empathy, promise, proper names, sense of property, envy, sayings, solidarity systems, facial expressions of fear, disgust, anger, happiness, sadness and contempt, fear of death, self as subject, awareness of self, gestures, gossip, grammar, metonymy, hope, weapons, use of symbols, and so on.

Notice that poetry itself (insofar as a language admits poetry) has universal aspects in how it is performed: for example, the use of rhyme, repetition, and variation in singing and poetry, and the fact that all over the world there is a poetic form whose lines are close to three seconds long, separated by pauses.

Culture, Poetry, and Why the Past Matters

If we look at the list of universals, we see that some are associated with survival, others are related to associations made in the brain, some are linked to ways of producing pleasing sounds, some have to do with issues such as the meaning of life, some with the sense of self, and so on. All of these are biological in origin, as we have seen in previous articles.

Then why are there differences among cultures? The reason is that the basis for the origins of art, literature, and music is biological, but there is a freedom in humanity’s own nature to express itself in different manners. This is where culture comes from. As we have seen with the example of the Song of Songs, the basic sensual admiration for a woman’s long neck was shared not only by Egypt and ancient Jewish society, but is also admired today in Western society (just think of the beauty of Jamie Lee Curtis’ neck, for instance). Yet the ways cultures express this admiration can differ. It is no surprise that when the Spaniards invaded Mexico, they stood in awe at the amazing beauty of Tenochtitlán, and especially its big monuments: the pyramids. Nor is it surprising to be moved by the vocals of Lebo M and his African choir in the song "Circle of Life" from The Lion King, or by Shakira’s more recent video "Waka Waka (This Time for Africa)", which uses the famous Cameroonian song "Zangalewa" as its basis. The same happens with Eastern music of all kinds.

What is true of art is true of literature and language. As we have seen earlier, a good work of art provokes a chain of hyper-stimulations in the brain, but it has to be done the right way for the aesthetic pleasure of the work to be reached. Through its references to images, along with rhythm, rhyme (which may or may not be present), and the way its words are arranged and read, or even spoken, a poem should create a whole chain of aesthetic hyper-stimulation in the mind. Cleverness in how we do this is key to creating a great literary work.

In the French play Cyrano of Bergerac, for example, we discover not only the uses of rhythm and the combination of words and concepts, but also other poetic devices such as hyperbole, as well as the musicality and the set of sensations and moods that the play’s verse tries to convey.

We should include among these layers all those that belong to culture, because not only do they suggest visual impressions that stimulate our imagination, but the reader must also discover the meaning of the symbols and signs in the work.

This is the reason why, for me, the greatest poetic work is Dante’s Divine Comedy. No poetic work before or since can remotely compare to it. Why? Because when you read the book, you discover the following:

  • Theology: the doctrine of the Holy Trinity, the conception of the afterlife (heaven, purgatory and hell), the doctrine of sin and grace, the Neo-Platonic conception of angels, the doctrine on the soul.
  • Philosophy: Ethics (virtues and vices), physics, astronomy, mathematics.
  • Literature: Allusions to the works of Homer (the Iliad and the Odyssey), the work of Virgil (the Aeneid), and the Bible.
  • History: Julius Caesar, Brutus, and Cassius, Cicero, Octavian, Socrates, Plato, Aristotle, and many more.
  • Reference to the Status Quo of His Times: Religious Orders (Franciscans and Dominicans), references to Popes, references to political and religious figures of the time.

And much, much more. The Divine Comedy is a complete encyclopedia of its times and of the knowledge then available in the Middle Ages. Would anyone do something like that in our times? Imagine writing about philosophy, science, ethics, religious orders, EVERYTHING you know … all in the same work, exploiting the reader’s sensations to make him or her feel fear, repulsion, desperation, peace, joy, and so on … ALL of it in rhymed verse! Now THAT is a work of genius!

Yet, we notice something very important about Dante’s Divine Comedy and every other classical work: they are not wholly "brand new". They are all based on a legacy, a gift by the past.

In this case, I’m not necessarily talking about knowledge of history or science. I’m talking about the importance of how these styles and techniques achieve a certain goal in our minds: an aesthetic experience. Learning art, poetry, and music is important because it enhances our aesthetic experience, and it forms a new basis from which new artists discover new ways of stimulating our aesthetic sense.

Mozart used a lot of earlier musical ideas when composing his symphonies (such as the structure of the symphony itself), but at the same time elaborated them to the point of producing works of genius. He even used earlier works of literature and adapted them for his operas; think, for example, of Le Nozze di Figaro and Don Giovanni. Today there are whole new genres which try to mix all sorts of other genres: hip-hop practices sampling extensively, and I know several artists who mix Puerto Rican music with Celtic music (Celtorican music), jazz, and others. What we learn from artists, writers, and musicians is the value of culture.

All cultures are based on two things, human nature and the cultural past.


Brown, D. E. (1991). Human universals. Boston: McGraw Hill.

Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin Group.

Damasio, A. (1999). The feeling of what happens: body and emotion in the making of consciousness. San Diego: Harcourt.

The New Jerusalem Bible. (1990). NY: Doubleday.

Lakoff, G. & Johnson, M. (1980). Metaphors we live by. US: University of Chicago Press.

Lakoff, G. & Johnson, M. (1999). Philosophy in the flesh: the embodied mind and its challenge to Western thought. NY: Basic Books.

León, F. L. de (2001). El cantar de los cantares. Madrid: San Pablo.

Pinker, S. (1994). The language instinct. NY: Harper Perennial.

Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.

Ramachandran, V. S. & Hubbard, E. M. (2001). Synaesthesia: a window into perception, thought and language. Journal of Consciousness Studies, 8, 12, 3-34.

Rostand, E. (2003). Cyrano of Bergerac. (L. Blair, Trans.). US: Signet Classic. (Original work published in 1897).


Images and Emotions

Most people think that "we think in language". This is a belief which has been proven false many times, even in everyday experience. For example, haven’t you had the experience of writing something down and realizing later that those words are not what you meant? Haven’t you had the experience of having a very clear notion in your head, or a particular emotion, but being unable to find the appropriate words to describe it?

Another sign of this is the number of patients who have no linguistic ability because of damage to certain parts of the brain essential for language, yet whose other brain functions, including a clear understanding of the world, seem fine.

In fact, António Damásio argues that our first form of thought has to do with images and our responses to them. This is not surprising: as we have seen, much of our developed brain is involved in vision. It is as if sight were the primordial sense of our brain.

This apparently involves numbers too. Remember that our association of numbers with certain shapes, followed by conceptuation at the higher levels of the brain, is a complex process which seems to involve vision and images in some way. Think about it: even though we know numbers are abstract, we need symbols to represent these abstract objects; without retaining these images in our heads, we would not be able to manipulate them. It is far easier to carry out elementary arithmetical operations using Hindu-Arabic numerals than using Roman numerals. The same truths can be recognized objectively using both symbolic systems, but one lets you advance considerably further than the other. This is why, while Medieval Europe lagged far behind in mathematical knowledge, the Arabic world of the time was far ahead, even creating new mathematical fields such as algebra and trigonometry. Without the use of these images, we would not be able to deal with abstract objects which, by definition, cannot be represented sensibly.
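The difference between the two notations can be made concrete with a toy sketch (all names here are mine, invented for illustration; the Roman converter handles only the common subtractive forms): positional digits support a mechanical column-by-column addition, while Roman numerals must first be translated back into quantities before any arithmetic can begin.

```python
# Toy illustration: Roman numerals lack place value, so even addition
# effectively requires translating the symbols back into quantities,
# whereas positional notation supports a simple digit-by-digit algorithm.

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Convert a Roman numeral to an integer (subtractive forms like IV allowed)."""
    total = 0
    for ch, nxt in zip(s, list(s[1:]) + [None]):
        value = ROMAN[ch]
        # A smaller symbol before a larger one is subtracted (IV = 4).
        total += -value if nxt and ROMAN[nxt] > value else value
    return total

def add_positional(a, b):
    """Column-by-column addition of decimal digit strings, as taught in school."""
    da, db = [int(d) for d in a[::-1]], [int(d) for d in b[::-1]]
    result, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return "".join(str(d) for d in reversed(result))
```

Adding MCMXCIV and VII in Roman notation means first recovering 1994 and 7; in positional notation the digits themselves carry the algorithm.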

There are signs that this is true of mathematicians, even theoretical mathematicians. For instance, there are kinds of synesthesia we have not discussed yet, such as number-line synesthesia. In this case, synesthetes see the Hindu-Arabic numerals in specific positions in space. Sir Francis Galton discovered this sort of synesthesia too, and here is an illustration of how one of his subjects described the locations of numbers in space:

Number-Line Synesthesia

By the way, even though this is a two-dimensional drawing of the location of numbers in space, these synesthetes actually experience numbers in three-dimensional locations. One of the renowned number-line synesthetes is Daniel Tammet, who literally sees numbers as landscapes. In those landscapes, he discovers true mathematical relations. When asked by researchers to recite the decimal digits of the number pi, an irrational number, he was able to reach 22,514 digits.

Mice and Bird-Brains

In his quest for understanding art, Ramachandran shows us two examples which seem to point out why art exists, and why we respond to it the way we do: mice and seagull chicks.

Mice are interesting creatures to experiment on (humanely, of course!). The way they behave can tell us a lot about the way we behave.

For instance, there have been experiments in which a mouse must choose between two holes, one square and the other rectangular. If there is food on the other side of the rectangular hole, the mouse will go after it. After a certain time, the mouse learns to go to the rectangular hole for food, and associates food with rectangularity. If you later show the mouse two rectangular holes, one wider (more "rectangular") than the other, it will always go to the wider one, regardless of whether there is food in it or not.

Why would a mouse do that? As it happens, the mouse’s brain seems able to abstract "rectangularity" and has associated it with something that stimulates it. Therefore, the more "rectangular" the hole looks, the better. We can see right away the imperfection of the brain in this matter, since the association between food and rectangularity can lead to false expectations. Yet a certain shape can stimulate a mouse’s brain, even to the point of making it ignore what stimulated it in the first place (food!).

Interesting …

Speaking of animals, we should also look at birds, especially seagull chicks. When a chick hatches, it seeks its mother’s yellow beak, which has a red spot on it. The chick pecks the red spot, the mother regurgitates food into the chick’s mouth, and it is fed. Now, it is not exactly that the chick is conscious that the yellow beak with the spot is "attached to its mother". If a seagull chick has just hatched, and you are a malicious scientist who shows it a long, detached yellow beak with a red spot, the chick will react the same way.

Why is that? When seagulls evolved, chicks’ brains developed just enough for seagulls to survive. Remember that natural selection usually does not make any living being optimal in all its capacities; a being evolves to be adequate, to have just enough to survive. Since malicious scientists experimenting on chicks with detached seagull beaks are not a prevalent state of affairs in nature, there is no problem regarding survival value.

But further experiments on these chicks have produced interesting results. For example, take a long yellow stick (not a beak this time) with a red spot at the very end. The chick still goes for it.

Now, what happens if you show them a long stick with three red stripes at the end? Niko Tinbergen performed the experiment, and here is the result: the chick goes nuts! It pecks the stick incessantly! It seems that this combination of a long stick with three red stripes hyper-stimulates its brain, maybe because of the way evolution has wired it up. Again, this is an imperfection of the brain, and it has no survival value whatsoever. On the other hand, it would be extremely rare for a chick to be confronted with a similar scenario, so the way the chick’s brain is wired poses no practical survival problem.

Now, what does any of this have to do with Ramachandran’s research on art? Let him explain it in his own words. 😉

All of this brings me to my punch-line about art. If herring-gulls had an art gallery, they would hang a long stick with three red stripes on the wall; they would worship it, pay millions of dollars for it, call it a Picasso, but not understand why — why they are mesmerized by this thing even though it doesn’t resemble anything. That’s all any art lover is doing when buying contemporary art: behaving exactly like those gull chicks (Ramachandran, 2003, pp. 54-55).

Our Brains and Art

Jessica Rabbit is hot, and it is not surprising to find people, especially men, stimulated just by looking at her. I mean, look! Look! … Isn’t she something?

Jessica Rabbit

You know what is the funny side of all of this? The reason why people are stimulated when they look at Jessica Rabbit is exactly the same reason why someone would admire this standing Parvati.

Standing Parvati

And you know what is funnier still? That the reason why people can be hyper-stimulated when they see these figures is the same reason why the mice and the gull-chicks in our experiments were hyper-stimulated too.

Think about it! Ramachandran gives the example of how our brains work like those of mice: Nixon’s face. How do you make a Nixon cartoon? Take Nixon’s face, subtract the traits we already find in an average man’s face, exaggerate all of Nixon’s distinctive facial traits, and in the end we will have a face that looks to us "more like Nixon" than Nixon himself. This is exactly what the mouse did in our experiment: the more "rectangle", the better!

But what makes both Jessica Rabbit and the Parvati statue so appealing is that they are associated with our sexual instincts, not mouse food this time. But how do they hyper-stimulate? In the same way we create a Nixon cartoon: we abstract an average woman’s physical body and exaggerate precisely those physical traits that sexually stimulate our minds. We end up with women with large round breasts, impossible curves, large hips, large buttocks, perfect legs.
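This "exaggerate the deviation from the average" idea can be sketched in a few lines of code. The sketch below is purely illustrative: the trait names and numbers are made up, not taken from Ramachandran, but the recipe is the caricature trick just described, applied to a face represented as a handful of measurements.

```python
# Toy sketch of caricature as "peak shift": represent a face as a
# set of measurements, subtract the average face, and exaggerate the
# difference by a factor k. Trait names and values are hypothetical.

def caricature(face, average, k=1.5):
    """Push every trait's deviation from the average further out by k."""
    return {trait: average[trait] + k * (face[trait] - average[trait])
            for trait in face}

average_face = {"nose_length": 5.0, "brow_depth": 2.0, "jowl_width": 6.0}
nixon = {"nose_length": 6.0, "brow_depth": 3.0, "jowl_width": 7.5}

# with k > 1, each distinctive trait moves further from the average,
# giving a face "more like Nixon" than Nixon himself
print(caricature(nixon, average_face, k=2.0))
```

With k=1 you get Nixon back unchanged; with k greater than 1 every distinctive trait is amplified, which is the same logic as the mouse preferring "more rectangle" rectangles.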

And don’t misunderstand me, hyper-stimulation is not limited merely to sexual stimulation. Humans are stimulated by all sorts of things, and corporations are usually very good at finding out what makes the general public tick. Watch for it on the Internet or the next time you turn on your TV or radio: logos, designs, musical melodies, or word arrangements can hyper-stimulate our brains in many different ways.


However, when we are talking about elite art, such as art galleries, classical music, even very good rock concerts, there is much more to it than just hyper-stimulation. There is an aspect of cleverness involved. One of them has to do with what Ramachandran calls a "peek-a-boo" principle in our brain. This is perfectly illustrated in Richard Gregory’s Dalmatian image.

Richard Gregory's Dalmatian

Now, if you look at the image, you see nothing but black spots on a white background. But if you keep looking at it, you’ll notice a "pattern": that of a Dalmatian dog. When your mind finds the Dalmatian pattern and "solves" the image puzzle, it reacts as if to say: "Aha! There it is!"

In Middle Eastern dance (or, as we usually call it, "belly dancing"), the dress usually heightens the roundness of the dancer’s breasts and her figure; even the openings in the sides of the skirt suggest the shape of her thighs and legs. If she is a professional dancer, she can play with her hair and her eyes to flirt with the public, and so on. It is not a crude sexual appeal: the colors she chooses and the way she dances suggest to the public sensual snake-like movements, stability, warmth, shyness, grace and so forth. It is an entire art in itself. And while she is dancing, our brain is going: "Aha! She is suggesting such and such!"

The same occurs with Parvati statues. Look at this one.


Although this particular picture is a bit blurry, you can notice that there is a necklace around her neck, which falls on her breasts and hides her nipples. The fact that the necklace falls over her breasts "heightens" their roundness before our eyes. The fact that it covers her nipples immediately activates the "peek-a-boo" trait of our mind: "Aha!"

That is not the whole story, though. The color red evokes love, anger, being upset, or even shyness. Why? Precisely because the human face gets red when we are in those states. No wonder a red-haired and red-dressed Jessica Rabbit is sexually appealing!

Blushing is a response that we see in other people’s bodies, and we can associate it, in combination with smiles, frowns, and so on, with being angry, upset, or in love. Green evokes natural settings; blue evokes the sky or the waters, and so on. Brown, depending on the context, can be beautiful or ugly. Usually mother-goddesses which represent the Earth or the soil are symbolically shown to have dark-brown skin.

As we see here, colors may mean many things depending on the context, and may hyper-stimulate our brain in different ways. Darkness (abundance of black or dark colors) can inspire fear, silence, or even peace, depending on how the artist uses it. Compare the way darkness is shown in both of these works.


Rembrandt - Adoration of the Shepherds

The first image above inspires fear, given the context in which darkness is used. This is not the case with the second image, which inspires peace given the context of the Nativity scene.

What does a professional artist do? Very simple: he or she creates through his or her work a chain of hyper-stimulation and different "Aha!"s in the process. For instance, the second picture we have just seen, "The Adoration of the Shepherds" by Rembrandt, makes a Christian react in certain ways according to the context in the picture. Other paintings of the Nativity scene might create different sorts of reactions.

The same thing happens with Van Gogh’s works. That is why I’m fond of impressionism: it plays with this so very much. If you stand close to an impressionist painting and look at it, you get something like this.

Van Gogh -

But if you distance yourself from it, gradually your mind discovers whole sets of patterns, until it gets the entire picture we call "Starry Night".

Van Gogh - Starry Night

Now, think back to our condition as "closet synesthetes", constantly blending our senses, and see how certain shades of blue are associated with our feeling cold in the evening. We synesthetically blend a color with the sensation of cold. Notice also that the different directions of the shades of blue and yellow in the sky suggest to us a continuous flow in the night sky (a sense of motion), and that the stars and the moon are shining brighter than usual. There is also a sense that the light in the sky is part of this spectacular motion taking place up there. And there is a chain of hyper-stimulation of our senses along with the sense that our mind is discovering the hidden meaning of the different strokes of color in the full picture … in other words, a chain of "Aha! Look here!"s.


Pinker, S. (2007). The language instinct: how the mind creates language. US: Harper Perennial.

Ramachandran, V. S. (2003). The emerging mind. UK: BBC – Profile Books.



Investigating Synesthesia

"C-sharp is green." What the heck?! What does it mean that C-sharp (a sound) is green? The identification of a sound with a color can happen to a particular kind of synesthete. When he or she hears a sound, it evokes a color. There is a "blending of the senses", which is the meaning of the word "synesthesia" (σύν = together; αἴσθησις = sensation).

One very interesting form of synesthesia is the number-color synesthesia. Some synesthetes would claim that two is red, or that five is green, or that seven is yellow. The relationship between number and color can vary among synesthetes, though.

This was a very strange phenomenon for some scientists, and they came up with all kinds of theories about it. One of them was that synesthetes were speaking metaphorically. For example, men sometimes refer to attractive women as "hot babes"; they are neither hot nor babies. Vilayanur Ramachandran, who has studied this phenomenon extensively, did not agree with this sort of explanation. If it were a metaphor, why wouldn’t synesthetes recognize what they say as metaphor? They seem to understand the notion of metaphor pretty well. It may be argued, for instance, that the mental relationship between two separate sensations is an everyday metaphor, such as when we say that "cheese is sharp". But Ramachandran asked: "Why then would you use a tactile adjective to describe taste? Is this not a mix of categories?"

Another explanation was that number-color synesthetes played with magnets when they were children. They may subconsciously remember that seven was purple or that five was green, and so on. So, when synesthetes look at numbers, they remember the colors of the magnets they used to play with. This didn’t make any sense either. Rama knew pretty well that the first research ever done on synesthesia was carried out by Sir Francis Galton, a half-cousin of Charles Darwin. Galton discovered that synesthesia runs in families. So, Ramachandran reasoned: "Would we have to assume now that playing with magnets runs in families too? This doesn’t make any sense. There has to be another explanation."

First, he wanted to show that the phenomenon is real, so he devised a visual experiment. Here it is:

Number-Color Synesthesia Experiment

Now, this experiment looks confusing, but here’s the idea: look at the fives and the twos. They are mirror images of each other, which is why the display looks so messy. Measure the time it takes you to find a regular figure such as a triangle or a circle. I am not a synesthete, so I had a hard time trying to find it, until a minute or so later. However, a number-color synesthete would find it relatively fast (in a few seconds), and would say something like: "Look! It’s a red triangle with a green background. It’s jumping out at me." Indeed, he or she would be correct.

Number-Color Synesthesia Experiment
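The difference between the two kinds of search can be sketched as a toy simulation. This is not Ramachandran and Hubbard’s actual stimulus or analysis, just an illustration under simple assumptions: a non-synesthete must inspect the display cell by cell (serial search), while for the synesthete the colored 2s "pop out" in what we model as a single parallel step.

```python
# Toy model of the visual-search experiment: a grid of 5s with a few
# 2s hidden among them, forming a shape. Purely illustrative.

def make_grid(n, twos):
    """n-by-n grid of '5's, with '2's placed at the given cells."""
    grid = [["5"] * n for _ in range(n)]
    for r, c in twos:
        grid[r][c] = "2"
    return grid

def serial_search(grid):
    # non-synesthete: inspect every cell one by one, counting steps
    steps, found = 0, []
    for r, row in enumerate(grid):
        for c, digit in enumerate(row):
            steps += 1
            if digit == "2":
                found.append((r, c))
    return found, steps

def popout_search(grid):
    # synesthete: the 2s carry a color, so they are picked out in a
    # single parallel glance, modeled here as one step
    found = [(r, c) for r, row in enumerate(grid)
             for c, digit in enumerate(row) if digit == "2"]
    return found, 1

triangle = [(2, 5), (5, 2), (5, 8)]   # corners of a hidden triangle of 2s
grid = make_grid(10, triangle)
print(serial_search(grid)[1], popout_search(grid)[1])  # prints: 100 1
```

Both searches find the same hidden triangle; what differs is the cost, which is the point of the experiment: the synesthete’s report ("it’s jumping out at me") corresponds to the one-step pop-out.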

This could not be explained by remembering magnets, or by metaphor. It is a very real phenomenon: these synesthetes were literally seeing colors along with the numbers. But why does such a phenomenon exist? Ramachandran’s hypothesis is that the way the brain is arranged can explain it. Look at this picture of the brain.

V4 and Fusiform Gyrus

There is a sector of the brain called the "fusiform gyrus" where there is a region called the "grapheme area" (green in the illustration of the brain), which associates certain shapes with numbers. For example, if you had grown up in the time of the Roman Empire, your brain would have associated the number seven with the sign "VII" in the grapheme area. In our society, which uses Hindu-Arabic numerals, we associate the number seven with the sign "7", and that is what the grapheme area records. The usual number-color synesthete will see a color when he or she sees "7", but will not if you show him or her "VII", which indicates that the grapheme area is involved in the phenomenon.

Now, you will notice in the illustration that there is a red area almost touching the grapheme area. This red region is called "V4", and it is the primary region where color is visually processed in the brain. Ramachandran theorized that the most prominent form of number-color synesthesia is due to a "closer-than-usual" contact between the grapheme area and V4: when a number-color synesthete looks at a number, it inevitably evokes a color.

From Synesthesia to Higher-Level Abstraction

However, there is another kind of number-color synesthete. For instance, some of them will say not only that 1 is red, 2 is green, 3 is yellow, and so on, but also that I is red, II is green, III is yellow. They will also say that January is red, February is green, March is yellow; or that Monday is red, Tuesday is green, etc. The common denominator of these cases is that this kind of synesthesia apparently treats number in a more abstract and ordinal manner.

This means that the "crosswiring" in these cases does not occur between V4 and the grapheme area, but at a higher level of brain processing.


According to Ramachandran’s theory, a higher-level process related to numbers takes place in a region known as the Temporal-Parietal-Occipital junction (or TPO junction), far from the fusiform gyrus. The reason he suspected this has to do with the fact that when that region of the brain is damaged, the patient suffers from dyscalculia: he or she is not able to add or multiply, except by sheer memorization. He or she has a hard time subtracting, and it is practically impossible for him or her to divide.

However, the TPO junction is next to the angular gyrus, the place where we conceptualize colors (its cells deal with color processing and carry out a higher process than the grapheme area). Ramachandran makes a very important point in this whole discussion.

This may seem counter-intuitive, but just think of something like a number. There is nothing more abstract than a number. Five pigs, five donkeys, five hairs, even five tones — all very different, but with fiveness in common (Ramachandran, 2003, pp. 83-84).

For him, the TPO junction plays a role in that abstraction. Hence, he calls these synesthetes "higher synesthetes". Due to the interconnectivity that we see in the illustration of the brain above (Illustration 1), in some cases there can be certain mixes of lower-level and higher-level synesthesia.

Being Closet Synesthetes and the Origins of Language

Now, Ramachandran reaches a key point in his exposition on synesthesia: he is going to show us that we are all closet synesthetes, because we are in "denial" about our synesthesia.

Booba/Kiki Experiment

Wolfgang Köhler designed this experiment, asking the public which figure they would associate with the sounds "booba" and "kiki". Generally, 98% of people will say that the name of the figure on the left is "kiki" while the other is "booba". Now, why does this happen? Because our brain is biased to associate certain images with certain sounds, and the bias is not arbitrary. Think, for instance, about the way your mouth and tongue shape themselves when you pronounce the word "booba": the tongue and lips curve. The same goes for "kiki": the corners of the figure associated with "kiki" make us think of crystals that shatter, evoking sharpness, and the way the tongue behaves when we pronounce "kiki" seems sharp too.

This is Ramachandran’s starting point for his theory of the origins of language. If you think about the "booba" and "kiki" experiment, although the relationship between sound and vision is not arbitrary, we have to admit that the two really have nothing in common, because visual appearance is different from sound. The relationship is carried out in our brain; it is the result of how our brain is set up.

Yet, to be able to develop language, we have to conceptualize, we need to abstract. Our primordial conceptuation or abstraction takes place in our mind, where, perhaps, the TPO junction and the angular gyrus play a role. This is possible because the angular gyrus, which deals with abstract concepts, is located at the "crossroads" (as Ramachandran describes it) between the parietal lobe of the brain (which deals with touch and proprioception), the temporal lobe (which deals with hearing), and the occipital lobe (which deals with vision). One of the things that has been discovered about people with systemic damage to the angular gyrus is not just that they cannot perform simple arithmetic, but also that the majority cannot do the "booba/kiki" experiment.

Another thing they cannot do is understand metaphor. Think about it, what is a metaphor? A metaphor is the association of two very different concepts which belong to very different conceptual realms. An example of a metaphor can be found in Shakespeare’s Romeo and Juliet:

It is the East,
and Juliet is the Sun.

As Ramachandran says with a bit of humor, this passage does not say that Juliet is a big ball of fire, but that Juliet is "bright" or "radiant" like the Sun. People with systemic damage to the angular gyrus will not get that. They will ask: "But Juliet is not a giant sphere made out of plasma in a state of fusion."

Why does this happen? Our brain is also predisposed to conceptualize, even when we are not aware of it. There have been stroke patients who lose concepts; they are not able to grasp them anymore. Some of them lose the concept of "tool": they cannot conceive of something as a tool (a screwdriver, for instance). So, if our brain is predisposed to concepts, then it can associate those concepts in certain ways. The visual and audio relationship shown in the "booba/kiki" experiment illustrates the beginning of abstraction by association. Metaphor illustrates that not only can sounds and vision be abstracted and associated, but also that concepts already formed in our mind can be associated among themselves.

The process of conceptuation comes before language development. If we are not able to have abstract concepts, we cannot refer to objects collectively through concepts. We would not be able to say that "Fifi is a dog" if we had no concept of what a dog is.

Now, there is lexicon involved with language too. For that purpose let’s look at the following illustration:


We develop words to refer to objects. However, how were the first words formed? Ramachandran theorizes that we have a biological bias to associate certain sounds with certain visual shapes. The first words must have been sounds that our brain associated with certain traits of objects. This is all you need in evolution, because once that dynamic begins, there is a "bootstrapping" from the original bias: we are then able to associate words, create neologisms, establish associations among words, and so on. As languages evolve, the words we now have in different languages are the result of a very long process since certain primitive communities made language from the original linguistic (visual and sound) bias.

Lexicon is not everything. With our own lips we tend to mimic in certain ways the objects we are looking at. For instance, Ramachandran wants you to notice the shape of your mouth when you say "teeny weeny", "diminutive", "un peu". Now do the same with "enormous", "large", "grand". In both of these cases your lips, tongue and mouth shape themselves to "synesthetically" mimic the size of the object you are looking at.

In our brain, this is due to the fact that the part of Broca’s area (which deals with language) that controls the mouth works along with the processing of visual appearance in the fusiform gyrus, which also communicates with the auditory cortex. But this is not all. Remember this illustration we showed in Part V?

Sensory and Motor Homunculi

Remember when we talked about Darwin’s observation that some people clench and unclench their jaw when they are cutting with scissors, and that this is probably due to the fact that in our primordial motor cortex (on the right in the illustration) the jaw is close to the thumb and the index finger? We know that the face of the homunculus is the other way around: the jaw is close to the fingers and hands, while the eye of the homunculus is farther away. Ramachandran calls the phenomenon of the mouth and jaw mimicking the motion of the hands "synkinesia".

Now, think about what this implies linguistically. Haven’t you noticed that sometimes you have the tendency to make gestures with your hands as you speak or describe things that you see? For instance, when you describe something tiny, don’t you bring your thumb and index finger together, as if to say: look, it is "teeny weeny", "un peu", "pequeñito", "diminutive"? Your fingers are imitating what your mouth is doing, which at the same time synesthetically mimics the object being referred to.

So, Ramachandran is talking about a multi-directional bootstrapping going on in the brain.

Social Signs which Confirm the Synesthetic Bootstrapping Theory of Language

There are some signs that Ramachandran’s "synesthetic bootstrapping theory" of language is going exactly in the right direction, although I think it is not yet complete. I’m going to use two examples. These are my examples, not Ramachandran’s, but they illustrate how well formulated this theory is.

The Emergence of Sign Language

At the end of the 1970s, the Sandinista government in Nicaragua created a program to teach hearing-impaired children a conventional sign language. However, in the process of carrying out the program, the children immediately began communicating in a new language, originally a sort of "pidgin" (as linguists call it), which was not yet a formalized language with all of its proper grammar. This was known as "Lenguaje de Signos Nicaragüense". One generation later, the children had developed a completely new proper syntax and a formal language, which today is called "Idioma de Signos Nicaragüense".

As you can see here, there is a connection between the motor areas of the brain that deal with facial expressions and those that deal with hand motion: synkinesia at its best. There are two simultaneous activities here: a synesthetic and a synkinetic mimicry of facial expressions and hand gestures. However, due to their hearing impairment, these children are not able to enunciate as effectively as hearing people (though many do a great job of enunciating as effectively as they can).

This reveals some things about language. If we remember Part VII, we become aware of the fact that consciousness is realized when a human being grows within a community. We develop a moral conscience, and we develop a conception of selves distinct from our own. Apparently, not only is consciousness realized in community, but so is language development. María No Name was not raised in a community of hearing-impaired people. The hearing-impaired children of Managua were formed within a hearing-impaired community which first created a pidgin and then, in a single generation, developed a whole new sign language.

If you look at the way the children mimic visual objects in their use of the sign language, they reveal primordial biases. This destroys the way average people conceive of language as something formed over hundreds or thousands of years. Quite the contrary: languages can appear spontaneously, and if pidgins are created in one generation, the formalized language appears in the next. This has been shown to be true of creole languages in the South Pacific (including the Philippines and Hawai’i), and in the case of African slaves brought to America from different nations, who interacted among themselves and with Native peoples, along with Spanish, English, French and Portuguese speakers (depending on the region).

The Origins of Writing

The origins of writing reflect how a theory of spoken language can be applied to writing. Look at the following two illustrations. The first shows the "evolution" of the letter "aleph" in Hebrew; the second, the evolution of a Sumerian sign from a pictographic "star" to the cuneiform sign for "god".

In the first illustration, on the left, we find the very first primitive character for "aleph". As you can see, it has the shape of an ox’s head, with an eye and horns. At the far right, you see the modern Hebrew version of aleph, almost unrecognizable from the original. In the middle, if you take a good look at the third character (from left to right), you will notice that it closely resembles a leaning letter "A". In fact, the Greek and Hebrew alphabets have a "common ancestor" (so to speak). The word "aleph" also evolved into the word "alpha", which is represented exactly like its Latin version: "A". It, too, is unrecognizable from the original.

The second illustration represents the evolution of a pictographic character of ancient Sumer, which represents an idea, not a letter. This character gradually evolved to represent the concept of "god", since gods live in the sky.

In both of these examples, we can see that there is an initial bias between what is being visualized and what is written. The best example of this is the famous Ancient Egyptian hieroglyphs. In both cases, whether at the beginning of the use of letters to be pronounced when read, or at the beginning of the use of pictographs, there is an effort to mimic in writing what the objects visually look like. From then on, through a long process and for many different cultural reasons, the original characters changed until they became almost unrecognizable.

It seems much like the way Ramachandran says language originally evolved from an initial bias of vision, sound, and mouth movements. The languages we have today bear very little resemblance to the original, but seem to derive from that original bias, the original mimicking of vision, sound and writing.

A Complete Theory of Language is Needed

There is something lacking in Ramachandran’s account of language. We need to separate two sorts of abstraction. Following Edmund Husserl, we can call one "sensible abstraction": we abstract from sensible objects and form material concepts (i.e. concepts which refer to sensible or material objects). However, there is another sort of abstraction, which we will call "categorial abstraction". The confusion between these two forms of abstraction is due to the often empiricist or psychologistic conception of numbers as somehow being abstracted from sensible objects themselves.

In reality, sensible objects do not appear to us as just individual objects; they are organized as states-of-affairs: there is a computer in front of me, there is a book beside me, there is this glass of water on the table, and so on. The way these objects appear depends greatly on how they are "related" by our mental acts: as a set of objects, as seven objects in front of me, as the first thing I find, etc. If you notice, the "set", the "seven", the "first", etc. are not sensibly given to us. They do not appear in a sensible manner. They are abstract categories which are constituted using sensible objects as a basis. This is what Husserl called "categorial intuition". When our mind gets rid of the sensible component of the set, or the seven, or the first, etc. (categorial abstraction), we are able to handle numbers in pure abstraction without taking into consideration any sort of material concepts. We don’t say that four chairs added to five chairs give us nine chairs; we say "4+5=9" without any reference to sensible objects.

A similar, but not identical, form of abstraction is involved in the way we formulate propositions about these objects and states-of-affairs, in the way they are organized hierarchically so that the propositions make sense, and in the way these propositions can be organized logically. Mathematical logic has shown that you can also get rid of the sensible components of propositions and discover a priori new logical laws and theorems without making any reference to sensible objects.

Formal categories (sets, cardinal numbers, ordinal numbers, relations, etc.) and meaning categories (disjunction, conjunction, forms of plural, etc.) involve a very different process than just abstracting from sensible experience, as Ramachandran suggests. Rama is on the right path, but he needs to account for this too. Only in this way are we able to explain not only the process of conceptuation from sensible experience, but also Chomsky’s proposal of the syntactic tree and the Nativist conception of language.


Pinker, S. (1994). The language instinct: how the mind creates language. NY: Harper Perennial.

Pinker, S. (1997). How the mind works. NY: W. W. Norton & Company.

Pinker, S. (2002). The blank slate: the modern denial of human nature. US: Viking Penguin.

Pinker, S. (2007). The stuff of thought: language as a window into human nature. US: Viking Penguin.

Ramachandran, V. S. (2003). The emerging mind. UK: BBC — Profile Books.

Ramachandran, V. S. & Hubbard, E. M. (2001). Synaesthesia — a window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34.


Why do We Have Hiccups?

Being human has its annoyances. For instance, don’t hiccups drive you crazy?! Many times they are brief. Other times, they can be a pain in the … neck … literally! Did you know that the longest uninterrupted bout of hiccups lasted from 1922 to 1990? No kidding! The unfortunate guy who had this incredibly long problem was Charles Osborne; look him up in the Guinness World Records. According to one source, he hiccuped a total of 430 million times over that 68-year period.

Like the appendix, hiccups have absolutely no function, and no survival value whatsoever. They simply consist of involuntary contractions of the diaphragm which bring abrupt rushes of air into the lungs. Why do we have them? Neo-Darwinism has an answer to this problem.

Neil Shubin reminds us that we came from fish! Some of those fish evolved into amphibians, many of which had gills, much like Acanthostega. Our ancestors, fish and amphibians, had a very small brain which basically controlled all aspects of their body: breathing, digesting, heart beats, and so on. The brain basically consisted of what we now call the brain stem, the place where all of these processes were controlled. This means that the nerves that controlled breathing connected the brain stem directly to the gills, both of which were located almost next to each other. This worked pretty well for our ancestors, as it does for fish today, but this arrangement sucks for mammals, especially humans.

As our ancestors kept evolving, we stopped breathing through gills close to our brains. We now breathe with our lungs, which are located aaaallll the way down in our chest, very far from our brain stem. If our bodies were "intelligently designed", as Creationists and intelligent design (ID) advocates suggest, these nerves should connect the diaphragm with the spinal cord at the same level, to prevent all sorts of problems … such as hiccups. Instead, the nerves that control the diaphragm, the vagus and phrenic nerves, come from the brain stem and extend all the way down to the chest cavity to reach the diaphragm. The longer these nerves, the greater the chance that anything affecting them interrupts their activity, and when this happens … well … hiccups!

Notice that hiccups show a rhythm, a pattern of sudden inhalation of air, which Neil Shubin has noticed in amphibians such as tadpoles, which use both lungs and gills to breathe. The physical processes are so similar that biologists have proposed that they are one and the same. They are so similar that Shubin relates them to many of the ways we stop hiccups:

Gill breathing in tadpoles can be blocked by carbon dioxide, just like our hiccups. We can also block gill breathing by stretching the wall of the chest, just as we can stop hiccups by inhaling deeply and holding our breath. Perhaps we could even block gill breathing in tadpoles by having them drink a glass of water upside down (Shubin, 2009, p. 192).

The Brain is a Mess!

This whole deal about hiccups reveals that our nervous system is not so well designed. If this is so, what does this imply for our brain? It is one very amazing composite organ (or system of organs), but, as we said in Part V, it is a mess. David Linden, a professor of neuroscience at Johns Hopkins University, says that he is impressed by the brain, but he is not amazed at how it is organized. He describes it as a "cobbled-together mess". We cannot understand the brain from an ID perspective, precisely because the brain does not seem to be "intelligently designed". According to Linden, the brain is "quirky, inefficient and bizarre … a weird agglomeration of ad hoc solutions that have accumulated throughout millions of years of evolutionary history" (Begley, 2007).

An analogy from the computer world will help you understand what he is talking about. In 1984, Richard Stallman wanted to create a new operating system he would call GNU (a recursive acronym which stands for "GNU’s Not Unix"). Every operating system has a kernel, the part of the system that actually allocates hardware resources to the programs it is running. Stallman chose the microkernel model, which he considered extremely powerful, and he would call his kernel the GNU Hurd ("Hurd" stands for "Hird of Unix-replacing daemons"; and what is a "Hird"? It stands for "Hurd of interfaces representing depth" … See? A mutually recursive acronym!) Why did he choose the microkernel model? Mostly because in it the kernel is divided into different servers, each carrying out a different job: the software equivalent of the theory of the division of labor. With many servers each doing its own job, the end result should be cheap yet efficient; having servers run together while performing different complex tasks should make everything a lot simpler.

Another programmer, Linus Torvalds, begged to differ. He wanted to create his own kernel, today called Linux, built on another philosophy: the monolithic kernel. Why did he consider it better? A monolithic kernel is one indivisible entity which carries out all the jobs itself, and Torvalds argued this would be far, far better in terms of efficiency and debugging than any microkernel.

Of course, we may ask why microkernels would be less efficient to develop, operate and debug. Separating the kernel into servers that do different jobs may sound like a good idea, but then you have to coordinate those servers so that they run efficiently! As a result, you have a living nightmare. The GNU Hurd logo will give you an idea of why this is so complicated:

GNU Hurd Logo

Debugging it is a living hell, because if a bug occurs in the communication between servers, you have to find out whether it happened between server A and server B, or server A and server C, whether it involves many servers simultaneously, whether it occurred before or after server A sent a signal to server B, and so on. As Linus states very clearly, the supposed simplicity of microkernels is a false simplicity.
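Linus’s complaint can be sketched in a few lines of toy Python (all names here are hypothetical, and this is a deliberate caricature of both designs, not how any real kernel is written): the monolithic read is one direct call, while the microkernel read must pass messages between two separate "servers", so a failure can hide in either server or in the protocol between them.

```python
# A toy caricature of both kernel designs -- all names are made up.

# Monolithic style: one indivisible component, direct calls.
# If something fails, the traceback points at one place.
def monolithic_read(path):
    data = {"/etc/motd": "hello"}      # stands in for the whole filesystem
    return data[path]

# Microkernel style: separate "servers" cooperating by passing messages.
# A bug can now hide in server A, in server B, or in the protocol
# between them -- three places to search instead of one.
class NameServer:
    def resolve(self, path):
        return {"/etc/motd": 42}[path]     # message: path -> inode number

class FileServer:
    def read(self, inode):
        return {42: "hello"}[inode]        # message: inode -> contents

def microkernel_read(path, name_server, file_server):
    inode = name_server.resolve(path)   # first hop between servers
    return file_server.read(inode)      # second hop between servers

assert monolithic_read("/etc/motd") == "hello"
assert microkernel_read("/etc/motd", NameServer(), FileServer()) == "hello"
```

Both versions return the same answer, but tracing a wrong answer in the second means asking which hop corrupted the message, which is exactly the coordination nightmare described above.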

It is not surprising that the GNU Hurd, in development since 1990, is still not ready for general use. Linux, on the other hand, is widely used along with the rest of the GNU components, forming the GNU/Linux operating system. Today this operating system is modified by corporations and user communities everywhere in the form of distributions.

The problem with our brain is that it resembles a microkernel of hundreds of servers. Too many parts of the brain carry out too many different functions, while at the same time establishing the most implausible connections you can imagine. For instance, the part of the brain that deals with vision is at the very back of the brain, and it is divided into many different areas, each carrying out a different function for vision; twenty areas of the brain are dedicated to vision alone. As I stated in Part V, the extreme degree of complexity in the brain is not an argument for Creationism or ID, because an intelligent designer would have made our brain far more efficient and much less messy. Of course, you may claim that God is a wild and crazy guy, and that’s why He made our brains that way. I feel, though, that this is not a theology many people are willing to accept.

If we look at our brain from an evolutionary standpoint, the way it is organized makes perfect sense. At least the GNU Hurd is being intelligently designed; our brain was not even designed by the GNU Project. It was "designed" through a series of legacies our ancestors left us within our own skull. We can see in it signs of exaptation: different organs performing different functions, and then together assuming another higher-level function.

Our Reptilian Brain … Again!

No one should be surprised at this fact. We have hiccups because our brain is organized according to our evolutionary ancestry. Hiccups are the result of our reptilian brain! As we have said in Part V, our brain is a set of organs, and is organized according to the way it evolved. Remember these illustrations inspired by Paul MacLean’s proposal?

Quadrune Brain
[Drawing by Nancy Margulies, reproduced with permission]

Remember that the "Lizard Legacy" represents the R-Complex, the "Lil’ Furry Mammal" the limbic system, the "Monkey Mind" the Neocortex, and the "Higher Porpoise" the Executive part of our brain.

The R-Complex, which is precisely the brain stem, is what causes hiccups when the nerves controlling the diaphragm are affected in some way. Reptiles evolved from amphibians and preserved the brain stem. The R-Complex, as with fish and amphibians, also controls in great measure our heartbeat, breathing, digestion, sexual drive, and everything else that is involuntary but keeps us alive.

However, the R-Complex is also the seat of other things. In humans, it assumes further functions such as hierarchical thinking, derived from the instinct of territoriality: the sense that a certain piece of land or place "belongs to us". Let’s not misrepresent this as a justification for capitalism’s ideal of private property; it is not about exploiting what you buy or sell. What matters to the reptile is survival, and the place where it lives is essentially a scarce resource.

Not every place on Earth offers the same quality of life to a reptile. Living near water or in a forest is not the same as living in a desert or on dry land. It is this reptilian instinct of territoriality that lets it defend the territory where it can prosper, nourish itself and reproduce. It is no big surprise, then, that the reptilian part of our mind is the source of our inherent aggressive behavior. In the animal world, if you are not aggressive, you cannot defend "your territory", you cannot reproduce, and hence your species will not survive. Aggressive behavior exists because of natural selection.

Of course, this insight of Paul MacLean’s, made when he proposed the triune-brain theory (today the quadrune-brain theory), could be used by Social Darwinists to claim that violence in the service of preserving a group, a race, or an oppressive economic system is morally justified. The problem is that this perspective ignores other parts of the brain that evolved precisely because they also helped us survive and make better decisions than the R-Complex alone ever would.

The Moral Importance of the Limbic System

She was all smiles, lovely as ever! Beautiful, slender, gorgeous, attractive. She was there to visit António Damásio, one of the most renowned neurobiologists in the field.

António Damásio

She had been having seizures, which is why she wanted Damásio to check her brain. And she had something that we would, prima facie, regard as charming: a very positive view of life. The problem was that she was too positive about life’s outcomes.

She was always cheerful, could be very friendly, formed romantic relationships easily, and so on. But she showed positive attitudes that would be inappropriate for any regular person under certain circumstances. In fact, she felt neither fear nor anger in any way. She happened to be a great artist, dedicated to drawing, and she could draw facial expressions communicating practically every sort of emotion … except fear.

So Damásio carried out an experiment. He showed her one hundred human faces for her to judge: are these "trustworthy" or "approachable" people? Show those faces to normal subjects and they will usually agree on which seem trustworthy and which do not. But the woman of our story could not do that. For her, all faces were "trustworthy".

The problem with lacking particular negative emotions such as anger or fear is that you cannot make certain judgments based on other people’s emotional states, making it impossible to know which situations could be risky, call for caution, or should be avoided altogether.

This woman had seizures and was fearless because one of the amygdalae of her brain was almost completely calcified. The amygdala is a very important part of the limbic system, which regulates our emotions. As we said in Part V, the limbic system is a legacy from our ancestors, the first primordial mammals, which developed through natural selection and genetic mutation a way to feel empathy towards their offspring and towards other members of their group or species. It also helps drive our emotions towards feeding and reproduction. It is said that the limbic system mediates and regulates some aspects of our behavior which can be summed up with the four F’s: feeding, fighting, fleeing, and f … sexual behavior. 😛

As we can see, emotions play a vital role in moral decisions, since they enable us to feel empathy towards another, to avoid certain circumstances, or to recognize emotional states in other people that let us deal with them better. In the limbic system we also find the residence of care and bonding. Many people with an affected limbic system are unable to bond, love, or care appropriately, sometimes to the point of completely nullified moral behavior. For instance, many of the most horrid serial killers have had their amygdalae and/or temporal lobes affected in some way (see more factual research here).

There is another point I want to bring up: a small criticism of post-feminism. Post-feminism thrives on the idea that many of our stereotypes regarding women are of cultural origin and need to be deconstructed. However, Steven Pinker, John Eccles and many cognitive scientists and neurobiologists have shown that most stereotypes people hold are usually based on actual experience; they are not something instilled by culture in order to oppress the "other". For example, post-feminists criticize the widespread belief that females are generally more emotional than males as a construct of a male-dominated society meant to cast women as inferior. Quite the contrary: females in general are more emotional than males in many respects, especially when it comes to bonding and empathy, and this in no way makes them inferior or irrational, nor does it mean that males lack emotions or should not use them. One reason is that the connection between the amygdalae of the two brain hemispheres is usually larger in females than in males. For this, and many other reasons, women are usually far better than males at detecting another person’s emotional state.

This fact does not make females inferior or more irrational, especially when making moral and ethical judgments. Some months ago a dear friend of mine, whom I hold as a model of kindness, humanity and spirituality, wrote to me feeling somewhat frustrated because most people did not feel as outraged as she did when she saw injustice in the world, hate speech online, or excesses by governments and corporations. She felt naïve for having these feelings, because no one else seemed to care. I told her that if everyone had her sense of outrage, the world would be a much better place.

Anosognosia and the Neocortex

Of all mental disorders, one of the most curious is anosognosia. Vilayanur Ramachandran had a very interesting patient, Mrs. Dodds. Her left arm was paralyzed as the result of a stroke in the right hemisphere of her brain, but what was most amazing was that she denied the paralysis. When Ramachandran asked her if she could move her arm, she did not do so, yet claimed it was perfectly fine. When he asked her to touch his nose with her left hand, she claimed she was doing it despite the fact that it was all too obvious she was not. She would also claim that she could clap; when asked to do so, her paralyzed arm remained unmoved, yet she claimed she was clapping.

António Damásio had his share of patients like this:

My patient DJ had a complete left-side paralysis, but when I would ask her about her left arm, she would begin by saying that it was fine, that once, perhaps, it had been impaired but not any longer. When I would ask her to move her left arm, she would search around for it and, upon confronting the inert limb, she would ask whether I "really" wanted "it" to move. It was only then, as the result of my insistence, that she would acknowledge that "it doesn’t seem to do much by itself." Invariably, she would then have the good hand move the bad arm and state the obvious: "I can move it with my right hand." (Damásio, 1999, p. 211).

Sometimes patients, after being told that something is wrong with them, forget that statement minutes later.

Why do patients behave this way? Psychologists have usually tried to explain it à la Freud: the patient simply will not recognize something as unpleasant as his or her own impairment. It is simply a gross form of denial (understood as Freudian denial). The problem with this view is that what the patient denies is so amazingly evident to everyone else that a mere psychological defense mechanism cannot explain it.

A better explanation might be neurological. These are patients with systematic damage to certain areas of the right hemisphere of the brain as the result of a stroke or an accident. The right hemisphere also deals with emotions at a higher level; when it is affected, the patient does not respond emotionally to the problem and does not grasp the magnitude of what is happening to him or her. Ramachandran theorizes that the story would be incomplete without the intervention of the left hemisphere, the part of the brain that "rationalizes" and "calculates" out of experience. When damage to the right hemisphere makes our belief system conflict with experience, the left side of the brain tries to deny the experience to keep the status quo, so to speak.

Damásio’s perspective, different from Ramachandran’s in some aspects (though not mutually exclusive), proposes that these affected parts of the brain are essential for the development of the autobiographical self and of extended consciousness; when they are damaged, these levels of mental activity, and with them consciousness itself, are affected to a certain degree.

Of course, the right and left hemispheres we have been discussing belong to the Neocortex. As we discussed in Part V, it is the part of the brain developed by some of our ancestors, the early mammals and, much expanded, the primates. Here emotions, the recognition of faces, basic instincts, etc., are processed at a higher level, and here most of our rationalization and calculation take place. In humans the Neocortex has grown so much that it constitutes 90% of our brain mass, enhancing cognitive abilities such as language and working memory (the understanding and reasoning through certain complex tasks). No other primate has the brain development that we do. As anosognosia patients show very clearly, we can rationalize our situation even to the point of self-deception. On the other hand, having developed such higher-level emotions and calculations has given us a very clear advantage over other animals.

As you might imagine, the Neocortex is also important to make moral decisions, because it helps us consider courses of action regarding difficult ethical circumstances and prioritize our values so we can carry out whatever it is that we should do.

However, a big Neocortex is not the only part of our brain that makes us uniquely human.

Our Executive Brain, or Why Phineas Gage Lost his Sanity

If there is a part of the brain that makes us uniquely human, it is the frontal lobes, where the highest concentration of nerve cells and nervous activity in the brain is found. Paul MacLean’s original model conceived this area as part of the Neocortex, but the more recent quadrune-brain model treats it as a distinct area, because it has unique properties not shared by the rest of the Neocortex. The frontal lobes of our brain are called the Executive Area, or the Executive Brain.

In Phineas Gage’s case, this part of the brain was suddenly missing.

Gage had an accident that would change the rest of his life. In 1848, while he was blasting rock to lay railroad tracks, an explosion drove an iron bar through his skull, more or less this way:

Phineas Gage

The fascinating thing about this incident is that he survived. When he recovered he could speak, reason and recall his past; he had no paralyzed limbs, no blindness, and so on. Yet his wife, colleagues and friends claimed: "Gage is no longer Gage". There was a significant change in his character. He became profane, most irreverent, and disrespectful towards his family and friends. But there was more. Before the accident, Phineas Gage had been described by his boss as one of the most efficient men he had ever seen. After the accident? Gage would take up proposed projects, only to abandon them instantly. Nothing suited him for long. He became so inefficient at planning ahead, and at making decisions about those plans, that his boss had to fire him. He ended up exhibiting himself as a "circus freak show" to earn a bit of money.

Phineas Gage

He worked for a theatrical group for a while, with which he left for South America. By 1859 his health had deteriorated, and after returning to the United States he died in 1860. Apparently the accident had made Gage an amoral animal; it made him lose his moral sense.

The reason Gage changed so drastically may be that the bar that went through his skull destroyed part of his limbic system (explaining his profanity and attitude), but also the frontal lobes of his brain. We know this may be true for the simple reason that people today who have lost part of that frontal area have had similar problems.

For instance, Damásio tells the story of a man who lost part of the frontal lobe of his brain in surgery to remove a brain tumor; Damásio calls him Elliot. Damásio was extremely surprised that, as Elliot told his story, he never showed any sort of emotion whatsoever: not sadness, not joy, not happiness, not anger, not anything! Unlike Gage, he always remained respectful, but he was totally emotionless. Like Gage, however, he had a problem: he could not stay satisfied with a planned project. As a matter of fact, he found it difficult to plan a project in the first place. He could not make decisions about anything anymore, nor learn from his mistakes. Gage and Elliot had lost all free will. This video shows another example:

Yes, it seems that the frontal lobes of the brain, the Executive Area, are the part of the brain that lets us make plans, foresee consequences, and make decisions. We could even say that this is the teleological part of our brain: it seeks a purpose and an end. Notice that in both Gage’s and Elliot’s cases emotions seem to play a role; it is as if without emotions they were unable to make any decisions. This is not surprising, given that for our brain to function socially it needs to use means to ends, and those ends must be desired in some way: there must be an emotion that pushes us (so to speak) towards them. No emotions, no purpose, no decision-making. The Executive Area is the part of the brain that coordinates feelings and rational calculation.

This practically destroys one of the greatest myths supported by philosophical and religious thinking for millennia: the opposition between reason and emotion. I call it the "Star Trek view". There is Dr. McCoy, who is all emotion and little reason; there is Mr. Spock, a Vulcan, all reason and no emotion; and there is James T. Kirk, who is sort of "in between": he listens to reason (Spock) and to emotion (McCoy), and makes a decision. This is a false dichotomy. You cannot make any rational decisions without emotions. The dichotomy arises from the fact that when we are too emotional we cannot think straight. Agreed. But that does not mean emotion plays no role in our rational decisions. Without the faculty for empathy (seated in the limbic system) we could not reflect ethically and rationally on the pleasures and pains of another person. Without the faculty of anger, we could not be outraged by those situations in human relations that call for outrage. As with everything, if you keep yourself emotionally balanced, you can make rational decisions.

As we have seen in this exposition, all of the components of our brain are necessary for making moral and ethical decisions. This is the evolutionary recipe for forging a human brain, for making humans moral beings.

Evolutionary Explanation for Humanity Being Moral

Jorge José Ferrer, a Puerto Rican Jesuit priest and renowned bioethicist, gives us the most plausible explanation, from an evolutionary standpoint, for how we came to be moral beings.

  1. Instinct is not enough for humans: An instinct is a set of natural inclinations and impulses which drive any animal to act a certain way. Our R-Complex is the most instinctive of all of the parts of our brain, and enables us to adapt to an environment. Humans have some instincts: when we stumble, immediately our instinct is to move our body a certain way to be in equilibrium again. We could include here the instincts that drive us to feed, to sexual acts, and so on. Unlike other animals, though, instincts in humans are not as developed. For this reason, says Ferrer, we need longer care than other animals and we depend greatly on the protection of other humans. Humans also require a long process of learning, socialization and adaptation (Ferrer, 2007, pp. 31-32).
  2. Human intelligence: We have the most developed intelligence in the animal kingdom, which compensates for our lack of instincts. As we have seen, the Neocortex makes up 90% of our brain, and our Executive Area is so well developed that it lets us make rational and moral decisions. Through intelligence, humans are able to understand reality, make plans for the future, and even modify the world according to our needs. It is for this reason that all humans have true moral and social duties; due to intelligence, humans are the only ones who can foresee the long-term consequences of their actions (Ferrer, 2007, pp. 32-33).
  3. Autonomy: Also, because of intelligence, human beings in general have free will. A human being is not independent of his or her environment, but given certain circumstances, he or she can make a choice. Autonomy is defined by Ferrer as the capacity that people have to self-determine in order to achieve self-realization, choosing among many good means and ends they have before them (Ferrer, 2007, p. 34).
  4. Responsibility: Due to the fact that we are autonomous, we have responsibility, we are accountable for our actions. This arises from our sense of duty towards our conscience, towards society, and (if you are a religious person) towards God, Goddess, or Supreme Being, or Wholeness of the Universe, and so on (Ferrer, 2007, pp. 35-36).
  5. Sociality of humanity: Aristotle said "man is a political animal", i.e. an animal of the polis, the community. Our best evolutionary mechanism is the fact that we live in communities, and we can only grow and develop individually within a community. This is part of the reason why we developed a sense of duty and responsibility towards the community (Ferrer, 2007, pp. 36-37).
  6. Human vulnerability: Every living being is vulnerable: its life can come to a sudden end. However, unlike much of the animal kingdom, humans are highly vulnerable and capable of being easily hurt or killed; we need to develop protection. This is part of the reason why, from an evolutionary standpoint, we are able to recognize the most basic moral norms of the community: no murdering, no lying, no stealing, etc. This could be the evolutionary beginning of the idea of morals as a whole, and of the morals of virtues, which promote moral excellence (Ferrer, 2007, p. 37).

These are the six basic traits that constitute humanity as moral beings, and why we construct moral norms.


Begley, S. (2007, April 9). In our messy, reptilian brains. MSNBC. Available online:

Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin Books.

Damasio, A. (1999). The feeling of what happens: body and emotion in the making of consciousness. San Diego, US: Harcourt.

Dowd, M. (2007). Thank God for evolution: how the marriage of science and religion will transform your life and our world. US: Plume.

Ferrer, J. J. (2007). Deber y deliberación: una invitación a la bioética. PR: Centro de Publicaciones Académicas, UPR-RUM.

Gazzaniga, M. S. (2006). The ethical brain: the science of our moral dilemmas. US: Harper Perennial.

MacLean, P. D. (1990). The triune brain in evolution: role in paleocerebral functions. Springer.

Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.

Ramachandran, V. S. & Blakeslee, S. (1998). Phantoms in the brain: probing the mysteries of the human mind. NY: Harper Perennial.

Rebato, E., Susanne, C., & Chiarelli, B. (Eds.). (2005). Para comprender la antropología biológica: evolución y biología humana. España: Editorial Verbo Divino.

Sagan, C. (1977). The dragons of Eden: speculation on the evolution of human intelligence. NY: Ballantine Books.

Shubin, N. (2009). Your inner fish: a journey into the 3.5-billion-year history of the human body. NY: Vintage Books.

Torvalds, L. & Diamond, D. (2002). Just for fun: the story of an accidental revolutionary. US: Harper Paperbacks.

Williams, S. (2009). Free as in freedom: Richard Stallman’s crusade for free software. US: CreateSpace.


Quantum Consciousness?

One of the very big problems regarding consciousness is the appearance of the ego and the fact that it has some sort of life-experience (as phenomenologists would say). This has been the holy grail of neurobiology for many years now, and we still have no adequate theory to address it.

Some scientists take an exotic path to solve this problem, as in the case of Stuart Hameroff, who tries to address the issue using quantum mechanics. Here is his explanation (an interview of an hour and nine minutes):

I am no neurobiologist, so I will not criticize Hameroff on that basis, but I can do so on a philosophical basis. For instance, I notice that he posits the physical existence of a platonic realm through which we have access to rational aspects of consciousness in the universe. He draws on the theory suggested by David Bohm, who, in order to avoid the Copenhagen interpretation of quantum physics, recognized the existence of hidden variables and non-local interconnectedness. Hameroff also uses Roger Penrose’s proposal that when superpositions happen there is a split in reality, but these splits are unstable, so they end up collapsing. Hameroff and Penrose suggest that this platonic domain is the realm of non-local interconnectedness at the Planck scale, where we find logical and mathematical truths, ethical norms and values, and the aspiration to a deeper meaning. In the microtubules of the brain, at the quantum level, superposition is carried out all the time, and this actually connects us to the platonic realm through quantum behavior (including quantum information traveling backwards in time).

As a philosopher and a realist (in particular, a platonist), I am especially concerned about this, because it starts from the premise that platonic abstract truths and relations are physical, thereby reducing relations-of-ideas (as Hume would call them) to matters-of-fact. Philosophically speaking, this would still not explain why logical and mathematical truths are true in every possible world, while physical laws apply to this world but not necessarily to any other.

There is also the problem of what could ground the claim that such physical logico-mathematical laws or ethical values are indeed correct. The only way is to posit a non-physical, non-causal abstract reality, which leads us back to square one: if the physical platonic realm is based on the non-causal platonic realm, then how is the physical one legitimized or unconditionally valid in any way?

Second, many have suggested epistemological models to account for the level of abstraction that enables us to recognize abstract concepts or objects. Guillermo Rosado Haddock, my former thesis director and now friend, has proposed Edmund Husserl’s epistemology of mathematics (see Rosado, 2000), which is essentially platonist but can in principle be naturalized. A similar proposal has been made by Jerrold Katz with his realistic epistemology, which can also be naturalized (Katz, 1998, pp. 45-51). In my judgment, the Husserlian mathematical epistemology is the more satisfactory and complete of the two, and the way Husserl formulated it shows that it can be naturalized: neurobiologists could, in principle, discover the natural mechanisms of the brain that let us perceive abstract structures and relations on the basis of the objects shown to us. Much of Husserl’s view of "elementary experience" has been confirmed again and again in psychology and neurobiology. No "physical platonic" realm, no platonic space embedded in the universe, is needed for this.

The same goes for the discovery of elementary ethics and values, which in reality can be explained through the diverse conflicts that developed within human groups. These led to the setting of several basic moral rules, to the recognition of another person as a person, another rational being (here is a phenomenological aspect), and then to the universalization of that recognition to all rational beings. As we see across different stages of society throughout history, group morals at first limited ethical behavior to the group, and then the requirement was expanded to include larger groups, until we became able to empathize with humanity as a whole. We will talk about this in a later article, but the origin of our grasp of ethical values seems to be evolutionary. The same can be said of the meaning-values which serve as one basis of all religious beliefs and spiritual paths.

Finally, from the perspective of philosophy of science, perhaps the most serious flaw in Hameroff’s proposal is that it is not testable. There is no way at all to show experimentally that there is a consciousness embedded in the universe, much less a physical platonic realm of truths and values. Unless an experiment can be devised that would confirm his theory, it will remain a metaphysical proposal (in the Popperian sense of the word "metaphysical").

At a personal level, I feel that quantum physics has become a sort of quick answer to certain mysteries of the universe. The reasoning goes: if quantum phenomena look as weird as X, then X and quanta must somehow be related. The quantum world is strange indeed, but the problem with always looking for answers in quantum physics is that in the end it explains nothing: it just restates, in the form of an extensive chain of equations, that both X and quanta are weird. The same criticism applies to the holonomic view of the brain.

Memory Power: Sometimes You Have It All Over the Brain, Sometimes You Don’t

Sure enough, some of the holonomic view of the brain is valid, but not all of it. Karl Pribram was busy trying to understand how the human brain works, specifically the faculty of memory. Is memory located in some particular part of the brain? As it turns out, his experiments suggested that memory is dispersed all through the brain like a hologram. When you shine a laser beam through a holographic film, it shows you a three-dimensional object. The funny thing about the film is that you can tear it to pieces and each piece still contains the complete image: take any piece, shine the laser beam through it, and it still shows the three-dimensional object. The whole information of the image is in every part of the film. Pribram figured memory would work more or less like that: we could lose any part of the brain, and still the whole of memory would be retained. The brain, in fact, can store up to ten trillion bytes of memory, and the holonomic model seems to help us understand how.
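The "every piece contains the whole" property of a hologram rests on a real mathematical fact: a Fourier transform spreads every point of a pattern across all of its coefficients. The following is only a minimal sketch of that distributive property (an illustration of the analogy, not of Pribram's actual model), assuming NumPy is available:

```python
import numpy as np

# A toy "memory": a smooth 1-D pattern standing in for an image.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.exp(-((t - 0.5) ** 2) / (2 * 0.01))  # a Gaussian bump

# The "hologram": the Fourier transform spreads every point of the
# pattern across all coefficients, so each coefficient carries
# information about the whole pattern, not about one location.
hologram = np.fft.rfft(signal)

# "Tear the film to pieces": keep only the first 32 of the 129
# coefficients and discard the rest.
piece = np.zeros_like(hologram)
piece[:32] = hologram[:32]

# Reconstruct from the fragment: the WHOLE pattern comes back,
# only slightly blurred -- no region of it is simply "missing".
reconstruction = np.fft.irfft(piece, n=signal.size)

similarity = np.corrcoef(signal, reconstruction)[0, 1]
print(f"whole-pattern similarity from a fragment: {similarity:.3f}")
```

Just as with a torn holographic film, keeping a smaller fragment of the coefficients degrades the resolution of the reconstruction without deleting any one region of the pattern, which is the sense in which Pribram's model says memory loss is graceful rather than local.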

However, the holonomic model does not explain everything about the brain because, as we have seen before, our brain is a set of organs producing higher-level modules which interact with each other in coordination. The same thing happens with memory, in a way. Have you seen this movie?

50 First Dates

Awwww! Romantic comedy! Who didn’t like that movie? Drew Barrymore plays a girl with a peculiar condition: ever since an accident, she can only remember recent events briefly, until the very next day, when she wakes up and completely forgets the day before, and indeed all the days since the accident. The character played by Adam Sandler has to make a videotape to bring her up to the present every single day. Hollywood fantasy … right? Weeeelllll … maybe not entirely! Yes, "Goldfield Syndrome", her condition in the movie, does not exist at all. But remember "Ten Second Tom"? That was hilarious! … Except that something very similar (yet not the same) does happen. Meet Clive Wearing!

Born in 1938, Wearing was a musical conductor and musicologist with an entire career ahead of him, until a virus destroyed part of his brain. Ever since, his span of immediate memory has been about 30 seconds. He does not remember anything about his life; he only has short-term memory. The reason is that although memory is spread all through the brain, immediate memory is not. Short-term memory depends on the hippocampus within the limbic system, our evolutionary inheritance from the earliest mammals, while long-term memory is spread all over the brain. Wearing’s brain is unable to transform short-term memory into long-term memory, so he forgets everything within a short period of time: the virus severely damaged his hippocampus.

Still, Wearing’s case is illuminating in one specific respect: he never forgot how to speak. Apparently the memory required for speech is essentially different from the memory that stores the events of his life. It is possible that the memory for language and speech is located in the temporal lobes, where we find Wernicke’s area, a key location for human speech and language. Oh, and he can still play the piano! His procedural memory is also unharmed, and it works extremely well (in the case of his piano playing, beautifully).

Even though we sometimes feel that memory works like a tape recorder or a super hard drive that "records" everything we do, in reality that is not the way our brain works. Memory works through the connections among events in our brain, and through the way neurons arrange themselves so that we can connect different events. In cases like Wearing’s, hippocampal amnesiacs are not only unable to remember anything, but also live in fragmented and disconnected moments. They do not have enough sense of continuity to project a possible scenario for the future.

We must also keep in mind that recalling is not a matter of rewinding our memory to a certain moment and playing whole events in our mind once again. What we do when we remember is literally reconstruct an event from the bits and pieces that we actually store in our brain. This is the reason why, for most of us, a recalled event is vaguer than an event we are presently living in the flesh.

This is also the reason why we have false memories. Michael Gazzaniga, renowned neuroscientist and member of the Law and Neuroscience Project, often points out how fragile the memories of rape victims are. There are many documented cases where a victim swears to recognize her rapist, but DNA analysis or other conclusive evidence later shows the alleged rapist to be innocent. Yet the whole legal system is built on the assumption that memory is more reliable than it really is.

People who try to recall lost experiences through hypnosis may not intentionally distort the experiences they are trying to recall, but their brain can do it for them. The same happens with people who use L. Ron Hubbard’s Dianetics to engage in auditing: most of the "recalled" experiences may not be genuine memories but memories constructed during the auditing process itself, and this could worsen memory rather than enhance it.

What Makes Consciousness Possible?

Memory is not merely our ability to recall; it is an integral part of what our consciousness is. If we have a self at all, it is in great part because our experience of time is linear: there is a past, a present, and a future. If there is no memory at all, there cannot be any consciousness. Wearing’s case teaches us that, at the very least, there must be a minimum of short-term memory for consciousness to be possible and for a self to appear as the component of conscious mental acts that remains ideally the same despite the flow of time.

Memory, short-term and long-term, is not the only component that makes consciousness possible. What else is needed at the neurobiological level? First, as we have said, different parts or organs of our brain interact with each other and, functionally speaking, produce modules, which in turn interact with each other too. So we face the problem of emergence: which modular interactions or layers of mental processes are necessary for consciousness to emerge?

António Damásio is a neurobiologist who, in my opinion, has proposed one of the best theories on the origin of consciousness. We should think of consciousness as a multilayered building, much like the step pyramids of Egypt, a Babylonian ziggurat, or a Mayan pyramid. From the base upwards, the order is the following:

  • Proto-Self: According to Damásio, all animals have this basic trait, which can be described as a sort of preconscious biological precedent to consciousness. He defines it this way: "The proto-self is a coherent collection of neural patterns which map, moment by moment, the state of the physical structure of the organism in its dimensions" (Damásio, 1999, p. 154). This does not take place in one part of the brain, but at multiple levels and in multiple places, from the brain stem to the cerebral cortex (Damásio, 1999, p. 154). We are not aware of this proto-self; the vast majority of animals have it, but without any consciousness at all. This suggests that the proto-self emerged at an early evolutionary stage of our ancestors.
  • Core-Consciousness: "Core consciousness occurs when the brain’s representation devices generate an imaged, nonverbal account of how the organism’s own state is affected by the organism’s processing of an object, and when this process enhances the image of the causative object, thus placing it saliently in a spatial and temporal context" (Damásio, 1999, p. 169). What does this mean? It means that when we think, we tend to think in images. Contrary to the dogmas that many people still cling to, we do not think in language; our primary form of thought is nonverbal imaging. As a result of processing images, we pay attention to an object (any object) that our mind considers somehow relevant. Core consciousness includes two sorts of selves: the Transient Core Self, which lets us be aware of our existence and of the effects of the world on us through sensory experience; and the Autobiographical Self, which operates thanks to a certain memory capacity that we all have, creating a "fleeting feeling of knowing" that is generated anew over short periods of time.
  • Extended Consciousness: This part of our consciousness depends on the capacity of memory, especially long-term memory "for facts". Acts of knowing "objects" consist in attending to objects in light of our personal past. These objects, understood within a temporal framework, enable us to substantiate our identity and our personhood (Damásio, 1999, p. 196). Extended consciousness is precisely the result of the ability to retain numerous sets of experiences in our mind, to recall them at will, and to have a sense of obtaining knowledge through experiences (counting past experiences). This part of consciousness is one that primates in general have developed, and that humans have developed better than any other primate. There is a reason for that: humanity is endowed with language and intelligence, which help us enhance extended consciousness.

A long time ago, Edmund Husserl pointed out the importance of memory in phenomenology: without it there would be neither consciousness nor any possible knowledge of anything in this world. Today neurobiologists state basically the same thing on naturalist foundations.

The Problem of the Self

Yet, as Damásio himself knows, this does not explain everything about consciousness. For example, the phenomenon that philosophers have called "qualia" represents one of the biggest challenges to both neurobiology and cognitive science. Qualia are our subjective experiences of sensations. All Damásio does in his theory is explain how the self comes to be; he never explains how we experience the world around us. Not only can my mind perceive colors, sounds, and so on, but there is also a self which actually senses them and is pleased by them.

Daniel Dennett has been one of those philosophers who try to show that the problem of qualia is a pseudo-problem. In his essay "Quining Qualia", he practically denies their existence altogether. Other cognitive scientists and psychologists are not so easily persuaded by Dennett’s arguments. They are very clever, but qualia are a real phenomenon, and contrary to what Dennett believes, Descartes was right in pointing out that "thought" (understood in the Cartesian sense) is the only fact that cannot be doubted. Whether the self is a substance that thinks is another, very different story, though. Husserl, whom many consider the last Cartesian, recognized the existence of ideality (non-causal abstract content), and also recognized Cartesian thinking (cogito) as the only matter-of-fact of which we can be certain. He also argued that, because of the essence of the cogito, this matter-of-fact requires a thinking subject (a self) and an object (cogitatum). So the act of intending an object (intentionality) requires a self that intends.

What Husserl argues from the point of view of intentionality, Ramachandran argues from the point of view of qualia. For him, qualia evolved thanks to the fact that there are layers of neural processes encoding sensory representations, which are then processed in the higher executive structures of the brain. Remember that our frontal lobes make up the Executive Brain (or, as Michael Dowd would call it, "The Higher Porpoise"). Ramachandran suggests that during this process we reach a metarepresentation of what were originally sensory representations, and that these metarepresentations create a more economical description of what we are sensing; they "have" qualia, so to speak. From the point of view of natural selection, qualia highlight what is important in the whole set of sensory representations and what is not. The most advanced process of metarepresentation is unique to humans, or at least different from that of the primates related to us: in the case of the chimpanzee, its metarepresentations are not as sophisticated as ours.

However, qualia come with a price. What are qualia? They are experiences. And if they are, then something or someone is experiencing them, since there cannot be experiences "floating loose in the air", so to speak. As qualia evolved, so did the self. The self is the mind’s correlate to the development of qualia: the self has the experiences.

Now, what is the self? Ramachandran does not define it, but he suggests five essential characteristics of what we mean by "self":

  1. Continuity or the sense of an unbroken flow in our experiences, with the rudimentary feeling of past, present and future.
  2. The experience of being one sole "self" despite all the disparity of experiences, beliefs, memory, thoughts and so on.
  3. The sense that the self is joined together with the body.
  4. The sense of agency, that the self is somehow "driving" or "managing" the body.
  5. In the very specific case of humans, the self is also self-aware … it is aware of itself.

This does not mean that the self is a substance inhabiting the body; rather, it is the result of these brain processes.

Is the Self an Illusion?

Give me a nickel for each neuroscientist who has come to the conclusion that the "self" is an illusion. From Daniel Dennett to Steven Pinker to António Damásio and beyond, many have stated that the self is a mere illusion, a make-believe produced by the brain as a result of processes the self is not aware of. Even Francisco Varela, Evan Thompson and Eleanor Rosch use Buddhist reasonings to deny the existence of the self, as if it dissolved in the midst of the argument.

I am no neuroscientist, but as a philosopher I beg to differ. I am not going to defend the Cartesian statement that the self is a "thinking substance", since it is not a substance at all. However, I will argue that the self has an ideal reality, that is, it is an abstract unity that remains the same despite the flow of experiences. The fact that it comes to be through brain processes does not mean that it is less real. If it were pure fiction, it would be practically impossible even to talk about our selves, about how the self comes to be, or about how our brain and our mind relate to it.

One philosopher who can help us understand this is Karl Popper, who held a semi-platonist view of a non-causal abstract reality. He basically uses the Fregean distinction of three realms (or "worlds", as he calls them): the first realm (world 1) is the world of physical objects; the second realm (world 2) is the world of psychological representations and subjective experiences; and the third realm (world 3) is an abstract cultural realm filled with the objective creations of world 2, including propositions, problems, numbers, information, theories, and so on.

For Popper, the self is a world 3 entity: something abstract, but real. In his philosophy, however, world 3 is usually created culturally by "selves", and selves cannot create themselves. His view is that selves are creations of language: we acquire language, and then we start being selves. However, this cannot be the case since, as we have seen, there can be core selves without any developed language.

In his argument he also confuses "self" with "self-awareness": he says that animals do not have "selves" because they are not self-aware. However, as Fernandes (1985) has pointed out, you cannot be self-aware without having a self to be aware of.

It seems to me that the solution to the problem is that the processes in the brain go constantly from lower-level processes to higher, more abstract ones. Remember, the mind is nothing more than a product of the brain, a network of modules that carry out all sorts of functions. Each module, even when it is processed by different parts of the brain, creates abstractions. As we shall see in later articles, the brain is a concept-making, conjecture-making and theory-making machine. So the self is a form of objective abstraction created by lower-level mental processes. This is its reality. It might well be that the self is a world 3 abstract, non-substantial being generated by processes that happen in the brain (world 1), mediated by different levels of abstract mental processes.

Those who deny the existence of selves, or state that they are illusions, usually do so on grounds of either physicalism or naturalism. Yet even on a moderate physicalist view, like Quine’s for instance, it is consistent to think this way: abstract reality can arise from matter but cannot exist without matter. What a physicalist will never accept is the existence of an abstract reality divorced from the physical universe; that would be platonism.

Denying selves also carries several dangers, one of which is the denial of the self’s agency. Yes, there are cases where a person thinks he or she is making a well-thought-out decision when in fact something is happening at a deeper mental level. This is the case with anosognosia, a person’s mental refusal to recognize his or her problem: it is not a voluntary self-delusion, but something that happens at a deeper level of the mind. The same occurs when, for example, a man approaches a woman supposedly to ask what time it is (and he may well believe he is doing it for that specific reason), when in reality it is because he is attracted to her, and his R-Complex is pushing him to mate in some way.

However, we are moral animals. With the exception of people whose brains are seriously affected in some way, in general we all make decisions that we know to be good or bad, and we can control our bodies to a certain extent despite the whole web of complex impulses our mind pushes on us. The R-Complex may be pushing me to mate with a woman, but that does not justify rape or sexual harassment. In fact, I can act against that instinct because I know it would be wrong to follow it. It is not that we as selves are not in control of anything! Our selves are just not in control of subconscious and unconscious processes.

Last, but not least, the denial of selves is also the denial of the inherent dignity of every human being as a rational being. Yes, the self is the result of processes in the brain, but it is not reducible to those processes. As Michael Dowd would say, this is a case where the whole is more than the sum of its parts … and I would add "and more than the sum of its mental processes". Each self, in each person, can be viewed as a rational being, capable of making moral choices, endowed not only with self-awareness but also with an inherent sense of dignity: of acting in a dignified way or of being degraded.

To deny the existence of the self just because there is a whole set of complex biological processes behind it is a very big non sequitur.


Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin Books.

Damasio, A. (1999). The feeling of what happens: body and emotion in the making of consciousness. San Diego, US: Harcourt.

Dennett, D. (2002). Quining qualia. In D. J. Chalmers (ed.) Philosophy of mind: classical and contemporary readings. (pp. 226-246). NY: Oxford University Press.

Dowd, M. (2007). Thank God for evolution: how the marriage of science and religion will transform your life and our world. US: Plume.

Fernandes, S. L. de C. (1985). Foundations of objective knowledge: the relations of Popper’s Theory of Knowledge to that of Kant. Dordrecht: D. Reidel Publishing Company.

Gazzaniga, M. S. (2006). The ethical brain: the science of our moral dilemmas. US: Harper Perennial.

Greene, A. J. (2010, July/August). Making connections: the essence of memory is linking one thought to another. Scientific American Mind. 22-29.

Hubbard, L. R. (1950). Dianetics: the modern science of mental health. CA: Bridge Publications.

Huston, T. & Pitney, J. (2010, Spring/Summer). Finding spirit in the fabric of space & time: an exploration of quantum consciousness with Stuart Hameroff, MD. EnlightenNext: the Magazine for Evolutionaries, 46 (Spring/Summer), 44-57.

Katz, J. J. (1998). Realistic rationalism. Cambridge, MA: The MIT Press.

Pinker, S. (1997). How the mind works. NY: W. W. Norton & Company.

Pinker, S. (2007, January 19). The brain: the mystery of consciousness. Time.

Popper, K. (1994). Knowledge and the body-mind problem: in defence of interaction. London: Routledge.

Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.

Ramachandran, V. S. & Blakeslee, S. (1998). Phantoms in the brain: probing the mysteries of the human mind. NY: Harper Perennial.

Rosado Haddock, G. E. (2000). Husserl’s epistemology of mathematics and the foundation of platonism in mathematics. In C. O. Hill & G. E. Rosado Haddock (eds.) Husserl or Frege? Meaning, objectivity and mathematics. (pp. 221-239). US: Open Court.

Talbot, M. (1991). The holographic universe. US: Harper Perennial.

Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: cognitive science and human experience. MA: The MIT Press.


The Realm of Consciousness

In philosophy of mind, discussions basically center around three subjects: the brain, the mind and consciousness. There are many perspectives on these subjects, many of them discussed for centuries. Philosophy of mind began when the concept of the soul started to be treated philosophically. We can trace that discussion back to Pythagoras, whose philosophy of the soul was later adopted by Plato. For both of them, the soul is the rational aspect of our being, our intellective and true being. According to them, there are two very different worlds. One is an ideal realm, what Plato called "the world of forms": a world of abstract entities which serve as archetypes for everything that exists in the physical world. For both Pythagoras and Plato, forms or ideas (εἴδη) are changeless, perfect, and eternal. On the other hand, we have the world of physical bodies, which are far from perfect: they change constantly, and they come to be and cease to be in time.

Through reason, a faculty that only humans have, we are able to discover all of these abstract objects and truths. We cannot discover them by relying on our senses, because the senses deceive and confuse us; the passions associated with them will not help us see truth more clearly, nor will they help us be rational. We discover the true nature of our souls as non-physical: they belong to the realm of forms, which is intelligible, and our soul is able to understand it because it belongs to that realm.

Christianity adopted this view when it integrated Middle Platonism, and elaborated it further in its Neo-Platonist period. Much later, however, St. Thomas Aquinas wanted to reconcile earlier Christian thought about the soul with Aristotelianism, inspired especially by Aristotle’s De Anima. According to Aristotle, the soul is the form of the body; in other words, it exists because of the body and is inseparable from it. If the body dies, the soul ceases to be as well. In fact, in Aristotle there is no real distinction between body and soul: the soul is not an inner substance or spectator that pulls the strings and drives the body. This is a dramatic departure from any Platonic view of the soul, and it was incompatible with Christian thought.

To reconcile Aristotelian thinking with the Christian view of the soul, St. Thomas Aquinas distinguished between two souls: the first is the animal soul, which all animals have (i.e., all living beings that move, that have anima); the second is the rational soul, with which humans are endowed. For St. Thomas Aquinas, the rational soul is the substantial form of the body, a doctrine which has been an article of faith in Catholicism since 1311 (the Council of Vienne). Unlike Aristotle, he considered the soul separable from the body, though not itself a complete substance (the rational soul was made to be part of the body).

Finally, there is René Descartes. In his Discourse on Method and his Metaphysical Meditations, Descartes argued for the body/mind duality. If we carry out his methodical doubt (placing into question everything about which we can have even the slightest doubt), we reach the conclusion that our thoughts (cogito) must exist: even if we want to deny that we are thinking, we are thinking. And as a correlate to our acts of thinking (cogitationes), the "self" (ego) carrying them out must exist too. His methodical doubt made clear that everything physical, including the body, can be doubted, and in many ways cannot be fully understood due to the confusions that arise out of the senses. The ego, however, can be recognized with all clarity and distinctness, as something simple. As a result, Descartes asked what this "self" is (what am I?), and his answer was that the self is something that thinks: something that feels, sees, loves, hates, etc. In other words, the ego is a thinking substance. The soul can be thought of as a disembodied entity which happens to inhabit the brain; for Descartes, its place in the brain is the pineal gland.

It is no surprise to anyone that much of what we call "philosophy of mind" is a response to Descartes. Most philosophy of mind anthologies would be totally incomplete without Descartes’ own writings on the body/mind problem, and no introductory text in philosophy of mind would be complete without at least some discussion of his ideas.

Many years later, psychology and epistemology came to be, and from them we have two very different fields (among others): cognitive science and evolutionary psychology. Evolutionary psychology, which bases itself on evolution, is just now coming into fashion within psychology. Cognitive science centers on the principles of the brain that let us have cognition of the world. The latter has usually had a phobia of the former, but as cognitive science is being complemented by neurobiology, evolutionary psychology is gradually being adopted by cognitive science.

Evolutionary psychology can tell us a lot about who we are. As we saw in Part V, once we understand how our brain evolved, we can understand much of our behavior as humans.

Premises of our Exposition on Consciousness

There is no full agreement in either neurobiology or cognitive science regarding the proper model for understanding our brain. There are those who make no distinction between the brain and the mind; these hold a monist view of the mind. Scientists and philosophers such as Daniel Dennett, V. S. Ramachandran and António Damásio hold this brain/mind view. Others distinguish the brain and the mind conceptually; the mind is not conceived as a "soul" apart from the brain, but as a result of brain operations. This is the way Steven Pinker, John C. Eccles, and Karl Popper regard the mind, although none of them holds the Cartesian view.

Points of view that hold the radical Plato-Descartes thesis of the absolute separation of body and mind are not taken seriously by the majority of neurobiologists and cognitive scientists. Why? For the same reason the vast majority of scientists reject Intelligent Design (ID): in the end it does not offer a natural solution to the body/mind problem, much less to the problem of consciousness.

For purposes of our discussion, I will recognize a series of aspects about the mind and consciousness.

1. The Recognition of the Existence of the Unconscious and the Conscious

We have to recognize that the mind carries out unconscious processes. This does not mean that we will adopt Freudian psychoanalysis, a theory that fell into disrepute during the second half of the twentieth century. For more on the subject, I recommend the following references: Crews (1999), Grünbaum (1985), and Webster (1996). This does not mean that Freud’s theory was totally wrong; there are aspects of his theory of the unconscious which are still valid today (Ramachandran & Blakeslee, 1998, pp. 153-154):

  • Denial: Especially in phenomena such as anosognosia (when a patient denies an obvious impairment; for example, a patient thinks his or her paralyzed arm is all right, and tries to rationalize the paralysis).
  • Repression: When a problem is recognized, but later denied in some way.
  • Reaction formation: When the patient tries to assert the opposite of what he or she suspects to be true of him/herself. For example, when a gay man tries to appear as manly as he possibly can.
  • Rationalization: Patients with anosognosia will try to explain away their symptoms as being perfectly normal.
  • Humor: Humor can be used as a defense mechanism.
  • Projection: When we fail to recognize our own impairment or disability and attribute it to someone else.

But theories such as the Oedipus Complex, for instance, are slowly being abandoned in psychology. Many of the symptoms attributed to the Oedipus Complex are in reality symptoms of something else. Capgras Delusion, for example, was for a long time attributed to this complex. Today we know that the delusion occurs when the connections between the visual cortex and the amygdala are accidentally severed. Why is this important? Because it shows how emotions play a role in our recognition of our loved ones. Since the pathways from the eyes to the visual cortex are intact, a person with Capgras Delusion can recognize a face; he can recognize the face of his mother, for instance. However, since the connection between the visual cortex and the amygdala is cut, he does not feel anything resembling the feelings he associates with his mother. Therefore, he will say: "She is not my mother, she is an impostor."

Of course, since the pathway from the auditory organs to the brain is intact, he can recognize his mother when she calls him on the phone; but when he sees her, he says: "No, she’s not my mother."

2. The Modular Theory of the Brain/Mind

The prevalent theoretical model of the brain is the modular theory. This makes sense within the framework of evolution via exaptations: as our ancestors evolved, they kept developing organs or modules with specific functions. For example, in Part V of our exposition we talked about thirty different regions of the neocortex that let us see. One of those regions has to do with seeing motion: if we lose it, we no longer see motion; if we lose the color region, we can only see in grayscale; and so on. So there are different modules that make vision possible, and the same can be said of all other operations of the brain.

3. Dual View of the Brain/Mind

Of course, I am not a neurobiologist or anything of the sort. However, for purposes of this discussion I will assume a mild dual view of the brain and the mind. The brain is the composite physical organ within our skull. The mind is, as Pinker would describe it, "what the brain does", though "not everything that it does" (like giving off heat) (Pinker, 1997, p. 24). This view regards the mind as a system of organs that interact with one another (Pinker, 1997, p. 27).

There is a debate regarding how to understand the modular structure of the brain, or of the mind (depending on the case). For instance, Pinker tries to explain religious experience in humans by positing the existence of a "God module", the result of activity in a specific area of the brain. Ramachandran, on the other hand, believes that religious experiences are the result of interactions between different areas of the brain, whose result is something similar to a functional module. (The press wrongly attributed to him the discovery of the "God module", or, as many people humorously call it, "the G-Spot of the brain".) Ramachandran’s position is the one I will adopt in this series of articles.

4. The Computational Theory of the Brain

Today, the computational theory of the brain is widely accepted both in cognitive science and in neurobiology in general. The debate about this model still rages on, but most scientists now have no problem accepting it as an adequate account of how our brain circuitry exchanges information. However, those who posit the mind as being "what the brain does" actually focus more on the computation arising out of mental modules. Pinker (1997) clarifies something regarding this view of the mind:

The computational theory of the mind is not the same thing as the despised "computer metaphor." As many critics have pointed out, computers are serial, doing one thing at a time; brains are parallel, doing millions of things at once. Computers are fast; brains are slow. Computer parts are reliable; brain parts are noisy. Computers have a limited number of connections; brains have trillions. Computers are assembled according to a blueprint; brains must assemble themselves. Yes, and computers come in putty-colored boxes and have AUTOEXEC.BAT files and run screen-savers with flying toasters, and brains do not. The claim is not that the brain is like commercially available computers. Rather, the claim is that brains and computers embody intelligence for some of the same reasons. To explain how birds fly, we invoke principles of lift and drag and fluid mechanics that also explain how airplanes fly. That does not commit us to an Airplane Metaphor for birds, complete with jet engines and complimentary beverage service (pp. 26-27).

There are Still Problems . . .

Now, there remains a problem for the computational theory of the mind, as for every other proposal in neurobiology: the problem of sentience, or what philosophers call "qualia". This is our ability not only to receive stimuli and react to them, but to actually experience them, to have an inner life. I won’t solve this problem in these articles; I will simply assume that qualia are the result of our brain’s evolution.


Aristóteles. (1994). Acerca del alma. Madrid: Editorial Gredos.

Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin.

Descartes, R. (1985). Meditations on first philosophy. US: Cambridge University Press.

Crews, F. (Ed.). (1999). Unauthorized Freud: doubters confront a legend. US: Penguin.

Cushing, J. T. (1998). Philosophical concepts in physics: the historical relation between philosophy and scientific theories. UK: Cambridge University Press.

Grünbaum, A. (1985). The foundations of psychoanalysis: a philosophical critique. US: University of California Press.

Maslin, K. T. (2007). An introduction to the philosophy of mind. 2nd Ed. US: Polity.

Pinker, S. (1997). How the mind works. NY: W. W. Norton & Company.

Popper, K. (1994). Knowledge and the body-mind problem. London & NY: Routledge.

Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.

Ramachandran, V. S. & Blakeslee, S. (1998). Phantoms in the brain: probing the mysteries of the human mind. NY: Harper Perennial.

Webster, R. (1996). Why Freud was wrong: sin, science and psychoanalysis. US: Basic Books.


Humans’ Complicated Way of Seeing and Thinking

(c) 2010 Pedro M. Rosario Barbosa

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License.

A Preacher on the Road …

Last year, I found the most amazing book, called Thank God for Evolution. It stressed the importance of "marrying" science and religion in order to transform our lives and the world. Bold claims! Here is the author: Michael Dowd.

Michael Dowd

If he looks like a preacher, that’s because he is. He is an ordained minister who preaches wherever he is invited in the United States. He used to be a young-earth Creationist; today he is a renowned preacher for evolution, not only the evolution of living things, but the evolution of the universe as a whole. He married an atheist, Connie Barlow, a biologist who today fights for the environment. One day, he told her that he wished to live on the road in order to preach a new perspective on God and creation, a meta-religious view that integrates evolution. He was surprised when she agreed. So, in an act that almost recalls Christ’s call to "leave everything and follow Him", both of them left everything and dedicated themselves to preaching this new view of God. Today Dowd considers himself a crea-theist, while Barlow considers herself a cre-atheist. They preach the Gospel according to St. Charles … Darwin, of course.

They have a van, which they use to travel everywhere in the United States. On the side of it you’ll find the following image:


Dowd says that this image alone is enough to make people wonder. Of course, when he drives through the south of the United States, some people are not amused by it. According to him, he even gets some "interesting" gestures from people. When he pulled over to park his van in Lawrence, Kansas, a biology professor said: "Oh great! Now you’ll piss everybody off!"

I have to confess that I was skeptical about his book when I picked it up. I was already acquainted with several efforts by "spiritual" people to mix science with spiritual beliefs. I’m not fond of Deepak Chopra’s use of quantum physics, which tends to confuse rather than clarify. I’m not particularly crazy about What the [Bleep] do We Know? (much less about Down the Rabbit Hole), which is filled with misinformation and pseudoscience, and I most certainly despise The Secret, which in my opinion is a scam.

But here comes the interesting part: with the exception of academic books, I usually ignore the endorsements printed on a book, be they from The New York Times, The Boston Globe, or others. They rarely help me distinguish a good book from a bad one. Some of these same outlets recommended The Da Vinci Code only to write nasty articles about it afterwards.

Dowd’s book is the exception to the rule. Who endorses the book? Eugenie Scott (the director of the National Center for Science Education [!]), Michael Shermer (publisher of Skeptic magazine [!!]), Francisco Ayala (renowned scientist [geez whiz!]), John Haught (renowned Catholic theologian), and five Nobel Prize winners (Craig Mello, John Mather, Thomas Schelling, Frank Wilczek, and Lee Hartwell), among many other credible people. So, I thought the book might be worth reading.

When I opened it and gazed at it, the following two illustrations surprised me:

Quadrune Brain IQuadrune Brain II
[Drawing by Nancy Margulies, reproduced with permission]

It is interesting that this illustration appears in the chapter on original sin. Dowd claims in this chapter that you cannot realize grace or sin if you don’t pay attention to your brain. According to him, the brain contains the "Great Story" of evolution. In order to "realize" original sin (that is, understand original sin in a way that is universally valid), we have to understand the Lizard, the Furry Li’l Mammal, the Monkey Mind, and the Higher Porpoise. As I kept reading, I started smiling at the explanation the book gives of our brain.

Ah! And I bought the book …

Some People are Blind, and Yet, They can See!

Neuroscientists carry out all sorts of weird experiments every day. For instance, they carry out experiments on whether a blind man can see motion. This is not something they come up with because they have nothing better to do. On the contrary, they are studying a well-documented phenomenon called "blind-sight". This phenomenon is due to the way the brain is "wired up". To help you with the explanation, I give you the following illustration.

Human Brain

Now, here is an illustration of how our eyes are connected to our brain:

[Modified version of an illustration of the brain by Patrick J. Lynch.
It includes illustration of the eye by
Joël Gubler and Jakov.
This illustration can be reproduced under the terms of
CC-BY-SA 3.0.]

From the eyeballs there are two separate pathways to the brain: one goes to the thalamus and ends in the occipital lobe (black), while the other goes to the brain stem and from there to the parietal lobe (green). The occipital lobe is where the visual cortex can be found, the area used for conscious sight; this is the area that lets us see the objects around us. It is very important to point out that the second pathway passes through the brain stem, from which there is a path to the higher centers of the brain.

The fascinating thing about these higher centers of the brain is that they are concerned with reacting to stimuli in the visual field: they direct your attention to something significant. People with blind-sight have one pathway (shown in black in the illustration) that is non-functional, whether because of an accident or a stroke, while the other pathway remains intact. As a result, they can "see" motion and react to it, but they cannot consciously see.

The funny thing is that many reptiles today cannot see the way we humans see. You may ask: why do they have eyes in the first place? The answer is that reptiles can detect motion and direction, and orient their eyes towards something significant. Even the novel and movie Jurassic Park exploited this trait for Tyrannosaurus rex. The reason lizards can be so accurate in capturing flies with their tongue is not that they see them "just" as we do, but that they see motion and react.

This reptilian trait means that our brain carries out subconscious or unconscious mental activity. Imagine that you are driving. You may be consciously focused on the road ahead, or on the music you are listening to, or on a conversation you are having in the car. Your mind is focused on those activities, yet you keep driving perfectly well, aware that things are happening in the periphery. Why is that? This is due in part to that "reptilian" trait of being able to react if something unexpected happens on the street at the periphery of our attention. This unconscious activity of the brain lets us keep a minimum of attention on the street at a subconscious level while we consciously focus on other things. It seems to be a survival mechanism. It not only lets us drive, but also lets us walk: thanks to it, we do not stumble over everything in our path (unless our thoughts are too lost in space).

The Complexity of the Eye and the Brain

One of the arguments presented by Creationists and Intelligent Design (ID) proponents is that evolution cannot account for the complexity of the most complex organ we have: the brain. I don’t blame them for thinking this. V. S. Ramachandran describes our brain this way:

… it has been calculated that the number of possible permutations and combinations of brain activity, in other words the numbers of brain states, exceeds the number of elementary particles in the known universe (Ramachandran, 2004, p. 3).
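This claim can be sanity-checked with a quick back-of-the-envelope calculation (my own, not Ramachandran’s; the neuron count and the binary on/off treatment of neurons are deliberate simplifications):

```python
import math

# Simplifying assumptions (mine, not Ramachandran's): ~86 billion neurons,
# each treated as simply "on" or "off", giving 2**neurons possible states.
neurons = 86_000_000_000
log10_brain_states = neurons * math.log10(2)  # log10 of 2**neurons

# Commonly cited estimate of elementary particles in the
# observable universe: about 10**80.
log10_particles = 80

print(f"log10(brain states) ≈ {log10_brain_states:.3g}")  # ≈ 2.6e10
print(log10_brain_states > log10_particles)
```

Even this crude count gives a number with about twenty-six billion digits, versus the roughly eighty-digit particle count, so the comparison is not even close.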

It has even been argued that evolution cannot account for the complexity of something as perfect as the human eye. This is not surprising either, since vision is the sense we prize most. Aristotle begins the Metaphysics by talking about humans’ thirst for knowledge, citing as evidence the importance we give to our senses, especially sight. It is said that only an "intelligent designer" can account for a complexity that lets us function so well. I want to argue the opposite: the degree of complexity of the eye and the brain is precisely a sign that they were not intelligently designed, but forged by continuous mutation and adaptation.

Most of us marvel at our eyes, but not everyone is impressed. The physicist Hermann von Helmholtz took a look at the way the eye is arranged, and he was dismayed. His thought was that humans today can build lenses and cameras that do a much better job than the human eye! Helmholtz stated the following about the human eye:

If an optician wanted to sell me an instrument which had all these defects, I should think myself quite justified in blaming his carelessness in the strongest terms, and giving him back his instrument (quoted in Dawkins, 2009, p. 353).

Let’s look at the human eye.

Human Eye

[Illustration of the eye by Joël Gubler and Jakov.]

In the illustration we can see that light enters through the pupil (on the left) so that it can be perceived by the retina (the fine dark-green line at the right). Immediately we notice that something is wrong with the architecture. For instance, the retina seems to be attached to another layer (represented in red) at the back of the eye. In reality, these two are not attached to each other. In some cases, as we age, the jelly inside the eye liquefies, which can cause the retina to tear. The liquid then seeps between the layers, creating a retinal detachment.

In the retina itself we can find other significant problems. For instance, there is a little dip which is our "blind spot". This is created by the bundle of nerve fibers that gathers right behind that spot to form a pathway to the brain. But it gets weirder, because the image perceived by the eye is inverted. Most of us learned this in anatomy or biology class in middle school or high school, but we never reflect on it as an imperfection of the eye.

Think about that. There is a blind spot in each of our eyes, yet we have no "blind spot" in our vision. We can see everything "just fine"! This happens because our brain "fills in" the gap in our vision. Let’s do the following experiment. Looking at the following image, cover your left eye, fix your right eye on the star, and slowly move towards the screen. At some point you will notice the spot on the right disappear.

Blind Spot

Why does this happen? In order to compensate for the loss of vision due to the blind spot, our brain fills in whatever it cannot see there. That means that, to compensate for the blind spot, the brain has to develop structures and signals. The blind spot, which is an imperfection of the eye, complicates the brain. If there were no blind spot, there would be no need for these brain structures.
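There is also a simple geometric way to predict when the dot vanishes in the experiment above (a sketch of my own; the 15° figure is a common textbook approximation for where the blind spot sits relative to the fixation point, and it varies between individuals):

```python
import math

# Approximate angular position of the blind spot, temporal to the fixation
# point (~15 degrees is a common textbook figure; it varies per person).
BLIND_SPOT_ANGLE_DEG = 15.0

def disappearance_distance(separation_cm: float) -> float:
    """Viewing distance at which a dot printed `separation_cm` to the right
    of the fixation star lands on the blind spot (one eye covered)."""
    return separation_cm / math.tan(math.radians(BLIND_SPOT_ANGLE_DEG))

# Dots printed 7.5 cm apart should vanish at roughly 28 cm from the screen.
print(round(disappearance_distance(7.5), 1))
```

This is why moving toward or away from the screen makes the dot pop in and out of existence: only at one narrow range of distances does its image fall exactly on the spot where the optic nerve exits the retina.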

The same thing happens with the "inverted" image that stimulates our retina. There have to be brain structures to undo the inversion produced by the eye. Once again, this particular imperfection of the eye complicates the brain.

Another imperfection of the eye is its inability to perceive the vast majority of the radiation spectrum. We can only see a very small part of it, and the brain assigns colors to the different frequencies our eyes perceive. This further complicates the brain. Many people think that objects "have color". In reality, objects have no color themselves; they give off light frequencies that reach our eyes, and our brain interprets these frequencies as colors.

Last, but not least, there is one major imperfection of the human eye. If you look at the way our retina is internally structured, you notice something very peculiar. If you design a camera to perceive everything well, you do your best to expose its photosensitive part to the light. This is not the case with our retina. There are photocells (cells responsive to light) in the retina, but ironically they face away from the light. Cells with several nerve endings sit on the side of the retina that receives the light, and these nerve endings deliver the light’s information to the photocells deeper inside the retina. Then the signal from the photocells travels to the nerves at the blind spot, which send it to the brain. So, the brain not only has to compensate for the blind spot, but also for the imperfection of having the photocells in the wrong place in the retina. Needless to say, the ill-designed retina complicates the brain further.

It is no surprise that if you take into account all the regions of our brain dedicated to vision (thirty regions), they make up a bit more than a third of our brain.

Why in heaven’s name are our eyes so ill-made? Unfortunately, neither Creationism nor ID can solve this problem. In the case of ID, they would have to suppose that the designer is REALLY stupid, a position not acceptable to Creationists in any way. Now, if we look at the evolution of the eyes, we can understand perfectly why the eye is the way it is.

See … nature is a "blind watchmaker"; it is not "trying" to create an eye. It is not "trying" to create an ear, or teeth, or limbs. Evolution is a long process of gene mutation and natural selection. Metaphorically speaking, nature does not create an organ out of the blue. Rather, mutation slowly works on the genetic code that is already there, and evolution "builds" from that genetic material, changing phenotypical traits enough that, through exaptation, a new organ emerges.

Even if we lacked fossils, we could look at the different sorts of organisms alive today and present a theoretical model of how the human eye evolved. Why can we do this? Because many of the organisms alive today did not need to develop eyes like ours in order to survive; they developed other means of survival that did not require a fully evolved human eye. We could ask: "If nature designs every species to survive, why didn’t those organisms without eyes evolve them? After all, eyes would have more survival value." Remember that since nature is not a conscious designer, it does not seek to "maximize" the survival of each organism. Natural selection simply preserves the mutations that are adequate (not perfect) for survival.

We can pick several organisms that could represent different stages of the evolution of the human eye. Here is the model itself:

Evolution of the Human Eye
[Illustration created by Matticus78]

Dan-Eric Nilsson was the person who proposed this model. We can find many organisms in nature that represent these different stages of the evolution of the human eye. (a) From a flat patch of photocells in an organism, nature would favor any tendency for those photocells to form a sort of cup, which would give the organism more visual information about its surroundings. (b) The cup structure would help shade the photocells from parts of the environment. Such an "eye" can do little more than detect movement, as in the case of flatworms. (c) The cup structure made possible a further step: a "pinhole" that controls the amount of light received by the photocells, just as we see in the chambered nautilus. Essentially, it helps focus light up to a point. (d) This made possible the development of an enclosed chamber, (e) and later a lens filled with liquid jelly, which helped the eye focus light more sharply on the photocells; we find this in many animals, such as birds and mammals. (f) As the eye kept filling with jelly and water, the images grew sharper and sharper, until we (and many mammals) ended up with the eye we have today.
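A toy calculation (my own illustration, not Nilsson’s actual model, which tracked optical performance in far more detail) shows how quickly tiny changes compound: even if each step improves some measure of eye performance by only 1%, a modest number of steps multiplies that performance enormously.

```python
import math

def steps_to_improve(factor: float, step: float = 0.01) -> int:
    """Number of compounding `step`-sized improvements (e.g. 1% each)
    needed to multiply a trait's performance by `factor`."""
    return math.ceil(math.log(factor) / math.log(1.0 + step))

# Multiplying performance a million-fold takes fewer than 1,400
# one-percent steps -- a blink of an eye in geological time.
print(steps_to_improve(1_000_000))
```

This is the intuition behind gradualist models of eye evolution: no single step has to be dramatic, because selection accumulates small advantages relentlessly.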

But What about the Brain?

A Messy Way Through our Mental Operations

Contrary to what many people think, the brain is not a simple organ. Quite the opposite: some neurologists think of the brain as a composite organ, while others think of it as a system of organs. These organs of the brain are not very well organized either. If you didn’t know better about the relation between vision and the brain, you would swear that the logical place for vision is right in the frontal lobes. In fact, the visual areas are all the way at the back.

The parts of the brain that deal with motor skills and sensory processing are not well organized either. How are these sensory and motor functions spread across these areas of the brain? Let’s look at the homunculi drawn by the famous neurologist Wilder Penfield:

PreCentral GyrusPostCentral Gyrus
[Illustrations of the Primary Somatosensory Cortex (left) and the Primary Motor Cortex (right),
from the Life Science Database Archives in Japan, available under a CC-BY-SA 2.1 JA license]

Sensory and Motor Homunculi
[Illustration in Penfield & Rasmussen (1950): click to enlarge]

It should be noted that, after a thorough study of the brain, it has been found that the faces in these homunculi are actually inverted: the jaw and mouth are closer to the fingers, and the eyes are farther from them.

We could ask: why is this so important? These illustrations help us realize something very important: brain development does not follow the reasoning of an intelligent designer. And they in turn help us explain why we evolved certain behaviors, some of which we are not even aware of. For instance, Charles Darwin noticed that when some people cut with scissors, they clench and unclench their jaw, mimicking the motion of the hand. Why is that? Look again at the homunculus of the primary motor cortex, remembering that the face is actually inverted: the fingers (particularly the thumb and the index) are right next to the mouth and the jaw. The neural proximity of these areas can explain that kind of behavior (Ramachandran, 2004, p. 79).

Not only that, but it can also explain the phenomenon called "phantom limbs". What is a phantom limb? Many people who have had accidents, or have undergone surgery, have lost an arm or a leg. Afterwards, they report that they still feel the arm or hand: they feel that they are waving goodbye, or petting their cat, or grabbing objects. What is interesting about this phenomenon is that when they are touched on certain areas of the face, or even on the part of the arm that remains, they actually feel the same sensation in their phantom limb (especially phantom fingers and a phantom hand). The reason is that the part of the somatosensory cortex that registers sensations from the hand lies right next to the areas for the arm and for the face (again, remember that the face of the homunculus is inverted). When a person loses an arm, the parts of the somatosensory cortex for the arm and the hand are left "hungry" for sensations, so they literally invade the part of the cortex for the face and the jaw. As a result, when you are touched on the face or the jaw, you feel the same sensation in the phantom hand or arm.

"The Great Story" in Our Heads

Now, according to Michael Dowd and his wife Connie Barlow, our brain tells us the Great Story: the story of the whole of reality, that reality called God. This story includes our evolution as a species. As I said before, they illustrate it like this.

Quadrune Brain

Now, why in the heavens would I buy a book that represents the brain in such an apparently weird, even childish manner? The answer is: it accurately represents how our brain evolved! It is actually based on a refinement of a model of the brain called the "Triune Brain", developed by the neurologist Paul D. MacLean. His proposal has been modified slightly due to the importance of the frontal lobes of the brain, as we shall see later, turning it into the Quadrune Brain proposal. Let’s begin with the exposition, shall we?

The Lizard Legacy

Lizard Legacy

The first significant evolutionary stage, when our ancestors evolved from amphibians into reptiles, is what Dowd calls "The Lizard Legacy", or as MacLean called it, the R-Complex ("R" for "reptilian"). This complex consists of the brain stem and the cerebellum. Remember the phenomenon of "blind-sight"? Why do we have the reptilian quality of reacting and directing our eyes at something that seems important? The part of the brain that handles that subconscious process is connected to the brain stem … precisely part of our R-Complex.

According to MacLean, the R-Complex has to do with the most basic sexual instincts, the impulse to search for food, and territoriality. It also reacts to whatever is interpreted as a threat, whether to the organism or to its territory. Hence, one of the most basic traits of the R-Complex is that in it resides our capacity for aggressive behavior when we feel threatened in some way, or when we are about to attack. And because it is concerned with territorial protection, its conception of nature is hierarchical: there is a tendency to dominate others in order to protect itself or its territory. It also handles our ability to breathe involuntarily, to respond to stimuli, and even "acquired muscle memory" (so to speak).

Furry Li’l Mammal

Furry Li'l Mammal

Dowd calls the next stage of our brain development the "Furry Li’l Mammal", also called the "paleomammalian complex", or as MacLean and neurobiologists call it, the Limbic System. It is a complex made up of the hippocampus, septum, thalamus, hypothalamus, insula, cingulate cortex, and amygdala.

This is the emotional region of the brain. The limbic system is shared by all mammals. Why is being a mammal linked to emotional states? There is a difference between mammals and reptiles: mammals in general care for their offspring; reptiles usually don’t. Emotional states can help bond parents (especially mothers) and offspring. Love, in the emotional sense of the word, originates in the Limbic System. It also provides the biological basis for emotional responses to certain sense-stimuli, especially taste and smell. There is also a part of the limbic system dedicated to emotional responses to sexual stimuli.

The Monkey Mind

Monkey Mind

For Dowd there is also what he calls "The Monkey Mind", which neurobiologists call the Neocortex. Dowd borrows the term "Monkey Mind" from Buddhism to describe, roughly, what the Neocortex does: calculation (e.g. it calculates risk or safety). However, the term is somewhat imperfect, because Buddhism uses "Monkey Mind" to describe capricious, uncontrollable, or confused mental behavior. Our Neocortex, on the other hand, lets us have an organized view of the world: it supports spatial and mathematical reasoning, it makes language possible, and it contains the higher functions that deal with what we perceive and with our motor commands (as we saw in the case of the primary motor cortex). Conscious thinking and experience seem to take place in the Neocortex, as does our ability to abstract from experience and forge abstract notions and concepts.

However, the term "Monkey Mind" may accurately describe how our brain evolved, since the development of the Neocortex came about as our mammalian ancestors evolved into primates. Humans have the most developed Neocortex in the animal kingdom.

Our Neocortex has evolved into two hemispheres. The left hemisphere controls operations of the right side of our body, and also engages in analytic and rational thinking. The right hemisphere controls the left side of our body, and also engages in intuition and creativity.

Higher Porpoise

Higher Porpoise

Finally, Dowd calls the most developed part of the human brain (where our nerve cells concentrate the most) the "Higher Porpoise". MacLean assumed that the frontal lobes were part of the Neocortex, as indeed they are. However, careful studies have led some neurobiologists to draw an essential distinction between the frontal lobes and the rest of the Neocortex. Some of them call the frontal lobes the "Executive Brain". When this part of the human brain is recognized, MacLean’s Triune Brain proposal becomes the Quadrune Brain model.

This is the part of the brain that carries out what Elkhonon Goldberg calls "executive functions", the most complex operations of the brain. According to Goldberg, we find in it four very important operations of the human brain: intentionality, purposefulness, self-awareness, and complex decision making. This part has developed fully only in humans and, according to Goldberg, it is what makes us human. It is the part of the brain that can actually drive us into action, make moral commitments and moral decisions, and even restrain us from acting on all of the impulses from other areas of the brain, especially the Limbic System and the R-Complex. It is also the part of the brain that lets us form plans to carry out in the future.

Our Evolved Brain

Just as we can correlate many stages of the evolution of the human eye with organisms around the world, the same methodology lets us identify different evolutionary stages of the human brain. And we find that each of these stages contributes a very significant component of human behavior, especially regarding survival. We cannot regard all of these regions of the brain, or all of these stages of evolution, as merely "separate" operations. Each of these regions and stages has an organic relationship with the rest of the brain. This organic interaction among the brain’s organs, which makes it a system, is the origin of human nature.

Without evolution, there is no human nature. The brain is very important evidence for exaptation, since many of its organs (or parts), once developed, give the whole system a new function, a new behavior. It shows no evidence of having been intelligently designed, but it is important evidence of having been shaped by evolution. Everything in our brain has evolved according to natural selection, giving us an enormous advantage over many other organisms. Indeed, human intelligence and mental states have made us one of the most successful organisms on the face of this planet. So successful, in fact, that we represent a very real threat to the existence of other organisms everywhere in the world. This is an issue we will talk about in a future article.


Aristóteles. (2008). Metafísica. Madrid: Alianza Editorial.

Ayala, F. J. (2007). Darwin’s gift to science and religion. Washington, D.C.: Joseph Henry Press.

Carter, R. (2000). Mapping the Mind. US: University of California Press.

Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. US: Penguin Books.

Damasio, A. (1999). The feeling of what happens: body and emotion in the making of consciousness. San Diego, NY & London: Harcourt.

Darwin, C. (2004). The descent of man, and selection in relation to sex. US: Penguin. (Originally published in 1879).

Darwin, C. (2008a). On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. NY: Bantam. (Originally published in 1859).

Darwin, C. (2008b). On the origin of species: the illustrated edition. D. Quammen (Ed.). NY & London: Sterling.

Dawkins, R. (1996). The blind watchmaker: why the evidence of evolution reveals a universe without design. NY & London: W. W. Norton & Company.

Dawkins, R. (2004). The ancestor’s tale: a pilgrimage to the dawn of evolution. Boston & NY: Houghton Mifflin Company.

Dawkins, R. (2009). The greatest show on earth: the evidence for evolution. NY: Free Press.

Dowd, M. (2007). Thank God for evolution: how the marriage of science and religion will transform your life and our world. US: Penguin.

Falk, D. R. (2004). Coming to peace with science: bridging the worlds between faith and biology. IL: InterVarsity Press.

Futuyma, D. J. (2009). Evolution. US: Sinauer Associates.

Goldberg, E. (2001). The executive brain: frontal lobes and the civilized mind. Oxford & NY: Oxford University Press.

Goldberg, E. (2009). The new executive brain: frontal lobes in a complex world. Oxford & NY: Oxford University Press.

Jeeves, M. & Brown, W. S. (2010). Neurociencia, psicología y religión: ilusiones, espejismos y realidades acerca de la naturaleza humana. España: Editorial Verbo Divino.

Maturana, H.R. & Varela, F. J. (1992). The tree of knowledge: the biological roots of human understanding. Boston: Shambala.

MacLean, P. D. (1990). The triune brain in evolution: role in paleocerebral functions. Springer.

Penfield, W. & Rasmussen, T. (1950). The cerebral cortex of man. NY: The Macmillan Company.

Pinker, S. (1997). How the mind works. NY & London: W. W. Norton & Company.

Ramachandran, V. S. (2004). A brief tour of human consciousness. NY: Pi Press.

Ramachandran, V. S. & Blakeslee, S. (1998). Phantoms in the brain: probing the mysteries of the human mind. NY: Harper Perennial.

Rebato, E., Susanne, C., & Chiarelli, B. (eds.). (2005). Para comprender la antropología biológica: evolución y biología humana. España: Editorial Verbo Divino.

Sagan, C. (1977). The dragons of Eden: speculations on the evolution of human intelligence. NY: Ballantine Books.

Servos, P., Engel, S. A., Gati, J., & Menon, R. (1999). fMRI evidence for an inverted face representation in human somatosensory cortex. NeuroReport, 10, 7: 1393-1395.

Shubin, N. (2009). Your inner fish: a journey into the 3.5-billion-year history of the human body. NY: Vintage Books.

Varela, F. J., Thompson, E. T., & Rosch E. (1992). The embodied mind: cognitive science and human experience. US: The MIT Press.
