31 October 2014

A Library is My Temple

Books and stories have always played an important role in my life. One of my early memories is a battle of wills with a librarian over how many Asterix books I could take out. Policy said only three at a time, but I wanted them all. Later, when I became a librarian, I understood the policy was designed to ration a limited resource. That was the small-town library in Taupo (pop. 12,000 in the '60s and '70s) where I grew up. Before we left I had discovered science fiction and began checking out Asimov and Arthur C. Clarke books. We had a small bookcase at home with books collected mostly by my mother, since my father was unable to read well (I think he had what we'd call dyslexia these days). Some of those books became companions and guides.

I recall libraries in all the places where I've lived. The magnificent Wellington City Library with its curving glass wall and matching curved shelving; the first cafe in a library in New Zealand, I think. The Auckland City Library, ugly but functional and massive. For a few years I had keys to the stacks of ACL as a result of my job and I would explore the catacombs. I discovered unbroken runs of Popular Electronics and built circuits based on designs from them. There was a complete set of Max Müller's Sacred Books of the East series gathering dust in a gloomy corner. Libraries in Taupo, Hamilton, Northcote, and Glenfield too. School libraries, university libraries (Waikato, Auckland, Victoria, AUT, Unitec), and private libraries too. I owned very few books until I was in my late 20s. Books were expensive and, anyway, libraries made owning them unnecessary. I spent my money on buying records. Then I discovered second-hand books and the Hard to Find Bookshop (but that's another story).

One of the important libraries I got to know was at Waikato University, where I studied chemistry for four years in the mid 1980s. This was a large purpose-built university library on four floors with views over the extensive grounds of the campus. Chemistry was on the fourth floor. The library used Library of Congress call numbers, so science was Q and chemistry was QD. I got to know the QDs pretty well. But I did other sciences as well, so the whole Q section was where I spent most of my time. However, some days I would stop off at the 2nd or 3rd floor and just wander amongst the stacks. Trailing through sections on sociology or literature, marvelling at the titles of the books. Trying to imagine the scope of the knowledge that the books represented. All that knowledge! I was spellbound.

My first job in a library was at what was then the Auckland College of Education, now absorbed into the University of Auckland. I was lucky to get the job in many ways. My forays into rock 'n' roll were not paying the bills and I was bored. I'd been out of work long enough to qualify for a subsidised placement and my boss was canny enough to take advantage of that while giving the job I applied for to someone else. The staff there were all educated, urbane, friendly and talkative. They talked about literature in such a way that for the first time in my life I wanted to read it. I started on Nobel Prize winners, reading Hemingway, Steinbeck, Updike. I got into John Irving and D H Lawrence. I even read James Joyce. I've read his Ulysses, but prefer the original. 

Importantly, I learned about being a librarian and liked it enough to go to Victoria University in Wellington in 1991 for a post-graduate librarianship course. In the process I did a research project using citation analysis on the New Zealand Library Journal, which became my first academic publication. My main finding was that local librarians were influenced by reading the New Zealand Library Journal. I did my research in and around the Victoria University library and learned about writing essays (something I'd never done much of). I learned to type my essays on a computer. And in 1991, two years before the world-wide web took off, I created my first hypertext document.

Most of my professional life was spent in engineering libraries. I became more of an information consultant, a specialist in database searches and document supply. My favourite thing was identifying a book for an engineer that was precisely what he needed and the only thing like it; finding it in a library in Canada; checking the online catalogue (this was 1995, so it was one of the very first web-searchable online library catalogues); and requesting the book be sent to us in NZ. A week later we got it. I also recognised the potential for the WWW to save libraries money (a feature of my approach to online information).

I gave up working in libraries in 2002 to come to Britain and immerse myself full-time in a Buddhist lifestyle. But one of the first things I did was join the local public library (which is rather small and disappointing considering where it is). I got my reader's card for the Cambridge University Library in about 2006. A Triratna Order colleague is a fellow of Trinity College and kindly wrote a recommendation. It costs very little and gives me access to all the collections, including electronic resources and, to some extent, manuscripts. The "UL", as everyone calls it, was built in the 1930s. It's probably what you'd call "monumental", with a large tower over the entranceway and a forbidding exterior. The inside seems to be modelled on a monastery, with central courtyards and wings surrounding them.

The UL has the oddest filing system I've ever come across. Books are filed first in order of decreasing size (a, b, c, or d); then by a broad subject based on a home-grown system (Buddhism is 2:3-2:5); then by acquisition order, with a number indicating century and decade, then a running number. So all the middle-sized books on Buddhism are together at one end of the south wing, 3rd floor, but from the point of view of browsing they are randomly jumbled: one gets Tibetan Tantra, followed by a meditation manual, a history of Buddhism in Sri Lanka, and a treatise on Pure Land Buddhism, all next to each other.
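For readers who think in code, the filing scheme amounts to a three-part sort key. Here is a minimal sketch in Python; the classmark fields and example values are my own invention for illustration, not the UL's actual notation:

```python
# Toy model of the UL filing order: size class first, then broad
# subject, then acquisition number (century/decade + running number).
# Field formats here are invented for illustration.

size_rank = {"a": 0, "b": 1, "c": 2, "d": 3}  # decreasing physical size

def shelf_key(book):
    """Sort key reproducing the UL-style shelving order."""
    size, subject, acquisition = book
    return (size_rank[size], subject, acquisition)

books = [
    ("b", "2:4", 19871),  # middle-sized, Buddhism, acquired late 1980s
    ("a", "2:3", 20014),  # large, Buddhism, acquired 2000s
    ("b", "2:4", 19305),  # middle-sized, Buddhism, acquired 1930s
]
books.sort(key=shelf_key)
# Within one size and subject, books sit in acquisition order - which
# is why browsing the Buddhism shelves feels randomly jumbled by topic.
```

The point of the sketch is the last component of the key: within a subject range, topic plays no part in the final ordering, only date of acquisition.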

The atmosphere inside is also monastic. Quiet industry. Scholars working behind piles of books. I feel the incessant tapping of computer keys detracts somewhat, but I might just be jealous of the wafer-thin laptops that scholars here all seem to have. Because the central spaces are courtyards and the books and study spaces are distributed around the edges of a large building, one can walk a very long way during a day at the library. Going from Buddhism, to where the Sanskrit books are, to the nearest photocopier is about 200m of walking and four levels of stairs! 

The internal architecture is a weird mix of 1920s art-deco utilitarianism and, at times, rococo decoration with carved wooden panels. Mostly the former. It's pre-brutalist, fortunately, but still quite stark in places. The sixth-floor North Front wing has nothing much going for it: a concrete bunker with books. And yet closer to the entranceway there is light and space and attention to detail, along with art exhibitions.

Here I have access to the literature of Buddhism in manuscript form, and published in many languages. The Tripiṭaka can be found in Pāḷi (three versions), Tibetan (two versions) and Chinese (only the Peking edition). Published editions of Indian literature from the beginnings of Western engagement with it, and editions of ancient texts in Sanskrit and other Indic languages, are comprehensively collected. Secondary literature is held in a separate area, but is also fairly comprehensive, despite the demise of Buddhist studies at Cambridge. The UL is a legal deposit library, so a copy of every book published in the UK must be deposited there. I also have access to the entire range of their electronic collections of databases and article aggregators like JSTOR.

As a professional librarian I was often involved in discussions about the role of the library in the age of computers. In my last library job I managed projects that shifted our reliance from print and CD-based indexes and sources to web-based products. I negotiated with, or translated for, suppliers, IT staff, senior management and librarians. The UL makes full use of all these electronic resources. In the meantime many journals are free to read online (though let's not forget that someone pays to host them; they are not free). Google Books is becoming an increasingly useful tool for finding information in books, even books I already own. Scanned articles and books abound, though they are of dubious legality. And many scholars either maintain an online bibliography (e.g. Michael Witzel, Bhikkhu Anālayo, Richard Gombrich) or upload their work to academia.edu (Jan Nattier, Harry Falk, Geoffrey Samuel). But despite all the wizardry I still need to visit the library from time to time. Sometimes I leave with burning eyes and a running nose from the paper dust, having handled some book that has mouldered on the shelf for decades. Very often what I want is in storage and must be retrieved by a library assistant (a consequence of the demise of Buddhist studies). But the service is efficient and seldom takes more than half an hour.

Book Inscription
To Isaline B. Horner
colleague and friend
Nov 1937
There are not many libraries in the world that are so well funded, so comprehensive and so accessible. The UL is a place I can go to commune with the many scholars on whose shoulders I stand. And if I want to read the original Robin Dunbar article on neo-cortex size correlations with group size I can just get it off the shelf (I have done). Sometimes I come across little gems: books previously owned by I. B. Horner, or books signed by Edward Conze or C. A. F. Rhys Davids (reminding us that Cambridge was once an important centre for Buddhist studies, whose star has faded). I may not be a member of the university, but I know that I belong there. I may not be a world-class scholar with a lifetime of achievement and honours, but I am part of that milieu. My few publications are a contribution to the quest for knowledge to which the UL is both monument and cathedral.


24 October 2014

When Did Language Evolve?

This question is one of the most interesting and most difficult to answer of all the interesting questions that scientists seek answers for. Language is one of the defining characteristics of humans. Yes, some animals do have relatively sophisticated signs they use with each other, but language in all its glory – phonology, morphology, syntax and grammar – is something that sets humans apart. Robin Dunbar's recent book Human Evolution: A Pelican Introduction (2014) has a nice little essay on the subject (235-244) that I'll attempt to précis here.

In fact the question "when did language evolve?" resolves into two questions:
  1. When did humans evolve the capability for language?
  2. When did humans begin to use language?
Before we examine the evidence we need to quickly outline Dunbar's main themes. The book draws on two main fields of research other than anthropology and palaeo-anthropology. Dunbar's main work is on what he calls the Social Brain Hypothesis (SBH). Dunbar found a correlation between the ratio of neo-cortex to brain size (by volume) and the size of groups in social animals. Taking certain other factors into account, the correlation allows Dunbar to accurately predict the average group size for any social animal. Each social animal, in fact, sits at the centre of a series of concentric groups of increasing size. For humans it turns out that the numbers are (approximately): 5, 15, 50, 150, 500, 1500. These numbers correspond to structures within human groups. The community has 150 members, and this is the most famous Dunbar Number. 150 is the mean size of communities in the Domesday Book, for example (see 70-71 for a range of other correlations). The SBH says we can only keep track of the business (mates, kin, alliances, etc.) of about 150 other people. We might know 500 by name and 1500 or more people by sight, but we won't know about their likes and dislikes or their relationships with other group members. Chimpanzees, by contrast, live in communities of about 50 and don't have the larger groupings. Using this correlation Dunbar is able to calculate what size of groups our distant ancestors lived in. And this leads to the second field of research.
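The concentric layers scale by a roughly constant factor of about three, a regularity that is easy to check. A quick sketch (the layer values are from the book; the check itself is mine):

```python
# Dunbar's concentric social layers for humans (approximate sizes).
layers = [5, 15, 50, 150, 500, 1500]

# Each layer is roughly three times the size of the one inside it.
ratios = [outer / inner for inner, outer in zip(layers, layers[1:])]
print(ratios)  # each ratio is close to 3
```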

Social animals have an extra time pressure that solitary animals do not. As well as feeding, resting and mating, social animals have to socialise, or put effort into maintaining social links. Primates do this primarily by grooming each other (though bonobo chimps also use sexual activity). Grooming causes both partners to produce endorphins, creating a sense of well-being. By studying living primates we can see how much time they spend on various activities and build up models called Time Budgets. In groups of 150 there is simply not enough time to do everything. In order to maintain these large groups we need to do more than eat raw vegetation and pick fleas off each other. Dunbar explores how we might have responded to the time pressure of larger groups. For example, cooking food increases the calories available and decreases the amount of time needed for feeding. Singing and dancing together also create a sense of well-being in a group, and do so far more efficiently than one-to-one grooming.

Some physical changes associated with language use occur at the same time as changes in our brain size that coincide with living in larger group sizes. So there is no doubt that language use is correlated with changes in the brain, but we're not sure yet whether it was causal and in which direction.

The Evidence

Dunbar considers a range of evidence in trying to answer the question of when humans began to use language. Some of it does not tell us much in the long run. For example, the lateralisation of the brain—into left and right, with the left side slightly larger—was once seen as an important development. However, it's not language-specific. For all we know it might be related to right-handed spear throwing (in humans), and in fact the same lateralisation is present in prehistoric sharks. The emergence of symbolism—as in cave painting and grave goods—has also been seen as significant. The use of symbolism starts around 40,000 BP, which is interesting, but it postdates some of the other developments (below) very considerably.

There is also genetic evidence. But again the genes cited—FoxP2 and MYH16—lack specificity. Because mutation in FoxP2 is associated with speech and grammar difficulties, it's still sometimes called "the language gene". However, mice recently implanted with the FoxP2 gene did not start talking. What they did do is learn better; in particular, they found "...it easier to transform new experiences into routine procedures." FoxP2 is now known to be shared with Neanderthals, and thus to be at least 800,000 years old (the age of the last common ancestor of Neanderthals and Archaic Modern Humans). MYH16 is even older, at 2.4 million years. Inactivation of MYH16 decreases the size of the jaw and its associated muscles. The argument, though it cannot be substantiated, is that this made speaking possible. Thus the genetic evidence is also, to date, inconclusive. Language use is such a complex task that no one gene is likely to be more than a tiny part of a larger story.

In terms of anatomy we can look at the thoracic nerves, the hypoglossal canal in the skull, the position of the hyoid bone, and the ear canals. The thoracic nerves control the chest and diaphragm, and since breath control is required for speech we expect to see significant enlargement of these nerves in modern humans. The hypoglossal canal is where cranial nerve XII, which "innervates the tongue and mouth", emerges from the skull. Both are significantly larger in modern humans than in apes. Sketchy fossil records suggest that Homo heidelbergensis, Neanderthals and Archaic Modern Humans (AMH) all had human-like values for these nerves. The hyoid bone connects the base of the tongue to the top of the larynx, and in humans it is positioned low, allowing us to make certain sounds, particularly the vowels. Neanderthals also seem to have had low hyoid bones. Finally, the ear canals, as well as providing us with balance, also allow us to hear. We know that chimp and human canals differ in ways that affect how we hear speech. 500,000-year-old hominins had ear canals similar to ours.

The physical evidence suggests that many of the key anatomical changes were in place for humans (and Neanderthals) to start speaking roughly 500,000 years bp. Dunbar notes that this coincides with when the time demands for grooming would have risen above 20% of available time. 
"it is very likely that a more complex vocal repertoire evolved quite early on in hominin evolution in response to increasing group size." (241).
In fact we see parallels in the complexity of some bird calls (chickadees). There is also direct evidence that primate facial and gestural repertoires increase in complexity with increasing group size (241). 

A key ability some social animals have is the ability to form impressions of the intentions of another animal. This is called mentalising. Social animals need to know the disposition of the other members of their community and have developed the ability to infer this from clues such as posture, facial expressions and tone of voice. One of the main things we do with language is report on other people. If I tell you "Brian likes Mary" you must understand your own mind, my mind, and Brian's mind: that's 3rd order mentalising. No doubt you'd wonder whether Mary knows that I told you that Brian likes her, and how she would respond; that's 4th order. Humans average out at being capable of 5th order mentalising. This ability to mentalise bears "an uncanny resemblance to the embeddedness of clauses in the grammatical structure of sentences" (242): e.g. Shakespeare attempts to have us, the audience, believe that Othello thinks that Iago is telling the truth when he says that Desdemona returns the love that Cassio has for her. Understanding this play requires the audience to use 5th order mentalising. Shakespeare is revered as a storyteller partly because he must have been able to sustain 6th order mentalising: he must have been able to see the 5th order story from our point of view.
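The parallel between orders of mentalising and clause embedding can be made concrete: each "X believes that …" wrapper adds one level, just as each subordinate clause does. A toy sketch of this counting (my own illustration, not Dunbar's):

```python
# Model a belief as a nested structure; nesting depth = order of mentalising.

def believes(agent, content):
    """One more order of intentionality: agent holds a belief about content."""
    return {"agent": agent, "content": content}

def order(state):
    """Count minds involved: one per wrapper, plus the base mental state."""
    if isinstance(state, dict):
        return 1 + order(state["content"])
    return 1  # the innermost proposition ("Brian likes Mary") involves one mind

# "I tell you that Brian likes Mary": you model my mind modelling Brian's.
statement = believes("you", believes("me", "Brian likes Mary"))
print(order(statement))  # 3rd order mentalising
```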

It turns out that we can estimate mentalising capability from neuro-imaging studies of various primates. We think that Australopithecus would have managed 2nd order mentalising on average. Homo erectus and Homo heidelbergensis averaged 3rd order, but certain individuals might have reached 4th order. Neanderthals averaged 4th order, but some individuals reached 5th order. Modern humans average 5th order, and some reach 6th. So it's possible that Neanderthals had language, but it would not have been as sophisticated as ours. We also know that Neanderthals had large brains, but their increase in brain size was mainly in the occipital lobe, concerned with eyesight (their eyes were also larger than ours), whereas Homo sapiens' increase in brain size was more in the frontal lobes. So Neanderthals may not have been capable of quite the same levels of abstraction as modern humans, but could see well in low light.

Putting it all together

It seems that by 500,000 years ago we had all the physical and mental equipment for using language in place. Archaic humans and (probably) Neanderthals were anatomically capable of using language. Physical evidence suggests language use by at least 40,000 years ago. Language being a complex phenomenon, we must look for complex conditions related to its evolution. Michael Witzel's study in comparative mythology (see his Origins of the World's Mythologies) suggests that storytelling and mythology date from at least 70,000 ybp. By the time modern humans left Africa they had well-developed mythic narratives which involved abstractions and metaphors. I think this points to Modern Humans (ca. 250,000-100,000 ybp) using speech in symbolic ways from very early on.

Some suggest that language developed alongside hunting of large animals, but just because we hunted together does not mean that hunting was a driver of language, as Dunbar points out: many animals hunt as packs without language. Wolves, orca, humpback whales, and dolphins all use sophisticated, coordinated hunting strategies without the need to sit down and explain everything first. More likely is that complex tool making and use was accompanied by more sophisticated communication, if not fully developed language. 

We might also usefully consider George Lakoff's work on the nature of metaphor and abstraction. Both are rooted in our experience of interacting physically with the world. I think, but cannot prove, that the hand gestures we make as we speak are related to the metaphors of interaction we are invoking; that is to say, our hands act out the interactions underlying our abstractions and metaphors. Gesture can be powerfully communicative, as anyone who knows sign language will attest, and infants can learn to communicate with gestures long before they learn to speak (though the jury is still out on whether this facilitates later language development). The way signers convey metaphors also gives us potential insights into the process of using language to communicate. Language is not simply or only speech; its nature must be understood within paradigms of the embodied mind. Presumably at first we talked mainly about our physical interactions with the world and each other. Then we discovered the use of similes: "the man can run fast, like a cheetah"; and then the use of metaphors: "the man is a predator". This progression is creatively explored in China Miéville's novel Embassytown. Presumably this all took a long time; like mentalising, the ability evolved in sophistication gradually, producing changes that any one generation might not have noticed.

Finally, out of left field, I would like to highlight research into "conversational grunts": the non-language sounds (mmm, uh-huh, ah, etc.) we make while others are speaking to let the talker know we are listening. We can signify a great deal simply by the intonation of a sound like /mmm/: affirmation, disagreement, disapproval, happiness, and so on. Other research in this area, e.g. on sound symbolism, shows that communication, especially of our emotional state (and this is extremely important in socialising), can be done without semantics.

Language is not simply about communicating abstractions, though fully fledged language has this facility. Through language we communicate our disposition and socialise more effectively: language use allows us to use our time more efficiently. Language seems to have evolved alongside our larger brains and group sizes; alongside tool use and other indicators of the increasing sophistication of our minds. It seems the capability was anatomically in place long before we began to use it. The communication of even archaic humans was likely a good deal more sophisticated than that of modern-day apes.

Once language did evolve, it changed constantly and rapidly. Language was almost certainly never universal. Each language group (unconsciously) adapted language to reinforce group membership and identity. In the extreme, we find 1000 of the world's 7000 languages on the island of New Guinea. Language differences make inter-community communication difficult. Until the advent of civilisation, language would have been a defining feature of one's identity. And this might explain why some languages developed very complex grammar that is difficult to learn except by growing up with it. Some of the changes in grammar might be explained by expanded worldviews. Trade links, and the possibility of travel outside the range of one's tribe made possible by civilisation and empires, exposed us to strangers. It's worth reading Dunbar's theoretical book in conjunction with something like Jared Diamond's The World Until Yesterday, which describes the day-to-day reality of hunter-gatherer life.

Dunbar's book is unique in its approach to human evolution. The combination of the Social Brain Hypothesis and Time Budget modelling allows Dunbar to draw a compelling picture of how our distant ancestors might have lived, and also of when they might have adopted new technologies like fire for cooking and, of course, language. A good deal of the time he is drawing directly on his own research or research conducted by members of his research group at Oxford. While we will only ever be able to infer how prehistoric humans lived from such evidence as has survived the millennia, Dunbar shows that we can obtain much more detail than before. His book takes us from SVGA to HD. Language use is in fact only a small part of the book, but it highlights the kinds of inferences that can be drawn, and of course language use is iconically human (Koko et al. notwithstanding). Understanding where we came from and how we developed over time is a key task in understanding who we are now.


17 October 2014

Anicca, Dukkha, Anattā

This essay discusses the Aniccavaggo (the Section on Impermanence) in Saṃyutta 35 (Saḷāyatana, the six sense bases) in the fourth book of the Saṃyutta Nikāya (SN iv.1ff). The key words nibbindati, virajjati and vimuccati mark these passages as relating to the third stage of the Spiral Path, the stage of paññā (Skt. prajñā), which I will translate here as "understanding". These texts lay out, in a very accessible way, some important ideas with regard to what Buddhists are seeking to understand. At least for the early Buddhists, understanding has a specific domain and content.

I'll present my translation of the first text of the section (with notes on the second and third, which differ only by substituting dukkha and anattan for anicca) and then discuss the texts afterwards. There are 12 texts in this section, but we can easily summarise them because there is considerable repetition with minor variation. Each text is presented with more or less identical wording, focussing first on impermanence (anicca), then on disappointment (dukkha), and finally on insubstantiality (anattan); each of these is repeated from the "subjective" (ajjhatta) and "objective" (bāhira) points of view, and then with respect to the past, present and future, giving twelve variations on the basic text. Only the first text in the section has a traditional nidāna or framing narrative.

1. Ajjhattāniccasuttaṃ ~ 2. Ajjhattadukkhasuttaṃ ~ 3. Ajjhattānattasuttaṃ
The Suttas on Subjective Impermanence, Disappointment and Non-identification. (SN 35: 1-3)
1. Evaṃ me sutaṃ. Ekaṃ samayaṃ bhagavā sāvatthiyaṃ viharati jetavane anāthapiṇḍikassa ārāme. Tatra kho bhagavā bhikkhū āmantesi – ‘‘bhikkhavo’’ti. ‘‘Bhadante’’ti te bhikkhū bhagavato paccassosuṃ. Bhagavā etadavoca –
Thus I heard. One time the Bhagavan was staying in Sāvatthī, in the Jeta Grove, Anāthapiṇḍika's park. Right there the Bhagavan addressed the bhikkhus: "bhikkhus!"
"Sir?", the bhikkhus replied.
This is what the Bhagavan said:
‘‘Cakkhuṃ, bhikkhave, aniccaṃ. Yadaniccaṃ taṃ dukkhaṃ; yaṃ dukkhaṃ tadanattā. Yadanattā taṃ ‘netaṃ mama, nesohamasmi, na meso attā’ti evametaṃ yathābhūtaṃ sammappaññāya daṭṭhabbaṃ. Sotaṃ aniccaṃ. Yadaniccaṃ…pe… ghānaṃ aniccaṃ. Yadaniccaṃ…pe… jivhā aniccā. Yadaniccaṃ taṃ dukkhaṃ; yaṃ dukkhaṃ tadanattā. Yadanattā taṃ ‘netaṃ mama, nesohamasmi, na meso attā’ti evametaṃ yathābhūtaṃ sammappaññāya daṭṭhabbaṃ. Kāyo anicco. Yadaniccaṃ…pe… mano anicco. Yadaniccaṃ taṃ dukkhaṃ; yaṃ dukkhaṃ tadanattā. Yadanattā taṃ ‘netaṃ mama, nesohamasmi, na meso attā’ti evametaṃ yathābhūtaṃ sammappaññāya daṭṭhabbaṃ. 
The eye is impermanent [2. disappointing; 3. insubstantial]. What is impermanent is disappointing. What is disappointing cannot be identified with a Self. Of that which cannot be identified with, [we say] "It is not mine; I am not this; this is not my Self." Just this is to be seen as it is, with perfect understanding (samma-paññā). The ear is impermanent, etc.; the nose, etc.; the tongue, etc.; the body, etc.; the mind, etc.
Evaṃ passaṃ, bhikkhave, sutavā ariyasāvako cakkhusmimpi nibbindati, sotasmimpi nibbindati, ghānasmimpi nibbindati, jivhāyapi nibbindati, kāyasmimpi nibbindati, manasmimpi nibbindati. Nibbindaṃ virajjati; virāgā vimuccati; vimuttasmiṃ vimuttamiti ñāṇaṃ hoti. ‘Khīṇā jāti, vusitaṃ brahmacariyaṃ, kataṃ karaṇīyaṃ, nāparaṃ itthattāyā’ti pajānātī’’ti. 
Seeing this way, bhikkhus, the educated insightful disciple is disenchanted with the eye, disenchanted with the ear, disenchanted with the nose, disenchanted with the tongue, disenchanted with the body, disenchanted with the mind. Being disenchanted they can disentangle themselves. Having disentangled themselves, they are freed. Being free, there is the knowledge "I am free". They know: "birth is ended; the religious life is fulfilled; the task is completed; I'll never be reborn."

The other texts in the section are:

4. Bāhirāniccasuttaṃ ~ 5. Bāhiradukkhasutta ~ 6. Bāhirānattasuttaṃ

The Suttas on Objective Impermanence, Disappointment and Non-identification.

7. Ajjhattāniccātītānāgatasuttaṃ ~ 8. Ajjhattadukkhātītānāgatasuttaṃ ~ 9. Ajjhattānattātītānāgatasuttaṃ

The Suttas on Past and Future Subjective Impermanence, Disappointment and Non-identification.

10. Bāhirāniccātītānāgatasuttaṃ ~ 11. Bāhiradukkhātītānāgatasuttaṃ ~ 12. Bāhirānattātītānāgatasuttaṃ.

The Suttas on Past and Future Objective Impermanence, Disappointment and Non-identification.


I've made the point about the domain of application for paṭiccasamuppāda many times, but not for a while. So to reiterate, these texts confirm the summary found in the Sabba Sutta. The domain of application of paṭiccasamuppāda is the sensory world; that is to say the domain of experience.

Here we focus on the two aspects of sense experience: the "subjective" (internal = ajjhatta) aspect in terms of the eye, ear, nose, tongue, body and mind; and the "objective" (external = bāhira) in the sense of forms, sounds, odours, tastes, tactile sensations and mental-activity. This is a relatively unsophisticated view of sensory perception in which the eye does the action of seeing as well as all the processing that we now associate with the brain. The eye passes on the seen to the manas which carries out the other functions, such as naming (saññā) and attraction/repulsion (saṅkhārā), etc. Both subjective and objective aspects of experience are treated identically.

I'm usually wary of the terms subjective and objective for reasons I've spelled out in previous essays (see esp. Subjective & Objective). The term here is purely epistemological. The experience of seeing a form has two aspects: the seen and the seeing. No ontological conclusions can be drawn from this. From the mere experience of seeing a form we cannot know the nature of the form nor of the eye. Where form is defined, it is defined in experiential terms: colour, resistance, shape, texture. In the Buddhist description of experience both form and eye—i.e. both sense object (alambana) and sense faculty (indriya)—are necessary for the arising of sense cognition (viññāna), and the three together give rise to a sensory experience (vedanā "a known", "a datum"). There are no pure forms or ideas as in Plato's account of phenomena and noumena. Indeed noumena are implicitly denied here and elsewhere.

Later Buddhism insists that the subject/object distinction is just something we impose on experience, an argument which is itself based on deep meditative experience. But even when the distinction is acknowledged, as it is here, there is no difference in treatment, no suggestion of ontological speculation or position taking. Even in form etc., there is nothing in experience to identify with. 

The object of knowing and seeing (ñāṇadassana), then, is the process of sensory perception. It is not "reality". When we say that we see "things" as they really are (yathābhūta), we do not mean "things" in the general sense of "everything" (reality); we mean specifically the things of experience. We may choose to generalise this into a Theory of Everything, but this generalisation creates many philosophical problems of the kind that Buddhist philosophers are still arguing about. As a theory of why experience is disappointing, the traditional account is still quite workable and based on sound foundations that will keep it relevant for the foreseeable future. The rest—the arguments about the nature of reality and all that (all ontological arguments)—is already anachronistic and irrelevant.

It is this evametaṃ 'just this' relation to sense experience that is to be seen with perfect understanding (samma-paññā; Skt. samyak-prajñā). In Buddhist jargon, right-view consists in correctly seeing experience as it is. To take this statement in context, we know that a similar analysis is carried out with regard to the khandhas (the factors of experience). So neither the factors of experience, nor the content of experience, nor any aspect of experience, is permanent. And what is impermanent is disappointing; and what is disappointing cannot be our Self. This logic is almost certainly drawn from the Brahmanical sphere. It represents a direct contradiction of the Vedantic ideal of saccidānanda. These are the three characteristics (trilakṣaṇa) of brahman/ātman: being (sat < √as), consciousness (cit) and bliss (ānanda). But we know that the early Buddhists denied that experience has being. In fact neither existence (astitā < √as) nor non-existence (na-astitā) apply to the domain of experience. And because experience is anicca it is dukkha rather than sukha; sukha being a synonym for ānanda. Nothing that is dukkha can possibly be ātman or brahman. This parallel between Buddhist and Vedantic thought was established by K R Norman (1981). 

The Buddhist analysis blocks identification with any aspect of experience as our essence, self, soul or any enduring entity - which is why I'm suggesting "non-identification" as a translation of anattan (Skt. anātman). If ātman means 'myself' (reflexive pronoun) then an-ātman can be seen as a bahuvrīhi compound: "without a myself", "non-self-referential". Since absolutely every experience is impermanent, disappointing and non-self-referential, even if we did have a soul, we'd never have access to knowledge of it, since knowledge is mental and thus an aspect of the experiential domain. If we can know something permanent, then either we do not presently know it and never will, or we presently know it and have always known it and always will. Ignorance of a soul is either impossible or absolute, precisely because the soul is defined as permanent. Thus if we don't know now, we never will. This is the essence of the argument that Nāgārjuna went on to make about dharmas having svabhāva (See Emptiness for Beginners). 

Note also that, though many Buddhists claim that bodhi has no intellectual content, this text and countless others like it ascribe a very specific content to the experience of vimutti. Firstly, one knows that, having become disenchanted with the sensory world and having lost interest in the froth of the play of thoughts and emotions, one has disentangled oneself from it all. We cease to suspend our disbelief in the play of the senses and see sense experience as it is (yathābhūta). There is nothing here about seeing reality. And being free from entanglement, free from the automatic moving towards attractive sensations and the automatic moving away from repulsive sensations, we know that we are free. Interestingly this is expressed in the first person: vimuttami (i.e. vimuttaṃ asmi) 'I am freed'. But then there is a series of realisations related to the ending of rebirth. Being free from automatic responses, one cannot carry out the kind of actions that contribute to rebirth. One is free in the precise sense of being free from rebirth.

Those who do not believe in rebirth have yet to propose an alternative understanding of this process of disenchantment and what it signifies. This may be because so few of the proponents of a no-rebirth (apunabhava) Buddhism have experienced liberation for themselves. We won't have a truly modern Buddhism until we have a number of credible first-hand accounts of liberation in rationalist terms. As far as I know most people who have insight still resort to traditional narratives to describe their experience. This may be because the traditionalists are more motivated to practice with sufficient intensity. 


Norman, K. R. (1981) 'A Note on Attā in the Alagaddūpama Sutta.' Studies in Indian Philosophy, LD Series 84.

10 October 2014

The Second "Hidden" Kātyāyana Sūtra in Chinese

Stele, Korea.
This text is "hidden" because even though it has been translated into English (Choong 2010), it has not been discussed in relation to the other versions of the text so far as I'm aware. What tends to happen is that when the text is mentioned, scholars think of the Pāli version or the Sanskrit passage cited by Candrakīrti in his commentary on Nāgārjuna's Mūlamadhyamaka Kārikā which mentions the Kātyāyana Sūtra (MMK 15.7). I'm hoping to give some prominence to the other versions of which two are in Chinese.

The Pali Kaccānagotta Sutta (SN 12.15 = KP) is quoted verbatim in the Channa Sutta (SN 22.90; iii.132-5), and as such is of little interest in itself, except that when one text cites another we get a sense of relative dating: citation implies chronology. In the Chinese Saṃyuktāgama, the counterpart of the Channa Sutta (CC; SĀ 262 = T 2.99 66c01-c18) also quotes the Chinese counterpart of the Kātyāyana Sūtra (KC; SĀ 301), but in this case the quoted text differs in some interesting ways. And thus we have a fourth version of the text: KP (= CP), KS, KC, and now CC.

Most significant is how the two Chinese versions deal with a particularly difficult paragraph that in Pali and Sanskrit reads:
KP: dvayanissito khvāyaṃ kaccāna loko yebhuyyena atthitañceva natthitañca. Upayupādānābhinivesavinibandho khvāyaṃ, kaccāna, loko yebhuyyena. Tañcāyaṃ upayupādānaṃ cetaso adhiṭṭhānaṃ abhinivesānusayaṃ na upeti na upādiyati nādhiṭṭhāti ‘attā me’ti.
Generally, Kaccāna, this world relies on a dichotomy: existence and non-existence. Usually, Kaccāna, this world is bound to the tendency to grasping and attachment. And he does not attach, does not grasp, is not based on that biased, obstinate tendency of the mind to attachment and grasping: [i.e.] “[this is] my essence”.
KS: dvayaṃ niśrito ’yaṃ kātyāyana loko yadbhūyasāstitāñ ca niśrito nāstitāñ ca | Upadhyupādānavinibaddho ’yaṃ kātyāyana loko yad utāstitāñ ca niśrito nāstitāñ ca | etāni ced upadhyupādānāni cetaso ’dhiṣṭhānābhiniveśānuśayān nopaiti nopādatte nādhitiṣṭhati nābhiniviśaty ātmā meti |
Generally, Kātyāyana, this world relies on a dichotomy: it relies on existence and non-existence. This world, which relies on existence and non-existence, is bound by attachments and grasping. If he does not attach to these, does not grasp, is not based on or devoted to the biased, obstinate tendency of the mind to attachments and grasping: “[this is] my essence”.


The syntax here is tortuous and in addition contains some distracting word play. The nouns in the green section are from the same roots as the verbs in the orange section. Both Chinese versions replicate this same structure. It's possible that the nouns and verbs are meant to be understood as linked: upāyaṃ with na upeti; upādānaṃ with na upādiyati and so on, but at this stage I'm unsure. The Sanskrit is more difficult to parse because of the "if" (ced) and the Pali seems like a better reading for not having it. 

Note that P "attā me" & Skt "ātmā me" appear to be references to the formula often used with reference to the skandhas. Here wrong view would be of the form:
rūpaṃ etam mama, eso’ham asmi, eso me attā ti samanupassati.
He considers form: “it is mine”; “I am this”; “this is my essence”.
Our text hints that the duality of existence (astitā) and non-existence (nāstitā) arises from the same wrong view. Indeed seeing experience in terms of existence and non-existence is probably at the heart of interpreting it as "mine", "I" or "my essence". 

The Saṃyuktāgama was translated into Chinese by Guṇabhadra in the 5th century CE from a text that was evidently similar to the Sanskrit of KS. Even non-Chinese-readers will see that there are similarities and differences in the two Chinese versions of this paragraph, which I've marked up using the same colour scheme as above for comparison.
KC: “世間有二種若有、 若無為取所觸; 取所觸故,或依有、或依無。無此取者心境繫著。使不取、不住、不計
KC: “Among the worldly (世間) two categories are relied on: being and non-[being]. Because of grasping what is touched, they either rely on being or non-being. If he is not a seizer of that, he doesn’t have the obstinate mental state of attachment; he doesn’t insist on, or think wrongly about, ‘I’.”
CC: 『世人顛倒於二邊,若有、若無世人取諸境界心便計著迦旃延不受、不取、不住、不計於
CC: Worldly people (世人) who are topsy-turvy (顛倒) rely on two extremes (二邊): existence (若有) and non-existence (若無). Worldly people (世人) generally (諸) adhere to (取) perceptual objects (境界) [because of] a biased, obstinate tendency of the mind (心便計著). Kātyāyana: if not appropriating (受), not obtaining (取), not abiding (住), not attached to or relying on ‘I’...

The first difference is in interpreting Skt/P. loka. KC translates 世間 "in the world" while CC has 世人 "worldly people". CC adds that the worldly people are 顛倒 i.e. "top-down", "upside-down", or "topsy-turvy". Choong translates "confused", which is perfectly good, but there's a connotation in Buddhist jargon of viparyāsa (c.f. DDB sv 顛倒) which refers to mistaking the impermanent for the permanent and so on.

KC and CC both translate niśrito/nissito as 依. But they again differ in how they convey dvayam: KC 二種 "two varieties" and CC 二邊 "two sides". The character 邊 often translates Skt. anta which is significant because the word crops up later in the text in the Sanskrit and Pali, e.g. in KS:
ity etāv ubhāv antāv anupagamya madhyamayā pratipadā tathāgato dharmaṃ deśayati |
Thus, the Tathāgata teaches the Dharma by a middle path avoiding both these extremes.
KC and CC both use 二邊 to translate ubhāv antāv "both extremes" (in the dual case; without sandhi = ubhau antau). It makes more sense to refer to "two extremes" early on if that's what's talked about later, especially when by "later" we mean just three sentences later. Thus CC provides better continuity than KC.

The next part of this section is where the two texts differ most markedly.
KC: Because of grasping what is touched (取所觸), they either rely on being or rely on non-being (或依有、或依無). If [he is] not a seizer of that (若無此取者), he doesn't have the obstinate mental state of attachment (心境繫著).

CC: worldly people generally (諸) adhere to (取) and attach to (計著) objects of the mind (境界心). Kātyāyana: if not appropriating (受), not obtaining (取), not abiding (住), not attached to or relying on "I"...

(Choong "Worldlings become attached to all spheres, setting store by and grasping with the mind.")
In KC we have some confusion around the phrase 取所觸. In Choong's translation of KC (40) he wants it to mean "this grasping and adhering", but that's not what it appears to say, and in any case no dictionary I have access to translates 觸 chù as 'adhere' or anything like it. At face value, and taking into account Buddhist Chinese, it says "grasping what is touched": 取 = Skt. upādāna; 所 = relative pronoun; 觸 = Skt. sparśa < √spṛś 'touch'. In other words Guṇabhadra seems to have made a mistake here. I think Choong is tacitly amending the text to correct it, probably based on reading the Pali.

Elsewhere KS seems to be defective: KP has upay(a)-upādāna-abhinivesa-vinibandha ‘bound by the tendency to attachment and grasping’ whereas KS has upadhy-upādāna-vinibaddho, missing out abhinivesa, which doesn't really make sense. Upadhi is out of place here and probably a mistake for upāya. It may be that the source text for KC was also defective. 

Note that CC has abbreviated the text. The green section of KC repeats some of the first red section, but CC eliminates the repetition and makes the paragraph easier to read overall. 

The Chinese texts both run on to include the next section, although it's clear from KP and KS that the next part is a separate sentence. 


"In short, when reading any given line of a Chinese Buddhist sūtra—excepting perhaps those produced by someone like Hsüan-tsang, who is justifiably famous for his accuracy—we have a roughly equal chance of encountering an accurate reflection of the underlying Indian original or a catastrophic misunderstanding."
Jan Nattier. A Few Good Men. p.71

As a warning this might be slightly overstated for effect, and it is qualified by Nattier, who says that multiple translations make the scholar's job easier. But it's often true that in order to really get what a Chinese text is on about, one must use the Indic (Pāḷi, Saṃskṛta, Gāndhārī) text as a commentary. This is partly because Buddhist Chinese is full of transliterations and jargon. Words are used in ways that are specific to a Buddhist context and must be read as technical terms. Buddhist Chinese very often uses something approximating Sanskrit syntax (Chinese is an SVO language while Sanskrit is SOV). The paragraph we have been considering is a good example of this phenomenon, as the Chinese apes the syntax of the Sanskrit. 

It's hard to avoid the conclusion that KC and CC were translated by different people and that the translator of CC did a slightly better job than the translator of KC. So perhaps the named translator, Guṇabhadra, was a sort of editor-in-chief working with a team? This was a common way of creating Chinese translations. Or perhaps he translated the same passage twice and did it differently each time? Though this seems less likely. 

By comparison with the Pāli Tipiṭaka we would expect KC and CC to be identical, as the quotation of KP in the Channa Sutta is verbatim. The fact that they are not raises questions about the source text of the Saṃyuktāgama that was translated into Chinese. Having different translations into Chinese is valuable because it is precisely where KC is difficult that CC is different and arguably clearer. But perhaps the different translations reflect differences in the source text itself? KS is different from KP in other ways, and different from citations in later literature. This points to a number of versions of the text being in circulation, of which we have a sample in the various canons.

For the most part the Chinese Tripiṭaka contains little that conflicts with the Pāḷi Tipiṭaka. But sometimes, as in this case, the differences are instructive, especially where versions in Sanskrit and/or Gāndhārī survive. We're now starting to see Pali and Chinese versions of texts treated side by side in articles about early Buddhism. No doubt the publication of canonical translations into English, which has begun, will facilitate this. Certainly Early Buddhism is no longer synonymous with Theravāda and Pāḷi.

My close reading of all four Kātyāyana texts is slowly becoming a journal article. A subsequent project will be to explore the many citations of the text in Mahāyāna Sūtras. Exact citations or mentions of the same idea can be found in at least the Aṣṭasāhasrikā Prajñāpāramitā Sūtra and the Laṅkāvatāra Sūtra, and also in Nāgārjuna's Mūlamadhyamakakārikā and especially in Candrakīrti's commentary on MMK, the Prasannapadā. Thus the text and the ideas in it were foundational to the Mahāyāna and provide an important thread of continuity between the first two great phases of Buddhist thought.


03 October 2014

Evolution, Depression and Suicide.

Is it possible that "mental illness" is an evolutionary adaptation to prevent us committing suicide when we feel like ending it all? It seems unlikely, but this is the subject of an article tweeted by @sarahdoingthing September 28, 2014.
C A Soper. "Anti-suicide mechanisms as a general evolutionary explanation for common mental disorders". University of Gloucestershire. [The article appears to be formatted for a publication, but no publication details are included. It's hosted at academia.edu where you can also find my academic work]. 
Since I have a long-standing interest in all three subjects (evolution, depression and suicide) I started to try to discuss the article on Twitter, but could not compress my thoughts into 140 characters. So this essay critically examines Soper's approach to evolution. 

Soper argues (1) that "Common mental disorders are too common not to exist for a reason"; (2) that "On this evidence it is reasonable to deduce, as evolutionary psychologists do, that common psychiatric disorders must have their origins in natural selection"; and (3) that "When two traits routinely occur together in this way, it is reasonable to infer that a single process is at work behind both, and a mechanism offered to explain one must be flawed if it cannot also explain the other." I want to look at these three underlying claims specifically and consider whether this is a credible approach to evolutionary psychology, and thus whether the conclusions drawn are valid.

Evolutionary Psychology

The field of evolutionary psychology is the approach of trying to find evolutionary explanations for (i.e. applying the idea of natural selection to) the ways our minds work. If our minds can be shown to work in a particular way, then we assume that this has some evolutionary advantage that has been selected for in the sense that people with this trait are more likely to survive and produce viable offspring. Evolutionary psychologists try to identify what that advantage is or speculate on what it might be. The second prong to this approach is to try to locate genes specifically associated with this function. This has produced the, usually spurious, media syndrome of reporting that "the gene for X has been found". Soper is not concerned with the second approach, only with trying to explain a puzzling phenomenon of the co-morbidity of some well-known problems.

On the whole I find evolutionary psychology a compelling hermeneutic. The idea that traits emerge and are preserved not just in our anatomy, but also in our neuro-anatomy and therefore in our brain function and behaviour, and that this anatomy is determined by our genome, all sounds quite reasonable. Amongst the more credible proponents of this approach are Robin Dunbar, Justin Barrett, and Robert McCauley. Dunbar has cogently argued that behaviours like laughter, singing and dancing helped to lower the time burden of maintaining social relationships (by enabling one-to-many relationships, which replaced one-to-one grooming in ways not available to our ancestors or to present-day chimps) and made living in larger groups practical once our increased neocortex size made it possible. Larger groups have benefits in terms of protection from predators and thus help individuals in groups to survive. Justin Barrett has argued that our predilection for seeing agency in events makes us alert to being hunted and allows us to avoid becoming food for a predator, but as a side effect also primes us for believing in supernatural agency. And so on. 

Soper's main sources for the evolutionary approach seem to be textbooks specifically related to evolutionary psychiatry. I'm not familiar with these authors or their work so I can't comment on them, though I am surprised not to find reference to more fundamental research on evolutionary psychology in his article.

Mental Disorders and Evolution

Soper's specific claims begin with this: "Common mental disorders are too common not to exist for a reason." By "reason", here, he means a positive reason. Soper wants to argue that every common trait must, by definition, give us an evolutionary advantage or it wouldn't have survived (which is the most simplistic reading of evolution). But in this he is wrong. For example, hair colour and eye colour are common traits, and there's no plausible evolutionary advantage to having different coloured hair and eyes. By way of contrast, we know that skin tone is directly related to long-term habitation at certain latitudes. If a group lives on the equator for a few thousand years their skin will be darkened by melanin. And if a group lives at 50 degrees north for thousands of years their melanin decreases and they become pale. It's a positive adaptation to the amount of sunlight and vitamin D synthesis, and it occurs over relatively short time scales as evolution goes (and thus is probably epigenetic - a change in gene expression rather than a mutation of a gene). Importantly it makes a mockery of the concept of colour-based race. One must always be alert to other sources of change or variety. One thinks also of the impact of our microbiome (the sum total of microscopic life that lives in and on our bodies and plays significant and often vital roles).

Think also of the common trait of susceptibility to being infected by viruses. On Soper's logic -- that susceptibility to depression conveys an evolutionary advantage -- our susceptibility to viral infections, such as influenza or ebola, is so common that it must also confer some evolutionary advantage. Viruses exploit vulnerabilities in surface features of our cells that evolved for other reasons. For example, sperm use a similar mechanism to deliver their DNA to an ovum. These diseases are virulent and indiscriminate. Before modern medicine an influenza outbreak could kill millions of people. The 1918 influenza pandemic, which killed 50-100 million people worldwide, is a good example. And influenza is constantly mutating, so that there is no immunity conferred by having had the disease once. The best evolutionary argument might be that such diseases weed out the weaker members of the species: i.e. that influenza is natural selection in action. This has the indirect advantage of allowing stronger members to live with less competition. In the simple version of evolution, then, we have positively evolved to allow weak members of our species to be eliminated by disease, though I don't find this a compelling argument, and it is the opposite of what Soper is arguing for depression. Unlike viral disease, which reduces competition by killing weaker members of the species, Soper is arguing that mental illness, specifically depression, prevents those who are susceptible to suicidal ideation from actually committing suicide. Thus it acts to preserve people who carry a trait that in the cold light of day makes them less fit in an evolutionary sense (and I say this as a life-long sufferer of depression). On the face of it Soper is describing anti-evolution.

As already suggested, Soper makes an extraordinary assumption in his view of evolution. He assumes that evolution is the only force at work on our mental states. Another aspect of our history he takes no account of is the massive changes that began to occur ca. 12,000 years ago as our ancestors began to form stable settlements: i.e. civilisation. As Robin Dunbar points out (Human Evolution), for primates, being in large groups of strangers is stressful. Of course we find ways of coping with that stress - clear evidence of alcohol use begins around the same time as large-scale settlements in Anatolia. 12,000 years is enough time for our skin to change the levels of melanin produced, but it is not enough time for major changes to the genome, especially under the kind of NeoDarwinian paradigm that Soper unquestioningly adopts. Dunbar, again, notes that we are evolved to live in groups of ca. 150, with progressively weaker links to units of ca. 500 and ca. 1500, just as present-day hunter-gatherers still live. Limits are imposed on group size by the amount of neocortex in the brain, which has not changed significantly in the 200,000 years since anatomically modern humans first emerged in East Africa. We cannot keep track of more than 150 relationships (on average), and considerably larger groups lose coherence. Indeed in present-day hunter-gatherer societies most people spend the night in groups of about 50 that have close links (often by marriage) to two other groups of 50. City dwellers are forced to adapt to their situation by adjusting how much time they spend on the different layers of their social structure (generally more time spent with fewer people), but the average number of Facebook "friends" is still ca. 150.

Thus simply living in settlements creates enormous stresses on humans that no other primate has ever faced. Since civilisation brings many changes in terms of how we spend our time (esp. work) and what we eat (esp. the gross over-availability of calorie-rich foods), it is clearly one of the most important factors in considering the health, mental or otherwise, of modern humans. In many ways one could argue that we are not well adapted to modern life - slumped over a keyboard developing bad posture, carpal tunnel syndrome and occupational overuse syndrome, while gorging on foods laden with fat, salt and sugar so that we overflow the poorly designed chairs we sit on for most of our sedentary day is hardly an advert for evolution. If anything many of us are not evolutionarily fit for this environment, and increasing numbers are suffering civilisation-related or "lifestyle" illnesses like coronary heart disease, type II diabetes, etc. 

One of the strengths of Professor Robin Dunbar's work is his ability to compare his results with other primates and to extract evidence from fossilised remains. It allows him to take a genuinely evolutionary view of the traits he is examining by showing how things have changed over time. When we only examine modern humans, have no reliable data for change over a time scale beyond ca. 50 years, and have little reliable data from outside Europe and America the method is very much weaker. Soper presents no data from other primates on mental illness and suicide for example. I suspect that this is because there is none. Animals don't, on the whole, deliberately kill themselves though they do show analogues of some kinds of mental illness and are susceptible to addiction (at least in laboratories). 

The first challenge for any evolutionary study of suicide is to determine when humans began to kill themselves. And of course it's impossible to tell, because the kind of evidence we need is unavailable. So the theory that we evolved this behaviour is already on very shaky ground. There is no history, no fossil record, none of the evidence over time that is crucial to all evolutionary arguments. The second challenge is to explain why humans do, and other primates do not, kill themselves. No explanation is presented for this either, except that Soper simply states that it must have evolved in humans. Indeed he treats present rates of suicide as evidence that suicidality is part of "human nature". Now there is a slippery concept if ever there was one: human nature. It's entirely out of place in a scientific article. And the idea that present data represent historical data is simply mistaken. All we know for sure is that there are some ancient literary records of suicide (see my article Suicide as a Response to Suffering for a survey of suicide in the Pāli texts; where, coincidentally, alcohol is described as leading to madness). We can associate suicide with settled human culture for a few thousand years, but there is no evidence whatever for the evolution of suicide; it is simply assumed that everything evolved, because in this paradigm every trait is the positive result of evolution. 

"Other addictive, obsessive and compulsive behaviours may function as dis-tractions, effectively keep-ing a person in danger of suicide mentally and physically preoccupied.

Depression may be understood equally as a means to incapacitate a potentially suicidal indivi-dual:" (Soper p.2)
Soper cites a number of opinions on suicide and its aetiology, but noticeably absent is the monumental, if a little dated, study of suicide by Durkheim. One of Durkheim's main points is that suicide seems to be strongly associated with social isolation. This jibes well with other evolutionary psychology authors. As social animals we thrive if and only if we are part of a thriving community. Modern humans evolved for participation in a community of ca. 150 people. In fact we moderns frequently live in massive conglomerations of hundreds of thousands, if not millions, of people, almost all of whom are strangers. Modern life allows a sizeable minority to become isolated and alienated from society. Many moderns live in social isolation to some degree. We're surrounded by strangers and have none of the intimate exchanges that bond primate groups, not even the sublimated activities of laughing, singing, dancing or praying together (cf Dunbar). What the effects of this have been over the long term, we are only just beginning to understand. Clearly some thrive in this new configuration, but some do not. And those who do not, I would argue, are those who develop so-called mental illness. Of course there are other, often organic, causes of mental illness as well, and this is not yet a causal argument, but a correlation that begs to be investigated. Importantly, there is no unitary phenomenon here that can be ascribed to a single simple cause. Not even the depression that Soper focusses on has a singular aetiology. But Durkheim's original observations on suicide seem to stand up. 

Soper argues that (3) "When two traits routinely occur together in this way, it is reasonable to infer that a single process is at work behind both..." By two traits here, Soper is specifically referring to addiction and depression. His solution is to argue that depression, with its associated lethargy, contributes to suppression of the suicidal ideation that occurs in the addict. This assumption appears to stem from his conclusion, not the other way around. He also flirts with the fallacy that correlation indicates causation. Certainly a strong correlation is interesting and deserves further study, but I doubt it is reasonable to infer from the outset that a common mechanism is at work when there's no common mechanism for depression in its various forms, nor one for suicide.

Importantly, Soper presents a caricature of depression as involving lethargy. But he does not account for the phenomenon of depression associated with irritability and anger, which is common, but under-reported and poorly understood, in men. Cf. Irritability, Anger Indicators of Complex, Severe Depression; or Depression & Men. Indeed the popular media representation of depression often focusses on women who have a big collapse, can't get out of bed for 6 months, and then recover. That's not typical of depressed men, nor of people who suffer repeated bouts of major depression, nor of those who suffer long-term depression. The different aetiology of depression in men may be why men are twice as likely (15 per 100k) as women (8 per 100k) to commit suicide (WHO). Some will say that men need to talk about their problems more, but this is a simplistic and unhelpful generalisation. I've commented on this elsewhere so won't say more here. But if a supposedly singular problem is characterised by at least two unrelated traits (lethargy or anger), manifests differently in the sexes, and can be acute or chronic, then we've most likely been too superficial in our explanation and need to look more deeply. There's no one problem called "depression".

One of the most important and productive ways of looking at depression is to see the popular "chemical imbalance" explanation as having a behavioural cause. Over-stimulation of various brain mechanisms leads to problems. Constant anxiety—with activation of the fight-or-flight response—can lead to lethargy and unresponsiveness, both characteristics of depression (I first experimented with this by examining the fight-or-flight response of earthworms more than 30 years ago for a high-school science class). Over-stimulation of pleasure mechanisms (through drugs, porn, eating, etc.) leads to an inability to experience pleasure—both through endorphin-mediated pleasure/well-being, and through dopamine-mediated anticipation and reward—also characteristic of depression. I can offer no explanation of the anger or rage felt by depressed men as yet.

One observable result is consistently lower serotonin levels in depressed people. But even after many decades there is no evidence for a causal relationship between serotonin (a hormone that has multiple roles in the body) and depression. Indeed the fact that antidepressants raise serotonin levels almost immediately, but (when they do work) take two to four weeks to lift mood, suggests something far more complex is going on.

Soper is also interested in the comorbidity of depression and addiction. Robin Dunbar makes an interesting aside in Human Evolution: alcoholics do not become addicted to alcohol per se; they become addicted to the endorphins that alcohol stimulates. Endorphins are one of the primary hormones produced in primates by mutual grooming, and they produce the sense of well-being and contentment that comes from being a well-established group member. Laughing, singing, and dancing in groups have the same hormonal effect: we're 30 times more likely to laugh at a comedy in a group than we are alone. This is consistent with the neuroanatomy of pleasure that I outlined in The Science of Pleasure, based largely on David Linden's book The Compass of Pleasure (well reviewed here). See also my 2013 essay Pleasure, Desire and Buddhism.

Addicts, according to David J. Linden's recent account of addiction, over-stimulate the part of the brain that is also responsible for the feelings of well-being associated with positive social interactions. Addicts who over-stimulate this function progressively become unable to experience that feeling of well-being, or come to associate it only with their drug of choice (the exception being nicotine addicts, who use the frequent but weak stimulation of smoking as a way of bonding). There are in fact at least two mechanisms working in tandem: addicts gradually become less able to experience well-being and/or pleasure in the absence of their drug; and they make poor decisions and become unreliable as a result of the drug's effects, and thus become socially isolated. All too often drug abuse is initiated by some lingering unhappiness or dissatisfaction that might have led to, or already caused, depression anyway. The obvious example is that abuse and neglect in their various forms, especially at crucial developmental stages, can leave people vulnerable to depression.

By Robin Dunbar's argument, social alcohol use persists, despite the risk of addiction in some people, because it plays an important role in allowing us to operate in larger groups than we would otherwise have time to bond with (with all the benefits that large groups provide). Disinhibition makes for fun, laughter, singing, and other promoters of a sense of well-being and communality. This is not natural selection in the usual sense, since we are not genetically programmed to make and consume ethanol, but it is selection in the sense that societies which used alcohol to enhance social bonding seem to have prospered.

These mechanisms that mediate the experience of pleasure, well-being, and anticipation and reward have clearly evolved, and we now know them in quite a lot of detail: which areas of the brain are involved, when those areas evolved, which neurotransmitters are involved, and the more generalised impact of disrupting these mechanisms. Any evolutionary approach to mood disorders or addiction needs to get to grips with these mechanisms and show how they are involved, preferably by citing clinical evidence, just as Dunbar and Linden do and Soper does not.

Soper speculates that addiction might "distract" a person from acting on suicidal impulses. Some addicts do use substances in an attempt to control how they feel: to compensate for the lack of pleasure or reward, or to suppress feelings of shame or anger. But is this really an evolutionary argument? Does our potential to abuse substances really confer an advantage? In the end the substance of choice in addiction is often the means of suicide (albeit slowly), just as many depressives overdose on their antidepressant medication. Is the alcoholic who does not commit suicide, but whose behaviour causes the breakdown of supportive familial and working relationships, and who suffers liver and brain damage, really ahead on points? The deleterious effects of drugs during pregnancy are so severe (e.g. Fetal Alcohol Syndrome) that they must surely outweigh any perceived advantage of merely being alive to pass on one's genes. Soper's argument here is facile at best.


Soper's key claim is that the lethargy commonly associated with depression acts as a defence against suicidal ideation. Maybe. But we also know that people with depression are far more likely to commit suicide than people without it, so if lethargy is a defence mechanism, it's not a very good one. One suicide-prevention website reckons that [in the USA] "15% of those who are clinically depressed die by suicide" and "The strongest risk factor for suicide is depression" (SAVE). Suicidal ideation and impulses are among the most common features of the experience of depression.

Suicide is a terrible problem. It is the fifteenth most common cause of death worldwide (WHO). Depression is a leading cause of suicide (and one that I think is underestimated, because of the failure to fully recognise how depression affects men). When someone kills themselves, their family and friends are often left shocked, sad, and angry. Suicide often seems like a betrayal: on top of everything, people are angry because the person who died has broken off the relationship, has not reached out, has not apparently reciprocated the love they feel. Death is never easy, but to most people suicide seems so preventable because it involves a conscious choice. As @sarahdoingthing says, it's hard to understand because it is sui generis (self-generated). How do the living empathise with the wish to be dead? Mostly they do not. The difficulty is seeing that the choice is made by a disordered mind, in a person who has frequently lost the ability to experience a sense of connection and who lacks the perspective to see that the situation is temporary. Depression feels like solitary confinement.

Because it is so difficult to imagine what depression or addiction is like, most people who have not experienced them find they cannot easily empathise with sufferers. Very often the problems are ascribed to personal weakness, such as a "weak will" or a moral failing (an example of the fundamental attribution error). This can increase the social isolation of the person afflicted with depression, and it is part of the stigma of mental illness.

The best thing you can offer someone who suffers is to listen to them without judgement and deal with your own discomfort discreetly. Whatever you do, don't offer unsolicited advice. If you're concerned about someone's safety, encourage them to seek professional help. If you feel certain someone will harm themselves, take whatever action you feel is appropriate, but don't expect to be thanked (at least not right away).

In any case, we need to be careful when constructing arguments based on evolution. It is no doubt a powerful and presently fashionable explanatory framework, and there is no doubt in my mind that we evolved into our present form. But modern humans are unusual in the animal world in having the ability to override evolution using culture and civilisation. While our genes are the blueprint for our neuroanatomy, experience is a powerful shaper (both literally and figuratively) of the brain.

Without clear evidence of change over time, evolution is a weak explanation. It may well be the explanation, but we cannot show why. Sometimes a trait has an obvious evolutionary advantage: language, mentalising, and laughter all provide demonstrable advantages and fit well with other areas of the theory. The potential to suppress suicidal impulses might confer an advantage, or it might have a deleterious effect on the population. Who is to say that suicide is not itself an instrument of natural selection? How do we weigh up the costs and benefits in such complex problems? I find no answer in Soper's article.

Even though we can identify commonalities, depression and addiction likewise have multiple causes, and when we combine traits with many causes we multiply the complexity. Seeking unitary causes for complex problems is understandable, but often leads to fallacious thinking. Seeking a single, generalised evolutionary explanation in terms of conferred advantage looks ideological; and in this case the premise looks flawed at best, so the interpretation of the data is unlikely to be trustworthy. For all these reasons I find Soper's theory unconvincing. Scarily, Soper is already making suggestions about implications for therapy as though his theory were sound.



Barrett, J.L. (2004) Why Would Anyone Believe in God? Walnut Creek, CA: AltaMira Press.

Dunbar, R.I.M. (1992). 'Neocortex size as a constraint on group size in primates.' Journal of Human Evolution 22 (6): 469–493. doi:10.1016/0047-2484(92)90081-J 

Dunbar, Robin. (2014) Human Evolution: a Pelican Introduction. Pelican.

Durkheim, Emile. (1897) Suicide: A Study in Sociology. The Free Press, 1951.

Jayarava. (2004) 'Suicide as a Response to Suffering.' Western Buddhist Review.

Linden, David J. (2011) The Compass of Pleasure: How Our Brains Make Fatty Foods, Orgasm, Exercise, Marijuana, Generosity, Vodka, Learning, and Gambling Feel So Good. Viking.

Soper, C.A. 'Anti-suicide mechanisms as a general evolutionary explanation for common mental disorders.' University of Gloucester.
