Tag Archives: human behavior

On the Uses of an Interdisciplinary Ph.D.

Today, I participated in a panel -- along with super-smart colleagues Alex Konings and Kabir Peay -- for the first-year Ph.D. students in the E-IPER program, an interdisciplinary, interdepartmental graduate program (IDP) at Stanford. As is the custom at any E-IPER event, we spent a lot of time fretting about interdisciplinarity: what it means, how you achieve it, what costs it entails for jobs, etc.

I expressed the slightly heretical opinion that we should not pursue interdisciplinarity for interdisciplinarity's sake. What matters -- both in terms of the science and more instrumental outcomes such as getting published, getting a job, getting tenure -- are questions. Yes, questions. One should ask important questions that people care about. Why are there so many species in the tropics? Where do pandemic diseases come from and how can we best control them? Do democracy and the rule of law provide the best approach to governance? How do people adapt to a changing climate?

Where the interdisciplinary Ph.D. program comes in is that it provides students with the opportunity to pursue whatever tools and approaches are required to answer the question in the best way possible. You don't need to use a particular approach because that's what people in your field do. Sometimes the best thing to do will be totally interdisciplinary; sometimes it will look a bit more like what someone in a disciplinary program would do. Always lead with the question.

Answering important questions using the best tools available is probably the best route to managing the greatest risk of an interdisciplinary degree. This risk, of course, is the difficulty in getting a job when you don't look like what any given department had in mind when they wrote a job ad. The best way to manage this risk is simply to be excellent. If your work is strong enough, the specific discipline of your Ph.D. doesn't really matter. Now, there are certainly some disciplines that are more xenophobic than others (anthropology and economics come immediately to mind), but if your work is really outstanding, the excuse that you don't have the right degree for a given job gets much more tenuous. Two people who come immediately to mind are my colleague David Lobell and my sometime collaborator and former Stanford post-doc Marcel Salathé.

Is David a geographer? Geologist? Economist? Doesn't really matter because he's generally recognized as being a smart guy doing important work. Similarly with Marcel: population geneticist? Epidemiologist? Computer scientist? Who cares? He has important things to say and gets recognized for it.

Now, alas, we can't all be David and Marcel, but we can strive to ask important scientific questions and let these questions lead us to both the skills and the bodies of knowledge we need. These then form the foundation of our research careers. Interdisciplinarity, then, is about following the question. It is not an end in itself.

Ecology and Evolution of Infectious Disease, 2013

I am recently back from the Ecology and Evolution of Infectious Disease (EEID) Principal Investigators' Meeting hosted by the Odum School of Ecology at the University of Georgia in lovely Athens. This is a remarkable event, and a remarkable field, and I can't remember ever being so energized after returning from a professional conference (which often leave me dismayed or even depressed about my field). EEID is an innovative, highly interdisciplinary funding program jointly managed by the National Science Foundation and the National Institutes of Health. I have been lucky enough to be involved with this program for the last six years. I've served on the scientific review panel a couple of times and am now a Co-PI on two projects.

We had a big turn-out for our Uganda team in Athens and team members presented no fewer than four posters. The Stanford social networks/human dimensions team (including Laura Bloomfield, Shannon Randolph and Lucie Clech) presented a poster ("Multiplex Social Relations and Retroviral Transmission Risk in Rural Western Uganda") on our preliminary analysis of the social network data. Simon Frost's student at Cambridge, James Lester, presented a poster ("Networks, Disease, and the Kibale Forest") analyzing our syndromic surveillance data. Sarah Paige from Wisconsin presented a poster on the socio-economic predictors of high-risk animal contact ("Beyond Bushmeat: Animal contact, injury, and zoonotic disease risk in western Uganda") and Maria Ruiz-López, who works with Nelson Ting at Oregon, presented a poster on their work on developing the resources to do some serious population genetics on the Kibale red colobus monkeys ("Use of RNA-seq and nextRAD for the development of red colobus monkey genomic resource").

Parviez Hosseini, from the EcoHealth Alliance, also presented a poster for our joint work on comparative spillover dynamics of avian influenza ("Comparative Spillover Dynamics of Avian Influenza in Endemic Countries"). I'm excited to get more work done on this project which is possible now that new post-doc Ashley Hazel has arrived from Michigan. Ashley will oversee the collection of relational data in Bangladesh and help us get this project into high gear.

The EEID conference has a unique take on poster presentations which makes it much more enjoyable than the typical professional meeting. In general, I hate poster sessions. Now, don't get me wrong: I see lots of scientific value in them and they can be a great way for people to have extended conversations about their work. They can be an especially great forum for students to showcase their work and start the long process of building professional networks. However, there is an awkwardness to poster sessions that can be painful for the hapless conference attendee who might want, say, to walk through the room in which a poster session is being held. These rooms tend to be heavy with the smell of desperation and one has to negotiate a gauntlet of suit-clad, doe-eyed graduate students desperate to talk to anyone who will listen about their work. "Please talk to me; I'm so lonely" is what I imagine them all saying as I briskly walk through, trying to look busy and purposeful (while keeping half an eye out for something really interesting!).

The scene at EEID is much different. All posters go up at the same time and the site-fidelity of poster presenters is the lowest I have ever seen. It has to be since, if everyone stuck by their poster, there wouldn't be anyone to see any of them! What this did was allow far more mixing than I normally see at such sessions and avoid much of the inherent social awkwardness of a poster session. Posters also stayed up long past the official poster session. I continued to read posters for at least a day after the official session ended. Of course, it helps that there was all manner of great work being presented.

There were lots of great podium talks too. I was particularly impressed with the talks by Charlie King of Case Western on polyparasitism in Kenya, Maria Diuk-Wasser of Yale on the emergence of babesiosis in the Northeast, Jean Tsao (Michigan State) and Graham Hickling's (Tennessee) joint talk on Lyme disease in the Southeast, and Bethany Krebs's talk on the role of robin social behavior in West Nile Virus outbreaks. Laura Pomeroy, from Ohio State, represented one of the few other teams with a substantial anthropological component extremely well, talking about the transmission dynamics of foot-and-mouth disease in Cameroon. Probably my favorite talk of the weekend was the last talk by Penn State's Matt Thomas. His group has done awesome work elucidating the role of temperature variability in the transmission dynamics of malaria.

It turns out that this was the last EEID PI conference. Next year, the EEID PI conference will be combined with the other EEID conference which was originally organized at Penn State (and is there again this May). This combining of forces is, I'm sure, a good thing, as it will reduce confusion and make it more likely that all the people I want to see will show up. I just hope that this new, larger conference retains the charms of the EEID PI conference.

EEID is a new, interdisciplinary field that has grown thanks to some disproportionately large contributions of a few, highly energetic people. One of the principals in this realm is definitely Sam Scheiner, the EEID program officer at NSF.  The EEID PI meeting has basically been Sam's baby for the past 10 years. Sam has done an amazing job creating a community of interdisciplinary scholars and I'm sure I speak for every researcher who has been heavily involved with EEID when I express my gratitude for all his efforts.

Thoughts on Black Swans and Antifragility

I have recently read the latest book by Nassim Nicholas Taleb, Antifragile. I read his famous The Black Swan a while back while in the field and wrote lots of notes. I never got around to posting those notes since they were quite telegraphic (and often not even electronic!), as they were written in the middle of the night while fighting insomnia under mosquito netting. The publication of his latest, along with the time afforded by my holiday displacement, gives me an excuse to formalize some of these notes here. Like Andy Gelman, I have so many things to say about this work on so many different topics that this will be a bit of a brain dump.

Taleb's work is quite important for my thinking on risk management and human evolution, so it is with great interest that I read both books. Nonetheless, I find his works maddening to say the least. Before presenting my critique, however, I will pay the author as big a compliment as I suppose can be made. He makes me think. He makes me think a lot, and I think that there are some extremely important ideas in his writings. From my rather unsystematic readings of other commentators, this seems to be a pretty common conclusion about his work. For example, Brown (2007) writes in The American Statistician, "I predict that you will disagree with much of what you read, but you'll be smarter for having read it. And there is more to agree with than disagree. Whether you love it or hate it, it's likely to change public attitudes, so you can't ignore it." The problem is that I am so distracted by all the maddening bits that I regularly nearly miss the ideas, and it is the ideas that are important. There is so much ego and so little discipline on display in his books, The Black Swan and Antifragile.

Some of these sentiments have been captured in Michiko Kakutani's excellent review of Antifragile. There are some even more hilarious sentiments communicated in Tom Bartlett's non-profile in the Chronicle of Higher Education.

I suspect that if Taleb and I ever sat down over a bottle of wine, we would not only have much to discuss but we would find that we are annoyed -- frequently to the point of apoplexy -- by the same people. Nonetheless, I find one of the most frustrating things about reading his work to be the absurd stereotypes he deploys and the broad generalizations he uses to dismiss the work of just about any academic researcher. His disdain for academic research interferes with his ability to make a cogent critique. Perhaps I have spent too much time at Stanford, where the nerd is glorified, but, among other things, I find his pejorative use of the term "nerd" for people like Dr. John -- as contrasted with the man-of-his-wits Stereotype, I mean, Fat Tony -- off-putting and rather behind the times. Gone are the days when being labeled a nerd was a devastating put-down.

My reading of Taleb's critiques of prediction and risk management is that the primary problem is hubris. Is there anything fundamentally wrong with risk assessment? I am not convinced there is, and there are quite likely substantial benefits to systematic inquiry. The problem is that risk assessment models become reified into a kind of reality. I warn students -- and try to regularly remind myself -- never to fall in love with one's own model. Something that many economists and risk modelers do is start to believe that their models are something more real than heuristic. George Box's adage has become a bit of a cliché but nonetheless always bears repeating: all models are wrong, but some are useful. We need to bear in mind the wrongness of models without dismissing their usefulness.

One problem with both projection and risk analysis that Taleb does not discuss is that risk modelers, demographers, climate scientists, economists, etc. are constrained politically in their assessments. The unfortunate reality is that no one wants to hear how bad things can get, and modelers get substantial push-back from various stakeholders when they try to account for real worst-case scenarios.

There are ways of building in more extreme events than have been observed historically (Westfall and Hilbe (2007), e.g., note the use of extreme-value modeling). I have written before about the ideas of Martin Weitzman in modeling the disutility of catastrophic climate change. While he may be a professor at Harvard, my sense is that his ideas on modeling the risks of catastrophic climate change are not exactly mainstream. There is the very tangible evidence that no one is rushing out to mitigate the risks of climate change despite the fact that Weitzman's model makes it pretty clear that it would be prudent to do so. Weitzman uses a Bayesian approach which, as noted by Westfall and Hilbe, is a part of modern statistical reasoning that was missed by Taleb. While beyond the scope of this already hydra-esque post, briefly, Bayesian reasoning allows one to combine empirical observations with prior expectations based on theory, prior research, or scenario-building exercises. The outcome of a Bayesian analysis is a compromise between the observed data and prior expectations. By placing non-zero probability on extreme outcomes, a prior distribution allows one to incorporate some sense of a black swan into expected (dis)utility calculations.
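The effect of putting non-zero prior probability on extremes can be seen in a toy Monte Carlo comparison. This is a minimal sketch of the general idea, not Weitzman's actual model; the loss distributions, scale parameters, and the convex disutility function are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Thin-tailed view of losses: a normal distribution makes extreme
# outcomes astronomically improbable.
thin = rng.normal(loc=1.0, scale=0.5, size=n)

# Fat-tailed view: a Student-t with few degrees of freedom keeps
# non-trivial probability mass on extreme outcomes -- a crude stand-in
# for admitting the possibility of a black swan.
fat = 1.0 + 0.5 * rng.standard_t(df=3, size=n)

# A convex disutility of loss: squaring means rare extremes weigh
# heavily in the expectation.
disutility = lambda x: np.maximum(x, 0.0) ** 2

print(np.mean(disutility(thin)))
print(np.mean(disutility(fat)))           # larger: the tail drives it
print((np.abs(thin - 1.0) > 2.5).mean())  # extreme events, thin tails
print((np.abs(fat - 1.0) > 2.5).mean())   # orders of magnitude more likely
```

The two distributions have the same center and scale; only the tails differ, and that difference alone is enough to change the expected disutility -- which is exactly why a prior that admits extremes changes the prudent course of action.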

Nor does the existence of black swans mean that planning is useless. By their very definition, black swans are rare -- though highly consequential -- events. Does it not make sense to have a plan for dealing with the 99% of the time when we are not experiencing a black swan event? To be certain, this planning should not interfere with our ability to respond to major events, but I don't see any evidence that planning for more-or-less likely outcomes necessarily trades off with responding to unlikely outcomes.

Taleb is disdainful of explanations for why the bubonic plague didn't kill more people: "People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and 'scientific models' of epidemics." (Black Swan, p. 120) Does he not understand that epidemic models are a variety of that lionized category of nonlinear processes he waxes lyrical about? He should know better. Epidemic models are not one of those false bell-curve models he so despises. Anyone who thinks hard about an epidemic process -- in which an infectious individual must come in contact with a susceptible one in order for a transmission event to take place -- should be able to infer that an epidemic cannot infect everyone. Epidemic models work and make useful predictions. We should, naturally, exhibit a healthy skepticism about them, as we should about any model. But they are an important tool for understanding and even planning.

Indeed, our understanding gained from the study of (nonlinear) epidemic models has provided us with the most powerful tools we have for control and even eradication. As Hans Heesterbeek has noted, the idea that we could control malaria by targeting the mosquito vector of the disease is one that was considered ludicrous before Ross's development of the first epidemic model. The logic was essentially that there are so many mosquitoes that it would be absurdly impractical to eliminate them all. But the Ross model revealed that epidemics -- because of their nonlinearity -- have thresholds. We don't have to eliminate all the mosquitoes to break the malaria transmission cycle; we just need to eliminate enough to bring the system below the epidemic threshold. This was a powerful idea and it is central to contemporary public health. It is what allowed epidemiologists and public health officials to eliminate smallpox and it is what is allowing us to very nearly eliminate polio if political forces (black swans?) will permit.
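The threshold behavior is visible in even the simplest compartmental model. Below is a minimal deterministic SIR sketch (not Ross's actual malaria model, which tracks mosquitoes and humans separately); the recovery rate is fixed at 1 so that the transmission rate equals R0, and the initial conditions are arbitrary illustrative choices:

```python
def sir_final_size(r0, dt=0.01, steps=20_000):
    """Fraction of the population ever infected in a simple SIR epidemic.

    Recovery rate gamma is fixed at 1, so the transmission rate beta
    equals r0. Start with 0.1% of the population infectious.
    """
    gamma = 1.0
    beta = r0 * gamma
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(steps):                  # forward-Euler integration
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return r

print(sir_final_size(0.8))  # below threshold: the outbreak fizzles
print(sir_final_size(2.0))  # above threshold: a major epidemic
```

Crossing R0 = 1 flips the qualitative outcome from a handful of cases to an epidemic infecting most of the population -- which is why control measures (like killing enough mosquitoes to push R0 below 1) can succeed without eliminating every vector.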

Taleb's ludic fallacy (i.e., the mistaken belief that games of chance are an adequate model of randomness in the world) is great. Quite possibly the most interesting and illuminating section of The Black Swan happens on p. 130, where he illustrates the major risks faced by a casino. Empirical data make a much stronger argument than do snide stereotypes. This said, Lund (2007) makes the important point that we need to ask what exactly is being modeled in any risk assessment or projection. One of the most valuable outcomes of any formalized risk assessment (or formal model construction more generally) is that it forces the investigator to be very explicit about what is being modeled. The output of the model is often of secondary importance.

Much of the evidence deployed in his books is what Herb Gintis has called "stylized facts" and, of course, is subject to Taleb's own critique of "hidden evidence." Because the stylized facts are presented anecdotally, there is no way to judge what is being left out. A fair rejoinder to this critique might be that these are trade publications meant for a mass market and are therefore not going to be rich in data regardless. However, the tone of the books – ripping on economists and bankers but also statisticians, historians, neuroscientists, and any number of other professionals who have the audacity to make a prediction or provide a causal explanation – makes the need for more measured empirical claims more important. I suspect that many of these people actually believe things that are quite compatible with the conclusions of both The Black Swan and Antifragile.

On Stress

The notion of antifragility turns on systems getting stronger when exposed to stressors. But we know that not all stressors are created equal. This is where the work of Robert Sapolsky really comes into play. In his book Why Zebras Don't Get Ulcers, Sapolsky, citing the foundational work of Hans Selye, notes that some stressors certainly make the organism stronger. Certain types of stress ("good stress") improve the state of the organism, making it more resistant to subsequent stressors. Rising to a physical or intellectual challenge, meeting a deadline, competing in an athletic competition, working out: these are examples of good stresses. They train body, mind, and emotions and improve the state of the individual. It is not difficult to imagine that there could be similar types of good stressors at levels of organization higher than the individual too. The way the United States came together as a society to rise to the challenge of World War II and emerge as the world's preeminent industrial power comes to mind. An important commonality of these good stressors is the time scale over which they act. They are all acute stressors that allow recovery and therefore permit subsequently improved performance.

However, as Sapolsky argues so nicely, when stress becomes chronic, it is no longer good for the organism. The same glucocorticoids (i.e., "stress hormones") that liberate glucose and focus attention during an acute crisis induce fatigue, exhaustion, and chronic disease when they are secreted at high levels chronically.

Any coherent theory of antifragility will need to deal with the types of stress to which systems are resistant and, importantly, which have a strengthening effect. Invoking the idea of hormesis -- that a positive biological outcome can arise from taking low doses of toxins -- is scientifically hokey and borders on mysticism. It unfortunately detracts from the good ideas buried in Antifragile.

I think that Taleb is on to something with the notion of antifragility, but I worry that the policy implications end up being just so much orthodox laissez-faire conservatism. There is the idea that interventions -- presumably by the State -- can do nothing but make systems more fragile and generally worse. One area where the evidence very convincingly suggests that intervention works is public health. Life expectancy in the rich countries of the world has doubled from the beginning of the twentieth century to today. Many of the gains were made before the sort of dramatic things that come to mind when many people think about modern medicine. It turns out that sanitation and clean water went an awful long way toward decreasing mortality well before we had antibiotics or MRIs. Have these interventions made us more fragile? I don't think so. The jury is still out, but it seems that reducing the infectious disease burden early in life (as improved sanitation does) has synergistic effects on later-life mortality, an effect mediated by inflammation.

On The Academy

Taleb drips derision on university researchers throughout his work. There is a lot to criticize in the contemporary university; however, as with so many other external critics of the university, I think that Taleb misses essential features and his criticisms end up being off base. Echoing one of the standard talking points of right-wing critics, Taleb belittles university researchers as being writers rather than doers (echoing the H.L. Mencken witticism "Those who can, do; those who can't, teach"). Skin in the game purifies thought and action, a point with which I actually agree; however, thinking that university researchers live in a world lacking consequences is nonsense. Writing is skin in the game. Because we live in a quite free society -- and because of important institutional protections on intellectual freedom like tenure (another popular point of criticism from the right) -- it is easy to forget that expressing opinions -- especially when one speaks truth to power -- can be dangerous. Literally. Note that intellectuals are often the first ones to go to the gallows when there are revolutions from both the right and the left: the Nazis, the Bolsheviks, and Mao's Cultural Revolution, to name a few. I occasionally get, for lack of a better term, unbalanced letters from people who are offended by the study of evolution, and I know that some of my colleagues get this a lot more than I do. Intellectuals get regular hate mail, a phenomenon amplified by the ubiquity of electronic communication. Writers receive death threats for their ideas (think Salman Rushdie). Ideas are dangerous and communicating them publicly is not always easy, comfortable, or even safe, yet it is the professional obligation of the academic.

There are more prosaic risks that academics face that suggest to me that they do indeed have substantial skin in the game. There is a tendency for critics from outside the academy to see universities as ossified places where people who "can't do" go to live out their lives. However, the university is a dynamic place. Professors do not emerge fully formed from the ivory tower. They must be trained and promoted. This is the most obvious and ubiquitous way that what academics write has "real world" consequences – i.e., for themselves. If peers don't like your work, you won't get tenure. One particularly strident critic can sink a tenure case. Both the trader and the assistant professor have skin in their respective games – their continued livelihoods depend upon their trading decisions and their writing. That's pretty real. By the way, it is a huge sunk investment that is being risked when an assistant professor comes up for tenure. Not much fun to be forty and let go from your first "real" job since you graduated with your terminal degree... (I should note that there are problems with this – it can lead to particularly conservative scholarship by junior faculty, among other things, but this is a topic for its own post.)

Now, I certainly think that there are more and less consequential things to write about. I have gotten more interested in applied problems in health and the environment as I've moved through my career because I think that these are important topics about which I have potentially important things to say (and, yes, do). However, I also think it is of utmost importance to promote the free flow of ideas, whether or not they have obvious applications. Instrumentally, the ability to pursue ideas freely is what trains people to solve the sort of unknown and unforecastable problems that Taleb discusses in The Black Swan. One never knows what will be relevant, and playing with ideas (in the personally and professionally consequential world of the academy) is a type of stress that makes academics better at playing with ideas and solving problems.

One of the major policy suggestions of Antifragile is that tinkering with complex systems will be superior to top-down management. I am largely sympathetic to this idea and to the idea that high-frequency-of-failure tinkering is also the source of innovation. Taleb contrasts this idea of tinkering with "top-down" or "directed" research, which he argues regularly fails to produce innovations or solutions to important problems. This notion of "top-down," "directed" research is among the worst of his various straw men and a fundamental misunderstanding of the way that science works. A scientist writes a grant with specific scientific questions in mind, but the real benefit of a funded research program is the unexpected results one discovers while pursuing the directed goals. As a simple example, my colleague Tony Goldberg has discovered two novel simian hemorrhagic viruses in the red colobus monkeys of western Uganda as a result of our big grant to study the transmission dynamics and spillover potential of primate retroviruses. In the grant proposal, we discussed studying SIV, SFV, and STLV. We didn't discuss the simian hemorrhagic fever viruses because we didn't know they existed! That's what discovery means. The fact that they were not explicitly in the grant didn't stop Tony and his collaborators from the Wisconsin Regional Primate Center from discovering these viruses, but the systematic research meant that they were in a position to discover them.

The recommendation of adaptive, decentralized tinkering in complex systems is in keeping with work in resilience (another area about which Taleb is scornful because it is the poor step-child of antifragility). Because of the difficulty of making long-range predictions that arises from nonlinear, coupled systems, adaptive management is the best option for dealing with complex environmental problems. I have written about this before here.

So, there is a lot that is good in the works of Taleb. He makes you think, even if you spend a lot of time rolling your eyes at the trite stereotypes and stylized facts that make up much of the rhetoric of his books. Importantly, he draws attention to probabilistic thinking for a general audience. Too much popular communication of science trades in false certainties, and the mega-success of The Black Swan in particular has done a great service to increasing awareness among decision-makers and the reading public about the centrality of uncertainty. Antifragility is an interesting idea, though not as broadly applicable as Taleb seems to think it is. The inspiration for antifragility seems to lie largely in biological systems. Unfortunately, basing an argument on general principles drawn from physiology, ecology, and evolutionary biology pushes Taleb's knowledge base a bit beyond its limit. Too often, the analogies in this book fall flat or are simply on shaky ground empirically. Nonetheless, recommendations for adaptive management and bricolage are sensible for promoting resilient systems and innovation. Thinking about the world as an evolving complex system rather than the result of some engineering design is important, and if throwing his intellectual cachet behind this notion helps it to get as ingrained into the general consciousness as the idea of a black swan has become, then Taleb has done another major service.

Wealth and Cheating

I recently read a story in the Los Angeles Times about a team of psychologists at UC Berkeley who showed, in a series of experimental and naturalistic studies, that wealthy individuals are more likely to cheat or violate social norms about fairness. The story in the Times referred to the paper by Piff et al. in the 27 February edition of PNAS. Here is the abstract of this paper:

Seven studies using experimental and naturalistic methods reveal that upper-class individuals behave more unethically than lower-class individuals. In studies 1 and 2, upper-class individuals were more likely to break the law while driving, relative to lower-class individuals. In follow-up laboratory studies, upper-class individuals were more likely to exhibit unethical decision-making tendencies (study 3), take valued goods from others (study 4), lie in a negotiation (study 5), cheat to increase their chances of winning a prize (study 6), and endorse unethical behavior at work (study 7) than were lower-class individuals. Mediator and moderator data demonstrated that upper-class individuals’ unethical tendencies are accounted for, in part, by their more favorable attitudes toward greed.

This study was apparently motivated by observations that people in expensive luxury cars are more likely to bolt ahead of their turn at four-way stop intersections in the San Francisco Bay Area, a daily experience for anyone driving in Palo Alto! It's terrific that these authors actually took the trouble to systematize their casual observations of driving behavior and make an interesting and compelling scientific statement.

On Friday, I made my own observations about class, cheating, and the violation of norms as I flew down to LAX to attend Sunbelt XXXII (the annual conference of the International Network for Social Network Analysis). Of late, I've racked up a lot of miles on United and, as a result, occasionally get upgraded to first class or business class seating. My trip Friday was one of those occasions. As I sat in the (relatively) comfy leather seat of the first-class cabin reading Jeremy Boissevain's rather appropriate (1974) book Friends of Friends: Networks, Manipulators, and Coalitions, I noticed that nearly everyone around me was busily chatting away or otherwise fiddling around with their smart phones. When the cabin door finally closed and the announcement was made requesting that phones be switched off, none of the people in my neighborhood did so. They put their phones down or in their shirt pockets and watched the flight attendants. When the flight attendants passed through the cabin and were occupied with other business, out came the smart phones again. One gentleman across the aisle from me looked like a school kid writing a note in class. He kept a wary half-eye out for the flight attendants and looked extremely guilty about his actions, but he nonetheless kept doing his, no doubt, extremely important business. The man on the phone in the row ahead of me was a little more shameless. He seemed completely unconcerned that he might get busted. The woman in the row ahead of me and across the aisle moved her phone so that it was partially hidden by the arm-rest of her seat as she continued to scroll through her very, very important email. Of the six people I could easily see in my neighborhood, fully half continued to use their phones right into taxi and take-off. Based on their attempts at concealment, at least two of them knew that what they were doing was wrong.

Now, any regular traveler has seen people using their phones on the plane after they are supposed to have switched them off. However, I had never seen this density of norm violation on a single flight before.

Of course, this is just an anecdote, but the study by Piff et al. (2012) shows how anecdotes about social behavior can be systematized into interesting scientific studies.

Risk Management: The Fundamental Human Adaptation

It was a conceptually dense week in class.  The first part of the week I spent talking about topics such as ecological complexity, vulnerability, adaptation, and resilience. One of the key take-home messages of this material is that uncertainty is ubiquitous in complex ecological systems.  Now, while systemic uncertainty does not mean that the world is unpatterned or erratic, it does mean that people are never sure what their foraging returns will be or whether they will come down with the flu next week or whether their neighbor will support them or turn against them in a local political fight. Because uncertainty is so ubiquitous, I see it as especially important for understanding human evolution and the capacity for adaptation. In fact, I think it's so important a topic that I'm writing a book about it.  More on that later...

First, it's important to distinguish two related concepts. Uncertainty simply means that you don't know the outcome of a process with 100% certainty; outcomes are probabilistic. Risk, on the other hand, combines both the likelihood of a negative outcome and that outcome's severity. A mildly negative outcome with a very high probability of occurring may well be less risky than a more severe outcome that happens with lower probability. When a forager leaves camp for a hunt, he does not know what return he will get. 10,000 kcal? 5,000 kcal? 0 kcal? This is uncertainty. If the hunter's children are starving and might die if he doesn't return with food, then coming back with 0 kcal is not just uncertain but risky.
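One common way to make the distinction concrete is to treat uncertainty as a distribution over outcomes and risk as probability-weighted severity. Here is a minimal sketch in Python; the kcal values, probabilities, and the cost of a failed hunt are my own illustrative assumptions, not data from any study:

```python
import random

# Hypothetical foraging returns (kcal) and their probabilities.
returns = [10_000, 5_000, 0]
probs = [0.2, 0.5, 0.3]

# Uncertainty: the forager knows only the distribution, not the draw.
draw = random.choices(returns, weights=probs, k=1)[0]

# Risk: one common quantification is probability times severity.
# Suppose (hypothetically) a zero-return day costs the hunter's family
# 8,000 kcal of body reserves.
severity_of_failure = 8_000
risk = probs[2] * severity_of_failure
print(risk)
```

The same 30% chance of failure would carry much more risk if the severity were higher, which is the point of the starving-children example above.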

Human behavioral ecology has a number of elements that distinguish it as an approach to studying human ecology and decision-making. These features have been discussed extensively by Bruce Winterhalder and Eric Smith (1992, 2000), among others. Included among these are: (1) the logic of natural selection, (2) a hypothetico-deductive framework, (3) a piecemeal approach to understanding human behavior, (4) a focus on simple (strategic) models, (5) an emphasis on behavioral strategies, and (6) methodological individualism. Some others that I would add include: (7) ethological (i.e., naturalistic) data collection, (8) rich ethnographic context, and (9) a focus on adaptation and behavioral flexibility in contrast to typology and progressivism. The hypothetico-deductive framework and the use of simple models (along with the logic of selection) jointly account for the frequent use of optimality models in behavioral ecology. Not to overdo it with the laundry lists, but optimality models also all share some common features. These include: (1) the definition of an actor, (2) a currency and an objective function (i.e., the thing that is maximized), (3) a strategy set or set of alternative actions, and (4) a set of constraints.

For concreteness' sake, I will focus on foraging in this discussion, though the points apply to other types of problems. When behavioral ecologists attempt to understand foraging decisions, the currency they overwhelmingly favor is the rate of energy gain. There are plenty of good reasons for this. Check out Stephens and Krebs (1986) if you are interested. The point that I want to make here is that, ultimately, it's not the energy itself that matters for fitness. Rather it is what you do with it. How does a successful foraging bout increase your marginal survival probability or fertility rate? This doesn't sound like such a big issue but it has important implications. In particular, fitness (or utility) is a function of energy return. This means that in a variable environment, it matters how we average. Different averages can give different answers. For example, what is the average square root of 10 and 2? There are two quantities we might compute: (1) average the two values and take the square root (i.e., the function of the mean), and (2) take the square roots and then average (i.e., the mean of the function). The first of these is \sqrt{6}=2.45. The second is (\sqrt{10} + \sqrt{2})/2=2.29. The function of the mean is greater than the mean of the function. This is an instance of Jensen's inequality. The square root function is concave -- it has a negative second derivative. This means that while \sqrt{x} gets bigger as x gets bigger (its first derivative is positive), the increase is incrementally smaller as x gets larger. This is commonly known as diminishing marginal utility.
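The arithmetic is easy to check. A quick sketch in Python, with the numbers straight from the example above (the square root standing in for a concave utility function):

```python
import math

x = [10, 2]

# (1) The function of the mean: average first, then take the square root.
sqrt_of_mean = math.sqrt(sum(x) / len(x))
# (2) The mean of the function: take square roots first, then average.
mean_of_sqrt = sum(math.sqrt(v) for v in x) / len(x)

print(round(sqrt_of_mean, 2))  # 2.45
print(round(mean_of_sqrt, 2))  # 2.29

# Jensen's inequality for a concave function: f(E[x]) >= E[f(x)].
assert sqrt_of_mean > mean_of_sqrt
```

The gap between the two averages is exactly what the variance-discount expression later in the post quantifies.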

Lots of things naturally show diminishing marginal gains.  Imagine foraging for berries in a blueberry bush when you're really hungry.  When you arrive at the bush (i.e., 'the patch'), your rate of energy gain is very high. You're gobbling berries about as fast as you can move your hands from the bush to your mouth. But after you've been there a while, your rate of consumption starts to slow down.  You're depleting the bush.  It takes longer to pick the berries because you have to reach into the interior of the bush or go around the other side or get down on the ground to get the low-hanging berries.

[Figure: cumulative energy gain while foraging in a berry patch, showing diminishing returns over time]

Chances are, there's going to come a point where you don't think it's worth the effort any more. Maybe it's time to find another bush; maybe you've got other important things to do that are incompatible with berry-picking. In his classic paper, Ric Charnov (1976) derived the conditions under which a rate-maximizing berry-picker should move on, the so-called 'marginal value theorem': abandon the patch when the marginal rate of energy gain equals the mean rate for the environment. There are a number of similar marginal value solutions in ecology and evolutionary biology (they all arise from maximizing some rate or another). Two other examples: Parker derived a marginal value solution for the optimal time that a male dung fly should copulate (can't make this stuff up), and van Baalen and Sabelis derived the optimal virulence for a pathogen when the conditional probability of transmission and the contact rate between infectious and susceptible hosts trade off.
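To see the marginal value theorem in action, here is a hedged sketch in Python. The decelerating gain function g(t) = A*(1 - exp(-r*t)) and the values of A, r, and the travel time T are illustrative assumptions of mine, not anything taken from Charnov's paper:

```python
import math

# Assumed patch parameters: kcal asymptote, depletion rate, travel time.
A, r, T = 1000.0, 0.5, 2.0

def g(t):
    """Cumulative energy gained after t time units in the patch."""
    return A * (1 - math.exp(-r * t))

def g_prime(t):
    """Marginal (instantaneous) rate of energy gain."""
    return A * r * math.exp(-r * t)

def mvt_residence_time(lo=1e-9, hi=100.0, tol=1e-10):
    """Leave when the marginal rate g'(t) falls to the long-run
    average rate g(t)/(T + t); solve the crossing by bisection."""
    f = lambda t: g_prime(t) - g(t) / (T + t)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

t_star = mvt_residence_time()
# At t_star the marginal rate equals the mean rate (Charnov's tangent
# condition), so the residual of the defining equation is ~0.
assert abs(g_prime(t_star) - g(t_star) / (T + t_star)) < 1e-4
```

Raising the travel time T pushes t_star up, which is the familiar MVT prediction: when patches are far apart, stay longer in each one.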

So, what does all this have to do with risk? In a word, everything.

Consider a utility curve with diminishing marginal returns.  Suppose you are at the mean, indicated by \bar{x}. Now you take a gamble.  If you're successful, you move to x_1 and its associated utility.  However, if you fail, you move down to x_0 and its associated utility.  These two outcomes are equidistant from the mean. Because the curve is concave, the gain in utility that you get moving from \bar{x} to x_1 is much smaller than the loss you incur moving from \bar{x} to x_0.  The downside risk is much bigger than the upside gain.  This is illustrated in the following figure:

[Figure: a concave utility curve illustrating risk aversion -- the utility lost moving from \bar{x} down to x_0 exceeds the utility gained moving from \bar{x} up to x_1]

When returns are variable and utility/fitness is a function of returns, we can use expected utility as a tool for understanding optimal decisions. The idea goes back to von Neumann and Morgenstern, the fathers of game theory. Expected utility has received some attention in behavioral ecology, though not as much as it deserves. Stephens and Krebs (1986) discuss it in their definitive book on foraging theory. Bruce Winterhalder, Flora Lu, and Bram Tucker (1999) have discussed expected utility in analyzing human foraging decisions, and Bruce has also written with Paul Leslie (Winterhalder & Leslie 2002; Leslie & Winterhalder 2002) on the topic with regard to fertility decisions. Expected utility encapsulates the very sensible idea that, when faced with a choice between two options with uncertain outcomes, you should choose the one with the higher average utility. The world presents variable pay-offs; each pay-off has a utility associated with it; and the best decision is the one with the highest overall expected, or average, utility. Consider a forager deciding what type of hunt to undertake. He can go for big game, but there is only a 10% chance of success. When he succeeds, he gets 10,000 kcal of energy. When he fails, he can almost always find something else on the way back home to bring to camp: 90% of the time, he will bring back 1,000 kcal. The other option is to go for small game, which is generally a much more certain endeavor. 90% of the time, he will net 2,000 kcal. Such small game is remarkably uniform in its payoff, but sometimes (10% of the time) the forager will get lucky and net 3,000 kcal. We calculate the expected utility by summing the products of the probabilities and the rewards, assuming for simplicity in this case that utility is simply the energy value (if we didn't make this assumption, we would first calculate the utilities associated with the returns and then average those).

Big Game: 0.1*10000 + 0.9*1000 = 1900

Small Game: 0.9*2000 + 0.1*3000 = 2100

Small game is preferred because it has higher expected utility.
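These calculations can be sketched in a few lines of Python. The probabilities and kcal values come from the example above; the square-root utility in the second pass is my own illustrative assumption about what diminishing marginal utility might look like:

```python
import math

# The two hunt options as (probability, kcal) lotteries.
big_game = [(0.1, 10_000), (0.9, 1_000)]
small_game = [(0.9, 2_000), (0.1, 3_000)]

def expected_utility(lottery, u=lambda x: x):
    """Sum of probability-weighted utilities; linear utility by default."""
    return sum(p * u(x) for p, x in lottery)

print(expected_utility(big_game))    # 1900.0
print(expected_utility(small_game))  # 2100.0

# With a concave (square-root) utility, the steady small-game option
# looks even better relative to the long-shot big-game hunt.
eu_big = expected_utility(big_game, math.sqrt)
eu_small = expected_utility(small_game, math.sqrt)
assert eu_small > eu_big
```

Note that the ranking here happens to be the same under both utilities; with a sufficiently convex (risk-seeking) utility, the long-shot hunt could come out on top instead.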

We can do a bit of analysis on our utility curve and show something very important about risk and expected utility. I'll spare the mathematical details, but we can expand our utility function around the mean return using a Taylor series and then calculate expectations (i.e., average) on both sides.  The resulting expression encapsulates a lot of the theory of risk management. Let w(x) indicate the utility associated with return x (where I follow the population genetics convention that fitness is given by a w).

 \overline{w(x)} \approx w(\bar{x}) + \frac{1}{2} w''(\bar{x}) \mathrm{Var}(x).

Mean fitness is approximately equal to the fitness of the mean payoff plus a term that includes the variance in x and the second derivative of the utility function evaluated at the mean. When there is diminishing marginal utility, the second derivative is negative, so variance reduces mean fitness below the fitness of the mean. In short, when there is diminishing marginal utility, variance is bad. How bad is determined both by the magnitude of the variance and by how sharply curved the utility function is. If there is no curvature, utility is a straight line, w''=0, and variance doesn't matter.
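We can check this expression numerically. The sketch below uses the square root as the utility w and a made-up 50/50 gamble over returns of 10 and 2 (so the mean is 6 and the variance is 16); both the utility function and the gamble are my illustrative assumptions:

```python
import math

outcomes, probs = [10.0, 2.0], [0.5, 0.5]

mean = sum(p * x for p, x in zip(probs, outcomes))
var = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))

w = math.sqrt
w2 = lambda x: -0.25 * x ** -1.5  # second derivative of sqrt(x)

# Exact mean fitness E[w(x)] versus the second-order approximation
# w(mean) + 0.5 * w''(mean) * Var(x).
exact = sum(p * w(x) for p, x in zip(probs, outcomes))
approx = w(mean) + 0.5 * w2(mean) * var

# Variance drags mean fitness below the fitness of the mean...
assert exact < w(mean) and approx < w(mean)
# ...and the Taylor expansion tracks the exact value closely.
assert abs(approx - exact) < 0.05
```

The discrepancy between the exact and approximate values comes from the truncated higher-order terms of the expansion; for gambles with smaller spread the approximation tightens up quickly.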

So variance is bad for fitness. And variance can get big. One can imagine it being quite sensible to sacrifice some mean return in exchange for a reduction in variance if this reduction outweighed the premium paid from the mean. This is exactly what we do when we purchase insurance or when a farmer sells grain futures. This is also something that animals with parental care do. Rather than spewing out millions of gametes in the hope that a few will get lucky (like a sea urchin), animals with parental care take the energy they could spend on many more gametes and invest it instead in ensuring the survival of their existing offspring. This is probably also why hunter-gatherer women target reliable resources that generally have a lower mean return than other available, but riskier, items.
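The insurance logic can be made concrete with a certainty equivalent: the sure payoff that yields the same utility as the gamble. The gap between the mean and the certainty equivalent is the largest premium a risk-averse actor should be willing to pay for a sure thing. The square-root utility and the 50/50 gamble below are my own illustrative assumptions:

```python
import math

# A hypothetical gamble: equal chances of returns of 10 and 2 (mean 6).
gamble = [(0.5, 10.0), (0.5, 2.0)]

mean = sum(p * x for p, x in gamble)
eu = sum(p * math.sqrt(x) for p, x in gamble)  # expected utility

# The certainty equivalent: the sure payoff with the same utility.
# Since utility is sqrt(x), invert it by squaring.
certainty_equivalent = eu ** 2

# Any premium below this amount leaves the actor better off trading
# mean return for reduced variance; anything above it is a bad deal.
max_premium = mean - certainty_equivalent
assert 0 < max_premium < mean
```

With these numbers the certainty equivalent is about 5.24, so the actor should rationally give up roughly 0.76 units of mean return, about 13% of it, to eliminate the variance entirely.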

It turns out that humans have all sorts of ways of dealing with risk, some of them embodied in our very biology.  I'm going to come up short in enumerating these because this is the central argument of my book manuscript and I don't want to give it away (yet)! I hope to blog here in the near future about three papers that I have nearly completed that deal with risk management and the evolution of social systems, reproductive decision-making in an historical population, and foraging decisions by contemporary hunter-gatherers.  When they come out, my blog will be the first to know!

References

Charnov, E. L. 1976. Optimal foraging: The marginal value theorem. Theoretical Population Biology. 9:129-136.

Leslie, P., and B. Winterhalder. 2002. Demographic consequences of unpredictability in fertility outcomes. American Journal of Human Biology. 14 (2):168-183.

Parker, G. A., and R. A. Stuart. 1976. Animal behavior as a strategy optimizer: Evolution of resource assessment strategies and optimal emigration thresholds. American Naturalist. 110:1055-1076.

Stephens, D. W., and J. R. Krebs. 1986. Foraging theory. Princeton: Princeton University Press.

van Baalen, M., and M. W. Sabelis. 1995. The dynamics of multiple infection and the evolution of virulence. American Naturalist. 146 (6):881-910.

Winterhalder, B., and P. Leslie. 2002. Risk-sensitive fertility:The variance compensation hypothesis. Evolution and Human Behavior. 23:59-82.

Winterhalder, B., F. Lu, and B. Tucker. 1999. Risk-sensitive adaptive tactics: Models and evidence from subsistence studies in biology and anthropology. Journal of Archaeological Research. 7 (4):301-348.

Winterhalder, B., and E. A. Smith. 2000. Analyzing adaptive strategies: Human behavioral ecology at twenty-five. Evolutionary Anthropology. 9 (2):51-72.

On Husserl, Hexis, and Hissy-Fits

There has been quite a brouhaha percolating through some Anthropology circles following the annual meeting of the American Anthropological Association in New Orleans last month. It seems that the AAA executive board, in all its wisdom, has seen fit to excise the term "science" from the Association's long-range planning document. You can sample some of the reaction to this re-write in blog posts from anthropologi.info, Neuroanthropology, Evolution on the Beach, AAPA BANDIT, Inside HigherEd, and Fetishes I Don't Get at Psychology Today. There is also a letter from AAA president Virginia Dominguez here, and you can find the full text of the planning document here. The primary concern has centered on the first paragraph of this document. Here is that paragraph as it stood before the November meeting:

The purposes of the Association shall be to advance anthropology as the science that studies humankind in all its aspects, through archeological, biological, ethnological, and linguistic research; and to further the professional interests of American anthropologists; including the dissemination of anthropological knowledge and its use to solve human problems.

The new wording is as follows:

The purposes of the Association shall be to advance public understanding of humankind in all its aspects. This includes, but is not limited to, archaeological, biological, social, cultural, economic, political, historical, medical, visual, and linguistic anthropological research.  The Association also commits itself and to further the professional interests of anthropologists, including the dissemination of anthropological knowledge, expertise, and interpretation.

So, anthropology is no longer a science, though there are lots of rather particularistic approaches through which one can pursue anthropology that may or may not be scientific.  Apparently, the executive board has a newfound passion for public communication as well.  I guess we don't really need an organization that promotes scholarly understanding or the production of new knowledge.  Just look where that's gotten us!

The new wording has greatly concerned a number of parties, including the Society for Anthropological Sciences.  I am a member of this section and have never seen so much traffic on the society's listserv.

I will admit to being somewhat dismayed by the Society's response.  While I am not quite as tweaked by this as many, I nonetheless wrote a longish call for specific action -- one that involved good old-fashioned political organizing and attempting to forge alliances both with other sections within AAA and across other scholarly societies with an interest in anthropology (e.g., AAPA, HBA, SAA, HBES).  My call was greeted with a deafening (virtual) silence and I am left to guess why.  Perhaps the membership is suspicious of the imperialist ambitions of a biological anthropologist with the taint of evolution on him?  Perhaps they've heard and tried it all before and were simply convinced it would not work?  Perhaps they actually like being an embattled minority and don't really want to take action to jeopardize that status?

To what extent is the scandal a tempest in a teapot?  I honestly don't know.  The word "science" has been taken out of the first paragraph but there is nothing inherently anti-scientific about the statement.  After all, "advancing public understanding" can be done through "archaeological, biological, social, cultural, economic, political, historical, medical, visual, and linguistic anthropological research." Any number of these can be done through a scientific approach to understanding.

The thing that I find completely bizarre about the new wording is the exclusive focus on public understanding. Public understanding? Really? Judging from my recent search committee and scientific review panel experience, I can only be left with the conclusion that the public must have an insatiable hunger for phenomenology. This explains why I can never find any Husserl at Barnes and Noble -- he must just be flying off the shelves! You'd think that if the goal of our flagship professional organization is really promoting public understanding, more anthropologists would write in a manner generally understandable to, you know, the public. In his distinguished lecture, the eminent archaeologist Jeremy Sabloff chastised anthropologists for their unwillingness to engage with the general public. I could not agree with this perspective more, especially if "engaging with the public" entails engaging with colleagues from cognate disciplines, another thing that I think we do a miserable job of, in general.

I was a bit disappointed to read Alex Golub's write-up of this issue on the Savage Minds blog.  I'm usually a big fan of both this blog and Alex's posts more generally. However, in this case I think that Alex engages in the kind of ahistorical, totalizing stereotyping of scientific anthropologists that normally gives anthropologists the willies.  Advocates of science are characterized as close-minded automata, utterly lacking any appreciation for ambiguity, historicity, politics, or contested meaning.  For example, he writes

The fact that the model used by 'scientific' anthropologists has as much complexity as an average episode of WWE Smackdown -- with a distinction between the evil 'fluff-head' cultural anthropologists and the good 'scientific' cultural anthropologists -- should be the first sign that something fishy is going on.

Très nuanced, eh?

The statements made by many scientific anthropologists, particularly those of the generation that entered the profession in the 1960s and 1970s, need to be understood in the historical and political context of the speakers. I think that it is simply disingenuous to claim that scientific approaches to anthropological knowledge have not become increasingly marginalized within the mainstream of anthropology over the last several decades. One need only look at what has become of the departments that were home to the vaunted physical anthropology programs of the past to find evidence of this trend. Consider, for example, the University of Chicago, the University of California, Berkeley, or Columbia University. And this is just biological anthropology; it does not account for the loss of scientific social and cultural anthropologists (think Gene Hammel or Roy D'Andrade) in elite, Ph.D.-granting programs. The reasons for the marginalization of scientific approaches to anthropology are complex and do not fit neatly into the simplistic narrative of "objective, scientific anthropology ... under assault from interpretivists like Clifford Geertz who do not believe in truth." No doubt, part of the problem is simply the compartmentalization of knowledge. As scholars become increasingly specialized, it becomes more and more difficult to be both scientist and humanist. Increasingly, hiring decisions are zero-sum games: the gain of a scientist represents the loss of a humanist and vice-versa. Gone is Eric Wolf's conception of anthropology as "both the most scientific of the humanities and the most humanist of the sciences."

The key is that the declining importance of science in the elite anthropology departments has led to a feeling of embattlement -- that is almost certainly counter-productive most of the time -- among the remaining scientific anthropologists. Another consequence is that the decline of the place of science within anthropological discourse selects for personalities who thrive on embattlement, so that the reproduction of the field is relatively enriched with young scholars who see no point to professional or intellectual engagement. And so it gets more and more difficult to integrate.  This is the lens through which I view much of the public complaining about the recent actions of the AAA executive board. However, as my colleague Rebecca Bird noted, those of us who still see a place for science in anthropology need to move beyond reactionary statements.  We need to be proactive and positive.

The academy is changing. This can be seen in the increasing number of cross-cutting requests-for-proposals from funding agencies such as NSF (e.g., HSD, EID, CHNS) or NIH and the wholesale re-organization of many research universities (ASU is only the most extreme case; the ascendancy of interdisciplinary centers such as the Woods Institute for the Environment or the Freeman-Spogli Institute for International Studies at Stanford is a more common manifestation of this trend; the Columbia Earth Institute also comes to mind). In an academy that increasingly values transdisciplinarity and integration of knowledge, I think that anthropologists have an enormous comparative advantage -- if we could just get over ourselves. As I wrote in my 2009 Anthropology News piece:

Four-field anthropology is a biosocial discipline that integrates information from all levels of biological and social organization. To understand human behavior, the four-field anthropologist considers genetics and physiology; the history of the human lineage; historical, cultural and social processes; the dynamics of face-to-face interactions; and global political economy. Each of these individual areas is studied by other disciplines, but no other field provides the grounding in all, along with the specific mandate to understand the scope of human diversity. The anthropologist stands in a unique position to serve as the fulcrum upon which the quality of an interdisciplinary research team balances. Revitalizing the four-subfield approach to anthropological training could move anthropology from the margins of the interdisciplinary, research-based academy of the near future to the core.

I have no interest in disparaging forms of knowledge or excluding particular types of scholars from any social movement, but I think that scientific anthropologists have a particularly important role to play in such a revitalization, if for no other reason than they (presumably) care about more of these levels of organization.  Maybe such scholars could even communicate the subtlety and richness of ethnographic experience that our more humanistic colleagues so value if we could just get beyond the name-calling.

I may be dismissed as being naively optimistic by the old guard of scientific anthropologists (hypothesis 2, above), but I think that I have good reasons to be optimistic about the future of anthropology, despite the many challenges. This optimism stems from the work of individual anthropologists.  I'll do a quick shout-out to a number of people who I think are doing particularly good work, integrating different anthropological perspectives, and communicating with a broader audience.  This is a very personal and idiosyncratic list -- these scholars are people I've encountered recently or whose work has been brought to my attention of late. They tend to be focused on questions of health and human-environment interactions, naturally, since these are the issues that organize my research.

If you want to feel good about the future of a scientific anthropology that is simultaneously integrated into contemporary anthropology and communicates with a broader scientific and policy audience (and is generally great and transformative -- that key NSF buzz word), check out the work of:

  • Craig Hadley at Emory on food security and psychological well-being
  • Amber Wutich at ASU on vulnerability, water security, and common-pool resources
  • Lance Gravlee at UF on the embodiment of racial discrimination and its manifestations in health
  • Brooke Scelza at UCLA on parental investment and childhood outcomes
  • Dan Hrushka at ASU on how cultural beliefs, norms and values interact with economic constraints to produce health outcomes
  • Crickette Sanz at Washington University on multi-ape ecology of the Goualougo Triangle, Republic of Congo
  • Herman Pontzer at CUNY on measuring daily energy expenditures in hunter-gatherers
  • Rebecca and Douglas Bird on subsistence and signaling among Martu foragers

This list could go on. I won't even mention the amazing anthropology post-docs, Siobhan Mattison, Sean Downey, and Brian Wood, with whom I have been so lucky to interact this academic year.

I have plenty more to say on this -- particularly how the portrayal of politics and political agendas enters the discourse -- but I have final exams to grade!

Jones & Bird (2008) == Evolutionary Psychology???

So, I've been spending a bunch of time recently thinking about evolutionary psychology (EP). This is a field about which I have some serious reservations for a variety of reasons both technical and philosophical. That said, I do find the constant in-fighting among human evolutionary biologists tedious and think that it's absurdly unproductive. I am currently working on a critique of some particular aspects of contemporary thought in EP and these blog posts have helped me to get some of my thoughts in order. I am also working on trying to find common ground with researchers in a variety of different "schools" of human evolutionary studies.

Rebecca Bird and I recently wrote a short essay published in Anthropology News that defends functionalist approaches to the study of human ecology (a position that, given the reaction of the editor, is rather controversial). Given the severe length constraints we faced, we were only able to give the barest outline of the research program in human evolutionary ecology that we are trying to establish at Stanford (see this previous post for some details). This is neither the place nor the time to elaborate on the argument of the essay, but I will re-cap a couple points:

  • Contemporary ecological (or environmental) anthropology has ceded explanations of human behavior based on rationality to economists
  • Allowing pure economic (i.e., pecuniary) rationality to define what we collectively consider "rational" is dangerous and ill-considered
  • Expectations of group and/or individual rationality may fail because of a failure to consider the correct objective function, individual heterogeneity, or key trade-offs
  • "Culture" is an amalgam of behaviors and institutions that represent responses to both past and present environments and as such is not particularly useful as a causal explanation for observed behavior

We also suggest some crazy methodological ideas like measuring things and testing multiple competing hypotheses.

The point I want to take up right now is the failure to observe the predictions of rational-actor models because of a failure to account for trade-offs. Rebecca and I had one particular trade-off in mind when we wrote this. We hypothesize that there is frequently a very general trade-off between pecuniary reward and social capital. This arises from the fact that, on the one hand, sharing (especially food-sharing) is so ubiquitous in face-to-face human groups and, on the other hand, people frequently engage in social signaling specifically through economically costly activities (see Bird and Smith (2005) for a review). It would not surprise us at all if it turned out that people were much better at solving complex social optimization problems than they are at optimizing pecuniary return.

Now, in my holiday-induced state of heightened self-reflection, it occurs to me that this argument is really not all that different from Leda Cosmides's (1989, et seq.) suggestion that people are better at solving the Wason selection test when it is presented in terms of social contracts than when it is presented in its traditional way as a test of abstract logical reasoning abilities. Yikes! Does this mean that Rebecca Bird and I are evolutionary psychologists? No, it doesn't. It does make me think that perhaps the time has come for détente among the different schools of thought working on evolution and human behavior. I'm hardly the first person to think this (See Eric Smith's (2000) paper for instance). But maybe I'm the first to blog it!

My Stanford colleagues Rebecca and Doug Bird are clearly leading figures in contemporary human behavioral ecology. I will let their work and the philosophy it entails stand for itself. In what follows, I will focus on my own philosophical and methodological orientation. (Perhaps they will comment on this entry at some point...)

Martin Daly and Margo Wilson (who I think generally do excellent work) rather infamously and imperialistically claimed in a 1999 review article in Animal Behaviour that EP "encompasses work by nonpsychologists, including even those who have deliberately differentiated themselves from 'evolutionary psychology' as 'evolutionary anthropologists', 'human sociobiologists' and 'human behavioural ecologists'." This led to a rebuttal paper by three eminent Human Behavioral Ecologists (Eric Smith, Monique Borgerhoff Mulder, and Kim Hill) (Smith et al. 2000). This is an excellent paper and I heartily recommend anyone interested in evolution and human behavior read the exchange, which is freely available here and here. I will defer to the Smith et al. (2000) paper for the bulk of the arguments on why it is not reasonable to think of HBE (and other approaches) as a subset of EP, but will highlight a couple here:

  • HBE actually pre-dates EP as a field
  • Prominent EP practitioners were the ones who advocated the separation in the first place, largely on theoretical grounds
  • There are substantial theoretical and methodological differences that characterize the two fields

The one issue that I will take up relates to my personal sensibility with regard to science. A tenet of EP is that contemporary behavior -- and the fitness outcomes of this behavior -- is irrelevant for evolutionary understanding. The contention is that we should instead focus on the study of the psychological mechanisms underlying behavior. The idea that current behavior and/or fitness is irrelevant comes across indirectly in Donald Symons's (1989) critique of "Darwinian Anthropology" and more directly and forcefully in Tooby and Cosmides's (1990) follow-up "The Past Explains the Present: Emotional Adaptations and the Structure of Ancestral Environments." I don't want to come across as too much of a de Finetti-style positivist here, but I have a hard time with the idea that we should sacrifice studying observables in favor of objects that we have no hope of observing. While I don't object to studying psychological mechanisms, I do think that since the thing we are interested in explaining is human behavior, perhaps that is what we should study.

But now I find myself confronted with the fact that I have made an EP-like argument in print (albeit in Anthropology News!) as well as the very real fact that I have always found Cosmides and Tooby's argument about social reasoning and the Wason selection test compelling. Perhaps the lesson here is that we shouldn't be ideologues with regard to our approach to science. While I will admit a distressingly positivist love of observables (a common feature of Bayesians?), my true philosophical heritage lies in the works of Peirce, James, and Dewey. As a committed pragmatist, I am willing to at least entertain just about any theoretical or methodological position that helps me solve scientific questions.

What if every student of human behavior wrote a paper in which they adopted the approach of a contrasting school? Would this be cool or would it simply be anarchic?

There are some ways in which it is perhaps easier for me to think across these schools than some of my colleagues. My dirty little secret is that I was never really trained in any of them! My graduate training is in (nonhuman) primate behavioral ecology. One of the most influential people for my intellectual and personal development (and probably the only reason I actually got into Harvard) is Irv DeVore, a foundational figure for EP. My Ph.D. advisor Richard Wrangham, while very much a behavioral ecologist when studying chimpanzees, is clearly sympathetic to EP when studying humans. In my post-doc, I moved into more applied questions of human health and population dynamics and indirectly encountered one of the other "schools" of human evolutionary thought, namely, dual-inheritance theory. Cultural transmission models are used in health research to understand the adoption of things like modern contraception or oral rehydration therapy. As a result, I have thought a little about models of cultural transmission (a chapter that I wrote in Melissa Brown's recent book can be found here). This is a pretty natural extension of my work in epidemic modeling and while it is not a central part of my research, I suspect I haven't said my last on the topic (particularly not if I continue to attract clever students interested in the topic).

So, those are my thoughts du jour on the study of human behavior. The winter break is rapidly drawing to a close and pretty soon I will be back in the office and will need to get back to actual research. Hopefully, these meditations in the closing days of 2008 will have a positive influence on this research.

References

Bird, R. B., and E. A. Smith. 2005. Signaling Theory, Strategic Interaction, and Symbolic Capital. Current Anthropology 46 (2):221-248.

Cosmides, L. 1989. The Logic of Social Exchange: Has Natural Selection Shaped How Humans Reason? Studies with the Wason Selection Task. Cognition 31: 187-276.

Daly, M., and M. I. Wilson. 1999. Human Evolutionary Psychology and Animal Behaviour. Animal Behaviour 57:509-519.

Smith, E. A. 2000. Three Styles in the Evolutionary Study of Human Behavior. In Human Behavior and Adaptation: An Anthropological Perspective, edited by L. Cronk, W. Irons and N. Chagnon. Hawthorne, NY: Aldine de Gruyter.

Smith, E. A., M. Borgerhoff Mulder, and K. Hill. 2000. Evolutionary Analyses of Human Behaviour: A Commentary on Daly & Wilson. Animal Behaviour 60:F21-F26.

Symons, D. 1989. A Critique of Darwinian Anthropology. Ethology and Sociobiology 10 (1-3):131-144.

Tooby, J., and L. Cosmides. 1990. The Past Explains the Present - Emotional Adaptations and the Structure of Ancestral Environments. Ethology and Sociobiology 11 (4-5):375-424.

On Modules

As the next installment in my series on evolutionary psychology (see previous posts here and here), I thought I would share some thoughts on evolutionary modules.  As should be obvious from previous posts, I have serious concerns about evolutionary psychology.  Nonetheless, I don't want to repeat the knee-jerk criticisms that attended the rise of what you might call (and Symons (1989) did call) "Darwinian Anthropology."  As in Anthropology more generally, the level of discourse in human evolutionary studies tends to be particularly low, and this surely hinders progress toward our presumably shared goals of understanding human behavior, the origin and maintenance of human diversity, and how people respond to social, environmental, and economic changes.

In this spirit, I am taking seriously the idea of modularity.  The concept of "massive modularity" seems to be pretty central to just about any definition of modern EP, and it is one of the ideas that I see as potentially most problematic.  A major question that naturally arises in the analysis of cognitive modularity is: what is a module?  There are two senses of modularity discussed in the EP literature; for a good review, see Barrett and Kurzban (2006).  In his highly influential (1983) book, Fodor popularized the concept of a cognitive module.  A Fodorian module is characterized by reflex-like encapsulation of critical functions.  It is thought to be anatomically localized, inaccessible to conscious thought, and to have shallow outputs.  Our senses and motor systems are examples of possible Fodorian modules, as are the systems that underlie language (Machery 2007).

In contrast to the Fodorian module is the second sense of modularity found in the EP literature, the evolutionary module. Like a Fodorian module, the evolutionary module is domain-specific or informationally encapsulated.  That is where the resemblance ends though.  Rather than being defined by a list of attributes, an evolutionary module is characterized by function.  An evolutionary module is a domain-specific cognitive mechanism that has been shaped by natural selection to perform a specific task.   There is no need here to specify their characteristic operating time, the shallowness of their outputs, or their anatomical localization.

Using engineering-inspired arguments about efficiency and design, the proponents of massive modularity suggest that the brain is really a collection of domain-specific modules.  These modules drive not just the reflex-like actions of our sensory-motor systems but also govern higher cognitive processes like reason, judgment, and decision-making.  The brain is not, as we typically conceive it, a single organ.  Rather, it is a collection of special-purpose information-processing organs.  Needless to say, such a position has been controversial.  Among the notable critics is Jerry Fodor himself, who wrote a whole book whose sarcastic title, The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, refers to Steve Pinker's (1997) book, How the Mind Works.  Another notable critic is David Buller, the ostensible subject of my last two posts.

Barrett & Kurzban (2006) suggest that much of the controversy surrounding the EP concept of massive modularity arises from confusion over what is meant by a module in the EP sense.  That is, critics are thinking about Fodorian modules when the advocates of massive modularity have something entirely different in mind.  Maybe.  I'm no expert, but the argument seems plausible for at least part of the controversy.  I have my own issues with modularity, but I will save those for the paper that I am writing (and for which these posts serve as sketches to help me get some thoughts straight).

One point that I will make here is a fairly orthodox criticism of modularity.  In enumerating possible evolutionary modules, and noting that such modules require domain-specific input criteria, Barrett & Kurzban (2006: 630) write that "systems specialized for making good food choices process only representations relevant to the nutritional value of different potential food items."  Really?  I'm not one to fall back on the weak culture-complicates-things argument, but I do think there are other things -- including ones potentially important for fitness -- involved in food choice beyond the nutritional quality of a potential foodstuff.  Perhaps an anecdote is in order here.

A long time ago, my wife and I were taken out to a fancy Chinese restaurant in Kota Tua, Jakarta by a colleague who wanted to impress us with his esoteric knowledge of a variety of Asian cuisines.  He took the initiative and ordered for the table a range of items including tripe, jellyfish, pig trotters, and chicken feet. For a variety of complex social reasons, we felt it was in our interest to not seem like naïve rubes from America.  So, we ate everything unflinchingly and with smiles on our faces. These were not things we normally would have volunteered to eat (though we now regularly get jellyfish) but the social payoffs of eating these (at the time) unappealing items outweighed whatever distaste we may have experienced.

Clearly, this is a bit of a trivial example.  I nonetheless think that it highlights an extremely important aspect of human decision-making.  The optimal decision in a one-dimensional problem may change when one increases the dimensionality of the problem, particularly when the elements of your (vector) optimand trade off against one another.  Sometimes the optimal nutritional choice is not the optimal choice with respect to social or cultural capital.  A person's foraging decision is presumably one that balances the various dimensions of the problem.  In a less trivial example, this is what Hawkes, O'Connell and Bird and Bird suggest is going on with some men's foraging decisions (summarized in this review by Bird & Smith (2005)).  According to their model, men make energetically suboptimal foraging decisions in order to signal their phenotypic quality to political allies and potential mates.  Food choice is thus a decision that balances the potential costs and benefits of at least three fitness-critical domains (energetics, politics, and reproduction).  The same logic can be applied to that other staple of EP, mate choice.  What people say they want on pen-and-paper surveys is not necessarily what they get when they actually choose a mate.  The problem is that one's choice of mate spills over into many domains other than future reproduction.  So it's not simply a matter of the ideal mate being out of one's league.  Sometimes, people actually prefer a mate who does not conform to their ideal physical type.
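To make the dimensionality point concrete, here is a minimal sketch, with payoff numbers invented purely for illustration, of how adding a social dimension to a food-choice problem can flip the optimum:

```python
# Toy multi-dimensional food choice: the nutritionally optimal item loses
# to a socially valuable one once the social stakes are weighted in.
# All payoff values are invented for illustration.

foods = {
    "familiar dish": {"nutrition": 0.9, "social": 0.1},
    "chicken feet":  {"nutrition": 0.6, "social": 0.9},
}

def best_choice(foods, w_social):
    """Pick the food maximizing a weighted sum of nutrition and social payoff."""
    def utility(attrs):
        return (1 - w_social) * attrs["nutrition"] + w_social * attrs["social"]
    return max(foods, key=lambda f: utility(foods[f]))

nutrition_only = best_choice(foods, w_social=0.0)  # "familiar dish"
social_stakes = best_choice(foods, w_social=0.8)   # "chicken feet"
```

The same two items, with the same attributes, yield different optima depending only on the weight placed on the second dimension, which is exactly the situation at that restaurant in Jakarta.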

At the very least, this point seems to require positing the existence of yet another module that integrates the outputs of various lower-level modules.  Of course, this is beginning to sound more like a generalized reasoning process, the bane of EP.

There is another usage of the term "module" that I think may have some relevance to this whole discussion.  In evo-devo, modularity refers to the degree to which a group of phenotypic characters has independent genetic architecture and ontogeny.  I will call this an "evolutionary ontogenetic module" (EOM) and contrast it with the "evolutionary cognitive module" (ECM) of EP.  Sperber (2002), in his defense of massive modularity, actually discusses EOMs in passing.  Pigliucci (2008) details the various, largely divergent definitions of modularity.  I tend to think about EOMs the way that Wagner & Altenberg (1996) do, wherein a modular set of traits is one with (1) a higher than average level of integration by pleiotropic effects (i.e., single genes affecting multiple traits) and (2) a higher than average level of independence from other trait sets.  That is, modular architecture occurs where there are few pleiotropic genes that act across characters with different functions but more such effects falling on functionally related traits.
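A toy genotype-phenotype map can make the Wagner & Altenberg picture concrete. The matrix below is entirely invented for illustration: rows are genes, columns are traits, and modularity shows up as block structure, with most pleiotropic effects falling within a trait set and few crossing between sets.

```python
# Toy genotype-phenotype map: a 1 means the gene affects the trait.
# Modular architecture appears as (approximate) block structure.

pleiotropy = [
    # traits: A1 A2 A3 | B1 B2 B3
    [1, 1, 1, 0, 0, 0],  # gene 1: acts only within trait set A
    [1, 1, 0, 0, 0, 0],  # gene 2: within A
    [0, 0, 0, 1, 1, 1],  # gene 3: within trait set B
    [0, 0, 0, 1, 1, 0],  # gene 4: within B
    [0, 1, 0, 0, 1, 0],  # gene 5: a rare cross-module gene
]
module_of = [0, 0, 0, 1, 1, 1]  # trait index -> module assignment

def pleiotropy_density(matrix, module_of):
    """Count pleiotropic trait pairs falling within vs. between modules."""
    within = between = 0
    for gene in matrix:
        traits = [t for t, hit in enumerate(gene) if hit]
        for i in range(len(traits)):
            for j in range(i + 1, len(traits)):
                if module_of[traits[i]] == module_of[traits[j]]:
                    within += 1
                else:
                    between += 1
    return within, between

counts = pleiotropy_density(pleiotropy, module_of)
# counts == (8, 1): eight within-module trait pairs, one cross-module pair
```

In the Wagner & Altenberg framing, it is precisely this excess of within-set over between-set pleiotropy that lets each trait complex evolve semi-independently.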

Modularity in the evo-devo sense is central to the evolution of complexity as well as the evolution of evolvability (the capacity of an organism to respond adaptively to selection).  Do ECMs need to be EOMs?  Do this and other related concepts from evo-devo help provide a means for relating the ideas of EP or HBE to their genetic architecture and ontogenetic assembly?  I think so, but an elaboration on this topic awaits a later post.

References

Barrett, H. C., and R. Kurzban. 2006. Modularity in Cognition: Framing the Debate. Psychological Review 113 (3):628-647.

Bird, R. B., and E. A. Smith. 2005. Signaling Theory, Strategic Interaction, and Symbolic Capital. Current Anthropology 46 (2):221-248.

Fodor, J. 1983. The Modularity of Mind. Cambridge: MIT Press.

Machery, E. 2007. Massive Modularity and Brain Evolution. Philosophy of Science 74:825-838.

Pigliucci, M. 2008. Is Evolvability Evolvable? Nature Reviews Genetics 9:75-82.

Pinker, S. 1997. How the Mind Works. New York: Norton.

Sperber, D. 2002. In Defense of Massive Modularity. In Language, Brain and Cognitive Development: Essays in Honor of Jacques Mehler, edited by E. Dupoux. Cambridge, MA: MIT Press, 47-57.

Symons, D. 1989. A Critique of Darwinian Anthropology. Ethology and Sociobiology 10 (1-3):131-144.

Wagner, G.P., and L. Altenberg. 1996. Perspective: Complex Adaptations and the Evolution of Evolvability. Evolution 50 (3):967-976.

Buller on Evolutionary Psychology

David Buller, a relentless critic of evolutionary psychology, recently wrote a piece in Scientific American outlining the critique of this particular flavor of human evolutionary studies that he has developed over the last several years.  The author of Adapting Minds lists four ideas from contemporary evolutionary psychology (EP) that he suggests are fallacious:

  1. Analysis of Pleistocene Adaptive Problems Yields Clues to the Mind’s Design
  2. We Know, or Can Discover, Why Distinctively Human Traits Evolved
  3. “Our Modern Skulls House a Stone Age Mind”
  4. The Psychological Data Provide Clear Evidence for Pop EP

In my graduate seminar on Evolutionary Theory for the Anthropological Sciences, we read Buller's more technical (2005) critique of EP.  I find myself largely in agreement with his criticisms and, importantly, when I disagree with him, I think it is for interesting reasons.  

The first of these critiques is, in my opinion, the most far-reaching and damning.  The Pleistocene, the geological epoch that lasted from around 1.8 million to 10,000 years before present, takes on the role of a mythical age of creation for EP.  You see, the Pleistocene represents our species' "Environment of Evolutionary Adaptedness" (EEA), a concept derived from developmental psychology and particularly from John Bowlby, the father of attachment theory.  In the words of Tooby and Cosmides (1990: 386-387), the EEA "is not a place or a habitat, or even a time period.  It is a statistical composite of the adaptation-relevant properties of ancestral environments encountered by members of ancestral populations, weighted by their frequency and fitness-consequences."

The key question, as Buller notes, is what would such a statistical composite look like for humans?  The insight that is regularly trotted out is that humans (hominins, really) were everywhere hunter-gatherers until about 10,000 years ago -- and most remained hunter-gatherers for some substantial period after that!  So, what do we know about hunter-gatherers?  Much to our collective loss, most of what we know about hunter-gatherers comes from the study of highly marginalized populations.  This is because states, with their potential economic surpluses, large population sizes, and hierarchical social organization (read: efficient militaries), pushed hunter-gatherers into marginal habitats as states expanded across the landscape.  Nonetheless, the hunter-gatherer populations that we know about are a remarkably diverse lot.  A terrific reference cataloging some of this diversity is Robert Kelly's (1995) The Foraging Spectrum.  In my specific area of interest (i.e., biodemography), Mike Gurven and Hilly Kaplan have recently written a very interesting paper on the diversity of hunter-gatherer patterns of mortality.  In this figure, taken from Gurven and Kaplan's paper, we can catch a glimpse of the variability just in hunter-gatherer demography.

Humans are clearly quite different from chimpanzees.  The point of Gurven and Kaplan's paper is that the existence of the elderly within our societies is not simply an artefact of the modern industrialized world: old age is as much a part of the human life cycle as childhood.  Yet despite the long potential lifespans in all the sampled populations -- all groups without access to modern medicine -- the figure portrays a remarkable diversity in life expectancy (the average number of years lived by a person in the population).  The sample includes people living in the arid lands of Sub-Saharan Africa (!Kung, Hadza), South American forests (Ache, Tsimane, Yanomamo), and South American grasslands (Hiwi).  Life expectancy at age 5, \(\stackrel{\circ}{e}_5\), varies by as much as 30%.  The basic point here is that even in something as basic as age-specific schedules of mortality and fertility, different hunter-gatherer groups are very different from each other (note that the Ache and !Kung differ in their total fertility rates by a factor of nearly two).
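For readers unfamiliar with the notation, \(\stackrel{\circ}{e}_5\) can be computed directly from a survivorship schedule. The sketch below uses toy survivorship values of my own invention (not Gurven and Kaplan's data) just to show the arithmetic:

```python
# Crude discrete life-expectancy calculation from a survivorship schedule.
# l[x] = probability of surviving from birth to age x, in 5-year steps.
# The survivorship values are invented for illustration.

ages = list(range(0, 85, 5))  # 0, 5, 10, ..., 80
l = [1.00, 0.70, 0.67, 0.64, 0.60, 0.56, 0.52, 0.47,
     0.42, 0.36, 0.29, 0.22, 0.15, 0.09, 0.04, 0.01, 0.00]

def remaining_life_expectancy(ages, l, x):
    """Crude discrete e(x): sum of conditional survivorship times step width."""
    i = ages.index(x)
    step = ages[1] - ages[0]
    return step * sum(l[j] / l[i] for j in range(i + 1, len(l)))

e5 = remaining_life_expectancy(ages, l, 5)  # expected years remaining at age 5
```

With these toy numbers, a child who survives the (heavy) mortality of the first five years can expect roughly 36 more years of life, which is the sense in which high life expectancy at age 5 can coexist with short life expectancy at birth.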

In all likelihood, our Pleistocene ancestors, like the sample of hunter-gatherer societies discussed in Kelly (1995) or Gurven and Kaplan (2007), lived in diverse habitats, engaged in diverse economic activities within the rubric of hunting and gathering, had diverse social structures, met with diverse biotic and abiotic environmental challenges to survival and reproduction, and dealt with diverse hostile and harmonious relations with conspecifics outside of their natal groups or communities. In other words, it's hard to imagine what neat statistical generalizations about hunter-gatherer lifeways -- and the selective forces they entailed -- could emerge from such diversity. People lived in face-to-face societies.  People had to integrate disparate sources of information to make decisions about fundamentally uncertain phenomena. There was probably a sexual division of labor, though not necessarily the same one everywhere. There were women and men. Probably some other things too, but not that many.  Robert Foley (1996) has a nice critique of what he sees as an extreme simplification of the Pleistocene hunter-gatherer lifeways under the rubric of the EEA.  

Another related problem with the EEA line of thinking is the idea that selection somehow stopped when humans developed agriculture.  Ten thousand years, while brief in the grand scheme of things, is not exactly evolutionary chump change.  That span represents anywhere from 350 to 450 human generations, which is, in fact, plenty of time for selection to act.  We know from genome scans done in the lab of Jonathan Pritchard, for example, that there is extensive evidence for rapid, recent selection in humans (Voight et al. 2006).  New, complex psychological mechanisms?  Probably not, but we should nonetheless not fall into the trap of thinking that evolution somehow stopped for our species 10,000 years ago.
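A back-of-the-envelope calculation shows why 350 to 450 generations is plenty of time. Using the standard haploid selection recursion (the parameter values here are my own arbitrary choices, not estimates for any real locus), even a modest 1% fitness advantage moves a rare allele a long way in 400 generations:

```python
# Standard haploid selection model: an allele with relative fitness 1+s
# follows the recursion p' = p(1+s) / (1 + p*s) each generation.

def allele_freq_after(p0, s, generations):
    """Iterate the haploid selection recursion for a given number of generations."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

p = allele_freq_after(p0=0.01, s=0.01, generations=400)
# An allele starting at 1% frequency with a 1% advantage reaches ~35% frequency.
```

The intuition is that the odds p/(1-p) multiply by exactly (1+s) each generation, so 400 generations compound a 1% advantage more than fiftyfold.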

Buller's second fallacy ("We Know, or Can Discover, Why Distinctively Human Traits Evolved") points to a deeply difficult problem in human evolution.  I'm afraid that my current thinking on this problem leads me to the same pessimistic conclusion that Buller and his colleague Jonathan Kaplan come to: there are just some things that we can't know (scientifically) about human evolution.  This arises from the fact that our species is the only member of our genus and we are separated from our sister species by nearly six million years.  As Dick Lewontin first noted in 1972, despite our dizzying cultural and social diversity, we are an amazingly homogeneous species genetically.  I suspect that what this means is that the standard conceit of EP (one that Buller is highly critical of), that humans are everywhere the same critter, is probably true.  Unique (and universal) phenomena present science with a particular explanatory challenge.  Buller is spot on when he criticizes EP for wanting it both ways.  On the one hand, EP sees a robust and universal human nature (an idea to which I am sympathetic, by the way).  On the other, EP sees strong selection driving the evolution of diverse psychological mechanisms.  The unpleasant reality is that if selection on psychological mechanisms were, in fact, that strong and pervasive, we should expect contemporary heterogeneity in the expression of such adaptations across different populations.  This is a topic that University of Illinois anthropological geneticist Charles Roseman and I have talked about quite a bit, and we have a very slowly gestating manuscript in which we discuss this and other ideas.  I know of no convincing evidence that such variation exists, and for this and other reasons I remain a steadfast skeptic of the idea that natural selection has shaped all these important psychological mechanisms independently and with precision to the tasks to which they are supposed to represent engineering solutions.

Buller's argument for fallacy #3 ("Our Modern Skulls House a Stone Age Mind") is, I think, a little unfair.  The major argument he makes on this point is that some of our psychological mechanisms did not, in fact, arise in our Pleistocene hunter-gatherer ancestors but are of a more ancient, primate (or even mammalian) nature.  Honestly, I doubt that this point would elicit many complaints from anyone of the so-called Santa Barbara school.  Sometimes critics -- myself included -- make a little too much of the it-all-evolved-in-the-Pleistocene bit.  I think this is one example of that.  Tooby and Cosmides have themselves argued that the EEA can be thought of as working at a variety of time scales.  The emotional systems described by Jaak Panksepp and used by Buller in his critique -- Care, Panic, and Play -- are all pretty basic ones for a social species.  Indeed, the emotional system of panic almost certainly pre-dates complex sociality.  The EEA argument, as laid out by John Tooby and Irv DeVore (1987) and then by Tooby and Cosmides (1990), is essentially one of evolutionary lag: complex adaptations to past environments are carried forward into the present.  When a system retains its function, the scale of such lag can be large.  Think about bilateral symmetry or the tetrapod bauplan.  I think that a fair assessment of Santa Barbara-style EP reveals that there is nothing contradictory about the existence of primitive (in the sense of plesiomorphic) emotional systems in contemporary humans.

Another small point of departure between Buller's critique and my own thinking on the matter is his discussion of David Buss's work on sexual jealousy.  Now, I should be perfectly clear here.  I happen to think that the whole sex-differences-in-sexual-preferences thing is the most overplayed finding in all of evolutionary science.  In class, I refer to this work as Men-Are-From-Mars Evolutionary Psychology.  The basic idea is to take whatever tired sexual stereotype you'd hear in a second-rate stand-up comedian's monologue, or read about in airport-bookstore self-help tracts, and dress it up as the scientifically proven patrimony of our evolutionary past.  Ugh.  No, the part of Buller's argument with which I disagree is his apparent take on decision-making.  Buller writes, "According to Pop EP, many cultural differences stem from a common human nature responding to variable local conditions."  I guess I'm not so clear as to what's wrong with such a statement.  Isn't that really what he argues in the previous paragraphs when he suggests that women and men have a fundamentally similar reaction to sexual jealousy?  On this he writes, "Instead both sexes could have the same evolved capacity to distinguish threatening from nonthreatening infidelities and to experience jealousy to a degree that is proportional to the perceived threat to a relationship in which one has invested mating effort."  An evolutionary psychology that took seriously environmental (including cultural) variability and combined it with some preferences over risk and uncertainty and a generalized calculus of costs and benefits: now that would be interesting!  Of course, I'd call that behavioral ecology.

Regarding fallacy #4 ("The Psychological Data Provide Clear Evidence for Pop EP") more generally, I think that Buller is right on.  The evidence for many of these so-called psychological adaptations is pretty weak.  There is general contempt for population genetics among the smarter (and there are smart ones) evolutionary psychologists with whom I have talked, and general ignorance among the less gifted.  I think this contempt and/or ignorance works to the detriment of scientific progress in EP.  Buller's point that cross-cultural differences are sometimes greater than inter-sexual differences in the psychological traits that are putative adaptations for sex-specific reproductive strategies, while not specifically substantiated, is pretty devastating.  This is where population genetics comes in.  Thinking about within- vs. between-population variance is a very important step in understanding the evolutionary forces at work.
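The within- vs. between-population step is easy to illustrate. The sketch below, using entirely hypothetical trait values, computes the share of total variance attributable to differences in population means; strong, population-specific selection on a trait should leave a substantial between-group share:

```python
# Partition total trait variance into within- and between-population
# components (the logic behind F_ST-style statistics).
# The trait values below are hypothetical, invented for illustration.

def between_group_share(groups):
    """Fraction of total sum of squares attributable to group-mean differences."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    total = sum((v - grand) ** 2 for v in all_vals)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    return between / total

# two hypothetical populations measured on some psychological trait
pop_a = [4.9, 5.1, 5.0, 5.2, 4.8]
pop_b = [4.8, 5.2, 5.0, 5.3, 4.9]
share = between_group_share([pop_a, pop_b])
# share is small here: nearly all of the variance lies within populations
```

This is, of course, just Lewontin's (1972) apportionment logic applied to a quantitative trait rather than allele frequencies.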

The complex organ that is the human brain is certainly the result of selection.  As George Williams reminds us, selection is the only evolutionary mechanism that can produce this type of complexity.  So, like Buller, I agree that there must be an evolutionary psychology.  Our various complaints are with the evolutionary psychology that Buller labels "Pop EP."  It's all too easy to be critical.  Developing scientific theories for phenomena as complex as those surrounding the evolution of our species is a difficult task and takes ingenuity, courage, and, of course, thick skin.  Among the various practitioners of EP of whom Buller is particularly critical, I think that John Tooby and Leda Cosmides are smart people who manifest all these qualities.  A fallacy of contemporary discourse -- one that is all too easily seen at anthropological meetings -- is that people who disagree intellectually must hold each other in contempt or otherwise dislike each other.  I disagree with much of current EP, but I also think there are some interesting ideas among its practitioners, once we get beyond the trite Men-Are-From-Mars/Women-Are-From-Venus stereotypes.

Detailing where I think the action is in an interesting evolutionary psychology is at the very least another long blog post.  Some areas that I think are promising and/or under-studied include: detailed analyses of cultural transmission dynamics, understanding how people integrate diverse types of information to form decisions with fitness consequences, and understanding how people weigh risk and uncertainty.  I have a lot more to say on these topics, so I think it will have to wait for future posts...

References

Buller, D. J. (2005). Evolutionary psychology: the emperor's new paradigm. Trends in Cognitive Sciences, 9(6), 277-283.

Foley, R. (1996). The adaptive legacy of human evolution: A search for the environment of evolutionary adaptedness. Evolutionary Anthropology, 4, 194-203.

Gurven, M., & Kaplan, H. (2007). Longevity Among Hunter-Gatherers: A Cross-Cultural Examination. Population and Development Review, 33(2), 321–365.

Kelly, R. L. (1995). The Foraging Spectrum: Diversity in Hunter-Gatherer Lifeways. Washington DC: Smithsonian Institution Press.

Lewontin, R. C. (1972). The apportionment of human genetic diversity. Evolutionary Biology, 6, 381-398.

Voight, B. F., Kudaravalli, S., Wen, X., & Pritchard, J. K. (2006). A map of recent positive selection in the human genome. PLoS Biology, 4(3), e72.

Tooby, J., & Cosmides, L. (1990). The Past Explains the Present - Emotional Adaptations and the Structure of Ancestral Environments. Ethology and Sociobiology, 11(4-5), 375-424.

Tooby, J., & DeVore, I. (1987). The reconstruction of hominid behavioral evolution through strategic modeling. In W. Kinzey (Ed.), Primate Models of Hominid Behavior. Stony Brook: SUNY Press.

Disturbing Tag Cloud

Using the tag cloud widget for WordPress, I find that my most commonly used tag currently is "economics."  How can that be?  It's not even one of my categories. Perhaps it is my broad definition of economics?  Perhaps it is my frequent discontent with the way that human behavior gets discussed in the economics literature? Maybe I'm really interested in economic questions.  I think I definitely need to write more posts on diarrhea...