Category Archives: science

The Least Stressful Profession of Them All?

Continuing the theme of critics misunderstanding the life of university researchers that I started in my last post, I feel the need to chime in a bit on a story that has made the social-media rounds in the last couple of days. This kerfuffle stems from a Forbes piece by Susan Adams enumerating the 10 least stressful jobs for 2013. Reporting on a study from the job site careercast.com, and to the surprise of nearly every academic I know, she listed university professor as the least stressful job of all. Adams writes: "For tenure-track professors, there is some pressure to publish books and articles, but deadlines are few." This is quite possibly the most nonsensical statement I have ever read about the academy, and it reveals a profound ignorance of its inner workings. The careercast.com list was also picked up by CNBC and Huffington Post, both of which were completely credulous of the rankings.

Before going on though, I have to give Ms. Adams some props for amending her piece following an avalanche of irate comments from actual professors. She writes:

Since writing the above piece I have received more than 150 comments, many of them outraged, from professors who say their jobs are terribly stressful. While I characterize their lives as full of unrestricted time, few deadlines and frequent, extended breaks, the commenters insist that most professors work upwards of 60 hours a week preparing lectures, correcting papers and doing research for required publications in journals and books. Most everyone says they never take the summer off, barely get a single day’s break for Christmas or New Year’s and work almost every night into the wee hours.

All true.

In the CNBC piece, the careercast.com publisher, Tony Lee, lays down some of the most uninformed nonsense that I've ever read:

"If you look at the criteria for stressful jobs, things like working under deadlines, physical demands of the job, environmental conditions hazards, is your life at risk, are you responsible for the life of someone else, they rank like 'zero' on pretty much all of them!" Lee said.

Plus, they're in total control. They teach as many classes as they want and what they want to teach. They tell the students what to do and reign over the classroom. They are the managers of their own stress level.

Careercast.com measured job-related stress using an 11-dimensional scale. These dimensions and the point ranges assigned to each include:

  • Travel, amount of (0-10)
  • Growth Potential (income divided by 100)
  • Deadlines (0-9)
  • Working in the public eye (0-5)
  • Competitiveness (0-15)
  • Physical demands (stoop, climb, etc.) (0-14)
  • Environmental conditions (0-13)
  • Hazards encountered (0-5)
  • Own life at risk (0-8)
  • Life of another at risk (0-10)
  • Meeting the public (0-8)

These dimensions seem reasonable enough; whether they were accurately assessed for the job at the top of the list is another question altogether.
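For concreteness, here is a rough sketch of how a total score on this 11-dimension scale might be tallied for a professor. The per-dimension values are my own guesses (I do not know careercast.com's actual ratings, and a simple sum is only an assumption about how the dimensions are combined), so treat the numbers as purely illustrative.

    # Purely illustrative: guessed scores for "university professor" on the eleven
    # careercast.com dimensions listed above. The real per-dimension ratings and
    # the way they are combined are unknown to me; a simple sum is assumed here.
    professor_scores = {
        "travel (0-10)": 2,
        "growth potential (income/100)": 70_000 / 100,  # hypothetical salary
        "deadlines (0-9)": 8,             # grants, grading, lectures, reviews...
        "working in the public eye (0-5)": 3,
        "competitiveness (0-15)": 13,     # hundreds of applicants per opening
        "physical demands (0-14)": 1,
        "environmental conditions (0-13)": 1,
        "hazards encountered (0-5)": 0,
        "own life at risk (0-8)": 0,
        "life of another at risk (0-10)": 0,
        "meeting the public (0-8)": 4,
    }

    total_stress_score = sum(professor_scores.values())
    print(f"hypothetical total stress score: {total_stress_score:.1f}")
    # Note that, read literally, the income-based term swamps everything else,
    # which is one reason the published rankings are hard to interpret.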

It is important to note that there is enormous heterogeneity contained in the job title "professor." There are professors of art history and professors of business and professors of law and professors of vascular surgery, and professors of chemistry, and professors of seismic engineering, and professors of volcanology, and ... you get the point. No doubt some of these are more or less stressful than others. Many of these involve substantial work in the public eye and meeting the public. Some involve hazardous environmental conditions and physical demands.

However, I will focus mainly on what I see as the most ludicrous claim made by both Lee and Adams: that professors have no deadlines. My life is all about deadlines: article/book submission deadlines, institutional review board deadlines, peer review deadlines, editorial deadlines, and the all-important grant deadlines. There are the deadlines imposed by my students when they apply for grants or fellowships or jobs and need highly detailed letters of recommendation, often on very short notice. Oh, and guess what: grades are due on a particular date at the end of the term. You know, a deadline? And those classes we teach: better have a lecture ready before the class meets. Again, kinda like a deadline. It is worth noting that one is expected to meet these teaching deadlines even when most professional incentives (at least at a research university) are focused on everything in your job description but teaching. There is a trite phrase describing the life of a professor -- particularly a junior professor -- that seems to have found its way into the general consciousness: "publish or perish." Notice that it is not "give coherent, interesting lectures and grade fairly and expeditiously or perish"!

So, yes, there are deadlines, and there are very difficult trade-offs relating to the finiteness of time. Honestly, it's hard for me to imagine how even a casual observer of the university could fail to see the ubiquity of deadlines in a professor's life.

In an excellent rebuttal of this list, blogger Audra Diers writes about both the time demands and the economic realities of obtaining a tenure-track job. I will finish up with a few thoughts on competitiveness and "growth potential." My experience on a variety of job search committees since coming to Stanford is that there are typically hundreds of highly qualified candidates for any given opening. These are all people who have Ph.D.s and, frequently, already have jobs at other universities. In the anthropology department at Stanford, the majority of faculty joined Stanford from faculty positions at other universities. It is very difficult to get a job at a university like Stanford directly out of graduate school. Inevitably, you are competing against people who have already been assistant professors (or at least post-docs) at other universities and already have a substantial publication and grant-writing record. The differences in salary, teaching loads, and institutional prestige can be substantial. Browsing the Chronicle of Higher Education's Almanac of Higher Education can provide some numbers. Many people bust it in lower-prestige universities with the hope of eventually getting an opportunity for a job at a place like Stanford or Berkeley or Harvard. That means publishing important work, often while carrying outrageously high teaching loads at universities with primarily teaching missions, and that means long hours, juggling many conflicting demands, and enormous individual drive.

If you are a scientist, you are often competing with other scientists for results. Getting yourself in a position to secure such results means successful grant-writing and attracting top students and post-docs to your lab. Now, this competition is often enjoyable and almost certainly drives innovation, but it can be stressful (and deadline-filled!). There is nothing quite like the feeling of looking at some journal's table of contents that's shown up in your inbox and realizing you've been scooped on a problem you've spent years working on. There is always that little bit of fear in the back of your head pushing you to publish your results before someone else does.

Where Lee gets the idea that professors "teach as many classes as they want and what they want to teach" is a mystery to me. Universities (and colleges within universities) have rules for the number of courses their faculty are expected to teach. Sometimes, a professor can buy out of some teaching by securing more research funding that specifically budgets for such buy-outs. Within departments, there is the dreaded curriculum committee. My department's CC decided this year that I should teach all my courses in the Spring quarter. While it's been nice to have large chunks of research time this Fall, Spring is going to be horrible. This is hardly teaching as much or what I want to teach. Departments have instructional needs (i.e., "service courses") and someone needs to teach these. Junior faculty are often dumped upon to teach the service courses (e.g., history of the field, methodological courses) that very few students want to attend.

Writes Adams at Forbes, "The other thing most of the least stressful jobs have in common: At the end of the day, people in these professions can leave their work behind, and their hours tend to be the traditional nine to five." This is just crazy talk. I work every night; some nights are more effective than others, for sure, but, like people in many professions, I take this as a given of my job.

So being a university professor is hardly a stress-free life. This doesn't in any way mean that we don't like our jobs. Being a tenured professor at a major research university is good work if you can get it. The job carries with it a great deal of autonomy, flexibility, and the ability to pursue one's passion. As a professor, one interacts with interesting, curious people on a daily basis and helps shape future leaders. The job-related stress felt by a university professor is almost certainly not on par with that of, say, an infantry soldier or police officer, but the job is not stress-free. It never ceases to surprise me how ignorant of the workings of universities critics often are. This is an instance where there is no obvious political agenda -- the study just got some facts badly wrong -- but studies like this contribute to the disturbing anti-intellectualism (and concomitant disdain for empirical evidence) that has become a part of American public consciousness.

Thoughts on Black Swans and Antifragility

I have recently read the latest book by Nassim Nicholas Taleb, Antifragile. I read his famous The Black Swan a while back, in the field, and wrote lots of notes. I never got around to posting those notes since they were quite telegraphic (and often not even electronic!), as they were written in the middle of the night while fighting insomnia under mosquito netting. The publication of his latest, along with the time afforded by my holiday displacement, gives me an excuse to formalize some of these notes here. Like Andy Gelman, I have so many things to say about this work, on so many different topics, that this will be a bit of a brain dump.

Taleb's work is quite important for my thinking on risk management and human evolution, so it is with great interest that I read both books. Nonetheless, I find his works maddening, to say the least. Before presenting my critique, however, I will pay the author as big a compliment as I suppose can be made. He makes me think. He makes me think a lot, and I think that there are some extremely important ideas in his writings. From my rather unsystematic readings of other commentators, this seems to be a pretty common conclusion about his work. For example, Brown (2007) writes in The American Statistician, "I predict that you will disagree with much of what you read, but you'll be smarter for having read it. And there is more to agree with than disagree. Whether you love it or hate it, it’s likely to change public attitudes, so you can't ignore it." The problem is that I am so distracted by all the maddening bits that I regularly nearly miss the ideas, and it is the ideas that are important. There is so much ego and so little discipline on display in both The Black Swan and Antifragile.

Some of these sentiments have been captured in Michiko Kakutani's excellent review of Antifragile. There are some even more hilarious sentiments communicated in Tom Bartlett's non-profile in the Chronicle of Higher Education.

I suspect that if Taleb and I ever sat down over a bottle of wine, we would not only have much to discuss but would find that we are annoyed -- frequently to the point of apoplexy -- by the same people. Nonetheless, one of the most frustrating things about reading his work is the absurd stereotypes he deploys and the broad generalizations he uses to dismiss the work of just about any academic researcher. His disdain for academic research interferes with his ability to make a cogent critique. Perhaps I have spent too much time at Stanford, where the nerd is glorified, but, among other things, I find his pejorative use of the term "nerd" for people like Dr. John -- as contrasted with the man-of-his-wits Stereotyped, I mean, Fat, Tony -- off-putting and rather behind the times. Gone are the days when being labeled a nerd was a devastating put-down.

My reading of Taleb's critiques of prediction and risk management is that the primary problem is hubris. Is there anything fundamentally wrong with risk assessment? I am not convinced there is, and there are quite likely substantial benefits to systematic inquiry. The problem is that the risk assessment models become reified into a kind of reality. I warn students – and try to regularly remind myself – never to fall in love with one's own model. Something that many economists and risk modelers do is start to believe that their models are something more real than heuristic. George Box's adage has become a bit cliche but nonetheless always bears repeating: all models are wrong, but some are useful. We need to bear in mind the wrongness of models without dismissing their usefulness.

One problem with both projection and risk analysis that Taleb does not discuss is that risk modelers, demographers, climate scientists, economists, etc., are constrained politically in their assessments. The unfortunate reality is that no one wants to hear how bad things can get, and modelers get substantial push-back from various stakeholders when they try to account for real worst-case scenarios.

There are ways of building in more extreme events than have been observed historically (Westfall and Hilbe (2007), e.g., note the use of extreme-value modeling). I have written before about the ideas of Martin Weitzman in modeling the disutility of catastrophic climate change. While he may be a professor at Harvard, my sense is that his ideas on modeling the risks of catastrophic climate change are not exactly mainstream. There is the very tangible evidence that no one is rushing out to mitigate the risks of climate change despite the fact that Weitzman's model makes it pretty clear that it would be prudent to do so. Weitzman uses a Bayesian approach which, as noted by Westfall and Hilbe, is a part of modern statistical reasoning that was missed by Taleb. While beyond the scope of this already hydra-esque post, briefly, Bayesian reasoning allows one to combine empirical observations with prior expectations based on theory, prior research, or scenario-building exercises. The outcome of a Bayesian analysis is a compromise between the observed data and prior expectations. By placing non-zero probability on extreme outcomes, a prior distribution allows one to incorporate some sense of a black swan into expected (dis)utility calculations.
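To make that last point concrete, here is a minimal numerical sketch -- emphatically not Weitzman's actual model; the distributions, utility function, and numbers are invented for illustration -- of how placing non-zero probability on extreme losses changes an expected-utility calculation.

    # Illustrative only: compare expected utility of consumption under a
    # thin-tailed versus a fat-tailed belief about climate damages. All
    # distributions and parameters are made up for this sketch.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000

    # Fractional loss of consumption from climate damages, truncated to [0, 0.99].
    thin_tailed = np.clip(rng.normal(loc=0.02, scale=0.02, size=n), 0.0, 0.99)
    fat_tailed = np.clip(0.02 + 0.02 * rng.standard_t(df=3, size=n), 0.0, 0.99)

    def expected_utility(loss, eta=2.0):
        # Standard constant-relative-risk-aversion utility of what remains.
        consumption = 1.0 - loss
        return np.mean(consumption ** (1.0 - eta) / (1.0 - eta))

    print("thin-tailed beliefs:", expected_utility(thin_tailed))
    print("fat-tailed beliefs: ", expected_utility(fat_tailed))
    # The fat-tailed case yields a far lower expected utility: the rare draws
    # near total loss dominate the average. That is the sense in which a
    # 'black swan' enters the expected (dis)utility calculation.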

Nor does the existence of black swans mean that planning is useless. By their very definition, black swans are rare -- though highly consequential -- events. Does it not make sense to have a plan for dealing with the 99% of the time when we are not experiencing a black swan event? To be sure, this planning should not interfere with our ability to respond to major events, but I don't see any evidence that planning for more-or-less likely outcomes necessarily trades off with responding to unlikely ones.

Taleb is disdainful of explanations for why the bubonic plague didn't kill more people: "People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and 'scientific models' of epidemics." (Black Swan, p. 120) Does he not understand that epidemic models are a variety of that lionized category of nonlinear processes he waxes on about? He should know better. Epidemic models are not one of those false bell-curve models he so despises. Anyone who thinks hard about an epidemic process -- in which an infectious individual must come in contact with a susceptible one in order for a transmission event to take place -- should be able to infer that an epidemic cannot infect everyone. Epidemic models work and make useful predictions. We should, naturally, exhibit a healthy skepticism about them, as we should about any model. But they are an important tool for understanding and even planning.

Indeed, the understanding gained from the study of (nonlinear) epidemic models has provided us with the most powerful tools we have for control and even eradication. As Hans Heesterbeek has noted, the idea that we could control malaria by targeting the mosquito vector of the disease is one that was considered ludicrous before Ross's development of the first epidemic model. The logic was essentially that there are so many mosquitoes that it would be absurdly impractical to eliminate them all. But the Ross model revealed that epidemics -- because of their nonlinearity -- have thresholds. We don't have to eliminate all the mosquitoes to break the malaria transmission cycle; we just need to eliminate enough to bring the system below the epidemic threshold. This was a powerful idea, and it is central to contemporary public health. It is what allowed epidemiologists and public health officials to eliminate smallpox and it is what is allowing us to very nearly eliminate polio, if political forces (black swans?) permit.
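Since the threshold logic is easy to miss in prose, here is a minimal discrete-time SIR sketch -- my own toy illustration, not Ross's actual malaria model, which tracks humans and mosquitoes separately -- showing both points: an epidemic does not infect everyone, and below the threshold (here, R0 = beta/gamma = 1) transmission simply fizzles out, which is why control only has to push the system under the threshold rather than eliminate every vector.

    # Toy discrete-time SIR model (illustration only, not Ross's malaria model).
    def final_epidemic_size(r0, gamma=0.1, days=10_000, s0=0.999, i0=0.001):
        beta = r0 * gamma
        s, i, r = s0, i0, 0.0
        for _ in range(days):
            new_infections = beta * s * i
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        return r  # fraction of the population ever infected

    for r0 in (0.8, 1.5, 3.0):
        print(f"R0 = {r0}: final epidemic size = {final_epidemic_size(r0):.3f}")
    # Even with R0 = 3, the final size stays well below 1.0 (not everyone is
    # infected), and with R0 = 0.8 -- below the threshold -- the outbreak never
    # takes off at all.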

Taleb's notion of the ludic fallacy (i.e., the mistaken belief that games of chance are an adequate model of randomness in the world) is great. Quite possibly the most interesting and illuminating section of The Black Swan comes on p. 130, where he illustrates the major risks faced by a casino. Empirical data make a much stronger argument than do snide stereotypes. That said, Lund (2007) makes the important point that we need to ask what exactly is being modeled in any risk assessment or projection. One of the most valuable outcomes of any formalized risk assessment (or formal model construction more generally) is that it forces the investigator to be very explicit about what is being modeled. The output of the model is often of secondary importance.

Much of the evidence deployed in his books is what Herb Gintis has called "stylized facts" and is, of course, subject to Taleb's own critique of "hidden evidence." Because the stylized facts are presented anecdotally, there is no way to judge what is being left out. A fair rejoinder to this critique might be that these are trade publications meant for a mass market and are therefore not going to be rich in data regardless. However, the tone of the books – ripping on economists and bankers, but also statisticians, historians, neuroscientists, and any number of other professionals who have the audacity to make a prediction or provide a causal explanation – makes the need for more measured empirical claims all the more pressing. I suspect that many of these people actually believe things that are quite compatible with the conclusions of both The Black Swan and Antifragile.

On Stress

The notion of antifragility turns on systems getting stronger when exposed to stressors. But we know that not all stressors are created equal. This is where the work of Robert Sapolsky really comes into play. In his book Why Zebras Don't Get Ulcers, Sapolsky, citing the foundational work of Hans Selye, notes that some stressors certainly make the organism stronger. Certain types of stress ("good stress") improve the state of the organism, making it more resistant to subsequent stressors. Rising to a physical or intellectual challenge, meeting a deadline, competing in an athletic event, working out: these are examples of good stresses. They train body, mind, and emotions and improve the state of the individual. It is not difficult to imagine that there could be similar types of good stressors at levels of organization higher than the individual too. The way the United States came together as a society to rise to the challenge of World War II and emerge as the world's preeminent industrial power comes to mind. An important commonality of these good stressors is the time scale over which they act. They are all acute stressors that allow recovery and therefore permit subsequently improved performance.

However, as Sapolsky argues so nicely, when stress becomes chronic, it is no longer good for the organism. The same glucocorticoids (i.e., "stress hormones") that liberate glucose and focus attention during an acute crisis induce fatigue, exhaustion, and chronic disease when they are secreted at high levels chronically.

Any coherent theory of antifragility will need to deal with the types of stress that systems can resist and, importantly, that have a strengthening effect. Using the idea of hormesis – that a positive biological outcome can arise from taking low doses of toxins – is scientifically hokey and borders on mysticism. It unfortunately detracts from the good ideas buried in Antifragile.

I think that Taleb is on to something with the notion of antifragility, but I worry that the policy implications end up being just so much orthodox laissez-faire conservatism. There is the idea that interventions – presumably by the State – can do nothing but make systems more fragile and generally worse. One area where the evidence very convincingly suggests that intervention works is public health. Life expectancy has doubled in the rich countries of the world from the beginning of the twentieth century to today. Many of the gains were made before the sort of dramatic interventions that come to mind when most people think about modern medicine. It turns out that sanitation and clean water went an awful long way toward decreasing mortality well before we had antibiotics or MRIs. Have these interventions made us more fragile? I don't think so. The jury is still out, but reducing the infectious disease burden early in life (as improved sanitation does) seems to have synergistic effects on later-life mortality, an effect that is mediated by inflammation.

On The Academy

Taleb drips derision on university researchers throughout his work. There is a lot to criticize in the contemporary university; however, as with so many other external critics of the university, I think that Taleb misses essential features and his criticisms end up being off base. Echoing one of the standard talking points of right-wing critics, Taleb belittles university researchers as writers rather than doers (channeling the H.L. Mencken witticism "Those who can, do; those who can't, teach"). Skin in the game purifies thought and action, a point with which I actually agree; however, thinking that university researchers live in a world lacking consequences is nonsense. Writing is skin in the game. Because we live in a quite free society – and because of important institutional protections on intellectual freedom like tenure (another popular point of criticism from the right) – it is easy to forget that expressing opinions – especially when one speaks truth to power – can be dangerous. Literally. Note that intellectuals are often the first ones to go to the gallows when there are revolutions from both the right and the left: the Nazis, the Bolsheviks, and Mao's Cultural Revolution, to name a few. I occasionally get, for lack of a better term, unbalanced letters from people who are offended by the study of evolution, and I know that some of my colleagues get this a lot more than I do. Intellectuals get regular hate mail, a phenomenon amplified by the ubiquity of electronic communication. Writers receive death threats for their ideas (think Salman Rushdie). Ideas are dangerous and communicating them publicly is not always easy, comfortable, or even safe, yet it is the professional obligation of the academic.

There are more prosaic risks that academics face that suggest to me that they do indeed have substantial skin in the game. There is a tendency for critics from outside the academy to see universities as ossified places where people who "can't do" go to live out their lives. However, the university is a dynamic place. Professors do not emerge fully formed from the ivory tower. They must be trained and promoted. This is the most obvious and ubiquitous way that what academics write has "real world" consequences – i.e., for themselves. If peers don't like your work, you won't get tenure. One particularly strident critic can sink a tenure case. Both the trader and the assistant professor have skin in their respective games – their continued livelihoods depend upon their trading decisions and their writing. That's pretty real. By the way, it is a huge sunk investment that is being risked when an assistant professor comes up for tenure. Not much fun to be forty and let go from your first "real" job since you graduated with your terminal degree... (I should note that there are problems with this – it can lead to particularly conservative scholarship by junior faculty, among other things, but this is a topic for its own post.)

Now, I certainly think that there are more and less consequential things to write about. I have gotten more interested in applied problems in health and the environment as I've moved through my career because I think that these are important topics about which I have potentially important things to say (and, yes, do). However, I also think it is of utmost importance to promote the free flow of ideas, whether or not they have obvious applications. Instrumentally, the ability to pursue ideas freely is what trains people to solve the sort of unknown and unforecastable problems that Taleb discusses in The Black Swan. One never knows what will be relevant, and playing with ideas (in the personally and professionally consequential world of the academy) is a type of stress that makes academics better at playing with ideas and solving problems.

One of the major policy suggestions of Antifragile is that tinkering with complex systems will be superior to top-down management. I am largely sympathetic to this idea and to the idea that high-frequency-of-failure tinkering is also the source of innovation. Taleb contrasts this idea of tinkering with "top-down" or "directed" research, which he argues regularly fails to produce innovations or solutions to important problems. This notion of "top-down," "directed" research is among the worst of his various straw men and a fundamental misunderstanding of the way that science works. A scientist writes a grant with specific scientific questions in mind, but the real benefit of a funded research program is the unexpected results one discovers while pursuing the directed goals. As a simple example, my colleague Tony Goldberg has discovered two novel simian hemorrhagic viruses in the red colobus monkeys of western Uganda as a result of our big grant to study the transmission dynamics and spillover potential of primate retroviruses. In the grant proposal, we discussed studying SIV, SFV, and STLV. We didn't discuss the simian hemorrhagic fever viruses because we didn't know they existed! That's what discovery means. The fact that they were not explicitly in the grant didn't stop Tony and his collaborators from the Wisconsin Regional Primate Center from discovering these viruses, but the systematic research meant that they were in a position to discover them.

The recommendation of adaptive, decentralized tinkering in complex systems is in keeping with work on resilience (another area about which Taleb is scornful because, for him, it is the poor stepchild of antifragility). Because nonlinear, coupled systems make long-range prediction so difficult, adaptive management is the best option for dealing with complex environmental problems. I have written about this before here.

So, there is a lot that is good in the works of Taleb. He makes you think, even if you spend a lot of time rolling your eyes at the trite stereotypes and stylized facts that make up much of the rhetoric of his books. Importantly, he draws attention to probabilistic thinking for a general audience. Too much popular communication of science trades in false certainties, and the mega-success of The Black Swan in particular has done a great service in increasing awareness among decision-makers and the reading public of the centrality of uncertainty. Antifragility is an interesting idea, though not as broadly applicable as Taleb seems to think it is. The inspiration for antifragility seems to lie largely in biological systems. Unfortunately, basing an argument on general principles drawn from physiology, ecology, and evolutionary biology pushes Taleb's knowledge base a bit beyond its limit. Too often, the analogies in this book fall flat or are simply on shaky ground empirically. Nonetheless, recommendations for adaptive management and bricolage are sensible for promoting resilient systems and innovation. Thinking about the world as an evolving complex system rather than the result of some engineering design is important, and if throwing his intellectual cachet behind this notion helps it become as ingrained in the general consciousness as the idea of a black swan has, then Taleb has done another major service.

On Anthropological Sciences and the AAA

I guess the time has rolled around again for my annual navel-gaze regarding my discipline, my place within it, and its future. Two strangely interwoven events have conspired to make me particularly philosophical as we enter into the winter holidays. First, I am in the middle of a visit by my friend, colleague, and former student, Charles Roseman, now an associate professor of anthropology at the University of Illinois, Urbana-Champaign. The second is that the American Anthropological Association meetings just went down in San Francisco and this always induces an odd sense of shock and subsequent introspection.

Charles graduated with a Ph.D. from the Department of Anthropological Sciences (once a highly ranked department according to the National Research Council) in 2005. He was awarded tenure at UIUC, a leading department for biological anthropology, this past year and has come back to The Farm to collaborate with me on our top-secret sleeper project of the past seven years. We've made some serious progress on this project since he arrived, and maybe I'll be able to write about that soon too.

The annual AAA meeting is one that I never attended until about four years ago, coinciding with what we sometimes refer to as "the blessed event," the remarriage of the two Stanford Anthropology departments. It's actually a bit of a coincidence that I started attending AAAs the same year that we merged, but it has largely been the business of the new Department of Anthropology that has kept me going back – mostly to serve on job search committees. This year, I had two responsibilities that drew me to the AAAs. The first was the editorial board meeting for American Anthropologist, the flagship publication of the association.  I joined the editorial board this year and it seemed a good idea to go and get a feel for what is happening with the journal and where it is likely to head over the next couple of years.

My other primary responsibility was chairing a session that was organized by two of my Ph.D. students, Yeon Jung Yu and Shannon Randolph. In addition to Yeon and Shannon, my Ph.D. student Alejandro Feged also presented work from his dissertation research.  All three of these students were actually accepted into Anthsci and are part of the last cohort of students to leave Stanford still knowing the two-department system.

It was a great pleasure to sit in the audience and watch Yeon, Shannon, and Alejandro dazzle the room with their sophisticated methods, beautiful images, and accounts of impressive, extended -- and often hardcore -- fieldwork. For her dissertation research, Yeon worked for two years with commercial sex workers in southern China, attempting to understand how women get recruited into sex work and how social relations facilitate their ability to survive and even thrive in a world that is quite hostile to them. Her talk was incredibly professional and theoretically sophisticated. For her dissertation research, Shannon worked in the markets of Yaoundé, Cameroon, trying to understand the motivations for consumption of wild bushmeat. Shannon was able to share with the audience her innovative approaches to collecting data (over 4,000 price points, among other things) on a grey-market activity that people are not especially eager to discuss, especially in the market itself. Alejandro did his dissertation research in the Colombian Amazon, where he investigated the human ecology of malaria in this highly endemic region. His talk demonstrated that the conventional wisdom about malaria ecology in this region -- namely, that the people most at risk for infection are adult men who spend the most time in the forest -- is simply incorrect for some indigenous populations, and his time-budget analyses made a convincing case for the behavioral basis of this violation of expectations. This was a pretty heterogeneous collection of talks, but they all shared a very strong methodological foundation.

At a time when many anthropologists express legitimate concerns over their professional prospects, I have enormous confidence in this crop of students, all three of whom are regularly asked to do consulting for government and/or non-governmental organizations because of their subject knowledge and methodological expertise. Anthsci graduates -- there weren't that many of them since the department existed for less than 10 years -- have done very well in the profession overall. I will list just a couple here whose work I knew well because I was on their committees or their work was generally in my area.

In addition to these grad students, I think that it's important to note the success of the post-docs who worked either in Anthsci or with former Anthsci faculty on projects that started before the merger. Some of these outstanding people include:

In a discipline that is lukewarm at best on the very notion of methodology, I suspect that students with strong methodological skills -- in addition to the expected theoretical sophistication and critical thinking (note that these skills do not actually trade off) -- enjoy a distinct comparative advantage when entering a less-than-ideal job market. Of course, I don't mean to imply that Anthsci didn't have its share of graduates who left the field out of frustration or lack of opportunity or who got stuck in the vicious cycle of adjunct teaching. But this accounting gives me hope. It gives me hope for both my current and future students, and it gives me hope for the field. Maybe I'll even go to AAAs again next year...

AAPA 2012 Run-Down

I am done with this year's American Association of Physical Anthropologists annual meeting in Portland. Alas, I am not yet home as I had a scheduling snafu with Alaska Airlines yesterday and there was literally not a single seat on a flight to any airport in the Bay Area. So, I hung out in PDX for the night, where my sister-in-law is finishing up her MD/MPH at OHSU. Staying an extra night allowed me to have dinner at what is probably my favorite pizzeria on the West Coast, Bella Faccia on Alberta Ave in Northeast (Howie's in Palo Alto is a close second). I also had a lovely breakfast of risotto cakes and poached eggs at La Petite Provence, also on Alberta. All in all, a fantastic couple days' worth of food.

It was great to get a chance to catch up with old friends and colleagues and meet new ones. This is really what professional meetings are about. I had a chance to spend time with Charles Roseman, Rick Bribiescas, Josh Snodgrass, my EID buddy Nelson Ting, Kirstin Sterner, and Frances White. I also had very nice, if too brief, chats with Connie Mulligan, Lorena Madrigal, Larry Sugiyama, Greg Blomquist, Zarin Machanda, Melissa Emery Thompson, Cheryl Knott, Andy Marshall, and Chris Kuzawa.

I only go to the AAPAs every couple of years. Given the interdisciplinarity of my work and interests, I struggle to find a "home" professional meeting. Sometimes I feel like it's PAA; sometimes Sunbelt; sometimes AAPA/HBA.  One thing I can say for certain is that it is not AAA, my semi-annual experience in ethnographic surreality. Such a peculiar discipline anthropology is. Part of the reason I don't go to AAPAs all that often is that I rarely find all that much interesting there. There are a few really fantastic people working in the field, but most of the talks I find stupefyingly boring. I'm just not that interested in teeth. I suppose this is true for any professional meeting, so I shouldn't be too hard on AAPA -- I'm also not especially interested in contraceptive uptake, social media/online networks, or governmentality, apparently the modal topics in my competing meetings. In fact, I was pleasantly surprised by the diversity and quality of talks I saw at AAPA this year.

In my session alone, I saw really terrific and interesting talks by Steve Leigh and Connie Mulligan. Steve spoke on the comparative gut microbiomes of primates and Connie presented early results on the modification of gene expression through methylation of infants born to women who experienced extreme psychosocial and physical trauma in eastern Congo. Really important stuff. It also struck me that you'd probably only see these types of talks at the AAPAs.

There were a lot of young people at this meeting -- a greater fraction than I remember from past meetings.  Maybe it was the draw of hipster Portland with its great beer, great food, and general atmosphere of grooviness. Maybe there really are lots and lots of young physical anthropologists being trained these days. I must admit that I had mixed feelings about this thought as I looked out over the vast river of twenty-something faces pouring into the hotel bar Saturday night. On the one hand, it's great that people are being trained to do good work in physical anthropology. On the other hand, I worry about the ability of our discipline, which shows no signs of stopping with the charade that somehow anthropology is really akin to literary criticism, to absorb this many new Ph.D.s from (one of) the scientific wings of modern anthropology.

Two of the talks immediately before me in my session were, in fact, by young scientists and they were great. Andrew Paquette, from Northern Arizona University, gave a talk on the evolutionary history of Southeast Asian Ovalocytosis (SAO), a twenty-seven base pair deletion in the eleventh exon of the SLC4A1 gene that confers strong protection against infection with Plasmodium falciparum, the most dangerous form of malaria. Turns out this mutation, which has its geographic epicenter in Nusa Tenggara in Indonesia, is surprisingly ancient. Lots more to come from this, I'm sure. Margaux Keller, from Temple, gave a fantastic talk on finding some of the missing heritability in Parkinson's disease. Missing heritability of complex disease phenotypes is a major topic in genetic epidemiology and Margaux and her colleagues applied Genome-Wide Complex Trait Analysis to eight cohorts of case-control studies of PD. Their results substantially increase (i.e., by a factor of 10!) the fraction of total phenotypic variance in PD explained compared to straight-up genome-wide association studies (GWAS). In addition to the excellent scientific content of her presentation, I was struck by the very nice and original visual aesthetic of her slides.

I spoke on my recent work on the quantitative genetics of life-history traits.  With Statistics grad student Philip Labo, I've been doing some pretty serious number-crunching to examine the heritabilities of and (more interestingly) genetic correlations between human life-history characters. Good results that should be seeing some more light soon (including at PAA next month!).

That's How Science Works

The RealClimate blog has a very astute entry on the controversy surrounding the recent report in the prestigious journal Science that bacteria living in the arsenic-rich waters of Mono Lake in California can substitute arsenic for phosphorus in their DNA.  If true, this would be a major finding because it expands the range of environments in which we could conceivably find extraterrestrial life.  In effect, this result would suggest a wider range of building blocks for life.  Pretty heavy stuff. Now, I am way out of my depth on this topic, but it sounds like the paper published in Science suffers from some fairly serious problems. Some of the problems noted by experts in the field have been assembled by Carl Zimmer on his blog.  Carl also provides a pithy treatment of the controversy in an article at Slate.com. John Roach has a similarly excellent review of the controversy, including what we learn about science from it, on his Cosmic Log blog.

Regardless of the scientific merits of this work, this episode is actually an instructive example of the way that science works. As the RealClimate folks write,

The arseno-DNA episode has displayed this process in full public view. If anything, this incident has demonstrated the credibility of scientists, and should promote public confidence in the scientific establishment.

The post then goes on to list three important lessons we can draw from this whole incident:

  1. "Major funding agencies willingly back studies challenging scientific consensus." It helps if the challenge to scientific consensus is motivated by carefully reasoned theoretical challenges or, even better, data that challenge the consensus.  Some yahoo saying that evolution is "just a theory" or that climate change isn't real because it was really cold last winter isn't enough. In the case of arseno-DNA, as Carl Zimmer notes, the National Academy of Sciences published a report in 2007 that suggested the theoretical possibility of arsenic-based biology.  Carl also notes that some of the authors of this report are highly critical of the Science paper as well. The report challenged the orthodoxy that phosphate was a necessary building block of DNA, and the report's author's later called out NASA (the major funding source for this kind of extreme biology) for publishing sloppy science.  Lots of orthodoxy being challenged here...
  2. "Most everyone would be thrilled to overturn the consensus. Doing so successfully can be a career-making result. Journals such as Science and Nature are more than willing to publish results that overturn scientific consensus, even if data are preliminary – and funding agencies are willing to promote these results." Individual scientists have enormous individual and institutional incentives to overturn orthodoxies if it is within their power. You become a star when you pull this feat off. And you better believe that every funding agency out there would like to take credit for funding the critical research that helped overturn a fundamental scientific paradigm.
  3. "Scientists offer opinions based on their scientific knowledge and a critical interpretation of data. Scientists willingly critique what they think might be flawed or unsubstantiated science, because their credibility – not their funding – is on the line." As a scientist, you have to do this if you are going to be taken seriously by your peers -- you know, the ones who do all that peer review that climate deniers, e.g., seem to get their collective panties in a wad about?

The RealClimate piece summarizes by noting:

This is the key lesson to take from this incident, and it applies to all scientific disciplines: peer-review continues after publication. Challenges to consensus are seriously entertained – and are accepted when supported by rigorous data. Poorly substantiated studies may inspire further study, but will be scientifically criticized without concern for funding opportunities. Scientists are not "afraid to lose their grant money".

Read the RealClimate post to get the full story. Obviously, these authors (who do excellent science and amazing public education work, a rare combination) are interested in what this controversy has to say about accusations of bias in climate science -- check out the RealClimate archives for some back-story on this. However, the post is so much more broadly applicable, as they note in the quote above. Science is not a monolithic body of information; it is a process, a system designed to produce positive (as opposed to normative) statements about the world around us. When it works correctly, science is indifferent to politics or the personal motivations of individual scientists because results get replicated.  Everything about a scientific paper is designed to allow other researchers to replicate the results that are presented in that paper.  If other researchers can't replicate some group's findings, those findings become suspect (and get increasingly so as more attempts to replicate fail).

So what does this mean for Anthropology as a science? You may remember that there has been some at times shrill "discussion" (as well as some genuine intellectual discussion) about the place for science in Anthropology and the American Anthropological Association in particular. For me, replicability is a sine qua non of science. The nature of much anthropological research, particularly research in cultural anthropology, makes the question of replication challenging. When you observe some group of people behaving in a particular way in a particular place at a particular time, who is to say otherwise? I don't claim to have easy answers here, but there are a few things we can do to ensure the quality of our science.

First, we need to have scientific theories that are sufficiently robust that they can generate testable predictions that transcend the particularities of time and place. Results generated in one population/place/time can then be challenged by testing in other populations/places/times. Of course, it is of the utmost importance that we try to understand how the differences in population and place and time will change the results, but this is what our research is really about, right?  When we control for these differences, do we still see the expected results?

Second, we need to be scrupulous in documenting our results and the methods we employ to generate them.  You know, like science? It's never easy to read someone else's lab notebook, but we need to be able to do this in anthropology, at least in principle.  Going back to the raw data as they are reduced in a lab notebook or its equivalent is probably the primary means through which scientific fraud is discovered. Of course, there are positive benefits to having scrupulously kept field notes as well.  They serve as a rich foundation for future research by the investigator, for instance.

Third, we need to be willing to share our data. This is expected in the natural sciences (in fact, it is a condition for publication in journals like Science and Nature) and it should be in Anthropology as well.

I think that the points of the RealClimate post all apply to anthropology as well. Surrounding the latest brouhaha on science in anthropology, one hears a lot of grousing about various cartels (e.g., the AAA Executive Board, the editorial boards of various journals, etc.) that keep anthropologists of different stripes (yes, it happens on both sides) from receiving grants or getting published or invited to serve on various boards, etc. Speaking from my experience as both panelist and applicant, I can confidently say that the National Science Foundation's Cultural Anthropology Program funds good cultural anthropology employing a variety of different approaches (there are also other BCS programs that entertain, and sometimes fund, applications from anthropologists), and the panel will happily fund orthodoxy-busting proposals if they are sufficiently meritorious.  The editorial position of American Ethnologist not in line with your type of research?  If you've done good science, there are lots of general science journals that will gladly take interesting and important anthropology papers (and, might I add, have much higher impact factors). I co-authored a paper with Rebecca and Doug Bird that appeared in PNAS not too long ago. Steve Lansing has also had a couple of nice papers in PNAS, as have Richard McElreath, Herman Pontzer, and ... a bunch of other anthropologists!  Mike Gurven at UCSB has had some luck getting papers into Proceedings of the Royal Society B.  Mhairi Gibson and Ruth Mace have papers in Biology Letters and PLoS Medicine.  Rebecca Sear has various papers in Proceedings of the Royal Society B. Monique Borgerhoff Mulder and a boat-load of other anthropologists (and at least one economist) have a paper in Science. Ruth Mace has papers in most of these journals as well as at least one in Science. Rob Boyd, Richard McElreath, Joe Henrich, and I all even have papers about human social behavior, culture, etc. in theoretical biology journals such as Theoretical Population Biology and the Journal of Theoretical Biology. There's lots more.  As with my previous post, this is a total convenience sample of work with which I am already familiar. The point is that there are outlets for good scientific anthropology out there even if people like me are unlikely to publish in journals like PoLAR.

So, I'm sanguine about the process of science and the continuing ability for anthropologists to pursue science. My winter break is drawing to a close and I'm going to try to continue some of this myself!

On Husserl, Hexis, and Hissy-Fits

There has been quite a brouhaha percolating through some Anthropology circles following the annual meeting of the American Anthropological Association in New Orleans last month.  It seems that the AAA executive board, in all its wisdom, has seen fit to excise the term "science" from the Association's long-range planning document. You can sample some of the reaction to this re-write in blog posts from anthropologi.info, Neuroanthropology, Evolution on the Beach, AAPA BANDIT, Inside HigherEd, and Fetishes I Don't Get at Psychology Today. There is also a letter from AAA president Virginia Dominguez here, and you can find the full text of the planning document here. The primary concern has centered on the first paragraph of this document.  Here is that paragraph as it stood before the November meeting:

The purposes of the Association shall be to advance anthropology as the science that studies humankind in all its aspects, through archeological, biological, ethnological, and linguistic research; and to further the professional interests of American anthropologists; including the dissemination of anthropological knowledge and its use to solve human problems.

The new wording is as follows:

The purposes of the Association shall be to advance public understanding of humankind in all its aspects. This includes, but is not limited to, archaeological, biological, social, cultural, economic, political, historical, medical, visual, and linguistic anthropological research.  The Association also commits itself and to further the professional interests of anthropologists, including the dissemination of anthropological knowledge, expertise, and interpretation.

So, anthropology is no longer a science, though there are lots of rather particularistic approaches through which one can pursue anthropology that may or may not be scientific.  Apparently, the executive board has a newfound passion for public communication as well.  I guess we don't really need an organization that promotes scholarly understanding or the production of new knowledge.  Just look where that's gotten us!

The new wording has greatly concerned a number of parties, including the Society for Anthropological Sciences.  I am a member of this section and have never seen so much traffic on the society's listserv.

I will admit to being somewhat dismayed by the Society's response.  While I am not quite as tweaked by this as many, I nonetheless wrote a longish call for specific action -- one that involved good old-fashioned political organizing and attempting to forge alliances both with other sections within AAA and across other scholarly societies with an interest in anthropology (e.g., AAPA, HBA, SAA, HBES).  My call was greeted with a deafening (virtual) silence and I am left to guess why.  Perhaps the membership is suspicious of the imperialist ambitions of a biological anthropologist with the taint of evolution on him?  Perhaps they've heard and tried it all before and were simply convinced it would not work?  Perhaps they actually like being an embattled minority and don't really want to take action to jeopardize that status?

To what extent is the scandal a tempest in a teapot?  I honestly don't know.  The word "science" has been taken out of the first paragraph but there is nothing inherently anti-scientific about the statement.  After all, "advancing public understanding" can be done through "archaeological, biological, social, cultural, economic, political, historical, medical, visual, and linguistic anthropological research." Any number of these can be done through a scientific approach to understanding.

The thing that I find completely bizarre about the new wording is the exclusive focus on public understanding.  Public understanding? Really? Judging from my recent search committee and scientific review panel experience, I can only conclude that the public must have an insatiable hunger for phenomenology.  This explains why I can never find any Husserl at Barnes and Noble -- he must just be flying off the shelves!  You'd think that if the goal of our flagship professional organization were really to promote public understanding, more anthropologists would write in a manner that was generally understandable to, you know, the public.  In his distinguished lecture, the eminent archaeologist Jeremy Sabloff chastised anthropologists for their unwillingness to engage with the general public.  I could not agree with this perspective more, especially if "engaging with the public" entails engaging with colleagues from cognate disciplines, another thing that I think we do a miserable job of, in general.

I was a bit disappointed to read Alex Golub's write-up of this issue on the Savage Minds blog.  I'm usually a big fan of both this blog and Alex's posts more generally. However, in this case I think that Alex engages in the kind of ahistorical, totalizing stereotyping of scientific anthropologists that normally gives anthropologists the willies.  Advocates of science are characterized as closed-minded automata, utterly lacking any appreciation for ambiguity, historicity, politics, or contested meaning.  For example, he writes:

The fact that the model used by 'scientific' anthropologists has as much complexity as an average episode of WWE Smackdown -- with a distinction between the evil 'fluff-head' cultural anthropologists and the good 'scientific' cultural anthropologists -- should be the first sign that something fishy is going on.

Très nuanced, eh?

The statements made by many scientific anthropologists, particularly those of the generation that entered the profession in the 1960s and 1970s, need to be understood in the historical and political context of the speakers.  I think that it is simply disingenuous to claim that scientific approaches to anthropological knowledge have not become increasingly marginalized within the mainstream of anthropology over the last several decades.  One need only look at what has become of the departments that were home to the vaunted physical anthropology programs of the past to find evidence of this trend. Consider, for example, the University of Chicago, the University of California, Berkeley, or Columbia University.  And this is just biological anthropology; it does not account for the loss of scientific social and cultural anthropologists (think Gene Hammel or Roy D'Andrade) in elite, Ph.D.-granting programs. The reasons for the marginalization of scientific approaches to anthropology are complex and do not fit neatly into the simplistic narrative of "objective, scientific anthropology ... under assault from interpretivists like Clifford Geertz who do not believe in truth." No doubt, part of the problem is simply the compartmentalization of knowledge.  As scholars become increasingly specialized, it becomes more and more difficult to be both scientist and humanist.  Increasingly, hiring decisions are zero-sum games: the gain of a scientist represents the loss of a humanist and vice-versa. Gone is Eric Wolf's conception of Anthropology as "both the most scientific of the humanities and the most humanist of the sciences."

The key point is that the declining importance of science in elite anthropology departments has led to a feeling of embattlement -- one that is almost certainly counter-productive most of the time -- among the remaining scientific anthropologists. Another consequence is that the decline of the place of science within anthropological discourse selects for personalities who thrive on embattlement, so that the reproduction of the field is relatively enriched with young scholars who see no point in professional or intellectual engagement. And so it gets more and more difficult to integrate.  This is the lens through which I view much of the public complaining about the recent actions of the AAA executive board. However, as my colleague Rebecca Bird noted, those of us who still see a place for science in anthropology need to move beyond reactionary statements.  We need to be proactive and positive.

The academy is changing. This can be seen in the increasing number of cross-cutting requests for proposals from funding agencies such as NSF (e.g., HSD, EID, CHNS) or NIH and in the wholesale re-organization of many research universities (ASU is only the most extreme case; the ascendancy of interdisciplinary centers such as the Woods Institute for the Environment or the Freeman Spogli Institute for International Studies at Stanford is a more common manifestation of this trend; the Columbia Earth Institute also comes to mind).  In an academy that increasingly values transdisciplinarity and the integration of knowledge, I think that anthropologists have an enormous comparative advantage -- if we could just get over ourselves.  As I wrote in my 2009 Anthropology News piece:

Four-field anthropology is a biosocial discipline that integrates information from all levels of biological and social organization. To understand human behavior, the four-field anthropologist considers genetics and physiology; the history of the human lineage; historical, cultural and social processes; the dynamics of face-to-face interactions; and global political economy. Each of these individual areas is studied by other disciplines, but no other field provides the grounding in all, along with the specific mandate to understand the scope of human diversity. The anthropologist stands in a unique position to serve as the fulcrum upon which the quality of an interdisciplinary research team balances. Revitalizing the four-subfield approach to anthropological training could move anthropology from the margins of the interdisciplinary, research-based academy of the near future to the core.

I have no interest in disparaging other forms of knowledge or excluding particular types of scholars from any social movement, but I think that scientific anthropologists have a particularly important role to play in such a revitalization, if for no other reason than that they (presumably) care about more of these levels of organization.  Maybe such scholars could even communicate the subtlety and richness of ethnographic experience that our more humanistic colleagues so value, if we could just get beyond the name-calling.

I may be dismissed as being naively optimistic by the old guard of scientific anthropologists (hypothesis 2, above), but I think that I have good reasons to be optimistic about the future of anthropology, despite the many challenges. This optimism stems from the work of individual anthropologists.  I'll do a quick shout-out to a number of people who I think are doing particularly good work, integrating different anthropological perspectives, and communicating with a broader audience.  This is a very personal and idiosyncratic list -- these scholars are people I've encountered recently or whose work has been brought to my attention of late. They tend to be focused on questions of health and human-environment interactions, naturally, since these are the issues that organize my research.

If you want to feel good about the future of a scientific anthropology that is simultaneously integrated into contemporary anthropology and communicates with a broader scientific and policy audience (and is generally great and transformative -- that key NSF buzz word), check out the work of:

  • Craig Hadley at Emory on food security and psychological well-being
  • Amber Wutich at ASU on vulnerability, water security, and common-pool resources
  • Lance Gravlee at UF on the embodiment of racial discrimination and its manifestations in health
  • Brooke Scelza at UCLA on parental investment and childhood outcomes
  • Dan Hruschka at ASU on how cultural beliefs, norms and values interact with economic constraints to produce health outcomes
  • Crickette Sanz at Washington University on multi-ape ecology of the Goualougo Triangle, Republic of Congo
  • Herman Pontzer at CUNY on measuring daily energy expenditures in hunter-gatherers
  • Rebecca and Douglas Bird on subsistence and signaling among Martu foragers

This list could go on. I won't even mention the amazing anthropology post-docs, Siobhan Mattison, Sean Downey, and Brian Wood, with whom I have been so lucky to interact this academic year.

I have plenty more to say on this -- particularly how the portrayal of politics and political agendas enters the discourse -- but I have final exams to grade!

An Alternate Course Load for the Game of Life

In a recent editorial in the New York Times, Harvard economist and former chairman of the Council of Economic Advisers N. Gregory Mankiw provides some answers to the question, "what kind of foundation is needed to understand and be prepared for the modern economy?"  Presumably, what he means by "modern economy" is life after college.  Professor Mankiw suggests that students of all ages learn something about the following subjects: economics, statistics, finance, and psychology.  I read this with interest, and doing so made me think of my own list, which is rather different from the one offered by Mankiw. I will take up the instrumental challenge, making a list of subjects that I think will be useful in an instrumental sense -- i.e., in helping graduates become successful in the world of the twenty-first century. In no way do I mean to suggest that students cannot be successful if they don't follow this plan, for, like Mankiw, I agree that students should feel free to ignore advice as they see fit. Education is about discovery as much as anything, and there is much to one's education that transcends instrumentality -- going to college is not simply about preparing people to enter "the modern economy," even if it is a necessary predicate for success in it.

People should probably know something about economics.  However, I'm not convinced that what most undergraduate students are taught in their introductory economics classes is the most useful thing to learn. Contemporary economics is taught as an axiomatic discipline.  That is, a few foundational axioms (i.e., a set of primitive assumptions that are not proved but considered self-evident and necessary) are presented, and from these, theorems can be derived.  Theorems can then be logically proven by recourse to the axioms or to other already-proven theorems. Note that this is not about explaining the world around us.  It is really an exercise in rigorously defining normative rules for how people should behave and what the consequences of such behavior would be, even if actual people don't follow such prescriptions. Professor Mankiw has written a widely used textbook in introductory economics. In the first chapter of this book, we see this axiomatic approach on full display.  We are told not-unreasonable things like "People Face Trade-Offs" or "The Cost of Something is What You Give Up to Get It" or "Rational People Think at the Margin." I couldn't agree more with the idea that people face trade-offs, but I nonetheless think there are an awful lot of problematic aspects to these axioms.  Consider the following paragraph (p. 5):

Another trade-off society faces is between efficiency and equality. Efficiency means that society is getting the maximum benefits from its scarce resources. Equality means that those benefits are distributed uniformly among society’s members. In other words, efficiency refers to the size of the economic pie, and equality refers to how the pie is divided into individual slices.

Terms like "efficiency" and "maximum benefits" are presented as unproblematic, as is the idea that there is a necessary trade-off between efficiency and equality.  Because it is presented as an axiom, contemporary economic theory apparently allows no possibility for equality in efficient systems. Inequality is naturalized and thereby legitimized. It seems to me that this should be an empirical question, not an axiom. In his recent book, The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences, Herb Gintis provides a very interesting discussion of the differences between two highly formalized (i.e., mathematical) disciplines, physics and economics.  Gintis notes, "By contrast [to the graduate text in quantum mechanics], the microeconomics text, despite its beauty, did not contain a single fact in the whole thousand page volume. Rather, the authors build economic theory in axiomatic fashion, making assumptions on the basis of their intuitive plausibility, their incorporation of the 'stylized facts' of everyday life, or their appeal to the principles of rational thought."

If one is going to learn economics, "the study of how society manages its scarce resources" -- and I do believe people should -- I think one should (1) learn how resources are actually managed by real people and real institutions and (2) learn some theory that focuses on strategic interaction.  A strategic interaction occurs when the best choice a person can make depends upon what others are doing (and vice-versa). The formal analysis of strategic interactions is done with game theory, a field typically taught in economics classes but also found in political science, biology, and, yes, even anthropology. Alas, this is generally considered an advanced topic, so you'll have to go through all the axiomatic nonsense to get to the really interesting stuff.
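To make "strategic interaction" concrete, here's a toy sketch of my own (made-up payoffs; Python only because this post has no code of its own): a two-player stag hunt in which we check, by brute force, which outcomes are equilibria -- outcomes where neither player can do better by unilaterally switching, given what the other is doing.

```python
import itertools

# A toy two-player stag hunt (payoffs made up) to show what "strategic interaction"
# means: each player's best choice depends on what the other player chooses.
# payoffs[(my_move, your_move)] = (my_payoff, your_payoff)
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
moves = ["stag", "hare"]

def is_equilibrium(a, b):
    """Neither player can gain by unilaterally switching moves."""
    a_best = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in moves)
    b_best = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in moves)
    return a_best and b_best

for a, b in itertools.product(moves, moves):
    if is_equilibrium(a, b):
        print(f"Equilibrium: {a} / {b}")
```

Both all-stag and all-hare come out as equilibria, which is exactly the sense in which your best choice depends on what the other player does.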

OK, that was a bit longer than I anticipated. Whew.  On to the other things to learn...

Learn something about sociology. Everyone could benefit from understanding how social structures, power relations, and human stocks and flows shape the socially possible. Understanding that social structure and power asymmetries constrain (or enable) what we can do and even what we think is powerful, and it lets us ask important questions not only about our own society but also about those of the people with whom we sign international treaties, or engage in trade, or wage war. Some of the critical questions that sociology helps us ask include: Who benefits by making inequality axiomatic? Does the best-qualified person always get the job? Is teen pregnancy necessarily irrational? Do your economic prospects depend on how many people were born the same year as you were? How does taste reflect one's position in society?

People should definitely learn some statistics. Here, Professor Mankiw and I are in complete agreement.

Learn about people other than those just like you. The fact that we live in an increasingly global world is rapidly becoming the trite fodder of welcome-to-college speeches by presidents, deans, and other dignitaries. Of course, just because it's trite doesn't make it any less true, and despite the best efforts of homogenizing American popular and consumer culture, not everyone thinks or speaks like us or has the same customs or same religion or system of laws or healing or politics. I know; it's strange. One might learn about other people in an anthropology class, say, but there are certainly other options. If anthropology is the chosen route, I would recommend that one choose carefully, making certain that the readings for any candidate anthropology class be made up of ethnographies and not books on continental philosophy. Come to grips with some of the spectacular diversity that characterizes our species. You will be better prepared to live in the world of the twenty-first century.

Take a biology class. If the twentieth century was the century of physics, the twenty-first century is going to be the century of biology.  We have already witnessed a revolution in molecular biology that began around the middle of the twentieth century and continued to accelerate throughout its last decades and into the twenty-first. Genetics is creeping into lots of things our parents would not even have imagined: criminology, law, ethics. Our decisions about our own health and that of our loved ones will increasingly be informed by molecular genetic information. People should probably know a thing or two about DNA. I shudder at popular representations of forensic science and worry about a society that believes what it sees on CSI somehow represents reality. I happen to think that when one takes biology, one should also learn something about organisms, but this isn't always an option if one is going to also learn about DNA.

Finally, learn to write.  Talk about comparative advantage! I am continually blown away by the poor preparation that even elite students receive in written English. If you can express ideas in writing clearly and engagingly, you have a skill that will carry you far. Write as much as you possibly can.  Learn to edit. I think editing is half the problem with elite students -- they write things at the last minute and expect them to be brilliant.  It doesn't work that way. Writing is hard work, and well-written texts are always well-edited texts.

The Igon Value Problem

Priceless. Steve Pinker wrote a spectacular review of Malcolm Gladwell's latest book, What the Dog Saw and Other Adventures, in the New York Times today. I regularly read and enjoy Gladwell's essays in the New Yorker, but I find his style sometimes problematic, verging on anti-intellectual, and I'm thrilled to see a scientist of Pinker's stature calling him out.

Pinker coins a term for the problem with Gladwell's latest book and his work more generally.  Pinker's term, "the Igon Value Problem," is a clever play on the eigenvalue problem in mathematics.  You see, Gladwell apparently quotes someone referring to an "igon value." This is clearly a concept he has never dealt with himself, even though it is a ubiquitous tool in the statistics and decision science about which Gladwell is frequently so critical.  According to Pinker, the Igon Value Problem occurs "when a writer’s education on a topic consists in interviewing an expert," leading him or her to offer "generalizations that are banal, obtuse or flat wrong."  In other words, the Igon Value Problem is one of dilettantism.  Now, this is clearly a constant concern for any science writer, who has the unenviable task of rendering extremely complex and frequently quite technical information down to something that is simultaneously accurate, understandable, and interesting. However, when the bread and butter of one's work involves criticizing scientific orthodoxy, it seems like one needs to be extremely vigilant about getting the scientific orthodoxy right.

Pinker raises the extremely important point that the decisions we make using the formal tools of decision science (and cognate fields) represent solutions to the inevitable trade-offs between information and cost.  This cost can take the form of financial cost, time spent on the problem, or computational resources, to name a few. Pinker writes:

Improving the ability of your detection technology to discriminate signals from noise is always a good thing, because it lowers the chance you’ll mistake a target for a distractor or vice versa. But given the technology you have, there is an optimal threshold for a decision, which depends on the relative costs of missing a target and issuing a false alarm. By failing to identify this trade-off, Gladwell bamboozles his readers with pseudoparadoxes about the limitations of pictures and the downside of precise information.
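To make the trade-off Pinker describes concrete, here is a toy sketch (entirely my own numbers, nothing from the review): targets and distractors produce noisy evidence, misses cost more than false alarms, and the "optimal threshold" is simply the cut-off that minimizes the expected cost.

```python
import numpy as np
from scipy.stats import norm

# A toy version of the signal-detection trade-off Pinker describes: evidence for
# targets and distractors is noisy, misses cost more than false alarms, and the
# optimal threshold minimizes the expected cost. All numbers are made up.
noise_mean, signal_mean, sd = 0.0, 1.5, 1.0
p_signal = 0.1                          # base rate of true targets
cost_miss, cost_false_alarm = 10.0, 1.0

thresholds = np.linspace(-3, 5, 801)
p_miss = norm.cdf(thresholds, loc=signal_mean, scale=sd)    # target's evidence falls below threshold
p_fa = 1 - norm.cdf(thresholds, loc=noise_mean, scale=sd)   # distractor's evidence exceeds threshold
expected_cost = p_signal * cost_miss * p_miss + (1 - p_signal) * cost_false_alarm * p_fa

best = thresholds[np.argmin(expected_cost)]
print(f"Cost-minimizing threshold: {best:.2f}")
```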

Pinker is particularly critical of an analogy Gladwell draws in one of his essays between predicting the success of future teachers and of future professional quarterbacks.  Both are difficult decision tasks fraught with uncertainty.  Predicting whether an individual will be a quality teacher based on his or her performance on standardized tests or the presence or absence of teaching credentials is an imperfect process, just as predicting the success of a quarterback in the N.F.L. based on his performance at the collegiate level is.  Gladwell argues that anyone with a college degree should be allowed to teach and that the determination of qualification for the job, beyond the college degree, should only be made after they have taught. This solution, he argues, is better than the standard practice of credentialing, evaluating, and "going back and looking for better predictors.” You know, science? Pinker doesn't hold back in his evaluation of this logic:

But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off. Nor does the absurdity of this solution for professional athletics (should every college quarterback play in the N.F.L.?) give Gladwell doubts about his misleading analogy between hiring teachers (where the goal is to weed out the bottom 15 percent) and drafting quarterbacks (where the goal is to discover the sliver of a percentage point at the top).

This evaluation is spot-on. As a bit of an aside, the discussion of predicting the quality of prospective quarterbacks also reminds me of one of the great masterpieces of statistical science, and the approach described by this paper certainly has a bearing on the types of predictive problems about which Gladwell ruminates.  In a 1975 paper, Brad Efron and Carl Morris present a method for predicting 18 major league baseball players' 1970 season batting averages based on their first 45 at-bats. The naïve method for predicting (no doubt, the approach Gladwell's straw "we" would take) is simply to use the average after the first 45 at-bats. It turns out there is a better way to solve the problem, in the sense that you can make more precise predictions (though hardly clairvoyant ones).  The method turns on what a Bayesian would call "exchangeability."  Basically, the idea is that being a major league baseball player buys you a certain baseline prediction for the batting average.  So if we combine the overall average across the 18 players with each individual's average in a weighted manner, we can make a prediction that has less variation in it.  A player's average after a small number of at-bats is a reflection of his abilities but also of lots of forces that are out of his control -- i.e., that are due to chance.  Thus, the uncertainty we have in a player's batting based on this small record is partly due to the inherent variability in his performance but also due to sampling error.  By pooling across players, we borrow strength and remove some of this sampling error, allowing us to make more precise predictions. This approach is lucidly discussed in great detail in my colleague Simon Jackman's new book, draft chapters of which we used when we taught our course on Bayesian statistical methods for the social sciences.
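For the curious, here is a minimal empirical-Bayes sketch of the pooling idea -- a simplified stand-in for Efron and Morris's actual estimator, which works on a transformed scale; the batting numbers below are made up, not the 1970 data.

```python
import numpy as np

# Toy hit counts after 45 at-bats for 18 players (made-up numbers, not the 1970 data)
at_bats = 45
hits = np.array([18, 17, 16, 15, 14, 14, 13, 13, 12,
                 12, 11, 11, 10, 10, 10, 9, 9, 7])
obs = hits / at_bats                                    # each player's raw batting average

grand_mean = obs.mean()
sampling_var = grand_mean * (1 - grand_mean) / at_bats  # binomial noise in a 45-at-bat average
between_var = max(obs.var(ddof=1) - sampling_var, 0.0)  # estimated true spread across players

# Shrinkage weight: the noisier the raw averages, the more we pool toward the grand mean
shrink = sampling_var / (sampling_var + between_var)
pooled = shrink * grand_mean + (1 - shrink) * obs

for raw, est in zip(obs, pooled):
    print(f"raw {raw:.3f} -> shrunk {est:.3f}")
```

Each raw average gets pulled toward the grand mean, and the pull is strongest when the noise in a 45-at-bat average is large relative to the true spread across players.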

Teacher training and credentialing can be thought of as strategies for ensuring exchangeability among teachers, aiding the prediction of teacher performance.  I am not an expert, but it seems like we have a long way to go before we can make good predictions about who will become an effective teacher and who will not.  This doesn't mean that we should stop trying.

Janet Maslin, in her review of What the Dog Saw, waxes enthusiastic about Gladwell's scientific approach to his essays. She writes that the dispassionate tone of his essays "tames visceral events by approaching them scientifically." I fear that this sentiment, like the statements made in so many Gladwell works, reflects the great gulf between most educated Americans and the realities of scientific practice (we won't even talk about the gulf between less educated Americans and science).  Science is actually a passionate, messy endeavor, and sometimes we really do get better by going back and finding better predictors.

Risk-Aversion and Finishing One's Dissertation

It's that time of year again, it seems, when I have lots of students writing proposals to submit to NSF to fund their graduate education or dissertation research.  This always sets me to thinking about the practice of science and how one goes about being a successful scientist. I've written about "productive stupidity" before, and I still think that is very important. Before I had a blog, I composed a series of notes on how to write a successful NSF Doctoral Dissertation Improvement Grant, after seeing the same mistakes over and over again while sitting on the Cultural Anthropology panel.

This year, I find myself thinking a lot about what Craig Loehle dubbed "the Medawar Zone." This is a nod to the great British scientist Sir Peter Medawar, whose book, The Art of the Soluble: Creativity and Originality in Science, argued that the best kind of scientific problems are those that can be solved.  In his classic (1990) paper, Loehle argues that "there is a general parabolic relationship between the difficulty of a problem and its likely payoff." Re-reading this paper got me thinking.

In Loehle's figure 1, he defines the Medawar Zone.  I have reproduced a sketch of the Medawar Zone here.

[Figure: a sketch of the Medawar Zone, after Loehle's figure 1.]

Now, what occurred to me on this most recent reading of the paper is that for the net payoff curve to look like this, the benefits must be concave in the difficulty of the problem.  That is, they show diminishing marginal returns to increased difficulty.  It is hard to say what the cost curve with difficulty would be -- linear? convex? Either way, there is an intermediate maximum (akin to Gadgil and Bossert's analysis of intermediate levels of reproductive effort), and the best plan is to pick a problem of intermediate difficulty because that is where the scientific benefits, net of the costs, are maximized.
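Here is a tiny numerical version of that logic; the square-root benefit and linear cost are just my assumptions, picked to make the concavity concrete.

```python
import numpy as np

# A toy version of Loehle's argument: concave benefits plus (roughly) linear costs
# yield a net payoff that peaks at intermediate difficulty -- the Medawar Zone.
difficulty = np.linspace(0.01, 10, 500)
benefit = np.sqrt(difficulty)     # diminishing marginal returns to difficulty (my assumption)
cost = 0.2 * difficulty           # a simple linear cost of tackling harder problems (my assumption)
net = benefit - cost

best = difficulty[np.argmax(net)]
print(f"Net payoff peaks at intermediate difficulty ~{best:.2f}")
```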

Suppose that a dissertation is a risky endeavor.  This is not hard for me to suppose, since I know many people from my grad school days who had at least one failed dissertation project.  Sometimes this led to choosing another, typically less ambitious, project.  Sometimes it led to an exit from grad school, sans Ph.D.  Stanford (like Harvard now, but not when I was a student) funds its Ph.D. students for effectively the entirety of their Ph.D.  This is a great thing for students, because nothing interferes more with your ability to think and be intellectually productive than worrying about how you're going to pay the rent.  The downside of this generous funding is that students do not have much time to come up with an interesting dissertation project, write grants, go to the field, collect data, and write up before their funding runs out. So, writing a dissertation is risky.  There is always a chance that if you pick too hard a problem, you might not finish in time and your funding will run out. Well, it just so happens that the combination of a concave utility function and a risk of failure is pretty much the definition of a risk-averse decision-maker.

Say there is an average degree of difficulty in a field.  A student can choose to work on a topic that is more challenging than the average but there is the very real chance that such a project will fail and in order for the student to finish the Ph.D., she will have to quickly complete work on a problem that is easier than the average.  Because the payoff curve with difficulty is concave, it means that the amount you lose relative to the mean if you fail is much greater than the amount you gain relative to the mean if you succeed.  That is, your downside cost is much greater than your upside benefit.

[Figure: a concave payoff curve illustrating risk aversion.]

In the figure, note that d1 >> d2.  Here, I have labeled the ordinate as w, which is the population genetics convention for fitness (i.e., the payoff).  x-bar is the mean difficulty, while x2 and x1 are the high- and low-difficulty projects, respectively.

The way that economists typically think about risk-aversion is that a risk-averse agent is one who is willing to pay a premium for certainty.  This certainty premium is depicted by the dotted line stretching back horizontally from the vertical dashed line at x = x-bar to the utility curve.  The certain payoff the agent is willing to accept in lieu of the uncertain mean is where this dotted line hits the utility curve. Being at this point on the utility curve (where you have paid the certainty premium) probably puts you at the lower end of the Medawar Zone envelope, but hopefully, you're still in it.
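And here is a small numerical version of that certainty premium, with a square-root curve standing in for the concave payoff and entirely made-up numbers.

```python
import numpy as np

# A toy version of the certainty premium: w(x) = sqrt(x) stands in for the concave
# payoff curve in the figure; payoffs and probabilities are made up.
w = np.sqrt

p_success = 0.5
x_hard, x_easy = 9.0, 1.0     # payoff if the ambitious project works vs. the fallback project
expected_payoff = p_success * x_hard + (1 - p_success) * x_easy
expected_utility = p_success * w(x_hard) + (1 - p_success) * w(x_easy)

certainty_equivalent = expected_utility ** 2    # invert w = sqrt to get back to payoff units
premium = expected_payoff - certainty_equivalent

print(f"E[payoff] = {expected_payoff:.2f}, certainty equivalent = {certainty_equivalent:.2f}")
print(f"Certainty premium the student would 'pay' for the sure thing: {premium:.2f}")
```

With these numbers, the gamble's expected payoff is 5, but the student should be indifferent between the gamble and a sure project worth only 4; the difference is the premium paid for certainty.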

I think that this very standard analysis actually provides the graduate student with pretty good advice. Pick a project you can do and maybe be a bit conservative.  The Ph.D. isn't a career – it's a launching point for a career. The best dissertation, after all, is a done dissertation.  While I think this is sensible advice for just about anyone working on a Ph.D., the thought of science progressing in such a conservative manner frankly gives me chills.  Talk about a recipe for normal science!  It seems what we need, institutionally, is a period in which conservatism is not the best option. This may just be the post-doc period.  For me, my time at the University of Washington (CSSS and CSDE) was a period when I had unmitigated freedom to explore methods relevant to what I was hired to do.  I learned more in two years than in – I'd rather not say how many – years of graduate school. The very prestigious post-doctoral programs such as the Miller Fellowships at Berkeley or the Society of Fellows at Harvard or Michigan seem like they are specifically designed to provide the environment where the concavity of the difficulty-payoff curve is reversed (favoring gambles on more difficult projects).

There is, unfortunately, a folklore that has diffused to me through graduate student networks that says that anthropologists need to get a faculty position straight out of their Ph.D. or they will never succeed professionally.  This is just the sort of received wisdom that makes my skin crawl and, I fear, is far too common in our field.  If our hurried-through Ph.D.s can't take the time to take risks, when can we ever expect them to do great work and solve truly difficult problems?

On Intelligence

Nicholas Kristof has an interesting Op-Ed piece this week in the Times.  Reporting on University of Michigan Professor Richard Nisbett's new book, Intelligence and How to Get It, Kristof argues for the general malleability of intelligence.  He writes,

If intelligence were deeply encoded in our genes, that would lead to the depressing conclusion that neither schooling nor antipoverty programs can accomplish much. Yet while this view of I.Q. as overwhelmingly inherited has been widely held, the evidence is growing that it is, at a practical level, profoundly wrong.

I think that this is an important point that is worth pursuing.  There is indeed a widely held view that intelligence is "genetically determined" (whatever that means -- how you define it matters), perhaps most infamously articulated in Charles Murray and Richard Herrnstein's book, The Bell Curve. This idea comes from numerous studies of the correlation of relatives' scores on standardized intelligence tests, the most common design for which is the twin study.  The basic idea is that you compare the concordance in test scores of monozygotic (i.e., genetically identical) twins with that of dizygotic twins, who share on average only 50% of their genes.  The assumption is that both monozygotic and dizygotic twins share the same rearing environment.  Therefore, differences that appear in the observed concordance should be attributable to genes.

Twin studies show that IQ, like many other features of human behavior, is moderately "heritable."  Now, a key to understanding this field and the debate that it has spawned is understanding what is meant by heritability.  Geneticists distinguish two conceptions of heritability.  "Broad-sense heritability" is the closest to the common-parlance sense that a trait is genetically determined and can therefore be inherited from one's parents: it is the fraction of total phenotypic variance attributable to genetic variance of any kind.  In contrast, "narrow-sense heritability" has a more restrictive technical meaning: it is the fraction of total phenotypic variance attributable to additive genetic variance.  Based simply on this definition, laden with unfamiliar terms, you can see why most people think in the loose, broad sense.

So let's parse out the definition of narrow-sense heritability.  First, "total phenotypic variance" simply means the total observed variance in the trait in question (e.g., IQ) for some well-defined population (e.g., the sample of individuals in the study).  This variance arises from a variety of sources, some genetic, some environmental, some both.  It is very important to note that variance is central to both definitions of heritability.  A trait can be completely genetically determined (whatever that means) but have no variance in a population.  Think head-number among human beings.  This trait is so deeply developmentally canalized that there is no variance (everybody has one) and, thus, zero heritability.

As sexual beings, when we reproduce, our alleles (variants of genes) get reshuffled whenever we generate our gametes, or reproductive cells (i.e., eggs and sperm), during the process of meiosis.  One of the principles that Mendel is known for is the principle of independent assortment: the idea that when our alleles get reshuffled during meiosis, the allele that ends up in a given gamete at one locus is independent of the alleles that end up there at other loci.  It turns out that independent assortment is not in any way universal.  Some alleles assort independently while others are linked to other alleles, typically because they are near each other on a chromosome (but sometimes for more interesting reasons).  The additive genetic variance in the definition of heritability refers to the variance attributable to the average effects of alleles -- effects that combine additively across alleles and across loci. These are the so-called additive effects.  Additivity arises from the independence of the different allelic effects.  We care so much about the additive effects because they are what let us make predictive models.  When an animal breeder wants to know the response to selection of some quantitative trait (e.g., body size, milk fat percentage, age at maturity), she uses the breeder's equation, which multiplies the narrow-sense heritability by the selection differential on the trait in question.  Now, our scientific interest in the heritability of intelligence ostensibly arises from the desire to create predictive and explanatory models like this breeder's equation.  In the absence of explanatory or predictive power, I don't see much scientific value.
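As a concrete (and entirely hypothetical) illustration of the breeder's equation, R = h² S:

```python
# The breeder's equation, R = h^2 * S: the response to selection equals the
# narrow-sense heritability times the selection differential.
# All numbers below are illustrative, not from any real breeding program.
def response_to_selection(h2, selection_differential):
    """Predicted change in the population mean after one generation of selection."""
    return h2 * selection_differential

h2 = 0.35   # hypothetical narrow-sense heritability of, say, milk fat percentage
S = 0.8     # selected parents average 0.8 units above the population mean
print(response_to_selection(h2, S))   # predicted shift in the offspring mean: 0.28 units
```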

Genes can express their effects in ways other than through their additive effects.  For example, there is that familiar concept from Mendelian genetics, dominance.  Dominance is a type of allele-allele interaction, limited to the special case of interactions occurring within a single locus.  A more general case of allelic interaction is epistasis.  An epistatic gene is one that affects the expression of one or more other genes.  The epistatic gene is a regulator that can either increase or decrease (possibly turn off) the effect of other genes.  These interactions are harder to predict and typically go into the error term of the breeder's equation.

The real gotcha in heritability analysis, though, is the existence of genotype-environment (GxE) interactions. These are generally not measured and can be quite large.  Lewontin, in his classic (1974) paper, first suggested that GxE interactions (in addition to other types of difficult-to-measure interactions, like those arising from epistasis) might actually be large.  Much of the work that followed has supported this idea (see, e.g., Pigliucci 2001). In twin study designs, GxE interactions are non-identifiable, meaning that we don't have enough information to simultaneously estimate the interaction, genetic, and environmental effects, so they are generally assumed to be zero. I think it is fair to say that the consensus among population geneticists is that heritability analyses, as done through twin studies, for example, are misleading at best because of this fundamental flaw.

In my mind, the fundamental problem with twin studies of the heritability of intelligence is that they can't begin to measure GxE interactions and therefore their estimates of heritability are hopelessly suspect.

Where is the heritability of intelligence likely to be large and not quite as fraught with the problems of unmeasurable and potentially large GxE interactions? One possibility is in homogeneous, affluent communities, not entirely unlike Palo Alto.  Kristof notes in his Op-Ed piece that "Intelligence does seem to be highly inherited in middle-class households." In such communities, external ("environmental") sources of variation are relatively small.  Most kids have stable homes with (two) college-educated parents who place a high value on achievement in school, go to safe, well-funded schools with motivated and highly trained teachers, eat nutritious food, and live fairly enriched lives.  When the total variance is low, whatever variance is explained by additive genetic effects is likely to be a higher fraction of the total variance. Hence, high heritability. This is a quite general point: the more environmentally homogeneous a population is, the higher we should expect heritability to be.
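A back-of-the-envelope illustration of that last point, with hypothetical variance components and ignoring dominance, epistasis, and GxE entirely:

```python
# Heritability is a ratio of variances, so shrinking the environmental variance
# mechanically inflates it even if the additive genetic variance never changes.
# All variance components below are hypothetical.
V_A = 10.0                               # additive genetic variance (held fixed)
for V_E in (40.0, 20.0, 10.0, 2.0):      # progressively more homogeneous environments
    h2 = V_A / (V_A + V_E)               # narrow-sense h^2, ignoring dominance, epistasis, and GxE
    print(f"V_E = {V_E:4.0f}  ->  h^2 = {h2:.2f}")
```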

It is very, very important, however, to note that this is generally not the case.  When we move out of relatively homogeneous and affluent communities, the sources of environmental variance increase and compound.  The fact that a trait with such high measured heritability can be modified as extensively as discussed in Nisbett's book suggests that intelligence is a trait with an enormous environmental effect and, I'm betting, a huge GxE interaction effect. It seems to me that the Flynn effect, the observation that IQ increases over time, provides further suggestive evidence for a massive environmental interaction. While the genomic evidence for recent strong selection on humans is mounting (in contrast to the bizarre idea that somehow selection came to a screeching halt with the advent of the Holocene), I doubt that there have been significant selective changes in the genes for intelligence (whatever that means) in the past century.  The environment, however, certainly has changed in the last 100 years.  This is what makes me think big GxE interactions.

So, in a phrase, sure, genes help determine intelligence.  But the action of these genes is so fundamentally tied up in environmental interactions that it seems that the explanatory power of simple genetic models for intelligence and other complex social traits such as political and economic behavior or social network measures is very low indeed. Moreover, the predictive power of these models in changing environments is low.  Without explanatory or predictive potential, we are left with something that isn't really science. I applaud efforts to more deeply understand how productive environments, good schools, and healthy decisions can maximize human potential. Heritability studies of IQ (and I worry about these other fashionable areas) seem to provide an excuse for the inexcusable failure to deal with the fundamental social inequalities that continue to mar our country -- and the larger world.

References

Lewontin, R. 1974. The analysis of variance and the analysis of causes. American Journal of Human Genetics 26: 400-411.

Pigliucci, M. 2001. Phenotypic Plasticity: Beyond Nature and Nurture. Baltimore, Johns Hopkins University Press.