monkey's uncle

notes on human ecology, population, and infectious disease


On The Dilution Effect

March 18th, 2013 · Conservation, Human Ecology, Infectious Disease

A new paper written by Dan Salkeld (formerly of Stanford), Kerry Padgett (CA Department of Public Health), and myself just came out in the journal Ecology Letters this week.

One of the most important ideas in disease ecology is a hypothesis known as the “dilution effect”. The basic idea behind the dilution effect hypothesis is that biodiversity — typically measured by species richness, or the number of different species present in a particular spatially defined locality — is protective against infection with zoonotic pathogens (i.e., pathogens transmitted to humans through animal reservoirs). The hypothesis emerged from analysis of Lyme disease ecology in the American Northeast by Richard Ostfeld and his colleagues and students from the Cary Institute of Ecosystem Studies in Millbrook, New York. Lyme disease ecology is incredibly complicated, and there are a couple different ways that the dilution effect can come into play even in this one disease system, but I will try to render it down to something easily digestible.

Lyme disease is caused by a spirochete bacterium, Borrelia burgdorferi. It is a vector-borne disease transmitted by hard-bodied ticks of the genus Ixodes. These ticks are what is known as hemimetabolous, meaning that they experience incomplete metamorphosis involving larval and nymphal stages. Rather than passing through a pupal stage, the larvae and nymphs resemble little bitty adults. An Ixodes tick takes three blood meals in its lifetime: one as a larva, one as a nymph, and one as an adult. At different life-cycle stages, the ticks have different preferences for hosts. Larval ticks generally favor the white-footed mouse (Peromyscus leucopus) for their blood meal, and this is where the catch is. It turns out that white-footed mice are extremely efficient reservoirs for Lyme disease. In fact, an infected mouse has as much as a 90% chance of transmitting infection to a larva feeding on it. The larvae then molt into nymphs and overwinter on the forest floor. Then, in spring or early summer a year after they first hatch from eggs, the nymphs seek vertebrate hosts. If an individual tick acquired infection as a larva, it can now transmit it to its next host. Nymphs are less particular about their choice of host and are happy to feed on humans (or just about any other available vertebrate host).

This is where the dilution effect comes in. The basic idea is that if there are more potential hosts, such as chipmunks, shrews, squirrels, or skunks, there are fewer chances that an infected nymph will take its blood meal on a person. Furthermore, most of these hosts are much less efficient at transmitting the Lyme spirochete than white-footed mice are. This lowers the prevalence of infection among ticks and makes it more likely that the pathogen will go extinct locally. It’s not difficult to imagine the dilution effect working at the larval-stage blood meal too: if there are more species present (and the larvae are not picky about their blood meal), the risk of initial infection is also diluted.

In the highly fragmented landscape of northeastern temperate woodlands, when there is only one species in a forest fragment, it is quite likely to be the white-footed mouse. These mice are very adaptable generalists that occur in a wide range of habitats, from pristine woodland to degraded forest. Species-poor habitats therefore tend to have mice but little else. The idea behind the dilution effect is that adding species to this highly depauperate baseline assemblage of white-footed mice alone will lower the prevalence of nymphal infection and reduce the risk of zoonotic infection for people.
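The arithmetic of dilution can be sketched with a toy calculation. The host shares and reservoir-competence values below are illustrative assumptions, not field data (apart from the roughly 90% figure for mice mentioned above); the point is simply that nymphal infection prevalence behaves like a competence-weighted average over the hosts that fed the larvae.

```python
# Toy illustration of the dilution effect: nymphal infection prevalence
# modeled as a weighted average of host reservoir competence.
# Host shares and competence values are hypothetical.

def nymphal_prevalence(hosts):
    """hosts: dict of species -> (share_of_larval_meals, competence)."""
    return sum(share * comp for share, comp in hosts.values())

# Species-poor fragment: mice take essentially all larval blood meals.
poor = {"white-footed mouse": (1.00, 0.90)}

# Species-rich fragment: less competent hosts absorb many of the meals.
rich = {
    "white-footed mouse": (0.40, 0.90),
    "chipmunk":           (0.25, 0.30),
    "squirrel":           (0.20, 0.15),
    "skunk":              (0.15, 0.10),
}

print(nymphal_prevalence(poor))  # 0.90: nearly every nymph is infected
print(nymphal_prevalence(rich))  # substantially lower prevalence
```

Adding low-competence hosts pulls the weighted average down, which is the dilution effect in miniature.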

It is not an exaggeration to say that the dilution-effect hypothesis is one of the two or three most important ideas in disease ecology and much of the explosion of interest in disease ecology can be attributed in part to such ideas. The dilution effect is also a nice idea. Wouldn’t it be great if every dollar we invested in the conservation of biodiversity potentially paid a dividend in reduced disease risk? However, its importance to the field or the beauty of the idea do not guarantee that it is actually scientifically correct.

One major issue with the dilution effect hypothesis is the problem of scale, arguably the central question in ecology. Numerous studies have shown that pathogen diversity is positively related to overall biodiversity at larger spatial scales. For example, in an analysis of global risk of emerging infectious diseases, Kate Jones and her colleagues from the London Zoological Society showed that globally, mammalian biodiversity is positively associated with the odds of an emerging disease. Work by Pete Hudson and colleagues at the Center for Infectious Disease Dynamics at Penn State showed that healthy ecosystems may actually be richer in parasite diversity than degraded ones. Given these quite robust findings, how is it that diversity at a smaller scale is protective?

We use a family of statistical tools known as “meta-analysis” to aggregate the results of a number of previous studies into a single synthetic test of the dilution-effect hypothesis. It is well known that inferences drawn from small samples generally have lower precision (i.e., the estimates carry more uncertainty) than inferences drawn from larger samples. A nice demonstration of this comes from classical asymptotic statistics. The expected value of the sample mean is the true mean of the underlying distribution, and the standard deviation of the sample mean is given by the standard error, defined as the standard deviation of the distribution divided by the square root of the sample size. Say that in two studies the standard deviation of the observations is 10. In the first study, the mean is estimated from a single observation, whereas in the second, it is estimated from a sample of 100 observations. The estimate of the mean in the second study is ten times more precise than in the first, because 10/√1 = 10 while 10/√100 = 1.
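The calculation above is simple enough to verify in a couple of lines of Python:

```python
import math

def standard_error(sd, n):
    """Standard error of the sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

print(standard_error(10, 1))    # 10.0: one observation, very imprecise
print(standard_error(10, 100))  # 1.0: a hundred observations, ten times better
```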

Meta-analysis allows us to pool estimates from a number of different studies to increase our sample size and, therefore, our precision. One of the primary goals of meta-analysis is to estimate the overall effect size and its corresponding uncertainty. The simplest way to think of effect size in our case is the difference in disease risk (e.g., as measured by the prevalence of infected hosts) between a species-rich area and a species-poor area. Unfortunately, a surprising number of studies don’t publish this seemingly basic result. For such studies, we have to calculate a surrogate of effect size based on the test statistics that the authors report. This is not completely ideal; we would much rather calculate effect sizes directly, but, to paraphrase a dubious source, you do a meta-analysis with the statistics that have been published, not the statistics you wish had been published. On this note, one of our key recommendations is that disease ecologists do a better job of reporting effect sizes to facilitate future meta-analyses.
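To make the pooling idea concrete, here is a minimal fixed-effect (inverse-variance) sketch. This is a simplification of what real meta-analytic software does, and the effect sizes and standard errors below are hypothetical, not our data; negative values stand in for "diversity is protective."

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error.

    Each study is weighted by 1/SE^2, so precise studies dominate, and the
    pooled SE is smaller than any individual study's SE.
    """
    weights = [1.0 / se ** 2 for se in ses]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    return mean, math.sqrt(1.0 / total)

# Hypothetical study results (negative = biodiversity protective).
effects = [-0.5, -0.1, 0.2, -0.3]
ses = [0.4, 0.1, 0.25, 0.3]

mean, se = pooled_effect(effects, ses)
print(mean, se)  # pooled estimate, tighter than any single study
```

Note how the pooled standard error ends up below the smallest individual standard error: that is the precision gain that motivates meta-analysis in the first place.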

In addition to allowing us to estimate the mean effect size across studies and its associated uncertainty, another goal of meta-analysis is to test for the existence of publication bias. Stanford’s own John Ioannidis has written on the ubiquity of publication bias in medical research. The term “bias” has a general meaning that is not quite the same as its technical meaning here: “publication bias” generally carries no implication of nefarious motives on the part of authors. Rather, it typically arises through a process of selection at both the level of individual authors and the institutional level of the journals to which they submit their papers. An author, who is under pressure from her home institution and funding agencies to be productive, is not going to waste her time submitting a paper that she thinks has a low chance of being accepted. This creates a filter, at the level of the author, against publishing negative results. This is known as the “file-drawer effect”, referring to the hypothetical nineteen studies with negative results that never make it out of authors’ desks for every one paper publishing positive results. Journals, editors, and reviewers, of course, also prefer papers with positive results to those without. These individually sensible responses to the incentives of scientific publication unfortunately aggregate into systematic bias across the broader literature of a field.

We use a couple of methods for detecting publication bias. The first is a graphical device known as a funnel plot. We expect studies with large samples to have effect-size estimates close to the overall mean effect, because estimates based on large samples have higher precision. Smaller studies, on the other hand, will have effect-size estimates that are more dispersed, because random error has a bigger influence in small samples. If we plot precision (e.g., measured by the standard error) against effect size, we expect the scatter plot to take the shape of an inverted triangle, or funnel. Note, and this is important, that we expect the scatter around the mean effect size to be symmetrical: random variation is just as likely to push an estimate above the mean as below it. However, if there is a tendency not to publish studies that fail to support the hypothesis, we should see an asymmetric funnel. In particular, there should be a deficit of low-powered studies with effect-size estimates in the direction opposite the hypothesis. This is exactly what we found: among studies with very small samples, only those supporting the dilution-effect hypothesis are published. Here is what our funnel plot looked like.

Note that there are no points in the lower right quadrant of the plot (where species richness and disease risk would be positively related).
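The funnel shape itself is easy to reproduce by simulation. The sketch below (illustrative numbers, nothing to do with our data) simulates many hypothetical studies around a true effect of -0.2 and shows that small-sample studies scatter widely while large-sample studies cluster at the funnel's tip; absent publication bias, both scatter symmetrically about the true effect.

```python
import random
import statistics

# Simulate the symmetric funnel expected in the absence of publication
# bias: estimates from small studies spread widely, large studies cluster.
random.seed(1)
TRUE_EFFECT, SD = -0.2, 1.0

def simulated_estimates(n, reps=500):
    """Effect-size estimates from `reps` hypothetical studies of sample size n."""
    return [statistics.mean(random.gauss(TRUE_EFFECT, SD) for _ in range(n))
            for _ in range(reps)]

small = simulated_estimates(5)    # wide scatter: the funnel's broad base
large = simulated_estimates(200)  # tight scatter: the funnel's narrow tip

print(statistics.stdev(small))  # roughly SD/sqrt(5)
print(statistics.stdev(large))  # roughly SD/sqrt(200)
```

Publication bias would show up as a missing chunk of this scatter: small-sample estimates on the "wrong" side of the mean simply absent, exactly the empty quadrant in our plot.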

While the graphical approach is great and provides an intuitive feel for what is happening, it is nice to have a more formal way of evaluating the effect of publication bias on our estimates of effect size. Note that if there is publication bias, we will over-estimate our precision, because the missing studies lie far from the mean (and on the wrong side of it). The method we use to measure the impact of publication bias on our estimate of uncertainty, known as “trim-and-fill”, formalizes this idea. It uses an algorithm to find the most divergent asymmetric observations. These are removed and the precision of the mean effect size is calculated; this sub-sample is known as the “truncated” sample. Then a sample of missing values is imputed (i.e., simulated from the implied distribution) and added to the base sample; this is known as the “augmented” sample, and its precision is re-calculated. If there is no publication bias, these estimates should not differ much. In our sample, the estimates of precision differ quite a bit between the truncated and augmented samples, and we estimate that between four and seven studies are missing from the sample.
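The flavor of the "fill" step can be conveyed with a toy sketch. This is emphatically not the real trim-and-fill algorithm (which estimates the number of missing studies formally and iterates); it just mirrors the most extreme low-side effects about the sample mean to stand in for hypothetically unpublished opposite-side studies. All effect sizes are made up, with negative meaning "diversity protective."

```python
import statistics

def augment_with_mirrors(effects, k):
    """Toy 'fill' step: reflect the k most extreme low-side effects about
    the sample mean, as stand-ins for unpublished opposite-side studies."""
    center = statistics.mean(effects)
    most_extreme = sorted(effects)[:k]           # k smallest effects
    mirrored = [2 * center - e for e in most_extreme]
    return effects + mirrored

# Hypothetical published effects: an asymmetric, mostly negative sample.
published = [-0.9, -0.7, -0.6, -0.4, -0.3, -0.2, -0.1, 0.05]
augmented = augment_with_mirrors(published, k=3)

# Filling in the "missing" studies pulls the mean effect toward zero.
print(statistics.mean(published))
print(statistics.mean(augmented))
```

In the real method, the augmented sample also yields a revised (usually more honest) uncertainty estimate for the mean effect, which is the comparison we report in the paper.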

Most importantly, we find that the 95% confidence interval for our estimated mean effect size crosses zero. That is, while the mean effect size is slightly negative (suggesting that biodiversity is protective against disease risk), we can’t confidently say that it is actually different than zero. Essentially, our large sample suggests that there is no simple relationship between disease risk and biodiversity.

On Ecological Mechanisms

One of the main conclusions of our paper is that we need to move beyond simple correlations between species richness and disease risk and focus instead on ecological mechanisms. I have no doubt that there are specific cases where the negative correlation between species richness and disease risk is real (note that our title says we think this link is idiosyncratic). However, I suspect that where we see a significant negative correlation, what is really happening is that some specific ecological mechanism is being aliased by species richness. For example, a forest fragment with a more intact fauna is probably more likely to contain predators, and these predators may be keeping the population of efficient reservoir species in check.

I don’t think that this is an especially controversial idea. In fact, some of the biggest advocates for the dilution effect hypothesis have done seminal work advancing our understanding of the ecological mechanisms underlying biodiversity-disease risk relationships. Ostfeld and Holt (2004) note the importance of rodent predators for regulating disease. They also make the very important point that not all predators are created equal when it comes to the suppression of disease. A hallmark of simple models of predation is the cycling of predator and prey abundances. A specialist predator that induces boom-bust cycles in a disease reservoir is probably not optimal for infection control. Indeed, it may exacerbate disease risk if, for example, rodents become more aggressive and are more frequently infected in agonistic encounters with conspecifics during steep growth phases of their population cycle. This phenomenon has been cited in the risk of zoonotic transmission of Sin Nombre virus in the American Southwest.

I have a lot more to write on this, so, in the interest of time, I will end this post now but with the expectation that I will write more in the near future!

 


The Least Stressful Profession of Them All?

January 5th, 2013 · science, Teaching

Continuing the theme of critics misunderstanding the life of university researchers that I started in my last post, I felt the need to chime in on a story that has really made the social-media rounds in the last couple of days. The kerfuffle stems from a Forbes piece by Susan Adams enumerating the ten least stressful jobs for 2013. Reporting on a study from the job site careercast.com, and to the surprise of nearly every academic I know, she listed university professor as the least stressful job of all. Adams writes: “For tenure-track professors, there is some pressure to publish books and articles, but deadlines are few.” This is quite possibly the most nonsensical statement I have ever read about the academy, and it reveals a profound ignorance of its inner workings. The careercast.com list was also picked up by CNBC and the Huffington Post, both of which were completely credulous of the rankings.

Before going on though, I have to give Ms. Adams some props for amending her piece following an avalanche of irate comments from actual professors. She writes:

Since writing the above piece I have received more than 150 comments, many of them outraged, from professors who say their jobs are terribly stressful. While I characterize their lives as full of unrestricted time, few deadlines and frequent, extended breaks, the commenters insist that most professors work upwards of 60 hours a week preparing lectures, correcting papers and doing research for required publications in journals and books. Most everyone says they never take the summer off, barely get a single day’s break for Christmas or New Year’s and work almost every night into the wee hours.

All true.

In the CNBC piece, the careercast.com publisher, Tony Lee, lays down some of the most uninformed nonsense that I’ve ever read:

“If you look at the criteria for stressful jobs, things like working under deadlines, physical demands of the job, environmental conditions hazards, is your life at risk, are you responsible for the life of someone else, they rank like ‘zero’ on pretty much all of them!” Lee said.

Plus, they’re in total control. They teach as many classes as they want and what they want to teach. They tell the students what to do and reign over the classroom. They are the managers of their own stress level.

Careercast.com measured job-related stress using an 11-dimensional scale. These dimensions and the point ranges assigned to each include:

  • Travel, amount of (0-10)
  • Growth Potential (income divided by 100)
  • Deadlines (0-9)
  • Working in the public eye (0-5)
  • Competitiveness (0-15)
  • Physical demands (stoop, climb, etc.) (0-14)
  • Environmental conditions (0-13)
  • Hazards encountered (0-5)
  • Own life at risk (0-8)
  • Life of another at risk (0-10)
  • Meeting the public (0-8)

These seem reasonable enough, but whether they were accurately assessed for professors is another question altogether.

It is important to note that there is enormous heterogeneity contained in the job title “professor.” There are professors of art history and professors of business and professors of law and professors of vascular surgery and professors of chemistry and professors of seismic engineering and professors of volcanology and … you get the point. No doubt some of these are more or less stressful than others. Many involve substantial work in the public eye and meeting the public. Some involve hazardous environmental conditions and physical demands.

However, I will focus mainly on what I see as the most ludicrous statements made by both Lee and Adams: that professors have no deadlines. My life is all about deadlines: article/book submission deadlines, institutional review board deadlines, peer review deadlines, editorial deadlines, and the all-important grant deadlines. There are the deadlines imposed by my students when they apply for grants or fellowships or jobs and need highly detailed letters of recommendation, often on very short notice. Oh, and guess what: grades are due on a particular date at the end of the term. You know, a deadline? And those classes we teach: better have a lecture ready before the class meets. Again, kinda like a deadline. I think that it is worth noting that one is expected to meet these teaching deadlines even when most professional incentives (at least at a research university) are focused around everything in your job description but teaching. There is a trite phrase describing the life of a professor — particularly a junior professor — that seems to have found its way into the general consciousness, “publish or perish.” Notice that it is not “give coherent, interesting lectures and grade fairly and expediently or perish”!

So, yes, there are deadlines and there are very difficult trade-offs relating to the finiteness of time. Honestly, it’s hard for me to imagine how even a casual observer of the university could not see the ubiquity of deadlines for the professor’s life.

In an excellent rebuttal of this list, blogger Audra Diers writes about both the time demands and the economic realities of obtaining a tenure-track job. I will finish up with a few thoughts on competitiveness and “growth potential.” My experience on a variety of job search committees since coming to Stanford is that there are typically hundreds of highly qualified candidates for any given search. These are all people who have Ph.D.s and, frequently, already hold jobs at other universities. In the anthropology department at Stanford, the majority of faculty joined from faculty positions at other universities. It is very difficult to get a job at a university like Stanford directly out of graduate school; inevitably, you are competing against people who have already been assistant professors (or at least post-docs) elsewhere and who already have substantial publication and grant-writing records. The differences in salary, teaching loads, and institutional prestige can be substantial. Browsing the Chronicle of Higher Education’s Almanac of Higher Education can provide some numbers. Many people bust it at lower-prestige universities with the hope of eventually getting an opportunity at a place like Stanford or Berkeley or Harvard. That means publishing important work, often while carrying outrageously heavy teaching loads at universities with primarily teaching missions, and that means long hours, juggling many conflicting demands, and enormous individual drive.

If you are a scientist, you are often competing with other scientists for results. Getting yourself in a position to secure such results means successful grant-writing and attracting top students and post-docs to your lab. Now, this competition is often enjoyable and almost certainly drives innovation, but it can be stressful (and deadline filled!). There is nothing quite like the feeling of looking at some journal’s table of contents that’s shown up in your inbox and realizing you’ve been scooped on a problem you’ve spent years working on. There is always that little bit of fear in the back of your head pushing you to publish your results before someone else does.

Where Lee gets the idea that professors “teach as many classes as they want and what they want to teach” is a mystery to me. Universities (and colleges within universities) have rules for the number of courses their faculty are expected to teach. Sometimes, a professor can buy out of some teaching by securing more research funding that specifically budgets for such buy-outs. Within departments, there is the dreaded curriculum committee. My department’s CC decided this year that I should teach all my courses in the Spring quarter. While it’s been nice to have large chunks of research time this Fall, Spring is going to be horrible. This is hardly teaching as much or what I want to teach. Departments have instructional needs (i.e., “service courses”) and someone needs to teach these. Junior faculty are often dumped upon to teach the service courses (e.g., history of the field, methodological courses) that very few students want to attend.

Writes Adams at Forbes, “The other thing most of the least stressful jobs have in common: At the end of the day, people in these professions can leave their work behind, and their hours tend to be the traditional nine to five.” This is just crazy talk. I work every night. Some nights are more productive than others, for sure, but, as in many professions, I take this as a given of my job.

So being a university professor is hardly a stress-free life. This doesn’t in any way mean that we don’t like our jobs. Being a tenured professor at a major research university is good work if you can get it. The job carries with it a great deal of autonomy, flexibility, and the ability to pursue one’s passion. As a professor, one interacts with interesting, curious people on a daily basis and helps shape future leaders. The job-related stress felt by a university professor is almost certainly not on par with that of, say, an infantry soldier or police officer, but the job is not stress-free. It never ceases to surprise me how ignorant of the workings of universities critics often are. This is an instance where there is no obvious political agenda; the study just got some facts badly wrong. But studies like this contribute to the disturbing anti-intellectualism (and concomitant disdain for empirical evidence) that has become part of American public consciousness.


Thoughts on Black Swans and Antifragility

December 26th, 2012 · science, Statistics

I have recently read the latest book by Nassim Nicholas Taleb, Antifragile. I read his famous The Black Swan a while back while in the field and wrote lots of notes. I never got around to posting those notes since they were quite telegraphic (and often not even electronic!), having been written in the middle of the night while fighting insomnia under mosquito netting. The publication of his latest, along with the time afforded by my holiday displacement, gives me an excuse to formalize some of those notes here. Like Andy Gelman, I have so many things to say about this work, on so many different topics, that this will be a bit of a brain dump.

Taleb’s work is quite important for my thinking on risk management and human evolution, so it is with great interest that I read both books. Nonetheless, I find his works maddening, to say the least. Before presenting my critique, however, I will pay the author as big a compliment as I suppose can be made: he makes me think. He makes me think a lot, and there are some extremely important ideas in his writings. From my rather unsystematic reading of other commentators, this seems to be a pretty common conclusion about his work. For example, Brown (2007) writes in The American Statistician, “I predict that you will disagree with much of what you read, but you’ll be smarter for having read it. And there is more to agree with than disagree. Whether you love it or hate it, it’s likely to change public attitudes, so you can’t ignore it.” The problem is that I am so distracted by all the maddening bits that I regularly nearly miss the ideas, and it is the ideas that are important. There is so much ego and so little discipline on display in both The Black Swan and Antifragile.

Some of these sentiments have been captured in Michiko Kakutani’s excellent review of Antifragile. There are some even more hilarious sentiments communicated in Tom Bartlett’s non-profile in the Chronicle of Higher Education.

I suspect that if Taleb and I ever sat down over a bottle of wine, we would not only have much to discuss but would find that we are annoyed, frequently to the point of apoplexy, by the same people. Nonetheless, one of the most frustrating things about reading his work is the absurd stereotypes he deploys and the broad generalizations he uses to dismiss the work of just about any academic researcher. His disdain for academic research interferes with his ability to make a cogent critique. Perhaps I have spent too much time at Stanford, where the nerd is glorified, but, among other things, I find his pejorative use of the term “nerd” for people like Dr. John, as contrasted with man-of-his-wits Stereotyped, I mean, Fat Tony, off-putting and rather behind the times. Gone are the days when being labeled a nerd was a devastating put-down.

My reading of Taleb’s critiques of prediction and risk management is that the primary problem is hubris. Is there anything fundamentally wrong with risk assessment? I am not convinced there is, and there are quite likely substantial benefits to systematic inquiry. The problem is that the risk assessment models become reified into a kind of reality. I warn students – and try to regularly remind myself – never to fall in love with one’s own model. Something that many economists and risk modelers do is start to believe that their models are something more real than heuristic. George Box’s adage has become a bit cliche but nonetheless always bears repeating: all models are wrong, but some are useful. We need to bear in mind the wrongness of models without dismissing their usefulness.

One problem about both projection and risk analysis, that Taleb does not discuss, is that risk modelers, demographers, climate scientists, economists, etc. are constrained politically in their assessments. The unfortunate reality is that no one wants to hear how bad things can get and modelers get substantial push-back from various stakeholders when they try to account for real worst-case scenarios.

There are ways of building in more extreme events than have been observed historically (Westfall and Hilbe (2007), e.g., note the use of extreme-value modeling). I have written before about the ideas of Martin Weitzman in modeling the disutility of catastrophic climate change. While he may be a professor at Harvard, my sense is that his ideas on modeling the risks of catastrophic climate change are not exactly mainstream. There is the very tangible evidence that no one is rushing out to mitigate the risks of climate change despite the fact that Weitzman’s model makes it pretty clear that it would be prudent to do so. Weitzman uses a Bayesian approach which, as noted by Westfall and Hilbe, is a part of modern statistical reasoning that was missed by Taleb. While beyond the scope of this already hydra-esque post, briefly, Bayesian reasoning allows one to combine empirical observations with prior expectations based on theory, prior research, or scenario-building exercises. The outcome of a Bayesian analysis is a compromise between the observed data and prior expectations. By placing non-zero probability on extreme outcomes, a prior distribution allows one to incorporate some sense of a black swan into expected (dis)utility calculations.
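The practical difference between thin- and fat-tailed priors is easy to demonstrate by simulation. The sketch below is an illustration of the general point, not Weitzman's actual model: it compares the probability of an extreme outcome under a thin-tailed (normal) distribution and a fat-tailed (Student's t) one; all distributions and thresholds are chosen for illustration only.

```python
import math
import random

# Illustrative sketch: a fat-tailed prior over outcomes places far more
# probability on extreme events than a thin-tailed one, and it is these
# extremes that dominate expected-disutility calculations.
random.seed(42)

def student_t(df):
    """Draw from Student's t: standard normal over sqrt(chi-square / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

N = 100_000
thin = [random.gauss(0.0, 1.0) for _ in range(N)]  # thin-tailed draws
fat = [student_t(4) for _ in range(N)]             # fat-tailed draws

def tail_prob(draws, threshold):
    """Monte Carlo estimate of P(outcome > threshold)."""
    return sum(d > threshold for d in draws) / len(draws)

print(tail_prob(thin, 4.0))  # essentially zero under a normal
print(tail_prob(fat, 4.0))   # orders of magnitude larger under fat tails
```

A prior like the fat-tailed one here is exactly what lets a Bayesian analysis carry "some sense of a black swan" into the expected (dis)utility calculation.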

Nor does the existence of black swans mean that planning is useless. By their very definition, black swans are rare, though highly consequential, events. Does it not make sense to have a plan for dealing with the 99% of the time when we are not experiencing a black swan event? To be certain, this planning should not interfere with our ability to respond to major events, but I see no evidence that planning for more-or-less likely outcomes necessarily trades off against responding to unlikely ones.

Taleb is disdainful of explanations for why the bubonic plague didn’t kill more people: “People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and ‘scientific models’ of epidemics.” (The Black Swan, p. 120) Does he not understand that epidemic models belong to the very category of nonlinear processes he lionizes? He should know better. Epidemic models are not the false bell-curve models he so despises. Anyone who thinks hard about an epidemic process, in which an infectious individual must come in contact with a susceptible one for a transmission event to take place, should be able to infer that an epidemic cannot infect everyone. Epidemic models work and make useful predictions. We should, naturally, exhibit a healthy skepticism about them, as we should about any model. But they are an important tool for understanding and even planning.
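The claim that an epidemic cannot infect everyone falls out of a standard textbook result. In the simple SIR model, the fraction z ultimately infected satisfies the final-size relation z = 1 - exp(-R0 z), whose solution is strictly less than 1 for any finite basic reproduction number R0. A quick sketch:

```python
import math

def final_size(r0, iters=200):
    """Fraction ultimately infected in a simple SIR epidemic, from the
    final-size relation z = 1 - exp(-R0 * z), via fixed-point iteration."""
    z = 0.5  # interior starting guess; z = 0 is the no-epidemic root
    for _ in range(iters):
        z = 1.0 - math.exp(-r0 * z)
    return z

# Even a highly transmissible pathogen leaves some people uninfected.
for r0 in (1.5, 2.0, 3.0):
    print(r0, round(final_size(r0), 3))
```

The epidemic burns out before reaching everyone because, as susceptibles are depleted, infectious individuals increasingly contact people who cannot be infected; this is the nonlinearity doing the work.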

Indeed, the understanding gained from the study of (nonlinear) epidemic models has provided us with the most powerful tools we have for disease control and even eradication. As Hans Heesterbeek has noted, the idea that we could control malaria by targeting the mosquito vector of the disease was considered ludicrous before Ross’s development of the first epidemic model. The logic was essentially that there are so many mosquitoes that it would be absurdly impractical to eliminate them all. But the Ross model revealed that epidemics, because of their nonlinearity, have thresholds. We don’t have to eliminate all the mosquitoes to break the malaria transmission cycle; we just need to eliminate enough to bring the system below the epidemic threshold. This was a powerful idea, and it is central to contemporary public health. It is what allowed epidemiologists and public health officials to eliminate smallpox, and it is what is allowing us to very nearly eliminate polio, if political forces (black swans?) will permit.
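Ross's threshold logic can be sketched in a few lines. If R0 scales linearly with vector density, as it does in the Ross model, then scaling the vector population by (1 - f) scales R0 the same way, and transmission collapses once R0 falls below 1, i.e., once f exceeds 1 - 1/R0:

```python
def critical_vector_reduction(r0):
    """Fraction of the vector population that must be removed to push
    R0 below 1, assuming R0 scales linearly with vector density."""
    if r0 <= 1.0:
        return 0.0  # already below threshold: no control needed
    return 1.0 - 1.0 / r0

# Nowhere near all mosquitoes need to go, even for large R0.
for r0 in (2.0, 5.0, 10.0):
    print(r0, critical_vector_reduction(r0))
```

Even for R0 = 10 the required reduction is 90%, not 100%: a hard target, but a finite one, which is precisely why vector control stopped looking ludicrous.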

Taleb’s ludic fallacy (i.e., the notion that games of chance are somehow an adequate model of randomness in the world) is great. Quite possibly the most interesting and illuminating section of The Black Swan comes on p. 130, where he illustrates the major risks faced by a casino. Empirical data make a much stronger argument than snide stereotypes do. That said, Lund (2007) makes the important point that we need to ask what, exactly, is being modeled in any risk assessment or projection. One of the most valuable outcomes of any formalized risk assessment (or formal model construction more generally) is that it forces the investigator to be very explicit about what is being modeled. The output of the model is often of secondary importance.

Much of the evidence deployed in his books is what Herb Gintis has called “stylized facts” and, of course, is subject to Taleb’s own critique of “hidden evidence.” Because the stylized facts are presented anecdotally, there is no way to judge what is being left out. A fair rejoinder to this critique might be that these are trade publications meant for a mass market and are therefore not going to be rich in data regardless. However, the tone of the books – ripping on economists and bankers but also statisticians, historians, neuroscientists, and any number of other professionals who have the audacity to make a prediction or provide a causal explanation – makes the need for measured empirical claims all the more important. I suspect that many of these people actually believe things that are quite compatible with the conclusions of both The Black Swan and Antifragile.

On Stress

The notion of antifragility turns on systems getting stronger when exposed to stressors. But we know that not all stressors are created equal. This is where the work of Robert Sapolsky really comes into play. In his book Why Zebras Don’t Get Ulcers, Sapolsky, citing the foundational work of Hans Selye, notes that some stressors certainly make the organism stronger. Certain types of stress (“good stress”) improve the state of the organism, making it more resistant to subsequent stressors. Rising to a physical or intellectual challenge, meeting a deadline, competing in an athletic competition, working out: these are examples of good stresses. They train body, mind, and emotions and improve the state of the individual. It is not difficult to imagine that there could be similar types of good stressors at levels of organization higher than the individual too. The way the United States came together as a society to rise to the challenge of World War II and emerged as the world’s preeminent industrial power comes to mind. An important commonality of these good stressors is the time scale over which they act. They are all acute stressors that allow recovery and therefore permit the subsequent improvement in performance.

However, as Sapolsky argues so nicely, when stress becomes chronic, it is no longer good for the organism. The same glucocorticoids (i.e., “stress hormones”) that liberate glucose and focus attention during an acute crisis induce fatigue, exhaustion, and chronic disease when they are secreted at high levels chronically.

Any coherent theory of antifragility will need to specify the types of stress to which systems are resistant and, importantly, which have a strengthening effect. Invoking the idea of hormesis – that a positive biological outcome can arise from taking low doses of toxins – is scientifically hokey and borders on mysticism. It unfortunately detracts from the good ideas buried in Antifragile.

I think that Taleb is on to something with the notion of antifragility but I worry that the policy implications end up being just so much orthodox laissez-faire conservatism. There is the idea that interventions – presumably by the State – can do nothing but make systems more fragile and generally worse. One area where the evidence very convincingly suggests that intervention works is public health. Life expectancy has doubled in the rich countries of the developed world from the beginning of the twentieth century to today. Many of the gains were made before the sort of dramatic things that come to mind when many people think about modern medicine. It turns out that sanitation and clean water went an awful long way toward decreasing mortality well before we had antibiotics or MRIs. Have these interventions made us more fragile? I don’t think so. The jury is still out, but it appears that reducing the infectious disease burden early in life (as improved sanitation does) has synergistic effects on later-life mortality, an effect that is mediated by inflammation.

On The Academy

Taleb drips derision on university researchers throughout his work. There is a lot to criticize in the contemporary university; however, as with so many other external critics of the university, I think that Taleb misses essential features and his criticisms end up being off base. Echoing one of the standard talking points of right-wing critics, Taleb belittles university researchers as being writers rather than doers (echoing George Bernard Shaw’s witticism “He who can, does. He who cannot, teaches.”). Skin in the game purifies thought and action, a point with which I actually agree. However, thinking that university researchers live in a world lacking consequences is nonsense. Writing is skin in the game. Because we live in a quite free society – and because of important institutional protections on intellectual freedom like tenure (another popular point of criticism from the right) – it is easy to forget that expressing opinions – especially when one speaks truth to power – can be dangerous. Literally. Note that intellectuals are often the first ones to go to the gallows in revolutions from both the right and the left: the Nazis, the Bolsheviks, and Mao’s Cultural Revolution, to name a few. I occasionally get, for lack of a better term, unbalanced letters from people who are offended by the study of evolution and I know that some of my colleagues get this a lot more than I do. Intellectuals get regular hate mail, a phenomenon amplified by the ubiquity of electronic communication. Writers receive death threats for their ideas (think Salman Rushdie). Ideas are dangerous and communicating them publicly is not always easy, comfortable, or even safe, yet it is the professional obligation of the academic.

There are more prosaic risks that academics face that suggest to me that they do indeed have substantial skin in the game. While there is a tendency for critics from outside the academy to see universities as ossified places where people who “can’t do” go to live out their lives, the university is in fact a dynamic place. Professors do not emerge fully formed from the ivory tower. They must be trained and promoted. This is the most obvious and ubiquitous way that what academics write has “real world” consequences – i.e., for themselves. If peers don’t like your work, you won’t get tenure. One particularly strident critic can sink a tenure case. Both the trader and the assistant professor have skin in their respective games – their continued livelihoods depend upon their trading decisions and their writing. That’s pretty real. By the way, it is a huge sunk investment that is being risked when an assistant professor comes up for tenure. Not much fun to be forty and let go from your first “real” job since you graduated with your terminal degree… (I should note that there are problems with this – it can lead to particularly conservative scholarship by junior faculty, among other things, but this is a topic for its own post.)

Now, I certainly think that there are more and less consequential things to write about. I have gotten more interested in applied problems in health and the environment as I’ve moved through my career because I think that these are important topics about which I have potentially important things to say (and, yes, do). However, I also think it is of utmost importance to promote the free flow of ideas, whether or not they have obvious applications. Instrumentally, the ability to pursue ideas freely is what trains people to solve the sort of unknown and unforecastable problems that Taleb discusses in The Black Swan. One never knows what will be relevant and playing with ideas (in the personally and professionally consequential world of the academy) is a type of stress that makes academics better at playing with ideas and solving problems.

One of the major policy suggestions of Antifragile is that tinkering with complex systems will be superior to top-down management. I am largely sympathetic to this idea and to the idea that high-frequency-of-failure tinkering is also the source of innovation. Taleb contrasts this idea of tinkering with “top-down” or “directed” research, which he argues regularly fails to produce innovations or solutions to important problems. This notion of “top-down,” “directed” research is among the worst of his various straw men and a fundamental misunderstanding of the way that science works. A scientist writes a grant with specific scientific questions in mind, but the real benefit of a funded research program is the unexpected results one discovers while pursuing the directed goals. As a simple example, my colleague Tony Goldberg has discovered two novel simian hemorrhagic viruses in the red colobus monkeys of western Uganda as a result of our big grant to study the transmission dynamics and spillover potential of primate retroviruses. In the grant proposal, we discussed studying SIV, SFV, and STLV. We didn’t discuss the simian hemorrhagic fever viruses because we didn’t know they existed! That’s what discovery means. Their not being explicitly in the grant didn’t stop Tony and his collaborators from the Wisconsin Regional Primate Center from discovering these viruses, but the systematic research meant that they were in a position to discover them.

The recommendation of adaptive, decentralized tinkering in complex systems is in keeping with work in resilience (another area about which Taleb is scornful because it is the poor stepchild of antifragility). Because of the difficulty of making long-range predictions that arises from nonlinear, coupled systems, adaptive management is the best option for dealing with complex environmental problems. I have written about this before here.

So, there is a lot that is good in the works of Taleb. He makes you think, even if you spend a lot of time rolling your eyes at the trite stereotypes and stylized facts that make up much of the rhetoric of his books. Importantly, he draws attention to probabilistic thinking for a general audience. Too much popular communication of science trades in false certainties, and the mega-success of The Black Swan in particular has done a great service to increasing awareness among decision-makers and the reading public about the centrality of uncertainty. Antifragility is an interesting idea though not as broadly applicable as Taleb seems to think it is. The inspiration for antifragility seems to lie largely in biological systems. Unfortunately, basing an argument on general principles drawn from physiology, ecology, and evolutionary biology pushes Taleb’s knowledge base a bit beyond its limit. Too often, the analogies in this book fall flat or are simply on shaky ground empirically. Nonetheless, recommendations for adaptive management and bricolage are sensible for promoting resilient systems and innovation. Thinking about the world as an evolving complex system rather than as the result of some engineering design is important, and if throwing his intellectual cachet behind this notion helps it become as ingrained in the general consciousness as the idea of a black swan has, then Taleb will have done another major service.


New Publication, Emerging infectious diseases: the role of social sciences

December 4th, 2012 · Human Ecology, Infectious Disease

This past week, The Lancet published a brief commentary I wrote with a group of anthropologist-collaborators. The piece, written with Craig Janes, Kitty Corbett, and Jim Trostle, arose from a workshop I attended in lovely Buenos Aires back in June of 2011. This was a pretty remarkable meeting that was orchestrated by Josh Rosenthal, acting director of the Division of International Training and Research at the Fogarty International Center at NIH, and hosted in grand fashion by Ricardo Gürtler of the University of Buenos Aires.

Our commentary is on a series of papers on zoonoses, a seemingly unlikely topic about which a collection of anthropologists might have opinions. However, as we note in our paper, social science is essential for understanding emerging zoonoses. First, human social behavior is an essential ingredient in R_0, the basic reproduction number of an infection (the paper uses the term “basic reproductive rate,” which reverted somewhere in production despite the several times I changed “rate” to “number”). Second, we suggest that social scientists who participate in primary field data collection (e.g., anthropologists, geographers, sociologists) are in a strong position to understand the complex causal circumstances surrounding novel zoonotic disease spillovers.

We note that there are some challenges to integrating the social sciences effectively into research on emerging infectious disease. Part of this is simply translational. Social scientists, natural scientists, and medical practitioners need to be able to speak to each other and this kind of transdisciplinary communication takes practice. I’m not at all certain what it takes to make researchers from different traditions mutually comprehensible, but I know that it’s more likely to happen if these people talk more. My hypothesis is that this is best done away from anyone’s office, in the presence of food and drink. Tentative support for this hypothesis is provided by the wide-ranging and fun conversations over lomo y malbec. These conversations have so far yielded at least one paper and laid the foundations for a larger review I am currently writing. I know that various permutations of the people in Buenos Aires for this meeting are still talking and working together, so who knows what may eventually come of it?


On Anthropological Sciences and the AAA

November 19th, 2012 · Anthropology, Evolution, Human Ecology, science, Teaching

I guess the time has rolled around again for my annual navel-gaze regarding my discipline, my place within it, and its future. Two strangely interwoven events have conspired to make me particularly philosophical as we enter into the winter holidays. First, I am in the middle of a visit by my friend, colleague, and former student, Charles Roseman, now an associate professor of anthropology at the University of Illinois, Urbana-Champaign. The second is that the American Anthropological Association meetings just went down in San Francisco and this always induces an odd sense of shock and subsequent introspection.

Charles graduated with a Ph.D. from the Department of Anthropological Sciences (once a highly ranked department according to the National Research Council) in 2005. He was awarded tenure at UIUC, a leading department for biological anthropology, this past year and has come back to The Farm to collaborate with me on our top-secret sleeper project of the past seven years. We’ve made some serious progress on this project since he arrived and maybe I’ll be able to write about that soon too.

The annual AAA meeting is one that I never attended until about four years ago, coinciding with what we sometimes refer to as “the blessed event,” the remarrying of the two Stanford Anthropology departments. It’s actually a bit of a coincidence that I started attending AAAs the same year that we merged but it has largely been business of the new Department of Anthropology that has kept me going back – largely to serve on job search committees. This year, I had two responsibilities that drew me to the AAAs. The first was the editorial board meeting for American Anthropologist, the flagship publication of the association. I joined the editorial board this year and it seemed a good idea to go and get a feel for what is happening with the journal and where it is likely to head over the next couple years.

My other primary responsibility was chairing a session that was organized by two of my Ph.D. students, Yeon Jung Yu and Shannon Randolph. In addition to Yeon and Shannon, my Ph.D. student Alejandro Feged also presented work from his dissertation research.  All three of these students were actually accepted into Anthsci and are part of the last cohort of students to leave Stanford still knowing the two-department system.

It was a great pleasure to sit in the audience and watch Yeon, Shannon, and Alejandro dazzle the audience with their sophisticated methods, beautiful images, and accounts of impressive, extended — and often hardcore — fieldwork. For her dissertation research, Yeon worked for two years with commercial sex workers in southern China, attempting to understand how women get recruited into sex work and how social relations facilitate their ability to survive and even thrive in a world that is quite hostile to them. Her talk was incredibly professional and theoretically sophisticated. For her dissertation research, Shannon worked in the markets of Yaoundé, Cameroon, trying to understand the motivations for consumption of wild bushmeat. Shannon was able to share with the audience her innovative approaches to collecting data (over 4,000 price points, among other things) on a grey-market activity that people are not especially eager to discuss, especially in the market itself. Alejandro did his dissertation research in the Colombian Amazon, where he investigated the human ecology of malaria in this highly endemic region. His talk demonstrated that the conventional wisdom about malaria ecology in this region — namely, that the people most at risk for infection are adult men who spend the most time in the forest — is simply incorrect for some indigenous populations, and his time-budget analyses made a convincing case for the behavioral basis of this violation of expectations. This was a pretty heterogeneous collection of talks but they shared a very strong methodological basis to the research.

At a time when many anthropologists express legitimate concerns over their professional prospects, I have enormous confidence in this crop of students, all three of whom are regularly asked to do consulting for government and/or non-governmental organizations because of their subject knowledge and methodological expertise. Anthsci graduates — there weren’t that many of them since the department existed for less than 10 years — have done very well in the profession overall. I will list just a couple here whose work I knew well because I was on their committees or their work was generally in my area.

In addition to these grad students, I think that it’s important to note the success of the post-docs who worked either in Anthsci or with former Anthsci faculty on projects that started before the merger. Some of these outstanding people include:

In a discipline that is lukewarm at best on the very notion of methodology, I suspect that students with strong methodological skills — in addition to the expected theoretical sophistication and critical thinking (note that these skills do not actually trade off) — enjoy a distinct comparative advantage when entering a less-than-ideal job market. Of course, I don’t mean to imply that Anthsci didn’t have its share of graduates who leave the field out of frustration or lack of opportunity or who get stuck in the vicious cycle of adjunct teaching. But this accounting gives me hope. It gives me hope for both my current and future students and it gives me hope for the field. Maybe I’ll even go to the AAAs again next year…


This is Just What Greece Needs

August 23rd, 2012 · Climate Change, Human Ecology, Infectious Disease

Greece was officially deemed malaria-free in 1974. Recent reports, however, suggest that there is ongoing autochthonous transmission of Plasmodium vivax malaria. According to a brief report from the Mediterranean Bureau of the Italian News Agency (ANSAmed), 40 cases of P. vivax malaria have been reported in the first seven months of 2012. Of these 40, six had no history of travel to areas known to be endemic for malaria transmission. The natural inference is thus that they acquired it locally (i.e., “autochthonously”) and that malaria may be back in Greece.

More detail on the malaria cases in Greece can be found on this European Centre for Disease Prevention and Control website. The actual ECDC report on autochthonous malaria transmission in Greece can be found here. A point in that report that is not mentioned in the ANSAmed newswire is that 2012 marks the third consecutive year in which autochthonous transmission has been inferred in Greece. So much for Greece being malaria-free.


Why the Prediction Market Failed to Predict the Supreme Court

July 8th, 2012 · Social Network Analysis

There is a very interesting piece in the New York Times today by David Leonhardt on the apparent backlash against prediction markets such as Intrade and Betfair. In principle, these markets make predictions by aggregating the disparate information of many independent bettors who offer prices for a particular outcome. Prediction markets have enjoyed a fair amount of success in recent elections. The University of Iowa has even set up an influenza prediction market. But prediction markets are hardly perfect and have had some pretty big recent failures. It turns out that Intrade failed in a pretty spectacular manner to predict the outcome of the recent Supreme Court ruling about the constitutionality of the Affordable Care Act. Leonhardt suggests that some of the failures of online prediction markets are attributable to the relatively small number of people who actually trade on the market:

But the crowd was not everywhere wise. For one thing, many of the betting pools on Intrade and Betfair attract relatively few traders, in part because using them legally is cumbersome. (No, I do not know from experience.) The thinness of these markets can cause them to adjust too slowly to new information.

This may have been an issue with the ACA decision but the primary problem with the incorrect prediction is that the crowd doesn’t actually know much about the workings of the very closed social network that is the United States Supreme Court. Writes Leonhardt:

And there is this: If the circle of people who possess information is small enough — as with the selection of a vice president or pope or, arguably, a decision by the Supreme Court — the crowds may not have much wisdom to impart. ‘There is a class of markets that I think are basically pointless,’ says Justin Wolfers, an economist whose research on prediction markets, much of it with Eric Zitzewitz of Dartmouth, has made him mostly a fan of them. ‘There is no widely available public information.’

This point gets at a larger critique of market-based solutions to problems suggested by my Stanford colleague Mark Granovetter over 25 years ago (Granovetter 1985). This is the problem of embeddedness. The idea of embeddedness was anticipated by the work of substantivist economist Karl Polanyi, but Granovetter really laid out the details. Granovetter writes (1985: 487): “A fruitful analysis of human action requires us to avoid the atomization implicit in the theoretical extremes of under- and oversocialized conceptions [of human action]. Actors do not behave or decide as atoms outside a social context, nor do they adhere slavishly to a script written for them by the particular intersection of social categories that they happen to occupy. Their attempts at purposive action are instead embedded in concrete, ongoing systems of social relations.” In the prediction-market context, atomization corresponds to independent bettors making decisions about the price they are willing to pay for a certain outcome.
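The difference between atomized and embedded bettors can be illustrated with a toy simulation (the numbers are entirely hypothetical). When traders' errors are independent, averaging washes them out; when every trader inherits the same information — and so the same mistake — the shared error survives aggregation no matter how large the crowd:

```python
import random

random.seed(42)
truth = 0.7      # hypothetical true probability of the outcome
n_traders = 500

def crowd_estimate(shared_weight):
    # Each trader's guess = truth + shared error + private error.
    # shared_weight = 0 gives atomized (independent) traders;
    # shared_weight = 1 gives fully embedded traders who all draw
    # on the same information source (and its error).
    shared = random.gauss(0, 0.2)
    guesses = [truth + shared_weight * shared
               + (1 - shared_weight) * random.gauss(0, 0.2)
               for _ in range(n_traders)]
    return sum(guesses) / n_traders

def mean_abs_error(shared_weight, n_markets=200):
    # Average the crowd's error over many hypothetical markets.
    return sum(abs(crowd_estimate(shared_weight) - truth)
               for _ in range(n_markets)) / n_markets

# Independent crowds land much closer to the truth than embedded ones.
assert mean_abs_error(0.0) < mean_abs_error(1.0)
```

The design point is the familiar one from sampling theory: averaging n independent errors shrinks the noise by a factor of roughly the square root of n, but a common error component does not shrink at all — which is one way to read the Supreme Court case, where essentially all the public information was shared.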

The argument for embeddedness emerges in Granovetter’s paper from the problem of trust in markets. Where does trust come from in competitive markets? The fundamental problem here regards the micro-foundations of markets, where “the alleged discipline of competitive markets cannot be called on to mitigate deceit, so the classical problem of how it can be that daily economic life is not riddled with mistrust and malfeasance has resurfaced.” (p. 488). The obvious solution to this is that actors choose to deal with alters whom they trust and that the most effective way to develop trust is to have prior dealings with an alter.

Granovetter’s embeddedness theory is a modest one. He notes that, unlike the alternative models, his “makes no sweeping (and thus unlikely) predictions of universal order or disorder but rather assumes that the details of social structure will determine which is found.” (p. 493)

These ideas about the careful analysis of social structure and networks of interlocking relationships are fundamental for understanding when the crowd will be wise and when it will not. They are also essential for developing effective development interventions and, for that matter, making markets work for the public good in general. The theory of embeddedness allows for the possibility that markets can work, but if we are to understand when they work and when they don’t, we need to think about social structure as more than just a bit of friction in an ideal market and take its measurement more seriously. People are not ideal gases. (Dirty little secret: most gases are not ideal gases). This gets at some problems I have been thinking about a lot recently, relating to additive observational noise vs. process noise and the implications for prediction of multi-species epidemics, but that must wait for another post…


On Global State Shifts

July 5th, 2012 · Climate Change, Human Ecology

This is an edited version of a post I sent out to the E-ANTH listserv in response to a debate over a recent paper in Nature and the response to it on the website “Clear Science,” written by Todd Myers. In this debate, it was suggested that the Barnosky paper is the latest iteration of alarmist environmental narratives in the tradition of the master of that genre, Paul Ehrlich. Piqued by this conversation, I read the Barnosky paper and passed along my reading of it.

Myers’s piece on the “Clear Science” web site is quite rhetorically clever. Climate-change deniers have a difficult task if they want to convincingly buck the overwhelming majority of reputable scientists on this issue. Myers uses ideas about the progress of science developed by the philosopher Thomas Kuhn in his classic book, The Structure of Scientific Revolutions. By framing the Barnosky et al. paper as mindlessly toeing the Kuhnian normal-science line, he has come up with a shrewd strategy for dealing with the serious scientific consensus around global climate change. Myers suggests that “Like scientists blindly devoted to a failed paradigm, the Nature piece simply tries to force new data to fit a flawed concept.”

I think that a pretty strong argument can be made that the perspective represented in the Barnosky et al. paper is actually paradigm-breaking. For 200 years the reigning paradigm in the historical sciences has been uniformitarianism. Hutton’s notion — that processes that we observe today have always been working — greatly extended the age of the Earth and allowed Lyell and Darwin to make their remarkable contributions to human understanding. This same principle allows us to make sense of the archaeological record and of ethnographic experience. It is a very useful foil for all manner of exceptionalist explanatory logic and I use it frequently.

However, there are plenty of ways that uniformitarianism fails. If we wanted to follow the Kuhnian narrative, we might say that evidence has mounted that leads to increased contradictions arising from the uniformitarian explanatory paradigm. Rates of change show heterogeneities and when we try to understand connected systems characterized by extensive feedback, our intuitions based on gradual change can fail, sometimes spectacularly. This is actually, apocalyptic popular writings aside, a pretty revolutionary idea in mainstream science.

Barnosky et al. draw heavily on contemporary work in complex systems. The theoretical paper (Scheffer et al. 2009) upon which the Barnosky paper relies heavily represents a real step forward in the theoretical sophistication of this corpus and does so by making unique and testable predictions about systems approaching critical transitions. I have written about it previously here.

The most difficult part of projecting the future state of complex systems is the human element. This leads too many physical and biological scientists to simply ignore social and behavioral inputs. This said, there are far too few social and behavioral scientists willing to step up and do the hard collaborative work necessary to make progress on this extremely difficult problem. The difficulty of projecting human behavior often leads to projections of the business-as-usual variety and, unfortunately, these are often mischaracterized by the media and other readers. Such projections simply assume no change in behavior and look at the consequences some time down the line. A business-as-usual projection actually provides a lot of information, albeit about a very hypothetical future. What if things stayed the way they are? Yes, behavior changes. People adapt. Agricultural production becomes more efficient. Prices increase, reducing demand and allowing sustainable substitutes. Of course, sometimes things get worse too. Despite tremendous global awareness and lots of calls to reduce greenhouse gas emissions, carbon emissions have continued to rise. So, there is nothing inherently flawed about a business-as-usual projection. We just need to be clear about what it means when we use one.

A criticism that emerged on the list is that Barnosky et al. is essentially “an opinion piece.” However, the great majority of the Barnosky et al. paper is, in fact, simply a review. There are numerous facts to be reviewed: biodiversity has declined, fisheries have crashed, massive amounts of forest have been converted and degraded, the atmosphere has warmed. They are facts. And they are facts about which many vested interests would like to sow artificial uncertainty for political purposes. Positive things have happened too (e.g., malaria eradication in temperate climes, increased food security in some places that used to be highly insecure, increased agricultural productivity — though this may be of dubious sustainability), though these are generally on more local scales and, in some cases, may simply reflect the exporting of problems from rich countries to the Global South. The fact that they are not reviewed does not mean that the paper belongs in a hysterical chicken-little genre.

A common critique of the doomsday genre is the certainty with which the horrible outcomes are framed. The Barnosky paper is suffused with uncertainty. In fact, this is the main point I take away from it! The first conclusion of the paper is that “it is essential to improve biological forecasting by anticipating critical transitions that can emerge on a planetary scale and understanding how such global forcings cause local changes.” This suggests to me that the authors are acknowledging massive uncertainty about the future, not saying that we are doomed with certainty. Or how about: “the plausibility of a future planetary state shift seems high, even though considerable uncertainty remains about whether it is inevitable and, if so, how far in the future it may be”?

Myers writes “they base their conclusions on the simplest linear mathematical estimate that assumes nothing will change except population over the next 40 years. They then draw a straight line, literally, from today to the environmental tipping point.” This is a profoundly misleading statement. Barnosky et al. are using the fold catastrophe model discussed in Scheffer et al. (2009). The Scheffer et al. analysis of the fold catastrophe model uses some fairly sophisticated ideas from complex systems theory, but the ideas are relatively simple. The straight line that so offends Myers arises because this is the direction of the basin of attraction. In the figure below, I show the fold-catastrophe model. The abscissa represents the forcing conditions of the system (e.g., population size or greenhouse gas emissions). The ordinate represents the state of the system (e.g., land cover or one of many ecosystem services). The sideways N represents an attractor — a more general notion of an equilibrium. The state of the system tends toward this curve whenever it is perturbed away.

The region in the interior of the fold (indicated by the dashed line) is unstable, while the upper and lower arms (indicated by solid lines) are stable and draw perturbed states back toward them. The grey arrows indicate the basin of attraction. When the system is knocked off the attractor by some random shock, the state tends to move in the direction indicated by the arrow. When the state is forced all the way down the top arc of the fold, it enters a region where a relatively small shock can send it into a qualitatively different regime of rapid degradation. This is illustrated by the black arrow (a shock) pushing the state away from point F2. The state will settle again on the attractor, but a second shock will send it rapidly down toward the bottom arm of the fold (point F1). Note that this region of the attractor is also stable, so it would take a lot of work to get the system back up again (e.g., reducing population or drastically reducing total greenhouse gas emissions). This is what people mean when they colloquially refer to a “global tipping point.”
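To make the geometry concrete, here is a minimal numerical sketch of the canonical normal form of the fold catastrophe, dx/dt = a + x − x³, where a is the forcing and x the state. This is my own illustration, not the Barnosky et al. analysis; the equilibrium curve a = x³ − x is the sideways-N attractor described above, and the fold points (F1, F2) fall where the stability condition changes sign.

```python
import math

# Canonical fold catastrophe (an illustrative sketch, not the Barnosky
# et al. analysis itself):
#   dx/dt = f(x; a) = a + x - x**3
# where a is the forcing (abscissa) and x the system state (ordinate).
# Equilibria lie on the sideways-N curve a = x**3 - x.

def equilibrium_forcing(x):
    """Forcing a at which state x is an equilibrium (f(x; a) = 0)."""
    return x**3 - x

def is_stable(x):
    """An equilibrium is stable when df/dx = 1 - 3*x**2 < 0."""
    return 1.0 - 3.0 * x**2 < 0.0

# Trace the attractor: the outer arms are stable, the interior arc unstable.
for x in [i * 0.5 - 1.5 for i in range(7)]:
    arm = "stable" if is_stable(x) else "unstable"
    print(f"x = {x:+.2f}  a = {equilibrium_forcing(x):+.3f}  {arm}")

# The fold points (F1, F2) sit where df/dx = 0, i.e. x = +/- 1/sqrt(3).
# Past the corresponding forcing, the nearby arm ceases to exist and the
# state must jump to the other arm: the "tipping point."
fold_x = 1.0 / math.sqrt(3.0)
print(f"folds at x = +/-{fold_x:.3f}, |a| = {abs(equilibrium_forcing(fold_x)):.3f}")
```

The hysteresis in the prose above falls right out of this form: once the state has jumped to the lower arm, reversing the forcing a back past the fold value is not enough to jump back, because the lower arm remains stable over the whole folded region.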

This is the model. It may not be right, but thanks to Scheffer et al. (2009), it makes testable predictions. By framing global change in terms of this model, Barnosky et al. are making a case for empirical investigation of the types of data that can falsify the model. Maybe because of the restrictions placed on them by Nature (and these are severe!), maybe because of some poor choices of their own, they include an insufficiently explained, fundamentally complex figure that a critic with clear interests in muddying the scientific consensus can seize on to dismiss the whole paper as just more Ehrlich-style hysteria.

For me — as I suspect for the authors of the Barnosky et al. paper — massive, structural uncertainty about the state of our planet, coupled with a number of increasingly well-supported models of the behavior of nonlinear systems (i.e., not simply normal science), strongly suggests a precautionary principle. This is something that the economist Marty Weitzman suggested in his (highly technical and therefore not widely read) paper in 2009 and that I have written about before here and here. This is not inflammatory fear-mongering, nor is it grubbing for grant money (I wish it were that easy!). It is responsible scientists doing their best to communicate the state of the science within the constraints of society and the primary mode of scientific communication. Let’s not be taken in by writers pretending to present “just the facts” in a cool, detached manner but who actually have every reason to try to foment unnecessary uncertainty about the state of our world and impugn the integrity of people doing their level best to understand a rapidly changing planet.
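The flavor of Weitzman’s argument can be conveyed with a quick simulation. This is my own illustration of the fat-tail intuition, not Weitzman’s actual derivation: when possible damages are fat-tailed enough, the expected loss that a conventional cost-benefit calculation relies on simply fails to converge, so the tails, rather than the central tendency, should drive policy.

```python
import random

# Illustrative sketch of the fat-tail intuition behind Weitzman (2009),
# not his model. Draw hypothetical "damages" from a Pareto distribution:
# for shape alpha > 1 the mean exists; for alpha <= 1 it is infinite,
# and sample averages never settle down no matter how much data you have.

random.seed(42)

def pareto_draw(alpha, xm=1.0):
    """Inverse-CDF sample from Pareto(alpha) with minimum value xm."""
    u = random.random()
    return xm / (1.0 - u) ** (1.0 / alpha)

def mean_damage(alpha, n=200_000):
    """Monte Carlo estimate of expected damage."""
    return sum(pareto_draw(alpha) for _ in range(n)) / n

# Thin-ish tail: the true mean is alpha / (alpha - 1) = 1.5, and the
# Monte Carlo estimate lands close to it.
print("alpha = 3.0:", mean_damage(3.0))

# Fat tail: the "estimate" is dominated by the largest single draw and
# jumps around wildly from run to run, because no finite mean exists.
print("alpha = 0.9:", mean_damage(0.9))
```

Under structural uncertainty we cannot rule out the fat-tailed case, and that is precisely the situation in which averaging-based reassurance is least trustworthy.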

References

Kuhn, T. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Scheffer, M., J. Bascompte, W. A. Brock, V. Brovkin, S. R. Carpenter, V. Dakos, H. Held, E. H. van Nes, M. Rietkerk, and G. Sugihara. 2009. Early-Warning Signals for Critical Transitions. Nature. 461 (7260):53-59.

Weitzman, M. L. 2009. On Modeling and Interpreting the Economics of Catastrophic Climate Change. The Review of Economics and Statistics. XCI (1):1-19.

 


AAPA 2012 Run-Down

April 16th, 2012 · Anthropology, science

I am done with this year’s American Association of Physical Anthropologists annual meeting in Portland. Alas, I am not yet home as I had a scheduling snafu with Alaska Airlines yesterday and there was literally not a single seat on a flight to any airport in the Bay Area. So, I hung out in PDX for the night, where my sister-in-law is finishing up her MD/MPH at OHSU. Staying an extra night allowed me to have dinner at what is probably my favorite pizzeria on the West Coast, Bella Faccia on Alberta Ave in Northeast (Howie’s in Palo Alto is a close second). I also had a lovely breakfast of risotto cakes and poached eggs at La Petite Provence, also on Alberta. All in all, a fantastic couple days’ worth of food.

It was great to get a chance to catch up with old friends and colleagues and meet new ones. This is really what professional meetings are about. I had a chance to spend time with Charles Roseman, Rick Bribiescas, Josh Snodgrass, my EID buddy Nelson Ting, Kirstin Sterner, and Frances White. I also had very nice, if too brief, chats with Connie Mulligan, Lorena Madrigal, Larry Sugiyama, Greg Blomquist, Zarin Machanda, Melissa Emery Thompson, Cheryl Knott, Andy Marshall, and Chris Kuzawa.

I only go to the AAPAs every couple of years. Given the interdisciplinarity of my work and interests, I struggle to find a “home” professional meeting. Sometimes I feel like it’s PAA; sometimes Sunbelt; sometimes AAPA/HBA. One thing I can say for certain is that it is not AAA, my semi-annual experience in ethnographic surreality. Such a peculiar discipline anthropology is. Part of the reason I don’t go to AAPAs all that often is that I rarely find all that much interesting there. There are a few really fantastic people working in the field, but most of the talks I find stupefyingly boring. I’m just not that interested in teeth. I suppose this is true for any professional meeting, so I shouldn’t be too hard on AAPA — I’m also not especially interested in contraceptive uptake, social media/online networks, or governmentality, apparently the modal topics in my competing meetings. In fact, I was pleasantly surprised by the diversity and quality of talks I saw at AAPA this year.

In my session alone, I saw really terrific and interesting talks by Steve Leigh and Connie Mulligan. Steve spoke on the comparative gut microbiomes of primates and Connie presented early results on the modification of gene expression through methylation of infants born to women who experienced extreme psychosocial and physical trauma in eastern Congo. Really important stuff. It also struck me that you’d probably only see these types of talks at the AAPAs.

There were a lot of young people at this meeting — a greater fraction than I remember from past meetings.  Maybe it was the draw of hipster Portland with its great beer, great food, and general atmosphere of grooviness. Maybe there really are lots and lots of young physical anthropologists being trained these days. I must admit that I had mixed feelings about this thought as I looked out over the vast river of twenty-something faces pouring into the hotel bar Saturday night. On the one hand, it’s great that people are being trained to do good work in physical anthropology. On the other hand, I worry about the ability of our discipline, which shows no signs of stopping with the charade that somehow anthropology is really akin to literary criticism, to absorb this many new Ph.D.s from (one of) the scientific wings of modern anthropology.

Two of the talks immediately before me in my session were, in fact, by young scientists and they were great. Andrew Paquette, from Northern Arizona University, gave a talk on the evolutionary history of Southeast Asian Ovalocytosis (SAO), a twenty-seven base pair deletion in the eleventh exon of the SLC4A1 gene that confers strong protection against infection with Plasmodium falciparum, the most dangerous form of malaria. Turns out this mutation, which has its geographic epicenter in Nusa Tenggara in Indonesia, is surprisingly ancient. Lots more to come from this, I’m sure. Margaux Keller, from Temple, gave a fantastic talk on finding some of the missing heritability in Parkinson’s disease. Missing heritability of complex disease phenotypes is a major topic in genetic epidemiology and Margaux and her colleagues applied Genome-Wide Complex Trait Analysis to eight cohorts of case-control studies of PD. Their results substantially increase (i.e., by a factor of 10!) the fraction of total phenotypic variance in PD explained compared to straight-up genome-wide association studies (GWAS). In addition to the excellent scientific content of her presentation, I was struck by the very nice and original visual aesthetic of her slides.

I spoke on my recent work on the quantitative genetics of life-history traits.  With Statistics grad student Philip Labo, I’ve been doing some pretty serious number-crunching to examine the heritabilities of and (more interestingly) genetic correlations between human life-history characters. Good results that should be seeing some more light soon (including at PAA next month!).


(Text Processing) Paradigms Lost

April 11th, 2012 · Uncategorized

Tom Scocca wrote a brilliant essay in Slate today on the absurdities of Microsoft Word being the standard text-processing tool in the age of digital publishing. I struggle to get students doing statistical and demographic analysis in R not to use Word because of all the unwanted junk it brings to even the most trivial text-processing task. Using the word2cleanhtml website, Scocca shows how a two-word text chunk written in Word contains the equivalent of eight pages of unnecessary hidden text!

I encounter all the nonsense associated with the default “typographical flourishes” that Scocca discusses in my role as associate editor of a couple of journals and as a regular reviewer for NSF. Both of these roles make extensive use of web-based platforms for managing workflows associated with writing-intensive tasks (ScholarOne for editing and Fastlane for NSF) and both snarf on the typographical annoyances Scocca enumerates (“smart” quotes, automatic em-dashes, etc.). When you do an NSF panel, you receive a briefing explaining that if you are going to write your panel summaries in Word, you need to turn off smart quotes and avoid other features that will lead to nonsense in the plain-text fields of Fastlane. Of course, no one does this.
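For the curious, the fix is mechanical. Here is a minimal sketch (my own, and deliberately not exhaustive) of the kind of character mapping that tools like word2cleanhtml automate: translating Word’s “smart” characters back to the plain ASCII that these web forms expect.

```python
# Map Word's "typographical flourishes" back to plain ASCII before
# pasting into a plain-text field. The table below covers the usual
# offenders; it is illustrative, not exhaustive.

SMART_CHARS = {
    "\u201c": '"',    # left double quote
    "\u201d": '"',    # right double quote
    "\u2018": "'",    # left single quote
    "\u2019": "'",    # right single quote / apostrophe
    "\u2013": "-",    # en dash
    "\u2014": "--",   # em dash
    "\u2026": "...",  # ellipsis
    "\u00a0": " ",    # non-breaking space
}

def asciify(text: str) -> str:
    """Replace common Word 'smart' characters with ASCII equivalents."""
    return text.translate(str.maketrans(SMART_CHARS))

print(asciify("\u201cIt\u2019s fine\u201d \u2014 really\u2026"))
# -> "It's fine" -- really...
```

A dozen lines of code, in other words, versus a briefing that everyone ignores.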

Don’t get me started on track changes…

I do the great majority of my own writing in a plain-text editor. My personal favorite is Aquamacs, a Mac-native variant of GNU Emacs. Emacs is definitely not for everyone, but there are lots of other possibilities. Scocca writes that he has turned to TextEdit, another Mac-native application, but there are plenty of other options that run on different systems. Here is a list of possibilities.

It will be interesting to see how online collaborative tools such as Google Docs change the way people do text processing. More and more of my students do their work in Google Docs. It’s certainly not a majority yet, but the fraction is growing rapidly each year. As Scocca notes, Google Docs provides a much saner alternative to track changes, among other things.

Microsoft clearly needs to get serious and do a bit of innovation here if they want to stay in this particular game. I, for one, will not miss MS Word if it should go the way of WordStar.
