Aedes aegypti in San Mateo County

The mosquito, Aedes aegypti, which is the vector for a number of world scourges (e.g., dengue, yellow fever), has been found in San Mateo County (just across San Francisquito Creek from Stanford) for the first time since 1979. That makes three counties in California where the mosquito has been found. While not a panic-inducing development, it would be most excellent if the good people of San Mateo and Santa Clara counties would make sure their yards are free of mosquito breeding habitat!

Ecology and Evolution of Infectious Disease

I am recently back from the 2013 Ecology and Evolution of Infectious Disease Conference at Penn State University. This was quite possibly the best meeting I have ever attended, not so much for the science (which was nonetheless impeccable) as for the culture. I place the blame for this awesome culture firmly on the shoulders of the leaders of this field and, in particular, the primary motivating force behind its recent emergence, Penn State's Peter Hudson. Since I had attended the other EEID conference at UGA earlier this Spring (another great conference), I had no intention of attending the Penn State conference this year. Then, one day in late March, Nita Bharti asked me if I was going and mentioned, "You know it's Pete's 60th birthday, right?" Well, that sealed it; I really had no choice. I simply had to go, if for no other reason than to pay my due respect to this man I admire so greatly. Pete has the most relentless optimism about the future of science, and the greatest willingness to make things happen, that I have ever encountered and, in this way, has provided me with one of my primary role models as a university professor and mentor. He has played a role in developing so many of the brilliant people who make this field so exciting that it's amazing (just a sample that comes immediately to mind: Ottar Bjornstad, Matt Ferrari, Nita Bharti, Marcel Salathé, Isabella Cattadori, Jamie Lloyd-Smith, Shweta Bansal, Jess Metcalf...). Of course, even as I write this, the joint influence of another major player in the field, Bryan Grenfell, formerly of Penn State but now at Princeton, becomes obvious. A great scientist in his own right, Pete is also the master facilitator, providing the support (and institutional interference!) that allows young scholars to thrive. He is a talent-spotter extraordinaire.

The talks that made up the bulk of the scientific program were, for the most part, excellent. The average age of the speakers was about 30, maybe just a bit higher. When one attends an academic conference, one typically expects that the major addresses to the collected masses will be given by geezers, er, senior scholars in the field. There was a clear inversion of the standard model here, though. Speakers were clearly chosen for their trajectories, not their past achievements. That's pretty great. When I went up for tenure at Stanford, I was told that Stanford does not really care about what you have done; it cares about what you will do. Of course, the best information that the university has about your future work is the work you have already done. This conference embodied this spirit by placing the future (and, in many cases, current) leaders of the field in the key speaking roles while some of the biggest names in ecology, population biology, and epidemiology sat happily in the audience (e.g., joining Hudson and Grenfell were Andy Dobson, Andrew Read, Mick Crawley, Charles Godfray, Mike Boots, Mercedes Pascual, Les Real, Matt Thomas, ...).

The tone set by these great mentors carried through to the whole culture of the conference, where senior people attended the poster sessions, sat with students at lunches and dinners, and schmoozed at the plentiful open-bar mixers. For example, on the first full day of the conference, there was an afternoon poster session that started at 4:30 (we had been in back-to-back sessions since 8:30). This session was preceded by an hour-long poster-teaser session in which grad students and post-docs got up and presented 60-second (and, as Andrew Read noted, not one nanosecond more) teasers of their posters. Bear in mind, this session was made up entirely of students and post-docs. It was striking that essentially every seat in the house was occupied and all the major players were present. The teasers were great – many were very funny, including a haiku apparently written by a triatomine bug and translated for us by Princeton EEB student Jennifer Peterson.

After the teasers, the conference went en masse to the fancy new Millennium Science Complex (it turns out that Pete Hudson has physical capital projects in addition to human capital ones!). There, participants milled about the 150 posters. After spending quite a bit of time doing this – and dutifully getting pictures of all my lab with their posters – I thought to check the time and realized it was nearly 6:30. The poster session had been going for two hours and nearly everyone was still there, including all the luminaries. It helped that there was free beer. I tweeted my amazement at this realization:

That is, in fact, Princeton's Bryan Grenfell moving fast in the middle of the picture, apparently making a bee-line for Michigan's Aaron King. Andrew Read is in the far background, talking to a poster-presenter (he has that posture).

Scientific highlights for me included Caroline Buckee's talk about measuring mobility in the context of malaria transmission in Kenya and Derek Cummings's talk on the Fluscape Project to measure spatial heterogeneity in influenza transmission in China. I am a long-time fan of this project and it's nice to see the great work that has come out of it. These talks were right in my wheelhouse of interest, but there were plenty of other cool ones, including Britt Koskella's talk on the dynamics of bacteria and phage on tree leaves.

Stanford was exceedingly well represented at this conference. My lab had no fewer than five posters. Ashley Hazel presented on her work with Carl Simon on modeling gonorrhea transmission dynamics in Kaokoland, Namibia. Whitney Bagge presented her work on remote-sensing of rodent-borne disease in Kenya. Alejandro Feged presented work on the transmission dynamics of malaria in the Colombian Amazon among the indigenous Nukak people. Laura Bloomfield presented her remote sensing and spatial analysis work from our project on the spillover of primate retroviruses in Western Uganda. I closed things out with a minimalist poster on simple graphical models for multiple attractors in vector-borne disease dynamics in multi-host ecologies. In addition to my lab group, Giulio De Leo (with whom I have been running a weekly disease ecology workshop at Woods since winter quarter) was there, helping to bridge all sorts of structural holes in our collective collaboration graphs.

The other thing that comes out of these meetings, especially more intimate ones like EEID, is some actual work on collaborative projects. I managed to find some time to sit down and discuss plans with collaborators as well as do some shameless recruitment for my planned re-submission of the Stanford Biodemography Workshops. I'm really excited about some of these collaborations, including one that brings together my two major areas of interest: biodemography and life history theory on the one hand, and infectious disease ecology on the other.

Oh, and I'm convinced that there must be an interpretive dance component to the Ph.D. exam in the Grenfell lab. This is certainly the most parsimonious explanation for much of what I saw Wednesday night.

The Return of Lahontan Cutthroat Trout

The New York Times had a terrific story on Wednesday on the recovery of an endemic trout previously believed to have been extinct in Pyramid Lake, Nevada since the 1940s. As I am currently teaching my class, Ecology, Evolution, and Human Health, with its emphasis on adaptation as local process and human-environment interaction, I was happy to see such an excellent story about local adaptation. In a nutshell, the trout was over-fished and also suffered devastating population declines in Pyramid Lake because of predation from introduced brook trout (and other exotic salmonids) and hybridization with introduced rainbows. This is, alas, an all too common story for trout endemics of western North America. A remnant population of Lahontan cutthroats that was genetically very similar to the original Pyramid stock was found in a Pilot Peak stream near the Utah border, and samples from this population were brought to a USFWS breeding facility in cooperation with the Paiute Nation. It sounds like the breeding/stocking program has been a tremendous success and the Lahontan cutties have now returned to Pyramid Lake. A big part of the story appears to be the intensive management of the main prey item of Lahontan cutties, the cui-ui sucker, which was devastated following the construction of the Derby Dam in 1905.

This was all great news, but the thing that really caught my attention (because I'm currently teaching this class that focuses on adaptation) was the fact that the re-introduced Lahontan cutties have thrived so rapidly:

Since November, dozens of anglers have reported catching Pilot Peak cutthroats weighing 15 pounds or more. Biologists are astounded because inside Pyramid Lake these powerful fish, now adolescents, grew five times as fast as other trout species and are only a third of the way through their expected life span.

Can you say adaptation?! There is something about the interaction between this particular cutthroat species and the environment of Pyramid Lake that makes for giant fish as long as the juveniles can escape predation by exotic salmonids and adults can prey on their preferred species. Great news for anglers, great news for the Paiute Nation, great news for ecology.

Ecology and Evolution of Infectious Disease, 2013

I am recently back from the Ecology and Evolution of Infectious Disease (EEID) Principal Investigators' Meeting hosted by the Odum School of Ecology at the University of Georgia in lovely Athens. This is a remarkable event, and a remarkable field, and I can't remember ever being so energized after returning from a professional conference (professional conferences often leave me dismayed or even depressed about my field). EEID is an innovative, highly interdisciplinary funding program jointly managed by the National Science Foundation and the National Institutes of Health. I have been lucky enough to be involved with this program for the last six years. I've served on the scientific review panel a couple of times and am now a Co-PI on two projects.

We had a big turn-out for our Uganda team in Athens and team members presented no fewer than four posters. The Stanford social networks/human dimensions team (including Laura Bloomfield, Shannon Randolph and Lucie Clech) presented a poster ("Multiplex Social Relations and Retroviral Transmission Risk in Rural Western Uganda") on our preliminary analysis of the social network data. Simon Frost's student at Cambridge, James Lester, presented a poster ("Networks, Disease, and the Kibale Forest") analyzing our syndromic surveillance data. Sarah Paige from Wisconsin presented a poster on the socio-economic predictors of high-risk animal contact ("Beyond Bushmeat: Animal contact, injury, and zoonotic disease risk in western Uganda") and Maria Ruiz-López, who works with Nelson Ting at Oregon, presented a poster on their work on developing the resources to do some serious population genetics on the Kibale red colobus monkeys ("Use of RNA-seq and nextRAD for the development of red colobus monkey genomic resource").

Parviez Hosseini, from the EcoHealth Alliance, also presented a poster for our joint work on comparative spillover dynamics of avian influenza ("Comparative Spillover Dynamics of Avian Influenza in Endemic Countries"). I'm excited to get more work done on this project which is possible now that new post-doc Ashley Hazel has arrived from Michigan. Ashley will oversee the collection of relational data in Bangladesh and help us get this project into high gear.

The EEID conference has a unique take on poster presentations, which makes it much more enjoyable than the typical professional meeting. In general, I hate poster sessions. Now, don't get me wrong: I see lots of scientific value in them and they can be a great way for people to have extended conversations about their work. They can be an especially great forum for students to showcase their work and start the long process of building professional networks. However, there is an awkwardness to poster sessions that can be painful for the hapless conference attendee who might want, say, to walk through the room in which a poster session is being held. These rooms tend to be heavy with the smell of desperation and one has to negotiate a gauntlet of suit-clad, doe-eyed graduate students desperate to talk to anyone who will listen about their work. "Please talk to me; I'm so lonely" is what I imagine them all saying as I briskly walk through, trying to look busy and purposeful (while keeping half an eye out for something really interesting!).

The scene at EEID is much different. All posters go up at the same time and the site-fidelity of poster presenters is the lowest I have ever seen. It has to be since, if everyone stuck by their poster, there wouldn't be anyone to see any of them! What this did was allow far more mixing than I normally see at such sessions and avoid much of the inherent social awkwardness of a poster session. Posters also stayed up long past the official poster session. I continued to read posters for at least a day after the official session ended. Of course, it helps that there was all manner of great work being presented.

There were lots of great podium talks too. I was particularly impressed with the talks by Charlie King of Case Western on polyparasitism in Kenya, Maria Diuk-Wasser of Yale on the emergence of babesiosis in the Northeast, Jean Tsao (Michigan State) and Graham Hickling's (Tennessee) joint talk on Lyme disease in the Southeast, and Bethany Krebs's talk on the role of robin social behavior in West Nile Virus outbreaks. Laura Pomeroy, from Ohio State, represented one of the few other teams with a substantial anthropological component extremely well, talking about the transmission dynamics of foot-and-mouth disease in Cameroon. Probably my favorite talk of the weekend was the last one, by Penn State's Matt Thomas. His group has done awesome work elucidating the role of temperature variability in the transmission dynamics of malaria.

It turns out that this was the last EEID PI conference. Next year, the EEID PI conference will be combined with the other EEID conference, which was originally organized at Penn State (and is there again this May). This combining of forces is, I'm sure, a good thing, as it will reduce confusion and make it more likely that all the people I want to see will show up. I just hope that this new, larger conference retains the charms of the EEID PI conference.

EEID is a new, interdisciplinary field that has grown thanks to the disproportionately large contributions of a few highly energetic people. One of the principals in this realm is definitely Sam Scheiner, the EEID program officer at NSF. The EEID PI meeting has basically been Sam's baby for the past 10 years. Sam has done an amazing job creating a community of interdisciplinary scholars and I'm sure I speak for every researcher who has been heavily involved with EEID when I express my gratitude for all his efforts.

On The Dilution Effect

A new paper written by Dan Salkeld (formerly of Stanford), Kerry Padgett (CA Department of Public Health), and me just came out in the journal Ecology Letters this week.

One of the most important ideas in disease ecology is a hypothesis known as the "dilution effect". The basic idea behind the dilution effect hypothesis is that biodiversity -- typically measured by species richness, or the number of different species present in a particular spatially defined locality -- is protective against infection with zoonotic pathogens (i.e., pathogens transmitted to humans through animal reservoirs). The hypothesis emerged from analysis of Lyme disease ecology in the American Northeast by Richard Ostfeld and his colleagues and students at the Cary Institute of Ecosystem Studies in Millbrook, New York. Lyme disease ecology is incredibly complicated, and there are a couple of different ways that the dilution effect can come into play even in this one disease system, but I will try to render it down to something easily digestible.

Lyme disease is caused by the spirochete bacterium Borrelia burgdorferi. It is a vector-borne disease transmitted by hard-bodied ticks of the genus Ixodes. These ticks are what is known as hemimetabolous, meaning that they experience incomplete metamorphosis involving larval and nymphal stages. Rather than forming a pupa, the larvae and nymphs resemble little bitty adults. An Ixodes tick takes three blood meals in its lifetime: one as a larva, one as a nymph, and one as an adult. At different life-cycle stages, the ticks have different preferences for hosts. Larval ticks generally favor the white-footed mouse (Peromyscus leucopus) for their blood meal, and this is where the catch is. It turns out that white-footed mice are extremely efficient reservoirs for Lyme disease. In fact, an infected mouse has as much as a 90% chance of transmitting infection to a larva feeding on it. The larvae then molt into nymphs and overwinter on the forest floor. Then, in spring or early summer a year after they first hatch from eggs, the nymphs seek vertebrate hosts. If an individual tick acquired infection as a larva, it can now transmit to its next host. Nymphs are less particular about their choice of host and are happy to feed on humans (or just about any other available vertebrate host).

This is where the dilution effect comes in. The basic idea is that the more potential hosts there are, such as chipmunks, shrews, squirrels, or skunks, the fewer chances there are that an infected nymph will take a blood meal on a person. Furthermore, most of these hosts are much less efficient at transmitting the Lyme spirochete than are white-footed mice. This lowers the prevalence of infection and makes it more likely that it will go extinct locally. It's not difficult to imagine the dilution effect working at the larval-stage blood meal too: if there are more species present (and the larvae are not picky about their blood meal), the risk of initial infection is also diluted.

In the highly fragmented landscape of northeastern temperate woodlands, when there is only one host species in a forest fragment, it is quite likely to be the white-footed mouse. These mice are very adaptable generalists that occur in a wide range of habitats from pristine woodland to degraded forest. Therefore, species-poor habitats tend to have mice but no other species. The idea behind the dilution effect is that by adding different species to the baseline of a highly depauperate assemblage consisting simply of white-footed mice, the prevalence of nymphal infection will decline and the risk of zoonotic infection of people will be reduced.
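To make the dilution arithmetic concrete, here is a toy calculation of my own (the competence values are made up for illustration and are not numbers from our paper or the Lyme literature): nymphal infection prevalence is treated as a simple weighted average of reservoir competence over the hosts on which larvae feed.

```python
# Toy illustration of the dilution effect (hypothetical competence values).
# "Competence" here is the probability that a larva feeding on a given host
# acquires the spirochete.

def nymphal_infection_prevalence(host_fractions, competences):
    """Expected fraction of nymphs infected, assuming larvae distribute
    their blood meals across hosts in proportion to host_fractions."""
    return sum(f * c for f, c in zip(host_fractions, competences))

# Hypothetical reservoir competences: mice high, alternative hosts much lower.
competence = {
    "white-footed mouse": 0.90,
    "chipmunk": 0.30,
    "squirrel": 0.15,
    "opossum": 0.05,
}

# Depauperate fragment: larvae feed only on mice.
print(nymphal_infection_prevalence([1.0], [competence["white-footed mouse"]]))  # 0.9

# Richer fragment: blood meals spread evenly across all four hosts.
print(nymphal_infection_prevalence([0.25] * 4, list(competence.values())))      # 0.35
```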

It is not an exaggeration to say that the dilution-effect hypothesis is one of the two or three most important ideas in disease ecology, and much of the explosion of interest in disease ecology can be attributed in part to such ideas. The dilution effect is also a nice idea. Wouldn't it be great if every dollar we invested in the conservation of biodiversity potentially paid a dividend in reduced disease risk? However, neither its importance to the field nor the beauty of the idea guarantees that it is actually scientifically correct.

One major issue with the dilution effect hypothesis is its problem with scale, arguably the central question in ecology. Numerous studies have shown that pathogen diversity is positively related to overall biodiversity at larger spatial scales. For example, in an analysis of global risk of emerging infectious diseases, Kate Jones and her colleagues from the Zoological Society of London showed that globally, mammalian biodiversity is positively associated with the odds of an emerging disease. Work by Pete Hudson and colleagues at the Center for Infectious Disease Dynamics at Penn State showed that healthy ecosystems may actually be richer in parasite diversity than degraded ones. Given these quite robust findings, how is it that diversity at a smaller scale is protective?

We use a family of statistical tools known as "meta-analysis" to aggregate the results of a number of previous studies into a single synthetic test of the dilution-effect hypothesis. It is well known that inferences drawn from small samples generally have lower precision (i.e., the estimates carry more uncertainty) than inferences drawn from larger samples. A nice demonstration of this comes from classical asymptotic statistics. The sampling distribution of a sample mean is centered on the true mean, and the standard deviation of this distribution is the standard error, which is defined as the standard deviation of the underlying distribution divided by the square root of the sample size. Say that in two studies the estimated standard deviation of the data is 10. In the first study, the estimate of the mean is based on a single observation, whereas in the second, it is based on a sample of 100 observations. The estimate of the mean in the second study is 10 times more precise than that from the first because 10/√1 = 10 while 10/√100 = 1.
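Here is that back-of-the-envelope calculation spelled out; it is just a restatement of the arithmetic above, nothing more.

```python
import numpy as np

# With the same data-level standard deviation (10), the standard error of the
# mean shrinks with the square root of the sample size.
sd = 10.0
for n in (1, 100):
    se = sd / np.sqrt(n)
    print(f"n = {n:3d} -> standard error of the mean = {se:.1f}")
# n =   1 -> standard error of the mean = 10.0
# n = 100 -> standard error of the mean = 1.0
```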

Meta-analysis allows us to pool estimates from a number of different studies to increase our sample size and, therefore, our precision. One of the primary goals of meta-analysis is to estimate the overall effect size and its corresponding uncertainty. The simplest way to think of effect size in our case is the difference in disease risk (e.g., as measured by the prevalence of infected hosts) between a species-rich area and a species-poor area. Unfortunately, a surprising number of studies don't publish this seemingly basic result. For such studies, we have to calculate a surrogate of effect size based on the test statistics that the authors report. This is not ideal -- we would much rather calculate effect sizes directly -- but, to paraphrase a dubious source, you do a meta-analysis with the statistics that have been published, not with the statistics you wish had been published. On this note, one of our key recommendations is that disease ecologists do a better job reporting effect sizes to facilitate future meta-analyses.
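For readers who like to see the mechanics, here is a minimal fixed-effect pooling sketch with invented numbers. It is not the model from our paper (consult the paper for the actual estimator); it just shows how inverse-variance weighting combines studies and shrinks the pooled standard error.

```python
import numpy as np

# Hypothetical per-study effect sizes (negative = biodiversity associated
# with lower disease risk) and their standard errors. Illustration only.
effects = np.array([-0.40, -0.10, 0.05, -0.25, -0.15])
ses = np.array([0.30, 0.12, 0.20, 0.25, 0.10])

weights = 1.0 / ses**2                       # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))   # SE of the pooled estimate

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# If the confidence interval crosses zero, we cannot confidently claim an
# overall protective effect of species richness.
```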

In addition to allowing us to estimate the mean effect size across studies and its associated uncertainty, another goal of meta-analysis is to test for the existence of publication bias. Stanford's own John Ioannidis has written on the ubiquity of publication bias in medical research. The term "bias" has a general meaning that is not quite the same as the technical meaning. By "publication bias", there is generally no implication of nefarious motives on the part of the authors. Rather, it typically arises through a process of selection at both the level of individual authors and the institutional level of the journals to which authors submit their papers. An author who is under pressure to be productive by her home institution and funding agencies is not going to waste her time submitting a paper that she thinks has a low chance of being accepted. This means that there is a filter at the level of the author against publishing negative results. This is known as the "file-drawer effect", referring to the hypothetical 19 studies with negative results that never make it out of the author's desk for every one paper publishing positive results. Of course, journals, editors, and reviewers also prefer papers with positive results to those without. These very sensible responses to incentives in scientific publication unfortunately aggregate into systematic biases at the level of the broader literature in a field.

We use a couple of methods for detecting publication bias. The first is a graphical device known as a funnel plot. We expect studies done on large samples to have estimates of the effect size that are close to the overall mean effect because estimates based on large samples have higher precision. On the other hand, smaller studies will have effect-size estimates that are more widely dispersed because random error has a bigger influence in small samples. If we plot the precision (e.g., measured by the standard error) against the effect size, we would expect to see an inverted-triangle shape -- a funnel -- in the scatter plot. Note -- and this is important -- that we expect the scatter around the mean effect size to be symmetrical. Random variation that causes effect-size estimates to deviate from the mean is just as likely to push the estimates above the mean as below it. However, if there is a tendency to not publish studies that fail to support the hypothesis, we should see an asymmetry in our funnel. In particular, there should be a deficit of low-powered studies whose effect-size estimates run opposite to the hypothesis. This is exactly what we found. Only studies supporting the dilution-effect hypothesis are published when they have very small samples. Here is what our funnel plot looked like.

Note that there are no points in the lower right quadrant of the plot (where species richness and disease risk would be positively related).
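If you want to see how that asymmetry arises, here is a small simulation (simulated data, not the studies in our analysis) that builds a funnel plot and then mimics publication bias by discarding imprecise studies whose effects point the "wrong" way.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulate a literature around a modest protective (negative) true effect.
rng = np.random.default_rng(1)
true_effect = -0.2
n_studies = 200
ses = rng.uniform(0.02, 0.5, n_studies)   # study standard errors
effects = rng.normal(true_effect, ses)    # observed effect sizes

# Crude publication filter: small (imprecise) studies with positive effects
# never get published, hollowing out one corner of the funnel.
published = ~((ses > 0.2) & (effects > 0))

plt.scatter(effects[published], ses[published], s=12)
plt.axvline(true_effect, linestyle="--")
plt.gca().invert_yaxis()                  # most precise studies at the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Simulated funnel plot with a missing lower-right corner")
plt.show()
```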

While the graphical approach is great and provides an intuitive feel for what is happening, it is nice to have a more formal way of evaluating the effect of publication bias on our estimates of effect size. Note that if there is publication bias, we will over-estimate our precision because the studies that are missing lie far from the mean (and on the wrong side of it). The method we use to measure the impact of publication bias on our estimate of uncertainty formalizes this idea. Known as "trim-and-fill", it uses an algorithm to find the most divergent, asymmetric observations. These are removed and the precision of the mean effect size is calculated; this sub-sample is known as the "truncated" sample. Then a sample of missing values is imputed (i.e., simulated from the implied distribution) and added to the base sample; this is known as the "augmented" sample. The precision is then re-calculated. If there is no publication bias, these estimates should not be too different. In our sample, we find that estimates of precision differ quite a bit between the truncated and augmented samples. We estimate that between 4 and 7 studies are missing from the sample.
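As a cartoon of the trim-and-fill logic (emphatically not the actual trim-and-fill estimator we used, and with made-up numbers), one can reflect a presumed number of missing studies about the pooled mean and watch the pooled estimate drift toward zero:

```python
import numpy as np

def pool(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate and its SE."""
    w = 1.0 / ses**2
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Hypothetical published studies, skewed toward negative (protective) effects.
effects = np.array([-0.45, -0.35, -0.30, -0.20, -0.15, -0.10, -0.05])
ses = np.array([0.30, 0.28, 0.20, 0.15, 0.12, 0.10, 0.08])

est0, se0 = pool(effects, ses)

k = 3                                      # suppose k studies are "missing"
idx = np.argsort(effects)[:k]              # the k most extreme negative studies
mirrored = 2 * est0 - effects[idx]         # reflect them about the pooled mean
aug_effects = np.concatenate([effects, mirrored])
aug_ses = np.concatenate([ses, ses[idx]])

est1, se1 = pool(aug_effects, aug_ses)
print(f"observed sample:  {est0:+.3f} (SE {se0:.3f})")
print(f"augmented sample: {est1:+.3f} (SE {se1:.3f})")
# The augmented estimate sits closer to zero than the observed one, which is
# the qualitative signature of publication bias described in the text.
```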

Most importantly, we find that the 95% confidence interval for our estimated mean effect size crosses zero. That is, while the mean effect size is slightly negative (suggesting that biodiversity is protective against disease risk), we can't confidently say that it is actually different from zero. Essentially, our large sample suggests that there is no simple relationship between disease risk and biodiversity.

On Ecological Mechanisms

One of the main conclusions of our paper is that we need to move beyond simple correlations between species richness and disease risk and focus instead on ecological mechanisms. I have no doubt that there are specific cases where the negative correlation between species richness and disease risk is real (note our title says that we think this link is idiosyncratic). However, I suspect where we see a significant negative correlation, what is really happening is that some specific ecological mechanism is being aliased by species richness. For example, a forest fragment with a more intact fauna is probably more likely to contain predators and these predators may be keeping the population of efficient reservoir species in check.

I don't think that this is an especially controversial idea. In fact, some of the biggest advocates for the dilution effect hypothesis have done seminal work advancing our understanding of the ecological mechanisms underlying biodiversity-disease risk relationships. Ostfeld and Holt (2004) note the importance of predators of rodents for regulating disease. They also make the very important point that not all predators are created equal when it comes to the suppression of disease. A hallmark of simple models of predation is the cycling of abundances of predators and prey. A specialist predator which induces boom-bust cycles in a disease reservoir is probably not optimal for infection control. Indeed, it may exacerbate disease risk if, for example, rodents become more aggressive and are more frequently infected in agonistic encounters with conspecifics during steep growth phases of their population cycle. This mechanism has been invoked to explain the risk of zoonotic transmission of Sin Nombre Virus in the American Southwest.

I have a lot more to write on this, so, in the interest of time, I will end this post now but with the expectation that I will write more in the near future!

 

The Least Stressful Profession of Them All?

Continuing the theme from my last post of critics misunderstanding the life of university researchers, I felt the need to chime in a bit on a story that has really made the social-media rounds in the last couple of days. This kerfuffle stems from a Forbes piece by Susan Adams enumerating the 10 least stressful jobs for 2013. Reporting on a study from the job site careercast.com, and to the surprise of nearly every academic I know, she listed university professor as the least stressful of all jobs. Adams writes: "For tenure-track professors, there is some pressure to publish books and articles, but deadlines are few." This is quite possibly the most nonsensical statement I have ever read about the academy, and it reveals a profound ignorance about its inner workings. This careercast.com list was also picked up by CNBC and Huffington Post, both of which reported the rankings completely credulously.

Before going on though, I have to give Ms. Adams some props for amending her piece following an avalanche of irate comments from actual professors. She writes:

Since writing the above piece I have received more than 150 comments, many of them outraged, from professors who say their jobs are terribly stressful. While I characterize their lives as full of unrestricted time, few deadlines and frequent, extended breaks, the commenters insist that most professors work upwards of 60 hours a week preparing lectures, correcting papers and doing research for required publications in journals and books. Most everyone says they never take the summer off, barely get a single day’s break for Christmas or New Year’s and work almost every night into the wee hours.

All true.

In the CNBC piece, the careercast.com publisher, Tony Lee, lays down some of the most uninformed nonsense that I've ever read:

"If you look at the criteria for stressful jobs, things like working under deadlines, physical demands of the job, environmental conditions hazards, is your life at risk, are you responsible for the life of someone else, they rank like 'zero' on pretty much all of them!" Lee said.

Plus, they're in total control. They teach as many classes as they want and what they want to teach. They tell the students what to do and reign over the classroom. They are the managers of their own stress level.

Careercast.com measured job-related stress using an 11-dimensional scale. These dimensions and the point ranges assigned to each include:

  • Travel, amount of (0-10)
  • Growth Potential (income divided by 100)
  • Deadlines (0-9)
  • Working in the public eye (0-5)
  • Competitiveness (0-15)
  • Physical demands (stoop, climb, etc.) (0-14)
  • Environmental conditions (0-13)
  • Hazards encountered (0-5)
  • Own life at risk (0-8)
  • Life of another at risk (0-10)
  • Meeting the public (0-8)

These seem reasonable enough, but the extent to which they were accurately assessed for at least the first entry on the list (university professor) is another matter altogether.

It is important to note that there is enormous heterogeneity contained in the job title "professor." There are professors of art history and professors of business and professors of law and professors of vascular surgery, and professors of chemistry, and professors of seismic engineering, and professors of volcanology and ... you get the point. No doubt some of these are more or less stressful than others. Many of these involve substantial work in the public eye and meeting the public. Some involve hazardous environmental conditions and physical demands.

However, I will focus mainly on what I see as the most ludicrous statements made by both Lee and Adams: that professors have no deadlines. My life is all about deadlines: article/book submission deadlines, institutional review board deadlines, peer review deadlines, editorial deadlines, and the all-important grant deadlines. There are the deadlines imposed by my students when they apply for grants or fellowships or jobs and need highly detailed letters of recommendation, often on very short notice. Oh, and guess what: grades are due on a particular date at the end of the term. You know, a deadline? And those classes we teach: better have a lecture ready before the class meets. Again, kinda like a deadline. I think that it is worth noting that one is expected to meet these teaching deadlines even when most professional incentives (at least at a research university) are focused around everything in your job description but teaching. There is a trite phrase describing the life of a professor -- particularly a junior professor -- that seems to have found its way into the general consciousness, "publish or perish." Notice that it is not "give coherent, interesting lectures and grade fairly and expediently or perish"!

So, yes, there are deadlines and there are very difficult trade-offs relating to the finiteness of time. Honestly, it's hard for me to imagine how even a casual observer of the university could not see the ubiquity of deadlines for the professor's life.

In an excellent rebuttal of this list, blogger Audra Diers writes about both the time demands and the economic realities of obtaining a tenure-track job. I will finish up with a few thoughts on competitiveness and "growth potential." My experience on a variety of job search committees since coming to Stanford is that there are typically hundreds of highly qualified candidates for any given job search. These are all people who have Ph.D.s and, frequently, already have jobs at other universities. In the anthropology department at Stanford, the majority of faculty joined Stanford from faculty positions at other universities. It is very difficult to get a job at a university like Stanford directly out of graduate school. Inevitably, you are competing against people who have already been assistant professors (or at least post-docs) at other universities and already have a substantial publication and grant-writing record. The differences in salary, teaching loads, and institutional prestige can be substantial. Browsing the Chronicle of Higher Education's Almanac of Higher Education can provide some numbers. Many people bust it in lower-prestige universities with the hope of eventually getting an opportunity for a job at a place like Stanford or Berkeley or Harvard. This means publishing important work, often while carrying outrageously high teaching loads at universities with primarily teaching missions, and that means long hours, juggling many conflicting demands, and enormous individual drive.

If you are a scientist, you are often competing with other scientists for results. Getting yourself in a position to secure such results means successful grant-writing and attracting top students and post-docs to your lab. Now, this competition is often enjoyable and almost certainly drives innovation, but it can be stressful (and deadline-filled!). There is nothing quite like the feeling of looking at some journal's table of contents that's shown up in your inbox and realizing you've been scooped on a problem you've spent years working on. There is always that little bit of fear in the back of your head pushing you to publish your results before someone else does.

Where Lee gets the idea that professors "teach as many classes as they want and what they want to teach" is a mystery to me. Universities (and colleges within universities) have rules for the number of courses their faculty are expected to teach. Sometimes, a professor can buy out of some teaching by securing more research funding that specifically budgets for such buy-outs. Within departments, there is the dreaded curriculum committee. My department's CC decided this year that I should teach all my courses in the Spring quarter. While it's been nice to have large chunks of research time this Fall, Spring is going to be horrible. This is hardly teaching as much as I want or what I want to teach. Departments have instructional needs (i.e., "service courses") and someone needs to teach these. Junior faculty are often dumped upon to teach the service courses (e.g., history of the field, methodological courses) that very few students want to attend.

Writes Adams at Forbes, "The other thing most of the least stressful jobs have in common: At the end of the day, people in these professions can leave their work behind, and their hours tend to be the traditional nine to five." This is just crazy talk. I work every night; some nights are more effective than others, for sure, but, as in many professions, I take this as a given of my job.

So being a university professor is hardly a stress-free life. This doesn't in any way mean that we don't like our jobs. Being a tenured professor at a major research university is good work if you can get it. The job carries with it a great deal of autonomy, flexibility, and the ability to pursue one's passion. As a professor, one interacts with interesting, curious people on a daily basis and helps shape future leaders. The job-related stress felt by a university professor is almost certainly not on par with, say, that of an infantry soldier or police officer, but the job is not stress-free. It never ceases to surprise me how ignorant about the workings of universities critics often are. This is an instance where there is no obvious political agenda -- the study just got some facts badly wrong -- but studies like this contribute to the disturbing anti-intellectualism (and concomitant disdain for empirical evidence) that has become a part of American public consciousness.

Thoughts on Black Swans and Antifragility

I have recently read the latest book by Nassim Nicholas Taleb, Antifragile. I read his famous The Black Swan a while back while in the field and wrote lots of notes. I never got around to posting those notes since they were quite telegraphic (and often not even electronic!), as they were written in the middle of the night while fighting insomnia under mosquito netting. The publication of his latest, along with the time afforded by my holiday displacement, gives me an excuse to formalize some of these notes here. Like Andy Gelman, I have so many things to say about this work, on so many different topics, that this will be a bit of a brain dump.

Taleb's work is quite important for my thinking on risk management and human evolution, so it is with great interest that I read both books. Nonetheless, I find his works maddening, to say the least. Before presenting my critique, however, I will pay the author as big a compliment as I suppose can be made. He makes me think. He makes me think a lot, and I think that there are some extremely important ideas in his writings. From my rather unsystematic readings of other commentators, this seems to be a pretty common conclusion about his work. For example, Brown (2007) writes in The American Statistician, "I predict that you will disagree with much of what you read, but you'll be smarter for having read it. And there is more to agree with than disagree. Whether you love it or hate it, it’s likely to change public attitudes, so you can't ignore it." The problem is that I am so distracted by all the maddening bits that I regularly nearly miss the ideas, and it is the ideas that are important. There is so much ego and so little discipline on display in his books, The Black Swan and Antifragile.

Some of these sentiments have been captured in Michiko Kakutani's excellent review of Antifragile. There are some even more hilarious sentiments communicated in Tom Bartlett's non-profile in the Chronicle of Higher Education.

I suspect that if Taleb and I ever sat down over a bottle of wine, we would not only have much to discuss but would find that we are annoyed -- frequently to the point of apoplexy -- by the same people. Nonetheless, one of the most frustrating things about reading his work is the absurd stereotypes he deploys and the broad generalizations he uses to dismiss the work of just about any academic researcher. His disdain for academic research interferes with his ability to make a cogent critique. Perhaps I have spent too much time at Stanford, where the nerd is glorified, but, among other things, I find his pejorative use of the term "nerd" for people like Dr. John, as contrasted with the man-of-his-wits Stereotyped, I mean, Fat Tony, off-putting and rather behind the times. Gone are the days when being labeled a nerd was a devastating put-down.

My reading of Taleb's critiques of prediction and risk management is that the primary problem is hubris. Is there anything fundamentally wrong with risk assessment? I am not convinced there is, and there are quite likely substantial benefits to systematic inquiry. The problem is that risk assessment models become reified into a kind of reality. I warn students – and try to regularly remind myself – never to fall in love with one's own model. Something that many economists and risk modelers do is start to believe that their models are something more real than heuristic. George Box's adage has become a bit of a cliché but nonetheless always bears repeating: all models are wrong, but some are useful. We need to bear in mind the wrongness of models without dismissing their usefulness.

One problem with both projection and risk analysis, which Taleb does not discuss, is that risk modelers, demographers, climate scientists, economists, etc. are constrained politically in their assessments. The unfortunate reality is that no one wants to hear how bad things can get, and modelers get substantial push-back from various stakeholders when they try to account for real worst-case scenarios.

There are ways of building in more extreme events than have been observed historically (Westfall and Hilbe (2007), e.g., note the use of extreme-value modeling). I have written before about the ideas of Martin Weitzman in modeling the disutility of catastrophic climate change. While he may be a professor at Harvard, my sense is that his ideas on modeling the risks of catastrophic climate change are not exactly mainstream. There is the very tangible evidence that no one is rushing out to mitigate the risks of climate change despite the fact that Weitzman's model makes it pretty clear that it would be prudent to do so. Weitzman uses a Bayesian approach which, as noted by Westfall and Hilbe, is a part of modern statistical reasoning that was missed by Taleb. While beyond the scope of this already hydra-esque post, briefly, Bayesian reasoning allows one to combine empirical observations with prior expectations based on theory, prior research, or scenario-building exercises. The outcome of a Bayesian analysis is a compromise between the observed data and prior expectations. By placing non-zero probability on extreme outcomes, a prior distribution allows one to incorporate some sense of a black swan into expected (dis)utility calculations.
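To make that last point a bit more tangible, here is a toy calculation of my own (not Weitzman's model, and with entirely made-up numbers): the same expected-utility machinery, fed a thin-tailed versus a fat-tailed prior over climate damages, gives dramatically different answers precisely because of the mass placed on catastrophic outcomes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical damages as a fraction of consumption, both centered at 2%,
# one thin-tailed (normal), one fat-tailed (Student's t with 3 df).
thin = stats.norm(loc=0.02, scale=0.02).rvs(n, random_state=rng)
fat = 0.02 + 0.02 * stats.t(df=3).rvs(n, random_state=rng)

# Damages cannot be negative or exceed total consumption.
thin, fat = np.clip(thin, 0, 1), np.clip(fat, 0, 1)

def expected_utility(damages, eta=2.0, floor=1e-6):
    """Mean CRRA utility of consumption c = 1 - damages.

    The floor on consumption caps how bad a single draw can be; how (and
    whether) to bound such losses is exactly what arguments like Weitzman's
    turn on.
    """
    c = np.maximum(1.0 - damages, floor)
    return np.mean((c ** (1 - eta) - 1) / (1 - eta))

print("thin-tailed prior:", expected_utility(thin))
print("fat-tailed prior: ", expected_utility(fat))
# The fat-tailed prior yields far lower expected utility, driven almost
# entirely by the rare catastrophic draws in its tail.
```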

Nor does the existence of black swans mean that planning is useless. By their very definition, black swans are rare -- though highly consequential -- events. Does it not make sense to have a plan for dealing with the 99% of the time when we are not experiencing a black swan event? To be sure, this planning should not interfere with our ability to respond to major events, but I don't see any evidence that planning for more-or-less likely outcomes necessarily trades off against responding to unlikely outcomes.

Taleb is disdainful of explanations for why the bubonic plague didn't kill more people: "People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and 'scientific models' of epidemics." (Black Swan, p. 120) Does he not understand that epidemic models are a variety of that lionized category of nonlinear processes he waxes on about? He should know better. Epidemic models are not one of these false bell-curve models he so despises. Anyone who thinks hard about an epidemic process -- in which an infectious individual must come in contact with a susceptible one in order for a transmission event to take place -- should be able to infer that an epidemic cannot infect everyone. Epidemic models work and make useful predictions. We should, naturally, exhibit a healthy skepticism about them as we should any model. But they are an important tool for understanding and even planning.

Indeed, our understanding gained from the study of (nonlinear) epidemic models has provided us with the most powerful tools we have for control and even eradication. As Hans Heesterbeek has noted, the idea that we could control malaria by targeting the mosquito vector of the disease is one that was considered ludicrous before Ross's development of the first epidemic model. The logic was essentially that there are so many mosquitoes that it would be absurdly impractical to eliminate them all. But the Ross model revealed that epidemics -- because of their nonlinearity -- have thresholds. We don't have to eliminate all the mosquitoes to break the malaria transmission cycle; we just need to eliminate enough to bring the system below the epidemic threshold. This was a powerful idea and it is central to contemporary public health. It is what allowed epidemiologists and public health officials to eliminate smallpox and it is what is allowing us to very nearly eliminate polio, if political forces (black swans?) permit.
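Here is a minimal numerical sketch of that threshold idea, using a textbook Ross-Macdonald-style expression for R0 with placeholder parameter values (these are illustrative only, not estimates for any real setting):

```python
def r0_ross(m, a, b, c, r, g):
    """Simplified Ross-Macdonald basic reproduction number.

    m: mosquitoes per human
    a: human-biting rate (bites per mosquito per day)
    b: probability an infectious bite infects a human
    c: probability a mosquito becomes infected biting an infectious human
    r: human recovery rate (per day)
    g: mosquito death rate (per day)
    """
    return (m * a**2 * b * c) / (r * g)

# Placeholder parameters for illustration only.
a, b, c, r, g = 0.3, 0.5, 0.5, 0.01, 0.1

# Critical mosquito density: the value of m at which R0 drops to 1.
m_crit = (r * g) / (a**2 * b * c)
print(f"critical mosquitoes per human: {m_crit:.3f}")

for m in (0.01, m_crit, 1.0, 10.0):
    print(f"m = {m:6.2f} -> R0 = {r0_ross(m, a, b, c, r, g):8.2f}")
# Transmission cannot sustain itself whenever m < m_crit: we do not need to
# kill every mosquito, just enough to push the system below the threshold.
```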

Taleb's discussion of the ludic fallacy (i.e., the mistaken belief that games of chance are an adequate model of randomness in the world) is great. Quite possibly the most interesting and illuminating section of The Black Swan comes on p. 130, where he illustrates the major risks faced by a casino. Empirical data make a much stronger argument than do snide stereotypes. This said, Lund (2007) makes the important point that we need to ask what exactly is being modeled in any risk assessment or projection. One of the most valuable outcomes of any formalized risk assessment (or formal model construction more generally) is that it forces the investigator to be very explicit about what is being modeled. The output of the model is often of secondary importance.

Much of the evidence deployed in his books is what Herb Gintis has called "stylized facts" and, of course, is subject to Taleb's own critique of "hidden evidence." Because the stylized facts are presented anecdotally, there is no way to judge what is being left out. A fair rejoinder to this critique might be that these are trade publications meant for a mass market and are therefore not going to be rich in data regardless. However, the tone of the books – ripping on economists and bankers but also statisticians, historians, neuroscientists, and any number of other professionals who have the audacity to make a prediction or provide a causal explanation – makes the need for more measured empirical claims more important. I suspect that many of these people actually believe things that are quite compatible with the conclusions of both The Black Swan and Antifragile.

On Stress

The notion of antifragility turns on systems getting stronger when exposed to stressors. But we know that not all stressors are created equal. This is where the work of Robert Sapolsky really comes into play. In his book Why Zebras Don't Get Ulcers, Sapolsky, citing the foundational work of Hans Selye, notes that some stressors certainly make the organism stronger. Certain types of stress ("good stress") improve the state of the organism, making it more resistant to subsequent stressors. Rising to a physical or intellectual challenge, meeting a deadline, competing in an athletic competition, working out: these are examples of good stresses. They train body, mind, and emotions and improve the state of the individual. It is not difficult to imagine that there could be similar types of good stressors at levels of organization higher than the individual too. The way the United States came together as a society to rise to the challenge of World War II and emerge as the world's preeminent industrial power comes to mind. An important commonality of these good stressors is the time scale over which they act. They are all acute stressors that allow recovery and therefore permit subsequently improved performance.

However, as Sapolsky argues so nicely, when stress becomes chronic, it is no longer good for the organism. The same glucocorticoids (i.e., "stress hormones") that liberate glucose and focus attention during an acute crisis induce fatigue, exhaustion, and chronic disease when they are secreted at high levels chronically.

Any coherent theory of antifragility will need to deal with the types of stress to which systems are resistant and, importantly, which have a strengthening effect. Using the idea of hormesis – that a positive biological outcome can arise from taking low doses of toxins – is scientifically hokey and borders on mysticism. It unfortunately detracts from the good ideas buried in Antifragile.

I think that Taleb is on to something with the notion of antifragility, but I worry that the policy implications end up being just so much orthodox laissez-faire conservatism. There is the idea that interventions – presumably by the State – can do nothing but make systems more fragile and generally worse. One area where the evidence very convincingly suggests that intervention works is public health. Life expectancy in the rich countries of the world has doubled from the beginning of the twentieth century to today. Many of the gains were made before the sorts of dramatic interventions that come to mind when many people think about modern medicine. It turns out that sanitation and clean water went an awful long way toward decreasing mortality well before we had antibiotics or MRIs. Have these interventions made us more fragile? I don't think so. The jury is still out, but it seems that reducing the infectious disease burden early in life (as improved sanitation does) has synergistic effects on later-life mortality, an effect mediated by inflammation.

On The Academy

Taleb drips derision on university researchers throughout his work. There is a lot to criticize in the contemporary university; however, as with so many other external critics of the university, I think that Taleb misses essential features and his criticisms end up being off base. Echoing one of the standard talking points of right-wing critics, Taleb belittles university researchers as being writers rather than doers (echoing the H.L. Mencken witticism "Those who can, do; those who can't, teach"). Skin in the game purifies thought and action, a point with which I actually agree; however, thinking that university researchers live in a world lacking consequences is nonsense. Writing is skin in the game. Because we live in a quite free society – and because of important institutional protections on intellectual freedom like tenure (another popular point of criticism from the right) – it is easy to forget that expressing opinions – especially when one speaks truth to power – can be dangerous. Literally. Note that intellectuals are often the first ones to go to the gallows when there are revolutions from both the right and the left: the Nazis, the Bolsheviks, and Mao's Cultural Revolution, to name a few. I occasionally get, for lack of a better term, unbalanced letters from people who are offended by the study of evolution, and I know that some of my colleagues get this a lot more than I do. Intellectuals get regular hate mail, a phenomenon amplified by the ubiquity of electronic communication. Writers receive death threats for their ideas (think Salman Rushdie). Ideas are dangerous and communicating them publicly is not always easy, comfortable, or even safe, yet it is the professional obligation of the academic.

There are more prosaic risks that academics face that suggest to me that they do indeed have substantial skin in the game. There is a tendency for critics from outside the academy to see universities as ossified places where people who "can't do" go to live out their lives. However, the university is a dynamic place. Professors do not emerge fully formed from the ivory tower. They must be trained and promoted. This is the most obvious and ubiquitous way that what academics write has "real world" consequences – i.e., for themselves. If peers don't like your work, you won't get tenure. One particularly strident critic can sink a tenure case. Both the trader and the assistant professor have skin in their respective games – their continued livelihoods depend upon their trading decisions and their writing. That's pretty real. By the way, it is a huge sunk investment that is being risked when an assistant professor comes up for tenure. Not much fun to be forty and let go from your first "real" job since you graduated with your terminal degree... (I should note that there are problems with this – it can lead to particularly conservative scholarship by junior faculty, among other things, but this is a topic for its own post.)

Now, I certainly think that there are more and less consequential things to write about. I have gotten more interested in applied problems in health and the environment as I've moved through my career because I think that these are important topics about which I have potentially important things to say (and, yes, do). However, I also think it is of utmost importance to promote the free flow of ideas, whether or not they have obvious applications. Instrumentally, the ability to pursue ideas freely is what trains people to solve the sort of unknown and unforecastable problems that Taleb discusses in The Black Swan. One never knows what will be relevant, and playing with ideas (in the personally and professionally consequential world of the academy) is a type of stress that makes academics better at playing with ideas and solving problems.

One of the major policy suggestions of Antifragile is that tinkering with complex systems will be superior to top-down management. I am largely sympathetic to this idea and to the idea that high-frequency-of-failure tinkering is also the source of innovation. Taleb contrasts this idea of tinkering with "top-down" or "directed" research, which he argues regularly fails to produce innovations or solutions to important problems. This notion of "top-down," "directed" research is among the worst of his various straw men and reflects a fundamental misunderstanding of the way that science works. A scientist writes a grant with specific scientific questions in mind, but the real benefit of a funded research program is the unexpected results one discovers while pursuing the directed goals. As a simple example, my colleague Tony Goldberg has discovered two novel simian hemorrhagic viruses in the red colobus monkeys of western Uganda as a result of our big grant to study the transmission dynamics and spillover potential of primate retroviruses. In the grant proposal, we discussed studying SIV, SFV, and STLV. We didn't discuss the simian hemorrhagic fever viruses because we didn't know they existed! That's what discovery means. The fact that these viruses were not explicitly in the grant didn't stop Tony and his collaborators from the Wisconsin Regional Primate Center from discovering them, but the systematic research meant that they were in a position to do so.

The recommendation of adaptive, decentralized tinkering in complex systems is in keeping with work in resilience (another area about which Taleb is scornful because it is the poor step-child of antifragility). Because of the difficulty of making long-range predictions that arises from nonlinear, coupled systems, adaptive management is the best option for dealing with complex environmental problems. I have written about this before here.

So, there is a lot that is good in the works of Taleb. He makes you think, even if you spend a lot of time rolling your eyes at the trite stereotypes and stylized facts that make up much of the rhetoric of his books. Importantly, he draws attention to probabilistic thinking for a general audience. Too much popular communication of science trades in false certainties, and the mega-success of The Black Swan in particular has done a great service in raising awareness among decision-makers and the reading public of the centrality of uncertainty. Antifragility is an interesting idea, though not as broadly applicable as Taleb seems to think it is. The inspiration for antifragility seems to lie largely in biological systems. Unfortunately, basing an argument on general principles drawn from physiology, ecology, and evolutionary biology pushes Taleb's knowledge base a bit beyond its limit. Too often, the analogies in this book fall flat or are simply on shaky ground empirically. Nonetheless, the recommendations for adaptive management and bricolage are sensible for promoting resilient systems and innovation. Thinking about the world as an evolving complex system rather than the result of some engineering design is important, and if throwing his intellectual cachet behind this notion helps it become as ingrained in the general consciousness as the idea of a black swan has, then Taleb has done another major service.

New Publication, Emerging infectious diseases: the role of social sciences

This past week, The Lancet published a brief commentary I wrote with a group of anthropologist-collaborators. The piece, written with Craig Janes, Kitty Corbett, and Jim Trostle, arose from a workshop I attended in lovely Buenos Aires back in June of 2011. This was a pretty remarkable meeting that was orchestrated by Josh Rosenthal, acting director of the Division of International Training and Research at the Fogarty International Center at NIH, and hosted in grand fashion by Ricardo Gürtler of the University of Buenos Aires.

Our commentary is on a series of papers on zoonoses, a seemingly unlikely topic about which a collection of anthropologists might have opinions. However, as we note in our paper, social science is essential for understanding emerging zoonoses. First, human social behavior is an essential ingredient in R_0, the basic reproduction number of an infection (the published paper uses the term "basic reproductive rate"; somewhere in production, "rate" crept back in despite the several times I changed it to "number"). Second, we suggest that social scientists who participate in primary field data collection (e.g., anthropologists, geographers, sociologists) are in a strong position to understand the complex causal circumstances surrounding novel zoonotic disease spillovers.
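To make concrete where social behavior enters, here is a minimal sketch (my own illustrative simplification, not the formulation used in the commentary) of the standard textbook decomposition of R_0 for a directly transmitted infection: the contact rate, which is the piece most shaped by human social behavior, multiplied by the per-contact transmission probability and the duration of infectiousness.

    # Minimal textbook sketch: R_0 = c * p * D for a directly transmitted infection.
    #   c = contact rate (contacts per unit time), shaped by human social behavior
    #   p = per-contact probability of transmission
    #   D = mean duration of infectiousness
    # Illustrative simplification only, not the model from the commentary.
    def basic_reproduction_number(contact_rate, transmission_probability, infectious_duration):
        return contact_rate * transmission_probability * infectious_duration

    # Example: 10 contacts per day, 2% transmission per contact, 7 days infectious
    print(basic_reproduction_number(10, 0.02, 7))  # 1.4

Halve the contact rate through behavior alone and you halve R_0; that is the sense in which the social sciences are baked into the epidemiology.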

We note that there are some challenges to integrating the social sciences effectively into research on emerging infectious disease. Part of this is simply translational. Social scientists, natural scientists, and medical practitioners need to be able to speak to each other, and this kind of transdisciplinary communication takes practice. I'm not at all certain what it takes to make researchers from different traditions mutually comprehensible, but I know that it's more likely to happen if these people talk more. My hypothesis is that this is best done away from anyone's office, in the presence of food and drink. Tentative support for this hypothesis is provided by the wide-ranging and fun conversations we had over lomo y malbec. These conversations have so far yielded at least one paper and laid the foundations for a larger review I am currently writing. I know that various permutations of the people in Buenos Aires for this meeting are still talking and working together, so who knows what may eventually come of it?

On Anthropological Sciences and the AAA

I guess the time has rolled around again for my annual navel-gaze regarding my discipline, my place within it, and its future. Two strangely interwoven events have conspired to make me particularly philosophical as we enter into the winter holidays. First, I am in the middle of a visit by my friend, colleague, and former student, Charles Roseman, now an associate professor of anthropology at the University of Illinois, Urbana-Champaign. The second is that the American Anthropological Association meetings just went down in San Francisco and this always induces an odd sense of shock and subsequent introspection.

Charles graduated with a Ph.D. from the Department of Anthropological Sciences (once a highly ranked department according to the National Research Council) in 2005. He was awarded tenure at UIUC, a leading department for biological anthropology, this past year and has come back to The Farm to collaborate with me on our top-secret sleeper project of the past seven years. We've made some serious progress on this project since he arrived, and maybe I'll be able to write about that soon too.

The annual AAA meeting is one that I never attended until about four years ago, coinciding with what we sometimes refer to as "the blessed event," the remarrying of the two Stanford Anthropology departments. It's actually a bit of a coincidence that I started attending the AAAs the same year that we merged, but it has largely been the business of the new Department of Anthropology, mostly serving on job search committees, that has kept me going back. This year, I had two responsibilities that drew me to the AAAs. The first was the editorial board meeting for American Anthropologist, the flagship publication of the association. I joined the editorial board this year and it seemed a good idea to go and get a feel for what is happening with the journal and where it is likely to head over the next couple of years.

My other primary responsibility was chairing a session that was organized by two of my Ph.D. students, Yeon Jung Yu and Shannon Randolph. In addition to Yeon and Shannon, my Ph.D. student Alejandro Feged also presented work from his dissertation research. All three of these students were originally admitted into Anthsci and are part of the last cohort of students to leave Stanford having known the two-department system.

It was a great pleasure to sit in the audience and watch Yeon, Shannon, and Alejandro dazzle with their sophisticated methods, beautiful images, and accounts of impressive, extended, and often hardcore fieldwork. For her dissertation research, Yeon worked for two years with commercial sex workers in southern China, attempting to understand how women get recruited into sex work and how social relations facilitate their ability to survive and even thrive in a world that is quite hostile to them. Her talk was incredibly professional and theoretically sophisticated. For her dissertation research, Shannon worked in the markets of Yaoundé, Cameroon, trying to understand the motivations for consumption of wild bushmeat. Shannon was able to share with the audience her innovative approaches to collecting data (over 4,000 price points, among other things) on a grey-market activity that people are not especially eager to discuss, especially in the market itself. Alejandro did his dissertation research in the Colombian Amazon, where he investigated the human ecology of malaria in this highly endemic region. His talk demonstrated that the conventional wisdom about malaria ecology in the region (namely, that the people most at risk of infection are the adult men who spend the most time in the forest) is simply incorrect for some indigenous populations, and his time-budget analyses made a convincing case for the behavioral basis of this violation of expectations. This was a pretty heterogeneous collection of talks, but they shared a very strong methodological basis.

At a time when many anthropologists express legitimate concerns about their professional prospects, I have enormous confidence in this crop of students, all three of whom are regularly asked to consult for governmental and/or non-governmental organizations because of their subject knowledge and methodological expertise. Anthsci graduates (there weren't that many of them, since the department existed for less than 10 years) have done very well in the profession overall. I will list just a couple here whose work I knew well, because I was on their committees or their work was generally in my area:

In addition to these grad students, I think that it's important to note the success of the post-docs who worked either in Anthsci or with former Anthsci faculty on projects that started before the merger. Some of these outstanding people include:

In a discipline that is lukewarm at best on the very notion of methodology, I suspect that students with strong methodological skills, in addition to the expected theoretical sophistication and critical thinking (note that these skills do not actually trade off), enjoy a distinct comparative advantage when entering a less-than-ideal job market. Of course, I don't mean to imply that Anthsci didn't have its share of graduates who left the field out of frustration or lack of opportunity or who got stuck in the vicious cycle of adjunct teaching. But this accounting gives me hope. It gives me hope for both my current and future students, and it gives me hope for the field. Maybe I'll even go to the AAAs again next year...

This is Just What Greece Needs

Greece was officially deemed malaria-free in 1974. Recent reports, however, suggest that there is ongoing autochthonous transmission of Plasmodium vivax malaria. According to a brief report from the Mediterranean Bureau of the Italian News Agency (ANSAmed), 40 cases of P. vivax malaria have been reported in the first seven months of 2012. Of these 40, six had no history of travel to areas known to be endemic for malaria transmission. The natural inference is thus that they acquired it locally (i.e., "autochthonously") and that malaria may be back in Greece.

More detail on the malaria cases in Greece can be found on this European Centre for Disease Prevention and Control website. The actual ECDC report on autochthonous malaria transmission in Greece can be found here. A point in that report that is not mentioned in the ANSAmed newswire is that 2012 marks the third consecutive year in which autochthonous transmission has been inferred in Greece. So much for Greece being malaria-free.
