Tag Archives: climate change

On the Uses of an Interdisciplinary Ph.D.

Today, I participated in a panel — along with super-smart colleagues Alex Konings and Kabir Peay — for the first-year Ph.D. students in the E-IPER program, an interdisciplinary, interdepartmental graduate program (IDP) at Stanford. As is customary at any E-IPER event, we spent a lot of time fretting about interdisciplinarity: what it means, how you achieve it, what costs it entails for jobs, etc.

I expressed the slightly heretical opinion that we should not pursue interdisciplinarity for interdisciplinarity’s sake. What matters — both in terms of the science and more instrumental outcomes such as getting published, getting a job, getting tenure — are questions. Yes, questions. One should ask important questions that people care about. Why are there so many species in the tropics? Where do pandemic diseases come from and how can we best control them? Do democracy and the rule of law provide the best approach to governance? How do people adapt to a changing climate?

Where the interdisciplinary Ph.D. program comes in is that it provides students the opportunity to pursue whatever tools and approaches are required to answer the question in the best way possible. You don’t need to use a particular approach because that’s what people in your field do. Sometimes the best thing to do will be totally interdisciplinary; sometimes it will look a bit more like what someone in a disciplinary program would do. Always lead with the question.

Answering important questions using the best tools available is probably the best route to managing the greatest risk of an interdisciplinary degree. This risk, of course, is the difficulty in getting a job when you don’t look like what any given department had in mind when they wrote a job ad. The best way to manage this risk is simply to be excellent. If your work is strong enough, the specific discipline of your Ph.D. doesn’t really matter. Now, there are certainly some disciplines that are more xenophobic than others (anthropology and economics come immediately to mind), but if your work is really outstanding, the excuse that you don’t have the right degree for a given job gets much more tenuous. Two people who come immediately to mind are my colleague David Lobell and my sometime collaborator and former Stanford post-doc Marcel Salathé.

Is David a geographer? Geologist? Economist? Doesn’t really matter because he’s generally recognized as being a smart guy doing important work. Similarly with Marcel: population geneticist? Epidemiologist? Computer scientist? Who cares? He has important things to say and gets recognized for it.

Now, alas, we can’t all be David and Marcel, but we can strive to ask important scientific questions and let these questions lead us to both the skills and the bodies of knowledge we need. These then form the foundation of our research careers. Interdisciplinarity then is about following the question. It is not an end to itself.

On Global State Shifts

This is an edited version of a post I sent out to the E-ANTH listserv in response to a debate over a recent paper in Nature and the response to it on the website “Clear Science,” written by Todd Myers. In this debate, it was suggested that the Barnosky paper is the latest iteration of alarmist environmental narratives in the tradition of the master of that genre, Paul Ehrlich. Piqued by this conversation, I read the Barnosky paper and passed along my reading of it.

Myers’s piece on the “Clear Science” website is quite rhetorically clever. Climate-change deniers have a difficult task if they want to convincingly buck the overwhelming majority of reputable scientists on this issue. Myers uses ideas about the progress of science developed by the philosopher Thomas Kuhn in his classic book, The Structure of Scientific Revolutions. By framing the Barnosky et al. paper as mindlessly toeing the Kuhnian normal-science line, he has come up with a shrewd strategy for dealing with the serious scientific consensus around global climate change. Myers suggests that “Like scientists blindly devoted to a failed paradigm, the Nature piece simply tries to force new data to fit a flawed concept.”

I think that a pretty strong argument can be made that the perspective represented in the Barnosky et al. paper is actually paradigm-breaking. For 200 years the reigning paradigm in the historical sciences has been uniformitarianism. Hutton’s notion — that processes that we observe today have always been working — greatly extended the age of the Earth and allowed Lyell and Darwin to make their remarkable contributions to human understanding. This same principle allows us to make sense of the archaeological record and of ethnographic experience. It is a very useful foil for all manner of exceptionalist explanatory logic and I use it frequently.

However, there are plenty of ways that uniformitarianism fails. If we wanted to follow the Kuhnian narrative, we might say that evidence has mounted, leading to increasing contradictions within the uniformitarian explanatory paradigm. Rates of change are heterogeneous, and when we try to understand connected systems characterized by extensive feedback, our intuitions based on gradual change can fail, sometimes spectacularly. Apocalyptic popular writings aside, this is actually a pretty revolutionary idea in mainstream science.

Barnosky et al. draw heavily on contemporary work in complex systems. The theoretical paper on which the Barnosky paper relies most heavily (Scheffer et al. 2009) represents a real step forward in the theoretical sophistication of this corpus, making unique and testable predictions about systems approaching critical transitions. I have written about it previously here.

The most difficult part of projecting the future state of complex systems is the human element. This leads too many physical and biological scientists to simply ignore social and behavioral inputs. This said, there are far too few social and behavioral scientists willing to step up and do the hard collaborative work necessary to make progress on this extremely difficult problem. The difficulty of projecting human behavior often leads to projections of the business-as-usual variety and, unfortunately, these are often mischaracterized by the media and other readers. Such projections simply assume no change in behavior and look at the consequences some time down the line. A business-as-usual projection actually provides a lot of information, albeit about a very hypothetical future: what if things stayed the way they are? Yes, behavior changes. People adapt. Agricultural production becomes more efficient. Prices increase, reducing demand and making sustainable substitutes viable. Of course, sometimes things get worse too. Despite tremendous global awareness and many calls to reduce greenhouse gas emissions, carbon emissions have continued to rise. So, there is nothing inherently flawed about a business-as-usual projection. We just need to be clear about what it means when we use one.

A criticism that emerged on the list is that Barnosky et al. is essentially “an opinion piece.” However, the great majority of the Barnosky et al. paper is, in fact, simply a review. There are numerous facts to be reviewed: biodiversity has declined, fisheries have crashed, massive amounts of forest have been converted and degraded, the atmosphere has warmed. These are facts. And they are facts about which many vested interests would like to sow artificial uncertainty for political purposes. Positive things have happened too (e.g., malaria eradication in temperate climes, increased food security in some places that used to be highly insecure, increased agricultural productivity — though this may be of dubious sustainability), but these are generally on more local scales and, in some cases, may simply reflect rich countries exporting their problems to the Global South. The fact that these positive developments are not reviewed does not mean that the paper belongs to a hysterical, chicken-little genre.

A common critique of the doomsday genre is the certainty with which the horrible outcomes are framed. The Barnosky paper is suffused with uncertainty. In fact, this is the main point I take away from it! The first conclusion of the paper is that “it is essential to improve biological forecasting by anticipating critical transitions that can emerge on a planetary scale and understanding how such global forcings cause local changes.” This suggests to me that the authors are acknowledging massive uncertainty about the future, not saying that we are doomed with certainty. Or how about: “the plausibility of a future planetary state shift seems high, even though considerable uncertainty remains about whether it is inevitable and, if so, how far in the future it may be”?

Myers writes “they base their conclusions on the simplest linear mathematical estimate that assumes nothing will change except population over the next 40 years. They then draw a straight line, literally, from today to the environmental tipping point.” This is a profoundly misleading statement. Barnosky et al. are using the fold-catastrophe model discussed in Scheffer et al. (2009). The Scheffer et al. analysis uses some fairly sophisticated tools from complex-systems theory, but the underlying ideas are relatively simple. The straight line that so offends Myers arises because this is the direction of the basin of attraction. In the figure below, I show the fold-catastrophe model. The abscissa represents the forcing conditions of the system (e.g., population size or greenhouse gas emissions). The ordinate represents the state of the system (e.g., land cover or one of many ecosystem services). The sideways N represents an attractor — a more general notion of an equilibrium. The state of the system tends toward this curve whenever it is perturbed away.

The region in the interior of the fold (indicated by the dashed line) is unstable, while the upper and lower tails (indicated by solid lines) are stable and tend to draw perturbed states back toward them. The grey arrows indicate the basin of attraction. When the system is perturbed off of the attractor by some random shock, the state tends to move in the direction indicated by the arrow. When the state is forced all the way down the top arc of the fold, it enters a region where a relatively small shock can send the state into a qualitatively different regime of rapid degradation. This is illustrated by the black arrow (a shock) pushing the state away from point F2. The state will settle again on the attractor, but a second shock will send the state rapidly down toward the bottom arm of the fold (point F1). Note that this region of the attractor is stable, so it would take a lot of work to get the state back up again (e.g., reduced population or drastically reduced total greenhouse gases). This is what people mean when they colloquially refer to a “global tipping point.”
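The geometry is easy to make concrete in code. The sketch below uses a standard normal form for the fold, dx/dt = x - x^3 - c, which is my illustrative choice, not the specific parameterization in Scheffer et al.; the function name and numbers are likewise made up. For weak forcing c there are upper and lower stable branches separated by an unstable middle branch, and past the fold point only the lower (degraded) branch survives.

```python
import numpy as np

def equilibria(c):
    """Equilibria of dx/dt = x - x**3 - c, a normal form for the fold:
    c is the forcing (the abscissa), x is the system state (the ordinate).
    Returns (stable, unstable) lists of equilibrium states, each sorted."""
    # Equilibria are the real roots of x**3 - x + c = 0
    roots = np.roots([1.0, 0.0, -1.0, c])
    real = [r.real for r in roots if abs(r.imag) < 1e-9]
    stable, unstable = [], []
    for x in real:
        # df/dx = 1 - 3x^2 < 0 means perturbations decay back (stable)
        (stable if 1.0 - 3.0 * x**2 < 0 else unstable).append(x)
    return sorted(stable), sorted(unstable)

# Weak forcing: an upper and a lower stable branch, unstable fold interior
s, u = equilibria(0.0)
print(len(s), len(u))  # prints: 2 1

# Forcing past the fold point (|c| > 2/(3*sqrt(3)), about 0.385): only
# the lower, degraded state remains, and escaping it takes a lot of work
s, u = equilibria(1.0)
print(len(s), len(u))  # prints: 1 0
```

Sweeping c from 0 up through the fold point and plotting the surviving equilibria against c traces out the sideways-N attractor in the figure.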

This is the model. It may not be right, but thanks to Scheffer et al. (2009), it makes testable predictions. By framing global change in terms of this model, Barnosky et al. are making a case for empirical investigation of the types of data that can falsify the model. Maybe because of the restrictions placed on them by Nature (and these are severe!), maybe because of some poor choices of their own, they include an insufficiently explained, fundamentally complex figure that a critic with clear interests in muddying the scientific consensus can seize on to dismiss the whole paper as just more Ehrlich-style hysteria.

For me — as I suspect for the authors of the Barnosky et al. paper — massive, structural uncertainty about the state of our planet, coupled with a number of increasingly well-supported models of the behavior of nonlinear systems (i.e., not simply normal science), strongly suggests a precautionary principle. This is something that the economist Marty Weitzman suggested in his (highly technical and therefore not widely read) paper in 2009 and that I have written about before here and here. This is not inflammatory fear-mongering, nor is it grubbing for grant money (I wish it were that easy!). It is responsible scientists doing their best to communicate the state of the science within the constraints of society and the primary mode of scientific communication. Let’s not be taken in by writers pretending to present “just the facts” in a cool, detached manner but who actually have every reason to try to foment unnecessary uncertainty about the state of our world and impugn the integrity of people doing their level best to understand a rapidly changing planet.


Kuhn, T. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Scheffer, M., J. Bascompte, W. A. Brock, V. Brovkin, S. R. Carpenter, V. Dakos, H. Held, E. H. van Nes, M. Rietkerk, and G. Sugihara. 2009. Early-Warning Signals for Critical Transitions. Nature. 461 (7260):53-59.

Weitzman, M. L. 2009. On Modeling and Interpreting the Economics of Catastrophic Climate Change. The Review of Economics and Statistics. XCI (1):1-19.


Three Questions About Norms

Well, it certainly has been a while since I’ve written anything here. Life has gotten busy with new projects, new responsibilities, etc. Yesterday, I participated in a workshop on campus sponsored by the Woods Institute for the Environment, the Young Environmental Scholars Conference. I was asked to stand in for a faculty member who had to cancel at the last minute. I threw together some rather hastily written notes and figured I’d share them here (especially since I spoke quite a bit about the importance of public communication!).

The theme of the conference was “Environmental Policy, Behavior, and Norms” and we were asked to answer three questions: (1) What does doing normative research mean to you? (2) How do your own norms and values influence your research? (3) What room and role do you see for normative research in your field? So, in order, here are my answers.

What does doing normative research mean to you?

I actually don’t particularly like the term “normative research” because it sounds a little too much like imposing one’s values on other people. I am skeptical of the imposition of norms that have more to do with (often unrecognized) ideology and less to do with empirical truth – an idea that was later reinforced by a terrific concluding talk by Debra Satz. If I can define “normative” to mean with the intent to improve people’s lives, then OK.  Otherwise, I prefer to do “positive” research.

For me, normative research is about doing good science. As a biosocial scientist with broad interests, I wear a lot of hats. I have always been interested in questions about the natural world, and (deep) human history in particular. However, I find that the types of questions that really hold my interest these days are more and more engaged with the substantial challenges we face in the world with inequality and sustainability. In keeping with my deep pragmatist sympathies, I increasingly identify with Charles Sanders Peirce’s idea that given the “great ocean of truth” that can potentially be uncovered by science, there is a moral burden to do things that have social value. (As an aside, I think that there is social value in understanding the natural world, so I don’t mean to imply a crude instrumentalism here.) In effect, there is a lot of cool science to be done; one may as well do something of relevance.  I personally have little patience for people who pursue racist or otherwise socially divisive agendas and cloak their work in a veil of free scientific inquiry.  This said, I worry when advocacy interferes with intellectual fairness or produces an unwillingness to accept that one’s position may not actually be true.

I think that we are fooling ourselves if we believe that our norms somehow don’t have an effect on our research.  Recognizing the norms that shape your research – whether they operate implicitly or explicitly – helps you manage your bias. Yes, I said manage. I’m not sure we can ever completely eliminate it. I see this as managing a necessary trade-off, drawing an analogy between the practice of science and a classic problem in statistics: the trade-off between bias and variance. The more biased one is, the less variance there is in the outcome of one’s investigation. The less bias, the greater the likelihood that results will differ from one’s expectations (or wishes). Recognizing how norms shape our research also addresses that murky area of pre-science: where do our ideas for what to study come from?
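The statistical trade-off behind the analogy is easy to simulate. In this sketch (all names and numbers are my own, purely for illustration), a shrinkage estimator pulls the sample mean toward a prior guess: the pull is a bias, but it also damps the estimator's variance.

```python
import random
import statistics

random.seed(1)

def estimates(true_mean, prior_guess, shrink, n_samples, n_reps):
    """Repeatedly estimate true_mean from small samples, shrinking the
    sample mean toward prior_guess by a factor of `shrink` (0 = unbiased)."""
    out = []
    for _ in range(n_reps):
        xs = [random.gauss(true_mean, 1.0) for _ in range(n_samples)]
        xbar = sum(xs) / n_samples
        out.append((1.0 - shrink) * xbar + shrink * prior_guess)
    return out

unbiased = estimates(true_mean=2.0, prior_guess=0.0, shrink=0.0,
                     n_samples=10, n_reps=2000)
shrunk = estimates(true_mean=2.0, prior_guess=0.0, shrink=0.5,
                   n_samples=10, n_reps=2000)

# More bias, less variance: the shrunk estimates scatter less...
print(statistics.stdev(shrunk) < statistics.stdev(unbiased))  # prints: True
# ...but their center is pulled away from the true value of 2.0
print(abs(statistics.mean(shrunk) - 2.0) >
      abs(statistics.mean(unbiased) - 2.0))  # prints: True
```

The analogy: a strong prior commitment narrows the spread of results you will report, at the cost of pulling them systematically toward what you expected to find.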

How do your own norms and values influence your research?

Some of the norms that shape my own research and teaching include:

transparency: science works best when it is open. This places a premium on sharing data, methods, and communicating results in a manner that maximizes access to information. As a simple example, this norm shapes my belief that we should not train students from poor countries in the use of proprietary software (and other technologies) that they won’t be able to afford when they return to their home countries when there are free or otherwise open-source alternatives.

fairness: this naturally includes a sense of social justice, of people competing on a level playing field, but it also includes fairness to different ideas, alternative hypotheses, and the possibility that one is wrong. This type of fairness is essential for one’s credibility as a public intellectual in science (particularly in supporting policy), as noted eloquently in this interview with Dick Lewontin.

respect for people’s ultimate rationality: Trying to understand the social, ecological, and economic context of people’s decision-making, even if it violates our own normative – particularly market-based economic – expectations.

flexibility: solving real problems means that we need to be flexible in our approach, willing to go where the solutions lead us, learning new tools and collaborating. Flexibility also means a willingness to give up on a research program that is doing harm.

good-faith communication: I believe that there is no room for obscurantism in the academy of the 21st century. This includes public communication. There are, of course, complexities here with regard to the professional development of young scholars.  One of the key trade-offs for young scholars is between professional advancement (which comes from academic production) and activism, policy, and public communication. Within elite universities, the reality is that neither public communication nor activism counts for much toward tenure. However, as Jon Krosnick noted, tenure is a remarkable privilege and, while it may seem impossibly far away for a student just finishing a Ph.D., it’s not really. Once you prove that you have the requisite disciplinary chops, you have plenty of time to use tenure for what it is designed for (i.e., protecting intellectual freedom) and engage in critical public debate and communication.

humility: solving problems (in science and society) means caring more about the answer to a problem than one’s own pet theory. Humility is intimately related to respect for others’ rationality.  It also means recognizing the inherently collaborative nature of contemporary science: giving credit where it is due, seeking help when one is in over one’s head, etc. John DeGioia, President of Georgetown University, quoted St. Augustine in his letter of support for Georgetown Law Student, Sandra Fluke against the crude attacks by radio personality Rush Limbaugh and I think those words are quite applicable here as well.  Augustine implored his interlocutors to “lay aside arrogance” and to “let neither of us assert that he has found the truth; let us seek it as if it were unknown to both.” This is not a bad description of the way that science really should work.

What room and role do you see for normative research in your field?

I believe that there is actually an enormous amount of room for normative research, if by “normative research,” we mean research that has the potential to have a positive effect on people’s lives. If instead we mean imposing values on people, then I am less sure of its role.

Anthropology is often criticized from outside the field, and to a lesser extent from within it, for being overly politicized. You can see this in Nicholas Wade’s critical pieces in the New York Times Science Times section following the American Anthropological Association executive committee’s excision of the word “science” from the field’s long-range planning document. Wade writes,

The decision [to remove the word ‘science’ from the long-range planning document] has reopened a long-simmering tension between researchers in science-based anthropological disciplines — including archaeologists, physical anthropologists and some cultural anthropologists — and members of the profession who study race, ethnicity and gender and see themselves as advocates for native peoples or human rights.

This is a common sentiment. And it is a complete misunderstanding. It suggests that scientists can’t be advocates for native peoples or human rights.  It also suggests that one can’t study race, ethnicity, or gender from a scientific perspective.  Both of these ideas are complete nonsense.  For all the leftist rhetoric, I am not impressed with the actual political practice of what I see in contemporary anthropology. There is plenty of posturing about power asymmetries and identity politics, but it is always couched in mind-numbingly opaque language, with no apparent practical tie-in to policies that make people’s lives better. And, of course, there is the outright disdain for “applied” work one sees in elite anthropology departments.

Writing specifically about Foucault, Chomsky captured my take on this whole mode of intellectual production:

The only way to understand [the mode of scholarship] is if you are a graduate student or you are attending a university and have been trained in this particular style of discourse. That’s a way of guaranteeing…that intellectuals will have power, prestige and influence. If something can be said simply, say it simply, so that the carpenter next door can understand you. Anything that is at all well understood about human affairs is pretty simple.

Ultimately, the simple truths about human affairs that I find anyone can relate to are subsistence, health, and the well-being of one’s children. These are the themes at the core of my own research and I hope that the work I do ultimately can effect some good in these areas.

Jennifer Burney Lecture

I’ve spent the better part of the day editing web pages as I prepare to teach two courses this spring. Given that I’ve more-or-less wasted the day with necessary but not especially intellectually rewarding tasks, I thought that I would take a moment to post something really important and scientifically interesting. Jennifer Burney, of Stanford’s Program in Food Security and the Environment, gave a talk entitled “Food’s Footprint: Agriculture and Climate Change” at Oregon State‘s Food for Thought Series. We’ve known Jen for a long time now.  If memory serves me correctly, she was in my wife Libra‘s section of the American Civil War at Harvard in Fall of 1995. Later she was a student in Mather House, where we were resident tutors from 1997-2001. She went on to do a Ph.D. in physics at Stanford and then moved into a post-doctoral fellowship at FSE.

Jen and all the folks at FSE are doing great and fundamental work.  In this talk, she presents results that may seem somewhat counter-intuitive. Namely, she shows that the agricultural intensification attendant to the Green Revolution has been good for global carbon budgets — and feeding hungry people.  It’s all about counterfactuals. I am looking forward to reading this work since some of these counterfactuals depend critically on demographic assumptions.

As she says in the talk, the fact that the results suggest intensive agriculture is good from a global-warming perspective doesn’t take Big Agriculture off the hook. There are items that their models don’t incorporate (but could in principle), and they don’t consider anything other than carbon budgets.  It would be nice to think of a way of uniting all the costs and benefits of intensification in a single framework.

This is very important stuff and the work highlights the complexities of population, environment, and food production. I look forward to seeing more work from Jen and her collaborators at FSE.

Update on Stanford Workshop on Migration and Adaptation

Since my last update, we have added another faculty member to the workshop on Migration and Adaptation. Loren Landau, the Director of the African Centre for Migration and Society (ACMS) (formerly Forced Migration Studies Programme, FMSP) at Wits University in Johannesburg, South Africa will be joining us to discuss conceptual issues in understanding African migration as well as research opportunities through ACMS. This means that we have the following confirmed speakers:

  • James Holland Jones, Department of Anthropology and Woods Institute for the Environment, Stanford University (organizer): Formal Models of Migration; Population Projection
  • Shripad Tuljapurkar, Department of Biology, Stanford University (organizer): Stochastic Forecasting
  • Eric Lambin, Environmental and Earth Systems Science and Woods Institute for the Environment, Stanford University: Pixels to People Approaches to Studying Migration
  • David Lobell, Environmental and Earth Systems Science and Woods Institute for the Environment, Stanford University: Global Climate Change and Food Insecurity
  • William H. Durham, Department of Anthropology and Woods Institute for the Environment, Stanford University: Smallholder Responses to Risk and Uncertainty
  • Ronald Rindfuss, Carolina Population Center, University of North Carolina and The East-West Center: Population and Environment; Microsimulation
  • Amber Wutich, School of Human Evolution and Social Change, Arizona State University: Water Insecurity
  • Lori Hunter, Department of Sociology, University of Colorado: Migration and Health
  • David Lopez-Carr, Department of Geography, University of California Santa Barbara: Migration and Fertility on the Forest Frontier
  • Loren Landau, African Centre for Migration and Society, Witwatersrand: Conceptual and Empirical Issues in African Migration

This is a great line-up and I’m very excited about this (and there are still a couple invitations pending based on complicated field schedules). We will hold the workshop at the IRiSS facility at 30 Alta Rd., bordering the main campus. This is a lovely spot for a workshop.

Details on applying for the workshop are contained here. We will pay for approved travel expenses of accepted students, post-docs, and junior faculty associated with NICHD-funded population centers.

New Formal Demography Workshop: Migration and Adaptation

We will be having another of our occasional Stanford Workshops in Formal Demography this April 28th-30th. The theme this time will be “Migration and Adaptation,” and we have a terrific lineup of speakers coming. As in the past, the workshop is funded by NICHD and receives substantial support from the Stanford Institute for Research in the Social Sciences (IRiSS). What is somewhat different this time is that we actually have our own center now, The Stanford Center for Population Research (SCPR). Here’s the basic idea for the workshop:

Mobility is a common form of human adaptation to social or environmental risks.  Forms of human mobility vary with regard to permanency and spatial scale.  For example, foragers or pastoralists may move seasonally in response to resource scarcity and opportunity throughout a more or less stable greater home range. Smallholders and agrarian peasants might be displaced on a more permanent basis as a result of conflict or extreme resource scarcity, migrating internally to cities or other relatively nearby localities perceived to be less risky.  International economic migrants may travel long distances on a more or less permanent basis in search of economic opportunity abroad.

Global climate change is predicted to increase migration rates substantially by the middle of the 21st century.  This increase in migration is likely to result from multiple, interacting causal mechanisms, including an increase in adverse weather events (e.g., droughts, floods), an increase in resource-related conflicts, and declining viability of local environments arising from various forms of land-use/land-cover change.  These increases will add to the already substantial movement of human population from rural to urban areas, in response to internal social displacement, and from other forms of economic migration.

Understanding human migration requires input from scientists from a wide range of disciplines. We are particularly interested in approaches that combine the formalism of demography, on-the-ground social research, and remotely-sensed information about the biophysical environment, the so-called “pixels to people” approach.

In this workshop, we will bring together demographers, anthropologists, economists, and geographers to develop a methodological toolkit for understanding migration as an adaptation to risk.  The specific aim of the workshop is to promote knowledge of methods and perspectives from different disciplines, disseminate information about the growing wealth of demographic data on the biophysical environment and human migration, and to foster collaborative and interdisciplinary work. The format will consist of lectures by invited researchers to an audience of other researchers, selected graduate students, and junior faculty. The three-day workshop will have approximately 10 faculty and 20 students, whose travel, lodging, and meals will be covered.  The format provides substantial time for discussion. The workshop will be held at the Institute for Research in the Social Sciences (IRiSS), Stanford, 28-30 April 2011.

Confirmed speakers include:

  • James Holland Jones, Department of Anthropology and Woods Institute for the Environment, Stanford University (organizer): Formal Models; Population Projection
  • Shripad Tuljapurkar, Department of Biology, Stanford University (organizer): Stochastic Forecasting
  • Eric Lambin, Environmental and Earth Systems Science and Woods Institute for the Environment, Stanford University: Pixels to People
  • David Lobell, Environmental and Earth Systems Science and Woods Institute for the Environment, Stanford University: Global Climate Change and Food Insecurity
  • William H. Durham, Department of Anthropology and Woods Institute for the Environment, Stanford University: Smallholder Responses to Risk and Uncertainty
  • Ronald Rindfuss, Carolina Population Center, University of North Carolina and The East-West Center: Population and Environment; Microsimulation
  • Amber Wutich, School of Human Evolution and Social Change, Arizona State University: Water Insecurity
  • Lori Hunter, Department of Sociology, University of Colorado: Migration and Health
  • David Lopez-Carr, Department of Geography, University of California Santa Barbara: Migration and Fertility on the Forest Frontier

A (rather large) printable flier for the workshop can be found here.  It includes information on how to apply.  Hopefully, we will soon have an all official-like webpage through IRiSS as well, which I will point to when it goes live.

Winter Weirding

While listening to the deluge of reports of horrible winter weather from friends back on the east coast, I came across this video by Peter Sinclair from his YouTube series, “Climate Denial Crock of the Week.” The part I find most compelling is the animation toward the end of this short video showing what looks an awful lot like the displacement of cold Arctic air down into North America and Eurasia by much warmer air (as much as 20 degrees F warmer) in the Arctic.

Uncertainty and Fat Tails

A major challenge in science writing is how to effectively communicate real, scientific uncertainty. Sometimes we just don’t have enough information to make accurate predictions. This is particularly problematic in the case of rare events in which the potential range of outcomes is highly variable. Two topics that are close to my heart come to mind immediately as examples of this problem: (1) understanding the consequences of global warming and (2) predicting the outcome of the emerging A(H1N1) “swine flu” influenza-A virus.

Harvard economist Martin Weitzman has written about the economics of catastrophic climate change (something I have discussed before). When you want to calculate the expected cost or benefit of some fundamentally uncertain event, you basically take the probabilities of the different outcomes, multiply them by the utilities (or disutilities), and then sum them. This gives you the expected value across your range of uncertainty. Weitzman has noted that we have a profound amount of structural uncertainty (i.e., there is little we can do to become more certain on some of the central issues) regarding climate change. He argues that this creates “fat-tailed” distributions of the climatic outcomes (i.e., the disutilities in question). That is, extreme outcomes (read: the end of the world as we know it) have a probability that, while low, isn’t as low as might make us comfortable.
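The arithmetic behind an expected-value calculation is simple enough to sketch in a few lines of Python. Everything below — the outcome categories, their probabilities, and their costs — is invented purely for illustration and is not drawn from Weitzman’s analysis:

```python
# Toy expected-cost calculation: probability times (dis)utility, summed.
# All numbers are hypothetical, chosen only to illustrate the structure.
outcomes = {
    "mild warming":         {"prob": 0.700, "cost": 1.0},       # cost in arbitrary units
    "moderate warming":     {"prob": 0.250, "cost": 10.0},
    "severe warming":       {"prob": 0.049, "cost": 100.0},
    "catastrophic warming": {"prob": 0.001, "cost": 100000.0},  # the fat tail
}

expected_cost = sum(o["prob"] * o["cost"] for o in outcomes.values())
print(expected_cost)
```

The point of the toy numbers: even at a probability of 0.1%, the catastrophic outcome contributes 100 of the roughly 108 units of expected cost. When the tail is fat, what happens out in the tail dominates the whole calculation.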

A very similar set of circumstances besets predicting the severity of the current outbreak of swine flu.  There is a distribution of possible outcomes.  Some have high probability; some have low.  Some are really bad; some less so.  When we plan public health and other logistical responses we need to be prepared for the extreme events that are still not impossibly unlikely.

So we have some range of outcomes (e.g., the number of degrees C that the planet warms in the next 100 years or the number of people who become infected with swine flu in the next year) and we have a measure of probability associated with each possible value in this range. Some outcomes are more likely and some are less.  Rare events are, by definition, unlikely but they are not impossible.  In fact, given enough time, most rare events are inevitable.  From a predictive standpoint, the problem with rare events is that they’re, well, rare.  Since you don’t see rare events very often, it’s hard to say with any certainty how likely they actually are.  It is this uncertainty that fattens up the tails of our probability distributions.  Say there are two rare events.  One has a probability of 10^{-6} and the other has a probability of 10^{-9}. The latter is certainly much more rare than the former. You are nonetheless very, very unlikely ever to witness either event, so how can you make any judgement that one is 1000 times more likely than the other?
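This indistinguishability is easy to demonstrate by simulation. Here is a minimal sketch using only the standard library; the two probabilities and the sample size are arbitrary choices for illustration:

```python
import random

random.seed(42)

def count_events(p, n_trials):
    """Count occurrences of an event with true probability p in n_trials draws."""
    return sum(1 for _ in range(n_trials) if random.random() < p)

n = 100_000  # a respectable sample size for most purposes

common = count_events(1e-6, n)  # "merely" one in a million
rare   = count_events(1e-9, n)  # one in a billion: a thousand times rarer

print(common, rare)  # very likely 0 and 0
```

A hundred thousand trials will almost surely contain zero occurrences of either event, so the data give us essentially no basis for deciding whether the true probability is 10^{-6} or 10^{-9} — even though one is a thousand times larger than the other.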

Say we have a variable that is normally distributed.  This is the canonical and ubiquitous bell-shaped distribution that arises when many independent factors contribute to the outcome. It’s not necessarily the best distribution to model the type of outcomes we are interested in but it has the tremendous advantage of familiarity. The normal distribution has two parameters: the mean (\mu) and the standard deviation (\sigma).  If we know \mu and \sigma exactly, then we know lots of things about the value of the next observation.  For instance, we know that the most likely value is actually \mu and, for the standard normal (\mu = 0, \sigma = 1), we can be 95% certain that the value will fall between about -1.96 and 1.96.

Of course, in real scientific applications we almost never know the parameters of a distribution with certainty.  What happens to our prediction when we are uncertain about the parameters? Given some set of data that we have collected (call it y) and from which we can estimate our two normal parameters \mu and \sigma, we want to predict the value of some as-yet observed data (which we call \tilde{y}).  We can predict the value of \tilde{y} using a device known as the posterior predictive distribution.  Essentially, we average our best estimates across all the uncertainty that we have in our data. We can write this as

 p(\tilde{y}|y) = \int \int p(\tilde{y}|\mu,\sigma) p(\mu,\sigma|y) \, d\mu \, d\sigma.


OK, what does that mean? p(\tilde{y}|\mu,\sigma) is the probability of the new observation, given the values of the two parameters; it has the same form as the likelihood of the data. p(\mu,\sigma|y) is the probability of the two parameters given the observed data.  The two integrals mean that we are averaging the product p(\tilde{y}|\mu,\sigma)p(\mu,\sigma|y) across the range of uncertainty in our two parameters (in this context, “integrating” simply means this weighted averaging).

If you’ve hummed your way through these last couple paragraphs, no worries.  What really matters are the consequences of this averaging.
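One way to see those consequences is to simulate the averaging directly. The sketch below is a minimal Monte Carlo version using only the standard library; the tiny made-up sample, the flat prior on \mu and \log\sigma, and all the specific numbers are illustrative assumptions, not anything from the post:

```python
import random
import statistics

random.seed(1)

# Pretend "observed" data: a small sample from a standard normal.
n = 3
y = [random.gauss(0.0, 1.0) for _ in range(n)]
ybar = statistics.mean(y)
s2 = statistics.variance(y)  # sample variance (n-1 denominator)

def draw_predictive():
    """One posterior predictive draw: sample (mu, sigma^2) from their
    posterior under a flat prior on (mu, log sigma), then draw y-tilde."""
    chi2 = random.gammavariate((n - 1) / 2.0, 2.0)  # chi-square with n-1 df
    sigma2 = (n - 1) * s2 / chi2                    # scaled inverse chi-square
    mu = random.gauss(ybar, (sigma2 / n) ** 0.5)    # mu | sigma2, y
    return random.gauss(mu, sigma2 ** 0.5)          # y-tilde | mu, sigma2

draws = [draw_predictive() for _ in range(100_000)]

# If (mu, sigma) were known exactly, only 5% of future values would fall
# outside +/- 1.96 predictive standard deviations.  Averaging over the
# parameter uncertainty fattens the tails dramatically for a sample this small:
scale = (s2 * (1 + 1 / n)) ** 0.5
outside = sum(abs(d - ybar) > 1.96 * scale for d in draws) / len(draws)
print(outside)  # roughly 0.19, not 0.05
```

With known parameters, about 5% of future observations would land outside the ±1.96 interval; averaging over the uncertainty in \mu and \sigma nearly quadruples that tail probability when n = 3. This is exactly the fattening described next.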

When we do this for a normal distribution with unknown standard deviation, it turns out that we get a t-distribution.  t-distributions are characterized by “fat tails.” What this means is that the probabilities of unlikely events aren’t as unlikely as we might be comfortable with.  The probability in the tail(s) of the distribution approaches zero more slowly than an exponential decay. This means that there is non-trivial probability on very extreme events. Here I plot a standard normal distribution as a solid line and a t-distribution with 2 (dashed) and 20 (dotted) degrees of freedom.

Standard normal (solid) and t distributions with 2 (dashed) and 20 (dotted) df.

We can see that the dashed and dotted curves have much higher probabilities at the extreme values.  Remember that 95% of standard-normal observations will fall between -1.96 and 1.96, whereas the dashed curve is still pretty high for outcome values beyond 4.  In fact, for the dashed curve, 95% of the values fall between -4.3 and 4.3. In all fairness, this is a pretty uncertain distribution, but you can see the same thing with the dotted line (where the 95% central interval is plus/minus 2.09).  Unfortunately, when we are faced with the types of structural uncertainty we have in events of interest like the outcome of global climate change or an emerging epidemic, our predictive distributions are going to be more like the very fat-tailed distribution represented by the dashed line.
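The interval endpoints quoted above can be checked with a few lines of standard-library Python. For 2 degrees of freedom the t CDF happens to have a closed form, F(t) = 1/2 + t/(2\sqrt{2 + t^2}), which can be inverted by hand; this is just an independent numerical check, not a claim about how the figure was produced:

```python
from math import sqrt
from statistics import NormalDist

# 97.5th percentile of the standard normal: the familiar 1.96.
z = NormalDist().inv_cdf(0.975)

# For t with 2 df, solve F(t) = 0.975 using the closed-form CDF:
#   0.475 = t / (2 * sqrt(2 + t^2))  =>  t = sqrt(2 * 0.95^2 / (1 - 0.95^2))
t2 = sqrt((2 * 0.95**2) / (1 - 0.95**2))

print(round(z, 2), round(t2, 2))  # 1.96 4.3
```

Same nominal 95% coverage, but the fat-tailed distribution needs an interval more than twice as wide — which is the whole point.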

As scientists with an interest in policy, how do we communicate this type of uncertainty? It is a very difficult question.  The good news about the current outbreak of swine flu is that it seems to be fizzling in the northern hemisphere. Despite the rapid spread of the novel flu strain, sustained person-to-person transmission is not occurring in most parts of the northern hemisphere. This is not surprising since we are already past flu season.  However, as I wrote yesterday, it seems well within the realm of possibility that the southern hemisphere will be slammed by this flu during the austral winter and that it will come right back here in the north with the start of our own flu season next winter.  What I worry about is that all the hype followed by a modest outbreak in the short-term will cause people to become inured to public health warnings and predictions of potentially dire outcomes. I don’t suppose that it will occur to people that the public health measures undertaken to control this current outbreak actually worked (fingers crossed).  I think this might be a slightly different issue in the communication of science but it is clearly tied up in this fundamental problem of how to communicate uncertainty.  Lots to think about, but maybe I should get back to actually analyzing the volumes of data we have gathered from our survey!

On Freeman Dyson's Climate-Change Skepticism

A nice piece by Nicholas Dawidoff in the New York Times Magazine this week details the eminent physicist Freeman Dyson’s skepticism about the dangers of global warming. It seems that Mr. Dyson is concerned about the quality of the science that underlies the current scientific consensus about its perils.

One gathers from reading the Dawidoff piece that the major criticism Dyson levies against climate science concerns the computer models of Earth’s climate that provide much of the information we have about how Earth will respond to increased atmospheric concentrations of carbon dioxide and other greenhouse gases (these increases are a fact that is not in dispute). The rub of planetary science is that planet-scale experiments are (for now) impossible (I think we have a way to go before the musings of Kim Stanley Robinson or Kevin J. Anderson come to pass). Our power to understand planetary processes is constrained by our N=1. There is only one Earth. It could be argued that our N is actually closer to three when you throw Venus and Mars into the mix, but the fact remains, our sample size of known planets is pretty small. My U Penn colleague David Gibson made an observation at the NAS/CNRS Frontiers of Science conference last November that studying planets is kind of like studying revolutions. I think some of the physical scientists in the room were scandalized by this vulgar analogy but I (and the other token social scientist in the room) think he made a terrific observation. In both cases, we have a very small number of relatively well-understood and, for all we know, completely eccentric cases, and the poverty of this sample makes generalization highly problematic.

So what are our options for studying Earth’s climate other than computer models? I wholeheartedly agree that science is jeopardized whenever the scientist falls in love with his or her model. But there are, in fact, lots of models and these models are run by lots of independent groups emphasizing different aspects of the global circulation system in their particular specifications. It’s almost like science, actually. When one model makes an outlandish prediction, I don’t pay much attention. When all the models make that same outlandish prediction, I pay attention to it, no matter how crazy it might be. Note that this does not mean it’s correct. It does mean that the result merits attention.

Mr. Dyson, it seems, thinks that global warming is a good thing. Increased atmospheric concentration of CO2 will increase plant productivity. At the very least, all we would need to do to ameliorate the putative negative effects of increased CO2 is plant lots (and lots) of super carbon-scrubbing trees (which apparently are just waiting to be genetically engineered). There are quite a few problems with this proposal. First, it is actually not at all clear that increases in CO2 will globally increase plant productivity. Ask a plant ecologist and she will tell you that there are other things that limit plant growth besides CO2 (e.g., water, nitrogen, phosphorus). Then there is the fact that ecological enrichment experiments very frequently lead to decreases in biodiversity. Plants that are very good competitors for a particular resource thrive at the expense of plants that are not good competitors for that resource (but might be superior along other dimensions). There are lots of other issues that complicate the seemingly simple relationship between CO2 concentration and productivity, such as increases in ground-level ozone, ocean acidification, and the fact that increased temperatures can reduce production independent of CO2 concentration. For someone who is so critical of sloppy science, it seems that Mr. Dyson needs to bone up a bit on his physiological ecology.

Dawidoff quotes Dyson as saying that ‘Most of the evolution of life occurred on a planet substantially warmer than it is now.’ Of course, the rub is that humans evolved on a cool planet. Many of the major events that have characterized the evolution of our species are thought to have involved cooling and drying (e.g., see the work of Steven Stanley or Elizabeth Vrba). What brought the first hominins out of the forest to walk bipedally across the entire planet? Probably climatic cooling and drying, which broke tropical forests in Africa up into savanna mosaics. There is a very real sense in which humans are the cold-adapted ape.  I have little doubt that life of some sort will continue even in the most nightmarish of climate-change scenarios. The more parochial question that I think most people care about is: what about human life? An important addendum to this question is: what about the life that we care about?

I applaud Dyson’s contempt for orthodoxy and I admit a dis-ease that I feel among global-warming zealots. The problem with this particular windmill that he has chosen to tip at is that there are powerful economic and political interests that seek to subvert whatever good science is done in global change research for their own ends. Dyson ends up abetting the disinformationists and thereby supporting a much deeper orthodoxy than that of the marginalized community of scientists. This deeper orthodoxy is, of course, the neoliberal ideology that market forces are always preferred to scientifically-informed regulation, pecuniary reward always trumps gains in any other value system, growth-above-all, lie back and think of mother England, etc.

I think that zealotry is spawned by the difficulty of being taken seriously, especially when truth is, well, inconvenient. The loud and persistent mouths of activists are what keep ideas in the public consciousness. Global warming and its consequences are of the sort of scale that they are all too easily ignored. But I fear (and many other scientists share this fear) that we ignore the problem at our peril.

Speaking as someone who typically has an infantile response to group-think, my guess is that Dyson hangs around with a select crowd. In places like Princeton, NJ or Cambridge, MA or Palo Alto, CA, it’s easy to get the impression that everyone is a raving environmentalist (or at least wants others to think they are — a subject for a later post). I am reminded of the probably apocryphal (but so canny) story of the befuddled Democrat (Hollywood screen-writer, Manhattan socialite, Cambridge intellectual — I’ve heard versions using each), incredulous that Nixon could have won the 1972 election in a landslide, who uttered the immortal line, “but everyone I know voted for McGovern!” It’s all too easy in university towns like these to lose track of the fact that most people don’t really give a damn about global warming (or, while we’re at it, poverty, nuclear proliferation or science) and won’t until it has an undeniable impact on their lives. To see that acute concern over the impacts of global warming is not really part of some grand orthodoxy, perhaps Mr. Dyson should spend some time at the Heritage Foundation, Cato Institute, or, for that matter, just about any town in the United States besides Princeton!

A big part of Dyson’s critique, it seems, is that we don’t have enough information. Given the intractability of global experiments and a general discontent with general circulation models, we are going to need to live with a considerable amount of uncertainty. Harvard economist Martin Weitzman has written a very thought-provoking (and technically demanding) paper on the subject of cost-benefit analysis in the context of global climate change. He refers to the climate change situation as one characterized by “deep structural uncertainty.”  In a follow-up paper (in which he responds to criticisms from Yale economist, William Nordhaus), Weitzman makes the astute observation that inductive science is of limited utility when the object of study is an extremely rare event.  The world has not seen atmospheric concentrations of CO2 like what we will see in the near future in a very long time (at least 800,000 years) and we really know very little about such a world. It is very, very difficult to scientifically study extremely rare events.  This is the basis of our deep structural uncertainty and the reason that Mr. Dyson’s plea for gathering more data is unlikely to help all that much with decision-making.

Weitzman further notes that the most severely negative outcomes of global warming are unlikely.  Unfortunately, our systematic uncertainty over the likely course of atmospheric greenhouse gas accumulation, the functional response of the global climate to this accumulation, and the parameters of the different models of climate change means that these unlikely events are less unlikely than they would be if we knew more.  Uncertainty compounds.  (This probably merits its own blog posting but Spring Break is nearly over…) The probability distribution of future outcomes is “fat-tailed.” This means that the probability of truly catastrophic outcomes is not trivial.  A thin-tailed distribution means that extreme events are possible but only vanishingly probable; a fat-tailed distribution means that unlikely events are more likely than we might be comfortable with. Weitzman concludes that, given the fat tail of the outcomes-of-global-warming distribution, a sensible cost-benefit analysis favors strong action to mitigate the future effects of this looming problem.

It’s such a shame that a man of science of the stature of Freeman Dyson is spending his time (apparently) unwittingly abetting the cause of anti-science and the neoliberal status quo. In contrast, I find Weitzman’s perspective very sensible indeed.  When we put our egos aside, we have to acknowledge the fact that there is a huge amount of — probably intractable — uncertainty surrounding the future of global warming.  When there is a small (but non-trivial) probability of a catastrophic event, does it not seem prudent to take steps to avoid catastrophe?