
Risk Management: The Fundamental Human Adaptation

It was a conceptually dense week in class.  The first part of the week I spent talking about topics such as ecological complexity, vulnerability, adaptation, and resilience. One of the key take-home messages of this material is that uncertainty is ubiquitous in complex ecological systems.  Now, while systemic uncertainty does not mean that the world is unpatterned or erratic, it does mean that people are never sure what their foraging returns will be or whether they will come down with the flu next week or whether their neighbor will support them or turn against them in a local political fight. Because uncertainty is so ubiquitous, I see it as especially important for understanding human evolution and the capacity for adaptation. In fact, I think it’s so important a topic that I’m writing a book about it.  More on that later…

First, it’s important to distinguish two related concepts.  Uncertainty  simply means that you don’t know the outcome of a process with 100% certainty.  Outcomes are probabilistic.  Risk, on the other hand, combines both the likelihood of a negative outcome and the outcome’s severity. There could be a mildly negative outcome that has a very high probability of occurring and we would probably think that it was less risky than a more severe outcome that happened with lower probability. When a forager leaves camp for a hunt, he does not know what return he will get.  10,000 kcal? 5,000 kcal? 0 kcal? This is uncertainty.  If the hunter’s children are starving and might die if he doesn’t return with food, the outcome of returning with 0 kcal worth of food is risky as well.

Human behavioral ecology has a number of elements that distinguish it as an approach to studying human ecology and decision-making.  These features have been discussed extensively by Bruce Winterhalder and Eric Smith (1992, 2000), among others.  Included among these are: (1) the logic of natural selection, (2) a hypothetico-deductive framework, (3) a piecemeal approach to understanding human behavior, (4) a focus on simple (strategic) models, (5) an emphasis on behavioral strategies, and (6) methodological individualism.  Some others that I would add include: (7) ethological (i.e., naturalistic) data collection, (8) rich ethnographic context, and (9) a focus on adaptation and behavioral flexibility in contrast to typology and progressivism.  The hypothetico-deductive framework and use of simple models (along with the logic of selection) jointly account for the frequent use of optimality models in behavioral ecology. Not to overdo it with the laundry lists, but optimality models also all share some common features.  These include: (1) the definition of an actor, (2) a currency and an objective function (i.e., the thing that is maximized), (3) a strategy set or set of alternative actions, and (4) a set of constraints.

For concreteness’ sake, I will focus on foraging in this discussion, though the points apply to other types of problems. When behavioral ecologists attempt to understand foraging decisions, the currency they overwhelmingly favor is the rate of energy gain. There are plenty of good reasons for this.  Check out Stephens and Krebs (1986) if you are interested. The point that I want to make here is that, ultimately, it’s not the energy itself that matters for fitness.  Rather, it is what you do with it. How does a successful foraging bout increase your marginal survival probability or fertility rate? This doesn’t sound like such a big issue, but it has important implications. In particular, fitness (or utility) is a function of energy return.  This means that in a variable environment, it matters how we average.  Different averages can give different answers. For example, what is the average of the square roots of 10 and 2? There are two ways to do this: (1) average the two values and take the square root (i.e., take the function of the mean), and (2) take the square roots and average (i.e., take the mean of the function). The first of these is \sqrt{6}=2.45. The second is (\sqrt{10} + \sqrt{2})/2=2.29.  The function of the mean is greater than the mean of the function.  This is a consequence of Jensen’s inequality. The square root function is concave — it has a negative second derivative. This means that while \sqrt{x} gets bigger as x gets bigger (its first derivative is positive), each increase is incrementally smaller as x gets larger. This is commonly known as diminishing marginal utility.
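The two averaging procedures are easy to check with a quick sketch, using the values from the example above:

```python
import math

# Jensen's inequality with a concave function (the square root):
# the function of the mean exceeds the mean of the function.
returns = [10, 2]

function_of_mean = math.sqrt(sum(returns) / len(returns))             # sqrt(6)
mean_of_function = sum(math.sqrt(x) for x in returns) / len(returns)  # (sqrt(10)+sqrt(2))/2

print(f"utility of the mean return: {function_of_mean:.3f}")   # 2.449
print(f"mean utility of the returns: {mean_of_function:.3f}")  # 2.288
assert function_of_mean > mean_of_function  # Jensen: E[u(x)] <= u(E[x]) for concave u
```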

Lots of things naturally show diminishing marginal gains.  Imagine foraging for berries in a blueberry bush when you’re really hungry.  When you arrive at the bush (i.e., ‘the patch’), your rate of energy gain is very high. You’re gobbling berries about as fast as you can move your hands from the bush to your mouth. But after you’ve been there a while, your rate of consumption starts to slow down.  You’re depleting the bush.  It takes longer to pick the berries because you have to reach into the interior of the bush or go around the other side or get down on the ground to get the low-hanging berries.

[Figure: cumulative energy gain from a berry patch, showing diminishing returns with time in the patch]

Chances are, there’s going to come a point where you don’t think it’s worth the effort any more.  Maybe it’s time to find another bush; maybe you’ve got other important things to do that are incompatible with berry-picking. In his classic paper, Ric Charnov derived the conditions under which a rate-maximizing berry-picker should move on, the so-called ‘marginal value theorem’ (abandon the patch when the marginal rate of energy gain equals the mean rate for the environment). There are a number of similar marginal value solutions in ecology and evolutionary biology (they all arise from maximizing some rate or another). Two other examples: Parker derived a marginal value solution for the optimal time that a male dung fly should copulate (can’t make this stuff up). van Baalen and Sabelis derived the optimal virulence for a pathogen when the conditional probability of transmission and the contact rate between infectious and susceptible hosts trade off.
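A minimal numeric sketch of the marginal value theorem, using a hypothetical saturating gain function and invented parameter values (none of these numbers come from Charnov’s paper):

```python
import math

# Patch gain with diminishing returns: g(t) = A * (1 - exp(-k * t)).
# The forager maximizes the long-term rate g(t) / (travel_time + t);
# at the optimum, the marginal gain g'(t) equals that overall rate.
A, k = 1000.0, 0.5   # asymptotic patch yield (kcal) and depletion rate; illustrative
travel_time = 2.0    # time to reach the next patch; illustrative

def gain(t):
    return A * (1 - math.exp(-k * t))

def rate(t):
    return gain(t) / (travel_time + t)

# Grid search for the rate-maximizing residence time.
times = [i / 1000 for i in range(1, 20001)]
t_opt = max(times, key=rate)

marginal_gain = A * k * math.exp(-k * t_opt)   # g'(t) at the optimum
print(f"optimal residence time: {t_opt:.2f}")
print(f"marginal gain vs. overall rate: {marginal_gain:.1f} vs. {rate(t_opt):.1f}")
```

The check at the end is the theorem itself: at the rate-maximizing leaving time, the marginal rate of gain in the patch equals the average rate for the environment.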

So, what does all this have to do with risk? In a word, everything.

Consider a utility curve with diminishing marginal returns.  Suppose you are at the mean, indicated by \bar{x}. Now you take a gamble.  If you’re successful, you move to x_1 and its associated utility.  However, if you fail, you move down to x_0 and its associated utility.  These two outcomes are equidistant from the mean. Because the curve is concave, the gain in utility that you get moving from \bar{x} to x_1 is much smaller than the loss you incur moving from \bar{x} to x_0.  The downside risk is much bigger than the upside gain.  This is illustrated in the following figure:

[Figure: concave utility curve; the utility loss from falling to x_0 exceeds the utility gain from rising to x_1]
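The asymmetry is easy to verify numerically with the square-root utility from earlier, treating x_0 = 2 and x_1 = 10 as outcomes equidistant from the mean of 6:

```python
import math

# With a concave utility (sqrt), a symmetric gamble around the mean
# loses more utility on the downside than it gains on the upside.
mean_return = 6.0
spread = 4.0  # x0 = 2 and x1 = 10, equidistant from the mean

upside_gain = math.sqrt(mean_return + spread) - math.sqrt(mean_return)
downside_loss = math.sqrt(mean_return) - math.sqrt(mean_return - spread)

print(f"upside gain in utility:   {upside_gain:.3f}")   # 0.713
print(f"downside loss in utility: {downside_loss:.3f}") # 1.035
assert downside_loss > upside_gain
```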

When returns are variable and utility/fitness is a function of returns, we can use expected utility as a tool for understanding optimal decisions. The idea goes back to von Neumann and Morgenstern, the fathers of game theory. Expected utility has received some attention in behavioral ecology, though not as much as it deserves.  Stephens and Krebs (1986) discuss it in their definitive book on foraging theory.  Bruce Winterhalder, Flora Lu, and Bram Tucker (1999) have discussed expected utility in analyzing human foraging decisions, and Bruce has also written with Paul Leslie (2002; Leslie & Winterhalder 2002) on the topic with regard to fertility decisions.  Expected utility encapsulates the very sensible idea that, when faced with a choice between two options that have uncertain outcomes, you should choose the one with the higher average payoff. The basic idea is that the world presents variable pay-offs. Each pay-off has a utility associated with it. The best decision is the one that has the highest overall expected, or average, utility associated with it. Consider a forager deciding what type of hunt to undertake. He can go for big game, but there is only a 10% chance of success. When he succeeds, he gets 10,000 kcal of energy. When he fails, he can almost always find something else on the way back home to bring to camp: 90% of the time, he will bring back 1,000 kcal.  The other option is to go for small game, which is generally a much more certain endeavor. 90% of the time, he will net 2,000 kcal.  Such small game is remarkably uniform in its payoff, but sometimes (10%) the forager will get lucky and receive 3,000 kcal. We calculate the expected utility by summing the products of the probabilities and the rewards, assuming for simplicity in this case that the utility is simply the energy value (if we didn’t make this assumption, we would calculate the utilities associated with the returns first before averaging).

Big Game: 0.1*10000 + 0.9*1000 = 1900

Small Game: 0.9*2000 + 0.1*3000 = 2100

Small game is preferred because it has higher expected utility.
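The arithmetic above, as a sketch:

```python
# Expected utility of the two hunt types, taking utility to be kcal.
# Each option is a list of (probability, payoff) pairs.
big_game = [(0.10, 10_000), (0.90, 1_000)]
small_game = [(0.90, 2_000), (0.10, 3_000)]

def expected_utility(lottery):
    # Sum of probability-weighted payoffs.
    return sum(p * payoff for p, payoff in lottery)

eu_big = expected_utility(big_game)      # 1900
eu_small = expected_utility(small_game)  # 2100
print(f"big game: {eu_big:.0f} kcal, small game: {eu_small:.0f} kcal")
assert eu_small > eu_big  # the more certain option wins on average here
```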

We can do a bit of analysis on our utility curve and show something very important about risk and expected utility. I’ll spare the mathematical details, but we can expand our utility function around the mean return using a Taylor series and then calculate expectations (i.e., average) on both sides.  The resulting expression encapsulates a lot of the theory of risk management. Let w(x) indicate the utility associated with return x (where I follow the population genetics convention that fitness is given by a w).

 \overline{w(x)} = w(\bar{x}) + \frac{1}{2} w''(\bar{x}) \mathrm{Var}(x).

Mean fitness is equal to the fitness of the mean payoff plus a term that includes the variance in x and the second derivative of the utility function.  When there is diminishing marginal utility, this second derivative will be negative.  Therefore, variance will reduce mean fitness below the fitness of the mean. When there is diminishing marginal utility, variance is bad. How bad is determined both by the magnitude of the variance and by how curved the utility function is.  If there is no curve, utility is a straight line, w''=0, and variance doesn’t matter.
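We can check the approximation numerically with the square-root utility and the two-outcome example from earlier (this is just the second-order Taylor expansion; the numbers are illustrative):

```python
import math

# Numerical check of the approximation
#   E[w(x)] ~= w(x_bar) + 0.5 * w''(x_bar) * Var(x)
# using w(x) = sqrt(x) and a two-outcome lottery over {10, 2}.
outcomes = [10.0, 2.0]
x_bar = sum(outcomes) / len(outcomes)                            # 6
var_x = sum((x - x_bar) ** 2 for x in outcomes) / len(outcomes)  # 16

exact = sum(math.sqrt(x) for x in outcomes) / len(outcomes)
w2 = -0.25 * x_bar ** (-1.5)   # second derivative of sqrt(x) at the mean
approx = math.sqrt(x_bar) + 0.5 * w2 * var_x

print(f"exact mean utility:   {exact:.4f}")   # 2.2882
print(f"Taylor approximation: {approx:.4f}")  # 2.3134
# Variance drags mean fitness below the fitness of the mean:
assert approx < math.sqrt(x_bar)
assert exact < math.sqrt(x_bar)
```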

So variance is bad for fitness.  And variance can get big. One can imagine it being quite sensible to sacrifice some mean return in exchange for a reduction in variance if this reduction outweighed the premium paid from the mean. This is exactly what we do when we purchase insurance or when a farmer sells grain futures.  This is also something that animals with parental care do.  Rather than spewing out millions of gametes in the hope that it will get lucky (e.g., like a sea urchin), animals with parental care use the energy they could spend on lots more gametes and reinvest in ensuring the survival of their offspring. This is probably also why hunter-gatherer women target reliable resources that generally have a lower mean return than other available, but risky, items.

It turns out that humans have all sorts of ways of dealing with risk, some of them embodied in our very biology.  I’m going to come up short in enumerating these because this is the central argument of my book manuscript and I don’t want to give it away (yet)! I hope to blog here in the near future about three papers that I have nearly completed that deal with risk management and the evolution of social systems, reproductive decision-making in an historical population, and foraging decisions by contemporary hunter-gatherers.  When they come out, my blog will be the first to know!

References

Charnov, E. L. 1976. Optimal foraging: The marginal value theorem. Theoretical Population Biology. 9:129-136.

Leslie, P., and B. Winterhalder. 2002. Demographic consequences of unpredictability in fertility outcomes. American Journal of Human Biology. 14 (2):168-183.

Parker, G. A., and R. A. Stuart. 1976. Animal behavior as a strategy optimizer: Evolution of resource assessment strategies and optimal emigration thresholds. American Naturalist. 110:1055-1076.

Stephens, D. W., and J. R. Krebs. 1986. Foraging theory. Princeton: Princeton University Press.

van Baalen, M., and M. W. Sabelis. 1995. The dynamics of multiple infection and the evolution of virulence. American Naturalist. 146 (6):881-910.

Winterhalder, B., and P. Leslie. 2002. Risk-sensitive fertility:The variance compensation hypothesis. Evolution and Human Behavior. 23:59-82.

Winterhalder, B., F. Lu, and B. Tucker. 1999. Risk-sensitive adaptive tactics: Models and evidence from subsistence studies in biology and anthropology. Journal of Archaeological Research. 7 (4):301-348.

Winterhalder, B., and E. A. Smith. 2000. Analyzing adaptive strategies: Human behavioral ecology at twenty-five. Evolutionary Anthropology. 9 (2):51-72.

Complexity and Nihilism

This week in class I tried to take on the topic of complexity, as in “complex systems theory.”  Complexity is a very important topic in human ecology, and biosocial science more generally.  It’s also a topic that worries me a bit. It worries me for two reasons. First, it seems all too easy for people to fall in with the cult of complexity, and I believe that the weight of evidence shows very clearly that people are not at their best when they are associated with cults. If a perspective on science provides novel (especially testable!) insights, then I’m all for it. When it takes on the doctrinaire elements of a religion, then I’m less convinced of its value.  The second reason complexity worries me is clearly related to the first. I am continually frustrated by anthropologists who, when confronted with complexity, throw their hands up and say it’s too complex to make predictions, so why bother to do science or understand the principles underlying the system?  You’d need to be trained as a theoretical physicist to understand the theory, and people who think they understand something are just deluding themselves (or at least the rest of us) with a masculinist, hegemonic fantasy anyway. Let’s just tell a narrative (preferably peppered with some mind-numbing post-structuralist social theory). Better, perhaps, that we describe history. I think that this view is misguided to say the least (though I agree that history is fundamentally important).

There are three very influential reviews, all written for the Annual Review of Anthropology (when Bill Durham was editor, might I add), by eminent ecological anthropologists that have fed this perspective. Ian Scoones, Steve Lansing, and William Baleé each wrote a review between 1999 and 2006 more or less on the topic of complexity in human ecology. Scoones (1999) reviewed the ‘New Ecology’ and its implications for the social sciences. Lansing (2003) introduced complexity proper, and Baleé (2006) wrote about ‘Historical Ecology.’ I think it’s probably fair to say that each of these authors has a different sensibility regarding the role of science in anthropology.

Baleé advocates for the perspective of historical ecology, which emphasizes historical contingency and human agency in shaping landscapes.  He seems to conflate systems ecology with an equilibrium episteme, noting that historical ecology is ‘at odds with systems ecology’ (Baleé 2006: 81) for the latter approach’s inability to allow human agency to increase biodiversity in some cases.  This is an odd critique, since there is nothing inherent in any systems theory of ecological dynamics that makes this the case.  He is also critical of island biogeography theory of MacArthur & Wilson (1967) because of its lack of attention to human agency as a cause of species invasions. Again, there is nothing inherent in island biogeography theory — or its modern inheritor, metapopulation biology — that excludes human agency as a mechanism for colonization. Presumably, the interested anthropologist could construct a model that included human facilitation of species invasions and explore both the transient and asymptotic (e.g., equilibrium) properties of this model.

Systems ecology, according to Baleé’s review, may have provided mathematical rigor to human ecology, but it was static, ahistorical, and neglected political processes, a point first noted by Wolf in his Europe and the People without History. While it is certainly true that cultural ecologists studied relatively unstratified cultures (typically in isolation from other parts of the (human) world economic system), once again, there is nothing intrinsic in cultural ecology that makes this necessary. The idea of a cultural core (“the constellation of features which are most closely related to subsistence activities and economic arrangements” (Steward 1955:37)), central to Steward’s cultural ecology, is entirely applicable to stratified societies. It is more complex, but that doesn’t make it irrelevant. Similarly, it seems that Steward’s multilinear evolutionary theory of culture, with its focus on broad cross-cultural patterns but emphasis on local particularities, is also largely compatible with the tenets of historical ecology. I think that it is a fundamental misapprehension that every anthropologist who studies the subsistence of face-to-face groups, following in the tradition of Julian Steward, is unaware of the larger political entanglements of foraging, farming, or pastoral people in a larger world political-economic system (see, e.g., Doug Bird‘s nice essay on the politics of Martu foraging). There is just a conditionality — or ‘bracketing’ if you prefer the phenomenological term — of subsistence activities.  Given that the Martu or Hadza (or whoever) forage, how do they go about doing it? What are the consequences for the landscapes in which they are embedded? These are legitimate, important, and interesting questions.  So are questions about broader political economy.  A little secret: They’re not mutually exclusive.

Lansing writes about complex systems proper, and about the phenomenon of emergence in particular.  Emergence occurs when order arises solely out of local interactions and in the absence of central control. I agree completely with Lansing that an investigation of emergence is an important endeavor in ecological anthropology and, indeed, anthropology more generally. My concern that emerges from Lansing’s paper is simply the idea that we have no hope of understanding anything without really complex nonlinear models — models that are so complex they can only be instantiated in agent-based simulations. While I am engaged in the ideas of complex systems, I am not quite ready to give up on many traditional forms of analysis that use linear models. As we will see below, the devil is in the details in complex systems models and I don’t think it’s good for science to deprive ourselves of important suites of tools because of a priori assumptions about the nature of the systems we study. This statement should not be interpreted to mean that I think this is what Lansing is doing. I do worry about anthropologists who read this review being scared away from formal ecological analysis because the nonlinearity sounds scary.

It is Scoones (1999) who makes the most extreme statements about the consequences of complexity for human ecology.  Regarding the three unifying themes around which the new human ecology was coalescing, he writes (1999: 490), “Third is the appreciation of complexity and uncertainty in social-ecological systems and, with this, the recognition that prediction, management, and control are unlikely, if not impossible.” I think that this statement, while it may be an accurate description of some unifying themes in recent human ecology, is simply incorrect and more than a bit nihilist. In all fairness, Scoones goes on to ask what the alternatives to the usual practice are (1999: 495):

So, what is the alternative to such a managerialist approach? A number of suggestions have been made. They generally converge around what has been termed “adaptive management” (Holling 1978, Walters 1976). This approach entails incremental responses to environmental issues, with close monitoring and iterative learning built into the process, such that thresholds and surprises can be responded to (Folke et al 1998).

This is a fair statement, which is rather at odds with the previous quote. If prediction and management are impossible, why is adaptive management a viable replacement?  Does adaptive management not entail making predictions and, well, managing? Of course it does.

I have a series of critical questions that must be addressed before we accede to excessive complexity and stop trying to understand the process underlying human ecology.

  1. With nonlinearity (as with stochasticity), the devil is in the details. What is the shape of the response? Sometimes nonlinear models are remarkably linear over the relevant parameter space and time scope.  Sometimes they’re not.  We don’t know unless we ask.
  2. What is the strength of the response? With nonlinearity, the thing that matters for the difficulty in prediction, sensitivity to initial conditions, etc. is the strength of response. Sometimes this strength is not that high and linear models work amazingly well.
  3. How big are the possible perturbations? We might be able to make quite good predictions if perturbations are small. Of course, we shouldn’t assume that perturbations are always small (as much classical analysis does).  This is an empirical question.
  4. What is the effect of random noise?  Some of the deterministic models with exotic dynamics collapse into pretty standard models in the presence of noise.  Of course, sometimes randomness makes prediction even harder — this is partly a function of the previous three points (i.e., the shape of nonlinearity, the strength of the response, and the size of perturbations).

A couple figures can illustrate two of these points.  Consider the following hypothetical recruitment plot.  On the x-axis, I have plotted the population size, while on the  y-axis, I have plotted the number of recruits born. Suppose that the actual underlying process for recruitment was density-dependent (i.e., was nonlinear), as indicated by the dashed line. In this particular hypothetical case, you would not do all that badly with a linear model (solid line).  As we move across three orders of magnitude, the difference in recruitment between the linear and nonlinear models is two births. The process of recruitment is nonlinear (i.e., it’s density-dependent) but you would do just fine with predictions based on a linear model.

[Figure: hypothetical recruitment plot comparing a density-dependent model (dashed line) with a linear model (solid line)]
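Here is one way to sketch this point, using an invented saturating (Beverton-Holt-style) recruitment function with deliberately weak density dependence; all parameter values are made up for illustration:

```python
# A density-dependent recruitment curve compared with a straight line,
# to show that a nonlinear process can be nearly linear over the
# observed range.  Parameter values are invented for illustration.
a, b = 0.1, 100_000.0   # per-capita recruitment and (large) density-dependence scale

def recruits_nonlinear(n):
    return a * n / (1 + n / b)   # saturating, density-dependent

def recruits_linear(n):
    return a * n                 # density-independent approximation

# Across three orders of magnitude of population size, the two models
# never differ by more than a couple of recruits.
for n in [10, 100, 1000]:
    diff = recruits_linear(n) - recruits_nonlinear(n)
    print(f"N = {n:5d}: linear - nonlinear = {diff:.3f} recruits")
```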

Taking up on Bob May’s classic (1976) paper, we can use the logistic map (a discrete-time logistic population growth model) to look at strength of response.  The logistic map is given by the nonlinear difference equation X_{t+1} = a X_t (1 - X_t). We can plot the relationship between X_t and X_{t+1}.  This shows the classic symmetric, humped recruitment curve characteristic of the logistic model.  Where the line X_{t+1} = X_t intersects the recruitment curve, the model has a fixed point. The stability of a fixed point is determined by the slope of the tangent line at the intersection of the curves. If the absolute value of this slope is greater than one, perturbations from the fixed point will grow — the model is unstable.  If the absolute value of this slope is less than one, then any trajectory in the neighborhood will return to the fixed point. The parameters used to make these figures produce a 2-cycle (i.e., the population oscillates between two values) in the left-hand case, while in the right-hand case there is a simple stable fixed point. By cranking up the parameter a in the logistic map, we can induce more and more exotic dynamics.  However, the key point here is that if the response is weak enough, the dynamics are not especially exotic at all. Note that we start to get the interesting behavior at values of a>3, or a tripling of population size each time step.  Human populations do not grow nearly this fast.  Not even close. This isn’t to say that some human processes with nonlinear dynamics don’t have very strong responses, but clearly not all must. Population growth is a pretty important problem for human ecology, and its dynamics are unlikely to be really exotic.  Maybe we can use some simple models to understand human population dynamics?  See last week’s post on the work of Tuljapurkar and colleagues for some exemplary contemporary work.

[Figure: logistic map recruitment curves with the line X_{t+1} = X_t; a 2-cycle (left) versus a stable fixed point (right)]
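Iterating the map directly shows how the strength of the response governs the dynamics; the two parameter values below are illustrative, chosen on either side of the first period-doubling at a = 3:

```python
# Iterate the logistic map x_{t+1} = a * x_t * (1 - x_t) for two
# response strengths: a = 2.8 settles onto a stable fixed point,
# while a = 3.3 settles onto a 2-cycle.
def iterate_logistic(a, x0=0.2, n=2000):
    x = x0
    trajectory = []
    for _ in range(n):
        x = a * x * (1 - x)
        trajectory.append(x)
    return trajectory

weak = iterate_logistic(2.8)
strong = iterate_logistic(3.3)

# After transients die out, measure the spread of the trajectory.
spread_weak = max(weak[-100:]) - min(weak[-100:])
spread_strong = max(strong[-100:]) - min(strong[-100:])
print(f"a=2.8 long-run spread: {spread_weak:.4f}")   # ~0: fixed point
print(f"a=3.3 long-run spread: {spread_strong:.4f}") # >0: 2-cycle
```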

So, there are two cases where understanding the nature of the nonlinearity makes an enormous difference in how we make predictions and otherwise understand the system.  Sometimes nonlinear models are effectively linear over important ranges of parameter space.  Sometimes the response of a nonlinear model is small enough that the system shows very predictable, well-mannered dynamics. But just so you don’t think that I don’t think complexity is an issue, let’s look at one more example.  This model is from a classic study by Hastings and Powell (1991) showing chaos in a simple model of a food chain.

The model has three species: producer, primary consumer, secondary consumer; and it is a simple chain (secondary consumer eats primary consumer eats producer). Hastings and Powell chose the model parameters to be biologically realistic — there’s nothing inherently wacky about the way the model is set up. Using the same parameters that they use to produce their figure 2, I numerically solved their equations (using deSolve in R).  The first plot shows the dynamics in time, with the bizarre oscillations in all three species.

[Figure: time series of the three species, showing irregular oscillations]

In the second figure, I reproduce (more or less) their three-dimensional phase plot, which takes time out of the plot and instead plots the three population series directly against each other.

[Figure: three-dimensional phase plot of the three population series]

Finally, I plot some pair-wise phase-plots, which are easier to visualize than the false 3D image above.

[Figure: pairwise phase planes for the three species]
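For those who want to reproduce the dynamics without R, here is a pure-Python sketch of the integration (I used deSolve in R for the figures above). The parameter values are those commonly cited for Hastings and Powell’s chaotic regime; the initial conditions and integrator settings are my own illustrative choices:

```python
# The Hastings & Powell (1991) three-species food chain, integrated with
# a hand-rolled fourth-order Runge-Kutta scheme.  Parameters follow the
# commonly cited chaotic regime; treat the specifics as assumptions.
a1, b1 = 5.0, 3.0    # producer -> primary consumer functional response
a2, b2 = 0.1, 2.0    # primary -> secondary consumer functional response
d1, d2 = 0.4, 0.01   # consumer death rates

def derivs(state):
    x, y, z = state
    f1 = a1 * x / (1 + b1 * x)   # Holling type II responses
    f2 = a2 * y / (1 + b2 * y)
    dx = x * (1 - x) - f1 * y
    dy = f1 * y - f2 * z - d1 * y
    dz = f2 * z - d2 * z
    return (dx, dy, dz)

def rk4_step(state, dt):
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = derivs(state)
    k2 = derivs(nudge(state, k1, dt / 2))
    k3 = derivs(nudge(state, k2, dt / 2))
    k4 = derivs(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

state = (0.8, 0.2, 10.0)   # initial densities; illustrative
dt, steps = 0.01, 50_000
trajectory = [state]
for _ in range(steps):
    state = rk4_step(state, dt)
    trajectory.append(state)

# Irregular but bounded: the producer oscillates widely yet stays
# positive and below its logistic ceiling.
xs = [s[0] for s in trajectory]
print(f"producer range: {min(xs):.3f} to {max(xs):.3f}")
```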

On the whole, we see very complex behavior in a rather simple food chain. Hastings and Powell (1991: 901-902) summarize their findings: (1) contrary to conventional wisdom, they suggest that chaos need not be rare in nature, (2) chaotic behavior “need not lead to an erratic and unpatterned trajectory in time that one might infer from the usual (not mathematical) connotation of the word ‘chaos'” and (3) time scales matter tremendously — over short time scales, the behavior of the system is quite regular.

For me, the greatest lesson from the complex systems approach is the need to understand the specific details.  Contrary to the inclination to throw up one’s hands at the thought of a science of human ecology (let alone putting this science into practice with sensible management policies), it seems that the issues raised here mean that we should study these systems more, attempting to understand both their historical trajectories and the principles upon which they are organized. By all means, let’s jettison old-fashioned ideas about typology and homeostasis in nature.  No need to keep around the clockworks metaphor of ecological succession or the idea that the Dobe !Kung are Pleistocene remnants. Ecosystems, landscapes, whatever term you want to use, don’t necessarily tend toward equilibria. Uncertainty is ubiquitous. People are part of these systems and have been for a long time. Good, we’re agreed.  But can we please not give up on using all the scientific tools we have at our disposal to understand these complex systems in which human beings are embedded? Anthropologists have much to contribute to this area, not the least of which is long-term, place-based research on human-environmental systems.

The lesson of prediction over the short term is another issue that comes up repeatedly in the complex systems literature.  I think that the work of George Sugihara and colleagues is especially good on this front. I have blogged before (here and here) about a paper on which he is a co-author (I should note that in this paper they suggest ways to make predictions of catastrophic events in complex systems with noise — just sayin’). There is a nice, readable article in Scientific American on his work on fisheries that summarizes the issues. This work combines so many things that I like (demography, fish, statistics, theoretical ecology, California) that it’s a bit scary. Another nice, readable piece that also describes some of Sugihara’s work in finance can be found in SEED magazine here.

This post is already too long.  I clearly will need to write about the other topic for the week, risk and uncertainty, at a later date.

References

Baleé, W. 2006. The research program of historical ecology. Annual Review of Anthropology. 35:75-98.

Hastings, A., and T. Powell. 1991. Chaos in a three-species food chain. Ecology. 72 (3):896-903.

Lansing, J. S. 2003. Complex adaptive systems. Annual Review of Anthropology. 32:183-204.

MacArthur, R. H., and E. O. Wilson. 1967. The theory of island biogeography. Princeton: Princeton University Press.

May, R. M. 1976. Simple mathematical models with very complicated dynamics. Nature. 261 (5560):459-467.

Scoones, I. 1999. New Ecology and the social sciences: What prospects for a fruitful engagement? Annual Review of Anthropology. 28:479-507.

Models of Human Population Growth

The logistic equation is a model of population growth where the size of the population exerts negative feedback on its growth rate. As population size increases, the rate of increase declines, leading eventually to an equilibrium population size known as the carrying capacity.  The time course of this model is the familiar S-shaped growth that is generally associated with resource limitation. This model has only two parameters: r is the intrinsic growth rate and K is the carrying capacity. The rate of increase in the population declines as a linear function of population size.  In symbols:

 \frac{dN}{dt} = rN \left(1 - \frac{N}{K}\right)

When the population size is very small (i.e., when N is close to zero), the term in the parentheses is approximately one and population growth is approximately exponential.  When population size is close to the carrying capacity (i.e., N \approx K), the term in parentheses approaches zero, and population growth ceases. It is straightforward to integrate this equation by partial fractions and show that the resulting solution is indeed an S-shaped, or sigmoid, curve.
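A sketch of the closed-form solution obtained by that partial-fractions integration (the parameter values below are arbitrary):

```python
import math

# Closed-form solution of dN/dt = r*N*(1 - N/K):
#   N(t) = K / (1 + ((K - N0)/N0) * exp(-r*t))
def logistic(t, n0, r, k):
    return k / (1 + ((k - n0) / n0) * math.exp(-r * t))

r, k, n0 = 0.1, 1000.0, 10.0   # illustrative parameter values

# Early growth is nearly exponential...
early_exact = logistic(1, n0, r, k)
early_exponential = n0 * math.exp(r * 1)
print(f"N(1) = {early_exact:.2f} vs. pure exponential {early_exponential:.2f}")

# ...and growth ceases as N approaches the carrying capacity.
late = logistic(200, n0, r, k)
print(f"N(200) = {late:.2f} (K = {k:.0f})")
```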

Raymond Pearl was a luminary in human biology.  A professor at Johns Hopkins University, a founder of the Society for Human Biology and the International Union for the Scientific Study of Population (IUSSP), Pearl also re-discovered the logistic growth model (which was originally developed by the great Belgian mathematician Pierre François Verhulst).  In the logistic model, Pearl believed he had found a universal law of biological growth at its various levels of organization.  In his book, The Biology of Population Growth, Pearl wrote:

… human populations grow according to the same law as do the experimental populations of lower organisms, and in turn as do individual plants and animals in body size. This is demonstrated in two ways: first by showing as was done in my former book “Studies in Human Biology,” that in a great variety of countries all of the recorded census history which exists is accurately described by the same general mathematical equation as that which describes the growth of experimental populations; second, by bringing forward in the present book the case of a human population — the indigenous native population of Algeria — which has in the 75 years of its recorded census history practically completed a single cycle of growth along the logistic curve.

In addition to Algeria, Pearl fit the logistic model to the population of the United States from 1790-1930. The fit he produced was uncanny and he confidently predicted that the US population would level out at 198 million, since this was the best-fit value of K in his analysis.  I have plotted the US population size (from the decennial census) as black points below, with Pearl’s fitted curve in grey. We can see that the curve fits incredibly well for the period 1790-1930 (the span to which he fit the data), but the difference between prediction and empirical reality becomes increasingly large after 1950 (yep, that would be thanks to the Baby Boom).

[Figure: US decennial census population (black points) with Pearl’s fitted logistic curve (grey), 1790 onward.]

Why does the logistic model fail so spectacularly in this case (and many others)?

The logistic model is phenomenological, rather than mechanistic. A phenomenological model is a mathematical convenience that we use to describe some empirical observations, but one that has no foundation in mechanisms or first principles. Such models can be useful when theory is lacking to explain some phenomenon or when the mathematics that would be required to model the mechanisms is too complicated. You can make a prediction from a phenomenological model, but I wouldn’t bet the farm on that prediction. In the absence of an actual understanding of the mechanisms producing the population change, the predictions can go horribly wrong, as we see in the case of Raymond Pearl’s fit.
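Pearl’s failure mode is easy to reproduce with synthetic data. In this sketch (all numbers invented; this is an illustration, not a reanalysis of the census), the “observed” history follows a logistic curve until a regime shift, a stand-in for the Baby Boom, changes the underlying mechanism. A logistic curve fit to the pre-shift window fits perfectly in-sample and still extrapolates disastrously:

```python
import math

def logistic(t, N0, r, K):
    return K / (1.0 + ((K - N0) / N0) * math.exp(-r * t))

# "Observed" history: logistic growth until t = 15, then a regime shift
# (think Baby Boom) boosts the intrinsic rate and the ceiling.
def history(t):
    if t <= 15:
        return logistic(t, 2.0, 0.3, 150.0)
    base = logistic(15, 2.0, 0.3, 150.0)
    return logistic(t - 15, base, 0.4, 400.0)

# A logistic fit to only the pre-shift data recovers the pre-shift
# parameters exactly, so the in-sample fit is perfect...
fit = lambda t: logistic(t, 2.0, 0.3, 150.0)
in_sample_err = max(abs(fit(t) - history(t)) for t in range(16))
# ...but extrapolation fails badly once the mechanism changes.
out_err = abs(fit(40) - history(40))
print(in_sample_err, round(out_err, 1))
```

An uncanny in-sample fit, in other words, tells you nothing about whether the fitted K means anything.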

Specifically, the logistic model fails to consider mechanisms of population regulation. When density increases, what is affected?  Birth rates? Death rates? The r parameter in the logistic model is simply the difference between the gross birth and death rates when no conspecifics are present.  In general, when the birth rate exceeds the death rate, a population increases.  The linear decrease in the realized per-capita growth rate with increasing population size can presumably come about either by the birth rate decreasing or by the death rate increasing.  The logistic model is indifferent to the specific cause of the slowing.  It just stops increasing when N=K. Is it possible that, in real populations, increasing the death rate and decreasing the birth rate might have qualitatively different effects on population growth? We’ll see.
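That indifference is easy to demonstrate. In this little sketch (functional forms and numbers are invented for illustration), two parameterizations with the same net growth rate r(N) = 0.5(1 - N/100), one in which density depresses births and one in which it elevates deaths, produce literally indistinguishable trajectories in the unstructured model:

```python
# Two toy parameterizations with identical net growth r(N) = 0.5*(1 - N/100):
# (a) density lowers births; (b) density raises deaths.  The unstructured
# logistic model cannot tell them apart: N(t) is the same in both.
def simulate(b, d, N0=5.0, dt=0.01, steps=2000):
    N = N0
    traj = []
    for _ in range(steps):
        N += dt * N * (b(N) - d(N))
        traj.append(N)
    return traj

births_fall = simulate(b=lambda N: 0.8 - 0.5 * N / 100.0, d=lambda N: 0.3)
deaths_rise = simulate(b=lambda N: 0.8, d=lambda N: 0.3 + 0.5 * N / 100.0)
print(max(abs(x - y) for x, y in zip(births_fall, deaths_rise)))
```

Any model in which those two mechanisms matter differently (for instance, because mortality falls unevenly across ages) has to carry more structure than the logistic does.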

This probably goes without saying, but the logistic model also has no capacity for positive feedbacks with population size. In her classic work, The Conditions of Agricultural Growth, Danish economist Ester Boserup noted that population growth often stimulates innovation. Population pressure might cause an agricultural group that has run out of land to intensify cultivation by improving the land or multi-cropping, thereby facilitating even greater population growth.  Various authors, including Ken Wachter and Ron Lee (both at Berkeley) and Jim Wood at Penn State, have noted that real populations probably incorporate both Malthusian (i.e., conditions leading to increased mortality, decreased fertility, and general misery with increased population size) and Boserupian phases in their dynamics.  Wood coined the term “MaB Ratchet” (MaB = Malthus and Boserup) to describe the following dynamic: Malthusian pressure incites Boserupian innovation, relaxing negative feedback and allowing further population growth.  While a population is undergoing a Boserupian expansion, quality of life improves. Alas, given enough time, the population will always return to “the same level of marginal immiseration” (Wood 1998: 114). Such complex regimes of positive and negative population feedback are simply not possible in the logistic model.
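A caricature of the ratchet takes only a few lines. In this sketch (thresholds and parameters are my own invention, not Wood’s), growth is logistic toward a ceiling K, but whenever the population presses against the ceiling, “innovation” raises K and growth resumes:

```python
# A toy "MaB ratchet": logistic growth toward K, but whenever the population
# presses against its ceiling (Malthusian pressure), innovation raises K
# (Boserupian response) and growth resumes.  All parameters are invented
# for illustration only.
def mab_ratchet(N0=10.0, K0=100.0, r=0.3, boost=1.5, dt=0.1, steps=3000):
    N, K = N0, K0
    ceilings = [K]
    for _ in range(steps):
        N += dt * r * N * (1.0 - N / K)
        if N > 0.95 * K:          # pressure threshold triggers innovation
            K *= boost
            ceilings.append(K)
    return N, ceilings

N_final, ceilings = mab_ratchet()
print(round(N_final), len(ceilings) - 1)  # final size, number of ratchet events
```

The trajectory is a staircase of near-saturations and renewed expansions, with the population spending most of its time near whatever the current ceiling happens to be, which is Wood’s marginal immiseration in miniature.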

One final problem with the logistic model is that there is no structure — all individuals are identical in terms of their effect on and contribution to population growth. Human vital rates vary predictably – and substantially – by age, sex, geographic region, urban vs. rural residence, etc. And then there’s the issue of unequal resource distribution.  All individuals in a population are hardly equal in their consumption (or production) and so we should hardly expect each to exert an identical force on population growth.

So are there better alternative models for human population growth that incorporate the sensible idea that as populations push the limits of their resource base, growth should slow down and eventually cease? There are now.  My Stanford colleague and collaborator in various endeavors, Shripad Tuljapurkar, has a series of papers in which he and his students develop mechanistic population models for agricultural populations that specifically link age-specific vital rates (i.e., survivorship, fertility), agricultural production and labor, and the age-specific metabolic needs of individuals engaged in heavy physical labor.  The models start with an optimal energy supply for survival and reproduction.  As food gets scarcer, mortality increases and fertility decreases.  The model has an equilibrium where birth and death rates balance. A key feature of the model is the food ratio, which is the number of calories available to consume in a given year relative to the number of calories needed to maximize survival and fertility. The food ratio tells us how hungry the population is. In the first of a series of three papers, Lee and Tuljapurkar (2008) develop this model and show how changes in mortality, fertility, and agricultural productivity all have distinct effects on the population growth rate, the equilibrium, and how hungry people are at equilibrium. Analysis of their model yielded the following results:

  • Increasing agricultural productivity or the amount of time spent working on agricultural production increases the food ratio, while keeping the population growth rate largely unchanged
  • Increasing baseline survival increases the food ratio but decreases the population growth rate
  • Decreasing fertility only decreases the growth rate – the food ratio remains unchanged
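To make the food-ratio logic concrete, here is a deliberately stripped-down, unstructured sketch in the spirit of the model (emphatically NOT Lee and Tuljapurkar’s actual age-structured formulation; every functional form and number below is invented for illustration). Vital rates depend on the food ratio \phi = calories available / calories needed, and the population settles where births balance deaths:

```python
# Toy hunger-dependent dynamics: a fixed land base yields a fixed total food
# supply; the food ratio phi falls as the population grows, dragging fertility
# down and mortality up until births balance deaths.
def food_ratio(N, total_food=120.0, need_per_capita=1.0):
    return min(1.0, total_food / (N * need_per_capita))

def step(N, dt=0.05):
    phi = food_ratio(N)
    birth = 0.04 * phi            # fertility falls as food gets scarce
    death = 0.05 - 0.03 * phi     # mortality rises as food gets scarce
    return N + dt * N * (birth - death)

N = 50.0
for _ in range(20000):
    N = step(N)
print(round(N, 1), round(food_ratio(N), 3))
```

Note a limitation of this unstructured toy: the equilibrium food ratio is pinned entirely by where the birth and death schedules cross, so productivity shifts only the equilibrium size, not equilibrium hunger. Getting the richer comparative results listed above is precisely what the full age-structured model is for.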

So, we see that it is possible that increasing the death rate and decreasing the birth rate might have qualitatively different effects on population growth. In fact, it seems quite likely, given Lee & Tulja’s model.

We don’t, as yet, have for the Lee-Tulja model the kind of test that we gave Raymond Pearl’s application of the logistic model to US population size. It would be very nice if we could use the Lee-Tulja model to make a prediction about the future dynamics of some population (and its distribution of hunger) and challenge this prediction with data not used to fit the model in the first place. That said, I think the theoretical exercise alone is enough to demonstrate the importance of moving beyond phenomenological population models whenever possible. We are unlikely to make accurate predictions, or to understand the responses of populations to environmental and social change, in the absence of mechanistic models.

References

Lee, C. T., and S. Tuljapurkar. 2008. Population and prehistory I: Food-dependent population growth in constant environments. Theoretical Population Biology. 73:473–482.

Wood, J. W. 1998. A theory of preindustrial population dynamics: Demography, economy, and well-being in Malthusian systems. Current Anthropology. 39 (1):99-135.

Response to Selection

I’m done now with the first week of the Spring quarter. It was a bit challenging because I had to attend the PAA meetings in Washington, DC for the latter part of the week, but Brian Wood ably covered for me on Thursday. I thought that I would use the blog as a tool for summarizing one of the key points I want students to take away from this first week, in which we discussed evolution and natural selection.

We spent a good deal of lecture time talking about adaptation.  Specifically, we discussed how adaptation can serve as a foil to typology and essentialism. Adaptation is local and must be seen within its specific environmental and historical context. Adaptations are dynamic because environments are.

Adaptationist thinking is powerful, but can easily be overdone. This is why I also think it is essential to understand the mechanics of selection, something that I’m afraid is not often addressed in introductory evolutionary anthropology classes.  So, in the very first lecture of class, I throw some quantitative genetics (and, thus, some math) at students.  Of course, these are Stanford students, so I’m confident they can handle a little techie-ness every now and then. We specifically discuss the multivariate breeder’s equation, sometimes known as Lande’s equation:

\Delta \mathbf{\bar{z}} = \mathbf{G \beta},

where \Delta \mathbf{\bar{z}} is the change in the mean of a multivariate phenotype, \mathbf{G} is the additive genetic variance-covariance matrix, and \mathbf{\beta} is the selection gradient on \mathbf{\bar{z}}.

In effect, \beta is a vector pointing in the direction of the optimal change in the phenotype. The matrix \mathbf{G} does two things to this gradient pushing \mathbf{\bar{z}} toward its optimum: (1) it scales the response depending on how much additive variance there is in each trait and (2) it rotates the response as a function of the covariances between traits. I won’t get too much into matrix multiplication here. The key point is that \mathbf{G} is a square k \times k matrix (where k is the number of traits we’re looking at) whose diagonal elements are variances and whose off-diagonal elements, g_{ij}, represent the covariances between traits i and j.   Selection requires variance. Without sufficient variance, even strong selection won’t change the phenotype much between generations.  But variance isn’t all there is to it. When the covariances are positive, there will be substantial indirect selection, and when they are negative, you have genetic constraints at work. Selection may be pointing in a particular direction, but the structure of the trade-offs could very easily mean that you can’t actually get there.

Let’s consider three quick (toy) examples.  Say we have two traits, maybe “length” and “width” (this could be something less vague and insipid: Lande (1979) looks at brain mass and body mass in a serious two-trait example). We will assume that the selection gradient is \mathbf{\beta} = \{0.5, 0.25\}. That is, the force of selection is twice as high on length as it is on width, but it is pretty strong and positive on both. We’ll demonstrate the effect of variance and constraint in three ways:  (1) more variance in the trait under weaker selection (\mathbf{G_1}), (2) positive covariance between the two traits (\mathbf{G_2}), and (3) negative covariance between the two traits (\mathbf{G_3}).

 \mathbf{G_1} = \left( \begin{array}{cc} 0.33 & 0.00 \\ 0.00 & 0.67 \end{array} \right)

 \mathbf{G_2} = \left( \begin{array}{cc} 0.33 & 0.33 \\ 0.33 & 0.67 \end{array} \right)

 \mathbf{G_3} = \left( \begin{array}{cc} 0.33 & -0.33 \\ -0.33 & 0.67 \end{array} \right)

The figure below plots the response to selection in the three different types of genetic architecture.  The direction of selection is indicated in the grey arrow. If the variances of the two traits were equal to 1 and there were zero covariances, this is where selection would move the phenotype pair (try it). We can see that the response to selection moves toward width (the trait under weaker selection) even when covariances are zero (black arrow).  Why? Because there is more variance for width than there is for length (0.67 \times 0.25 > 0.33 \times 0.5).  This effect becomes more pronounced when there is positive covariance between the traits (blue arrow) — the selection toward width is 0.33 \times 0.5 +0.67 \times 0.25 = 0.3325. When the covariances are negative, we see something cool (red arrow).  The response to selection is small and moves (almost) entirely in the direction of length. This is because the negative covariance between length and width, when acted on by the strong selection on length, all but cancels out the positive response to selection (-0.33 \times 0.5 + 0.67 \times 0.25 = 0.0025).
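If you want to check these numbers yourself, the whole demonstration is one matrix-vector product per architecture. A minimal Python sketch of \Delta \mathbf{\bar{z}} = \mathbf{G \beta} for the three toy matrices:

```python
# Compute the response to selection, Delta z-bar = G * beta, for each of the
# three toy genetic architectures from the text.
def matvec(G, beta):
    return [sum(g * b for g, b in zip(row, beta)) for row in G]

beta = [0.5, 0.25]                   # selection twice as strong on length
G1 = [[0.33, 0.00], [0.00, 0.67]]    # no covariance, more variance in width
G2 = [[0.33, 0.33], [0.33, 0.67]]    # positive covariance
G3 = [[0.33, -0.33], [-0.33, 0.67]]  # negative covariance (constraint)

for name, G in [("G1", G1), ("G2", G2), ("G3", G3)]:
    print(name, matvec(G, beta))
```

The first component of each result is the length response and the second is the width response; the G3 case reproduces the near-cancellation (0.0025) discussed above.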

[Figure: response to selection under the three genetic architectures; grey arrow shows the selection gradient, black arrow \mathbf{G_1}, blue arrow \mathbf{G_2}, red arrow \mathbf{G_3}.]

This simple demonstration shows that the response to selection can be complex. Making an argument that some trait would be under selection is not sufficient to say that it actually evolved (or will evolve) that way.  Entirely plausible arguments for the direction of selection are made all the time in evolutionary anthropology.  Here is one from a very important paper in paleoanthropology (Lovejoy 1981: 344):

Any behavioral change that increases reproductive rate, survivorship, or both, is under selection of maximum intensity. Higher primates rely on social behavioral mechanisms to promote survivorship during all phases of the life cycle, and one could cite numerous methods by which it theoretically could be increased.  Avoidance of dietary toxins, use of more reliable food sources, and increased competence in arboreal locomotion are obvious examples. Yet these are among the many that have remained under strong selection throughout much of the course of primate evolution, and it is therefore unlikely that early hominid adaptation was a product of intensified selection for adaptations almost universal to anthropoid primates.

Arguing for selection without considering trade-offs can get you into trouble.  Selection in the presence of quantitative genetic constraints (or even differential variance in the traits) can produce counter-intuitive results. (Selectionists, don’t despair. There are ways to deal with this, but it will have to wait for another post). In the case of Lovejoy’s argument, there are good reasons to think that survivorship and reproductive rate are, indeed, strongly negatively correlated. Which is under stronger selection? Which has more additive variance? How strong are the negative covariances?

When we make selectionist or adaptationist arguments, we should always keep in the back of our minds the three questions:

  1. How strong is the force of selection?
  2. How much variance is there on which selection can act?
  3. How is the trait constrained through negative correlations with other traits?

References

Lande, R. 1979. Quantitative genetic analysis of multivariate evolution, applied to brain: body size allometry. Evolution. 33:402-416.

Lovejoy, C. O. 1981. The origin of man. Science. 211:341-350.