Tag Archives: publishing

On The Dilution Effect

A new paper written by Dan Salkeld (formerly of Stanford), Kerry Padgett (CA Department of Public Health), and me just came out in the journal Ecology Letters this week.

One of the most important ideas in disease ecology is a hypothesis known as the “dilution effect”. The basic idea behind the dilution effect hypothesis is that biodiversity — typically measured by species richness, or the number of different species present in a particular spatially defined locality — is protective against infection with zoonotic pathogens (i.e., pathogens transmitted to humans through animal reservoirs). The hypothesis emerged from analysis of Lyme disease ecology in the American Northeast by Richard Ostfeld and his colleagues and students from the Cary Institute of Ecosystem Studies in Millbrook, New York. Lyme disease ecology is incredibly complicated, and there are a couple of different ways that the dilution effect can come into play even in this one disease system, but I will try to render it down to something easily digestible.

Lyme disease is caused by the spirochete bacterium Borrelia burgdorferi. It is a vector-borne disease transmitted by hard-bodied ticks of the genus Ixodes. These ticks are what is known as hemimetabolous, meaning that they undergo incomplete metamorphosis: rather than passing through a pupal stage, they develop through larval and nymphal stages that resemble little bitty adults. An Ixodes tick takes three blood meals in its lifetime: once as a larva, once as a nymph, and once as an adult. At different life-cycle stages, the ticks have different preferences for hosts. Larval ticks generally favor the white-footed mouse (Peromyscus leucopus) for their blood meal, and herein lies the catch. It turns out that white-footed mice are extremely efficient reservoirs for Lyme disease. In fact, an infected mouse has as much as a 90% chance of transmitting infection to a larva feeding on it. The larvae then molt into nymphs and overwinter on the forest floor. Then, in spring or early summer a year after they first hatch from eggs, the nymphs seek vertebrate hosts. If an individual tick acquired infection as a larva, it can now transmit it to its next host. Nymphs are less particular about their choice of host and are happy to feed on humans (or just about any other available vertebrate host).

This is where the dilution effect comes in. The basic idea is that if there are more potential hosts such as chipmunks, shrews, squirrels, or skunks, there are fewer chances that an infected nymph will take a blood meal on a person. Furthermore, most of these hosts are much less efficient at transmitting the Lyme spirochete than are white-footed mice. This lowers the prevalence of infection in the tick population and makes it more likely that the pathogen will go extinct locally. It’s not difficult to imagine the dilution effect working at the larval-stage blood meal too: if there are more species present (and the larvae are not picky about their blood meal), the risk of initial infection is also diluted.
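The logic of dilution at the blood-meal stage can be sketched with a toy calculation. Everything here is illustrative and not from our paper: the host counts are arbitrary, the 90% mouse competence comes from the figure above, and the 10% competence for other hosts is a made-up stand-in for "much less efficient".

```python
# Toy dilution-effect model (illustrative only, not from the paper):
# larvae feed at random across available hosts; only white-footed mice
# are highly competent reservoirs, so adding other host species dilutes
# the chance that a feeding larva acquires infection.

def larval_infection_risk(n_mice, n_other_hosts,
                          p_mouse=0.90, p_other=0.10):
    """Probability a randomly feeding larva becomes infected, under
    hypothetical reservoir competencies p_mouse and p_other."""
    total = n_mice + n_other_hosts
    frac_mice = n_mice / total
    return frac_mice * p_mouse + (1 - frac_mice) * p_other

# A mouse-only fragment vs. one with added (less competent) hosts:
risk_depauperate = larval_infection_risk(n_mice=50, n_other_hosts=0)
risk_diverse = larval_infection_risk(n_mice=50, n_other_hosts=150)
print(round(risk_depauperate, 2))  # 0.9
print(round(risk_diverse, 2))      # 0.3
```

The same random-feeding assumption drives the nymphal version of the argument: the more the bites are spread across incompetent hosts (or away from people), the lower the realized risk.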

In the highly fragmented landscape of northeastern temperate woodlands, when only one small-mammal species occupies a forest fragment, it is quite likely to be the white-footed mouse. These mice are very adaptable generalists that occur in a wide range of habitats, from pristine woodland to degraded forest. Species-poor habitats therefore tend to have mice but little else. The idea behind the dilution effect is that by adding species to the baseline of a highly depauperate assemblage consisting of little more than white-footed mice, the prevalence of nymphal infection will decline and the risk of zoonotic infection in people will be reduced.

It is not an exaggeration to say that the dilution-effect hypothesis is one of the two or three most important ideas in disease ecology, and much of the explosion of interest in disease ecology can be attributed in part to such ideas. The dilution effect is also a nice idea. Wouldn’t it be great if every dollar we invested in the conservation of biodiversity potentially paid a dividend in reduced disease risk? However, neither its importance to the field nor the beauty of the idea guarantees that it is actually scientifically correct.

One major issue with the dilution effect hypothesis is its problem with scale, arguably the central question in ecology. Numerous studies have shown that pathogen diversity is positively related to overall biodiversity at larger spatial scales. For example, in an analysis of global risk of emerging infectious diseases, Kate Jones and her colleagues from the Zoological Society of London showed that globally, mammalian biodiversity is positively associated with the odds of an emerging disease. Work by Pete Hudson and colleagues at the Center for Infectious Disease Dynamics at Penn State showed that healthy ecosystems may actually be richer in parasite diversity than degraded ones. Given these quite robust findings, how is it that diversity at a smaller scale is protective?

We use a family of statistical tools known as “meta-analysis” to aggregate the results of a number of previous studies into a single synthetic test of the dilution-effect hypothesis. It is well known that inferences drawn from small samples generally have lower precision (i.e., the estimates carry more uncertainty) than inferences drawn from larger samples. A nice demonstration of this comes from classical asymptotic statistics. The expected value of a sample mean is the true mean of the underlying distribution, and the standard deviation of the sampling distribution of the mean is the standard error, defined as the standard deviation of the distribution divided by the square root of the sample size. Say that in two studies the standard deviation of the measurements is 10. In the first study, the mean is estimated from a single observation, whereas in the second, it is based on a sample of 100 observations. The estimate of the mean in the second study is ten times more precise than that of the first because 10/\sqrt{1} = 10 while 10/\sqrt{100} = 1.
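For readers who like to see the precision argument numerically, here is a small sketch. The standard-error arithmetic is the textbook formula; the simulation parameters (true mean, replicate count) are arbitrary.

```python
# The precision argument in numbers: with a common standard deviation
# of 10, the standard error of the mean shrinks with the square root
# of the sample size.
import math
import random

def standard_error(sd, n):
    return sd / math.sqrt(n)

print(standard_error(10, 1))    # 10.0
print(standard_error(10, 100))  # 1.0

# Simulation check: means of larger samples scatter less around the
# true mean (seeded for reproducibility; parameters are arbitrary).
random.seed(42)

def scatter_of_means(n, reps=500, mu=0.0, sd=10.0):
    """Empirical standard deviation of `reps` sample means of size n."""
    means = [sum(random.gauss(mu, sd) for _ in range(n)) / n
             for _ in range(reps)]
    grand = sum(means) / reps
    return math.sqrt(sum((m - grand) ** 2 for m in means) / reps)

s1 = scatter_of_means(1)
s100 = scatter_of_means(100)
print(s100 < s1)  # True
```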

Meta-analysis allows us to pool estimates from a number of different studies to increase our sample size and, therefore, our precision. One of the primary goals of meta-analysis is to estimate the overall effect size and its corresponding uncertainty. The simplest way to think of effect size in our case is the difference in disease risk (e.g., as measured by the prevalence of infected hosts) between a species-rich area and a species-poor area. Unfortunately, a surprising number of studies don’t publish this seemingly basic result. For such studies, we have to calculate a surrogate of effect size based on the test statistics that the authors report. This is not completely ideal — we would much rather calculate effect sizes directly, but to paraphrase a dubious source, you do a meta-analysis with the statistics that have been published, not with the statistics you wish had been published. On this note, one of our key recommendations is that disease ecologists do a better job reporting effect sizes to facilitate future meta-analyses.
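A minimal sketch of the pooling step, using standard fixed-effect inverse-variance weighting. The effect sizes and standard errors below are made up for illustration; they are not the studies in our analysis.

```python
# Fixed-effect meta-analysis in miniature: pool per-study effect sizes
# by inverse-variance weighting. All numbers are invented.
import math

def pool_effects(effects, ses):
    """Inverse-variance weighted mean effect and its standard error."""
    weights = [1 / se ** 2 for se in ses]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return mean, se

effects = [-0.8, -0.3, 0.1, -0.5, 0.2]   # hypothetical study effects
ses = [0.6, 0.2, 0.4, 0.5, 0.3]          # hypothetical standard errors

mean, se = pool_effects(effects, ses)
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(round(mean, 3), round(se, 3))
print("95% CI crosses zero:", lower < 0 < upper)
```

Note how the precise studies (small standard errors) dominate the pooled estimate, which is exactly why precision, not just study count, matters.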

In addition to allowing us to estimate the mean effect size across studies and its associated uncertainty, another goal of meta-analysis is to test for the existence of publication bias. Stanford’s own John Ioannidis has written on the ubiquity of publication bias in medical research. The term “bias” has a general meaning that is not quite the same as the technical meaning. By “publication bias”, there is generally no implication of nefarious motives on the part of the authors. Rather, it typically arises through a process of selection at both the level of individual authors and the institutional level of the journals to which authors submit their papers. An author who is under pressure to be productive by her home institution and funding agencies is not going to waste her time submitting a paper that she thinks has a low chance of being accepted. This means that there is a filter at the level of the author against publishing negative results. This is known as the “file-drawer effect”, referring to the hypothetical 19 studies with negative results that never make it out of the author’s file drawer for every one paper publishing positive results. Of course, journals, editors, and reviewers also prefer papers with positive results. These very sensible responses to incentives in scientific publication unfortunately aggregate into systematic biases at the level of the broader literature in a field.

We use a couple of methods for detecting publication bias. The first is a graphical device known as a funnel plot. We expect studies done on large samples to have estimates of the effect size that are close to the overall mean effect because estimates based on large samples have higher precision. On the other hand, smaller studies will have effect-size estimates that are more dispersed because random error has a bigger influence in small samples. If we plot the precision (e.g., measured by the standard error) against the effect size, we expect to see an inverted triangle shape — or a funnel — in the scatter plot. Note — and this is important — that we expect the scatter around the mean effect size to be symmetrical. Random variation is just as likely to push effect-size estimates above the mean as below it. However, if there is a tendency to not publish studies that fail to support the hypothesis, we should see an asymmetry in our funnel. In particular, there should be a deficit of studies that have low power and effect-size estimates in the direction opposite to the hypothesis. This is exactly what we found. Only studies supporting the dilution-effect hypothesis are published when they have very small samples. Here is what our funnel plot looked like.

Note that there are no points in the lower right quadrant of the plot (where species richness and disease risk would be positively related).
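The selection process that produces this kind of asymmetry is easy to mimic in simulation. The sketch below uses invented parameters and a crude publication filter; it is a cartoon of the mechanism, not our analysis.

```python
# A schematic of how publication bias hollows out one side of a funnel
# plot. We simulate studies around a true effect of zero, then apply a
# selection filter that discards imprecise studies whose estimates go
# in the "wrong" (positive) direction. Entirely illustrative.
import math
import random

random.seed(1)

TRUE_EFFECT = 0.0
studies = []
for _ in range(200):
    n = random.randint(5, 200)            # sample size
    se = 10 / math.sqrt(n)                # standard error shrinks with n
    est = random.gauss(TRUE_EFFECT, se)   # study's effect estimate
    studies.append((est, se))

# Selection rule: small (imprecise) studies only get published if they
# support the hypothesized negative effect.
published = [(est, se) for est, se in studies if se < 1.5 or est < 0]

# The lower-right region (imprecise AND positive) is now empty:
lower_right = [s for s in published if s[1] >= 1.5 and s[0] > 0]
print(len(lower_right))  # 0
```

Plotting `published` as effect size against standard error would reproduce the one-sided funnel: symmetric at high precision, missing a wing at low precision.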

While the graphical approach is great and provides an intuitive feel for what is happening, it is nice to have a more formal way of evaluating the effect of publication bias on our estimates of effect size. Note that if there is publication bias, we will over-estimate our precision because the studies that are missing are far away from the mean (and on the wrong side of it). The method we use to measure the impact of publication bias on our estimate of uncertainty formalizes this idea. Known as “trim-and-fill”, it uses an algorithm to find the most divergent asymmetric observations. These are removed and the precision of the mean effect size is calculated; this sub-sample is known as the “truncated” sample. Then a sample of missing values is imputed (i.e., simulated from the implied distribution) and added to the base sample; this is known as the “augmented” sample. The precision is then re-calculated. If there is no publication bias, these estimates should not be too different. In our sample, we find that estimates of precision differ quite a bit between the truncated and augmented samples. We estimate that between four and seven studies are missing from the sample.
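The core intuition, that imputing mirror images of the most asymmetric studies moves the estimate, can be illustrated in miniature. This is a cartoon with made-up numbers, not the actual trim-and-fill algorithm we used (which also iterates and re-weights by precision).

```python
# Trim-and-fill intuition in miniature: reflect the most extreme
# one-sided studies across the pooled mean, standing in for studies
# presumed missing, and watch the estimate shift. Invented numbers.

def mean_effect(effects):
    return sum(effects) / len(effects)

# Hypothetical observed effects: all the extreme values sit on the
# negative (hypothesis-supporting) side.
observed = [-1.2, -0.9, -0.7, -0.4, -0.3, -0.1, 0.0, 0.1]
m0 = mean_effect(observed)

# "Fill": reflect the k most extreme negative studies across the mean.
k = 3
missing = [2 * m0 - e for e in sorted(observed)[:k]]
augmented = observed + missing
m1 = mean_effect(augmented)

print(round(m0, 3), round(m1, 3))
# The augmented estimate is pulled toward zero, and the wider scatter
# of the augmented sample inflates the uncertainty around it.
```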

Most importantly, we find that the 95% confidence interval for our estimated mean effect size crosses zero. That is, while the mean effect size is slightly negative (suggesting that biodiversity is protective against disease risk), we can’t confidently say that it is actually different from zero. Essentially, our large sample suggests that there is no simple relationship between disease risk and biodiversity.

On Ecological Mechanisms

One of the main conclusions of our paper is that we need to move beyond simple correlations between species richness and disease risk and focus instead on ecological mechanisms. I have no doubt that there are specific cases where the negative correlation between species richness and disease risk is real (note that our title says we think this link is idiosyncratic). However, I suspect that where we see a significant negative correlation, what is really happening is that some specific ecological mechanism is being aliased by species richness. For example, a forest fragment with a more intact fauna is probably more likely to contain predators, and these predators may be keeping the population of efficient reservoir species in check.

I don’t think that this is an especially controversial idea. In fact, some of the biggest advocates for the dilution effect hypothesis have done some seminal work advancing our understanding of the ecological mechanisms underlying biodiversity-disease risk relationships. Ostfeld and Holt (2004) note the importance of predators of rodents for regulating disease. They also make the very important point that not all predators are created equal when it comes to the suppression of disease. A hallmark of simple models of predation is the cycling of abundances of predators and prey. A specialist predator that induces boom-bust cycles in a disease reservoir is probably not optimal for infection control. Indeed, it may exacerbate disease risk if, for example, rodents become more aggressive and are more frequently infected in agonistic encounters with conspecifics during steep growth phases of their population cycle. This phenomenon has been cited in the risk of zoonotic transmission of Sin Nombre Virus in the American Southwest.

I have a lot more to write on this, so, in the interest of time, I will end this post now but with the expectation that I will write more in the near future!

 

New Publication, Emerging infectious diseases: the role of social sciences

This past week, The Lancet published a brief commentary I wrote with a group of anthropologist-collaborators. The piece, written with Craig Janes, Kitty Corbett, and Jim Trostle, arose from a workshop I attended in lovely Buenos Aires back in June of 2011. This was a pretty remarkable meeting that was orchestrated by Josh Rosenthal, acting director of the Division of International Training and Research at the Fogarty International Center at NIH, and hosted in grand fashion by Ricardo Gürtler of the University of Buenos Aires.

Our commentary is on a series of papers on zoonoses, a seemingly unlikely topic about which a collection of anthropologists might have opinions. However, as we note in our paper, social science is essential for understanding emerging zoonoses. First, human social behavior is an essential ingredient in R_0, the basic reproduction number of an infection. (The paper uses the term “basic reproductive rate”; despite my changing “rate” to “number” several times, it was changed back somewhere in production.) Second, we suggest that social scientists who participate in primary field data collection (e.g., anthropologists, geographers, sociologists) are in a strong position to understand the complex causal circumstances surrounding novel zoonotic disease spill-overs.
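To make the first point concrete: in the simplest formulation, the basic reproduction number is the product of a contact rate, a per-contact transmission probability, and a duration of infectiousness, and the contact rate is squarely a social quantity. A minimal sketch with hypothetical numbers:

```python
# Why social behavior sits inside R_0: in the simplest formulation,
# R_0 = c * p * d, where c is the contact rate (a social quantity),
# p the per-contact transmission probability, and d the duration of
# infectiousness. All numbers below are hypothetical.

def basic_reproduction_number(contact_rate, p_transmission, duration):
    return contact_rate * p_transmission * duration

r0_baseline = basic_reproduction_number(10, 0.03, 7)    # ~2.1
r0_reduced = basic_reproduction_number(4, 0.03, 7)      # ~0.84

print(r0_baseline > 1)  # True: epidemic can spread
print(r0_reduced > 1)   # False: below threshold
```

Nothing biological changed between the two scenarios; only the socially determined contact rate did, which is the point of the commentary.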

We note that there are some challenges to integrating the social sciences effectively into research on emerging infectious disease. Part of this is simply translational. Social scientists, natural scientists, and medical practitioners need to be able to speak to each other, and this kind of transdisciplinary communication takes practice. I’m not at all certain what it takes to make researchers from different traditions mutually comprehensible, but I know that it’s more likely to happen if these people talk more. My hypothesis is that this is best done away from anyone’s office, in the presence of food and drink. Tentative support for this hypothesis is provided by the wide-ranging and fun conversations we had over lomo y malbec. These conversations have so far yielded at least one paper and laid the foundations for a larger review I am currently writing. I know that various permutations of the people in Buenos Aires for this meeting are still talking and working together, so who knows what may eventually come of it?

(Text Processing) Paradigms Lost

Tom Scocca has written a brilliant essay in Slate today on the absurdities of Microsoft Word being the standard text-processing tool in the age of digital publishing. I struggle to get students doing statistical and demographic analysis in R to stop using Word because of all the unwanted junk it brings to even the most trivial text-processing task. Using the word2cleanhtml website, Scocca shows how a two-word text chunk written in Word contains the equivalent of eight pages of unnecessary hidden text!

I encounter all the nonsense associated with the default “typographical flourishes” that Scocca discusses in my role as associate editor of a couple of journals and a regular reviewer for NSF. Both of these roles make extensive use of web-based platforms for managing workflows associated with writing-intensive tasks (ScholarOne for editing and Fastlane for NSF), and both choke on the typographical annoyances Scocca enumerates (“smart” quotes, automatic em-dashes, etc.). When you do an NSF panel, you receive a briefing explaining that if you are going to write your panel summaries in Word, you need to turn off smart quotes and avoid other things that will lead to nonsense in the plain-text formatted fields of Fastlane. Of course, no one does this.

Don’t get me started on track changes…

I do the great majority of my own writing in a plain-text editor. My personal favorite is Aquamacs, a Mac-native variation on GNU Emacs. Emacs is definitely not for everyone, but there are lots of other possibilities. Scocca writes that he has turned to TextEdit, another Mac-native application, but there are plenty of other options that run on different systems. Here is a list of possibilities.

It will be interesting to see how online collaborative tools such as Google Docs change the way people do text processing. I find that more of my students do their work in Google Docs. It’s certainly not a majority yet, but the fraction is growing rapidly each year. As Scocca notes, Google Docs provides a much saner alternative to track changes, among other things.

Microsoft clearly needs to get serious and do a bit of innovation here if they want to stay in this particular game. I, for one, will not miss MS Word if it should go the way of WordStar.

On Newspaper Front Pages

Expert wrangler of predicaments Phillip Mendonça-Vieira has put together a very cool time-lapse movie from about 12,000 screenshots of the front page of nytimes.com. The movie is interesting to watch in a Koyaanisqatsi kind of way, but what I find most poignant is the commentary that accompanies the movie. Mendonça-Vieira writes,

Having worked with and developed on a number of content management systems I can tell you that as a rule of thumb no one is storing their frontpage layout data. It’s all gone, and once newspapers shutter their physical distribution operations I get this feeling that we’re no longer going to have a comprehensive archive of how our news-sources of note looked on a daily basis. Archive.org comes close, but there are too many gaps to my liking.

This, in my humble opinion, is a tragedy because in many ways our frontpages are summaries of our perspectives and our preconceptions. They store what we thought was important, in a way that is easy and quick to parse and extremely valuable for any future generations wishing to study our time period.

This really resonated with me.  Some time back, we wrote a paper that garnered quite a lot of media coverage. Indeed, we even got the ‘front page’ of the nytimes.com, if only fleetingly. I am very glad that I had the presence of mind to save that screen shot as a pdf so I would be able to preserve this 15 minutes of fame for posterity. If they had been available, I would have bought lots of paper copies.  However, what I am left with is this:

NYTimes_Front_Page

This really is a shame and clearly represents a serious challenge for the historians of tomorrow and the archivists of today.

Measuring Epidemiological Contacts in Schools

I am happy to report that our paper describing the measurement of casual contacts within an American high school is finally out in the early edition of PNAS. Stanford’s great social science reporter, Adam Gorlick, has written a very nice overview of our paper for the Stanford Report (also here in the LA Times and here on Medical News Today). The lead author, and general force of nature behind this paper, is Marcel Salathé, who until recently was a post-doc here at Stanford in Marc Feldman‘s lab. This summer, Marcel moved to the Center for Infectious Disease Dynamics at Penn State, a truly remarkable place and now all the better for having Marcel. From the Penn State end, there is a nice video describing our results as well as a brief note on Marcel’s blog. This paper has not been picked up quite like our paper on plague dynamics this summer, probably because measuring casual contacts in an American high school generally does not involve carnivorous mice.

With generous NSF funding, we were able to buy a lot of wireless sensor motes — enough to outfit every student, teacher, and staff member at a largish American high school so that we could record all of their close contacts in a single, typical day. By “close contact,” we mean any more-or-less face-to-face interaction within a radius of three meters. As Marcel was putting together this project, we were (once again) exceptionally lucky to find ourselves at Stanford along with one of the world authorities on wireless sensor technology, Phil Levis, of Stanford’s Computer Science department. Phil and his students, Maria and Jung Woo Lee, made this work come together in ways that I can’t even begin to fathom.

This actually leads me to a brief diversion to reflect on the nature of collaboration. As with our plague paper or SIV mortality paper, this paper is one where collaboration between very different types of researchers (viz., biologists, computer scientists, anthropologists) is absolutely fundamental to the success of the work. In coming up for tenure — and generally living in an anthropology department — the question of what I might call the partible paternity of papers (PPP) comes up fairly regularly. “I see you have a paper with five co-authors; I guess that means you contributed 17% to this paper, no?” Well, no, actually. I call this the “additive fallacy of collaboration.” When a paper is truly collaborative, the contributions to the paper are not mutually exclusive and so do not simply sum. To use a familiar phrase, the whole is greater than the sum of the parts. Our current paper is an example of such a truly collaborative project. Without the contributions of all the collaborators, it’s not that the paper would be 17% less complete; it probably wouldn’t exist. I can’t speak particularly fluently to what Phil, Maria, and Jung Woo did other than by saying, “wow” (thus our collaboration), but I can say that we couldn’t have done it without them.
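To give a flavor of the data-processing side (the details of our actual pipeline are Phil's domain, not mine), dyadic proximity records like the ones the motes produce can be aggregated into contact durations. The log format, the IDs, and the 20-second sampling interval below are assumptions for illustration, not our real data format.

```python
# Sketch of turning dyadic proximity "pings" into contact durations,
# assuming a hypothetical log of (person_a, person_b, timestamp)
# tuples where each ping represents one 20-second interval of
# close proximity. IDs and values are invented.
from collections import Counter

PING_SECONDS = 20  # assumed sampling interval

pings = [
    ("s01", "s02", 100), ("s01", "s02", 120), ("s01", "s02", 140),
    ("s02", "s03", 100),
    ("s01", "s03", 200), ("s01", "s03", 220),
]

durations = Counter()
for a, b, _ in pings:
    pair = tuple(sorted((a, b)))  # undirected contact
    durations[pair] += PING_SECONDS

print(durations)
# Counter({('s01', 's02'): 60, ('s01', 's03'): 40, ('s02', 's03'): 20})
```

Aggregates like these are what turn a day of sensor pings into the contact network whose structure drives epidemic models.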

I’ll talk more about our actual results later.  For now, you’ll either have to read the paper (which is open access), watch the video, or read the overview in the Stanford Report.

Most Cited Papers in Current Anthropology

A friend sent me a link the other day to the top 20 most cited articles in the journal Current Anthropology. Much to my delight, I found that a paper that I co-authored is the #7 all-time citation leader and a paper co-authored by my Stanford colleague Rebecca Bird is #19. As I walked over to Coupa café this morning to get coffee, I realized that I also made a small contribution to #1 on this list, Leslie Aiello and Peter Wheeler’s paper on the Expensive Tissue Hypothesis. At the time the manuscript was first circulated, I was a graduate student obsessed with brains, energetics, and scaling in human evolution. My advisor, Richard Wrangham, was asked to comment on the manuscript and he asked me if, given my obsessions, I might have something to say. Needless to say, I did. Having just read our comment, I think it stands pretty well (if I do say so): (1) basal metabolic rate (BMR) is not really a constraint, and (2) what are the implications for allometric scaling of different organs with respect to body mass? Most of the expensive organs scale isometrically (that is, with a scaling exponent of one), but the brain, of course, is a big exception. It scales with an exponent closer to 3/4. Because guts and brains scale differently with increasing body mass, perhaps larger brains could be maintained by dietary compensation?
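The scaling argument is easy to put in numbers. Under a power law with the rounded textbook exponents (isometric organs at 1, brain near 3/4), doubling body mass has very different consequences for different organs; the exponents here are used illustratively, not as precise empirical estimates.

```python
# Allometric scaling in numbers: an organ scaling isometrically
# (exponent 1) doubles when body mass doubles, while the brain,
# scaling with an exponent near 3/4, increases by only ~68%.

def organ_scaling_factor(body_mass_ratio, exponent):
    """Relative organ-mass change for a given change in body mass,
    under the power law organ_mass proportional to body_mass**exponent."""
    return body_mass_ratio ** exponent

print(organ_scaling_factor(2, 1.0))             # gut (isometric): 2.0
print(round(organ_scaling_factor(2, 0.75), 2))  # brain: 1.68
```

The widening gap between the two curves as body mass grows is what creates room for the dietary-compensation question.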

My colleague Herman Pontzer has some very interesting things to say about energetics and constraints and I’m really looking forward to some forthcoming work of his on this topic.  In a paper in PNAS, he recently showed that, contrary to the expectations of a naïve trade-off model, mammals with larger home ranges actually have greater lifetime fertility and greater total offspring mass.  We have a lot to learn about trade-offs, both physiological and economic, and their role in shaping human behavior and life histories.

On Journal Impact Factors

How do we evaluate the quality of published work? This has become an issue for me recently for one general and two more specific reasons. The general reason is that as one approaches one’s tenure decision, one tends to think about the impact of one’s oeuvre. The specific reasons are, first, I have a paper that I know has been read (and used) by a substantial number of people but was published in a journal (The Journal of Statistical Software) that is not indexed by Thomson Scientific, the keepers of the impact factor. Will this hurt me or any of the other people who write useful and important software (and perform all the research entailed in creating such a product) when I am evaluated on the quality of my work? The second reason this question has taken on relevance for me is that I am an Associate Editor of PLoS ONE, another journal that is not indexed by Thomson. One of my duties as an AE is to encourage people to submit high-quality papers to PLoS ONE. This can be tricky when people live and die by a journal’s impact factor.

The thing that irks me about Thomson’s impact factors is how opaque they are. Thomson doesn’t have to answer to anyone, so they are free to do whatever they want (as long as people continue to consume their products). Why do some journals get listed and others don’t? What constitutes a “substantive paper” (the denominator for the impact factor calculation)? What might the possible confounds be? What about biases? We actually know quite a bit about these last two. We know very little about the first two.

Moyses Szklo has a nice brief editorial in the journal Epidemiology, describing a paper in that same journal by Miguel Hernán criticizing the use of impact factors in epidemiology. The points clearly apply to science more generally. Three key issues affecting a journal’s impact factor listed by Szklo are: (1) the frequency of self-citation, (2) the proportion of a journal’s articles that are reviews (review papers get cited a lot), and (3) the size of the field being served by the journal. Hernán’s paper is absolutely marvelous. He notes that the bibliographic impact factor (BIF) is flawed — as a statistical measure, not by the manipulations described by Szklo — for three reasons: (1) a bad choice of denominator (total number of papers published), (2) the need to adjust for variables that are known to affect the measure, and (3) the questionability of the mean as a summary measure for highly skewed distributions (as we know BIFs have). Hernán makes his case by presenting a parallel case of a fictional epidemiological study. To anyone trained in epidemiological methods, this case is clearly flawed. It is exactly analogous to the way that Thomson calculates BIFs, yet we continue to use them. The journal Epidemiology also published a number of interesting responses to Hernán’s paper criticizing the use of BIFs (Rich Rothenberg, social network epidemiologist extraordinaire, has a nice counterpoint essay to these). The irony is that on the Epidemiology front page, they advertise the journal by touting its impact factor!
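Hernán's third point, the mean as a summary of a highly skewed distribution, is easy to demonstrate. The citation counts below are invented for illustration.

```python
# Citation counts are heavily right-skewed, so a journal's mean
# citations per paper (the impact factor's logic) can be driven by a
# handful of blockbusters. Invented counts for illustration.
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 250]  # one blockbuster paper

print(mean(citations))    # 26.6 (flattered by a single paper)
print(median(citations))  # 2.0 (what a typical paper experiences)
```

A journal-level mean of 26.6 tells you almost nothing about the citation fate of a typical paper in that journal, which is exactly the argument for paper-level metrics.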

The rub, of course, is that formulating a less flawed metric of intellectual impact is clearly a very demanding task. Michael Jensen, of the National Academies, has written The New Metrics of Scholarly Authority. One of the key concepts is devising a metric that measures quality at the level of the paper rather than the level of the journal. We’ve all seen fundamentally important papers that, for whatever reason, get published in obscure journals. Similarly, we regularly see the crap that comes out in high-prestige journals like Science, Nature, and PNAS every week! Pete Binfield, the managing editor of PLoS ONE, notes that Jensen’s ideas are very difficult to implement. Pete is leading the way for PLoS to think about alternative metrics like the number of downloads, the number of ping-backs from relevant (uh-oh, more subjectivity!) blogs, the number of bookmarks on social bookmark pages, etc. Another way to handle Thomson’s monopoly is to use alternative metrics such as those created by Scopus or Google Scholar. This last suggestion, while worth pursuing in the spirit of competition, is still not entirely satisfying, because to whom in science do these organizations have to answer? I am particularly leery of Scopus because it is run by Elsevier, a big for-profit publishing house that also clearly has its own agenda. PubMed is, at least, public and for the public benefit. Of course, they don’t index all journals either — not too many anthropology journals indexed there!

Björn Brembs, another PLoS ONE AE, makes the very reasonable suggestion that an impact factor should, at the very least, be a multivariate measure (in accordance with the criticism of lack-of-adjustment for confounders in Hernán’s essay).  Björn, in another blog posting, cites a paper published last year in PLoS ONE that I have not yet read, but clearly need to.  This paper shows that BIF inconsistently ranks journals in terms of impact (largely because the mean is such a poor measure for citation distributions) and proposes a more consistent measure.  I need to carve some time out of my schedule to read this one carefully.

Always a Bridesmaid, Never a Bride

Well, it’s happened again. My work has been written up in Science but I am not mentioned. I’m actually not that concerned this time — we’re going to submit the paper for publication soon. I’ve been telling myself (and other people) that this thing we’ve been working on (all the while being very cryptic about what this thing exactly is) is important. Every once in a while, I wonder if I’ve just been fooling myself. The fact that this work was written up in Science the day after the paper was presented at the Conference on Retroviruses and Opportunistic Infections in Montreal suggests to me that it is, indeed, important.

Further Adventures in Publishing

I finally received the pdf version of my recently published paper with a 2006 publication date. My grad student, Brodie Ferguson, and I used demographic data from the Colombian censuses of 1973, 1985, 1993, and 2002 to calculate the magnitude of the marriage squeeze felt by women in Colombia. The protracted civil conflict in Colombia means that there has been a burden of excess young male mortality in that country for at least 30 years (the measurement of which is the subject of a paper soon to be submitted). This excess male mortality means that there are far more women entering the marriage market than there are men, putting the squeeze on women (i.e., making it more difficult for them to marry). Our results show that in the most violent Colombian departments at the height of the violence (1993), the marital sex ratio was as low as 0.67. This means that for every 100 men entering the marriage market, there were 150 women. This is a truly stunning number. We discuss some of the potential societal consequences of these incredibly unbalanced sex ratios. Two very important phenomena that we think are linked to these extraordinary sex ratios are: (1) the high rates of consensual unions (i.e., non-married couples “living together”) in Colombia and (2) the pattern of female-biased rural-urban migration.
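The arithmetic behind that headline number, for anyone who wants to check it:

```python
# A marital sex ratio of 0.67 (men entering the marriage market per
# woman) implies roughly 150 women for every 100 men.

def women_per_100_men(marital_sex_ratio):
    return 100 / marital_sex_ratio

print(round(women_per_100_men(0.67)))    # 149, i.e. about 150
print(round(women_per_100_men(2 / 3)))   # 150 exactly at a ratio of 2/3
```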

The citation to the paper (even though it came out in 2008) is:

Jones, J. H., and B. D. Ferguson. 2006. The Marriage Squeeze in Colombia, 1973-2005: The Role of Excess Male Death. Social Biology. 53 (3-4):140-151.