The RealClimate blog has a very astute entry on the controversy surrounding the recent report in the prestigious journal Science that bacteria living in the arsenic-rich waters of Mono Lake in California can substitute arsenic for phosphorus in their DNA. If true, this would be a major finding because it expands the range of environments in which we could conceivably find extraterrestrial life. In effect, this result would suggest a wider range of building blocks for life. Pretty heavy stuff. Now, I am way out of my depth on this topic, but it sounds like the paper published in Science suffers from some fairly serious problems. Some of the problems noted by experts in the field have been assembled by Carl Zimmer on his blog. Carl also provides a pithy treatment of the controversy in an article at Slate.com. John Roach has a similarly excellent review of the controversy, including what we can learn about science from it, on his Cosmic Log blog.
Regardless of the scientific merits of this work, this episode is actually an instructive example of the way that science works. As the RealClimate folks write,
The arseno-DNA episode has displayed this process in full public view. If anything, this incident has demonstrated the credibility of scientists, and should promote public confidence in the scientific establishment.
The post then goes on to list three important lessons we can draw from this whole incident:
- "Major funding agencies willingly back studies challenging scientific consensus." It helps if the challenge to scientific consensus is motivated by carefully reasoned theoretical challenges or, even better, data that challenge the consensus. Some yahoo saying that evolution is "just a theory" or that climate change isn't real because it was really cold last winter isn't enough. In the case of arseno-DNA, as Carl Zimmer notes, the National Academy of Sciences published a report in 2007 that suggested the theoretical possibility of arsenic-based biology. Carl also notes that some of the authors of this report are highly critical of the Science paper as well. The report challenged the orthodoxy that phosphate was a necessary building block of DNA, and the report's author's later called out NASA (the major funding source for this kind of extreme biology) for publishing sloppy science. Lots of orthodoxy being challenged here...
- "Most everyone would be thrilled to overturn the consensus. Doing so successfully can be a career-making result. Journals such as Science and Nature are more than willing to publish results that overturn scientific consensus, even if data are preliminary – and funding agencies are willing to promote these results." Individual scientists have enormous individual and institutional incentives to overturn orthodoxies if it is within their power. You become a star when you pull this feat off. And you better believe that every funding agency out there would like to take credit for funding the critical research that helped overturn a fundamental scientific paradigm.
- "Scientists offer opinions based on their scientific knowledge and a critical interpretation of data. Scientists willingly critique what they think might be flawed or unsubstantiated science, because their credibility – not their funding – is on the line." As a scientist, you have to do this if you are going to be taken seriously by your peers -- you know, the ones who do all that peer review that climate deniers, e.g., seem to get their collective panties in a wad about?
The RealClimate piece summarizes by noting:
This is the key lesson to take from this incident, and it applies to all scientific disciplines: peer-review continues after publication. Challenges to consensus are seriously entertained – and are accepted when supported by rigorous data. Poorly substantiated studies may inspire further study, but will be scientifically criticized without concern for funding opportunities. Scientists are not "afraid to lose their grant money".
Read the RealClimate post to get the full story. Obviously, these authors (who do excellent science and amazing public education work, a rare combination) are interested in what this controversy has to say about accusations of bias in climate science -- check out the RealClimate archives for some back-story on this. However, as they note in the quote above, the post is much more broadly applicable. Science is not a monolithic body of information; it is a process, a system designed to produce positive (as opposed to normative) statements about the world around us. When it works correctly, science is indifferent to politics or the personal motivations of individual scientists because results get replicated. Everything about a scientific paper is designed to allow other researchers to replicate the results that are presented in that paper. If other researchers can't replicate some group's findings, those findings become suspect (and increasingly so as more attempts to replicate fail).
So what does this mean for Anthropology as a science? You may remember that there has been some at times shrill "discussion" (as well as some genuine intellectual discussion) about the place for science in Anthropology and the American Anthropological Association in particular. For me, replicability is a sine qua non of science. The nature of much anthropological research, particularly research in cultural anthropology, makes the question of replication challenging. When you observe some group of people behaving in a particular way in a particular place at a particular time, who is to say otherwise? I don't claim to have easy answers here, but there are a few things we can do to ensure the quality of our science.
First, we need to have scientific theories that are sufficiently robust that they can generate testable predictions that transcend the particularities of time and place. Results generated in one population/place/time can then be challenged by testing in other populations/places/times. Of course, it is of the utmost importance that we try to understand how the differences in population and place and time will change the results, but this is what our research is really about, right? When we control for these differences, do we still see the expected results?
Second, we need to be scrupulous in our documentation of our results and the methods we employ to generate them. You know, like science? It's never easy to read someone else's lab notebook, but we need to be able to do this in anthropology, at least in principle. Going back to the raw data as they are recorded in a lab notebook or its equivalent is probably the primary means through which scientific fraud is discovered. Of course, there are additional benefits to having scrupulously kept field notes as well. They serve as a rich foundation for future research by the investigator, for instance.
Third, we need to be willing to share our data. This is expected in the natural sciences (in fact, it is a condition for publication in journals like Science and Nature) and it should be in Anthropology as well.
I think that the points of the RealClimate post all apply to anthropology as well. Surrounding the latest brouhaha on science in anthropology, one hears a lot of grousing about various cartels (e.g., the AAA Executive Board, the editorial boards of various journals, etc.) that keep anthropologists of different stripes (yes, it happens on both sides) from receiving grants or getting published or invited to serve on various boards, etc. Speaking from my experience as both panelist and applicant, I can confidently say that the National Science Foundation's Cultural Anthropology Program funds good cultural anthropology of a variety of different approaches (there are also other BCS programs that entertain, and sometimes fund, applications from anthropologists), and the panel will happily fund orthodoxy-busting proposals if they are sufficiently meritorious. Is the editorial position of American Ethnologist not in line with your type of research? If you've done good science, there are lots of general science journals that will gladly take interesting and important anthropology papers (and, might I add, have much higher impact factors). I co-authored a paper with Rebecca and Doug Bird that appeared in PNAS not too long ago. Steve Lansing has also had a couple of nice papers in PNAS, as have Richard McElreath, Herman Pontzer, and a bunch of other anthropologists! Mike Gurven at UCSB has had some luck getting papers into Proceedings of the Royal Society B. Mhairi Gibson and Ruth Mace have papers in Biology Letters and PLoS Medicine. Rebecca Sear has various papers in Proceedings of the Royal Society B. Monique Borgerhoff Mulder and a boat-load of other anthropologists (and at least one economist) have a paper in Science. Ruth Mace has papers in most of these journals as well as at least one in Science. Rob Boyd, Richard McElreath, Joe Henrich, and I even have papers about human social behavior, culture, etc. in theoretical biology journals such as Theoretical Population Biology and the Journal of Theoretical Biology. There's lots more. As with my previous post, this is a total convenience sample of work with which I am already familiar. The point is that there are outlets for good scientific anthropology out there even if people like me are unlikely to publish in journals like PoLAR.
So, I'm sanguine about the process of science and the continuing ability of anthropologists to pursue science. My winter break is drawing to a close, and I'm going to try to continue some of this myself!
5 thoughts on “That's How Science Works”
Is there a danger to the replicability criterion with the increasing trend of putting whole methods sections in supplementary material or, worse, summarizing them with little depth?
Chris, I think that this trend is an utter abomination. I really don't see how you can understand a scientific paper without having the methods as a central part of the exposition. It completely messes up my ability to read papers too. I realized the other day that I am conditioned to reading papers by flipping through them -- I actually have a hard time if I have to read from start to finish. This is perhaps a weakness of my dyslexic brain, but I do think there is real danger to the quality of science here.
I certainly think there are real challenges to sharing anthropological data as well. There are moves afoot to create a Genbank-like data repository for anthropological data but there are enormous problems with implementation. How do you accommodate all the different modes of data collected in anthropological research? Would anyone actually use such a database? It's sad that we need to ask this latter question, but it's true.