
Risk-Aversion and Finishing One's Dissertation

It's that time of the year again, it seems, when I have lots of students writing proposals to submit to NSF to fund their graduate education or dissertation research.  This always sets me to thinking about the practice of science and how one goes about being a successful scientist. I've written about "productive stupidity" before, and I still think that is very important. Before I had a blog, I composed a series of notes on how to write a successful NSF Doctoral Dissertation Improvement Grant, after seeing the same mistakes over and over again while sitting on the Cultural Anthropology panel.

This year, I've found myself thinking a lot about what Craig Loehle dubbed "the Medawar Zone." This is a nod to the great British scientist, Sir Peter Medawar, whose book, The Art of the Soluble: Creativity and Originality in Science, argued that the best kind of scientific problems are those that can be solved.  In his classic (1990) paper, Loehle argues that "there is a general parabolic relationship between the difficulty of a problem and its likely payoff." Re-reading this paper got me to thinking.

Loehle defines the Medawar Zone in his figure 1; I have reproduced a sketch of it here.

[Figure: sketch of the Medawar Zone]

Now, what occurred to me on this most recent reading of this paper is that for a net payoff curve to look like this, the benefits with increased difficulty of the problem are almost certainly concave.  That is, they show diminishing marginal returns to increased difficulty.  Hard to say what the cost curve with difficulty would be – linear? convex? Either way, there is an intermediate maximum (akin to Gadgil and Bossert's analysis of intermediate levels of reproductive effort) and the best plan is to pick a problem of intermediate difficulty because that is where the scientific benefits, net of the costs, are maximized.
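To make that shape concrete, here is a minimal numerical sketch in Python. The square-root benefit and the linear cost are stand-ins of my own choosing, not Loehle's functions; the point is only that any concave benefit, net of a linear (or convex) cost, peaks at an intermediate difficulty.

    import numpy as np

    # Toy version of the difficulty-payoff tradeoff. The sqrt benefit and the
    # linear cost are illustrative assumptions, not Loehle's actual functions.
    difficulty = np.linspace(0.01, 10, 1000)
    benefit = np.sqrt(difficulty)     # concave: diminishing returns to difficulty
    cost = 0.25 * difficulty          # simple linear cost of tackling harder problems
    net_payoff = benefit - cost

    best = difficulty[np.argmax(net_payoff)]
    print(f"net payoff peaks at intermediate difficulty x ~ {best:.2f}")
    # The peak sits strictly between the easiest and hardest problems:
    # a numerical stand-in for the Medawar Zone.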

Suppose that a dissertation is a risky endeavor.  This is not hard for me to suppose since I know many people from grad school days who had at least one failed dissertation project.  Sometimes this led to choosing another, typically less ambitious project.  Sometimes it led to an exit from grad school, sans Ph.D.  Stanford (like Harvard now, but not when I was a student) funds its Ph.D. students for effectively the entirety of their Ph.D.  This is a great thing for students because nothing interferes with your ability to think and be intellectually productive more than worrying about how you're going to pay rent.  The downside of this generous funding is that students do not have much time to come up with an interesting dissertation project, write grants, go to the field, collect data, and write up before their funding runs out. So, writing a dissertation is risky.  There is always a chance that if you pick too hard a problem, you might not finish in time and your funding will run out. Well, it just so happens that a concave payoff (utility) function combined with a risk of failure is pretty much the textbook setup for a risk-averse decision-maker.

Say there is an average degree of difficulty in a field.  A student can choose to work on a topic that is more challenging than the average, but there is a very real chance that such a project will fail, and in order to finish the Ph.D., she will have to quickly complete work on a problem that is easier than the average.  Because the payoff curve is concave in difficulty, the amount you lose relative to the mean if you fail is much greater than the amount you gain relative to the mean if you succeed.  That is, your downside cost is much greater than your upside benefit.

[Figure: risk aversion with a concave payoff curve]

In the figure, note that d1 >> d2.  Here, I have labeled the ordinate as w, which is the population genetics convention for fitness (i.e., the payoff).  The x-bar is the mean difficulty, while x2 and x1 are the high- and low-difficulty projects, respectively.
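A small numerical sketch of the same asymmetry, assuming a logarithmic payoff curve and arbitrary values for x1, x-bar, and x2 (none of these numbers come from Loehle; they are chosen only to mirror the figure):

    import math

    # Illustrative concave payoff w(x); the log form and the particular
    # difficulty values are assumptions chosen to mirror the figure.
    def w(x):
        return math.log(x)

    x1, x_bar, x2 = 1.0, 5.0, 9.0   # easy fallback, mean difficulty, hard project

    d1 = w(x_bar) - w(x1)           # payoff lost, relative to the mean, if you fail
    d2 = w(x2) - w(x_bar)           # payoff gained, relative to the mean, if you succeed

    print(f"d1 = {d1:.2f}, d2 = {d2:.2f}")  # d1 = 1.61, d2 = 0.59; d1 > d2 whenever w is concave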

The way that economists typically think about risk-aversion is that a risk-averse agent is one who is willing to pay a premium for certainty.  This certainty premium is depicted by the dotted line stretching back horizontally from the vertical dashed line at x = x-bar to the utility curve.  The certain payoff the agent is willing to accept in place of the uncertain mean (the certainty equivalent) is where this dotted line hits the utility curve. Being at this point on the utility curve (where you have paid the certainty premium) probably puts you at the lower end of the Medawar Zone envelope, but hopefully, you're still in it.
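Continuing the same toy setup, the certainty equivalent and the premium can be computed directly. The 50/50 gamble between the easy fallback and the hard project, and the logarithmic payoff, are again my illustrative assumptions rather than anything in the figure:

    import math

    # Certainty equivalent and risk premium for an assumed 50/50 gamble between
    # the easy fallback (x1) and the hard project (x2), with a log payoff.
    def w(x):
        return math.log(x)

    def w_inv(u):
        return math.exp(u)

    x1, x2, p = 1.0, 9.0, 0.5
    mean_x = p * x2 + (1 - p) * x1                 # x-bar = 5.0
    expected_payoff = p * w(x2) + (1 - p) * w(x1)
    certainty_equivalent = w_inv(expected_payoff)  # sure difficulty worth as much as the gamble
    premium = mean_x - certainty_equivalent        # what the agent gives up for certainty

    print(f"x-bar = {mean_x:.1f}, certainty equivalent = {certainty_equivalent:.1f}, premium = {premium:.1f}")
    # Prints x-bar = 5.0, certainty equivalent = 3.0, premium = 2.0: the risk-averse
    # agent settles for a noticeably easier sure project than the average gamble.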

I think that this very standard analysis actually provides the graduate student with pretty good advice. Pick a project you can do and maybe be a bit conservative.  The Ph.D. isn't a career – it's a launching point for a career. The best dissertation, after all, is a done dissertation.  While I think this is sensible advice for just about anyone working on a Ph.D., the thought of science progressing in such a conservative manner frankly gives me chills.  Talk about a recipe for normal science!  It seems what we need, institutionally, is a period in which conservatism is not the best option. This may just be the post-doc period.  For me, my time at the University of Washington (CSSS and CSDE) was a period when I had unfettered freedom to explore methods relevant to what I was hired to do.  I learned more in two years than in – I'd rather not say how many – years of graduate school. The very prestigious post-doctoral programs such as the Miller Fellowships at Berkeley or the Society of Fellows at Harvard or Michigan seem specifically designed to provide an environment where the concavity of the difficulty-payoff curve is reversed (favoring gambles on more difficult projects).

There is, unfortunately, a folklore that has diffused to me through graduate student networks that says that anthropologists need to get a faculty position straight out of their Ph.D. or they will never succeed professionally.  This is just the sort of received wisdom that makes my skin crawl and, I fear, is far too common in our field.  If our hurried-through Ph.D.s can't take the time to take risks, when can we ever expect them to do great work and solve truly difficult problems?