Mandatory confessionals upon receipt of tenure.

Something just made my eyes fall out of my head (which could also just be a side effect of all the pseudoephedrine and zinc currently oozing out my gills).

It is this piece, which begins:

In August, it will be 10 years since I received my PhD and I will no longer be considered an early career researcher. Ear hairs. Prostate screening exams. Black socks pulled up to my knees. Scary things are on the horizon…

Before I cross over into old age, letting the younger generation take over the creative reins, I need to do a retrospective. Let me share the lessons learned from 5 research publications that don’t sit well with me. This is my confession.

Why isn’t this mandatory?

I’ll say it again.


This guy, who has reached a point where he’s established, respected, and whatnot, decided to go back to his early publications to tell us what was wrong with them and what he’s learned. And I think that’s really powerful.

I’ve been thinking a lot lately about what science is going to look like when I’m where this guy is. Despite what some would have you believe, there are lots of people passionate and vocal enough to have drawn attention to things like the tyranny of top-tier journals, the rise of open access journals, and altmetrics. These revolutions in how we appraise a scientist’s worth are happening alongside the realizations that “alternative” careers are becoming the new norm for newly-minted PhDs, that the kinds of data collected and analyses required are changing, and that the same imbalanced power structures that are pervasive in every other corner of society are alive and well in science, too. Just to rattle off the first things that bubbled into consciousness here.

I’ve been thinking about how to grow into these changes, as individuals and as a field. I don’t think we’re going to abolish journals any time soon, for instance, nor do I think such an extreme measure is the answer. But given that they are the seat of power, the arbiters of reality, as it were, I think that they’re a good candidate for seeding change.

The above confessional reminded me of a formalized version of the ever-amusing Overly Honest Methods hashtag, which to me signaled a sea change in our attitudes towards scientists’ humanity. There are just too many of us toiling too-long hours, and the only thing that keeps our impostor syndrome at bay is to feel that we are not alone in the sorts of sins Overly Honest Methods lays bare. Hiding these sins, we’ve realized, makes us all crazy. It feels good to laugh about it. But this time… this time it’s serious.

While the end goal may be flawless transparency and honesty across the board, most of us are learning as we go. The hardest thing about being on the final frontier of the Known is that no one can tell you how to bring unknowns into the light. I read Carl Sagan’s The Demon-Haunted World: Science as a Candle in the Dark in high school, and I’m pretty sure that was the peak of my faith in the scientific method as an objective, dispassionate hypothesis testing machine. Carried out by flawed humans, the enterprise involves a lot of trial and error, and a lot of pride and prejudice too. They tell incoming cohorts of grad students to get comfortable with failure, and we hear it, but we don’t believe it, deep down. Not until it’s too late, anyway.

Even now, in my fifth year of a PhD, I look back at code I wrote in my first year and shudder to think what errors may lurk there. Even when I go back with a fine-tooth comb, there is no guarantee that I won’t miss something–and that’s just the known unknowns. What smells revolutionary, to me, is that Kashdan is writing not about that kind of mistake, but about the ones embedded in his very methodology. Things that reviewers didn’t call him out on. Things he realized were flaws in his designs, and learned from. That’s the kicker. He’s stepped up and thought of the children. One child in particular: the faceless and yet somehow bleary-eyed universal grad student trying to replicate some past finding, who would probably have been better off if they’d learned about these flaws sooner.

Not all of us are in a position to self-critique in this way. And that’s a problem. But until we are, I think what Kashdan’s done here is extraordinary. I think it should be mandatory upon receipt of tenure, if not one’s 10-year anniversary of receipt of PhD, as he has opted for here. Wherever the line between ‘too young, too vulnerable’ and ‘Fuck all y’all, I’ve got tenure’ is drawn, that’s where I’d like to challenge every investigator to pause, reflect, pay it forward, dispense some nuggets of wisdom. Seriously, even a blooper reel at the end of your talk could go a long way. On our way there, we can all do our best to render our scientific efforts transparent (one researcher and friend I particularly admire on this front, Kirstie Whitaker, does an exemplary job of that by linking to her GitHub repository here), but until we have a strong enough grasp of what we’re doing in the first place, it’s up to more established researchers to lead the charge.

Bravo, Dr. Kashdan. More like this, please.

This entry was posted in I didn't mean to write this it just happened, Meta-science & pedagogy.

3 Responses to Mandatory confessionals upon receipt of tenure.

  1. Babble speak. Meandering words tied together with meaningless adjectives. The early onset of premature senility. Preschoolers have more meaningful thoughts and more coherent thought processes.

    • I disagree completely. The individual words and adjectives, strung together, are perfectly clear, lucid, and meaningful to me. The post was thoughtful and logically constructed. Its point was bang-on, and well taken.

      You, sir, are a moron. Go troll somewhere else.

  2. Ian Kaplan says:

    In a similar vein, there is the issue of negative results. I’ve spent months of hard work on a research project that resulted in hundreds of lines of R code and a 40-page knitr paper (R + LaTeX). All showing that my approach would not work.
    We don’t really have a venue for presenting negative results, yet behind every positive result there are negative results (or, if not, the researcher is not ambitious enough). But I don’t know how many people are interested in talks and papers that show a failed result. So these results rarely get shared.
