Tuesday, 3 July 2012

More thoughts on Stapel, Smeesters, and scientific fraud in general

Whenever a new case of scientific fraud surfaces, the same question pops up: does publish or perish force scientists to lie about their results? The question is all the more relevant because many universities employ their academic staff (including me) under some form of tenure track. Here publish or perish translates into up or out: either you keep improving your teaching evaluations, publication list, Hirsch Index, project acquisition, and so forth, or you're out of a job. Needless to say, this gives quite an incentive to cook the books.

The first thing to realize here is that neither Stapel nor Smeesters is a good example of such a mechanism. Both had tenure, and Stapel had been making up data for virtually his entire career.

The second thing to realize, however, is that there are many forms of scientific misconduct, not all of which are outright fraud. Stapel is an extreme example of blatant fraud as he fabricated complete datasets. But there are more ways of behaving badly in science:
  • Skip observations that don't support your hypothesis. This is what Smeesters is being accused of.
  • Copy text or ideas without citing the source.
  • The mirror image of that: support a claim with a reference to a source that provides no such support.
  • Leave out details of the research method that would have put your results in a different light.
  • Run lots and lots of regressions on any combination of variables. You are bound to find a statistically significant relation somewhere. Present it as something you intended to investigate in the first place. (Be aware that "statistically significant at 5%" means: if there were no real relation, the chance of finding one at least this strong by random fluctuation alone would be 5%. So if you test 20 unrelated pairs of variables, you should expect about one of them to come out "significant" by sheer coincidence - see the first sketch after this list.)
  • Include the name of some big shot who hardly contributed to the paper but whose name will make it look important. The big shot gets yet another publication and you can bask in his glory.
  • When you do an anonymous peer review, tell the authors to cite some of your papers, especially the ones that would improve your Hirsch Index if they were cited once more (see the second sketch after this list).
  • When you do an anonymous peer review, reject the paper if it contains results similar to those in a paper you have just submitted to another journal yourself. After all, you want to be the first to present the idea!
  • Or even worse than that: reject the paper (or a proposal) and submit the idea yourself. (Admittedly, given the huge time lag in publications you wouldn't have a high chance of success.)
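To see how quickly chance alone produces "significant" results, here is a minimal Python sketch of the run-lots-of-regressions strategy (my own illustration, not taken from any of the studies discussed; all numbers are made up, and it assumes numpy and scipy are available). It regresses one pure-noise variable on another 1,000 times and counts how often the slope comes out significant at the 5% level:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_tests = 1000  # number of independent regressions
    n_obs = 50      # observations per regression

    false_positives = 0
    for _ in range(n_tests):
        # x and y are independent noise, so any "relation" is pure coincidence
        x = rng.normal(size=n_obs)
        y = rng.normal(size=n_obs)
        slope, intercept, r, p, stderr = stats.linregress(x, y)
        if p < 0.05:
            false_positives += 1

    print(f"{false_positives} of {n_tests} regressions 'significant' at 5% "
          f"({false_positives / n_tests:.1%})")

Roughly 50 of the 1,000 regressions will clear the 5% bar even though every dataset is noise. Report only those, and you have manufactured a finding.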
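And to make the Hirsch Index remark concrete: your h-index is the largest h such that h of your papers have at least h citations each. A second sketch (again illustrative, with hypothetical citation counts) shows why a reviewer cares about one extra citation - it is the papers sitting at exactly h citations that can push the index up:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each
        ranked = sorted(citations, reverse=True)
        return max((i for i, c in enumerate(ranked, 1) if c >= i), default=0)

    papers = [9, 8, 7, 6, 6, 5, 2]  # hypothetical citation counts
    print(h_index(papers))          # 5

    # One extra citation to the paper currently at 5 citations gives
    # six papers with at least 6 citations each, so h jumps to 6:
    papers[5] += 1
    print(h_index(papers))          # 6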
Note how difficult it is to identify bad intentions behind some of the behaviours on that list, and how surprisingly thin the line between good scientific practice and scientific misconduct can be:
  • You can have very good reasons to skip an observation (protest bids in contingent valuation surveys are one example). This is Smeesters's defence.
  • You may have always thought that author X said Y in article Z, when in fact you were confusing it with another article.
  • Nobody ever includes those details of the method in their papers, so why should you?
  • You're a PhD student and you don't want to let your professor down by not including him as an author - he is your supervisor, after all.
  • The paper you are reviewing would be incomplete without that reference, whether you wrote it or not.
It is easy to say that there are no such things as small sins and big sins: thou shalt not sin, period. But for most people it just doesn't work that way: they wouldn't mind exceeding the speed limit by 10 km/h but would object to exceeding it by 100 km/h. And exceeding the speed limit by 20 km/h may make you feel slightly worse about yourself, but when you are in a hurry it becomes easier to silence that guilty feeling.

So yes, I do believe that publish or perish and up or out increase the incidence of scientific misconduct, but not in the way we read about in the news. The cases that make the headlines are poor examples of such pressures: they are the sensational ones, the blatant fabrication of data by prestigious professors with big egos. The main damage is done in the everyday nitty-gritty of science, and most of it may never be detected. Does that make it less bad? No, it may actually be worse, because we don't see, let alone quantify, the damage.

So is tenure track bad? Well, to paraphrase Churchill, it is the worst system except for all the others. The alternative we used to have in the Netherlands, where you had to wait for the sitting professor to die or retire before you could become one, stifled scientific progress and chased a lot of talent out of the country. I believe the solution lies not in abandoning tenure track but in the way we publish our results - but I'll leave that for another post.
