On Wednesday, June 5th, I attended the Erasmus Graduate School of Social Sciences and the Humanities symposium “How to prevent sloppy science? Defining good conduct in science”. The symposium offered much promise, with keynotes by prof. Kees Schuyt and dr. Peter Verkoeijen, but ultimately did little to define what good conduct in science actually is or should be.
The first keynote, by prof. Kees Schuyt (chairman of the KNAW committee), gave a general introduction to sloppy science, differentiating it from outright fraud (e.g. the Stapel case) and focusing on scientific integrity: “doing the right thing when nobody is looking”. His main angle on preventing sloppy science was data management: good data management is necessary, although not sufficient, for good scientific conduct. I’m a proponent of data management, but what it should consist of remained vague. According to Schuyt, publishing raw data need not be required, as long as you explain how you cleaned up your data. That seems a rather strange proposition to me: if your data selection is sloppy, this form of data management will not catch it. Other proposed solutions, such as a scientific oath, seem like a waste of effort to me.
The second keynote, by dr. Peter Verkoeijen (Associate Professor of Cognitive Psychology, EUR), focused on the importance of replication studies in psychology and in science in general. His proposition was that replication studies enable the detection of fraud and false positives (i.e. incorrectly concluding that a hypothesis is true). His main source was the paper by Simmons, Nelson & Simonsohn (2011), which demonstrates how easily false positives can arise. The method section of a paper should therefore provide sufficient information to replicate the experiment, which, as I have experienced, is not always trivial when trying to keep a paper short and focused. Moreover, where can successful replications be published? Publishing replications thus appears largely limited to the researcher’s personal webpage.
Interestingly, both keynotes and the subsequent discussion focused on quantitative data. How should sloppy science be prevented in qualitative research, for which replication is often impossible? The PhD student sitting next to me, who wondered whether he should make the interviews for his research available, still had the same question at the end of the day. Likewise, I still wondered what good data management actually entails. The symposium was interesting, but it failed to address the individual research practices that matter most, especially in the context of a graduate school.