Can pressure to publish distort research results?
Support for this research came from the 'Objective science' project, which received funding from the Seventh Framework Programme to assess the level of bias in the natural and social sciences. The analysis of thousands of papers showed that researchers at a small number of elite institutions tend to report more 'positive' results for their experiments.

But are 'positive' results more interesting than 'negative' ones? Surely, a 'negative' result is not one where nothing has been found, but one where the evidence suggests that the hypothesis was not correct. This has as much scientific validity as a positive result, provided the experiment was soundly conducted. For example, the statement 'the data prove beyond doubt that anthropogenic global warming is real' is exactly as valid as 'the data prove beyond doubt that anthropogenic global warming is not real', provided that the data really do support the conclusion.

The condition of today's academics, however, is commonly described by the expression 'publish or perish'. Their careers are increasingly evaluated on the sheer number of papers listed in their CVs and the number of citations those papers receive. To secure funding and jobs, scientists therefore need to publish continuously, and papers are more likely to be accepted by journals, and later cited, on the strength of the results they report.

The 'Objective science' researchers asked scientists directly whether they had committed research misconduct. Two per cent of respondents admitted to having fabricated, falsified or 'cooked' data or results at least once - serious misconduct by any standard. The percentage was significantly higher when they were asked about the behaviour of colleagues. Given that the survey asked sensitive questions, this may well be a conservative estimate of the prevalence of scientific misconduct.

To further test the hypothesis that competitive academic environments increase the bias against 'negative' results, data from the National Science Foundation were analysed.
When the percentage of papers reporting 'positive' results was examined against institutional productivity and expenditure on R&D, a distinct trend was observed. Most ground-breaking scientific research is conducted within a small number of elite institutions. This need not indicate that the system is biased; rather, it suggests that talented individuals and resources are concentrated in places where they can achieve more than they would working alone. Furthermore, higher-quality journals are far more selective about what they publish, and many will only accept papers that advance their field - which, in practice, means positive results.