The Reproducibility Project: Cancer Biology was an 8-year effort to replicate experiments from high-impact papers published between 2010 and 2012. The project set out to replicate 193 experiments from 53 papers but encountered obstacles at the planning stage: protocols were often incompletely described, and only 2% of experiments fully reported their data. When replication was attempted, findings often conflicted with the original reports. “Some of this is due to how the work was designed, conducted, analysed and reported,” says Malcolm Macleod from the University of Edinburgh, the EQIPD project host.

The EU-supported EQIPD project collated existing recommendations for the conduct of animal research, to be implemented in a series of multicentre studies carried out across Europe. The team developed a ‘quality system’ that prompts labs to consider issues such as research design, alongside data traceability and security. Training materials were also developed to help researchers understand the underlying principles and how best to implement them. “We’ve shown that standardised studies reduce variation between labs and increase reproducibility. Our quality system has already been rolled out across several labs, with feedback suggesting we have found the right balance between effort required and value added,” adds Macleod.
Building the quality assurance system
The project’s systematic review of animal research guidelines enabled the team to establish basic principles for research team roles, quality culture, data integrity, research processes, continuous improvement and sustainability. To hone these, the team undertook several rounds of discussion, including with a stakeholder group of over 100 participants, while integrating feedback from the wider scientific community. Some discussions were informal, while others were facilitated by structured techniques such as the consensus-building Delphi method. After codifying the key quality criteria, experiments were conducted to determine whether this standardised approach did in fact increase reproducibility. The experiments chosen represented a range of experimental designs and purposes: an open field test of rodent locomotion and exploration; a sleep/wake EEG reflecting aspects of Alzheimer’s disease animal models; and the Irwin test of drug toxicity. These pilots were conducted in five research institutes and commercial labs across mainland Europe and the United Kingdom, revealing more about the requirements of these different settings and the system’s ability to meet those needs.

These external assessments not only validated performance but prompted researchers to question whether their strategies were sufficient to deliver high-quality research. The approach also helped identify blind spots, unnoticed problems and opportunities. “Our systematic review of Alzheimer’s disease modelling revealed hundreds of thousands of potentially relevant publications, so we developed a tool to automatically deduplicate these search results, now available to the entire review community,” notes Macleod.
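The kind of deduplication Macleod describes can be sketched in a few lines. The following is a hypothetical illustration, not the EQIPD team’s actual tool: it assumes search results arrive as records with `title` and `doi` fields, and treats two records as duplicates when they share a DOI or a normalised title.

```python
import re

def normalise(text):
    """Lowercase, strip punctuation and collapse whitespace for fuzzy title matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def deduplicate(records):
    """Keep the first record seen for each DOI or normalised title.

    A record is skipped if ANY of its keys (DOI or normalised title) has
    already been seen, so a DOI-less duplicate still matches on title.
    """
    seen, unique = set(), []
    for rec in records:
        keys = {k for k in (rec.get("doi"), normalise(rec.get("title", ""))) if k}
        if keys & seen:
            continue  # duplicate of an earlier record
        seen |= keys
        unique.append(rec)
    return unique

# Example: two search databases return the same paper with slightly
# different formatting; the second copy lacks a DOI but matches on title.
records = [
    {"title": "Amyloid pathology in a mouse model", "doi": "10.1000/xyz123"},
    {"title": "Amyloid Pathology in a Mouse Model.", "doi": None},
    {"title": "Tau propagation in vivo", "doi": "10.1000/abc456"},
]
print(len(deduplicate(records)))  # → 2
```

Real deduplication tools for systematic reviews also compare authors, year and journal, and tolerate minor spelling differences; exact key matching as above is only the simplest starting point.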
Consolidating efforts and expanding scope
Aside from the imperatives of scientific rigour, the life sciences are also vested with the practical responsibility of reducing suffering. “New drug treatments depend on preclinical research. If we can identify failed drug candidates earlier, we will hasten progress and reduce costs, benefiting patients and the economy,” remarks Macleod. A description of the EQIPD quality system is available in an eLife publication, and some organisations, such as Scientist.com, are already encouraging their research providers to use EQIPD’s assessment. The team have also set up a non-profit organisation, Guarantors of EQIPD (GoEQIPD), which aims to provide formal EQIPD lab certification. “The next challenge will be to apply the same approaches to in vitro life sciences research, to identify research problems and what can be done about them,” concludes Macleod.
EQIPD, research, verification, life sciences, drug, treatments, preclinical, experiments, data, replication