
Evaluation and development of measures to uncover and overcome bias due to non-publication of clinical trials

Final Report Summary - UNCOVER (Evaluation and development of measures to uncover and overcome bias due to non-publication of clinical trials)

Executive Summary:
Clinical trials are embedded in a highly differentiated system involving a wide variety of stakeholders who are guided in their decision-making by various societal rationalities (e.g. scientific, economic or political rationality). Due to the complex nature of the system and the differing strategic goals pursued by specific stakeholder groups and individuals, the dissemination process of research findings of clinical trials is prone to bias. In clinical research “publication bias” is the established term used for biases related to the selective dissemination of evidence.
The UNCOVER project is a direct contribution to overcoming publication bias related to non-publication of clinical studies that have been designed and executed as randomized controlled trials (RCTs). RCTs are currently the gold standard for assessing drug and device efficacy as they are designed to avoid or minimize both systematic and random errors in clinical studies. The UNCOVER project aims to identify and evaluate strategies and ways to overcome non-publication of clinical studies and to provide recommendations to change practice and support evidence-based medicine.
The issue of publication bias was addressed with quantitative, qualitative and participatory methods (including tools such as stakeholder maps, institutional analysis, a systematic review, interviews, scenario-building workshops, bibliometric analysis and software development) in an interdisciplinary approach.
All evidence acquired in the UNCOVER project indicates that a multi-intervention strategy is necessary to effectively reduce and eventually overcome publication bias due to non-publication of results of clinical trials. We recommend two complementary approaches together with a Catalytic Supplement: each of the three strands comprises a variety of individual measures. The Global Mandatory Approach focuses on a worldwide clinical trial registry, which contains all clinical trials with a unique identification number and summaries of results. This approach relies for its implementation on both hard law and soft law. The Individualized Voluntary Approach focuses on funding policy and journal policy. It is essentially a soft law approach. The Catalytic Supplement implies changes in the reward policy and an overall empowerment of healthcare professionals together with NGOs, patient organizations, education facilities, etc. The two approaches and the supplement are not to be seen as alternatives, but as complementary. Furthermore, we suggest a roadmap for the implementation of feasible interventions that considers the interdependencies of individual measures and the time frame. We believe that the process of responsible research and innovation (RRI) — which aims at stimulating a research and innovation (R&I) process that is ethically acceptable, sustainable and socially desirable — is a helpful framework to better align clinical research with the societal needs in conducting and publishing clinical trials.

Project Context and Objectives:
1.2.1 Project context
Clinical trials are embedded in a highly differentiated system involving a wide variety of stakeholders who are guided in their decision-making by various societal rationalities (e.g. scientific, economic or political rationality (1, 2)). Due to the complex nature of the system and the differing strategic goals pursued by specific stakeholder groups and individuals, the dissemination process of research findings of clinical trials is prone to bias. In clinical research “publication bias” is the established term used for biases related to the selective dissemination of evidence (3, 4). Publication bias “occurs when the publication of research results depends on the nature and direction of the results”, i.e. studies with significant or positive results are more likely to be published than those with non-significant or negative results (5). For the last three decades a multitude of other terms have been introduced and used to cover different aspects of bias. Depending on the source of the publication bias, i.e. the type of actor in the system and their individual interests, various types of publication bias exist (5, 6): e.g. non-publication (never or delayed), incomplete publication (outcome reporting or abstract bias), limited accessibility to publication (grey literature, language or database bias), or other biased dissemination (citation, duplicate or media attention bias).
Publication bias in its various shapes affects the overall knowledge base. It represents a major problem in the assessment of health care interventions as it threatens the validity of published research (7) and can reduce the significance of systematic reviews of drugs, medical devices, or medical procedures, a cornerstone of evidence-based medicine. Consequently, publication bias may have adverse consequences for public health due to ineffective or dangerous treatments. It also results in a waste of scarce research resources in terms of money and time, as knowledge of research in futile quests is not shared and studies are therefore unnecessarily repeated. Furthermore, patients and other research participants are misled in believing that they are contributing to scientific knowledge and the development of improved treatments (8).
As awareness has grown about the problem of publication bias, research has been conducted to examine the root causes and to identify preventative measures. For example, in 2010, Song et al. published an updated Health Technology Assessment (HTA) that identified and appraised studies on publication and related biases, assessed methods to deal with publication and related biases, and examined measures taken to prevent, reduce and detect dissemination bias (5). In addition, several important regulatory changes potentially influencing the public availability of clinical trial results were discussed during the course of the UNCOVER project. The Regulation of the European Parliament and of the Council on clinical trials on medicinal products for human use, repealing Directive 2001/20/EC, was discussed and eventually adopted in mid-April 2014 (10). The Regulation includes several new measures, such as the creation of a public EU-wide database for all clinical trials of new medicines, with a single portal for clinical trial registration. Overall, simplified processes are supposed to reduce bureaucracy and stimulate the conduct of clinical trials in the European Union (11).
In autumn 2012, the European Medicines Agency (EMA) stated that it is committed to the “proactive publication of data from clinical trials supporting the authorisation of medicines once the marketing-authorisation process has ended, which the EMA does not consider commercially confidential” (13). After a workshop organized by the EMA in November 2012, followed by several meetings of advisory groups, the EMA published the final advice of these groups on the topics of protecting patient confidentiality, clinical trial data formats, rules of engagement, good analysis practice, and legal aspects (14), as well as a draft policy on the publication of and access to clinical trial data (16). On 12 June 2014, the EMA Management Board finally agreed the policy on publication of clinical trial data, which allows the EMA to publish clinical trial data that can be used for academic and non-commercial research purposes (18).
The UNCOVER project is a direct contribution to overcoming publication bias related to non-publication of clinical studies that have been designed and executed as randomized controlled trials (RCTs). RCTs are currently the gold standard for assessing drug and device efficacy as they are designed to avoid or minimize both systematic and random errors in clinical studies. They are the building blocks of systematic reviews – a cornerstone of evidence-based medicine for improved safety and effectiveness of patient outcomes. However, the inherent value of an RCT is dependent on knowledge of the trial’s existence and accessibility to the trial’s findings. A study examining the patterns of publication of clinical trials funded by the National Institutes of Health (NIH) and registered with ClinicalTrials.gov found that fewer than half of a sample of registered trials were published within 30 months of trial completion (19). Non-publication (i.e. not disseminating results) of RCT results may decisively reduce the benefit of systematic reviews of drugs, medical devices, or procedures because the research that is available “differs in its results from the results of all the research that has been done in an area [and] readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows” (20).
The UNCOVER project focuses on a specific aspect of publication bias, i.e. bias resulting from the non-publication of clinical trials. Non-publication can occur in different forms: either the results are entirely unavailable/inaccessible; or the results are submitted to a regulatory agency but are unavailable to other researchers, systematic reviewers, or other stakeholders; or some of the results remain unavailable (e.g. selective outcome reporting bias) (6).
1.2.2 Objectives and approach
The UNCOVER project aims to identify and evaluate strategies and ways to overcome non-publication of clinical studies that have been designed and executed as randomized controlled trials (RCTs).
UNCOVER pursues three primary objectives:
1) To apply established and develop novel, solid, and useful methods for fact-finding and interventions into the socio-economic system defined by the causes and sources of publication bias;
2) To engage with stakeholders and identify strategies, barriers, and facilitating factors associated with publication bias and its consequences; and
3) To synthesize lessons learned and recommend feasible measures to deal with publication bias.
The issue of publication bias was addressed with quantitative, qualitative and participatory methods (including tools such as stakeholder maps, institutional analysis, a systematic review, interviews, scenario-building workshops, bibliometric analysis and software development) in an interdisciplinary approach, focusing on areas where there is little or no evidence on how such measures perform in practice. The tasks included:
• Framing publication bias in terms of evidence-based medicine and systems theory (including stakeholder mapping) to both acknowledge and reduce the complexity of the problem and focus on the main players in publishing studies as well as their strategies.
• Objective, systematic and balanced identification of key opinion leaders, as well as measures (law, regulations, policies, practices, guidelines, methods, and tools) to overcome bias, from documents and sites by bibliometric means and comprehensive site searches on the world-wide web.
• Systematic review of current measures substantiated by own experience (“inside-out”) as well as inclusion of experts and external knowledge of international methods groups (“outside-in”) in the field of systematic reviews and meta-analyses.
• Engagement of stakeholders through interviews and workshops with editors and other stakeholders, based on stakeholder mapping/analysis, to reflect on measures in terms of experiences, own strategies and existing conflicts of interest.
• Development of software solutions to demonstrate and address the impact of unpublished studies on statistical meta-analyses.
• Recommendations for the implementation of feasible measures and milestones, as well as identification of open gaps to be addressed by new research, to overcome non-publication.
UNCOVER builds upon an innovative approach to the framing of publication bias and integrates methods that are currently considered to be advanced bibliometric and state-of-the-art means for comprehensive searches and extraction of documents and sites. It pursues a systematic and objective identification of key opinion leaders and assesses current prevention measures against publication bias in the context of RCTs. Furthermore, a series of workshops and interviews with stakeholders (e.g. editors of medical and public health related journals, patient organisations, and the pharmaceutical industry) was conducted. Policy measures for clinical trial registration were at the centre of the engagement, but various other types of measures were pursued as well.
The UNCOVER consortium is grateful to the numerous interview partners and participants in the three stakeholder and expert workshops held during the course of this project. The interviewees and workshop participants provided valuable insights that enabled the identification and development of recommendations described in this report.

Project Results:
1.3.1 Conceptual base of publication bias
In a first step, a conceptual basis of publication bias was defined to grasp the scope of publication bias and to provide a frame of reference for the project's multidisciplinary research approach involving systems analysis, evidence-based medicine, and evaluation research. Definitions of important terms regarding publication bias and related biases were identified and categorized.
Since the concept of publication bias first appeared in the scientific literature, many conflicting and/or overlapping terms for similar ideas have been used. We abstracted definitions from 64 full-text publications on the topic of publication bias, many of which used unique definitions of key terms and concepts. In 2010, Song and colleagues (5) published a landmark health technology assessment on publication bias which included a comprehensive glossary of relevant terms, in particular for different types of bias. We have chosen, to a great extent, to adopt these definitions as we believe that they are comprehensive, sensible, and usable, and that they will, in time, establish themselves as the “correct” definitions in this field. We have added information from other publications where terms were absent from the Song et al. glossary, where additional information clarifies how terms have been used in the past, or where it slightly expands the definitions included. Table 1.1 provides an overview of the project-relevant terms concerning publication bias and related biases.
Table 1.1: Publication Bias and related bias terms (based on Song et al (5) unless otherwise specified).
Term Definition
Bias Bias refers to types of systematic errors in the collection, analysis, or interpretation of research data that distort the outcomes; bias at times may be either unrecognized or intentional, but both negate the validity of the study.
In statistics, the bias of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
Citation bias Occurs when the chance of a study being cited by others is associated with its result. For example, authors of published articles may tend to cite studies that support their position. Thus, retrieving literature by scanning reference lists may produce a biased sample of articles and reference bias may also render the conclusions of an article less reliable.
Database bias (indexing bias) Occurs when there is biased indexing of published studies in literature databases. A literature database, such as MEDLINE or EMBASE, may not include and index all published studies on a topic. The literature search will be biased when it is based on a database in which the results of indexed studies are systematically different from those of non-indexed studies.
Dissemination bias Occurs when the dissemination profile of a study’s results depends on the direction or strength of its findings. The dissemination profile is defined as the accessibility of research results or the possibility of research findings being identified by potential users. The spectrum of the dissemination profile ranges from completely inaccessible to easily accessible, according to whether, when, where and how research is published.
Full publication bias Occurs when the full publication of studies that have been initially presented at conferences or in other informal formats is dependent on the direction and/or strength of their findings.
Grey literature bias Occurs when the results reported in journal articles are systematically different from those presented in reports, working papers, dissertations or conference abstracts.
Language bias Occurs when languages of publication depend on the direction and strength of the study results.
Rationale: Authors who have completed a clinical trial yielding negative results might be less confident about having it published in a widely circulated international journal written in English and would then submit it to a local journal. If these investigators work in a non-English-speaking country, the paper will be published in their own language in a local journal. Positive results by authors from non-English-speaking countries are thus more likely to be published in English, and negative results in the investigators' language.
Media attention bias Occurs when studies with striking results are more likely to be covered by the media (newspapers, radio and television news).
Multiple publication bias (duplicate publication bias) Occurs when studies with significant or supportive results are more likely to generate multiple publications than studies with non-significant or unsupportive results. Duplicate publication can be classified as ‘overt’ or ‘covert’. Multiple publication bias is particularly difficult to detect if it is covert, when the same data are published in different places or at different times without providing sufficient information about previous or simultaneous publication.
Non-publication See “publication bias”, the term we use for the non-publication of the results of clinical trials.
Outcome reporting bias Occurs when a study in which multiple outcomes were measured reports only those that were significant.
Selective [outcome] reporting bias in a study is defined as the selection, on the basis of the results, of a subset of analyses to be reported. Selective reporting may occur in relation to outcome analyses, subgroup analyses, and per protocol analyses, rather than in intention to treat analyses, as well as with other analyses. Three types of selective reporting of outcomes exist: the selective reporting of some of the set of study outcomes, when not all analysed outcomes are reported; the selective reporting of a specific outcome—for example, when an outcome is measured and analysed at several time points but not all results are reported; and incomplete reporting of a specific outcome—for example, when the difference in means between treatments is reported for an outcome but no standard error is given. A specific form of bias arising from the selective reporting of the set of study outcomes is outcome reporting bias, which is defined as the selection for publication of a subset of the original recorded outcome variables on the basis of the results.
Place of publication bias Place of publication bias is defined as occurring when the place of publication is associated with the direction or strength of the study findings. For example, studies with positive results may be more likely to be published in widely circulated journals than studies with negative results. The term was originally used to describe the tendency for a journal to be more enthusiastic towards publishing articles about a given hypothesis than other journals, for reasons of editorial policy or readers’ preference.
Furthermore, clinical trial results may be publicly available (for example as PDFs via company or public webpages); however, they may not be indexed in any database and are therefore difficult to locate in practice.
Positive-outcome bias Preference (of journals) for (publishing) trials showing significant results.
Publication bias Occurs when the publication of research results depends on the nature and direction of the results. Because of publication bias, the results of published studies may be systematically different from those of unpublished studies.
The non-publication of clinical trials might mean that the results are entirely unavailable/inaccessible, that the results are submitted to a regulatory agency but are unavailable to other researchers, systematic reviewers, or other stakeholders, or that some of the results remain unavailable (see selective outcome reporting bias).
Time lag bias Occurs when the speed of publication depends on the direction and strength of the trial results. For example, studies with significant results may be published earlier than those with non-significant results.

1.3.2 Actors in the system: Stakeholder map
Publication bias due to non-publication of clinical trial results is a multi-dimensional problem including scientific, economic, legal, political and overall health issues. Therefore, a systemic approach is required to adequately grasp the problem and the actors involved. The systemic approach provides a framework for the discussion of impacts of interventions to reduce bias related to non-publication of clinical trial results. It considers stakeholders and stakeholders' interactions on the one hand and intervention logics on the other.
Stakeholder Map
To reveal underlying “forces” driving the system, major stakeholder groups in clinical trials and the publishing process were identified and grouped according to their specific roles in the system and rationalities (stakeholder map), as a prerequisite for an institutional analysis. In total, 18 different roles were identified (e.g. funder, investigator, and author) as well as five different rationalities (scientific, economic, health, legal and political rationality). The stakeholder map was developed as a tool to visualize the complexity of non-publication of clinical trial results and to provide a basis for adequately considered recommendations on changing undesired publication practice.
As a structuring principle, a functional approach was chosen and clinical trial results were conceptualized as an idealized value-chain process (Figure 1.1). Every process element represents a certain function – “Design CT”, “Conduct CT”, etc. – which creates a value. The clinical trial value-chain starts with a given body of knowledge as the first functional element and evolves towards the approval of drugs or other use of clinical trial results as the final functional elements. Stakeholders are depicted with regard to their roles towards each functional element. Additionally, the five societal rationalities are used as an ordering scheme: scientific, economic, health, legal, and political rationality.
The identification and mapping of stakeholders includes their clustering or grouping according to their specific roles and rationalities. Most roles are adequately captured as organizations. For a few roles it seems appropriate to focus on persons. This is especially true for authors, editors and reviewers. In their case it is assumed that the “personal decision sovereignty” outweighs the “organizational decision sovereignty”. For example, a researcher who is part of a clinical trial team is strictly guided in his/her decisions by organizational rules and routines, whereas the very same researcher usually has comparatively more sovereignty concerning the publishing of papers in scientific journals (unless it is an organizational instruction that publication of certain results is not permitted). Stakeholder mapping in UNCOVER therefore recognizes two categories of stakeholders: persons as stakeholders (such as authors, reviewers, doctors, patients etc.) and organizational stakeholders (such as companies, universities, hospitals etc.). Figure 1.1 visualizes the variety of stakeholders and their roles and Table 1.2 describes the roles of the stakeholders.


Figure 1.1: Clinical trial stakeholder map according to roles/functions (for a description of the stakeholders see Table 1.2). CT denotes “clinical trial”.
Table 1.2: Stakeholder roles involved in the process of clinical trials and publication of trial results.
Role Description
AUTHOR person writing or contributing to manuscripts describing clinical trials for publication; usually employed by an organization conducting a CT such as a company, a university hospital or research institute
DATABANK MANAGER entity/person providing infrastructure/service for prospective/retrospective CT registration; usually hosted by a medicines agency, a university or an intergovernmental medicines body
DECISION MAKER entity of public administration responsible for health decisions (hard law and soft law, rules of the game)
DOCTOR person professionally qualified and certified for medical treatment
ENABLER person/organization working on the improvement of public health; usually health care professionals, health education facilities, consumer advocates, patient organizations or other health related NGOs
EDITOR person who evaluates research advances and decides what to publish in a particular journal
ETHICAL ADVISOR independent body protecting the rights of CT participants and providing public assurance; usually an ethics committee
FUNDER organization providing funding for clinical research; usually a company, a private fund or public fund (funder, sponsor and investigator may be the same entity)
INSURER organization deciding about reimbursement of drugs, medical devices etc. in a locality; either private (company) or (semi)public insurer
INVESTIGATOR entity (i.e. principal investigator and team) responsible for the conduct of a CT at a trial site; usually employed by a company, a university hospital or research institute
LEGISLATOR national/supranational legislative body/bodies (e.g. parliament)
PUBLISHER organization publishing scientific journals/books or managing databases, or mass media (print, TV, web)
READER person who is either a CT specialist (author, investigator etc.) or an interested non-specialist
REGULATOR competent authority approving/licensing a drug, medical devices etc. for use in a locality; usually a governmental agency
REVIEWER person conducting scientific peer-review on behalf of an editor/publisher
SPONSOR person/organization responsible for the initiation, management and/or financing of a CT; usually a company or university hospital or research institute
SYSTEMATIC REVIEWER reviewer using explicit methods to identify, select, and critically appraise relevant research
USER person who consumes health care; usually as a patient and/or as a CT participant

Intervention logics
Starting from the stakeholder map described above, a conceptual framework for the identification of intervention logics was developed. It is based on the distinction between ‘hard law’ and ‘soft law’, supplemented by the general institutional context. Whereas hard law follows a mandatory, legislation-based logic, soft law follows a voluntary, agreement-based logic.
The terms ‘soft law’ and ‘hard law’ (i.e. ‘soft policies’ and ‘hard policies’, respectively) are used to characterize two different dimensions in public governance – non-legally binding and legally binding (25-27). Whereas hard law indicates public governance on the basis of legislation (including taxes, standards and other forms of binding rules), soft law means public governance by guidelines, recommendations, declarations, self-commitment, voluntary agreements etc. In a nutshell:
• hard law changes behaviour by immediately changing the choice set of addressees (hierarchical approach)
• soft law changes behaviour without (immediately) changing the choice set of addressees (market approach)
In international relations, soft law proves useful where states prefer non-treaty obligations which are simpler and more flexible than treaty-related obligations (i.e. mutual confidence-building, usefulness in pre-treaty processes, simpler procedures, more rapid finalization, greater confidentiality). Within the European Union, soft law is used to allow member states and EU institutions to adopt policy proposals without binding those member states who do not wish to be bound and/or to motivate member states to do voluntarily what they are less willing to do if legally obligated. In public governance at the state level, soft law is used to motivate organizations as well as persons (i.e. in their professional roles) to change their behaviour in a desired direction, without simultaneously introducing legal sanctions. Especially here (i.e. when organizations/persons are concerned) soft law is used to change opportunity sets (i.e. organizational routines and community practices) which work on the basis of beliefs and attitudes.
Although soft law has no legally binding effect, its impact can be significant. Soft law may have an impact on policy development and practice precisely by reason of its lack of legal effect. Actors (states, organizations, persons) may be willing to undertake voluntarily what they are less willing to do if legally obligated. Therefore, soft law can generally be seen as a more flexible instrument – compared to hard law – in achieving policy objectives.
1.3.3 Identification of key stakeholders
To identify feasible new measures, we engaged representative stakeholders and experts in workshops and interviews to explore motivations and barriers to counter publication bias. The key opinion leaders had been identified by exploiting knowledge and networks available in the UNCOVER consortium as well as by integrating bibliometrics and other web-based methods such as internet search and a web crawler for comprehensive searches and extraction of documents and sites (4, 28, 29).
Bibliometrics
We applied a quantitative bibliometric approach to identify key opinion leaders in the field of publication bias. We obtained bibliometric data (e.g. title, authors, institution, country, abstract, keywords, and references) of the relevant literature using the search phrases “publication bias”, “citation bias”, “language bias”, “location bias”, “reference bias”, and “reporting bias” from the ISI Web of Knowledge (Thomson Reuters). Based on the search results, bibliometric analyses were conducted on co-authorships, networks of affiliated institutions, co-citation analysis and bibliographic coupling. Relationships between authors and between institutions were mapped and analysed with mapping software (BibTechMon™). Research issues were identified by applying bibliographic coupling and co-citation analysis. Key opinion leaders were assessed by a network analysis of co-authorships and bibliometric indicators such as the number of publications, times cited and co-occurrence analysis.
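To illustrate how such co-authorship indicators can be derived, the minimal Python sketch below builds a co-authorship network from a handful of invented bibliographic records and ranks authors by publication count and number of distinct co-authors; it is a simplified stand-in for the actual BibTechMon workflow, and all names and records are placeholders.

# Minimal co-authorship analysis sketch (illustrative only; records are invented
# placeholders, not the Web of Knowledge export or the BibTechMon workflow).
from collections import Counter, defaultdict
from itertools import combinations

records = [  # each record: the author list of one publication on "publication bias"
    ["Author A", "Author B", "Author C"],
    ["Author D", "Author E", "Author F"],
    ["Author A", "Author D"],
]

# Publication count per author
pub_count = Counter(a for authors in records for a in authors)

# Undirected co-authorship graph: author -> set of distinct co-authors
coauthor_links = defaultdict(set)
for authors in records:
    for a, b in combinations(sorted(set(authors)), 2):
        coauthor_links[a].add(b)
        coauthor_links[b].add(a)

# Rank authors by number of publications, then by degree (number of distinct co-authors)
ranking = sorted(pub_count, key=lambda a: (pub_count[a], len(coauthor_links[a])), reverse=True)
for author in ranking:
    print(author, pub_count[author], len(coauthor_links[author]))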
The bibliometric analysis resulted in nominations of stakeholders for interviews and workshops (Figure 1.2). Criteria for nominations were: type of organization (international organizations, agencies, national organizations, industry, and sponsors), research issues and bibliometric indicators.

Figure 1.2: The top 21 authors (ranked by number of publications) of publications citing “publication bias” (marked with flags in the network and listed in the table).
Web-crawler
In addition, we identified the ’publication-bias’ community on the internet by means of social network analysis and delivered a list of sites which are relevant for publication bias. Measures were established to identify key positions and roles of organisations within the community network to finally identify prominent member organisations. In this context, the term ‘organisation’ refers to the responsible entity behind a website. A community identification agent (CIA) was customised and used to systematically locate and archive content and activities relating to ‘publication bias’ such as conferences, pressure group sites, standardization organisations, public forums and blogs with publicly accessible sites.
About 220,000 internet sites were scanned for ‘publication bias’ and related content. About 17,000 sites were identified as being part of the ‘publication bias’ community network on the internet. These sites are operated by 483 website providers. Based on these findings, a network analysis was carried out to explore the community structure. Finally, organisation types were classified according to the services they offer on their internet sites. This revealed that new and sometimes unconventional types of organisations are currently gaining importance in the ‘publication bias’ community. Blogs, e-journals, social networks, and video platforms, like YouTube, with videos about conference presentations, discussion forums and other new services can be considered potential sources for information about the current discussion on publication bias.
As a result of web crawling, we retrieved a list of organisations (identified by domain name) and their respective network positions. The individual position of an organisation can be interpreted as an indicator of whether the organisation tends to be an information hub for the community (a site with numerous outbound links) or whether the organisation is accepted as an authority by the community (a site with numerous inbound links). The domain-based network is a visual representation of the list of organisations, with an indicator for the network function of each identified organisation (Figure 1.3).

Figure 1.3: Network of domains
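The hub/authority reading of link structure described above can be made concrete with the classic HITS algorithm. The short Python sketch below runs HITS on an invented toy link graph of four placeholder domains; it illustrates the idea only and is not the community identification agent used in the project.

# Minimal HITS (hubs and authorities) sketch on a toy domain-level link graph.
# The graph below is invented for illustration; it is not UNCOVER's crawled data.
links = {                       # domain -> domains it links to (outbound links)
    "blog.example":     ["registry.example", "journal.example"],
    "forum.example":    ["registry.example", "journal.example", "blog.example"],
    "registry.example": ["journal.example"],
    "journal.example":  [],
}

nodes = list(links)
hub = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}

for _ in range(50):                                   # power iteration
    auth = {n: sum(hub[m] for m in nodes if n in links[m]) for n in nodes}
    norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
    auth = {n: v / norm for n, v in auth.items()}

    hub = {n: sum(auth[m] for m in links[n]) for n in nodes}
    norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
    hub = {n: v / norm for n, v in hub.items()}

# Domains with high 'auth' are accepted as authorities (many inbound links from good hubs);
# domains with high 'hub' act as information hubs (many outbound links to good authorities).
print(sorted(nodes, key=auth.get, reverse=True))
print(sorted(nodes, key=hub.get, reverse=True))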
Other results gained from web crawling are:
• a site list of authorities in the publication bias community,
• a site list of hubs in the publication bias community,
• a list of identified domains with organisation names,
• a list of domains with organisation type, and
• a list of identified literature.
One main conclusion from internet scanning is that because of new media communication channels on the internet, new forms of information management are necessary to address public awareness of ‘publication bias’ effectively. The automatic identification of epistemic communities is only a first step in this direction of automatic knowledge management. It will be a competitive advantage in the publication-bias community to use automatic issue management systems with issue identification, issue tracking, weak signal detection for emerging issues and other services. Although automatic issue management cannot substitute for manual research, it can support researchers with respect to their information management.
1.3.4 Assessing the effectiveness of current interventions to overcome publication bias
A systematic review was carried out to evaluate the effectiveness of interventions to prevent and reduce publication bias and to conduct a thematic analysis of the literature to identify factors acting as barriers or facilitators in the implementation of such interventions. The aim was to identify and appraise empirical studies on interventions to reduce publication bias, specifically with respect to prospective study registration, and to identify personal, social, organizational, and structural factors that can act as barriers or serve as facilitators in the implementation of interventions to prevent and reduce publication bias.
Although the research and scientific community has been aware of and calling for solutions to address the problem of publication bias for many decades, we located little evidence that showed that current measures are actually succeeding in reducing this problem.
• The only conclusion that we can support with a moderate rating for the quality of the body of evidence is that clinical trial registries do not provide comprehensive and accurate information about the methods and pre-specified outcomes of the registered clinical trials that would allow the detection and deterrence of selective outcome reporting, even though their use has increased markedly since 2005. Competing interests of sponsors and researchers are seen as a major barrier to prospective trial registration. Facilitators to overcome these barriers could be the implementation of enforcement mechanisms, such as sanctions or penalties for non-compliance, as well as clear rules about what data have to be entered.
• Another difficulty seems to be the existence of many different, inconsistent registries all over the world within different legal systems. The desire for one open-access comprehensive trial registry and worldwide legislation that mandates internationally linked registries able to exchange information among countries is often expressed in the literature. The use of a unique registration number could help to identify all studies. Raising awareness of the impact of publication bias through education may also be helpful. In order to implement an enforcement mechanism, create one comprehensive trial registry, and make it easy and practical to use, financial and personnel resources are essential and have to be provided by state bodies or through a central fund supported by all stakeholders, including industry.
• Likewise, it seems as if electronic publishing has not been able to increase the number of published negative results or the amount of information provided regarding the results of all outcomes of RCTs, and that open access to scientific journals discriminates against authors from developing countries without providing any benefit in reducing publication bias.
• The main barriers concerning the peer review process were biased reviewers and editors as well as the inconsistency of the peer reviewing process. Facilitators that could overcome these barriers included training for reviewers and editors, employment of experienced full-time professionals and enforcement of transparency and objectivity. The peer review process seems to be influenced by social factors, like norms and behaviour in a culture and disciplinary cultures, or personal factors, like the interests of the reviewer. Blinding peer reviewers may decrease geographical bias against non-US authors. We found no evidence that changing the peer reviewing process can reduce reporting or positive-outcome bias, or on the role of ethics commissions in ensuring that protocols and publications are consistent. Even if one saw potential here for interventions against publication bias to be implemented, the poor quality of the information in trial registries and the tendency for this information to be altered over time currently prohibit accurate cross-checking by third parties such as reviewers or ethics commissions.
1.3.5 Developing new measures to overcome publication bias
Participative approach: Engaging stakeholders in interviews and workshops
To identify feasible new measures, we interviewed representative stakeholders and experts to explore motivations and barriers to counter publication bias (30). In addition, experts and stakeholders were invited to several workshops to collectively create multiple, alternative visions of the future (‘scenarios’) and identify both the key drivers and key activities to initiate change as well as the challenges and opportunities for individual stakeholders (31). Together with the stakeholders a roadmap was developed for a plausible scenario (32).
The following stakeholder groups were addressed in a series of interviews and the workshops (in alphabetical order): Ethics committees, Patient organizations, Pharmaceutical industry, Political decision makers, Publishing (journal editors, associations), Regulatory agencies and supporting organizations, Research funding agencies, and Research institutions and associations. Overall, we invited 89 interview partners and conducted 35 interviews. Fifty-five interview partners either did not reply or refused to be interviewed (reasons: no time, no expertise on this topic, organization’s communication policy, refusing to share information on this research topic).
Several interview partners, although working in the field of biomedicine or regulatory affairs, were not aware of the existence and the consequences of publication bias. Therefore, we conclude that although publication bias is a widespread phenomenon, only a small part of the scientific community is actually concerned about it.
In the interviews we could identify seven interventions or clusters of interventions to counter publication bias:
• Open Access policies
• Prospective registration of clinical trials and mandatory reporting of clinical trial results
• Interventions targeting the field of publishing
• Monitoring of publication status by research funders
• Monitoring of registration status and/or publication status by ethics committees
• Interventions targeting researchers and the research system
• Raising awareness
A law which requires the mandatory registration of a clinical trial and the reporting of the results of a clinical trial was described as the most important intervention. Three different modes of operation to implement interventions to counter publication bias were identified: networks within stakeholder groups, cooperation between different stakeholder groups, and good practice models.
Another core aspect of the participatory approach was scenario building. Scenario building is a methodological practice used in science, public policy, business and community settings when it is not easy to predict future trends but action needs to be taken, and when it is important to engage different stakeholders, recognizing diversity yet building common ground in imagining possible futures. Scenarios are thus ‘heuristic devices’ to break away from conventional thinking, to focus on mental models and strategic dialogue, and to think systemically and holistically. The other core aspect is the transformative stance, which entails designing a collaborative environment for stakeholders to engage fruitfully in identifying novel ways to overcome barriers related to publication bias.
For the development and description of a realistic scenario for preventing publication bias, a conceptual and organizational basis for involving stakeholders in the scenario development process was designed and the key issues of preventing publication bias in RCTs were embedded in wider socio/technical/economic/political (STEP) contexts. In addition, the participatory approach followed in this project used the Responsible Research and Innovation (RRI) framework as a heuristic device to structure current practice and potential measures to overcome publication bias. We believe that the process of responsible research and innovation, which aims at stimulating a research and innovation (R&I) process that is ethically acceptable, sustainable and socially desirable, is a helpful framework to better align clinical research with societal needs in conducting and publishing clinical trials.
The goal was to convene a stakeholder group of actors covering the “whole system” of publication bias, taking different perspectives and expectations regarding the issue of publication bias associated with clinical trials into account. To facilitate the scenario building process, two-day stakeholder workshops were conducted in June and September 2013 in Vienna. Experts and stakeholders were invited to collectively create multiple, alternative visions of the future (‘scenarios’). Scenario frames were developed in interactive dialogue sessions and aimed to address, first, goals and needs for change, second, key drivers and key activities to initiate change, and third, challenges and opportunities for different stakeholders.
During the two workshops, participants composed a total of 19 short logical narratives of hypothetical future developments (31) and identified metaphors and names for each scenario (‘scenario frames’) for overcoming publication bias. The following scenario frames to prevent publication bias were developed: “Completeness of Results and High Quality Reporting”; “Name and Shame – A brave world”; “Scientific Practice Influenced by Public Involvement and Implemented by Public and Private Collaboration”; “Backfire”; “Danger to Personalized Medicine”; “Loss of Control”; “Agenda 2035”; “Rx Gold Standards”; “Our Own Worst Enemy”; “Scientific Shift”; “We Can See Clearly Now…”; “Mandatory HIA for Public Regulator”; “Funders Take Over”; “‘Big Brother’ Trial”; “Citizens Uprising”; “Death of the Impact Factor”; “Journals ‘R’ Us”; “Give Us the Data!” and “2084”.
Starting from these scenarios, stakeholders developed a feasible roadmap to overcome publication bias (Figure 1.5).

Bibliometric approach:
To reveal characteristic bibliometric features of publication bias, an analysis of data sources and bibliometric features was conducted. To this end, three literature/registry databases (PubMed, Web of Science and ClinicalTrials.gov) with differing field content were combined for the analysis of datasets from three different medical cases that had previously been used for a systematic review. Hypotheses were formulated on how to overcome those aspects of publication bias that can be elaborated with bibliometric approaches. In a further step, a bibliometric analysis was conducted to identify characteristic features distinguishing registered from non-registered studies and to investigate the influence of registered vs. non-registered studies on the obtained “bibliometric profile”.
With the introduction of the US registry for clinical trials, ClinicalTrials.gov, in 2000 and of other registries in the following years, an increasing number of authors of publications on clinical trials started to quote the reference ID of clinical trials from trial registries. As a consequence, it is now possible to identify registration IDs in publications that are indexed in the literature databases Web of Science or PubMed. The number of publications that refer to a registered clinical trial ID has been increasing considerably since 2005, when the International Committee of Medical Journal Editors (ICMJE) started to require trial registration as a precondition for publication under the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (URM). This demonstrates that the ICMJE exerts a considerable influence on the overall number of publications referring to registered clinical trials.
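As a simple illustration of how registration IDs can be detected in indexed records, the Python sketch below scans abstract text for ClinicalTrials.gov identifiers (the prefix “NCT” followed by eight digits); the sample records are invented, and the handling of other registry formats (e.g. ISRCTN, EudraCT) is left out.

# Illustrative sketch: detecting ClinicalTrials.gov registration IDs (NCT followed by
# eight digits) in abstract text. The sample records below are invented placeholders.
import re

NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

abstracts = {
    "record-1": "This trial is registered with ClinicalTrials.gov, number NCT01234567.",
    "record-2": "No registration statement was found in this abstract.",
}

for record_id, text in abstracts.items():
    ids = NCT_PATTERN.findall(text)
    print(record_id, ids if ids else "no registry ID detected")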
The bibliometric analyses revealed the following:
• The network analysis demonstrated that the most active authors with a high centrality in the network also tend to publish registered clinical trials.
• Science maps of research fronts (RF) and knowledge bases (KB) demonstrated that RF and KB offer a much broader view on research issues with respect to clinical trials, meta-analysis and systematic reviews.
• Bibliographic coupling helps to identify additional literature on clinical trials and related research.
• The pharmaceutical industry sponsored the majority of those registered clinical trials that were referred to in all identified publications (from the analysed medical test cases) and thereby constitutes the dominant sector among the funding bodies acknowledged in publications on registered clinical trials.

Statistical modelling approach: Developing a new software
An open-source software program (SAMURAI - Sensitivity Analysis of a Meta-analysis with Unpublished but Registered Analytical Investigations) was designed, developed, and tested that utilizes the available information from unpublished studies in trial registries to estimate the potential impact of these studies on the results of a given meta-analysis.
The new software is meant to assist researchers who are conducting meta-analyses to gauge the potential impact of missing data from ongoing and unpublished studies on the results and conclusions of their meta-analyses.
The R package SAMURAI can handle meta-analytic data sets of clinical trials with two independent treatment arms. The outcome of interest is currently binary (but we are testing the expansion to continuous outcomes). For each unpublished study, the data set only requires the sample sizes of each treatment arm and the user predicted “outlook” for the studies. Outlooks are chosen by the user from among pre-defined outlooks that vary from those strongly favoring the intervention treatment to those strongly favoring the control treatment.
SAMURAI assumes that control arms of unpublished studies have effects similar to the effect across control arms of published studies. For each intervention arm of an unpublished study, utilizing the user-provided outlook, SAMURAI randomly generates an effect estimate using a probability distribution, which may be based on a summary effect across published trials. SAMURAI then calculates the estimated summary intervention effect with a random-effects model using the DerSimonian & Laird method, and outputs the results as forest plots.
By utilizing information about sample sizes of treatment groups in registered but unpublished clinical trials, SAMURAI has an advantage over other assessments of publication bias, such as the trim and fill method, which come with more stringent assumptions about the number and enrollment of unpublished studies. The forest plot provides the end-user an easy way to see how the inclusion of unpublished studies could change the meta-analytic summary intervention effect.
A large variety of scenarios were explored, including the ones mentioned above. The overall philosophy adopted was that the software should be ‘user-driven’, that is, that the user has the flexibility to choose their desired scenarios. The effect of unpublished studies in a particular meta-analysis depends on many aspects: the number of published/unpublished studies, the sizes of the published/unpublished studies, the actual magnitude of effects in the published studies, and the potential magnitude of effects in the unpublished studies; the latter is driven by the user. We provided several choices (‘outlooks’) for the user to select: very positive, positive, no effect, negative, very negative, very positive CL, positive CL, current effect, negative CL, very negative CL.
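To make the statistical idea tangible, the Python sketch below (not the SAMURAI R implementation, and with invented numbers) imputes a log risk ratio for one registered but unpublished two-arm trial from a user-chosen outlook and recomputes a DerSimonian and Laird random-effects summary, mirroring the sensitivity-analysis logic described above.

# Illustrative sketch of the underlying idea (not the SAMURAI R code): impute an effect
# for a registered but unpublished trial and recompute a DerSimonian-Laird random-effects
# summary of log risk ratios. All trial data below are invented for demonstration.
import math

def log_rr(events_t, n_t, events_c, n_c):
    """Log risk ratio and its variance for one two-arm trial with a binary outcome."""
    y = math.log((events_t / n_t) / (events_c / n_c))
    v = 1/events_t - 1/n_t + 1/events_c - 1/n_c
    return y, v

def dersimonian_laird(effects):
    """Random-effects summary (DerSimonian & Laird) for a list of (effect, variance) pairs."""
    w = [1/v for _, v in effects]
    y_fe = sum(wi*yi for wi, (yi, _) in zip(w, effects)) / sum(w)
    q = sum(wi*(yi - y_fe)**2 for wi, (yi, _) in zip(w, effects))
    tau2 = max(0.0, (q - (len(effects) - 1)) / (sum(w) - sum(wi**2 for wi in w)/sum(w)))
    w_re = [1/(v + tau2) for _, v in effects]
    summary = sum(wi*yi for wi, (yi, _) in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1/sum(w_re))
    return summary, se

# Published trials: (events_treatment, n_treatment, events_control, n_control), invented numbers
published = [(30, 100, 45, 100), (25, 120, 40, 115), (18, 80, 22, 85)]
effects = [log_rr(*trial) for trial in published]
print("published studies only:", dersimonian_laird(effects))

# One registered but unpublished trial: only the arm sizes (150 vs 150) are known.
# Impute: the control arm behaves like the pooled control arms of the published trials,
# the treatment arm follows a user-chosen 'outlook' expressed here as an assumed risk ratio.
outlook_rr = {"very positive": 0.5, "positive": 0.75, "no effect": 1.0, "negative": 1.33}
n_t, n_c = 150, 150
p_control = sum(t[2] for t in published) / sum(t[3] for t in published)
assumed = outlook_rr["no effect"]
effects.append(log_rr(assumed * p_control * n_t, n_t, p_control * n_c, n_c))
print("with imputed unpublished trial:", dersimonian_laird(effects))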
1.3.6 Recommendations
As a final result, UNCOVER made recommendations for feasible measures to better cope with publication bias arising from non-publication of RCTs.
Due to the multi-dimensionality of the problem of publication bias related to non-publication of clinical trial results, we recommend:
1. a three-pronged multi-intervention strategy to overcome publication bias in clinical trials (general recommendation);
2. individual measures of the multi-intervention strategy (specific recommendations); and
3. a roadmap that integrates individual measures into a feasible implementation plan with a time frame for activities (roadmapping recommendations).

Multi-intervention strategy
All evidence acquired in the UNCOVER project indicates that a multi-intervention strategy is required to effectively overcome publication bias arising from non-publication of clinical trial results. To overcome publication bias, we recommend two approaches together with a Catalytic Supplement comprising a variety of building blocks and individual measures (Figure 1.4):
• Global Mandatory Approach
• Individualized Voluntary Approach
• Catalytic Supplement
These three strands should not be seen as alternatives, but as complementary.

Figure 1.4: Three interlinked approaches to overcome publication bias related to non-publication of results of clinical trials and their building blocks.
The Global Mandatory Approach focuses on a worldwide CT-registry, which contains all clinical trials with a unique number as well as at least summaries of results of all of these clinical trials. We are aware that the basis for such a registry already exists in the form of the WHO International Clinical Trials Registry Platform (ICTRP), but also that these efforts require stronger support by nation states worldwide and the EU. This approach combines hard law and soft law.
On the one hand, the goal is that nation states worldwide implement CT law on the basis of globally harmonized standards. Thereby, the US and the EU should act as first movers and serve as role models by synchronizing the US ClinicalTrials.gov and the European EudraCT with the ICTRP. The WHO, as facilitator, should monitor the progress of nation states’ implementation of mandatory CT-registration and eventually promote the achievement of a globally accepted declaration on CT-registration (international treaty law).
On the other hand, these global harmonized standards should be developed in a step-by-step process including learning and feedback with participation of different stakeholders from investigators, authors, financiers/funders, publishers, editors, and reviewers to lobbyists, NGOs, doctors, health care professionals, and patients. To facilitate the process, the WHO should provide a forum for the integration and supervision of the development of global harmonized standards.
Simultaneously, the general institutional context should gradually improve by taking up the “call for transparency”. This should be mirrored in health institutions, such as regulatory bodies and educational and training facilities, and in the professional assessment of the nature and effects of non-publication of clinical trial results.
The Individualized Voluntary Approach focuses on funding policy and journal policy. It is essentially a soft law approach which will unfold its effectiveness through the interlinking of funding and journal policy.
Public funders should require that researchers publish clinical trial results by following principles such as The European Code of Conduct for Research Integrity. There are already several national initiatives (e.g. the Research Councils UK Policy on Open Access) which demonstrate that public funders are aware of their leverage power and that they are ready to use it. Ideally, the proactive uptake by journals of fostering the publication of all kinds of results (successful as well as unsuccessful CTs etc.) could signal to private funders to consider a “publish all kinds of results” policy themselves. The general institutional context, in the form of the open science movement backed by Web 2.0 (from professional online databases to wiki-type crowd-sourced information), supports these developments.
A Catalytic Supplement to the Global Mandatory Approach and the Individualized Voluntary Approach is a change in the reward policy and an overall empowerment.
A change in the reward policy means that the academic reward system changes towards the appreciation of publication of all kinds of results (e.g. inclusion in the impact factor system; performance measures used for career advancement should also include a researcher’s record in making data publicly available), and that the business and funder reward systems change likewise. Overall empowerment means that health care professionals together with NGOs, patient organizations, education facilities etc. raise general awareness and provide knowledge to better inform health care users – who are then better respected by the professional health care experts and who are able to behave as advanced demanders within the health care system.

Individual measures
The three approaches of the multi-intervention strategy for overcoming publication bias require a set of practical measures for their implementation. Table 1.3 provides an overview of recommended measures and indicates how they correspond to the overall architecture of recommendations. In addition to the identified building blocks of the strategy we suggest measures that should support the overall learning process related to publication bias.
Table 1.3: Building blocks of the multi-intervention strategy and individual measures.
Building blocks Individual measures
A1 Mandatory CT-registration
incl. mandatory publishing of results • All clinical trials should be registered in one worldwide meta-registry and catalogued using a unique trial identification number.
• The World Health Organization (WHO) should be responsible for the administration of this single worldwide trial registry.
• Regulatory agencies should not accept any trials for approval of drugs, medical devices etc., unless these have been prospectively registered. In addition, reporting of results should also be a prerequisite for the acceptance of trial results by these agencies.
• Approval institutions and funders, in addition to ensuring that trials are registered, should review the quality of entries (quality management) and should secure compliance with mandatory data requirements.
• The results of clinical trials must be publicly available (at least as summaries).
A2 CT-registration standards • Global harmonized standards should be developed for trial registration and the reporting of clinical trial results.
A3 Facilitator WHO • The WHO should be strengthened as a facilitator.
A4 Transparency movement • Medical journals should increase transparency.
• The public “call for transparency” (e.g. through awareness-raising campaigns) should be honoured.
A5 Health institutions • Health care professionals together with civil society organisations (patient organisations, consumer advocates, etc.) should raise awareness and knowledge among the general public with the objective of empowering citizens to become “informed, autonomous, critical and respected” partners.
B1 Funding policy • Funding bodies should support trial registration and mandatory reporting of results. Funding (public or private) for clinical trials should be dependent on the trials being prospectively registered.
• Funding of future applications for grants should be dependent upon the prospective registration record of the applicant.
• Funding bodies should support the implementation of The European Code of Conduct for Research Integrity.
• Funding bodies should include researchers’ data sharing activities in their assessment.
B2 Journal policy • Journals (via editors) should revise their policies on information contained in abstracts and in reference lists to contain the unique trial identification number.
• Journal editors should only accept manuscripts based on registered trials.
• Editors and peer reviewers should check that the trial registration number is stated in all publications.
• Journals should take their own rules seriously and impose consequences if researchers are not willing to share data.
• Medical journals should increase transparency.
• Trial registration numbers should be stated in all publications.
B3 Open science & Web 2.0 • Automated issue management systems should be used in awareness raising campaigns (see also C2).
C1 Reward policy • There should be incentives for making data publicly available.
C2 Empowerment
(& awareness raising) • Awareness of publication bias and its detrimental effects should be raised among all stakeholder groups and the general public.
• Automated issue management systems should be used in awareness raising campaigns (see also B3).
• Awareness of the issue should be increased through the assessment of the extent of publication bias.
• The consequences of publication bias for the estimates of a meta-analytic summary effect should be assessed.
- Fostering future learning • Cooperation within stakeholder groups should be reinforced.
• Ongoing and future strategies to counter publication bias should be evaluated.
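To make measures A1 and A2 more tangible, the sketch below illustrates the kind of registry record and basic compliance check that a single worldwide registry implies. It is a minimal Python illustration with purely hypothetical field names, deadlines and identifiers; it does not reproduce any actual WHO or registry data standard.

```python
# Minimal, hypothetical sketch of a worldwide trial registry record and a
# compliance check (illustrative only; field names and the 365-day deadline
# are assumptions, not an actual registry standard).
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class TrialRecord:
    trial_id: str                          # unique worldwide trial identification number
    title: str
    sponsor: str
    registered_on: date                    # prospective registration date
    completed_on: Optional[date] = None
    results_summary: Optional[str] = None  # at least a summary of results


def compliance_issues(record: TrialRecord, deadline_days: int = 365) -> List[str]:
    """Return a list of compliance problems for a single registry entry."""
    issues = []
    if not record.trial_id:
        issues.append("missing unique trial identification number")
    if record.completed_on and record.results_summary is None:
        if (date.today() - record.completed_on).days > deadline_days:
            issues.append("results summary overdue")
    return issues


# A completed trial without a posted results summary would be flagged.
example = TrialRecord("UTN-0001", "Example trial", "Example sponsor",
                      registered_on=date(2012, 1, 1), completed_on=date(2013, 6, 1))
print(compliance_issues(example))   # -> ['results summary overdue']
```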

Roadmap
In order to structure the approaches and measures to overcome publication bias in a systematic manner, it is recommended to move along an ‘ideal’, yet feasible roadmap (Figure 1.5) with selected key measures and key stakeholders in three major phases.
A feasible transformation scenario to overcome publication bias in clinical trials can be structured into three major phases.
• Phase 1 (from the present until 2020) is mainly characterized by the Individualized Voluntary Approach with a focus on funding policy and journal policies and a Catalytic Supplement with a focus on change in the reward policies and overall empowerment.
• Phase 2 (2020-2025) is mainly characterized by the Global Mandatory Approach with a focus on a worldwide CT-registry, facilitated and orchestrated by the WHO.
• Phase 3 (2025-2035 and beyond) is mainly characterized by global cooperation and harmonization of laws and standards.
Phase 1 (from the present until 2020) is mainly characterized by raising public awareness through lobbying by civil society organizations such as patient groups and consumer advocates. Funding policy plays a central role in improving CT data availability and knowledge translation. Registration and publication, as well as CT data collection and synthesis, will become prerequisites for CT funding. In terms of capacity-building, CT results are increasingly ‘translated’ into user-friendly information for health care providers, decision makers, the interested public, etc. Health institutions increase education and training activities and upgrade existing registries. Overall, increasing public pressure results in buy-in from regulators and legislators.
Phase 2 (2020–2025) is characterized by consensus-building among regulators and legislators worldwide in line with the Global Mandatory Approach. The World Health Organization (WHO) assumes a key facilitating role, with proposals and agreements that gradually pave the way for global collaboration. Key stakeholders show improved accountability in their reward policies. Pharmaceutical companies adopt open access principles, with naming and shaming in cases of non-compliance. In the wake of the open science movement, publishers adopt new business models and new publication guidelines. More and more online journals (scientific and popular) appear. Automated issue management is used by public relations (PR) experts and by activist scientists in the health domain for systematic information retrieval, lobbying and agenda setting.
Phase 3 (2025-2035 and beyond) finally sees the European Union and the United States taking the lead in implementing mandatory CT registration as well as globally harmonized CT registration standards, thanks to intensive networking among nation states. CT knowledge translation becomes a pervasive phenomenon which ultimately leads to better informed health care users.
Figure 1.5 depicts in a stylized manner the various temporal pathways towards minimizing publication bias and maximizing patient well-being. Although the vertical lines suggest linear or even isolated developments, measures in fact frequently interlink horizontally, providing inputs to and receiving inputs from one another.


Figure 1.5: Feasible roadmap to overcome publication bias (CSO: civil society organisations; CT: clinical trial; EMA: European Medicines Agency; ERB: Ethical Review Board; HCP: health care professionals).
1.3.7 Conclusions
Over the past decades, various preventive measures have been implemented to counter publication bias. The most prominent strategies currently pursued are prospective registration of clinical trials and reporting of results, as well as awareness-raising campaigns such as the AllTrials campaign (33). However, setting up trial registry systems, reporting standards, publication guidelines and the like as isolated measures is likely to suffer from a lack of compliance, poor data quality, etc. as long as these interventions are not embedded in a comprehensive strategy that takes into consideration the systemic dependencies and rationalities of the actors and mechanisms at work in clinical research.
This effect is intensified when the policies behind the interventions do not carry the force of law and non-compliance with the policies does not entail legal sanctions or penalties. This is confirmed by a systematic review that assessed the effectiveness of particular interventions (e.g. trial registration, peer review process) to counter publication bias. It found little evidence that current measures are successful in dealing with the problem (34). In line with this, only a few stakeholder groups are convinced that they actually have influence when it comes to preventing publication bias (30). Not surprisingly, there is evidence that publication bias has even increased over the years despite the existing wealth of knowledge on the matter and suggested remedial measures that have been researched and debated for many years (8, 35).
To effectively reduce and eventually overcome publication bias, we believe that it is necessary to apply a systemic approach and implement individual measures as orchestrated sets of interventions that are carefully aligned with the social system context. We suggest the implementation of several interlinked and complementary approaches that comprise bundles of individual measures and rely on hard law as well as soft law interventions.

1.6 References
1. Buchinger E. Deliverable 1.2 of the UNCOVER FP7 funded project under contract number 282574: Stakeholder map. 2012.
2. Buchinger E. Deliverable 3.3 of the UNCOVER FP7 funded project under contract number 282574: Framework for the discussion of publication bias prevention: From stakeholder mapping to the hard law & soft law distinction. 2013.
3. Bax L, Moons KG. Beyond publication bias. Journal of clinical epidemiology. 2011;64(5):459-62.
4. Klerx J. Deliverable D3.1 Part A of the UNCOVER FP7 funded project under contract number 282574: Publication Bias: Identification of the Internet Community. 2013.
5. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technology Assessment. 2010;14(8):1-222.
6. Thaler K, Chapman A, Gartlehner G, Buchinger E, Bangdiwala S. Deliverable D1.1 of the UNCOVER FP7-funded project under contract number 282574: Definition and consequences. 2012.
7. McGauran N, Wieseler B, Kreis J, Schueler Y-B, Koelsch H, Kaiser T. Reporting bias in medical research - a narrative review. Trials. 2010;11.
8. Joober R, Schmitz N, Annable L, Boksa P. Publication bias: What are the challenges and can they be overcome? J Psychiatry Neurosci. 2012;37(3):149-52.
9. European Parliament/Legislative Observatory. Clinical trials on medicinal products for human use. 2013 [24.07.2013]; Available from: http://www.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2012/0192%28COD%29&l=EN.
10. European Commission. Clinical trials - General information. 2014 [29/08/2014]; Available from: http://ec.europa.eu/health/human-use/clinical-trials/index_en.htm.
11. European Union. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC. 2014 [29/08/2014]; Available from: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014R0536&from=EN.
12. European Medicines Agency. European Medicines Agency publishes final advice from clinical-trial advisory groups. 2013 [24.7.2013]; Available from: http://www.ema.europa.eu/ema/index.jsp?curl=pages/news_and_events/news/2013/04/news_detail_001778.jsp&mid=WC0b01ac058004d5c1.
13. European Medicines Agency. European Medicines Agency publishes final advice from clinical-trial advisory groups. 2013.
14. European Medicines Agency. Documents from advisory groups on clinical-trial data. 2013 [29/08/2014]; Available from: http://www.ema.europa.eu/ema/index.jsp?curl=pages/special_topics/document_listing/document_listing_000368.jsp.
15. European Medicines Agency. Documents from advisory groups on clinical-trial data. 2013 [24.7.2013]; Available from: http://www.ema.europa.eu/ema/index.jsp?curl=pages/special_topics/document_listing/document_listing_000368.jsp&mid=WC0b01ac058067d984#section1.
16. European Medicines Agency. Draft policy 70: Publication and access to clinical-trial data. 2013; Available from: http://www.ema.europa.eu/ema/index.jsp?curl=pages/includes/document/document_detail.jsp?webContentId=WC500144730&mid=WC0b01ac058009a3dc.
17. European Medicines Agency. Draft policy 70: Publication and access to clinical-trial data. 2013 [24.07.2013]; Available from: http://www.ema.europa.eu/ema/index.jsp?curl=pages/includes/document/document_detail.jsp?webContentId=WC500144730&mid=WC0b01ac058009a3dc.
18. European Medicines Agency. European Medicines Agency agrees policy on publication of clinical trial data with more user-friendly amendments. 2014 [29/08/2014]; Available from: http://www.ema.europa.eu/ema/index.jsp?curl=pages/news_and_events/news/2014/06/news_detail_002124.jsp&mid=WC0b01ac058004d5c1.
19. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. British Medical Journal. 2012;344.
20. Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis: prevention, assessment and adjustments. Chichester, England; Hoboken, NJ: Wiley; 2005.
21. Orac. Is there publication bias in animal studies? 2010.
22. Luhmann N. Social systems. Stanford: Stanford University Press; 1995 [1984].
23. Luhmann N. Observations of modernity. Stanford: Stanford University Press; 1998 [1992].
24. Luhmann N. Essays on self-reference. New York: Columbia University Press; 1990.
25. Abbot KW, Snidal D. Hard law and soft law in international governance. International Organization. 2000;54(3):421-56.
26. Brennan G, Brooks M. On the 'cashing out' hypothesis and 'soft' and 'hard' policies. European Journal of Political Economy. 2011;27:601-10.
27. Eurofound. European industrial relations dictionary. http://www.eurofound.europa.eu/areas/industrialrelations/dictionary [2013-09-11], 2013.
28. Schiebel E, Züger M-E, Holste D. Deliverable D3.1 (Part B) of the UNCOVER FP7-funded project under contract number 282574: Bibliometric analysis of the research community in the field of publication bias. 2013.
29. Schiebel E, Züger M-E. Publication bias in medical research: Issues and communities. In: Gorraiz J, Schiebel E, Gumpenberger C, Hörlesberger M, Moed H, editors. Proceedings of the 14th International Society of Scientometrics and Informetrics Conference; Vienna; 2013. p. 1419-30.
30. Kien C, Van Noord M, Wagner-Luptacik P. Deliverable 3.4 of the UNCOVER FP7-funded project under contract number 282574: Perspectives of stakeholders on publication bias in clinical trials. 2013.
31. Wagner-Luptacik P, Nußbaumer B, Van Noord M. Deliverable 5.1 of the UNCOVER FP7-funded project under contract number 282574: Scenario building to uncover feasible solutions against publication bias. 2013.
32. Wagner-Luptacik P, Nußbaumer B. Deliverable 5.3 of the UNCOVER FP7-funded project under contract number 282574: Road mapping a feasible scenario to overcome publication bias. 2014.
33. AllTrials Campaign. All Trials Registered - All Trials Reported. 2013 [July 24, 2013]; Available from: http://www.alltrials.net
34. Thaler K, Nußbaumer B, Van Noord M, Kien C, Griebler U, Gartlehner G. Deliverable 3.2. of the UNCOVER FP7-funded project under contract number 282574: The effectiveness of interventions for reducing publication bias. 2013.
35. Fanelli D. Positive results receive more citations, but only in some disciplines. Scientometrics. 2013;94(2):701-9.


Potential Impact:
1.4.1 Expected impacts (call requirements)
The expected impacts as given in the call topic HEALTH.2011.4.1-2 were:
• “more empirical evidence is needed to gain insight into this issue”;
• “assess the impact and seek ways to detect effectively and reduce the impact of non-publication of negative studies and study results”; and
• “provide insights on how to avoid duplication of research efforts and allow a more effective funding of health research”.
The work carried out in the UNCOVER project contributed to achieving all three required aspects by providing more evidence, impact assessment, a systematic assessment, and suggestions for viable measures to prevent non-publication and raise resource allocation efficiency.
Evidence & insight into the issue:
UNCOVER provided more insight into the issue of publication bias through the conceptualisation of publication bias in terms of evidence-based medicine and systems theory (including stakeholder mapping and institutional analysis), which allowed the complexity of the problem to be both acknowledged and reduced. This insight helped the project team to focus on the main players in publishing studies (stakeholder map) and their strategies (systematic review, interviews). Evidence was generated in particular through the identification of stakeholders (bibliometric analysis and web crawling) and measures, and through the identification of barriers and facilitating factors to overcoming publication bias (systematic review and stakeholder engagement). The systematic review revealed that measures previously applied in an attempt to reduce publication bias were hardly effective, mainly because of poor compliance with the measures. It became obvious that either new measures have to be devised to counteract publication bias more effectively or, what seems more promising for bringing about systemic change, measures have to be embedded in the system by following a systemic approach. Interviews revealed that there is little awareness of the problem among stakeholders and that they do not consider themselves influential when it comes to overcoming publication bias. In this context, the project contributed directly to awareness raising.
Impact assessment & detection:
Impact was conceptualized and measured in two ways: (1) in the form of bibliometric evaluation and science maps, and (2) in the form of statistical modelling of unpublished but registered studies (software development). Users now have a tool at hand that helps to assess the magnitude of publication bias resulting from unpublished studies, and they can take counteractions to reduce the impact of non-publication in meta-analyses.
Insight into the system to avoid negative consequences of publication bias:
As the stakeholder map made it possible to grasp the relevant stakeholder groups in the field of publication bias, key stakeholders could be engaged throughout the project by means of interviews and workshops to reflect on measures, strategies and existing conflicts of interest. In this way, insights into how the system really works could be gained, and effective, feasible and institutionally accepted measures were generated.
The central results of the UNCOVER project have increased transparency about the different rationalities and strategies of the various stakeholders and thereby support the further adoption of policies and good practices by stakeholders, such that a change of practice contributes to reducing or even overcoming publication bias due to non-publication of results of clinical trials.
1.4.2 Project products (results) and potential impact
Stakeholder map and institutional analysis: In the project, stakeholder mapping and system analysis were adapted to the system spanned by sponsored studies, clinical trial registration, publication of study results and other cornerstones. This was a relatively new area of application for mapping and systems analysis, and it contributed to the design and preparation of interviews and workshops. The stakeholder map therefore had a major impact on the project consortium itself. On the one hand, it was a convenient framework for the design of the participative process and for identifying and selecting the most relevant stakeholder groups for interviews and workshops. On the other hand, it helped to organize and visualise the complexity of the publication process in clinical trials and provided a basis for discussing recommendations for changing undesired publication practice in clinical research. This was important not only for discussions within the project consortium but also served as a guiding frame in UNCOVER workshops with stakeholder engagement. Similarly, the institutional analysis was an indispensable prerequisite for developing the intervention logic of the final recommendations.
Bibliometric output indicators (citation frequency, network authorities, etc.): Bibliometric indicators relevant for providing empirical evidence on publication bias are useful tools for raising awareness among the scientific community working in the field, especially editors and publishers. In addition, they help to improve the overall knowledge base. The bibliometric analyses also illuminated current citation practice in selected systematic reviews and in the larger scientific field. This newly gained knowledge contributed to overall awareness raising and to improving the knowledge base. The findings derived from the bibliometric analyses were presented at the 14th International Society of Scientometrics and Informetrics Conference in Vienna (AT) in 2013; a corresponding paper was published in the conference proceedings. A second paper on the bibliometric findings is currently being prepared.
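To illustrate the kind of indicators mentioned above, the sketch below computes citation frequency (in-degree) and HITS network authority scores for a toy citation graph. It is a minimal Python illustration assuming the networkx library and hypothetical paper identifiers; it is not the bibliometric pipeline used in the project.

```python
# Toy citation graph: a directed edge A -> B means "paper A cites paper B".
# Illustrative only; paper IDs are hypothetical.
import networkx as nx

citations = [("p1", "p3"), ("p2", "p3"), ("p2", "p4"), ("p4", "p3"), ("p5", "p4")]
G = nx.DiGraph(citations)

# Citation frequency: number of incoming citation links per paper.
citation_counts = dict(G.in_degree())

# Network authorities: HITS authority scores (papers cited by strong "hub" papers).
hubs, authorities = nx.hits(G, max_iter=500, normalized=True)

for paper in sorted(G.nodes()):
    print(f"{paper}: cited {citation_counts[paper]} times, "
          f"authority {authorities[paper]:.3f}")
```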
Inventories of relevant stakeholders/key opinion leaders, and of relevant activities in the field: These inventories were a main basis for the organisation of the stakeholder participation in the project. The lists are available on the project website (except for the list with the interview partners) and can be used by researchers and clinical investigators as well as by publishers and policy makers to support their own research or policies.
Systematic review of effectiveness of measures to prevent and reduce publication bias: This review compiled and assessed measures that had previously been employed in an attempt to overcome publication bias. It revealed little evidence that current measures are actually succeeding in reducing the problem of publication bias. One major conclusion is that clinical trial registries do not provide comprehensive and accurate information about the methods and pre-specified outcomes of registered clinical trials that would allow the detection and deterrence of selective outcome reporting, even though their use has increased markedly since 2005, when the International Committee of Medical Journal Editors started to require trial registration as a precondition for publication. Competing interests of sponsors and researchers are seen as major barriers to prospective trial registration. Another difficulty seems to be the existence of many different, inconsistent registries all over the world within different legal systems. Likewise, electronic publishing does not appear to have increased the number of published negative results or the amount of information provided regarding all outcomes of RCTs. The information derived from the systematic review improves the knowledge base that informs policy makers. It is also relevant for most other stakeholder groups, particularly those who rely on comprehensive and accurate information in trial registries, such as institutions funding clinical trials, researchers, clinical investigators, systematic reviewers and publishers. Not least, database managers of trial registries need to be made aware of this. The results of the systematic review were presented several times at international scientific conferences, including the Evidence Live Conference at the University of Oxford (UK) in 2013 and the Cochrane Colloquium in Ottawa (CA) in 2013. A peer-reviewed publication (Thaler et al. “Current interventions for reducing publication bias are not supported by evidence about their effectiveness: a systematic review”) has been submitted to the Journal of Clinical Epidemiology.
Interviews with editors and other stakeholders: The series of interviews with editors and other key stakeholders constituted an intervention into the system and provided information on known, as well as anticipated but not yet documented, internal and external barriers and facilitating factors concerning non-publication of clinical trials; on policies that are (and are not) working; and on institutional acceptance by individual and organizational stakeholders. The engagement of stakeholders through interviews did not only serve to improve the knowledge base of the project consortium, but was also an effective means of raising awareness across all interviewed stakeholder groups; after all, not all interviewees had previously been aware of the problem of publication bias. In addition, the majority of stakeholder groups had the impression that they were not influential forces in overcoming publication bias. Out of ten stakeholder groups, only four considered themselves important in preventing publication bias (ranked by count): the research community, regulatory agencies, policy decision-takers (incl. EC), and research funding agencies. The majority, however, considered other stakeholders to be (more) important for preventing publication bias in clinical trials, thus ‘shifting the burden’. A peer-reviewed publication of the results derived from the interviews (Kien et al. “Barriers to and facilitators of interventions to counter publication bias: thematic analysis of scholarly articles and stakeholder interviews”) has been submitted to BMC Health Services Research.
Novel open source software: Novel open source software was released to the public. This software supports a better understanding of the consequences of unpublished study results on meta-analyses. Consequently, the tool is of relevance to all researchers in clinical research, especially those conducting systematic reviews and/or meta-analyses. It is meant to assist researchers conducting a meta-analysis in gauging the potential impact of missing data from ongoing and unpublished, but registered, studies on their results and conclusions. The software is freely and openly available. It was demonstrated during a webinar conducted by the University of North Carolina at Chapel Hill in 2013, presented at the Evidence Live Conference at the University of Oxford (UK) in 2013, and at the Cochrane Colloquium in Ottawa (CA) in 2013. Recently, it was published open access (Kim NY, et al. SAMURAI: Sensitivity analysis of a meta-analysis with unpublished but registered analytical investigations (software). Systematic Reviews. 2014;3(1)).
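The sketch below illustrates the general sensitivity-analysis idea behind such a tool: hypothetical effect sizes are imputed for unpublished but registered studies under different assumptions, and the inverse-variance (fixed-effect) summary of the meta-analysis is recomputed to see how much it could shift. It is a conceptual Python illustration with invented numbers; it is not the released SAMURAI software itself.

```python
# Conceptual sensitivity analysis of a meta-analysis with unpublished but
# registered studies (illustrative only; all numbers are invented).
import math


def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted summary effect and its standard error."""
    weights = [1.0 / v for v in variances]
    summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return summary, math.sqrt(1.0 / sum(weights))


# Published trials: observed effect sizes (e.g. log odds ratios) and variances.
published_effects = [-0.40, -0.25, -0.55]
published_variances = [0.04, 0.06, 0.05]

# Unpublished but registered trials: only rough variance guesses are available;
# their effects are unknown and therefore imputed under different assumptions.
unpublished_variances = [0.05, 0.07]
scenarios = {"a null effect": 0.0, "the published effect": -0.40, "an opposite effect": 0.40}

base, base_se = fixed_effect_summary(published_effects, published_variances)
print(f"published studies only: {base:.3f} (SE {base_se:.3f})")

for label, imputed in scenarios.items():
    effects = published_effects + [imputed] * len(unpublished_variances)
    variances = published_variances + unpublished_variances
    est, se = fixed_effect_summary(effects, variances)
    print(f"assuming unpublished trials show {label}: {est:.3f} (SE {se:.3f})")
```

If the summary estimate remains materially unchanged across such scenarios, the meta-analytic conclusion can be regarded as robust to the non-publication of the registered trials; if it shifts towards the null or changes sign, the results should be interpreted with caution.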
Scenarios based on feasibility, institutional acceptance and effectiveness of prevention measures: The participatory approach to scoping and developing scenarios, as well as the translation of a feasible scenario into a roadmap, are expected to contribute a significant advance in the field of publication bias by incorporating and reflecting various aspects of publication bias through the direct engagement of stakeholders. The need to overcome publication bias, to adequately address the complexity of the issue and to ultimately chart pathways for changing current practice was anticipated in UNCOVER’s participatory approach, which emphasized the critical importance of learning through interaction between experts and stakeholders. This approach aimed to develop a viable ‘breakthrough’ scenario for overcoming publication bias in clinical trials. Within this context, roadmapping aimed to sketch viable pathways for the long-term, sustainable ‘systemic innovation’ required for overcoming publication bias in clinical trials, recognizing the need for the development of an entire socio-technical ‘transformation’ system covering institutional, market, policy, educational, regulatory and technological issues.
The developed scenarios and roadmap are of considerable interest for a wide group of stakeholders, not least methods groups working on (publication) bias, the evidence-based medicine and scientific community, editors and publishers, as well as policy makers and funding agencies. The scenarios provide not only possible visions of the future of the publication process in clinical research; the roadmap also outlines a feasible plan for achieving a desirable future, i.e. a future in which publication bias has been overcome or at least reduced. All stakeholder groups are likely to benefit from the roadmap, as the dependencies of actions and actors are graphically depicted in a way that provides guidance on the various temporal pathways towards minimizing publication bias and maximizing patient well-being. Although individual measures may suggest linear or even isolated developments, measures in fact frequently interlink with other measures, providing or receiving necessary inputs. UNCOVER implemented a process of dialogue in which communication with different stakeholders was promoted throughout the whole project duration in order to include a great diversity of perspectives in the ongoing discussions. Because of the participatory project events, dissemination is conceptualised as a two-way interaction process between providers and users of specific knowledge from various areas. Information about the project is expected to be broadly disseminated through the active engagement of stakeholders in group settings. In addition, results of the participative approach to overcoming publication bias were presented at the IFKAD Conference, Matera (IT), in 2014. Currently, a peer-reviewed publication is being prepared that explores how the process of responsible research and innovation can become a helpful framework to better align clinical research with societal needs in conducting and publishing clinical trials.
Recommendations: It is expected that the recommendations derived from the UNCOVER project are of interest for all involved stakeholder groups. After all, we believe that to effectively reduce publication bias, it is necessary to apply a systemic approach and implement individual measures as orchestrated sets of interventions that are carefully aligned with the social system context and that imply collaboration of various stakeholder groups. We suggest the implementation of several interlinked and complementary approaches that comprise bundles of individual measures and rely on hard law as well as soft law interventions. The recommendations will be a significant input for future efforts concerning basic and methods research in the field of publication bias. Currently, a peer-reviewed manuscript on the identified recommendations is being prepared.

List of Websites:
Website: For further information on the project see: www.publicationbias.eu

Project Coordinator:
Dr Manuela Kienegger / Dr Dirk Holste
AIT Austrian Institute of Technology
Innovation Systems Department
Donau-City-Straße 1
1220 Vienna (Austria)

The UNCOVER project team:
AIT Austrian Institute of Technology GmbH, Innovation Systems Department (Coordinator)
Manuela Kienegger, Dirk Holste
Eva Buchinger, Joachim Klerx, Brigitte Palensky, Petra Wagner, Beatrice Rath, Petra Schaper-Rinkel, Edgar Schiebel, Silvia Steinbrunner, Maria-Elisabeth Züger
Donau-Universität Krems, Department for Evidence-based Medicine and Clinical Epidemiology
Gerald Gartlehner
Evelyn Auer, Andrea Chapman, Flamm, Ursula Griebler, Ludwig Grillich, Christina Kien, Klerings, Müllner, Barbara Nussbaumer, Michaela Strobelberger, Birgit Teufer, Kylie Thaler, Megan Van Noord
University of North Carolina at Chapel Hill, Department of Biostatistics
Shrikant I. Bangdiwala
Noory Y Kim