Below are two articles about bias in published studies: data on adverse events and other problems often go unpublished, so the public sees mainly positive results.
One in Three Newly Approved Drugs Has Safety Issues | Medpage Today – by Salynn Boyles – May 09, 2017
Nearly a third of drugs approved by the FDA from 2001 through 2010 had new safety issues detected in the years after they entered the market, researchers found.
Among 222 novel therapeutic drugs approved during the period, 71 (32%) had postmarket safety events.
Three, including the COX-2 pain reliever Bextra, were withdrawn due to safety concerns, and 61 boxed warnings were issued, according to the study published online May 9 in JAMA.
Lead author Nicholas Downing said the findings highlight the importance of continuous safety monitoring in the years after new drugs enter the market.
Pre-market clinical trials, which usually involve fewer than 1,000 patients taking a drug for a few months, are by definition not designed to identify long-term safety issues. And in the current de-regulatory political climate, as the FDA is called on to accelerate drug approvals, the study findings provide insights into the agency's process.
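The sample-size limitation described above can be quantified with a standard back-of-the-envelope calculation (a sketch for illustration, not part of the original study): if an adverse event occurs at true per-patient rate p, the chance that a trial of n patients observes it at least once is 1 − (1 − p)^n, and by the statistical "rule of three," a trial in which no events are seen only bounds the rate at roughly 3/n with 95% confidence.

```python
def prob_at_least_one(p: float, n: int) -> float:
    """Probability that a trial of n patients observes at least one
    adverse event occurring at true per-patient rate p."""
    return 1.0 - (1.0 - p) ** n

def rule_of_three_upper_bound(n: int) -> float:
    """'Rule of three': if zero events are seen in n patients, the
    approximate 95% upper confidence bound on the event rate is 3/n."""
    return 3.0 / n

# A rare event (1 in 10,000 patients) in a typical pre-market trial of 1,000:
p, n = 1e-4, 1000
print(f"P(observe >= 1 event) = {prob_at_least_one(p, n):.3f}")        # ~0.095
print(f"95% upper bound if none seen = {rule_of_three_upper_bound(n):.4f}")
```

In other words, a 1,000-patient trial has less than a 10% chance of seeing even one case of a 1-in-10,000 adverse event, which is why such harms surface only after market approval.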
The main study outcome was a composite of
- market withdrawals due to safety concerns,
- FDA issuance of incremental boxed warnings added postmarket, and
- FDA issuance of safety communications through Feb. 28, 2017.
Of the 222 drugs approved during the decade examined, 183 (82.4%) were pharmaceuticals and 39 (17.6%) were biologics. The median total review time was 311 days, and the total review time was less than 200 days for 54 novel therapeutics. Approximately one quarter of the drugs (23.6%) were near-regulatory deadline approvals.
Among all approved drugs, the median duration of market availability was 11.7 years (IQR 8.7 to 13.8 years), and there were 123 postmarket safety events affecting 71 of the novel therapeutics (32%).
Among the main findings:
Class warnings were issued for
- triptans (almotriptan, frovatriptan, eletriptan) in 2006,
- phosphodiesterase 5 inhibitors (vardenafil, tadalafil) in 2007,
- bisphosphonates (zoledronic acid, ibandronate) in 2008, and
- dipeptidyl peptidase 4 inhibitors (saxagliptin, sitagliptin) in 2015.
In multivariable analyses, postmarket safety events were more frequent for biologics compared with pharmaceuticals and among psychiatric compared with cancer and hematologic therapies.
Priority review and orphan drug status were not significantly associated with postmarket safety events, while accelerated approval and near-regulatory deadline approval were associated with increased frequency of postmarket safety events.
Adverse events (AEs) are harmful or undesirable outcomes that occur during or after the use of a drug or intervention but are not necessarily caused by it. Information on the adverse events of health care interventions is important for decision-making by regulators, policy makers, health care professionals, and patients. Serious or important adverse events may occur rarely and, consequently, systematic reviews and meta-analyses that synthesize harms data from numerous sources (potentially involving both published and unpublished datasets) can provide greater insights.
The perceived importance of systematic reviews in assessing harms is exemplified by the growing number of such reviews published over the past few years.
The Database of Abstracts of Reviews of Effects (DARE) includes 104 reviews of adverse events published in 2010 and 344 in 2014. We have previously noted that systematic reviewers, in line with current guidance, are increasingly conducting broader searches that include unpublished sources, such as
- theses and dissertations,
- conference proceedings,
- trial registries, and
- information provided by authors or industry.
This is despite the difficulties in obtaining and incorporating unpublished data into systematic reviews.
Serious concerns have emerged regarding publication bias or selective omission of outcomes data, whereby negative results are less likely to be published than positive results.
This has important implications for evaluations of adverse events because conclusions based on only published studies may not present a true picture of the number and range of the events.
The additional issue of poor reporting of harms in journal articles has also been repeatedly highlighted.
The problem of missing or unavailable data is currently in the spotlight because of campaigns such as AllTrials (www.alltrials.net), the release of results in trial registries (through the 2007 Food and Drug Administration Amendments Act), and increased access to clinical study reports (CSRs).
In addition, reporting discrepancies, including omission of information, have recently been identified between studies published as journal articles and the corresponding unpublished data.
These emerging concerns indicate that publication and reporting biases may pose serious threats to the validity of systematic reviews of adverse events.
Hence, we aimed to estimate the potential impact of additional data sources and the extent of unpublished information when conducting syntheses of adverse events.
This present methodological review updates previous work by focusing on quantifying the amount of unpublished adverse events data as compared to published data, and assessing the potential impact of including unpublished adverse events data on the results of systematic reviews.
The 28 studies included in our review indicate how much adverse events data would have been missed had unpublished data not been included in assessments, in terms of the number of adverse events, the types of adverse events, and the risk ratios of adverse events.
We identified serious concerns about the substantial amount of unpublished adverse events data that may be difficult to access or “hidden” from health care workers and members of the public.
Incomplete reporting of adverse events within published studies was a consistent finding across all the methodological evaluations that we reviewed.
This was true whether the evaluations were focused on availability of data on a specific named adverse event, or whether the evaluations aimed to assess general categories of adverse events potentially associated with an intervention.
Our findings suggest that it will not be possible to develop a complete understanding of the harms of an intervention unless urgent steps are taken to facilitate access to unpublished data.
The extent of “hidden” data has important implications for clinicians and patients who may have to rely on (incomplete) published data when making evidence-based decisions on benefit versus harms.
Our findings suggest that poor reporting of harms data, selective outcome reporting, and publication bias are very serious threats to the validity of systematic reviews and meta-analyses of harms.
In support of this, there are case examples of systematic reviews that arrived at a very different conclusion once unpublished data were incorporated into the analysis.
These examples include the Cochrane review on a neuraminidase inhibitor, oseltamivir (Tamiflu), and the Health Technology Assessment (HTA) report on reboxetine and other antidepressants.
Although the process of searching for unpublished data and/or studies can be resource-intensive—requiring, for example, contact with industry, experts, and authors and searches of specialist databases such as OpenGrey, ProQuest Dissertations and Theses, conference databases, websites, and trial registries—we strongly recommend that reviews of harms make every effort to obtain unpublished trial data.
In addition, review authors should aim to specify the number of studies and participants for which adverse outcome data were unavailable for analysis.
The overall evidence should be graded downwards in instances where substantial amounts of unpublished data were not accessible.
Our review demonstrates the urgent need to progress towards full disclosure and unrestricted access to information from clinical trials.
This is in line with campaigns such as AllTrials, which are calling for all trials to be registered and the full methods and results to be reported.
However, we are starting to see major improvements in the availability of unpublished data, driven by initiatives of the European Medicines Agency (EMA) (under its 2015 policy on publication of clinical data, the agency has been releasing clinical study reports on request, with proactive publication expected by September 2016), European law (the Clinical Trial Regulation), the FDA Amendments Act of 2007, which requires submission of trial results to registries, and pressure from Cochrane to fully implement such policies.
The discrepant findings in numbers of adverse events reported in the unpublished reports and the corresponding published articles are also of great concern.
Systematic reviewers may not know which source contains the most accurate account of results and may be making choices based on inadequate or faulty information.
Journal editors and readers of systematic reviews should be aware that a tendency to overestimate benefit and underestimate harm in published papers can potentially result in misleading conclusions and recommendations.
The available examples clearly demonstrate that availability of unpublished data leads to a substantially larger number of trials and participants in the meta-analyses. We also found that the inclusion of unpublished data often leads to more precise risk estimates (with narrower 95% confidence intervals), thus representing higher evidence strength according to the GRADE (Grading of Recommendations Assessment, Development and Evaluation) classification, in which strength of evidence is downgraded if there is imprecision.
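The precision gain described above can be sketched with a standard fixed-effect (inverse-variance) meta-analysis of log risk ratios. The trial figures below are made up purely for illustration; they are not data from the review. The point is mechanical: adding trials adds inverse-variance weight, which shrinks the standard error of the pooled estimate and narrows the 95% confidence interval.

```python
import math

def pool(trials):
    """Fixed-effect (inverse-variance) pooling of (log risk ratio, SE) pairs.
    Returns the pooled risk ratio and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled_log = sum(w * lrr for (lrr, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    lo = math.exp(pooled_log - 1.96 * pooled_se)
    hi = math.exp(pooled_log + 1.96 * pooled_se)
    return math.exp(pooled_log), (lo, hi)

# Hypothetical (log risk ratio, standard error) per trial:
published = [(math.log(1.3), 0.25), (math.log(1.1), 0.30)]
unpublished = [(math.log(1.5), 0.28), (math.log(1.4), 0.22)]

rr_pub, ci_pub = pool(published)
rr_all, ci_all = pool(published + unpublished)
print(f"published only  : RR={rr_pub:.2f}, 95% CI {ci_pub[0]:.2f}-{ci_pub[1]:.2f}")
print(f"with unpublished: RR={rr_all:.2f}, 95% CI {ci_all[0]:.2f}-{ci_all[1]:.2f}")
```

With the extra (hypothetical) unpublished trials included, the pooled interval is visibly narrower, which is exactly the imprecision criterion GRADE uses when rating evidence strength.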
Few studies compared published and unpublished data for nonpharmacological interventions. Yet publication bias for adverse events of nondrug interventions may be just as important as for drug interventions.
Unpublished adverse events data for nondrug interventions may differ from unpublished adverse events data for drugs because of aspects such as regulatory requirements and industry research.
In conclusion, there is strong evidence that substantially more information on adverse events is available from unpublished than from published data sources, and that higher numbers of adverse events are reported in the unpublished than the published version of the same studies.
The extent of “hidden” or “missing” data prevents researchers, clinicians, and patients from gaining a full understanding of harm, and this may lead to incomplete or erroneous judgements on the perceived benefit to harm profile of an intervention.
Authors of systematic reviews of adverse events should attempt to include unpublished data to gain a more complete picture of the adverse events, particularly in the case of rare adverse events.
In turn, we call for urgent policy action to make all adverse events data readily accessible to the public in a full, unrestricted manner.
Copyright: © 2016 Golder et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data are within the paper and its Supporting Information files.