Tuesday, June 19, 2012

Skewed Results? Failure to Account for Clinical Trial Drop-Outs Can Lead to Erroneous Findings in Top Medical Journals

Note: When subjects drop out of trials, authors often handle the
missing data with Last Observation Carried Forward (LOCF). LOCF is an
imputation method used when data are longitudinal (i.e., repeated
measures are taken per subject at each time point): the last observed
non-missing value is carried forward to fill in the missing values at
later points in the study. However, this assumes the subject did not
leave the trial because of adverse effects; if they did, the
carried-forward value misrepresents how they actually fared. Another
way of dealing with the issue is to exclude all data from subjects who
dropped out, but this can bias results as well. A minimal sketch of
LOCF follows.
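
To make the mechanics concrete, here is a minimal sketch of LOCF in
Python using pandas. The data and column names are invented for
illustration.

    import numpy as np
    import pandas as pd

    # Invented longitudinal data: rows are subjects, columns are visits.
    scores = pd.DataFrame(
        {
            "week_0": [10.0, 12.0, 9.0],
            "week_4": [8.0, 11.0, np.nan],    # subj_3 dropped out after week 0
            "week_8": [7.0, np.nan, np.nan],  # subj_2 dropped out after week 4
        },
        index=["subj_1", "subj_2", "subj_3"],
    )

    # LOCF: carry each subject's last observed value forward across the
    # later visits. This bakes in the assumption that the outcome stayed
    # flat after the subject left -- exactly what is doubtful when the
    # drop-out was caused by worsening or adverse effects.
    locf = scores.ffill(axis=1)
    print(locf)

On this toy data, subj_3's week 0 score of 9.0 is copied into weeks 4
and 8 as if nothing had changed after the drop-out.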

Similar issues have been raised regarding behavioral interventions
such as CBT and GET: "There is limited evidence about adverse effects
associated with behavioural interventions. Withdrawals from treatment
in RCTs suggest that there may be an issue but the evidence is often
difficult to interpret because of poor reporting." (Chambers et al.,
2006)


Skewed Results? Failure to Account for Clinical Trial Drop-Outs Can
Lead to Erroneous Findings in Top Medical Journals

Up to a third of clinical trials studied that found an intervention
effective might, in fact, be wrong
According to Professor Akl of UB, it has always been suspected, but
never proven, that loss to follow-up introduces bias into the results
of clinical trials.

Release Date: June 13, 2012

Buffalo, N.Y. -- A new University at Buffalo study of publications in
the world's top five general medical journals finds that when clinical
trials do not account for participants who dropped out, results are
biased and may even lead to incorrect conclusions.

Published recently in the British Medical Journal, the methodological
study (at http://www.bmj.com/content/344/bmj.e2809) consisted of a
systematic analysis of 235 clinical trials published in the world's
top five general medical journals between 2005 and 2007 that claimed a
statistically significant effect.

"We found that in up to a third of trials, the results that were
reported as positive -- in other words, statistically significant --
would become negative -- not statistically significant -- if the
investigators had appropriately taken into consideration those
participants who were lost to follow-up," says Elie A. Akl, MD, MPH,
PhD, lead author, and associate professor of medicine, family medicine
and social and preventive medicine at the University at Buffalo School
of Medicine and Biomedical Sciences and School of Public Health and
Health Professions. He also has an appointment at McMaster University.

"In other words, one of three claims of effectiveness of interventions
made in top general medical journals might be wrong," he says.

In one example, a study that compared two surgical techniques for
treating stress urinary incontinence found that one was superior. But
in the analysis published this month, it was found that 21 percent of
participants were lost to follow-up. "When we reanalyzed that study by
taking into account those drop-outs, we found that the trial might
have overestimated the superiority of one procedure over the other,"
Akl says.

According to Akl, it has always been suspected, but never proven, that
loss to follow-up introduces bias into the results of clinical trials.
"The methodology we developed allowed us to provide that proof," he
says.

The methodology that he and his coauthors developed consists of
sensitivity analyses, a statistical approach that tests how robust the
results of an analysis are under specific assumptions -- in this case,
assumptions about the outcomes of patients lost to follow-up.
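
As a rough illustration of the idea (a sketch of one common form of
such an analysis, not necessarily the exact scheme Akl and colleagues
applied), consider a two-arm trial with a binary outcome: reanalyze it
under a deliberately unfavorable assumption about the drop-outs and
check whether the significant result survives. All counts below are
invented.

    from scipy.stats import fisher_exact

    # Invented counts of completers and drop-outs per arm.
    intervention = {"success": 60, "failure": 30, "lost": 15}
    control = {"success": 45, "failure": 45, "lost": 5}

    def p_value(i_succ, i_fail, c_succ, c_fail):
        # Two-sided Fisher exact test on the 2x2 outcome table.
        _, p = fisher_exact([[i_succ, i_fail], [c_succ, c_fail]])
        return p

    # Complete-case analysis: simply ignore everyone lost to follow-up.
    p_complete = p_value(intervention["success"], intervention["failure"],
                         control["success"], control["failure"])

    # Worst-case analysis: assume every drop-out in the intervention arm
    # failed and every drop-out in the control arm succeeded.
    p_worst = p_value(intervention["success"],
                      intervention["failure"] + intervention["lost"],
                      control["success"] + control["lost"],
                      control["failure"])

    print(f"complete-case p = {p_complete:.3f}")  # significant at 0.05
    print(f"worst-case    p = {p_worst:.3f}")     # no longer significant

If the conclusion flips under a plausible assumption like this, the
reported effect is not robust to the loss to follow-up.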

"This study gives us a better understanding of the problem of loss to
follow-up in clinical trials and provides us with better tools to
address it," Akl says.

"This methodology will allow those who conduct the trials and those
who use their results, including clinicians, other scientists,
developers of clinical guidelines, policymakers and bodies like the
Food and Drug Administration, to better judge the risk of bias,"
concludes Akl.

The studies that were analyzed had previously been published in Annals
of Internal Medicine, British Medical Journal, the Journal of the
American Medical Association, Lancet and the New England Journal of
Medicine. To be included, a trial had to have reported a statistically
significant effect.

Akl led this major study, funded by Pfizer, which took three years to
complete. His co-authors, 20 clinical epidemiologists, are from the
following institutions: McMaster University; University Hospital
Basel; Kaiser Permanente Northwest; Hospital for Sick Children in
Toronto; Institute for Work and Health; Université de Sherbrooke;
University Children's Hospital Tuebingen; Pontificia Universidad
Catolica de Chile; Tel Aviv University; the University of Ottawa; the
University of Freiburg and the University of Oxford.
