Wednesday, July 29, 2009

Aerobic Capacity

[Moderator's Note:
The original post to which this commentary refers can be found at  ]

With reference to the above-titled article recently published by Ali
A. Weinstein et al. (five of the authors associated with NIH and two
with George Mason University in nearby Northern Virginia):

There are serious problems with this study, reflecting the
limitations of most research I have seen conducted by the U.S.
Government regarding "CFS" (Chronic Fatigue Syndrome).

Right now the best estimate of the prevalence of CFS in the U.S. is
one million patients. CDC's estimate is 4-7 million patients.
Rheumatoid Arthritis (RA) is at least as prevalent; I am not familiar
with the prevalence of Polymyositis (PM).

Surely these researchers could have found more subjects for their
study. They used 9 patients with PM, 10 with RA, and 10 with CFS - 29
patients in all.

Statistical analysis of this type compares the study cohort with a
normal cohort, using each group's mean, number of subjects, and
variance.
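That comparison can be sketched in a few lines. The numbers below are purely hypothetical, chosen only to show how a small sample inflates the standard error; they are not taken from the Weinstein study or any real data.

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    """Two-sample t statistic (Welch form), computed from each
    group's mean, variance, and number of subjects."""
    standard_error = math.sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / standard_error

# Hypothetical VO2 MAX summary numbers, for illustration only:
# the same 4-point gap between group means...
small = welch_t(28.0, 36.0, 10, 32.0, 36.0, 10)   # 10 subjects per group
large = welch_t(28.0, 36.0, 50, 32.0, 36.0, 50)   # 50 subjects per group

# ...falls short of the ~2.0 critical value with 10 subjects per group,
# but comfortably exceeds it with 50.
print(round(small, 2), round(large, 2))
```

The same difference in means that is "significant" at 50 subjects per group vanishes into the noise at 10, because the standard error shrinks only as the square root of the sample size.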

If the study fails to find a statistically significant difference
between the patient group and the control group it has NOT
"disproved" the thesis that there is such a difference (in this case,
it would be a difference in performance on a VO2 MAX stress test).

The study has merely failed to demonstrate such a difference.

In statistical research, the likelihood of detecting a real
difference (the study's statistical power) rises directly with the
size of the study.

In this case, with only 9-10 patients in each of the study's three
disease categories, and only 29 patients in total, a statistically
significant difference would be a surprise.

The "magic number" for statistical testing is 50. Below that, there
are strong "small sample" problems.
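The point can be made concrete with a standard normal-approximation power calculation for a two-sided, two-group comparison at the usual 0.05 significance level. This is a generic textbook approximation, not a reanalysis of either study.

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def approx_power(effect_size, n_per_group, alpha_z=1.96):
    """Approximate power of a two-sided, two-sample test at the 0.05
    level (normal approximation), for a standardized effect size
    and equal group sizes."""
    return normal_cdf(effect_size * math.sqrt(n_per_group / 2.0) - alpha_z)

# For a medium standardized effect (d = 0.5):
# with 10 patients per group, power is roughly 20% -
# the study will miss a real difference four times out of five.
# With 50 per group, power rises to roughly 70%.
print(round(approx_power(0.5, 10), 2), round(approx_power(0.5, 50), 2))
```

With 10 patients per group, in other words, a null result is the expected outcome even when a genuine, medium-sized difference exists.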

A second error sadly common in government research with regard to
CFS is the apparent lack of familiarity with other studies conducted
on the same subject.

The recent study published by Staci Stevens and others on the same
subject (the VO2 MAX test and CFS) showed that with patients who were
moderately affected with CFS, the scores on a single day of testing
were comparable to those of a sample of normals.

On the SECOND day of the study, however, Stevens found a significant
and large difference in the VO2 MAX score. The CFS cohort scores, on
average, were half those of the matched sample.

This information was readily available to the authors before the
publication of the study.

So the knowledge in the field had already moved beyond testing
patients on just one day, to making use of the CDC-defined symptom of
"post-exertional malaise" in CFS to test their response a second day
- and it was on the second day that the findings were notably robust.

The Weinstein study was outdated on the day it was published; even if
it were not, well-known small-sample problems in statistical research
should have cast doubt on the viability of a study that drew 9
patients from a disease that affects one million.

One might even suspect the purpose was to continue to tell a story
about CFS as a "MUPS" illness, using insurance parlance - a syndrome
with "Medically Unexplained Physical Symptoms."

Surely that was not the purpose.

However, NIH now has another dilemma to explain - what's wrong with
all these patients, formerly diagnosed with CFS, who do have abnormal
scores on the VO2 MAX?

Mary M. Schweitzer, Ph.D.
