It has been a long time since I looked at this, but it seems to make sense. As far as I can see, what they really needed was a "rolling" mean: a baseline mean column for the subset that remained at each point.

I didn't see this thread until you mentioned it elsewhere. If you want to see how meaningless and bad this study is:
Take Table 3 and copy/paste it into a spreadsheet. It contains the data for Group A total and Group A responders.
Use formulas to calculate N and the mean EIPS for Group A non-responders. For N, that is Group A total minus Group A responders. The mean EIPS follows from the weighted-mean identity: total N times total mean equals responder N times responder mean plus non-responder N times non-responder mean, so you can solve for the non-responder mean.
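For anyone who wants to check the arithmetic, here is a minimal sketch of that spreadsheet calculation in Python. Only the method comes from the post; the example inputs are hypothetical, chosen so the derived subgroup matches the N=27, mean 3.77 baseline figure discussed below.

```python
# A minimal sketch of the spreadsheet arithmetic, in Python instead of
# spreadsheet formulas. The example inputs below are hypothetical.

def nonresponder_stats(n_total, mean_total, n_resp, mean_resp):
    """Recover the non-responder subgroup's N and mean from published
    group totals, via the weighted-mean identity:
        n_total * mean_total == n_resp * mean_resp + n_nonresp * mean_nonresp
    """
    n_nonresp = n_total - n_resp
    mean_nonresp = (n_total * mean_total - n_resp * mean_resp) / n_nonresp
    return n_nonresp, mean_nonresp

# Hypothetical inputs, chosen so the derived subgroup comes out to
# N=27 with a mean of about 3.77:
print(nonresponder_stats(60, 4.0, 33, 4.188))  # -> (27, ~3.77)
```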
You get a Group A non-responder group of N=27 whose mean EIPS goes from 3.77 at time=0 (N=27) to 6.45 at time=24 (6 years) (N=1). Wow, it "looks" like the non-responders improved their EIPS score even more than the responders! But "responder" is defined as a patient with a change of at least 1 on the EIPS, and "non-responder" as a patient with a change of less than 1. So after 6 years the non-responders supposedly "look" like they improved from 3.77 to 6.45, an increase of 2.68 points, even though they are defined as patients who improved by less than 1 point. Does that make any sense? No, and it makes sense that it doesn't: it is simply not legitimate to compare the EIPS over time across subsets with different N as patients drop out. Most of that apparent increase comes from attrition down to about 5% of the original patient base. Which, interestingly, happens to match the statistic I've seen that about 5% of CFS patients get well.
Pardon me for being harsh about this paper, but good Lord. I honestly can't believe a paper like this can be published, even in a journal I've never heard of, or that any MD would put their name on it. Unless their point was to show that patients who don't improve drop out, while patients who improve stick with their current treatment/doctor.
Of course, please feel free to point out if I've made any errors.
Gerwyn also had a point about the journal the study was published in. My 'source' (who wishes to remain anonymous and who has CFS) does not believe there is a bias against getting CFS papers published; he believes good journals would love to publish groundbreaking work on CFS, since that's the type of thing they live for, but that really good papers with really significant findings are rare. The XMRV paper was a notable exception. Really good, rigorous papers, he felt, should be able to find their way into really good journals. The fact that this paper was not in a significant journal was not a strong point.
The problem is that, unfortunately, even the bad psychogenic papers are scientifically far better than this one. If this is an example of the best that CFS researchers have to offer, then it's no wonder research into this disease is going nowhere. It's absolutely no surprise that a paper like this can't get published anywhere better; the surprise is that it got published at all. I wouldn't even be able to get it past a TA for a class assignment without getting busted.
If this is an example of the best that CFS researchers have to offer, then it's no wonder research into this disease is going nowhere.
Maybe you're confused, because this is not "an example of the best that CFS researchers have to offer". This is a retrospective study of a collection of clinical data. It is not "research" in the commonly accepted sense. To suggest that it represents the best CFS researchers have to offer is naive at best.
This review and analysis simply looked at the clinical results of many years of one physician's practice. It is not like formal research, where the experiments are planned in advance, the patient set is carefully selected, and so on. As such, it would not typically be published in a major journal. A paper like this is written more for the benefit of other clinical practitioners: to demonstrate the clinical results of a certain treatment plan, for example.
We have asked that successful clinical practitioners report the results of their treatment. Then we criticize one for doing just that because it wasn't something else. And we want clinical physicians and medical researchers to help us?
People may not like this review and analysis, and that's fine. But judge it for what it is intended to be, not for what it isn't.
The fastest racehorse won't win a dog show.
It does not even do that. You cannot demonstrate clinical results by comparing the mean of 100% of the patients at baseline against the mean of the remaining 6% at the end. The only thing that demonstrates is the effect of dropout.
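To make the dropout effect concrete, here is a toy simulation (my own illustration, not data from the paper): 100 hypothetical patients whose scores never change, with lower scorers more likely to drop out. The mean of those remaining still climbs.

```python
# A toy simulation of the dropout effect. Every patient's score is
# frozen; only attrition moves the group mean upward.

import random

random.seed(0)

# 100 hypothetical patients with fixed EIPS-like scores; nobody improves.
patients = [random.uniform(2.0, 7.0) for _ in range(100)]

remaining = patients[:]
for visit in range(8):
    if not remaining:
        break
    mean = sum(remaining) / len(remaining)
    print(f"visit {visit}: N={len(remaining):3d}, mean={mean:.2f}")
    # Illustrative dropout rule: the lower the score, the more likely
    # the patient is to miss every later visit.
    remaining = [s for s in remaining if random.random() < s / 7.0]
```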
However, I do agree with your first statement. I don't actually think this is the best CFS researchers have to offer; I have seen better, and this was among the worst. That was in response to people wondering why these papers don't get published, or don't get into better journals, and claiming bias as the reason.
Actually, I don't ask clinicians to report results, and if they do, I don't ask them to report results haphazardly. I ask for sound methodology, no matter what kind of thing they're doing. As Dolphin was trying to say earlier, they should at least compare the same patient to the same patient over time.
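And the fix isn't exotic. A minimal sketch of that same-patient comparison, with made-up patient IDs and scores: restrict the analysis to patients who have both a baseline and a follow-up measurement, and compute each patient's own change.

```python
# A minimal sketch of a within-patient (paired) comparison. The patient
# IDs and scores here are hypothetical; the point is restricting the
# analysis to patients observed at both time points.

baseline = {"p01": 3.5, "p02": 4.0, "p03": 3.0, "p04": 4.5}
followup = {"p01": 4.5, "p03": 3.0}  # p02 and p04 dropped out

completers = sorted(set(baseline) & set(followup))
changes = [followup[p] - baseline[p] for p in completers]

print(f"completers: {completers}")
print(f"mean within-patient change: {sum(changes) / len(changes):+.2f}")
```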
That was in response to people wondering why these papers don't get published, or don't get into better journals, and claiming bias as the reason.
Actually, I don't ask clinicians to report results, ...
I understand that other patients do ask for this. I was just pointing out that there was no contradiction in some people wanting this and some people criticizing it.
The Lancet publishes all kinds of medical papers: different types of experiments, reviews, case reports. This paper is structured as a "formal research" paper: hypothesis, methods, results, conclusion. It's published under the category of "Original Research" in the journal. And if you Google Dove Medical Press, you'll find a lot of discussion on scientists' and professors' blogs about this publisher.
The paper itself is not a major one, but since Dr. Montoya at Stanford cites it and Dr. Lerner is featured on the new Stanford site, it became more concerning to me. Up till now, I had just assumed that a medical professor at a top institution would know better than me. I think I will try to find people more qualified than I am to get a perspective.
I do not mean it has to be a double-blind, randomized, placebo-controlled trial. It can be any of the many valid research types. You just can't compare means over time with 94% dropout, in any type of study, and then attribute any of the change to treatment.
If this were Wessely, or anyone else doing a CBT study, even some small-time researcher with no budget: substitute "CBT" for "antiviral" and maybe the problem will seem clearer.