• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


The Race to Retract Lombardi 2009...

biophile

Places I'd rather be.
Messages
8,977
Lee, I used those 3 CBT studies as a general example of very important data being routinely excluded from published papers, which you apparently claimed "never ever" happens in science (unless I misunderstood your comments), not as an example of mislabeling or misconduct. Here is a mislabeling example though: when using physical function normative datasets to derive the lower threshold for "normal" (the mean minus 1 SD), White et al 2011 (the PACE Trial) mislabeled a general population as a working-age population in the Lancet paper, despite showing a previous understanding of all the figures and citations involved in the published protocol. So mislabeling also seems to occur in other areas of ME/CFS research.
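For anyone unfamiliar with how that threshold works, here is a minimal sketch of the "mean minus 1 SD" calculation; the numbers are illustrative placeholders, not the actual normative data White et al used:

```python
# Lower threshold for "normal" physical function derived from a
# normative dataset: threshold = mean - 1 * SD.
# The figures below are illustrative placeholders, NOT the actual
# normative data behind White et al 2011.

def normal_threshold(mean, sd):
    """Lower bound of the 'normal' range: the mean minus one SD."""
    return mean - sd

# A working-age sample typically scores higher, with less spread,
# than the general population, so mislabeling which dataset is in
# use changes where the threshold lands:
general_population = normal_threshold(85.0, 24.0)  # 61.0
working_age_sample = normal_threshold(93.0, 12.0)  # 81.0
print(general_population, working_age_sample)
```

The point being that which population the figures come from materially changes who gets counted as "normal".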

As noted in the introduction of Wiborg et al 2010, increase in activity levels is a common aim of CBT. Whether or not CBT actually leads to increases in activity as presumed, rather than perceived fatigue or even just reactivity bias on subjective questionnaires, has been controversial for the ME/CFS community, and the meta-analysis finally takes a look at this issue. The authors of the original studies omitted important objective data which challenged a major premise of their pet therapy. It did not negate the conclusions of their studies regarding subjective outcomes, but it was highly relevant to the interpretation of these subjective outcomes.

PACE also purchased the equipment to take these measurements at baseline but for some reason refused to take them again at followup, oddly claiming that it would have been too much of a burden to patients. So again we have the situation where important data on a fundamental issue which could potentially embarrass the authors' pet therapies is being avoided. Now the PACE Trial is being bandied around as evidence that patients are "getting back to normal" lives due to CBT/GET, based on subjective outcomes. I realize this comparison between biological science vs psychobehavioural science isn't exactly perfect.

None of the above is necessarily indicative of fraud or misconduct to me personally, but when these omissions and mistakes all just happen to coincide with making the authors' pet therapies look more relevant than they really are, some patients and advocates start getting just as suspicious as other people are towards Mikovits. I'm not accusing anyone of fraud, and I'm not really defending Mikovits as such; the failure to mention azacytidine does seem significant. I admit I am not following the Mikovits gelslidegate as closely as other areas of ME/CFS research, but from my limited understanding, accusations of fraud at this stage because of a single mislabeling and a single omission of data are premature.

However, I haven't yet looked into this specific claim of purposeful data falsification:

[Lee] wrote:

JM admitted that in the Toronto presentation she showed a slide that had a label that misrepresented what was in that lane. She did it intentionally, to make it 'less confusing.' That was her justification for it. That ACTION, BY DEFINITION, is data falsification. She falsely presented her data. She showed us one thing, and falsely said that it was something else. She told us she did it on purpose, to be 'less confusing.'

She took a control sample, but told us it was a patient sample. On purpose, so as not to confuse us. Because apparently it would have been confusing not to see a negative patient sample. Which we actually didn't see, because SHE DID NOT SHOW US THE DATA FOR ONE. She only falsely told us she did.

That is data falsification. I don't care what words she used, or how she justified it - she described to us her false presentation of data. Yes, she copped to it.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
I give up.

biophile, the followup by Wiborg does not change the initial conclusions about CBT. The followup, using the actometer data, examined a hypothesis for WHY the CBT results were obtained, and concluded that they are not mediated by physical activity.

The initial papers concluded that CBT leads to small improvements on some measures. Wiborg follows up and concludes that the changes are not mediated by physical activity.

Where is the misconduct in that? Where are the CBT samples that got labeled as 'natural course' because it is less confusing? Where is the disclosure that they actually gave the CBT patients a drug and didn't bother to tell anyone in the initial papers?

Where is ANYTHING similar to what happened here?

What I don't see anywhere in what you cited, is a case where they presented one set of data, labeled as if it were another. Or where they made up data. Or where they failed to describe what they did. Disagreeing with their experimental design or with their conclusions is NOT evidence for data falsification.

I find myself flabbergasted at a defense of continued trust in someone who has admitted making false statements, and justified it because she was speaking to patients and didn't want to confuse patients. Especially when that false statement is a violation of one of the fundamental precepts of science.

Oh - so you ARE paying attention to the PACE trial, and - oh look!- defending it, even when it clearly clashes with your assertions about science and data.

Weak, nonsensical excuses defending PACE, by the way. It's quite clear the Wiborg 'follow-up' does not support PACE, and PACE stands on its own. There clearly was a manipulation of data, and a failure to show data that nullified their pet hypothesis (and that's without all the other discrepancies). Your case was weak to start with, but attempting to defend PACE has really left your argument up the creek without a paddle. Your inconsistency of approach is very obvious now.
 

Mula

Senior Member
Messages
131
Lee. Science/Coffin/editors removed 5-aza and switched codes during the peer review process. There are other blots besides the one you have had the privilege of seeing for 24 months.
 

biophile

Places I'd rather be.
Messages
8,977
Yes, this is what PACE deemed too burdensome for patients to tolerate

I didn't think Lee was specifically defending the PACE Trial. Anyway, I don't know what specific model PACE purchased and used, but here is a general idea of what they amusingly deemed too much of a "burden" to patients as THE reason for not using it at 52 week followup:

[attached image: an actigraphy device]

Making patients wear this massively burdensome device (LOL) to help settle one of the most controversial aspects of CBT/GET research and confirm the presumed increases of overall physical activity? OH THE HUMANITY!!! From some relevant notes I had:

Friedberg & Sohl 2009 conducted a small CBT study involving 11 patients and actigraphy was used to compare the patients' self-reports to measured behavioral outcomes. Although most patients reported improvements, clinically significant actigraphy increases were recorded in only 3 patients, while decreases were recorded in 4 patients and 2 others had no change. The authors concluded that "The nature of clinical improvement in CBT trials for high-functioning CFS patients may be more ambiguous than that postulated by the cognitive-behavioral model." (http://www.ncbi.nlm.nih.gov/pubmed/19213007)

Kop et al 2005 studied the activity levels of 38 patients with CFS and/or FM and found that self-reported physical function (as measured by the physical function subscale of the SF-36 health survey) significantly correlated with peak physical activity as measured objectively by actigraphy, but not with average physical activity. Therefore, using the PF/SF-36 in the PACE Trial as a primary outcome may have been an unreliable way to determine the average physical activity of participants: those reporting increased physical activity levels may be engaging in more intense activity but not more activity overall, an important distinction. Note that the authors also conclude that "activity levels appear to be contingent on, rather than predictive of, symptoms", which also challenges the role of "deconditioning" in CFS symptoms. (http://www.ncbi.nlm.nih.gov/pubmed/15641057)

When considering all this with the previously mentioned meta-analysis of 3 CBT trials in Wiborg et al 2010, which found no significant increase in physical activity after CBT, chances were rather high that usage of actigraphy in the PACE Trial would have yielded data inconsistent with the subjective questionnaires, cast doubt on some of the assumptions in the therapy manuals about increasing activity levels, and possibly embarrassed the authors, who are CBT/GET proponents staking part of their reputation on these therapies and now claiming patients are "getting back to normal" (based on subjective outcomes which open up another massive can of worms about how these were employed and goalposted in the trial).

Although Richard Horton of the Lancet called PACE "utterly impartial", some people have wondered whether the authors dropped actigraphy for reasons other than their supposed heartfelt concern for patients. OH THE TIN-FOIL HAT CONSPIRACY!!! I have doubts about HGRV, but I've noticed that a major pillar of CBT is collapsing much like XMRV did, but without the thousands of snarky online comments, accusations of fraud, and chest-beating skepticism from people claiming to be concerned about patients falling for bad science.

 
Messages
13,774
Matters around deception and the psychosocial model are always more complicated than with more objective science.

eg: I think that it is deceptive to claim that CBT is improving 'fatigue' levels if actometer measures show that there has been no increase in activity levels. I think that activity levels are likely to be a more effective measure of 'fatigue' than questionnaire scores, and certainly, no less important. But 'fatigue' is difficult to define. It's difficult to imagine their justification for failing to release this data with their initial results, but it is not a clear-cut matter.

With PACE, I do not think it is reasonable to claim that patients were "back to normal" if they were more seriously ill than was needed to be classed as suffering from severe and disabling fatigue at the start of the trial - but this is not clear-cut. Maybe the researchers believe it is 'normal' to suffer from severe and disabling fatigue, and therefore, they can cure patients simply by changing the labels which are applied to them.

That this sort of thing is seen as acceptable within psychiatry may explain why so many patients do not want more funding to be directed in this direction.

The followup, using the actometer data, examined a hypothesis for WHY the CBT results were obtained, and concluded that they are not mediated by physical activity.

That is what they claimed. They could have done it the other way round, and used the actometer measures to show that CBT was ineffective, and then used the questionnaire results to show that the efficacy of CBT would be massively exaggerated by those studies which relied only upon questionnaire data.

The same results could be spun in entirely different ways depending upon the desires of those writing the papers.
 

jace

Off the fence
Messages
856
Location
England
Biophile said
I have doubts about HGRV, but I've noticed that a major pillar of CBT is collapsing much like XMRV did, but without the thousands of snarky online comments, accusations of fraud, and chest-beating skepticism from people claiming to be concerned about patients falling for bad science.

Remember the semantic trap, XMRV = VP62, a specific clone, rather than XMRV = xenotropic murine leukemia virus-related virus. I'd say that the CBT/GET pillars are already collapsed without major bucks being spent, or the army of "skeptics" determined to undermine confidence on patient websites, whereas XMRV/HGRV/retroviral implication is being buried alive with indecent haste and all of the above strategies.

Yet we can only access CBT/GET and palliative medical treatments, some of which, like the talk and exercise therapies, make us worse, not better. Our lords and masters are unconcerned. You couldn't make it up.

We do not know the truth yet about HGRVs. All we know is what we are told. Move along. Nothing to see here.

I do not want any area of biomedical research to be shut down. Two years is no time in science. Why does the retroviral hypothesis always create such a storm? No other area does...
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
I didn't think Lee was specifically defending the PACE Trial. Anyway, I don't know what specific model PACE purchased and used, but here is a general idea of what they amusingly deemed too much of a "burden" to patients as THE reason for not using it at 52 week followup:


Making patients wear this massively burdensome device (LOL) to help settle one of the most controversial aspects of CBT/GET research and confirm the presumed increases of overall physical activity? OH THE HUMANITY!!! From some relevant notes I had:



When considering all this with the previously mentioned meta-analysis of 3 CBT trials in Wiborg et al 2010, which found no significant increase in physical activity after CBT, chances were rather high that usage of actigraphy in the PACE Trial would have yielded data inconsistent with the subjective questionnaires, cast doubt on some of the assumptions in the therapy manuals about increasing activity levels, and possibly embarrassed the authors, who are CBT/GET proponents staking part of their reputation on these therapies and now claiming patients are "getting back to normal" (based on subjective outcomes which open up another massive can of worms about how these were employed and goalposted in the trial).

Although Richard Horton of the Lancet called PACE "utterly impartial", some people have wondered whether the authors dropped actigraphy for reasons other than their supposed heartfelt concern for patients. OH THE TIN-FOIL HAT CONSPIRACY!!! I have doubts about HGRV, but I've noticed that a major pillar of CBT is collapsing much like XMRV did, but without the thousands of snarky online comments, accusations of fraud, and chest-beating skepticism from people claiming to be concerned about patients falling for bad science.


But Lee nevertheless has just defended the PACE trial.

Agree with the rest of these comments though. We have a clear inconsistency of approach - a 'pick and mix' scepticism - a cherry-picking, if you like. It's fallacious reasoning, and it is being done to advance psychogenic explanations which are dangerous to patients. This community has every right to call out those engaging in that fallacious reasoning and inconsistency.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Matters around deception and the psychosocial model are always more complicated than with more objective science.

eg: I think that it is deceptive to claim that CBT is improving 'fatigue' levels if actometer measures show that there has been no increase in activity levels. I think that activity levels are likely to be a more effective measure of 'fatigue' than questionnaire scores, and certainly, no less important. But 'fatigue' is difficult to define. It's difficult to imagine their justification for failing to release this data with their initial results, but it is not a clear-cut matter.

With PACE, I do not think it is reasonable to claim that patients were "back to normal" if they were more seriously ill than was needed to be classed as suffering from severe and disabling fatigue at the start of the trial - but this is not clear-cut. Maybe the researchers believe it is 'normal' to suffer from severe and disabling fatigue, and therefore, they can cure patients simply by changing the labels which are applied to them.

That this sort of thing is seen as acceptable within psychiatry may explain why so many patients do not want more funding to be directed in this direction.



That is what they claimed. They could have done it the other way round, and used the actometer measures to show that CBT was ineffective, and then used the questionnaire results to show that the efficacy of CBT would be massively exaggerated by those studies which relied only upon questionnaire data.

The same results could be spun in entirely different ways depending upon the desires of those writing the papers.

Yes, and the failure of the 'skeptics' and the self-proclaimed 'scientists' to address these painfully obvious problems that confound psychiatric research and, most importantly, the extrapolations made leading to certain assertions, is a massive inconsistency that the ME/CFS community have every right to critique.

When such subjectivity becomes claimed as 'science' and accepted by those who want to shut the door firmly on biomedical research (such as the possible HGRV link), then those claiming 'skepticism' about HGRV but defending PACE are acting similarly to those defending The Lightning Process, homeopathy, and Gillian McKeith. This is another one of those elephants in the room we keep finding, unfortunately.

Note the lack of concern about the Lightning Process among certain so-called 'skeptics'. That in itself is shocking, another indication of the inconsistency of 'skepticism' this community is faced with.
 

currer

Senior Member
Messages
1,409
Until there is sound biomedical evidence that ME is a physical (not psychiatric) disorder these debates will multiply endlessly.

There is no substitute for sound biomedical research.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Matters around deception and the psychosocial model are always more complicated than with more objective science.

eg: I think that it is deceptive to claim that CBT is improving 'fatigue' levels if actometer measures show that there has been no increase in activity levels. I think that activity levels are likely to be a more effective measure of 'fatigue' than questionnaire scores, and certainly, no less important. But 'fatigue' is difficult to define. It's difficult to imagine their justification for failing to release this data with their initial results, but it is not a clear-cut matter.

With PACE, I do not think it is reasonable to claim that patients were "back to normal" if they were more seriously ill than was needed to be classed as suffering from severe and disabling fatigue at the start of the trial - but this is not clear-cut. Maybe the researchers believe it is 'normal' to suffer from severe and disabling fatigue, and therefore, they can cure patients simply by changing the labels which are applied to them.

That this sort of thing is seen as acceptable within psychiatry may explain why so many patients do not want more funding to be directed in this direction.

I won't go into much depth about this here because it's the wrong thread... But the PACE trial results were actually fairly clear cut... And if we had the raw data, I'm certain that they'd be exceptionally clear cut.

CBT and GET could only be shown to benefit up to 16% (* see below) of CFS patients using even the authors' flawed "post-hoc" analysis, and on average there was a minimal improvement as a result of CBT and GET- an improvement that was not always above the clinically useful threshold. CBT failed to improve physical disability, and GET barely made any difference. Both GET and CBT left ME patients severely physically disabled at the end of a year. So, considering the other weaknesses of the trial, and the flawed analyses, it seems pretty clear cut to me that both GET and CBT failed to deliver, and failed to prove the authors' theories that ME is caused by a fear of activity etc.

I think the authors of the PACE trial expected CBT to increase physical function whilst not making much difference to the subjective feelings of fatigue. I think they wanted to show that fatigue is purely subjective and that physical function could increase despite feelings of fatigue... I came to this opinion from reading the protocol, but I can't remember if it's spelled out so explicitly... In the end, it was the other way around, which confounded the authors' expectations - CBT failed to improve physical function, whilst very marginally improving subjective fatigue.

As for "normal range", it is a flawed and faulty statistical analysis, as well as a flawed concept, which tells us nothing about anything. We do not know how many were in the "normal range" at the beginning of the study, so it is meaningless to be told how many were in the "normal range" at the end of the study. Along with this, it was possible to be made worse by CBT/GET, and still be declared in the "normal range". Based on this corrupted analysis, we are told that 30 to 40% of patients 'recovered'. The "normal range" analysis was a statistical cock-up caused by changing the analyses and entry criteria halfway through the trial (to improve the presentation of the results - the trial massively failed using the originally proposed analyses), and the "normal range" analysis should be retracted.


* The other post-hoc analysis (where I got the 16% figure from) is also flawed because the calculation for a 'clinically useful outcome' was meaningless:

"A clinically useful difference between the means of the primary outcomes was defined as 0.5 of the SD of these measures at baseline, equating to 2 points for Chalder fatigue questionnaire and 8 points for short form-36."

2 points on the Chalder scale, or 8 points on the SF-36 PF, does not indicate a significant change in the scores... It only takes a change in one question for scores to shift that much... And it doesn't make any sense to base their calculation on average scores at baseline... How is that relevant to an individual's results?
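To spell out the arithmetic behind that quote: the thresholds are just half the baseline SD of each measure, which implies baseline SDs of roughly 4 (Chalder) and 16 (SF-36 PF). A quick sketch - the SDs here are back-derived from the quoted thresholds, not taken from the paper itself:

```python
# PACE's "clinically useful difference" rule: 0.5 * baseline SD of
# each primary outcome. The SDs below are back-derived from the
# quoted thresholds (2 points Chalder, 8 points SF-36 PF), i.e.
# SD = 4 and SD = 16; they are inferred, not copied from the paper.

def clinically_useful_difference(baseline_sd):
    """Half the standard deviation of the outcome at baseline."""
    return 0.5 * baseline_sd

chalder = clinically_useful_difference(4.0)   # 2.0 points
sf36_pf = clinically_useful_difference(16.0)  # 8.0 points
print(chalder, sf36_pf)
```

Which is the objection in a nutshell: these thresholds are small relative to how coarse the questionnaires are, and they are derived from group-level spread rather than anything about an individual's change.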
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I have doubts about HGRV, but I've noticed that a major pillar of CBT is collapsing much like XMRV did, but without the thousands of snarky online comments, accusations of fraud, and chest-beating skepticism from people claiming to be concerned about patients falling for bad science.

Note the lack of concern about the Lightning Process among certain so-called 'skeptics'. That in itself is shocking, another indication of the inconsistency of 'skepticism' this community is faced with.

Yes, the silence re the PACE Trial and the Lightning Process Trial is deafening.
The Lightning Process Trial is taking place, and inevitably harming the long term psychological welfare and physical health of children, right now, but where are the protests about the safety of children from the people who like to talk about "bad science"?
 

currer

Senior Member
Messages
1,409
Bob that was a fantastic analysis, even if it is on the wrong thread.

You have almost made me feel enthusiastic about the possibilities of statistical research, and demolished my dislike of psychiatric papers. (Almost!)

What are you going to do with the results of your analysis? Are you going to try to get it published?
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
I won't go into much depth about this here because it's the wrong thread... But the PACE trial results were actually fairly clear cut... And if we had the raw data, I'm certain that they'd be exceptionally clear cut.

CBT and GET could only be shown to benefit up to 16% (* see below) of CFS patients using even the authors' flawed "post-hoc" analysis, and on average there was a minimal improvement as a result of CBT and GET- an improvement that was not always above the clinically useful threshold. CBT failed to improve physical disability, and GET barely made any difference. Both GET and CBT left ME patients severely physically disabled at the end of a year. So, considering the other weaknesses of the trial, and the flawed analyses, it seems pretty clear cut to me that both GET and CBT failed to deliver, and failed to prove the authors' theories that ME is caused by a fear of activity etc.

I think the authors of the PACE trial expected CBT to increase physical function whilst not making much difference to the subjective feelings of fatigue. I think they wanted to show that fatigue is purely subjective and that physical function could increase despite feelings of fatigue... I came to this opinion from reading the protocol, but I can't remember if it's spelled out so explicitly... In the end, it was the other way around, which confounded the authors' expectations - CBT failed to improve physical function, and only marginally improved subjective fatigue.

As for "normal range", it is a flawed and faulty statistical analysis, as well as a flawed concept, which tells us nothing about anything. We do not know how many were in the "normal range" at the beginning of the study, so it is meaningless to be told how many were in "normal range" at the end of the study. Along with this, it was possible to be made worse by CBT/GET, and still be declared in the "normal range". Based on this corrupted analysis, we are told that 30 to 40% of patients 'recovered'.


* The other post-hoc analysis (where I got the 16% figure from) is also flawed because the calculation for a 'clinically useful outcome' was meaningless:

"A clinically useful difference between the means of the primary outcomes was defined as 0.5 of the SD of these measures at baseline, equating to 2 points for Chalder fatigue questionnaire and 8 points for short form-36."

2 points on the Chalder scale, or 8 points on the SF-36 PF, does not indicate a significant change in the scores... It only takes a change in one question for scores to shift that much... And it doesn't make any sense to base their calculation on average scores at baseline... How is that relevant to an individual's results?

I agree with most of what you say Bob - except the idea 'ME' patients were being trialled. That is not tenable.

Firstly, there were mechanisms to exclude people with neurological signs and symptoms using an ad hoc version of the Oxford criteria. There is a myth that ME sufferers have no signs, but it is just that: a myth. There is much research showing clinical signs associated with neurological deficits in patients given an ME (or even CFS!) diagnosis. In any case, the way the selection process was used, it was easy to exclude patients with signs and symptoms of neurological deficits associated with ME, by use of the Oxford Criteria, the Reeves make-over version of Fukuda, and the White make-over version of the 'London' criteria.

This is supported by the huge proportion of ME/CFS (or 'CFS/ME') sufferers referred to the clinics and EXCLUDED from the study under Oxford criteria alone.

Previous research undertaken by Wessely et al has shown that they EXCLUDE patients with signs of organic dysfunction from their research - but still appear to keep them at the CFS clinics. I have other anecdotal evidence to show this is happening in practice to people. But Newton et al's paper shows this is happening as well.

Then we have White's response to Malcolm Hooper- where he clearly says they were NOT studying 'CFS/ME'.

(There was also, in the selection process, ample opportunity to INCLUDE various psychiatric illnesses- the devil is in the linguistic detail and grey shading of boxes, ironically.)

This is without considering that seriously affected patients were unable to take part- those likely to exhibit the signs and symptoms associated with neurological and other physiological deficits.

What this all does is point to serious discrepancies in the patient cohort selection method (among all the other problems) - so that it is NOT clear what kind of patients were under study (making any claims about 'ME patients' highly unsound - and dangerous). This is why more information needs to be made available and fully investigated, on this one issue alone (let alone the other issues highlighted).

It is therefore possible that even fatigued patients with mental health problems, but without signs or symptoms of physiological dysfunction, on ad hoc administration of anti-depressants (which was happening) STILL didn't do well on CBT or GET!
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Yes, the silence re the PACE Trial and the Lightning Process Trial is deafening.
The Lightning Process Trial is taking place, and inevitably harming the long term psychological welfare and physical health of children, right now, but where are the howls of protest about the safety of children?

Good point - the concern about 'children' is conspicuous by its absence.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Bob that was a fantastic analysis, even if it is on the wrong thread.

You have almost made me feel enthusiastic about the possibilities of statistical research, and demolished my dislike of psychiatric papers. (Almost!)

What are you going to do with the results of your analysis? Are you going to try to get it published?

hahaha... well, if it's given you a new-found enthusiasm for the possibilities of statistics, then I don't know if I've done you a favour or not!?!? :eek:

The good thing about the PACE Trial is that it totally works in our favour (not the spin, but the facts), in that it was a complete and utter failure. The trial clearly demonstrates that CBT and GET do not help ME patients in a meaningful way, and that's even with the dodgy and manipulative methodologies that they used.

Even the authors can't legitimately say that CBT and GET are effective treatments (except they do spin the results in various ways).

It totally demonstrates that ME is not propagated by maladaptive behaviours or cognition, and that ME is not cured or effectively treated by psychological interventions.

Graham and some others are working on an online analysis of the PACE Trial - soon to be completed - A number of people on the forum have contributed to the analysis - on the impenetrable PACE Trial thread.

I'm doing a couple of small ongoing projects of my own as well, but they won't be published anywhere - though I will put them online if I ever finish them.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I agree with most of what you say Bob - except the idea 'ME' patients were being trialled. That is not tenable.

Firstly, there were mechanisms to exclude people with neurological signs and symptoms using an ad hoc version of the Oxford criteria. There is a myth that ME sufferers have no signs, but it is just that: a myth. There is much research showing clinical signs associated with neurological deficits in patients given an ME (or even CFS!) diagnosis. In any case, the way the selection process was used, it was easy to exclude patients with signs and symptoms of neurological deficits associated with ME, by use of the Oxford Criteria, the Reeves make-over version of Fukuda, and the White make-over version of the 'London' criteria.

This is supported by the huge proportion of ME/CFS (or 'CFS/ME') sufferers referred to the clinics and EXCLUDED from the study under Oxford criteria alone.

Previous research undertaken by Wessely et al has shown that they EXCLUDE patients with signs of organic dysfunction from their research - but still appear to keep them at the CFS clinics. I have other anecdotal evidence to show this is happening in practice to people. But Newton et al's paper shows this is happening as well.

Then we have White's response to Malcolm Hooper- where he clearly says they were NOT studying 'CFS/ME'.

(There was also, in the selection process, ample opportunity to INCLUDE various psychiatric illnesses - the devil is in the linguistic detail and grey shading of boxes, ironically.)

This is without considering that severely affected patients were unable to take part - those most likely to exhibit the signs and symptoms associated with neurological and other physiological deficits.

What this all does is point to serious discrepancies in the patient cohort selection method (among all the other problems), so that it is NOT clear what kind of patients were under study (making any claims about 'ME patients' highly unsound - and dangerous). This is why more information needs to be made available and fully investigated on this one issue alone (let alone the other issues highlighted).

It is therefore possible that even fatigued patients with mental health problems, but without signs or symptoms of physiological dysfunction, and despite ad hoc administration of antidepressants (which was happening), STILL didn't do well on CBT or GET!

I totally agree with you that the entry criteria were manipulative, unhelpful, and unrepresentative.
They didn't use internationally recognised entry criteria, which was unhelpful to say the least.
And they excluded severely affected patients (who, as we know from the results of the FINE Trial, do not benefit from psychological interventions), so the study was not representative of the whole patient community anyway (even though the authors insist it is).

I don't have personal knowledge about the patients who were excluded from the PACE Trial, so I can't make a conclusive judgement about that.


Previous research undertaken by Wessely et al has shown that they EXCLUDE patients with signs of organic dysfunction from their research - but still appear to keep them at the CFS clinics. I have other anecdotal evidence that this is happening in practice, and Newton et al's paper shows it is happening as well.

That's very interesting.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Clearly there are enough discrepancies for the 'findings' to be unsafe at best, and ungeneralisable to the ME/CFS (or even CFS/ME) community - hence my own specific complaint to the Lancet.
 
Messages
13,774
I won't go into much depth about this here because it's the wrong thread... But the PACE trial results were actually fairly clear-cut... And if we had the raw data, I'm certain that they'd be exceptionally clear-cut.

I guess things like their claims about the Bowling SF36-PF data are pretty clearly wrong. If Lee is claiming that Mikovits's re-labelling of the WB result for illustration in a lecture means that no one can believe anything she says, then there are probably comparable things in PACE.

The new Crawley piece is even more misleading.

It's always hard to compare different situations fairly. To me, the relabelling of the WB is a simple issue of unknown importance, whereas PACE is much more important but more complicated, involving more potential ambiguities and requiring a lot more work to really understand.
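For readers unfamiliar with the statistic behind the Bowling SF36-PF dispute: the "normal range" criticised in this thread is defined as anything at or above the population mean minus one standard deviation. A minimal sketch of that rule, using placeholder figures rather than the actual published Bowling normative values:

```python
# Sketch of the "mean minus 1 SD" rule used to define the lower bound of
# "normal" physical function. The numbers below are ILLUSTRATIVE
# placeholders, not the published Bowling SF-36 PF normative figures.

def normal_range_threshold(mean: float, sd: float) -> float:
    """Lower bound of the 'normal range', defined as mean - 1 SD."""
    return mean - sd

# With a hypothetical population mean of 85 and SD of 20 on the 0-100
# SF-36 PF scale, the threshold for "normal" would be 65.
print(normal_range_threshold(85.0, 20.0))  # -> 65.0
```

The point at issue in the thread is not the arithmetic, which is trivial, but which normative dataset (general population vs working-age population) supplies the mean and SD that go into it.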
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
I guess things like their claims about the Bowling SF36-PF data are pretty clearly wrong. If Lee is claiming that Mikovits's re-labelling of the WB result for illustration in a lecture means that no one can believe anything she says, then there are probably comparable things in PACE.

The new Crawley piece is even more misleading.

It's always hard to compare different situations fairly. To me, the relabelling of the WB is a simple issue of unknown importance, whereas PACE is much more important but more complicated, involving more potential ambiguities and requiring a lot more work to really understand.

Yes - and those ambiguities and complexities, though of vital importance, are what allow others to ignore them ("it's all too complicated").

Nevertheless, some of us have been handling that complexity and, for one thing, making it less difficult to elucidate. Retrovirology is also a highly complex field which this community attempts to engage with. All of this is completely different from the way PACE and its discrepancies have been ignored - in the scientific press, by the inconsistent 'skeptics', etc. - while the people raising these issues (like myself, like Hooper) have been personally attacked in public.

That becomes something different. This failure to engage with the critics of PACE demonstrates a level of perversity which saturates medical and 'scientific' behaviour towards this community, and has been present for years.

So the inconsistency between the way, say, the XMRV issue and PACE are dealt with is palpable, and relevant, and reeks of bad faith, unfortunately.
 
Messages
13,774
Yes - and those ambiguities and complexities, though of vital importance, are what allow others to ignore them ("it's all too complicated").

...

That becomes something different. This failure to engage with the critics of PACE demonstrates a level of perversity which saturates medical and 'scientific' behaviour towards this community, and has been present for years.

So the inconsistency between the way, say, the XMRV issue and PACE are dealt with is palpable, and relevant, and reeks of bad faith, unfortunately.

I see how it can feel like bad faith, when some present themselves as fighting for scientific rationalism and Truth but then shy away from looking at any of the issues that concern patients... but I think this is just humans being humans - a bit lazy and biased. It's fair to criticise, but I also try to be understanding and forgiving. I'm sure I make highfalutin claims for myself that I utterly fail to live up to too; we all have our own interests, and our own areas of shameful ignorance and disinterest.