
Examples of misleading statements in CFS papers from biopsychosocialists

Discussion in 'Latest ME/CFS Research' started by biophile, Jan 23, 2012.

  1. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    We all make mistakes, but for the "expert" authors of biopsychosocial papers on CFS it seems to happen rather frequently, coincidentally in their favour too. It takes a lot of time and effort to thoroughly investigate even a single claim, and so many of them are made in various papers. To systematically examine the issue would be overwhelming; I usually discover them when I occasionally need to follow something up or just happen to be already familiar with the source being cited.

    It happens often enough that I suspect there are probably hundreds if not thousands of examples either already known or waiting to be discovered. Anyone could probably select a paper at random and find a claim that is not supported by the reference given, and I don't mean just differences of opinion or interpretation or cherry picking (another can of worms) but blatant errors. Is this problem normal in wider academia? Is it spin or incompetence, and where are the peer reviewers in all this? How does it fit into the accusations of "zombie science" and criticisms of the (abuse of) "evidence based medicine" as practiced by proponents of and lobbyists for the biopsychosocial paradigm?

    When I first read the term "smoke and mirrors" being applied to the biopsychosocial approach and cognitive behavioural model of CFS, I was relatively naive and thought that perhaps this wording was too strong; surely it couldn't be that bad even if there were some issues with the available evidence. However, since then I have come across so many questionable studies and related statements that "smoke and mirrors" does indeed seem like an accurate description after all. If all the papers and citations were fed into a computer model, would it look like a giant web of spin?

    I'm starting this thread to document such misleading statements in the hope that others will contribute. I will post several examples that I have worked on recently while they are relatively fresh. I'm sure there are countless examples that have already been discussed on other threads at Phoenix Rising or are embedded in my notes somewhere, but I'm not going to wade through to find more right now; it was difficult enough preparing these examples.
    Dolphin and oceanblue like this.
  2. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    Burgess et al (inc. Chalder) 2011 on CBT (http://www.kcl.ac.uk/innovation/groups/projects/cfs/publications/assets/2011/Burgessface2face.pdf) :

    The Cochrane 2008 systematic review on CBT in question (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001027.pub2/pdf) did not show this at all.

    For rates of a "clinical response" in fatigue it was 40% for CBT vs 26% for usual care (4 studies, 371 participants), so the figure should be 14% for CBT over and above usual care (one could argue that the remaining 26% in the CBT group may report greater improvement than the 26% in the usual care group, but Cochrane does not go into that and only reports small-moderate group differences in fatigue at post-treatment of [SMD -0.39, 95% CI -0.60 to -0.19] in 6 studies or 373 participants).

    Also, there was no statistical difference in clinical response at short-term followup (3 studies, 353 participants), so the ability of CBT to elicit a self-rated clinical response in fatigue appears to be transient. However, the difference in fatigue severity between groups apparently remained significant at short/medium-term followup, i.e. [SMD -0.47, 95% CI -0.69 to -0.25] in 4 studies or 330 participants, but this discrepancy may be explained by the following statement: "At follow-up, 1-7 months after treatment ended, people who had completed their course of CBT continued to have lower fatigue levels, but when including people who had dropped out of treatment, there was no difference between CBT and usual care."

    Furthermore, on the outcome of average self-rated physical function scores between groups, there was no statistically significant difference at either post-treatment (4 studies or 318 participants) or short/medium-term followup (3 studies or 275 participants). Note the temporal definitions for above mentioned followup periods: "Outcomes were classified as post treatment, short term followup (1-6 months post-treatment), medium term follow-up (7-12 months post-treatment) and long term (longer than 12 months)."

    So it was incorrect for Burgess et al to claim this Cochrane 2008 showed that "CBT improves fatigue and physical functioning in about 40% of patients".
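    The arithmetic behind this point can be sketched in a few lines (a minimal illustration using only the Cochrane figures quoted above; the variable names are mine):

    ```python
    # Control-adjusted ("net") clinical response rate in fatigue,
    # using the Cochrane 2008 figures quoted in this post.
    cbt_response = 0.40         # CBT group: 40% clinical response
    usual_care_response = 0.26  # usual care group: 26% clinical response

    # The response attributable to CBT itself is the difference between
    # the groups, not the raw 40% reported for the CBT arm alone.
    net_response = cbt_response - usual_care_response
    print(f"{net_response:.0%}")  # prints "14%"
    ```

    Quoting the uncontrolled 40% figure silently attributes the 26% usual-care response to CBT as well, which is the crux of the objection.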
    PhoenixDown, Dolphin and oceanblue like this.
  3. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    Collins et al (inc. Crawley) 2011 on PACE (http://www.biomedcentral.com/content/pdf/1472-6963-11-217.pdf) :

    This of course is referring to the infamous PACE Trial (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065633). The figure of 30-40% seems to be based on a combination of two figures: 1) the 30%/28% of CBT/GET participants being within "normal" range in fatigue (CFQ ≤18/33 points, Likert scoring) and physical function (PF/SF-36 ≥60/100 points); 2) the 41% of participants in the CBT/GET groups who reported being "much better or very much better" on the CGI-I scale.

    The reasons why their definition of "normal" is dubious are worthy of an entire paper, but suffice to say that ≤18/33 points on the fatigue scale and ≥60/100 points in physical function is an inappropriate threshold for normal, not to mention recovery, and "much better or very much better" on the clinical global impression scale is not necessarily a recovery either. The PACE authors themselves stated in their authors' reply to criticism that "It is important to clarify that our paper did not report on recovery; we will address this in a future publication." (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60651-X/fulltext)

    The physical function threshold of ≥60 points is the most dubious because in the same trial the authors deemed ≤65 points a sign of "significant disability". Also, in the 2007 trial protocol, the physical function criterion for recovery was much higher at ≥85 points (http://www.biomedcentral.com/1472-6882/7/12).
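    The overlap between these two thresholds can be made concrete (a small sketch of the point; it assumes the SF-36 physical function subscale's 0-100 range scored in steps of 5, and the function names are mine):

    ```python
    # SF-36 physical function subscale: 0-100, scored in steps of 5.
    # PACE treated scores >= 60 as within the "normal range", while
    # scores <= 65 at trial entry counted as "significant disability".
    def in_normal_range(pf_score):
        return pf_score >= 60

    def significantly_disabled(pf_score):
        return pf_score <= 65

    # Scores that satisfy both definitions at the same time:
    overlap = [s for s in range(0, 101, 5)
               if in_normal_range(s) and significantly_disabled(s)]
    print(overlap)  # prints "[60, 65]"
    ```

    In other words, a participant scoring 60 or 65 could simultaneously qualify as significantly disabled at entry and "back to normal" at outcome, which is the absurdity being argued here.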

    By the same logic used by Collins et al, the de facto SMC control group in the PACE Trial had a "recovery rate" (cough) of 15-25% which would indicate that the true "recovery rate" (cough) of CBT and GET would be more like 15% over and above SMC, an inconvenient statistic left out by Collins et al who give the false impression that CBT/GET is responsible for a 30-40% recovery rate.
    Dolphin and oceanblue like this.
  4. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    (continued) [Numerous] on PACE ...

    Such confusion between "normal" and "recovery", repeated by the BMJ as "cure" (http://www.meassociation.org.uk/?p=5757) and by The Guardian (http://www.guardian.co.uk/society/2011/feb/18/study-exercise-therapy-me-treatment), comes as no surprise when the PACE authors themselves talked at a press conference about how CFS prevents people from "leading a normal life" and how CBT/GET doubles the odds of "[getting] back to normal levels of functioning and fatigue" (http://www.meactionuk.org.uk/pacepressconf.html).

    Contributing to this confusion was the Lancet editorial which accompanied the PACE 2011 Trial paper, where authors Bleijenberg & Knoop quote the "about 30%" proportion of CBT/GET participants who were "normal" in fatigue and physical function at 52 weeks in PACE and claim that "PACE used a strict criterion for recovery" derived from "a healthy person's score" (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60172-4/fulltext). Both these statements are false, as there was no recovery reported, and because the dataset used to derive the threshold of normal for physical function was from a general population which included unhealthy people and the elderly (note that the PACE authors erroneously described it as a "working age population" in their 2011 Lancet paper but admitted this error in their authors' reply).

    Bleijenberg & Knoop's blunder was ironic considering that both of them co-authored a CFS paper with White (lead author of the PACE Trial) which required a physical function score of ≥80 points to be considered recovered (http://www.cfids-cab.org/rc/Knoop-1.pdf), and especially for Bleijenberg, who co-authored a CFS-like paper where "A cut-off of ≤65 was considered to reflect severe problems with physical functioning." (http://eurpub.oxfordjournals.org/content/20/3/251.long).

    Apparently one can be significantly/severely disabled and "recovered" at the same time? To my knowledge, unofficial reports that the Lancet will issue a correction for the editorial (based on email exchanges with the Lancet) have not yet resulted in an actual correction. I'm not holding my breath for one and wonder how common it is for blatant errors to go uncorrected in the Lancet.
    Dolphin and oceanblue like this.
  5. alex3619

    alex3619 Senior Member

    Messages:
    7,017
    Likes:
    10,793
    Logan, Queensland, Australia
    Hi biophile, I agree, but I can see their counter-argument. It was indeed 40% of patients undergoing CBT/GET who improved, and everyone receives standard medical care, so why is this an issue? If the effect is not always prolonged, that just means they needed more treatment and it was stopped too early. In the next study patients require a much longer treatment period.

    There is usually a way to spin their outcomes to appear as though it's a misunderstanding. Recently I came to the view that it is not any one mistake that matters. Anyone can claim a mistake is just a glitch, or a misunderstanding and not really a mistake. What matters is a pattern of mistakes. In the case of the PACE trial it is a pattern of highly deceptive and misleading statements from initial design through to post-publication media interviews. It's the pattern that's important. That is why I like this thread. Establishing a pattern is difficult to do. More minds make it easier.

    One of the biggest problems with the biopsychosocial view is that they are operating in their own insular paradigm. Nobody else is there. As a result normal robust scientific criticism does not really exist, and they have been progressively allowed to get away with blunder after blunder, accumulating a long history of blunders. It would be really nice to be able to show this.

    So far I am not looking at individual papers. I am looking at a framework to find and classify error in this field, a meta-criticism if you like. This will take months to do at least.

    Bye, Alex
    Enid and oceanblue like this.
  6. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    Cella et al (Sharpe & Chalder) 2011 on occupational outcomes for CBT and GET (http://www.kcl.ac.uk/innovation/gro...2011ThereliabilityofWASAinCFSpatientsJoPR.pdf) :

    However, the cited paper does not appear to discuss work/employment/occupation-related outcomes for CBT and only mentions GET (http://occmed.oxfordjournals.org/content/55/1/32.full.pdf):

    Further, these claims appear to be unsubstantiated even for GET, as explained below ...

    * Fulcher & White 1997 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2126868/pdf/9180065.pdf) : the comparison of improved occupational status was uncontrolled at followup because it was a crossover study and did not account for dropouts etc, the authors acknowledge this weakness but then try to dismiss it by claiming that spontaneous improvement was an unlikely explanation because it didn't occur in a "similar sample" in another study.

    * Powell et al 2001 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC26565/pdf/387.pdf) : reports work status at baseline but not post-treatment.

    * Powell et al 2004 (http://bjp.rcpsych.org/content/184/2/142.full.pdf) : followup of Powell et al 2001 above but did not report occupational status at any point.

    * Wearden et al 1998 (full text not easily available but a Cochrane 2004 systematic review refers to this study as "Appleby 1995" because of multiple publications - http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003200.pub2/pdf) : the improvement in "functional work capacity" in the GET group compared to the control group at 12 weeks and at 26 weeks was not statistically significant.

    So all in all, no good evidence for the statement that "occupational outcomes tend to improve substantially for CFS patients who receive treatment such as [CBT] and [GET]".
    oceanblue likes this.
  7. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    Hi Alex, thanks for the comments. I can see the counter-arguments you suggested, but it is unacceptable for these people to routinely tout intervention improvement rates without control rates. Just imagine if 40% of the CBT group had a clinical response vs 40% of the usual care group. Claiming that fatigue improved in 40% of the CBT group would still be correct but completely meaningless and deceptive. Also, their implication that Cochrane 2008 showed an improvement in physical function in 40% of the CBT group appears to be false, as "clinical response" relates to fatigue scales, and there was no statistically significant improvement in physical function between groups at any time point.

    I agree the pattern of mistakes is most important but difficult to do. Personally I would rather just stick to the overall hypothesis of the cognitive behavioural model for CFS and then compare with the overall evidence relating to each pillar of the model and perhaps also with the cherry picking of proponents to see the discrepancies. Good point on the lack of normal robust scientific criticism and getting away with blunder after blunder. I look forward to seeing what you come up with in an analysis.
    Dolphin likes this.
  8. RustyJ

    RustyJ Contaminated Cell Line 'RustyJ'

    Messages:
    920
    Likes:
    338
    Brisbane, Aust
    Hi Alex, doesn't the exclusion of the dropouts undermine that argument? The figure is then not 40%. Leaving out this point seems crucial to the suggested outcome of the study. It suggests manipulation of the data, or at the very least omission of vital information, not just a matter of differences of interpretation.
  9. alex3619

    alex3619 Senior Member

    Messages:
    7,017
    Likes:
    10,793
    Logan, Queensland, Australia
    Hi RustyJ, ahh, but (hang on a sec, need to put on my black hat) those dropouts don't count unless they dropped out at the end of the study. Since they are delusional psychiatric cases they are resistant to therapy. If they had stayed, there is no evidence that they would not have improved too. Let me remind you that almost nobody reported adverse effects, so there was no good reason for them to drop out except for their psychiatric-condition-induced bias.

    Putting on a white hat, this argument is based on circular reasoning and so is invalid, but it could sound nice to some. The issue about under-reporting of harms due to excessively high criteria is also now recognized by some. Also, I wonder if they even looked for harms or recorded harms in those who dropped out. I could be wrong, but I do not recall reading about this.

    Bye, Alex
  10. markmc20001

    markmc20001 Guest

    Messages:
    877
    Likes:
    80
    This type of creative writing has turned science into kind of an "art" as well. :cool:

    Impossible to untangle all that "sleight of hand" crap.
    Enid likes this.
  11. Enid

    Enid Senior Member

    Messages:
    3,309
    Likes:
    840
    UK
    Interesting thread, many thanks - can I just make a point about the notion of everyone receiving standard medical care - in fact this is nonsense - the standard here is zero: a complete inability to diagnose the multiple dysfunctions researchers keep finding when they are looked for hard enough. They are not looked for in the first place.
  12. Snow Leopard

    Snow Leopard Senior Member

    Messages:
    2,261
    Likes:
    1,658
    Australia
    The pattern besides confirmation bias? To the point of withholding the objective measures of improvement (which are usually null).
  13. alex3619

    alex3619 Senior Member

    Messages:
    7,017
    Likes:
    10,793
    Logan, Queensland, Australia
    Hi Snow Leopard, the PACE trial publications are the rosetta stone. They made so many mistakes I think it constitutes a pattern right there. It cannot reasonably be argued to be an accident. It's part of what I am looking into. The big issue though about patterns is that many papers are written by different people at different times, so there are many loopholes for someone to claim coincidence. Bye, Alex
  14. oceanblue

    oceanblue Senior Member

    Messages:
    1,174
    Likes:
    343
    UK
    Great work by Biophile here, thanks!

    I also agree with alex3619 that what matters is the pattern, not individual cases, and that as a whole PACE could be the 'rosetta stone' (nice). Certainly for me that's where they crossed over from possibly overenthusiastically championing a pet theory to deliberately setting out to mislead (without actually lying).

    One note of caution: claiming references prove something when they don't at all is a widespread practice. I have come across it many times (to my frustration) in papers supporting a biomedical explanation of CFS, and in non-CFS clinical research too. Even when doing my biochemistry degree, where research papers were the main source of information and research standards were so much higher than in the CFS field, I'd often chase a promising reference only to find the referencing authors had put an unjustified spin on it.
  15. biophile

    biophile Places I'd rather be.

    Messages:
    1,371
    Likes:
    4,293
    van der Meer & Lloyd 2012 - Editorial Comment: A controversial consensus comment on article by Broderick et al.

    The article in question is a critique of ME-ICC, and this post was spawned from a related post on a different thread (http://phoenixrising.me/forums/show...us-Criteria-ME&p=235812&viewfull=1#post235812).

    oceanblue beat me to it (http://forums.phoenixrising.me/show...and-editorial)&p=235792&viewfull=1#post235792) but I will still post as I go into more detail. Relevant background is unbolded and the important quote is bolded and/or between asterisks:

    I comment in more detail on the first part on the other thread, but basically, once again we have the typical assertion or allusion that the "controversial" debate about CFS is fuelled by ideological simpletons while enlightened researchers such as themselves are above all that and just want to get on with the science. CFS-biopsychosocialist perspectives on the controversy tend to ignore the fact that their claims relating to CFS are part of the controversy as well and are being questioned on a rational scientific basis. And to which of the above mentioned groups of "protagonists" do the PACE authors belong?

    Anyway, although there may have been a few unscientific arguments towards the trial and personal attacks on the authors over the internet, the detractors of the critics usually issue a blanket dismissal of all criticisms without acknowledgment that legitimate criticisms could exist as well and are what drives the response towards PACE. van der Meer & Lloyd cite all 8 published Lancet letters to the editor (and related authors' reply?) on the PACE Trial for the claim of "unscientific and sometimes personal attacks on the researchers in [the] scientific literature". I quickly went through the letters to summarize each one below:

    Keeping in mind that I only provided a brief summary, do these letters sound "unscientific" to anyone? They are referenced with reasonable arguments. I did not see one instance of a "personal attack", nor is one mentioned in the authors' reply (much of which is the authors just reiterating what they already wrote in the trial paper and reassuring us that the methodology was sound). Is it possible that van der Meer & Lloyd never read any of these letters and just accepted without question the rumours about them and/or whatever they were told by the Lancet editorial team?

    van der Meer & Lloyd continue to claim that these (alleged) unscientific arguments and (alleged) personal attacks on the researchers also occurred "via the Internet", but the reference given is a CFS review paper from 2006 which is utterly irrelevant as evidence of occurrences in 2011 (a citation error perhaps?). So once again from detractors of the critics of the PACE Trial, we have unsubstantiated claims about critics issuing "personal attacks" towards PACE authors. They claim that criticism of the PACE Trial is unscientific, without refuting any of the arguments made against it. The followup statement about XMRV as a similar "example" again implies that the PACE Trial vs criticism was all about "science vs emotion".
    Valentijn, Sean, Dolphin and 2 others like this.
  16. alex3619

    alex3619 Senior Member

    Messages:
    7,017
    Likes:
    10,793
    Logan, Queensland, Australia
    Hi oceanblue, I had the same experience, even in textbooks. Who has time to check up on every reference in every paper they read though? Bye, Alex
  17. alex3619

    alex3619 Senior Member

    Messages:
    7,017
    Likes:
    10,793
    Logan, Queensland, Australia
    In reply to biophile's post 15, here is what I wrote on the other thread to the comment there:

    Hi biophile, just to pick up on a point you made that I agree with, on cherry picking supporting info: this is not only widely done but inevitable in any complex topic. It's not like you can reference all 5000 papers on ME and CFS in any publication; you have to use selection criteria. In this sense the cherry picking of the biopsychosocial proponents is justifiable. However, there are larger and overlapping issues. When faced with specific scientific challenges, especially those that go to the very foundation of the biopsychosocial hypothesis, what you use to support your argument is much more critical. It has to address the issue at hand, and do so in a rational and data-supported way. Typically this is not the case. For the PACE trial this is not the case. Instead we get multiple claims of violent patients, unscientific attacks, and so on. Arguing the man: a logical fallacy. They rarely address the complaints and issues, many of which come from respected scientists and clinicians: instead they divert attention to those hysterical patients again.

    This is politics and spin, not science. I think the problem stems from the historical situation. For decades they have not been substantially challenged. Nobody took them seriously, and they were in their own little isolated area: people outside this area just ignored them for the most part. Now more and more people are realizing just how foundationally baseless and methodologically flawed the biopsychosocial research is. BOOKS are being written about it by medical academics (I have one on order). The charge is not being led by hysterical patients, it's being led by medical academics, including other psychiatrists. They have never had to face this level of criticism, and it is getting worse as more and more wake up to what they have been saying and doing. They need a scapegoat, a straw-man, and they selected us, either consciously or unconsciously - I am in no position to infer motive or whether it's intentional, but I can point to the fact of it happening.

    Bye, Alex
    Tito likes this.
  18. Snow Leopard

    Snow Leopard Senior Member

    Messages:
    2,261
    Likes:
    1,658
    Australia
    If they weren't legitimate criticisms, they wouldn't have been published in the first place - remember there were five times as many letters submitted as were published. It would be more reasonable to question the validity of those letters.

    I think they are hoping that we simply take their word for it (that the letters were not credible). But the end result is that van der Meer and Lloyd are insulting the intelligence of their readers.
    Enid, Dolphin and Sean like this.
  19. Enid

    Enid Senior Member

    Messages:
    3,309
    Likes:
    840
    UK
    Oh wow - produce as many stinking papers as they like to enhance their egos, presented as science - come on, all of us on PR who know the real thing: SPEAK.
  20. Dolphin

    Dolphin Senior Member

    Messages:
    6,584
    Likes:
    5,186
    Good work, biophile.
    I suspect I have posted about others over the years here. However, my guess is I won't look back at my own messages much. So if people see points of mine, feel free to re-post my point or even re-word it the way you see it yourself if you prefer.
