
PACE Trial and PACE Trial Protocol

Discussion in 'Latest ME/CFS Research' started by Dolphin, May 12, 2010.

  1. biophile

    biophile Places I'd rather be.

    Messages:
    1,348
    Likes:
    3,979
    Here and elsewhere, I really think you're onto something that's more potent than the depression/medication confounder. A staggering 80% of candidates were excluded from the trial. 3 sweeps of exclusions?! Others have already discussed on this thread the discrepancies between clinical cohorts vs research cohorts so I won't need to.

    Diagnosis of exclusion has been taken too far by the biopsychosocialists. We would expect common and obvious explanations for CFS-like symptoms to be excluded, but now anything that even remotely suggests organic disease is excluded as well, to a fault. Such a CFS construct changes from an "unexplained physical syndrome" into a circular ideology of: let's exclude all symptoms or signs or tests which may indicate obscure pathological disease processes as well, (pretend to) wonder why no pathology is found, assume or claim CFS is a functional disorder, then imply blanket application to all forms of ME/CFS. If the DSM-5 continues down the path it is, they can always fall back on the new SSSD and CSSD categories whenever organic pathology is found!

    Obvious neurological disorders and "organic brain diseases" would have been excluded from PACE. However, I think there is a grey area, or a limbo perhaps, between classical neurological signs and symptoms vs the more subtle manifestations which may not result in a diagnosis of a neurological disease but would exclude a patient from PACE. White (and Reeves) would want to exclude this grey area as much as possible to get a "clean" cohort. We know that White dislikes the CCC for including "neurological-like symptoms".

    I enjoyed David Tuller's response to this claim from the principal investigators of the PACE trial: "Patients and their doctors now have robust evidence that there are two safe treatments that can improve both symptoms and quality of life, however the illness is defined." Tuller's answer was basically: your 3 different CFS cohorts were all subgroups of the same Oxford criteria, not a gold standard of truly different cohorts. I'd like to see White et al justify (with a straight face) the use of 60 as the threshold for "normal" levels of the physical function subscale of the SF-36, and then watch Bleijenberg & Knoop repeat their editorial with a straight face. Have any "non ME/CFS community" articles about PACE picked up on the absurdly low threshold for normal?

    When I first started reviewing the research on CBT and GET for ME/CFS and was open to whatever the "science" suggested, even if it contradicted my own anecdotal experiences, I also came across polite but strong wording in patient advocate commentary such as "smoke and mirrors", which at the time I thought was probably an exaggeration of valid criticisms or at least a colourful description of them. However it seems this was accurate after all! Much of it seems to be "ideo-psychological" research.
  2. biophile

    biophile Places I'd rather be.

    Messages:
    1,348
    Likes:
    3,979
    I missed that! Most of the other measurements have 90%+ patients giving data.

    Interesting.

    Good point. The average participant was only ill for a few years and seemed to have relatively rapid improvements in the first 12 weeks. Placebo responses are also plausible.

    This was a very controlled and clean cut trial which is unlikely to reflect what happens in the real world. So yes, I would imagine the efficacy-effectiveness gap is significant for CBT and GET in PACE. The same could be said for SMC: how many patients in the NHS receive STFU "therapy" and GTFO "therapy" rather than "standardised specialist medical care"? On the other hand, SMC probably didn't involve any exotic treatments some patients may try and find effective. White et al have argued that the poor results and negative reports from patient surveys are the result of badly applied therapy. I think Dolphin(?) has pointed out somewhere that many of these patients went to professional services. This may support the existence of an efficacy-effectiveness gap and is very concerning if CBT and GET will now be further "rolled out" into the NHS.

    This is an interesting point from a general perspective beyond how data is cherry picked to boost the apparent "success" of a RCT. Here we are dealing with researchers who believe "abnormal illness beliefs" are of fundamental importance to ME/CFS and are concerned about patients joining support groups and going on the internet to read about their condition. I would not put it past such people to carefully phrase information or even lie to patients "for their own benefit", including assfacts about how effective CBT/GET is.

    Do you mean that the SD is low because the cut-off points skew the distribution?

    Yes, shouldn't a publicly funded trial give open access to its raw data?!

    This is a real possibility, reactivity bias is also a problem. The whole thing was unblinded too: "As with any therapy trial, participants, therapists, and doctors could not be masked to treatment allocation and it was also impractical to mask research assessors. The primary outcomes were rated by participants themselves. The statistician undertaking the analysis of primary outcomes was masked to treatment allocation."

    Good point, especially for GET. It is unlikely the 41% of the GET group reporting feeling "much better or very much better" (vs 25% for SMC) had decent 6MWD scores.

    Which of course now makes the 75%-debunked model even more generic than it already was. Very interesting when considering talk about how CBT is "used in other medical conditions". There is CFS-specific CBT, but it would be ironic for its proponents if the mild successes with it were generic and had nothing to do with CFS per se.

    This bar graph from Bowling et al 1999 really was a wonderful find. If I recreated it from scratch using the same data (and your modifications), would that evade copyright issues? Hopefully most of the other graphs I've been working on will be ready on the weekend.
  3. anciendaze

    anciendaze Senior Member

    Messages:
    824
    Likes:
    749
    At the risk of irritating some who have been following this discussion, I will try to make my point again, after that I will give up.

    A normal distribution is defined by two parameters: mean and variance. (SD is tied to variance.) These two numbers tell you everything you can possibly know about that distribution. No matter how long you study it you can't extract a scrap of meaningful information beyond these parameters. Correlations and p-values depend on these parameters.
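    For concreteness, here is the textbook density those two parameters pin down completely (just the standard formula, nothing specific to PACE):

    f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)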

    The whole point of this exercise was ostensibly to learn something about the population from which these 600 or so patients were drawn. If the purpose was to entertain 600 people and provide harmless employment for researchers, the British taxpayer should hear about this expenditure of 5M pounds. If the mathematical model of that population, a subpopulation of the general population, was so thoroughly flawed you can't even assign consistent meaningful values to those two parameters, any statistical inferences drawn concerning the population outside those patients actually in the study are invalid.

    I don't think I am being some kind of purist to insist that numbers on which the whole subsequent argument depends should have some meaning. The alternatives I've tried to bring up show that there is nothing particular about the numbers chosen. They could have had other values. This would change the bounds used in the study, or at least the meaning of those bounds.

    In fact the choice of bounds dominates virtually every aspect of the data. If this is an arbitrary choice by researchers, they can pretty well make the numbers say whatever they want.
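    To illustrate how much an arbitrary bound can dominate the headline numbers, here is a toy calculation of my own (the score distribution is entirely invented; only the mechanism matters):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical post-treatment physical-function scores for one trial arm,
    # bounded 0-100 in steps of 5. The numbers are made up for illustration.
    scores = np.clip(np.round(rng.normal(58, 20, 160) / 5) * 5, 0, 100)

    # The headline "proportion within the normal range" is hostage to where
    # the bound happens to be drawn.
    for threshold in (60, 70, 80, 85):
        print(f"threshold {threshold}: "
              f"{(scores >= threshold).mean():.0%} of the arm counts as 'normal'")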
  4. urbantravels

    urbantravels disjecta membra

    Messages:
    1,333
    Likes:
    505
    Los Angeles, CA
    Apologies for repeating the same long post in two topics, but I just realized I posted this, meant for this thread, in one of the NYT threads about Tuller instead:

    OK, since I've brought up questions about the placebo effect and the nocebo effect, I wanted to take a look at the APT arm protocol to see what was really in there. I had, inexcusably, been relying on some chance comment someone had made about how rigid the PACE version of "pacing" was, and I think I may have even been the one to coin the term "the evil version of pacing."

    On the face of it, the APT used in PACE doesn't really look *that* bad or that radically unlike what we would understand as "pacing." But even on my first quick read I definitely saw some poison pills in there.

    This is the one arm of the trial which claims to use a "pathological" model of ME, i.e. that it is a physical disease. The manual is sprinkled with familiar-sounding quotes from people with ME about how they learn not to do two major errands in one day, for instance, and how they learn to stop *before* they feel really exhausted, stuff like that.

    To be noted, though: there is constant emphasis on how this approach should be communicated to patients as "not a cure", and that the best it can do for you is "create the conditions for natural recovery to occur." (So even this model doesn't contemplate ME as a disease that might be incurable, or that an individual might never recover "naturally.") The quotes from patients, though *we* know them all to be accurate representations of how pacing works - I gotta tell you those quotes sound like a lot of doom and gloom to the uninitiated. When I was first ill and would read statements like "don't do the laundry and get the groceries on the same day," I was NOT ready to hear that; in fact it would set off a fury of grief that I had to accept such awful limitations on my life, when I used to do dozens of things each day. The grief process involved in accepting that is a major, major undertaking - and these poor people weren't actually getting any emotional support about it, or any real reason to hope (say, by being told that research is ongoing and someday there might be better treatments available. Because as far as the authors of the PACE trial are concerned, all the necessary research has been done & *they* know the cure already - they're deliberately giving people in this arm a treatment that they themselves think is ineffective.)

    On the other hand, the GET and CBT arms are filled with positive messages about self-empowerment, encouraging you not to think of yourself as really (or permanently) limited, "helping" you to "identify" your bad habits that are perpetuating this "vicious cycle" of fear of activity/deconditioning, etc., telling you over and over again that you can overcome this vicious cycle and improve your condition.

    And then measure outcomes subjectively, after a good year of inculcating the proper attitude in each group of patients about what they can expect from their therapy.

    Now think about the cohort issues: we've got a majority of patients in the trial who would never meet CCC, an unknown number of whom have never even experienced true PEM, a large number of whom probably have primary depression or some other fatiguing condition. Would being trained in "pacing" do these people any good at all? When the therapy is delivered with such a strong underlying message that "this probably won't help you improve at all"?

    Even if you somehow accidentally got into this trial with real M.E. (it would have to be a mild case), the expectation that there is some "natural recovery" that might *possibly* occur would certainly lead to disappointment with the APT treatment. As far as I understand pacing, it's not going to be a cure or even make me feel dramatically *better* in any way; what it does is cut down on the worst of the suffering. Not an effect you'd feel if you weren't acutely suffering going in; and those folks were pretty well screened to eliminate anyone who was really suffering physically. And if you actually *were* fatigued because you were depressed and deconditioned, of *course* you wouldn't feel better after your 52 weeks with Eeyore being told to lie down and think of England. And you'd be pretty mad that you didn't even get any "natural recovery."

    The folks hanging out with Tigger in the other two arms, where everything is wonderful and the power of positive thinking rules all, are being encouraged to believe they feel better. And, of course, if they really had been deconditioned and depressed, they might feel a bit better, especially in the GET arm - and they'd have that nice "sense of control" that they accomplished it through their own good efforts.

    OK, guesses as to which group(s) get the placebo effect and which group gets the nocebo effect?

    This is based on a very quick read and I'll have to delve deeper to flesh out these thoughts some more - some things still strike me as odd, such as the pacing group being *forbidden* to use heart rate monitors (?) and rely only on their "perception" of how fatigued they felt ... and the fact that some positive aspects of real pacing seem to have snuck their way into the CBT arm rather than being put in the APT arm.
  5. Dolphin

    Dolphin Senior Member

    Messages:
    6,439
    Likes:
    4,665
    Possibly still worthwhile submitting a letter

    The rules for submitting letters to the Lancet allow two weeks following publication.

    One E-mail suggested this was two weeks after it was first published online but this would deny people who get the print edition the opportunity to write.

    The print edition came out on the 4th/5th of March (the 5th is a Saturday, but perhaps that is possible with Saturday post).

    So I think one could still send in a letter if you have something to say.
    250 words max, 5 refs max (really 4 once you quote the original article), so one can't write forever.

    Register at http://ees.elsevier.com/thelancet/ and then go back there and press "Author Login" and follow the steps.

    Submissions will hopefully be collated. I'm not sure how many people will read this thread in the future, but published letters will have an impact, and hopefully even the collated unpublished letters will too.
  6. anciendaze

    anciendaze Senior Member

    Messages:
    824
    Likes:
    749
    random walks and patient foresight

    A second point which has bothered me has been how to incorporate patient beliefs into a random-walk model which might serve as a null hypothesis.

    Actually modeling foresight in a computer is artificial intelligence. What I can do instead is to assume patients have some such beliefs which sometimes work, and check the effect these have on a random walk. This is not some off-the-wall suggestion. The idea of a self-perpetuating belief system is at the core of the psychosocial model.

    My first run of a model would have patients, grouped according to selection at the time they entered the study, moving up or down the scales randomly. When this run is complete, I would then go back and examine those walks which showed a negative trend toward the end of the study. If I assume patients who acquire negative beliefs about the efficacy of the treatment they are receiving to improve their health drop out as soon as these beliefs become firm, I will eliminate those who show a downward trend later in the study, regardless of their position on the scale at the time they drop out. A better model might use a weighted probability of dropping out at each step.

    You don't need to eliminate patients at the bottom of the scale, at the time they drop out, to bias results. They leave based on their own perception of benefits they are receiving. The more effective a treatment is in eliminating only those with negative foresight, the more effective it will appear. You can run this on a computer and play with various weighting schemes to see the effect it has. What we see in this study is within the range of such effects even for modest numbers of dropouts.
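    Here is a minimal sketch of what I mean, in Python (my own toy model; the step size, dropout weighting and patient numbers are arbitrary assumptions, not anything taken from the trial):

    import random

    def simulate(n_patients=600, n_steps=52, dropout_weight=0.05, seed=1):
        """Random-walk null model: each patient drifts up or down a symptom
        scale at random; patients whose recent trend is negative are more
        likely to drop out, which biases the completers' average upward."""
        random.seed(seed)
        completers, everyone = [], []
        for _ in range(n_patients):
            score, recent, dropped = 0.0, 0.0, False
            for _ in range(n_steps):
                step = random.gauss(0, 1)            # no true treatment effect
                score += step
                recent = 0.8 * recent + 0.2 * step   # smoothed recent trend
                # chance of dropping out rises as the recent trend turns negative
                if recent < 0 and random.random() < dropout_weight * -recent:
                    dropped = True
                    break
            everyone.append(score)
            if not dropped:
                completers.append(score)
        return (sum(everyone) / len(everyone),
                sum(completers) / len(completers))

    mean_all, mean_completers = simulate()
    print("mean change, all patients:    %+.2f" % mean_all)
    print("mean change, completers only: %+.2f" % mean_completers)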
  7. Marco

    Marco Old blackguard

    Messages:
    1,136
    Likes:
    737
    Near Cognac, France
    "The folks hanging out with Tigger in the other two arms"

    Peach!:D
  8. Marco

    Marco Old blackguard

    Messages:
    1,136
    Likes:
    737
    Near Cognac, France

    Seeing as I was the one who raised the issue of statistical purity, I'll reply.

    There is no argument or disagreement here.

    The fact that the PACE authors have used parametric statistics inappropriately is undeniable and can and should be highlighted. It logically follows that any analysis derived from them has no validity whatsoever. However this is a point best made by a professional statistician. I believe one of the PACE team is a medical statistician and would be the one to respond to such suggestions. I suspect the ensuing argument would be in the nature of 'oh no we didn't - oh yes you did' etc, and they could probably pull some 'we performed a log transformation on raw scores to approximate a normal distribution' argument or similar waffle (leaving aside that in doing so they would obliterate the very nature of the underlying distribution - but that's another matter). Whether or not highlighting this failing will convince anyone of the underlying 'bad science' is anyone's guess.

    The other approach, which we seem to be taking here, is to set aside this legitimate criticism and to work from the data provided and the underlying statistical assumptions as they appear in the published paper. Rubbish as they are, they are the basis on which the PACE authors are building their claims of limited success. Even accepting their erroneous assumptions, it's relatively easy to point out startling deficiencies in their analyses, not least the pathetically low baselines set for 'normal ranges'.

    So my last word on this is: there is a very valid point to be made on the ropey stats, which invalidates all their results. But this point may be lost on many (particularly policy makers). There are also many points to be made on the data as presented which are likely to be more meaningful to the average observer. Equating 'normal' functioning with that of a 65+ year old is a fairly damning thing to highlight.

    If the PACE authors choose to talk about means and SD's then so be it.
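    As a PS on that hypothetical log-transformation defence, a quick illustrative check of my own (made-up SF-36-like scores, using scipy; the only point is that a simple transform cannot rescue a bounded, ceiling-heavy scale):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical SF-36-like scores: bounded 0-100 in steps of 5 and piled up
    # near the ceiling, roughly the left-skewed shape the population data show.
    raw = np.clip(rng.normal(loc=85, scale=20, size=2000), 0, 100)
    raw = np.round(raw / 5) * 5

    # Shapiro-Wilk normality check before and after a simple log transform.
    for label, x in [("raw scores", raw), ("log(x + 1)", np.log(raw + 1))]:
        w, p = stats.shapiro(x)
        print(f"{label:>10}: W={w:.3f}, p={p:.1e}  (p < 0.05 => not plausibly normal)")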
  9. Dolphin

    Dolphin Senior Member

    Messages:
    6,439
    Likes:
    4,665
    One person's response to Lancet article

    Here's one person's response which they said I could circulate.
    (Ideally I would recommend responses that are referenced)

  10. oceanblue

    oceanblue Senior Member

    Messages:
    1,174
    Likes:
    342
    UK
    Like most people, I probably don't fully understand this, but while trying to get to grips with biostatistics (studied with a textbook chosen expressly because it promised jokes) I came across this, which might be relevant:
    But since I'm out of my depth, it might not be relevant either. However, I thought your contributions deserved some reply; I do try to read your posts but, apart from the good gags, I don't really grasp them.
  11. oceanblue

    oceanblue Senior Member

    Messages:
    1,174
    Likes:
    342
    UK
    Apparently comment pieces are not normally peer reviewed. I think B&K made a genuine mistake, which goes to show how deceptive PACE were.

    In fact, Knoop did a study on recovery which appears to be the origin of the 'within 1 SD of the mean' formula. However, this study explicitly applied the formula to a healthy population, defined as the general population excluding those who reported a long-term health issue. This gave a reasonable SF-36 threshold of 80. B&K assumed PACE did the same thing, but PACE just used a general population, including the sick, giving a threshold of 60. Sneaky, eh?

    In case you think it was a simple 'mistake' by PACE to use the wrong population, it wasn't: Peter White co-authored that Knoop study which had explicitly used a healthy population.
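    To make the two thresholds concrete (the general-population mean and SD are the approximate figures quoted in the Lancet paper, as I recall them; I haven't re-checked the healthy-population figures, so take that line as indicative only):

    \text{threshold} = \mu_{\text{reference population}} - 1 \cdot \sigma_{\text{reference population}}

    \text{general adult population, chronically ill included (PACE):} \quad 84 - 24 = 60

    \text{healthy respondents only (the Knoop recovery paper):} \quad \mu_{\text{healthy}} - \sigma_{\text{healthy}} \approx 80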
  12. oceanblue

    oceanblue Senior Member

    Messages:
    1,174
    Likes:
    342
    UK
    Problem with self-reports acknowledged in a CBT study on MS

    How refreshingly honest.

    from A Randomized Controlled Trial of Cognitive Behavior Therapy for Multiple Sclerosis Fatigue
  13. Dolphin

    Dolphin Senior Member

    Messages:
    6,439
    Likes:
    4,665
    Letters sought in reply to PACE Trial article in a free newspaper for Irish doctors

    If anyone wants to send in a reply to this, it'd be appreciated.
    It was included in a free newspaper for Irish doctors.


    Last year, after they published a piece on the Santhouse et al. editorial in the British Medical Journal, they published not one but five letters over a series of weeks (from John Greensmith, Tom Kindlon, Gerwyn Morris, Orla Ni Chomhrai and Vance Spence - only two with Irish addresses) - that was most of the people who wrote in, as I recall.
    They may be glad to fill up space in their newspaper.


    People can also put comments online but letters would be preferred. You can always post your letter as a comment if you prefer.


    If you sent in a letter to the Lancet, you could get a chance to re-use it (ordinary newspapers might find it too technical). Probably best to not put the references underneath - just put the name of the first author + et al. + year in brackets e.g. (White et al., 2011) to refer to Lancet paper. If you want me to look at it, feel free.

    References aren't essential of course.

    Even if your point doesn't relate to what is in the Irish Medical Times article, you can still criticise the study.


    Probably best to keep letters under 400 words and ideally less than that again.
    Address is: editor@imt.ie that's editor @ imt.ie


    Don't forget to put your address in the letter and also a telephone number (which won't be published).


    Thanks



  14. anciendaze

    anciendaze Senior Member

    Messages:
    824
    Likes:
    749
    central limit theorem

    With the invocation of the central limit theorem, we are now officially in deep water.

    Generally, the CLT is used in the opposite direction from the way they are going. If you have large numbers of IID processes (Independent, Identically Distributed), these may result in an overall process with a normal distribution even if the individual process distributions are very different. Their individual distributions need a few properties, like a finite mean and finite variance. They must also be truly identical and statistically independent. The behavior of large numbers of identical electrons or molecules can give rise to normal distributions in this way. We are talking about a single distribution for the entire UK population which is demonstrably non-normal.

    The universe of samplings talked about here would have to be quite large. I'm sure researchers would appreciate a chance to run 10^9 5M pound studies. :D
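    For anyone curious about the direction the CLT actually runs in, a toy simulation of my own (the exponential distribution and sample sizes are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)

    def skewness(x):
        """Simple sample skewness; roughly 0 for a normal distribution."""
        return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

    # Individual draws from a clearly non-normal (exponential) distribution
    # stay skewed no matter how many of them you collect...
    draws = rng.exponential(scale=1.0, size=100_000)
    print("skewness of individual draws:  %.2f" % skewness(draws))   # ~2

    # ...it is the MEAN of many independent draws whose distribution
    # tends towards normal, per the CLT.
    means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)
    print("skewness of means of 50 draws: %.2f" % skewness(means))   # ~0.3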
  15. Dolphin

    Dolphin Senior Member

    Messages:
    6,439
    Likes:
    4,665
  16. Marco

    Marco Old blackguard

    Messages:
    1,136
    Likes:
    737
    Near Cognac, France
    SF-36 : Norms, skewness and 'top box' scores in a surgical population.

    I'd completely forgotten about this paper and apologies if it has already been posted.

    It might be some help with letters :

    "A review2 of surgical QoL studies has
    found that there were several deficiencies
    in the conduct of these studies. One of the
    most common problems was inappropriate
    statistical analysis. The proper statistical
    analysis of data is essential in interpreting
    the results of any study.3 Commonly,
    data from the SF-36 have been presented as
    means with standard deviations or standard
    errors of the mean. The basic assumption
    of these studies is that the data follow
    a normal (gaussian) distribution, having a
    bell-shaped curve. However, many of
    these studies did not perform the statistical
    tests4 needed to determine if, indeed, the
    data follow the normal distribution necessary
    to use this type of statistical analysis."



    "Conclusions: The SF-36 data did not follow a normal
    distribution in any of the domains. Data were always
    skewed to the left, with means, medians, and modes different.
    These data need to be statistically analyzed using
    nonparametric techniques. Of the 8 domains, 5 had a significant
    frequency of top-box scores, which also were the
    domains in which the mode was at 100, implying that
    change in top-box score may be an informative method
    of presenting change in SF-36 data"

    http://archsurg.ama-assn.org/cgi/reprint/142/5/473.pdf
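    If it helps anyone drafting a letter, here is a minimal sketch of what the nonparametric alternative looks like in practice (my own illustrative Python with hypothetical skewed scores, not the surgical or PACE data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two hypothetical groups of SF-36-like scores, bounded at 100 with a
    # heavy ceiling effect, so means and SDs are misleading summaries.
    group_a = np.clip(rng.normal(80, 25, 200), 0, 100)
    group_b = np.clip(rng.normal(88, 20, 200), 0, 100)

    # Parametric comparison (what the criticised studies report)...
    t, p_t = stats.ttest_ind(group_a, group_b)

    # ...versus a rank-based comparison that does not assume normality,
    # plus the 'top box' proportion the paper suggests reporting.
    u, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    top_box = [(g == 100).mean() for g in (group_a, group_b)]

    print(f"t-test p = {p_t:.3g}, Mann-Whitney p = {p_u:.3g}")
    print("proportion scoring the maximum (top box):",
          [round(float(x), 2) for x in top_box])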
  17. Bob

    Bob

    Messages:
    7,438
    Likes:
    8,583
    England, UK
    There's now a 'web appendix' published on the Lancet website, which I haven't seen before...

    It lists the nature of all the 'serious adverse events', many of which, as the authors state, don't seem to be related to ME or the treatments...

    See Page 5:
    http://download.thelancet.com/mmcs/...b72946c:606a418:12ecddf56bf:20861300538790422
    http://download.thelancet.com/mmcs/...72946c:606a418:12ecddf56bf:-35b81300536269641
    http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60096-2/fulltext#sec1
    (One of these links might work, but I think you have to be logged into the Lancet.)

    Some of the serious adverse events listed under the categories 'Inpatient investigation' and 'Increase in severe and persistent significant disability/incapacity' could be ME related, but otherwise they all seem unrelated.
  18. Angela Kennedy

    Angela Kennedy *****

    Messages:
    1,026
    Likes:
    152
    Essex, UK
    Problem is, the devil is in the detail (or lack thereof).

    If we take that as given (as I think we should, at least when considering this issue - think even of Peter White's dept denying that bowel problems are part of 'CFS/ME' in the NICE guidelines comments, for example), then these authors are ignoring neurological ME-related problems (they don't believe in ME with neurological features, Canadian-defined ME etc.).

    The caveat about the efficient process of exclusion of neurological ME patients notwithstanding, IF ANY actual ME (or other misdiagnosed 'fatigue') patients were in the mix, an increase in disability might well be a result of 'treatment' upon an abnormal response to increasing exertion etc.

    This in fact goes to the crux of the matter. They are blanket-claiming CBT/GET as safe for people with (even neurological) ME, against the evidence that it is contraindicated (which they don't adequately address, of course).

    Adverse outcome details should have been made explicit. Just writing vague descriptions like these, and then saying something like "independent investigators thought they were nothing to do with our treatment", which is basically what they've done, is really not good enough, and should have been picked up at peer review.
  19. Bob

    Bob

    Messages:
    7,438
    Likes:
    8,583
    England, UK
    Yes, you have a good point Angela...

    But it would be hard for us to categorise many of the events, such as surgery, hip replacements, accidental head injury and allergic reaction to bites, as being directly related to the treatment, although they could easily be ME related.

    Events like the 'head injury' and 'hip replacement' could be due to a patient falling over as a direct result of a weak body over-doing the GET. And pregnancy complications could be ME related due to a flare-up after GET. So, yes, the devil is in the detail, and we won't ever know what the exact details are.

    Some of the events listed could obviously be directly due to treatment-related flare ups (e.g. blackouts, chest pain, "acutely unwell", epileptic seizure, "investigation of headache", chest infection etc). And none (?) of these have been acknowledged as related to the treatments.
  20. anciendaze

    anciendaze Senior Member

    Messages:
    824
    Likes:
    749
    This fellow does know what he is talking about, but lacks the collection of merit badges required for weight of authority. Top box score analyses are not accepted standards in any field I'm aware of, however. As a comment suggested, there are many other non-parametric alternatives.

    Unfortunately, the number of studies with the same fundamental flaw is large enough for incompetent researchers to outvote objectors. Also, consider that these studies were based on surgery, associated with presumption of organic causation. Had they reviewed psychological literature the state of the art would have been considerably worse.

    Even after his presentation, we have one response to the talk indicating someone (McCarthy) fully intends to keep doing what he has been doing. There is no apparent awareness that the behavior he claims to see in data could be the result of the sampling process instead of the population being sampled.

    Meanwhile, someone in parliament should ask what will the UK do about 80,000 predicted zombies.:angel:
