• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Edzard Ernst - Critiques: RCT Acupuncture for cancer-related fatigue - or PACE?

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
I recall when this study was published back in October. Quite an interesting/amusing (depending on your point of view) critique of this and the way in which it was publicised.

I came across this earlier this morning from Edzard Ernst, in his blog post titled "No negatives please we are alternative!". I have taken the following extract (dated 21 November 2012):

...You might say that the above-mentioned acupuncture trial does still provide important information. Its authors certainly think so and firmly conclude that “acupuncture is an effective intervention for managing the symptom of cancer-related fatigue and improving patients’ quality of life”.

Authors of similarly designed trials will most likely arrive at similar conclusions. But, if they are true, they must be important!

Are they true? Such studies appear to be rigorous – e.g. they are randomised – and thus can fool a lot of people, but they do not allow conclusions about cause and effect; in other words, they fail to show that the therapy in question has led to the observed result.

Acupuncture might be utterly ineffective as a treatment of cancer-related fatigue, and the observed outcome might be due to the extra care, to a placebo-response or to other non-specific effects. And this is much more than a theoretical concern: rolling out acupuncture across all oncology centres at high cost to us all might be entirely the wrong solution.

Providing good care and warm sympathy could be much more effective as well as less expensive. Adopting acupuncture on a grand scale would also stop us looking for a treatment that is truly effective beyond a placebo – and that surely would not be in the best interest of the patient.

I have seen far too many of those bogus studies to have much patience left. They do not represent an honest test of anything, simply because we know their result even before the trial has started. They are not science but thinly disguised promotion. They are not just a waste of money, they are dangerous – because they produce misleading results – and they are thus also unethical.

Hmmm... well I wonder if he'd say the same about the PACE Trial and CBT and GET methods, application, and results? Of course not! :)

"Adopting CBT or GET on a grand scale would also stop us looking for a treatment that is truly effective beyond a placebo – and that surely would not be in the best interest of the patient."

Aren't I naughty? Ha! :rofl: Don't take that seriously folks! Blimey. It could be re-posted all across t'internet if I'm not careful with no context or disclaimer :D

I digress; it's an interesting blog article, I think, and he's an interesting chap - reminds me of a German doctor I was seeing some time back - he was another who had a low tolerance threshold and looked like a 'mad scientist' :)

Returning to that acupuncture study, Edzard comments thus - I mean damn you could take this to relate to PACE and CBT/GET - but only if you were feeling naughty and biased of course ;) :

Since several years, researchers in this field have adopted a study-design which is virtually sure to generate nothing but positive results.

It is being employed widely by enthusiasts of placebo-therapies, and it is easy to understand why: it allows them to conduct seemingly rigorous trials which can impress decision-makers and invariably suggests even the most useless treatment to work wonders.

One of the latest examples of this type of approach is a trial where acupuncture was tested as a treatment of cancer-related fatigue.

Most cancer patients suffer from this symptom which can seriously reduce their quality of life.

Unfortunately there is little conventional oncologists can do about it, and therefore alternative practitioners have a field-day claiming that their interventions are effective. It goes without saying that desperate cancer victims fall for this.

In this new study, cancer patients who were suffering from fatigue were randomised to receive usual care or usual care plus regular acupuncture.

The researchers then monitored the patients’ experience of fatigue and found that the acupuncture group did better than the control group.

The effect was statistically significant, and an editorial in the journal where it was published called this evidence “compelling”. :whistle:

Due to a cleverly over-stated press-release, news spread fast, and the study was celebrated worldwide as a major breakthrough in cancer-care. :whistle:

Finally, most commentators felt, research has identified an effective therapy for this debilitating symptom which affects so many of the most desperate patients. :whistle:

Few people seemed to realise that this trial tells us next to nothing about what effects acupuncture really has on cancer-related fatigue. :zippit:

Sorry. All this humour is overriding the seriousness of daft studies like this one. 'Self-serving' I think is what springs to mind. The notion that even an RCT (a hyped RCT) can be 'tailored' to produce flattering results - I mean that's... well that's... scandalous!! :)

Back to some maths:

In order to understand my concern, we need to look at the trial-design a little closer.

Imagine you have an amount of money A and your friend owns the same sum plus another amount B.

Who has more money? Simple, it is, of course your friend: A+B will always be more than A [unless B is a negative amount].

For the same reason, such “pragmatic” trials will always generate positive results [unless the treatment in question does actual harm].

Treatment as usual plus acupuncture is more than treatment as usual, and the former is therefore more than likely to produce a better result.

This will be true, even if acupuncture is no more than a placebo – after all, a placebo is more than nothing, and the placebo effect will impact on the outcome, particularly if we are dealing with a highly subjective symptom such as fatigue.

I can be fairly confident that this is more than a theory because, some time ago, we analysed all acupuncture studies with such an “A+B versus B” design.

Our hypothesis was that none of these trials would generate a negative result. I probably do not need to tell you that our hypothesis was confirmed by the findings of our analysis. Theory and fact are in perfect harmony.
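To make the "A+B versus B" arithmetic concrete, here is a minimal simulation sketch (Python, standard library only; every number in it is invented purely for illustration, not taken from the trial). Both arms get the same usual-care improvement; the add-on arm also gets a non-specific "placebo bump". A permutation test on the difference in means will then tend to come out looking "positive" even though the add-on has no specific effect at all:

```python
import random
import statistics

random.seed(42)

N = 150             # patients per arm (invented)
PLACEBO_BUMP = 0.8  # non-specific benefit of the add-on, in fatigue-scale points (invented)

# Improvement in fatigue score after treatment (higher = better).
usual_care = [random.gauss(1.0, 1.5) for _ in range(N)]                 # arm "B"
usual_plus = [random.gauss(1.0 + PLACEBO_BUMP, 1.5) for _ in range(N)]  # arm "A+B"

def permutation_p(x, y, n_perm=2000):
    """Two-sided permutation test on the difference in group means."""
    observed = abs(statistics.mean(y) - statistics.mean(x))
    pooled = x + y
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        left, right = pooled[:len(x)], pooled[len(x):]
        if abs(statistics.mean(right) - statistics.mean(left)) >= observed:
            extreme += 1
    return extreme / n_perm

diff = statistics.mean(usual_plus) - statistics.mean(usual_care)
p = permutation_p(usual_care, usual_plus)
print(f"A+B minus B: {diff:.2f} points, permutation p = {p:.3f}")
```

With these made-up numbers the add-on arm almost always comes out ahead; the point is that nothing in an A+B versus B design can distinguish a specific treatment effect from the non-specific extras (attention, expectation, placebo).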

Sorry about the poor attempt at sarcastic humour. Hope you don't lose the message. I couldn't resist after reading all that I have this morning.

Will need to go through this RCT paper and the headline pronouncements myself again, I think, but the argument - if it has been correctly applied, which I think it has in this case (like I'm someone who can claim otherwise!) - is compelling.

Give a patient more, and they will feel they have been treated better. Paid more attention. Taken seriously, etc. etc. etc. unless they die of course.

Until such time as you can prove what effect your therapy/treatment actually has on (in this case) the 'fatigue' associated with cancer how can you ever claim that it works?

And for that to happen, you'd need to better understand what actually causes the 'fatigue', and not operate blindly. The 'fatigue' needs to be quantified and any improvements need to be.... well you know where this is going. It isn't enough to do what has been done here.

You also need to be able to explain that your therapy/treatment is actually capable of doing something i.e. that it is demonstrably effective and not merely a placebo.

Oh for sure, it will get cancer patients through the doors of an acupuncturist I have no doubt. And who am I (or we) to pooh-pooh anyone who feels they have benefitted from an alternative intervention? But then, I'm not and neither is Edzard.

This is about a seemingly rigorous RCT claiming that the intervention has proven clinical significance. When it hasn't. At least not "beyond reasonable doubt" - to quote a familiar phrase.

Fire :ninja:
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi Fire, all this has already been discussed in relation to PACE over the last year or so. Indeed I am heading toward an analysis along these lines, which is reflected in some of my blogs. What is missing from the argument, though, is bias (in myriad forms) and the problems in rational analysis that occur with a verificationist strategy. This may indeed be poor science, but it also appears that the problems are conveniently overlooked. Add this to Zombie Science, for which my latest blog is mostly done, and the publication process itself sides with poor studies. The whole peer review process is failing far too often, as is editorial policy in major journals (I cite a case in my next blog).

The really alarming thing though is that doctors, researchers, reviewers and editors are not making more noise about this. Is silence consent? Or is it just ignorance or being too busy in the modern world to stop and smell the aroma of the research?

Bye, Alex
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
The really alarming thing though is that doctors, researchers, reviewers and editors are not making more noise about this. Is silence consent? Or is it just ignorance or being too busy in the modern world to stop and smell the aroma of the research?

I suspect in the case above of alternative medicine - it is the sound of tills ringing and the air of (false) legitimacy that drives this kind of 'research' forward, Alex.

BTW you might also want to read some of the comments attached to this piece from Edzard. There weren't many last I checked, but they were predictable.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi Firestormm, I did read the comments. There are only a handful.

A further point about what you quoted from me in post 3 is that such a financial motive does not apply to all the doctors and researchers who are ignoring similar failings in medicine. As I will report in my next blog, this now extends to some editors of major medical science journals. Now it's not surprising that most doctors, and even most researchers, won't find the bad science and comment - so much of it is outside their field, or they are too busy, or whatever. It should be surprising that editors and reviewers of medical journals don't always respond. What is really surprising is just how pervasive this seems to be. This could be because I hear of selected instances that are discovered, and so this may not apply more broadly.

Now psychological researchers can be congratulated on at least recognizing many of these issues and trying to address them. Similarly, the move to put science back into "evidence" based medicine is a good move, as are other proposals that have been discussed recently to counter bias in scientific research.

Decades ago most science funding came from government. Now it comes from companies. This alters the truth dynamic to a profit dynamic. It only takes a moderate percentage of studies to be biased to start to skew science into Zombie Science.

I suspect that a pervasive issue is a culture of covering up, of keeping the medical profession from disrepute. The problem is that eventually a lot of this will come out, and the entire profession may be brought into disrepute. In order for this not to happen medical practitioners have to engage with the research ... and I am not just speaking about ME.

Bye, Alex
 

Esther12

Senior Member
Messages
13,774
So many of the problems identified by those criticising alternative medicine do also apply to how CFS has been treated by those claiming to represent 'evidence based medicine'. Sadly, I think that 'CFS' is just taken less seriously, so people are less concerned. A lot of people also prefer to think that patient concerns are entirely related to 'anti-psychiatry' or 'stigma', rather than taking the time to look laboriously through the evidence and seeing how it has been spun by those promoting particular treatments.

Hopefully we'll be able to get some more unpublished data released, and that will help clarify things:

http://forums.phoenixrising.me/inde...d-releasing-data-on-recovery-from-pace.20243/
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
The problem is that alternative medicine is outside the mainstream, but psychobabble is accepted mainstream dogma. If they acknowledge it is poor, misleading or downright wrong, then it's all the responsibility of the medical systems, who should have stepped in decades ago. They have failed, systematically, over generations. No, that couldn't happen, surely not ...
 

Esther12

Senior Member
Messages
13,774
His latest blog post seems to include relevant points for us too:

http://edzardernst.com/2012/11/how-to-fool-people-with-clinical-trials/

Generally, I'm not that interested in the debunking of alternative medicine (I just never took it seriously enough to think it needed debunking), so I think I've slightly glossed over people's posts about Ernst and others, but so many of his points are relevant to problems with poor mainstream research too.
 

Esther12

Senior Member
Messages
13,774
How lazy my post was compared to Fire's! I've shamed myself.

Here are some notes, but they're all pretty obvious points, so I expect that you'd be better off just reading the piece for yourself.

Another easy method to generate false positive results is to omit blinding. The purpose of blinding the patient, the therapist and the evaluator of the outcomes in clinical trials is to make sure that expectation is not the cause of or contributor to the outcome. They say that expectation can move mountains; this might be an exaggeration, but it can certainly influence the result of a clinical trial. Patients who hope for a cure regularly do get better even if the therapy they receive is useless, and therapists as well as evaluators of the outcomes tend to view the results through rose-tinted spectacles, if they have preconceived ideas about the experimental treatment. Similarly, the parents of a child or the owners of an animal can transfer their expectations, and this is one of several reasons why it is incorrect to claim that children and animals are immune to placebo-effects.

The impossibility of truly blinding cognitive and behavioural interventions seems like more of a problem to me than many people like to realise. Generally there seems to be a desire to present the placebo effect as a real treatment that has a serious impact upon improving people's health, rather than simply being patients attempting to speak more positively about their condition in certain circumstances (once they've spent time with a therapist whom they like and believe is working towards their recovery, a belief that they themselves are performing tasks working towards their own recovery, a belief that 'positive thinking' is helpful, etc). I know this is all contested, controversial and uncertain, but with CBT for CFS we do have some reason to think that the improvements it can lead to in questionnaire scores do not represent real improvement in patients' health: in PACE there was no significant improvement in employment levels or in patients' scores on the six-minute walking test (the closest things to objective measures of capacity that we had). We have the three RCTs which found CBT improved questionnaire scores but did not lead to improvements in the amount of activity patients could carry out as measured by actometers, or improvements on objective measures of neuropsychiatric performance. To me it seems likely that relying on questionnaire scores will exaggerate the value of cognitive and behavioural approaches.


While these options for producing false positives are all too obvious, the next possibility is slightly more intriguing. It refers to studies which do not test whether an experimental treatment is superior to another one (often called superiority trials), but to investigations attempting to assess whether it is equivalent to a therapy that is generally accepted to be effective. The idea is that, if both treatments produce the same or similarly positive results, both must be effective. For instance, such a study might compare the effects of acupuncture to a common pain-killer. Such trials are aptly called non-superiority or equivalence trials, and they offer a wide range of possibilities for misleading us. If, for example, such a trial has not enough patients, it might show no difference where, in fact, there is one. Let’s consider a deliberately silly example: someone comes up with the idea to compare antibiotics to acupuncture as treatments of bacterial pneumonia in elderly patients. The researchers recruit 10 patients for each group, and the results reveal that, in one group, 2 patients died, while, in the other, the number was 3. The statistical tests show that the difference of just one patient is not statistically significant, and the authors therefore conclude that acupuncture is just as good for bacterial infections as antibiotics.
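Ernst's "deliberately silly example" is easy to check. Below is a hedged, standard-library-only sketch of a two-sided Fisher's exact test applied to his numbers (2 deaths out of 10 on antibiotics versus 3 out of 10 on acupuncture); the table comes from his post, the code is just an illustration of why such a tiny trial can't detect anything:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probability of every table with the same
    margins that is no more likely than the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(k):
        # P(k events in row 1 | fixed margins), hypergeometric
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs * (1 + 1e-9))

# 10 patients per arm: 2 deaths on antibiotics vs 3 deaths on acupuncture.
p = fisher_exact_two_sided(2, 8, 3, 7)
print(f"two-sided p = {p:.3f}")
```

The p-value comes out at 1.0: a difference of one death in twenty patients is about as uninformative as data can get, and a failure to detect a difference here is in no sense evidence of equivalence.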


Even trickier is the option to under-dose the treatment given to the control group in an equivalence trial. In our hypothetical example, the investigators might subsequently recruit hundreds of patients in an attempt to overcome the criticism of their first study; they then decide to administer a sub-therapeutic dose of the antibiotic in the control group. The results would then apparently confirm the researchers’ initial finding, namely that acupuncture is as good as the antibiotic for pneumonia. Acupuncturists might then claim that their treatment has been proven in a very large randomised clinical trial to be effective for treating this condition, and people who do not happen to know the correct dose of the antibiotic could easily be fooled into believing them.

Obviously, the results would be more impressive, if the control group in an equivalence trial received a therapy which is not just ineffective but actually harmful. In such a scenario, the most useless or even slightly detrimental treatment would appear to be effective simply because it is equivalent to or less harmful than the comparator.


One problem with PACE is that it's quite possible that there is no useful treatment for CFS. The version of Pacing that they were testing involved encouraging patients to only do 70% of what they felt comfortable doing... that sounds likely to be unhelpful to me.

Also, in my view 'pacing' is simply what one has to do if one is sensible and suffers from reduced capacity. I'm sceptical about the value of creating 'expert pacing therapists' who should be paid to manage patients. (I know others disagree). Surely, given the poor results for GET... most GET patients ended up pacing? If they really were able to continue increasing their activity levels, we would have seen better results.

A variation of this theme is the plethora of controlled clinical trials which compare one unproven therapy to another unproven treatment. Predictably, the results indicate that there is no difference in the clinical outcome experienced by the patients in the two groups. Enthusiastic researchers then tend to conclude that this proves both treatments to be equally effective.

This sounds so much like the editorial which accompanied PACE.

One fail-proof method for misleading the public is to draw conclusions which are not supported by the data. Imagine you have generated squarely negative data with a trial of homeopathy. As an enthusiast of homeopathy, you are far from happy with your own findings; in addition you might have a sponsor who puts pressure on you. What can you do? The solution is simple: you only need to highlight at least one positive message in the published article. In the case of homeopathy, you could, for instance, make a major issue about the fact that the treatment was remarkably safe and cheap: not a single patient died, most were very pleased with the treatment which was not even very expensive.

30-40% recovery rate?

And finally, there is always the possibility of overt cheating. Researchers are only human and are thus not immune to temptation. They may have conflicts of interest or may know that positive results are much easier to publish than negative ones. Certainly they want to publish their work – “publish or perish”! So, faced with disappointing results of a study, they might decide to prettify them or even invent new ones which are more pleasing to them, their peers, or their sponsors.

The results from PACE were so poor that I think we can be confident that they were not tampered with.

Am I claiming that this sort of thing only happens in alternative medicine? No! Obviously, the way to minimise the risk of such misconduct is to train researchers properly and make sure they are able to think critically.

Hmmm... that seems a rather generous interpretation of the cause of these problems.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Yes Esther12, APT, adaptive pacing therapy, was a non-therapy. With that as a benchmark, doing tequila shots would probably show up as an effective therapy.

Are you sure this guy isn't specifically writing about the PACE trial? ;)