
MAIMES: Would you like a Public Enquiry into the lack of care and treatment for people with M.E?

Laelia

Senior Member
Messages
243
Location
UK
I don't think this is sufficient to call it good evidence. It is anecdotal evidence, as far as I know not backed up by a double blind controlled trial.

On that basis @trishrhymes, would it be correct to say that you don't think that the large numbers of ME patient testimonies reporting deterioration from GET provide good evidence that GET is harmful to ME patients?

(Sorry Trish, I hope you don't mind me challenging you on this point. What I'm really trying to understand is when patient and/or practitioner testimony becomes "good evidence" in the eyes of the medical authorities)
 
trishrhymes

Messages
2,158
Feel free to challenge me, @Laelia, I often get things wrong.

I think @Jonathan Edwards already made the point that evidence for a treatment to be recommended by NICE and/or get drug approval has to come from a properly run, peer-reviewed, published medical trial with all the precautions of double blinding where possible.

On the other hand, evidence that a treatment has side effects bad enough for recommendations or approvals to be withdrawn can come from accumulated 'anecdotal' reports from patients and their doctors when the treatment is being used clinically outside a trial.

So evidence FOR must come from trials, evidence AGAINST can be accumulated anecdotal.

On this basis GET for ME/CFS fails on both counts: on efficacy, since PACE and FINE were null trials, and on safety, given the large patient surveys reporting harm.
 

Laelia

Senior Member
Messages
243
Location
UK
So evidence FOR must come from trials, evidence AGAINST can be accumulated anecdotal.

I'm assuming from what you say here that anecdotal evidence can be considered "good evidence". What I'm still failing to understand, in that case, is why anecdotal evidence from clinical audits cannot provide us with good evidence for any particular treatment (even if this evidence is not good enough for a treatment to be recommended by NICE).

Or to put it another way: Why is anecdotal evidence AGAINST a treatment considered valuable but anecdotal evidence FOR a treatment considered valueless?

(Sorry to labour the point here but I'm really struggling to get my head around this).
 
trishrhymes

Messages
2,158
One last try, @Laelia, then I'm opting out of this discussion.

Anecdotal evidence that a treatment seems to work for a particular condition is useful in prompting further research - for example, when Fluge and Mella came across several cancer patients, treated with Rituximab, who reported that their ME got better.

This led to clinical trials which, if successful in showing benefit for a significant number of patients, will prompt more trials, and eventually acceptance as a valid treatment.

They will also need to record adverse reactions, so that the balance of risk and reward can be made clear. This should be part of the trial reporting, and I'm sure will be.

If some doctors had not done these clinical trials, but had instead set themselves up in private practice offering Rituximab to anyone with chronic fatigue and then said lots of their patients got better and were able to return to work, we would have no way of knowing whether it was the Rituximab that helped the patients recover.

Perhaps, inadvertently, these hypothetical doctors selected patients who were already recovering anyway or who didn't have ME; or they recorded patients as recovered simply because they went away; or they decided, when the treatment didn't work, that the patients can't have had ME after all; or the patients were so grateful to be getting treatment that they had a temporary surge in wellbeing....

The point is, without a double blind trial, we can't know whether the evidence is good or not. So such claims of successful treatment would be nonsense.
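To make the selection-bias worry concrete, here is a toy simulation (a sketch with invented numbers - the 20% background recovery rate and the enrolment rates are assumptions chosen purely for illustration, not data from any real clinic). The 'treatment' in the model does nothing at all, yet the self-selected case series makes it look impressive, while a randomised comparison on the same population shows no effect:

    import random

    random.seed(1)

    # Hypothetical population: 1000 patients, 20% of whom are on a
    # recovering trajectory regardless of what anyone does for them.
    patients = [{"recovering_anyway": random.random() < 0.20}
                for _ in range(1000)]

    def recovers(p):
        # The treatment is inert: the outcome depends only on the
        # patient's own trajectory.
        return p["recovering_anyway"]

    # A private clinic whose case series (perhaps inadvertently)
    # over-represents patients who were improving already.
    def attends_clinic(p):
        return random.random() < (0.50 if p["recovering_anyway"] else 0.10)

    case_series = [p for p in patients if attends_clinic(p)]
    n_rec = sum(recovers(p) for p in case_series)
    print(f"clinic case series: {n_rec}/{len(case_series)} recovered "
          f"({100 * n_rec / len(case_series):.0f}%)")

    # The same inert treatment under random allocation: both arms
    # recover at the background rate, so no effect appears.
    treated_arm, control_arm = [], []
    for p in patients:
        (treated_arm if random.random() < 0.5 else control_arm).append(p)
    for name, arm in (("treated", treated_arm), ("control", control_arm)):
        rate = 100 * sum(recovers(p) for p in arm) / len(arm)
        print(f"randomised trial, {name} arm: {rate:.0f}% recovered")

The clinic's figure looks like strong evidence FOR the treatment, but it is produced entirely by who walks through the door.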

On the other hand, once a drug is recommended on the basis of sufficiently robust clinical trials, and goes into general use, it can be withdrawn if lots of reports come in from patients and doctors that serious side effects are occurring in some patients using the drug that perhaps didn't show up in the clinical trials.

Edit to add: Getting back to the point of this thread, I do not feel able to support this particular campaign, because I do not accept the claim in point 5 that there are many efficacious treatments. They may be good treatments or they may not. The evidence is not strong enough to pass scientific scrutiny, so it could damage our case on all the other points. How can we argue that PACE is bad science yet claim unresearched treatments are valid?
 

Laelia

Senior Member
Messages
243
Location
UK
Yup, they are a waste of time, because bias creeps in at all stages with this sort of comparison. That is the whole point of proper trial design. We have double blind randomised controlled trials to get away from this sort of comparison.

Anecdotal evidence that a treatment seems to work for a particular condition is useful in prompting further research

Yes that's what I thought, anecdotal evidence is useful for this purpose. It is for this reason that I am failing to understand why @Jonathan Edwards says that clinical audits are a "waste of time".
 
trishrhymes

Messages
2,158
Yes that's what I thought, anecdotal evidence is useful for this purpose. It is for this reason that I am failing to understand why @Jonathan Edwards says that clinical audits are a "waste of time".

It may be simply a case of terminology. I don't think JE and I are talking about the same thing here.

I was talking about observations based on a few interesting cases that lead to a hypothesis to be tested - as with Rituximab, or with Dr Myhill's patients, whom she has observed seem to improve with some of her treatments. These are preliminary findings suggesting further research.

I have just looked up 'clinical audit' and find it's about assessing whether practitioners are complying with best practice guidelines, not about finding out whether treatments work.

These are two very different things.

For example, this clinical audit of COPD care doesn't even ask whether what the nurses are doing is useful or effective; it simply asks whether they are doing what they've been told to do:
https://www.nice.org.uk/media/default/sharedlearning/721_clinicalaudit_report_729_re-auditcopd.pdf
 
Messages
1,478
Hi @Laelia. Here's a graphical representation. It's basically a concept funnel to illustrate the innovation process from ideas to final concept. You can use a number of techniques to whittle down ideas to ones that work. In the field of research it is common to start with literature reviews, anecdotal reports, etc., and interesting data collected from other studies. This is then pared down a bit by qualitative research (more specific patient testimonials) or prototype experiments, progressing to in-depth research based on an experimental design (lab based, with metrics etc).

When this shows promise you then isolate the concept you want to launch to market and do a validation trial (called a systematic review on the diagram). Validation trials are expensive and time consuming and involve a number of key checks and balances to make sure what you are launching is safe and still valid. So you only do them when you've done the cheaper steps, are sure you are on to something, and have eliminated the risks of it failing (including harming people).

[Image: concept funnel diagram]



Basically the whole argument here is that the validation trial at the end (PACE) hasn't been conducted properly, or really at all. That doesn't preclude anecdotal evidence being used at the beginning. The problem is Myhill is being just as bad (or actually worse) by trying treatments without a clinical trial to prove that they work.

Many researchers will publish papers right at the beginning of the funnel and then never follow anything up (a lot of the time to get a long list of papers with their name on so they feel good about themselves). These may not be progressed further for good reasons, but to quote papers of this sort and skip to treatment is pretty irresponsible.

Apologies if this is all stuff you already know.....just thought a picture might help
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Yes that's what I thought, anecdotal evidence is useful for this purpose. It is for this reason that I am failing to understand why @Jonathan Edwards says that clinical audits are a "waste of time".

Because audits are not used for that purpose. When Fluge and Mella made the anecdotal observation that an ME patient improved after cancer treatment, they used that to plan a study that produced reliable evidence. Audits are not used in that way. They are chiefly used to assess whether people are following the policy they say they are following, not to tell whether it is any good. Unfortunately, that has slipped: people are sometimes using audits as if they were going to give reliable evidence of efficacy, and they do not.

There might be some unusual situations where someone had a piece of anecdotal evidence that a patient improved after a treatment and then went back to check all the patients who had had that treatment to see if they improved - which is, I suppose, a bit like what you originally suggested. However, that is not what is meant by audit, and it would just be a larger piece of anecdotal evidence that would need following up with a reliable study.

The problem is that a high proportion of people involved in testing things in medicine are very muddled about what constitutes reliable evidence - as in the PACE authors' case.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I actually think this discussion is directly relevant to Dr Myhill's campaign. It is important that people can judge its usefulness, and that will depend on a clear understanding of what constitutes reliable evidence. So I think @Laelia's questions are very pertinent.
 
Messages
15,786
Or to put it another way: Why is anecdotal evidence AGAINST a treatment considered valuable but anecdotal evidence FOR a treatment considered valueless?
With anecdotal reports of benefit, it's well-established that bias can be a huge factor, especially if it's a treatment that the practitioner believes in, and is heavily invested in. Trials help avoid that bias, if they are appropriately designed.

Of course trials for a treatment are often set up by people who are promoting that treatment. But if the trial is reported properly, the claims of benefits can be scrutinized and even replicated - neither is possible with anecdotal reports.

Even for harms, anecdotal reports are inferior to results in trials. But due to researcher bias, harms in trials might be ignored or glossed over (see PACE). And trials take place in a specific time frame, under specific conditions, and with a specific population.

Hence anecdotal reports can add to the evidence, likely without really contradicting the harms found in the trial. Maybe there was an unanticipated interaction with another drug, which trial participants would not have taken due to being excluded. Maybe there's a side-effect which takes years to manifest. Maybe the treatment acts differently in people who are young, elderly, or obese.
 

user9876

Senior Member
Messages
4,556
On that basis @trishrhymes, would it be correct to say that you don't think that the large numbers of ME patient testimonies reporting deterioration from GET provide good evidence that GET is harmful to ME patients?

(Sorry Trish, I hope you don't mind me challenging you on this point. What I'm really trying to understand is when patient and/or practitioner testimony becomes "good evidence" in the eyes of the medical authorities)

I would see them as different questions. To establish the efficacy of a treatment you need to look at a sample over the full set, and it's like putting a universal quantifier (for all) on a statement - i.e. making a statement that for all people with diagnosis X, there is a probability Y that they will recover with treatment.

With harm it is more like looking for a counterexample, and hence it's an existential quantifier on the statement: there exist people with diagnosis X who are harmed by the treatment (see the sketch at the end of this post). If the evidence of harm is sparse then the statement may get ignored, but if there are a number of cases then the treatment is considered potentially harmful. Putting the statement this way, there is not really an attempt to quantify the harm, just to say the evidence has or has not reached a threshold.

Of course, in looking at a treatment you may want to know things like the effect size and chance of recovery, whether it might cause harm, and the harm caused by non-treatment.

With GET I would argue that there is evidence that it can be harmful. The issue that could be argued about is the definition of GET, and whether the patients reporting harm with GET were actually getting GET rather than some alternative thing that the deliverers thought was GET. This is perhaps where an audit would be useful, because it would look at the process being followed and hence the equivalence of treatments.
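In symbols, the contrast looks roughly like this (a rough sketch only, writing D(x) for 'x has diagnosis X'; the exact notation is chosen purely for illustration):

    % Efficacy is a universal ("for all") claim about the whole population:
    \forall x \,\bigl( D(x) \rightarrow \Pr(\mathrm{recovers}(x) \mid \mathrm{treated}(x)) = Y \bigr)

    % Harm only needs witnesses - an existential ("there exists") claim:
    \exists x \,\bigl( D(x) \land \mathrm{treated}(x) \land \mathrm{harmed}(x) \bigr)

A universal claim needs a representative sample of the whole population to support it, which is what a trial provides; an existential claim is settled by enough credible counterexamples, which is why accumulated patient reports can cross the threshold for harm.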
 

Laelia

Senior Member
Messages
243
Location
UK
What I'm still failing to understand, in that case, is why anecdotal evidence from clinical audits cannot provide us with good evidence for any particular treatment (even if this evidence is not good enough for a treatment to be recommended by NICE).

Or to put it another way: Why is anecdotal evidence AGAINST a treatment considered valuable but anecdotal evidence FOR a treatment considered valueless?

I can answer my own questions now that you have all explained things to me (thank you everyone):

1) Clinical audits don't, on the whole, provide us with any anecdotal evidence, as they don't normally collect the sort of information I was referring to in the hypothetical audit I described earlier (the reason for confusion here is that the type of audit Dr Myhill describes, performed by the British Society for Ecological Medicine (BSEM), does appear to collect this information)

2) No anecdotal evidence is valueless (again this misunderstanding was linked to the above)

One last try, @Laelia, then I'm opting out of this discussion.

I think we have got there now, Trish. I might think of a few more questions relating to other types of evidence, but feel free to opt out of answering them. Thank you so much for your patience. :)

Because audits are not used for that purpose.

My next question would be: Why aren't audits used for this purpose? (but this is straying a bit far from the topic of this thread so don't feel you have to answer that one)

@user9876 I really wanted to 'like' your post because it seemed like a very intelligent and thoughtful answer but unfortunately most of it went over my foggy ME head! :confused:

Thank you again everyone for your input. These things might seem obvious to some of you, but for those of us who haven't spent years following the PACE saga and don't have PhDs or careers in academia, it's not obvious at all.
 

Wolfiness

Activity Level 0
Messages
482
Location
UK
I thought that you weren't allowed to devise studies to try to formally prove that x is harmful, because it would involve deliberately trying to harm people? You can only devise studies that try to prove x is safe and fail. No?
 

Jonathan Edwards

"Gibberish"
Messages
5,256
My next question would be: Why aren't audits used for this purpose? (but this is straying a bit far from the topic of this thread so don't feel you have to answer that one)

The general idea of audit in other spheres is housekeeping quality control. So audit was introduced to medicine to check that path labs were all agreeing on their results, or that rheumatologists were all giving the drugs they were supposed to, or checking people for osteoporosis when they had agreed they should.

Unfortunately, audit is quite often hijacked as a 'soft' way of trying to show a treatment works. 'Let's see if the treatment we have been using is making people better'. But the only justifiable route to testing a treatment misses out this step altogether. The anecdote provides the 'lightbulb moment' - 'aha, perhaps giving parsnip juice will prevent dementia because this lady who is addicted to parsnips can do the Times crossword puzzle at the age of 112'. The NEXT stage should be to confirm that in a way that gives reliable evidence - a controlled trial with features where appropriate such as randomisation, blinding etc. An 'audit' which attempts to confirm a new hypothesis without doing things properly is just a way of trying to prove what you want to prove.

If your lightbulb moment comes from lab science rather than an anecdote, as in my case, there is a place for a small preliminary trial that is not expected to provide reliable evidence that the treatment is effective but can be expected to give some idea whether the whole thing is a waste of time or whether a full trial should be set up. That should still be done prospectively and documented according to predefined criteria. There are other slight variations on the theme, but the one thing there is no place for is a confirmatory study that is unreliable.

The real problem here is that ME physicians have had their lightbulb moments and then gaily treated large numbers of patients without knowing whether the treatment works. Presumably they tell the patients it will work and themselves at least half believe it works but not having done a proper trial they have no real idea. This simply should not happen. It is hard to see how it can be other than a con if patients are paying. Going back and 'auditing' the results of treating 100 people in this way is useless, because of all the bias problems.
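One of those bias problems, regression to the mean, is easy to see in a toy simulation (again a sketch with invented numbers; the severity scale, fluctuation size and enrolment threshold are assumptions for illustration only, not from any real audit). Patients tend to come for treatment when they are at their worst, so even a treatment that does nothing looks as if it helped when the same patients are measured again later:

    import random

    random.seed(2)

    # Hypothetical model: each patient's symptom severity fluctuates
    # around a stable personal baseline; higher scores = more severe.
    def severity(baseline):
        return baseline + random.gauss(0, 15)  # day-to-day fluctuation

    baselines = [random.gauss(50, 10) for _ in range(10_000)]

    # Patients enrol (and pay) when they are currently at their worst,
    # so the clinic's "before" scores are a biased sample of bad days.
    before, after = [], []
    for b in baselines:
        score_now = severity(b)
        if score_now > 65:              # only the currently-worst enrol
            before.append(score_now)
            after.append(severity(b))   # later score: pure fluctuation,
                                        # no treatment effect modelled

    improvement = sum(x - y for x, y in zip(before, after)) / len(before)
    print(f"{len(before)} patients 'treated'; mean improvement "
          f"{improvement:.1f} points, despite no treatment effect at all")

Going back over the clinic's records cannot remove this effect; only a concurrent, randomly allocated comparison group can.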

Sometimes it is pointed out that physicians may not have the resources to do proper trials. The answer to that is that they should not then sell unproven treatments while writing books indicating that they are well founded. Moreover, as I have said before, when I was in this position I raised the money to pay for the rituximab from my own bank account and some helpful friends who thought the project worthwhile. Doing things properly is tough but there are ways if you think it is worth it.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I thought that you weren't allowed to devise studies to try to formally prove that x is harmful, because it would involve deliberately trying to harm people? You can only devise studies that try to prove x is safe and fail. No?

It does not really work like that. There are lots of studies looking at harm - HRT and thrombosis, anti-inflammatories and stroke or GI bleeding, etc. You do not deliberately try to harm people. You study a treatment that is thought to be of likely benefit, but for which there is a concern about possible harm, to see if the harm might outweigh the benefit or be a contraindication in a subpopulation.