"Got ME? Just SMILE!" - Media coverage of the SMILE trial…..

Barry53

Senior Member
Messages
2,391
Location
UK
Dr. Keith Geraghty said:
The SMILE trial compared LP plus specialist medical care (SMC) to SMC alone (commonly a mix of cognitive behavioural therapy and graded exercise therapy).
[My bold]

I had missed this point. I was just presuming SMC to be akin to that used in PACE, but of course now SMC really does include CBT and GET. @Keith Geraghty's statement is backed up by the SMILE paper which says ...
Interventions
All participants were offered SMC which focused on improving sleep and using activity management to establish a baseline level of activity (school, exercise and social activity) which is then gradually increased.
So the claim that SMILE is ...
Design
Pragmatic randomised controlled open trial.
... is rubbish, because there was no valid control arm at all. Unless of course that is EC's bastardisation of the English language by calling it "pragmatic"? i.e. Not controlled at all, but an EC-controlled trial.
 

Londinium

Senior Member
Messages
178
Barry53 said:
[My bold]

I had missed this point. I was just presuming SMC to be akin to that used in PACE, but of course now SMC really does include CBT and GET. @Keith Geraghty's statement is backed up by the SMILE paper which says ...

So the claim that SMILE is ...

... is rubbish, because there was no valid control arm at all. Unless of course that is EC's bastardisation of the English language by calling it "pragmatic"? i.e. Not controlled at all, but an EC-controlled trial.

Yes, ironically the positive result for LP might have been due to patients 'offered'* SMC in the LP arm spending more time standing in circles and shouting at symptoms and less time doing the (even more) actively harmful GET. In the same way homeopathy 'worked' in the past because the patient spent more time drinking purified water and less time being bled.

*It would be interesting to know whether attendance for GET/CBT in the SMC+LP arm was lower than in the SMC arm - 'offered' doesn't mean 'took up'.
 

BurnA

Senior Member
Messages
2,087
£25,000 to tell us what we already know about CBT and GET? Am I missing something here, can you please talk me through your reasoning?

Have you seen the Journal of Health Psychology special edition on PACE?

One of the most important publications on ME.

You are forgetting that it doesn't matter how much "we" know, the only thing that matters is what gets published in a peer reviewed journal.

That's why we need Keith and the 25k from MEA.
 

Kalliope

Senior Member
Messages
367
Location
Norway
The Conversation: Research Check: can "Lightning Process" coaching program help youths with chronic fatigue?
By John Malouff - Associate professor, School of Behavioural, Cognitive and Social Sciences, University of New England

The study findings are important enough to suggest that more research on the Lightning Process is warranted. But the findings are from a single study, with a single set of researchers. As such, they do not justify a conclusion that someone with the disorder ought to seek this specific treatment.

Edit To Add:
John Malouff takes part in the comment section of the article, as does Mark Davis.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
The participants were randomised into 2 groups but then within those groups they did the treatment from the other group (???) and then people had to CHOOSE to do LP.

The choice was to be in the LP study in the first place (before randomisation), hence many people declined to participate in the study because they knew LP is a quack therapy.
 

Mithriel

Senior Member
Messages
690
Location
Scotland
The trial says this
"We recruited 100 participants, of whom 51 were randomised to SMC+LP. Data from 81 participants were analysed at 6 months. Physical function (SF-36-PFS) was better in those allocated SMC+LP (adjusted difference in means 12.5(95% CI 4.5 to 20.5), p=0.003) and this improved further at 12 months (15.1 (5.8 to 24.4), p=0.002). "

which makes it sound randomised

but then there is this table (I can't get it to copy, so I'm typing it in)

51 allocated to SMC + LP
39 did SMC + full LP
3 did SMC + part LP
9 did SMC only

49 allocated to SMC only
46 did SMC only
1 received LP only
2 did SMC + LP

So the initial randomisation is meaningless. Were her calculations based on the allocated groups, or on the groups the patients themselves divided up into? A 14-year-old would be failed for submitting this as a class project.

It is unbelievable that this passed peer review and was not picked up by the people from the Science Media Centre.
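To make the allocation-versus-actual-treatment problem concrete, here is a minimal sketch (Python, purely illustrative - not the trial's analysis code) that rebuilds the counts from the flow table above and contrasts the groups as randomised with what participants actually did:

import pandas as pd

# Counts typed from the SMILE flow table quoted above
rows = []
rows += [("SMC+LP", "SMC + full LP")] * 39
rows += [("SMC+LP", "SMC + part LP")] * 3
rows += [("SMC+LP", "SMC only")] * 9
rows += [("SMC", "SMC only")] * 46
rows += [("SMC", "LP only")] * 1
rows += [("SMC", "SMC + LP")] * 2
df = pd.DataFrame(rows, columns=["allocated", "received"])

# Groups as randomised (what an intention-to-treat analysis uses): 51 vs 49
print(df["allocated"].value_counts())

# Groups as actually treated
print(pd.crosstab(df["allocated"], df["received"]))

# Everyone who did some form of LP, in either arm: 39 + 3 + 1 + 2 = 45
print(df["received"].str.contains("LP").sum())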
 

Mithriel

Senior Member
Messages
690
Location
Scotland
Just seen this by Esther12 on another thread

"Dropout rates were lower than I expected, with them getting some data from a decent percentage of participants at 1 year follow-up:

SMC: 49 participants -> 37
SMC+LP: 51 participants -> 44"

So it looks like the figures are for the original allocated groups, not for what treatment they actually had!!! Only 45 did LP according to her own figures (39 + 3 + 1 + 2 from the table above) :confused:
 

Esther12

Senior Member
Messages
13,774
Mithriel said:
Just seen this by Esther12 on another thread

"Dropout rates were lower than I expected, with them getting some data from a decent percentage of participants at 1 year follow-up:

SMC: 49 participants -> 37
SMC+LP: 51 participants -> 44"

So it looks like the figures are for the original allocated groups, not for what treatment they actually had!!! Only 45 did LP according to her own figures :confused:

I think it makes the most sense to analyse SMILE data according to randomisation. It's not exactly ideal that participants deviated from the treatments they were randomised to, but the researchers can't physically stop that from happening in a trial like this, and I think they dealt with this reasonably. (Although it would be better to have more information about all this).
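For anyone wondering what 'analysing according to randomisation' (an intention-to-treat comparison) looks like in practice, here is a minimal, hypothetical sketch on made-up SF-36-PFS numbers - the variable names, data and model are assumptions for illustration only, not the trial's dataset or analysis script:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    # Everyone is analysed in the arm they were allocated to,
    # regardless of which treatment they actually ended up doing.
    "arm": ["SMC+LP"] * 51 + ["SMC"] * 49,
    "sf36_baseline": rng.normal(45, 15, n).clip(0, 100),
})
# Invented follow-up scores - no relation to the real trial data
df["sf36_6m"] = (df["sf36_baseline"] + rng.normal(10, 20, n)).clip(0, 100)

# Difference in means at 6 months, adjusted for baseline - the same shape
# of analysis as the paper's "adjusted difference in means"
model = smf.ols("sf36_6m ~ sf36_baseline + C(arm, Treatment('SMC'))", data=df).fit()
print(model.params)
print(model.conf_int())

The point is simply that each participant is counted in the arm they were randomised to, whatever they actually did - which is what the 51 vs 49 follow-up figures suggest was done here.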
 

user9876

Senior Member
Messages
4,556
Esther12 said:
I think it makes the most sense to analyse SMILE data according to randomisation. It's not exactly ideal that participants deviated from the treatments they were randomised to, but the researchers can't physically stop that from happening in a trial like this, and I think they dealt with this reasonably. (Although it would be better to have more information about all this).

My assumption would be that those in the LP group who didn't do LP were filtered out as not suitable, since they seem to have some sort of process for filtering out those who are unlikely to be believers. If that is the case then it is right to keep that group together for the analysis.
 

charles shepherd

Senior Member
Messages
2,239
BurnA said:
Have you seen the Journal of Health Psychology special edition on PACE?

One of the most important publications on ME.

You are forgetting that it doesn't matter how much "we" know, the only thing that matters is what gets published in a peer reviewed journal.

That's why we need Keith and the 25k from MEA.
£25,000 to tell us what we already know about CBT and GET? Am I missing something here, can you please talk me through your reasoning?

There is a brief explanation as to why we are helping to fund Dr Keith Geraghty's quite diverse research work at the University of Manchester for the next two years in the MEA website announcement that I provided.

It sounds as though you are not aware of the numerous papers that he has been getting published over the past few months in which he has been challenging the academic basis for the use of CBT, GET and the Lightning Process in the management of ME/CFS.

Keith has also spent a great deal of time going through the vast amount of data that we collected for our MEA 'patient evidence' report on CBT, GET and Pacing and turning it into academic papers that are now being published.

I suspect that you may be outside the UK, where this sort of academic research may not be necessary. But I can assure you that it is a vital component of our strategy to challenge the current position here in the UK, where CBT and GET are the only recommended forms of treatment for people with mild or moderate ME/CFS.

So we are very happy with our decision to fund Dr Geraghty's research.

CS
 

CFS_for_19_years

Hoarder of biscuits
Messages
2,396
Location
USA

The greatest contribution to science made by Crawley et al (2017) is how the paper serves as an important reminder to always critically consider statistical results, and not get carried away by declaring a commercial product based on pseudo-science an effective treatment for any condition on the basis of a single non-blinded trial.
 

Benji

Norwegian
Messages
65
Hi, I have not read the thread, but I have a question that someone might be able to answer. I have been discussing this a little with a pwME who knows statistics, and she said that it would really help in judging the SMILE trial if an organisation asked for the data - that way we would see more clearly.
Does anyone know if anyone has asked for the data?
 

user9876

Senior Member
Messages
4,556
Benji said:
Hi, I have not read the thread, but I have a question that someone might be able to answer. I have been discussing this a little with a pwME who knows statistics, and she said that it would really help in judging the SMILE trial if an organisation asked for the data - that way we would see more clearly.
Does anyone know if anyone has asked for the data?

I think @JohntheJack has asked for the data. I think one of the things that is needed is the school-reported absence figures. But I don't think any stats they have or haven't done are the issue here. The whole thing is that the subjective measures are meaningless when LP tries to get people to think about symptoms differently. Perhaps the subjective measures could be seen as a success, in that people change their reports and hence think about symptoms differently - but it doesn't mean they are any better.

I think it would be interesting to do a detailed analysis of SF-36-PF answers and answer changes, but that is really asking for detailed data. Perhaps correlating these with school attendance and other measures.
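As a rough illustration of the kind of check being suggested - whether changes in self-reported SF-36-PF line up with a more external measure such as school attendance - here is a hypothetical sketch on invented data (all column names and numbers are assumptions; the real item-level data would have to be requested):

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 81  # roughly the number analysed at 6 months
df = pd.DataFrame({
    "sf36_change": rng.normal(10, 15, n),        # self-reported change in SF-36-PF
    "attendance_change": rng.normal(5, 20, n),   # change in % school attendance
})

# Rank correlation between the subjective measure and the more objective one
rho, p = spearmanr(df["sf36_change"], df["attendance_change"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

A weak correlation on real data would support the point that changed questionnaire answers don't necessarily mean changed function.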
 

Chrisb

Senior Member
Messages
1,051
user9876 said:
The whole thing is that the subjective measures are meaningless when LP tries to get people to think about symptoms differently. Perhaps the subjective measures could be seen as a success, in that people change their reports and hence think about symptoms differently - but it doesn't mean they are any better.

Last week I was watching a documentary about the Vietnam War. A journalist, who was there, was discussing the "metrics" of "body count" to assess who was winning. He referred to Robert McNamara's observation "measure what is important, don't make important what you can measure."

He added a critical twist by saying "If you cannot measure what is important, make important what you can measure."

This seems to be precisely what is done in most of these studies. They measure the answers to the questions, not the underlying realities, which are too opaque.
 

JohntheJack

Senior Member
Messages
198
Location
Swansea, UK
user9876 said:
I think @JohntheJack has asked for the data. I think one of the things that is needed is the school-reported absence figures. But I don't think any stats they have or haven't done are the issue here. The whole thing is that the subjective measures are meaningless when LP tries to get people to think about symptoms differently. Perhaps the subjective measures could be seen as a success, in that people change their reports and hence think about symptoms differently - but it doesn't mean they are any better.

I think it would be interesting to do a detailed analysis of SF-36-PF answers and answer changes, but that is really asking for detailed data. Perhaps correlating these with school attendance and other measures.

Yes, I have asked for the data. In fact, I had asked before the trial was published, and then requested again the day the study became available.

I imagine they'll try to find some exemption. But we'll see.
 

Benji

Norwegian
Messages
65
Thanks for your help. I really appreciate it.

We are trying to get an article in a journal in Norway and are checking out a lot of things.

This article, with its comments, is really informative: https://theconversation.com/researc...rogram-help-youths-with-chronic-fatigue-84769
Here is a comment from Dan Clarke:
That there was no significant difference between groups for their original primary outcome? That they appear to have failed to release results for school attendance figures verified by the participants school, instead relying on self-reported attendance which could be more prone to problems with biased reporting? That data on school attendance was missing for a third of the LP group at 12 months?

The claim that data on school attendance was missing for a third of the LP group at 12 months - is it actually a fact? Does anyone know where that statement comes from? Is it reliable, and can it be used?
 