• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Clinical and cost-effectiveness of the Lightning Process for chronic fatigue syndrome


Senior Member
Free full text: http://adc.bmj.com/content/early/2017/09/20/archdischild-2017-313375

Clinical and cost-effectiveness of the Lightning Process in addition to specialist medical care for paediatric chronic fatigue syndrome: randomised controlled trial

  1. Esther M Crawley1,
  2. Daisy M Gaunt2,3,
  3. Kirsty Garfield2,3,
  4. William Hollingworth2,
  5. Jonathan A C Sterne2,
  6. Lucy Beasant1,
  7. Simon M Collin1,
  8. Nicola Mills2,
  9. Alan A Montgomery3,4
Author affiliations
  1. esther.crawley@bristol.ac.uk

Objective Investigate the effectiveness and cost-effectiveness of the Lightning Process (LP) in addition to specialist medical care (SMC) compared with SMC alone, for children with chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME).

Design Pragmatic randomised controlled open trial. Participants were randomly assigned to SMC or SMC+LP. Randomisation was minimised by age and gender.

Setting Specialist paediatric CFS/ME service.

Patients 12–18 year olds with mild/moderate CFS/ME.

Main outcome measures The primary outcome was the 36-Item Short-Form Health Survey Physical Function Subscale (SF-36-PFS) at 6 months. Secondary outcomes included pain, anxiety, depression, school attendance and cost-effectiveness from a health service perspective at 3, 6 and 12 months.

Results We recruited 100 participants, of whom 51 were randomised to SMC+LP. Data from 81 participants were analysed at 6 months. Physical function (SF-36-PFS) was better in those allocated SMC+LP (adjusted difference in means 12.5 (95% CI 4.5 to 20.5), p=0.003) and this improved further at 12 months (15.1 (5.8 to 24.4), p=0.002). At 6 months, fatigue and anxiety were reduced, and at 12 months, fatigue, anxiety, depression and school attendance had improved in the SMC+LP arm. Results were similar following multiple imputation. SMC+LP was probably more cost-effective in the multiple imputation dataset (difference in means in net monetary benefit at 12 months £1474 (95% CI £111 to £2836), p=0.034) but not for complete cases.

Conclusion The LP is effective and is probably cost-effective when provided in addition to SMC for mild/moderately affected adolescents with CFS/ME.

Trial registration number ISRCTN81456207.
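For anyone unfamiliar with the "net monetary benefit" figure in the abstract: it collapses incremental cost and incremental health effect into a single number using a willingness-to-pay threshold. A minimal sketch of how an incremental NMB is generally computed (the threshold and the example numbers below are illustrative assumptions, not figures from this trial):

```python
def net_monetary_benefit(delta_effect, delta_cost, wtp=20000):
    """Incremental net monetary benefit: NMB = wtp * delta_effect - delta_cost.

    delta_effect: incremental health gain (e.g. QALYs) of intervention vs control
    delta_cost:   incremental cost of intervention vs control
    wtp:          willingness-to-pay threshold per unit of effect
                  (GBP 20,000/QALY is a commonly cited NICE figure;
                  treat it as an assumption here)
    """
    return wtp * delta_effect - delta_cost

# Purely illustrative numbers, not trial data:
print(net_monetary_benefit(0.05, 620))  # 380.0
```

A positive NMB at the chosen threshold is read as "probably cost-effective"; the confidence interval around it (as quoted in the abstract) carries the uncertainty.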


Senior Member
Rapid responses to articles published
Letters in response to articles published in the Archives of Disease in Childhood are welcome and should be submitted electronically via the journal’s website and NOT to ScholarOne. Contributors should go to the abstract or full text of the article in question. At the top right corner of each article is a “contents box”. Click on the “eLetters: Submit a response to this article” link.

Letters relating to or responding to previously published items in the journal will be shown to those authors, where appropriate.

Word count: up to 300 words
Abstract: not required
Tables/Illustrations: up to 2 (but must be essential)
References: up to 5


Senior Member
How the hell do these frauds get taxpayers' money for this non-science? I will say this once again: weighted questions on biased documents will never give a true overview of any trial/treatment. Once again they left out the use of any objective measurements. The people who sign off on these waste-of-time studies should have to justify why they give taxpayers' money to enhance the egos of such individuals. "Fuming" is an understatement whenever I read about so-called studies based on such ineffective data gathering. GPs do not believe how ill their patients are, but for some reason they will believe any old tosh if it's based on one of these poor studies.


Senior Member
How the hell do these frauds get taxpayers' money for this non-science? I will say this once again: weighted questions on biased documents will never give a true overview of any trial/treatment. Once again they left out the use of any objective measurements. The people who sign off on these waste-of-time studies should have to justify why they give taxpayers' money to enhance the egos of such individuals. "Fuming" is an understatement whenever I read about so-called studies based on such ineffective data gathering. GPs do not believe how ill their patients are, but for some reason they will believe any old tosh if it's based on one of these poor studies.

This trial was funded by the Linbury Trust (grant number LIN2038) and the Ashden Trust (grant numbers ASH1062, 1063, 1064). EMC was funded by an NIHR Clinician Scientist fellowship followed by an NIHR Senior Research Fellowship (SRF-2013-06-013) during the trial. SMC was funded by an NIHR Post-Doctoral Fellowship during the analyses of the trial.
For those who are confused about the term "Trust" in this context, it refers to a charitable body.
There's also this from the SMC:



The groups were reasonably balanced at baseline. There is some loss to follow-up in both groups, with slightly higher drop-out in the SMC-only group. Loss to follow-up can introduce bias. However, the authors report that those who did not complete follow-up were similar to those who did, and they also used an appropriate statistical method to estimate the missing values; the results remained statistically significant.
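For readers unfamiliar with the method referred to above: multiple imputation fills in each missing value several times with plausible draws, analyses each completed dataset, and pools the results. A deliberately crude toy sketch (normal draws from the observed distribution of a single variable; a real analysis would condition on covariates and also pool the variances via Rubin's rules):

```python
import random
import statistics

def impute_once(observed, n_missing, rng):
    """Draw plausible replacements for the missing values from the
    observed distribution (a deliberately crude imputation model)."""
    mu = statistics.mean(observed)
    sd = statistics.stdev(observed)
    return [rng.gauss(mu, sd) for _ in range(n_missing)]

def pooled_mean(observed, n_missing, m=20, seed=0):
    """Impute m times, analyse each completed dataset, and average the
    m point estimates (the pooling step of Rubin's rules; a full
    analysis would also combine within- and between-imputation variance)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(m):
        completed = observed + impute_once(observed, n_missing, rng)
        estimates.append(statistics.mean(completed))
    return statistics.mean(estimates)
```

The key assumption, which the critics in this thread dispute, is that the data are "missing at random" given what was observed; if drop-outs differed systematically, imputation cannot fix that.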

It is possible that the results might vary depending on whether participants completed all of the LP course. To check this was not biasing the results, the authors undertook a CACE analysis, which is an appropriate statistical approach, and the results still showed greater improvements in the SMC+LP group than in the SMC-only group.
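For reference, the simplest form of a CACE (complier average causal effect) analysis rescales the intention-to-treat effect by the compliance rate, under the assumption that assignment alone has no effect on non-compliers and that controls cannot access the treatment. This is the Bloom estimator, a minimal sketch of the idea, not the trial's actual model; the 80% completion rate below is hypothetical:

```python
def bloom_cace(itt_effect, compliance_rate):
    """Complier Average Causal Effect via the Bloom estimator:
    rescale the intention-to-treat effect by the share of the
    treatment arm that actually received the treatment. Assumes no
    control-arm access to the treatment and zero effect of mere
    assignment on non-compliers."""
    if not 0 < compliance_rate <= 1:
        raise ValueError("compliance_rate must be in (0, 1]")
    return itt_effect / compliance_rate

# Illustrative only: the abstract's 6-month ITT difference of 12.5
# points combined with a *hypothetical* 80% course-completion rate:
print(bloom_cace(12.5, 0.8))  # 15.625
```

Because the estimator divides by compliance, a CACE estimate is always at least as large as the ITT estimate; it cannot reverse its direction.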


Although SMC is defined in the study protocol, it is not always possible for researchers to ensure that this usual care is delivered in the same way to all participants. Moreover, SMC is in fact a number of different practices that are patient specific – the number and timing of sessions varied by need and some will have chosen graded exercise therapy or received CBT. So although everyone received SMC, the actual content of that SMC may have differed.

The study uses a patient-reported outcome. While this can be a strength, because it measures whether patients feel they have improved, the participants in this study were not blinded, and as the authors point out, this may have biased the outcome. Participants knew whether they were in the LP group and may have completed their outcome questionnaires accordingly. LP can be costly (the introduction suggests £620 per participant) and may be seen as desirable, so the outcome measures may have been completed by patients in a way that was biased in its favour.

The authors also rightly point to the low uptake of the study amongst those who were eligible. This study may, therefore, be a self-selected group and its results may not apply to the population of young people with CFS more widely.

The study experienced relatively high loss to follow up and quite a large proportion of participants did not complete the full LP course.

The authors have used appropriate statistical methods throughout to deal with these issues and provide estimates. However, results based on models are never quite as good as, for example, not losing people to follow up. It is not clear why there was attrition from both arms of the study, nor why some people who were randomised to LP did not complete all their sessions. We do not know the implications for the results, however, this might have implications if this treatment were to be rolled out more widely.


Senior Member
The study will never reflect ME patients when three months of fatigue and some light dizziness are enough to get a diagnosis...?

Low study uptake. Did patients know that this was an LP study at first, when they got recruited? My theory is that many of the patients who wanted to opt out had read about LP and thought, "this is not for me, I don't have depression, I have no psychological problems, I just suffer from fatigue, PEM and pain".

Why not also add Jason (2006)? I simply don't understand why they call this a ME study.
It seems likely that Tuller, Coyne, etc. are going to be looking at this, so I've not tried to get myself to go through this in detail, but here are my notes from the other thread.

I've really just ignored the subjective self-report outcomes. They seem pointless for a trial of LP.

Dropout rates were lower than I expected, with them getting some data from a decent percentage of participants at 1 year follow-up:

SMC: 49 participants -> 37
SMC+LP: 51 participants -> 44


But Table 3 has the data for school attendance, and missing data is more of a problem here. This is the closest thing that they have to an objective outcome:

SMC:
six months: 37 participants
twelve months: 36 participants

SMC+LP:
six months: 41 participants
twelve months: 34 participants

The difference was only significant at twelve months, when data was missing for 17 of 51 participants from the LP group.


For school attendance at six months there was no significant difference between groups, and I think that this was intended to be SMILE's primary outcome.

This paper mentions the change in outcome:

changed our recommendation for the primary outcome for the full study from school attendance to disability (SF-36 physical function subscale) and fatigue (Chalder Fatigue Scale).


They say this on SMC:

The number and timing of the sessions were agreed with the family depending on each adolescent’s needs and goals. Those with significant anxiety or low mood were offered additional CBT. Participants could choose to use physiotherapist-delivered graded exercise therapy, which provides detailed advice about exercise and focuses on an exercise programme rather than other activities.

I couldn't see any info on how many participants in the different arms made use of CBT or GET. That seems really important. Could anyone else see this information anywhere?

It's possible that those in the LP arm were less likely to receive CBT/GET, so got all the positive narrative/response bias stuff, but avoided a lot of the worst stuff from CBT/GET - like how Tuller pointed out that FITNET could have been better than the control as the control often involved face to face CBT/GET.

I've not been able to find out what was going on with CBT/GET provision in the different arms.

The uptake was relatively low, so this could be an unusual sub-group of teens with CFS.

Summary: This trial looks pretty worthless to me, as we expected from the protocol. But it's going to be great for letting Phil Parker make lots of money from the desperate parents of sick children. Good work Mary Jane Willows, you deserve an OBE.
Is there any data on school attendance before treatment? I also note they only look at attendance in the week before, which doesn't give a very accurate view, especially as participants chose when they filled in the survey and had been given sessions on positive thinking. They could have selected a good week to write about.

There should have been an ongoing record of school attendance


Senior Member
San Francisco
Creepy history related to LP:

From what I've read, LP is highly manipulative. You are always supposed to say you're doing great, for example, and if you're not doing well, it's because you haven't been doing the process correctly, or not doing it enough. I.e., blaming the victim. So people who've spent a lot of money--or maybe a relative's money--or are simply true believers will push themselves, and if they actually have cfs/me they will crash. (Which may be why 1/3 of the kids in the LP arm of the trial dropped out.)

I did some reading about NLP today. (It gets complex; I'm not going into all the details.) One of the influences on NLP was a speech pathologist named Wendell Johnson. When he was teaching at the University of Iowa in 1939 he and grad student Mary Tudor conducted an experiment on children who stuttered. Some of the children suffered from speech and psychological problems for the rest of their lives. Here's what Wikipedia says about it:

"It was dubbed the "Monster Study" as some of Johnson's peers were horrified that he would experiment on orphan children to confirm a hypothesis. The experiment was kept hidden for fear Johnson's reputation would be tarnished in the wake of human experiments conducted by the Nazis during World War II. Because the results of the study were never published in any peer-reviewed journal, Tudor's thesis is the only official record of the details of the experiment.[1]"

I don't entirely know what to make of this. I don't know how much influence the Monster Study had on Johnson's later work, such as his book People in Quandaries, which is very, very similar to NLP; or at least the embryonic version of NLP I knew about because I was friends with a linguistics student who took classes from NLP’s developers in the 1970s.

I can't imagine how Crawley's study got approved. Anyone with a free couple of hours could've discovered the Monster Study and the dangers of psychological experimentation (read: manipulation) on children. I suppose good intentions were had by all.
Couple of thoughts:

- What is SMC? There is no known, reliable treatment for ME, so any treatment someone receives will probably be symptom-based and might involve a bit of guesswork. We know that even in groups that are less heterogeneous outcomes with medications vary quite a lot, and with just ~40 people at follow up it is really hard to know how the SMC arm 'should' look (i.e., how it would look over a larger sample size).

- As Esther12 already mentioned, if you do not have the data on who received additional CBT/GET sessions, this becomes even more of a mess. On a side note, I am not at all convinced physiotherapists are good at delivering exercise programmes with goals other than restoring function after injury and similar events. Training is actually remarkably complicated and extremely easy to do wrong in sick people, especially when there is no agreed-upon protocol to follow and no one actually knows what the responses 'should' look like (do we go beyond the anaerobic threshold? do we restore movement patterns, reducing pain? do we just go for walks for 5 minutes so we learn that it is our horrible fear of going for walks that induces a constant state of panic? without good data, a lot can happen here that we will not know about).

- The SMC release says:

This means (and the authors do point this out) that the conclusion can only be that LP is effective in addition to SMC.

Not even this is true here, if one takes the sentence literally. This is a bit more nitpicking than necessary, I guess, and I know what they mean is that this is the only conclusion the trial design structurally allows (which is true). But since it is not actually known how the SMC arm was put together, and it is, from what we seem to have available, theoretically possible that they made everyone worse in the non-LP arm by torturing patients with GET while giving the people in the LP arm some form of medication that actually improved their symptoms a bit, the trial is uninterpretable without a look at the actual raw data, assuming they even bothered to collect everything meaningful. If one has to control for even more splitting into subgroups, because SMC as they defined it may be highly variable, the statistical power becomes laughable. This is a similar problem to every 'CBT' trial ever, because talking therapies cannot be standardised the way dosages of medication can. The sample size needed to control for fluctuations in different compositions of this 'SMC' would be huge.

- the very possible pre-selection bias, both in recruiting for LP and in recruiting 'ME patients' with the criteria they used in general, might skew the results

- how do the methods they apparently used for drop-out work? We know ME is a relapsing/remitting kind of disease in many people. We cannot pre-select patients for this because we do not know whose illness would swing hard in the next 12 months and whose would not.

- they changed their primary outcome measure from something kinda objective to something subjective in an unblinded trial. We've had that discussion, but it is especially bad here because the LP appears to be primarily about people rephrasing how they think about what is happening and, in my opinion, threatening them with actual psychological violence if they dare not to do so.

- even school attendance is not that objective, because you can be convinced or force yourself to go even when you normally would not want to. Without somehow controlling for differences while at school this does not say much. Even comparing grades would not tell the whole story because the kids might put in a lot more effort to stay at the same level (this study excluded subjects who were too sick to attend the sessions, so probably no one was even at the true house-bound level in the first place. They do say mild/moderate.)

- I am not aware whether we actually know what the illness progression in mild/moderate adolescent patients with unspecified onset would look like over time without intervention. E.g., people with glandular-fever-induced ME-type symptoms are said to improve quite a lot anyway, but I am not aware whether different triggers/subsets do so as well. So if one cannot control for this, especially with just about a hundred subjects, randomly assigning them to groups will skew the results in an unpredictable manner.

- I suppose patients knew they were receiving a treatment that is supposed to cost almost a grand. This might increase placebo responses and induce a tendency to misreport actual outcomes based on how humans generally react to these things.

- the effect size would need to be huge to infer anything here, if you consider all of the above, even assuming that the recruited population had the same underlying medical conditions and that response to coercion was comparable among groups, which is, as stated, already a leap of faith.

and the big one:

This is literally a trial about telling adolescents to ignore their symptoms and claim recovery while ignoring actual changes in health. They measure how well they did in an unblinded trial by asking them to fill out a questionnaire. This does not measure how well people do, this measures how well one is able to convince kids to say things one told them to say. If you want to draw any conclusion on actual changes in health from this, the effect size would need to be gigantic to be even remotely confident there might be any causation here at all.
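To put the statistical-power point in the list above into rough numbers: the standard normal-approximation sample-size formula for comparing two means shows how fast the required n grows once subgroup splits shrink the difference you can hope to detect. All numbers below are illustrative assumptions, not parameters from the trial:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Sample size per arm to detect a difference in means `delta`
    between two groups with common standard deviation `sd`, via the
    standard normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

# Illustrative: detecting a 10-point difference with an assumed SD of
# 20 needs about 63 per arm; halve the detectable difference (as
# subgroup splitting effectively does) and the requirement quadruples.
print(n_per_arm(10, 20))   # 63
print(n_per_arm(5, 20))    # 252
```

With roughly 40 analysable participants per arm, there is little room for splitting by SMC content, trigger, or subgroup before power collapses, which is the point being made above.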

Jenny TipsforME

Senior Member
This does not measure how well people do, this measures how well one is able to convince kids to say things one told them to say
Exactly. If it wasn't significant all it would show is you're really bad at delivering LP :rolleyes:

And activity trackers are so cheap now, it almost makes you think they don't want to know about objective improvements ;)

A shame we can't show LP is better than GET though, that would have been an interesting development.