PACE Trial and PACE Trial Protocol

Dolphin

Senior Member
Messages
17,567
Well if they expect patients with scores of 50-60 to work, then logically, they should also abolish the aged pension. ;)
And remember that people with ME/CFS often achieve these scores when they are not working and generally living reduced lives. So they might say they're "limited a little" for walking a km/mile or half a km/mile (different versions actually use different measurements!), but they might not answer the same way if they tried to do 20/30/40 hours a week of work like people of similar age.

In my view, actometer readings would be a much better way to see whether people have "normal" functioning than the physical functioning subscale.

Going out for a short walk once a day or every second day, when that makes up most of their total activity for the day, shouldn't really count. It's like those disability assessments: people with ME/CFS may be able to do something once, but that doesn't mean they could do it for eight-hour shifts or whatever.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I've taken another look at this in the paper, and I see what you mean. However, I think the reason the researchers didn't quote differences is this:

With PF or Fatigue scores you can compare, say, the mean PF score of CBT with the mean PF score for SMC. You are comparing the difference of means.

However, when you're looking at the proportion of a group that meets a particular threshold, e.g. 61% of the GET group have 'improved', I don't think it's statistically correct to measure the difference between groups. I think you can say, e.g., that the GET group improved more than the SMC group and quote a p value for this, but you can't be precise about the size of the difference. So while a 'net increase of 15%' is probably a good indication of the size of the difference, I don't think it's statistically robust. If this is right, the authors are probably reporting the data in the right way (even if they've changed the definition of 'improved' from the protocol).

However, I'm happy to be corrected on this if anyone knows better (Dolphin?).
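
For reference, here's how a difference between two proportions is usually given a confidence interval — a minimal sketch, where the 61%/45% proportions and group sizes of ~160 are approximations from this discussion, not the paper's exact figures:

```python
# Risk difference between two proportions with a Wald 95% CI.
# Figures are illustrative approximations from this thread, not the paper.
from math import sqrt
from scipy.stats import norm

n_get, n_smc = 160, 160      # approximate group sizes
p_get, p_smc = 0.61, 0.45    # proportions meeting the 'improved' threshold

diff = p_get - p_smc         # the 'net benefit'
se = sqrt(p_get * (1 - p_get) / n_get + p_smc * (1 - p_smc) / n_smc)
z = norm.ppf(0.975)          # ~1.96 for a 95% interval
lo, hi = diff - z * se, diff + z * se

print(f"risk difference = {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```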

ocean, I think you might be making things over-complex here...
It would have been easy for the authors to state that 16% of people benefited from GET when compared to the SMC control group.
I'm absolutely certain that they would have made the comparison if it was something that would work in their favour.

I've thought about this further, and I see what you mean, ocean...
I've been over-simplistic, so I guess we have to keep scrutinising the statistics!
Big thanks to all of you who are doing that.
 

oceanblue

Guest
Messages
1,383
Location
UK
I've thought about this further, and I see what you mean, ocean...
I've been over-simplistic, so I guess we have to keep scrutinising the statistics!
Big thanks to all of you who are doing that.

It does depend on context. If you're making a general point, you're right about the net benefit being 16%. If you're trying to nail a point in a letter to the Lancet, then interminable scrutinising of the statistics is the order of the day, which is as dull as it is frustrating!
 

Doogle

Senior Member
Messages
200
Question

What is the definition of the difference between the "As randomised" and "actual" numbers in Table 1 of the study? Sorry if it's obvious.
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
A couple of quick points; sadly I have failed to keep up with the thread due to distractions, so apologies if this has been noted already.

There has been discussion on Bad Science of the possibility that the patients could have shown a kind of placebo effect; the possibility that maybe they [said they] got a bit better just because they believed in the treatment. The point was made that the patients' pre-trial expectations of CBT were lower than their expectations of the 'usual treatments', and the conclusion drawn that this suggests that effect didn't apply.

I suspect it may work like this. The outcome measures are ultimately all in the nature of questions asking "are you content with this treatment?" - indirectly, that question is behind all the measures, because patients who feel positively towards the people involved are more likely to say positive things about them. So...

If one went into a treatment with very low expectations, any apparent positive experience would then seem more significant relative to expectation compared with the experience of going in with naively high hopes and getting absolutely nowhere. "Pleasantly surprised" versus "hopes dashed" could explain those statistically small effects, perhaps? If you don't expect much, you won't be disappointed...

More general point: I think it's time to get some Good Science onto the Bad Science forum. They have a little superficial discussion going, as always, but it is at least nominally science-based and surely, surely to goodness there are some points that could be made there by our top people that would make at least a few people think again?

All that moving of goalposts halfway through, in particular - anything that would stand out clearly as bad practice to a proper scientist - anything stark like the facts about the actometers and the changing of outcome measures, the really killer points that suggest bad practice - that should be their stock in trade and I can't see how they'd wriggle out of that actometer point. On what planet is it good science to say clearly in public that you're going to measure things one way and then change all those definitions after your results come in so as to make your study say what you want it to say?

Note that any truth spoken there has to be definitively referenced and backed up: in general they're not interested in going looking to find out what truth there is in claims they are biased against, just in picking holes in anything they don't happen to like that doesn't fit the rules of their game - but some of them do occasionally seem to take bits of the truth on board when they're handed to them on a plate, in their own language...

There are people on Bad Science who are representing...but it could always use a few more, and this seems like as good a study to 'go large' on if we're talking Bad Science. The usual health warnings re:BS apply though: thick-skinned advocates only - their forum is dedicated to the "great british sport of moron-baiting", and many of them seem to like nothing better than mocking vulnerable people (they tend to lack empathy and they like/need to feel superior to someone, is my psychoanalysis of it), so don't anybody let 'em get to you - pegs on noses and sick bags at the ready... :)
 

Dolphin

Senior Member
Messages
17,567
What is the definition of the difference between the "As randomised" and "actual" numbers in Table 1 of the study? Sorry if it's obvious.
They tried to stratify the participants using certain factors/predictors, but for some reason incorrect labels were used for some people. "Actual" is the real figure. I think this has thrown a few people, as I've seen the "as randomised" figure quoted.

A database programmer undertook treatment allocation, independently of the trial team. The first three participants at each of the six clinics were allocated with straightforward randomisation. Thereafter allocation was stratified by centre, alternative criteria for chronic fatigue syndrome,12 and myalgic encephalomyelitis,13 and depressive disorder (major or minor depressive episode or dysthymia),14 with computer-generated probabilistic minimisation.
[..]

Because some errors were made in stratification at randomisation, we used true status variables rather than status at stratification as covariates.

ETA: here are more details from the protocol paper but most people won't really need to know this much detail I reckon:
Randomisation and Enrolment procedure
Participants will be allocated to one of the four trial arms (ratio 1:1:1:1) by the Mental Health & Neuroscience Clinical Trials Unit (MH&N CTU) based at the Institute of Psychiatry. Allocation will be stratified by centre, CDC Criteria (met or unmet), London Criteria (met or unmet) and depressive disorder (major, minor depressive episode and dysthymia being present or absent) using minimisation with a random component [45]. The stratification on these criteria is to ensure equal proportions in each treatment arm. The first N cases (N will not be disclosed) will be allocated using simple randomisation to further enhance allocation concealment.

Once an eligible participant has completed the baseline assessment and given written informed consent, the RN will contact the MH&N CTU for treatment allocation by facsimile, giving the criteria needed for randomisation. Minimisation is carried out with a random component using a customised Microsoft Access database which will be used to hold the basic details collected to facilitate subsequent verification and to generate the allocation. Allocation is concealed because an independent group are responsible for this allocation. The confirmation of stratification details and treatment allocation will be communicated by email or facsimile to the RN within 24 hours. The RN sends back an acknowledgement of receipt to the CTU. This whole procedure is kept independent and separate from the trial statisticians. The RN will on the same day inform the participant of his/her treatment group in person or by phone, and will also inform the SSMC doctor and appropriate therapist. The therapist will contact the participant to arrange the first treatment appointment as soon as possible (within 5 working days). The SSMC doctor will also arrange to see the participant within one month of treatment allocation. The individual assignments will be available to the local team on a need-to-know basis, with the exception of the trial statisticians.
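
For the curious, "minimisation with a random component" works roughly like this — a toy sketch under my own assumptions (the factor names and the p_best probability are made up for illustration; this is not the trial's actual Access database implementation):

```python
# Toy minimisation with a random component: each new participant is
# usually sent to the arm that best balances the stratification factors
# seen so far, with an occasional random override for unpredictability.
import random
from collections import defaultdict

ARMS = ["APT", "CBT", "GET", "SMC"]
counts = defaultdict(int)  # (arm, factor, level) -> running count

def imbalance(arm, factors):
    # How many previous participants with these factor levels are already in `arm`
    return sum(counts[(arm, f, lvl)] for f, lvl in factors.items())

def allocate(factors, p_best=0.8):
    best = min(ARMS, key=lambda arm: imbalance(arm, factors))
    arm = best if random.random() < p_best else random.choice(ARMS)
    for f, lvl in factors.items():
        counts[(arm, f, lvl)] += 1
    return arm

print(allocate({"centre": "A", "CDC": "met", "London": "unmet", "depression": "absent"}))
```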
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
Hi Dolphin, and thanks so much for all the great work you're doing.

I agree that this thread could be long for somebody, and people might give up reading before actually turning what they read into any action, e.g. writing a letter. Although I don't think it should happen with every paper, it could be argued that a separate thread could be set up with the nuggets from this thread. Perhaps somebody could go through it and either try to summarise it as you did, or simply copy and paste the interesting points.

Anyway, I don't want to land work on anybody....

But I think we should be allowed to have a discussion that flows naturally enough, as it has been up to now, with on-topic points.

Absolutely, and apologies for butting in occasionally without having had a chance to follow the whole thread. Really sorry if my interventions interrupt the flow - people, please do tell me if so.

Yes, I was meaning to suggest a separate thread or threads for that, I suppose.

What I was pointing at was that this reminds me of a pattern we get into: we have a great discussion here and some fantastic stuff is uncovered along the way, but then what happens to it?

Summaries, on separate threads particularly, don't happen nearly enough IMO. Finishing the job off seems to me always a problem: distilling the results, writing them up, proofing and re-proofing and fully referencing the most important points, driving up the quality of documentation together. It gets harder and more boring to maintain that sort of focus as the job nears completion, and the temptation to finish prematurely is massive, but that seems to me something that the community could help with: there are lots of folk out there who can pick up the baton, spot spelling mistakes and improve the style and accuracy of the prose, after the fine minds have done all the hard work of the initial untangling of the web of deceit...

All of us, not just on PR, are going to need to continue to do a lot of work on this one, and IMO it should be the top priority and the focus for quite a while: it's so, so important to counter this study and exploit its many failings to the max. So there can't be too many threads on the PACE trial for me...
 

Dolphin

Senior Member
Messages
17,567
More general point: I think it's time to get some Good Science onto the Bad Science forum. They have a little superficial discussion going, as always, but it is at least nominally science-based and surely, surely to goodness there are some points that could be made there by our top people that would make at least a few people think again?

All that moving of goalposts halfway through, in particular - anything that would stand out clearly as bad practice to a proper scientist - anything stark like the facts about the actometers and the changing of outcome measures, the really killer points that suggest bad practice - that should be their stock in trade and I can't see how they'd wriggle out of that actometer point. On what planet is it good science to say clearly in public that you're going to measure things one way and then change all those definitions after your results come in so as to make your study say what you want it to say?

Note that any truth spoken there has to be definitively referenced and backed up: in general they're not interested in going looking to find out what truth there is in claims they are biased against, just in picking holes in anything they don't happen to like that doesn't fit the rules of their game - but some of them do occasionally seem to take bits of the truth on board when they're handed to them on a plate, in their own language...

There are people on Bad Science who are representing...but it could always use a few more, and this seems like as good a study to 'go large' on if we're talking Bad Science. The usual health warnings re:BS apply though: thick-skinned advocates only - their forum is dedicated to the "great british sport of moron-baiting", and many of them seem to like nothing better than mocking vulnerable people (they tend to lack empathy and they like/need to feel superior to someone, is my psychoanalysis of it), so don't anybody let 'em get to you - pegs on noses and sick bags at the ready... :)
If people want to go out and play over on "Bad Science", I hope they'll make time to "do their homework" and send a letter to the Lancet. (Internet discussions can be ephemeral - published letters can be quoted long into the future, as well as hopefully influencing people and annoying the authors!)
 

Dolphin

Senior Member
Messages
17,567
Hi Dolphin, and thanks so much for all the great work you're doing.



Absolutely, and apologies for butting in occasionally without having had a chance to follow the whole thread. Really sorry if my interventions interrupt the flow - people, please do tell me if so.

Yes, I was meaning to suggest a separate thread or threads for that, I suppose.

What I was pointing at was that this reminds me of a pattern we get into: we have a great discussion here and some fantastic stuff is uncovered along the way, but then what happens to it?

Summaries, on separate threads particularly, don't happen nearly enough IMO. Finishing the job off seems to me always a problem: distilling the results, writing them up, proofing and re-proofing and fully referencing the most important points, driving up the quality of documentation together. It gets harder and more boring to maintain that sort of focus as the job nears completion, and the temptation to finish prematurely is massive, but that seems to me something that the community could help with: there are lots of folk out there who can pick up the baton, spot spelling mistakes and improve the style and accuracy of the prose, after the fine minds have done all the hard work of the initial untangling of the web of deceit...

All of us, not just on PR, are going to need to continue to do a lot of work on this one, and IMO it should be the top priority and the focus for quite a while: it's so, so important to counter this study and exploit its many failings to the max. So there can't be too many threads on the PACE trial for me...
Thanks Mark.
I can see your point about long threads.
Also, they can put people off reading the info at all - I have avoided some long threads on XMRV in the past (partly because I'm out of my depth as well). If people have ideas on how to mobilise the troops on this, it'd be great. It's a pity people can't send in e-letters on this one - it is a bit of work to send in a proper letter, whereas with journals like the BMJ, lots of people can see their e-letter/rapid response up on the site. Saying that, letters here can only be 250 words, so people will hopefully not have put in too much effort if their letter isn't published. There may be other ways to respond, e.g. letters to newspapers.
 

anciendaze

Senior Member
Messages
1,841
Selection Effects

Mark has alluded to some selection effects which could affect outcome in the absence of efficacy. I have previously mentioned the Hawthorne Effect, in which knowing that you are part of a study changes results. If this did not affect patients, it would certainly affect participating therapists.

I think it is significant that the number of patients who declined to participate, plus the number who failed to complete the study, is roughly equal to the number who took part. I believe it is safe to assume those who did participate thought this might benefit them, and that those who persisted throughout a year were motivated to improve.

If you ran a study on 1,200 people, then selected only those who believed they would get better, and persisted in trying, how much would this subset improve with no treatment whatsoever? A null hypothesis based on this idea might actually achieve the modest results claimed.
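
Purely to illustrate the shape of that null hypothesis, here is a toy simulation with made-up numbers: no treatment effect at all, but participants who happen to be doing worse are more likely to drop out, so the completers' average change looks positive.

```python
# Toy simulation (invented numbers): zero average benefit, but selective
# drop-out of those doing worse makes completers appear to improve.
import numpy as np

rng = np.random.default_rng(0)
n = 1200
baseline = rng.normal(38, 8, n)       # hypothetical SF-36 PF scores
true_change = rng.normal(0, 10, n)    # no average benefit whatsoever

# Those doing worse are more likely to quit: each person completes only
# if their change beats a noisy personal drop-out threshold.
completers = true_change > rng.normal(-10, 10, n)

print(f"completion rate: {completers.mean():.0%}")
print(f"mean change, everyone:   {true_change.mean():+.1f}")
print(f"mean change, completers: {true_change[completers].mean():+.1f}")
```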

A healthy person should have little trouble walking at 6 km/hr. This is handy because it means they would cover 100 m/min, or 600 m in 6 min. At the outset, patients were performing at about 53% of this pace; the most successful group reached around 62% after a year. This sounds reasonable, until you realize they were not walking for an hour, as healthy people do, but only for 6 min.
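
The arithmetic, spelled out (a trivial sketch; the 53%/62% figures are the approximations quoted above):

```python
# 6 km/hr -> 100 m/min -> 600 m in a 6-minute walk for a healthy adult.
healthy_6min = 6000 / 60 * 6   # metres covered in 6 minutes at 6 km/hr
print(healthy_6min)            # 600.0
print(0.53 * healthy_6min)     # ~318 m at baseline
print(0.62 * healthy_6min)     # ~372 m for the best group after a year
```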

Could the subgroup which self-selected out of the study after starting have made this much difference? Not out of the question. Accounting for it should certainly reduce this apparent success.

That pretty well covers the objective results. What about self-assessments? For this I direct attention to the remarkable history of a French pharmacist, Émile Coué. Anything sound familiar?
 

Mark

Senior Member
Messages
5,238
Location
Sofa, UK
I find it surprising (not really) that Bad Scientists fascinated by placebo effects are so credulous about the significance of such tiny effects in such a complex and questionable experimental setting.

The PACE results clearly can't be explained by the hypotheses on which the therapies are based, or the results of these therapeutic interventions would surely be dramatic, with willing volunteers who want to get well and therapists who've had years and years to hone their techniques.

So if we want to explain these results, effects like the Hawthorne effect, the effect of self-delusion, and the distortions introduced by the experimental and statistical details cited here seem to me the most likely candidates.
 

Sean

Senior Member
Messages
7,378
This sounds reasonable, until you realize they were not walking for an hour, as healthy people do, but only for 6 min.

And only once, with no repeat test at 24 or 48 hours.
 

oceanblue

Guest
Messages
1,383
Location
UK
I think it is significant that the number of patients who declined to participate, plus the number who failed to complete the study, is roughly equal to the number who took part. I believe it is safe to assume those who did participate thought this might benefit them, and that those who persisted throughout a year were motivated to improve.

If you ran a study on 1,200 people, then selected only those who believed they would get better, and persisted in trying, how much would this subset improve with no treatment whatsoever? A null hypothesis based on this idea might actually achieve the modest results claimed.

A really interesting point. And presumably this could also help explain why, for trials in general (as well as CBT for CFS in particular), real-world clinical outcomes are usually not as good as trial outcomes.
 

oceanblue

Guest
Messages
1,383
Location
UK
What I was pointing at was that this reminds me of a pattern we get into: we have a great discussion here and some fantastic stuff is uncovered along the way, but then what happens to it?

This is crucial. Letters to the Lancet is one thing we need to do, as I believe Dolphin may have mentioned here once or twice :).

I also think this would be great material for Ben Goldacre's Bad Science blog and Guardian column (as opposed to the Bad Science forums). If he ran the story it would make quite an impact - and I believe the column is well-read within the scientific community too.
 

anciendaze

Senior Member
Messages
1,841
selection versus adverse outcomes

I'm going to restate an argument which turned up earlier in this long thread. I am not claiming to have discovered this, and I confess to putting my own interpretation on it.

This study is remarkable mainly for avoiding the adverse outcomes found in other studies. How did this come about?
We've already noticed that GET has been watered down to the level of "puttering around the home". Is there a way formerly adverse outcomes could have become selection effects boosting results?

Here's one example:

An adverse outcome will be considered to have occurred if the physical function score of the SF-36 has dropped by 20 points from the previous measurement.

This scale gives the following points:

- Bathing or dressing yourself: 10 points
- Bending, kneeling, or stooping: 10 points
- Walking one block: 5 or 10 points
- Climbing one flight of stairs: 5 or 10 points
- Lifting or carrying groceries: 0 or 5 points

This says anyone making it to a session on their own should get a minimum of 35-45 points. (Patients with a score of 65 were considered ill enough to enter the study. I don't yet know an exact cut-off. 75?) My inference is that patients starting at the low end and suffering a decline of 20 points would be unlikely to complete the study, a selection effect eliminating adverse outcomes in the lower tail of the distribution, and possibly even extending above the mean.

It seems to me a patient starting with a score of 55, and suffering a drop of 20 points, would be only marginally able to continue the trial. A few adverse outcomes could be reported for patients starting around 75. None should be expected from patients starting around 35.
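
To make the range argument concrete, a small sketch (the 35-point "can attend unaided" floor and 65-point entry ceiling are this post's working assumptions, not figures from the paper):

```python
# Range argument: with an entry ceiling of 65 and a ~35-point floor for
# attending sessions unaided, a 20-point drop is only observable
# (without drop-out) for patients starting roughly in the 55-65 band.
ENTRY_CEILING = 65   # assumed upper bound on SF-36 PF at entry
ATTEND_FLOOR = 35    # assumed minimum to reach sessions unaided
DROP = 20            # protocol's adverse-outcome threshold

for start in range(ATTEND_FLOOR, ENTRY_CEILING + 1, 10):
    after = start - DROP
    reportable = after >= ATTEND_FLOOR
    print(f"start {start}: after a {DROP}-point drop -> {after}, "
          f"{'still attending (reportable)' if reportable else 'likely drop-out (unreported)'}")
```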

Did patients at the low end of the scale receive assistance to make it to sessions?
Can those more familiar with the numbers improve this argument?
 

oceanblue

Guest
Messages
1,383
Location
UK
This says anyone making it to a session on their own should get a minimum of 35-45 points. (Patients with a score of 65 were considered ill enough to enter the study. I don't yet know an exact cut-off. 75?) My inference is that patients starting at the low end and suffering a decline of 20 points would be unlikely to complete the study, a selection effect eliminating adverse outcomes in the lower tail of the distribution, and possibly even extending above the mean.

It seems to me a patient starting with a score of 55, and suffering a drop of 20 points, would be only marginally able to continue the trial. A few adverse outcomes could be reported for patients starting around 75. None should be expected from patients starting around 35.

Did patients at the low end of the scale receive assistance to make it to sessions?
Can those more familiar with the numbers improve this argument?

I'm pretty sure the trial protocol (http://www.biomedcentral.com/1471-2377/7/6/) had stuff on ensuring follow-up of drop-outs to cover exactly the issue you raise, though I'm afraid I can't remember all the details - and I don't know how well this worked in practice.
 

anciendaze

Senior Member
Messages
1,841
followup on dropouts, quotable sentence

I'm sure there were words about follow-up. The suspicion which bothers me is that it was implemented in the following way:
As judged by the Research Nurse: can the fatigue be distinguished from low mood, sleepiness and low motivation?

I now understand that a score of 65 was an upper bound for entry. This would confine adverse outcomes without drop-out (and thus those likely to be reported) to approximately the range 55-65. A strong selection effect is possible.

As I am not a citizen of the UK, and could easily blunder on subjects which are common knowledge there, I do not intend to send a letter to The Lancet myself. Below is a sentence which I propose might be included in one.

These data support the hypothesis that a substantial proportion of patients in this cohort, approaching half, suffer from undetected organic disease which is as unresponsive to psychological treatment as multiple sclerosis.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Re Anciendaze post #195,

That is a good point - adverse effects might not have been reported simply because the patients dropped out instead. Let's be honest: once patients have dropped out, they tend to refuse to answer further questionnaires....

The problem is that the drop-out rates are all over the place:

- APT: 11 (7%)
- CBT: 17 (11%)
- GET: 10 (6%)
- SMC: 14 (9%)
- p value: 0.50
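
For anyone wanting to reproduce that p value, a quick sketch; the per-arm sizes of roughly 160 are my assumption about the randomised numbers, not figures I've checked against the paper's Table 1:

```python
# Chi-square test on the drop-out counts above, assuming ~160 per arm.
from scipy.stats import chi2_contingency

dropped = [11, 17, 10, 14]             # APT, CBT, GET, SMC
totals = [160, 161, 160, 160]          # assumed group sizes
table = [dropped, [t - d for t, d in zip(totals, dropped)]]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
```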

But the other point remains that the study systematically ignores those with low functioning, and therefore the physical functioning means are biased towards patients with higher functioning.

As far as letters go, why not? The more the merrier. There are more than a few people who are more than willing to help proofread etc.
 

Esther12

Senior Member
Messages
13,774
As far as letters go, why not? The more the merrier. There are more than a few people who are more than willing to help proofread etc.

I'd be happy to read over anything people want to send (I'm no expert... but I'm British!).



PS: To me, the drop-outs don't look bad. I think they may have toned down their therapies because they knew this study was looking at potential harm... it would be good if this work did lead to a more cautious approach to CBT/GET at CFS centres across the country.
 

Doogle

Senior Member
Messages
200
One of the issues I see is that the study's advocates and the news articles are trying to redefine fatigue, from physical and mental exhaustion to "tiredness", in their interviews and statements.
 