PACE Trial and PACE Trial Protocol

anciendaze

Senior Member
Messages
1,841
While I've said this before, I will say it again. The parametric measures of significance being used depend on the assumption of a Gaussian (normal) underlying distribution. That is not merely falsifiable, but demonstrably false.

In checking for possible selection effects, you must consider those who are not in the study. The combined number of those who declined to participate and those who dropped out is roughly equivalent to the number who completed the trial. You also have 31% of the most 'successful' group not completing both 6MWTs, the only objective measure presented. One might expect those who did not take a test involving a short walk to be in worse shape than those who did. Plenty of room for non-random selection. The surprise is that they couldn't make this look better.

You should also check on evidence of selecting forces, even beyond investigator bias. With 93% of the most 'successful' group reporting one or more adverse events, even after a redefinition of adverse events to make them less likely, these forces are in evidence.
 

Dolphin

Senior Member
Messages
17,567
Bob said:
Here's a blog article on The Psychologist website (The British Psychological Society)...
It's, surprisingly, quite a balanced and informed article, especially compared to the recent newspaper articles...
There's a couple of errors or contradictions that I've noticed, esp re pacing/APT and re depression/anxiety...
There's a helpful facility for comments.

Fatigue evidence gathers PACE

http://www.thepsychologist.org.uk/blog/11/blogpost.cfm?threadid=1947&catid=48
This article is in print form now in the April issue of The Psychologist. I'm going to try to write a letter in response. They publish quite long letters so it's worth a go.

Also, it would be worth thinking about submitting an academic article to this journal. Articles can be shorter than those in most journals. The Psychologist has a very large readership compared to most academic publications - over 40,000.

Guidelines for submitting are here:

http://www.thepsychologist.org.uk/contribute/how.cfm

I think you have to be a psychologist to submit a paper.

If anyone would like to collaborate with me to submit something, let me know. Before I had to give up work I was an academic psychologist and journal editor.

Jenny
Hi Jenny, both sound great.

I'm overcommitted myself at the moment (e.g. have reviewers' comments to deal with on a paper I've submitted) so can't help but have highlighted your message to a few people.

Best of luck.
 

Dolphin

Senior Member
Messages
17,567
Some good points there. Again, I agree that the framing of CBT and GET versus APT is a significant problem for the PACE trial, given that it is a psychological study using subjective questionnaires and aimed at changing patients' perceptions and beliefs.

If PACE really wanted to adequately test the "pathological model of ME" (as they claim they do in the APT arm), they shouldn't have used such a highly selective process that went to great lengths to exclude anyone with potential pathology or symptoms from the CCC which White simply doesn't like. 80% of candidates were excluded from the trial (admittedly, some of those refused). What they were really testing was the psychological effects of the "pathological model of ME" on people they believe have "abnormal illness beliefs", certainly not a CCC-like "pathological model of ME". White et al have never agreed with pacing, and APT itself was a classic strawman argument used to discredit a position that the ME/CFS community doesn't hold.
Good way of putting it. It would have been interesting to see what the results would have been if some biological measures (that might be part of a "pathological model") had been used, e.g. measures of oxidative stress, viral titres, etc.

oceanblue wrote:

Problem with self-reports acknowledged in a CBT study on MS

"An additional limitation was that outcome assessment in this study depended on self-rated outcome measures. No objective measures exist for subjectively experienced fatigue, so we chose reproducible measures that are sensitive to change. However, self-reports are amenable to response bias and social desirability effects. Future studies could also assess more objective measures of change such as increases in activity levels and sleep/wake patterns using actigraphs or mental fatigue using reaction time tasks."

How refreshingly honest.

from A Randomized Controlled Trial of Cognitive Behavior Therapy for Multiple Sclerosis Fatigue http://www.psychosomaticmedicine.org/cgi/content/abstract/70/2/205

I think they are forced to be more "honest" in established diseases. CFS is more of a free for all. Anything that is "subjective" is more susceptible to spin and post-modernist interpretations.
I'm hoping letters and the like can put pressure on them to be honest. If they never know when somebody may point out that they've done something misleading, they might become a bit more careful with their claims eventually. Throughout the 2000s, they were generally able to get away with all sorts of rubbish in journals.

biophile said:
oceanblue wrote: "So the net effect of CBT or GET, after 1 year, was to move participants from around the bottom 13% of SF-36 scores to around the bottom 15%." Dolphin responded: "Excellent way of putting it. (And that's giving them 58 as the 15th percentile of the adult population - I'd say it could be a bit lower based on figures I've seen)."
That sounds about right: 58 or 60/100 points was roughly the 15th percentile for the general population in Bowling et al, which includes the diseased and elderly. I wouldn't be surprised if it is more like the 5th percentile for a healthy age-matched control group. And because of the skewed distribution we have the odd situation where roughly 75% of the population score above the average! 60 may be the 15th percentile, but 85 is still only (roughly) the 25th percentile, a figure White et al used to help define full recovery but omitted from the PACE trial results.
I'm not sure how you're getting percentiles from the Bowling paper?
If one looks at the top left histogram in figure 1, the percentages look a lot smaller.
 

Dolphin

Senior Member
Messages
17,567
[Dolphin and oceanblue on the double standards with CBT caveats and definitions of "normal" when researched in different conditions]

I don't think researchers who assume CFS is largely a psychosocial deviancy, involving a faulty perception of functional symptoms, would be much concerned about response bias and "social desirability effects" in self-reports; their attitude seems to be "as long as we can get these patients believing and behaving as if they were healthy, then they effectively are, so why worry".
The response bias and "social desirability effects" point is still relevant.

It is one thing if "patients (were) believing and behaving as if they were healthy" and there were objective proof of this, e.g. actometers or whatever.

However, questionnaires may not actually tell one how patients are behaving (because participants give the answers the researchers want to hear, etc.), so they don't give proof that the "patients are behaving as if they are healthy".
 

Dolphin

Senior Member
Messages
17,567
biophile said:
oceanblue wrote:

GET is based on the theory that CFS is perpetuated by physical deconditioning, and the only outcome measure of physical condition is the 6MWT. I've already posted that the improvement in 6MWT for GET (relative to the SMC control) is below the 'clinically useful difference' (CUD) threshold of 0.5 baseline SD.

However, the CUD measure was only specified by the authors for the primary outcomes of fatigue and physical function. So instead I've looked at the 6MWT with a generic measure of effect, called Cohen's d, that is widely used to compare medical studies, in meta-analyses in particular. In other words, it's perfectly appropriate to apply this measure to the 6MWT.

The Cohen's d for the GET 6MWT is 0.34, and crucially that ranks as a small effect (which is consistent with the increase not making a 'clinically useful difference').

This means that GET, a therapy based on treating perceived physical deconditioning, makes only a small difference to physical condition after one year. Which in turn suggests that a) the therapy isn't much good and b) the deconditioning theory it's based on is probably wrong too.
Great idea! When I used online calculators to input the mean and SD values for SMC and GET into the equation I kept arriving at about d=0.30 which is a little worse.

Assuming the same SD of 100m, about 400m in the GET group would be required for the d=0.5 threshold for a "moderate" effect and about 431m for the d=0.8 threshold for a "large" effect (the value required to reach these thresholds increases a little as the SD increases), although even 431m would still have been more than a standard deviation below the mean for a healthy population.

The authors defended the 6MWT as "objective", and now they should live with the result. As oceanblue points out, GET compared to SMC does not even meet their own definition of a "clinically meaningful" improvement (+0.5SD). No group average hit the 400m mark while most comparable healthy people will score about 600-650m on average. The 6MWD values for all the PACE groups including the GET group at 52 weeks are similar to a wide range of serious medical conditions.
And don't forget that they don't have the 6MWD for 31% of the participants for GET.
If one did a "last value carried forward" analysis, the Cohen's d values would likely be smaller again.

biophile said:
I wonder how many of the 15% of SMC participants and 28% of GET participants who reported "normal" fatigue and physical functioning (which we now know were still abnormal) also scored a more normal 6MWD of 600-650m? Instead we get this: "6-min walking distances were greater after GET than they were after APT and SMC, but were no different after CBT compared with APT and SMC. [...] The objective walking test favoured GET over CBT, whereas CBT provided the largest reduction in depression."
Good point about using the 6MWD for normal functioning (or as part of the recovery definition).
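For anyone who wants to check these effect sizes themselves, Cohen's d from published group means and SDs can be sketched as below. The numbers are illustrative stand-ins (a ~31m GET advantage with SD 100m, roughly what's been discussed in this thread), not the exact trial values:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Illustrative 6MWT figures (metres) - not the exact published values.
get_mean, get_sd, get_n = 379, 100, 110
smc_mean, smc_sd, smc_n = 348, 100, 110

d = cohens_d(get_mean, get_sd, get_n, smc_mean, smc_sd, smc_n)
print(round(d, 2))  # a 31m difference with SD 100 gives d = 0.31, a "small" effect

# Group mean needed to reach the d = 0.5 ("moderate") threshold with SD 100m:
print(smc_mean + 0.5 * 100)  # 398.0, close to the ~400m figure cited above
```

This is just the standard pooled-SD formula; plugging in the actual published means and SDs is left to the reader.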
 

Sean

Senior Member
Messages
7,378
Small but significant practical point arising from this post (and some subsequent posts):

Reply to thread pages 64-66

Better to quote the post numbers because they are invariant, unlike the number of posts per page which is a customisable setting in each member's control panel (Settings > General Settings > Thread Display Options). My settings are for 30 posts per page, which means I only have 25 pages for this thread. Or just use a direct link to each post you are discussing.
 

Dolphin

Senior Member
Messages
17,567
Better to quote the post numbers because they are invariant, unlike the number of posts per page which is a customisable setting in each member's control panel (Settings > General Settings > Thread Display Options). My settings are for 30 posts per page, which means I only have 25 pages for this thread.
You learn something new everyday (or I do anyway!). Have joined you with that setting.
 

oceanblue

Guest
Messages
1,383
Location
UK
And don't forget that they don't have the 6MWD for 31% of the participants for GET.
If one did a "last value carried forward" analysis, the Cohen's d values would likely be smaller again.
According to my quick calculation, using the baseline mean for the missing 31% and adding them back into the sample would give a mean 20m lower. Assuming this led to a difference with SMC that was also 20m lower, that would give a Cohen's d of 0.15, which is classed as 'trivial' (and probably not significant either). Are you sure that the missing participants are excluded and not included under 'last value carried forward'? I couldn't see anything in the paper that made this clear.
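The back-of-envelope adjustment described above, putting the missing 31% back in at an assumed baseline value, is just a weighted mean. A minimal sketch, with all numbers invented for illustration:

```python
# Weighted group mean when missing participants are carried forward at baseline.
# All numbers are illustrative; the point is the size of the shift, not the values.
completed_frac = 0.69      # 69% completed the 52-week walking test
observed_mean = 379.0      # metres, completers only (illustrative)
baseline_mean = 315.0      # metres, assumed for the missing 31% (illustrative)

adjusted_mean = (completed_frac * observed_mean
                 + (1 - completed_frac) * baseline_mean)
shift = observed_mean - adjusted_mean
print(round(adjusted_mean, 1), round(shift, 1))  # 359.2 19.8
```

With those invented inputs the group mean drops by roughly 20m, which is the order of magnitude of shift being discussed; the corresponding Cohen's d would shrink accordingly.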
 

Dolphin

Senior Member
Messages
17,567
Dolphin said:
And don't forget that they don't have the 6MWD for 31% of the participants for GET.
If one did a "last value carried forward" analysis, the Cohen's d values would likely be smaller again.
According to my quick calculation, using the baseline mean for the missing 31% and adding them back into the sample would give a mean 20m lower. Assuming this led to a difference with SMC that was also 20m lower, that would give a Cohen's d of 0.15, which is classed as 'trivial' (and probably not significant either). Are you sure that the missing participants are excluded and not included under 'last value carried forward'? I couldn't see anything in the paper that made this clear.
Maybe not 100% sure but the low percentages taking the test suggest it. Haven't seen anything in the paper or protocol paper to suggest they used "last value carried forward" for secondary outcome measures. Somebody else who is used to reading papers came to a similar conclusion.
 

oceanblue

Guest
Messages
1,383
Location
UK
Maybe not 100% sure but the low percentages taking the test suggest it. Haven't seen anything in the paper or protocol paper to suggest they used "last value carried forward" for secondary outcome measures. Somebody else who is used to reading papers came to a similar conclusion.
Well, if that's the case then GET basically made no difference to physical condition. They got nothin'!

Checking over the paper I found this:
"We excluded participants from the intention-to-treat population for whom we had no primary outcome data in the final analysis"
Does this mean that drop-outs are excluded if they didn't get SF36/CFQ data for them? That seems to contradict the principle of ITT, and could make a significant difference if these drop-outs had in fact deteriorated.
 

biophile

Places I'd rather be.
Messages
8,977
Reply to thread post range 701-720

WillowJ wrote: yes, because doctors can find out everything they need to know via a cursory visual examination and standard screening tests. they do not want or need patients to "self-report" the "status" of being "sick" or experiencing "symptoms"

Dolphin wrote: Except when they are doing UK£5m (US$8m) trials of CBT and GET, in which case what patients self-report on particular questionnaires is perfectly fine.

urbantravels wrote: Only when they have been thoroughly indoctrinated first about what answers they ought to give.

Dolphin wrote: And used the questionnaires before in smaller trials.

Hehe, so true!

[wdb quoting the GET patient manual: (in bold)] "in previous research studies most people with CFS/ME felt either 'much better' or 'very much better'".

Equivalent claims are common, but it looks like the PACE results dispute the "most" part, when using these outcomes on a modified clinical global impression: 41% for CBT/GET vs 31% for APT and 25% for SMC.

In the original protocol: "We propose that a clinically important difference would be between 2 and 3 times the improvement rate of SSMC." We are not given odds ratios for primary measures, and I don't know how they arrived at these values, but we are given OR for the clinical global impression:

APT vs SMC: 1.3 (0.8–2.1); p=0.31 | CBT vs SMC: 2.2 (1.2–3.9); p=0.011 | GET vs SMC: 2.0 (1.2–3.5); p=0.013.

CBT vs APT: 1.7 (1.0–2.7); p=0.034 | GET vs APT: 1.5 (1.0–2.3); p=0.028.

oceanblue said: it's the sticking to a '70% limit' of perceived energy that dooms APT to failure, in my view.

I agree. Pacing is not about avoiding all exacerbations, and I also think "perceived energy" is not the only measurement or necessarily the best description. CFS is not just the reduction or absence of something "good" but also the increase or presence of something "bad"; both have to be taken into account.

urbantravels wrote: [...] a sizable portion of this cohort in fact probably had primary depression, for whom this particular line of treatment would have been disastrous.

urbantravels wrote: Do patients among themselves tell each other that pacing will bring about "natural recovery"? Do the designers of the PACE protocol themselves *believe* in this natural recovery? Of course not, because they believe that ME is a cycle of fear of activity and deconditioning, and that pacing will only perpetuate the disease. Where did they get this idea about "natural recovery" anyway?

Good points.

WillowJ wrote: I think one of the problems with APT is that they really jerk the patients around in terms of expectations. [...]

The inconsistency you mention is probably because they don't understand pacing nor believe in it. Then of course there are suspicions that they don't want it to succeed because it is a threat to them, and the rationale behind it competes/conflicts with their own approach. This is just as much of an ideological struggle as a scientific one. The PACE trial was probably never designed to give pacing the best chance, but more about demonstrating their own CBT/GET approach is superior.

WillowJ wrote:

true that CBT is not a silver bullet for any psychiatric diagnosis. however the working model of the wessely school crowd is that cfs is not even equivalent to a psychiatric illness. an abnormal illness belief is a delusion, not an actual illness like a standard psychiatric illness.

of course they have so much doublespeak they are likely to claim in the self-same paper that (a) cfs is hysteria and (b) it would be awful for anyone to say cfs is somehow not real, but even whilst arguing the latter point the best they can do is compare cfs to the disordered self-assessment of anorexia nervosa. [this is not to suggest that anorexia nervosa is somehow not real, not serious, or not devastating, because it is all of those, but it does include an element of incorrect self-assessment, which is simply not the case in me/cfs (except for the part where the PWME is likely to overestimate her or his ability), but the point here is that cfs is not an inability to determine when one's self is truly ill and in fact disabled, which is what the psychobabblers are suggesting]

For a hysteria, claim (a), CBT should work just fine and ought to be a silver bullet

the fact that it is not a silver bullet calls their model into serious question, doublespeak and all--inability to determine when self is ill and disabled should also be readily correctable through CBT, supposing social support is provided which was ostensibly done in this particular trial, at least at a surface level (my contention is that this support is hypocritical since it involves actually disbelieving the patient while providing verbal support).

Some beliefs and cognitions can be rather difficult to change, but I agree there is double-speak and a disconnect between hypothesis and reality. I think they have toned down some of the rhetoric, or at least worded it better for different audiences, and it is difficult to say how much goalpost-shifting has occurred over 20 years.

The so-called "Wessely School" see themselves as the moderates between the purely organic position and the purely psychological position. Their current theme in a soundbite is: CFS symptoms are "physical" but "functional", misinterpreted as a disease process and primarily perpetuated by cognitive and behavioural factors. Wessely seems to think of CFS as a delayed recovery from an event that everyone else recovers from naturally; that's why he allows for an infectious "trigger".

Their comparison to anorexia nervosa is a good example of how they view the "physiology" of CFS. There is an article on the KCL website about the "Physiological Aspects of CFS" which is a better example, limited to the effects of deconditioning, anxiety, hyperventilation, stress, depression and circadian rhythm disturbance, and we are told (in bold!) "It is important to point out that these changes are reversible with physical rehabilitation and/or exercise."

In my opinion, psychologists like Leonard Jason and Fred Friedberg are the real "moderates". They acknowledge that psychological factors can play a role in CFS but apparently have not been seduced by psychobabble, hyperbole, and the convenient dismissal of biomedical research.

Snow Leopard wrote: They cannot claim that the CBT model works, unless they have objective measures of behavioural changes, specifically actometer measurements. Otherwise the change is merely 'cognitive', and potentially due to inflated self-efficacy and potentially due to 'response bias'. You could say that the patients didn't want to feel that they had wasted their time, as well as the time of those who provided the treatment and thus were likely to report subjective improvements on questionnaires. Hence the minimal effect shown on 5 year follow-ups.

I agree, and "inflated self-efficacy" is an interesting confounder for a "mind over body" attitude.

oceanblue wrote: Could you clarify how you did this and maybe give links to your online calculator. I used a pooled SD (combining SD of both SMC and GET) to get my figure of 0.34 but I'm not sure if I did this the right way.

I just redid it and you're right: using the pooled SD of SMC and GET at 52 weeks, the GET advantage over SMC came out at Cohen's d=0.348. I probably did something stupid under a haze of brain fog the first time, so I deleted it.

oceanblue wrote:

CBT is very effective for some psychological disorders, e.g. it has a large effect on generalised anxiety, PTSD and depression (see this review of meta-analyses). Given that, plus the fact that they have clearly identified 'flawed thoughts', that CBT is supposed to be good at tackling such flawed thinking, and that they've had 20 years to optimise their treatment, I don't think they can realistically blame the patient.

Also, the high level of patient satisfaction (82%) and strong therapeutic alliance (independently rated as 6.5/7) for PACE CBT suggest the therapy was implemented effectively; its failure indicates a problem with the underlying biopsychosocial model.

Good points.

Sean wrote:

As I recall both Prins and Chalder have made that assertion, and no doubt others have too in one form or other.

Of course, such an explanation is totally unfalsifiable. No matter how much CBT doesn't work, they can just keep saying you are not trying hard enough, which is a particularly nasty assertion as it can never be proved or disproved, leaving patients at the mercy of the subjective opinion of the vested interest ridden 'expert'.

The fact that they have to resort to such blatantly self-serving pseudo-science to defend their model is powerful ammunition against it, and they should be questioned vigorously about that whenever possible, without mercy.

It is just the establishment version of the same poisonous double-bind loaded drivel found in that New Age fraud The Secret.

I'm afraid that you are correct. I would be scared to have Chalder as a therapist.
 

biophile

Places I'd rather be.
Messages
8,977
Some of the news articles on PACE talked about "hope" or "relief" for patients as a result of the trial. But after reading other people's comments, and after my own attempts at examining the PACE trial (some of which caused a crash), I do not feel such hope or relief; instead I feel dismay.

What about the majority of patients who don't gain much from CBT/GET, or who experience adverse effects from these in the real world? I am not looking forward to, and feel sorry for, all the patients who will now have to endure a renewed blanket push for CBT/GET as a result of a large flawed study. CBT/GET studies often exclude people who have already undergone therapy, but no doubt patients who have already tried these without success will be goaded into further attempts despite the failings.

The continuous onslaught of flawed psychological research, and the uncritical swallowing of it, are concerning; combined with illness limitations, these concerns make me feel powerless against the psychobabble juggernaut. It is reasonable to view White et al as "sincere ideologues", but after the PACE trial "trust" isn't the first thing that comes to mind. I don't think they fabricated data, but they definitely engaged in spin.

Dolphin wrote: It would have been interesting to see what the results would have been if some biological measures (that might be part of a "pathological model") had been used, e.g. measures of oxidative stress, viral titres, etc.

I agree. As I recall, the PACE authors stated they didn't use biological measures because none were validated or reliable or whatever.

Dolphin wrote: I'm hoping letters and the like can put pressure on them to be honest. If they never know when somebody may point out that they've done something misleading, they might become a bit more careful with their claims eventually. Throughout the 2000s, they were generally able to get away with all sorts of rubbish in journals.

Has anything changed in the 2010s so far?

Dolphin wrote:

The response bias and "social desirability effects" point is still relevant.

It is one thing if "patients (were) believing and behaving as if they were healthy" and there were objective proof of this, e.g. actometers or whatever.

However, questionnaires may not actually tell one how patients are behaving (because participants give the answers the researchers want to hear, etc.), so they don't give proof that the "patients are behaving as if they are healthy".

I agree; I just find it hard to believe that White et al are not already aware of this problem to some degree. They just don't seem to care all that much about it, or, as a more cynical possibility, they let it slide because it works in their favour. They also showed resistance to actigraphy and gave a rather strange excuse. If they believed "deconditioning" was a major factor, the 6MWT was probably expected to show a much larger improvement than it did.

Sean wrote: Better to quote the post numbers because they are invariant, unlike the number of posts per page which is a customisable setting in each member's control panel (Settings > General Settings > Thread Display Options). My settings are for 30 posts per page, which means I only have 25 pages for this thread. Or just use a direct link to each post you are discussing.

Thanks for the advice Sean, I changed the titles.

anciendaze wrote: You also have 31% of the most 'successful' group not completing both 6MWTs, the only objective measure presented. One might expect those who did not take a test involving a short walk to be in worse shape than those who did. Plenty of room for non-random selection. The surprise is that they couldn't make this look better.

oceanblue wrote: According to my quick calculation, using the baseline mean for the missing 31% and adding them back into the sample would give a mean 20m lower. Assuming this led to a difference with SMC that was also 20m lower, that would give a Cohen's d of 0.15, which is classed as 'trivial' (and probably not significant either). Are you sure that the missing participants are excluded and not included under 'last value carried forward'? I couldn't see anything in the paper that made this clear.

Dolphin wrote: Maybe not 100% sure but the low percentages taking the test suggest it. Haven't seen anything in the paper or protocol paper to suggest they used "last value carried forward" for secondary outcome measures. Somebody else who is used to reading papers came to a similar conclusion.

Another good point/idea.
 

oceanblue

Guest
Messages
1,383
Location
UK
Protocol uses 'within 1SD of the mean' in a very different way

The biggest con of the published trial was the creation of a post-hoc measure of 'normal', defined as within 1 SD of the mean, which they wrongly calculated as an SF36 score of 60.

Someone pointed out to me that the protocol also used this 'within 1 SD of the mean' formula, but in a far saner way:
The SF-36 physical function sub-scale [29] measures physical function, and has often been used as a primary outcome measure in trials of CBT and GET. We will count a score of 75 [(out of a maximum of 100) or more, or a 50% increase from baseline in SF-36 sub-scale score] as a positive outcome. A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population [51,52].

So, for SF36, 'within 1 SD of the mean' is interpreted as:
Protocol: 5 points higher = 'Improved', but not recovered.
Published: normal

Which is a big difference, and I think goes to show both how far PACE moved the goalposts and also how well the PACE authors originally thought the trial was going to work.

Note also that the Protocol uses the correct figure for working-age SF36 norms, while the published Lancet paper uses the wrong figures, giving a mean − 1SD threshold 10 points too low.
 

anciendaze

Senior Member
Messages
1,841
At the risk of being tiresome, I wish to point out that a normal (Gaussian) distribution should have mean, median and mode approximately equal. Here they are using a mean of 85 when the population mode is 95 or 100. You can go through an arithmetical calculation of squaring, adding and taking root values to get a "standard deviation" for numbers following any distribution. This "standard deviation" does not have the same meaning as 1 SD for a normal distribution. We still await clarification of just what the number means.

The discrepancies here are of the same order of magnitude as the results trumpeted as proof of effectiveness, and the bound used in the trial is set by subtracting "1 SD" from the offset mean, roughly doubling the effect of discrepancies between mean and mode. Someone has paid £5M for gibberish.
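The point that 'mean − 1 SD' need not mark the percentile it would under a Gaussian is easy to demonstrate with a made-up, SF36-like left-skewed sample (every number below is invented purely for illustration):

```python
import statistics

# Invented left-skewed scores: most people cluster at the ceiling (mode 100),
# with a long tail of low scorers, loosely like SF-36 physical function.
scores = [100] * 50 + [95] * 20 + [85] * 10 + [70] * 10 + [40] * 10

mean = statistics.mean(scores)       # 88.5 - well below the mode of 100
sd = statistics.pstdev(scores)       # ~18.6
threshold = mean - sd                # ~69.9

below = sum(s < threshold for s in scores) / len(scores)
print(round(threshold, 1), below)
# Under a Gaussian, mean - 1 SD sits near the 16th percentile;
# in this invented sample only 10% of scores fall below it.
```

The exact figures are fabricated, but they show the mechanism: with a strong mode near the ceiling, mean, SD and percentiles come apart, so 'mean − 1 SD' is not interchangeable with 'the bottom ~16%'.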
 

oceanblue

Guest
Messages
1,383
Location
UK
This "standard deviation" does not have the same meaning as 1 SD for a normal distribution. We still await clarification of just what the number means.
Though it turns out (by chance?) that the 'mean - 1SD' formula sets a threshold at the 17th percentile, which is very close to where it would be if the distribution were normal. Bigger problems in this case are a) they didn't use the figures for a working-age population and b) the 'mean - 1SD' formula itself as a basis for establishing a 'normal' threshold, regardless of whether or not that population is Gaussian.
 

anciendaze

Senior Member
Messages
1,841
Though it turns out (by chance?) that the 'mean - 1SD' formula sets a threshold at the 17th percentile, which is very close to where it would be if the distribution were normal. Bigger problems in this case are a) they didn't use the figures for a working-age population and b) the 'mean - 1SD' formula itself as a basis for establishing a 'normal' threshold, regardless of whether or not that population is Gaussian.
If you look at the population data carefully, you will see that the healthy population has a much stronger central tendency than a normal distribution. I've tried to find a comparable group equally far from the mode for comparison. This isn't an exhaustive search, but what I've found so far includes: people recovering from surgery, people with serious conditions like cancer, COPD or heart failure, people well over 65. It is virtually impossible to find healthy people way out in that tail for comparison. In every other case, so far, the comparison group has serious organic problems resulting from either a known disease or aging.

The argument about percentiles if the distribution were the assumed Gaussian distribution is a little like saying "if all horses are zebras...".
 

oceanblue

Guest
Messages
1,383
Location
UK
The argument about percentiles if the distribution were the assumed Gaussian distribution is a little like saying "if all horses are zebras...".
What I was trying to say is that if you used that formula on a normal population, the threshold would be set at 'better than the bottom 15%', and the threshold happens to be effectively the same using non-normal SF36 scores and the same formula. It may be a coincidence, but the effect is the same. Had the result been the 10th percentile or the 25th percentile, we could argue that the use of the formula on a non-normal distribution was producing a very different type of threshold - but it isn't.

And yes, I agree, down at 60 you do not have healthy people, that's the problem of using mean-1SD (or rather the 15th percentile) to define 'normal' when 22%+ of the same population report long-term health issues.
 

anciendaze

Senior Member
Messages
1,841
And yes, I agree, down at 60 you do not have healthy people, that's the problem of using mean-1SD (or rather the 15th percentile) to define 'normal' when 22%+ of the same population report long-term health issues.
My point is that I have not been able to find any part of a healthy, working-age population in that range. I can find plenty of people in the categories listed above, plus those with such obvious problems as rheumatoid arthritis or serious obesity. When it comes to finding otherwise healthy, deconditioned people to compare, I have a problem. Those diagnosed with CFS according to these criteria appear to be sui generis.

What does that tell you about using statistics derived from the general population for comparison?
 

Sean

Senior Member
Messages
7,378
The argument about percentiles if the distribution were the assumed Gaussian distribution is a little like saying "if all horses are zebras...".

Reminds me of that physics joke about eliminating inconvenient variables: 'Assume a cow is a sphere'...