• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Response to the editorial by Dr Geraghty by the PACE Trial team

Messages: 2,158
'Presentation of the principles and methods of data description and elementary parametric and nonparametric statistical analysis. Examples are drawn from the biomedical literature, and real data sets are analyzed by the students after a brief introduction to the use of standard statistical computer packages. Statistical techniques covered include description of samples, comparison of two sample means and proportions, simple linear regression and correlation.'

This is a good start, but the kind of statistical tests being quoted in some of the studies we have been shown here are far more sophisticated than this.

What you describe here is at the level I used to teach in A-level statistics to 16-to-18-year-old school pupils.

And from what I've seen of what is taught in social science degrees, it's even more basic than this, and mostly non-parametric tests, which seem to me pretty unsophisticated.

I think a serious scientific or medical study these days, with all the sophisticated computer stats packages available, needs someone with at least a master's degree in statistics and experimental design, not just a module or two. Anyone setting up a research study should have the basics, as described above, but they need experts to help with experimental design as well as analysis and interpretation.

I agree with your general principle, @Barry53 and @CFS_for_19_years: making use of a university department of biostatistics, which will have staff with PhDs and many years' experience, should be an essential requirement.

One thing we found with PACE was that they got away with changing outcome definitions (recovery, improvement) and managed to get approval for these, and get them past peer review.

What this tells me is that approval bodies and journal peer reviewers need much more expertise too.

The nonsensical misuse of the normal distribution for heavily skewed data should have been a red flag, for example. And in many of the psych studies there are basic errors of interpreting correlation as implying causation. This is basic high-school statistics.
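To see why mean-minus-one-SD "normal ranges" go wrong on skewed data, here is a sketch on made-up numbers (NOT real SF-36 data): a population clustered near a 100-point ceiling, plus a minority of low scorers, which is roughly the shape such questionnaire data takes.

```python
import random
import statistics

random.seed(1)

# Made-up illustrative scores (NOT real SF-36 data): most of the
# population clusters near the 100-point ceiling, a minority score low.
population = [random.randint(90, 100) for _ in range(7000)] + \
             [random.randint(0, 100) for _ in range(3000)]

mean = statistics.fmean(population)
sd = statistics.pstdev(population)
lower = mean - sd                      # normal-theory "normal range" floor
median = statistics.median(population)

print(f"median={median}, mean={mean:.1f}, sd={sd:.1f}, mean-1SD={lower:.1f}")
```

On data shaped like this the median sits in the 90s while mean minus one SD falls to around the mid-50s: a "normal range" floor derived from normal theory lands far below where most of the population actually scores, which is exactly why it should have been a red flag.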
 

Jonathan Edwards

"Gibberish"
Messages: 5,256
I am an engineer, not a medical professional, but to me, from the outside looking in, it seems some research projects make a fundamental oversight or blunder when determining team members.

There is a whole raft of skills required for a medical research project; obviously the core medical/clinical skills (I may not be using the right terminology here) are fundamental, but there must be plenty of other non-clinical skills essential within a team too. A team is exactly that: each individual is not expected to have every skill in depth, though at least an awareness of each is probably good.

In fact I think that before any research project is approved, there should be a standard list of baseline skills that are required for any clinical research project, and part of the approval process should be to demonstrate categorically (no flimflam) that the project has such skills available to it, and will employ them correctly throughout the trial.

I actually think the problem may be the reverse. If you gather together lots of people each supposedly having a relevant skill like statistics or trial design or clinical assessment you end up with exactly the camel that is PACE. The statistician did not understand what they were applying their statistics to, etc etc. The only skill that was missing in PACE was common sense - if you have a treatment that encourages people to say they are better then they will say they are better, but that is no reason to think they are. There is no point in employing a statistician to propose a method of analysis of meaningless data. All medical students are trained to know that the PACE design is useless. The problem is that the dumb ones do not understand how to apply what they have been taught. So what was lacking was not skills but basic intelligence.
 
Messages: 2,391 | Location: UK
I actually think the problem may be the reverse. If you gather together lots of people each supposedly having a relevant skill like statistics or trial design or clinical assessment you end up with exactly the camel that is PACE. The statistician did not understand what they were applying their statistics to, etc etc. The only skill that was missing in PACE was common sense - if you have a treatment that encourages people to say they are better then they will say they are better, but that is no reason to think they are. There is no point in employing a statistician to propose a method of analysis of meaningless data. All medical students are trained to know that the PACE design is useless. The problem is that the dumb ones do not understand how to apply what they have been taught. So what was lacking was not skills but basic intelligence.
Yes. Although I am not overly convinced all trials necessarily have all the "headline skills" they should have, the common sense issue is at the heart of, and pretty much maps across, everything. I read in a book once that Reginald Mitchell (Chief Designer at Supermarine Aviation, of Spitfire fame) said that there is one quality above all else that a design engineer needs, and without which any design of theirs will at best be mediocre: common sense. Common sense tells us that Mitchell's observation is pertinent way beyond engineering (weak joke intentional). I have known some engineers state they do not know what common sense is :(. And in any case, if a trial lacks a common sense underpinning, everything else is suspect anyway.

So yes, I wholeheartedly concur with the crucial need for common sense, and the consequences of its absence. But I also think that other skills might sometimes be lacking, albeit likely a knock-on consequence of not having the required common sense to recognise the omissions.
 
Messages: 2,391 | Location: UK
PACE had a trial statistician: http://www.ema.europa.eu/docs/en_GB/document_library/contacts/johnsona1_CV.pdf . He is an author on the papers, including the recovery papers. So I would suggest it is either his lack of attention to the trial, that he allowed others to do bad stats, or simply that he is incompetent. He works or worked for the MRC clinical trials unit, so perhaps that is another reason the MRC ignored the bad practices in PACE.
When I talk about having the required skills I do mean just that: having them, not merely having qualifications claiming them. Stats is not a strong point of mine, but reading PR it seems clear that PACE and statistics have a very loose, uneasy relationship ... so no matter what qualifications their statistician apparently had, their actual statistical mettle seems to have been lacking, or overridden. Basically a trial needs skills, not just alleged skills.
 

alex3619

Senior Member
Messages: 13,810 | Location: Logan, Queensland, Australia
I think common stupidity is much more common than common sense. To me it seems as if doctors are required to think less and less due to steadily encroaching rules. Eventually we will see not only nurse practitioners taking many of the roles, as we are seeing now, but also what I would call med techs. People will be doing undergraduate degrees and replacing doctors at a fraction of the cost, blindly following algorithms rather than thinking through problems. They can be churned out in large numbers. There will still be a need for doctors but mostly for specialists. The GP is done for in the long run unless something changes.
 
Messages: 2,158
I agree with what everyone says about common sense. There also has to be honesty and willingness on the part of the researchers to accept when a study shows the opposite of what they hoped it would show.

My point earlier about having a statistician involved who understands the more sophisticated tests used in some of these studies is, in my opinion, mainly to stop researchers using them!

In most medical studies where one treatment is being compared with another, a straightforward test of significance should be sufficient, using a p-value threshold a lot stricter than the 5% they keep using. For medical trials, where lives are at stake, it should be more like a 0.1% significance level, in my opinion.

I think they often use much more complicated analyses precisely because this straightforward process fails to give them the answers they want, so they plug all the data into a stats package, and when it churns out a mass of meaningless results, they p-hack their way through to try to find 'significant' associations to which they can attribute causation according to their favoured model.

Crawley is particularly fond of doing this. A really good and honest statistician should point out that this is not valid, and not allow it to be published. Unfortunately I suspect some statisticians like showing off how clever they are with their fancy analyses.
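The "mass of meaningless results" mechanism is easy to simulate. The sketch below (stdlib only, using a plain z-test approximation rather than any particular stats package) runs 100 comparisons in which there is, by construction, no real group difference; roughly five of them will still come out "significant" at the 5% level.

```python
import math
import random
import statistics

random.seed(42)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (normal/z approximation)."""
    se = math.sqrt(statistics.pvariance(a) / len(a) +
                   statistics.pvariance(b) / len(b))
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))

# 100 outcome measures with NO true group difference: both "treatment"
# and "control" are drawn from the same distribution every time.
false_positives = 0
for _ in range(100):
    treatment = [random.gauss(0, 1) for _ in range(50)]
    control = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(treatment, control) < 0.05:
        false_positives += 1

print(f"'significant' findings among 100 null comparisons: {false_positives}")
```

Report only the handful of comparisons that cross the threshold and you have a publishable-looking result built from pure noise; that is the mechanism behind p-hacking, and why pre-registered analysis plans and stricter thresholds matter.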

In the case of the PACE trial, all you really need to do is count how many patients in each group reached the protocol-specified level of recovery. This number was so minuscule as to be insignificant in every group. That should be the end of the story. The treatments didn't work. No fancy stats test will prove otherwise. A statistician should have told them this, and not allowed their name to be attached to the papers unless they made that conclusion loud and clear.
 
Messages: 2,391 | Location: UK
I agree, @trishrhymes. Getting a different result from what you hoped for may be disappointing, but in itself it is not bad science if trialled properly; it still contributes to the greater pool of understanding. Even if a study is not done properly (mistakes inevitably happen), but it is done with good intent and lessons are learnt, it is still in the right quadrant of the moral compass in my view; if those mistakes get covered up, however, then not so good. But PACE seems to have gone the whole hog: got stuff terribly wrong by design; not only tried to cover it up, but corruptly misrepresented the results; misrepresented how ME sufferers look to the rest of society; misrepresented where ME sufferers fit within the social benefits system. And then they bleat about how people are supposedly giving them a hard time, like a smooth-talking mugger moaning that their victim tried to fight back.
 

user9876

Senior Member
Messages: 4,556
When I talk about having the required skills I do mean just that: having them, not merely having qualifications claiming them. Stats is not a strong point of mine, but reading PR it seems clear that PACE and statistics have a very loose, uneasy relationship ... so no matter what qualifications their statistician apparently had, their actual statistical mettle seems to have been lacking, or overridden. Basically a trial needs skills, not just alleged skills.

I think he has the knowledge and has just failed to apply it. I don't think someone would get a job in the MRC's trials unit without having the skills.

Within PACE there are some fairly trivial stats errors, such as in the setting of the normal range. But there are also some more subtle errors in the way they treat questionnaires as scales and assume they are linear, without validation. These errors are not confined to PACE but are repeated again and again in many trials. FINE had Prof Dunn, and I did like a book he wrote (with Everitt) on statistical modelling. But he was the one who was switching scoring schemes with the CFQ and assuming both versions were interval scales of fatigue. I suspect one of the issues is that they are unwilling to think through all the issues; as academics they are interested in things like dealing with missing data and randomisation algorithms rather than verifying that the basics are done correctly.
 

me/cfs 27931

Guest
Messages: 1,294
Eventually we will see not only nurse practitioners taking many of the roles, as we are seeing now, but also what I would call med techs. People will be doing undergraduate degrees and replacing doctors at a fraction of the cost, blindly following algorithms rather than thinking through problems. They can be churned out in large numbers. There will still be a need for doctors but mostly for specialists. The GP is done for in the long run unless something changes.
In the USA, trying to replace GPs with nurse practitioners at HMOs to save money was, overall, a disaster. Nurse practitioners (generally) just don't have the med-school experience to deal with complex cases, particularly the elderly with multiple conditions and a dozen meds.

The workplace demand for nurse practitioners has essentially dried up. Too expensive to work as an RN, and without the skill set (generally) to work as GPs.

Edit: My wife works as an NP, and she likely could not change employers if she wanted to. No one hires NPs now. When NPs retire, they are not generally replaced.

The exception I've seen is that small private doctors' offices still do hire NPs. But not big corporate medicine.
 
Messages: 2,158
I think nurse practitioners in the UK are useful when they specialise in a very narrow field and become expert in it. For example, I know of one who just does gastroscopies, another who deals with women with threatened miscarriages, another who advises people with Parkinson's disease, and another who does the routine checks on asthma patients in a GP practice.

If they are well trained in their narrow area, they can take the pressure off consultants and GPs, leaving them to deal with more complex cases. I agree nurses should not be expected to act as GPs.
 

alex3619

Senior Member
Messages: 13,810 | Location: Logan, Queensland, Australia
The workplace demand for nurse practitioners has essentially dried up.
Interesting. I have read the opposite, in articles written by doctors, though I do not know how current that information is; it may date back a year or several. In one case I know of personally, the patient found the NPs were actually better than any doctors they were seeing. The trend I see, however, is for doctors to become less and less capable of treating complex cases. Further, the trend toward algorithmic approaches is continuing. I have nothing against algorithmic approaches in themselves, so long as human discretion is permitted.
 

me/cfs 27931

Guest
Messages: 1,294
I think nurse practitioners in the UK are useful when they specialise in a very narrow field and become expert in it. For example, I know of one who just does gastroscopies, another who deals with women with threatened miscarriages, another who advises people with Parkinson's disease, and another who does the routine checks on asthma patients in a GP practice.

If they are well trained in their narrow area, they can take the pressure off consultants and GPs, leaving them to deal with more complex cases. I agree nurses should not be expected to act as GPs.
I asked my wife, and I can see you are correct. I was a bit off.

The situation isn't as dire as I painted in the US. But jobs for NPs are still often hard to come by.

There is still some demand for NPs in specialty areas, where as you say, they can take pressure off GPs.

However, more and more, hospitals hire physician assistants (PAs) instead of NPs.

Why? Because PAs have no union. When a hospital hires a PA, they can work them to death: nights, weekends, whenever, without having to worry about union regulations.

On the side of NPs, when a hospital bills Medicare, the reimbursement rate for NPs is substantially better than it is for PAs.

There is a bit of a battle right now, in this era of GP shortage, where many hospitals would rather hire PAs than NPs to bypass union contracts.
 
Messages: 2,391 | Location: UK
I have been looking at paragraph 9 of PW et al's rebuttal, and have a few comments.
9. The second criticism concerned our secondary analysis paper about recovery (White et al., 2013). Dr Geraghty states that ‘… some trial participants had reached the level required to be classified as improved or recovered at trial entry’. This is incorrect; 3/640 (<1%) of participants had scores within the normal population ranges for both fatigue and physical function at trial entry, which was only one of the criteria necessary to be considered as recovered. To meet the criteria for recovery, a participant also had to have met additional criteria: no longer be considered a case of CFS (using the trial definition of CFS) and rated their overall health as ‘much’ or ‘very much’ better compared to trial entry. No participants met the full criteria for recovery at trial entry.
What KG originally said was (see http://journals.sagepub.com/doi/pdf/10.1177/1359105316675213):-
Critics have also pointed out a crucial methodological anomaly, that the PACE team had lowered the threshold for improvement and recovery from a score of 85 on SF-36, to a score of 60, at the analysis stage. This change meant that some trial participants had reached the level required to be classified as improved or recovered at trial entry, before they had even taken any treatment course (Walwyn et al., 2013; White et al., 2013). The trial authors have not offered a reasonable explanation for this observation. The other parameters rested on patients reporting feeling better using self-report measures and no longer meeting the Oxford Criteria (White et al., 2007).
(PW et al) No participants met the full criteria for recovery at trial entry
So what? It means nothing. By that argument someone could be just a hair’s breadth away from full recovery at entry, needing only a cup of tea and a biscuit to tip across the threshold into full recovery. “Not fully recovered” means exactly that; it does not in the slightest mean “presenting with full illness symptoms”.
(PW et al) Dr Geraghty states that ‘… some trial participants had reached the level required to be classified as improved or recovered at trial entry’.
To be clear, KG was specifically referring to the SF-36 score in isolation, its threshold having been reduced from >= 85 originally to >= 60 subsequently. When KG’s original text is read in context, this is unambiguous.
(PW et al) Dr Geraghty states that ‘… some trial participants had reached the level required to be classified as improved or recovered at trial entry’. This is incorrect; 3/640 (<1%) of participants had scores within the normal population ranges for both fatigue and physical function at trial entry, which was only one of the criteria necessary to be considered as recovered.
No, KG is correct when not taken out of context.

If you look at the SF-36 scores alone, in the same context as KG’s original statement, there were 81/640 participants with SF-36 scores >= 60 at entry, i.e. 12.7%.

Only if you do what PW et al have done, and count how many participants at entry met both the SF-36 criterion (>= 60) and the Chalder fatigue criterion (Likert <= 18), do you indeed get 3 participants (rows 134, 205 and 565) meeting both criteria at entry. But this “rebuttal” fallaciously uses a very different metric from the one in KG’s original statement it claims to refute.

Also, the rebuttal is therefore employing two criteria, not one as claimed.

And the claim that an SF-36 score >= 60 is “within the normal population range” … I think it has been well discussed before that this is highly dubious.
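The distinction being argued above, one threshold versus the conjunction of two, can be made concrete with a toy calculation. The scores below are hypothetical, invented purely to illustrate the logic; they are NOT the PACE dataset:

```python
# Hypothetical (sf36, chalder_likert) entry scores -- NOT real PACE data.
participants = [
    (65, 24), (72, 21), (60, 18), (55, 30), (80, 25),
    (61, 17), (45, 28), (63, 22), (90, 26), (50, 33),
]

# KG's observation: SF-36 alone already at or above the lowered threshold.
sf36_at_entry = [p for p in participants if p[0] >= 60]

# PW et al's rebuttal instead counts the conjunction of BOTH thresholds.
both_at_entry = [p for p in participants
                 if p[0] >= 60 and p[1] <= 18]

print(len(sf36_at_entry), len(both_at_entry))  # the two counts differ
```

The single-criterion count is necessarily at least as large as the joint count, and usually much larger; the two counts answer different questions, which is the sense in which a 3/640 figure does not refute an 81/640 observation.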
 
Messages: 2,391 | Location: UK
The original link at the start of this thread had the full free text, but now it seems to be behind a paywall.
 

Cheshire

Senior Member
Messages: 1,129
According to Coyne, White, Chalder and Sharpe have asked the editor to make Keith Geraghty declare being a patient as a conflict of interest.
This is really appalling.

Should authors declare a conflict of interest because they suffer from the illness they are writing about?

The email conveying the demand is reproduced below. Basically, the investigators from the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome demanded:


  • Partial retraction of an article critical of their work.
  • Issuing of a conflict of interest statement because the author of the critique suffered from the illness targeted by the trial.
  • The corrected article be posted with a full response from the PACE investigators, and not appear until readers could compare the two.
https://jcoynester.wordpress.com/20...re-writing-about/comment-page-1/#comment-2976

Thread: http://forums.phoenixrising.me/inde...authors-declare-a-conflict-of-interest.52479/
 