
Response to the editorial by Dr Geraghty by the PACE Trial team

Jonathan Edwards

"Gibberish"
Messages
5,256
Although on a superficial reading it looks like authoritative researchers have slammed down the maverick (so a short-term gain for them), they've shot themselves in the foot, even alt-facting their own previous online statements. With all those co-signers, someone should have spotted the mistake!

I don't think these people go in for spotting their mistakes!
 

TiredSam

The wise nematode hibernates
Messages
2,677
Location
Germany
I don't think these people go in for spotting their mistakes!
Is there an example anywhere of any of them ever admitting to a mistake or conceding a point? They seem almost superhuman in their unblemished track record of always being right. I've heard of the doctrine of papal infallibility, but these levels of supreme authority, revelation, doctrine etc. would surely make even the Pope blush.
 

Michelle

Decennial ME/CFS patient
Messages
172
Location
Portland, OR
And, of course, when White et al say CBT, they mean something very different from the sort of CBT patients in the survey they cite would be expecting and that patients with any other disease would get. But, as someone upthread says, it takes a lot of time to explain that for the "lay" reader (i.e. outside the ME/CFS field).

Frankly, I'm amazed these guys didn't go into marketing or political consulting. Their ability to use language in such a mendacious but superficially reasonable way shows remarkable skill. There's even a tiny part of me that can't help but admire it a little. :jaw-drop:
 

Dolphin

Senior Member
Messages
17,567
They admitted to a mistake when replying to the letters published in the Lancet back in the day. Yet no correction was issued directly for the online manuscript!?!
Though they basically repeated the error nearly 2 years later in the recovery paper:
We changed our original protocol’s threshold score for being within a normal range on this measure from a score of >=85 to a lower score as that threshold would mean that approximately half the general working age population would fall outside the normal range. The mean (S.D.) scores for a demographically representative English adult population were 86.3 (22.5) for males and 81.8 (25.7) for females (Bowling et al. 1999). We derived a mean (S.D.) score of 84 (24) for the whole sample, giving a normal range of 60 or above for physical function.
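For what it's worth, the arithmetic behind that "normal range" is trivial to reproduce. A minimal sketch (Python; it assumes the pooled figures are roughly the unweighted average of the male and female means from Bowling et al., since the paper doesn't spell out the weighting) shows how "mean minus one SD" lands on 60:

```python
# Illustrative only: reproduce the "normal range" arithmetic quoted above.
# Assumption: the pooled mean is roughly the unweighted average of the male
# and female means from Bowling et al. (1999); the exact weighting isn't stated.
male_mean, female_mean = 86.3, 81.8
pooled_mean = round((male_mean + female_mean) / 2)  # ~84, as quoted
pooled_sd = 24                                      # SD quoted in the paper

normal_range_threshold = pooled_mean - pooled_sd    # mean - 1 SD
print(normal_range_threshold)                       # 60 -> "normal range of 60 or above"
```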
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Though they basically repeated the error nearly 2 years later in the recovery paper:

We changed our original protocol’s threshold score for being within a normal range on this measure from a score of >=85 to a lower score as that threshold would mean that approximately half the general working age population would fall outside the normal range. The mean (S.D.) scores for a demographically representative English adult population were 86.3 (22.5) for males and 81.8 (25.7) for females (Bowling et al. 1999). We derived a mean (S.D.) score of 84 (24) for the whole sample, giving a normal range of 60 or above for physical function.

It's like they don't even understand statistics. I guess they've never heard of skew or ceiling effects (the median for healthy people in that demographically representative population is ~95 out of 100), or of the idea that they're supposed to be comparing to a healthy population, since other chronic illnesses are systematically excluded from the study (the SD for a healthy population is much smaller than 15!).
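To make the skew and ceiling point concrete, here's a rough, illustrative simulation (Python; the distribution parameters are invented purely to mimic an SF-36-like ceiling, they are not taken from Bowling et al. or PACE). It shows how "mean minus one SD" over a mixed general population can put a "normal range" threshold far below where almost everyone healthy actually scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SF-36-style physical function scores for a general adult population:
# mostly healthy people piled up near the 100-point ceiling, plus a minority
# with chronic illness pulling the mean down and inflating the SD.
# All parameters here are made up for illustration.
healthy = np.clip(rng.normal(95, 8, 8000), 0, 100)
chronically_ill = np.clip(rng.normal(50, 25, 2000), 0, 100)
population = np.concatenate([healthy, chronically_ill])

mean, sd = population.mean(), population.std()
threshold = mean - sd  # the "mean minus 1 SD" style of normal range

print(f"population mean - 1 SD threshold: {threshold:.0f}")
print(f"median of the healthy subgroup:   {np.median(healthy):.0f}")
print(f"SD of the healthy subgroup:       {healthy.std():.0f}")
# With a skewed, ceiling-limited distribution, the threshold lands well below
# the scores of nearly every healthy person, so "within the normal range"
# stops meaning "comparable to healthy people".
```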

Dear anyone involved in organising the PACE trial who is reading this: this kind of sloppy analysis is simply not acceptable. If you were involved in writing the manuscript, you should be ashamed.
 

Jenny TipsforME

Senior Member
Messages
1,184
Location
Bristol
@Solstice questionnaires in research are essentially surveys, but the research design should take extra things into account, like how the sample is obtained, and you usually know more about the people participating.

Patient surveys can be slightly looked down on because of sampling bias (and probably epistemic injustice). E.g. there's a difference between members of Tymes and AYME. In the Tymes quick poll about MAGENTA, 100% of respondents think that MAGENTA should be suspended until PACE has been reanalysed. The majority of people who know about the issues do think this, IMO, but if you asked the same question to followers of the AYME account I don't think the response would be 100%.

So how do you know whether a survey response is representative of the patient population, or whether those people joined that group because they already held the shared view? Large surveys carry more weight. We should have a big survey for all the members of the main UK charities.
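Just to illustrate the self-selection point, here's a toy sketch (Python; the numbers are entirely made up, not from any real survey or charity). If people join a group partly because they already hold a view, a poll of that group can come back near-unanimous even when the wider patient population is much more evenly split:

```python
import random

random.seed(1)

# Toy model of self-selection bias in patient-group polls.
# All numbers are invented purely for illustration.
population_size = 100_000
share_holding_view = 0.55  # wider patient population: a modest majority

patients = [random.random() < share_holding_view for _ in range(population_size)]

# Assume people who hold the view are far more likely to have joined
# this particular group than people who don't.
def joins_group(holds_view: bool) -> bool:
    return random.random() < (0.20 if holds_view else 0.01)

group_members = [holds_view for holds_view in patients if joins_group(holds_view)]

poll_result = sum(group_members) / len(group_members)
print(f"wider population agreeing: {share_holding_view:.0%}")
print(f"group poll agreeing:       {poll_result:.0%}")  # close to unanimous in this toy model
```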

Also, researchers tend to use standardised questionnaires like the SF-36, whereas surveys tend to be off the top of someone's head. Standardised questionnaires have more authority but may be less relevant. When questionnaires were on paper, I was always scribbling in the margins about the options that would represent my experience being missing. E.g. there's only the option to enjoy activities less or the same; in my experience I appreciate what I can do more. A decent series on TV can make my month.
 

Solstice

Senior Member
Messages
641
I would think that questionnaire-based science, though it has its place when no better options are available, is going to be far more vulnerable to abuse, intentional or otherwise.

Who decides whether there are better options available? Left up to the psychologists, the answer is always gonna be: we can't do better than this.
 

Barry53

Senior Member
Messages
2,391
Location
UK
Who decides if there are better options available? Left up to the psychologists the answer is always gonna be, we can't do better than this.
I am an engineer, not a medical professional, but to me, from the outside looking in, it seems some research projects may make a fundamental oversight/blunder when selecting team members.

There is a whole raft of skills required for a medical research project; obviously the core medical/clinical (I may not be using the right terminology here) skills are fundamental, but there must be plenty of other non-clinical skills essential within a team also. A team is exactly that - each individual is not expected to have all skills in depth, though an awareness at least is probably good.

In some projects, team members with the core skills may also have the other skills needed as well (statistical rigour (not just "knows statistics"), research trial regulations, research trial design, etc, etc, etc), but in other projects it may be necessary to recruit additional team members to fill essential-skills gaps, even though such members may not contribute to the core clinical skills. My worry is that some medical research projects look to be run with the arrogant belief that if you have all the core clinical skills in a team, then those team members will implicitly have all the other skills required. I worry there may almost be a "keep it in the club" mentality that leaves essential-skills gaps. Projects such as PACE should sound loud alarms that this aspect of clinical trials maybe needs revisiting. For all we know PACE could be the tip of an iceberg. And pondering PACE - maybe there was an arrogance in them thinking they really even had all the core clinical skills needed!

In fact I think that before any research project is approved, there should be a standard list of baseline skills that are required for any clinical research project, and part of the approval process should be to demonstrate categorically (no flimflam) that the project has such skills available to it, and will employ them correctly throughout the trial.
 

CFS_for_19_years

Hoarder of biscuits
Messages
2,396
Location
USA
In some projects, team members with the core skills may also have the other skills needed as well (statistical rigour (not just "knows statistics"), research trial regulations, research trial design, etc, etc, etc), but in other projects it may be necessary to recruit additional team members to fill essential-skills gaps, even though such members may not contribute to the core clinical skills. My worry is that some medical research projects look to be run with the arrogant belief that if you have all the core clinical skills in a team, then those team members will implicitly have all the other skills required

I hope you won't mind if I use my beloved alma mater as an example of where one can obtain the skills to be a biostatistician: someone with expertise in statistical theory and methodology for biological investigations.

Biostatistics is an academic field in its own right. It's possible to earn a degree at the Master's or PhD level:
https://www.biostat.washington.edu/program/degrees

Free biostatistical consulting is offered by the Dept. of Biostatistics to other University of Washington faculty and students. It is best to receive the consultation before carrying out an investigation, to make sure the plan includes recruiting enough subjects to test the hypothesis. I would hope that other colleges and universities with such a department would offer the same consultations to their faculty and students.
http://www.stat.washington.edu/consulting/
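That "enough subjects" question is exactly what a statistical power calculation answers. Here's a minimal sketch (Python with statsmodels; the effect size, alpha and power values are just conventional placeholder choices for illustration, not anything from PACE or any specific study):

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative power calculation: how many participants per arm would be needed
# to detect a "medium" standardised effect (Cohen's d = 0.5) when comparing two
# group means, at the conventional 5% significance level with 80% power.
# All numbers are placeholder conventions, chosen only for this example.
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"~{n_per_arm:.0f} participants per arm")  # roughly 64
```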

These are some of the courses offered:
https://www.biostat.washington.edu/courses/schedule/autumn-2016

After receiving my Bachelor's degree at the UW I took a few post-graduate courses, one of which was Biostatistics. It was a long time ago, so it's hard to say which course it is now, but this looks close enough:
https://www.biostat.washington.edu/courses/course/BIOST/511
Presentation of the principles and methods of data description and elementary parametric and nonparametric statistical analysis. Examples are drawn from the biomedical literature, and real data sets are analyzed by the students after a brief introduction to the use of standard statistical computer packages. Statistical techniques covered include description of samples, comparison of two sample means and proportions, simple linear regression and correlation.

IMHO a biostatistician deserves a spot in most investigations. Medical doctors won't usually have in-depth training in biostatistics, or any kind of statistics for that matter.

There are plenty of undergraduate courses in statistical analysis, and they are usually required for psychology majors.
 

user9876

Senior Member
Messages
4,556
I am an engineer, not a medical professional, but to me from the outside looking in, it seems some research projects maybe make a fundamental oversight/blunder when determining team members.

There is a whole raft of skills required for a medical research project; obviously the core medical/clinical (I may not be using the right terminology here) skills are fundamental, but there must be plenty of other non-clinical skills essential within a team also. A team is exactly that - each individual is not expected to have all skills in depth, though an awareness at least is probably good.

In some projects, team members with the core skills may also have the other skills needed as well (statistical rigour (not just "knows statistics"), research trial regulations, research trial design, etc, etc, etc), but in other projects it may be necessary to recruit additional team members to fill essential-skills gaps, even though such members may not contribute to the core clinical skills.

PACE had a trial statistician (http://www.ema.europa.eu/docs/en_GB/document_library/contacts/johnsona1_CV.pdf). He is an author on the papers, including the recovery papers. So I would suggest it is either a lack of attention to the trial on his part, that he allowed others to do bad stats, or simply that he is incompetent. He works or worked for the MRC Clinical Trials Unit, so perhaps that is another reason the MRC ignored the bad practices in PACE.