
BBC Radio 4: The Life Scientific with Simon Wessely, 14th Feb 2017

SamanthaJ

Senior Member
Messages
219
There was an item on the World Service this morning about a computer programme that could detect mistakes or fraud in research. The man who developed it mentioned psychological research! And problems with peer review. Some scientists are calling it "a new form of harassment", apparently. Who'd say a thing like that?! It's on this show, about 45 minutes in:

http://www.bbc.co.uk/programmes/p04rxgly
 

slysaint

Senior Member
Messages
2,125
Very interesting indeed.
'Fake Science' good phrase.
The computer program is called STATCHECK.
found this too:
https://www.theguardian.com/science/2017/feb/01/high-tech-war-on-science
"If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said."

"But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy."
"
There are probably several very famous papers that have fake data, and very famous people who have done it"

Maybe they could run the PACE data thro it? (and all the rest of them)

@Dx Revision Watch
 

user9876

Senior Member
Messages
4,556
slysaint said:
Very interesting indeed.
'Fake Science' good phrase.
The computer program is called STATCHECK.
found this too:
https://www.theguardian.com/science/2017/feb/01/high-tech-war-on-science
"If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said."

"But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy."
"
There are probably several very famous papers that have fake data, and very famous people who have done it"

Maybe they could run the PACE data thro it? (and all the rest of them)

@Dx Revision Watch

It's important to understand the difference between fake data and spun analysis. I would be surprised if PACE had manipulated the data; instead, they manipulated the way they analyzed the results. This is much harder for a computer to pick up, because protocols are not written in any formal language but are instead described in English, as are the papers reporting the results. With data, I suspect they are looking for patterns with particular characteristics that suggest the numbers have been changed (people aren't good at random).
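
As a rough illustration of the "people aren't good at random" point: one classic heuristic for spotting invented numbers (not something statcheck itself does) is a terminal-digit check, since the last digits of genuine fine-grained measurements tend to be roughly uniform, while fabricated values often over-use favourite digits. A minimal sketch, with entirely made-up data and a toy sample far too small for real use:

// Rough sketch of a terminal-digit check: count the last digits of the
// reported measurements and test them against a uniform distribution.
#include <array>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Chi-square goodness-of-fit statistic for uniformity of the last digits 0-9.
double lastDigitChiSquare(const std::vector<int>& values)
{
    std::array<int, 10> counts{};
    for (int v : values)
        ++counts[std::abs(v) % 10];

    const double expected = values.size() / 10.0;
    double chi2 = 0.0;
    for (int c : counts)
        chi2 += (c - expected) * (c - expected) / expected;
    return chi2;
}

int main()
{
    // Hypothetical, deliberately "5-heavy" data; real use needs far more values.
    std::vector<int> data = {142, 155, 163, 171, 155, 149, 151, 165, 158, 150,
                             145, 155, 160, 151, 155, 149, 153, 155, 161, 147};

    const double chi2 = lastDigitChiSquare(data);
    // 16.92 is the 5% critical value for chi-square with 9 degrees of freedom.
    std::printf("chi-square = %.2f -> %s\n", chi2,
                chi2 > 16.92 ? "last digits look suspiciously non-uniform"
                             : "no evidence of digit preference");
}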
 

slysaint

Senior Member
Messages
2,125
user9876 said:
It's important to understand the difference between fake data and spun analysis. I would be surprised if PACE had manipulated the data; instead, they manipulated the way they analyzed the results. This is much harder for a computer to pick up, because protocols are not written in any formal language but are instead described in English, as are the papers reporting the results. With data, I suspect they are looking for patterns with particular characteristics that suggest the numbers have been changed (people aren't good at random).

I understand that, but as well as the actual stats it also checks the references etc. A report showing how most of the BPS research studies rely on each other to back up their 'findings' could be very illuminating. One study being checked in isolation might not show anything, but put them all together...

ETA: "If a program similar to Statcheck can be trained to detect the traces of manipulated data"
 
Last edited:

RogerBlack

Senior Member
Messages
902
slysaint said:
Very interesting indeed.
'Fake Science' good phrase.
The computer program is called STATCHECK.

Maybe they could run the PACE data thro it? (and all the rest of them)
@Dx Revision Watch

Well - no.
As has been mentioned elsewhere, this doesn't help at all.
The problems in the PACE trial wouldn't be picked up by software that does what STATCHECK does.
It has a very limited range of things it can check - and none of the flaws with PACE are of that type.

Think of it as a spell checker, when what's really needed is both access to the original data, and a skilled reviewer.
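
For a sense of how narrow that range is: statcheck essentially parses reported test results (t, F, r, chi-square and z values with their p-values) out of the text and redoes the arithmetic. A minimal sketch of that arithmetic for a single, hypothetical z-test result (not the actual package, which is written in R):

// Recompute the two-sided p-value from a reported z statistic and flag it
// if it disagrees with the p-value printed in the paper.
#include <cmath>
#include <cstdio>

// Two-sided p-value for a standard-normal test statistic: 2 * (1 - Phi(|z|)).
double pFromZ(double z)
{
    return std::erfc(std::fabs(z) / std::sqrt(2.0));
}

// The tolerance is illustrative; the real package allows for rounding of the
// reported numbers.
bool inconsistent(double z, double reportedP, double tol = 0.005)
{
    return std::fabs(pFromZ(z) - reportedP) > tol;
}

int main()
{
    // Hypothetical reported result: "z = 1.96, p = .03".
    const double z = 1.96, reportedP = 0.03;
    std::printf("recomputed p = %.3f, reported p = %.3f -> %s\n",
                pFromZ(z), reportedP,
                inconsistent(z, reportedP) ? "inconsistent" : "consistent");
}

Nothing in there knows anything about entry criteria, outcome switching or whether a 'recovery' threshold makes clinical sense, which is the point.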
 

Barry53

Senior Member
Messages
2,391
Location
UK
RogerBlack said:
Well - no.
As has been mentioned elsewhere, this doesn't help at all.
The problems in the PACE trial wouldn't be picked up by software that does what STATCHECK does.
It has a very limited range of things it can check - and none of the flaws with PACE are of that type.

Think of it as a spell checker, when what's really needed is both access to the original data, and a skilled reviewer.
What a sufficiently (very) sophisticated program should be able to pick up on, in the context of PACE, are things like:
  1. Alleged recovery criteria overlapping the initial sickness entry criteria.
  2. Later follow-up statements claiming that '1' was excusable because other criteria masked the issue (as if that excuses it).
  3. Alleged recovery fitness levels commensurate with those of 80+ year olds, when trial participants were much younger.
  4. Etc.
In fact, as I write this, it occurs to me that there are lessons to be learned from industry here, in the form of continuous quality improvement etc. There should be a standard checklist (maybe there already is, in some form?) of things to check, even though it would normally seem absurd to even think of needing to check them: overlapping entry/recovery criteria; recovery criteria matched to sane comparison groups; and so on. And it should be an ongoing process of reassessment and improvement, based on bad practices observed that must be prevented in future. (A rough sketch of what an automated check of point 1 might look like follows below.)
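
As a crude illustration of how point 1 could be machine-checked once the thresholds have been extracted from the text, here is a minimal sketch using the widely quoted SF-36 physical function figures from PACE (entry at 65 or below, revised 'normal range' at 60 or above) as the example values:

// Sanity check: can a participant simultaneously be ill enough to enter the
// trial and already sit inside the "recovery"/"normal range" band?
#include <cstdio>

struct Criterion {
    const char* scale;
    double entryMax;    // at or below this, ill enough to enter the trial
    double recoveryMin; // at or above this, counted as recovered / in normal range
};

// True when the two ranges overlap, i.e. entry and "recovery" can coincide.
bool entryRecoveryOverlap(const Criterion& c)
{
    return c.recoveryMin <= c.entryMax;
}

int main()
{
    // Example values: the widely quoted SF-36 physical function thresholds.
    const Criterion sf36pf{"SF-36 physical function", 65.0, 60.0};

    std::printf("%s: %s\n", sf36pf.scale,
                entryRecoveryOverlap(sf36pf)
                    ? "recovery range overlaps entry range"
                    : "no overlap");
}

The hard part, of course, is not this comparison but getting the criteria out of prose and into that structured form in the first place.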
 

RogerBlack

Senior Member
Messages
902
Barry53 said:
What a sufficiently (very) sophisticated program should be able to pick up on, in the context of PACE, are things like:
  1. Alleged recovery criteria overlapping the initial sickness entry criteria.
  2. Later follow-up statements claiming that '1' was excusable because other criteria masked the issue (as if that excuses it).
  3. Alleged recovery fitness levels commensurate with those of 80+ year olds, when trial participants were much younger.
This isn't really stuff that can be found by 'software' before you get software that is basically human equivalent.
 

Barry53

Senior Member
Messages
2,391
Location
UK
RogerBlack said:
This isn't really stuff that can be found by 'software' before you get software that is basically human equivalent.
I do not fully agree with that. The entry criteria and revised recovery criteria, for instance, were specific metrics that could be sanity checked. As a software engineer I am very well aware that such an undertaking would be nothing like as simple as it is to state, but I would have thought technically feasible given today's AI technologies. But definitely non-trivial.
 

user9876

Senior Member
Messages
4,556
Barry53 said:
I do not fully agree with that. The entry criteria and revised recovery criteria, for instance, were specific metrics that could be sanity checked. As a software engineer I am very well aware that such an undertaking would be nothing like as simple as it is to state, but I would have thought technically feasible given today's AI technologies. But definitely non-trivial.

It's not really possible, because the definitions are buried in English and then buried a bit more, making it hard to really see how the numbers are generated.

If protocols were written in a formal language then it would of course be easy. The generation of suitable stats code from the protocol specification would probably be possible, as would a certain amount of model checking looking for inconsistencies. Such a system would also require some of the unjustifiable assumptions to be written down for a checker to work, such as the CFQ and SF-36 PF being interval scales, or even monotonic scales (there are particular issues here with the CFQ).

I don't understand why trial protocols are not written in such a form, apart from the fact that those writing them probably don't have the skills (I don't think it is that hard). But it would mean that the semantics get better defined, at least down to a set of common elements. If this were the case, PACE wouldn't have been published in the form it was, as it would be glaringly obvious that the changes were unjustified by the excuses given.

Whilst I think it is possible even with a formal description, it is not easy. There are model checkers (such as ProVerif) that check crypto protocols, which are simpler and relatively easy to characterise, and yet they still have issues picking up on key sharing and the general use of state. There is quite a research community looking at such techniques.
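
As a toy example of what a formally specified protocol might buy: if the pre-specified analysis and the as-published analysis are both plain data rather than prose, listing the discrepancies becomes mechanical. A sketch with illustrative entries only (the CFQ switch from bimodal to Likert scoring is the familiar example; the thresholds are made up):

// Compare a pre-specified analysis plan with the one actually published and
// report every field that changed.
#include <cstdio>
#include <string>
#include <vector>

struct AnalysisSpec {
    std::string outcome;   // outcome measure
    std::string scoring;   // how the scale is scored
    double threshold;      // cut-off used to define improvement/recovery
};

std::vector<std::string> discrepancies(const AnalysisSpec& protocol,
                                       const AnalysisSpec& published)
{
    std::vector<std::string> out;
    if (protocol.scoring != published.scoring)
        out.push_back(protocol.outcome + ": scoring changed (" +
                      protocol.scoring + " -> " + published.scoring + ")");
    if (protocol.threshold != published.threshold)
        out.push_back(protocol.outcome + ": threshold changed");
    return out;
}

int main()
{
    // Illustrative values only, not the actual PACE numbers.
    const AnalysisSpec protocol {"fatigue (CFQ)", "bimodal 0-11", 4.0};
    const AnalysisSpec published{"fatigue (CFQ)", "Likert 0-33", 18.0};

    for (const auto& d : discrepancies(protocol, published))
        std::printf("DISCREPANCY: %s\n", d.c_str());
}

None of this touches the harder question of whether the assumptions behind a given analysis (interval scales, and so on) are themselves justified; it only makes silent changes visible.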
 

Barry53

Senior Member
Messages
2,391
Location
UK
Just to tidy it up a bit ... :)

// Assumes an author-ID enum along these lines is defined elsewhere:
enum EAuthorID { eSharpe, eWessely, eCrawley, eWhite, eChalder, eSomeoneElse };

bool checkForPileOfCrap(int eAuthorID)
{
    bool bPileOfCrap = false;

    switch (eAuthorID)
    {
    case eSharpe:
    case eWessely:
    case eCrawley:
    case eWhite:
    case eChalder:
        // Above list will likely need updating as events unfold.
        bPileOfCrap = true;
        break;

    default:
        // Assume here that the author actually is a decent, competent human being.
        break;
    }

    return bPileOfCrap;
}
 
Last edited:

Esther12

Senior Member
Messages
13,774
Anyone seen any sign that critics of Wessely's work have been involved? I've not, so am expecting a pure puff-piece, undoubtedly soon to be followed by expressions of pain about the fact that there was such a vituperative response, from a small minority of course, to their attempt to cover this important and controversial issue fairly.
 

Molly98

Senior Member
Messages
576
trishrhymes said:
Listening now. All very chummy. Mentioned CFS and Gulf War syndrome. No details yet.
OOOh, you are very brave @trishrhymes. I would not just be hurling abuse at the radio if I heard his voice; I would be hurling everything within reaching distance.
Please keep us updated, I can better tolerate it coming via you than listening to him direct.
 

trishrhymes

Senior Member
Messages
2,158
Talking about his childhood.
Father raised in Prague; rescued by the Kindertransport in 1939. Family all killed in WWII. Settled in the UK.
Mother English. Both parents teachers.
Doesn't think background had any effect on him. 'Normal happy childhood'.
Decided to study medicine at 15.
Reading Anthony Clare's book persuaded him to do psychiatry.
Fellow students from his first psych job at the Maudsley have stuck together.
Got interested in CFS at the Queen Square clinic. CFS thought to be a muscle disease then. Idea dismissed. No one wanted to see these patients; passed to psychiatrists.
 
Last edited: