
Nature article on misconduct, lying by omission, etc

This reminded me of some CFS research, and I thought I'd post a couple of graphs from Chalder after it as an illustration. One thing I often see with CFS research is that data from papers is taken out of context and manipulated elsewhere, and people assume it can be trusted without checking.

Redefine misconduct as distorted reporting

To make misconduct more difficult, the scientific community should ensure that it is impossible to lie by omission, argues Daniele Fanelli.
13 February 2013


Against an epidemic of false, biased and falsified findings, the scientific community’s defences are weak. Only the most egregious cases of misconduct are discovered and punished. Subtler forms slip through the net, and there is no protection from publication bias.
Delegates from around the world will discuss solutions to these problems at the 3rd World Conference on Research Integrity (wcri2013.org) in Montreal, Canada, on 5–8 May. Common proposals, debated in Nature and elsewhere, include improving mentorship and training, publishing negative results, reducing the pressure to publish, pre-registering studies, teaching ethics and ensuring harsh punishments.
These are important but they overestimate the benefits of correcting scientists’ minds. We often forget that scientific knowledge is reliable not because scientists are more clever, objective or honest than other people, but because their claims are exposed to criticism and replication.
The key to protecting science, therefore, is to strengthen self-correction. Publication, peer-review and misconduct investigations should focus less on what scientists do, and more on what they communicate.
What is wrong with current approaches? By defining misconduct in terms of behaviours, as all countries do at present, we have to rely on whistle-blowers to discover it, unless the fabrication is so obvious as to be apparent from papers. It is rare for misconduct to have witnesses; and surveys suggest that when people do know about a colleague's misbehaviour, they rarely report it. Investigators, then, face the arduous task of reconstructing what a scientist did, establishing that the behaviour deviated from accepted practices and determining whether such deviation expressed an intention to deceive. Only the most clear-cut cases are ever exposed.
Take the scandal of Diederik Stapel, the Dutch star psychologist who last year was revealed to have been fabricating papers for almost 20 years. How was this possible? First, Stapel insisted on collecting data by himself, which kept away potential whistle-blowers. Second, researchers had no incentive to replicate his experiments, and when they did, they lacked sufficient information to explain discrepancies. This was mainly because, third, Stapel was free to omit from papers details that would have revealed lies and statistical flaws.
In tackling these issues, a good start would be to redefine misconduct as distorted reporting: ‘any omission or misrepresentation of the information necessary and sufficient to evaluate the validity and significance of research, at the level appropriate to the context in which the research is communicated’.
Some might consider this too broad. But it is no more so than the definition of falsification used by the US Office of Science and Technology Policy: “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record”. Unlike this definition, however, mine points unambiguously to misconduct whenever there is a mismatch between what was reported and what was done.
Authors should be held accountable for what they write, and for recording what they did. But who decides what information is necessary and sufficient? That would be experts in each field, who should prepare and update guidelines. This might seem daunting, but such guidelines are already being published for many biomedical techniques, thanks to initiatives such as the EQUATOR Network (equator-network.org) or Minimum Information for Biological and Biomedical Investigations (mibbi.sourceforge.net).
The main task of journal editors and referees would then be to ensure that researchers comply with reporting requirements. They would point authors to the appropriate guidelines, perhaps before the study had started, and make sure that all the requisite details were included. If authors refused or were unable to comply, their paper (or grant application or talk) would be rejected. The publication would indicate which set or sets of guidelines were followed.

By focusing on reporting practices, the community would respect scientific autonomy but impose fairness. A scientist should be free to decide, for example, that ‘fishing’ for statistical significance is necessary. However, guidelines would require a list of every test used, allowing others to infer the risk of false positives.
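The point about 'fishing' can be made concrete with a bit of arithmetic: under the usual 5% significance threshold, the chance of at least one spurious positive grows quickly with the number of tests run, which is exactly why a complete list of tests lets readers gauge the risk. A minimal sketch (the helper name `familywise_error` is mine, and it assumes the tests are independent):

```python
# If k independent tests are each run at threshold alpha, the chance of
# at least one false positive across the family is 1 - (1 - alpha)**k.
def familywise_error(k, alpha=0.05):
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} tests -> {familywise_error(k):.0%} chance of a false positive")
```

Running this shows the risk climbing from 5% for a single test to well over half for twenty tests, so a paper reporting only its one "significant" result, with the other nineteen omitted, badly misrepresents the evidence.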

Carefully crafted guidelines could make fabrication and plagiarism more difficult, by requiring the publication of verifiable details. And they could help to uncover questionable practices such as ghost authorship, exploiting subordinates, post hoc hypotheses or dropping outliers.
Graduate students could, in addition to learning the guidelines, train by replicating published studies. Special research funds could be reserved for independent replications of unchallenged claims.
The current defence against misconduct is prepared for the wrong sort of attack: the community tries to regulate research like any other profession, but it is different. The reliability of scientific ‘products’ is ensured not by individual practice, but by collective dialogue.

http://www.nature.com/news/redefine-misconduct-as-distorted-reporting-1.12411


Here is the graph Chalder uses to sell her expertise and views about CFS to others in a presentation:

[attached image: Chalder slide on six months post GF.JPG]


http://www.mental-health-forum.co.uk/assets/files/11.20 Trudie Chalder FINAL 169FORMAT.pdf (slide 18)

Here is a graph of the data from the study she cites:

[attached image: Chalder trial on fatigue post GF data from purple.jpg]


I wonder why she failed to include the 12 month data, where the difference between the two groups falls below statistical significance? I wonder if she took the time to explain that the 'positive' effects of treatment could be explained by those in the treatment group who returned to health being more likely to send back their questionnaires at six months, out of gratitude for the therapist's time. The presentation was from 2012, and the study is about a decade old (although Chalder seems to have got the date wrong in her slide, making it difficult for anyone to check up on her claims).

More info and link to paper here: http://forums.phoenixrising.me/inde...l-intervention-to-aid-reco.13326/#post-333285
 

OverTheHills

Esther, are you planning to write a comment for posting on this article and/or send some PACE information to the corresponding author? I often think that if we patients could interest an academic who specializes in this area, they could prove a very effective ally/whistleblower/trojan horse. PACE would make a fine case study for them. I know I'm trying to lumber you with a job here, but unfortunately I'm not in a position to do it myself.
Hoping;)

OTH