Science is broken: fixing fiddling and fraud in research

user9876

Senior Member
Messages
4,556
The entire justification of the biopsychosocial idea was systems theory, if you read Engel.
Chaos theory and associated math is hard systems theory. It has potential. The flip side is soft systems. Systems can be embraced without getting into mathematics.

One of the problems of math-based systems theory is that it is very vulnerable to minute uncertainties in measurement and quantification. Add in some dynamics and the whole mathematical system becomes too variable to be precise. This is why climate modellers tweak hundreds of mathematical models and look for common patterns in the outcomes. If a wide range of parameters gives similar outcomes, then the model is considered to have some predictive value. The uncertainty is, however, high. For many situations I think a non-mathematical approach (actually it is still based on maths, but on graph theory rather than equations) is far better suited.
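
To make the sensitivity point concrete, here is a minimal sketch of the ensemble idea, using the logistic map as a stand-in for a real model (assuming numpy is available; all numbers are purely illustrative):

    import numpy as np

    def logistic_run(r, x0, steps=200):
        # Iterate the logistic map x -> r*x*(1-x), a standard toy chaotic system.
        x, traj = x0, []
        for _ in range(steps):
            x = r * x * (1 - x)
            traj.append(x)
        return np.array(traj)

    rng = np.random.default_rng(0)
    base_r, base_x0 = 3.9, 0.2

    # An "ensemble": 100 runs with tiny perturbations to parameter and start point.
    runs = [logistic_run(base_r + rng.normal(0, 1e-4), base_x0 + rng.normal(0, 1e-6))
            for _ in range(100)]

    # Individual trajectories diverge quickly, so point predictions are worthless...
    print("spread across runs at step 50:", np.std([run[49] for run in runs]))

    # ...but ensemble-level statistics can still be stable enough to be useful,
    # which is the pattern the climate-ensemble approach exploits.
    print("ensemble average of long-run means:", np.mean([run[100:].mean() for run in runs]))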

I've just got a copy of Engel's paper now, so I will try to read it over the next few days.

By mathematics I don't necessarily mean equations. I see logic and discrete mathematics (including graph theory) as important modelling tools that can be used to specify and simulate complex systems. What the mathematisation gives is a formalisation that tries to ensure concepts can be clearly expressed, understood and reasoned about. Finding the right conceptual framework (and hence formalisation) for a given problem can be hard and sometimes becomes the key to the correct understanding of a system.
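
As a minimal sketch of what such a formalisation buys you (node names are hypothetical, and this assumes the networkx graph library): once claimed relationships are written down as a directed graph, questions about the model can be checked mechanically rather than argued about.

    import networkx as nx

    # Hypothetical toy model: nodes are concepts or processes, directed edges
    # are claimed "influences" relations. The graph is the formal specification.
    claims = [
        ("stress", "symptom reporting"),
        ("pathology", "symptom reporting"),
        ("symptom reporting", "diagnosis"),
        ("diagnosis", "treatment"),
    ]
    model = nx.DiGraph(claims)

    # Does the model claim any route from pathology to treatment?
    print(nx.has_path(model, "pathology", "treatment"))  # True

    # Does the argument contain circular reasoning (feedback loops)?
    print(list(nx.simple_cycles(model)))  # [] -- none in this model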

I have problems with descriptive text where the meaning can be reinterpreted over the years, and where vagueness in the arguments makes it hard to confirm or deny statements, let alone check the internal consistency of an argument.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi user9876, in that case we are in agreement. Even soft systems is a formalization, and a method to create more specific interpretations, particularly with respect to relationships between processes in a model.

What makes soft systems different from hard systems methods is that hard systems have numerical values, whereas soft systems is about modelling relationships between things and the processes that make them work. It's more explanatory than precise. Its weakness is that it is less appropriate for well-defined, quantifiable issues, but its strength is that it's better for less well-defined or understood problems. Soft systems is an investigatory analysis that helps interpret such problems. Any solutions come out of that interpretation, though it is fair to say that, like complex systems analysis, soft systems often uses many models and looks for models of best utility. I say interpretation instead of definition, because the system should not be confused with the reality.

Psycho-psychiatry is too vague for hard systems, whereas most engineering problems are ideal for hard systems approaches. Soft systems approaches and psychology go well together, a point I hope to follow up on in depth over the next few years. In fact my old soft systems PhD supervisor is now doing work in psychology, though I have yet to make contact with her again.

Bye, Alex
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
How citation distortions create unfounded authority: analysis of a citation network

Conclusion Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.

This study is about understanding beliefs in scientific claims by looking at the pattern of citations. It doesn't try to establish the truth of the claims, but whether or not the citations fairly represent the underlying evidence. In this study, the citations were demonstrably unrepresentative of the evidence, leading to a collective belief in claims that isn't really justified. Quite possibly, this problem applies in many fields.

Effectively, author Steven Greenberg has created a new method for robustly analysing citations and how they can falsely create 'authority'. It's worth a look.

About the Beta-amyloid protein example used [not essential]
The belief system studied is that a protein, β amyloid, known for its role in injuring brain in Alzheimer’s disease, is also produced by and injures skeletal muscle fibres in the muscle disease sporadic inclusion body myositis.

Greenberg identifies 3 types of 'distortion': citation bias, amplification and invention:

1. Citation bias

Although hundreds of papers were included in the studies, just ten provided primary data (as opposed to, say, reviewing or hypothesising). The 4 positive papers received almost all the attention, while the remaining 6, which contradicted or weakened the main claim, were largely ignored.

"the supportive [4] papers received 94% of the 214 citations to these primary data, whereas the six papers containing data that weakened or refuted the claim received only 6% of these citations (differing citation frequency, P=0.01)."
As shown in this graph:

[Figure 2b from the paper: citation counts received by the supportive vs critical primary data papers]


Was there a good reason for citing some papers and ignoring others?
This whole analysis only makes sense if the researchers are being arbitrary in citing some papers (whose findings they 'believe') while ignoring ones they don't like. And it looks like there isn't a good reason for such skewed citation. There were significant flaws in the 'supporting' papers. Flaws in the 'critical' papers were not discussed, but crucially:
"No papers refuted or critiqued the critical data, but instead the data were just ignored".

2. Amplification - The magnifying glass effect
Amplification occurs when a few key papers (e.g. review papers, which contain no data on the claim's validity) focus citation on particular primary data papers supportive of the belief, while isolating others that weakened it. The effect is similar to a magnifying lens collecting light.

People cite review papers rather than primary data papers, and consequently the bias in those review papers gets magnified throughout the literature.
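
A toy citation graph makes the lens effect visible. Everything here is hypothetical and assumes the networkx library; the point is only that papers cited by reviews inherit the reviews' audience, while uncited ones inherit nothing.

    import networkx as nx

    # Toy citation network (hypothetical names): two reviews cite only the
    # supportive primary papers; twenty later papers cite a review instead of
    # the primary data; the critical paper is cited by nobody.
    G = nx.DiGraph()
    G.add_node("critical_C")
    G.add_edge("review_1", "supportive_A")
    G.add_edge("review_1", "supportive_B")
    G.add_edge("review_2", "supportive_A")
    for i in range(20):
        G.add_edge(f"later_paper_{i}", "review_1" if i % 2 else "review_2")

    # A primary paper's "exposure": how many papers can reach it along
    # citation links, directly or through a review (the magnifying lens).
    def exposure(graph, target):
        return sum(1 for node in graph
                   if node != target and nx.has_path(graph, node, target))

    for primary in ("supportive_A", "supportive_B", "critical_C"):
        print(primary, exposure(G, primary))
    # supportive_A 22, supportive_B 11, critical_C 0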

3. Invention
Three ways of effectively creating new facts
  • Citation diversion—citing content but claiming it has a different meaning, e.g. saying a study supports the claim when most of the evidence in that study contradicts it. I've seen this surprisingly often.
  • Citation transmutation—the conversion of hypothesis into fact through the act of citation alone. One author hypothesises in the discussion section; another author then cites that paper as hard evidence.
  • Dead end citation—support of a claim with citation to papers that do not contain content addressing the claim. I've seen this quite a few times too.
 

user9876

Senior Member
Messages
4,556
On citations.

I've had times where reviewers have insisted that we cite their papers even when they are irrelevant, or when we think their argument is wrong. I've seen it when writing papers that cross over from computer science into economics, and it seems to be economists that work this way. I don't know about the medical world.
 

Esther12

Senior Member
Messages
13,774
This study is about ...

Thanks a lot for that Simon.

This is exactly the sort of thing we've been complaining about with CFS. What a handy paper you found!

Unfortunately, I've also revealed that I am a fine example of this problem, as I did not want to read the full paper, so instead hoped to have someone else provide a nice summary. I'm too lazy to look at the raw data!

A few years back, when I knew less about academic publishing, I asked the editor of a journal if peer reviewers would check to see if the use of citations in an article was accurate. They said something like: "It's more likely they'll just read through it while eating a sandwich. They do it for free, so they're not going to spend lots of time doing fresh research in order to review a paper". If you end up with a small group of people with similar beliefs who consider themselves 'experts' in a small field, it's quite likely that they'll end up reviewing one another's papers, and sharing one another's 'blind spots'.

I think that with a topic like CFS peer review could do more harm than good, by encouraging unwarranted faith in what is then published.

Alternatively - I have heard that if you are trying to publish a paper which challenges the views of a small group who consider themselves to be experts, then peer review can really be a pain.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Thanks user9876, this appears to be a special issue devoted to this topic.

http://pps.sagepub.com/content/7/6/689.full.pdf+html
This one is particularly interesting. DSM-V is claiming a new psychiatric disorder; it sounds familiar:

DSM-5 Task Force Proposes Controversial Diagnosis for Dishonest Scientists
Matthew J. Gullo and John G. O'Gorman
[Alex: these researchers are based at the two universities I studied at]

The essential feature of pathological publishing is the “persistent and recurrent publishing of confirmatory findings (Criterion A) combined with a callous disregard for null results (Criterion B) that produces a “good story” (Criterion C), leading to marked distress in neo-Popperians (Criterion D).” Diana Gleslo, M.D., who chairs the task force developing the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V), said the new diagnosis will help combat the emerging epidemic of scientists engaging in questionable research practices. “The evidence is overwhelming,” Gleslo told reporters. “We can no longer dismiss this as merely ‘a few bad apples’ trying to further their career. This is a medical condition—one we fear may be highly infectious.”

Alex again. This very claim is a whole chapter in my book. I was claiming it as a philosophical failure, and yes I am a neo-Popperian (actually a pan critical rationalist). It is highly amusing to me that DSM-V classifies it as a psychiatric disorder.

Bye, Alex

PS Please note this DSM-V article was satire and not serious, as pointed out by Suzy Chapman. It does, however, exactly match an argument I am constructing against the irrational claims made by people pushing the dysfunctional belief model of CFS.

Many of the claims and processes used by those pushing the DBM of CFS match what many logicians of science identify as the hallmarks of nonscience and pseudoscience.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
This one is particularly interesting. DSM-V is claiming a new psychiatric disorder; it sounds familiar:

DSM-5 Task Force Proposes Controversial Diagnosis for Dishonest Scientists
Matthew J. Gullo and John G. O'Gorman
[Alex: these researchers are based at the two universities I studied at]

The essential feature of pathological publishing is the “persistent and recurrent publishing of confirmatory findings (Criterion A) combined with a callous disregard for null results (Criterion B) that produces a “good story” (Criterion C), leading to marked distress in neo-Popperians (Criterion D).” Diana Gleslo, M.D., who chairs the task force developing the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V), said the new diagnosis will help combat the emerging epidemic of scientists engaging in questionable research practices. “The evidence is overwhelming,” Gleslo told reporters. “We can no longer dismiss this as merely ‘a few bad apples’ trying to further their career. This is a medical condition—one we fear may be highly infectious.”

Alex again. This very claim is a whole chapter in my book. I was claiming it as a philosophical failure, and yes I am a neo-Popperian (actually a pan critical rationalist). It is highly amusing to me that DSM-V classifies it as a psychiatric disorder.

Think this might be a rather brilliant spoof about all the problems of unreproducible science. A few gems from it:

Dishonest Publishing in Science—Choice or Disease?
WASHINGTON, July 20, 2012 [actually published in the journal in November]

...
Professor Brian Nacs, a neuroscientist at Oxford University, agrees. Research in his laboratory has uncovered widespread neurological deficits in scientists found guilty of academic misconduct. “When these people are put in a [brain] scanner and presented with significant p values, we find large activations in the reward areas of the brain, much larger than those of control scientists.” Professor Nacs likened the neural activity to that of cocaine addicts presented with images of cocaine. “Independent studies show the same pattern of findings using high citation counts and h-indexes. Even words like ‘tenure’ and ‘Nobel’ trigger the response. We are talking about a disease of the brain here—these people need medical intervention.”

However, many scientists remain skeptical, accusing the task force of moving too quickly to medicalize the phenomenon. These critics point to a large body of evidence that contradicts the disease hypothesis. “The problem is we can’t get any of it published!” said Professor Ali Den of Columbia University. “We have run several studies and all have found no significant difference between the brains of scientists who are guilty of misconduct and the brains of those who are not.”
...
The task force is not convinced. According to Dr. Gleslo, it will not consider unpublished research findings. “I’m sorry, but if your study is not interesting enough to be published in a peer-reviewed journal, it is not science and has no place in our deliberations. Don’t get me wrong—I have the greatest respect for Professor Den and her team—but if what she is saying were true, that would mean we are all infected!”
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi Simon, this was in the original link I posted. However, it is the irony that this is even being proposed that I find amusing. In my book I do not think it's either a disorder or a choice. I think it's a failed methodology embracing obsolete and dangerous philosophy. Bye, Alex

PS Just adding the comment that Suzy Chapman pointed out this was a satirical article and is not serious.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Suzy Chapman has posted that the DSM-V piece is deliberately satirical and not real. I am looking into that. Here is the link:

http://forums.phoenixrising.me/inde...nding-up-for-science.20231/page-5#post-310173

Diana Gleslo does not have any internet existence aside from this article. The authors who wrote it do, however. The abstract says this:

Abstract:
Satirical piece for Perspectives on Psychological Science.

I wonder if it's an attempt to discredit counter-arguments based on exactly the points made in the satirical argument. I also wonder at the apparent coincidence that these authors are from the same universities as me.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I regard anything DSM-V as suspect, my comment was for the amusement factor, so I guess the satire worked. The article and associated commentary did not comment on the satirical nature, and I had not even begun checking any of it when Suzy commented.

The irony is this:

The essential feature of pathological publishing is the “persistent and recurrent publishing of confirmatory findings (Criterion A)

The publication of vaguely confirmatory findings is part of the whole scientific non-credibility of much of psychosomatic medicine, and I gather much of psychiatry, though I have not looked at this. Failure to test the theories, or to use objective markers or objective evidence, is part of why it's not science. CBT/GET studies that use objective evidence show, I think universally, that the therapy does not work. In addition, the underlying model has not only never been tested, it can't be tested, as it has no objective criteria to test.

combined with a callous disregard for null results (Criterion B)

The "callous" bit is a giveaway in retrospect, its emotional rhetoric ... but then I expect nonsense from DSM-V. It is no coincidence that my book is tentatively titled Embracing the Null Hypothesis. Contrary evidence in vast abundance is routinely ignored in psychosomatic research especially the DBM or dysfunctional belief model.


that produces a “good story” (Criterion C),

This resembles my "persuasive rhetoric" remark. They do indeed tell a story instead of giving an objective, testable model. That story changes with the audience too. As I intend to show, that story uses the same logic as humour, switching the meanings of words to give outcomes never logically inferable from the evidence.

leading to marked distress in neo-Popperians (Criterion D).

This was another red flag which I missed earlier. I was starting to wonder why this had anything to do with neo-Popperians when Suzy made her comment. Sure my argument is a neo-Popperian argument, but why would DSM-V care about that? Neo-Popperians have been claiming much of psychiatry is nonscience or pseudoscience for over half a century. Maybe that is the point of the satire. In using six different criteria for nonscience, I think I can show that much of the DBM qualifies as nonscience on each of the six sets of criteria.

I wonder at the target of this satirical piece. Is it patient advocacy, or psychs in general, or both? So many in psychiatry seem completely oblivious to these issues, yet others are writing serious articles on them.

These are just some thoughts before my brain melts and I have to sleep. The article is still very funny and deeply ironic. I seem to have misplaced a big piece of my funny bone today though.

Bye, Alex
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Another gem from the 'Replication Crisis' special issue.
Alex - think you might like this as it has a lot to say about the lack of falsifiability in psychology.

Fergusun & Heene, Nov 2012

Much of this paper is a detailed discussion of publication bias and why psychology fails to publish null/negative results. The main point, though, is that if negative results don't get published it becomes impossible to falsify any theory, 'as failed replications are largely ignored'.
Given that science is dependent on the process of falsification, we argue that these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, thus resulting in the promulgation of numerous “undead” theories that are ideologically popular but have little basis in fact.
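
A toy simulation (my own sketch, not from the paper; it assumes numpy and scipy) shows the mechanism: when only significant positive results reach print, the literature looks uniformly confirmatory even when the true effect is exactly zero, so nothing on record can falsify the theory.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    published = []

    for _ in range(1000):                      # 1000 attempted studies
        control = rng.normal(0.0, 1.0, 30)     # true effect is exactly zero
        treated = rng.normal(0.0, 1.0, 30)
        t, p = ttest_ind(treated, control)
        if p < 0.05 and t > 0:                 # only positive "findings" get published
            published.append(treated.mean() - control.mean())

    print(len(published), "of 1000 studies published")
    print("mean published effect:", round(float(np.mean(published)), 2), "(true effect: 0.0)")
    # The published record consistently supports the theory; the ~97% of
    # studies that could falsify it never appear.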


There are some striking and strident comments, so I've compiled some of the strongest quotes below:
...consistent with ideas expressed by Ioannidis (2005), we suggest that publication bias is more likely in fields that are newer, politicized, or ideologically rigid, where small groups of researchers have invested heavily in a particular theoretical model, where pressures to publish exist (Fanelli, 2010a), ...
This may or may not apply to CFS research, but I find it interesting there is open acknowledgement that such ideological rigidity and researcher commitment to specific models can be an issue in psychology.

...
The Invincibility of Psychological Theories
This concluding section carries the harshest criticism of those that defend the status quo in psychological research.

The authors say they understand that their comments about low research standards will upset many psychologists who often view their field's standards as higher than those of other sciences. They suggest that open recognition of such problems "would inevitably crumble the façade of psychology as a purely objective science".
...
Nonetheless, the aversion to the null and the persistence of publication bias… renders a situation in which psychological theories are virtually unkillable. Instead of rigid adherence to an objective process of replication and falsification ... the end result of which is to allow poor quality theories to survive indefinitely.

Proponents of a theory may, in effect, reverse the burden of proof, insisting that their theory is true unless skeptics can prove it false...

In the absence of a true process of replication and falsification, it becomes a rather moot point to argue whether individual theories within psychology are falsifiable as, in effect, the entire discipline risks a slide toward the unfalsifiable.

In such an environment many theories, particular perhaps those tied to politicized or “hot” topics, are not subjected to rigorous evaluation and, thus, are allowed to survive in a semi-scientific status long past their utility.
...
Fanelli (2010b) found that theory supportive results are far more prevalent in psychology and psychiatry than in the "hard" sciences (91.5% versus 70.2% in the space sciences, for instance). Although it may be true that psychologists are almost always right about their theories, we find it more plausible to suggest that the fluidity and flexibility of social science merely makes it easy for scholars, even those acting in good faith, to appear to be right.

We suspect a good number of theories in popular use within psychology likely fit within this category; theories that explain better how scholars wish the world to be than how it actually is.

Finally, the authors urge psychology research to raise its game:
Otherwise psychology risks never rising above being little more than opinions with numbers
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi Simon, this sums it up:

"In the absence of a true process of replication and falsification, it becomes a rather moot point to argue whether individual theories within psychology are falsifiable (Wallach & Wallach, 2010) as, in effect, the entire discipline risks a slide toward the unfalsifiable. This is a systemic discipline-wide problem in the way that theory-disconfirmatory data is managed. In such an environment many theories, particular perhaps those tied to politicized or “hot” topics, are not subjected to rigorous evaluation and, thus, are allowed to survive in a semi-scientific status long past their utility. This is our use of the term undead theory, a theory that continues in use, having resisted attempts at falsification, ignored disconfirmatory data, negated failed replications through the dubious use of meta-analysis or having simply maintained itself in a fluid state with shifting implicit assumptions such that falsification is not possible."

I am slowly working my way through all these papers ... there were a lot. In a way this is very encouraging. Psychiatry and psychology have to go through self-reflection in order to advance. They also have to embrace or create more rigorous and rational methodologies.

It's no coincidence that you will hear me talk about zombies a lot.

Bye, Alex
 

Esther12

Senior Member
Messages
13,774
Thanks Simon. Thinking about this, quite a lot of stuff has been falsified in CFS. Lots of null results published... but it doesn't really seem to matter much. There's a small group of researchers, and to be fair to them, they do occasionally test their theories, but when they show that they're completely wrong, that's not thought to indicate that they may be less expert on these matters than was believed. It just leads to ever more 'pragmatic' justifications: 'Well, we thought we were helping by doing this, but that's not the case... however we can still get questionnaire scores to go up, so here's another story about how we're helping, and why it's worth giving us more money.'
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Thanks Simon. Thinking about this, quite a lot of stuff has been falsified in CFS. Lots of null results published... but it doesn't really seem to matter much. There's a small group of researchers, and to be fair to them, they do occasionally test their theories, but when they show that they're completely wrong, that's not thought to indicate that they may be less expert on these matters than was believed. It just leads to ever more 'pragmatic' justifications: 'Well, we thought we were helping by doing this, but that's not the case... however we can still get questionnaire scores to go up, so here's another story about how we're helping, and why it's worth giving us more money.'
That's a fair point. Annoyingly, I read something recently that addressed exactly this issue, where null results simply lead to ducking, weaving and reformulation of the theory to fit the ever-changing evidence. I guess the lack of gain in physical activity measured by actometers is a case in point: the Dutch/Belgian authors simply decided recovery was about attitudes, not increased activity.

I think that's partly what the authors had in mind when they said:
We suspect a good number of theories in popular use within psychology likely fit within this category; theories that explain better how scholars wish the world to be than how it actually is.
Wish I could find that quote though.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
academic bias vs financial bias

Another interesting quote from John Ioannidis.
http://pps.sagepub.com/content/7/6/645.full

In contrast to such corporate bias, psychological science seems to be infiltrated mostly by biases that have their origin at academic investigators. As such, they revolve mostly along the axes of confirmation and allegiance biases. Academics may want to show that their theories, expectations, and previous results are correct, regardless of whether this has also any financial repercussions or not.