
Science is broken: fixing fiddling and fraud in research

Discussion in 'Other Health News and Research' started by Simon, Nov 3, 2012.

  1. user9876

    user9876 Senior Member

    Messages:
    794
    Likes:
    1,950
I've just got a copy of the Engels paper now, so I will try to read it over the next few days.

By mathematics I don't necessarily mean equations. I see logic and discrete mathematics (including graph theory) as important modelling tools that can be used to specify and simulate complex systems. What the mathematisation gives is a formalisation that tries to ensure concepts can be clearly expressed, understood and reasoned about. Finding the right conceptual framework (and hence formalisation) for a given problem can be hard and sometimes becomes the key to the correct understanding of a system.

I have problems with descriptive text where the meaning can be reinterpreted over the years, and where vagueness in the arguments makes it hard to confirm or deny statements, let alone check the internal consistency of an argument.
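As a minimal sketch of the graph-theory idea (purely illustrative; the node names and toy model below are hypothetical, not drawn from any paper in this thread), a directed graph makes a chain of reasoning machine-checkable in a way prose is not:

```python
# Nodes are claims or system states; a directed edge means "leads to" or
# "supports". Once formalised this way, "does A actually support D?"
# becomes a reachability question a program can answer.

def reachable(graph, start):
    """Return the set of nodes reachable from `start` via directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

# Hypothetical toy model of an argument: A -> B -> C -> D, with E -> D.
model = {"A": ["B"], "B": ["C"], "C": ["D"], "E": ["D"]}
print("D" in reachable(model, "A"))  # prints True: the chain holds
```

The point is not the algorithm but the formalisation: vague prose cannot be traversed, whereas a graph can be checked for consistency mechanically.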
     
  2. alex3619

    alex3619 Senior Member

    Messages:
    7,703
    Likes:
    12,562
    Logan, Queensland, Australia
Hi user9876, in that case we are in agreement. Even soft systems is a formalization, and a method to create more specific interpretations, particularly with respect to relationships between processes in a model.

What makes soft systems different from hard systems methods is that hard systems have numerical values, whereas soft systems is about modelling relationships between things and the processes that make them work. It's more explanatory than precise. Its weakness is that it is less appropriate for well-defined, quantifiable issues, but its strength is that it is better for less well-defined or understood problems. Soft systems is an investigatory analysis that helps interpret such problems. Any solutions come out of that interpretation, though it is fair to say that, like complex systems analysis, soft systems often uses many models and looks for models of best utility. I say interpretation instead of definition, because the system should not be confused with the reality.

    Psycho-psychiatry is too vague for hard systems, whereas most engineering problems are ideal for hard systems approaches. Soft systems approaches and psychology go well together, a point I hope to follow up on in depth over the next few years. In fact my old soft systems PhD supervisor is now doing work in psychology, though I have yet to make contact with her again.

    Bye, Alex
     
  3. Simon

    Simon

    Messages:
    1,530
    Likes:
    4,901
    Monmouth, UK
This study is about understanding beliefs in scientific claims by looking at the pattern of citations. It doesn't try to establish the truth of the claims, but whether or not the citations fairly represent the underlying evidence. In this study, the citations were demonstrably unrepresentative of the evidence, leading to a collective belief in claims that isn't really justified. Quite possibly, this problem applies in many fields.

Effectively, author Steven Greenberg has created a new method for robustly analysing citations and how they can falsely create 'authority'. It's worth a look.

    Greenberg identifies 3 types of 'distortion': citation bias, amplification and invention:

    1. Citation bias

Although hundreds of papers were included in the analysis, just ten provided primary data (as opposed to, say, reviewing or hypothesising). The 4 positive papers received almost all the attention, while the remaining 6, which contradicted or weakened the main claim, were largely ignored.

    As shown in this graph:

[Graph from the paper: f2b.jpg]


    Was there a good reason for citing some papers and ignoring others?
This whole analysis only makes sense if the researchers are being arbitrary in citing some papers (whose findings they 'believe') while ignoring ones they don't like. And it looks like there isn't a good reason for such skewed citation. There were significant flaws in the 'supporting' papers. Flaws in the 'critical' papers were not discussed, but crucially:
    2. Amplification - The magnifying glass effect
Amplification occurs when a few key papers (e.g. review papers, containing no data on claim validity) focused citation on particular primary-data papers supportive of the belief, while isolating others that weakened it. The effect is similar to a magnifying lens collecting light.

People cite review papers rather than primary data papers, and consequently the bias in those review papers gets magnified throughout the literature.

    3. Invention
Three ways of effectively creating new facts:
• Citation diversion—citing content but claiming it has a different meaning, e.g. saying a study supports the claim when most of the evidence in that study contradicts it. I've seen this surprisingly often.
• Citation transmutation—the conversion of hypothesis into fact through the act of citation alone. One author hypothesises in the discussion section; another author then cites that paper as hard evidence.
• Dead-end citation—support of a claim with citation to papers that do not contain content addressing the claim. Seen this quite a few times too.
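The citation-bias idea above can be made concrete with a rough sketch (my own illustration, not Greenberg's actual method; all paper names and counts below are hypothetical). Given a citation graph, we can compare how often the supportive versus critical primary-data papers are cited:

```python
from collections import Counter

def citation_counts(citations):
    """citations: list of (citing_paper, cited_paper) pairs."""
    return Counter(cited for _, cited in citations)

def bias_ratio(citations, supportive, critical):
    """Fraction of primary-data citations that go to supportive papers.

    A value near 1.0 with roughly balanced evidence on each side
    suggests skewed citation, per the pattern Greenberg describes.
    """
    counts = citation_counts(citations)
    s = sum(counts[p] for p in supportive)
    c = sum(counts[p] for p in critical)
    return s / (s + c) if (s + c) else 0.0

# Hypothetical toy literature: 4 supportive and 6 critical primary
# papers, but citing papers overwhelmingly cite the supportive ones.
cites = [("review1", "pos1"), ("review1", "pos2"), ("paper3", "pos1"),
         ("paper4", "pos3"), ("paper5", "pos4"), ("paper6", "pos1"),
         ("paper7", "neg2")]
print(bias_ratio(cites, {"pos1", "pos2", "pos3", "pos4"},
                 {"neg1", "neg2", "neg3", "neg4", "neg5", "neg6"}))
```

A real analysis would also trace citation chains through review papers to capture the amplification effect, but even this crude ratio makes the skew visible.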
     
  4. user9876

    user9876 Senior Member

    Messages:
    794
    Likes:
    1,950
    On citations.

I've had times where reviewers have insisted that we cite their papers even when they are irrelevant or when we think their argument is wrong. I've seen it when writing papers that cross over from computer science into economics, and it seems to be economists that work this way. I don't know about the medical world.
     
  5. Esther12

    Esther12 Senior Member

    Messages:
    5,385
    Likes:
    5,891
    Thanks a lot for that Simon.

    This is exactly the sort of thing we've been complaining about with CFS. What a handy paper I found!

Unfortunately, I've also revealed that I am a fine example of this problem, as I did not want to read the full paper, so instead hoped to have someone else provide a nice summary. I'm too lazy to look at the raw data!

    A few years back, when I knew less about academic publishing, I asked the editor of a journal if peer reviewers would check to see if the use of citations in an article was accurate. They said something like: "It's more likely they'll just read through it while eating a sandwich. They do it for free, so they're not going to spend lots of time doing fresh research in order to review a paper". If you end up with a small group of people with similar beliefs who consider themselves 'experts' in a small field, it's quite likely that they'll end up reviewing one another's papers, and sharing one another's 'blind spots'.

    I think that with a topic like CFS peer review could do more harm than good, by encouraging unwarranted faith in what is then published.

    Alternatively - I have heard that if you are trying to publish a paper which challenges the views of a small group who consider themselves to be experts, then peer review can really be a pain.
     
    user9876 likes this.
  6. user9876

    user9876 Senior Member

    Messages:
    794
    Likes:
    1,950
I came across this issue of a psychological science journal with a number of papers on the crisis in replication and on methodology. There is one paper by Ioannidis. Full papers are accessible at the moment.

    http://pps.sagepub.com/content/7/6.toc
     
    alex3619 and Simon like this.
  7. alex3619

    alex3619 Senior Member

    Messages:
    7,703
    Likes:
    12,562
    Logan, Queensland, Australia
    Thanks user9876, this appears to be a special edition devoted to this topic.

    http://pps.sagepub.com/content/7/6/689.full.pdf html
    This one is particularly interesting. DSM-V is claiming a new psychiatric disorder, it sounds familiar:

DSM-5 Task Force Proposes Controversial Diagnosis for Dishonest Scientists
Matthew J. Gullo and John G. O'Gorman
[Alex: these researchers are based at the two universities I studied at]

The essential feature of pathological publishing is the "persistent and recurrent publishing of confirmatory findings (Criterion A) combined with a callous disregard for null results (Criterion B) that produces a 'good story' (Criterion C), leading to marked distress in neo-Popperians (Criterion D)." Diana Gleslo, M.D., who chairs the task force developing the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V), said the new diagnosis will help combat the emerging epidemic of scientists engaging in questionable research practices. "The evidence is overwhelming," Gleslo told reporters. "We can no longer dismiss this as merely 'a few bad apples' trying to further their career. This is a medical condition—one we fear may be highly infectious."

    Alex again. This very claim is a whole chapter in my book. I was claiming it as a philosophical failure, and yes I am a neo-Popperian (actually a pan critical rationalist). It is highly amusing to me that DSM-V classifies it as a psychiatric disorder.

    Bye, Alex

PS Please note this DSM-V article was satire and not serious, as pointed out by Suzy Chapman. It nevertheless exactly matches an argument I am constructing against the irrational claims made by people pushing the dysfunctional belief model of CFS.

Many of the claims and processes made and used by those pushing the DBM of CFS match the processes that many logicians of science identify as marks of nonscience and pseudoscience.
     
  8. Simon

    Simon

    Messages:
    1,530
    Likes:
    4,901
    Monmouth, UK
    Think this might be a rather brilliant spoof about all the problems of unreproducible science. A few gems from it:

     
  9. alex3619

    alex3619 Senior Member

    Messages:
    7,703
    Likes:
    12,562
    Logan, Queensland, Australia
Hi Simon, this was in the original link I posted. However, it is the irony that this is even being proposed that I find amusing. In my book I do not think it's either a disorder or a choice. I think it's a failed methodology embracing obsolete and dangerous philosophy. Bye, Alex

    PS Just adding the comment that Suzy Chapman pointed out this was a satirical article and is not serious.
     
  10. alex3619

    alex3619 Senior Member

    Messages:
    7,703
    Likes:
    12,562
    Logan, Queensland, Australia
    Suzy Chapman has posted that the DSM-V piece is deliberately satirical and not real. I am looking into that. Here is the link:

    http://forums.phoenixrising.me/inde...nding-up-for-science.20231/page-5#post-310173

    Diana Gleslo does not have any internet existence aside from this article. The authors who wrote it do however. The abstract says this:

    Abstract:
    Satirical piece for Perspectives on Psychological Science.

I wonder if it's an attempt to discredit counter-arguments based on exactly the points made in the satirical argument. I also wonder at the apparent coincidence that these authors are from the same universities as me.
     
  11. alex3619

    alex3619 Senior Member

    Messages:
    7,703
    Likes:
    12,562
    Logan, Queensland, Australia
I regard anything DSM-V as suspect; my comment was for the amusement factor, so I guess the satire worked. The article and associated commentary did not comment on its satirical nature, and I had not even begun checking any of it when Suzy commented.

    The irony is this:

The essential feature of pathological publishing is the "persistent and recurrent publishing of confirmatory findings (Criterion A)"

The publication of vaguely confirmatory findings is part of the whole scientific non-credibility of much of psychosomatic medicine, and I gather much of psychiatry, though I have not looked at this. Failure to test the theories or to use objective markers or objective evidence is part of why it's not science. CBT/GET studies that use objective evidence show, I think universally, that the therapy does not work. In addition, the underlying model has not only never been tested, it can't be tested, as it has no objective criteria to test.

combined with a callous disregard for null results (Criterion B)

The "callous" bit is a giveaway in retrospect; it's emotional rhetoric ... but then I expect nonsense from DSM-V. It is no coincidence that my book is tentatively titled Embracing the Null Hypothesis. Contrary evidence in vast abundance is routinely ignored in psychosomatic research, especially the DBM or dysfunctional belief model.


    that produces a “good story” (Criterion C),

This resembles my "persuasive rhetoric" remark. They do indeed tell a story instead of giving an objective, testable model. That story changes with the audience too. As I intend to show, that story uses the same logic as humour, switching the meanings of words to give outcomes never logically inferable from the evidence.

    leading to marked distress in neo-Popperians (Criterion D).

    This was another red flag which I missed earlier. I was starting to wonder why this had anything to do with neo-Popperians when Suzy made her comment. Sure my argument is a neo-Popperian argument, but why would DSM-V care about that? Neo-Popperians have been claiming much of psychiatry is nonscience or pseudoscience for over half a century. Maybe that is the point of the satire. In using six different criteria for nonscience, I think I can show that much of the DBM qualifies as nonscience on each of the six sets of criteria.

    I wonder at the target of this satirical piece. Is it patient advocacy, or psychs in general, or both? So many in psychiatry seem completely oblivious to these issues, yet others are writing serious articles on them.

    These are just some thoughts before my brain melts and I have to sleep. The article is still very funny and deeply ironic. I seem to have misplaced a big piece of my funny bone today though.

    Bye, Alex
     
    Jarod likes this.
  12. Simon

    Simon

    Messages:
    1,530
    Likes:
    4,901
    Monmouth, UK
    Another gem from the 'Replication Crisis' special issue.
    Alex - think you might like this as it has a lot to say about the lack of falsifiability in psychology.

Ferguson & Heene, Nov 2012

Much of this paper is a detailed discussion of publication bias and why psychology fails to publish null/negative results. The main point, though, is that if negative results don't get published it becomes impossible to falsify any theory, 'as failed replications are largely ignored'.

There are some striking and strident comments, so I've compiled some of the strongest quotes below:
This may or may not apply to CFS research - but I find it interesting that there is open acknowledgement that such ideological rigidity and researcher commitment to specific models can be an issue in psychology.

    ...
    The Invincibility of Psychological Theories
    This concluding section carries the harshest criticism of those that defend the status quo in psychological research.

    The authors say they understand that their comments about low research standards will upset many psychologists who often view their field's standards as higher than those of other sciences. They suggest that open recognition of such problems "would inevitably crumble the façade of psychology as a purely objective science".
    ...
    ...
Finally, the authors urge psychology research to raise its game:
     
    alex3619 and Jarod like this.
  13. Esther12

    Esther12 Senior Member

    Messages:
    5,385
    Likes:
    5,891
    Thanks Simon.
     
  14. alex3619

    alex3619 Senior Member

    Messages:
    7,703
    Likes:
    12,562
    Logan, Queensland, Australia
    Hi Simon, this sums it up:

    "In the absence of a true process of replication and falsification, it becomes a rather moot point to argue whether individual theories within psychology are falsifiable (Wallach & Wallach, 2010) as, in effect, the entire discipline risks a slide toward the unfalsifiable. This is a systemic discipline-wide problem in the way that theory-disconfirmatory data is managed. In such an environment many theories, particular perhaps those tied to politicized or “hot” topics, are not subjected to rigorous evaluation and, thus, are allowed to survive in a semi-scientific status long past their utility. This is our use of the term undead theory, a theory that continues in use, having resisted attempts at falsification, ignored disconfirmatory data, negated failed replications through the dubious use of meta-analysis or having simply maintained itself in a fluid state with shifting implicit assumptions such that falsification is not possible."

    I am slowly working my way through all these papers ... there were a lot. In a way this is very encouraging. Psychiatry and psychology have to go through self-reflection in order to advance. They also have to embrace or create more rigorous and rational methodologies.

It's no coincidence that you will hear me talk about zombies a lot.

    Bye, Alex
     
  15. Simon

    Simon

    Messages:
    1,530
    Likes:
    4,901
    Monmouth, UK
  16. Esther12

    Esther12 Senior Member

    Messages:
    5,385
    Likes:
    5,891
    Thanks Simon. Thinking about this, quite a lot of stuff has been falsified in CFS. Lots of null results published... but it doesn't really seem to matter much. There's a small group of researchers, and to be fair to them, they do occasionally test their theories, but when they show that they're completely wrong, that's not thought to indicate that they may be less expert on these matters than was believed. It just leads to ever more 'pragmatic' justifications: 'Well, we thought we were helping by doing this, but that's not the case... however we can still get questionnaire scores to go up, so here's another story about how we're helping, and why it's worth giving us more money.'
     
  17. Simon

    Simon

    Messages:
    1,530
    Likes:
    4,901
    Monmouth, UK
    That's a fair point. Annoyingly, I read something recently that addressed exactly this issue where null results simply lead to ducking, weaving and reformulation of the theory to fit the ever-changing evidence. I guess the lack of gain in physical activity measured by actometers is a case in point: the Dutch/Belgian authors simply decided recovery was about attitudes not increase in activity.

    I think that's partly what the authors had in mind when they said:
    Wish I could find that quote though.
     
  18. Simon

    Simon

    Messages:
    1,530
    Likes:
    4,901
    Monmouth, UK
    academic bias vs financial bias

    Another interesting quote from John Ioannidis.
    http://pps.sagepub.com/content/7/6/645.full

     
    biophile and alex3619 like this.
