
Science is broken: fixing fiddling and fraud in research

Discussion in 'Other Health News and Research' started by Simon, Nov 3, 2012.

  1. Simon

    Simon

    Messages:
    1,346
    Likes:
    4,245
    Monmouth, UK
    Psychologists to the rescue?

    A piece in the Guardian newspaper yesterday recaps the widespread failings in scientific research, from outright fraud to flawed data analysis. The authors also set out how the current system encourages bad science, so that novel findings have become more important than finding the truth. They want to see more replication, to keep scientists honest, but they also want a new system that rewards good science over publishing lots of papers.

    More on this in a moment, but it's intriguing how psychology research takes centre stage in these debates, particularly given how many claims psychology has made about the causes of CFS. Certainly, many of the most famous examples of fraud and dubious practice come from psychology research. But it's also true that many of those pointing the finger, and arguing for higher standards, are themselves psychologists. The Guardian article is a case in point:

    Scientific fraud is rife: it's time to stand up for good science
    By psychologist Pete Etchells and PhD student Suzi Gage.

    They point out that, above all, career progression and "success" depend on publication in high-impact journals. So publications become an end in themselves, rather than a means of making scientific progress. That gives a strong incentive to chase positive results rather than the truth. They cite a range of other problems as well, pulling no punches in their assessment.

    Fixing science
    The first solution from Etchells and Gage is more replication studies to sift the wheat from the abundant chaff in the literature. Replication keeps science honest.

    Their biggest single idea, though, is transparency: "The scientific process must be as open to scrutiny as possible", including pre-registering every study's methodology so that researchers can't just rummage around in the data until they find a 'significant' result. They also want the secretive peer-review process opened up. BioMed Central already do this, so anyone can see whether the review was a rigorous check or just a cursory one. Others have gone further, arguing for the open release of data so that other researchers can put it to the test.

    Etchells and Gage are running a session on academic malpractice at a Nature conference next week, looking for more practical ideas on how to fix scientific research.

    Psychologists at the forefront
    Etchells is not the only psychologist trying to raise standards. A recent case of fraud by Dirk Smeesters, a Dutch psychologist, was only identified after another psychologist raised suspicions that his data seemed too good to be true. An investigation found that it was, and Smeesters resigned.

    21 words to save science
    And last month three psychologists published a call for research transparency. They ask all researchers to include a simple, 21-word disclosure statement: "We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study." In lay terms: "we haven't tortured the data to get this result".

    As an aside, one of the three authors is Uri Simonsohn, the psychologist who exposed Smeesters' fraud.

    A Beatles song can literally make you younger!
    The same three psychologists published an eye-catching study last year showing how listening to the song "When I'm Sixty-four" can actually make you younger. Obviously this is absurd, but that was the authors' point. They showed that with enough flexibility in how a study is conducted, and how the data are analysed, it's almost inevitable that even absurd results will sometimes come out statistically significant.
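    Their point is easy to reproduce. Below is a minimal Python sketch (my own illustration of the general idea, not the authors' code): two groups are drawn from the same distribution, so there is no real effect to find, yet a "researcher" who tries two outcome measures and tops up the sample when the first attempt disappoints will cross p < .05 far more often than the nominal 5%.

    ```python
    # Sketch: "researcher degrees of freedom" inflate false positives.
    # Both groups come from the SAME distribution, so every "hit" is spurious.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def flexible_study(n=20, n_extra=10, alpha=0.05):
        a = rng.normal(size=(n, 2))   # group A, two outcome measures
        b = rng.normal(size=(n, 2))   # group B, same distribution: no true effect
        for _ in range(2):            # second pass = optional stopping
            for dv in range(2):       # try each outcome measure in turn
                if stats.ttest_ind(a[:, dv], b[:, dv]).pvalue < alpha:
                    return True       # report the "significant" result
            a = np.vstack([a, rng.normal(size=(n_extra, 2))])  # add subjects
            b = np.vstack([b, rng.normal(size=(n_extra, 2))])  # and retest
        return False

    hits = sum(flexible_study() for _ in range(5000))
    print(f"False-positive rate: {hits / 5000:.1%}")  # well above 5%
    ```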
    ----------

    Will this really make a difference?
    I suspect that many psychologists are feeling distinctly uncomfortable about these waves of exposure and exhortations to better, more rigorous research. But this process may become unstoppable.

    Last week an article by the scourge of biomedical research, John Ioannidis, appeared in the Journal of Psychosomatic Research, home to many papers finding associations between psychological factors and CFS. Ioannidis' article stated that the recently debunked association between Type D personality and cardiac death could be the tip of the iceberg. Time will tell.

    There are many more examples of psychologists trying to raise the bar, and I'll try to post a few more examples when I have the energy.
    Enid, Jarod, biophile and 1 other person like this.
  2. Little Bluestem

    Little Bluestem Senescent on the Illinois prairie, USA

    Messages:
    2,537
    Likes:
    1,883
    Midwest, USA
    I thought replication was essential to something even being science. Is not repeatability part of the definition of science?
    barbc56 and Simon like this.
  3. alex3619

    alex3619 Senior Member

    Messages:
    7,186
    Likes:
    11,258
    Logan, Queensland, Australia
    These are many of the same issues I am looking at for my book. I also think there is a political spectrum to the issue. The politics of science needs to be debated, as well as the more traditional scientific processes. Science is a human endeavour, and humans mess about politically. I think we need to pay attention to the organizations involved in regulating science, and in particular to what checks and balances exist to make sure they do not go haywire.

    Repeatability is necessary for science, and so is refutability. If nobody bothers to repeat or refute, however, that is part of the problem. In the case of major pharmaceuticals, for example, who is going to pay ten million dollars to repeat a study they will never get published? The interaction of economics, politics and science often works to the detriment of science as a means of improving the human condition.

    A lot of psychiatry is, however, non-science: it cannot be adequately refuted because the theory or model is too vague and changeable to be properly tested. That's non-science; I prefer to call it nonsense. We need a new name for science-like endeavours that don't fit the definitions of science. We have simply been lumping psycho-psychiatry in with bio-psychiatry and giving this research the same privilege under law, but I think it should be severed from conventional medicine and treated differently.

    Bye, Alex
    meandthecat, jimells and merylg like this.
  4. Simon

    Simon

    Messages:
    1,346
    Likes:
    4,245
    Monmouth, UK
    I agree that replication should be an essential part of the scientific process: real science can be replicated. I've tweaked my post slightly to clarify this.

    Psychic Science
    And as Alex points out, cultural and other norms cut in here. A famous example is the 'Psychic' study published by Bem last year: a series of experiments appearing to show 'precognition', in which participants seemed to predict randomly determined future events at slightly better than chance.

    Unsurprisingly, this finding of a psychic effect was met with some scepticism. Bem had encouraged scientists to try to replicate his findings, and several did just that - but found no effect. However, they struggled to get their findings published. The original journal refused on the grounds that it 'didn't publish replications'. Several other journals were equally dismissive. One sent the paper out to review, but one of the reviewers was Bem himself - who advised rejecting it...

    Eventually, a series of three studies - all unable to find any psychic effect - was published in the open-access journal PLoS One, which will publish anything considered technically sound, regardless of perceived 'merit'.

    Apart from providing strong evidence against the claimed psychic effect, this case illustrates the difficulty of publishing the replications that are essential if science is to 'self-correct'. A further problem is that even when published, replications are considered 'junior' science, giving scientists little incentive to bother.

    Agreed, and that's a major focus of efforts to raise research standards. Too often, institutions have denied there might even be a problem. That Guardian piece argues that research funders also need to change their systems to promote scientific integrity.
  5. user9876

    user9876 Senior Member

    Messages:
    755
    Likes:
    1,817
    I'm not sure science as a whole is broken: when I talk to physicists, they don't voice great methodological concerns.

    The problems seem to lie in psychology, psychiatry and, more generally, medicine. It seems to me the basic problem is that these fields lack an underlying formalism in which to express their theories and models. In fact there seems to be a lack of well-defined theories in the papers I have read. Many papers are very unclear about the assumptions they make. This is particularly critical where they use statistical models such as regression yet fail to discuss assumptions of independence between the random variables. It means that anyone relying on the results cannot easily quantify where they apply or how they generalise.
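    To illustrate the independence point with a toy example of my own (not from any of the papers): if both the regressor and the residuals are correlated within groups, ordinary regression treats every row as independent information and badly understates its uncertainty, so a non-existent effect looks "significant" far too often.

    ```python
    # Sketch: ignoring within-cluster dependence inflates false positives.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_clusters, per_cluster, n_sims = 30, 20, 500
    cluster = np.repeat(np.arange(n_clusters), per_cluster)

    false_pos = 0
    for _ in range(n_sims):
        x = rng.normal(size=n_clusters)[cluster]  # regressor constant within cluster
        # outcome = shared cluster shock + noise; x has NO true effect on y
        y = rng.normal(size=n_clusters)[cluster] + rng.normal(size=x.size)
        fit = sm.OLS(y, sm.add_constant(x)).fit()  # assumes independent rows
        false_pos += fit.pvalues[1] < 0.05

    print(f"Naive OLS false-positive rate: {false_pos / n_sims:.0%}")  # >> 5%
    ```

    (Cluster-robust standard errors - in statsmodels, fit(cov_type="cluster", cov_kwds={"groups": cluster}) - bring the rate back toward the nominal 5%.)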
    Valentijn and Simon like this.
  6. Simon

    Simon

    Messages:
    1,346
    Likes:
    4,245
    Monmouth, UK
    That's exactly right, science itself isn't broken - perhaps I should have been clearer that this whole faulty-science debate is focused on the biomedical, social and psychological sciences. In fact, part of the argument that these areas have problems is the comparison with fields like physics and space science, where there is much greater emphasis on replication and negative results are frequently published. Something like 95% of all published psychology papers report positive results, and even the most brilliant scientists might struggle to be right that often.

    Likewise, the abuse rather than use of statistics is a big problem, as is the failure to be clear about hypotheses in advance. There's a great blog post dissecting the flaws in that psychic study which illustrates the kind of thing you're talking about (I think - let me know if I got that wrong).
    Enid likes this.
  7. alex3619

    alex3619 Senior Member

    Messages:
    7,186
    Likes:
    11,258
    Logan, Queensland, Australia
    While I agree that the primary problem area is psychiatry, followed by medicine in general, I am not sure some of these issues do not apply to science more broadly. It's just that in the softer sciences the problems are more severe and obvious. In wider science I think there are still problems with the interpretation and funding of studies. Too much emphasis is placed on consensus and not enough on analysis, evidence and reason. Furthermore, there has been a huge funding shift over the last three to five decades from public funding to private funding.

    I do agree that sciences with highly formalized theoretical frameworks, like physics, show much less of a problem. Physics is precise: if the evidence stacks up badly for a model, the model is in trouble. Too much psychiatry is waffly - do we even know for sure when the evidence is against a model? There is often wriggle room.

    I am also not sure I would call the social sciences "science", despite the name. They are a discipline, yes, but a science? It would be like calling economics a science, and I don't think it is. Not everything has to be a science to be valuable. The issue, though, is that public acceptance of results from the "softer sciences" is becoming similar to that of the "harder sciences". I don't think I am explaining this adequately - or even understand it adequately. A lot of work needs to be done to formalize research methodologies and how they are used across a wide range of disciplines.

    Mind you, from what I have read I am happier with the social sciences than with psycho-psychiatry. In social science there is a strong awareness of the limitations of models and of subjective interpretation; the issues are recognized, and people are trying to come to grips with them. Psycho-psychiatry takes itself far too seriously ... and society does too. There is a huge culture problem in psycho-psychiatry: it has far more the flavour of splinter-cults than of scientific rigour. At least in the last decade or so this has started to be openly discussed, but it's only a beginning. I do wonder how much of this has to do with medical culture more broadly. Medicine is far too self-regulated, and the trend to take power away from medicine is going in the wrong direction - toward managed medicine. Sure, medicine needs management, but science is taking a diminishing role, and commercial interest and scientific interest are not necessarily the same.

    With regard to biochemistry, there is a problem in that most of the science is highly reductionist, yet biochemical systems are complex dynamic networks. The entire paradigm for dealing with such things is strained. This is why, in my opinion, pharmaceutical companies come unstuck. The problem is often not the results - though bias creeps into these studies, due in part to funding issues - but the interpretation of the results. A drug can have proven effectiveness at doing what it is supposed to do and still be a very bad drug. Interactions with pathways other than the intended one, secondary pathways, genetics, environmental toxins and so on can all modify results in the real world. What's worse, these results are highly variable from person to person due to variations in circumstances, and cannot be controlled for in conventional ways. This is recognized, though: there is research in fields like pharmacogenomics to find out how genes interact with drugs.

    It was blatantly obvious to me that drugs like Vioxx were a bad idea for long-term use at least a year before the public became aware of the issue. I would have seen it earlier, but had not looked at Vioxx; once I did, it took me all of a few minutes to spot a problem. Yet drug companies could not see problems like this? One has to wonder why not. How did the FDA miss it? How did the wider medical audience? Sure, I was biased in that I had a stronger background in eicosanoids than many at the time, but drug companies employ people who know these things.

    I think many recognize these issues in medicine and psychiatry, and there is growing unease in the scientific literature. I think it is going to take a paradigm shift to deal with them; how they have been handled in recent decades is woefully inadequate. That is partly why the biopsychosocial model could get a foothold - people do realize a change is necessary. Unfortunately, the direction and nature of the biopsychosocial model are not adequate to the task, especially given its bias toward the -psycho- component.

    The broad bias in science that I think is becoming more obvious concerns funding, not philosophical issues like repeatability and refutability. Economic bias has become more apparent as commercial and scientific interests have merged. When science is about profit and independent research is in massive decline, the traditional processes of science have less effect. This even applies to research in tertiary institutions like universities: commercialization distorts the scientific process. I am basing this analysis partly on funding shifts in the USA - I do wonder how much of that is reflected in other countries, but I have not really investigated.

    Bye, Alex
    Jarod likes this.
  8. AFCFS

    AFCFS Senior Member

    Messages:
    312
    Likes:
    243
    NC
    I find this interesting, essentially documenting what has long been suspected. I also find it a bit hypocritically amusing, in that the authors of "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant" seem to use "faulty science" to prove "faulty science".

    The researchers set out to demonstrate the problem of false-positive psychology, and set up some example experiments to make the point. One is "How Bad Can It Be? A Demonstration of Chronological Rejuvenation", where they state:

    “To help illustrate the problem, we conducted two experiments designed to demonstrate something false: that certain songs can change listeners’ age.” and “Study 2 investigated whether listening to a song about older age makes people actually younger.”

    Then they report the results (Study 2):

    “An ANCOVA revealed the predicted effect: According to their birth dates [that the study participants reported], people were nearly a year-and-a-half younger after listening to “When I’m Sixty-Four” (adjusted M = 20.1 years) rather than to “Kalimba” (adjusted M = 21.5 years), F (1, 17) = 4.92, p = .040.”

    But the results do not show that listening to the song "When I'm Sixty-Four" "makes people actually younger." They show that study participants reported birth dates indicating they were younger, which is not the same as actually making people younger. Using the father's age to control for variation in baseline age across participants does not rescue the flawed inference. Granted, their point is well made; I just find it humorous that these researchers apparently misinterpreted the results of a test - used "faulty science" - to help illustrate "faulty science". Still a good article.
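    For anyone unfamiliar with the ANCOVA being quoted, here is roughly how such a model is specified, using fabricated data of my own (I obviously don't have the study's numbers): reported age is the outcome, song condition is the factor, and father's age is the covariate.

    ```python
    # Sketch of an ANCOVA like the quoted one (illustrative fake data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(7)
    df = pd.DataFrame({
        "song": rng.permutation(["sixty_four"] * 10 + ["kalimba"] * 10),
        "dad_age": rng.normal(50, 5, 20),
    })
    # Fake outcome: reported age tracks father's age plus noise, no song effect
    df["reported_age"] = 21 + 0.1 * (df["dad_age"] - 50) + rng.normal(0, 1.5, 20)

    model = smf.ols("reported_age ~ C(song) + dad_age", data=df).fit()
    print(anova_lm(model, typ=2))  # F-test for the song factor, as in the paper
    ```

    Note the model can only ever speak to reported age; the leap to "actually younger" happens in the interpretation, which is exactly my point.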
  9. Esther12

    Esther12 Senior Member

    Messages:
    5,266
    Likes:
    5,460
    Sorry for posting links to things I haven't read, but I've just stumbled upon a couple of items that sound relevant to the discussion here, though I won't have time to read them today:

    How citation distortions create unfounded authority: analysis of a citation network

    Conclusion Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.

    http://www.bmj.com/content/339/bmj.b2680

    Also, I'm not sure if this is true, but apparently this RCT of CBT for command hallucinations found CBT to be ineffective, yet on all three occasions it has been cited, it has been cited as evidence of efficacy:

    http://www.sciencedirect.com/science/article/pii/S0005796711002658
  10. alex3619

    alex3619 Senior Member

    Messages:
    7,186
    Likes:
    11,258
    Logan, Queensland, Australia
    Hi Esther12, I still have to read the BMJ article in full, but it looks interesting. I have been making similar but less formal arguments. Recently I have started discussing the propagation of errors in explanatory frameworks. When the major or only process is verification, not falsification, errors can easily propagate through an explanation or model, with early errors leading to serious distortions later on. I see this in Babble, but I also see it as a problem generally - it's one I am at risk of in my own book, for example. Things have to be questioned; in science, this means testing. In psychobabble there is almost no way to test the hypotheses or models, so they rely on testing outcomes instead, which means errors in the model become substantiated by outcome.
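    As a toy illustration of what a claim-specific citation network might look like (entirely hypothetical papers, my own sketch rather than anything from the BMJ article): a single primary study gets laundered through reviews until later papers appear to rest on broad support.

    ```python
    # Sketch: a claim's citation network where support "amplifies" through
    # secondary sources. All paper names are hypothetical.
    import networkx as nx

    G = nx.DiGraph()
    # Edge (a, b) means paper a cites paper b in support of the claim.
    G.add_edges_from([
        ("review_2005", "primary_1999"),  # the only primary data
        ("paper_2006", "review_2005"),    # cites the review, not the data
        ("paper_2007", "review_2005"),
        ("paper_2008", "paper_2006"),
        ("paper_2008", "paper_2007"),
    ])

    # Amplification: supportive citations pile onto secondary sources,
    # while every chain still traces back to one primary study.
    in_deg = dict(G.in_degree())
    print("Most-cited node:", max(in_deg, key=in_deg.get))
    print("Papers whose support traces to the single primary study:",
          sum(nx.has_path(G, n, "primary_1999") for n in G if n != "primary_1999"))
    ```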

    Of course, the arguments raised in the BMJ article are more suggestive than proven. I may say more after I have read it in full.

    On the CBT article, I think we need to read the full paper, but it's behind a paywall.

    Bye, Alex
  11. Esther12

    Esther12 Senior Member

    Messages:
    5,266
    Likes:
    5,460
    Ta Alex. I thought that paper could be of interest to others... I'm not sure I'll be able to find time to give it a proper look, so it's good to have your summary.
  12. jimells

    jimells Senior Member

    Messages:
    425
    Likes:
    579
    northern Maine
    Very well said; I agree with all of this. I think a big part of the problem is the nature of hierarchical institutions. No doubt some lowly researcher knew Vioxx was no good, and maybe even told their supervisor, but once the information entered the management hierarchy it was probably buried. So often the boss is told what they want to hear, not how their decision will cost the company millions. With inaccurate information, top managers can't possibly make good decisions, except maybe by luck. In fact I believe, based on personal experience, that most companies make money in spite of their best efforts to do otherwise.

    Telling the bosses what they wanted to hear played a significant role in the collapse of the German Democratic Republic (East Germany). Misinformation probably helped bring down the USSR as well, and certainly helped cause the recent and ongoing worldwide financial catastrophes.

    When telling the truth can end one's career, it's much easier to stay quiet, especially with a mortgage to pay and mouths to feed.
  13. alex3619

    alex3619 Senior Member

    Messages:
    7,186
    Likes:
    11,258
    Logan, Queensland, Australia
    This is one of the problems endemic in the world. When societies, including democracies, don't have accurate information - whether from the media or from education - they are at risk of distorting issues and responding to problems based on the wrong information. That means more things get stuffed up and fewer things get fixed.

    The decline of the fourth estate is an issue for democracy. The rise of the fifth estate (the internet) is a hope.

    Bye, Alex
    ggingues likes this.
  14. ggingues

    ggingues $10 gift code at iHerb GAS343 of $40

    Messages:
    4,037
    Likes:
    882
    Concord, NH
    Sounds accurate to me, that's why the US is hanging in the balance right now!

    GG
  15. user9876

    user9876 Senior Member

    Messages:
    755
    Likes:
    1,817
    There is quite a lot of work on complex dynamic systems coming out of chaos theory. It's not an area I've looked at much, but I know that Bristol University has a multidisciplinary centre for complex systems whose remit includes the life sciences. http://bccs.bristol.ac.uk/research/applications.html
  16. alex3619

    alex3619 Senior Member

    Messages:
    7,186
    Likes:
    11,258
    Logan, Queensland, Australia
    I began looking at complex systems in 1993, and systems theory in 1986. I am currently re-examining some more recent work. Some of the theory is indeed there, but I am unsure that we yet have an understanding of how to use it effectively in biochemistry or any other similar discipline.

    I am pro-systems theory, but it would be wrong to suggest that it is particularly advanced yet. For example, the biopsychosocial model is based on systems theory, but I see no evidence of systems theory actually being used in developing biopsychosocial approaches. Systems biology, on the other hand, is a growing field with promise. The social sciences embraced systems theory some time ago. The uptake and development of systems-theoretic methodologies is only beginning.

    Systems theory is a different paradigm. It's big-picture (ignoring for now the issue of system scope), and it can't simply be tacked onto established programs. Established programs and methodologies will have to change to use systems theory effectively, and part of that has to be systems-theoretic research within the various disciplines. Some of that is happening, but systems theory has a long way to go.

    Systems theory was integral to my informatics degree, 1989-1992.
  17. user9876

    user9876 Senior Member

    Messages:
    755
    Likes:
    1,817
    I agree it is early days for this type of research, but I see it as a positive that people are taking this approach. One thing I find really promising is the involvement of mathematicians, who will hopefully provide an underlying formalism that can be used to analyse complex biological systems.

    I'm not sure I would describe the biopsychosocial approach as a systems approach, since it doesn't seem to have any underlying systems theory (just woolly words) - though that impression may just reflect the small number of papers I have looked at.
  18. alex3619

    alex3619 Senior Member

    Messages:
    7,186
    Likes:
    11,258
    Logan, Queensland, Australia
    The entire justification of the biopsychosocial idea was systems theory, if you read Engel: it was the motivation, the justification and the explanation. But it was embraced as an excuse for eclectic psychiatry - distorted right from the beginning. The initial principle was systems-based; how it developed was a vague nod to systems theory with zero application of it. This is something I am looking at in my book - I think embracing systems theory is one way psychiatry could dig itself out of a very deep hole. The methodology to do so exists, but probably needs much more development before it becomes feasible.

    Chaos theory and its associated math are 'hard' systems theory, and they have potential. The flip side is 'soft' systems: a systems view can be embraced without getting into the mathematics.

    One of the problems of math-based systems theory is that it is very vulnerable to minute uncertainties in measurement and quantification. Add in some dynamics and the whole mathematical system becomes too variable to be precise. This is why climate modellers tweak hundreds of mathematical models and look for common patterns in the outcomes: if a wide range of parameters gives similar outcomes, the model is considered to have some predictive value, though the uncertainty remains high. For many problems I think a less equation-bound approach (still based on maths, but on graph theory rather than equations) is a far better fit.
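    A minimal illustration of that ensemble idea (my own sketch, using the logistic map as a stand-in for a real model): each individual run is hopelessly sensitive to tiny parameter tweaks, but statistics taken across the ensemble are far more stable, and those are what get reported.

    ```python
    # Sketch: single runs of a chaotic model diverge under tiny parameter
    # perturbations, but ensemble-level patterns remain comparatively stable.
    import numpy as np

    def end_state(r, x0=0.5, steps=200):
        x = x0
        for _ in range(steps):
            x = r * x * (1 - x)   # logistic map, chaotic for r near 4
        return x

    rng = np.random.default_rng(3)
    ensemble = 3.9 + rng.normal(0, 1e-4, 100)   # 100 barely-different models
    outcomes = np.array([end_state(r) for r in ensemble])

    print("Spread of individual end states:", round(outcomes.std(), 3))  # large
    print("Ensemble mean:", round(outcomes.mean(), 3))  # comparatively stable
    ```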
  19. Simon

    Simon

    Messages:
    1,346
    Likes:
    4,245
    Monmouth, UK
    Wow, I wasn't expecting quite so much or such wide-ranging discussion on this thread, and I'm struggling to keep up. The systems theory stuff is definitely beyond me.

    The BMJ piece on citation distortion looks really interesting, and I will try to post some brief notes on it soon (delighted if someone beats me to it). Meanwhile, I wanted to give another example of a psychologist blazing a trail in research rigour: James Coyne.

    "False positive psychology"

    James Coyne does not pull his punches. He is a professor of psychology who has castigated his peers for over-claiming the impact of psychological factors on physical health. He even suggests this may be down to professional insecurity, a point he makes bluntly in his keynote article.
    Ouch.

    Coyne is a Director of Behavioural Oncology, and believes that psychologists can play a hugely important role in helping patients live with cancer. But he doesn't think psychology will help cure their illness.
    The paper has its own thread. Coyne also authored the new study that unpicks the claimed association between 'Type D' personality and cardiac deaths.
  20. barbc56

    barbc56 Senior Member

    Messages:
    1,456
    Likes:
    870
    Here's another article by Coyne from the Science Based Medicine blog.

    (bold added)

    http://www.sciencebasedmedicine.org/index.php/nih-funds-training-in-behavioral-intervention-to-slow-progression-of-cancer/#more-23285

    Barb C.:>)
    Little Bluestem, Simon and alex3619 like this.
