
Why RCTs don’t tell you what you want to know

Discussion in 'Other Health News and Research' started by natasa778, Apr 7, 2014.

  1. natasa778 Senior Member

    http://jeromeburne.com/2013/04/28/w...d-trials-dont-tell-you-what-you-want-to-know/

     
  2. Esther12 Senior Member

    I don't really understand Healy's criticism of RCTs.

    He points out lots of real problems with the way in which RCTs are used, but not really any that make randomisation or the use of control groups seem like a bad idea. For all trials we need to use outcome measures which are of real value to patients. For some conditions and treatments there is a problem with lumping people together into large and disparate groups, and we need to do more to identify the subgroups which would benefit most. Also, some researchers and clinicians go beyond the evidence and make unfounded claims to patients... but I don't think that's a fault of the RCT approach itself.

    I don't see why RCTs should lead to a magic bullet approach.

    I really tried to understand this before, as Healy does seem to make a lot of legitimate points, but I just don't see how they add up to what he thinks they do.

    There are incentives in medicine for people to exaggerate how valuable they are to patients. This is wrong and should be fought against. Results from RCTs can be spun in a way which plays into this problem - but so can results from other trials.

    Can anyone see what I'm missing here? Am I being dim?
     
  3. alex3619 Senior Member

    I don't know that he is really complaining about RCTs themselves so much as about their use. I think you are right about that @Esther12. Yet RCTs are often wrong, and it has been estimated that even very large trials have a roughly 10% chance of being wrong, though I do not fully understand the arguments for that. This is because the system as a whole has biases.
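    As a back-of-envelope sketch of where a figure like that could come from, here is Bayes' rule applied to a significance test. All the numbers are illustrative assumptions, not figures from Healy or from any trial:

```python
# Chance that a "statistically significant" trial finding is false,
# given assumed (purely illustrative) inputs.
prior = 0.5    # assumed prior probability that the tested hypothesis is true
alpha = 0.05   # conventional false-positive rate
power = 0.80   # assumed probability of detecting a real effect

p_sig_true = power * prior           # true effect and a significant result
p_sig_false = alpha * (1 - prior)    # no effect, significant by chance

p_false_given_sig = p_sig_false / (p_sig_true + p_sig_false)
print(f"P(finding is false | significant) = {p_false_given_sig:.1%}")
# ~5.9% with these inputs; drop the prior to 0.3 and it passes 12%,
# before any of the systemic biases are counted at all.
```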

    Part of the bias is what RCTs are used for. Managed medicine, I suspect, is using them as a magic bullet. Refuse the bullet and they refuse further care.

    This is more about politics and management than about science. Yet RCTs are still failing us, and this has been shown regularly. As we know from psychogenic research into ME, CFS, CBT and GET, meta-analyses built on smaller studies that are not methodologically sound, and/or are contradicted by evidence, still get amalgamated and used to claim the treatments are evidence based. The argument is basically that their evidence pile is bigger than your evidence pile, and evidence that they are wrong can safely be ignored.
     
  4. Valentijn Senior Member

    I think the problem with RCTs is that simply being an RCT doesn't make the results accurate. Yet many in the medical profession, in health organizations, and even in politics treat them as if they were infallible. That's how we end up with PACE being cited as proving that we all just need to exercise, even though the actual results indicate something between "outright failure" and "outcome cannot be determined due to poor methodology and incomplete reporting of data".

    "RCT" is used an excuse to rubber stamp the (spun) results far too often, without a careful reading and analysis of the trial itself. Random and controlled trials are quite an improvement over the alternatives, but are still no guarantee of quality, and a lot of people who should know better tend to forget that.
     
  5. alex3619 Senior Member

    An RCT is just a big fancy version of a statistical significance check. As I have said repeatedly, small results, results applying to only a small subset, incorrect or biased results, fraudulent results, and results showing association rather than causation can all be statistically significant. WHAT is found is just as important as whether or not it is significant. The results still have to be interpreted. Statistically significant bias is not the same as a reliable result. Sure, the results are less likely to be due to chance... but so are many highly biased results.

    The randomization, particularly if double blinded, can only counter some of the bias. Methodological bias, bias in cohorts, and bias in the measurements taken can all produce shifts in the outcome that are not evidence the intervention works.
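    As a toy illustration of that point (invented numbers, no real data): simulate two arms with no true treatment effect at all, then add a small inflation of self-ratings in the unblinded arm. The shift still comes out "statistically significant":

```python
# Toy simulation: zero true effect plus a small reporting bias in an
# unblinded arm still produces "statistical significance".
import random
import statistics

random.seed(1)
n = 300       # assumed per-arm sample size
bias = 0.3    # assumed inflation of subjective self-ratings (unblinded arm)

control = [random.gauss(5.0, 1.5) for _ in range(n)]
treated = [random.gauss(5.0, 1.5) + bias for _ in range(n)]  # same true mean

# Welch t statistic computed by hand, to keep the example dependency-free.
m1, m2 = statistics.mean(control), statistics.mean(treated)
v1, v2 = statistics.variance(control), statistics.variance(treated)
t = (m2 - m1) / ((v1 / n + v2 / n) ** 0.5)
print(f"difference = {m2 - m1:.2f} points, t = {t:.2f}")
# t typically lands well above 1.96 (p < 0.05) despite no real effect.
```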

    For example, in PACE the study outcome measures were not rational, invalid statistical methods were used to define endpoints, terms were redefined and then used ambiguously in communicating the results, the patient cohorts were not validated as properly representative, the patients were not blinded to their study arm (which could not be avoided), and objective outcome measures were dropped mid-study, with reliance placed instead on subjective reports about activity, after intense psychotherapy aimed at changing how patients think.

    This is further complicated by the issue that it is almost impossible to implement a fully blinded trial of therapy in psychiatric disorders. You can attempt to hide what kind of drug you give, but not what kind of therapy. The people assessing the results are also often aware of which treatment arm was used, unless patients are given a random ID.
     
  6. barbc56 Senior Member

    According to the following article, it makes sense that there would be a lot of studies that turn out to be false. Here is one reason, out of many (some mentioned in the posts above), why we should expect to see this:

    Why the field of medicine is even more likely to produce RCTs that are false:

    (I edited the above by separating sentences within a paragraph for easier reading.)

    http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html
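    The core of the argument in the linked piece (it discusses Ioannidis's 2005 paper "Why Most Published Research Findings Are False") is a single formula for the chance that a claimed finding is true, with an explicit term for bias. A minimal sketch with illustrative inputs:

```python
# Positive predictive value of a claimed research finding, following
# Ioannidis (2005). R is the pre-study odds that a probed relationship
# is true; u is the fraction of analyses distorted by bias.
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

print(f"even odds, no bias:   PPV = {ppv(R=1.0):.0%}")        # ~94%
print(f"long odds, no bias:   PPV = {ppv(R=0.1):.0%}")        # ~62%
print(f"long odds, some bias: PPV = {ppv(R=0.1, u=0.3):.0%}") # ~20%
```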

    I had to reread several parts of this article before I understood what the author was saying. But it's absolutely fascinating reading!

    Barb

    ETA
    I would think the second quote could also be applied to studies in alternative medicine, because of the large number of hypotheses that are generated.
     
  7. alex3619 Senior Member

    @barbc56

    Yes, Ioannidis is one of the authors whose work convinced me of this issue. While I understand bits of the argument, I think there are many layers he is beginning to unravel.

    The CBT/GET research for ME has tiny effect sizes, ignores contrary research, consists of a small group testing and reviewing each other's hypotheses, and relies heavily on limited sources of subjective information. Though they have in some cases used moderate cohort sizes, those cohorts may not be valid cohorts. That validity is part of what they should be testing.

    Point six is about testing. Most psychogenic research appears to be designed around confirming hypotheses rather than testing them. This is a holdover from logical positivism. It's bad science, and in many cases not science at all. I would give some of the papers we are complaining about a Pseudoscience Award.
     
  8. Simon

    Agree with those points, but I think he makes two valid criticisms of RCTs themselves:

    In other words, if the RCT outcome measure is flawed then the findings don't add up to much. Another example might be, say, using self-reports in unblinded trials of behavioural treatments, especially where those treatments are likely to influence the way patients self-report.

    Not a new point, but one that is often glossed over.
     
  9. Esther12 Senior Member

    I mentioned that all trials need to have meaningful outcome measures... I didn't miss that one!

    I totally agree with lots of his criticisms of RCTs, but it seems that they all apply just as well to non-randomised trials with no controls.

    Re the specialised set-up: also, patients who agree to be randomised will presumably be keener on the possible interventions than many others, e.g. the hypochondria CBT RCT which had to screen thousands of patients to get its sample (and still ended up with poor results). But if you don't have randomisation, you're still likely to end up with less useful results.

    One could try to argue that RCTs lend a false level of confidence and legitimacy to medical claims... but it's not as if pre-RCT doctors were known for their modesty! I totally agree that we need to push back against the way in which RCTs are over-sold and misrepresented, but I see that as being more an argument against quackery than against RCTs.
     
  10. Esther12 Senior Member

    I was just reading this article on N-of-1 trials, which I've seen some (maybe Healy?) argue should be seen as preferable to RCTs, although to me it seems like this approach would need to be supplementary to RCTs.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2690377/
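    For anyone curious, here is a minimal sketch of what an N-of-1 layout looks like (the design is standard; all the numbers are hypothetical): one patient cycles through randomised drug/placebo period pairs, and the within-pair differences are what get analysed.

```python
# Minimal sketch of an N-of-1 trial: one patient, several randomised
# drug/placebo period pairs. All numbers are hypothetical.
import random
import statistics

random.seed(7)
n_pairs = 5
diffs = []  # placebo score minus drug score, one per pair

for _ in range(n_pairs):
    order = ["drug", "placebo"]
    random.shuffle(order)  # randomise treatment order within each pair
    # Simulated symptom scores for the pair (lower = better), assuming
    # a 1-point true benefit from the drug.
    score = {"drug": random.gauss(5.0, 1.0), "placebo": random.gauss(6.0, 1.0)}
    print(f"order {order}: drug {score['drug']:.1f}, "
          f"placebo {score['placebo']:.1f}")
    diffs.append(score["placebo"] - score["drug"])

print(f"mean benefit = {statistics.mean(diffs):.1f} points over {n_pairs} pairs")
```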
     
