
Why RCTs don’t tell you what you want to know

natasa778

Senior Member
Messages
1,774
http://jeromeburne.com/2013/04/28/w...d-trials-dont-tell-you-what-you-want-to-know/

Healy’s case against RCTs is not based on bare-faced fiddling of results by the drugs companies, although he’s often exposed it. Even if all trials were squeaky clean they would still be a serious barrier to developing really effective ways of tackling the various lifestyle diseases that are threatening to cripple Western health services.

The one-size-fits-all approach is going to become increasingly irrelevant in the face of an explosion of genetic information pinpointing detailed individual differences and the ways genes and environment interact to affect health. It’s also not an effective way of assessing the preventative and tailored lifestyle changes needed to deal with the epidemic of chronic metabolic disorders.
 
Esther12

Messages
13,774
I don't really understand Healy's criticism of RCTs.

He points out lots of real problems with the way in which RCTs are used, but not really any that make randomisation or the use of control groups seem like a bad idea. For all trials we need to use outcome measures which are of real value to patients. For some conditions and treatments there is a problem with lumping people together into large and disparate groups, and we need to do more to identify subgroups which would benefit the most. Also, some researchers and clinicians go beyond the evidence and make unfounded claims to patients... but I don't think that's the RCT approach.

The RCT magic bullet approach says: “This treatment has been proved to be effective and so it’s appropriate for you.”

I don't see why RCTs should lead to a magic bullet approach.

I really tried to understand this before, as Healy does seem to make a lot of legitimate points, but I just don't see how they add up to what he thinks they do.

There are incentives in medicine for people to exaggerate how valuable they are to patients. This is wrong and should be fought against. Results from RCTs can be spun in a way which plays into this problem - but so can results from other trials.

Can anyone see what I'm missing here? Am I being dim?
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I don't know that he is really complaining about RCTs themselves, but about their use. I think you are right about that @Esther12. Yet RCTs are often wrong, and it has been estimated that even the very large trials have a 10% chance of being wrong, though I do not fully understand the arguments for that. This is because the system as a whole has biases.

Part of the bias is what RCTs are used for. Managed medicine, I suspect, is using them as a magic bullet. Refuse the bullet and they refuse further care.

This is more about politics and management than about science. Yet RCTs are still failing us, and this has been shown regularly. As we know from psychogenic research into ME, CFS, CBT and GET, meta-analyses amalgamated from smaller studies that are not methodologically sound, and/or are contradicted by evidence, still get used to claim that treatments are evidence based. The argument is basically that their evidence pile is bigger than your evidence pile, and evidence that they are wrong can safely be ignored.
 
Messages
15,786
I think the problem with RCTs is that simply being an RCT doesn't make the results accurate. Yet many in the medical profession or health organizations or even politics will treat them as if they are infallible. That's how we end up with PACE being cited as proving that we all just need to exercise, even though the actual results indicate something between "outright failure" and "outcome cannot be determined due to poor methodology and incomplete reporting of data".

"RCT" is used an excuse to rubber stamp the (spun) results far too often, without a careful reading and analysis of the trial itself. Random and controlled trials are quite an improvement over the alternatives, but are still no guarantee of quality, and a lot of people who should know better tend to forget that.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
An RCT is just a big fancy version of a statistical significance check. As I have said repeatedly, small results, results applying to only a small subset, incorrect or biased results, fraudulent results and results of association and not causation can all be significant. WHAT is found is just as important as whether or not it is significant. The results still have to be interpreted. Statistically significant bias is not the same as a reliable result. Sure the results are less likely to be due to chance ... but so are many highly biased results.
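To illustrate that last point, here is a minimal simulation of my own (not from any of the studies discussed): a pure reporting bias, with zero true treatment effect, can easily produce a highly significant result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 150                 # patients per arm (assumed for illustration)
true_effect = 0.0       # the intervention genuinely does nothing
reporting_bias = 0.4    # unblinded treated patients rate themselves ~0.4 SD
                        # better on a subjective questionnaire

control = rng.normal(0.0, 1.0, n)
treated = rng.normal(true_effect + reporting_bias, 1.0, n)

t, p = stats.ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p:.2g}")  # typically p << 0.05 despite a zero true effect
```

The p-value here is real and small, but it is measuring the bias, not the treatment.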

The randomization, particularly if double blinded, is only able to counter some of the bias. Methodological bias, bias in cohorts, and bias in the study measurements taken can all produce shifts in the outcome that are not evidence the intervention works.

For example, in PACE the study outcome measures were not rational, invalid statistical methods were used to define endpoints, terms were redefined and then used ambiguously in communicating the results, the patient cohorts were not validated as properly representative, the patients were not blinded to their study arm (which could not be avoided), and objective outcome measures were dropped mid-study, with reliance placed instead on subjective reports about activity, after intense psychotherapy aimed at changing how patients think.

This is further complicated by the fact that it is almost impossible to implement a fully blinded trial of therapy for psychiatric disorders. You can attempt to hide what kind of drug you give, but not what kind of therapy. The people assessing the results are also often aware of which treatment arm was used, unless patients are given a random ID.
 

barbc56

Senior Member
Messages
3,657
According to the following article, it makes sense that there would be a lot of studies that turn out to be false. Here is one reason out of many (some mentioned in the posts above) why we should expect to see this:

Sadly, things get really bad when lots of researchers are chasing the same set of hypotheses. Indeed, the larger the number of researchers the more likely the average result is to be false!

The easiest way to see this is to note that when we have lots of researchers every true hypothesis will be found to be true but eventually so will every false hypothesis.

Thus, as the number of researchers increases, the probability that a given result is true goes to the probability in the population, in my example 200/1000 or 20 percent.
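A back-of-the-envelope sketch of that arithmetic (my own illustration, using the article's 200-true-out-of-1000 example, with assumed values of α = 0.05 and 80% power per team):

```python
# 1000 hypotheses, 200 of them true. Each team tests every hypothesis at
# alpha = 0.05 with power 0.8; a hypothesis counts as a "positive finding"
# once at least one team gets a significant result.
alpha, power = 0.05, 0.8
n_true, n_false = 200, 800

for teams in (1, 5, 20, 100):
    p_pos_true = 1 - (1 - power) ** teams   # P(some team confirms a true hypothesis)
    p_pos_false = 1 - (1 - alpha) ** teams  # P(some team "confirms" a false one)
    ppv = n_true * p_pos_true / (n_true * p_pos_true + n_false * p_pos_false)
    print(f"{teams:3d} teams: P(a positive finding is true) = {ppv:.2f}")

# Falls from 0.80 with one team toward 0.20 with many: both detection
# probabilities approach 1, so the answer approaches 200/1000.
```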

Why the field of medicine is even more likely to produce RCTs that are false:

Ioannidis says most published research findings are false.

This is plausible in his field of medicine where it is easy to imagine that there are more than 800 false hypotheses out of 1000.

In medicine, there is hardly any theory to exclude a hypothesis from being tested.

Want to avoid colon cancer? Let's see if an apple a day keeps the doctor away. No? What about a serving of bananas? Let's try vitamin C and don't forget red wine.

Studies in medicine also have notoriously small sample sizes. Lots of studies that make the NYTimes involve less than 50 people - that reduces the probability that you will accept a true hypothesis and raises the probability that the typical study is false.

(I edited the above by separating sentences within a paragraph for easier reading.)

http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html
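To put a number on the small-sample point (my own worked example with assumed values, not from the article): with a modest true effect of 0.3 standard deviations, 25 patients per arm give a two-arm trial only about 18% power, so most true effects are missed and the positives that do appear are disproportionately likely to be noise or bias.

```python
from math import sqrt
from scipy.stats import norm

alpha, d = 0.05, 0.3           # two-sided test; assumed true effect of 0.3 SD
z_crit = norm.ppf(1 - alpha / 2)

for n in (25, 50, 100, 250):   # patients per arm
    # Normal approximation to the power of a two-sample comparison of means
    power = 1 - norm.cdf(z_crit - d * sqrt(n / 2))
    print(f"n = {n:3d} per arm: power ~ {power:.2f}")
# n =  25 per arm: power ~ 0.18
# n = 250 per arm: power ~ 0.92
```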

I had to reread several parts of this article before I understood what the author was saying. But it's absolutely fascinating reading!

Barb

ETA
I would think the second quote could also be applied to studies in alternative medicine, because of the large number of hypotheses that are generated.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
@barbc56

http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html
1) In evaluating any study try to take into account the amount of background noise. That is, remember that the more hypotheses which are tested and the less selection which goes into choosing hypotheses the more likely it is that you are looking at noise.

2) Bigger samples are better. (But note that even big samples won't help to solve the problems of observational studies which is a whole other problem).

3) Small effects are to be distrusted.

4) Multiple sources and types of evidence are desirable.

5) Evaluate literatures not individual papers.

6) Trust empirical papers which test other people's theories more than empirical papers which test the author's theory.

7) As an editor or referee, don't reject papers that fail to reject the null.

Yes, Ioannidis is one of the authors whose work convinced me of this issue. While I understand bits of the argument, I think there are many layers he is beginning to unravel.

CBT/GET research for ME has tiny effect sizes, ignores contrary research, tests and reviews its own hypotheses within a small group, and relies heavily on limited sources of subjective information. Though they have in some cases used moderate cohort sizes, those cohorts may not be valid cohorts. That validity is part of what they should be testing.

Point six is about testing. Most psychogenic research appears to be designed around confirming hypotheses rather than testing them. This is a holdover from logical positivism. It's bad science, and in many cases not science at all. I would give some of the papers we are complaining about a Pseudoscience Award.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Esther12 said:
I don't really understand Healy's criticism of RCTs.

He points out lots of real problems with the way in which RCTs are used, but not really any that make randomisation or the use of control groups seem like a bad idea. For all trials we need to use outcome measures which are of real value to patients. For some conditions and treatments there is a problem with lumping people together into large and disparate groups, and we need to do more to identify subgroups which would benefit the most. Also, some researchers and clinicians go beyond the evidence and make unfounded claims to patients... but I don't think that's the RCT approach.
[...]

I don't see why RCTs should lead to a magic bullet approach.

I really tried to understand this before, as Healy does seem to make a lot of legitimate points, but I just don't see how they add up to what he thinks they do.

There are incentives in medicine for people to exaggerate how valuable they are to patients. This is wrong and should be fought against. Results from RCTs can be spun in a way which plays into this problem - but so can results from other trials.

Can anyone see what I'm missing here?
Agree with those points, but I think he makes two valid criticisms of RCTs themselves:

But there is more fundamental problem. Properly conducted RCTs are supposed to tell you if a drug is effective. But what does “effective” mean?...

The RCT system finds many drugs effective not because they cure anything but because they change some marker or risk factor for disease, such as reducing your cholesterol level or the size of your tumour. But often changing these markers has no effect on long term outcome. For instance Ezetimibe is very effective at lowering cholesterol; however there is no evidence it has any effect on your chances of developing heart disease.
In other words, if the RCT outcome measure is flawed then the findings don't add up to much. Another example might be, say, using self-reports in unblinded trials of behavioural treatments, especially where those behavioural treatments are likely to influence the way patients self-report.

An RCT is a highly specialised set up: the patients are carefully selected. Usually they are younger and healthier than the people who will actually be taking the drug. They will only have one thing wrong with them and only be taking that one drug. People in the real world will be older and very likely have two or three conditions for which they are also taking five or more drugs.

A study comparing the cost effectiveness of a drug based on the results from RCT trials with results in a clinic found they were about five times less effective. This not only raises the issue of effectiveness – many widely used drugs only work for 25 to 40 per cent of patients according to RCTs – but it also puts a big question mark over safety.
Not a new point, but one that is often glossed over.
 
Esther12

Messages
13,774
Simon said:
In other words, if the RCT outcome measure is flawed then the findings don't add up to much.

I mentioned that all trials need to have meaningful outcome measures... I didn't miss that one!

I totally agree with lots of his criticisms of RCTs, but it seems that they all apply just as well to non-randomised trials with no controls.

Re the specialised set-up: also, patients who agree to be randomised will presumably be keener on the possible interventions than many, e.g. the hypochondria CBT RCT which had to screen thousands of patients to get its sample (and still ended up with poor results). But if you don't have randomisation, you're still likely to end up with less useful results.

One could try to argue that RCTs provide a false level of confidence and legitimacy to medical claims... but it's not as if pre-RCT doctors were known for their modesty! I totally agree that we need to push back against the way in which RCTs are over-sold and misrepresented, but I see that as more of an argument against quackery than against RCTs.
 
Esther12

Messages
13,774
I was just reading this article on N-of-1 trials, which I've seen some (maybe Healy?) argue should be seen as preferable to RCTs, although to me it seems like this approach would need to be supplementary to RCTs.

Abstract
Context
When feasible, randomized, blinded single-patient (n-of-1) trials are uniquely capable of establishing the best treatment in an individual patient. Despite early enthusiasm, by the turn of the twenty-first century, few academic centers were conducting n-of-1 trials on a regular basis.

Methods
The authors reviewed the literature and conducted in-depth telephone interviews with leaders in the n-of-1 trial movement.

Findings
N-of-1 trials can improve care by increasing therapeutic precision. However, they have not been widely adopted, in part because physicians do not sufficiently value the reduction in uncertainty they yield weighed against the inconvenience they impose. Limited evidence suggests that patients may be receptive to n-of-1 trials once they understand the benefits.

Conclusions
N-of-1 trials offer a unique opportunity to individualize clinical care and enrich clinical research. While ongoing changes in drug discovery, manufacture, and marketing may ultimately spur pharmaceutical makers and health care payers to support n-of-1 trials, at present the most promising resuscitation strategy is stripping n-of-1 trials to their essentials and marketing them directly to patients. In order to optimize statistical inference from these trials, empirical Bayes methods can be used to combine individual patient data with aggregate data from comparable patients.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2690377/
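For what the "empirical Bayes" suggestion in the conclusions amounts to in practice, here is a minimal sketch (all numbers invented for illustration): the patient's own n-of-1 estimate is shrunk toward the average effect in comparable patients, with weights set by the precision of each source of information.

```python
import numpy as np

# Hypothetical per-cycle treatment-vs-placebo differences from one patient's
# n-of-1 trial, plus a population mean effect and between-patient variance
# estimated from comparable patients (all values assumed for illustration).
patient_diffs = np.array([1.2, 0.4, 0.9, 1.5, 0.2, 1.1])
mu_pop, tau2 = 0.3, 0.25

y_i = patient_diffs.mean()                             # patient-only estimate
s2_i = patient_diffs.var(ddof=1) / len(patient_diffs)  # its sampling variance

# Empirical Bayes shrinkage: weight the patient's data by its precision
w = tau2 / (tau2 + s2_i)
theta_i = w * y_i + (1 - w) * mu_pop

print(f"patient-only estimate: {y_i:.2f}")
print(f"shrunk (EB) estimate:  {theta_i:.2f} (weight on patient data: {w:.2f})")
```

The noisier the patient's own data, the more the estimate leans on the aggregate; with many clean crossover cycles it stays close to the patient's own result.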