Misrepresentation of Randomized Controlled Trials in Press Releases & News Coverage: A Cohort Study

Dolphin

Senior Member
Messages
17,567
Free full text: http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001308#pmed-1001308-t004


Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study

Amélie Yavchitz1,2,3, Isabelle Boutron1,2,3*, Aida Bafeta1,2,3, Ibrahim Marroun4, Pierre Charles4, Jean Mantz5, Philippe Ravaud1,2,3
1 INSERM, U738, Paris, France, 2 Centre d'Épidémiologie Clinique, AP-HP (Assistance Publique des Hôpitaux de Paris), Hôpital Hôtel Dieu, Paris, France, 3 Université Paris Descartes, Sorbonne Paris Cité, Faculté de Médecine, Paris, France, 4 Department of Internal Medicine, Hôpital Foch, Suresnes, France, 5 Department of Anesthesiology and Critical Care, Beaujon University Hospital, Clichy, France

Abstract

Background

Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.

Methods and Findings

We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.

“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. From bivariate and multivariable analysis assessing the journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6, [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as those identified in the press release and article abstract conclusion. Findings of RCTs based on the news items were overestimated for ten (24%) reports.
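As an aside on the statistics quoted above: a relative risk is just the ratio of two proportions, and the 95% CI is usually a Wald-type interval on the log scale. Here is a minimal sketch of that calculation; the counts used are purely illustrative and are not the paper's data.

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk and Wald-type 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR)
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts only (e.g. spin in press release by
# spin in abstract conclusion), not taken from the study:
rr, lo, hi = relative_risk(24, 4, 9, 33)
```

The multivariable result in the abstract adjusts for the other factors listed, so it would not be reproduced by a raw 2x2 calculation like this one.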

Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
"It’s not news that press releases are skewed. Previous research found that most press releases left out important caveats on safety or applicability of the research, and many flat out exaggerated the importance of results. “Our study adds to these results showing that ‘‘spin’’ in press releases and the news is related to the presence of ‘‘spin’’ in the published article,” say the authors. In other words – the root of the problem lies in how we write up research results in the first place."

Another quote from the article in Scientific American. Nice timing, together with another thread, as I am writing two connected blogs related to this.

It's clear that misleading press releases are causing major problems all over medical science. However, in the UK, if a science journalist wants to follow up and do some in-depth investigating, where do they go? Which "experts" do they interview? Do they use the Science Media Centre as a source? The impact of normal misreporting in science is likely to be increased by the particular circumstances around ME and CFS psychobabble.

Bye, Alex
 

biophile

Places I'd rather be.
Messages
8,977
[edit: this post is in reply to this thread and any other recent threads covering similar issues, such as "Psychologists are facing up to problems with replication" and "Are most positive findings in health psychology false.... or at least somewhat exaggerated?".]

It does seem that exactly the sorts of problems we've complained about in CFS, which were dismissed as a reflection of patients failing to understand how mind and body interact or as a stigmatisation of mental health issues, are now beginning to be recognised as a serious problem within much of psychology. They just couldn't believe it when it came from CFS patients.

The number of papers and articles on research misconduct and misrepresentation has increased significantly in recent years. This is very good news, as sooner or later the spotlight may be shone on the CFS backwater. I doubt "they" will yet believe it applies to CFS, unless there are specific examples published in journals. However, if psychology and psychiatry are the backwater of the scientific and medical communities, and CFS is a backwater subject in psychology and psychiatry, then I can imagine the problem must be rife in CFS research. Guilt by association is not enough, though; we need specific examples. I guess some outsiders would consider the WPI/Mikovits/XMRV saga an example, but it would be unusual if a backwater subject in backwater disciplines (the biopsychosocial approach to CFS) were somehow free of this problem. I bet some patients/advocates are waiting with glee for scandalous examples to be exposed, which may be inevitable and has been suspected for a long time now. Personally, I doubt there is much outright fraud, but there are plenty of grey-area examples, such as selective reporting and spin doctoring, etc.

Dolphin said:
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.

With the PACE Trial, the "spin" about normal and recovery was in the accompanying editorial and from the authors themselves at the Lancet press conference.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
It was the spin from the PACE trial that inspired me to start research for a book. There is a lot more involved than just PACE. Unsupported claims are made all the time in this research, typically by implying more recovery than was shown, or by implying causation when all they have is association. Bye, Alex
 

biophile

Places I'd rather be.
Messages
8,977
It was the spin from the PACE trial that inspired me to start research for a book. There is a lot more involved than just PACE. Unsupported claims are made all the time in this research, typically by implying more recovery than was shown, or by implying causation when all they have is association. Bye, Alex

The Lancet are complicit in the spin doctoring. 19 months later, they still have not even published a correction for the editorial in question (there were reliable rumours of a planned correction last year, after earlier resistance, but this now seems to have amounted to lip service from the Lancet). I tried to have another error corrected in a different editorial, but there was no interest either, even though the error was admitted. I wonder about their commitment to accuracy.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
How are these for examples of 'spin' within the published PACE Trial paper?:

"Interpretation
CBT and GET can safely be added to SMC to moderately improve outcomes for chronic fatigue
syndrome, but APT is not an effective addition."

Here, they forget to qualify their remarks.
Firstly, severely affected patients were not investigated, and so the results do not apply to severely affected patients.
Secondly, CBT 'moderately improved' outcomes in only one of the two primary outcomes. There was no clinical improvement for 'physical function'. (Not to mention the results of the six-minute walking distance test.)
Thirdly, it says that CBT and GET can be 'safely' added to SMC, but they have not published the 'deterioration rates' using the same measure as the 'improvement rates', so they are basing their assertion only on their own arbitrary measures of safety.
So the whole paragraph is full of cleverly-crafted half-truths.


Panel 2: Research in context
Interpretation
“In the pacing, graded activity, and cognitive behaviour therapy: a randomised evaluation (PACE) trial, we affirm that cognitive behaviour therapy and graded exercise therapy are moderately effective outpatient treatments for chronic fatigue syndrome when added to specialist medical care, as compared with adaptive pacing therapy or specialist medical care alone.”

At no place in the text of the paper, or the discussion of the results, do they mention that CBT failed to clinically improve physical function or physical disability. (This information is only to be found with an analysis of the data tables.)

They repeatedly refer to CBT as being 'moderately effective', without ever qualifying their assertion. But CBT moderately improved just one of the two primary outcomes, so CBT could equally be called 'clinically ineffective'.

In the published paper, the only place where they make a reference to CBT failing to clinically improve physical function (one of the two primary outcomes) was in a fleeting oblique reference to it, as follows:

“Mean differences between groups on primary outcomes almost always exceeded predefined clinically useful differences for CBT and GET when compared with APT and SMC.”


An example of ambiguous language:

“Discussion
When added to SMC, CBT and GET had greater success in reducing fatigue and improving physical function than did APT or SMC alone”

From reading this, the reader might think that CBT and GET were more successful therapies than SMC, and that CBT and GET were all-round successful therapies.
Not true. The SMC group saw greater improvements than the incremental additional improvements for CBT or GET.
And CBT did not demonstrate clinical success at improving physical function.

In such a high profile, multi-million pound medical research trial, would it have been so difficult to use wording that was unambiguous, such as:
"When used as a supplement to SMC, the incremental/additional effect size of GET was 'moderate'. For CBT, the incremental effect size for fatigue was 'moderate', but there was not a clinically useful improvement for physical function." ?
 

user9876

Senior Member
Messages
4,556
[edit: this post is in reply to this thread and any other recent threads covering similar issues, such as "Psychologists are facing up to problems with replication" and "Are most positive findings in health psychology false.... or at least somewhat exaggerated?".]



The number of papers and articles on research misconduct and misrepresentation has increased significantly in recent years. This is very good news, as sooner or later the spotlight may be shone on the CFS backwater. I doubt "they" will yet believe it applies to CFS, unless there are specific examples published in journals. However, if psychology and psychiatry are the backwater of the scientific and medical communities, and CFS is a backwater subject in psychology and psychiatry, then I can imagine the problem must be rife in CFS research. Guilt by association is not enough, though; we need specific examples. I guess some outsiders would consider the WPI/Mikovits/XMRV saga an example, but it would be unusual if a backwater subject in backwater disciplines (the biopsychosocial approach to CFS) were somehow free of this problem. I bet some patients/advocates are waiting with glee for scandalous examples to be exposed, which may be inevitable and has been suspected for a long time now. Personally, I doubt there is much outright fraud, but there are plenty of grey-area examples, such as selective reporting and spin doctoring, etc.



With the PACE Trial, the "spin" about normal and recovery was in the accompanying editorial and from the authors themselves at the Lancet press conference.

I think a spotlight on research misconduct will start to change people's behaviour. Currently, I think many people believe they can get away with spinning their results and misquoting other articles. Reviewers think they can get away with a quick read and a few comments rather than a detailed review. If people feel the spotlight may fall on them, then they may behave better.

However, I think certain researchers have developed such a degree of arrogance that it will not change their behaviour. I wonder if Wessely and White have got away with such poor work for so many years that they don't recognise the problems. Reading Wessely's papers in particular, I feel he lacks an analytic, logical thought process, but his writing style allows him to gloss over incoherent arguments.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
The problem with misrepresentation goes much deeper though, because it is a sociological problem relating to prestige and funding of science and scientists trying to justify their work.
It can't merely be solved by somehow forcing scientists themselves to be more honest; we need to increase general societal knowledge of how science works, its importance and so forth, such that scientists are not compelled to fall into this marketing trap in the first place.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
In the last several decades, the funding for science has become more and more dependent on private funding, including corporations doing applied research. In addition, government-funded research has become more about justification and spinning results: one can't afford to waste a penny, or, failing that, can't afford to be seen wasting a penny. To this end, it's important that research results are spun as effective. Future funding depends on it. This is a fundamental distortion of the funding base, leading to Zombie Science. There is a desire by some to hide failure wherever possible. The old notion of science as objective enquiry is disappearing. The other side of the distortion is that corporations want particular results. There is a lot of pressure to give it to them. Bye, Alex
 

currer

Senior Member
Messages
1,409
I suppose with PACE it must have been too expensive to have been allowed to fail in public.
 

Dolphin

Senior Member
Messages
17,567
I think a spotlight on research misconduct will start to change people's behaviour. Currently, I think many people believe they can get away with spinning their results and misquoting other articles. Reviewers think they can get away with a quick read and a few comments rather than a detailed review. If people feel the spotlight may fall on them, then they may behave better.

However, I think certain researchers have developed such a degree of arrogance that it will not change their behaviour. I wonder if Wessely and White have got away with such poor work for so many years that they don't recognise the problems. Reading Wessely's papers in particular, I feel he lacks an analytic, logical thought process, but his writing style allows him to gloss over incoherent arguments.
This is one of the things that motivates me to write letters, e-letters, etc.
 

user9876

Senior Member
Messages
4,556
In the last several decades, the funding for science has become more and more dependent on private funding, including corporations doing applied research. In addition, government-funded research has become more about justification and spinning results: one can't afford to waste a penny, or, failing that, can't afford to be seen wasting a penny. To this end, it's important that research results are spun as effective. Future funding depends on it. This is a fundamental distortion of the funding base, leading to Zombie Science. There is a desire by some to hide failure wherever possible. The old notion of science as objective enquiry is disappearing. The other side of the distortion is that corporations want particular results. There is a lot of pressure to give it to them. Bye, Alex

I have a different perspective, but from research into IT. I think one of the big problems is the pressure within universities to get research grants and to publish lots of papers to meet the statistics. This means that academics tend not to challenge the popular research directions. Imagine doing research into autoimmunity and ME and trying to publish: many of the best journals will get psychiatrists to do the review, as they claim to have the expertise. When, as a researcher, you are judged every year against the publications you have in tier 1 journals, it's not an encouragement to work in that area.

Researchers in industrial research often have more freedom, as there is not the pressure to publish and get research grants; however, I can't think of many industrial research labs. R&D functions have a very different role, which is to develop product.
 

user9876

Senior Member
Messages
4,556
This is one of the things that motivates me to write letters, e-letters, etc.

Definitely a good thing to do; looking back at some papers, you have picked them up on some very important points (as have others).

Unfortunately, White et al are getting away with dismissing criticism from patients as personal attacks: again, more unjustified spin. Perhaps that's a sign of their weakness, since they can't argue the points.
 

biophile

Places I'd rather be.
Messages
8,977
I fear that much of the research into CFS is some sort of "post-modern science".

The posts from user9876 and alex3619 also reminded me of the neuroscientist Ramachandran: "too much of the Victorian sense of adventure [in science] has been lost" [...] "But where I'd really like to go is back in time. I'd go to the Victorian age, before science had professionalized and become just another 9–5 job, with power-brokering and grants nightmares. Back then scientists just had fun. People like Darwin and Huxley; the whole world was their playground." (http://en.wikipedia.org/wiki/Vilayanur_S._Ramachandran).

CBT/GET proponents have done a good job at dismissing criticism without adequately addressing it, convincing outsiders that critics have nothing important to say, and downplaying the fact that their own research cannot demonstrate much if any objective benefit (despite presumed benefits and supposed recoveries, which influence policy and opinion).
 

Dolphin

Senior Member
Messages
17,567
As people are taught as children, "sticks and stones may break my bones, but words will never hurt me": if one makes a point that is fair, valid, and focused on the facts, it's not the end of the world if it is dismissed. If you've been on internet forums a while, you likely have taken bigger hits. So I wouldn't let that put people off.

These days CBT/GET proponents aren't necessarily annoyed by internet criticism as they've already had plenty. What they won't like is criticism in front of their peers with letters and e-letters. That's one of the reasons I think it's worth doing.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I have a different perspective, but from research into IT. I think one of the big problems is the pressure within universities to get research grants and to publish lots of papers to meet the statistics. This means that academics tend not to challenge the popular research directions. Imagine doing research into autoimmunity and ME and trying to publish: many of the best journals will get psychiatrists to do the review, as they claim to have the expertise. When, as a researcher, you are judged every year against the publications you have in tier 1 journals, it's not an encouragement to work in that area.

Researchers in industrial research often have more freedom, as there is not the pressure to publish and get research grants; however, I can't think of many industrial research labs. R&D functions have a very different role, which is to develop product.

Hi user9876, I agree with this too. The problems are multifactorial and layered; I see it as a mutually reinforcing system. Other factors I didn't discuss include media limitations. The rise of the internet has seen budgets for journalism decline; investigation is becoming more superficial and is often overlooked in favour of regurgitating press releases: the "churnalism" discussed in the articles that have been posted lately.

A decline in public understanding of science might also be involved, but I am less sure about this: anti-nuclear protestors back in the 1950s (I think) walked around with placards like "I don't want no damned atoms around here!". I suspect science education was better for a while in the late twentieth century, but it's in decline again in many countries (though not in Asia).

Industrial research also has to show results, though. It may be less competitive (as in grants), but it's still competitive in the long run. In the USA, total research funding has gone up, but funding for pure research is in massive decline: a very great percentage is focussed on product development.

Economic rationalism is something I keep coming back to. Things began to change in the 1980s, and one of the factors was the drive to cut waste. An overemphasis on cutting costs is bad if universally applied. It's important not to waste money, but an excessive drive to cut non-productive research can restrict work to short-term goals or shift the emphasis to how good the research looks: spin. In other words, if it looks good it's OK, so spin has a managerial and financial benefit.

Then there is product testing, especially of pharmaceuticals. The scientists have non-disclosure agreements. A company can fund a whole lot of studies, then pick the most favourable for publication. This practice is one of those that led to the term Zombie Science being coined. Combine this with editorial and reviewer bias in journals, and what is published tends to be even more distorted. I also suspect that, with modern demands on the time of medical professionals, too many are reading abstracts instead of papers. Who has time for more if they have a busy medical practice? Abstracts are frequently distorted and sometimes misleading; reading the paper is critical. The abstract just lets you know the paper might be interesting. Anyone who does research should know that, but too many who talk about science seem to be unaware of it.

I am fairly sure there are other factors that are slipping my mind at the moment.

Bye, Alex
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Definitely a good thing to do; looking back at some papers, you have picked them up on some very important points (as have others).

Unfortunately, White et al are getting away with dismissing criticism from patients as personal attacks: again, more unjustified spin. Perhaps that's a sign of their weakness, since they can't argue the points.

Hi, if you look at the history of pseudoscience, this is a hallmark feature: I see it again and again. They use persuasive rhetoric to combat criticism, not reason and evidence. Bye, Alex

PS This is similar to what biophile said in post 16.
 

Dolphin

Senior Member
Messages
17,567
Then there is product testing, especially of pharmaceuticals. The scientists have non-disclosure agreements. A company can fund a whole lot of studies, then pick the most favourable for publication. This practice is one of those that led to the term Zombie Science being coined. Combine this with editorial and reviewer bias in journals, and what is published tends to be even more distorted. I also suspect that, with modern demands on the time of medical professionals, too many are reading abstracts instead of papers. Who has time for more if they have a busy medical practice? Abstracts are frequently distorted and sometimes misleading; reading the paper is critical. The abstract just lets you know the paper might be interesting. Anyone who does research should know that, but too many who talk about science seem to be unaware of it.
There is a drive for RCTs to be registered, with some journals saying they won't publish RCTs that haven't been registered. This should make some forms of selective reporting harder.

A more recent phenomenon is a drive to register reviews and meta-analyses so that again there can't be selective reporting.
 

Sean

Senior Member
Messages
7,378
It can't merely be solved by somehow forcing scientists themselves to be more honest; we need to increase general societal knowledge of how science works, its importance and so forth, such that scientists are not compelled to fall into this marketing trap in the first place.
A huge chunk of the problem we face is that the psych crowd are telling the political and economic elites (and the 'moral' thugs) what they want to hear, so the psychs get lots of funding and political protection.
 