• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Media spinning of research starts with the researchers themselves: A study

Messages
3,263

Jeckylberry

Senior Member
Messages
127
Location
Queensland, Australia
Journos all work to put an angle on a story. A biased report probably just hands it to them and makes it easier to draw a conclusion. The researchers obviously found a relationship between the two. Did they offer any insight into the process?
 

SOC

Senior Member
Messages
7,849
Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study

This article concludes that one of the major determinants of whether research results are "spun" in the media is whether they were spun in the abstract of the study itself.

In other words, the "spinning" most commonly starts with the researchers, and is then reproduced in media articles.

Edit: Thread title should say "Media" not "Medica", I really should check my typing more carefully!
I can easily see how this happens. A responsible scientist discusses the limitations of the research in the abstract and doesn't dramatize or exaggerate the implications of the result. That kind of cautious realism is not interesting to journalists. The abstracts that make dramatic claims without caveats are much more likely to catch the eye of a journalist unwilling to dig into the paper itself to know what's really going on. Don't think that poor scientists who write those kinds of unclear abstracts don't know exactly what they're doing. :rolleyes:
 
Messages
3,263
Did they offer any insight to the process?
If you click on the title, you get a link to the full text of the article.

They don't have much to say about whether the spinning of results in abstracts is deliberate or not. All they say in the conclusions is:

... reviewers and editors of published articles have an important role to play in the dissemination of research findings and should be particularly aware of the need to ensure that the conclusions reported are an appropriate reflection of the trial findings and do not overinterpret or misinterpret the results.
 
Messages
3,263
This figure illustrates the main findings quite nicely (and is easy to take in for anyone currently feeling brain-fogged):
[Figure 2 from the paper: journal.pmed.1001308.g002.png]
 

SOC

Senior Member
Messages
7,849
... reviewers and editors of published articles have an important role to play in the dissemination of research findings and should be particularly aware of the need to ensure that the conclusions reported are an appropriate reflection of the trial findings and do not overinterpret or misinterpret the results.
Duh. This was heavily emphasized to me from the time I did my first review of an undergraduate classmate's research report, through every paper I got from a journal to review. It's one of the primary things you look for when doing a review -- do the conclusions accurately reflect the results? What's the point of a review if you don't do something as simple and basic as that? It's a real shame that something as absolutely fundamental as this has to be written up as research.

It appears this is about medical research. Is the quality of science or science education that much worse in medicine than in other fields? If so, why? This is not about the type of research, but about scientific integrity, which shouldn't be different among branches of science. Different fields have different degrees of accuracy and/or precision in the kinds of results that can be achieved, I understand that. But this is about fundamentals of science and basic integrity. That should be universal in scientific research.
 

barbc56

Senior Member
Messages
3,657
Sometimes the PR department doesn't even run the press release by the researchers. This happened in Colorado, where the PR department's release stated the opposite of the study's actual results, and the researchers got a lot of flak because of the way it was presented.

The media will sometimes report the PR release word for word.

Barb
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
This article concludes that one of the major determinants of whether research results are "spun" in the media is whether they were spun in the abstract of the study itself.

In other words, the "spinning" most commonly starts with the researchers, and is then reproduced in media articles.

This is a really interesting study, but I don't wholly agree with the conclusion, or at least not with the implication that removing spin from abstracts would largely fix the problem (though it would obviously help stop the study being misinterpreted by other researchers, clinicians, patients, etc. who read the abstract for themselves).

What may well be driving this is not the abstract spin per se, but the researchers' intention to spin the results. That intention shows up first in the abstract, but is likely to appear again in the press release and press briefing, as well as in any private chats with journalists. If journals did their job properly and ensured that abstracts fairly reflected the research, I'm sure that spin in the media would fall. But I also suspect that researchers would put more effort into other ways of communicating with the media, encouraging spin in reporting all the same.

(I'm sure there's some fancy scientific word to describe what I've suggested - it's not exactly confounding - but that what's driving this is something deeper than the abstract spin itself.)
 
Messages
3,263
@Simon, the article does say that they used the abstract as a proxy because it was easier to quantify spin there than in the full article. But their conclusions were general to the research study as a whole, not specific to the abstract.
 
Messages
13,774
There's also this study, which gave the PACE press release a clean bill of health for spin and found spin only in the media coverage:

Objective To identify the source (press releases or news) of distortions, exaggerations, or changes to the main conclusions drawn from research that could potentially influence a reader’s health related behaviour.

Design Retrospective quantitative content analysis.

Setting Journal articles, press releases, and related news, with accompanying simulations.

Sample Press releases (n=462) on biomedical and health related science issued by 20 leading UK universities in 2011, alongside their associated peer reviewed research papers and news stories (n=668).

Main outcome measures Advice to readers to change behaviour, causal statements drawn from correlational research, and inference to humans from animal research that went beyond those in the associated peer reviewed papers.

Results 40% (95% confidence interval 33% to 46%) of the press releases contained exaggerated advice, 33% (26% to 40%) contained exaggerated causal claims, and 36% (28% to 46%) contained exaggerated inference to humans from animal research. When press releases contained such exaggeration, 58% (95% confidence interval 48% to 68%), 81% (70% to 93%), and 86% (77% to 95%) of news stories, respectively, contained similar exaggeration, compared with exaggeration rates of 17% (10% to 24%), 18% (9% to 27%), and 10% (0% to 19%) in news when the press releases were not exaggerated. Odds ratios for each category of analysis were 6.5 (95% confidence interval 3.5 to 12), 20 (7.6 to 51), and 56 (15 to 211). At the same time, there was little evidence that exaggeration in press releases increased the uptake of news.

Conclusions Exaggeration in news is strongly associated with exaggeration in press releases. Improving the accuracy of academic press releases could represent a key opportunity for reducing misleading health related news.

http://www.bmj.com/content/349/bmj.g7015
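For anyone wanting to check the numbers, the first odds ratio quoted above (6.5) can be roughly reconstructed from the headline percentages. This is just a back-of-envelope sketch of my own, not the paper's exact calculation (which used the raw story counts, hence the slight difference):

```python
def odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Unadjusted odds ratio computed from two proportions:
    odds of the outcome in the exposed group divided by odds
    of the outcome in the unexposed group."""
    return (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))

# 58% of news stories exaggerated advice when the press release did,
# vs 17% when the press release did not.
print(round(odds_ratio(0.58, 0.17), 1))  # ~6.7, close to the reported 6.5
```

The paper's 6.5 comes out a bit lower because it was computed from the actual counts rather than the rounded percentages quoted in the abstract.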
 
Messages
3,263
@esther, interesting paper! Compared to the PLOS paper, this one seems to have a much more limited definition of spin (it had to involve inappropriate advice or statements of causation).
The PLOS paper in the current discussion defined spin thus:
paper said:
We considered “spin” as being a focus on statistically significant results (within-group comparison, secondary outcomes, subgroup analyses, modified population of analyses); an interpretation of statistically nonsignificant results for the primary outcomes as showing treatment equivalence or comparable effectiveness; or any inadequate claim of safety or emphasis of the beneficial effect of the treatment.
PACE would meet that definition with no trouble at all!