• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

Understanding Health Research - new online tool for assessing scientific papers

sarah darwins · Senior Member · Messages: 2,508 · Location: Cornwall, UK
This is a bit above my pay grade but I'm posting it because the scientists and statisticians among you may find it interesting. It seems very on-point for a lot of what gets discussed here in the research-related threads.

I'm not competent to evaluate this evaluation tool (!) but perhaps someone with the necessary scientific chops would like to give it a whirl to assess, oh, I don't know, PACE or something.

Apparently it takes about 30 minutes to run for a given study, and I think you need the original research paper to hand.

****************

The Guardian today reports on a new online tool, launched by the Medical Research Council/Chief Scientist Office Social and Public Health Sciences Unit (MRC/CSO SPHSU) at the University of Glasgow, aimed at helping the public evaluate scientific research.

Guardian article: https://www.theguardian.com/science...-that-helps-the-public-decode-health-research

The reality is that studies can be notoriously difficult to decode in isolation. The mere fact that a study exists showing a particular result is not in itself evidence that the result is robust or true. It is crucial to be aware that not all studies are created equal, and some are of much higher quality than others. This is a particular concern in the medical field, where confounding factors frequently skew conclusions. For example, studies with only a small number of participants are often statistically underpowered, and results from these might give a misleading picture of reality.
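To see what "statistically underpowered" means in practice, here is a minimal simulation sketch (not part of the UHR tool; the effect size, sample sizes and significance threshold are illustrative assumptions). It estimates how often a study with a genuinely real but modest treatment effect actually reaches statistical significance:

```python
import random
import statistics

def two_sample_t(a, b):
    # Welch's t statistic for two independent samples
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def estimated_power(n, effect, trials=2000, t_crit=2.05):
    # Fraction of simulated studies whose |t| exceeds the critical
    # value (roughly p < 0.05), i.e. an estimate of statistical power.
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        if abs(two_sample_t(treated, control)) > t_crit:
            hits += 1
    return hits / trials

random.seed(1)
print(f"n=15 per group:  power ~ {estimated_power(15, 0.3):.2f}")
print(f"n=200 per group: power ~ {estimated_power(200, 0.3):.2f}")
```

With a modest true effect (0.3 standard deviations), a 15-per-group study detects it only a small fraction of the time, while a 200-per-group study detects it most of the time; the small study's occasional "significant" results are also prone to exaggerating the effect.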

Addressing the difficulties in interpreting research results has therefore been a driving motivation for a project undertaken by the Medical Research Council/Chief Scientist Office Social and Public Health Sciences Unit (MRC/CSO SPHSU) at the University of Glasgow. The result is Understanding Health Research (UHR), a free service created with the intention of helping people better understand health research in context.

Essentially, UHR functions as an interactive field guide to evaluating the strengths and weaknesses of any given health paper. In addition, it gives clear and understandable explanations of important considerations such as sampling, bias, uncertainty and replicability. This has the potential to be invaluable for improving public understanding of science and, ultimately, our collective well-being. After all, as Dr Shona Hilton, deputy director of MRC/CSO SPHSU, says: “without the tools to assess contradictory health messages and claims about new discoveries and treatments, the public are vulnerable to false hope, emotional distress, financial exploitation and serious health risks.”

The actual tool is found here:

http://www.understandinghealthresearch.org
 

sarah darwins · Senior Member
One note: there are only a few comments so far below the Guardian article, but they're all pretty skeptical about the usefulness of this tool. It would be nice to get an assessment from someone who knows their way around research.
 

A.B. · Senior Member · Messages: 3,780
It's good that there is awareness and discussion about this problem, but I'm not impressed with the tool. Merely listing the good and bad points of a particular study is a very superficial way to evaluate it. The user can easily be misled into believing that strengths can make up for weaknesses. That a study was peer reviewed, that conflicts of interest were declared, and that it was funded by public sources and conducted in a university setting doesn't make up for a lack of blinding. It only takes one source of significant bias or error to produce misleading results.