Dr David Tuller: Lancet Paper Claims “Exercise” Should be “Prioritized” in Long COVID Rehabilitation

Countrygirl


https://virology.ws/2025/08/16/tria...zaZ2RpjSJ8AvrEM8pg_aem_3Ijpjpap-_Lt2kVMoRinrw


Trial By Error: Lancet Paper Claims “Exercise” Should be “Prioritized” in Long COVID Rehabilitation

By David Tuller / 16 August 2025

By David Tuller, DrPH
Added: On X, @mecfsskeptic has posted a very useful thread explaining how loosely the investigators applied the meaning of “Long COVID” in accepting trials for their meta-analysis.

A Lancet journal, eClinicalMedicine, has just published a paper called “Effects of therapeutic interventions on long COVID: a meta-analysis of randomized controlled trials.” The study reviewed randomized controlled trials (RCTs) that tested seven types of interventions (“exercise training, respiratory muscle training, telerehabilitation, transcranial direct current stimulation (tDCS), olfactory training, palmitoylethanolamide with luteolin (PEA-LUT), and steroid sprays”) in adults identified as having Long COVID. Primary outcomes included cardiopulmonary function, exercise capacity, fatigue, and olfactory recovery.
And here’s how the investigators summarized the most significant results: “Exercise training should be prioritized for improving cardiopulmonary function and exercise capacity in Long COVID, supported by high-certainty evidence.”

Not surprisingly, some prominent members of the biopsychosocial ideological brigades promoted this finding on social media. Professor Alan Carson, a neuropsychiatrist at the University of Edinburgh, linked to the study on X and wrote: “Effects of therapeutic interventions on long COVID: a meta-analysis of randomized controlled trials – eClinicalMedicine. No surprises here but good to see. What many of us who folliwed [sic] the evidence have been suggesting”

But did the meta-analysis prove much of anything, as Professor Carson seems to believe?

Meta-analyses are based on the premise that combining data from lots of studies can yield robust and convincing collective findings, given the greater power of larger numbers. That could be true if like is being compared to like, and if each study is itself robust. If not, then meta-analysis results are hard to interpret and can just muddy the waters further.
In this case, the investigators identified 51 studies that met the study’s selection criteria, with 4026 participants in total. One problem here is that the investigators accepted the broadest, most porous definitions of Long COVID and then lumped everyone together. Long COVID is a useful, patient-generated moniker to describe an unprecedented worldwide phenomenon. But it is an umbrella term covering an enormous range of clinical presentations that are presumably caused by a broad range of possible pathophysiological mechanisms.

The criteria for identifying research participants in these 51 studies varied greatly. It doesn’t take a genius to recognize that effective research into Long COVID ultimately requires investigators to carefully characterize sub-groups of patients with related issues rather than dumping everyone into the same huge bucket. When people with all sorts of different medical complaints are analyzed as if they all have the exact same condition, the result is just a mish-mash of numbers.
Moreover, people generally recover from viral infections. That means that many or most people with persistent symptoms at one, two, or even three months after an acute bout of COVID-19 (as in some of these studies) are experiencing a post-viral syndrome that is likely to self-resolve within several months or, in some cases, a year or more. A good example of this phenomenon might be Professor Paul Garner, who suffered nasty post-COVID-19 symptoms for what appeared to be six or seven months but routinely attributes his recovery to the power of his strong manly cognitions, not to the body’s natural healing processes.

A second problematic reality for this meta-analysis is that the research base is highly suspect, according to the investigators themselves. To determine the likelihood that trial responses were impacted by bias, they used Cochrane’s Risk of Bias Tool, noting that it “assesses seven domains of bias: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias.” …