Cytokine responses to exercise and activity in patients with chronic fatigue syndrome: Case control

notmyself

Senior Member
Messages
364
Summary of Activities of TGF-beta:
  • Causes collateral damage in infections
  • Causes the growth/changes in tissue
  • Decreases acetylcholine
  • Decreases slow wave or deep sleep
  • Decreases muscle regeneration
  • Decreases the action of the vitamin D receptor
  • Increases free radicals
  • Can decrease bone density
  • Inhibits proliferation of most other cell types
  • Suppresses red blood cell formation and lymphocytes (T and B cells)
  • Suppresses antibody production
  • Suppresses cytotoxic T cell (CD8) and natural killer cell activity, which can allow viral infections to get out of control
  • Deactivates macrophages
  • Promotes oral tolerance
  • Suppresses inflammation
  • Promotes wound healing and new blood vessel formation (angiogenesis)
  • Induces local inflammation and fibrosis
  • Stimulates extracellular matrix deposition
  • Promotes switch to IgA (R)
  • Can increase cancer growth
  • Causes negative changes in the airways
  • Can benefit cognitive function (when very mildly elevated)
  • Can activate EBV (Epstein-Barr virus) (R)
 

Valentijn

Senior Member
Messages
15,786
Why include patients with phobias at all? Unless the assumption is that CFS is an exercise phobia:
We excluded those with comorbid psychiatric disorders (with the exception of simple phobias) ....


How sure are they that the allowed meds don't have an impact on the immune system? Paracetamol certainly acts as an anti-inflammatory, which would implicate immune involvement. And I have to wonder why their non-depressed patients are on antidepressants:
We excluded participants (patients and controls) if they had regularly been taking any prescribed medications in the past two weeks that might affect the immune system or exercise challenge. We allowed the use of selective serotonin reuptake inhibitors, tricyclic antidepressants and paracetamol.


How do you judge that the thyroid disease is truly in remission, and not the cause of ongoing fatigue?
Participants could have certain co-morbid medical conditions if in remission (e.g. hypothyroidism if biochemically euthyroid on thyroxine replacement therapy) [1].


This is actually pretty good:
An aerobic exercise challenge was completed on day 14. On this day blood was drawn before, immediately after, and 3 hours after exercise. On day 16 (two days after the exercise) a final blood sample was taken at the hospital and follow-up questionnaires were administered. Participants were studied 48 hours after the exercise, rather than 72 hours afterwards as in the pilot study [11], since our clinical impression suggested that CFS patients perceive most post-exertional symptoms at this time.


All of this seems rather dodgy:
Ad hoc self-rated five item Likert scales were used to measure the effect of activity/exercise on delayed fatigue, pain and malaise (unwell) at the same times. Response options ranged from ‘strongly disagree’ through ‘neither agree nor disagree’ to ‘strongly agree’. Fear of exercise was determined on day 14 using the Tampa scale for kinesiophobia for fatigue [28], with healthy volunteers being asked to recall the last time they felt extreme tiredness that was not related to a medical condition, being pregnant, or dehydration.


Why such a big gap? And why such a big delay in publishing?
The TGF-β samples were analysed in three batches by two laboratory technicians, with one technician analysing the first batch of samples in 2009 and the other technician analysing the second and third batches of the samples in 2011.


Why use the mean instead of the first result, for one batch?
For quality assurance, in the first batch, where an analyte looked to be a possible outlier we re-analysed the sample and took the mean of the two assays. These values did not alter the results so we did not repeat this with the remaining batches.


If symptoms didn't increase after the exertion, ME/CFS is an unlikely diagnosis:
Cases who agreed that their fatigue, malaise or pain increased after the exercise test did not have significantly greater relative changes in TNF, IL8 or IL-6 protein or RNA either immediately, 3 hours or 2 days after the exercise, compared to those who disagreed.


Perhaps the 2-year delay in batch analysis was driven by a need to find some fatigue patients to water down the big differences found in 2009:
This shows that batch one TGF-β concentrations (both for cases and controls) were significantly higher than the concentrations for batches 2 and 3.


They made dozens of comparisons with few participants, so it's impressive anything was statistically significant with the deck (deliberately?) stacked against finding significance:
After correcting for multiple analyses, TGF-β protein was the only cytokine protein or RNA that showed significantly different values between CFS and control groups.
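
For a rough sense of how stacked the deck was (illustrative numbers only, not taken from the paper), here's a quick Python sketch of what happens to the per-test threshold once you correct for that many comparisons:

# Illustration only: how multiple comparisons inflate the family-wise error
# rate, and how stringent a Bonferroni correction then becomes. The test
# counts (16, 48) are assumptions, not figures from the paper.
alpha = 0.05
for k in (1, 16, 48):
    fwer = 1 - (1 - alpha) ** k      # chance of >=1 false positive if uncorrected
    per_test = alpha / k             # Bonferroni-corrected per-test threshold
    print(f"{k:2d} tests: P(>=1 false positive) = {fwer:.2f}, "
          f"per-test threshold = {per_test:.4f}")

With groups this small, very little will clear a threshold of roughly 0.003, so it really is notable that anything survived.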


I would suggest that this study is too much of a mess to support any conclusions:
A systematic review of the association between circulating cytokines and CFS showed that only TGF-β was elevated in the majority of case control studies [10]; a finding we were unable to replicate. Another systematic review concluded that cytokine concentrations were not abnormal after exercise in CFS [9]; a finding we replicated. We suggest that circulating levels of cytokines are unlikely to be important in the pathophysiology of CFS.


So Peter White can share the blame for some poor design aspects and for providing fatigue patients to the study. But his involvement otherwise seems very peripheral:
We thank the Barts Charity for funding this work. We would also like to thank Professor Anthony Pinching for co-leading the grant application. PDW, LVC, MB and VV designed the study, which was run by LVC. PDW and GM recruited patients from their clinics. CM and EW undertook RNA analyses. MB oversaw cytokine analyses. MS, LVC and NT analysed the study. All authors contributed to and approved the manuscript. We are particularly grateful for discussions with Megan Roerink regarding an earlier draft of this paper, and we are also grateful to our reviewers for their wise advice.
 

Valentijn

Senior Member
Messages
15,786
29% of patients were on anti-depressants, and several more were on anti-histamines and thyroid hormones:
[Attached image: demographics.jpg]



How lovely, the average patient is "recovered" on the SF36-PF by White's definition from PACE :lol::
[Attached image: scores.jpg]



Note that the VO2 wasn't a VO2max. The patients reached the same threshold as the controls, but in a much shorter amount of time. I'm not sure why they are marking people as "completing" it or not, since that's only possible in a maximal CPET:
[Attached image: VO2.jpg]



This is dodgy as fuck. They're using median values instead of means, and basically "no detectable value" (given different numerical values) is the most common median, so the patients and controls end up with the same scores. And no p-values are given here:
[Attached image: medians.jpg]
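
To see how that plays out, here's a toy Python sketch (made-up numbers, not the study's data): when most samples in both groups sit below the assay's detection limit and get recorded as the same substitute value, the group medians collapse to that substitute even though the means differ.

# Toy illustration of censoring at a detection limit: both groups end up with
# the same median (the substitute value) even though their means differ.
# The limit and all concentrations are invented for illustration.
import numpy as np

DETECTION_LIMIT = 1.0                 # assumed assay lower limit
SUBSTITUTE = DETECTION_LIMIT / 2      # a common convention for non-detects

patients = np.array([0.2, 0.3, 0.4, 0.6, 6.0, 9.0])   # hypothetical values
controls = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 2.0])

def censor(values):
    """Replace values below the detection limit with the substitute value."""
    return np.where(values < DETECTION_LIMIT, SUBSTITUTE, values)

for name, group in (("patients", patients), ("controls", controls)):
    c = censor(group)
    print(f"{name}: median = {np.median(c):.2f}, mean = {np.mean(c):.2f}")

Both medians come out as 0.50 here, while the means are roughly 2.8 versus 0.75.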



This doesn't support their vague claim of operator error, since the same person supposedly to blame for a faulty batch 1 also ran batches 2 and 3. I'm also suspicious of the dearth of controls in batch 1, since patients were supposed to recruit them from friends and non-blood family members if possible:
[Attached image: batches.jpg]


Anyone know where the protocol is published? I'm curious to see if using medians was part of the original plan, and when they started asking patients to bring their own controls.
 

BurnA

Senior Member
Messages
2,087
I wonder what Alan Carson or the SMC will say now. A little bit awkward.


Alan Carson, Reader in Neuropsychiatry, University of Edinburgh, said:

“Researchers from Stanford University have reported a link between CFS and a number of immune-system cytokines, whose concentrations in the blood correlated with the disease’s severity. The laboratory aspects of the study are well conducted and emerging laboratory techniques have allowed this study to be conducted with greater precision than previous reports.

“What is less clear is who the patients were and the description of how they came to be recruited, their diagnostic work up and concomitant medication use is much less persuasive. That said, I wouldn’t doubt the main findings that there are alterations in cytokine response in the condition and that TGF-β in particular is elevated. There have been reports on this for the previous 2 decades and a meta-analysis of 38 papers in 2015 demonstrated it as a consistent finding (Blundell et al 2015).

“As that 2015 paper commented, what is less clear is what that means – one of the signature features of ME/CFS is post exertional malaise; patients feel awful after activity. The previous studies of TGF-β showed its levels did not correlate with this core phenomenon, which begs the question of what the inflammatory role actually is.

“This study therefore confirms what was already known but doesn’t take the field forward. In particular as it was cross sectional in nature we don’t know whether the findings were the cause or the effect of living with ME/CFS or indeed the result of a confounding factor – e.g. antidepressant medication may cause increases in TGF-β and sleep disturbance has an effect on cytokines.
 

Esther12

Senior Member
Messages
13,774
The way they think that the assay distorted their results was an odd thing to have in a paper.

There did seem to be some unusually good things about this study too. I'm not really following the cytokine research enough to comment on the results. It's understandable to have people just dismiss anything White does at this point, but it's probably worth being a bit cautious too. He might be right on this. We don't want overly strident criticism of other studies to distract from concerns about PACE.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
So questionable case-control matching (within testing batches) and a lack of statistical power mean no effect?

A meta analysis of the 11 studies so far will still likely suggest that TGF-β is associated with CFS.

It's interesting that they claim they noticed a bimodal pattern of TGF-β levels, which leads to speculation as to "true" vs "false" CFS cases, given that they used the non-specific CDC "empirical criteria" (which found 10x more patients in CDC population-based studies compared to the original CDC "Fukuda" criteria) and given the relatively mild mean scores on the SF-36 and fatigue scales compared to other studies. Perhaps there are some non-Fukuda (and non-CCC) cases that are skewing the results?
 

Sean

Senior Member
Messages
7,378
Twenty-four patients fulfilling Centers for Disease Control criteria for CFS
The criteria you use when you want to use the discredited Oxford criteria, but don't want to be seen to be using the discredited Oxford criteria.

Chronic fatigue syndrome (CFS) is characterized by fatigue after exertion.
When I read stuff like this it seems like they have not advanced a single step in the last 30 years. They are still obsessed with a single symptom.

How lovely, the average patient is "recovered" on the SF36-PF by White's definition from PACE :lol::
Oops. Again. :rolleyes:
 

notmyself

Senior Member
Messages
364
Isn't this TGF-beta testable in public labs as well? Can we all test ourselves for it? I'm considering doing it, especially because it correlates with disrupted sleep, muscle atrophy and fatigue, which I have.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
TGF-β was measured at six different time points. The authors claim the results were skewed by variance due to one of the batches being run in 2009, with the subsequent batches run by a second technician in 2011.

However, they only provide the data for one of these time points, in table 7. I would like to see the equivalent tables for the other time points.

Secondly, the description of batches/technicians in table 7 contradicts what the authors stated here:

The TGF-β samples were analysed in three batches by two laboratory technicians, with one technician analysing the first batch of samples in 2009 and the other technician analysing the second and third batches of the samples in 2011.

They state:
We were unable to ascertain the differences in laboratory processing that led to the differences between batches, but assume that the difference was due to using different centrifuge times, which might affect TGF-β release from platelets, leading to differences in TGF-β concentrations

Which seems plausible.

Thirdly, if the 2009 results are simply discarded from the analysis, or over-corrected for in the regression, then there is a subsequent loss of statistical power. The authors only discuss the original power analysis, not the effect of discarding this data.

The authors do however acknowledge:
This study had some limitations. With 24 cases in the patient sample it is a relatively small sample particularly for the number of cytokines we wished to investigate; it is likely that we were underpowered.

(And even less power if discarding data due to methodological issues mentioned above!)
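
To put a rough number on that, here's a small statsmodels sketch (the group sizes and effect size are my own assumptions, not the study's batch sizes):

# Sketch only: power of a two-sample t-test at an assumed large effect
# (Cohen's d = 0.8), before and after dropping samples. Group sizes are
# illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (24, 16):   # 24 = full sample; 16 = assumed size after discarding a batch
    p = analysis.power(effect_size=0.8, nobs1=n_per_group, alpha=0.05, ratio=1.0)
    print(f"n per group = {n_per_group}: power = {p:.2f}")

Even with a generously large assumed effect, losing a third of the samples takes a noticeable bite out of the power.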

They also committed the statistical faux pas of using a pilot study result to guide the power analysis, which is questionable, as discussed here:
http://journal.frontiersin.org/article/10.3389/fpsyg.2017.01184/full
http://blogs.discovermagazine.com/neuroskeptic/2017/07/29/underpowered-studies-justified/
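
The gist can be shown with a quick simulation (my own sketch, not taken from either article): even when the true effect is fixed, effect-size estimates from small pilots scatter so widely that a power analysis built on one of them is close to guesswork.

# Sketch: variability of Cohen's d estimates from small pilot studies.
# The true effect, pilot size and number of simulations are assumptions.
import numpy as np

rng = np.random.default_rng(0)
TRUE_D, N_PILOT = 0.5, 10
estimates = []
for _ in range(2000):
    a = rng.normal(TRUE_D, 1.0, N_PILOT)   # simulated "patients"
    b = rng.normal(0.0, 1.0, N_PILOT)      # simulated "controls"
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    estimates.append((a.mean() - b.mean()) / pooled_sd)

low, high = np.percentile(estimates, [5, 95])
print(f"true d = {TRUE_D}; 90% of pilot estimates fall between {low:.2f} and {high:.2f}")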

All in all, a bit of a mess and (as always), should not be interpreted on its own.
 

user9876

Senior Member
Messages
4,556
CFS cases had higher TGF-beta protein levels compared to controls at rest (median (quartiles) = 43.9 (19.2, 61.8) versus 18.9 (16.1, 30.0) ng/ml) (p = 0.003), and consistently so over a nine-day period. However, this was a spurious finding due to variation between different assay batches.

In other words they weren't competent enough to run a trial and wasted the money.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
The basic idea behind this study, of measuring cytokines before and after exertion, is a good one (as a few others have noted). But the fundamental flaw was to have a study with 16 measures and only 24 subjects: that's hugely underpowered, as it could only detect a massive effect. It's fanciful to presume that such a small study could trump studies an order of magnitude bigger (Hornig 2015, Montoya 2017) and write off a pathological role for cytokines in mecfs*.
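
To put that in concrete terms (my own assumptions, not the study's power calculation), here's what a two-sample t-test could actually detect with 24 per group once the alpha is corrected for 16 outcomes:

# Sketch only: minimum detectable effect size (Cohen's d) at 80% power with
# 24 subjects per group and a Bonferroni-corrected alpha for 16 outcomes.
# All inputs are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

d_needed = TTestIndPower().solve_power(
    effect_size=None, nobs1=24, alpha=0.05 / 16, power=0.8, ratio=1.0)
print(f"minimum detectable Cohen's d: {d_needed:.2f}")

That works out to a difference of more than one standard deviation between group means, which is the kind of massive effect I mean.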

On top of that, they selected fairly healthy patients, 40% of whom were meeting national guidelines for physical exercise (thanks for the info and analysis @Valentijn), making the chances of finding any effect smaller still.

Then there's the assay fiasco: sorry, I can't remember who pointed out that they gave no reason for preferring technician 2 over tech 1, but it matters little. All batches need to be run together, not years apart, so the only way to rescue the study would be to run all batches again, with internal controls to check the assay is working properly. Instead, they repeated the tech 1 results they didn't like but failed to repeat those that they did like.

There is no shortage of badly-done biomed mecfs studies but I don't think we need another one.

* Jonathan Edwards commented after the Hornig paper that it was more likely that cytokine abnormalities are a sign of action elsewhere rather than driving the illness, and I'm not convinced they play a causal role. I'd like good research to find out.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Then there's the assay fiasco: sorry, can't remember who pointed out they gave no reason for preferring technician 2 over tech 1 but it matters little. All batches need to be run together so the only way to rescue the study would be to run all batches again, with internal controls to check the assay is working properly. Instead, they repeated the tech 1 results they didn't like but failed to repeat those that they did.

During my 100+ "experiments" as an undergraduate, I learned how easy it is to screw stuff up. If you screw it up, you repeat the whole thing from scratch (and you explicitly mention in the discussion that you screwed it up and did it again, and state anything you learned from the screw-up).

Most studies make sure the technician is competent and perform assays in multiplicate, to make sure the finding is consistent and reproducible. The fact that they didn't do this suggests, well, that they're amateurs.
 

Sidereal

Senior Member
Messages
4,856
TGF-β results seem pretty consistent thus far, which makes it a bit suspect that it was the one positive result from this paper that they then declared to be a false positive.

Yeah, very dodgy.

I suspect the TGF-β elevation is real, given how many studies found it to be the case, but I no longer believe the Lipkin cytokine short vs long duration of illness paper, since the Montoya study failed to replicate those findings. Montoya also found no differences in patients (as a whole) vs controls on any of the measured cytokines except TGF-β; it was only in post hoc analyses of severity that some interesting patterns emerged.
 

A.B.

Senior Member
Messages
3,780
There did seem to be some unusually good things about this study too. I'm not really following the cytokine research enough to comment on the results. It's understandable to have people just dismiss anything White does at this point, but it's probably worth being a bit cautious too. He might be right on this. We don't want overly strident criticism of other studies to distract from concerns about PACE.

We can just ignore this study. Twenty-something participants? Montoya had 186.

TGF-beta is probably playing some role, but it's hard to imagine that it's important because it's decreased in mildly affected patients.
 

ljimbo423

Senior Member
Messages
4,705
Location
United States, New Hampshire
I was researching Th1/Th2 cytokines the other day, which led me to hepatitis C virus (HCV) infection. I was surprised to see that the studies show all different kinds of cytokine profiles.

So I just did another quick search and clicked on the first 3 studies that came up. One said HCV was primarily a Th1-driven disease, the next said it was a Th2-driven disease, and the third said it was both a Th1- and Th2-driven disease! So CFS is not the only disease with cytokine profiles all over the place.

How many people would say HCV is not a physical disease? None! It is obviously caused by a virus, yet there is no consensus on what cytokines are involved in the disease.

So it seems to me that an inconsistent cytokine profile does not determine whether or not there is a real disease pathology, and HCV is a good example of that. I feel quite certain cytokines are involved in causing CFS symptoms.

However, it seems that until there are better tests available for cytokine testing, it's not going to be a significant help in CFS, unless somebody comes up with a really good "out of the box" way of testing cytokines. :)

Jim
 

Jonathan Edwards

"Gibberish"
Messages
5,256
What does that mean?
How do they determine it was spurious I wonder?

The write-up is quite bizarre. If you establish that your methodology is inconsistent, you bin your results and either try again or forget it. You do not publish the fact that you think you did not do the experiment properly. Samples from controls and patients should have been run alongside each other on the same assays. That is the most basic sort of quality control you have to have for studies like this. If technicians were getting different results and were putting patients onto one lot of plates and controls onto another (probably indicating that the samples were not blinded), then this is just incompetent lab work.

It is interesting that TGF-beta did not change after exercise, but I would not particularly expect it to. For some reason several studies have identified TGF-beta as higher in ME. In this study, presumably all they can say is that they screwed up, so they do not know; not that the result was due to an artefact. We just do not know what it was due to. I am disappointed that Clin Exp Immunol has drifted down to this level. It was once a rather good journal.
It is interesting that TGFbeta did not change after exercise but I would not particularly expect it to. For some reason several studies have identified TGFbeta as higher in ME. In this study presumably all they can say is that they screwed up so they do not know - not that the result was due to an artefact. We just do not know what it was due to. I am disappointed that CLin Exp Immunol has drifted down to this level. It was once a rather good journal.