ADCLS and CFS share common phenotype, study shows.

msf

Senior Member
Messages
3,650
More brilliant logic from the researchers:

'In BC, Lyme disease prevalence in the tested population is well below 1%.' So obviously these can't be actual positives, because we know that Lyme is very uncommon in BC!

I'm going through the paper now. It actually seems more substantial than the abstract suggested - for example they used the CC rather than Fukuda.
 

msf

Senior Member
Messages
3,650
They are also terrible at maths, or I am:

'Independent evaluation has found that specificity at alternative laboratories can be less than 50%. In BC, Lyme disease prevalence in the tested population is well below 1%, meaning that false positive diagnoses from an alternative lab can exceed true positives by a ratio of at least 50 to one.'

I'm pretty sure they should have said that the ratio was one to one.
 

msf

Senior Member
Messages
3,650
'We also made considerable effort to recruit Post-Treatment Chronic Lyme Syndrome (PTCLS) subjects without success. While PTCLS patients might avoid mainstream clinics, our experience more likely reflects low prevalence of PTCLS and mitigation of symptoms in most people treated for undisputed Lyme disease.'

Problem? What problem?
 

msf

Senior Member
Messages
3,650
The other thing I noticed on first read-through is that they claimed to have tested the groups for unstimulated and LPS-stimulated cytokine production (using E. coli LPS, for some unknown reason), but they only reported the findings from the unstimulated cytokine production. Perhaps this was because there were no significant differences, but they could have mentioned that. Does anyone have access to the online supplement, or am I asking for too much?
 

Valentijn

Senior Member
Messages
15,786
'Independent evaluation has found that specificity at alternative laboratories can be less than 50%. In BC, Lyme disease prevalence in the tested population is well below 1%, meaning that false positive diagnoses from an alternative lab can exceed true positives by a ratio of at least 50 to one.'

I'm pretty sure they should have said that the ratio was one to one.
I think they mean that if you took 100 people at random, 1 would be infected and 99 would not. They suggest that testing at the alternative lab would result in about half of those 99 having positive results. So there would be about 50 false positives and one true positive.
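Just to put numbers on that, here's a quick Python sketch. The roughly 1% prevalence and 50% specificity are the paper's claimed figures; the perfect sensitivity is my own simplifying assumption, not something from the study.

# Sketch of the claimed arithmetic for 100 people tested.
tested = 100
prevalence = 0.01     # "well below 1%" in the paper, rounded up to 1%
specificity = 0.50    # claimed specificity of the alternative labs
sensitivity = 1.00    # assumption: every true case tests positive

true_cases = tested * prevalence                              # ~1 person infected
true_positives = true_cases * sensitivity                     # ~1 correct positive
false_positives = (tested - true_cases) * (1 - specificity)   # ~49.5 wrong positives

print(f"true positives:  {true_positives:.1f}")
print(f"false positives: {false_positives:.1f}")
print(f"ratio (FP to TP): about {false_positives / true_positives:.0f} to 1")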

Except they've never bothered to prove that the actual false positive rate is anywhere near that high, especially among symptomatic patients such as the ones in the study.
 

msf

Senior Member
Messages
3,650
Ah, I obviously don't get statistics. I thought that a specificity of 50% meant that if you test 100 people, you get 50 true positives and 50 false positives, but now I see that this would depend on how many of the people tested actually have the disease. How is specificity testing done? Do you have 50 known positives and 50 controls? I'm confused.

Anyway, if you're right, it comes back to the claims about the 50% specificity rate, and also the 1% true positive rate in the general population, an obvious tautology.
 

Esther12

Senior Member
Messages
13,774
I've seen that used as an example of a common misunderstanding in medical statistics, @msf, so it's only an example of normal human stupidity - and most people don't go on to realise they got it wrong!
 

msf

Senior Member
Messages
3,650
Well, I might not have if Valentijn had not corrected me! Statistics has always been my Achilles' heel, along with a rubbish immune system, obviously...

I don't understand this either: 'Cytokine differences between groups were not significant given the number of comparisons.' Doesn't this mean that there were individual cytokine differences between groups, but that if the cytokines were taken en masse then there weren't any significant differences? And why assume that there had to be significant differences in the latter? Surely one raised and one lowered cytokine in group A compared with group B might tell you something? Also, how do you know you tested a suitable selection of cytokines?
 

Valentijn

Senior Member
Messages
15,786
I don't understand this either: 'Cytokine differences between groups were not significant given the number of comparisons.'
I think they're referring to their study being too underpowered to get significant results. Any time a comparison is made, such as IL-6 levels in controls versus patients, there is a chance that an abnormal result will be a false positive, just due to random luck. So when there are dozens of comparisons being made, either the significance threshold is lowered or the comparison of results is otherwise adjusted to take that into account.

Basically, having more comparisons results in more potential false positives, which means a stricter threshold for deciding whether a result is likely to be significant. So having dozens of tests and 4 groups of patients and controls means there are a lot of opportunities for false positives to pop up. If someone still wants to be able to look for significant correlations when doing so many comparisons, they need to compensate by including a much greater number of patients and controls in the study.

On the other hand, if their goal is to superficially disprove something, an underpowered study is the perfect way to do it. Insufficient patients + too many tests creates a situation where it is nearly impossible for any significant result to emerge. This results in an abstract which flatly refutes the undesirable competing theory, with the ridiculous lack of power to obtain any significant results hidden away in the (often paywalled) discussion if it's explicitly acknowledged at all. BPS practitioners have used this tactic before with CFS research, though not often.
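To make that concrete, here's a toy simulation of the problem (all the numbers below are invented for illustration and have nothing to do with the actual study): one measurement genuinely differs between small patient and control groups, but after a Bonferroni-style correction for 50 comparisons the real difference usually fails to reach significance.

# Toy simulation: a genuine group difference can easily fail to reach
# significance once the threshold is corrected for dozens of comparisons
# in a small study. All numbers are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_group = 15                          # small groups, as in an underpowered study
n_comparisons = 50                        # e.g. dozens of cytokines across group pairs
alpha = 0.05
corrected_alpha = alpha / n_comparisons   # Bonferroni-corrected threshold (0.001)

detected = 0
n_simulations = 1000
for _ in range(n_simulations):
    # one cytokine genuinely differs between patients and controls
    controls = rng.normal(0.0, 1.0, n_per_group)
    patients = rng.normal(0.8, 1.0, n_per_group)   # real effect of 0.8 SD
    if ttest_ind(patients, controls).pvalue < corrected_alpha:
        detected += 1

print(f"real difference survives the corrected threshold in "
      f"{detected / n_simulations:.0%} of simulated studies")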
 

msf

Senior Member
Messages
3,650
I don't get the logic of that...the more things you test the more false positives you get. That is obviously true, but it doesn't mean that the rate of false positives will increase, so each positive is no less significant. Isn't the rate of false positives partially determined by the size of the sample, rather than the number of comparisons?
 

Valentijn

Senior Member
Messages
15,786
I don't get the logic of that...the more things you test the more false positives you get. That is obviously true, but it doesn't mean that the rate of false positives will increase, so each positive is no less significant. Isn't the rate of false positives partially determined by the size of the sample, rather than the number of comparisons?
If you have one test, then the default (and somewhat arbitrary) assumption might be that there is a 5% chance it will result in a false positive. If 20 comparisons are made, then on the same assumption there will probably be at least one false positive. If 100 comparisons are made, there are probably going to be approximately 5 false positive results. So to use that same threshold, as if there were only a single comparison, would result in abstracts proudly proclaiming 5 significant results even though the odds are that they are coincidence rather than indicative of an actual correlation. Corrections are accordingly applied to set a higher standard, so that a correlation reported in a study with hundreds of comparisons is just as likely to be real as one reported in a study with a single comparison.
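The same arithmetic in a few lines of Python, just to illustrate the 5% assumption above (nothing here is study-specific):

# How the expected number of false positives, and the chance of getting at
# least one, grow with the number of comparisons at the usual 5% threshold.
alpha = 0.05

for n_comparisons in (1, 20, 100):
    expected_false_positives = alpha * n_comparisons
    p_at_least_one = 1 - (1 - alpha) ** n_comparisons
    bonferroni_threshold = alpha / n_comparisons   # corrected per-test threshold
    print(f"{n_comparisons:>3} comparisons: expect {expected_false_positives:.1f} "
          f"false positives, P(at least one) = {p_at_least_one:.0%}, "
          f"corrected threshold = {bonferroni_threshold:.4f}")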

Both sample size and number of comparisons can impact the likelihood of false positives, and they can help to compensate for each other's weaknesses. If only a small sample size is available, then the researchers should probably restrict themselves to testing for the few most relevant measurements, since additional comparisons will doom the entire thing to failure.
 

msf

Senior Member
Messages
3,650
Thanks for the explanation, but I'm not sure I understand the logic entirely. Anyway, it's probably better for me, my migraine, and everyone else's sanity if I just read up on statistics rather than trying to work it all out for myself.
 

msf

Senior Member
Messages
3,650
The trouble is, I hate statistics; it's actually one of the reasons why I did History at university rather than Biochemistry. I think I will leave it to the professionals, and just listen to people like Valentijn and Prof. Edwards whenever the statistics in an ME study seem to be problematic.
 

Kati

Patient in training
Messages
5,497
It is very difficult to speak up because this team is local to me, and there may be further problems for me in speaking up, but I am questioning their assessment of whether a patient has orthostatic intolerance based on 2 readings of pulse and blood pressure at the physical exam, one after reclining for a short while, then one upon standing for 1 minute. They haven't found much in that regard, but they managed to find orthostatic intolerance in some healthy controls.

Patients from our patient population have reported OI, POTS, and NMH, and our experts, especially Dr Nancy Klimas, have been very successful in diagnosing patients with these conditions, and treatment has helped them.

It is absolutely frivolous to not perform the appropriate testing and then say these patients do not have orthostatic intolerance, especially when patients use electrolyte drinks, beta blockers and compression garments on a daily basis, which could have masked symptoms during the assessment.

I would suggest that the appropriate and competent use of a tilt table (not standing), with heart rate technology that measures beat-to-beat changes, is needed, with the knowledge that some patients have delayed effects and will not show an immediate reaction. Dr Klimas tilts her patients for 30 minutes, not the 10 minutes of the American Neurology Association standard.

Patients need to be considered competent in providing input to researchers. We are experts on our own bodies. We follow the science. We know the pitfalls. Unfortunately, we've been taken on a roller coaster ride too many times.
 

Antares in NYC

Senior Member
Messages
582
Location
USA
Kati, the entire study doesn't pass the smell test. It's shoddy, inconsistent, and clearly aimed at discrediting non-official Lyme diagnostic methods.

I envy folks like Valentijn, who can sink their teeth into bad research papers like this one and expose their fraudulent and flawed methods. For full disclosure, I have three college degrees, a bachelor's and two master's, yet I feel dumb as a fence post because my brain fog is so extremely disabling. I wish I had more of my old cognitive abilities, but unfortunately I forget even the names of my relatives every other day. Sad, I know. Wish I could be of more help, but I'm glad others in this community remain vigilant.

Thank you guys for the amazing job you do in these forums. Keep on fighting the good fight. Good night.
 

Valentijn

Senior Member
Messages
15,786
It is very difficult to speak up because this team is local to me, and there may be further problems for me in speaking up, but I am questioning their assessment of whether a patient has orthostatic intolerance based on 2 readings of pulse and blood pressure at the physical exam, one after reclining for a short while, then one upon standing for 1 minute.
Agreed ... that was something else I was wondering about. And their failure to replicate the findings of bigger and more robust ME studies of OI is yet another big red flag. It also indicates that they're looking to downplay ME/CFS nearly as much as atypical Lyme.
 

Kati

Patient in training
Messages
5,497
Agreed ... that was something else I was wondering about. And their failure to replicate the findings of bigger and more robust ME studies of OI is yet another big red flag. It also indicates that they're looking to downplay ME/CFS nearly as much as atypical Lyme.

To be fair, I don't think they failed to replicate studies of OI. They haven't tried, and it was not the goal of the study.

I think they cannot define OI by a 1-minute standing test.
I think they should have called it a drop in BP at one minute of standing.

I also hate the word 'intolerance' because it implies that the patient is showing weakness of character.
 

Valentijn

Senior Member
Messages
15,786
For full disclosure, I have three college degrees, a bachelor's and two master's, yet I feel dumb as a fence post because my brain fog is so extremely disabling.
It was all pretty new to me when I got sick. I nearly failed biology in high school, and barely passed the statistics class I needed to get my undergrad degree in Law & Justice. Mostly it's been practice which has been helpful, and reading what other people post about the methodological problems with studies here. Then it's just careful (very slow) reading, and looking for some basic things and noting the things which don't make sense. And sometimes I'm just too sick to read any papers, so I don't :p

I also took a Coursera statistics course which probably helped a lot. I failed it, but mostly due to having to learn a programming language to do it, plus a crash. I can still handle the logic of programming, but can't reliably remember the functions or syntax. Anyhow, there are probably easier statistics classes, and I'd like to take one again - if I go in assuming I'll fail, then there's no pressure but I'm still learning in the process :D I also took an easy Coursera class on understanding medical research, and that was great regarding the basics.