Dr Avindra Nath (NIH intramural study) to give Solve webinar, 21 April

viggster

Senior Member
Messages
464
You are misunderstanding or misstating the issue. I understand the limit of 40 patients. But it is a somewhat small sample, especially when dozens of comparisons are being made.

Making too many comparisons increases the chance that meaningful differences will end up statistically insignificant once corrections are applied. Adding 2 or 3 control groups doubles or triples the number of comparisons being made, which means a much larger difference is needed to show statistical significance.
Hmm... you're right, I don't understand the basis of your complaint. They'll compare patients to healthy controls, and separately, to the asymptomatic Lyme patients. So I don't see how adding the Lyme controls has any bearing on the findings of patients versus healthy controls.

Edit: I have a friend with a PhD in biostats from Berkeley...I just asked her to explain this issue to me.
 

Valentijn

Senior Member
Messages
15,786
Hmm... you're right, I don't understand the basis of your complaint. They'll compare patients to healthy controls, and separately, to the asymptomatic Lyme patients. So I don't see how adding the Lyme controls has any bearing on the findings of patients versus healthy controls.
Any time a comparison is made between a group of patients and a control group, it's possible that abnormal results between those groups are due to random chance. Basically, it might just be a meaningless fluke, or "false positive". So p-values are used to set a basic threshold to determine whether those results are probably meaningful or not. A typical p-value threshold is 0.05 or 0.01.

When multiple comparisons are made between the patient group and control group, each comparison introduces an additional opportunity for a false positive. So the p-value threshold is "corrected for multiple comparisons", and there has to be a more drastic difference on any one of the comparisons for it to satisfy the threshold of probably being meaningful.

So if someone is just comparing blood glutamate levels between patients and controls, it can be pretty easy to show that there is a meaningful difference, even if it's a fairly small difference and you're using a small group of patients and a small group of controls. But if the researcher decides he wants to look at all of the amino acids and adds in another 19, you're going from making 1 comparison to making 20 comparisons. So now there is a big chance that modest differences are due to random chance, and the threshold for showing a statistically significant difference in the levels of any one of the amino acids becomes much, much higher.

Similarly, if an extra control group is added, extra comparisons are being made. This might not be a problem if just looking at glutamate levels in ME patients versus healthy controls versus sedentary controls, for example. That's just 2 comparisons: ME glutamate versus one group, then ME glutamate versus the other group.

But if using two control groups to compare ME results for all 20 amino acids, that's 20 comparisons between ME patients and each group, resulting in 40 comparisons being made total. That's a lot of opportunity for false positive results, which makes it much harder to meet the requirements for statistical significance ... the differences now must be pretty dramatic to be considered meaningful. So if only really interested in glutamate, it's probably best to refrain from testing the other 19 amino acids and to seriously reconsider the usefulness of the extra control group.
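
As a rough illustration of that point, here is a minimal Python sketch. The group sizes and number of tests are hypothetical (not the NIH study's actual numbers): it simulates studies in which there is no real difference between the groups at all, and counts how often at least one of 40 comparisons still comes out "significant", with and without a Bonferroni correction.

# Minimal simulation of the multiple-comparisons problem described above.
# All numbers (group sizes, number of tests) are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_group = 20   # hypothetical subjects per group
n_tests = 40       # e.g. 20 analytes x 2 control groups
alpha = 0.05
n_sims = 2000

def family_false_positive_rate(threshold):
    """Fraction of simulated studies with at least one p-value below
    `threshold`, even though both groups are drawn from the same
    distribution (i.e. there is no real difference to find)."""
    hits = 0
    for _ in range(n_sims):
        patients = rng.normal(size=(n_tests, n_per_group))
        controls = rng.normal(size=(n_tests, n_per_group))
        pvals = ttest_ind(patients, controls, axis=1).pvalue
        hits += (pvals < threshold).any()
    return hits / n_sims

print("uncorrected 0.05 threshold :", family_false_positive_rate(alpha))
print("Bonferroni 0.05 / 40 tests :", family_false_positive_rate(alpha / n_tests))
# Roughly 1 - 0.95**40 = 0.87 uncorrected, versus about 0.05 after correction.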

(This actually happened in a study which Dr Mark Hallett and Dr Silvina Horovitz co-authored, albeit with a single control group: 22 patients with Focal Hand Dystonia and 22 controls. According to the abstract, the study is all about debunking an earlier study showing significant results regarding GABA levels. But if you read the paper, they added in 19 other metabolites which they mostly weren't interested in and barely discussed, and dutifully corrected for the excess comparisons. They found a 10% drop in GABA levels in patients, but it would have needed to be 30% or greater to reach statistical significance. They would have needed 80 patients and 80 controls for the 10% difference to be statistically significant, according to their own analysis. Yet the study is still used as a null result, to support the claim that GABA levels are not different in FHD patients versus controls.)

This means that if someone is going to make more than a few comparisons between controls and patients, they need a decent sample size of both patients and controls. If they're making dozens of comparisons, they probably need a pretty big sample size of both patients and controls. And if they're going to then add multiple control groups, the problem pretty much explodes into something completely unmanageable.
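
To put rough numbers on that, here is a small power-calculation sketch using statsmodels. The effect size is an assumed, hypothetical value, so the exact figures are only illustrative, but it shows how the required per-group sample size climbs as the significance threshold is corrected for more and more comparisons.

# Rough power-calculation sketch: how the required per-group sample size
# grows as the significance threshold is Bonferroni-corrected for more
# comparisons. The effect size is an assumed, hypothetical value.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.6   # assumed moderate standardized difference (Cohen's d)
power = 0.8         # conventional 80% power
analysis = TTestIndPower()

for n_tests in (1, 20, 40):
    alpha = 0.05 / n_tests   # Bonferroni-corrected per-test threshold
    n = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                             power=power, alternative='two-sided')
    print(f"{n_tests:>2} comparison(s): about {n:.0f} subjects per group")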
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
And that's why most genetic studies are worthless - because there are such a huge number of variables and there's so much noise in the data that you need large numbers of subjects to make sense of the stats. And that's why the UK's 'omics' study is planned to include thousands of participants. For the NIH study, if it were to be done to full scale, it would also need thousands of participants because they are making a huge number of measurements.

But I've always understood that phase 1 of the NIH study is an initial observational study that is simply looking for clues upon which to base the next phases of the research, and to dig deeper, rather than an attempt to immediately define ME in phase 1. I thought they were looking for small groupings of patients showing clear abnormalities, that will lead them on to the next stage. If they don't find clear abnormalities in a small number of patients (e.g. like Davis has reportedly found with mitochondria) then I think it might not be such a useful study.

They've said that the second phase will include more participants. But maybe I don't know enough about research methodology to interpret it properly.
 

Amaya2014

Senior Member
Messages
215
Location
Columbus, GA
If I could walk 3500 steps in a week, let alone in a day, I probably would never ask for anything else again as long as I lived, and I don't see how this is the equivalent of "handing someone a cane." Not everyone can walk a mile although if I could, I would be sobbing with joy. I'm sure you didn't mean it this way, but I found it hurtful.
Hi @Gingergrrl, so sorry you were offended. That wasn't my intent. I'm in prayer for yours, mine, and all our recovery!
 

viggster

Senior Member
Messages
464
And that's why most genetic studies are worthless - because there are such a huge number of variables and there's so much noise in the data that you need large numbers of subjects to make sense of the stats. And that's why the UK's 'omics' study is planned to include thousands of participants. For the NIH study, if it were to be done to full scale, it would also need thousands of participants because they are making a huge number of measurements.

But I've always understood that phase 1 of the NIH study is an initial observational study that is simply looking for clues upon which to base the next phases of the research, and to dig deeper, rather than an attempt to immediately define ME in phase 1. I thought they were looking for small groupings of patients showing clear abnormalities, that will lead them on to the next stage. If they don't find clear abnormalities in a small number of patients (e.g. like Davis has reportedly found with mitochondria) then I think it might not be such a useful study.

They've said that the second phase will include more participants. But maybe I don't know enough about research methodology to interpret it properly.
That's my understanding as well. And when Nath says he's calculated sample sizes needed to find abnormalities driving the disease, I think he's talking about exactly the kind of stats that Valentijn mentions.

Also, the problem with whole genome scan studies that associate SNPs with disease is the opposite - so many comparisons are made that if you set a p value of 0.05 for significance, some falsely associated SNPs will pop out. In follow-up studies, many SNPs that at first appeared to be associated with diseases fall apart.
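
A quick back-of-the-envelope version of that point, with a hypothetical SNP count: at an uncorrected threshold of 0.05, the sheer number of tests guarantees a flood of false hits, which is why genome-wide studies apply a much stricter corrected threshold.

# Back-of-the-envelope arithmetic; the SNP count is hypothetical.
n_snps = 1_000_000        # hypothetical number of SNPs tested
alpha = 0.05              # uncorrected per-SNP significance threshold

print("expected false-positive SNPs:", round(n_snps * alpha))   # 50,000
print("Bonferroni per-SNP threshold:", alpha / n_snps)           # 5e-08,
# which is in the same ballpark as the conventional genome-wide threshold.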

Edit: I also want to add that Nath and others have communicated to me they are confident they can find the immune abnormalities driving the illness. They are not deliberately setting this up to fail. I guess if Valentijn thinks they are, or thinks they're incompetent, there's little reason to continue the discussion.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
I also want to add that Nath and others have communicated to me they are confident they can find the immune abnormalities driving the illness. They are not deliberately setting this up to fail. I guess if Valentijn thinks they are, or thinks they're incompetent, there's little reason to continue the discussion.
I don't think Nath is setting it up to fail (I trust him from what I've seen of him), but I do understand why there is a major lack of trust with other players at the NIH.
 

duncan

Senior Member
Messages
2,240
Ok, @viggster, the flaws that might result in failure could be inadvertent. A failure would still be a failure, yes?

Just to be safe, wouldn't it seem prudent to identify the potential flaws - already done, in large measure - and remedy them?
 

Valentijn

Senior Member
Messages
15,786
That's my understanding as well. And when Nath says he's calculated sample sizes needed to find abnormalities driving the disease, I think he's talking about exactly the kind of stats that Valentijn mentions.
Do you know where the exact quote for that is?
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Do you know where the exact quote for that is?
I remember Nath saying something like that (i.e. that his stats colleagues have looked at, and approved, his study design), and I think it's in the Solve ME/CFS Initiative webinar transcription, Val.

Edit: But I can't find it. (And it's possible that I've mis-remembered.) Sorry - I'm not being any help here!
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
In the webinar, Nath discusses some of the issues we've been talking about...
So there are two ways of doing studies. You can do a lot of patients and do few things to them, or you can take a few patients and do a lot of things to them. And the intramural program is good at studying small sample sizes but studying them extensively. The extramural folks are really very good at multi-center studies and stuff whereby you can do very large numbers of patients.

So the first order usually should be to study a small number of patients, well-defined groups, and we look at them extensively, and that will allow you to define a set of parameters that you can then take to larger studies.

So after we conduct Phase One, hopefully find the handful of things that are worthy of pursuing further, then we would go to a Phase Two study. And so the Phase Two study would then validate the biomarkers in a longitudinal study. So now we can follow patients for a longer period of time and that will allow us to then establish endpoints if we can validate those in the longitudinal study. Then you can say that, OK, these are the things that seem to consistently be elevated or depressed or whatever it is. And if we now modulate one of those, will it really make a difference? So that's when you come to a Phase Three study.

We also realize that if you're going to use clinical criteria, not only for this disease but any other disease — and in neurology there are lots of them that way — they never have the perfect criteria and you're going to have some patients who probably don't have the disease, and you're going to have some kind of heterogeneity. But that doesn't actually bother me because as you study those patients you'll find outliers that don't fit into the rest of the group.

And so, depending on your sample size, you can actually add more patients or you can exclude them — the outliers — and re-analyze their data or sometimes the outliers can be actually very interesting. You can study them separately as well.

So there are ways of making adjustments to your study as you go along to try and define a population closer and closer and closer. So let's say I find, you know, there are ten patients out of the forty that are really clustering into a particular type of immune phenotype, then I'll try to understand what those are, and then try to bring in patients who keep matching that phenotype so I can correct right there for them.

So there are many ways of being able to handle it, so I'd like to alleviate people's anxiety that we might end up with wrong populations and all that kind of stuff. There are many ways of being able to handle those kinds of things, and as neurologists we have a lot of experience with those kinds of studies.
 

duncan

Senior Member
Messages
2,240
A problem with Nath's reassurances is that they don't seem to take into account the fact that the history of trying to characterize ME/CFS has been marred, repeatedly, by studies that have done precisely what he seems intent on shrugging off, i.e., including non-pwME in the study sample.

The results almost invariably have been very very bad for us.
 

Amaya2014

Senior Member
Messages
215
Location
Columbus, GA
It's alright and I knew it was not your intent but just wanted to let you know how the words came across to someone 100% wheelchair bound who can only walk a mile in their dreams.
Gurrl, the struggle is real!:hug:;) Speaking of dreams, I had a most delicious one the other day that I was biking or running...not for sure but it was exhilarating. I wish it could be used as proof that my subconscious hadn't gotten the "all in your mind" memo;):D

Also, when I was still crashing severely I was offered assistive devices numerous times. I walk a thin line with the fear that I could worsen further and lose mobility. I'm very grateful for the function that I have and I don't take the subject lightly. This illness is devastating at every level.
 

viggster

Senior Member
Messages
464
A problem with Nath's reassurances is they don't seem to take into account the fact that the history of trying to characterize ME/CFS has been marred, repeatedly, by studies that have done precisely what he seems intent on shrugging off, i.e, including non-pwME in the study sample.

The results almost invariably have been very very bad for us.
He's not shrugging it off. Patients must meet CCC criteria with PEM. I believe this is the first US federal study with such strict entry criteria.
 

viggster

Senior Member
Messages
464
Also, I want to say that I understand why folks are suspicious and anxious. I get it. NIH has a bad track record and they've included a few suspect people on this (very large) study team. I think it's fine to analyze the study and look for ways to improve it. At the same time, there seems to be a certain amount of willful ear closing going on. Issues that NIH people have addressed keep being brought up as if they have not been addressed.

- Nath, a neurovirologist, is the PI. He said final interpretation of data lies with him. He said he's not interested in psychology.
- Nath explained that he has run stats on sample sizes and that he has flexibility to add more patients if necessary.
- NIH has repeatedly said patients must meet Canadian Consensus Criteria with PEM, and yet a poster a few pages back wrote a long analysis of how this study will fail because NIH is using Fukuda criteria. (Huh???)

It's almost like half the community watches Fox News and the other half MSNBC. It's hard to have a dialogue when each side feels entitled to its own "facts".
 

halcyon

Senior Member
Messages
2,482
- Nath, a neurovirologist, is the PI. He said final interpretation of data lies with him. He said he's not interested in psychology.
The main criticism isn't with Nath and data interpretation, I don't think. It's with Walitt and patient selection. The study design could be 100% bulletproof, but if you run it on the wrong patients, it's game over. If the misdiagnosis rate in the US is anywhere near what it is in the UK (and I don't think we have any data on this), then this is of course a huge concern.
 

Denise

Senior Member
Messages
1,095
I would very much like to know how NIH is characterizing PEM and how that characterization plays out in the study. As we know, PEM is not fatigue, but I feel as though that is how I have heard NIH speak of PEM. I seem to remember hearing the phrase post-exertional fatigue...
Unless we are very clear on what we mean when we use terms, and unless we are clear on what NIH (and everyone else) means when they use terms, we may be speaking about different things.