Any ME study needs to begin with the research criteria that are most exclusive to the known disease. The cohort should include outbreak patients, as they are the best-defined cases and the origin of the disease's investigation. The ME-ICC is the closest research definition we have.
Clinically, physicians who see many patients come to 'know them when they see them,' just as we patients often recognize one another. That is fine for care and treatment, since most doctors begin treating on a presumptive diagnosis, but it is not adequate for a research study.
If, say, ME, MS, and lupus patients were all mixed into a single cohort in a 'broad study,' we might well find commonalities, but they could be meaningless and confusing for investigating ME. Other patient groups that fit the ME-ICC criteria less well could be examined later, once clear findings show a recognizable pattern.
Including a 'broad cohort' confounds the findings and renders them meaningless. That was the whole point of the multiple 'chronic fatigue' wastebasket definitions. Some of the recent attempts to 'redefine' the disease look at the history of muddled 'fatigue' research and craft a muddled definition, precisely because of the inclusion of data from patients who should never have been included. It is self-fulfilling: garbage in, garbage out (GIGO).
Running overly broad studies with many variables is also a tactic: you can pick and choose among the results for anything that comes out 'statistically significant.' This was demonstrated by last year's hoax study, which led to widespread headlines such as "Eating chocolate ... can even help you LOSE weight!" and findings such as "Not only does chocolate accelerate weight loss, the study found, but it leads to healthier cholesterol levels and overall increased well-being."
The author explains how the study worked:
"Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good."
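The 'lottery ticket' effect described above is just the arithmetic of multiple comparisons, and can be sketched in a few lines. The quote does not give a count of measurements, so the figure of 18 below is purely illustrative; the formula is standard probability, not taken from the article:

```python
# Sketch of the multiple-comparisons "lottery ticket" effect.
# If every null hypothesis is actually true, each test at alpha = 0.05
# still has a 5% chance of a false positive. Testing many independent
# measurements makes "at least one win" very likely.

def prob_at_least_one_significant(n_tests: int, alpha: float = 0.05) -> float:
    """P(at least one 'significant' result) when every null is true."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 18):  # 18 is an illustrative number of measurements
    print(f"{n:2d} measurements -> {prob_at_least_one_significant(n):.0%} chance of a 'result'")
```

With 18 independent measurements, the chance of at least one spurious 'statistically significant' finding is roughly 60 percent, which is why the hoaxers could be confident something would pan out.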
He also noted the same problem in large, legitimate studies:
"For example, the Women’s Health Initiative—one of the largest of its kind—yielded few clear insights about diet and health. “The results were just confusing,” says Attia. “They spent $1 billion and couldn’t even prove that a low-fat diet is better or worse.” Attia’s nonprofit is trying to raise $190 million to answer these fundamental questions. But it’s hard to focus attention on the science of obesity, he says. “There’s just so much noise.”"
An overly broad cohort does exactly that: it introduces too much noise, it has too many variables, and it allows cherry-picking of results to spin a story around. And they are cleverly getting you to buy into it even before it begins. This has a two-fold result. First, once you have committed to approving it, you will have difficulty rejecting it in the future no matter what proof is offered. Second, no matter what evidence emerges that the study is flawed, no matter the results promoted, no matter how many patients, advocates, and groups decry it in the future, they hold a 'get out of jail free' card in the 'patient petition' that 'approved' the study.