I fail to understand how criteria based on self-reported symptoms can be deemed an accurate measure of any disease.
Some maintain that CFS is ME and some believe the latter to be a sub-group of the former etc. etc. This paper is trying to establish something in relation to criteria - but it isn't clear to me what that might be - can you explain? Thanks.
Asking someone if they suffer from fatigue/extreme tiredness provides the top results - but is only 87.6% accurate for something... It's not terribly good, is it really.
To be honest, I didn't read the paper in full detail either yet. Just skimmed it, reading bits and pieces.
If you can get the right combination of self-reported symptoms, the accuracy rate can be pretty good. And until we have a consistent and easily testable biomarker, that may be the best we can get. Many symptoms are pretty distinctive, and people can tell whether they have them or not. If you believe the patient, then you can diagnose them based on what they're reporting. It's the same as getting a diagnosis of anxiety based on saying you're super anxious, etc.
My impression is that the paper is trying to compare the various diagnostic tools for ME (Fukuda, CCC, etc.) and how successful they are at actually distinguishing ME or CFS patients from healthy controls, as well as comparing that success with the method of using their set of questions instead. In this case, it looks like they were only concerned with distinguishing between ME/CFS patients and healthy controls. But a similar process could be done trying to create and include questions that would distinguish ME patients from people with other fatiguing illnesses as well (because there are probably statements people with MS or Lyme disease would make, for example, that we would not, and vice versa).
When you make the criteria too vague, you misdiagnose too many healthy people or people with other illnesses. When you make them too specific, you risk excluding people who are legitimately ill with ME but don't happen to have that particular symptom. It's not an easy balance.
These scientists were suggesting using a group of questions rather than a single one, which would help compensate for what you're talking about with the 87.6% accuracy issue. One question on its own is not particularly helpful, but in combination it's less likely to produce false positives or false negatives. Accuracy takes into account how well a question identifies both people with the illness and people without it.
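To make the terms concrete, here's a small sketch of the arithmetic behind an accuracy figure like that. The counts are entirely made up for illustration; they are not from the paper:

```python
# Toy confusion-matrix arithmetic for a single screening question.
# All counts are hypothetical, chosen only to illustrate the terms.
true_pos = 230   # patients who answered "yes" (correctly flagged)
false_neg = 20   # patients who answered "no" (missed cases)
true_neg = 150   # healthy controls who answered "no" (correctly cleared)
false_pos = 100  # healthy controls who answered "yes" (wrongly flagged)

total = true_pos + false_neg + true_neg + false_pos

sensitivity = true_pos / (true_pos + false_neg)  # how few patients are missed
specificity = true_neg / (true_neg + false_pos)  # how few healthy people are flagged
accuracy = (true_pos + true_neg) / total         # overall fraction classified correctly

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

With these invented numbers the question misses almost no patients (sensitivity 0.92) but flags lots of healthy people (specificity 0.60), so the single accuracy number (0.76) hides a big imbalance between the two kinds of error.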
That doesn't mean that a single question is necessarily any good, though. The fatigue question, for example, gets a high rating because it's great at not leaving out people who have ME. Almost everyone with ME said they had either fatigue or extreme tiredness. But tons of healthy people said that too, so it's not a good measure by itself.
But if you combine that with something like feeling physically drained or sick after mild activity, it's likely to weed out the folks who are healthy but just feeling a bit tired.
Note that I'm not saying these folks have the answer. Just trying to explain what they're saying with the paper, in case it's helpful. I don't think these particular questions are as good as they could be, but it is nice to see people trying to improve the process.