Aleatoric Medicine

If a tree falls in the woods, and a statistician declares this a random event, does it make a sound?

This question ran through my mind when I read responses to news out of Columbia University concerning ME/CFS/SEID/WTF.

The report on plasma immune signatures of physiological pathology in the first three years after onset of ME/CFS in the Hornig/Lipkin study is welcome news because it is described as "robust evidence" of physical disease persisting far longer than the few months usually attributed to the infection present at onset. No one reading this should take my criticisms of the state of the art as personal criticism of Drs. Hornig or Lipkin, who seem to be fighting an uphill battle. (I wouldn't wish an appearance on the Dr. Oz show on most enemies.)

The work was done under some fairly serious limitations, especially a lack of existing official diagnostic criteria which would exclude primary mental illness, and a lack of government funding for assembling such a cohort. The $10,000,000 grant to the Chronic Fatigue Initiative is by itself about twice total annual spending on this subject by federal agencies. The Stanford initiative was also funded by a substantial grant from an anonymous donor. Absent these two sources of external funding it is doubtful an equally meaningful study could have taken place at all. The samples on which it depended would simply not have been available.

Meanwhile, what was taking place within HHS control? The IOM study, which was strictly limited to reviewing past research conducted under official diagnostic criteria more likely to exclude patients with evidence of physiological disease than those with primary psychiatric disorders. This exercise cost one million dollars and took over a year. It ended up recommending a new name which completely omitted any reference to neurological or immunological abnormalities. The most important symptoms to consider in diagnosis were "exertional intolerance", "orthostatic intolerance", "cognitive impairment" and "sleep disorders". This pretty well overlooked the problem that cardiologists and physical therapists already have a definition of "exercise intolerance" which conflicts with the "exertional intolerance" considered most important here. Also, I have personally discovered that it is possible to suffer syncope, reported as an apparent seizure and causing admission to hospital via the emergency department, without anyone even suspecting "orthostatic intolerance". Absent these two criteria the resulting IOM diagnostic algorithm fits various longstanding psychiatric diagnoses remarkably well.

Against this background, the study by Hornig, et al. is quite positive. This is about like saying a candle flame is quite bright in a coal mine.

I want to say right now that I consider this work about the best you can expect from current practice in the study of infectious disease. This is not even close to a ringing endorsement of the field, which has a long history of trouble dealing with chronic diseases, even including TB and AIDS. Major successes there were largely due to prevention and containment, with incidence declining as patients with chronic disease died. (Think TB disappeared when streptomycin was introduced? I can supply a list of notable victims who obstinately continued to die anyway, e.g. George Orwell, Dashiell Hammett, Vivien Leigh.)

One fundamental problem stems from the kind of exclusions of physiological disease being made. When the majority of these excluded conditions are "of unknown etiology", you have to wonder how researchers knew they were irrelevant. It is entirely possible, even quite likely, the disease in question exists in a spectrum of severity. Current diagnostic criteria would exclude the most serious cases, where pathology would be most prominent. In fact, even if patients were not excluded by design, many severely-affected patients would have been unable to participate because they could not reach the clinics where samples were drawn without assistance. You might ponder what effect this would have had on research into, say, poliomyelitis.

A second problem is very widespread in medicine, despite considerable evidence that it is based on invalid assumptions. Medical measurements are assumed to have mean values which can be compared to population norms and thresholds. Variation around these means is assumed to be random. You can actually construct a typical normal statistical distribution by starting sample points at the mean value, then conducting a series of unbiased one-dimensional random walks to disperse them. I can easily point to research on heart rate variability which shows that decreased variability is a dangerous sign of impaired adaptability. Near mean values, healthy heart rates actually exhibit antipersistence: intervals between beats are more likely to change than to remain the same. This is not at all what theories based on simple homeostasis predict.
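To make both points concrete, here is a minimal sketch using made-up numbers rather than anything drawn from the study. Unbiased random walks started at a common mean value really do disperse into an approximately normal distribution, while a toy antipersistent series behaves quite differently, and the lag-1 correlation of successive changes tells the two apart.

    # Minimal sketch with assumed numbers: random walks disperse into a normal
    # distribution; an antipersistent series tends to reverse each change.
    import numpy as np

    rng = np.random.default_rng(0)

    # 1. Many unbiased random walks started at the mean value.
    n_walks, n_steps = 5_000, 1_000
    steps = rng.choice([-1, 1], size=(n_walks, n_steps))
    endpoints = steps.sum(axis=1)
    print("endpoints: mean %.2f, std %.1f (theory %.1f)"
          % (endpoints.mean(), endpoints.std(), np.sqrt(n_steps)))

    # 2. Lag-1 correlation of successive changes separates the two regimes:
    #    roughly zero for a random walk, strongly negative when each change
    #    tends to be reversed (antipersistence).
    def lag1_change_corr(series):
        d = np.diff(series)
        return np.corrcoef(d[:-1], d[1:])[0, 1]

    walk = np.cumsum(rng.choice([-1, 1], size=n_steps))
    anti = np.zeros(n_steps)
    for i in range(1, n_steps):
        anti[i] = -0.5 * anti[i - 1] + rng.normal()   # changes tend to reverse

    print("lag-1 change correlation, random walk:    %+.2f" % lag1_change_corr(walk))
    print("lag-1 change correlation, antipersistent: %+.2f" % lag1_change_corr(anti))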

The same general characteristics also appear in breathing, gait, etc. These are the most fundamental physiological variables we know, and they do not behave like simple random walks around a desirable mean value. There is even interesting work on the nonlinear dynamics of posture which may be directly relevant to the problem above of quantifying "orthostatic intolerance".

It is likewise no problem to show many other measurements exhibit strong diurnal variation. So, why do doctors and medical researchers persist in using defective models of physiological parameters? Because these are convenient for treating generic patients without a great deal of thought and effort. You simply compare a single isolated measurement against thresholds.

The classic example today is using a thermometer to decide if the patient has a fever.
(This is much more recent than many people realize. In 1868, a pioneering physician, Carl Wunderlich, used a thermometer about a foot long to measure axillary, or armpit, temperatures in patients. Each measurement took about 20 minutes. Relatives born before this breakthrough actually lived into my lifetime.) This can fail if patients have a normally low temperature or a weak immune response, as frequently happens with elderly patients. Despite repeated warnings that serious infections can present without fever in such cases, people still die because doctors applying generic thresholds do not recognize infections lacking a fever that meets common criteria until it is too late to treat them successfully.

You may also notice that fever is a common sign of immune response, like visible inflammation. What is more it is notoriously variable. (You might also consider the general literary meaning of perfervid in comparison to stability.)

What I'm getting at here is that language has already absorbed the notion that signs of immune response are highly variable, even on short time scales. This is highly inconvenient for medical practice and research, where variability is treated as something to avoid or remove. The cartoon fever charts at the foot of beds were once real measures of how a patient with an infectious disease was progressing. Doctors in that period were very conscious of time variation in immune response.

One obvious reason to find both pro-inflammatory and anti-inflammatory cytokines in patient samples may well be that you are sampling dynamic processes at different stages, which other sampling criteria did not distinguish at all. (Do we have any idea how long patients had been upright that day before samples were drawn, or if they had eaten? Do patients reading this consider this important?)

Added: we even have prior research I had failed to mention showing that cytokines appear in response to moderate exercise in ME/CFS patients. This is an example of non-random variation directly affecting the principal variables in the Hornig/Lipkin study.

This has another relation to assumptions used in research: the use of linear models. When dynamic behavior of physical variables is weakly non-linear you can get much useful information from linear models. When behavior is strongly non-linear you cannot even depend on variation moving in the same direction in response to input. A mixture of pro- and anti-inflammatory markers certainly sounds like this, but unfortunately we have little idea of how these vary on short timescales or with physiological stressors. All reports of the "natural history" of this disease emphasize the importance of variation and stressors, though patients may not be correctly attributing causes to effects.
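To illustrate the point about strongly non-linear behavior, here is a toy dose-response with assumed numbers rather than real data: the response rises, peaks, and then falls as the input increases, and a straight-line fit through it reports a slope near zero, as if the input had no effect at all.

    # Toy non-monotonic dose-response (assumed numbers, not from any study).
    import numpy as np

    dose = np.linspace(0.0, 2.0, 21)
    response = 4.0 * dose * (2.0 - dose)     # rises to 4.0 at dose 1.0, then falls

    # A linear model fitted to the same data sees almost nothing.
    slope, intercept = np.polyfit(dose, response, 1)
    print("largest response: %.1f at dose %.1f" % (response.max(), dose[response.argmax()]))
    print("linear-fit slope: %.2f" % slope)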

The discovery that variable rates of change were the key to understanding physical processes was so important that Newton actually hid it in a Latin anagram to protect his priority in discovery. (Sorry guys, I'm talking about Isaac, not Helmut.)

Newton was also the one who introduced the term momentum in a truly quantitative way. Before this it simply meant some unspecified quantity of motion. Anyone who has read older work on astronomy should recognize the extraordinary advance this represented, even beyond Kepler's law of equal areas in equal times. It was never again necessary to reduce all motion to compounded uniform circular motion.

This concept not only held up when dramatic changes took place in the form of Relativity and Quantum Mechanics, it also produced the idea of a phase space which Poincaré used to approach problems which had previously been intractable.

This was the culmination of a long development which started with Newton and passed on to Lagrange's idea of generalized coordinates and generalized momenta. Names like Jacobi and Poisson also appear here, in addition to the spectacular generalization of Hamiltonian dynamics. This linked these physical ideas with deep results in the calculus of variations, which were later used by Emmy Noether to produce a remarkable theorem on the relation between symmetries, group theory and conservation laws. Anyone who thinks this stuff is simple compared to medicine should investigate the extraordinary success this has had in very complicated chemistry, solid-state physics and particle physics, where some appropriate conservation laws were by no means apparent at the outset.

It is possible to deduce a great deal about unknown dynamics when you have reliable time series measurements of a single variable. The problem is that even when we have such series, and analysis shows interesting characteristics of those dynamics, this has had almost no impact on the practice of medicine. The ideas are simply too foreign.
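For readers who want to see what such a deduction looks like, here is a minimal sketch under assumptions of my own, with a textbook chaotic system standing in for the unknown physiology. Delay-coordinate embedding builds state vectors from a single measured series, and those vectors recover much of the geometry of the hidden dynamics.

    # Sketch of delay-coordinate reconstruction from one measured variable.
    import numpy as np

    # Generate a single "measured" variable from an underlying three-variable
    # system (the Lorenz equations, integrated with a crude Euler step).
    dt, n = 0.01, 20_000
    x, y, z = 1.0, 1.0, 1.0
    observed = np.empty(n)
    for i in range(n):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        observed[i] = x                   # only this one coordinate is recorded

    # Each reconstructed state vector combines the series with itself
    # at delays of tau and 2*tau.
    tau, dim = 10, 3
    rows = n - (dim - 1) * tau
    embedded = np.column_stack([observed[k * tau : k * tau + rows] for k in range(dim)])
    print("reconstructed state vectors:", embedded.shape)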

Even when you don't have appropriate time series there are measures which bear on the way a dynamical system can be expected to change in response to variations. These show up in sensitivity analysis, and neglected problems there have even resulted in big engineering disasters. Most ME/CFS patients I've communicated with have unusual sensitivity to a number of things. In some cases these are chemicals, or components of diet. In other cases they are physical parameters like ambient temperature. I would say the dynamics involved lacks robust stability, like a unicycle, yet this kind of report is typically ignored as medically irrelevant.

In the case of medicine we commonly have almost no information concerning rates of change, either w.r.t. time or proportional to environmental influence, and when we do have them we may well lack corresponding values of the conjugate variables which are changing. (If this isn't concrete enough, imagine a traffic cop who knows how fast you are going, but has no idea which speed zone you are in. Velocity, or momentum, and position are conjugate variables. Both are necessary if you end up in court.) The result is that the phase space described by clinical measurements loses half the number of dimensions. If the result is a blob of overlapping trajectories which is particularly hard to interpret, no one should be in the least surprised. The diagrams of cytokine networks in this paper make me think of exactly this kind of collapse of dimensions in physical problems. We have already found that opponents of physiological disease hypotheses can easily fail to see evidence of anything that fails to fit their idée fixe except randomness.
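A toy version of that collapse of dimensions, purely as illustration: two oscillations with different energies are cleanly separated in the full (position, velocity) phase plane, but once only position is recorded their values overlap and the separation disappears.

    # Two oscillations with different energies, seen with and without the
    # conjugate variable (velocity). Assumed numbers, for illustration only.
    import numpy as np

    t = np.linspace(0.0, 20.0, 2001)
    pos_low,  vel_low  = 0.5 * np.cos(t), -0.5 * np.sin(t)
    pos_high, vel_high = 1.0 * np.cos(t + 0.3), -1.0 * np.sin(t + 0.3)

    # In the full phase plane each trajectory stays on its own circle.
    print("phase-plane radius, low energy:  %.2f" % np.hypot(pos_low, vel_low).mean())
    print("phase-plane radius, high energy: %.2f" % np.hypot(pos_high, vel_high).mean())

    # With position alone, the two sweep through overlapping values -- the blob.
    print("position range, low energy:  [%.2f, %.2f]" % (pos_low.min(), pos_low.max()))
    print("position range, high energy: [%.2f, %.2f]" % (pos_high.min(), pos_high.max()))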

This ties in with another lacuna in medical thinking when variation is considered random: mean values seldom kill; extremal values which occur exactly once in a lifetime do.

At some point we will simply have to start considering health and life as dynamic processes in which change in response to changing contexts is more important than mean values.

Comments

(continued)
This is a different world from the one in which physicists who studied "schizophrenic gaze" using movie film digitized by hand were rebuffed at a medical conference. Their results showed this was largely the result of simple non-linear dynamics, implying a rather simple neurological basis for the phenomenon. If the medical profession had not "known" (on some basis I can't understand) that this was impossible, we might have seen some real progress in that field in the last 30 years.

What about people who do not imagine aliens on the Moon are beaming thought control messages at them? Shouldn't it be possible to accept some of their information about what is coming in along afferent nerves as evidence of hidden damage at such locations as dorsal root ganglia which produces no convenient clinical signs? We've already been through this with MS, where for many years a definitive diagnosis could only be made at autopsy. If you are looking for problems not just in the brain, but in false beliefs held in the less tangible mind inhabiting that brain, when the fundamental problem is actually diffuse damage in nerves you haven't even considered, you can expect to make little progress over a period of decades. Unfortunately, there are some who seem to find this satisfactory.
 
I'm glad someone liked this. I was beginning to think it had fallen "dead-born from the press", to quote David Hume. I'm sure much of this was far outside readers' education, and I don't expect most people to grasp the tremendous theoretical advances which have taken place in dynamics. It is mainly important to understand that big advances did take place, and that the theoretical tools mentioned, as well as others, are available. Ignorance is no longer acceptable, even for those who disagree. Anyone who still thinks variations in medicine must be random should have an understanding of ergodic theory. It is quite possible for deterministic processes to produce apparently random behavior. There are even simple examples.
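One such simple example is the logistic map, sketched below purely as an illustration: a deterministic one-line rule whose output passes casual inspection as random, and where two nearly identical starting points soon bear no resemblance to one another.

    # Logistic map: deterministic, yet its output looks random, and tiny
    # differences in starting point grow until the trajectories are unrelated.
    x, x_nearby = 0.2, 0.2 + 1e-9
    for step in range(1, 41):
        x = 4.0 * x * (1.0 - x)
        x_nearby = 4.0 * x_nearby * (1.0 - x_nearby)
        if step % 10 == 0:
            print("step %2d: x = %.6f, nearby start = %.6f" % (step, x, x_nearby))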

I was prepared to argue with people who thought I was suggesting this was all about differential equations. I am well aware that biological behavior frequently involves more advanced concepts like integral equations, differential-delay equations, etc. All of these may now be approached via the concepts of iterated maps, sections and suspensions. Having computers widely available to do the heavy lifting has transformed the study of dynamics. These are now even available in consumer products for use by healthy people during exercise, if they have not been incorporated in mobile phones. (The iPhone 6 can tell when you are climbing stairs.)
 
I'm not sure it's that doctors and researchers don't understand relative difference (re: 'normal' temperatures), or recognize that not all their data will fit a linear pattern; it's that determining someone's health relative to their prior measurements is WORK, and in a system where you see your patient for ten minutes, anything that requires intricacy of thought is going to be ignored.

Then again, I recently went to the doctor having had 102-degree fever that had been reduced through the liberal use of anti-inflammatories (aspirin AND naproxen AND feverfew). When I arrived, the doctor took my temperature and it was in the high 98s. "That's fine," she said. "Not for me," I replied, "my normal temperature is in the 97s these days." She looked as though I were speaking in tongues.

Perhaps some people are ignorant and others are lazy. ;)
 
There are several problems in the incident you describe, and I don't know how to cope with them in clinical situations. There would be problems in assuming the average patient has one testis and one ovary. There are instrumental errors in measurements. There are dynamic variations in the majority of physiological measurements which tell how the underlying biological machinery is operating.

It was the last kind of problem which concerned me most in writing the above. Simply assuming only mean values are real, and that all variation around these is random, ignores a great deal of information which could be useful, as in interbeat intervals in heart rate. Loss of that variability is definitely a bad sign.

My belief is not that the same kind of dynamics found in physics will always apply to medicine. That would surprise me. What I am getting at is that you can't study unknown dynamics if you completely ignore rates of change because you "know" change is random.

I wouldn't try too hard to get individual doctors to pay attention to your abnormal "normal" temperature (though this observation of subnormal temperatures has been made before); that simply is not part of the culture of this tribe. You were speaking Xhosa to a Zulu. The real problem is at the level of research and training.

We have no means at present to judge the state of health of patients unless they exhibit clear clinical signs or show measurements far outside of the broad range of variation generations of doctors have decided to ignore. You need to be careful about pushing doctors to do something in this state of ignorance.
 
I had to split this up because it was too long!

"You need to be careful about pushing doctors to do something in this state of ignorance."

Because acting out of ignorance could result in iatrogenic harm, or because it induces cognitive dissonance in the doctor, who may then become irrationally angered? Or is the answer to that question, "yes"?

"Simply assuming only mean values are real, and that all variation around these is random, ignores a great deal of information which could be useful..."

Agreed. I stand by the idea that it's laziness that causes the type of thinking in which only the average or even median value is 'normal'. All it requires is bimodal thinking: it's in range/out of range. That's seriously all you have to know. When you consider the alternative: looking for trends both in patient subsets and in individuals as markers for pre-disease – why, that sounds like work.

I'm not sure I agree that physicians consider ALL variations around the mean to be random. Certainly values that fall out of range are *sometimes* examined. On the other hand, I was having a conversation the other day with PWME in which we observed that, no matter how far our immunoglobulins fell out of range, this was deemed a 'blip' not worth pursuing. (Sometimes it seems as though you throw a sentence out there that I feel could have a multitude of meanings, but then move on before describing your idea in depth.)
 
Part II:

"What I am getting at is that you can't study unknown dynamics if you completely ignore rates of change because you "know" change is random. "

Ugh, yes.

"I wouldn't try too hard to get individual doctors to pay attention to your abnormal "normal" temperature..."

I stated it, she looked baffled, I moved on. ;)

"....simply is not part of the culture of this tribe. You were speaking Xhosa to a Zulu."

Haha, oh dear. I love this metaphor. At the same time, my stubbornest side says that tribes are composed of people, and that when their behavior changes, so alters the mores of the tribe.

Or, you know, they're excommunicated. This is also a possibility.

-J
 
My comment about pushing doctors to "do something" while they are ignorant was based on personal experience. I survived. Not every patient will.

When isolated readings fall outside a "normal" range, but don't fall into any conceptual slot, they are almost ignored. I'm thinking now about a patient whose liver enzymes ran at 10x the healthy maximum. "We'll have to watch that," the doctor says. That is precisely what they do: watch. If the patient gets better, then it can be dismissed as laboratory error, or some idiopathic mystery. If the patient ends up in the hospital, it generally becomes someone else's problem.

For some laboratory tests you may find 20% of results in error. In the case of Lyme disease this means the major function of testing is a kind of theater to reassure patients that someone knowledgeable is dealing with the problem. In fact, you can't get a reliable negative without repeating the test enough times to make false positives likely. Once a patient is treated it is generally impossible to tell whether the infectious agent is really gone, instead of going into a state of low activity from which it may emerge later.
 
