Distribution of Irony

In past posts I've already said some things about situations where normal distributions should not be expected. These actually turn up all over biomedical statistics, and the real puzzle is how researchers avoid seeing them. I'll repeat some of that in this post; bear with me. In this discussion of an unusually dry subject, I can promise to uncover a piquant irony.

I've had some experience with cases where normal (Gaussian) distributions do arise, and I'll start with such an example to expose the differences.

Pointing a telescope is a classic case. The base on which you place the pier probably is not perfectly level. Even if it is, there are Earth tides which cause microscopic tilts twice a day. For most purposes these are negligible, but a high-powered telescope is an exception. The pier itself will not be perfectly straight. There will also be changes in shape due to wind loads, uneven heating, or some damn fool standing on the base pad. The polar axis will not be perfectly aligned with the Earth's axis, even if you make every effort to adjust the mount on the pedestal. (The Earth's rotational axis shifts by tiny amounts from year to year. Precession is fairly predictable, but nutation is faster and less predictable.) The bearings will have some tiny play. The shaft may not be perfectly straight, and the declination shaft may not be perfectly at right angles to the polar axis. More errors show up in the way the tube of the telescope is connected to the mount, the way the lens or mirror is attached to the tube, and the way your eyepiece or instrument is attached to the other end.

(I hope anyone with actual experience in taking measurements with telescopes will forgive me for using an old example which says a great deal about how things are not done today. We go to great lengths to avoid this concatenation of errors, but the reasoning involved then becomes much harder to explain to the general public.)

The important feature of all these errors, even after you do your best to eliminate them, is that they combine by adding. This allows you to compensate for errors in the base and pier by adjusting the mount in the opposite direction. Even those errors you can't control subtract about as often as they add. A normal distribution is a good model for such errors. You can go through a similar analysis for measuring microscopes or many other common instruments. These are typical examples of instrumental errors. Errors caused by random refraction of star light by the atmosphere are more complicated, but once you eliminate the largest systematic error, caused by angle above the horizon (angular altitude), a normal distribution is a useful approximation to the truth. (If you are dealing with active optics, you will probably need to do better, but that is another story.)
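
If you would rather see this numerically than at a telescope, here is a minimal Python sketch (numpy and scipy; the twenty uniform "error sources" and their sizes are invented purely for illustration). Each source on its own is flat rather than bell-shaped, yet the sum of twenty of them already looks very nearly normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy model of additive instrumental error: many small, independent
# contributions (tilt, flexure, bearing play, ...), none of them Gaussian
# on its own, summed to give the total pointing error.
n_sources = 20            # independent error sources per observation
n_observations = 100_000  # number of simulated observations

contributions = rng.uniform(-1.0, 1.0, size=(n_observations, n_sources))
total_error = contributions.sum(axis=1)

# The individual sources are uniform; their sum is already close to normal,
# with skewness and excess kurtosis near zero.
print("skewness:       ", stats.skew(total_error))
print("excess kurtosis:", stats.kurtosis(total_error))
```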

(That minor point about working hard to eliminate systematic error before you declare the remaining behavior random has a special personal significance. I was once nearly blown off the crest of a hill by an artillery projectile that should have been several standard deviations away from the hill. The gunner had simply forgotten to level the bubbles, introducing a very large error that was only "random" in a different, non-mathematical sense than the random errors recorded in tables. I regularly see biomedical statistics with substantial uncorrected systematic errors. People will then tell you that correcting those errors would introduce "bias", ignoring the bias already present. If such sloppy thinking got researchers blown off hills, there would be fewer published absurdities.)

At this point we encounter some heavy mathematics called the Central Limit Theorem (CLT), which tells us that random variables whose distributions satisfy only a few requirements will, when combined, approach a normal distribution in the limit of a large number of contributions. The most common prerequisite is that those contributing distributions are independent and identically distributed. In practice, independence is usually checked only as the absence of linear correlation, and non-linear dependencies can fool you this way. (There can be nonlinear dependencies which show little or no linear correlation even though the results are entirely deterministic. The common examples are the logistic map and the "tent map".) There are more general forms of the CLT which relax the requirement of identical distributions in favor of having many different distributions. The other requirements seem harmless enough: a well-defined mean and bounded variance. What typically gets passed over in silence is the "obvious" idea that errors combine additively.
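
To make the point about non-linear dependence concrete, here is a small Python sketch of the logistic map at its fully chaotic setting (the starting value and run length are arbitrary). Every value is an exact function of the previous one, yet the lag-1 correlation coefficient, the kind of linear check people usually rely on, comes out essentially zero:

```python
import numpy as np

# The logistic map at its fully chaotic setting: each value is an exact
# function of the previous one, yet the lag-1 linear correlation is
# essentially zero, so a purely linear check would suggest "independence".
def logistic_map(x0, n):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

xs = logistic_map(0.123456, 200_000)   # arbitrary seed value in (0, 1)
lag1 = np.corrcoef(xs[:-1], xs[1:])[0, 1]
print("lag-1 linear correlation:", lag1)   # near zero despite full determinism
```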

There are very important situations where errors combine multiplicatively. I first ran into one in reliability calculations of systems with many critical parts. If the probability of any single critical part functioning correctly reaches zero it scarcely matters what happens to the other parts -- the system will fail. This has direct application to the construction of mortality tables for people. If you look at the distribution of life spans you will see a long tail after age 30. It looks to me like that is roughly the point where most of us stop repairing something and accumulate faults like a machine.
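
A toy calculation shows how unforgiving multiplication is. The 200 parts and the 0.99 reliability figure below are made-up numbers, just a sketch of the arithmetic:

```python
import math

# Series-system reliability: every critical part must work, so individual
# probabilities multiply.  Many "pretty good" parts still make a poor system,
# and a single certain failure cannot be compensated by the rest.
part_reliabilities = [0.99] * 200              # 200 critical parts, 99% reliable each
system_reliability = math.prod(part_reliabilities)
print(f"200 parts at 0.99 each -> system reliability {system_reliability:.3f}")  # about 0.134

print(math.prod([0.0] + [1.0] * 199))          # 0.0, no matter how good the others are
```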

(When probabilities combine by multiplying there is no way to compensate for a previous low number; even if the cumulative probability has not reached zero, compensating would require a probability greater than 1.00. If you ever wonder why the mathematical groups describing the symmetries in the Standard Model of particle physics are all "unitary" groups, which I seriously doubt you do, it is because the probability that SOMETHING will happen must be exactly 1.00. You can make a great fool of yourself if you allow probabilities greater than 1.00 or less than zero.)

It happens that there are stable analytic distributions, turning up in these multiplicative and heavy-tailed settings, which do not have bounded variance at all; Lévy (stable) distributions are one example. If you are working with samples from these, whatever bounded variance you compute will be determined by the bounds you impose on the sampling process and by the number of sample points. This should be a cautionary tale for researchers. Many are unconsciously manipulating results without intent to deceive. They have simply learned that you have to do things a certain way in order to get results worth publishing. We tend to jump all over people who use statistics in novel ways, but we fail to question people who use customary techniques in situations where they do not apply.
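
Here is a quick numerical illustration (Python with numpy; the tail exponent of 1.5 and the sample sizes are arbitrary choices). The Pareto sample below has no finite theoretical variance, so whatever "variance" you compute is really a statement about how many points you drew and where you cut the sample off:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pareto sample with tail exponent 1.5: the theoretical variance is infinite.
# The variance you actually compute reflects your sample size (and any cutoff
# you impose), not the underlying distribution.
alpha = 1.5
for n in (1_000, 100_000, 10_000_000):
    sample = 1.0 + rng.pareto(alpha, size=n)
    print(f"n = {n:>10,}   sample variance = {sample.var():,.1f}")
# The computed "variance" tends to keep growing with n instead of settling
# down; the exact numbers depend on the seed.
```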

The hoary example of a normal distribution from biomedical statistics is the distribution of adult heights. Data are typically taken from measurements of Army recruits. These do show a pretty fair approximation of a normal distribution. I suppose the lengths of individual bones really should combine additively to determine height, especially after you eliminate anyone with rickets, kyphosis, amputations, etc. One problem with this mathematically ideal example is that the recruits being measured are typically all healthy males. You can do the same thing with female recruits today, and get a different normal distribution. The problem arises when you combine these into a distribution for the entire population. The result is almost a bimodal distribution, but not quite.
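
You can see the problem in a few lines of Python (numpy and scipy; the means and standard deviations below are round illustrative numbers, not survey data). Each single-sex sample passes a standard normality test, while the pooled sample fails it decisively:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative round numbers (centimetres), not survey data.
male     = rng.normal(178, 7, size=50_000)
female   = rng.normal(164, 7, size=50_000)
combined = np.concatenate([male, female])

# Each single-sex sample is genuinely normal; the pooled sample is a mixture
# and fails a standard normality test decisively (p-value effectively zero).
for name, sample in [("male", male), ("female", female), ("combined", combined)]:
    stat, p = stats.normaltest(sample)   # D'Agostino-Pearson test
    print(f"{name:>8}: normality-test p-value = {p:.3g}")
```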

Even for heights, you need to check that the variables you are measuring are actually normally distributed in the particular population you are considering. Combining two distributions falls far short of infinity; it is not even a large number, so you can't appeal to the CLT to save you. The limitation of measuring healthy recruits also ought to be a clear warning for medical researchers, but few seem to worry. Are they studying healthy people, or sick ones who can be presumed to be far from the mean?

With that showing up in the paradigmatic example, you might guess that other physiological measurements depart even more from the ideal. Many biological distributions plot as straight lines on log-log graphs, and obey power laws quite different from normal distributions.

(Heart rate variability is a good example of great medical importance. See the research of Bruce J. West, for example. The common assumption that variation in HR over time is random amounts to assuming that the variation in each interval is independent of the variation in other intervals, that is, that we are looking at a random walk. There is a simple test which requires no particular mathematical sophistication: if you randomly permute the intervals, the behavior should look fundamentally the same. A random walk can't get any more random. If people did this, they would immediately see that this behavior is far from random, even near the mean value. This could predict when patients are no longer healthy well before they cross some arbitrary threshold. Does anyone want to do this?)
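
For anyone tempted to try the shuffling idea, here is a bare-bones Python sketch (numpy). I am not working from real beat-to-beat interval data here; a synthetic correlated series stands in for it, and the statistic and window length are illustrative choices rather than a prescription:

```python
import numpy as np

rng = np.random.default_rng(3)

# The shuffling test in miniature.  A synthetic long-range-correlated series
# stands in for real beat-to-beat intervals; the statistic below is just one
# simple choice that is sensitive to the ordering of the values.
n = 10_000
increments = rng.standard_normal(n)
series = np.convolve(increments, np.ones(50) / 50, mode="same")  # crude correlations

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

shuffled = rng.permutation(series)
print("original series lag-1 autocorrelation:", lag1_autocorr(series))    # clearly nonzero
print("shuffled series lag-1 autocorrelation:", lag1_autocorr(shuffled))  # near zero
```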

Reasoning involving parametric statistics and confidence intervals based on assumed normal distributions will fail badly in these cases. This problem is widespread in biomedical research, even in work by highly respected researchers.

It was while thinking about this, and the relationship to funding, that I hit upon a really ironic coincidence.

Perhaps the largest single scientific research project in the world is the Large Hadron Collider at CERN. This is where normal distributions abound, because elementary particles really can be counted on to be identical, and instrumental errors do typically add. High-energy physics is the best place to find normal distributions. Distributions for such things as a Bose-Einstein condensate, which are far from normal, show up at the other end of the scale, close to absolute zero.

The irony shows up when you plot funding for research projects on a graph. Funding levels are more likely to resemble the Pareto distributions known to economists. (You could think of Pareto distributions as a limiting case of Lévy distributions. The entire distribution is like the "tail" of a Lévy distribution, and will plot as a straight line on a log-log graph.) The rich have the resources to defend their interests, while the poor do not. This can result in 80 percent of the wealth in the hands of 20 percent of the population. This is also true of research projects, which accumulate more support and better defenses as they employ more people, and more important people.
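
If you wonder where the famous 80/20 figure comes from, here is a short Python sketch (numpy). The million simulated "project budgets" are pure invention, and the tail exponent of about 1.16 is simply the value that makes the classic 80/20 split come out:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated "project budgets" drawn from a Pareto distribution.  A tail
# exponent of about 1.16 is the value that yields the classic 80/20 split.
alpha = 1.16
budgets = 1.0 + rng.pareto(alpha, size=1_000_000)
budgets.sort()

top_20_percent = budgets[int(0.8 * budgets.size):]
share = top_20_percent.sum() / budgets.sum()
print(f"share of total funding held by the best-funded 20%: {share:.0%}")
# Roughly 80 percent, though with a tail this heavy the figure jumps around
# from run to run.
```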

So, the places where you will find heaviest use of normal distributions and strong belief in their importance are also excellent examples of statistical distributions which are very far from normal when it comes to funding. I'm willing to place a bet that this is also true in biomedical research.

Any takers?
