While researchers dealing with biochemistry of the central nervous system were plagued by doubts, uncertainty and confusion, medical practitioners exhibited far more confidence. The distance between "this just might make you feel better" and "this will fix what ails you" is short enough to fit on a prescription pad.
Prescriptions which don't actually cure anything have the advantage of producing a steady stream of repeat customers. Snap judgments of what to do about a problem, delivered with confidence, also increase the rate at which physicians can "treat" patients, and generate corresponding cash flow.
(This does not imply that this is a conscious decision. People don't need to make detailed, elaborate and nefarious plans to exploit a resource any more than cats have to understand what fishing boats do over the horizon that makes it worthwhile to hang out at docks.)
Pharmacists have long known that far more prescriptions are written to make the patient (temporarily) go away than to cure disease. It was a pharmacist who developed the popular patent medicine called Coca-Cola. Early versions did actually contain some cocaine, in addition to a heavy jolt of caffeine and sugar. This led to a need for a second invention: a formula which was not officially habit-forming.
(Despite urban legends, adding aspirin to Coke will not precipitate anything of recreational interest. Nonetheless, elderly southern relatives of mine continued to refer to the beverage as "dope" long after it moved from under-the-counter back to over-the-counter.)
Opium addiction had been known since ancient times. (You can find an indirect reference in Odysseus' stay with the 'lotus eaters' in the Odyssey.) The laudanum mentioned earlier in a quote from the 17th century was the name given by Paracelsus to a tincture of opium. Medical students had serious problems with opiate addiction in the period we covered in discussing the discovery of anesthesia.
Education of medical students still includes lectures with the same warnings about addiction. At one time, not too far in the past, about one doctor in ten could be expected to have a serious problem with substance abuse, probably involving opiates (though I don't know if or how widespread testing has changed this). Adding cocaine and amphetamine to the options widened possibilities for substance abuse. So did barbiturates. Even more bizarre experiments became possible in the last generation or two. What some medical students have survived would make your hair stand on end, even if you were not a patient of theirs. There are exclusive and discreet clinics specifically for rehabilitation of medical professionals.
(Arthur Conan Doyle was sufficiently familiar with cocaine dependence among medical doctors to provide a compelling description of Sherlock Holmes's struggle with cocaine.)
With this background, you can imagine what happened to patients with mysterious complaints as new pharmaceuticals appeared. Anything which produced an immediate reduction in complaints was a candidate treatment. One problem we now know about this is that anything which produces euphoria ahead of side effects is likely habit-forming. It took a long time to learn this. Some have not yet learned.
Skipping a great deal of unusually colorful history, let's move into the era of modern psychopharmaceuticals. The discovery that lithium salts could reduce mania was not accepted for well over a decade. Why was this?
An earlier experiment with lithium salts as a salt substitute for those with high blood pressure had been a disaster. Some patients with heart conditions simply dropped dead of cardiac arrest. To be effective, lithium salts have to be kept within a factor of two of the toxic threshold. This requires regular blood tests, and conscious awareness of possible interactions.
Unipolar depression was not considered an indication for lithium salts. The first modern antidepressant (apart from stimulants like amphetamines) was iproniazid, a non-selective, irreversible monoamine oxidase inhibitor (MAOI). It was only discovered in 1952, when patients receiving it for treatment of tuberculosis became "inappropriately happy". Why would that worry anyone?
With all I've said before, you might wonder why doctors would be worried about patients feeling good. This takes us back to yet another episode some would like to forget.
Many of the symptoms of illness are caused by immune response, not the disease itself. When corticosteroids became available they were tried on all kinds of illnesses where inflammation was a problem. In the case of tuberculosis, this led to another disaster when patients reported feeling much better even as the bacterial infection ran wild in their bodies. Steroids suppressed immune response, not infection. Doctors testing Iproniazid for its effects on tuberculosis were on the lookout for just this kind of problem. Some doctors do learn. Some patients do survive.
(You might well also ask why experimental treatments for tuberculosis with drugs of very questionable potential were going on in 1952. Hadn't penicillin eliminated the disease?)
The discovery that there existed antidepressant chemicals which did not produce euphoria and addiction set off a gold rush. Several new classes of antidepressants were introduced. Earlier ones seldom disappeared, despite risks. A few patients always failed to respond to other drugs.
(Those who can't obey the dietary restrictions that accompany MAOIs are in danger of hypertensive crises. This is not a property of the molecules; it comes from the context in which they are used. Other patients have tolerated the same drugs for decades.)
When you understand that the original antidepressant was being tested for treating infectious disease, you will gain some appreciation for the state of the art at the time it was introduced. Detailed study of the twists and turns of this vibrant growth industry would take us far afield. I want to concentrate on the mindset of medical doctors prescribing these drugs.
When you recall that many were trained before most neurotransmitters were discovered (there are now hundreds), and had not kept up with the wild shifts in research, you will understand the need for a simplified model. Nobody could be very confident about research findings, and patients needed reassurance they might understand. The model offered was based on levels of biochemicals in the body.
If you were depressed the idea was that you had too little serotonin in your body, and the pills would raise this level until you felt happy. If you were schizophrenic, it was because you had too much dopamine, and so on.
Had this been true, the simple solution, for depression, would have been to ingest cheap generic L-tryptophan, a precursor of serotonin. (It was not true, and I don't recommend anyone do so, especially if they are taking antidepressants.)
Level-based models have an inherent problem: people don't go nuts every time they eat, or become hungry, or when their diet changes. When you ingest drugs, the concentration in the brain changes in hours, while the beneficial effects of many non-addictive psychoactive drugs require weeks. Brains have to keep working despite substantial changes in the internal biochemical milieu. This kind of robust stability is missing from simple models.
Evolution found a sophisticated solution long before humans existed. In engineering, it is called differential signaling. Biology takes this to heights seldom approached by humans. The significant feature is that strength of response depends on ratios of uptake rates at receptors. (Not reuptake rates; that is another story.) The dependence is also strongly non-linear (which makes many aspects counterintuitive to people trained in linear mathematics). Robust stability requires this.
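The contrast between a level-based model and a ratio-based one can be made concrete with a toy sketch. This is a deliberately crude illustration, not biology: the function names, the logistic curve, and the `gain` parameter are all my own assumptions, chosen only to show why a ratio-based, non-linear response is robust to global shifts in concentration while an absolute-level model is not.

```python
import math

def level_response(a):
    """Naive 'level' model: response tracks absolute concentration."""
    return a

def differential_response(a, b, gain=4.0):
    """Toy differential model: response depends only on the ratio a/b,
    passed through a steep (non-linear) logistic curve.
    All names and parameters are illustrative assumptions, not biology."""
    x = math.log(a / b)                        # ratio, on a log scale
    return 1.0 / (1.0 + math.exp(-gain * x))   # steep sigmoid in the ratio

# Doubling both signals (a global change in the biochemical milieu)
# doubles the level model's output...
print(level_response(1.0), level_response(2.0))
# ...but leaves the differential model's output unchanged,
# because the ratio a/b did not move.
print(differential_response(3.0, 1.0), differential_response(6.0, 2.0))
```

The steep sigmoid also illustrates the non-linearity mentioned above: near a ratio of one the output swings rapidly, while far from it the output saturates, so equal changes in input do not produce equal changes in response.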
With this background, we have to ask why the above model became so popular. At one time, I guessed this was just a fable for patients. Long contact with a number of physicians, some personal friends, finally convinced me they really believed more or less what they were telling patients. (I can't speak for all, but I think this applies to the majority.)
Where did this model come from? Something with a close resemblance has been around for a very long time: the theory of humors. The difference is that doctors now talk about low serotonin or high dopamine instead of melancholy = black bile, excess phlegm = phlegmatic, too much blood = sanguine, yellow bile = feeling bilious, etc. The elements in this system were earth, air, fire and water. The words changed, but the thinking behind them did not.
There is another kicker in research results since these explanations became popular. Receptor mapping now extends far outside the brain. We now know something like 80% of serotonin receptors are in the gut, with still more in the nervous system outside the brain. Inside the brain the distribution of receptors doesn't tell the whole story, because the vast majority of receptors are closed by glial blockade. (Those white cells of the brain that don't get respect do more than anyone had guessed.) The part of the nervous system which might be able to respond according to the simple model of antidepressant action uses only about 1% of all possible receptors of the given type.
Disclaimer: I don't want to stop anyone from taking medications which might keep them alive and out of hospitals. As has been said before, I don't refuse dinner because I don't understand how my stomach digests it. Quinine was used long before the cause of malaria was understood, and the molecular basis of the drug's action made clear.
If you don't feel like living, and no other treatment works, have a trial of antidepressants instead of taking more drastic action. I have seen people off their medication for other conditions, and it isn't pretty. If the price of being off medication is mania, terror or hallucinations, that is too much to pay for the privilege. In the absence of cures, palliative treatment may be needed to keep people alive until cures arrive. I want all of us to make it to that goal of a cure.
What I want everyone to understand is that the models in use in treatment are seriously flawed. Anyone reasoning on that basis who tells you they know exactly what a particular medication will do for a specific person is lying. By chance, they might turn out to be right some percentage of the time, but at this stage they cannot know.
When doctors say it would be unethical to experiment on patients, they are deliberately ignoring the fact that in large areas of medicine every new prescription is an experiment. Even if drugs are uniform commodities, patients are not. We live in a time when there is great change and uncertainty about what will be appropriate treatment next year. Treatment in the case of ME/CFS remains empirical. This is not as different from traditional practice as regularly claimed.
Find doctors who will check that they are not doing harm. Make sure you have one who understands that if he/she has been doing something for a reasonable time, with no evidence it is working, it is appropriate to try something else. One other thing: when you are in an area where treatment must, of necessity, be empirical, don't take anything on authority.