• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Why Psychologists Should Reject Complementary and Alternative Medicine: A Science-Based Perspective

Dolphin

Senior Member
Messages
17,567
Free full text:
https://www.researchgate.net/public...ernative_Medicine_A_Science-Based_Perspective

Why psychologists should reject complementary and alternative medicine: A science-based perspective.

Swan, Lawton K.; Skarsten, Sondre; Heesacker, Martin; Chambers, John R.

Professional Psychology: Research and Practice, Vol 46(5), Oct 2015, 325-339.
http://dx.doi.org/10.1037/pro0000041

Abstract

Professional psychology is in apparent conflict about its relationship to “complementary” and “alternative” medicine (CAM)—some scholars envision a harmonious partnership, whereas others perceive irreconcilable differences.

We propose that the field’s ambivalence stems at least partly from the fact that inquiring psychologists can readily point to peer-reviewed empirical evidence (e.g., published reports of randomized controlled trials) to either substantiate or refute claims for the efficacy of most CAM modalities.

Thankfully, recent intellectual developments in the fields of medicine and scientific psychology—developments which we refer to collectively as the science-based perspective—have led to the identification of several principles that may be used to judge the relative validity of conflicting health intervention research findings, including the need to consider
(a) the prior scientific plausibility of a treatment’s putative mechanism-of-action; and, commensurately,
(b) the degree of equivalence between treatment and control groups—except for the single active element of the treatment believed to cause a specific change, all else between the 2 groups should be identical.

To illustrate the potential of this approach to resolve psychology’s CAM controversy, we conducted a rereview of the research cited by Barnett and Shale (2012) regarding the efficacy of 11 types of CAM that psychologists might endorse.

Fewer than 15% of the studies we reviewed (N = 240) employed research designs capable of ruling out nonspecific effects, and those that did tended to produce negative results.

From a science-based perspective, psychologists should reject CAM in principle and practice.

http://psycnet.apa.org/journals/pro/46/5/325/
 

Dolphin

Senior Member
Messages
17,567
One can argue that this is important/relevant for ME/CFS where therapies based on dubious premises (CBT, GET, etc.) have been tested in RCTs.

In general, these evidence quality pyramids have been successful in aiding healthcare providers who need to compare evidence between major study classes. For instance, when comparing the effectiveness of several different antibiotics, a systematic review is more likely to provide robust evidence than would any single clinical trial, in part because of the systematic review’s larger sample size. Similarly, it is unlikely that, when confronted with results from both an RCT and a cohort study, the cohort study (which does not allow for valid inferences about causation) would provide more reliable and valid results. But there are circumstances under which these simple hierarchies fail; circumstances under which even systematic reviews and RCTs will lead science-minded clinicians to spurious conclusions, and to unnecessary patient costs. The science-based medicine movement arose explicitly to address such conditions.

A case for prior plausibility. For science-based medicine proponents, the elevation of RCTs on most evidence quality pyramids rests on a critical (but usually unstated) premise: that “lower level” studies, particularly those that focus on suspected pathophysiological mechanisms or basic psychological principles, will have already been performed satisfactorily by the time an RCT report is published (see Gorski & Novella, 2014). That is, in the ideal realization of evidence-based medicine principles, each successful upward step on the study quality pyramid represents a necessary criterion for advancing to the next, such that the very existence of an RCT for any health intervention should signal that prior plausibility has already been established in basic, proof-of-concept research. In reality, orphan RCTs are quite common, and health researchers are under no particular obligation to demonstrate prior plausibility in a systematic fashion. CAM research seems to violate the upward-progression assumption more so than research on “conventional” treatments (see Novella et al., 2013; Offit, 2013; and Renckens & Ernst, 2003), typically by offering positive results from a top-tier evidentiary class (human clinical trials) before conducting studies which might reveal mediating mechanisms.

On the surface, it might seem sensible to begin and end with the “highest quality” research design. That is, if RCTs represent a “gold standard” for obtaining scientific evidence, it ostensibly stands to reason that discovering a statistically and clinically significant difference between randomly assigned treatment and control groups would always trump evidence gleaned from studies lower on the pyramid. Science-based medicine rejects this premise on two grounds.

First, the failure to establish prior plausibility before conducting human clinical trials has led indirectly to wasted research funding, and, more importantly, to patient harm. Take, for instance, a large ($30M) study commissioned to investigate the efficacy of chelation therapy (a technique for removing heavy metals from the body) as a treatment for coronary artery disease, despite the facts that (a) no cogent theories positing a causal role for heavy metal poisoning existed anywhere in the peer-reviewed literature (i.e., no prior plausibility), and that (b) the chelation procedure is risky and invasive (see Nissen, 2013 for a history of these failed trials). Similarly, consider a 2006 Cochrane review that called for more RCTs on the drug Laetrile, a cyanide-based anti-cancer treatment which 20 years earlier was shown to be “toxic and not effective” in basic research (Gorski, 2014).

Second, even RCTs—endorsed by the APA (2006) as the most effective way to mitigate threats to internal validity—suffer to varying degrees from a host of biases in design and execution. For instance, when control groups (e.g., no-treatment or wait-list conditions) are not equivalent to the treatment group in all respects besides the putative treatment “ingredient,” statistical analyses are likely to skew in favor of the research hypothesis, rather than the null. We will return to and expound on this particular claim in a later section of this article. But first, to explicate the nature and magnitude of the alleged false positive problem, we turn now to introduce scientific psychology’s burgeoning “new statistics and replication” movement.
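The control-group point above can be illustrated with a small simulation. This is a hypothetical sketch, not from the paper: the "treatment" is given a specific effect of exactly zero, but only the treatment arm receives a nonspecific expectancy boost (attention, belief in improvement), while a wait-list control receives neither. The trial nevertheless comes out "positive". All effect sizes and the simple z-test are illustrative assumptions.

```python
import random
import math

random.seed(42)

def simulate_trial(n=500, specific_effect=0.0, expectancy_effect=0.5):
    """Simulate one two-arm trial. The treatment's specific ingredient does
    nothing (specific_effect=0), but only the treatment arm gets nonspecific
    expectancy effects. Outcomes: higher = more improvement; unit-variance noise."""
    treatment = [random.gauss(specific_effect + expectancy_effect, 1.0)
                 for _ in range(n)]
    # Wait-list control: no treatment, and crucially no expectancy either,
    # so the arms differ in more than the single "active ingredient".
    waitlist = [random.gauss(0.0, 1.0) for _ in range(n)]
    return treatment, waitlist

def z_test_p(a, b):
    """Two-sided two-sample z-test p-value (normal approximation; fine for large n)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

t, w = simulate_trial()
p = z_test_p(t, w)
print(f"mean difference = {sum(t)/len(t) - sum(w)/len(w):.2f}, p = {p:.2g}")
# An inert specific ingredient plus unmatched expectancy still yields a
# statistically significant "treatment effect".
```

A ceteris paribus design would give the control arm the same expectancy component (a credible sham), leaving only the specific ingredient to differ between arms; in this simulation that is equivalent to setting `expectancy_effect=0`, and the spurious difference disappears.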
 
Last edited:

Dolphin

Senior Member
Messages
17,567
Identifying “researcher degrees of freedom.”

In a now-landmark subsequent paper, Simmons, Nelson, and Simonsohn (2011) tested an impossible hypothesis: that listening to a song about older age (“When I’m Sixty-Four” by The Beatles) can make people (college students) grow younger. They did not predict that students would feel younger (perhaps plausible), but rather that they would actually believe themselves to be chronologically younger (decidedly implausible). Indeed, the authors found that people on average reported being nearly a year-and-a-half younger after listening to “When I’m Sixty-Four” (M = 20.1 years) than their (randomly assigned and therefore equivalent) control group counterparts (M = 21.5 years) who listened to a non-age-related song, p < .05. Thus, Simmons et al. provided a tongue-in-cheek analogue to Bem’s precognition studies—a situation in which prior science provides no plausibility for the research hypothesis, but in which standard experimental trials nevertheless found support for it.

More importantly, Simmons et al. nominated and described four discrete causes of such false positive findings: flexibility in researchers’ ability to choose (a) dependent variables, (b) sample size, (c) covariates, and (d) which subsets of experimental conditions to report. Nondisclosure of these behind-the-scenes decisions (deemed researcher degrees of freedom), according to the authors, effectively allows social scientists to present anything as statistically significant, and, therefore, as “real.”

In the view of Simmons et al., researcher degrees of freedom are largely unconscious—they capitalize on the instinctive human tendency to seek out information that confirms (rather than information that falsifies) what one already believes. Thus, when making routine, seemingly innocuous decisions about study design, data collection, and statistical analyses, researchers are generally unwittingly disposed to err in favor of their hypotheses.
And because frequentist statistics—those statistical equations that calculate statistical significance (p) values from a sample to draw conclusions about an entire population—are inextricably tied to those researcher degrees of freedom, each additional decision improperly biased in favor of confirming a hypothesis inflates the likelihood of a false positive above and beyond the baseline of 5%—the percentage of studies that, given that there is no genuine effect, will produce a false positive result at alpha = .05.
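The inflation mechanism can be sketched numerically. Under the null hypothesis a well-behaved test's p-value is uniform on (0, 1), so each extra analysis option a researcher may try (a second dependent variable, a subset of conditions) gives another chance to cross p < .05. The simplifying assumption here, which is mine and not the paper's, is that the options behave like independent tests; real correlated analyses inflate somewhat less.

```python
import random

random.seed(0)

def flexible_analysis(n_options):
    """One 'study' under the null hypothesis: the researcher tries n_options
    analyses and declares success if ANY yields p < .05. Each option is
    modelled as an independent test, so each crosses .05 with probability 5%."""
    return any(random.random() < 0.05 for _ in range(n_options))

trials = 100_000
rates = {}
for options in (1, 2, 4):
    rates[options] = sum(flexible_analysis(options) for _ in range(trials)) / trials
    print(f"{options} analysis option(s): false positive rate = {rates[options]:.3f}")
# One option stays near the nominal 5%; with independence, k options give
# 1 - 0.95**k, i.e. about 9.8% for two and 18.5% for four.
```

This is why nondisclosure matters: a reader who sees only the one reported analysis has no way to tell a genuine 5%-risk result from a 1 − 0.95^k one.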
 

Dolphin

Senior Member
Messages
17,567
Summary: The Science-Based Perspective
When a study lacks prior plausibility, defined either by a paucity of basic research on an intervention’s putative mechanisms of action and/or the invocation of concepts that contradict well-established knowledge from other scientific disciplines, the science-based perspective calls for a commensurately higher standard for acceptance—one which acknowledges the high likelihood that researcher degrees of freedom or some other form of error may have skewed the results of that study toward a false positive conclusion. Thus, when experiments purport to show that some people possess psychic abilities, that listening to a song about old age can cause clocks to run backward, or that inserting needles into a person’s skin unblocks the flow of “vital energy,” the most likely explanation is a flaw in the research process.
Similarly with claims about CBT and GET for ME/CFS.
 

user9876

Senior Member
Messages
4,556
One can argue that this is important for ME/CFS where therapies based on dubious premises (CBT, GET, etc.) have been tested in RCTs.
But it is important to say that running an RCT doesn't necessarily provide support for a treatment. With PACE, the preferred treatments will tend to be favoured by the measurement systems (in that the treatments change perceptions of illness, and the measurements capture perceived fatigue and physical function).

So we really should never accept that an RCT is necessarily going to produce reliable evidence.
 

Dolphin

Senior Member
Messages
17,567
The section "Ceteris Paribus: A Science-Based Heuristic for Evaluating CAM Research" explains the theory and gives an example.
 

Dolphin

Senior Member
Messages
17,567
Recommended Reading

Placebo Effects
Novella, S. P. (2008). The placebo effect. Science-Based Medicine. Retrieved from http://www.sciencebasedmedicine.org/index.php/the-placebo-effect/
A conceptual primer for readers of all backgrounds.

Kirsch, I. (2010). The emperor’s new drugs: Exploding the antidepressant myth. New York, NY:
Basic Books.
Chapters 5 and 6 together provide an excellent and accessible overview of the placebo effect’s underlying mechanisms (written for a lay audience).

Wampold, B. E., Minami, T., Tierney, S. C., Baskin, T. W., & Bhati, K. S. (2005). The placebo is powerful: Estimating placebo effects in medicine and psychotherapy from randomized clinical trials. Journal of Clinical Psychology, 61(7), 835-854.
Discusses the problems with using double-blind placebo control groups in psychotherapy.

Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2014). Why ineffective psychotherapies appear to work: A taxonomy of causes of spurious therapeutic effectiveness. Perspectives on Psychological Science, 9(4), 355–387.
Lilienfeld et al. explain in detail how “placebo effects” manifest in diverse ways in the psychotherapeutic context.

Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445-454.
Boot et al. argue that even “active” (ceteris paribus) control groups in psychology very often fail to adequately control for expectancy effects.
 

Esther12

Senior Member
Messages
13,774
Similarly with claims about CBT and GET for ME/CFS.

I don't know - one of the problems I have with 'Science Based Medicine' is that it seems to give great power to those in positions of authority who get to decide what is 'plausible'. The consensus view within medicine over the last decades seems to have been that CBT/GET are plausible treatments for CFS, but homeopathy is not. So while there is no good evidence that CBT/GET are more effective than homeopathy, they are still viewed as 'mainstream' medicine, not quacky CAM.

I think I prefer simpler Evidence Based Medicine, but with lessons learned from the way utterly implausible CAM can gain 'positive' results in poorly designed trials. Allowing SBM to just dismiss positive CAM trials, but then endorse similarly designed trials that happen to fit in better with the current medical consensus sounds pretty unscientific to me. If implausible treatments are consistently gaining positive results in trials, the problem is with the trials, not the treatment's implausibility.
 

Dolphin

Senior Member
Messages
17,567
I don't know - one of the problems I have with 'Science Based Medicine' is that it seems to give great power to those in positions of authority who get to decide what is 'plausible'. The consensus view within medicine over the last decades seems to have been that CBT/GET are plausible treatments for CFS, but homeopathy is not. So while there is no good evidence that CBT/GET are more effective than homeopathy, they are still viewed as 'mainstream' medicine, not quacky CAM.

I think I prefer simpler Evidence Based Medicine, but with lessons learned from the way utterly implausible CAM can gain 'positive' results in poorly designed trials. Allowing SBM to just dismiss positive CAM trials, but then endorse similarly designed trials that happen to fit in better with the current medical consensus sounds pretty unscientific to me. If implausible treatments are consistently gaining positive results in trials, the problem is with the trials, not the treatment's implausibility.
The CBT and GET models assume that the symptoms can be explained by deconditioning.
If one can show that a similar symptom pattern is not present in those who are deconditioned or something similar, then the CBT and GET models are not plausible and so trials supposedly showing their effectiveness should be treated with scepticism.
 

Esther12

Senior Member
Messages
13,774
The CBT and GET models assume that the symptoms can be explained by deconditioning.
If one can show that a similar symptom pattern is not present in those who are deconditioned or something similar, then the CBT and GET models are not plausible and so trials supposedly showing their effectiveness should be treated with scepticism.

But it seems they can always come up with a new 'plausible' explanation (or concoction of various explanations, to be selected from and adapted to the particular patient in a holistic manner). More and more people giving up on deconditioning as an explanation of symptoms in CFS hasn't done much to slow the promotion of CBT/GET as effective treatments.
 

barbc56

Senior Member
Messages
3,657
We often read or hear something in the media and then later hear what seems to be contradictory information.

A lot of that has to do with the media reporting preliminary studies or animal studies in such a way that they sound like definitive studies. It makes good news. The press releases put out by the universities and labs where the studies are conducted do the same thing, sometimes without the researchers knowing about or editing the announcement, and the person writing it may not have a science background. See here.

In the long run science should be changing as new things are discovered.

A priori reasoning is a bit different from just relying on the plausibility of the hypothesis. Even though plausibility is important, the meaning is a bit different from how the term is usually used, so it's easily misinterpreted. A priori considerations also include the type of statistics used: the Bayesian statistics definition of proof versus only using the p value as the holy grail of proof. That's just the beginning.

It's easily misperceived, even by scientists. I know I had a problem not only understanding this concept but also how it's put to use when I first learned about it.

SBM is part of EBM but a bit more refined. It's EBM that actually relies more on authority, and many studies on alternative medicine fall into this category.

I have a huge file on SBM on my computer and when I have more time, hopefully tomorrow, I will post here.

The above references are also helpful but sometimes hard to plow through, as there is a lot of information, sometimes made more confusing by some of the articles' overreliance on hyperlinks, which the authors assume readers will click, as a substitute for information that might be more understandable if it were simply put in the text. A product of our times, with advantages and disadvantages.

If anything I have said seems inaccurate, please let me know as this is my interpretation and still in the process of learning.
 
Last edited:

Esther12

Senior Member
Messages
13,774
A lot of that has to do with the media reporting preliminary studies or animal studies in such a way that they sound like definitive studies. It makes good news. The press releases put out by the universities and labs where the studies are conducted do the same thing, sometimes without the researchers knowing about or editing the announcement, and the person writing it may not have a science background. See here.

Isn't a lot of that to do with just competently engaging in EBM, rather than reason to insert prior assumptions about plausibility?

EBM is often done badly, and it's always good to be pushing to raise standards, but I'm just uncomfortable with the way some in SBM seem to think that they can put implausible CAM in a separate category to things like CBT, GET, and other 'plausible' behavioural interventions. I think that if similarly designed trials are showing similar evidence of efficacy for these sorts of interventions, we should be equally sceptical/accepting.

It's easily misperceived, even by scientists. I know I had a problem not only understanding this concept but also how it's put to use when I first learned about it.

I could have misunderstood something. I am largely going off debates I've read in the comment sections of blogs.
 

worldbackwards

Senior Member
Messages
2,051
I don't know - one of the problems I have with 'Science Based Medicine' is that it seems to give great power to those in positions of authority who get to decide what is 'plausible'. The consensus view within medicine over the last decades seems to have been that CBT/GET are plausible treatments for CFS, but homeopathy is not. So while there is no good evidence that CBT/GET are more effective than homeopathy, they are still viewed as 'mainstream' medicine, not quacky CAM.
Agree. If ME patients are going to start talking about "well established knowledge" regarding treatments, they're simply going to find themselves up against "well, look in the textbooks". And I believe we all know what's in the textbooks.
 

Dolphin

Senior Member
Messages
17,567
The CBT and GET models assume that the symptoms can be explained by deconditioning.
If one can show that a similar symptom pattern is not present in those who are deconditioned or something similar, then the CBT and GET models are not plausible and so trials supposedly showing their effectiveness should be treated with scepticism.
Even if we can't convince everyone else that the basis for some therapies is not justified, we could still use it to argue why we ourselves are sceptical, i.e. that it isn't simply based on prejudice about mental health or whatever.