• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


"Why summaries of research on psychological theories are often uninterpretable" (Meehl, 1990) (free)

Dolphin

Senior Member
Messages
17,567
I just saw the influential and renegade psychologist James C. Coyne retweet a message linking to this paper:
"Why summaries of research on psychological theories are often uninterpretable"

Meehl, Paul E.
Psychological Reports, Vol 66(1), Feb 1990, 195-244. doi: 10.2466/PR0.66.1.195-244

Abstract

Summary. Null hypothesis testing of correlational predictions from weak substantive theories in soft psychology is subject to the influence of ten obfuscating factors whose effects are usually (1) sizeable, (2) opposed, (3) variable, and (4) unknown. The net epistemic effect of these ten obfuscating influences is that the usual research literature review is well-nigh uninterpretable. Major changes in graduate education, conduct of research, and editorial policy are proposed.

Free at: http://www.tc.umn.edu/~pemeehl/144WhySummaries.pdf

It looks interesting, but I don't have the time to read it at the moment. I'll be interested to hear what anyone who does read it has to say.
 

anciendaze

Senior Member
Messages
1,841
He hits a number of points I've made elsewhere. I particularly like his reference to Jacob Cohen's work on statistical power. This is widely ignored when people shop for metrics that will give their papers more impressive numbers. Applying Cohen's d to the PACE data would pretty well damp down claims of effectiveness.
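For anyone unfamiliar, Cohen's d is just the difference in group means scaled by the pooled standard deviation, with conventional benchmarks of roughly 0.2 (small), 0.5 (medium), and 0.8 (large). A minimal sketch of the calculation; the group summaries plugged in below are made-up illustrative numbers, not actual PACE figures:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical group summaries (NOT taken from the PACE trial):
# treatment arm mean 58.2 (SD 24), control arm mean 50.8 (SD 25), 160 per arm.
d = cohens_d(58.2, 50.8, 24.0, 25.0, 160, 160)
print(round(d, 2))  # about 0.3: a "small" effect despite a significant p-value
```

The point being that with large enough samples a trivially small d can still clear the p < 0.05 bar, which is exactly the kind of inflation Cohen's power work warns about.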

His references to work by Carnap and Lakatos on inductive reasoning in science are also pertinent. These are far from trivial objections to common practices.

His overall thrust has to do with the effect of many flawed studies on a field of research as a whole. This is where he states that combined results are uninterpretable, i.e. not even wrong.

Here's a quote which will give you the flavor of this paper:
Crud factor: In the social sciences and arguably in the biological sciences,
"everything correlates to some extent with everything else." This truism, which I
have found no competent psychologist disputes given five minutes reflection, does
not apply to pure experimental studies in which attributes that the subjects bring
with them are not the subject of study (except in so far as they appear as a source of
error and hence in the denominator of a significance test).

Again, we are well aware of biological attributes of subjects that were ignored in PACE because they were not the subject of the study. The most striking was age, compared with the mean for the general population. Physiological status was also largely ignored. Showing that patients could be compared to people over 65 with heart conditions is not exactly the same as showing their problems were psychological.

While he is not specifically attacking medical recommendations based on psychological theories, you may also appreciate his conclusion concerning some professional psychologists:
In evaluating faculty for raises, promotion, and tenure, perhaps there
should be more emphasis on Science Citation Index counts, Annual Review
mentions, and evaluation by top experts elsewhere, rather than on mere publication
yardage. The distressing thing about this is that while academics regularly condemn
"mere publication count," a week later in a faculty meeting or a Dean's advisory
meeting they are actually counting pages in comparing Smith with Jones. This is
a disease of the professional intellectual, resting upon a vast group delusional
system concerning scholarly products, and I know my recommendations in this
respect have a negligible chance of being taken or even listened to seriously. Since
the null hypothesis refutation racket is "steady work" and has the merits of an
automated research grinding device, scholars who are pardonably devoted to
making more money and keeping their jobs so that they can pay off the mortgage
and buy hamburgers for the wife and kids are unlikely to contemplate with
equanimity a criticism that says that their whole procedure is scientifically feckless
and that they should quit doing it and do something else. In the soft areas of
psychology that might, in some cases, mean that they should quit the academy and
make an honest living selling shoes, which people of bookish temperament
naturally do not want to do.