"The British amateur who debunked the mathematics of happiness"

Tom Kindlon

Senior Member
Messages
1,734
The British amateur who debunked the mathematics of happiness
The astonishing story of Nick Brown, the British man who began a part-time psychology course in his 50s – and ended up taking on America's academic establishment

The Observer, Sunday 19 January 2014

http://www.theguardian.com/science/2014/jan/19/mathematics-of-happiness-debunked-nick-brown

[..]

On whistle-blowing:

Fredrickson is the object of widespread admiration in the field of psychology. Martin Seligman, former president of the American Psychological Association and a bestselling author in his own right, went so far as to call her "the genius of the positive psychology movement". On top of which she is also an associate editor at American Psychologist.

By contrast, Brown was a first-term, first-year, part-time masters student who was about to take early retirement from what he calls a "large international organisation" in Strasbourg, where he had been head of IT network operations. Who was he to doubt the work of a leading professional which had been accepted by the psychological elite? What gave him the right to suggest that the emperor had gone naturist?

"The answer," says Brown when I meet him in a north London cafe, "is because that's how it always happens. Look at whistleblower culture. If you want to be a whistleblower you have to be prepared to lose your job. I'm able to do what I'm doing here because I'm nobody. I don't have to keep any academics happy. I don't have to think about the possible consequences of my actions for people I might admire personally who may have based their work on this and they end up looking silly. There are 160,000 psychologists in America and they've got mortgages. I've got the necessary degree of total independence."

On the specific problem:
"She's kind of hoping the Cheshire cat has disappeared but the grin is still there," says Brown, who is dismissive of Fredrickson's efforts at damage limitation. "She's trying to throw Losada over the side without admitting that she got conned. All she can really show is that higher numbers are better than lower ones. What you do in science is you make a statement of what you think will happen and then run the experiment and see if it matches it. What you don't do is pick up a bunch of data and start reading tea leaves. Because you can always find something. If you don't have much data you shouldn't go round theorising. Something orange is going to happen to you today, says the astrology chart. Sure enough, you'll notice if an orange bicycle goes by you."

This is why replication is so important. Also, one can sometimes test within the data itself, e.g. by splitting it into a training set and a held-out test set (see the sketch below).
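To make that concrete (a toy example of my own, nothing from the article): if you go fishing for patterns in one half of a dataset, the honest check is whether the same pattern survives in the half you held back. In pure noise it usually doesn't. A minimal sketch in Python, with entirely made-up data:

```python
# Toy illustration (hypothetical data): a pattern "found" by searching one half
# of the data should be checked against the held-out half before theorising.
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 20 candidate "predictors", none of which truly relates to the outcome.
n, p = 100, 20
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Split into a training half and a held-out test half.
train = np.arange(n) < n // 2
test = ~train

# "Tea-leaf reading": pick whichever predictor correlates best on the training half.
train_corrs = [abs(np.corrcoef(X[train, j], y[train])[0, 1]) for j in range(p)]
best = int(np.argmax(train_corrs))

# The same predictor usually looks unimpressive on the held-out half.
test_corr = np.corrcoef(X[test, best], y[test])[0, 1]
print(f"best training correlation: {max(train_corrs):.2f}")
print(f"same predictor on held-out data: {test_corr:+.2f}")
```

With 20 predictors and only 50 training points, the best training correlation will typically look "interesting" (often above 0.3) while the held-out correlation hovers around zero, which is exactly Brown's point about always being able to find something.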


Another reason why one needs outsiders as whistle-blowers:
But social psychology is full of theorising and much of it goes unquestioned. This is particularly the case when the research involves, as it does with Fredrickson, self-report, where the subjects assess themselves.

As John Gottman says: "Self-report data is easier to obtain, so a lot of social psychologists have formed an implicit society where they won't challenge one another. It's a collusion that makes it easier to publish research and not look at observational data or more objective data."

In general, says Gottman, the results of self-report have been quite reliable in the area of wellbeing. The problem is that when it comes down to distinguishing, say, those who "languish" from those who "flourish", there may be all manner of cultural and personal reasons why an individual or group might wish to deny negative feelings or even downplay positive ones.
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
"Not many psychologists are very good at maths," says Brown.

"Not many psychologists are even good at the maths and statistics you have to do as a psychologist. Typically you'll have a couple of people in the department who understand it. Most psychologists are not capable of organising a quantitative study. A lot of people can get a PhD in psychology without having those things at their fingertips. And that's the stuff you're meant to know. Losada's maths were of the kind you're not meant to encounter in psychology. The maths you need to understand the Losada system is hard but the maths you need to understand that this cannot possibly be true is relatively straightforward."

http://www.theguardian.com/science/2014/jan/19/mathematics-of-happiness-debunked-nick-brown
 

SOC

Senior Member
Messages
7,849
Sadly, this type of pseudo-mathematical claptrap is not occasional, but common. As a graduate student in engineering, I was horrified to find our psychology faculty and grad students using a simple statistical package with so little understanding of the statistics or software that they were making huge errors just in the input. They didn't even bother to read the manuals; they just asked the (totally ignorant) person at the next desk. Their understanding of the output was even worse, so the conclusions they drew were beyond outrageous. What they called evidence was simply a lot of Garbage Out. It made me sick.

And just for giggles -- I had business dealings with the psychology academic (yes, one of those faculty members at my U) who was called "a leading authority in the psychology of successful relationships". The man had the social/interpersonal skills of a rock and his personal life was not exemplary of any understanding of successful relationships. What a laugh... if it weren't so academically horrifying. He's one of the reasons I am suspicious of the integrity of psychology and psychiatry.

ETA: For the record -- I have known a couple of psych clinicians who were wonderful people with amazing skills in helping others. One probably saved me from a life of misery. So I don't think they're all worthless. ;) Unfortunately, I've known far, far more charlatans and downright evil people working as therapists, psychologists, and psychiatrists. Sad, really, that the profession doesn't police itself better.
 
Last edited:

Roy S

former DC ME/CFS lobbyist
Messages
1,376
Location
Illinois, USA
That's a good article. It's a little surprising that, even after all that, they had to go through more hoops to get another response published.

"After initially being turned down, Brown, Sokal and Friedman went through American Psychologist's lengthy appeals procedure and won the right to reply to Fredrickson's reply. They are currently working on what is certain to be a very carefully considered response."
 

anciendaze

Senior Member
Messages
1,841
Psychology and psychiatry are not the only fields with seriously flawed statistical reasoning. You can find errors in logic all over medical statistics. Convenient assumptions about normal or Gaussian distributions lie behind almost all parametric statistics. When it is necessary to justify the assumption, researchers often appeal to the Central Limit Theorem, though few have gone through proofs of this theorem to see if it applies. There are several preconditions for the theorem, and different proofs use different sets of them; the broadest require that the separate distributions being combined have well-defined means and bounded variance, and that they combine additively. It is this last "obvious" assumption which I've addressed in a number of posts. There are in fact theorems about combining distributions by multiplication which yield distributions far from "normal". These often fail to have well-defined variance or standard deviation.

The old joke about normal distributions is that mathematicians think this is an experimental fact, while experimentalists think it is a mathematical guarantee. Both are mistaken.

I addressed some of these issues in a series of posts here. Just to avoid confusion, I want to explain now that I'm talking about two different ways in which I know things combine by multiplication: probability of survival and efficiency of operation. That second combination likely also applies to multiple defects in systems with redundancy, though I haven't gone through the details in that case.
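A small numerical sketch of that distinction, using an assumed toy model of my own rather than anything specific from my earlier posts: combine the same independent positive factors once by addition and once by multiplication. The sums come out roughly symmetric, as a Central Limit Theorem argument expects; the products come out strongly right-skewed (roughly log-normal), with the mean pulled well away from the median.

```python
# Toy sketch (assumed model): sums of independent factors look roughly normal,
# but products of the same factors are strongly right-skewed.
import numpy as np

rng = np.random.default_rng(1)

# 10 independent positive "efficiency" factors per trial, each between 0.5 and 1.5.
factors = rng.uniform(0.5, 1.5, size=(100_000, 10))

sums = factors.sum(axis=1)       # additive combination: CLT applies to this directly
products = factors.prod(axis=1)  # multiplicative combination: CLT applies to log(product), not the product

def skewness(x):
    centred = x - x.mean()
    return (centred**3).mean() / (centred**2).mean() ** 1.5

print(f"skewness of sums:     {skewness(sums):+.2f}")      # near zero: roughly symmetric
print(f"skewness of products: {skewness(products):+.2f}")  # clearly positive: long right tail
print(f"products, mean vs median: {products.mean():.2f} vs {np.median(products):.2f}")
```

The point is only that an additive argument for normality doesn't transfer: if the quantities of interest combine multiplicatively, as survival probabilities and efficiencies do, the theorem applies to their logarithms, and parametric statistics applied to the raw values can be badly misled by the long tail.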

As one mathematically sophisticated correspondent said, "Ha! I can't even imagine trying to explain something like this to the medical doctors!!" I'm still working on it.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Do read the comments, especially the highly recommended ones.
My two favourites:

Toadjuggler
"Just as zero degrees celsius is a special number in thermodynamics," wrote Fredrickson in Positivity, "the 3-to-1 positivity ratio may well be a magic number in human psychology."

This would have raised eyebrows, had any real scientists read it. Zero Celsius is completely arbitrary and all thermodynamic calculations are done in Kelvin. I loathe these "sciences" where they pretend that mathematical models are mathematical proofs; economics is another. There is nothing wrong with a mathematically approximate model (if it works) so long as it is reported as such, but claiming Absolute Truth...

littlepump, 19 January 2014 6:09am

The Brown et al. paper is great; I can't remember the last time I chuckled so much reading a scientific article. I particularly liked:
They appear to assert that the predictive use of differential equations abstracted from a domain of the natural sciences to describe human interactions can be justified on the basis of the linguistic similarity between elements of the technical vocabulary of that scientific domain and the adjectives used metaphorically by a particular observer to describe those human interactions. If true, this would have remarkable implications for the social sciences. One could describe a team’s interactions as “sparky” and confidently predict that their emotions would be subject to the same laws that govern the dielectric breakdown of air under the influence of an electric field.

It is truly shocking that such thinking could be presented in a peer-reviewed article; you don't need to understand any maths at all to see what complete nonsense this is. That such thinking - never mind the abuse of maths, which would be equally obvious to anyone with even the slightest maths background (no expertise needed to see the deficits there either) - got through peer review is bad for the journal that published this crap, but that 300-odd people then cited this work suggests there is something fundamentally wrong with psychology as an academic field.

Though as @anciendaze says, psychology and psychiatry are not the only fields with problems with maths and stats, as shown by the excellent blog: 6 Shocking Studies That Prove Science Is Totally Broken | Cracked.com

Science tends to require the use of numbers. And while most of us probably have a tough time figuring out what all those numbers and letters and Greek symbols in algebra equations are supposed to mean, we're content to leave it to the experts to do all the understanding for us. Man, it would be hilariously terrifying if those experts turned out to be as clueless as the rest of us, wouldn't it?

Enter Kimmo Eriksson, a Swedish mathematician. He decided midway through his career that pure math wasn't doing it for him anymore and moved into cultural studies. It was at that point he realized his new colleagues were basically awful at math. So he conducted an experiment to find out how widespread the issue was. Eriksson picked two research papers at random and sent them out to a bunch of scientists. In half of the papers he randomly added an equation that had nothing to do with the study whatsoever, and in context was utter nonsense.

Eriksson asked the recipients to judge the quality of the research. The mathematicians and physicists were basically unimpressed, but in every other field the inclusion of the equation got the papers a higher ranking, even though it was pointless bullshit -- it just looked more impressive with the complicated math in there. More than 60 percent of the medical researchers, the people trying to save all of our lives, ranked the junk papers as better on the grounds of, "It must be right -- look at all this awesome math shit he's got in there!"


Seems legit.

The research by Eriksson (or "Kimmo the number wizard," as he is known in the humanities) is not the only evidence that scientists treat math as some mysterious occult force. Research into ecology and evolution shows that papers are 28 percent less likely to be cited for every additional equation per page. It seems that basically everyone that isn't a physicist or engineer treats math with a policy of "run away as quickly as possible."
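Taking that 28 percent figure at face value (and assuming, as the phrasing suggests, that it compounds per equation), the effect multiplies up quickly. A quick back-of-the-envelope check in Python:

```python
# If each additional equation per page cuts citations by 28%, the penalty compounds.
for k in range(5):
    print(f"{k} extra equations/page -> {0.72**k:.0%} of the baseline citation rate")
```

So by three equations per page you would already be down to roughly a third of the baseline citation rate, if the reported effect really is per equation.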
 

Little Bluestem

All Good Things Must Come to an End
Messages
4,930
"Look on Google you get something like 27,000 hits. This theory is not just big in academia, there's a whole industry of coaching and it intersects with business and business schools. There's a lot of money in it."

And these books were not only marketed like a previous generation of self-help manuals, they often shared the same style of cod-sagacious prose. "Positivity opens your mind naturally, like the water lily that opens with sunlight," writes Fredrickson in Positivity.

Then there was the lucrative lecture circuit. Both Seligman and Fredrickson are hired speakers. One website lists Seligman's booking fee at between $30,000 and $50,000 an engagement. In this new science of happiness, it seemed that all the leading proponents were happy.

As Brown puts it in characteristic manner: "This particular paper wasn't an act of fraud and it wasn't about statistics. It's that someone had a brain-fart one day."
A very lucrative brain-fart.
 

Firestormm

Senior Member
Messages
5,055
Location
Cornwall England
Around the time Brown first came across Fredrickson's work, a case came to light in Holland in which a psychologist called Diederik Stapel, who was dean of faculty at Tilburg University, was caught by his graduate students making up data. It turned out he'd been falsifying his research for the previous 15 years. Brown, who is currently translating Stapel's autobiography, got in touch with him and asked him why he did it.

"The way he describes it," says Brown, "is that the environment was conducive to it. He said, 'I could either do the hard work or put my hand in the jar and take out a biscuit'." It does a massive amount of harm to science when this sort of thing happens. Nobody's accusing Fredrickson of making anything up. She just basically invented her own method. Is that worse than inventing your own data?"

Does CBT represent a 'cookie jar'? Arguably it does, given its widespread use across medicine these days and the sub-contracting to 'specialists' outside of, e.g., the NHS.

She just basically invented her own method. Is that worse than inventing your own data?

Left me wondering about the use of subjective outcome measures... ;)

Ah, yes:

But social psychology is full of theorising and much of it goes unquestioned. This is particularly the case when the research involves, as it does with Fredrickson, self-report, where the subjects assess themselves.

As John Gottman says: "Self-report data is easier to obtain, so a lot of social psychologists have formed an implicit society where they won't challenge one another. It's a collusion that makes it easier to publish research and not look at observational data or more objective data."

In general, says Gottman, the results of self-report have been quite reliable in the area of wellbeing. The problem is that when it comes down to distinguishing, say, those who "languish" from those who "flourish", there may be all manner of cultural and personal reasons why an individual or group might wish to deny negative feelings or even downplay positive ones.

"It's a lot more complicated than Fredrickson is suggesting," says Gottman.

And, 'there may be all manner of cultural and personal reasons why an individual or group' might feel better today than yesterday, less fatigued today than yesterday, or that a particular intervention or even chat with a clinician has 'helped' their overall wellbeing.

I do think it is important to allow patients input into any measure of outcomes, but only to the extent that such measures are secondary to objective ones. Claimed improvements to 'Wellbeing' or 'Quality of Life' or the 'Helpfulness of a therapy' should encompass much more of an evidence base than self-report alone.

I am a kid from the 80s and 90s. For me, 'the power of positive thinking' was debunked ages ago. It was overhyped and oversold - but the same assumptions still persist in psychology. Too many assumptions and theories and too little actual science.

Said it before and I'll say it again - the claims made about psychological interventions are too ambitious for the evidence, and PACE was no different in this regard. We need a better way of determining 'feeling better' after therapy, and of determining whether, and indeed how, such an intervention impacts on disability.

That said, I just had a really nice cup of coffee whilst reading that article from the Guardian at long last and feel quite relaxed and happy.

I will now be doing some work. BUT am I returning to that work - and not resting or doing something more pleasurable - because I am feeling happy and less fretful about my symptoms? Hmm.... :p
 
Last edited:

Roy S

former DC ME/CFS lobbyist
Messages
1,376
Location
Illinois, USA
This is from last year --
 
"Here is the modus operandi of the positivity lady. She goes to a scientific field and picks up the jargons. Then she uses those jargons to write a complex paper, whose conclusion has something to do with human well-being, positive emotions, etc. Neat, isn’t it? Did we say that the technical terms are used in meaningless way to impress the naive reader?"
"In the meanwhile, positivity lady moved on to new field of research – genomics. Her paper linking positivity with gene expression came out in PNAS a week back. Check the last sentence (emphasis ours), if not anything else."
 
http://www.homolog.us/blogs/Blog/2013/08/05/tragedy-of-the-day-pnas-got-duped-by-positivity-lady/
there is also a link to a relevant James Coyne blog:
http://blogs.plos.org/mindthebrain/...-health-by-pursuing-meaning-versus-happiness/