trishrhymes
Senior Member
Perhaps one of the biggest mistakes in the last century was to re-label psychology, sociology etc as social sciences.
As far as I can see they are not sciences, but they pretend that by going through some of the same processes scientists do they are somehow doing science.
So they set up hypotheses, measure things, or rather ask questions and pretend the answers can be numerically scaled.
This gives them loads of numerical data on which they carry out statistical tests of significance. These days, with computer packages that can run very sophisticated statistical analyses on large data sets, they can churn out loads of p values.
They use the 'rule of thumb' 5% level as a 'magic number', and trawl through loads of p values produced by their analyses to find these magic p values. I gather this is called p-hacking.
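To see how easy it is to fish out those 'magic' p values, here's a quick Python sketch (a toy example of my own, not taken from any real study): it runs 200 significance tests on pure random noise, where there is no real effect anywhere, and still turns up a handful of p values below 0.05.

```python
import random
import math

random.seed(1)

def z_test_p(a, b):
    # Two-sample z-test on the means (normal approximation; fine for n = 100)
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # two-sided p value from the standard normal tail
    return math.erfc(abs(z) / math.sqrt(2))

# 200 made-up "questionnaire measures" that are pure noise:
# no real difference between the groups on any of them
hits = 0
trials = 200
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(100)]
    group_b = [random.gauss(0, 1) for _ in range(100)]
    if z_test_p(group_a, group_b) < 0.05:
        hits += 1

# typically around 10 of the 200 tests come up 'significant' by chance alone
print(hits)
```

Trawl through enough tests and the 5% threshold guarantees you some 'findings' out of nothing at all.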
They then imagine that they have discovered important correlations between factors like catastrophising, symptom focusing etc and illness severity, and they interpret these according to their beliefs about the illness.
They show no understanding at all of any of the following:
Correlation does not imply causation. And if causation is inferred it may be in the wrong direction, eg does symptom focusing make you sicker, or does being sicker make you focus more on your symptoms...
A p value of 5% means that even if there were no real effect at all, there would still be a 1 in 20 chance of getting a result that extreme from chance variation alone, so a 'significant' finding can be nothing but noise, especially when dozens of tests have been trawled through to find it.
Statistical significance is not the same as clinical significance.
Most of the stuff they 'measure' is not on a linear scale, eg being able to walk a mile is not twice as healthy as being able to climb one flight of stairs, etc.
People fill in questionnaires in the way they think they are expected to do, especially after 'treatment' designed to persuade them to change their beliefs about their health.
Some of their scales are complete nonsense, eg Chalder fatigue scale has a ceiling effect and a nonsense scoring system and ridiculous descriptors.
Garbage in - Garbage out. If you feed a statistics package with garbage, it will spew out even more garbage.
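On the causation-direction point above: a correlation coefficient is completely symmetric in its two variables, so the number itself cannot tell you which way the arrow points. A toy Python sketch (the 'focus' and 'severity' variables are made up for illustration, not real data):

```python
import random

random.seed(0)

def pearson(x, y):
    # Plain Pearson correlation coefficient; note it is symmetric in x and y
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# World A: symptom focus drives illness severity
focus_a = [random.gauss(0, 1) for _ in range(500)]
severity_a = [f + random.gauss(0, 1) for f in focus_a]

# World B: illness severity drives symptom focus
severity_b = [random.gauss(0, 1) for _ in range(500)]
focus_b = [s + random.gauss(0, 1) for s in severity_b]

# Both come out around 0.7; the correlation alone cannot tell the two worlds apart
print(round(pearson(focus_a, severity_a), 2))
print(round(pearson(focus_b, severity_b), 2))
```

Two opposite causal stories, one indistinguishable correlation; which story gets told depends entirely on the researcher's prior beliefs.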
And don't let me even begin on the ethics of any of this stuff...
Not science.
Not ethical.
Not funny.
Merry Christmas.