
(general) Faking it: Social desirability response bias in self-report research

Dolphin

Senior Member
Messages
17,567
Faking it: Social desirability response bias in self-report research. Aust J Adv Nurs. 2008;25(4):40-8.

Free full text at: http://www.ajan.com.au/Vol25/Vol_25-4_vandeMortel.pdf

I thought it was interesting as it:
(i) gives some examples of the bias
(ii) shows that some studies actually try to control for it.

Given the number of questionnaires used in ME/CFS research, this may have some relevance.

ABSTRACT

Objective

The tendency for people to present a favourable image of themselves on questionnaires is called socially desirable responding (SDR). SDR confounds research results by creating false relationships or obscuring relationships between variables. Social desirability (SD) scales can be used to detect, minimise, and correct for SDR in order to improve the validity of questionnaire-based research. The aim of this review was to determine the proportion of health-related studies that used questionnaires and used SD scales, and to estimate the proportion that were potentially affected by SDR.

Methods

Questionnaire-based research studies listed on CINAHL in 2004-2005 were reviewed. The proportion of studies that used an SD scale was calculated. The influence of SDR on study outcomes and the proportion of studies that used statistical methods to control for social desirability response bias are reported.

Results

Fourteen thousand two hundred and seventy-five eligible studies were identified. Only 0.2% (31) used an SD scale. Of these, 43% found SDR influenced their results. A further 10% controlled for SDR bias when analysing the data. The outcomes in 45% of studies that used an SD scale were not influenced by SDR.

Conclusions

While few studies used an SD scale to detect or control for SD bias, almost half of those that used an SD scale found SDR influenced their results.

Recommendations

Researchers using questionnaires containing socially sensitive items should consider the impact of SDR on the validity of their research and use an SD scale to detect and control for SD bias.
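
To make the "detect and control" idea in the abstract concrete, here is a minimal sketch with made-up data (not from the paper; the variable names and the effect sizes are invented). Detection is just asking whether a self-report outcome correlates with the SD scale score; one simple way to control for SDR is to partial the SD score out of both variables before testing the relationship of interest.

```python
# Hypothetical illustration of using an SD scale to detect and control for SDR.
# None of this comes from the van de Mortel paper; the data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
sd_score = rng.normal(size=n)                          # e.g. a Marlowe-Crowne-style SD scale
symptom_report = 0.5 * sd_score + rng.normal(size=n)   # self-report outcome, inflated by SDR
predictor = 0.3 * sd_score + rng.normal(size=n)        # another self-report measure

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Detection: a non-trivial correlation with the SD scale suggests SDR is at work.
print("outcome vs SD scale:", round(corr(symptom_report, sd_score), 2))

# Control: regress the SD score out of both variables, then correlate the
# residuals (a simple partial correlation).
def residualise(y, x):
    beta = np.polyfit(x, y, 1)
    return y - np.polyval(beta, x)

raw = corr(predictor, symptom_report)
partial = corr(residualise(predictor, sd_score), residualise(symptom_report, sd_score))
print("raw correlation:", round(raw, 2), "| controlling for SD:", round(partial, 2))
```

Because both simulated measures load on the SD score, the raw correlation is inflated relative to the partial one, which is exactly the "false relationships" problem the abstract describes.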
 
Messages
13,774
Doesn't Chalder think that CFS patients are particularly prone to distortions driven by considerations of social desirability (underpinning her work linking it to anorexia)?

I doubt she'd bring it up in this regard.
 

Dolphin

Senior Member
Messages
17,567
Doesn't Chalder think that CFS patients are particularly prone to distortions driven by considerations of social desirability (underpinning her work linking it to anorexia)?

I doubt she'd bring it up in this regard.
Afraid I haven't read that paper so will leave it to others to answer.

Basically, what the authors are referring to is people not answering questionnaires truthfully, across a wide variety of topics. That is a slightly different issue from people feeling some sort of social pressure to act or think in a certain way, but where their answers to the questions reflect what they really think or feel, or how they really behave.
 

CBS

Senior Member
Messages
1,522
I thought it was interesting as it:
(i) gives some examples of the bias
(ii) shows that some studies actually try to control for it.

Given the number of questionnaires used in ME/CFS research, this may have some relevance.

SDR is also known as "faking good."

Well-designed questionnaires will have sub-scales built into them that assess SDR.
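
As a purely hypothetical illustration (the item numbers, keying, and cutoff below are invented, not taken from any real instrument), such a sub-scale amounts to a handful of items keyed to improbably virtuous answers, scored and checked against a cutoff:

```python
# Hedged sketch of an embedded SDR ("faking good") sub-scale.
# Items, scoring, and threshold are made up for illustration only.
SD_SUBSCALE_ITEMS = [4, 11, 19, 27]   # hypothetical items keyed to improbably virtuous answers
SD_CUTOFF = 3                         # hypothetical "probable faking good" threshold

def sd_subscale_score(responses):
    """Count endorsements of the socially desirable answer on the sub-scale items."""
    return sum(1 for item in SD_SUBSCALE_ITEMS if responses.get(item) == "agree")

def flag_faking_good(responses):
    """Flag a questionnaire for review (or exclusion) if the SD score is high."""
    return sd_subscale_score(responses) >= SD_CUTOFF

example = {4: "agree", 11: "agree", 19: "agree", 27: "disagree"}
print(sd_subscale_score(example), flag_faking_good(example))   # 3 True
```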

In a broader context, the process of properly validating an instrument (for example, a set of questions used as diagnostic criteria for a specific disorder, such as the ICC) is time-consuming, expensive, and requires a significant background in the appropriate statistical methods. Look what happened with the Reeves definition: a ten-fold increase in the number of "CFS" patients. I'm hoping Lenny Jason wasn't a co-author of the ICC because he wanted to validate it independently, not because he didn't feel comfortable with it.