Effect of a physical activity intervention on bias in self-reported activity (gen)

Dolphin

Senior Member
Messages
17,567
Given that Graded Exercise Therapy and GET/GAT-based CBT often use self-report tools to measure activity/check for improvements, this trial is very interesting. I had previously hypothesised it could be an issue but it is useful to have some empirical evidence for it.

Free full text at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2746093/?tool=pubmed

The effect of a physical activity intervention on bias in self-reported activity.

Ann Epidemiol. 2009 May;19(5):316-22. Epub 2009 Feb 20.

Taber DR, Stevens J, Murray DM, Elder JP, Webber LS, Jobe JB, Lytle LA.


Source

Department of Epidemiology, University of North Carolina, Chapel Hill, NC 27599, USA. dtaber@email.unc.edu


Abstract

PURPOSE:

A positive outcome in self-reported behavior could be detected erroneously if an intervention caused over-reporting of the targeted behavior. Data collected from a multi-site randomized trial were examined to determine if adolescent girls who received a physical activity intervention over-reported their activity more than girls who received no intervention.

METHODS:

Activity was measured using accelerometers and self-reports (3-Day Physical Activity Recall, 3DPAR) in cross-sectional samples preintervention (6th grade, n = 1,464) and post-intervention (8th grade, n = 3,114). Log-transformed accelerometer minutes were regressed on 3DPAR blocks, treatment group, and their interaction, while adjusting for race, body mass index, and timing of data collection.

RESULTS:

Preintervention, the association between measures did not differ between groups, but post-intervention 3DPAR blocks were associated with fewer log-accelerometer minutes of moderate-vigorous physical activity (MVPA) in intervention girls than in control girls (p = 0.002). The group difference was primarily in the upper 15% of the 3DPAR distribution, where control girls had >1.7 more accelerometer minutes of MVPA than intervention girls who reported identical activity levels. Group differences in this subsample were 8.5%-16.2% of the mean activity levels; the intervention was powered to detect a difference of 10%.

CONCLUSION:

Self-report measures should be interpreted with caution when used to evaluate a physical activity intervention.


PMID: 19230711 [PubMed - indexed for MEDLINE] PMCID: PMC2746093
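To make the Methods concrete: regressing log-accelerometer minutes on self-reported blocks, a group indicator, and their interaction means the interaction coefficient captures whether a reported block "buys" fewer objective minutes in the intervention group. Here's a rough, self-contained Python sketch with entirely made-up numbers (the data, coefficients and sample size are illustrative assumptions, not the trial's) that fits such a model by ordinary least squares:

```python
import random

def ols(X, y):
    """Fit OLS by solving the normal equations (X'X)b = X'y via Gauss-Jordan."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    for p in range(k):                      # augment with X'y
        A[p].append(sum(X[i][p] * y[i] for i in range(n)))
    for col in range(k):                    # eliminate with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                for c in range(col, k + 1):
                    A[r][c] -= f * A[col][c]
    return [A[p][k] / A[p][p] for p in range(k)]

random.seed(1)
rows, y = [], []
for _ in range(2000):
    group = random.randint(0, 1)            # 0 = control, 1 = intervention
    blocks = random.randint(0, 10)          # self-reported 3DPAR blocks
    # Assumed truth: intervention girls get fewer objective minutes
    # per reported block (negative interaction), as the paper describes.
    log_min = (2.0 + 0.10 * blocks + 0.05 * group
               - 0.04 * blocks * group + random.gauss(0, 0.3))
    rows.append([1.0, blocks, group, blocks * group])
    y.append(log_min)

b0, b_blocks, b_group, b_inter = ols(rows, y)
print(round(b_inter, 3))   # negative: weaker blocks-accelerometer link in intervention girls
```

A negative interaction estimate is what the abstract's p = 0.002 finding corresponds to: the same self-reported level maps to less measured activity in the intervention group.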

It talks about other studies but I'm not sure how much time I'll have to read them.
 

oceanblue

Guest
Messages
1,383
Location
UK
Thanks, Dolphin.

Initially, I thought this looked like the smoking gun, and it makes some great points like
Interventions, in their effort to encourage individuals to change behavior, may promote social desirability and inadvertently increase over-reporting of that behavior.

But after reading it I'm less sure:

1. The putative over-reporting made no difference to the result of the trial
Overall, however, the magnitude of over-reporting in the intervention group was not large enough to change the conclusion about the intervention's effect, which was non-significant according to both self-report and accelerometer.

2. The basic data looks suspect. The key thing is the correlation between the Self-Report measure 3DPAR and actometers. Here are the correlations:
Control: baseline=0.14; follow-up=0.30
Intervention: baseline=0.21; follow-up=0.20

- First thing to note is that earlier work showed the correlation between actometers and 3DPAR should be in the range 0.28-0.46; all but the control follow-up are significantly below that range, showing very weak correlations. Something looks wrong here, and that makes me concerned about the findings.
- Also, the correlation doesn't change in the intervention group between baseline and follow-up (ie participants aren't more likely to over-report activity after the intervention). It's the change in the control group that makes the intervention results important.
- The baseline and follow-up data are not for the same individuals: the study used separate samples for baseline and follow-up, so some of the difference found may be because they are looking at different people with different tendencies to over/under report.
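Those weak correlations are at least plausible on measurement-error grounds alone: if the accelerometer carries modest noise and the self-report carries heavy noise, the observed r between them is badly attenuated even with no reporting bias at all. A toy Python sketch (noise levels are my assumptions, not the study's):

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(7)
true_activity = [random.gauss(50, 15) for _ in range(1500)]
# Accelerometer: modest measurement error around the truth
accel = [t + random.gauss(0, 10) for t in true_activity]
# Self-report: heavy error (recall problems, rounding, reporting style)
report = [t + random.gauss(0, 40) for t in true_activity]

r = pearson_r(accel, report)
print(round(r, 2))   # typically lands in the 0.2-0.35 range with these noise levels
```

So an r of 0.2-0.3 between two imperfect measures of the same behaviour doesn't necessarily mean anything is broken, though it does mean either instrument is a blunt tool for judging individuals.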

Sorry to pour cold water on this - I'm disappointed too.

To be honest, I don't fully understand the analysis they present in figs 1 & 2, which appears more powerful, but the two concerns listed above make me doubt this study. Happy to be put right, though.

My main thought, though, reading this, is that self-report of physical activity is highly questionable (regardless of social desirability/over reporting). Those correlations with the actometer are terrible (and actometers themselves are imperfect measures of activity). Goes to show, it's not just CFS research that's hopelessly flaky at times.

Will check out the references in this paper as there does seem to be some interesting discussions going on.
 

Dolphin

Senior Member
Messages
17,567
Thanks for replying. Just a quick reply - have a phone call in a while to prepare for.

The figures are important. They show where the over-reporting is (the gaps between the two lines). The over-reporting didn't take place throughout the sample - it was absent at the low levels of moderate-to-vigorous activity and at the low levels of vigorous activity - which is why it may not make a difference to the overall figures.
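That tail-concentration point can be shown with a quick simulation: if only the most active ~15% over-report, even a fairly large bias in that tail shifts the group mean only modestly. A rough Python sketch with invented numbers (the 20% tail inflation and activity distribution are my assumptions, not the trial's):

```python
import random

random.seed(3)
n = 1500
# Assumed true MVPA minutes per girl
true_minutes = [max(0.0, random.gauss(25, 10)) for _ in range(n)]

# Over-reporting only in the upper 15% of the distribution
cut = sorted(true_minutes)[int(0.85 * n)]
reported = [m * 1.2 if m >= cut else m for m in true_minutes]

mean_true = sum(true_minutes) / n
mean_rep = sum(reported) / n
pct = 100 * (mean_rep - mean_true) / mean_true
print(round(pct, 1))   # group mean shifts only a few percent despite a 20% bias in the tail
```

That's consistent with both observations in the thread: a clear gap between the lines at the top of the distribution, yet an overall trial conclusion that doesn't change.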

I have read quite a lot of papers in this area in the last week. While the correlations of actometers with calorimetry and doubly labelled water aren't in the 0.8s and 0.9s (IIRC), actometry has generally been considered a good objective measure. Self-reports have all sorts of problems, with much lower correlations. There is quite a range of actometers, e.g. some measure movement in three dimensions, which is better for some sorts of activities. Also the positioning can be an issue, e.g. IIRC, waist is better than ankle.

I haven't read the associated paper so far but a quick look suggests if they had just gone by the subjective measures, the intervention would have been portrayed as some sort of success http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2275165/pdf/nihms41002.pdf
 

oceanblue

Guest
Messages
1,383
Location
UK
The figures are important. They show where the over-reporting is (the gaps between the two lines). The over-reporting didn't take place throughout the sample - it was absent at the low levels of moderate-to-vigorous activity and at the low levels of vigorous activity - which is why it may not make a difference to the overall figures.

I have read quite a lot of papers in this area in the last week. While the correlations of actometers with calorimetry and doubly labelled water aren't in the 0.8s and 0.9s (IIRC), actometry has generally been considered a good objective measure. Self-reports have all sorts of problems, with much lower correlations. There is quite a range of actometers, e.g. some measure movement in three dimensions, which is better for some sorts of activities. Also the positioning can be an issue, e.g. IIRC, waist is better than ankle.

I haven't read the associated paper so far but a quick look suggests if they had just gone by the subjective measures, the intervention would have been portrayed as some sort of success http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2275165/pdf/nihms41002.pdf
Interesting that the original paper claims some improvement while this paper, using the same data, says there was no significant overall improvement according to accelerometry or self-report!

I'd be interested to see anything you come across that validates actometers in free-living (as opposed to lab) conditions, since that's the relevant comparison in these trials. In any event, they seem to be way ahead of self-report measures. I can't believe the general complacency about the shortcomings of self-report measures.
 

Dolphin

Senior Member
Messages
17,567
Interesting that the original paper claims some improvement while this paper, using the same data, says there was no significant overall improvement according to accelerometry or self-report!
I've read it now. The original paper in effect reports two trials: the preplanned trial, which ended in 2005, where the girls got the program in 7th and 8th grade; and an addition where other girls got the program in 6th and 7th grade and then a cheaper follow-up program in 8th grade. There was no difference for the program that covered just 7th and 8th grades - the improvement was only seen in the program covering 6th, 7th and 8th grades.

The papers that talked about motion sensors/actometers were review papers rather than individual trials. They were papers referenced in this paper e.g. Shephard. So I'm afraid I can't help much with details.
 