Discussion in 'Latest ME/CFS Research' started by Dolphin, Aug 2, 2015.
Probably not of interest to many but I've just read this paper and will post some thoughts.
I'd be interested to know if patients were involved in either directly developing the pacing scale, or commenting on the APQ-38, or revised APQ-26, so patients could assess face validity. They might just have something to add.
I also wonder about all these scales that demonstrate 'satisfactory' internal consistency, test/retest reliability, etc., as if an ok-ish pass were enough. How about trying to develop something really good? Because that's how it will be interpreted when used in future studies.
Sorry, pre-empting you there. As you were.
They weren't really asked to assess face validity as measures of pacing.
However, the questionnaire was influenced a little by the results from patients:
This led to a heading "Face Validity of the APQ-38" in the results but I'm not sure it's a great measure of face validity:
The main point I want to make is how pacing is changing from what people like Ellen Goudsmit traditionally wrote about pacing for M.E.
Here are some examples of the questions:
To me these are more like graded activity than pacing. The authors do say these sorts of questions haven't been included in other pacing questionnaires.
Such findings may explain the increased numbers reporting being made worse by pacing in the 2015 ME Association survey:
Some of the comments in the survey also show that some of the therapists giving pacing courses were using graded activity or "graded pacing".
Another element that hasn't featured in most forms of pacing is a focus on activity goals e.g.
The questionnaire also focused more on time-contingent pacing than on symptom-contingent pacing.
Such behaviour may not be the best strategy in M.E.
As the abstract says, the 26 questions were broken down, using factor analysis, into five factors or groups of questions which were given the headings: activity adjustment, activity consistency, activity progression, activity planning, and activity acceptance.
The authors then correlated the results with the scores on various instruments measuring:
Current pain; Usual pain; Physical fatigue; Mental fatigue; Anxiety; Depression; Cognitive anxiety; Escape and avoidance; Fearful thoughts; Physiological anxiety; Physical function; Mental function.
They reported these results in two ways. First, as correlations, where it is not clear which came first. But they also started saying that some types of pacing seemed better than others, ignoring the point that these were only correlations. It is good that they acknowledged one couldn't be sure about cause and effect. However, given that they do claim at some points that some pacing strategies seem better, it is unclear whether they included the caveat about not reading correlations as cause and effect because they are honest scientists, or because one or more reviewers insisted on it.
The point about correlations is that people who are doing well may be able to use certain strategies that people who are worse off cannot. This doesn't mean that the strategies themselves make people better, or that not using them makes one worse.
An example of where the authors are coming from:
Just part of the ongoing propaganda process of blurring GET into pacing, and vice-versa, so that this dishonest cowardly profession does not have to face truth about what it has done to us, and that we were right all along.
Here's a tip: Anytime these gits invoke deconditioning, you can be pretty sure that what follows is shite.
Better stop there for the sake of my own reputation.