Discussion in 'Latest ME/CFS Research' started by Tom Kindlon, Sep 8, 2017.
Free full text: http://www.oatext.com/the-effects-o...ould-be-assessed-using-objective-measures.php
https://twitter.com/statuses/906167348486393856
https://twitter.com/statuses/906168797131825152
Yes! I want patients to be able to walk longer distances, sit up more often, socialize more often, and go back to school/work. Not feel less fatigued but still unable to be more active.
Should be CFL ≥ 12
Should be CFL ≤ 17
This could give the impression that CBT did better than specialist medical care alone on the six-minute walking test, when in fact there was basically no numerical difference (SMC increased by 1.5 m more, using the adjusted figure).
Nice to see this study being challenged:
I came to the conclusion some time ago that any study that used subjective measures and not objective as primary outcomes was worthless. The more we can do to get this message out the better.
I thought I would plug this commentary, which never got that much attention, possibly because it is not PubMed-listed.
I am not sure I have seen "buy-in effects" used in a paper before. I googled it but don't see many results. I think it may refer to patients finding their treatment credible/believing it will work.
I can see that approach but I would probably soften it to any trial that isn't adequately supported by objective measures as a secondary outcome is worthless. For example, the RituxME trial is based on self-report as its primary outcome. If it reports a positive outcome supported by actometry data I don't think it would be a worthless result. Whereas if we have a scenario, like PACE, where we have slightly positive self-report data but where objective measures are either not presented or contradict the main finding, then I would agree it's garbage.
The key point @Jonathan Edwards made in his PACE article is that it's the combination of an unblinded trial with subjective measures that is the problem. The Rituximab trial is a double-blind trial, so patients' subjective reports can't be influenced by their expectations. In that case subjective measures can be useful and valid.
Yes, one has to be careful not to be too dogmatic about objective measures. Remember that all rheumatoid arthritis trials have primary outcome measures that have a subjective component and therefore can show a statistically significant difference just due to the subjective aspect. Rituximab is used for autoimmune disease on that basis. The proof of concept RA trial had a partially subjective primary outcome measure. But it was double blinded and objective measures were there to back up the primary outcome.
We have debated this before and I would personally like to see a standard measure for ME, like the ACR criteria used for RA. Both subjective and objective components go in, so that even if you can get a statistically significant difference with subjective features alone, it is extremely hard to get a clinically significant difference without both sorts of measure changing substantially. There is no problem with multiple measures, because you have a fixed formula for getting a single answer out.
I would think buy-in effects would be almost inevitable in many such trials. For any project you become wedded to, you effectively become a member of the project team and, human nature being what it is, dearly want that ("your") project to succeed. When answering multiple-choice questionnaires, there will often be uncertainty about which answer to choose, and the desire to benefit the project as a whole may well bias which answer you choose, and maybe even bias which range of answers you select from.
I think the point is less about trials being open label and more about differences in treatments, how patients perceive their effects, and hence reporting biases. Placebo-controlled trials are good because they try to present the same treatment to all patients apart from the active component. Things like PACE set different expectations for their made-up therapy (APT) and effectively had a wait-list control (SMC), again setting little expectation of recovery. Could an open-label CBT trial be suitably controlled so that both arms set the same expectations (for example, by trying to change different beliefs)? When I think about it, I keep coming back to the conclusion that the two arms would have to be very close, and eventually identical, in order to set the same expectations.
I assume that there can be problems with placebo-controlled trials in that reactions can differ. If I remember correctly, some of those supporting CBT criticised Fluge and Mella's Rituximab trial for this exact reason. Which makes me think that they understand more about the measures they use than they let on.
I'm not keen on the idea that you can pre-pick a single measure (subjective or objective) and rely on that; I much prefer the idea of having more measures and requiring them to be correlated. A subjective measure (or several) that isn't supported by more objective measures suggests the potential for bias. If you have a set of measures and one doesn't improve, then I assume that may say something interesting. I also don't like the idea of effect sizes on single measures; if we have multiple measures we need multivariate effect sizes. Or, like the EQ-5D scale, have a utility function that combines values in a way that has meaning (unlike the CFQ, which combines different forms of fatigue with arbitrary weightings).
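To make the idea concrete, here is a minimal sketch of combining several outcome measures with a single fixed, pre-registered formula, in the spirit of an EQ-5D-style utility index. The measure names, weights, and scores below are entirely invented for illustration; they are not from any actual trial or validated instrument.

```python
# Illustrative sketch: a fixed, pre-registered formula combining multiple
# standardised outcome measures into one answer. All names and weights
# here are hypothetical.

def composite_score(measures, weights):
    """Weighted combination of standardised change scores.

    measures: dict of measure name -> standardised change score
    weights:  dict of measure name -> pre-registered weight
    Raises if any pre-registered measure is missing, so outcomes
    cannot quietly be dropped after the fact.
    """
    missing = set(weights) - set(measures)
    if missing:
        raise ValueError(f"missing pre-registered measures: {sorted(missing)}")
    return sum(weights[m] * measures[m] for m in weights)

# Hypothetical scenario: a sizeable subjective gain that is not backed up
# by the objective measures contributes only a small overall change.
weights = {"fatigue_questionnaire": 0.3, "six_min_walk": 0.4, "actometry": 0.3}
scores = {"fatigue_questionnaire": 0.8, "six_min_walk": 0.0, "actometry": 0.1}
print(round(composite_score(scores, weights), 2))
```

The point of the fixed formula is the one made above: with objective components carrying real weight, a subjective-only improvement cannot by itself produce a large composite change, and missing measures are flagged rather than silently ignored.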
Trouble is, the treatments themselves are partly about changing expectations, which in some scenarios is a valid treatment option (I don't mean the false illness beliefs cr*p aimed at PwME, but, for example, low self-esteem etc). The expectation effect is actually part of the active component, so blinding would effectively remove at least part of the active ingredient.
So I think it comes back to what has been emphasised before: if you cannot blind subjects to their treatments and to expectations of those treatments, then you just have to have objective outcome measures.
Yes, in my thought process I tried to separate them out, but I ended up thinking that you would end up with the same CBT in each arm to set the same expectations. It would be hard to find sensible beliefs to change (trying to change beliefs about the colour of the sky would set the wrong expectations).
I also think objective measures are becoming much easier. If I were to run a trial I would give all participants a Fitbit (or similar) for the duration and look at activity changes. It's not perfect and doesn't measure mental activity, but it would also help with safety by measuring compliance with any activity programme. Also, a Fitbit isn't hard to wear and I think may even be something of a fashion statement.
I suppose in a way there are different aspects of a trial that you can blind. In the conventional terminology blinding means to insulate the trial's input data from subjective influence. But you can also insulate the outputs from subjective influence - by using objective outcome measures. In a way, using objective outcome measures is a form of blinding.
Totally agree. Subjective + no blinding = red flag. But I would add an additional red flag where subjectively and objectively measured results are inconsistent with each other (or objective measures in the trial protocol are mysteriously not presented), even in a blinded trial. PACE and FINE had both: some subjective 'improvement' but miserable failure on objective measures such as employment and the six-minute walking test.
But remember that in the double blinded comparison of albuterol with placebo for asthma (www.ncbi.nlm.nih.gov/pubmed/21751905) the subjective measure showed 50% improvement for albuterol and 45% for placebo but the objective measure showed 20% improvement for albuterol and 7% for placebo. So even with blinding subjective measures are unreliable. The gold standard should be objective measures.
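Setting those quoted figures side by side makes the contrast stark: the placebo-corrected treatment effect is small on the subjective measure but substantial on the objective one (percentages are taken directly from the post above; the calculation is just the difference between arms).

```python
# Placebo-corrected improvement in the blinded albuterol asthma trial,
# using the percentage figures quoted above.
subjective = {"albuterol": 50, "placebo": 45}
objective = {"albuterol": 20, "placebo": 7}

subj_effect = subjective["albuterol"] - subjective["placebo"]  # percentage points
obj_effect = objective["albuterol"] - objective["placebo"]     # percentage points
print(subj_effect, obj_effect)  # subjective gap vs objective gap
```

So even under double blinding, the subjective measure nearly erased a treatment effect that the objective measure showed clearly.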
Pity they didn't also have a non-treatment, non-placebo arm to provide an absolute comparison for the two other arms.
I agree that wherever possible objective measures should be used. The relationship between objective and subjective outcomes is important info.
It maybe depends on what the measurement expectations/interpretations are. The above figures show that people's asthma, measured objectively, showed a tangible improvement compared to placebo. But they also show that people's perception of improvement was negligible. That doesn't necessarily clash, because two quite different things are being measured, and both could well be right. I could easily believe there is a very non-linear relationship between real improvement of asthma symptoms and our perception of it.
Suppose a thought experiment where you could dial varying levels of asthma severity into a test subject by turning a knob (obvious why I'm suggesting this only as a thought experiment). You start your experiment with full-blown asthma symptoms and ask the subject what their asthma feels like, plotting the dial setting on the x-axis and the subject's perceived asthma severity on the y-axis. You then progressively reduce the dialled-in severity, taking lots more readings along the way, effectively moving from right to left along the x-axis.

I would be very surprised if the resultant plot was a straight line down to the (0,0) origin. Much more likely, I suspect, the y values would stay high for quite a large part of the plot, even though the dialled-in values were coming down. It may be that until the symptoms drop below a certain level of severity, a sufferer doesn't perceive much difference. I suspect it would be a sort of S-curve, because at the low end people probably also don't notice much difference between lower severity levels. I have to emphasise I have no way of knowing if I'm right here, but I would be amazed if it was a nice straight-line relationship - the world rarely works that way, especially between real and perceived.
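The S-curve being described can be sketched with a logistic function. To be clear, this is purely illustrative of the thought experiment, not a model of asthma perception: the 0-10 severity scale, midpoint, and steepness are all made-up numbers.

```python
import math

def perceived_severity(actual, midpoint=5.0, steepness=1.5):
    """Hypothetical S-curve mapping dialled-in severity (0-10) to
    perceived severity (0-10). Midpoint and steepness are invented
    purely to illustrate the thought experiment above."""
    return 10.0 / (1.0 + math.exp(-steepness * (actual - midpoint)))

# Sweep the "dial" from 0 to 10: perception barely moves at either
# extreme but changes sharply around the midpoint.
for actual in range(0, 11):
    print(actual, round(perceived_severity(actual), 1))
```

On a curve like this, a real improvement from severe to moderately severe (the right-hand plateau) would register as almost no perceived change, which is exactly the kind of non-linear real-vs-perceived relationship suggested above.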
To me this means that both readings are very possibly perfectly valid, provided no one tries to pretend that the perceived severity readings are synonymous with the actual severity readings and then build treatment regimes based on that falsehood. PACE etc of course tries to insist and mislead that they are synonymous.
I've also seen in recent times the BPS crew suggesting that observing people's perceptions is fine, and is as valid as objective observations even if they do differ significantly, because a person's perceptions of their condition is what really matters and can be deemed the more significant factor in their condition. Only someone high on BPS-brew could really believe that one!