It would have been a shoe in. He should have tried for the Guild of Cobblers.
That's not quite right. Patients were not required to fail any single aspect of the trial entry criteria (which were: meeting the Oxford criteria, an SF-36 PF score of 65 or under, and a Chalder Fatigue bimodal score of 6 or more); they were only required not to fulfil all of them at once. So a patient could still report a decline in SF-36 PF score from baseline and be classed as recovered, so long as they did not fulfil every aspect of the trial entry criteria simultaneously.
Simple!
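A minimal sketch of that "recovery" logic, as I read the discussion above (this is my reconstruction, not the trial's actual analysis code), shows how a patient can get objectively worse on one measure and still count as recovered:

```python
# Entry required ALL of: Oxford criteria met, SF-36 PF <= 65, Chalder bimodal >= 6.
# "Recovery" on this component only requires no longer meeting all three at once.

def meets_entry_criteria(oxford, sf36_pf, cfq_bimodal):
    return oxford and sf36_pf <= 65 and cfq_bimodal >= 6

# Hypothetical patient whose physical function WORSENED from 65 to 60,
# but whose fatigue score dipped below the entry threshold:
at_baseline  = meets_entry_criteria(oxford=True, sf36_pf=65, cfq_bimodal=6)  # True
at_follow_up = meets_entry_criteria(oxford=True, sf36_pf=60, cfq_bimodal=5)  # False

print(at_baseline, at_follow_up)  # True False: worse physical function, yet "recovered"
```

Because the three conditions are combined with AND, breaking any one of them (here, the fatigue threshold) is enough to stop being a "case", regardless of what the other measures do.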
10 people in the trial deteriorated on the SF-36 physical functioning subscale but rated themselves as much better or very much better on the CGI. But a patient reporting a decline in physical function from baseline, even from 65 to 60, would presumably not rate themselves as much better or very much better on the CGI.
That's surprising, although the number is still very small in comparison to the number of participants overall. I am surprised there were any, though. Is there any explanation for this? Is it that their fatigue was much improved despite worse physical function, or something else?
It is actually an excellent illustration of how vulnerable self-reported outcomes are, when devoid of sanity checks, to skew induced by prevailing perceptions. It is all based on prevailing subjective perceptions, not backed up by objective measures.
There is an extra column in the data here, @Dolphin: the -ve values.

trialarm  cfqlsov0  cfqbsov0  pcfqls52  pcfqbs52  dgiq52F  pfov0  p_pfov52  (?)     pgiq52F  wtmts.0  wtmts.52  o_ov52cor
3         24        9         19.00     8.00      3        55     35.00     -20.00  2        556      650       0
2         33        11        33.00     11.00     2        20     0.00      -20.00  2        315      311       1
4         28        9         19.00     6.00      1        55     45.00     -10.00  1        305      367       1
2         33        11        33.00     11.00     2        40     30.00     -10.00  2        360      #NULL!    1
2         27        10        8.00      2.00      1        45     35.00     -10.00  2        412      380       0
3         22        11        15.00     4.00      #NULL!   55     45.00     -10.00  2        367      377       1
2         31        11        16.00     5.00      1        55     50.00     -5.00   1        377      425       0
3         32        11        19.00     8.00      3        50     45.00     -5.00   2        520      570       0
2         32        11        28.00     11.00     2        35     30.00     -5.00   2        341      321       1
2         26        11        25.00     11.00     2        35     30.00     -5.00   2        200      319       1
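The data above has 13 value columns against 12 header names. The unlabelled negative column looks like a change score: in every row it equals p_pfov52 minus pfov0. This is my inference from the numbers, not a documented fact, but it can be checked mechanically:

```python
# Each tuple is (pfov0, p_pfov52, unlabelled_extra) taken from the rows above.
rows = [
    (55, 35.0, -20.0), (20, 0.0, -20.0), (55, 45.0, -10.0), (40, 30.0, -10.0),
    (45, 35.0, -10.0), (55, 45.0, -10.0), (55, 50.0, -5.0), (50, 45.0, -5.0),
    (35, 30.0, -5.0), (35, 30.0, -5.0),
]

# Verify the extra column is the week-52 change in SF-36 PF for every row shown.
for pfov0, p_pfov52, extra in rows:
    assert p_pfov52 - pfov0 == extra

print("extra column = p_pfov52 - pfov0 in all", len(rows), "rows")
```

If that holds across the full dataset, the extra column is just the follow-up-minus-baseline change in the physical functioning score.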
I am probably stating the obvious here, but has anyone scrutinised how the Chalder Fatigue Scale was created in the first place (1993)? It seems to me, like most of the studies that use it, that it was tweaked until it provided the results they wanted.
I found this paper where they were assessing its usability for fatigue in MS:
Chalder Fatigue Questionnaire-MS - King's College London
https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0ahUKEwi4-834oPTSAhXMKcAKHWEkCKkQFggtMAI&url=https://kclpure.kcl.ac.uk/portal/files/37819671/PURE_Revised_MS_Fatigue_CFA_final_submitted_MS_JUNE_2015_v4_.docx&usg=AFQjCNH3g1GFld7eIk3ua6XkTm3lgKxtAg
There was no instrument available to measure subjective fatigue, so I simply invented one, which would later get modified into the Chalder Fatigue Scale, which also became a citation ‘hit’. And basically that was that.
This was from the actual 1993 study. There was some Q&A with Wessely where he talked about creating it; it sounded like he just jotted some questions down on the back of a napkin.
I think the judgement of anyone using such a scale has to be called into question, especially statisticians.
Then there are the two marking schemes. PACE say that they changed from a bimodal to a Likert scheme to increase accuracy. But that is highly misleading: it is not like measuring in mm rather than cm. They are different marking schemes, and with the same answer set, under one scheme patient A may be more fatigued than patient B while under the other the opposite is true. In effect this means that one, the other, or both are not linear scales, and so it is not valid to quote the mean or SD. I have come across no evidence suggesting which marking scheme is more linear than the other, and hence which is more valid. Within the PACE and FINE data there are patients who both improved and got worse depending on the marking scheme.
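The ordering reversal described above is easy to reproduce. The CFQ has 11 items, each answered on a 4-point scale; Likert scoring maps the answers to 0, 1, 2, 3, while bimodal scoring collapses them to 0, 0, 1, 1. The two answer sets below are hypothetical, chosen only to show the effect:

```python
# Two scorings of the same 11-item questionnaire.
LIKERT  = {0: 0, 1: 1, 2: 2, 3: 3}
BIMODAL = {0: 0, 1: 0, 2: 1, 3: 1}

def score(answers, scheme):
    return sum(scheme[a] for a in answers)

patient_a = [3] * 5 + [1] * 6   # five "much more than usual", six "no more than usual"
patient_b = [2] * 7 + [0] * 4   # seven "more than usual", four "less than usual"

print(score(patient_a, LIKERT),  score(patient_b, LIKERT))   # 21 14: A looks more fatigued
print(score(patient_a, BIMODAL), score(patient_b, BIMODAL))  # 5 7:  B looks more fatigued
```

With identical answer sheets, patient A is the more fatigued under Likert scoring and the less fatigued under bimodal scoring, so the two schemes do not even agree on rank order, let alone give one a finer-grained version of the other.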
I think its use is not that widespread outside a small group in the UK and possibly the Netherlands. That's why I asked the question. It is soooo widely used (as is the CIS-R) in so many research papers, and taken to be reliably accurate.
Reading this makes me realise: Just talking about medical trials generally, if a trial is to be correctly peer reviewed, then should that not mean that all the component parts that contribute should, themselves, have been verified or peer reviewed in some way? It is only as strong as the weakest link. So if a trial is going to measure outcomes using Method X, then surely that must mean Method X itself has to have been fully validated, else that part of the outcome cannot itself pass a peer review ... surely? You cannot just say "Well, we invented Method X because it suited us to use it, and because we are such clever bar-stewards we must be right so don't argue?!".
So the fatigue score should have been peer reviewed ages ago, before it was ever allowed to be used in such a life-changing clinical trial. Other scientists and mathematicians should have had their chance to identify the flaws in it, so it could be honed into something viable. Same for any other methods or practices employed.
This whole PACE thing is like an archaeological dig into a midden.
Google it... they use it all over the place now, and not just for CFS.
I keep thinking that one of the real issues is the lack of a formalism behind medical trials, and so no desired properties are stated and hence none are checked.

Exactly. As a design engineer myself I cannot help feeling quite appalled at how ad hoc and downright lackadaisical some of the clinical trial processes come across as being, though PACE and PR have really been my only exposure to it.