
Cost effectiveness analysis of the PACE trial

Discussion in 'Latest ME/CFS Research' started by user9876, Aug 1, 2012.

  1. user9876

    user9876 Senior Member

    Messages:
    707
    Likes:
    1,628
    I think this paper may be important in looking at the EQ-5D results. It will take me a while to read, and a while before I can find the time, so I thought I would post it now.

    Basically it's saying that the statistical significance of comparisons is very dependent on the weights used to compute the single utility value. So a result could be significant in the UK but not in the Netherlands.

    Just to warn people: it's quite mathematical.

    http://openaccess.city.ac.uk/1503/1/0810_parkin-et-al.pdf
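
    To make the weights point concrete, here is a minimal sketch of how a single EQ-5D-3L utility value is derived from a country-specific tariff. The coefficients below are the commonly cited UK (MVH/Dolan) ones, quoted from memory, so treat them as illustrative; swapping in another country's tariff (e.g. the Dutch one) gives different utilities, which is how the same trial data can be significant under one set of weights and not another.

    ```python
    # Minimal sketch: one EQ-5D-3L utility value from a country-specific tariff.
    # Coefficients are the commonly cited UK (MVH/Dolan) values, quoted from
    # memory -- treat as illustrative, not authoritative.

    UK_TARIFF = {
        "constant": 0.081,           # one-off decrement for any deviation from 11111
        "N3": 0.269,                 # extra decrement if any dimension is at level 3
        "MO": {2: 0.069, 3: 0.314},  # mobility
        "SC": {2: 0.104, 3: 0.214},  # self-care
        "UA": {2: 0.036, 3: 0.094},  # usual activities
        "PD": {2: 0.123, 3: 0.386},  # pain/discomfort
        "AD": {2: 0.071, 3: 0.236},  # anxiety/depression
    }

    def eq5d_utility(state, tariff=UK_TARIFF):
        """state: levels 1-3 for (MO, SC, UA, PD, AD), e.g. (2, 1, 1, 2, 1)."""
        if state == (1, 1, 1, 1, 1):
            return 1.0               # full health is anchored at 1
        decrement = tariff["constant"]
        for dim, level in zip(("MO", "SC", "UA", "PD", "AD"), state):
            if level > 1:
                decrement += tariff[dim][level]
        if 3 in state:
            decrement += tariff["N3"]
        return 1.0 - decrement

    print(eq5d_utility((2, 1, 1, 2, 1)))  # 1 - (0.081 + 0.069 + 0.123) = 0.727
    ```

    A different tariff changes every utility, so group means and the significance of between-group differences can shift with it - which is the paper's point.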
  2. Bob

    Bob

    Messages:
    7,949
    Likes:
    9,868
    England, UK
    I'm trying to summarise the most interesting data published in the PACE Trial's cost analysis paper.
    Before I try to make my summary more succinct, I'm posting my analysis so far.
    If anyone is able and willing, I'd be very grateful for any feedback, please.
    (i.e. Have I made any obvious and glaring errors?)
    I'll have to study the whole paper again, to make sure I haven't made any mistakes, and I'm obviously not expecting anyone else to give that sort of detailed feedback.


    Cost Effectiveness Analysis paper:
    http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0040808


    Please note that I am ignoring APT in this analysis, so wherever I say that there were 'no significant differences', etc., between treatment groups, this might not always apply when comparing the therapy groups to APT.



    Lost employment hours

    They've given us lost employment 'hours' and lost employment 'costs', but no details of the number of individuals back at work, or the number who increased their working hours.


    Lost employment 'days' are given in Table 2, and lost employment 'costs' are given in Table 3.

    The paper says that CBT and GET did not improve employment prospects:
    "There was no clear difference between treatments in terms of lost employment."

    Note that there were lost-employment improvements in the CBT, GET and SMC groups, but the differences between CBT/GET and SMC were not significant, so CBT/GET did not improve outcomes relative to SMC.





    Table 4: Welfare Benefits and Other Financial Payments

    http://www.plosone.org/article/info...RI=info:doi/10.1371/journal.pone.0040808.t004

    Note that these figures (for benefits) are not included in the cost effectiveness evaluations.
    So although this data is published, it isn't used for any analysis.
    I think that the 'benefits' data is the only 'cost' data in the paper that is excluded from the final analysis.

    For the overall differences, for all (welfare and private) 'benefits' taken as a whole, the paper seems to assert that there was no significant difference between the CBT/GET groups and the SMC control group, although it's not very clear what exactly they mean by 'benefits' in the following text (I think they are lumping all private and welfare benefits together):
    "However, with the exception of a difference between CBT and APT, there were no significant differences in either lost work time or benefits between the treatments during follow up. In fact, benefits increased across all four treatments."


    Note that they say that "benefits increased across all four treatments." (So, overall benefits increased after treatment with GET and CBT, as well as with SMC.)
    So, for overall benefits claims (all welfare and private benefits, lumped together), there was an absolute increase in the proportion of participants making claims, in each of the therapy groups.

    For both 'income-related benefits' and 'income protection schemes or private pensions', the increases in claimants for CBT/GET are higher (worse) than for SMC, but they just say that the differences were not 'substantial'. They don't say that the differences are not significant, so the outcomes for CBT and GET might be significantly worse, when compared with the SMC control group, in both of these benefit categories.

    Interestingly, there is no data specifically in relation to 'private medical insurance' claims. They only publish data for income protection schemes, and private pensions. I don't know if they collected data for private medical insurance. If they did, then perhaps the data wasn't to their liking, because they didn't include it.



    Here is a breakdown of the individual types of benefit claims (it includes private 'benefits'):


    Income-Related Benefits:
    The proportion of participants claiming Income-related benefits increased in every therapy group.
    Looking at the unadjusted figures, there is little difference between the changes in each therapy group (CBT, GET, and SMC), so it looks like CBT and GET made no significant difference to income-related benefits.

    The text says:
    "Relatively few patients were in receipt of income-related benefits or payments from income protection schemes and differences between groups were not substantial."


    Illness/disability benefits:
    The proportion of participants claiming illness/disability benefits increased in each therapy group.
    By my estimation, using the unadjusted figures, CBT & GET resulted in a relatively lower increase in numbers on illness/disability benefits, when compared with the SMC control group (i.e. CBT and GET resulted in a less bad outcome relative to SMC, but there was still an absolute increase in the CBT and GET groups). By my estimation, the increase was about 12 or 13 percentage points smaller for CBT/GET than for SMC.

    The paper doesn't comment on this. It just says:
    "Receipt of benefits due to illness or disability increased slightly from baseline to follow-up (Table 4). Patients in the SMC group had the lowest level of receipt at baseline but the figures at followup were similar between groups."

    They seem to be looking at absolute numbers claiming benefits in each group, rather than the relative changes in numbers claiming benefits in each group over time. So they completely fail to comment on the relative changes in illness/disability benefits. Maybe there's no statistical significance but they don't make that clear.

    So for illness/disability benefits, there were absolute increases for CBT and GET, but relatively lower increases for CBT/GET than for SMC. The paper doesn't seem to comment on whether the differences between the changes in each group are significant in this category, so I can't comment.





    Income protection schemes or private pensions:
    The proportion of participants claiming for income protection schemes or private pensions was higher in every therapy group.
    And CBT and GET both resulted in relative increases in claims, compared with SMC (but I don't know if they are statistically significant increases), in the private benefits category (payments from income protection schemes or private pensions).
    (Using the unadjusted figures, there was roughly a 4 to 6 percentage point increase in participants making claims in the CBT and GET groups, compared with SMC; see the sketch below.)
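
    A minimal sketch of the percentage-point arithmetic used here and in the illness/disability paragraph above. The claimant counts are placeholders, not the actual Table 4 figures:

    ```python
    # Hypothetical sketch of the percentage-point comparisons above.
    # Counts are placeholders, NOT the real Table 4 data.

    def pp_change(base_claimants, base_n, followup_claimants, followup_n):
        """Percentage-point change in the proportion claiming, baseline -> follow-up."""
        return 100.0 * followup_claimants / followup_n - 100.0 * base_claimants / base_n

    # Placeholder numbers: CBT 30/161 -> 38/145, SMC 32/160 -> 36/148
    cbt = pp_change(30, 161, 38, 145)  # about +7.6 points
    smc = pp_change(32, 160, 36, 148)  # about +4.3 points
    print(f"CBT {cbt:+.1f} points, SMC {smc:+.1f} points, difference {cbt - smc:+.1f}")
    ```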

    Keeping in mind that at least one of the authors works for an insurance company, the paper avoids commenting on the increase in payments from income protection schemes and private pensions, as a result of CBT and GET:
    "Relatively few patients were in receipt of income-related benefits or payments from income protection schemes anddifferences between groups were not substantial."
    (Note, that they do not say that the differences were not 'significant', they just say 'not substantial'! Crafty!)








    I'll try to make a succinct summary soon, but in the meantime, here's a temporary very brief summary, which I think is safe enough to use, considering the lack of detailed analysis for each benefit category in the published paper.
    (My issue with the following summary is that I'm not sure if the differences between therapy groups are insignificant when we separate 'private payment claims' from 'welfare benefit claims' - there might only be insignificant differences between the therapy groups when all the 'benefit' categories are lumped together.)

    Brief summary:

    Considering the lack of detail in some of the cost-analysis paper's analysis, it seems safe to say that CBT and GET have not resulted in significant improvements in:
    1. Employment hours,
    2. Welfare benefit claims (consisting of income-related, and illness/disability benefits), or
    3. Private payment claims (consisting of income protection schemes and private pensions).

    CBT and GET actually resulted in worse outcomes (when using SMC as a control group) for private payment claims (which seem to consist of income protection schemes and private pensions).
  3. Mark

    Mark Acting CEO

    Messages:
    4,512
    Likes:
    1,929
    Sofa, UK
    One little detail of all that strikes me as mildly interesting, just as it did when I first read it:

    "Receipt of benefits due to illness or disability increased slightly from baseline to follow-up (Table 4). Patients in the SMC group had the lowest level of receipt at baseline but the figures at followup were similar between groups."

    What seems odd to me about that sentence is that the various groups are supposed to be randomly allocated at baseline (I presume there's no relevant systematic difference in the allocation to the 4 treatment groups, at least not an acknowledged one?), and therefore there's supposed to be no statistical significance to any difference between them at baseline.

    Yet they note in their text that the SMC group had "the lowest level" at baseline, even though (if the allocation is valid) that difference should not be statistically significant. I guess the precise "lowest level" here is in Table 4, so one can check the actual figures and see how much of a difference is required to justify noting that it's "the lowest" - but it should be within the margins of random noise. And if so, then surely it's sloppy to turn that data into this text: if the difference isn't significant in any way, what rigorous methodology allows them to write this sentence? But this is how their raw data get translated into misleading sentences...

    In this case, it might appear on casual reading of this sentence that SMC led to an increase in benefits to a greater extent than the other treatments. It would really be very easy to read this particular sentence and think there is some small degree of significance (of unstated size) to that 'finding'. Yet there's no indication of how big an effect that was, in this text, and if it was statistically significant in any way then that means the study design was flawed and the SMC group were significantly different in this respect, at baseline, from the other groups. Why, then, are they allowed to even write a sentence like this? In short, if it is not statistically significant then what is the value of noting it?
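
    For illustration, here is a rough sketch of how one could check whether a baseline imbalance like this exceeds allocation noise. The counts are hypothetical placeholders, not the actual Table 4 figures:

    ```python
    # Rough sketch: testing a baseline imbalance in benefit receipt across the
    # four arms. Counts are hypothetical, NOT the real Table 4 data.
    from scipy.stats import chi2_contingency

    # rows: APT, CBT, GET, SMC; columns: [receiving benefits, not receiving]
    baseline = [
        [55, 104],
        [52, 109],
        [54, 106],
        [45, 115],  # SMC given the "lowest level of receipt at baseline"
    ]
    chi2, p, dof, _ = chi2_contingency(baseline)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    # With a valid randomisation, p should normally sit well above 0.05:
    # the "lowest level" remark would then describe noise, not a real difference.
    ```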

    So I think it just highlights the looseness - and uselessness - of the way they use this kind of language - they seem to do it habitually, and frequently they really over-play it to the benefit of whatever case they want to make. In this particular case, it seems like a rare example where the 'spin' is of no particular benefit to any particular point they might want to make. So one can even imagine that this kind of translation of data into technically 'true' but statistically meaningless soundbites is so habitual in their everyday working practices that they don't even question whether it's valid or acceptable to write text like this - and maybe (a stretch, I know) they don't even notice when their personal prejudices come out in the words they choose, the conclusions they emphasise, and the sentences they choose to construct.

    That's just another part of the reason why it's so important to just get the raw data, and then have a variety of academic statistical work assessing what valid conclusions can be drawn from that data. All this method of writing up data is giving us is the cherry-picking of soundbites based on the data, driven by the writers' prejudices. There's nothing scientific about that process as far as I can see.
    Bob likes this.
  4. Bob

    Bob

    Messages:
    7,949
    Likes:
    9,868
    England, UK
    Does anyone understand exactly how Table 2 should be interpreted re lost employment in relation to the number using services?

    For the number of participants "using services", does it indicate:

    1. The number of participants for whom the lost employment data was relevant?
    2. The number of participants who lost some (any amount of) days from work due to illness?
    3. Some other interpretation, based on the "human capital approach" (the method used to calculate lost employment in this paper)? (See the sketch at the end of this post.)

    Here's a link for Table 2:
    http://www.plosone.org/article/info...RI=info:doi/10.1371/journal.pone.0040808.t002

    Edit: I've changed these options, since posting.
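
    For what it's worth, the "human capital approach" in option 3 just means valuing lost work time at a wage rate. A minimal sketch, with placeholder figures (the working day and wage below are illustrative, not the values the paper used):

    ```python
    # Human capital approach, sketched with placeholder figures.
    HOURS_PER_DAY = 7.5    # assumed working day; illustrative only
    WAGE_PER_HOUR = 15.0   # placeholder gross wage in GBP, not the PACE figure

    def lost_employment_cost(days_off_work):
        """Value lost work time at the gross wage (human capital approach)."""
        return days_off_work * HOURS_PER_DAY * WAGE_PER_HOUR

    print(lost_employment_cost(20))  # 20 lost days -> 2250.0 GBP
    ```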
    Enid likes this.
  5. Enid

    Enid Senior Member

    Messages:
    3,309
    Likes:
    840
    UK
    Following it through, Bob, and totally bemused - like one day the bathroom, even a stroll down the road (if lucky). Any chance the PACE researchers seriously found answers?
  6. WillowJ

    WillowJ Senior Member

    Messages:
    2,931
    Likes:
    2,357
    WA, USA
    I'm not going to read/re-read all 300 messages of this thread to see if this has been brought up before, but I was looking at this paper and I noticed the following:

    They claim that APT and SMC have higher informal care costs compared to CBT and GET. But look at what changed due to the trial:

    6 month pre-randomization period:

    Patients utilizing informal care:
    key: therapy (total in group): n (%) using services... mean (sd) contacts per user
    APT (n=159): 118 (74%)... 11.5 (11.1)
    CBT (n=161): 106 (66%)... 10.4 (8.3)
    GET (n=160): 120 (75%)... 9.6 (9.3)
    SMC (n=160): 128 (80%)... 12.3 (13.7)

    aside: why do we have one extra patient in CBT and one fewer in APT, anyway?

    12-month post-randomization period:

    key: therapy (total in group): n (%) using services... mean (sd) contacts (n)
    APT (n=146): 108 (74%)... 11.0 (10.7) (n=159)
    CBT (n=145): 96 (66%)... 8.0 (8.6) (n=161)
    GET (n=140): 98 (70%)... 7.7 (8.7) (n=160)
    SMC (n=148): 111 (75%)... 11.4 (11.6) (n=160)

    the APT and SMC groups already had the highest usage of informal care.

    the percentage of patients needing informal care - not counting dropouts - held steady in APT and CBT. It dropped by exactly the same number of points in GET and SMC.

    the mean number of contacts per user - not counting dropouts - fell across all groups, but slightly more in CBT and GET. It has already been noted that CBT patients were encouraged to stop seeking as much help from family, and GET patients to do more activity, potentially including housework.
  7. Esther12

    Esther12 Senior Member

    Messages:
    5,174
    Likes:
    5,157
    Thanks Willow. I don't remember that being pointed out previously.
    WillowJ likes this.
  8. WillowJ

    WillowJ Senior Member

    Messages:
    2,931
    Likes:
    2,357
    WA, USA
    ETA: Dolphin says this is not relevant; it seems I misunderstood.

    I'm also wondering why the contacts per patient were calculated using the original group sizes instead of the group sizes at 12 months (or possibly the number of patients using that service?)

    If I recalculate the mean using the group sizes at 12 months, the table looks like this:

    pre:
    APT (n=159): 118 (74%)... 11.5 (11.1)
    CBT (n=161): 106 (66%)... 10.4 (8.3)
    GET (n=160): 120 (75%)... 9.6 (9.3)
    SMC (n=160): 128 (80%)... 12.3 (13.7)

    post:
    APT (n=146): 108 (74%)... 11.0 → 12.0 (10.7) (n=159)
    CBT (n=145): 96 (66%)... 8.0 → 8.9 (8.6) (n=161)
    GET (n=140): 98 (70%)... 7.7 → 8.8 (8.7) (n=160)
    SMC (n=148): 111 (75%)... 11.4 → 12.3 (11.6) (n=160)
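
    A quick sketch reproducing that recalculation: if the published means were total contacts divided by the baseline group size, rescaling by the 12-month group size recovers the corrected figures above.

    ```python
    # Reproducing the recalculation: rescale each published mean by
    # (baseline n) / (n at 12 months), assuming the published mean was
    # total contacts divided by the baseline group size.
    groups = {
        # group: (published mean, baseline n, n at 12 months)
        "APT": (11.0, 159, 146),
        "CBT": (8.0, 161, 145),
        "GET": (7.7, 160, 140),
        "SMC": (11.4, 160, 148),
    }

    for name, (mean_published, n_base, n_12m) in groups.items():
        total_contacts = mean_published * n_base  # implied total under that assumption
        print(f"{name}: {mean_published} -> {total_contacts / n_12m:.1f}")
    # APT: 11.0 -> 12.0, CBT: 8.0 -> 8.9, GET: 7.7 -> 8.8, SMC: 11.4 -> 12.3
    ```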

    Mean numbers of contacts for APT and SMC patients have now risen slightly compared to pre-trial. Numbers of contacts for CBT and GET have now dropped by about one fewer contact per patient than with the other calculation: GET now changes due to the trial by less than one contact per patient, and CBT by only one and a half. I wonder what that does to the cost-effectiveness calculation?

    Tricksy.

    I don't know if SD changes due to sample size being different.

    ETA:

    comparing differences to favored treatment, CBT, to see what changes:

    pre:
    APT (n=159): 118 (74%)... 11.5 (11.1)... +1.1
    CBT (n=161): 106 (66%)... 10.4 (8.3)... tare
    GET (n=160): 120 (75%)... 9.6 (9.3)... -0.8
    SMC (n=160): 128 (80%)... 12.3 (13.7)... +1.9

    post compared to post:
    APT (n=146): 108 (74%)... 11.0 → 12.0 (10.7) (n=159)... +3.0 → +3.1
    CBT (n=145): 96 (66%)... 8.0 → 8.9 (8.6) (n=161)... tare
    GET (n=140): 98 (70%)... 7.7 → 8.8 (8.7) (n=160)... -0.3 → -0.1
    SMC (n=148): 111 (75%)... 11.4 → 12.3 (11.6) (n=160)... -3.4 → +3.4

    it does appear to affect the frequency of interactions, although I couldn't tell you whether this is statistically significant (I forget how to do these things). Other than the reason above, another potential confounding factor could be that weaker patients might have dropped out of CBT and especially GET.

    The SD fell across all intervention groups, which could potentially be accounted for by weaker patients dropping out, especially if it's claimed some are improving. If 20% are improving and the others are not, the SDs should be getting bigger, right?
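
    A quick simulation of that intuition (all numbers made up): if a random 20% of a group improves markedly and the rest stay put, the group's spread should widen, not shrink.

    ```python
    # Made-up simulation: does a 20% subgroup of improvers widen the SD?
    import numpy as np

    rng = np.random.default_rng(0)
    n = 150
    baseline = rng.normal(loc=11.0, scale=9.0, size=n)  # e.g. contacts per user

    followup = baseline.copy()
    improvers = rng.choice(n, size=int(0.2 * n), replace=False)
    followup[improvers] -= 8.0  # a random 20% improve markedly

    print(f"SD baseline:  {baseline.std():.1f}")
    print(f"SD follow-up: {followup.std():.1f}")  # larger, in expectation:
    # the mixture of improvers and non-improvers spreads the distribution out.
    ```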

    remind me to check other entries in the table and other tables, some other day...
    Valentijn likes this.
  9. Valentijn

    Valentijn Activity Level: 3

    Messages:
    6,034
    Likes:
    8,385
    Amersfoort, Netherlands
    Nicely spotted ... where the hell did they learn math that they think it's okay to get an average by dividing the total number of contacts by "the number in the group" + "some extra random number"? They've made a very basic error, which might look very bad for them, especially if it's pointed out in a letter to be published, or brought to the publisher's attention if it's too late for responses to the paper.
    WillowJ likes this.
  10. WillowJ

    WillowJ Senior Member

    Messages:
    2,931
    Likes:
    2,357
    WA, USA
    Valentijn, I don't think they did learn maths. :rolleyes:

    :angel:
    Bob and Valentijn like this.
  11. Valentijn

    Valentijn Activity Level: 3

    Messages:
    6,034
    Likes:
    8,385
    Amersfoort, Netherlands
    We can add that to the list: ethics, logic, statistics, biology, physiology...
    Shell, Bob, ukxmrv and 1 other person like this.
  12. Dolphin

    Dolphin Senior Member

    Messages:
    6,584
    Likes:
    5,181
    Just looking at this quickly. I'm not sure if this would be an example where one would use "last value carried forward" or similar. This can mean that if you don't have the figures for certain individuals on completion, you use the last value you have for them, e.g. the baseline score.
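
    A minimal sketch of what "last value carried forward" looks like, with made-up data:

    ```python
    # LOCF sketch: where the 12-month value is missing (dropout), reuse the
    # last observation for that participant, here the baseline score.
    import pandas as pd

    df = pd.DataFrame({
        "participant": [1, 2, 3, 4],
        "baseline":    [12.0, 9.0, 11.0, 14.0],
        "month12":     [8.0, None, 10.0, None],  # None = dropped out
    })
    df["month12_locf"] = df["month12"].fillna(df["baseline"])
    print(df)
    # With LOCF the analysed n is identical at both timepoints, which is one
    # way to check from a paper's tables whether it was used.
    ```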
    WillowJ likes this.
  13. Valentijn

    Valentijn Activity Level: 3

    Messages:
    6,034
    Likes:
    8,385
    Amersfoort, Netherlands
    That wouldn't make much sense though, since the point is to evaluate changes in how many services CFS patients are using. And if you're averaging, then resorting to old data when new data is missing would just make an unnecessary mess of things - far more logical to just use the new data. If anything, it would seem more appropriate to exclude the dropouts' old data from the final calculations, to evaluate change.

    I'm also not sure that reusing old data when new data is missing could account for the changes. SMC, for example, had no encouragement in any form to stop seeing doctors, and using Willow's calculations their doctor visits stayed the same instead of decreasing.

    Of course, they'll never release the actual numbers, so no one will ever be able to check. Convenient! :cautious:
  14. Dolphin

    Dolphin Senior Member

    Messages:
    6,584
    Likes:
    5,181
    I'm not sure how this is different from other cases where "last value carried forward" is used with missing data in outcome measures.

    I'm not sure of the rationale for it - perhaps it is that if there is more missing data in some arms than in others, one might suspect that the results there were worse.
  15. WillowJ

    WillowJ Senior Member

    Messages:
    2,931
    Likes:
    2,357
    WA, USA
    So they might have had a legitimate reason for using the original figures. Ok, thanks. As Valentijn said, this makes the number of visits dicey (maybe that's why CBT and GET decreased more: fewer patients... but it's hard to tell for sure), but it's difficult to know how to compensate for dropouts in any case.
    Valentijn likes this.
  16. Dolphin

    Dolphin Senior Member

    Messages:
    6,584
    Likes:
    5,181
    No, it doesn't look like they used "last value carried forward", because if they had, the sample sizes would be the same at both timepoints. What I was trying to say was that this is what they could have done to deal with missing data (but didn't).
    Valentijn and WillowJ like this.
  17. Tom Kindlon

    Tom Kindlon Senior Member

    Messages:
    250
    Likes:
    759
    Simon likes this.
  18. Tom Kindlon

    Tom Kindlon Senior Member

    Messages:
    250
    Likes:
    759
    Some correspondence/e-letters between me and the lead author of this paper:

    ------------



    --------------------

    Simon, Valentijn and biophile like this.
  19. Firestormm

    Firestormm Guest

    Messages:
    5,824
    Likes:
    5,965
    Cornwall England
    @Tom Kindlon

    I don't understand: what was informal care as a cost being used for - a replacement for the cost of a therapist for CBT and GET?

    And if in the paper this cost was zero, is White saying that, even if they had used the minimum wage, or an average of therapists' wages, GET and CBT would still have shown benefit?

    Kind of then begs the question, why not simply show the costs of therapy - doesn't it?

    Thanks.
  20. Tom Kindlon

    Tom Kindlon Senior Member

    Messages:
    250
    Likes:
    759
    Firstly, the costs of the therapies were given.

    Bodies such as NICE will often analyse the costs of a therapy against the savings it gives. In this case, the only savings were in terms of patients in the CBT and GET groups reporting that they required less informal care (the investigators probably imagined that there would be less sickness absence from work, but this didn't happen). Depending on how one values such care, one can then decide whether one considers the therapies worth paying for or not. The authors mentioned in the statistical plan that they would look at informal care in three ways.
    However, when they published the paper, they only used one of these, valuing the cost of informal care at £14.60 per hour.

    They claimed in the paper: "However, sensitivity analyses revealed that the results were robust for alternative assumptions."
    However, this is simply not true for the two other scenarios mentioned in the statistical plan, i.e. using "a zero cost and a cost based on the national minimum wage for informal care". (See the sketch below.)
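
    A minimal sketch of what that sensitivity analysis would involve. The hours figure is a placeholder, and the minimum-wage rate is only approximately the 2011 UK adult rate:

    ```python
    # Sketch of the three informal-care costings named in the statistical plan.
    # Hours and the minimum-wage rate are placeholders, not trial data.
    INFORMAL_CARE_HOURS = 100  # hypothetical hours of informal care saved

    scenarios = {
        "unit cost used in the paper": 14.60,  # GBP/hour, the only value reported
        "national minimum wage":        6.00,  # approximate 2011 rate; placeholder
        "zero cost":                    0.00,
    }

    for name, rate in scenarios.items():
        print(f"{name:>28}: {INFORMAL_CARE_HOURS * rate:8.2f} GBP saved")
    # At a zero unit cost the informal-care saving vanishes entirely, so the
    # cost-effectiveness conclusion can hardly be "robust" to that assumption.
    ```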
