
A cost effectiveness of the PACE trial

Discussion in 'Latest ME/CFS Research' started by user9876, Aug 1, 2012.

  1. Simon

    Simon

    Messages:
    1,531
    Likes:
    4,910
    Monmouth, UK
Unfortunately there is no info about what they said, but in answer to the question "Do the best treatments for CFS cost more?" [edit: they mean CBT & GET]
    1. Yes, according to direct healthcare costs
    2. Yes, for GET according to societal costs while CBT is cost-neutral - if informal care is valued at the minimum wage
     
    Dolphin likes this.
  2. user9876

    user9876 Senior Member

    Messages:
    796
    Likes:
    1,958
I particularly object to their use of "best", or "most effective", when they have not compared their treatments with others such as Rituximab or Ampligen. In their paper they don't even acknowledge the existence of other possible treatments.
     
    WillowJ, Dolphin and alex3619 like this.
  3. Bob

    Bob

    Messages:
    8,910
    Likes:
    12,605
    South of England
    Thanks for that, Dolphin.
     
    user9876 likes this.
  4. Simon

    Simon

    Messages:
    1,531
    Likes:
    4,910
    Monmouth, UK
Thanks. For those who prefer a less mathematical explanation of bootstrapping, try this PowerPoint one (2001); it still makes sense if you skip the maths slides, and you only need the first 10 or so slides, plus the summary/conclusion.


    Applying bootstrapping to this paper
In this paper, 1,000 resamples (each of 570) were made of the original data for net QALY benefit. Figure 1 shows the results. So, where healthcare providers are willing to spend £30,000 to gain one QALY (the threshold usually used in the UK), they found that in 100 of those 1,000 resamples APT came out as the most effective, in about 250 of the resamples GET came out top, and in almost all the rest CBT came out top. Note that each resample has slightly different data, which is why CBT 'won' out in some resamples while APT won out in others.

    Is Bootstrapping a reliable way to evaluate data?
    This from the powerpoint presentation above:
    • A very very good question !
    • Jury still out on how far it can be applied, but for now nobody is going to shoot you down for using it.
• Good agreement for normal (Gaussian) distributions; skewed distributions tend to be more problematic, particularly in the tails (the bootstrap underestimates the errors).
    I think that's a 'maybe'.
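(For anyone curious what a bootstrap actually does in code, here is a minimal Python sketch of the resampling idea: a toy percentile confidence interval for a mean. It illustrates the technique only; it is not the procedure or the data from the paper.)

```python
import random

def bootstrap_ci(data, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean: resample
    the data with replacement many times, recompute the mean each time,
    and take the middle (1 - alpha) of the resulting estimates."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        # Each resample is the same size as the data, drawn with replacement.
        sample = [rng.choice(data) for _ in data]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

The more resamples, the more stable the interval becomes, which is part of why 1,000 is a common choice.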
     
    Dolphin and user9876 like this.
  5. Sam Carter

    Sam Carter Guest

    Messages:
    297
    Likes:
    192
    Someone asked me how the percentages in Table 4 had been calculated, because they're not equal to N/(total n)*100.

    Have they been adjusted somehow? (Or are we both misreading the paper?)
     
  6. Valentijn

    Valentijn Activity Level: 3

    Messages:
    6,714
    Likes:
    10,222
    Amersfoort, Netherlands
Regarding bootstrapping - isn't there a fundamental problem with having the ability to "bootstrap" repeatedly until you randomly get the results you want? And why would bootstrapping ever be a satisfactory replacement for doing the actual statistical analysis?
     
  7. user9876

    user9876 Senior Member

    Messages:
    796
    Likes:
    1,958
    Thanks for the explanation.

My guess is that the number of samples you need will depend on the complexity of the distributions that they are drawn from; hence it works well for a normal distribution. I seem to remember that if you fit a distribution to a set of samples, the amount of data you need grows exponentially with each variable. In the document that Dolphin pointed to (message #214) there are references to papers looking at sample size.

One of the things that worried me about the PACE trial results was the increased standard deviation for each group. My first thought was that the result pdfs were multimodal. They haven't published that kind of information, so it's impossible to tell.

However, as Dolphin suggested, I don't think this is the major issue with the work. To me the major issue is the gap between actual and perceived function/fatigue. It's also interesting to look at the technicalities of the 3 different scales reported. The designers of the EQ-5D scale are very clear that the coding they use has no arithmetic meaning, hence you can't add up the score, yet this is the mistake that both the fatigue scale and the SF36-PF scale fall into. I have slightly more sympathy for the SF36-PF scale in that it is measuring a single factor; however, it suffers from edge effects and non-linearities, hence using means and standard deviations is not valid. Hence the clinically useful difference they use is also not valid.
     
  8. Dolphin

    Dolphin Senior Member

    Messages:
    6,872
    Likes:
    6,166
Can you point out the ones which are out? Although the top figure gives sample sizes, it's still possible the sample size could be a little smaller for individual questions which were incomplete/unclear/spoiled in some way.
     
  9. alex3619

    alex3619 Senior Member

    Messages:
    7,722
    Likes:
    12,640
    Logan, Queensland, Australia
    Hi Valentijn, potentially I think you are right. Keep retesting till you get a sequence of data with the result you want. Or, if this doesn't work and you are very unethical, scrap it and start again. Repeat until you get a sequence you like. Then stop testing. Whether this happens in any particular case might be very hard to judge however.

    Bye, Alex
     
    WillowJ and Dolphin like this.
  10. Dolphin

    Dolphin Senior Member

    Messages:
    6,872
    Likes:
    6,166
Thanks for that. However, the solid line is SMC alone, not APT (see Figure 2), i.e. they found that for (around) 100 of those 1,000 resamples, SMC came out as the most effective.
     
    Simon likes this.
  11. Sam Carter

    Sam Carter Guest

    Messages:
    297
    Likes:
    192
    All the ones I've checked are incorrect; as an example, for APT (n=141)

    Income benefits N (%)

    6-month pre-randomisation period 28 (18) but 28/141*100=19.86

    12-month post-randomisation period 33 (22) but 33/141*100=23.40

Assuming they've rounded percentages to within ±0.5, then to derive the percentages shown, you would need n∈{152, 153, 154, 155, 156, 157, 158, 159, 160} for the first calculation, and n∈{147, 148, 149, 150, 151, 152, 153} for the second calculation.
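(The back-calculation above is easy to automate. Here is a small Python sketch that searches for the total sample sizes n consistent with a reported whole-number percentage; the function name and search range are illustrative, and the exact result at boundary cases depends on the rounding convention assumed.)

```python
def consistent_sample_sizes(count, reported_pct, n_lo=100, n_hi=200):
    """Return the sample sizes n in [n_lo, n_hi] for which `count`
    out of n rounds to the reported whole-number percentage.
    Uses Python's round(), which rounds exact halves to even."""
    return [n for n in range(n_lo, n_hi + 1)
            if round(count / n * 100) == reported_pct]
```

For example, 28 claimants reported as 18% is consistent with n from 152 to 160, matching the sets above.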

    I think you're right about the sample size varying as a consequence of incomplete data.

ETA: But if n (total n) varies, is it not somewhat misleading to present data in the form of N (%), when the table implies that n does not vary? Would it not be more accurate, and consistent, simply to give the percentage rather than an absolute number? If the absolute number is taken from a smaller sample it will, in this instance, potentially understate the numbers receiving welfare/income benefits, as those for whom data is missing might also be claiming some kind of benefit. In short, the absolute numbers given set a lower bound on the number of claimants; in reality it could be higher.
     
    Dolphin likes this.
  12. Bob

    Bob

    Messages:
    8,910
    Likes:
    12,605
    South of England

    Edit: Sorry everyone, it looks like I've got this wrong.
    It looks like these details are incorrect. See my later post, and biophile's post, for explanation.

    Hi Sam,
    Eagle eyes! (Good find.)
    It looks like the percentages relate to the original PACE Trial study numbers.
    You can see the relevant numbers in Table 1 of this paper. (159, 161, 160, 160)

    It's not very helpful.
I think it makes the percentages that they've given slightly lower than they should be.
    So the actual percentages are slightly higher.

    I'm sure it's not an intentional 'error', to make the figures for benefit claimants, and insurance claimants, look better than they are.
     
    Simon likes this.
  13. Bob

    Bob

    Messages:
    8,910
    Likes:
    12,605
    South of England
    Regarding Lost Employment, the paper says:
    "There was no clear difference between treatments in terms of lost employment."
    And yet the unadjusted differences between therapies and SMC are as follows:
    Difference from SMC: Changes in Lost Employment: APT = 62, CBT = -1,157, GET = -711, SMC = 0

    I'm not complaining about this, but it is a bit confusing.
    I'm surprised that a £1157 unadjusted difference can be adjusted down to 'no clear difference'.
     
  14. Simon

    Simon

    Messages:
    1,531
    Likes:
    4,910
    Monmouth, UK
Hi Valentijn
With bootstrapping, the more resampling (more bootstrapping) the more reliable the estimate is likely to be, so the problem would be with stopping early when you get a result you like, not keeping going until you like the results. 1,000 resamples (as in this paper) is a decent size, so it's unlikely they stopped early to get the 'right' results in this case.
     
    Dolphin likes this.
  15. Simon

    Simon

    Messages:
    1,531
    Likes:
    4,910
    Monmouth, UK
    Hi Bob
    I think your interpretation of Table 6 is spot on but I think Figures 1 & 2 are computed on a different basis, looking at costs/benefits vs baseline, rather than vs SMC. This from the method:
    So to me it looks like they compute QALY gains for each individual vs baseline, rather than vs SMC. But I'm not entirely sure about that.
     
  16. Bob

    Bob

    Messages:
    8,910
    Likes:
    12,605
    South of England
Thanks very much to everyone for the info about Figure 1.

I think I've finally figured out how to work out the QALY-based net benefit per individual that Figure 1 is based on.
    I was making a basic error before.

    Here is how they work out the QALY-based net benefit per individual:

    "Net benefit values were computed for each study participant, defined as:
    the value of a QALY
    multiplied by the number of QALYs gained
    minus the cost (from both healthcare and societal perspectives)."


So to make the calculations:
• take the proposed value of a QALY (e.g. £30,000);
• multiply it by the incremental number of QALYs gained per individual for each therapy, given in Table 6 (this gives you the QALY-based gross total cost benefit per individual for each therapy);
• then subtract the QALY-based individual healthcare cost for each therapy (calculated by taking the cost per QALY for each therapy, given in Table 6, multiplied by the number of QALYs gained per individual, given in Table 6).

    This gives the QALY-based net benefit values for each individual for each therapy, relative to SMC, which I think Figure 1 is based on. My figures seem to correspond to Figure 1 anyway.

And here are my calculations for three different QALY values (£30,000, £20,000, £0):

    (Negative values are net costs. Positive values are net savings.)


    £30,000 QALY value
    SMC 0
    APT (30000 x 0.0149) [447] - (55235 x 0.0149) [823] = -376 (net cost)
    CBT (30000 x 0.0492) [1476] - (18374 x 0.0492) [904] = 572 (net saving)
    GET (30000 x 0.0343) [1029] - (23615 x 0.0343) [810] = 219 (net saving)



    £20,000 QALY value
    SMC 0
    APT (20000 x 0.0149) [298] - (55235 x 0.0149) [823] = -525 (net cost)
    CBT (20000 x 0.0492) [984] - (18374 x 0.0492) [904] = 80 (net saving)
    GET (20000 x 0.0343) [686] - (23615 x 0.0343) [810] = -124 (net cost)



    £0 QALY value
    SMC 0
    APT 0 - 823 = -823 (net cost)
    CBT 0 - 904 = -904 (net cost)
    GET 0 - 810 = -810 (net cost)



    At a proposed QALY value of £20,000 (see above), the net benefit values of CBT and GET are pretty close to zero (i.e. crossing from negative values to positive values). This is near to the (relative) 'zero' value of SMC. So this is why the CBT/GET/SMC lines cross over near £20,000, on Figure 1. This makes me think that I've got these calculations right this time.


    Edit: Except, my numbers don't seem exact enough, so there is room for improvement here.
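(The arithmetic in this post is easy to check mechanically. Below is a short Python sketch using only the Table 6 figures quoted above; the dictionary layout and function name are illustrative.)

```python
# Incremental QALYs gained and cost per QALY vs SMC, as quoted from
# Table 6 in the post above.
therapies = {
    "APT": {"qalys": 0.0149, "cost_per_qaly": 55235},
    "CBT": {"qalys": 0.0492, "cost_per_qaly": 18374},
    "GET": {"qalys": 0.0343, "cost_per_qaly": 23615},
}

def net_benefit(qaly_value):
    """Net benefit vs SMC at a given willingness-to-pay per QALY:
    (value of a QALY x QALYs gained) minus the incremental cost."""
    return {name: round(qaly_value * t["qalys"] - t["cost_per_qaly"] * t["qalys"])
            for name, t in therapies.items()}
```

net_benefit(30000) reproduces the £30,000 figures above: APT -376, CBT 572, GET 219.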
     
  17. Bob

    Bob

    Messages:
    8,910
    Likes:
    12,605
    South of England
    Hi Simon,
Thank you very much for that.
    Our posts crossed over.
    If you are interested in this, then see if you agree with what I've done with my previous post.
    Bob

    Edit: Table 6 seems to be based on the changes (from pre-randomisation to post-randomisation) over and above SMC, which they indicate as 'incremental' costs and effects.
     
  18. user9876

    user9876 Senior Member

    Messages:
    796
    Likes:
    1,958
    They seem to be using a regression model.
     
  19. Simon

    Simon

    Messages:
    1,531
    Likes:
    4,910
    Monmouth, UK
    Thanks User & Bob
Based on our discussions here, my best/final guess is that Figures 1 & 2 were constructed as follows:
• Net benefit (QALY value x change from baseline - cost) is calculated for each patient based on changes from baseline
• 1,000 resamples are created in the bootstrapping process, and for each resample regression analysis is used to compute which therapy is best (giving the percentage likelihood of each therapy being best at each QALY value).
Bob, your figures are calculated using average data for the CBT, GET etc. groups, rather than the calculation per individual (vs baseline, not SMC) that I think is used in Figures 1 & 2. But of course results based on the averages should be broadly similar to results based on individuals, which is, I think, why your calculations give similar answers to theirs. i.e. they have done the calculations right, and so have you.
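(Putting those two steps together, here is a rough Python sketch of how an acceptability curve like Figures 1 & 2 could be built: per-patient net benefit at each QALY value, bootstrapped to estimate how often each arm comes out best. It uses simple group means in place of the paper's regression adjustment, and all names and data shapes are assumptions, not the paper's actual code or data.)

```python
import random

def ceac(patients, qaly_values, n_resamples=1000, seed=1):
    """Sketch of a cost-effectiveness acceptability curve. For each
    willingness-to-pay value, bootstrap the patients and record how
    often each arm has the highest mean per-patient net benefit.
    `patients` maps arm name -> list of (qalys_gained, cost) tuples."""
    rng = random.Random(seed)
    curves = {lam: {arm: 0 for arm in patients} for lam in qaly_values}
    for lam in qaly_values:
        for _ in range(n_resamples):
            means = {}
            for arm, rows in patients.items():
                # Resample this arm's patients with replacement.
                sample = [rng.choice(rows) for _ in rows]
                nb = [lam * q - c for q, c in sample]
                means[arm] = sum(nb) / len(nb)
            curves[lam][max(means, key=means.get)] += 1
    # Convert win counts into the fraction of resamples won.
    return {lam: {a: n / n_resamples for a, n in d.items()}
            for lam, d in curves.items()}
```

Sweeping qaly_values from £0 up to £50,000 or so would trace out curves of the same general shape as the paper's figures.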
     
  20. Sam Carter

    Sam Carter Guest

    Messages:
    297
    Likes:
    192
    Thanks, Bob - but not my eagle eyes!

    I don't want to labour this point because it may be of no importance (and my calculations could well be wrong), but looking at the figures in the APT column, the (small) n of 141 gives the wrong percentage for all the (absolute) N provided; for the other columns it's possible to find one, and only one, percentage (p) such that N/n*100=p, but it isn't clear to me what the (small) n denotes in this context.
     
