
Comments submitted to the 2016 Cochrane Review of Exercise Therapy for CFS

Messages
32
Number 4 is ... interesting. My attempt at a summary (hope I get it right, not in an ideal state for this sort of work at the moment)

The reviewers deviated from their pre-specified plan of statistical analysis. According to the original plan, none of the outcomes except for sleep are positive. The review fails to discuss this and instead describes exercise therapy as having a broadly positive effect on outcomes.
Yes, that's how I interpret the review. The unplanned changes to the original analysis plan affect only fatigue, but fatigue is the primary outcome so it's the main focus of the review. And even the published post-hoc analysis for fatigue doesn't change the fact that the pooled treatment effects for fatigue at follow-up were non-significant, as per their published sensitivity analysis. The other health outcomes, except sleep (i.e. physical function, overall health, pain, quality of life, depression, and anxiety), were not significant at follow-up with or without the changes to the protocol. In the main discussion in the review, physical function and overall health have erroneously been described as demonstrating a positive treatment effect at follow-up when, in fact, the effects were non-significant as per their own analyses.
 
Messages
32
Of course, I'm assuming that my interpretation of the review is correct. It's possible that I've made some errors, but I've re-checked my work repeatedly. Some other reliable people have also checked all the details that I've discussed and highlighted, and no one has spotted any errors so far.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
Why the Cochrane review seems to have made a significant error
(why the specific outcome switch seems to go against the basic principles of reviews)


Great work by @seaturtle pointing out a significant change from protocol in the Cochrane review that changes a null long-term result (ie there's no point using graded exercise as any gains don't last) into a 'broadly positive' one. I want to try to explain why the specific type of change they made looks like bad practice (regardless of the issue of changing from protocol).

OK, first the result:
So, all but one health indicator (i.e. fatigue, physical function, overall health, pain, quality of life, depression, and anxiety, but not sleep) demonstrated a non-significant outcome for pooled treatment effects at follow-up for exercise therapy versus passive control.

...The way the review is written, any ordinary reader would be under the impression that the main outcomes at follow-up were positive, when in fact the pooled outcomes were not significant, except for sleep only.

Strange change from protocol:

Cochrane advises its reviewers to avoid switching primary outcomes, so it will be interesting to see how they attempt to justify their actions with regard to the switching.
This is key - the default position is that they shouldn't change the protocol, so if they do, they need a really good reason. I think their reason was actually weak, because switching away from pooled treatment effects - which combine the results from studies using different types of (e.g.) fatigue questionnaire - means you can no longer look at all the studies together, which is one of the main benefits of an analysis like this.

By instead analysing studies grouped by the questionnaire used, you have to break the results down into smaller groups, with more chance that some group will show a positive result. What patients and doctors need to know - and what Cochrane should be showing - is the overall effectiveness of exercise, regardless of the specific questionnaire used in each study.
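To make the power point concrete, here's a minimal sketch of a fixed-effect, inverse-variance pooled analysis. The effect estimates and standard errors below are entirely invented (they are not figures from the review); the point is simply that the pooled estimate always has a smaller standard error, and so a narrower confidence interval, than either subgroup analysed on its own.
Code:
import math

# Hypothetical subgroup results as (effect estimate, standard error) - invented numbers,
# standing in for e.g. "Chalder scale studies" vs "other fatigue scale studies".
subgroups = {
    "Chalder scale studies": (-0.40, 0.25),
    "Other scale studies":   (-0.10, 0.30),
}

# Fixed-effect, inverse-variance pooling: weight each subgroup by 1/SE^2.
weights = {name: 1 / se ** 2 for name, (est, se) in subgroups.items()}
pooled_est = sum(weights[name] * est for name, (est, se) in subgroups.items()) / sum(weights.values())
pooled_se = math.sqrt(1 / sum(weights.values()))

for name, (est, se) in subgroups.items():
    print(f"{name}: {est:+.2f} (95% CI {est - 1.96 * se:+.2f} to {est + 1.96 * se:+.2f})")
print(f"Pooled estimate: {pooled_est:+.2f} (95% CI {pooled_est - 1.96 * pooled_se:+.2f} to {pooled_est + 1.96 * pooled_se:+.2f})")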

Two ways of measuring 'effects': standardised (for pooling) vs simple mean difference [boring, but key point]

Many studies use a mean difference to measure the effectiveness of a treatment, comparing the mean (average) gain in the treatment group with the mean gain in the control group, to give a 'mean difference':
Code:
mean difference = mean gain in treatment - mean gain in control
actually, studies often compare the final means of the treatment and control groups (rather than the mean gains), but that only works if the control and treatment groups start with roughly equal means at baseline.
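As a minimal sketch with invented numbers (nothing from any real trial), the calculation is just the difference between the two arms' average gains:
Code:
# Invented per-patient improvements (gain = score at follow-up minus score at baseline).
treatment_gains = [4, 6, 5, 7, 3]   # treatment arm
control_gains = [2, 3, 1, 4, 2]     # control arm

def mean(values):
    return sum(values) / len(values)

mean_difference = mean(treatment_gains) - mean(control_gains)
print(mean_difference)  # 5.0 - 2.4 = 2.6 points, on whatever scale that study used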

Now, if several studies all use the Chalder fatigue scale, it's easy to combine them to give an overall mean difference for all the Chalder fatigue scale studies. The authors did this, grouping studies into three groups, each using the same questionnaire. Happily, this gives a positive result.
For the grouped outcomes, two out of the three outcomes had significant treatment effects for fatigue at follow-up.
But this is flawed, because it doesn't tell us what happens when we look at all the studies together - and looking at all the data together is the biggest reason for doing these reviews.

Now, you can't directly combine, or pool, studies with different scales (questionnaires), because a 5-point difference on one scale won't be comparable with a 5-point difference on another. This is a common problem when reviewing clinical trials for many illnesses, which is why Cochrane and others regularly use a standardised mean difference, which allows data from studies using different measures to be combined. That's presumably why the authors originally used the standardised mean difference in their protocol.

You don't need to know the details, but here's an explanation of the standardised mean difference - handily from Cochrane itself. It's a simple enough formula, and people who read reviews are familiar with the method because it's so widely used.
9.2.3.2 The standardized mean difference

The standardized mean difference is used as a summary statistic in meta-analysis when the studies all assess the same outcome but measure it in a variety of ways (for example, all studies measure depression but they use different psychometric scales). In this circumstance it is necessary to standardize the results of the studies to a uniform scale before they can be combined. The standardized mean difference expresses the size of the intervention effect in each study relative to the variability observed in that study. (Again in reality the intervention effect is a difference in means and not a mean of differences.):

SMD = difference in mean outcome between groups / standard deviation of outcome among participants

Thus studies for which the difference in means is the same proportion of the standard deviation will have the same SMD, regardless of the actual scales used to make the measurements.
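To illustrate with a toy sketch (the means and standard deviations below are invented, not taken from the review): dividing each study's difference in means by its standard deviation puts studies using different questionnaires onto the same standard-deviation scale.
Code:
def smd(mean_treatment, mean_control, sd):
    # Standardised mean difference: difference in means, in units of the outcome's SD.
    return (mean_treatment - mean_control) / sd

# Study A: hypothetical 0-33 fatigue scale; Study B: hypothetical 0-100 fatigue scale.
study_a = smd(mean_treatment=18.0, mean_control=21.0, sd=6.0)    # -> -0.5
study_b = smd(mean_treatment=60.0, mean_control=70.0, sd=20.0)   # -> -0.5

print(study_a, study_b)  # same standardised effect, despite very different raw scales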

So, why would the authors drop this measure? They did report the results according to protocol in the small print, but neglected to mention in either the abstract or the conclusions that exercise made no difference at follow-up. I'd be very interested to hear the authors' explanation for this critical change to their protocol.
Added: a proper reason, that is - 'too hard to interpret/understand' isn't one, because by that logic they are saying that many (most?) Cochrane reviews use a methodology that's too hard to understand.

Any change to protocol that leads to a more positive assessment of treatments needs rigorous justification, not just a bit of hand-waving.

The other health outcomes, except sleep (i.e. physical function, overall health, pain, quality of life, depression, and anxiety), were not significant at follow-up with or without the changes to the protocol.
Thanks for clarifying that.
 
Messages
32
So, why would the authors drop this measure? They did report the results according to protocol in the small print, but neglected to mention in either the abstract or the conclusions that exercise made no difference at follow-up. I'd be very interested to hear the authors' explanation for this critical change to their protocol.

Any change to protocol that leads to a more positive assessment of treatments needs rigorous justification, not just a bit of hand-waving.
BTW, in case anyone wasn't aware, they did provide a (totally underwhelming) reason for the change to the protocol: "We realise that the standardised mean difference (SMD) is much more difficult to conceptualise and interpret than the normal mean difference (MD) [...]". This really isn't an adequate reason, partly because, as Simon has explained, using an SMD is standard practice for Cochrane reviews, and is recommended for pooled analyses of outcomes that use different measures. If you search my letter no.4 for the word "reason" you'll see where I've challenged their reason, with reference to the official Cochrane reviewers' guidelines.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
BTW, in case anyone wasn't aware, they did provide a (totally underwhelming) reason for the change to the protocol: "We realise that the standardised mean difference (SMD) is much more difficult to conceptualise and interpret than the normal mean difference (MD) [...]". This really isn't an adequate reason, partly because, as Simon has explained, using an SMD is standard practice for Cochrane reviews, and is recommended for pooled analyses of outcomes that use different measures. If you search my letter no.4 for the word "reason" you'll see where I've challenged their reason, with reference to the official Cochrane reviewers' guidelines.

So here is Robert Courtney's excellent challenge to the authors' weak reasoning for the change (my formatting for clarity):

Justification for Switching Primary Outcomes

The reason given for switching the primary outcomes in the review is: "We realise that the standardised mean difference (SMD) is much more difficult to conceptualise and interpret than the normal mean difference (MD) [...]".

However, it is questionable whether the reason given for switching the primary outcomes justifies such an unplanned, fundamental change in the methodology of the review:
  • No justification is given as to why the reviewers believe that readers would find it easier to interpret the mean scores of a range of disparate fatigue questionnaires, in a series of sub-analyses, rather than a single standardised mean difference for a pooled analysis of eligible studies.
  • It is not clear to me why it is assumed that a variety of separate fatigue scales should be easier to understand and interpret than a single standardised mean difference.
As the changes to the protocol have had the effect of changing the primary outcomes at follow-up, it would be desirable to provide a well-reasoned case for deviating from the protocol and switching the primary outcomes.

The claim with regard to interpretability raises the question of why standardised mean differences are adequate for other Cochrane reviews, but not this particular review. Cochrane has not adopted a policy of avoiding standardised mean differences; on the contrary, the Cochrane guidelines (section 12.6) encourage their use [12]. So this appears to be a novel, post-hoc decision specific to this study.

[more detail...]
The Cochrane guidelines (section 12.6.1) actually suggest that ordinary mean differences can be difficult to interpret: "The units of such outcomes [i.e. mean differences] may be difficult to interpret, particularly when they relate to rating scales." [12] The guidelines (section 12.6.1) acknowledge that there may be difficulties in interpreting standardised mean differences: "Without guidance, clinicians and patients may have little idea how to interpret results presented as SMDs." The guidelines do not favour one method over another in general, but describe how each may be used for specific purposes; if one wishes to provide an overall treatment effect for studies that use different measures to measure the same construct, then the standardised mean difference is a standard tool which is used widely in Cochrane reviews and other research. The guidelines suggest that "[t]here are several possibilities for re-expressing [standardised mean differences] in more helpful ways".
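For what it's worth, one of the 're-expressing' options the handbook has in mind is simply to multiply a pooled SMD by the standard deviation of a familiar scale, turning it back into points on that scale. A hedged sketch, with both the SMD and the standard deviation invented purely for illustration:
Code:
# Invented numbers, purely illustrative - not results from the review.
pooled_smd = -0.5          # hypothetical pooled standardised mean difference (negative = less fatigue)
familiar_scale_sd = 5.2    # assumed SD of a familiar fatigue scale in a representative study

re_expressed = pooled_smd * familiar_scale_sd
print(re_expressed)        # ~ -2.6 points on that familiar scale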
 
Messages
32
Cochrane have now published the authors' responses to my first two letters.

Cochrane's new publication (version 5)...
View in Browser: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003200.pub5/full
PDF: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003200.pub5/pdf

See pages 120 and 125 in the new PDF for the responses, or do an in-page word search for 'Courtney' in the browser version. (I've copied both responses below as well.)

My two letters discussed the following issues:

1. My first letter discusses the use of post-hoc data from the FINE trial. The Cochrane review claims that all the data it has analysed was previously published (i.e. formally published in a peer-reviewed journal, as per the Cochrane review's protocol). However, the fatigue data the review has used from the FINE trial is based on the Likert scoring system, whereas the FINE trial only published data based on the bimodal scoring system. The FINE trial's Likert data is actually post-hoc data and was initially published by the FINE authors only in a BMJ rapid response post, as an afterthought. I queried this issue in my letter.

2. My second letter discusses the PACE trial data and explains the reasons why I believe that the Cochrane review should have categorised the PACE trial data as 'unplanned', and assessed the risk of bias for 'selective reporting' accordingly. The Cochrane review currently categorises the risk of 'selective reporting' bias for the PACE trial as 'low', whereas my interpretation is that the Cochrane reviewers' guidelines indicate (unambiguously) that the risk of bias for the PACE data should be rated as high. I think my argument is fairly robust and water-tight.

All the issues raised in my letters have been entirely dismissed. All of them! Which I find quite bizarre, especially considering that some of the points I made were factual (i.e. not particularly open to interpretation) and difficult to dispute. Indeed, the authors' response even accepts the main point that I made in relation to the FINE data, but then the author strangely says that we must "agree to disagree" on all issues, even though she agreed with my main substantive point.

This is the response to my first letter...
Larun said:
Dear Robert Courtney

Thank you for your detailed comments on the Cochrane review 'Exercise Therapy for Chronic Fatigue Syndrome'. We have the greatest respect for your right to comment on and disagree with our work. We take our work as researchers extremely seriously and publish reports that have been subject to rigorous internal and external peer review. In the spirit of openness, transparency and mutual respect we must politely agree to disagree.

The Chalder Fatigue Scale was used to measure fatigue. The results from the Wearden 2010 trial show a statistically significant difference in favour of pragmatic rehabilitation at 20 weeks, regardless whether the results were scored bi-modally or on a scale from 0-3. The effect estimate for the 70 week comparison with the scale scored bi-modally was -1.00 (CI-2.10 to +0.11; p =.076) and -2.55 (-4.99 to -0.11; p=.040) for 0123 scoring. The FINE data measured on the 33-point scale was published in an online rapid response after a reader requested it. We therefore knew that the data existed, and requested clarifying details from the authors to be able to use the estimates in our meta-analysis. In our unadjusted analysis the results were similar for the scale scored bi-modally and the scale scored from 0 to 3, i.e. a statistically significant difference in favour of rehabilitation at 20 weeks and a trend that does not reach statistical significance in favour of pragmatic rehabilitation at 70 weeks. The decision to use the 0123 scoring did does not affect the conclusion of the review.

Regards,

Lillebeth Larun
The author has provided the effect estimates (mean differences) for the (pre-specified) bimodal and (post-hoc) Likert data for fatigue in the FINE trial at 70 weeks (follow-up). These outcomes, using the different scoring methods, are different, so switching the outcomes may have had an impact on the review's primary outcome (fatigue) at follow-up. Larun hasn't given us the effect estimates for end-of-treatment, but these would also be expected to differ between bimodal and Likert scoring, so switching the outcomes might also have had a significant impact on the primary outcome of the Cochrane review at end-of-treatment. (Note that the effect estimates given here are mean differences, rather than standardised mean differences, so the differences between the effect estimates may look greater than they actually are simply because they use different scoring scales.)

Larun said: "The decision to use the 0123 [i.e. Likert] scoring did does not affect the conclusion of the review." But she has provided no evidence to demonstrate this! There is no sensitivity analysis. Are we supposed to accept the word of the author, rather than review the evidence (of a Cochrane review - renowned for their rigour and impartiality!), that switching the review's primary outcome data, from pre-specified to unplanned data, has made no difference to the review's outcomes? Is that supposed to be a rigorous and transparent methodology? I quote from Larun's response: "In the spirit of openness, transparency..." But where is the transparency here?

Note that Larun has admitted that I am correct with respect to the FINE data (i.e. that it was previously unpublished data; it was not part of the formally published study, but was simply posted informally in a rapid response): "...the 33-point scale was published in an online rapid response after a reader requested it. We therefore knew that the data existed, and requested clarifying details from the authors..." But then Larun says we must "agree to disagree". And she has refused to correct the review, either by amending the analyses so they use pre-specified data, or by amending the text so that it indicates that the data is unpublished and post-hoc.

Notice the difference in the effect estimates at 70 weeks for bimodal scoring [-1.00 (CI -2.10 to +0.11; p =.076)] vs Likert scoring [-2.55 (-4.99 to -0.11; p=.040)]. Larun says that both outcomes (i.e. bimodal & Likert) are non-significant at 70 weeks, which isn't true of the data she has provided above (though her quoted data is slightly different to the published data - see below for further details). However, the significance or non-significance of the FINE data in isolation has limited relevance for a meta-analysis; changing outcomes in this way may have an impact on the review's findings. For the review's published primary analysis, the PACE trial data was pooled with the FINE data alone, and the combined FINE and PACE data was reported to show a positive and statistically significant effect from exercise therapy.
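For anyone who wants to check these significance claims for themselves, the standard error and an approximate two-sided p-value can be recovered from a 95% confidence interval under a normal approximation. A small sketch using the two 70-week estimates quoted just above (my own reconstruction, not a calculation from the review):
Code:
import math

def se_and_p_from_ci(estimate, lower, upper):
    se = (upper - lower) / (2 * 1.96)           # a 95% CI spans roughly 3.92 standard errors
    z = estimate / se
    p = math.erfc(abs(z) / math.sqrt(2))        # two-sided p under the normal approximation
    return se, p

print(se_and_p_from_ci(-1.00, -2.10, 0.11))     # bimodal scoring: p ~ 0.076 (not significant)
print(se_and_p_from_ci(-2.55, -4.99, -0.11))    # Likert scoring:  p ~ 0.040 (significant at the 0.05 level)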


A friend of mine has commented that the Cochrane reviewers saw, from the BMJ rapid response, that a post-hoc Likert analysis of results allowed for better results to be reported for the FINE trial, so they requested the additional data from the trial's researchers (although they said in the review that they had not) in order to include their own post-hoc analysis in their review.

Note that the review still incorrectly says that all the data is previously published data - even though Larun admits in the letter that it isn't. (i.e. Not published in the formal peer-reviewed sense; we assume that the review wasn't referring to data that might be published in blogs or magazines etc, because the review pretends to analyse formally published data only.)

The authors have ignored my letters and not changed anything in the review, despite admitting in the response that they've used post-hoc data.


The figures for the effect size that Larun has included in the response, quoted above, are slightly different from the data in the Cochrane review. Larun states that the effect size for fatigue at 70 weeks using Likert data is -2.55 (-4.99 to -0.11; p=.040), whereas the Cochrane Review states that it is -2.12 [ -4.49, 0.25 ]. It seems that Larun has quoted the BMJ rapid response by Wearden et al. rather than her own review's calculations...
BMJ rapid response (Wearden et al.) said:
Supportive listening (SL) is still ineffective when compared with GPTAU (Table 1 and Figure 1). Effect estimates [95% confidence intervals] for 20 week comparisons are: PR versus GPTAU -3.84 [-6.17, -1.52], SE 1.18, P=0.001; SL versus GPTAU +0.30 [-1.73, +2.33], SE 1.03, P=0.772. Effect estimates [95% confidence intervals] for 70 week comparisons are: PR versus GPTAU -2.55 [-4.99, -0.11], SE 1.24, P=0.040; SL versus GPTAU +0.36 [-1.90, 2.63], SE 1.15, P=0.752.



This is the response to my second letter...
Larun said:
Dear Robert Courtney

Thank you for your detailed comments on the Cochrane review 'Exercise Therapy for Chronic Fatigue Syndrome'. We have the greatest respect for your right to comment on and disagree with our work. We take our work as researchers extremely seriously and publish reports that have been subject to rigorous internal and external peer review. In the spirit of openness, transparency and mutual respect we must politely agree to disagree.

Cochrane reviews aim to report the review process in a transparent way, for example, are reasons for the risk of bias stated. We do not agree that Risk of Bias for the Pace trial (White 2011) should be changed, but have presented it in a way so it is possible to see our reasoning. We find that we have been quite careful in stating the effect estimates and the certainty of the documentation. We note that you read this differently.

Regards,

Lillebeth
I'm at a loss to understand what is meant by: "We do not agree that Risk of Bias for the Pace trial (White 2011) should be changed, but have presented it in a way so it is possible to see our reasoning." I don't think that the review discusses the issue of the PACE data being unplanned, so I'm not sure what is meant by the suggestion that the issue has been discussed. This is simply a point-blank refusal to engage with the substantive and serious issues that I raised.
 

JohnCB

Immoderate
Messages
351
Location
England
@seaturtle The author replies are utterly condescending. The tone is that they do not recognise your right to engage with them. I think that other readers will see this too, and if they have the interest, they will read your contributions and see that no effort has been made to engage with carefully constructed arguments. The important thing is that your arguments are included and are available to be read. They are there to be understood and developed by others in due course. I think the brusqueness of the replies is telling in itself. The question stands as to why there was no better response to your criticism. They may have dismissed you but they have not countered you. I thank you for your effort here and I do not think it is in vain.
 

user9876

Senior Member
Messages
4,556
This is the response to my second letter...

I'm at a loss to understand what is meant by: "We do not agree that Risk of Bias for the Pace trial (White 2011) should be changed, but have presented it in a way so it is possible to see our reasoning." I don't think that the review discusses the issue of the PACE data being unplanned, so I'm not sure what is meant by the suggestion the issue has been discussed. This is simply a point-blank refusal to engage with the substantive and serious issues that I raised.

I read this as Cochrane saying they see no risk of bias with outcome switching in an open-label trial. They claim that their review has undergone rigorous peer review, so I assume that, as an organisation, they are expressing their belief that outcome switching carries only a small risk of bias.
 

Esther12

Senior Member
Messages
13,774
The question stands as to why there was no better response to your criticism. They may have dismissed you but they have not countered you.

It's 'piss off' dressed up in fancy clothing:

We have the greatest respect for your right to comment on and disagree with our work.
...
In the spirit of openness, transparency and mutual respect we must politely agree to disagree.

They've totally avoided engaging in any real debate over the points raised. It's worrying that Cochrane agreed to publish these responses without any criticism from their editor.
 

Esther12

Senior Member
Messages
13,774
I just thought I'd bump this, as White seems to be increasingly relying on Cochrane to distract attention from the problems with PACE. Anyone know if there's been any movement? Should we complain about the dismissive way the earlier comments were dealt with?
 

Sidereal

Senior Member
Messages
4,856
Just saw this thread. Their comments about the SMD are utterly idiotic. SMDs are actually easier to interpret than mean differences. WHY would you divide up the outcome measures and lose power instead of doing a pooled analysis of each domain (fatigue, physical function etc.)? It defeats the whole purpose of doing a meta-analysis. :confused:
 

Esther12

Senior Member
Messages
13,774
All of their responses are pretty appalling. I can't believe Cochrane is letting them get away with this. Any idea how to raise concerns higher up the chain?

Also, have they missed out comment 3? I couldn't see it.

I've pulled out the three comments I did see here, and labelled them using the titles from Courtney's site:

https://sites.google.com/site/mecfs...exercise-therapy-for-chronic-fatigue-syndrome


Reply #1 to feedback submitted on 16th April 2016
FINE trial: unpublished data


Dear Robert Courtney

Thank you for your detailed comments on the Cochrane review 'Exercise Therapy for Chronic Fatigue Syndrome'. We have the greatest respect for your right to comment on and disagree with our work. We take our work as researchers extremely seriously and publish reports that have been subject to rigorous internal and external peer review. In the spirit of openness, transparency and mutual respect we must politely agree to disagree.

The Chalder Fatigue Scale was used to measure fatigue. The results from the Wearden 2010 trial show a statistically significant difference in favour of pragmatic rehabilitation at 20 weeks, regardless whether the results were scored bi-modally or on a scale from 0-3. The effect estimate for the 70 week comparison with the scale scored bi-modally was -1.00 (CI-2.10 to +0.11; p =.076) and -2.55 (-4.99 to -0.11; p=.040) for 0123 scoring. The FINE data measured on the 33-point scale was published in an online rapid response after a reader requested it. We therefore knew that the data existed, and requested clarifying details from the authors to be able to use the estimates in our meta-analysis. In our unadjusted analysis the results were similar for the scale scored bi-modally and the scale scored from 0 to 3, i.e. a statistically significant difference in favour of rehabilitation at 20 weeks and a trend that does not reach statistical significance in favour of pragmatic rehabilitation at 70 weeks. The decision to use the 0123 scoring did does not affect the conclusion of the review.

Regards,

Lillebeth Larun

Reply #2 to feedback submitted on 1 May 2016 -
PACE trial: selective reporting bias

Dear Robert Courtney

Thank you for your detailed comments on the Cochrane review 'Exercise Therapy for Chronic Fatigue Syndrome'. We have the greatest respect for your right to comment on and disagree with our work. We take our work as researchers extremely seriously and publish reports that have been subject to rigorous internal and external peer review. In the spirit of openness, transparency and mutual respect we must politely agree to disagree.

Cochrane reviews aim to report the review process in a transparent way, for example, are reasons for the risk of bias stated. We do not agree that Risk of Bias for the Pace trial (White 2011) should be changed, but have presented it in a way so it is possible to see our reasoning. We find that we have been quite careful in stating the effect estimates and the certainty of the documentation. We note that you read this differently.

Regards,

Lillebeth

Where is comment 3 ('Misreporting of outcomes for physical function', submitted on 12th May 2016)?!

Have I missed it somehow?


Reply #?? to feedback submitted, 3 June 2016
Primary Outcome Switching in the Cochrane Review


Dear Robert Courtney

Thank you for your ongoing and detailed scrutiny of our review. We have the greatest respect for your right to comment on and disagree with our work, but in the spirit of openness, transparency and mutual respect we must politely agree to disagree.

Presenting health statistics in a way that makes sense to the reader is a challenge. Statistical illiteracy is – according to Girgerenzer and co-workers – common in patients, journalists, and physicians (1). With this in mind we have presented the results as mean difference (MD) related to the relevant measurement scales, for example Chalder Fatigue Scale, as well as standardised mean difference (SMD). The use of MD enables the reader to transfer the results to the relevant measurement scale directly and judge the effect in relation to the scale. We disagree that presenting MD and SMD rather than SMD and MD is an important change, and we disagree with the claim that the analysis based on MD and SMD are inconsistent. This has been discussed as part of the peer-review process. Confidence intervals are probably a better way to interpret data that P values when borderline results are found (2). Interpreting the confidence intervals, we find it likely that exercise with its SMD on -0.63 (95% CI -1.32 to 0.06) is associated with a positive effect. Moreover, one should also keep in mind that the confidence interval of the SMD analysis are inflated by the inclusion of two studies that we recognize as outliers throughout our review. Absence of statistical significance does not directly imply that no difference exists.

All the included studies reported results after the intervention period and this is the main results. The results at different follow-up times are presented in the text, but we have only included data available at the last search date, 9 may 2014. When the review is updated, a new search will be conducted to find new, relevant follow up data and new studies. As a general comment, it is often challenging to analyse follow-up data gathered after the formal end of a trial period. There is always a chance that participants may receive other treatments following the end of the trial period, a behaviour that will lead to contamination of the original treatment arms and challenge the analysis.

Cochrane reviews aim to report the review process in a transparent way, which enables the reader to agree or disagree with the choices made. We do not agree that the presentation of the results should be changed. We note that you read this differently.

Regards,

Lillebeth Larun

1. Girgerenzer G, Gaissmaier W, Kurtz-Milcke E, Schwartz LM, Woloshin S. Helping Doctors and Patients Make Sense of Health Statistics. Pyschological Science in the Public Interest, 2008;8:(2):53-96. http://www.psychologicalscience.org/journals/pspi/pspi_8_2_article.pdf.

2. Hackshaw A and Kirkwood A. Interpreting and reporting clinical trials with results of borderline significance. BMJ 2011;343:d3340 doi: 10.1136/bmj.d3340
 

Dolphin

Senior Member
Messages
17,567
Jay Spero on Twitter has pointed out there is an update.

It looks like it contains 3 comments from Robert Courtney now plus replies.

http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003200.pub6/full
Well done to Robert Courtney/@seaturtle on the 3rd reply which I just read (I can't remember whether I read it before or not). He calmly points out the changes they have made and the problems with their approach in a thorough manner.

Well done to him for spotting the change: I'm not sure I noticed it myself and certainly didn't pay much attention to the significance of it.
 

Esther12

Senior Member
Messages
13,774
Well done to Robert Courtney/@seaturtle on the 3rd reply which I just read (I can't remember whether I read it before or not). He calmly points out the changes they have made and the problems with their approach in a thorough manner.

Well done to him for spotting the change: I'm not sure I noticed it myself and certainly didn't pay much attention to the significance of it.

All the comments submitted were really strong - which makes the responses all the more depressing!