• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.


Examples of misleading statements in CFS papers from biopsychosocialists

biophile

Places I'd rather be.
Messages
8,977
We all make mistakes, but the "expert" authors of biopsychosocial papers on CFS seem to make them rather frequently, and, coincidentally, in their favour too. It takes a lot of time and effort to thoroughly investigate even a single claim, and so many of them are made across these papers. Systematically examining the issue would be overwhelming; I usually discover such errors when I happen to follow something else up or am already familiar with the source being cited.

It happens often enough that I suspect there are hundreds if not thousands of examples, either already known or waiting to be discovered; anyone could probably select a paper at random and find a claim that is not supported by the reference given. I don't mean mere differences of opinion or interpretation, or cherry-picking (another can of worms), but blatant errors. Is this problem normal in wider academia? Is it spin or incompetence, and where are the peer reviewers in all this? How does it fit with the accusations of "zombie science" and the criticisms of the (abuse of) "evidence-based medicine" as practised by proponents of and lobbyists for the biopsychosocial paradigm?

When I first read the term "smoke and mirrors" applied to the biopsychosocial approach and the cognitive behavioural model of CFS, I was relatively naive and thought the wording too strong; surely it couldn't be that bad, even if there were some issues with the available evidence. Since then, however, I have come across so many questionable studies and related statements that "smoke and mirrors" does seem an accurate description after all. If all the papers and citations were fed into a computer model, would it look like a giant web of spin?

I'm starting this thread to document such misleading statements, in the hope that others will contribute. I will post several examples I have worked on recently while they are relatively fresh. I'm sure countless examples have already been discussed on other threads at Phoenix Rising or are embedded in my notes somewhere, but I'm not going to wade through them to find more right now; it was difficult enough preparing these.
 

biophile

Burgess et al (inc. Chalder) 2011 on CBT (http://www.kcl.ac.uk/innovation/groups/projects/cfs/publications/assets/2011/Burgessface2face.pdf) :

Burgess et al wrote: "A Cochrane review has shown that CBT improves fatigue and physical functioning in about 40% of patients (Price, Mitchell, Tidy and Hunot, 2008)."

The Cochrane 2008 systematic review on CBT in question (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001027.pub2/pdf) did not show this at all.

For rates of a "clinical response" in fatigue, it was 40% for CBT vs 26% for usual care (4 studies, 371 participants), so the figure should be 14% for CBT over and above usual care. (One could argue that the remaining 26% in the CBT group may report greater improvement than the 26% in the usual-care group, but Cochrane does not go into that and only reports a small-to-moderate group difference in fatigue at post-treatment [SMD -0.39, 95% CI -0.60 to -0.19] across 6 studies or 373 participants.)
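The subtraction being done here can be sketched in a few lines (a minimal illustration; the only inputs are the response rates as quoted from the Cochrane review above):

```python
# Clinical-response rates in fatigue, as quoted from the Cochrane 2008 review
cbt_response = 0.40         # CBT group
usual_care_response = 0.26  # usual-care control group

# Absolute difference attributable to CBT over and above usual care
absolute_difference = cbt_response - usual_care_response
print(f"Net response attributable to CBT: {absolute_difference:.0%}")

# Number needed to treat: patients treated per one additional clinical response
nnt = 1 / absolute_difference
print(f"Number needed to treat: {nnt:.1f}")
```

The point stands out immediately: the headline "40%" is the raw CBT-group rate, while the control-adjusted figure is 14 percentage points (an NNT of roughly 7).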

Also, there was no statistical difference in clinical response at short-term follow-up (3 studies, 353 participants), so the ability of CBT to elicit a self-rated clinical response in fatigue appears to be transient. However, the difference in fatigue severity between groups apparently remained significant at short/medium-term follow-up, i.e. [SMD -0.47, 95% CI -0.69 to -0.25] in 4 studies or 330 participants. This discrepancy may be explained by the following statement: "At follow-up, 1-7 months after treatment ended, people who had completed their course of CBT continued to have lower fatigue levels, but when including people who had dropped out of treatment, there was no difference between CBT and usual care."

Furthermore, on the outcome of average self-rated physical function scores between groups, there was no statistically significant difference at either post-treatment (4 studies, 318 participants) or short/medium-term follow-up (3 studies, 275 participants). Note the temporal definitions for the above-mentioned follow-up periods: "Outcomes were classified as post treatment, short term followup (1-6 months post-treatment), medium term follow-up (7-12 months post-treatment) and long term (longer than 12 months)."

So it was incorrect for Burgess et al to claim that this Cochrane 2008 review showed that "CBT improves fatigue and physical functioning in about 40% of patients".
 

biophile

Collins et al (inc. Crawley) 2011 on PACE (http://www.biomedcentral.com/content/pdf/1472-6963-11-217.pdf) :

Collins et al: "Evidence from a recent evidence trial of cognitive behavioural therapy and graded exercise therapy indicated a recovery rate of 30-40% one year after treatment."

24. White P, Goldsmith K, Johnson A, Potts L, Walwyn R, Decesare J, Baber H, Burgess M, Clark L, Cox D, et al: Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 2011, 377:823-836.

This of course is referring to the infamous PACE Trial (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065633). The figure of 30-40% seems to be based on a combination of two figures: 1) the 30%/28% of CBT/GET participants being within the "normal" range in fatigue (CFQ ≤18/33 points, Likert scoring) and physical function (PF/SF-36 ≥60/100 points); 2) the 41% of participants in the CBT/GET groups who reported being "much better or very much better" on the CGI-I scale.

Why their definition of "normal" is dubious is worthy of an entire paper, but suffice it to say that ≤18/33 points on the fatigue scale and ≥60/100 points in physical function is an inappropriate threshold for normal, let alone recovery, and "much better or very much better" on the clinical global impression scale is not necessarily recovery either. The PACE authors themselves stated in their reply to criticism that "It is important to clarify that our paper did not report on recovery; we will address this in a future publication." (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60651-X/fulltext)

The physical function threshold of ≥60 points is the most dubious, because in the same trial the authors deemed ≤65 points a sign of "significant disability". Also, in the 2007 trial protocol, the physical function criterion for recovery was much higher, at ≥85 points (http://www.biomedcentral.com/1472-6882/7/12).

By the same logic used by Collins et al, the de facto SMC control group in the PACE Trial had a "recovery rate" (cough) of 15-25%, which would indicate that the true "recovery rate" (cough) of CBT and GET is more like 15% over and above SMC, an inconvenient statistic left out by Collins et al, who give the false impression that CBT/GET is responsible for a 30-40% recovery rate.
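The control-adjusted arithmetic applied here can be made explicit (an illustrative sketch; the only inputs are the ranges quoted above):

```python
# "Recovery" ranges as quoted: 30-40% for CBT/GET, 15-25% for the SMC control
cbt_get_range = (0.30, 0.40)
smc_range = (0.15, 0.25)

# Net "recovery" over and above SMC, pairing the range ends both ways
net_worst = cbt_get_range[0] - smc_range[1]  # most conservative comparison
net_best = cbt_get_range[1] - smc_range[0]   # most generous comparison
midpoint = (net_worst + net_best) / 2

print(f"Net 'recovery' attributable to CBT/GET: {net_worst:.0%} to {net_best:.0%}")
print(f"Midpoint: {midpoint:.0%}")
```

However the range ends are paired, the midpoint lands around the 15% figure mentioned above, far from the headline 30-40%.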
 

biophile

(continued) [Numerous] on PACE ...

Such confusion between "normal" and "recovery", repeated by the BMJ as "cure" (http://www.meassociation.org.uk/?p=5757) and by The Guardian (http://www.guardian.co.uk/society/2011/feb/18/study-exercise-therapy-me-treatment), comes as no surprise when the PACE authors themselves talked at a press conference about how CFS prevents people from "leading a normal life" and how CBT/GET doubles the odds of "[getting] back to normal levels of functioning and fatigue" (http://www.meactionuk.org.uk/pacepressconf.html).

Contributing to this confusion was the Lancet editorial which accompanied the PACE 2011 Trial paper, where authors Bleijenberg & Knoop quote the "about 30%" proportion of CBT/GET participants who were "normal" in fatigue and physical function at 52 weeks in PACE, and claim that "PACE used a strict criterion for recovery" derived from "a healthy person's score" (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60172-4/fulltext). Both statements are false: no recovery was reported, and the dataset used to derive the threshold of normal for physical function was from a general population which included unhealthy people and the elderly (note that the PACE authors erroneously described it as a "working age population" in their 2011 Lancet paper but admitted this error in their authors' reply).

Bleijenberg & Knoop's blunder was ironic considering that both of them co-authored a CFS paper with White (lead author of the PACE Trial) which required a physical function score of ≥80 points for a participant to be considered recovered (http://www.cfids-cab.org/rc/Knoop-1.pdf), and especially for Bleijenberg, who co-authored a CFS-like paper in which "A cut-off of ≤65 was considered to reflect severe problems with physical functioning." (http://eurpub.oxfordjournals.org/content/20/3/251.long)

Apparently one can be significantly/severely disabled and "recovered" at the same time? To my knowledge, unofficial reports that the Lancet will issue a correction for the editorial (based on email exchanges with the Lancet) have not yet resulted in an actual correction. I'm not holding my breath for one, and I wonder how common it is for blatant errors to go uncorrected in the Lancet.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi biophile, I agree but I can see their counter-argument. It was indeed 40% of patients undergoing CBT/GET who improved, everyone receives standard medical care so why is this an issue? If the effect is not always prolonged, that just means they needed more treatment, it was stopped too early. In the next study patients require a much longer treatment period.

There is usually a way to spin their outcomes so that it appears to be a misunderstanding. Recently I came to the view that it is not any single mistake that matters. Anyone can claim a mistake is just a glitch, or a misunderstanding, and not really a mistake. What matters is a pattern of mistakes. In the case of the PACE trial, it is a pattern of highly deceptive and misleading statements from initial design through to post-publication media interviews. It's the pattern that's important. That is why I like this thread. Establishing a pattern is difficult to do. More minds make it easier.

One of the biggest problems with the biopsychosocial view is that they are operating in their own insular paradigm. Nobody else is there. As a result normal robust scientific criticism does not really exist, and they have been progressively allowed to get away with blunder after blunder, accumulating a long history of blunders. It would be really nice to be able to show this.

So far I am not looking at individual papers. I am looking at a framework to find and classify error in this field, a meta-criticism if you like. This will take months to do at least.

Bye, Alex
 

biophile

Cella et al (Sharpe & Chalder) 2011 on occupational outcomes for CBT and GET (http://www.kcl.ac.uk/innovation/gro...2011ThereliabilityofWASAinCFSpatientsJoPR.pdf) :

(after describing how poor occupational outcomes are for untreated CFS patients)

"However, occupational outcomes tend to improve substantially for CFS patients who receive treatment such as cognitive behavioral therapy and graded exercise therapy [6]."

[6] Rimes KA, Chalder T. Treatments for chronic fatigue syndrome. Occup Med (Lond) 2005;55:32–8.

However, the cited paper does not appear to discuss work/employment/occupation-related outcomes for CBT, and only mentions them for GET (http://occmed.oxfordjournals.org/content/55/1/32.full.pdf):

"RCTs evaluating GET have found an overall beneficial effect on fatigue and functional work capacity compared to control groups [1013]."

10. Fulcher KY, White PD. Randomised controlled trial of graded exercise in patients with the chronic fatigue syndrome. Br Med J 1997;314:16471652.

11. Powell P, Bentall RP, Nye FJ, Edwards RHT. Randomised controlled trial of patient education to encourage graded exercise in chronic fatigue syndrome. Br Med J 2001;322:15.

12. Powell P, Bentall RP, Nye FJ, Edwards RHT. Patient education to encourage graded exercise in chronic fatigue syndrome. Two-year follow-up of randomised controlled trial. Br J Psychiatry 2004;184:142146.

13. Wearden AJ, Morriss RK, Mullis R, Strickland PL, Pearson DJ, Appleby L, et al. Randomised, double-blind, placebo controlled treatment trial of fluoxetine and graded exercise for chronic fatigue syndrome. Br J Psychiatry 1998;172:485490.

Further, these claims appear to be unsubstantiated even for GET, as explained below ...

* Fulcher & White 1997 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2126868/pdf/9180065.pdf) : the comparison of improved occupational status at follow-up was uncontrolled, because it was a crossover study and did not account for dropouts, etc. The authors acknowledge this weakness but then try to dismiss it by claiming that spontaneous improvement was an unlikely explanation because it did not occur in a "similar sample" in another study.

* Powell et al 2001 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC26565/pdf/387.pdf) : reports work status at baseline but not post-treatment.

* Powell et al 2004 (http://bjp.rcpsych.org/content/184/2/142.full.pdf) : followup of Powell et al 2001 above but did not report occupational status at any point.

* Wearden et al 1998 (full text not easily available but a Cochrane 2004 systematic review refers to this study as "Appleby 1995" because of multiple publications - http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003200.pub2/pdf) : the improvement in "functional work capacity" in the GET group compared to the control group at 12 weeks and at 26 weeks was not statistically significant.

So, all in all, there is no good evidence for the statement that "occupational outcomes tend to improve substantially for CFS patients who receive treatment such as [CBT] and [GET]".
 

biophile

Hi Alex, thanks for the comments. I can see the counter-arguments you suggested, but it is unacceptable for these people to routinely tout intervention improvement rates without the corresponding control rates. Just imagine if 40% of the CBT group had a clinical response vs 40% of the usual-care group: claiming that fatigue improved in 40% of the CBT group would still be correct but completely meaningless and deceptive. Also, their suggestion that Cochrane 2008 showed an improvement in physical function in 40% of the CBT group appears to be false, as "clinical response" relates to fatigue scales, and there was no statistically significant improvement in physical function between groups at any time point.

I agree that the pattern of mistakes is what matters most, but it is difficult to establish. Personally I would rather just stick to the overall hypothesis of the cognitive behavioural model for CFS, compare it with the overall evidence relating to each pillar of the model, and perhaps also with the cherry-picking of proponents, to see the discrepancies. Good point on the lack of normal robust scientific criticism and on getting away with blunder after blunder. I look forward to seeing what you come up with in an analysis.
 

RustyJ

Contaminated Cell Line 'RustyJ'
Messages
1,200
Location
Mackay, Aust
alex3619 said: "Hi biophile, I agree but I can see their counter-argument. It was indeed 40% of patients undergoing CBT/GET who improved, everyone receives standard medical care so why is this an issue? If the effect is not always prolonged, that just means they needed more treatment, it was stopped too early. In the next study patients require a much longer treatment period."

Hi Alex, doesn't the exclusion of the dropouts undermine that argument? The figure is then not 40%. Leaving this point out seems very important to the suggested outcome of the study. It suggests manipulation of the data, or at the very least the omission of vital information, not just a difference of interpretation.
 

alex3619

Hi RustyJ, ahh, but (hang on a sec, need to put on my black hat) those dropouts don't count unless they dropped out at the end of the study. Since they are delusional psychiatric cases, they are resistant to therapy. If they had stayed, there is no evidence that they would not have improved too. Let me remind you that almost nobody reported an adverse effect, so there was no good reason for them to drop out except for their psychiatric-condition-induced bias.

Putting on a white hat: this argument is based on circular reasoning and so is invalid, but it could sound plausible to some. The issue of under-reporting of harms due to excessively strict criteria is also now recognized by some. Also, I wonder if they even looked for or recorded harms in those who dropped out. I could be wrong, but I do not recall reading about this.

Bye, Alex
 

markmc20001

Guest
Messages
877
This type of creative writing has turned science into kind of an "art" as well. :cool:

Impossible to untangle all that "sleight of hand" crap.
 

Enid

Senior Member
Messages
3,309
Location
UK
Interesting thread, many thanks. Can I just make a point about the notion of everyone receiving standard medical care? In fact this is nonsense: the standard here is zero, a complete inability to diagnose the multiple dysfunctions that researchers find if they look hard enough. Doctors do not look in the first place.
 

alex3619

Hi Snow Leopard, the PACE trial publications are the Rosetta stone. They made so many mistakes that I think it constitutes a pattern right there; it cannot reasonably be argued to be an accident. It's part of what I am looking into. The big issue with patterns, though, is that many papers are written by different people at different times, so there are many loopholes for someone to claim coincidence. Bye, Alex
 

oceanblue

Guest
Messages
1,383
Location
UK
Great work by Biophile here, thanks!

I also agree with alex3619 that what matters is the pattern, not individual cases, and that as a whole PACE could be the 'rosetta stone' (nice). Certainly for me that's where they crossed over from possibly overenthusiastically championing a pet theory to deliberately setting out to mislead (without actually lying).

One note of caution: claiming references prove things they don't support at all is a widespread practice. I have come across it many times (to my frustration) in papers supporting a biomedical explanation of CFS, and in non-CFS clinical research too. Even when doing my biochemistry degree, where research papers were the main source of information and research standards were so much higher than in the CFS field, I would often chase a promising reference only to find that the referencing authors had put an unjustified spin on it.
 

biophile

van der Meer & Lloyd 2012 - Editorial Comment: A controversial consensus comment on article by Broderick et al.

The article in question is a critique of ME-ICC, and this post was spawned from a related post on a different thread (http://phoenixrising.me/forums/show...us-Criteria-ME&p=235812&viewfull=1#post235812).

oceanblue beat me to it (http://forums.phoenixrising.me/show...and-editorial)&p=235792&viewfull=1#post235792) but I will still post, as I go into more detail. Relevant background is unbolded and the important quote is bolded and/or in between asterisks:

It cannot be denied CFS/ME is a controversial condition. The controversy sometimes deteriorating into overt dispute is between those that believe that it is a nonexistent illness (maladie imaginaire); those that feel it is a psychiatric disorder; and the activists (comprising patients, doctors and even some scientists) who are convinced of a somatic disease - all are unfortunately simplistic perspectives on a complex disorder. Separately, there are clinicians and scientists with an open mind, who recognize the disability associated with this enigmatic clinical illness, and who seek to engage scientifically in the challenge of defining the pathophysiology, and are therefore motivated to elucidate the biological basis of CFS in a systematic and unbiased fashion.

***This dispute between the various protagonists recently surfaced with the PACE trial published in the Lancet [2], which provided evidence for effectiveness of elements of cognitive-behavioural therapy (CBT) and graded exercise therapy (GET) for patients with CFS. This publication triggered unscientific and sometimes personal attacks on the researchers in both the scientific literature [3–10] and via the Internet [11]. Similarly, the recent controversy on the role of the retrovirus, XMRV, in CFS [12] is a good example of how science and emotion (in this case mostly fear of contagion) commonly collide with regard to CFS [13–20].***

http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2796.2011.02468.x/full

http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2796.2011.02468.x/pdf

I comment in more detail on the first part in the other thread, but basically, once again we have the typical assertion or allusion that the "controversial" debate about CFS is fuelled by ideological simpletons, while enlightened researchers such as themselves are above all that and just want to get on with the science. CFS-biopsychosocialist perspectives on the controversy tend to ignore the fact that their claims relating to CFS are part of the controversy as well and are being questioned on a rational scientific basis. And to which of the above-mentioned groups of "protagonists" do the PACE authors belong?

Anyway, although there may have been a few unscientific arguments against the trial and personal attacks on the authors over the internet, the detractors of the critics usually issue a blanket dismissal of all criticisms without acknowledging that legitimate criticisms could exist as well and are what drives the response towards PACE. van der Meer & Lloyd cite all 8 published Lancet letters to the editor (and the related authors' reply?) on the PACE Trial for the claim of "unscientific and sometimes personal attacks on the researchers in [the] scientific literature". I quickly went through the letters and summarize each one below:

* [3] Feehan's letter: 18 points or fewer on the fatigue scale (Likert scoring) may not be an appropriate threshold for normal fatigue, and is still within the range of abnormal fatigue for the purposes of trial entry (bimodal score of 6 or more); the authors should recalculate the data using the goalposts in the original protocol.

* [4] Giakoumakis' letter: in the trial a "clinically useful difference" was 0.5 SD (standard deviations) of the baseline score, but the SD was artificially low because of the trial entry criteria; an alternative, as suggested by other researchers, would be to use the SD from a sample of the general population. Doing this with physical function would mean that (on average) CBT and GET showed no "clinically useful difference", so it is questionable whether the effects of CBT and GET were "moderate" as claimed.

* [5] Kewley's letter: changes in assessment were concerning; several secondary measures went unreported; the entry criteria for physical disability overlapped with the outcome criteria for "normal" physical function; the latter was based on a population that included chronic illness and was comparable to the 75-84 year age group including those with illness, whereas physical function for the subpopulation without chronic illness was higher; there was a lack of objective data like actigraphy and employment outcomes, which is problematic as other researchers have shown that CBT does not increase activity as one would expect; the improvement in 6-minute walking test distances for the GET group was minimal compared to healthy elderly people; overall, the results were unimpressive, so further biomedical research is imperative.

* [6] Kindlon's letter: subjective improvements did not match the 6-minute walking test distances; there was a lack of data in the trial on actual changes in activity; it may be premature to call CBT/GET "safe" in general.

* [7] Mitchell's letter: the definition of the "normal" range for fatigue and physical function is questionable, as was the promotion of the figures relating to it, because, contrary to erroneous claims made in the accompanying editorial, recovery was not reported, nor was what was reported a "strict criterion" for recovery.

* [8] Shinohara's letter: trial results may not apply to long-term severely affected patients.

* [9] Stouten et al's letter: findings of the trial are less impressive when stricter outcome criteria, like the original goalposts, are applied.

* [10] Vlaeyen et al's letter: there may be concerns about APT and how it has been defined.
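Giakoumakis' point above (letter [4]) is easy to demonstrate numerically. The numbers below are purely hypothetical (not the actual PACE values) and serve only to show the mechanism: a sample restricted by entry criteria has a smaller SD, which lowers the 0.5 × SD threshold for a "clinically useful difference".

```python
def clinically_useful_difference(sd):
    """PACE-style threshold: half a standard deviation of the reference scores."""
    return 0.5 * sd

# Hypothetical SDs for a 0-100 physical function scale (illustrative only)
trial_sample_sd = 15.0        # restricted by trial entry criteria
general_population_sd = 24.0  # unrestricted general-population sample

print(clinically_useful_difference(trial_sample_sd))       # smaller threshold
print(clinically_useful_difference(general_population_sd))  # larger threshold
```

With these illustrative inputs, an observed improvement of, say, 8 points would clear the 7.5-point threshold derived from the restricted sample but not the 12-point threshold derived from the general population, which is exactly why the choice of reference SD matters.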

Keeping in mind that I only provided a brief summary, do these letters sound "unscientific" to anyone? They are referenced and make reasonable arguments. I did not see one instance of a "personal attack", nor is any such thing mentioned in the authors' reply (much of which is the authors reiterating what they already wrote in the trial paper and reassuring us that the methodology was sound). Is it possible that van der Meer & Lloyd never read any of these letters and just accepted without question the rumours about them and/or whatever they were told by the Lancet editorial team?

van der Meer & Lloyd go on to claim that these (alleged) unscientific arguments and (alleged) personal attacks on the researchers also occurred "via the Internet", but the reference given is a CFS review paper from 2006, which is utterly irrelevant as evidence of occurrences in 2011 (a citation error, perhaps?). So once again, from detractors of the critics of the PACE Trial, we have unsubstantiated claims about critics issuing "personal attacks" on PACE authors. They claim that criticism of the PACE Trial is unscientific without refuting any of the arguments made against it. The follow-up statement about XMRV as a similar "example" again implies that the PACE Trial vs its criticism was all about "science vs emotion".
 

alex3619

oceanblue said: "Even when doing my biochemistry degree, where research papers were the main source of information and research standards were so much higher than in the CFS field, I'd often chase a promising reference only to find the referencing authors had put an unjustified spin on it."

Hi oceanblue, I had the same experience, even in textbooks. Who has time to check up on every reference in every paper they read though? Bye, Alex
 

alex3619

In reply to biophile's post 15, here is what I wrote on the other thread to the comment there:

Hi biophile, just to pick up on a point you made that I agree with, on cherry-picking supporting info: this is not only widely done but inevitable in any complex topic. It's not as if you can reference all 5000 papers on ME and CFS in any one publication; you have to use selection criteria. In this sense the cherry-picking of the biopsychosocial proponents is justifiable. However, there are larger and overlapping issues. When faced with specific scientific challenges, especially those that go to the very foundation of the biopsychosocial hypothesis, what you use to support your argument is much more critical. It has to address the issue at hand, and do so in a rational and data-supported way. Typically this is not the case. For the PACE trial this is not the case. Instead we get repeated claims of violent patients, unscientific attacks, and so on. Arguing the man: a logical fallacy. They rarely address the complaints and issues, many of which come from respected scientists and clinicians; instead they divert attention to those hysterical patients again.

This is politics and spin, not science. I think the problem stems from the historical situation. For decades they were not substantially challenged. Nobody took them seriously, and they were in their own little isolated area; people outside it mostly just ignored them. Now more and more people are realizing just how foundationally baseless and methodologically flawed the biopsychosocial research is. BOOKS are being written about it by medical academics (I have one on order). The charge is not being led by hysterical patients; it's being led by medical academics, including other psychiatrists. They have never had to face this level of criticism, and it is getting worse as more and more people wake up to what they have been saying and doing. They need a scapegoat, a straw man, and they selected us, either consciously or unconsciously. I am in no position to infer motive or whether it's intentional, but I can point to the fact of it happening.

Bye, Alex
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
biophile said: "Keeping in mind that I only provided a brief summary, do these letters sound 'unscientific' to anyone?"

If they weren't legitimate criticisms, they wouldn't have been published in the first place; remember, five times as many letters were submitted as were published. It would be more reasonable to question the validity of the letters that were rejected.

I think they are hoping that we simply take their word for it (that the letters were not credible). But the end result is that van der Meer and Lloyd are insulting the intelligence of their readers.
 

Enid

Oh wow, produce as many stinking papers as they like to enhance their egos, presented as science. Come on, all of us on PR who know the real thing: SPEAK.
 

Dolphin

Senior Member
Messages
17,567
Good work, biophile.
I suspect I have posted about others over the years here. However, my guess is I won't look back through my own messages much. So if people see points of mine, feel free to re-post them, or even re-word them the way you see it yourself if you prefer.