PLoS Article: Most published research wrong?!?

Lesley

Senior Member
Messages
188
Location
Southeastern US
From The Atlantic website http://andrewsullivan.theatlantic.com/the_daily_dish/2010/03/abusing-statistics.html#more

Abusing Statistics
20 MAR 2010 06:42 PM

Science News reports on bad studies:

"There is increasing concern," declared epidemiologist John Ioannidis in a highly cited 2005 paper in PLoS Medicine, "that in modern research, false findings may be the majority or even the vast majority of published research claims."

From later in the article:

Nobody contends that all of science is wrong, or that it hasn't compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. "A lot of scientists don't understand statistics," says Goodman. "And they don't understand statistics because the statistics don't make sense."

And the Science News Cycle doesn't help.

(Hat tip: 3QD)

My quote didn't pick up the links. The referenced article is here: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong?utm_source=twitterfeed&utm_medium=facebook&utm_content=feat

The "Science News Cycle" is a comic that is hilarious, but all too true, in pointing out how the press distorts scientific news: http://www.phdcomics.com/comics/archive.php?comicid=1174
 

Dolphin

Senior Member
Messages
17,567
Somebody highlighted this on another list and I thought it was useful. It's prompted me to buy a few books on statistics, e.g. "How to Lie with Statistics" and similar, which are supposedly useful for spotting fallacious reasoning. My statistics education was interrupted by ME/CFS but I don't have the mental stamina to work through full textbooks. :(
 

dannybex

Senior Member
Messages
3,574
Location
Seattle
Rethinking Peer Review...

Here's another intriguing article along the same lines...

http://www.thenewatlantis.com/publications/rethinking-peer-review

"In recent times, the term peer reviewed has come to serve as shorthand for quality. To say that an article appeared in a peer-reviewed scientific journal is to claim a kind of professional approbation; to say that a study hasnt been peer reviewed is tantamount to calling it disreputable. Up to a point, this is reasonable. Reviewers and editors serve as gatekeepers in scientific publishing; they eliminate the most uninteresting or least worthy articles, saving the research community time and money.

But peer review is not simply synonymous with quality. Many landmark scientific papers (like that of Watson and Crick, published just five decades ago) were never subjected to peer review, and as David Shatz has pointed out, many heavily cited papers, including some describing work which won a Nobel Prize, were originally rejected by peer review.

Shatz, a Yeshiva University philosophy professor, outlines some of the charges made against the referee process in his 2004 book Peer Review: A Critical Inquiry. In a word, reviewers are often not really conversant with the published literature; they are biased toward papers that affirm their prior convictions; and they are biased against innovation and/or are poor judges of quality.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Somebody highlighted this on another list and I thought it was useful. It's prompted me to buy a few books on statistics, e.g. "How to Lie with Statistics" and similar, which are supposedly useful for spotting fallacious reasoning. My statistics education was interrupted by ME/CFS but I don't have the mental stamina to work through full textbooks. :(

Darrell Huff's 'How to Lie with Statistics' is a wonderful little book! Comical as well, which makes it easier to understand. One issue I've always had with my social science students (I'm a social science lecturer) is that some of them are anxious about quantitative data and statistical language. Learning not to be scared of them is half the battle!

I would also recommend books/websites on logical fallacies as understanding these helps people, I would say, to identify them in peer reviewed literature. I also like a book by Stella Cottrell called 'Critical thinking skills'.

The reason I say all this is because I truly believe the whole community of people with 'CFS' diagnoses need to arm themselves with tools for critical analysis of all claims that might impact on their lives. I actually think this holds true for people generally, and critical analysis and knowledge of logic are not skills formally taught in schools (and therefore may not even be taught at all!) If more people could learn specific strategies for logical thinking and critical analysis, BS would not get around as much as it does; that's my basic argument!

Notice that groups like 'Sense about Science' basically advocate, to the public, "if it's peer reviewed you don't need to worry your silly little heads about it". In light of the problems of peer review even highlighted here, that of course is TERRIBLE advice.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Here's another intriguing article along the same lines...

http://www.thenewatlantis.com/publications/rethinking-peer-review

"In recent times, the term “peer reviewed” has come to serve as shorthand for “quality.” To say that an article appeared in a peer-reviewed scientific journal is to claim a kind of professional approbation; to say that a study hasn’t been peer reviewed is tantamount to calling it disreputable. Up to a point, this is reasonable. Reviewers and editors serve as gatekeepers in scientific publishing; they eliminate the most uninteresting or least worthy articles, saving the research community time and money.

But peer review is not simply synonymous with quality. Many landmark scientific papers (like that of Watson and Crick, published just five decades ago) were never subjected to peer review, and as David Shatz has pointed out, “many heavily cited papers, including some describing work which won a Nobel Prize, were originally rejected by peer review.”

Shatz, a Yeshiva University philosophy professor, outlines some of the charges made against the referee process in his 2004 book Peer Review: A Critical Inquiry. In a word, reviewers are often not really “conversant with the published literature”; they are “biased toward papers that affirm their prior convictions”; and they “are biased against innovation and/or are poor judges of quality.”

That looks like a nice book Danny. I might be shopping this avo!
 

Gerwyn

Guest
From The Atlantic website http://andrewsullivan.theatlantic.com/the_daily_dish/2010/03/abusing-statistics.html#more



My quote didn't pick up the links. The referenced article is here: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong?utm_source=twitterfeed&utm_medium=facebook&utm_content=feat

The "Science News Cycle" is a comic that is hilarious, but all too true, in pointing out how the press distorts scientific news: http://www.phdcomics.com/comics/archive.php?comicid=1174

I am afraid that this article by Sullivan in some ways mirrors the problem.

Things are unfortunately published in many publications because of their potential to increase the circulation figures of the journals involved.

They are usually selected by editors who know nothing about science. Anyone using the term "logical fallacy" or "not making sense" to describe a statistical test or tests does not, I am afraid, understand statistics or the scientific method at all. Statistics is a branch of mathematics and does not rely on language where the labels bear no objective relation to the labelled. Statistical constructs investigate mind-independent entities.

Statistics is not based on the logic used to construct language, and in the main statistical tests are carried out by statisticians, just as mathematics is done by mathematicians. Statistics makes no sense to lay people any more than advanced quantum physics would.


Many journalists seem incapable of grasping this, however, and they more than anyone are responsible for the confusion that abounds in some quarters, caused by their astonishing lack of humility. Perhaps they should undertake rigorous science training before making ill-informed speculative comments. This would be a better form of quality control than peer review.

It is not the peer review process that is at fault. The fault lies with the fact that it is either not applied, or its results are ignored, by a lay person or persons who allow commercial interests to override the peer review process.

No scientist would take much notice of one study. Scientists also look to journals like Science or the Lancet because they know that the scientific content is the prime, if not the only, criterion for publication.
 

Dolphin

Senior Member
Messages
17,567
Darrell Huff's 'How to Lie with Statistics' is a wonderful little book! Comical as well, which makes it easier to understand. One issue I've always had with my social science students (I'm a social science lecturer) is that some of them are anxious about quantitative data and statistical language. Learning not to be scared of them is half the battle!

I would also recommend books/websites on logical fallacies as understanding these helps people, I would say, to identify them in peer reviewed literature. I also like a book by Stella Cottrell called 'Critical thinking skills'.

The reason I say all this is because I truly believe the whole community of people with 'CFS' diagnoses need to arm themselves with tools for critical analysis of all claims that might impact on their lives. I actually think this holds true for people generally, and critical analysis and knowledge of logic are not skills formally taught in schools (and therefore may not even be taught at all!) If more people could learn specific strategies for logical thinking and critical analysis, BS would not get around as much as it does; that's my basic argument!

Notice that groups like 'Sense about Science' basically advocate, to the public, "if it's peer reviewed you don't need to worry your silly little heads about it". In light of the problems of peer review even highlighted here, that of course is TERRIBLE advice.
Thanks for that. Might get that book you mention. I studied philosophy for a while and we did a bit on logical thinking and the like e.g. begging the question.
 

HopingSince88

Senior Member
Messages
335
Location
Maine
Well...I was a math major, and worked as an applications engineer for about 17 years. I HATED my statistics courses, and could not have gotten through them without my handy-dandy scientific calculator.

Just a quick perusal of many news stories will yield a display of statistics to support or negate something within the story as 'fact.' Flip the coin over and now you have a different story.
 

CBS

Senior Member
Messages
1,522
The implication that statistics is no more than a tool used against the uninformed to promote agendas and fool the public is interesting in that it leaves unanswered the question, if not science (or stats), then what?

Not all science is well written or honest in the conclusions that are drawn, and not all editorial boards are created equal, but it is rare that a study has no value whatsoever. The educated consumer will read a study to glean what the project really reveals about a subject and to determine its limitations. Granted, some of those limitations are significant and many studies confirm nothing more than what was already known (e.g. a sample size was too small or a therapeutic effect was too subtle; a certain methodology was inappropriate and was destined to find a null result).

Keep in mind that statistics are designed to demonstrate stability and replicability, not whether or not a result was important or significant. That's clinical significance and it is a question separate from statistical stability.
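
To make that distinction concrete, here is a minimal sketch with made-up numbers (assuming a simple two-group comparison): a difference far too small to matter clinically still sails past the p < 0.05 threshold once the sample is large enough.

```python
# Statistical vs clinical significance: a trivially small (hypothetical) effect
# becomes "significant" once the sample is huge.  Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
control = rng.normal(loc=50.0, scale=10.0, size=n)  # e.g. scores on a 0-100 fatigue scale
treated = rng.normal(loc=50.3, scale=10.0, size=n)  # a 0.3-point average "improvement"

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.2f} points, p = {p_value:.2g}")
# p comes out far below 0.05, yet a 0.3-point shift on a 0-100 scale means nothing clinically.
```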

What are you left with if you damn all statistics and scientific papers? Do the papers that support your thoughts and impressions get deemed 'good studies' and the rest 'bad studies'?

I find it ironic that someone might base their assessment of scientific studies on a sensationalistic work such as "How to Lie with Statistics."

Instead, might I suggest 'Statistical Power Analysis for the Behavioral Sciences (2nd Edition)' by Jacob Cohen or 'How Many Subjects?: Statistical Power Analysis in Research' by Helena Chmura Kraemer.

Good science is hard and flaws in a study do not mean it is worthless. It takes just as much work to understand what information from a flawed study still has value.

If not an educated consumer of research, with what would you recommend we replace statistical analysis?
 

Dolphin

Senior Member
Messages
17,567
I find it ironic that someone might base their assessment of scientific studies on a sensationalistic work such as "How to Lie with Statistics."

Instead, might I suggest 'Statistical Power Analysis for the Behavioral Sciences (2nd Edition)' by Jacob Cohen or 'How Many Subjects?: Statistical Power Analysis in Research' by Helena Chmura Kraemer.
Thanks. Just for the record, I had been put off by the title before but it got some good reviews by statistics lecturers on Amazon so I thought it was worth a look. At the same time as buying it, I also bought "Common Errors in Statistics (and How to Avoid Them)" by Phillip I. Good and James W. Hardin. I should also point out that I did two probability and statistics courses as part of my mathematics course in college, getting firsts in both of them. The second course would be sufficient to get me exemptions from actuarial exams (it counted towards people's finals if they did mathematics and another subject). So I have a reasonable grounding in some areas. But I transferred courses so there are some gaps.

I reject the characterisation that I somehow have black-and-white thinking on scientific studies and see no value in statistics. I like mathematics and indeed was quite good at it, e.g. I came sixth in the Irish National Mathematics competition (for secondary school students). If my health hadn't deteriorated so much I might be pursuing a career involving statistics. My "crime" seems to be that I chose a book you do not think is good, when all I did was base my choice on what other people wrote.

I am aware that sometimes one can be faced with a choice of statistical tests (e.g. chi-squared versus another test) and whether the results reach significance may depend on the choice, so that is another reason why I thought the book "How to Lie with Statistics" might be interesting.
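
As a small illustration of that point, with a made-up 2x2 table: Pearson's chi-squared test, run without a continuity correction, calls the association significant at the 5% level, while Fisher's exact test on the very same counts does not.

```python
# Same hypothetical 2x2 table, two standard tests, two different verdicts.
from scipy.stats import chi2_contingency, fisher_exact

table = [[8, 2],   # e.g. improved / not improved in group A
         [3, 7]]   # improved / not improved in group B

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
odds_ratio, p_fisher = fisher_exact(table)

print(f"Pearson chi-squared (no correction): p = {p_chi2:.3f}")   # about 0.025
print(f"Fisher's exact test:                 p = {p_fisher:.3f}") # about 0.070
```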

The only other relatively light book I read on statistics was "Statistics: A Guide to the Unknown", which our first-year lecturer recommended. I read that during vacations and found it interesting, although it was certainly not technical enough for the courses I did (and so I don't think most other people read it). It would not be fair, as I say, to characterise me as somebody not interested in statistics.

Quite a lot of my mental energy is used reading ME/CFS studies and then sometimes replying to them. Not many people are doing that these days despite the fact that lots of articles can now be read online for free. If more people did that, I might well read more books on statistics as I would have more mental energy to devote to them.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
The implication that statistics is no more than a tool used against the uninformed to promote agendas and fool the public is interesting in that it leaves unanswered the question, if not science (or stats), then what?

Not all science is well written or honest in the conclusions that are drawn, and not all editorial boards are created equal, but it is rare that a study has no value whatsoever. The educated consumer will read a study to glean what the project really reveals about a subject and to determine its limitations. Granted, some of those limitations are significant and many studies confirm nothing more than what was already known (e.g. a sample size was too small or a therapeutic effect was too subtle; a certain methodology was inappropriate and was destined to find a null result). Keep in mind that statistics are designed to demonstrate stability and replicability, not whether or not a result was important or significant. That's clinical significance and it is a question separate from statistical stability.

What are you left with if you damn all statistics and scientific papers? Do the papers that support your thoughts and impressions get deemed 'good studies' and the rest 'bad studies'?

I find it ironic that someone might base their assessment of scientific studies on a sensationalistic work such as "How to Lie with Statistics."

Instead, might I suggest 'Statistical Power Analysis for the Behavioral Sciences (2nd Edition)' by Jacob Cohen or 'How Many Subjects?: Statistical Power Analysis in Research' by Helena Chmura Kraemer.

Good science is hard and flaws in a study do not mean it is worthless. It takes just as much work to understand what information from a flawed study still has value.

If not an educated consumer of research, with what would you recommend we replace statistical analysis?

Whoa there Neddy! I'm a social science lecturer! It's usually my job to provide ways of helping students to learn how to critically analyse without giving up at the first hurdle, and I'm recommending Huff's book for people who might feel initially daunted by more tome-like manuals (like the two you've mentioned). It's a great little book to get people into the subject.

@ Gerwyn, I'm also a social scientist per se - my specialty is sociology, so logical fallacies and the rhetorical and ideological use of language are among my research interests, hence my emphasis on understanding logical fallacies - which are CRUCIAL to understanding the social sciences (including psychology and medicine, for that matter!). Reliability and numbers are not enough if construct validity is flawed (like circular reasoning in psychogenic explanations, for example).

I'm also a social science research methodology teacher with a postgraduate qualification in research methodology.

It needs to be remembered that psychology, psychiatry and medicine per se are not pure 'natural sciences' but incorporate more than a soupçon of social science in their disciplines. Hence the issue of construct validity.

Peer review is a problem precisely because many 'scientists' have little or no knowledge of social scientific issues or even philosophy of science issues (like how problematic claims to 'objectivity' are). Some of that is arrogantly naive, frankly, and in the world of ME/CFS politics, has an adverse impact on people. Flaws in studies need to be found. If flawed studies are constantly published and taken as gospel elsewhere (like the 'child abuse/trauma/neglect causes CFS' type claims becoming part of guidelines, or propagated by psychiatrists as a rebuttal to the Lombardi paper!) then it becomes irresponsible scientific publication, and subject to inappropriate power relations.
 

Gerwyn

Guest
Well...I was a math major, and worked as an applications engineer for about 17 years. I HATED my statistics courses, and could not have gotten through them without my handy-dandy scientific calculator.

Just a quick perusal of many news stories will yield a display of statistics to support or negate something within the story as 'fact.' Flip the coin over and now you have a different story.

Yes, you often see that in the news.
 

Gerwyn

Guest
Whoa there Neddy! I'm a social science lecturer! It's usually my job to provide ways of helping students to learn how to critically analyse without giving up at the first hurdle, and I'm recommending Huff's book for people who might feel initially daunted by more tome-like manuals (like the two you've mentioned). It's a great little book to get people into the subject.

@ Gerwyn, I'm also a social scientist per se - my specialty is sociology, so logical fallacies and the rhetorical and ideological use of language are among my research interests, hence my emphasis on understanding logical fallacies - which are CRUCIAL to understanding the social sciences (including psychology and medicine, for that matter!). Reliability and numbers are not enough if construct validity is flawed (like circular reasoning in psychogenic explanations, for example).

I'm also a social science research methodology teacher with a postgraduate qualification in research methodology.

It needs to be remembered that psychology, psychiatry and medicine per se are not pure 'natural sciences' but incorporate more than a soupçon of social science in their disciplines. Hence the issue of construct validity.

Peer review is a problem precisely because many 'scientists' have little or no knowledge of social scientific issues or even philosophy of science issues (like how problematic claims to 'objectivity' are). Some of that is arrogantly naive, frankly, and in the world of ME/CFS politics, has an adverse impact on people. Flaws in studies need to be found. If flawed studies are constantly published and taken as gospel elsewhere (like the 'child abuse/trauma/neglect causes CFS' type claims becoming part of guidelines, or propagated by psychiatrists as a rebuttal to the Lombardi paper!) then it becomes irresponsible scientific publication, and subject to inappropriate power relations.

I don't accept that psychology is a social science at all, and neither would most psychologists. Sociology is very useful but I would not call it a science. I share the same view as Kuhn: sociological methods lack predictive power and sustained puzzle-solving power, which are the characteristic hallmarks of a science.

There is of course Sociological Social Psychology, which is primarily sociology and does not purport to be scientific. In fact its proponents don't consider scientific methods to be appropriate.

There is an argument about validity of measurements, which I agree with. Alternative methods, however, suffer from issues of reliability, generalisability and consistency. The impossibility of producing information about mind-independent entities using mental constructs is patently obvious.

The choice of methods is not, in my view, so much an issue. The issue is whether the methods used are appropriate to the kind of research being undertaken.

I have seen quite excellent research produced using qualitative methods, and I have seen some which is pure bad journalism. I can say the same about research using quantitative methods for the sake of it, with researchers attempting to quantify subjective, socially constructed terminology. Quantitative researchers often miss the point that there is no objective relationship between the description and the described. They then produce complex statistics which are totally meaningless because they confuse the label with the labelled.

There is not a problem with objectivity as far as mind-independent knowledge is concerned. With mind-dependent constructs, of course, there can be real issues. Many claim that gaining any kind of objective knowledge of these is impossible, and I tend to agree.

I totally agree that the peer review process is open to abuse and can, and often does, lead to absurdities like the ones you mention.

The problem is the misuse of the process, or the overriding of it, for purposes of commercial gain, manipulation and all the other issues relating to power that you mention. Scientists in general pay very little attention to work in journals which do not have a proven and jealously guarded peer review process; Science and the Lancet are journals with exactly such a process. Abandoning peer review and instead evaluating research according to totally subjective and politically convenient criteria would almost certainly drag us back into the dark ages.

In that era scientific developments were continually hindered, if not totally obstructed, by political manoeuvring of one kind or another. I would submit, therefore, that while far from perfect, the peer review process is far superior to whatever on earth would be in second place.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Hi Maarten and Gerwyn,

The problem with psychology claiming itself as a science rather than a social science is that it deals in social constructs, like sociology. In much of psychology the "methods lack predictive power and sustained puzzle-solving power, which are the characteristic hallmarks of a science", for example when it comes to human response. We see examples of this problem in the very claims that somatic illness is 'psychosomatic' or that, for example, ME sufferers 'catastrophize' or that women are 'hysterical', or that child trauma causes CFS, or even that CBT is 'successful'.

Nevertheless, sociology and psychology can make tentative predictions, and look for empirical evidence to support or refute these. While some branches choose not to do this, I'm somewhat of an empiricist sociologist (we're about!) and, whether or not other sociologists and psychologists understand this, logic and rationality remain crucial in social science (and science for that matter!).

But social science is not science, although a tentative scientific method can be employed. The problem is that certain areas claiming scientific authority are not either, and that includes much of psychology, I'm afraid. Sadly I've read a lot of psychological and psychotherapeutic literature (and this includes psychogenic explanation type literature) that does not follow scientific method properly (some of it seems more like astrology!), yet claims the aura of 'science'. This goes for medicine as well.

So maybe people sometimes play a little too hard and fast with the 'scientific method' claim, which involves claims to scientific authority that themselves become logical fallacies. Sociology can be scientific. Some of it isn't. Same goes for psychology.

Plus I'm more of a Popperian than a Kuhnian - whose reasoning seems a little circular at times, and prescriptive rather than descriptive.

Sorry, I don't have a freely available online resource for the Weber reference.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK

I totally agree that the peer review process is open to abuse and can, and often does, lead to absurdities like the ones you mention.

The problem is the misuse of the process, or the overriding of it, for purposes of commercial gain, manipulation and all the other issues relating to power that you mention. Scientists in general pay very little attention to work in journals which do not have a proven and jealously guarded peer review process; Science and the Lancet are journals with exactly such a process. Abandoning peer review and instead evaluating research according to totally subjective and politically convenient criteria would almost certainly drag us back into the dark ages.

In that era scientific developments were continually hindered, if not totally obstructed, by political manoeuvring of one kind or another. I would submit, therefore, that while far from perfect, the peer review process is far superior to whatever on earth would be in second place.

But I haven't advocated abandoning peer review. In fact, I think there should be steps taken to make it more transparent, peer reviewers more accountable, and an ethos of deeper reflection encouraged so that less 'rubber stamping' or knee-jerk rejection happens in the process.

Anonymous peer review is rather ludicrous. Academics should be prepared to stand by their thought and review processes. The MRC have recently been allowed to keep peer reviewers anonymous for the purposes of funding, for example, which is pretty outrageous.

But - perhaps most importantly - 'lay' people who are affected by bad science or poor processes of peer review need to educate themselves as much as possible on scientific methodology and flaws in same, on logic and rational argument (and being able to ascertain when that doesn't happen). Obviously the ME/CFS community need that sort of knowledge urgently, because none of us appear to be taught it as a priority in school, for example! No-one is it seems.
 

CBS

Senior Member
Messages
1,522
TomK - My apologies

<Snip>
I reject the characterisation that I somehow have black-and-white thinking on scientific studies and see no value in statistics. I like mathematics and indeed was quite good at it, e.g. I came sixth in the Irish National Mathematics competition (for secondary school students). If my health hadn't deteriorated so much I might be pursuing a career involving statistics. My "crime" seems to be that I chose a book you do not think is good, when all I did was base my choice on what other people wrote.
<snip>

Tom,

I want to apologize. I attached your quote to my earlier post inappropriately and unfairly (your quote has now been removed). My problem was not with your considering purchasing the book. Rather, I was frustrated by the overall tone of the thread, which was not of your making. As for the books that I listed, they are expensive and detailed. You don't need to purchase them to get a good feeling for the importance of their limited subject matter (power analysis in statistical analysis).

The power of a study is only one of a dozen or more different important considerations, and I doubt that many of us have the energy to independently undertake a master's-level course on all the methodological and statistical concerns that impact research.

Quite a lot of my mental energy is used reading ME/CFS studies and then sometimes replying to them. Not many people are doing that these days despite the fact that lots of articles can now be read online for free. If more people did that, I might well read more books on statistics as I would have more mental energy to devote to them.

I have what could be considered a very strong background in stats and research methodology (designed numerous studies, reviewed articles, years of experience actually doing the statistical analysis, probably ten or so stats courses (most at a master's or Ph.D level) and maybe a half dozen courses on research design). I've also taught sections of research design courses at a graduate level.

There is a saying that data is data. It is what you do with the data, and how you characterize both its limitations and what is actually revealed (if anything) by a study, that counts.

Using the recent XMRV studies and the Montoya Valcyte studies as examples, there are two types of lessons to be learned.

As nearly everyone here knows, results of the recent XMRV studies in the EU differed dramatically from the WPI study (or studies) - there were actually many different tests and analyses required of the WPI by Science before publication. I would be willing to concede that the EU studies were done in good faith (although there may be questions on this, it's a fight that consumes a huge amount of energy and takes focus off of what the studies actually tell us). I don't need to go into it, but the problems here were more methodological than statistical. The statistical results were not close (67% versus 0%). The methodological questions revolved around issues of cohort (no satisfactory explanation for the lack of XMRV positives in the healthy controls, the Groom study excluded), testing methods, which seem the more likely explanation (14-day culturing of cells versus roughly a day or two), and possibly, as Dr. Goff suggested in his lecture on XMRV at CROI, issues with XMRV strain variation, with various PCR primers not working on possibly divergent strains of XMRV, or limits in the sensitivity of various assays used to detect XMRV.

Bottom line, it is extremely unlikely that stats has anything to do with the divergent XMRV studies to date (and this is as far as I would take my advice on the virology of XMRV studies - too many more details and the virology is out of my league - I suggest keeping an eye on Parvofighter's posts. Parvo is far better versed in the virology than I).

The Montoya Valcyte studies are an entirely different matter (full disclosure - I am a patient of Dr. Montoya, and I am a patient of his for very good reason). Montoya's first study of the efficacy of Valcyte for CFS patients was very small but showed promise (9 of 12 patients with chronic CMV and HHV-6 responded on cognitive measures to Valcyte), and the second study was slightly larger but still small (30 patients; results reported at the 2008 HHV-6 conference "indicated that patients on Valcyte experienced significant cognitive improvement", but not yet published).

The issues with the Valcyte studies are exactly the opposite of those with the XMRV studies - cohort and statistical power. As for cohort, not all CFS patients have chronic CMV and HHV-6 infections; they may have other chronic herpesvirus (or other) infections with effects just as severe, but for those infections Valcyte is not effective.

Twelve and 30 subjects are far too few to start dividing cohorts into subsets to assess the responses of different groups of CFS patients. On top of that, much of what I personally have experienced is neurological dysfunction (autonomic and CNS), with components/symptoms that remit (dysfunction?) and components/symptoms that do not remit (possible neural damage?). It is known that viral replication produces neurotoxins. A working hypothesis is that prolonged neural dysfunction due to viral toxins leads to permanent neural damage. So here you have small studies with a new agent (Valcyte) that has a measurable impact on antibody titer levels (which correlate on many measures with reported symptom severity and degree of disability) but an endpoint that may or may not be entirely reversible.
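
To put rough numbers on why those sample sizes are so limiting, here is a minimal power sketch. It assumes a simple two-group comparison on a single outcome, which is not exactly the design of the Montoya studies, and the effect size is an illustrative assumption; it uses the statsmodels library.

```python
# Rough power calculation for sample sizes in the 12-30 range.  Illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect a "medium" standardized effect (d = 0.5) with 15 subjects per group:
power_30 = analysis.solve_power(effect_size=0.5, nobs1=15, alpha=0.05)
print(f"Power with 30 subjects split into two groups: {power_30:.0%}")  # roughly one in four

# Per-group sample size needed to detect the same effect with 80% power:
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Per-group n required for 80% power: {n_needed:.0f}")  # roughly 64
```

Dividing such a sample further into subsets only makes the power problem worse.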

On top of that, in the larger CFS population you have sub-groups with a variety of co-infections (or possibly not) that may or may not respond to each different anti-viral. Finally, CFS patients often do not respond to medications in the same manner as healthy controls. I was started on the anti-viral acyclovir (chronic HSV-1 infection with viral encephalitis) at a ridiculously low level (100 mg/day) and then slowly increased over the course of months to a therapeutic level, when most patients taking acyclovir start at 3200 mg/day. I still experienced mild side effects. My pharmacist said that it was impossible that the dose I was on was causing problems. In conversation with Dr. Montoya, 100 mg a day has caused severe problems in very sensitive patients, patients who were reduced to 40 mg a day and then very gradually increased to a therapeutic level, with significant resolution of cognitive symptoms.

The bottom line, as I said earlier, is that all of this is complicated. The sensitivity and clinical importance of your measure (anti-viral titers versus self-reported symptoms), the power of your analysis based upon the magnitude of what you would deem a clinically significant effect, and the ability to subdivide your entire sample and then generalize those findings to the entire population of interest (or at least carefully define to whom the results may or may not apply). All of this and more comes into play.

Rather than sitting down with a stack of textbooks, I would suggest that those who are interested may find it much more accessible and enjoyable to read (or read about) very well-executed research on social phenomena impacting physiological status.

Robert Sapolsky is a scientist and author (as well as extremely bright and interesting). He is currently Professor of Biological Sciences and Professor of Neurology and Neurological Sciences and, by courtesy, Neurosurgery, at Stanford University. His early career interests included the effects of sociological factors on the development of coronary artery disease. He revolutionized thinking around the notion of the Type-A behavior pattern and heart disease. He's now focusing on human-host interactions and the interdependency of chronic infectious disease (specifically toxoplasmosis), as well as stress and its impact on neuroendocrinology.

His study designs are elegant and unconventional (e.g. years spent in Kenya studying the social hierarchy and dominance dynamics of baboon colonies and the relationship of these dynamics to physiological disease or well-being).

Here's his link on Wikipedia: http://en.wikipedia.org/wiki/Robert_Sapolsky

Take a look at one of his books or articles. It will probably change the way you view science.

And a few of his more memorable quotes:
"I love science, and it pains me to think that so many are terrified of the subject or feel that choosing science means you cannot also choose compassion, or the arts, or be awed by nature. Science is not meant to cure us of mystery, but to reinvent and reinvigorate it."

"Get it wrong, and we call it a cult. Get it right, and maybe, for the next few millennia, people won't have to go to work on your birthday."
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
Hi CBS

What's an 'elegant' study?

Have you read 'Stress, Theory and Research' by Jones and Bright?

What do you understand as the problems in the construct of 'type A personalities'? I take it you do know there are critiques of this concept?

And what do you understand by the term 'stress'?

I ask these questions because they are important to the thorny issue of 'stress'. No matter how 'elegant' studies are, if there are conceptual problems and problems in methodology, then the problems are there.
 

Gerwyn

Guest
But I haven't advocated abandoning peer review. In fact, I think there should be steps taken to make it more transparent, peer reviewers more accountable, and an ethos of deeper reflection encouraged so that less 'rubber stamping' or knee-jerk rejection happens in the process.

Anonymous peer review is rather ludicrous. Academics should be prepared to stand by their thought and review processes. The MRC have recently been allowed to keep peer reviewers anonymous for the purposes of funding, for example, which is pretty outrageous.

But - perhaps most importantly - 'lay' people who are affected by bad science or poor processes of peer review need to educate themselves as much as possible on scientific methodology and flaws in same, on logic and rational argument (and being able to ascertain when that doesn't happen). Obviously the ME/CFS community need that sort of knowledge urgently, because none of us appear to be taught it as a priority in school, for example! No-one is it seems.

I agree wholeheartedly with the need to educate lay people.

I also think that reviewers need to be named.

I think that info is available in some publications but I would need to check.

Naming a reviewer would stop this "I'll review yours if you review mine" approach of the Wessely school and, hopefully, increase confidence within the lay community. I also believe that having a science degree should be mandatory for anyone on the editorial board of a scientific journal and for any journalist reporting on scientific matters. This, I believe, should be the absolute minimum requirement. The distortion of scientific evidence by the lay press in general, and by lay editors of scientific journals, has become a huge issue.

In most journals peer evaluation is not the main criterion for publication, or even a consideration at all.

An example of lay-press distortion is the recent furore in the field of climate change, when a scientist was accused of performing a trick with the data.

Performing a trick in this context means managing to successfully perform a particular kind of statistical analysis in order to interpret poorly collected data.

The data in question was collected and presented by amateur enthusiasts and was not in a suitable format for research purposes.
 

Gerwyn

Guest
Hi CBS

What's an 'elegant' study?

Have you read 'Stress, Theory and Research' by Jones and Bright?

What do you understand as the problems in the construct of 'type A personalities'? I take it you do know there are critiques of this concept?

And what do you understand by the term 'stress'?

I ask these questions because they are important to the thorny issue of 'stress'. No matter how 'elegant' studies are, if there are conceptual problems and problems in methodology, then the problems are there.

Absolutely.
 

Angela Kennedy

Senior Member
Messages
1,026
Location
Essex, UK
I agree wholeheartedly with the need to educate lay people.

I also think that reviewers need to be named.

I think that info is available in some publications but I would need to check.

Naming a reviewer would stop this "I'll review yours if you review mine" approach of the Wessely school and, hopefully, increase confidence within the lay community. I also believe that having a science degree should be mandatory for anyone on the editorial board of a scientific journal and for any journalist reporting on scientific matters. This, I believe, should be the absolute minimum requirement. The distortion of scientific evidence by the lay press in general, and by lay editors of scientific journals, has become a huge issue.

Yes I agree with this too.
 