Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

New Video on ME/CFS - "PROTEOME"

Messages
13
Below is a new video I just put up on the 2011 spinal fluid proteome study by Natelson, et al.

As with the Lights' compelling 2009 gene expression study (the subject of my first film in this series), I do not believe that this remarkable 2011 study of the spinal fluid proteomes of CFS and chronic Lyme patients was even considered by the P2P workshop.

Thanks to everyone for all the kind comments about my previous efforts. They can still be seen on my YouTube channel here... https://www.youtube.com/channel/UC7GNUcVaYEvm5zhYIZuBwyg
https://www.youtube.com/watch?v=hDnU6JWCtd4
SuddenOnset
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Studies like these could not be considered under EBM methodology. They are not the highest "standard" of evidence, and so were excluded. There are several problems with EBM "standards". EBM is theoretically sound, but its application is frequently severely flawed.

EBP or evidence based practice is required for doctors to cope with EBM, but growing enforcement of EBM "standards" might be dumbing down our doctors.

While there are good reasons for using dbRpCTs as a higher grade of evidence for the clinical efficacy of treatments, this is not applicable to non-treatment studies, such as investigatory ones. You also need to understand why dbRpCTs are necessary. dbRpCTs are double-blinded randomized placebo-controlled trials. Each part of that reduces the risk of a particular bias. However, many biases are still left. Cookie cutter approaches to EBM, such as the one used by the team assembling the P2P review, blatantly ignore other biases.

They ignore that there is risk of allegiance bias. This is closely tied to a bias in publication reviews if only a small number of reviewers, with high allegiance, are doing the reviewing.

They ignore that there is risk of bias from failure to double blind. The fact that psychopsychiatric interventions may be hard or impossible to double blind seems not to matter to them.

They ignore that there is risk of bias from failure to use a proper placebo control. Again, psychopsychiatric interventions cannot be placebo controlled. They can, however, be controlled against a no-intervention group. PACE needed an additional control arm.

They ignore that there is risk of methodological bias.

They ignore that there is risk of bias in subjective data, especially for outcomes. PACE gave people many months of psychotherapy then asked them how they feel about what was done.

NO psychopsychiatric study is of the highest grade of evidence. ZERO. It's a big fail, which the cookie cutter approach cannot cope with.

With such a cookie cutter approach contrary evidence can be dismissed as "lower grade" evidence.


Let's look at another issue. Biomedical research on this disease is in its infancy. Funding levels are so low that research on ME and CFS gets about the lowest level of funding per patient of any common chronic disease. Few researchers are doing the research, and very few institutions back their researchers properly.

This means the entire field of research is very new and underdeveloped. Cookie cutter EBM approaches, on a tight budget and to a deadline, cannot deal with this.


Quite aside from biases from economic imperatives or broad pressures from BPS ideology, there is a big issue with Zombie Science here. That is, the deck is stacked by priority funding going to favoured areas; that research is of lower quality but sails right through because of entrenched bias. I am of course considering the PACE trial as an example. When large numbers of RCTs, most notably not dbRpCTs, are in the mix, and are ineptly given a high standard of evidence, other things can pale in comparison. Yet such reviews do not look at each of these in great detail. For example, the P2P review gave PACE some praise, but this study is of such low quality, with so many demonstrable flaws, that I think all the papers should be retracted.


With some rethinking, rewriting, and after some feedback, I might blog this post.
 

Forbin

Senior Member
Messages
966
Studies like these could not be considered under EBM methodology.

I'm not really up on this, but are you saying that only studies about symptoms/treatment are considered by the P2P?

Shouldn't the IOM also consider studies about physiological findings anyway, since they seem to be empowered to come up with new names for diseases? It doesn't seem like it would make sense to try to come up with a name while ignoring the physiological findings.

Then again "chronic multisymptom illness" is the sort of useless result that one might expect from such an approach.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I'm not really up on this, but are you saying that only studies about symptoms/treatment are considered by the P2P?

Shouldn't the IOM also consider studies about physiological findings anyway, since they seem to be empowered to come up with new names for diseases? It doesn't seem like it would make sense to try come up with a name while ignoring the physiological findings.

Then again "chronic multisymptom illness" is the sort of useless result that one might expect from such an approach.
No, I am not saying that. I am saying other types of studies need to fit high standards. So, and I am guessing here, they need to be independently replicated (many of our studies fail here), to have large numbers of patients, large effect sizes, etc. In other words, what we don't usually have, due to lack of funding.

The P2P should indeed follow a very different methodology, but then it wouldn't be the P2P. Their brief is to find deficits in research. So they should be looking at underpowered and unreplicated studies, and requesting larger, better funded studies. Should. Yet they fail to even address this question, at least judging from the review.

The rules of EBM are clearest for clinical trials, but get murkier as you move to less clinical types of research.
 
Messages
13
FWIW, I just wanted to let people know that YouTube has finished re-processing the film, improving the contrast and making it look much more like it was originally intended.

It looks much nicer now - to me at least. :)

SuddenOnset
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I am not familiar with the remit of the P2P workshop but I can see why this study would not be considered in a scientific review. I have only looked at the abstract but on that basis it seems to be fatally flawed.

For some strange reason the researchers say they pooled all the CFS, all the Lyme and all the normal samples into three samples. This would mean that they could not get a result with any meaning. They give a p value, but you cannot get a p value out of comparing single samples. The p value is the probability that two populations, test and control, could in fact be samples of the same population - i.e. that there is no difference. If you do not have populations there is no range or variance, so you cannot assess the probability of being the same group. I may have misinterpreted what they say, but as it stands the abstract is not scientifically valid and I am very surprised it got through peer review.
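To make the statistical point concrete, here is a minimal sketch using made-up numbers (not data from the paper): a t statistic divides the between-group difference by the within-group variability, and pooling each group down to a single value leaves no within-group variability to divide by.

```python
# Toy illustration (invented values): why pooling destroys the p-value.
import statistics

cfs = [9.1, 10.4, 8.7, 11.2, 10.0, 9.6]          # one protein's level per patient
controls = [10.9, 11.5, 10.2, 12.0, 11.1, 10.7]

def t_statistic(a, b):
    """Two-sample t statistic: difference in means over its standard error."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# With individual samples, within-group variance exists and a test is possible:
print(round(t_statistic(cfs, controls), 2))

# Pooling collapses each group to a single number (its mean); the variance
# of a single observation is undefined, so no t statistic or p-value exists:
pooled_cfs = [statistics.mean(cfs)]
pooled_controls = [statistics.mean(controls)]
try:
    t_statistic(pooled_cfs, pooled_controls)
except statistics.StatisticsError:
    print("no within-group variance: p-value undefined")
```

This is only a cartoon of the issue Edwards describes, not a reanalysis of the study.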

Also, the differences they report seem entirely to be expected. One pool had about 2,600 proteins 'present' and the others about 2,750. Whether or not a protein is judged 'present' will just depend on the limit of detection of the assay. For this number of proteins, the number detected is going to vary by at least 5% between samples or pools. So the findings cannot be said to indicate any meaningful differences.
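The detection-limit point can be sketched with a toy simulation (all parameters invented for illustration): when thousands of proteins sit near an assay's threshold, the count scored 'present' fluctuates between otherwise identical samples purely from measurement noise.

```python
# Toy simulation (invented parameters): counts of 'detected' proteins
# fluctuate between replicates because many proteins sit near the
# detection threshold.
import random

random.seed(1)
N_PROTEINS = 3600
THRESHOLD = 1.0                         # arbitrary detection limit

# Each protein has a true abundance; many fall close to the threshold.
true_abundance = [random.lognormvariate(0.5, 1.5) for _ in range(N_PROTEINS)]

def detected_count(noise_sd=0.3):
    """Count proteins whose noisy measurement clears the threshold."""
    return sum(
        1 for a in true_abundance
        if a * random.lognormvariate(0.0, noise_sd) >= THRESHOLD
    )

counts = [detected_count() for _ in range(5)]    # five replicate 'pools'
spread = (max(counts) - min(counts)) / (sum(counts) / len(counts))
print(counts, "relative spread: {:.1%}".format(spread))
```

The exact spread depends entirely on the invented noise level; the point is only that some sample-to-sample variation in the detected count is expected even with no real group difference.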

I guess what comes to my mind is that if people are worried about the quantity and quality of research, what patient groups like PR should be doing is making it clear to researchers that they need to do better than this. I do not think this is an issue of funding. If it was worth doing this study at all, it was worth doing it in a way that would produce a meaningful result. I think people need to consider whether the relationship is the other way around: no funding because of poor research, rather than poor research because of no funding. If I were reviewing a grant application that had this abstract submitted as preliminary data, I would be very likely to think 'if the researchers analyse results this badly then it would be foolish to invest more money in this work'. If the data were submitted in an intelligible form then I might give the benefit of the doubt and recommend funding.

This may sound harsh, but to be very honest I doubt a research group can expect to attract funding if it reports findings in this way.
 

Forbin

Senior Member
Messages
966
It's not the total number of proteins found in each group that is the important finding; it's that very different sets of proteins made up those totals.

Out of 2783 proteins found in CFS patients, only 1740 of those proteins (about 62%) were also found in controls. Only 1605 proteins were shared by all three conditions.
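As a quick check of the arithmetic (using the counts as quoted in this thread, not recomputed from the paper):

```python
# Overlap fraction from the figures quoted above.
cfs_total = 2783              # proteins found in CFS patients
shared_with_controls = 1740   # of those, also found in controls
shared_fraction = shared_with_controls / cfs_total
print("{:.1%}".format(shared_fraction))   # about 62%, as stated
```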
 

Jonathan Edwards

"Gibberish"
Messages
5,256
OK, so one of the problems is that the abstract does not give much of an indication of what is actually in the paper. But we still have the same basic problems. We do not know what the results would be like if three randomly selected normal pools were compared (or just three random mixes from all the samples in the study) - maybe they would only share 60% of proteins giving a signal on their test. It is not possible to use pooled samples and then say that you can separate three populations. The differences in each pool might be due to one person having a lot of a particular protein just by chance. There might be no consistency about this within the population.

I cannot quite see why there is so much emphasis on the pooled samples which they say was needed because of small amounts of material. The only useful analysis I can see is of the individual samples, and that is very difficult to evaluate because of the complex statistics used. We have to forget about the figures from the pools in this case and I find it very difficult to see what they are being replaced by. And if there was enough material to get useful data from individual samples I am puzzled by using up material for pooled samples - especially when it is unclear what one could conclude from the result.
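The question raised above - how much overlap two pools drawn from the *same* population would show - can be sketched as a toy null model. Everything here is invented for illustration (it is not a reanalysis of the study's data): if each protein has some borderline chance of being detected in any given pool, two pools from one population can easily share only around two-thirds of their detected proteins.

```python
# Toy null model (invented probabilities): overlap of 'detected' protein
# sets between two pools drawn from the same population.
import random

random.seed(2)
N_PROTEINS = 2800

# Each protein has some chance of clearing the detection limit in any
# given pool (many proteins are borderline).
detect_prob = [random.random() for _ in range(N_PROTEINS)]

def pooled_detection():
    """Set of proteins scored 'present' in one pool."""
    return {i for i, q in enumerate(detect_prob) if random.random() < q}

pool_a, pool_b = pooled_detection(), pooled_detection()
shared = len(pool_a & pool_b) / len(pool_a)
print("{:.0%} of pool A's proteins were also found in pool B".format(shared))
```

Under these made-up assumptions the expected overlap works out to roughly two-thirds, which is why a ~60% overlap between disease pools is hard to interpret without the matching figure for same-population pools.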
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
This may sound harsh, but to be very honest I doubt a research group can expect to attract funding if it reports findings in this way.

This (Natelson group) is one of the groups that the NIH has consistently awarded grants to over many years. What does that tell you?

For the record, I agree that the data has been confusingly presented, and indeed I don't believe the whole story of this data has been told. I think that any specific and central abnormalities in CFS may well be reflected in this data, but they would be just one of many differences reported, due to the statistical methods used.

The data from this study would be worth revisiting (as in asking the authors to share or reanalyse their raw data) if any other groups happened to find specific abnormalities within the proteome...
 

Forbin

Senior Member
Messages
966
The reason the samples were pooled and then depleted of the 14 most abundant proteins was that this allowed for the detection of the rarer proteins in each group.

From the paper...
We used the proteomic strategy described in Methods to assure that the maximum number of proteins would be analyzed and the more abundant proteins did not obscure the less abundant ones having biomarker potential.

The combined result of the pooled patient groups came up with 3641 proteins in total. The combined individual results only came up with 474. However, the individual results were made up of more plentiful proteins, which could be compared against each other in terms of their relative abundance.

When this was done...
The CSF proteome of the two disease states were markedly different from each other. Individual patients also showed consistent patterns of protein abundances discriminating CFS from nPTLS. These results demonstrated that it is unlikely that any single subject’s CSF [cerebrospinal fluid] sample in the pooled analysis contributed disproportionately to the differential proteome distributions observed between the disease groups.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I can see they say that, but to be convincing in science you actually have to show these things, and as they stand I find the results uninterpretable at this level. Presumably they would not be able to draw conclusions about the contribution of individual samples to the rarer proteins from the individual sample data. If the individual samples give the evidence, then it is unclear what the pooled samples contribute - considering the inability to analyse in population terms, as indicated originally. I would have found it much more convincing if they had given raw data on some specific proteins that looked as if they might discriminate the groups - some simple scatter plots instead of the complex array patterns that don't give us what we need.
 

Ecoclimber

Senior Member
Messages
1,011
OK, so one of the problems is that the abstract does not give much of an indication of what is actually in the paper. But we still have the same basic problems. We do not know what the results would be like if three randomly selected normal pools were compared (or just three random mixes from all the samples in the study) - maybe they would only share 60% of proteins giving a signal on their test. It is not possible to use pooled samples and then say that you can separate three populations. The differences in each pool might be due to one person having a lot of a particular protein just by chance. There might be no consistency about this within the population.

I cannot quite see why there is so much emphasis on the pooled samples which they say was needed because of small amounts of material. The only useful analysis I can see is of the individual samples, and that is very difficult to evaluate because of the complex statistics used. We have to forget about the figures from the pools in this case and I find it very difficult to see what they are being replaced by. And if there was enough material to get useful data from individual samples I am puzzled by using up material for pooled samples - especially when it is unclear what one could conclude from the result.

Understand your concerns and will get back to you on this. Full Paper is located here:
PLOS ONE: Distinct Cerebrospinal Fluid Proteomes Differentiate Post-Treatment Lyme Disease from Chronic Fatigue Syndrome

and here is more related info from Pacific Northwest National Labs:
http://www.pnl.gov/science/highlights/highlight.asp?groupid=753&id=795

Other research by Natelson:
PLOS ONE: Establishing the Proteome of Normal Human Cerebrospinal Fluid

Granted, there are issues involving both the quantitative proteomic methodologies and the type of immunoaffinity depletion methods used for the identification and validation of disease biomarkers, as described in this 2007 research article: Quantitative mass spectrometry in proteomics: a critical review. However, with technological and scientific advancements in techniques over the past few years, the possibility of identifying biomarkers within a disease set will soon cross the statistical threshold for validation. See Microscale depletion of high abundance proteins in human biofluids using IgY14 immunoaffinity resin: analysis of human plasma and cerebrospinal fluid and Advances in Proteomic Technologies and Its Contribution to the Field of Cancer.
One paper that we, Miller's lab, felt was significant in our research was Natelson's spinal fluid proteins study (http://www.sciencedaily.com/releases/2011/02/110223171235.htm), as we conducted high resolution mass spectrometry using a viral chip against those spinal fluid proteins at Pacific Northwest National Labs, looking for XMRV sequences; our findings were negative for XMRV sequences. We considered Natelson's research a significant milestone, since it differentiated two unrelated patient communities from a control group, which could lead to potential biomarkers in the future. However, although CSF is used for research purposes, it would be impractical in a clinical setting based on insurance objections. PLOS ONE: Mass Spectrometry-Based Comparative Sequence Analysis for the Genetic Monitoring of Influenza A(H1N1)pdm09 Virus
 

Ecoclimber

Senior Member
Messages
1,011
Studies like these could not be considered under EBM methodology. They are not the highest "standard" of evidence, and so were excluded. There are several problems with EBM "standards". EBM is theoretically sound, but its application is frequently severely flawed...

EBP or evidence based practice is required for doctors to cope with EBM, but growing enforcement of EBM "standards" might be dumbing down our doctors...

They ignore that there is risk of allegiance bias. This is closely tied to a bias in publication reviews if only a small number of reviewers, with high allegiance, are doing the reviewing.

They ignore that there is risk of bias from failure to double blind. The fact that psychopsychiatric interventions may be hard or impossible to double blind seems not to matter to them.

They ignore that there is risk of bias from failure to use a proper placebo control. Again, psychopsychiatric interventions cannot be placebo controlled. They can, however, be controlled against a no-intervention group. PACE needed an additional control arm.

They ignore that there is risk of methodological bias.

They ignore that there is risk of bias in subjective data, especially for outcomes. PACE gave people many months of psychotherapy then asked them how they feel about what was done.

NO psychopsychiatric study is of the highest grade of evidence. ZERO. It's a big fail, which the cookie cutter approach cannot cope with.

With such a cookie cutter approach contrary evidence can be dismissed as "lower grade" evidence.....

....Quite aside from biases from economic imperatives or broad pressures from BPS ideology, there is a big issue with Zombie Science here. That is, the deck is stacked by priority funding going to favoured areas; that research is of lower quality but sails right through because of entrenched bias. I am of course considering the PACE trial as an example. When large numbers of RCTs, most notably not dbRpCTs, are in the mix, and are ineptly given a high standard of evidence, other things can pale in comparison. Yet such reviews do not look at each of these in great detail. For example, the P2P review gave PACE some praise, but this study is of such low quality, with so many demonstrable flaws, that I think all the papers should be retracted.

With some rethinking, rewriting, and after some feedback, I might blog this post.

The hypocrisy within the P2P and IOM methodology is required so as to guarantee a given outcome. Scientific research must meet higher, arbitrary thresholds for acceptance, while biopsychosocial 'research' such as the PACE study, where the researchers even refuse to disclose the data they manipulated to arrive at their conclusions, is accepted. The biopsychosocial model is restricted by neither the laws of science nor the scientific method. It is a batch of convoluted theories drawn from the flavor-of-the-month club. A second high-profile clinical psychologist has delivered a hard-hitting criticism of cognitive-behavioural therapy (CBT), claiming it is simplistic and "does not work".

The working committee looks at every possible flaw in the ME/CFS research over the last thirty years but turns a blind eye to the flaws in the biopsychosocial research. I mean, after thirty years of research there should be a few significant research papers. Granted, most have been published by clinician/researchers and not researchers devoted only to research in their respective fields. However, you cannot ignore the Lights' study.

In this article in the New York Times, replace energy and oil with health/medical/disability insurance companies and the APA. You will now know why the P2P and the IOM panels were commissioned by HHS. You will finally realize why officials on the CFSAC and within HHS, the NIH and the CDC behave the way they do toward this patient community.

I digress....
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
http://www.nytimes.com/2014/12/07/u...ive-alliance-with-attorneys-general.html?_r=1
We are living in the midst of a constitutional crisis

I think this is the case for all democracies. Our constitutions were written for a different age, one without large multinational megacorporations, or instant media with twenty-four-hour news cycles and a possible end to in-depth investigative journalism. Some of the early writings warn about these issues. I think Adam Smith would be very upset with modern capitalism. Modern democracy is also failing. These things can be fixed, and we (concerned citizens in free countries) have the power to fix them, but we do not have the understanding or will. However, this is not the thread for that discussion.
 

catly

Senior Member
Messages
284
Location
outside of NYC
This may sound harsh, but to be very honest I doubt a research group can expect to attract funding if it reports findings in this way.

This (Natelson group) is one of the groups that the NIH has consistently awarded grants to over many years. What does that tell you?

Indeed @Snow Leopard, Natelson has been one of the more prolific ME/CFS researchers in the US, and has done so with much of the few NIH dollars awarded each year. I'm not sure how much this research has contributed to our knowledge base, or how much of it (if any) was considered in the P2P AHRQ report, but here's a link to his published research.

And here's a summary of his presentation to the Mass CFIDS in 2012, which I believe discusses the study referenced in this thread and which indicates that Dr. Natelson has received funding to continue this and other research. It may be of some interest to @Jonathan Edwards regarding ME subtypes.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Thanks, catly, the presentation summary is a useful guide to Natelson's approach, which seems measured and intelligent. I have a suspicion that this particular proteomic study looks unimpressive simply because proteomics is such a blunt tool and so difficult to extract useful conclusions from. Hopefully further work will tease out something concrete.