Philosophy of M.E.: If not Conspiracy, then What?
Part One: Verificationism
WARNING: Philosophy Alert! This is esoteric. It has points I am leading up to but might lose many people along the way.
One of the things I am leading up to is a further discussion of what I am calling the Dysfunctional Belief Model of CFS (DBM), or the Wessely school as it is sometimes called. It is also referred to by its overarching philosophy (the Biopsychosocial school) or its techniques (CBT/GET). However, BPS and CBT/GET have other applications and philosophy - the DBM uses highly modified versions of these.
I have already challenged the view that the big problem in M.E. is a grand conspiracy or a series of lesser conspiracies. This leads directly to the most obvious question: if there is no provable conspiracy, then what? My proposed answer is a combination of three overlapping things: confirmation bias, verificationism and Zombie Science. I will argue that these combine to create a particularly insidious and dangerous form of dogmatic verificationism.
Later I intend to discuss how this relates to "Evidence Based Medicine" and the Biopsychosocial movement for treating M.E., though some of that I will leave for my book. I will also be saying some negative things about the whole field of psychosomatic medicine.
This does not mean there are absolutely no conspiracies at all, no wrongdoing, no deliberate deceptions, no hidden agendas. However, this series of blogs will probably not address many of these, because such claims must be based not on possibilities but on evidence and reason, and I am busy accumulating evidence and developing argument behind the scenes. It is not as though there is a shortage of source material. These blogs are my preamble. I will probably make many mistakes, and provoke more than a little debate or dissent, even if not all of it is public. Furthermore, as my knowledge grows my view may change.
I am temporarily setting aside a hidden question here: whether there is even a question of wrongdoing at all. For this audience it is not one I need to address, but at some point it has to be answered. I currently think the evidence shows the answer is yes.
For the purpose of this blog I am also not going to discuss Zombie Science. At some stage I may have more to say about zombies however. Who doesn't think zombies are interesting?
Verificationism
In my blog on conspiracies I briefly introduced the notion of Verificationism in science. A very simplistic view of it, in its original context, is that it purports to establish the meaning of statements through verification, or finding evidence: if you think the bird noise that sounds like quacking from the lake over the hill is from ducks, you can verify it. You go over the hill and photograph a duck. Ducks quack. So you have partially verified your hypothesis. You then see a flock of ducks. More verification. Therefore it is reasonable to presume that the bird noise is from ducks: hypothesis verified. However, if I said I think it is the ducks, it would be absurd to claim that statement had no meaning until I found the ducks, which is an interpretation from classical philosophy - meaning has little to do with truth or even verification. In the philosophy of science it might, however, be justified to claim that the statement was not scientifically verified until the ducks were found.
Now falsification (as in critical rationalism) might pose a different question: if it is the ducks, does that mean all of the noise is from ducks? Let's make that assumption and test it. If we can find one bird that is making noise but is not a duck, the hypothesis is falsified. So I go and find a swan, a goose and some miscellaneous waterfowl. They all make some noise. So the noise is mostly quacking, but has other components. Hypothesis refuted. If no bird other than ducks had been found, we could say the theory was not falsified, and so consider it more reliable - but not proven.
If it is falsified, the verificationist may then say: very well, the hypothesis is now that the noise is mostly from ducks, and we definitely see mostly ducks - and so the debate continues. Each time a position is refuted it can be modified to account for the changing data. What is supposed to happen is that as the modifications accumulate it becomes clear the hypothesis is less and less tenable. Sadly this does not always happen.
The other problem that arises is that a verificationist approach and a critical rationalist approach lead to different questions. So the way the problem is framed, and the kind of evidence that is sought, can be different. A verificationist approach can lead to bias such that contrary evidence is not found, whereas a critical rationalist approach actively looks for ways to find contrary evidence. Verificationism is better for supporting and creating hypotheses, critical rationalism is better for eliminating incorrect hypotheses.
The modern scientific versions of these are much more complicated, with layers and caveats and statistical arguments, so are not nearly this simplistic, but it gives the general idea.
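For readers who think in code, the two search strategies in the duck example can be caricatured in a toy sketch. Everything here is invented for illustration - the "lake" data and both function names are mine, not anything from the literature:

```python
import random

# Toy lake: mostly ducks, but a swan and a goose are also present.
LAKE = ["duck"] * 18 + ["swan", "goose"]

def verificationist_search(lake, samples=10, seed=1):
    """Look only for evidence supporting 'the noise is from ducks':
    each duck sighted counts as confirmation; non-ducks are never
    deliberately sought."""
    rng = random.Random(seed)
    confirmations = sum(1 for _ in range(samples)
                        if rng.choice(lake) == "duck")
    return confirmations  # more ducks seen -> more "verification"

def falsificationist_search(lake):
    """Actively hunt for a single counterexample: one noisy non-duck
    refutes the hypothesis outright."""
    for bird in lake:
        if bird != "duck":
            return bird  # hypothesis falsified by this bird
    return None  # not falsified - more reliable, but still not proven

print(verificationist_search(LAKE))   # a count of confirming sightings
print(falsificationist_search(LAKE))  # the first non-duck found
```

The point of the sketch is the asymmetry: the first search can only ever pile up confirmations, while the second is built to terminate the moment contrary evidence appears.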
Confirmation Bias
Problems with confirmation bias due to verificationism are said to abound in psychological research, though I do not yet know how widely accepted this proposition is - it is definitely a contentious issue. Researchers working under this methodology are trying to gather data that support their theory. They may deliberately design and conduct experiments to support their theory, and avoid experiments that would not do so - indeed, it is sometimes the case that experiments to disprove the theory simply cannot be constructed within their methodology.
An example of this is a study looking at how psychiatrists diagnose mental disorders and, as a result, how accurate their diagnoses were. Those who used confirmatory diagnostic processes were often in error - so this biases not just research but also clinical practice.
One opinion they can hold is that contrary hypotheses or counter-arguments are somebody else's problem; it is not their responsibility to do all the science. The attitude can be that if others fail to disprove them, then clearly they were right - or they would have been disproved. So as they accumulate more and more data, they say you can be more and more sure they are right.
Confirmation bias occurs when contrary data or interpretations are ignored (in the extreme case), or when the experimental design does not allow contrary data to be found, or the data is selected or interpreted in a way to support the hypothesis, or when the experimental design is intended only for the purpose of accumulating supporting data. In essence it is about trying to confirm what you already think is true. There is only a view toward showing the evidence fits the model, although some superficial consideration of alternatives may occur. If there is a mismatch between data and model it can be dismissed, or the model tweaked, or perhaps explained in some other way - such as a call for more research funding as obviously the current study was underpowered.
It is when contrary models or data are recognized that additional explanations or hypotheses are added. Over time these accumulate to cover all possibilities. When this process is more or less complete, as in psychoanalysis, it ceases to be anything like science and becomes non-science. Every possible research outcome can now be explained. It becomes a superstition, a pseudoscience, a modern-day cult within the scientific or medical communities.
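That end state - a model with an answer for every possible outcome - can also be caricatured in code. This is a deliberately hypothetical sketch, not any real clinical model: whatever the observation, an "explanation" is produced, so no observation can ever count against the model.

```python
def unfalsifiable_model(outcome: str) -> str:
    """Return an 'explanation' for any possible trial outcome.

    Because every branch produces an explanation consistent with the
    model, no result can ever refute it - which is precisely what
    makes it non-science in Popper's sense.
    """
    if "improved" in outcome:
        return "treatment worked, as predicted"
    if "worsened" in outcome:
        return "patient resisted treatment; more treatment needed"
    return "underlying progress is masked; more research needed"

# Every conceivable result is "explained"; none counts as contrary evidence.
for result in ["patients improved", "patients worsened", "no change at all"]:
    print(result, "->", unfalsifiable_model(result))
```

Contrast this with the duck hypothesis earlier, which a single noisy swan was enough to refute.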
Confirmation bias is a huge risk for me in investigating for my book. I have limited time and resources and so will be selective in what I look at. I already have a negative view of the DBM. That process of selection can limit what information I use in my argument. While this is a risk for me, I think it is also a process very firmly entrenched in the DBM.
Accepted Practice
One of the accepted practices in science is that research is published and so is open to debate. Flawed or misconceived research can be challenged and mistakes revealed. However, if nobody is directly opposing a group of researchers, and nobody is engaging them in debate, it can look as though they have no opposition. A model without opposition may then be held to be widely accepted - but this might not be the case.
In the case of the DBM there is almost no vocal opposition from within their small branch of psychiatry - at least none that gets written up in press releases and the popular media quite so often. There are not a lot of researchers working on this worldwide. While there are numerous studies showing that CBT and GET do not work, and in particular that they cause a decline in functional capacity, these studies are frequently ignored. The results are simply not discussed. The DBM proponents also get support from other neo-Freudian disciplines - those who support hysteria, conversion disorder or somatization as medical diagnoses.
Slightly outside the narrow view however there are numerous rival theories, enormous amounts of contrary data, and many critics within medical academia and general medical practice. Indeed most of the opposition comes from other schools of psychiatry and medical science, and I will blog about some of this later. Almost none of the issues ever raised by the critics of the DBM are addressed by the proponents of the DBM - they are simply ignored. Ignoring contrary data, models and criticism is indicative of something beyond simple confirmation bias.
Dogmatic Verificationism
The extreme end of verificationism occurs when dogma overshadows reason. Whereas verificationism tends to create bias, in dogmatic verificationism contrary evidence is deliberately ignored. Dogma, not reason, rules. Any contrary data is not considered relevant at all and is dismissed or simply not discussed. When it has to be addressed, and in an absence of sound argument against it, diversionary claims can be made and the original issue is again ignored. This is what I think is happening in the Dysfunctional Belief Model of CFS.
As Karl Popper pointed out over half a century ago, if you have a model that is flexible enough to "explain" everything, then everything can be "explained". So each and every case you "explain" becomes more "evidence" you are right. Popper called this non-science, though in other places he called it pseudoscience. If there is an explanation within a model for anything and everything, without objective evidence to support it, the explanation fits the definition of a superstition more than a science. It cannot be disproved, and hence it cannot be properly tested.
A Philosophical "Joke"
This is about different positions on knowledge. It is not quite accurate, but it gives the flavour of the ideas. It also indicates why I think practical solutions are important - theory is nice, but it has to be applied in the real world. Models can explain a lot, but a model is not reality.
A bunch of philosophers went into the Aussie outback after hearing a story that a Yowie had been sighted. This is the Australian Sasquatch. The first philosopher said "Look, there's a Yowie, and he looks like he wants to eat us." The sceptic said "There is no such thing as Yowies; I still can't see it." The verificationist said "Let's observe and see if he will indeed eat one of us. If we watch for long enough we may prove that Yowies want to eat people." The falsificationist said "I prefer the theory that Yowies don't eat people. If he eats one of us that theory is refuted and I am out of here." The pancritical rationalist was more interested in the question than the answer: "Are we sure we are asking the right questions?" The first philosopher, preferring the precautionary principle, said "You can hang around to get eaten, but I am climbing a tree to watch from there." Who do you agree with?
Final comments: there is evidence of something that I would consider an M.E. conspiracy and yet breaks no laws. It is about irregular conduct to circumvent guidelines, ethics and standards. There is only some evidence so far; it needs a lot more analysis. This will not be posted on PR initially if I can establish this evidence. The report, as I currently envisage it, will be forwarded to relevant authorities and leading advocates, though I might detail such claims in my book - or I might not. At some later point it might be released. There is also a good chance I will never be able to get that far, or that insufficient evidence exists once I get to the fine details, in which case all I will have is a number of uncomfortable questions for the people involved - and the list of uncomfortable questions is already starting to grow. There is an appendix summarizing these questions in my book. Should the report not be adequately addressed, the entire report may be made public, or published.
My next blog, if I get around to finishing it, is The Witch, The Python, The Siren and The Bunny. The Python's name is Monty, and I hope to be able to show just how irrational the "logic" behind the DBM and a few other things really is. Since I haven't finished writing this yet my fingers are crossed. Oh darn, there is that superstition again ...