Examples of misleading statements in CFS papers from biopsychosocialists

Dolphin

Senior Member
Messages
17,567
Hi biophile, I agree, but I can see their counter-argument. It was indeed 40% of patients undergoing CBT/GET who improved, and everyone receives standard medical care, so why is this an issue? If the effect is not always prolonged, that just means they needed more treatment and it was stopped too early; in the next study patients will require a much longer treatment period.
There are two 40% claims mentioned above. Are you replying to the one from Cochrane or the PACE Trial?

I could reply to both now but think I'll cut down my work and wait to see which one you are talking about.
 

Dolphin

Senior Member
Messages
17,567
Hi oceanblue, I had the same experience, even in textbooks. Who has time to check up on every reference in every paper they read though? Bye, Alex
I find looking at references useful for "revision". I imagine plenty of others do likewise.

If one comes to know a field, particularly one like ME/CFS where there are often not that many papers on a particular topic, one can often quickly see whether a reference is correct or not.
A team of peer reviewers will often have some experts from the subject field (although commissioned editorials often don't go through peer review).

As well as knowing what a particular study says, if one becomes knowledgeable in a field, one learns what has already been shown and what hasn't (or at least has a general idea of this), which again gives a sense of whether something is likely to be in a reference.

In the ME/CFS field (and perhaps other fields), one can also guess whether something might have been said by looking at the authors involved.

So good, rigorous peer reviewers could easily spot more than they do.
 

Dolphin

Senior Member
Messages
17,567
In reply to biophile's post 15, here is what I wrote on the other thread in response to the comment there:

Hi biophile, just to pick up on a point you made that I agree with, on cherry picking supporting info: this is not only widely done but inevitable in any complex topic. It's not like you can reference all 5000 papers on ME and CFS in any publication; you have to use selection criteria. In this sense the cherry picking of the biopsychosocial proponents is justifiable. However, there are larger and overlapping issues. When faced with specific scientific challenges, especially those that go to the very foundation of the biopsychosocial hypothesis, what you use to support your argument is much more critical. It has to address the issue at hand, and do so in a rational and data-supported way. Typically this is not the case. For the PACE trial this is not the case. Instead we get multiple claims of violent patients, unscientific attacks, and so on. Arguing the man: a logical fallacy. They rarely address the complaints and issues, many of which come from respected scientists and clinicians; instead they divert attention to those hysterical patients again.

This is politics and spin, not science. I think the problem stems from the historical situation. For decades they have not been substantially challenged. Nobody took them seriously, and they were in their own little isolated area: people outside this area just ignored them for the most part. Now more and more people are realizing just how foundationally baseless and methodologically flawed the biopsychosocial research is. BOOKS are being written about it by medical academics (I have one on order). The charge is not being led by hysterical patients, it's being led by medical academics, including other psychiatrists. They have never had to face this level of criticism, and it is getting worse as more and more wake up to what they have been saying and doing. They need a scapegoat, a straw man, and they selected us, either consciously or unconsciously - I am in no position to infer motive or whether it's intentional, but I can point to the fact of it happening.

Bye, Alex
Relating to this is the wider availability of the full texts of papers.
How many patients in the late 80s and the early 90s (before the internet was widely available and before there was that much on it) saw full papers? The more eyes that see the full text, the more likely it is that one person or more will spot problems.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
There are two 40% claims mentioned above. Are you replying to the one from Cochrane or the PACE Trial?

I could reply to both now but think I'll cut down my work and wait to see which one you are talking about.

Hi Dolphin, I was referring to biophile's account of the PACE trial in post 2. I don't buy their hypothetical counter-argument, but some might, and it should be countered before it's even out there if possible. They might also argue that to really show the effect of standard medical care they should have had a control arm with zero medical care ... but then that would point to a failure in design, which they would not like to admit. I am not trying to defend them, just anticipate counter-claims so we are not surprised.

Furthermore, on a separate topic: if what I recall is correct about what the patients were told, and what their therapists were told (my memory is vague and hazy at this point, bad memory day), the CBT/GET patient arm was primed to answer the surveys in particular ways, and then indoctrinated on top of that priming. This is a clear case of bias in the experimental design and implementation, so any result is distorted. Has anyone looked into this for clear design bias?

Bye, Alex
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Relating to this is the wider availability of the full texts of papers.
How many patients in the late 80s and the early 90s (before the internet was widely available and before there was that much on it) saw full papers? The more eyes that see the full text, the more likely it is that one person or more will spot problems.

I completely agree Dolphin. I first began reading up on this in 1986, but not much, and only from media reports ... Yuppie Flu type stuff. I concluded it had nothing to do with me (doh!) as the media seemed to be describing something else entirely. In 1990 I read a 1989 book on it, and another a few years later. It wasn't until 1993 that I started looking at the science, and my resources were limited. My focus was also on biochemistry; I didn't really encounter psychobabble till the mid to late 90s.

Our focus is also different now. We are less naive, more educated on relevant issues, and have a patient base with a wide range of analytical skills - people trained one way may see things people trained another way won't, and vice versa. It could well be worth going back over old documents. To be honest, I am glad to see this thread as the book I am hoping to write is more or less related to the point of this thread.

One question I would like people to keep in mind: if something is potentially deceptive, and it follows a pattern in multiple papers or multiple instances in one study, does it constitute an ethical violation? I am not an expert on ethics, but I think I might have to investigate ethics at some point. These people are mostly medical practitioners, subject to medical codes of ethics. If there is repeated violation, and it can be demonstrated, we have a case against them and the research. We might even be able to provide cause to have the research retracted.

Bye, Alex
 

user9876

Senior Member
Messages
4,556
We are less naive, more educated on relevant issues, and have a patient base with a wide range of analytical skills - people trained one way may see things people trained another way won't, and vice versa. It could well be worth going back over old documents. To be honest, I am glad to see this thread as the book I am hoping to write is more or less related to the point of this thread.

Bye, Alex

I think that is the whole point. Many professionals, including researchers, used to get away with shoddy work because it was accepted within their small community. Now that people have much more access, they get upset at criticism. What's more, as people from other fields look at papers, they don't want to accept the criticism, particularly when some of what they are doing has become an accepted technique within their community and hence is used unquestioningly. People reading papers now want, and perhaps expect, more data. When I scanned the PACE trial paper I felt the need to see the probability distributions of the outcomes rather than just the means and standard deviations. With a potential mix of illnesses being included and a very high variance in the results, this seems necessary. Maybe that's just because I'm used to analysing data sets.
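To illustrate what I mean: with a mix of illnesses, two arms can report nearly identical means and similar standard deviations while the underlying distributions look completely different. Here's a rough sketch with simulated numbers (made up purely for illustration - this is not PACE data, and the arm sizes and scores are arbitrary):

```python
# Simulated illustration only - not PACE data. Two arms with (almost) the same
# mean and similar SD, but very different shapes: one homogeneous, one a mix of
# responders and non-responders. Summary statistics alone would hide this.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

homogeneous_arm = rng.normal(loc=5.0, scale=8.0, size=160)
mixed_arm = np.concatenate([
    rng.normal(loc=15.0, scale=4.0, size=60),   # responders
    rng.normal(loc=-1.0, scale=4.0, size=100),  # non-responders
])

for name, arm in (("homogeneous", homogeneous_arm), ("mixed", mixed_arm)):
    print(f"{name}: mean = {arm.mean():.1f}, sd = {arm.std(ddof=1):.1f}")

plt.hist(homogeneous_arm, bins=30, alpha=0.5, label="homogeneous arm")
plt.hist(mixed_arm, bins=30, alpha=0.5, label="mixed arm")
plt.xlabel("change in outcome score")
plt.ylabel("number of patients")
plt.legend()
plt.show()
```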

Peer review is a rubbish system, particularly as time seems to be increasingly limited. I know when I review papers (not medical ones) I will rarely check that equations are correct or follow up references unless something feels wrong. There simply isn't time, particularly for a conference, when you have a pile of papers to review whilst being expected to do your own research and publish. I personally think it would be better to have open reviews where reviewers are named and their comments given with the paper. I've seen a couple of workshops that do that.
 

Dolphin

Senior Member
Messages
17,567
I think that is the whole point. Many professionals, including researchers, used to get away with shoddy work because it was accepted within their small community. Now that people have much more access, they get upset at criticism. What's more, as people from other fields look at papers, they don't want to accept the criticism, particularly when some of what they are doing has become an accepted technique within their community and hence is used unquestioningly. People reading papers now want, and perhaps expect, more data. When I scanned the PACE trial paper I felt the need to see the probability distributions of the outcomes rather than just the means and standard deviations. With a potential mix of illnesses being included and a very high variance in the results, this seems necessary. Maybe that's just because I'm used to analysing data sets.

Peer review is a rubbish system, particularly as time seems to be increasingly limited. I know when I review papers (not medical ones) I will rarely check that equations are correct or follow up references unless something feels wrong. There simply isn't time, particularly for a conference, when you have a pile of papers to review whilst being expected to do your own research and publish. I personally think it would be better to have open reviews where reviewers are named and their comments given with the paper. I've seen a couple of workshops that do that.
Don't know your background, user9876, but glad to have you contributing - it sounds like you come from a rigorous field.

By the way, a lot of papers on BioMed Central have a pre-publication history where one can see reviewer comments (and reviewers are named).
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Peer review is a rubbish system, particularly as time seems to be increasingly limited. I know when I review papers (not medical ones) I will rarely check that equations are correct or follow up references unless something feels wrong. There simply isn't time, particularly for a conference, when you have a pile of papers to review whilst being expected to do your own research and publish. I personally think it would be better to have open reviews where reviewers are named and their comments given with the paper. I've seen a couple of workshops that do that.

Hi user9876, my focus for this week is on the institutions that allow this kind of nonsense, and I am about to post a blog on it. The institutions have been negligent, consistently, systemically, for a long time. It is time for a paradigm shift.

I am about to post a blog on this but I have deliberately not addressed the referee and editorial problems. Perhaps you might like to write your own blog on what you see are the issues? There are certainly many criticisms of these processes that have been published.

On reviewing research, I have done this three times myself, each time informally for the author. It was to provide feedback, so its purpose was not so much publication quality as the argument itself. It takes a lot of time to properly review something. Superficial reviews are likely the norm. The trust is put in "expert" reviewers, but expertise is highly specialized now, and so I think experts probably wind up analyzing material with which they are less familiar. However, what happens when an entire field of study, like the biopsychosocial school for ME and CFS, appears to be methodologically flawed and without rational or sound empirical basis? Would not every reviewer from this field be similarly compromised?

Bye, Alex

PS My blog is now posted:
http://forums.phoenixrising.me/entry.php?1336-The-Blame-Game-A-Way-Forward
 

Dolphin

Senior Member
Messages
17,567
I am about to post a blog on this but I have deliberately not addressed the referee and editorial problems. Perhaps you might like to write your own blog on what you see are the issues? There are certainly many criticisms of these processes that have been published.

On reviewing research, I have done this three times myself, each time informally for the author. It was to provide feedback, so its purpose was not so much publication quality as the argument itself. It takes a lot of time to properly review something. Superficial reviews are likely the norm. The trust is put in "expert" reviewers, but expertise is highly specialized now, and so I think experts probably wind up analyzing material with which they are less familiar. However, what happens when an entire field of study, like the biopsychosocial school for ME and CFS, appears to be methodologically flawed and without rational or sound empirical basis? Would not every reviewer from this field be similarly compromised?
And of course, this can have two problems:
(i) people in the "in crowd" don't get reviewed that harshly/rigorously.
(ii) people who happen to have different ideas may have their material reviewed more harshly than it otherwise would be.

It has made me think that we probably do need some of the "more sympathetic"/less rehab-focused psychologists, and the like, in the field (versus some people's view that we don't need them at all): so they can offer more independent or rigorous reviews. Most medical conditions these days have psychologists doing some sort of research (or that's the impression I get) - you can't "eradicate it" as some people might like. So you need some good experts involved/around.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
It has made me think that we probably do need some of the "more sympathetic"/less rehab-focused psychologists, and the like, in the field (versus some people's view that we don't need them at all): so they can offer more independent or rigorous reviews. Most medical conditions these days have psychologists doing some sort of research (or that's the impression I get) - you can't "eradicate it" as some people might like. So you need some good experts involved/around.
(My bolding)

Hi Dolphin, I completely agree with your entire post. On the last paragraph's issue, I have long thought that we need psychologists and even psychiatrists. ME and CFS are traumatic conditions, though that trauma is, I believe, secondary to the primary condition and only in some patients. Some need help. They need help from psychologists and psychiatrists who will assist them independently of a diagnosis of ME or CFS, but taking it into account. We do not need psychiatrists or psychologists who have decided the unproven biopsychosocial model is a good idea. We have to make massive adjustments from health to ill-health, in a sea of confusion and conflicting advice, including so much bad advice from the medical profession. However, when the psychological and psychiatric professions alienate their patients by adopting unproven and unsound practices, they have failed us twice: once for adopting such practices, and once more for then being unable to help us, including by losing our trust. Bye, Alex
 

biophile

Places I'd rather be.
Messages
8,977
The Great White Hope, spinning occupational outcomes re CBT-GET?

White did a presentation titled "What helps occupational rehabilitation when the doctor cannot explain the symptoms?" Dolphin posted the URL a few weeks ago (http://www.sou.gov.se/socialaradet/pdf/Peter Whites presentation.pdf). This reminded me of my recent post about claims by Cella et al (Sharpe & Chalder) 2011 on occupational outcomes for CBT and GET which upon investigation seem to be smoke and mirrors (http://forums.phoenixrising.me/show...ychosocialists&p=235535&viewfull=1#post235535) so I had a look into it.

From slide 17 of 24 of White's presentation (after discussing CBT/GET for CFS) ...

But do these treatments help patients return to work?

Only cognitive behavior therapy, rehabilitation, and exercise therapy interventions were associated with restoring the ability to work.

- Even without occupation as the aim.

Systematic review: SD Ross et al, Arch Intern Med 2004

The key word here is "associated"; the methodological quality of these outcomes was poor, so Ross et al probably weren't able to use stronger wording (http://archinte.ama-assn.org/cgi/reprint/164/10/1098.pdf). White conveniently failed to mention that the authors also stated that "No specific interventions have been proved to be effective in restoring the ability to work." The authors state, "Only 4 longitudinal studies [26-29] reported employment at baseline and follow-up after intervention." The relevant information is in Table 6 (http://archinte.ama-assn.org/content/vol164/issue10/images/medium/ioi30120t6.gif) and on p1103; below I'll briefly describe each study in a quotation box:

[Image: Table 6 of Ross et al 2004]


* Akagi et al 2001 [CBT] (http://www.ncbi.nlm.nih.gov/pubmed/11600166 or http://www.cfids-cab.org/cfs-inform/Cbt/agaki.etal01.pdf) : A non-RCT retrospective followup of 94 patients with a questionnaire response rate of only 61% and no control group.

* Dyck et al 1996 [rehabilitation] (http://www.ncbi.nlm.nih.gov/pubmed/8694980) : I cannot tell from the abstract if there was a control group; however, in Table 6 of Ross et al 2004 the employment outcome was based on only 2 patients, 1 of whom became employed at 3 month followup.

* Fulcher & White 1997 [exercise therapy] (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2126868) : The comparison of improved occupational status was uncontrolled at 12 month followup because it was a crossover study (and did not account for dropouts or compare with the controls who only did the flexibility program instead of the exercise program). The authors acknowledge this weakness but then try to dismiss it by claiming that spontaneous improvement was an unlikely explanation because it didn't occur in a "similar sample" in another study. The sparse details are on p1651. Table 6 of Ross et al points out that followup figures are "based on the number of patients enrolled" and that at 15 month followup from baseline the rate of employment went from 39% to 47% (no control group).

* Marlin et al 1998 [individualized programs] (http://www.ncbi.nlm.nih.gov/pubmed/9790492) : It appears from the abstract that the intervention group received a range of treatment in addition to CBT/GET ("optimal medical management", pharmacological treatment for psychiatric comorbidity, sleep management, participation of patients' family, etc) while the control group received absolutely nothing at all, so we don't know what effect CBT/GET had by itself. Also, in a systematic review of interventions which included CBT and GET (Whiting et al 2001 - http://jama.ama-assn.org/content/286/11/1360.full), the methodological quality of Marlin et al 1998 was described as "very poor".

Two other studies are included in Table 6 of Ross et al 2004 as a quasi-control group for the natural course of the illness (Tiersky et al 2001, Vercoulen et al 1994), with the implication that the poor occupational outcomes therein suggest that the above interventions could be effective; but, as stated elsewhere, no intervention has been proved to be effective in restoring the ability to work.

The Whiting et al 2001 paper I mentioned earlier points out that the reviewed studies reported occupational outcomes at baseline but not post-treatment, and argues for the importance of such outcomes, e.g. employment hours. An updated version of that systematic review (Chambers et al 2006 - http://jrsm.rsmjournals.com/cgi/content/full/99/10/506) merely repeats the findings of Ross et al 2004 rather than reviewing the data themselves: "Although the authors found some small studies of interventions (including rehabilitation, CBT and graded exercise therapy [GET]) that reported improved employment outcomes, they concluded that no intervention has been proved to be effective in restoring the ability to work."

On to the updated Cochrane 2004 systematic review for GET (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003200.pub2/pdf): there are no occupation-related outcomes (?); "functional work capacity" was mentioned under quality of life but appears to be exercise related (?), and in the one included study for this outcome there was no statistically significant improvement anyway, although the conclusions said the improvement was "close to significance".

On to the updated Cochrane 2008 systematic review for CBT (http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001027.pub2/pdf): occupational outcomes were reviewed. One study showed no significant improvement in absenteeism from work, while another ("Sharpe 1993", published as Sharpe et al 1996 without the employment data? - http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2349693/pdf/bmj00523-0026.pdf) did show a significant improvement in work status at 12 months between the two groups of 30 participants each (risk ratio = 3.17 [95%CI = 1.47, 6.81]).
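As an aside, for anyone wondering where a figure like that comes from, here is a rough sketch of how a risk ratio and its log-scale 95% confidence interval are computed from 2x2 counts. The counts below are placeholders I've made up purely to roughly reproduce the quoted numbers - the actual event counts from "Sharpe 1993" are not given here:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a log-scale 95% CI."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of ln(RR) for two independent proportions.
    se_log_rr = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts (NOT taken from Sharpe 1993): 19/30 improved vs 6/30.
rr, lo, hi = risk_ratio_ci(19, 30, 6, 30)
print(f"RR = {rr:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")  # approx 3.17 [1.47, 6.81]
```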

The CBT meta-analysis of Malouff et al 2007 (http://www.cfids-cab.org/cfs-inform/Cbt/malouff.etal07.pdf) briefly mentions but does not clearly give occupation-related outcomes. They state that "Effects of treatment did not vary significantly between objective and subjective measures. That finding may suggest that treatment benefits extend about equally to subjective reports and to observable behavior, such as cognitive test performance and work and school attendance." Note that Table 4, where this data is presented, is based on only 62 participants, and it is unclear which study the data for "objective functioning" was drawn from.

Have not seen the full text of the recent CBT/GET meta-analysis of Castell et al 2011 (http://onlinelibrary.wiley.com/doi/10.1111/j.1468-2850.2011.01262.x/full) but judging from PR threads on the paper (http://forums.phoenixrising.me/show...ed-Exercise-for-CFS-A-Meta-Analysis-(Castell)) and accompanying editorial by Knoop (http://forums.phoenixrising.me/show...-Fatigue-Syndrome-Where-to-Go-From-Here-Knoop) I think it is a relatively safe bet that occupational outcomes weren't presented.

Not to mention, as noted in Twisk & Maes 2009, the evaluation of the (failed) Belgian CFS clinic application of CBT/GET, which showed that employment hours actually decreased after CBT/GET (http://niceguidelines.files.wordpress.com/2009/10/twisk-maes-cbt1.pdf). The PACE Trial group has not yet published the data it collected on occupational outcomes (the results given for the "Work and Social Adjustment Scale" are not the same thing). Another safe bet: this outcome would have been proudly presented in the 2011 Lancet paper if it had been clearly successful.

So apparently after 20 years of research and sweeping claims and false hope, the "best" evidence for improved occupational outcomes boils down to a single CBT study on 60 patients meeting Oxford criteria (not included in the paper that White is using to give the impression of improved occupational outcomes), with contradictory evidence or poor evidence on top of that.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Hi biophile, I agree with your post 33. A tentative title for my book is BPS: Adopting the Null Hypothesis. I think it has failed (though it doesn't have to), and it's time we realized they have failed to show CBT/GET gives any cost-effective improvement in ME or CFS. By cost I do not just mean $; I mean all costs, including social and personal. Bye, Alex
 

Enid

Senior Member
Messages
3,309
Location
UK
I'm no scientist, but can I add that the whole basis of the psycho model is a complete insult to one's intelligence - my own brush with them, "we think you are imagining", is outrageous for starters. Plenty of room for sleight of hand in their misguided beliefs. It comes down to "belief" alone, which doesn't sound very scientific to me. Can't recall my uni logic too well, but isn't their premise shaky?
 

Dolphin

Senior Member
Messages
17,567
Thanks biophile.

White did a presentation titled "What helps occupational rehabilitation when the doctor cannot explain the symptoms?" Dolphin posted the URL a few weeks ago (http://www.sou.gov.se/socialaradet/pdf/Peter Whites presentation.pdf).

And of course PDW has interesting COIs:
PDW has done voluntary and paid consultancy work for the UK Departments of Health and Work and Pensions and Swiss Re (a reinsurance company).

from: http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60096-2/fulltext

As well as the fact that he has built his career, his expertise in CFS, and his service around these therapies, which I think are also conflicts of interest.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
I'm no scientist, but can I add that the whole basis of the psycho model is a complete insult to one's intelligence - my own brush with them, "we think you are imagining", is outrageous for starters. Plenty of room for sleight of hand in their misguided beliefs. It comes down to "belief" alone, which doesn't sound very scientific to me. Can't recall my uni logic too well, but isn't their premise shaky?

Hmmm, depends what you mean by shaky. On one definition of superstition, it's a superstition. That means it is internally consistent, and they have a reason for everything. If there is counter-evidence, that is explained. If someone believes otherwise, that is explained. It's untestable, because it is defined such that there is no basis for theoretically negating it regardless of outcome. This is essentially why Karl Popper called it non-science. I think superstition is just as valid a label.

Put another way, they had a dodgy theory based on Freud, hysteria and psychosomatic illnesses. They then kept adding ad hoc hypotheses - completely unsubstantiated claims - that might explain the problems away. Eventually you have a system of circular beliefs, where almost everything is explained in terms of other beliefs. That's one definition of a superstition. Note also the rise of excessively influential individuals. That also happens in superstitious communities.

So there is no hard evidence for their theories. They don't have an objective biomarker. There is no way to be sure any of what they say is right.

Now the BPS model was tacked onto that, in what some think is an attempt to justify Freudian psychology in a world that is now hostile to the theories of Freud. Freud got so very much of it wrong. So did the claims about psychosomatic illness. The majority (I suspect) of the claimed psychosomatic illnesses were eventually proven to be physical illnesses for which there was originally no explanation. The same looks likely to happen for nearly(?) all the rest of them, including ME. These are theories so desperate for any credibility that they latch onto anything. In this context, the frequent complaints about militant patients can be seen as a smokescreen to hide the complete failure of their medical model. I think the entirety of the hysteria and psychosomatic claims needs to be scrapped. There are specific mental illnesses, I think, but these two are not among them.

Bye, Alex
 

Enid

Senior Member
Messages
3,309
Location
UK
Yes, so let's call it dodgy, Alex - no objective markers, just hypotheticals - unlike science, I believe, which works from an initial thesis to a proven/not proven conclusion. In hindsight, what struck me in my brief brush with a psychiatrist was that he was dealing in "belief" - his belief - in my imaginings, with no proof offered.

And if I ever doubted the persuasive techniques involved: in anger I mentioned psychology as part of my degree - to which he replied "and you are the worst".
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
Not to mention, as noted in Twisk & Maes 2009, the evaluation of the (failed) Belgian CFS clinic application of CBT/GET, which showed that employment hours actually decreased after CBT/GET (http://niceguidelines.files.wordpress.com/2009/10/twisk-maes-cbt1.pdf).

Twisk & Maes cites:
Koolhaas MP, de Boorder H, van Hoof E (2008). Cognitive behavior therapy for chronic fatigue syndrome from the patient's perspective [Cognitieve gedragstherapie bij het chronische vermoeidheidssyndroom (ME/CVS) vanuit het perspectief van de patiënt] [Dutch]. Medisch Contact. ISBN: 978-90-812658-1-2.

I'm about to read this now:
http://translate.google.com.au/tran...Fdt&rls=org.mozilla:en-GB:official&prmd=imvns

edit - google doesn't seem to be translating the employment related data. :(
 

Dolphin

Senior Member
Messages
17,567
Twisk & Maes cites:
Koolhaas MP, de Boorder H, van Hoof E (2008). Cognitive behavior therapy for chronic fatigue syndrome from the patient's perspective [Cognitieve gedragstherapie bij het chronische vermoeidheidssyndroom (ME/CVS) vanuit het perspectief van de patiënt] [Dutch]. Medisch Contact. ISBN: 978-90-812658-1-2.

I'm about to read this now:
http://translate.google.com.au/tran...Fdt&rls=org.mozilla:en-GB:official&prmd=imvns

edit - google doesn't seem to be translating the employment related data. :(

Don't know if you'd consider this of any use:

https://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0803A&L=CO-CURE&P=R890&I=-3

Majority of ME/CFS patients negatively affected by Cognitive Behaviour Therapy (2) (rough translation of tables, etc)