We'll have all the answers about CFS after EHPS psychology conference in Sept ... or maybe not!

Art Vandelay

Senior Member
Messages
470
Location
Australia
I am relatively untrained in statistics and evidence-based approaches. Yet it's easy to see that most doctors know even less. Most of the platitudes I hear are just wrong. Someone trained in evidence-based practice would not make these mistakes. I suspect evidence-based medicine is dumbing down doctors.

As a part of my economics degree, I studied four years of statistics and econometrics. Friends studying science degrees did about two weeks of basic statistics for their whole degree. A friend who was studying medicine only had to do a few hours.
 

Woolie

Senior Member
Messages
3,263
My brain is not up to details of statistics at this point, but maybe one of our statisticians will chime in here. All I remember (damned cognitive dysfunction) of that stuff is that there's a whole complex set of decisions and requirements to work through to figure out what number of samples is needed in a particular situation in order to achieve statistical significance in your results. Some of it involved the nature of the members of the sample. Sheer numbers aren't it.
Yea, this is pretty much it. You probably even remember it's called a power analysis, and the basic idea is that you decide how many participants you're going to need for a study based on what you already know about the size of the effect you're looking for. 350 is HUGE for any experimental study (maybe not for a purely survey-based study, where lots of people fill out questionnaires, but for pretty much anything else).
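To make that concrete, here's a minimal power-analysis sketch in Python using the statsmodels library; the effect size, alpha, and power values below are conventional illustrative defaults, not numbers from any particular study.

```python
# Minimal power-analysis sketch for a two-group t-test design.
# Effect size, alpha, and power are illustrative conventions only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Cohen's d = 0.5 is the conventional "medium" effect; alpha = 0.05 and
# power = 0.8 are the usual defaults in the clinical literature.
n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Per group, medium effect: {n_medium:.0f}")   # ~64

# A small effect (d = 0.2) needs far more participants:
n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Per group, small effect: {n_small:.0f}")     # ~394
```

For a medium effect, two groups of about 64 (roughly 128 in total) already suffice, which is why 350 participants looks generous for an experimental study.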
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
As a part of my economics degree, I studied four years of statistics and econometrics. Friends studying science degrees did about two weeks of basic statistics for their whole degree. A friend who was studying medicine only had to do a few hours.

Yes, we did f'all statistics in my undergraduate science degree (physical sciences). Basically some pottering about in Excel. Nothing deep.
 

SOC

Senior Member
Messages
7,849
Yes, we did f'all statistics in my undergraduate science degree (physical sciences). Basically some pottering about in Excel. Nothing deep.
For my engineering degrees, we had at least 2 statistics classes and design of experiments as a significant part of all our lab courses. Certainly statisticians and economists had more statistics courses than we did. Our profs tried to teach us the limits of our statistical understanding so that we would know when we could rely on our limited knowledge and when we needed to seek out experts in statistics.

It's unfortunate you didn't get more stats as part of a physical science degree, but I'll bet (I hope, anyway) that you were taught quite a bit about the proper design of experiments -- ya know, stuff like not changing your outcome measures mid-study, eliminating personal bias, developing a falsifiable hypothesis, using objective measures... stuff like that. :)
 

JaimeS

Senior Member
Messages
3,408
Location
Silicon Valley, CA
[Also has physical science degree.....]

It's unfortunate you didn't get more stats as part of a physical science degree, but I'll bet (I hope, anyway) that you were taught quite a bit about the proper design of experiments -- ya know, stuff like not changing your outcome measures mid-study, eliminating personal bias, developing a falsifiable hypothesis, using objective measures... stuff like that.

....I took one stats course where the prof read from the textbook at a podium, really fast. I learned more in my HS statistics class than from him.

....And I learned about experimental design when I started to teach it to my kids. I found lots of resources, did research... if I hadn't become a teacher I would have no clue about fallacious reasoning, personal bias... though I think the hard sciences did teach me the way of thinking necessary to spot a hole in an argument.

Rutgers has a good rep, but they taught me more 'facts' than how to actually become a good scientist.

-J
 

jimells

Senior Member
Messages
2,009
Location
northern Maine
My statistics course in the School of Hard Knocks is pretty basic. Is my bank account > zero? Is it likely to stay that way until the next check?

In the public sphere, statistics mostly seem to be about dazzling the uninformed and obfuscating reality. Government economic statistics readily come to mind. The media reports on these numbers are simply appalling and almost completely devoid of meaning. "Employment is up x%" - compared to what? Last year, last month, last week? What is the margin of error? How is "Employment" defined? These questions are almost never answered.

When I'm attempting to decipher research abstracts, I often wonder about the statistics. Do the numbers presented really mean anything? Are they present just because it is expected? With so much faulty research in the literature and my personal ignorance of statistics, it's hard for me to trust any of the numbers.
 

jimells

Senior Member
Messages
2,009
Location
northern Maine
....And I learned about experimental design when I started to teach it to my kids.

Yes, I have found the best way to learn something is to try to teach it. My mom went to the proverbial one-room schoolhouse. This was 1940s rural New Hampshire. I've always found the idea of older students helping the younger ones learn to be appealing. It strikes me as being very anti-authoritarian - anyone can teach others, not just the officially approved authority figures.
 

Valentijn

Senior Member
Messages
15,786
Ugh, @jimells ... I know. Okay, that's it, I'm taking a statistics class. Hey, @Valentijn , didn't you tell me one time you'd taken some free course online you found helpful? What was it?

-J
It was an evil beast which required learning how to use the R programming language. I'd recommend something more basic, unless your language processing centers are far less ravaged by ME than mine are :p If you search for "statistics" on Coursera, something should pop up. The number of hours involved per week is a good indication of difficulty.
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
It's unfortunate you didn't get more stats as part of a physical science degree, but I'll bet (I hope, anyway) that you were taught quite a bit about the proper design of experiments -- ya know, stuff like not changing your outcome measures mid-study, eliminating personal bias, developing a falsifiable hypothesis, using objective measures... stuff like that. :)

Don't get me wrong, we had to report some basic statistics on our data from 2nd year lab pracs onwards...

The thing they seemed to emphasise is that experiments easily go wrong and we shouldn't fudge or force things. If you get null results despite 100 people before you getting positive results, you still have to report you had null results!
 

Snow Leopard

Hibernating
Messages
5,902
Location
South Australia
'Well we don't know what this disease is really about. It must be in the brain, there is NO biomedical proof whatsoever. I mean, those people (=the biomed crew) keep SAYING there is, but really there isn't. They have nothing to base anything on. Nothing at all.'

This leads into the tu quoque argument - where is the evidence that it is a central disorder of the brain, rather than an as-yet uncharacterised biomedical disorder?

The brain imaging stuff is very non-specific or non-sensitive (e.g. there is no test that would work for 95% of patients).
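For anyone rusty on the jargon: sensitivity is the fraction of genuine patients a test correctly flags, and specificity is the fraction of healthy controls it correctly clears. A tiny sketch with hypothetical counts, not taken from any real imaging study:

```python
# Sensitivity/specificity sketch; all counts below are hypothetical.
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of genuine patients the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of healthy controls the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# A test that misses 40 of 100 patients is nowhere near "works for 95%":
print(f"sensitivity = {sensitivity(true_pos=60, false_neg=40):.0%}")  # 60%
print(f"specificity = {specificity(true_neg=85, false_pos=15):.0%}")  # 85%
```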

All I can say is less talk, more action (science)...

Still, I heard someone say today (a doctor in training), "a study with 350? What do they think that proves? Nothing, that's what it proves. It's a joke. A joke." (He was talking about the state of Lyme treatment/diagnosis.) While I agreed, I began to wonder a bit what is considered 'significant'. I have a line in my own head, but maybe I've drawn it arbitrarily. Okay, you science people: weigh in. How many subjects would have to be in a study before YOU took it seriously?

Assuming, please, that this is your only criterion: that all other things are equal.

It all depends on what you are trying to measure, and on the corresponding treatment effect sizes or the sensitivity/specificity of diagnostic tools, etc. Some people have the mistaken belief that larger sample sizes lead to less bias, but this is not true. Larger sample sizes lead to more precision, but not necessarily more accuracy: they just provide tighter statistics - tighter confidence intervals, smaller p values, etc. The quality of a result is not just a matter of how small the p value is; you should also consider the a priori probability of finding a result with that p value by chance, for a given effect size (e.g. multiple comparisons, and effects due to publication bias). A smaller study with a very low p value is exceptional - it suggests either extreme bias or something very interesting (e.g. a very large effect size).
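The precision-versus-accuracy point is easy to demonstrate with a simulation sketch (Python, made-up numbers): if the recruitment procedure is systematically biased, growing the sample only shrinks the confidence interval around the wrong answer.

```python
# Larger samples tighten the confidence interval around a *biased*
# estimate without making it any more accurate. Numbers are made up.
import numpy as np

rng = np.random.default_rng(42)
true_mean = 0.0   # the population value we want to estimate
bias = 0.5        # hypothetical systematic recruitment bias

for n in (50, 500, 5000):
    # Every sample is drawn from the biased population, whatever n is.
    sample = rng.normal(true_mean + bias, 1.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    print(f"n={n:5d}  mean={sample.mean():+.3f}  95% CI ±{1.96 * se:.3f}")

# The mean hovers near +0.5 (the bias) at every n, while the CI
# half-width shrinks from about 0.28 down to about 0.03.
```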

Study design is more important than most doctors realise. It is important to consider how participants were included - is it a randomised population/community-based study? Primary care? Tertiary care? A convenience sample? There are many, many ways to bias a study...
 

Hutan

Senior Member
Messages
1,099
Location
New Zealand
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/

This is a nice article on sample size calculation.

Generally, the sample size for any study depends on the:[1]
  • Acceptable level of significance
  • Power of the study
  • Expected effect size
  • Underlying event rate in the population
  • Standard deviation in the population.
Some more factors that can be considered while calculating the final sample size include the expected drop-out rate, an unequal allocation ratio, and the objective and design of the study.[2]
In reality, it's as much an art as a science.
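As a rough illustration of how those listed ingredients combine, here is the textbook formula for comparing two means, sketched in Python with made-up inputs (this is the standard formula such articles present, not anything specific to the linked paper):

```python
# Textbook sample size for comparing two means:
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
# All input values below are made up for illustration.
from scipy.stats import norm

alpha = 0.05    # acceptable level of significance
power = 0.80    # power of the study (1 - beta)
delta = 5.0     # expected effect size (difference in means)
sigma = 10.0    # standard deviation in the population

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

n_per_group = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
print(f"About {n_per_group:.0f} participants per group")        # ~63

# Inflating for expected drop-out, another factor the article lists:
dropout = 0.20
print(f"With 20% drop-out: {n_per_group / (1 - dropout):.0f}")  # ~79
```

The "art" part is that delta and sigma have to be guessed from pilot data or prior literature before the study is ever run.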
 

JaimeS

Senior Member
Messages
3,408
Location
Silicon Valley, CA
@Hutan - thank you so much! That really clarifies things. I will read the article, but it would be cool if it included a few examples demonstrating what an appropriate sample size would be in different situations. :)

I really would not have thought that there would be an article on this, though I should have known better. Thanks again!

-J
 

Dolphin

Senior Member
Messages
17,567
@dxrevisionwatch on Twitter has highlighted that the abstracts are now available:
http://www.ehps2015.org/files/EHPS2015_Conference_Abstracts_27082015.pdf

I've started five separate threads:
Unhelpful cognitive and behavioural responses are associated with symptoms in adolescents with CFS
http://forums.phoenixrising.me/inde...-with-symptoms-in-adolescents-with-cfs.39605/


Implicit processing of symptom and illness-related information in CFS: a systematic review abstract

http://forums.phoenixrising.me/inde...on-in-cfs-a-systematic-review-abstract.39607/

Health-threatening interpretation of ambiguity early on: risk or protective factor? Comparing CFS/ME and healthy individuals
http://forums.phoenixrising.me/inde...ve-factor-comparing-cfs-me.39608/#post-636047


There's already a thread on the sixth abstract which has already been published as a paper:
The role of the partner and relationship satisfaction on treatment outcome in patients with chronic fatigue syndrome.

http://forums.phoenixrising.me/inde...rom-the-netherlands-the-treatment-of-cfs.3604
 

Dolphin

Senior Member
Messages
17,567
From: http://www.ehps2015.org/files/EHPS2015_Conference_Abstracts_27082015.pdf

Wednesday, 02 September 2015 - Symposiums

Symposium

Fatigue and pain in long-term conditions across the life span

A. Wearden 1 2
R. Moss-Morris 3
T. Chalder 4
H. Knoop 5
J. Menting 4

1 University of Manchester, School of Psychological Sciences, United Kingdom
2 Manchester Centre for Health Psychology, United Kingdom
3 King's College London, United Kingdom
4 King's College London, United Kingdom
5 Expert Centre for Chronic Fatigue, Nijmegen, Netherlands

Aims
The aim of this symposium is to illustrate some of the psychological processes that are related to symptom experience, focusing particularly on fatigue and pain, across a range of long term conditions (diabetes, multiple sclerosis and chronic fatigue syndrome), and in participants at different stages of the life span. Delegates attending the symposium will learn about the cognitions, behaviours and emotional factors that are thought to maintain symptoms of pain and fatigue across conditions, the process of developing a treatment model, and factors which are important in determining the effects of treatment.

Rationale
This symposium is distinctive in that it demonstrates how symptoms across a range of conditions can be understood in terms of common processes, which can in turn inform treatment models.

Summary
The symposium starts with a report of a prospective study (Chalder) which shows how cognitive and behavioural factors maintain fatigue in adolescents with chronic fatigue syndrome. The importance of fatigue related cognitions in the perpetuation of severe fatigue in diabetes is picked up in paper 2 (Menting), which also demonstrates the interrelations between pain and fatigue in this condition. Paper 3 (Moss-Morris) reports on the development of a model explaining pain in multiple sclerosis on the basis of cognitive, behavioural and emotional factors, and shows how this model has informed a self-help intervention. Paper 4 (Knoop) focuses on how interpersonal factors, particularly solicitous responding on the part of a significant other, may predict symptomatic response to treatment for fatigue in chronic fatigue syndrome. Finally, paper 5 (Wearden) reports on associations between sleep problems and fatigue in chronic fatigue syndrome, and shows how improvements in sleep partially mediate the effect of treatment on fatigue.