Leopardtail
Senior Member
What they are saying (I think, in the new paper) is "we have taken some short cuts working out the mean" and "allowing for any error that our short cut introduced, the mean may be between 93.4 and 96.5". So yes, your 'gut hunch' was correct.

So I was right? Or are you saying that the mean is between 93.4 and 96.5, rather than the range of values? My brain hurts... maths always does that to it. My brother got all the maths genes.
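To show what a confidence interval around a mean looks like in practice, here is a toy sketch using made-up sample values (the 93.4-96.5 range in the paper comes from their data, not from this example, and 1.96 is the usual z-value for a 95% interval):

```python
import math
import statistics

# Made-up readings, purely for illustration.
sample = [93.0, 94.5, 95.2, 96.1, 94.8, 95.9, 93.7, 96.4]

mean = statistics.mean(sample)
# Standard error of the mean: sample spread shrunk by sqrt(n).
sem = statistics.stdev(sample) / math.sqrt(len(sample))
# 1.96 standard errors either side gives roughly a 95% interval.
low, high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean {mean:.2f}, 95% CI roughly {low:.2f} to {high:.2f}")
```

The interval is a statement about where the true mean plausibly sits, not about the range of the individual values, which is exactly the distinction being asked about.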
(I thought that reliability was indicated by the 'p='/'p<' value.)
By contrast, the p value means, loosely speaking, 'the probability that our conclusion might be wrong'. The estimation of p values is something that often gets criticised when research is published, since authors tend to treat themselves too kindly.
It's supposed to allow for things like the reliability of blood test results (e.g. the result of a single blood test being accurate to +-10%). It's a way of bringing together all the experimental error and giving an indication of how reliable the source data is collectively, the effect of any missing data, and so on.
E.g. Prolactin and TSH are both stimulated by TRH. TRH cannot be measured accurately without great expense. Low levels of testosterone, or high levels of dopamine, can suppress TSH production.
So if you are gauging hypothyroidism based upon TSH, without measuring the factors that would cause suppression, there is some doubt as to your conclusion. The p value in this scenario would give your estimate of how often something unmeasured might be suppressing TSH, combined with an estimate of how reliable the TSH measurement is itself.
E.g. if TSH is measured to an accuracy of +-5%, and within your sample set 5% have suppression, then the chance your conclusion is right is 95% * 95%, roughly 90%, so your p value is roughly 0.1.
In effect, the p value deals with missing data or with alternative conclusions from the same data.
Standard error, by contrast, deals with the extra error introduced by lazy maths, when averages of averages stand in for one average of everything. That shortcut is only really excusable when the sample is too large to compute reliably in one go (such as an average of millions of large numbers, too many for a computer to handle at once).
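The 'averages of averages' pitfall can be shown with a tiny made-up data set (numbers are mine, purely for illustration): when the chunks are different sizes, the mean of the chunk means is not the true mean.

```python
# Six made-up readings, split into unequal chunks.
data = [10, 10, 10, 10, 20, 20]

true_mean = sum(data) / len(data)  # 80 / 6

# Average each chunk, then average the chunk averages:
chunk_a = data[:4]                 # mean 10.0
chunk_b = data[4:]                 # mean 20.0
mean_of_means = (sum(chunk_a) / len(chunk_a)
                 + sum(chunk_b) / len(chunk_b)) / 2

print(round(true_mean, 2))   # 13.33
print(mean_of_means)         # 15.0 -- biased toward the small chunk
```

Weighting each chunk mean by its chunk size would recover the true mean; it is the unweighted shortcut that introduces the error being described.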
Hope that clarifies it, not exactly easy to explain... with ME brain.