A systematic review of neurological impairments in myalgic encephalomyelitis/chronic fatigue syndrome using neuroimaging techniques

SWAlexander

Senior Member
Messages
2,148
endocrinologist and not had their concerns taken seriously
This is why I encourage my friends to ask the right questions, which I prepare for them, to insist on proper responses from doctors, and, if necessary, to request specific tests. This is how 9 of them found the right diagnosis and treatment.

linusbert

Senior Member
Messages
1,716
Another discussion had me thinking about how AI could handle the simple, easy cases, leaving doctors with more time to handle the more difficult ones. I'm now wondering if it will be the opposite, with patients going to the doctor, not getting their non-easy problems taken seriously, and having to turn to an AI for more in-depth diagnosing.
i think for AI there are no easy and hard cases. that seems to me to be a human-made distinction to justify not investing the time required to handle a case correctly. they would need to do a search, read books, talk to colleagues... and doctors don't seem to want to do that, or can't, idk... most doctors want to diagnose in 5 minutes, or even believe they can diagnose just by looking at the patient.
AI has no such problems: it has all knowledge available the same second it hears of the problem. basically AI has endless time, and all the knowledge of the world.
in theory.
probably it will go like this: AI will handle 99% of cases, but for some a human doc will weigh in if the AI isn't confident enough, and the human will flip the coin on which path to follow.

what i fear though is that they will build non-thinking medical AIs trained like the large language models, meaning by example: the AI is fed symptoms and cases together with the resulting diagnosis and therapy.
if they do it like this, the AI will make the same mistakes the humans do, because the AI will mimic the erroneous behaviour the humans taught it.
i hope they don't. they should build a truth-seeking, genuinely thinking AI.
I want to share a word of caution. A close friend of mine has a daughter, who’s 44 and living in Germany. Recently, she started taking Luteinizing Hormone
Luteinizing hormone? is this related to lutein? the lutein i was talking about is a carotenoid, an antioxidant required for the eyes.

SWAlexander

Senior Member
Messages
2,148
i think for AI there are no easy and hard cases.
AI can provide accurate answers only when prompted with the right questions.
Without specific data, such as results from a urine test, the AI can only offer general insights.
However, AI can assist in verifying whether a preliminary clinical diagnosis aligns with available information. It's important to remember that any clinical diagnosis should be confirmed through appropriate tests.

linusbert

Senior Member
Messages
1,716
thanks, but what does it have to do with lutein? despite the similarity in name, i don't see any relevance for eye health or why i would need to supplement it.
Without specific data, such as results from a urine test, the AI can only offer general insights.
However, AI can assist in verifying whether a preliminary clinical diagnosis aligns with available information. It's important to remember that any clinical diagnosis should be confirmed through appropriate tests.
i don't think an ai doc would work like that.
you would speak to the AI first for an introduction. then the AI makes a plan for what diagnostics to do. then those things get done, then the AI evaluates again.
probably a medical assistant would feed the data into the ai, or at least watch the whole time and then carry out the diagnostics.

Wishful

Senior Member
Messages
6,403
Location
Alberta
i think for AI there are no easy and hard cases. that seems to me
True. I was using those terms from the human doctor's perspective. By that I meant the ones that can be handled in a few minutes (possibly the wrong diagnosis and treatment, but the disorder isn't serious enough for the wrong one to cause serious harm or risk lawsuits). These are cases that could probably be handled just as well by people who haven't gone through 7+ years of medical training. Cases that nurses could handle without trouble. An elementary school student with a copy of The Idiot's Guide to Medicine might do just as well.

I think AI can handle those simple cases quite easily, with many of them conveniently over the internet/phone. If that doesn't provide satisfactory results for the patient, it would escalate into a more capable AI with human expert collaboration. I look forward to AIs pointing out missing knowledge, such as "what are all the ways that picolinic acid affects body function", focusing research in useful ways.

AI in medicine will likely be opposed by some doctors, since it reduces demand and thus income. Medical insurance companies will probably support AI, for potential cost-savings. An interesting battle ahead.

hapl808

Senior Member
Messages
2,432
AI can do a lot more than people realize, and 'AI' covers many different areas, so it depends what someone means. Some people mean LLMs, others mean various classification algorithms, etc.

My own experience is that even a general-purpose LLM like Claude or GPT is often more thorough and makes better guesses than most physicians. If you have a full health history, try removing the diagnoses, feeding it into GPT or Sonnet, and seeing what you get. I saw that it took one of the leaders of OpenAI five years of seeing specialists to get his wife diagnosed with hEDS (he'd never heard of it). If you literally paste his tweet about her symptoms into ChatGPT, hEDS is usually among the top three diagnoses it suggests looking into. She saw something like 10 specialists before a rheumatologist suggested it.

Classification algorithms show even more promise for the more difficult cases, but we need better data, biomarkers, EHRs, etc.

And beyond that, there are many ways we could leverage AI/ML. Most of the 'fears' about AI are things that are much less serious than the medical gaslighting and malpractice most of us have experienced or experience on a daily basis.

hapl808

Senior Member
Messages
2,432
what i fear though is that they will build non-thinking medical AIs trained like the large language models, meaning by example: the AI is fed symptoms and cases together with the resulting diagnosis and therapy.
if they do it like this, the AI will make the same mistakes the humans do, because the AI will mimic the erroneous behaviour the humans taught it.

AI is already better than non-thinking humans. Weirdly, I worry more that the AI will go, "I have examined the labs and her health history, and ME/CFS seems a likely diagnosis. She should be warned about the dangers of GET and instructed in ways to pace her activity." And the human will say, "Nah, in my clinical experience she just needs to stop focusing so much on her illness and get out of the house."

linusbert

Senior Member
Messages
1,716
you are probably right. i just fear that they might run some metrics and come up with "the ai underdiagnoses psychosomatic patients, we need to fix that"... and then fuck it up like the LLMs with all this political-correctness stuff, where the AI begins to lie to comply.

hapl808

Senior Member
Messages
2,432
you are probably right. i just fear that they might run some metrics and come up with "the ai underdiagnoses psychosomatic patients, we need to fix that"... and then fuck it up like the LLMs with all this political-correctness stuff, where the AI begins to lie to comply.

Yes, this is often the bigger danger. Often the AI responds pretty well, then they put on additional 'safety filters' to make it say what they want, and the model is guided into a different answer.

I'm often surprised how good a job an LLM (not a purpose-built tool) can do for medical questions. Better than most doctors, who seem to spend most of their time screaming about the dangers of AI and little of their time actually treating patients with chronic illnesses.

linusbert

Senior Member
Messages
1,716
it's like they say: AI could lead us into a new golden age... but it could also lead to human extinction... and i guess it will go towards the latter, because human greed, obsession with power, and corruption will, as they already do, also infect AI.

i just watched The 100. the AI decided that in order to save humanity, it had to kill 99% of humanity, and started the terminator scenario. the maker developed the AI to do the best for mankind... not to value life in the process... ooopsie.