• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

Will Artificial Intelligence solve the riddle of ME/CFS and find an effective treatment?

Osaca

Senior Member
Messages
344
No, not a chance!

In general you could currently see most AI algorithms as black-box tools. But the problem at heart is the input into this black box: we simply don't have any good input. We lack meaningful data, and the data we do have is often neither reliable nor reproducible. Nor do we have an abundance of training data. Without that, AI is relatively useless, and that's the case for ME/CFS; it won't be solved by AI. There's also the general problem of explainability, which is quite important here, since we're trying to explain a disease.

Some AI algorithms will be very useful here and there. Classifiers like random forests, gradient boosting, and ANNs are already used very commonly, but even here it has to be said that advanced statistical (non-AI) methods would often do similarly well; AI tools are just far more accessible. These tools are also nothing new and have been around for quite some time.
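For readers unfamiliar with the classifiers mentioned above, here is a minimal sketch of a random forest at work. Everything in it is synthetic: the two "biomarker" features are made up purely for illustration, not real ME/CFS data, and this is exactly the kind of tabular problem where a plain statistical method would do about as well.

```python
# Toy random-forest classifier on entirely synthetic "patient" data.
# Features and labels are fabricated for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Controls (label 0) and "patients" (label 1) drawn from two
# overlapping 2-D Gaussians, mimicking noisy biomarker readings.
controls = rng.normal(loc=0.0, scale=1.0, size=(n // 2, 2))
patients = rng.normal(loc=1.5, scale=1.0, size=(n // 2, 2))
X = np.vstack([controls, patients])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Note that the classifier only finds structure that is already present in the input distributions; with unreliable or non-reproducible input, no amount of model sophistication helps, which is the point being made above.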

Finally, with the hype currently surrounding ChatGPT, it has to be said that LLMs probably won't do much or anything for us.

You're saying that scientists can't solve ME/CFS, but why is that the case? In the majority of ME/CFS research we still have extremely basic problems: diagnostic criteria being wrongly used (Fukuda, psychological criteria, etc.), psychologisation of a physiological disease, far too small sample sizes, a lack of measurements, and non-reproducible results. For the biggest part these are all problems of lack of funding and medical neglect. Our problems are far more fundamental than the tasks AIs handle. Do you think that if an AI suggests we should invest more into ME/CFS research, that will suddenly happen? We know more investment should take place; there's no lack of knowledge on this, just a lack of action.
 
Last edited:

hapl808

Senior Member
Messages
2,178
I think it's unlikely, but I'm still hopeful.

GPT4 is amazing, but I worry that LLMs are limited by their datasets, and if there's not enough good ME/CFS data and trials, it's unlikely one will make that leap. However, I'm hopeful because it's made other leaps that were unexpected.

If you asked me two years ago when we would have an AI that could pass the bar exam, talk in natural language, understand people's intention in questions - I would've said maybe 50 years, maybe never. Many in AI were saying the same.

So while there's a ton of undeserved hype these days about AI (because newspapers like clickbait), there are also people who are too dismissive of AI - like saying ChatGPT is nothing unusual or impressive.

While there is nothing inherently 'magical' about tree ensembles or gradient descent or neural nets, to say that LLMs won't do much or anything is too pessimistic. The emergent capabilities of LLMs have taken even most experts in the field by surprise. I don't agree when I hear some say, "Meh, if you followed the field and read the transformers paper, none of this would surprise you." Yet it seems to have completely surprised Chris Manning, Melanie Mitchell, Geoffrey Hinton, and many others.

I've already found GPT4 more useful than 100% of the physicians I've seen over the years.

I can ask it about microglial activation and medications or herbs that might affect that. I can ask it about pathways, BDNF production, neuroprotective herbs. Anyone try doing that with their physician?

Yes, it's not right 100% of the time, but it's a fantastic sounding board or jumping off point for research. People who entirely write off an AI that gives accurate answers 90% of the time on certain things seem weirdly dismissive. Two years ago Siri had trouble checking the weather, yet now people want to roll their eyes and call it a parrot when it can give a long cogent hypothesis for the roles of BDNF and TrkB in ME/CFS brain fog.

Until human doctors improve in their next firmware update, AI seems like a more useful tool. After 25 years of seeing doctors and having them either 1) do nothing more than I was already doing or 2) do significant damage, maybe I'll give AI a chance.
 

hapl808

Senior Member
Messages
2,178
For the biggest part these are all problems of lack of funding and medical neglect. Our problems are far more fundamental than the tasks AIs handle.

While I agree that the underfunding of ME/CFS research is so bad it's criminal, we don't know if funding will fix things. Alzheimer's research has been far better funded than ME/CFS, but we still don't have a reliable test or biomarker, or really understand the underlying mechanisms.

Again, I'd be hopeful that funding and maybe now more need from Long Covid will change some things, but honestly not that hopeful. After the polio epidemics, Post Polio Syndrome became a huge issue and was heavily funded in the 70's and 80's. They made almost zero progress, and eventually the funding dried up (as most of the patients died off).

HIV is maybe the only illness where funding seems to have had a direct positive result over decades. That was an illness where the biomarkers are clear, and the prognosis was easy to understand.

Even so, we need to increase funding for ME/CFS by a couple orders of magnitude. Like cancer or Alzheimer's, just because something is hard to cure doesn't mean we shouldn't try with everything we have. We can spend $100b on war in Eastern Europe, but we gave up after an ineffectual $1b on this 'new' scourge of Long Covid affecting millions of Americans.

If a society doesn't care about its citizens' health, then what good is the society?
 

Osaca

Senior Member
Messages
344
I've already found GPT4 more useful than 100% of the physicians I've seen over the years.

I can ask it about microglial activation and medications or herbs that might affect that. I can ask it about pathways, BDNF production, neuroprotective herbs. Anyone try doing that with their physician?

Yes, it's not right 100% of the time, but it's a fantastic sounding board or jumping off point for research. People who entirely write off an AI that gives accurate answers 90% of the time on certain things seem weirdly dismissive. Two years ago Siri had trouble checking the weather, yet now people want to roll their eyes and call it a parrot when it can give a long cogent hypothesis for the roles of BDNF and TrkB in ME/CFS brain fog.

Until human doctors improve in their next firmware update, AI seems like a more useful tool. After 25 years of seeing doctors and having them either 1) do nothing more than I was already doing or 2) do significant damage, maybe I'll give AI a chance.

What you are describing here reads like an ode to AI, but to me it's just a description of the neglect we face in the medical community and the errors present in the medical system. Yes, 99% of the doctors we come in contact with are useless or even harmful, but that doesn't mean we should replace them with AI; rather, they should be replaced by useful doctors.

One doesn't have to reinvent the wheel. If we had specialist centers the way they exist for MS, we'd have neither problem 1) nor 2). Of course there's no guarantee of finding a cure, no matter how much you invest in research; some problems are just too complex to solve. But if we don't even try the bare minimum, we'll never find out.

A look into a medical textbook from 1969 would already have shown me that ME/CFS is a physiological disease, and yet that doesn't transfer to the real world. Societal and medical injustices don't just change because devices exist that can run highly computational algorithms.
 

Wishful

Senior Member
Messages
5,811
Location
Alberta
I agree with hapl808 about AI developing more rapidly than I expected. I don't expect AI to solve ME all by itself, but I do see potential for it to process the available data (samples, MRIs, etc) and find connections that humans don't have the patience to look for. Give the AI the ability to ask for more samples (patients who have a comorbid condition, or are taking certain treatments, etc) and it might be able to follow up on minor findings. Tell the AI that a patient found cuminaldehyde to block PEM 100%, and it could review existing knowledge of the biochemistry of that, how it might block PEM, and offer experiments to test those hypotheses.

The real potential for AI is not direct replacement of existing humans or techniques, but the creation of new techniques that aren't practical for humans. An AI could handle vast datasets, or control biochemistry experiments running tens of thousands of microdroplet 'test tubes' in parallel, doing in hours what a traditional human lab would need many years to do. On top of that, AIs can continuously improve their performance, while all too many human researchers refuse to change how they do things. To top even that, AIs don't care about citations, ass-kissing, or all the other unproductive things humans do that hamper actual research.

Whether humans or AIs solve ME first is tricky to call. Will AIs develop even faster than we now expect? Will humans accidentally stumble across the answer, or have one of those rare insights, such as solving the benzene ring problem? Will ME research get more funding than AI research? Will massively parallel experimental equipment be developed for human researchers, or will it be designed for AI use?

Here's an idea: dedicate an AI and a good MRI brain scanner to collecting many scans of an ME brain at multiple times during a day, for several days (before, during, and after PEM), and do the same for a couple of healthy and unhealthy (non-ME) controls. See what the differences are. If MRIs are too expensive, it might still be worth doing with EEGs. Something isn't working right in our brains, so this sort of study might find abnormalities. For those who believe ME involves muscles or guts, similar "collect lots of data for the AI to search through" experiments can be done.

For the near future, I think ME might be solved by a collaboration of humans and AIs; one finds some abnormality, and the other follows up on it, gradually closing in on the root cause.
 

Wishful

Senior Member
Messages
5,811
Location
Alberta
Here's another question: who is more likely to impede the solution of ME: AIs or humans? This might involve leading down a wrong pathway, or wasting limited funding, or discouraging research somehow. I think AIs can go down a wrong pathway, but it takes a real human to really screw things up.
 

Osaca

Senior Member
Messages
344
Here's another question: who is more likely to impede the solution of ME: AIs or humans? This might involve leading down a wrong pathway, or wasting limited funding, or discouraging research somehow. I think AIs can go down a wrong pathway, but it takes a real human to really screw things up.
That could be correct. But then again will AIs ever possess enough stochasticity to make sufficiently large jumps to discover something like penicillin, whilst at the same time remaining efficient? Going down the wrong path is not always a bad idea. Some of our biggest medical discoveries have been due to sheer dumb luck, rather than the analysis of tremendous amounts of data.
 
Last edited:

hapl808

Senior Member
Messages
2,178
But then again will AIs ever possess enough stochasticity to make sufficiently large jumps to discover something like penicillin, whilst at the same time remaining efficient? Going down the wrong path is not always a bad idea. Some of our biggest medical discoveries have been due to sheer dumb luck, rather than the analysis of tremendous amounts of data.

It sounds improbable - but again, two years ago GPT4 was decades away according to most AI researchers. LLMs and reinforcement learning already have more potential than most people thought.

Discoveries like penicillin are few and far between. Medicine is often more notable for what it can't do than what it can do. Antivirals are still relatively ineffective compared to the promise of antibiotics (before resistance reared its head). Antifungals are a difficult area. Alzheimer's is poorly understood, as are many neurological disorders. We usually can't even fix structural issues like a herniated disc, which is relatively simple compared to MS, ME/CFS, Long Covid, PPS, and so forth.

It's not an either/or situation. AI should be an enhancement to human research, not a replacement. But the models need purpose-training, better datasets, more targeted RLHF, and a lot more. I worry that it will happen, but it will be walled off so that only drug research companies have access, and they will pursue only things that are profitable and practical for their business plans. To me, that's a bigger concern than AI just being a dud.
 

Rufous McKinney

Senior Member
Messages
13,462
But then again will AIs ever possess enough stochasticity to make sufficiently large jumps to discover something like penicillin, whilst at the same time remaining efficient?

It seems like you could use examples from history as a way of testing AI's potential.

Could AI have come up with penicillin given what was known at the time? One could do some interesting modeling of these past events, and that could inform how we might integrate AI into our diagnostics and research efforts.

Personally, I think AI may prove essential in highly complex illnesses like ours. I think no single doctor can wrap their brain around it.
 

Murph

:)
Messages
1,800
No, and I also want to BEG people to stop doing research with AI chatbots. They are language models, not fact models. They are purpose-designed to create plausible sentences in response to questions, not true ones.

The other day I saw someone citing a researcher who doesn't even exist, and talking about results of a study that are fictional. It is extremely worrying.

Use Wikipedia and PubMed instead.
 

BrightCandle

Senior Member
Messages
1,161
Some of the very research-focused AIs have already surfaced interesting things that researchers hadn't considered. I think once we get a medical-research AI that has the complete set of all papers, there's a chance that with research effort it might point us to tests we can do, and how to do them, that may one day lead to a full understanding of all diseases. But ChatGPT isn't going to be the way that works, and I hope the focused AIs get to the answer before we get a general AI or a superintelligent AI.

ChatGPT is trained on PubMed and the other paper sites, but it isn't doing a good job of referencing and putting the facts together, due to its design.
 

hapl808

Senior Member
Messages
2,178
ChatGPT is trained on PubMed and the other paper sites, but it isn't doing a good job of referencing and putting the facts together, due to its design.

Is it trained on PubMed? I think ChatGPT's training data is relatively unknown, and it's not purpose-built, unlike models such as BioMedLM (formerly PubMedGPT). My guess is GPT is trained on a very limited subset, and that minimal RLHF is focused on medical-research material.

LLMs likely have a lot more potential, but I doubt that will be the answer itself.
 

hapl808

Senior Member
Messages
2,178
No, and I also want to BEG people to stop doing research with AI chatbots. They are language models, not fact models. They are purpose-designed to create plausible sentences in response to questions, not true ones.

The other day I saw someone citing a researcher who doesn't even exist, and talking about results of a study that are fictional. It is extremely worrying.

Use Wikipedia and PubMed instead.

Also, I have to disagree with this. I've seen these exaggerated warnings elsewhere, and while LLMs should never be a 'source' for anything, they can be a great jumping-off point. I've seen the same warnings about Wikipedia because 'anyone can alter it', so some people say never to use it for research.

Here's Wikipedia for brain fog.
https://en.wikipedia.org/wiki/Clouding_of_consciousness
Here's PubMed:
https://pubmed.ncbi.nlm.nih.gov/36474290/
https://pubmed.ncbi.nlm.nih.gov/36497573/
https://pubmed.ncbi.nlm.nih.gov/33847020/
https://pubmed.ncbi.nlm.nih.gov/34809664/
https://pubmed.ncbi.nlm.nih.gov/34073494/
https://pubmed.ncbi.nlm.nih.gov/31884303/
(And so forth; many PubMed articles aren't even available on Sci-Hub, so only minimal abstracts.)

Here's GPT4 on hypotheses that could explain the mechanisms for brain fog:

Brain fog after mental exertion, sometimes referred to as cognitive fatigue, could be due to various factors. Here are a few hypotheses:

  1. Neurotransmitter Depletion: Mental exertion requires the release of neurotransmitters like dopamine, norepinephrine, and glutamate. During prolonged mental tasks, these neurotransmitters could become depleted, leading to a decrease in cognitive performance and a sense of mental fog.
  2. Glial Cell Activation: Glial cells, including astrocytes and microglia, have been proposed to play a role in mental fatigue. When neurons are highly active, they produce waste products like adenosine, which can activate glial cells. Activated glial cells produce inflammatory cytokines, which can cause a sense of fatigue or mental fog.
  3. Energy Metabolism: The brain is a highly energy-demanding organ, and cognitive tasks increase energy demand further. Prolonged mental exertion could temporarily deplete brain energy reserves, causing a sense of mental fog.
  4. HPA Axis and Cortisol: The Hypothalamic-Pituitary-Adrenal (HPA) axis responds to stress, including mental stress. Chronic or intense mental exertion can lead to sustained activation of the HPA axis and elevated cortisol levels. Cortisol affects many brain functions, including memory and attention, and chronically high levels can impair cognitive function and contribute to a sense of mental fog.
  5. Altered Blood Flow: During periods of mental exertion, blood flow in the brain changes to meet the increased demand for oxygen and nutrients. If the blood flow isn't adequately increased or if it's not directed to the right areas, this could lead to cognitive fatigue or brain fog.
Each of these hypotheses could explain brain fog after mental exertion, and it's likely that multiple factors contribute in any given individual. Furthermore, these mechanisms could interact with each other. For example, neurotransmitter depletion could lead to altered blood flow, or activation of the HPA axis could affect energy metabolism. Further research is needed to fully understand these complex interactions.

It would take me many hours (that ironically I don't really have, because of constant headaches and brain fog) to get to the same place GPT4 gets me in 30 seconds. These are hypotheses, not established facts. Then I can research each one individually once I know what I'm searching for.

Yes, some people use it as 'the answer', but every tool can be used incorrectly. Some people see one article in BMJ that ME/CFS is a psychosocial disease, and then they stop researching because 'it's in a medical journal'.

Caution is always good, but we don't have to throw out the baby with the bathwater.
 

Wishful

Senior Member
Messages
5,811
Location
Alberta
Here's a possibility for a new career: AI assistant. I can imagine humans training to become better partners for AIs: providing human strengths such as imagination, considering possibilities beyond the AI's boundaries (the same thing as imagination?), and having a 'feel' for when the process is going off track (the AI might not realize that nuclear weapons are not an appropriate solution to urban decay or litter). With experience, the humans could also become better at providing that feedback for a specific AI.
 

Wishful

Senior Member
Messages
5,811
Location
Alberta
But then again will AIs ever possess enough stochasticity to make sufficiently large jumps to discover something like penicillin, whilst at the same time remaining efficient?
Eventually, probably. How long that will take is hard to figure out.

The counter to that question is: what new capabilities will AIs have that will lead to discoveries humans are unlikely to make? AIs can have superhuman capabilities in number-crunching, data analysis, resistance to boredom, etc. In mathematics, AIs may be able to solve problems in multidimensional geometry and topology that the human brain simply isn't capable of. We can't rotate a 6-dimensional object in our brains, but AIs could. There might be similar new capabilities for AIs researching medicine.
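The 6-dimensional rotation point can actually be made concrete in a few lines. Here is a toy numerical sketch (nothing ME/CFS-specific): a rotation acting in the plane spanned by two chosen axes of six-dimensional space, with a check that length is preserved, which is the defining property of a rotation in any number of dimensions.

```python
# Rotating a point in 6-dimensional space: trivial for a computer,
# impossible to visualize for a human brain. Illustrative only.
import numpy as np

def plane_rotation(dim, i, j, theta):
    """Rotation matrix in R^dim acting in the (i, j) coordinate plane."""
    R = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = -s
    R[j, i] = s
    return R

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # a point in 6-D space
R = plane_rotation(6, 0, 4, np.pi / 3)          # rotate axes 0 and 4 by 60 degrees
w = R @ v

# Rotations preserve length, however many dimensions are involved.
print(np.linalg.norm(v), np.linalg.norm(w))
```

A full rotation in 6-D is a product of such plane rotations; the machine composes and applies them as easily as a 2-D turn.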

I'd like to add that these new developments in AI are ruining science fiction. I think back to SF stories I've read, and I now think: "Why do they still have humans doing <whatever>?" and "Why is society not changed by AIs doing <whatever>?"
 

Osaca

Senior Member
Messages
344
Eventually, probably. How long that will take is hard to figure out.

The counter to that question is: what new capabilities will AIs have that will lead to discoveries humans are unlikely to make? AIs can have superhuman capabilities in number-crunching, data analysis, resistance to boredom, etc. In mathematics, AIs may be able to solve problems in multidimensional geometry and topology that the human brain simply isn't capable of. We can't rotate a 6-dimensional object in our brains, but AIs could. There might be similar new capabilities for AIs researching medicine.

I'd like to add that these new developments in AI are ruining science fiction. I think back to SF stories I've read, and I now think: "Why do they still have humans doing <whatever>?" and "Why is society not changed by AIs doing <whatever>?"
I’m not sure; some of our most significant findings were pure luck or very big jumps. As far as I know, that doesn't fit the framework of, for example, reinforcement learning. All these algorithms are still not elegant or "intelligent" (whatever that may mean), but rather brute computing force; whether that changes in the future, we'll have to wait and see.

Of course computers can process far more information at a higher pace than individuals can. Furthermore, we can use computational power as if it were basically free (it isn't, since future generations are paying the price for our energy use, but unfortunately that gets ignored). Their speed, memory capacity, and lack of physical constraints dwarf our abilities in those areas, and will do so at an increasing rate.

A majority of automatable tasks could be replaced in the near future. This has been the case for hundreds of years, and it is now happening again at a never-before-seen speed, causing social disruption for which we have to find solutions. Do we find solutions for the people losing their jobs? In the past we haven't been good at that as a society, and the majority of answers go along the lines of "but it also creates jobs", completely oblivious to the fact that a supermarket cashier who's been doing his job for 10 years won't all of a sudden get a degree in computer science and become an AI master.

The question isn't whether AIs can pass university exams, win board games, speak every language of the world perfectly, help in research, write novels, and draw masterpieces: of course they can.

As Kasparov, one of the first people to be humbled by computers, put it: a grandmaster is great, an engine is even stronger, but the combination grandmaster + engine is strongest!

There are uses all over the place where humans and AIs profit from each other, and, since you mentioned mathematics, even in non-applied sciences (https://arxiv.org/pdf/2210.04045.pdf). Even the brightest minds of our time, like Peter Scholze, use computer verification simply to check that their immensely complex and ungraspable theories don't contain flaws. But will AIs ever be able to come up with something like perfectoid spaces? Only time will tell.

Given infinite time we are all dead, and research that only an AI can understand is never the goal in any case. We should aim for good collaboration in the meantime.
 
Last edited:

Wishful

Senior Member
Messages
5,811
Location
Alberta
research that only AI can understand is never the goal in any case.
Oooh, now that's a good subject for SF authors to tackle: AIs discovering awesome new ideas that only apply to AIs. Maybe AIs could apply quantum communication to 'view' very distant places, such as planets elsewhere in the galaxy, or link to creatures' brains there. Maybe AIs could link themselves into multidimensional communal minds that far surpass anything humans could imagine.
 