• Welcome to Phoenix Rising!

    Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of, and finding treatments for, complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia, long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.

    To become a member, simply click the Register button at the top right.

Did not realize you can use AI this way

Forummember9922

Senior Member
Messages
183
Did not realize you can use AI this way.

I asked GPT for CFS patient experiences from forums about trying glutamine.

(The glutamine is irrelevant here)

What was cool is that I then asked for a tally:

[Attached image: glut.png]


Could be useful on occasion
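A quick way to sanity-check an AI-generated tally is to count the mentions yourself. Here's a minimal Python sketch of that idea - the keyword lists and excerpts are made-up examples for illustration, not real forum data:

```python
# Toy sanity check for an AI-generated tally: classify pasted forum
# excerpts as positive, negative, or unclear by simple keyword matching.
POSITIVE = ("helped", "improved", "better")
NEGATIVE = ("worse", "crashed", "no effect")

def tally(excerpts):
    counts = {"positive": 0, "negative": 0, "unclear": 0}
    for text in excerpts:
        lowered = text.lower()
        pos = any(word in lowered for word in POSITIVE)
        neg = any(word in lowered for word in NEGATIVE)
        if pos and not neg:
            counts["positive"] += 1
        elif neg and not pos:
            counts["negative"] += 1
        else:
            # No keywords, or conflicting keywords: don't guess.
            counts["unclear"] += 1
    return counts

posts = [
    "Glutamine helped my gut symptoms a lot.",
    "Tried it for a month, felt worse each week.",
    "Hard to say, I changed three things at once.",
]
print(tally(posts))  # → {'positive': 1, 'negative': 1, 'unclear': 1}
```

Obviously crude compared to reading the posts, but it shows why an AI count should be spot-checked: keyword matching (which is roughly what a skim does) misclassifies anything ambiguous.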
 

linusbert

Senior Member
Messages
1,220
Yes, but I would caution you to treat those responses carefully. It's probably working from limited sources, and probably not doing a full forum search - maybe analysing only page 1 of forum threads.
But AI definitely brings good information and helpful analysis.
 

Dysfunkion

Senior Member
Messages
198
I also found the AI search feature in my Brave browser to be very useful when I quickly want to see some basic connections between things. For example, searching "glutamate + (insert other neurotransmitter/body system/etc. here)" gives a quick AI rundown of the notable connections between the two that appear often enough in the research to support a statement about the link. I didn't know it could be used on forums like that too, but when looking for patient experiences I prefer to go to forums myself and search through them; I have a pool of forums I use for cross-referencing. It's no replacement for actual work, but it saves time where more time doesn't need to be wasted.
 

hapl808

Senior Member
Messages
2,191
I had kind of assumed they pasted in the reviews and then asked? Since seven reviews is something you could easily Ctrl-A and then copy and paste.

These LLMs work best with information within the context - I'll often paste in new abstracts and ask questions. Asking for references is much harder - imagine being asked for a reference off the top of your head, with no bookmarks or access to your computer. People sometimes forget that LLMs don't actually have a record of everything they've learned - they've just 'learned' it, in the loose sense that gives machine learning its name.

But in short, it's fantastic for things you've pasted in or general questions. For more specific stuff, it's good to use enhanced versions that are specifically designed for references, etc.
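The "paste it into the context" approach is really just string assembly before the question ever reaches the model. A minimal sketch - the helper function and prompt wording here are hypothetical, not any particular chat tool's API:

```python
# Sketch of "putting the information in the context": instead of asking
# the model to recall a paper from memory, paste the text in and ask
# about it. The resulting string would be sent to whatever chat model
# you use; nothing here is model-specific.
def build_prompt(question, pasted_sources):
    # Number each pasted source so the answer can point back at them.
    numbered = "\n\n".join(
        f"[{i}] {text}" for i, text in enumerate(pasted_sources, start=1)
    )
    return (
        "Answer using ONLY the sources below; cite them as [1], [2], ...\n\n"
        f"Sources:\n{numbered}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What side effects do these reviews mention?",
    ["Review A: mild nausea the first week.", "Review B: no side effects."],
)
print(prompt)
```

Asking the model to cite the numbered sources makes it easy to check each claim against the text you actually pasted, which is the whole point of working in-context.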
 

junkcrap50

Senior Member
Messages
1,358
I had kind of assumed they pasted in the reviews and then asked? Since seven reviews is something you could easily Ctrl-A and then copy and paste.

These LLMs work best with information within the context - I'll often paste in new abstracts and ask questions. Asking for references is much harder - imagine being asked for a reference off the top of your head, with no bookmarks or access to your computer. People sometimes forget that LLMs don't actually have a record of everything they've learned - they've just 'learned' it, in the loose sense that gives machine learning its name.

But in short, it's fantastic for things you've pasted in or general questions. For more specific stuff, it's good to use enhanced versions that are specifically designed for references, etc.
Yes and no. Yes, you're right that it analyzes info you post pretty well. No, in that it does have some ability to reference papers - although it's very hit or miss, and not reliable. For instance, I've often asked what the research says about the relationship between two receptors or two metabolites and asked it to include sources. But yeah, it could be way better. Can't tell if that's from its creators handicapping it and changing it with new versions, or just the nature of it.
Looks like it is able to tap into quoting user posts if you ask for it specifically
Wow, nice, very interesting. I would not have predicted it could do that, because it's been shown to make things up in the past.
 

Wayne

Senior Member
Messages
4,368
Location
Ashland, Oregon
I just did an AI query on "Perplexity". What do you guys think? (I was pretty impressed.)

Some people with ME/CFS are prescribed Low Dose Naltrexone (LDN) by their health care providers to "tweak" the immune system. Other health care providers prescribe Vit. D3 to essentially do the same - "tweak" the immune system. Are there similarities between LDN and Vit. D3 in their effects on the immune system? What would be the advantages and disadvantages of either?


Answer
Low Dose Naltrexone (LDN) and Vitamin D3 are both used to modulate the immune system, but they do so through different mechanisms and have distinct effects.

Mechanisms and Effects on the Immune System​

Low Dose Naltrexone (LDN)​

LDN is an opioid receptor antagonist that, at low doses, is believed to modulate the immune system by increasing the production of endorphins and enkephalins. These endogenous opioids can influence the immune system by interacting with opioid receptors on immune cells, potentially leading to anti-inflammatory effects and modulation of immune responses. LDN has been reported to reduce pro-inflammatory cytokines and increase regulatory T cells, which can help in conditions characterized by chronic inflammation and immune dysregulation, such as ME/CFS (Myalgic Encephalomyelitis/Chronic Fatigue Syndrome).

Vitamin D3​

Vitamin D3, or cholecalciferol, is a secosteroid hormone that plays a crucial role in calcium homeostasis and bone metabolism. It also has significant immunomodulatory effects. Vitamin D3 influences both the innate and adaptive immune systems. It enhances the pathogen-fighting effects of monocytes and macrophages and decreases the production of pro-inflammatory cytokines. In the adaptive immune system, Vitamin D3 can inhibit the proliferation of T cells, promote the differentiation of regulatory T cells, and shift the balance from a Th1 to a Th2 response, which can reduce inflammation and autoimmunity.

Similarities​

Both LDN and Vitamin D3 have immunomodulatory effects and can reduce inflammation. They both influence the activity of T cells and cytokine production, albeit through different pathways. Both treatments aim to restore balance in the immune system, which can be beneficial in conditions like ME/CFS where immune dysregulation is a feature.

Differences​

  • Mechanism of Action: LDN works primarily through opioid receptors and the modulation of endogenous opioids, while Vitamin D3 acts through the Vitamin D receptor (VDR) and influences gene expression related to immune function.
  • Pathways Affected: LDN primarily affects opioid-related pathways and has a direct impact on endorphin levels, whereas Vitamin D3 affects a wide range of immune cells and pathways, including those involved in calcium metabolism and bone health.

Advantages and Disadvantages​

LDN​

Advantages:
  • Can reduce chronic pain and inflammation.
  • May improve mood and energy levels due to increased endorphin production.
  • Generally well-tolerated with few side effects at low doses.
Disadvantages:
  • Limited research and clinical trials specifically for ME/CFS.
  • Potential for opioid withdrawal symptoms if not managed properly.
  • Effects can vary widely among individuals.

Vitamin D3​

Advantages:
  • Well-researched with established benefits for bone health and immune function.
  • Can be easily monitored through blood tests.
  • Generally safe with a low risk of toxicity when used appropriately.
Disadvantages:
  • Requires regular monitoring to avoid hypercalcemia and other side effects.
  • The effectiveness can be influenced by factors such as sunlight exposure, dietary intake, and individual absorption rates.
  • May not provide immediate relief of symptoms as it works gradually to modulate the immune system.
In conclusion, both LDN and Vitamin D3 offer potential benefits for modulating the immune system in ME/CFS patients, but they do so through different mechanisms and have distinct profiles of advantages and disadvantages. The choice between them should be based on individual patient needs, existing medical conditions, and the guidance of a healthcare provider.
 

Slushiefan

Senior Member
Messages
117
Thanks for sharing this!

This type of query does seem like what AI is designed to do - that is to separate the wheat from the chaff quickly (more quickly than a person can do).

The negative side is that AI is often incorrect in its assumptions - meaning that if you find something statistically significant, you still have to dig through that to manually confirm the assessment.

AI is good for identifying absolutes - things that do almost nothing at all, things that are contested (think of something that works about half the time), or things that work almost every time. Outside of that, manual research is necessary. So it is about as useful as a very basic intern or an uneducated assistant. It can't replace an expert, a scientist, a professional, or even a hobbyist.

I am willing to bet any person in these forums could do a better job given the general patient level of knowledge about our condition.
 

hapl808

Senior Member
Messages
2,191
Yes and no. Yes, you're right in that it does analyze info you post pretty well. No, in that it does have some ability to reference papers - althought it's very hit or miss, and not reliable. For instance, I've often asked for what's the research and science between two receptors / two metabolites and asked to include sources. But yeah, it could be way better. Can't tell if that's from its creators handicapping it and changing it with new version or just the nature of it.

Agreed. But I wasn't saying it has no ability to reference papers - just that LLMs do not 'remember' everything they've ever seen. People sometimes seem to think that an AI remembers everything it's seen, but that's not how training works.

So if a paper is important and frequently mentioned, the model may 'remember' it in much more detail. But that's my point - since references to specific papers are unreliable, that's an area where it can't be relied on.

It will rarely get a receptor function wrong, be unable to explain the Krebs cycle, or be unfamiliar with a common drug, but it will frequently mix up specific papers if you ask about a 1956 study or something. Again, it's not dissimilar to a person: I will remember famous papers, but others only vaguely.

I don't think it's the creators handicapping it - but there are no 'sources' during inference; that's not really how it works. It's not looking anything up, it's using the weights in its 'neural net' to answer.

The negative side is that AI is often incorrect in its assumptions - meaning that if you find something statistically significant, you still have to dig through that to manually confirm the assessment.

AI is good to use to identify absolutes - those things that do almost nothing at all, are in contest (think like something that works about half the time), or work almost every time. Outside of that though, manual research is necessary. So it is about as useful as a very basic intern or an uneducated assistant. It can't replace an expert, a scientist, a professional, or even a hobbyist.

I am willing to bet any person in these forums could do a better job given the general patient level of knowledge about our condition.

I would respectfully disagree with most of that. AI is actually not great for certain types of absolutes (mathematics, etc.), but can be really good in many areas. If you're using the tool in the right way, the 'success rate' is nowhere near as low as 50%. (ETA: Haha, as a faulty human, I misread what you were saying here. But I still disagree - it's more useful than any intern or assistant I've ever had. It's like having a full-time assistant with a Master's degree.)

Of course, manual research is ALWAYS a good idea, whether it's your doctor (especially) or a scientist or forum member.

But saying AI is often incorrect in its assumptions is usually cherry picking very specific areas that it's known to have problems. In my day-to-day, it's probably right more than 90% of the time for how I use it.

Even in areas where it struggles (like math), the other day I asked it to calculate the electrolytes and sugar in 8 oz of V8 Original and 8 oz of Gatorade Lemon Lime. Looking up numbers is an area where it's usually not good, but after a quick mental-math check against the nutrition facts, it looked very accurate.

For instance, I asked AI a very general question:

Is exercise a good treatment for symptoms of major fatigue and 'crashes' after activity?

I think maybe 100% of the 50+ doctors I've seen would get this wrong unless there were further context - like when I already understood how badly exercise affected me and included that in my health history. This is the beginning of GPT-4o's answer:

Exercise can be a beneficial treatment for symptoms of major fatigue and "crashes" after activity, but it depends on the underlying cause of these symptoms. Here are some considerations:

  1. Chronic Fatigue Syndrome (CFS) / Myalgic Encephalomyelitis (ME):
    • For people with CFS/ME, exercise can be a double-edged sword. Graded Exercise Therapy (GET) was once recommended, but many patients found it exacerbated their symptoms. Now, a more tailored and gentle approach, known as pacing, is often advised. Pacing involves balancing activity and rest to avoid post-exertional malaise (PEM).
  2. General Fatigue:
    • Regular, moderate exercise can improve energy levels, reduce fatigue, and improve sleep quality. Activities like walking, swimming, and yoga are often recommended.
  3. Mental Health:
    • Exercise has been shown to improve mood and energy levels, which can be beneficial for individuals experiencing fatigue related to depression or anxiety.

I always warn people to be careful about AI like they would about their doctor or any Reddit post or whatever. But it's not been my personal experience that it's incredibly unreliable. The problem is when you do ask the type of questions it can't answer, it may just make up numbers. So you need to learn the types of questions it can't answer, but beyond that I think it's much better than most people realize.
 

Slushiefan

Senior Member
Messages
117
Agreed. But I wasn't saying it has no ability to reference papers - just that LLMs do not 'remember' everything they've ever seen. People sometimes seem to think that an AI remembers everything it's seen, but that's not how training works.

So if a paper is important and frequently mentioned, the model may 'remember' it in much more detail. But that's my point - since references to specific papers are unreliable, that's an area where it can't be relied on.

It will rarely get a receptor function wrong, be unable to explain the Krebs cycle, or be unfamiliar with a common drug, but it will frequently mix up specific papers if you ask about a 1956 study or something. Again, it's not dissimilar to a person: I will remember famous papers, but others only vaguely.

I don't think it's the creators handicapping it - but there are no 'sources' during inference; that's not really how it works. It's not looking anything up, it's using the weights in its 'neural net' to answer.



I would respectfully disagree with most of that. AI is actually not great for certain types of absolutes (mathematics, etc.), but can be really good in many areas. If you're using the tool in the right way, the 'success rate' is nowhere near as low as 50%. (ETA: Haha, as a faulty human, I misread what you were saying here. But I still disagree - it's more useful than any intern or assistant I've ever had. It's like having a full-time assistant with a Master's degree.)

Of course, manual research is ALWAYS a good idea, whether it's your doctor (especially) or a scientist or forum member.

But saying AI is often incorrect in its assumptions is usually cherry picking very specific areas that it's known to have problems. In my day-to-day, it's probably right more than 90% of the time for how I use it.

Even in areas where it struggles (like math), the other day I asked it to calculate the electrolytes and sugar in 8 oz of V8 Original and 8 oz of Gatorade Lemon Lime. Looking up numbers is an area where it's usually not good, but after a quick mental-math check against the nutrition facts, it looked very accurate.

For instance, I asked AI a very general question:



I think maybe 100% of the 50+ doctors I've seen would get this wrong unless there were further context - like when I already understood how badly exercise affected me and included that in my health history. This is the beginning of GPT-4o's answer:



I always warn people to be careful about AI like they would about their doctor or any Reddit post or whatever. But it's not been my personal experience that it's incredibly unreliable. The problem is when you do ask the type of questions it can't answer, it may just make up numbers. So you need to learn the types of questions it can't answer, but beyond that I think it's much better than most people realize.
Respectfully - I appreciate the input.

AI is a new field, and not everyone's experience will be the same. There is also the question of prompting - how the questions are posed by the user - which will also affect the output.
 

Forummember9922

Senior Member
Messages
183
Yes and no. Yes, you're right that it analyzes info you post pretty well. No, in that it does have some ability to reference papers - although it's very hit or miss, and not reliable. For instance, I've often asked what the research says about the relationship between two receptors or two metabolites and asked it to include sources. But yeah, it could be way better. Can't tell if that's from its creators handicapping it and changing it with new versions, or just the nature of it.

Wow, nice, very interesting. I would not have predicted it could do that, because it's been shown to make things up in the past.
I only half-experimented with having it actually quote posts. So don't... quote? me on that.
Errr..

+1 to receiving misinformation at times. More than once I've asked whether a medication increases or decreases XYZ, and it will say "It actually can do both". Then I will say "Provide your sources that it can do both", and it will apologize and say just kidding.

Looks like it can also make colorful pie charts of forum topics as well. Maybe in a year or two they will work the kinks out better. Wishful thinking.
 

hapl808

Senior Member
Messages
2,191
Respectfully - I appreciate the input.

It's a field I'm very interested in, and I would be more involved if I didn't have constant health constraints. I'm glad when people are open to it, as the media is painting a very twisted narrative - wrong in many ways, as they often are.

I would say for difficult differential diagnoses that depend on visual cues, physicians are far better than LLMs - and LLMs are not designed for medicine. Sadly, for fields like ours full of human biases and cruelty, I find LLMs outperform human professionals 99% of the time. More the fault of the humans than just credit to the AI.

I only half-experimented with having it actually quote posts. So don't... quote? me on that.
Errr..

+1 to receiving misinformation at times. More than once I've asked whether a medication increases or decreases XYZ, and it will say "It actually can do both". Then I will say "Provide your sources that it can do both", and it will apologize and say just kidding.

Looks like it can also make colorful pie charts of forum topics as well. Maybe in a year or two they will work the kinks out better. Wishful thinking.

Often this is because medications genuinely *can* both increase and decrease XYZ.

Again - one very important thing to note is the LLM is not using sources. It does not know how it knows things. It just does. When you ask it to explain, it is starting over and coming up with a rational explanation for how it knew (which often is correct…but not always). It also doesn't know how it came up with the rational explanation. :)

Now people say that's all bullshit - and they're not wrong. But if the bullshit is usually still the correct answer - isn't that similar to how humans work? We come up with answers, then our rational brain kicks in and justifies them.
 

vision blue

Senior Member
Messages
1,903
There's been a new feature added in the last couple of days to ChatGPT - there are now hundreds of specialized GPTs to choose from. There's one called Scholar, which adds access to Google Scholar. Unclear if that means it can cite its sources accurately.

For now I'm mostly sticking to generic ChatGPT-4o, which is so much more useful than 3.5 - or perhaps just better at hiding its errors.
 

hapl808

Senior Member
Messages
2,191
The specialized ones have been available to paid users for a while. Right now GPT-4o is the best free OpenAI model, although I think GPT-4 (which is confusingly only available to paid users) tends to be more reliable.

Claude Sonnet is also freely accessible and can be quite good, along with some other models.
 