Welcome to Phoenix Rising!
Created in 2008, Phoenix Rising is the largest and oldest forum dedicated to furthering the understanding of and finding treatments for complex chronic illnesses such as chronic fatigue syndrome (ME/CFS), fibromyalgia (FM), long COVID, postural orthostatic tachycardia syndrome (POTS), mast cell activation syndrome (MCAS), and allied diseases.
https://www.sciencedirect.com/science/article/pii/S0944711319302284

So the problem is activated mTOR, which leads to phosphorylation of ATG13 and thereby impairs autophagy.
The question is how to stop mTOR being activated.
For some reason, I don't know if he sees these. You tagged him in the Prusty thread too, and he hasn't responded yet.

@HTester - if you can share, any thoughts on this?
Reminds me of a scene from Interstellar where TARS says: "Absolute honesty isn't always the most diplomatic nor the safest form of communication with emotional beings."

After testing ChatGPT extensively for my medical science searches in recent months, I've found it often completely fabricates answers. And if you ask it to provide sources for the statements it makes, ChatGPT will often make up the title of a non-existent study and provide that as a reference! So any results have to be taken with a large pinch of salt.
Bing Chat (https://bing.com/chat) is a much better AI chatbot to use for medical research. It always provides references for the statements it makes, and those references usually support the statements.
There is now a browser extension you can use which allows you to use Bing Chat on browsers other than Microsoft Edge.
What do the people behind the claims want you to miss? There are lots of claims that focus on some logical-sounding theory but ignore common observations that contradict the claim: "Drug X theoretically should reduce this symptom!!!" However, many people with that disease have taken that drug and showed no reduction, which pretty much disproves the theory.

What am I missing?
It seems like Bing Chat uses GPT-4:
source: "We are happy to confirm that the new Bing is running on GPT-4, which we've customized for search."
source: "Edit: Microsoft confirmed they're using a smaller model for Balanced, but Creative and Precise remain on the full GPT-4 Prometheus model. (Additional edit: They did NOT say anything about it being GPT-3, and it is quite possibly just a cut-down GPT-4 or another LLM.) This is by design, and there is nothing to worry about. The CEO of Bing has made it clear that the different chat modes can offer different experiences: Balanced for fast and simple responses, Precise for more grounded responses, and Creative for more detailed and expressive responses." https://www.seroundtable.com/bing-chat-modes-35069.html