
Landmark Study Confirms Chronic Fatigue Syndrome Is 'Unambiguously Biological'

Booble

Senior Member
Messages
1,464
There's no point in asking that question until there's proof--or even plausible evidence--that there is such a thing as a "soul".


False. In neural networks, you don't load programs, you let the hardware train from inputs, so the "software" does arise from hardware. If you train two networks with the same set of inputs, hardware variations could lead to completely different connections and weights, forming two different "personalities" so to speak, which will respond to new inputs differently. If you started with two identical networks, but a cosmic ray flipped one bit in one network, you might end up with quite different final networks.

Human brains can't be perfectly modeled (yet), but that doesn't mean that imperfect models or analogies can't have value. Human minds depend on human brain cells, and if you change the operating characteristics of those cells, you will change the mind (thoughts, perceptions, etc) too. If ME changes the operating characteristics of some brain cells, the subject might experience pain sensations, or oversensitivity to inputs, or lethargy and brainfog. That software or neural networks depend on the underlying hardware is a valid result of even a simplistic computer model. Can you argue that the mind is completely unaffected by the hardware? Mind-altering drugs (which change the operating characteristics of cells) are counterevidence.


I agree with all of this.
I get Wabi's point too, but generally speaking I do think it's helpful to have simple analogies for complex issues. For me, and I think for others, it helps to grasp that we have bodily parts (including chemicals), we have conscious thoughts, and we have unconscious thoughts, and they all interplay. It's not identical to a computer but it's pretty darn close and something that makes sense for the average person to better understand mind-body connections.

And, yeah, for me the "soul" is just a human construct in order to try and make sense of life.
 

hapl808

Senior Member
Messages
2,117
False. In neural networks, you don't load programs, you let the hardware train from inputs, so the "software" does arise from hardware. If you train two networks with the same set of inputs, hardware variations could lead to completely different connections and weights, forming two different "personalities" so to speak, which will respond to new inputs differently. If you started with two identical networks, but a cosmic ray flipped one bit in one network, you might end up with quite different final networks.

Yes, but we're discussing how specific deep learning algorithms work, not how the brain works. We don't know if they're the same, so we're making grand presumptions.

Shall we discuss how two instances of the same algorithm, run on different hardware, should converge on similar but not identical weights? Despite the stochastic nature of gradient descent, both training runs should lead to 'similar' outcomes and training loss, unless one gets caught in a local minimum, in which case the two models could drastically diverge. And you can purpose-build inference chips that are wildly more efficient than chips that need to be multi-purpose and function for both training and inference.
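Here's a toy sketch of what I mean (hypothetical Python/NumPy - the net, data, and hyperparameters are all made up for illustration, not anyone's real setup):

```python
# Train the same tiny network from two different random initializations.
# Expectation: similar final losses, but very different final weights.
import numpy as np

def train(seed, X, y, hidden=8, lr=0.5, steps=5000):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))   # input -> hidden
    W2 = rng.normal(0, 0.5, (hidden, 1))            # hidden -> output
    for _ in range(steps):
        h = np.tanh(X @ W1)                  # forward pass
        err = h @ W2 - y                     # prediction error
        W2 -= lr * h.T @ err / len(X)        # gradient descent on MSE
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
    return W1, float(np.mean(err ** 2))

# XOR, with a constant bias column prepended to the inputs.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1a, loss_a = train(seed=1, X=X, y=y)
W1b, loss_b = train(seed=2, X=X, y=y)
print(f"final losses: {loss_a:.4f} vs {loss_b:.4f}")        # usually both small
print(f"weight distance: {np.linalg.norm(W1a - W1b):.2f}")  # usually large
# A run that got stuck in a bad local minimum would show up as a much
# higher final loss than its twin.
```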

How is this related to mind-body?

It's not identical to a computer but it's pretty darn close and something that makes sense for the average person to better understand mind-body connections.

To use @Wishful 's turn of phrase - false.

As discussed earlier in this thread, it is dangerous to use an analogy to explain a system we don't understand. This is wonderful copium, but it may be completely inaccurate to what's actually happening. Is ME/CFS the brain's broken 'software', or is it mitochondrial dysfunction, or a wildly dysregulated immune system? We don't understand these things yet, and we're not trying nearly hard enough.

I can say that the gods have moods, and these moods are cyclical. Half the year they are angry and the world is cold; half the year the gods are happy and the world is warm. This is an easy-to-understand metaphor - but it may not have any connection to reality, despite explaining things quite well. However, you should do a dance to appease the gods - and for only $399, I can choreograph a personalized dance to appease the most relevant gods for your crops.

Neural nets were designed partly based on a 50-year-old idea of how the brain worked. So yes - they are very 'close' to how we thought the brain worked in the 1950s, because we literally designed neural nets based on that perception.

This does not mean it is correct, or that broad ideas can be drawn about the 'operating system' or the software being broken or whatever. And @wabi-sabi 's point was relevant, because all of the discussion about hardware and software prior to about 2022 was not referring to machine learning, except in academia.

Shoehorning something in doesn't make it correct; it just shows rhetorical skill.
 

Booble

Senior Member
Messages
1,464
Is ME/CFS the brain's broken 'software', or is it mitochondrial dysfunction, or a wildly dysregulated immune system? We don't understand these things yet, and we're not trying nearly hard enough.

We don't have to understand which part of the system is broken for the analogy to stand. With our computers, we often don't know whether it's the hardware, the OS, the software, or the power supply that's broken either. Just because we don't know which part is broken doesn't negate the analogy.

Oh, how many times I have cursed Bill Gates because a bug in the OS caused a problem in the hardware. "Why?" I would say. "How can software break hardware?!"

We don't have to know what is malfunctioning for the analogy to help provide a better sense of how our bodily systems work. In fact, I think you could argue that thinking of it this way helps us get closer to the answers. Recognizing that there is a software piece impacting the hardware, and that the hardware is also impacting the software, is a huge step compared to thinking we are just a body of parts and that our conscious and unconscious minds don't play any role.

No analogy is a perfect match but despite the flaws that you mention, I think it's more helpful than not.
 

hapl808

Senior Member
Messages
2,117
If ME changes the operating characteristics of some brain cells, the subject might experience pain sensations, or oversensitivity to inputs, or lethargy and brainfog. That software or neural networks depend on the underlying hardware is a valid result of even a simplistic computer model. Can you argue that the mind is completely unaffected by the hardware? Mind-altering drugs (which change the operating characteristics of cells) are counterevidence.

These are all interesting hypotheses. None are explanations, beyond a rudimentary metaphor.

I don't think anyone argues that the mind is unaffected by the body. Or vice versa. To further my previous analogy since we're doing that - no one argues that crops are unaffected by the changing weather.

Now why those things are happening, and how we affect them - that's a whole 'nother question.

If you saw off a limb while wide awake, it will absolutely affect your mind. No matter how well you concentrate or meditate. This does not mean your mind is 'causing' this problem in any normal sense of the word, even though all pain may route through the brain.

It's actually harder to prove the reverse, but I think most will accept that if you mentally torture someone for a year, you will likely see a physical manifestation of that.

But this is all difficult because we understand many aspects of the body, but the mind is much harder to define, let alone understand. Therefore I think it's a copout to say, "The thing we least understand is clearly the source of the problem, and now let me explain a completely unproven way to solve that problem."
 

hapl808

Senior Member
Messages
2,117
We don't have to know what is malfunctioning for the analogy to help provide a better sense of how our bodily systems work.

I would argue we need to know exactly that.

Are the gods angry at us? If so, that might imply a different solution than a high pressure system. I can understand the mood of the gods much more easily than I can understand how a high pressure system arises.

I don't understand the immune system, but I understand computer architecture quite well. That doesn't mean it's an appropriate metaphor for the thing I don't understand.


[Attached image: "Looking under the lamppost" comic]
 

hapl808

Senior Member
Messages
2,117
We don't have to understand which part of the system is broken for the analogy to stand. With our computers, we often don't know whether it's the hardware, the OS, the software, or the power supply that's broken either. Just because we don't know which part is broken doesn't negate the analogy.

To fix a computer, we absolutely need to understand all these things. If we don't, we cannot fix the problem.

This is the difference between effective troubleshooting and the person who eventually just buys a new computer because they don't know what's wrong. I would support that, but I can't figure out how to do that with my body.

In fact, I think you could argue that thinking of it this way helps us get closer to the answers.

Absolutely agree here - this is why metaphors are very helpful when we are formulating hypotheses. I can imagine that the gods are angry, try some rain dances, and see if that helps. If not, maybe the dances were bad, or maybe my hypothesis was incorrect.

When these hypotheses become accepted dogma (FND, SSD), then patients get hurt. Sometimes badly.
 

wabi-sabi

Senior Member
Messages
1,492
Location
small town midwest
False. In neural networks, you don't load programs, you let the hardware train from inputs, so the "software" does arise from hardware. If you train two networks with the same set of inputs, hardware variations could lead to completely different connections and weights, forming two different "personalities" so to speak, which will respond to new inputs differently. If you started with two identical networks, but a cosmic ray flipped one bit in one network, you might end up with quite different final networks.

Here you've clearly demonstrated the problem with reasoning from analogy. You're starting to argue about how a neural net works and get distracted by that, when what we are trying to discover is how the mind works.

When we explain A by saying it is like B, and then get hung up on B, we are no longer explaining A. We are arguing about B and assuming our arguments will apply to A.

My love is like a red, red rose. Well, a rose photosynthesizes; does my love? No. You won't learn more about your sweetie by discussing plant biology.
 

hapl808

Senior Member
Messages
2,117
Here you've clearly demonstrated the problem with reasoning from analogy. You're starting to argue about how a neural net works and get distracted by that, when what we are trying to discover is how the mind works.

Exactly. If we understand the thing we're explaining, then we can understand the limits of the analogy. If we don't understand, then the analogy becomes the explanation.

A nuclear detonation is like a very big bomb. But if we didn't understand nuclear fission, then 'big bomb' alone wouldn't be enough to explain nuclear physics.
 

wabi-sabi

Senior Member
Messages
1,492
Location
small town midwest
Or another way of looking at it is that we've built neural nets based on how we think the brain works. We can't then turn to the neural net to prove how the brain works, because it's not independent evidence. That castle is still built on air.

Here's MIT's explanation: https://news.mit.edu/2022/neural-networks-brain-function-1102

A few relevant quotes:

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.
 

Wishful

Senior Member
Messages
5,750
Location
Alberta
As discussed earlier in this thread, it is dangerous to use an analogy to explain a system we don't understand.

There is certainly potential for danger. There is also potential for valid use. Most tools have potential dangers to go along with their valid uses. It's silly to ban tools just because there is a possibility of misuse.

I'm not suggesting that a computer model, or even a neural net model, should be used to model all brain dysfunctions. However, for a limited problem, such as "can cellular dysfunction affect the mind?", it seems a valid model. A faulty fuel pump will affect a car's performance, and a faulty mitochondrion in an astrocyte will most likely affect the astrocyte's performance, which in turn affects the brain's performance, and thus the mind. Just because the model doesn't also apply to understanding love doesn't make it invalid for "hardware affects function". The danger is when someone assumes that since a model works for problem x, it will therefore also apply to problem y.
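To make "hardware affects function" concrete, here's a minimal sketch (hypothetical Python/NumPy - random weights standing in for a trained network, not a brain model): damp one unit's response, a stand-in for a cell with a faulty mitochondrion, and the overall output shifts even though every other part is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (10, 10))   # stand-in for a trained network's weights
w_out = rng.normal(0, 1, 10)
x = rng.normal(0, 1, 10)         # a fixed input

def output(damage=1.0):
    h = np.tanh(W @ x)           # "cell" activations
    h[3] *= damage               # unit 3 is our faulty "astrocyte"
    return float(w_out @ h)

print(f"healthy system:  {output(1.0):+.3f}")
print(f"one faulty cell: {output(0.2):+.3f}")  # same input, shifted output
```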

In this thread, the model of software loaded on desktop computers does not apply to neural networks, which don't use logical programs. Is ChatGPT a valid model to use for testing theories of ME? My answer is "definitely not", because the underlying hardware is so different (ChatGPT doesn't have an immune system, for one difference). However, if you dropped the bus voltage for ChatGPT's hardware, you would get random failures in transistor switching, which would cause noticeable symptoms. Dropping the voltage could result in something that matches brainfog, but that does not prove that the model is correct for human brainfog, much less any other symptom.
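A toy version of that experiment (hypothetical Python/NumPy, obviously nothing like real datacenter hardware): flip random bits in a weight matrix - the same mechanism as the cosmic-ray example earlier - and the output drifts further from the clean result as corruption becomes more likely.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (64, 64))   # stand-in for trained weights
x = rng.normal(0, 1, 64)
clean = np.tanh(W @ x)           # the "healthy" output

def flip_bits(weights, p, rng):
    """Flip each bit of each float64 weight with probability p."""
    bits = weights.copy().view(np.uint64)
    for b in range(64):
        mask = rng.random(bits.shape) < p
        bits[mask] ^= np.uint64(1 << b)
    out = bits.view(np.float64)
    # flipped sign/exponent bits can produce inf/nan; zero those out
    return np.nan_to_num(out, nan=0.0, posinf=0.0, neginf=0.0)

for p in (1e-6, 1e-4, 1e-2):
    noisy = np.tanh(flip_bits(W, p, rng) @ x)
    print(f"bit-flip prob {p:g}: mean output error {np.abs(noisy - clean).mean():.3f}")
```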

Models have their use, and potential for abuse/misuse.