
Mind/Brain and ME theorising

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
Jonathan Edwards said:
I would buy into that. Mind could be considered this constantly varied flowing output. I personally try to avoid talking about mind but this seems a good way. I guess my take is that this output has to be an input to something (otherwise why bother to output it) and the real hard problem is working out what it is inputting to. Whatever that is one could call an experiencing subject or, following Descartes, a soul, although this tends to confuse things. Since every stage of neural connection tends to diverge to 10,000 places one would expect there to be about 10,000 such experiencing subjects, and maybe a lot more if there are relays.

Locating the output means locating the cells that send the output signals. People tend to think they are in anterior cortex and maybe something special like dorsolateral prefrontal but nobody has nailed that. What for me is more interesting is locating where the signals co-arrive at the point of experience - which they must do, otherwise there would be no experiences to report. That would be the LCD screen, sort of, except that it is maybe more like the input into the printer that receives all the data for the PDF to be printed?

That seems to me to suggest some sort of second party observer. Perhaps output/display was the wrong analogy. I was suggesting, I think, that experience is the output.
 

Woolie

Senior Member
Messages
3,263
Okay, @Jonathan Edwards, it sounds like we agree for the most part, although I admit I am still a little unsure about the "each individual instance" out-clause. An experience seems to me, by definition, to be a single instance. Unless you're talking about experience in the sense of accumulated knowledge. But then I'd call that accumulated knowledge. Or semantic knowledge.

So maybe we differ in our nomenclature...

Anyway, I look forward to reading your book to learn more!
 

Woolie

Senior Member
Messages
3,263
user9876 said:
In a large interconnected network, when talking about local vs distributed representations, the question becomes one of the strengths of the connections. Analyses of models under learning (such as sensitivity analyses, at least looking at language) showed that the representations did distribute over much of the space. But there were attempts to modularize the models, in large part to reduce the complexity of the learning and make it computationally tractable - at that time it wasn't unusual for the fitting of data to take a week or more, even on a reasonable computer.

What becomes interesting is to see the distributed representations as points in high-dimensional vector spaces. There was a theory that similar representations would cluster around similar points within this space, and that processes could be applied as transformations leading to a different set of points, and hence a different thing being represented. The key question is whether there is a system of representations that allows transformations representing these processes to generalize over the relevant representational space. I seem to remember the work of the early 90s suggesting yes to this, but I moved on to other topics and, without the internet at the time, didn't follow the research further.
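The transformation idea in the quote above is easy to make concrete. Here is a minimal numpy sketch (purely illustrative - invented vectors and an invented mapping, not any particular published model): concepts are points in a vector space, a cognitive process is a single affine map fitted on known pairs, and the test is whether that map generalizes to points it was never trained on.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 30                            # dimensionality of the representation space

# Assume some process (say, "pluralisation") moves representation points
# around in a systematic, linear way. W_true and b_true are invented.
W_true = np.eye(dim) + rng.normal(scale=0.3, size=(dim, dim))
b_true = rng.normal(size=dim)

base = rng.normal(size=(200, dim))            # 200 training representations
transformed = base @ W_true + b_true          # their transformed counterparts

# Fit one affine transformation to the training pairs.
X = np.hstack([base, np.ones((200, 1))])      # append a bias column
T, *_ = np.linalg.lstsq(X, transformed, rcond=None)

# The key question in the text: does it generalize beyond training items?
novel = rng.normal(size=dim)                  # an unseen representation
pred = np.append(novel, 1.0) @ T
true = novel @ W_true + b_true
print("generalization error:", np.linalg.norm(pred - true))   # ~ 0
```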

Yea, that makes sense to me. The vector idea is one of those things I felt I just grasped, but without much confidence. I'd also have to admit that the argument of resonating activation I made above doesn't really require distributed representations - just a network where units are constantly co-activating one another, and where the knowledge is primarily in the connections, not the units themselves. The distributed idea is, as you say, more a computational device to capture generalisations in learning and also for modelling the acquisition of more complex knowledge/rule systems.
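That "knowledge in the connections" point has a classic toy illustration: a Hopfield-style attractor net, used here purely as a sketch (invented patterns, not a claim about how cortex works). The units are dumb threshold elements; the stored patterns live entirely in the symmetric weights, and mutual co-activation settles a degraded cue back onto a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # number of units

# Store two random +/-1 patterns purely in the connection weights
# (Hebbian outer-product rule); the units themselves hold nothing.
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                    # no self-connections

# Cue the net with a corrupted version of pattern 0.
state = patterns[0].astype(float)
flip = rng.choice(n, size=20, replace=False)
state[flip] *= -1                         # corrupt 20% of the units

# Let the units repeatedly co-activate one another until they settle.
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1.0

print("recovered pattern 0:", np.array_equal(state, patterns[0]))  # usually True
```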

Nice to think about it all again, though.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Marco said:
That seems to me to suggest some sort of second party observer. Perhaps output/display was the wrong analogy. I was suggesting, I think, that experience is the output.

The common critique of Descartes is that he posits a 'second inner observer' that leads to regress. However, Descartes is clear that his inner observer is the first and only observer. I am suggesting that when we say 'I see a red tomato', he is right that the first observer is some small inner part of the brain - but that there are lots of these.

I am intrigued by the idea that experience is an output, because it has become very popular and yet to my understanding of the meaning of the word experience it is back to front. Experiences are sensory; sensation is input. How does one suddenly reverse that? But maybe I have to justify my intuition and maybe not all would agree.

Using Leibniz as a starting point, we seem to have clear evidence of two aspects of reality. One is a complex changing pattern that we think of as 'the appearance of the world' although we know from neuropsychology that it is a highly selective aspect of that world. The other is at least one viewpoint on that world - a here and now. We have reason to think there is one world pattern but lots of viewpoints because other viewpoints are reported to us. In order to take part in the causal fabric of the world and generate reports those viewpoints must belong to causal dynamic units - physical units if you like. Experiences belong to viewpoints and as far as I can see they have to be thought of in terms of 'the way the world influences the viewpoint unit' or 'the way the world is present to the viewpoint unit' or perhaps 'the way the world informs the viewpoint unit'. That to me is input. The output of a unit is the way it influences the world or informs the world. Outputs do not stipulate any experiencing in an operational sense; inputs do. All this perfectly obviously applies at the grain level of a whole human body. My experiences are inputs to me, not my outputs to the world. And whenever one has the opportunity to go further inside the body and nervous system as far as one can ascertain this remains valid. There is nothing special about the skin so that inside it basic dynamic/physical relations should suddenly cease to apply.

Maybe there is a counterargument to this, but what I see in the philosophical and neuroscience literature in the shift to 'experience as output' is not some process of clear logical thinking. It is much more like Chinese Whispers - garbled shifting of meanings to avoid uncomfortable imperatives. Like the way Horace Barlow's paper took the field in the opposite direction to the one he intended for forty years.
 

Woolie

Senior Member
Messages
3,263
Snow Leopard said:
What are the common biases?
In case anyone is still interested in going back to Snow's initial set of questions....

Okay, the first issue is task design and (for patient studies) choice of control group. This is really important, because fMRI is fairly meaningless unless you choose the right comparisons.

One approach, especially in the health area, is to use resting state fMRI (measure brain activity at rest), often applying functional connectivity analysis. The aim is to show that certain regions/networks are over- or underactivated at rest compared to controls. But this is fairly difficult to interpret, for two reasons.

* First, it rests on a difference between groups, so requires very careful choice of control group (if you're studying a "psychosomatic" illness group, you need people experiencing similar symptoms not believed to be psychosomatic - although this is almost never done).

* Second, if you want to use the results to make statements at a psychological level, you have to make a "reverse inference". That is, if region or network X was more activated in patients than in controls, and this region/network has been associated with emotion/perception of pain etc., then ipso facto, there must be enhanced pain perception, etc. The problem is that any one region/network can be involved in many different processes, so we cannot infer backwards which one is being engaged in this instance. A lot of fudging goes on, as researchers try to emphasise differences that fit their ideas (e.g., psychosomatic views), while ignoring differences that are trickier to handle within their framework. (A toy sketch of this kind of group comparison follows below.)

And you thought this stuff only went on with PACE...:oops:
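To make the resting-state approach concrete, here's a toy sketch of the core computation - functional connectivity as correlation between regional time series, compared across groups. Everything here is invented and grossly simplified relative to a real pipeline (no preprocessing, motion correction, statistics, etc.):

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 10, 200

def connectivity(ts):
    """Functional connectivity: correlations between regional time series."""
    return np.corrcoef(ts)

def simulate_subject(coupling):
    """Invented resting-state data: region 0 is the 'seed', and the other
    regions partially track it with the given coupling strength."""
    seed = rng.normal(size=n_timepoints)
    ts = rng.normal(size=(n_regions, n_timepoints))
    ts += coupling * seed           # every region follows the seed a bit
    ts[0] = seed
    return ts

# Pretend patients have stronger seed coupling than controls -- exactly
# the kind of group difference resting-state studies report.
patients = [connectivity(simulate_subject(0.8)) for _ in range(20)]
controls = [connectivity(simulate_subject(0.4)) for _ in range(20)]

seed_conn_p = np.mean([c[0, 1:] for c in patients])
seed_conn_c = np.mean([c[0, 1:] for c in controls])
print(f"mean seed connectivity: patients {seed_conn_p:.2f}, controls {seed_conn_c:.2f}")
# The interpretive trouble described above starts here: the difference is
# real, but nothing in these numbers says *why* it exists.
```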

A more common approach in the cognitive neuroscience literature is to compare brain activation across two or more different tasks. If you choose your tasks carefully, you can hopefully isolate just the mental operation you're interested in. So for example, if you wanted to study the regions engaged during face recognition, you might pick a face recognition task and an object recognition task, because the second one involves many of the same activities (e.g. complex visual analysis) but not the one of interest. There are other designs too, but this one - called a subtraction design - is the most common. This is harder than it looks, because it assumes that the process of interest is merely an extra source of activity "added in" to the usual activity measured in the comparison task (called the "pure insertion" assumption). It might not work that way at all - including faces may influence activation in regions other than the specialised face recognition areas, even in more primary visual areas. We just can't say.
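The subtraction itself is trivial arithmetic; all the fragility lives in the pure insertion assumption. A toy sketch with invented activation maps (hypothetical voxel indices, not real data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 1000

# Invented per-voxel activation for the two tasks. Shared processes
# (complex visual analysis etc.) drive both tasks equally.
shared = rng.normal(loc=1.0, size=n_voxels)
face_area = slice(100, 120)          # pretend these voxels are face-selective

object_task = shared + rng.normal(scale=0.1, size=n_voxels)
face_task = shared + rng.normal(scale=0.1, size=n_voxels)
face_task[face_area] += 1.5          # pure insertion: faces only ADD activity here

# The subtraction: shared activity cancels, face-specific activity survives.
contrast = face_task - object_task
print("top voxels:", sorted(np.argsort(contrast)[-5:]))  # all within 100..119

# If pure insertion fails -- e.g. faces also modulate early visual areas --
# the subtraction will wrongly paint those regions as "face areas" too.
```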

The second, even bigger problem is that you have to have a really sophisticated cognitive model of your various tasks, to be sure you've isolated the correct processes. A lot of early fMRI studies got it wrong, because they weren't based on a good model of the processes they were trying to localise. So for example, we tried to localise the "mental lexicon" by comparing the activations generated during word reading with those generated when people viewed nonsense "words" that presumably, wouldn't already be in their lexicons (e.g., melve). But then we realised that even reading nonsense words most likely requires us to refer to our "lexicon" of accumulated knowledge of words we do know. So that comparison was wrong to start with.

Notice that the reverse inference problem also often comes up with the active task designs, if people try to interpret activations as "indicating" certain types of mental or psychological processes.

Even with an active task, if you want to use fMRI to study differences between a patient population and healthy controls, everything hinges on the choice of control groups. An ill population could behave differently to a healthy one for all sorts of reasons - purely physiological ones (remember, heart rate variability and other cardiovascular factors can influence fMRI activations), discomfort/malaise during the scan, and psychological state at the time of scanning (probably not equivalent to healthy controls for any group with a serious illness). These differences can lead to overactivation of certain brain regions in the patients. But it can even go the opposite way sometimes, with patient populations revealing a reduced response in some regions to certain noxious stimuli presented in the scanner, perhaps as a result of how accustomed they are to pain/discomfort/being prodded by doctors. :( You need a group that is equivalent on as many of the things you're not interested in as possible.

That's enough for now, can continue if useful (will also integrate this into the earlier post when I get a chance).
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Woolie said:
Okay, @Jonathan Edwards, it sounds like we agree for the most part, although I admit I am still a little unsure about the "each individual instance" out-clause. An experience seems to me, by definition, to be a single instance. Unless you're talking about experience in the sense of accumulated knowledge. But then I'd call that accumulated knowledge. Or semantic knowledge.

So maybe we differ in our nomenclature...

Anyway, I look forward to reading your book to learn more!

I suspect we still differ a bit more than just by nomenclature but this is certainly a very difficult area for terminology.

Anscombe's point was that defining experience by content does not guarantee a definition by token physical event. People often define 'an experience' by content, since that is all our brains can introspect. But that makes an experience a bit like page 3 of the Sunday Times. There may be an experience of page 3 after one of page 2 and before one of page 4 but we are talking of event types, not tokens. A million people may experience page 3 - there may be a million event tokens. And similarly we have no way of knowing how many event tokens that are experiences of page 3 there are in a brain at a time. If all signals go to 10,000 places it seems very likely that there are 10,000 at least. Traditionally, nobody in cognitive science has even considered this issue, but now people are.

The vector space approach, which I think Paul Churchland has been keen on, amongst others, has the problem of being divorced from any plausible physical implementation. I think people are now fairly clear that there are no transformations that could be performed on such vectors because they are not actually physical realities. We had a big internet discussion group on all this last year discussing ways of resolving the impasse, like suggestions from Gary Marcus and Randy Gallistel, with Freeman, Baars, Barlow, Quian Quiroga and others. Most people seemed still to think there is a log jam but I was impressed by people like Brad Wyble and Tsvi Achler who were building multi-module models that addressed the trickier issues of representation in physical implementation terms.

All this may seem very far distant from theories of ME but I come back to very simple things like the distinction between input and output. If experiences are inputs they will be invisible on fMRI because oxygen uptake is likely to track the need to repolarise membranes during output. The consequent input might be in another gyrus.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Woolie said:
In case anyone is still interested in going back to Snow's initial set of questions....

That's enough for now, can continue if useful (will also integrate this into the earlier post when I get a chance).

That's a very neat résumé of the problems. I prodded Neil Harrison a bit about his interferon-induced fMRI 'fatigue' signals in relation to 'spurious thoughts' and he seems to know his stuff, but it is instructive to see the full range of issues.
 

duncan

Senior Member
Messages
2,240
Are bacteria and viruses just a "kick to the stone" to this discussion, at least the ME part?

I know in the context of the specifics, they are not, but it's a funny thought in a reductionist (simplistic?) kind of way...

Just want to add, this is extraordinarily good stuff, a brilliant read, and way out of my league. I am trying to keep up, and only offered this note as a good-natured sidebar.
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
Jonathan Edwards said:
To make use of them all as a pattern they all have to relate causally to at least one downstream event - they have to converge on some individual integrator unit. So we are back to what I am suggesting - that an entire output pattern, such as 24, has to be the input to at least one individual integrator unit.

And of course this requires just the same architecture as the connectionist rows beforehand. Having one unit receiving 24, or 12, or 18 does not provide a large enough repertoire of response so you want lots of units all receiving 24 through the divergent component of the network architecture - connectionist again. But we have to posit that, at least in this row, there are inputs to individual units that represent the entire concept or idea or image being experienced. Note that I am in no way suggesting that the experience of 'thinking 24 at that moment' occurs in only one place. That was Descartes's mistake, and also Leibniz's. Ironically, William James makes the same mistake having realised he did not need to. The experience will be massively multiple - as in the rows of a connectionist net. The solution to the paradox is that experiencing is distributed but an individual instance of experience is local. (This is basically the topic of the book Asim Roy and I are producing.)

There are more points here than I have time to respond to, but this one doesn't sound too far away from the instantiation of 'concrete' examples in object models.

One thing I'm having a problem with is how do you encode a whole idea/thought? We know (or did twenty-odd years ago when I last looked at these things) that there are neurons or assemblies of neurons in the visual system specialised to detect lines, edges, corners etc but having a single 'unit' that encodes an idea in its entirety suggests that it (the idea) becomes indivisible, and therefore each idea (or each 'copy' of the same idea), if it has a physical substrate, must be stored at a discrete location, which sounds a little profligate.

How also do we deal with new thoughts, novel concepts or something outside our experience? Do we dust off existing thoughts and tailor them to suit, or say it's kind of like that but not quite, and kind of like that other but not quite, so I'll create a new composite concept like 'iron bird' (or flying saucer, not to be culture specific)?

I'm sure these are all well worn questions.
 

A.B.

Senior Member
Messages
3,780
This will be just theorizing about ME/CFS:

Let's assume that the anecdotally reported clusters of CFS in families are a real thing. They seem to consist of multiple people on a spectrum of CFS-like disease, with milder cases or cases that don't fully meet any criteria being much more common.

Wouldn't this point towards an environmental trigger rather than stochastic autoimmunity?

On the other hand, it is usually reported that the affected people didn't all develop these problems at the same time which is beautifully consistent with stochastic autoimmunity.

Are there any autoimmune diseases that tend to appear in clusters like this?
 

Jonathan Edwards

"Gibberish"
Messages
5,256
A.B. said:
This will be just theorizing about ME/CFS:

Let's assume that the anecdotally reported clusters of CFS in families are a real thing. They seem to consist of multiple people on a spectrum of CFS-like disease, with milder cases or cases that don't fully meet any criteria being much more common.

Wouldn't this point towards an environmental trigger rather than stochastic autoimmunity?

On the other hand, it is usually reported that the affected people didn't all develop these problems at the same time which is beautifully consistent with stochastic autoimmunity.

Are there any autoimmune diseases that tend to appear in clusters like this?

RA is very like that - which we think is because of a genetic predisposition - on top of which one has whatever environmental (smoking) factors apply and the chance aspect.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Marco said:
One thing I'm having a problem with is how do you encode a whole idea/thought? We know (or did twenty-odd years ago when I last looked at these things) that there are neurons or assemblies of neurons in the visual system specialised to detect lines, edges, corners etc but having a single 'unit' that encodes an idea in its entirety suggests that it (the idea) becomes indivisible, and therefore each idea (or each 'copy' of the same idea), if it has a physical substrate, must be stored at a discrete location, which sounds a little profligate.

How also do we deal with new thoughts, novel concepts or something outside our experience? Do we dust off existing thoughts and tailor them to suit, or say it's kind of like that but not quite, and kind of like that other but not quite, so I'll create a new composite concept like 'iron bird' (or flying saucer, not to be culture specific)?

I'm sure these are all well worn questions.

Ten years ago everyone tended to be of the view that ideas are somehow encoded in a network of cells, although it was unclear how this would work. With Rodrigo Quian Quiroga's group showing that people have Jennifer Aniston- or Saddam Hussein-specific neurons, things have swung back. Christof Koch has also come to think that the variations in neuronal responses thought to indicate a random element that would need to be evened out by 'redundant' firing of banks of cells are mostly artefacts of experimental systems. So people have moved to thinking that individual neurons may be given very specific jobs. Horace Barlow made it clear in a 2009 paper that that was what the take-home message of his 1972 paper should have been.

Profligacy, or the 'combinatorial problem' of having a cell for every possible idea seems to me to be a six of one and half a dozen of the other sort of problem. People quote the combinatorial problem as a knock down argument against almost any model they do not like - and it is mutual. Barlow's 1972 paper was about how many cells you need for an idea. Since then the estimates of cell numbers have gone up. There are probably enough cells in the hippocampus to allocate one to a new idea about every ten seconds of your life. It will not work like that because specificity and sensitivity of allocation seems to have a statistical spread. But there are probably enough cells.

I personally like an idea that I call 'mordant loop cells', which Arnold Trehub relates to his 'autaptic cells' and which is similar to some ideas that Wyble and others have, I think. You have a bank of unallocated cells, each 'tuned' randomly to a slightly different input pattern and also to different levels of sensitivity and specificity (very much like B cells and their antibodies in fact). When the attention system registers that a new idea is worth remembering, the cell most closely sensitive to the current pattern of input goes into 'mordant mode' - i.e. it 'fixes the dye' that is current. It can do that through a neat trick involving reinforcing its relationship to the cells firing at it at that moment - basically as Hebb suggested but very rapid and permanent. The cell will be linked in to other cells that allow it to register the idea in time (maybe a uniquely human trick) and also give it a name if needed.

The upshot of all this is that the idea is represented in two senses. It is represented by the pattern of firing of cells that were firing at that original time - get them to refire and you re-encode the idea - and it is represented by that same pattern of signals inputting not just to the idea cell but to a whole lot of other cells used to determine responses. The idea is that the thought is not just experienced by the cell that recognises that pattern best; a grandmother cell will get inputs of grandmother, motorbike and coriander but only fire for grandmother.
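A minimal caricature of that allocation scheme, as I read it (my own toy code; the dot-product 'tuning' and one-shot weight fixing are assumptions made for illustration, not Trehub's or anyone's actual model):

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_cells = 64, 500

# A bank of unallocated cells, each randomly 'tuned' to a slightly
# different input pattern -- like naive B cells and their antibodies.
weights = rng.normal(size=(n_cells, dim))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)
allocated = np.zeros(n_cells, dtype=bool)

def fix_new_idea(pattern):
    """Attention flags a pattern worth keeping: the unallocated cell that
    best matches the current input goes into 'mordant mode', permanently
    reinforcing its connections to the cells firing at it right now
    (one-shot Hebbian fixing -- it 'fixes the dye' that is current)."""
    pattern = pattern / np.linalg.norm(pattern)
    scores = weights @ pattern
    scores[allocated] = -np.inf          # committed cells sit this one out
    winner = int(np.argmax(scores))
    weights[winner] = pattern            # rapid, permanent Hebbian fix
    allocated[winner] = True
    return winner

def respond(pattern):
    """Later, every cell receives the whole input pattern, but only the
    best-matching cell fires: grandmother, motorbike and coriander all
    arrive, yet only the grandmother cell fires for grandmother."""
    pattern = pattern / np.linalg.norm(pattern)
    return int(np.argmax(weights @ pattern))

grandmother = rng.normal(size=dim)
cell = fix_new_idea(grandmother)
print(respond(grandmother) == cell)      # True: re-presenting the pattern refires it
```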

(Note that the brain does not send meanings from one place to another in the way language does because the level of abstraction or interpretation changes at each step. So communication in the brain is quite unlike communication by language.)

So a new thought can be printed into a new cell whenever necessary. Barlow 1972 was probably right that it will be ten to twenty cells with variable specificity - so you get linking in to related ideas for free but you have a risk of confusion or conflation.

There are a lot of further caveats but I am impressed that people like Wyble and Achler have got the measure of these and they are not unending. Many of them are covered by having a statistical spread in all the response relations that can be nudged this way or that to advantage by 'mordant mode' episodes.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Jonathan Edwards said:
The output of a unit is the way it influences the world or informs the world.
The world includes the brain itself in my view. My current take on experience is that it is created by combining filtered and modified sensory input, influenced by interaction with existing experience, and leading to formation of memories of that experience. Experience is best thought of as a state, though it might be quite dynamic within itself. So experience is an input, state, and output, if you look at what it does on activation. Experience does not supply output to the world. It supplies output to other effector processes in the brain, which produce the output.

If you make the one reality assumption (I do, and yes this applies even to subatomic particles when considered as particle and not wave) then this necessarily gives rise to the notion there are many interpretations of reality, which is what is reported.

The view that there are many agencies that process things and interact in the brain was proposed by Minsky in The Society of Mind. Not all of those agencies are capable of much learning; some are largely hardwired. Our interpretation of reality is constructed, as I currently conceive it, of the interaction between agencies that operate to detect things and are activated by the input, and those can be hardwired or learned, though often both.

Much of my interpretation of things is coloured by my background in neural networks, which is a source of potential bias. So I prefer to think of these things as A view of what is happening, not THE view of what is happening. My conception and reality may not be the same.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Jonathan Edwards said:
A million people may experience page 3 - there may be a million event tokens. And similarly we have no way of knowing how many event tokens that are experiences of page 3 there are in a brain at a time. If all signals go to 10,000 places it seems very likely that there are 10,000 at least. Traditionally, nobody in cognitive science has even considered this issue, but now people are.
This sounds very similar to the old notions of representation, and gave rise to consideration of things like exemplars and scripts. I vaguely recall discussion on this in the 90s. Indeed, I argued along similar lines but hard core rules/linguistic AI people were not prepared to consider it. That gives rise to a little anecdote I might post later.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
Jonathan Edwards said:
If experiences are inputs they will be invisible on fMRI because oxygen uptake is likely to track the need to repolarise membranes during output. The consequent input might be in another gyrus.
I am not sure I agree. If input is reacted to there will be a change in neural function. Whether a given technology adequately measures that change is a different question.
 

Woolie

Senior Member
Messages
3,263
Jonathan Edwards said:
The vector space approach, which I think Paul Churchland has been keen on, amongst others, has the problem of being divorced from any plausible physical implementation. I think people are now fairly clear that there are no transformations that could be performed on such vectors because they are not actually physical realities.
Although to be fair - and @user9876 may have more to contribute here - many would say that this is not the purpose of most vector space models. They are not intended to model neural events, they are intended to model cognitive processes. I think we need both. I know some contemporary computational neuroscience is aimed at creating a "big artificial brain", but for most purposes, this level of detail is too fine-grained to use for answering questions about how we think and feel.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Woolie said:
Although to be fair - and @user9876 may have more to contribute here - many would say that this is not the purpose of most vector space models. They are not intended to model neural events, they are intended to model cognitive processes. I think we need both. I know some contemporary computational neuroscience is aimed at creating a "big artificial brain", but for most purposes, this level of detail is too fine-grained to use for answering questions about how we think and feel.

I am not sure. I think the vector space idea relates to a 'state space' of all possible combinations of neural activations, each given a 'dimension' in this space like a Hilbert space. That is the sort of thing Churchland talks about. The model is intended to be used in a generalised abstract form without reference to individual neurons, as you say, but it is a neurodynamic rather than a cognitive model I think. My impression is that it has never given rise to a testable prediction - which it will not if no precise local dynamics can be applied to it.

I quite agree that the big artificial brain project is a white elephant. Until we understand simple things like how temporal spike train coding works we have no idea how the connection architecture that makes use of it should be constructed. It would be a bit like building an internal combustion engine without knowing whether it was going to use diesel or petrol.
 

user9876

Senior Member
Messages
4,556
Jonathan Edwards said:
David Chalmers addressed this problem at length. He concluded that you ought to get the same experience but with the weasel caveat that the simulation would have to be at 'sufficiently fine grain'. In my view the sufficiently fine grain is the grain of fundamental physics, although the size involved might be microns (dynamic grain and size are not the same). In effect you need cell membranes, cytoskeleton, synapses...

There are attempts to build neurons at the nanoscale, which would allow millions of them to be placed on a single piece of silicon.

http://arstechnica.com/science/2012/12/neuristor-memristors-used-to-create-a-neuron-like-behavior/

It's not work I have followed, but I remember hearing about it in a talk a few years ago.