
Mind/Brain and ME theorising

user9876

Senior Member
Messages
4,556
I would like to get back to the issue of types of belief because I think they are relevant to the friction point between patients and scientists and also to what seems clunky about Mark Edwards's approach.

I looked at the abstract of Seidenberg and McClelland 1989. It seems to deal with the sort of connectionist systems I am familiar with from Rumelhart and McClelland, Hinton etc. I may be wrong but this does not seem to address the distinction between experienced belief and operational belief. All these connectionist models work on computers and nobody much thinks computers experience - at least anything relevant. I think we may be at cross purposes on 'levels'. The connectionist models are at a more generalised or abstracted computational level but that would not be an experiential or 'mental' or 'I feel I believe' account, would it?

I feel very uncomfortable with the word belief because it can be so overloaded with different semantics. I'm not sure if I understand what you mean by operational and experienced belief, but I would guess that one is a current internal representation of the state of the world as just observed and the second is memories encoded in a way that affects how inferences are made about the world?
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Thanks @Jonathan Edwards

ETA

So how 'rich' is the encoding of a thought? Does it encapsulate everything related to the original experience or is it more of a 2D facsimile?

Colin Blakemore, who is perhaps the most well known neuroscientist in the UK and who taught me neurobiology in 1970, takes the view that a conscious experience is built out of about 50 bits of data, i.e. something you could code as: 0100111010 0110110101 0100111010 0100111010 0100111010

Victor Lamme, who works on very short term memory, claims that experience is as rich as we think it is - which might be equivalent to an old television screen at about 500,000 bits.

I think the majority of neuropsychologists are sceptical about Lamme's proposal, and even Lamme may deny that there is a conscious representation at this level. On the other hand I think Blakemore is being a bit austere. Fifty bits barely gives you the range of an elevator floor indicator. So my guess is half way - 5,000 bits.

5,000 bits is very convenient because the average convergence/divergence ratio at each connection in the brain is 1:10,000 both ways. The bigger neurons have up to 50,000 synapses but there are reasons to think that there may be about tenfold redundancy in encoding at synapses. Interestingly, if a cell A feeds on to cell B it often tends to feed up to ten branches on to B. So informational divergence/convergence might be at the 1:1000 level. Obviously these are all ball park figures but they seem to have a plausible relation to apparent richness.
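To make the back-of-envelope arithmetic above explicit, here is a minimal sketch; the figures are only the rough ones quoted in this post, not measured values, and 'halfway' is read as halfway on a logarithmic scale:

```python
import math

# Geometric midpoint between the two richness estimates quoted above.
blakemore_bits = 50
lamme_bits = 500_000
print(math.sqrt(blakemore_bits * lamme_bits))  # 5000.0 -> "about 5,000 bits"

# Rough synapse arithmetic: ball-park figures from the post, not measurements.
synapses_per_large_neuron = 50_000
fan_out = 10_000            # the ~1:10,000 convergence/divergence quoted above
redundancy = 10             # assumed ~tenfold redundancy in synaptic encoding

print(synapses_per_large_neuron / redundancy)  # ~5,000 informationally distinct inputs
print(fan_out / redundancy)                    # ~1,000 informationally distinct targets
```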

That leads to the interesting inference that all mental representations, including conscious experiences, relate to individual neurons. Most of the neuroscience world throws up its hands in horror at this idea. However, Horace Barlow, who led them all astray in 1972, seems to be coming to the same conclusion (aged 94).

There is a stonking great advantage in locating experienced representations in the synaptic array of the dendritic tree of an individual neuron, which is that at this point all encoded signals co-contribute to a single computational event - they are an input. For neural nets there is no definable event of this sort because there is no principled basis for defining a boundary (as William James pointed out).

So I think it will turn out to be about 1,000-5,000 bits.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I feel very uncomfortable with the word belief because it can be so overloaded with different semantics. I'm not sure if I understand what you mean by operational and experienced belief, but I would guess that one is a current internal representation of the state of the world as just observed and the second is memories encoded in a way that affects how inferences are made about the world?

I agree. I think belief should be expunged from neuroscience, certainly from discussion of the pathology of ME and probably from philosophy. It is far too loaded. My terminology is crude but based on a distinction the philosopher Tim Crane uses. An operational belief would be defined in behavioural dispositional terms. A child can be said to believe she is in class 4 if on observation she goes to classroom 4 each morning. She still has this disposition when at home having tea. It is embedded in connections. An experienced belief would be when she is actually running down the corridor and thinking 'that is my class'. For a priest belief in God is an operational belief that underlies all daily routines but an experienced belief when in contemplation he thinks 'I might doubt but at this moment I know He is with me'.

Operational beliefs can be instantiated in terms of actual neural impulses regulating the priest's behaviour, but for an experienced belief those impulses must inform a conscious thought - whatever that implies.
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
I think that's getting to the crux of the issue as it relates to the various uses of 'belief' with respect to ME/CFS. The BPS crowd posit that we have inappropriate illness beliefs (operational) that bias how we believe we feel on a momentary basis (the experienced belief). For PWME, how we feel momentarily - the experienced belief - is what we base our judgement on, but it's easy to see how this could be extended to suggest that the experienced belief during, say, the initial 'viral onset' becomes embedded as an operational belief.

Referring back to the example of Jonathan's wife, I'm pretty sure she had no operational belief that televisions can watch you but the effects of the drugs made the experienced belief unavoidable.

Interestingly there has been some media coverage about sleep paralysis lately (they've just released a movie based on the phenomenon). I was interested because in my early 20's I experienced this every night for a full year before I realised they weren't anything sinister and they just faded away after that.

I'm sure here also that sufferers don't have operational beliefs about the existence of incubi, succubi, old hags or tall grey shadows but that's what they experience. Some suggest that during these episodes the amygdala is highly active pumping out 'fear' signals and the (semi) conscious brain then tries to supply an appropriate representation to fit the sensations.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
For some reason Alex's post does not paste when I 'reply' but no matter.

Reading Leibniz has made me think a lot about the words used in physics. All words in physics are maybe placeholders based on abstractions our brains or minds use to construct a model of dynamic connections. But they can cut the world at different joints in all sorts of ways. 'State' is a term I find tricky since I am not sure it is clear where it cuts. 'Object' or 'thing' cuts the Newtonian world into spatial asymmetries based on cohesive forces, independent of time. 'Event' cuts by time and implies a location but not always strictly. Modern physics has new abstractions like wavefunction, operator, observable, which cut new ways. But also ordinary language cuts at joints. So 'stimulation' is an event that views the dynamics in terms of input whereas 'activation' implies some sort of output response (it jumped). Leibniz tells us that the only real relations are what we might think of as inputs, although like modern physics he abandons the intuitive idea of causation.

So I feel uncomfortable if we allow experience to be input, output and state. I think experience is like 'stimulation', cutting the analysis at input joints. Having said that, there is a whole mass of work on how it is physically possible to 'report an experience', which means that one has to invoke inputs, outputs and loops, and ends up concluding that what is reported is an inference about what the experience must have been, based on comparisons occurring outside the experience!

It may be that in practical neuroscience terms, like fMRI pictures, these considerations can be overcome by bootstrapping back from cross-correlations as in the functional connectivity studies. But I do worry that unless theorists are clear what the underlying physics/experience relation must be they are likely to make false assumptions. And it is worrying that the Bayesian predictive models have flow diagrams with boxes that would have to be the site of experience but the authors deny that any experience can occur at such a locality.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
I am not sure I agree. If input is reacted to there will be a change in neural function. Whether a given technology adequately measures that change is a different question.

I accept that if reacted to there will be signal. However, I think one can argue from first principles and also from the Quian Quiroga data that inputs often do not give output signals - as when the input for a motorbike arrives at a grandmother or Saddam Hussein cell. There can also be outputs indirectly tripped because of inputs elsewhere for artefactual reasons. There has been a lot of interest in a 300msec negative spike response in prefrontal cortex that was thought to be a signature for 'conscious experience', but a neat experiment done in Cambridge recently suggests that it is the signature for the machinery of responding to what the subject was told to respond to by the psychologist running the experiment. When they experience something 'irrelevant' there is no signature. This is really just a different way of saying what Woolie was emphasising, I think.
 

user9876

Senior Member
Messages
4,556
I am not sure. I think the vector space idea relates to a 'state space' of all possible combinations of neural activations, each given a 'dimension' in this space like a Hilbert space. That is the sort of thing Churchland talks about. The model is intended to be used in a generalised abstract form without reference to individual neurons, as you say, but it is a neurodynamic rather than a cognitive model I think. My impression is that it has never given rise to a testable prediction - which it will not if no precise local dynamics can be applied to it.

I quite agree that the big artificial brain project is a white elephant. Until we understand simple things like how temporal spike train coding works we have no idea how the connection architecture that makes use of it should be constructed. It would be a bit like building an internal combustion engine without knowing whether it was going to use diesel or petrol.

So the notion of the vector space comes from the idea of a connectionist network where a set of n output units forms a vector and a given set of activations is just a point in the space defined by that vector (although each value is limited to between 0 and 1). Transformations then happen via y_i = f(sum_j w_ij * x_j), mapping into the new vector space defined by Y, where y_i is an element of Y (apologies for the plain-text notation). The interesting question to me is whether we can represent interesting knowledge structures in a vector space in a way that lets them be computed on by general transformations. I believe that the only way this could work is by using learning algorithms, and at the time I was looking at it I felt that, given the available compute power, it was not practical. However, that was a long time ago, and given Moore's law that may no longer be the case.
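As a minimal sketch of that transformation - the sizes, weights and squashing function are all arbitrary assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    """Squash each element into (0, 1), as in classic connectionist units."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

n_in, n_out = 8, 5                  # arbitrary sizes for illustration
W = rng.normal(size=(n_out, n_in))  # w_ij: weight from input unit j to output unit i
x = rng.uniform(size=n_in)          # a point in the input activation space

# y_i = f( sum_j w_ij * x_j ): one point in the output vector space Y
y = sigmoid(W @ x)
print(y)                            # five activations, each between 0 and 1
```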

I'm not sure how this relates to Hilbert spaces since I thought they were defined over the space of complex numbers (but not sure)?

I think it makes an interesting computational model but like the overall connectionist approach I don't really know how/if it maps onto neurons as a compute surface. To me the interesting thing is different computational models and having a reasonably good formalism to describe them, how to represent knowledge and compute with them.

The problem we get is that as people think about computation in terms of distributed processes or representations they find it very hard, and so they tend to head back to more linear models of computation.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
This sounds very similar to the old notions of representation, and gave rise to consideration of things like exemplars and scripts. I vaguely recall discussion on this in the 90s. Indeed, I argued along similar lines but hard core rules/linguistic AI people were not prepared to consider it. That gives rise to a little anecdote I might post later.

I think the hard core AI people may have been a major obstacle in all this. And Jerry Fodor's idea of 'mentalese' further obscured things because brain signals do not operate the least bit like a natural interpersonal language. At least at the moment there seems to be a degree of humility amongst the old guard figures about the fact that people need to go back to the drawing board and build things up again bit by bit.
 

user9876

Senior Member
Messages
4,556
Thanks @Jonathan Edwards

ETA

So how 'rich' is the encoding of a thought? Does it encapsulate everything related to the original experience or is it more of a 2D facsimile?

Or is it just an internal representation of key features of something, rather than an image, even if that was the input technique?
 

user9876

Senior Member
Messages
4,556
I think the hard core AI people may have been a major obstacle in all this. And Jerry Fodor's idea of 'mentalese' further obscured things because brain signals do not operate the least bit like a natural interpersonal language. At least at the moment there seems to be a degree of humility amongst the old guard figures about the fact that people need to go back to the drawing board and build things up again bit by bit.


But a lot of hard core AI people were not trying to model the brain but rather study different ways computation worked and ways intelligent behaviour could be formed. I don't think too many people really believed the logical deduction systems used in things like expert systems represented the way the brain works but they did and still do form interesting ways to solve complex problems.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
So the notion of the vector space comes from the idea of a connectionist network where a set of n output units forms a vector and a given set of activations is just a point in the space defined by that vector (although each value is limited to between 0 and 1). Transformations then happen via y_i = f(sum_j w_ij * x_j), mapping into the new vector space defined by Y, where y_i is an element of Y (apologies for the plain-text notation). The interesting question to me is whether we can represent interesting knowledge structures in a vector space in a way that lets them be computed on by general transformations. I believe that the only way this could work is by using learning algorithms, and at the time I was looking at it I felt that, given the available compute power, it was not practical. However, that was a long time ago, and given Moore's law that may no longer be the case.

I'm not sure how this relates to Hilbert spaces since I thought they were defined over the space of complex numbers (but not sure)?

I think it makes an interesting computational model but like the overall connectionist approach I don't really know how/if it maps onto neurons as a compute surface. To me the interesting thing is different computational models and having a reasonably good formalism to describe them, how to represent knowledge and compute with them.

The problem we get is that as people think about computation in terms of distributed processes or representations they find it very hard, and so they tend to head back to more linear models of computation.


Agreed. I just used Hilbert space as an example of a mathematical state space that does not imply any specific arrangement in physical space.

The basic idea that connectionist nets model the divergent/convergent architecture of neurons better than computer circuits seems to me sound. Three things worry me about actual usage. If the brain uses a 'Darwinian' selective process for setting up memories rather as I suggested above and B cells do then you may not start to get the right level of function until you have a million units in each array. Secondly, the rules of Hebb-style reinforcing feedback may vary from region to region and array to array within a region. So in cortex with five layers of cells the rules for each layer are probably quite different. Thirdly, I think that timing of spikes may be much more important than has been assumed.
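A toy illustration of the second worry, assuming a plain Hebbian rule (delta_w_ij = eta * post_i * pre_j) and simply giving each hypothetical cortical layer its own learning rate and decay term; the layer names and numbers are invented:

```python
import numpy as np

def hebbian_update(W, pre, post, eta, decay=0.0):
    """Plain Hebbian rule: delta_w_ij = eta * post_i * pre_j, minus optional decay."""
    return W + eta * np.outer(post, pre) - decay * W

rng = np.random.default_rng(1)
pre, post = rng.uniform(size=4), rng.uniform(size=3)

# Hypothetical per-layer rules: the point is only that they need not be the same.
layer_rules = {"layer_2_3": dict(eta=0.05, decay=0.001),
               "layer_4":   dict(eta=0.20, decay=0.0),
               "layer_5":   dict(eta=0.01, decay=0.01)}

for layer, rule in layer_rules.items():
    W = rng.normal(size=(3, 4))
    W_new = hebbian_update(W, pre, post, **rule)
    print(layer, np.round(W_new - W, 3))
```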

Traditionally, two aspects of neural dogma do not really fit. One is the linear summative integrate and fire default model for how a neuron responds to inputs - which is close to the connectionist model. The other is rate coding, where intensity of 'meaning' (i.e. brightness of light or loudness of sound) is encoded in a faster rate of firing. In theory the first rule is unaffected by rate of input; summation is supposed to be almost instantaneous.

But it is now clear that rate coding may operate statistically at entry points in pathways. The faster you fire the more cells you will catch at the right time to summate, and not in a refractory phase. There are then issues about phase locking allowing signals to be triaged according to cyclic repetition, e.g. at gamma frequencies in different phase relations - which probably has to be used where rate coding is no longer used. The upshot of all this is that cell firing does not necessarily just mean 1 rather than 0. The phase relation to events in all 10,000 cells getting the signal may allow for a wide range of 'value vectors' with many degrees of freedom. Connectionist nets do not, as far as I know, begin to incorporate this.

The implications can be major. I sometimes think of it a bit like opening the aperture on a camera so that certain things are in sharp focus and others invisible, except that you could have different parts of the net focusing simultaneously on different things. So there is a danger that connectionist nets as designed so far simply do not start to use the sort of inferential processes neurons can use. So you can have a violin like Isaac Stern's, but to capture the flavour of Bruch's Concerto you need a bit more than just the hardware.
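A toy Monte Carlo reading of that statistical view of rate coding, in which each downstream cell is assumed to be receptive (not refractory) for only a fraction of the time, so firing faster catches more of the array at a summable moment; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

n_targets = 10_000        # the ~1:10,000 divergence quoted earlier
receptive_fraction = 0.2  # assumed fraction of the cycle a target can summate

def cells_caught(n_spikes):
    """How many target cells receive at least one spike while receptive."""
    # Each spike independently lands in each target's receptive phase with this probability.
    caught = rng.random((n_spikes, n_targets)) < receptive_fraction
    return caught.any(axis=0).sum()

for rate in (1, 5, 20, 50):   # spikes fired into the array per window
    print(f"{rate:3d} spikes -> {cells_caught(rate):5d} of {n_targets} cells caught")
```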
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
However, I think one can argue from first principles and also from the Quian Quiroga data that inputs often do not give output signals - as when the input for a motorbike arrives at a grandmother or Saddam Hussein cell.

I don't disagree but it's a different phenomenon. A cell responds; that response might be internal, with no external signal, as in the case above. You have to be careful to define what a signal is though, and not confuse or conflate signal with externally detectable response, whether that be by an experimenter or by other cells. A change is in principle measurable, or it isn't a change. There appears to be a muddle about what is input, output, experience etc. I don't think we have resolved it.


Beliefs

I don't think we need to get too heavily into philosophy to get enough understanding of what beliefs are and are not, or their limitations. I might believe I can fly, but if I act on that there will be a possibly tragic mismatch between what I think about the world and the reality. Yet I think the same kind of error occurs again and again. In extreme beliefs I might think something unacceptable to many (such as "I can read your mind"), but ideas that do not match reality occur all the way down to the little things. Like: where did I put my pen? It was here only seconds ago! What we think, and reality, do not have to match.


Experience Neurons?

I am deeply suspicious of the notion of belief or experience neurons, whether individual neurons or their synaptic connections. I do think that small clusters of neurons can play specific roles, we see that in the visual cortex for example, and so they might activate under specific conditions. I see absolutely no problem with distributed memory as found in artificial neural networks being somewhat analogous to what happens in the brain, though with bells and whistles in the brain you cannot find in an artificial network. I do however think the kinds of amorphous single-layer neural nets that were very popular with (for example) McClelland are a very poor approximation of anything in the brain.

When I was talking about state, I was talking about a loose concept ... it's more valid in my view than saying input or output, but it's not particularly helpful either. You can create various hypothetical state mechanisms, but they are mostly still not testable.

So let me put it another way. Suppose there is a small region of the brain in which an experience exists (although I am deeply suspicious of this view as well, it serves to illustrate my point as a hypothetical). Similar regions surround it, with which it interconnects. A sensory input comes in, through eyes, touch or whatever. It's pre-processed by specialist regions, such as the visual cortex. What results is NOT an image of reality, it's an abstracted approximation. Usually that is good enough, but illusions work in some cases because the brain's interpretation of what we see can be fooled. Colour is largely a brain interpretation, though not all colours.

Now that modified sensory input reaches the area in which related experiences can exist. Other nearby areas can interact with how it operates, representing the impact of our prior experience on the sensory input. So the first area to receive the signal may activate nearby areas. The input signal has become an output from that area. It hits other experiential areas, and they process it and send a return signal. That signal was again an input, was processed, and with modification became an output. Dynamic patterns of activation go all over this little area of the brain, and there are connections to other regions of the brain that can activate emotions or autonomic reactions, or whatever it's connected to.

So that experience leads to a signal, or probably a group of signals (via as yet mysterious processes .... there is too much we do not understand) to the speech centers of the brain. Cutting out a lot of steps, you eventually say something about the experience.
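A cartoon of that hypothetical, with every number invented: a few interconnected 'areas', each taking what arrives as input, transforming it, and passing the result on as output to its neighbours for a few rounds:

```python
import numpy as np

rng = np.random.default_rng(3)
n_areas, size = 4, 6                      # four little areas, six units each

# Fixed random transforms: how each area reshapes whatever reaches it.
transforms = [rng.normal(scale=0.5, size=(size, size)) for _ in range(n_areas)]
# Which areas talk to which (here: a simple ring of neighbours).
neighbours = {i: [(i - 1) % n_areas, (i + 1) % n_areas] for i in range(n_areas)}

activity = [np.zeros(size) for _ in range(n_areas)]
activity[0] = rng.uniform(size=size)      # "pre-processed sensory input" arrives at area 0

for step in range(5):                     # a few rounds of input -> output -> input
    new = [np.zeros(size) for _ in range(n_areas)]
    for i in range(n_areas):
        out = np.tanh(transforms[i] @ activity[i])   # area i's output this round
        for j in neighbours[i]:
            new[j] += out                            # ...becomes input to its neighbours
    activity = new
    print(f"step {step}:", [round(float(np.linalg.norm(a)), 2) for a in activity])
```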



Experience is ill-defined - what do we even want to talk about?

So are we talking sensory input, or experience of reality? Or the consequences of experience? Where you draw the boundary affects the argument.

Now one thing that becomes clear here is that we have not really defined what an experience really is, or even what about experience we want to argue about. I think it makes no sense in some respects to talk of it as just input or output. We are in danger of black-boxing the brain, making unjustified assumptions, and all sorts of biases. Similarly we cannot presume we know what the brain is doing yet. Have someone like you ask someone like me in a hundred years and they might have a better conversation.

You can posit such a situation right the way down to individual neurons, but as I said this is deeply suspicious to me. Clusters of neurons, possibly structured (though differently to the visual cortex) seem more likely.

Any argument that neural networks don't hold up because they cannot be easily fractured actually supports the viability of the neural network idea, though it does not prove it. Specific response clusters can be highly distributed, and not easily separated. This might be one reason we have so much problem figuring out the brain.

I do however think that the brain probably has a lot more fine-grained architecture than most suppose. It's not a random lump of stuff, broken up into a few modules, with connections between those modules that are specific on the large scale, but random in fine scale. I think that is how things used to be thought of. I think the visual cortex provides a good example of the kinds of specialized subarchitecture that might be involved. Obviously not identical, but still a clue.

We have also ignored one key aspect. While connections may exist, they may not be activated. Non-activation of connections (which in some cases may dwindle or die) is also important. Edelman posited that deep emotional centers of the brain (he allowed for a range; he was mainly concerned with end mechanisms) could send signals that change the response capacity of the synapses, pushing them toward or away from changing the strength or type of connection. Much of what we consider to be learned survives, in my interpretation of his view, only because there is no sufficient signal to trigger synaptic plasticity.
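A toy sketch of that last idea as read here, not Edelman's actual formulation: an ordinary Hebbian change gated by a diffuse 'value' signal from deeper centres, so that with no signal the synapses simply stay as they are:

```python
import numpy as np

def gated_update(W, pre, post, value_signal, eta=0.1):
    """Hebbian change scaled by a diffuse 'value' signal; zero signal -> no plasticity."""
    return W + eta * value_signal * np.outer(post, pre)

rng = np.random.default_rng(4)
W = rng.normal(size=(3, 4))
pre, post = rng.uniform(size=4), rng.uniform(size=3)

for value_signal in (0.0, 0.3, 1.0):      # none, weak, strong signal from deeper centres
    W_new = gated_update(W, pre, post, value_signal)
    print(value_signal, round(float(np.abs(W_new - W).sum()), 3))   # 0.0 -> no change
```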


Beliefs Again

This is getting too long. I think beliefs do not need to be delved into too greatly. The brain operates based on its supposed congruence with the world. That congruence is not always valid, or accurate, but sometimes it's a good enough approximation for most purposes.

What is more relevant to me is what can a belief, a thought about the world, actually do? I think this is where the magical thinking of the BPS proponents really begins to emerge. Beliefs are not some all-powerful mental phantom. I am pretty sure if I really thought I could fly, and jumped off a building, that I would get a very brief instant where I might think I was doing it, but it would be only very brief. Many of the attributions and effects of belief we hear so much about from some BPS proponents seem to be so much twaddle ... themselves the kinds of magical thinking that they want to claim we, as patients, engage in. Well, magical thinking is I think a part of how the brain works, an entirely different argument about thinking styles .... but that is a different topic. Generally speaking we call this bias, and science (and rational thinking) is in part about an attempt to strip away bias.


Placebo

I think at some point we will have to discuss the placebo effect. Let me put my current working hypothesis out there, so that people can think about it for later. I think the placebo effect is really only about attitudes, how we think about things. It does not alter reality, but only our perception. I think it's a great big dud, and has been overused and oversold. We need proper controls in experiments, so it has some value (a slightly different topic), but the belief in placebo looks to me to be mostly a false belief.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
But a lot of hard core AI people were not trying to model the brain but rather study different ways computation worked and ways intelligent behaviour could be formed. I don't think too many people really believed the logical deduction systems used in things like expert systems represented the way the brain works but they did and still do form interesting ways to solve complex problems.

Again I agree but I often hear people talk as if they assume that AI models are supposed to be telling us something about brains, even if like Marcus and Gallistel they are arguing against that. There has been a big AI meeting in Ireland I believe this summer and there were people going there whose primary interest is in neurophysiology. AI models have been seductive.
 

Jonathan Edwards

"Gibberish"
Messages
5,256
Or is it just an internal representation of key features of something, rather than an image, even if that was the input technique?

An interesting question. What is an image? I strongly suspect that the images of our perceptions and dreams and recalled memories are encoded in terms of proposition-like features signed by positions in a more or less linear array of electrical potentials. So an image of five red roses, in trivial terms, would be built of a signal for red, a signal for 'five of them' and a signal for rose shape. You can then add signals for 'the top one being smaller'. The fully constructed experience is then sensed in 3D, or if a faint memory what Marr I think called 2 1/2 D - the depth being just hinted at.
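A trivial data-structure sketch of that sort of propositional encoding, with made-up field names: the remembered 'image' stored as a small bundle of feature signals rather than as pixels:

```python
# Hypothetical propositional encoding of the remembered "image".
five_red_roses = {
    "object_shape": "rose",
    "count": 5,
    "colour": "red",
    "modifiers": [{"which": "top", "relation": "smaller"}],
    "depth": "2.5D",   # Marr-style: depth only hinted at
}

# "Constructing" the experience would mean unpacking these signals,
# not replaying a stored bitmap.
print(f'{five_red_roses["count"]} {five_red_roses["colour"]} '
      f'{five_red_roses["object_shape"]}s, top one smaller')
```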
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
At least at the moment there seems to be a degree of humility amongst the old guard figures about the fact that people need to go back to the drawing board and build things up again bit by bit.
There is a counter movement in AI that was saying this from at least the 80s, and might have been saying it for much longer. Some AI people also divide the community up into neats and scruffs, but they also divide it into rule-based and the rest ... not really labelled in my day, but including connectionist systems. Often referred to as learning systems. This also includes things like genetic algorithms.

Searle clearly put paid to the idea that symbols have meaning, or that you can write meaning in a definition. A definition of a word is just more symbols, more squiggles, again without meaning. You get meaning in squiggles when a squiggle savvy human being looks at the symbols and not before.

There used to be a statement on the wall of my PhD supervisor's office, signed by me and a friend. This is the anecdote I was referring to earlier. It said something like "There was no meaning on the Rosetta Stone before someone deciphered it", though I have forgotten the exact line. The point is that squiggles have meaning only when a human brain is involved, particularly one versed in that kind of squiggle. No amount of squiggle manipulation gives you understanding, nor will it give a machine understanding ... and the relationship between meaning and understanding is a different topic.
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
So there is a danger that connectionist nets as designed so far simply do not start to use the sort of inferential processes neurons can use.
Simplistic 70s and 80s style nets. There were signs modelling was moving away from this when I left the field in 95. My own neural modelling code allowed for different connection properties within and between different clusters. Different rules even. Not everything was about signal summation and threshold firing. I was even considering how to factor in hormonal influences.
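Something like the following toy structure, with entirely hypothetical parameters rather than the actual modelling code described here: connection probability and strength follow one rule within a cluster and a different rule between clusters:

```python
import numpy as np

rng = np.random.default_rng(5)
cluster_sizes = [5, 8, 6]                       # three hypothetical clusters

# Per-pair rules: (connection probability, weight scale); diagonal = within-cluster.
rules = {(i, j): ((0.6, 1.0) if i == j else (0.1, 0.3))
         for i in range(3) for j in range(3)}

def build_block(n_from, n_to, prob, scale):
    """Random connectivity block obeying one rule."""
    mask = rng.random((n_to, n_from)) < prob
    return mask * rng.normal(scale=scale, size=(n_to, n_from))

W = np.block([[build_block(cluster_sizes[j], cluster_sizes[i], *rules[(i, j)])
               for j in range(3)] for i in range(3)])
print(W.shape, f"{(W != 0).mean():.2f} of possible connections present")
```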

The much more difficult issue was how to structure and enforce learning, and when you should and shouldn't do that. Or how this influenced learning strategies, data sets, and so on. I was thinking about this when my brain fried.

Problems arose because there was a huge separation between AI, cognitive science and neuroscience. I suspect it's got even worse.
 

user9876

Senior Member
Messages
4,556
An interesting question. What is an image? I strongly suspect that the images of our perceptions and dreams and recalled memories are encoded in terms of proposition-like features signed by positions in a more or less linear array of electrical potentials. So an image of five red roses, in trivial terms, would be built of a signal for red, a signal for 'five of them' and a signal for rose shape. You can then add signals for 'the top one being smaller'. The fully constructed experience is then sensed in 3D, or if a faint memory what Marr I think called 2 1/2 D - the depth being just hinted at.

So to me an image is literally whatever is picked up on the sensors (neural signals from cones and rods in the eye, or for a computer the signal from the sensor chip). The representation of objects as you suggest, and as I would agree, is an internal representation of the content of the image, which is different (and much more compact!). With language describing physical relationships between objects there are examples where people don't remember the detail of what is specified in the sentence but only the consequential relationships between the objects. I suspect there is similar work looking at images and image processing but I have never looked.
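Purely to put rough, invented numbers on that difference in compactness:

```python
import numpy as np

# The literal image: whatever the sensor array reports, one value per sensor.
raw_image = np.zeros((1000, 1000, 3), dtype=np.uint8)   # hypothetical 1-megapixel sensor
# A compact internal description of the content of that image.
content = {"object": "rose", "count": 5, "colour": "red"}

print(raw_image.nbytes, "bytes of raw sensor signal")        # 3,000,000 bytes
print(len(str(content)), "bytes, roughly, for the content description")
```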
 

alex3619

Senior Member
Messages
13,810
Location
Logan, Queensland, Australia
What is an image? I strongly suspect that the images of our perceptions and dreams and recalled memories are encoded in terms of proposition-like features signed by positions in a more or less linear array of electrical potentials. So an image of five red roses, in trivial terms, would be built of a signal for red, a signal for 'five of them' and a signal for rose shape.

I strongly suspect this is wrong. We have to be very careful not to just regress to symbol manipulation. The brain can use symbols, but I do not think it's a symbolic machine. How is five red roses different to squiggle squaggle squaggle? There is a lot more to it, and I think it has to do with how things are interconnected. Now you might, with sufficient technology, be able to reduce the neural activity representation to some electrical signals in some kind of multidimensional array. This is what is wrong with a lot of early connectionist writing too, and other representations. That reduced measure is not the actual neurons, or how they interconnect. It is at best a weak hypothesis.

I think memory is not so easily dissected. I think it's much messier.
 

cigana

Senior Member
Messages
1,095
Location
UK
I think when he said it he was probably right. Even Feynman makes mistakes about what the theory means in his introduction to his Lectures vol 3. I suspect that it was not until the Aspect experiments sorted out the EPR problem, and extensions of field theory such as the Nambu-Goldstone theorem showed just how general and uncompromising the structure of the theory was, that people began to see what Leibniz had predicted - that it was a theory of dynamic indivisibles, and true dynamic indivisibles can have no internal structure anything like aggregate matter as we know it, not even von Neumann processes 1 and 2. What Feynman could have said is that all 'interpretations' of QM are wrong - QM needs no interpretations.
Out of interest do you remember what the mistakes in Feynman were?
Why do you think QM needs no interpretation?
 

Marco

Grrrrrrr!
Messages
2,386
Location
Near Cognac, France
So to me an image is literally the image that is what ever is picked up on sensors (neural signals from cones and rods in the eye, or for a computer the signal from the sensor chip). The representation of objects as you suggest and as I would agree is an internal representation of the content of the image which is different (and much more compact!). With language describing physical relationships of objects there are examples where people don't remember the detail of what is specified in the sentence but the consequential relationships between objects. I suspect there is similar work looking at images and image processing but have never looked.

The structure-from-motion/motion-from-structure debate would suggest that the image formed at the rods and cones isn't enough, or at least that additional information is needed to interpret it. Is the image you're seeing a trapezium in the horizontal plane or a rectangle angled to the viewer? In the absence of other cues such as lighting or texture, both interpretations are possible and both are made, which leads to some interesting effects. But the interpretation, whichever one is made, doesn't come from the image - there just isn't enough information there.
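A small sketch of that ambiguity under a toy pinhole projection (all coordinates invented): a rectangle tilted away in depth and a flat trapezium at constant depth project to exactly the same four image points, so the image alone cannot decide between them:

```python
import numpy as np

f = 1.0                                   # focal length of a toy pinhole camera

def project(points_3d):
    """Pinhole projection: (X, Y, Z) -> (f*X/Z, f*Y/Z)."""
    p = np.asarray(points_3d, dtype=float)
    return np.column_stack((f * p[:, 0] / p[:, 2], f * p[:, 1] / p[:, 2]))

# A rectangle tilted back in depth: the top edge (Y = +1) is farther away than the bottom.
rectangle = [(-1,  1, 4), (1,  1, 4),
             (-1, -1, 2), (1, -1, 2)]

image = project(rectangle)                # top edge projects shorter: a trapezium on the retina

# A flat trapezium at constant depth Z0 whose corners land on the same image points.
Z0 = 3.0
trapezium = np.column_stack((image * Z0 / f, np.full(len(image), Z0)))

print(np.allclose(project(trapezium), image))   # True: same image, two different shapes
```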