What does it mean to represent? A semantic challenge for computational neuroscience.

Philosophy, the love of wisdom, and science, the pursuit and application of knowledge, are kind of like siblings. Sometimes they fight, but most of the time they get along quite well. Today, I want to show how the two relate to one another by developing two stories. The first is a scientific paper by Zipser and Andersen from the late 1980s, and the second is a philosophical paper by Rick Grush, which places the first within a philosophical framework.

To see how these papers interact, you should know a little of the science underlying the Zipser and Andersen paper. When we move our eyes, the light rays entering our pupils sweep across different locations on the back of the eye. This area is called the retina, and it is packed with special light-sensing cells. Depending on which direction the eyes are pointing, different retinal cells become active and send electrical signals along the optic nerve connecting eye to brain. Thus, to make sense of where an object is in space relative to the viewer, it would be helpful to combine information about both what light is hitting the retina and where the eye is pointing. The first paper asks how cells in the posterior parietal cortex (PPC) integrate retinal and eye-position information to locate objects relative to the animal.

Zipser and Andersen conducted a set of simple and elegant experiments. First, they had a monkey fixate at the center of a screen. While the monkey fixated there, they displayed a spot of light at an off-center location on the screen. When light from this spot reached the retina, it strongly activated a PPC neuron, whose activity they monitored using an electrode implanted in the monkey's brain. Next, they had the monkey keep its head fixed in place while looking at a point left of center, and moved the spot stimulus to the left as well so that it activated the same retinal cells. They repeated this procedure for a total of nine different fixation points. Overall, they found that the direction of gaze had a strong effect on the response of the PPC neuron, even though the spot stimulus always activated the same retinal cells (Figure A, left).

Figure A: comparison of experimental and model-derived spatial gain fields. Left: spatial gain field for a PPC neuron. Right: spatial gain field for a hidden layer unit in the model. Modified from Figure 1C and Figure 3c of Zipser and Andersen (1988). Spatial gain fields were calculated as follows: the outer circle represents the total activity after stimulus presentation. Subtracting the background activity recorded during the 500 ms before stimulus presentation, when the monkey was only fixating at a point, yields the black circle, which represents the activity due to visual input alone. The annulus between the two circles represents the activity due to eye position alone.
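To make the caption's arithmetic concrete, here is a toy version of the decomposition in Python. The firing rates are made-up numbers for illustration, not data from the paper:

```python
# Hypothetical firing rates (spikes/s) for one fixation point; the actual
# values in Zipser and Andersen (1988) differ.
total_activity = 42.0  # mean rate after stimulus onset -> outer circle
background = 18.0      # mean rate during the 500 ms of fixation alone

visual_component = total_activity - background  # black circle: visual input only
eye_position_component = background             # annulus: eye position only

print(f"visual: {visual_component} spikes/s, "
      f"eye position: {eye_position_component} spikes/s")
```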

Next, they looked at how the responses of PPC cells changed when the animal fixated at a single point straight ahead while the stimulus was moved to different locations on the screen, this time recording from many PPC neurons instead of just one. When they graphed the results, they found quite a bit of variation in the responses across cells (Figure B, top).

To understand these responses better, they built a computational simulation of their results: an attempt to model what the brain actually does. The simulation has three parts. The first part is the inputs, which carry the same kind of information as cells that respond to eye position or cells that respond to retinal signals. These inputs are connected to a second, "hidden" layer of model cells, which can be thought of as the PPC cells in the brain. These cells in turn connect to a final layer that predicts where the stimulus is relative to the head. Training the network with back-propagation, they found that their model could indeed predict the location of the stimulus relative to the monkey's head. Furthermore, the hidden layer generated responses that looked similar to the experimental ones (Figure A, right; Figure B, bottom). In short, although they were only working backwards from inputs and outputs, they were ultimately able to write a piece of computer code that behaves very similarly to the PPC. This exciting tool lets researchers dissect the inner workings of the PPC much more easily than they can with a monkey. Thus, the model serves as a new system with which scientists can study how the brain computes the location of objects.
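For the curious, here is a minimal sketch of such a network in Python, written from the paper's description rather than from the authors' code. The layer sizes, input encodings, and learning rate are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: an 8x8 retinal grid, 32 eye-position units, 25 hidden units.
N_RETINA, N_EYE, N_HIDDEN, LR = 64, 32, 25, 0.05

# Centers of the retinal units, laid out on a grid over [-1, 1] x [-1, 1].
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                            np.linspace(-1, 1, 8)), -1).reshape(-1, 2)
# Each eye-position unit responds linearly to gaze angle with a random slope,
# a rough stand-in for monotonic eye-position coding.
slopes = rng.normal(size=(N_EYE, 2))

def encode(retinal_pos, eye_pos):
    """Input layer: a Gaussian bump over the retinal grid plus ramp-coded gaze."""
    retina = np.exp(-np.sum((grid - retinal_pos) ** 2, axis=1) / 0.1)
    return np.concatenate([retina, slopes @ eye_pos])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 0.1, (N_HIDDEN, N_RETINA + N_EYE))  # input -> hidden
W2 = rng.normal(0, 0.1, (2, N_HIDDEN))                 # hidden -> output

for step in range(20000):
    retinal_pos = rng.uniform(-1, 1, 2)  # where the spot lands on the retina
    eye_pos = rng.uniform(-1, 1, 2)      # where the eyes are pointing
    target = retinal_pos + eye_pos       # head-centered location of the spot

    x = encode(retinal_pos, eye_pos)
    h = sigmoid(W1 @ x)                  # hidden layer: the "model PPC cells"
    y = W2 @ h                           # predicted head-centered location

    # Back-propagate the squared error and nudge the weights.
    err = y - target
    W2 -= LR * np.outer(err, h)
    dpre = (W2.T @ err) * h * (1 - h)
    W1 -= LR * np.outer(dpre, x)
```

Once a network like this is trained, you can replay the experiments on it: present the same "retinal" input under different simulated eye positions and record each hidden unit's activity, which is roughly how model-derived gain fields like the one in Figure A can be read out.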

What’s the philosophical take on all of this? In the philosophical paper I alluded to earlier, Grush tackles the question of how we can know whether a system is performing a computation. He appeals to a definition of computation by Churchland, Koch, and Sejnowski, who are prominent computational neuroscientists. It states that any physical system can be a computer so long as it meets two conditions. First, the states of that physical system can be taken to represent states of another system. For example, the computer on which I am typing this article transforms the open-or-closed states of tiny transistors inside a microchip into letters on my screen. Second, the states of that system must take part in a function or algorithm. In the example of my laptop, a special algorithm is required to translate transistor states into a visual image. Given this definition, Grush challenges us to think further about the first condition: how, exactly, do we know whether a physical system is representing something else? He brings up two theories of representation: informational semantics and biosemantics.

Figure B: comparison of experimental and model-derived receptive fields. Top: receptive fields for various PPC neurons. Bottom: receptive fields for various hidden layer units in the model. Modified from Figure 2 and Figure 5 of Zipser and Andersen (1988). To generate receptive fields, the monkey fixated at a single point and was presented with spot stimuli at various locations on the screen. Responses were recorded, and a smooth surface was fit to the data points.
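As an aside, here is one way such a smooth surface could be fit in Python. The interpolation method and the numbers below are my own stand-ins, not the fitting procedure used in the paper:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical stimulus positions (degrees of visual angle) and fake firing
# rates peaked near (10, -5); not data from Zipser and Andersen (1988).
stim_xy = rng.uniform(-40, 40, size=(17, 2))
rates = 30 * np.exp(-np.sum((stim_xy - [10.0, -5.0]) ** 2, axis=1) / 500)

# Interpolate a smooth receptive-field surface over a regular grid.
xx, yy = np.meshgrid(np.linspace(-40, 40, 50), np.linspace(-40, 40, 50))
surface = griddata(stim_xy, rates, (xx, yy), method="cubic")
```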

Informational semantics is the idea that some systems represent other things just because they carry information about them. For instance, if a cell in my visual cortex becomes active if and only if there is a black square in my visual field, then we can say that that cell carries information about the presence of a black square in my visual field. Unfortunately, Grush tells us, many problems plague informational semantics. One issue he mentions is called the problem of the ubiquity of information. If X carries information about Y whenever Y causes X, then you can conclude that the temperature of the outer surface of my coffee mug carries information about the ambient temperature of the room, the volume of the coffee in my mug, and so on. Informational semantics would then force us to conclude that the surface of my mug represents all the things that somehow affect its temperature. This is a problem because the representational capacity of the brain, which we think gives rise to mental representations, would no longer be anything special. Another way of posing the problem is to ask: does the surface of the mug have mental representations? Our intuition says it does not.

Another account of representation, called biosemantics, partially addresses this problem. It says that system X represents Y so long as Y causes X and X has been selected to represent Y through natural selection. To illustrate how this theory gets around the ubiquity-of-information problem, Grush asks us to think about the pressure-sensitive cells that many mammals have on their feet: the heavier the animal, the more these cells fire. According to informational semantics, those cells represent the weight of the animal. But the more the animal eats, the heavier it becomes and the more those cells fire, so informational semantics would make the strange claim that those cells carry information about how much the animal has eaten recently. The biosemantic perspective gets around this issue. It says that those cells do not represent how much the animal has eaten, because natural selection did not "choose" them for that particular function. Rather, natural selection chose those cells for their ability to carry information about the surface on which the animal is walking. Appealing to natural selection appears to solve a lot of problems: we can say that the mug doesn't represent ambient temperature or the volume of coffee, because those functions were not chosen by natural selection!

However, if we assume that evolution led neurons to represent specific states of the organism, then neuroscience faces an even larger problem. Grush brings us along for the following thought experiment. Imagine that lightning strikes a swamp and spontaneously arranges the molecules in it to form a human who is identical to you, holding a copy of this very blog post. This swamp-person appears at the same moment as you, reading the same word, with its eyes pointed the same way and its neurons firing in the same pattern. Would the swamp-person be perceiving anything? According to the biosemantic perspective, it would not, because the spontaneously appearing swamp-person lacks the evolutionary history required to associate information with a state. It would not be perceiving anything because it would not be representing anything.

Okay, how does all this connect with the science? Well, the view of many neuroscientists is that neural representation forms the basis of mental representation. That is to say, our mental states (such as imagining, remembering, and thinking) are built out of neural representations. This is an attractive idea because it promises to solve the mysteries of the mind by studying the brain. But where do we stand at the end of all of this? It appears that we need a new theory of representation, one that could tell us, in principle, why some states represent something while others don't. I lack the space to cover other accounts of representation here, so suffice it to say that other theories exist, and hopefully they'll be a topic of a future post!

References:

Churchland, P. S., Koch, C., & Sejnowski, T. J. (1993). What is computational neuroscience? In E. L. Schwartz (Ed.), Computational Neuroscience (pp. 46-55): MIT Press. 

Grush, R. (2001). The Semantic Challenge to Computational Neuroscience. In P. K. Machamer, R. Grush, & P. McLaughlin (Eds.), Theory and Method in the Neurosciences (pp. 155-172): University of Pittsburgh Press.

Zipser, D., & Andersen, R. A. (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331(6158), 679-684. doi: 10.1038/331679a0