
Information Theory: The Holy Grail of Cortical Computation?

James V. Stone

Behavioral and Brain Sciences commentary on:
"In search of common foundations for cortical computation"
W. A. Phillips and W. Singer, 20(4), p. 698, December 1997.

Simple hypotheses are intrinsically attractive and, for this reason, need to be formulated with the utmost precision if they are to be testable. Unfortunately, it is hard to see how the authors' hypothesis might be unambiguously refuted. Despite this, the authors have provided much evidence consistent with the hypothesis, and have proposed a natural and powerful extension of information theoretic approaches to learning.

Theories: the seductive, the simple, and the true

The idea that all tasks undertaken by the neocortex are executed using a single computational principle is both seductive and compelling. Phillips and Singer have marshalled evidence from diverse sources to provide a cogent case in favour of this idea. However, in common with all universal explanations, there is a danger that its elegance is mistaken for evidence of its truth. There is thus all the more reason to treat such ideas with caution. This is not intended as a criticism of the idea itself, but as an antidote to the inevitable temptations associated with such attractive ideas. As Einstein remarked, a theory should be as simple as possible, but no simpler.

Characterising the Holy Grail of cortical computation

The idea of a common foundation for cortical computation was explored as early as 1970 by Marr (1970), and has been discussed by a number of authors since (Creutzfeldt 1978, Szentágothai 1978, Ebdon 1993, Douglas and Martin 1994, Stone 1996a). To date, the idea has remained an intriguing possibility, rather than a testable hypothesis. However, despite the efforts of Phillips and Singer to place this hypothesis on a firm footing, it is not obvious how one would set about falsifying the hypothesis in the form provided in this paper. That is, the authors have not indicated what type of experimental findings would suffice to refute the hypothesis. For example, if it were possible to demonstrate that two cortical regions used qualitatively different operations, would this constitute a refutation? For a number of reasons, I think it would not; but this is due to my own interpretation of the hypothesis, and not a logical imperative of the hypothesis as expressed in the paper. The issue is not whether such a finding would constitute a refutation, but rather, whether any finding is capable of unambiguously refuting the hypothesis. For example, the authors note that, although cortical columns are not central to the hypothesis developed here, criticism of this idea suggests limitations upon anatomical arguments for commonalities (sect. 2.1, para. 5). This is a fair point, but there is no clear indication of what types of argument could not also be subject to such limitations. Surely, a necessary first step in searching for the Holy Grail of cortical computation is to establish what form it might take, and what forms it could not take. Whilst the search for common foundations for cortical computation is undoubtedly a worthwhile endeavour, it would be helpful to establish conditions under which to call off the search, and conditions under which we might shout, Eureka!

Is maximising information enough?

Several authors have devised neural network models which learn by maximising information theoretic measures (Linsker 1988, Becker 1996, Stone 1996b). As noted by Phillips and Singer, such rules operate in the absence of any ethological considerations, so that even variables which are irrelevant to an animal's behaviour would be extracted by such rules. This appears to be a fundamental limitation of information theoretic models, which either extract any variables (Linsker 1988), or only variables which conform to certain assumptions implicit in the learning algorithm (Becker 1996, Stone 1996a). However, by introducing the idea of cross-stream contextual inputs as constraints on learning, the authors have effectively overcome an important limitation of conventional information theoretic approaches. Whilst others have argued that information maximisation methods are appropriate only for low level sensory processing (presumably because they tend to extract all variables in the input data), the authors argue that contextual information can be used to extract selectively only those variables which are of direct relevance to a particular set of behaviours. This important insight opens up the possibility that a single principle can be used to account for learning of low level perceptual invariances, as well as higher order variables (such as the association between the shape and taste of a fruit). Moreover, it suggests a natural extension to a whole class of neural network models which learn by explicitly maximising information theoretic quantities (Linsker 1988, Becker 1996, Stone 1996b).
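The indiscriminate character of such rules can be illustrated with a minimal sketch (an illustration of the general principle, not a model from the target article or from any of the cited papers). For a single linear unit with Gaussian inputs and fixed output noise, maximising transmitted information reduces to maximising output variance under a weight-norm constraint, which Oja's Hebbian rule achieves by converging on the first principal component of the input. The unit extracts the direction of greatest variance regardless of whether that variable has any behavioural relevance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory" input: 2-D Gaussian data with one dominant direction of variance.
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)

# Oja's rule: Hebbian learning with an implicit weight-normalisation term.
w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)  # Hebbian term minus decay keeps |w| near 1

# The learned weight aligns with the leading eigenvector of the input
# covariance: the direction of maximum variance, hence (for a Gaussian
# channel) maximum transmitted information.
evals, evecs = np.linalg.eigh(cov)
pc1 = evecs[:, np.argmax(evals)]
alignment = abs(w @ pc1) / np.linalg.norm(w)
print(alignment)  # close to 1
```

The point of the sketch is that nothing in the update rule refers to the meaning or behavioural utility of the extracted variable; any variable with large variance will do.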

Maximising Shannon entropy is hard

On a more technical level, the general approach adopted for generating learning rules shares with others the assumption that information theoretic quantities are suited to extracting variables implicit in sensory data streams. Whilst information theory provides a principled method of deriving learning rules, it is not the only, nor necessarily the best, means to this end.

In its raw form, Shannon entropy takes no account of the temporal ordering of inputs. A number of authors (Foldiak 1991, Barlow 1996, Becker 1996, Stone 1996a; 1996b) have argued for the use of learning rules based on the tendency of distal variables to vary smoothly over time. Indeed, the BCM learning rule discussed in the paper (sect. 3.5, para. 3) is important precisely because it takes advantage of the temporal sequence of inputs. More recently, Becker (1996) has used temporal smoothness as an explicit assumption in deriving an information theoretic learning rule. Whilst Phillips and Singer's learning method ignores temporal contiguity of inputs, Becker's method ignores cross-stream constraints. It remains to be seen if these two approaches can be profitably combined, and if the resultant learning method can be related to learning in the neocortex.
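The temporal smoothness idea can be made concrete with a numerical sketch (a deliberately simplified illustration, not the published algorithm of any of the authors cited above). A linear output that maximises the ratio of long-term to short-term variance, the measure used in Stone (1996b), will recover a slowly varying distal variable from inputs contaminated by a rapidly varying nuisance signal; for a linear mapping this maximisation reduces to a generalised eigenproblem:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
t = np.arange(T)

# A slowly varying "distal" variable and a rapidly varying nuisance signal.
slow = np.sin(2 * np.pi * t / 500)
fast = rng.normal(size=T)

# Two input channels, each a different mixture of the two sources.
X = np.column_stack([slow + 0.5 * fast,
                     slow - 0.5 * fast])
X -= X.mean(axis=0)

# For a linear output y = Xw, long-term variance V is estimated from the
# overall covariance, and short-term variance U from successive differences.
# Maximising V/U is a generalised eigenproblem in the two covariance matrices.
V = np.cov(X.T)                 # long-term (overall) covariance
dX = np.diff(X, axis=0)
U = np.cov(dX.T)                # short-term covariance (of differences)
evals, evecs = np.linalg.eig(np.linalg.solve(U, V))
w = np.real(evecs[:, np.argmax(np.real(evals))])

# The resulting output tracks the slow source, not the fast one.
y = X @ w
r_slow = abs(np.corrcoef(y, slow)[0, 1])
print(r_slow)  # close to 1
```

Because the smooth source has a small difference variance and a large overall variance, the leading eigenvector weights the two channels so as to cancel the fast signal, which is exactly the selectivity that raw entropy maximisation lacks.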

References

Barlow, H., "Intraneuronal information-processing, directional selectivity and memory for spatiotemporal sequences", Network: Computation in Neural Systems, 7(2), pp. 251-259, 1996.

Becker, S., "Mutual Information Maximization: Models of Cortical Self-Organisation", Network: Computation in Neural Systems, 7(1), pp. 7-31, 1996.

Creutzfeldt, O. D., "The Neocortical Link: Thoughts on the generality of structure and function of the neocortex", in Brazier, M. A. B. and Petsche, H. (eds), "Architectonics of the Cerebral Cortex", Raven Press, New York, 1978.

Douglas, R. J. and Martin, K. A. C., "The Canonical Microcircuit: A co-operative neuronal network for neocortex", pp. 131-141, 1994.

Ebdon, M., "Is the cerebral neocortex a uniform cognitive architecture?", Mind and Language, 8(3), pp. 369-403, 1993.

Foldiak, P., "Learning Invariance From Transformation Sequences", Neural Computation, 3(2), pp. 194-200, 1991.

Linsker, R., "Self-Organization in a Perceptual Network", Computer, 21(3), pp. 105-117, 1988.

Marr, D., "A Theory for Cerebral Neocortex", Proceedings of the Royal Society of London (B), 176, pp. 161-234, 1970.

Stone, J. V., "A Canonical Microfunction For Learning Perceptual Invariances", Perception, 25(2), pp. 207-220, 1996a.

Stone, J. V., "Learning Perceptually Salient Visual Parameters Using Spatiotemporal Smoothness Constraints", Neural Computation, 8(7), pp. 1463-1492, 1996b.

Szentágothai, J., "The neuron network of the cerebral cortex: a functional approach", Proceedings of the Royal Society of London (B), 201, pp. 219-248, 1978.
