Language and Reality
The Symbol Grounding Problem
Posted on May 4, 2007 by Peter Turney
There is a view that the meaning of words (more generally, of symbols) must be grounded in sensory perception or in physical interaction with the world (embodiment). If symbols were merely defined in terms of other symbols, then it seems that we would have an infinite regress; we would spin in circles in symbol space, without ever reaching meaning. If symbols must be grounded, then a computer without sensors and actuators might conceivably pass the Turing Test, yet everything it said would be meaningless (at least to the computer itself, if not to the audience). A stronger view holds that only an embodied computer could ever pass the Turing Test. Introspection suggests that our symbols are grounded, but is grounding really necessary for meaning?
Many researchers view symbol grounding as a major issue. Lakoff and Johnson argue that it is a fundamental problem for Western philosophy. Regier has developed a computational model in which spatial words, such as above, below, left, right, around, and in, are grounded in visual perception. Woods et al. have developed an algorithm that learns to classify images of objects, such as cups and chairs, by judging whether the objects appear able to support their expected function.
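To make the idea of grounding concrete, here is a toy sketch (my own simplification, not Regier's actual model) in which a few spatial words are defined directly in terms of raw coordinates rather than in terms of other words:

```python
# A toy illustration of grounding: spatial words defined over perceptual
# input (coordinates) instead of over other symbols. The geometry and the
# function names here are my own assumptions for this sketch.

def spatial_word(figure, ground):
    """Pick a spatial word for figure relative to ground.

    Each argument is an (x, y) point, with y increasing upward.
    """
    fx, fy = figure
    gx, gy = ground
    dx, dy = fx - gx, fy - gy
    if abs(dy) >= abs(dx):
        return "above" if dy > 0 else "below"
    return "right" if dx > 0 else "left"

print(spatial_word((0, 5), (0, 0)))   # above
print(spatial_word((-3, 1), (0, 0)))  # left
```

The point of the toy is only that the word bottoms out in something nonsymbolic: the regress stops at the coordinates.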
It might seem that mathematics, at least, does not require grounding in sensory perception. On this view, mathematics exists in the Platonic realm of ideas, which we see with our inner vision. However, Lakoff and Núñez argue (persuasively) that mathematics is based on metaphorical reasoning from embodied experience.
French has argued that there are subcognitive questions that can only be answered by an embodied entity, such as a human being, or possibly a very sophisticated robot. Subcognitive questions probe the network of cultural and perceptual associations that we build as we live our lives. For example, how good is the name Flugly for a glamorous Hollywood actress? I have shown that at least some of these subcognitive questions can be answered by statistical analysis of very large collections of text. How far can we go with this ungrounded approach to meaning?
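As a rough illustration of the statistical approach (a minimal sketch over a toy in-memory corpus; the real work used very large text collections, and the scoring details below, document-level co-occurrence and pointwise mutual information, are my own simplifications), one can measure how strongly two words are associated purely from text:

```python
# Toy pointwise mutual information (PMI) from document co-occurrence.
# A hypothetical four-document "corpus" stands in for a large collection.
import math

corpus = [
    "the glamorous actress dazzled the crowd",
    "an ugly duckling is not glamorous",
    "the actress wore a glamorous gown",
    "the troll was ugly and mean",
]

def doc_freq(word):
    return sum(word in doc.split() for doc in corpus)

def pmi(w1, w2):
    """PMI estimated from how often two words share a document."""
    n = len(corpus)
    both = sum(w1 in doc.split() and w2 in doc.split() for doc in corpus)
    if both == 0:
        return float("-inf")
    return math.log2(both * n / (doc_freq(w1) * doc_freq(w2)))

# The salient fragment of "Flugly" associates more weakly with
# "glamorous" than "actress" does, hinting at why the name sounds wrong.
print(pmi("actress", "glamorous"))  # positive
print(pmi("ugly", "glamorous"))     # lower
```

Nothing in this calculation sees, hears, or touches anything; the association is recovered entirely from patterns of word use.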
Suppose that symbols are defined in terms of other symbols, either explicitly, as in a dictionary, an encyclopedia, or a semantic network, or implicitly, as in a very large collection of text. It seems possible that meaning might emerge from the complex interconnections among these symbols, without grounding in perception. I have not yet decided whether symbols must be grounded. Perhaps meaning consists of dynamic interconnections among symbols. Perhaps some meaning (e.g., the meaning of red) must be grounded in perception, but other meaning (e.g., the meaning of justice) does not need it. What is so special about sensory perception? Language can be used for perception (I can read about a distant country) or for action (talking can change people, and it can change things in the world). In the end, whether it’s symbols or perceptions, it’s all bits and bytes to a computer.
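To give that closing thought a concrete shape, here is a toy sketch of meaning emerging from interconnections alone (the tiny invented corpus and the sentence-level counting are my own simplifications): each word is represented purely by the company it keeps, and similarity of meaning falls out of symbol-to-symbol statistics, with no perception anywhere.

```python
# Toy distributional semantics: a word's "meaning" is its co-occurrence
# vector over other words, so meaning lives entirely in symbol space.
import math
from collections import Counter

sentences = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

def vector(word):
    """Counts of the words that share a sentence with the given word."""
    v = Counter()
    for s in sentences:
        tokens = s.split()
        if word in tokens:
            v.update(t for t in tokens if t != word)
    return v

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# "cat" and "dog" come out more similar than "cat" and "cheese" purely
# because they occur in similar symbolic contexts; no sensor ever saw
# a cat or a dog.
print(cosine(vector("cat"), vector("dog")))
print(cosine(vector("cat"), vector("cheese")))
```

Whether this kind of interconnection amounts to meaning, or merely mimics it, is exactly the open question.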