Analogy and Metaphor
Meaning, Mapping, Panalogy, and Netflix
Posted on July 24, 2009 by Peter Turney
“Never swallow anything whole. We live perforce by half-truths and get along fairly well as long as we do not mistake them for whole-truths, but when we do so mistake them, they raise the devil with us.” — Alfred North Whitehead, Dialogues of Alfred North Whitehead
There are dozens of theories about meaning, but they share a common element: meaning (semantics) is about mapping. We understand a thing, we give it meaning, by mapping it to another thing. Furthermore, and this is a crucial point, one mapping is not enough. The more mappings we make, the better we understand. A single mapping gives us only part of the truth.
We understand things (words, events, perceptions, people, signs) by relating (connecting, mapping) them to other things. This analogy-making is how we understand everything, from high-level ideas and concepts down to low-level perceptions:
“We have repeatedly seen how analogies and mappings give rise to secondary meanings that ride on the backs of primary meanings. We have seen that even primary meanings depend on unspoken mappings, and so in the end, we have seen that all meaning is mapping-mediated, which is to say, all meaning comes from analogies.” [emphasis added] — Douglas Hofstadter, I Am a Strange Loop
But one analogy is not enough. No analogy is perfect, and we compensate for the imperfections by using multiple analogies and by blending them together. Marvin Minsky calls this panalogy (parallel analogy):
“If you ‘understand’ something in only one way, then you scarcely understand it at all—because when you get stuck, you’ll have nowhere to go. But if you represent something in several ways, then when you get frustrated enough, you can switch among different points of view, until you find one that works for you!” — Marvin Minsky, The Emotion Machine
If you want to learn more about how we blend analogies, I highly recommend The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities, by Gilles Fauconnier and Mark Turner.
Perhaps the most important lesson from the Netflix Prize has been that many models are better than one (see the sketch after these quotes):
- “the biggest lesson for ML from the Netflix contest has been the formidable performance edge of ensemble methods” — John Langford, Netflix nearly done
- “We have learned that ensemble methods are the solution for more accuracy.” — Daniel Lemire, After Netflix? What next?
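To make the ensemble idea concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. It is a toy illustration of blending by unweighted averaging, not the method of any Netflix Prize team; the data, the choice of models, and all the parameters are arbitrary choices for the example.

```python
# Toy illustration of an ensemble: several imperfect models, blended
# by simple averaging. Not the Netflix winners' method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)

# Synthetic regression data standing in for the rating-prediction task:
# a linear part plus a nonlinear wrinkle plus noise.
X = rng.uniform(-3, 3, size=(500, 5))
y = X[:, 0] - 2.0 * X[:, 1] + np.sin(3.0 * X[:, 2]) + rng.normal(scale=0.3, size=500)
X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

# Three different "half-truths": each model captures part of the structure
# and misses the rest in its own way.
models = [
    Ridge(alpha=1.0),
    DecisionTreeRegressor(max_depth=4, random_state=0),
    KNeighborsRegressor(n_neighbors=10),
]

predictions = []
for model in models:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    predictions.append(pred)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{type(model).__name__:>25s} RMSE: {rmse:.3f}")

# The blend: an unweighted average of the individual predictions.
# By convexity of squared error, at each test point the blend's squared
# error is at most the average of the individual squared errors.
ensemble_pred = np.mean(predictions, axis=0)
ensemble_rmse = mean_squared_error(y_test, ensemble_pred) ** 0.5
print(f"{'Ensemble (mean blend)':>25s} RMSE: {ensemble_rmse:.3f}")
```

An unweighted mean is the crudest possible blend; the Netflix teams went much further, learning the blending weights themselves from held-out data. But the basic point already shows up here: models that err in different ways cover for one another when combined.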
I view this lesson as confirmation of the Panalogy Principle: we don’t understand anything until we understand it in many ways. There are no whole-truths, but we can get by reasonably well with a large number of half-truths.