Posted on May 24, 2008 by Peter Turney
I believe that math is very important: my first paper was mathematical (How many ways can an …).

Still, I don’t have a solid argument here, only a vague intuition. What is the nature of the algorithmic processes that make humans smart? Are the algorithms more mathematical, like support vector machines, or less mathematical, like genetic programming? I am tempted by the beauty of the math behind support vector machines, but my intuition is that humans use algorithms that are less mathematical.

For example, Bayesian inference is an attractive mathematical approach to reasoning, but Kahneman and Tversky and others have shown that human reasoning is not quite so rational. This does not imply that machine learning and AI researchers should avoid Bayesian approaches, but it does suggest that there are more important things for intelligent behaviour than getting accurate probability estimates. What are those things? I think Dedre Gentner is pointing us in the right direction.

However, my aim here is only to suggest that we should be moderate in our use of math; it is not to endorse any particular alternative approach. As researchers, we are exploring a space of algorithms, searching for “intelligent” algorithms, and it seems to me that our exploration is heavily biased towards the more mathematical ones, yet there is no evidence to support such a strong bias. It looks like physics envy to me. I admit I’m a physics groupie. The first step to rehabilitation is admitting that you have a problem.
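As a toy illustration of the gap between Bayesian inference and typical human judgment, here is a sketch of the posterior calculation behind the classic base-rate problems studied in the Kahneman and Tversky tradition. The numbers are the standard textbook illustration, not from any particular study: people tend to guess a probability near the test's accuracy, while the Bayesian answer is far smaller.

```python
# Toy sketch: Bayes' rule on a classic base-rate problem.
# Illustrative numbers only (not from a specific experiment).

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) by Bayes' rule."""
    p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_positive

# A condition affecting 1 in 1000 people; a test that always
# detects it but has a 5% false-positive rate.
p = posterior(prior=0.001, sensitivity=1.0, false_positive_rate=0.05)
print(f"P(condition | positive) = {p:.3f}")  # about 0.020, not 0.95
```

People typically answer something near 95% here, off by a factor of almost fifty, which is the kind of result that makes one doubt that accurate probability estimation is the core of human intelligence.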