Is generalisation a limit to intelligence?

From: Joaquim Almgren Gāndara (claw@lords.com)
Date: Sat Dec 02 2000 - 06:46:55 MST


First of all, buzz away if I'm below SL4; I won't mind. However, I'd really
appreciate it if someone pointed out why generalisation is not a problem. If
you've already discussed this on the list, just tell me what conclusions you
reached, or point me to a book or paper on the subject.

We need to generalise in order to acquire knowledge and apply it in new
circumstances. The exception is that we also have a lot of highly specialised
sensorimotor knowledge (at least according to Piaget), but here I'll use the
term "knowledge" to mean knowledge of more or less abstract concepts, the stuff
we usually associate with intelligence.
    I'm sure most people on this list who are moderately interested in neural
networks have heard of the phenomenon known as "overfitting". For those of you
who haven't: overfitting occurs when a network with too many free parameters
(too many neurons and weights relative to its training data) fits its training
examples so exactly that it becomes rigid. A network that overfits can't
generalise; it has lost the fuzziness and organic quality that neural networks
are known for. On the other hand, a network that generalises isn't entirely
accurate on the cases it has seen; in some sense, it's not as smart as it could
be. So it's obviously a trade-off: the network is either rigid or organic. I
believe there is no middle ground, since you can always come up with awkward
exceptions to the norm that refuse to be neatly classified.
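
To make the trade-off concrete, here is a rough Python sketch. It uses plain
polynomial fitting rather than a neural network (the same effect shows up
there), and the data, degrees and numbers in it are made up purely for
illustration:

import numpy as np

# Ten noisy samples of an underlying sine curve: a "norm" plus noisy exceptions.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, size=10)

# Fresh points the model has never seen, and the true values at those points.
x_test = np.linspace(0.0, 1.0, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-9 fit passes through every training point (the rigid extreme: it
# has memorised the exceptions), but it typically does worse on the unseen
# points than the degree-3 fit, which smooths over the noise and generalises.
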
    Given the premises that generalisation is an integral part of intelligence,
and that generalisation is a way of minimising storage space, doesn't this pose
a problem? Is there a point that intelligence can't cross because it relies on
generalisation? If an AI stores everything it comes across, it might not
generalise properly (it ends up being an expert system -- *shudder*), which
means it won't be able to handle new situations effectively. However, if it
does generalise, it won't have the crystalline quality of today's computers,
i.e. it will make mistakes. Do we want an AI to make mistakes? Doesn't that
imply a limit to intelligence? Are all generalisations bad?
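
Here is a toy sketch of the two extremes I have in mind; the stored "facts" and
the rule are invented purely to illustrate the point:

# Extreme 1: store every case verbatim (the expert-system route). Perfectly
# accurate on what it has seen, helpless on anything new.
memory = {(2, 3): 5, (10, 7): 17, (4, 4): 8}

def lookup(a, b):
    return memory.get((a, b))  # returns None for any pair it hasn't stored

# Extreme 2: generalise to a rule ("these pairs seem to add up"). It handles
# novel cases, but it will be wrong if an exception to the rule turns up.
def generalised(a, b):
    return a + b

print(lookup(2, 3), generalised(2, 3))  # 5 5     -- both handle a stored case
print(lookup(6, 9), generalised(6, 9))  # None 15 -- only the rule handles novelty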

To sum up: is there some way to combine the fuzzy quality that intelligence
relies on with the rigid quality of never making a single mistake? Is
generalisation a limit to intelligence?

- Joaquim Gāndara
. claw@lords.com
. http://www.ite.mh.se/~joaal98
. http://games.scandit.com
. http://www.mp3.com/sdtank


