Re: Is generalisation a limit to intelligence?

From: Joaquim Almgren Gāndara (claw@lords.com)
Date: Sat Dec 02 2000 - 15:30:48 MST


Hi again,

> But, the refutation: A sufficiently intelligent, self-aware system is
> quite capable of modifying itself to make itself MORE ERROR-PRONE if it
> finds through experimentation that this makes it more intelligent ;>

Yes, but that isn't necessarily a solution. You might find that the system gets
stuck in a loop: at first it is too error-prone to realise that it is intelligent,
and once it has worked out all the glitches it realises that it needs to be more
error-prone again, which brings it back to the beginning. Either that, or it
might find the perfect error/perfection or generalising/overfitting ratio.
Perhaps then it would find a second dimension along which to raise its intelligence?
Or do no other variables raise intelligence per se?

> So the problem you describe seems to apply to a far-future
> situation of hardware plenty...

Yep. This is (currently) a purely hypothetical discussion about the theoretical
limits to intelligence.

> What I mean is that even if there is a LOT of data, and it's highly varied,
> there is still a certain amount of overfitting that is inevitable.

That can't be right. Take a single perceptron -- a very basic artificial
neuron -- that classifies a set of nearly linearly separable points of types A
and B in 2D space using a single line. With a lot of varied data in your
training set, you can't get any meaningful overfitting from just a single
neuron; it simply doesn't have the capacity to memorise the noise. Overfitting
isn't inherent in all generalisation, it's a result of using soft-/hardware that
is too sophisticated for the problem at hand. It's like trying too hard. The
solution is just to not try too hard -- to use minimal effort -- in which case
you'll end up with an acceptable generalisation.
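
To make that concrete, here is a rough Python sketch (my own toy example, not
anything from this thread): a single perceptron trained on nearly linearly
separable 2D points with a few flipped labels. With only three parameters the
model can only draw a line, so training and test accuracy stay close no matter
how long you train -- there is nothing to overfit with.

import random

random.seed(0)

# Points of type A (label -1) below the line y = x, type B (label +1) above it,
# with a little label noise so the classes are only *nearly* linearly separable.
def make_data(n=200, noise=0.05):
    data = []
    for _ in range(n):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        label = 1 if y > x else -1
        if random.random() < noise:      # flip a few labels
            label = -label
        data.append(((x, y), label))
    return data

# Classic perceptron update rule: nudge the weights only on misclassified points.
def train_perceptron(data, epochs=50, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            pred = 1 if w1 * x + w2 * y + b > 0 else -1
            if pred != label:
                w1 += lr * label * x
                w2 += lr * label * y
                b += lr * label
    return w1, w2, b

def accuracy(data, w1, w2, b):
    correct = sum(1 for (x, y), label in data
                  if (1 if w1 * x + w2 * y + b > 0 else -1) == label)
    return correct / len(data)

train_set, test_set = make_data(), make_data()
w1, w2, b = train_perceptron(train_set)
# Training and test accuracy stay close: a line can't memorise the flipped labels.
print(accuracy(train_set, w1, w2, b), accuracy(test_set, w1, w2, b))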

> On the other hand, the more memory you have, the more of this data you can
> keep in mind for use for new model-building rounds based on new data combined
> with the old. So the maximum-memory system will achieve the minimum amount
> of possible overfitting given the data.

I can't grasp this either. It goes totally against my concept of overfitting.
I've always thought that the more sophisticated the method of generalisation,
the worse the results on easy problems -- which is why I think it's a limit to
intelligence.

- Joaquim Gāndara
. claw@lords.com
. http://www.ite.mh.se/~joaal98
. http://games.scandit.com
. http://www.mp3.com/sdtank


