RE: obstacles to unbounded intelligence

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jan 26 2002 - 21:05:48 MST


James Rogers wrote:
> I too have noticed that some cognitive processes start to get very
> difficult as things get into what I would estimate to be human-level
> and above. Basically exponential functions that are essentially
> linear for small cases but which really hit the ramp when things get
> "large", which means n > 10^9 (+/- three orders of magnitude
> depending on what I'm looking at). If my numbers mean anything (and
> they probably don't), it would kind of indicate that the human brain
> was in a sweet spot in terms of bang for the evolutionary buck.

William Calvin (in "The Ascent of Mind") made a decent argument that the
reason our brain isn't bigger is that evolution couldn't find a way to
make a woman's pelvis open any wider at birth.... His argument isn't
ironclad, but it seems pretty solid given the various pieces of evidence
he assembles.

This is good news, because it suggests that at least in the case of the
human brain, evolution did *not* stop where it did because of some kind
of size-based cognitive limit.... (Of course, even if the human brain
architecture had a size-based cognitive limit, this wouldn't prove that
all intelligence architectures had similar limits...)

> That said, I *think* there are some solid tricks for getting around
> these problems (at least insofar as what I was working with), but
> they are nothing that I would ever expect to evolve in biological
> wetware.

There seem to be OK tricks in the context of my own AI work as well, but
of course I don't *really* know how well they'll scale.... Finding out
would require doing some very (but perhaps not impossibly) hard math...

I think this kind of discussion is quite valuable, because it concretizes
for us "what kinds of questions are important to ask" as we move toward
real implementations of seed AI...

-- ben g


