RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 23 2002 - 10:17:41 MDT


To enlarge on my point a little...

Making a rigorous quantitative "growth curve" for AGI is not so simple.

How does one measure the "degree of general intelligence" attained by the
best available software at a given point in time?

How much has the degree of general intelligence of the best AI programs
increased from 1992 to 2002? What is the growth exponent?

Plotting progress in this regard is not so easy as plotting progress in
processor speed, memory capacity, etc.
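
To make the contrast concrete, here is a minimal sketch in Python -- with the
AI "intelligence" scores invented purely for illustration, since no agreed-upon
metric exists -- of how one would fit a growth exponent between two dates, and
why the hardware version of the exercise is easy while the AI version is not:

import math

# Rough, order-of-magnitude hardware figures, for illustration only.
transistor_counts = {1992: 1.2e6, 2002: 5.5e7}

# Placeholder "general intelligence" scores -- pure invention, since no
# agreed-upon metric of general intelligence exists.
ai_intelligence = {1992: 1.0, 2002: 3.0}

def annual_exponent(series):
    """Fit k in v(t) = v0 * exp(k * (t - t0)) through two data points."""
    (t0, v0), (t1, v1) = sorted(series.items())
    return math.log(v1 / v0) / (t1 - t0)

print(f"hardware growth exponent:       {annual_exponent(transistor_counts):.2f} / year")
print(f"'intelligence' growth exponent: {annual_exponent(ai_intelligence):.2f} / year "
      "(only as meaningful as the invented metric)")

The arithmetic is trivial; the entire difficulty lives in the second dictionary.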

-- Ben G

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Ben Goertzel
> Sent: Sunday, June 23, 2002 10:00 AM
> To: sl4@sysopmind.com
> Subject: RE: How hard a Singularity?
>
>
>
>
> Eli,
>
> Your reasoning here is fine, but is predicated on your expectations as to
> the "overall shape of the curve" -- most specifically, on your expectation
> as to the *exponent* in the exponential growth curve of AI intelligence.
> Of course, if you assume an exponential pattern with an aggressive
> exponent, you will arrive at a very fast transition from human-level
> intelligence to vastly superhuman-level intelligence. I don't doubt the
> general exponential growth pattern, but I wonder about the size of the
> exponent, and I suspect there will be some significant plateaus along the
> way too....
>
> Also, the assumption that "AI development took a century or more" is not
> so unrealistic, depending on how you define your terms. It was back in
> 1948, for example, that Norbert Wiener published his book Cybernetics,
> which contained some rudimentary ideas on AI design.
>
> -- Ben G
>
>
>
>
> > > What I am questioning is not your confidence that the feedback loop
> > > itself will exist, but your confidence in your quantitative estimate
> > > of the speed with which the feedback loop will lead to intelligence
> > > increase.
> >
> > Look at it this way: Given what I expect the overall shape of the curve
> > to look like, if you specify that it takes one year to go from
> > human-level AI to substantially transhuman AI, then it probably took you
> > between a hundred and a thousand years to get to human-level AI. If
> > you're wondering where the ability to name any specific time-period
> > comes from, that's where -- the relative speed of the curve at the
> > humanity-point should be going very fast, so if you plug in a Fudge
> > Factor large enough to slow down that point to a year, you end up
> > assuming that AI development took a century or more. Even so I'm not
> > sure you can just plug in a Fudge Factor this way -- the subjective rate
> > of the developers is going to impose some limits on how slow the AI can
> > run and still be developed.
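
A toy rendering of that scaling arithmetic, with every number below an
assumption rather than anything taken from the post above: let each doubling
of capability take a fixed fraction of the time the previous doubling took,
then rescale the whole curve (the "Fudge Factor") so that the post-human
stretch takes exactly one year.

# Toy model: successive doublings of capability get faster by a constant
# factor. All parameter values are illustrative assumptions.
r = 0.8            # assumed: each doubling takes 80% as long as the one before
pre_human = 20     # assumed: doublings from the seed stage up to human-equivalence
post_human = 4     # assumed: doublings from human-equivalence to "substantially transhuman"

doubling_times = [r ** i for i in range(pre_human + post_human)]  # arbitrary time units
time_to_human = sum(doubling_times[:pre_human])
time_past_human = sum(doubling_times[pre_human:])

# Uniform slowdown chosen so the post-human stretch takes exactly one year.
fudge = 1.0 / time_past_human
print(f"years from seed AI to human-equivalence: {fudge * time_to_human:.0f}")   # ~145
print("years from human-equivalence to transhumanity: 1 (by construction)")

With these assumed numbers the pre-human climb stretches to roughly a century
and a half; a slightly steeper curve (r = 0.7) pushes it past a millennium.
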
> >
> > Seed AI will be a pattern of breakthroughs and bottlenecks. As the AI
> > passes the human-equivalence point on its way between infrahumanity and
> > transhumanity, I expect it to be squarely in the middle of one of the
> > largest breakthroughs anywhere on the curve. If this mother of all
> > breakthroughs is so slow as to take a year, then the curve up to that
> > point, in which you were crossing the long hard road all the way up to
> > human equivalence *without* the assistance of a mature seed AI, must
> > have taken at least a century or a millennium.
> >
> > And if it takes that long, Moore's Law will make it possible to
> > brute-force it first, meaning that the AI is running on far more
> > processing power than it needs, meaning that when self-improvement
> > takes off there will be plenty of processing power around for immediate
> > transhumanity. Still no Slow Singularity.
> >
> > --
> > Eliezer S. Yudkowsky http://intelligence.org/
> > Research Fellow, Singularity Institute for Artificial Intelligence
> >
>


