Re: Open AI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 30 2001 - 10:43:09 MDT


Ben Goertzel wrote:
>
> Thus I assume that, when a Webmind begins to be able to usefully modify
> itself, it won't be an expert at doing so. Within the domain of
> Webmind-improvement, it will have some strengths and some weaknesses.
> Probably complementing the strengths and weaknesses of human experts at
> Webmind-improvement. Thus there will be a phase, most probably one of years
> but pessimistically perhaps one of decades, where Webmind and humans are
> improving Webminds in parallel. This will lead up to the next phase where
> Webminds are just so much better than humans at improving Webminds that
> human assistance is irrelevant.

The first phase is called "seed AI". The second phase is called "hard
takeoff".

> It seems that the difference between our intuitions is that I suspect a
> years-long "semi-hard takeoff" with ample human assistance, prior to the
> hard takeoff, whereas you and Eli envision a briefer semi-hard takeoff
> period without much human help.

Ben, what we define as a "hard takeoff" comes at the END of a long period
of AI improvement. If you're interpreting "hard takeoff" to begin as soon
as an AI gets its hands on its own source code, then of course the run-up
takes years and involves a great deal of human improvement of AIs. "Hard
takeoff" refers to what happens once the AI is pretty much on its own.

We both agree that it will take years of seed AI development before any
hard takeoff could occur. Where we disagree is that you assert that we
will have genuinely human-equivalent AI, that it will not yet be on its
own, and that it will in fact take years beyond this point before a hard
takeoff occurs. I believe that the milestone of human-equivalence will be
passed during, or very shortly before, a hard takeoff.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


