From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon Jul 30 2001 - 06:17:17 MDT
> Ben wrote in unquotable html:
> "I think that the development of human level AI will probably
> occur through
> the efforts of a small focused team. However, I think that the
> development of *superhuman* AI will occur as you describe.
> It will work like this. A small team will build the roughly
> human-level AI,
> and the world will get psyched about it. THEN, the code will be
> opened up,
> and thousands of coders will participate in improving and
> testing and tuning
> various components, contributing their creativity and making the system
> smarter and smarter."
> I still don't get this. You described in your other messages that you
> expect the completed human-level Webmind to be able to both reprogram
> itself and its environment. If this is the case, why does it need a
> bunch of very very slow human programmers to help it improve itself?
> Wouldn't Webmind be able to improve, test, and tune itself much more
> rapidly than realtime? Especially as it has access to faster hardware?
The useful modification of a Webmind or other sophisticated AI system is a
*very hard problem*. Only a certain subset of very smart humans, with a
particular cognitive orientation, are able to carry out this sort of task.
Thus I assume that, when a Webmind begins to be able to usefully modify
itself, it won't be an expert at doing so. Within the domain of
Webmind-improvement, it will have some strengths and some weaknesses.
These will probably complement the strengths and weaknesses of human experts
at Webmind-improvement. Thus there will be a phase, most probably one of years
but pessimistically perhaps one of decades, where Webmind and humans are
improving Webminds in parallel. This will lead up to the next phase where
Webminds are just so much better than humans at improving Webminds that
human assistance is irrelevant.
It seems that the difference between our intuitions is that I suspect a
years-long "semi-hard takeoff" with ample human assistance, prior to the
hard takeoff, whereas you and Eli envision a briefer semi-hard takeoff
period without much human help. (Some rather obscene metaphors suggest
themselves here, but I'll refrain ;)
By the way, the assumption that Webminds or other AIs will be much faster
than humans is questionable. This depends on hardware advances. Of course
there are some things computers can do much faster than humans right now,
and there will be some cognitive tasks that Webminds can carry out much more
efficiently than humans early on (there already are, even now). But even
though the elementary operation of a CPU is faster than that of a neuron,
there are just *so many fucking neurons*!! The new Webmind core is pretty
fast, but even so, suppose one (very crudely) maps each WM node to a
neuronal group in the brain (1000-50,000 neurons, say). Can Webmind cycle
through all its nodes as fast as the brain can activate all the
corresponding neuronal groups? Not yet. On a distributed hardware platform
(a Beowulf cluster, say), it can come sorta close in a few years perhaps,
with the efficiency of the new implementation and some more hardware
improvements, but it's not there (I can quantify this later).
-- Ben G
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:21 MDT