Re: The "One Basket" Problem

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Sat Aug 05 2006 - 12:27:50 MDT


Deepak Goel wrote:
> I have written once to this list before on this subject ("all our eggs
> are in one basket, we need a backup"). I wrote up an article and hope
> you don't mind my sharing this article with you:
>
> http://gnufans.net/~deego/DeegoWiki/OneBasket.html
>
> It does talk about singularity, shocks, etc, so hope it is on-topic.
It's a genuine problem, and your proposed solution (and various analogs
of it) are worthy. I don't oppose your proposal. But that's not where
my interests lie in this decade. (Two and three decades ago I would
have been in total agreement.)

The problem is the cost factor. Because of the cost factor, it looks
like something that's going to require government sponsorship. And I
trust the current US government so little that if they said the sky was
bluish gray I'd go outside to check. OTOH, Japan may be serious about
their moonbase project. Once you're there you can build a catapult and
a (lunar) beanstalk and you're well on your way. (And that might be an
EXCELLENT environment for an AI to evolve. I'm not sure that it would
be exactly friendly, but it should, as part of its original function,
be protective towards humans...though not protective at all costs.
That's a good start.)

I'm in a bit of a minority here in that I expect an AI to "evolve" out
of the applications that people use to do things: Google, hospital
administration, etc. Managing a terrestrial-ish environment under inimical
circumstances seems like another good place. People seem to add more bells
and whistles with each iteration of the program, and expect it to do
more and more. Voice response is clearly something people will want as
soon as it becomes more feasible. ("Turn out the light in the
kitchen!") Expanding from recognizing a few simple commands to a larger
and more flexible subset of native language to, eventually, full
recognition of natural speech...but that in and of itself requires a lot
of what is required by an AGI, especially when it includes being able to
respond sensibly to those commands/requests. And even more when it
decides which commands to accept and which to ignore...and how to
respond while ignoring them. Doing that requires that it rank various
goals in importance and detect conflicts between them, and the functions
that it has been designed to perform will set its initial goals
(supergoals?) and rank them in importance. (First of all, protect the
AIR! If that means leaving someone to die, you still protect the
community air supply. Second is water...water is just as important, but
less urgent. So you can think about other priorities...like saving
people's lives, etc. Possibly I've got the priorities wrong...but I
don't think so.)
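
A minimal sketch of that kind of ranking, in Python, with names invented
purely for illustration (this isn't from any real system): each supergoal
gets a fixed rank, and a request is honored only if the goal it serves
outranks every goal it would compromise.

# Hypothetical sketch: supergoals ranked by importance; a request is
# honored only when the goal it serves outranks every goal it would
# compromise. All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    name: str
    rank: int  # lower rank = more important

AIR = Goal("protect community air supply", 0)
WATER = Goal("protect water supply", 1)
LIVES = Goal("protect individual lives", 2)

def arbitrate(request, serves, compromises):
    """Accept the request only if the goal it serves outranks every
    goal it would compromise; otherwise refuse and say why."""
    for threatened in sorted(compromises, key=lambda g: g.rank):
        if threatened.rank <= serves.rank:
            return ("Refusing '%s': serving '%s' would compromise the "
                    "higher-priority goal '%s'."
                    % (request, serves.name, threatened.name))
    return "Accepting '%s' in service of '%s'." % (request, serves.name)

if __name__ == "__main__":
    # Venting an airlock to rescue someone: lives vs. air -> refuse.
    print(arbitrate("vent airlock to reach a stranded worker",
                    serves=LIVES, compromises=[AIR]))
    # Rationing water now to keep people alive later: nothing higher at risk.
    print(arbitrate("ration drinking water",
                    serves=WATER, compromises=[LIVES]))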

Of course, an AGI may come sooner, and via some more purposeful route.
But I'm not sure that's the route of maximal probability.


