Re: Robot that thinks like a human

From: Ben Goertzel (ben@goertzel.org)
Date: Wed May 18 2005 - 20:40:28 MDT


> the only realistic way for humanity to win is for the AGI race to be won
> by a project that explicitly sets out to build an AGI that can be proven
> to be Friendly (to a high degree of confidence, prior to actually
> building it). Right now the SIAI appears to be the only such project in
> existence.
>
> * Michael Wilson

Michael,

I've had this argument with you and Eli on this list many times, so I'm not
going to pursue it at length once more... but I'll mention the point briefly
just for list newbies...

I have a great deal of doubt that it's possible for anyone to achieve a good
understanding of AGI Friendliness prior to building and experimenting with
some AGIs (considerably more general and powerful than any of the narrow-AI
programs or crude AGI prototypes that exist today).

So far none of the ideas published online by the SIAI staff have done
anything to assuage this doubt. There are plenty of interesting speculative
ideas, but nothing even vaguely approaching a framework that could be used
to construct a proof of the Friendliness of an AGI design.

Sure, you can argue that it's better to spend 10-20 years trying to
construct theoretical foundations of Friendly AGI in the absence of any AGI
systems to play with -- just on the off chance that such theorization does
turn out to be productive. But the risk then is that in the interim someone
else who's less conservative is going to build a nasty AI to ensure their
own world domination.

IMO a more productive direction is to think about how to design an AGI that
will teach us a lot about AGI and Friendly AGI, but won't have much
potential for a hard takeoff. I think this is much more promising than
trying to build a powerful theory of Friendly AI via a purely theoretical
rather than empirical approach.

The Novamente project seeks to build a benevolent, superhuman AGI (I'm not
using the word Friendly because in fact I'm not entirely sure what Eli means
by that defined term these days). We are committed not to create an AGI
that appears likely to be capable of a hard takeoff, unless it is highly
clear that this AGI will be benevolent. We are not committed to avoiding
building
*any* AGI until we have a comprehensive theory of Friendliness/benevolence,
because

a) we think such a theory will come only from experimenting with
appropriately constructed AGIs; and
b) we think other existential risks (including the risk posed by others'
nasty AGIs) are too great for us to afford a maximally careful approach
to our own AGI.

So anyway, it is just not true that the SIAI is the only group seeking to
build a demonstrably/arguably benevolent AGI system. Novamente is, and
probably other groups are as well. Rather, SIAI has a particular approach
toward this goal, which involves the belief that a theory of Friendly AI can
be arrived at purely theoretically rather than empirically -- and this
approach appears to be unique to SIAI.

-- Ben Goertzel


