Re: AI Boxing

From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jul 28 2002 - 06:51:40 MDT


Mitch Howe wrote:
> One thing that I think has been overlooked in these discussions is the
> ethical problem that would result in the unlikely event that a totally
> secure transhuman AI box -- or even a human level AI box -- could be made.

<snip>

> So there is yet another reason to make AI right the first time, using a
> Friendliness architecture that is intrinsically trustworthy from the
> beginning. Not just because boxes are likely to fail. Not just because we
> probably can't tell the good AI from the bad. But also because of the
> morally insufferable problem of appointing ourselves to be lords over these
> minds.

Very good point. Obviously, the goal should be to try to create the
first AI to be friendly using a good, solid architecture. We're still
going to have to box the AI until we're certain the architecture is
working properly (i.e., keep it boxed until it reaches trans-human
level or beyond).

One way to think about this is by analogy to a child. It could be said
that human children live in a type of sandbox, not possessing real
freedom until age 18-21. The older they get, the more freedom they
get, unless they prove themselves to be unfriendly. A very unfriendly
16-year-old could find themselves with less freedom than a friendly
8-year-old if the situation warranted it.

James Higgins
