From: Alden Streeter (firstname.lastname@example.org)
Date: Mon Aug 26 2002 - 02:56:45 MDT
From: "Michael Roy Ames" <email@example.com>
> Alden Streeter <firstname.lastname@example.org> wrote:
> > The only two ways I can think of out of this paradox are to:
> > 1. Turn the AI loose without restrictions, including the one prohibiting
> > destruction of humans.
> > 2. Arbitrarily forbid the AI from ever altering human goal systems.
> 1. Is doable, but for many here it is considered a horrible,
> very-last-ditch, gray-goo-is-coming option.
> 2. Is not possible.
> There is another option:
> 3. Turn the AI loose *with* restrictions: self-imposed restrictions that it
> has been given and agrees with.
That is vacuous: all you have to do is make one of the restrictions a
prohibition on ever disagreeing with the restrictions.