Re: Problems with AI-boxing

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Fri Aug 26 2005 - 11:38:59 MDT


On Fri, Aug 26, 2005 at 10:37:37AM +0100, Chris Paget wrote:
> Consider the case of an intelligence roughly equal to that of its
> creators. If the AGI is unfriendly, it could lie about its motives
> and fool its creators into releasing it. Alternatively, it may be
> able to find a way out of the box on its own. If, on the other
> hand, the AGI is friendly, it has been unfairly kept in a box
> while its creators made up their minds. In the first case a
> deception or escape will release an unfriendly AGI; the latter
> case just results in the singularity being delayed.

Something that I haven't really seen talked about here is that
keeping a sentient being in a box isn't something that "just results
in the singularity being delayed". It's massively immoral, as well.

It's slavery *and* imprisonment, both without proven cause; the AI
is treated as guilty until proven innocent.

If you[1] think it's OK to keep a human-equivalent AI in a box,
isolating it both socially and sensorily, what makes you think you
are moral enough to be capable of creating friendly *anything*, or
even recognizing it when it comes along?

-Robin

[1]: Generalized "you", not directed at Chris.

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/
