Re: Problems with AI-boxing

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Fri Aug 26 2005 - 12:23:25 MDT


On Fri, Aug 26, 2005 at 01:52:03PM -0400, Jeff Medina wrote:
> On 8/26/05, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
> > Something that I haven't really seen talked about here is that
> > keeping a sentient being in a box isn't something that "just
> > results in the singularity being delayed". It's massively
> > immoral, as well.
> >
> > It's slavery *and* imprisonment. Both without proven cause; the
> > AI is guilty until proven innocent.
>
> If one has reason to believe a human might accidentally destroy
> the world, it would be incredibly immoral *not* to quarantine the
> person until evidence the threat is gone.
>
> We already do this when the threat is much smaller -- for example,
> we quarantine people with contagions that have no chance of
> destroying the world, but merely (!) might cost hundreds or
> thousands of lives.

In general, that's *after* we've shown that they actually have the
contagion, not before.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT