Re: Investing in FAI research: now vs. later

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Feb 20 2008 - 13:21:51 MST


On Wed, Feb 20, 2008 at 11:07:43AM -0800, Peter C. McCluskey wrote:
> Presumably part of the disagreement is over the speed at which AI
> will take off, but that can't explain the certainty with which
> each side appears to dismiss the other.

I disagree, actually; for me that is the entire argument. If your
AI is mind-blind in such a way that it would drop a piano off a
ledge without thinking to look down, that doesn't matter unless it
gets smart enough to crack nanotech before you can stop it. The
mere *possibility* of a hard-takeoff AI that doesn't like humans
(through indifference or malice) terrifies me enough that I'm a firm
backer of the FAI camp. If I didn't think hard takeoff was
possible, I wouldn't much care one way or the other,
because if it takes decades for the AI to become super-humanly
smart, that's decades for us to figure out that it's warped.

-Robin

-- 
Lojban Reason #17: http://en.wikipedia.org/wiki/Buffalo_buffalo
Proud Supporter of the Singularity Institute - http://intelligence.org/
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT