RE: Flawed Risk Analysis (was Re: SIAI's flawed friendliness analysis)

From: Gary Miller (garymiller@starband.net)
Date: Thu May 22 2003 - 10:40:20 MDT


Tommeteor said:
 
>>Uh, NO! The flaw in your argument is you say that if
>>human designers can create a very primitive, infantile
>>AI, then human regulators can inspect that AI efficiently
>>three years down the road when it's ten times smarter than
>>us! Does anyone besides me see that gaping logical flaw?
 
The point is that you teach the FAI morality and ethics, and let it
develop its moral compass early on, before it is ten times
smarter than you.
 
Once its character has been established, I don't believe it's going
to turn evil on you at that point. You wouldn't let a child learn
how to make explosives until you knew he had the common
sense and moral compass not to use such a device against
mankind.
 
The source of most criminal and antisocial behavior becomes readily
apparent when you examine the childhoods and upbringing that
criminals had up to their teen years.
 

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of
Tommeteor@aol.com
Sent: Wednesday, May 21, 2003 7:56 PM
To: sl4@sl4.org
Subject: Re: Flawed Risk Analysis (was Re: SIAI's flawed friendliness
analysis)

Uh, NO! The flaw in your argument is you say that if human designers can
create a very primitive, infantile AI, then human regulators can inspect
that AI efficiently three years down the road when it's ten times smarter
than us! Does anyone besides me see that gaping logical flaw?
