RE: [sl4] Simple friendliness: plan B for AI

From: Piaget Modeler (piagetmodeler@hotmail.com)
Date: Tue Nov 09 2010 - 12:44:03 MST


One problem of a rule-based system is that it WILL follow the commands of its creator.
Where AI and robots are created for warfare, the creators will violate the First Law
of Robotics by programming these systems to kill people, as is already happening today
in Iraq and Afghanistan, where such robots are fielded. Should government military
organizations be included in or excluded from the rules that Alexei is devising?
 
 
> Date: Tue, 9 Nov 2010 19:20:07 +0000
> Subject: Re: [sl4] Simple friendliness: plan B for AI
> From: andrew@thenationalpep.co.uk
> To: sl4@sl4.org
>
> On Tue, Nov 9, 2010 at 7:07 PM, Alexei Turchin <alexeiturchin@gmail.com> wrote:
>
> > 3) AI must comply with all existing CRIMINAL and CIVIL laws. These laws are
> > the first attempt to create a friendly AI in the form of the state. That is, an
> > attempt to describe good, safe human life using a system of rules (or a
> > system of precedents). The number of volumes of laws and their
> > interpretations speaks to the complexity of this problem - but it has already
> > been solved, and it is not a sin to use the solution.
>
> Most states in human history, including most now existing, are pretty
> much the definition of *un*friendly. That people's rule-of-thumb
> attempts to "describe good, safe human life using a system of rules"
> have, in every case so far, led to the death, imprisonment and in many
> cases torture of many, many people seems to me one of the stronger
> arguments against a rule-based system.
>
>
> --
> http://www.lulu.com/spotlight/andrew1308 - buy my books
> The National Pep - Pop Music to hurt you forever - http://thenationalpep.co.uk
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT