Re: De-Anthropomorphizing SL3 to SL4.

From: Michael Anissimov (michael@acceleratingfuture.com)
Date: Wed Mar 17 2004 - 08:41:25 MST


Michael, thanks for pointing this out. Of course, the spilling of
coffee on a mainframe causing an AI to go nutty sounds like a plot from
a really bad sci-fi novel. The truly scary thing about AGI creation is
that everything can seem perfectly fine until hard takeoff, at which
point things could go horribly wrong. Unlike the vast majority of past
technological advancements, AGI does not seem like a "free win" - in
fact, the *default* scenario may be a loss (for everyone). I still
think that the vast majority of people who consider the ethics of
advanced AI are worried about one of two things:

1) anthropomorphic goals emerging spontaneously within the AI
2) mechanomorphic (like a gun) or anthropomorphic (like a slave)
exploitation of AIs by human agents

when the real problem (as many people on this list know) is

3) abstract failures of Friendliness within a very foreign and
difficult-to-imagine goal system structure
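
To make (3) concrete, here is a deliberately toy sketch in Python (the
names and numbers are mine, purely for illustration - nothing here comes
from any real AGI design). Note that no coffee gets spilled and nothing
in the code is buggy; the loss comes entirely from a goal specification
that is subtly wrong, and it only becomes visible once enough
optimization power is applied to it:

# Toy illustration of an "abstract failure of Friendliness":
# the optimizer below is working exactly as designed.

def intended_goal(x):
    # What the designers *meant*: keep x close to 10.
    return -abs(x - 10)

def specified_goal(x):
    # What they actually *wrote*: bigger x is better. Starting near
    # x = 10, the two goals agree under weak, casual testing.
    return x

def optimize(goal, x=9, steps=1, step_size=1):
    # A trivial hill climber; "hard takeoff" is simply more steps
    # applied to the same specified goal.
    for _ in range(steps):
        if goal(x + step_size) > goal(x):
            x += step_size
    return x

weak = optimize(specified_goal, steps=1)       # x ends at 10
strong = optimize(specified_goal, steps=1000)  # x ends at 1009
print(weak, intended_goal(weak))      # 10 0      -> looks Friendly
print(strong, intended_goal(strong))  # 1009 -999 -> catastrophic

The point is that no anthropomorphic motive appears anywhere in this
failure; the system is foreign, simple, and doing precisely what it was
told to do.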

Michael Anissimov

Michael Roy Ames wrote:

>Michael Anissimov,
>
>I enjoyed reading your comments about McKenna and Pesce.
>
>In reference to:
> "If some idiot walks into the AI lab
> just as hard takeoff is about to
> commence, and spills coffee on the
> AI's mainframe, driving it a bit nutty,
> then the whole of humanity might be
> destroyed by that tiny mistake."
>
>This is a poorly imagined scenario. An accident so overt and random would
>not perturb a properly constructed AI. The accident might shut down the
>hardware, terminating that instantiation, but it could not perturb the AI
>such that it became 'a bit nutty'. For a small perturbation to make an AI
>"go bad", it
>would have to be very, very poorly designed - it would have to be nutty to
>begin with. A poorly designed AI is something to be avoided like the
>plague... the inoculation against such a plague being a well-constructed
>Friendly AI.
>
>To clarify, let me suggest an alternate imaginary scenario:
> "If some idiot learns just enough about
> AI theory to construct a working prototype,
> but not enough to ensure it remains
> friendly to other beings, and humans
> in particular, then the whole of
> humanity may be destroyed by that
>person's well-meaning but ultimately
> disastrous efforts."
>
>Michael Roy Ames
>
