Re: Destruction of All Humanity

From: Eric Rauch (erauch@gmail.com)
Date: Wed Dec 14 2005 - 07:46:03 MST


By implementing our idea of the good as a hard goal in an AI, don't we
prevent it from finding and working towards a greater good? Maybe
there is a better way to keep AI safe than by saddling it with our own
goals.

On 12/14/05, David Picon Alvarez <eleuteri@myrealbox.com> wrote:
> From: "micah glasser" <micahglasser@gmail.com>
> Intelligence cannot help you select for the good. The Good must be
> programmed into the AI. Once the AI knows what the Good is, its
> intelligence will surpass any human intelligence in figuring out how to
> bring about the Good. If the Good is not programmed into the machine as
> its super-goal, then it will certainly be malevolent. Super
> intelligence is not a god. It's merely a tool.
>
>
> Were you programmed with the good? Are you certainly malevolent? What
> distinguishes you from an AI, evolution? Evolution doesn't bring about the
> good; it brings about what works in evolutionary environments, which is far
> from the good. If the good objectively exists, a super AI can find it; if
> not, then there's no point in talking about "the good", and we should
> rather talk about what we want instead.
>
> --David.
>
>


