Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases)

From: Martin Striz
Date: Tue Jun 06 2006 - 14:11:00 MDT

On 6/6/06, Robin Lee Powell <> wrote:

> Again, you are using the word "control" where it simply does not
> apply. No-one is "controlling" my behaviour to cause it to be moral
> and kind; I choose that for myself.

Alas, you are but one evolutionary agent testing the behavior space.
I believe that humans are generally good, but with 6 billion of them,
there's still a lot of crime. Do we plan on building only one AI?

I think the argument is that under runaway recursive self-improvement,
any hardcoded nugget approaches insignificance/obsolescence. Is there
any code you could write that no one, no matter how many trillions of
times smarter, could find a workaround for?


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT