Re: OpenCog Concerns

From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Mon Mar 03 2008 - 23:49:25 MST


It's probably a safe assumption that virtually all humans would prefer that humanity not be murdered by an amoral AGI. We can use reasonable eliminations like that to provide basic guidance. In reality, a minority of people will inevitably disagree with whatever Friendly super-goal is selected; that is inescapable. But if we don't assign a specific Friendly super-goal at all, humanity will be destroyed by default.

Jeffrey Herrlich

Matt Mahoney <matmahoney@yahoo.com> wrote:
--- Nick Tarleton wrote:

> On Mon, Mar 3, 2008 at 12:24 PM, Matt Mahoney wrote:
> > Friendliness has nothing to do with keeping AI out of the hands of "bad
> > guys". Nobody, good or bad, has yet solved the friendliness problem.
>
> Then substitute "dumb guys, who don't realize that Friendliness is
> necessary/that they don't know how to do it".

Then we are still in trouble, because nobody knows how to do it. Which of the
following outcomes are Friendly?

1. Humans in an eternal state of bliss, not caring about anything else.
2. Humans in virtual worlds of their choosing and ignorant of their true
environment. (This may have already happened).
3. Humans extinct, but with memories preserved by a superhuman intelligence.
4. Like 3 but with made-up memories, since you wouldn't know the difference.
5. Something else...

My point is that there is no right answer. If we can't agree on what Friendly
means, then I don't have much hope that we will get it right. But that won't
stop people from building AI. I mean, why not just build it and let it answer
the question for us?

-- Matt Mahoney, matmahoney@yahoo.com

       


