RE: Fighting UFAI

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jul 14 2005 - 13:02:53 MDT


> > I'd love to hear well-reasoned thoughts on what and whose
> > motivation would end up being a bigger or more likely danger to us.
>
> I think that all utility functions containing no explicit mention of
> humanity (example: paperclips) are equally dangerous.

Eli, this clearly isn't true, and I think it's a poorly-thought-out
statement on your part.

For instance, consider

Goal A: Maximize the entropy of the universe, as rapidly as possible.

Goal B: Maximize the joy, freedom and growth potential of all sentient
beings in the universe.

B makes no explicit mention of humanity, nor does A.

Admittedly, B is vaguer than A, but it can be made precise by specifying
that the AI should define all the terms in the goal the way it thinks the
majority of humans on Earth in 2004 would define them.

I really feel that B is less dangerous than A.

I can't *prove* this, but I could make some plausible arguments, though I
don't feel like spending a lot of time on it right now.

Do you have some justification for your rather extreme assertion?

-- Ben


