Re: Friendliness SOLVED!

From: Mark Waser (mwaser@cox.net)
Date: Wed Mar 12 2008 - 21:52:39 MDT


> I apologize for the misinterpretation.

Apology unnecessary but appreciated.

> We can't ban something as obviously destructive and difficult to
> produce as nuclear-tipped ICBMs. How in Eru's name are we going to
> enact a global ban on various kinds of AGI research?

By becoming Friendly ourselves so that we can work together to enact a ban.

> We have to build
> a powerful FAI, which actually can ban dangerous AI research, before a
> UFAI (or nuclear war, nanotech, etc.) kills us all. See
> http://www.intelligence.org/upload/CFAI/policy.html, in addition to the
> previous paper.

Yeah, yeah, yeah, I've read it many times. That's my motivation. :-)

> A sufficient number of goals is not enough; the vast, vast majority of
> AGIs will then have a large set of unFriendly goals instead of a small
> set of unFriendly goals. See
> http://www.acceleratingfuture.com/tom/?p=21.

Yes. My sole point was that there is probably a minimum number and
diversity of goals necessary for this theory to firmly apply.

> How will you get *every* AI, or *every* human, to endorse this
> Declaration? And even if they do, how are you going to enforce it? How
> do you guard against misinterpretations, or non-handled exceptions?

Eventually, societal pressure will enforce it the way all other laws are
enforced. Misinterpretations are an intelligence problem, not a
Friendliness problem, and need to be handled appropriately there.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT