Re: Friendliness SOLVED!

From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Mar 12 2008 - 21:17:45 MDT


On Wed, Mar 12, 2008 at 10:50 PM, Mark Waser <mwaser@cox.net> wrote:
> >> Show me how I (or an AGI) can stay true to the declaration and still
> >> perform
> >> a horrible *and* unethical act OR
> >> Show me a set of circumstances where my Friendliness declaration prevents
> >> me
> >> (or an AGI) from protecting myself
> >
> > This is a false dichotomy. Neither I, nor other Singularitarians, nor
> > the AI, nor reality is obligated to choose between your two
> > predefined options.
>
> No! I mean that you can disprove my theory by providing a counter-example
> to either of the two OR any other counter-example that you can prove is
> relevant. I was trying to make life easier by showing where the best/most
> likely points to disprove my theory are.

I apologize for the misinterpretation.

>
> > How could this possibly be in the self-interest of, say, a paperclip
> > optimizer? It will obviously be able to create many more paperclips if
> > it ignores your (to the UFAI) funny-sounding pulses in fiberoptic
> > cables and just turns the Earth into a big pile of microscopic
> > paperclips.
>
> I handled the case of the paperclip optimizer in the e-mail that you are
> answering. To quote, pulling my previous statements from your e-mail:
>
> >> For a powerful enough,
> >> single-goal entity that is sure that it *can* overcome all other
> >> entities,
> >> this is not going to stop it -- but this is a fantasy edge-case that we
> >> should be able to easily avoid.
>
> You *do* respond below by saying
>
> > Actually, it is probably the default case, and a large number of us
> > are operating off that assumption (it's the conservative scenario).
> > See http://www.intelligence.org/AIRisk.pdf.
>
> So I'll just say OK: it doesn't handle this fantasy case (which can be easily
> outlawed by requiring that all AGI have a sufficient number of goals), but it
> has significant value outside this case (since I don't believe that either
> of us will convince the other).

We can't ban something as obviously destructive and difficult to
produce as nuclear-tipped ICBMs. How in Eru's name are we going to
enact a global ban on various kinds of AGI research? We have to build
a powerful FAI, which actually can ban dangerous AI research, before a
UFAI (or nuclear war, nanotech, etc.) kills us all. See
http://www.intelligence.org/upload/CFAI/policy.html, in addition to the
previous paper.

>
> >> I disagree. Prove me wrong by doing one of the two things above. That
> >> should be easy if my theory is as laughably wrong as you believe.
> >
> > By my count, four different people have now challenged your theory, so
> > there are plenty of other things to say. Stop repeating this
> > unreasonable demand; it isn't getting anyone anywhere.
>
> I've been saying a lot of things, and four different people have challenged
> the theory, but the truth is "NO ONE HAS PROVIDED A DETAILED, SPECIFIC
> UNHANDLED CASE OR A COUNTER-EXAMPLE" other than the paperclip example, which
> is an absurd edge case that we can simply outlaw by requiring all AGI to
> have a sufficient number of goals.

A sufficient number of goals is not enough; the vast, vast majority of
AGIs will then have a large set of unFriendly goals instead of a small
set of unFriendly goals. See
http://www.acceleratingfuture.com/tom/?p=21.

> If you feel that this statement is
> incorrect, then I am asking you to provide what you believe is a detailed,
> specific unhandled case or a counter-example. This is not an unreasonable
> demand. This is fundamental Bayesian logic. Show me some hard evidence
> that I am wrong. All I've seen thus far is just wild, unfounded speculation.
>
>
> > Why does it even have this supergoal in the first place?
>
> Because that is the primary, overriding goal in my Declaration of
> Friendliness. Why are you arguing with me if you haven't even had the
> courtesy to read what I have written well enough to realize this?

How will you get *every* AI, or *every* human, to endorse this
Declaration? And even if they do, how are you going to enforce it? How
do you guard against misinterpretations or unhandled exceptions?

>
> > What other goals? An AI's goal system can be much, much simpler than a
> > human's. There's no reason why it has to have any other goals.
>
> True, but that is a fantasy edge case which we can easily outlaw.

-- 
 - Tom
http://www.acceleratingfuture.com/tom

