From: Mark Waser (firstname.lastname@example.org)
Date: Wed Mar 12 2008 - 20:50:53 MDT
>> Show me how I (or an AGI) can stay true to the declaration and still
>> commit a horrible *and* unethical act OR
>> Show me a set of circumstances where my Friendliness declaration prevents
>> me (or an AGI) from protecting myself
> This is a false dichotomy. Neither me, nor other Singularitarians, nor
> the AI, nor reality are obligated to choose between your two
> predefined options.
No! I mean that you can disprove my theory by providing a counter-example
to either of the two, OR any other counter-example that you can prove is
relevant. I was trying to make life easier by showing where the best and
most likely points to disprove my theory are.
> How could this possibly be in the self-interest of, say, a paperclip
> optimizer? It will obviously be able to create many more paperclips if
> it ignores your (to the UFAI) funny-sounding pulses in fiberoptic
> cables and just turns the Earth into a big pile of microscopic
> paperclips?
I handled the case of the paperclip optimizer in the e-mail that you are
answering. To quote, pulling my previous statements from your e-mail:
>> For a powerful enough,
>> single-goal entity that is sure that it *can* overcome all others,
>> this is not going to stop it -- but this is a fantasy edge-case that we
>> should be able to easily avoid.
You *do* respond below by saying:
> Actually, it is probably the default case, and a large number of us
> are operating off that assumption (it's the conservative scenario).
> See http://www.singinst.org/AIRisk.pdf.
So I'll just say OK, it doesn't handle this fantasy case (which can easily
be outlawed by requiring that all AGIs have a sufficient number of goals),
but it has significant value outside this case (since I don't believe that
either of us will convince the other).
>> I disagree. Prove me wrong by doing one of the two things above. That
>> should be easy if my theory is as laughably wrong as you believe.
> By my count, four different people have now challenged your theory, so
> there's plenty of other things to say. Stop repeating this
> unreasonable demand; it isn't getting anyone anywhere.
I've been saying a lot of things, and four different people have challenged
the theory, but the truth is that "NO ONE HAS PROVIDED A DETAILED, SPECIFIC
UNHANDLED CASE OR A COUNTER-EXAMPLE" other than the paperclip example,
which is an absurd edge case that we can simply outlaw by requiring all
AGIs to have a sufficient number of goals. If you feel that this statement
is incorrect, then I am asking you to provide what you believe is a
detailed, specific unhandled case or a counter-example. This is not an
unreasonable demand. This is fundamental Bayesian logic. Show me some hard
evidence that I am wrong. All I've seen thus far is wild, unfounded
speculation.
> Why does it even have this supergoal in the first place?
Because that is the primary overriding goal in my Declaration of
Friendliness. Why are you arguing with me if you haven't even had the
courtesy to read what I have written well enough to realize this?
> What other goals? An AI's goal system can be much, much simpler than a
> human's. There's no reason why it has to have any other goals.
True, but that is a fantasy edge case that we can easily outlaw.
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:09 MDT