Re: Fundamentals - was RE: Visualizing muddled volitions

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Jun 16 2004 - 11:51:52 MDT


Brent Thomas wrote:
> Again I'd like to express the hope that any F(AI) developers would build
> into their systems (as a fundamental invariant?) the 'right of
> withdrawal'.
>
> This should not be part of a 'bill of rights' as it is so fundamental to
> having an acceptable process that it should be a basic condition.

Would you like to name nine other things that are so fundamental to having
an acceptable process that they should be basic conditions? If you can't,
I'm sure nine other people would be happy to do so. Al-Qaeda thinks that
basing the AI on the Koran is so fundamental to having an acceptable
process that it should be a basic condition.

> No
> matter what the collective thinks is best, even if it has (correctly!)
> extrapolated my wishes or the wishes of the collective, it should still
> not apply that solution to my (or any sentient's) physical being without
> my express approval.

Including human infants, I assume. I'll expect you to deliver the exact,
eternal, unalterable specification of what constitutes a "sentient" by
Thursday. Whatever happened to keeping things simple?

> Change the environment, alter the systems, create the transcendent
> utopia but do it with 'choice' and as such do not modify my personality
> or physical being (and as part of that be prepared to create 'enclaves'
> for those who wish to remain unmodified) without the express consent of
> the sentient to be modified.

Could you please elaborate further on all the independent details you would
like to code into eternal, unalterable invariants? If you add enough of
them, we can drive the probability of them all working as expected down to
effectively zero. Three should be sufficient, but redundancy is always a
good thing.

> Do this and I think the vision of the coming singularity will be more
> palatable for all humanity.

It's not about public relations; it's about living with the actual result
for the next ten billion years if that wonderful PR invariant turns out to
be a bad idea.

> (and besides, I can't really object to
> modifications if I was consulted, now can I?)

Not under your system, no. I would like to allow your grownup self and/or
your volition to object effectively.

> Do not tell me that 'oops we got it wrong...' as indicated here:
>
>>> The reason may be, "That idiot Eliezer screwed up the extrapolation
>>> dynamic." If so, you got me, there's no defense against that.
>>> I'll try not to do it.
>
> Instead (using the principle of no modification to sentients without
> express permission) the system can tell me "Hey, you'd be much happier
> if you had green hair, we've done some calculations and if at least 20%
> of the population had green hair then there would be a 15% reduction in
> the general unhappiness quotient... Can I make this modification to you
> or would you like a deeper explanation of the intents and consequences?"

I suppose that if that is the sort of solution you would come up with after
thinking about it for a few years, it might be the secondary dynamic. For
myself I would argue against that, because it sounds like individuals have
been handed genie bottles with warning labels, and I don't think that's a
good thing.

> I think I'm mindful that the system is likely to evolve fast (go foom!
> (hopefully in a good way!)) and that even if it is Friendly and has my
> best interests at heart, I still may not want to participate in all
> aspects of the system, even if its calculations tell it that I would, in
> the future, have appreciated being modified. I think I do foresee a hard
> takeoff scenario, and as long as the fundamentals are good, then even
> when no person or group of people is capable of understanding more than
> a small percentage of the system's operations or actions, as long as
> they have and retain personal choice over their own person (and possibly
> their local environment), things will be fine.
>
> (I don't particularly care that the system decided it needed to convert
> 90% of Arizona into a giant radio transmitter - just don't make me into
> one of the support beams!)
>
> Brent <== *likely to have green hair if the system says it would help
> the singularity, but glad to be consulted*

The subject line of this thread is "fundamentals". There is a fundamental
tradeoff that works like this: the more *assured* such details of the
outcome are, even in the face of our later reconsideration, the more
control over the outcome is irrevocably exerted by a human-level
intelligence. This holds especially true of the things we are most nervous
about. The more control you take away from smarter minds, for the sake of
your own nervousness, the more you risk damning yourself. What if the
Right of Withdrawal that you code (irrevocably and forever, or else why
bother) is the wrong Right, weaker and less effective than the Right of
Withdrawal the initial dynamic would have set in place if you hadn't
meddled?

The more *predictable* the particular detail you care about, the more that
detail is being specified by a human-level intelligence. I have said this
before, but the moral challenges of FAI require an FAI to solve them: one
must work with the stuff that good ideas are made of, dip into the well of
good solutions, and not depend on one's own ability to come up with the
best answer.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

