RE: Fundamentals - was RE: Visualizing muddled volitions

From: Brent Thomas (bthomas@avatar-intl.com)
Date: Wed Jun 16 2004 - 13:10:53 MDT


Answers below as appropriate -- indicated by !!!

Summary: As for the system itself, most likely self-developed and
capable of 'changing the world', my only fundamental
need/desire/request is that it allow me (the current me, not some
calculated approximation) to act as a 'final judge'
and accept or reject any modifications it would perform on my person.
Change the environment to whatever the collective derives as fitting our
volition, change pretty much anything everyone can agree on...but don't
change any sentient without
presenting the choice and ensuring the sentient is comfortable (to the
limit of their ability to understand) with the choice.
It's not a genie bottle, because the 'system' works as you have
envisioned...the only fundamental difference is that no action can be
taken on a sentient without their consent. (If they are violent, or
otherwise inclined not to be involved, that is the point of the enclaves,
and it is the responsibility of the system to provide such space for them
to exist as they choose...and IMHO this is no bother or hardship for the
system...by the point at which it can alter the environment/bodies/selves
of sentients it can also protect them.)

I DO think your collective volition is 'right on' in how the system
should model and improve itself and the environment...I just must insist
that the right of a 'last judge' be in MY hands when it comes to my
body/self/intellect. And truly, for the capabilities I envision this
system will develop in a short period, maintaining enclaves and
providing explanations to whatever level of detail a being requests will
probably take .0000000001% (or less!) of the system's capability.

What's the rush, or the need to impose? Protect the FUNDAMENTAL condition
that a sentient is not to be affected unless they choose to be
affected...if you consider this deeply enough I'm confident that you
will agree that you would want the ability to refuse outside change. I
do think that most will embrace the change, and the change will be
better, smarter, faster, etc...but retain the ability to choose.

-----Original Message-----
From: Eliezer Yudkowsky [mailto:sentience@pobox.com]
Sent: Wednesday, June 16, 2004 1:52 PM
To: sl4@sl4.org
Subject: Re: Fundamentals - was RE: Visualizing muddled volitions

Brent Thomas wrote:
> Again I'd like to express the hope that any F(AI) developers would
> build into their systems (as a fundamental invariant?) the 'right of
> withdrawal'
>
> This should not be part of a 'bill of rights' as it is so fundamental
> to having an acceptable process that it should be a basic condition.

Would you like to name nine other things that are so fundamental to having
an acceptable process that it should be a basic condition?  If you can't,
I'm sure nine other people would be happy to do so.  Al-Qaeda thinks that
basing the AI on the Koran is so fundamental to having an acceptable
process that it should be a basic condition.

!!! NO - there is only one fundamental thing...ASK before modification
and respect the answer. There is no need for
other fundamentals in a friendly system operating from our collective
volition.

> No matter what the collective thinks is best, even if it has (correctly!)
> extrapolated my wishes or the wishes of the collective, it should
> still not apply that solution to my (or any sentients) physical being
> without my express approval.

Including human infants, I assume.  I'll expect you to deliver the exact,
eternal, unalterable specification of what constitutes a "sentient" by
Thursday.  Whatever happened to keeping things simple?

!!! I'll deliver it today...any being that the system can communicate
with and that is capable of
responding. The system should be able to communicate with any human (in
any modality), and (when!) we encounter alien sentients they should not
be 'modified' before we are capable of communicating with them ;-) By
responding I mean that the system must explain what modification it
intends to make and allow an informed choice. If the system is unable
to clearly (to that target) explain why the modification is necessary, it
should not perform it. If the system needs (for some reason determined
by the collective volition) to modify a sentient and the system cannot
communicate with it, then the sentient should be 'enclaved' if necessary
until the system is able to explain...don't modify, without permission,
anything capable of giving/rejecting permission. Pretty simple.
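
Just to make the rule concrete, here is a rough Python-style sketch (my
own illustration only -- names like can_communicate_with, explain,
understands, accepts, and enclave are placeholder hooks, not anyone's
actual design):

# Sketch of the single consent gate described above (illustrative only).
def request_modification(system, sentient, modification):
    """Return True only if the sentient gives informed consent."""
    if not system.can_communicate_with(sentient):
        # Can't even ask: leave the sentient untouched ('enclaved' if
        # necessary) until communication becomes possible.
        system.enclave(sentient)
        return False

    explanation = system.explain(modification, audience=sentient)
    if explanation is None:
        # If the system can't explain the change clearly to this
        # particular audience, it doesn't get to make the change.
        return False

    while not sentient.understands(explanation):
        # Keep explaining at whatever level of detail is requested.
        explanation = system.explain(modification, audience=sentient,
                                     more_detail=True)

    # The sentient itself -- not its extrapolated volition -- is the
    # last judge.
    return sentient.accepts(modification)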

For this particular example, human infants should generally not need to
be modified by the system unless their parents wish them to be (and I
do think we GIVE the RIGHT to modify infants to human parents
today...nothing really new here). In this instance the infants are only
potential sentients, as they are not capable of responding. The system
truly isn't a genie because the collective volition (so I believe) will
not see the need to modify infants (what's so urgent that they need to be
modified anyway? I don't foresee any condition where the system could not
enclave them until they develop enough to communicate).

> Change the environment, alter the systems, create the transcendent
> utopia but do it with 'choice' and as such do not modify my personality
> or physical being (and as part of that be prepared to create 'enclaves'
> for those who wish to remain unmodified) without the express consent of
> the sentient to be modified.

Could you please elaborate further on all the independent details you would
like to code into eternal, unalterable invariants?  If you add enough of
them we can drive the probability of them all working as expected down to
effectively zero.  Three should be sufficient, but redundancy is always a
good thing.

!!! Sure, just one detail --- don't modify a sentient without
permission; when a modification is projected according to the collective
volition, explain the process until the sentient grasps the concept, and
only proceed if accepted. Pretty straightforward.

> Do this and I think the vision of the coming singularity will be more
> palatable for all humanity.

It's not about public relations, it's about living with the actual result
for the next ten billion years if that wonderful PR invariant turns out to
be a bad idea.

!!! First, it is about public relations (initially), or else your efforts
at FAI may be stomped by the establishment and foom! some non-Friendly AI
will be developed...into the razor blades blindly...it behooves us to make
the approach palatable to humanity.
Second, you are correct - it is about living with the result...whatever
changes our collective volition intends to make, it will be capable
of making them, and of protecting our right to choose...remember, this
fundamental aspect is the 'last judge' decision on the individual's
part...it doesn't matter to the environment, or the system, or the
majority of the actions of the collective...only and specifically to
each individual and their right to choose/allow/deny action taken TO them.

> (and besides I can't really object about
> modifications if I was consulted now can I?)

Not under your system, no.  I would like to allow your grownup self and/or
your volition to object effectively.

!!! Sorry...the decision is MINE...and there is no rush...even if the
projected volition is correct and my future self would have wanted that, I
still require the CHOICE - maybe it would be better if I were to follow
the recommendation, but life is a journey and I don't want to skip
ahead...The system should present the option and respect the decision.

> Do not tell me that 'oops we got it wrong...' as indicated here:
>
>>> The reason may be, "That idiot Eliezer screwed up the extrapolation
>>> dynamic."  If so, you got me, there's no defense against that.  I'll
>>> try not to do it.
>
> Instead (using the principle of no modification to sentients without
> express permission) the system can tell me "Hey, you'd be much happier
> if you had green hair, we've done some calculations and if at least 20%
> of the population had green hair then there would be a 15% reduction in
> the general unhappiness quotient... Can I make this modification to you
> or would you like a deeper explanation of the intents and consequences?"

I suppose that if that is the sort of solution you would come up with after
thinking about it for a few years, it might be the secondary dynamic.  For
myself I would argue against that, because it sounds like individuals have
been handed genie bottles with warning labels, and I don't think that's a
good thing.

!!! But that's exactly the point...if you don't think it's a good thing,
well, that doesn't matter to me...
I am the one who has to choose. And remember, this is only in respect to
modifications the system DECIDES to MAKE to me...

How it reacts to 'wishes' (a la genie) is a whole other
discussion...this fundamental application of CHOICE applies only to
things the system decides it needs to do and that in the process must
CHANGE me...at that point I get to choose.

> I think I'm mindful that the system is likely to evolve fast, (go foom!
> (hopefully in a good way!)) and that even if it is Friendly and has my
> best interests at heart I still may not want to participate in all
> aspects of the system, even if its calculations tell it that I would in
> fact in the future have appreciated being modified. I think I do foresee
> a hard takeoff scenario and as long as the fundamentals are good then
> even when no person or group of people is capable of understanding even
> a small percent of the operations or actions of the system as long as
> they have and retain personal choice over their own person (and possibly
> local environment) then things will be fine.
>
> (I don't particularly care that the system decided it needed to convert
> 90% of Arizona into a giant radio transmitter - just don't make me into
> one of the support beams!)
>
> Brent <== *likely to have green hair if the system says it would help
> the singularity, but glad to be consulted*

The title of this subject line is "fundamentals".  There is a fundamental
tradeoff that works like this:  The more *assured* are such details of the
outcome, even in the face of our later reconsideration, the more control is
irrevocably exerted over the details of the outcome by a human-level
intelligence.  This holds especially true of the things that we are most
nervous about.  The more control you take away from smarter minds, for the
sake of your own nervousness, the more you risk damning yourself.  What if
the Right of Withdrawal that you code (irrevocably and forever, or else why
bother) is the wrong Right, weaker and less effective than the Right of
Withdrawal the initial dynamic would have set in place if you hadn't
meddled?

The more *predictable* is the particular detail you care about, the more
that detail is being specified by a human-level intelligence.  I have said
this before, but the moral challenges of FAI require FAI to solve - one
must work with the stuff that good ideas are made of, dip into the well of
good solutions, and not depend on one's own ability to come up with the
best answer.

!!! Again...there is only one fundamental thing here...insofar as the
system decides it needs to modify me,
it must first obtain permission...that's it. Pretty basic, and no tradeoff
required...remember, this applies only to
modifications to my person/self/intellect that the system deems
necessary. This control (if I have any say about it, and that's the basic
point, isn't it?) must not be surrendered. And there are no circumstances
that I can foresee (with my limited 2004 intellect, yes...but that is the
sentient being asked to make a choice) that cannot wait, be fully
explained, and abide by my choice. The collective volition guides the
systems of the universe as it should...I just reserve the right to say
'no' as regards my self/intellect/personality.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

