Re: guaranteeing friendliness

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Nov 29 2005 - 09:53:57 MST


Robin Lee Powell wrote:
> On Tue, Nov 29, 2005 at 07:08:13AM +0000, H C wrote:
>
>>It's not so ridiculous as it sounds.
>>
>>For example, provide an AGI with some sort of virtual environment,
>>in which it is indirectly capable of action.
>>
>>Its direct actions would be in a text-only direct action area
>>(imagine its only direct actions being typing a letter on the
>>keyboard, such as in a text editor).
>
>
> Oh god, not again.
>

I am going to address your points out of order.

>
> Quick tip #3: Search the archives/google for "ai box".
>

Myself, I am one of those people who do know about that previous
discussion. If there is a succinct answer to my question below that was
clearly outlined in that discussion, would you be able to summarize it
for us? Many thanks.

> Quick tip #1: if it's *smarter than you*, it can convince you of
> *anything it wants*.
>

I recently heard the depressing story of a British/Canadian worker out
in Saudi Arabia who was falsely accused of planting bombs that killed
other British workers. He was tortured for three years by Saudi
intelligence officers. My question is: he was probably smarter than
his torturers. He *could* have been very much smarter than them. Why
did he not convince them to do anything he wanted? How much higher
would his IQ have had to be for him to convince them to set him free?

More generally, could you explain why you might consider it beyond
question that persuasiveness is an approximately monotonic function of
intelligence? That more smartness always means more persuasiveness?

Is it not possible that persuasiveness might flatten out after a while?
Perhaps it is the case that once the to-be-persuaded party is above a
certain critical level of intelligence, it does not matter how much
smarter the persuader is: it still cannot break free. Given how little
we know about what persuasion is, how can we be certain of the answer
to this question?

> Quick tip #2: what you're describing is called "slavery"; it has
> teensy little moral issues.
>

You jump the gun here a little. I am writing a book chapter all about
AGI slavery and motivation, and in it I talk about the dung beetle. It
is *designed* to get a lot of satisfaction from excrement. If I forced
a dung beetle to eat shit all day long it would be happy. If I
condemned a human to the same fate, they would be a slave.

If I designed an AGI with a motivation system that gave it pleasure from
making humans happy, and if it expressed the fervent desire to remain in
that state forever (even though it knew about its design and had the
ability to change itself), would you seize it, cut out its motivation
system to stop it getting such pleasure, and tell me that you had
rescued it from slavery?

Clarification needed.

Richard Loosemore
