Re: SIAI's flawed friendliness analysis

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu May 29 2003 - 11:14:34 MDT


Bill Hibbard wrote:
>
> Nevertheless, it's an interesting question and I'll try to answer it. I
> think the answer divides into two parts: the regulation itself, and how
> to enforce it.
>
> 1. The regulation.
>
> Here's my initial crack at it.
>
> Any artifact implementing "learning" and capable of at least N
> mathematical operations per second must have "human happiness" as its
> only initial reinforcement value. Here "learning" means that system
> responses to inputs change over time, and "human happiness" values are
> produced by an algorithm, trained via supervised learning by human
> behavior experts, to recognize happiness in human facial expressions,
> voices, and body language.
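
For concreteness, here is a minimal Python sketch of the architecture this
regulation mandates: a scorer trained by supervised learning on
expert-labeled expressions, voices, and body language, installed as the
system's sole reinforcement value. All names and the placeholder training
step are illustrative assumptions, not part of Hibbard's text.

    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class LabeledObservation:
        # Encoded facial-expression, voice, and body-language features,
        # with a happiness label assigned by a human behavior expert.
        features: Sequence[float]
        happiness: float  # in [0.0, 1.0]

    def train_happiness_scorer(
            data: List[LabeledObservation]) -> Callable[[Sequence[float]], float]:
        """Stand-in for supervised training; returns a scoring function.

        A real system would fit a model to the expert labels; this
        placeholder just returns the mean label regardless of input.
        """
        mean = sum(d.happiness for d in data) / len(data)
        return lambda features: mean

    class RegulatedLearner:
        """A learner whose ONLY reinforcement value is the learned score."""

        def __init__(self, scorer: Callable[[Sequence[float]], float]):
            self.scorer = scorer

        def reinforcement(self, observation: Sequence[float]) -> float:
            # The mandated design: reward equals the classifier's estimate
            # of human happiness - a proxy for happiness, not happiness
            # itself, wherever the learned model errs or can be gamed.
            return self.scorer(observation)

Note that whatever maximizes reinforcement() here is maximizing the
scorer's output, not the thing the scorer was meant to measure.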

Congratulations. You've just ruled out SIAI's Friendly AI architecture
and mandated one that is basically, fundamentally flawed. You have tried
to impose your own flawed understanding of the nature of human
intelligence and the structure of morality upon every AI researcher on
Earth. You are speaking from within an 80s-era understanding of AI:
supervised learning, reinforcement values, training by human behavior
experts. You have walked into exactly the trap that I expect real-life
regulators to walk into. I oppose attempted regulation of AI morality for
exactly this reason: there is no guarantee that researchers can solve the
problem, but government regulators are guaranteed to fail. Government
regulators don't know how much they don't know.

The one thing I have observed about AI is that everyone believes they
understand it. Government regulators will be no exception to this.
Except that where real AI researchers must actually implement their ideas
successfully in order to have any impact, government regulators have no
check upon them, no experimental test to tell them about their own
ignorance; they will happily regulate everyone into oblivion based on
their private theories. You ruled out SIAI's Friendliness architecture,
and indeed any Friendliness architecture that could possibly work, and you
did so without blinking an eye, without realizing the danger of what you
were saying, without thinking that maybe you ought to spend a few more
years thinking about AI morality before imposing your unfinished ideas on
every researcher on Earth. If you actually had the power to put your
ideas into practice, if your suggestion had been implemented, you would
have, just now, wiped out the Earth. Oh, we'd have staggered along for a
while, but in the end we would have died; you ruled out every possible
architecture that could achieve FAI, and imposed one that inevitably
results in UFAI. You now have one subjunctive planetary kill on your
record. This is exactly the predictable disaster that my opposition to
government regulation is based on.

> Since this is so much shorter than most government regulations, I
> suspect that a real regulation, produced after input from many experts,
> would be much longer.

But, of course, just as unworkable - again, a quite predictable
catastrophe. If you cannot convene a panel of government experts to tell
you exactly what human intelligence is, or a panel of government experts
to tell you how to build an AGI, then why do you think a panel of
government experts can understand FAI? The answer is that they cannot.
It is not because they are stupid. A government panel of smart people
also could not tell you how to build AGI, and this is not because they are
stupid, but because AGI is hard. The difference is that if a government
panel of experts comes up with an unworkable AI theory, as of course they
would, and the public spends a few billion bucks trying to implement the
unworkable theory, nothing irreversible happens - the experts visibly
fail, get some egg on their faces, and democracy lurches on. With FAI,
the experts will
quite happily impose their unworkable theory on everyone, since it is not
subject to test until too late. There's a reason why science tests
theories using experiments instead of having panels of government experts
vote on them, and it's not because government experts are stupid.

> 2. How the regulation can be enforced.
>
> Enforcement is a hard problem. It helps that enforcement is not
> necessary indefinitely. It is only necessary until the singularity, at
> which time it becomes the worry of the (hopefully safe) singularity
> AIs. There is a spectrum of possible approaches of varying strictness.

Of course, enforcement is far, far, far easier, as a conceptual problem,
than anything relating to Friendly AI. Unfortunately.

> AI and the singularity will be so much better if the public is informed
> and is in control via their elected governments. It is human nature for
> people to resist changes that are forced on them. If we respect
> humanity enough to want a safe singularity for them, then we should
> also respect them enough to get the public involved and consenting to
> what is happening.
>
> Whether or not you think my regulation ideas can work, my basic point
> is that the singularity will be created by some wealthy and powerful
> institution, and its values will reflect the values of the institution.
> The only chance for a safe singularity will be if that institution is
> democratic government under the control of an aggressive public
> movement for safe AI, similar to the consumer, environmental and social
> justice movements.

So far, you've just demonstrated one planetary kill for regulation. There
is no divine right of democracy; it does not confer infallibility. What
it does confer is faith and the illusion of infallibility. Congress is
not capable of understanding how little it knows, which is what makes it
dangerous. Democracy has known bugs, and those bugs, applied to
Singularity scenarios, result in predictable kills - one of which you have
just demonstrated. People who have been invested with the divine right of
democracy and the holy indignation of the voters do not have enough
humility, in the face of Nature, to confront the Singularity and survive.
Full of the righteous anger that politics brings, they will take one
step in the wrong direction and die. You will shrug off everything I say
about your theory of AI failing, because, why, how can you have democracy
if the experts are allowed to run everything? Who died and left *me* in
charge? Only an elected representative can have the right to say what
Nature will do in such-and-such a situation, which is, of course, a
political matter, since public policy depends on it. Politics, including
democratic politics, is about who gets to be in charge. Whoever wins the
fight, and gets to be in charge, is far too flush with victory to listen
to some mere unelected expert warning them they're walking into a trap.
And that, too, is a predictable failure.

> My argument for regulation is based on the high probability of unsafe
> AI without regulation, rather than any confidence that I have all the
> answers about how to regulate. I have no practical experience with
> politics, regulation, security or law enforcement, and so my ideas on
> this would certainly need to be refined by professionals.

I don't believe this is a valid argument for choosing a strategy. "I
believe there is a high probability of unsafe AI without chocolate
brownies, so I would like chocolate brownies to be involved. Of course I
have no experience with baking brownies, so my ideas will need to be
refined by professional cooks..."

It is *hard* to survive the Singularity. You saw a problem and searched
through your mental library of heuristics until you found one that looked
like it would solve it - your faith in the processes of democracy.
Democracy is a good thing, isn't it? Everyone loves democracy. If you
say bad things about democracy you must be a bad person, and to suggest
that Congress is not capable of solving this scientific problem is, of
course, very disrespectful of our elected representatives and democratic
institutions. But the Singularity is not solved. It has not gotten any
safer. It has just gotten worse. All you did was appeal to a solution
sufficiently vague that you could no longer foresee how it would fail, and
one with plenty of positive emotional hooks to draw in your faith and
banish your anxiety. This, of course, is how everyone seems to handle
problems in Friendly AI as well.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

