Re: SIAI's flawed friendliness analysis

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Thu May 29 2003 - 09:38:26 MDT


On Mon, 26 May 2003, Eliezer S. Yudkowsky wrote:

> Bill, I've had conversations similar to these before. I'll give the
> challenge that has so far defeated every single proponent of AI regulation:
>
> "Name one specific regulation, in enough detail to make it enforceable,
> that you believe would improve the chances of a safe Singularity if the
> government attempted to enforce it."
>
> It is easy enough to call for "regulation". I have never yet heard anyone
> call for some specific regulation.

My argument for regulation is based on the high probability
of unsafe AI without regulation, rather than any confidence
that I have all the answers about how to regulate. I have
no practical experience with politics, regulation, security
or law enforcement, and so my ideas on this would certainly
need to be refined by professionals.

Nevertheless, it's an interesting question and I'll try to
answer it. I think the answer divides into two parts: the
regulation itself, and how to enforce it.

1. The regulation.

Here's my initial crack at it.

  Any artifact implementing "learning" and capable of at
  least N mathematical operations per second must have "human
  happiness" as its only initial reinforcement value. Here
  "learning" means that the system's responses to inputs change
  over time, and "human happiness" values are produced by an
  algorithm trained by supervised learning, under the direction
  of human behavior experts, to recognize happiness in human
  facial expressions, voices and body language.

Since this is so much shorter than most government
regulations, I suspect that a real regulation, produced
after input from many experts, would be much longer.

The threshold of N mathematical operations per second is
picked to be high enough to allow non-intelligent applications
like weather prediction (actually, most weather models don't
learn and so would be exempt from the N limit), and low enough
to exclude intelligence significantly greater than human
intelligence. Based on the opinions of various experts, a
guess at the value of N might be 10^15. There may be "mundane"
learning applications (i.e., ones in no danger of becoming
intelligent) that need more than N operations per second;
these could get case-by-case exemptions (with inspection to
verify how they are being used).
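
As a toy illustration of the coverage test this implies (the
numbers and the function here are my own hypothetical
illustration, not part of the regulation text):

  # Hypothetical illustration of the coverage test implied by the regulation.
  N = 1e15  # proposed ceiling, in mathematical operations per second

  def covered_by_regulation(ops_per_second, implements_learning, has_exemption=False):
      # Covered only if the system both "learns" and reaches N ops/sec,
      # and holds no case-by-case "mundane" exemption.
      return implements_learning and ops_per_second >= N and not has_exemption

  print(covered_by_regulation(4e13, implements_learning=True))   # large learning system, below N: not covered
  print(covered_by_regulation(2e15, implements_learning=False))  # fast non-learning weather model: not covered
  print(covered_by_regulation(2e15, implements_learning=True))   # covered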

As with any law, disputes would be settled before a court
with judges, lawyers representing both parties, and expert
witnesses.

2. How the regulation can be enforced.

Enforcement is a hard problem. It helps that enforcement is
not necessary indefinitely. It is only necessary until the
singularity, at which time it becomes the worry of the
(hopefully safe) singularity AIs. There is a spectrum of
possible approaches of varying strictness. I'll describe
two:

a. A strict approach.

Disallow all development of "learning" machines capable of
at least N operations per second, except for a government
safe AI project (and exempt "mundane" learning applications).
This would be something like the Manhattan Project (only the
government is allowed to build nuclear weapons, although
contractors are involved).

The project could include people working for the government
and for private corporations. There could be multiple competing
designs in the project (e.g. "Fat Man" and "Little Boy"). The
project would have huge resources, which would have the side
effect of attracting talented AI designers away from the
temptation of outlaw AI projects. All designs would be
inspected and reviewed for compliance with the regulation,
overseen by the National Academies of Engineering and
Science.

The focus for detecting illegal projects could be on computing
resources and on expert designers. Computing chips are widely
available, but chip factories aren't. There is already talk of
using the concentration of ownership of chip manufacturing to
implant copyright protection in every chip. It's called TCPA
and I'm against it - see my article at:

  http://www.ssec.wisc.edu/~billh/roads.html

Something very much like TCPA could be implanted in every chip
above a certain power (N/M, where M = 1000 or 10000) to detect
when such chips are being used in sufficiently large clusters
on tightly coupled problems, and to make them cease operating
unless they have an inspection certificate.
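
A minimal sketch of the kind of check such a mechanism might
run (the chip-level interface here is entirely invented for
illustration; real hardware enforcement would be far messier):

  # Hypothetical on-chip check; every name and number here is an assumption.
  N = 1e15            # regulated ops/sec ceiling from the proposed regulation
  M = 1000            # cluster-size factor suggested above
  CHIP_LIMIT = N / M  # chips above ~10^12 ops/sec carry the mechanism

  def may_operate(chip_ops_per_second, coupled_cluster_size, has_certificate):
      # Small chips, small tightly coupled clusters, and certified
      # installations run freely; an uncertified large cluster of
      # powerful chips refuses to operate.
      if chip_ops_per_second <= CHIP_LIMIT:
          return True
      if coupled_cluster_size < M:
          return True
      return has_certificate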

Another tool of strict enforcement could be to prohibit open
sales of chips with power greater than N/M. Chips with greater
than this power would only be available to certified inspected
server centers. The primary need for computing power close to
users is visuals and sound. Chips at 10^12 operations per
second (just about where the current technology driving Moore's
Law is predicted to run out) should be plenty for these needs,
especially in small clusters (clusters of fewer than M such
chips would be legal). Otherwise, the trend is to put most
computing power in central server sites anyway, so restricting
the most powerful chips to secure central sites should not
distort the computing world too much (I don't pretend there
would be no distortion).
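
The arithmetic behind those limits, with the guessed values
above (N = 10^15, M = 1000), works out as follows:

  # Back-of-the-envelope check of the proposed limits.
  N, M = 1e15, 1000
  per_chip_cap = N / M                            # 10^12 ops/sec, openly sellable
  largest_legal_cluster = (M - 1) * per_chip_cap  # i.e. just under 10^15, still below N
  print(per_chip_cap, largest_legal_cluster)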

Illegal projects could also be detected through their need for
expert designers. As long as the police are not corrupt or lazy
(hence the need for an aggressive public movement driving
aggressive enforcement), they can develop and exploit informers
among any outlaw community. It's hard to do an ambitious project
like creating AI without a lot of people knowing something
about it. They are vulnerable to bribes, and they get into
feuds and turn each other in.

Although we all love to root for the little garage-shop
operations, the overwhelming probability is that machine
intelligence will first appear in facilities that look like
this (4x10^13 operations per second):

  http://www.es.jamstec.go.jp/esc/eng/GC/b_photo/esc11.jpg

Such projects are detectable by the enormous resources they
consume and the numbers of people involved.

Internationally, there could be treaties analogous to those
controlling certain types of weapons. These would prohibit
military use of learning machines capable of more than N
operations per second, and would set up international bodies
analogous to the IAEA for coordinating regulation and
inspection.

b. A less strict approach.

This would be like the strict approach, except that safe AI
projects outside the government could be licensed, in
addition to the government project. These projects would
have inspectors embedded in their design teams. The burden
of proof would be on the designers to convince the
inspectors that their designs comply with the regulation.
As with the government project, all designs would be
reviewed for compliance with the regulation, overseen by
the National Academies of Engineering and Science.

3. Wild cards.

There are all sorts of wild cards that could change the
scenario for regulation considerably:

a. Some new technology, such as quantum computing, enables
anyone with $100 million to fabricate computing devices
capable of 10^30 operations per second.

b. Novamente (just to pick an AI project) demonstrates
human-level intelligence using just 10^11 operations per
second.

c. Saddam Hussein uses his 4 semi loads of $100 bills to
buy a million PlayStation 2s and hire AI design geniuses
to create an unsafe singularity in a remote province of
Kazakhstan.

There is no way to come up with a regulation plan that will
meet every contingency. Governments game out many contingencies
for the issues they care about; that is a lot of work, and it
usually fails to anticipate what actually happens.
In any issue as complex as the singularity, it is
inevitable that strategy must be adaptable.

The other thing to realize is that a lot of scenarios for
the singularity could result in violent human conflict.
If an AI grows fast but does not instantly eliminate human
governments, then the public may be frightened and the
governments may react defensively in a sort of "national
security war over AI". It is impossible to game all these
scenarios out, but the important point is that some pretty
bad scenarios are possible. Which leads to my last point ...

4. The consent of the governed.

AI and the singularity will be so much better if the public
is informed and is in control via their elected governments.
It is human nature for people to resist changes that are
forced on them. If we respect humanity enough to want a safe
singularity for people, then we should also respect them
enough to get the public involved in, and consenting to, what
is happening.

Whether or not you think my regulation ideas can work, my
basic point is that the singularity will be created by some
wealthy and powerful institution, and its values will reflect
the values of the institution. The only chance for a safe
singularity will be if that institution is democratic
government under the control of an aggressive public movement
for safe AI, similar to the consumer, environmental and
social justice movements.

----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@demedici.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html


