Re: SIAI's flawed friendliness analysis

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Tue May 20 2003 - 15:19:29 MDT


On Sun, 18 May 2003, Brian Atkins wrote:

> Bill Hibbard wrote:
> > On Sat, 17 May 2003, Brian Atkins wrote:
> >>Bill Hibbard wrote:
> >>>The danger of outlaws will increase as the technology for
> >>>intelligent artifacts becomes easier. But as time passes we
> >>>will also have the help of safe AIs to help detect and
> >>>inspect other AIs.
> >>>
> >>
> >>Even in such fictional books as Neuromancer, we see that such Turing
> >>Police do not function well enough to stop a determined superior
> >>intelligence. Realistically, such a police force will only have any real
> >>chance of success at all if we have a very transparent society... it
> >>would require societal changes on a very grand scale, and not just in
> >>one country. It all seems rather unlikely... I think we need to focus on
> >>solutions that have a chance at actual implementation.
> >
> >
> > I never said that safe AI is a sure thing. It will require
> > a broad political movement that is successful in electoral
> > politics. It will require whatever commitment and resources
> > are needed to regulate AIs. It will require the patience to
> > not rush.
>
> Bill, I'll just come out and state my opinion that what you are
> describing is a pipe dream. I see no way that the things you speak of
> have any chance of happening within the next few decades. Governments
> won't even spend money on properly tracking potential asteroid threats,
> and you honestly believe they will commit to the VAST amount of both
> political willpower and real world resource expenditures required to
> implement an AI detection and inspection system that has even a low
> percentage shot at actually accomplishing anything?
>
> And that is not even getting into the fact that by your design the "good
> AIs" will be crippled by only allowing them very slow intelligence/power
> increases due to the massive stifling human-speed
> design/inspection/control regime... they will have zero chance to
> scale/keep up as computing power further spreads and enables vastly more
> powerful uncontrolled UFAIs to begin popping up. The result is seemingly
> a virtual guarantee that eventually an UFAI will get out of control (as
> you state, your plan is not a "sure thing") and easily "win" over the
> regulated other AIs in existence. So what does it accomplish in the end,
> other than eliminating any chance that a "regulated AI" could "win"?
>
> Finally, how does your human-centric regulation and design system cope
> with AIs that need to grow to be smarter than human? Are you proposing
> to simply keep them limited indefinitely to this level of intelligence,
> or will the "trusted" AIs themselves eventually take over the process of
> writing design specs and inspecting each other?

If humans can design AIs smarter than humans, then humans
can regulate AIs smarter than humans. It is not necessary
to trace an AI's thoughts in detail, only to understand
the mechanisms by which it thinks. Furthermore, once trusted
AIs are available, they can take over the details of
design and regulation. I would trust an AI with
reinforcement values for human happiness more than I
would trust any individual human.

This is a bit like the experience of people who write
game playing programs that they cannot beat. All the
programmer needs to know is that the logic for
simulating the game and for reinforcement learning is
accurate and efficient, and that the reinforcement
values reward winning the game.
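As a minimal sketch of this point (the game and the learning
setup below are only illustrative choices, not anything from
the earlier messages), consider a small tabular reinforcement
learner for the game of Nim. The only parts a programmer has
to verify are the game rules and the fact that reward is given
solely for winning; the learned policy itself can end up
playing better than its author.

import random
from collections import defaultdict

PILE = 15            # starting number of stones
ACTIONS = (1, 2, 3)  # a move removes 1, 2, or 3 stones
ALPHA, EPSILON = 0.5, 0.1

Q = defaultdict(float)  # Q[(stones_left, action)] -> estimated value of the move

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones, explore=True):
    acts = legal(stones)
    if explore and random.random() < EPSILON:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(stones, a)])

def learn(episodes=50000):
    for _ in range(episodes):
        stones = PILE
        history = []            # (state, action) pairs for the learner's own moves
        learner_to_move = True
        learner_won = False
        while stones > 0:
            a = choose(stones) if learner_to_move else random.choice(legal(stones))
            if learner_to_move:
                history.append((stones, a))
            stones -= a
            if stones == 0:
                learner_won = learner_to_move  # whoever takes the last stone wins
            learner_to_move = not learner_to_move
        # The reinforcement value comes only from winning the game -- the part
        # a human can verify directly, independent of the learned policy.
        reward = 1.0 if learner_won else -1.0
        for s, a in history:
            Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

if __name__ == "__main__":
    learn()
    for s in range(1, PILE + 1):
        print(s, "->", choose(s, explore=False))

The same division of labor is what I am proposing for
regulation: verify the mechanism and the values, not every
thought.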

You say "by your design the 'good AIs' will be crippled
by only allowing them very slow intelligence/power
increases due to the massive stifling human-speed". But
once we have trusted AIs, they can take over the details
of designing and regulating other AIs. The real crippling
effect will fall on developers of unregulated AIs, who
cannot come out into the open to seek resources. Cooperating
corporations and governments will have much larger
resources available for developing regulated AIs.
Don't misinterpret this: I am not saying that it is sure
to succeed (nothing is). But it is much better to use
the force of law and the resources of government to help
solve the problem.

> > By pointing out all these difficulties you are helping
> > me make my case about the flaws in the SIAI friendliness
> > analysis, which simply dismisses the importance of
> > politics and regulation in eliminating unsafe AI.
> >
>
> This is a rather nonsensical mantra... everyone is pointing out the
> obvious flaws in your system- this does not help your idea that politics
> and regulation are important pieces to the solution of this problem.
> Tip: drop the mantras, and actually come up with some plausible answers
> to the objections being raised.

Calling this a "nonsensical mantra" does not answer it.
The objections are just possible ways that a political
solution may fail. Of course it may fail. But it's the
best chance of success. It really comes down to who you
trust. I favor a broad political process because I trust
the general public more than any individual or small
group. Of course, democratic government does enlist the
help of experts on technical questions, but ultimate
authority is with the public.

> SIAI's analysis, as already explained by Eliezer, is not attempting at
> all to completely eliminate the possibility of UFAI. As he said, we
> don't expect to be able to have any control over someone who sets out to
> deliberately construct such an UFAI, and we admit this reality rather
> than attempt to concoct world-spanning pipe dreams.

Powerful people and institutions will try to manipulate
the singularity to preserve and enhance their interests.
Any strategy for safe AI must try to counter this threat.

> P.S. You completely missed my point on the nanotech... I was suggesting
> a smart enough UFAI could develop in secret some working nanotech long
> before humans have even figured out how to do such things. There would
> be no human nanotech defense system. Or, even if you believe that the
> sequence of technology development will give humans molecular nanotech
> before AI, my point still stands that a smart enough UFAI will ALWAYS be
> able to do something that we have not prepared for. The only way to
> defend against a malevolent superior intelligence in the wild is to be
> (or have working for you) yourself an even more superior intelligence.

I didn't miss your point. I accepted that nanotech is a big
threat, along with genetic engineering of micro-organisms.
I added that nanotech will be a threat with or without AI.
The way to counter the threat of micro-organisms has been
detection networks, isolation of affected people and
regions, and urgent efforts to analyze the organisms and
find countermeasures. There are also efforts to monitor
the humans with the knowledge to create new micro-organisms.
These measures all have the force of law and the resources
of government behind them. Similar measures will apply to
the threat of nanotech. When safe AIs are available, they
will certainly be enlisted to help. With such huge threats
as nanotech, the pipe dream is to think that they can be
countered without the force of law and the resources of
government. Or to think that government won't get involved.

It really comes down to who you trust. I favor a broad
political process because I trust the general public more
than any individual or small group. Of course, democratic
government does enlist the help of experts on technical
questions, but ultimate authority is with the public.

----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@demedici.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html


