AGI Policy (was RE: SIAI's flawed friendliness analysis)

From: Keith Elis (hagbard@ix.netcom.com)
Date: Tue May 20 2003 - 16:12:54 MDT


This post is not directed to me, but I'm jumping in anyway.

Bill Hibbard:

> Pointing out the difficulties does not justify not even
> trying. Independent development of AI will be unsafe. A
> political process is not guaranteed to solve the problem, but
> it is necessary to at least try to stop humans who will
> purposely build unsafe AIs for their own imagined benefit.

You need to be far more specific about this. What do you mean by 'a
political process'? Do you mean each line of code is subject to
referendum? Surely not. Perhaps the design should be agreed upon by a
Senate subcommittee? Insisting on an unspecified process doesn't
really take a position.

A broad governmental policy with research guidelines that encourage
Friendly AI (perhaps coupled with an offer of Manhattan Project funding
to the most ethical researchers) *might* help. You admit that a
political process is not guaranteed to help Friendly AI. It probably
won't even come close. Friendly AI and compromise do not coexist.

> Regulation will make it more difficult for those who want
> to develop unsafe AI to succeed.

Please be more specific. The regulatory process is cumbersome, slow,
and mostly reactive. Good ideas are rarely implemented except as
solutions to mature problems. And the mature problems in this domain
are the ones you just don't have time to react to.

> The legal and trusted AIs
> will have much greater resources available to them and thus
> will probably be more intelligent than the unregulated AIs.
> The trusted AIs will be able to help with the regulation
> effort. I would trust an AI with reinforcement values for
> human happiness more than I would trust any individual human.

Are you talking about tool-level AGIs or >H AGIs? In the latter case, do
you really think a >H AGI would make laws the way we do? It's possible,
but even I can think of ways to establish far greater control over the
things I would need to control.

> It really comes down to who you trust. I favor a broad
> political process because I trust the general public more
> than any individual or small group.

Most people aren't geniuses. Worse, most people deduce ethics from
qualia. Average intelligence and utilitarian ethics might get you a
business degree, but that is not the caliber of person who needs to be
working on AGI.

> Of course, democratic
> government does enlist the help of experts on technical
> questions, but ultimate authority is with the public.

What public? Do you mean the 50% of American citizens of voting age who
actually cast a ballot? Or do you mean the rest of the world, too?

> When
> you say "AI would be incomprehensible to the vast majority of
> persons involved in the political process" I think you are
> not giving them enough credit. Democratic politics have
> managed to cope with some pretty complex and difficult problems.

Again, this is not directed to me, but can you name any that approach
the complexity and difficulty of AGI?

Keith
