RE: SIAI's flawed friendliness analysis

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Wed May 21 2003 - 13:59:19 MDT


Bill Hibbard wrote:
> On Sun, 18 May 2003, Rafal Smigrodzki wrote:

>>
>> If independent development of AI was unsafe, a political process
>> would not make it any less so.
>
> Pointing out the difficulties does not justify not even
> trying. Independent development of AI will be unsafe. A
> political process is not guaranteed to solve the problem,
> but it is necessary to at least try to stop humans who
> will purposely build unsafe AIs for their own imagined
> benefit.

### But pointing out difficulties may focus attention on exactly what should
be tried, instead of leaving it merely hinted at in pleasantly general terms
without any concrete plan, with salutary effects on the outcomes.

Now, good intentions alone are not sufficient to claim the necessity of an
action; necessity arises only where a reasonable chance of success exists.

----------------------------

>
> Regulation will make it more difficult for those who want
> to develop unsafe AI to succeed. The legal and trusted AIs
> will have much greater resources available to them and thus
> will probably be more intelligent than the unregulated AIs.
> The trusted AIs will be able to help with the regulation
> effort. I would trust an AI with reinforcement values for
> human happiness more than I would trust any individual
> human.

### Here we definitely agree - a huge government-funded AI program would be
a great idea. Since the FAI may be interpreted as the ultimate public good,
a boon for all, yet profit for nobody, a good case can be made for public
funding of this endeavor, just like the basic science that gave us modern
medicine. This program, if open to all competent researchers, with results
directly available to all humans, could vastly accelerate the building of
the FAI.

Now, of course, if you are familiar with the way research progresses, you
know that the power of public research comes from the lack of bureaucratic
regulation and the reliance on peer review, collaboration and public
disclosure. This is why AIs from this program would be better than AIs
built in little closeted groups, and since only peer-reviewed programs (as
opposed to bureaucrat-reviewed ones) would be funded, there would be the
best balance between safety and efficiency. The government program could
have the "security overkill" measures described here recently, while still
running circles around the Islamic programmer trying to build his own phone
line to Allah. In this sense "regulation" would mean the best that human
minds can come up with, as opposed to the contrived schemes produced by
legislatures that try to micromanage. The only input from the political
process would be the recognition of the importance of the problem and the
appropriation of funds, with the actual implementation left to publicly
operating experts with good track records of achievement, working in the
competitive yet collaborative, democratic but multicentric fashion
typical of all successful innovative projects.

The positive outcome would be the result of faster development of FAI,
rather than of direct human-mediated retardation of the UnFAI. The FAI might
perhaps decide to act as a regulator of other AIs, if this were the smartest
move, in agreement with ideas previously presented by Eliezer. We could
trust this FAI as much as or more than any other AI, because the open,
competitive process of science, whether privately or publicly funded, is the
best institutional approximation of rationality humans have so far come up
with.

I have a feeling, though, that you are less interested in using the
government's carrot than in relying on its stick. You mention making it more
difficult for those who want to develop unsafe AI. There are some methods
that tend to come first to the minds of government-oriented people - burly
men with guns, codes, statutes, secrecy and prisons. They seem easy, but
their long-term effects are complex, and frequently counterproductive, which
is why I want to use them only if I have absolutely no inkling of any better
ideas.

Exactly what means do you want to use to "make things difficult" without
interfering with the efforts I described above?

An approach that slows FAI more than UnFAI should *not* be tried, no matter
how easy it appears.

--------------------------------------

>
> It really comes down to who you trust.

### Yes, it comes down to whether you trust the stick or the carrot. I
prefer the latter, with only very limited uses for the former, and not when
applied to FAI.

Rafal


