Re: Flawed Risk Analysis (was Re: SIAI's flawed friendliness analysis)

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Sun May 18 2003 - 15:31:59 MDT


On Sat, 17 May 2003, Michael Roy Ames wrote:

> Bill Hibbard wrote:
>
> > [snip]
> > It will be impossible for regulation to prevent all
> > construction of unsafe AIs, just as it is impossible to prevent
> > all crimes of any kind.
>
> Very true.
>
>
> > But for an unsafe AI to pose a real
> > threat it must have power in the world, meaning either control
> > over significant weapons (including things like 767s), or access
> > to significant numbers of humans. But having such power in the
> > world will make the AI detectable,
>
> Theoretically, yes. Practically, maybe not. Regular humans have such
> powers in the world right now, and they are certainly not always
> detected before they do something bad... or even most of the time.

We don't have a way to inspect human brains in the way we will
inspect AI brains. And even if we could, we don't have the same
level of motivation to inspect humans. With AIs, inspection is
critical to the future survival of humanity.

I should make it clear that even with a strong effort to
detect and inspect AIs, there is no guarantee that all AIs
will be safe. But without that effort, unsafe AIs will be
guaranteed.

> > so that it can be inspected
> > to determine whether it conforms to safety regulations.
>
> This is too much of a leap. Detection, I'll give you, is possible,
> even if impossible to guarantee. Inspection for conformance to some
> set of regulations is simply pointless because:
>
> a) an AI will be able to self-modify into something different, thus
> making 'point-in-time' inspections of little value
>
> b) inspecting an AI will be an incredibly complex and difficult task
> requiring the intelligence and tracking abilities of a phalanx of highly
> talented people with computer support, so it will take a lot of time to
> complete, rendering such inspections out of date and therefore of little
> value.

I never said it would be easy. We must take the time and
effort to inspect every AI to make sure its design
conforms to regulations. Regulation certainly slowed down
construction of nuclear power plants (before construction
stopped altogether), and it will slow down AI development.
But there's no reason to rush.

> > I don't think that determining the state of mind of the
> > programmers who build AIs is all that relevant, [snip]
>
> Your opinion here is held by only a small minority of people. I
> disagree with it because, in humans, state of mind affects what people
> do. A person who wishes to improve freedom and security for all with
> the minimum of violation of volition is going to behave quite
> differently from a person who wants to be Emperor of the Universe.
>
>
> > just as the
> > state of mind of nuclear engineers isn't that relevant to
> > safety inspections of nuclear power plants.
>
> On the contrary, it is the most relevant aspect of inspections. If the
> nuclear engineers in charge of the plant don't give a damn about safety,
> then when the power plant does break (and they all do) it is unlikely
> that proper corrective steps will be taken.

In my book, I advocate screening the humans who will teach
young AIs. I make the analogy to the screening of people who
control major weapons.

But inspecting designs is different from inspecting operations.
I'll grant that those inspecting designs may want to estimate
the intentions of the designers, but the ultimate judgement
about the design must come from an inspection of the design
itself.

> > The focus belongs
> > on the artifact itself.
>
> Correct. But your statement seems to imply that the 'artifact' is
> unchanging. This is untrue for any of the mind designs I have seen so
> far, including the human mind. Minds change, and an AI is going to be
> faster and more capable at changing its mind than humans are.

We cannot let it outrun our ability to inspect. There will be
no rush.

This is one area where I agree with the SIAI, in their
recommendation 8, "Controlled ascent". A careful regulation
process is a good way to implement controlled ascent.

> > The danger of outlaws will increase as the technology for
> > intelligent artifacts becomes easier.
>
> We agree on this.
>
>
> > But as time passes we
> > will also have the help of safe AIs to help detect and
> > inspect other AIs.
>
> Again, you assume too much. You are assuming here that we will have safe
> AIs before unsafe AIs exist. If this does not come to pass, then:
> pooof!

In the early days, AI technology won't be widely available,
so inspection efforts can focus on the few successful groups.
No rush. Let's build the first strong AIs slowly, with the public
insisting on an intensive effort to formulate and enforce
regulations. Humanity can afford to take its time. It cannot
afford to get it wrong because of some imagined need to rush.

By pointing out all these difficulties you are helping
me make my case about the flaws in the SIAI friendliness
analysis, which simply dismisses the importance of
politics and regulation in eliminating unsafe AI.

Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@demedici.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html
