RE: Security overkill

From: Gary Miller (garymiller@starband.net)
Date: Mon May 19 2003 - 11:08:30 MDT


Eliezer said:

>> That's the problem with outsiders making up security precautions for the
>> project to take; at least one of them will, accidentally, end up ruling
>> out successful Friendliness development.
 
Ahhh, but there is the rub... Who are the outsiders, and who are the
insiders?

What are the rules and secret handshakes that transform a mere outsider
into an insider?

Are all of the people on this list outsiders? Or is it just me?

And even if, by some miracle of God, I achieved the coveted status of a
true insider, would I be thus anointed and always correct? Nay, I say to
thee: brainstorms are meant to be exactly that, and challenged at every
turn.

And what of whosoever should commit the ultimate sin of having a bad
idea or making an unwise decision? Shall they be cast aside, to forever
remain among the outsiders?

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Eliezer
S. Yudkowsky
Sent: Saturday, May 17, 2003 11:45 PM
To: sl4@sl4.org
Subject: Security overkill

Gary Miller wrote:
> My proposed solution to the friendliness problem.
>
> Note some of you will laugh this off as overkill. But believe me,
> having worked as a consultant for the government for a number of
> years, this is just business as usual for the NSA. It is a very
> expensive but very secure development process. It is based upon
> separation and balance of power. No one person has the access and
> knowledge to compromise the system. Relationships between team members
> must be prohibited to prevent the possibility of collusion.

Overkill? No, I don't think it's overkill. I don't think there's any
such thing as overkill when it comes to certain problems. And even if it
were, what's wrong with overkill?

Note one thing, however: Safety is very, very expensive.

For example, I would very much like to have guaranteed frame-by-frame
reproducibility of AI development. You develop the AI for a week,
recording and timestamping all outside inputs. Then, when the week is
over, you take a snapshot. Then, on a separate computer, you take last
week's snapshot and run it forward, using the timestamped input. If the
final result doesn't match this week's snapshot, start over again from
last week. This guarantees that each and every frame of the AI's
existence is available to inspection, even five years later, or for that
matter after the Singularity.
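
A minimal sketch of that replay discipline, in Python, assuming a
strictly deterministic step() function (all names here -- step,
record_week, snapshot_hash -- are hypothetical; a real system would also
have to feed the logged timestamps back in wherever the AI reads the
clock, or determinism is lost):

import hashlib
import pickle
import time

def step(state, event):
    # Placeholder deterministic transition; the real AI update goes here.
    # It must not read the clock or random sources directly, or replay breaks.
    return state + [event]

def snapshot_hash(state):
    # Hash a serialized snapshot of the AI's state for comparison.
    return hashlib.sha256(pickle.dumps(state)).hexdigest()

def record_week(state, input_source):
    # Develop for a week, timestamping and logging every outside input.
    log = []
    for event in input_source:
        log.append((time.time(), event))
        state = step(state, event)
    return state, log

def replay_week(start_state, log):
    # On a separate computer, run last week's snapshot forward from the log.
    state = start_state
    for _timestamp, event in log:
        state = step(state, event)
    return state

last_snapshot = []   # snapshot taken at the end of last week
live_state, log = record_week(list(last_snapshot), ["input_a", "input_b"])
replayed = replay_week(list(last_snapshot), log)
if snapshot_hash(live_state) != snapshot_hash(replayed):
    raise RuntimeError("Divergence: start over from last week's snapshot")
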

But that would require specific software support, and the software
support would be expensive. At present it seems like no one much cares
about the Singularity, so expensive safety options are pretty much out
of the question. Sad. Pathetic, in fact. But I can't control humanity's
choices, only my own.

Aside from that, I see at least one major problem with the set of
precautions you proposed. You suggested separation of the system
architects from the development environment, which strikes me as both
infeasible, and suboptimally safe. Remember that Friendly AI is much
harder as a theoretical problem than as a trust problem. The theoretical
problem is harder because you can't solve it by throwing up security
walls. Security measures are one thing, but anything that actually
reduces the ability of the system architects to solve the problem of
*building* Friendly AI... no. You don't have that kind of safety margin;
or not knowably so, at any rate.

That's the problem with outsiders making up security precautions for the
project to take; at least one of them will, accidentally, end up ruling
out successful Friendliness development.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence



