From: Bill Hibbard (firstname.lastname@example.org)
Date: Fri May 23 2003 - 10:14:59 MDT
On Tue, 20 May 2003, James Rogers wrote:
> Bill Hibbard wrote:
> > Humans will create the first AIs, and certainly will
> > understand their designs and code. If human designers
> > can understand it, then human regulators will also be
> > able to understand it.
> In what finite universe is this true, unless the regulators are also the
> designers? And in practice such things are not even vaguely close to the
> theoretical optimal, such that we have no reason to believe that this will
> be the lone exception. Indeed, rational analysis suggests this would be an
> utter disaster.
I write open source visualization software for a living.
In the open source community it is common for people to
take over development of other people's systems, and for
newcomers to a project to become experts in it. Many of
these systems are very large and complex.
There are also plenty of examples, in both commercial and
military espionage, of complex software being successfully
analyzed from compiled object code alone, with no
cooperation from the designers.
Probably the most efficient way to implement regulation
would be to embed the inspectors directly in the design
and coding team, but have them report to the regulators
rather than to the AI project managers. This is what NASA
does with its QA inspectors for space flight hardware:
embedded in the design team, but reporting directly to NASA.
Once safe AIs start designing other AIs, we could adopt
inspection by independent safe AIs (much the way
NASA's shuttle uses one independently designed and coded
on-board computer system to verify the actions of another).
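The cross-checking arrangement described above amounts to a simple architectural pattern: one system proposes an action, and an independently designed and coded checker must approve it before execution. A minimal sketch, purely illustrative (the function names, control law, and safety limit are all hypothetical, not taken from any real flight system):

```python
# Hypothetical sketch of independent verification: a command proposed by
# a primary system is executed only if a separately implemented checker
# agrees it falls within a safe envelope. All names and limits here are
# illustrative assumptions, not from any actual NASA design.

def primary_thruster_command(target_rate, current_rate):
    """Primary controller: propose a thruster burn duration (seconds)."""
    return (target_rate - current_rate) * 2.0  # simple proportional law

def independent_check(burn_seconds, max_burn=5.0):
    """Independently coded verifier: reject commands outside the envelope."""
    return abs(burn_seconds) <= max_burn

def execute_if_verified(target_rate, current_rate):
    """Run the primary controller, then gate its output on the verifier."""
    cmd = primary_thruster_command(target_rate, current_rate)
    if independent_check(cmd):
        return ("EXECUTE", cmd)
    return ("ABORT", cmd)
```

The point of the pattern is that the verifier shares no code with the primary system, so a single design or coding error is unlikely to slip past both.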
In my experience, if one group of people can create it,
then another group can understand it.
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
email@example.com 608-263-4427 fax: 608-263-6738
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT