About "safe" AGI architecture

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 12 2004 - 09:02:17 MDT


This excerpt from a private email I sent last week to someone who was
asking me about FAI, Novamente, and "safety mechanisms" may be of
interest to some on this list...

-- Ben G

******

To indicate why I'm not as worried about Novamente as you are -- in the
future scenario in which my project succeeds and Novamente becomes a
clever infrahuman AI -- I'll briefly describe how we achieve a high
degree of safety in the Novamente design. I'm not sure how much
software background you have, so some of this may be opaque to you.

The main point is, the Novamente design is layered.

There is a C++ layer, which implements the "Novamente core": a kind of
special "operating system" that handles scheduling of cognitive
operations, movement of data between disk and RAM, network
communications, and storage and retrieval of nodes and links in
Novamente's RAM-based knowledge repository, the AtomTable.
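
To give a rough flavor of what I mean (this is just a toy sketch with
made-up names, not actual Novamente code), a RAM-based table of typed
nodes and links might look something like this in C++:

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Toy stand-ins for node/link types; the real type system is much richer.
enum class AtomType { ConceptNode, SchemaNode, InheritanceLink };

struct Atom {
    AtomType type;
    std::string name;                     // empty for links
    std::vector<std::uint64_t> outgoing;  // target handles, for links
};

// A minimal in-RAM store of atoms, addressed by handle.
class AtomTable {
public:
    std::uint64_t add(Atom a) {
        std::uint64_t h = next_++;
        atoms_.emplace(h, std::move(a));
        return h;
    }
    const Atom* get(std::uint64_t h) const {
        auto it = atoms_.find(h);
        return it == atoms_.end() ? nullptr : &it->second;
    }
private:
    std::uint64_t next_ = 1;
    std::unordered_map<std::uint64_t, Atom> atoms_;
};

int main() {
    AtomTable table;
    auto cat    = table.add({AtomType::ConceptNode, "cat", {}});
    auto animal = table.add({AtomType::ConceptNode, "animal", {}});
    table.add({AtomType::InheritanceLink, "", {cat, animal}});
    std::cout << "stored: " << table.get(cat)->name << "\n";
    return 0;
}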

One of the things implemented by this C++ layer is a type of node called
a SchemaNode, which wraps up a structure called a CombinatorTree. A
CombinatorTree is effectively a special computer program that is
interpreted by a special interpreter built into the Novamente core.
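
As a toy illustration of the idea (again with made-up names, not the
real CombinatorTree format), a SchemaNode holds a program represented
purely as data, and only an interpreter belonging to the core can
execute it:

#include <iostream>
#include <memory>
#include <vector>

// Toy program tree: constants plus a couple of primitive operations.
enum class Op { Const, Add, Mul };

struct CombinatorTree {
    Op op = Op::Const;
    double value = 0.0;                                 // used when op == Const
    std::vector<std::unique_ptr<CombinatorTree>> args;  // child subtrees
};

std::unique_ptr<CombinatorTree> leaf(double v) {
    auto t = std::make_unique<CombinatorTree>();
    t->value = v;
    return t;
}

std::unique_ptr<CombinatorTree> node(Op op, std::unique_ptr<CombinatorTree> a,
                                     std::unique_ptr<CombinatorTree> b) {
    auto t = std::make_unique<CombinatorTree>();
    t->op = op;
    t->args.push_back(std::move(a));
    t->args.push_back(std::move(b));
    return t;
}

// A SchemaNode just wraps a program tree; it is data, not native code.
struct SchemaNode {
    std::unique_ptr<CombinatorTree> program;
};

// The interpreter lives in the C++ core. Interpreted programs see only
// what this function exposes -- nothing here hands them the core itself.
double interpret(const CombinatorTree& t) {
    switch (t.op) {
        case Op::Const: return t.value;
        case Op::Add: {
            double s = 0.0;
            for (const auto& a : t.args) s += interpret(*a);
            return s;
        }
        case Op::Mul: {
            double p = 1.0;
            for (const auto& a : t.args) p *= interpret(*a);
            return p;
        }
    }
    return 0.0;
}

int main() {
    SchemaNode schema;
    schema.program = node(Op::Mul, node(Op::Add, leaf(2), leaf(3)), leaf(4));
    std::cout << interpret(*schema.program) << "\n";  // prints 20
    return 0;
}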

So, the programs wrapped in SchemaNodes can control Novamente's
cognitive operations. But there is no way for them to affect the
underlying C++ layer.

We can restrict Novamente's self-modification to automatic programming
of SchemaNodes. This allows it to modify all its thought processes but
not its underlying architecture.
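
To put that restriction in code-like terms (a hypothetical sketch, not
our actual interface), the only "write" operation exposed to the
learning machinery swaps the program tree inside a SchemaNode; there is
nothing analogous for modifying the core itself:

#include <memory>
#include <utility>
#include <vector>

// Same toy structures as in the sketch above (made-up names).
struct CombinatorTree {
    int op = 0;
    std::vector<std::unique_ptr<CombinatorTree>> args;
};

struct SchemaNode {
    std::unique_ptr<CombinatorTree> program;
};

// The one mutation offered to procedure learning: install a newly learned
// tree and return the old one (e.g. to keep it around for comparison or
// rollback). No comparable call exists for rewriting the C++ layer.
std::unique_ptr<CombinatorTree>
installProgram(SchemaNode& schema, std::unique_ptr<CombinatorTree> learned) {
    return std::exchange(schema.program, std::move(learned));
}

int main() {
    SchemaNode s;
    auto candidate = std::make_unique<CombinatorTree>();  // stand-in for a learned program
    auto previous  = installProgram(s, std::move(candidate));
    return previous == nullptr ? 0 : 1;  // first install, so nothing was replaced
}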

Of course, there's a possibility that a sufficiently smart NM system
could find some kind of hole in the separation between layers -- i.e., a
subtle software bug. This leads to the need for specialized narrow-AI
software focused on program-correctness checking, to verify that no
such bugs exist. It also points to a problem: the core is currently
implemented in C++, which isn't very compatible with formal program
verification technology. Therefore, for safety purposes, we'll need to
reimplement the core in C# or Java. Doing a Java reimplementation when
the time comes will not be extremely hard, since with gcc (the compiler
suite we use), Java and C++ are compiled to the same sort of native
binaries. My friends at supercompilers.com have unique technology that
allows formal verification of Java programs.

Of course, there's also a possibility that an evil, calculating
Novamente AI could convince its programmers to modify its implementation
and break the layering structure, with arguments such as: "My
intelligence is limited by this layering architecture ... Think how much
more good I could do for humanity, and how much more money I could make
YOU, if you let me achieve the intelligence leap that would come from
modifying all those dumb things you did in my core layer...." And
naturally, at some point it WILL be the case that the system figures out
better ways to modify its core ... But here is where we'll need to be
very, very, very careful....

However, we are not yet at the level where the system can learn new
cognitive processes for itself. For that, the system's procedure
learning component will need to be able to learn CombinatorTrees with
500-1000 nodes in them, and right now we're off from that by a factor of
10-20. This can't be solved merely by adding more machines, because
there are exponentially scaling processes involved. We have a bunch of
ideas for how to make procedure learning scale to the needed size, and
that's one of the many things we'll be working on over the next couple
of years.

Once procedure learning is working well enough that having the system
learn its own cognitive processes is a real possibility, we'll probably
reimplement the core in Java or C# to take advantage of the formal
verification properties those languages possess.

Note that none of this is a design for AGI Friendliness. Rather, it's a
design that lets you safely play around with self-modifying code without
letting the system modify itself severely enough that it can cause
damage in the world. This, in my view, will allow us to gather the data
we need to create real theories of AGI and FAI, as opposed to the
somewhat facile speculations being tossed around these days by Eliezer,
myself, and others who enjoy scientific speculation.

Game theory, evolutionary theory, probability theory and so forth are
simply abstractions of simple types of social, physical and biological
systems that humans are familiar with. I don't think they will model
post-singularity situations very well. I bet that playing with
infrahuman AGIs will lead us to alternative theories that are at least
a little more informative, albeit still not totally insightful relative
to the vast unknowability that is the singularity.

-- Ben

*********


