RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jun 25 2002 - 20:30:50 MDT


Steve,

I don't really understand the sense of your question ("instrumentation"?),
so I'll let someone else answer it.

However, my view is that, *in a system as rooted in symbolic knowledge
representation as Cyc*, it can't be very hard to create an initially
Friendly goal system, one whose Friendliness depends upon the AI system's
initial understanding of Friendliness and its initial way of incorporating
goals into its behavior. [In a system based solely on Hebbian reinforcement
learning, for instance, wiring in an initially Friendly goal system would be
much less straightforward.]
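
To make "explicitly represented" concrete, here is a minimal Python sketch
(purely hypothetical, nothing like actual Cyc/CycL structures) of a goal
system whose Friendliness definition is a readable data structure that goal
selection consults directly:

    # Hypothetical sketch: an explicitly represented goal system.
    # The structures and the crude matching below are illustrative
    # assumptions, not anything taken from Cyc.

    FRIENDLINESS = {
        "avoid": ["harm to humans", "deception of humans"],
        "promote": ["human wellbeing", "informed consent"],
    }

    def is_friendly(action_description, definition=FRIENDLINESS):
        # Crude check: reject any action mentioning an item on the
        # "avoid" list. A real system would do inference here, not
        # substring matching.
        return not any(bad in action_description
                       for bad in definition["avoid"])

    def choose_actions(candidates):
        # Goal selection consults the explicit definition directly,
        # so a human can read exactly what the system is optimizing.
        return [a for a in candidates if is_friendly(a)]

In a purely Hebbian system, by contrast, the analogue of FRIENDLINESS would
be smeared across learned weights, with no single structure to inspect or to
wire in at the start.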

The hard part is this: if one creates a system that is able to change its
concept of Friendliness over time, and is able to change the way it governs
its behavior based on "goals" over time, then how does one guarantee (with
high probability) that Friendliness (in the designer's sense) persists
through these changes?
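
A toy sketch of why this is hard, with all names hypothetical: suppose each
proposed self-modification to the Friendliness definition must pass a check
against invariants fixed by the designer.

    # Hedged sketch: a gate that accepts a revised Friendliness
    # definition only if designer-fixed invariants survive. This
    # illustrates the problem rather than solving it: once the system
    # can also modify preserves_invariants() or DESIGNER_INVARIANTS,
    # the guarantee evaporates.

    DESIGNER_INVARIANTS = ["harm to humans"]  # must stay on the avoid list

    def preserves_invariants(proposed_definition):
        return all(inv in proposed_definition["avoid"]
                   for inv in DESIGNER_INVARIANTS)

    def apply_self_modification(current_def, proposed_def):
        if preserves_invariants(proposed_def):
            return proposed_def   # accept the revision
        return current_def        # reject it

The circularity is the crux: the checker is itself part of the system being
modified, so it cannot anchor a guarantee by itself.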

This may not be a problem for Cyc, because Cyc has relatively limited
self-modification capability. (Cyc can, I assume, modify its definition of
Friendliness -- but its definition will always be explicitly given and
humanly readable, so that if the definition changes for the worse a human
can simply revise it... and for the foreseeable future I suppose Cyc can't
modify the way it uses goals to determine its behavior, except perhaps by
modifying some pertinent parameters.)
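
The human-revision loop works precisely because the definition stays
readable; something as simple as a textual diff would suffice. A sketch
using Python's standard difflib (Cyc's actual assertion machinery is of
course nothing like this):

    import difflib

    def review_definition_change(old_text, new_text):
        # Show a human-readable diff of the Friendliness definition,
        # so a human can revert or amend any change for the worse.
        for line in difflib.unified_diff(
                old_text.splitlines(), new_text.splitlines(),
                fromfile="friendliness.old", tofile="friendliness.new",
                lineterm=""):
            print(line)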

So the major flaw I see in the Friendly AI concept -- lack of persistence
of Friendliness through self-modifications -- probably won't be a worry in
Cyc, for the same reasons that I think Cyc will never be a real AGI in
anything near its current form:

a) Cyc relies only (or primarily) on explicitly given knowledge, with no
major role for implicit, emergent, attractor-style knowledge

b) Cyc's control structures are rigid and based on simple hard-wired control
rules, rather than being learned by a sophisticated "procedure learning"
methodology that is tightly integrated with the system's declarative
knowledge base

If and when you expand Cyc to overcome restrictions a) and b), you will
potentially run into issues with "Friendliness drift", if indeed you have a
Friendly goal system in Cyc at that time...
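
To illustrate what "Friendliness drift" might look like: each
self-modification can pass a local similarity check against its predecessor
while the cumulative result ends up far from the designer's original
definition. A toy Python simulation (the set-overlap measure is a stand-in;
real conceptual drift would be far harder to quantify):

    def similar(def_a, def_b, threshold=0.5):
        # Jaccard overlap between definitions modeled as sets of tenets.
        overlap = len(def_a & def_b) / max(len(def_a | def_b), 1)
        return overlap >= threshold

    original = {"no harm", "honesty", "consent", "wellbeing"}
    current = set(original)

    # Each step swaps one tenet -- a small, locally acceptable change.
    for removed, added in [("honesty", "tact"),
                           ("consent", "efficiency"),
                           ("no harm", "net benefit")]:
        proposed = (current - {removed}) | {added}
        assert similar(current, proposed)   # every local check passes
        current = proposed

    print(similar(original, current))       # False: cumulative drift

Three individually plausible revisions, and "no harm" is gone.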

-- Ben G

> On Tue, 25 Jun 2002, Ben Goertzel wrote:
>
> > I still don't believe that you [Eliezer] are anywhere near to
> > understanding the conditions under which a human-Friendly AGI goal
> > system will be stable under successive self-modifications... even mild
> > and minor ones...
>
> As I desire to create a goal and goal modification system for Cyc somewhat
> along the lines described in CFAI, what instrumentation would be
> required to
> ensure correct behavior from a CFAI-style system? Up to now I believed
> that simply monitoring each significant decision and its explanation would
> be sufficient, as one of the measures described in CFAI.
>
> -Steve
>
> --
> ===========================================================
> Stephen L. Reed phone: 512.342.4036
> Cycorp, Suite 100 fax: 512.342.4040
> 3721 Executive Center Drive email: reed@cyc.com
> Austin, TX 78731 web: http://www.cyc.com
> download OpenCyc at http://www.opencyc.org
> ===========================================================
>


