RE: Volitional Morality and Action Judgement

From: Ben Goertzel (ben@goertzel.org)
Date: Tue May 25 2004 - 08:24:22 MDT


Michael Wilson wrote:
> The correct mode of thinking is to constrain the behaviour of
> the system so that it is theoretically impossible for it to
> leave the class of states that you define as desirable. This
> is still hideously difficult,

I suspect (but don't know) that this is not merely hideously difficult
but IMPOSSIBLE for highly intelligent self-modifying AI systems. I
suspect that for any adequately intelligent system there is some nonzero
possibility of the system reaching ANY POSSIBLE POINT of the state space
of the machinery it's running on. So, I suspect, one is inevitably
dealing with probabilities.
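
As a toy illustration of what "dealing with probabilities" means here (this
is just a sketch of my own, not anything drawn from Novamente or from
Michael's proposal; the dynamics and parameters are made up): treat the
system's trajectory through state space as a stochastic process that is
*designed* to stay near the desirable region, and estimate by Monte Carlo
the probability that it nevertheless wanders out. What you get back is a
probability estimate, not a proof of impossibility.

import random

def estimate_exit_probability(trials=2000, steps=1000, bound=8.0, pull=0.9):
    """Fraction of simulated runs in which the state ever leaves [-bound, bound].

    pull < 1 models a stabilizing tendency back toward the desirable region;
    the Gaussian term stands in for everything we cannot predict or constrain.
    """
    exits = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x = pull * x + random.gauss(0.0, 1.0)
            if abs(x) > bound:  # left the class of states defined as desirable
                exits += 1
                break
    return exits / trials

if __name__ == "__main__":
    print("estimated probability of leaving the desirable region:",
          estimate_exit_probability())

However small the estimate comes out, it is still an estimate rather than a
guarantee.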
 
> > My statement is that, so far as I know, it's reasonably likely that
> > building a decently-designed AGI and teaching it to be nice will
> > lead to FAI.
>
> Without a deep understanding of the cognitive architecture,
> you have no way of knowing whether you are 'teaching' the
> system what you think you are teaching it.

Agreed, of course. It would also be very hard to create an AGI without
having a deep understanding of its cognitive architecture -- unless you
create it by imitating the human brain...

> If you /do/ have a
> deep understanding of the architecture, then you don't teach,
> you specify

Not so. The cognitive architecture may be such that learning by
experience is the most effective way for it to learn. Specifying
knowledge rather than teaching via experience may be possible *in
principle* (it always would be for any Novamente-based system, for
example), but it may be extremely slow compared to the high-bandwidth
information uptake obtainable via experiential learning in an
environment.
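
To make the distinction concrete, here is a rough sketch with made-up names
(nothing below is Novamente's actual API; it is only meant to illustrate the
two routes): the same simple concept can be installed by direct
specification or acquired from a stream of labelled experience. Both work in
principle; in a rich environment, the experiential route is where most of
the information actually arrives.

import random

class ToyAgent:
    def __init__(self):
        self.threshold = None   # internal "knowledge": the boundary of a concept

    def specify(self, threshold):
        """Direct specification: hand the system the knowledge outright."""
        self.threshold = threshold

    def learn_from_experience(self, observations):
        """Experiential learning: infer the boundary from labelled examples."""
        positives = [x for x, label in observations if label]
        negatives = [x for x, label in observations if not label]
        # Crude estimate: midpoint between the two classes' nearest members.
        self.threshold = (max(negatives) + min(positives)) / 2.0

    def judges_positive(self, x):
        return x > self.threshold

# Route 1: specification -- one line of "knowledge entry".
specified = ToyAgent()
specified.specify(5.0)

# Route 2: experience -- thousands of labelled observations from the environment.
experiential = ToyAgent()
stream = []
for _ in range(10000):
    x = random.uniform(0.0, 10.0)
    stream.append((x, x > 5.0))
experiential.learn_from_experience(stream)

print(specified.judges_positive(7.0), experiential.judges_positive(7.0))

In this toy, specification looks cheaper because the concept is a single
number; for the kind of structure an embodied system picks up from its
environment, the ratio goes the other way.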

 

> > I wouldn't advocate proceeding to create a superhuman-level
> > self-modifying AGI without a better understanding.
>
> Commendable, but are you sure that you have enough
> understanding not to do it by accident?

I have enough understanding to know when there is an appreciable risk of
doing it by accident. For instance, right now, playing with an incomplete
Novamente system, there is effectively zero risk. With a complete
Novamente system that is able to modify its own cognitive schemata, there
will be a nonzero risk, and more careful risk analysis will be needed.

-- Ben G
