From: Jef Allbright (email@example.com)
Date: Tue Apr 25 2006 - 13:12:12 MDT
On 4/25/06, Richard Loosemore <firstname.lastname@example.org> wrote:
> Jef Allbright wrote:
> > On 4/25/06, Ben Goertzel <email@example.com> wrote:
> >>> I think that the question of an AI's "goals" is the most important issue
> >>> lurking beneath many of the discussions that take place on this list.
> >>> The problem is, most people plunge into this question without stopping
> >>> to consider what it is they are actually talking about.
> >> Richard, this is a good point.
> >> "Goal", like "free will" or "consciousness" or "memory", is
> > Building upon Ben's points, much of the confusion with regard to
> > consciousness, free will, etc., is that we tend to fall into the trap
> > of thinking that there is some independent entity to which we attach
> > these attributes. If we think in terms of describing the behavior of
> > systems, with the understanding that each level of system necessarily
> > exists and interacts within a larger context, then this whole class of
> > confusion falls away.
> > - Jef
> Hmmmm.... I wasn't sure I would go along with the idea that goals are in
> the same category of misunderstoodness as free will, consciousness and
> memory. I agree that when these terms are used in a very general way
> they are often misused.
Each of these topics is an attractor for confusion when people aren't
careful to distinguish between subjective and objective descriptions and
neglect the larger context without which a description is necessarily
incomplete.
Goals can be described precisely only within a specified context. We
can speak precisely about the "goals" of a feedback loop, or multiple
loops in a complex physical system, whether it be electronic,
mechanical, chemical, or some combination. Note that we can speak
precisely even of those parameters that we can't currently quantify.
There's no confusion about what we mean by goals when the context is
clearly understood. However, when we try to speak of goals in
relation to a subjective agent, we must be very careful, because the
subjective element doesn't have the same kind of first order existence
as the physical system and its encompassing environment. Confusion
arises because the subjective element is already a description of an
aspect of the system, removed from the actual system itself. If I
were to speak of "my goals", they must be understood in common terms,
but the closer one looks, the more one sees that they're really not
"my" goals because the subjective "I" doesn't have an independent
existence. "I" am more precisely defined as a behavior of a system
within a given context.
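The feedback-loop sense of "goal" above can be sketched concretely. The
following is a minimal illustration, not anything from the original
discussion: a thermostat-style proportional controller whose "goal" (the
setpoint) is precisely definable only relative to its context of measured
temperature and corrective action. All names here (Thermostat, setpoint,
gain) are illustrative assumptions.

```python
# A feedback loop whose "goal" is well-defined within its context:
# drive a measured temperature toward a setpoint.

class Thermostat:
    def __init__(self, setpoint: float, gain: float = 0.5):
        self.setpoint = setpoint  # the loop's "goal" state
        self.gain = gain          # how aggressively to correct error

    def control(self, measured: float) -> float:
        """Return a heating (+) or cooling (-) correction."""
        error = self.setpoint - measured
        return self.gain * error

# Simulate: under feedback, the room converges on the setpoint.
t = Thermostat(setpoint=20.0)
temp = 15.0
for _ in range(50):
    temp += t.control(temp)
```

Note that nothing subjective is involved: the "goal" is just a parameter
of the loop, meaningful only within the specified system-plus-environment
context.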
Similarly with "free-will". Certainly we can all speak of free-will
within the context of common human social interactions and it makes
sense. In fact, our legal and judicial systems, as well as our
moral/ethical beliefs and behavior, depend on it. However, just as
with the self, the closer one looks, the more it is apparent that
there is no ultimate free-will, and that all interactions can be
described precisely (including describing the degree of uncertainty)
within a deterministic framework of explanation. In fact, if our
behavior were not deterministic, we would lose the "free-will"--the
ability to choose--that we do have.
Similarly with memory. When we speak of our memory there is an
inherent subjective aspect. We do not often acknowledge this,
especially since one's memories are an important part of one's personal
identity. We can say truthfully that we remember events from our past,
but the closer we look, the more we see that memories are subject to
distortion, gaps, confabulation, and outright fabrication. We can
speak precisely of memory in a well-defined objective context, such as
a memory device in a computer, but when we speak of the memory of a
subjective agent, even an AI with the capability of accurate
introspection, we must be careful to distinguish between subjective and
objective descriptions.
[Note that there is no truly "objective" description since none of us
can observe from a vantage point completely outside the system, but we
can use the term effectively as long as we always recognize the
importance of context.]
> But in the case of goals and motivations, would we not agree that an AGI
> would have some system that was responsible for maintaining and
> governing goals and motivations?
Goals are always about controlling some (complex) parameter relative
to something else. Given a well-specified context, we can define goals
precisely. Goals are necessary for an AGI, but I believe
they must evolve. Within an evolving model of an evolving environment,
to be invariant is to die.
With regard to developing safe AI, I don't think there can be any
guarantee. The best we can do is to incorporate a model of human
values as broad-based as possible, and to promote the growth of our
evolving values based on principles rather than ends.
> I am happy to let it be a partially distributed system, so that the
> actual moment to moment state of the goal system might be determined by
> a collective, rather than one single mechanism, but would it make sense
> to say that there is no mechanism at all?
> If your point were about free will, I would agree completely with your
> comment. About consciousness .... well, not so much (but I am writing a
> paper on that right now, so I am prejudiced). About memory? That
> sounds much more of a real thing than free will, surely? I don't think
> that is a fiction.
As many list members know, I often point out that Self,
Consciousness/Qualia, Free-will, Morality, and Social Decision-making
(politics) all have an inherent subjective element that commonly, but
not necessarily, leads to confusion.
To take it up a further level, we can never overcome the problem of
induction as described by Hume, but I see no reason why we should want
to. We need only remain aware of the importance of context to any
description.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT