From: Richard Loosemore (email@example.com)
Date: Tue Apr 25 2006 - 11:05:08 MDT
Jef Allbright wrote:
> On 4/25/06, Ben Goertzel <firstname.lastname@example.org> wrote:
>>> I think that the question of an AI's "goals" is the most important issue
>>> lurking beneath many of the discussions that take place on this list.
>>> The problem is, most people plunge into this question without stopping
>>> to consider what it is they are actually talking about.
>> Richard, this is a good point.
>> "Goal", like "free will" or "consciousness" or "memory", is
> Building upon Ben's points, much of the confusion with regard to
> consciousness, free will, etc., is that we tend to fall into the trap
> of thinking that there is some independent entity to which we attach
> these attributes. If we think in terms of describing the behavior of
> systems, with the understanding that each level of system necessarily
> exists and interacts within a larger context, then this whole class of
> confusion falls away.
> - Jef
Hmmmm.... I'm not sure I would go along with the idea that goals are in
the same category of misunderstoodness as free will, consciousness and
memory. I agree that when these terms are used in a very general way
they are confusing.
But in the case of goals and motivations, would we not agree that an AGI
would have some system that was responsible for maintaining and
governing goals and motivations?
I am happy to let it be a partially distributed system, so that the
actual moment-to-moment state of the goal system might be determined by
a collective of mechanisms rather than one single mechanism, but would
it make sense to say that there is no mechanism at all?
If your point were about free will, I would agree completely with your
comment. About consciousness .... well, not so much (but I am writing a
paper on that right now, so I am prejudiced). About memory? That
sounds much more like a real thing than free will, surely? I don't
think that is a fiction.
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:01:00 MDT