Re: Please Re-read CAFAI

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Thu Dec 15 2005 - 06:52:17 MST


>Any intelligence which is conscious and aware of that consciousness
>certainly would have a self.

Huh? Hume and Siddhartha would beg to differ, but I would simply ask what
you mean by "conscious" and why it is relevant to talk about "consciousness"
when thinking about intelligences in general. Evolution is certainly an
effective optimization process, but not a conscious one.

The category "Intelligent Processes" is the supercategory that includes
both human minds and evolution, along with processes as different from both
as they are from one another, including all possible AGIs. Unless you
understand that, anthropomorphising from your sense of what "conscious"
implies is harmful to your understanding.

>I argue that sufficiently
>powerful AGI must have self consciousness because such an intelligence, in
>order to be sufficiently intelligent, must be able to add its own agency as
>one aspect of the system that it is modeling (reality).

The above is all clearly true, but none of it implies anything like the
mental model of a "self" that you and many others seem to be using. Rather,
an entity must model its "self" if it is to self-improve and to judge
"improvements". The model it uses of its "self" may be identical to the
model it would use for any externally discovered process that behaved in
the same manner and about which it had the same information, or at least
far closer to such a model than a human's model of self is to the human's
other models.
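
To make that concrete, here is a toy sketch in Python (my own illustration;
the names and structure are invented, not anything from CFAI). The agent's
model of its "self" has the same type, and is built by the same machinery,
as its models of external processes; "self" is just one more entry in the
world model:

from dataclasses import dataclass, field

@dataclass
class ProcessModel:
    """Predictive model of any process, internal or external."""
    observations: list = field(default_factory=list)

    def update(self, obs):
        self.observations.append(obs)

    def predict(self):
        # Trivial stand-in: predict a repeat of the last observation.
        return self.observations[-1] if self.observations else None

@dataclass
class Agent:
    models: dict = field(default_factory=dict)  # process name -> ProcessModel
    self_id: str = "self"

    def observe(self, process_id, obs):
        # The same update path handles evolution, other agents, and 'self'.
        self.models.setdefault(process_id, ProcessModel()).update(obs)

agent = Agent()
agent.observe("evolution", "hill-climbs blindly")
agent.observe(agent.self_id, "searches for proofs")
# agent.models["self"] is an ordinary ProcessModel, not a special kind.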

A strongly suspected feature of Intelligent Processes of sufficient power is
that they will converge to "Really Powerful Optimization Processes", which
are potentially Friendly and otherwise omnicidal, because consistent
"improvement" requires a well-ordered preference structure.
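
A toy illustration of why (again my own sketch in Python, not from CFAI):
with a transitive preference ordering, hill-climbing "improvement"
terminates at a best state; with a preference cycle, the agent can
"improve" forever without converging on anything.

def improve(state, prefers, neighbors, max_steps=10):
    # Move to any neighbor preferred to the current state; stop when
    # no neighbor is preferred, i.e. the current state is a maximum.
    for _ in range(max_steps):
        better = [s for s in neighbors[state] if prefers(s, state)]
        if not better:
            return state
        state = better[0]
    return None  # never settled: "improvement" went in circles

neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

# Well-ordered preferences: C > B > A. Improvement halts at C.
rank = {"A": 0, "B": 1, "C": 2}
def ordered(x, y):
    return rank[x] > rank[y]

# Cyclic preferences: A > B, B > C, C > A. No stable "best" exists.
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
def cyclic(x, y):
    return (x, y) in cycle

print(improve("A", ordered, neighbors))  # C
print(improve("A", cyclic, neighbors))   # None: endless "improvement"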

It is important to note that, to a great extent, "morality" is an attempt by
agents with competing preference structures to generate a well-ordered
aggregate preference structure.
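
Generating such an aggregate is harder than it sounds. A standard toy
example (my own sketch in Python, not from the original post): pairwise
majority voting over three perfectly consistent individual rankings can
produce a cyclic, non-well-ordered aggregate. This is the Condorcet
paradox.

from itertools import permutations

voters = [
    ["A", "B", "C"],   # voter 1 prefers A > B > C
    ["B", "C", "A"],   # voter 2 prefers B > C > A
    ["C", "A", "B"],   # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    # True if a majority of voters rank x above y.
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
# Prints: A over B, B over C, C over A. Each voter is individually
# consistent, yet the aggregate preference is a cycle, so no
# well-ordered "best" option exists.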

>Now if this is true,
>i.e. that real AGI must be self-aware, then it would be highly dangerous to
>have a bunch of superintelligent AGIs running around treating people as a
>means to an end alone. Note that it is ok to treat a person as a means to an
>end as long as that action can be justified as also being an end in itself.
>Also I contend that if an AGI is self-aware it must be programmed to
>understand that it is not just an individual but part of a collective, which
>is human civilization, and part of its goal system should be in service to
>this collective. This is not just true for machines but for people as well.

Until you understand what powerful optimization processes are, you won't
have any idea what "highly dangerous" means by SL4 standards.


