RE: Volitional Morality and Action Judgement

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 23 2004 - 05:24:40 MDT


Hi,

Regarding the relation between consciousness and intelligence, I suggest
everyone read Benjamin Libet's outstanding work in the neuroscience
domain:

http://shorterlink.com/?TOD9PD

http://www.pdcnet.org/volbrain.html

This is the only work I know of that really addresses the phenomenon of
human consciousness in a rigorous empirical way. It doesn't resolve
every question, but it says a lot. The former book in particular is
very easy to understand yet amazing in its implications.

My own essays on free will and consciousness are very much in the
spirit of Libet's work:

http://www.goertzel.org/dynapsyc/2004/HardProblem.htm

http://www.goertzel.org/dynapsyc/2004/FreeWill.htm

Both Libet and I consider subjective experience to be a philosophically
separate domain from empirical physical dynamics. As noted in my essay,
this is in accordance with the ideas of many philosophers such as
Charles S. Peirce. We also regard subjective experience as something
that *naturally emerges* from complex cognitive systems. In this view
it is extremely unlikely that it's possible to create a nonsentient
general intelligence. But of course this can't be rigorously *proven*
since we lack a solid theory of either general intelligence or
consciousness.

According to Libet's experiments, it seems that, for instance, when a
physical stimulus is received by the skin, it is only consciously
perceived about half a second later. The perceived time of the stimulus
is then "backdated" to the actual time of the stimulus. In dynamical
systems terms, conscious perception of a stimulus seems to involve the
formation of some sort of "neural attractor" initiated by the actual
stimulus (in line with my friend George Christos's notion of
"consciousness as an attractor"). This ties in with my theoretical
notion that the consciousness possessed by a system is connected with
the patterns that system has recognized in itself. Apparently, in
humans, the pattern-recognition subsystem takes a little while to
respond, thus explaining the delay in consciousness. But programmed
responses to stimuli may happen faster than this, because they don't
require the creation of attractors embodying perceived patterns; they
just require charge to flow along existing reflex channels.
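
To make the timing contrast concrete, here is a little toy sketch in
Python. It is purely illustrative: the numbers are stand-ins rather
than Libet's actual measurements, and all the names are my own.

import random

SETTLE_STEPS = 50   # iterations for the "neural attractor" to settle
STEP_MS = 10        # simulated ms per iteration (50 * 10 = ~500 ms)

def reflex_response(stimulus_time_ms):
    # A programmed reflex: charge flows along an existing channel,
    # so the response needs no attractor formation at all.
    return {"responded_at_ms": stimulus_time_ms + 20}

def conscious_perception(stimulus_time_ms):
    # Attractor formation: iterate toy dynamics until the state
    # settles, which takes roughly half a second of simulated time.
    state = random.random()
    for _ in range(SETTLE_STEPS):
        state = 0.5 * state + 0.5          # converges toward 1.0
    registered_at = stimulus_time_ms + SETTLE_STEPS * STEP_MS
    return {"registered_at_ms": registered_at,    # ~500 ms later
            "perceived_at_ms": stimulus_time_ms}  # "backdated"

print(reflex_response(0))       # responds ~20 ms after the stimulus
print(conscious_perception(0))  # registers at ~500 ms, backdated to 0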

In my view, so long as a complex cognitive system recognizes patterns in
itself, it's going to have a subjective experience.

I also argue that any complex cognitive system is very likely to have
some kind of experience of "free will" -- for similar reasons. Free
will has to do with the relation between choices made by unconscious
dynamics and the registration of these choices in the mind's "virtual
multiverse model" of itself and the world. Any complex mind confronting
the world is going to maintain a virtual multiverse model, and have an
experience of navigating through it. The flavor of this experience may
be very different for different types of mind, of course.
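
As a purely illustrative sketch (the names are my own, and the random
pick is just a stand-in for whatever the unconscious dynamics actually
do), the idea looks something like this in Python:

import random

def unconscious_dynamics(options):
    # The actual selection happens below the level the self-model can
    # inspect; here it's simply a random pick.
    return random.choice(options)

def deliberate(options):
    # The "virtual multiverse model": one imagined branch per option.
    branches = ["branch in which I pick " + opt for opt in options]
    chosen = unconscious_dynamics(options)
    # The already-made choice is then registered in the self-model as
    # "the branch I decided to take": the felt experience of free will.
    return {"imagined_branches": branches,
            "registered_choice": chosen}

print(deliberate(["tea", "coffee"]))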

Regarding emotion, on the other hand, I have argued that digital minds
may experience much less emotion than humans, and the emotions they do
experience may be of a very different kind:

http://www.goertzel.org/dynapsyc/2004/Emotions.htm

So, in my view, what neural and cognitive science suggest at present is:

1) AGIs will be conscious
2) AGIs will have some sort of free-will-ish experience, but probably
with fewer illusions attached to it
3) AGIs will probably have far less intense emotions than we do, unless
they're specifically architected to have them

Since our own ethical behavior is closely tied in with our emotions
(e.g. love, compassion), and with some of the more illusory aspects of
our experience of free will and choice, this suggests that the
psychology of AGI ethics is going to be rather different from the
psychology of human ethics.

-- Ben Goertzel

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Michael Roy Ames
> Sent: Sunday, May 23, 2004 2:06 AM
> To: sl4@sl4.org
> Subject: Re: Volitional Morality and Action Judgement
>
>
> Eliezer,
>
> So, let me feed this back to you just to be sure I've understood...
>
> ---
>
> Your definition of a sentient, or 'person', is a process that
> has consciousness and possibly qualia, among other things.
>
> Also, if you can avoid giving a FAI consciousness then you
> will feel much more comfortable creating it, performing
> source control, etc. as there will be no moral imperatives involved.
>
> ---
>
> I believe you are going to have a lot of trouble tweezing
> general intelligence away from consciousness. If you can, it
> would be a hell of a thing to see. For the record: I don't
> want to hurt a person either. Should we hold up creating FAI
> until we know precisely what a person is? Until we
> accurately demarcate the borderline between person and
> non-person, do we hesitate to create something that might be
> a person digitally? If we cannot say just exactly why we
> humans are also *persons*, then how can we determine the
> personhood status of an AI? You would have to simultaneously
> answer the
> questions: "what makes a human a person" and "what makes an
> FAI a (non-)person". Again, that would be a hell of a thing
> to see. Is this what you intend?
>
> Michael Roy Ames
>
>
>


