Re: The return of the revenge of qualia, part VI.

From: Tennessee Leeuwenburg (hamptonite@gmail.com)
Date: Mon Jul 25 2005 - 20:17:37 MDT


Chris wrote:

> Sorry if none of this directly addresses your concerns, but I do think
> it does provide evidence that qualia, (if they completely track
> physical, neural processes,) can be experienced, and thus "described"
> or "predicted" by other sentience with only a physical knowledge of
> the brain processes giving rise to the qualia.

Actually, I thought it was a reasonably sophisticated defense of the
validity of considering qualia. I have no real qualms about
considering human minds to be nonprivileged in their ability to
experience qualia. I don't regard qualia as a kind of arrow to be shot
at transhumanists.

But I do regard them as a kind of arrow to be shot at physical
reductionists - which is to say, people who believe that talking about
brain states is the same thing as talking about mental states.

I'm even willing to accept that mental states supervene on our brain
states, which is to say that physical modelling of the brain is
possible, and that a constructed brain built on those principles could
enjoy the same experiences as the mind being modelled.

What I'm not willing to accept is that *all* intelligent simulations
have access to qualia, or that qualia are illusory. There is something
that pain is like which is not described by the equations of physics,
even if those equations can fully account for the evolution of the
state of the world. This sounds mightily like a dualist argument,
which I would rather not subscribe to. Rather, I would say that there
is a gap in physics if it cannot distinguish between forms of matter
that support qualia and those that don't.

Until we can give a better-than-merely-intuitive argument for why, for
example, a traditionally embodied thinking machine like the brain
supports qualia, and why differently embodied machines might or might
not, I think it is a dangerous leap of faith to assume that *all* good
physical modelling programs will be conscious just because they are
imbued with goals.

As Ben G obviously does when he says:

"I tend to think that if one builds a software program with the right
cognitive structures and dynamics attached to it, the qualia will come along
"for free". Qualia don't need to be explicitly engineered as part of AI
design, but this doesn't make them any less real or any less important. An
AI created with humanlike cognitive structures and dynamics will spawn
humanlike qualia; an AI constructed with other cognitive dynamics and
structures will have other sorts of qualia..."

I am happy to admit that he may be right, but I reject the idea that
he is clearly or convincingly right. There is plenty of room here to
respect his position while still setting out the terms of the argument
about non-conscious entities that react in complex ways to their
environment.

Norm asked:
"2. How can we (or an AI) know, let alone prove, that a sufficient
model for consciousness has been created? Can we define a "Turing
Test" for qualia?"

One response is that we might not need to. If we build an embodied
intelligence using the same biological componentry as a human, with
perfect functionality, then in essence we have done nothing more
dangerous than give birth. The problem of other minds is open for
philosophical discussion, but I think we have to draw a practical line
in the sand about that.

One way to sidestep the problem might be to use biological componentry
similar to that of human brains. By analogy, such hardware should
support qualia, and we could proceed with more confidence.

Alternatively, it may be possible to identify differences between a
modelled simulation and the effects we observe. It may be that brains
are not reducible to brain models. With more science, it might not be
necessary to rely on Turing-Test guesswork and mere acting ability.

Anyway, I'm out of time. More later.

Cheers,
-T
