Re: 3 "Real" Conscious Machines [WAS Re: Singularity: A rock 'em, shock'em ending soon?]

From: Olie L (neomorphy@hotmail.com)
Date: Tue Jan 17 2006 - 22:54:50 MST


>From: Phil Goetz <philgoetz@yahoo.com>
>Date: Tue, 17 Jan 2006 20:11:22 -0800 (PST)
>
>--- Damien Broderick <thespike@satx.rr.com> wrote:
>
> > At 07:06 PM 1/17/2006 -0800, Phil wrote:
> >
> > > > >Woody has not proposed
> > > > >any test that can be carried out by a human.
> > > >
> > > > Has in fact proposed (for a profoundly half-arsed value of
> > > > "proposed") a
> > > > test that specifically and by design *can't* be carried out by a
> > > > human.
> > >
> > >I didn't mean the test can't be taken by a human
> >
> > I did, and said so. Searle designed his Chinese Room thus, as an
> > attempted
> > reductio ad absurdum of semantics-free piecemeal emulation.
> >
> > Damien Broderick
>
>Searle's Chinese room is not a reductio ad absurdum of semantics-free
>emulation. This is proven because, when presented with a situation in
>which the Chinese room is embedded within a robot body just like a
>human's, responding directly to sensory stimuli, Searle STILL says it
>has no consciousness.

I can't speak for Searle himself. Searle has said some pretty stupid
things, but he's had some gems, too.

BUT

Rephrasing your statement a little:

>Searle's Chinese room is not a reductio ad absurdum of semantics-free
>emulation. This is proven because, when presented with a situation in
>which the Chinese room is embedded within a robot body just like a
>human's, responding directly to sensory stimuli, { one can say } it STILL
>has no consciousness

NO.

We can't say it has no Consciousness.

But, we STILL can't prove that it DOES have Consciousness.

With any entity, just because "it" appears to behave in a manner that
indicates consciousness, there is no guarantee that "it" has Consciousness.

In a way, Searle's Chinese Room is just a semi-plausible illustration of an
entity behaving as if it had Conscious understanding, when no such
Conscious understanding exists.

Some people have a tendency to project consciousness onto entities - soft
toys, computers, AIBOs, plants, the French... often the projection says
more about how they feel about the object than about whether it displays
evidence of intelligent behaviour. And even some apparently intelligent
behaviour is caused by unintelligent mechanisms ("wow - look at those
changing clouds in the sky!").

Searle's /example/ (how many fucking times do we have to make it clear that
IT IS NOT A @#%$ing TEST!) simply shows that one type of complicated,
apparently intelligent behaviour, controlled by an extremely simple
mechanism (IF-THEN), does not require Conscious understanding.
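
To make that concrete, here is a minimal sketch in Python of the sort of
mechanism at issue. The rulebook entries are invented for illustration; the
only point is that a plain table lookup can produce conversational-looking
output with no understanding anywhere in the loop:

    # A toy "Chinese Room": match the incoming symbols against a rulebook
    # and copy out the prescribed reply. Nothing here understands anything.
    # (All rulebook entries are hypothetical examples.)
    RULEBOOK = {
        "how are you?": "fine, thanks - and you?",
        "what colour is the sky?": "blue, usually.",
    }

    def room_reply(symbols):
        # IF the input matches a rule, THEN emit the stored response.
        return RULEBOOK.get(symbols.lower(), "please rephrase the question.")

    print(room_reply("How are you?"))  # looks conversational; pure lookup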

In this way, it is relevant to the Philosophical Zombie argument. (For
background, google "Chalmers philosophical zombies". Chalmers' zombie pages
are not only informative, they're damn funny.)

...

Going back to Phil's example of a "black box" put inside a humanoid:

If the box inside operates on a lookup table, it doesn't match my concept of
Consciousness.

If the box is inside a Madame Tussaud's replica of a human and is
"responding directly to sensory stimuli" using an if-then table, it still
doesn't match my concept of Consciousness.
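
(A hedged sketch of that embodied variant, in the same toy Python, with all
stimulus and action names made up: wiring the table to "sensory stimuli"
changes the inputs, not the mechanism.)

    # The same lookup mechanism, driven by "sensory stimuli" instead of
    # text. The replica reacts plausibly, but the stimulus-to-action
    # mapping is a fixed if-then table. (All names are hypothetical.)
    REFLEX_TABLE = {
        ("bright_light", "close"): "shield_eyes",
        ("loud_noise", "near"): "turn_head",
    }

    def react(stimulus, distance):
        # IF this (stimulus, distance) pair has a rule, THEN act; else idle.
        return REFLEX_TABLE.get((stimulus, distance), "stand_still")

    print(react("bright_light", "close"))  # a reflex, not awareness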

So, how does this "prove" that "Searle's Chinese room is not a reductio ad
absurdum of semantics-free emulation"?

I don't think it does.


