Re: No More Searle Please

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Fri Jan 20 2006 - 14:21:29 MST


On Wednesday 18 January 2006 11:48 pm, Daniel Radetsky wrote:
> On Wed, 18 Jan 2006 08:09:43 -0500
>
> Richard Loosemore <rpwl@lightlink.com> wrote:
...
>
> I'll be blunt: if you want to challenge Searle, use the Systems Reply. It's
> the only reply that actually works, since it explicitly disagrees with
> Searle's fundamental premise (consciousness is a causal, not a formal,
> process). You went on to make something like the Systems Reply in the rest
> of your post, but against a straw man. Searle never claims that since
> 'understanding doesn't bleed through,' Strong AI is false. He claims (in
> the original article; I haven't read everything on this subject) that no
> additional understanding is created anywhere, in the room or in the man,
> and so Strong AI is false. That is, the fact that 'understanding doesn't
> bleed through' is only a piece of the puzzle.
>
> Daniel
But the real objection is that his proposed "thought experiment" isn't
constructively specified. Until someone builds a "Chinese room" and runs it
through its paces, we have no reason to believe that it can be done. More
particularly, I would assert that it couldn't be done without the "Chinese
room" system itself becoming conscious, and that no "book of rules for
translation" is possible, even in principle. This isn't to assert that
mechanistic translation is impossible, but rather that stateless translation
is. Once states are part of the system, the system can potentially itself be
conscious.
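To make the stateless-vs-stateful point concrete, here's a toy Python sketch.
It's my own illustration (the tiny vocabulary and the dropped-subject example
are invented for this post, not taken from Searle or anyone in this thread):
a pure per-token rule book has no input token from which to supply a
context-dependent subject, while a translator carrying one piece of discourse
state can reinsert it.

RULES = {"他": "he", "她": "she", "说": "said", "你好": "hello", "再见": "goodbye"}
SUBJECTS = {"他", "她"}

def stateless_translate(sentence):
    # A pure "book of rules": every token maps to one fixed output,
    # with no memory of anything seen before.
    return " ".join(RULES.get(tok, "?") for tok in sentence)

def stateful_translate(sentences):
    # The same rules plus one piece of state: the last subject mentioned.
    # Chinese routinely drops a subject that context supplies, and a
    # stateless table has no token to map it from.
    last_subject = None
    out = []
    for sentence in sentences:
        words = []
        if sentence and sentence[0] in SUBJECTS:
            last_subject = RULES[sentence[0]]
        elif last_subject:
            words.append(last_subject)  # reinsert the dropped subject
        words += [RULES.get(tok, "?") for tok in sentence]
        out.append(" ".join(words))
    return out

dialogue = [["他", "说", "你好"], ["说", "再见"]]
print([stateless_translate(s) for s in dialogue])
# ['he said hello', 'said goodbye']  <- second sentence lost its subject
print(stateful_translate(dialogue))
# ['he said hello', 'he said goodbye']

The point isn't that real translation works this way; it's that as soon as
the rule book needs carried state of this kind, the "room" is no longer the
static lookup Searle's scenario seems to assume.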


