Re: Edge.org: Jaron Lanier

From: Perry E. Metzger (perry@piermont.com)
Date: Sat Nov 29 2003 - 22:57:53 MST


"Colin" <chales1@bigpond.net.au> writes:
> A model of a thing is not a thing!

Is a thought of a unicorn a real thought?

Or, to throw the spear straight at Mr. Searle: is a simulation of
addition somehow different from "really" adding two numbers?
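To make that point concrete (a minimal sketch of my own, not anything
Searle wrote): a program that merely "simulates" addition by counting
off successor steps, and the machine's "real" addition, are
indistinguishable from the outside -- the supposed difference does no
observable work.

```python
# A "simulated" adder: computes a + b purely by repeated
# successor steps, never applying machine addition to the
# two operands directly. (Assumes b is a non-negative int.)
def simulated_add(a: int, b: int) -> int:
    result = a
    for _ in range(b):
        result += 1  # one successor step
    return result

# The simulation and "real" addition are behaviorally identical.
print(simulated_add(2, 2))  # 4
print(2 + 2)                # 4
```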

(Presumably a Chinese room made up of neurons in a small dense volume
following a deterministic program can't be conscious because none of
those neurons, when interviewed, experience qualia individually. :)

Sorry to single out one sentence among many for assault -- I just see
red when Searle's bizarroid argument gets mentioned even indirectly.

As for myself, I don't believe we'll solve the problem of
consciousness -- and we won't care. The problem of producing a
synthetic construct that passes (or more to the ultimate point,
surpasses) the Turing Test is not a problem of producing a
consciousness -- it is a problem of producing a black box that has a
particularly observable external behavior.

(Indeed, one might easily argue that, from the point of view of the
Friendly AI people, it is unnecessary that the god they wish to create
be conscious so long as it acts as though it has a conscience, whether
it is "aware" of it or not.)

Perry



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT