Re: AI debate at San Jose State U.

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Oct 18 2005 - 06:23:42 MDT


Eliezer S. Yudkowsky wrote:
> Eliezer S. Yudkowsky wrote:
>
>>
>> Untrue. I spent my first six years from 1996 to 2002 studying the
>> mechanics of human intelligence, until I understood it well enough to
>> see why it wouldn't work.
>
> To clarify: I mean "wouldn't work" in the sense of it not being a good
> idea to try and build an AI using a human cognitive architecture (a plan
> distinct from uploading). This has little to do with any objections to
> the humaneness of humans; more to do with the instability of the human
> cognitive architecture for recursive self-improvement, and the
> difficulty of getting a fragile mind right on the first try.
>
Can you clarify what you mean by "the instability of the human
cognitive architecture for recursive self-improvement, and the
difficulty of getting a fragile mind right on the first try"?

I can see that individual instances of the biological mind design are
unstable. What I cannot see, absent people building experimental
versions of an artificial mind using the same design, is any reason to
conclude that the artificial design would, as a design, be more unstable.

Surely we need real examples of fully worked-out, whole cognitive
systems, so we can study them, before we can draw any conclusions about
how much better a normative AGI would be than a cognitively inspired AGI?

I can think of theoretical reasons to argue the case, but they are
several steps removed and, as has been demonstrated here, not easy to
understand.

Richard Loosemore.


