From: Gordon Worley (email@example.com)
Date: Sun Dec 02 2001 - 20:43:35 MST
On Sunday, December 2, 2001, at 09:34 PM, James Rogers wrote:
> The problem is that I assume that the code required to respond exactly
> like a human is identical to the code required for "human mind algorithms".
Ah, this is where we differ.
Here's where I think I'm getting some of my ideas from:
I go to sleep and have a dream, or I just daydream. During the course of
it, I create characters in my mind. To me, they act and respond like real
humans, although it takes me quite a while to figure out how they should
respond. I do this by considering what I would do if my mind worked a
little differently. It's not completely accurate, but accurate enough.
If I were a Power, I'd be able to do the same thing: try to think what
other Powers might do, and also reason about how I might act if I were
dumber. But these are not new intelligences, just my own, masked.
But I don't think we're going to manage to resolve this difference
here. There's no reason why two algorithms can't give the same answers
yet be different (think of two strings that have the same hash sum). My
point is that they can have fundamentally different internal states
(which I take it you disagree with).
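The point about algorithms that agree on every answer while differing internally can be sketched in a few lines. This is an illustrative example of my own choosing (the function names are hypothetical, not from the discussion): two ways of summing 1..n that are extensionally identical but pass through different internal states.

```python
def sum_iterative(n):
    """Sum 1..n by accumulation -- internal state is a running total
    that visits n intermediate values."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum 1..n by Gauss's formula -- one arithmetic step, no loop,
    no accumulator."""
    return n * (n + 1) // 2

# Same answers for every input we check...
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
# ...but the computations are different objects: one has a trace of
# n internal states, the other essentially none.
```

An outside observer who only sees inputs and outputs can't distinguish the two, which is the sense in which behavioral equivalence doesn't pin down the algorithm.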
I do agree with the FSM assumption, just in case anyone wasn't sure.
--
Gordon Worley
http://www.rbisland.cx/
firstname.lastname@example.org
PGP: 0xBBD3B003

`When I use a word,' Humpty Dumpty said, `it means just what I choose
it to mean--neither more nor less.'
--Lewis Carroll
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT