From: Gordon Worley (firstname.lastname@example.org)
Date: Tue Jul 31 2001 - 00:04:34 MDT
At 4:53 PM -0700 7/30/01, James Higgins wrote:
> But this never needs to be a problem for the AI since it could have
>dozens (if not more) scientists to discuss ideas with.
This is the only way I could see moronic AI (I'm going to put the
moron level at < 0.8 brains, because from 0.8 to 1 brains I think
we'd just be dealing with something along the lines of your average
stupid human) eventually taking off: outside influence. A moronic
AI left to itself, no matter how long it runs, will never come up
with the ideas needed unless entropy is thrown into its thoughts on
occasion to mix things up and produce new ideas, and even then it's
going to take a really long time before the moron has a good idea.
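The "entropy thrown in" strategy amounts to blind random search, and a toy sketch (my own illustration, not anything from the original post; the target-string setup is purely hypothetical) shows why it takes so long: a searcher that mutates ideas with no feedback needs time exponential in the size of the idea, while one that gets even crude feedback (say, from those scientists) converges quickly.

```python
import random
import string


def blind_search(target: str, seed: int = 0) -> int:
    """Count guesses a 'moronic' searcher needs to stumble on a
    target idea with no feedback at all: pure entropy."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase
    attempts = 0
    while True:
        attempts += 1
        guess = "".join(rng.choice(alphabet) for _ in range(len(target)))
        if guess == target:
            return attempts


def guided_search(target: str, seed: int = 0) -> int:
    """Same random mutation, but positions that already match are
    kept -- a stand-in for outside feedback on partial ideas."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase
    current = [rng.choice(alphabet) for _ in range(len(target))]
    attempts = 0
    while "".join(current) != target:
        attempts += 1
        for i, ch in enumerate(target):
            if current[i] != ch:
                current[i] = rng.choice(alphabet)
    return attempts
```

Blind search over even a four-letter "idea" averages 26^4 (about 457,000) tries, and the cost multiplies by 26 for every added letter; the guided version handles much longer targets in a handful of rounds. The gap between the two is the gap the post is pointing at.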
This is why it's so important to develop human-level AI, because
Moore's Law won't carry us to the Singularity if we don't have
something smart (even if it/ve is a general intelligence).
But, that's enough for now, because I feel like I'm rehashing old ideas.
--
Gordon Worley
http://www.rbisland.cx/
email@example.com
PGP: 0xBBD3B003

`When I use a word,' Humpty Dumpty said, `it means just what I choose it to mean--neither more nor less.'
--Lewis Carroll
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT