From: Tennessee Leeuwenburg (firstname.lastname@example.org)
Date: Mon May 02 2005 - 18:26:22 MDT
I think my position in history will be as an intelligent observer. I
don't think I have the ability to actually build a proper AI, but I
think I could largely understand one.
>You're right, we face some high hurdles, but I'm not sure the answer is to stop trying to clear them while adopting the belief that we will successfully end up at the other end of the track. We're just not that good, and reality *doesn't* care.
>Question 1: If Friendliness is likely to arise anyway, what are the consequences of pursuing it with due speed?
No harm is done.
>Question 2: If Friendliness is not likely to arise, what are the relative consequences of not pursuing it?
I think this could be re-worked. If Friendliness is not likely to arise,
what should we pursue instead? Rather than only trying to build the
first AI, should we be working on "proofs" of the value of morality, or
working out how to add it on later? Or perhaps we should consider the
form that AI development will take after the first convincing AI.
However quickly AI may progress in geological timescales, it's unlikely
to do so fast that humans can play no role in shaping its development.
It seems more fruitful to me to consider the transition phase with more
care.
Perhaps friendliness would turn out to be an idea rather than something
hard-wired - an idea sufficiently convincing that AIs will choose to
adopt it.
This archive was generated by hypermail 2.1.5 : Thu May 23 2013 - 04:01:13 MDT