From: Ricky Loynd (firstname.lastname@example.org)
Date: Mon Nov 26 2007 - 17:20:37 MST
Tom & Jeff,
Feynman held that if a topic could not be explained in a freshman lecture, it was not yet fully understood. Nobody can doubt that you are sold on your positions, but you're not putting much effort into making your explanations persuasive. SL4's archived posts, though immensely entertaining, show that from the very beginning the discussions on FAI and CEV have settled very few points to anybody's satisfaction.
As members of the distinct minority convinced that so much rides on whether the first AGI is built exactly right, you should feel an obligation to find more effective ways of convincing others. The bright people on this list are happy to help by pointing out flaws in the theory or its presentation. Eli continues to work on this. (At the last summit he added clarity by recasting some key CEV ideas in terms of nested layers of programming indirection.)
Bold, direct summaries like these are part of the solution:
> *Why* would anyone build an anthropomorphic AI?
> It would be a huge amount of extra work, for no palpable gain,
> at a great risk to the planet...
> We'll either realize how useless it is and not try, or try and fail.
> Anyone with the intelligence and determination to implement
> a human-like personality, which is stable under recursive
> self-improvement, has the intelligence and determination
> to realize why it is not a good idea.
Now, if only such dramatic claims could be backed up by concise, compelling arguments, rather than by deferrals to (or snippets from) a literature that is vast (including the SL4 archives), disorganized, and conflicting.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT