From: Robin Brandt (firstname.lastname@example.org)
Date: Wed Mar 26 2008 - 14:12:26 MDT
Thanks for the misunderstanding... haha
My point was that you need to program friendliness, and a value system, rather
than expect it to be an emergent property, just as the basics of our limited
human morality are hard-coded in our genome/epigenome, and are not some emergent
property of being intelligent. That's why you can do moral reasoning with a
human: we all share some basic underlying brainware... The problem
is to make the brainware, not to do moral reasoning with an AI. And more
importantly, to make the AI brainware perfect, optimal, and such that no human
persuasion can be effective in changing it.
On Sat, Mar 22, 2008 at 4:22 PM, Mark Waser <email@example.com> wrote:
> Sorry for the delay. The answer is the subject of this e-mail.
> HINT1: Your question is also the answer (and I apologize for being
> obnoxiously zen -- but your question is awesomely good and should also lead
> directly to the answer)
> HINT2: The logical definition of Friendliness (aka the solution to the
> Friendliness problem aka the Friendliness meme) is so simple that it can be
> successfully expressed in less than 20 ASCII characters.
> ----- Original Message -----
> *From:* Robin Brandt <firstname.lastname@example.org>
> *To:* email@example.com
> *Sent:* Thursday, March 20, 2008 9:46 AM
> *Subject:* Re: Friendliness SOLVED!
> Just answer this simple question?
> Why would it be in the self-Interest of a Super-Intelligence to be
> Friendly unless it already is friendly?
--
~Robin Brandt~
Emergence, Evolution, Intelligence
Control the infolepsy! Love the world!
The blog has opened! http://mandelum.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT