Re: Singularity Institute volunteer meeting this Sunday @ 7 PM EST

From: Yan King Yin (y.k.y@lycos.com)
Date: Thu Mar 25 2004 - 11:00:39 MST


From: Paul Hughes <psiphius@yahoo.com>

>> 6. Get rid of all the anthropomorphisms,
>> misinterpretations, and cluelessness surrounding AI
>> today
>
>That's a good idea, but you're going to need some
>anthropomorphisms as part of guiding people from their
>current conceptions of SAI to more accurate ones. You
>can't douse someone with cold water: although you'll
>wake them up, they will be so angry that they are
>unlikely to listen to you after that. I'm speaking here
>from personal experience of many people's impressions
>of Eli's writings.

I think we should stop using language such as 'waking
people up' to the Singularity. Neither Eliezer nor
anyone else here has figured out how to create FAI. I
have no problem with FAI per se, but I find it
unacceptable that many on this list:

1. Claim that FAI is feasible without qualifications
and without detailing how it could be done;
2. Steadfastly avoid any political discussions that
are obviously relevant to establishing a scientific
theory of (general) morality.

Be honest. There is no evidence that lying to people
makes it better for them. Can you cite some examples?

YKY

