From: Arona Ndiaye (email@example.com)
Date: Mon Apr 22 2002 - 02:08:54 MDT
Any course of action resulting in an AI that is Unfriendly (major failure
or not) should not be considered. This conversation seems strange to me too.
The whole point of FAI is to give us/implement a SAFE transition guide. With
all this mumbo-jumbo (which could very well give us U-FAI instead of FAI
99.99% of the time), we get a transition guide to just about anybody's
definition of hell. What is the point? The idea can be entertaining to
some, fine, but is the risk worth it? Is ANY risk worth U-FAI? *scratches
head, a bit lost*
Unless you are seeing something that I am missing. If so, please
help..... "Aroooooooooooooooonaaaaaa phoooooooooone hooooooooooooooooome"
----- Original Message -----
From: "Damien Broderick"
> Is it unconscionable to try such an experiment because it seems bound to
> yield at least some proportion of hellish worlds?
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:00:33 MDT