From: Ben Goertzel (email@example.com)
Date: Mon Dec 12 2005 - 10:41:53 MST
I don't normally respond for other people nor for organizations I
don't belong to, but in this case, since no one from SIAI has
responded yet and the allegation is so silly, I'll make an exception.
No, this is not SIAI's official opinion, and I am also quite sure that
it is not Eliezer's opinion.
Whether it is *like* anything Eliezer has ever said is a different
question, and depends upon your similarity measure!
Speaking for myself now (NOT Eliezer or anyone else): I can imagine a
scenario where I created an AGI to decide, based on my own value
system, what would be the best outcome for the universe. I can
imagine working with this AGI long enough that I really trusted it,
and then having this AGI conclude that the best outcome for the
universe involves having the human race (including me) stop existing
and having our particles used in some different way. I can imagine,
in this scenario, having a significant desire to actually go along
with the AGI's opinion, though I doubt that I would do so. (Perhaps I
would do so if I were wholly convinced that the overall state of the
universe would be a LOT better if the human race's particles were thus
rearranged.)
And, I suppose someone could twist the above paragraph to say that
"Ben Goertzel says if a superintelligence should order all humans to
die, then all humans should die." But it would be quite a distortion.
-- Ben G
On 12/12/05, 1Arcturus <firstname.lastname@example.org> wrote:
> Someone on the wta-list recently posted an opinion that he attributed to
> Mr. Yudkowsky, something to the effect that if a superintelligence should
> order all humans to die, then all humans should die.
> Is that a wild misrepresentation, and like nothing that Mr. Yudkowsky has
> ever said?
> Or is it in fact his opinion, and that of SIAI?
> Just curious...
This archive was generated by hypermail 2.1.5 : Tue May 21 2013 - 04:00:49 MDT