From: Samantha Atkins (firstname.lastname@example.org)
Date: Thu Dec 15 2005 - 19:34:32 MST
Better for whom, Ben? Do you believe in some Universal Better that trumps
the very existence of large groups of sentient beings - the only type of
beings that "better" can have any meaning for? How could it be better to
an intelligence capable of simulating an entire world and even a universe
for humans to exist in with an infinitesimal fraction of its abilities? I
do not believe simply destroying entire species of sentient beings when
there are viable alternatives could qualify as "better" - certainly not from
the POV of said beings. I don't find it particularly intelligent to use our
intelligence to make our own utter destruction "reasonable". I would fight
such an AI. I might not last long but I wouldn't simply agree.
On 12/12/05, Ben Goertzel <email@example.com> wrote:
> I don't normally respond for other people nor for organizations I
> don't belong to, but in this case, since no one from SIAI has
> responded yet and the allegation is so silly, I'll make an exception.
> No, this is not SIAI's official opinion, and I am also quite sure that
> it is not Eliezer's opinion.
> Whether it is *like* anything Eliezer has ever said is a different
> question, and depends upon your similarity measure!
> Speaking for myself now (NOT Eliezer or anyone else): I can imagine a
> scenario where I created an AGI to decide, based on my own value
> system, what would be the best outcome for the universe. I can
> imagine working with this AGI long enough that I really trusted it,
> and then having this AGI conclude that the best outcome for the
> universe involves having the human race (including me) stop existing
> and having our particles used in some different way. I can imagine,
> in this scenario, having a significant desire to actually go along
> with the AGI's opinion, though I doubt that I would do so. (Perhaps I
> would do so if I were wholly convinced that the overall state of the
> universe would be a LOT better if the human race's particles were thus used.)
> And, I suppose someone could twist the above paragraph to say that
> "Ben Goertzel says if a superintelligence should order all humans to
> die, then all humans should die."  But it would be quite a stretch.
> -- Ben G
> On 12/12/05, 1Arcturus <firstname.lastname@example.org> wrote:
> > Someone on the wta-list recently posted an opinion that he attributed to
> > Mr. Yudkowsky, something to the effect that if a superintelligence
> > should order all humans to die, then all humans should die.
> > Is that a wild misrepresentation, and like nothing that Mr. Yudkowsky
> > ever said?
> > Or is it in fact his opinion, and that of SIAI?
> > Just curious...
> > gej
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT