From: Amer A. Qaiyum (email@example.com)
Date: Mon Dec 07 2009 - 13:03:27 MST
Hey, this is my first post, so apologies if I come across as mistaken or ignorant.
I question your use of good and evil. To me, such things are far too
subjective to use in this way. You can base your definitions of good and
evil on religious belief or on personal perception (or perhaps both). Take
for example an actor (whose name I forget) who played many characters we
might consider "evil". Yet, in an interview, he claimed he never had,
because all his characters believed what they were doing was right. Of
course, to us individually it may seem that what they did was clearly evil,
but my point is that it isn't correct to label these views good or evil and
force them onto others. Abortion is considered evil by some and good by
others, and the public can be very split over this. See what I mean?
So with AI, some might argue that AI that takes control of our lives would
be better, and some would argue that AI which allows us to decide our own
fate is better. However, we can agree, to an extent, that Friendly AI would
be AI that did not take away our free will, or at least our illusion of free
will. You may attribute other aspects to Friendly AI, such as allowing us
to survive, but I'm not an expert Singularitarian, so I'll leave the
definitions to them.
Who are we to say that whoever is in charge is good or evil? There will
always be two people somewhere who believe opposite things! We simply can't
start labeling these things as good or evil, because who are we to decide?
Thanks, Amer A. Qaiyum.
From: "Glenn Neff" <firstname.lastname@example.org>
Sent: Monday, December 07, 2009 12:33 PM
Subject: Re: [sl4] I saved the world. I can prove it.
> --- On Mon, 12/7/09, Panu Horsmalahti <email@example.com> wrote:
>> From: Panu Horsmalahti <firstname.lastname@example.org>
>> Subject: Re: [sl4] I saved the world. I can prove it.
>> To: email@example.com
>> Date: Monday, December 7, 2009, 12:10 PM
>> 2009/12/7 Glenn Neff <firstname.lastname@example.org>
>> It seems that you haven't read any of the basic
>> texts on this subject; you're confused
>> about what people (especially people on sl4 and related
>> "memecomplexes") want to do. Their goal is to
>> create a "Friendly AI", which by definition is not
>> dangerous. And they want to create it quickly, because the
>> longer they wait, the greater the chance that something
>> kills everyone (which is also called an existential risk).
>> - Panu H.
> So you've got everything covered then?
> Correct me if I'm wrong, but I thought there was a risk that the AI might
> *not* be friendly, despite all the smartest people's best efforts.
> And the point that I was trying to make is that we are facing an
> existential risk right now . . . so unless you've got that friendly AI
> ready to go *right* *now*, we need a different plan. What I am trying to
> accomplish in bringing this to your attention is to make sure that evil
> people are not in control of the world when said AI is developed.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT