Re: Threats to the Singularity.

From: Eugen Leitl (eugen@leitl.org)
Date: Mon Jun 17 2002 - 08:39:27 MDT


On Mon, 17 Jun 2002, Gordon Worley wrote:

> Eliezer addressed this in his reply to this thread earlier. It is
> irrational if the attachment is blind. You must have some reason that

No, I don't need any reason to stay alive. That's a core axiom of
rationality (and I happen to like my evolutionary artifacts, dammit). If
you question something that basic about rationality, you're definitely a sick puppy.

> you need to stay alive, otherwise provisions for it will most likely get
> in the way of making rational decisions.

I think you would understand that it is very rational to preemptively kill
people who are effectively trying to kill me and those close to me (I'm not
very close to the rest of humanity, but there are an awful lot of them, so
this tends to add up).

> Sure, you should be concerned. I think that the vast majority of
> humans, uploaded or not, have something positive to contribute, however

I don't think the right to live is based on my assessment of who is
contributing something positive and who isn't. I hope that this position is
mutual, or else we are so screwed.

> small. It'd be great to see life get even better post Singularity, with
> everyone doing new and interesting good things.

Indeed. So let's try to go there.
 
> If only it were easy to become inhuman, but it's not.
>
> Uncaring is inaccurate. I do care about humans and would like to see
> them upload. I care about any other intelligent life that might be out
> there in the universe and helping it upload. I just don't care about
> humans so much that I'd give up everything to save humanity (unless that
> was the most rational thing to do).

So basically, in your value system, humanity's right to exist doesn't even
balance against your right to pursue your activities, which carry a
considerable probability of TEOTWAWKI-ing this place.

You *are* a sick puppy.
 
> There is an ethical issue, however the irrational attachment is the
> result of relatedness. A proper ethic is not so strong that it prevents
> you from even thinking about something, the way evolved ethics do.

I can think about a great many things. It doesn't mean I have to like
them, and it specifically doesn't mean I have to do them.

Your thoughts about the primacy of rationality are rather irrational.
 
> In many ways, humans are just over the threshold of intelligence.

Intelligence is just a trait. I happen to like it, but it doesn't make the
rest of the world into one huge blind spot.

> Compared to past humans we are pretty smart, but compared to the
> estimated potentials for intelligence we are intellectual ants. Despite

I agree. Let's change this. Let's make everybody who is willing smarter.

> our differences, all of us are roughly of equivalent intelligence and
> therefore on equal footing when deciding whose life is more important.
> But, it's not nearly so simple. All of us would probably agree that
> given the choice between saving one of two lives, we would choose to
> save the person who is most important to the completion of our goals, be
> that reproduction, having fun, or creating the Singularity. In the same
> light, if a mob is about to come in to destroy the SI just before it
> takes off and there is no way to stop them other than killing them, you
> have on one hand the life of the SI that is already more intelligent
> than the members of the mob and will continue to get more intelligent,
> and on the other the life of 100 or so humans. Given such a choice, I
> pick the SI.

I figured that much. Please hang on while I gather the mob. You might find
the odds rather tough, given that 1) you're in the vast minority, 2) you're
not an SI yourself, and 3) you're assuming the mob will be kind enough to
wait until you're done with your little Golem project.
 
> In my view, more intelligent life has more right to the space it uses
> up. Of course, we hope that intelligent life is compassionate and is
> willing to share. Actually, I should be more precise. I think that
> wiser life has more right to the space it uses (but you can't be wiser
> without first being more intelligent). I would choose a world full of
> dumb humans trying hard to do some good over an Evil AI.

Excellent. This implies you can prove that the AI is not going to be evil,
and convince everybody else of the correctness of your proof. Given the
magnitude of the impact, you'd better be damn convincing.
 
> If an SI said it needed to kill a bunch of humans, I would seriously

There's one big thing wrong with this sentence: it implies the SI is
already here and operating. I definitely can't let you do that, Dave.

> start questioning its motives. Killing intelligent life is not

No, you won't "seriously start questioning its motives". You would be too
busy dying, along with everybody else.

> something to be taken lightly and done on a whim. However, if we had a
> FAI that was really Friendly and it said "Gordon, believe me, the only
> way is to kill this person", I would trust in the much wiser SI.

Yeah, that's some really Friendly AI. "Trust me, I'm a FAI! Kill that
person, Gordon!"
 
> This is the kind of reaction I expect and, while I'm a bit disappointed
> to get so much of it on SL4, therefore avoid pointing this view out. I
> never go out of my way to say that human life is not the most important
> thing to me in the universe, but sometimes it is worth talking about.

This conversation is completely surreal. Do you realize that you're
considerably harming your goals by making this public?
