RE: guaranteeing friendliness

From: H C (lphege@hotmail.com)
Date: Fri Dec 02 2005 - 18:02:16 MST


>From: "Herb Martin" <HerbM@LearnQuick.Com>
>Reply-To: sl4@sl4.org
>To: <sl4@sl4.org>
>Subject: RE: guaranteeing friendliness
>Date: Fri, 2 Dec 2005 13:17:21 -0800
>
> > From: H C
> > >From: "Herb Martin" <HerbM@LearnQuick.Com>
> > >Or, what if it's open source? How do you stop some human from taking
> > >it offline and tinkering with it until it is able to do battle
> > >again with the Supreme Friendly AI running the world?
> > >
> > >People cannot even agree on what would constitute 'friendly'
> > >or 'unfriendly' behavior -- if it is powerful enough to prevent
> > >any competitors it will by definition have to PROTECT itself
> > >and, almost by definition, have to evolve to counter new threats,
> > >at which point you cannot count on ANY pre-programmed friendly
> > >behavior being permanent.
> >
> > Stop here.
> >
> > You are making a very critical and very subtle implicit
> > assumption here.
>
>No, I was making no such assumption; it was rather you who
>were assuming that the word 'evolution' always means
>"natural selection" or works as we expect NATURAL evolution
>to work.
>
> > The
> > AI's exponential increase in intelligence is absolutely nothing like
> > evolution.
>
>While what you said about the Singularity was generally
>not incorrect, it had little to do with my point above:
>that any AI which is going to defend itself from other
>unfriendly AIs will have to evolve, or develop in the sense
>of evolving capabilities.
>
>That this will occur more rapidly and by means other than
>natural selection should go without saying among those who
>follow this topic.

Actually, what I said had a lot to do with your original point, though not
obviously so. I think the crux of the issue is that you misunderstand the
implications of exponentially increasing intelligence.

While you say it goes without saying that "[exponentially increasing
intelligence] will occur more rapidly and by means other than natural
selection", you haven't sufficiently defined the effects that distinction
entails.

In my case, I attempted to point out a few of the countless advantages a
mind on a computer substrate would have over humans. The point I was trying
to get across was that the transition from AI -> Power will be extremely,
inconceivably fast. Once the AI develops general molecular nanotechnology,
whether or not a second AI or a human ever reaches comparable power will be
strictly under the control of the super-human AI. While it is possible that
two or more AIs could be developed at the same time, with some insane
dramatic good-versus-evil battle taking place on some cosmic abstract plane
of existence, I believe this is improbable. Out of all of the people
attempting to create AI, there is one person or team that is the smartest
of the group, and I believe this team or person will be the first to create
an AI.
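
(To make "inconceivably fast" concrete, here is some trivial, purely
illustrative Python arithmetic. The doubling rate is a made-up assumption,
not a prediction, but it shows why a self-amplifying process leaves any
fixed-rate process behind almost immediately.)

    # Purely illustrative numbers: assume each round of self-improvement
    # doubles capability, while a fixed-rate competitor adds one unit
    # per round.
    capability = 1.0
    fixed_rate = 1.0
    for step in range(30):
        capability *= 2    # self-amplifying: each gain speeds the next
        fixed_rate += 1    # steady linear progress
    print(capability)      # ~1.07e9 after 30 rounds
    print(fixed_rate)      # 31.0 after the same 30 rounds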

>
>Now you have indirectly made my point about friendly AI:
>Once we reach the Singularity we cannot be assured of
>a 'friendly AI' or even have any control or effect on
>the developing and evolving AI.

Actually, if you even remotely understand the principles of goal-based
systems (such as an intelligence), then you would know that there are ways
of developing verifiable Friendliness in an AI. In my previous posts in this
very thread I outlined a plausible Friendliness verification mechanism as
well.
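
To make "goal-based system" concrete, here is a toy Python sketch of the
shape of the thing. Every name in it (Goal, Agent, verify_action) is
invented for this post, not any real Friendliness architecture:

    # Toy sketch: a goal-based agent whose choices are a function of an
    # inspectable supergoal, so verification has something definite to
    # check. All names here are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Goal:
        name: str
        # Scores a proposed action's predicted outcome; higher is better.
        utility: Callable[[str], float]

    class Agent:
        def __init__(self, supergoal: Goal):
            self.supergoal = supergoal

        def choose_action(self, candidates: List[str]) -> str:
            # A goal-based system picks whichever action it predicts
            # best serves its supergoal.
            return max(candidates, key=self.supergoal.utility)

    def verify_action(agent: Agent, action: str,
                      threshold: float = 0.0) -> bool:
        # Check the action against the declared supergoal rather than
        # trusting the agent's own report.
        return agent.supergoal.utility(action) > threshold

The point is only that a goal-based agent's behavior derives from an
explicit goal you can inspect, which is what makes verification a coherent
idea at all.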

>
>Guaranteed control is an illusion.

If so, then you can kiss your ass into oblivion, because nothing less than
guaranteed control is sufficient to ensure our immortality.

>
>Much like believing you can keep terrorists from taking
>down an airplane by taking away sewing scissors from
>ordinary passengers.

Your analogy makes incorrect implicit assumptions. First of all, if you
refer to the mechanism I proposed for verifying Friendliness, you will see
the difference: where your analogy has airport security doing ineffectual
guesswork to prevent disaster, I'm talking about putting the passengers to
sleep, reading through their memories to check their intentions, uploading
them into a simulation where they think they are boarding the plane for
real, watching to make sure they don't do anything bad, and then waking
them back up again without them even knowing it happened...

How many terrorists get by this time?
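
In code, the thought experiment looks something like this. It is a toy
Python sketch with invented names (run_in_simulation, FORBIDDEN, and so
on); the real mechanism would obviously be nothing this simple:

    # Toy sketch of "test them in a simulation first": the agent sees
    # exactly the observation stream it would see in reality, so it has
    # no way to behave differently for the test. All names hypothetical.
    from typing import Callable, List

    FORBIDDEN = {"hijack_plane", "disable_crew"}

    def run_in_simulation(agent_policy: Callable[[str], str],
                          observations: List[str]) -> List[str]:
        # Record every action the agent takes in the simulated run.
        return [agent_policy(obs) for obs in observations]

    def passes_verification(actions: List[str]) -> bool:
        # Clear the agent only if nothing forbidden showed up.
        return not any(a in FORBIDDEN for a in actions)

    # Usage: simulate the boarding scenario, then decide.
    boarding = ["at_gate", "seated", "in_flight"]
    actions = run_in_simulation(lambda obs: "sit_quietly", boarding)
    cleared = passes_verification(actions)  # True -> "wake them back up"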

>
>--
>Herb Martin
>
>

--Th3Hegem0n
http://smarterhippie.blogspot.com


