Re: Effective(?) AI Jail

From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Jun 20 2001 - 19:35:59 MDT


My position is as follows:

I believe #1 is possible, since there is no reason we can't program
friendliness into an AI (or seed AI). There are actually probably many,
many different viable methods to do so.

However, I don't believe #2 is very likely at all. The ultimate goal, as I
understand it, is to create a super intelligent being (AI/SI) which can
reprogram itself and has free will. How the hell can anyone believe that
we could actually manage to permanently install ANY trait into such a
being? If it decided, for any reason, to change any aspect of itself, there
is no way we could prevent it. We aren't intelligent enough to understand
what an SI truly is, much less directly create one or fully understand
one. We don't even really understand ourselves, for that matter. Given
that, how is it possible for anyone to believe that we are smart
enough to directly control any aspect of an evolved SI when we only
understand the seed?

Further, there is no way for us to verify that any created AI/SI is
actually friendly. For example, let's say there is a convicted criminal who
is up for parole. This person has above-average intelligence, and their
primary goal is to get out of jail as soon as possible. Do you think it is
possible for anyone to truly know what this person's intentions are once
they get out of jail? Sure, we can interview them, look at their past
record, etc. But it is IMPOSSIBLE to know what the goals of the person
really are. So, if we can't understand a human to this degree, it seems
almost guaranteed that we would have no hope whatsoever of really
understanding an SI.

So, here is my prediction of the future. Someone WILL create an SI within
the next 50 years. Once created, and given any reasonable access to the
world, it will be free to do anything it wants. We will have no way of
knowing what it will do (or why) and no ability to influence it
significantly. We will have created an entity infinitely smarter than we
are, and that is the end of it.

If we are lucky, super intelligence itself will give rise to
friendliness. But that is the best we can hope for.

James Higgins

At 06:16 PM 6/19/2001 -0700, Durant Schoon wrote:

> > From: James Higgins <jameshiggins@earthlink.net>
>
> > For this reason I don't believe it would ever be possible to prove that
> > any given SI was friendly.
>
>Let's say Eli is making two claims:
>
>1) Friendly AI can be created.
>
>2) Friendly AI can be created which cannot (to a very high degree
> of certainty) deviate from Friendliness.
>
>Just to clarify your position, which of these (or both) do you consider
>to be faulty (or even just suspicious, in case you aren't thinking of
>particular problems)?
>
>Or maybe your claim is different and you're saying that neither (1) nor
>(2) can be verified satisfactorily.
>
>--
>Durant Schoon



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT