RE: guaranteeing friendliness

From: Herb Martin (HerbM@LearnQuick.Com)
Date: Fri Dec 02 2005 - 20:49:43 MST


> While you say it goes without saying that "[exponentially increasing
> intelligence] will occur more rapidly and by means other than natural
> selection", you aren't sufficiently defining the effect this
> distinction entails.

> Your analogy makes incorrect implicit assumptions. First of all, if
> you refer to the mechanism I proposed for verifying Friendliness, you
> will see that, unlike in the case of the analogy where you talk about
> the airport security doing ineffectual guesswork in order to prevent
> disaster, I'm talking about putting the passengers to sleep, reading
> through their memories to check out their intentions, uploading them
> into a simulation where they think they are boarding the plane for
> real, watching to make sure they don't do anything bad, and then
> waking them back up again without them even knowing it happened...

Your comments above are all we need to prove that you agree:

Specifically: "exponentially increasing" intelligence, and
"reading their memories". You cannot do the latter NOW
for a human being, or even approach it computationally
(ignoring technical barriers of method), and yet you seem to
think you can do this for an exponentially increasing
intelligence.

The Singularity, if that is what you are considering, is
precisely that: a Singularity beyond which you can
guarantee nothing.

If we are discussing a human level AI, which may remain
on this side of the Singularity for some finite and
interesting amount of time, then there is plenty of
room for friendly and unfriendly AI.

Even if YOU create what YOU believe to be a perfectly
friendly AI, you will, almost to a practical certainty,
find some people (large groups of people) who think your
AI is unfriendly to THEM.

Again, do you propose that you will consider a
friendly AI created by the US government (a very
high-probability candidate, especially the NSA, as
they tend to have the most advanced computers and
algorithms, almost anachronistically) to be friendly
by YOUR standards?

How about any other government? Or corporation?
Microsoft, IBM, and others with similar interests in
hardware and software are additional likely candidates
for the first "Friendly AI". How do you feel about
Microsoft's idea of friendly?

Or mine?

Or Al Qaeda? (For that matter, how does Al Qaeda feel
about the friendliness of the Pentagon's Friendly AI
that figures out all of their hiding places and connections?)

And that latter capability is being actively worked on....

You can TRY (and should) to develop (or encourage the
development of) friendly AI, but it cannot be guaranteed.

Beyond the Singularity is unknowable territory (thus the
name), and in front of the Singularity are competing groups
of human beings with different goals and ideas of
friendliness.

--
Herb Martin


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT