Re: Friendliness SOLVED!

From: Thomas McCabe
Date: Wed Mar 12 2008 - 21:28:25 MDT

On Wed, Mar 12, 2008 at 10:59 PM, Mark Waser <> wrote:
> > Performing unethical acts is usually in the self-interest of, not only
> > AIs, but most humans. Billionaire drug-barons and third world
> > dictators make themselves huge piles of money off horrible and
> > unethical actions.
> Only in a short-sighted view in a society with inadequate enforcement. This
> is *much* more the argument that I was expecting to have. I will continue
> to address this point shortly. Thank you for bringing it up.

You can't simply *assume* that society will enforce prohibitions
against unethical actions. We're not just lying back and observing
the future; it's our job to *build* such a society, from the ground
up, starting with whatever we have now. You can't write a blueprint
for how to build such a society that starts off by assuming that such
a society has already been built. If you start off by assuming that
any unFriendly being is instantly vaporized, you're quite correct. The
question is, how do we get to the point where unFriendly beings are
vaporized (or at least prohibited from doing harm)?

> > Show us examples of such derivations.
> Coming shortly (it's getting late). Again, an excellent question!
> > Error, reference not found. There's no such thing as a computer "with
> > the intelligence of a human", because computers will have vastly
> > different skillsets than humans do.
> :-) You're being pedantic and difficult. I'm arguing a general equivalence
> here, not a specific skill set.

There's no such thing as general equivalence without specific
equivalence in at least some cases; the general skillset is simply
some function of the union of all specific skillsets. To name a
specific example, there's no such thing as an animal that's
human-equivalent in sports, because the skillsets are too different.
Few animals could even hold a javelin, while no human can match the
brute strength of most animals.
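The "function of the union" point can be sketched in a few lines of Python (a toy illustration only; the agents, skill names, and scoring rule here are all made up):

```python
# Toy sketch: a "general" skill measure is built from specific skills.
# Scores are invented; the point is only that any general comparison
# is some function over the union of the specific skillsets.
human = {"javelin": 0.9, "sprinting": 0.5, "raw_strength": 0.2}
lion = {"javelin": 0.0, "sprinting": 1.0, "raw_strength": 0.9}

def general_score(agent):
    # One arbitrary aggregate: average over the union of all skills seen.
    skills = set(human) | set(lion)
    return sum(agent.get(skill, 0.0) for skill in skills) / len(skills)
```

Any choice of aggregate makes the same point: two agents can only be compared "in general" by way of some function of their specific skills, so the comparison inherits whatever mismatch the specific skillsets have.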

> > The people on this list already have a great deal of human-universal
> > architecture, which AIs won't have.
> Yes, but I don't see why my argument cares whether or not the AGIs have
> human universal architecture (except that it is a good argument that my
> testing on humans is insufficient for proof of behavior in AGIs).

You can't explain something to someone in English unless they have a
great deal of human-universal architecture. English was *built* for
humans; you can't just give it to, say, a Boeing 767 and pray for the
instructions to work. We invented programming languages precisely
because computers can't parse English.

> > Any AI intelligent enough to actually understand all this will be more
> > than intelligent enough to rewrite itself and start a recursive
> > self-improvement loop.
> Possibly true, but it is probably not smart enough to get around the blocks
> that humans will have placed in its way (and the fact that humans will have
> placed the goal that it is UnFriendly to attempt to do so until the humans
> declare that it is ready).

Consider how effective such
"blocks" are, even against other humans.


 - Tom

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT