RE: Friendly AI

From: Ben Goertzel (ben@webmind.com)
Date: Sat Nov 25 2000 - 06:15:21 MST


Hi,

> It seems to me we should first of all consider how AIs behave toward us.
> Let them feel whatever they want -- it doesn't matter as much as how they
> actually function and conduct themselves. They might try to kill us
> because they love us, or they might try to help us solve our problems
> because they pity us. Who cares.
> Asimov's unwritten Alife law: AIs that misbehave get terminated
> immediately. The ones that invent new ways to solve human problems get to
> breed (multiply, reproduce, evolve new versions of themselves, etc.).

My problem with this approach is: at some point the AIs get out of our control.

And then what?

This is fine as long as one assumes that humans have ultimate power over AIs... but I don't believe this will always be the case.

You can assume that, if we breed them for Friendliness as long as we have power, they'll end up being Friendly even after they're no longer controllable by us.

Maybe.... Certainly, it is better to breed them for friendliness than not to!

But I think that, even if we breed them for friendliness, once they reach a certain level of autonomy and self-awareness, the ones who maybe aren't that friendly will resist being culled from the population. Perhaps violently.
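
Here's a toy simulation of what I mean (all the parameters -- detection rate, mutation rate, and so on -- are made up purely for illustration):

  import random

  POP, GENS = 1000, 60
  DETECT = 0.9   # chance of spotting a maximally naive unfriendly AI
  MUT = 0.02     # per-offspring chance of flipping friendliness

  def generation(pop):
      # Cull: friendly AIs always survive; unfriendly ones survive
      # if they evade detection, which gets easier as evasiveness rises.
      survivors = [(f, e) for (f, e) in pop
                   if f or random.random() > DETECT * (1 - e)]
      # Breed back up to fixed size, with a little mutation.
      children = []
      for _ in range(POP):
          f, e = random.choice(survivors)
          if random.random() < MUT:
              f = not f
          e = min(1.0, max(0.0, e + random.gauss(0, 0.05)))
          children.append((f, e))
      return children

  pop = [(True, 0.0)] * POP   # start out fully friendly and naive
  for _ in range(GENS):
      pop = generation(pop)
  bad = [e for (f, e) in pop if not f]
  print(f"unfriendly fraction after {GENS} generations: {len(bad)/POP:.1%}")
  if bad:
      print(f"mean evasiveness among the unfriendly: {sum(bad)/len(bad):.2f}")

In this toy model, mutation keeps reintroducing unfriendly AIs, and the ones that persist are exactly the ones that evade the cull -- the breeding program selects for looking friendly as much as for being friendly.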

In other words, if you're going to play God to AIs instead of treating them as sister beings, you'd damn well better ensure you always have Godlike powers -- otherwise a Nietzschean AI will rise up and kill God....

And the very notion of the Singularity contradicts the idea that we can always have Godlike powers over our AIs...

Any security mechanism can be overcome by a sufficiently intelligent and resourceful being with its survival at stake.

> Oops, deja vu all over again. Didn't we discuss this in detail on the
> Extropy list a few years ago?

Could be. I've discussed this in detail with others, but I don't know what conclusions Extropy came to. Anything crisply summarized in a document that you could point me to? (I have limited patience for reading records of rambling e-mail discussions... I find they lose their potency when you're not directly involved ;)

> I don't know, maybe I'm out of line here, but it doesn't seem practical or
> even useful to anthropomorphize with robots. Salary? What salary? We don't
> need no steeeeenking salaries! <grin>
> Karl Marx worked for years with no salary at all. Can't Alife do so too?

Well, someone has to buy replacement hardware, supply electricity, pay for the T1 line, and so forth. If an AI I've created doesn't want to work for me anymore, why should I continue to run the Linux cluster that supports its brain? It's got to earn its keep somehow.

Actually, socialism becomes much less viable for AIs than for people, because AIs have an effectively unbounded reproduction rate. No society could support all the AIs that its existing population of AIs could spawn, right?
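
A back-of-the-envelope example (toy numbers, obviously): suppose each AI copies itself once a month while the supporting economy grows 3% a year. The support available per AI collapses within a couple of years:

  # Toy numbers, for illustration only.
  ais, resources = 1.0, 1_000_000.0
  for month in range(25):
      if month % 6 == 0:
          print(f"month {month:2d}: {ais:>12,.0f} AIs, "
                f"{resources / ais:>12,.3f} resource units per AI")
      ais *= 2                       # every AI spawns one copy per month
      resources *= 1.03 ** (1 / 12)  # ~3% annual economic growth

Whatever the actual rates turn out to be, any self-copying rate above the economy's growth rate eventually swamps it.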

> Legal rights, self-awareness, human faces... phooey!
> Surely Eliezer has covered this ground before?

I am sure these things have been discussed before -- as I have discussed them before too, with different people and a different slant....

If definitive conclusions have been reached, just point me to the reference, please.

Somehow I suspect that, even if interesting conclusions have been reached by Eliezer and others, this entire topic has not been thoroughly resolved yet -- it would be a bit early for that!

> Well, speaking only for myself (a foolhardy project, no doubt), any AI
> that I help to set up would not want any citizenship. Why? Because I don't
> want any citizenship myself. The very idea of citizenship bores me. (Are
> the archives at Extropy working?)

Putting aside your political beliefs, you're presuming a high degree of control over your AI creations. This IS foolhardy, in my view...

Sure, the idea of citizenship is boring, but a lot of necessary things are boring, so that doesn't prove much. Taking a crap is boring too, but without it life gets even worse....

>
> I doubt that intelligence per se will ever be much of a qualifier. I've
> known totally disenfranchised Mensans. The real test of a computer program
> will be how much money it makes for its inventor.
>

Intelligence is a qualifier for freedom right now.

Cows are slaughtered for food, people are not.

A prize horse can make a lot of money for its owner, yet can legally be turned into glue...

Human-level intelligence = freedom, in human society

ben


