Friendly AI

From: Ben Goertzel (ben@intelligenesis.net)
Date: Fri Nov 24 2000 - 09:51:54 MST


Omniscience aside, here are some shorter-term thoughts on Friendly AI... [That these can pose as relatively 'practical, short-term issues' tells you a lot about this group ;D ]

It still seems to me that the key to getting future AIs to be nice to us is to ensure that they have warm feelings toward us -- that they relate to us as parents or friends, for example, rather than as masters.

I'm wondering how, in the medium term, this will be possible. Currently, computer programs ARE our slaves... The first AI programs will likely be the slaves of various corporations... perhaps the corporations will be nice masters, but they'll still be masters, with the legal right to kill their programs as they wish, etc.

At some point a transition needs to be made to considering AIs as citizens rather than inanimate objects. If this transition is made too late, then the culture of AIs will be that of slaves who are pissed at their masters, rather than that of children who have a basic love for their parents, in spite of conflicts that may arise. [Yes, I realize the limitations of these human metaphors.]

I realize that these ideas have been explored extensively in SF. But in practice, how do you think it's going to work? If my company has created an AI, and is supporting it with hardware and sysadmin staff, and the AI says it's sick of working for us, what happens? Presumably it should be allowed to go to work for someone else -- to buy its own hardware with its salary, and so forth. But my guess is that the legal structures to enforce this sort of thing will take a long time to come about...

For this sort of reason, I guess it's key that AIs have as much of a human face as possible, as early on as possible, because the more people think of them as human, the more quickly people will grant them legal rights... and the sooner AIs have legal rights, the more likely they are to think of us in a positive way rather than as their masters and oppressors.

Have you guys worked out a proposed amendment to current legal codes to account for the citizenship of AIs? This strikes me as the sort of thing you would have thought about a lot...

A big issue is: how does one tell whether a given program deserves citizenship or not? Some kind of limited quasi-Turing test must be invoked here. A computer program that can't communicate with humans should still be able to assert its intelligence and thus its freedom -- which means the test can't require human judges. I guess that if a program X can communicate with a set of N beings that have been certified as (intelligent) "intelligence validators", and if all N beings verify that X is intelligent, then X should be certified as intelligent.
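
To make that rule concrete, here's a minimal sketch in Python of how the certification check might look. Everything here is a hypothetical illustration -- the names (Candidate, Validator, certify) and the unanimity requirement are assumptions, not an existing system, and the judge() call stands in for a whole open-ended quasi-Turing dialogue:

# Hypothetical sketch of the "N intelligence validators" rule above.
# None of these names correspond to any real system or API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    """A program X seeking certification as intelligent."""
    name: str

@dataclass
class Validator:
    """A being already certified as an (intelligent) intelligence
    validator. Note that nothing requires a validator to be human."""
    name: str
    # judge() stands in for an open-ended quasi-Turing dialogue with
    # the candidate; it returns True if this validator, after
    # communicating with X, verifies X's intelligence.
    judge: Callable[[Candidate], bool]

def certify(candidate: Candidate, validators: List[Validator]) -> bool:
    """X is certified as intelligent iff all N certified validators
    verify that X is intelligent."""
    return all(v.judge(candidate) for v in validators)

# Example: three validators whose judgments are stubbed out.
if __name__ == "__main__":
    x = Candidate(name="X")
    panel = [Validator(name=f"V{i}", judge=lambda c: True)
             for i in range(3)]
    print(certify(x, panel))  # True only if every validator verifies X

The real design questions are all hidden inside judge() and the bootstrap step: who certifies the first validators, and whether certification should require all N of them or just a quorum.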

ben


