RE: Open Source Friendly AI? (was Singularity and the general public)

From: Ben Goertzel (ben@webmind.com)
Date: Fri Apr 13 2001 - 13:25:26 MDT


> > What, do you think that corporate spies or government ones won't
> > be able to acquire your code one way or another?
>
> I think that by the time they want to acquire the code, "the code" will be
> capable of defending itself, will be capable of figuring out that it's
> been kidnapped, and will be too complex (or deliberately self-obscured) to
> be passively modifiable in the face of self-created safeguards, by any
> unassisted human programmer who steals it.
>
> Before that time period, "the code" may be doing a few cool things, but
> nothing that would make anyone want to take the risk of illegal action to
> acquire it. Any AI that can't rewrite verself is probably not a threat to
> humanity, even if stolen and corrupted.

This seems naive.

The risk would be something like this:

Suppose, in 2005, you have a system that is 3/4 of the way to being
productively self-improving, and that does enough useful things that some
rich, crooked company wants to steal it.

Then, the company may steal your code for its currently useful functions.
Their researchers may then take it to the point of productive
self-improvability. Having more resources, perhaps they'll finish the thing
before you do. And perhaps they'll edit out the Friendliness module,
replacing it with an "Obey our Board of Directors at all costs" module.

There is a real future risk, though it's not a current one...

ben


