RE: Military Friendly AI

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jun 27 2002 - 19:58:06 MDT


Brian Atkins wrote:

> James Higgins wrote:
> >
> > I would tend to worry very little if Ben was about to kick off a
> > Singularity attempt, but I would worry very much if you, Eliezer, were.
>
> That's quite odd since last I checked Ben wasn't even interested in
> the idea of Friendliness until we invented it and started pointing out
> to SL4 exactly how important it is.

Quite the contrary, Brian. I -- like Pei Wang, Minsky, Peter Voss, and many
other AI researchers -- have been thinking about Friendliness for many
years. Since Eliezer was in diapers -- and in Minsky's case, since before
Eliezer or I were born! These are not new ideas. The term "Friendly AI" is
due to Eli, so far as I know, but the concept certainly is not.

Over the last 15 years, I have chosen to focus my research work, and my
writing, on the creation of real AI, rather than on the Friendliness aspect
specifically. This is not because I consider Friendliness unimportant. It
is, rather, because -- unlike Eliezer -- I think that we don't yet know
enough about AGI to make a really detailed, meaningful analysis of the
Friendly AI issue. I think it's good to think about it now, but it's
premature to focus on it now. I think we will be able to develop a real
theory of Friendly AI only after some experience playing around with
infrahuman AGI's that have a lot more general intelligence than any program
now existing.

I believe my attitude toward Friendliness is typical of AGI researchers.
It's not that no one but Eliezer realizes the issue exists, or is
important -- it's not that he brought the issue to the AI community's
attention. It's rather that he's nearly the only one who believes it's
possible to create a detailed theory of Friendliness *at this stage* prior
to the existence of infrahuman AGI's with a decent level of general
intelligence.

Personally, I think he's largely wrong on this; I think that his theory of
Friendly AI is not all that valuable, and that it will look somewhat
simplistic and naive, in hindsight, when we reach the point of having a
powerful infrahuman AGI.

The idea of self-modifying AI causing exponentially increasing intelligence
is also something AI researchers have been talking about for years -- Minsky
since the 70's or earlier. What distinguishes Eliezer is not his
understanding of the long-term relevance of this issue, but the fact that
he's one of very few AI researchers who thinks that this issue is worth
paying a lot of attention to *now*. Most AI researchers, rather, believe
that only once we have an infrahuman AGI with a lot of intelligence does it
make sense to pay a lot of attention to intelligence-increasing
self-modification.

Now, no one has proved they know how to construct an AGI. It is possible
that Eliezer is correct that it makes sense to spend a lot of time on these
issues *now*, before we have a decent infrahuman AGI. But it is not right
to claim that others don't understand these issues, or don't think they're
serious, just because they think the task of creating a decent "real AI"
should come temporally first.

I note that, while Eli has been focusing on these topics, he has not made
all that much observable progress on actually creating AGI. He has
performed a valuable service by bringing ideas like AI morality and AI
self-modification to a segment of the population that was not familiar with
them (mostly, members of the futurist community who are not AI researchers).
But by making this choice as to how to spend his time, he has chosen not to
progress as far on the AI design front as he could have otherwise.

> Not that it seems to have had much
> effect since he still has no plans that I know of to alter his rather
> dramatically risky seed AI experimentation protocol (basically not
> adding any Friendliness features until /after/ he decides that the
> AI has advanced enough) (he has a gut feel you see, and there's certainly
> no chance of a hard takeoff, and even if it did he's quite sure it would
> all turn out ok... trust him on it)

I think that it is not possible to create a meaningful "Friendly AI" aspect
to Novamente at this stage. I am skeptical that it's possible to create a
meaningful "Friendly AI" aspect to any AI architecture in advance, before
one has a good understanding of the characteristics of the AI in action.

Perhaps someone will create an AI system that is sufficiently deterministic
that it would be possible to create an effective Friendliness component for
it in advance of seeing how the system works as a fairly intelligent
infrahuman AGI. However, my intuition is that no system with this level of
determinism will be able to achieve a high level of general intelligence.

I do trust my intuition that there is no chance of Novamente having a hard
takeoff right now. The damn design is only about 20% implemented! We will
know when we have a system that has some autonomous general intelligence,
and at that point we will start putting Friendliness-oriented controls in
the system. Putting this sort of control into our system now would really
just be silly -- pure window dressing.

You may say "Yeah, Ben, but you can't absolutely KNOW the system won't
achieve a hard takeoff tomorrow." No, I can't absolutely know that, and I
can't absolutely know that I'm not really a gerbil dreaming I'm an AI
scientist, either; nor that the universe won't spontaneously explode three
seconds from now. But there's such a thing as common sense. There are a
dozen other people who know the Novamente codebase, and every single one of
them would agree: there is NO chance of Novamente as it is now, incomplete,
achieving any kind of takeoff. It does not have significantly more chance
of doing so right now than Microsoft Windows does. I am sure that if Eli
saw the codebase as it now exists he would agree -- not that it's bad, it's
just very incomplete.

> I guess it is because we go to the effort to
> put our plans out for public review and he sits in with the rest of the
> crowd picking them apart. At least we _have_ plans out for public
> review.
>

Eliezer has a much more detailed plan for AI friendliness than I do, but in
my view it's sort of a "castle in the air," because it's based on certain
assumptions about how an AI will work, and Eliezer does not have a detailed
design (let alone an implementation) for an AI fulfilling these assumptions.
The whole theory may be meaningless, if it turns out it's not possible to
make (or even thoroughly design) an AGI meeting the assumptions of the
theory.

I am working on a book on the Novamente AI design. It's a long and hard
process; I've been spending about 50% of my time on it since October 2001.
When done, the book will be 750+ pages and full of math, diagrams, etc. etc.
The draft I circulated to a few readers, a couple months ago, was badly
flawed and is being enhanced, repaired and extended significantly (based on
the early readers' suggestions and complaints). I expect this book to be
published in 2003. This will include a description of the Novamente goal
system and a discussion of Friendliness from a Novamente point of view.

For now, a 50-page high-level overview of the Novamente system is available
on the site www.realai.net (go to the "Novamente AI Engine" page).

Also on that page you will find a link to an essay I wrote on "AI Morality".
(Eliezer and some others pointed out some minor flaws in that paper, which I
have not yet found time to correct, but it still basically represents my
views.) I do not give a detailed theory of Friendly AI comparable to
Eliezer's there, but I do explain generally how I expect AI morality to
work, and discuss some of the issues I have with Eliezer's ideas on Friendly
AI. I stress that this is something I've thought about "in the background"
for a long time, but NOT something that has been a major focus of my work
lately, because of my belief that the right way to do Friendly AI will only
be determinable via substantial experimentation with early-stage infrahuman
AGI's.

> How about we set July for picking Ben's plan apart. After all he is far
> closer to completion (he claims) than anyone else, yet few people here
> seem to have anywhere near as good a grasp of his ideas compared to
> SIAI's.
>
> Disclaimer: this post is not intended to start any kind of us vs. them
> phenomena. It exists simply to point out a perceived important difference
> in the amount of critical discussion regarding the two
> organizations' plans.

Regarding picking my ideas on Friendly AI apart, that sounds like a fun
discussion! However, I will be on vacation from July 1-11 (though I will
check e-mail occasionally); hence I suggest postponing a long and detailed
thread on this until mid-July, when I get back.

Regarding picking the Novamente AI design apart, unfortunately a really
detailed thread on that will have to wait until sometime in 2003, when the
book comes out. There is a lot of depth there, much more than most of the
readers of the first draft saw (due to the flaws of the first draft), and a
detailed discussion of the design among a group that doesn't *know* the
details of the design is unlikely to be productive.

Yours,
Ben G
