Novamente project goals

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Mar 10 2002 - 20:26:38 MST


Ben Goertzel wrote:
>
> Hi,
>
> > Actually, as described, this is pretty much what I would consider
> > "human-equivalent AI", for which the term "transhumanity" is not really
> > appropriate. I don't think I'm halfway to transhumanity, so an AI twice as
> > many sigma from the mean is not all the way there. Maybe you should say
> > that Novamente-the-project is striving for human-equivalence; either that,
> > or define what you think a *really* transhuman Novamente would be like...
>
> We are striving first for human-equivalent, then slightly-transhuman, then
> profoundly-transhuman AI.
>
> I do not think it is important for us to articulate in detail, at this
> point, what we believe a profoundly-transhuman AI evolved from the Novamente
> system will be like.
>
> Because, I think it is probably NOT POSSIBLE for us to envision in detail
> what a profoundly-transhuman AI evolved from the Novamente system will be
> like.

That's right. It's not. In fact, it's not even possible for either of us
to envision in detail what a human-equivalent Novamente would be like,
although we have different ideas about how much self-modification will have
been done by the time it reaches that level.

But there is a difference between envisioning system architectures, and
envisioning consequences.

It so happens I believe that no matter what a project *says* it's trying to
achieve, you can often figure out what it's *actually* trying to achieve by
looking at the way the researchers act. Various projects have previously
claimed to be trying to build an AI that was human-equivalent in one sense or
another. Were they really trying to build a person? Of course not. They
were trying (and failing) to build advanced tools. Had they really been
trying to build a person, it would have shown in their attitude; they would
have given some thought to whether the resulting system would be deserving
of human rights, their responsibilities toward the created individual, and
so on.

Now, it certainly appears that Novamente has gotten this far in terms of
feeling the emotional impact of envisioned consequences. Your dedication,
your willingness to work on the AI even after Webmind went down, shows the
same thing. Whatever it is you're trying to create, it doesn't feel like a
fancy tool to you. I don't know what goes on within the Secret Private
Novamente Mailing Lists, but I wouldn't be surprised to find that serious
consideration of the moral responsibility you owe to the AI is a frequent
topic.

I'm willing to believe that you envision, and are working to create, a
human-equivalent being - what I would consider human-equivalent, anyway -
who'll be smart; better than us humans at things like Higher Math and code;
quite scientifically literate; a fine scientific collaborator; speaking to
us with a personality that uniquely marks it as an AI. This is what you
described when I asked you what you were building, and it is consistent with
the visible portions of your emotional attitude that you take this vision
seriously.

But is your attitude toward Novamente really consistent with trying to
create a superintelligence?

What is a ~H (roughly-human) Novamente? It's the most important scientific discovery of the
21st century, beyond all doubt, with moral and philosophical and
technological implications that place it far ahead of nanotechnology or even
human immortality. It is one of the grandest acts in the pageant of human
life that could ever be conceived. It would give rise to technological
aftereffects immense enough to power an economic boom across multiple
decades, a new industrial revolution.

But it would still be something that occurred within the framework of human
life, however significant. You're uncomfortable with my declaration of
intent to bypass having an ordinary life, because to you Novamente may be
the greatest thing you ever do, but it won't be the only thing you ever do.
You can have a life that includes wife, kids, and Novamente as
accomplishments. For me the Singularity marks, not the end of everything,
but the beginning of everything. It is the sum of what there is to do, here
on Earth before the Singularity.

While a ~H Novamente might bring about an immense economic boom, it would be
an economic boom whose first effects would be felt in the First World. It'd
trickle down to the Third World eventually, of course, but it would take a
while. So it's also understandable that you have a bad reaction to the
belief that a Friendly superintelligence benefits every one of six billion
humans equally, to such a great extent as to utterly wipe out existing
differences; to you it appears to be a case of ignoring a problem that you
don't expect to be magically fixed just by Novamente. For us, of course,
everything begins with the Singularity, for First Worlders and Third
Worlders alike.

On the Friendliness issue, well, a ~H Novamente going bad could cause a hell
of a lot of trouble, but it wouldn't be the end of everything. And all you
need is a Novamente that behaves nicely around other people; your
Friendliness architecture doesn't need to contain all of humanity's hopes
and dreams. Your attitude toward Novamente's outlook on life seems close to
the attitude I'd take toward building an AI that *wasn't* supposed to
grow up into the Singularity Transition Guide. I'd want that AI to be a fit
player in the human drama, maybe even a child I could be proud of, but not
just like a human; that would be boring.

Asking you to swear a solemn oath to act on behalf of sentience/humanity is
a bit over-the-top if you're envisioning yourself building a ~H Novamente.
The people who built the integrated circuit didn't have to swear an oath
like that, so why should you? If you're working
on a superintelligence, though, the only problem with taking an oath is that
any oath pales by comparison with the act itself. You don't sound like
someone setting out to commit an act with (positive) consequences so
tremendous that anyone placing a single dividing marker across all of human
history would place it there; and not because they think Real AI is a
philosophically important moment in history, either. You
appear, from what I can see through email, to consider such statements
over-the-top. Implications that extend out from a scientifically active, ~H
Novamente are okay; implications that extend out from superintelligence are
not. Not just in terms of what you consider to be good public relations,
which is a separate issue, but in terms of what you, personally, are
comfortable with discussing.

In short, everything about your emotional posture that I can read through
email says that you're making decisions based on your vision of a ~H
Novamente - not a superintelligent one. The problem is that Moore's Law
goes on, and self-improvement goes on, and even if there is somehow a stable
state in which ~H Novamente lasts for ten years instead of two weeks, any
scientific revolution started by Novamente is utterly insignificant by
comparison with what happens at the end of ten years.

Now, it is well-known that figuring out people's real thoughts and emotions
through email is an underconstrained problem. I'm not trying to pigeonhole
you. Just consider this as depicting the causes and conclusions of my
possibly erroneous intuition in sufficient detail that you can fix what's
broken.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


