Webmind Inc. as social process

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Mar 10 2002 - 18:33:00 MST


Ben Goertzel wrote:
>
> These are indeed claims, but as far as I can tell they are not backed up by
> anything except your intuition.
>
> I am certainly not one to discount the value of intuition. The claim that
> Novamente will suffice for a seed AI is largely based on the intuition of
> myself and my collaborators.
>
> However, my intuition happens to differ from yours, as regards the ultimate
> superiority of your CFAI goal architecture.
>
> I am not at all sure there is *any* goal architecture that is "ultimate and
> superior" in the sense that you are claiming for yours.
>
> And I say this with what I think is a fairly decent understanding of the
> CFAI goal architecture. I've read what you've written about it, talked to
> you about it, and thought about it a bit. I've also read, talked about, and
> thought about your views on closely related issues such as causality.
>
> Sometimes, when the data (mathematical or empirical) is limited, there is
> just no way to resolve a disagreement of intuition. One simply has to
> gather more data (via experimentation (in this case computational) or
> mathematical proof).
>
> I don't think I have a big emotional stake in this issue, Eliezer. I never
> have minded integrating my own thinking with that of others. In fact I did
> a bit too much of this during the Webmind period, as we've discussed. I'm
> willing to be convinced, and I consider it possible that the CFAI goal
> architecture could be hybridized with Novamente if a fair amount of work
> were put into this. But I'm not convinced at the moment that this would be
> a worthwhile pursuit.

Figuring out how to convince you on these issues is indeed a pretty problem
for me. Well, the worst that can happen is that I don't figure out how to
convince you and the Earth is destroyed, so no pressure - right?

Webmind Inc. as a social process - actually, I think I'll hereafter refer to
it as Intelligenesis, to avoid confusion with the AI architecture. Anyway,
Intelligenesis interests me. I know in some detail how evolution walked up
the incremental path to intelligence, and I know in some detail how AI
research as commonly conducted managed to repeatedly fail to create
intelligence. One of the many contributing factors to the failure of AI, as
commonly understood, is a theoretical straitjacket so rigid that it isn't
possible for the AI programmers to be creative. Webmind's agent
architecture - and your own beliefs, on starting out, about magicians
transforming magicians - permitted Intelligenesis to merge multiple
mostly-wrong theories of intelligence into a common ground where the
occasional bright idea from one of the researchers could survive and,
perhaps, reproduce.

Considered from an evolutionary standpoint, Intelligenesis broke out of the
one-AI one-theory straitjacket, which had previously held for *general
intelligence* projects (e.g. Cyc) even if it was occasionally violated by
more pragmatic robotics architectures and so on. Correspondingly, Webmind
broke out of the AI-as-single-algorithm straitjacket, not so much because
any individual researcher had a picture of AI as a supersystem, but because
all the different researchers thought that AI was composed of different
systems. In combination, all the ideas added up to a much bigger idea than
any previous single AI researcher had ever had for general intelligence.

Of course, I am just extrapolating here based on what I know about
intelligence and Intelligenesis. If I know you, Ben, right now you're
thinking of some additional properties of Intelligenesis's climb towards
theoretical complexity and Webmind's climb toward supersystemness which I
didn't mention and which must therefore be explained to me. Please bear in
mind that, as I visualize Intelligenesis, there is indeed a great deal which
went on that I haven't mentioned here. I can guess that you encountered all
sorts of difficulties which had to be resolved in the course of integrating
everyone's small theories into the big theory that slowly emerged; I can
guess that it was hard to figure out which parts went into the big theory; I
can guess that some parts of the big theory survive unchanged from the
theories that people brought in with them; I can guess that you saw some of
the shape of the big theory in advance, but that there were still surprises;
I can guess that some researchers had bigger theories than others as they
signed on with Intelligenesis; I can guess that some people worked on the
concepts behind their theories for years before entering Intelligenesis, so
that the component theories aren't "small" in any absolute sense; I can
guess that some of the people brought with them theories that were more
complex than what you consider to be the "failed past simplicity of AI", so
it wasn't just different classical theories breeding; I can guess that you
tried to integrate some theories but that they just didn't work; and so on.

Some of these guesses may be wrong, of course, since I don't know as much
about the history of Intelligenesis as I do about the evolution of human
intelligence. If they're all wrong, though, then I really would be
surprised, but I'd also give Novamente a much smaller chance of going
supernova, since from my perspective the reason why Webmind/Novamente gets
more credence than any other random AI project is that you built it using
more than one idea.

The problem is that, while I can make general guesses like those given
above, the question of how much you *did* learn, which *specific* ideas and
lessons you accumulated, is an underconstrained problem. The question of
what you *know* you've learned is even more underconstrained. Otherwise, it
would be very easy to demonstrate to you that I know something unusual about
intelligence; I could recite the most important and least-known lessons
about AI methodology that I would expect you to have learned from
Intelligenesis and Webmind. But this, unfortunately, is a transhuman party
trick, and I am not a transhuman.

As it is, the most I can offer to do is solve problems or answer questions.
Unfortunately, so far it doesn't appear possible to do this over the Web or
email, though I've had much more luck conveying the deep ideas through
realtime interaction (and not just at Webmind, either). The problem is that
even with realtime interaction, Webmindfolk could be convinced, but they
didn't *stay* convinced. They'd talk to someone else
about causality or goals and the next day they'd have a different idea. It
was pretty frustrating; I wanted to get everyone in the same room at the
same time so that I could deal with all the objections simultaneously, but
that wasn't in the cards. Of course I expect my visit to Webmind played a
larger role in my week than it did in yours, and hence looms larger in my
memory.

(And now for a sharp segue.)

In the old days I would have been overjoyed to see any AI project making any
progress at all. Now any progress that exceeds progress in thinking about
Friendliness represents an urgent problem to be corrected - by improving the
understanding of Friendliness, of course.

The two most important questions, from my perspective, are: (1): Now that
you're working with the Novamente approach, did you learn from
Intelligenesis *how* to build supersystems, or did you just learn about *a*
supersystem that will become a new cul-de-sac for you? (2): How much
intelligence does it take for a seed AI takeoff anyway? The latter in
particular has too many internal variables for me to guess. It could be
anywhere from human-level intelligence to just above Eurisko.

So, I've got to explain this Friendliness stuff as early as possible. Your
current estimation of me appears to be as someone who'd make a nice
researcher for Intelligenesis, at least if he could learn to just build his
own Friendliness system and see what it contributes to intelligence as a
whole, instead of insisting that everyone do things his way. This is very
kind of you, and I do appreciate it. But the thing is, I'm not *supposed*
to be a typical Intelligenesis researcher. I'm supposed to be the guy that
takes the project over the "hump" that's defeated all AI projects up to this
point. I doubt that any single Intelligenesis programmer could have thought
up Webmind on their own, though I suppose the ones who believed in the
right kind of agent architecture, and who had the right talent for
integrating other people's ideas, could have filled the Ben Goertzel role
and built another Intelligenesis. Now of course I do plan to call upon the
talents of others in building SIAI's AI, but there's a difference between
calling on the talents of others, and building a plan that doesn't work
unless someone else swoops in and solves a problem you don't currently know
how to solve. My job is to parse the problem into chunks that can be
solved by sufficiently creative individuals, and to make sure that there's
full scope for individual creativity in any specific area while
simultaneously preserving the general architecture that makes the levels of
organization add up. If it turns out that I have help on the deep problems,
then great! But I'm not relying on it.

Now, of course I realize that you haven't seen me in action enough to know
that I'm any smarter than run-of-the-mill AI researchers - who of course
are bright people in their own right; just not, so far, smart enough to
crack the deep problems of AI. It's not a planetary disaster if you
continue to assume that I'm of O(Intelligenesis-personnel) smartness (OIPS),
or, what the hell, even a bit under, given that you probably consider OIPS to be
a high level of intelligence that I haven't quite demonstrated my fitness
for yet. But it will make it harder for me to get past the point of
convincing you that I also see the things you see that create the deep
questions of Friendly AI, so that I can start showing you the accompanying
answers.

You've already demonstrated your ability to acquire ideas from people whom
you regard as OIPS. So my current plan is to go on trying to show how
Friendliness works. If email just isn't enough, maybe I'll fly over for a
week so we can finally work this stuff out between us; I doubt you'd want
SIAI to build a UFAI either. Otherwise, I guess SIAI will just have to
finish building our seed AI first. This is actually what I expect to be the
case regardless <smile>; it's just that I am obliged not to rely on that, if
at all possible. (Not that I'd mind Novamente beating us to it - *if* I
could be confident that you had someone around who fully understood
Friendliness. I am not going to be the only one who raises this question if
you go on designating transhumanity as your target, please note.)

From my perspective, the basic current problem is not so much the degree to
which you think I'm <OIPS, >OIPS, or whatever, but rather one particular
case of how you generalize your experience with Intelligenesis; you don't
trust AI researchers' arguments until you see them implemented in practice.
Of course this is an obvious lesson to learn; I wouldn't trust an AI
researcher's arguments either, because they are, broadly speaking, blatantly
wrong. And I would expect that you've heard a lot of rationalizations of
flawed ideas in the course of the Intelligenesis social process. But it
does represent a problem to me if you've learned to distrust all verbal
arguments and rely on either working code or, failing that, your intuitive
perceptions. Your intuitive perceptions may be more reliable in that they
are not as subject to manipulation and rationalization by Intelligenesis
researcher-units. But "more reliable" is not "reliable"; intuitive
perceptions can be wrong too. Even working code can be wrong, for that
matter. For me it means that I have to alter your intuitive perceptions.
That's a lot of work over and above what it would take to initialize the
intuitive perceptions of someone who hasn't tried to solve the problem yet,
or to show the flaw in someone's intuitions to a third party who shares a
common knowledge base. You're right that it would help if you'd get around
to publishing that paper on the Novamente design; as it is, "where Ben's
intuitions come from" is rather underconstrained, and I'm working blind.

> Overall, I think the problem with this long-running argument between us is:
>
> 1) You don't really know how the Novamente goal system works because you
> don't know the whole Novamente design
>
> 2) I don't really know how your CFAI system would work in the context of a
> complete AI design, because I don't know any AI design that incorporates
> CFAI (and my understanding is, you don't have one yet, but you're working on
> it).
>
> I can solve problem 1 by giving you detailed information about Novamente
> (privately, off list), though it will take you many many days of reading
> and asking questions to really get it (it's just a lot of information).

I'll take it. Please send.

> Problem 2 however will only be solved by you completing your current AI
> design task!!

Yes, that has always been the traditional test of We Who Claim To Understand
Intelligence. But it will take time, as is known to both of us.

> I don't mean to say that I'll only accept your claims about the CFAI goal
> architecture based on mathematical or empirical proof. I am willing to be
> convinced intuitively by verbal, conceptual arguments that make sense to me.

Fair enough.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


