RE: AGI Prototyping Project

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Sun Feb 20 2005 - 12:03:28 MST


Michael Wilson wrote:
> Yes, I am. AGI is an incredibly hard problem. Thousands of very
> talented researchers have been attacking it for decades. Tens of
> thousands of part-time dabblers have had a go by now. If this
> was something that could be solved by conventional methods, it
> would've been solved by now.

All current AGI projects are standing on the shoulders of giants. The vast majority of past
projects did not have the benefit of the math and theory we have now. All real technology is
developed incrementally, and AGI is a technology that does not produce many obvious
qualitative results until it is at a very advanced stage.

> We have brilliant, dedicated
> people, we have a uniquely cross-field perspective, we have a
> very advanced theory. That won't be enough; we still need more
> funding and recruits, but I think we have enough to try some
> exploratory implementation.

This is not a differentiator. Everyone else claims to have the same, and they are mostly
correct.

> Again I wish this wasn't the case, as I don't like elitism either,
> but reams of past experience (just look at the SL4 archives) have
> shown that though many people think they have something to contribute
> to AGI, very few people actually do.

So what you are saying is that there is ample evidence that people are easily capable of
deluding themselves into thinking that they are smart enough to figure out the One True
Path to AGI? And this does not apply to the folks at SIAI because...?

Don't bother answering that question. Whatever you are going to say, it is what we would
expect a self-deluded SIAI person to say.

> It's clear that making stuff up just doesn't cut it; if we did that
> we'd have no more chance of success than all of the above projects
> (i.e. almost none). Our theory must be /different in kind/, in
> particular the way in which we validate and justify it.

My biggest criticism of AI designs generally is that almost none of them offer a theoretically
solid reason why the design can be expected to produce AI in the first place. Engineering
design based on things that "sound good" does not fly (no pun intended) in any other domain
(except perhaps social engineering), and as a general rule it has not produced good results
anywhere it has been tried. Justification and validation are necessary prerequisites that
cannot be glossed over merely because they are difficult or inconvenient.

> I haven't published anything yet and I won't be doing so in the
> near future. I'd like to, but Eliezer has convinced me that the
> expected returns (in constructive criticism) aren't worth the
> risk. As such I'm not asking anyone to accept that my design is
> the best, or even that it will work. Frankly I'm not that sure
> that it will work, despite having a deep understanding of the
> theory and advanced validation techniques; that's why this is
> exploratory prototyping (note that many other projects are quite
> happy to claim certainty that they've got it right despite being
> unable to verify cognitive competence and/or blatantly wrong).

I've been going back and forth on the "publish or not" thing for a long time, but have finally
come to the conclusion that publication is a net negative. I agree with Eliezer and Michael
Wilson on this. Publication has its uses, particularly in academia, but to a large extent it is
attention whoring. You can get the same constructive criticism value via very limited private
circulation.

As for "certainty", you can only call something "certain" to the extent that the model can be
validated in implementation. And even then, you can only claim certainty for that which has
actually been demonstrated: "the house is painted white on this side". Nothing beats a killer
demo.

> The problem is that AGI theories are very
> hard to validate; to the untrained (or even moderately trained)
> eye one looks as good as another.

This is to be expected, since most of the people interested in validating an AGI theory are
interested in exploiting it rather than understanding it. And of these people, the smart ones
are rightly skeptical. Nothing beats a killer demo.

> I've spent a bit more than a year working on AGI design before
> attempting a full-scale architecture. In my opinion this is the
> bare minimum required; if we weren't up against such a pressing
> deadline I'd insist on another year or two.

One can burn a lot of resources on iterative verification by implementation. It will get the job
done, but it ain't cheap. On the other hand, there are often subtle problems that you
discover in implementation that would have taken a lot longer to discover doing high-level
design. I wish I hadn't spent so much time on iterative implementation, but I'm not sure that
I would have been obviously better off doing it another way.

> We don't know exactly what
> we're going to do yet, but we're light-years ahead of all other AGI
> projects in this regard.

So to clarify: 1) you don't know what you are doing, 2) you have used your powers of
omniscience to divine what everyone else is doing, and so it follows that 3) your ideas are far
ahead of everyone else. A compelling argument to be sure, but it sounds like you should
have used your powers of omniscience to figure out your own plan rather than trying to
figure out what everyone else does or does not know.

> All of the SIAI staff are dedicated to the principle of the most
> good for the greatest number. Friendly AI will be a project undertaken
> on behalf of humanity as a whole; Collective Volition ensures that
> the result will be drawn from the sum of what we each consider our
> best attributes.

What organization in the world, good or evil, does NOT profess these very things? The anti-
Singularity organizations will have the same statement pasted on their homepages.

> Because the inner circle are known to be moral...

Is it any wonder that SIAI is sometimes painted as a cult? While I have no reason to believe
The Inner Circle is Evil, statements such as this give skeptics a reason to be skeptical.

cheers,

j. andrew rogers


