RE: AGI Prototyping Project

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Feb 20 2005 - 14:42:32 MST


"J. Andrew Rogers" wrote:
> All current AGI projects are standing on the shoulders of giants.

All the major, credible AGI projects take care to examine the field
of AI, looking for incremental successes to reuse and common pitfalls
to avoid. I still regularly see people starting personal projects and
declaring that 'since all past work failed, it must be irrelevant, so
I don't need to pay much/any attention to it'.

> The vast majority of projects have not had the benefit of the math
> and theory we have now. All real technology is developed
> incrementally, and AGI is one particular technology that does not
> produce much in the way of very obvious qualitative results
> until it is at a very advanced stage.

Most relevant recent theoretical progress has been in decision
theory and cognitive science, not in AI as such; AGI projects don't
often reuse technology from earlier, failed AGI projects (or from
other people's AGI projects in general).

>> We have brilliant, dedicated people, we have a uniquely
>> cross-field perspective, we have a very advanced theory.
>
> This is not a differentiator. Everyone else claims to have the
> same, and they are mostly correct.

Untrue; there are plenty of projects that don't acknowledge the
interdisciplinary requirement (SOAR, Cyc), or that rest on no
radical new theory (CCortex; granted, 'advanced' is only verifiable
in retrospect). The brilliance of the researchers is something
people simply have to make their own best guesses about.

> So what you are saying is that there is ample evidence that people
> are easily capable of deluding themselves into thinking that they
> are smart enough to figure out the One True Path to AGI?

Yes.

> And this does not apply to the folks at SIAI because...?

It does apply to the SIAI. Our only defence is that we acknowledge
the depth of the self-delusion problem (and take what steps we can
to detect it). In fact this is another reason I want to do (limited)
prototyping now; if we have serious blind spots, I want to know about
them ASAP. This is also a major reason for Eliezer's push for formally
provable algorithms and techniques for FAI; while deluding yourself
into thinking you have an AGI design when you don't is merely a waste
of time and resources, deluding yourself into thinking you have an
FAI design when you really only have a seed AI design is fatal.
 
> My biggest criticism of AI designs generally is that almost none
> of them really offer a theoretically solid reason why the design
> can be expected to produce AI in the first place.

I agree. There are many designs that are theoretically capable of
supporting general cognition; Novamente, for example, or even in
principle a Perspex machine (it is Turing complete). However, these
designs don't actually specify the required cognitive complexity,
only inductive mechanisms intended to generate it, and I have yet
to see a design where it is demonstrated that these inductive
mechanisms will actually be capable of creating the relevant
'emergent' complexity (given a tractable amount of computation).
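
To make the tractability worry concrete, here is a throwaway sketch
(my own toy, not Novamente's or the Perspex machine's actual
mechanism): a brute-force 'inductive mechanism' that enumerates
programs in a tiny expression language until one fits the data. It
is complete over that language in principle, but the candidate
space explodes with depth, which is exactly the gap between 'Turing
complete' and 'will tractably generate the required complexity':

from itertools import product

OPS = ['+', '-', '*']     # primitive operations
LEAVES = ['x', '1', '2']  # terminals

def programs(depth):
    """Enumerate every expression up to the given nesting depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from programs(depth - 1)
    for op in OPS:
        for left, right in product(programs(depth - 1), repeat=2):
            yield f'({left} {op} {right})'

def induce(examples, max_depth=2):
    """Return the first program consistent with all (x, y) pairs."""
    for prog in programs(max_depth):
        if all(eval(prog, {'x': x}) == y for x, y in examples):
            return prog
    return None

# Fitting y = x*x + 1 from three examples succeeds at shallow depth...
print(induce([(0, 1), (2, 5), (3, 10)]))
# ...but the candidate count explodes: 3 at depth 0, 30 at depth 1,
# 2,730 at depth 2, ~22 million at depth 3, and so on exponentially.
for d in range(3):
    print(d, sum(1 for _ in programs(d)))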

> Engineering design based on things that "sound good" does not
> fly (no pun intended) in any other domain (except perhaps social
> engineering) and it has never produced good results anywhere it
> has been tried as a general rule. Justification and validation
> is a very necessary prerequisite that cannot be glossed over
> because it is difficult or inconvenient.

Absolutely. If you can't show that a design will work in principle,
it definitely won't work in practice. There are some problem
domains for which this doesn't hold: if the range of possible
solutions is small enough, trial and error will eventually come up
with the right answer.
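
For instance (again a toy of my own): brute-forcing a three-digit
combination needs at most a thousand trials, so no in-principle
argument is required before trying it; the identical strategy on a
128-bit key space never terminates in practice:

import itertools

SECRET = (4, 2, 7)  # the hypothetical unknown we are searching for

def crack(check, digits=3):
    """Exhaustively try every code until the check function accepts one."""
    for trials, code in enumerate(
            itertools.product(range(10), repeat=digits), start=1):
        if check(code):
            return code, trials

code, trials = crack(lambda c: c == SECRET)
print(f'found {code} after {trials} trials (bounded by 10**3 = 1000)')
# A 128-bit key space has 2**128 (~3.4e38) candidates; 'it worked on
# the small problem' validates nothing about the large one.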

> As for "certainty", you can also say something is "certain" to
> the extent that one can validate the model in implementation.
> And even then, you can only say that which has been demonstrated
> is a certainty; "the house is painted white on the side".

True, but if the demo confirms a good fraction of the predictions
made by a theory of general cognition, the probability of the
other predictions being accurate goes up considerably.
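
To put toy numbers on that (mine alone, assuming the predictions
are independent and each confirmed one is a fixed factor likelier
under the theory than under its negation):

def posterior(prior, likelihood_ratio, confirmations):
    """Posterior probability of the theory after n confirmed predictions."""
    odds = prior / (1 - prior)                 # convert to prior odds
    odds *= likelihood_ratio ** confirmations  # one Bayes factor per hit
    return odds / (1 + odds)                   # back to a probability

# Even from a sceptical prior of 5%, five confirmed predictions (each
# assumed 4x likelier if the theory is right) push the theory, and
# hence its remaining predictions, above 98%.
for n in range(6):
    print(n, round(posterior(0.05, 4.0, n), 3))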

> Nothing beats a killer demo.

Ideally you want to excite people enough to get them interested
without being so impressive that everyone starts trying to clone
your effort.

>> We don't know exactly what we're going to do yet, but we're
>> light-years ahead of all other AGI projects in this regard.
>
> So to clarify: 1) you don't know what you are doing,
> 2) you have used your powers of omniscience to divine what
> everyone else is doing, and so it follows that
> 3) your ideas are far ahead of everyone else.

By 'what we're going to do' I mean 'the Friendliness architecture
that we're going to implement via the AGI'. I'm not talking about
my speculative seed AI architecture, which right now has only my
opinions in favour of it. I'm referring to Eliezer's Friendliness
architecture, the implementation of Collective Volition. It's
not complete, but it is definitely the most advanced Friendliness
theory developed to date. No other AGI projects have cracked or
even seriously tackled the Friendliness problem yet (unless there
is something you're not telling us in that regard).

> A compelling argument to be sure, but it sounds like you should
> have used your powers of omniscience to figure out your own plan
> rather than trying to figure out what everyone else does or does
> not know.

I'm not qualified to research Friendliness theory; I'm relying
on Eliezer's powers of omniscience in that regard ;)
 
>> All of the SIAI staff are dedicated to the principle of the most
>> good for the greatest number. Friendly AI will be a project undertaken
>> on behalf of humanity as a whole; Collective Volition ensures that
>> the result will be drawn from the sum of what we each consider our
>> best attributes.
>
> What organization in the world, good or evil, does NOT profess these very
> things?

Say what? The vast majority of organisations are companies dedicated
to their stockholders or private clubs dedicated to the interests of
their members. Most AGI projects are in the former category. As for
Collective Volition, if anyone other than the SIAI is planning to
implement it, I haven't heard about it.

>> Because the inner circle are known to be moral...
>
> While I have no reason to believe The Inner Circle is Evil,
> statements such as this give skeptics a reason to be skeptical.

James, whoever develops a seed AI first (and manages to meet the
strict requirements for getting it to do something predictable) will
have an awesome moral responsibility. I know you don't think hard
takeoff is likely, but even with soft takeoff the people programming
the initial AGI(s) will have an utterly unprecedented amount of power
over all of humanity. If there is a way to avoid this massive risk,
I'd be all for it, but I'm not aware of one. As such, anyone who is
developing AGI is effectively claiming the right to take the future
of the human race into their own hands, and anyone who wants to
support an AGI project is forced to decide who is most likely to be
trustworthy. I can't see what else we can possibly do other than
state our intent clearly and try to be as transparent and honest as
possible.

 * Michael Wilson

        
        
                


