RE: AGI Prototyping Project

From: Tyler Emerson (emerson@intelligence.org)
Date: Sun Feb 20 2005 - 05:15:57 MST


Another email from Mike Wilson.

-----Original Message-----
From: owner-volunteers@intelligence.org [mailto:owner-volunteers@singinst.org] On
Behalf Of Michael Wilson
Sent: Sunday, February 20, 2005 3:03 AM
To: volunteers@intelligence.org
Subject: RE: AGI Prototyping Project

Dustin Wish wrote:
> You seem a little elitist about such a project as this.

Yes, I am. AGI is an incredibly hard problem. Thousands of very
talented researchers have been attacking it for decades. Tens of
thousands of part-time dabblers have had a go by now. If this
was something that could be solved by conventional methods, it
would've been solved by now. The prior probability of /any/ one
AGI project being successful is perhaps a million to one even
before you begin to analyse what the design is capable of. For the
SIAI to have any realistic chance, we must have multiple major
advantages over all the competition. We have brilliant, dedicated
people, we have a uniquely cross-field perspective, we have a
very advanced theory. That won't be enough; we still need more
funding and recruits, but I think we have enough to try some
exploratory implementation.
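
(To put rough, purely illustrative numbers on that: prior odds of a
million to one against must improve by a factor of ~10^6 just to
reach even money. If each genuine major advantage were worth, say, a
hundredfold improvement in the odds, you would need three independent
ones to get there: 100 x 100 x 100 = 10^6. The figures are invented,
but they show why no single advantage can be enough.)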

> Your attitude about such a project seems to push away more than
> attract.

Unfortunately very, very few people are qualified to work directly
on AGI; my guess would be fewer than 1 in 10,000,000 (we haven't
found many of these yet). People at the 1 in 10,000 level of
intelligence and skill have a reasonable chance of being able to
help with the non-core areas. Beyond that, the effort of making
the problem accessible isn't worth it for what people can contribute.
Again I wish this wasn't the case, as I don't like elitism either,
but reams of past experience (just look at the SL4 archives) have
shown that though many people think they have something to contribute
to AGI, very few people actually do.

> AGI isn't any harder than Speech Recognition application development
> to me.

Speech recognition is a largely solved problem that was and is amenable
to standard engineering methods. The key point is that it's well
defined; we know exactly what we want, we just need to find algorithms
that can do it. General intelligence is /not/ a well-defined problem
(or rather, it's very hard to boil it down to a definition that is both
correct and complete).

> I think if you can read Dr. James Anderson's work on Perspex Machines
> dealing with 4D matrix models, then you can wrap your brain around
> cognition.

I read the two papers; it's an interesting programming substrate, but
there aren't any mechanisms there that would actually cause useful
cognitive complexity to develop. As an AGI effort this has roughly
the same value as Mentifex, or Marc Geddes's designs, or OSCAR, or
EPIC, or Gaurav Gupta's, or Bruce Toy's, or hundreds of other crank
proposals (or for that matter Cyc's AGI suitability, based on a vague,
unjustified notion of 'cognition is lots of knowledge').

It's clear that making stuff up just doesn't cut it; if we did that
we'd have no more chance of success than all of the above projects
(i.e. almost none). Our theory must be /different in kind/, in
particular the way in which we validate and justify it.

> I have yet to see even a basic demo of your theories that
> pushes your theorem to the front.

I haven't published anything yet and I won't be doing so in the
near future. I'd like to, but Eliezer has convinced me that the
expected returns (in constructive criticism) aren't worth the
risk. As such I'm not asking anyone to accept that my design is
the best, or even that it will work. Frankly I'm not that sure
that it will work, despite having a deep understanding of the
theory and advanced validation techniques; that's why this is
exploratory prototyping (note that many other projects are quite
happy to claim certainty that they've got it right despite being
unable to verify cognitive competence and/or blatantly wrong).

> or that you're trying to sell to a government or private company
> your "secret" snake oil.

Rest assured that I'm not trying to sell anyone anything (well,
unless this code develops into actual products that stand on their
own merits).

> It may seem a little untrusting of me to pry at you this way,
> but I have run into a ton of "snake oil salesmen" since I have
> been in this business.

I know what you mean. The problem is that AGI theories are very
hard to validate; to the untrained (or even moderately trained)
eye one looks as good as another. The sad truth of the matter is
that it would be easy for me to write a convincing looking yet
utterly bogus architecture document, post it to this list and
have lots of people say 'yes, that looks good'. Realistic AGI
architectures are confusing and unintuitive; people who don't
have a deep AI understanding would look at one and say 'that
can't work, I don't see X, Y and Z' (where the missing things
are folk psychology concepts or popular simple algorithms).

Michael Ames wrote:
> We readily acknowledge that there are no demonstration results...
> yet.

I spent a bit more than a year working on AGI design before
attempting a full-scale architecture. In my opinion this is the
bare minimum required; if we weren't up against such a pressing
deadline I'd insist on another year or two (Eliezer has been
working on this for eight years and still isn't ready to write
a constructive architecture description, though to be fair I
picked up quite a bit from where he left off). AGI is mostly a
high-level design challenge, not an implementation challenge
(unless you're trying to brute-force the problem, which as we've
acknowledged would result in an uncontrollable, world-destroying
seed AI).

Scott Powell wrote:
> It sounds like SIAI is pretty guarded about some of the 'hard
> science' behind their approaches; much of what I've seen around
> seems more like sales than science.

I agree and I don't like it; I acknowledge that we need the 'sales'
stuff to pull in new donors and volunteers, but I'd prefer that
SL4 was filled with hard technical discussion of AI internals and
that this list was buzzing with 'I coded this, what do you think'
and 'how about combining these modules?' etc. However we cannot
operate that way; firstly, once you acknowledge the sheer difficulty
of AGI you realise that there just aren't that many qualified
people available (and the unqualified ones would just waste time
with plausible-looking but unworkable ideas); secondly, we cannot
take the risk of releasing powerful techniques to all comers. I
don't like it, you don't like it, but that's the way it is.

> Why should the SIAI have "control" of the intelligence it creates?

We don't want 'control' in the sense of having an AGI that follows
orders. We want the AGI to do a very specific thing: implement
Collective Volition (Eliezer's FAI theory; see the SL4 Wiki), or
more generally behave in a Friendly manner. However Friendly
behaviour is a tiny subset of /possible/ AI behaviour, and by
default an AGI will be neither Friendly nor controllable (which leads
to existential disaster). Again we're not ready to build this yet;
right now I'm just building some non-AGI prototypes to test aspects
of the theory.

> Otherwise, what is the danger in sharing the development
> and sourcing of the SIAI movement?

Because if you build an AGI without knowing /exactly/ what you are
doing, it will do arbitrary things, which will almost certainly be
things that you don't want to happen. We don't know exactly what
we're going to do yet, but we're light-years ahead of all other AGI
projects in this regard. If we handed out takeoff-capable code,
some fool would proceed to build an AGI with it without understanding
the implications or building in a stable, Friendly goal system, and
it would be game over for everyone.

> SIAI itself seems to have an intuitive grasp of 'what
> comes after,' even if it is not laid out for all to see.

It is laid out for all to see here:

http://www.sl4.org/wiki/CollectiveVolition

Please read this if you haven't already. It's a statement of what
the SIAI intends to do. If you don't agree that this is better
than the alternative (which is basically allowing other projects
to build badly understood AGIs that will destroy the world in a
randomly chosen fashion), you shouldn't be volunteering to help.

> Credit will rapidly become an archaic notion; nobody will be
> honored as the 'Creator of AI.'

I have no idea how relevant credit will be post-Singularity, but
it's certainly irrelevant to what we do now; again see

http://www.sl4.org/wiki/SoYouWantToBeASeedAIProgrammer

We're here to save the world, not buck for credit.

> I am concerned that the development is not entirely for the
> benefit of ALL peoples, but rather just of a few. Is my concern
> grounded?

All of the SIAI staff are dedicated to the principle of the greatest
good for the greatest number. Friendly AI will be a project undertaken
on behalf of humanity as a whole; Collective Volition ensures that
the result will be drawn from the sum of what we each consider our
best attributes.

> Obviously there's no easy way to answer this, but I ask instead,
> what -are- the security reasons for a select inner circle on this
> project?

Because the inner circle are known to be moral, and perhaps more
importantly have the correct attitude to risk ('AGI is really, really
dangerous, we will not build one without a damned good proof that the
result will be good things'). I have this attitude, and that is why
I am taking so many safety precautions. Unfortunately most people who
are interested in AGI do not. If you're on this list, make sure you
read everything on the SIAI site and understand /why/ we have this
attitude.

> I believe that SIAI is an honorable initiative, and I only seek to
> stem the rather disturbing doubts that come to mind as I read the
> posts within this forum.

Though we can't give out key seed AI theory, everything else about
the SIAI (particularly our goals) should be and is up for scrutiny
and constructive criticism. If you're worried about the prospect of
a small group wielding awesomely powerful technology: congratulations,
you damn well should be. I know I am. Unfortunately, however, all of
the alternatives to the SIAI project seem to be much worse.

> What is 'volunteer activity' if not donating money?

The vast majority of people associated with the SIAI aren't qualified
to do any AGI coding at all. It's a shame, but AGI is very hard,
period. There is a limited amount of non-AI work (that Tyler wants a
VC to organise): PR, Friendliness advocacy, fundraising. For most
people the best way to support the SIAI and increase the probability
of a Friendly singularity is to donate money; even if you're helping
in other ways, the SIAI still needs your donations to fund the main
implementation project.

Tennessee Leeuwenburg wrote:
> The question is, do you think you should give the AI access to its
> own source code? ;)

I'm not sure if you mean 'will my prototypes incorporate reflection
and AI code generation' or 'will the SIAI's main project use
self-modifying code'. The answer to the latter has to be yes, since that's
part of the definition of seed AI. The answer to the former is 'kind
of'; I'm not using any conventional forms of code generation (e.g. GAs
and other probabilistic mechanisms are out) but I will be using a
limited amount of structural self-modification (hence the safety
precautions).
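
To make that distinction concrete, here is a toy sketch in Python.
Everything in it (the names, the test-gated install step) is a
hypothetical illustration of 'structural self-modification with
safety precautions', not a description of our actual mechanism:

    from typing import Callable, Dict

    class SelfModifyingSystem:
        """A system that may swap out its own named components, but
        only through a gatekeeper that verifies each candidate first
        (contrast with a GA, which splices in random variants and
        keeps whatever happens to score well)."""

        def __init__(self) -> None:
            self.components: Dict[str, Callable[[int], int]] = {}

        def install(self, name, fn, tests) -> bool:
            # Safety precaution: wire in the candidate only if it
            # passes every verification test.
            if all(test(fn) for test in tests):
                self.components[name] = fn
                return True
            return False

        def run(self, name: str, x: int) -> int:
            return self.components[name](x)

    system = SelfModifyingSystem()
    tests = [lambda f: f(2) == 4, lambda f: f(0) == 0]
    system.install('double', lambda x: x + x, tests)
    # Structural self-modification: the running system replaces its
    # own 'double' component with a new variant, gated by the same
    # verification tests.
    system.install('double', lambda x: x << 1, tests)
    assert system.run('double', 21) == 42

The point of the sketch is the gate: nothing gets wired into the
running system unless it has been verified first.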

> I write this only because it irritates me when "open source" becomes
> a philosophical dogma. I don't see why the project need be run as an
> open-source one. Having run various projects before, I tend to find
> that a small, specific group of people naturally tend to take power,
> and the project is best served by promoting their power rather than
> through massive distribution of workload. Information NEEDS to be
> concentrated in the minds of a few, when it gets past a certain
> complexity.

Read this carefully, as this is a nugget of wisdom (possibly hard won).
I completely agree, and the problem is compounded in (serious) AGI
because it's /very/ hard to explain (just look at the problems people
had interpreting LOGI, a much simpler theory) and the core elements of
the system have dense interconnectivity and interdependence (by necessity,
not choice).

> Designing an AI, something which I have thought about philosophically
> if not in context of programming (lack the skill), is I think something
> personal. An AI is typically backed by a framework, which is often only
> one of many potential models for driving a decision-making program.
> Making it open source may please the punters, but does not bring a great
> advantage to the progress of the project.

Again, very perceptive. Since it's so easy for a single person to miss
things, we're using a small team, but the sweet spot for core design is
very small (for this prototyping project I'm designing and Eliezer is
reviewing).

> I for one am sceptical that there is any large body of exceedingly
> dangerous knowledge that a few Singularity Institute software developers have
> managed to achieve - however brilliant they are. Scientific history is
> one of incremental improvement, and our most advanced technologies are,
> conceptually, available to anyone with the intelligence to understand
> them.

I agree that it seems unlikely, but from my point of view there is
compelling evidence to override the low prior. I'm actually rather
surprised that the SIAI exists and has any chance at all; this is
enough to give me a touch of anthropic paranoia. From your point of
view I agree that you can't tell if we're genuinely ahead of the game
or just misleading ourselves. However given the risks, stakes and
state of the competition, it makes sense to support the SIAI and this
project in order to find out.

 * Michael Wilson

~~~
Tyler Emerson
Executive Director
Singularity Institute
P.O. Box 50182
Palo Alto, CA 94303
Phone: 650.353.6063
emerson@intelligence.org
http://www.intelligence.org/


