Singularity Institute - update

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 30 2003 - 15:03:21 MDT


I've been drawn farther and farther into Friendly AI theory recently, and
I haven't been able to concentrate much on promoting SIAI - our last major
addition of site content was in October 2002. It's become clear that
there are some major unfinished issues in Friendly AI, and I may need to
end up concentrating exclusively on the Friendly AI side of things. This
is starting to drive home to me yet again that there's rather a *lot* of
work involved in carrying off a successful Singularity. I can't do all of
it, or even most of it. In the long run I doubt that I can successfully
split my focus of attention between AI and SIAI - I haven't been having
much luck so far.

There is a certain amount of effort that needs to be exerted and there is
a limit to how few people can exert it. SIAI is here to improve
humanity's chances in the superintelligent transition. Surprisingly and
counterintuitively for such a huge event, it turns out that there are
points of leverage such that significant things can be accomplished by a
small number of people - SIAI, which needs to exist in any case, may be
able to accomplish its purpose without needing thousands of people. This
is not the same as being able to get by on a shoestring. There's more
work here than I can do. For the AI project to start up with a realistic
chance of success, it needs, I think, at least six extremely bright
programmers. As the AI theory has developed, the number of people needed
has gone down, but the minimum necessary intelligence has gone up.

We also need an Executive Director who can immediately take over the
administrative work and the job of writing site content - someone who
already has the competencies needed. Everyone I can think of with the
dedication required to fill this position is unfortunately too
inexperienced to do so.

What SIAI has, at this point, is a people problem. Even if we had all the
funders we needed, I don't think we'd be able to start up the primary AI
project because we wouldn't have the programmers. Perhaps, by the time we
have the funders, we will have those programmers - but we do not have the
people we need right at this moment. Nor could we start up without an
Executive Director to handle the administrative end. We need people who
are willing to step up and
allocate their lives to this, and these people need to have specific
competencies or abilities at very high levels. That's what it takes to
get the job done.

Since the lack of people is a blocker problem, I think I may have to split
my attention one more time, hopefully the last, and write something to
attract the people we need. My current thought is a book on the
underlying theory and specific human practice of rationality, which is
something I'd been considering for a while. It has at least three major
virtues to recommend it. (1): The Singularity movement is a very precise
set of ideas that can be easily and dangerously misinterpreted in any
number of emotionally attractive, rationally repugnant directions, and we
need something like an introductory course in rationality for new members.
(2): Only a few people seem to have understood the AI papers already
online, and the more recent theory is substantially deeper than what is
currently online; I have been considering that I need to go back to the
basics in order to convey a real understanding of these topics.
Furthermore, much of the theory needed to give a consilient description of
rationality is also prerequisite to correctly framing the task of building
a seed AI. (3): People of the level SIAI needs are almost certainly
already rationalists; this is the book they would be interested in. I
don't think we'll find the people we need by posting a job opening.
Movements often start around books; we don't have our book yet.

It would be much more convenient if the people we needed just walked up.
They might. But they have not done so yet. There is no reason to believe
they will do so. And we cannot proceed without them. The idea of writing
a book on rationality is a roundabout approach, and I strongly dislike
that, but it's the only method I can think of that might prove reliable.

The recent changes in my thinking about Friendly AI are more difficult to
explain. Roughly, I'm holding myself to a higher standard and trying to
accomplish more work in advance. The theory has progressed considerably
beyond "Creating Friendly AI", but the current theory is in a state where
further improvement is clearly possible and is, at this time, still
underway. It has, in fact, improved to the point where it can describe
certain problems which the programmers must solve, at some point, for a
Friendly AI to be built. I think it might prove very wise to have those
solutions in hand before work begins, or a complete theoretical
description of what the solution should look like, such that it is very
clear which problems remain unfinished. I'm worried about whether I'll be
able to solve the remaining problems in Friendly AI theory while
simultaneously teaching a group of AI programmers and building an AI.
If the theory gets to a certain point, which looks doable in the near
future, we will at the very least have a very solid description of the
things we still need to know. That way, even if FAI work goes slower than
expected, or AI work progresses faster than expected, we are not setting
ourselves up for future problems. It should be remembered that it is far
better to fail at AI than to succeed at AI and fail at Friendliness.

The upshot is that there are at least the following simultaneous
conditions that must be met before work can begin:

1) A programming team comprising, for all tasks required to build a
complete seed AI, persons capable of completing those tasks, and with
adequate time and energy to do so. This will require a certain minimum
number of brilliant people, whom we must find, and who will be very hard
to find.

2) An Executive Director capable of taking over all nontechnical
functions presently performed by Eliezer Yudkowsky, particularly that
whole "community leader" thing, but also including content writing for the
SIAI website, staying in touch with people, doing all administrative
paperwork, and providing the motive "push" behind the Singularity Institute.

3) Sufficient funding, at any given point in time, to fund the
programming team plus the Executive Director plus any ongoing activities
relating to fundraising and Singularity education.

4) More advanced Friendly AI theory. My job, but I'm becoming unsure of
my ability to do this and other things simultaneously.

Bear in mind the size of the goal we're trying to accomplish. It can be
very hard to take genuinely useful steps in that direction. The
establishment of SIAI is one such step; it allows for resources to be
focused on the Singularity, for the development of professional
specialists in Singularity matters, for volunteer efforts to be
coordinated. The establishment of SIAI is a genuinely useful step, but
it's only one step - it doesn't solve the whole problem all by itself.
Next we need an Executive Director. Then the AI programmers. Then enough
funding to launch the project. It's hard but not impossible. We can do
this, one genuinely useful step at a time, if we avoid distraction.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

