Re: Singularity Institute - update

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 30 2003 - 19:03:12 MDT


Ramez Naam wrote:
> Instead of the roundabout strategy of writing a book on rationalism,
> how about:
>
> 1) Writing a book on your AI and FAI ideas?

I have the feeling no one would understand them unless I write the book on
rationality first. One of the questions leading up to this has been me
asking myself: "Why haven't more people understood what's out there
already? Why is no one extending these ideas?" And my conclusion is that
what I've written is the final output of a thinking process which has not
itself been explained. When I see people trying to extend the ideas
currently out there, I see them getting stuck on basic mistakes that I
dealt with long before I started writing AI papers. So I need to go back
to fundamentals.

Everything James Rogers says is also correct. (Except that I hope the
tome won't be all *that* immense. Perhaps this is merely foolish idealism
on my part.)

> 2) Publishing your AI and FAI ideas in AI journals?

Evolutionary psychology journals, perhaps. "Behavioral and Brain
Sciences", perhaps. Even a journal on Bayesian information theory.
Certainly not AI journals.

> 3) Pursuing a PhD in AI? (which would force you to do #2)

I've discussed this before and am really not all that interested in taking
it up again. I don't consider this the best use of my time. I'm not
getting a PhD. You can get a PhD if you like.

> Any or all of the above would have the advantages of:
>
> a) Spreading your ideas to other people working in AI. FAI could go
> from a project that you and a few other people are working on to a
> mainstream consideration for AI work. This would seem to reduce the
> overall risk of a non-friendly AI being inadvertently built by some
> other AI research group.

Unfortunately, I have recently become less and less certain that this will
actually work. As Friendly AI theory has developed, it has become clear
that this is not something you can do correctly based on a few vague
admonitions. It is not something you can do by working from someone
else's theory that you don't really understand. It is not something that
you can do using a few extra modules. Any AI group that does not spend
years thinking about FAI and then build their AI from the ground up using
a deeply understood, detailed, and correct theory of Friendly AI will not build
a Friendly AI, period.

It'd buy some time. Not much.

> b) Giving you more direct credibility. Scientific publications,
> technical books, and mainstream credentials all increase your ability
> to raise funds from private sources, acquire funds in the form of
> grants, and convince others of your ideas.

They would increase it very little. I'd rather talk directly to rationalists
than try (and fail) to persuade nonrationalists by arguments from authority.

> A book on rationalism seems very low leverage to me. It doesn't
> specifically target people with the skills to work on FAI or people
> with the potential resources and inclination to help fund FAI.

Having thought about exactly that problem, I concluded that a book on
rationality targets such people far more precisely than a book on AI. If
someone is currently interested in AI, it means they have a head stuffed
full of the misleading information that currently predominates in the
field - philosophicalish anti-knowledge. What's needed for AI work are
abilities which are more likely to make their bearers interested in
rationality than in the wasteland of modern AI. See:

   http://sl4.org/bin/wiki.pl?SoYouWantToBeASeedAIProgrammer

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
