Re: Time and Minds/Big Daddy

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 23 2001 - 12:41:44 MDT


Xavier Lumine wrote:
>
> I often find self-described 'bystanders' posting on this list, following
> their often important thoughts with something along the lines of "but I'm
> no one to speak" or "I probably don't know what I'm talking about" or "I'm
> getting back to my work because the people who actually write AI will sneer
> at my post". The individuals on the Friendly AI project under the
> Singularity Institute for Artificial Intelligence are not superhuman. We
> are a small team of computer programmers, self-educated cognitive
> scientists, and enthusiasts. Although I will not dispute that bright
> engineers and designers are necessary to create true Friendly AI, we are
> still people with an incomplete view of the world, and posts to this list
> are neither laughable nor shunned.

Hm, I'm not so sure about that. Creating AI is not like anything else in
the world, up to and including Flare. There is a Flare team, but there is
not yet an AI team. I'm reasonably confident I can transfer over the
knowledge needed to create Flare. I'm a bit more worried about
transferring over the knowledge needed to create AI. The real knowledge,
I mean; not design specs, but the skill needed to come up with design
specs. Programming languages have been created before and will be created
again. AI is something else. There are no superhumans, and so we'd
better *hope* it doesn't take superhumans, but I'm not so sure it can be
done with your average bright 99th-percentile programmer either. Sixty
percent of real AI is knowing how to refuse the path of ideology, or pause
and invent complex solutions to complex problems, but that still leaves
forty percent blinding flash of intuition.

Yes, my view of the world is still ultimately incomplete, but I know how a
mind works, not just in the abstract but in the specific, and that makes a
huge difference. I don't live in a fog of nervous confusion. If I come
to a conclusion, the conclusion may be strong or weak, but either way I'll
know why I came to that conclusion, and furthermore I'll be able to check
that the reasoning proceeded according to the normative rules for
rationality - giving rise to an outlook that is often mistaken for
"confidence" on a first meeting, but which is actually just a case of
knowing exactly how uncertain I am and why.

There are no superhumans, unfortunately, yet. But let's not understate
what it takes to be an AI programmer either. It's not enough to be very
bright, or even smart enough to be written up as a genius in Wired
magazine. People of that caliber have hit the problem of AI and bounced.
If we are lucky, it will turn out that being "very bright" is enough to
join an existing AI project already headed in the right direction and make
useful contributions, but this is by no means certain.

As for what it takes to post to the list, I wish I could say that everyone
had what it takes, or that everyone would automatically know whether or
not they have what it takes, or that at the very least all the good
posters would estimate themselves to be good posters even if some bad
posters did so as well. Unfortunately, all three hypotheses have been
disconfirmed by experience, and so I can't tell people "If you're not
sure, don't post" because that would wipe out at least two-thirds of the
sufficiently-good first posts I've seen.

Still, I think that in the end the list is better served by artificially
high standards than artificially low standards. I think the list is
better served by perfectionism than tolerance. Smart (and grammatical!)
lurkers eventually overcome their nervousness and post... though
admittedly, I would have no way of knowing if good people got scared off
entirely. But consciously professing universal tolerance can destroy a
list. I've seen it happen.

This list was created to provide a refuge for the best damned posts in the
whole damned Solar System - or that, at any rate, is the ideal. This is
not a list that is *supposed* to be easy to post to. It is a list that is
supposed to be fun to read.

I appreciate useful criticism, but the vast majority of criticism I get is
not useful. You don't find the rare people who can catch you in a mistake
by casting your net as wide as possible; you find them by raising and
raising your standards until your environment meets such high standards of
rationality that useful critics can hang out there. Useful criticism is
not something that you find by seeking criticism, it's something you find
by creating an environment where rational arguments can grow and prosper,
free from distraction; among those rational arguments will eventually be
found rational criticisms. And that, in turn, means being willing to do
something when it seems like standards might be dropping. It means that
instead of soliciting easy fun criticism, you solicit correctly spelled
rational arguments that thread through complex (but fun!) technical
issues, and hope that the people who can manage *that* will manage a few
pieces of useful criticism as well.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
