From: Eliezer S. Yudkowsky (email@example.com)
Date: Sun Jan 29 2006 - 10:00:16 MST
Michael Roy Ames wrote:
> I would have to agree with you somewhat. The article is definitely
> not politically correct. I see it as a somewhat tongue-in-cheek
> over-statement meant to discourage applicants who are not willing to
> put in the hard, tedious work of learning the needed knowledge and
> gaining the mental discipline to contribute as a seed AI programmer -
> not to mention discouraging less than highly talented individuals.
> The seed AI programmer described and/or implied in
> seed-ai-programmer.html is an imaginary being, unlikely to be found
> in the world today, but perhaps not with a probability of zero. And
> there lies the rub. Should we alter the entry requirements to be
> less stringent, more readily attainable by mere mortals? Should we
> alter the article to appear less offensive, more conventional while
> retaining its filtering effect?
"Should we alter the entry requirements to be less stringent?"
While we're at it, let's alter the task to be less difficult. Also
let's change the speed of light so we can get to Andromeda faster.
The article is indeed not politically correct; it was written in a rush,
originally on the SL4 Wiki, in between doing other things.
If today I were rewriting the actual content of the article, rather than
the tone, I would emphasize mathematics, mathematical logic, decision
theory, probability theory, and above all, sheer raw fluid intelligence,
aka g-factor. Marcello doesn't have a solid background in mathematical
logic or decision theory, which is probably the next nearest neighbor to
the problems we are currently discussing. (For that matter, *I* don't
have a solid background in mathematical logic or decision theory.) But
Marcello competes at the national level at programming, and has overall
studied more math than I have. So when I say, "If utility functions are
unique up to a positive affine transformation, what does that actually
preserve?" Marcello says, "Relative size of intervals," and then I say,
"Oh, of course," and proceed to describe what this means about the
structure of utility functions. Now, Marcello probably(?) would not
have independently realized what preserving the relative size of
intervals means about the structure of utility functions, at that point
in his career. And if I hadn't asked the question, Marcello might not
have realized that it was an important question to ask, at that point in
his career. Marcello still needs a lot of ancillary experience and
principle before he can help steer the bicycle, not just help pedal it.
But Marcello can keep up his end of the conversation, rather than just
staring at me blankly, as most AGI wannabes would if I asked what a
positive affine transformation preserves.
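(For the curious, a concrete sketch of that exchange — this example is
mine, not from the original post: a positive affine transformation
u'(x) = a*u(x) + b with a > 0 rescales and shifts a utility function,
and what survives is the ratio of utility differences.)

```python
# Sketch (illustrative, not from the original post): a positive affine
# transformation u'(x) = a*u(x) + b with a > 0 changes the raw utility
# numbers but preserves the relative size of intervals between outcomes.

def affine(u, a, b):
    """Apply u' = a*u + b to a dict mapping outcome -> utility (a > 0)."""
    assert a > 0, "only *positive* affine transforms preserve preferences"
    return {x: a * ux + b for x, ux in u.items()}

u = {"A": 0.0, "B": 2.0, "C": 10.0}   # original utilities (arbitrary units)
v = affine(u, a=3.0, b=-7.0)          # a positive affine transform of u

# Ratio of the interval B-A to the interval C-B, before and after:
ratio_u = (u["B"] - u["A"]) / (u["C"] - u["B"])
ratio_v = (v["B"] - v["A"]) / (v["C"] - v["B"])

print(ratio_u, ratio_v)  # both 0.25: interval ratios are invariant
```

The raw values change completely (v maps A, B, C to -7, -1, 23), but any
ratio of utility differences comes out the same, which is exactly the
"relative size of intervals" answer.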
At the inaugural colloquium of the Redwood Neuroscience Institute, there
was a discussion among some (major, prestigious) computational
neuroscientists about what kind of degree they'd most like to hire.
Some said neurology, some said electrical engineering, but then one
person said, "I'd rather have someone with a degree in physics, because
they can learn anything," and the rest nodded agreement. The most
important requirement, obviously, is that the one be able to learn
anything. It is also indispensable that the one already be a math
talent, and have some experience programming, because there are some
things I'm not willing to bother teaching.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence