Re: Seed AI (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 25 2002 - 13:05:19 MDT


Ben Goertzel wrote:
>
> I don't believe that Einstein or Newton possessed more intelligence than was
> put into the moonshot or the Human Genome Project....

I do, actually. Sometimes great scientific achievements are made by people
who are no smarter than many other contemporary geniuses but who happen to
be in the right place at the right time. Both Newton and Einstein managed
to do it *consistently*, however. If Einstein had been an ordinary genius,
he would have invented Special Relativity on the basis of Michelson-Morley,
but not gone on to invent General Relativity on the basis of essentially
*no* evidence. Almost as astonishing as Einstein's invention of General
Relativity is his (absolutely correct) advance confidence in it: on being
asked what he would have done if the eclipse observations conflicted with
General Relativity, Einstein is said to have replied: "Then I would have
pitied the Good Lord. The theory is correct."

Now of course it could be that Einstein was just overconfident and happened
to be right (though this also happened with Special Relativity and
Kaufmann's apparent disconfirmation of it), but it's even more unnerving to
contemplate whether Einstein's confidence might have been *actually
justified*. Sure, this violates the social rules of science laid down by
people who weren't Einstein; what if Einstein was right and they were wrong?
What if Einstein's assigned confidence level was as correct as his theory?
It implies that Einstein didn't have *just enough* intelligence to invent
General Relativity; it implies that Einstein had enough intelligence to
invent General Relativity and *know* it was correct. How did he possibly
know? For that matter, how did he invent the theory in the first place? It
seems to me that in some way, Einstein learned to think like the universe
well enough to anticipate, in advance of knowing, what kind of physical laws
the universe would invent. Not barely, by the skin of his teeth, but with
enough margin that he could do it several times in a row and be confident of
his answers in the face of apparent contradictory experimental evidence.

That's the power of intelligence. It's quite fashionable to deemphasize the
power of individual intelligence in favor of social intelligence these days.
Fashionable, but wrong. Nature is not that fair or democratic. There is
a historical tendency to overattribute progress to famous individuals rather
than collaborative efforts, but intelligence is still far more powerful when
concentrated in a single individual.

> It seems to me that some types of problems are more suitable to "lone
> geniuses" (a la Einstein, Newton) whereas others are more suitable to
> "brilliant teams".
>
> Is AGI a lone genius problem or a brilliant team problem? I think it is a
> brilliant team problem, due to the complex and integrative nature of
> practical intelligence. On the other hand, I think "unifying physics" is
> more of a lone genius sort of problem (as it was in the times of Einstein
> and Newton).... But of course these are "just intuitions" ;->

I think the basic nature and architecture of intelligence is a lone genius
problem, followed by a brilliant team problem for building all the parts
once the *kind* of parts are understood. I don't think the members of the
team have to be quite as brilliant as the lone genius individually, although
the total intelligence might be greater collectively. Fortunately so,
because it becomes exponentially harder to find a lone genius as the
required intelligence increases linearly.
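
The "exponentially harder" point can be made concrete if one assumes, purely for illustration (the post itself makes no such assumption), that the relevant ability is roughly normally distributed: each additional standard deviation of required ability shrinks the candidate pool by a rapidly growing factor. A minimal sketch:

```python
from math import erf, sqrt

def tail_fraction(sd_above_mean: float) -> float:
    """Fraction of a standard normal population above a given threshold."""
    return 0.5 * (1.0 - erf(sd_above_mean / sqrt(2.0)))

# Linearly raising the bar shrinks the pool super-exponentially:
for sd in (2, 3, 4, 5):
    print(f"{sd} SD above the mean: roughly 1 in {round(1 / tail_fraction(sd)):,}")
```

Going from 2 to 3 standard deviations cuts the pool by a factor of about seventeen; from 4 to 5, by over a hundred, which is the sense in which a team of the merely brilliant is far easier to assemble than one sufficiently extreme individual.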

> Computer science in general seems to have benefited from collective
> intelligence at least as much as from "lone genius" style intelligence.
> Progress in CS has been as substantial as that in physics, yet we don't have
> CS heroes on the order of Einstein or Newton to look up to.

I wouldn't describe AI as a CS problem.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
