From: ben goertzel (firstname.lastname@example.org)
Date: Tue Jan 22 2002 - 09:18:12 MST
> If you look at chess playing computers, they beat
> the best humans when they had specialized hardware
> that was estimated to be 3% of a human brain in
> total capacity. The implication is that the best
> human players devote 3% of their brain to chess,
> and the rest to the usual stuff our brains do.
Of course, though, we all know that "total capacity" means next to nothing
when comparing such totally different systems.
> Thus, I take a lower bound on 'when will the
> Singularity happen' as when we have computers at
> that capacity level available for specific tasks
> like optimizing code and designing chips.
But without some kind of advanced AI, no matter how fast future computers
are at carrying out specific aspects of chip design and code optimization,
the bottleneck is still going to be human thought and communication.
Fast computers, unless they're also smart, aren't going to come up with
the next breakthrough in global program optimization, 3D chip design, or
whatever. They're just going to solve hard technical problems within the
framework of each human-created breakthrough. And then, when (as
inevitably happens) the possible improvements achieved by *that*
breakthrough have been exhausted, we're back to human brains for the next
step.
So, I don't believe that "having programs fast enough to superintelligently
optimize code and design chips" is going to launch the Singularity. Even
these seemingly technical tasks rely heavily on general intelligence for
their ongoing advancement.

Yes, chess champion programs have been written without general intelligence.
But I think playing chess is more analogous to solving hard problems within
a given conceptual framework, than to inventing a new conceptual framework.
And Moore's Law and other related laws *do* rely on the invention, every few
years, of really new ways of looking at the relevant technical issues...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT