AI and Moore's Law redux

From: Emil Gilliam (emil@emilgilliam.com)
Date: Sat Jan 26 2002 - 00:35:45 MST


I tremble in *fear* writing this message, because as soon as I mention
an ancient incantation such as "Moore's Law", the list is bound to
degenerate into endlessly rehashed banalities that could be avoided
with a quick look at the archives. I might as well throw in "human
augmentation" and "qualia", and really foul things up. ;-)

My attempt at a somewhat-new question is:

How can some preeminent scientists, in this day and age, defend the
notion that existing computer power (or even far less), combined with
some rather simple but as-yet-undiscovered programming technique, would
suffice for general intelligence?

It does not suffice to say, "Given enough scientists (even famous ones)
and enough viewpoints, someone will defend just about anything." I would
like people who know more than I do about the history and personalities
behind AI research to explain where this notion comes from and why it
persists. I suspect the answer is quite subtle.

John McCarthy says this in his FAQ [1]:

Q. Are computers fast enough to be intelligent?

A. Some people think much faster computers are required as well as new
ideas. My own opinion is that the computers of 30 years ago were fast
enough if only we knew how to program them. Of course, quite apart from
the ambitions of AI researchers, computers will keep getting faster.

Stephen Wolfram said this in a 1996 interview [2]:

"Sometime--perhaps ten years from now, perhaps twenty-five--we'll have
machines that think. And then we'll look back on the 1990s and ask why
the machines didn't get built then. And I'm essentially sure the reason
won't be because the hardware was too slow, or the memories weren't
large enough. It'll just be because the key idea or ideas hadn't been
had. You see, I'm convinced that after it's understood, it really won't
be difficult to make artificial intelligence. It's just that people have
been studying absolutely the wrong things in trying to get it. ...

"Well, anyway, after the failures of the early brute-force approaches to
mimicking brains and so on, AI entered a crazy kind of cognitive
engineering phase--where people tried to build systems which mimicked
particular elaborate features of thinking. And basically that's the
approach that's still being used today. Nobody's trying more fundamental
stuff. Everyone assumes it's just too difficult. Well, I don't think
there's really any evidence of that. It's just that nobody has tried to
do it. And it would be considered much too looney to get funded or
anything like that."

... "I'm guessing that a key ingredient is going to be seeing how
computations emerge from the action of very simple programs--the kind of
thing that happens in the cellular automata and other systems I've
studied."

Marvin Minsky has stated recently (I do not have the exact quote) that a
1-megahertz machine could become sentient with the right programming [3].

[1] http://www-formal.stanford.edu/jmc/whatisai/node1.html
[2] http://www.stephenwolfram.com/about-sw/interviews/96-2001/text.html
[3] http://www.nanomagazine.com/nanomagazine/01_22_09


