Re: Edge.org: Jaron Lanier

From: Eugen Leitl (eugen@leitl.org)
Date: Wed Nov 26 2003 - 12:42:22 MST


On Wed, Nov 26, 2003 at 02:15:35PM -0500, Perry E. Metzger wrote:

> We've done much more work on the parallel architectures path than
> people seem to remember. I'm a veteran of several parallel computation
> projects in the late 1980s, like the DADO Machine, the Y Machine, etc.

Computer science is now quite firmly in established-discipline mode
(read: rigor mortis).
Never mind the founders; the early adopters have been littering the
obits, and each subsequent generation seems bent on rediscovering the
wheel, only as a polygon. No new language has been able to transcend
concepts pioneered in Lisp, the second-oldest language after Fortran.
The (already fading) XML hype is just badly reinvented SEXPRs in
disguise. This is so pathetic, words fail me.
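To make the point concrete, here's a toy sketch (Python, helper names
mine, purely illustrative) that renders an XML tree as the SEXPR it
always was:

  import xml.etree.ElementTree as ET

  def to_sexpr(node):
      # render an ElementTree node as an S-expression string
      parts = [node.tag]
      parts += ["(%s %s)" % (k, v) for k, v in node.attrib.items()]
      if node.text and node.text.strip():
          parts.append('"%s"' % node.text.strip())
      parts += [to_sexpr(child) for child in node]
      return "(" + " ".join(parts) + ")"

  doc = ET.fromstring('<book lang="en"><title>SICP</title></book>')
  print(to_sexpr(doc))
  # (book (lang en) (title "SICP"))

Same tree, same nesting, a fraction of the syntax, and Lisp had it
in 1958.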

It is a bit like the Cambrian explosion being over, don't you agree?
The crazy experimentation mode is over (and so are the funds for R&D),
and the discipline has simply stagnated. Is it just me, or doesn't CS
smell a lot like a crypt full of mummies? Perhaps it's time to
torch the place, and move on to something completely different.
 
> Huge amounts of effort was expended on trying to produce new, parallel
> paradigms for computation to take advantage of massively parallel
> hardware. Hundreds of radical new designs were worked on, with all
> sorts of innovative ideas -- everything from moving to smart memory
> and dumb computation to data flow architectures to everything else you
> can imagine.

I remember. I was there, if only as an interested lay observer.
It is hard to remain optimistic against the backdrop of so many lost hopes.
 
> The result of all of it was that you could produce some interesting
> architectures for specific computational tasks, but producing
> something that had general programming utility was damn hard. We just
> don't know how to do it well.

My point precisely. Our high-level cognitive architecture has a huge
perception/performance deficit where parallelism is concerned.
We can't do it, because we just can't. Could alternative entities
(AIs, aliens) debug 10^6 concurrent threads the way we manage 2-3?
Is that at all possible, or is massive parallelism unattainable
except by algorithms driven stochastically from the roots up? I wish we knew.
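Even the 2-3 case is shaky. A toy sketch (Python, illustrative only)
of the classic lost-update race:

  import threading

  counter = 0

  def bump(n):
      # read-modify-write on shared state: not atomic, so
      # concurrent increments can be lost
      global counter
      for _ in range(n):
          counter += 1

  threads = [threading.Thread(target=bump, args=(100000,))
             for _ in range(2)]
  for t in threads: t.start()
  for t in threads: t.join()

  # expected 200000; depending on scheduling and interpreter
  # version it may print less, and a different number each run
  print(counter)

Two threads, one counter, and already the bug is nondeterministic.
Scale the thread count by five orders of magnitude and our debugging
intuitions are simply gone.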

> I don't think the problem was hardware. I think the problem is that

Initial bits were dumb. We were stuck with the notion of a "smart
central processing unit" versus "dumb storage, a pile of bits" even
after bits came to be represented in the same silicon structures as
logic gates. It was a habitual bias we didn't even realize we had,
because we're so used to the metaphor of a human processor
sequentially hacking away at a dumb universe.

> you can show that some cellular automaton is Turing Equivalent and
> massively parallel, but actually making it do useful work requires

The Turing machine is nice gedanken hardware for sequentially minded
theorists, but as a blueprint for a physical computer it sucks mossy
rocks. People still don't realize that in this universe a sequential
machine simply can't finish most tasks before the universe is over,
never mind current realtime requirements, which are on the ms..us scale.
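Contrast that with a cellular automaton. A minimal Rule 110 sketch
(Python, illustrative): every cell reads only its two neighbours, so
a step is embarrassingly parallel, yet coaxing useful work out of it
(cf. Matthew Cook's universality construction) is exactly the part we
don't know how to do:

  RULE = 110  # Turing-equivalent elementary CA

  def step(cells):
      # each cell depends only on its local neighbourhood: all
      # updates could run at once, one processor per cell
      n = len(cells)
      return [(RULE >> (cells[(i - 1) % n] << 2 |
                        cells[i] << 1 |
                        cells[(i + 1) % n])) & 1
              for i in range(n)]

  tape = [0] * 63 + [1]  # one live cell on a ring of 64
  for _ in range(20):
      print("".join(".#"[c] for c in tape))
      tape = step(tape)

The physics parallelizes for free; the programming model is what's
missing.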

> techniques we don't understand. By contrast, the Von Neumann style lent
> itself very easily to real world work.

Because we seem to see the world as a sequence of events. We can't
jump outside of our skins.

-- Eugen* Leitl leitl
______________________________________________________________
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
