Re: Teraflops consumer chips by 2006

From: Dale Johnstone (DaleJohnstone@email.com)
Date: Tue Mar 13 2001 - 06:19:33 MST


James Rogers wrote:
>On 3/12/01 8:22 PM, "Dale Johnstone" <DaleJohnstone@email.com> wrote:
>> I'm not sure how useful a graphics processor would be for general AI
>> work, but I suspect it's possible to implement some kind of neural
>> style processing with judicious use of texture compositing hardware
>> in combination with the z-buffer (or stencil buffer).
>
>In any case, it is largely irrelevant, as memory size/speed is far
>more of a limitation on general AI than instructions per second (not
>strictly true if you are trying to map biological models onto silicon,
>but I don't consider that to be efficient anyway). So I don't see this
>chip as any substantial breakthrough as far as AI is concerned.

Actually I think getting the design right is *far* more important.

>> Ben: Have you thought about compressing your data before paging
>> to/from disk? And why use Java anyway? Nice language to work with,
>> but it blows goats in the memory use department... :)
>
>Perhaps because minor linear improvements in memory speed and
>availability don't justify the programming effort when the need is
>for exponentially larger memory? No point in wasting the effort; one
>might as well wait for the hardware to catch up because you'll get
>there about the same time for almost the same amount of money.

So you're saying that (A) memory size/speed is more of a bottleneck to
AI than processor time, and (B) it's not worth doing anything about it
because in a few years' time we'll have that anyway?

Well, you could decide to go on holiday and just wait for things to
eventually happen all by themselves! I would prefer to do something
about it now and get next year's performance today if it's at all
possible. Waiting is a good strategy for catching a bus, not advancing
the Singularity.

The time involved in writing a compressor is nowhere near comparable
to the time spent waiting for hardware to improve. The first sounds
like a weekend's work; the second is a multi-year wait. (BTW there's a
zip library already available in Java; see the sketch below.)
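
Something like this would do as a starting point. It's a minimal
sketch using the standard java.util.zip classes (GZIPOutputStream and
GZIPInputStream); the CompressedPager class and its save/load method
names are just placeholders of mine, not anyone's actual code:

    import java.io.*;
    import java.util.zip.*;

    public class CompressedPager {
        // Compress a block and write it to disk in one pass.
        static void save(byte[] block, File file) throws IOException {
            OutputStream out =
                new GZIPOutputStream(new FileOutputStream(file));
            out.write(block);
            out.close(); // flushes and finishes the gzip stream
        }

        // Read a compressed block back and inflate it.
        static byte[] load(File file) throws IOException {
            InputStream in =
                new GZIPInputStream(new FileInputStream(file));
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            in.close();
            return buf.toByteArray();
        }
    }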

Compressing the data *could* potentially give you worthwhile gains if
your app is heavy on disk access. Let's say your processor has to load
a large block of data (4 seconds), then process it (1 second), then
save it (4 seconds). Total time is 9 seconds.
Now if you could compress that data down to 1/4 of its original size,
the total time taken would be 1+1+1 = 3 seconds. That's 3 times faster.
(You're also 4 times more likely to find the data you want in the disk
cache, depending on access patterns.) Admittedly, if the access
patterns are nothing like that then you may not get much gain, but it's
certainly worth trying if they are.
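
To make the arithmetic explicit, here's a back-of-the-envelope model.
It assumes disk time scales linearly with the number of bytes moved
and that decompression is cheap enough to hide inside the processing
time (both assumptions, not measurements):

    public class SpeedupEstimate {
        // ratio = compressed size / original size, e.g. 0.25
        static double totalTime(double loadSec, double processSec,
                                double saveSec, double ratio) {
            return loadSec * ratio + processSec + saveSec * ratio;
        }

        public static void main(String[] args) {
            double before = totalTime(4, 1, 4, 1.0);  // 9 seconds
            double after  = totalTime(4, 1, 4, 0.25); // 3 seconds
            System.out.println(before / after + "x faster"); // 3.0x
        }
    }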

I've used Java for memory-demanding apps in the past and found it to
be a terrible memory hog, mainly because of its poor garbage
collection. I found it didn't scale very well at all. It seems like an
odd choice; that's why I was asking.

BTW don't forget, this is in the context of AI - linear improvements
can lead to exponential returns. They don't call it a Singularity for
nothing! :)

Singularity Now!

--
Dale Johnstone.
