Re: [sl4] The Jaguar Supercomputer

From: Alexei Turchin (alexeiturchin@gmail.com)
Date: Tue Nov 17 2009 - 03:28:47 MST


Even if Moore's law stops, supercomputers can still grow by several orders of
magnitude after that, simply because more money will be spent on buying
"cores".

If one core costs 1 dollar, and the whole Earth spends 1 trillion dollars
each year on "cores", after 10 years we get 10**13 cores.
(And no obsolescence, because Moore's law has stopped.) And if each "core"
does 10**10 flops, which seems reasonable with current technology, the
total power of the supercomputer will be 10**23 flops.

So 10**23 flops is roughly the maximum we could get with current technology.
And it is far more than the 10**15 flops of Jaguar, so it is not surprising
that supercomputers grow fast.

But for me, progress in high-speed single cores is more interesting -
where is the 1 THz core?

On 11/17/09, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
> On Nov 16, 2009, at 10:37 PM, John K Clark wrote:
>> However it's not likely to
>> retain its crown for long. For reasons that I don't entirely understand,
>> the speed of the very fastest computers has been advancing even faster
>> than Moore's Law. For the last 5 years the speed of the fastest computer
>> on Earth has doubled every 9 months and there is no slowdown in sight.
>
>
> It is not hard to understand, the Top 500 benchmark (LINPACK) doesn't
> measure much of anything useful. It is an embarrassingly parallel benchmark
> that will scale linearly with the money spent on throwing cores at the
> machine. Most codes have not scaled remotely as fast. In fact, most codes
> are barely scaling at all. Most codes, including almost any code of interest
> for AI, won't show anything like this scalability on the XT5. Most
> sparse-structure and graph-like problems don't scale at all on massively
> parallel machines, though Cray makes machines that are much better suited
> (and smaller) for those types of codes than the XT5. The XT5 is a nice
> machine, but it is oriented toward topologies and access patterns like
> computational fluid dynamics.
>
> Top 500 has zero -- repeat *zero* -- relevance to AGI. None. Zip. Nada. It
> benchmarks a code that is completely orthogonal to essentially all AGI
> workloads, and selects for systems that typical AGI workloads would scale
> horribly on. Top 500 is scaling because it increasingly excludes almost all
> useful workloads outside of an extremely narrow application space.
>
> You can buy machines that scale well for AGI workloads, but you will never
> see them in the Top 500 because they are not designed for that workload. In
> fact, for AGI workloads these machines will run rings around the Top 500
> systems.
