From: Daniel Yokomizo (email@example.com)
Date: Tue Nov 17 2009 - 15:23:54 MST
On Tue, Nov 17, 2009 at 6:43 AM, J. Andrew Rogers wrote:
> On Nov 16, 2009, at 10:37 PM, John K Clark wrote:
>> However it's not likely to
>> retain its crown for long. For reasons that I don't entirely understand,
>> the speed of the very fastest computers has been advancing even faster
>> than Moore's Law. For the last 5 years the speed of the fastest computer
>> on Earth has doubled every 9 months, and there is no slowdown in sight.
> It is not hard to understand: the Top 500 benchmark (LINPACK) doesn't measure much of anything useful. It is an embarrassingly parallel benchmark that scales linearly with the money spent on throwing cores at the machine. Most codes have not scaled remotely as fast; in fact, most codes are barely scaling at all. Most codes, including almost any code of interest for AI, won't show anything like this scalability on the XT5. Most sparse-structure and graph-like problems don't scale at all on massively parallel machines, though Cray makes machines that are much better suited (and smaller) for those types of codes than the XT5. The XT5 is a nice machine, but it is oriented toward topologies and access patterns like those of computational fluid dynamics.
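The distinction above — work that splits into independent chunks versus work where each step needs the previous one — can be sketched in a few lines. This is a toy illustration (the function names and the fixed worker count of 4 are my own), not a real HPC code:

```python
# Toy contrast: an embarrassingly parallel workload scales with cores,
# a serial dependency chain does not, no matter how many cores you buy.
from multiprocessing import Pool

def independent_chunk(bounds):
    """Embarrassingly parallel: each chunk needs no data from the others."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def dependent_chain(n):
    """Pointer-chase-like: step i needs step i-1, so extra cores cannot help."""
    x = 0
    for i in range(n):
        x = (x * 31 + i) % 1_000_003
    return x

if __name__ == "__main__":
    n = 1_000_000
    # Split the independent work into 4 chunks and farm them out.
    chunks = [(k * n // 4, (k + 1) * n // 4) for k in range(4)]
    with Pool(4) as pool:
        parallel_total = sum(pool.map(independent_chunk, chunks))
    serial_total = sum(i * i for i in range(n))
    assert parallel_total == serial_total  # same answer, split across cores
    print(parallel_total)
    print(dependent_chain(n))
```

LINPACK looks like the first function, so adding cores adds Top 500 score; graph-like and sparse codes look like the second, so they barely move.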
The Monte Carlo AIXI approximation, however, is embarrassingly parallel: its rollouts are independent.
> Top 500 has zero -- repeat *zero* -- relevance to AGI. None. Zip. Nada. It benchmarks a code that is completely orthogonal to essentially all AGI workloads, and selects for systems that typical AGI workloads would scale horribly on. Top 500 is scaling because it increasingly excludes almost all useful workloads outside of an extremely narrow application space.
> You can buy machines that scale well for AGI workloads, but you will never see them in the Top 500 because they are not designed for that workload. In fact, for AGI workloads these machines will run rings around the Top 500 systems.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT