Hard takeoff date projections

From: Dani Eder (danielravennest@yahoo.com)
Date: Mon Jan 24 2005 - 13:57:27 MST


A previous writer said:
> The risk of some nut intentionally creating an AI
> that achieves hard takeoff and destroys the human
> race, during the next 10 years, is IMO **not**
> effectively zero.
>
> The chance of this happening during the next 50
> years, IMO, is *scarily high*.

And another said:
> We haven't the foggiest clue exactly what it will
> take to make a human level AI, let alone an AI
> capable of doing serious harm to the human race.

I agree with the first comment and disagree with the
second. We know to some extent how the human brain
functions, at least at the level of neurons and
synapses. A sufficiently accurate simulation of its
10^11 neurons and 10^14-10^15 synapses should
produce human-level intelligence by brute force.
Clever AI software design may require less than
this, but I claim it is an upper limit on what is
needed.

The computation required to simulate a neuron
sufficiently accurately is not known, but we can
put reasonable estimates on it. I use 1 synapse
firing = 1 bit, plus or minus a factor of 30, which
leads to a human equivalent of 3,000 TFlops (range
100-100,000 TFlops).
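For concreteness, the arithmetic behind that figure can be sketched as follows. The 30 Hz mean event rate and one flop per synapse event are my assumed reading of the "1 synapse firing = 1 bit" rule, not figures stated above:

```python
synapses = 1e14              # low end of the 10^14-10^15 synapse count
event_rate_hz = 30           # assumed mean synaptic firing rate
ops_per_event = 1            # assumed: one flop per synapse event
flops = synapses * event_rate_hz * ops_per_event   # 3e15 = 3,000 TFlops
error_band = 30              # the stated factor-of-30 uncertainty
low_tflops = flops / error_band / 1e12   # 100 TFlops
high_tflops = flops * error_band / 1e12  # 90,000 TFlops, i.e. ~100,000
print(flops / 1e12, low_tflops, high_tflops)
```

Other choices of firing rate and ops-per-event trade off against the synapse count within the same factor-of-30 band.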

I will take as a proxy for 'largest computer
available for AI research' the 500th computer listed
in the top500.org list of most powerful
supercomputers.

The trend has been for the #500 machine to grow at
93% per year in performance. A factor-of-30
uncertainty in required performance thus leads to
only about a 5-year uncertainty in date.
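The 5-year figure follows directly from compound growth; a quick check, assuming the 93%/year rate holds:

```python
import math

growth = 1.93                # #500 machine grows 93% per year
uncertainty_factor = 30      # factor-of-30 spread in required TFlops
years = math.log(uncertainty_factor) / math.log(growth)
print(round(years, 1))       # about 5.2 years
```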

3,000 TFlops for the #500 machine would occur
around 2017 at historical trend rates. To that I
would add 5 years for software development and AI
training, and apply the roughly 5-year uncertainty
above, so the 'danger zone' for superhuman AI
starts somewhere in 2017-2027.
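Projecting the crossover date works the same way. The ~850 GFlops figure for the current #500 entry is my assumed late-2004 baseline, not a number stated above:

```python
import math

base_year = 2004.9           # roughly the Nov 2004 Top500 list
base_tflops = 0.85           # assumed #500 entry performance then
target_tflops = 3000         # human-equivalent estimate from above
growth = 1.93                # 93%/year historical trend

years_to_target = math.log(target_tflops / base_tflops) / math.log(growth)
crossover = base_year + years_to_target   # lands around 2017
danger_start = crossover + 5              # + software/training lag
print(round(crossover), round(danger_start))
```

Applying the roughly 5-year uncertainty band around that 2022 start date spans 2017-2027, matching the window above.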

In response to the first comment: SETI@home
currently runs at 65 TFlops on a distributed
network, which is only barely below my low-end
estimate of 100 TFlops, so I concur that the risk
of a runaway intelligence on a distributed network
is non-zero (whether malicious or well-meaning).
The risk from a top-ranking supercomputer is lower
in my opinion. The #1 machine clocks 70 TFlops,
but the top-ranking machines are operated in a
much more controlled environment.

If I were asked what will seal our doom, I would
say it's the PlayStation 3. It will contain a
'Cell' processor jointly developed by IBM, Sony,
and Toshiba. It is designed to be highly parallel,
and it will be produced in mass quantities, which
will make it cheap. Thus it will be well suited
to MPP-type supercomputers.

Daniel




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT