Re: AI timeframes

From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Wed Apr 07 2004 - 17:49:02 MDT


--- Dani Eder <danielravennest@yahoo.com> wrote:
>
> > > I'd be surprised if AGI is developed in less than
> > > 30 years. I'm looking at a 50 year time-frame
> > > actually.
>
> Ben G. said:
>
> > In the end she agreed that 8-10 years was plausible
> > and 15 years was fairly likely -- assuming the
> > focused team of 5-7 people really has total focus
> > on the project, and assuming hardware continues to
> > advance as projected, making the running of multiple
>
> It's the 40th anniversary of IBM's introduction of
> the first general purpose mainframe family, the 360
> series. It's worth considering what made such
> mainframes so popular and predicting how AI may
> follow a similar path.
>
> In the first half of the 20th century, a computer
> was generally a young lady who operated a mechanical
> calculator. There were legions of them working in
> industries like banking and insurance that required
> a lot of arithmetic to be done. There were also
> large numbers of them working in payroll and
> accounting departments of most companies. Prior
> to the 360 series, electronic computers were
> generally special purpose. The 360 series had
> a consistent machine code and OS across a range
> of models, and the machines were capable of running
> a wide range of programs. They were expensive,
> but less expensive than the large number of human
> computers they replaced.
>
> Similarly, there are software systems that have
> been developed to do specific tasks, such as
> circuit layout or flagging unusual credit card
> transactions. These are hand crafted for the
> specific task. Software systems that are trainable
> for a wide variety of tasks are not yet available.
>
> Once hardware is available that is adequate for
> the job at a competitive cost, there will be a
> strong incentive for the software to be developed.
> For example, a voice recognition call center
> package that can be trained to answer
> product-specific questions, much as the humans
> who do the job are trained, might sell for a
> lot of money if it replaces dozens or hundreds
> of humans.
>
> The more generally trainable a system is, the
> more potential customers there are, and thus the
> more that can be invested in developing it.
>
> I estimate sufficiently powerful hardware needs
> to cost in the range of $30K down to $30 per
> teraflop, a factor of 5 to 5,000 less than at
> present, or 4 to ~20 years of continued cost
> reductions for computers at recent rates, plus
> 5-10 years for development of 'trainable
> software' of increasing generality once the
> financial incentive exists.
>
> Thus I project a range of 10-30 years in total from
> now.
>
> Daniel
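
Daniel's arithmetic roughly checks out if you assume
price/performance for computing keeps halving every
1.5-2 years (my assumption for illustration; he doesn't
state a doubling time). A quick sketch:

    import math

    # Years for cost per teraflop to fall by a given factor,
    # assuming it halves every `halving_years` years.
    # The 1.5-2 year halving time is an assumption, not Daniel's figure.
    def years_to_reach(cost_factor, halving_years):
        return math.log2(cost_factor) * halving_years

    for factor in (5, 5000):    # his "factor of 5 to 5,000 less than at present"
        for halving in (1.5, 2.0):
            print(f"{factor:>5}x cheaper at {halving}-yr halving: "
                  f"{years_to_reach(factor, halving):.1f} years")

    # Prints roughly 3.5-4.6 years for a 5x drop and 18-25 years
    # for a 5,000x drop, which brackets his "4 to ~20 years".

Stack his 5-10 years of trainable-software development
on top and you get about the 10-30 year total he quotes.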

Maybe, but I project MASSIVE white-collar unemployment
and social disruption along the way, because we are
still operating under the old economic model. If a
factory owner can lay off N employees while still
producing and selling the same quantity of product, do
the ex-employees get to share the extra profit? Of
course not. We are already seeing the result in the
'jobless recovery', which is being blamed partly on
overseas outsourcing and partly on increased
productivity.

At some point an economic crisis will result from too
few people being able to earn money to buy products
(even bare necessities like food, which takes precious
energy to produce and precious insects to pollinate).
By then we can expect computers and other tech products
to be dirt cheap, the way digital watches and $1 pocket
calculators are now. I would not be surprised to see
people starve to death surrounded by supercomputers. I
wonder whether this economic disruption will interfere
with AGI development.
Tom



