RE: AI hardware was 'Singularity Realism'

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Mar 06 2004 - 15:58:55 MST


Hi,

> >Keith, I am curious what funding sources you're thinking of.
>
> I know at least one computer savvy person who could write a check that
> large.

Indeed, the individual investor/philanthropist is the most likely source
of funding for pure AGI R&D right now, given the conservative nature of
the research establishment.

> >The major science and technology funding bodies in the US are very
> >tightly tied into the "narrow AI" research programme and very
> >skeptical of radical approaches to AGI. This is in spite of the fact
> >that they've spent hundreds of millions of dollars on narrow AI
> >programs (Cyc being the flagship example ;-) without obtaining
> >dramatic returns either scientifically or economically.
>
> That's not surprising considering how much computational power biology
> lavishes on the problem. Have you ever looked up the MIPS rating of a
> retina?

Well, comparing contemporary computer hardware with biological systems
is definitely tougher than the proverbial comparison between apples and
oranges! The two are good at different sorts of things. Simulating
human neural processes on digital computers would definitely require MUCH
more computing power than is currently affordable for an R&D project.
On the other hand, this observation does NOT rule out AGI architectures
specifically designed to exploit the strengths and minimize the
weaknesses of contemporary computers & networks.
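
To make the scale concrete, here is a quick back-of-envelope sketch in
Python. The retina and whole-brain figures are Moravec's published
estimates; the workstation figure is just an assumed round number for a
current high-end PC:

# Back-of-envelope comparison of Moravec's brain estimate against
# circa-2004 hardware.  The retina and brain MIPS values are his
# published estimates; the workstation value is an assumption.
RETINA_MIPS = 1e3          # Moravec: retina ~ 1,000 MIPS equivalent
BRAIN_MIPS = 1e8           # Moravec: whole brain ~ 100 million MIPS
WORKSTATION_MIPS = 1e4     # assumed: one high-end PC today

gap = BRAIN_MIPS / WORKSTATION_MIPS
print("Machines needed for a brain-scale simulation: ~%d" % gap)
# -> ~10,000 machines, which is why straight brain simulation is
#    out of reach for a modest R&D budget.

The point is not the exact numbers, but the order of magnitude of the
gap you face once you insist on imitating biology.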

> >So, if by a "strong case" you mean a case that will convince AGI
> >skeptics such as the folks at the National Science Foundation -- I
> >guess this essentially means "something like young-child-level human
> >intelligence has been achieved and what remains is teaching of the
> >system and refinement of the algorithms."
>
> I would not need that level, but you need to sell me that you have a
> viable approach before I stick my neck out and try to convince people
> with money.

Yes, of course that makes sense.
 
> We might consider spending an hour or so in chat. If nothing else, you
> could salvage the chat log for written presentation material.

Sure, that would probably be worthwhile. If you'd like to set up a time
to chat, email me at ben@goertzel.org. Unfortunately, I lost your email
address in a recent hard drive crash...

> I happen to be a bit skeptical that the hardware is up to the task
> based on arguments by Hans Moravec, Ray Kurzweil and others. In the
> long run this is not a problem since hardware equal to the task is
> less than a human generation away. If you have a radical approach
> that would allow cockroach-level hardware to generate superhuman AI
> level performance, I would sure like to know what it is.

As I said above, hardware is DEFINITELY a problem if you're trying to
simulate the brain -- rather than achieve qualitatively "human-level"
(and then beyond) intelligence in a manner fundamentally tailored to the
hardware at hand.
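
For what it's worth, the "less than a human generation" figure falls
right out of doubling-time arithmetic. A quick sketch, using Moravec's
brain estimate again, an assumed ~10,000 MIPS for affordable hardware
today, and an assumed 18-month doubling time:

import math

BRAIN_MIPS = 1e8            # Moravec's whole-brain estimate
AFFORDABLE_MIPS_2004 = 1e4  # assumed: affordable hardware today
DOUBLING_TIME_YEARS = 1.5   # assumed: Moore's-law doubling time

doublings = math.log2(BRAIN_MIPS / AFFORDABLE_MIPS_2004)
years = doublings * DOUBLING_TIME_YEARS
print("~%.0f doublings, roughly %.0f years" % (doublings, years))
# -> about 13 doublings, roughly 20 years -- within a generation.

Shift either assumption by an order of magnitude and the answer moves
by only five years or so, which is why the long-run conclusion is not
very sensitive to the exact figures.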

Also, please note that I am not aiming to imitate human sensorimotor
capability. Sensorimotor capability is definitely critical, but
emulation of human strengths and weaknesses in this domain is not
important if your goal is AGI.

Kurzweil and Moravec tend to focus on uploading and human brain
simulation, rather than on AGI that takes non-human-imitative
approaches. Kurzweil is skeptical that the cognitive science problems
involved in structuring AGI systems can be solved in any way other than
studying and imitating the human brain. I don't agree with him on this
point, though I very much respect his thinking generally.

-- Ben G


