Re: guaranteeing friendliness

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Dec 03 2005 - 19:19:01 MST


Hi,

> These organizations frequently buy up such experts and
> such research decades in advance.

Not really -- they generally buy up the experts after the big advances
have already happened. I agree that there are rare exceptions, but AI
is not one of them. You would see it if this were happening -- smart AI
PhDs from top schools would be "disappearing" into these companies and
organizations, which is not happening at all. Rather, AI is a weak
job market, both inside these organizations and outside them.

> > Since academic/corporate AI these days is so strongly opposed to AGI
> > research, my guess is that *if* there is an AGI breakthrough in the
> > next 10-15 years,
>
> Interesting: I didn't know that.
>
> Can you offer some references or documentation for this
> assertion? (I ask not in order to stifle debate, as some
> shout "cite" whenever they disagree, but rather because
> I am truly interested and curious about this phenomenon.)

Well, you could look at Minsky's chapter in this book

http://llt.msu.edu/vol1num1/reviews/hal.html

or (when it finally comes out in a few months) my forthcoming edited
volume on AGI,

http://www.springer.com/sgw/cda/frontpage/0,,4-147-22-43950079-0,00.html

> But, be warned that one possibility for this is that
> 'someone' (like the NSA) is actively discouraging such
> work and at the same time is offering strong incentives
> for those who can and do pursue it to be "one of the
> family" and keep such research under security veil.

I know enough about both the AI field and the intel community to know
that this is very unlikely to be true; but I realize there is no way
to convince you that it isn't true, if you choose to maintain this
belief.

>
> Al Qaeda? Or some rogue nation?

No chance -- those groups and nations have very little advanced
technological capability.

> It is unlikely that a "maverick breakthrough" will have much
> effect until hardware increases by several orders of magnitude.

And what evidence do you have for THIS assertion? As someone actively
involved in AGI design, I am not nearly so sure of this as you
are....

> And, those with the hardware will have the first chance
> to actually try any such breakthroughs....

You're making an unjustified assumption about the hardware
requirements of a successful AGI design.

Based on my current estimates (which draw on the capabilities of
my own Novamente AI system when run on a handful of machines), the
minimum hardware requirement for an AGI is not *less* than a few
hundred commodity PCs ... but I am not sure it is more than this.
And a few hundred PCs are not all that expensive to network together.

-- Ben


