RE: guaranteeing friendliness

From: Herb Martin (HerbM@LearnQuick.Com)
Date: Sat Dec 03 2005 - 17:02:17 MST


From: Ben Goertzel
> I'm not sure what reasoning you base this on.
>
> IMO, seed AI is only likely to be created by one of these
> organizations only if the major insights required for creating seed AI
> are made by academic researchers, who then publish their results,
> which are then taken up and exploited by large organizations such as
> the ones you mentioned.

These organizations frequently buy up such experts and
such research decades in advance.

Look into the history of DES (not the problems with it,
but how it was adopted, who really designed it, and how
the theoretical work behind it was decades ahead of later
public research on the algorithms used).

Also note that such organizations are the only ones
likely to have the requisite hardware first.

So, while the following is also frequently true:

> Large organizations like these are famously poor at radical
> innovation, but are well poised to make large-scale implementation of
> already-published ideas (or small variations on such).

...any theoretical ideas from the outside (e.g., academia)
are likely to find their first practical hardware applications
within such organizations where "money is no object."

And do notice that the government is able to co-opt
small groups on occasion to derive these benefits;
as are IBM, Microsoft, and many others who also understand
this effect and who must also remain competitive over
time.

> Since academic/corporate AI these days is so strongly opposed to AGI
> research, my guess is that *if* there is an AGI breakthrough in the
> next 10-15 years,

Interesting: I didn't know that.

Can you offer some references or documentation for this
assertion? (I ask not in order to stifle debate, as some
do who shout "cite" whenever they disagree, but rather
because I am truly interested in and curious about this
phenomenon.)

But be warned that one possibility here is that
'someone' (like the NSA) is actively discouraging such
work while at the same time offering strong incentives
for those who can and do pursue it to become "one of the
family" and keep such research under a security veil.

Again, there is significant precedent for this in
cryptography research.

[Mere supposition on my part, but it is plausible, and
this is part of my reason for wishing to investigate
your assertion -- perhaps there are clues in the way
such opposition manifests.]

> ...it will come from outside the academic/corporate
> mainstream -- perhaps by some small startup company, or else an
> independent researcher or a non-government-funded academic at a
> low-prestige university.

Al Qaeda? Or some rogue nation?

Maybe someone relatively neutral like the Japanese government....

> On the other hand, if there is no maverick breakthrough like this,
> then eventually the academic mainstream will come around, and in 20-40
> years powerful AGI results will be published by academics and picked
> up on by large institutions, and your prediction will come true...

It is unlikely that a "maverick breakthrough" will have much
effect until hardware increases by several orders of magnitude.

And, those with the hardware will have the first chance
to actually try any such breakthroughs....

--
Herb Martin

