Re: Hiding AI research from Bad People was Re: OpenCog Concerns

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Mon Mar 24 2008 - 10:36:48 MDT


On Mar 24, 2008, at 5:17 AM, William Pearson wrote:
> On 24/03/2008, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>> Why would they do it secretly?
>
> If someone is manifestly on the right track to AI, I can see the
> military mind treating it the same way as nuclear technology, keeping
> it as secret as possible to gain an edge and avoid its use by
> terrorists/unfriendly states. That might mean appropriating it, then
> quietly quashing the research trying to make it appear as another
> failed attempt in the litany of AI.

This is essentially circular reasoning. DARPA et al. have shown no
capacity whatsoever to discriminate between research that is
"manifestly on the right track to AI" and the thousands of dead ends
out there. To put it another way, if they *were* capable of making
meaningful discriminations, they would already know how to build AI
and would not need your work.

In short, it will not be manifestly obvious that you are on the right
track until you unambiguously produce AI.

J. Andrew Rogers



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT