Re: Hiding AI research from Bad People was Re: OpenCog Concerns

From: William Pearson (wil.pearson@gmail.com)
Date: Mon Mar 24 2008 - 06:17:11 MDT


On 24/03/2008, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
> On Mar 23, 2008, at 4:26 PM, William Pearson wrote:
> > Sorry to put a downer on this idea, but it smacks of naivety somewhat.
> > The military of any country (and DARPA is not the only one that you
> > might be worried about) are not going to respect copyright if it harms
> > their perceived defensive capability. These are the people that have
> > secrets upon secrets. They will just use your code, secretly.
>
>
>
> Why would they do it secretly?

If someone is manifestly on the right track to AI, I can see the
military mind treating it the same way as nuclear technology: keeping
it as secret as possible to gain an edge and to keep it out of the
hands of terrorists or unfriendly states. That might mean
appropriating it, then quietly quashing the public research and making
it appear to be just another failed attempt in the long litany of AI.

> > Far and fast is my argument, so that any one person's mistake is
> > unlikely to stomp on the rest of the people. Democracy and the markets
> > seem to be the least bad types of power structures we have made.
>
>
>
> Do you understand the failure modes, as it would appear from your
> perspective, of democracy and markets when you inject gross
> disparities of intelligence?
>

I'm not sure of the exact failure modes you are referring to
(Economics 2.0, of Accelerando origin?). As I see it, there are no
likely* pure success scenarios for when humanity comes to understand
intelligence. So the least bad outcome is one in which as much of
humanity as possible leaves some imprint of its existence on the
future.

Humanity losing "control" and ceasing to be top dog is pretty much a
given. But perhaps we can do as well as the bacteria and be 90% of the
individuals that make up the higher-order life form that will be
created. Most likely not in our current bodies, unless non-biological
nanotech turns out to be nigh impossible.

  Will Pearson

* I don't consider Friendly AI at all likely (theoretically or
practically), so I have to consider contingencies.
