RE: Military Friendly AI

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 30 2002 - 11:41:34 MDT


> 6 months for me was like the bare minimum needed to allow everyone to
> pick through your plans. Actually I'd prefer it be a longer period than
> that. What does your common sense tell you? Do you have a time period in
> mind?

Months, not days or years...

> Clearly, coming up with better ways to measure these risks and make
> decisions upon the measurements is something we all still need to work on.
>
> I think if I were presented with such a figure, I would also want to
> compare it to a bunch of humans that had been similarly tested. If the
> figure for the AI was significantly lower than the humans', then I
> would argue pulling the plug makes no sense. If the AI's figure was
> higher than any of the humans', then I would either pull the plug or,
> if possible, work on revising the AI to lower the risks further.

The tough thing is going to be the lack of rigorous quantitative
estimates of the kind your example presumes...
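
Even if we did have such figures, the decision rule itself would be the
easy part. Here's a minimal sketch in Python of the comparison you
describe; every name, number, and threshold in it is invented purely
for illustration:

    # Hypothetical sketch of the "compare the AI's figure against
    # similarly tested humans" rule described above. All names, numbers,
    # and thresholds here are invented for illustration only.

    def plug_decision(ai_risk, human_risks, margin=0.5):
        """ai_risk and human_risks are 'goes rogue' probability
        estimates in [0, 1]; margin says how far below the human
        average the AI must fall to count as significantly less
        risky."""
        avg_human = sum(human_risks) / len(human_risks)
        if ai_risk < avg_human * margin:
            return "keep running"         # clearly less risky than the humans
        if ai_risk > max(human_risks):
            return "pull plug or revise"  # riskier than any tested human
        return "keep testing"             # ambiguous middle ground

    print(plug_decision(0.0001, [0.001, 0.002, 0.0005]))
    # -> "keep running"

The code is trivial; producing ai_risk and human_risks is the part
nobody knows how to do.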

> I think SIAI is trustable on this, since we have been the leaders on
> these issues: on stressing risk reduction, and on publicly publishing
> and discussing our plans.

Of course you think you and your team are trustable. I think my team and I
are trustable too!

This is not surprising...

> > Because a diverse committee of transhumanist-minded individuals would
> > be incredibly unlikely to say "The .01% chance we have calculated that
> > your AI will go rogue at some point in the far future is too much in
> > our opinion. Pull the plug." This statement bespeaks a lack of
> > appreciation of the possibility that the human race will destroy
> > itself *on any given day* via nuclear or biological warfare, etc. It
> > is not at all the kind of statement one would expect from a set of
> > Singularity wizards, now is it?
>
> Right, we need some way to compare between them all.

And I don't see any way to make reasonably solid probability estimates
about *any* of these risks... the risk of the human race bombing itself
to oblivion OR the risk of a certain apparently friendly AI going
rogue...
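
Just to make the "*on any given day*" point concrete: even a tiny
per-day probability of global catastrophe compounds into something that
dwarfs a one-time .01% figure. The daily number below is invented, of
course... which is exactly the problem:

    # Illustrative arithmetic only; the per-day probability is invented.
    p_daily = 1e-6                      # assumed daily chance of nuclear/bio catastrophe
    years = 50
    p_cumulative = 1 - (1 - p_daily) ** (365 * years)
    print(p_cumulative)                 # ~0.018, i.e. about 1.8%, vs. the one-time .01%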

ben


