RE: Military Friendly AI

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 30 2002 - 12:44:12 MDT


>
> Ben Goertzel wrote:
> >
> > And I don't see any way to make reasonably solid probability
> > estimates about
> > *any* of these risks... the risk of the human race bombing itself to
> > oblivion OR the risk of a certain apparently friendly AI going rogue...
>
> Sigh. Well, I just want to point out for the record that although it
> certainly *sounds* very reasonable, wise, and openminded to say that
> your estimates are just intuitions and they could be wrong and there's
> probably no way to get a good picture in advance, if you *go ahead and
> do it anyway*, it's not really any less arrogant, is it?

I think the picture will become clearer as the Singularity becomes nearer...
but certainly not clear enough for *anyone*'s total comfort...

My concern is not to "sound reasonable, wise and openminded"; I'm just
calling the situation as I see it. Others can decide how they think I
sound, and opinions do vary!

Intentionally NOT launching a Singularity is a huge decision too, as you
know, because the non-Singularity future of humanity is far from safe,
and because there's always the possibility of someone else launching a
much worse Singularity....

There is no safe way out from the current position of humanity, in my view.
Not pursuing AGI or the Singularity because of the uncertainty would also
be a huge decision, at least for someone who thinks they know (mostly!)
how to make an AGI and launch the Singularity.... Inaction can obviously
have just as huge good or bad consequences as action.

-- ben
