RE: Military Friendly AI

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jun 27 2002 - 13:45:48 MDT


> It genuinely strikes me as very strange that anyone would try to fix the
> subjective morality problem by taking 10 nodes with subjective moralities
> and letting them work it out using a human political protocol.
> If that was
> all it took...

The idea is not to "fix" the subjective morality problem in any conclusive
or absolute way, just to mitigate it.

It does not seem at all strange to me to rely partially on the advice of an
appropriate group of others when making an important decision. It seems
unwise to me *not* to.

> > Of course, this group will *still* not have an objective morality --
> > there is no true objectivity in the universe -- but it would have a
> > broader and less biased view than me or any other individual.
>
> That is not even close to being good enough.

It is going to have to be... Other than humans or groups of humans, who
else is going to decide when it is appropriate to allow the Singularity to
develop? Who else is there but

a) chance

b) the AI itself?

I love my dogs, but I'm not sure they're going to be able to fully
understand the issues...

-- Ben G
