Re: Singularitarian Principles

From: BillK (pharos@gmail.com)
Date: Fri Mar 23 2007 - 08:22:12 MDT


On 3/21/07, Gordon Worley wrote:
>
> For those who support the creation of Friendly AI, I don't think
> extreme means can ever be justified. The whole point of Friendly AI
> is that humans don't really know what is best, and even if we are
> extremely confident that we do, we still want to be sure that, even
> if the "bad guys" get the code, the AI will eventually turn out
> good. Then we'll be able to say "I don't know; let's ask the
> Friendly AI".
>

New Scientist is reporting on a study:
"Impaired emotional processing affects moral judgements"
<http://www.newscientist.com/article/dn11433>

Now, revealing new research shows that people with damage to a key
emotion-processing region of the brain also make moral decisions based
on the greater good of the community, unclouded by concerns over
harming an individual.

These results suggest that emotions play a crucial role in moral
decisions involving personal contact – but not in moral judgments
involving distant, indirect impacts on other people. "What's beautiful
to me is how subtly different the situations are," says Marc Hauser at
Harvard University in Cambridge, Massachusetts, US, one of the
researchers involved.

"Emotions are an anchor for our moral systems. If you remove that
anchor you can end up anywhere," says de Waal.

------------------------

This seems to indicate that an FAI must be designed to be 'emotional'.
Otherwise it will make decisions that humans regard as 'ruthless',
'morally unacceptable', and so on. You may argue that this would be a
'good' thing, but most humans would probably disagree with you.

BillK
