From: Mikko J Rauhala (email@example.com)
Date: Fri Mar 23 2007 - 10:22:51 MDT
On pe, 2007-03-23 at 14:22 +0000, BillK wrote:
> This seems to indicate that FAI must be designed to be 'emotional'.
> Otherwise it will make decisions that humans regard as 'ruthless',
> 'morally unacceptable', etc. You may argue that this would be a
> 'good' thing, but probably most humans would disagree with you.
Indeed, most humans would much prefer decisions that benefit themselves and
their inner circle the most, never mind the "others". That is what the
emotional-level "moral" process optimizes for.
I'd much rather have a relentlessly, indeed even ruthlessly ethical AI
than a self-servingly "ethical" one, even if wannabe self-servers
disagree. They would be rather stupid to disagree, by the way, since
they're highly unlikely to be in the inner circle and very likely to be
among the "others", if the AI does emulate human emotionality in making
its decisions.
PS: Alexei, please don't test here; you're likely very close to being
kicked out.
--
Mikko Rauhala - firstname.lastname@example.org - <URL: http://www.iki.fi/mjr/ >
Transhumanist - WTA member - <URL: http://transhumanism.org/ >
Singularitarian - SIAI supporter - <URL: http://intelligence.org/ >
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT