Re: ethics

From: Keith Henson (hkhenson@rogers.com)
Date: Fri May 21 2004 - 06:02:42 MDT


At 10:24 PM 20/05/04 -0700, Michael wrote:
>Thomas Buckner,
>
>You wrote:
> >
> > We have a classic blocker problem hanging with
> > human-level intelligence, and if we can't solve it at
> > human-level, we may not have enough to go on for
> > anything beyond.
> >
>
>Human unfriendliness could probably be considered a blocker problem, as
>defined by Eliezer. Until we understood it, we would be held up. However,
>we understand human unfriendliness quite well - there are reams of research
>on why people seek power, abuse power and behave in unfriendly ways.

I would appreciate a few pointers to this research. I have seen nothing
recently to indicate that people understand what lies behind Zimbardo's
and Milgram's results, or even what was involved in the Patty Hearst and
Elizabeth Smart cases. But I certainly could have missed it.

>Therefore, because we *do* understand why it happens, we can quite simply
>NOT program selfish-gene-promotion goals into FAI. We do not have to
>solve the problem in humans to avoid the problem in a newly created being.

It is not obvious to me that all human traits should be left out of an
FAI. But in any case, you want to think long and hard about what goals you
do work into an AI.

Keith Henson

>Michael Roy Ames


