RE: What is morally/ethically Friendly?

From: Gary Miller (garymiller@starband.net)
Date: Sun Nov 24 2002 - 16:56:06 MST


Wouldn't it be safer to let a panel of reasonably moral-minded folks
hammer out a list of some obvious and not-so-obvious moral decisions,
and let the AI learn from them by case-based reasoning? Any decision
made from these examples could then be tied back to the set of moral
tenets it learned from.

Upon catching any immoral or ethically questionable decision, it would
be possible to enhance the existing moral case set.
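
For concreteness, here is a minimal Python sketch of that scheme. It is
purely illustrative: the class names, the hand-coded feature vectors,
and the nearest-neighbor retrieval are all my assumptions, not any real
system's design.

from dataclasses import dataclass, field

@dataclass
class MoralCase:
    features: tuple   # hand-coded description of the situation
    verdict: str      # the panel's decision for this case
    tenet: str        # the moral tenet the decision ties back to

@dataclass
class CaseBasedReasoner:
    cases: list = field(default_factory=list)

    def decide(self, situation: tuple) -> MoralCase:
        # Return the panel-approved precedent nearest to the new
        # situation, so every decision traces back to a known case.
        return min(
            self.cases,
            key=lambda c: sum((a - b) ** 2
                              for a, b in zip(c.features, situation)),
        )

    def add_correction(self, case: MoralCase) -> None:
        # When a questionable decision is caught, grow the case set.
        self.cases.append(case)

# Example: two panel-approved cases, then a novel situation.
reasoner = CaseBasedReasoner([
    MoralCase((1.0, 0.0), "impermissible", "do not deceive"),
    MoralCase((0.0, 1.0), "permissible", "honesty given consent"),
])
precedent = reasoner.decide((0.9, 0.2))
print(precedent.verdict, "-- via tenet:", precedent.tenet)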

Humans often convince themselves to do morally questionable things by
rationalizing the action: minimizing the downside and maximizing the
potential good of the upside result. Knowing how and when
rationalization is appropriate seems to be the most difficult issue and
the biggest potential cause of trouble.

The danger in letting the computer learn ethics by itself is that it
may, at least temporarily, favor self-interest over altruistic action.

Of course, this would also let a less ethically minded organization
create a bot with a questionable moral compass, but that has the
potential to happen anyway! In a limited sense, it is already being
done.

There are large AI programs now trying to maximize their trading
strategies in the stock market, to the detriment of the individual
investor. They are not as infallible as they might be, however, because
positive and negative media attention plays such a large role in
determining short-term share value; but they can react much more
quickly to economic indicators and technical trend data than their
human counterparts, giving them at least a large theoretical advantage.
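
As a toy illustration of "reacting to technical trend data" (a sketch
of one common signal, not any actual trading program's logic), in
Python:

def crossover_signal(prices: list, short: int = 5, long: int = 20) -> str:
    # 'buy' when the short-term moving average crosses above the
    # long-term one, 'sell' when it crosses below, else 'hold'.
    if len(prices) < long + 1:
        return "hold"
    def sma(window, offset=0):
        end = len(prices) - offset
        return sum(prices[end - window:end]) / window
    prev = sma(short, 1) - sma(long, 1)   # yesterday's spread
    curr = sma(short) - sma(long)         # today's spread
    if prev <= 0 < curr:
        return "buy"
    if prev >= 0 > curr:
        return "sell"
    return "hold"

A real program would of course weigh many more indicators, but the
point stands: this reacts the instant the data updates, far faster than
a human reading the same chart.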

 

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Ben
Goertzel
Sent: Sunday, November 24, 2002 2:53 PM
To: sl4@sl4.org
Subject: RE: What is morally/ethically Friendly?

MRA wrote:
> > But I think you mean the statement a different way -- you, like
> > Michael Roy Ames, seem to believe that there is some True and
> > Universal Moral Standard, which an FAI will find....
> >
> > Well, maybe it will. I'm not confident either way....
> >
>
> Neither am I confident of this outcome, but it's worth a shot, don't
> you think?

Sure.... But I don't know of anything concrete to do right now toward
the goal of understanding whether or not there's a Universal Moral
Standard of some kind. Except for:

a) meditating on it
b) trying to build an AI that can make more progress on the issue than
me...

> And as to my belief (or lack of it): I have none. The definition of
> Rightness is just a definition. If it is useful, then great! If not,
> scratch it and try again.

I don't think there's a big problem with your wording of the definition
... but I think the concept does strain the bounds of human cognition &
knowledge...

> However, if
> there is a way to 'ask the universe the question' like: which of
> these 38 options is the most Right? Then, wouldn't that clear up a
> lot of guessing? (This question is asked only half-rhetorically)

I just asked the universe...

Unfortunately, I couldn't understand the answer it gave me ;-)

ben


