RE: Ethical theories

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Feb 01 2004 - 20:35:10 MST


Hi Keith,

Yes, I'm aware of the recent work on evolutionary psychology and ethics.

I think this work is VERY relevant to explaining why human ethics are the
way they are.

However, when thinking about the ethics of posthuman digital minds,
evolutionary ethics probably aren't so relevant, because these minds won't
be evolving by Darwinian natural selection.

As per Jef's comments, it probably makes sense to tie a "universal theory of
ethical systems" in with the process of "general evolution" as observed in
the cosmos at large -- more so than with the ethical systems that emerge
from Darwinian natural selection. However, even in this perspective,
Darwinian natural selection is interesting as an important special case of
general evolution.

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Keith
> Henson
> Sent: Sunday, February 01, 2004 8:40 PM
> To: sl4@sl4.org
> Subject: Re: Ethical theories
>
>
> At 02:59 PM 31/01/04 -0500, you wrote:
>
> >Hi,
> >
> >I've been thinking a lot about ethical theory, and I'm wondering
> if anyone
> >knows of an ethical theory that has the same kind of structure as Imre
> >Lakatos's theory of "research programmes" (in the philosophy of science).
> >
> >What I'm thinking of would be a theory of "ethical systems" rather than
> >ethical acts. It would agree that any one act may be ethical or not
> >depending on the ethical system within which you view the act.
> But it would
> >give some high-level criteria for judging ethical systems as wholes.
>
> At the risk of sounding like someone with a new hammer . . . . I
> would look
> into the basics of evolutionary psychology and biology to analyze
> ethics. That area has resolved a lot of questions to my satisfaction,
> though some of the answers are extremely unsettling--like the discovery
> that evolution has put no limit on irrational behavior in wars.
>
> http://cfpm.org/~majordom/memetics/2000/16474.html
>
> The genes which lead to our feelings about what are thought of as good
> ethics/morals were shaped by our long evolutionary history in
> small tribes
> where the tribe members were mostly relatives. Positive ethics
> and morals
> are actions that promote survival of your genes, particularly in
> Hamilton's sense of "inclusive fitness."
>
> Sometimes personal survival was "contraindicated," as it was for Leonidas,
> king of Sparta, who commanded the 300 Spartans and their Greek allies in
> one of the most important battles in human history: Thermopylae, where his
> small force held up over 100,000 Persians for three days while the other
> Greeks mustered the forces that eventually won at Salamis and
> Plataea. http://users.erols.com/nolan/heroes.htm Sometimes, as in the
> story of Horatius at the Bridge, they took chances and survived.
>
> You can see at once where the (shared) genes of the relatives of these
> heroes did better than they would have without their sacrifice.
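>
> (To make the inclusive-fitness point concrete: Hamilton's rule says an
> altruistic trait can be favored when r * b > c, where r is the relatedness
> to those who benefit, b the reproductive benefit to them, and c the cost
> to the actor. The little Python sketch below uses made-up numbers purely
> for illustration, not anything measured.)
>
> def hamilton_favors_sacrifice(relatedness, benefit, cost):
>     # Hamilton's rule: an altruistic act can spread if r * b > c.
>     return relatedness * benefit > cost
>
> # Hypothetical numbers: a hero's death costing 2 offspring-equivalents
> # that lets 6 offspring-equivalents' worth of kin survive.
> print(hamilton_favors_sacrifice(0.5, 6, 2))    # full siblings: True
> print(hamilton_favors_sacrifice(0.125, 6, 2))  # first cousins: False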
>
> Turning to less drastic situations, a major drive for humans is seeking
> status. We have genetically built brain reward systems that are switched
> on by attention following actions such as killing an animal large enough
> to feed the whole tribe. Status is the residue of many such episodes of
> attention.
>
> But status is more complicated than just being a good hunter. It
> indicates
> a person who can be trusted, who can be counted on to do what
> they say they
> will, who will not be the first to defect ("nice" in the sense defined in
> Axelrod's Evolution of Cooperation). Status and power are more likely to
> accrue to those willing to delay gratification.
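>
> (For concreteness, here is a minimal sketch of what "nice" means in
> Axelrod's iterated prisoner's dilemma tournaments: tit-for-tat cooperates
> on the first move and defects only in retaliation. The payoff numbers are
> the standard textbook ones, used purely as an illustration.)
>
> PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
>            ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
>
> def tit_for_tat(opponent_history):
>     # "Nice": never the first to defect, but retaliates immediately.
>     return 'C' if not opponent_history else opponent_history[-1]
>
> def always_defect(opponent_history):
>     return 'D'
>
> def play(strategy_a, strategy_b, rounds=10):
>     hist_a, hist_b, score_a, score_b = [], [], 0, 0
>     for _ in range(rounds):
>         move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
>         pay_a, pay_b = PAYOFFS[(move_a, move_b)]
>         score_a, score_b = score_a + pay_a, score_b + pay_b
>         hist_a.append(move_a)
>         hist_b.append(move_b)
>     return score_a, score_b
>
> print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
> print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates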
>
> It is easy to see why the reward circuits behind these traits evolved in
> cooperative social primates. As Henry Kissinger said, "Power is the
> ultimate aphrodisiac."
>
> Unfortunately, non-zero-sum cooperative situations--such as the one between
> cleaner fish and their hosts--can be exploited. There is a fish that looks
> like a cleaner and takes advantage of larger fish by biting a chunk out of
> them when they are
> expecting to be cleaned. Even though this has nothing at all to do with
> human ethics, humans react to it with disgust.
>
> So while I have not worked out a framework for ethical theories,
> this might
> provide a pointer to a possible solution.
>
> Keith Henson
>
> PS: As a complete coincidence, I was looking to talk to Ben today before I
> saw his postings here.
>
> PPS As a goal for friendly AI, seeking status (good opinion) in the eyes
> of humans and other AIs might be one part of the design.