Re: Ethical basics

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jan 23 2002 - 16:16:57 MST


Ben Goertzel wrote:
>
> > To be a Singularitarian you have to
> > have enough self-awareness to actually work *for the Singularity* and not
> > let your genes just use the Singularity meme as a path for their own
> > purposes. This is not a minor issue.
>
> Realistically, however, there's always going to be a mix of altruistic and
> individualistic motivations, in any one case -- yes, even yours...

Sorry, not mine. I make this statement fully understanding the size of
the claim. But if you believe you can provide a counterexample - any case
in, say, the last year, where I acted from a non-altruistic motivation -
then please demonstrate it.

I'll even make it easy for you by providing the narrowest possible
definition of altruism:

"Altruistic behavior: An act done without any intent for personal gain in
any form. Altruism requires that there is no want for material, physical,
spiritual, or egoistic gain."
        - http://barbaria.com/god/philosophy/zen/glossary.htm

> > The basis of the evolution of honorableness instincts and even altruistic
> > instincts is game theory, and specifically the iterated Prisoner's
> > Dilemma. Good books to read are Douglas Hofstadter's "Metamagical
> > Themas", which contains a discussion of the game theory of altruism in a
> > couple of the chapters (and the rest of the book is also a great deal of
> > fun); "The Moral Animal" by Robert Wright; "The Origins of Virtue" by Matt
> > Ridley.
>
> These are all very good books; however, I don't believe that game theory is a
> *complete* explanation of the origins of honorableness and altruism. I think
> it's just one important part of the puzzle...

It's the first piece of the puzzle. You start with a description of
fitness maximization in game theory; then shift to describing the
adaptation-executers produced by evolutionarily stable strategies (ESS);
then move from ESS in social organisms to the population-genetics
description of political adaptations in communicating
rational-rationalizing entities; and then describe the (co)evolution of
memes on top of the political adaptations. As far as I know, though, that
*is* the whole picture.
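To make the game-theory starting point concrete, here is a minimal sketch
of the iterated Prisoner's Dilemma in Python. The payoff values (T=5, R=3,
P=1, S=0) and the tit-for-tat versus always-defect matchup are the standard
textbook illustration, not anything specific to the books cited above:

    # Minimal iterated Prisoner's Dilemma sketch.
    # Standard payoff convention: T=5 > R=3 > P=1 > S=0.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(history):
        # Cooperate on the first move, then copy the opponent's last move.
        return 'C' if not history else history[-1]

    def always_defect(history):
        return 'D'

    def play(strategy_a, strategy_b, rounds=100):
        history_a, history_b = [], []   # each records the *opponent's* moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a)
            move_b = strategy_b(history_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            history_a.append(move_b)
            history_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300)
    print(play(always_defect, always_defect))  # (100, 100)
    print(play(tit_for_tat, always_defect))    # (99, 104)

Over repeated rounds, two reciprocators earn far more than two mutual
defectors, which is the selection pressure behind the evolution of
reciprocal-altruism instincts discussed above.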

Again, a counterexample is welcome. In this case, though, the above
description should be interpreted broadly - for example, constraints on
which adaptations can evolve (in a reasonable time period) would be
covered by the term "population genetics"; and the interface between
adaptations for moral reasoning and the cognitive architectures that
evolved for other purposes would be covered by the term
"rational-rationalizing entities".

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


