Re: ESSAY: Forward Moral Nihilism

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Tue May 16 2006 - 12:21:33 MDT


Our brains utilize chemical and physical phenomena that are substrate dependent. As long as your AI stays silicon, there are no ethical implications *before* you achieve seed-AI.
  

Martin Striz <mstriz@gmail.com> wrote:
  On 5/15/06, Woody Long wrote:
> I purposely avoided the issue of consciousness.

But it's crucial.

> Seems to me a FAI can be
> EITHER an ingenious classical syntactical simulation of intelligence, and
> so a non-conscious classical FAI, or a semantically understanding
> intelligence, and so a post-classical conscious FAI. Furthermore, since
> there can be no Alife without machine consciousness, a classical FAI can
> not be an Alife. However, a post-classical conscious FAI can be an Alife,
> if the other necessary ingredients of Alife are included. For these
> ingredients my system duplicates the human life theory of William James,
> father of modern psychology.

Let me be more specific. Suppose you instantiate every cognitive
process of your mind onto machine substrate, i.e. there's a one-to-one
mapping of every computation that your brain does at the most abstract
level, even if it is done differently. By what criterion does that
computational system, eWoody, not deserve the same rights that you do?

How about if you successfully upload your mind in the truest sense of
the word? Is there a difference?



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT