Ethical experimentation on AIs

From: Harry Chesley (chesley@acm.org)
Date: Sat Oct 20 2007 - 12:04:02 MDT


I expect this has been discussed to some degree before, so forgive me if
I'm dredging up old topics, but I was wondering if anyone had opinions...

First, as we experiment with AI technology, we're bound to get it wrong
often, to create malformed AIs of various sorts, and to destroy most of
them. If we created AIs by cloning a human intelligence, doing so would
very clearly be unethical, as it would result in "human" pain,
suffering, and death.

Second, pain, a self-preservation instinct, and consciousness appear to
be interrelated, and all consequences of evolution. (This is, of course, a
complex topic that could lead to much discussion, but I'm trying to skip
by it quickly in the interest of brevity.)

But, third, there is no reason to believe that consciousness et al. are
a necessary part of an intelligent system. Nor is there any reason to
believe that, unless we intentionally try to create them, they will
spontaneously occur in the systems we build. (This is a weak argument,
since we don't understand those elements well, but still...)

The question: Are we ethically in the clear to experiment on AIs under
the assumption that we won't accidentally create AIs that feel pain or
fear death?
