RE: nagging questions

From: Peter Voss (peter@optimal.org)
Date: Tue Sep 05 2000 - 10:32:12 MDT


Reasons for us to personally go all out to make SI happen (in spite of the
real dangers):
- The huge benefits that better AI will provide before the singularity (SI
may take much longer than we think, or there may be some upper theoretical
limit to intelligence that will dramatically slow the singularity)
- Eli's reason: it is our best hope to avert another technological
catastrophe that has *no* chance of a good outcome
- Some (all?) of us feel that, given whatever limited control we *may* have
over AI's development path, *we* - the good guys - would rather be right
there at the developmental edge, helping to guide the best possible outcome.
- Any other major reasons?

I want to expand a little on the issue of developmental path: Even if we are
right, and SI is an 'inevitable' result of the technology that we now have
(provided that we don't 'blow ourselves up' first), there may still be
several *developmental* options - some of which may include us, while others
may not. And specifically, initial design parameters may (chaotically)
affect what the SI does 'in its youth'. I'd like to hear any good arguments
against this possibility.

This brings me to the issue of machine ethics: What will an SI value? What
major goals will it have? I have not abandoned the hope that we might be
able to predict this to some degree. We may be able to predict its goals
(with some degree of certainty) during its early stages; that would help.
Note that I'm only suggesting that we may be smart enough to foresee its
major goals, not what it will do to achieve them.

The way I'm pursuing this idea is by developing a rational approach to
(prescriptive) ethics. If we can discover (perhaps with the help of early
AI) what moral values a more rational (trans-)human - one who can actually
reprogram his emotional evolutionary baggage - would choose, that might give
us clues to the values of an AI/SI. (I have a number of papers on this
subject at www.optimal.org.) Any comments?

Peter Voss

peter@optimal.org www.optimal.org
