From: Mike & Donna Deering (firstname.lastname@example.org)
Date: Mon Jul 08 2002 - 06:41:29 MDT
I've been worried about something new for a while now, something in addition to all the things I normally worry about. But I don't know whether I should be worried, because I don't know if what I am worried about is even possible. I would like to ask the people on this list who have a fairly well-developed AI design concept a question.
Would it be possible to change the design slightly to avoid volition, ego, and self-consciousness, while still maintaining the capabilities of complex problem solving, self-improvement, and superintelligence? Basically a tool-level SAI: superintelligence under the control of a human. I can't think of anyone, or any government, I would trust with that kind of power. Super intelligence without super ethics is a real problem.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT