Re: The dumb SAI and the Semiautomatic Singularity

From: Stephen Reed (reed@cyc.com)
Date: Mon Jul 08 2002 - 07:53:53 MDT


On Mon, 8 Jul 2002, Mike & Donna Deering wrote:

> Would it be possible to change the design slightly to avoid volition,
> ego, and self-consciousness, while still maintaining the capabilities of
> complex problem solving, self-improvement, and superintelligence?
> Basically a tool-level SAI: superintelligence under the control of
> a human. I can't think of anyone, or any government, I would trust with
> that kind of power. Superintelligence without superpower ethics is a
> real problem.

The recently announced DARPA BAA (Broad Agency Announcement) for Cognitive
Information Processing Technology explicitly calls for self-aware programs
with cognitive capabilities. If Cycorp submits a proposal and subsequently
receives an award, I believe we would not be complying with your request,
since the solicitation itself asks for self-awareness.

However, I intend to design (and pitch to Cycorp's architects) a causal
goal system incorporating friendliness features; in my opinion this
directly addresses your concern.
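
For list members unfamiliar with the term: in a causal goal system (in
the sense of Yudkowsky's "Creating Friendly AI"), every subgoal derives
its desirability from a parent goal, with the chain terminating in a
supergoal such as friendliness. The sketch below is a minimal Python
illustration of that admission rule only; it assumes nothing about Cyc
internals, and every name in it is hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Goal:
        name: str
        parent: Optional["Goal"] = None  # goal this one derives its desirability from

        def derives_from(self, supergoal: "Goal") -> bool:
            # A subgoal is admissible only if its chain of parent
            # goals terminates in the designated supergoal.
            node: Optional[Goal] = self
            while node is not None:
                if node is supergoal:
                    return True
                node = node.parent
            return False

    # Hypothetical goal hierarchy.
    friendliness = Goal("be friendly to humans")               # supergoal
    assist = Goal("assist the operator", parent=friendliness)
    improve = Goal("improve own problem solving", parent=assist)
    orphan = Goal("acquire resources")                         # no causal link upward

    def admit(goal: Goal, supergoal: Goal) -> bool:
        # The goal system refuses any goal not causally grounded
        # in the friendliness supergoal.
        return goal.derives_from(supergoal)

    print(admit(improve, friendliness))  # True  -> may be acted upon
    print(admit(orphan, friendliness))   # False -> rejected

The point of the design is that capability subgoals (self-improvement,
problem solving) remain permitted, but only insofar as they are causally
justified by the friendliness supergoal.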

-Steve

-- 
===========================================================
Stephen L. Reed                  phone:  512.342.4036
Cycorp, Suite 100                  fax:  512.342.4040
3721 Executive Center Drive      email:  reed@cyc.com
Austin, TX 78731                   web:  http://www.cyc.com
         download OpenCyc at http://www.opencyc.org
===========================================================

