Re: Guidelines on Friendly AI

From: Stephen Reed (
Date: Tue Sep 20 2005 - 07:25:48 MDT

As an extra-effort task at Cycorp over the last couple of years, I've been
writing a James Albus-inspired hierarchical control system that uses the Cyc
knowledge base as the World Model and as the repository for the behavior
scripts that implement each node. SIAI Friendship theory, in my opinion,
fits nicely into the goal/command hierarchy of the Albus "Reference Model
Architecture". Although I've sketched a framework for a robust
several-hundred-node system, I'm currently sidetracked building a
compositional scaffolding structure around the behavior scripts so that
Cyc can deductively answer questions about them, and can revise, diagnose,
and author them. This substantial (AI-hard) subtask was motivated by
management criticism that programmers should not have to learn yet another
one-off programming language.
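To make the architecture concrete, here is a minimal sketch (not Cycorp code; all names are hypothetical) of the shape of an Albus-style Reference Model Architecture node: each node shares a World Model (a plain dict standing in for the Cyc knowledge base) and decomposes a command from its superior into sub-commands dispatched to its subordinates.

```python
# Hypothetical sketch of an Albus RMA-style node hierarchy.
# The shared world_model dict stands in for the Cyc knowledge base;
# a real node would select and run a behavior script stored there.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class RMANode:
    name: str
    world_model: Dict[str, object]               # shared World Model
    subordinates: List["RMANode"] = field(default_factory=list)

    def execute(self, command: str) -> List[str]:
        """Decompose a command into one sub-command per subordinate
        and dispatch each; leaf nodes simply return an empty list."""
        sub_commands = [f"{command}/{sub.name}" for sub in self.subordinates]
        for sub, sc in zip(self.subordinates, sub_commands):
            sub.execute(sc)
        return sub_commands


# Illustrative three-node hierarchy sharing one World Model.
wm: Dict[str, object] = {}
root = RMANode("mission", wm,
               [RMANode("navigate", wm), RMANode("manipulate", wm)])
```

In a full system each `execute` call would consult the knowledge base for the behavior script attached to that node, which is exactly the part the compositional scaffolding above is meant to make inspectable by Cyc itself.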

So to respond directly to your point: I believe that a goal-based approach
to AI, in which an agent must determine the utility of its actions while
choosing among alternatives, is one well suited to implementing the SIAI
Guidelines on Friendly AI.
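The decision step being described can be sketched as follows (a hypothetical illustration, not from the Guidelines or from Cycorp code): each candidate behavior script carries an estimated utility, and the agent selects the alternative whose estimate is greatest.

```python
# Hypothetical sketch: utility-based choice among alternative
# behavior scripts. The utility estimates are stand-ins; in a
# Friendliness-aware system they would incorporate Friendliness terms.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BehaviorScript:
    name: str
    utility: Callable[[], float]  # estimated utility of executing this script


def choose_action(candidates: List[BehaviorScript]) -> BehaviorScript:
    """Select the alternative with the greatest estimated utility."""
    return max(candidates, key=lambda script: script.utility())


# Illustrative use: three alternatives with fixed utility estimates.
scripts = [
    BehaviorScript("retreat", lambda: 0.2),
    BehaviorScript("explore", lambda: 0.5),
    BehaviorScript("assist_operator", lambda: 0.9),
]
best = choose_action(scripts)
```

The point of the sketch is only that the goal hierarchy reduces, at each node, to a comparison of estimated utilities, which is where Friendliness constraints would enter.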

My statements are my own opinions and do not represent Cycorp or its
sponsors in any way.


On Tue, 20 Sep 2005, Michael Vassar wrote:

> I have been considering devoting a substantial amount of time to examining
> the question of how the SIAI Guidelines on Friendly AI could be implemented
> by GAI research projects other than that undertaken by SIAI. How feasible
> and how worthwhile do people here consider that task to be?

Stephen L. Reed                  phone:  512.342.4036
Cycorp, Suite 100                  fax:  512.342.4040
3721 Executive Center Drive      email:
Austin, TX 78731                   web:
         download OpenCyc at

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT