More on the abstract theory of super-smart safe AIs

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 23 2005 - 00:16:01 MST


I have written another brief essay,

http://www.realai.net/GoalGeneralization.pdf

This is a follow-up to my recently posted essay "Encouraging a Positive
Transcension via Incrementally Self-Improving, Theorem-Proving-Based AI."

The goal of this new essay is to solve three problems with the ITTSIM
approach presented in the prior essay:

1. It doesn’t guarantee any kind of “conceptual continuity” between one AI
and its successor in the chain

2. It doesn’t allow AIs any possibility of deviating from the path of safe
self-modification in order to deal with nasty existential risks, such as
aliens who want to destroy the universe

3. It doesn’t give AIs any positive purpose; it only tells them what not to
do

The new essay addresses all three of these issues.

-- Ben G


