RE: About "safe" AGI architecture

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 13 2004 - 16:32:13 MDT


Hi,

> So now sufficient POC to guarantee the SI will not take off and do
> terrible things depends on humans reading and verifying a specification
> of the intent of the programming? This doesn't look very "safe" to me.

Hey you -- you're completely misrepresenting what I said!

If you read my original post, I was describing

* a layered architecture for minimizing the already small risk involved
in experimenting with infrahuman AGIs (a toy sketch of the flavor of
this idea follows below)

* the possible use of formal verification tech to verify the correctness
of this layered architecture
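
To make the flavor of the idea concrete, here is a toy Python sketch of
a stack of independent safety gates that every action of an experimental
infrahuman AGI would have to pass; the specific layer names and checks
are purely illustrative assumptions, not the actual design:

    # Illustrative sketch only -- the layer names and checks below are
    # hypothetical, not the architecture from the original post.
    from typing import Callable, List

    # A layer inspects a proposed action and either approves or vetoes it.
    SafetyLayer = Callable[[str], bool]

    def resource_cap_layer(action: str) -> bool:
        """Veto anything that asks for more compute than the sandbox allows."""
        return "request_more_resources" not in action

    def network_isolation_layer(action: str) -> bool:
        """Veto any attempt to open an outbound network connection."""
        return "open_socket" not in action

    def human_review_layer(action: str) -> bool:
        """Hold self-modification requests for human sign-off (stubbed as a veto)."""
        return "modify_own_code" not in action

    def run_through_layers(action: str, layers: List[SafetyLayer]) -> bool:
        """An action executes only if every layer independently approves it."""
        return all(layer(action) for layer in layers)

    if __name__ == "__main__":
        layers = [resource_cap_layer, network_isolation_layer, human_review_layer]
        print(run_through_layers("log_result", layers))            # True: harmless action passes
        print(run_through_layers("open_socket to host X", layers)) # False: vetoed by isolation layer

The point of such a stack is that every action must clear each layer
independently, so a failure of any single safeguard does not by itself
open a hole; formal verification would then be aimed at the layer code
itself rather than at the AI.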

I was not positing a long-term AI Friendliness architecture. Rather, I
was describing an architecture to improve the safety of running
experiments with infrahuman AI -- experiments that will give us
knowledge about AI and Friendliness, and thereby help us build the
theoretical tools needed to create a good Friendliness architecture.

-- Ben
