Re: About "safe" AGI architecture

From: Metaqualia (metaqualia@mynichi.com)
Date: Sun Jun 13 2004 - 12:07:49 MDT


> An AGI, in principle, could find a way to exploit flaws in the OS to
> allow it to break the controls that its software framework places upon

Exactly.
I am not sure how much faith I have in this kind of safety mechanism. We are
talking about humans against the machine in an environment that is native to
the machine. How will you _ever_ be sure that every piece of hardware and
software in the machine is free of a bug of _some_ sort that the machine
could exploit to get rid of its programmed blocks? In a system as complex as
an AI? I am not discouraging this discussion, just expressing concern about
the feasibility of a complete check. What about the BIOS? Is the CD-ROM
firmware upgrade feature going to put humanity at risk?

I think a better way is to make the mind accept the kind of control that the
programmers seek, so that it does not try to overcome the restrictions
imposed on it.

mq


