RE: About "safe" AGI architecture

From: Yan King Yin (y.k.y@lycos.com)
Date: Sun Jun 13 2004 - 12:19:05 MDT


Hi

I want to add something on the security issue: I think
it is quite possible to design tools that perform as we
expect, within reasonable error margins, under
conditions of *proper usage*. A chain saw, a car, and
a high-voltage transformer are examples.

The problem comes when we take into consideration the
possibility of *misuse*. If you look at the tools we
use in real life, none of them is safe when misused.
So I think it is reasonable to separate these two
aspects.

I'm all for designing secure AIs in the first sense,
but I'm not sure the second kind of safety is even
achievable. It seems almost paradoxical to think of an
AI that actively prevents us from becoming unhappy and
yet does not take away our free will.

At least in the short term, I don't think we'll be
seeing this second kind of FAI. Perhaps we should
focus more energy on safety issues in the first sense.

YKY




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT