AGI Philosophy

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Wed Jul 27 2005 - 10:58:25 MDT


It would be nice to have an AGI which only offered suggestions of actions that a set of human participants could take to realize optimal scenarios, instead of the AGI being an active player in forcing ver utopia. Once such an AGI is achieved, it would be nice if the actions ve proposed excluded any further input or activity from any AGI-ish entity in effecting each discrete suggestion. We would seem to be a little safer from being steamrolled by the AGI in this regard; we humans could decide what we would specifically like to preserve, at the risk of sacrificing some degree of efficiency in the grand scheme of things. FAI needs to enact the "Grandfathering Principle" for it to be friendly towards us.




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT