From: Nick Tarleton (firstname.lastname@example.org)
Date: Thu Mar 13 2008 - 16:33:06 MDT
On Thu, Mar 13, 2008 at 5:42 PM, <email@example.com> wrote:
> Asking the question "What would be attractive to an AGI (or any other intelligent entity)?" yields the answers "Their own self-interest!" and "Fulfilling their goals!"
> Asking the question "What would be most repellent to an AGI (or any other intelligent entity)?" yields the answer "Having their goals interfered with!"
> Now we're at the point where I can argue that if we have a set of entities that can fulfill both the personal goal of self-interest AND the "other guy" goal of not interfering with the goals of others, then we have a stable Friendly system.
> So how do we collapse the two frequently conflicting goals into one uniform non-conflicting goal?
> How about "Don't interfere with the goals of others unless not doing so basically prevents you from fulfilling your goals (explicitly not including low-probability freak events, for you pedants out there)"
Using matter, energy, negentropy, and whatnot that another agent could
exploit for their goals constitutes interference. Property rights are
a human idea that very few possible minds share.
Also, what Robin said: more math.
This archive was generated by hypermail 2.1.5 : Thu May 23 2013 - 04:01:33 MDT