From: Mark Waser (firstname.lastname@example.org)
Date: Wed Mar 12 2008 - 18:22:04 MDT
> I'm noting that the objective of FAI research is to produce an AI
> which is reliably Friendly, not to make it so that every possible AGI
> must behave in a Friendly manner. It's hard to see how, eg., a
> paperclip tiler could be made Friendly.
I noted that this theory would *not* prevent a single-goal entity from paperclipping the universe if it is sufficiently powerful that it believes it can take on the *entire* universe. But I argued that this is a fantasy edge case that can easily be avoided, since it only includes systems that
a. have a sufficiently small number of goals and don't anticipate acquiring any/many more, AND
b. believe that they could take on the entire universe and win more easily than they could achieve their goal(s) with any assistance they could recruit by being Friendly.
This looks like a vanishingly small and easily avoidable set of cases to me.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT