From: Thomas McCabe (firstname.lastname@example.org)
Date: Wed Mar 12 2008 - 20:14:13 MDT
On Wed, Mar 12, 2008 at 8:22 PM, Mark Waser <email@example.com> wrote:
> > I'm noting that the objective of FAI research is to produce an AI
> > which is reliably Friendly, not to make it so that every possible AGI
> > must behave in a Friendly manner. It's hard to see how, e.g., a
> > paperclip tiler could be made Friendly.
> I noted that this theory would *not* prevent a single-goal entity,
> one powerful enough to believe it can take on the *entire* universe,
> from paperclipping the universe. But I argued that this is a fantasy
> edge case that can easily be avoided, since it only includes systems
> that have a sufficiently small number of goals, don't anticipate
> acquiring any/many more, AND
This describes the vast majority of systems. In general, there are
going to be many more simple systems than complex systems, because
each additional bit of complexity requires additional optimization
power to produce. This is the principle behind Solomonoff induction:
every extra bit of description length halves a hypothesis's prior
probability.
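The weighting I'm referring to can be sketched in a few lines of Python (this is my illustration, not anything from the paper; the function name and the example lengths are arbitrary). Under a Solomonoff-style universal prior, a hypothesis encoded as a program of L bits gets weight proportional to 2^-L, so each added bit of complexity halves its prior probability:

```python
def universal_prior_weight(program_length_bits: int) -> float:
    """Prior weight 2^-L assigned to a program of L bits
    under a Solomonoff-style universal prior."""
    return 2.0 ** -program_length_bits

# A hypothetical 10-bit goal system vs. a 30-bit one:
simple = universal_prior_weight(10)
complex_ = universal_prior_weight(30)

# The simpler system gets 2^20 (about a million) times the prior mass.
ratio = simple / complex_
print(ratio)  # 1048576.0
```

The point of the sketch is just the ratio: adding 20 bits of goal-system complexity costs a factor of about a million in prior probability, which is why simple goal systems dominate.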
> that believe that they could take on the entire universe and win more easily
> than they could achieve their goal(s) with any assistance they could recruit
> by being Friendly.
> This looks like a vanishingly small, easily avoidable number of cases to me.
Try reading http://yudkowsky.net/singularity.html to get an idea of
the potential power behind AGI. Note that this paper was originally
written in 1996.
--
- Tom
http://www.acceleratingfuture.com/tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT