From: Thomas McCabe (email@example.com)
Date: Wed Mar 12 2008 - 16:58:31 MDT
> Take one of Eliezer's examples of an AGI that loves
> smiley faces and ends up tiling the universe with
> smiley faces. How would any of this prevent that? It
> wouldn't. Acid test over, you lose.
I'm noting that the objective of FAI research is to produce an AI
which is reliably Friendly, not to ensure that every possible AGI
behaves in a Friendly manner. It's hard to see how, e.g., a
paperclip tiler could be made Friendly.
--
- Tom
http://www.acceleratingfuture.com/tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT