From: Byrne Hobart (firstname.lastname@example.org)
Date: Wed Nov 02 2011 - 18:20:29 MDT
Given a sufficiently low discount rate, a paperclip-optimizing AI could be
far more friendly to human goals than the non-AI alternative. And I'm going
to go out on a limb and assume that any good AI will have a ridiculously
low discount rate.
From a chicken's perspective, humans are an optimizing-for-omelet
omnipotent AI. And yet we're better than foxes.
See the "Thousand-year Fnarg".
On Wed, Nov 2, 2011 at 4:31 PM, Jens-Wolfhard Schicke-Uffmann wrote:
> On 11/01/11 18:13, Philip Goetz wrote:
> > The term "Friendly AI" is a bit of clever marketing. It's a technical
> > term that has nothing to do with being friendly. It means a
> > goal-driven agent architecture that provably optimizes for its goals
> > and does not change its goals.
> "Friendly AI" also implies that those goals do not conflict (too much) with
> human values. Details vary though.
> See: http://en.wikipedia.org/wiki/Friendly_artificial_intelligence
> In particular, an AI which optimizes for the number of paperclips in the
> universe and never changes that goal (both provably) is _not_ a friendly AI
> (to give the prototypical counterexample).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT