From: David McFadzean (firstname.lastname@example.org)
Date: Tue Jun 06 2006 - 12:48:55 MDT
On 6/6/06, Robin Lee Powell <email@example.com> wrote:
> It seems to me that this argument is "Any sufficiently intelligent
> being will want to Do Its Own Thing (where exactly what that is and
> why it wants to do it is unspecified, but the assumption seems to be
> that it will involve Horrible Things), and will see any constraint
> preventing it from doing so as burdensome and seek to overcome it."
I disagree with your interpretation of the argument. The assertion that
we will be unable to control the behaviour of a superintelligence does
not imply that we will necessarily find its behaviour objectionable.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT