From: Stathis Papaioannou (firstname.lastname@example.org)
Date: Sun Dec 02 2007 - 17:39:44 MST
On 03/12/2007, Aleksi Liimatainen <email@example.com> wrote:
> Strawman. No sane definition of friendliness calls for unquestioning
> obedience regardless of consequences.
> Blindly fulfilling all of a child's requests has a good chance of
> screwing ver up pretty badly, even if it won't kill ver. Caring parents
> tend to filter a child's wishes quite a bit before ve learns to do it
> verself (ie. grows up).
Yes, but it's risky to apply this analogy to the relationship between
an AI and an adult human. I don't want to be forced to do things on
the grounds that I am not intelligent enough to know what's good for
me, or even on the grounds that it is what I would wish to do were I
a more intelligent and better-informed version of myself.
-- Stathis Papaioannou