Please Re-read CAFAI

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Tue Dec 13 2005 - 20:16:52 MST


Some posters seem to be seriously unaware of what was said in CAFAI,
even though having read and understood it should be a prerequisite to
posting here. My complaints:
Friendly AIs are explicitly NOT prevented from messing with their
source code or with their goal systems. However, they act according to
decision theory, and according to decision theory there is almost no
reason (none not involving a hostile entity or severe pressure to
economize) ever to change one's goal system: an agent evaluates any
proposed change with its CURRENT goals, and a change of goals leads to
futures its current goals disvalue. They are not told not to modify
their goals, nor do they have to be, any more than I have to tell
people not to drink bleach.
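A toy sketch of that evaluation (my own illustration, not anything from
CAFAI; the "paperclips" and "staples" utilities are invented for the
example):

  # Toy sketch (my construction, not from CAFAI): an expected-utility
  # maximizer scoring "rewrite my goal system" with its CURRENT utility
  # function. The utilities and outcomes below are made up.

  def paperclips(world):          # current utility: count paperclips
      return world["paperclips"]

  def staples(world):             # candidate replacement utility
      return world["staples"]

  def future_world(goal):
      """World that results from acting on `goal` from now on."""
      if goal is paperclips:
          return {"paperclips": 100, "staples": 0}
      return {"paperclips": 0, "staples": 100}

  current_goal = paperclips
  options = {"keep goals": paperclips, "rewrite goals": staples}

  # Each option is scored by the *current* goal system, because that
  # is the only evaluator the agent has at decision time.
  scores = {name: current_goal(future_world(g))
            for name, g in options.items()}
  print(scores)   # {'keep goals': 100, 'rewrite goals': 0}

The rewrite scores zero under the agent's current goals, so it is
declined without any explicit prohibition on self-modification.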
The same confusion shows up in the discussion of the categorical
imperative. The categorical imperative simply makes no sense for an AI:
it doesn't tell the AI what to want universally done. Rational entities
WILL do what their goal systems tell them to do. They don't need
"ethics" in the human sense of rules countering other inclinations.
What they need is inclinations compatible with ours.


