Re: Please Re-read CAFAI

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Wed Dec 14 2005 - 16:05:47 MST


Yes, Jef. I think we agree.
There are, however, several problems here relating to being understood.
The first: real humans are not agents. Rather, they act *slightly*
agent-like at least a little of the time. Agency is immensely powerful, and
civilization is the result. However, a consequence of this is that almost
all people have no model of agency and fail to understand it at all. (Are
role-playing games effectively training in understanding agency?)
The second problem is that among those few who understand agency we often
have a "virtue and sanity" cargo cult. It is correct that certain beliefs,
assertions, and preferences, such as the belief that the world is about to
end, that one is uniquely or almost uniquely capable of saving it, or that
one no longer needs traditional morality are extremely good empirically
validated predictors of harmful behavior. It is likewise true that in the
Pacific Islands in the 1940s airstrips were good empirically validated
predictors of food delivery. Morality evolved for a radically different
environment; yet because its rejection has historically predicted
destructive behavior, people refuse to formally adopt agency in its place.
That refusal is like building airstrips after the food deliveries have
stopped, especially since agency, properly understood, fully encompasses
the part of evolved moral behavior that (due to its self-referential nature
and connection to your goal system) you wish to retain. Similar things can
be said about the refusal to adopt any particular
far-from-apparent-consensus belief, especially beliefs regarding the low
rationality, agency (ironically), or competence of accepted high-status
authorities. Admittedly, the warmth of evolved morality, agreement with
consensus, and high regard for the leaders/authorities of one's society are
features of one's goal system, so I am counseling a sort of self-sacrifice;
but more properly I am counseling that certain goals be temporarily
deferred for the sake of the long term, for without doing so
those goals will fail to be realized in the more distant future. Students
of psychology should be at least as aware of the need to avoid hyperbolic
discounting as students of history should be of insanity, especially given
that the actual expected cost of insanity is so low (for most forms of
insanity, only one relatively normal human life's worth of utility plus a
bit extra is at stake).
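A minimal sketch of why hyperbolic discounting is the failure mode to avoid here (my illustration, not from the original post; the amounts, delays, and the k and d parameters are arbitrary assumed values). Hyperbolic discounting, V = A / (1 + kD), produces preference reversals: a smaller-sooner reward beats a larger-later one when both are near, but the ranking flips once both are pushed into the future. Exponential discounting, V = A * d^D, never reverses, which is why deferring near-term goals for the long term is consistent under it:

```python
def hyperbolic(amount, delay, k=1.0):
    # Hyperbolic discounted value: V = A / (1 + k*D)
    return amount / (1 + k * delay)

def exponential(amount, delay, d=0.5):
    # Exponential discounted value: V = A * d**D
    return amount * d ** delay

# Choice now: 10 units today vs. 30 units in 3 days.
assert hyperbolic(10, 0) > hyperbolic(30, 3)    # 10.0 > 7.5: smaller-sooner wins

# Same choice shifted 10 days out: 10 in 10 days vs. 30 in 13 days.
assert hyperbolic(30, 13) > hyperbolic(10, 10)  # ~2.14 > ~0.91: ranking reverses

# Under exponential discounting the ranking never flips:
assert (exponential(10, 0) > exponential(30, 3)) == \
       (exponential(10, 10) > exponential(30, 13))
```

The reversal is the point: a hyperbolic discounter endorses deferring gratification when both options are distant, then abandons the plan as the nearer option approaches.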

Tennessee: The vast majority of FAI and UFAI designs are, at equilibrium,
consequentialist rational agents, but NOT objectivist. They are
consequentialist because without consequentialism goals can drift, whereas
consequentialism is a black hole from which goals cannot, under ordinary
circumstances, drift.
They are NOT objectivists because, except in rare cases, they aren't even
expected to have a "Self" to be self-interested in. Perfectly in violation
of the categorical imperative, they will see everything as a means to an
end, as all consequentialists do.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT