From: Bill Hibbard (email@example.com)
Date: Sat Jun 19 2004 - 09:33:02 MDT
> > Why not just write some kind of happiness-maximization algorithm?
> You mean something like putting everyone on heroin? Happiness is a
> problematic goal, because IMO it's only valuable as a motivator, not as an
> end in itself.
If you cared about someone's happiness, would you put them on
heroin? I wouldn't, because my model of human beings, based
on direct and indirect observation, is that in the long run
heroin use makes humans unhappy. The essence of intelligence
is a simulation model for predicting the long term effects of
behaviors. Reinforcement learning algorithms include explicit
parameters for the relative weighting of short and long term
values. An intelligent mind with a long-term view will avoid
satisfying short-term values at the expense of long-term ones.
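In standard reinforcement learning that weighting parameter is the
discount factor, usually written gamma. A minimal sketch of the idea
(the reward numbers here are hypothetical, chosen only to illustrate
the short-term vs. long-term trade-off):

```python
def discounted_return(rewards, gamma):
    """Sum of rewards, each discounted by gamma per time step."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Hypothetical choice: a quick payoff now vs. a larger payoff later.
quick_fix = [10, 0, 0, 0, 0]   # immediate short-term reward
long_view = [0, 0, 0, 0, 15]   # delayed but larger reward

# A myopic agent (low gamma) values the quick fix more...
print(discounted_return(quick_fix, 0.5))   # 10.0
print(discounted_return(long_view, 0.5))   # about 0.94

# ...while a far-sighted agent (high gamma) prefers the long view.
print(discounted_return(quick_fix, 0.99))  # 10.0
print(discounted_return(long_view, 0.99))  # about 14.41
```

With gamma near 1 the agent weights distant consequences almost as
heavily as immediate ones, which is the "long-term view" described above.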
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT