From: Petter Wingren-Rasmussen (email@example.com)
Date: Wed Dec 03 2008 - 00:46:36 MST
I'm not sure what kind of methodology you have in mind when you talk about these
things - but as has been discussed regarding self-improvement, an AI will
try to find ways to circumvent a hardwired constraint like
self-improvement = nausea (or the AI equivalent).
Taking human beings as an example: about 1,500-2,000 years ago Islam and
Christianity tried to imprint rules on us to follow, with fairly good
success (e.g. the Ten Commandments, belief in God and his representatives on
earth - Heaven if you do, Hell if you don't).
Nowadays few academics (i.e. those who drive our evolution forward) believe
in this reward/punishment model, and even fewer take the rules literally.
This development has taken about 80 generations.
By contrast, traits that came into being through evolution (for
example, all kinds of phobias) are still fairly active in all types of
people. There are no dangerous spiders in Scandinavia, yet arachnophobia is
still fairly common, and people have lived here since 10,000 BC.
If we use evolutionary pressure instead, it will be much harder to imprint
detailed rules in an AI. I believe, however, that the stability it
provides is well worth the tradeoff.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT