From: Petter Wingren-Rasmussen (email@example.com)
Date: Mon Nov 24 2008 - 07:42:50 MST
I've been reading through the archives and hope you are interested in
reanimating an old subject here.
I believe that it's vitally important for an AI to have some engineered
personality traits to be functional.
Biological life as an example:
Some of the first macromolecules replicated and some didn't; some
catalyzed reactions and some didn't. The ones that expanded their territory
were the ones that could replicate and did so. This principle has been at
work ever since.
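The selection principle above can be sketched as a toy simulation (everything here, the numbers and the sampling scheme, is my own illustrative assumption, not something from the post): a mixed population where only replicators add copies of themselves, with a fixed population cap standing in for limited "territory".

```python
import random

random.seed(0)

# Start with a 50/50 mix: True = replicates, False = does not.
population = [True] * 50 + [False] * 50

for generation in range(10):
    # Only replicators produce copies of themselves.
    offspring = [m for m in population if m]
    population = population + offspring
    # Finite "territory": cap the population by random sampling.
    population = random.sample(population, 100)

replicator_share = sum(population) / len(population)
print(f"replicator share after 10 generations: {replicator_share:.2f}")
```

Each generation the replicators roughly double their share before the cull, so they crowd out the non-replicators within a handful of generations, which is the whole of the argument: no intelligence involved, just differential replication.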
Although we as organisms are a lot more complex, I'd say this is exactly the
same as "will to live" and fertility. These traits have been inherited all
through evolution, and I can't see that any of them necessarily correlates
with intelligence.
In other words, an AI, no matter how intelligent, might still be
self-destructive, passive or unwilling to improve itself.
The same argument holds true for a lot of other traits imho.
Altruism, for example, might be an evolutionarily beneficial trait. As we
grow more intelligent we show these traits in more complex ways, but that
doesn't mean the initial driving forces behind them are any different.
An intelligent person who likes to think of himself as good might argue
that he is good because he is intelligent. I believe he is strongly biased,
just as I believe that I, who regard myself as somewhat nihilistic, am
biased towards thinking intelligence doesn't change anything by itself.
Altruism is just an example here; insert any personality trait that you want
an AI to have and I think the argument still holds.
My conclusion: We can't know what traits, if any, an AI with superhuman
intelligence will inherently have. Several, maybe all, of the traits we want
AIs to have can be gained through evolution. Can we afford the risk of
developing AIs of this level without applying evolutionary pressure?
This archive was generated by hypermail 2.1.5 : Mon Jun 17 2013 - 04:01:06 MDT