From: Harry Chesley (firstname.lastname@example.org)
Date: Sun Nov 25 2007 - 14:57:07 MST
Thomas McCabe wrote:
> More anthropomorphicism. A Jupiter Brain will not act like you do;
> you cannot use anthropomorphic reasoning.
Yes, you can. Anthropomorphism is a dangerous trap: it can lead you to
assign intelligence where there is none, or to assume motivations or
operational knowledge that aren't appropriate. But that doesn't mean
that anyone who brings up something anthropomorphic is automatically wrong.
In this case, the anthropomorphism was part of a list of maybes, not an
argument that a particular behavior is unavoidable. Taking what-ifs from
the only available source of existing generally intelligent behavior
(people) is perfectly reasonable.
Nor is there any reason to assume that a GAI will *not* have
anthropomorphic aspects. If it's made by cloning people, or bits of
people, it probably will. If we want it to, it probably will. And if the
same evolutionary forces that produced that behavior in us also apply to
the GAI, it very well might.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT