Re: [sl4] Comparative Advantage Doesn't Ensure Survival

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Sun Nov 30 2008 - 12:40:00 MST


Nick Tarleton wrote:
> On Sat, Nov 29, 2008 at 1:43 PM, Charles Hixson
> <charleshixsn@earthlink.net> wrote:
>
> You are assuming that an AGI will be modeled on human motivations,
> etc. I find this highly suspect. My expectation is that it will
> model human motivations only sufficiently to understand us, and to
> explain itself to us in terms that we can accept. As for
> resources... the surface of a planet is not an optimal location
> for machines. Not unless they are designed to use liquid water as
> an essential component. Asteroids have a lot to recommend
> them, including constant access to sunlight for power.
> Other moons without atmospheres also have potential. Mercury has
> lots of power available, just not continually unless you transfer
> it down from orbit. Etc. Luna isn't a very good choice, as it
> doesn't have any real advantages except a lack of air, and the
> rocks are rather low in density, which suggests that metals will be
> hard to come by. (People have come up with all sorts of schemes
> to extract them, but mainly because Luna is close to Earth.)
>
> My expectation is that an AGI will stick around Earth only if it
> really likes being around people. (Of course, it might leave, and
> then return after it had mined out the rest of the more easily
> accessible solar system.)
>
>
> Sounds like you're assuming human motivations, namely, satisficing as
> opposed to maximizing. I would expect all matter and energy to be
> maximally exploited under most simple goal systems.
>
>
> One could, of course, design an AGI to want to kill people, but I
> think only a person would come up with that as a goal.
>
>
> Resources aside, killing agents that might compete with you (including
> by building another AGI with different goals) is a convergent subgoal.
>
> http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
>
> npt
Certainly it would be possible to design AIs with such goals. I think
it would be rather silly to do so, however.

Also, there are excellent reasons to suspect that any successful AI
would choose satisficing over maximizing. The maximal condition, for
example, is excessively difficult to achieve, so one would spend all
one's effort controlling resources rather than achieving one's more
central goals.
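
To make the distinction concrete, here is a toy sketch of the two
strategies. Everything in it (the threshold, the diminishing-returns
curve, the function names like goal_satisfaction and satisficer) is
invented for illustration, not a claim about how any actual AI would
be built.

# Toy illustration: a satisficer stops once its goal is "good enough";
# a maximizer keeps pouring effort into resource control indefinitely.

GOAL_THRESHOLD = 0.95  # assumed "good enough" level of satisfaction

def goal_satisfaction(resources):
    # Diminishing returns: each extra unit of resources helps less.
    return resources / (resources + 10.0)

def satisficer():
    resources = 0
    while goal_satisfaction(resources) < GOAL_THRESHOLD:
        resources += 1
    return resources  # halts at 190 units and moves on to its goals

def maximizer(step_budget):
    resources = 0
    for _ in range(step_budget):
        resources += 1  # every step goes into controlling resources
    return resources  # never "done"; bounded only by the budget

print("satisficer stops at", satisficer(), "units")
print("maximizer after 1000000 steps:", maximizer(1000000), "units")

The point of the sketch is simply that the satisficer's loop terminates
on its own, while the maximizer's effort is limited only by whatever
budget the outside world imposes on it.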


