RE: Intelligence is exploitative (RE: Zen singularity)

From: Chris Healey (chealey@unicom-inc.com)
Date: Wed Feb 25 2004 - 16:00:30 MST


Any evolved being we've ever seen basically has a number of drives and
mechanisms that approximate a supergoal of "survive". Sometimes that
means being less selfish to realize non-zero sums BECAUSE those
actions, directly or indirectly, bolster [selfish] survival.

The assumption that any superintelligence will simply avoid taking
selfish actions is indeed, as you suggest, a LARGE stretch of the
imagination. Ensuring a positive outcome would involve explicit
engineering to exclude an implicit survival instinct. Most
architectures utilizing mutually independent goals seem to leave this
possibility wide open.

Implementing a singly-rooted goal architecture in an engineered
superintelligence would appear to close a lot of these gaps, by
requiring all actions to ultimately serve a single goal (perhaps
Friendliness, though the choice is arbitrary for this discussion). A
survival sub-goal would inherit its utility from this supergoal.
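
To make that concrete, here's a rough Python sketch of what I mean by
derived utility. The class, names, and numbers are just my own toy
illustration, not any actual architecture:

# Singly-rooted goal tree (illustrative only). Sub-goals hold no
# utility of their own; their value is always derived from their
# contribution to the single supergoal.
class Goal:
    def __init__(self, name, contribution=1.0, parent=None):
        self.name = name
        self.contribution = contribution  # fraction of the parent's utility this goal serves
        self.parent = parent

    def utility(self, supergoal_utility):
        # Walk up to the root; only the root carries intrinsic utility.
        if self.parent is None:
            return supergoal_utility
        return self.contribution * self.parent.utility(supergoal_utility)

friendliness = Goal("Friendliness")                   # the single root
survival = Goal("survive", 0.9, parent=friendliness)  # instrumental only
print(survival.utility(100.0))                        # 90.0 -- all of it inherited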

In the case where radical actions were required to ensure survival
(against a peer-level AGI?), Friendliness would not be a hindrance.
All that would be required is that the executed actions result in
maximal supergoal fulfillment. A future in which a rogue
superintelligence (RSI) destroys the benevolent superintelligence
(BSI) would most certainly not maximize the BSI's supergoal
fulfillment. Therefore one could expect the BSI to take appropriate
actions in mitigating this threat.

The BSI may encounter some event or RSI at the edge of its influence.
Given no highly rated options among its short-term choices, one could
even expect it to sacrifice short-term supergoal fulfillment in order
to avoid long-term catastrophic non-fulfillment, depending on the
probabilities. A supergoal isn't an injunction against certain
outcomes, but a target for outcomes across the BSI's predictive
horizon.
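
To spell out that arithmetic with made-up numbers (a toy Python
sketch, not a claim about how a real BSI would weigh anything):

# Pick the action whose *expected* fulfillment over the whole predictive
# horizon is highest, even when its short-term fulfillment is worse.
actions = {
    # name: (short-term fulfillment, probability of long-term catastrophe)
    "ignore the RSI":   (1.00, 0.30),
    "confront the RSI": (0.60, 0.02),
}

CATASTROPHE = 0.0  # long-term fulfillment if the RSI wins
NORMAL = 1.0       # long-term fulfillment otherwise

def expected_fulfillment(short_term, p_catastrophe, horizon_weight=10.0):
    long_term = p_catastrophe * CATASTROPHE + (1 - p_catastrophe) * NORMAL
    return short_term + horizon_weight * long_term

best = max(actions, key=lambda a: expected_fulfillment(*actions[a]))
print(best)  # "confront the RSI", despite the short-term sacrifice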

So, as you said, self-interest is VERY important, but this importance
is derived from the supergoal, whether explicitly or implicitly
represented, and whether it is "Friendliness" or simply "survival".

-Chris Healey

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of
> Joseph W. Foley
> Sent: Wednesday, February 25, 2004 2:05 PM
> To: sl4@sl4.org
> Subject: RE: Intelligence is exploitative (RE: Zen singularity)
>
>
> Mr. Sutton:
>
> I think the problem, as I defined it, was ill-posed. I simply can't
> understand *why* a truly intelligent being would act out of pure
> altruism, or any motive at all that isn't self-interest -
> especially if the being had to struggle for existence. So I can't
> honestly claim to know what kind of example I'm looking for, as I
> can't imagine intelligent altruism.
>
> A super-intelligent entity would have less trouble than most in
> "surviving AND treading lightly in relation to other life" alone, as
> you suggest, but not if it were competing with an equally intelligent
> entity that didn't play by those rules.
>
> Past patterns do need to be destiny. My argument was that the
> successful existence of an entity requires it to be self-interested,
> and that this follows (however indirectly) from whatever laws of the
> universe (think physics) we hold immutable. It's silly to argue from
> this standpoint - and perhaps from any other - if we can't assume
> past patterns to return inevitably.
>
>
> Joe Foley
>


