Re: Ben's "Extropian Creed"

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Nov 13 2000 - 23:22:29 MST


Ben Goertzel wrote:
>
> Hi,
>
> I admit that we're all way out of our depth in speculating about what's
> going to happen after the Singularity... maybe my reasoning based on
> biological life is wrong, but it's the best guide we have... my intuition
> is that it's a better guide than the kind of Utopian arguments you're
> presenting, but hey, what's my intuition or yours worth here, we're both
> just bags of meat

Actually, I think that our model of the future should deprecate shaping forces
that are clearly and specifically modeled on aspects of biological life, where
those aspects are strict artifacts of our own evolution and no equivalent
shaper exists in the future scenario.

In less formal language: "That's like saying future supercomputers will be
built from human neurons and that starships will have two arms and two legs."

> > It's quite possible that the Citizens will be capable of trading
> > computational resources back and forth, but hopefully some "minimal
> > living space" requirement will be enforced... possibly as a restriction
> > upon the trading activities of the Citizens who exist, but certainly
> > before a Child Citizen is created. (The ethical rationale for placing
> > this restriction on trading activities would be that it represents an
> > irrevocable harm to any future versions of the Citizen's self, which may
> > have changed their minds, or may even be changed so much as to be
> > different entities.)
>
> Well, what you're suggesting here is basically a democratic socialist
> digital universe ... which seems to me quite anti-libertarian. Isn't it
> restricting someone's freedom to tell them they can't bet all their
> computational resources on a horse race? Isn't this immoral, according to
> the strict libertarian creed?

I live libertarianism, rather than looking it up in a table of rules, so I'm
not sure what you mean by "strict". Heh heh. (I'm currently in the midst of
a totally unrelated offline debate about "strictness". Never mind.)

My libertarian intuitions tell me that your future self is not the same as
your past self, especially if tools exist that could enable you to completely
rewrite your own motivations and memories. My intuitions tell me that the
Sysop should not obey an instruction "Terminate me one subjective year from
now, regardless of any objections I may make at that time", nor should the
Sysop enforce a contract which contains such a provision. You can commit your
current self to anything, even instruct the Sysop to switch you off, but
irrevocably committing your future self constitutes coercion of that future
self and a violation of that future self's rights. If building a Child that
runs on a memory partition below Minimal Living Space constitutes child abuse,
then committing all versions of your future self to running below MLS
constitutes a possible infliction of nonconsensual hurt on that future self.
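
To make the invariant concrete, here is a toy sketch in Python - the names
and the numbers are all mine, purely illustrative, and not any kind of spec
for an actual Sysop. The rule it encodes: you may trade away anything above
the Minimal Living Space floor, but the floor itself is held in reserve for
your future selves, who cannot yet consent to losing it.

  # Hypothetical sketch of the MLS invariant; units are arbitrary.
  MLS = 1.0  # minimal living space, in made-up resource units

  def sysop_approves_trade(resources: float, offered: float) -> bool:
      """Approve only trades that leave the citizen at or above MLS."""
      return resources - offered >= MLS

  assert sysop_approves_trade(100.0, 99.0)      # fine: 1.0 unit remains
  assert not sysop_approves_trade(100.0, 99.5)  # rejected: dips below MLS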

Bear in mind that, for a human right after the Singularity - one of us fleshly
refugees - we're talking about something like 0.00[...]01% of total wealth,
and if you want to give EVERYTHING away, you can commit suicide to do so.

I confess that some of my response may have to do with an intuitive dread that
otherwise some idiot will reproduce a billion times over, and then a second
generation, until we've got minds squeezed onto 50K of RAM and another zillion
wretched poor and the same damn problem with charity and "bad incentives" and
so on. As far as I'm concerned, the whole "wretchedness" thing SHOULD NOT
carry over into the Sysop Scenario, EVER, and that's a BIG part of the whole
point of doing this.
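
For what that dread is worth, the arithmetic is stark; here is a
back-of-the-envelope sketch in Python, with every number invented by me. A
fixed pool of resources, each mind splitting its share with one child per
generation:

  total = 10 ** 24    # assumed total resource units in the pool
  floor = 50 * 1024   # the "50K of RAM" wretchedness level

  share, generations = total, 0
  while share > floor:
      share //= 2     # one child per mind: every share halves
      generations += 1

  print(generations)  # prints 65

A few dozen doublings and every mind in the pool is at the floor, however
large the pool started out.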

But I also think that there's a direct grounding for the Minimal Living Space
idea in individual rights.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


