Re: Another Take on the Fermi Paradox

From: Brian Atkins (brian@posthuman.com)
Date: Tue Dec 24 2002 - 14:31:34 MST


Ben Goertzel wrote:
>>I've brought up my complaint about this answer to the FP previously:
>>Your answer does not explain why I will not, about 5 seconds after the
>>Singularity, design/test/launch a self-replicating probe "manned" by
>>some sort of mind (either sentient or not, depends on what I decide
>>then) that will go off and scour the whole reachable Universe for
>>sentients that need help. Note that I do not have to go with the probe,
>>and it only takes a few seconds of realtime to accomplish, which probably
>>isn't enough to completely destroy my livelihood in the post-Singularity
>>rat race.
>
>
> My hypothetical explanation is that, to your post-Singularity mind, this
> probe-sending does not seem to be a worthwhile activity.
>
> What we see as "needing help," a post-Singularity mind may see in a totally
> different way.
>
> I could conjecture that this post-Singularity mind might see "needing help"
> situations as "part of the natural order of being". But that too would be
> imposing too much human moral psychology on the "motivational structure" of
> a being whose "inclinations", "desires" and "causes" are far beyond us.
>
> Brian, I feel like you're asking us to explain why a post-Singularity
> superintelligence won't do what you feel a well-intentioned human would do
> in that situation. But it will be far from human!!
>
> I have very little faith in the survival of human morality or
> humanly-comprehensible motivations into the dramatically posthuman realm.
>
> A post-Singularity "mind" will probably be neither friendly nor unfriendly,
> neither helpful nor non-helpful, but will behave in ways that the minds of
> remaining humans (if indeed they are able to observe its behaviors) will
> find rather inscrutable. Unless it "chooses" to have its behaviors appear to
> humans according to some easily comprehensible pattern...
>

Ok, but I find this argument unconvincing, since a) building and
launching the initial von Neumann probe (VNP) should be relatively
easy for such post-S minds, and b) we both expect (AFAIK) there to be
a wide variety of post-S minds, which makes it very unlikely that not
a single one of them, in any post-S civ, would decide to launch a
VNP. One possibility, I suppose, is that the automatic outcome of any
Singularity is _always_ that a single mind or "ruleset" takes control
of that local space and _always_ decides to disallow VNPs or any
other way for the civ to make itself known to the rest of the galaxy.
It does seem unlikely, though, that every single successful
Singularity process would produce exactly the same outcome.
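
To put rough numbers on (b): if each post-S mind independently has
even a tiny chance of launching a VNP, the odds that nobody anywhere
ever launches one collapse very quickly once you multiply across many
minds and many civs. A quick Python sketch (every number here is an
illustrative assumption of mine, not an estimate from this thread):

    import math

    # Illustrative assumptions only -- not estimates.
    p = 1e-6          # assumed chance any one post-S mind launches a VNP
    n_minds = 10**9   # assumed post-S minds per civilization
    n_civs = 100      # assumed post-S civilizations in the galaxy

    # Under independence, P(no probe anywhere) = (1 - p)^(n_minds * n_civs).
    # Work in log space to avoid floating-point underflow.
    log_p_none = n_minds * n_civs * math.log1p(-p)
    print(f"P(no probe anywhere) ~ exp({log_p_none:.3g})")  # ~ exp(-1e5)

Even at a one-in-a-million launch probability per mind, the chance of
zero launches is around exp(-100000). So the "nobody ever does it"
explanation seems to need either a per-mind probability that is
essentially zero, or decisions that are strongly correlated, as in
the single-ruleset scenario above.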

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

