[sl4] Singularity, "happiness", suffering, Mars

From: D Goel (deego@gnufans.org)
Date: Tue Sep 20 2005 - 13:54:56 MDT


I remember reading in Eliezer's and others' writings that the
Singularity will bring an end to human/posthuman/intelligent-species
suffering, amplify happiness manifold, and lead to a tremendous
increase in "human GOOD" G, whatever your definition of G is. Say
G = happiness minus suffering.

This GOOD then persists for a very long time A (A is the virtually
limitless post-singularity age). Suffering is negligible.

Thus, the human good, or happiness, (in the simplest model) is given by

G = H A   (+ very, very small suffering terms,
           + very small pre-singularity happiness/suffering terms)

where H is post-singularity happiness per year, and A is the virtually
limitless post-singularity time. (A is very large, say at least 10^10
y? Even 10^12? Infinite?)

The faster we bring about the singularity, the larger A is, and thus
the greater the good.

How does the expected good change when you factor in the possibility
of the singularity not happening?

Let T be the expected time to the singularity, and let eps (epsilon)
be the minuscule chance of a civilization-wiping accident before then:
a comet/asteroid impact, gamma-ray burst, supernova, etc. (or, some
would say, also grey goo). Then the expected human good is given by

G = H A (1 - eps)

Say, our efforts bring the singularity closer by t years.

Not only do you increase A, you also decrease epsilon by eps*t/T
(less time available for accidents).

The new G is

G = H A (1 - eps + eps*t/T) (1 + t/A)              ---- (1)

Essentially, if we subtract out the constant G_0 == H A and expand (1)
to first order, dropping the tiny cross terms, our task is to minimize
the expected loss

G_1 = G_0 - G = H A ( eps (1 - t/T) - t/A )        ---- (1B)

G_1 is bad. Minimize G_1.
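
For concreteness, here is a minimal Python sketch of Eq. (1B). All
the numbers (H, A, eps, T, t) are illustrative placeholders, not
values claimed anywhere above:

    # Illustrative sketch of Eq. (1B): the expected loss of good from a
    # civilization-wiping accident before the singularity.
    def expected_loss(H, A, eps, T, t):
        # G_1 = H*A*(eps*(1 - t/T) - t/A), per Eq. (1B)
        return H * A * (eps * (1 - t / T) - t / A)

    H   = 1.0    # post-singularity happiness per year (arbitrary units)
    A   = 1e10   # post-singularity age, in years
    eps = 1e-5   # chance of a civ-wiping accident before the singularity
    T   = 50.0   # expected years until the singularity

    print(expected_loss(H, A, eps, T, t=0))      # baseline:  ~1e5
    print(expected_loss(H, A, eps, T, t=T / 2))  # halving T: ~5e4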

----
Next, what if we were diversified across several planets, so not all
our eggs are in one basket?  If we had a (nearly) self-sufficient base
on Mars, our eggs are in 2 baskets, so d = 2.  More conservatively,
d = 1.3, since there may exist common threats, and, say, the base does
not have a huge population.  Treating the baskets as (roughly)
independent, the chance of losing all of them is eps^d, so

G = H A ( 1 - eps^d )       --------------(2)
G_1 = H A ( eps^d )         --------------(2B)
----
Most singularitarians believe that eps is very small.  So, let's say,
eps = 10^(-5).  Take it even smaller if you wish, and proceed.

Next, consider 2 scenarios:
A.  You work tirelessly towards the singularity and manage to halve
    the expected time to the singularity.  You halve the multiplier in
    Eq. (1B): instead of the multiplier being 10^(-5), it is now only
    (1/2)*10^(-5).  You also subtract another minuscule[1] t/A from
    the multiplier.
         [1] A is very large.
    Not bad.  We halved the 10^(-5) factor.  The possibility of
    threats only made hurrying the singularity more important: in the
    absence of threats, the improvement was only t/A, which was
    negligible, but now the improvement is a nonvanishing factor of
    (1/2)*10^(-5).
     
B.  Instead, you work on improving d.  If d = 1.3, the multiplier in
    Eq. (2B) drops to roughly 10^(-6.5).  For d = 2.0, it is 10^(-10).
    Much better.  We changed the factor from 10^(-5) to 10^(-6.5) or
    less.  Relatively speaking, we almost completely eliminated the
    threat.

(More detailed models of happiness, etc., don't change the basic
conclusion of the model.)
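
As a sanity check on the comparison above, here is a small Python
sketch of scenarios A and B under this toy model.  Again, the
parameter values (H, A, eps, T) are illustrative assumptions only:

    # Toy comparison of scenario A (halve T) and scenario B (raise d),
    # using Eqs. (1B) and (2B).  Parameter values are illustrative.
    H, A, eps, T = 1.0, 1e10, 1e-5, 50.0

    def loss_hurry(t):
        # Eq. (1B): bring the singularity closer by t years
        return H * A * (eps * (1 - t / T) - t / A)

    def loss_diversify(d):
        # Eq. (2B): spread across d (roughly independent) baskets
        return H * A * eps ** d

    print(loss_hurry(0))          # baseline:               ~1e5
    print(loss_hurry(T / 2))      # scenario A (halve T):   ~5e4
    print(loss_diversify(1.3))    # scenario B, d = 1.3:    ~3e3
    print(loss_diversify(2.0))    # scenario B, d = 2.0:    ~1

On these toy numbers, even the conservative d = 1.3 beats halving T by
more than an order of magnitude, which is the point of the comparison.
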
I am increasingly starting to wonder if trying to spread our eggs
into other baskets makes more sense than trying to hurry up the
singularity.  The above model formalizes the argument.  In English,
the argument is: "What we have, an intelligent species, is precious,
and the singularity is nearly a given for humanity now.  What matters
is that the species now works to minimize any chance of accident
between now and then."
---
Of course, if you believe that the singularity is imminent in the
next 10 years (and that going to Mars takes much longer), then you can
argue that increasing d itself takes 10 or more years, and so is
impossible to do within the given time T.  We will call this case (P).

But in all other cases, the case for improving d seems to win out over
trying to lower T.  Also, this is independent of whether or not grey
goo factors in as an additional threat (in addition to the usual
ones).
---
"Appendix"
Let's examine the case (P) -- those who believe that singularity
will happen within the next 5--10 years.. 
Even in case P, for those who believe in a grey goo threat
possibility, most of the contribution to eps comes right near the
singularity (if grey goo overshadows other threats), and for them, it
makes sense to actually even *delay* T *if they have to*, in order to
make sure that d is increased before singularity happens.
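
Within the same toy model, here is a rough, illustrative way to see
when a deliberate delay pays for itself (the numbers are again made
up): delaying the singularity by D years costs roughly H*D of good,
while raising d from 1 to 2 shrinks the accident term from eps to
eps^2.

    # Rough break-even for delaying the singularity by D years in order
    # to raise d from 1 to 2 (illustrative numbers only).
    H, A, eps = 1.0, 1e10, 1e-5

    def loss_no_delay():
        # stay single-planet: Eq. (2B) with d = 1
        return H * A * eps

    def loss_with_delay(D):
        # delay by D years to reach d = 2: lose about H*D of good,
        # but the accident term shrinks to eps^2
        return H * D + H * A * eps ** 2

    print(loss_no_delay())        # ~1e5
    print(loss_with_delay(50))    # ~51
    # Break-even only when D approaches A*(eps - eps^2), ~1e5 years here.
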
Sincerely,
DG                                 http://gnufans.net/
--

