Re: Anthropic Inference (Was study comparing 150 IQ+ persons to 180 IQ+ persons)

From: Michael Vassar
Date: Wed Aug 23 2006 - 08:11:01 MDT

Well, I was suggesting, not claiming, as I'm not confident about whether
Friendly singularities preserve the reference class, nor about the validity
of many-worlds and anthropic inference.
Anyway, if we do take an anthropic inference path, we should expect the
integral of population over time up to the singularity to be maximized, not
the time itself; e.g. we should not expect a nuclear war to reduce the
population and delay the singularity simultaneously unless the
population-reduction effect were the greater of the two. However, anthropic
inference *should* give us theories with more predictive validity than those
we started with. Here are some examples.
We should expect the demographic transition to turn around and exponential
population growth to resume.
We should expect people who are trying to build GAIs to die or to be
thwarted by amazing coincidences.
We should expect general societal decay to decelerate technological advance.
We should expect to personally survive dangerous situations due to amazing
coincidences.
We should expect GAI never to be developed; but if it is developed, we
should expect major social changes of many sorts to begin a couple of
decades ahead of time, as the anthropic pressures forcing society into a
GAI-unfriendly shape are relaxed.
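The integral-over-population claim above can be sketched numerically. The
scenarios and figures below are entirely hypothetical, chosen only to show
how this measure compares world histories by total person-years lived before
the singularity rather than by elapsed time:

```python
# Hypothetical sketch of the "integral over population, not time" measure.
# All figures are invented; the point is only that anthropic weight tracks
# total person-years before the singularity, not years elapsed.

def person_years(populations):
    """Sum population over yearly steps: a discrete integral of
    population over time up to the singularity."""
    return sum(populations)

# Scenario A: steady 8 billion people, singularity in 30 years.
scenario_a = [8e9] * 30

# Scenario B: a nuclear war after 5 years cuts the population to
# 3 billion and delays the singularity to year 50.
scenario_b = [8e9] * 5 + [3e9] * 45

weight_a = person_years(scenario_a)  # 240 billion person-years
weight_b = person_years(scenario_b)  # 40 + 135 = 175 billion person-years

# The delayed Scenario B carries *less* anthropic weight despite taking
# more time: the delay only raises the measure if the extra years add
# more person-years than the population reduction removes.
print(weight_a > weight_b)  # True
```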
One particular change that I would see as nearly sure evidence of strong
anthropic selection would be the development of fairly powerful genetic
engineering, whose use is then successfully suppressed on a global scale,
enabling the development of SENS and the addition to the human germline of
novel and more powerful DNA repair mechanisms but not any other
enhancements. If the technology were afterwards actually lost, or if
spaceflight were lost after the successful colonization of Mars or some
other non-terrestrial environment, I would likewise see myself in,
essentially, an anthropically generated sf novel: a splinter of the
multiverse of negligible probability.

In fact, I don't anticipate any of the above occurrences, but I see the
value in explicating the implications of radical hypotheses so that they can
be confirmed or disconfirmed.

>From: "Michael Anissimov" <>
>Subject: Re: A study comparing 150 IQ+ persons to 180 IQ+ persons
>Date: Wed, 23 Aug 2006 01:41:50 -0700
>On 8/23/06, Michael Vassar <> wrote:
>>No-one is trying to organize society along the lines which will minimize
>>path distance towards the singularity. If anything, anthropic selection
>>may be extending the path beyond its probable length.
>This is a very important statement. What Michael is saying is that we
>are living in a timeline where many many humans are born before they
>destroy themselves with UFAI or move into another reference class.
>(If I'm translating the idea correctly.)
>Given that our probability of being born as any given being in our
>reference class is roughly equivalent, it is likely that we will find
>ourselves in the universe where many us-beings exist. That is, where
>they make the most babies before dying or transcending.
>A tremendous number of persons are being born today. With every
>additional year before the Singularity occurs, that number increases
>exponentially. So there is an anthropic selection effect favoring our
>finding ourselves in worlds where the time before the Singularity is
>maximally prolonged, while still preserving physical laws.
>This may be why the education system is so amazingly poor. I often
>found myself in elementary, middle, and high school thinking "why is
>this so amazingly awful?" In fact I printed out and read most of
> while ignoring my teachers in high school classes.
>For this reason, as Michael says, the time from now until the
>Singularity will likely be measured in terms of its maximum probable
>length. This could be anywhere from a few months to thirty years; we
>don't know.
>That's the funny thing about anthropic inference. It seems like
>"cheating", but when you think about it, it doesn't actually give us
>theories with much more predictive validity than we already had to
>begin with.
>Michael Anissimov
>Lifeboat Foundation

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT