Re: Simulation argument in the NY Times

From: Matt Mahoney
Date: Sun Aug 19 2007 - 13:14:05 MDT

--- Samantha Atkins <> wrote:

> On 8/19/07, Matt Mahoney <> wrote:
> > Legg essentially showed in a second paper [2] that intelligence =
> > algorithmic complexity. Currently, I think that the algorithmic
> > complexity of all the DNA on earth (taking into account redundancy)
> > exceeds the knowledge (both brains and technology) of any group of
> > humans bent on world destruction. But a singularity would change that.
> The algorithmic complexity of DNA does not count toward effective
> intelligence for survival of extreme conditions. Most species on earth
> have relatively narrow survival ranges and little capability for fast
> adaptation. Relatively little intelligence (even none, in the case of a
> natural disaster) is required to destroy relatively large complexity.
> Destruction is much easier. As long as we are all on this rock, anything
> that whacks conditions on this planet sufficiently far outside our
> survival range, and does so on a scale and in a short enough time to
> overload our effective intelligence, will wipe humanity out.

Legg defined intelligence as the ability to meet goals in an unknown
environment. If you define the goal of evolution to be the continuation of
life (i.e. it fails if all life becomes extinct), then you see a correlation
between algorithmic complexity and success. For example, a population of
identical clones has the same complexity as a single organism. But if the
members differ slightly through genetic variation, then the information
content is higher, and it increases with population size. If there are many
species, then the complexity is greater still. I think you can see that,
when faced with a disaster such as a new disease or a change in the
environment, greater complexity increases the chance that at least some
organisms will survive.

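The clone-versus-variation point can be illustrated numerically by using
compressed size as a crude upper bound on algorithmic complexity. A minimal
Python sketch, with zlib as the compressor and made-up genome sizes and
mutation counts purely for illustration:

```python
import random
import zlib

def complexity(data: bytes) -> int:
    """Compressed size in bytes: a rough upper bound on algorithmic complexity."""
    return len(zlib.compress(data, 9))

random.seed(0)
# One organism's genome: 1000 random symbols from {A, C, G, T} (hypothetical).
genome = bytes(random.choice(b"ACGT") for _ in range(1000))

# A population of 100 identical clones: barely more complex than one genome,
# since the compressor reduces each repeat to a short back-reference.
clones = genome * 100

def mutate(g: bytes, n: int) -> bytes:
    """Copy a genome with n random point substitutions."""
    g = bytearray(g)
    for _ in range(n):
        g[random.randrange(len(g))] = random.choice(b"ACGT")
    return bytes(g)

# A population of 100 slightly different members: each mutation adds
# information the compressor cannot remove, so complexity grows.
varied = b"".join(mutate(genome, 10) for _ in range(100))

print(complexity(genome))  # single organism
print(complexity(clones))  # clones: close to a single organism
print(complexity(varied))  # genetic variation: measurably higher
```

Under this proxy, the varied population compresses to a larger size than the
clone population of the same raw length, matching the argument above.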
-- Matt Mahoney

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT