From: Brian Phillips (firstname.lastname@example.org)
Date: Sun Mar 25 2001 - 12:54:59 MST
Just as a crazy "what if" question.
What if the development of strong AI (Friendly or otherwise) is a
threshold/filter with a really really high kill rate?
Say it is extremely difficult to develop workable uploading or strong
superintelligent AI... and that full nano, any form of FTL, and near-lightspeed
travel are not possible without "transentience"... (say the odds for THIS
threshold are one in a billion, not one in a million like the others.)
You might have civilizations bottled up in a single star system, or a very few
systems, for their entire life-cycle... assuming you need a strong AI to build
a relativistic nanoship to spread to a star outside your immediate stellar
neighborhood. Would that be an intuitive hypothetical fix for Fermi?
For an example... one might speculate that only a species with a certain kind
of neurophysiology ever manages to upload, and uploads are a necessary
step between biological life and transentient AI.
>>> in which case, where the heck are they?
> Remember the concept that technological intelligence
> requires passage through a number of evolutionary
> filters. First you have to have life, then
> intelligent life, then starfaring life, and only
> in the last case is the Fermi Paradox a problem;
> and if only one planet in a million makes it through
> each of those filters, then there could be 10^18
> planets in Earth's past light-cone, without it
> being unlikely that we're the first spacefaring
> intelligent species in this part of the universe.
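The quoted filter arithmetic can be sketched in a few lines of Python. All the
figures below are the post's own illustrative guesses (one-in-a-million per
filter, 10^18 planets, a one-in-a-billion AI threshold), not measured values:

```python
# Back-of-envelope version of the filter argument in the quoted passage.
# Numbers are the post's illustrative guesses, not real estimates.

planets = 10**18       # planets in Earth's past light-cone (quoted figure)
per_filter = 1e-6      # odds of passing each filter: life, intelligence, starfaring

# Three filters at one-in-a-million each: expected starfaring species ~1,
# so it is not unlikely we're the first in this part of the universe.
expected_starfaring = planets * per_filter**3

# Brian's variant: add a fourth, much harsher strong-AI/uploading threshold.
ai_threshold = 1e-9
expected_transentient = expected_starfaring * ai_threshold

print(expected_starfaring)    # on the order of 1
print(expected_transentient)  # on the order of 1e-9: essentially nobody crosses it
```

Under these guessed numbers the fourth filter leaves an expected count far below
one, which is the sense in which it would "bottle up" every civilization and
dissolve the paradox.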
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT