Re: Existential Risk and Fermi's Paradox

From: R. W. (rtwebb43@yahoo.com)
Date: Sat Apr 21 2007 - 09:11:35 MDT


Maybe it's simply easier for civilizations to maintain their consciousness in worlds of their own creation than to expend energy and time in this one, which lies outside their complete control. It would seem to me that being able to create a paradise of information and experience from the substrate of this world would be a better existence than living in this world as it is. Once at that stage, maybe these civilizations simply do not want to be bothered by lesser beings in this reality who might upset the balance and control they desire.

One would only need to be able to generate the prime number sequence, taking each next higher prime as the next iterative seed value, to create an effectively unlimited family of probability densities; in this way, one could mimic true randomness. A civilization could thus have genuinely unique experiences while keeping complete control over its reality. The reality it experiences would ultimately be limited by the energy available in this one, but hypothetically it could manipulate subjective time so that one second here corresponds to a million years of experienced reality. Its fate would still depend on the goings-on in this universe, but it could build machines to gather energy and other resources to sustain its minds in the sub-realities.
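(A rough illustration of the prime-as-seed idea, in Python. This is my own toy sketch, not anything from the original post; the names like prime_seeded_streams and the sample counts are made up. Each successive prime seeds an ordinary pseudorandom generator, giving a family of streams that are fully reproducible by whoever chose the seeds yet look random from the inside.)

import random
from itertools import count, islice

def primes():
    # Yield the primes 2, 3, 5, 7, ... indefinitely by trial division.
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

def prime_seeded_streams(num_streams, samples_per_stream):
    # Seed a fresh PRNG with each successive prime and draw a few values,
    # giving deterministic but random-looking streams: controlled novelty.
    streams = []
    for p in islice(primes(), num_streams):
        rng = random.Random(p)  # the next higher prime is the next seed
        streams.append((p, [rng.random() for _ in range(samples_per_stream)]))
    return streams

if __name__ == "__main__":
    for seed, values in prime_seeded_streams(5, 3):
        print(seed, [round(v, 4) for v in values])

For scale, the time-dilation figure above implies a speedup factor of roughly 3 x 10^13, since one million years is about 3.16 x 10^13 seconds.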
   
  They would need to build machines that either cannot communicate, or simply avoid communicating, with minds in this reality while they experience a reality of their own choosing through technology. The machines in this time and space would be drones programmed to protect the mind(s) living within the created world(s). You could go so far as to model this entire existence that way: each individual mind shapes vis own reality, protected by drones in the higher reality, with the ability to transfer one's mind between realities, or keep others out, as one sees fit. New universes could be born from the integration and random sharing of minds, generating ever more unique child realities.
   
  The ultimate liberty would be to give each person vis own ideaspace with which to construct their own reality and experience it as they see fit.
   
  It would be really cool to reach the level of existence of a universal mind, integrating with other universal minds to create completely new universes.
   
  Why would you want to exchange this kind of ability for the lesser existence of an entropic reality?
   
  Stathis Papaioannou <stathisp@gmail.com> wrote:
  

  On 4/20/07, Gordon Worley <redbird@mac.com> wrote:

  Suppose the theory of Friendly AI is fully developed and leads to the
creation of a Friendly AI, so that the Friendly AI path to Singularity
comes first (after all, we may create something that isn't a Friendly
AI but that will figure out how to create a Friendly AI). When this
path is enacted, what are the chances that something will cause an
existential disaster? Although I suspect they would be less than the
chances on a non-Friendly AI path to Singularity, how much less? Is
the difference large enough to warrant the extra time, money, and
effort required for Friendly AI?
Non-friendly AI might be more likely to cause an existential disaster from our point of view, but from its own point of view, unencumbered by concerns for anything other than its own well-being, wouldn't it be more rather than less likely to survive and colonise the galaxy?

Stathis Papaioannou

       



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT