Re: Existential Risk and Fermi's Paradox

From: freaken@freaken.dk
Date: Sun Apr 22 2007 - 10:34:17 MDT


Couldn't it just be that the aliens in question don't want us to see
them? À la the movie Contact?

> The obvious solution is "we are first".
>
> "We" "[being] first" is only unlikely in the sense of "what is the chance
> of that golf ball landing exactly there", said by a teenage caddy pointing
> at a random golf ball happening to lie in a certain place when he happens
> to say that.
>
>
> On 21/04/07, apeters2@nd.edu <apeters2@nd.edu> wrote:
>>
>> I'm not sure if "machine rebellion" is a workable concept here. If we are
>> talking about a civilization able to build whole subrealities at a whim,
>> we are already talking non-biological, uplifted sentience. Why would they
>> make these (I assume lesser) guardian entities with the capacity to
>> rebel, or even to want to rebel? Leave them with limited intelligence,
>> perhaps a basic compulsion-program to ensure that they concentrate solely
>> on defense and resource harvesting.
>>
>> Your other point - "bumping up against" other civilizations - seems like
>> a more likely source of problems.
>>
>> Quoting Dagon Gmail <dagonweb@gmail.com>:
>>
>> > The implication would be, the galactic disk would be seeded with a
>> > steadily growing number of "bombs", i.e. extremely defensive automated
>> > civilizations solely dedicated to keeping intact the minds of their
>> > original creators. Just one of these needs to experience a machine
>> > rebellion and the precarious balance is lost. A machine rebellion may
>> > very well not have the sentimental attachment to the native
>> > dream-scape. Machine civilizations could very well be staunchly
>> > objectivist, dedicated to what they regard as materialist expansion.
>> > Any such rebellion would run into the (alleged) multitudes of
>> > "dreaming" or "virtuamorph" civilizations around.
>> >
>> > And we are talking big timeframes here. If the statistical analysis has
>> > any meaning, virtuamorph civilizations shouldn't be a de facto dying
>> > process; for a dreaming civilization to have any meaning other than a
>> > slow abortion, they have to last millions of years, and millions of
>> > years means a lot of galactic shuffling in terms of stellar
>> > trajectories. There would be many occasions of stars with "dreamers"
>> > drifting into proximity, giving rise to paranoid, highly protectionist
>> > impulses. After all, if all that dreaming is worth anything in
>> > subjective terms, the civilization doing it would fight real-world
>> > battles to defend it, and not just dream about it in metaphorical terms
>> > of +5 vorpal swords.
>> >
>> > Unless the mindscapes have a way of closing off access to reality,
>> > i.e. they materially escape this universe. But then we introduce new
>> > unknowns and arbitrary explanations.
>> >
>> > > Maybe it's simply easier for civilizations to maintain their
>> > > consciousness in worlds of their own creation rather than expend
>> > > energy and time in this one, which is outside of their complete
>> > > control. It would seem to me that being able to create a paradise of
>> > > information and experience from the substrate of this world would be
>> > > a better existence than existing in this world as is. Once at this
>> > > stage, maybe other civilizations simply do not want to be bothered by
>> > > lesser beings in this reality who might upset the balance and control
>> > > they desire. One would only need to be able to generate the prime
>> > > number sequence in order to create an infinite order of probability
>> > > densities, with the next higher prime as the next iterative seed
>> > > value. In this way, one could mimic true randomness. A civilization
>> > > could thus have truly unique experiences yet retain complete control
>> > > over their reality. The reality they experience would ultimately be
>> > > limited by the available energy in this reality, but hypothetically
>> > > they could manipulate time in such a way that one second here would
>> > > be a million years in their experienced reality. Ultimately, their
>> > > fate would be dependent upon the goings-on in this universe, but they
>> > > could develop machines to gather energy and other resources to
>> > > maintain their minds in the sub-realities.
>> > >
>> > > They would need to build machines incapable of communicating, or
>> > > that avoid communicating, with minds in this reality while they
>> > > experience a completely unique reality of their own choosing through
>> > > technology. The machines in this time and space are drones programmed
>> > > to protect the mind(s) living within the created world(s). You could
>> > > go so far as to model this entire existence where each individual
>> > > mind shapes vis own reality, which is protected by drones in the
>> > > higher reality, with the ability to transfer one's mind between
>> > > realities as one sees fit, or keep others out as one sees fit.
>> > > Universes could be born by the integration and random sharing of
>> > > minds, thereby generating more unique child realities.
>> > >
>> > > The ultimate liberty would be to give each person vis own ideaspace
>> > > with which to construct their own reality and experience it as they
>> > > see fit.
>> > >
>> > > It would be really cool to exist at the level of a universal mind
>> > > integrating with other universal minds, creating completely new
>> > > universes.
>> > >
>> > > Why would you want to exchange this kind of ability for the lesser
>> > > existence of an entropic reality?
>> > >
>> > > Stathis Papaioannou <stathisp@gmail.com> wrote:
>> > >
>> > >
>> > >
>> > > On 4/20/07, Gordon Worley <redbird@mac.com> wrote:
>> > >
>> > > > The theory of Friendly AI is fully developed and leads to the
>> > > > creation of a Friendly AI path to Singularity first (after all, we
>> > > > may create something that isn't a Friendly AI but that will figure
>> > > > out how to create a Friendly AI). However, when this path is
>> > > > enacted, what are the chances that something will cause an
>> > > > existential disaster? Although I suspect it would be less than the
>> > > > chances of a non-Friendly AI path to Singularity, how much less? Is
>> > > > it a large enough difference to warrant the extra time, money, and
>> > > > effort required for Friendly AI?
>> > >
>> > >
>> > > Non-friendly AI might be more likely to cause an existential
>> > > disaster from our point of view, but from its own point of view,
>> > > unencumbered by concerns for anything other than its own well-being,
>> > > wouldn't it be more rather than less likely to survive and colonise
>> > > the galaxy?
>> > >
>> > > Stathis Papaioannou
>> > >
>> > >
>> > >
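
Re the prime-number trick a few messages up: the post doesn't spell out a
mechanism, but one plausible reading is that each successive prime seeds an
ordinary pseudorandom generator, so the stream of experiences looks novel
at every step yet is exactly reproducible. A minimal sketch of that reading
in Python (the generator choice and the function names are my own
assumptions, nothing from the original post):

    import random

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division (fine for a sketch)."""
        found = []
        n = 2
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def prime_seeded_stream(draws_per_seed=5):
        """Reseed a PRNG with each successive prime and yield a few
        draws per seed: random-looking values, yet every one of them
        is recoverable from the prime sequence alone."""
        rng = random.Random()
        for p in primes():
            rng.seed(p)
            for _ in range(draws_per_seed):
                yield rng.random()

    # The whole "experienced" stream can be replayed from scratch,
    # which is the "complete control" half of the quoted claim.
    stream = prime_seeded_stream()
    print([round(next(stream), 3) for _ in range(10)])

Strictly speaking this gives determinism that merely looks random rather
than true randomness, but that seems to be the point being made: a
civilization gets unique-feeling experiences under total control.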



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT