Re: [sl4] Our arbitrary preferences (was: A model of RSI)

From: Eric Burton (brilanon@gmail.com)
Date: Fri Sep 26 2008 - 12:03:45 MDT


Er, my use of 'AI researchers' here refers specifically to researchers
who are AIs, rather than humans researching AI. I'm thinking Android
Scientists!

On 9/26/08, Eric Burton <brilanon@gmail.com> wrote:
>> You've completely lost me; why couldn't we observe a superintelligence?
>
> There could be post-singularity nannies in orbiting femtodatacentres,
> exerting climate control and steering near-Earth objects to our
> benefit. People who don't want to fuse with a benevolent
> superintelligence may find themselves its ignorant wards. Happily
> ignorant, in the case of Luddites who find the thing an abomination.
> Why would its galaxy-sized, light-speed ruminations appear in any form
> to people who eschew interaction with the technology in which they
> occur?
>
> I can see a transition to singularity that begins with great reams of
> future technology and alien blueprints unrolling from a thousand
> supercomputer centers where AI researchers are working explicitly to
> benefit humanity, but that, once an initial influx of technologies has
> lifted all survival pressure from the organics, results in a rapidly
> deepening alienation between the organic and electronic substrates.
> Human enclaves that were content with food/water replicators and
> self-assembling structures might not go back to the source, the
> post-singularity mind, often or at all.
>
> Communicating with it could become an eccentric pursuit, and rapidly
> an impossible one. After a few generations like this, would anyone
> notice if the superintelligence left Earth for good, before its
> gifts started breaking down? Maybe this has happened many times
> already.
>


