From: Norman Noman (firstname.lastname@example.org)
Date: Wed Aug 22 2007 - 11:06:34 MDT

Giving a Rogue AI the incentive not to eat the earth seems quite plausible,
since the earth is basically a worthless speck. Giving it the incentive to
let us have another, friendly singularity seems much more dubious, but it
still might work if the RAI had a goal that wasn't too big.

If its task was calculating C to 400 decimal places like you said, it might
do it in a week and a half, take a bow, and quietly shut down.

If its task was calculating C to 4000000 zillion decimal places, it might
decide the probability that a FAI was wasting enough energy to simulate it
was outweighed by the probability that it needs to be the big man on campus
in order to get the job done, and as a consequence humanity gets locked in
an obscure filing cabinet, allowed to have their own toy singularities as
long as they fit in the space between hullabaloo and humidifier, and
essentially forgotten until the RAI is done with its work, which may never
happen.

At this point humanity is released, along with the billions of smaller AIs
the RAI picked up in the eons it was expanding and spared for the same
reason it spared humanity. The simulated karma trap can be directed at
anyone, not just things you might accidentally create.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT