From: Jeff Bone (firstname.lastname@example.org)
Date: Fri Dec 07 2001 - 17:53:01 MST
Gordon Worley wrote:
> On Friday, December 7, 2001, at 03:34 PM, Jeff Bone wrote:
> > Bottom line, as long as there is any connection whatsoever to the
> > physical universe we are almost certainly and absolutely screwed in
> > the long run: either the universe is open and we experience the heat
> > death due to 2LT, or it's closed and we experience collapse, modulo
> > some Tipler-esque imaginary infinity. Given that absolute safety is
> > *physically* an impossibility, we just need to
> > realistically assess the tradeoffs between the costs and benefits of
> > any desired level of safety.
> While a lot of this discussion is redundant, I think it needs to be made
> clear that this doesn't matter.
And let's make it absolutely clear that this is complete bullshit
reasoning. The reason actuarial science is called a science is because, um,
it's a science. It's "risk science." You can't just wish risk away by
assuming an unspecified ubertechnology or benevolent, artificial uberbeing.
Doing that is exactly the same kind of tautological, faith-based reasoning
that underpins the *other* major world religions (other, that is, than the
unexamined and uncritical belief in transhumanism and the possibility of an
omnipotent, benevolent go^h^h Sysop or -ops. ;-)
> These figures are based on right now,
No, they are extrapolated from physical constants and metrics. Granted, it's
possible to make the values change dramatically --- but a critical mind must
examine *what* would be necessary to make those values change.
And the bottom edge case means that it is impossible to eliminate risk
entirely from the equation for any system that is grounded in physical
reality, given physical actuarial considerations. Any claim that you
can do so is uninformed, unexamined, and implausible. This isn't a matter of
opinion, it's a matter of physics and examination of the edge cases. If you
want to speculate that a different physics exists that might make absolute
and perpetual (to timelike-infinity) safety of even a small physical system
possible, you are welcome to do so --- but understand that you've strayed
from the bounds of science and empiricism and pragmatism into the realm of
imagination, fancy, and speculation.
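The arithmetic behind that edge case can be sketched numerically. This is a minimal illustration, not a physical model: the per-epoch risk value below is an arbitrary assumption chosen only to show how any nonzero hazard compounds toward certain loss over unbounded time.

```python
# Sketch: under any nonzero independent per-epoch risk r, the cumulative
# survival probability (1 - r)**t decays toward zero as t grows without
# bound. The risk value here is purely illustrative, not a real figure.

def survival_probability(per_epoch_risk: float, epochs: int) -> float:
    """P(surviving all epochs), assuming an independent per-epoch risk."""
    return (1.0 - per_epoch_risk) ** epochs

r = 1e-9  # hypothetical per-epoch risk of catastrophic loss
for t in (10**6, 10**9, 10**12):
    print(t, survival_probability(r, t))
```

However small r is made, there is always a horizon t beyond which survival is overwhelmingly unlikely; shrinking r only pushes the horizon out, which is the point about tradeoffs rather than absolutes.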
> where you have only one life to live (cue the melodramatic music).
Nothing in what I have said assumes that. Indeed, I pointed out the backup
question myself. The actuarial analysis becomes more complex in the case of
backups vis-a-vis survival of an individual mind, as you then have to
consider propagation / saturation of a volume of spacetime, the bounding of
your worldline by the lightcone subtended by your birth event, etc. But the
bottom line is
that risks continue to exist in this universe unless you turn all of
spacetime itself into computronium, and IMO, while normal matter-based
computronium is within the realm of possibility, turning the substrate of the
universe itself into computronium belongs in the realm of things like the
Bible, the Torah, etc. I.e., interesting *fiction,* but in the absence of
evidence to the contrary, just that --- fiction.
> what if one copy of your mind can only live x years? You can live
> forever by making copies.
Unless the universe is open, in which case entropy will ultimately get you
no matter *what* Sysop, etc. you have. For extra credit: 5000 words
critically comparing and contrasting Eli's Sysop with Maxwell's Daemon.
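The distinction being argued here can be sketched with a toy probability model. All rates below are illustrative assumptions: independent local failures are the kind of risk that backups mitigate, while a correlated, universe-wide failure mode (entropy, heat death) hits every copy at once and no number of copies dilutes it.

```python
# Sketch: N independent copies raise the odds that *some* copy survives
# local failures, but a correlated global failure mode applies to all
# copies simultaneously, so it multiplies in exactly once regardless of N.
# All probabilities here are made up for illustration.

def any_copy_survives(p_local_loss: float, n_copies: int) -> float:
    """P(at least one copy survives), assuming independent local losses."""
    return 1.0 - p_local_loss ** n_copies

def survives_epoch(p_local_loss: float, p_global_loss: float,
                   n_copies: int) -> float:
    """Global loss is shared by every copy, so replication cannot reduce it."""
    return (1.0 - p_global_loss) * any_copy_survives(p_local_loss, n_copies)

# More copies drive the independent term toward 1...
print(any_copy_survives(0.5, 1), any_copy_survives(0.5, 20))
# ...but the correlated term caps survival no matter how many copies exist.
print(survives_epoch(0.5, 0.1, 10**6))
```

Under these assumptions, replication buys arbitrarily good protection against the first kind of risk and exactly none against the second, which is the entropy objection in miniature.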
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT