From: Gordon Worley (firstname.lastname@example.org)
Date: Thu Dec 06 2001 - 10:11:24 MST
On Thursday, December 6, 2001, at 10:38 AM, Jeff Bone wrote:
> And my argument is that in order to determine when and where the Sysop
> will be needed, some agent --- whether the Sysop itself or the environment is
> unimportant --- is going to need a predictive ability that allows it to
> prevent unwanted harm or death to individuals. This will be, for
There's no need to prevent anything. Though it will slow things down
slightly, the safest approach is likely to delay all actions with a Sysop
step. So, if you want to start interacting outside your own mind, the
Sysop will do a quick check every time. But I defer to the Sysop to
figure out the best way to do it.
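The "Sysop step" described above is essentially a mediation pattern: every action that reaches outside an agent's own mind is routed through a checker that can veto it before it takes effect. A minimal sketch, with all names and the placeholder policy invented purely for illustration:

```python
# Hypothetical sketch of the "Sysop step": every external action is
# delayed by a quick check before it executes. The policy shown here
# (block actions targeting a non-consenting agent) is an invented
# stand-in; the post explicitly defers the real policy to the Sysop.

def sysop_check(action):
    """Return True if the action may proceed."""
    target = action.get("target")
    # Purely internal actions (no target) pass without question.
    if target is None:
        return True
    # External actions require the target's consent.
    return target in action.get("consenting", [])

def act(agent, action):
    """Delay an action with a Sysop check, as the post proposes."""
    if sysop_check(action):
        return f"{agent} performs {action['name']}"
    return f"{agent} blocked: {action['name']}"

# Thinking to yourself is untouched; uninvited interference is vetoed.
print(act("Bob", {"name": "think"}))
print(act("EvilPower", {"name": "rewrite_mind", "target": "Alice"}))
```

The point of the sketch is only that the check sits on the action path itself, so nothing external happens without passing through it; the cost is the small per-action delay the post concedes.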
> BTW, this all assumes that some part of the constituency "remains"
> interested in being physically present in the physical world. If
> everyone uploads --- *everybody* --- then this isn't as big a problem, though the
> Sysop must still be concerned with the physical safety of whatever
> substrate it runs on.
You still have to protect the algorithms. Part of the Sysop's job would
be to keep big mister Evil Power from taking over the minds of lowly SIs
Bob and Alice.
> Logic, common sense, and actuarial reasoning should tell us that
> *absolute* safety is an impossibility, and my gut tells me that
> to task some Power with providing it is a recipe for disaster.
We've already been down this road: anthropomorphic thinking.
We cannot be 100% safe, but we'll try to get as damn close to it as
possible and have escape routes in case all hell breaks loose.
--
Gordon Worley
http://www.rbisland.cx/
email@example.com
PGP: 0xBBD3B003

`When I use a word,' Humpty Dumpty said, `it means just what I choose
it to mean--neither more nor less.'
--Lewis Carroll
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT