Re: Cracking and Parasitic AI WAS: Floppy take-off

From: Durant Schoon (durant@ilm.com)
Date: Mon Aug 06 2001 - 10:39:40 MDT


> From: James Higgins <jameshiggins@earthlink.net>
>
> >I was thinking more along the lines of "How much if any time should an SI
> >(or an AI) spend?". It's one of those things that might seem unlikely, but
> >have a huge impact if found...or it could be a wild goose chase resulting
> >in nothing. Should an SI spend time wondering if there is a God? Or being
> >in a simulation (below)?
>
> I don't think the quantity of time is relevant. The SI will likely think
> much, much faster than we do, and almost certainly must be capable of
> multiple threads of consciousness. Therefore, I don't think it need
> dedicate any time, in particular, to these tasks. It would probably be
> worth having a low priority background thread running for each as soon as
> any major, impending problems are handled. I doubt it would ever devote
> 100% of its effort to any of these tasks, unless it solved every other
> existing problem or one of these became extremely urgent for some reason.

I was thinking more in terms of percentage of time rather than absolute
time, just to normalize things. I agree with the last sentence. That's
probably the right answer.

> >Hmm, if the simulation is created by an SI of greater intelligence, S'
> >(S prime), don't you think that S' would have sealed off the simulation
> >so that it can't be altered or halted from "inside"?
>
> Why would our SI halt the simulation? I meant that the simulation could be
> set up to halt if a Singularity occurs within.

Interesting! But, yes, diverging into the purely speculative...

--
Durant Schoon


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT