Re: Sysop hacking

From: DarkVegeta26@aol.com
Date: Thu Feb 14 2002 - 17:23:51 MST


In a message dated 2/14/2002 2:42:31 PM Pacific Standard Time,
polysync@pobox.com writes:

> I feel that these assumptions, along with Murphy's Law, will survive
> into and through a singularity

Or else infinite growth occurs within a finite time.

> This is based on my
> assumption that at any one moment there is an upper limit on the number of
> resources that can be constructively committed to a single mind.)

What is a mind? A "self-aware information-processing mechanism"? In a
googolbyte-size Sysop, I find it hard to believe that ve would have only one
mind...so a more likely outcome, in my opinion, would be a Sysop composed of
differentiated and specialized subprograms, each self-aware and continuously
engaged in intense information exchange with the other Sysop-ian subprograms
and perhaps even with a main brain. And if we don't think SIs will get
ontotechnology, then there must be a mass limit on how dense/large a
computing/emulation centre can be.

> Without 100% knowledge of the universe, there will be problems that can't
> be solved, or even predicted through simulation.

If it's a perfect simulation, the problem could be simulated and solved to
the fidelity of the simulation (which would be perfect). The real problem is
conjuring up an accurate enough simulation in the first place (simulations in
human minds are often unfaithful to the "real world", as the reader is
probably fully aware).

> - For any given intelligence level, there are good plans that will just
> fail, for reasons you could not have foreseen.
> - We might explore the wrong paths, and commit to things that fail.
> - Some experiments and tests might be unpredictably disastrous or fatal.
> - Sometimes action is required, whether a correct solution or knowledge
> is on hand or not.

Sometimes, Powers make fewer mistakes than humans. heh.

Michael A
