Re: Opting out of the Sysop scenario?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Aug 04 2001 - 20:23:01 MDT


Gordon Worley wrote:
>
> - Develop a design for the Sysop (based on what I've written, what
> Eli's written, what's on this list, and anything else interesting
> that will surely come along).

I doubt it. Any mind that has the slightest use for what you or I wrote
is too primitive to even begin thinking about defining Sysop Space.
("Unix Reality"?) The Transition Guide might share some of the causes
that moved us to write, but the writing itself as an influence? No way.
I'd have to veto that - in fact, if we're hypothesizing that an FAI is
rummaging through the SL4 archives, I just did. <smile>.

> - Refine the design using the coding heuristics (vis codic cortex?)
> ve has developed.

Again, I think this presumes too little intelligence. "Solve all the
design problems in negligible time through the use of enormously
superhuman intelligence beyond our ability to comprehend or even describe
as a cognitive process" is probably a better way of putting it.

> - Develop a system (whatever it may actually be, probably something
> like the Sysop, or maybe you might call it Unix for the Singularity).

"Unix Reality" is the best phrase I've been able to come up with so far...

> - Stress test it in simulation (yes, this will be the same as the
> real thing, the AI should have the ability to do this).

"Testing" is a paradigm for evolution and slow/parallel/linear minds.
Based on my theoretical understanding of how the regularities in reality
operate, my guess is that testing will become an inefficient use of
cognitive resources not too far past the point where it is possible to
cognitively model each lowest-level element in what is being tested.
Think of Deep Blue versus Kasparov - Deep Blue tested actual chess
positions and Kasparov thought *about* chess positions, which is why they
were evenly matched when Deep Blue was processing roughly 200 million positions per
second to Kasparov's 2 positions per second. Basically, thinking of
actual realities is a very inefficient way to think, and creating actual
realities is even worse. The only reason for it would be to catch unknown
unknowns hypothesized to show up in a physical test but not in an imagined
(simulated) test. Humans, of course, do not have the ability to run
low-level simulations and must therefore always resort to physical tests.
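
A deliberately toy sketch of the difference (the function, the range, and
the Python code below are invented for illustration; none of it comes from
chess or from any actual Sysop design): the "testing" version instantiates
and evaluates every concrete state, while the "modeling" version exploits a
known regularity and never touches the individual states at all.

    # Toy illustration (hypothetical example, invented for this point).
    # Find the maximum of f(x) = -(x - 3)**2 + 10 over the integers
    # 0..10_000_000 in two ways.

    def f(x):
        return -(x - 3) ** 2 + 10

    # "Testing" paradigm: instantiate and evaluate every concrete state.
    def max_by_testing(lo=0, hi=10_000_000):
        best_x, best_val = lo, f(lo)
        for x in range(lo + 1, hi + 1):   # ~10 million evaluations
            val = f(x)
            if val > best_val:
                best_x, best_val = x, val
        return best_x, best_val

    # "Modeling" paradigm: use the known regularity (a downward parabola
    # peaks at its vertex), so no states need to be instantiated at all.
    def max_by_modeling():
        return 3, f(3)                    # one evaluation

    print(max_by_testing())    # (3, 10), after ~10^7 evaluations
    print(max_by_modeling())   # (3, 10), after 1 evaluation

Both answers agree; the asymmetry in cost is the point - brute-force
testing stops paying for itself once the regularities of the thing being
tested can be modeled directly.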

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


