Re: large search spaces don't mean magic

From: Daniel Radetsky (daniel@radray.us)
Date: Tue Aug 02 2005 - 03:29:54 MDT


On Tue, 2 Aug 2005 02:44:43 -0400
"Ben Goertzel" <ben@goertzel.org> wrote:

> For instance, quantum physics can be derived from the assumption that
> uncertainty should be quantified using complex-valued probabilities (cf Saul
> Youssef's work). Mathematically it seems consistent that there are more
> general physics theories that use quaternionic and octonionic
> probabilities.

Okay, so you have probabilities coming from "larger" fields than the reals. Do
you think you have evidence that those would provide box-exploits, or are you
just saying that you now have a larger universe of physical theories in which
box-exploits might be? The first disjunct is just another possibility, which
means it isn't an argument for magic. The second disjunct requires that you
answer my earlier objections. Pick a disjunct and start swinging.
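For anyone following along, here is roughly what "complex-valued probabilities" amounts to on the usual reading: observable probabilities are the squared magnitudes of complex amplitudes, and the quaternionic/octonionic versions just swap the complex numbers for larger division algebras. A toy sketch of that reading only (not Youssef's actual construction):

    # Illustrative only: ordinary probabilities vs. complex "probability
    # amplitudes" (the textbook Born-rule reading, not Youssef's derivation).
    import numpy as np

    # Ordinary (real, non-negative) probabilities over three outcomes:
    p = np.array([0.5, 0.3, 0.2])
    assert abs(p.sum() - 1.0) < 1e-12

    # Complex-valued amplitudes over the same outcomes; the observable
    # probabilities are the squared magnitudes |z_i|^2:
    z = np.array([0.6 + 0.4j, 0.1 - 0.5j, 0.2 + 0.3j])
    z = z / np.linalg.norm(z)      # normalize so the |z_i|^2 sum to 1
    print(np.abs(z) ** 2)          # real, non-negative, sums to 1

None of which, by itself, says anything about boxes.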

> So you're right. The argument from the known (empirical and conceptual)
> incompleteness of physics is only PART of my reason for believing a
> superhuman AI could find a box-exploit.

But my point is that the incompleteness of physics provides next to no support
for the existence of exploits.

> The other part is the part you don't agree with, which is a general argument
> that if X is a lot smarter than Y, then X can probably find a way out of any
> box that Y creates.

That's only true if there really *is* a way out, given the circumstances X
finds himself in. It may simply be impossible for X to get out. To believe that
X can probably find a way out, you must first believe that X has a reasonable
way out. What makes you believe he has a way?

> It occurs to me now that it might be possible to prove a mathematical
> theorem to this effect. One could look at an average over all possible
> physical universes (assuming some probability distribution on them), and
> over all pairs of organisms X and Y within them, then try to prove that "If
> X is much smarter than Y, then X can escape from most boxes Y could create."

This sounds mighty specious to me, but I can't really say for sure until I know
exactly what it would mean. What is the probability distribution a distribution
of? The likelihood that each possible universe is the one we're actually in? If
so, the theorem would have to hold for *any* such distribution, since you don't
know what the real one is, including the distribution that assigns probability
1 to our being in a universe where most boxes are unbreakable (or where it is
ridiculously easy to make an unbreakable box).
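To make concrete why the choice of distribution does all the work, here is a toy sketch. The "universe" and "box" models are made up purely for illustration (a universe is just a number: the fraction of its boxes that have any exploit at all), and no claim about real physics is intended:

    # Purely illustrative: the conclusion of the proposed theorem depends
    # entirely on which prior over universes you pick.
    import random

    def fraction_of_escapes(prior_sampler, trials=100_000):
        """Estimate P(a much-smarter X escapes) under a prior over universes."""
        escapes = 0
        for _ in range(trials):
            exploitable_fraction = prior_sampler()  # property of sampled universe
            if random.random() < exploitable_fraction:
                escapes += 1        # assume X finds any exploit that exists
        return escapes / trials

    optimistic = lambda: random.uniform(0.9, 1.0)  # most boxes breakable
    pessimistic = lambda: 0.0                      # prob. 1 on unbreakable boxes

    print(fraction_of_escapes(optimistic))   # ~0.95
    print(fraction_of_escapes(pessimistic))  # 0.0

Under the first prior the "theorem" holds trivially; under the second it fails trivially. Which prior is the right one is exactly the question at issue.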

> Now, turning the previous paragraph into a real theorem would involve
> formalizing "intelligence" and "organism" and "box" in useful ways (which we
> have currently only made limited progress towards), and then proving a
> possibly very hard theorem. But I submit that if we did prove something
> like this, it would be decent evidence for the "other part" of my reason for
> believing a superhuman Ai could find a box-exploit.

You'd also need a good working definition of "possible," and other nasty
things like that. I doubt it would work. In any case, the evidence would only
be as strong as your definitions of all those terms are uncontroversial. Good
luck.

Daniel


