Re: [agi] A difficulty with AI reflectivity

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Thu Oct 21 2004 - 09:52:21 MDT


Jeff Medina, Christian Szegedy:

Thank you for the suggested reading, but I'm already quite aware that no
consistent formal system (strong enough to encode arithmetic) can prove its
own consistency. If you're
not clear on why this presents a unique challenge for reflective theorem
provers, I recommend John Harrison's "Metatheory and Reflection in Theorem
Proving: A Survey and Critique."
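
For reference, the standard statement, in notation I will reuse below (writing
Prov_T for T's provability predicate):

    \mathrm{Con}(T) \;\equiv\; \neg\,\mathrm{Prov}_T(\ulcorner \bot \urcorner),
    \qquad
    T \nvdash \mathrm{Con}(T)

for any consistent, recursively axiomatizable T extending basic arithmetic.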

Mostly Harrison concludes that reflection is not needed in modern
engineering theorem-provers (he doesn't consider it as an AGI problem).
Insofar as Harrison does suggest a solution, it is to take a base theory,
Theory-0, and construct a new system, Theory-1, with the additional rule that
any theorem proved provable in Theory-0 counts as proved in Theory-1. This is
not quite the same as adjoining the assumption of Theory-0's consistency to
Theory-0 to form a new Theory-1: for example, the new theorem prover cannot
prove that the old theorem prover is consistent.
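
In symbols (my gloss, not Harrison's notation), the adjoined rule is an
inference rule rather than an axiom schema:

    \frac{\;\vdash_{T_1} \mathrm{Prov}_{T_0}(\ulcorner \varphi \urcorner)\;}
         {\;\vdash_{T_1} \varphi\;}

Adjoining the implication \mathrm{Prov}_{T_0}(\ulcorner \varphi \urcorner)
\rightarrow \varphi as an axiom schema would be a stronger move: the instance
\varphi = \bot is already \mathrm{Con}(T_0).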

The new theorem prover is not inconsistent because it does not contain the
rule that any theorem proved provable in Theory-1 may be added as a theorem
in Theory-1; this would run straight into Löb's theorem. Instead, the new theorem
prover only contains a rule that whatever is provably provable in the *old*
theorem prover may be added as a theorem in the *new* theorem prover. This
does not change the set of admissible theorems if Theory-0 is consistent.
What it does is enable *much shorter proofs* of some theorems.
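
Löb's theorem, for reference, says that for any sentence \varphi (and T as
above):

    T \vdash \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi
    \quad\Longrightarrow\quad
    T \vdash \varphi

so a system that adopts \mathrm{Prov}_T(\ulcorner \varphi \urcorner)
\rightarrow \varphi for its *own* provability predicate as a general schema
proves every sentence; the instance \varphi = \bot alone already forces
T \vdash \bot.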

The unique, new problem comes when we ask the theorem prover to rewrite
itself entirely. Even if we adjoin to Theory-1 the assumption that
Theory-0 is consistent, if a Theory-1 prover were to write a provably
consistent (in Theory-1) prover, it would write a Theory-0 prover. This
prover would then be unable to approve any further rewrites.
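
Spelled out (my paraphrase of the regress), with Con(Theory-0) adjoined as
above:

    T_1 \vdash \mathrm{Con}(T_0)
    \quad\text{but, by the second incompleteness theorem,}\quad
    T_1 \nvdash \mathrm{Con}(T_1)

so Theory-1 can certify a Theory-0 prover as consistent but not a Theory-1
prover; and since T_0 \nvdash \mathrm{Con}(T_0), the certified successor
cannot certify even a copy of itself.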

We may be able to rescue Schmidhuber's Gödel Machine by compartmentalizing
it into an object system that provably has expected utility for solving
problems, and a meta-system that can only be rewritten if the rewrite
provably admits only theorems admissible in the original meta-system. That
is, we use *two different* criteria for modifying *two different*
components of the Gödel Machine. I don't regard this as a good solution to
the deep AGI problem, since I wasn't planning to build a Gödel Machine. It
would also be an additional requirement over and above the strategy given
in Schmidhuber's original paper, which claimed total self-modifiability
including modifiability of the proof-verifier or even the fact that the
system contained a proof-verifier. It is furthermore unclear how a
rewrite of the meta-system would be adjudged *superior* to the prior
system, even if it were proven admissible.
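
A toy sketch of the compartmentalization, just to pin down the two criteria
(the names and proof-search stubs are mine, purely illustrative; none of this
is Schmidhuber's actual formalism):

# Two components, two rewrite criteria.  Proof search is stubbed out.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ObjectSystem:
    # The problem-solving component; freely rewritable given a utility proof.
    policy: Callable[[object], object]

@dataclass
class MetaSystem:
    # The proof-verifier; rewritable only conservatively.
    admits: Callable[[str], bool]   # which candidate theorems it accepts

def proves_higher_expected_utility(meta, old_obj, new_obj):
    # Stub standing in for a proof, checked by `meta`, that new_obj has
    # higher expected utility than old_obj at solving problems.
    raise NotImplementedError

def proves_only_admissible_theorems(meta_old, meta_new):
    # Stub standing in for a proof that every theorem meta_new admits
    # was already admissible in meta_old (conservativity, not consistency).
    raise NotImplementedError

class CompartmentalizedGoedelMachine:
    def __init__(self, obj, meta):
        self.obj = obj
        self.meta = meta

    def rewrite_object(self, candidate):
        # Criterion 1: the object system may be replaced whenever the
        # current meta-system proves the rewrite has higher expected utility.
        if proves_higher_expected_utility(self.meta, self.obj, candidate):
            self.obj = candidate
            return True
        return False

    def rewrite_meta(self, candidate):
        # Criterion 2: the meta-system may be replaced only if it provably
        # admits no theorems the original meta-system would reject.
        if proves_only_admissible_theorems(self.meta, candidate):
            self.meta = candidate
            return True
        return False

The design point is that neither criterion asks the verifier to prove its own
consistency.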

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

