re: noncomputability

From: Mitchell J Porter (mjporter@U.Arizona.EDU)
Date: Wed Jul 25 2001 - 19:27:49 MDT

I suppose I should try to answer Emil's real question, which was:
what if noncomputable physics exists, and we need it for real AI?

First of all, I am sure that plain old computable, nonconscious
AI is capable of producing some sort of superintelligence.
If "real AI" means "really impressive AI", then I'm sure you
don't need noncomputable physics.

But let's just suppose that there *is* something noncomputable
in the dynamics of human thought. Let's suppose humans have a
cognitive primitive that isn't Turing-computable. The *only*
candidate I have for that is some sort of 'semantic reflection'
operation, in which the 'meaning-qualia' of symbols in one thought
act as causal inputs to another thought. Metamathematical (Godelian)
reasoning involves reasoning about the semantics of a formal
system; it's because you know the semantics that you can prove
the Godel proposition to be true, even though the formal system
can't. But you can create a new formal system by appending that
Godel proposition to the old one as an axiom, and it's not at
all clear that human beings really can engage in valid reasoning
about every such augmented system, which is why the
Turing-noncomputability of human reason is far from proven.
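
To make the augmentation tower concrete, here's a toy Python sketch.
This is not real metamathematics: the Godel proposition is represented
as a labeled token rather than actually constructed. The point it
illustrates is that the augmentation step itself is perfectly
computable and can be iterated forever; the open question in the
paragraph above is whether humans can *soundly* reason about every
system in the resulting tower.

```python
# Toy model: a formal system is a set of axiom labels, and "appending
# the Godel proposition" adds a new token G(T) named after the current
# system. The construction of G(T) for a recursively axiomatized
# system is computable; only its *truth* lies outside the system.

def godel_sentence(system_name):
    # Placeholder for the (computable) Godel-sentence construction.
    return f"G({system_name})"

def augment(system_name, axioms):
    """Return the next system in the tower: old axioms + its Godel sentence."""
    new_axioms = axioms | {godel_sentence(system_name)}
    return f"{system_name}+", new_axioms

name, axioms = "PA", {"PA-axioms"}
for _ in range(3):
    name, axioms = augment(name, axioms)

print(name)            # PA+++
print(sorted(axioms))  # ['G(PA)', 'G(PA+)', 'G(PA++)', 'PA-axioms']
```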

But LET'S SUPPOSE. Let's suppose there's a noncomputable
cognitive primitive, implemented by some noncomputable
neurophysics in humans, like Penrose's quantum-gravitational
wavefunction collapse. What would AI look like if it wanted
to use this new-physics process?

First, it would need to use the right hardware (quantum
computers with heavy ensembles of entangled qubits, in
Penrose's case). Second, it could use a programming
language (Flare-prime) which included the extra computational
primitive alongside +, * and so on. Third, one could write
code that takes for granted the ability to answer
Turing-noncomputable questions.
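
A hedged sketch of what such code might look like, in Python rather
than any real Flare-prime. The `halts` primitive below is entirely
hypothetical: no Turing-computable implementation exists, so the stub
only answers for a toy family of programs whose halting happens to be
trivially decidable, and raises an error otherwise.

```python
# Sketch of a program written as if the language exposed a halting
# oracle as a primitive next to + and *. In the imagined language,
# `halts` would be backed by new-physics hardware; here it is a stub.

def halts(program):
    # Stand-in for the noncomputable primitive (hypothetical).
    kind, n = program
    if kind == "count_down_from":   # halts iff n >= 0
        return n >= 0
    raise NotImplementedError("would need an actual oracle here")

def safe_run(program):
    """Run a program only if the oracle says it terminates."""
    if not halts(program):
        return None
    kind, n = program
    if kind == "count_down_from":
        while n > 0:
            n -= 1
        return n

print(safe_run(("count_down_from", 5)))   # 0
print(safe_run(("count_down_from", -1)))  # None
```

The interesting feature is that `safe_run` treats the oracle answer
like any other boolean, which is exactly what "taking the ability for
granted" would mean at the source-code level.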

It's not clear to me what this third ability gets you.
If the universe has Turing-noncomputable processes in it,
then an oracle is going to be pragmatically useful, so,
okay, when our AI grows up it will want to have oracular
capabilities. But a "Turing AI" might still be capable
of formulating such a goal and doing the necessary
self-reconstruction. The question is: are the semantics
of the concept "oracle" Turing-computable? If yes, then
you don't need to already *be* an oracle to think about
becoming one.
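
One way to see why the semantics of "oracle" could be computable: an
oracle machine's *description* is finite data that an ordinary program
can build and inspect, even though no ordinary program can execute the
oracle call. A minimal sketch, with field names that are purely
illustrative rather than from any standard formalism:

```python
# A plain (Turing-computable) program manipulating the finite
# description of an oracle machine. It can check well-formedness,
# compare machines, plan to build one -- anything except run the
# oracle step itself.

def make_oracle_machine(states, query_state, yes_state, no_state):
    """Finite description of a machine with a halting-oracle call."""
    return {
        "states": set(states),
        "query": query_state,   # state that submits a program to the oracle
        "on_yes": yes_state,    # successor if the oracle answers 'halts'
        "on_no": no_state,      # successor if it answers 'does not halt'
    }

def well_formed(m):
    """A computable sanity check on the description -- no oracle needed."""
    return {m["query"], m["on_yes"], m["on_no"]} <= m["states"]

m = make_oracle_machine({"q0", "ask", "accept", "reject"},
                        "ask", "accept", "reject")
print(well_formed(m))  # True
```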
