Re: [sl4] An attempt at empathic AI

From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Sun Feb 22 2009 - 20:00:22 MST


On Sun, Feb 22, 2009 at 5:09 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> "Analyzable" and "modular" imply low algorithmic complexity. Do you want a system that is predictable, or one that is smarter than you? You can't have it both ways.

I'm not sure we're talking about the same thing, so it is probably
worth making the argument more detailed and mathematical. Is
algorithmic complexity the same as Kolmogorov complexity, i.e. the
length of the shortest program that outputs a given string?

Assuming that it is, consider this pseudocode:

def main():
    # complicated_subroutine is assumed to do no input or output
    # (see below), but may otherwise be arbitrarily complex.
    if complicated_subroutine():
        print("True")
    else:
        print("False")

Further assume that complicated_subroutine does no input or output, a
property which can be verified structurally, that is, by inspecting
the code without running it. Then we can prove a non-trivial property
of the program: if it prints anything at all, it prints either "True"
or "False". For example, it will never print "Hello World!".

The Kolmogorov complexity of the whole can be arbitrarily high - it
isn't limited by the fact that I was able to prove a property about
it.
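
To make "verified structurally" concrete, here is a minimal sketch of
such a check in Python. The blacklist of I/O names and the does_no_io
helper are my own illustration, not a serious analysis tool; the point
is only that the check inspects the syntax tree and never runs the
code, so it works however complex the subroutine is.

import ast

# Illustrative blacklist: names whose use we treat as doing I/O.
IO_NAMES = {"print", "input", "open"}

def does_no_io(source: str) -> bool:
    """Structurally check that the source never names an I/O primitive.

    Purely syntactic: it walks the parse tree without executing
    anything, so the subroutine's runtime behavior and complexity
    are irrelevant to the check.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in IO_NAMES:
            return False
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False  # imports could smuggle in I/O, reject them too
    return True

complicated_subroutine_source = """
def complicated_subroutine():
    # Arbitrarily expensive to compute, but no input or output.
    return sum(i * i for i in range(10**6)) % 2 == 0
"""

assert does_no_io(complicated_subroutine_source)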

Theorem provers might be structured to pass a discovered proof through
a filter before outputting it. If you look at Schmidhuber's and
Hutter's various constructions, I think most of them provide
guarantees on the behavior of the whole despite untrusted, arbitrarily
complex components. So far, the guarantees tend to lean toward "keeps
making progress" rather than "Friendly", but I think structural
arguments for Friendliness are the way to go.
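
As a minimal sketch of that filter structure (the names
search_for_proof and check_proof are hypothetical): the proof searcher
may be untrusted and arbitrarily complex, but the only proofs that
ever leave the system are ones a small, trusted checker accepts, and
that guarantee follows from the shape of the wrapper rather than from
analyzing the searcher.

from typing import Callable, Optional

def guarded_prover(
    search_for_proof: Callable[[str], Optional[str]],  # untrusted, arbitrarily complex
    check_proof: Callable[[str, str], bool],            # trusted, small, analyzable
    goal: str,
) -> Optional[str]:
    """Return a candidate proof only if the trusted checker accepts it."""
    candidate = search_for_proof(goal)
    if candidate is not None and check_proof(goal, candidate):
        return candidate
    return None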

Johnicholas


