From: William Pearson (firstname.lastname@example.org)
Date: Mon Feb 27 2006 - 07:48:21 MST
On 27/02/06, Philip Goetz <email@example.com> wrote:
> On 2/26/06, William Pearson <firstname.lastname@example.org> wrote:
> > > 2. This last bit about the maximum number of bits being bounded by
> > > your starting program. That doesn't seem to connect to anything else
> > > in the proof.
> > By programs, in the following, I shall mean programs whose Kolmogorov
> > complexity is equal to their length. That a program can know that an
> > infinite number of programs that all do the same thing have the same
> > halting likelihood or friendliness is not interesting.
> If you limit yourself to programs whose K-complexity = their length,
> then you have destroyed the point of the exercise. A program has one
> K-complexity, so I think you are saying that "The length of programs
> of length n that programs of length n can be transformed into is
> bounded by n".
No, I am not trying to say anything like that.
You leave out things like provably maintaining a certain programmatic
property, such as friendliness. I also haven't bounded the length of
the programs to be transformed into, although a bound is possible.
> > The thrust of the argument is this. Determining friendliness (or any
> > other desired property) of a program is as hard as or harder than
> > determining whether a program halts, or whether an infinitely looping
> > program has finite output (whether it hangs or not). Friendliness
> > implies (in a logical sense) that the program halts.
> Determining friendliness may be as hard as determining whether a
> program halts. I don't know. Why would friendliness imply that the
> program halts? I don't see any connection.
No? If a program doesn't halt, it doesn't output (in the classical
formulation of halting). If it doesn't output, it doesn't do anything.
Most definitions of friendliness definitely imply that programs are
supposed to do something. This may be non-obvious because, as Richard
Loosemore has noted, we tend to think of AI programs as producing
infinite output, so it might be clearer if I talked about whether a
program hangs rather than whether it halts. However, this discussion
with you is consuming the time that I can devote to that endeavour.
I treat the halting problem as described in the final paragraph here:
[...] the halting problem is undecidable.
In some sense, this problem (or some related formulation) is the
canonical undecidable problem. There are countless other undecidable
problems, which can often be expressed in terms of some simple
question about a computer program. For example:
* Will the program ever print out a 5?
* Will the program ever execute line 26?
It turns out that these sorts of problems are all equivalent to the
halting problem, in the sense that given a solution to one of them,
you could write a solution to any of the other ones. The reasons why
that is possible are interesting, but beyond the scope of this
document.
I just substitute friendliness, or strong self-improvement, or whatever
other property we want the program to maintain as it replaces itself
with new programs. I believe Rice's Theorem deals with the last
sentence.
> > As all the bits of the Omega series have no relation to each other,
> > they are random, so the shortest way of storing them is the direct
> > information. And so, because it is as hard as or harder to prove
> > friendliness, Phi must be equally random.
> > A proof system plus axioms is a way of storing information about the
> > world; we shall call a starting program with an embedded proof system
> > and axioms S.
> > How many other programs can S prove the friendliness of?
> > I have said that the number of programs whose friendliness a proof
> > system can know is bounded by the number of bits in it. Otherwise it
> > has found some hidden relation between the bits of Phi, so that Phi
> > can be compressed.
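For reference, the compression argument above leans on Chaitin's halting probability for a prefix-free universal machine U:

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

Knowing the first n bits of Omega_U suffices to decide halting for every program of length at most n, and conversely no n-bit axiom system can determine substantially more than n bits of Omega_U. That is the standard incompleteness result (due to Chaitin) behind the claim that S's knowledge is bounded by its own size.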
> Well, you're assuming, in advance, that there is no such thing as a
> proof system, but that instead, all the bits of Phi must be
> enumerated. A proof system is generative, and a proof system of
> finite size can prove, for example, that an infinite number of numbers
> are either prime or not prime.
This has no bearing on determining properties of strings that are
interpreted as programs. Primeness of a finite string can be
determined in finite time by a finite algorithm. Halting, and I argue
friendliness, cannot be. I would advise you to read some Chaitin if
you haven't already.
> If you assume that there is no way to prove that a program is
> friendly, even then, it doesn't follow that you have to list one bit
> per program you know, because you may have heuristics that allow you
> to produce a prior probability for each program under consideration.
This was distinctly beyond the scope of what I was arguing. It is a
fine method if you accept that it gives an arbitrary chance of your
program hanging, or never halting, as it replaces itself with new
programs. To give a brief argument: if it could be shown that your
heuristic correctly assigns an over-50% chance of halting to those
programs that halt, and under 50% to those that don't, for more
programs than its length in bits, then that would violate the limits
on knowing Omega. If we can't or don't show this link, then the
correspondence of the heuristic with reality is arbitrary.
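Goetz's heuristic idea can be made concrete with, say, a step-bounded runner (again my own toy construction, not anything proposed in the thread): run the candidate for a fixed number of steps and guess from the result. It is right about every program that halts within the budget, and systematically wrong about slow halters, which is the sense in which its correspondence with reality is not guaranteed:

```python
def bounded_halt_guess(program, budget=1000):
    """Toy halting heuristic: step `program` (a generator function)
    up to `budget` times. Returns True ("probably halts") if it
    finishes within the budget, False otherwise. It is wrong for any
    program that halts only after more than `budget` steps."""
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True   # finished within budget: definitely halts
    return False          # budget exhausted: guess "doesn't halt"

def quick():      # halts after 10 steps
    for _ in range(10):
        yield

def slow():       # halts, but only after a million steps
    for _ in range(10**6):
        yield

def forever():    # never halts
    while True:
        yield
```

Here `bounded_halt_guess(quick)` is correct, `bounded_halt_guess(forever)` happens to be correct, but `bounded_halt_guess(slow)` misclassifies a halting program; no finite budget fixes this for all programs.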
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:00 MDT