From: Philip Goetz (firstname.lastname@example.org)
Date: Mon Feb 27 2006 - 22:37:19 MST
On 2/27/06, William Pearson <email@example.com> wrote:
> On 27/02/06, Philip Goetz <firstname.lastname@example.org> wrote:
> > Determining friendliness may be as hard as determining whether a
> > program halts. I don't know. Why would friendliness imply that the
> > program halts? I don't see any connection.
> No? If a program doesn't halt it doesn't output (in the classical
> formulation of halting). If it doesn't output it doesn't do anything.
> Most definitions of friendliness definitely imply that programs are
> supposed to do something. This may be non-obvious because, as
> Richard Loosemore notes, we tend to think of AI programs as producing
> output indefinitely, so it might be clearer if I talked about whether
> a program hangs rather than whether it halts. However, this
> discussion with you is consuming the time that I can devote to that
> endeavour.
This is an interesting way of interpreting the formalism.
However, think of the application. Any competent AI, friendly or not,
will be a non-halting program. If you consider only halting programs
because of how you define "output", you have defined away the
interesting part of the problem. You can't say that you're going to
somehow reify all of the outputs produced by an AI over time
interacting with its environment into one single output, and then
analyze whether that single computation halts.
I don't know how to address the problem of applying this sort of
analysis to programs that accept input from the environment as they
run, particularly input whose values are conditional on the prior acts
(outputs) of the program.
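To make the distinction concrete, here is a minimal Python sketch (the names and the toy policy are mine, purely illustrative) of a reactive program that never halts, yet produces a stream of outputs whose values depend on the history of its inputs:

```python
import itertools

def reactive_agent(inputs):
    """A non-halting agent: consumes an input stream and yields an
    output after each input. Each output depends on the history of
    prior inputs, so the outputs cannot be reified into one final
    output produced 'at halt'."""
    history = []
    for observation in inputs:
        history.append(observation)
        # Toy policy: output the running total of observations so far.
        yield sum(history)

# The generator only stops if its environment (the input stream) ends.
agent = reactive_agent(itertools.count(1))  # environment never ends
first_outputs = [next(agent) for _ in range(4)]
print(first_outputs)  # [1, 3, 6, 10]
```

In the classical formulation, this program has no "output" at all, which is exactly the sense in which halting-based definitions miss interactive systems.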
> > > As all the bits of the Omega Series have no relation to each other,
> > > they are random so the shortest way of storing them is the direct
> > > information. And so, because it is as hard as or harder to prove
> > > friendliness, Phi must be equally random.
> > >
> > > A proof system and axioms is a way of storing information about the
> > > world, we shall call a starting program with embedded proof system and
> > > axioms S.
> > >
> > > How many other programs can S prove the friendliness of?
> > >
> > > I have said that the bound on the number of programs a proof
> > > system can know to be friendly is the number of bits in it.
> > > Otherwise it has found some hidden relation between the bits of
> > > Phi so that it can be compressed.
> > Well, you're assuming, in advance, that there is no such thing as a
> > proof system, but that instead, all the bits of Phi must be
> > enumerated. A proof system is generative, and a proof system of
> > finite size can prove, for example, that an infinite number of numbers
> > are either prime or not prime.
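An aside on the quoted claim above: a finite program that correctly classifies every one of the infinitely many integers as prime or not prime is easy to exhibit. A minimal Python sketch (mine, for illustration, and not a compact storage of infinitely many independent bits, which is the point):

```python
def is_prime(n):
    """Finite program, infinite domain: correctly classifies every
    integer n >= 0 as prime or not prime by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([n for n in range(20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```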
> This has no bearing on determining properties of strings that are
> interpreted as programs. Primeness of a finite string can be
> determined in finite time by a finite algorithm. Halting, and I argue
> friendliness, cannot be. I would advise you to read some Chaitin if
> you haven't.
Yes it does. As I understand it, you have essentially said that a
program of length S can tell you whether at most S different programs
halt, because there is no general way of testing for halting, so the
"program" is really just a lookup table. By the same reasoning, a
program of length S could certify the friendliness of at most S
different programs.
It seems that what you're saying boils down to the claim that
Friendliness is undecidable.
Can you restate what exactly you're trying to prove?
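My reading of the lookup-table claim, as a toy Python sketch (entirely illustrative, including the verdicts): a "verifier" with no general decision procedure reduces to a finite table, and a table with k entries certifies at most k programs.

```python
# A toy 'friendliness certifier' with no general decision procedure:
# its verdicts are hard-coded, not derived, so it certifies exactly
# the finitely many programs it enumerates.
CERTIFIED = {
    "print('hello')": True,   # verdicts stored as raw bits
    "while True: pass": False,
}

def certify(program_text):
    """Return True/False for the finitely many known programs,
    and abstain (None) on everything else."""
    return CERTIFIED.get(program_text)

assert certify("print('hello')") is True
assert certify("x = 1") is None  # outside the table: no verdict
```

The disputed question is whether a proof system can do better than this table, i.e., certify more programs than it has bits.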
> This was distinctly beyond the scope of what I was arguing. It is a
> fine method if you accept that it has an arbitrary chance of having
> your program hang or never halt as it replaces itself with new ones.
> To give you a brief argument: if it could be shown that your
> heuristic correctly assigns over a 50% chance of halting to those
> programs that halt, and under 50% to those that don't, for more
> programs than its length, then that would violate the limits on
> knowing Omega. If we can't or don't show this link, then the
> correspondence of the heuristic with reality is arbitrary.
What is known about probabilistic approaches to undecidable problems?
Is it possible to probabilistically solve the halting problem - that
is, let us say, to be able to create a program that estimates whether
any given program halts, where the error rate of its guesses can be
made arbitrarily low?
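One natural heuristic of this kind is a timeout: run the program for a fixed budget of steps and guess "does not halt" if it exceeds the budget. A Python sketch (my own toy model, with programs represented as generators that yield once per simulated step) shows both the idea and its weakness: for any fixed budget there are programs that halt just past it, so the error cannot be driven arbitrarily low uniformly over all programs.

```python
def guesses_halting(program, budget):
    """Timeout heuristic: step `program` (a generator function that
    yields once per simulated step and returns when it 'halts') for at
    most `budget` steps; guess 'halts' iff it stops within the budget."""
    steps = 0
    for _ in program():
        steps += 1
        if steps >= budget:
            return False  # guess: does not halt (may be wrong)
    return True  # it actually halted within the budget

def halts_after(n):
    """A program that halts after exactly n steps."""
    def prog():
        for _ in range(n):
            yield
    return prog

def loops_forever():
    """A program that never halts."""
    while True:
        yield

assert guesses_halting(halts_after(10), budget=100) is True   # correct
assert guesses_halting(loops_forever, budget=100) is False    # correct
# But any program that halts just past the budget fools the heuristic:
assert guesses_halting(halts_after(101), budget=100) is False  # wrong guess
```

Whether some cleverer scheme can achieve a uniformly low error rate is exactly the question posed above; the sketch only shows that the obvious timeout approach does not.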
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:08 MDT