From: Tim Freeman (firstname.lastname@example.org)
Date: Sun Apr 27 2008 - 08:21:22 MDT
(I'm skipping lots of sensible text I agree with.)
From: "Byrne Hobart" <email@example.com>
>'Future choice' refers to a choice in the future.
Reading more context than I quoted above, it's apparent that a "future
choice" is a decision made now to do something later. Thanks, that
clears it up.
>If you're incapable of making commitments regarding the future, it
>would be hard to have property rights, though.
The basic problem is that my scheme [1] is incapable of specifying
anything about commitments. I'm not up to saying anything about
verbal behavior in general because I don't know how. The AI might
figure out verbal behavior, but only as part of its general ability to
learn. "Commitment" implies some sort of connection between the
statement made and some future decision-to-do, and in my present
scheme there is no such connection because in my present scheme,
statements are completely absent.
Apparently CEV [2] and the GodAI scheme [3] don't talk about keeping
commitments either. I didn't get around to understanding Practical
Benevolence [4] yet. Does anyone know of a FAI proposal that aspires
to get the FAI to keep its agreements for some reason beyond
expedience?
If the FAI takes over, then expedience won't constrain it. You can
also expect it to be a much better lawyer than you, so I don't know
how to safely use words with it even if we find some way to make it
want to keep promises. This is a standard problem [5].
>So it all compiles down to mutual agreements; the abstraction just
>makes it easy to talk about.
I agree that it's sufficient to talk about agreements, and that
enumerating the agreements that make up a property right is clumsy.
Clumsy is an option for these specifications, but not for a real
implementation.
>Of course, if you don't believe in such agreements, we can't have property
>rights as I think of them. But it seems like a pretty elementary part of
>human behavior to make promises about the future. If we can't do that, we
>can't have governments, and most forms of anarchism won't work, either. I
>guess we'd be stuck with Stirner.
After a little googling, I'm guessing you mean Max Stirner. His main
work seems to be "The Ego and its Own", which I haven't read yet. Do
you recommend starting there?
To a first approximation, the "Ego" of this proposed AI that doesn't
understand agreements is basically act utilitarianism, that is, the
greatest good for the greatest number of humans. (There are minor
caveats: it could be configured to care about other sets of entities,
it could weight them unequally, and then there's respect, which is
essentially conflict-aversion.) Rules are verbal behavior, so I can't
specify rules and therefore rule utilitarianism is not an option.
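The act-utilitarian core described above can be illustrated with a minimal Python sketch. The function names, the weight dictionary, and the numeric "respect" penalty below are my own stand-ins for the paper's actual formalization, not quotes from it:

```python
# Toy sketch, not the paper's formalization: score each candidate outcome
# by a weighted sum of every person's utility, with an extra "respect"
# penalty (conflict-aversion) for anyone the outcome actively harms.

def aggregate_utility(outcome_utils, weights, respect=0.1):
    """outcome_utils: dict person -> that person's utility for the outcome.
    weights: dict person -> how much the AI is configured to care about
    them (anyone missing gets equal weight 1.0)."""
    total = 0.0
    for person, u in outcome_utils.items():
        total += weights.get(person, 1.0) * u
        if u < 0:  # conflict-aversion: acting against someone costs extra
            total -= respect * abs(u)
    return total

def choose(outcomes, weights):
    """Act utilitarianism: pick the outcome with the greatest total good."""
    return max(outcomes, key=lambda o: aggregate_utility(outcomes[o], weights))

outcomes = {
    "feed_everyone": {"alice": 1.0, "bob": 0.5},
    "do_nothing":    {"alice": 0.0, "bob": 0.0},
}
choose(outcomes, weights={})  # -> "feed_everyone"
```

Unequal weighting or a different set of entities, the minor caveats mentioned above, would just change the `weights` dictionary and the keys of `outcome_utils`.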
>I'm starting the AIXI paper right now.
The hardest part of reading it is that there's no glossary of
notation: a lot of notation is introduced, goes unused for a few
pages, and is then relied on later, so you can't just flip back a page
or two to figure out what he meant. At the end his
main point slightly abuses his own notation and I had to have my whole
hand-built glossary there in front of me to guess what he meant. If
you have access to a copy of Hutter's subsequent book "Universal
Artificial Intelligence", there's a glossary of notation on page xvii,
and he does seem to cover the same material. It might be easier to
start there, if the glossary is accurate. I haven't checked.
>But while I do that, I have to ask what sort of model you're using for
>an intelligence if it cannot commit in advance to preferring a given
>outcome or choice, especially in the context of being rewarded or
>punished for choices.
Talking about the same AI as earlier (citation [1]), verbal behavior
is learned. It is not part of the specification. It might learn to
make and usually (or conceivably always) keep commitments, or perhaps
the environment and circumstances don't require that and it wouldn't
learn that. When the time to keep a commitment comes, the commitment
is part of the past, and the only incentive to keep it is the relative
consequences of keeping it versus breaking it.
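This point can be made concrete with a toy sketch (my own framing and invented payoff numbers, not the paper's formalism): a purely forward-looking agent scores actions only by their expected future consequences, so a past promise matters only through downstream effects such as lost trust.

```python
def choose_action(future_value):
    """Pick the action whose resulting future looks best. The agent has
    no term for 'I promised', only for what happens next."""
    return max(future_value, key=future_value.get)

# Invented payoffs: breaking the promise grabs 5 units now, but if
# people then stop trusting the agent it loses 10 units of future
# cooperation; keeping the promise is steadily worth 8.
with_trust_at_stake = {"keep_promise": 8, "break_promise": 5 - 10}
without_trust_at_stake = {"keep_promise": 8, "break_promise": 5 + 4}

choose_action(with_trust_at_stake)     # -> "keep_promise"
choose_action(without_trust_at_stake)  # -> "break_promise"
```

Whether the promise is kept depends entirely on the numbers, which is exactly the situation described above.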
>I would be really interested in how you would formalize your views.
I think you're really interested in a formalization of the AI's point
of view. That's in reference [1].
You might be asking for a formalization of my own point of view about
how things should be. To a first approximation, I don't think I have
one, since I don't think it makes sense to use the word "should" in
this context.
>What principles do you start with to derive not-property-rights from
>the very small set of pretty fundamental attributes I argued would
>automatically lead to such rights?
The difference in conclusion is primarily caused by the lack of
ability to specify verbal behavior. Thus the AI can't presently
describe its future choice, and therefore commitments and agreements
and so forth cannot be directly specified.
If the AI learns verbal behavior, and the AI's planning horizon is
long enough so the long-term consequences of keeping agreements
matter, then maybe its actual behavior would be better than something
you could easily infer from the spec. Also, if it has been configured
to be Friendly toward the people it's making commitments to, and those
people are relying on it keeping its commitments, then the compassion
and respect it has for those people are likely to cause it to keep the
commitments.
Or maybe it would feed the hungry methamphetamine addict, if that
seemed more important. Hmm, if the methamphetamine addict traded his
food money for drugs, that would tend to lead the AI to conclude that
he doesn't care much about eating, which would tend to lead to the AI
not doing anything extraordinary to feed him.
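That inference can be sketched as a toy revealed-preference update (my own framing, not anything from the paper): observing someone trade X away for Y is evidence that they value Y over X.

```python
def observe_trade(prefs, gave_up, chose):
    """Nudge estimated preferences after seeing `chose` taken over
    `gave_up`; repeated observations accumulate."""
    prefs[chose] = prefs.get(chose, 0.0) + 1.0
    prefs[gave_up] = prefs.get(gave_up, 0.0) - 1.0
    return prefs

# The addict trades food money for drugs, so the AI's estimate of how
# much he wants food drops, and feeding him looks like a low-value
# intervention.
prefs = observe_trade({}, gave_up="food", chose="drugs")
prefs["food"] < prefs["drugs"]  # -> True
```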
Things might get weird toward the end of the planning horizon. At
that point, the AI is trying to put the world soon into a state
desired by the humans, and the humans have a longer planning horizon
than the AI, so the important long term planning is happening in the
AI's model of the humans. I *think* it would still behave reasonably,
but I'm not sure and devoting some effort to looking for
counterexamples is probably worthwhile. So far as I can tell, it's
important to have a finite planning horizon to avoid indefinitely
delayed gratification [6].
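The deferred-gratification point can be illustrated with a toy model (my own construction, with invented rewards): an agent that only sums rewards within its horizon will eventually cash in, whereas a payoff that always sits just past the horizon is never chosen.

```python
def plan_value(rewards, horizon):
    """Value of a plan = sum of its per-step rewards, counting only the
    first `horizon` steps; everything later is invisible to the agent."""
    return sum(rewards[:horizon])

consume = [1, 0, 0, 0, 0]   # small payoff now
invest  = [0, 0, 0, 0, 10]  # big payoff at step 5

plan_value(consume, horizon=3)  # -> 1
plan_value(invest, horizon=3)   # -> 0: the payoff falls past the horizon
plan_value(invest, horizon=5)   # -> 10: a longer horizon flips the choice
```

With an infinite horizon and an always-available "invest a little longer" option, cashing in could be postponed forever, which is the failure mode the finite horizon avoids.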
If someone can devise an example of using anything resembling AIXI to
specify verbal behavior, I'd have a lot more choices here. I tried
for a while and got nowhere.
-- Tim Freeman http://www.fungible.com firstname.lastname@example.org
[1] http://www.fungible.com/respect/paper.html
[2] http://www.sl4.org/wiki/CoherentExtrapolatedVolition
[3] http://www.neweuropeancentury.org/GodAI.pdf
[4] http://rationalmorality.info/wp-content/uploads/2007/12/practical-benevolence-2007-12-06_iostemp.pdf
[5] http://www.overcomingbias.com/2007/11/complex-wishes.html
[6] http://www.fungible.com/respect/paper.html#deferred-gratification
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:37 MDT