Re: justification

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Fri Aug 05 2005 - 00:37:42 MDT


>Okay, I'm not really sure what you were trying to say after that first
>sentence. Perhaps "Since 'There are exploits' is a very vague claim, and
>'there are ninja hippos' is a very specific claim, the former claim has a
>higher probability." I don't want to attack this claim, because I'm not
>sure it's yours. Care to clarify?

You interpreted correctly.

> > Since the Kolmogorov complexity of a god or a ninja hippo which
> > wants you to do X (e.g. one which changes the utility implications of
> > any particular behavior in any particular way) is roughly constant
> > across the space of possible values of X, and since we have no
> > Bayesian-valid evidence for updating our priors, nor any way of gaining
> > such evidence, our rational behavior does not differ from what it would
> > be if ninja hippos did exist.
>
>So if a fear satisfies these three conditions, we ought not to worry about
>it. Now, the last two conditions are just to say "We are not justified in
>believing in ninja hippos," and furthermore both support my position on
>exploits, as we have no valid evidence for exploits, and no way of gaining
>such evidence.

Yes, but the first condition is not satisfied by ninja hippos. "Exploits"
denotes something with a high prior probability.
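
To make that concrete, here is a minimal sketch in Python, with made-up
numbers (the individual priors are pure assumptions for illustration), of why
the vague claim "there is at least one exploit" must be at least as probable
as any single specific exploit hypothesis:

    # Hypothetical priors for a few specific, mutually compatible exploit
    # hypotheses; the numbers are assumptions chosen only for illustration.
    candidate_exploit_priors = [0.05, 0.02, 0.01, 0.01]

    # "There are exploits" is the disjunction of all such hypotheses, so
    # P(at least one) = 1 - P(none), treating them as independent for
    # simplicity.
    p_none = 1.0
    for p in candidate_exploit_priors:
        p_none *= (1.0 - p)
    p_at_least_one = 1.0 - p_none

    print(p_at_least_one)  # ~0.087, already above any single specific prior

The vague claim only gets more probable as more specific exploit hypotheses
are added to the disjunction, whereas "ninja hippos who want X" is a single
highly specific conjunction.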

>Honestly, as might be expected from my pathetic, substandard rationality,

The standards on SL4 are supposed to be very, very high, as in
Earth's-last-best-hope high. Sam, Merry, and Pippin were not pathetic, and
they actually ended up being useful when the author set things up to make
them so, but they shouldn't have been trying to save the world, and in real
life, as opposed to a story, they would have been a disadvantage rather than
an advantage.

>don't know why the first condition has any more than a trivial impact on
>the question,

Then you have to learn that before we can usefully continue. The prior
probability of something being true should not be ignored. Rationality is a
technique for aggregating evidence to form beliefs and actions, but humans
typically ignore all but the first, most recent, or strongest piece of
evidence. You really should deeply understand Bayesian probability theory
and the many ways in which it differs from ordinary attempts to reason. The
former knowledge is available in web tutorials such as the ones Eliezer has
written and posted in this list's archives (I recommend reading everything
he has written, and the things he has responded to, at the very least); the
latter in Kahneman and Tversky's book "Judgment Under Uncertainty:
Heuristics and Biases". Eliezer told me about two years back that he wouldn't
speak to me again until I had read it, and he was right: it is that important
to read and understand, in general, how humans think badly, and to avoid
those mistakes.
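
As a minimal sketch, in Python and with made-up numbers, of what aggregating
evidence means in practice: the odds form of Bayes' theorem multiplies the
prior odds by every likelihood ratio (assuming the pieces of evidence are
conditionally independent), and contrast that with updating on only the
single strongest piece:

    def posterior_odds(prior_odds, likelihood_ratios):
        # Multiply the prior odds by each likelihood ratio in turn.
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds

    prior_odds = 1.0 / 9.0               # prior probability of 0.1
    evidence = [3.0, 2.0, 2.5, 4.0]      # four individually modest ratios

    full = posterior_odds(prior_odds, evidence)              # ~6.7 : 1
    strongest = posterior_odds(prior_odds, [max(evidence)])  # ~0.44 : 1

    print(full / (1.0 + full))            # ~0.87, all evidence aggregated
    print(strongest / (1.0 + strongest))  # ~0.31, most evidence thrown away

Several individually weak pieces of evidence can add up to a strong
conclusion, which is exactly what is lost when only one piece is attended to.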

>so help me out: suppose that the question about the existence of exploits
>satisfied conditions (2) and (3); that is, we have no evidence for
>exploits, and no way to gain evidence. But suppose that for all boxed AIs
>attempting to find exploits in their boxes who want you to do X (or
>whatever the analogous example would be), their complexity is not constant
>across possible X. Ought we then to believe in exploits? Why?

I'm not following this, but I think the statement above should explain it.

>PS: I didn't really know what you were getting at with the stuff about
>alchemists and metaphysics. If it's important, please clarify it for me.

The stuff about alchemists was a response to the poster who used them as an
example of people trying to do something "impossible". I pointed out that
the means by which they attempted to realize their goal were ineffectual, a
failure of intelligence, but their real goal was "understand nature and use
that knowledge to become rich", a goal which is eminently achievable with
adequate intelligence. There is all the difference in the world between a
particular action not leading towards a particular goal, which happens all
the time, and a particular utility function being non-satisfiable at any
level of intelligence, which we have never known to happen in interesting
cases.

     For now, one point. You have a feeling that exploits are impossible,
no evidence that they are impossible, and no evidence that they are
possible. In other words, you have your feeling and no other evidence for
or against their possibility. In all honesty, how often have feelings held
with this strength turned out to be wrong? How often have such feelings
turned out to be wrong when many other truly brilliant people, essentially
all of the people who had seriously considered the question, disagreed with
you? Note that you are advocating the non-conservative position, the
position that will bring disaster if it turns out to be incorrect.
     In such a situation, it appears to me that you would most often act
optimally by asking yourself "what am I not understanding?", and then
asking the people who disagree with you what it is you are not
understanding. Your starting assumption should be that you, not they, are
wrong, but you should wish to understand, both so that you will be right and
well informed, and in order to check whether you really are wrong, in case
of the small chance that you might actually be right. Then, if after they
have explained a few times you still don't understand, you should conclude
that either a) they are badly confused and unreasonable people, b) you just
aren't capable of understanding them, or, most probably, c) you desperately
need some very large quantity of background material before you will be
ready to follow their argument. Usually, for the smartest percent or two of
the population, which you probably belong to, c) will be correct, though
different people seem to have different mental weaknesses.
     At any rate, you seem to have been doing something like that, or I
would not have bothered explaining.


