Essay: On psychological frames of reference

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 08 2002 - 15:31:49 MDT


Ben Goertzel wrote:
>
> Similarly, it's true that if Eliezer or I or any other one individual or
> small group stopped working on Singularity-focused technology, the
> Singularity would still happen. But yet, if *everyone* took this to heart
> and hence stopped making efforts, the Singularity *wouldn't* happen.
> Because the Singularity is composed of the sum total of loads of individual
> efforts like ours.

This is certainly a very commonly given answer to questions of the form "Why
do you think that *you* can do *X*?" where [you] refers to someone who wants
to do it but hasn't done it yet, and [X] is something very difficult. The
reason it's commonly given is that it's a good one. If you start a new
business, for example, and someone who wants to invest says "How do you know
this is going to work?", the correct answer is "Nobody knows for sure that
this particular business will work - that's not where businesses come from.
I think this business has better odds than the average 95% failure rate, and
that's all anyone can ask for. Nothing you can possibly do will raise the
chance of a new business succeeding above 50%."

This is one of those questions that can't really be understood in strictly
normative terms. You have to view it in the context of the human emotional
architecture and our social homeostasis systems. Today, Einstein is a very
high-ranking social figure (dead, but still very high-ranked) by virtue of
his successful invention of Special *and* General Relativity - he not only
did it, he did it twice in a row. The thing is that our mental processes
tend to think of this high rank as being inherent in Einstein the person -
that if someone even *more* brilliant had invented both SR and GR just a few
years earlier, Einstein would still be Einstein. In fact this view is
probably wrong; Einstein's intelligence wouldn't change but the world's view
of his intelligence would. The point is that Einstein's archetypal role in
the world is that of someone super-smart, not super-smart and super-lucky,
or one of a large group of super-smart people who happened to try and
succeed instead of try and fail. People who think about the problem
deliberatively, and whose morals are at least partially interwoven with
modern pro-egalitarian anti-dramatic attitudes, will probably come to the
conclusion that Einstein probably was not the smartest figure of his
generation, just the one with the best combination of intelligence and post
facto success with respect to physics - there wasn't anyone who was both
smarter *and* luckier. Personally, the fact that Einstein pulled it off
twice in a row makes me wonder, but the point is that there is a
pro-egalitarian anti-dramatic standard answer.

However, the conscious answer is not the answer that shows up in our
attitudes, our emotions, our use of "Einstein" as an archetype, and the way
people react to people trying to do something very difficult. In essence,
the intuitive view skips over all the complex reasoning about causation and
holds that:

1) Einstein's achievement of Special and General Relativity proves that
Einstein is super-smart.

2) Einstein has high social rank by virtue of being super-smart.

Now, rank in a hunter-gatherer tribe - and in modern times too, for that
matter - is a precious quantity. So if you are trying to do [X], where [X]
is something very difficult, you rapidly become accustomed to running into
the following chain of logic:

1) You are publicly trying to do [X].
    [AUTOMATIC TRANSLATION:]
2) You are claiming you can do [X].
    [AUTOMATIC TRANSLATION:]
3) You are claiming you are smart enough to do [X].
    [AUTOMATIC TRANSLATION:]
4) You are claiming the high rank that is a consequence of super-smartness.

And so the observer wants you to immediately prove that you are entitled to
this high rank, because otherwise you're trying to lay claim to rank you do
not possess - that is, you are trying to cheat the system.

Ben's Argument from Common Effort is a simple (and accurate) rejoinder to
the effect that "If nobody ever tried to do something until they'd already
proved they could do it, nothing would ever get done." The Argument from
Common Effort is a very effective rejoinder against the above because it
strikes directly at the moral intuitions that are supporting the objection:

1) The Argument from Common Effort acts as a disclaimer of high rank,
because disclaimer of rank is a psychological consequence of the statements
"I'm just like everyone else, really" or "I'm just one of a large group of
people" or "It would happen without me eventually".

2) The Argument from Common Effort strikes directly at the social intuition
that upholding the rank system here serves the common social benefit, by
showing that enforcing the objection as presented would itself impose a
social penalty (no advancement toward [X]).

So what's not to like about the Argument from Common Effort?

The problem is that the Argument from Common Effort, like the original
Argument from If You're So Smart Why Aren't You Rich, is the product of a
way of thinking that runs skew to actual reality. You can either treat
figuring out the truth as a special case of figuring out what's socially
acceptable, or you can treat "figuring out the truth about what's socially
acceptable" as a special case of figuring out the truth.

Whether you're really unique or just one of a horde is a question entirely
orthogonal to how society reacts to either statement being made publicly.
It doesn't even matter if the answer that society likes
really is the right answer; you can't afford to arrive at that answer by
reasoning about what's socially acceptable. You have to do it by reasoning
about what's true. If there is, in your mind, a little circuit that asks
how other people will react to your thinking X while you are trying to
figure out whether X is true, then your mind has short-circuited. Now of
course it's theoretically possible to ask what other people think of an idea
on the grounds that other people are likely to be right, and to factor that
into your confidence. I do this when I talk about General Relativity; I
haven't checked the numbers myself but I have confidence in the people who
have.

The point is that I believe the Earth is round, not because believing the
Earth is flat would make me socially unpopular as a crackpot, but because I
think that modern-day society really is likely to be right about that sort
of thing. Many past societies wouldn't have been.

Now the short-circuit from social acceptability to personal belief is
certainly an *adaptive* short-circuit. We are imperfectly deceptive social
organisms; what we believe affects how others react to us. If mutation pops
up a little circuit that runs from the internal perception "people will
react badly to idea X" and makes it a little more painful to think about
beliefs that support X or predict X, that circuitry is likely to rapidly
become a fixture in the gene pool. The same goes for a circuit that runs
from the internal perception "people are likely to praise me for believing
X" and makes it comfortable and pleasurable to think about beliefs that
support X and chains of reasoning that end by concluding X. It doesn't even
have to be a piece of circuitry. It can be an emergent byproduct of the
pleasure-pain reinforcement architecture for reasoning. The point is that
any heritable variation in this tendency, whether the tendency is originally
emergent or is an actual piece of circuitry, will tend to become genetically
fixed as a result of natural selection on imperfectly deceptive linguistic
organisms. It is one of the more subtle of the many forces that contribute
to rationalization behaviors.

If the truth is precious to you, and if you exist in a contemporary
scientific memetic environment (Richard "What do you care?" Feynman and
Robert "Church of Reason" Pirsig and the public-consumption version of the
story of Galileo "Still It Moves" Galilei), then you wind up with this idea
that you ought to believe things that are true though Hell bar the way.
Most people who are part of the family of truthseekers believe this.
Knowing enough evolutionary psychology to see the dangling puppet strings of
evolution, and having enough native introspective talent to cut the strings
judged obnoxious, is another issue. I should emphasize that "emergent
effects of the pleasure-pain reinforcement architecture on deliberation that
have been genetically fixed by selection pressures" are probably the most
advanced things I've ever tried to deal with, in terms of debugging myself,
and before I tried that I had a hell of a lot of practice on simpler things
with clear subjective correlates, like the blatantly hunter-gatherer
political emotions.

Why the "practice this before you try it" disclaimer? Because if you are
still subject to self-overestimation effects (ancestrally adaptive because
they led you to run for tribal chief), then maybe you *shouldn't* be trying
to filter the "What will my friends think?" emotion through a rational
check. In this case the check actually works in the normative direction: it
is an irrational social pressure that opposes an equally irrational part of
your own mind. *First*
you disentangle your own personal self from the non-normative psychological
effects of rank-seeking, and *then* you disentangle yourself from social
pressures that are the non-normative output of psychological forces that
oppose rank-seeking in others. Now when I say these forces are
"non-normative", I don't mean "outright wrong", I mean that these
psychological forces are built around something other than truthseeking -
they are part of a skew view of reality.

So when someone says, "Eliezer Yudkowsky, you're trying to do [X]" followed
by a bunch of AUTOMATIC TRANSLATION and then "Now prove you're not just a
social cheater claiming undeserved rank", I cannot just turn around and use
the Argument from Common Effort - the rejoinder is part of the same world as
the objection. The entire argument is part of a skew view of reality. The
real answer is that I laid aside every claim of rank long ago, right after I
learned about evolutionary psychology for the first time. Nobody has to
respect me. Nobody has to obey me. I've never said that I deserved power
over others and I've never asked anyone to give me power over others. This
is not just part of an effort to be a nice person, it's part of an effort to
step outside what I think is a non-normative psychological frame of
reference. We shouldn't be orienting our lives around social rank. In that
sense, orienting your view of the world around how to answer the objection
that "you're claiming social rank you don't deserve" is as wrong as
orienting your mind around social rank to begin with. There's an
intermediate stage where what matters to you is ridding your mind of the
want-social-rank effect, and while you're in that intermediate stage,
it's not all that bad an idea to think about the things that other people
are likely to react to as a claim of social rank; it's a good way to clean
out the bugs. But until you're *outside* that frame of reference, not just
at one extreme *within* the frame of reference, you've still got work left
to do.

The real answer to "You think you can do [X] ... AUTOMATIC TRANSLATION" is
to explain why I deny that the AUTOMATIC TRANSLATION is a good way to look
at reality - to deconstruct the question from below, I think the phrase is.
Unfortunately I usually don't have time to do this. But I do try to give
answers that are outside the psychological frame of reference that I think
is wrong. Look at my original answer to the objection. In fact, I'll even
quote it:

Eliezer Yudkowsky wrote:
>
> *Shrug.* It's not my job to be messianic, and it's even less my job to be
> non-messianic. I want to accomplish the largest possible amount of good
> with my life. In a pre-Singularity era, that means doing something that
> relates to the Singularity, because those are the largest stakes currently
> on the table. If I find a $1000 bill lying in the street, I'll pick it up.
> If I find an opportunity to bring the Singularity nearer or make it safer,
> I'll take it. I do see what looks like such an opportunity and I am taking
> it. That's all there is to it. It doesn't take any complex hypotheses
> about my psychology. I suppose it takes more knowledge than usual to see
> that the situation really is that simple, but it doesn't take anything
> besides that.

Do you see what this answer is getting at? It's an attempt to yank the
whole problem out of the hunter-gatherer social psychology and put it into a
more normative frame of reference. I didn't have the time then to go on at
such great length because I wasn't on temporary rest-up hiatus, but it is a
right answer *for the right reasons*. There is normative reasoning that
runs skew to the whole mess of anxieties about how much rank you have or how
much rank other people might think you're claiming.

Why is it so important to make this distinction? At the risk of triggering
an [AUTOMATIC TRANSLATION: showing off], it's because I want my whole mind
to be focused on the truth. I think that if you start focusing on anything
but the truth, you end up somewhere that isn't true. I think that there are
emergent effects in truthseeking that emerge only *after* you've practiced
self-correcting deliberation for a while, and the effects propagate
downward, so that your thoughts come out corrected to begin with; your mind
doesn't go off on the wrong track, and you can carry out an extended
deliberative chain of thoughts, each of which would formerly have required
an entire deliberative session to arrive at. To train yourself into that
state you have to avoid compromise. If you
compromise by choosing to tolerate an error, it means that the source of the
error is still there, and that whatever degree of rationality you achieve is
achieved by balancing your mind against the error, devoting a part of your
energy to struggling with it. But if you decide *not* to compromise, and
devote enough energy to correcting the error whenever you find it, a funny
thing happens; you start to make the error less often.

Let's say you're just beginning to write. You can start out
by saying, "Well, I made a few spelling errors, but those are tolerable.
People will still be able to understand what I've written." And in this
case you've struck your compromise. You stay where you are as a writer.
But if you don't trust compromises in general or want to do the best you
can, then you go back and eliminate your spelling mistakes. So right now,
you're just like the other guy, except that you're spending more time and
effort to correct your spelling mistakes. And the other guy may point out
that you're expending more effort than you're getting back in benefit - and
may be right, in the short term. But after a while, a funny thing happens.
Instead of needing to spend time and effort on correct spelling, instead of
having to enforce correct spelling through deliberation, you find that you
just don't make all that many errors to begin with. And once it's no longer
your spelling errors that leap out at you and distract you when you're
reading your own prose, you find that higher-level errors become visible -
awkwardness of phrase, needless words. The difference isn't between a few
spelling errors and no spelling errors, because correct spelling is only the
first step on a very long path toward being a good writer. When you strike
a compromise you make a decision to stay where you are.

I don't know this part from practical experience with spelling, I'm afraid;
my spelling started out okay. But I know the value of not compromising
from the experience of keeping to the frame of mind where the truth is what
matters. You can't compromise and say, "Well, it's okay if a few of my
thoughts are redirected toward social conformity." (Which is bad not
because it's the evil demon 'conformity' but just because it's anything
other than 'truth'; being 'non-conformist' is just as bad.) Learning to
keep to the 'truth' frame of reference in this one case is only a first
step. There are other errors I didn't see when I was at that stage because
I didn't yet have enough experience with what it feels like when a chain of
thought goes *right*. That feeling of clear thought is what you learn to
cherish, not just because it feels good, but because you see that it works.
You can't get to that destination by compromising... or at least, I don't
know of anyone who's gotten any distance by doing so... because when you
compromise you've made the decision not to start.

And just so that none of this gets taken out of context, I should emphasize
that the first step down this road consists of reading a few popular books
on evolutionary psychology and trying to clean up the blatantly adaptive
political emotions with easily identifiable subjective correlates, like
"seeking status", "rebelling against conformity", "I deserve [Y]", and so
on. It's not so much "Don't try this at home" as "Yes, I did get where I
am today over a very long period of time that involved a lot of effort, and
incremental progress from relatively easy problems to problems that I didn't
even know existed when I started out." There is this idea that
self-awareness challenges are unsolvable, and that the world is divided into
people who acknowledge they have the problems and people who have a dearth
of self-awareness or an excess of grandiosity and so claim to be immune to
them. But the problems are not absolutely intractable. The war is
never finished, but you can get into the habit of winning the battles. It's
just that a randomly selected person saying "I think more clearly" is
statistically more likely to follow it up with "...because of my inherent
virtue, unlike the Godless fools who disagree with me" than with "...but I
emphasize that this is just a morally nonvalent report on my internal state;
I am not asking for any relaxation in the social or moral rules as a
consequence." But you can't give up on the war right at the start because
you're worried you'll end up with a viewpoint of which there exists at least
one possible *over*simplification that could conceivably be clustered with a
bunch of wrong viewpoints; that would be losing a battle to that exact same
piece of circuitry I spoke about earlier.

Sincerely,
Eliezer Yudkowsky.

PS:

http://www.exploitationnow.com/d/20010214.html

"More proof that sometimes it isn't all that hard to tell the difference."

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


