Re: Anti-singularity spam.

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed May 03 2006 - 10:36:11 MDT


Bob Seidensticker wrote:
> Eliezer: You're very skeptical of future predictions. Sounds good to me.
> But tell me what we should do about the situation we find ourselves in.
> There are lots of irresponsible predictions, and the public buys them. They
> figure, "Well, this person knows more than I do, so what choice do I have
> but believe it?" Should we sound the alarm? If not, is that because it
> doesn't matter if the public is deluded or because we're powerless to do
> anything?

I read the situation with popular futurism as screwed up far beyond the
point where I could repair it, unless I wanted to spend my whole life
doing it, and I have larger fish to fry. Building an AI isn't easy,
but it would be easier to build an AI than to get people to stop
trying to predict AI's arrival time.

> How do you respond to Kurzweil's predictions? (I don't mean to pick on him,
> but he seems to me to have the highest profile at the moment.) Does he
> follow the Way or is he a loose cannon?

Kurzweil certainly doesn't belong in the same category as the random
newspaper quote generators. "The Singularity Is Near" makes a detailed
attempt to support his main talking points. As such, if I wanted to
critique him, I'd have to do it in detail - Kurzweil deserves that, and
would be justly annoyed if I responded to a chapter with a paragraph.

But, here's a copy of a section from a book chapter I recently wrote -
the book chapter being titled "Cognitive biases potentially affecting
judgment of global risks", for Nick Bostrom's forthcoming edited volume
"Global Catastrophic Risks".

**

4: The conjunction fallacy

        Linda is 31 years old, single, outspoken, and very bright. She majored
in philosophy. As a student, she was deeply concerned with issues of
discrimination and social justice, and also participated in anti-nuclear
demonstrations.

        Which of the following is more probable:
        1) Linda is a bank teller and is active in the feminist movement.
        2) Linda is a bank teller.

85% of 142 undergraduates at the University of British Columbia
indicated that (1) was more probable than (2). (Tversky and Kahneman
1983.) Since the given description of Linda was chosen to be similar to
a feminist and dissimilar to a bank teller, (1) is more representative
of Linda's description. However, ranking (1) as more probable than (2)
violates the conjunction rule of probability theory which states that
p(A & B) ≤ p(A). Imagine a sample of 1,000 women; surely more women in
this sample are bank tellers than are feminist bank tellers. The
original version of this study included 6 other statements, such as
"Linda is an insurance salesperson" and "Linda is active in the feminist
movement", and asked students to rank the 8 statements by probability.
(Tversky and Kahneman 1982.) However, it turned out that removing the
disguising statements had no effect on the incidence of the conjunction
fallacy - one of what Tversky and Kahneman (1983) characterize as "a
series of increasingly desperate manipulations designed to induce
subjects to obey the conjunction rule."
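
(An aside, not part of the chapter text: the conjunction rule is easy
to check by brute force. Here is a minimal Python sketch of the
1,000-women thought experiment; the base rates are made-up illustrative
numbers, not data from the study.)

    import random

    random.seed(0)
    N = 1000
    # Illustrative base rates - assumptions, not figures from the study:
    P_TELLER = 0.05    # chance a given woman is a bank teller
    P_FEMINIST = 0.30  # chance a given woman is a feminist

    tellers = 0
    feminist_tellers = 0
    for _ in range(N):
        is_teller = random.random() < P_TELLER
        is_feminist = random.random() < P_FEMINIST
        tellers += is_teller
        feminist_tellers += is_teller and is_feminist

    # Whatever the base rates, the conjunction can never outnumber its
    # conjunct: every feminist bank teller is a bank teller.
    print(tellers, feminist_tellers)
    assert feminist_tellers <= tellers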

The conjunction fallacy also applies to futurological forecasts. Two
independent groups of professional analysts at the Second International
Congress on Forecasting were asked to rate, respectively, the
probability of "A complete
suspension of diplomatic relations between the USA and the Soviet Union,
sometime in 1983" or "A Russian invasion of Poland, and a complete
suspension of diplomatic relations between the USA and the Soviet Union,
sometime in 1983". The second event was rated significantly more
probable. (Tversky and Kahneman 1983.)

In Johnson et al. (1993), MBA students at Wharton were scheduled to
travel to Bangkok as part of their degree program. Several groups of
students were asked how much they were willing to pay for terrorism
insurance. One group of subjects was asked how much they were willing
to pay for terrorism insurance covering the flight from Thailand to the
US. A second group of subjects was asked how much they were willing to
pay for terrorism insurance covering the round-trip flight. A third
group was asked how much they were willing to pay for terrorism
insurance that covered the complete trip to Thailand. These three
groups responded with average willingness to pay of $17.19, $13.90, and
$7.44 respectively.

According to probability theory, adding additional detail onto a story
must render the story less probable. It is less probable that Linda is
a feminist bank teller than that she is a bank teller, since all
feminist bank tellers are necessarily bank tellers. Yet human
psychology seems to follow the rule that adding an additional detail can
make the story more plausible.

People might pay more for international diplomacy intended to prevent
nanotechnological warfare by China, than for an engineering project to
defend against nanotechnological attack from any source. The second
threat scenario is less vivid and alarming, but the defense is more
useful because it is more general. More valuable still would be
strategies which make humanity harder to extinguish without being
specific to nanotechnological threats - such as colonizing space, or
the approach to AI discussed in Yudkowsky (this volume). Security
expert Bruce Schneier observed
(both before and after the 2005 hurricane in New Orleans) that the U.S.
government was guarding specific domestic targets against "movie-plot
scenarios" of terrorism, at the cost of taking away resources from
emergency-response capabilities that could respond to any disaster.
(Schneier 2005.)

Overly detailed reassurances can also create false perceptions of
safety: "X is not an existential risk and you don't need to worry about
it, because A, B, C, D, and E"; where the failure of any one of
propositions A, B, C, D, or E potentially extinguishes the human
species. "We don't need to worry about nanotechnologic war, because a
UN commission will initially develop the technology and prevent its
proliferation until such time as an active shield is developed, capable
of defending against all accidental and malicious outbreaks that
contemporary nanotechnology is capable of producing, and this condition
will persist indefinitely." Vivid, specific scenarios can inflate our
probability estimates of security, as well as misdirecting defensive
investments into needlessly narrow or implausibly detailed risk scenarios.

**

And some additional material deleted from an earlier draft of the same
chapter:

**

Even when people bet money on real events, they still fall prey to the
conjunction fallacy:

        Consider a regular six-sided die with four green faces and two red
faces. The die will be rolled 20 times and the sequence of greens (G)
and reds (R) will be recorded. You are asked to select one sequence,
from a set of three, and you will win $25 if the sequence you chose
appears on successive rolls of the die. Please check the sequence of
greens and reds on which you prefer to bet.

        1. RGRRR
        2. GRGRRR
        3. GRRRRR

125 undergraduates at UBC and Stanford University played this gamble
with real payoffs. 65% of subjects chose sequence (2). Sequence (2) is
most representative of the die, since (2) contains the greatest
proportion of green faces. However, sequence (1) dominates sequence (2)
- to win (2), you must roll sequence (1) preceded by a green face. The
probability of (2) must be two-thirds that of (1). 76% of research
subjects, when presented with this argument, agreed and switched choices.
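
(Again an aside, not part of the chapter text: the dominance claim can
be checked by simulation. A minimal sketch - my own illustration, not
part of the original study - estimating how often each sequence appears
as consecutive rolls within 20 throws of the four-green, two-red die:)

    import random

    random.seed(0)

    def hit_rate(seq, trials=100000):
        # Estimated chance that seq appears on successive rolls within
        # 20 throws of a die with four green (G) and two red (R) faces.
        hits = 0
        for _ in range(trials):
            rolls = "".join(random.choice("GGGGRR") for _ in range(20))
            hits += seq in rolls
        return hits / trials

    for seq in ("RGRRR", "GRGRRR", "GRRRRR"):
        print(seq, hit_rate(seq))

    # Every occurrence of GRGRRR contains an occurrence of RGRRR, so
    # sequence (1) cannot do worse than sequence (2).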

The conjunction fallacy also applies to futurological forecasts:

        Please rate the probability that the following event will occur in 1983...
        [Version 1]: A massive flood somewhere in North America in 1983, in
which more than 1,000 people drown.
        [Version 2]: An earthquake in California sometime in 1983, causing a
flood in which more than 1,000 people drown.

Two independent groups of UBC undergraduates were respectively asked to
rate the probability of Version 1 and Version 2 of the event. The group
asked to rate Version 2 responded with significantly higher probabilities.

In each of these experiments, human psychology fails to follow the rules
of probability theory. According to probability theory, adding
additional detail onto a story must render the story less probable. It
is less probable that Linda is a feminist bank teller than that she is a
bank teller, since all feminist bank tellers are necessarily bank
tellers. It is less probable that the sequence GRGRRR will be rolled
than RGRRR. Yet human psychology seems to follow the rule that adding
an additional detail can make the story more plausible. The extra
detail increases the vividness of the hypothetical event, or supplies a
plausible-sounding cause where no cause comes readily to mind, or
renders the event more "representative" of the generating process.
North America is not famous for floods, but California is famous for
earthquakes; a massive flood caused by an earthquake in California
sounds more plausible than a massive flood in North America, even though
it is necessarily less probable. Similarly, offering to sell insurance
against terrorism on the flight from Thailand to the US is a vivid
scenario that brings many possible causes to mind, leading to a
willingness-to-pay more than twice the price evoked by offering to sell
insurance against terrorism for the entire trip.

As the above experiments illustrate, human beings are not consciously
aware of the conjunction fallacy. Futurists who spin richly detailed,
persuasive scenarios are not consciously lying, any more than students
who bet on GRGRRR are betting to lose. Nonetheless, not a few of the
positions taken on existential risks can be characterized as "absurdly
detailed".

Fischhoff (1982) notes: 'The probability of its weakest link should set
an upper limit on the probability of an entire narrative. Coherent
judgments, however, may be compensatory, with the coherence of strong
links "evening out" the incoherence of weak links. This effect is
exploited by attorneys who bury the weakest link in their arguments near
the beginning of their summations and finish with a flurry of
convincing, uncontestable arguments.'

**

References from above:

Fischhoff, B. 1982. For those condemned to study the past: Heuristics
and biases in hindsight. In Kahneman et al. (1982): 332-351.

Johnson, E., Hershey, J., Meszaros, J., and Kunreuther, H. 1993. Framing,
Probability Distortions and Insurance Decisions. Journal of Risk and
Uncertainty, 7: 35-51.

Kahneman, D., Slovic, P., and Tversky, A., eds. 1982. Judgment under
uncertainty: Heuristics and biases. New York: Cambridge University Press.

Schneier, B. 2005. Security lessons of the response to hurricane
Katrina.
http://www.schneier.com/blog/archives/2005/09/security_lesson.html.
Viewed on January 23, 2006.

Tversky, A. and Kahneman, D. 1982. Judgments of and by
representativeness. In Kahneman et al. (1982): 84-98.

Tversky, A. and Kahneman, D. 1983. Extensional versus intuitive
reasoning: The conjunction fallacy in probability judgment.
Psychological Review, 90: 293-315.

**

Back to Kurzweil.

It is necessarily more probable that "someone will create AI" than that
"someone will create AI by reverse-engineering the human brain". It is
necessarily more probable that "some form of smarter-than-human
intelligence will come into existence" than that "smarter-than-human
intelligence will come into existence when we merge with our AIs by
using increasingly sophisticated brain-computer interfaces and
eventually medical nanotechnology to add new neurons until finally our
biological brains are a small fraction of our entire selves..."

_The Singularity Is Near_ includes far, far too many details to be
true. That's my largest criticism. (But I really ought to pick a specific
detail from TSIN and argue at length that it is wrong, for the criticism
to carry through. Otherwise you might as well say the same about a
detailed physics textbook. The details that I singled out above are
justifications which I believe improbable, attached to predictions that
I believe probable (as vaguely stated); but you have no reason to take
my word for this.)

I don't think Kurzweil is familiar with the literature on heuristics and
biases. But of course it is necessarily more probable that Kurzweil has
committed the conjunction fallacy, than that Kurzweil has committed the
conjunction fallacy because he isn't familiar with the literature on
heuristics and biases. By adding to my prediction the extra detail that
"Kurzweil isn't familiar with the literature on heuristics and biases",
I give myself one more chance to be wrong. And since it isn't
absolutely necessary to my thesis that Kurzweil has not read "Judgment
Under Uncertainty", I may as well omit that assertion from my thesis.
This will decrease the apparent plausibility of my prediction, since I'm
not attaching a plausible cause for my conclusion. But it is
necessarily more likely that "The Singularity Is Near includes too much
detail to be true" than that "TSIN includes too much detail to be true
because Kurzweil hasn't read the literature on the conjunction fallacy."

This should not be taken as a harsh criticism of Kurzweil, because it is
quite possible to be a diligent, serious futurist and never run across
the field of heuristics and biases. Serious futurism is a disorganized
field. There is no standard reading list for futurists that includes
cognitive biases; serious futurists are all self-trained. I suspect
that the vast majority of other scientists in Kurzweil's place would
have made the same mistake. I myself only started being aware of
heuristics and biases in 2003, and I had to throw out all my futurism
from previous years and start over.

I think that Kurzweil is a serious, honest, on-the-front-lines futurist
who does his best to justify his predictions. I don't think Kurzweil is
e.g. consciously aware that each additional detail he specifies in his
justifications drives down the joint probability of his entire book.
Since I have not studied Kurzweil extensively and do not have a history
of correct predictions about him, I may be wrong.
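
(To make the joint-probability arithmetic concrete, a toy calculation
with invented numbers - not an estimate of anything in TSIN:)

    # Toy numbers only: suppose a scenario rests on 20 independent
    # details, each individually 90% likely to be right.
    p_detail = 0.9
    n_details = 20
    print(p_detail ** n_details)  # ~0.12: the full conjunction is
                                  # probably wrong somewhere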

There's plenty of middle ground between your two alternatives of
following the Way and being a loose cannon, and that's where I'd place
Kurzweil.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

