Re: Bayesian Pop Quiz

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Aug 29 2002 - 15:15:38 MDT


Christian L. wrote:
>
> Reading this, I looked up Bayes' theorem in my Probability Theory book,
> and under the theorem itself, it was written: "Never has any theorem
> been misused so much by so many".
>
> I fail to see why anyone would hold this theorem so highly that he
> writes poetry about it. In fact, I do not think that you (Eli) really
> understand the theorem. For instance, this example is given in
> my book on elementary probability theory as a direct application of
> Bayes Th.:

Heh. Well, I am not alone in holding the BPT in very high esteem. There
is a small but growing movement in science to replace the Popperian view
of proof with a Bayesian view, and you will often find "Bayesian
rationalist" used as a more precise synonym for "rationalist", so it's not
just me.

> ** In a land there live two kinds of people: X and Y. Among the Xs,
> 80% are tall. Among the Ys, 1% are tall. The population in this
> country is 10% Xs and 90% Ys.
>
> A tourist randomly meets a person, who happens to be tall. Use Bayes'
> theorem to calculate the probability that this person is an X. **
>
> Can you (Eli) solve it? It ought not be a problem for someone who can
> "see the BPT flowing underneath the surface of all cognition, like
> blood beneath skin."

The answer is obviously 80/89. Out of every 1000 people, 100 are Xs and
80 of them are tall; 900 are Ys and 9 of them are tall; so 80 of the 89
tall people are Xs. I don't know what you intended to prove by asking
me that. There's a similar piece of math
under the definition of BPT in the glossary of GISAI:

http://intelligence.org/GISAI/meta/glossary.html#gloss_bayesian_probability_theorem
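
If you want the arithmetic spelled out, here's a quick sketch in Python
(just re-deriving the 80/89 above; the variable names are my own, not
anything from GISAI):

    # Priors: 10% of the population are Xs, 90% are Ys.
    p_x = 0.10
    p_y = 0.90
    # Likelihoods: how often each kind of person is tall.
    p_tall_given_x = 0.80
    p_tall_given_y = 0.01

    # Bayes: P(X | tall) = P(tall | X) * P(X) / P(tall)
    p_tall = p_tall_given_x * p_x + p_tall_given_y * p_y
    p_x_given_tall = p_tall_given_x * p_x / p_tall
    print(p_x_given_tall)    # 0.8988... = 80/89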

Here's also a little excerpt from a work in progress (don't know if it'll
ever be finished, so don't hold your breath):

> Suppose you know the following: 1% of the North American population
> has cancer. The probability of a false negative, on a cancer test, is
> 2%. The probability of a false positive, on a cancer test, is 10%.
> You take a cancer test and it comes up positive. What is the
> probability that you have cancer?

[some discussion omitted]

> The way the human mind works instinctively is something like this: In
> the beginning, you're told that around 1 in 100 people has cancer.
> Then, the doctor shows you a cancer test and says that if you take the
> test and you don't have cancer, the probability of the test coming up
> positive is only 10%. You take the test and it comes up positive.
> Inside your mind, the evidence of the test results replaces the prior
> probability of 1% and substitutes the new probability of 90% that you
> have cancer. Initially the Bayesian Probability Theorem, even if it
> works, seems like a very alien way of looking at the world - you take a
> test that has a 90% chance of working, it comes up positive, and the
> statistician says that your actual probability of having cancer is
> 49/544, roughly 9% or around one-tenth of what the intuitive
> probability is. Since the Bayesian Probability Theorem is often
> explained by teachers who don't realize how insanely powerful the BPT
> really is, the picture that forms in many students' minds is probably
> something like this: "My prior probability of having cancer is 1 in
> 100. I take a test which is 90% accurate and it comes up positive; in
> any sane world, my probability of having cancer would be 9 out of 10.
> But this strange thing called the Bayesian Probability Theorem says
> that the obvious answer of 9/10 is replaced with the bizarre answer of
> 49/544. I accept it, but I don't understand it."
>
> However, there exists a way in which we can integrate the Bayesian
> Probability Theorem into our intuitive understanding of probabilities.
> Instead of imagining the prior probability of 1% being replaced with
> 90% after the test results arrive, imagine the test results as sliding
> the probability from its starting point. A test that's 90% accurate
> has a lot of weight, but it doesn't have quite as much weight as the 1%
> prior probability. A positive result on the test slides the 1%
> probability up to 9.007%, but not all the way to 90%. The chance of a
> false negative is 2%, so a negative result slides the initial 1%
> probability down to .0224% - getting a negative result on your test
> doesn't replace the initial 1% probability with a higher 2%
> probability! It's important, though, to remember that it isn't just
> the 10% chance of a false positive that matters, but also the 98%
> chance of a true positive if you do have cancer. If there was a 60%
> chance of a true positive (and hence a 40% chance of a false negative),
> then getting a positive result on your test would divide the groups
> into 8,910, 990, 40, and 60; thus, getting a positive result would make
> the calculation (60) / (990 + 60), 2/35 or 5.714%. The intuitive
> translation: The degree to which a result is evidence for X depends,
> not only on the strength of the statement "we'd expect to see this
> result if X were true", but also, vitally, on the strength of the
> statement "we wouldn't expect to see this result if X weren't true."

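Since the cancer example is the same arithmetic, here's the matching
Python sketch (again just my own check of the numbers in the excerpt,
not anything from the work in progress itself):

    # Prior: 1% of the population has cancer.
    p_cancer = 0.01
    # Test characteristics as given in the excerpt.
    p_pos_given_cancer = 0.98      # 2% false negative rate
    p_pos_given_healthy = 0.10     # 10% false positive rate

    p_pos = (p_pos_given_cancer * p_cancer
             + p_pos_given_healthy * (1 - p_cancer))
    p_neg = 1 - p_pos

    # Positive result slides 1% up to 49/544, about 9.007%.
    print(p_pos_given_cancer * p_cancer / p_pos)
    # Negative result slides 1% down to about 0.0224%.
    print((1 - p_pos_given_cancer) * p_cancer / p_neg)
    # Variant with a 60% true-positive rate: 2/35, about 5.714%.
    print(0.60 * p_cancer / (0.60 * p_cancer + 0.10 * (1 - p_cancer)))
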
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

