From: Richard Loosemore (firstname.lastname@example.org)
Date: Thu Mar 23 2006 - 08:01:57 MST
>> From: Richard Loosemore <email@example.com>
>> 3) solid data about prior probabilities of at least something (and
>> justification for why the numbers *are* solid, of course)
>> 4) ways to represent subtle questions and statements about real world
>> situations in such a format that a Bayesian reasoning system could
>> actually do something sensible with them (for example, answering
>> questions about abstract analogies)
>> if all this apparatus is just
>> feeding the Bayesian module heaps of low quality data, then heaps of low
>> quality conclusions are what you get out the other end.
> Michael Vassar wrote:
> Humans already have the shopping list of abilities associated with being
> a general intelligence.
> There is no reason that they should need solid data about prior
> probabilities, just active work reconciling reasonable guesses into a
> coherent view.
> One great thing about actual normative reasoning, unlike what humans do
> by default, is that it's not nearly as vulnerable to GIGO contamination
> of initial conditions. While human guesses don't update properly and
> never move very far, normative reasoning produces beliefs that fairly
> rapidly converge to predictively valuable values even in the absence of
> good initial guesses.
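[The convergence property being claimed can be illustrated with a toy Beta-Bernoulli sketch. This example is mine, not anything from the original exchange, and the specific priors and data are made up for illustration: two wildly different initial guesses end up close together once the data dominate.]

```python
# Toy illustration (my example, not from the thread): Bayesian updating
# on a biased coin. With enough data, the posterior estimate converges
# to roughly the same value regardless of the prior -- the claimed
# resistance to GIGO contamination of initial conditions.

def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) prior after
    observing `heads` successes and `tails` failures."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Deterministic data: 700 heads out of 1000 flips (true bias ~0.7).
heads, tails = 700, 300

# Two very different priors: one confident the coin is fair,
# one confident it is heavily biased toward tails.
fair_prior = posterior_mean(50, 50, heads, tails)   # prior mean 0.50
tails_prior = posterior_mean(1, 99, heads, tails)   # prior mean 0.01

print(round(fair_prior, 3), round(tails_prior, 3))  # -> 0.682 0.637
```

Both estimates land near the empirical 0.7 despite starting from opposite guesses; with more flips they would converge further.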
You misunderstand where I was going with my comments: I do, of course,
agree that humans already have these abilities.
Perhaps I can rephrase it and say that any Bayesian reasoning engine
needs some "preprocessing" and "postprocessing" apparatus in order to
function (see the shopping list for details). My contention is that the
pre- and postprocessing is so extensive and complex in the human
cognitive system that it dwarfs the role played by any Bayesian (or
other) reasoning engine. AI researchers implicitly use their own pre-
and postprocessing to help the systems they create (or shield the
systems from the need to do too much of it, by choosing appropriately
constrained domains), thus making the systems seem smarter than they
really are.
To illustrate: would you be able to pick, say, 20 examples from your
own real life thinking, where Bayesian reasoning helped you perform a
task, in domains like the following:
- Producing speech in a party conversation
- Debugging a computer program
- Writing a paper for a conference
- Shopping down at the supermarket
- Driving your car to work
- Teaching some children about biology
- Cooking meals for your family
- Planning a vacation
- Having a vacation
- Planning this year's gardening project
- Fixing a household problem like a plumbing leak
- Choosing a trustworthy painter to paint your house
- Planning your finances
- Learning how to play a musical instrument
- Finding a partner
- Choosing Christmas presents for your relations
- Assessing the validity of a political claim made by someone
- Choosing a new house
- Trying to figure out how to get your child to practice the piano
- Learning how to ice skate
In any of the examples that you generate, do you think that the Bayesian
reasoning (and *not* the preprocessing and postprocessing) was where all
the important work was done? And if you would say that it was, can you
give some convincing argument to back it up?
My claim is that I have just set you an impossible task, because
Bayesian reasoning does not play a significant role in most of the
things that cognitive systems do, and even when it does kick in, what is
often attributed to it is partially the work of the peripheral processes.
By that account, teaching Daniel to do fabulously sophisticated Bayesian
reasoning would avail him nothing if his ability to do the preprocessing
was no better than yours or mine.
I agree that it can be useful in certain situations, and indispensable
in certain other, even more specialized, circumstances. It is just not
the biggest thing that makes an intelligent system intelligent, and I
see no reason to believe that it is *the* most significant next step in
the evolution of intelligence.
P.S. You need look no further than the SL4 List for examples of people
who, professing that they make judgements using normative reasoning of
the highest order, are nevertheless able to produce, on occasion, the
most asinine, emotive, prejudice-driven statements about subjects they
know little about. I can say this only because I am privileged to be an
expert, in a certain unnameable field, and have had deep theoretical and
empirical experience in that field, so at the very least I know what I
am talking about *there* .... and I have seen the aforementioned rubbish
talked about that area, by people who claim to know better.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT