Re: Un-importance of (Re: The Conjunction Fallacy Fallacy)

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Aug 29 2006 - 11:48:03 MDT


AT LAST!!!! Some more people are beginning to want to talk about the
actual issues.

The point at issue (well, the one I raised, and I was the one who
started this) is a fairly subtle one, and not easy to convey to amateurs
like Eliezer: by focussing on the human mechanism as a "malfunctioning
version of a perfect reasoner" you could completely ruin your chances of
building a viable AGI. The reason? There are indications that what the
human system is doing is deploying a bunch of powerful and useful
mechanisms that happen to be inappropriate for the task at hand (indeed,
whenever the system is on the threshold of acquiring a new skill, that
is exactly what it does).

What does this mean? Well, it *could* mean that to build a powerful
thinking system you need 99% <clever mechanisms for building new
concepts, etc.> and 1% pure reasoning system. And the 1% pure reasoning
system is something that can only be used under some circumstances, and
it has to be acquired, because it IS just another heuristic.
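
For concreteness, here is a toy sketch of what such an architecture
might look like. This is purely my own illustration (the class names
and the applicability test are invented for this post, not taken from
any actual system): a pool of heuristics in which the "pure reasoner"
is just one more member, usable only when its narrow preconditions hold.

    # Toy sketch of the 99%/1% idea above; all names are hypothetical.
    class ConceptBuilder:
        """Stands in for the 99%: broad concept-building mechanisms."""
        def applies(self, problem):
            return True                       # broadly applicable
        def solve(self, problem):
            return f"rough answer to {problem['goal']} by analogy"

    class PureReasoner:
        """Stands in for the 1%: formal reasoning, acquired like any
        other heuristic and applicable only in narrow circumstances."""
        def applies(self, problem):
            return problem.get("well_formalized", False)
        def solve(self, problem):
            return f"deductive answer to {problem['goal']}"

    def think(problem, heuristics):
        # The pure reasoner has no privileged status: it is tried in
        # turn, like every other heuristic, and only when it applies.
        for h in heuristics:
            if h.applies(problem):
                return h.solve(problem)

    pool = [PureReasoner(), ConceptBuilder()]
    print(think({"goal": "plan a route"}, pool))
    print(think({"goal": "prove a lemma", "well_formalized": True}, pool))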

The interesting thing is that it is hard to decide between this and the
alternative (the malfunctioning-perfect-reasoner view): it needs
science! It needs empirical data!

So far, there are plenty of indications (to people well versed enough in
the details of cognitive science) that the picture suggested above IS
the most likely explanation.

But if you go around interpreting the heuristics and biases results in
the way that some people (e.g. Yudkowsky) interpret them, you close off
the door to even the *possibility* that this alternative might be
correct. And if those interpretations are wrong, but carry on dominating
the research field in spite of that, AI research could be stuck in a
quagmire for another fifty years.
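
For anyone joining the thread late, recall what the headline result
(the conjunction fallacy, from Tversky and Kahneman's "Linda" problem)
actually measures: people rate a conjunction as more probable than one
of its own conjuncts, which no coherent probability assignment permits.
The numbers below are invented purely for illustration.

    # Any assignment of probabilities must satisfy P(A and B) <= P(A).
    p_teller = 0.05                 # assumed P(Linda is a bank teller)
    p_feminist_given_teller = 0.20  # assumed P(feminist | bank teller)
    p_both = p_teller * p_feminist_given_teller
    assert p_both <= p_teller       # holds for ANY choice of numbers

Subjects in the experiments nevertheless rank "bank teller and
feminist" above "bank teller"; the dispute here is over what that
pattern tells us about the underlying mechanism.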

Hence: a BIG and important question, if you care about whether or not
AI researchers spend the next fifty years pissing into the wind.

That is why the question was worth discussing.

Needless to say, all of this is going *way* over the heads of people
like Eliezer. Or it would, if people like that had the reading skills
to get more than one paragraph into a post.

Richard Loosemore.

Olie Lamb wrote:
> Why focus on any particular cognitive bias?
>
> Please correct me if any of the following are the slightest bit
> controversial:
>
> 1) All human brains have many cognitive biases
>
> 2) Human brains do not use Bayesian Reasoning (nb: 2 can be derived
> from 1)
>
> 3) Any _really_ powerful AI needs to avoid the same cognitive-bias
> pitfalls as humans
>
> 4) A really-powerful-AI shall not live by Bayes alone (or, at least, a
> Seed-AI can't)
>
> 5) Any would-be-powerful-AI is going to utilise some other decision
> theory, at least in part.
>
> If we're all very clear and in agreement about this, why is any
> particular cognitive bias excruciatingly important?
>
> A fair understanding of human decision-making processes is good for
> informing AI research, but I don't see the importance of getting hung
> up on any particular aspect of the human brain.
>
> ANALOGY: Plane designers should have a basic understanding of how
> birds fly, and the aerodynamics of birds' bodies. However, for
> aeroplane designers to bicker over the turbulence effects between
> feathers is /irrelevant/ to the task of designing an aeroplane, even
> a mechanical wing-flapping aeroplane. Birds aren't perfectly
> aerodynamic. They could be improved. They are a good example of how
> a flying machine can work, but aren't the only one. Yes, designing a
> working AI is harder than designing a plane. My point is that
> aeronautical engineers were not bird-biologists even before the
> Wright brothers.
>
> So, fer crying out loud, don't read too much into any particular
> cognitive bias. They are significant, but I doubt that any amount of
> study of human cognitive biases will tell one how to build an AI.
>
> -- Olie
>
>


