**From:** Ben Goertzel (*ben@goertzel.org*)

**Date:** Tue Sep 13 2005 - 13:03:26 MDT

**Next message:** Michael Wilson: "Re: Logics at multiple levels of abstraction"
**Previous message:** Phil Goetz: "Re: Logics at multiple levels of abstraction"
**In reply to:** Ben Goertzel: "FW: Hempel's Paradox -- OOPS!"
**Next in thread:** Eliezer S. Yudkowsky: "Re: Hempel's Paradox -- OOPS!"
**Reply:** Eliezer S. Yudkowsky: "Re: Hempel's Paradox -- OOPS!"

> So, in conclusion, according to PTL applied correctly, we may say:
>
> "The observation of a nonblack nonraven in a population known to
> be finite and *known to contain at least one raven* may be
> considered as a small amount of evidence in favor of the
> existence of a black raven in that population."
>
> Without the assumption that the population is known to contain at
> least one raven, the argument I've given above fails.

Or maybe not... actually, now that I think about it again, it seems
that even if one doesn't assume a whole raven in the population,
but merely a fractional raven (i.e., a probability greater than zero
of there being a raven), the argument may still hold.
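To make this concrete, here is a toy exact-enumeration sketch (my own construction for this email, not part of PTL): with just two objects, a uniform prior over raven/nonraven and black/nonblack, and the knowledge that at least one object is a raven, the direction of the evidence from a nonblack nonraven turns out to depend on the sampling protocol — it is positive if the object was drawn from among the nonblack things, negative if drawn from everything.

```python
# Toy enumeration for Hempel's paradox with N = 2 objects.
# Uniform prior over the 16 configurations, conditioned on ">= 1 raven".
# The two sampling protocols below are illustrative assumptions.
from fractions import Fraction
from itertools import product

STATES = [("raven", "black"), ("raven", "nonblack"),
          ("nonraven", "black"), ("nonraven", "nonblack")]

configs = [c for c in product(STATES, repeat=2)
           if any(kind == "raven" for kind, _ in c)]  # known: >= 1 raven

def has_black_raven(c):
    return ("raven", "black") in c

# Prior probability that a black raven exists, given >= 1 raven.
prior = Fraction(sum(has_black_raven(c) for c in configs), len(configs))

def lik_nonblack_draw(c):
    # Likelihood of the observation if we sample uniformly among the
    # NONBLACK objects and find it to be a nonraven.
    nonblack = [o for o in c if o[1] == "nonblack"]
    if not nonblack:
        return Fraction(0)
    return Fraction(nonblack.count(("nonraven", "nonblack")), len(nonblack))

def lik_any_draw(c):
    # Likelihood if we sample uniformly among ALL objects and the draw
    # turns out to be a nonblack nonraven.
    return Fraction(c.count(("nonraven", "nonblack")), len(c))

def posterior(lik):
    z = sum(lik(c) for c in configs)
    return sum(lik(c) for c in configs if has_black_raven(c)) / z

post_nonblack = posterior(lik_nonblack_draw)
post_any = posterior(lik_any_draw)
print(prior, post_nonblack, post_any)  # 7/12, 2/3, 1/2
```

Under the sample-a-nonblack-object protocol the posterior rises from 7/12 to 2/3 — a small amount of evidence in favor, as claimed above — while under the sample-anything protocol it falls to 1/2.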

But anyway, to get back to where this thread started:

-- As it turns out, standard probabilistic semantics and PTL say
basically the same thing about Hempel's paradox, though they
express it in different ways. This is interesting to me,
though in the big picture not surprising: PTL was created in
order to formulate probabilistic ideas in a way convenient for
AGI, not to contradict probability theory.

-- My issue with probability theory as a foundation for AI has
to do with its inability to deal with some issues in a
computationally efficient way without adding a significant
number of further concepts. The main examples I have in mind
here are attention allocation in a complex cognitive system,
assignment of credit, concept creation, and the learning of
schemata for the control of inference trajectories. These
things certainly can be done in a way that's *consistent* with
probability theory, but they seem to require the addition of a
lot of structures and dynamics that are not suggested by
probability theory.

-- Where complex-systems dynamics (as Loosemore was mentioning)
come in, in the Novamente design, is in dealing with some of
these things that are not handled efficiently by probability
theory alone. In all these cases, the role of the
non-probability-theory mechanisms may be placed in the category
of "hypothesis generation", if one wants to use Bayesian-type
terminology...

-- So, then, my contention is that in complex AI systems, complex
emergent dynamics are very useful for hypothesis generation; and
the hypotheses generated via these mechanisms can then be fed into
probabilistic inference mechanisms.
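As a minimal sketch of that two-stage pipeline (an illustrative stand-in, not the Novamente design — here cheap random proposal plays the role of the emergent dynamics): a generator proposes candidate hypotheses of the form y = k*x, and a probabilistic scorer ranks them by Gaussian likelihood against noisy data.

```python
# Stage 1: hypothesis generation (stand-in for emergent dynamics).
# Stage 2: probabilistic inference over the generated candidates.
import random

random.seed(1)
SIGMA = 0.1
# Noisy observations of the (hypothetical) true law y = 2 * x.
data = [(x, 2.0 * x + random.gauss(0, SIGMA)) for x in range(1, 11)]

def log_likelihood(k):
    # Gaussian log-likelihood of the data under y = k * x (up to a constant).
    return sum(-(y - k * x) ** 2 / (2 * SIGMA ** 2) for x, y in data)

# Stage 1: propose candidate hypotheses -- random here, but any
# mechanism that produces candidates cheaply would do.
candidates = [random.uniform(0.0, 4.0) for _ in range(200)]

# Stage 2: score and select by likelihood.
best = max(candidates, key=log_likelihood)
```

The point of the division of labor is that the scorer never has to search the hypothesis space itself; it only has to evaluate whatever the generator hands it.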

-- Finally, my contention is that the brain largely works this way,

even though it's not explicitly thought of in such a way. The brain

works this way because Hebbian learning is essentially an imprecise

and distributed way of doing some simple forms of probabilistic

inference.
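To illustrate that last point (a toy sketch under assumed Bernoulli statistics, not a claim about actual neural circuitry): a leaky Hebbian rule w += eta * (pre * post - w) converges to E[pre * post], i.e. the co-occurrence probability P(pre=1, post=1), and dividing by a similar trace of the presynaptic rate recovers the conditional probability P(post=1 | pre=1) — a simple, distributed, imprecise form of probabilistic inference.

```python
# Leaky Hebbian traces as running probability estimates.
import random

random.seed(0)
eta = 0.0005         # learning rate
w_xy = 0.0           # Hebbian co-occurrence trace ~ P(x=1, y=1)
w_x = 0.0            # presynaptic rate trace ~ P(x=1)

for _ in range(50_000):
    x = 1 if random.random() < 0.5 else 0   # presynaptic spike, P(x=1)=0.5
    p = 0.8 if x else 0.2                   # true P(y=1 | x)
    y = 1 if random.random() < p else 0     # postsynaptic spike
    w_xy += eta * (x * y - w_xy)            # Hebbian update
    w_x += eta * (x - w_x)

estimate = w_xy / w_x   # ~ P(y=1 | x=1); true value is 0.8
```

The estimate is noisy and only approximate — which is exactly the sense in which Hebbian learning is an imprecise, distributed implementation of simple probabilistic inference.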

-- Ben G


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:52 MDT*