**From:** Ben Goertzel (*ben@goertzel.org*)

**Date:** Sat Jan 15 2005 - 08:27:24 MST

**Next message:**Ben Goertzel: "Probabilistic Philosophy of Mind"**Previous message:**Michael Wilson: "Re: Fuzzy vs Probability"**In reply to:**Stephen Tattum: "Fuzzy vs Probability"**Next in thread:**Eliezer Yudkowsky: "Re: Fuzzy vs Probability"**Reply:**Eliezer Yudkowsky: "Re: Fuzzy vs Probability"**Messages sorted by:**[ date ] [ thread ] [ subject ] [ author ] [ attachment ]

Hi Stephen,

Well, I'll give you my own point of view on this. I am not affiliated with SIAI, and I have some differences with their approach, but I think I *almost* see eye-to-eye with them on the issue of probability theory.

About philosophy of mind: I agree somewhat with your criticism. My own approach to AI is founded on years of thinking I did about the philosophy of mind, as well as more scientific considerations. I think that Eliezer's work could use a little more depth in this area.

About probability theory: I agree with Eliezer that, in principle, brain-minds act as if they were obeying an approximation to probability theory. Now, whether this is explicit or implicit in the structure and dynamics of a given brain-mind is a totally different question. In the brain I believe it's implicit, and in

http://www.goertzel.org/dynapsyc/2003/HebbianLogic03.htm

I have made some arguments as to how Hebbian learning in the brain might give rise to probabilistic inference on the emergent level. In an AI system it may be implicit or explicit depending on the design. In my own Novamente AI design it's explicit, and I think Eliezer is proposing to make it explicit in his AI design as well.
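To give a flavor of the implicit case: a minimal toy sketch (my own illustration here, not the mechanism from the Hebbian Logic paper or from Novamente) of how a Hebbian synapse that simply counts co-activations ends up estimating a conditional probability, the basic quantity of probabilistic inference. The neuron names and the 80% coupling rate are made up for the example.

```python
import random

def hebbian_conditional(events, a, b):
    """Estimate P(b | a) from co-activation counts, Hebb-style:
    strength grows with how often b fires when a fires."""
    a_fires = sum(1 for e in events if a in e)
    both_fire = sum(1 for e in events if a in e and b in e)
    return both_fire / a_fires if a_fires else 0.0

random.seed(0)
# Simulated activation records: neuron "b" fires 80% of the time "a" does.
events = []
for _ in range(10000):
    e = set()
    if random.random() < 0.5:
        e.add("a")
        if random.random() < 0.8:
            e.add("b")
    events.append(e)

print(hebbian_conditional(events, "a", "b"))  # estimate near 0.8
```

The point is only that co-occurrence counting, which is all Hebbian plasticity needs, already carries the statistics that probabilistic inference operates on at the emergent level.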

As for the claim that probabilistic reasoning is foundational, Cox's mathematical arguments are pretty convincing. Cox shows that any measure of plausibility that obeys certain very sensible axioms *must* be probability: for a discussion see e.g.

http://leuther-analytics.com/bayes/papers.html
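Roughly stated (this is the standard textbook form of the theorem, not a quote from Cox): if plausibilities are real numbers, consistent with ordinary logic, and composed in a consistent way, then after a monotone rescaling any such measure must satisfy the familiar sum and product rules:

```latex
% Cox's theorem, informally: any plausibility measure p obeying his
% desiderata is, up to monotone rescaling, a probability satisfying
p(A \mid C) + p(\neg A \mid C) = 1
\qquad
p(A \wedge B \mid C) = p(A \mid B \wedge C)\, p(B \mid C)
```

Everything else in elementary probability theory, Bayes' rule included, follows from these two rules.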

I don't fully understand the fuss about "Bayesian reasoning" -- to me, Bayes' rule is just one among many useful mathematical rules derivable from the axioms of elementary probability theory, which IMO are the correct axioms to use for making intelligent judgments in the face of uncertainty.
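For instance, Bayes' rule is nothing more than the product rule solved for the posterior. A self-contained sketch (the disease-test numbers are invented for illustration):

```python
# Bayes' rule as a one-line consequence of the product rule:
#   P(H|E) = P(E|H) P(H) / P(E),  with P(E) summed over hypotheses.

def bayes_posterior(prior, likelihood):
    """Posterior over hypotheses, given prior {h: P(h)}
    and likelihood {h: P(E|h)} for some observed evidence E."""
    evidence = sum(prior[h] * likelihood[h] for h in prior)  # P(E)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

# Example: 1% base rate, 90% sensitivity, 5% false-positive rate.
post = bayes_posterior({"sick": 0.01, "well": 0.99},
                       {"sick": 0.90, "well": 0.05})
print(round(post["sick"], 3))  # 0.154
```

Nothing here goes beyond the axioms; "Bayesian reasoning" is just probability theory applied consistently.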

As for fuzzy logic, I think it has its place, and we do use the fuzzy-set min(,) and max(,) operators in Novamente in a couple of places, as well as fuzzy quantifiers. But fuzzy logic IMO doesn't have the foundational status that probability theory does. The whole argument for fuzzy min/max truth-value operators is founded on the assumption that distributivity must hold when manipulating uncertain truth values -- and I think this is just a false assumption. The fact that some of the proponents of fuzzy logic like Kosko are smart is kind of a red herring -- of course history is full of smart people advocating silly positions!
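To see what the distributivity assumption amounts to, here is a small sketch (my own illustration, with arbitrary truth values): fuzzy min/max satisfy the distributive law exactly, while the probabilistic product/sum operators (assuming independence) generally do not -- so taking distributivity as a requirement rules out the probabilistic operators from the start.

```python
# Fuzzy vs probabilistic truth-value operators on [0, 1].
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)

def prob_and(a, b): return a * b          # independence assumed
def prob_or(a, b):  return a + b - a * b

a, b, c = 0.5, 0.6, 0.7

# Distributivity: A AND (B OR C) == (A AND B) OR (A AND C)
fuzzy_lhs = fuzzy_and(a, fuzzy_or(b, c))
fuzzy_rhs = fuzzy_or(fuzzy_and(a, b), fuzzy_and(a, c))
print(fuzzy_lhs == fuzzy_rhs)  # True -- min/max always distribute

prob_lhs = prob_and(a, prob_or(b, c))
prob_rhs = prob_or(prob_and(a, b), prob_and(a, c))
print(round(prob_lhs, 3), round(prob_rhs, 3))  # 0.44 0.545 -- not equal
```

Whether distributivity *should* hold for uncertain truth values is exactly the assumption in dispute; dropping it leaves the probabilistic operators perfectly coherent.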

In an accompanying email I will outline my thoughts on probability theory and philosophy of mind in more depth. Recently I have experimented with introducing probabilistic notions at a very low, foundational level in my "pattern-based" philosophy of mind.

-- Ben

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Stephen Tattum
> Sent: Saturday, January 15, 2005 6:21 AM
> To: sl4@sl4.org
> Subject: Fuzzy vs Probability
>
> I was looking over the Singularity Institute page on becoming a seed AI
> programmer the other day and I couldn't help but feel that there is an
> overwhelming bias towards Bayesian reasoning, and I have noticed that a
> lot of contributors to SL4 hail this as all-powerful -- should they?
> Check out this paper by Bart Kosko (clearly a 'brilliant' individual)
> and his other work:
>
> http://sipi.usc.edu/~kosko/ProbabilityMonopoly.pdf
> http://sipi.usc.edu/~kosko/
>
> I couldn't help noticing also that generally there are gaps in the
> plan. As a philosopher I saw the omission of any philosophy of mind --
> crucial to any AI discussions and for any 'deep understanding' of the
> issues actually outlined -- strange... I have witnessed in the past
> prejudice against philosophy and philosophers here too (apology already
> accepted of course), and I wondered if the project of creating AI is
> being pushed forward before it is ready. Now I believe that the
> singularity is inevitable, and I am not suggesting that the institute is
> wrong, just that creating an Artificial General Intelligence needs more
> emphasis on the general. Any thoughts?


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:50 MDT*