RE: Introducing myself

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Apr 03 2002 - 08:15:41 MST


Eugene wrote:
> It's interesting how many people have very strong opinions on how AI
> should be done, on basis of no other evidence than what they think.
> Somehow, empirical analysis of existing instances of intelligence, or
> actual experiments almost never appear in those musings.

Well, what you're not taking into account here is that everybody has a mind.
Introspective psychology IS a valuable source of information about the
nature of mind and intelligence, and it's a source that everyone has access
to.

Hence, I'd expect the average person's insights on the nature of mind &
intelligence to be worth at least a little bit more than their insights on
quarks or quasars.

Now, I don't believe that introspection alone is enough to guide anyone (no
matter how smart, or how introspectively acute) to a quality theory of AI.

However, I think that if one is purely concerned with the theory of mind on
an abstract level, and not with issues of bridging the physical & mental
realms, then introspection can actually carry one pretty far.

Evidence of this is provided in the psychological theories of the medieval
Buddhists. There's a great book called Buddhist Logic, by a Russian named
Th. Stcherbatsky, summarizing the thinking of two theorists named Dignaga &
Dharmakirti (5th-7th century). These guys had a pretty damned good
cognitive science theory, based solely on the distillation of centuries of
introspective reports. Honestly, I find their view of the mind more
convincing than that of most modern cognitive science thinkers, in spite of
the greater empirical-science foundation that the latter have had for their
work. (Of course, I also find the Buddhist Logic work erroneous and/or
annoying in some aspects.)

> I'm rather impressed that the non-armchair variety of AI philosophers,
> who'd bloodied their noses countless number of times, still haven't lost
> the conviction that they know how to do it.

Imagine achieving Real AI as climbing to the top of a very tall peak.

Imagine that, not so long ago, our Intrepid Explorer climbed VERY CLOSE to
the peak, but ran into a dead end.

However, on the way up, he saw another, slightly different route, that
looked pretty good.

But he couldn't follow the alternate route at the time -- he would have had
to backtrack a while... he needed more supplies... and then there was that
damned twisted ankle.

So now he's going back up to try the slightly different route that appeared
to him to be clear. But of course, he can't say it's clear with 100%
confidence -- there could be some hidden pitfall along the new route, that
wasn't visible from his prior perspective.... But he must be a bit of a
loony optimist masochist in the first place, or he wouldn't be climbing
mountains and twisting ankles, he'd be sitting at home watching some other
asshole do it on TV, like everyone else ;)

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT