From: Michael Wilson (email@example.com)
Date: Thu Sep 08 2005 - 17:04:48 MDT
Ben Goertzel wrote:
> Well, my issue with a purely probabilistic approach is that it doesn't
> seem to give a computationally tractable approach to the formation of
> complex concepts or predicates.
This is certainly a valid concern, but I wouldn't consider it a
criticism of probability theory as such. Probability theory operates
on atomic statements, and trying to use raw probability theory for
high level cognition would be as silly as GOFAI symbolic reasoning
on detail-free tokens. 'Concepts' are elements or regularities at a
considerably higher level of organisation, and much more
constructive detail would be required to discuss them.
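To make the 'atomic statements' point concrete, here is a minimal sketch of what raw probability theory gives you by itself: Bayes' rule applied to a single atomic proposition. The function name and the numbers are purely illustrative, not drawn from any actual system:

```python
# Bayes' rule over a single atomic proposition H given evidence E:
#   P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) is obtained by total probability over H and not-H.
def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Illustrative numbers: a weak prior revised upward by supportive evidence.
posterior = bayes_update(0.1, 0.8, 0.2)
print(round(posterior, 3))  # 0.308
```

Everything at this level is trivially tractable; the difficulty, as noted above, only appears when you try to build concept-level structure on top of it.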
I certainly wouldn't claim that probability theory is sufficient
for intelligence. It's true that I haven't specified what additional
complexity should be layered on top to get up to the level where one
can reasonably talk about 'concepts', nor have I addressed many
questions of tractability. But I think progress can be made on the
questions of whether probability theory is a valid place to start,
and whether behaviour /above/ the concept level can also be Bayesian,
without such detail.
> But what about hypothesis formation? The space of possible hypotheses
> is very large. Most of AI is devoted to search techniques for
> searching hypothesis space, and these techniques suck in an AGI context.
Again, a very important question, and I wish I could discuss it more. But for
the purposes of this argument the question is whether any useful
mechanism of hypothesis search would inevitably introduce 'Complexity'
that would render the system thoroughly impossible to predict. I am
not aware of any such mechanism.
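For the flavour of the mechanism under discussion, a toy sketch of probabilistic hypothesis search: enumerate a small hypothesis space and rank hypotheses by posterior probability given observed data. The threshold-rule hypotheses, the noise model, and the data here are all purely illustrative assumptions of mine; a realistic hypothesis space is vastly larger, which is exactly the tractability problem being raised:

```python
# Toy Bayesian hypothesis search: hypotheses are threshold rules
# "x >= t" over integers, scored by the likelihood of observed
# labelled data under a simple label-flip noise model.
def likelihood(threshold, data, noise=0.1):
    # Probability of the observed labels if "x >= threshold" is the
    # true rule and each label is flipped with probability `noise`.
    p = 1.0
    for x, label in data:
        predicted = x >= threshold
        p *= (1 - noise) if predicted == label else noise
    return p

def best_hypothesis(data, thresholds):
    # With a uniform prior over thresholds, ranking by posterior
    # reduces to ranking by likelihood.
    return max(thresholds, key=lambda t: likelihood(t, data))

data = [(1, False), (2, False), (5, True), (7, True)]
print(best_hypothesis(data, range(0, 10)))  # 3
```

Note that nothing in this scoring step is unpredictable; whatever search heuristics replace the brute-force enumeration are where 'Complexity' could in principle enter.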
> I have never seen an adequate concept/hypothesis formation method
> that's based solely on probabilistic concepts. Do you have one that
> you'd care to discuss?
Unfortunately I'm going to have to ask for a rain check on that. I'd
enjoy discussing this area with you, but this isn't the time or the
place. That said, these are definitely the questions that people
/should/ be asking, and I would've been considerably more impressed
with the original poster if he'd actually managed to identify them.
> Distributed systems, as commonly used in large-scale infrastructure
> today, are *very* different from massively parallel MIMD systems
> like the Connection Machine, in terms of algorithm design and
Agreed, but similar MIMD techniques are common in high-performance
scientific computing. The Connection Machine was a fascinating,
pioneering project which went to completely new places (I for one
would love to have worked at Thinking Machines), but twenty years
later those areas are starting to get quite familiar.
* Michael Wilson
This archive was generated by hypermail 2.1.5 : Tue May 21 2013 - 04:00:48 MDT