From: Ben Goertzel (firstname.lastname@example.org)
Date: Mon Sep 12 2005 - 18:53:59 MDT
> Where the real difficulty arises is how to generate and refine the
> elementary symbols that the logical reasoning component works on. If
> some other system did that, and was then smoothly integrated with the
> logical part, no problem. It's the grounding of those symbols that is
> the sticking point.
What's interesting is that, when you formalize logic in the right way,
the boundary between logical reasoning and low-level perception/
action processing becomes rather fuzzy.
Symbol grounding can be done (for example) via Hebbian-learning-type
and evolutionary-programming-type learning over memories of
perceptions and actions -- and these are methods that can be
tied in very closely with probabilistic reasoning.
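To make the tie-in concrete, here is a minimal sketch of the Hebbian-learning-as-probabilistic-reasoning idea: co-activation counting between two symbols, with the "Hebbian weight" read off as an estimated conditional probability. All names (HebbianLink, the "red"/"apple" features) are illustrative inventions, not taken from any particular AI system.

```python
# Hypothetical sketch: a Hebbian link whose strength is a
# conditional-probability estimate learned from co-activations.

class HebbianLink:
    """Tracks co-activation counts between symbols A and B and exposes
    the link strength as an estimated P(B | A)."""

    def __init__(self):
        self.count_a = 0    # times symbol A was active
        self.count_ab = 0   # times A and B were active together

    def observe(self, a_active, b_active):
        # Classic Hebbian flavor: strengthen only on co-activation
        if a_active:
            self.count_a += 1
            if b_active:
                self.count_ab += 1

    def strength(self):
        # The Hebbian weight, interpreted probabilistically as P(B | A)
        if self.count_a == 0:
            return 0.0
        return self.count_ab / self.count_a

# Ground a link between two perceptual features from a memory stream
# of (A-active, B-active) observations:
link = HebbianLink()
for a, b in [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]:
    link.observe(a, b)
print(link.strength())  # 3 co-activations out of 4 A-activations -> 0.75
```

The point of the toy is just that the same counts a Hebbian rule accumulates are sufficient statistics for a probabilistic inference step, which is what lets the two processes be bound together.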
In an AI system, logic and symbol grounding can be bound together more
tightly than in the human mind, to the benefit of both processes.
> Personally, I feel that the "other" part is going to be massive, and
> needs a lot more thought than it gets.
IMO, symbol grounding can be carried out via a combination of
-- Hebbian-type learning (which can be formalized as a species
of probabilistic reasoning)
-- Evolutionary learning (which can be carried out via probabilistic
methods like the Bayesian Optimization Algorithm or its variants,
and is a rough analogue to Neural Darwinist type learning in the brain)
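For the second bullet, here is a minimal sketch of a univariate estimation-of-distribution algorithm, a much-simplified relative of the Bayesian Optimization Algorithm (real BOA also learns a Bayesian-network model of dependencies between variables, which this toy omits). The fitness function and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch: univariate EDA (UMDA-style), a simplified
# stand-in for BOA. Sample from a probabilistic model, select the
# fittest, re-estimate the model -- evolution carried out via
# probabilistic methods.
import random

def umda(fitness, n_bits=20, pop_size=50, n_select=25,
         generations=30, seed=0):
    rng = random.Random(seed)
    # Start with an uninformative model: each bit is 1 with prob 0.5
    probs = [0.5] * n_bits
    pop = []
    for _ in range(generations):
        # Sample a population from the current probabilistic model
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # Truncation selection: keep the fittest individuals
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]
        # Re-estimate the per-bit model from the survivors, clamped so
        # no bit probability collapses to exactly 0 or 1
        probs = [min(0.95, max(0.05,
                 sum(ind[i] for ind in selected) / n_select))
                 for i in range(n_bits)]
    return max(pop, key=fitness)

# "Onemax" toy problem: fitness is simply the number of 1 bits
best = umda(fitness=sum)
print(sum(best))
```

Swapping the independent per-bit model for a learned Bayesian network over the bits is essentially what turns this into BOA.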
> To put that another way, I think
> there are many AI formalisms that look great on paper but which, when
> implemented, leave all the really important stuff hidden in the mind of
> the programmer (who invented, preprocessed and then interpreted the
> symbols that were fed to the formalism). This is of course the
> grounding problem itself: recognized and appreciated by many, but still
> happening today.
No argument there...
> Lastly, you say: "However, I suggest that in an AGI system, logical
> reasoning may exist BOTH as a low-level wired-in subsystem AND as a
> high-level emergent phenomenon, and that these two aspects of logic in
> the AGI system may be coordinated closely together." If it really did
> that, it would (as I understand it) be quite a surprise (to put it
> mildly) ... CAS systems do not as a rule show that kind of weird
> reflection, as I said in my earlier posts. I suppose we could call this
> "self-similar" behavior (emergence of a copy of the low level mechanisms
> in the highest level emergent behavior), and my understanding is that
> this has either never been observed or it only happens under peculiar
> [quoted message truncated in the archive]
I agree that such systems do not seem to exist in nature, nor have they
been engineered yet.
But that is no reason not to design and build one!
Complex Adaptive Systems do not as a rule show this kind of phenomenon,
yet they are also not as a rule superhumanly intelligent ;-)
-- Ben G