From: Mohsen Ravanbakhsh (firstname.lastname@example.org)
Date: Sun Feb 25 2007 - 10:55:07 MST
"The last step, the non-existent "magic dust of intelligence", would then be
a model linking up these modules"
I know what you mean, but that might be an over-simplification.
Consider the relation between our visual and linguistic modules (if we can
bring such vast functions under the name of modules). Their relation is too
tight to be a mere linkage; some visual abilities are not even conceivable
without certain linguistic functions, and vice versa.
But the problem is that they are still modules, possibly with different
structures and/or mechanisms.
"they do not attempt to reproduce the full functionality of the modules on
the level of, e.g., a dog or chimp brain before going for a (super)human
mind, and I wonder why not."
If we could produce a functional module on the level of a chimp or a dog, a
human-level one wouldn't be much harder...
But I agree with you that an evolutionary approach would help here, and I
believe it's the best approach in the case of a highly modular brain.
On 2/25/07, Joshua Fox <email@example.com> wrote:
> I've actually been wondering this recently.
> Human intelligence is built up on top of modules that are evolutionarily
> older (and are found in non-human animals).
> Contemporary serious AGI efforts (in my layperson's understanding) are
> indeed modular and layered, but they do not attempt to reproduce the full
> functionality of the modules on the level of, e.g., a dog or chimp brain
> before going for a (super)human mind, and I wonder why not.
> The last step, the non-existent "magic dust of intelligence", would then
> be a model linking up these modules.
> 2007/2/23, Mohsen Ravanbakhsh <firstname.lastname@example.org >:
> > Hi everybody,
> > I'm new to this list.
> > I want to begin with a question:
> > Consider a scenario for the formation of human intelligence for which the
> > current approach to the study of AI is not appropriate. Suppose our brain is
> > highly modular (every single intelligent capability has been provided by a
> > module), in both structural and algorithmic aspects, and the unity we feel
> > in our cognition is some kind of illusion (our mental activities are not
> > transparent to us, but we think they are, as the Churchlands propose).
> > It seems that in this case our endeavor is pointless, because our intuition
> > is of no help, and the only reliable source is neuroscience, which is not
> > good at giving big pictures.
> > I'm asking: in this case (which is quite probable, in my view), what can
> > we do to construct AI?
> > (Be careful of the 'I' in AI! That is the vague point in this
> > situation.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT