Deadly Sins of Real AI

From: Peter Voss (peter@optimal.org)
Date: Tue Apr 02 2002 - 11:35:17 MST


Here's a list of 'Sins' from my a2i2 Project FAQ
http://www.optimal.org/a2i2/projectFAQ.htm

They add up....

Peter

Why is there so little progress in (workable) AGI models and systems?

· See: ‘Why are so few researchers pursuing Real AI?’ (below)

· The importance of general intelligence is not appreciated

· Poor epistemology (grounding, f&s, concepts, context, etc. –
leading to: embedding, vector encoding, activation spreading, multi-sense,
etc.)

· Poor understanding of key concepts – getting stuck on:
consciousness, volition, meaning, representation/ world-model, emotions,
common-sense

· Poor understanding of ‘intelligence’ – knowledge (Cyc) vs. two-way
interactive, adaptive learning – also, the importance of abstract cognition

· Not appreciating the central importance of patterns – especially
dynamic ones (entities, attributes, concepts, actions, thoughts, etc.)

· Too much focus on copying/ reverse-engineering the brain –
biological feasibility

· Too much focus on high-level abilities (logic, language,
creativity, etc.)

· Performance expectations too high for any specific functionality
(vision, speech, etc.)

· Schism between traditionalists & connectionists – a false dichotomy:
symbol/schema vs. pattern/incomprehensible

· Undue focus on evolutionary & agent systems (Society of Mind)

· Getting stuck on custom (NN and/or robotics) hardware in the early
stages (CAM, Cog)

· Over-estimating the hardware & software needed (Seed AI, minimal AGI,
limited motor-sensory, NN without paying the NN price)

· Under-researched areas: incremental, real-time, unsupervised/
self-supervised learning (vs. backprop!) – self-tuning: bottom-up & top-down
(data-driven & emotion/goal/meta-cognitive driven) – dynamic NN topologies –
dynamical, interactive, adaptive AGI systems – combining the best from NN,
traditional AI, fuzzy logic, etc. (a small sketch of the first item follows
this list)
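To make ‘incremental, real-time, unsupervised learning’ a bit more concrete,
here is a rough Python sketch – an illustrative toy, not an a2i2 design; the
class and parameter names are made up. It is a winner-take-all competitive
layer whose prototype units adapt to one unlabeled input at a time, with no
batches and no backprop:

import numpy as np

class OnlineCompetitiveLayer:
    """Toy prototype units that adapt incrementally to an unlabeled stream."""

    def __init__(self, n_units, n_inputs, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # Random initial prototypes; each row is one unit's "receptive field".
        self.prototypes = rng.normal(size=(n_units, n_inputs))
        self.lr = lr

    def update(self, x):
        """Process a single input vector as it arrives; return the winner."""
        # Winner-take-all: the unit whose prototype is closest to the input.
        distances = np.linalg.norm(self.prototypes - x, axis=1)
        winner = int(np.argmin(distances))
        # Hebbian-style update: move only the winner toward the input.
        self.prototypes[winner] += self.lr * (x - self.prototypes[winner])
        return winner

if __name__ == "__main__":
    layer = OnlineCompetitiveLayer(n_units=2, n_inputs=2)
    rng = np.random.default_rng(1)
    centers = np.array([[0.0, 0.0], [3.0, 3.0]])
    for _ in range(500):
        c = centers[rng.integers(len(centers))]   # pick a cluster
        x = c + 0.3 * rng.normal(size=2)          # one noisy sample
        layer.update(x)                           # learn from it immediately
    print(layer.prototypes)   # each prototype ends up near one cluster center

The point is only the style of learning: one sample at a time, label-free,
always on – the kind of mechanism I mean by real-time, self-supervised
adaptation, as opposed to offline batch training.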

Why are so few researchers pursuing Real AI?

Basic division:

· Many don’t believe that human-level AGI is possible at all

· Others think it will happen anyway. Sometime. Lots of people are
working on it. Eventually it will all come together.

More specific reasons for not focusing on ‘Real AI’

· They don’t believe that ‘general intelligence’ is a valid concept

· They don’t believe that AGI can be achieved within their lifetime –
the time is not ripe!

· They don’t believe that ‘general intelligence’ is the best approach
to achieving ‘AI’

· They don’t see why it’s so important – they don’t consider the benefits
of Seed AI

· They don’t know how to do it (no model) – intimidated!

· They are trying to reverse engineer the brain - one function at a
time

· They are focusing all of their attention on one (or a few) aspects
of intelligence – not the whole picture

· They tried in their youth (15–30 years ago) and failed – now they
‘conclude’ that it can’t be done

· They can get quicker results (financial and other) pursuing
specialized AI

· They get little academic respect/ support/ funding

· They are afraid of it

All of the above combine to create a dynamic where Real AI is not
‘fashionable’, further reducing the number of people drawn into it!


