From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sat May 04 2002 - 15:48:33 MDT
Peter Voss wrote:
> I generally agree with Ben's response to Eli's review.
> Specifically, I agree that Real AI (including Seed AI) will require a
> simpler, cleaner design rather than the kind of complexity that Eli seems to
> call for. Really understanding what general intelligence is, and what its
> essentials are, is the key to an effective implementation.
Um... DGI *is* the simpler, cleaner, bare-bones design based on really
understanding general intelligence and its essentials.
> Unlike Novamente, the a2i2 approach focuses on the basic cognitive
> mechanisms that lead to high-level thought (including reasoning & formal
> logic). We firmly believe that dog-level ('general') intelligence is an
> extremely significant step towards human-level AGI. In fact, we take this to
> be the real hurdle. This view is based on significant research, and is more
> than a vague intuition.
I agree; looking at the cognitive architecture, it's pretty clear that a
human is a chimpanzee hacked to support general intelligence. But what,
specifically, do you take dog-level intelligence to consist of, and how are
"cognitive" mechanisms implemented above that complexity base? You don't
have to answer right away; I'm just stating what I think the question is.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT