From: Ben Goertzel (email@example.com)
Date: Mon Jan 23 2006 - 16:31:15 MST
I certainly see the point of Richard's proposed test. A Novamente
with "human-adult-level intelligence" (and yes, I understand this is a
somewhat bogus term, but I do think it has value as an ambiguous
natural language expression) connected to the Net would certainly be
able to answer these questions.
However, I also see the point of Eliezer's objection. One could make
very substantial progress toward AGI, going far beyond all existing AI
systems, without having a system capable of answering this sort of
question.
If we proceed as hoped with Novamente (the first step being to secure
adequate funding to hire a few dedicated staff, so that the project
can proceed at a non-ridiculously-slow pace) then there will
be intermediary stages between where we are now and human-adult-level
intelligence, which will be obviously impressive and exciting and
fascinating, yet will not involve the ability to answer Richard's
questions.
-- Ben G
On 1/23/06, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> Richard Loosemore wrote:
> > 1) Give an introduction to Heim's theory of quantum gravity, in
> > sufficient detail to allow a Physics graduate to understand it.
> Good heavens. For a nonhuman paired with a human physics graduate, this
> is a superintelligence test, not an AGI test.
> RGE Corp. made some audacious claims, but this isn't fair even to them.
> Making some allowance for hype, I think that a fair challenge to RGE, or
> any other commercial AGI company, is handing them a task sufficiently
> far beyond state-of-the-art that they could beat up Google if they
> succeeded. Say, scoring above 1000 on the SAT - though maybe that's
> still much too difficult.
> Dan Clemmensen wrote on 2002.03.01:
> > Arthur T. Murray wrote:
> >> Now that Technological Singularity has arrived in the form of
> >> http://www.scn.org/~mentifex/mind4th.html -- Robot Seed AI --
> >> you all deserve this big Thank_You for your successful work.
> > Sorry, Arthur, but I'd guess that there is an implicit rule
> > about announcement of an AI-driven singularity: the announcement
> > must come from the AI, not the programmer. Now if you claim to
> > be a composite human/AI based SI, the rules are different:
> > I personally would expect the announcement in some unmistakable form
> > such as e.g. a message in letters of fire written on the face
> > of the moon.
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT