Re: Some considerations about AGI

From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Jan 24 2006 - 17:59:53 MST


Ben and Eliezer,

Now, I didn't say that the AGI would have to pass with a 100% score to
be accepted as a genuine AGI ...

Let me mention some of the motivations behind the test.

1) If some unscrupulous people were trying to "boost" their chances
of convincing potential investors that they had an AGI, they might
decide to plant a secret link from their AGI to a team of people sitting
in front of Google pages. This is not something that would have been
possible a mere five years ago, but today, someone could mount such a
"Clever Hans" AGI by this method. Asking the machine to switch between
languages in its reply, without too much advance warning, would make it
impossible for someone to have a Google Hans team squirreled away
somewhere. Too many languages.

2) Some people might be tempted to put a cheap front end on the Cyc
database and claim that they had a machine that answered factual
questions. I am not familiar with the full extent of Cyc's
capabilities, but this would not amount to an AGI; my litmus test would
be whether the system could talk and think in nuance and metaphor.
Anything able to
understand the significance of poetry would do nicely.

3) Please note that RGE seem to claim that their system can read and
understand pretty much anything. Under those circumstances, the Heim
Theory problem would be a challenge, to be sure, but the system might
be able to make a serious attempt to understand Heim's original German
and interpret it. That said, I'd be willing to leave out the Heim
Theory test if that were considered too brutal.

4) More generally, the test questions were meant to be at the
extremely-difficult-crossword-puzzle level.

I do not suggest this as a general AGI test: it was targeted more at
what I saw as the extreme claims made by RGE, and meant to guard
against fraud.

Richard Loosemore.

Ben Goertzel wrote:
> I certainly see the point of Richard's proposed test. A Novamente
> with "human-adult-level intelligence" (and yes, I understand this is a
> somewhat bogus term, but I do think it has value as an ambiguous
> natural language expression) connected to the Net would surely be
> able to answer these questions.
>
> However, I also see the point of Eliezer's objection. One could make
> very substantial progress toward AGI, going far beyond all existing AI
> systems, without having a system capable of answering this sort of
> question.
>
> If we proceed as hoped with Novamente (which means first obtaining
> adequate funding to hire a few dedicated staff, so that the project
> can proceed at a non-ridiculously-slow pace), then there will be
> intermediate stages between where we are now and human-adult-level
> intelligence, stages that will be obviously impressive, exciting and
> fascinating yet will not involve the ability to answer Richard's
> questions...
>
> -- Ben G
>
>
> On 1/23/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
>>Richard Loosemore wrote:
>>
>>>1) Give an introduction to Heim's theory of quantum gravity, in
>>sufficient detail to allow a physics graduate to understand it.
>>
>>Good heavens. For a nonhuman paired with a human physics graduate, this
>>is a superintelligence test, not an AGI test.
>>
>>RGE Corp. made some audacious claims, but this isn't fair even to them.
>>
>>Making some allowance for hype, I think that a fair challenge to RGE, or
>>any other commercial AGI company, is handing them a task sufficiently
>>far beyond the state of the art that they could beat up Google if they
>>succeeded. Say, scoring above 1000 on the SAT - though maybe that's
>>still much too difficult.
>>
>>Dan Clemmensen wrote on 2002.03.01:
>>
>>>Arthur T. Murray wrote:
>>>
>>>
>>>>Now that Technological Singularity has arrived in the form of
>>>>http://www.scn.org/~mentifex/mind4th.html -- Robot Seed AI --
>>>>you all deserve this big Thank_You for your successful work.
>>>
>>>Sorry, Arthur, but I'd guess that there is an implicit rule
>>>about the announcement of an AI-driven singularity: the announcement
>>>must come from the AI, not the programmer. Now if you claim to
>>>be a composite human/AI-based SI, the rules are different:
>>>I personally would expect the announcement in some unmistakable
>>>form, such as a message in letters of fire written on the face
>>>of the moon.
>>
>>--
>>Eliezer S. Yudkowsky http://intelligence.org/
>>Research Fellow, Singularity Institute for Artificial Intelligence
>>