Re: answers I'd like from an SI

From: Wei Dai (weidai@weidai.com)
Date: Tue Nov 13 2007 - 22:35:44 MST


I wrote:
> Eliezer S. Yudkowsky wrote:
>> I agree. Now explain all this to Marcus Hutter.
>
> Do you know if Marcus believes that his AIXI model captures all aspects of
> intelligence, or just that most well-defined AI problems can be formalized
> and "solved" in the AIXI model? Have you discussed this issue with him
> previously?

Curiously, I found that SIAI's own position seems closer to the former than
the latter. Quoting from
http://www.intelligence.org/blog/2007/07/31/siai-why-we-exist-and-our-short-term-research-program/:

Theoretical computer scientists such as Marcus Hutter and Juergen
Schmidhuber, in recent years, have developed a rigorous mathematical theory
of artificial general intelligence (AGI). While this work is revolutionary,
it has its limitations. Most of its conclusions apply only to AI systems
that use a truly massive amount of computational resources -- more than we
could ever assemble in physical reality.

What needs to be done, in order to create a mathematical theory that is
useful for studying the self-modifying AI systems we will build in the
future, is to scale Hutter and Schmidhuber's theory down to deal with AI
systems involving more plausible amounts of computational resources.
(end quote)
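
For anyone who hasn't followed the AIXI literature, here is a rough sketch
of the action-selection rule at the core of Hutter's theory (my paraphrase
of the standard formulation from Hutter's book, not anything in the SIAI
post):

  a_k := \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m}
         (r_k + \cdots + r_m) \sum_{q :\, U(q, a_{1:m}) = x_{1:m}} 2^{-\ell(q)}

where the percepts x_i = o_i r_i are observation/reward pairs, U is a
universal Turing machine, q ranges over environment programs, \ell(q) is
the length of q, and m is the horizon. Every piece is well-defined, but the
inner sum over all programs is a Solomonoff-style mixture and is not
computable, which is exactly the "more resources than physical reality"
problem the post alludes to. Hutter's own scaled-down variant, AIXItl,
bounds program length by l and per-step computation time by t, but its
running time still grows on the order of t*2^l per cycle, so it hardly
involves "plausible amounts of computational resources" either.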

I may be quoting that a bit out of context, but I think it illustrates that
even SIAI may be underestimating the amount of work that needs to be done
for the AI-based approach to a positive Singularity to work. I guess it's
either that, or Eliezer and Ben have different opinions on the subject.

BTW, I still remember the arguments between Eliezer and Ben about
Friendliness and Novamente. As late as January 2005, Eliezer wrote:

> And if Novamente should ever cross the finish line, we all die. That is
> what I believe or I would be working for Ben this instant.

I'm curious how that debate was resolved.
 


