Re: Funding for SIAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun May 19 2002 - 20:11:09 MDT


Ben Goertzel wrote:
>
> 3) seems to be your best bet. For this, it seems to me that your best bet
> is either to:
>
> a) code a prototype DGI AGI system yourself, to show off

I guess it's just *inherently implausible* that AGI is too large for one
person to build a prototype that does anything convincingly powerful...

> b) release a very detailed design document explaining your DGI design to
> others, not in a way that will convince *everyone* (impossible) but in a way
> that will convince some significant segment of the AI-wise population
>
> Because an individual in category 3) is going to want to see some kind of
> proof that you can actually build something interesting, apart from your
> obvious intelligence and your ability to write interesting philosophy
> papers.
>
> I am curious (and others on this list probably are too): Which of the two
> routes, a) or b), are you intending to take? Or are you intending to pursue
> both simultaneously?

Uh... no offense or anything, but I tend to view this entire line of
argument as "Ben tries to frame the problem so that all the funding
automatically goes to Novamente." I'll go on creating the DGI design
because that's on the direct critical path to AI; this is also one of many
things that increases the probability of obtaining funding.

DGI is less than a design paper, but it is more than the philosophy paper
you are trying to paint it as. I'm not quite sure you understood the
non-philosophy parts, though. You may have thought I was talking about
philosophical-ish emergent behaviors when I was talking about design
properties with design consequences.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
