RE: What I think is wrong with Eli's current approach

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Oct 25 2004 - 07:36:57 MDT


Marc,

I think it would be great if you'd summarize your ideas on AI, FAI, morality
and life in general into a crisp & coherent document, and post the URL to
the list.

I can see you've done a lot of deep thinking about these issues, but it's
often hard for me to piece together your ideas from
your various posts into a coherent point of view...

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Marc
> Geddes
> Sent: Monday, October 25, 2004 1:31 AM
> To: sl4@sl4.org
> Subject: What I think is wrong with Eli's current approach
>
>
> Eli said>
>
> >Someone other than me needs to talk to Marc Geddes
> before he snaps completely and becomes another
> Mentifex, complete with incomprehensible ASCII
> diagrams. I don't have the time and I don't have the
> tact, but Geddes was once a promising mind and there
> might be some way to pull him out of this.
>
> Heh. The philosophical schematic was designed only to
> make sense to someone who already has an inkling of my
> general theory. It wouldn't mean much to you, but it
> does to me. Be assured I do have a general theory
> that makes some degree of sense. I suggest you come
> back and take another look at my schematic after
> you've been working on FAI theory for 10 more years ;)
>
> But OK, Eli, I promise that I will not continue to
> discuss my own ill-formed ideas on SL4 (excepting this
> one post). I am simply going to try to explain, as
> best I can, what I think is wrong with your current
> approach.
>
> My schematic on the SL4 wiki:
>
> http://www.sl4.org/bin/wiki.pl?FundamentalTheoremofMorality
>
> You'll notice in my schematic that the entry at the
> intersection of 'POLITICS' and 'POSSIBILITY' in my
> matrix reads 'MARKET', which in my explanation of
> terms I said refers to CV (Collective Volition is a
> kind of highly sophisticated 'futures market'). In
> fact you'll see that a lot of the issues Eli is
> dealing with I have placed in the POLITICS row of the
> Cognition matrix, instead of the ETHICS row.
>
> In short I think that CV (Collective Volition) is 'a
> nice place to live', or *a good political system*. It
> is *not*, I believe, something that can be
> comprehended by or embodied in a single agent. So CV
> is the operational *global outcome* of morality, *not*
> something that a singleton AI *does*. What exactly
> do I mean? Well, I'm really saying that I think that
> the whole top-down approach *can't work*.
>
> Eli's requirement that his AI *not* be sentient should
> be the tip-off that there is something highly suspect
> and peculiar about his proposed RPOP. Should actual
> consciousness emerge in simulations of sentients, then
> the RPOP is immediately stymied, since it would not be
> able to make effective simulations of sentients
> without violations of *person-hood*. Worse, a
> conscious (sentient) RPOP would immediately run into a
> problem with self-reference. A conscious RPOP would
> be a *person* itself, and a suitably generalized
> definition of *person-hood* would end up with the RPOP
> having to include its own volition in the calculation
> of CV. This leads to a fatal infinite regress.
>
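> To make the regress concrete, here is a toy Python
> sketch. The names and structure are purely mine and
> purely illustrative, not anything from Eli's actual
> proposal: once the calculator of CV counts as a
> person, its own volition is defined in terms of the
> very CV it is computing.
>
> class Person:
>     def __init__(self, name, preferences):
>         self.name = name
>         self.preferences = preferences
>
> def volition(person, everyone):
>     # An ordinary person's volition is just their (extrapolated) preferences.
>     if person is not RPOP:
>         return person.preferences
>     # A sentient RPOP counts as a person too, and its only "preference"
>     # is to implement CV -- which means computing CV over a set of
>     # persons that includes itself.
>     return collective_volition(everyone)
>
> def collective_volition(everyone):
>     # Naive aggregation: pool everyone's volitions together.
>     return [volition(p, everyone) for p in everyone]
>
> humans = [Person("alice", "freedom"), Person("bob", "security")]
> RPOP = Person("rpop", None)
>
> try:
>     collective_volition(humans + [RPOP])
> except RecursionError:
>     print("CV never bottoms out once its calculator is itself a person")
>
> A real CV calculation would of course be vastly more
> subtle, but the structural problem is the same:
> self-inclusion with no base case.
>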
> Is general intelligence without sentience possible? I
> say not, not in any *practical* sense. Of course we
> can imagine a theoretical general intelligence with
> infinite computational power. In that case, I agree
> that general intelligence without sentience would be
> possible. One would simply take a pure Bayesian
> reasoning machine, capable of duplicating any kind of
> intelligence without sentience by burning up as much
> computational power as it needed.
>
> But in the real world, there can be no such thing as
> *infinite computational resources*. Any real world AI
> will only have access to *finite* computational
> resources at any given time. General intelligence
> would require *useful computational short-cuts* in
> order to do useful things in real-time. Theoretically
> ideal Bayesian reasoning won't work, because it will
> quickly run into computational intractability for
> complex problems. So all finite-resource AIs would
> need *specialized computational short-cuts*. And
> these *computational short-cuts*, I maintain, are what
> *necessarily* give rise to qualia (consciousness and
> sentience).
>
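> To illustrate the intractability point with a toy
> example of my own (again purely illustrative, not
> Eli's formalism): exact Bayesian inference over n
> binary variables means summing over all 2^n joint
> states, while a sampling 'short-cut' trades exactness
> for tractability.
>
> import itertools
> import random
>
> # Toy unnormalised joint distribution over n binary variables:
> # assignments where neighbouring variables agree get more weight.
> def unnorm_prob(x):
>     return 2.0 ** sum(1 for a, b in zip(x, x[1:]) if a == b)
>
> def exact_marginal(n, i):
>     # P(x_i = 1) by brute force: enumerate all 2**n joint states.
>     num = den = 0.0
>     for x in itertools.product((0, 1), repeat=n):
>         p = unnorm_prob(x)
>         den += p
>         if x[i] == 1:
>             num += p
>     return num / den
>
> def sampled_marginal(n, i, samples=20000):
>     # A crude Monte Carlo 'short-cut': self-normalised importance
>     # sampling with a uniform proposal. Approximate, but its cost
>     # does not double with every extra variable.
>     num = den = 0.0
>     for _ in range(samples):
>         x = tuple(random.randint(0, 1) for _ in range(n))
>         w = unnorm_prob(x)
>         den += w
>         if x[i] == 1:
>             num += w
>     return num / den
>
> print(exact_marginal(16, 0))    # 65,536 states -- still feasible
> print(sampled_marginal(16, 0))  # approximate, at a fraction of the cost
> # exact_marginal(60, 0) would need ~10**18 state evaluations -- hopeless.
>
> The particular numbers aren't the point; the point is
> that the exact computation doubles in cost with every
> variable added, so any real-time RPOP has to lean on
> approximations like the second function.
>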
> To sum up: Any RPOP would quickly run into
> computational intractability if it stuck with pure
> Bayesian reasoning. It would be forced to resort to
> *computational short-cuts*. These would, I claim,
> inevitably give rise to consciousness. With qualia
> present the RPOP would now be a 'Person'. A suitably
> generalized definition of 'Person-Hood' would result
> in the RPOP being forced to include its own volition
> in the CV calculation. This would give rise to an
> infinite regress. Ergo, Collective Volition cannot be
> calculated by a singleton RPOP and the entire top-down
> approach is flawed.
>
> I should make it clear that I *do* agree that CV
> (Collective Volition) is the ideal *political system*.
> That is, I agree that CV is *a nice place to live*.
> But I disagree that CV is something capable of being
> embodied in a singleton RPOP and imposed from the
> top-down. Eli's mistake is his insistence on the
> top-down approach. He has mistaken a *distributed
> system* (Collective Volition) for a mind. But in fact
> CV is not a singleton.
>
> Under my theory, all working FAIs are necessarily
> sentients which assign themselves *Person-hood*
> status. No singleton FAI can possibly implement
> Collective Volition (since any FAI is itself
> *included* in what constitutes Collective Volition).
> Nonetheless, CV would still represent an ideal
> *political system* for sentients, which the FAIs
> would try to act in harmony with.
>
> Under my theory, no singleton FAI can fully calculate
> CV, but it *can* still obtain some degree of
> understanding and determine which actions are and are
> not in harmony with CV. That is, an FAI could still
> perform calculations about CV sufficient to establish
> a sort of 'futures market' to help determine which
> actions were *Friendly* and which were *Unfriendly*.
>
> CV places constraints on permissible sentient actions.
> But it's a distributed *global system* and *not*
> something that can be embodied in a Singleton as
> Eliezer thinks.
>
> Of course I've been banging away with my objections on SL4
> for a couple of years now ;) Recall that I've always
> said that:
>
> (1) (Practical) general intelligence without
> sentience is impossible
> (2) Completely selfless AI is impossible
>
> Now though, I think my objections are stronger because
> I've got some plausible reasons for them and my own
> general theory of FAI.
>
> Eliezer. And me. One of us has to come out of this
> right and the other wrong (Marc chuckles to
> himself and nods his head). The time is fast
> approaching when we'll find out who's who...
>
>
> =====
> "Live Free or Die, Death is not the Worst of Evils."
> - Gen. John Stark
>
> "The Universe...or nothing!"
> -H.G.Wells
>
>
> Please visit my web-sites.
>
> Sci-Fi and Fantasy : http://www.prometheuscrack.com
> Mathematics, Mind and Matter : http://www.riemannai.org
>
>
>


