Re: AGI Philosophy

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Wed Jul 27 2005 - 11:57:04 MDT


Hi Phillip

> It would be nice to have an AGI which only offered suggestions of
> actions a set of human participants could take to realize optimal
> scenarios, instead of the AGI being an active player in forcing ver
> utopia. Once this AGI is achieved, it would be nice if the actions
> proposed by ver excluded any further input or activity from any
> AGI-ish entity in effecting each discrete suggestion. Seems we'd be a
> little safer from being steamrolled by the AGI in this regard; us
> humans could decide what we'd specifically like to preserve at the
> risk of sacrificing some degree of efficiency in the grand scheme of
> things. FAI needs to enact the "Grandfathering Principle" for it to
> be friendly towards us.

I haven't thought through the full implications of this idea, but it seems like
a good one. (I'm sure someone will point out, though, that a nasty AGI could
lure humans to act on its behalf, getting around the no-action injunction.)

After we've had a few decades (or more?) of experience working with
helpful advisory AGIs, we might move on to give them increasing freedom to
act (a movement for the emancipation of the AGIs).

I think one of the things we need to do to rescue humanity from
exploitation by other humans is to introduce large-scale deliberative
democracy (there's quite a bit on the web about small-scale deliberative
democracy). I reckon advisory or even facilitatory AGIs could help this
process along really nicely.

Cheers, Philip
