Re: Arbitrarily decide who benefits (was Re: Bounded population)

From: Lucas Sheehan (lucassheehan@gmail.com)
Date: Wed Apr 16 2008 - 15:59:11 MDT


On Wed, Apr 16, 2008 at 2:33 PM, Nick Tarleton <nickptar@gmail.com> wrote:
> On Wed, Apr 16, 2008 at 5:12 PM, Stuart Armstrong
> <dragondreaming@googlemail.com> wrote:
> > I hope there will be! What the moral system of the AI is will be vital
> > to the whole future of humanity. How to make the decision?
> >
> > Schematically:
> >
> > 1) The decision is made by the programmers.
> > 2) The decision is made by a small group of people with their own
> > interest in mind.
> > 3) The decision is made by a small group of altruistic people.
> > 4) The decision is made by some sort of democratic process, strongly
> > guided by those with a good understanding of the issues.
> > 5) The decision is made by some sort of pure democratic process.
>
> Extrapolated volition? (In which the initial dynamic comes from (3),
> and the end result from something like (4) or (5) but without at least
> some of democracy's flaws.)
>
The problem, it seems, is that when we examine human innovation and progress
in the past, these matters are often not handled as gracefully as listed
above. Implementation is often devoid of significant forethought, even
if completely altruistic. I know communities like SL4, conversations
like this, and other factors are trying to mitigate this, but.... Could
there be a way to set fundamentals, axioms, laws (for lack of better
words) that would help to avoid disaster? Some high-level ethereal AI
universal standards that bind even nefarious forces to some "good"
implementation.

Is this impossible? Is there any kind of example that could lend
credence to my daydream?



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT