Guide AI theory (was Forward Moral Nihilism)

From: m.l.vere@durham.ac.uk
Date: Sun May 14 2006 - 16:31:21 MDT


So, where would I take my 'moral nihilism'? The reasons I advocated it are the
following:

All morality is artificial/man-made. This is not an intrinsic negative; however,
it is negative in this case, as:
1. Morality made by mere humans would very likely not be suitable, or a net
positive, for posthumans. Therefore we need to go into the singularity without
imposing morality on our/other posthumans (i.e. as moral nihilists).
2. As morality is artificial, there is no single (or finite set of) 'correct'
morality. Thus it would be better for each individual posthuman to be able to
develop his/her/its own (or remain a nihilist) than to have one posthuman
morality developed by a sysop.

At the moment, what I would advocate is that universal egoists (or moralists
who don't want to constrain others with their morals) build a sysop which
grants them all complete self-determination in becoming posthuman. My ideas
so far (written previously):

"The best posssible singularity instigator I can imagine would be a
genie style seed AI, its supergoal being to execute my expressed
individual will. From here I could do anything that the person/group
instigating the singularity could do (including asking for any other
set of goals). In addition I would have the ability to ask for
advice from a post singularity entity. This is better than having me
as the instigator, as the AI can function as my guide to
posthumanity.

If anyone can think of anything better, please tell.

The chances of such a singularity instigator being built are very
slim. As such I recommend that a group of people have their expressed
individual wills executed, thus all being motivated to build such
an AI.

The problem of conflicting expressed wills can be dealt with by:
1. Prohibiting any action which affects another member of the group,
unless that member has wilfully expressed that the action be
allowed (a form of domain protection).
2. Giving all group members equal resource entitlement.

The first condition would only be a problem for moralists and
megalomaniacs (and not entirely for the latter, as there could exist
solipsism-style simulations for them to control).
The second seems an inevitable price of striking the best balance
between the quality of posthumanity and the probability of it
occurring.

I tentatively recommend that the group in question be all humanity.
This is to prevent infighting within the group about who is
included, gain the support of libertarian moralists, and weaken the
strength of opposition - all making it more likely to happen.

This is a theory in progress. Ideally, we would have an organisation
similar to SIAI working on its development/actualisation. As it is,
I've brought it here. Note, I hope to develop this further (preferably from
the standpoint of moral nihilism).

Whilst the AI interpreting commands may be an issue, I don't see it
as an unsolvable problem.

Note: I see this as a far better solution to singularity regret than
SIAI's CV."


