Re: Coercive Transhuman Memes and Exponential Cults

From: Durant Schoon (durant@ilm.com)
Date: Mon Jun 11 2001 - 16:01:13 MDT


> > Now, imagine Charlie. Transhuman Charlie is sufficiently more intelligent than
> > transhuman Bill. Again our favorite Sysop can warn Bill off from doing
> > anything he might not want to do, if only he knew all the consequences. But there
> > is a new twist with this case. Transhuman Charlie might convince transhuman Bill
> > to modify vis own volition so that accepting new memes from Charlie is highly
> > desired. Transhuman Charlie might advertise his suggested memes cleverly so that
> > transhuman Bill chooses them with no violation of vis volition.
>
> Actually, transhuman Bill automatically flushes all messages from Charlie
> in a single thought smoothly integrated with the Sysop API. Charlie can't
> convince Bill to modify vis volition in the first place - at least, not
> using invalid arguments. And if the arguments are valid from Bill's
> perspective, even with the consequences taken into account, what's the
> problem?

Say Charlie has a memetic attractor X. He can then craft a sequence of messages
that moves anyone starting from Bill's state of mind incrementally closer to X,
designing each message to be appealing (i.e., valid) from the perspective of the
target (Bill in this case).
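
To make the worry concrete, here's a minimal sketch (a toy model of my own,
with made-up numbers, nothing from the Sysop scenario): treat Bill's mind as a
point in a meme-space and X as another point. If Bill vets each message only
against his *current* state, every individual step looks valid, yet the
cumulative drift is unbounded:

        # Toy model: Bill's mind and Charlie's attractor X as points in a
        # vector meme-space. Each message nudges Bill by a step small enough
        # to pass his per-message acceptability check.
        import numpy as np

        rng = np.random.default_rng(0)
        bill = np.zeros(8)             # Bill's current state of mind
        X = rng.normal(size=8) * 10.0  # Charlie's memetic attractor
        tolerance = 0.1                # largest single change Bill accepts

        def acceptable(old, new):
            # Each message is judged in isolation against the current self.
            return np.linalg.norm(new - old) <= tolerance

        for message in range(2000):
            direction = X - bill
            dist = np.linalg.norm(direction)
            if dist == 0.0:
                break
            proposal = bill + direction / dist * min(tolerance, dist)
            if acceptable(bill, proposal):
                bill = proposal

        print(np.linalg.norm(bill - X))  # ~0: Bill has been led all the way to X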

Bill might say: "I do not want to be led to X, Y, or Z. I wouldn't mind being
drawn closer to A, B, & C, but I like where I am now." By saying this he would
specifically be opting out of X, Y, and Z and opting in to A, B, & C. Yet the
set of memetic attractors is probably non-enumerable (or at least incredibly
large), so no such list could ever be complete.

Maybe Bill could say: "I will accept only messages for which these, and only
these, parameters of my personality might be modified. I choose to allow the
option for my most recent self to nullify any modification greater than some
amount x."
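
A sketch of how such a policy might be mechanized (the names, indices, and
threshold here are all mine, purely illustrative):

        # Illustrative guard for Bill's policy: only whitelisted personality
        # parameters may change, and any single modification larger than x
        # can be nullified by the most recent self.
        ALLOWED = {1, 3, 4}  # indices of parameters Bill opts in to changing
        X_AMOUNT = 0.1       # nullification threshold

        def permit(previous_self, proposed_self):
            for i, (old, new) in enumerate(zip(previous_self, proposed_self)):
                if new != old and i not in ALLOWED:
                    return False   # touches an opted-out parameter
                if abs(new - old) > X_AMOUNT:
                    return False   # most recent self nullifies it
            return True

        # The catch: because the comparison is against the *most recent* self,
        # a long chain of sub-threshold changes still passes. Anchoring the
        # check to an earlier snapshot would bound the total drift instead.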

I suppose Permutation City addressed these ideas, but I still think cults could
creep in; I don't remember any specific mention of them. I do think this becomes
more of a concern once we can rewire our own volitional centers.

> Charlie has to be smarter than the *Sysop* to *covertly* take
> over Bill, at least if Bill has a normal relationship with the Sysop.

Not smarter. Just smart enough. Charlie merely needs to be smart enough to
persuade Bill to incrementally change vis volition without violating any of
the Sysop's rules. Charlie might also do this completely openly. In fact,
if Charlie does not do this, then transhuman Cindy probably will; i.e., given
enough time, someone will. And you know transhuman Cindy has a way with words.
She makes everything so clear and understandable.

Replication would still seem to dominate interactions in this environment.

I'm suggesting (or opening for discussion) the idea that:

        Systems which influence your volitional mechanism to favor and replicate
        themselves will thrive at the expense of other systems (I mean memetic
        systems, memeplexes, mores, even just behavioral instincts).
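
The standard way to see why is replicator dynamics. A toy run (with fitness
numbers I made up) shows a meme, whose only edge is that it biases hosts
toward replicating it, taking over from a tiny initial share:

        # Textbook discrete replicator dynamics with made-up fitness values:
        # a meme that rewires its host's volition to favor replicating it
        # gets a small per-generation edge, and that edge compounds.
        shares = {"self-promoting": 0.01, "neutral": 0.99}
        fitness = {"self-promoting": 1.05, "neutral": 1.00}  # replication rates

        for generation in range(400):
            mean = sum(shares[m] * fitness[m] for m in shares)
            shares = {m: shares[m] * fitness[m] / mean for m in shares}

        print(shares)  # the self-promoting meme dominates despite its tiny start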

Smart people can be convinced to draw incorrect conclusions if enough spin and
doubt are created, or if an idea is "irresistibly appealing". For some category
of non-dangerous manipulation, the Sysop won't intervene. It seems likely that
there will be an arms race for the intelligence to defend oneself against
hyper-clever marketing, i.e., ideas which influence but are completely
permissible under the Sysop's rules.

Side note:
I'm not sure if there are any laws against subliminal advertising (in the US).
fnord

If you are assuming there is going to be a separation between humans and
transhumans (as in protecting humans from 'coercive transhuman memes'), I'm
considering that there might be levels of protection between a transhuman
level A and a transhuman level B (where anyone in B could easily influence
anyone in A WITHOUT violating the Sysop's rules, merely by being a highly
effective marketer). These levels would go up and up, A < B < C < D, etc.,
and I'd assume that the more computational power you had, the more effective
the memes you could devise. Lower levels would be "protected" from higher
levels. I'm sure a funny book could be written about inter-level dating...or
maybe a not-so-funny book.

Smart people can be duped by smarter people (and even by stupider people if the
conditions are right). The same seems all the more true in an exaggerated
hierarchy of hyperintelligences.

> > The result? Exponential cults win (in a weird way).
>
> Only for people who like exponential cults...

Ah, but because volition can be rewired, all you have to do is keep
getting people closer to liking exponential cults. Given that these
entities will live a long time and be mentally mutating anyway (give
or take), there is probably sufficient time to be largely
successful. People will "drift" toward strong attractors, or maybe
leap and bound between them...if they can escape from the previous one,
that is, if they *want* to.

Because such cults do not violate volition, the Sysop has no grounds to stop
them, and they have a good chance of going exponential (in the
nanomachine-convert-the-universe-to-computronium sense).

I suppose this leads to another question about property rights and volition.
If there is a land grab, and I get out to the other matter in the universe
first, claim it, and convert it to computronium populated with sentients who
follow my cult, am I violating anyone's volition in an unfair manner?

I'm not saying the universe should be any other way...I'm just wondering if I
should start working on my cult personality for when the singularity arrives
;-)

Seriously though, are exponential cults a natural consequence of societies
that revere volition?

(if we can convert sentients, we'll create them!)

--
Durant Schoon

