Intramind Memewar: Implications for AGI

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Apr 04 2004 - 17:34:12 MDT


Given the amount of directed evolution cheerleading going on, here is a
follow-up to my previous excerpt. I've been there: yes, it looks good to start
with; yes, you think you can control it; no, you really can't. A reply
to Ben's comments is at the end.

-----

An AGI-relevant example of this is emergent competition between
representational paradigms in the human brain. Concept frameworks and
belief systems gain strength as they are used to form more concepts; the
more extensive and useful-seeming the paradigm, the quicker we are to
represent new information in that mold (strengthening the source
paradigm). As with generalised meme competition, representational systems
can leverage general intelligence to suppress rivals and build defences
against competitors. This is usually but not always adaptive for the host
intelligence; how much of our tendency towards closed-mindedness and
strong cognitive dissonance is genetic adaptation and how much of it is
internal conflict between self-reproducing neural patterns, or
meta-patterns?
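
As a toy illustration of this feedback loop (standard Python, everything in
it invented for the example, nothing to do with real neural dynamics): each
new datum gets filed under a paradigm with probability proportional to that
paradigm's current strength, and filing it there strengthens the paradigm
further. Even when three paradigms start out perfectly equal, the final
shares usually end up lopsided, and an early lead tends to persist.

    import random

    def simulate(strengths, steps=10000, seed=None):
        # Each new datum is filed under a paradigm with probability
        # proportional to its current strength; filing it there
        # strengthens that paradigm (a simple self-reinforcing loop).
        rng = random.Random(seed)
        strengths = list(strengths)
        for _ in range(steps):
            i = rng.choices(range(len(strengths)), weights=strengths)[0]
            strengths[i] += 1
        return strengths

    # Three paradigms that start perfectly equal rarely stay that way.
    print(simulate([1, 1, 1], seed=4))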

Since genes survive the host and until very recently neural patterns
didn't, genes used to keep on top of the situation by manipulating the
neural substrate to maximise the benefits and minimise the penalties
(more gene-directed DE [4]). However, the invention of language and the rise of
memetic (cultural) evolution have obsoleted these mechanisms. As well as
engaging in unadaptive behaviour (like spending all day every day holding
a sign saying 'Sinners Repent!') as a result of memes trying to infect
other hosts and defend their own hosts, we /also/ get the fallout from
memetic competition within our own brains. As with extrahost competition,
memes have adapted to exploit all of our evolved brainware to gain an
advantage over rivals in the fight to control the brain.

There are even parallels of body/cell and cell/chromosome interlevel
competition; meme complexes have to resist being weakened by mutinies
of their constituent selfish memes, and taking over more of one host's
brain might actually reduce the chances of spreading to other hosts (not
least due to genetic level defences against rampantly parasitic memes; we
have an instinctive dislike for monomaniacs). Fortunately for our sanity,
co-operative effects dominate here to the same extent that they do when
genes build cells and cells build bodies; meme complexes within the
K-line plane of Minsky's society of mind have ample opportunities for
non-zero-sum interactions. When this results in adaptive behaviour we call
it synthesis and cross-fertilisation of ideas (spot the unintentionally
apt reproductive metaphor); when unadaptive memes scramble [1] for every
possible source of support to try and survive, we call it rationalisation.

From a (naive) humanistic viewpoint the situation just keeps getting
worse; not only are we robots hardwired by our genes to replicate them at
all costs, our software consists of a collection of viruses and worms
downloaded from the social network (which combines peer-to-peer and
client/server dynamics; Slashdot your local priest today!) and our
central processors are running a complex version of Core Wars [2]. It
should be obvious by now that AGIs which replicate even part of this mess are
Unsafe By Definition. Bayesian AGIs that prevent this resource-wasting
memetic competition (strict utilitarianism is at once an optimal fitness
function, a superior design strategy, and, with pervasive causal validation,
the recursive version of memory protection) are not only much safer,
they're more efficient too. The moral of the story is to check your
proposed cognitive dynamics very carefully; emergence is insidious,
difficult to spot and (without recursive causal validation) regenerative
as system complexity increases.

Eliezer nearly destroyed the world by rounding up thousands of half-bright
monkeys, pointing them in the right general direction and setting them to
the task of typing out AGI source. I was on the path to destruction (no
pun intended) when I started implementing a cognitive supersystem with
multilayer Bayesian directed evolution several orders of magnitude more
complex than any existing genetic algorithms and more efficient than
anything in nature (I was a half-bright monkey, but I got better [3]).
Hopefully there will be humans or human-derived intelligences left after
the Singularity to forgive us.

[1] These days I like to open discussions with religious types with the
    line 'Please allow me to euthanise your unadaptive meme complex...'.
    As a result I waste a lot less time talking to religious people ;>

[2] Core Wars is a fight for survival between machine code programs
    running on a simulated OS with no (or optionally very little) memory
    protection. It is the ultimate test of viral coding ability; see
    http://www.koth.org . As a student I proposed but never implemented
    Network Core Wars, with tens to hundreds of compute nodes in a
    simulated network with corruptible routers (cynics might suggest that
    I decided to play the real thing instead :).
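
    A toy Python imitation of the basic situation (mine, not real Redcode;
    the tiny instruction set is made up): two programs share one
    unprotected circular core, each can overwrite any cell, and a process
    dies the moment it executes a DAT. Here a program that just spins in
    place gets carpet-bombed by a crude bomber.

        CORE = 80    # one tiny circular core, no memory protection at all

        def run(core, warriors, max_cycles=500):
            # Each cycle every surviving warrior executes one instruction;
            # executing a DAT kills the process; last survivor wins.
            pcs = dict(warriors)
            alive = list(pcs)
            for _ in range(max_cycles):
                for name in list(alive):
                    pc = pcs[name]
                    op, a, b = core[pc]
                    if op == 'DAT':                # stepped on a bomb: process dies
                        alive.remove(name)
                    elif op == 'MOV':              # copy cell pc+a to cell pc+b
                        core[(pc + b) % CORE] = core[(pc + a) % CORE]
                        pcs[name] = (pc + 1) % CORE
                    elif op == 'ADD':              # add a to the B-field of cell pc+b
                        top, ta, tb = core[(pc + b) % CORE]
                        core[(pc + b) % CORE] = (top, ta, tb + a)
                        pcs[name] = (pc + 1) % CORE
                    elif op == 'JMP':              # relative jump
                        pcs[name] = (pc + a) % CORE
                    if len(alive) == 1:
                        return alive[0]
            return 'draw'

        core = [('DAT', 0, 0)] * CORE
        core[10] = ('JMP', 0, 0)    # 'duck': spins in place, minding its own business
        core[40] = ('ADD', 1, 1)    # 'bomber': advance the MOV's target field...
        core[41] = ('MOV', 2, 3)    # ...copy the DAT below onto that target...
        core[42] = ('JMP', -2, 0)   # ...and loop; eventually cell 10 gets bombed
        core[43] = ('DAT', 0, 0)

        print(run(core, [('duck', 10), ('bomber', 40)]))    # the bomber wins this one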

[3] I would like to stress that subjunctive planet kill is /not/ a
    prerequisite for being a SIAI seed AI programmer. Two negative
    examples are quite enough; anyone who reads this and still does
    something potentially terminal for the species is definitely trying
    to disqualify themselves, not to mention kill everyone. After all,
    third time unlucky.

[4] Yet another example of genes exploiting directed evolution at a
    separate level of organisation; the dynamics of the human immune
system. As Ridley points out, 'The whole system is beautifully designed
    so that the self-interested ambitions of each cell can only be
    satisfied by the cell doing its duty for the body... It is as if our
    blood were full of Boy Scouts running around looking for invaders
    because each time they found one they were rewarded with a
    chocolate.' DE can be used safely and effectively, but in AGI
    contexts this only works if you know exactly what you are doing; in
    the vast majority of cases it is much more likely that you merely
    think you know what you are doing.
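
    A minimal directed-evolution sketch of that last point (Python; the
    setup, numbers and names are all invented for the example): candidate
    detectors are bred purely on how many invader signatures they actually
    recognise, so the only way a lineage can prosper is by doing its job
    for the host.

        import random

        rng = random.Random(0)
        INVADERS = [rng.getrandbits(16) for _ in range(50)]   # fixed 'pathogen' signatures

        def fitness(detector):
            # The chocolate rule: reward only for invaders actually recognised
            # (16-bit signatures within Hamming distance 6 of the detector).
            return sum(1 for inv in INVADERS if bin(detector ^ inv).count('1') <= 6)

        def evolve(pop_size=20, generations=200):
            pop = [rng.getrandbits(16) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 4]              # selection on duty done, nothing else
                children = [p ^ (1 << rng.randrange(16))   # offspring = parent with one bit mutated
                            for p in rng.choices(parents, k=pop_size - len(parents))]
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print(f"best detector {best:016b} catches {fitness(best)}/{len(INVADERS)} invaders")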

Ben Goertzel wrote:
> While I find Eliezer's ideas interesting and sometimes deep, based on
> his posted writings I do not agree that he has "made progress in
> producing engineering solutions."

Recent history: I attempted to implement a prototype of LOGI and CFAI over
the November -> February period this year. In the process I discovered just
how vague the two documents were; I had to fill in a lot of details myself
in order to produce a constructive account that would serve as a blueprint.
I agree that CFAI etc. provide (dangerously outdated and flawed) principles
but not a HOW TO. I have decided to suppress publication of my results and
commentary for the time being, but it was certainly a highly informative,
not to mention hair-raising, experience.

> The concepts outlined in his writings do NOT constitute "engineering
> solutions" according to any recognizable interpretation of this term!

I agree. My own assessment, based on extended personal conversations (with
limited input from SL4 and Wiki posts, which need the right context to be
interpreted correctly), is that Eliezer is in fact making good progress
towards a constructive theory that is scientifically well justified and
feasible to engineer. I look forward to his publication of the details. In
the meantime I have been learning Friendliness and broadening my knowledge
of AGI theory as fast as I can.

 * Michael Wilson

'You have found an altar. Would you like to (p) pray or (d) desecrate?'
 - Unix Larn



