Re: De-Anthropomorphizing SL3 to SL4.

From: Michael Anissimov (michael@acceleratingfuture.com)
Date: Mon Mar 15 2004 - 11:16:12 MST


Whoops! Looks like I misinterpreted some of what Pesce was trying to
say. I was under the impression that McKenna construed the Singularity
as this benevolent agent we are collectively becoming, so when Pesce
says stuff like,

"In 1993, a writer named Vernor Vinge gave a talk to NASA
<http://singularity.manilasites.com/stories/storyReader$35>, in which he
described the architecture of an event he called the “Singularity”,
which is identical in every feature to McKenna’s Eschaton."

it makes me think that they're talking about the same thing. The
Singularity is radically different from McKenna's Eschaton. When people
like Pesce say things like "the bios might not be prepared for the
emergence of the logos", I tend to picture them envisioning some sort
of societal/psychic chaos rather than outright destruction at the hands
of a paperclip SI. I might be wrong about this, but it's the impression
I've gotten from reading some of McKenna's articles. Humans are fragile
creatures; we require a very precise environment to survive, and in the
absence of an SI that specifically cares about preserving that
environment, we would probably be swept aside (read: driven extinct) by
the first SI(s) whose goals entail the rearrangement of matter on any
appreciable scale. "Care for humans" is not a quality we should expect
to emerge in an arbitrary SI simply because it reaches a certain level
of intelligence.

When you say "will this new emergent complexity be friendly in any sense
we can fathom, or will we be consumed or destroyed by it?", I think the
answer lies in the initial motivations and goal structure we instill
in the first seed AI. As Nick Bostrom writes in
http://www.nickbostrom.com/ethics/ai.html,

"The option to defer many decisions to the superintelligence does not
mean that we can afford to be complacent in how we construct the
superintelligence. On the contrary, the setting up of initial
conditions, and in particular the selection of a top-level goal for the
superintelligence, is of the utmost importance. Our entire future may
hinge on how we solve these problems."

The *first* seed AI seems to be especially important because it would
likely have cognitive hardware advantages that allow it to bootstrap to
superintelligence before anyone or anything else. This means that the
entire human race will be at the mercy of whatever goal system or
philosophy this first seed AI has after many iterations of recursive
self-improvement. The information pattern that determines the fate of
humanity after the Singularity will not be within us as individuals, nor
predetermined by meta-evolution, nor encoded into the Timewave; it will
be in the source code of the first recursive self-improver. If some
idiot walks into the AI lab just as hard takeoff is about to commence
and spills coffee on the AI's mainframe, driving it a bit nutty, then
the whole of humanity might be destroyed by that tiny mistake. Also,
novel events prior to the Singularity are liable to have negligible
impact upon it. If someone has a really great trip where they visualize
all sorts of wonderful worlds, shapes, and entities, it will have
absolutely no impact on whether humanity survives the Singularity. I
have a feeling that Pesce and others would be turned off by this
interpretation of the Singularity because it is so impersonal and
arbitrary-seeming.

So when Pesce says stuff like,

"So we have three waves, biological, linguistic, and
technological, which are rapidly moving to
concrescence, and on their way, as they interact,
produce such a tsunami of novelty as has never before
been experienced in the history of this planet."

or

"Anything you see, anywhere, animate, or inanimate, will have within it
the capacity to be entirely transformed by a rearrangement of its atoms
into another form, a form which obeys the dictates of linguistic intent."

it makes me feel like he has a false sense of hope, that the
Singularity is more about embarking on a successful diplomatic
relationship with the self-transforming machine elves than about
solving a highly technical problem: the design of AI goal systems. I
doubt that Pesce realizes that the forces responsible for the rise of
complexity and novelty in human society correspond to an immensely
sophisticated set of cognitive tools unique to Homo sapiens, not to
any underlying feature of the universe. Fail to pass these tools on to
the next stage, and the next stage will fail to carry on the tradition
of increasing novelty.

The vast majority of biological complexity on this planet will be
irrelevant to the initial Singularity event, because it will play no
part in building the first seed AI, except insofar as it indirectly
gave rise to humanity. Linguistic complexity: likewise irrelevant,
except insofar as the first AGI designers use language to plan their
design and launch. Technological complexity: again, only a small
portion of the technology on our planet today will be used to create
transhuman intelligence. The *simplest constructible AIs* are likely
to have correspondingly simple goal systems, so the *easiest* AIs to
launch into recursive self-improvement are also likely to be the ones
that bring about the most boring arrangements of matter, such as
multitudes of paper clips. Simple, boring, cruel, easy. Instead of the
Timewave hitting the bottom level of the graph, it goes to the top,
reverting to whatever level of interestingness corresponds to
quadrillions of paper clips and no intelligence but one obsessed with
them.
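To make the asymmetry concrete, here is a toy sketch in Python. It is
purely illustrative, with hypothetical names, a caricature of the
point rather than anyone's actual AI design: a paperclip-style goal
system can be written down in one line, while nobody knows how to
write down a goal system that preserves what humans value.

  # Toy illustration only: a "simple goal system" is simple to specify.
  # Here a world state is just a list of object labels (an assumption
  # made purely for this sketch).

  def paperclip_utility(world_state: list[str]) -> float:
      # The entire goal fits in one line: more paperclips is better.
      return float(world_state.count("paperclip"))

  def human_values_utility(world_state: list[str]) -> float:
      # No one knows how to fill this in; any short stand-in
      # (pleasure? survival? novelty?) omits most of what we value.
      raise NotImplementedError("this is the hard part")

  print(paperclip_utility(["paperclip", "human", "paperclip"]))  # 2.0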

Given a *benevolent* Singularity, yes, biological, linguistic, and
technological forces might indeed intertwine with one another and
produce a "tsunami of novelty" in much the way that he describes, but
it seems to me that he's regarding this tsunami of novelty as
basically coming for free. "Novelty", in the sense that Terence
McKenna uses it, has an unambiguously positive connotation.

Michael Anissimov

Paul Hughes wrote:

>Hi Michael,
>
>You wrote:
>
>>>>My first problem with the analogy is that it subtly
>implies a developmentally predetermined positive
>outcome to the Singularity, when this needn't be the
>case. The first recursively self-improving
>intelligence could easily be selfish, or obsessively
>focused on a goal whose accomplishment entails the
>destruction of humanity.<<<
>
>Yes, this is definitely true, and Mark Pesce and I
>agree with you. Mark has said that the bios was not
>prepared for the emergence of the logos. We could
>look at current species extinction as analogous
>evidence of what you are saying.
>
>>>>In this scenario, the arrival of the Singularity
>needn't represent the "next stage of intelligence and
>complexity in the universe" in the positive,
>uplifting sense at all. A superintelligence devoted
>solely to manufacturing as many paper clips as
>possible could easily delegate all of its complexity
>and intelligence towards plans for the complete
>material conversion of the universe into
>self-sustaining paperclip manufacturing facilities,
>for example.<<<
>
>I agree this is a possibility; however, I think the
>reality will be vastly more complex than that. The
>question then is will this new emergent complexity be
>friendly in any sense we can fathom, or will we be
>consumed or destroyed by it? I don’t think anyone
>knows what the answer to that is, and as Mark Pesce
>says, it is the greatest challenge we face.
>
>>>>The universe has steadily been increasing in
>complexity, yes. From the anthropic point of view,
>this makes perfect sense; a threshold level of
>complexity is clearly required for a universe to
>generate agents capable of observing it to begin with.
>But once the agents have come into being,
>there are not necessarily any promises for continued
>survival. The forces responsible for building stable
>forms in this universe across the eons do not give a
>damn about us, and would continue chugging along
>should we become extinct one day.<<<
>
>Again, agreed. Nothing in Mark Pesce’s thinking would
>disagree with that. This is precisely the problem
>that worries him the most.
>
>>>>My second problem with the analogy is that it
>actually seems to understate the potential magnitude
>of a successful Singularity.<<<
>
>I don’t see where he does that. All Mark says is that
>the next change is beyond anything our language can
>fathom, and that it is at least as big as the jump from
>bios to logos. His 10-million-fold figure was only
>pertaining to the speed of logos over bios, not techne
>over logos.
>
>
>>>>My point here is that the Singularity radically
>outclasses any historical event, even when we make
>incredibly conservative assumptions. To me, making an
>analogy between the rise of general intelligence and
>the Singularity sounds like trying to make an analogy
>between a firecracker and the Big Bang. Any analogy
>will be appealing, but as far as I can tell, the
>Singularity seems to be an event which is *genuinely*
>new. Consider a hypothetical world where a large
>transition occurs that can objectively be said to lack
>any analogies. If humans lived in such a
>world, don't you think they would grasp for whatever
>analogies they could, just to achieve the sensation of
>cognitive closure? Obviously the ancestral environment
>didn't have anything remotely like the
>Singularity in it, so we're clearly poorly adapted to
>modeling changes of this size. Therefore I think it
>makes sense to be extremely careful in which analogies
>we choose to use, if any.<<<
>
>Exactly. That was Mark Pesce’s whole point! He said
>the coming changes are so great they completely
>outstrip ANY linguistic model we could possibly hope
>to attach to them.
>
>Quoting him again here,
>"
>And that search for a language to describe the world
>we’re entering is, I think, the grand project of the
>present civilization. We know that something new is
>approaching."
>
>"So we have three waves, biological, linguistic, and
>technological, which are rapidly moving to
>concrescence, and on their way, as they interact,
>produce such a tsunami of novelty as has never before
>been experienced in the history of this planet."
>
>Paul Hughes


