Donaldson, Tegmark and AGI

From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Aug 11 2006 - 19:45:07 MDT


(Advance warning: I'm tired and the ideas herein are still somewhat inchoate,
so I'm not sure how coherent all this is; but if I wait until I'm fresh as a
daisy and my thoughts are perfectly organized I may be some time, and it's
possible this may be of use to someone, in which case this version now is
better than a perfect version Real Soon Now. Also, please disregard any
apparent egotism: the point I'm trying to make is not about me; I merely
perforce use myself as an example.)

The fantasy author Stephen Donaldson in my opinion belongs in the canon of
Western literature, on the criterion of depth: elements of sufficient
profundity that their significance may not be apparent until years after
first reading. A theme that pervades much of his work is _despair_, made
explicit in one memorable scene (no spoilers, you'll recognize it if you
read the book in question): There are certain insights that can only be
obtained in the extremity of despair.

Douglas Lenat was, like da Vinci, Babbage and Drexler, a man ahead of his
time; Eurisko stood for perhaps two decades as the most advanced AI ever
created (though Novamente may be ahead of it by now; comparison is
difficult). And then he abandoned it, decades ahead of the rest of us in the
insight that recursive self-improvement is a mirage and there are no
shortcuts, and moved on to tackle the problem of real-world knowledge.

Yet Cyc achieved less than Eurisko, and we understand why: it handles only
ungrounded declarative sentences, which are far too shallow a subset of
knowledge for any significant use - sufficiently so that they're not even
worth bothering with if that's all you have; Google walks away from Cyc in
Cyc's strongest areas.

Why did such a brilliant man fall into such an obvious dead end, and stay in
it for two decades and counting? The answer mocks with its banality: that
was all 20th century hardware could handle. One has to undertake a project
that is doable; failing that, one has to believe some undertakable project
is doable. (Disclaimer: obviously I'm not privy to Lenat's thoughts, so this
is speculation; but I believe it to be very much in accord with how our
minds work.)

Today's hardware isn't up to it either of course. I've finally managed to
see, in very vague and shaky outline, how to create AGI - not as a blueprint
of course (I'm not _that_ good :)), and with lots of big unsolved problems
still, but as a path running decades into the fog of the future; if Moore's
Law holds out, if nothing goes disastrously wrong, if people and funding can
be assembled and kept together, there is a way. It doesn't involve
spectacular results in the next few years, and while I won't swear there's
no other way, this one looks marginal enough that I _think_ any substantial
deviation from it drops the chance of success by many orders of magnitude.
(I'll try and put something together on the actual how at some point, though
I don't have a precise enough idea yet to write a paper on it.)

So why was I able to figure that out and Lenat wasn't? I don't think it's
because I'm smarter than him. I think it's because I was able to accept at
last that _we will probably fail_. Not in a near-future toss of the dice
that's emotionally comforting in its own way: Judgement Day comes, we're
uploaded to heaven or the world goes poof in one clean conflagration and no
more pain. No, if we succeed it will be after many long decades of
exhausting thankless work; and if we fail - it will be after many long
decades of exhausting thankless work followed by the agonizingly protracted
death that modern medicine and nursing deliver, leaving behind a world that
will die, not with a psychologically acceptable bang, but with a whimper;
and all because we were not smart and fast enough to do our jobs in whatever
window of opportunity was available.

I do not, of course, know what the probabilities of each of these outcomes
are. (And there may be other paths; in particular, nanotechnology minus AGI
may yet suffice. I haven't studied that end of things as intensively; my
best guess is that AGI is the most likely route to success, but I'm not
sure. I'm much more sure AGI is the point of highest leverage, based on
resource requirements.) It could, for all I really know, be 50/50. But I
suspect it's more like 99/1. Decay is easier and faster than progress. Only
in the acceptance of despair did I arrive at these insights.

Why was I able to do this? I don't think it's because I have some emotional
strength not given to other men. I think it's about belief systems. That's
not terribly surprising; after all, religion has over the years given many
people the strength to endure unpleasant things. But that is not my path,
nor, I think, that of most of the people on this list.

I've believed in the Tegmark multiverse (the many-worlds interpretation of
quantum mechanics plus the Platonic philosophy of mathematics - a post I
found in the extropy-chat archives,
http://bbs.extropy.org/exi-lists/archive/9904/35948.html, gives tantalizing
hints that the two may be connected - but I digress) since I was old enough
to ponder the issues; not at the time for emotional reasons, but simply
because of its elegance, its elimination of the requirement for special
assumptions, and its ability to explain, albeit not usually (with one or two
exceptions) predict. Originally it was purely an intellectual thing, but
over the years the habit of mind ingrained itself.

So finally I was able to understand that if I believe this is _true_ - which
I do - then I can accept a 99% probability that we will fail and the Grim
Reaper will take us, our civilization, our species, our world as it has
taken others in the past; for that means the other 1% of the probability
amplitude finds Ascension, and the total utility thus achieved far, far
outweighs our loss. In utilitarian logic the correct action is of course the
same either way, but in human psychology it is not.
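The utilitarian point can be sketched numerically (a toy illustration with made-up numbers, not from the original post): whether the 1% is read as an ordinary probability of success or as a share of branch weight in a multiverse, the expected-utility arithmetic is identical, so the recommended action is unchanged.

```python
# Toy expected-utility sketch. All numbers here are illustrative
# placeholders, not estimates from the post.

def expected_utility(p_success: float, u_success: float, u_failure: float) -> float:
    """Expected utility over a binary outcome."""
    return p_success * u_success + (1 - p_success) * u_failure

# Single-world reading: a 1% probability of an enormous payoff,
# versus a comparatively small loss on failure.
single_world = expected_utility(0.01, 10**9, -100)

# Multiverse reading: 1% of the branch weight succeeds.
# Arithmetically, weights play exactly the same role as probabilities.
branch_weights = [(0.01, 10**9), (0.99, -100)]
multiverse = sum(w * u for w, u in branch_weights)

# The correct action is the same under either interpretation.
assert single_world == multiverse
```

The difference the post points at is psychological, not decision-theoretic: the sums agree, but which framing a human can bear to hold is another matter.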

The reason I post this is that I have seen - not just on this list - a need
to deny the possibility of failure, of death, most particularly of the
emotionally unacceptable kinds of death that are in fact the ones reality
hands out; a need to believe there is some path that can deliver safety by
squelching danger. But if denial of the possibility of failure is followed
far enough, if everything that appears dangerous is squelched, the reality
of failure and death becomes certain.

I'm a pessimist, and the philosophy offered here is in accordance with
that, so I suspect most will not find it of use. But to any who do: there
is light on the other side of despair.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT