From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sun Nov 26 2000 - 15:35:27 MST
Ben Goertzel wrote:
> First of all, you certainly have not shut the door to a mathematical proof
> of transhuman
> minds' significant imperfection. The analogy to thermodynamics is OK, but
> it doesn't give you
> guidance as to what the actual probabilities involved are. You just pull
> the probability values
> out of a hat (leading to your figures of decillion years, centillion years,
> etc.), by analogy to
> the logic of much simpler systems.
Well, sure. Once you've said that the expiration date on a description
increases exponentially with RAM, or that the time for a thermodynamic
miracle increases exponentially with the number of particles in the
system, you've said everything there is to say from a complexity-theory
perspective. Trying to come up with real numbers for that would be
guesswork.
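The exponential-scaling point above can be sketched as a toy calculation (the binary-degrees-of-freedom model and all numbers here are my own illustration, not anyone's actual physics):

```python
# Toy sketch: if a "thermodynamic miracle" requires N independent binary
# degrees of freedom to all come up the unlikely way at once, the
# per-trial probability is 2**-N, so the expected waiting time in trials
# grows exponentially with N -- which is the whole complexity-theory story.
for n_bits in (10, 100, 1000):
    expected_trials = 2 ** n_bits
    print(n_bits, expected_trials)
```

The shape of the curve is the entire content of the claim; the particular base and particle count only move you along it.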
> The door is open for a real mathematical
> analysis of
> the lossiness and error-prone-ness of knowledge and inference in minds of
> various sizes.
I don't see how you can possibly do this. Given a specific physical
system, you can estimate the error rate of the underlying processes and
demonstrate that, e.g., there is a 10^-50 chance of a single error
occurring in ten thousand years of operation. I don't see how you could
possibly get specific numbers for software errors in a program as complex
as Webmind, much less a human, much less a totally unspecified transhuman.
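To make the flavor of such a hardware estimate concrete, here is a minimal sketch; every number in it (the per-operation error rate, the operation rate) is invented for illustration and not taken from any real system or from the post:

```python
import math

# Hypothetical hardware estimate: each elementary operation fails
# independently with probability p. Over n operations, the chance of at
# least one failure is 1 - (1 - p)**n, which for tiny p is roughly n * p.
p = 1e-70                              # assumed per-operation error rate
ops_per_second = 1e9                   # assumed operation rate
seconds = 1e4 * 365.25 * 24 * 3600     # ten thousand years
n = ops_per_second * seconds
p_any_error = -math.expm1(n * math.log1p(-p))  # numerically stable form
print(p_any_error)                     # on the order of 10**-50
```

This kind of arithmetic works precisely because hardware failure modes are physically characterizable; nothing analogous exists for counting the ways a design can be wrong.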
> I do not intend to provide this at the moment -- though I do hope, before I
> die (if I ever do, which
> I hope I don't ;), to create a real mathematical theory of mind that would
> allow such questions
> to be explored...
There will never be a real mathematical theory of mind that is less
complex than a mind itself; no useful part of the mind will ever be
describable except by all the individual equations governing
transistor-equivalents or neurons. The wish for such a theory is simply
physics envy. We live in a world where physical systems turn out to
exhibit all sorts of interesting, mathematically describable high-level
behaviors; but neither the biology of evolved organisms, nor the behavior
of evolved minds, nor the computer programs we design, exhibit any such
tendency. If you took the sum of all the numbers in a computer's RAM and
plotted it over time, you might find that it danced an airy Gaussian
minuet around a mean, but I don't think you will ever find any behavior
more interesting than that - there is no reason why such a behavior would
exist and no precedent for expecting one. Mathematics is an
extraordinarily powerful tool which will never be useful to cognitive
scientists. We'll just have to live with that.
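The "Gaussian minuet" can at least be simulated; this is a toy model with invented parameters (cell count, update rule), not a claim about any real machine's RAM:

```python
import random
import statistics

# Toy model: 10,000 byte-sized memory cells; one cell is overwritten with
# a fresh uniform value at each step. The grand total hovers near
# cells * 127.5 with a small relative spread -- noise around a mean, and
# nothing structurally richer than that.
random.seed(0)
ram = [random.randrange(256) for _ in range(10_000)]
totals = []
for _ in range(1_000):
    ram[random.randrange(len(ram))] = random.randrange(256)
    totals.append(sum(ram))

mean = statistics.mean(totals)
spread = statistics.pstdev(totals)
print(round(mean), round(spread), spread / mean)
```

The simulation shows the boring behavior the paragraph predicts; the argument is that no cleverer statistic would fare better.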
> Second, your investigation raises an interesting question. If a human were
> faced with the tasks of a
> leaf-cutter ant -- find some leaves to eat, carry them back home and store
> them, then eat them when hungry --
> then presumably the human would not make very many errors of fact or
> judgment. (If the human went insane
> it would be from boredom ;)
Precisely. Effective perfection is achievable.
> The point is: We have sought out tasks that strain our cognitive
> abilities.
Yes. And, at our current level of intelligence, staying sane - or rather,
developing a personal philosophy and learning self-knowledge - strains
these abilities to the limit. Indeed, before the development of
evolutionary psychology and related sciences, humanity's philosophers
totally failed to deduce these facts through introspection - even though
all the information was, theoretically, available.
We have no ability to observe individual neurons.
In the realm of the mind, we have no ability to construct tools, or tools
to build tools, whereby we could examine intermediate levels.
We have no direct access to, or manipulation of, our underlying functional
architecture.
Of course we fail.
Now, you can make up all kinds of reasons why AIs or transhumans might run
into minor difficulties decoding neural nets, failing to achieve complete
perfection, and so on. But it looks to me like simple common sense says
that, if we humans had all these abilities, we would have achieved vastly
more than we have now. No, we still might not be perfect. But we would,
at the very least, be vastly better. And, given enough time, or given
transhuman intelligence, we might well achieve effective perfection in the
domain of sanity.
Will transhumans seek out tasks that strain their abilities? I don't
know. Will they take these domains and make them part of themselves,
assimilating them, running mental simulations, so that their self is also
a strain to understand? Maybe - the border between external and internal
reality gets a bit blurred when you can *really* run mental simulations.
Even so, however, the domain of "sanity" is only a subset of the domain of
"self-observation". I think that the problem of sanity can be solved,
completely and forever. I think it's a problem that strains our current
minds and current access, but would not strain either a transhuman mind,
or a mind with the ability to access the neural-analogue level and build
tools to build tools.
> I don't claim to have demonstrated anything here -- except that there is a
> lot of room for doubt where these
> matters are concerned... that the apparent solidity of arguments based on
> the thermodynamic analogy is only
> apparent. You may well be right Eliezer, but by ignoring "cognitive
> science" issues you really ignore the
> crux of it all. Of course our knowledge of transhuman psychology is fairly
> limited, which just means, to my
> mind, that a lot of humility is appropriate in the face of these huge
> uncertainties...
I never wanted to ignore "cognitive science" issues. The whole point of
the post was to take the ball out of the mathematical court and drop-kick
it back into the cognitive one.
> With a project like building a thinking machine, or proving a theorem, or
> even composing a piece of music,
> one's ideas can eventually be refuted by experience. The algorithm fails,
> the proof fails, the composition
> sounds bad [to oneself, or to others]. In noodling about transhuman
> psychology, there's no feedback from
> external physical or social or mathematical reality, so anything goes --
> until 500 years from now when AI's
> look back and laugh at our silly ideas...
Of course. But, though perhaps I am mistaken, it looks to me like your
beliefs about transhumanity have effects on what you do in the
here-and-now. Certainly your projections about transhumanity, and your
actions in the present, spring from the same set of underlying beliefs.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence