Re: Overconfidence and meta-rationality

From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue Mar 22 2005 - 17:35:37 MST



| This is why, when people accusingly say the Singularity is a
| religious concept, or claim that hard takeoff is inspired by
| apocalyptic dreaming, I feel that my best reply remains my
| arguments about the dynamics of recursively self-improving AI.
| That question stands in closer causal proximity to the matter of
| interest. If I establish that we can (or cannot) expect a
| recursively self-improving AI to go FOOM based on arguments purely
| from the dynamics of cognition, that renders the matter of interest
| conditionally irrelevant on arguments about psychological
| apocalyptism.

Obviously. Understanding the Singularity really only takes a few steps.

1.) AI is possible.
2.) Superhuman AI is possible, given (1).
3.) AI will seek to improve its IQ indefinitely.

All attacks on the Singularity attack (1), (2) or (3), either
qualitatively or quantitatively, and frequently the attacks are not
based on science or strict rationality. Most Singularists, I would
say, would order those steps by likelihood of being correct as
2, 3, 1. That is, most people think that if AI is possible, making the
AI even smarter should be pretty easy. Also, no one can really see why
an intelligence would prevent itself from getting smarter, so they
grant (3) fairly readily as well. Most challenges concern the nature
and extent of (1). Sometimes these arguments are a direct reaction to
(1), but they may also arise because people disagree with a later
step, or with the concept of the Singularity itself.

I'm sure I don't need to go into it, but I will outline some of those
arguments by heading -- the argument from qualia, the argument from
behaviourism, the argument from religion/spirituality, the argument
from feasibility. For whatever reason, people who argue against the
Singularity can be broadly split into those who don't like the idea of
the Singularity, and those who just don't think it will work.

Your argument about the Wright brothers is a simple analogy dealing
with the practical elements of (1). However, the Wright brothers now
have something you still lack - proof by example. To use your language
- arguments about the feasibility of AI become conditionally
irrelevant once the AI itself can be measured.

The world is big enough, and has enough money, that it can afford to
wait for proof. There is no reason to deny your arguments absolutely,
nor to accept them unconditionally. I accept in principle the
fundamental thinking of this list, but we still lack even a good
neuroscience of IQ, let alone a true understanding of qualia or
anything like that. It is as though there exists a proper framework
for discussing intelligence, but no "physics" of intelligence.
Cognitive science still lacks the ability to describe and predict
many things.

I don't think that the feasibility of the Singularity can be
established /a priori/, because I don't think it's obviously true that
machines will achieve both AI and the experience of qualia. Nor do I
think it's obviously true that machines will achieve the indefinite
improvement in IQ suggested by Singularist positions.

I don't have a drawing tablet, so let me "describe" a graph to you.

The Y axis represents IQ, and the X axis represents complexity of
configuration. As the complexity of configuration increases, it
becomes more and more difficult to build or understand that kind of
brain. A point on the graph represents the IQ of a brain built at that
level of complexity.

Now, what configurations might lead to superhuman AI? Is there a
steady rise of intelligence with complexity? If the answer is yes,
there will be consistently increasing levels of IQ with complexity.
But what if the answer is no? Surely we can imagine some highly
complex configurations which do not produce good minds, and some good
minds which are produced by configurations of lesser complexity than
those?

If that is the case, let us imagine a graph with many turning points,
and many peaks and troughs, covering the area. This, perhaps,
represents the true map from possible brains to their resulting IQ.
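
For concreteness, here is a toy version of that map in Python. The
numbers are made up purely to show the shape of the idea:

# IQ as a bumpy, made-up function of configuration complexity
iq_by_complexity = [60, 75, 90, 85, 70, 95, 120, 110, 105, 140, 130]

# relative peaks: configurations that beat both neighbours
peaks = [c for c in range(1, len(iq_by_complexity) - 1)
         if iq_by_complexity[c] > iq_by_complexity[c - 1]
         and iq_by_complexity[c] > iq_by_complexity[c + 1]]

print("relative peaks at complexities:", peaks)   # -> [2, 6, 9]

A mind sitting at complexity 6 sees lower IQ immediately on either
side of itself; whether it can ever find its way to the higher peak at
complexity 9 is the question below.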

If this is the case, it's possible for the Singularity to get trapped
in relative peaks - too stupid to understand how to move from its
existing configuration to a more complex one beyond its current
understanding. We currently sit in an evolutionary peak - there is
little selective pressure to change our biology, but perhaps we are
also in a cognitive one. Or, more likely, perhaps our artificial
descendants will become trapped in a cognitive peak.

I would argue this is probably *true*. Imagine that the curve of IQ
against complexity takes a random walk. Also imagine a line rising at
45 degrees from the origin. This line represents the complexity of
mind comprehensible to a particular IQ.

This means:

* It's possible to have a simple, effective mind which can understand
minds more complex than itself, to a point. These other minds may or
may not have greater IQ.

* It's possible to have a complex mind which has a low IQ and does
not properly understand itself.

* To find out what other minds a particular mind can understand, find
out where the "complexity line" intersects the current IQ. Anything
less complex than this intersection point can be understood.
Complexity is defined in such a way as to make this a truism.

The alternative configurations available to a mind (A) are anything of
complexity equal to or less than that allowed by its IQ. This *may*
allow it to envision alternative, more complex minds with IQs that are
higher still, or it *may not*.

This is how I suppose things to really be, with one qualification:
there is a *general tendency* for minds with a higher IQ to be more
complex.
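
To make that concrete, here is a minimal Python sketch of the model.
The drift, noise level, starting values and the greedy improvement
rule are all assumptions of mine, chosen purely for illustration, not
a claim about real minds:

import random

random.seed(0)

N = 500                        # number of candidate configurations
iq = [10.0]                    # assumed IQ of the simplest configuration
for _ in range(N - 1):
    # random walk: drift of 1 per unit complexity (the "general
    # tendency"), plus noise around it
    iq.append(iq[-1] + random.gauss(1.0, 4.0))

def best_reachable(c):
    """Highest-IQ configuration no more complex than mind c can grasp."""
    limit = iq[c]              # the 45-degree rule: comprehension = IQ
    candidates = [k for k in range(N) if k <= limit] or [c]
    return max(candidates, key=lambda k: iq[k])

# Start simple and greedily self-improve until no comprehensible
# configuration is any better - i.e. until stuck at a relative peak.
c = 0
while True:
    nxt = best_reachable(c)
    if iq[nxt] <= iq[c]:
        break
    c = nxt

print("stopped at complexity %d, IQ %.1f (best IQ on the walk: %.1f)"
      % (c, iq[c], max(iq)))

Depending on the seed and on the drift and noise you choose, the
climber sometimes works its way a long distance up the walk and
sometimes halts early at a relative peak. That, in code form, is the
sense in which I do not think (3) is automatic.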

As a result, I challenge (3) more than either (1) or (2). That is, I
believe some degree of AI is possible. I think humans are clearly
biological machines, and am not here challenging the religious
position. I also think that any intelligence is likely to explore
superhuman intelligences. However, I don't think it's necessarily true
that this is always achievable, because there is nothing "automatic"
about understanding how to reach ever higher levels of intelligence.

Given the current levels of human self-understanding, it might well be
that our own minds are too complex for us to understand fully.
However, given our success in creating useful machines, it might be
that a less complex artificial mind of superhuman intelligence is
possible. In fact, it is this possibility of increased
self-understanding that makes superhuman AI so attractive.

Hmm, well, I'd better get back to work. Sorry if there's too much
sloppy thinking in the above email; it is a response, not a paper.

Cheers,
-Tennessee


