From: Eliezer S. Yudkowsky (email@example.com)
Date: Sat Dec 08 2001 - 22:22:51 MST
Jeff Bone wrote:
> There is also a big difference between saying "long-term predictions made from current
> scientific understanding *may* be inaccurate" and "long-term predictions made from
> current scientific understanding *must* be inaccurate." What you are stating is much
> closer to the latter than I am comfortable admitting,
Uh... why? In what way is this not a total strawman argument?
> and again I will claim that
> those kinds of comments are most often heard from people that are either ignorant of a
> given field or predisposed to be antagonistic to the logical conclusions of a
> particular field. Given that you're neither, Eli, then I am rather surprised to hear
> you make arguments like this.
... and in what way is this not a total ad hominem?
> So here's what's wrong with this argument: just as technology has been accelerating
> non-linearly (perhaps asymptotically) over history, so has scientific understanding
> (the accuracy of our models for making predictions at longer terms and finer levels of
> "resolution") been accelerating similarly.
This observation, I rather like. The problem, as my humorous post
suggests, is workarounds where the physicists stand around innocently
saying "Violation? What violation?"
> Note we aren't talking about "technological impossibilities," rather logical and
> physical constraints. Apples and oranges. "The world market for computers is around
> five," "we will never put a man on the moon," etc. are all dumb statements. OTOH,
> things like QED aren't about impossibilities, they are probabilistic models for actual
> physical events.
The problem is "physical constraints" that turn out to be merely
"technological impossibilities". Future citizens may look back on us and
say "How could they possibly believe in the logical impossibility of
"global causality violation" when there were already so many different
physics papers proposing methods for constructing closed timelike curves?"
I say this, BTW, not because I want to have closed timelike curves or
because my whole precious universe will come apart like tissue paper if
CTCs are impossible, but because I'm trying to make the point that there
are *already* proposed workarounds. These proposals may stand or fall,
but at any rate it is not *yet* true to say "our current model of physics
says XYZ is 100% impossible". Sure, even if XYZ *is* 100% impossible, I
would expect a certain number of papers with subtle flaws arguing for
various unworkable workarounds, so the papers are not evidence in that
sense. For that matter, something which appears possible under our own
laws of physics could turn out to be impossible under the real laws of
physics! There could be dozens of basic limits we haven't discovered
yet! But the issue isn't settled yet.
Humanity may have at least as many surprises in waiting as all those it
has already encountered... or even a far greater number of surprises. I
am reminded of the line in Zindell's novel "Neverness" which mentions in
passing that physicists pursued the trail of fundamental particles
composing fundamental particles down through 200,000 layers before finally
giving up. (And before you accuse me of whatever, I want to say that I
personally believe that quarks are it - the fundamental particles do keep
getting simpler, so I doubt the trail continues forever.)
The sole evidence for the "many surprises" proposition is the Principle of
Mediocrity, which isn't really evidence at all. In truth, I have no idea,
and it is not possible that I should have any idea, no matter which way it
ultimately turns out. Sometimes the Principle of Surprise Mediocrity is
the conservative assumption for Friendly AI (how much philosophical depth
is required?) and sometimes not (how much specific moral content should we
be able to generate right now?), so I generally have to keep track of both.
> Modulo accepted physical and mathematical constraints that form the most essential
> underpinnings of our most accurate theories: things like c, the Bekenstein Bound,
> 2LT, Gödel's theorem, the Halting Problem, Chaitin's incompleteness theorems, etc.
> While I place a high probability on at least one of those being incorrect, there's
> good reason to believe that the probability of all of them being incorrect is
> approaching zero.
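As an aside, the quoted probability claim can be made concrete with a toy calculation. (The per-limit probability of 0.3 and the independence assumption are mine, purely for illustration; nothing here depends on those particular numbers.)

```python
# Toy illustration of the quoted claim: even if each individual limit
# (c, the Bekenstein Bound, 2LT, Goedel, Halting, Chaitin) has a fairly
# high chance p of being overturned, the chance that *all* of them are
# overturned shrinks geometrically with the number of limits.
p = 0.3   # assumed probability that any single limit is incorrect (arbitrary)
n = 6     # number of limits listed above

p_all_wrong = p ** n                        # all limits independently incorrect
p_at_least_one_wrong = 1 - (1 - p) ** n     # at least one limit incorrect

print(f"P(all {n} wrong)      = {p_all_wrong:.5f}")        # 0.00073
print(f"P(at least one wrong) = {p_at_least_one_wrong:.5f}")  # 0.88235
```

So "at least one of those being incorrect" can remain quite likely even while "all of them being incorrect" is approaching zero, which is the shape of the argument being made.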
I don't think that the size of the hole that would be left in our
comfortable worlds by a stunning disproof should be allowed to militate
against the long-term probability of disproof. Remember also that the
probability of an "innocent physicist" workaround for any given limit is
probably much higher than the probability of an actual disproof.
If, leaving workarounds aside, I had to pick one of these rules as *most*
likely to survive, I'd pick c - it seems to be built into the nature of
causality in our universe.
> To the extent that your world-view requires you to blissfully ignore the implications
> of such things, you should recognize it as the former: a pseudo-religious expression
> of faith used to support belief in highly speculative hypotheses.
To what extent does my world-view require me to blissfully ignore the
implications of such things? Obviously I'd rather live in a universe
where true immortality is possible, but I acknowledge that this is not a
variable my actions can influence. None of my current actions are
predicated on that variable taking on a particular value, so why am I
being accused of religious faith? As far as I can tell, my sole crime is
that I attach a 20% probability to irrelevant-but-fun hypotheses to which
you would rather grant a 90% probability. Is this really adequate
evidence for you to conclude that my entire worldview is religiously
biased toward pleasant possibilities, especially where the usual outcome
of such a case is the assignment of 0% probability, or more often a
complete refusal to address the issue? Is it your thesis that I am
ignoring actions which I should be taking to prepare for the 20%/90%
probability that some law remains solid over the long run? I've already
explained why a resolution of this issue is not required to construct
Friendly AI.
> I want to believe
> in Friendliness, Eli --- indeed I do believe that superhuman AI is an inevitability,
> and I'd like for it to be benign.
Dear me. How biased.
> But honestly, your arguments are inspiring less
> confidence rather than more. :-(
Yes, well, I have some experience with the bizarre matrix of
self-reinforcing misinterpretations that usually results in such a
statement. You don't appear to be an advanced case, and hopefully can be
extracted from whatever corner you're currently wedging yourself into.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence