Re: The inevitability of death, or the death of inevitability?

From: Jeff Bone (jbone@jump.net)
Date: Sun Dec 09 2001 - 01:28:26 MST


Some of these things call for a calm yet detailed response, so here's my attempt at one.
Let me set the stage up front: I'm not trying to pick a fight just because, nor to cast
aspersions on you or your efforts, Eli; indeed, I agree with you more often than not. That
having been said...

"Eliezer S. Yudkowsky" wrote:

> Jeff Bone wrote:
> >
> > There is also a big difference between saying "long-term predictions made from current
> > scientific understanding *may* be inaccurate" and "long-term predictions made from
> > current scientific understanding *must* be inaccurate." What you are stating is much
> > closer to the latter than I am comfortable admitting,
>
> Uh... why? In what way is this not a total strawman argument?

So I'm not sure what the "why" in your first question refers to; if it's the "big
difference," then surely you can reflect on it and conclude that the qualifiers are the big
difference in question. If you are asking why I see a similarity between your statement and
the "must" formulation, let's revisit what you said:

     "science moves forward, therefore our extrapolation of the next
     billion millennia changes every fifty years".

The question isn't whether it changes; we agree that it does. The question is how
*much* it changes, i.e. the accuracy of predictions from current understanding vs. later
understanding. Your position seems to assume that because our models of certain distant
cosmological environments (e.g. the first three minutes, etc.) have changed dramatically
over the last 50 years, they must therefore change in a qualitatively similar way over the
next 50 years. That, however, ignores the very real, very observable Moore's Law-like
growth in both the quantity of predictive models and the accuracy of their
predictions. (IMO, there's something very fundamental about Moore's Law --- it may well be
a close relative of some as-yet unexpressed theory about the way knowledge builds and
accumulates, much as Turing's and Godel's results are related to each other.)

Given the application of Moore's Law to the argument for the Singularity Hypothesis, I
find it surprising that you don't see the inconsistency you are introducing here. The
pattern of reasoning that leads from Moore's Law to the Singularity is closely analogous to
the argument from an exponential increase in the number and accuracy of scientific models
to an asymptotic level of accuracy for such models.
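
To make that shape of argument concrete --- and this is a toy formulation of my own, not
anything Eli has claimed, with A and k as nothing more than placeholder symbols --- suppose
each round of revision shaves off a roughly constant fraction of the remaining predictive
error. Then aggregate accuracy behaves something like

     A(t) = 1 - e^(-kt)

which climbs toward an asymptote: the revisions get smaller, not larger, as t grows. That's
the same shape of extrapolation that carries Moore's Law to the Singularity.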

Regardless of that, it's clear that some of the priors in the formulation of your
world-view are *less* justified by observation, logical proof, and experimental verification
than, say, the evidence suggesting that the cosmological constant has a certain value, that
2LT holds over an infinite expanse of spacetime, etc. Given that, if you believe your own
conclusions, it seems like special pleading to discount other conclusions drawn from even
more empirically grounded priors. If you believe that the Singularity can be predicted from
existing limited observations with some degree of accuracy, then you should believe that
cosmological eschatology can be discussed with a proportionally greater degree of accuracy,
given the greater degree of verifiability (and verification) of its priors.

To believe otherwise is inconsistent.

> > and again I will claim that
> > those kinds of comments are most often heard from people that are either ignorant of a
> > given field or predisposed to be antagonistic to the logical conclusions of a
> > particular field. Given that you're neither, Eli, then I am rather surprised to hear
> > you make arguments like this.
>
> ... and in what way is this not a total ad hominem?

Actually, it's not an ad hominem; it's a genuine expression of surprise. The appeal isn't
to personal considerations; it's to logical consistency and rationality.

> > So here's what's wrong with this argument: just as technology has been accelerating
> > non-linearly (perhaps asymptotically) over history, so has scientific understanding
> > (the accuracy of our models for making predictions at longer terms and finer levels of
> > "resolution") been accelerating similarly.
>
> This observation, I rather like. The problem, as my humorous post
> suggests, is workarounds where the physicists stand around innocently
> saying "Violation? What violation?"

Yes, but until you demonstrate at least *some* likelihood of such a violation (or
workaround), you can't just argue from your gut that such things are possible. At least,
not if we're having a rational discussion.

> > Note we aren't talking about "technological impossibilities," rather logical and
> > physical constraints. Apples and oranges. "The world market for computers is around
> > five," "we will never put a man on the moon," etc. are all dumb statements. OTOH,
> > things like QED aren't about impossibilities, they are probabilistic models for actual
> > physical events.
>
> The problem is "physical constraints" that turn out to be merely
> "technological impossibilities".

Granted. And hence I'm less inclined to trust "laws" that are phenomenological --- like
the inviolability of c --- than things that are more abstract in both their construction
and their application, such as Godel's, 2LT, etc. The latter transcend their
phenomenological formalizations and speak to deep formal, mathematical, and epistemological
concerns. As such, they are accessible to the tools of mathematics, logic, and reason ---
and experiments related to them can be conducted in the mind, mostly independent of any
given technological constraints or capabilities. Advancing technology may increase the
accuracy of our predictions using such tools, but it is unlikely to overthrow the
fundamental concepts and relations.

Another way to look at this: very general predictions about the far future are much more
likely to be accurate than very specific ones. (There are lots of reasons for this, but it
seems hardwired into our universe.) Given that, I'm much more inclined to give weight to
cosmological eschatological arguments than to more phenomenological considerations such as
the probability of constructing closed timelike curves.

> For that matter, something which appears possible under our own
> laws of physics could turn out to be impossible under the real laws of
> physics!

Aha, now I see the conflict! You are a Platonist! ;-) What, exactly, pray tell, are the
"real" laws of physics? Where are they kept? ;-) Remember: the "laws of physics" as we
understand them now, and as we will ever understand them at any given time, are only an
instantaneous snapshot of a monotonically improving model.

> Humanity may have at least as many surprises in waiting as all those it
> has already encountered... or even a far greater number of surprises.

I would grant this statement a probability approaching unity: either it is true, or it is
true that humanity may have encountered most of the surprises it will ever encounter. ;-)
Even a strong belief that we have yet to face the majority of surprises we will ever face
doesn't render the field of cosmological eschatology meaningless, nor does it imply that we
should not attempt to use the tools we have today to make predictions and use those
predictions to guide our current decision-making.

> I
> am reminded of the line in Zindell's novel "Neverness" which mentions in
> passing that physicists pursued the trail of fundamental particles
> composing fundamental particles down through 200,000 layers before finally
> giving up. (And before you accuse me of whatever, I want to say that I
> personally believe that quarks are it - the fundamental particles do keep
> getting simpler, so I doubt the trail continues forever.)

I think you're right --- there *is* a bottom. Here's my current guess-nearly-hypothesis.
We know that spacetime is likely to be discrete, quantized. I suspect that just as
space/time, matter/energy, and the various forces have each been (or are likely to be)
unified via the various contemporary theories, so are we likely to unify space/time with
matter/energy. QG and the various string and brane theories are already very close to this
view. IMO, the quarks are mostly-persistent deformations in the spacetime fabric. The
whole thing has a single substrate. Here's the more outlying guess: there's really an even
more fundamental thing, an "information gradient." The whole space/time/matter/energy
fabric --- the total phase space of the universe --- is actually a complex information
environment, in which information is conserved. At the end of the day, simulation or not,
everything really *is* bits --- i.e., one kind of quantity. :-) And the entire
phenomenological universe is actually a result of the behavior of this fabric in the
presence of 2LT, an undiscovered opposing counterpart to 2LT, and perhaps a handful of
other simple rules that combine in complex nonlinear ways.

But enough wild guessing on that front.

> I don't think that the size of the hole that would be left in our
> comfortable worlds by a stunning disproof should be allowed to mediate
> against the long-term probability of disproof.

But the only arguments you've given for long-term probability of disproof are
methodologically inconsistent with the arguments you've bought into elsewhere to weight the
probability of Singularity, as illustrated above!

> If, leaving workarounds aside, I had to pick one of these rules as *most*
> likely to survive, I'd pick c - it seems to be built into the nature of
> causality in our universe.

Actually, that's the one I'd rate as least likely to survive. 2LT is *much* more intimately
involved in causality --- "c+1 violates causality" only holds true if many other things
hold true, while "systems tend from order to disorder" (pardon the simplification) *by
itself* explains one of the most puzzling asymmetries in physics ("the Arrow of Time")
without introducing an observer bias.
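
(To spell out the textbook form of that claim --- my gloss, not Eli's words: for an
isolated system the second law says

     dS/dt >= 0,   i.e.   S(t2) >= S(t1) whenever t2 > t1,

so entropy by itself distinguishes "later" from "earlier," with no reference to an observer
and no reference to any particular signal speed.)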

c has lots of problems. One of the most underdiscussed problems in physics today is that
not only is c different in different contexts (media), but there's accumulating empirical
data suggesting that c may be inconstant in the *same* context over time --- that c may
indeed be decreasing. It's *very* unfashionable to publish about that, hence the literature
is scant and mostly confined to some rather fringey types without much professional
credibility with the mandarins. ;-) Let's hope that if this is true it's a local phenomenon
and not a universal one, as the cosmological implications of the latter are very unpleasant,
even worse than the conclusions that can be drawn from 2LT. :-/

> To what extent does my world-view require me to blissfully ignore the
> implications of such things? Obviously I'd rather live in a universe
> where true immortality is possible, but I acknowledge that this is not a
> variable my actions can influence.

While it's true that you cannot influence whether this is possible or not, if it *is*
possible your actions can influence the probability of achieving that goal. Don't be
timid! :-) This is the crux of my cautionary statements to you: the impact of the choices
we make now, the bias we give to our successors, may indeed fix the course and the
probability of achieving such things, and may limit the probability of achieving whatever
maximal positive outcome is achievable.

Here's a gedankenexperiment. If we had to decide between the two following scenarios,
which would we choose: a Friendly SI and successors that are capable of carrying the
entirety of pre-Singularity humanity for the next 10^20 years in idyllic bliss but unable to
escape some eventual catastrophe (because resources are allocated to achieving individual
satisfaction rather than to other considerations that would ensure longevity and security),
or a Friendly SI and successors that are capable of eventually building a post-Human society
that experiences subjectively infinite time, at the cost of some (perhaps large) amount of
involuntary pain, suffering, or termination for some (perhaps large) portion of immediately
pre-Singularity humanity? I don't know which is preferable, and I have no bias one way or
the other --- though I think the conversation is definitely worth having. I indeed believe
that the choices we make now can impact such things; the "moral" bias we provide to the
first AIs is very likely to determine which outcomes are possible.

> None of my current actions are
> predicated on that variable taking on a particular value, so why am I
> being accused of religious faith?

I'm not accusing you of that. I'm merely pointing out that it's very hard for humans to
accurately assess their priors and reason about things in a manner divorced from sentiment.
You and I share, at least in a general way, the same cognitive architecture that makes that
true, and it's important that we be sure we aren't taking the values of certain variables
on faith. I'm not saying that you are --- I'm mostly just tweaking you --- I'm just throwing
flags where there appear to be relatively unexamined (or at least underexplained)
assumptions.

> As far as I can tell, my sole crime is
> that I attach a 20% probability to irrelevant-but-fun hypotheses to which
> you would rather grant a 90% probability.

??? Which ones?

> Is it your thesis that I am
> ignoring actions which I should be taking to prepare for the 20%/90%
> probability that some law remains solid over the long run?

My thesis is that it's *possible* that you're ignoring such possibilities. Given the
profound potential impact on the future course of humanity that a success in your endeavors
might have, I'm just throwing in my $0.02 to encourage you to be as diligent as possible in
such considerations. That doesn't require an answer; it's just a checkpoint.

> I've already
> explained why a resolution of this issue is not required to construct
> Friendly AI.

To be fair, I don't wholly buy the arguments set forth in the various monographs --- all of
which I've read numerous times --- but I should pick those apart directly rather than
obliquely. It wasn't my intent to do that in this thread.

> > I want to believe
> > in Friendliness, Eli --- indeed I do believe that superhuman AI is an inevitability,
> > and I'd like for it to be benign.
>
> Dear me. How biased.

That statement wasn't an argument; hence, expressing a bias in it isn't inappropriate.
Indeed, you've expressed various biases along those lines at various points in time. ;-)

> Yes, well, I have some experience with the bizarre matrix of
> self-reinforcing misinterpretations that usually results in such a
> statement. You don't appear to be an advanced case, and hopefully can be
> extracted from whatever corner you're currently wedging yourself into.

And how exactly is that not an ad hominem? ;-)

jb


