Re: Deconstructing Eli: A Final Cautionary Note

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Dec 09 2001 - 14:39:53 MST


Rather than try to reply to all of this, I'm just going to correct the
specific dangling points where I think an idea is being misrepresented and
I don't want to leave it that way, rather than taking the argument any
further. Some of these are things I do care about, such as the accusation
of being anti-logic in general when I was in fact asserting distrust of a
certain ontology.

Some people complain about list quality, some people complain about my
leaving arguments unfinished, and everyone has so many other interesting
uses for my time. Sigh. Some days you just can't win, can't break even,
and can't get out of the game.

Jeff Bone wrote:
>
> "Eliezer S. Yudkowsky" wrote:
>
> I'm not saying you are incorrect about any of your assertions, just that it's sloppy to assume (as
> you have above) that the availability of some speculative loopholes in some (classes of)
> currently-accepted physical laws implies there will be similarly significant loopholes in all
> (classes of) currently-accepted physical laws.

You are transforming a sensible probabilistic argument into a nonsensical
absolute. I do not argue that the history of science, or the current
state of research papers, definitely shows that useful loopholes exist for
all laws. I am saying that they constitute significant but not definite
evidence against the assertion that a given law is knowably absolute. I
am not claiming proof. I am adding a single weight to the scales.
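
To make the weighing explicit - this is just the odds form of Bayes'
Theorem, spelled out, with H the hypothesis that a given law is knowably
absolute and E the historical record of loopholes:

    O(H|E) = O(H) * [ P(E|H) / P(E|~H) ]

Take the logarithm of both sides and each piece of evidence literally
adds one weight to the scales; a likelihood ratio less than one tilts
the balance against H without ever driving it to zero.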

> > I don't trust human conceptions
> > about "logic" because past experience has shown that the universe often
> > defies our intuitive conception of logic.
>
> ELI DROPS THE BALL #3:
>
> Well, I think that about wraps things
> up; if the results of our best efforts to use mathematical (logical) reasoning to discuss various
> issues cannot be trusted simply because human conceptions of "logic" are invalid, then we're
> done. I'll proceed to parse through the rest of this and bat away objections, but Eli --- you
> just shot yourself in the foot -wrt- ever again having a "rational" conversation with a skeptic.
> :-( And I'm not even a skeptic, I'm a believer playing devil's advocate.

Oh, come now! I don't mean that I distrust logic in the classical sense
of simple rationality, or that I prefer emotion to logic. I mean that I
distrust human logical formalisms as an alleged foundation of our
universe's basic ontology. For example, a Turing machine has a single
space of simultaneity and our universe does not (although this does not
alone affect computability). For example, our innate conception of cause
and effect runs into an infinite recursion problem which our universe
clearly manages to resolve one way or another. And Penrose once made the
interesting point, with which I agree, that mathematics should be treated
as a very, very well-confirmed physical theory rather than as a logical
truth.
            /\/\
But I still \ / the Bayesian Probability Theorem.
             \/

Incidentally, I don't think it's irrational to distrust rationality. Real
skepticism should apply to everything, including skepticism. The point at
which my alarms go off is when I hear someone claiming that there is some
specific thing that they trust *more* than rationality.

> > Only if you think that "Moore's Law is likely to continue for the next ten
> > years" should be analogized to "no effective workaround to the second law
> > of thermodynamics will be discovered over the next ten billion years".
>
> ELI DROPS THE BALL #4:
>
> You're having a scaling problem. You have to consider Moore's Law relative to the history of
> human existence, then consider the observable implications of 2LT relative to the current age of
> the universe. And BTW, I'm not speaking of Moore's Law in its strict form; I believe that in
> order to justify prediction of Singularity you must, as Kurzweil does, look for a Moore's Law-like
> principle operating throughout the age of humanity.

Okay, now this is exactly what I was talking about earlier. I don't care
about Moore's Law as a grand sweep across history as long as it holds up
just long enough to deliver me the computing power I need. You say that
"in order to justify prediction of Singularity I must", generalize Moore's
Law, and then attack that generalization as if I had asserted it, which I
do not. If I wish to limit my reliance on Moore's Law to ten years, you
cannot claim that I am guilty of logical contradiction by virtue of the
fact that I *would* be guilty of logical contradiction if I said that
Moore's Law would hold for the next ten billion years, even if you think
that's something I "must" assert.
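
For concreteness - and this is back-of-the-envelope arithmetic, not a
prediction - at the canonical eighteen-month doubling time, ten years is

    10 / 1.5 = 6.7 doublings, and 2^6.7 is roughly 100

so the extrapolation I am relying on is about a hundredfold more
computing power. Nothing about the next ten billion years follows from
that, and nothing about the next ten billion years is needed.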

> > But
> > unless you have a physics degree you've been concealing, and I'm pretty
> > sure you would have mentioned it by now, both of us are simply listing
> > which laws of physics we like and dislike based on their character...
> > rather a silly activity, really.
>
> ELI DROPS THE BALL #5, #6, #7:
>
> Now really, Eli. You of all people should know that it doesn't take a degree to make significant
> advances in a field

True. Consider me corrected. What I meant was that I thought that both
of us had overreached our expertise. I didn't mean that all people not
possessing degrees should be excluded, just that it might take a roughly
degree-equivalent amount of expertise to continue the argument any
further.

> "Jeff doesn't have a physics degree" --> both of us are simply listing favorite laws
>
> doesn't make sense on many dimensions. Why should my degree or lack thereof affect the process by
> which *you* are engaging in this discussion? Why should it impact *my* criteria?

Because I felt myself making a mental reach for the specific mathematics,
and the mental operation failed due to lack of knowledge - I moved from
physics to cognitive science before I started getting into the
differential equations. Maybe you can operate without them, but that
would imply that you're much better at this than I am; and while that
*could* be true, if I believed that, I wouldn't still be arguing with you.

> Is the
> implication that only a degreed person can have selection criteria other than an apparently
> baseless emotional "like" or "dislike?"

Well... I wouldn't use the term "baseless", but yeah. This conversation
has basically degenerated into "I like this law", "I don't like this
law". Perhaps one of us is right and the other wrong, but I don't think
we can take the argument any further.

> Muddy thinking, wrapped up with a pat
> value judgement intended to devalue the whole endeavor: "rather a silly activity, really."

This *is* rather silly. Don't you think so?

> > If it's a decision predicated on the truth of physical law, then what a
> > Friendly AI requires is the ability to make the correct decision based on
> > the truth as known to it at that time.
>
> ELI DROPS THE BALL #8:
>
> But herein lies the handwaving. That's tautological. "What's needed for the Friendly to make the
> correct decision is the ability to make the correct decision based on what is known at the time."

I did NOT say that. What I said was that the DESIGN REQUIREMENT was that
the Friendly AI have the basis to LATER make the correct decision based on
later knowledge, as OPPOSED to the design requirement being that WE make
the correct decision NOW based on CURRENT knowledge.

> > Even if you argue that our current model of
> > physics will affect how we now make moral decisions that establish basic
> > values (supergoals) which are then not dependent on physics, a Friendly AI
> > with causal validity semantics would probably re-model the moral decision
> > we would have made at this point as if we had had accurate knowledge of
> > physics.
>
> Explain the implications of the latter.

We can screw up our understanding of physics without screwing up the
Friendly AI. Causal validity semantics describes the cognitive processes
needed for the Friendly AI to find and fix mistakes in its own creation;
to heal the consequences of those points in its past history where the
programmers took actions based on incorrect models of reality.
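
If it helps to see the shape of that re-modeling step, here is a toy
sketch in Python. None of these names come from the actual Friendliness
architecture; the real cognitive content is in the AI's model of the
programmers' intentions, which a dozen lines cannot capture:

    # Toy illustration only -- not actual Friendly AI code.  The point:
    # a decision records the model of reality it was premised on, so the
    # premise can be re-checked and the decision re-derived later.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        intent: str    # what the programmers were trying to accomplish
        premise: str   # the physical model the decision was premised on
        action: str    # the action chosen under that premise

    def revalidate(decision, believed_physics):
        """Re-derive a past decision whose premise is no longer held."""
        if decision.premise in believed_physics:
            return decision  # the original decision still stands
        # Re-model: what would the programmers have decided at that
        # point, had they possessed accurate knowledge of physics?
        return Decision(decision.intent, "current model",
                        "re-derived action for: " + decision.intent)

    # A decision that establishes supergoal content, premised on an
    # obsolete model of thermodynamics:
    old = Decision("plan for the far future", "no workaround to 2LT",
                   "assume eventual heat death")
    print(revalidate(old, {"2LT admits a workaround"}).action)

The design requirement, again, is not that we make the decision correctly
now; it is that the AI retain enough of the decision's causal history to
make it correctly later.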

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


