From: Eliezer Yudkowsky (email@example.com)
Date: Sun Oct 24 2004 - 12:54:07 MDT
Ben Goertzel wrote:
>> Actually, let me amend that question. According to your given
>> definition, we can never make a "rigorous" statement about any
>> real-world issue.
>> The empirical probability of a physicist's prediction is not certainty
>> - it can't be, not in an uncertain universe.
> The real-world observations made by experimental physicists go into the
> ASSUMPTIONS based on which the correctness of mathematical physics is
> judged.
> So when I ask you to accept my physics theory as rigorous, I'm asking
> you to accept that my theory makes correct deductions from its
> assumptions about experimental data.
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
This is a valid syllogism. Is it a rigorous, technically sophisticated
theory of biology?
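The point can be made formally: the deduction goes through for *any* predicates substituted for "man" and "mortal", which is exactly why its validity says nothing about biology. A minimal sketch in Lean 4 (predicate and variable names are illustrative, not part of the original exchange):

```lean
-- The syllogism is valid for arbitrary predicates Man and Mortal:
-- nothing biological enters the proof.
variable (Person : Type) (Man Mortal : Person → Prop)

theorem socrates_is_mortal
    (all_men_mortal : ∀ p, Man p → Mortal p)  -- All men are mortal.
    (socrates : Person)
    (socrates_is_man : Man socrates)          -- Socrates is a man.
    : Mortal socrates :=                      -- Therefore, Socrates is mortal.
  all_men_mortal socrates socrates_is_man
```

The proof is one application of the universal premise; swap in "All men are fish" and it checks just as well.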
Suppose I "axiomatized" Friendly AI in such a way that my conclusions
followed from my assumptions, but using silly verbal definitions like
those of Greek philosophy. Would you call that rigorous? Rigorous but wrong?
> You could consider it rigorous but wrong, if you disagree with me about
> what experimental data is valid.
> As I said, the notion of rigor is in part cultural, therefore tricky to
> define in a rigorous way ;-) I don't think this makes the notion
> useless, however.
> For example, math theorems as published in math journals aren't 100%
> rigorous -- they're not fully formalized like in Mizar -- yet they're
> culturally accepted as rigorous...
Is this the kind of rigor that you *want* from Friendly AI theory? I have
little use for conclusions that are absolutely certain given their
assumptions. I want conclusions that are correct. A Friendly AI is a
physical object in a physical universe, not a mathematical theorem. We
need a way to predict the real-world behavior of the physical object. In
particular, I would consider it a bad idea to build a Friendly AI that goes
mad given one bitflip. Yet the very same Friendly AI might (we can
imagine) be provably Friendly *assuming* that no bitflip ever occurs in any
of its hardware.
But mostly, my objection to your definition of "rigor" (and if you change
your mind and decide that you want something other than "rigor" from a
Friendly AI theory, feel free to say so) is that (a) it seems to rule out
probabilistic conclusions of very high probability, which is the best we
can ever do in the real world, and (b) it doesn't detect the difference
between Aristotle and Newton so long as both use technically valid
syllogisms. It could even penalize Newton, if he didn't bother to pretend
that his experimental predictions were absolutely certain syllogisms.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:00:45 MDT