From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Oct 24 2004 - 16:31:40 MDT
Hi Eli --
The notion of "rigor" I outlined for you is not my invention, it's just my
summary of "conventional wisdom" in the intellectual world today about what
constitutes a rigorous argument. You're free to believe that rigorous
argumentation in this conventional sense is not valuable. However, if you
want to convince contemporary scientists that your conclusions are good
ones, you should be aware that adhering to this notion of rigor will make
your job a lot easier!
> All men are mortal.
> Socrates is a man.
> Therefore, Socrates is mortal.
> This is a valid syllogism. Is it a rigorous, technically sophisticated
> theory of biology?
It's rigorous, but pretty boring ... and the assumptions are not necessarily
going to be accepted by the listener (since, as an optimistic transhumanist, I
don't buy "all men are mortal" ;-) ).
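For what it's worth, the point that a syllogism's *validity* is independent of the *truth* of its premises can be made formal in a proof assistant such as Lean. The sketch below is just an illustration; the names `Person`, `Man`, `Mortal`, and `socrates` are placeholders, and the mortality premise is taken as a hypothesis, not established:

```lean
-- The Socrates syllogism, formalized: the conclusion follows from the
-- premises for ANY predicates Man and Mortal, whether or not the
-- premises are empirically true.
example (Person : Type) (Man Mortal : Person → Prop)
    (h1 : ∀ x, Man x → Mortal x)  -- premise: all men are mortal (assumed)
    (socrates : Person)
    (h2 : Man socrates) :          -- premise: Socrates is a man
    Mortal socrates :=             -- conclusion: Socrates is mortal
  h1 socrates h2                   -- apply the universal premise to Socrates
```

The proof term is just function application: rigor here means the inference step checks mechanically, and says nothing about whether `h1` holds in the real world.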
> Suppose I "axiomatized" Friendly AI in such a way that my conclusions
> followed from my assumptions, but using silly verbal definitions as of
> Greek philosophy. Would you call that rigorous? Rigorous but wrong?
Well, rigor takes place in the context of a set of reasoning rules, and a
way of describing heuristic assumptions, that is agreed upon by the
community in question. In this sense rigor is culturally relative.
> Is this the kind of rigor that you *want* from Friendly AI
> theory? I have
> little use for conclusions that are absolutely certain given their
> assumptions. I want conclusions that are correct. A Friendly AI is a
> physical object in a physical universe, not a mathematical theorem. We
> need a way to predict the real-world behavior of the physical object. In
> particular, I would consider it a bad idea to build a Friendly AI
> that goes
> mad given one bitflip. Yet the very same Friendly AI might (we can
> imagine) be provably Friendly *assuming* that no bitflip ever
> occurs in any [...]
To convince scientific listeners of your points, you need to make rigorous
arguments that begin from assumptions that your listeners believe.
> But mostly, my objection to your definition of "rigor" (and if you change
> your mind and decide that you want something other than "rigor" from a
> Friendly AI theory, feel free to say so) is that (a) it seems to rule out
> probabilistic conclusions of very high probability, which is the best we
> can ever do in the real world, and (b) it doesn't detect the difference
> between Aristotle and Newton so long as both use technically valid
> syllogisms. It could even penalize Newton, if he didn't bother
> to pretend
> that his experimental predictions were absolutely certain syllogisms.
I never stated that rigor was the ONLY valuable thing in a body of
knowledge, just that it's a valuable thing ... and a valuable thing that
seems to be missing from most of your own work. Most of your own arguments
are full of ambiguities and holes, so when I read them I think they're
interesting, but I'm not really convinced.
My own arguments as to why I believe Novamente will achieve superhuman
intelligence when completed, tuned, and tested are *also* nonrigorous in
this sense. I don't know how to make them rigorous -- the appropriate math
doesn't exist -- so I've chosen to focus on making the thing rather than
rigorously proving it would work if I made it...
Aristotelian theory is fairly rigorous, but it's founded on empirically
incorrect assumptions.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT