Re: [sl4] foundationalism

From: Vladimir Nesov (robotact@gmail.com)
Date: Mon Feb 23 2009 - 12:55:52 MST


On Sat, Feb 14, 2009 at 7:23 PM, Johnicholas Hines
<johnicholas.hines@gmail.com> wrote:
> Let me make an analogy between mathematics and ethics. Many objects in
> mathematics (e.g. matroids, vector spaces, the natural numbers) have
> many alternative axiomatizations. Foundations-of-mathematics
> researchers create and offer different possible systems.
> Zermelo-Fraenkel set theory is a powerful contender, but there are many
> alternatives that are argued to be more elegant, to be more relevant
> to mathematical intuition, or to avoid various confusing features.
>
> If the foundations are actively being worked on, does that instability
> ripple upward into other mathematicians' work? Do topologists say "I
> wish those foundations researchers would quit changing the definition
> of a vector space!"? No. There is a social consensus of mathematical
> facts. Axiomatizations that lead to a novel conclusion contradicting
> that consensus (e.g. "1=0") are tossed out as unacceptable.
>
> The foundations of ethics and morality are not entirely pinned down.
> Philosophers publish papers in these areas all the time. Despite this
> instability, we can make judgments like: "It is right, correct,
> appropriate, moral, and ethical to rescue a person from death by an
> oncoming train." There is a social consensus of ethical judgments.
>
> Just like the many alternative definitions of a vector space, there
> are many possible justifications for why it is appropriate to rescue a
> person from death. You could justify it as an axiomatic moral duty.
> You could justify it as useful to the continuance of the species, if
> you thought the continuance of the species was morally axiomatic. You
> could justify it as maximizing total happiness, if you thought that
> was morally axiomatic.
>
> When you are taking actions or advocating for actions you are doing
> APPLIED ethics, similar to the working mathematician.
>
> "Should I (donate/volunteer/study neuroimaging) for the sake of
> helping to build 'upload' technology? Or should I
> (donate/volunteer/study AGI) for the sake of helping to build Friendly
> AI technology? What about the RepRap?"
>
> The most stable justifications for these actions are commonsense
> morality from the social consensus, NOT foundational axioms of ethics.
>

I call the associated failure mode in AI, following Stuart Russell [1],
"premature mathematization", by analogy with the infamous "premature
optimization". One shouldn't start by studying a concept completely
defined by its foundation, since you are not interested in the
foundation itself, but rather in the problem you are trying to solve,
which the given foundation may fail to address. On the other hand,
there are many expressively complete foundations, modes of thinking
about problems, that don't really limit what you can think about or
what you can model with them. They specify the fundamental rules of
the game, from which you can construct any game whatsoever. It may be
useful to study such rules and their consequences with precision, even
if you don't yet know how to apply them. And so we should study many
fundamental technical (mathematical) fields, even where we don't yet
know how to apply them to FAI.
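
(As an illustrative sketch of what I mean by "rules of the game" --
my own gloss, not part of the argument above -- the handful of axioms
of set theory already suffice to construct other familiar structures,
e.g.:

    (a, b) := { {a}, {a, b} }         (Kuratowski ordered pair)
    0 := {},  n + 1 := n ∪ {n}        (von Neumann natural numbers)

Nothing about pairs or numbers is built into the foundation; they are
games constructed within its rules.)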

[1] Stuart J. Russell, "Rationality and Intelligence", Artificial
Intelligence, Vol. 94, No. 1-2 (1997), pp. 57-77.

-- 
Vladimir Nesov
http://causalityrelay.wordpress.com/
