[sl4] foundationalism

From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Sat Feb 14 2009 - 09:23:14 MST

Let me make an analogy between mathematics and ethics. Many objects in
mathematics (e.g. matroids, vector spaces, the natural numbers) have
many alternative axiomatizations. Foundations-of-mathematics
researchers create and offer different possible systems.
Zermelo-Fraenkel set theory is a powerful contender, but there are many
alternatives that are argued to be more elegant, to be more relevant
to mathematical intuition, or to avoid various confusing features.
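
To make the "many axiomatizations" point concrete, here is one familiar example (my illustration, not part of the original discussion): the natural numbers can be specified directly by the Peano axioms, or constructed inside Zermelo-Fraenkel set theory as von Neumann ordinals; both routes deliver the same arithmetic facts.

```latex
% (1) Peano axiomatization (successor function S):
\[
0 \in \mathbb{N}; \qquad
S(n) \in \mathbb{N}; \qquad
S(n) = S(m) \Rightarrow n = m; \qquad
S(n) \neq 0; \qquad
\text{plus the induction schema.}
\]
% (2) Von Neumann construction inside ZF set theory:
\[
0 := \varnothing, \qquad S(n) := n \cup \{n\},
\quad\text{so}\quad 1 = \{0\},\ 2 = \{0, 1\},\ \dots
\]
```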

If the foundations are actively being worked on, does that instability
ripple upward into other mathematicians' work? Do topologists say "I
wish those foundations researchers would quit changing the definition
of a vector space!"? No. There is a social consensus of mathematical
facts. Axiomatizations that lead to a novel conclusion contradicting
that consensus (e.g. "1=0") are tossed out as unacceptable.

The foundations of ethics and morality are not entirely pinned down.
Philosophers publish papers in these areas all the time. Despite this
instability, we can make judgments like: "It is right, correct,
appropriate, moral, and ethical to rescue a person from death by an
oncoming train." There is a social consensus of ethical judgments.

Just like the many alternative definitions of a vector space, there
are many possible justifications for why it is appropriate to rescue a
person from death. You could justify it as an axiomatic moral duty.
You could justify it as useful to the continuance of the species, if
you thought the continuance of the species was morally axiomatic. You
could justify it as maximizing total happiness, if you thought that
was morally axiomatic.

When you are taking actions or advocating for actions you are doing
APPLIED ethics, similar to the working mathematician.

"Should I (donate/volunteer/study neuroimaging) for the sake of
helping to build 'upload' technology? Or should I
(donate/volunteer/study AGI) for the sake of helping to build Friendly
AI technology? What about the RepRap?"

The most stable justifications for these actions are commonsense
morality from the social consensus, NOT foundational axioms of ethics.

On Fri, Feb 13, 2009 at 9:21 PM, Charles Hixson
<charleshixsn@earthlink.net> wrote:
> If we want the future that we build to have any chance of enduring, then we
> need to be clear about our foundational concepts. Blurring distinctions is
> an invitation to false reasoning. So it's important that the language used to
> describe it be unambiguous. This is very difficult. It's probably
> impossible. But one can attempt to minimize the ambiguity while still
> retaining sufficient generality to apply to actual circumstances.

I agree: we need to compromise between foundational precision and generality.

On Sat, Feb 14, 2009 at 4:03 AM, Stathis Papaioannou <stathisp@gmail.com> wrote:
> In response to all these statements the question can be asked, "and
> why is that wrong?" In the final analysis we end up with an
> irreducible ethical principle, "it's wrong because it's wrong".

I agree: justifications will have to bottom out somewhere.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT