RE: Ethical theories

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Thu Feb 19 2004 - 15:16:56 MST


Ben wrote:

>Rafal wrote
>>
>> I think we could begin by making the metaethical statement
>> "Formulate rules which will be accepted" (although this statement is
>> actually a high-level link in a very long-term recursive mental
>> process, rather than a starting logical premise).
>
> That's interesting. It's a little deeper than it seems at first, and
> I need to think about it more.
>
> At first it seems a pure triviality, but then you realize what the
> preconditions are, in order for the statement to be meaningful. For
> "be accepted" to be meaningful, one needs to assume there is some
> mind or community of minds that has the intelligence and the freedom
> to accept or to not accept. So one is implicitly assuming the
> existence of mind and freedom. So your rule is really equivalent to
>
> "Ensure that one or more minds with some form of volition exist, and
> then formulate rules that these minds will 'freely' choose to accept"

### You are close to getting to the bottom of the issue here, but let me try
to reformulate the initial meta-ethical statement. As you point out, this
statement is actually applicable only to ethical systems professed by
creatures interested in survival - but creatures which don't care about
their own lives can have ethical systems, too. Let me then make a hopefully
more general meta-ethical statement - "Formulate rules that make themselves
into accepted rules, or make themselves come true" (i.e., cause the existence of
states of the universe, including conscious states, in agreement with goals
stated in the rules). Or: "Formulate rules which, if applied, will have as
their outcomes the goals explicitly understood to be inherent in these
rules". Or: "Do not formulate rules which have outcomes *opposite* to those
intended". If the goal of a rule is to have a "good" outcome, where
goodness is defined within the rule itself, then only rules which have good
outcomes are good rules, and ethical systems which have good outcomes are
worth considering. Ethical systems which by their very structure have
results opposite to or uncorrelated with the goals of these systems would appear
to be inferior to those which produce intended outcomes, because the very
essence of ethics is to define desired outcomes, no matter what they
actually are. I think that this is the basic meta-ethical statement we can
make, essentially demanding rationality in ethics.

From the demand for rationality in ethics one can derive further
meta-ethical statements:

- Computability: a system which does not provide rules sufficient to compute
the desirability of the concrete actions open to decision-makers (such as
the system consisting of the sole statement "Be good") is useless,
uncorrelated with outcomes.
- Internal consistency: the system should not make contradictory
recommendations for a single situation.
- Wide applicability: a system that gives guidance in only a few situations
is less useful (less correlated with outcomes) than a system applying
everywhere.
- Stability under changes of input: systems which totally change their
recommendations after minor changes in inputs are likely to be affected by
random misinformation and therefore uncorrelated with outcomes.

I think similar points were made earlier in this thread - sorry for the
repetition.
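
Just to make the four criteria concrete, here is a toy sketch - purely
illustrative, with the function names and the "situations" invented for the
example; real ethical evaluation is of course nothing this simple - treating
an ethical system as a function from situations to recommendations:

def evaluate_system(system, situations, perturb):
    """system(s) -> a recommendation, or None if it offers no guidance;
    perturb(s) -> a minimally changed version of situation s."""

    def verdict(s):
        # Computability: a usable system must return *some* answer
        # rather than fail on the concrete cases put to it.
        try:
            return system(s)
        except Exception:
            return None

    verdicts = {s: verdict(s) for s in situations}

    # Internal consistency: the same situation never gets two answers.
    consistent = all(verdict(s) == verdicts[s] for s in situations)

    # Wide applicability: fraction of situations with any guidance at all.
    applicability = sum(v is not None for v in verdicts.values()) / len(situations)

    # Stability under changes of input: how often a minimal perturbation
    # of the situation flips the recommendation.
    flips = sum(verdicts[s] is not None and verdict(perturb(s)) != verdicts[s]
                for s in situations)
    stability = 1 - flips / len(situations)

    return {"consistent": consistent,
            "applicability": applicability,
            "stability": stability}

# The sole rule "Be good" computes nothing about concrete actions:
be_good = lambda s: None          # no guidance in any situation
print(evaluate_system(be_good, list(range(10)), lambda s: s + 1))
# applicability comes out 0.0 - useless, uncorrelated with outcomes

On this toy reading, "Be good" scores zero on applicability, which is
exactly the sense in which such a system is uncorrelated with outcomes.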

All these considerations seem, to a first approximation, to be independent
of the content of ethical systems, but they depend on epistemological
features of existing minds - which in turn *are* linked to ethics via the
shared physical environment which caused both our desires and our
truth-finding faculties to develop. This introduces a bit of circularity
between ethics and epistemology, but I don't think such circularity would
invalidate the meta-ethical statements - it merely makes them contingent on
the current state of our (physical) truth-finding capabilities - and
everything we say shares this feature.

---------------------------------
>
> If we define happiness_* (one variant of the vague notion of
> "happiness") as "the state of mind a volitional agent assumes when
> it's obtained what it wants", then your rule is really equivalent to
>
> "Ensure that one or more minds with some form of volition exist, and
> then formulate rules that these minds will 'freely' choose to accept,
> because they assess that accepting these rules will bring them an
> acceptable level of happiness_*"
>
> My point in tautologously unfolding your rule in this way, is to show
> that (as you obviously realize) it contains more than it might at
> first appear to...
>
> However, the shortcoming it has, is that it doesn't protect against
> minds being stupid and self-delusional. Volitional agents may accept
> something even if it's bad for them in many senses. (This is because
> happiness_* is not the only meaningful sense of happiness).

### Well, as I mentioned above, I wanted to say something even less
dependent on our current structure of volition, which for most humans
contains a desire to exist. I hope that the reworked statement is more
general, and thus it wouldn't entail the need for the continued existence of
minds espousing a given ethics, much less the specific content of joyousness
or growth. I understand that this makes it even less intuitively compelling
than my initial statement, but it is more meta-ethical.

Rafal


