From: Ben Goertzel (firstname.lastname@example.org)
Date: Fri Feb 13 2004 - 21:46:18 MST
> The only point where I disagree with EPT is where you say that any
> meta-ethical or ethical system must in the end be irrational and based on
> "invented" assumptions. If this were really the case, then there would not
> be a "best"/"better" moral system but they would all be equivalent bar
> none, including the most evil and those that generate most negative
> qualia.
Well, I don't think all ethical systems are equivalent. However, I think
there are many different ethical and meta-ethical systems that are
equivalently good from the point of view of reason. Reason merely judges
whether an ethical or meta-ethical system is CONSISTENT or not. It makes no
judgment beyond that. And I think there are many different consistent
ethical and meta-ethical systems.
> In my opinion once you experience pain and pleasure directly, that is an
> absolute frame of reference given to you, it is a data point.
Each quale is what it is -- positive or negative -- but the notions of
"total positive qualia" or "total negative qualia" over some region of time
are quite abstract and are not really immanent in the qualia themselves in
any direct way. It's not so clear to me that qualia permit this kind of
summation -- maybe they do, but it's certainly not obvious to me...
> We can argue that we have no real evidence other people and animals also
> experience the same thing, although most people seem to be convinced that
> at least other people do experience it (the reasons why they believe this
> are by the way often the wrong ones). Even if you make such an argument
> however, then you will agree that morally the thing to do is be
> conservative and assume other people feel stuff in just about the same
> way; at least for now. Because if you are not conservative you have
> considerable likelihood of producing events that in retrospect may become
> very evil.
I do agree with this, but not for rational, logical reasons.... I think it
is logically consistent to assume other minds have no qualia, AND it is
logically consistent to assume other minds have qualia. The choice between
these perspectives seems NOT to be a matter for reason. At least, given the
current state of science.
> I think that qualia-based morality is not invented or equivalent to
> maximizing <favorite object> but the only moral theory that can justify
> its stuff in universal/physical terms from a simple perceptual
> perspective.
Conceivably this might be true ... but I don't see how you can justify this
statement rationally right now, when there is no rational way to even
explain how to sum up the valence of a set of qualia ... or to explain why I
should assume you have qualia ... etc. As of now, your belief in
qualia-based morality would seem to be (yet another) nonrational belief
being used as a foundation for a value system...
> Another thing that made me think is how easily you dismiss an orgasmic
> universe as not worthy of being our ultimate goal. I don't see any good
> reason to dismiss it. First, all growth and freedom are in the end only
> there to let us increase our happiness.
Again, this last statement is a value judgment you are making.
I think it is possibly a logically consistent point of view (though I'm not
certain of that). But I am pretty sure it is not the ONLY logically
consistent point of view.
When there are multiple conflicting logically consistent perspectives,
something besides logic needs to make the decision.
> With immediate happiness who cares
> about everything else including freedom and growth?
Yes, I understand that if a being is experiencing a constant orgasm then it
won't care about anything else. To me, that does not demonstrate that
experiencing a constant orgasm is a desirable state.
> Second, the things that look so cool to us now, like space exploration,
> warping the laws of physics, whatever you care to imagine following from
> immense growth and freedom, these are just little tricks for an AI who has
> already gotten there. There is no intrinsic moral valence in freedom.
There is no intrinsic moral valence in anything, in my view. Moral valence
is extrinsic, it's imposed by the valuing mind.
You see intrinsic moral value in happiness -- but it's YOU who's seeing
that... the moral value is in your mind, your value system, not in the
happiness itself.
> Everything that has value for us, including romance, reproduction,
> success, freedom, achieving our goals, this ONLY makes sense from our
> particular perspective of evolved mammals.
Sure, but your notion of "happiness" also makes sense only from the
perspective of evolved mammals.
Happiness as experienced by you is:
a) a certain neurochemical cocktail, or
b) less reductionistically, a certain type of experience had by a certain
type of organism.
And then you generalize this experience into a broad concept of
"happiness."
Just as I generalize the human experiences of growth and choice also...
We have to take our mammalian experiences and do our best to generalize and
abstract from them. The result of this is not infinite wisdom, but it's
greater wisdom than if we just took our mammalian experiences directly as
the only truth.
> These are coupled with happiness so closely for us that when we think of
> success we think of happiness, when we think of freedom we think of the
> absolute good. But these things are good only because they are always
> (from our perspective) followed by happiness. We must decouple these and
> realize that satisfaction alone is our ultimate goal and the factors that
> commonly trigger it are dispensable and irrelevant.
You keep repeating that we must all realize that the goal YOU posit as
important is the only important goal!!
But you can't give any kind of rational argument for this... supporting my
point that the choice of abstract ethical principles is nonrational...
> This is not an argument in favor of the orgasmium universe, which is only
> one scenario. I personally think that in order to keep maximizing positive
> qualia and minimizing negative ones, a lot of resources will need to be
> dedicated to growth and research. All I am saying is that we can't dismiss
> the orgasmium very easily. It is probably a mistake to consider orgasmium
> as a mindless jelly; more likely it will be a highly intelligent substance
> capable of producing transhuman qualia of all sorts. Therefore I see a
> window of possibility of growth being a necessary precursor to
> maximization of positive qualia and minimization of negative ones
> (although not a "sibling" supergoal, just a physical necessity).
I agree, it is possible that the maximum of joy, growth and choice will
occur via some kind of universewide cosmic orgasmium. Quite possible! Gee,
I hope so...
But if the choice comes down to orgasmium VERSUS "growth and freedom plus a
decent amount of joy", I'm going to choose the latter, whereas it seems
you're going to choose the former. And that's fine with me, so long as we
each get the choice (because choice, according to my value system though not
yours, is a fairly important thing).
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT