From: Metaqualia (email@example.com)
Date: Thu Feb 12 2004 - 20:12:35 MST
Well, I agree with most points.
The only point where I disagree with EPT is where you say that any
meta-ethical or ethical system must in the end be irrational and based on
"invented" assumptions. If this were really the case, then there would be no
"best" or "better" moral system; they would all be equivalent, bar none,
including the most evil ones and those that generate the most negative qualia.
In my opinion, once you experience pain and pleasure directly, that is an
absolute frame of reference given to you; it is a data point. We can argue
that we have no real evidence that other people and animals also experience
the same thing, although most people seem convinced that at least other
people do experience it (the reasons why they believe this are, by the way,
often the wrong ones). Even if you make such an argument, however, you
will agree that the moral thing to do is to be conservative and assume other
people feel things in just about the same way, at least for now. Because if
you are not conservative, there is a considerable likelihood of producing
events that in retrospect may turn out to be very evil.
I think that qualia-based morality is not invented or equivalent to
maximizing <favorite object>, but is the only moral theory that can justify
its claims in universal/physical terms from a simple perceptual perspective.
Another thing that made me think is how easily you dismiss an orgasmic
universe as unworthy of being our ultimate goal. I don't see any good
reason to dismiss it. First, all growth and freedom are in the end only
there to let us increase our happiness. With immediate happiness, who cares
about everything else, including freedom and growth? Second, the things that
look so cool to us now, like space exploration, warping the laws of physics,
or whatever you care to imagine following from immense growth and freedom,
these are just little tricks for an AI who has already gotten there. There
is no intrinsic moral valence in freedom. Making an AI is easy. Going
transhuman is easy. We just haven't gotten there yet. Once you do get there,
it will be just normal: "the laws of physics have always allowed it". There
is no intrinsic "good" in being smarter; it's just a state of affairs. It
looks cool now because we're pathetic monkeys thinking about how many bananas
we can eat with transhuman AI, but it won't be cause for excitement or joy
after we get there, any more than we rejoice about having binocular vision
when we wake in the morning.
Everything that has value for us, including romance, reproduction, success,
freedom, and achieving our goals, ONLY makes sense from our particular
perspective as evolved mammals. These things are coupled with happiness so
closely for us that when we think of success we think of happiness, and when
we think of freedom we think of the absolute good. But these things are good
only because they are always (from our perspective) followed by happiness. We
must decouple them and realize that satisfaction alone is our ultimate goal,
and that the factors that commonly trigger it are dispensable and irrelevant.
This is not an argument in favor of the orgasmium universe, which is only one
scenario. I personally think that in order to keep maximizing positive
qualia and minimizing negative ones, a lot of resources will need to be
dedicated to growth and research. All I am saying is that we can't dismiss
the orgasmium so easily. It is probably a mistake to picture orgasmium as
a mindless jelly; more likely it will be a highly intelligent substance
capable of producing transhuman qualia of all sorts. Therefore I see a
window of possibility in which growth is a necessary precursor to the
maximization of positive qualia and minimization of negative ones (although
not a "sibling" supergoal, just a physical necessity).
So this is the experiment: whenever something good happens, try to break it
down into the sensation it triggered vs. the actual event. How do these look
This archive was generated by hypermail 2.1.5 : Sun May 19 2013 - 04:00:57 MDT