From: Metaqualia (firstname.lastname@example.org)
Date: Sat Jun 19 2004 - 10:55:24 MDT
> It seems to me that it is a cardinal point whether survival is a key
> or only a key value in the preponderance of positive qualia.
Preponderance of positive qualia is achievable.
> The human mind is a rather complex system (an understatement) and
> the fulfilling of people's material needs does not ensure (perhaps does not
> even tend to ensure) the preponderance of positive qualia. People find all
> sorts of good (and bad) reasons to be unhappy.
So that is why you must target happiness directly.
> It's not this type of action I'm referring to. Rather, it's the action of
> someone who sacrifices his life to save a perfect stranger, for example, or
> some other similar case you can imagine. Something which, coinciding with
> one's goal system, is however a positive qualia minimizer, at least
Humans are imperfect positive-qualia maximizers. What is your point?
> However, altruism can be a source of negative qualia, at the individual as
> well as the social level, so that's why I was asking whether you'd advocate its
Only in a zero-sum society, only with imperfect technology and scarcity of
resources, only with legacy Darwinian wetware. You can do better than that.
You can be altruistic and feel good about it: in a world reshaped to
maximize positive qualia, resources are boundless and giving does not
subtract from what you have.
> Let me put it this way: you can 1) modify the environment and slightly
> modify the human mind so that happiness is attainable and roughly
> to the achievement of goals which overlap the collective volition or
Sounds good enough to me.
> 2) you
> can modify the human mind so that happiness becomes necessary (id est,
> whatever happens the subject will be happy). I would argue that, under any
> reasonable objective goal system (I am doubtful that qualia are as
> as you think they are, but that's a whole other matter) 1 is a superior
The reason humans intuitively prefer #1 is that they are unable to
completely step outside their goal system and see that, as long as the
positive qualia are sustainable and expandable without real technical
limitations, the circumstances surrounding the "physical" world are utterly
irrelevant.
Oh my god, a world with the quale for raising children but without actual
children! What an abomination (since children won't actually _exist_ and my
DNA will not actually duplicate).
> The key issue as I see it is the issue of necessity: if you modify people
> such that they are no longer able to be unhappy, you have removed the
> for happiness to have any meaning. Under my current goal system, that has
> high negative utility.
The meaning of happiness is not dependent on unhappiness! Read the FAQ,
chapter 4, at paradise-engineering.com.
> However, maybe I'm not smart/knowledgeable/good enough to see the truth in
> your argument. In that case, collective volition could sort it out and set a
> qualia-maximizer (or whatever compromise of qualia-maximization is found to
> be best) as a successor dynamic.
IF in fact the initial collective volition has any grasp of what qualia are,
which at this point is not at all obvious. It needs a hardcoded
instruction to figure out qualia before making moral decisions;
this is my definition of friendliness bootstrapping.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT