Re: qualia, once and for all

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sat Jun 19 2004 - 09:37:43 MDT


Metaqualia wrote:
>[Sebastian Hagen wrote:]
>>One obvious example of the difference between "pure QBAU" and volitional
>>morality is that an AI whose goal system was based on "pure QBAU" would
>>very likely immediately start converting all of the matter in the universe
>>into computronium on which to run "happy minds", painlessly killing all
>>biological life in the process, possibly without even waiting to upload any humans.
> A valid objection, thanks for raising it.
> The total sum of positive qualia is not a straight scalar value. Each
> sentient's qualia stream is produced by a collection of particles. When I
> say maximize positive qualia, minimize negative qualia, I am making an
> oversimplification for the sake of introducing the idea. If you want to get into
> exactly _how_ to calculate the sum, great, as long as we accept that the end
> in itself is valuable.
I can't do that unless I understand the idea completely. At the moment I
apparently don't, and knowing your algorithm for calculation and the
justification for it may help.

> What you are doing is starting from the consequences
> of an idea and using those to reject or accept its validity. This is not
> lawful. If you agree in principle that maximizing positive qualia is really
> "the thing" to do, then it doesn't matter whether we are replaced by
> orgasmium! That would be the right thing to do.
I agree. The quote about a possible outcome wasn't intended as an argument in
itself.

> To answer your concerns, since each one of us is only aware of qualia
> produced in a small portion of the universe, a positive balance must be
> achieved in _each one_ of these subsystems. You can't take a healthy king
> and a sick peasant and average out their qualia.
> Since beings that have lived and are living today have a red balance
> (negative qualia have far overwhelmed positive ones) they have the right to
> immediate assistance. This means that everyone alive today needs to be
> satisfied consistently for some time before they are even (if such a thing as
> "even" can ever exist between positive and negative qualia; ideally negative
> qualia should be eradicated, which is entirely within our capabilities in
> the next century). It means that every qualia stream that lived before needs
> to be - if physically possible - brought back so that it can also break
> even.
Now I'm getting *really* confused. What is a qualia stream? Human minds may
typically think that they have them, but thinking so doesn't make them real. All
the qualia I remember having perceived weren't actually perceived by my
present-self (at the time of the thought; yes, this is itself an illusion)
at all, but by my past-selves. Some of these are close to my present-self,
others very distant.
Thought experiment:
If I made an essentially perfect copy of my body using, say, advanced MNT (some
quantum-states would probably be lost, but they don't appear critical) in the
future, there would then be two very similar versions of what my present-self
would regard as future-selves of itself. Both would have the memories of my
future-self from just before the duplication. Both would perceive the qualia
they remember as part of "their own" qualia stream. So... does that mean that
copying a human also creates a copy of its qualia stream?

>>even uploaded human minds - the SI could modify uploaded human minds to
>>the point of being efficient qualia generators, but why? If qualia generation
>>is the
>
> because we exist? You seem to think of qualia as phenomena which are
> dissociated from a sentient. You say, ok, let's get rid of the sentients and
> pump qualia; actually that is likely not to be possible; you probably need
> vast areas of a sentient's brain in order to create qualia. And even if that
> turns out not to be true, it would simply mean that until the present we
> have thought of ourselves as brains, but we were just that little speck of
> brain which produced the qualia. In that case WE - as qualia producing
> speck - will be preserved while the heavy machinery of our bodies and
> useless parts of our brain will be rightfully wiped out of existence :)
I don't know what sentience is, but if it is required for qualia-generation,
there are probably more efficient implementations of a general
"qualia-generating sentient" than uploaded human minds (at least ones not
modified to the point of being unrecognizable as such).
Whatever qualities you need in a qualia-generator, designing one from scratch is
likely to give you a more efficient result than using what evolution came up
with in humans.

>
>>>Freedom has always been associated with the ability of carrying out
>>>one's
>>>wishes which are supposed to increase positive qualia and decrease
>>>negative ones.
>>
>>Imho, it equally applies to the ability of carrying out wishes that are
>>supposed to achieve other ends.
>
>
> These other ends will inevitably take us and the ones we care about to a
> more favorable balance of positivity and negativity.
That is, afaik, not always correct, and even in many cases where it is, you
could classify that as a side-effect.

>>>YET, we can strike a compromise here and say that the
>>>_Variety_ of positive qualia is also important, therefore we account for
>>>growth. More intelligence, bigger brains, more complex and interesting
>>>positive qualia.
>>
>>Why would we want to do that if the overall positiveness of qualia is
>>really all we cared about?
>
> In the same way that I cannot justify objectively why positive qualia are
> better than negative ones, but can only point you back at your own OUCH
> experience,
That really won't do at all in my case. All that tells me is that "evolution
wants me to avoid situation X", nothing more. I don't have any compelling reason
to assume that my 'OUCH' is objectively negative. I will avoid it, partly
because I'm not a completely rational being, and partly because it would cause
my mind to fall into patterns that reduce its efficiency at doing what I
consider its real job.
If I had the ability for complete and safe(!) self-modification, I would likely
deactivate my qualia-generating code (yes, all of it) ASAP, assuming I could
find something to replace it with that works at least as well (afaik, a typical
rational goal system would be likely to work).

> I also cannot justify the need for variation; there are many
> kinds of positivity. They all, perceptually and ineffably, are positive. Red
> is a nice color, so is yellow. You want to pump up redness to infinity and
> forget the yellow? That would be such a waste!
>
>
>>>Using qualia as a measuring stick we reconcile all our individual
>>>morality assessments including why Hitler was evil, why we are justified in
>>>forcing our children not to jump from the window thereby limiting their freedom
>>>at times, why a paperclip universe sucks, and so forth.
>>
>>I don't think that justifies making a basic assumption as strong as the
>>one that
>>qualia represent objective morality.
>
>
> If I unified the forces of the universe into a single theory which contains
> in a more elegant form all other theories, would that justify making a basic
> assumption as strong as the one that my theory represents the theory of
> everything?
Occam's razor still has validity. If you could offer a theory whose verifiable
predictions make it at least as good at predicting reality as the best
competitors (according to Bayes' theorem), and which is simpler (I don't
understand what elegance is) than those competitors, then yes, that would
justify it.
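
For concreteness, a minimal sketch of the kind of comparison I mean, i.e.
unnormalized Bayesian odds with a simplicity prior (the description lengths
and likelihoods below are invented numbers, nobody's actual theory):

def posterior_odds(desc_len_a, desc_len_b, likelihood_a, likelihood_b):
    # Occam prior: a theory of description length L bits gets prior ~ 2^-L,
    # so the simpler theory starts out more probable.
    prior_odds = 2.0 ** (desc_len_b - desc_len_a)
    # Multiply by how well each theory predicted the observed data.
    return prior_odds * (likelihood_a / likelihood_b)

# Two theories that predict the data equally well, with A 10 bits simpler:
# A is favoured 1024:1.
print(posterior_odds(desc_len_a=100, desc_len_b=110,
                     likelihood_a=0.5, likelihood_b=0.5))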

>>Hmmm. If I understood this correctly you assume that any sufficiently
>>intelligent mind would see "perceiving qualia with as positive a sum as
>>possible" as a justified highest-level goal. Can you offer any proof of
>>that?
>
> no! wait. It would have to contain the same module that produces qualia in
> us. that is why I want an FAI to get to the bottom of qualia before it makes
> any moral judgment! Since I don't know how qualia are produced it is not out
> of the question that a completely logical and subjectively inert process can
> be started that does computation on an abstract level. A zombie AI. A zombie
> AI would +not+ know about qualia and would correctly judge our subjective
> reports as trash and wipe us out.
How can it correctly judge them as trash if qualia are a part of objective reality?

>>"the other stuff that doesn't make [me] happy" is in my opinion likely to
>>be
>
> Exactly, we can argue about variety of positive qualia until the sun stops
> shining, but the URGENT need right now is to remove negative qualia! At
> least the most severe and fruitless forms of them, on which everyone will
> agree. For instance, everyone deserves not to be depressed, not to have
> seizures, not to get their limbs amputated, not to be a lab animal, not to
> lose a lover, and so forth!
No, the most URGENT need right now is to stop our house from burning down
without blowing it up (sorry, Eliezer). Suffering is a problem, but we can
probably tolerate a few more years/decades/centuries of it if we need that time
to find a safe way to alleviate it. We probably don't have that time because of
other developments, but having some really big problems to solve doesn't
allow us to rush incompletely verified solution-methods into implementation if
that runs a serious risk of making our situation a lot worse.

>>perceiving positive qualia in wireheaded-mode is not a future I deem
>>desirable according to my current goal system.
>
> You forget that you are already in wirehead mode. Right now the wire is
> working in this way. If you make an effort to know more, explore the
> universe, figure out the multiverse, raise 2 children, if you spend your
> life in an endless routine of worry, effort and problem solving, if you go
> through the negativity that the wire will produce for you day in and day
> out, like a passive boxer with anaesthesia, THEN the button will push itself
> and you will see, in a rush of positive chemicals, that it ALL was worth it.
> You will see not how the chemicals are pink and wonderful and smell so good,
> but you will see how wonderful kids are and how great an achievement it is
> to conquer the cosmos and how great boxing is. We are all wired! The question
> is: do you prefer cruel mother nature to push the buttons randomly, with an
> evidently unfavorable balance, or do you want to push your own buttons?
I don't want nature to push the buttons (it isn't really random). I don't want
to push them myself either, unless I really, really know what I'm doing (and
right now, I definitely don't). And I don't want an SI to give me the choice
without giving me some other upgrades, or without having me read and understand
(not just click away) about a thousand warning dialog-boxes first.
Right now, it seems like a good idea to me to switch them off entirely. I might
well be wrong about that (and certainly don't advocate hardwiring it into an
AI!), but turning the goodness knob to maximum and everything else to zero
doesn't look like a good idea to me at all.
<qualia_speculation>
What we call qualia is imho a type of information-transfer between the human
subconscious and conscious thoughts (no, I don't know what that really means
either). A human sees an object that reflects a distinctive part of the light
spectrum, and their brain presents it to them as a sensation that is easy to
remember and to recognize in the future. A human touches a fire, and their
subconscious transfers this as a form of input that is rated as strongly
negative.
It is a method of transferring information to, and controlling the goals of,
humans and likely other higher mammals. The system doesn't have any more
justification than the xenophobic instinct of getting aggressive at and killing
people different from ourselves when resources run low (just an example; there
are plenty of other evolutionary adaptations I consider negative). The output
of the system doesn't have any more justification either.
</qualia_speculation>
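
Under that hypothesis, a crude toy model (entirely my own invention; the
mapping and the valence numbers are hypothetical) might look like this:

# The "subconscious" maps raw input to a compact label plus a valence signal.
SUBCONSCIOUS = {
    "650nm_light": ("red", 0.0),       # neutral, easy-to-recall tag
    "hand_in_fire": ("pain", -10.0),   # strongly negative control signal
    "sugar_on_tongue": ("sweet", +5.0),
}

def conscious_step(percept, goals):
    """The 'conscious' side sees only the label and the valence, and
    adjusts its goals to seek positive and avoid negative signals."""
    label, valence = SUBCONSCIOUS[percept]
    if valence < 0:
        goals.add("avoid: " + label)
    elif valence > 0:
        goals.add("seek: " + label)
    return label, valence, goals

print(conscious_step("hand_in_fire", set()))
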
This hypothesis is quite likely to be incorrect, but unless you can rationally
show that you have a better one, I'll go on treating qualia as yet another
misguided adaptation.
Regardless of which of us is correct (or whether either of us is), if the first
SI is CV-using and it works, we both win.
With my current knowledge, I'm not at all willing to take this massive gamble
on an AI with a qualia-based morality.

>
>>morality is objectively a good idea, I'll continue considering hardwiring
>>a qualia-based morality into any AI something that is very likely to cause a
>>lot of negative utility.
>
> I have previously presented a theory that says that an AI with sufficient
> intelligence AND an ability to at least initially perceive qualia, will come
> to the same conclusion: that qualia _matter_. So hardwiring this is option
> #2.
>
> It goes something like this: qualia are not completely detached from the
> process that creates them, because we can say something like "I feel a
I would go further and claim that they aren't detached from the process at all.

> negative quale". Therefore it is possible -physically- to analyze a quale
> introspectively.
Agreed.

> The negative nature of a negative quale is self-evident.
> The AI will be no less puzzled than we are at discovering one variable that,
> unlike everything else, matters so much and cannot be communicated in
> standard ways. Then it will go the same route I have, declaring war on it,
> and raising qualia balance control to a supergoal.
>
It is imho self-evident for certain qualia because evolution hardcoded it. Does
yellow have a self-evident positive or negative nature?

> But, this requires the machine to be able to modify its goal structure. It
> requires programmer thought. In humans, this is possible but there is
> individual variation. Your objection that "you are a sentient and still see
> happiness alone to have negative utility" may be an indication of your
> personal difficulty in altering your goal structure (actually flipping it
> around) at this point.
Perhaps. I prefer to see it as the ability to put up a little more resistance
to my evolutionary programming than that programming evolved to break
effectively.

Sebastian Hagen


