Re: qualia, once and for all

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sun Jun 20 2004 - 06:21:35 MDT


[Some of these points have already been replied to in
<http://sl4.org/archive/0406/9329.html>; I'll try to avoid redundancy and
therefore not reply to some of the later points]
Metaqualia wrote:
>>Now I'm getting *really* confused. What is a qualia stream? Human minds
>>may
>
> Just a temporal sequence of qualia. Qualia most times do not exist in a very
> short time frame (like, milliseconds). Except the quale for being surprised,
> or the quale for sudden terror. Most take at least a few seconds to
> appreciate. Other minutes.
Pretty much all qualia (pain, any color, joy) seem instantaneous to me. What
does "appreciate" mean? I may be able to get a reasonably good idea of how a
page of text looks within a split-second, but may need minutes to read it and
perhaps hours to understand the abstract concepts. But the information I parse
(part of the physical state of the page) can still exist in an arbitrarily small
time; how much time my limited mind needs to understand it on any level isn't
really relevant for that.

> Qualia exist in time because the perception of
> them takes place in time and because the apparatus that produces them
> evolves in time, and without the (4dimensional) complexity of that apparatus
> qualia could (probably) not exist.
All perceptions of the human mind take place in time. Any event that is recorded
takes time to process.
Human minds are intrinsically self-modifying in a limited sense, but I don't
know of any data to support the assertion that the past or future states are of
any direct relevance for the processing in the present.
If I understood you correctly, this means that if our universe had just been
created in this instant (complete with human brains, memories and all that),
qualia wouldn't work correctly because the brains wouldn't have existed for long
enough?
The assertion that 4-dimensional (i.e. including data in the past/future)
complexity is directly relevant for qualia appears rather strong and highly
nonobvious to me; I'd like to see some backing evidence.

>>present-self (at time of the thought; yes, this actually is an illusion in
>>itself) at all, but by my past-selves. Some of these are close to my
>>present-self, others very distant.
>
>
> I see your point although the overall subjective impression of this
> happening is not to be continuously replaced by newer selves; living, feels
> like existing in time. Therefore qualia stream continuity exists
> subjectively and that is all that matters from a first person perspective.
Yes, the "overall subjective impression" is not to be continuously replaced by
newer selves, but subjective impressions of humans are frequently incorrect, for
a variety of reasons. One of these is, of course, that in the EEA slanted
impressions frequently yielded "fitter" behaviour.
Qualia stream continuity exists currently, in human minds, which are only a tiny
subset of all possible intelligences that perceive qualia.

>>future, there would then be two very similar versions of what my
>>present-self would regard as future-selves of itself. Both would have
>>the memories of my future-self just before the duplication. Both would
>>perceive qualias they remember as part of "their own" qualia stream. So...
>>does that mean that copying a human also creates a copy of its qualia stream?
>
> Controversial, I think you would copy the qualia stream but the two would
> still be causally separated (which means you couldn't get away with
> terminating the original), although I have no way to know for sure unless I
> actually try the experiment.
Shouldn't a theory about the nature of qualia predict a result of this
experiment? If you don't actually have this kind of theory to explain what
qualia are, how can you realistically draw conclusions about their relevance?

My method, treating qualia as a functional adaptation, predicts the result rather
trivially. According to it, if you duplicate a human, the two resulting
individuals are completely independent (though initially rather similar). Their
memories about qualia are not different from any other memories, and there is
no such thing as a qualia stream at all. Regardless of that, I'm still not
willing to bet on my understanding being correct to the point of advocating a
direct implementation in an AI.
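To make the prediction concrete, here is a toy model (my illustration only, not
anything from this thread; the `Mind` class and its contents are invented): if
qualia memories are ordinary data, duplicating a mind yields two causally
independent individuals whose states diverge from the moment of copying.

```python
import copy

class Mind:
    """Toy stand-in for a mind: nothing but a record of past experiences."""
    def __init__(self, memories):
        self.memories = list(memories)

    def experience(self, quale):
        # A new experience is just appended state; there is no shared "stream".
        self.memories.append(quale)

original = Mind(["red", "ouch"])
duplicate = copy.deepcopy(original)   # the hypothetical duplication event

# After the copy, each individual accumulates experiences independently.
original.experience("joy")
duplicate.experience("terror")

# Both remember the pre-copy experiences as "their own"...
assert original.memories[:2] == duplicate.memories[:2] == ["red", "ouch"]
# ...but the post-copy experiences are completely separate.
assert original.memories[2:] != duplicate.memories[2:]
```

On this model, terminating either copy after the duplication destroys a
distinct individual; nothing about the shared pre-copy memories changes that.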
Why should I trust your understanding if you don't have a qualia hypothesis that
makes clear predictions about the result of relevant experiments?

>>Whatever qualities you need in a qualia-generator, designing one from
>>scratch is likely to give you a more efficient result than using what
>>evolution came up with in humans.
>
> True, all positive minds with positive qualia balance deserve to exist. For
> now, they don't, and we have been stuck with an unfavorable balance for
> decades,
I still don't know how you calculate the balance. You already stated that it
isn't calculated by simply adding the "goodness" (positive or negative) of all
qualia perceived anywhere in spacetime. I'd like to know the formula or at least
the basic principle you use to arrive at the conclusion that the balance is
"unfavorable".

>
>>>These other means inevitably will take us and the ones we care about to
>>>a more favorable balance of positivity and negativity.
>>
>>That is afaik not always correct, and even in many cases where it is you
>>could classify that as a side-effect.
>
>
> Example?
A human might decide to save the life of one of his offspring, thereby allowing
them to live longer and altogether stack up even more negative qualia, before
finally dying anyway. If this doesn't count now because of the pending
singularity, there are still plenty of cases dating back several centuries.
Evolution doesn't care about qualia as an end, it only uses them as a means.
Evolved beings are executors of adaptations ultimately designed to maximally
proliferate their genes, not to give the individuals carrying those genes
maximally positive qualia on average.

>
>>That really won't do at all in my case. All that tells me is that
>>"evolution wants me to avoid situation X", nothing more. I don't have any
>>compelling reason to assume that my 'OUCH' is objectively negative. I will
>>avoid it, partly
>
> Separate the ouch from the mechanicity of the physical response which will
> lead you to avoid situation X. They are not the same.
I agree, they aren't, but according to my hypothesis the function of the ouch is
to trigger the response.

> I consciously avoid
> situations that I deem dangerous. Yet when I am in those situations I do not
> have a sudden and inexplicable subjective experience of negativity. I am
> just consciously aware of the danger, that's all. With pain it's a different
> story altogether. Your hand on the flame. Or walking on a broken foot. Here
> something very different is going on, not just an "avoid situation X"
> directive, but a spooky pain sensation.
It's spooky only as long as I refuse to analyze it from a distance.

>>If I had the ability for complete and safe(!) self-modification, I would
>>likely deactivate my qualia-generating code (yes, all of it) ASAP, assuming
>>that I find something to replace it with that works at least as well (afaik,
>>a typical rational goal system would be likely to work).
>
> And would that not be equivalent to physical death? What reason have you to
> think you are still alive once you lose qualia?
"life" is in my opinion a horribly ill-defined property. I don't know if the
typical uploaded mind would count as "alive" by the current standard of biology,
and it matters little to me.
However, if my mind stopped existing, I wouldn't continue acting to fulfill my
current supergoals. Thus, the supergoals would be less likely to be fulfilled.
Thus, according to my current supergoals this is, all other things being equal,
a less desireable future than one where my mind continues to exist.

> Even being aware of logical
> thoughts requires qualia. These are often not strongly positive or strongly
> negative (unless you are thinking about emotionally charged topics). But
> information processing without qualia (if it can exist) is just the universe
> buzzing away in the background, without you really existing, without anyone
> caring.
What does it mean to be aware? If information processing without qualia cannot
exist, then I will probably need them to continue following my goals. On the
other hand, an AI or a PROP would also need them to function, so you probably
don't have to worry that a working one could be built without using them in that
case.
If it can exist (and I see absolutely no reason why it couldn't), that leaves
your three final statements.
"just the universe buzzing away" - If I got this correctly it means basically
that it would simply be physics at work. But my high-level cognitive processes
are already based on plain old physics. If you look on sufficiently low levels
that is always what you will find.
"without you really existing" - What is existence? I don't see any reason to
assume that qualia are a critical part of it. Being a mind with a completely
rational goal system looks a lot better to me than being software rewritten to
work as orgasmium.
"without anyone caring" - Well, I don't really know that, since I don't advocate
removing qualia from minds that would rather keep them (if that's a good idea
I'll leave it to the CV to figure it out). But leaving that aside..."caring" as
in "feeling concern for the state of another being" is just an emotion, a
functional adaption of evolution. IIRC The Moral Animal stated that it was
basically "investment advice". You see that someone is very hungry, and if you
are reasonably satiated and have some food over, you could play a non-zero-sum
game by giving it to the hungry individual, which would put both of you ahead -
the other individual by not starving immediately, and you by gaining an ally,
that will likely reciprocate later. I don't see why anyone caring necessarily
matters.

> Qualia based objective morality synthesizes a lot of human moralities, which
> apparently have contradictory predictions, and puts them together in a
> coherent matter. It consolidates moral systems as diverse as: animal rights
> activism, scientology, islamic faith, christian faith... It consolidates the
> diverse opinions concerning good and evil such as pro-abortionism,
> anti-abortionism, and so forth.
> It is also simpler to say positive/negative qualia than to recite passages
> from the bible or the quran.
I don't have a problem with treating qualia as motivational forces under certain
conditions. That model probably has validity. I merely don't see it as
justified to assume an objective morality behind it.
What you need to explain these behaviours in terms of qualia is simply an
assumption as weak as "Many humans treat certain qualia as being objectively
good or bad, and base their behaviour on that assumption". If the truth of this
statement about reality is nonobvious, you can work out a questionnaire, take
some representative sample of current humans, and check the results. If you want
data about history, you can correlate it with historic records. You don't need
to base this assertion on anything other than empirical data to build upon it; if
you want to explain it, though, there are several possibilities, and as long as
one of them works out, it doesn't matter which one for the validity of the
theories you stack on it.
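The sampling procedure above can be sketched in a few lines (the field names and
response data here are invented purely for illustration; a real questionnaire
would of course need a properly representative sample):

```python
# Hypothetical survey results: does each respondent treat certain qualia
# as objectively good or bad? (Field names and data are made up.)
responses = [
    {"id": 1, "treats_qualia_as_objective": True},
    {"id": 2, "treats_qualia_as_objective": True},
    {"id": 3, "treats_qualia_as_objective": False},
    {"id": 4, "treats_qualia_as_objective": True},
]

# Count agreement and estimate the fraction of the sample that supports
# the weak statement "Many humans treat certain qualia as objective".
agreeing = sum(r["treats_qualia_as_objective"] for r in responses)
fraction = agreeing / len(responses)
print(f"{agreeing}/{len(responses)} = {fraction:.2f}")  # prints "3/4 = 0.75"
```

The point is only that the statement is an empirical one: it can be checked by
counting, without any commitment to a particular explanation of the result.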

The task of explaining why the statement "Many humans treat certain qualia as
being objectively good or bad, and base their behaviour on that assumption"
appears to be true according to empirical tests is then completely independent of
using the statement to build other theories.
I maintain my claim that EP is a simpler (and therefore more likely to be
correct) method of doing this than Qualia based morality.

>>No, the most URGENT need right now is to stop our house from burning down
>>without blowing it up (sorry, Eliezer). Suffering is a problem, but we can
>
> ?
"The house burning down"-metaphor was afaik coined by Eliezer, and as far as I
understand it refers to the problem that our society's technological
developement in the next years is likely to make several existential risks much
more relevant. One of these is planetary-scale destruction by nanotechnology.
Another is planetary or higher -scale destruction by UFAI.
Unless something to radically alter the situation happens (a Friendly SI or
PROP; large-scale destruction of society without killing all humans) first, it
is likely that one of these problems will occur and cause much larger problems
than the ones we have currently within a few decades.

>>probably tolerate a few more years/decades/centuries of it if we need that
>>time to find a safe way to alleviate it. We probably don't have that time
>>because of
>
> We? can probably tolerate? Who? Is this you typing at the end of the
> keyboard? or kids in the street with dirty feet and disease trying to
> survive until tomorrow? or is it laboratory animals who are cut up alive? or
> is it people with chronic depression or other mental problems who literally
> live in hell? There is still a food chain out there. We fortunately stepped
> out of it when we developed the neocortex, but animals still live with the
> reality of having their flesh torn apart and eaten. The world is a nasty,
> nasty place unless you are an upper class human with good mental health.
> Another century of the biosphere creating massive amounts of negative
> qualia? Utterly immoral.
That "we" was over-the-top, I can of course not really speak for all sentient
beings.
While I don't share your opinion of certain qualia being objectively morally
good or bad, I'm certainly not pleased with having a death clock of
150000humans/day ticking in the background. I agree that there are a lot of
serious problems, but rushing a solution with the potential to make everything
orders of magnitude worse is not a good idea.

I have no idea how you would calculate the balance of goodness given all qualia,
but if we screw this up we lose our only chance of fixing everything. Food
chains have existed for millions of years, and humans have existed for
thousands; another hundred years is not going to cause any irreparable damage. A
UFAI is very likely to cause irreparable damage, and taking unnecessary risks in
this area is not justified, regardless of what you believe to be objectively
good. Qualia, intelligence, cheesecake, whatever; if we do this right we can
then correct the balance, and we will have more time, more intelligence and more
resources than were ever used in the past to damage it. If we do it wrong, we
have lost. Permanently. Rushing into this is going to make things worse, for
everyone.

> Ok this is quite new. Then you can start a zero-qualia movement! :-)
> But it is really easy to switch them off just suicide. As for the mechanical
> aspects of life, once you stop being aware of them, who cares??
My goal system does; and again I remark that I have no reason whatsoever to
believe that qualia are not "mechanical".
I have no intention to start a zero-qualia movement. This is just my relatively
uninformed opinion, and while I haven't read nearly enough to convince me of it
being wrong, I'm not willing to assume that it is right either. I'll leave it to
the CV to figure that out.

> But if I screw up we are still left with paradise, it's a good plan B :)
I strongly disagree, see reasons above.

>>I would go farther and claim that they aren't detached from the process at
>>all.
>
> parts of them are detached from the process, I could be here talking about
> qualia and not really 'feeling' them.
Thinking and talking about X does not require having had any direct contact with
any part of X. I could think and talk about mammoths, Vogons and Jovian Lizards
without ever having had contact with one, never mind that about 1.4 of these
three are fictional.
The ability to manipulate concepts about X does not in itself give any
information about the preconditions of X or its links with reality.

>>It is imho self-evident for certain qualias because evolution hardcoded
>>it. Does yellow have a self-evident positive or negative nature?
>
> Some qualia are mostly neutral or only take on a value if associated with
> other qualia. For example you can have very pleasant sensations of color
> when you look at something beautiful. But yellow in itself... slightly
> positive I guess, not _that_ positive.
And why do you suppose that's the case? I'd like a theory that allows me to
explain the goodness/badness quotient of old qualia, and predict that of new
ones given certain data about the qualia.

Sebastian Hagen



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT