From: Lee Corbin (firstname.lastname@example.org)
Date: Thu Apr 24 2008 - 22:16:57 MDT
> Lee wrote:
>> > You'll have to stop at BusyBeaver(2.91 x 10^122) (Bekenstein bound
>> > of the Hubble radius).
>> Very good! Few people understand the philosophical significance of
>> the Busy Beaver. Add just *one* neuron and the capacity of a system
>> is potentially increased by an enormous factor. We cannot begin to
>> imagine just how much pleasure a cubic meter of material could
>> experience, or how smart it could be. Much less a Jupiter brain.
>> So I do not take "unbounded" literally. It really cannot be taken
>> literally. The reason for this is that the speed of light is finite.
>> If a brain gets too big, it ceases to be a single entity.
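To put rough numbers on the quoted "add just one neuron" point: the small known values of the Busy Beaver function Sigma(n) (the standard published ones; Sigma(5) = 4098 was only proven in 2024) already show the jump that one extra state can buy. A quick sketch:

```python
# Known values of the Busy Beaver function Sigma(n): the maximum
# number of 1s an n-state, 2-symbol Turing machine can leave on an
# initially blank tape before halting. These are the standard
# published values; beyond n = 5 the function is unknown and
# uncomputable in general.
KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13, 5: 4098}

for n in range(2, 6):
    ratio = KNOWN_SIGMA[n] / KNOWN_SIGMA[n - 1]
    print(f"Sigma({n}) = {KNOWN_SIGMA[n]:>5}  (~{ratio:.0f}x the previous)")
```

The jump from Sigma(4) = 13 to Sigma(5) = 4098 is a factor of over 300 for a single extra state, and the growth only accelerates from there.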
> (One correction. The Hubble radius is not constant, so you don't
> have to stop).
By bringing in the speed of light, I was not referring to the
size of our Hubble volume. It's only 45 or so billion ly in
radius, which, if Tegmark is right, is of course nothing
in an infinite universe. (And I've been looking for a long
time: no one has proved that the universe is not infinite.)
The speed of light limits how big an "individual" can be.
If it conducts thoughts at the speed of light, then is the
0.1 second or so delay across the world still okay for
a single mind? How can I think when half my brain
needs a few seconds to get up to speed with what the
other half has already "thought"? So solar-system-sized
intelligences, with hours of internal light delay, are out of the question.
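The arithmetic behind those delays is easy to check. The distances below are approximate round figures (my assumptions, not measured data):

```python
# Light-delay arithmetic for the paragraph above.
C_KM_S = 299_792.458          # speed of light in km/s

def delay_s(distance_km: float) -> float:
    """One-way signal delay at light speed, in seconds."""
    return distance_km / C_KM_S

# Halfway around the Earth (~20,000 km of surface path):
print(f"across the world: {delay_s(20_000):.3f} s")            # ~0.07 s
# Across the orbit of Neptune (~9e9 km diameter):
print(f"across the solar system: {delay_s(9e9) / 3600:.1f} h")  # ~8 h
```

A tenth of a second is at the edge of what a unified mind might tolerate; eight hours plainly is not.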
> I proposed a bound on happiness (or unhappiness)
> of K(S2|S1), where S1 is the state of an intelligence
> before the reinforcement signal, S2 is the state
> afterwards, and K is Kolmogorov complexity.
An SR (stimulus-response) theory of happiness?
That's a new one. Drugs can make one quite,
quite happy (on any reasonable person's usage
of the term), at least for a while. There's no SR
that I can see. It's just neurons firing, again.
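For what it's worth, K(S2|S1) itself is uncomputable, but the usual stand-in in the compression literature approximates conditional complexity with a real compressor: how many extra compressed bytes S2 costs once S1 is already known. A sketch (my illustration, not the original poster's method):

```python
import zlib

def approx_conditional_k(s1: bytes, s2: bytes) -> int:
    """Crude upper-bound proxy for K(S2|S1): the extra compressed
    bytes that S2 costs once S1 has already been compressed."""
    joint = len(zlib.compress(s1 + s2))
    alone = len(zlib.compress(s1))
    return max(joint - alone, 0)

before = b"calm calm calm calm calm " * 10
after_small = before + b"ouch"        # a tiny change of state
after_big = bytes(range(256)) * 2     # a wholesale change of state

print(approx_conditional_k(before, after_small))  # small
print(approx_conditional_k(before, after_big))    # much larger
```

On this reading, a drug high would still count: whatever causes it, the brain's state changes, and the proposed measure only looks at the size of that change.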
> It is intuitive in that a stronger reinforcement signal
> induces a greater change in mental state (as measured
> by the length of the shortest program that describes
> the change), and that it is not possible to experience
> happiness without memory.
That's just a new one on me, is all I'll say.
> By this definition, happiness would be bounded
> by the complexity of the intelligence.
Now you've hit a nerve! :-) Hmm. I'd always supposed
that people could suffer more than animals because they
more thoroughly understand their situation. But maybe I
was wrong then. Let's see if Mr. Darwin can help.
It makes sense that trees don't suffer when you chop
them down, *because they can't do anything about it*.
A mammal, on the other hand, has all sorts of reasons
to suffer. For one, it's nature's way of telling him that
he must extricate himself from this situation, no matter
what. For another, nature is telling him "never, never,
never forget how you got into this, stupid".
So I proceed to refute my own long-held position
thusly: let's suppose that to get out of some horribly
painful situation, the subject must solve a puzzle.
Isn't it rather likely---at least from the depictions
I've seen on TV---that people can hurt so much
that they can't think clearly? In some hideous
experiment (well, the Nazis were helpful in one
small, unfortunate way), the mean time of unlocking
a puzzle chain could be measured as a function of
pain. Indeed, there might be a minimum: up to a
certain threshold, the victim/subject doesn't really
bother with solving the puzzle, or puts it off. At
a certain higher threshold he hurries as fast as he
can. Then isn't there also a point where his performance
tanks as the pain becomes blinding? I think so.
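The curve described above, solve time flat-to-high at low pain (no urgency), dipping to a minimum at moderate pain, then climbing again as pain becomes blinding, can be made concrete with a toy model. The shape and numbers here are entirely made up, just to show the claimed minimum:

```python
# Toy model: mean puzzle-solving time as a function of pain level
# on an arbitrary 0-10 scale. All constants are invented for
# illustration only.
def mean_solve_time(pain: float) -> float:
    motivation = min(pain / 4.0, 1.0)        # urgency rises until pain 4
    impairment = max(pain - 6.0, 0.0) ** 2   # "blinding" pain past 6
    return 60.0 / (0.1 + motivation) + 10.0 * impairment

times = {p: mean_solve_time(p) for p in range(11)}
best = min(times, key=times.get)
print(best)  # the minimum lands in the middle, not at 0 or 10
```

Any function with those two competing terms, motivation saturating and impairment kicking in late, will have an interior minimum, which is all the argument needs.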
Therefore, a Darwinian analysis suggests that elephants
might suffer pain as much as or more than humans do.
Hmm, can this be right? Of course if your KC analysis
is right, we still win the sensitivity contest.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT