Re: Mindless Thought Experiments

From: Lee Corbin (lcorbin@rawbw.com)
Date: Wed Mar 05 2008 - 22:31:13 MST


Matt writes

> Lee Corbin wrote:
>
>> "Experience doesn't exist"? You mean to say that you don't have an
>> experience right now of sitting in front of a monitor? That you have
>> not the experience of reading an email or watching TV? You mean to
>> say that the experience of skiing down a slope doesn't exist? You
>> are using the word in a way that I've never heard before.
>
> Of course I believe in my own experience or qualia. But there is a difference
> between believing something exists and it actually existing. You can't prove
> that anything exists.

Outside mathematics, you can't prove *anything* at all! So what
good is such a complaint? I think it's a big mistake to try to depend
in any way upon absolute certainty. It's an illusion.

We have *conjectures* that stand up to criticism; see Pan Critical
Rationalism at http://clublet.com/why?PanCriticalRationalism and
http://www.geocities.com/Athens/Ithaca/2564/i1p4.htm
(apologies if you're already familiar with PCR).

All knowledge is conjectural. Any time you try to complete the sentence
"He knows that...", "She knows that...", or "I know that...", what is
"known" is only a conjecture. We believe those conjectures that have
withstood the test of time, i.e., have withstood a lot of criticism.

In any real situation, you know that your mother has experiences,
and if you love anyone, then you deeply care about their experiences.
In daily life, the fact that they do have experiences is important
to you (unless you're a psychopath).

> I can't make any argument that a robot couldn't make.

True enough. But what the robot says is either so, or it's not so.
If you don't know how advanced the robot is, you don't have any
idea of whether or not it has experiences.

> When I suppress my evolutionarily programmed beliefs and analyze the question
> logically I have to conclude that either a robot experiences or I don't. But
> in the common sense meaning of the word, humans experience and machines don't.
> But do animals? Embryos? Terminally ill Alzheimer's patients?

It's a sliding scale. We know that some entities are what we call very
conscious, that others are (we think) much less so, and that embryos
are hardly conscious at all, no more than, say, your arm is.

>> Eventually robots could have "inner lives" as complex as ours or more,
>> when one day they have enough intelligence/consciousness.
>
> How do you measure complexity? How many bits are required for consciousness?

A lot, but we don't know enough yet to put a lower bound on it.

> A program like autobliss ( http://www.mattmahoney.net/autobliss.txt )
> simulates reinforcement learning. Would it be moral to run a version of the
> program capable of learning more complex functions? Suppose I add episodic
> memory, i.e. it can recall the training sequence? Where do you draw the line?

It's not easy. Since it's still only 2008, we have to do a lot of guessing.
But if it passes the Turing Test and interacts with people in other
satisfactory ways, then the safe bet is that it would be immoral to run
the program when it complains or when we think it's suffering. There is
no certainty, of course.
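For concreteness, here is a toy sketch in Python of the sort of learner
you describe (my own illustration, not the actual autobliss code, which
I haven't read): it learns a 2-input logic function from reward and
punishment, with a crude "episodic memory" bolted on. All the names in
it (TinyLearner, reinforce, and so on) are just mine.

    import random

    class TinyLearner:
        def __init__(self):
            # One weight per input pair, pushed up or down by reinforcement.
            self.weights = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}
            self.episodes = []  # crude "episodic memory": the training history

        def answer(self, a, b):
            # Output 1 if the learned weight is positive, 0 if negative;
            # break ties randomly so learning can get started.
            w = self.weights[(a, b)]
            return 1 if w > 0 else 0 if w < 0 else random.randint(0, 1)

        def reinforce(self, a, b, out, reward):
            # Positive reward strengthens the answer just given;
            # negative reward weakens it.
            self.weights[(a, b)] += reward if out == 1 else -reward
            self.episodes.append((a, b, out, reward))  # remember the episode

    def train(learner, target, steps=200):
        for _ in range(steps):
            a, b = random.randint(0, 1), random.randint(0, 1)
            out = learner.answer(a, b)
            reward = 1.0 if out == target(a, b) else -1.0
            learner.reinforce(a, b, out, reward)

    if __name__ == "__main__":
        learner = TinyLearner()
        train(learner, target=lambda a, b: a ^ b)  # teach it XOR
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", learner.answer(a, b))
        print("episodes remembered:", len(learner.episodes))

Nobody would lose sleep over halting a program like that, memory or no
memory. The hard question, as you say, is how much more it takes, which
is why I fall back on the Turing Test above.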

Lee


