Re: Fighting UFAI

From: Chris Capel (pdf23ds@gmail.com)
Date: Thu Jul 21 2005 - 07:54:20 MDT


On 7/21/05, Tennessee Leeuwenburg <hamptonite@gmail.com> wrote:
> > Eh? What about emotion is so special that it would require anything
> > more than a Turing machine to implement as part of a GAI? (That begs
> > the question of whether it's even desirable for Friendliness. The
> > answer to that one seems to be emphatically NO.) How would quantum
> > computing help anything?
>
> Allow me to respond to this entirely out-of-context, as this was a
> debating point against something I didn't say. Rather, let me pose a
> thought experiment to you.

[clip thought experiment]

> Has she learnt anything new about colour? If you accept that she has,
> then qualia must be real, because she already knew everything that
> science could inform her about the world and about colour. There must,
> therefore, be something real about colour which is not addressed by
> science.

Well, I read a good essay by Dennett examining this very experiment in
The Mind's I. Basically, his argument was that the intuition pump is
misleading because of the phrase "learned everything about
vision/seeing red". We really don't know what knowing "everything"
about this subject would be like, so our intuitive idea of that amount
of knowledge is approximately what a very accomplished Ph.D., or two
or three of them, would collectively know on the subject. But taken
literally, it implies an almost infinite amount of knowledge, most of
it useless. Yet certainly we can't rule out the possibility that a
scientist living in a time when the science of the brain is mature
and mostly complete would be able to use all of the existing
scientific knowledge, plus knowledge of how her own brain is wired, to
know exactly what visual impression she would receive from a red
object. In fact, the situation--knowing "everything" about
something--is so foreign to us that using it as a thought experiment
is practicing philosophy on rather shaky ground.

Actually, bringing this back to the original point (did this thought
experiment bear on that point?), I do lend some credence to the
existence of qualia, and still I have no trouble believing that they
could arise on purely non-quantum biological devices, or even
electronic ones. Now, I have no reason to believe that they do, except
that most thought apparently does, and it would be quite an exception,
and a violation of Occam's Razor, to say that qualia require a
fundamentally different kind of device to support them; I just don't
see the evidence or the justification. Just as Occam's Razor seems to
some to discount the possibility of qualia, those who take their
primary experience as evidence for qualia ought to apply Occam's Razor
to the idea that qualia are somehow exceptional processes in the
brain, ones that can't be modeled the same way the rest of the brain
can.

> > I don't quite understand what kind of threat you could see concerning
> > an AI suddenly understanding a different ontology and going crazy. How
> > likely would this be?
>
> The quote marks indicate that you are replying to me, but in fact I
> didn't suggest this.

I didn't mean to imply that you did. But I believe pdugan did suggest
(and I could be wrong) that there is a danger in the possibility that
an AI would find some other universe, or some other mode of existing
in this one, that lends itself to different modalities and a different
ontology. I was just asking what he thinks the exact nature of that
threat would be, beyond its being existential. My first impression is
that it's rather unlikely, but he didn't do much explaining.

Chris Capel

-- 
"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)

