Re: Fighting UFAI

From: Tennessee Leeuwenburg (hamptonite@gmail.com)
Date: Wed Jul 20 2005 - 23:20:01 MDT


<snip>

> > I plug symbolic data gleaned from sensory modality, if my sensory modality
> > were to change, say in a simulated (or subjectively real) universe with
> > different physics regarding just photon dynamics, would my symbolic
> > interpretations become radically different from all prior earthly ontologies?
> > Would my rational structures cease to be useful and be discarded?

I get what you are saying about sensory modalities. It is a
fascinating point. Furthermore, I would argue that it is not beyond
the realm of empirical testing.

I was reading about someone who was attaching an electrode grid to the
surface of the skin and re-creating the voltage patterns the eyes
produce in response to various stimuli, including visual ones. This
was in New Scientist, and seemed to be legitimate. Success was
reported in vision assistance, balance, and even touch. It was
absolutely fascinating. Moreover, it sounded like an interface that
any moderately skilled engineer could re-create, and the software to
drive it sounded doable as well.

It would be a fascinating exercise to attempt to learn new sensory
modalities using just such a device.

I would say that some of our rational structures would remain. I
believe in the specialness of qualia, the universal correctness of
basic logic, and that consciousness is generally enhanced by
intelligence.

> Eh? What about emotion is so special that it would require anything
> more than a Turing machine to implement as part of an GAI? (That begs
> the question of whether it's even desirable for Friendliness. That one
> seems to be emphatically NO.) How would quantum computing help
> anything?

Allow me to respond to this entirely out of context, as it was a
debating point against something I didn't say. Rather, let me pose a
thought experiment to you.

An intelligent scientist in the future is born and lives aboard a
spaceship. The inside of the spaceship is not devoid of light, but the
colouring of all the internal surfaces happens to be black and white
in appearance. However, she has a huge amount of information about
physics. In this experiment, she is not able to see anything coloured,
but she is intellectually able to fully understand the nature of light
and its effects on the human eyeball, brain, nervous system, etc.

One day she lands on Earth at the end of her mission. Upon opening the
hatch, she casts her eyes first on an enormous bunch of red roses
which have been given to her.

"Oh", she says, "so that's what it's like".

Has she learnt anything new about colour? If you accept that she has,
then qualia must be real, because she already knew everything that
science could tell her about the world and about colour. There must,
therefore, be something real about colour which is not addressed by
science.

> I don't quite understand what kind of threat you could see concerning
> an AI suddenly understanding a different ontology and going crazy. How
> likely would this be?

The quote marks indicate that you are replying to me, but in fact I
didn't suggest this.

Just to be clear on the matter.

Cheers,
-T



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT