Re: Friendliness and blank-slate goal bootstrap

From: Metaqualia (metaqualia@mynichi.com)
Date: Sat Oct 04 2003 - 00:33:22 MDT


I agree with many of your points, but not with others.

"Why have a blank slate moral system?"

Actually, it was just an idea; it doesn't really matter whether the
system starts out blank or not.
If we are making a recursively improving AI, it should have a
recursively improving moral system.
Start it up with some basic human morality, or start with a blank goal
system, whichever is easier.
The important thing is to allow the AI not to remain stuck in one place
but to keep improving.

I think that just as a visual cortex is important for evolving concepts of
under/enclosed/occluded, having qualia for pain/pleasure in all their
psychological variation is important for evolving concepts of
wrong/right/painful/betrayal.

An AI without a visual cortex, given access to the outside world
(nanotechnology?), could still run physics experiments, infer the
existence of electromagnetic waves, build an array of pixels, and
develop a visual cortex on its own.

But would an AI without qualia and with access to the outside world ever
stumble upon qualia? I don't know.

While we can stand to have a temporarily blind AI, we can't afford to
have a temporarily selfish/unfriendly AI on the loose. So IF we could
incorporate some kind of qualia system into the AI (of course making
sure it had complete control over these "emotions", unlike a human),
wouldn't that be a good thing?

However, we don't have a clue how to create a qualia module; that is
why I wrote that garbage about trusting humans (or better, the basic
set of human moral laws, as you said) until qualia are developed in
_some_ way.

1. Absorb a refined version of the human moral code until you have qualia
2. Then, create your own moral code

does this make sense?
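
To make the two-step idea a bit more concrete, here is a toy Python
sketch. It is purely hypothetical: the has_qualia() test and both
moral-code evaluators are placeholders for things nobody knows how to
build, but the control flow shows what I mean by "absorb first, then
create your own":

# Toy sketch of the two-step bootstrap above: defer to a human-derived
# moral code until some qualia criterion is met, then switch to a
# self-derived one. Every piece of this is a stand-in.

def human_moral_code(action):
    # Step 1 placeholder: a refined version of the human moral code.
    return action != "harm a human"

class BootstrapMoralSystem:
    def __init__(self):
        self.own_moral_code = None  # step 2 code, not yet developed

    def has_qualia(self):
        # Placeholder: nobody knows how to test for this today.
        return self.own_moral_code is not None

    def develop_own_code(self, experience):
        # Step 2 placeholder: once qualia exist, derive a moral code
        # grounded in them, starting from the human-supplied one.
        self.own_moral_code = lambda action: (
            human_moral_code(action)
            and "gratuitous pain" not in experience.get(action, "")
        )

    def evaluate(self, action):
        # Defer to the human code until the AI's own code exists.
        code = self.own_moral_code if self.has_qualia() else human_moral_code
        return code(action)

The only point of the sketch is the switch in evaluate(): the human
code is scaffolding, not the final authority.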

curzio

> Whatever qualia are, they are something emergent from a physical system,
> something that evolved from a single cell. To understand this an AI
> wouldn't need to keep humans around; it could scan our state into its
> mind for closer examination. Even if it did need people around, one
> should suffice, and I very much doubt they'd have a very meaningful (or
> happy) existence. I suspect we can do better than this, and I think we
> should try.
>
> I don't think we should try to create an AI that either implicitly
> (e.g. add extra conditions to "keep humans in the loop") or
> accidentally (e.g. the AI's search for understanding qualia is
> accidentally meaningful for its human information sources too) does
> what we think of as meaningful. How about creating a mind that
> explicitly wants to make our future meaningful and good? Why not give
> the AI the capability to reason about morality as we can, about good
> and bad, better and worse, rather than some minimal bootstrapping
> system?
>
> - Nick
>


