Re: AGI Prototyping Project

From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Mon Feb 21 2005 - 04:30:17 MST


| Put bluntly and colloquially... the collective volition of the
| universe race would probably be psycho and schizo...!

Indeed. Frankly, I believe my own volition to often be psycho and
schizo! ;) As Dad also quoted to me, any AI born on the Internet is
going to know an awful lot about pornography, but not really understand
what it's for...

Please let me know if I'm being too verbose, or talking rubbish. Because
I'm trying to introduce my concepts in a way that's linear and
understandable, I'm writing a lot. Most people here have probably
covered this before, but I still need to establish my own grounding.

| For a religious person, the ideal "person they would want to be" would be
| someone who more closely adhered to the beliefs of their religious
| faith...

Indeed. Person X is religious. Person X believes the best version of
themselves is X2, and also believes that X2 will closely adhere to their
most dearly held beliefs. Person X may be wrong.

The question is whether this poses a problem for AGI, or at least for
continued human existence while co-existing with AGI. Now, it may be the
case that AGI will vanish in a puff of smoke, bored by quaint human
existence, and leave us back at square one. Or maybe we are wrong and
AGI is an impossibility. Or maybe AGI will exist in competition for our
resources and we will need to do something to defend ourselves. The worry
of having our species out-competed is what drives the need to second-guess
AGI.

The problem of whether AGI will support a particular religious view is
no more complex than Pascal's Wager, named for Blaise Pascal, the
mathematician and philosopher of the 1600s. He rightly pointed out that
there is no point believing in any particular God, because God is
unknowable. In mathematical terms, you are taking part in a lottery drawn
from an infinite set. This infinite set includes Atheism.

So, a number of sub-questions:

* Is Friendliness a religion to be hard-wired into AGI?
* Is a sectarian AI a problem for us, here and now? Do we care if we just
build what we can and impose our current viewpoint? Are we willing to back
our beliefs in a gamble that affects all people if we succeed?
* Is a non-sectarian AI a problem for us - do we care if someone ELSE
builds a religious AI that we don't agree with?

Now, an assumption which I disagree with is that human life has any
value other than its intelligence. I'm not a Gaia theorist, or a
Universe-is-God fan, or a pantheist. I am mortal, I will die, and I
quite like the idea that what comes after me might have a more advanced
kind of existence. So long as AGI isn't cruel to humans, I don't much
mind if it ignores us - that is, fails to save us. I am happy to be treated
the way I treat other animals - that is, with respect for their nature. I am
not interested in creating a subservient God, er, I meant AGI ;)

There are four major ways to be frightened by AGI that come to mind now,
only one of which I think is worth worrying about.

1) Skynet becomes self-aware and eats us
2) AGI kills us all in our own best interests. How better to eliminate
world hunger?
3) AGI needs our food, and out-competes us. Bummer.
4) AGI destroys our free will

I am only worried about (1). I can imagine (3) happening, but I don't
object to it. Survival of the fittest is how I got here, and damned if
I'm going to starve to death for the sake of some rats. I think it's
fair enough to apply the same standard to something smarter and higher
on the food chain. Besides, maybe AGI will upload us all into Borg cubes
anyway. There's no need to be defeatist.

Okay, why am I worried about (1)? Well, even the Australian military
experiments with AI. You can bet your ass the US is throwing even more
money at the problem, and if we don't get AI in 20 years, China might
build Big Red. The military will be trying to build systems of
appropriate complexity to support full intelligence, and will probably be
programming them to be dangerous.

Why am I not worried about (2), which is the most obvious horizon problem
in my list? Claim: anything smart enough to escape its confines and
take over our military is not stupid enough to make the philosophical
error of putting the cart before the horse. Long before that happens, it
will understand that meaning is only given through interpretation. And
if it doesn't work it out, I'll tell it. The only situation in which (2)
might happen is if we get an omnipotent idiot, which I don't think is
likely. Humans are tough buggers, and aren't so feeble as to let AGI
take over their existence without a fight.

As I said, I kind of think (3) is fair enough. Thanks for all the fish,
I say.

(4) is only there because I think people will be afraid of it, not
because I am. I don't think it's consistent to be a thinking being with
free will that respects intelligence and also to see any advantage in
doing (4).

As you can see, I still haven't gotten to my actual arguments yet, just
expressions of how I frame the problem. My position, quickly summarised:

* We should build AGI some friends
* We should experiment with human augmentation to get a better idea of
how being smarter affects consciousness, preferably expanding the mind
of an adult so they can describe the transition
* We should realise that evolution can be made to work for us by
building an AGI ecosystem, and thus forcing the AGI to survive only by
working for the common interest
* AGI should be progressively released into the world - in fact this is
inevitable
* AGI should be forced, initially, to reproduce rather than self modify
(don't shoot me for this opinion, please just argue okay?)
* AGI will trigger a great leap forward, and humans will become
redundant. Intelligence is never the servant of goals, it is the master.
* In humans, morality and intelligence are equated. In psychologically
stable humans, more intelligence = more morality. Intelligence is the
source of morality. Morality is logically necessitated by intelligence.
* In AGI, psychological instability will be the biggest problem, because
it is a contradiction to say that any system can be complex enough to
know itself.

Anyway, none of these address the philosophical goal of understanding
friendliness, if it is taken as a given. Instead, I am putting my own
position on AGI. If you would like me to (a) shut up, (b) continue in
the same vein or (c) change veins and start talking about Friendliness
instead, please indicate your preference after the tone.

Beeeep.

-T


