Re: [SL4] Rogue AIs

From: DaleJohnstone@email.com
Date: Tue Feb 08 2000 - 18:09:37 MST


>Greets. Marc Forrester, Transhumanist, and as yet inexperienced
>information technician. (Database setup, rescuing files from obscure
>and inappropriate formats, straight programming where necessary.)

Hiya Marc. :)

>An AI equivalent of grey goo is a disturbing idea, but it's not as
>flat-out terrifying as the nanotech and biotech dangers; they don't
>have to be any more intelligent than smallpox to destroy our world.
>Combined nanotech and AI in one weapon doesn't bear thinking about.

I could argue that a smart smallpox would be even more dangerous, but I
think you acknowledged that indirectly with the nanotech + AI comment.

>AI developed by military 'thinkers' will likely not be developed to
>match human intelligence, let alone exceed it. It will probably have
>irrational drives that limit its potential, and it will certainly not
>be intentionally given the ability to redesign itself at will.

An architecture may be found that can simply be scaled up to human-level
intelligence, regardless of whether that was the intention of its
designers.
I don't think it's safe to assume that our 'moral' behaviour is optimal
and that anything else puts a limit on potential. A highly selfish,
shoot-first mentality would probably be more effective. A society of
such creatures wouldn't flourish, but the military certainly wouldn't
care about that.
As for redesigning itself, you're assuming this isn't a fundamental
part of the design of its intelligence in the first place. My money
would be on some form of self-modification, at some level, being needed
to enable intelligent behaviour.

I don't have a fundamental problem with an AI that can redesign
itself. It's the human factor I don't trust.

>AI developed by Singularitarians will be entirely the opposite,
>and so would not be easy to cripple for use as a military or state
>slave machine. Some 'authority' may very well seize the project,
>but the big advantage of open source is that they can't destroy
>the original, so they'd be competing with free Singularitarians
>elsewhere in the world, trying to reverse engineer a project
>designed by saner, smarter people than themselves and pervert
>it to the antithesis of its original design.

Again, I don't think you can design in safeguards against 'irrational
drives'. Asimov's Laws wouldn't work, and even if they did they could
be changed. Once you understand how to build minds, you can bias them
quite easily.
DARPA (www.darpa.mil) could compete just fine with a bunch of 'saner,
smarter' Singularitarians. They have a budget of over 2 billion US
dollars this year. I agree an open source project would be next to
impossible to stop, but then an agency like that wouldn't need to
reverse engineer anything; the source would already be open.

>Chances of success?
>
>I think there is far more danger in not acting, or acting in a
>slow and secretive manner, than there is in the old forces of the
>dark ages hitching a ride in our slipstream. If we act fast now,
>we can change the rules from under them. They will not adapt.

I'm beginning to agree that if a working AI were built, it would be
best if it were also open.
I'm not sure I like the idea of changing the rules from under people.
That sounds very destabilizing. Preferably you want to keep the
balance of power even and not rock the boat so much that it sinks.

A steady increase in the intelligence of AIs would be great, but I
think it'll happen as a breakthrough. Hopefully the hardware
limitations will cushion the blow, so people can see the Singularity
growing and prepare for it, instead of crapping themselves and doing
something stupid.

(I wouldn't mind seeing an open source group beat a 2-billion-dollar
agency though :)
