Re: [SL4] Sinking the Boat

From: DaleJohnstone@email.com
Date: Wed Feb 09 2000 - 21:33:05 MST

>> I don't think it's safe to assume that our 'moral' behaviour is
>> optimal and that anything else puts a limit on potential.
>> Probably a highly selfish, shoot-first mentality would be. A
>> society of such creatures wouldn't flourish, but the military
>> certainly wouldn't care about that.
>
>Precisely so. Whereas we do. We want to produce Minds that -can-
>flourish in a society, and that can be a part of our society in order
>to bootstrap them into their own. Intelligence -needs- a society,
>because society is one of the most complex and extelligent worlds a
>developing mind can play in. How smart would Einstein have grown to
>be if he was 'raised' as a lab-rat by some alien species?

What we want and what the military want are two different things,
though that doesn't mean they're mutually exclusive. We can design AIs
to be 'good', but that design could easily be changed.

I agree that intelligence would do better in a richer environment, but
I wouldn't go so far as to say it can't happen without one.

I'm sure that as a lab-rat of an alien species Einstein would have had
a most stimulating time :)

>> As for redesigning itself, you're assuming this isn't a fundamental
>> part of its intelligent design in the first place. My money would be
>> on some form of self-modification at some level to enable intelligent
>> behaviour.
>
>Some form, yes, but free to completely rebuild itself from the most
>fundamental drives upward? Not a goal of smart weapon research.
>You don't want your warplanes getting -too- smart and asking
>dangerous questions like "What's in it for me?"

Human-level intelligence doesn't automatically imply any sense of
self-preservation. Without evolution's constraints you could make even
stranger monsters. The military would most certainly try to avoid
uncontrollable AIs, but how intelligent an AI is remains a separate
issue.

>> Again I don't think you can design in any safeguards against 'irrational
>> drives'. Asimov's Laws wouldn't work, and even if they did they could
>> be changed. Once you understand how to build minds you can bias them
>> quite easily.
>
>I think the safeguards are built into the universe. If one group
>create a mind full of irrational and contradictory instincts for use
>as a weapon, and another build a saner mind as a sibling, friend,
>and child, and encourage it to grow in all ways, which mind is
>going to be the smartest, going by the universal intelligence
>test of survival ability?

If there's any safeguard built into the universe, I'd say it's
Existence itself. What's good at existing exists longer, and what tries
to exist is more likely to exist.
This works on many levels. If you promote the existence of others (who
also promote existence) then you're all likely to be better off,
because richer interaction possibilities improve the group.
I'm basically an optimist about non-human intelligence.

Things get bitchy when resources are limited. Most wars are about
resource squabbles, or are triggered by the evolved behaviours attached
to them (tribalism, racism, nationalism, xenophobia, etc).

It's my sincere hope that nanotechnology will make these behaviours
redundant by providing enough resources for all. Humans have the
capacity to be unbelievably stupid, though (religion, for example).
Let's hope nanotech and/or AI can fix the human condition before we
really screw up.

>Quite so. An open source Singularity project would not be hugely
>useful to them; the primary concern is that they must not be allowed
>to stop it. Certainly, they have resources. The critical difference
>between us and them, I think, is that they won't encourage their 'smart'
>missiles to read and play. They feel no kinship.

My argument was that the project *would* suddenly become useful to
them if it succeeded, or was about to. They also have the fastest
machines. They could catch up and overtake us while simultaneously
closing us down (or making it bloody difficult), all in the name of
national security.

What I'm most unclear about is the period between a human-level AI
being built (or trained), and the runaway singularity process.

Assuming a non-nanotech world, this isn't an instant process.
You could mass-produce clones of your first working AI and replace
most human jobs, including ours. Then the feedback loop is complete.
Whoever has the fastest machines will be the first to see the fruits
of the first iteration. That's most likely to be some well-equipped
agency like DARPA, and the fruits will most likely be nanotech. They
wouldn't want a foreign power getting there first (remember, everyone
has AIs by then).
So then we have a situation where, because of their fast computers,
they're first with nanotech, and they know that Iraq, North Korea,
China, etc. will have it soon (maybe in an hour, maybe in a month;
who knows how much CPU power they each have).
This strikes me as a f**king dangerous scenario.

The nanotech-before-AI scenario is starting to look more appealing to
me. Nanobots will be designed in a simulated environment that is safe
to screw up, so lessons can be learnt from virtual mistakes.

Then again, I suppose the side that gets nanotech first also has the
most time to research and build defenses. Building defenses, however,
might be considerably more difficult than building a basic weapon.

Urgh... this is all very depressing stuff. There has to be a safe path
through this mess or we're all screwed.

>> I'm not sure I like the idea of changing the rules from under people.
>> That sounds very destabilizing. You preferably want to keep the
>> balance of power level and not rock the boat so much it sinks.
>
>Ah, no, not really. The driving force behind Singularitarianism,
>(Which is far too long a word, BTW :) is that the boat is already
>sinking, the world is full of hatred, stupidity, destitution and
>agonies, the 'civilised' nations are rapidly changing from the
>imperfect Republics that they were into de-facto Feudal
>Aristocracies that laughingly call themselves Democracy,
>and physical science is charging forwards far ahead of
>human maturity. And we thought the Cold War was scary.

People can be stupid, the boat may already be sinking, and things have
to change, but changing the rules from under people sounds like
extremism, and extremism is what we're trying to avoid.

>> A stable increase in the intelligence of AIs would be great,
>> but I think it'll happen as a breakthrough. Hopefully the hardware
>> limitations will cushion the blow so people can see the singularity
>> growing and prepare for it, instead of crapping themselves and
>> doing something stupid.
>
>A breakthrough, and then a whole series of ever-faster breakthroughs,
>researched by the AI Minds themselves. Singularity is essentially a
>sudden and irrevocable boat-sinking change in the rules that will
>inevitably occur as soon as intelligent minds arise with the ability
>to design their own upgrades. It doesn't actually have to be AI,
>with nanotech, transhumans will do it to themselves, but nanotech
>is too dangerous without posthuman intelligence, and AI is something
>we can start serious, effective work on right now.

We don't want boat-sinking change. That doesn't help anyone;
boat-sinking means we've lost and lots of people die. We need better
plans than offloading the problem onto some future entity.

>People panicking doing something stupid, now, there is the greatest
>danger. An open, distributed project is one powerful defense against
>such things. Likeable, human (Or transhuman, or posthuman) AI Minds
>with a sense of humour and a pleasant speaking voice would be another.
>Films like the Bicentennial Man also, (As opposed to the Matrix..)
>meaningless and deathist though that ultimately was, and characters
>like Data and #5. It's not going to be an easy time, though.
>The monotheistic religions are going to be the biggest problem.

It's ironic that the evolved behaviours designed to keep us alive
could pose the biggest threat. Fear of the unknown.

>
>> (I wouldn't mind seeing an open source group
>> beat a 2 billion dollar agency though :)
>
>It can happen. Open networks are smarter than primate hierarchies,
>and free thinking futurists of all kinds are smarter than military
>scientists. We will also take every opportunity to improve ourselves,
>where their priority is simply to do their jobs and obey orders.

hehehe, I think you underestimate military scientists. I'll wave the
flag for the freedom & progress tribe though :)

/me does a tarzan yell



