[sl4] zero to Shock Level 4 in 20 minutes

From: Rick Schwall (r.schwall@verizon.net)
Date: Mon Aug 24 2009 - 08:40:28 MDT


I would like to begin by saying that I don't believe my own statements are
True, and I suggest you don't either. I do request that you try thinking
WITH them before attacking them. It's really hard to think with an idea
AFTER you've attacked it. I've been told my writing sounds preachy or even
fanatical. I don't say "In My Opinion" enough. Please imagine "IMO" in front
of every one of my statements. Thanks!

 

When I first started talking to my friends about Existential Risks and The
Singularity, I got some really amazing responses. Although the details were
just plain noise, there was a pattern: smart people, readers of science
fiction, who had never studied AI research, IMMEDIATELY KNEW why this was a
Bad Idea and why it would never work. Sometimes they brought up the old
Hollywood cliché: it's just a more dangerous villain. It has never occurred
to most people that an intelligence could be different from us. (With the
possible exception of "emotionless". A cold, calculating, hateful
super-villain, that's what they expect and fear.)

 

However, I have found common folks surprisingly receptive to some of our key
ideas, as long as I present them as "it could be that ..." rather than
asserting them as True. With few exceptions, they quickly acclimate to most
of:

  

1. Humanity is in danger, maybe even to the point of extinction.

1.A. The time scale is short: decades at the long end; at the short end,
"the missiles could be in the air right now."

2. The root of the danger is us.

2.A. The traits that make us dangerous can't be removed.

2.B. Any solution that depends on half the people changing their minds,
opinions, or habits is just too slow.

3. We think and hope we may be able to make a thinking machine (AGI) in
time.

3.A. The good news is that we might create an intelligence explosion.

3.B. The terrific news is that an AGI doesn't have to have the traits that
make us dangerous.

3.B.i. If it is possible to make an AGI at all, the easy part will be making
it a much nicer person than us.

3.B.ii. It would be a great coach or advisor, and

3.B.iii. might be able to persuade us to actually change our opinions AND
take action.

3.C. Even so, this is dangerous; there is a lot of work remaining to do it
and to get it right.

3.C.i. Humanity has been bad, and should fear the coming of an unbiased
Power.

4. But really, we MUST do it.

4.A. We are already in too much danger.

4.B. It's only a matter of time before somebody does it wrong.

 

As evidence that they are absorbing these ideas, I notice that they quickly
start telling me about evidence they see themselves, examples, even further
concerns. If I start by asking them to think with the uncertain ideas, and ask
them to estimate how likely these things are, over half of them get to Shock
Level 4 within 20 minutes. I'm not saying that they agree, but they are conversant.
I send them away with a "homework card" containing links they can use to
start their own investigation.

 

All right, I admit, that's not quite Shock Level 4
(http://sl4.org/shocklevels.html), even though it does include Intelligence
Explosion
(http://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion).
Would you believe Shock Level 3.5? (http://www.wouldyoubelieve.com/phrases.html).

 

These days, I don't mention, "Meet the new ruler of the planet," or "This is
the Voice of World Control" (http://www.imdb.com/title/tt0064177/quotes).

 

Rick Schwall, Ph.D.

Saving Humanity from Homo Sapiens

 


