Re: [sl4] zero to Shock Level 4 in 20 minutes

From: Chris Staikos (cstaikos@gmail.com)
Date: Mon Aug 24 2009 - 23:36:53 MDT


The most common stance I am faced with is disdain for technology in general.
Far too often it seems that as soon as I bring up the singularity, people
stop listening, claiming that technology is essentially
the root of all our problems and that more sophisticated technology can only
bring about more sophisticated problems. Then they drive home, check their
e-mail, etc. People seem to forget that a tool is a tool. It can be used
in positive as well as negative ways. I suppose it's a lot easier to blame
a tool than to take responsibility for misusing it.

On Mon, Aug 24, 2009 at 4:01 PM, Mark Nuzzolilo <nuzz604@gmail.com> wrote:

> Here in Arizona I have been noticing a general trend: many of the people
> I have been talking to lately about this seem to -already- have a very
> "far out" understanding of things. Things like the
> singularity are not looked at in terms of "shock" but in terms of
> "outcome". One person I talked to (who is schizophrenic) threw me off guard
> when he said "oh of course, the artificial intelligence argument" and then
> proceeded to throw philosophical arguments related to friendly AI at me. I
> don't know if this is all due to the year being 2009, or due to the region I
> am living in, or due to the fact that many people I know tend to have
> psychedelic-induced or natural open-mindedness; not such an uncommon thing
> in Tucson especially.
>
> It is rare that I meet somebody these days who doesn't know what
> nanotechnology is when I ask about it. I feel that many of us have built up
> some kind of imaginary "wall of ignorance" between us and the "general
> public" and this is obviously apparant due to the shock level system written
> about many years ago. My solution is to let the wall fall. Barriers are
> better off broken unless absolutely necessary. Talking about the
> singularity with "ordinary people" has allowed me to keep the ideas flowing,
> and at the same time the ideas have grown stronger in their minds as well.
>
> Nuzz
>
> On Mon, Aug 24, 2009 at 7:40 AM, Rick Schwall <r.schwall@verizon.net> wrote:
>
>> I would like to begin by saying that I don't believe my own statements
>> are True, and I suggest you don't either. I do request that you try thinking
>> WITH them before attacking them. It's really hard to think *with* an idea
>> AFTER you've attacked it. I've been told my writing sounds preachy or even
>> fanatical. I don't say "In My Opinion" enough. Please imagine "IMO" in front
>> of every one of my statements. Thanks!
>>
>>
>>
>> When I first started talking to my friends about Existential Risks and The
>> Singularity, I got some really amazing responses. Although the details were
>> just plain noise, there was a pattern: smart people, readers of science
>> fiction, who had never studied AI research, IMMEDIATELY KNEW why this was a
>> Bad Idea and why it would never work. Sometimes they brought up the old
>> Hollywood cliché: it's just a more dangerous villain. It has never occurred
>> to most people that an intelligence could be different from us. (With the
>> possible exception of "emotionless". A cold, calculating, hateful
>> super-villain, that's what they expect and fear.)
>>
>>
>>
>> However, I have found common folks surprisingly receptive to some of our
>> key ideas, as long as I present them as "it could be that ..." rather than
>> asserting them as True. With few exceptions, they quickly acclimate to most
>> of:
>>
>>
>>
>> 1. Humanity is in danger, maybe even to the point of extinction.
>>
>> 1.A. The time scale is short: decades at the long end; at the short end,
>> "the missiles could be in the air right now".
>>
>> 2. The root of the danger is us.
>>
>> 2.A. The traits that make us dangerous can't be removed.
>>
>> 2.B. Any solution that depends on half the people changing their minds,
>> opinions, or habits is just too slow.
>>
>> 3. We think and hope we may be able to make a thinking machine (AGI) in
>> time.
>>
>> 3.A. The good news is that we might create an intelligence explosion.
>>
>> 3.B. The terrific news is that an AGI doesn't have to have the traits that
>> make us dangerous.
>>
>> 3.B.i. If it is possible to make an AGI at all, the easy part will be
>> making it a much nicer person than us.
>>
>> 3.B.ii. It would be a great coach or advisor, and
>>
>> 3.B.iii. might be able to persuade us to actually change our opinions AND
>> take action.
>>
>> 3.C. Even so, this is dangerous; there is a lot of work remaining to do it
>> and to get it right.
>>
>> 3.C.i. *Humanity has been bad, and should fear the coming of an unbiased
>> Power.*™
>>
>> 4. But really, we MUST do it.
>>
>> 4.A. We are already in too much danger.
>>
>> 4.B. It's only a matter of time before somebody does it wrong.
>>
>>
>>
>> As evidence for their absorption of these ideas, I notice that they
>> quickly start to *tell me* evidence that they see, examples, even further
>> concerns. If I start by asking them to think *with* the uncertain ideas,
>> and ask them to estimate how likely these things are, over half of them
>> get to Shock Level 4 within 20 minutes. I'm not saying that they *agree*, but
>> they are conversant. I send them away with a "homework card" containing
>> links they can use to start their own investigation.
>>
>>
>>
>> All right, I admit, that's not quite Shock Level 4 (
>> http://sl4.org/shocklevels.html), even though it does include
>> Intelligence Explosion (
>> http://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion).
>> Would you believe Shock Level 3.5? (www.wouldyoubelieve.com/phrases.html).
>>
>>
>>
>> These days, I don't mention "Meet the new ruler of the planet," or "This
>> is the Voice of World Control" (
>> http://www.imdb.com/title/tt0064177/quotes).
>>
>>
>>
>> Rick Schwall, Ph.D.
>>
>> Saving Humanity from Homo Sapiens
>>
>>
>>
>
>


