From: Aaron McBride (email@example.com)
Date: Mon Jul 23 2001 - 23:29:39 MDT
Ok, here's my current take on this.
Case 1: We don't actually need human-type intelligence to have a super
'smart' AI that can protect us from ourselves. Think air-bags.
We just continue with what we are doing, and work to integrate software
into more and more of the world.
Case 2: Network everybody. People worry about privacy, etc., when you
talk about that, but here's a thought: "Who *needs* privacy?"
I see very little chance of humanity surviving in its current form for the
next 100 years. If we just wire (or wirelessly link) everyone's heads together
and allow everybody to have access to everyone else's wet-ware, we could
watch out for people thinking about doing bad things with nanotech,
etc. Sure, there will be neural hacking going on, but maybe that's ok
too. Maybe it will come down to the top 1000 hackers in the world running
everyone else's brains... sure, things would be different, but at least we
wouldn't all be dead. (Probably, with all of that computing power, those
1000 'people' would be very, very smart too... aka you will be assimilated
(but it's ok - hacker ethics and all :-)).
Case 3: Launch an all out nuclear war.
Knock us back to the Stone Age and let the cycle begin again. Doesn't
sound too fun, but it might buy us some time.
(This is a last resort only to be used on the eve of someone releasing a
DNA munching nanobot.)
Case 4: Launch very fast (FTL?) generational space ships. Colonize between
the stars (or, better yet, between the galaxies).
The problem with this is that at this point it takes a LOT of resources to do
it. The key is not to tell anybody where you're going. It's never
going to be enough to save everybody on Earth, but it will ensure that humans
go on living somewhere... maybe someday to come back with The AI.
I'll leave it at that, but I'm sure there are hundreds more. I've tried to
list these in order of feasibility. And from the looks of it, there are
people in the world working on all of the above cases, so I do feel
confident that humanity will 'survive'.
PS: I never try to say anything original, so if you find something valid
above, it's probably stolen from someone else.
At 11:31 PM 7/23/2001 -0400, you wrote:
>However, I shall play the Devil's advocate and ask whether anyone has a
>backup plan in case (1)-(3) turn out to be true. It would definitely push
>the Singularity back by a decade or two, at the very least. At what point
>would we decide that it's probable enough -- or, pessimistically, that the
>strong-AI program has gone on for too long without progress -- that it's
>worth spending time on this? I suppose it largely (though not entirely)
>depends on what happens in physics in the near future.
>* Not that I am predisposed to insecurity and group-think, but I am trying
>to clarify for myself why previous stabs at this topic ended up the way
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT