From: Ben Goertzel (firstname.lastname@example.org)
Date: Sat Sep 07 2002 - 13:46:27 MDT
> -----Original Message-----
> From: email@example.com [mailto:firstname.lastname@example.org]On Behalf
> Of email@example.com
> Sent: Saturday, September 07, 2002 1:08 PM
> To: firstname.lastname@example.org
> Subject: Worldcon report
> So I was able to make it to Worldcon last Sunday, and I'm glad I went.
Regarding Vinge, Jason Joachim wrote:
> Unfortunately, I failed to get picked from the audience to ask a question.
> I wanted to challenge him on his sorely underdeveloped idea of finding
> safety in a slow take-off.
I'd be curious to hear Vinge's ideas on this also.
However, I think the basic concept of "safety thru a slow takeoff" is very
understandable. It seems to rest on two premises:
1) The feeling that, right now, we don't know enough about AGI and other
Singularity-enabling technologies to meaningfully figure out how to make a
safe Singularity
2) The hope that, if these technologies mature relatively slowly, we will
have time to understand them better BEFORE they "take power", and hence will
be able to figure out LATER what we're not able to figure out NOW (how to
have a safe Singularity)
Underlying this is the assumption that
3) There are good ways to mitigate the pre-Singularity risk of
not-quite-Singularity-ready advanced technologies being used to destroy the
world
To me, 3) is a major worry. If not for this worry, I would be more inclined
to accept 1) and 2).
Regarding Vinge using Eliezer to get a laugh, that's too bad. I wonder if
Vinge has studied Eliezer's writing in any detail.
I think that what Vinge is *probably* reacting to in Eliezer's work is the
impression that Eliezer does not agree with Vinge on point 1) above.
At a recent conference I attended (at IBM, on 'autonomic computing'), a
speaker on AI got up, and his first words, intended to placate the audience,
were about how modern AI really isn't about trying to create HAL or any
other kind of general intelligence, but rather is focused on building
probabilistic-inference-based machine learning and knowledge management
systems to be used in narrow domains. He said "You know, general human-level
AI is fun to talk about at cocktail parties, but it's not what we AI
researchers think about when we get up in the morning." Everyone laughed.
Yuk yuk yuk.
I guess Vinge is not as conservative as that guy, but it's a *little*
disappointing that the original Singularity-guru himself seems to be shying
away from deeply thinking about the more radical possible near-term futures
related to his own ideas...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT