Worldcon report

From: joachimj@pacbell.net
Date: Sat Sep 07 2002 - 13:08:15 MDT


So I was able to make it to Worldcon last Sunday, and I'm glad I went.

Vernor Vinge gave his customary Singularity presentation followed by a
'Visions of the Singularity' program, with panelists Vernor Vinge, Charles
Stross, Walter Jon Williams, and James Patrick Kelly. Greg Bear was
scheduled to attend but was disappointingly absent.

While Vinge's presentation was the same old thing, there was nevertheless a
nice turnout for it and the discussion that followed. It garnered a largish
room with only scattered empty seats, though the audience was a bit sleepy.
Vinge cut his presentation a little short, finishing with the idea that a
slow takeoff, "prolonged for a hundred years perhaps", might be preferable
for safety reasons. He seemed genuinely comfortable with this idea, and it's
one that particularly deserves some scrutiny from us.

There were routine questions, usually involving anthropomorphic assumptions
about AIs. Eventually someone asked about the potential to give an AI some
human-like emotion, implying that this would reduce the risk of unwanted
behavior. Vinge sidestepped the assumption and directly addressed the idea
of making an AI safe. He responded with something like "Well, of course
there are some people, like Eliezer Yudkowsky, who think that all we need to
do is make AIs friendly to humans!" eliciting a sizable chuckle from the
audience -- as much from his lightheartedly sarcastic tone and dramatic
hand-waving as anything else. His body language cued them to laugh. It felt
like a simple social-animal urge to win back status by pointing out someone
else as being more fringe than oneself. It strengthens one's case, right?

Unfortunately, I failed to get picked from the audience to ask a question.
I wanted to challenge him on his sorely underdeveloped idea of finding
safety in a slow takeoff. I wanted to suggest, for the audience's sake,
that the best hope for humanity may well be to immediately develop and
implement a "safe architecture" (as he referred to it). But then of course
he'd just ask me how this is to be done, whereupon I would have to pull out
my pocket edition of CFAI, and things would get messy. (But seriously, in
such a forum how does one convey the notion that the subject of safe
architectures is a valid one?)

The panel discussion was decidedly lackluster. Vinge asked the panelists for
their ideas of the Singularity, and various definitions were proposed. Some
were characterized by 'slow takeoff', and Charles Stross suggested that we've
gone through five singularities already. This rubs me the wrong way, though.
Singularities are, to my mind, essentially defined by 'hard takeoff', this
being the likely nature of directly self-improving intelligence. My
expectation is also that a Singularity is discontinuous to the point that,
even if you keep pace with the wave front (perhaps no matter where you are
relative to it), you can look around and point to the moment in history
where The Big Change happened. Any Singularity short of that just isn't
worthy of the name.

The good thing is that I'm sure I got 150 flyers out to people, many by
hand, but more from tables of handouts. One guy asked me for a handful more
flyers after he saw what it was all about.

I wasn't sure what to expect, but after this experience I feel that SIAI's
presence at an affair like this one is an important thing. And I'm ready to
try and maintain a presence whenever possible. I'm aching to get Vinge to
admit some sympathy for singularitarianism.

And some of you might be happy to know-- Vinge mentioned that he has signed
a contract for four new novels, and three books of reprinted material.

--
Jason Joachim
