SIAI has become slightly scary

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 04 2004 - 08:31:44 MDT


Eliezer and Michael Wilson,

Quite frankly, the two of you are starting to scare me a little bit.

Just a *little* bit, because I consider it pretty unlikely that you guys
will be able to create anything near a detailed enough AGI theory to go
about really building a successful AGI program (or large-scale
uber-optimization-process, or whatever you want to call it).

But in the, say, 2% likely scenario that you guys really will be able
to create the right kind of theory ... I'm pretty scared by the outcome.

One of my problems is what seems to be a nearly insane degree of
self-confidence on the part of both of you. So much self-confidence, of
a kind that leads to dismissing the opinions of others, does not seem to
me to be generally correlated with good judgment.

[I'm not one to dismiss "insanity" generally -- boring and narrow-minded
people have called me "insane" many times, and I have a lot of respect
for the creative power of socially marginal or unacceptable frames of
mind. But some types of "insanity" are scarier than others. Note that
I'm not accusing you guys of actually being clinically whacko -- I know
Eli well enough to know that's not true of him, anyway ;-) However, the
degree of self-confidence displayed by both of you lately seems to me to
come disturbingly close to the "delusions of grandeur" level. Comparing
SIAI's theory of FAI to General Relativity Theory? Calling everyone
who's not part of the SIAI club a "meddler and dabbler" who's part of
the "peanut gallery" and just needs to be "shown the path". Egads!!
Yeah, it's all in good fun, it's humorous, etc. But it's not too hard
to see through the humor to the actual attitudes, in this case.]

Another problem I have is this notion of "volition" (related to
Eliezer's notion of "humane"-ness). This line of thinking is intriguing
philosophically, but scary pragmatically.

I don't want some AI program, created by you guys or anyone else,
imposing its inference of my "volition" upon me.

When I enunciated the three values of Joy, Growth and Choice in a recent
essay, I really meant *choice* -- i.e., I meant *what I choose, now, me
being who I am*. I didn't mean *what I would choose if I were what I
think I'd like to be*, which is my understanding of Eliezer's current
notion of "volition."

To have some AI program extrapolate from my brain what it estimates I'd
like to be, and then modify the universe according to the choices this
estimated Ben's-ideal-of-Ben would make (along with the estimated
choices of others) -- this denies me the right to be human, to grow and
change and learn. According to my personal value system, this is not a
good thing at all.
  
I'm reminded of Eliezer's statement that, while he loves humanity in
general in an altruistic way, he often feels each individual human is
pretty worthless (the phrasing was something like "would be more useful
as ballast on a balloon"). It now seems that what
Eliezer wants to maintain is not actual humanity, but some abstraction
of "what humanity would want if it were what it wanted to be."

As a side point, this notion of "what humanity wants to be" or "what Ben
wants to be" is a confusing and conflicted one, in itself. Eli has
realized this, but chooses not to focus on it, as he believes he'll
resolve the problems in future; I'm not so sure. As Eli pointed out
somewhere, there's an iteration:

0. What Ben is
1. What Ben wants to be
2. What Ben would want to be, once he had become what he wants to be
3. Etc. ...
...

Eventually this series might converge, or it might not. If the series
doesn't converge, which point in the iteration does the AI choose as
"Ben's volition"? Does it average over all the terms in the series?
Egads again.
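
To make the convergence worry concrete, here's a deliberately toy sketch
of that iteration in Python (everything in it -- the "extrapolate"
mapping, the names -- is a hypothetical illustration of mine, not
anything SIAI has actually specified). If the extrapolation has a fixed
point, the series settles on one answer; if it cycles, there's no
natural term to single out as "Ben's volition":

  # Toy model of the iteration above. 'extrapolate' stands in for whatever
  # mapping takes "what Ben is" to "what Ben wants to be"; these names are
  # illustrative only.

  def iterate_volition(start, extrapolate, max_steps=100):
      """Apply the extrapolation repeatedly and report whether it settles."""
      history = [start]
      for _ in range(max_steps):
          nxt = extrapolate(history[-1])
          if nxt == history[-1]:   # fixed point: the series converges here
              return nxt, history
          if nxt in history:       # cycle: the series never converges
              return None, history
          history.append(nxt)
      return None, history         # no convergence within the step budget

  # A mapping with a fixed point -- the series settles on "sage":
  converging = {"ben": "wiser_ben", "wiser_ben": "sage", "sage": "sage"}
  print(iterate_volition("ben", converging.get))

  # A mapping that oscillates -- "ben" and "anti_ben" trade places forever,
  # so there is no single term to call "Ben's volition":
  oscillating = {"ben": "anti_ben", "anti_ben": "ben"}
  print(iterate_volition("ben", oscillating.get))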

So what SIAI seems to be right now is: A group of people with

-- nearly-insane self-confidence
-- a dislike for sharing their more detailed ideas with the
peanut-brained remainder of the world (presumably because *we* might do
something dangerous with their brilliant insights?!)
-- a desire to give us humans, not what we want, but what their AI
program estimates we would want if we were what we wanted to be

The only way this isn't extremely scary is if SIAI lacks the ideas
required to create this volition-implementing AI program. Fortunately,
I think it's very likely that they DO lack the ideas required.

Yes, I know SIAI isn't just Eliezer. There's Tyler and Mike Anissimov.
So far as I know, those guys aren't scary in any way. I have plenty of
respect for both of them. But what distinguishes SIAI from just being a
Singularity-and-AGI advocacy group is Eliezer's particular vision of how
to use AI to bring the Singularity about. So I think it's fair to judge
SIAI by Eliezer's vision, though not by his personality quirks.

On the bright side, I've noticed that Eliezer changes his views every
year or so, as his understanding of the relevant issues deepens. I'm
hoping that the next shift is in a less disturbing direction!

-- Ben G


