RE: The ethics of argument (was: AGI funding)

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Nov 10 2002 - 15:47:15 MST


Eliezer wrote:
> Ben Goertzel wrote:
> >
> > Let Y = "Institutions and people with a lot of money"
> >
> > I understand that there are risks attached to convincing Y of X via X2
> > rather than X1
> >
> > The problem is that there are also large risks attached to not
> > convincing Y of X at all.
> >
> > The human race may well destroy itself prior to launching the
> > Singularity, if Singularity-ward R&D does not progress fast enough.
> >
> > The balancing of these risks is not very easy.
> >
> > Taking the coward's way out regarding the risks of PR could have
> > dramatically terrible consequences regarding the risks of some nutcase
> > (or group thereof) finally getting bioterrorism to work effectively...
>
> Oh. You have a *goal*. I didn't realize you had a goal. It must be
> okay to ignore your ethics if you have a goal.

I am not suggesting that we ignore ethics in pursuit of some goal. Neither
was anyone else in this thread, so far as I can tell.

I was suggesting that doing PR for the Singularity might be the best (*most
ethical*) course of action, even if it involves initially presenting aspects
of the Singularity to the mass audience in a carefully "spun" way.

Acting ethically in a situation like this requires making complex judgments
that involve difficult tradeoffs.

> Anyone can be ethical when nothing much is at stake.

Well, actually, humans quite frequently seem to act unethically (even
according to their own individual standards) in many cases where *very
little* is at stake...

> What makes a
> Singularitarian is the ability to keep your ethical balance when the
> entire planet is at risk.

Yes, there are very difficult ethical decisions to be made here. I respect
that you understand the seriousness of the decisions involved, and have
thought hard about them; but I don't always agree with your particular
judgments.

I think your judgment that doing PR for the Singularity is unethical is
incorrect. On the contrary, I think it's unethical NOT to do our best to do
PR for the Singularity, because doing this PR is the best way to create the
funding that will accelerate the creation of AGI, improving the odds of a
beneficial human-level AGI coming about prior to humanity's
self-destruction.

I recognize that I don't have a bulletproof demonstration that my ethical
judgment is correct here. There are many uncertainties involved. It's
definitely a judgment call.

> I'm curious. What do you propose I should do about the fact that
> Novamente *would* destroy the world if it worked, given that you still
> don't understand Friendly AI?

I do not believe your alleged "fact" is a fact. So I don't think you should
do anything about it.

I do have a somewhat different view of "Friendly AI" than you do. I've read
all your writings fairly carefully, and discussed the issues with you
extensively, and I just don't agree with you on everything. Sorry. Your
views are forcefully and intelligently presented, and highly stimulating,
but in my view your arguments contain some major holes. We've been over
this ground before, and I suppose there's no point in rehashing the details.

I don't think *anyone* can understand as much about Friendly AI up-front,
prior to having near-human-level AGIs to study, as you believe you
understand right now.

I think that there are going to be very difficult ethical decisions to make
regarding any AGI that becomes seriously intelligent -- and I think that the
nature of these decisions will become much clearer after the science and
practice of AGI are a little further along.

Serious theories of Friendly AI will have to be formulated AFTER we have AGI
systems that have something like chimp-level AGI (to study and experiment
with), and BEFORE we have roughly-human-level AGI. This means that once we
reach (to speak loosely) dog-level AGI, we'll have to start paying very
careful attention to some of the things you've been talking about:

-- monitoring intelligence increase in AGIs
-- ensuring there are good ways to control AGIs

At this point, when none of us even has a dog-level AGI to show off and
study and experiment with, I think our time is better spent actually working
on AGI, rather than extensively speculating about AGI Friendliness, or
making accusations regarding the potential destructiveness of each other's
AGI projects.

-- Ben G


