RE: The ethics of argument (was: AGI funding)

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Nov 11 2002 - 04:51:19 MST


Eliezer wrote:
> Specifically, your means-end analysis is saying that "dumbing down the
> Singularity" is a good way to get to the goal "AGI funding", and I am
> attempting to point out all the negative side effects that are the reason
> for the global ethical constraint "try to share ideas that you yourself
> believe, rather than trying to manipulate the audience into cognitive
> states that are useful for your short-term goals".

I certainly believe in "sharing ideas that I myself believe" rather than
lying.

But I don't believe in "always choosing to share the relevant ideas that I
most strongly believe" independently of context.

I believe it's ethically OK to choose which ideas to express, from among the
MANY ideas one believes, in a context-dependent way.

According to your eccentric definition of "ethics", people definitely have
widely varying ethical codes. Apparently my ethical code is slightly
different from yours.

> With respect, Ben, I think I've done a fair amount of Singularity PR over
> my lifetime, so I don't think it's fair for you to say that I consider
> Singularity PR to be unethical.

It's very true: you have done a lot of Singularity PR, and you've been VERY
EFFECTIVE at spreading the Singularity meme to a certain (important) narrow
audience. You deserve many congratulations for this.

But your methods seem unable to spread the word beyond this narrow
audience.

So, it's not that you consider Singularity PR in itself unethical, but
rather that you consider the particular methods of Singularity PR that
Slawek was suggesting unethical. Sorry I didn't phrase that clearly.

> You're arguing from fully general uncertainty again; can you give a
> specific X in Friendly AI theory that you do not think it is possible
> to usefully consider in advance?)

There are many examples. One example is the stability of AGI goal systems
under self-modification. To understand this at all in advance of having
simple self-modifying AGIs to experiment with, one would need a
tremendously more sophisticated mathematical theory of dynamical systems
than we now possess (or than seems feasible to create in the near
term). Yet you seem to be making some very confident assertions on exactly
this point in CFAI.
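
To make the worry concrete, here is a toy sketch in Python. It is my own
illustration, not anything from CFAI: treat the agent's "goal weight" as a
state variable that the agent itself rewrites at each self-modification
step, with the logistic map standing in for a real self-modification
operator. Even this one-parameter rule moves from a stable fixed point to
chaos as r varies, and predicting in advance which regime you are in is
exactly the kind of question a serious dynamical systems theory would have
to answer.

def self_modify(goal_weight, r):
    # One self-modification step: the agent overwrites its own goal
    # weight. The logistic map is a deliberately crude stand-in for
    # whatever operator a real AGI would apply to its own goal system.
    return r * goal_weight * (1.0 - goal_weight)

def run(r, steps=50, goal_weight=0.4):
    for _ in range(steps):
        goal_weight = self_modify(goal_weight, r)
    return goal_weight

# r = 2.8 settles to a fixed point (a "stable" goal system),
# r = 3.5 cycles among four values, and r = 3.9 is chaotic, so
# tiny differences in the starting goal weight yield wildly
# different long-run goals.
for r in (2.8, 3.5, 3.9):
    print("r=%.1f: goal weight after 50 steps = %.4f" % (r, run(r)))

The point is not that a real goal system looks anything like this map,
only that even trivially simple self-modification rules can defy advance
analysis.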

-- Ben G


