Re: Posting to this list (was: Why friendly AI (FAI) won't work)

From: Thomas McCabe (pphysics141@gmail.com)
Date: Thu Nov 29 2007 - 13:44:34 MST


> I am working on AI

Whatever you may think of me, SIAI, or this list, *please stop* and
consider the possible consequences of what you are doing (see
http://www.intelligence.org/upload/cognitive-biases.pdf,
http://www.intelligence.org/upload/artificial-intelligence-risk.pdf).
Poorly understood AI is a serious threat to the existence of the
entire planet; we cannot afford to take stupid risks.

> and came to this list looking for discussion and
> feedback on some issues like the morality of experimenting on AIs and
> the need to incorporate FAI principles. These issues seemed well within
> the stated purpose of the list, so I didn't believe I needed to stop and
> study everything that a particular group has done before posting.

"It is the explicit policy of this list not to rehash the basics. SL4
is for advanced topics in futurism and technology. If we've discussed
it once before, or if it's something we think posters should already
know, you may be courteously referred to the archives, or to another
list." - http://www.sl4.org/intro.html

> But
> it's not my list, so I'm quite happy to look elsewhere for an
> appropriate discussion forum if these sorts of questions are not welcome
> here.

You might want to check out
http://www.transhumanism.org/mailman/listinfo/wta-talk or
http://www.agiri.org/email/.

> Just as a reminder, the stated purpose of the list is: "The SL4 mailing
> list is a refuge for discussion of advanced topics in transhumanism and
> the Singularity, including but not limited to topics such as Friendly
> AI, strategies for handling the emergence of ultra-powerful
> technologies, handling existential risks (planetary risks), strategies
> to accelerate the Singularity or protect its integrity, avoiding the
> military use of nanotechnology and grey goo accidents, methods of human
> intelligence enhancement, self-improving Artificial Intelligence,
> contemporary AI projects that are explicitly trying for genuine
> Artificial Intelligence or even a Singularity, rapid Singularities
> versus slow Singularities, Singularitarian activism, and more."
>
> As to the specific topic at hand, I've read about FAI to various depths
> for some time, though not enough to be anything vaguely close to an
> expert. The arguments presented have not convinced me that it's a viable
> option. But I could easily be wrong, so I posted my reasons, looking for
> convincing counter-arguments, which I haven't seen yet. So I'm
> continuing with my original belief set.
>
> I'm surprised that if you really believe that FAI is essential to the
> future of the human race, you don't try to evangelize it and patiently
> explain it to newbies. You'll get a lot more converts that way than by
> arrogantly telling anyone who doesn't agree with you that they don't
> know what they're talking about and obviously haven't read the
> literature or they would agree with you.

This is a good point. We should set up another list for explaining the
literature, answering newbie questions, etc.

> But I wouldn't worry about me creating a non-friendly AI. There are many
> other groups better funded and with smarter people. Right now, I'd worry
> about Google. (I know, I'm not the first to suggest that.)

The potential negative payoff is so huge that it's worth paying
attention to risks with tiny probabilities, as long as you take care
of the ones with large probabilities first.
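
To put rough numbers on that, here is a minimal expected-loss sketch in
Python; the probabilities and loss figures are invented purely for
illustration, not actual risk estimates:

    # Toy expected-loss comparison; every number is made up for illustration.
    p_tiny, loss_tiny = 1e-6, 1e10   # one-in-a-million chance, enormous loss
    p_big,  loss_big  = 1e-2, 1e4    # one-in-a-hundred chance, modest loss

    ev_tiny = p_tiny * loss_tiny     # expected loss: 10000.0
    ev_big  = p_big * loss_big       # expected loss: 100.0

    # The tiny-probability risk dominates by a factor of 100, which is why
    # it still deserves attention once the likely risks are handled.
    print(ev_tiny, ev_big)

The point is that probability alone doesn't determine how much attention
a risk deserves; probability times magnitude does.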

 - Tom


