Re: FAI prioritization

From: mwaser@cox.net
Date: Thu Apr 03 2008 - 11:30:15 MDT


---- Daniel Burfoot <daniel.burfoot@gmail.com> wrote:

> Second, note that there is no strong reason to believe FAI is really
> possible. We only know intelligence is possible by looking at human
> intelligence. There are no "friendly" humans in the sense that we'd require
> from an AI (there are no provably friendly humans).

1. I claim to be a provably Friendly human. What sort of proof do you need?

2. It is my contention that you are *never* going to get an AI that is more provably Friendly than a provably Friendly human (please take that as deliberately precise phrasing).

3. I *know* what Friendliness is. Due to time pressure from other projects, and the fact that I'm apparently totally incompetent at conveying some things via a mailing list (I've had success with several people face-to-face), I have not convinced people of that fact (and therefore look like a nut-job -- though a rational one ;-) -- but I can assure you that Friendliness is not only possible but actually rather easy.

The problem is that humans have derived ethics from the bottom up via evolution. There is actually a very simple, very clear top-down design that corresponds to *the best* of the bottom-up results (obviously choosing the best option wherever humans disagree -- which they always do).

> Third, from an altruistic view of things, it's not at all clear that
> advancing AI will make the world a better place. It's very possible that it
> will make the world a terrible place, for reasons including but not limited
> to the Friendliness problem.

Again, I strenuously disagree. An FAI will be one of the best things that the world has ever done (again, precise phrasing).
