Re: Manhattan, Apollo and AI to the Singularity

From: Michael Anissimov
Date: Thu Aug 24 2006 - 15:07:43 MDT

On 8/24/06, Richard Loosemore <> wrote:

> This impression would be a mistake. To take just the issue of
> friendliness, for example: there are approaches to this problem that
> are powerful and viable, but because the list core does not agree with
> them, you might think that they are not feasible, or outright dangerous
> and irresponsible. This impression is a result of skewed opinions here,
> not necessarily a reflection of the actual status of those approaches.

I'm sure that many on the list would be interested in seeing you write
up your ideas as a web page. A search for "Loosemore friendliness" only
brings up your posts on this list. Don't let the "list core" get you down.

The problem with designing motivational systems for AIs is that 99.9%
of the people who attempt it have such a poor conception of the problem at
hand that they aren't even wrong. For example, see this:

The guy who wrote this probably isn't a kook, and might even come
across as quite intelligent in person. It's just that his ideas for
AI are complete nonsense.

I'm sure your ideas aren't nonsense, and I know they've been discussed on this
list before, but it would be nice to see them on a static page.

Michael Anissimov
Lifeboat Foundation
