Re: Hiding AI research from Bad People was Re: OpenCog Concerns

From: J. Andrew Rogers
Date: Tue Mar 25 2008 - 00:42:27 MDT

On Mar 24, 2008, at 9:55 PM, Jeff Herrlich wrote:
> I'll try to keep this minimally political in accordance with list
> rules. I think it is most realistic and most cautious to assume
> that various governments will take an interest in AGI, as proto-AGIs
> develop.

How are current nominal "proto-AGIs" distinguishable from the myriad
of other dead "proto-AGIs"? I keep seeing this assumption that
governments can magically discern real AGI from the hundreds of crap
AGI projects that they *thought* were "proto-AGI" over the years. It
seems to me that people need to re-calibrate their model of what
government can discern with respect to AGI.

Describe, in very strict terms, how a proto-AGI is discernible from
parlor-trick AGI. What scientific measure can we use to distinguish
the two?

> For example, DARPA will often begin long-horizon, low probability
> projects, based on the small hope that something useful will come
> out of it. I think that government involvement may be unavoidable
> from a practical POV. [ I could be wrong, of course, but I figure
> it's safer to play it conservatively].

Non sequitur.

> From a strategic POV, I think that what we need to do is make the
> US government an "ally" of Friendliness. Obviously, we would need
> to *effectively* convey the importance and ethical responsibility of
> the Singularity. But there are some aspects of the US gov's
> structure conducive to this. For example, our advocacy can be
> concentrated on the next US President - really, only one person
> needs to be genuinely convinced of the gravity of the situation.
> Presidential power will take care of the other logistics, or at
> least we can reasonably hope so. Of course, the more US politicians
> we can convince, the better. I'm not saying that the US government
> is totally perfect or anything - just that I think this is the
> "safest" approach.

This bit of pablum is delusional and mostly stupid, sorry. It fails
not on just one point but on several, largely because it lacks any
intersection with reality.

I think it is safe to say that this line of reasoning has no business
on the SL4 list, mostly because it is more platitude than "reasoning".


J. Andrew Rogers

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT