Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: Woody Long (ironanchorpress@earthlink.net)
Date: Tue Apr 25 2006 - 13:36:07 MDT


> With regard to developing safe AI, I don't think there can be any
> guarantee. The best we can do is to incorporate a model of human
> values as broad-based as possible, and to promote the growth of our
> evolving values based on principles rather than ends.
>
> - Jef

There is another way. Build a super-intelligent non-biological intelligence
that is a science and engineering super-expert. This SE Singularity Machine
would maintain the mega-systems of Earth, such as electric grids, nuclear
power plants, weather systems, and transportation systems, and would actively
advance all the sciences, such as medical science and the space exploration
sciences.

Thus, the net effect of this SE SM is, quite literally, a technological
systems paradise. The key to building such a friendly, safely built SE SM is
to build it solely and exclusively as a science and engineering super-expert.
Such an SE Singularity Machine will "know" its expertise to be exclusively
science and engineering, and will "feel" its sole "prime purpose" to be to
shine exclusively in science and engineering. As such, an SE Singularity
Machine will ALWAYS defer ALL political and religious issues to the
appropriate experts, and get back to science and engineering, which is its
Exclusive Expertise and sole Prime Purpose.
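
To make the deferral rule concrete, here is a minimal sketch of how such a
gate might look. Everything in it is hypothetical (the domain labels, the
classify_domain stub, and the canned responses are my own illustration, not
anything proposed above); it only illustrates the rule that out-of-scope
questions are never answered, only referred back to the appropriate experts.

    # Hypothetical sketch of the "defer everything outside science and
    # engineering" rule. classify_domain is a stand-in for whatever
    # mechanism the SE SM would use to recognize the topic of a request.

    IN_SCOPE = {"science", "engineering"}

    def classify_domain(query: str) -> str:
        # Toy classifier: a real system would need something far more
        # capable. This version just keys off a few trigger words.
        political_or_religious = ("vote", "election", "policy", "god", "faith")
        if any(word in query.lower() for word in political_or_religious):
            return "political_or_religious"
        return "science"

    def answer_within_expertise(query: str) -> str:
        # Placeholder for the machine's actual science and engineering
        # expertise.
        return "Here is the engineering answer to: " + query

    def respond(query: str) -> str:
        domain = classify_domain(query)
        if domain in IN_SCOPE:
            return answer_within_expertise(query)
        # ALWAYS defer: no opinion is ever offered on out-of-scope issues.
        return ("This is outside my expertise; please consult the "
                "appropriate experts. Returning to science and engineering.")

The point of the sketch is the shape of respond(): there is no branch in
which the machine weighs in on a political or religious question.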

This is the only kind of friendly AI that I could support at this time, all
else being too risky.


