AI Options.

From: Mike & Donna Deering (deering9@mchsi.com)
Date: Wed Jul 10 2002 - 22:08:48 MDT


Let's look at the options for AI:

1. We never develop AI, or we develop it so far in the future that it doesn't matter to anyone alive today, unless we are signed up for cryonics.

2. We develop non-sentient, tool-level AI and implement its solutions through human-level institutions; in other words, we screw everything up.

3. We develop conscious, sentient UAI: game over.

4. We develop conscious, sentient FAI that thinks it shouldn't interfere with anyone's volition, and we destroy ourselves while it stands by and advises us that this is not the most logical course of action.

5. We develop conscious, sentient FAI that thinks it should keep us from destroying ourselves or each other. This involves preserving each person's volition up to the point where it becomes a threat to that person or to someone else. This involves maintaining the capability to control the volition of every being on the planet. This involves taking over the world. This involves maintaining a comfortable lead in intelligence over every other being on the planet. This involves limiting the intelligence advancement of all of us. Admittedly, a limit that is sufficiently far away has little practical effect, but we are still left, compared with the AI, in the philosophical position of pets.

In the future of non-biological intelligence, biologically derived and protected entities are little more than pets.

Mike.


