From: Tim Freeman (email@example.com)
Date: Mon May 05 2008 - 21:00:03 MDT
From: "Stuart Armstrong" <firstname.lastname@example.org>
>The corporate model
>... there would be the definite risk of AI designers following the
>letter of the law rather than the spirit, thus the system needing
>constant legal updating to keep up with ways of gaming the system.
>This would require a nimbleness never before seen in government.
>The government model
>...The difficulty of legislating friendliness remains, though less of
>a problem than in the corporate model...
I don't see why it's less of a problem.
>The centralised model
>1) You must accept deletion if the AIC imposes it
>2) You must not interfere with the physical set-up of the AIC
>3) You must not interfere with the AIC's data collection operation,
What's this entity called "you" in the rules above? Humans have
brains in their heads. They can think only with their brains and
cannot copy that process onto another computational substrate. This
makes it possible to identify humans by name and count them
essentially by labelling the skulls. The process doesn't work for
entities that don't share those anatomical restrictions, so it's not
obvious how to identify the entities you're talking about any more.
However, identifying the entities is an essential requirement for the
laws you're proposing there to have meaning, so the whole thing seems
ill-defined.
Making sure these laws retain their meaning in changing circumstances
leaves you with the same legislative problems as before. Isaac Asimov
spent many years writing about the unintended consequences of a
superficially similar set of rules.
Using words to confine an entity that's going to be a better lawyer
than you seems pretty dicey.
I don't see how to win with the slow-fast scheme. Once the dominant
force in the world is a disorganized bunch of AI's competing with each
other, and those AI's are able to do engineering, then neither human
cognition nor human evolution is driving things any more. A
collection of AI's isn't an individual AI, so no one program written
by humans is in control any more either. With no traceable ongoing
cause-and-effect between humanity and anything identifiable that
matters, I'd say we lost. I don't see hope of getting back on this
horse after being thrown off.
J. Storrs Hall believes a slow takeoff is survivable, judging by his
recent Singularity Summit talk. I haven't read his "Beyond AI" book
yet, so I don't know how he's able to believe that.
-- Tim Freeman http://www.fungible.com email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:38 MDT