Why playing it safe is the most dangerous thing

From: Philip Goetz (philgoetz@gmail.com)
Date: Thu Feb 23 2006 - 21:33:09 MST


I was just at an NIH biomedical computing interest group meeting where
Ben Goertzel & Bruce Klein & some others presented a summary of Ray
Kurzweil's book, /The Singularity is Near/. During the section on
government regulation, an idea occurred to me:

- The worst possible outcome of the Singularity is arguably not total
extinction, but a super-Orwellian situation in which the people in
power dictate the thoughts and actions of everyone else -- and,
ultimately, George W. Bush or some equivalent wins the Singularity and
becomes the only remaining personality in the solar system.

- We've already seen, with genetics, what happens when, as a society,
we "take time to think through the ethical implications". We convene
a panel of experts - Leon Kass & co. on the President's Council on
Bioethics - and, by coincidence, they come out with exactly the
recommendation that the President wants.

- A scenario in which we take time to "consider the ethical
implications" and regulate the transition to the Singularity is
almost guaranteed to result in measures that strengthen the power of
those already in power -- measures that seem most likely to lead to
the worst possible scenario:
Dubya-(or-Cheney)-equivalent-as-Ubermind.

- The internet is the decentralized, difficult-to-control thing that
it is only because the government wasn't prepared for it, and wasn't
able to supervise its construction.

- If we consider, on the one hand, that the internet, developed
rapidly with little regulation, worked out well and stayed
egalitarian; and on the other, that a cautious approach is
practically guaranteed to lead to the worst possible outcome...

- ... we must conclude that the SAFEST thing to do is to rush into AI
and the Singularity blindly, without pause, before the Powers That Be
can control and divert it.


