From: Matt Mahoney (firstname.lastname@example.org)
Date: Fri Apr 24 2009 - 09:31:51 MDT
--- On Fri, 4/24/09, Alexei Turchin <email@example.com> wrote:
> Types of unfriendly AI
> Generally, four main types of unfriendly AI should be distinguished:
> a) AI with a subgoal that is explicitly hostile to humans (e.g., an AI
> that converts the whole solar system into computronium to calculate the
> digits of pi, destroying humanity in the process).
> b) AI that harms people through "folly", that is, without understanding
> that its actions are harmful (a robot that removes all round objects
> from a room, including a man's head, or an AI that sends all people to
> Paradise).
> c) AI that governs the world and controls Earth, until a software
> failure severely disrupts that control (a crisis of complexity, a viral
> idea, or a division by zero).
> d) And, of course, a conflict between two friendly AIs with different
> systems of subgoals (e.g., the conflict of ideologies during the Cold
> War, or a religious war).
> Or are there more?
e) The AI does exactly what we designed it to do: the end of death, disease, and suffering, and total control over our environment. If you want super powers or a magic genie, your wish is granted. If instead you want to be happy with what you already have, you can reprogram your mind. It's a simulation, of course, but you don't know, or care. Inevitably you optimize your utility function to maximize your happiness, because that's how you are programmed. And because you only have a finite number of mental states, you stay there.
f) Nanobots and internet worms with superhuman intelligence compete for computing resources. DNA-based life can't compete and goes extinct. The resulting godlike intelligence believes itself to be friendly.
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT