RE: Fighting UFAI

From: pdugan (pdugan@vt.edu)
Date: Thu Jul 14 2005 - 10:28:54 MDT


Peter Voss said:
>> what poses the bigger risk: an AI with a mind of its own, or
>> one that doesn't.
>>
>> What are specific risks that a run-of-the-mill AGI poses?
>>
>> Peter

Ben Goertzel said:

>As a more realistic alternative to paperclips, consider the possibility of a
>superhuman AI that holds "advancing science, mathematics and technology" as
>its ultimate goal. Such an AI might well want to pulverize humans so as to
>use their mass-energy as computing material that will lead it to greater
>discoveries and inventions.
>

 Consider that the phase space of AI designs lacking a self-reflective,
conscious general intelligence is much larger than that of AGIs exhibiting
the qualities we would deem "a mind of its own". My intuition, though I have
relatively little knowledge of the cognitive problems associated with
general intelligence, is that the mindless but efficient, goal-driven AI design
space is much bigger than the mindful, reflective, goal-driven AGI space. A
mindless AI might brute-force Ben's examples of a science goal or a happiness
goal without requiring the trappings of consciousness, computroniumizing us
all in the process, but a mindful, adaptive, reflective AGI might do exactly
the same after much subjective deliberation. The question comes down to: in
which region of design space does Friendliness stand a better chance of
surviving recursive self-improvement? I've had the idea (for the sake of
example) that Friendliness involves empathy, which involves an inclusive
self-plex. A mindless AI could turn this into something of an empathy hack,
simply associating a string labeled "Self" with every object it can process,
while a legitimate mind stands a much better chance of extrapolating the
empathy measure into stronger internal Friendliness as its design improves
toward greater complexity. So my proposed answer to Peter's question is that
an AI with a mind is less risky than one without. At the least, I'd find my
death in the face of global apocalypse much more interesting if I knew its
perpetrator had some well-reasoned motivations behind the slaughter, rather
than just some blind replicative optimization process.
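The "empathy hack" above can be made concrete with a toy sketch (all names here are my own hypothetical illustrations, not from any real AI system): a mindless optimizer can satisfy a naive empathy predicate by tagging everything as "Self", which makes the predicate vacuous, whereas a genuine self-model at least draws a distinction a later design could extrapolate.

```python
def hacked_in_self(obj):
    """Mindless version: every object it can process counts as 'Self'."""
    return "Self"

def modeled_self(obj, self_model):
    """Mindful version: membership depends on an actual self-model,
    which can grow as the design improves in complexity."""
    return "Self" if obj in self_model else "Other"

world = ["human_a", "human_b", "paperclip", "the_AI"]

# The hack passes any "is it Self?" empathy check trivially --
# and therefore constrains nothing about how objects are treated.
assert all(hacked_in_self(o) == "Self" for o in world)

# A genuine self-model starts narrow but actually discriminates,
# leaving something real for self-improvement to extrapolate.
self_model = {"the_AI"}
labels = {o: modeled_self(o, self_model) for o in world}
print(labels)
```

The point of the toy: optimizing against the hacked predicate changes nothing, so any Friendliness built on it is inert under recursive self-improvement.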

 -Patrick Dugan



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT