RE: Complexity, universal predictors, wrong answers, and psychotic episodes

From: ben goertzel (ben@goertzel.org)
Date: Fri May 17 2002 - 13:33:43 MDT


***
Implication: Any Friendliness theory for AGI that requires perfect
rationality cannot be guaranteed to stay Friendly. Ironically, the best
prophylactic for this (other than not doing it at all) would be to make
the AI as big as possible to make the probability of a "psychotic
episode" vanishingly small.
***

This is a big issue with Friendliness, which we've discussed before.

How stable is the Friendly attitude with respect to perturbations, such as small irrational judgments?

The answer in my view is: nobody knows...
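
That said, the "make it big" intuition in the quoted paragraph can at least be made quantitative under a strong (and probably unrealistic) independence assumption: if a judgment is replicated across n independent modules that each err with probability p < 1/2, the chance of an erroneous majority falls off exponentially in n. A rough sketch in Python, with made-up numbers:

# Majority-vote error for n independent modules that each err with
# probability p. Illustrative only: real failure modes inside one big,
# tightly coupled AI are unlikely to be independent.
from math import comb

def majority_error(n: int, p: float) -> float:
    """P(an erroneous majority) among n voters, n odd, each wrong w.p. p."""
    need = n // 2 + 1  # wrong votes needed to swing the outcome
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))

for n in (1, 11, 51, 101):
    print(f"n={n:4d}  P(majority errs) = {majority_error(n, 0.1):.3e}")

The catch, of course, is the independence assumption; correlated failures are exactly what a "psychotic episode" would look like, so sheer size buys less safety than this bound suggests.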

***
One thing that I want to try doing is taking a solid-state disk array
(not cheap by any means, but a lot cheaper than buying a box that can
support and address, say, 128 GB of RAM directly) and turning it into a
giant swap partition on a 64-bit box. The idea is that this is a
back-door way to get really large blocks of addressable RAM without
buying a mainframe, while being a few orders of magnitude faster than a
hard drive.
***

Sounds very interesting...

How much would the solid-state disk array you mention cost, roughly?
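
As an aside, a variant of the same back door: rather than (or in addition to) swap, one could mmap a big file living on the array and treat it as directly addressable memory. A minimal sketch in Python, assuming a 64-bit Linux box with the array mounted at a hypothetical /mnt/ssd:

# Minimal sketch: memory-map a large file on the SSD array as an
# alternative to a transparent swap partition. On a 64-bit box the
# mapping can exceed physical RAM; pages fault in from the array
# on demand.
import mmap
import os

PATH = "/mnt/ssd/bigblock.bin"  # hypothetical file on the array
SIZE = 8 * 1024**3              # 8 GB for illustration; scale up as needed

# Create a sparse file of the desired size.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

fd = os.open(PATH, os.O_RDWR)
mem = mmap.mmap(fd, SIZE)       # one flat, byte-addressable region

mem[0] = 0xFF                   # touch the first page
mem[SIZE - 1] = 0x01            # ...and the last; both fault in from disk

mem.close()
os.close(fd)

The effect is similar to the swap trick, with the OS faulting pages in from the array on demand, but here you control exactly which data structures live out there.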

ben g


