Re: Pete & Passive AI

From: Chris Capel (pdf23ds@gmail.com)
Date: Sat Dec 10 2005 - 09:15:55 MST


On 12/9/05, Michael Wilson <mwdestinystar@yahoo.co.uk> wrote:
> Chris Capel wrote:
> > Passive AI reduces to the AI boxing problem, plain
> > and simple.
>
> Only for an extremely bad implementation. A correctly designed
> Oracle does not /want/ to optimise the state of the external
> world beyond providing people who ask questions with accurate
> information, and this desire remains stable under
> self-modification.

I should have said, it reduces to either AI boxing or Friendly AI,
depending on what you mean. The formulation of "passive AI" doesn't
gain you anything, except maybe a useful new viewpoint.

> An AI box relies on adversarial techniques
> (to use the CFAI term) to contain a black-box AGI of unknown
> goal system content. An Oracle has known goal-system content
> and any adversarial mechanisms present are emergency fallbacks
> intended to guard against uncaught design and implementation
> errors.

The difference between an adversarial technique and a design-level
change is not always clear-cut. If I were to design an AI and just
sprinkle in inhibitions here and there to keep the AI from wanting to
optimize the "external world", assuming the phrase could be anchored
properly, then that's ultimately an adversarial technique. And if this
is what passive AI is, then it does reduce to AI boxing, albeit
subtly. To implement an AI that *really* doesn't have any
non-passive goals, you have to pretty much solve Friendliness. So PAI
isn't really an alternate strategy, as I think it was originally
presented.

Chris Capel

--
"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT