Re: AI testing and containment Re: [SL4] Programmed morality

From: Eliezer S. Yudkowsky (eliezertemporarily@intelligence.org)
Date: Sun Jul 09 2000 - 13:33:10 MDT


Dale Johnstone wrote:
>
> Brian Atkins wrote:
> >
> >However to really test what an AI will do once it is
> >"loose" you would have to provide it with a quite awesome simulation
> >of the real world (Matrix-like) and then see what it does to the
> >humans. I don't think we will be able to do that even if we had the
> >hardware. So I would be interested to know of other possible ways to
> >test what the AI would do.
>
> What if the grass is pink, not green? Will it matter?

Sure, because then it would be obvious that it was a test.  Our world
hangs together.  It started with the Big Bang, evolved, and wound up
with us.  Aside from the Fermi Paradox and possibly qualia, there are no
major holes in the picture.  Now, you put an AI in The Village and it's
gonna be pretty obvious that the whole thing is a simulation.  Maybe a
stupid AI would fall for it, but not any AI smart enough for us to worry
about.  A superintelligence could probably look at its surroundings and
not only deduce that the whole thing was a simulation, but deduce ab
initio the nature of evolution and that the most likely explanation for
the simulation was a group of evolved beings worried about the motives
of superintelligence...
--
        sentience@pobox.com    Eliezer S. Yudkowsky
              http://intelligence.org/beyond.html

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT