From: Matt Mahoney (firstname.lastname@example.org)
Date: Mon Feb 11 2008 - 16:11:30 MST
--- Vladimir Nesov <email@example.com> wrote:
> On Feb 11, 2008 11:44 PM, John K Clark <firstname.lastname@example.org> wrote:
> > I have no great confidence that the first true AI will have any interest
> > in developing "Friendly AI technology". And even if He did, will He feel
> > the same way in 90 seconds? After all, that's the equivalent of many
> > millennia for a human.
> It can be limited in any required way, and coerced into doing our
> bidding, given enough tinkering (restarts, etc.) and safety measures.
> Technology is much easier to check than to produce (if it's designed
> for easy inspection), so given a reasonable protocol it should be
> feasible. Problems with the genie-in-the-box start when you let it
> out, and if you are not going to let it out from the start, it should
> be possible to make use of it. It's an unstable situation, so the
> first-priority thing for the genie to do is to produce a safe genie,
> after which we destroy the original one.
This is all assuming that AI can somehow be developed in isolation. I don't
think it can. Competing systems will be distributed over the internet to take
advantage of the vast computing resources and knowledge already available.
They will be developed first.
-- Matt Mahoney, email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT