Re: Safety of brain-like AGIs

From: Shane Legg (shane@vetta.org)
Date: Fri Mar 02 2007 - 08:18:53 MST


> Regarding 1, we seem to have a solid grasp on the issue intuitively in
> the context of the Novamente design, but not a rigorous proof. We also
> have ways to test our grasp of the issue via experiments with
> toddler-level AGIs.

I expect something similar to develop: a conceptual understanding of how
the motivation system works and evolves, backed up by experiments on a
limited AGI in a sandbox setting. I don't claim this is perfect, or the
only way to approach the problem, but it seems like one of the better
currently feasible ways to try to deal with this issue. I can think of a
few others.

Once I've finished my PhD thesis (in a few months), I will put together a
small document on approaches to AGI safety. I am sure it will attract
howls of protest for being incredibly naive; however, I believe we could
do with a list of less-than-perfect practical approaches to the problem
that AGI developers should be aware of, citing the main strengths and
weaknesses of each.

I will title it "So, you're going to build a super intelligence?" and
solicit feedback on the document as it develops on my blog (vetta.org).

Shane
