From: Shane Legg (firstname.lastname@example.org)
Date: Fri Mar 02 2007 - 08:18:53 MST
> Regarding 1, we seem to have a solid grasp on the issue intuitively in
> the context of the Novamente design, but not a rigorous proof. We also
> have ways to test our grasp of the issue via experiments with
> toddler-level AGIs.
I expect something similar to develop, that is, a conceptual understanding
of how the motivation system works and evolves, backed up by experiments
on a limited AGI in a sandbox setting. I don't claim this to be the perfect,
or only, way to approach the problem, but it seems like one of the better
feasible ways to try to deal with this issue. I can think of a few others.
Once I've finished my PhD thesis (in a few months), I will put together a
document on approaches to AGI safety. I am sure that it will attract howls
of protest for being incredibly naive; however, it is my belief that we could
do with a list of less-than-perfect practical approaches to the problem that
AGI researchers should be aware of, citing the main strengths and weaknesses
of each.
I will title it, "So, you're going to build a super intelligence?" and will
report on the document as it develops on my blog ( vetta.org ).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT