From: Wei Dai (firstname.lastname@example.org)
Date: Fri Oct 22 2004 - 20:41:39 MDT
On Fri, Oct 22, 2004 at 08:55:28AM -0400, Eliezer Yudkowsky wrote:
> We're not talking about a trivial amendment of the axioms, and I would like
> to know which axioms will do the trick.
I think you're right that there are big unsolved problems lurking in the
choice of axioms. I don't know if Schmidhuber has responded to this thread
yet, but I bet he wouldn't deny that. Maybe choosing the right axioms
is no easier than programming a self-improving AI using a procedural approach.
> That's the entire problem of Friendly AI in a nutshell, isn't it?
It's a slightly more general problem. How does the AI programmer make sure
the AI will do what he wants it to do, which may not necessarily be
"friendliness"? I think we need some other term for this more general
problem, since solving it would automatically solve Friendly AI, but
not vice versa. Perhaps "safe AI"?
> Yes, but then the FAI problem is generally overlooked or treated as a
> casual afterthought, and journal referees aren't likely to point this out
> as a problem. Hardly fair to Schmidhuber to zap him on that one, unless
> he's planning to actually build a Godel Machine.
I don't see why it's unfair to zap him, as long as we zap everybody else
for it too. The safety issue needs to be part of every paper on AI, even if
it only says "no safety considerations were included in this work" or
"safety is still an unsolved problem", just to remind potential
implementors that it's not safe to actually build the AI as described.