RE: Volitional Morality and Action Judgement

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 23 2004 - 19:19:49 MDT


Eliezer wrote:
> Having general intelligence sufficient unto the task of building a mind
> sufficient to self-improve is not the same as being able to happily
> plunge into tweaking your own source code. I think it might literally
> take considerably more caution to tweak yourself than it would take to
> build a Friendly AI, at least if you wanted to do it reliably. Unlike
> the case of building FAI there would be a nonzero chance of accidental
> success, but just because the chance is nonzero does not make it large.

We've had this discussion before, but I can't help pointing out once
more: we do NOT know enough about self-modifying AI systems to estimate
accurately that there's a "zero chance of accidental success" in
building an FAI. Do you have a new proof of this that you'd like to
share? Or just the old hand-wavy attempts at arguments? ;-)

Note: I am not advocating proceeding in a way that RELIES on the
possibility of accidental success. I'm just making a conceptual point.

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT