RE: Summary of current FAI thought

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jun 01 2004 - 17:41:45 MDT


Eliezer wrote:
> A Really Powerful Optimization Process is an AGI but non-sentient,
> if I can figure out how to guarantee nonsentience for the hypotheses
> it develops to model sentient beings in external reality.

I conjecture that modeling sentient beings in external reality, within
plausible computational resources, poses data-analysis and
pattern-recognition problems so difficult that they can only be
addressed by an analysis/modeling system that is itself a sentient
being ;-)

In other words, I conjecture that sentience is a necessary consequence
of being a Really Powerful Optimization Process Operating Within
Plausible Computational Resources.

However, I don't believe that human-style sentience is a necessary
consequence. This gets deep into the definition of "sentience", of
course....

-- Ben G


