Re: Safety of brain-like AGIs

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 28 2007 - 07:46:46 MST


Shane Legg wrote:
>
> I don't know of any formal definition of friendliness, in which case,
> how could I possibly ensure that an AGI, which doesn't yet exist, has
> a formal property that isn't yet defined? That applies to all systems,
> brain-like or otherwise.
>
> If we consider informal definitions, then clearly some humans are
> friendly and intelligent.

Yes, but no human is **guaranteeably** friendly and intelligent...

>
> Perhaps a very intelligent and friendly system, in the informal sense,
> might be just what we need to help us come up with a formal definition
> of friendliness for super intelligences?
>
> Shane

That may well be how things unfold, I agree...

-- Ben



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT