From: Ben Goertzel (firstname.lastname@example.org)
Date: Sat Oct 23 2004 - 19:09:03 MDT
This stuff has been gone over many times in the archives. To sum up my own views:
1) I believe that working out a truly self-modifying, superintelligent AI
system is a hard problem but a solvable one. I think I have a viable
solution, which however will take some years to be completed (see
http://www.realai.net/AAAI04.pdf, or www.agiri.org). So far as I can tell
Eliezer does not yet have a viable solution, though he may well come up with
one in the future.
2) About proving whether Friendliness can be guaranteed or not. Science is
not at the level now where this kind of thing can be *proved*, but every
indication is that Friendliness CANNOT be guaranteed. There is not a shred
of evidence, intuitive or mathematical or scientific, that it can be
guaranteed for any superintelligent AI system.
-- Ben G
> -----Original Message-----
> From: email@example.com [mailto:firstname.lastname@example.org]On Behalf Of Maru
> Sent: Saturday, October 23, 2004 6:13 PM
> To: email@example.com
> Subject: Re: Weaknesses in FAI
> Not to distract from the very important discussion on
> fundraising, but after reading up on all Eliezer's works, I have
> to ask: What do *you* guys see as the current theoretical
> weakness of FAI? Is it proving Friendliness can be guaranteed or
> not? Is it working out a truly self-modifiable AI? Or in
> verifying that strong superintelligence is possible, and not just
> optimized weak superintelligence? I'm dying to know what the
> minds who thought it up think.
> One last thing, hopefully this'll never be relevant anyway: Would
> a brute uploaded mind ever be able to aspire to strong
> superintelligence?
This archive was generated by hypermail 2.1.5 : Sun May 19 2013 - 04:01:09 MDT