RE: Weaknesses in FAI

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Oct 23 2004 - 19:09:03 MDT


Maru,

This stuff has been gone over many times in the archives. To sum up my own
views:

1) I believe that working out a truly self-modifying, superintelligent AI
system is a hard problem, but a solvable one. I think I have a viable
solution, which, however, will take some years to complete (see
http://www.realai.net/AAAI04.pdf, or www.agiri.org). So far as I can tell,
Eliezer does not yet have a viable solution, though he may well come up with
one in the future.

2) On proving whether Friendliness can be guaranteed or not: science is not
yet at the level where this kind of thing can be *proved*, but every
indication is that Friendliness CANNOT be guaranteed. There is not a shred
of evidence, intuitive or mathematical or scientific, that it can be
guaranteed for any superintelligent AI system.

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Maru
> Sent: Saturday, October 23, 2004 6:13 PM
> To: sl4@sl4.org
> Subject: Re: Weaknesses in FAI
>
>
> Not to distract from the very important discussion on
> fundraising, but after reading up on all of Eliezer's works, I have
> to ask: What do *you* guys see as the current theoretical
> weakness of FAI? Is it proving Friendliness can be guaranteed or
> not? Is it working out a truly self-modifiable AI? Or verifying
> that strong superintelligence is possible, and not just
> optimized weak superintelligence? I'm dying to know what the
> minds who thought it up think.
> ~Maru
> One last thing (hopefully this'll never be relevant anyway): Would
> a brute uploaded mind ever be able to aspire to strong
> superintelligence?
>


