From: Ben Goertzel (firstname.lastname@example.org)
Date: Fri Jan 18 2002 - 18:11:00 MST
> > I would say, rather: Maybe programming benignity
> > early on will have a
> > beneficial effect on later-stage superintelligent
> > AI, or maybe it won't.
> > Furthermore, of all the ways I have seen of
> > attracting investor dollars,
> > this has to be one of the most indirect and
> > unworkable ;>
> I think we are in agreement here, Ben. I believe that
> one of the keys to creating an SI is understanding in
> as much detail as possible how our own brains work.
Actually, I don't agree with that! I think that is one possible path to
creating an artificial intelligence, but not the only possible path; and I
suspect that it *won't* be the first path followed. I suspect that AI will
move faster than brain science in the next decade, even though it has lagged
behind for the past two decades.
> That much I think everyone agrees on. I go further
> however and strongly suggest that another key is to
> fine-tune the neuro-logic behind the complex array of
> chemical activity which makes us 'emotionally happy'.
> This idea of creating an SI that is the
> super-embodiment of Spock speaks more about the person
> suggesting it than any true SI.
Yes, I do agree there. In all probability, an SI will have its own complex
of emotional motivations, different from ours but not "absolutely rational"
in any sense...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT