Peter Voss's deep dark subconscious demons [was: Ben vs. Ben]

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 30 2002 - 13:35:35 MDT


Well, Eliezer, I don't think I want to get into a detailed thread on the
fascinating and scintillating psychology of Peter Voss! ;>

My impression is that, overall, Peter does not agree with you any more closely than I do regarding your theories of AGI, Friendly AI, and related topics.

He is more of a "superfast hard takeoff" believer than I am (in that way resembling you), but he seems to believe *more strongly than I do* that it is now too early to focus serious attention on the Friendliness of AI systems (a major difference from you).

But I'll let Peter restate his own views if he wishes...

-- ben g

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Sunday, June 30, 2002 1:02 PM
> To: sl4@sysopmind.com
> Subject: Re: Ben vs. Ben
>
>
> Ben Goertzel wrote:
> > E.g. I know Peter Voss thinks it's just way too early to be seriously
> > talking about such things, and that he's said as much to
> Eliezer as well...
>
> You and Peter have different AIs. Peter and I have our differences on
> cognitive science and Friendly AI, but I think we have pretty much the
> same take on the dynamics of the Singularity and the moral
> responsibilities of a seed AI programmer. If Peter says his AI doesn't
> need a controlled ascent feature yet then it's because his current code
> is doing some experiential learning, but it's not strongly recursive -
> it doesn't have Turing-complete structures modifying each other.
>
> Also, Peter is in general much more of a perfectionist. He might not
> always agree with me on what constitutes a problem, but if he sees a
> problem I would expect him to just fix it. In general, you strike me as
> someone who needs a strong reason to fix a problem and Peter strikes me
> as someone who needs a strong reason *not* to fix a problem, so if *even
> Peter* thinks his AI doesn't need a controlled ascent feature, it
> probably doesn't.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>