From: Brian Atkins (firstname.lastname@example.org)
Date: Sat Jun 29 2002 - 19:19:33 MDT
Ben Goertzel wrote:
> > Ben Goertzel wrote:
> > >
> > > If my projections are correct, it will be years, not months, before this
> > ^^
> > > kind of thinking is directly relevant to Novamente work [i.e.,
> > until we have
> > > a system that is autonomously generally intelligent on a level where its
> > > Friendliness or not is a worry.] But we'll get there -- we're
> > in this for
> > > the long haul, for sure...
> > >
> > See, you keep saying things like that, and we all keep getting
> > freaked out.
> > Please try to insert a mental roadblock in your head: every time you
> > find yourself considering a decision that could increase or decrease an
> > existential risk, don't choose the potentially riskier option based
> > solely on your own intuitions.
> No choice regarding Novamente is based solely on my own intuitions, it's a
> collective effort, even though I have a bigger mouth about it than the
> others involved.
It doesn't matter how many intuitions are consulted. You're still avidly
missing the point.
> > If you can't be 100% sure (hell, even if you can), then you need to take
> > the safest path unless there is a super darn good reason not to.
> We have very scant resources right now. These resources are devoted to
> implementing and testing and tuning cognitive and perceptual mechanisms, in
> an unfinished system that has no autonomy and no goal structure and no
> feelings, etc. I think this is the best current allocation of our resources.
Resources don't really matter. Whether you are one guy in a garage or
a Manhattan Project, this doesn't change how you should be looking at
and addressing the risks. The only difference is how much realtime you
can devote to them.
> Until we have a system that implements the autonomous, goal-based aspects of
> the Novamente design, there is effectively zero chance of anything dangerous
> coming out of the system.
More famous last words.
> It would be more useful for us to spend our time mitigating the existential
> risks of nuclear or biological warfare, than to spend our time mitigating
> the existential risks of the *current Novamente version* experiencing a hard
> takeoff, because the latter just can't happen.
Why not take the time now to think about and design on paper the various
ways you expect to minimize risks in Novamente? Why is it you have time
to sit down and do detailed design work on the seed AI part of Novamente
but you can't also spend the comparatively smaller amount of time working on
this additional stuff? Is it being driven by your plan to commercialize
Novamente?
> When we implement the autonomous goal-based part of the system and we have a
> mind rather than a codebase with some disembodied cognitive mechanisms in
> it, we will put the necessary protections in the codebase.
> I have now said this many times and it probably isn't useful for me to say
> it again...
Perhaps if you say it enough times the existential risks will magically
disappear.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT