Re: How hard a Singularity?

From: Brian Atkins (brian@posthuman.com)
Date: Sun Jun 23 2002 - 15:25:29 MDT


James Higgins wrote:
>
> At 02:18 PM 6/23/2002 -0400, Brian Atkins wrote:
> >Eugen Leitl wrote:
> > >
> > > On Sat, 22 Jun 2002, James Higgins wrote:
> > >
> > > > to get the Singularity in full force the more people die (that's my
> > > > interpretation of his vision, at least). Personally, I'd rather let a
> > > > few hundred thousand people die while making certain that the
> > > > Singularity won't just wipe everyone out. I mean, what's the point in
> > >
> > > I agree with this assessment.
> > >
> >
> >Just a pedantic nitpick, but if Eugen gets his future where all countries
> >pass and perfectly enforce laws against AI development, and it therefore
> >takes at least 20 more years before we get some alternate Singularity
> >technology such as uploading, we are talking about quite a few more deaths
> >than a "few hundred thousand":
> >
> >20 years * 365 days/year * 150k deaths/day = 1095 megadeaths
>
> Sounds perfectly reasonable to me if it means saving the human race as a
> whole. Even if I'm one of those 1 billion deaths.
>

Certainly I agree, and I'm sure almost everyone here does too. The problem
is that you can't know with any certainty that building an AI will mean
killing off the human race until you at least build one and test it out
a bit. The protocol for developing a Friendly AI (FAI) includes various
safeguards to let us do this safely. What would be irresponsible is to
make it impossible to run this experiment and get the data because of
unproven gut reactions to the idea of seed AI.
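
For the record, the back-of-envelope figure quoted above works out as
follows (a quick sketch in Python; the 150k deaths/day rate is just the
rough worldwide mortality figure assumed in my earlier post):

    # Back-of-envelope estimate: deaths during a 20-year delay, assuming a
    # constant worldwide mortality rate of roughly 150,000 deaths/day (the
    # figure used above; the true rate varies, so this is only approximate).
    years = 20
    deaths_per_day = 150000
    total_deaths = years * 365 * deaths_per_day
    print(total_deaths)              # 1095000000
    print(total_deaths / 1000000.0)  # 1095.0 megadeaths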

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

