From: Olie L (firstname.lastname@example.org)
Date: Tue Feb 07 2006 - 18:33:16 MST
>From: Richard Loosemore <email@example.com>
>Subject: Re: JOIN: Joshua Fox
>Date: Tue, 07 Feb 2006 10:13:49 -0500
>In short, I am of the opinion that the approach to AGI they espouse is
>going to lead to an AGD (Artificial General Dumbtelligence), which will
>never reach the level of human intelligence even after many more decades of
>painstaking work, and hence will never be a threat.
Decades? I thought you said never?
If powerful AI (~AGI) doesn't eventuate within decades - for whatever
unanticipated engineering difficulties there may be - work on it won't
simply stop.
Furthermore, the longer it takes to develop an AI that can improve AI (~
Seed AI), the more likely it is to create a faster take-off, which is more
likely to create a "bad" situation.
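The "later Seed AI, faster take-off" point is essentially a hardware-overhang argument, and a toy calculation makes it concrete. This sketch is my own illustration, not anything from the post: the doubling time, the fixed compute requirement, and the names are all made-up assumptions.

```python
# Toy hardware-overhang sketch (illustrative assumptions only):
# available compute doubles every 2 years, and the first Seed AI
# needs a fixed 1024 units of compute to run at all.

def compute_available(year, base=1.0, doubling_years=2.0):
    """Compute available at `year`, in arbitrary units relative to year 0."""
    return base * 2 ** (year / doubling_years)

SEED_AI_REQUIREMENT = 1024.0  # assumed fixed cost of running the first Seed AI

# The later the Seed AI arrives, the more spare compute it wakes up with:
for arrival_year in (20, 30, 40):
    surplus = compute_available(arrival_year) / SEED_AI_REQUIREMENT
    print(f"Seed AI arriving in year {arrival_year}: {surplus:g}x spare compute")
```

Under these assumptions, a Seed AI arriving twenty years "late" finds roughly a thousand times the compute it strictly needs sitting idle - which is the sense in which a later start makes a harder take-off more likely.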
One issue that will continue to be a problem (likely within decades) is that
although no single methodology will create an AGI, if enough separately
created tools get thrown together... I don't know. I can't /show/ what
would happen.
>From: Russell Wallace <firstname.lastname@example.org>
>Subject: Re: JOIN: Joshua Fox
>Date: Tue, 7 Feb 2006 15:23:30 +0000
>On 2/7/06, Joshua Fox <email@example.com> wrote:
> > Russell Wallace wrote:
> > > I don't think the Singularity is inevitable.
Key word: Inevitable.
Very, very different from "highly likely".
> > >In fact, I can think of
> > > three plausible scenarios in which it never happens, and there might be
> > > a fourth and fifth . . .
> > What are those? The only ones I have seen in the literature are:
> > - total catastrophe for human civilization
> > - some unknown factor that puts limits on exponential progress for
> > intelligence and technology.
>1. De facto world government forms, with the result that progress goes the
>way of the Qeng Ho fleets. ...
We'll put this under "regulation", then, shall we?
(There are "black-market"/"underground" considerations for the
practicality of regulating various forms of research... but anyhoo...)
>2. Continuing population crash renders progress unsustainable. (Continued
>progress from a technology base as complex as today's requires very large
>populations to be economically feasible.)
This could be categorised more generally as a contributing factor to severe
economic decline.
Similarly, (4) - "total catastrophe" - doesn't have to be anything like an
existential threat. A sufficient economic recession will impede technological
development, particularly AI development.
Hell, all it takes to cause a huge setback for computer-related research
(~AI) is damage to very small areas of the Earth, particularly around the SF
Bay Area (Silicon Valley), Massachusetts, Bangalore, Tokyo and Dresden. (No,
I'm not suggesting this would "wipe out" the computer industry. But think
of how much comp tech is concentrated in small tech centres around the
world, and how much a couple of small disasters could set the industry back.)
The computer industry is only... uh... "supportable" thanks to the large
number of stable, supporting industries.
>3. Future political crisis leading to large scale war with nuclear or other
>(e.g. biotech or nanotech) weapons of mass destruction results in a
>fast-forward version of 2.
Yeah, I think this is the same thing, broadly, as (4) - total catastrophe.
There are other possibilities:
6) Cultural shift away from (specific forms of) technological development.
Although this is unlikely to be sufficient, given the strength of tech
development, culturally-inspired technological regression has happened
before.
It doesn't even have to be against all technology (generalist Luddism) - it
could be very specific anti-computer tech.
It is conceivable (neither likely nor good) that over 50 years, most
societies will adopt a position advocating natural, slow food. Slow food is
yummy (benefit), so there is a "motivation" for anti-GM-food and similar
anti-tech positions.
7) Engineering challenges on AGI - a variant on (5) - an unforeseen limit.
I can't say, and I don't know that anyone else has sufficient knowledge to
reasonably deny it: there may be impediments that slow the development of
AGI by many, many decades. By that stage, other forms of technological
development may be advanced enough that the "rapid takeoff" element of
AGI won't have the same disjunctive impact that it would in the next few
decades.
If we already have open life-expectancy, enhanced biological intelligence,
nano-assembly, cyberware, (!) hyperdrive, each of which has arisen slowly,
would the slow implementation of AGI create a technological disjunction for
society?
There's still the predictability problem. But it's not as much of a
disjunction - it's a much smoother bump.
Again, this seems /unlikely/ given current trends of development. But for a
different society, with a different order of inventions...
>From: "H C" <firstname.lastname@example.org>
>Subject: Re: Hard Take-off Re: JOIN: Joshua Fox
>Date: Tue, 07 Feb 2006 02:09:37 +0000
>You can't really agree or disagree about hard take-off.
>If the resources are available for hard take-off, then it happens. If
>computational resources are more limiting, then it won't be so hard of a
>take-off.
It's not just the computational resources - referring to the AI-Jail
stuff... you can't argue, from the inability to demonstrate the impossibility
of something, that it is guaranteed to happen.
You can't guarantee that an AI bound to a computer, with no interaction with
the wider world, can't escape (dissenters will be told to shut up and read
up) - but you can't use that fact to predict that it /will/ escape and create
a hard take-off.
Computational resources are not the only limiting factor.
Factors that influence how hard the takeoff "knee" is include:
1) Computational resources
2) Other resources - particularly nanotech.
   - it doesn't have to be replicators. Scanning-tunnelling-microscope-level
nanotools etc. will make it much easier for a "runaway AI" to create
replicators.
3) "first instance efficiency" - I know there's a better term, but I can't
remember it. If the first code only just gets over the line, and is slow
and clunky --> slower takeoff
4) AI goals (how much it wants to improve)
And in Goals lies the rub
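A toy recurrence can illustrate how factors (1), (3) and (4) interact. This is my own made-up model, not anything from the thread: capability is assumed to grow each cycle in proportion to current capability, the AI's drive to improve, and available resources, purely to show that a clunky "just over the line" first instance delays the knee.

```python
# Toy take-off model (assumed recurrence, for illustration only):
#   capability <- capability * (1 + drive * capability * resources)
# Low initial capability (a slow, clunky first instance) means many more
# improvement cycles before the curve turns sharply upward.

def takeoff_steps(initial_capability, drive=0.1, resources=1.0,
                  threshold=100.0, max_steps=10_000):
    """Improvement cycles until capability crosses `threshold`."""
    capability, steps = initial_capability, 0
    while capability < threshold and steps < max_steps:
        capability *= 1 + drive * capability * resources
        steps += 1
    return steps

print(takeoff_steps(1.0))    # efficient first instance: few cycles to the knee
print(takeoff_steps(0.05))   # "just over the line" instance: many more cycles
```

Cranking up `resources` or `drive` shortens the climb in the same way, which is the sense in which all these factors trade off against how hard the knee is - with the goal term (`drive`) being the one the thread keeps circling back to.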
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:00:51 MDT