Re: The Singularity vs. the Wall

From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Apr 24 2006 - 07:47:13 MDT


Some of your points are well taken (there might be hugely dramatic
technologies that come along after the Singularity, like time travel,
which make us forget how fabulous the Singularity itself once seemed),
but on some other points you and I are talking about different things
while using the same word.

I do not assume "AGI" to be just another form of computer engineering.
I believe there are strong reasons to argue that it would arrive in
something close to human-mind form, not RPOP form. (The argument in a
nutshell: it is just *easier* to do it that way, so this is the first
way that it will be done). The main thing that would be different about
this AGI is that it would not have all the dangerous stuff that
evolution (in her sincere but cockeyed wisdom) designed into the human
mind: no aggression, no jealousy, no thirst for dominance, etc etc.

From that perspective, some of the things you say about AGI simply do
not make sense to me.

I could pick up on several different examples, but let me grab the most
important one. You seem to be saying (correct me if I am wrong) that an
AGI will go around taking the same sort of risks with its technological
experiments that we take with ours, and that because its experiments
could hold existential risks for us, we should be very afraid. But
there is no reason to suppose that it would take such risks, and many,
many reasons why it would specifically not do the kind of risky stuff
that we do: if you look at all the societal and other pressures that
cause humans to engage in engineering ventures that might have serious
side effects, you find that all the drivers behind those pressures come
from internal psychological factors that would not be present in the
AGI. For example, we do stuff at breakneck speed and cut corners
because we don't want to wait a hundred years for every new technology,
and we do that because we individually don't live that long. If we
lived for a million years, we could afford to take our time with a new
technology and get it to the point where we were extremely sure of its
safety.

It would take a long essay to reinforce the point I am making here, but
the bottom line is that from the moment that someone builds the kind of
AGI I am talking about, we will be vastly safer than we are at the moment.

Richard Loosemore.

Phillip Huggan wrote:
> It's not fair to call AGI the only Singularity. In the end, AGI is just
> another engineering
> technology; you still have to specify some goal for it to serve. It
> just so happens that a good chunk of our global industrial base is
> devoted towards both improving computer hardware and programming bigger
> software, so this powerful engineering technology may appear "ahead of
> its time". But even post-AGI there should still be progress. A
> time-machine would give you direct access to the machine's whole future
> light-cone volume instantly. The ability to manufacture black holes,
> travel near c, or even harvest antimatter may be a bigger accelerant
> post-AGI, than AGI would be now. In this context a technology like MNT
> or indeed any exponential manufacturing method, can be seen as
> Singularity-ish.
>
> AGI spacecraft might not blow apart but their time-machines or their
> particle accelerators might. This is why I think humans should have the
> ultimate say on any AGI action. If ve is just allowed to force through
> vis engineering concerns, ve will eventually come across problems that
> may be as intractable (from the PoV of us meaty, fragile humans who will
> suffer extinction from any AGI engineering-gone-wrong) to it as our
> societal problems are to us. With AGI you are attempting to take on all
> human extinction risks in one giant bite. I'm not saying don't do it,
> I'm just saying don't hold achieving AGI to be the be-all and end-all
> (even though it might be), especially if a superior pathway presents
> itself in the decades ahead. Don't turn on SkyNet if we've got a good
> thing going in the decades ahead.
>
>
> Richard Loosemore <rpwl@lightlink.com> wrote:
>
> This is an important thread.
>
> I think that Kurzweil's tendency to emphasize exponential curves of
> general technology is really a nuisance. I don't care how many curves
> he finds, I think that without AGI we simply cannot get more than a
> certain rate of new technological development. Frankly, I think it will
> hit a plateau. (I even have my suspicions that we already did hit the
> plateau).
>
> And as for AGI, this for me is *the* definition of the Singularity.
> None of the rest is very important. Without it, I see so many
> limitations on the ability of human engineers to handle the complexity
> of ordinary technology that I do not believe there will be any dramatic
> advances in the near future. I think spacecraft will continue to blow
> up because of single-line errors in their code, and the IRS will fail
> again and again to upgrade their computer software.
> <SNIP>
>
>
