Re: SIAI & Kurzweil's Singularity

From: 1Arcturus (arcturus12453@yahoo.com)
Date: Fri Dec 16 2005 - 08:32:28 MST


Jef Allbright <jef@jefallbright.net> wrote:

Some of us think that one possible solution to the problem of
unfriendly AIs is to aggressively augment and amplify the intelligence
of humans--and more importantly, the intelligence of human social
organizations composed of augmented humans--such that we have a broad,
powerful, and evolving base of intelligence grounded in human values
in place to deal with the threat of unfriendly AIs. Society is already
proceeding down this broad path, but certainly not with any sense of
urgency.

  Jef,
   
  I still find this confusing. If humans augment and amplify their intelligence using technology, they *will be* AIs. That is, a significant portion of our intelligence will be 'artificial'. The only 'unfriendly AIs' would then be unfriendly enhanced humans and pure machines at near parity with them. That scenario isn't much different from the present one: humans threaten humans, and advanced technology can harm humans, but the overall balance of power, and human control over 'pure machines' (those with less-than-human intelligence), keeps any existential catastrophe at bay.

    On the other hand, some of us think that the risk of unfriendly AI is
so great in its consequences, and possibly so near in time, that
humanity's best chance is for a small independent group to be the
first to develop recursively self-improving AI and to build in
safeguards which, unfortunately, have not yet been conceived or
demonstrated to be possible. I don't disagree with this thinking, but
I assign it a very small probability of success because I think such a
group would be vastly outweighed by the military and industrial
resources that can and will pick up the project when those powers
think the time is right.

  The U.S. military has been working on machine-human interfaces for years, augmenting human cognition, and so on. Of course they are also working on pure machine AI, but so far that work has produced only narrow applications (not general intelligence).
   
  I'm not sure the military would *want* an AI with full humanlike general intelligence :) (including the ability to talk back, refuse orders, etc.). Since they presumably want to keep things under their own (human) control, I figure they are unlikely to develop humanlike AI or any sort of runaway self-improver.
   
  So if *they* don't, and private industry doesn't, and no mad loner or fringe group does (given their likely lack of resources, this seems a safe bet), then I don't see the threat as being that great. The military and private industry are pushing toward human augmentation, with ancillary machine AI and a tendency to interface humans with the machinery.
   
  Ever the optimist,
   
  gej
