From: 1Arcturus (email@example.com)
Date: Fri Dec 16 2005 - 09:03:44 MST
Olie L <firstname.lastname@example.org> wrote:
> It's not so much a "never". It's that the best way to achieve
> superintelligence is to start with an expandable design. Once there is
> superintelligence, it would be much easier to figure how to integrate human
Superintelligence should begin with intelligence, which is in humans. Humans are making rapid advances in understanding this intelligence (cf. Kurzweil's discussion of accelerating progress on reverse-engineering the brain). We already know how to expand designs, and in 30 years we will know very much more. According to Kurzweil, we already have 'neural transistors' which allow two-way communication between neurons and machines. People will be easy to integrate, likely already fully integrated, by the time we crack the intelligence nut. Then, of course, superintelligence will follow easily (hopefully).
> As I read it, the big difference is that certain groups, including the SIAI,
> envisage that the ability of an AI that is able to directly modify itself
> could result in a rate of technological improvement far beyond current
> trends, even if current trends are exponential. Such self-improving AIs
> (~seed AI) do not fit neatly into the GNR revolutions, as described by
They are the "R" part of GNR. Kurzweil (confusingly IMO) groups machine AI with robotics.
> I don't see that being a necessity at all. Uploading is _very_ different
> from a full understanding of brain ops.
How so? I kind of agree with Kurzweil that if we really understood the brain (human intelligence - the 'software'), we could already implement it on our current machines. The hardware isn't the holdup, the software is. Embodiment (even virtual) would suffer with today's methods, but human intelligence could be instantiated on machines.
> Consider for a moment that a lot of people won't want to change themselves.
> Would they rather be dominated by upgraded humans (who have human foibles
> and motivations) or be "governed" by AIs who lack self-interested biases?
I think the discussions on this list tend to demonstrate that humans, even you humans, disagree over how to define lack of self-interested bias, and how to discern it. If you don't think the majority of humanity would be wildly suspicious of a 'black box' intelligence with a supposedly perfect morality (designed by humans with foibles and motivations, no less!), then I would say you are being naive.
Humans who want to stay unmodified are going to face a real crisis, but I don't think there will be many such humans. Humans are going to try to keep up with each other's capabilities, as they always have, out of paranoia, if nothing else. And if the time comes that unmodified humans cannot even understand or compete seriously with their modified brethren, they will have to work out some sort of practical arrangement together, hopefully in both of their best interests.
> Yes, and I'm sure you're aware of humanity's success in long-term planning.
> Foresight is not our greatest quality; being amongst rapid rates of
> technological development does not help us guide all the development by all
Humanity's success without advanced technologies cannot fairly be compared to humanity's success with advanced technologies, specifically with the implementation of human-machine merger.
Our ability to understand our human condition will have its own sort of 'hard takeoff' after the reverse-engineering. Our ability to work out a harmony among our individual, competing interests will also be strengthened immeasurably by access to machine intelligence capabilities. Artificial intelligence, in humans merged with technology, will likely allow us to solve all sorts of social and political and ethical problems that now seem intractable and are today resolved only by force or accident. And our ability to solve these problems will be advanced by our machinelike control over the nature and functioning of our own minds, and by our ability to advance that control further.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:54 MDT