Re: Augmenting humans is a better way

From: Brian Atkins (brian@posthuman.com)
Date: Sun Jul 29 2001 - 16:01:25 MDT


I am pretty willing to call it a draw, but had to say a few last things...

There is no way to PROVE 100% w/o a doubt right now that AI is very likely
to precede RNIs. However, what I think I have at least shown is that we are
now at a point where there is a real possibility of having real AI. The
hardware is here. The software /may/ be here. Neither exists yet for RNIs,
so what we can at least say is that there is NO chance of having RNIs this
decade. Will you at least give me that, Higgins?

James Higgins wrote:
>
> I know that researchers have been able to interface devices with the
> nervous system. There is a vision system that can take a camera image and
> directly stimulate the optic nerve to produce rudimentary vision in some
> blind people, thus giving them real-time vision (albeit of a very

My understanding, though, is that this is not neuron-level integration; it
is a very blunt hack. In terms of direct neuron input/output, things are
still at the handful-of-connections stage, I believe. In fact we have met
with someone here in Atlanta who works with a Dr. Kennedy, who has a
company commercializing this direct-neuron tech for patients who cannot
move. When he was asked when the number of neurons we can connect to would
start doubling every year, I believe he suggested never.

> low resolution currently). So we can say that there has been success in
> neural interfaces. On the other hand, as far as I know, no one has ever
> created a working general AI of any order. Webmind/Biomind may be the most

Non-general AIs that beat humans in many different areas have been
developed.

> advanced in that field but if I remember correctly they have never run the
> system as a whole. So I'd be tempted to say that, at the moment, research
> into neural interfaces seems further along than research into AI.

Whatever, but you'd be wrong. RNIs are at the stage AI was at in the
mid-20th century: neither the hardware (the ability to connect
automatically to large numbers of individual neurons in many different
people) nor the software (figuring out how to increase memory,
intelligence, etc. with an implant) exists yet. With AI we have the
hardware, and MAY have the software; we at least have the software for
some areas of human ability.

Now, RNIs may progress faster than AI did on its timeline, but you cannot
say that at the current time RNIs are anywhere near where AI is. Perhaps
you simply know less about AI than you think you do about RNIs, and this
is biasing you to believe RNIs will arrive first?

> > > But it will most likely take many steps to get to that point, especially
> > > based on Eli's Seed AI.
> >
> >Ok, but you will agree that in an SI vs. somewhat augmented humans match,
> >the SI can get the Singularity done quicker, probably extremely quickly
> >by whipping up some very advanced replicating nanotech hardware. The
> >only real question is how long it takes to achieve SIness.
>
> Given a fully functioning General AI, the availability of strong nanotech,
> and that this AI has access to nanotech (I find this VERY unlikely), then
> yes. But what fool is going to give a seed AI access to nanotech? And
> this still requires a functioning General AI that we have no idea how
> to build.

That is not what I meant. I meant the seed would grow up into an SI, and
once we have an SI it would very, very quickly be able to make nanotech
(which the humans hopefully haven't figured out yet).

BTW, what fool wants to give nanotech to 6 billion unsupervised sentients?

>
> > > development, a computer that runs 10 times faster has almost no effect on
> the speed of the developer. Neural implants could, on the other hand, have
> an incredible impact on the speed of developers, as maybe they could think code
> > > instead of typing it. I'm not saying that this is going to be necessary,
> > > but it would definitely be helpful and may be necessary in order to keep
> > > the proposed time line.
> >
> >Actually, with stuff like Flare we are beginning to see how computing
> >power can help developers out. Just as software helps Intel engineers
> >create chip designs, software will eventually help software people
> >create code. It already does, in fact, but the effect is pretty limited.
>
> Don't get me started on Flare. That is going to take a long time and a
> huge amount of effort. Great idea, but I doubt it will get the necessary
> resources.

Hey you could always help out...

>
> >I don't buy the argument that there is a major difference between the
> >speed we think and type. I know when I'm coding I spend MOST of the
> >time simply thinking about what to type next. And I was always the
> >fastest/most productive coder wherever I worked... if it was the case
> >that typing speeds were what was holding software creation back, you
> >could simply throw more developers at a project and it would get done
> >faster. Or hire professional typists and let the programmers talk really
> >fast :-)
>
> More developers = longer development cycle. It is a myth that adding
> developers speeds up development! A professional typist would get in the

Exactly, that is what I said. Yet if it were simply typing speed holding
things back, what gives?

> way, try using voice recognition (even good ones) for coding some time.
>
> Personally, I can code at least as fast as I can type. If I could type
> faster (85 wpm last time I checked) I'm fairly certain I could code
> faster. But I am admittedly an oddity. I once personally produced some

I can also code as fast as I can type WHEN I KNOW WHAT TO WRITE. But
I seriously doubt you sit in your office or cube every day and type
continuously for 8 hrs. You most likely type some, then sit there and
compile, then test-run it, then find a bug, then go looking through
your code to figure out where it went wrong, etc. etc. Faster typing
will not really speed this process up much at all. The only way to
really speed it up would be to get the ideas of what you want the
software to do directly from your mind into the code, and that would
be an extremely complex task for an RNI to pull off.

> 350,000+ lines of working, clean & (mostly) reusable code in a single year
> (while gathering requirements, doing architectural design & technical
> management). Maybe this is why *I* think a neural interface would be so
> darn helpful (because I want one now)!

If we use 46 weeks, 5 working days a week, and 8 hrs a day, that works out
to roughly 1,500 lines of code a day on average. At 85 wpm and an average
of 8 words per line of code, the actual time spent typing would be about
2.4 hours a day, which leaves well over 5 hours a day for thinking,
compiling, testing, and debugging. No matter how you stack it, typing
speed was not the dominant factor in this particular 350K LOC project.
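
For anyone who wants to check the arithmetic, here is the back-of-the-
envelope version as a quick Python sketch. The 46-week year and the
8-words-per-line figure are just my rough assumptions from above, not
measured values:

  # Rough estimate: how much of an 8-hour coding day goes to raw typing?
  lines_per_year = 350000.0     # Higgins's claimed output
  working_days = 46 * 5         # assumed: 46 weeks, 5 days/week
  hours_per_day = 8
  words_per_line = 8            # assumed average words per line of code
  wpm = 85                      # Higgins's measured typing speed

  lines_per_day = lines_per_year / working_days             # ~1,522
  typing_hours = lines_per_day * words_per_line / wpm / 60  # ~2.4 hrs
  # Even with instant typing, the rest of the day still remains:
  max_speedup = hours_per_day / (hours_per_day - typing_hours)  # ~1.4x
  print(lines_per_day, typing_hours, max_speedup)

So even granting generous assumptions, infinitely fast typing would speed
up the whole process by at most ~1.4x; the bottleneck is everything else.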
 
> >Finally, if you think SI (whether AI or human-based) is all we need,
> >then why the bias of wanting human-based ones instead of AI first?
>
> I'm not biased! I'm simply trying to state that we don't have any real,
      ^^^^^^^^^^
> honest estimation of when we'll get either neural implants or general
> AI. I might, personally slightly prefer the human path due to my end goals
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> and my concern over the Sysop. But any path that truly works and ends with
> myself + wife + friends uploading into a reasonable, non-restrictive
> environment is fine by me!
>

You look slightly biased to me, but perhaps I'm missing something. I also
want your end goal of everyone (everyone who wants to, anyway) uploaded
and safe in a non-restrictive environment, and yet I didn't end up picking
the human path. The AI path looks quicker and safer to me, and also like
the most likely end state no matter what, unless we kill ourselves off
first.

-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

