RE: Augmenting humans is a better way

From: Ben Houston (ben@exocortex.org)
Date: Sat Jul 28 2001 - 20:35:00 MDT


Hi Brian,

>Just because a company may exist in a
>jurisdiction outside of the USA, does not mean that their research and
>development efforts won't be severely impacted by a clampdown on research
>into RNIs in the USA. Do you think IBM will just uproot all those people
>and make them move to Japan?

I agree with this. The world is becoming more integrated/cooperative at the
international level and such options are becoming much less viable.

>This seems to be getting away from my point, which is that biologically
>augmenting humans most likely will not enjoy the widespread support and
>resources that the original poster was predicting. It is unlikely in my
>opinion that companies in the USA or Europe will make attempts to
>commercialize such tech, and there is even a chance such products might
>be outlawed the same way that cloning has been in some countries already.

Please understand that cloning was banned because it is currently quite
unsafe. Did you know that for each successful cloned animal the researchers
had to induce hundreds of pregnancies that failed in various ways? They
banned it because even though it wasn't safe people were still willing to
try it.

I expect that non-FDA/FTC-approved implants will be banned as well, just
as non-FDA/FTC-approved artificial hearts are currently banned from being
implanted into humans.

> > > >Exactly, it may well be impossible to come up with a one-size fits all
> > > >technology for something as uniquely individual as the brain.

It probably is possible to come up with something similar to a
one-size-fits-all model. What is your specific argument against such a
thing?

> > > > And what
> > > >company will take the risks to commercial it if they know that for many
> > > >people it won't work, or they even risk getting sued?

Who said that neural implants would have a low probability of working for
individual patients? What is your specific argument against such a thing?

> > > >We live in a
> > > >country where Dow Chemical got sued by women who got breast implants.
> > > >Will companies really expose themselves to the kinds of risks involved
> > > >with neural hacking?

You forget that overall Dow Chemical made a lot of money and that they did
make and sell the breast implants in the first place.

> The vast majority of potential users will probably be
>pretty satisfied with external wearable apparatus, and I think this is
>where the real action will be. Many of the things you want can be done
>with wearables- you only need to get access to the internals if you want
>to really try and increase the raw intelligence or speed of thought or
>direct memory capacity.

Access to those last three things is the Holy Grail.

>Actually there are some people around here that think they know that. They
>simply haven't proven it yet. This is quite different again than the state
>of progress in RNIs where no one really has any idea yet how to do much at
>all besides linking a few neurons to a computer.

I would posit that you actually have no clue as to what the state of the art
is.

>No one knows how to
>increase your working memory so that you can remember 50 phone numbers at
>once.

We know where it is instantiated in the brain. We also know how to
modify one's performance, although not to the extreme you are talking
about.

>In fact, reverse engineering the evolved mess that is the brain may
>be very very hard.

It may be hard, but there are many well-funded professors and graduate
students working on it every day.

>At least machine learning folks have created non-general
>AIs that can excel at specific tasks like chess. Even that puny accomplishment
>is much more than the progress so far in RNIs.

Like I said before, you don't seem to know what the current state of the art
in neural implants is.

>An AI should only be eventually constrained by how much computing power
>it has available. The same will hold for your RNI.

Exactly right.

>How can a RNI that
>is internal to your skull, or at most wearable, possibly match the
>computing power available to an AI?

Who said the bulk of the processing must be done on-person? Maybe one has
an uplink to off-person processors. I don't see any hard limits on the
processing power available to a human with a neural interface.

>Actually with stuff like Flare we are beginning to see how computing
>power can help developers out. Just like how software helps Intel
>engineers create chip designs, software will eventually help software
>people create code. Actually it already does, but it is a pretty limited
>effect.

The above idea of using computers to help people with their chores is
the basis of the computer revolution. This includes helping Intel design
new chips, helping a developer write a new program, or helping me write this
email. It is in no way whatsoever "a pretty limited effect."

Cheers,
-ben houston
http://www.exocortex.org/~ben

-----Original Message-----
From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf Of
Brian Atkins
Sent: Saturday, July 28, 2001 8:15 PM
To: sl4@sysopmind.com
Subject: Re: Augmenting humans is a better way

James Higgins wrote:
>
> At 04:47 PM 7/28/2001 -0400, you wrote:
> >James Higgins wrote:
> > > >Exactly, it may well be impossible to come up with a one-size fits all
> > > >technology for something as uniquely individual as the brain. And what
> > > >company will take the risks to commercial it if they know that for many
> > > >people it won't work, or they even risk getting sued? We live in a
> > > >country where Dow Chemical got sued by women who got breast implants.
> > > >Will companies really expose themselves to the kinds of risks involved
> > > >with neural hacking?
> > >
> > > Hello? Sorry, but I just HAVE to point this out. Did you know that there
> > > are more countries in the world than the United States? Personally,
> >
> >Yes and almost all of them are less advanced when it comes to biological
> >and computing sciences. Many of them are close or even equivalent, but
> >those same countries are also the ones who will likely be even less
> >likely to work on Really Scary Human Augmenting science. Think Europe.
> >So if you have to bail out of the USA that is going to extend the
> >bio-based Singularity timeline even farther than I am already thinking
> >about.
>
> Same companies, different countries. You can buy medication overseas that
> the FDA has not (or will not) approve for sale in the US. Multi-national
> corporations make individual countries mostly irrelevant when it comes to
> holding back new technology/advances.

Ok, so how many PhDs does IBM employ in countries outside the USA compared
to how many it has working here? Just because a company may exist in a
jurisdiction outside of the USA, does not mean that their research and
development efforts won't be severely impacted by a clampdown on research
into RNIs in the USA. Do you think IBM will just uproot all those people
and make them move to Japan?

This seems to be getting away from my point, which is that biologically
augmenting humans most likely will not enjoy the widespread support and
resources that the original poster was predicting. It is unlikely in my
opinion that companies in the USA or Europe will make attempts to
commercialize such tech, and there is even a chance such products might
be outlawed the same way that cloning has been in some countries already.
Some countries like Japan might make an attempt, but without support
of the other countries their progress will be slowed down or may not
even actually take off if they realize the potential market of users may
be very small. The vast majority of potential users will probably be
pretty satisfied with external wearable apparatus, and I think this is
where the real action will be. Many of the things you want can be done
with wearables- you only need to get access to the internals if you want
to really try and increase the raw intelligence or speed of thought or
direct memory capacity.

>
> > > if/when they come up with implants that offer a significant mental
> > > advantage and have a low chance of screwing you up I *will* be getting
> > > one. I don't care if I have to go to Japan, Europe, Russia, Mexico or
> > > Chiba City (CyberPunk is my favorite fictional genre). When it becomes
> > > possible to do, it will also become possible to get (and without waiting
> > > for FDA approval)! Then, assuming these have a significant effect on
> > > intelligence, the next series will likely be available sooner than might be
> > > expected (you have to assume the developers are going to use their own
> > > product). I also imagine that income for upgraded individuals will
> > > drastically go up, which will make affording the next upgrade much
> > > easier. Which is another reason why I'd want to get on the boat early.
> > >
> > > But, that said, this will still take a very long time. Possibly much
> > > longer than the AI path. However, I will NOT say that the AI path is
> > > likely to be faster than this path since NO ONE IN THE WHOLE WORLD HAS EVER
> > > CREATED ANYTHING REMOTELY SIMILAR TO REAL AI. And thus it is IMPOSSIBLE
> >
> >Now you are the one making claims.. for all you know Webmind may very well
> >be remotely similar to real AI. In fact you have Ben here making that
> >claim. I do not see anyone around claiming to be near to finishing a
> >Real Neural Interface. RNIs seem to be around the stage of development
> >that AI was back when computers were using vacuum tubes.
> >
> >A different way to look at it is this: with the computing power of the
> >near future, AI is at the stage now where we can do real scientific
> >experimentation. That (being able to really experiment) almost always
> >leads to breakthroughs. RNIs are not there yet. I think you will agree
> >with me that the AI path /definitely/ seems to be much farther along from
> >these two perspectives.
>
> No, I'm specifically NOT making claims. I'm taking a show me attitude.

You did make a claim regarding the existence of something near real AI.
And you are using that claim to simultaneously claim that the AI path can
not be shown to be more likely to succeed first. Have you examined the
Webmind design and code in detail and determined that it is not remotely
similar to a real AI? Do you care to address my two points showing how
much more advanced AI research already is compared to research into real
neural interfaces?

>
> IF/WHEN they come up with implants that A) make a significant difference in
> mental capacity and B) aren't likely to screw up the recipient. I'm not
> making any claims there.
>
> "When it becomes possible to do, it will also become possible to get". If
> the time is taken to design such an implant, it will become available in
> one fashion or another. With enough money you can even go buy a nuclear
> weapon, so you will be able to buy implants once they have been designed.

I am not disputing that of course.

>
> If I had an implant that significantly enhanced my mental ability, I feel
> confidant that I could negotiate for much better pay. I'm already quite
> good at this anyway.
>
> As for Real AI, when someone gets one working then we can talk. The fact
> is that no one knows what Real AI is going to require. I also think Ben is

Actually there are some people around here that think they know that. They
simply haven't proven it yet. This is quite different again than the state
of progress in RNIs where no one really has any idea yet how to do much at
all besides linking a few neurons to a computer. No one knows how to
increase your working memory so that you can remember 50 phone numbers at
once. In fact, reverse engineering the evolved mess that is the brain may
be very very hard. At least machine learning folks have created non-general
AIs that can excel at specific tasks like chess. Even that puny accomplishment
is much more than the progress so far in RNIs.

> doing a great job and is probably the most likely (that I know of anyway)
> to succeed. However, no one can predict with any credibility that he will
> succeed. I like to think he will and I would bet that he would also.

Well, it has already been shown to be able to learn and accomplish certain
specific tasks such as stock market prediction. Again, more progress than
in RNIs. When do you think we'll have an RNI that increases your IQ to 300?
And where do you think AI research will be by then? It'll probably have
already succeeded is the answer.

>
> > > to estimate if/when we will ever get real AI. Without incredibly massive
> > > funding it may take 15-20 years just to build a knowledge base sufficient
> > > to kick start the thing. And you can't seriously argue the point because,
> >
> >Knowledge bases (shouldn't this be one word?) already exist both in natural
> >form (the world, the Net) and in prepackaged formats like Cyc. Again, you see
> >that AI is farther along in development.
>
> Yes, they exist. Are they of the correct form? Do they contain sufficient
> knowledge?

Between the existing bases, the Net, and the potential to interact with
the real world, I don't see any lack of information. This is an uninteresting
issue- if a human kid can learn then so can a properly done AI.

>
> > > honestly, you don't know otherwise. I give very serious credit to Ben
> > > Goertzel's opinions on AI (keep up the great work) and I doubt he could, in
> > > all honesty, give any sort of realistic time line for the first Real AI
> > > (TM). Thus I don't know, you don't know, we don't know.
> >
> >He may be unwilling to do so in public, but I can tell you that it
> >won't take until 2030 according to rumors I hear...
>
> Certainly the hardware will be available before then (there is a strong
> track record to predict that on). But there is no track record to predict
> Real AI. In order to make predictions on how long it will take to
> understand it. We don't understand Real AI yet. I could just as easily
> say we will break the barrier for traveling faster than the speed of light
> by 2050, but that also requires knowledge that we don't yet have and thus
> we can not predict this.
>
> So everyone agrees that 2030 is the outside estimate, but that is just a
> hopeful guess (that I also share, BTW).

If we put 2010 and 2030 as the outside dates then the most probable time
is 2020? Do you think we'll have RNIs by then?

>
> > > Even self upgrading AI will take many steps to get there. Same exact
> > > thing, just a different route. No technology is going to just go *blip*
> > > and produce Singularity.
> >
> >Steps at computer speed, not biological hacking speed. VAST difference.
>
> You're probably correct, but they both require steps. And no one knows for
> certain, maybe an advanced biological implant would allow for reprogramming
> & expendability, which could put both on similar terms.

An AI should only be eventually constrained by how much computing power
it has available. The same will hold for your RNI. How can a RNI that
is internal to your skull, or at most wearable, possibly match the
computing power available to an AI? No matter what computing substrate
you use the physical space constraint on the RNI will limit it. The only
way a human could compete with an AI would be for the human to upload.

>

(regarding a very quick Singularity)

> >Actually you cannot say that for certain about AI. We definitely can
> >say that about the biological route, at least up till the point we
> >get nanotech/inloading. There is nothing to prevent an AI that is
> >smart enough from developing a quick route to nanotech and then yes
> >*blip* away we go.
>
> But it will most likely take many steps to get to that point, especially
> based on Eli's Seed AI.

Ok, but you will agree that in a SI vs. somewhat augmented humans match,
the SI can get the Singularity done quicker, probably extremely quickly
by whipping up some very advanced replicating nanotech hardware. The
only real question is how long it takes to achieve SIness.

>
> > > You know, producing the first Real AI may be so difficult that it may just
> > > require augmented humans to get their in any reasonable amount of
> > > time. Have you considered that possible reality?
> >
> >No I do not see that as a reasonable possibility. Most AI scientists
> >will agree that even if we can't design an AI, we can evolve one. By
> >brute force if we have to by simply trying all possibile code. It's
> >like picking a combination lock, if the lock is openable at all then
> >you will eventually open it just by trying all ways. And the rise in
> >computing power makes this almost inevitable by 2030 or even earlier.
>
> Don't agree, especially after your argument. Trying "all possible code"
> would take a very, very long time. Unless a significant percentage of
> available resources were devoted to this it could easily exceed
> 2030. Computing power does you no good when it comes to software

Okay, I'll give you 2040 max. Actually I misread your original message
and didn't notice the "in a reasonable time" part. But anyway, such a search
is very unlikely to be needed.
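To put a rough number on why blind search alone can't be the whole story (a back-of-envelope sketch; the 1 KB program size is just an illustrative assumption of mine, not anyone's actual proposal):

```python
# Back-of-envelope: the space of all bit strings of length n is 2**n.
# Even a tiny 1 KB candidate program has 2**8192 possible variants.
program_bits = 8 * 1024          # 1 KB expressed in bits
search_space = 2 ** program_bits

# The count has thousands of decimal digits -- far beyond any
# conceivable exhaustive enumeration, regardless of hardware growth.
print(len(str(search_space)))    # 2467 decimal digits
```

So any "evolve an AI" route would have to lean on selection pressure to prune the space, not literal enumeration of all possible code.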

> development, a computer that runs 10 times faster has almost no effect on
> the speed of the developer. Nural implants could, on the other hand, have
> incredible impact on the speed of developers as maybe they could think code
> instead of typing it. I'm not saying that this is going to be necessary,
> but it would definitely be helpful and may be necessary in order to keep
> the proposed time line.

Actually with stuff like Flare we are beginning to see how computing
power can help developers out. Just like how software helps Intel
engineers create chip designs, software will eventually help software
people create code. Actually it already does, but it is a pretty limited
effect.

I don't buy the argument that there is a major difference between the
speed we think and type. I know when I'm coding I spend MOST of the
time simply thinking about what to type next. And I was always the
fastest/most productive coder wherever I worked... if it was the case
that typing speeds were what was holding software creation back, you
could simply throw more developers at a project and it would get done
faster. Or hire professional typists and let the programmers talk really
fast :-)

>
> Plus, it might be much more likely for enhanced humans to get friendliness
> right the first time.

If their IQ was higher then I agree. If they simply are faster typists
then I disagree. However, as you see above I don't think we'll have a
bunch of super high IQ people available until after 2030, which is too
late to have any effect. Furthermore if we do get to super high IQ
people before we get AI, then you run into the problem of worrying about
what else the high IQ folks will do with that brainpower, especially
if the brainpower becomes somewhat widespread. See below...

>
> > > > > > the singularity than the imho cringing one proposed by the Institute of
> > > > > > building an AI and - if everything works out as hoped - maybe humans will be
> > > > > > permitted to scale the heights; what I would call the "singularity by proxy"
> > > > > > path. I, for one, intend to participate DIRECTLY in the singularity. I
> > > > > > hope there are at least a few others here as well.
> > > > > >
> > > >
> > > >In order to participate directly in a transhuman based Singularity you
> > > >would have to be one of the first humans enhanced into transhumanity. How
> > > >do you plan to achieve that? Even if you do, the vast majority of humanity
> > > >will just be riding your coattails no matter which path occurs first.
> > > >
> > > >Secondly, without an AI to guide things, what prevents individuals intra or
> > > >post-Singularity from using nanotech or other ultratechnologies in destructive
> > > >ways in an anarchic fashion? I'd like to hear a brief but coherent timeline/
> > > >description of how you think this would play out. Our argument is that while
> > > >it all probably would turn out ok, it would generally be safer to get a
> > > >Friendly AI in place first.
> > >
> > > Well, personally, I'm still not sold on this whole Friendliness
> >
> >Care to answer my questions?
>
> Which questions?
>
> The participate directly one? Don't care. I would *like* to participate
> but if it happens without me and goes smoothly I'll happily ride someone's
> coattails.
>
> Destruction / Anarchic? First, I have nothing against anarchy. Actually,
> an anarchy where the individuals treat each other respectfully would be my
> preference. As for destructive technologies, nothing. My personal belief
> is that either super intelligence will promote friendliness or we're doomed.

Sure, that'd be great if everyone treated everyone perfectly. But in my
experience you are smoking crack if you think that is realistic. In a
world without a Sysop, how can that possibly last? If you're worried
that one very well tested AI can go wrong, I'm worried that 6 billion
uploaded humans just might include a few bad apples whom access to IQ-
enhancing technologies might enable to do very bad things with
very advanced technologies that are quite possibly much more difficult
to defend against than to use offensively. How do you prevent that from
happening?

Finally, if you think SI (whether AI or human-based) is all we need,
then why the bias of wanting human-based ones instead of AI first?

--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT