Re: Uploads and AIs (was: Deliver us from...)

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Apr 07 2001 - 10:59:15 MDT


This is exactly the type of discussion I wanted to initiate. This very
much helps to explain why you believe your Friendly Seed AI is the best
approach.

At 07:45 PM 4/6/2001 -0400, Eliezer S. Yudkowsky wrote:
>For me, the superiority of AI over uploading lies chiefly in two facts:
>First, uploading is a technology years ahead of AI *or* military
>nanotechnology. If you're postulating that uploading was naturally
>developed before both of the "alternatives", I want to know how. If
>you're trusting a transhuman AI, a sort of limited Transition Guide, to
>upload the first humans and leave it to them from there, I want to know
>why this path doesn't subsume almost all the risk of straightforward seed
>AI development.

Ok, could someone please explain the difference between nanotechnology and
"military grade" nanotechnology?

It is still possible that some form of uploading will arrive before
AI. Researchers are already working on simulating very small portions of
the brain exactly. That work will continue to expand until they are able to
simulate a brain in general. The big question, of course, is how to scan
someone's brain into such a simulation. Nanotechnology makes this much
easier, of course. And, depending on the definition of military grade
nano, such tech may exist before AI. It may also be possible to do an
external scan similar to an MRI, only much, much more sensitive. I'm not
an expert in this field by any means, and AI may arrive before brain
scanning ability, but it may not.

>That said, if I had both an uploading device and a seed AI in front of me
>- *which is not the case* - which one I'd choose would depend on how good
>the AI was. If ve'd been run through a few rounds of wisdom tournaments
>(see _FAI_), and just looked better than human at handling both
>philosophical crises and self-modification, I'd go with the AI, of
>course. Ve'd be starting out with a much higher level of ability and
>morality.

Question: why not do both?

This leads to some critical questions that I have not seen information
about (not that it isn't written somewhere). In what environment do you
launch the seed AI? If the seed AI were on a single, non-networked computer
it should be unable to affect the physical world, no matter how smart it
became. Or is the plan to give the seed AI complete access to the
Internet? Or something else?

Also, it is obviously expected that this seed AI, after upgrading itself
many times over, will invent and gain access to nanotechnology. How does this
occur? Even if it invents nanotech, the first assembler still has to be
physically built. How is that expected to happen?

Depending on these bounds, it may be completely reasonable to launch an
array of seed AIs or upload any number of humans.

>As I currently see it, it only takes a finite amount of effort to create a
>threshold level of Friendliness - and more importantly, structural
>Friendliness - beyond which you can be pretty sure that the AI has the
>same moral structure as a human; or rather, a moral structure which can
>handle anything a human can. Then the human's inexperience at
>self-modification, and emotional problems, become disadvantages.

Hmm, interesting. Is this covered in some reasonably compact section of
FAI? I really wish I had time to read everything, but I'm unbelievably
busy (and probably will be for the next year+).

>However, it seems to me nearly certain that the potential for a hard
>takeoff - supersaturated computing power - will exist years before
>uploading becomes possible. Thus, the question is simply one of Friendly
>AI, unFriendly AI, or someone blowing up the world.

Those are good arguments for seed AI.

>I'd probably go with one human, three at the most. I'd be mostly
>concerned about finding a human who was (a) willing to hold off on the
>emotional modifications and concentrate on just increasing intelligence
>for a while, and (b) finding someone who, at least overtly and explicitly
>and as a surface-level decision, thinks that rationalization and
>irrationality and non-normative cognition is a bad thing. I doubt that
>Christian L. *believes himself* to tolerate irrationality, and that is
>perhaps the single most important quality to start out with.

Actually, I was thinking about whole research teams. Preferably existing
teams that already work well together. Just upload them into hardware so they
can work faster and build on that. Leverage the hardware so the best minds can
spend vastly more time trying to solve the hard problems.

Imagine if you, Eliezer (do you have a team?), could be uploaded and think
at 2x your current speed. Then consider that without needing to sleep,
eat, etc. you probably gain an additional 2x worth of available time, so
you could work 4x as fast as before, without distraction. And if hardware
speed keeps doubling every eighteen months or so, a year and a half later
you could think 8x as fast, and so on.
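
To make the arithmetic explicit, here's a quick back-of-the-envelope
sketch (the 2x figures and the eighteen-month doubling time are just my
own assumptions for illustration):

    # Rough effective speedup of an uploaded researcher vs. biological time.
    # Assumptions (illustrative only):
    #   - the simulation starts at 2x biological thinking speed
    #   - no sleep, meals, etc. roughly doubles usable hours
    #   - hardware speed doubles about every 18 months
    base_speed = 2.0
    time_factor = 2.0
    doubling_months = 18.0

    def effective_speedup(months_elapsed):
        hardware_gain = 2.0 ** (months_elapsed / doubling_months)
        return base_speed * time_factor * hardware_gain

    for months in (0, 18, 36):
        print(months, "months:", effective_speedup(months), "x")  # 4x, 8x, 16x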

Plus, thanks to the way computers work, you get a previously impossible
ability. There is absolutely no reason why we couldn't have 100 Eliezers
in software. And since they would all be the same person, they should work
very well together. Now you could focus on 100 different problems at the
same time. For instance, one of them could just write papers and talk to
us without delaying the project at all. :)


