Re: SL4 meets "Pinky and the Brain"

From: James Higgins (jameshiggins@earthlink.net)
Date: Tue Jul 16 2002 - 11:36:56 MDT


Ben Goertzel wrote:
>>James, this is pure slander.
>
> Seriously: I think I'm going to have to side with Eli on this topic.
>
> "Taking over the world" has the flavor of trying to make oneself,
> personally, the ruler of the world, so that one can enforce one's
whims and
> desires and plans on the world in detail. This is not what Eliezer is
> proposing, exactly.

I think that "exactly" is where the problem arises. I've already
explained my reasoning in another post and, once again, I apologize
for the connotation. If I were wiser I would have couched it better.

> In fact, he is not even proposing to create software that will definitely
> "take over the world".

He has in the past, but that is a moot point and was not related to my
reasoning in that post.

> I think he is proposing to create software that will have a *huge
> influence* on the world, but not necessarily control it in any full &
> complete way.
>
> And, I am proposing to do effectively the same thing. Anyone seeking to

Right, you also qualify as wanting to take over the world. Taking over
the world to create a utopia (a real one, not some fluffy vision of one)
would be a good reason to do such a thing, but it could still entail
taking over the world (at least as a starting point).

> produce superhuman AI is really pushing in this direction, whether they
> admit it to themselves or not. It's only to be expected that a
> superhumanly intelligent mind is going to
>
> 1) have the capability to "rule the world."
>
> 2) exercise at least its capability to *strongly influence* the world
> [understanding that it may lack the inclination to actually *rule* the
> world]

Exactly. But in regard to that "exactly" above, #2 is kind of a grey
area. You don't have to be "President of Earth" if you control the
financial markets...

> To illustrate this point, let's consider a science-fictional "semi-hard
> takeoff" scenario. Suppose in 2040 we have a world with lots of advanced
> tech, including a superhuman mind living in a data warehouse in Peoria.
> Suppose some human loonies try to hijack a plane and fly it into the data
> warehouse. What's the AI gonna do? Ok, it's going to stop the plane
> from making impact. But after that, what? It has three choices:
>
> 1) take over the world, enforcing a benevolent dictatorship to prevent
> stupid humans from doing future stupid things to it and to each other
> 2) make itself super-secure and hide out, letting us humans maul each
> other as we wish, but making itself impervious to damage
> 3) try to nudge and influence the human world, to make it a better place
> (while making itself more secure at the same time)...
>
> Let's say it mulls things over and decides it has a responsibility to
> help humans as well as itself, so it chooses path 3). But it doesn't
> want to be too intrusive. It decides that releasing drugs into the
> water supply that would make us less violent would be too controlling
> and intrusive, too dictatorial. So it decides to release a global
> advertising campaign, calculated with superhuman intelligence to affect
> human attitudes in a certain way. It creates movies, video games, ad
> spots, teledildonic fantasy VR scenarios. It discovers it can control
> our minds highly effectively in this way, without resorting to direct
> brain control or to physical-violence-based control.

I'd call this ruling the world. *It* decides how everyone should be.
Just because it shows some restraint in how it does so, and in which
areas it chooses to influence, does not change the fact that it is, in
effect, ruling the world. The key is that *it* makes all of those
decisions, and they are all within its own power.

> It's not a question of trying to take over the world, it's a question of
> trying to build and bias future beings that are going to either take over
> the world or strongly influence it.

So, once again, I apologize for how my post came across. I believe what
I said to be factual, but it definitely came off the wrong way. The US
government rules my/our country, but that is, most of the time, a good
thing (compared to the alternatives). "Ruling the World" is neither
good nor bad, right nor wrong; how one rules the world determines such
things. Creating a Singularity is not directly ruling the world, but it
is doing so pretty much by proxy, since the AI would have goals and
beliefs in rough approximation to those the programmer(s) wanted it to have.

James Higgins
