Re: Collective Volition, next take

From: Chris Capel (pdf23ds@gmail.com)
Date: Sat Jul 23 2005 - 19:37:14 MDT


On 7/23/05, Russell Wallace <russell.wallace@gmail.com> wrote:
> On 7/23/05, Chris Capel <pdf23ds@gmail.com> wrote:
> > Can you explain this a bit more? There's a specific scenario I have in
> > mind. The AI is created to determine the CEV of humanity, and it
> > decides that the CEV is to make a large number of different societies
> > run on different principles, some of which are insulated from the
> > others. Do you admit that this scenario is possible?
>
> It's certainly conceivable. Unfortunately I don't think it's at all
> likely, because the historical trend has been very strongly towards
> more centralization of power and a tighter web of control [...].

The only reason this is possible is that the masses of people, however
well-intentioned on average, are easily manipulable. The main reason
for this is that they're apathetic about the concerns of
self-government. They're not involved citizens. They're appallingly
ill-informed. (This in turn is partly because the news media are
profit-driven, and the most profitable news is the most
sensationalist news, which lets political leaders run campaigns based
on fear- and prejudice-mongering. News media fail to keep the
electorate informed in a purely capitalist system.) Collective
volition has the opportunity to find out what kind of government people
wish they had, not just the kind they deserve. People know that
economic recessions are bad. They know that hunger and religious
oppression are bad. The majority of people in the world don't want to
live in a theocratic state (at least if the religion is the other
guy's). If collective volition is done right, a solution that would
sound deceptively appealing if presented by a manipulative leader to
uninformed masses would be as obviously wrong to most people
as it is now to informed people. That global warming is occurring
would be obvious to everyone. That GWB is the worst president EVAR (if
it's the case) would be obvious. It's likely that our extrapolated
selves would warmly embrace the autonomy and humanity of large classes
of people who are currently the target of huge amounts of prejudice
and/or mutual hatred. It's possible that the extrapolation process
would eliminate the phenomenon of science being censored or
manipulated on ideological grounds. And so on.

> (and this in
> the presence of increasing intelligence, education and communication,
> which suggests that extrapolating for more of these things will _not_
> solve the problem)

Apathy is immune to the mere availability of better information;
apathetic people don't care enough to seek it out. CV would not be
hampered by apathy. People are remarkably self-centered and
short-sighted. They live with blinders on. They learn about, and give
attention to, only the things that directly affect them. National and
global politics is simply theater to them. But if they were informed
of the relevant facts and asked their opinion, I think you'd see a
remarkable homogeneity of opinion, and I think that opinion would be
what most existing people would consider fairly enlightened. I think
the process of extrapolation is so fundamentally different from any
sort of education or media we have today that comparisons are pretty
much moot.

That's not to say that doing any of this right would be easy. While I
trust the basic goodness of people given the right knowledge, I think
the process of increasing their knowledge without programmer bias is
an incredibly tricky and dangerous one. Let me remind CV defenders of
my earlier post, where I outline various ways this could go wrong; I
think it deserves a reply or two.

Chris Capel

-- 
"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)

