From: Michael Vassar (email@example.com)
Date: Wed Aug 23 2006 - 08:59:16 MDT
>That [limit in the supply of high IQ teachers] is certainly true, but the
>lack of teachers possessing high
>intelligence and other desirable qualities is not an inevitable fact, but
>rather a consequence of the organization of the educational system.
>Just as is, for instance, the lack of university AI professors who
>take AGI seriously ;-) ... these things all tie in together. Given
>the deep cultural opposition to radical innovation and invention and
>deep individual insight, it is remarkable how much individuality and
>enterprise and creation actually comes about.... It is remarkable
>that the exponents on Kurzweil's graphs are still as big as they are.
>In large part I suppose this is because our culture values wealth, and
>even though it dislikes radical innovation, it rewards wealth which is
>sometimes a consequence of radical innovation...
Fascinating. In two paragraphs you have included 7 assumptions or one-step
inferences, common in Transhumanist circles, that I vehemently disagree with:
1) we could/can allocate more ability to some professions than we do via
market or governmental mechanisms
2) non-experts with normal human motivational systems (with distinctions
between causing and allowing to happen, and diminishing returns to the
motivational force of a possibility as a function of its scope) can have
a rational basis for taking AGI seriously (MNT is different in this regard)
3) more common belief in Trans issues, and particularly in GAI, would be
4) our cultural opposition to innovation is unusually great (compared to
where?). I do believe that a few cultures (mostly between 1860 and 1950)
may be, or have been, friendlier to innovation than ours, but that isn't the norm.
5) I think that it is remarkable how little innovation comes out of our
society, when one appreciates how many people there are. This is an
important point. I think that most extremely innovative people project
their own abilities and temperament onto a larger fraction of the global
population than they should, and as a result they expect more of the
things that they could do, if they had the time and energy, to be done by
someone else; thus they expect more innovation to take place than actually does.
This is important because an overestimation of how many people like them are
out there leads to an erroneous feeling of powerlessness in the face of
historical forces, the fatal flaw of insufficient hubris. I think that Nick
Bostrom, among others, could make extremely large contributions to
singularity safety if he could only be convinced to project less of himself
onto his model of other people distributed throughout society.
6) I think that Kurzweil's graphs fail dramatically in "predicting" the
past. TSIN overtly claims that the last 20 years of change exceed that of
the previous 86 years of the 20th century, which I consider laughable. They
are far less substantial than the previous 86 years in essentially every
metric I can think of. Population growth, life-expectancy growth, even
linear economic growth (even though money's utility is closer to
logarithmic). I think that those graphs are primarily useful in showing the
inaccuracy of naive assumptions about the nature of historical technological
change. They do this by working out the implications of those assumptions
overtly, as is rarely done, with no data-mining "corrections" (often
justified in terms of low-hanging fruit) for the sake of matching theory to
history. Basically, this makes TSIN the equivalent of a hypothetical
cosmology book that took general relativity, ignored the cosmological
constant (in this model equivalent to the "low hanging fruit" hypothesis)
because there was no reason to believe in it, and pointed out that general
relativity, in the absence of other assumptions, strongly suggests a
7) in the last 50 years or so I am almost at a loss to give examples of
great wealth following radical innovation. Great wealth almost always comes
from small incremental innovations in the modern world. (unlike the world
of the late 19th century?)
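The comparison in point 6 can be made concrete on one of the metrics named there. A minimal sketch, using rough global life-expectancy figures that are my own round-number assumptions for illustration (roughly 31 years in 1900, 62 in 1986, 68 in 2006), not data from this post or from TSIN:

```python
# Rough global life-expectancy figures (assumed round numbers,
# for illustration only; not sourced from the original post).
LIFE_EXPECTANCY = {1900: 31, 1986: 62, 2006: 68}

def gain(start: int, end: int) -> int:
    """Absolute gain in global life expectancy between two years."""
    return LIFE_EXPECTANCY[end] - LIFE_EXPECTANCY[start]

earlier = gain(1900, 1986)  # the "previous 86 years"
recent = gain(1986, 2006)   # the "last 20 years"

print(f"1900-1986: +{earlier} years total, {earlier / 86:.2f} per year")
print(f"1986-2006: +{recent} years total, {recent / 20:.2f} per year")
```

On these assumed figures the earlier period shows the larger change both in total (+31 vs. +6 years) and per year (about 0.36 vs. 0.30), which is the shape of comparison the paragraph appeals to.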
>Forget about the test... if we
>-- took 100 technically, scientifically and conceptually gifted, and
>sane people from the membership of lists like SL4, extropy, AGI,
>wta-talk and the futurist community generally
Are there 100 such people? Maybe a few dozen. Then I'd round out a community of
100 or 200 with many mathematicians, a couple of doctors, and some people with
good practical technical skills for making the community work. If we wanted
to work on IA or MNT, we'd obviously also need LOTS of people with chemistry,
MEMS, cybernetics, biology, and other scientific backgrounds.
>-- put them all (er, us all ;-) in some isolated facility [Los Alamos is
>nice, but I'll advocate a Caribbean island]
Islands have major advantages, but there are other advantages to building a
community in the US or Canada. They are mostly empty.
>-- added in our families as well, as desired on a case by case
>basis-- added in a crew of system administrators (not to imply that
>sys-admins aren't highly intelligent, many would be included in the
>above groups), cooks, maids, and other useful support staff, and some
>competent managers to rule them all
>-- added a couple hundred million dollars of standard and experimental
>-- added a big annual budget to fund research in university labs, so
>as to bring other minds into the picture
>-- let everyone just work on creating a positive Singularity, without
>the need to earn $$ in other ways
I'm working on it, but it will take at least 4 more years.
>How much faster would this bring the Singularity about, and how much
>would it increase the odds of the Singularity being positive?
Quite a bit, I imagine.
>Gee, this is a crazy fantasy -- except this, or some approximation of
>it, would easily be achievable if someone like Bill Gates or Larry
>Ellison or George W Bush decided it was important
Agreed. As stated, I've got people on it, though to reiterate, Nick Bostrom is
in a relatively good position to make it happen sooner. Any 3rd world
leader could also do it, or really any of the wealthiest few hundred people.
I doubt that the fourth or fifth hundred million add that much marginal
>The point is: the problem has to do with the allocation of people and
>funds, not with isolating who is good or not.
Disagreed. Both are essential. Admittedly, tests probably have no advantage over
the judgment of existing capable people, but they are fast and can cover a
large population quickly. Still, I'm pretty sure that a group of 100
excellent people can work faster and more effectively than a group of 500
good people which includes those 100 excellent people.