RE: Why I'm not more involved with academia

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Oct 23 2004 - 09:18:15 MDT


Eliezer,

I don't really think this is an SL4-worthy topic [as the focus is your own
career and life rather than the Singularity and related themes], but I'll
make one remark.

In fact, knowing you moderately well as I do, I think that getting a PhD and
becoming an academic would be a reasonably good choice for you. Getting the
PhD would be fairly easy for you, and once you had it, you could get a job
as a professor, guaranteeing you a lifetime income, with fairly minimal
duties beyond doing and publishing your research. And, perhaps more
importantly, your having a PhD would make it significantly easier for SIAI
to raise money for your research.

Personally, I benefited a lot from the research time I obtained via being
an academic for eight years; and in my business pursuits now, I'm taken
more seriously than I would be otherwise because I have a PhD.

There's something to be said for boldly forging your own path in life, but
there's also some value to playing the games of the society you're embedded
within.

-- Ben G

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Eliezer
> Yudkowsky
> Sent: Saturday, October 23, 2004 9:02 AM
> To: sl4@sl4.org
> Subject: Why I'm not more involved with academia
>
>
> Jeff Medina wrote:
> > Robin Lee Powell wrote: "IIRC, Eliezer is not allowed to put Ph.D.
> > after his name. That pretty much rules out this avenue of approach."
> >
> > That absolutely *does not* rule out this avenue of approach.
>
> Correct.
>
> > Many
> > respected journals and conferences in the relevant areas are
> > blind-reviewed (such that the academic credentials of the author of
> > the paper are made irrelevant, because the author's identity & other
> > info are kept secret), and even among the many which are not, quality
> > submissions are never rejected or looked down upon simply because the
> > author lacks a Ph.D.
>
> Papers which are *not* blind-reviewed show a *decided* bias toward
> known, prestigious researchers and institutions. That is *why* some
> journals are blind-reviewed. Most aren't, and it doesn't help that
> other journals are blind-reviewed if the one you want to target isn't.
> I've read studies assessing the bias, but though I googled on
> "effectiveness of peer review" I failed to track them down. I did find
> other interesting material including, to pick an arbitrary example, a
> study by Rothwell and Martyn (2000) showing that peer review in two
> neuroscience journals was not reproducible; that is, agreement between
> reviewers was not significantly greater than chance. My recollection
> is that this result is widespread in studies of this kind. I include
> this tidbit by way of saying that experimental study of peer review has
> produced surprising and alarming results, so be sure to check out
> peer-reviewed studies of peer review before praising its effectiveness.
>
> (Rothwell PM, Martyn CN. Reproducibility of peer review in clinical
> neuroscience. Is agreement between reviewers any greater than would be
> expected by chance alone? Brain 2000;123:1964-9.)
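>
> A side note on what "agreement greater than chance" means, since that
> phrase is doing the real work above. What follows is my own
> illustrative sketch, not Rothwell and Martyn's actual method: the
> standard chance-corrected agreement statistic is Cohen's kappa, where
> a kappa near zero means two reviewers agree no more often than if each
> voted independently at their own base rates. A minimal Python sketch:
>
>     def cohens_kappa(ratings_a, ratings_b):
>         """Chance-corrected agreement between two raters; kappa
>         near zero means agreement no better than chance."""
>         n = len(ratings_a)
>         p_obs = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
>         # Chance agreement: both raters pick the same label while
>         # voting independently at their observed base rates.
>         labels = set(ratings_a) | set(ratings_b)
>         p_chance = sum(ratings_a.count(l) * ratings_b.count(l)
>                        for l in labels) / (n * n)
>         return (p_obs - p_chance) / (1 - p_chance)
>
>     # E.g., two reviewers' accept/reject votes on ten submissions:
>     votes_1 = list("ARAARARRAR")
>     votes_2 = list("AARARRRAAR")
>     print(cohens_kappa(votes_1, votes_2))  # ~0.2: barely above chance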
>
> This has nothing to do with the reason I don't write additional academic
> papers. I just thought I'd mention it.
>
> **
>
> First, you'll note that I say "write additional papers". I allocated
> one month in early 2002 to write "Levels of Organization in General
> Intelligence" (LOGI) in the best academic style I could manage. It
> actually took four months. Since then the draft has been online at
> http://intelligence.org/LOGI/. The paper will finally appear in
> "Artificial General Intelligence", eds. Goertzel and Pennachin, to be
> published by Springer-Verlag in 2005. (The three-year delay was for
> the entire book, not my own paper; *I* turned in my homework on time.)
>
> The fact that none of the people plaguing me to write papers has even
> *noticed* "Levels of Organization in General Intelligence" - they speak
> instead as if I haven't written *any* papers - is indeed related to the
> reason I am not more involved with academia.
>
> I've come a long way over the eight years since 1996. People said to
> me: Write up your ideas about AI in a web page. In 1998 I did. Then
> new people came along and they said: You'll never get anywhere with
> this, no one will be interested enough to pay you to do this. In 2000,
> thanks to Brian Atkins, the Singularity Institute started up. Possibly
> that impressed a few people who never thought I'd get that far. Then
> new people came along, to whom Eliezer had *always been* a part of the
> Singularity Institute, so it wasn't impressive, and they said: No one
> will ever pay attention to you unless you do as we say and write some
> kind of paper targeted at academia and get it published. In 2002 I
> did. I didn't expect anyone to notice, and no one did, but the effort
> of writing the LOGI paper served to help me unify my ideas and force me
> to read relevant literature, and therefore I account it a partial
> success. And lo, the people said: What you really need, Eliezer, is to
> write some kind of paper targeted at academia.
>
> Someone always thinks there's just one more thing you need to do.
> *That* never changes, no matter how many times you fulfill the request.
> They just find something else for you to do. Often it's something
> you've already done. I wasn't puzzled by this. I expected it. Thus
> the particular things that I did were selected strictly on the basis of
> their needing doing, rather than to one-up naysayers.
>
> Case in point: Dr. Eric Drexler and _Nanosystems_.
>
> Before: Eric Drexler has no PhD and hasn't written up his ideas in
> great gory technical detail. People tell him: Eric, no one will pay
> attention to you if you don't have a PhD. People tell him: Eric, you
> need to write up your technical ideas in great gory detail in a way
> that a wide audience can understand.
>
> Eric spends six years writing _Nanosystems_ and making it presentable
> to any technical reader without demanding a specific background in
> chemistry, physics, or computer science. Eric defends _Nanosystems_ as
> his thesis and receives the world's first PhD in nanotechnology from
> MIT.
>
> Afterward: None of the naysayers read _Nanosystems_ or even mention it
> exists. No one pays any more attention to Drexler than before. They
> just shift their criterion to something else Eric hasn't done yet.
> Often they indignantly proclaim that Drexler hasn't given any technical
> presentation of his ideas - complete indifference to the work already
> accomplished. The same people who liked Drexler before still like him.
> The kind of people who objected to Drexler before find something
> different to which to object.
>
> I suspect those who objected to nanotechnology did not say: "Hm... I
> have no idea whether I like this or not... but wait! Drexler doesn't
> have a PhD! Okay, now I've decided that nanotechnology is impossible
> and Drexler is scaring our children." The causal sequence of events is
> more like, "Eek! Too weird! Hm, it seems that I disbelieve in
> nanotechnology. I wonder why I disbelieve in nanotechnology?
> (Searches for reason.) It must be because Drexler doesn't have a PhD,
> hey, yeah, that's it." After Drexler got a PhD, exactly the same
> process took place, only the rationalization search terminated
> elsewhere.
>
> Drexler has a personality far better suited to academia than mine will
> ever be. He's humble. He did everything by the book, the way he was
> supposed to. Academia... to put it bluntly, they spit in his face.
> And Drexler had a vastly easier problem to explain, in a field with all
> the underlying physical equations established and agreed upon. If
> Drexler didn't make it in academia, there's no chance in hell that I
> could. Friendly AI would be two orders of magnitude harder to sell to
> academia than molecular nanotechnology. I pointed out that last part
> to Drexler, by the way; he agreed. And come to think of it, while he
> didn't say a word to me against academia or the academic system, Dr.
> Eric Drexler is *not* on the list of people whose advice to me included
> getting a PhD.
>
> I don't want to sound like I'm criticizing Drexler's intelligence.
> Drexler did not have Drexler's case to warn him. Drexler's choices
> were different; he may have had nothing better to try than getting a
> PhD and spending six years writing a technical book.
>
> But people seem to be absurdly optimistic about how easy it is for the
> actors on stage to carry out the helpful advice shouted from the
> audience. Then again, as plenty of studies show, people are also
> absurdly optimistic about the course of their own lives - except for
> the severely depressed, who are sometimes properly calibrated with
> respect to outcomes, a phenomenon known as "depressive realism". (I am
> not making this up.) Part of the reason why people are absurdly
> optimistic is that they think: I'll just do X, and then everything will
> be all right! Not: I'll try to do X, it will take four times as long
> as I expect, I'll probably fail, and even if I succeed, only one in ten
> successes of this kind has as great an impact as the one I pleasantly
> imagined.
>
> I remember meeting Chris Phoenix of CRN at a Foresight Gathering, where
> Chris spoke optimistically of the day when molecular manufacturing is
> proved possible and all the naysayers have to admit it... and I said:
> "Yes, Chris, we can look forward to the fine day when the naysayers are
> presented with a working example of mechanosynthesis, and they are
> finally forced to stand up and say, in unison: 'Oh, but that isn't
> *really* nanotechnology.'"
>
> Were I to get a Ph.D., nothing would change. I'd just hear: oh, but
> you aren't an eminent scientist in the field, go write more papers.
>
> If I were the sort of person who chased all over the map - starting
> companies, getting PhDs, whatever - then I wouldn't be here in the
> first place. My life would have happened to me while I was making
> other plans. Antoine de Saint-Exupéry: "Perfection is achieved, not
> when there is nothing left to add, but when there is nothing left to
> take away." People overestimate conjunctive probabilities and
> underestimate disjunctive probabilities; they overestimate the chance
> of many things going right in sequence, and underestimate the
> probability of a single thing going wrong. The way to success is to
> remove everything from the plan that doesn't absolutely *have* to be
> there. The way to have any chance at all of finishing on time is to do
> nothing that is not absolutely necessary.
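>
> To put toy numbers on the conjunction/disjunction point (my own
> illustration, not drawn from any study): a plan of ten steps, each 95%
> likely to go right, succeeds as a whole only about 60% of the time,
> and the chance that at least one step goes wrong is about 40%. The
> arithmetic, in Python:
>
>     # Toy numbers for illustration: ten sequential steps, 95% each.
>     p_step, n_steps = 0.95, 10
>     p_all_right = p_step ** n_steps   # conjunction: ~0.60
>     p_one_wrong = 1 - p_all_right     # disjunction: ~0.40
>     print(p_all_right, p_one_wrong)
>
> Intuition tends to rate the first number too high and the second too
> low.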
>
> Is being a part of academia absolutely necessary to success? I don't
> think so. No one's told me to get a PhD in something because in-depth
> technical mastery of that subject is absolutely necessary to the
> creation of AI, and yet that is *supposed* to be what PhDs are about.
> No one's said a word about learning or knowledge. It's all about the
> impressiveness of some letters after your name. I know I'm far from
> the first person to point out the massive failure of the educational
> system, but it remains just as huge a problem and just as horribly
> awry. The failure doesn't go away just because someone has pointed it
> out before.
>
> To tackle AI I've had to learn, at one time or another, evolutionary
> psychology, evolutionary biology, population genetics, game theory,
> information theory, Bayesian probability theory, mathematical logic,
> functional neuroanatomy, computational neuroscience, anthropology,
> computing in single neurons, cognitive psychology, the cognitive
> psychology of categories, heuristics and biases, decision theory,
> visual neurology, linguistics, linear algebra, physics, category
> theory, and probably a dozen other fields I haven't thought of offhand.
> Sometimes, as with evolutionary psychology, I know the field in enough
> depth to write papers in it. Other times I know only the absolute
> barest, embarrassingly simple basics, as with category theory, which I
> picked up less than a month ago because I needed to read other papers
> written in the language of category theory. But the point is that in
> academia, where crossbreeding two or three fields is considered daring
> and interdisciplinary, and where people have to achieve supreme depth
> in a single field in order to publish in its journals, that kind of
> broad background is pretty rare.
>
> I'm a competent computer programmer with strong C++, Java, and Python,
> and I can read a dozen other programming languages.
>
> I accumulated all that (except category theory) before I was twenty-five
> years old, which is still young enough to have revolutionary ideas.
>
> That's another thing academia doesn't do very well. By the time people
> finish a Ph.D. in *one* field, they might be thirty years old, past
> their annus mirabilis years. To do AI you need a dozen backgrounds,
> and you need them when you're young. Small wonder academia hasn't had
> much luck on AI. Academia places an enormous mountain of unnecessary
> inconveniences and little drains of time in the way of learning and
> getting the job done. Do your homework, teach your classes, publish or
> perish, compose grant proposals, write project reviews, suck up to the
> faculty... I'm not saying it's all useless. Someone has to teach
> classes. But it is not absolutely necessary to solving the problem of
> Friendly AI.
>
> Nearly all academics are untrained in the way of rationality. Not
> surprising; few academics are fifth-dan black belts, and there are a
> lot more fifth-dan black belts than fifth-dan rationalists. But if I
> were in academia I would be subject to the authority of those who were
> not Bayesian Masters. In the art of rationality, one seeks to attain
> the perception that most of the things that appear to be reasons and
> arguments are not Bayesian. Eliminate the distractions, silence the
> roar of cognitive noise, and you can finally see the small plain trails
> of genuine evidence.
>
> One of my academic friends once asked me to look at a paper on decision
> theory; the paper described the conventional theory, presented a
> problem, and then proposed several different individual patches to the
> conventional theory and analyzed the patches individually, concluding
> that none of the solutions were satisfactory. I replied by arguing
> that the conventional theory actually contained *two* independent
> foundational errors, which needed to be simultaneously refactored to
> solve the problem, and that in fact he needed to look at this whole
> problem a different way. And the one said: But I have to take the
> traditional theory as a point of departure and then present changes to
> it, because that's what the reviewers will expect. And I said: Okay,
> but for myself I don't have to give a damn about reviewers, and so I
> plan to go on using the solution with two simultaneous corrections.
> That bias against two simultaneous changes, owing to the need to take
> the conventional theory as a point of departure, was justified as
> necessary by pointing to social forces instead of Bayesian forces.
> That makes it a distraction.
>
> I refuse to accept that entire class of distractions. As an
> independent scholar, I never have to give any reason for saying or
> thinking something that points to social forces instead of the facts of
> the matter. I have the freedom to do the right thing, without the
> faintest bias toward the academically acceptable thing except insofar
> as the academically acceptable thing happens to be right. I have the
> luxury of giving no more credence to an idea than the weight of
> Bayesian evidence calls for, even if the idea has become fixed in
> academia through any of the non-Bayesian processes that prevail there.
>
> Now, most of the time, I don't second-guess academia - certainly not in
> established fields with great weights of evidence, nor after learning
> just the basics of something. Like I said, it's dangerous to be half a
> rationalist; if you learn the skill of challenging conventional ideas,
> you'd damn well better learn the skill of accepting conventional ideas
> too, or end up worse off than before. But sometimes, on the fringes -
> AI, for example - people just make stuff up that sounds cool, and it
> becomes fixed because everyone repeats it. Look at Freudian analysis:
> not one scrap of experimental evidence. It was a major academic field,
> with peer-reviewed journals and everything, but not the faintest hint
> of science. If that's the academic standard, then academia's standards
> are too damn low. Or sometimes the people in one field don't know
> about the results in another field, and they say things that are silly
> and get past the reviewers, because the people who could catch the
> mistake work in a different building of the college campus. That
> likewise happens, *a lot*, in AI.
>
> It seems to me that the secret of effectiveness is refusing to be
> distracted. At one point in my life I did permit myself to be
> distracted... by writing freelance programs, by planning to start a
> company... eventually I noticed that the only projects in my life that
> had ever done the slightest bit of good were the ones that were
> *directly* on track to the Singularity. *Not* the distraction projects
> that I thought would provide resources or whatever, but the projects
> that were directly part of the critical path to AI. In 1998 I took one
> month out of my all-important plots to accumulate Singularity resources
> to write "Coding a Transhuman AI", and in the end CaTAI was the only
> thing I did that year that actually mattered. And that was very much
> the story of my life, until the day I finally snapped and decided to
> concentrate solely on the Singularity. Today I refuse to be
> distracted. Not by academia, not by technology companies, not by
> anything. All I ask of myself is that I do this one thing, solve this
> one challenge of Friendly AI.
>
> > 2. If lacking a PhD really becomes a problem... well, why not get
> > one? PhD students get living stipends and support for their research.
> > So even being a PhD student may well put Eliezer in a better position
> > to pursue his research than the current scenario allows. Further, if
> > he (or anyone else involved) doesn't like the idea of being forced to
> > take 2 years of coursework for the PhD, he could always pursue the PhD
> > outside of the U.S., where PhDs are pure research degrees with no
> > course requirements.
> >
> > 3. There are a couple of schools (e.g., The University of Technology,
> > Sydney, in Australasia) that award PhDs by prior publication. After
> > applying, you put together a portfolio of your research, and write an
> > overarching paper that illustrates your contribution to the field of
> > study, and if deemed PhD-level, you are granted a PhD. I've come
> > across at least a few professors in the UK and elsewhere who have
> > received their doctorates in this manner. (I've also seen quite a few
> > professors with just Master's, but this falls back to point 2 above).
> >
> > Of course, most people aren't aware of some or all of the above
>
> I wasn't aware. Thanks. If I don't need to spend eight years, that does
> shift the cost/benefit ratio. But not far enough, I'm afraid.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
>


