RE: Miller's The Mating Mind

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Sep 29 2002 - 21:41:41 MDT


Eliezer wrote:
> The potential donors are interesting; the potential capitalists
> are not, due to their mundane methods of calculating ROI. Their
> idea of high-tech stuff is sl<4.

Actually, hi-tech investors are not always as naive as you seem to think.

Admittedly, some investors are ABSOLUTE IDIOTS.

However, others are extremely broad-minded and future-savvy.

I know quite a few wealthy people who

1) understand SL4 ideas reasonably well

2) do not strongly believe that donating money to pure research would make a
more significant impact on the Singularity than investing money in
commercial technology development.

Consistent with this, they are investing their MONEY with a view toward
MAKING MORE MONEY while PROMOTING HI-TECH DEVELOPMENT generally. They are
being businesspeople, but also using their money to push technology forward.

Their view is different from mine, but it is not an idiotic one. After all,
the bulk of progress toward the Singularity is clearly being made by
commercial efforts. The faster and faster computers we see each year are
not produced using money from donors motivated by rational altruism.

A lot of these investors believe that the best way to get to SL4 technology
is to first fund SL2 technology, then SL3, then SL4. This kind of
incremental approach is the way things are normally done in business.

> To Ben the Singularity appears to be icing on the cake (what the
> cake itself is will remain a mystery, though I suspect it is simply
> financial gain).
> The thing is Ben doesn't grok seed AI, which is essential to
> getting anything transhuman within a timeframe to possibly beat
> nanotech.

I wonder where you cook up these assertions about me, Eliezer. Surely not
using Bayes' Theorem? If so, I think you need to adjust your prior.... I'd
suggest the Solomonoff-Levin universal prior distribution....

No, it is not true that I'm more interested in financial gain than in the
Singularity. I am a philosopher-scientist-engineer above all -- and
transcending death and obsoleting reality as we know it are vastly more
important to me than making money. If you look at what I've achieved
intellectually in my life, and then look at my pathetic bank balance, I
think you'll find this statement well validated ;->

I have made the choice to pursue commercial software development based on
"narrow AI" as a route to funding AGI research, and as a way of feeding
myself and my family. Having made this choice, I take my narrow AI work
(e.g. in bioinformatics) very seriously. But that doesn't change the fact
that my primary life goal is to participate in obsoleting death and
transforming mind & reality through AGI (and potentially other scientific
research).

Unlike you, my Novamente colleagues and I do not have a patron to take care
of us. We do not have the option to spend 100% of our time working directly
on AGI. We're pleased to be in the position of spending some of our time on
AGI, and some of our time on AGI-related narrow-AI work. It sure beats
flipping burgers!!

> His "sort of" interest has a lot to do with this
> incomprehension.

It is really very silly of you to repeatedly insist that I am only "sort of"
interested in the stuff I've been writing about and working on 60+ hours per
week for the last 15 years or so.

> He doesn't get Friendship Programming esp. structure, which is
> also essential.

By which you mean: I have a different theory of how to create ethically
positive AGI's than you do.

I really believe I *understand* what you're saying about Friendship
Programming. I just don't agree with you. In my view, you have never
adequately addressed the issue of "concept and goal drift through repeated
self-modifications."

It is not as though you've proved a mathematical theorem about Friendly AI,
or made a solid empirical discovery about Friendly AI. You have not
formulated anything regarding Friendly AI about which it can be said "any
reasonable, educated human should be expected to accept this upon reading
it." You have simply formulated some interesting, plausible conceptual
arguments. They are thought-provoking and well-thought-out, and I think
you're an excellent theorist. But they're just plausible conceptual
arguments -- which you find more plausible than I do ... not surprisingly,
since you're the one who formulated them....

> Of course it's plausible that one might get CFAI and Seed AI and
> why they matter and still have a cursory interest, owing to apathy
> or antipathy deeper than I can address here.

"Seed AI" is a very broadly accepted concept. That an AI when intelligent
enough will be able to modify its own code and architecture, thus
exponentially making itself more and more intelligent -- not many AI
researchers doubt this, actually. The big open questions here have to do
with the rate of the exponential increase, and the difficulty of getting to
the stage where intelligent goal-directed self-modification can begin. You
have a higher estimate of the rate of exponential increase than almost
anyone else. However, I have a higher estimate of this rate than almost
anyone else except you ;->

CFAI, to me, is a much more speculative thing than seed AI. I think we'll
have a much better idea about how to guide the development of a
superintelligent AI through the Singularity, after we have some
near-human-level AGI systems to study. I think that time spent on such
issues now is largely wasted. I think it's more useful right now to
work on creating near-human-level AGI's that we can test and learn from,
than to work on creating plausible speculative theories of how we'll make
our human-level AGI's ethically positive.

> As for me, I'm only
> interested in the matter of saving the solar system from total
> sterility.

Your dedication is admirable, and your creative thinking on the future of
AI, technology, mind and reality is often excellent.

However, your attitude toward me and many other human beings really gets on
my nerves sometimes.

-- Ben


