RE: JOIN: Alden Streeter

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Aug 23 2002 - 12:53:12 MDT


> First of all, is this group still active? I just joined a few days ago
> and I haven't yet received a single message (perhaps my spam filter is too
> restrictive?).

The group is active, but it's experiencing an atypical lull.

Over the last few months there has been quite a lot of activity.

> I've just started to read some of the documents on AI on intelligence.org, and
> some of the archives for this group as well. I have a few questions and
> comments already, but since I haven't thoroughly researched all of the
> background material yet, if I am covering old ground just let me know.

I encourage you to check out my own AI project, Novamente, at www.realai.net
;)

> By trade, I am a computer programmer - mostly plain old business
> apps though, not AI (although I have always been interested in AI, I never
> really found a way to get into the field).

Jobs in AI are hard to come by these days. However, if you are interested
in contributing some unpaid "spare time" to an AI project, let me know at
ben@goertzel.org. There are certainly ways an experienced, non-AI-expert
programmer can contribute to our AI project (and perhaps to other AI
projects as well, though I can only speak for myself).

> For a
> while I was interested in the transhumanist movement, but lately I have
> grown to dislike their irrational emphasis on anthropocentrism, utopia,
> eudaemonia, and especially their pet political movement of libertarianism.

Well, I think the difference in attitude between the prototypical
transhumanist and the typical member of this list is fairly interesting.

For instance, Max More, whom I greatly respect, simply doesn't believe we're
going to see human-level AI for a long, long time. He believes that enhanced
humans will come first, and will exist for a long time before the terribly
tough problem of human-level AI is solved.

I am tempted to attribute this to a psychological weakness on his part -- in
other words, I'm tempted to posit that Max emotionally prefers the idea of
enhanced humans to that of superintelligent machines, and hence assesses the
odds of real AI lower than he would otherwise, given his general futurist
outlook.

And yet how can I know this? How can I know how rational MY OWN belief is
that real AI is essentially just around the corner (i.e., perhaps 5-10 years
away from a baby AI with its own general intelligence, making its own
meanings & learning about the world)?

Because of this kind of doubt, although I do hold my own beliefs strongly, I
try to be tolerant of Max and others who have the incomprehensible gall to
disagree with me ;->

-- Ben


