Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jun 08 2006 - 09:25:43 MDT


Hi,

Peter: Your message was crossposted to AGI and SL4, but I am replying
only to SL4 as the topic seemed to have little directly to do with
AGI, in the form it's evolved into...

> Consider that the theory of evolution is not part of the world's
> consensus.

Indeed, this is pathetic.... But, something like 80% of the world's
population believes in reincarnation, so this is hardly surprising...

Building consensus among the world's population about anything of
significance is not very realistic.

However, building consensus among the population of individuals who
share a common a-superstitious, rationalist, Singularitarian
perspective is not so obviously a vain goal...

>Consider that the Bayes' Theorem is not part of the
> scientific consensus. It isn't even part of this list's consensus!

Hmmm... IMO, the latter statement is pretty silly...

I have never met a scientist who did not accept Bayes' Theorem as a
piece of mathematics.

The argument between Bayesian and other approaches to uncertain
inference and statistical modeling is not about whether Bayes' Theorem
is true or not, but rather about which heuristic assumptions one
should make when applying this and other probabilistic mathematics to
the real world. Classical statistics makes one set of heuristic
assumptions, conventional Bayesian statistics makes another ...
alternate approaches like Walley's imprecise probabilities or NARS
make yet other assumptions.... I happen to find some of these
assumption-sets preferable to others, but that's another story...
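As a reminder of how uncontroversial the theorem itself is, here is a
minimal sketch in Python -- the numbers are purely illustrative, not
drawn from anything in this thread:

```python
# Bayes' Theorem as plain arithmetic: P(H|E) = P(E|H) * P(H) / P(E).

def bayes_posterior(prior, likelihood, evidence):
    """Posterior P(H|E) given prior P(H), likelihood P(E|H), and P(E)."""
    return likelihood * prior / evidence

# Illustrative (made-up) numbers: a test with 99% sensitivity and a
# 5% false-positive rate, for a condition with a 1% base rate.
prior = 0.01
likelihood = 0.99
# P(E) by the law of total probability over H and not-H:
evidence = likelihood * prior + 0.05 * (1 - prior)

posterior = bayes_posterior(prior, likelihood, evidence)
print(round(posterior, 3))  # the posterior is about 0.167, i.e. 1/6
```

The disputes Ben describes begin only after this step -- in how the
prior and likelihood are chosen and interpreted, not in the identity
itself.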

-- Ben

> These are ancient ideas - way older than us. The consensus lags
> *centuries* behind people who think.
>
> > It IS my contention that there is a relatively simple,
> > inductively-robust (in a mathematical proof sense) formulation of
> > friendliness that will guarantee that there won't be effects that *I*
> > consider undesirable, horrible, or immoral. It will, of course/however,
> > produce a number of effects that others will decry as undesirable, horrible,
> > or immoral -- like allowing abortion and assisted suicide in a reasonable
> > number of cases, NOT allowing the killing of infidels, allowing almost any
> > personal modifications (with truly informed consent) that are non-harmful
> > to others, NOT allowing the imposition of personal modifications whether
> > they be physical, mental, or spiritual, etc.
>
> How relatively simple? Evolution doesn't do simple. I doubt that any
> human goal system has a simple mathematical formalization.
>



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT