From: Samantha Atkins (firstname.lastname@example.org)
Date: Wed Jun 16 2004 - 16:43:37 MDT
How about if you give each human being a self-improvement kit that
extracts and extrapolates their volition and uploads this information
to the AI? Such a kit might be a very interesting application/testing
ground for parts of the technology, and very well received as a
self-development tool by many people. I would certainly try one out.
Just a thought.
On Jun 15, 2004, at 4:13 AM, Philip Sutton wrote:
> Hi Eliezer,
> > > PS: Do you mean this literally or are you assuming that the FAI with
> > > a collective volition function will externally observe (or converse
> > > with) 6+ billion people individually and inductively model them, or
> > > would you expect the collective volition function to just gather what
> > > it can from all sorts of primary and secondary sources (like we do),
> > > but just on a more massive scale?
> > EY: Yes.
> Assuming that your one-word reply was not intended to be ambiguous,
> then apparently 'all of the above' was what you had in mind?
> That means that to create the collective volition needed to ensure that a
> superAI is friendly, the AI will:
> - directly inspect what goes on in the brains of 6 billion+ people
> - directly observe, or perhaps converse with, 6 billion+ people
> Could a pre-singularity AI do this? That is, it seems like the
> technology required goes way beyond anything humans are likely to
> develop in the next few decades, and the scale of the task is clearly
> enormous.
> Also, is it likely that more than a small percentage of the 6 billion+
> people would agree to have an AI trawling around in their heads? If
> only a small (most likely non-representative) sample of humanity
> participates in the mind reaming, will the data collected be adequate?
> Are you proposing to do this mind reaming against the will of those
> who object? Or are you proposing that the AI do the reaming without
> asking for permission?
> If the tasks outlined above can only be accomplished by a
> post-singularity AI, how will you ensure friendliness in any advanced
> but pre-singularity AIs?
> Cheers, Philip
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT