RE: Regulating AI Development

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Mar 01 2002 - 14:00:19 MST


Computers are too cheap and widespread, and so is knowledge of programming
and cognitive science.

AI can't be successfully regulated except via imposition of a draconian,
Luddite police state.

And I doubt anyone on this list would seriously advocate such a thing.

-- Ben G

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of doug.bailey@ey.com
> Sent: Friday, March 01, 2002 1:31 PM
> To: sl4@sysopmind.com
> Subject: Regulating AI Development
>
>
>
> I searched through the archives and could not locate prior discussions on
> whether, and if so how, we should regulate AI development through some
> formal function or body.  I confess I have not thought very deeply on
> this subject, but after the last few days of postings it seems it would
> be productive to discuss whether and how AI development should be
> regulated.  Substantial discourse is ongoing about what approach might be
> successful in achieving certain thresholds of AI.  However, I am fearful
> that once the core method for developing AI of a respectable level of
> sophistication ("Sophisticated AI") is achieved, we will suddenly face
> the enormous problem of how to advance this technology beyond that point
> without imperiling the human race.
>
> There are many schools of thought as to where we go from Sophisticated
> AI.  I believe Eliezer's general approach (correct me if I'm wrong) is
> that we need to achieve Friendly AI as soon as possible, before potential
> "evils" such as self-replicating nanotechnology and/or "unFriendly" AI
> hit the scene.  Kurzweil's general approach seems to be to enhance
> ourselves and thus have "humanity", with all the baggage that comes with
> it, built into any AI.  There are other approaches out there, some we are
> aware of and many others we might not be.
>
> Should we have a regulatory framework in place to determine, as a
> community and as a race, how we proceed with developing the seed AI we
> use for a takeoff event?  Others have written about this for other
> profound technologies
> (e.g., http://www.foresight.org/NanoRev/Forrest1989.html ).  I can't
> think of a more profound product of technological development than a
> Power.  Thus, if you agree with the development-by-committee approach to
> regulating those technologies, I would presume you would agree that
> regulating AI development is appropriate.
>
> How should AI be developed?  I'm trying to resist the "mad scientist
> working in his basement" scenario, but it or some variation comes to mind
> when considering how to regulate AI development.  Regulating weaponized
> nuclear capacity is fairly easy due to the huge financial resources and
> easily identifiable physical-plant requirements needed to develop such
> capacity.  Regulating nanotechnology is more troublesome, especially once
> the general blueprint for self-replicating nanotechnology is understood.
> AI development, however, would seem to be the most difficult of all to
> regulate.  I would offer that the way to regulate AI development is to
> (1) jointly determine what the correct approach should be and then (2)
> invest so much financial and intellectual momentum into that approach
> that it is actualized before any other.  Regulation by domination, or
> "beating them to the punch," seems the best chance we have for ensuring a
> human-friendly post-Singularity environment.  This requires faith in the
> quorum, though, or in whatever mechanism is created to make decisions.
> What if the U.S. federal government created a panel populated with the
> _perceived_ gods of the AI pantheon, i.e., Minsky, Lenat, Kurzweil,
> Hopfield, Penrose, Moravec, etc.?  Perhaps these guys get it all wrong.
> How might we otherwise form such a panel?
>
> Maybe I'm just arriving at a realization others have already reached
> through a different avenue (or perhaps the same one), but it seems that
> identifying the preferred AI development path and then putting enough
> momentum behind that approach to get there before anything else hits the
> streets is the best way to maximize our chances of survival in a
> post-Singularity environment.
>


