From: Marcos Guillen (firstname.lastname@example.org)
Date: Wed Jul 07 2004 - 13:49:31 MDT
The Second Cognitive Systems Workshop, hosted by Sandia National
Laboratories and the University of New Mexico, has just taken place
(http://www.sandia.gov/cog.systems/cognitive_workshop/index.htm). An array
of government program offices, universities, and private companies, including
us, presented the results of different approaches to the development of new
cognitive systems. Agent, neural network, AI bot, and artificial brain
projects, most of them with an emphasis on defense applications, were
discussed and even showcased to an audience of pre-screened attendees.
Apparently Artificial Development is no longer alone in attempting a full-scale
cortex emulation, as a few other groups are pushing for similar projects under
DARPA-funded programs. However, I'm very pleased to say that we do lead the
pack: during my keynote I introduced the 'Kjell' Persona
(http://www.ad.com/press/jun302004.html), our first CCortex-based Autonomous
Cognitive System, and discussed in detail the preliminary results of
intensive testing of CCortex's emerging cognitive capabilities. We also
announced a new India research center, where 50+ additional neurologists and
programmers will join the American and European teams already working on
the project, as the very encouraging 'Kjell' results have prompted us to
speed up development and update our internal roadmap. A few surprises are
coming your way; you have been warned :)
Anyhow, enough CEO parlance already. I was wondering how you factor the
defense industry's predictable entry into this area into the 'friendly AI'
thinking. I assume you have always considered how likely it is that the
first successful AI will happen to be a not-so-friendly defense project.
I do understand that we all love to discuss qualia and high-level
ethics, but I would also like to politely suggest that maybe we should start
to consider a more down-to-earth approach to the most likely scenario: the
progressive incorporation of Autonomous Cognitive Systems into the world's
military forces, with limited political supervision.
In such a scenario, a comprehensive 'friendliness' approach would be
unworkable, while any proposed limitation on system design would have to be
hammered out among the industry, the public, and the politicians, with,
let's hope, some scientific input.
So, what about refocusing part of the discussion on such a plan-B,
compromise scenario? If no authoritative voice starts tackling the problem from
this perspective, we risk the debate being hijacked by
uninformed demagogues, fringe populists, and the tinfoil-hat brigade, very
much the same thing that happened with stem cell research, a very
unpleasant situation for any industry.
President and CEO
Artificial Development, Inc.