RE: SIAI's flawed friendliness analysis

From: Gary Miller (garymiller@starband.net)
Date: Sat May 17 2003 - 21:12:37 MDT


My proposed solution to the friendliness problem.
 
Note: some of you will laugh this off as overkill. But believe me, having worked as a consultant for the government for a number of years, this is just business as usual for the NSA. It is a very expensive but very secure development process. It is based on separation and balance of power: no one person has the access and knowledge to compromise the system. Relationships between team members must be prohibited to prevent any possibility of collusion.
 
All FAI project personnel should have security clearances and polygraphs to discourage infiltration by Luddites, foreign powers, and megalomaniacs.
 
Personal spending patterns and lifestyle should be monitored for all personnel to ensure that large influxes of cash are not originating from unknown sources, to discourage espionage for profit.
 
All personnel sign non-disclosure and confidentiality agreements, and each receives security training covering only his or her own steps of the process.
 
Design of the FAI should be conducted by no fewer than 3 system architects. Programmers receive code specifications and unit test plans from the project architects. Architectural changes require agreement among all 3 architects. System architects receive NO access to the software development environment.
 
Coding of the FAI is to be accomplished by no fewer than 3 programmers.
 
Extensive code review and walk-throughs are required before any code object is elevated to test.
 
All code objects must be checked out and checked in via a secure version control system. Final unit tests at the developer level should be conducted for the system architects to prove that all specifications are met.
 
A 256-bit checksum (CRC) is then calculated for the code object and recorded by the project teams and system librarians.
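
To make the checksum step concrete, here is a rough sketch in Python. The post does not name an algorithm beyond a 256-bit checksum, so SHA-256 is used here as one common 256-bit choice; the file names and ledger format are only illustrative.

# Hedged sketch: SHA-256 stands in for the 256-bit checksum; the ledger
# path and record format are assumptions for illustration only.
import hashlib
import json
from pathlib import Path

def record_checksum(code_object: Path, ledger: Path) -> str:
    """Compute a SHA-256 digest of a code object and append it to the
    ledger kept by the project team and system librarians."""
    digest = hashlib.sha256(code_object.read_bytes()).hexdigest()
    with ledger.open("a") as f:
        f.write(json.dumps({"object": code_object.name, "sha256": digest}) + "\n")
    return digest

# Example: record_checksum(Path("inference_engine.py"), Path("vault_ledger.jsonl"))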
 
System librarians have only read access to the development system.
 
The system librarian then elevates the code to the test environment. A log from the production environment that records verified training inputs is replayed to the test environment with the newly elevated code objects.
 
Testers and knowledge engineers compare the training results, including inferences made by the new test FAI, to the version currently in production. All test-result deviations from the production version should be of a positive or neutral nature before test is elevated to production. CRCs are then calculated and compared to the vault copies recorded at elevation to the test environment. Any deviation indicates a control has been breached and requires returning to development to revalidate the code, reverify the CRC, and re-elevate and retest if necessary. As a final check, once it is smart enough, the FAI should be given a multiphasic personality exam to determine whether there are any negative abnormalities since its last test. After the testers and knowledge engineers sign off on the results, the code is elevated to production.
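
As a sketch of what that comparison and revalidation might look like in practice (the JSON-lines log format, the field names, and the vault ledger layout are assumptions carried over from the checksum sketch above, not the author's actual tooling):

# Hedged sketch: diff production inferences against the test FAI's
# inferences, and recompute vault checksums to detect tampering.
import hashlib
import json
from pathlib import Path

def load_inferences(log: Path) -> dict:
    """Map each training input id to the inference drawn from it."""
    return {rec["input_id"]: rec["inference"]
            for rec in (json.loads(line) for line in log.open())}

def deviations(prod_log: Path, test_log: Path) -> list:
    """Return (input_id, production inference, test inference) for every mismatch."""
    prod, test = load_inferences(prod_log), load_inferences(test_log)
    return [(i, prod[i], test.get(i)) for i in prod if test.get(i) != prod[i]]

def breached_objects(ledger: Path) -> list:
    """Recompute checksums against the vault ledger; any mismatch means a control was breached."""
    bad = []
    for rec in (json.loads(line) for line in ledger.open()):
        if hashlib.sha256(Path(rec["object"]).read_bytes()).hexdigest() != rec["sha256"]:
            bad.append(rec["object"])
    return bad

Each mismatch returned by deviations() would then be judged positive, neutral, or negative by the testers and knowledge engineers before sign-off.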
 
All validated logs and log results are stored offline as well as online. Online logs in both the test and production environments must be encrypted to prevent tampering by outside programs. The encryption key should be composed offline and keyed into the test and production systems on a regular basis. Past encryption keys remain in the system, themselves encrypted, for viewing logs created under past keys.
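
One way this could look in code, assuming the third-party Python "cryptography" package (the post does not specify a cipher, so Fernet is just an illustrative choice; MultiFernet decrypts with either the current or a past key, matching the key-rotation rule above):

# Hedged sketch of encrypted logs with key rotation, not a prescribed design.
from cryptography.fernet import Fernet, MultiFernet

current_key = Fernet.generate_key()   # composed offline, keyed in manually
past_key = Fernet.generate_key()      # a previously rotated key, kept for old logs

codec = MultiFernet([Fernet(current_key), Fernet(past_key)])

token = codec.encrypt(b"2003-05-17 inference: verified training input #4411")
plain = codec.decrypt(token)          # succeeds for entries written under either key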
 
Due to the large size of the logs containing inferences from training data, utilities will need to be written to search the logs for key inferences of interest to the knowledge engineers.
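
A minimal sketch of such a search utility, again assuming the illustrative JSON-lines log format used above (after decryption):

# Hedged sketch of a log-search utility for knowledge engineers.
import json
from pathlib import Path
from typing import Iterator

def search_inferences(log: Path, *keywords: str) -> Iterator[dict]:
    """Yield decoded log records whose inference text mentions any keyword."""
    for line in log.open():
        rec = json.loads(line)
        text = rec.get("inference", "").lower()
        if any(k.lower() in text for k in keywords):
            yield rec

# Example: for rec in search_inferences(Path("training.log"), "harm", "deception"): print(rec)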
 
Knowledge engineers are responsible for the creation and encoding of training data. Knowledge engineers should be responsible for separate knowledge domains once the low-level training has taken place. All incorrect inferences observed from new training data should be recorded and analyzed with help from the project architects. Project architects can help determine whether the incorrect inferences stem from the architecture, the coding, or insufficient training data to make the correct inferences.
 
Since all training data is kept in sequence, the FAI's mental state becomes non-monotonic. The FAI should keep a record of all inputs that went into any given inference. Thereby any negative inference can be eliminated by restoring to a point in time prior to the offending input and deleting the training input that caused the problem, or by adding a new training input to prevent the negative inference which resulted.
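
To illustrate the idea (the learn interface here is purely hypothetical; the point is only that a sequenced training log lets state be rebuilt with bad inputs excluded):

# Hedged sketch of sequenced training inputs with rollback by replay.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TrainingInput:
    seq: int        # position in the master training sequence
    content: str

@dataclass
class TrainingLog:
    inputs: List[TrainingInput] = field(default_factory=list)

    def append(self, content: str) -> TrainingInput:
        item = TrainingInput(seq=len(self.inputs), content=content)
        self.inputs.append(item)
        return item

    def replay(self, learn: Callable[[str], None],
               up_to: int, exclude: frozenset = frozenset()) -> None:
        """Rebuild the FAI's state from scratch up to a point in time,
        skipping any training inputs identified as causing bad inferences."""
        for item in self.inputs[:up_to]:
            if item.seq not in exclude:
                learn(item.content)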
 
Elevations to production are done weekly, with full backups occurring immediately before and after. Backups are stored at two separate secure offsite disaster recovery sites, with the alternating before/after backups trading locations. System managers and database managers should be separate for the three environments. They meet to establish standards they will all follow, but no person has security access or passwords for more than one environment.
 
None of the three environments are linked via a network.
 
All security procedures should be documented and approved by management. Actual adherence to these procedures should be physically audited on a regular basis by outside security auditors.
 
A disaster recovery plan should exist from day 1 and be kept up to date daily. Disaster recovery drills should be done on a monthly basis at another secure facility to ensure that all three environments can be recreated in a minimum of time.
 
This high level of supervised learning should be maintained until the FAI has established a sufficient moral compass and, when presented with sample moral dilemmas, exhibits the same high standard of moral conviction as that agreed on by project management.
 
At that time the production FAI can be exposed to selected outside learning sources such as websites, databases, etc. All translated training facts extracted from these sources should be logged as before.
 
I'm sure I've left out a number of things, but if your pockets are deep enough and you really take security seriously, some variation of the above is what you'll need.

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Ben Goertzel
Sent: Saturday, May 17, 2003 1:36 PM
To: sl4@sl4.org
Subject: RE: SIAI's flawed friendliness analysis

 
To me, the real problem of Friendly AI is, in non-rigorous terms: What
is a design and teaching programme for a self-improving AI that will
give it a high probability of
 
a) massively transcending human-level intelligence
b) doing so in a way that is generally beneficial to humans and other
biological sentients as well as to itself and other newly emergent
digital sentients
 
?
 
Disputes over details aside, I think this is pretty much what Bill
Hibbard and Eliezer are also talking about....
 
Many subsidiary issues arise, such as "how high is a high enough
probability for comfort", "what does 'generally beneficial' mean?", and
so forth.
 
I don't pretend to have an answer to this question. I hope to
participate in working one out perhaps 3-5 years from now, when we (if
all has gone well in the meantime) have a baby Novamente that's actively
experiencing and learning from its environment and learning to
communicate with us and guide its own actions...
 
Ben G
 

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Philip Sutton
Sent: Saturday, May 17, 2003 9:52 AM
To: sl4@sl4.org
Subject: Re: SIAI's flawed friendliness analysis

Ben,

> Unfortunately though, as you know, this doesn't solve the real problem
> of FAI ... which you have not solved, and Hibbard has not solved, and I
> have not solved, and even George W. Bush has not solved ... yet ...

Can you spell out what you think the 'real problem of FAI' that hasn't
been solved yet is, in a format that might make it easier for people to
create a solution?

Cheers, Philip




