From: James Higgins (email@example.com)
Date: Tue Jul 02 2002 - 10:24:16 MDT
At 03:01 PM 6/30/2002 +0200, Eugen Leitl wrote:
>On Sat, 29 Jun 2002, Ben Goertzel wrote:
> > One option we've considered is to create a huge mirror of a large
> > portion of the Net, for the system's use. However, this would cost
> > mucho dinero!
>All current search engines maintain a large fraction of the web in their
>cache. I think it should be easy to arrange to have an air-gapped AI
>reading a large fraction of that. Google has been known to try strange
>things in R&D. Clearly there's a tremendous market to have e.g. a natural
>language interface finding facts in iterative user sessions.
Actually, the Internet Archive (www.archive.org) supposedly has numerous
copies of the entire web (dating back to 1996 - they're going for a
historical record of it). At present this takes about 100TB of storage. I
imagine they would be willing to help provide a limited-access copy of the
web for AIs to learn from.
I still think that a very highly secure interface, even to an isolated
collection, would be a necessary safety measure. (See my previous post on
Limited Internet Access for details.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT