In 1996, Brewster Kahle, with Bruce Gilliat, developed software to crawl and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews (Usenet) bulletin board system, and downloadable software. The information collected by these "crawlers" does not include all of the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. The crawlers also respect the robots exclusion standard, so websites whose owners opt out do not appear in search results or get cached. To overcome inconsistencies in partially cached websites, the Internet Archive developed Archive-It.org in 2005 as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content and create digital archives.
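As an aside, the robots exclusion check described above can be sketched with Python's standard-library urllib.robotparser. The URL and the "ia_archiver" user-agent token below are illustrative assumptions, not details from the passage; this is a minimal sketch of how a polite crawler might consult robots.txt, not the Internet Archive's actual implementation.

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (URL is a placeholder).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # A polite crawler checks permission before downloading a page;
    # "ia_archiver" is assumed here as the crawler's user-agent token.
    if rp.can_fetch("ia_archiver", "https://example.com/some/page"):
        print("Allowed to crawl this page")
    else:
        print("Disallowed by robots.txt; skip it and omit it from the archive")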
Is there an answer to this question (if it cannot be answered, say "unanswerable"): When was the software needed to crawl and archive the web created?
1996