Wayback Machine:

Since 1996, the Internet Archive has been archiving cached pages of websites onto its large cluster of Linux nodes. It revisits sites every few weeks or months and archives a new version if the content has changed. Sites can also be captured on the fly by visitors, who are offered a link to do so. The intent is to capture and archive content that would otherwise be lost whenever a site is changed or shut down. The grand vision is to archive the entire Internet.

Please answer a question about this article. If the question is unanswerable, say "unanswerable". Where can sites be captured by Linux clusters?
unanswerable