Given the URL of a #GeoCities site, how do I download a coherent copy? https://web.archive.org/web/20091027000043/http://geocities.com/kensanata/
If I add the date to my wget invocation, it only downloads the pages captured at that exact moment, which won't do. And I can't follow the official instructions because I'm too stupid to use the Internet Archive Advanced Search.
How can I search for all my pages from the GeoCities Special Collection 2009 and get the identifiers I need?
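One way to list every page the Wayback Machine has captured under a site prefix is its public CDX server API. Here is a minimal sketch of building such a query (the endpoint and parameters are the documented CDX API; the helper name is mine):

```python
from urllib.parse import urlencode

def cdx_query_url(site_prefix):
    """Build a Wayback Machine CDX API query listing every capture
    whose URL starts with site_prefix."""
    base = "https://web.archive.org/cdx/search/cdx"
    params = {
        "url": site_prefix,
        "matchType": "prefix",       # match everything under the prefix
        "output": "json",
        "fl": "timestamp,original",  # fields: capture time and original URL
        "collapse": "urlkey",        # one row per distinct URL
        "filter": "statuscode:200",  # skip 404s and redirects
    }
    return base + "?" + urlencode(params)

print(cdx_query_url("geocities.com/kensanata/"))
```

Fetching that URL (with curl, wget, or urllib) returns a JSON array whose first row is the field header and whose remaining rows are (timestamp, original URL) pairs you can feed into a downloader.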
@ckeen What do you do with the 600 GB torrent: download it all? Or do you use a command-line BitTorrent client, give it a URL pattern, and fetch only those files?
I'm not even sure they have individual GeoCities "sites" as a collection in a single archive. If they do, where can I find a link to "the latest archive of http://geocities.com/kensanata/" in the Wayback Machine?
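For a single "latest snapshot" link, the Wayback Machine does offer an availability API that returns the capture closest to a given timestamp (or the most recent one if none is given). A sketch of building that query (the endpoint is the real availability API; the helper name is mine):

```python
from urllib.parse import urlencode

def availability_query(url, timestamp=None):
    """Build a Wayback availability API query. The JSON response's
    archived_snapshots.closest.url field points at the replay page."""
    params = {"url": url}
    if timestamp is not None:
        params["timestamp"] = timestamp  # format: YYYYMMDDhhmmss
    return "https://archive.org/wayback/available?" + urlencode(params)

print(availability_query("geocities.com/kensanata/"))
```

Note this gives you one entry page, not the whole site; for a full listing you still need something like the CDX API.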
@kensanata That's not easy, if it's possible at all. First, there's no guarantee that such a thing exists. It also looks like some links on your site 404'ed when the crawler came by, for whatever reason.
I assumed that the Wayback Machine offers a way to get all the data from one crawl of a site, but it looks like I was wrong. Maybe send the folks at archive.org a message?
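For what it's worth, once you have (timestamp, original URL) pairs, each can be turned into a direct download URL: the `id_` modifier in a Wayback replay URL asks for the raw archived bytes, without the toolbar and link rewriting the replay page normally injects. A sketch (the URL scheme is the Wayback Machine's; the helper name is mine):

```python
def wayback_raw_url(timestamp, original):
    """Build a Wayback Machine URL for the raw bytes of one capture.
    The "id_" modifier suppresses the replay banner and link rewriting."""
    return f"https://web.archive.org/web/{timestamp}id_/{original}"

print(wayback_raw_url("20091027000043", "http://geocities.com/kensanata/"))
# -> https://web.archive.org/web/20091027000043id_/http://geocities.com/kensanata/
```

Writing one such URL per line to a file and running `wget -i urls.txt` would then pull down a snapshot of every captured page.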