Screenshotting or PDFing a website is an increasingly important archiving tool to supplement wget. I've come across a lot of websites that won't render any content unless they're connected to a live server.
I couldn't agree more. I wish more sites would load without needing multiple seconds of JS execution and AJAX. One of my TODOs is to get full-page screenshots working as well.
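For what it's worth, a minimal sketch of how a (tall) screenshot can be grabbed with headless Chrome, wrapped in Python; the google-chrome binary name, the URL, and the oversized --window-size height (a crude stand-in for a true full-page capture) are all assumptions:

    import subprocess

    # Rough sketch: capture a tall screenshot with headless Chrome.
    # The window height is a guessed workaround, not a real full-page capture.
    def screenshot(url, out_path="screenshot.png", height=4000):
        subprocess.run([
            "google-chrome",                 # assumed binary name (may be chromium, etc.)
            "--headless",
            "--disable-gpu",
            f"--screenshot={out_path}",
            f"--window-size=1440,{height}",
            url,
        ], check=True)

    screenshot("https://example.com")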
I was doing this before with the chrome --dump-dom flag but the output I was getting was garbage, and no more useful than the simpler wget download. PDF turned out to produce really nice, readable archives about 75% of the time, so I kept it in. Text-based sites tend to do a good job of having PDF-friendly styling.
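For reference, a minimal sketch of that PDF path, assuming the google-chrome binary name and a placeholder URL; --print-to-pdf is the relevant flag:

    import subprocess

    # Print a page to PDF with headless Chrome; the PDF lands at out_path.
    def save_pdf(url, out_path="output.pdf"):
        subprocess.run([
            "google-chrome",            # assumed binary name
            "--headless",
            "--disable-gpu",
            f"--print-to-pdf={out_path}",
            url,
        ], check=True)

    save_pdf("https://example.com")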
It came out of a screenshot/archiver I've been working on at Mozilla, but I've split it up because the screenshotting part is shipping and DOM archiving is still way outside Mozilla's comfort zone.
I googled it, and --dump-dom simply dumps the output of document.body.innerHTML. I have not used headless chrome at all, but I imagine it would not be that hard to get it to dump document.documentElement.outerHTML instead.
(Executing JavaScript on the page is most likely possible too?)
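One hedged way to sketch that, using Selenium to drive headless Chrome (Selenium and a matching chromedriver are assumed to be installed; this is a workaround, not a built-in chrome flag):

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com")
        # Arbitrary JS runs in the page; this returns the whole document,
        # <head> included, instead of just document.body.innerHTML.
        html = driver.execute_script("return document.documentElement.outerHTML;")
        with open("page.html", "w") as f:
            f.write(html)
    finally:
        driver.quit()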
I mentioned it below, but I tried getting DOM snapshots using chrome --dump-dom, and the output usually didn't render well without a <head> section (chrome only outputs the <body>). I could attach the <head> from the wget file... but then it starts getting messy and complicated.
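To make the messy part concrete, a rough sketch of that splice, assuming BeautifulSoup and made-up file names for the wget copy and the --dump-dom output:

    from bs4 import BeautifulSoup

    # Hypothetical inputs: the static wget download and the --dump-dom output.
    wget_soup = BeautifulSoup(open("index.wget.html"), "html.parser")
    dumped_body = open("index.dumpdom.html").read()

    # Reuse the <head> (styles, meta) from the static copy, wrap the dumped
    # DOM in a fresh <body>, and hope the relative asset paths still line up.
    head = str(wget_soup.head or "")
    merged = f"<html>{head}<body>{dumped_body}</body></html>"
    with open("index.merged.html", "w") as f:
        f.write(merged)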
Agreed. I research new media and archive.org is invaluable to me. I worry that current websites won't be able to be preserved (much like many of the Flash sites and RealAudio of the past, which are largely gone).
What version of Google Chrome do you need for the PDF export to work? I tried it on 58.0.3029.96 (Linux) and this does nothing (no error messages, it just quits without writing any files):
Edit: I'm completely baffled that such widely used software as Google Chrome can have this written in the man page: "Google Chrome has hundreds of undocumented command-line flags that are added and removed at the whim of the developers."
Highly recommend switching to wpull (https://github.com/chfoo/wpull), which was built as a wget replacement. It's what grab-site uses, which is a successor to ArchiveTeam's ArchiveBot.
"grab-site is made possible only because of wpull, written by Christopher Foo who spent a year making something much better than wget. ArchiveTeam's most pressing issue with wget at the time was that it kept the entire URL queue in memory instead of on disk. wpull has many other advantages over wget, including better link extraction and Python hooks."
Use Zotero and you have your own personal Pocket with snapshots. In addition, you can add tags, organize stuff into folders, etc.
https://www.zotero.org/
The question was asked whether you can plug your API endpoint URL (https://[pinboardusername]:[pinboardpassword]@api.pinboard.i...) straight into IFTTT. Maciej confirmed that it should work; the problem is that you're essentially storing your login credentials in a third-party service, and you don't know whether they're storing and transmitting them securely.
Could you add an option to either add tagging or separate the tagged items into folders?
e.g. "programming/", "docker/", etc. I often find myself digging through my Pocket archive trying to find that one article I saved 6 months ago, and it gets incredibly annoying.
I like having the sites by timestamp because they're guaranteed to be unique, and it makes traversing them easy. I'd be happy to add a tag column to the index though, which you could use with Ctrl+F to find articles. https://github.com/pirate/pocket-archive-stream/issues/1
I've been thinking along those very same lines for a long time (this project makes me wish for more free time).
I have half a mind to fork this and add something like https://github.com/internetarchive/warcprox, or at the very least walk through the generated HTML and brute-force inline all assets as a first pass :)
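A back-of-the-envelope sketch of that brute-force inlining pass (requests and BeautifulSoup assumed; a real version needs far more care with fonts, scripts, srcset, and CSS url() references):

    import base64
    import mimetypes
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def inline_assets(html, base_url):
        soup = BeautifulSoup(html, "html.parser")
        # Pull external stylesheets into <style> tags.
        for link in soup.find_all("link", rel="stylesheet"):
            css = requests.get(urljoin(base_url, link.get("href", ""))).text
            style = soup.new_tag("style")
            style.string = css
            link.replace_with(style)
        # Convert images to data: URIs.
        for img in soup.find_all("img", src=True):
            url = urljoin(base_url, img["src"])
            resp = requests.get(url)
            mime = resp.headers.get("Content-Type") or mimetypes.guess_type(url)[0] or "image/png"
            img["src"] = "data:{};base64,{}".format(mime, base64.b64encode(resp.content).decode())
        return str(soup)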
I've been thinking I'd love to have a WARC archive of all my browsing. So many times, sites I remember seeing have gone offline and didn't get archived by the big services. Ideally this would happen with browser cooperation, so it can save resources from complex dynamic pages, including responses to user actions.
This must happen either in the browser or in a proxy like the linked warcprox, in order to catch everything. But the proxy solution is getting less practical every day with key pinning and HSTS.
Maybe a future firefox will have an option to export everything to WARC?
I would be very on board with adding a WARC export option. I also hate how Chrome tosses all history older than 3 months. Running an archiving proxy hooked up to archive.py would kill both birds with one stone.
Can one automate extensions through headless Chrome? Then you might be able to trigger WARCreate instead. (It would be more efficient to run the Pocket export URLs through WAIL though; that should give you the WARCs you want.)
Well, no, but on a Mac .webarchives are Spotlight indexable and make for a nice single-file archiving approach, and I might actually have some old code that tries to convert between the two...
Also, I've been hacking away at http://github.com/rcarmo/newsfeed-corpus for a bit. Hadn't thought of doing archival on everything, but having this tied in seems like a logical step ;)
Or EML/MHT. It's the format email programs use to store HTML mail, including all pictures, JS, CSS, etc., in one plain-text file. IE 9-11 also supports that format (File -> Save as...) but calls it MHT?
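A toy illustration of the underlying idea with Python's email library: the page HTML plus one image packed into a single multipart/related file, which is essentially what MHT is (file names and the cid reference are made up; this won't match IE's exact output):

    from email.message import EmailMessage
    from email.utils import make_msgid

    msg = EmailMessage()
    msg["Subject"] = "Archived page"

    img_cid = make_msgid()
    msg.set_content(
        '<html><body><h1>Saved page</h1>'
        f'<img src="cid:{img_cid[1:-1]}"></body></html>',
        subtype="html",
    )
    with open("logo.png", "rb") as f:
        # Adding a related part converts the message to multipart/related.
        msg.add_related(f.read(), maintype="image", subtype="png", cid=img_cid)

    with open("page.mht", "wb") as f:
        f.write(bytes(msg))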
You can tell something is flawed in Redux when you have to pass strings (uppercase constants defined somewhere) around, import them in every file, and use them as identifiers for what to do with each piece of data.
Slowing down the inevitable tide of https://en.wikipedia.org/wiki/Link_rot. When I cite blog posts or want to share sites that have gone down, I can swap out the links for my archived versions.
This is really cool! I've always had in mind a project where you save every page you visit and somehow expose them in the future, so you can see what you visited and maybe be reminded of important stuff based on some heuristic.
Just for articles, mind you, not entire websites.