Do you mean backing up an entire domain? Like example.com/*
If so that's starting to roll out in v0.8.5rc50, check out the archivebox/crawls/ folder.
If you mean archiving a single page more thoroughly, what do you find is missing in ArchiveBox? Are you able to get singlefile/chrome/wget HTML when archiving?
Ah ok. One caveat: it's only available via the 'archivebox shell' / Python API currently, the CLI & web UIs for full depth crawling will come later.
You can play around with the models and tasks, but I would wait a few weeks for it to stabilize before checking again; it's still under heavy active development.
You guys probably hear it all the time, but you are doing the Lord's work. If I thought I could be of use on that project, I would be trying to contribute myself (in fact, let me see if there's a way I can participate in a useful manner).