If you are caching private information, you are going to have a bad time. The expiration feature is misleading: it only runs when a site using the library is loaded.
So say you're in a random internet café checking your email on a site that uses this library to cache your emails, and then you close the browser and leave. All that private information, your email content, is still cached on that browser. If a savvy enough person saw you on the site and knew what it did, they could simply open the browser's Web Inspector, in the case of Chrome, look at what was cached, and read all of your cached emails.
There is no way for the client to know it's not a secure environment unless you ask the user. You could delete the cache on logout, but you can't delete it when the browser or tab closes, because then you wouldn't have a cache at all. And more than likely, most devs won't even think about this and will just assume their cache expires at some point.
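The logout-time wipe, at least, is easy to sketch. Below is a minimal illustration assuming the cached entries share a known key prefix ("cache:" is an invented convention here, not locache's actual key scheme); a Map-backed object stands in for window.localStorage so the snippet runs outside a browser.

```javascript
// Map-backed stand-in for window.localStorage so this sketch runs
// outside a browser; in a real page you would use localStorage itself.
const store = (() => {
  const m = new Map();
  return {
    setItem: (k, v) => m.set(k, String(v)),
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    removeItem: (k) => m.delete(k),
    key: (i) => [...m.keys()][i] ?? null,
    get length() { return m.size; },
  };
})();

// Wipe every cached entry whose key carries the cache prefix.
// ("cache:" is an invented convention, not locache's actual scheme.)
function clearCacheOnLogout(prefix) {
  const doomed = [];
  for (let i = 0; i < store.length; i++) {
    // Collect first: removing while iterating shifts key indices.
    const k = store.key(i);
    if (k && k.startsWith(prefix)) doomed.push(k);
  }
  doomed.forEach((k) => store.removeItem(k));
  return doomed.length;
}

store.setItem("cache:inbox", "[private mail]");
store.setItem("cache:contacts", "[addresses]");
store.setItem("theme", "dark");
const removed = clearCacheOnLogout("cache:");
```

This only helps when the user actually logs out, which is exactly the point above: it does nothing for the user who just closes the tab and walks away.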
Gmail solves this problem by requiring a per-browser opt-in for client-side caching, via a browser extension on Chrome at least.
I haven't investigated whether there's some way to use the HTML5 Application Cache (the cache manifest and window.applicationCache) to solve this problem. That actually has an expiration mechanism.
Cache management is hard to get right. Make sure the data you send is cacheable and let the browser manage it.
Browsers have complex mechanisms in place to deal with broken firewalls, high-latency links, and non-conformant servers and proxies. Any client-side caching implementation is going to be slower and less complete than its browser counterpart.
I have to agree. As nice as these kinds of systems sound (I've designed experimental ones myself), in most applications it's just not the right way to handle it. The first gotcha is that localStorage itself isn't asynchronous. If you have lots of data to pull out or store, you might end up locking up the thread and bringing the rest of the app to a standstill.
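One way to soften the synchronous-write problem is to spread a big batch of writes across the event loop, so no single turn blocks for long. A rough sketch (the function name and batch size are illustrative; `storage` only needs a `setItem` method):

```javascript
// Write entries in bounded chunks, yielding between chunks with
// setTimeout so other work can run. Each chunk is still synchronous,
// but the pause per turn is capped by the chunk size.
function saveInChunks(storage, entries, chunkSize, done) {
  let i = 0;
  (function step() {
    const end = Math.min(i + chunkSize, entries.length);
    for (; i < end; i++) {
      const [key, value] = entries[i];
      storage.setItem(key, JSON.stringify(value));
    }
    if (i < entries.length) {
      setTimeout(step, 0); // yield back to the event loop
    } else if (done) {
      done(i);
    }
  })();
}

// Demo against a plain object instead of real localStorage:
const written = {};
let saved = 0;
saveInChunks(
  { setItem: (k, v) => { written[k] = v; } },
  [["a", { n: 1 }], ["b", { n: 2 }], ["c", { n: 3 }]],
  5, // larger than the demo batch, so it finishes in one turn
  (count) => { saved = count; }
);
```

This caps the per-turn pause but doesn't remove the cost; the serialization and writes still happen on the main thread.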
Another important issue is that all browsers already have caching mechanisms built in. Spend your time verifying that the headers for your JS and stylesheets are set properly.
Also, localStorage only gives you 5 MB of storage. That space can be used up very quickly if you aren't careful.
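Because of that quota, every write should be prepared to fail. A defensive sketch (the catch is deliberately broad, since browsers historically disagreed on the exact error; the 10-character quota in the demo is obviously fake):

```javascript
// Attempt a write and report failure instead of throwing; in browsers
// the error is typically QuotaExceededError (older WebKit used code 22).
function trySet(storage, key, value) {
  try {
    storage.setItem(key, value);
    return true;
  } catch (err) {
    // Quota hit (or storage disabled): the caller can evict and retry.
    return false;
  }
}

// Demo: a fake storage with a 10-character quota.
let used = 0;
const tiny = {
  setItem(k, v) {
    if (used + v.length > 10) {
      const e = new Error("quota");
      e.name = "QuotaExceededError";
      throw e;
    }
    used += v.length;
  },
};
const first = trySet(tiny, "a", "12345");    // fits
const second = trySet(tiny, "b", "1234567"); // exceeds the fake quota
```

A real cache would evict its oldest or least-used entries on failure and retry, rather than just giving up.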
There are some very limited cases where this kind of thing can be useful. One that I'm reminded of is mobile web applications. Steve Souders talked about this briefly at the jQuery conference around this time last year. However, all the major mobile browsers already support cache manifests, which mostly obviates the need for something like this.
I agree, it's not a good approach in general. Correct cache headers on the server are probably the way to go in many cases.
I originally created this while working with an external API in a simple static-files-only app. It works really well for that use case and solved my issues. It was then easy to make it general, so I decided to chuck it out into the wild and see if I could help anybody else.
Perhaps I should add an area to the README to define when you should (and maybe more importantly when you shouldn't) use this.
Browsers don't have built-in partial caching, which I consider the biggest missed opportunity for improving the state of modern web development. Cache HTML blocks, then send a list of tokens identifying those blocks to the server, so it doesn't re-generate those pieces of content. It's conceptually simple, declarative, possible to implement as 100% backwards-compatible and could be used pretty much everywhere.
Another big thing that's missing is partial forced expiration. This is when the entire page is cached, but certain blocks are excluded from caching. Upon the next request to the same page, the browser sends a normal request with one addition: an extra header listing the transient blocks. Thus, the server has an opportunity to respond with updates only, rather than the entire page.
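To make the idea concrete, a hypothetical exchange might look like this (the Transient-Blocks header name is invented for illustration; no such header exists today):

> GET /inbox
> Transient-Blocks: unread-count, weather-widget
< HTTP/1.1 200 OK
< [body containing only the listed transient blocks; the rest of the page comes from the browser cache]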
SPDY only affects content transfer, whereas what I'm describing would affect both transfer and content generation.
Moreover, SPDY is a highly complex proposition, whereas what I'm describing is dead simple. I have implemented something very similar using JS, cookies and local storage. Of course, built-in browser support would be much, much more desirable.
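For the curious, the JS-and-localStorage variant can be sketched in a few lines. Everything below (the token scheme, function names, and plain-object store) is an illustration of the idea, not the actual implementation:

```javascript
// Client-side fragment cache: HTML blocks keyed by server-issued
// tokens. On each request the client reports which tokens it already
// holds, and the server (conceptually) sends back only the fragments
// whose tokens were not listed.
const fragments = { "header-v1": "<header>site chrome</header>" };

function knownTokens(store) {
  return Object.keys(store); // sent to the server, e.g. in a header or cookie
}

function mergeFragments(store, freshFromServer) {
  for (const token of Object.keys(freshFromServer)) {
    store[token] = freshFromServer[token];
  }
  return store;
}

// Pretend the server saw "header-v1" in our token list, skipped
// regenerating it, and sent only the body fragment:
mergeFragments(fragments, { "body-v2": "<main>fresh content</main>" });
```

The page is then assembled from the cached and fresh fragments, which is exactly what built-in browser support could do without any of this scaffolding.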
That’s pretty complex for a caching solution. Not to mention the edge cache would need to keep multiple copies of the same document to diff them, which makes it completely unusable for anything with quick turnover.
With HTML5 Offline (App Cache) you could easily do this at the application level, of course. And with SPDY you can just fetch the fragments separately without any performance penalty. Basically, the current trend is to enable more gradual caching instead of implementing some higher-level resource multiplexing.
Biggest missed opportunity in my opinion? Preparsed DOM/XML. E.g. Fast Infoset[1].
I think you're misunderstanding what I'm asking for. As I've said, I've already implemented that kind of caching in JS. It's really simple. The problem is, local storage and cookies are not ideal implementations.
Local storage is a bad fit for caching. You can compose the document from smaller fragments and leverage the browser cache. It’s simple AJAX. That’s what I meant by gradual caching.
I thought a bit about your solution, and I guess it’s doable using existing technology AND leveraging gradual caching without JS.
Using the standard HTTP/1.1 session protocol this would require additionally requesting /foo/fragment; SPDY can just use server push.
> GET /foo/fragment
< HTTP/1.1 200 OK
< ETag: 1
[…]
Now on refresh conditional GET for /foo hits cache, and conditional GET for /foo/fragment doesn’t. Abracadabra: partial caching.
> GET /foo
> If-None-Match: 1
< HTTP/1.1 304 Not Modified
> GET /foo/fragment
> If-None-Match: 1
< HTTP/1.1 200 OK
< ETag: 2
[…]
While it looks like some new technology, it’s simple really. Browsers already compose pages by fetching scripts, images, embedded objects, iframes. This is not that different.
Moreover, this approach provides backwards compatibility, because if the browser doesn’t resolve XInclude, you can just do that yourself using AJAX.
This solves your problem, and it requires changes only on the client side.
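The AJAX fallback can be sketched abstractly. Real code would walk the DOM for include placeholders; here plain objects stand in for elements and a synchronous fetch function stands in for XMLHttpRequest, so the idea runs anywhere:

```javascript
// Resolve include placeholders by fetching each fragment URL and
// splicing the returned HTML into the placeholder.
function resolveIncludes(placeholders, fetchText) {
  for (const ph of placeholders) {
    ph.html = fetchText(ph.url);
  }
  return placeholders.length;
}

// Demo: one placeholder pointing at the fragment URL from the example.
const page = [{ url: "/foo/fragment", html: "" }];
const count = resolveIncludes(page, (url) => `<section>served for ${url}</section>`);
```

Since each fragment is fetched as a normal HTTP resource, the browser's regular cache and conditional GETs apply to every fragment individually.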
What does this library offer over just using localStorage without a library? The browser API is already really easy, and with no fallbacks for IE < 9 I don't really see any advantage in using this...
Maybe it would be an idea to state that clearly in the README on GitHub? I guess it's probably obvious to people who know memcache, but to a frontend developer the library looks like a general localStorage wrapper. Your library deserves more than that :)
I've been using Amplify.store for some time now, and I would highly recommend it. It uses localStorage where possible, but it also includes a lot of sensible fallbacks, including browser-specific strategies. It also allows you to add your own strategies if you wish.
That's neat. locache has a simple wrapper around localStorage - the "locache.storage" object - so it would be easy to add an extra layer that supports other browsers. I had considered this already, but haven't had the need yet. There is an implementation in the Mozilla docs that mimics the localStorage API but stores the data in cookies.
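In the same spirit (but not the Mozilla code itself), a cookie-backed store exposing localStorage-style getItem/setItem can be sketched like this. The cookie jar is injectable so the snippet runs outside a browser; in a real page you'd pass an object proxying document.cookie, and the write would also need path/expires attributes:

```javascript
// Minimal cookie-backed store with a localStorage-like API. Keys and
// values are URI-encoded so "=" and "; " in data can't break parsing.
function cookieStorage(jar) {
  const read = () =>
    Object.fromEntries(
      jar.value.split("; ").filter(Boolean)
        .map((pair) => pair.split("=").map(decodeURIComponent))
    );
  return {
    getItem(k) { const all = read(); return k in all ? all[k] : null; },
    setItem(k, v) {
      const all = read();
      all[k] = String(v);
      jar.value = Object.entries(all)
        .map(([a, b]) => `${encodeURIComponent(a)}=${encodeURIComponent(b)}`)
        .join("; ");
    },
  };
}

const jar = { value: "" }; // stands in for document.cookie
const cookieStore = cookieStorage(jar);
cookieStore.setItem("session", "abc 123");
const roundTripped = cookieStore.getItem("session");
```

Worth remembering that cookies are sent with every request, so this fallback trades storage space for request overhead - fine for small values, a bad idea for cached pages.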
Why doesn't this use the userData behavior for IE 6/7? Given that those browsers are the least performant, wouldn't it make sense to ensure caching works on them?
As I pointed out below, Amplify.js, an established JavaScript library, has a Store module that supports all manner of browser storage (including userData) with a unified interface that abstracts away the specific storage mechanism.
1. Why have different methods for multiple values rather than checking the parameter type in the normal methods (e.g. whether 'set' received an object or 'get' received an array)?
2. getMany returns an array instead of an object mapping keys to values. Sure, that may be more efficient, but it's less convenient to use.
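To make the suggestion concrete, here is one possible shape for such a unified API (purely illustrative; this is not locache's interface):

```javascript
// One "set" that accepts either (key, value) or an object of pairs,
// and one "get" that maps an array of keys to a {key: value} object.
function makeCache() {
  const data = new Map();
  return {
    set(keyOrObj, value) {
      if (typeof keyOrObj === "object" && keyOrObj !== null) {
        for (const [k, v] of Object.entries(keyOrObj)) data.set(k, v);
      } else {
        data.set(keyOrObj, value);
      }
    },
    get(keyOrKeys) {
      if (Array.isArray(keyOrKeys)) {
        const out = {};
        for (const k of keyOrKeys) out[k] = data.has(k) ? data.get(k) : null;
        return out; // object mapping, per point 2 above
      }
      return data.has(keyOrKeys) ? data.get(keyOrKeys) : null;
    },
  };
}

const cache = makeCache();
cache.set("a", 1);
cache.set({ b: 2, c: 3 });                     // object form of point 1
const many = cache.get(["a", "b", "missing"]); // object mapping, point 2
```

Missing keys map to null in the result, so callers can distinguish "not cached" without a second lookup.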
Hmm. Good points.
1) I made them different operations just for clarity - however, it would make the API a little nicer if they were the same.
2) I agree with this - an object mapping would probably be easier.
Pull requests, issues etc. all welcome. Otherwise, I'll have a look at it myself.