If you trawl through the IETF and, more recently, WHATWG mailing lists, you will find that every time caching comes up (with Last-Modified, or when ETag was being ratified) the proposal for cross-origin caching also comes up, and is rejected.
The browser vendors have spent the past 4-5 years locking down cross-origin access in the DOM because of all the security and privacy implications that arise. Corporate profiles and ISO standards don't even accept running the code, let alone caching it (I've worked on plenty of corporate projects where you aren't allowed to use Google-hosted JS at all; it just won't run due to AD policies).
To give you but one example of what arises with this new vector: say I were to go to the top 50 banking sites and pick a JavaScript file that is unique to each site. I would then take the hash of each file and, in a single web page, include a script element referencing each of those hashes. When a visitor hits that page and the browser attempts to load each script element, if one loads in 4ms then I know the file was already in the cache, and therefore that the visitor is a customer of that bank.
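To make the attack concrete, here is a hedged sketch of the inference step. In a real browser the attacker would time each probe around `script.onload` with `performance.now()`; the site names, the threshold, and the `inferVisitedSites` helper below are all hypothetical, purely to illustrate how the timings leak browsing history:

```javascript
// Sketch of the timing side channel described above (assumed names/values).
// In a browser the timings would be gathered roughly like this:
//   const t0 = performance.now();
//   script.onload = () => record(site, performance.now() - t0);
// Here we only show the inference over already-recorded timings.

// Illustrative threshold: a load this fast suggests a cache hit.
const CACHE_THRESHOLD_MS = 10;

function inferVisitedSites(timings) {
  // timings: array of { site, loadMs }
  return timings
    .filter(({ loadMs }) => loadMs < CACHE_THRESHOLD_MS)
    .map(({ site }) => site);
}

// Example probes (hypothetical data): fast loads look like cache hits,
// i.e. the visitor has previously been to that bank's site.
const probes = [
  { site: "bank-a.example", loadMs: 4 },   // cache hit: likely a customer
  { site: "bank-b.example", loadMs: 230 }, // network fetch: not cached
  { site: "bank-c.example", loadMs: 6 },   // cache hit: likely a customer
];

console.log(inferVisitedSites(probes)); // flags bank-a and bank-c
```

The point is that the attacker never reads any cross-origin content; the load time alone is the leak, which is why cross-origin caching keeps getting rejected.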
Well, that is just one example, and as I mentioned, cross-origin requests and access have been further locked down recently, not opened up. With all the different versions and variants of libraries out there, you would be implementing and exposing a lot to save very little; you wouldn't even save 1% of web requests. As mentioned, this isn't a new idea. I'm just telling you the reasons why it hasn't happened and won't:
a) it would require all servers and browsers to be updated for what is a marginal gain
b) it's a privacy nightmare