First, everyone went out of their way to break REST[1] caching by cutting proxies out of the picture with ubiquitous SSL (for reasons both good and bad).
And now we're trying to shoehorn it back in?
It used to be that a local caching Squid proxy was a great way to make load times of the various "front pages of the Internet" bearable on a shared low-bandwidth uplink (local/national news sites etc. typically being served from the cache/LAN).
Ubiquitous SSL/TLS kinda-sorta breaks that, and there's no middle ground: either you install an intercepting CA certificate that catches everything, or you abandon caching on everything. Either you cache CNN.com along with medical records, webmail and Facebook messages, or you cache none of them.
AMP might be a bridge too far, but some kind of (semi-)public "signed, not encrypted" mode would still be a good fit for hypertext applications/documents, because of the caching benefits.
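To make the "signed, not encrypted" idea concrete, here's a minimal sketch (not any real protocol, just roughly the shape of things like Signed HTTP Exchanges) using the Python cryptography package. The publisher signs the document bytes, any shared cache can store and serve those bytes in the clear, and the client verifies the publisher's signature before trusting the content. All names, URLs and keys below are hypothetical.

```python
# Sketch: "signed, not encrypted" hypertext delivery through a shared cache.
# Assumes the Python 'cryptography' package; keys/names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Publisher side: sign the document once; the signature travels with it.
publisher_key = Ed25519PrivateKey.generate()
document = b"<html><body>Front page of the Internet</body></html>"
signature = publisher_key.sign(document)

# Shared cache side: stores and serves (document, signature) in the clear,
# so a whole LAN can reuse one upstream fetch. It never needs any keys.
cache = {"https://news.example/front-page": (document, signature)}

# Client side: fetch from the nearby cache, then verify against the
# publisher's public key (obtained out of band, e.g. pinned or via PKI).
publisher_pub = publisher_key.public_key()

def fetch_verified(url: str, pub: Ed25519PublicKey) -> bytes:
    body, sig = cache[url]
    pub.verify(sig, body)  # raises InvalidSignature if the cache tampered
    return body

page = fetch_verified("https://news.example/front-page", publisher_pub)
print(page.decode())

# Integrity holds, but note what you give up versus TLS: anyone on the
# path can still see which documents you fetched and what they contain.
try:
    cache["https://news.example/front-page"] = (b"tampered", signature)
    fetch_verified("https://news.example/front-page", publisher_pub)
except InvalidSignature:
    print("cache tampering detected")
```

The point of the sketch is the trust split: the cache only needs the bytes, integrity comes from the publisher's signature, and confidentiality is deliberately traded away in exchange for shared caching.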
[1] As excellently outlined and contrasted by Fielding in his thesis: https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm