Azure has 3 CDN products from what I can tell. They have semi-overlapping feature sets, though I guess now HTTP/2 is taken care of. We were using one of them and experienced frequent timeouts. Switching to another fixed it. They don't explain why they have these different products in the first place, either.
That is largely because Azure CDN, unlike other Azure services, is not really from Microsoft at all; it is basically re-branded offerings from two competing vendors, who have different feature sets and different service quality (as you noticed) and who compete with each other.
This is also why it is not simple to deploy on both of them for the best coverage. Akamai has better PoPs in some markets and Verizon has better ones in others.
If you want to play to both of their strengths, that is not possible directly from Azure; you will need to set up both services yourself, configure your DNS for geo-location-based routing, and track which provider actually performs better in which geographies.
MS recommends much the same in their own docs: "Azure CDN from Akamai POP locations are not individually disclosed. Both providers have distinct ways of building their CDN infrastructures. We recommend against using POP locations to decide which Azure CDN product to use, and instead consider features and end-user performance. Test the performance with both providers to choose the right Azure CDN product for your users."
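That "test the performance with both providers" step can be as simple as timing the same asset from each profile out of every geography you care about. A minimal sketch in Go - the two hostnames are made-up placeholders for a Verizon-backed and an Akamai-backed endpoint:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Hypothetical endpoints: substitute the hostnames of your own
    // Verizon-backed and Akamai-backed CDN profiles.
    var endpoints = []string{
        "https://myassets-verizon.azureedge.net/probe.jpg",
        "https://myassets-akamai.azureedge.net/probe.jpg",
    }

    func main() {
        for _, url := range endpoints {
            start := time.Now()
            resp, err := http.Get(url)
            if err != nil {
                fmt.Printf("%s error: %v\n", url, err)
                continue
            }
            resp.Body.Close()
            fmt.Printf("%s %s in %v\n", url, resp.Status, time.Since(start))
        }
    }

Run something like that from probes in the regions you serve and feed the results into whatever geo-DNS service you end up putting in front of the two profiles.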
Which makes Azure look like even more of a joke. They couldn't strike a real deal, buy a CDN, or build one - so they just integrate a couple? Then shrug and tell their customers to sort it out themselves?
The CDN product wasn't like this a while ago - it was just one offering.
That's what I really don't like about Azure.
They seem to have multiple differently named services that basically do the same thing (Azure Service Bus vs. Azure Storage Queues is one example that comes to mind).
Also, the Azure Portal is ridiculously bad and slow.
I actually like the Azure portal more than the AWS one. The slowness is annoying, yes, and the n-level menu structure was overwhelming at first, but the ability to manage all services from a single SPA makes it much better than the way the AWS console handles it, and being able to minimize things and run multiple actions at the same time balances out the slowness.
I'm not sure what MSFT thinks is a good buyer experience on this.
Buyer: I want to buy a CDN
Msft: do you want the Verizon or the Akamai one?
Buyer: um the CDN one?
Msft: due to some corporate deals we've made you have to choose
Buyer: %#]%^}^}%!!!1
When Verizon first enabled this they forgot to test with Firefox ESR, so they disabled enough ciphers that Firefox ESR could no longer connect to their servers. We reported it to them and they rolled back and disabled HTTP/2 the next day. Guess they've finished testing now.
No mention of HTTPS, but don't all current browsers require HTTPS for HTTP/2 support, IIRC? The bigger boost would be if they allocated certs (similar to Let's Encrypt) for the Azure CDN. Or do they just require you to set up a cert before HTTP/2 works?
The standard technically allows non-TLS HTTP/2, but most clients (and, I'm guessing, servers) don't support it. nginx, for example, doesn't support non-TLS HTTP/2.
Nginx certainly supports unencrypted HTTP/2. I'm using it for my personal website so that HAProxy can act as a separate TLS terminator, mainly so my cert renewal scripts still work even when there isn't an old cert available.
HTTP/2 is allowed to work over insecure connections as per the spec, but all browser vendors have decided that their browsers will not support unencrypted H/2 (no browser at all currently allows H/2 over HTTP). This means that in theory H/2 can run insecurely, but in practice it cannot.
I tried this a couple of days ago while setting up a new server, and when I enabled HTTP/2 on non-TLS server blocks, my clients would just try to download a binary blob instead of showing the page I expected.
That's a client issue, not an nginx issue: the browser is still speaking HTTP/1.1, so the raw HTTP/2 frames coming back look like an opaque binary response and get offered as a download. Try accessing it through an stunnel tunnel, for example.
That said, AFAIK the protocol negotiation (ALPN) happens at the TLS level, so a non-TLS server needs to listen on different ports for 1.1 and 2 if you want both.
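If you want to see cleartext HTTP/2 actually working end to end, the easiest demo I know of isn't nginx but Go's golang.org/x/net/http2/h2c package. A minimal sketch (the port is arbitrary):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "negotiated protocol: %s\n", r.Proto)
        })
        // h2c.NewHandler speaks HTTP/2 over cleartext, either via prior
        // knowledge or the "Upgrade: h2c" mechanism, and falls back to
        // plain HTTP/1.1 for everything else.
        srv := &http.Server{
            Addr:    ":8080",
            Handler: h2c.NewHandler(handler, &http2.Server{}),
        }
        log.Fatal(srv.ListenAndServe())
    }

curl --http2-prior-knowledge http://localhost:8080/ reports HTTP/2.0, while a browser pointed at the same URL only ever gets HTTP/1.1 - which is exactly the "allowed in theory, unsupported in practice" situation described above.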
We’re not breaking any news here, because Google Cloud Platform support for SPDY has been enabled for many years, and support for its successor (HTTP/2) was enabled earlier this year.
Yeah, I thought they knew. Here's what I thought might've been on their list:
1. You get a single IPv6 per instance (as opposed to a CIDR range).
2. You still have to deal with IPv4 CIDR ranges
3. ...which means you have to deal with all the overlaps still :(
IMO I would love it if my EC2 instances were IPv6-only, with public IPv4 addresses or dual-stack load balancers used as necessary for public ingress.
> That said, while HTTP/2 is now out, IPv6 is still a sad story, especially with AWS.
The problem I see with IPv6 is that it very much encourages a unique, even static address for each device on your network, which is a privacy and security hole that NAT addresses for IPv4. So I'm not exactly dying to use it if I can avoid it.
With a /64 (which is the standard allocation for residential connections) you could give your computer a new IP address every second for its entire lifetime and still leave the vast majority of the internal network's address space untouched.
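The back-of-the-envelope arithmetic, for anyone who wants to check (nothing IPv6-specific, just exponents):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        addrsPer64 := math.Exp2(64)                  // ~1.8e19 host addresses in one /64
        secsPerCentury := 100 * 365.25 * 24 * 3600.0 // ~3.2e9 seconds in 100 years
        fmt.Printf("addresses in a /64:   %.3g\n", addrsPer64)
        fmt.Printf("seconds in a century: %.3g\n", secsPerCentury)
        // Burning one address per second for a century uses roughly a
        // 1/5,800,000,000th sliver of the block.
        fmt.Printf("addresses available per second of a 100-year lifetime: %.3g\n",
            addrsPer64/secsPerCentury)
    }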
> If you have a properly configured firewall why is it problem? Especially given the massive address space.
Privacy? Security? Do you really need the whole outside world to know how many computers are on your network, and which one is browsing which sites or doing what exactly at which time? Most people don't, for privacy reasons... and if an attacker can single out a particularly sensitive computer to target, then that becomes a security risk too. If it's avoidable then you avoid it, it's as simple as that.
EDIT: I don't understand these random downvotes. The concern actually exists, that's why (as one comment pointed out) people have been proposing privacy extensions to IPv6. What in the world are people downvoting? You don't like reading facts?
You're likely being down voted due to your attitude. It's one thing to politely argue a position, quite another to ask rhetorical questions as though everyone else is stupid for not knowing the obvious.
Thanks for explaining, but what attitude are you seeing in the first [1] post? In this one I was annoyed at the downvotes on the first one and (later) this one, but what was wrong with the first one? People downvoted it for no reason... this seemed just a continuation of that.
That post seems fine. I'm not sure why it was downvoted.
If it makes you feel any better, some of my comments get downvoted too, especially in the first hour or so. Then over the next 24-48 hours the scores will go up again.
The worst thing to do is complain about the downvoting, though, because that pretty much forecloses any chance of recovery.
People aren't proposing privacy extensions; it's a finished standard that's been enabled by default in most major operating systems for many years now.
> People aren't proposing privacy extensions; it's a finished standard that's been enabled by default in most major operating systems for many years now.
Apparently [1] it's not that simple, and apparently [2] related work was still landing as late as 2014 (and I can't really say I've kept up with IPv6 news enough in the past 3 years to know how things look different now compared to 2014).
I also see a related Defcon article from 2015 [3] which claims "IPv6 is bad for your privacy" though I haven't read it yet.
On what devices? Home routers, probably. Things with an extensible stack based on BSD or Linux? No, those have very good firewalls. Better, I would say, since you don't have to deal with NAT crap.
Probably because it requires sockets to be kept open, which makes it a lot easier to DDoS the server (since you need to maintain HTTP + TCP state), especially at CDN traffic levels.
Too bad IPv4 NAT has made open ports on client machines nearly impossible; otherwise maintaining this connection would not be required.
Turns out it's extremely difficult to implement correctly and in a truly optimized fashion. Lots of variables. Do it wrong and you can hurt performance instead of help it.
Although correct use of the feature is still debatable, interoperability with CDNs is not. The Link header standard is the de facto way to communicate with frontends about what to push.
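Concretely: the origin emits a preload Link header on the page response, and a push-capable frontend (CDN or reverse proxy) treats that as the hint for what to push. A minimal origin-side sketch in Go, with a made-up asset path:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func page(w http.ResponseWriter, r *http.Request) {
        // Hint to any push-capable frontend that the stylesheet should be
        // pushed along with this page.
        w.Header().Add("Link", "</static/app.css>; rel=preload; as=style")
        fmt.Fprintln(w, `<html><head><link rel="stylesheet" href="/static/app.css"></head><body>hello</body></html>`)
    }

    func css(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/css")
        fmt.Fprintln(w, "body { background: #eee; }")
    }

    func main() {
        http.HandleFunc("/", page)
        http.HandleFunc("/static/app.css", css)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Whether the frontend actually pushes, merely preloads, or ignores the header entirely is up to the frontend; the header is just the signal.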
The next release of Caddy will support push, yes. But you'll have to configure it (for now - until we can figure out a good way to do it correctly, automatically).
Well, Caddy doesn't even support push until the next release. But even then, the Go standard lib will have to be able to proxy push frames through. We could look into a contribution to Go, or a workaround, when we or a contributor has time.
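For what it's worth, Go's standard library has since grown a server-side hook for initiating pushes directly from a handler (the http.Pusher interface, Go 1.8+). A rough sketch, assuming a TLS listener since that's the only place browsers will speak HTTP/2; cert.pem and key.pem are placeholder file names:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func page(w http.ResponseWriter, r *http.Request) {
        // On an HTTP/2 connection the ResponseWriter also implements
        // http.Pusher; on HTTP/1.x the assertion fails and we simply
        // serve the page without pushing.
        if pusher, ok := w.(http.Pusher); ok {
            if err := pusher.Push("/static/app.css", nil); err != nil {
                log.Printf("push failed: %v", err)
            }
        }
        fmt.Fprintln(w, `<html><head><link rel="stylesheet" href="/static/app.css"></head><body>pushed?</body></html>`)
    }

    func css(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/css")
        fmt.Fprintln(w, "body { background: #eee; }")
    }

    func main() {
        http.HandleFunc("/", page)
        http.HandleFunc("/static/app.css", css)
        // net/http turns on HTTP/2 automatically when serving TLS.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }

Proxying a backend's push frames through to the client, as described above, is a separate problem from initiating them.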