I'm not sure what you are saying is completely correct. In the cloud, though, you should design for that: everything will eventually fail and your service should cope, i.e. avoid stateful VMs, provide sufficient redundancy, etc. Design for failure. This isn't just about Azure; it's common cloud design practice.
Yes, you absolutely _should_ design for that. That's why it's kind of hard to get mad at this shortcoming, because the answer is just that you should have been following best practices anyway.
In practice, though, I often found myself getting annoyed that I had that enforced on me. On AWS or most other providers I've used, if you have a workload that isn't particularly mission-critical, you can get away with just expecting the machine to always stay up. It'll probably only go down once a year or so. Obviously not the best way to design a system, but a handy tradeoff to be able to make in order to save a little cash or complexity.
Well, the world does not run on ideal conditions. I have used AWS and Azure extensively, mainly around the Hadoop ecosystem. Here is what happens on Azure (HDInsight specifically):
You have a cluster running that does some work at certain times of day. Out of the blue, several machines get "re-imaged", which wipes all the logs, working directories, etc. It's like a snapshot restore. When that happens, you have lost any custom changes you made on the cluster, including any third-party jars you had placed on it.
Those machines rejoin your cluster, and your jobs no longer run because of the missing jars, unless you script things to re-download everything on every reboot.
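For anyone curious what that workaround looks like, here's a minimal sketch: a script you would hook into node startup (e.g. as a persisted script action or any on-boot hook) so it re-runs whenever a node gets re-imaged. The jar URLs and the lib directory are placeholders I made up, not anything HDInsight-specific:

    # Re-fetch third-party jars after a node re-image.
    # The jar URLs and target directory are placeholders, adjust for your cluster.
    import os
    import urllib.request

    JARS = [
        "https://example.com/artifacts/my-serde.jar",
        "https://example.com/artifacts/custom-inputformat.jar",
    ]
    TARGET_DIR = "/usr/hdp/current/hadoop-client/lib"  # assumed lib path

    def restore_jars():
        os.makedirs(TARGET_DIR, exist_ok=True)
        for url in JARS:
            dest = os.path.join(TARGET_DIR, os.path.basename(url))
            if not os.path.exists(dest):  # skip jars that survived
                urllib.request.urlretrieve(url, dest)

    if __name__ == "__main__":
        restore_jars()

It's not much code, but the annoying part is that you have to know to do this up front, and every custom change the cluster depends on has to live in that script.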
The most important thing, though, is the lack of prior notification. I have had cases where a "re-image" happened while my CPU was at 100% and my RAM was 90% used. So my box was in the middle of working on something and Azure decided to apply a patch!!
As for frequency, it's about once every two weeks, sometimes more often.