While an architecture where two distinct pointers could point to the same location is esoteric today (minus `mmap()` tricks), go back 30 years and it was not only not esoteric, it was downright the most popular system in existence! [1] Who's to say something just as esoteric won't become the mainstream again? (although I hope not---it was horrible).
The ARM Cortex-M4 supports addressing individual bits of memory locations (often registers of hardware peripherals) through additional aliased memory locations, so that writing to or reading from such an alias touches a single bit.
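That's the "bit-banding" feature (on parts that implement it): each bit in the peripheral region gets its own word-sized alias address, computed as alias base + byte offset * 32 + bit number * 4. A minimal sketch, with a made-up register address purely for illustration:

```c
#include <stdint.h>

/* Cortex-M3/M4 bit-banding: every bit in the peripheral region
 * 0x40000000-0x400FFFFF has its own 32-bit word in the alias region
 * starting at 0x42000000.  Writing 0 or 1 to the alias word clears or
 * sets just that one bit. */
#define BITBAND_PERIPH(addr, bit) \
    ((volatile uint32_t *)(0x42000000u + \
        (((uint32_t)(addr) - 0x40000000u) * 32u) + ((uint32_t)(bit) * 4u)))

/* Hypothetical peripheral register address, for illustration only. */
#define GPIO_ODR 0x40020014u

void toggle_pin5(void)
{
    *BITBAND_PERIPH(GPIO_ODR, 5) = 1;  /* touches only bit 5 of GPIO_ODR */
    *BITBAND_PERIPH(GPIO_ODR, 5) = 0;
}
```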
Analog Devices SHARC DSPs (which, I think, are still being sold with this architecture and still used, even though I last used them about 10 years ago) also alias their memory four times, depending on whether you want to access it as 16-bit, 32-bit, 48-bit, or 64-bit data.
Probably so the compiler doesn't just optimize out the statements, seeing as they don't seem to have a visible effect on the program execution because they're never read again.
Because writing to one location causes the data to change in another (a system register), you don't want the compiler to assume that data in an I/O register can be cached in a CPU register.
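A minimal sketch of what that looks like in practice, assuming a hypothetical memory-mapped UART at made-up addresses:

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers; the addresses are invented. */
#define UART_STATUS  (*(volatile uint32_t *)0x40001000u)
#define UART_TXDATA  (*(volatile uint32_t *)0x40001004u)
#define TX_READY     (1u << 0)

void uart_putc(char c)
{
    /* volatile forces a fresh load of STATUS on every iteration instead
     * of letting the compiler cache it in a CPU register... */
    while ((UART_STATUS & TX_READY) == 0)
        ;
    /* ...and keeps this store, which the program never reads back but
     * which has a side effect in the hardware. */
    UART_TXDATA = (uint32_t)(unsigned char)c;
}
```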
Harvard architectures might be another example that's not really strange. You can have two different pointers with the same value, one pointing into instruction memory and the other into data memory. However, why are we trying to compare data and instruction pointers? They are pointers to different kinds of things, one being instructions and the other data.
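As an illustration (assuming an AVR target and avr-libc, where flash and RAM are separate address spaces that both start at address 0), two pointers can hold the same numeric value and still name different bytes:

```c
#include <avr/pgmspace.h>
#include <stdint.h>

/* Flash and RAM are separate address spaces on the AVR, both starting
 * at 0, so these two pointers could hold the same numeric value while
 * referring to completely different storage. */
const char in_flash[] PROGMEM = "lives in program memory";
char       in_ram[]           = "lives in data memory";

uint8_t read_both(void)
{
    const char *pf = in_flash;   /* an address within flash */
    const char *pr = in_ram;     /* an address within RAM   */

    /* Comparing (uintptr_t)pf == (uintptr_t)pr tells you nothing about
     * whether they refer to the same bytes. */
    return pgm_read_byte(pf) + (uint8_t)*pr;
}
```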
True. But C is pretty restrictive in what you can assume. POSIX is more lax (for the programmer, not for the compiler) because it mandates certain behavior---like an all-0 bit pattern for NULL pointers, and the ability to cast `void *` to a function pointer (and back again).
That "`NULL` need not be physical 0s" [1] is a real pain in ANSI C because `memset(astruct, 0, sizeof(*astruct))` won't NULL out the pointers---you have to explicitly assign NULL to each pointer member. It's one of the reasons I tend to stick to POSIX if I can.
[1] In source, a 0 in a pointer context is a NULL pointer, but the architecture could mandate "all 1s" as a NULL pointer.
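A small sketch of the strictly portable version, with a made-up struct: zero the whole thing for the non-pointer members, then assign NULL to the pointer members explicitly:

```c
#include <stddef.h>
#include <string.h>

/* Made-up struct for illustration. */
struct node {
    int          count;
    struct node *next;
    char        *name;
};

void node_init(struct node *n)
{
    /* All-zero bits is only guaranteed to be a null pointer on platforms
     * where NULL happens to be represented as all zeros... */
    memset(n, 0, sizeof *n);

    /* ...so strictly conforming ANSI C assigns NULL explicitly, letting
     * the compiler emit whatever bit pattern the platform uses. */
    n->next = NULL;
    n->name = NULL;
}
```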
The zero-ness of NULL is a good example of the phenomenon I'm talking about. How much time have people wasted over the years tweaking and worsening code after somebody pointed out that some platform at one point might not have used the all-zero bit pattern for NULL? In how many cases did the program that was changed ever get anywhere near such a strange, special-purpose, and likely obsolete computer? I'm guessing the answer is "vanishingly close to zero." Being exactly ANSI C correct (as opposed to relying on POSIX's stronger guarantees) comes with an engineering cost, and there's no sense paying that cost unnecessarily.
It's one thing to do something because you found that it works. It's another if you know what the rules are and why breaking them will work in this case, so that if you (or someone else) needs to port the code, you know the potential problem spots that need attention.
[1] MS-DOS with its FAR pointers.
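A sketch of what that aliasing looked like, assuming a 16-bit real-mode DOS compiler such as Turbo C or Open Watcom that provides the `far` keyword and `MK_FP()`:

```c
#include <dos.h>   /* MK_FP() in Borland/Watcom-style DOS compilers */

int alias_demo(void)
{
    /* Physical address = segment * 16 + offset, so both of these refer
     * to physical address 0xB8010 even though their bit patterns differ. */
    char far *p = MK_FP(0xB800, 0x0010);
    char far *q = MK_FP(0xB801, 0x0000);

    *p = 'A';        /* *q now reads back 'A' as well */
    return p == q;   /* compares the raw seg:off values: false, despite
                        both pointers naming the same byte */
}
```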