How would any of these prevent a process from doing whatever it wants with its own memory, for example in the way I described in my previous post? Also, Microsoft Pluton at least does not do memory tagging.
If a process wants to commit suicide, there is hardly anything that can prevent it, I guess.
These measures prevent userspace from accessing the tagging metadata; that is how they prevent it. And the MMU usually isn't accessible from userspace anyway.
> By using spare bits in the cache and in the memory hierarchy, the hardware allows the software to assign version numbers to regions of memory at the granularity of a cache line. The software sets the high bits of the virtual address pointer used to reference a data object with the same version assigned to the target memory in which the data object resides.
> On execution, the processor compares the version encoded in the pointer and referenced by a load or store instruction, with the version assigned to the target memory.
Now, if someone goes out of their way to work around security mitigations and enjoy the freedom of corrupting their own data, maybe they should ask themselves whether they are in the right business.
> However it is now being combined with ARM technology
You mean PAC. PAC does not really do what you think it does; the original program can still corrupt its own data as much as it wants (even through bugs). And I don't think it has anything to do with Pluton.
> If a process wants to commit suicide, there is hardly anything that can prevent it, I guess.
> Now, if someone goes out of their way to work around security mitigations and enjoy the freedom of corrupting their own data, maybe they should ask themselves whether they are in the right business.
No; what I have been trying to show for the entire comment chain is that this is how things work _right now_ (the entire process's memory is a single opaque object/blob with the same tag), and that for obvious reasons you cannot forbid that and claim "there will be NO alternative": you will always be able to keep doing what everyone is doing _right now_, barring a significant upheaval in the traditional definitions of computing.
If you provide an allocator that integrates more closely with the hardware, that is all well and fine, but you just _cannot_ claim there will be no alternative, especially when the alternative is simply to keep doing what you're doing right now.
Again, we've had many architectures with memory tagging, hardware bounds checking, and whatever else you can think of. Even x86 had memory tagging at one point (386 segmentation, which was universally panned) and hardware bounds checking (MPX, almost completely ignored because of its performance cost).
Pretty much possible, and already available on the architectures listed above, with Solaris having a decade of production experience.