I think you're focusing too much on the literal specifics of the "attack vector" and not enough on the surrounding context, or the real-world utility and threat. You're not putting yourself in the shoes of someone who would actually use it and asking whether the cost-benefit ratio merits using it. Isn't that what you mean by "carefully examined"?
If you want to talk about a state-level actor, I hate to break it to you, but they have significantly more powerful and stealthier 0-day exploits that are a lot easier to use than a tactic like this. Guess what's the last thing you want to happen when you commit cybercrime? Doing it in public, where there's an immutable record that can be traced back to you, and causing a giant public hubbub, maybe? So I can't imagine how someone could think there's anything noteworthy about this unless they were unaware of that.
That's part of the unintentional humor and irony of this situation: all the researchers accomplished was proving that they were not just unethical but incompetent.
Upon further reflection, I think what I wrote regarding anonymity specifically was in error. I don't think removing it would serve much (if any) practical purpose from a security standpoint.
However, I don't agree that what happened was abuse or that it should be deterred. Responding in a hostile manner to an isolated demonstration of a vulnerability isn't constructive. People rightfully get angry when large companies try to bully security researchers.
You question whether this vulnerability is worth worrying about, given the logistics of exploiting it in practice. Regardless of whether it's worth the effort to exploit, I'd still rather it weren't there (that goes for all vulnerabilities).
I think it would be much easier to exploit than you're letting on, though. Modern attacks frequently chain many subtle bugs together, and being able to land a single, seemingly inconsequential bug in a key location could enable an otherwise impossible approach (see the sketch below).
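To make that concrete, here's a toy C sketch of the kind of one-character bug I mean (the struct, flag names, and values are all invented for illustration). It mirrors the widely reported 2003 attempt to slip a backdoor into the Linux kernel's CVS mirror, where a single '=' in place of '==' would have silently granted root:

    #include <stdio.h>

    /* Toy stand-in for a kernel task struct; uid 0 means root. */
    struct task { int uid; };

    #define FLAG_A 0x40000000  /* invented "magic" option flags */
    #define FLAG_B 0x00000080

    /* Reads like an innocent sanity check on the options argument. */
    static int check_options(struct task *t, int options)
    {
        /* BUG: '=' where '==' looks intended. The assignment zeroes
         * t->uid as a side effect and evaluates to 0, so the error
         * branch below never fires; the call succeeds, now as root.
         * The extra parentheses keep gcc's -Wparentheses quiet. */
        if ((options == (FLAG_A | FLAG_B)) && (t->uid = 0))
            return -1; /* dead error path, plausible-looking cover */
        return 0;
    }

    int main(void)
    {
        struct task t = { .uid = 1000 };
        check_options(&t, FLAG_A | FLAG_B);
        printf("uid after check: %d\n", t.uid); /* prints 0 */
        return 0;
    }

In a diff, that condition reads as a routine error check, and even gcc -Wall has nothing to say about it.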
It seems unlikely to me that the immutable record you mention would ever lead back to a competent entity that didn't want to be found. There's no need for anything more than an ephemeral identity that successfully lands one or two patches and then disappears. The patches also seem unlikely to draw suspicion in the first place, even after the exploit itself becomes known.
In fact, it occurs to me that a skilled and amoral developer could likely land multiple patches with strategic (and very subtle) bugs under different identities. These could then be "discovered" and sold on the black market. I see no convincing reason to discount the possibility that this is already a common occurrence.
The only sensible response I can think of is a focus on static analysis, coupled with CTF challenges designed to beat those analysis methods so the tooling keeps improving.
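On the static-analysis point, this particular bug class is already mechanically detectable. As one example (assuming a clang-tidy recent enough to ship the check, and with "patch.c" standing in for the file above), bugprone-assignment-in-if-condition flags an assignment inside an if-condition even when the extra parentheses have silenced the compiler's own warning:

    clang-tidy patch.c --checks='-*,bugprone-assignment-in-if-condition' --

The CTF angle is then about the adversarial cases such checkers still miss: invite people to craft patches that slip past the tooling, and fold the winning tricks back into new checks.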