You're correct, but I think it's worth going into a bit more detail about the tradeoffs involved in the concept of "Responsibility" as it applies to security research and exploit discovery, because there are a few clashing objectives that shape what the right choice is in each case, assuming the goal is to minimize harm to all parties (if someone just wants to sell the exploit to the highest bidder, none of this applies). On the one hand, a vendor-provided, well-tested patch is of course the desired permanent fix.
But what I think many people forget when they get "responsible disclosure" in their heads is that there are often band-aid mitigations users can apply to protect themselves immediately, regardless of whether a patch is ready, so long as they know about the problem. And since it's always possible, and generally unknowable, that someone else has already found the exploit and is using it, there is an extra, hard-to-calculate risk. Releasing without a patch may lead to some users getting exploited, but it could also protect other users from being exploited, or at least let them minimize the harm. Once the flaw is known in the wider community, it's also easier to check whether it's been selectively deployed anywhere. The lag time between notification and vendor patching is itself a risk (and of course there's plenty of room for perverse incentives in all of this).
So the real core issue with Coordinated Disclosure is that there is no Right Answer in general: any choice may help one group at the expense of another. Many researchers and organizations try to split the difference with standardized policies that seem to strike a balance, perhaps with occasional exceptions when something is serious enough. But it really is up to the discoverer, and it's wrong to insist they conform to whatever the vendor finds desirable, particularly since the responsibility for the blunder lies with the vendor in the first place. It's a hard area, and researchers should be respected for the work they do on their own terms.
tptacek's position is well thought out and well defended, by himself and others, and when expanded in detail it generally covers what you're describing here. I feel for him, because he's put in the position of being the mouthpiece for all this far too often, and it must get tiring repeating himself. Sillysaurus did a summary of his prior comments on this a while back and added some additional resources[1].
I've found myself in total agreement with that position in one instance, only for a big exploit to come out a week or two later that made me really wish some patches had made it out first.
What I think it comes down to is this: any vendor needs to assume the exploit can come out at any moment after notification, and act accordingly (if it's important, they'd better damn well get it patched quickly). Any researcher should assume that if they act like an asshole and aren't accommodating in some way, they'll get raked over the coals by at least some of the technical public. As tptacek noted, coordination is best, and that requires a dialogue.
Also worth noting is that sometimes there is no patch. Some security problems are so fundamental that the entire process is flawed, and in those cases there's little to be gained by waiting on the vendor, unless the vendor is working to notify all clients and recommend they stop using the affected service or product. If, for example, you identify a flaw in how a protocol is defined, such that almost all implementations are flawed, the only responsible thing to do might be to publish publicly. Otherwise you're just favoring some groups over others one way or another.