Thanks for the link. It's quite a weak rebuttal, however. For example, "this inability to distinguish between unset and default values is a nightmare" is ignored. And, to me, "If it were possible to restrict protobuffer usage to network-boundaries I wouldn’t be nearly as hard on it as a technology" is the most damning criticism, and that's also ignored. The so-called rebuttal basically amounts to "we're using it to make oodles of money in the ads pipeline, so it must be good software engineering." The claim about success in the market is completely true, but it's also not a rebuttal to the claim that the designers of protobufs were apparently ignorant of current computing theory and just patched together an ad hoc mess. Nobody capable of observation disputes that the standard substandard quality of most professionally written software is no impediment to success in the marketplace; there are too many examples to claim otherwise.
> this inability to distinguish between unset and default values is a nightmare
proto2 has always let you distinguish set from unset fields via the generated has_* accessors, and proto3 restored field presence for scalars with the explicit `optional` keyword in v3.15 (no idea about v1)
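For what it's worth, a minimal sketch of what that presence support looks like in proto3 (message and field names made up for illustration; assumes protoc >= 3.15):

```proto
syntax = "proto3";

message Price {
  // Without `optional`, amount == 0 is indistinguishable from "never set":
  // the generated code has no presence bit for plain scalar fields.
  // With `optional`, generated code exposes a has_amount() / HasField("amount")
  // check, so 0 and unset are distinct on the wire and in memory.
  optional int64 amount = 1;
}
```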
> If it were possible to restrict protobuffer usage to network-boundaries I wouldn’t be nearly as hard on it as a technology
It's a serialization format. It doesn't claim to be anything else. When people use it as their application's heap data model, they're misusing it. People are lazy and love to use their wire format as an internal data model (no one likes writing converters to/from the wire format), so this problem plagues everyone, but wire serialization is a fundamentally different problem from representing data within your application. When you conflate these problems, you get abominations like Java Serialization.
> wire serialization is a fundamentally different problem from representing data within your application
I agree with this and the author of the criticism evidently does too, but his position is that failing to make this distinction indicates that protobufs are poorly designed. I don't see how "programmers are lazy and fail to work around the poor design" is much of a rebuttal.
In my mind a good design has one or more appropriate representations for each abstract type. The marshalled representation is certainly one, but one might also want more than one in memory representation of the same type depending on the problem domain. The old Lisp trick of representing an immutable linked list with a single contiguous block of memory is one example[1].
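A minimal sketch of that idea in Python (all names are illustrative, not from any real library): one abstract immutable sequence, backed either by cons cells or by a single contiguous block, behind a common interface.

```python
from typing import Iterator, Protocol, Tuple


class ImmutableSeq(Protocol):
    """One abstract type; multiple concrete representations."""
    def __iter__(self) -> Iterator[int]: ...


class ConsList:
    """Classic linked representation: cheap structural sharing of tails."""
    __slots__ = ("head", "tail")

    def __init__(self, head: int, tail: "ConsList | None"):
        self.head, self.tail = head, tail

    def __iter__(self) -> Iterator[int]:
        node = self
        while node is not None:
            yield node.head
            node = node.tail


class VectorList:
    """Contiguous representation: cache locality and O(1) indexing."""
    __slots__ = ("cells",)

    def __init__(self, items: Tuple[int, ...]):
        self.cells = items

    def __iter__(self) -> Iterator[int]:
        return iter(self.cells)


def as_vector(xs: ImmutableSeq) -> VectorList:
    """Convert any representation of the abstract type to the contiguous one."""
    return VectorList(tuple(xs))


linked = ConsList(1, ConsList(2, ConsList(3, None)))
flat = as_vector(linked)
assert list(linked) == list(flat) == [1, 2, 3]
```

The point is that the choice of representation is a per-use decision; the abstract type stays the same, and converting between representations is a boundary operation.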
> failing to make this distinction indicates that protobufs are poorly designed
The author never offers any evidence that protobufs fail to make this distinction, just that people are lazy and misuse them.
You can't control what people do with generated types. I don't know how you'd even write a linter for Java or C++ that would know what an appropriate usage of protobufs is, to say nothing of writing such linters for every target language and then integrating them into every possible build system, CI framework, and code review application.
Those thorough Google code reviews you complained about earlier -- one of the things they taught me was not to use protos as the application data model. Read the proto off the wire, then convert it to another type, or, if you're in a hurry, wrap it in another type that does validation checks and hides the proto accessor methods.
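A sketch of that convert-at-the-boundary pattern in Python. `UserProto` here is just a stand-in for a protoc-generated message (all names hypothetical); the point is that the wire type never escapes the boundary function, and validation happens exactly once.

```python
from dataclasses import dataclass


class UserProto:
    """Stand-in for a protoc-generated message. Real code would use the
    generated class and parse it with ParseFromString()."""
    def __init__(self, name: str = "", age: int = 0):
        self.name = name
        self.age = age


@dataclass(frozen=True)
class User:
    """Application data model: validated, immutable, no proto accessors."""
    name: str
    age: int

    @classmethod
    def from_proto(cls, p: UserProto) -> "User":
        # Validate once, at the wire boundary; everything downstream
        # can trust a User without re-checking.
        if not p.name:
            raise ValueError("name is required")
        if not 0 <= p.age < 150:
            raise ValueError(f"implausible age: {p.age}")
        return cls(name=p.name, age=p.age)


user = User.from_proto(UserProto(name="ada", age=36))
assert user == User("ada", 36)
```

The wrapper variant is the same idea with less typing: hold the proto privately and expose only validated accessors.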
> I don't know how you'd even write a linter for Java or C++ that would know what an appropriate usage of protobufs is
I wouldn't try to massage protobufs into satisfying that need, but I do agree that it's an area that should get more research and development. Looking at it from a linter perspective is making the problem way harder than it needs to be though. Verification is, generally speaking, a much more difficult problem than construction. For example it's much easier to construct a product of two large primes than it is to verify that a given number is a product of same. Anyhow, I'm not one of those people who thinks the theorists know everything and practitioners are all idiots, I'm just one of those people who think practitioners should learn from theorists and that, sadly, the former are often irrationally resistant to the notion. Not always though, TLA+ is a great example of a tool working software engineers use to build real systems that are theoretically verifiable.
I was being a bit cheeky about the code reviews. I do think the reviewers delighted in the opportunity to be shitty to an unseemly degree, but I also agree it was net beneficial. That said, I did find the engineering quality in the SRE orgs to be significantly higher than in the SWE ones -- the complete opposite of every other company I've ever worked for.