Some of the fault for this lies with client libraries. Many, including popular ones[1], either don't provide a way to get the body of error responses at all, or make it particularly cumbersome to do so. For example, success bodies might be parsed automatically by registering the right parser for the content type, but to parse an error response you need to manually instantiate a parser and deserialise the body. Until error responses are first-class objects in client HTTP libraries, devs will continue being lazy.
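For contrast, here's a rough sketch of how little it takes when the library does hand you the error body (hypothetical endpoint, using Python's requests):

```python
import requests

resp = requests.get("https://api.example.com/widgets/42")  # hypothetical endpoint
if resp.ok:
    widget = resp.json()              # success body, parsed as usual
else:
    try:
        detail = resp.json()          # the error body is just as accessible...
    except ValueError:
        detail = {"message": resp.text}
    print(resp.status_code, detail)   # ...when the client exposes it at all
```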
I wish I only saw this once. This is my life. I integrate with 3rd party APIs for a living...what I've learned is that everyone's 'REST' API is terrible in one way or another.
Microsoft Silverlight (RIP) only exposed 200 and 404 status codes to the application when using the browser's HTTP stack, presumably due to some API limitation somewhere along the line.
I remember reading that ISPs and browsers can do interesting things to non-2xx responses. In that case, applications don't even get a chance to handle the error. IIRC Facebook's API does this because they encountered problems with mobile browsers deciding to do weird things with non-2xx responses.
No, this hasn't ever been a thing. This is just stupidity on the side of the developers.
You may be thinking about DNS hijacking by ISPs, but not non-200 issues. There are bound to be some terrible apps out there that respond badly as well, but that's really no excuse.
I am not talking about the app, I am talking about the environment the app is running in. Of course there are crappy apps (I've made a bunch of them!). I remember IIS doing funky things with non-200 return codes; 5xx codes would have some default handler. I am certain this was an issue with configuration and a dysfunctional development process. In the shop I was in, in the mid-2000s, we devs didn't have direct access to the web server, and having the IT guys make any update took on the order of weeks. Instead of fixing the IIS config we just returned 2xx and handled the error in the app layer. We also had a couple of SOAP (shudder) services that always returned code 200 with error codes in the response XML body. Much of our app code used this pattern for consistency. We also would deploy our services on various other web servers we didn't have full control over, so it was just easier to use this pattern.
A quick glance at history of HTTP can explain this without incompetence. HTTP is fundamental to the world wide web and existed from the beginning in 1989. Fielding's dissertation introduced the idea of REST in 2000. And REST became part of mainstream client libraries in what, 2010? So there's a 20 year period during which code was written against HTTP libraries which were not idiomatic from a REST perspective. Returning status codes in the response body is a common workaround for compatibility with code written in this 20 year window.
Status codes aren't something introduced by REST, they've been a standard part of HTTP since HTTP 1.0. They've been in widespread use for decades. People use 200 for errors for a few reasons, but it's not because HTTP 0.9 didn't have them.
I started programming professionally in 2005 and I can assure you that back then, outside of the browsers, people really didn't know or understand response codes.
There are a load of early, popular Stack Overflow questions that revolve around what status to return when, and how to actually return those codes in your language. For example, in ASP.NET/IIS it was actually quite hard to stop IIS 6 (?) from swallowing your 500 XML response and serving a custom HTML error page. If I remember correctly some browsers didn't support PUT properly either.
So while you are technically correct that the codes were in use, you are historically wrong: few people outside of a small community understood their use.
I remember this article from the first time around, when people were really getting into REST and there was a fierce debate about strict REST vs RESTful. In reality RESTful has mainly won and this article was on the wrong side of history.
> I started programming professionally in 2005 and I can assure you that back then, outside of the browsers, people really didn't know or understand response codes.
> So while you are technically correct that the codes were in use, you are historically wrong: few people outside of a small community understood their use.
I started in the late 90s, and even back then they were commonly used. They weren't just well understood by developers, but they were actually in non-developers' lexicons as well. If you asked the typical geek back then what a 404 was, chances are they'd be able to tell you that it meant something was missing. So I don't agree with you that they were understood by "few people outside of a small community" at all. Using status codes has been standard practice for decades. Maybe it took a couple of years at the beginning of your career for you to notice them, but there simply wasn't this time period of decades when they weren't in use until REST came along.
Given that Ajax was only added to browsers in 1999 and didn't really catch on seriously until 2003/2004-ish, what you're claiming is, again, at odds with actual history.
404s are a different matter because you'd get 404 pages, so it's not at all an argument to support your position. It doesn't mean developers understood what to return from an ajax call, or that they understood HTTP verbs.
I'll remind you again that it was actually fairly hard to return the correct codes from a lot of frameworks, so objective facts are at odds with your recollection of events.
I stick by my assertion that the vast majority of web developers in our industry didn't really understand http until the latter half of the 2000s. I also specifically remember presentations to all our developers both junior and senior of how browser caching worked, which to most developers at the time was a bit of a mystery, and the correct headers to return to control it.
Here are some examples from 2008 of developers on SO discussing things which seem obvious today:
> Given that Ajax was only added to browsers in 1999 and didn't really catch on seriously until 2003/2004-ish, what you're claiming is, again, at odds with actual history.
Ajax is not the only way to access a web service.
> 404s are a different matter because you'd get 404 pages
…which are returned with a status code of 404. That's where the name comes from. Non-200 status codes were ubiquitous even back then.
> It doesn't mean developers understood what to return from an ajax call, or that they understood HTTP verbs.
Ajax is irrelevant and we're talking about status codes, not verbs.
> I'll remind you again that it was actually fairly hard to return the correct codes from a lot of frameworks, so objective facts are at odds with your recollection of events.
Perhaps the technology _you_ were using made it difficult, but it certainly wasn't the general case. PHP, classic ASP, mod_perl, mod_python, CGI scripts… they could all respond with non-200 status codes easily. Which ones are you thinking of that made it difficult?
> Here are some examples from 2008 of developers on SO discussing things which seem obvious today:
Come on man, clueless questions get asked about extremely well established things on Stack Overflow every single day. That doesn't mean that those concepts are suddenly no longer well understood, it just means that the person asking is a beginner. And neither of those questions mentioned status codes at all!
I think IIS7's 5xx idiocy was more driven by misguided security concerns. Same as HTTP.sys requiring you to get permission to listen on a port versus not needing any permission if you just open a socket. Or same reason telnet.exe doesn't get installed by default.
Yeah, but there are bits in the ~2000 era backend request pipeline that don't understand anything but 200 or do weird things; embedding the response code in a 200 body lets us tunnel through whatever weird legacy shit.
> there are bits in the ~2000 era backend request pipeline that don't understand anything but 200
There's badly written buggy software from any time period doing all sorts of crazy shit, but only accepting 200 certainly wasn't the norm, or even commonplace back then.
This was a common workaround so that customers didn't get nervous, as some browsers showed an increasing number in a red circle at the top right (i.e. the number of JavaScript errors).
This used to be necessary with Flash clients, as they wouldn't get the response body if the response had an error status. I can't think of a reason to do something like this now that Flash is dead.
That leads to cleaner code for the frontend frameworks, better interoperability and simpler debugging. Web server error codes are for web server failures and application level errors are communicated via a 200, properly. I run into this all the time and once we adopted it (for all our APIs), life has been better for customers and our internal teams.
> "application level errors are communicated via a 200, properly"
Wiki:
2xx Success
This class of status codes indicates the action requested by the client was received, understood, accepted, and processed successfully.
RFC 2616:
10.2.1 200 OK
The request has succeeded
Not really "proper" to use it that way, but if it works for you and your users I guess go for it. Also, I highly disagree with the rest of it (cleaner code, better interoperability, simpler debugging), but those are matters of opinion so there's no point in arguing them.
> the action requested by the client was received, understood, accepted, and processed successfully
How is this not processed successfully? The request was processed successfully by the webserver ... and the error was handled by the application, so you can reliably say it was processed successfully at that level, down to the CPU. Narrow interpretation to fit a narrative has no technical merit. That being said, standards are set with certain models in mind and are imperfect almost uniformly, so I don't put too much stock in adherence to what someone imagined or put through committee at some point. I do try to find the best way to iterate reliably with least astonishing abstractions and predictable interfaces. I reason about it in the terms of the standard and it serves my teams.
> Web server error codes are for web server failures and application level errors are communicated via a 200, properly.
No, expressly in the HTTP spec, 5xx error codes are server errors. 4xx codes are all different kinds of application-level (usually resource-specific) errors.
> Web server error codes are for web server failures and application level errors are communicated via a 200, properly.
I guess Google, Facebook, Microsoft and all the others are doing it wrong then? Or improperly, according to you? Because they do not respond with a 200 on application level errors in their APIs.
They made a choice to treat multiple applications as one. That's not what I would prefer and is less useful in my opinion. There are some CDNs and payment processors in operation that use the 200 error response, so I tried it. I think it's clearly superior. YMMV.
The needs and use cases for Google, Microsoft, and Facebook are radically different from anyone else's. No developer should imitate them blindly because, chances are, the conditions you are working in do not match the big four. There is no ideal way of coding; instead, you match your style to what is needed. I wish more people understood that.
It's an error code, not a massively distributed system. There's no reason why the solution for the "big four" should be different than for anyone else. Especially since their APIs are designed to be used by everyone else.
You can blame old browsers for this. In the old days if you returned a non-200 the browser could die in all sorts of interesting ways. Heaven forbid you try and use any verb other than GET or POST; it probably turns off your computer's fan to try and start a fire or something.
Also, these days, I think the blame is more likely lazy devs, poorly complying frameworks, or compromises to make things easier for some frontend library.
The problem is that people want an RPC mechanism, and REST gives them a document transfer mechanism.
If your API will never be navigated by a human operating a browser, a lot of the REST specification is inapplicable (navigation links, etc.)
So you're throwing out a lot of REST regardless, and the question becomes where to draw the line between ease of implementation and compliance with a standard that doesn't really fit your needs.
Another problem is that people want an RPC mechanism, when what they need is a state-transfer mechanism.
It turns out that the semantics of RPC — attractive as they undeniably are — are pretty poor for building real-world distributed systems, while those of REST are a pretty good (or at least better) fit.
People need both. REST works well for CRUD operations that don't have complex side effects or constraints. What's often missing in this is intent. I just want to do "the thing" to this particular account or whatever.
Personally I like to very selectively add RPC actions on top of the base resource. Tacking an RPC action onto the resource URI allows you to encapsulate the intent of the user's action, handle all the updates required server side, and then return the updated representation.
I almost always have both. I tend to use event sourcing and the intent is mandatory. There might be 20 reasons to modify one resource, each with its own list of side effects to later perform.
So it's usually POST resource/:id/action
and that's fine.
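A minimal sketch of that shape, assuming Flask and a toy in-memory store (all names here are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Toy in-memory store standing in for real persistence.
accounts = {1: {"id": 1, "status": "open"}}

# The URL names the intent ("close"); the handler performs whatever side
# effects that intent implies, then returns the updated representation.
@app.route("/accounts/<int:account_id>/close", methods=["POST"])
def close_account(account_id):
    account = accounts[account_id]
    account["status"] = "closed"
    return jsonify(account)
```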
> So it's usually POST resource/:id/action and that's fine.
There's nothing wrong with that. It's not anti-REST or anything, assuming you satisfy the other REST constraints, i.e. each request is self-contained and any stateful resources are designated by URLs.
As an aside, I'm personally not a huge fan of human-readable URLs because it encourages API consumers to rely on/construct URLs client-side, which is not REST.
I think a more properly-REST approach would be to PUT a representation of the resource with the action applied. That is, rather than POSTing to /resources/:id/close, one would PUT a closed version of the resource to /resources/:id.
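Roughly this, in other words (hypothetical resource, sketched with Python requests):

```python
import requests

base = "https://api.example.com"                       # hypothetical service
resource = requests.get(f"{base}/resources/42").json()
resource["status"] = "closed"                          # apply the action to the representation
requests.put(f"{base}/resources/42", json=resource)    # PUT the whole closed resource back
```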
I don't see why that would be more REST. POST is to be used for side-effecting operations, PUT is an optimization over POST for idempotent effectful operations.
Certainly a PUT solution might have some advantages for replayability in case of network partitions, but REST doesn't dictate this choice one way or the other.
> If your API will never be navigated by a human operating a browser, a lot of the REST specification is inapplicable (navigation links, etc.)
That's incorrect. REST was designed for services of all kinds, not just human interfacing services.
The point of links is to support service upgrade via good old encapsulation. Consumers of your API shouldn't just guess links, as is commonly trumpeted as REST; they should navigate to the part of your service they need from a well-defined endpoint, and this navigation path has a well-defined lifetime specified by cache control headers (HATEOAS).
The service promises to honour the lifetime of any visited URL on that path as specified by said headers, and any client trying to use that path after the expiry date must be prepared to possibly receive error codes.
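In client terms, that navigation looks roughly like this (the link-relation names are made up, Python requests):

```python
import requests

# Start at the one well-known entry point and follow links from there,
# instead of constructing URLs client-side.
entry = requests.get("https://api.example.com/").json()   # hypothetical root
orders_url = entry["links"]["orders"]                      # assumed link relation in the response
orders = requests.get(orders_url).json()
# The Cache-Control headers on each response bound how long the client
# may keep reusing a URL it discovered this way.
```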
The biggest anti-pattern of all: marketing your API as RESTful, when it is really more RPC-like.
I agree on all points of this article. The only nitpick I have is that tunneling through GET/POST is strictly necessary for HTML forms, since they do not support other verbs.
> The only nitpick I have is that tunneling through GET/POST is strictly necessary for HTML forms, since they do not support other verbs.
Only if you ignore AJAX in its entirety. Yes, the browser's capabilities are limited, as it must be a common denominator and can't describe every situation, and thus plain HTML forms can only GET/POST.
If APIs are best represented via REST, then why aren't software APIs in general RESTful (e.g. I don't POST triangles to my GPU)? The answer: SOME APIs are best represented with REST, but most are not.
I agree, but I also have a counterexample, perhaps not yet as an interface between two systems. But let's start with this:
What if I can ask my OS to do things I normally do over some protocol like HTTP in a RESTful style? Create user, list directory, find the last login, tell me linux kernel version.
^ has been done as a separate monitoring tool like Osquery, but can't we make such a protocol natively? I don't want to parse my command-line output if I can just speak in one human-readable, machine-friendly dialect.
>What if I can ask my OS to do things I normally do over some protocol like HTTP in a RESTful style? Create user, list directory, find the last login, tell me linux kernel version.
The /proc filesystem is somewhat close to that when it comes to GET.
>I don't want to parse my command-line output if I can just speak in one human-readable, machine-friendly dialect
That's not what REST (the original concept) is about.
The author of REST, Roy Fielding, feels that they're intertwined [1]:
"HTTP/1.1 is a specific architecture that, to the extent I succeeded in applying REST-based design, allows people to deploy RESTful network-based applications in a mostly efficient way, within the constraints imposed by legacy implementations. The design principles certainly predated HTTP, most of them were already applied to the HTTP/1.0 family, and I chose which constraints to apply during the pre-proposal process of HTTP/1.1, yet HTTP/1.1 was finished long before I had the available time to write down the entire model in a form that other people could understand. All of my products are developed iteratively, so what you see as a chicken and egg problem is more like a dinosaur-to-chicken evolution than anything so cut and dried as the conceptual form pre-existing the form. HTTP as we know it today is just as dependent on the conceptual notion of REST as the definition of REST is dependent on what I wanted HTTP to be today."
I believe REST as Fielding defined it has some great benefits. But I'm not a purist. And actually, neither is the person who wrote this post. Because no matter how much the disciples try to wriggle their way around it, using cookies for sessions is not allowed.
"We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server.Session state is therefore kept entirely on the client." - from Fieldings dissertation.
To be fully compliant you have to use something like HTTP Basic Auth, where you resend the username and password with each request.
I think you should use cookies to store a token that the server can then use to determine whether the "context" of the request is "logged in" or not. I do it. But it's technically a violation.
I question that idea. Cookie is a standard header. So is Authorization. It's up to the server to require either. Basic auth header removes the need for a handshake to negotiate tokens. Authorization: Bearer (token) is now a standard. I suppose Cookie: (token) could be too, but too many people felt dirty.
My point is that Cookie does provide the credentials required for the call context, thus fulfilling the self-containment requirement.
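For illustration, all of these carry the credentials on every single request (hypothetical endpoint, Python requests):

```python
import requests

url = "https://api.example.com/me"   # hypothetical endpoint

# Basic auth: username and password resent with each request
requests.get(url, auth=("alice", "s3cret"))

# Bearer token: the token is the self-contained credential
requests.get(url, headers={"Authorization": "Bearer <token>"})

# Cookie: same idea, different header name
requests.get(url, headers={"Cookie": "session=<token>"})
```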
Missing anti-pattern: consider URLs insecure, especially if for a web browser. Don't include customer details (name, email, account number, etc.) or search queries (free text) unless you have determined that the security settings on your logging, audit, proxies, etc. all conform to data protection requirements that suggest they should be encrypted and only visible to necessary staff. If you expect pages to be bookmarked or shared, then consider the security impact of where they are stored on local machines too, including in caches, if your company is silly enough not to enforce disk encryption for all users.
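For example (hypothetical endpoints, Python requests):

```python
import requests

# Risky: the free-text query ends up in the URL, and therefore in server logs,
# proxy logs, browser history, and potentially Referer headers.
requests.get("https://example.com/search", params={"q": "jane.doe@example.com"})

# Safer for sensitive terms: carry them in the body of a POST instead.
requests.post("https://example.com/search", json={"q": "jane.doe@example.com"})
```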
> consider URLs insecure, especially if for a web browser.
A web site I used recently puts your session ID in the URL. If you log in, then alter the URL to remove the session ID, you appear logged out.
It gets even worse. Clicking the "Log out" button on the page simply removes the session ID from the URL. If you go back and reload the web page with the session ID in it again, you still appear logged in.
The page also doesn't use HSTS so is easily vulnerable to SSLStrip.
> A web site I used recently puts your session ID in the URL. If you log in, then alter the URL to remove the session ID, you appear logged out.
Nothing inherently wrong with that, but it depends on the situation.
> Clicking the "Log out" button on the page simply removes the session ID from the URL. If you go back and reload the web page with the session ID in it again, you still appear logged in.
This is common in servlet-container-land (Java) for handling browsers that don't support cookies. The URLs are rewritten to include the session-identifier (usually a cookie) as a request parameter instead.
There's no other mechanism to associate an HTTP request with back-end state (logged-in/out, etc.) except for session identifiers transmitted by the client browser (through cookies, headers, request parameters).
My #1 complaint is that there is no OOP client available from the service provider, i.e. a driver that consumes the response and turns it into your client code. The irony is I often write my own harness for REST services, and those are generally object-oriented, because I don't want to speak HTTP over and over. Basically I build a client while writing tests, but then I need to write tests to assert the client that was developed to help write my tests.
Anti-pattern #1: Using REST. It's just cargo cult along with everyone describing it and then implementing it differently. The only good part it has is how to layout the URL, and even then it's just common sense.
One can't really fault REST for the flawed implementations - Fielding's thesis is very explicit about what REST is and isn't.
Just because something becomes an often-misapplied buzzword doesn't say anything negative about the original concept so much as the people mis-implementing it.
seriously, i don't really care whether an api is restful or restish. at the end of the day it's all the same to me. just give me an api endpoint, request and response objects nicely documented, and i am golden.
Have we considered that these REST "anti-patterns" exist because REST is fundamentally inappropriate for what most people are trying to use it for? What if you can't shoehorn your functionality into the handful of REST verbs? What if none of the status codes make sense?
What ever happened to plain old RPC? Have we stopped to consider that people tunnel things through POST or GET requests because it's easier and more flexible than trying to cram your functionality into GET, POST, PUT, PATCH, or DELETE?
If you find yourself using a lot of these anti-patterns, maybe you should consider switching to something a little less "REST-ful".
Most things on the web are half-assed, and half-assed HTTP APIs enable rapid results (before the problems start).
REST is not fundamentally inappropriate, it just needs a lot of careful design about domain objects, link relations, and media types. Generally, people are not very good at thoughtful design.
It didn't help that REST began to trend as an idea right around the same time that intentionally schemaless JSON was replacing schema'd XML documents as the preferred format for information interchange over the web. For consumers, schemaless JSON snippets were attractive for partial processing; for developers, they were attractive for rapid iteration. For makers of "Web 2.0 Mashups", JSON-returning APIs were attractive because processing XML with circa-2006 "cross-browser" nightmare-mode Javascript was about as pleasurable as pulling teeth.
People saw these APIs being called REST, they tried to understand REST, got overwhelmed halfway through, called it REST-like or RESTful instead, and that's how we arrived at where we are.
During this time, RPC wasn't cool or buzzword-compliant, so the people who still did RPC did it for good reasons and didn't really blog about it. The quip to consider RPC is nonetheless valid; stuff like gRPC or Thrift are at least proper RPC frameworks, and a much better idea than someone trying to duct-tape something together with GET and POST for the millionth time.
Luckily, soon, GraphQL will be the newest entrant in this space, and will have to contend with an influx of superficially-informed people enticed by its promise. It may have a better track record than REST, because a partial implementation of GraphQL will better resemble GraphQL than a partial implementation of REST will resemble REST.
It helps that GraphQL has an actual spec, protocol, and defined schema and is intended to be shipped as a separate microservice that just implements your schema according to the spec.
REST is more "hodge podge of concepts" that you can hack on and pollute and misunderstand at will.
I guess nothing will stop people from making Frankenstein schemas or poorly performing resolvers but at least the consumption is mostly uniform?
Some people would rather take a hodge podge of concepts over a hodge podge of implementations. HTTP clients have been done to death, GraphQL clients on the other hand...
>REST is not fundamentally inappropriate, it just needs a lot of careful design about domain objects, link relations, and media types. Generally, people are not very good at thoughtful design.
Honestly, this should be a common part of web frameworks by now. It's a travesty that our industry hasn't developed a solid foundation for this kind of thing 16 years after it was published.
The best we've got on this mark is the Waterken server, which is pretty good, but not good enough.
> Have we considered that these REST "anti-patterns" exist because REST is fundamentally inappropriate for what most people are trying to use it for?
Or perhaps people actually don't understand REST, but think they do, thus leading to endless blog articles about "REST levels" and other nonsense.
> What if you can't shoehorn your functionality into the handful of REST verbs?
GET and POST can represent any arbitrary program (they map to the lambda calculus after all). You don't need any more than that, in principle. The other verbs are merely optimizations.
> What ever happened to plain old RPC?
The inescapable failure modes of RPC are exactly what REST addresses.
There's nothing really wrong with RPC in my book, just don't call it REST when it's not. I'd say the bigger issue is that a large number of people implement an RPC variant and call it REST because it's on HTTP and not SOAPy.
Is RPC just ad hoc endpoints? The info on REST is overwhelming, with too many different opinions. For a simple SPA, do programmers really need a formalized client-server contract? Can one just access server capabilities through ad hoc endpoints and write custom code to fetch the data they need? Servers define procedures, and they return data.
I ask because I'm going to start writing my first http api for a small SPA.
It just causes so much busy work. Then busy work leads to bugs, which leads to framework abstractions, which leads to implicit nuance in almost every part of web development.
RPC and client-server contracts, as well as static typing, are a bloody godsend, and the whole world will be in a better place once we formalize and accept them (I didn't say standardize, I said formalize).
I'll defend REST's utility for simple data retrieval. If I just need a paginated list of comments on a video, being able to quickly hit /video/1093/comments and get that list back is really nice.
More complex use cases, and specifically non-idempotent operations, is where I find REST doesn't hold up as well as RPC.
> If I just need a paginated list of comments on a video, being able to quickly hit /video/1093/comments and get that list back is really nice.
REST isn't about human-readable URLs. The link to the comments should have been part of the representation returned for /video/1093 (typically JSON these days, so a "comments" property of the object).
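i.e. something along these lines, with made-up field names:

```python
# Representation returned for the video itself (illustrative shape only)
video = {
    "id": 1093,
    "title": "...",
    "links": {
        "comments": "https://api.example.com/video/1093/comments",
    },
}

# The client follows the link it was handed instead of building the URL itself.
comments_url = video["links"]["comments"]
```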
As ben_jones stated, a client and server need a formalized contract or you're doomed to break things. Whatever you use needs to at a minimum be:
- internally consistent
- sufficiently documented that a developer on either side understands what the other side does
REST is just a set of rules to follow that attempt to avoid some pitfalls and provide a common parlance.
nothing wrong with a good, well-documented JSON RPC over HTTP
the main difference is endpoint negotiation vs object state transfer - consider an account representation - if you can withdraw, the link to perform the withdraw operation is there in the server response. if you can't, the link to the withdraw operation is not there. this is how the returned value conveys the object state and how it lets the client explore object operations.
say, it's the difference between returning an object instead of a struct, and it's also why JSON alone is not compatible with REST: there's no agreed schema to identify operations coming along with the data so that a client can action it. JSON-LD and HAL can, though, if you really hate the idea of XML.
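A sketch of that in HAL-ish terms (shown as Python dicts, field names illustrative):

```python
# Account you can withdraw from: the operation's link is present.
open_account = {
    "balance": 120.0,
    "_links": {
        "self":     {"href": "/accounts/17"},
        "withdraw": {"href": "/accounts/17/withdrawals"},
    },
}

# Frozen account: the withdraw link simply isn't there, so the client learns
# what it may do from the representation rather than from hard-coded rules.
frozen_account = {
    "balance": 120.0,
    "_links": {
        "self": {"href": "/accounts/17"},
    },
}
```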
As a shorthand, think of REST as about nouns, and RPC about verbs.
/account/1 is REST because the you're looking for an account (noun), the account id is 1, and you're using the HTTP method GET.
/search/hello is RPC because it uses a specific verb (search) to call a remote procedure to search for the 'hello' keyword. You're taking an action for which there is no appropriate HTTP method, so RPC can be used instead of REST.
This doesn't cover 100% of every situation, but it's useful as a quick mnemonic.
That's incorrect. REST has no such restrictions on URLs. URLs can be completely random numbers with no human-discernible meaning.
REST is about architecture, about where state should live, how long it should live, how messages between entities should designate resources/state, all so you can preserve encapsulation and maximal flexibility for service upgrade. Your service is not RESTful if you don't meet these criteria.
RPC has no such restrictions, which means you're free to do everything wrong, which virtually everyone does, and you'll still be doing RPC correctly.
Probably not. If "what this is and how it relates to other things" is determined by looking at the URL path and not the media type (for "what it is") and link relationships (for how it relates to other things), it's not REST.
And that's the issue. Most engineers don't understand HTTP. Just the number of people who confuse PUT and POST is pretty high. Getting people to remember that media types exist is hard. Add in the rest, and it's just not happening.
But the unicorn "properly designed REST api" is quite nice.
Since I can't trust anyone to do it right, though, I'll just use GraphQL because it's easier to get people to do right.
Reading over your comments, I'm curious what you would consider to be an API that properly implements REST principles according to the original dissertation.
When people mention RPC in this context, are they talking about a generic concept, or are they referring specifically to SOAP? Because SOAP was always a complex mess, pretending that WSDL files somehow added semantic meaning to data, when in fact it was just a serialization format with excessively verbose data typing and validation of element presence/counts. SOAP made even the smallest projects feel like a bloated nightmare typical of "Enterprise bureaucracy". SOAP is the very reason why people eagerly latched onto JSON and REST the moment they became viable.
For a public facing interface, I haven't really found REST to be lacking. Adherence to REST buys you simplicity and (if kept to the standards) some implicit understandability.
That said, REST is not the right fit for _every_ use case. The same simplicity mentioned as a strength also limits its flexibility. I think this is no more abundantly clear than microservice oriented architectures. More and more these architectures are moving towards different patterns/protocols for various reasons (gRPC comes to mind).
> What if you can't shoehorn your functionality into the handful of REST verbs?
REST doesn't have a handful of verbs, HTTP has a handful of predefined verbs (but supports extensions). REST is an architectural style that does not specify the underlying protocol.
> What if none of the status codes make sense?
Again, that's an HTTP issue not a REST issue. And it's not likely to be a real issue (HTTP status codes may be insufficiently precise—but already support additional data for disambiguation—but I can't imagine a situation where none of them make sense.)
Much as I would like to agree with you, that war was already fought and lost in the early '00s. Port 80/443 were the two ports that couldn't ever be firewalled because web browsing depended on them, so everybody rushed to cram their RPC requests into some kind of HTTP-based protocol.
HTTP != REST though, as the article notes. There are other ways to use port 80/443 that neither violate HTTP nor go full REST - websocket being an (extreme) example.
Even the HTTP monopoly is changing. There are lots of new protocols (HTTP/2, QUIC, WebRTC) that work despite firewalls, and things like ALPN give a standardized way to tunnel new protocols over an encrypted connection.
My favorite is when everything returns a 200, but the response is something like:
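```
{"success": false, "error": "Permission denied"}
```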
Sometimes they even include the 403 in the response, almost like the developer is giving you a giant middle finger.