Why REST is not a silver bullet for service integration (getprismatic.com)
75 points by century19 on Jan 25, 2015 | 23 comments



> Is the web really a model we want to emulate?

I can't think of a single more successful achievement of software engineering, so can someone explain why I shouldn't answer with an emphatic, "yes!" to this question?

I get that the web has all kinds of flaws, and maybe we can do better, but as far as I know, we haven't even come close to actually building something as useful or ubiquitous or stable as the web.

I've been using redis pipes (and databases in general) to get services to talk to one another, and that has its uses, but why must I choose between that and REST? Why can't there be a place for both?
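For the curious, the pattern I mean looks roughly like this (a minimal sketch, assuming a local Redis and the redis-py client; the queue name is made up):

    # A minimal sketch of the "redis pipe" pattern, using the redis-py client.
    # Assumes a Redis server on localhost; the queue name "jobs" is arbitrary.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Producer side: one service pushes work onto a list.
    r.lpush("jobs", json.dumps({"task": "resize", "image_id": 42}))

    # Consumer side: another service blocks until work arrives.
    # BRPOP gives us time decoupling: the producer need not be up when we read.
    _queue, raw = r.brpop("jobs")
    job = json.loads(raw)
    print("got job:", job)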


> I can't think of a single more successful achievement of software engineering, so can someone explain why I shouldn't answer with an emphatic, "yes!" to this question?

Because you've taken a single question from TFA out of context as a straw-man. Yes, the web has been enormously successful. But that's simply irrelevant to the practical questions posed by the author. To stitch together the full question that Ben Morris is really posing using a quote from the article:

Is the web a model we really want to emulate [as] "the only collaboration fabric for a large and complex set of decoupled, autonomous and collaborating services"?

To which I could just as emphatically quip "no!" Note the "only" in there. Morris is explicitly not throwing REST out, but questioning the REST-uber-alles mindset by pointing out limitations of that paradigm.

Immediately after the quote you pulled comes (IMO) the best part of the article:

It’s not just HTTP that provides a limited model for service integration. The web has been an inspiration for REST but is it really that successful a distributed system? It’s slow, fragile and prone to security problems. Its vast, decentralised nature makes it impossible to find anything useful without indexing engines that are so vast that only one or two companies have been able to create them.

It's not, again, that the web isn't successful. It's that the applicability of the solution to the problem is being questioned.


Maybe the article could have been more succinct with some SOA jargon, but a service description language (WSDL) and a service discovery system (UDDI) are important, and neither is a well-solved problem in practice despite the multitude of standards from the late 90s and early 2000s. So the question the author poses, as I read it, is: what has REST provided so far that solves legitimate enterprise-architecture concerns and is worth continuing to build on? UDDI has clearly failed in the enterprise by now, and with REST progress seems to have stopped there anyway. Instead of either, everyone has moved to message-passing architectures that accept the reality that services vary enormously in maturity and semantics across the enterprise world. As far as service standardization and ontologies go, REST is about as vague a standard as AMQP or even JMS.

To me, REST in the enterprise simply reflects how broken SOAP and XML-RPC were in practice: they made for far too inflexible, monolithic services that took years to move through major versions. With REST you can at least decompose resource URLs and try to scale out from there, instead of being stuck with the service definition's XML schema for every resource. Sure, I've seen some schemas that are composed, but in the world of enterprise software vendors and internal bureaucratic services you will devolve to the pathologically simple cases of your architecture just to get the bare minimum done, and REST works better in practice here than anything in the WS-* standards family.


But he doesn't say much about alternatives. Of course we should try to avoid having a single solution for everything, but if there is no better option right now then REST can and should be used. If his point was that REST only works in some limited circumstances, he should have said what those circumstances were and, again, what options we have for the other cases. I still think REST works for a lot of things, though you might want to add some extra conventions or standards within an organization.


None of the complaints about time coupling and location coupling are specific to REST at all. They're simply properties of any distributed system, though they're more noticeable the more latency-bound the system is. And since CPU clock speed won't really be getting any faster, we have to scale out if we're going to scale at all.

I think this illustrates pretty well that an actual understanding of what REST means is rarer than it should be.


Discovery mechanisms can help with location coupling to some extent, though of course they don't solve it. Queueing and messaging protocols (like ZeroMQ) likewise help with time coupling to some extent.
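A toy illustration of the queueing point with pyzmq (both ends in one process just to keep the sketch short; note ZeroMQ's queues are in-memory, not durable):

    # Rough sketch of queueing with ZeroMQ (pyzmq). Messages sit in ZeroMQ's
    # internal queues, so sender and receiver need not act in lockstep --
    # a partial answer to time coupling, though not a durable store.
    import zmq

    ctx = zmq.Context()

    receiver = ctx.socket(zmq.PULL)
    receiver.bind("tcp://*:5557")

    sender = ctx.socket(zmq.PUSH)
    sender.connect("tcp://localhost:5557")

    # The sender can fire off messages while the receiver is busy elsewhere;
    # they queue in memory until recv_json() is called.
    for i in range(3):
        sender.send_json({"seq": i})

    for _ in range(3):
        print(receiver.recv_json())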

Split brain issues are harder to solve, but of course there are protocols to deal with that too.

Small nitpick:

> And since CPU clock speed won't really be getting any faster, we have to scale out if we're going to scale at all.

What does CPU clock speed have to do with all this? It certainly doesn't affect communication latency and is a rather poor indicator of computing performance.

While clock speeds have stagnated, single-core (i.e. single-thread) performance has been steadily increasing. Compilers are just not fully exploiting the additional computing power yet.

Then there's always NUMA (non-uniform memory access: multiple CPUs and memory subsystems networked together at the hardware level) and, at a larger scale, RDMA (remote DMA).


My intention with the CPU speed comment was to illustrate that we can't just throw more work at single cores. Even distributing work across cores comes with a latency and efficiency penalty. And even that isn't enough for some web-scale applications, which have to be spread across thousands of computers distributed across the world in order to handle the load and keep latency reasonable.

So my point was mostly that distributed applications are unavoidable because you just can't scale up past a certain point.


The URL itself provides a certain amount of location decoupling, and you can add in redirects to help with that.

I also find the temporal coupling aspect misleading: you can, for instance, use an Atom feed (or equivalent) to record events; it's then up to clients to consume them as appropriate.
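A rough sketch of the feed-consumer side, assuming the feedparser library and a made-up feed URL:

    # Sketch of the event-feed idea: the server records events in an Atom feed,
    # clients poll it and process entries at their own pace.
    import time
    import feedparser

    FEED_URL = "https://example.com/events.atom"  # hypothetical
    seen = set()

    while True:
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            eid = entry.get("id") or entry.get("link")
            if eid not in seen:
                seen.add(eid)
                print("new event:", entry.get("title"))
        time.sleep(30)  # clients choose their own cadence -- no time coupling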



This article would be much more useful if the author discussed alternatives. As it stands, he's not really telling us anything most serious API developers don't already know. We choose technologies like Amazon SQS because it provides the durability and scalability we need for services to talk to one another. The author fails to say why this is a bad thing, other than to talk about coupling. There is always coupling in such a system, whether it's monolithic or microservice-based; the real discussion is what either choice buys us. Don't mind scaling vertically? Build a monolithic API. Want to scale horizontally? Build microservices.
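For example, the SQS pattern looks roughly like this (a sketch, assuming boto3 and an existing queue; the queue URL is a placeholder):

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

    # Producer service:
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 123}')

    # Consumer service, possibly on a different box, possibly much later:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        # Durability: the message reappears unless we explicitly delete it.
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])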


Honest question: what do “scaling horizontally” and “scaling vertically” mean? I assume that “scaling” means “do more/less work of the same kind in the same amount of time”.


Scaling horizontally is basically "we can add machines to make it go faster", while "scaling vertically" means "getting a faster machine, or more disk/memory/CPU/etc., makes it go faster". With horizontal scaling you're usually talking about adding "commodity" hardware, though that's no longer entirely true (I believe AWS, at least, uses customized boards, form factors, etc. to achieve better economies of scale).


Vertical scaling is where you add more resources to a machine, e.g. more RAM or processing power.

Horizontal scaling is where you add more machines to your pool of resources (servers).

Examples:

Horizontal scaling would be adding more web servers behind the load balancer to facilitate more traffic.

Vertical scaling would be adding more RAM to your database server to keep all the data (or just the indexes if your database is that big) in memory.


The short version: scaling horizontally means getting many more boxes; scaling vertically means getting a bigger box (replacing the one you have).


The point about temporal coupling is not solved by asynchronous messaging either. If the client requires a response before proceeding, then the client will block regardless of the type of messaging used, and the responding service still has to be available for the client to receive a response.
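A toy demonstration, with stdlib queues standing in for whatever broker you like:

    # If the caller needs the answer before proceeding, async transport
    # doesn't remove the wait -- it only relocates it.
    import queue
    import threading
    import time

    requests_q, replies_q = queue.Queue(), queue.Queue()

    def service():
        time.sleep(5)  # service is slow or temporarily down
        req = requests_q.get()
        replies_q.put(f"result for {req}")

    threading.Thread(target=service, daemon=True).start()

    requests_q.put("job-1")    # "fire and forget" -- returns immediately
    result = replies_q.get()   # ...but we still block here until it answers
    print(result)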


I think that part of the problem with REST implied by the article is that it is purely request/response, rather than supporting full bidirectional communication.

To me, the term "temporal coupling" skips some details, since the real consideration is the duration of the transaction vs. the duration of the transport session. REST-over-HTTP can't directly represent transactions that span TCP sessions, and this is a problem if the transaction is very long or the connection is choppy.
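One common workaround is to reify the long-running transaction as a resource, so it can span many short HTTP exchanges. Something like this, against an entirely hypothetical API (sketched with the requests library):

    import requests

    BASE = "https://api.example.com"  # made-up endpoint

    # Begin: the server hands back a transaction resource we can return to,
    # even from a fresh TCP connection after a dropped one.
    tx = requests.post(f"{BASE}/transactions").json()
    tx_url = tx["href"]

    # Each step is an independent request/response against that resource.
    requests.put(f"{tx_url}/items/1", json={"sku": "A-100", "qty": 2})
    requests.put(f"{tx_url}/items/2", json={"sku": "B-200", "qty": 1})

    # Commit is just one more request; until then the server holds the state.
    requests.post(f"{tx_url}/commit")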



I believe at least some of the pain points of "modern REST" mentioned in the article would be eliminated by wider adoption of proper FSM-based REST implementations, e.g. [1] or [2] (the latter is a spec, not a concrete implementation). This way, REST becomes a "template" for an "autonomous agent"'s logic, kinda like our brains are "templates" for our personalities. Adopting such a "template" helps to establish invariants, which are crucial for large systems.

[1] https://github.com/basho/webmachine

[2] https://github.com/for-GET/http-decision-diagram/
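To give a feel for the shape of it, here's a drastically reduced sketch in Python of the decision-graph style (webmachine itself is Erlang, and the real graph has dozens of nodes; everything here is illustrative only):

    # The framework owns a fixed HTTP decision graph; a resource only
    # answers yes/no questions, so the invariants live in one place.

    class Resource:
        def allowed_methods(self):
            return {"GET"}
        def resource_exists(self):
            return True
        def to_json(self):
            return '{"hello": "world"}'

    def run_decision_graph(resource, method):
        # A real graph has ~50 nodes (see the http-decision-diagram link);
        # three are enough to show the invariant-enforcing style.
        if method not in resource.allowed_methods():
            return 405, "Method Not Allowed"
        if not resource.resource_exists():
            return 404, "Not Found"
        return 200, resource.to_json()

    print(run_decision_graph(Resource(), "GET"))     # (200, '{"hello": "world"}')
    print(run_decision_graph(Resource(), "DELETE"))  # (405, 'Method Not Allowed')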



> Why REST is not a silver bullet for service integration

Since REST is so often misunderstood, even by many of its advocates, I’ll read the title charitably, and assume that the author is addressing claims made by well-meaning but misguided people. But, with respect to the arguments of actual REST experts such as Roy Fielding himself [1], the title is a straw man.

> Is the web really a model we want to emulate?

People tend to think that the technicians who use tools depend solely upon the engineers who make tools, which are in turn based solely upon the theories that scientists and researchers discover. But what often happens is that engineers first build things (like bridges) that scientists then study in order to come up with a general theory that is applicable to nature.

The Web is just such an example of this kind of sequence of events. You see, until TBL’s WWW took off, there were several competing efforts to create platforms for networked information systems. The earliest, and perhaps most (in)famous, is Ted Nelson’s Xanadu. The thing that kept tripping up some of the other efforts was the focus on information provenance. That is to say, almost everyone thought that we needed two-way hyperlinks so that a document was always connected to its source [2]. Of course, the WWW did away with that and also just happened to become wildly successful. But here’s the thing: the Web’s success went against theories that go back to at least 1960, perhaps even to 1945 with Vannevar Bush’s Memex. So people like Fielding studied the web to come up with a new theory of networked information systems, in the same way that scientists might study bridges to come up with a theory of bridge building.

This is all to say that it’s wrong to think of REST as a post-hoc justification of “HTTP as a good information architecture”; rather, the point was to figure out why the Web’s architecture is successful, and to come up with a theory that may be generally applicable to networked information systems.

> REST creates temporal coupling…

> …and location coupling too

Believe it or not, a “server” is just a name, and “abstraction” is just naming things. Components in RESTful systems are only ever “tightly coupled” to an abstraction. But one of the most common mistakes people make in trying to understand REST is to focus on the URI part of the API, when it’s the media types that are most important. This is probably because people are used to the somewhat impoverished abstractions afforded by classical OO languages (nb: I am a big fan of OOP) [3]. Indeed, if you follow the prescription of REST that most of an API’s descriptive effort should be focused on media types, along with HATEOAS and caching, then you end up with a system that is actually less coupled than even the most interface-heavy OO architecture.
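To make the media-type/HATEOAS point concrete, a client coupled only to an entry point and link relations might look like this (a sketch using HAL-style conventions; the API itself is invented):

    import requests

    root = requests.get("https://api.example.com/",
                        headers={"Accept": "application/hal+json"}).json()

    # The server is free to move /orders anywhere; we only know the rel name.
    orders_url = root["_links"]["orders"]["href"]
    orders = requests.get(orders_url).json()

    for order in orders["_embedded"]["orders"]:
        # Follow the server-provided link for each item, again by rel.
        detail = requests.get(order["_links"]["self"]["href"]).json()
        print(detail)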

> REST comes in many different flavors

I’m presently working on a paper that I hope will remedy that.

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...

[2] https://www.youtube.com/watch?v=cCvf2DZzKX0#t=2970

[3] And also because Fielding didn’t have enough time to give media types proper treatment in his dissertation: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...


I totally agree with your comment. This should be at the top.

Now my rant: I blame frameworks for the "inventor's drama" whenever someone builds a "RESTful API". We need frameworks (or libraries) that focus on the hypermedia part and support linking, URI builders ("forms"), and feeds. Frameworks should treat HTTP as a protocol and not as a set of design decisions à la "Do I use singular or plural in my resources?!".
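The sort of thing I mean, sketched with Flask (url_for is real Flask; the resource model is made up): links as first-class output rather than hand-assembled strings.

    from flask import Flask, jsonify, url_for

    app = Flask(__name__)

    @app.route("/orders/<int:order_id>")
    def order(order_id):
        return jsonify({
            "id": order_id,
            "_links": {
                # The framework builds URIs; handlers never concatenate paths.
                "self":     {"href": url_for("order", order_id=order_id)},
                "payments": {"href": url_for("payments", order_id=order_id)},
            },
        })

    @app.route("/orders/<int:order_id>/payments")
    def payments(order_id):
        return jsonify({"order": order_id, "payments": []})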

And of course there is temporal coupling. That's why we don't buy big batch-processing closets from IBM to build websites. Temporal coupling is in the very nature of HTTP.


You keep using that word "REST"... I don't think that word means what you think it means.


If so many people misunderstand REST, it's worthless.


Disagree with the point about location coupling being eradicated by a broker: a broker has its own location. The article also seems to miss that a resource URI is itself an abstraction representing the canonical source.

Re spikes in demand: the article ignores that REST, unlike SOAP, is cacheable. REST is popular precisely because SOAP has struggled in the enterprise as business systems have moved online and traditional services haven't been able to deal with the traffic.
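To illustrate: a RESTful GET can declare its own cacheability with standard HTTP headers, and any intermediary (CDN, reverse proxy, browser cache) can then absorb spikes. A toy Flask handler, purely illustrative:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/products/<int:pid>")
    def product(pid):
        resp = jsonify({"id": pid, "name": "widget"})
        # Shared caches may serve this for 5 minutes without touching us.
        resp.headers["Cache-Control"] = "public, max-age=300"
        resp.headers["ETag"] = f'"product-{pid}-v1"'
        return resp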

Re transactions: REST architectures are stateless. But the wider problem illustrated in the article is that enterprise architecture is based on pre-web thinking and uses patterns suited to secure, robust, transactional/stateful, low-traffic internal systems (banking, payroll, ticketing, etc.). That is pretty much the opposite of web architecture in every way, so what looks good in one is what looks bad in the other.




