The problem is that we don't use HTTP(S) for micro-services but rather ZeroMQ and/or RabbitMQ. Many async pub/sub platforms are much more complicated to test. I assume Hoverfly doesn't yet support other protocols, but does it plan to?
Or maybe it's just that "micro-service" at this point must simply mean a RESTful API over HTTP(S) (which is sad, because I think it's a subpar protocol for IPC, particularly pub/sub).
Given today's demand for streaming-like apps, I'm not sure I'll ever build a plain ole HTTP request/reply platform (which I guess is what most people equate with micro-services).
Hi. Hoverfly author here. I agree, messaging is vital in a micro-services environment. We chose HTTP(S) as the first protocol to "virtualize" to help us better understand what additional functionality is needed (like Hoverfly middleware).
The problem with 0MQ/RabbitMQ is that interception isn't as straightforward/transparent as it is with an HTTP proxy. Our initial thought for capturing messages would be to use something like zmq_proxy (http://api.zeromq.org/3-2:zmq-proxy) as a forwarder.
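To make the zmq_proxy idea concrete, here is a rough sketch (Java with JeroMQ) of a forwarder that sits between publishers and subscribers and copies every message to a capture socket. The endpoints and socket types are illustrative assumptions, not anything Hoverfly currently does.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Sketch of a recording forwarder built on zmq_proxy.
// All endpoints are made up for the example.
public class ZmqCaptureProxy {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // Publishers connect here instead of binding their own endpoint.
            ZMQ.Socket frontend = ctx.createSocket(SocketType.XSUB);
            frontend.bind("tcp://*:5559");

            // Subscribers (the consuming services) connect here.
            ZMQ.Socket backend = ctx.createSocket(SocketType.XPUB);
            backend.bind("tcp://*:5560");

            // Every message flowing through the proxy is also pushed here,
            // where a separate recorder process could persist it for replay.
            ZMQ.Socket capture = ctx.createSocket(SocketType.PUSH);
            capture.connect("tcp://localhost:5561");

            // Blocks and shuttles messages frontend <-> backend,
            // copying each one to the capture socket.
            ZMQ.proxy(frontend, backend, capture);
        }
    }
}
```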
I'm curious about the use case for this, so I was wondering if you could please elaborate. When you use messaging, the communication is asynchronous and the applications are therefore decoupled by nature. So when testing a service like this in isolation, I would assume you just need to assert that it publishes/consumes messages correctly, without needing to worry about what happens with the services on the other side of the broker. Am I missing something?
For plain fire-and-forget you are correct that async pub/sub is decoupled, and in fact, while our app is monolithic, it is actually fairly decoupled because of this (e.g. it is often the receiver of messages it published itself).
However, there are async request/reply scenarios: basically a client asynchronously requesting a stream (this is different from normal request/reply), or even plain request/reply over a messaging protocol (which is done often, for better or worse).
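To illustrate the request/reply-over-messaging case, here is a hedged sketch using the RabbitMQ Java client: the caller publishes a request carrying a replyTo queue and a correlationId, then consumes one or more responses (a "stream") from that queue. The exchange and queue names are made up for the example.

```java
import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Request/reply (or request/stream) over AMQP, sketched for illustration.
public class AmqpRequestReply {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Exclusive, auto-delete queue that only this client reads replies from.
            String replyQueue = channel.queueDeclare().getQueue();
            String correlationId = UUID.randomUUID().toString();

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo(replyQueue)
                    .build();

            // Fire the request; the responding service is expected to publish
            // one or more messages back to replyQueue with the same correlationId.
            channel.basicPublish("", "rpc.requests", props,
                    "get-stream".getBytes(StandardCharsets.UTF_8));

            channel.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
                if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                    System.out.println("chunk: " +
                            new String(delivery.getBody(), StandardCharsets.UTF_8));
                }
            }, consumerTag -> { });

            Thread.sleep(5000); // wait for (streamed) replies in this toy example
        }
    }
}
```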
In our case it isn't so much that the monolithic app is poor architecture-wise (in fact it's sort of convenient to have a homogeneous fleet of servers); it's that the monolithic server makes prototyping a pain (i.e. a Node.js developer doesn't want to have to boot up the massive Java app).
I think a mirror tool would be useful for this.
I think a Hoverfly-like thing could be built as a RabbitMQ plugin, or I guess you could publish messages to multiple exchanges as one way of recording/mirroring (see the sketch below).
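For the mirroring idea, here is a rough sketch (RabbitMQ Java client) of an exchange-to-exchange binding that copies everything published to an application exchange into a separate capture exchange, without touching the publishing code. The exchange and queue names are hypothetical.

```java
import com.rabbitmq.client.*;

// Illustrative setup: mirror an application exchange into a capture exchange.
public class MirrorExchangeSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // The exchange the application already publishes to.
            channel.exchangeDeclare("orders", BuiltinExchangeType.TOPIC, true);

            // A fanout exchange dedicated to recording traffic.
            channel.exchangeDeclare("orders.capture", BuiltinExchangeType.FANOUT, true);

            // Bind the capture exchange to the application exchange: every
            // message matching '#' is mirrored without changing any publisher.
            channel.exchangeBind("orders.capture", "orders", "#");

            // A recorder process would consume from a queue bound to the capture exchange.
            channel.queueDeclare("orders.capture.q", true, false, false, null);
            channel.queueBind("orders.capture.q", "orders.capture", "");
        }
    }
}
```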
The interest in using HTTP for microservices stems from it being familiar to most developers. It's the same reason almost all public APIs are based on HTTP. There are just so many tools in the HTTP API ecosystem, from management gateways to documentation generators.
Interestingly, even public APIs with push capability are usually HTTP-based (e.g. Twitter's streaming API, GitHub webhook notifications, etc). I predict most microservices will end up working this way too.
It also really depends on what is meant by HTTP. If we are including WebSockets and HTTP/2/SPDY, then yeah, I might see HTTP as a more viable choice over AMQP, Thrift, custom sockets, etc.
The problem is internal communication, where (especially with micro-services, where you might have many hops) latency is often a serious concern. Often the best technology for reducing latency is not HTTP. In fact it's best not to rely too much on the protocol (i.e. be agnostic). Since you brought up Twitter, see Twitter's Finagle: "Most of Finagle's code is protocol agnostic, simplifying the implementation of new protocols."
And that leads back to the earlier segue: perhaps Hoverfly should be written in a protocol-agnostic way. I'm not sure if it is (i.e. how coupled it is to the assumption of HTTP 1.1).
Otherwise I see Hoverfly as just an HTTP proxy tool... I have used similar tools such as TCPTrace and ProxyTrace for ole-school SOAP (oh, the pain).
Indeed, internal communication is often treated differently. Even at my company we primarily use ZeroMQ between microservices, because it's fast and our (small) team understands it.
The movement behind the HTTP ecosystem is undeniable though, and the common practices for public APIs are making their way into internal services too. You might check out an API conference or meetup sometime to see what I mean (not that any of us are obligated to follow this trend; I'm just bringing it to light).
A lot of people we have spoken to have asked for this, so support for messaging protocols is definitely on our roadmap. Whether it will be incorporated into Hoverfly, or provided as a separate tool hasn't yet been decided though.
All business operations are declared on interfaces, which are used to codegen a client and an HTTP request handler. Normal app code uses the codegen'd client, and we have a REST API which runs the codegen'd HTTP request handler (and the underlying business logic that we write by hand).
This gives us certain kinds of flexibility that we use. That said, it's still a monolith: even if we split each call onto a different server, they will ultimately rely on a shared database schema residing on the same, single MySQL server.
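For illustration, this is roughly the shape being described: operations declared once on an interface, with the codegen'd client forwarding each call over HTTP to the REST API. The names and URL layout here are assumptions, not the actual generated code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical business interface; in the real setup both the client and the
// server-side request handler would be generated from declarations like this.
interface OrderOperations {
    String getOrder(String orderId);   // business operation declared once
}

// What the codegen'd client might look like: it forwards calls to the REST
// endpoint that runs the generated request handler plus hand-written logic.
class OrderOperationsHttpClient implements OrderOperations {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    OrderOperationsHttpClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public String getOrder(String orderId) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/orders/" + orderId))
                .GET()
                .build();
        try {
            return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException("order call failed", e);
        }
    }
}
```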
I am in this situation right now. We are frozen in the planning stages of moving a Java monolith toward a micro-service architecture. This seems like a very straightforward, understandable approach to help make that happen. I read the article and could easily understand what the tool is trying to do and how it can help.
This is John Davenport at SpectoLabs. We have used AspectJ to create new seams (in Daniel's terms) in Java monoliths that can then be intercepted via HTTP.
It all depends on what sort of bind you find yourself in. Having seen both, the Hoverfly approach is far preferable, but one option may be to use AspectJ to create the seam via HTTP and then intercept it using Hoverfly. YMMV.
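For anyone curious what the AspectJ route can look like, here is a rough sketch: an @Around advice intercepts an internal call and re-routes it over HTTP, where a proxy such as Hoverfly can then capture or simulate it. The pointcut, class names, and endpoint are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Sketch of an AspectJ "seam": an internal call is re-routed over HTTP
// so an HTTP proxy can capture or simulate it. Names are illustrative.
@Aspect
public class CustomerSeamAspect {

    private final HttpClient http = HttpClient.newHttpClient();

    // Intercept a hypothetical internal method that returns a JSON String
    // and replace the in-process call with an HTTP request.
    @Around("execution(String com.example.monolith.CustomerService.findCustomerJson(String)) && args(id)")
    public Object routeOverHttp(ProceedingJoinPoint pjp, String id) throws Throwable {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/customers/" + id))
                .GET()
                .build();
        try {
            // Routing this request through a proxy lets you capture/simulate it.
            return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            // Fall back to the original in-process call if the seam is unavailable.
            return pjp.proceed();
        }
    }
}
```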
I'm also looking for more suggestions on how people would want to use Hoverfly with Java projects, so please do give me a shout via Twitter @danielbryantuk if you have any ideas!
One sticking point: what if the monolith does not have an endpoint you can call with an HTTP request? What if you have a client that consumes the back end via server-rendered pages, and no services are exposed through URL endpoints? Do you first have to make the monolith's internal functionality accessible via HTTP requests?
Adding 'seams' into the monolith (and exposing these, typically via REST or AMQP) is the most effective way I have found.
The alternative is to 'screen scrape' the data returned from calling the monolithic application via the interface it currently exposes. For example: calling a website page, parsing the HTML returned, and extracting what you require. However, the code/algorithms you have to write are typically complicated, and the resultant code is often fragile and highly coupled.
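As a small illustration of why the scraping route is fragile, here is what it might look like with jsoup: the selectors are coupled to the monolith's rendered markup, so any template change breaks the extraction. The URL and CSS selectors are invented for the example.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

// Illustrative 'screen scrape' of a server-rendered page.
public class OrderPageScraper {
    public static void main(String[] args) throws Exception {
        Document page = Jsoup.connect("http://monolith.internal/orders/42").get();

        // Brittle: relies on the exact table structure rendered by the server.
        Elements rows = page.select("table#order-lines tr.line-item");
        for (Element row : rows) {
            String sku = row.select("td.sku").text();
            String qty = row.select("td.quantity").text();
            System.out.println(sku + " x " + qty);
        }
    }
}
```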