Hurl runs super fast with no startup latency, unlike a lot of tools in this category written in Node. The plain-text format can be checked in and become part of your CI process. Hurl can capture data from previous requests to run workflows and serve as a testing tool. The main downside is the lack of a GUI, although I imagine it wouldn't be that hard to write a VSCode plugin that adds a run button to the text file and displays the result in the editor.
One particular use case for Hurl is that you can use the same file with REST Client to test locally, then add assertions and run it in CI/CD with Hurl.
I recently switched from custom Bash wrappers around curl to restclient.el [1]. It has similar features. Especially nice is the integration with jq for fetching specific data (or inspecting results with jq-mode). And, for whoever is inclined to appreciate it, the fact that I can stay within Emacs: no need to get familiar with a new UI/UX.
Not two days ago I had the thought that it'd be so nice if I could write tests that were just plain-text HTTP requests, with assertions that would just be comparisons of the responses against stored plain-text HTTP responses, kind of like how snapshot assertions work in the React world. From a cursory glance this looks even better than what I had in mind; can't wait to give it a spin.
That's definitely more convenient. I think it could be nice to have an additional test suite not written in the same language as the thing you're testing. It would force you to interact with your program the way the rest of the world would. Rather than relying on mocking, setting up test data, and reaching into the internals of your code, you have to set up your test data through the API. This wouldn't have been feasible in most of the work I've done in my professional career, but in an ideal world I think it could be beneficial not to rely on internals for testing, at least for some set of tests.
Yes, that’s generally a good thing to do. This tool also has the potential to be much more useful to consumers of your service, as documentation or as a way to interact with it.
It’s not so hard to build this in any language: just emit output to a text file, and have a flag in your test command that overwrites the snapshots instead of asserting that the current output matches them.
I found that I didn’t even use assertions sometimes; I’d just check git to see whether there was a diff in the snapshots when refactoring.
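A minimal sketch of that pattern in Python (the snapshots/ directory and the --update flag are just illustrative, not from any particular framework):

import argparse
import pathlib

def check_snapshot(name: str, current: str, update: bool) -> None:
    # Compare current output against the stored snapshot, or overwrite it when updating.
    snap = pathlib.Path("snapshots") / name
    if update or not snap.exists():
        snap.parent.mkdir(exist_ok=True)
        snap.write_text(current)  # (re)create the snapshot instead of asserting
        return
    assert current == snap.read_text(), f"snapshot mismatch for {name}"

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--update", action="store_true", help="overwrite snapshots")
    args = parser.parse_args()
    check_snapshot("hello.txt", "hello world\n", args.update)

Run it once with --update to record the snapshots, then without the flag so a change in output fails the run (or shows up as a git diff on the snapshot files).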
A quick check of a website's body can be done with:
$ vim dnsmichi.at.hurl
GET https://dnsmichi.at
HTTP/1.1 200
[Asserts]
body contains "Everything is a DNS problem"
$ hurl --test dnsmichi.at.hurl
While reading the documentation, with all its great ways to assert on responses (https://hurl.dev/docs/asserting-response.html), play with regexes, and even use built-in JSON parsing, I thought of querying the Algolia search API for HN:
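Something along these lines (a sketch; assert syntax varies a bit between Hurl versions, and HTTP/* matches any HTTP version):

# Query the Algolia HN search API and assert on the JSON response
GET https://hn.algolia.com/api/v1/search?query=hurl
HTTP/* 200
[Asserts]
header "Content-Type" contains "application/json"
jsonpath "$.hits" exists
jsonpath "$.hits[0].objectID" matches "[0-9]+"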
KATT (https://github.com/for-GET/katt) is the same concept, but follows the pattern-matching philosophy. It's written in Erlang and available as a CLI tool as well, but needs the Erlang runtime installed.
Disclaimer: I'm one of the authors, thus biased, but the reason I'm mentioning KATT is that the low barrier to entry for captures and asserts makes it a nice requirements tool for non-techs to write complex API scenarios.
We use Hurl for things like:
- passing data from one request to another (with captures or with cookies, each file being its own cookie session),
- getting resources and checking payload SHAs,
- checking responses (JSONPath or XPath on the response body, headers, etc.),
- retry/polling scenarios (polling until a resource is created, for instance),
- commenting things (we use Hurl as documentation for API workflows).
You can do all of these things with scripts and curl, of course. We just appreciate having a tool that works locally on Windows/macOS/Linux and integrates nicely into CI/CD. Hurl uses a (very small) fragment of libcurl; curl is much, much more powerful, and we love it.
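For illustration, a capture-then-poll scenario looks roughly like this (the endpoints and fields are made up, and the per-request retry option needs a reasonably recent Hurl):

# Create a resource and capture its id from the JSON response
POST https://example.org/api/jobs
{ "input": "some data" }
HTTP/* 201
[Captures]
job_id: jsonpath "$.id"

# Poll the resource until the asserts pass (or the retry budget runs out)
GET https://example.org/api/jobs/{{job_id}}
[Options]
retry: 10
HTTP/* 200
[Asserts]
jsonpath "$.status" == "done"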
Why use a DSL and not your favorite scripting language or test framework of choice for this? All the examples on the front page can be written in the same number of LOC in many such popular languages:
- Chaining - it's just code, so put multiple GET calls one after another.
- Captures - just variables. With sessions you also get automatic cookie capture.
- Payload - yes.
- Parsing responses - the stdlib has JSON and XPath libraries, and those calls can be one-liners too. HTML not always being pure XML is a problem, but that can be solved with other libraries, like BeautifulSoup.
- Retries - a bit more hassle to set up, but also doable with relatively little effort.
- Comments - yes.
What's missing are the benefits of the declarative nature, such as more human-friendly output on JSONPath assertions. But that's often easily added with helper functions in the test framework, and it's outweighed by the flexibility of being able to extend the capabilities with more functions, put computation between chained calls, or handle other external inputs/outputs the tests need. How often do you really need to test only the HTTP call and nothing else?
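For comparison, a chained scenario in plain Python with requests might look roughly like this (endpoints, fields, and credentials are made up):

import requests

s = requests.Session()  # a Session gives automatic cookie capture

# "Chaining" is just sequential calls; a "capture" is just a variable.
r = s.post("https://example.org/api/login", json={"user": "bob", "password": "secret"})
r.raise_for_status()
token = r.json()["token"]

r = s.get("https://example.org/api/items", headers={"Authorization": f"Bearer {token}"})
assert r.status_code == 200
assert len(r.json()["items"]) > 0  # jsonpath-style check as plain Python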
Thanks for the quick reply. I've used curl on Windows before, calling it from AHK for quick and dirty one-off scripts. Hurl could replace that, with the ability to chain requests. Thanks.
Emulating an HTTP session (with cookie passing) between requests is more complicated, for instance. Retry based on response content is doable, but easier with a declarative format. jq is perfect for JSON responses; what about HTML/XML responses? Our testers prefer to write a text-based declarative test instead of a Bash script. It depends on your needs/background.
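For HTML, Hurl leans on XPath. A minimal sketch (the expected title is just example.org's real one):

# Assert on an HTML response body with XPath
GET https://example.org
HTTP/* 200
[Asserts]
xpath "string(//head/title)" contains "Example Domain"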
Using a binary can be (in my experience, 99% of the time) an enormous convenience and performance boost over including a whole runtime for those languages.
You can get a worse version of this functionality out of IntelliJ's HTTP client [0]. Hurl seems better for several reasons, although I wish there were an example showing a few more useful tricks:
1 - When the content of a JSON variable is an XML string or a JSON string, detect this and format the whitespace for readability.
2 - Support pulling either the entire request or just the body from a file, and looping over all the files in a directory.
3 - Pull data out of a response and put it in an environment used by later lines in the script.
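For what it's worth, Hurl covers parts of 2 and 3 already: a request body can come from a file, and captured values feed later requests. A sketch with made-up endpoints:

# Send the body from a file, then reuse a value captured from the response
POST https://example.org/api/upload
Content-Type: application/json
file,payload.json;
HTTP/* 200
[Captures]
upload_id: jsonpath "$.id"

GET https://example.org/api/uploads/{{upload_id}}
HTTP/* 200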
Written in Rust, it lets you define a series of HTTP URLs to hit and assert things about what's returned, e.g. the count of items in a JSON array, or the string value of one specific JSON field.
My qualm with this app is that, as a Linux user, you can already build a system like this yourself by simply using telnet and typing the HTTP directly down the pipe. The bonus is that you get to memorize all the protocol messages that way too.
The abstraction is there to provide better UX. Anyone who wants chained requests, variables, assertions, etc. over telnet will have to build tooling on top of it, and once you do that, you've basically reimplemented Hurl.
Curl is a good tool to familiarize yourself with, but I've written tests with curl and Bash and it is very cumbersome; this is another level of awesomeness for that. If all you're going to do is hit a server with some JSON, then curl is OK. I agree that nc/telnet/socat might be a bit extreme.
Hurl is not enough for some of the tests I want to write, e.g. sending a 50 MB file, or dynamic content.
Nice tool. I’m just wary of learning custom syntax. Maybe it could use the same primitives as existing Unix tools and make the whole thing a lot more orthogonal and frictionless: jq syntax for parsing JSON, awk for text.
Taking this further, it would be interesting to see this applied to cloud infra descriptions for deployment testing. Yes, you can write it in a programming language, but it’s tedious, and the same idea would be applicable: get a resource ID, then make some detailed describe calls to assert that it’s provisioned as expected.
Haven’t tried it yet, thanks for the experience report - what aspects did you find tricky?
Agreed on Gherkin vs. declarative, it depends on who the test is for. I’d probably only use Gherkin for APIs if my product was developer-focused (e.g. Stripe, Plaid).
Really impressed. The ability to chain the HTTP calls with assertions is great. The one thing I would prefer is to be able to set the environment variables in the .hurl file as well, rather than in a .env file. Also, how do you access a key of a nested object?
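On the nested-object question, JSONPath dot notation goes as deep as you need, in both asserts and captures. A sketch with a made-up endpoint and fields:

GET https://example.org/api/user
HTTP/* 200
[Captures]
city: jsonpath "$.user.address.city"
[Asserts]
jsonpath "$.user.address.city" == "Paris"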
What do you mean exactly by "easy to extend"? There's no support for external "scripts", right? It would need a recompile, which means branching the source.
Yeah, this was my initial thought. It reminded me of its REST API format, which supports post-response scripts (written in JS) to e.g. set up a variable for an access token after an OAuth exchange. Hurl's post-response handling seems a bit limited in comparison.
I'm not sure if it can do the sort of web scraping (from the DOM) that Hurl claims to support, though. I guess you can run Hurl in CI, too.
It's a very good way to show folks how to use an API, and I try to check these files into VCS. You can also write test assertions with it, and it supports environment-based secrets.
Hi, I’m one of the maintainers of Hurl! You can capture data from the DOM using XPath and inject it into the next requests (a classic example being a CSRF token). One thing to keep in mind is that Hurl works at the HTTP level; there is no JavaScript engine, so you get what the network gives you, like curl.
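The CSRF case looks roughly like this (a sketch; the page structure and field names are placeholders):

# Grab a CSRF token from the HTML, then send it with the login request
GET https://example.org/login
HTTP/* 200
[Captures]
csrf_token: xpath "string(//input[@name='_csrf']/@value)"

POST https://example.org/login
X-CSRF-TOKEN: {{csrf_token}}
[FormParams]
user: bob
password: secret
HTTP/* 302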