I've been using HTTPie a lot more recently. It takes the tedium out of using curl, and I can produce color-coded output for people. However, I still find myself in situations where I can't figure out how to coax it into the exact request I want. In other words, the user-friendliness of being able to do things like construct JSON from parameters is great until it isn't.
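When the parameter shorthand can't express the body I need, my escape hatch is piping raw JSON on stdin, which HTTPie uses as the request body verbatim (example.org and the payload here are just stand-ins):

$ echo '{"deeply": {"nested": [1, 2, 3]}}' | http POST example.org/api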
Is there anything similar for GUI users? The standalone application form of Postman is popular with some coworkers for general HTTP work, as is Fiddler on Windows.
Paw is really nice. I love that you can string together multiple requests with "dynamic values" (connecting the output from one request to the inputs of another, with transformations/functions/whatever). I use the command line for a lot of things, but Charles + Paw is a super powerful combination for tinkering with HTTP requests outside of code.
I guess it's still growing in popularity but hasn't hit the inflection point of becoming a standard tool.
The concept of an HTTP CLI seems obvious to me in retrospect, but I'm not a web developer, so maybe there's a reason this isn't as useful as it would seem at first glance?
I was punning on the fact that the last part of jq's documentation introduces an almost full-fledged functional programming language, with function definitions, map, etc.
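For instance, you can define and apply your own functions right in a filter; a quick sketch (the numbers are made up, -c just keeps the output on one line):

$ echo '[1, 2, 3]' | jq -c 'def double: . * 2; map(double)'
[2,4,6]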
I really like this, because by default it shows all of the HTTP headers and makes everything look really nice. curl still has its place, and I wouldn't dream of replacing it, but I would definitely use this as a sort of command-line shortcut. Cool project.
I like using HTTPie for many things, however the current release does a bad job of rendering XML, e.g. it'll display `<sitemapindex xmlns="http://…">` as `<ns0:sitemapindex xmlns:ns0="http://…">`. But — I just checked and found the not-yet-released v1.0.0 fixes this by removing the XML formatter completely as discussed in https://github.com/jkbrzt/httpie/issues/443 so my gripe is sorted.
I still tend to go back to cURL when I want to see exactly what's been received, and use HTTPie when I know the response headers and body serialization are fine and I just want to see the data therein.
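Roughly, the split looks like this (example.org is a placeholder host):

$ curl -v https://example.org/api   # verbose: request/response headers as sent and received, body untouched
$ http https://example.org/api      # formatted headers plus a pretty-printed body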
I keep switching between curl + jq and httpie. Lately I've been using mostly httpie. It is a great tool. One of my favorite things is that it builds JSON objects (for PUT and POST requests, say) from command-line arguments. So you can do:
$ http put url key1=val1 key2=val2
If one of the fields is a larger nested object, you can use := instead:
$ http put url key1=simpleval1 key2:='{literaljson...}'
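To make the distinction concrete (example.org and the field names are made up): = treats the value as a string, while := passes it through as literal JSON, so

$ http PUT example.org/api/item name=alice tags:='["a", "b"]'

should send the body {"name": "alice", "tags": ["a", "b"]}.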
I don't use these kinds of programs all the time (though I did last week), so I eventually forget that I installed httpie and end up using curl or wget. I should alias curl to httpie and use \curl for when I really need curl.
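Something like this in my shell rc would do it (aliases only apply to interactive shells, so scripts calling curl would be unaffected):

$ alias curl=http
$ curl example.org    # now runs httpie
$ \curl example.org   # the backslash bypasses the alias and runs the real curl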
Because the syntax is easier to remember, it has syntax highlighting for JS, HTML, and more, and it's super easy to send JSON, form data, headers, etc. It's better and quicker for prototyping against APIs, for example.
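A few examples of what I mean (example.org and the field names are placeholders):

$ http POST example.org/login user=alice pass=secret     # JSON body
$ http -f POST example.org/login user=alice pass=secret  # the same, form-encoded
$ http example.org X-API-Key:abc123                      # custom header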
BTW, curl is great too, but I hate these kinds of posts. I keep seeing people on HN saying "not sure why anyone would use Y over X" completely disingenuously, as if they are expressly refusing to acknowledge the legitimate use cases of Y.
Not sure why anyone would use dropbox over a self-hosted sftp server, right?
> I keep seeing people on HN saying "not sure why anyone would use Y over X" completely disingenuously, as if they are expressly refusing to acknowledge the legitimate use cases of Y.
Kind of like titling a project "X: a CLI, Y-like tool for humans"...
If this had been made an interface to libcurl, or added as a separate tool in the curl distribution, most of the novel functionality could have been adopted three years ago without the need to support a separate project and HTTP library.
Rephrase: if this were implemented with libcurl and bundled with cURL, it would no longer need to exist as a separate project, because cURL would develop and support it. Does that make more sense?
I think I understand what you're saying, but all it would be is a change of ownership/packaging. httpie is only an interface to lower-level libraries which already exist (requests, and therefore urllib3).
Whether it's an interface to requests or to libcurl matters fairly little. In fact, it'd be quite a drag if it were to libcurl, because syntax highlighting would be a lot harder, for example.
"all it would be" is a gigantic wealth of new functionality that requests/urllib3 does not have (compare libcurl and urllib3 features), along with greater support, developer base, test coverage, user base, and lower tco. httpie gains a wealth of advantages and cURL gets syntax highlighting. It's hard to overstate how advantageous to httpie it would be.
You are extremely confused about the level at which it is beneficial to merge projects.
You don't simplify things by going from one popular lib to another, especially when they both have legitimate use cases. Requests is one of Python's most highly regarded libraries, and a very popular one at that. httpie is not its only user.
And what you are describing is not a simple process. Gaining tests and features (which features, anyway?) is not a good enough reason for it.
> You are extremely confused about the level at which it is beneficial to merge projects.
Not really: if you read my comments, I suggested they could have been merged into cURL three years ago (the project started four years ago). It wouldn't even be a new project; it would simply be an add-on tool. And that's the point.
Doesn't this assume the maintainers of both projects would be amenable to such a thing, and otherwise be perfectly in unison with regard to vision and goals?
Actually, it mainly just assumes the author knows C. Had they known it, they could have simply taken cURL, written some wrapper functions to do the JSON query part and the syntax highlighting, and submitted their modifications. No extra project needed, no grand vision required.
> Why are you being weirdly insistent that there only exist exactly one tool for making HTTP requests on the command line?
I'm not. I'm being weirdly insistent that adding features to one good tool is better than having 10 kinda-ok tools all with different features.
Imagine if VLC or MPlayer were actually 10 different video players, all of which supported different media formats and had different features. That was actually somewhat the case for a while: kplayer supported some things, the five different GTK/GNOME players supported others, and they were all relatively buggy, so playing video was always annoying. Except for VLC and MPlayer, which just did everything and were mostly fantastic.
How did VLC and MPlayer do this? They designed their apps specifically so that they could expand in features, and allowed people to contribute new technology. They supported runtime codecs and extensions. And they had a large development and user base, and people saw the benefit in having one tool that supported as many ways as possible of doing the one thing they wanted: playing a video.
Having one tool to handle all the weird uses of HTTP may not be reasonable, but having a toolkit that bundles all the weird uses of HTTP would be infinitely more useful than having to download 100 different projects just to munge HTTP requests. There are many such toolkits for other technologies. This is not a new or contentious idea.
Not only is this a contentious idea, it's positively absurd on its face. Your logic dictates that every tool with similar goals cannot be maintained separately, and that a single person or group should hold a complete monopoly on how we do any given thing in computing.
VLC and MPlayer were two of "10 different video players" when they were made, and a great many other video players, some significantly older and better supported, were eventually supplanted by them (at least by VLC, anyway).
In your little model here, that would never have happened. You'd utterly kill innovation and progress with this.
I cannot overstate how mistaken you are here; it is completely and wholly a terrible idea, at every turn. Maybe there's something I'm not getting; I certainly hope so.
I used MPlayer and VLC as an example of good development. You then say that if they had been developed the way I describe, they would not exist. But they were in fact developed the way I described. And they are not imaginary. And there is no free-software monopoly on video players (not coincidentally, because free software was designed to kill monopolies in the first place). If you can somehow restate this incredibly convoluted argument, maybe we can re-address it, but I'm pretty sure you only reinforced my point.
> You then say that if they had been developed the way I describe, they would not exist. But they were in fact developed the way I described.
This is literally false. They were not developed in a vacuum as you describe, but as two of a great many competing products, all of which were older. You claim they should have been folded into other, older projects, but they weren't, and because they weren't, they improved beyond those other projects and are still here, having grown beyond the projects you claim they should have been absorbed by.
My argument was less than 50 words; if you can't parse it, I can't help you. This is also nearly a week old, has completely lost my interest, and likely the interest of everyone else on this site. Why are you still replying?
What makes you assume that the cURL project wants to support features like syntax highlighting, a second style of command-line arguments, ... and would accept such contributions?