One pattern I really like is opening the networks tab of the browser, finding the request I'm interested in and "copying as curl" - the browser generates the equivalent command in curl.
Then I'd use something like https://curlconverter.com/ to convert it into request code of the language I'm using.
In a way curl is like the "intermediate representation" that we can use to translate into anything.
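To make the workflow concrete, here's a sketch of what the browser's "Copy as cURL" export typically looks like (the URL, token, and headers are made up). It's held in a variable and printed rather than executed, since the next step is pasting it into a converter, not running it:

```shell
# A made-up example of what "Copy as cURL" produces: one command that
# carries the URL, headers, and auth the browser actually sent.
copied='curl "https://api.example.com/v1/items?page=2" \
  -H "accept: application/json" \
  -H "authorization: Bearer abc123" \
  --compressed'

# This string is the "intermediate representation": converter tools
# parse it and emit equivalent request code in your language of choice.
echo "$copied"
```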
I also use the browser's 'copy as curl' function quite frequently; it's so convenient to have all the auth and encoding headers already set so the request will _definitely_ work, instead of messing around with handwritten, multi-line curl commands.
Be aware that online services like this one might log your requests, which could contain sensitive data. I'm not saying this one does, but those websites give me the creeps.
I watched the network logs, and it doesn't seem to transmit anything. Additionally, their privacy info clearly states:
> We do not transmit or record the curl commands you enter or what they're converted to. This is a static website (hosted on GitHub Pages) and the conversion happens entirely in your browser using JavaScript.
Privacy policies on most websites mean jack when they can be changed at any time for any reason, imo.
Not to mention the pattern nowadays is: offer a free service, pay a little lip service to privacy concerns, then the enshittification train comes rolling down the tracks a few months or years later.
Not saying this site is going to go down that path, but IMO giving the benefit of the doubt with regard to privacy on the internet is bad practice in 2024.
I agree that it's possible, and that the majority of utility websites do use a backend to provide their utility, but it seems curlconverter.com doesn't make any requests to a server to do the conversion and instead does it in JavaScript.
It would be nice if more sites offered themselves as PWAs that worked when you set "throttling" to "offline" in the dev menu, so that you could ensure that no data is leaving the browser when working with sensitive info.
Maybe that would be a nice browser plugin: something that blocks any further requests. I guess it would work similarly to ad blockers, only once enabled it blocks everything.
Just unplug the cord, or disable the WiFi, for a few seconds. As we are presumably discussing sensitive data, nothing is as certain as the end of the cord in your palm.
I kinda wish the address bar in any browser had an "advanced" popout menu that's basically a curl frontend with all of its bells and whistles. Basically move that functionality from the dev tools.
Sadly most major browsers go the opposite direction, removing more and more "advanced" functionality and information from the address bar (e.g. stripping the protocol, 'https://', from the URL).
Firefox on Android has recently started to hide the URL entirely when using the search-from-addressbar feature. Instead of the URL of the search page, it shows the search terms, which is redundant since the terms are already shown by the page itself.
Yeah, that has made my life so much easier when troubleshooting an API endpoint. I can tweak the request params to run against a local instance as well as pipe through jq for formatting etc.
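For illustration, here's the formatting half of that workflow on its own. In practice the JSON comes from the copied curl command pointed at a local instance; a literal response body stands in here so the jq step can run by itself (the field names are made up):

```shell
# jq pretty-prints and lets you drill into the response; with a real
# endpoint the left side would be the tweaked curl command instead.
echo '{"user":{"id":42,"name":"ada"}}' | jq '.user.name'
# prints "ada"
```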
This is pretty interesting. It's not that HTTP needs an intermediate representation, but since cURL is so ubiquitous, it ends up functioning as one: because cURL is popular, people write tools that can export requests as cURL, and because it's popular, people write tools that can import it.
The benefit of curl over raw HTTP is the ability to strip away things that don't matter.
E.g. an HTTP request will usually carry a User-Agent header, but it's typically not relevant to what you're trying to do.
curl is like an HTTP request that specifies only the parts you care about. It's brief, and having access to bash makes it easy to express something like "set this field to a timestamp".
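As a sketch of the timestamp idea (URL and header name are made up), shell substitution does the work; the command is built and echoed here rather than actually sent:

```shell
# A shell substitution fills in the timestamp at invocation time;
# something you can't express in a raw, static HTTP request dump.
ts=$(date +%s)
cmd="curl -s -H \"X-Request-Time: $ts\" https://api.example.com/ping"
echo "$cmd"
```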
Actually, "Copy as cURL" adds much that is not required. In some cases this can be useful. However, if all one cares about is what is actually needed to successfully request a resource using HTTP, then "Copy as cURL" always includes too much. It includes "things that don't matter".
HTTP is more flexible than "Copy as cURL". There are things that can be done with HTTP that cannot be done with cURL.
(NB. I am not a developer. I am not writing software for other people. I do not profit from online advertising. I use a text-only browser with no auto-loading of resources and no Javascript engine; I never see ads. Been using HTTP/1.1 pipelining, which all major httpds support, outside the browser, daily, for many years. It works really well for me; I rely on it. As such, I am not the appropriate person with which to debate the relative merits of HTTP protocols for developers who profit from online ads served via "modern" web browsers.)
That's kind of what I mean. E.g. I believe curl will add a Content-Length header, which is good to have, but I don't need every example HTTP call to show me that.
To me a curl call is kind of shorthand for "these are the parts unique to this request, do the appropriate thing for general-use headers". If I see a raw HTTP request missing a Content-Length header (assuming it could use one), I don't know whether to assume that I do the normal thing, or whether the server ignores Content-Length, or perhaps if the server specifically errors out when it's set.
Vice-versa, if a raw HTTP request does have a Content-Length header, I'm not sure if that means it's required or just supported.
If I see a curl call specifying Content-Length, it sets off the "something weird is going on" bells in my head. Nobody specifies that in curl, so its presence is odd and worth looking at.
I've used a similar tool as part of API logging, filtering out the signature on the bearer token... It's useful together with a request ID for error reporting.
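One way to do that kind of filtering, as a sketch (the token below is a fake JWT): a JWT is `header.payload.signature` joined by dots, so dropping the last segment before logging means the logged value can't be replayed.

```shell
# Fake JWT for illustration; never log a real one whole.
token="eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxMjMifQ.c2lnbmF0dXJl"

# ${token%.*} strips the final dot-separated segment (the signature).
redacted="${token%.*}.<redacted>"
echo "$redacted"
```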
https://curlconverter.com/ is a great example of intelligent UX. Whatever browser you're using is shown in the instructions for "Copy as cURL". Very clever.
On curlconverter.com, clicking on "C" redirects you to the --libcurl option documentation page instead of generating a C snippet.
Wouldn't it be more user-friendly to still generate a C snippet, but mention that it can also be done with the --libcurl option?