Yup, performance was one of the reasons I decided to port it to Rust. The other reason is that Rust gives you a single binary, which is easier to deploy than Python.



Nice work! As shared down-thread, I really loved and wanted to use `httpie` but the non-trivial startup time put me off. Very happy that you made a Rust alternative because I really don't like `curl` that much -- it requires quite a lot of incantations for non-trivial requests. `ht` and `httpie` definitely improve ergonomics in important places.

So, kudos!


Why do you need more performance from a CLI test tool? I'm honestly curious.

It's basically curl (fast) plus a simpler, easier interface and pretty-printing. For performance in shell scripts you can use curl, and for troubleshooting the I/O time dominates anyway.


Why use something slower when an equivalent faster tool is available?

There's definitely a noticeable delay on my machine with starting up the python interpreter. Enough that it dominates most actual request times to fast servers. (`http get www.google.com` is ~460ms while `ht get www.google.com` is ~130ms)
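
If you want to reproduce that kind of measurement, here's a crude sketch (assuming both binaries are on your PATH; taking the median smooths out network jitter):

  import statistics
  import subprocess
  import time

  def median_wall_time(cmd, runs=5):
      # Wall-clock time includes the binary's startup plus the request itself.
      samples = []
      for _ in range(runs):
          start = time.perf_counter()
          subprocess.run(cmd, stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
          samples.append(time.perf_counter() - start)
      return statistics.median(samples)

  for cmd in (["http", "get", "www.google.com"],
              ["ht", "get", "www.google.com"]):
      print(cmd[0], f"{median_wall_time(cmd):.3f}s")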

For a tool I'm constantly using to check APIs I'm developing, I really appreciate snappy commands that give me results that feel instantaneous.


It adds up. I once tried to scrape an internal company website / API with `httpie`, because we had no PDF export and wanted to download the pages and convert them to PDF. It's an amazing tool but all the startup times compounded pretty badly. I switched to `curl` (which was a bit more painful until I got the full command line right, granted) and the script finished in ~95 minutes as opposed to the ~202 minutes it took with `httpie`.

With about 0.5s of startup time, every 120 requests add a full minute to the total. And I had to scrape ~125K URLs back then.
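
(Back-of-the-envelope: 125,000 requests × 0.5s is ~62,500 seconds, i.e. roughly 17 hours of startup overhead alone if run serially; running jobs in parallel divides that across workers but never eliminates it.)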

For a casual daily flow a 0.5s startup time might not be much (although people like myself get irritated by it as well, I do recognize it's a minor inconvenience). But when doing mass-scripting, such delays can very easily compound into non-trivial inefficiencies.


I think people need to start distinguishing between CLIs and TUIs, as TUIs aren't really meant for scripting but CLI tools are.


Be that as it may, I still liked `httpie` (and `ht`) more than `curl` even though `curl` seems superior in terms of configurability.


Why don't you use cURL then? cURL is written in C, which should give you the performance you're looking for.


I did just that in the end. It took some fiddling with headers, form parameters, and file uploads, but it worked.

It's just that httpie's (and thus ht's) CLI usage is a bit more ergonomic.


For mass-scripting I'd prefer to use a native HTTP library with connection pooling.


Sure, I can agree with that somewhat.

Still, it was a one-off thing and I didn't want to turn it into a project. A for loop, `curl`, `wkhtmltopdf` and `parallel` eventually did the job.


I for one am totally on board with writing atrocious grep/sed/awk pipeline mashups, doing whatever it takes to get a one-off job done with as little programming as possible. Sometimes the important parameter to optimize is brain cycles spent, or time to result, not CPU time or optimal usage of network resources.


Absolutely. I'd even argue that the brain cycles spent should be the first and most important metric for one-off tasks.

That's why I recently asked people here what to use if not shell scripting. I think I'll either use a less confusing scripting language that is almost like bash/zsh -- namely `rc` -- or I'll dust off my OCaml skills.

That need was born of my frustration that I routinely spent non-trivial amounts of time on one-off scripts, so at one point I got fed up and started looking for alternatives.


I currently have the dynamic language blues – Python just doesn't feel as much fun to me as it used to. Maybe it's getting too big and/or too crufty. At the moment I'm having great fun with Rust instead, which feels as magical as Python and C did when I was twenty years younger.

That said, bash/sh are really good at setting up pipelines to pass around lines of text between Unix tools. That is a really cool thing, and most languages don't really treat it as a first-class concept. Because bash does, and I have twenty years of experience with it, I mostly put up with Bash. It's not a great language but it gets the job done, literally.
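
For contrast, here is roughly what a small `grep | sort | uniq -c` pipeline looks like when wired up by hand with Python's subprocess module (a sketch; the log file name is made up). It works, but it's exactly the ceremony that bash's `|` spares you:

  import subprocess

  # Equivalent of: grep -i error app.log | sort | uniq -c
  grep = subprocess.Popen(["grep", "-i", "error", "app.log"],
                          stdout=subprocess.PIPE)
  sort = subprocess.Popen(["sort"], stdin=grep.stdout,
                          stdout=subprocess.PIPE)
  uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout,
                          stdout=subprocess.PIPE)
  grep.stdout.close()  # so grep gets SIGPIPE if a later stage exits early
  sort.stdout.close()
  output, _ = uniq.communicate()
  print(output.decode(), end="")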

I recently read some stuff about Julia which made me interested in it as a potentially better Bash (and hopefully more fun than Python) –

https://julialang.org/blog/2012/03/shelling-out-sucks/

https://julialang.org/blog/2013/04/put-this-in-your-pipe/

https://docs.julialang.org/en/v1/manual/running-external-pro...


> I currently have the dynamic language blues...

Same, I used to be a Ruby (and Rails) fanatic a long time ago but that has subsided. Nowadays I love Elixir and reach for it in many scenarios where I don't expect CPU-bound workloads, but I'm acutely aware that its startup times (when scripting) are atrocious. Working with it in a business context for 4.5 years also made me painfully aware of the problems caused by the lack of static typing.

> At the moment I'm having great fun with Rust instead

Same, I love it more than I loved almost any other tool I've tried in my life. I'm just not sure how economical it is to reach for it for throwaway one-off tasks. I suppose with more experience it would become feasible.

> It's not a great language but it gets the job done, literally.

Agreed, I have achieved a lot with bash and `parallel` alone. But at some point it became death by a thousand paper cuts: I know I can do the task, but I can never quite remember all the little annoying details, e.g. how to iterate over numbers, or over a file list from a previous command, or whether there is anything special about piping inside scripts (there isn't, but you have to be mindful of settings like `set -euo pipefail`), etc.

It might be me getting stupider and less tolerant with age but you know what? I am okay with that. These tools / languages should be intuitive and easy to remember!

I am not abandoning bash and zsh scripting, ever. But my tolerance for banging my head against the wall has been severely reduced lately. And, as mentioned in my other comments here, I'll either use `rc` or re-learn OCaml -- since it's extremely terse, compiles to very fast binaries (faster than Golang and only slightly slower than Rust) and has strong static typing.

But time will tell.

---

Thank you for the fantastic links. I opened them on my phone; I'll definitely dig into them.


aiohttp for Python is relatively performant, and Python is a fairly easy language to write.


Isn't that a library and not a CLI tool?


Yes, in the context of my comment higher in the thread about using libraries. A library gives you a lot more control over the connection pool and queues, which can get you a lot more performance than simple cURL usage. From the cURL manpage:

  curl will attempt to re-use connections for multiple file transfers, so that
  getting many files from the same server will not do multiple connects /
  handshakes. This improves speed. Of course this is only done on files
  specified on a single command line and cannot be used between separate curl
  invokes.

So using a library gives you more flexibility in reusing connections (and avoiding the expensive TCP/TLS handshake) than the simple patterns you can use with the cURL CLI.
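
As a minimal sketch of what that looks like with aiohttp (assuming Python 3.8+ and `pip install aiohttp`; the URL list is made up): a single ClientSession shares one connection pool across all requests, so the handshake is paid per host rather than per request.

  import asyncio
  import aiohttp

  # Hypothetical URL list; substitute your own.
  URLS = [f"https://internal.example.com/page/{i}" for i in range(100)]

  async def fetch(session, url):
      # All requests on one ClientSession reuse pooled connections.
      async with session.get(url) as resp:
          return await resp.read()

  async def main():
      # Cap concurrent connections so the server isn't hammered.
      connector = aiohttp.TCPConnector(limit=10)
      async with aiohttp.ClientSession(connector=connector) as session:
          bodies = await asyncio.gather(*(fetch(session, u) for u in URLS))
          print(f"fetched {len(bodies)} pages")

  asyncio.run(main())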

See also: https://docs.aiohttp.org/en/latest/http_request_lifecycle.ht...


Oh believe me, I am aware. I just didn't want to write a program for my task back then (even though, as it turned out, it would likely have taken almost the same amount of time).

That's why I am searching for an alternative to the classic shell scripting and I have almost settled on OCaml.

Python is alright as well but I never really liked it: just when you need a tool that's quick to launch, you hit the interpreter's startup overhead. The same goes for many other interpreted languages.


To be honest, you approached this problem pretty badly.


This might be a useful comment if you had spent the time to offer thoughts on how it could be better.

As it is this comment serves only to gratify your ego whereas advice might help readers. It's worse than adding nothing at all.


It was supposed to be a one-off task and I was confident at the time that I could script it quickly.

I still scripted it but it took at least half a day.

I still stand behind the idea that one has to be able to do one-off tasks without starting a dedicated programming project. That's what scripting languages are meant for.

But I did misjudge the speed at which I'll be able to finish the task, that much is true. Ironically I knew exactly what to do from the start but several small and annoying quirks of the shell scripting languages lost me quite a bit of time.


Your comment is approached pretty badly.


Because slow startup time is obnoxious for any command-line script. There's no reason to start up an entire scripting virtual machine just to make an HTTP request. No one should be writing serious CLI tools in an interpreted language.


And yet for decades serious CLI tools have been written in POSIX shell, Bash, Zsh, Awk, Tcl, Perl, Ruby and Python, and no one really complained about speed. Only now, when hardware is this fast, are people finally starting to notice slow startup times??? C'mon. Python is not Java.


I can definitely tell a difference, and it's irritating. The authors of these tools might not notice or care but I actively avoid them. Magic wormhole is one such example.



