Show HN: Ht – HTTPie Clone in Rust (github.com/ducaale)
251 points by ducaale on Feb 5, 2021 | hide | past | favorite | 96 comments



Nice work!

These days I try to avoid interpreted CLI tools, you never know when they’re going to stop working.

Rust/Go CLI tools basically work on Linux, Mac, and Windows without too much fuss.

Wish someone would rewrite Homebrew in Rust!


It's really a matter of culture.

Things like httpie, jupyter, etc. should be provided as pyz files and binaries as well, not just as pip installables, but Python devs rarely do that.

It's a shame, because CLI tools should be black boxes; people should not have to think about the implementation. I regularly see Python CLI tools that output stack traces on errors, and that should not happen.
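The fix is cheap: a single top-level handler that turns exceptions into clean error messages. A minimal sketch of the pattern (hypothetical toy script, not any real tool's code):

```shell
# A top-level try/except keeps the traceback out of the user's face
# and still signals failure via the exit code.
python3 - <<'EOF'
import sys

def main():
    raise RuntimeError("connection refused")  # stand-in for a real failure

try:
    main()
except Exception as exc:
    print(f"error: {exc}", file=sys.stderr)
    sys.exit(1)
EOF
```

This prints "error: connection refused" on stderr and exits with status 1, instead of dumping a traceback.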

Anyway, if you want those tools to always be around, I really encourage you to grab a USB stick and put them on there as pyz files and binaries. It's always nice to have them at hand.

Here is an example procedure for both (run them in a venv).

For pyz files:

    python -m pip install shiv --user # (Yes, I realize how ironic this is).
Then:

    python -m shiv httpie -o http.pyz -c http
(downloads httpie and creates an http.pyz file that contains httpie and all its dependencies, plus code to trigger the http command when called)

This will let you run "python http.pyz" anywhere you have a modern Python, and enjoy httpie without needing to pip install it.
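shiv essentially builds on the stdlib `zipapp` mechanism: a zip archive with a `__main__.py` that the interpreter can execute directly. A minimal demonstration with a toy app (not httpie):

```shell
# Build a tiny .pyz from scratch with the stdlib zipapp module.
mkdir -p demoapp
printf 'print("hello from pyz")\n' > demoapp/__main__.py
python3 -m zipapp demoapp -o demo.pyz

# The archive now runs anywhere a modern Python is available:
python3 demo.pyz   # prints: hello from pyz
```

shiv does the same, plus bundling the app's dependencies into the archive.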

For an exe (this is an example on linux, the commands are a bit different for windows and mac):

    python -m pip install nuitka httpie --user
You need a C compiler and the Python headers as well (gcc will do on Linux, e.g. sudo apt install build-essential python3-dev on Ubuntu).

Then:

    python -m nuitka $(which http) --onefile --linux-onefile-icon /usr/share/pixmaps/python3.xpm --follow-imports    --assume-yes-for-downloads

Wait quite a bit.

Now you have a nice standalone AppImage you can run with ./http.bin on any Linux system, without caring whether Python is installed, to play with httpie.

Here are the resulting files (available for 5 days) so you can see what it looks like:

https://drop.chapril.org/download/a335a8e92f98ee02/#fHC1SvOw...


Is this better than using pex?


Pyz works on Windows, but pex does not.

Nuitka produces a standalone exe so no need to install python.


This looks like something to watch out for: https://github.com/rust-lang/rust/issues/62569

Basically, Rust ignores SIGPIPE by default, so unless you use write!() (and handle the resulting broken-pipe error) or restore SIGPIPE yourself, large amounts of data going through a shell pipe can result in a panic.
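The failure mode shows up in any pipeline whose reader exits early. Here `seq` (a C tool, killed silently by SIGPIPE) stands in for the CLI tool; a Rust program that unwraps the broken-pipe write error would instead print a panic message at the same point:

```shell
# head closes the pipe after one line; seq is then killed by SIGPIPE
# while writing the rest, and the pipeline still "succeeds" quietly.
seq 1 1000000 | head -n 1   # prints: 1
```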

I wonder how many Rust CLI tools have the issue.


There's truth in that, but it depends a bit how you install it. The OS packages are very reliable, and creating your own bundle (through a venv, or through something like pex) is pretty reliable too.

(EDIT: talking about Python packages specifically, not so sure about other things)


My experience with Python has been that once you get past the initial setup (which may be tricky if you haven't figured your Python installation out yet), everything just works.

As you say, if something doesn't work due to conflicting dependencies, usually creating a virtualenv solves it. I don't bother these days, as just installing everything with --user tends to always work.


Use pipx. It's the only thing you should install with pip (and yes you should use --user to install it).


Yeah, I just read about it after seeing it around for years, and it looks fantastic. I might try it, but I've never had a problem installing apps, so I don't have a pressing need.


To be honest, the number of apps installed with python isn't really that high so you can get away with it most of the time. I like the fact that pipx solves the problem so I just do it once and forget about it.

The real problems start when people write scripts and notebooks etc that use libraries that might just happen to be installed or, even worse, install new libraries. The only solution to that is often a scorched earth approach where you vow to never install anything outside a venv again.


I also like Rust tools but Python is quite manageable using either pyz or pipx: I use the latter to install each tool in its own virtual environment. Periodically I run “pipx upgrade-all” to install updates (including injected plugins) but otherwise spend almost no time on it.


I never looked into Homebrew, as I know that it's mainly for macOS.

Since you are sort of suggesting that Homebrew would also work without fuss on Linux and Windows if implemented in Rust, I am asking myself what it would offer that users of Linux and Windows are missing.

Ordinary Linux distributions already have package management, and there are Scoop and Chocolatey for Windows.


Yeah, I can see how you could take that from my comment, it was a bit of a throwaway last line.

I mainly want it to be rewritten in a compiled language for performance reasons, it's a sluggish beast. I haven't tried to use it on Linux (where it has some support), and definitely not on Windows (winget/chocolatey would be my port of call there).

Examples of tools I use across operating systems that were written in Rust/Go, that pretty much are drop an executable somewhere and run:

ripgrep, starship, tailscale, terraform, vault, etc.

Whereas when something is written in Python, Ruby or similar, I'm rolling the dice, and I'm only an apt-get or dnf away from it breaking when I really need it.


I didn’t read their comment that way. Although Homebrew does also exist for Linux, I think the absolute majority use it on macOS.

Personally, being a user of Homebrew on macOS I’ve been thinking about wanting brew to be rewritten in Rust too. My own reason being that I think it would be faster when run, compared to the version they have of it currently which can be quite slow sometimes. And because I like Rust as a language and for its memory safety features. So I’d rather see brew implemented in Rust than in say C or C++, if a compiled language was to be used.


I don’t think that homebrew will ever get rewritten in anything other than ruby. It may be feasible to rewrite brew itself, but all the formulae are written in ruby, so moving away from it would insta-delete the entire ecosystem.


This is great! I've also come to really like curlie (https://curlie.io/). It is written in Go and provides an `httpie`-like interface with `curl`-like performance.


Curlie is actually way better because it doesn’t mess with the default curl arguments. You don’t need to (re)learn httpie’s overly verbose command line parameters if you already know curl, and you don’t need to rewrite scripts or snippets that currently use curl.


Then why not just use curl?


No pretty colors, no JSON formatting, some kind of security vulnerability every month. Maybe the last one isn't quite fair though given curl's usage and the number of security researchers looking at its code.


Curlie uses curl, so security concerns about curl are certainly not a good reason to use Curlie.


Curl does output JSON nowadays, but I should try it for the colors.


Thanks. I have previously used bat which is not just a frontend but it hasn't had commits in a while.

https://github.com/astaxie/bat


Not to be mixed with the Rust version of `cat` (but with wings): https://github.com/sharkdp/bat/


I think the biggest value of this vs httpie is that it doesn't use the system TLS libraries (openssl/gnutls/whatever) but instead uses rustls. Less attack surface, though admittedly if someone has a compromise that works on arbitrary HTTPS traffic, you're probably pretty boned anyway.


Another advantage: your installation isn't tethered to a system Python implementation that may go away and nuke the env.


pyenv/pipx is a thing...

The binaries of Rust/Go replicas should also be updated, but in my experience few package managers provide packages for apps written in those languages.


So now you need Python + pipenv + the actual program? Versus just the binary with go/rust (or any compiled language for that matter)?

I think that's exactly the kind of complexity arusahni was referring to. Being able to deploy a single binary with `go get` or `cargo build` without worrying about anything else is really convenient.

I guess it depends on the use case though, I use Python extensively at work and have little complaints.


More like `pipx install httpie`. To update all Python apps: `pipx upgrade-all`. For setting up pipx, you can use a pyenv-managed Python (e.g. the latest version), independent of whatever version the system/distro ships. And you do this only once. pipenv is for managing virtual envs, not Python itself.

`go get` and `cargo build` don't "deploy" the binary but merely build it. It's `sudo make install` vs `apt update` all over again. The point is, from an app-management POV, those languages don't magically solve anything that Python doesn't, not to mention the bigger size of the binaries built with Rust/Go.

I do choose rg/fd/fzf over python-equivalents all the time but I acknowledge that I have to manually keep track of installing/updating these programs.


Ripgrep and fzf have been part of official repos of pretty much all mainstream distros.


> I do choose rg/fd/fzf over python-equivalents all the time but I acknowledge that I have to manually keep track of installing/updating these programs.

No you don't. I'm not sure why you're spreading misinformation. And there is no viable Python equivalent to ripgrep anyway.


I meant, personally, for power utilities, I try to use statically-compiled tools rather than scripts...

What's the spreading-misinformation part about? :)


Because tools like ripgrep are in your Linux distro's package repos. You don't need to update them manually.


Well, I'm on Xubuntu 18.04 and none of them are available in the official repos, so I do manually fetch the binaries from their repos (thanks for your awesome work and for providing `deb`s for rg btw ;) ). But yeah, my observation was based on empirical evidence from older distros... things seem to have changed...

https://repology.org/project/ripgrep/versions https://repology.org/project/fd-find/versions https://repology.org/project/fzf/versions

Still, I find quite a lot of Rust apps that I'd like to use where there is no distro package and the repos don't provide a binary, a deb, or a PPA. At least with Python repos, because of the nature of scripting vs compiled languages, I can simply isolate anything with a simple `pipx install app`.


I don't know what pipx is, but `cargo install ripgrep` works.


"Versus just the binary" .. "with `go get` or `cargo build`"

Here's how I've installed `ht`:

   $ cargo install ht
Here's how I've installed httpie:

   $ pipx install httpie
If we assume the corresponding tools are already installed/configured (such as go/cargo), then what is this complexity you are speaking of?


Isn't `cargo install` similar to `make install`? It is usually advised to avoid managing binaries in system paths independently of the distro package manager.

Also, sometimes there is no `make uninstall`. Does Rust enforce an uninstall routine in the manifest?

With pipx, every app is contained in its own venv automatically and does not interfere with distro-managed packages.


In the sense that it is not a replacement for a real package manager, yes. It is both slightly more restrictive and also not: as the sibling says, it installs to a particular directory, which makes uninstalling easy. However, there is an install hook but no uninstall hook, so if that hook puts files somewhere (which in theory it can), they won't be removed on uninstall. This also kinda sorta means people don't do that in the install hook.


No, cargo install installs to $HOME/.cargo/bin


And this is why I refuse to install any Python CLI and just look for an alternative instead.

Who wants to deal with pyenv or whatever when you can just get a static binary?


IIRC I'd need both pipx and pyenv, because pipx alone will use whatever Python distribution is in my path.


Thank you so much for doing this! I can feel the slow startup time of httpie every time I use it. ht has made it disappear!

For users of Arch Linux, I submitted an AUR package: https://aur.archlinux.org/packages/ht-bin/.


Damnit AUR, it really has got everything.

*Mumbles Ubuntu-based user*


Wait, fyi it is actually 404ing rn.


For any Arch user running into this thread, wanting to install this: the project went through a rename (to xh), and a package has been added to the Arch "community" repo.

So all you need to install this now is:

    pacman -Syu xh


Thank you!


Nice work, saw this referenced on another Show HN recently: https://news.ycombinator.com/item?id=25939042

Adding HEAD as one of the methods would be welcome so we could get headers without pulling the whole page. Also, I did find the arguments for the -p switch in the source, but that should probably go in the README too.


That's my project :-) Ht was indeed very useful as it was trivial to compile it for armv7, statically linking openssl. Thanks @ducaale!


Thanks for the feedback and the link :).


I took a stab at something like this a few years back (https://github.com/saghm/rural), but yours is much fuller-featured. I'll definitely try this out soon!


> The release page contains prebuilt binaries for Linux, macOS and Windows.

Thank you so, so much.


Very nice! httpie is my favorite HTTP client, and it's always nice if command line tools run a bit faster.

Unfortunately the name clashes with a binary that's part of TeXlive:

    $ pacman -Qo /usr/bin/ht
    /usr/bin/ht is owned by texlive-core 2020.57066-1

But I guess then the packagers will need to rename the "ht" binary to "http" (and add a conflict with httpie) when packaging.


Some httpie tricks:

Use --offline to see the request but not perform it:

    http --offline get google.com
    GET / HTTP/1.1
    Accept: application/json, */*;q=0.5
    Accept-Encoding: gzip, deflate
    Connection: keep-alive
    Content-Length: 5
    Content-Type: application/json
    Host: google.com
    User-Agent: HTTPie/2.4.0
Pipe stuff to httpie to put it in the payload:

    echo "Hello HN" | http --offline get google.com
    GET / HTTP/1.1
    Accept: application/json, */*;q=0.5
    Accept-Encoding: gzip, deflate
    Connection: keep-alive
    Content-Length: 13
    Content-Type: application/json
    Host: google.com
    User-Agent: HTTPie/2.4.0

    Hello HN
And use http-prompt, because it's awesome: https://github.com/httpie/http-prompt


Very nice work, seems really fast! Since it's written in Rust, do you plan to release a Windows version?

HTTPie requires Python, so Ht could have another advantage in being easier to install.

We are also developing an HTTP client in Rust using curl (it's a crowded space!). It's still very young, but I'll link it here anyway: https://hurl.dev

[edited] there is already a Windows version, so really good job!

> The release page contains prebuilt binaries for Linux, macOS and Windows.


> HTTP client in Rust using curl

Which could possibly use Hyper under the hood :-P

https://github.com/curl/curl/wiki/Hyper


Hate to be that guy, but it's a bit weird to pick the HT part of HTTP, which stands for HyperText, when the tool only deals with the Transfer Protocol.


I’m intrigued by the current version number and what seems to be a quick pace of development. I see that you have a “HTTPie feature parity checklist” (issue #4) on the issues list. Is that the sole indicator (aside from bugs) of what a version 1.0 would be like?

Also, please post another Show HN when the backlog is almost done.


Thanks, I will try to post it one more time once the tool has stabilized.

>I see that you have a “HTTPie feature parity checklist” (issue #4) on the issues list. Is that the sole indicator (aside from bugs) of what a version 1.0 would be like?

Yes, that pretty much sums it up. The list might also expand if I notice other HTTPie features I forgot about.


Compiled vs interpreted -- is it faster?


It definitely is.

  % hyperfine 'http get EXAMPLE.COM' 'ht get EXAMPLE.COM' -w 10 -r 100

  Benchmark #1: http get EXAMPLE.COM
    Time (mean ± σ):     200.8 ms ±   7.4 ms    [User: 178.9 ms, System: 21.1 ms]
    Range (min … max):   166.5 ms … 221.0 ms    100 runs

  Benchmark #2: ht get EXAMPLE.COM
    Time (mean ± σ):      22.7 ms ±   3.2 ms    [User: 8.7 ms, System: 5.6 ms]
    Range (min … max):    15.6 ms …  31.3 ms    100 runs

  Summary
    'ht get EXAMPLE.COM' ran 8.84 ± 1.27 times faster than 'http get EXAMPLE.COM'


Wow. That's a serious difference; I didn't expect that for fetching a URL.


CPython startup time might account for a good fraction of that.


Yup, performance was one of the reasons I decided to port it to Rust. The other reason being that Rust gives you a single binary that is easier to deploy compared to python.


Nice work! As shared down-thread, I really loved and wanted to use `httpie`, but its non-trivial startup time put me off. Very happy that you made a Rust alternative, because I really don't like `curl` that much -- it requires quite a few incantations for non-trivial requests. `ht` and `httpie` definitely improve ergonomics in important places.

So, kudos!


Why do you need more performance from a CLI test tool? I'm honestly curious.

It's basically curl (fast) plus simpler and easier interface and pretty printers. For performance in shell scripts you can use curl, and for troubleshooting the IO time dominates anyway.


Why use something slower when an equivalent faster tool is available?

There's definitely a noticeable delay on my machine with starting up the python interpreter. Enough that it dominates most actual request times to fast servers. (`http get www.google.com` is ~460ms while `ht get www.google.com` is ~130ms)

For a tool I'm constantly using to check APIs I'm developing, I really appreciate snappy commands that give me results that feel instantaneous.


It adds up. I once tried to scrape an internal company website / API because we had no PDF exports and wanted to download + export to PDF, with `httpie`. It's an amazing tool but all the startup times compounded pretty badly. I switched to `curl` (which is a bit more pain until I got the full command line right, granted) and the script finished in ~95 minutes as opposed to the ~202 minutes it took with `httpie`.

With about 0.5s startup time, it means every 120 requests add a full minute to the final time. And I had to scrape ~125K URLs back then.
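As a quick sanity check of that claim, at roughly 500 ms of interpreter startup per invocation:

```shell
# 120 invocations x 500 ms of startup overhead = one extra minute
echo "$((120 * 500)) ms"   # prints: 60000 ms
```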

For a daily casual flow 0.5s startup time might not be much (although people like myself get irritated by that as well but I do recognize it's a minor inconvenience). But when doing mass-scripting such delays can very easily compound to non-trivial time inefficiencies.


I think people need to start distinguishing between CLI and TUI as TUI aren't really meant for scripting but CLI tools are.


Be that as it may, I still liked `httpie` (and `ht`) more than `curl` even though `curl` seems superior in terms of configurability.


Why don't you use cURL then? cURL is written in C, which surely has the performance you're looking for.


I did just that in the end. It took me some fiddling with headers and form parameters and file uploads, and it worked.

It's just that httpie's (and thus ht's) CLI usage is a bit more ergonomic.


For mass-scripting I'd prefer to use a native HTTP library with connection pooling.


Sure, I can agree with that somewhat.

Still, it was a one-off thing and I didn't want to turn it into a project. Using a for loop, `curl`, `wkhtmltopdf` and `parallel` eventually did the job.


I for one am totally on board with writing atrocious grep/sed/awk pipeline mashups, doing whatever it takes to get a one-off job done with as little programming as possible. Sometimes the important parameter to optimize is brain cycles spent, or time to result, not CPU time or optimal usage of network resources.


Absolutely. I'd even argue that the brain cycles spent should be the first and most important metric for one-off tasks.

That's why I recently asked people here about what to use if not shell scripting. I think I'll either use a less confusing scripting language that is almost like bash/zsh -- namely `rc` -- or I'll brush off my OCaml skills.

That need was borne out of my frustration that I routinely spent non-trivial amounts of time on one-off scripts so at one point I got fed up and started looking for alternatives.


I currently have the dynamic language blues – Python to me just doesn't feel like as much fun as it used to. Maybe it's getting too big and/or too crufty. At the moment I'm having great fun with Rust instead, which feels as magical as Python and C did when I was twenty years younger.

That said, bash/sh are really good at setting up pipelines to pass around lines of text between Unix tools. That is a really cool thing, and most languages don't really make this as first class concept. Because bash does, and I have twenty years of experience with it, I mostly put up with Bash. It's not a great language but it gets the job done, literally.

I recently read some stuff about Julia which made me interested in it as a potentially better Bash (and hopefully more fun than Python) –

https://julialang.org/blog/2012/03/shelling-out-sucks/

https://julialang.org/blog/2013/04/put-this-in-your-pipe/

https://docs.julialang.org/en/v1/manual/running-external-pro...


> I currently have the dynamic language blues...

Same, I used to be a Ruby (and Rails) fanatic a long time ago but this has subsided. Nowadays I love Elixir and I reach for it in many scenarios where I don't expect CPU-bound workloads but I am acutely aware that the startup times (when scripting) are atrocious, plus working with it in business context for 4.5 years made me painfully aware of the problems caused by a lack of static typing.

> At the moment I'm having great fun with Rust instead

Same, I love it more than I loved almost any other tool I tried in my life. Just not sure how economical it is to reach for it for throwaway one-off tasks. I suppose with more experience in it it would become feasible.

> It's not a great language but it gets the job done, literally.

Agreed, I have achieved a lot with bash and `parallel` alone. But at one point it became a death-by-a-thousand-paper-cuts thing; I know I can do the task, but I can never quite remember all the little annoying details: how to iterate over numbers, or over a file list from a previous command, or whether there was anything specific about piping inside scripts (there is not, but you have to be mindful of settings like `set -euo pipefail`), etc.
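That `set -euo pipefail` detail is a good example of the paper cuts: by default, a failing stage in the middle of a pipeline is silently swallowed, which is easy to forget. A minimal demonstration:

```shell
# A pipeline's exit status is normally that of its LAST command,
# so the failure of `false` goes unnoticed:
bash -c 'false | true'; echo "default:  $?"    # prints: default:  0

# With pipefail, any failing stage fails the whole pipeline:
bash -c 'set -o pipefail; false | true'; echo "pipefail: $?"   # prints: pipefail: 1
```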

It might be me getting stupider and less tolerant with age but you know what? I am okay with that. These tools / languages should be intuitive and easy to remember!

I am not abandoning bash and zsh scripting, ever. But my tolerance towards banging my head against the wall was severely reduced lately. And, as mentioned in other comments of mine here, I'll either use `rc` or re-learn OCaml -- since it's extremely terse, compiles to very fast binaries (faster than Golang and only slightly slower than Rust) and has static strong typing.

But time will tell.

---

Thank you for the fantastic links. Opened them on my phone, they'll definitely be researched.


aiohttp for Python is relatively performant and in a fairly easy to write language.


Isn't that a library and not a CLI tool?


Yes, in the context of my comment higher in the thread about using libraries. This gives you a lot more control of the connection pool and queues, which can get a lot more performance than simple cURL usage. From the cURL manpage:

  curl will attempt to re-use connections for multiple file transfers, so that getting 
  many files from the same server will not do multiple connects / handshakes. This 
  improves speed. Of course this is only done on files specified on a single command 
  line and cannot be used between separate curl invokes.
So using a library gives you more flexibility in reusing connections (and avoiding the expensive TCP/TLS handshake) than the simple patterns you can use with the cURL CLI.

See also: https://docs.aiohttp.org/en/latest/http_request_lifecycle.ht...


Oh believe me, I am aware. I just didn't want to write a program for my task back then (even if it would have likely taken almost the same amount of time as it turned out in the end).

That's why I am searching for an alternative to the classic shell scripting and I have almost settled on OCaml.

Python is alright as well but never really liked it: just when you might need a tool that's quick to launch, you'll hit the Python interpreter startup overhead. Same goes for many other interpreted languages.


To be honest, you approached this problem pretty badly.


This might be a useful comment if you had spent the time to offer thoughts on how it could be better.

As it is this comment serves only to gratify your ego whereas advice might help readers. It's worse than adding nothing at all.


It was supposed to be a one-off task and I was confident at the time that I can script it quickly.

I still scripted it but it took at least half a day.

I still stand behind the idea that one should be able to do one-off tasks without starting a dedicated programming project. It's what scripting languages are meant for.

But I did misjudge the speed at which I'll be able to finish the task, that much is true. Ironically I knew exactly what to do from the start but several small and annoying quirks of the shell scripting languages lost me quite a bit of time.


Your comment is approached pretty badly.


Because slow startup time is obnoxious for any command-line script. There's no reason to start up an entire scripting virtual machine just to make an HTTP request. No one should be writing serious CLI tools in an interpreted language.


And yet for decades serious CLI tools have been written in POSIX shell, Bash, Zsh, Awk, Tcl, Perl, Ruby, and Python, and no one really complained about speed. Now, when hardware is so fast, people are finally starting to notice slow startup times??? C'mon. Python is not Java.


I can definitely tell a difference, and it's irritating. The authors of these tools might not notice or care but I actively avoid them. Magic wormhole is one such example.


Sorry about asking this, I don't mean to be mean or anything, please don't take offense on this - I'm honestly intrigued.

I would like to understand where a question like this comes from. I understand different people have different experiences and backgrounds, but being on this site I would assume this would be kinda obvious.

Can you share anything about your experience that can help me understand it?


FWIW, as a developer who's worked in both interpreted and compiled languages (but not Python), it's obvious to me that the compiled version will be faster, but not that it'd be meaningfully faster. I'd actually expect network latencies to dominate here in many cases.

As another commenter mentioned I suspect what's being measured here is the Python startup time rather than performance once things are running.

My guess would be that a fast-starting interpreter (like Lua or V8) would rival ht's performance, while a slow-starting runtime like the JVM would be as "slow" as httpie, or slower.


I agree with you, and if he had asked if it was meaningfully faster, it wouldn't have struck me as confusing as it did.


I know many people who wouldn't necessarily understand that compiled programs tend to be faster, because it just hasn't come up, or because in their domain it's not really important or even clear.

If you've been working primarily with Javascript, for example, or even worse, Typescript, the question of "is this thing compiled or interpreted" is "yes". Compilation becomes a piece of either build tooling, where the artifact isn't a concrete binary, or even an attribute of runtime.

It really wouldn't be too hard to imagine a shop built on Javascript and Python, where compilation just doesn't factor into the system in a meaningful way.


> the question of "is this thing compiled or interpreted" is "yes"

Isn’t that answer applied to every programming language?


Thanks, I see what you mean. I think the TypeScript example makes a great point on the difference of perspectives.


While I would expect the answer to almost certainly be "yes", theory and practice do not always match up.

The "compiled vs interpreted" qualification was meant to preempt responses along the lines of "Of course it's faster -- it's compiled!"

I'm curious -- were the question simply "Is it faster?" how would you have responded?


How could it not be?


Really nice. It needs a style that is suitable for a terminal with white background though.



