If you want to build a backend in Rust, Axum (which uses hyper underneath) is widely recommended these days, as it's all in the tokio ecosystem. Actix Web is good too, but it has its own ecosystem of libraries. I read the book Zero To Production in Rust [0], which is a great overview of not just Rust but scalable backend architectures as a whole.
Interestingly, Cloudflare wanted to use hyper but found that it was too correct, so they had to build their own [1].
I tried warp [0] and I am unimpressed so far. Pretty complex, limited documentation, buggy. The builder paradigm they used feels pretty constrained and, in my opinion, achieves the opposite of the simplicity it is supposed to bring. I was surprised it is so popular.
Maybe I need more time or a favorable comparison to another framework to appreciate it.
Not sure why you used warp; it's not really recommended by anyone these days for Rust backends. Try Actix Web or Axum: both have decent documentation and even online courses on YouTube, which is how I mainly learned them.
We interestingly ran into issues with actix and the AWS lb. The lb takes some liberty with how it handles connections, and actix seems to be "to the spec", so we were seeing a lot of dropped connections. Placing nginx between the two resolved the issue, but it's fairly disappointing that we need that layer when it should be unnecessary overhead. I'd love to give axum a try, if we could find the time, and see if it behaves better.
> Interesting, I'm curious about the details here. Does the lb reuse connections for multiple requests or something?
I'm curious too. There must be more to it than that because LBs reusing backhaul connections is standard practice. It's not only an optimization but in many cases you'll quickly hit ephemeral port exhaustion if you don't. TCP connections are distinguished by (src_ip, src_port, dst_ip, dst_port) tuple. For this leg you're probably only varying the src_port portion, and all of the valid options are cooling in time_wait state.
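The exhaustion math can be sketched back-of-envelope. This assumes Linux's default ephemeral port range and a typical 60s TIME_WAIT; both are tunable, so treat the numbers as illustrative only:

```rust
// Back-of-envelope sketch (assumed defaults): on the LB -> backend leg,
// src_ip, dst_ip and dst_port are fixed, so only src_port varies. Linux's
// default ephemeral range (net.ipv4.ip_local_port_range = 32768..=60999)
// gives ~28k options, and each closed connection parks its port in
// TIME_WAIT (~60s), capping the sustainable rate of *new* connections.
fn main() {
    let ephemeral_ports = 60999 - 32768 + 1;
    let time_wait_secs = 60;
    let max_new_conns_per_sec = ephemeral_ports / time_wait_secs;
    println!(
        "{} ports / {}s TIME_WAIT ~ {} new conns/sec without connection reuse",
        ephemeral_ports, time_wait_secs, max_new_conns_per_sec
    );
}
```

A few hundred new connections per second is easy to exceed behind a busy LB, which is why backhaul connection reuse is standard practice.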
The first time I used SQLx I was completely blown away by this. It’s an incredibly useful feature that I’ve never seen before and now I’m actually spoiled by it!
The only downside is that you have to install sqlx-cli and run a couple of special commands (cargo sqlx prepare, then build with SQLX_OFFLINE=true) so that your binary compiles offline or on CI/CD, where you don't provide that database connection.
I use Prisma Client Rust [0]; coming from the Node.js world, it works great. Some people like to use sqlx though. I'm not too big into writing SQL for every little thing.
Async is not necessarily as useful for database connections since connection counts will generally be in the 10s and handled with a thread pool. It's pretty low overhead to just use pooling.
Yea but you still want a good async interface if you're already using async, even if it's mocked over an underlying threadpool. Otherwise you have to wrap all your DB clients in your own adhoc async wrappers which feels kinda bad, imo.
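The "ad hoc wrapper" shape is roughly this. A pure-std sketch with a hypothetical blocking driver call, using a worker thread plus a channel; in real async code you'd reach for something like tokio's spawn_blocking instead:

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of wrapping a blocking DB call so callers get a handle they can
// wait on later, instead of blocking the calling thread themselves.
// The format! call is a stand-in for a hypothetical blocking driver query.
fn query_in_background(sql: String) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // stand-in for the actual blocking driver call
        let rows = format!("rows for: {}", sql);
        let _ = tx.send(rows);
    });
    rx
}

fn main() {
    let rx = query_in_background("SELECT 1".to_string());
    // caller does other work here, then collects the result:
    println!("{}", rx.recv().unwrap());
}
```

Writing this glue yourself for every client is exactly the "feels kinda bad" part; a library that ships an async interface saves you from it.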
I've used diesel async for personal projects, and while it does work seamlessly with diesel at times, there are some rough edges - I ran into an odd issue where the order of imports from the base diesel package and diesel async mattered, and it would compile with one above the other, but not the other way around.
"A fast and correct HTTP implementation" makes sense for Deno, as Deno was created by the same person who created Node.js, and the correctness of Node's HTTP server was a point of difference from other scripting languages.
Oh I mean Deno's HTTP server is excellent in the same way Node's is. That isn't cited as one of the primary reasons for creating Deno. However Deno is built on a different platform (Rust rather than C) so it uses a different HTTP library.
I'm pretty sure Deno is contributing significantly to Hyper.
Deno's HTTP server is more balanced when it comes to tradeoffs than Node's, I think. From what I have heard, Node is optimized for IO-bound programs.
When I'm somewhat dialed in to a subject, I'll frequently notice some concept or tech mentioned in one discussion will in turn get its own thread. Probably the same thing here but I was not privy to the original conversation.
How does this compare to Rocket (https://rocket.rs/)? I've just started using Rocket, and it seems to provide everything that Hyper does, and a lot more out of the box too.
Others have explained the difference, but I wouldn't use Rocket at all, to be honest. It's barely maintained, the maintainer left for some time without telling anyone and without giving any repository access to other contributors, and it has a bus factor of one. I'd recommend Axum since it's backed by the Tokio contributors which makes it much less likely to be unmaintained later on.
The 0.5 release is literally years in the making and there's no indication that it'll be done any time soon. In a 2020 comment the creator said most of it was finished, and it's still not done.
Hyper is a relatively low level HTTP library, while Rocket is a web server framework. So Hyper doesn't concern itself with things like routing, authentication, state management, template rendering, static file hosting and so on, which Rocket provides. Hyper is only focused on being a good implementation of the HTTP protocol and serving as a base for higher level libraries and applications. You could use Hyper to build a framework like Rocket - though AFAIK Rocket uses its own HTTP implementation.
It's kind of like that difference between Photoshop and libpng.
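The layering can be sketched in a few lines. This is a toy illustration, not hyper's or Rocket's actual API: a hyper-style "service" is essentially one function from request to response, and a framework layers routing (plus extraction, state, templates, etc.) on top of that shape:

```rust
// Toy types standing in for real HTTP types; no networking involved.
struct Request {
    method: String,
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// The "hyper level": a single opaque handler for every request.
type Service = fn(&Request) -> Response;

// The "framework level": routing implemented on top of that handler shape.
fn router(req: &Request) -> Response {
    match (req.method.as_str(), req.path.as_str()) {
        ("GET", "/") => Response { status: 200, body: "home".into() },
        ("GET", "/users") => Response { status: 200, body: "user list".into() },
        _ => Response { status: 404, body: "not found".into() },
    }
}

fn main() {
    let svc: Service = router;
    let resp = svc(&Request { method: "GET".into(), path: "/users".into() });
    println!("{} {}", resp.status, resp.body);
}
```

Everything inside `router` is what a framework gives you; everything below that type boundary (parsing bytes off the wire into a Request, writing the Response back) is what hyper concerns itself with.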
Any recommendations for rust template engines? I'd like something that can easily render labeled fragments of a template instead of requiring me to split a page into a dozen little files. Kinda like inline {{block}} definitions in Go's html/template. Speed is also nice.
From template-benchmark-rs [0] I found sailfish [1] (fast, but no fragments(?)). render-rs [2] and syn-rsx [3] both let you write html in rust macros which is cool (maybe that can substitute for fragments?). Then there's gtmpl-rust [4] which is just Go templates reimplemented in rust.
Sure. They're called 'partials' sometimes. Useful if you want to rerender just part of a page. This is a pattern used by HTMX, a 'js framework' that accepts fragments of html in an http response and injects it into the page. This is good because it avoids the flash and state loss of a whole page reload. See the HTMX essay on template fragments for a more complete argument [0].
This is a go template for an interactive todos app [1] that I'm experimenting with. The html content of the entire page is present in one template definition which is split into 6 inline {{block}} definitions / "fragments". The page supports 5 interactions indicated by {{define}} definitions, each of which reuse various block fragments relevant to that interaction. I'm in the process of converting it to use embedded cozodb [2] queries which act as a server side data store. The idea here is that the entire 'app', including all html fragments, styles, http requests and responses, db schema, and queries are embedded into this single 100-line file.
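The core of the fragments idea fits in a tiny sketch. This is a hypothetical mini-engine in Rust (not Go's html/template or any real crate): one template made of named blocks, where a handler can render the whole page or just one block, which is exactly what HTMX-style partial responses need:

```rust
use std::collections::HashMap;

// Render a single named block, substituting {{var}} placeholders.
// Toy implementation: naive string replacement, no escaping or nesting.
fn render_block(
    blocks: &HashMap<&str, &str>,
    name: &str,
    vars: &HashMap<&str, &str>,
) -> String {
    let mut out = blocks.get(name).copied().unwrap_or("").to_string();
    for (k, v) in vars {
        out = out.replace(&format!("{{{{{}}}}}", k), *v);
    }
    out
}

fn main() {
    // One "template" holding several named fragments, in one place.
    let mut blocks = HashMap::new();
    blocks.insert("item", "<li>{{title}}</li>");
    blocks.insert("page", "<ul>{{items}}</ul>");

    let mut vars = HashMap::new();
    vars.insert("title", "buy milk");

    // An HTMX-style request for just the new item renders only that block:
    println!("{}", render_block(&blocks, "item", &vars));
}
```

The point is that both fragments live in the same definition; the engine just needs an entry point that renders one of them by name instead of the whole page.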
Why is it that "it's correct" is something worth mentioning? Presumably if it's not correct it's either not an implementation, or not known that it's not correct.
For HTTP, being pedantically correct is a tradeoff. On the plus side, you prevent weird potentially unexpected behaviors by cleanly rejecting off-spec behavior instead of accepting potentially broken input and seeing what happens.
But you also can't interact with everyone else anymore, because a whole lot of people run slightly off-spec software, successfully, and sometimes without knowing it.
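One concrete instance of that tradeoff: RFC 9112 forbids whitespace between a header field name and the colon, but real traffic contains it anyway. A sketch of the two postures (illustrative only, not hyper's actual parser):

```rust
// Strict: reject off-spec input cleanly, per RFC 9112 (no whitespace
// allowed between the field name and the colon).
fn parse_header_strict(line: &str) -> Option<(&str, &str)> {
    let (name, value) = line.split_once(':')?;
    if name.ends_with(|c: char| c.is_whitespace()) {
        return None; // "Host : example.com" is rejected per spec
    }
    Some((name, value.trim()))
}

// Lenient: accept what's out there and normalize it.
fn parse_header_lenient(line: &str) -> Option<(&str, &str)> {
    let (name, value) = line.split_once(':')?;
    Some((name.trim(), value.trim()))
}

fn main() {
    println!("{:?}", parse_header_strict("Host : example.com"));  // rejected
    println!("{:?}", parse_header_lenient("Host : example.com")); // accepted
}
```

The strict parser protects you from request-smuggling-style ambiguity; the lenient one interoperates with more of the slightly off-spec software in the wild. Neither choice is free.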
It's a good question with an unsatisfying answer. Networking is kind of difficult.
Sitting down with an RFC and coding up what it says is nowhere near as simple as it seems like it should be:
* RFCs are often ambiguous: I've seen teams implement a protocol in a way that certainly seems to follow the RFC but won't be widely interoperable.
* RFCs are often incomplete: many protocols are specified across a lot of different RFCs, by different authors, so it's easy to miss important details or even whole RFCs, exacerbating the above point.
* RFCs are regularly released as protocols evolve, often invalidating older versions of the protocol (or early experimental versions of it). Sometimes the newest RFC does a lot of work to disambiguate what older versions would have allowed; see for example this recent RFC: https://httpwg.org/specs/rfc9110.html
* Independent of what the RFC says, there's what Cisco does (or MS or Google or other large, influential implementors).
* A lot of protocol implementations don't implement the full protocol (that is various extensions or rarely used features).
* A lot of protocol implementations implement what the RFC says in the most commonly agreed-upon way, plus compatibility options for other commonly used implementations, plus some oddball interpretations of the RFC that the authors like.
* There's vendor-specific extensions.
* There's common but unspecified behaviors that implementations tend to converge on, but which aren't easy to intuit.
* There's implementations that handle mixtures of versions (e.g. implementations of http 1.1 that handle 1.0 or 0.9 just fine are common).
And so on. So to answer the question "is 'it's correct' worth mentioning?" - yes, something along these lines tends to be important.
Of course "what does correct mean" is a giant can of worms....
In the case of hyper, I think it means: an attempt to be extremely precise about what is allowed (and as close to the disambiguated RFCs as possible), and strict as well, not being particularly loose in what it accepts as input. That's my interpretation anyway.
"Sitting down with an RFC and coding up what it says is nowhere near as simple as it seems like it should be"
I learned this for myself when I tried coding an IRC server for fun. Quickly found that I made more progress, faster by just using Wireshark to see what an established server was doing and copying that.
I share the same general sentiment, but in the case of HTTP, many implementations are forgiving of conventional usage that may not match the specs; so I guess it's worth mentioning that this implementation is strictly by the book.
Yes, "by the book" is a very commonly used metaphor in English, even when no book is involved.
It's roughly equal to saying that you are doing something strictly by the rules (and the rules don't have to be contained in a book or even exist in text form).
Some uses would be like: "My boss insists on doing everything by the book".
I don’t know if it’s what is being referred to here, but “provably correct software” is a whole thing. It takes the idea of software tests to the next level, and is very important in some contexts.
Appears to have heavy use of `async`, etc. Has anyone gone about creating JNI bindings for an `async` API? I imagine you start the pool in a context object and then access the APIs through that via some callback mechanism? I'd love to read someone's example code if you're willing to share.
"ureq" is a bit different from Hyper; Hyper is lower level. In the same category as "ureq" there is "reqwest", which is built on top of Hyper and also has a "blocking" option (i.e., without async).
[0] https://www.zero2prod.com
[1] https://blog.cloudflare.com/how-we-built-pingora-the-proxy-t...