Introduction to HTTP Multipart (adamchalmers.com)
79 points by adamch on April 25, 2023 | 17 comments



The article talks about multipart/form-data in particular.

Another thing one might run across is multipart/x-mixed-replace. I wrote a crate for that. [1] I didn't see a spec for it, but someone has since pointed out to me that it's probably identical to multipart/mixed, and now, seeing an example in the multer README, it clicks that I should have looked at RFC 2046 section 5.1.1, [2] which says this:

> This section defines a common syntax for subtypes of "multipart". All subtypes of "multipart" must use this syntax.

...and written a crate general enough for all of them. Maybe I'll update my crate for that sometime. My crate currently assumes there's a Content-Length: for each part, which isn't specified there but makes sense in the context I use it. It wouldn't be hard to also support just the boundary delimiters. And then maybe add a form-data parser on top of that.
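
To illustrate (a made-up example, not taken from any spec or README): the common syntax is just a boundary line between parts, each part carrying its own headers, with the final boundary marked by a trailing "--". With a Content-Length: per part, as my crate expects, it looks something like:

    Content-Type: multipart/x-mixed-replace; boundary=B

    --B
    Content-Type: image/jpeg
    Content-Length: 10240

    <10240 bytes of JPEG>
    --B
    Content-Type: image/jpeg
    Content-Length: 10172

    <10172 bytes of JPEG>
    --B--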

Btw, the article also talks specifically about proxying the body. I don't get why they're parsing the multipart data at all. I presume they have a reason, but I don't see it explained. I'd expect that a body is a body is a body. You can stream it along, and perhaps also buffer it in case you want to support retrying the backhaul request. You'd probably stop buffering at some byte limit, beyond which you give up on the possibility of retries, because keeping arbitrarily large bodies around (in RAM, or even spilling to SSD/disk) doesn't sound fun.

[1] https://crates.io/crates/multipart-stream

[2] https://datatracker.ietf.org/doc/html/rfc2046#section-5.1.1



> Another advantage of multipart is that the server can stream each part separately. For example, say you're uploading 5 files by encoding them into a JSON object. Your server will have to buffer the entire JSON object into memory, decode it, and examine each file.

That's not true. You can stream JSON, too. You just have to do something fancier than JSON.stringify().
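
For example, here's a rough sketch (mine, not the article's) of streaming a JSON array of files in Rust, base64-encoding each file straight into the output so only a small buffer is ever held in memory. It assumes the base64 crate's 0.21-style engine API, and the file paths are made up:

    use std::fs::File;
    use std::io::{self, Write};

    use base64::engine::general_purpose::STANDARD;
    use base64::write::EncoderWriter;

    // Stream a JSON array like [{"name":"a.bin","data":"<base64>"}, ...]
    // without ever buffering a whole file, let alone all of them.
    fn stream_files_as_json<W: Write>(paths: &[&str], mut out: W) -> io::Result<()> {
        out.write_all(b"[")?;
        for (i, path) in paths.iter().enumerate() {
            if i > 0 {
                out.write_all(b",")?;
            }
            // Assumes `path` needs no JSON string escaping; use a real encoder otherwise.
            write!(out, "{{\"name\":\"{}\",\"data\":\"", path)?;
            {
                // Base64-encode the file into the output in small chunks.
                let mut enc = EncoderWriter::new(&mut out, &STANDARD);
                io::copy(&mut File::open(path)?, &mut enc)?;
                enc.finish()?; // write any trailing base64 padding
            } // drop `enc` so `out` can be used again
            out.write_all(b"\"}")?;
        }
        out.write_all(b"]")
    }

A server can do the mirror image with an event- or pull-based JSON parser; the point is just that nothing forces you to buffer the whole object.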


I wonder why we didn't use a framed representation rather than delimiters that have to be searched for (which isn't so simple). It makes writing a streaming MIME parser much harder. With a compulsory Content-Length things would be much easier.

At least with multipart/form-data we get to avoid transfer encodings, which are also quite annoying to handle (especially as they can be nested, which is probably the worst aspect of RFC 2046).


I agree Content-Length should be mandatory. It's not even specified. Arguably and unfortunately, with the standard as-is, parsers shouldn't even use it when it's there, because if a boundary appears before that length is reached, your results will differ from other parsers'. (My crate expects it, though.)

> [Not having Content-Length:] makes writing a streaming MIME parser much harder.

I don't think that's a big problem for Rust application servers where there are nice crates for efficient text searching you can plug in. Maybe more so for folks doing low-dependency and/or embedded stuff, especially in C.

But IMHO it's just dumb, when you want to send arbitrary data, to have to come up with a random boundary that you hope isn't in the data you're sending. With a strong random number generator you can do this to (un)reasonable statistical confidence, but it shouldn't be necessary at all.
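
For what it's worth, here's roughly what that workaround looks like as a sketch (using the rand and memchr crates; none of this comes from any particular HTTP library):

    use memchr::memmem;
    use rand::Rng;

    // Pick a boundary of 30 random alphanumerics (~178 bits of entropy), and
    // make the "hope it's not in the payload" part explicit by checking.
    fn make_boundary(payload: &[u8]) -> String {
        const CHARSET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        let mut rng = rand::thread_rng();
        loop {
            let boundary: String = (0..30)
                .map(|_| CHARSET[rng.gen_range(0..CHARSET.len())] as char)
                .collect();
            // Parsers search for "--boundary", so that's the string that must not appear.
            let delimiter = format!("--{boundary}");
            if memmem::find(payload, delimiter.as_bytes()).is_none() {
                return boundary;
            }
        }
    }

Of course, if you're streaming and don't have the whole payload up front, you can't even do the check; you just trust the entropy.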


I speculate that it's because it would be easier for humans to read and write boundaries than to count bytes.

Multipart predates HTTP/1.0 and was written for email. It wasn't unheard of in the early days to directly enter SMTP commands. It would also be more readable on clients that didn't support MIME.

https://datatracker.ietf.org/doc/html/rfc1945#section-3.6.2

https://datatracker.ietf.org/doc/html/rfc1521#section-7.2.1


Streaming. What if the part(s) are being generated on the fly and don't yet exist in full, so there's no length to give? Requiring the sender to specify the length up front requires the length to be known. Maybe I'm (e.g.) tarring/zipping up a lot of data, and I don't want to hold it all in memory or write it to a temporary file first.


Shameless plug for my multipart crate: https://github.com/cetra3/mpart-async, which I've been using happily in production for a long time now.


One thing the article doesn't mention, but that's also interesting, is that multipart is heavily used for email. With multipart, you can send an HTML version along with a plain text version of your email. Email attachments are also handled via multipart. So I think this explains some of the decisions that were made when specifying HTTP Multipart.
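
Roughly, such an email body looks like this on the wire (illustrative, with most headers trimmed):

    Content-Type: multipart/alternative; boundary=sep

    --sep
    Content-Type: text/plain; charset=utf-8

    Hello in plain text.
    --sep
    Content-Type: text/html; charset=utf-8

    <p>Hello in <b>HTML</b>.</p>
    --sep--

Attachments work the same way, typically under multipart/mixed with a Content-Disposition: attachment header on each attached part.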


In just the past week I used multipart/form-data to proxy an incoming file upload stream to another backend server, which then streamed it to a cloud bucket. It's working great so far and was fairly simple to set up!


Does HTTP/2 have support for parallel uploads? As an end user, if I happen to select 1 huge file first, followed by a bunch of tiny files, having to wait for the entire huge file to be streamed to the server before it even knows about any of the others seems less than ideal. It also seems odd that there's no way to provide a hint to the server about individual file sizes up front (only the overall Content-Length, which is the upper bound for the total size of all files). I'm also curious how clients are expected to generate boundary strings and what is supposed to happen if the uploaded file is discovered to have an instance of the boundary string in it. Or are we just relying on that being sufficiently improbable not to worry about it (a la GUID uniqueness...)?


Yes, HTTP/2 requests run on separate streams and will interleave chunks. They still have some head-of-line blocking, however, since TCP won't deliver data from one stream while it's waiting for a lost chunk of another stream to be retransmitted (for that you need HTTP/3).


HTTP/1 requests (uploads in this case) are also separate to some degree (though there are fairly stringent limits on connections per domain, IIRC, which HTTP/2 resolves via the mentioned streams/multiplexing of connections).

The problem they specifically have is that, within a single request (a form post, for example), those uploads will be linear.

The solution really boils down to parallelizing the upload, using protocols/standards like https://tus.io/ or S3-compatible APIs to push the data up, then synchronizing with a record/document on the server.


You can technically add a Content-Length header for each part. It's not forbidden by the RFC, but it's not common either. It caused problems (https://github.com/square/okhttp/issues/2138) for OkHttp, and they eventually removed it. Might be fine for internal-only use, though.

Boundaries are a lot like UUIDs, and rely on the same logic. When generating random data, once you have enough bits, the odds are against that sequence of bits ever having been generated before in the universe.


EDIT: Was looking at the obsolete RFC. The current version, RFC 7578, actually forbids all part headers other than "Content-Type, Content-Disposition, and (in limited circumstances) Content-Transfer-Encoding".


> You can gzip the entire Multipart response, but you cannot pick and choose compression for particular parts.

Sure you can. I designed a system where the client uploads a multipart form with three parts: the first part is JSON ("meta") and the next two are gzipped blobs ("raw.gz" and "log.gz"). The server reads the first part, which is metadata that tells it how to handle the next two parts.

I happen to be using Falcon and streaming-form-data on the server side.

https://falcon.readthedocs.io/en/stable/

https://streaming-form-data.readthedocs.io/
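
For a rough idea of what building such a request looks like, here's a client-side sketch in Rust with reqwest (blocking + multipart features) and flate2 — not necessarily what my client actually uses, and the field contents and endpoint are made up:

    use std::io::Write;

    use flate2::{write::GzEncoder, Compression};
    use reqwest::blocking::multipart::{Form, Part};

    // Gzip a byte slice in memory (fine for a sketch; stream for big payloads).
    fn gz(bytes: &[u8]) -> std::io::Result<Vec<u8>> {
        let mut enc = GzEncoder::new(Vec::new(), Compression::default());
        enc.write_all(bytes)?;
        enc.finish()
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // The first part tells the server how to handle the other two.
        let meta = r#"{"raw":"gzip","log":"gzip"}"#;
        let form = Form::new()
            .part("meta", Part::text(meta).mime_str("application/json")?)
            .part("raw.gz", Part::bytes(gz(b"raw payload")?).mime_str("application/gzip")?)
            .part("log.gz", Part::bytes(gz(b"log lines")?).mime_str("application/gzip")?);

        reqwest::blocking::Client::new()
            .post("https://example.invalid/upload") // made-up endpoint
            .multipart(form)
            .send()?
            .error_for_status()?;
        Ok(())
    }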


> I happen to be using Falcon

Heh, I'm always annoyed by this unfortunate name clash:

https://github.com/socketry/falcon



