Cool stuff. Really, though, this is still relying on a rather large runtime library: the physical, data-link, and network-layer drivers.
Now what'd be really awesome to see would be one of those operating-system guides that shows you how to write an OS kernel, in assembler, that can speak HTTP. Even just limiting yourself to targeting the synthetic hardware of a VM program, it'd still be quite a feat.
Bonus points if the entire network stack has been flattened using the hand-rolled equivalent of stream-fusion. :)
That is just the TCP/IP layer speaking RS-232, relying on an external IC for the lower layers. It's not at all what the GP is looking for.
It should probably also be noted that a minimal TCP header, with no data attached, is 20 bytes, so implementing a 'full stack' in 68 bytes is a pretty strong indication that you're relying on off-SoC memory to handle the packet buffering.
From the first line of the source, it's a PIC (microcontroller) program, rather than something for x86 (which VirtualBox and friends emulate), so, no.
There are a few open source PIC simulators -- e.g. [1] -- and I would guess you might be able to get it running, since the link layer is SLIP over a serial port. You'd just have to wire up the simulator's serial console in the right way.
Here's another, simpler implementation of an HTTP server in Linux x86 assembly from last year, coincidentally by the same person who did the Seiken Densetsu 3/Secret of Mana 3 translation hack and the old Starscream 68k emulator:
My comments as an inexperienced assembly developer, assuming this is optimising for binary size:
- The pug/doN macros do an extra reg-reg copy if passed a register, and the recursive definition emits pop/pop/pop instead of a single `add $-4*N, %esp`; you could shave a few bytes there
- AT&T syntax will always look weird to me, but the heavy use of macros and local labels is quite elegant
- A little bit of candid swearing in the comments? Fine by me, but is this officially associated with canonical?
> - A little bit of candid swearing in the comments? Fine by me, but is this officially associated with canonical?
Assuming you mean Canonical Ltd., the company behind Ubuntu, this has absolutely nothing to do with them; it's hosted on canonical.org, not canonical.com.
Agreed, AT&T syntax was just not designed for human reading. I doubt this is heavily optimized for size, though, since it misses some obvious tricks.
Another observation: the strlen code is incorrect, as it also counts the \0. We can fix this, and make the code 1 byte shorter (in glorious Intel syntax):
Thank you! About the time you wrote this, I discovered that the strlen code was incorrect in a different way as well, and then I went to sleep. Sort of embarrassing.
This is practically axiomatic in assembly language programming.
It's just not worth it to contort your code into the form needed to make it as small (or as fast) as it can possibly be on one specific version of one specific microarchitecture from one specific manufacturer, only to have that work undone by the next version of the hardware.
> AT&T syntax will always look weird to me
AT&T syntax is meant to be a generic assembly language syntax; it's supposed to look equally weird to everyone, regardless of what CPU they're writing code for. GAS will accept Intel syntax, or a somewhat heterodox variant thereof. NASM is the usual assembler of choice on modern x86 Unix-alikes, I think.
As a web developer who isn't familiar with assembly or any web server more barebones than nginx, what benefits does something like this provide? Speed? Could this be a solution for an extremely simple directory/static file web server?
This is a simple, single-threaded single-process accept-read-respond-loop web server. It's vulnerable to trivial trickle DoS attacks and probably has other issues. There are no advantages, the author just did this for fun.
The TCP part comes from C code in the kernel, so this headline is a little misleading ;-).
Agreed. However, it should be safe from buffer overflows, path traversal attacks, XSS, and obviously CSRF. It should be fine other than DoS. Let me know if you find any exceptions.
It's hard to be vulnerable to XSS and CSRF with all-static content, no?
So, not only will a trickle DoS other clients, each byte will also force an O(n) traversal of $buf (burning CPU). Granted, buf is only 1000 bytes, but that's not great.
It looks like a request with no space could force you to walk (`repne scasb`) through invalid memory after $buf. Also maybe corrupt it (unescape_request_path).
It will also fail to correctly parse HTTP/0.9 (not a big deal, but it's part of the spec). The parsing code ignores the existence of verbs other than GET (and doesn't check that the verb is GET, either).
We don't validate that paths start with /; we just skip that byte. Okay:
mov (path), %al
...
cmp $'/, %al
je badreq
Since valid GETs are of the form:
GET /foo.txt HTTP/1.0
^-- path=buf+5
As you point out, a client close will cause SIGPIPE causing a crash (DoS).
That's all I see. But I'm not an asm expert and I'm sure I've missed something.
> It's hard to be vulnerable to XSS and CSRF with all-static content, no?
You would think, but Apache actually managed to be vulnerable to XSS by including bits of the request URL in its error pages, if I remember right. Last millennium, I think.
> So, not only will a trickle DoS other clients, each byte will also force an O(n) traversal of $buf (burning CPU). Granted, buf is only 1000 bytes, but that's not great.
Hmm, while I hadn't thought about that, and I should have, I think that's probably okay; basically you're saying that you can get the machine to burn up to, say, 2048 cycles by sending it a small TCP packet. Which means that a 4-core 2GHz server machine can't handle more than about four million packets per second (well, one million until I parallelize), which is about 85 megabytes per second, or 680 megabits per second. There are probably other bottlenecks in the code, the kernel, or your data center that will kick in first. It's probably more effective to DoS the server by just requesting files from it.
> It looks like a request with no space could force you to walk (`repne scasb`) through invalid memory after $buf.
It's possible I could have gotten this wrong, but I did try to limit the number of bytes it would scan to the bytes that it had actually read, by doing
mov (bufp), %ecx
before the repne scasb. Did I screw that up?
> HTTP/0.9 ...verbs other than GET.
Yes, those are unimplemented features, and you're right that their lack makes the server behave incorrectly; hopefully they don't result in security bugs. I think they don't matter in practice, since nobody sends HTTP/0.9 requests or HEAD requests, except by hand, do they?
> We don't validate that paths start with /, we just skip that byte.
Right. And the $'/ check below is to keep you from saying
GET //etc/passwd HTTP/1.0
and getting /etc/passwd. In case that matters in 2013.
I know ab only does this when given a flag, but in some tests I have seen some HEADs between my GETs. I haven't used ab in a long time, so don't quote me on that. Have you tried httpress[1] as a benchmarking tool?
How about a simple check that the first byte equals 'G' (decimal 71) to see if it's a GET? Shouldn't be that expensive, I think.
I don't know if ab sends HEAD requests! Thanks for the link to httpress; I've been having trouble with ab failing at high concurrencies (1000 concurrent connections) and also being the bottleneck.
Ah, it's possible repne scasb halts when ecx drops to zero (that would explain some of the string-length asm code I found when I googled it). I'm not very familiar with x86 mnemonics apart from the basics ('mov').
To be a valid TCP packet, it needs to contain at minimum a 20-byte IP header and a 20-byte TCP header, plus the one byte of payload. In practice your server is probably receiving the packet over Ethernet, so there's an Ethernet header and the like on top of that too, but 41 bytes is the floor. You could approach it over, say, SLIP.
This is normally the kind of question I ask about anything involving HTML/CSS only or JS only =D PoCs based on low-level concepts are the ones that make you curious about everything from top to bottom. Even though assembly is the least abstract and most esoteric (some would argue the opposite) of programming spaces, the program actually reveals itself quite quickly once you know just a few tidbits. This is how you get to see that even the most low-level aspects of programming are quite accessible.
Nginx will almost certainly be faster, and is somewhat robust against DoS attacks. I didn't write this to provide benefits. There are situations where this would work better than nginx (where, say, you don't want to spend any time configuring anything) but there are better existing solutions for those cases.
Definitely meant it's ROFL web scale, asynchronous, non-blocking, event driven, message passing, nosql, sharded, clusters of highly available, reliable, high performance, real time, bad ass, rockstar, get the girls, get the funding, get the IPO, impress your mom, impress your cat ... applications.
And I just got finished rewriting all my large webapps in some obscure Java framework for performance, because of some benchmarks I saw on HN. Guess now I have to rewrite it all in assembly, because more performance is always better, right?
Me too. You could probably find a single-threaded, small file benchmark where they compare similarly (or this even compares better — it does almost nothing). But this is not most benchmarks. Large files or multiple clients will bench this server poorly compared to MT + sendfile(2).
This server is single threaded and artificially serializes requests, at a minimum. The copy through userspace is going to hurt compared to sendfile for larger files.
I made it fork. Now, on my netbook, it's able to handle in the neighborhood of a thousand requests per second and 20 megabytes per second, with up to 2048 concurrent connections. Not, I think, spectacular performance, but acceptable for many purposes. You can still DoS it by opening 2048 concurrent connections to it; as long as they are open, it will open no new connections, and it has no timeout.
> Can you explain? I am genuinely curious as to your line of thought now.
The worst forms of bias and discrimination are unexamined, because they can fester and influence thought and action without ever being questioned. It's difficult to argue someone out of a position they don't even realize is a position that is up for argument.