What is interesting here is to compare this one man's effort, built on "ancient tools" well-researched by two generations of old-school programmers, with the amateurish hype and nonsense around node (its author publicly stated on his blog that he had no experience with server-side development on a POSIX system) and with golang, a minimalist approach carefully crafted by rigorously trained "professionals". It is not just about KBs and milliseconds.
Btw, this "Hello world" example tells a little, basically, how efficiently memory allocation, and IO layer are implemented (that's why no one still could beat nginx).
A much more interesting comparison would be some "real-world scenario", say, "implement an HTTP lookup over some public data set imported into persistent local storage, say, PostgreSQL", and then compare not just throughput rates but also resource usage.
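For concreteness, here is a rough sketch (mine, not from the thread) of what such a benchmark endpoint might look like in Woo, using its Clack-style application interface and the postmodern PostgreSQL client; the database name, credentials, and the cities table are made up for illustration:

    ;; Sketch only: assumes a local PostgreSQL database "bench" with a
    ;; pre-imported table cities(name, population).
    (ql:quickload '(:woo :postmodern))

    (postmodern:connect-toplevel "bench" "bench_user" "secret" "localhost")

    (woo:run
     (lambda (env)
       ;; One indexed lookup per request. No real query-string parsing;
       ;; the raw string stands in for a city name here.
       (let* ((name (getf env :query-string))
              (pop  (postmodern:query
                     (:select 'population :from 'cities
                      :where (:= 'name '$1))
                     name :single)))
         `(200 (:content-type "text/plain")
               (,(format nil "~a: ~a~%" name pop))))))

Run wrk or ab against that while watching RSS and CPU and you get the resource-usage comparison, not just requests per second.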
>Btw, this "Hello world" example tells you little beyond how efficiently memory allocation and the I/O layer are implemented (that's why no one has managed to beat nginx yet).
Actually, at least one other guy beat nginx just the other day -- I don't remember the name, it was some Japanese-made server.
I think you might be thinking about Kazu Yamamoto and Mighttpd2? (Written in Haskell, incidentally.) I think this was the web server that beat out nginx on a largish number of cores (42?). For the life of me I can't find the original paper with the comparison graph, otherwise I'd link to it...
Anyway, a lot of the performance work which went into mighttpd2 is automatically shared by all Haskell applications. (Most of it went to optimizing the GHC VM I/O Manager which handles I/O in the Haskell runtime. It's sort of like libev/libuv only at the VM level instead of as a library.)
It doesn't really matter how well the LISP version was implemented because in the end people will choose the language with modern libraries and support.
You can be as edgy as you want and use LISP, but real-world scenarios take economic factors into account. And surprise, surprise, no company gives a shit about the beauty of a solution.
And libev is a C wrapper around epoll and other system calls. There's not much code needed to use epoll directly; that's what John Fremlin's TPD2 web server (the previous "world's fastest Common Lisp web server") did.
What libev provides is a very portable wrapper around all the different non-blocking IO system calls in all the different Unixes. That's the hard part.
Indeed, Node.js is a thin FFI wrapper on top of lots of C (and C++) code that does all the heavy lifting, but it is still useful. I think Woo is useful the same way.
The title implies "the heavy lifting" is written in LISP.
Every node user knows the net lib node.js uses isn't written in JavaScript. And most also know that in order for node to be useful, scripts should do as little as possible and delegate any serious operation to C/C++.
It makes me really happy to see projects like this for Common Lisp. I'm still relatively new to the Lisp world, but I have fallen in love with the language and the development process.
I see so many 'blazing fast' HTTP servers, in every language imaginable. Does anyone write a slow HTTP server any more?
The real challenge is in writing a useful, usable server which still stays fast under load. In contrast, you have to be writing terrible coding horrors for your home-grown static-file web server NOT to be wire-speed :)
(No offence to the writers of this particular server, I haven't looked at the code)
The reason people care about speed here is the reason people care about speed in all infrastructure projects: every millisecond of the timeslice spent doing work inside the transport is a millisecond you're not accomplishing application logic. Efficient http code lowers distributed system latency and saves power.
Classically, "code is data" is used upfront, in creating macros that manipulate Lisp code as data, before sending the result to the interpreter or compiler like normal functions and data. Ideally you develop your own Domain Specific Language that provides power in expression and execution for what you're trying to do.
As implied in other answers in the subthread, nowhere in this paradigm does random input data get treated as something that can safely be executed. READ in Lisp Machines is mentioned because reader macros like #. allow data in the form being read to be evaluated, which in this domain is an obvious big no-no.
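To make the #. hazard concrete, a small demonstration in standard Common Lisp (nothing Lisp-Machine-specific):

    ;; DANGEROUS: #. evaluates at read time, so READing untrusted text
    ;; executes it.
    (read-from-string "#.(delete-file \"precious.txt\")")

    ;; The standard defense: bind *read-eval* to NIL around any READ of
    ;; external data; #. then signals a READER-ERROR instead of running.
    (let ((*read-eval* nil))
      (read-from-string "#.(delete-file \"precious.txt\")"))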
As early as 1983, I think earlier, it was recognized that things like eval servers were a bad idea if accessible by the outside world.
So, I am really just a beginner. To me it seems to be the case that attackers may find a way to break out of a particular data processing instruction and execute arbitrary lisp code.
If you say, this can't be so, you may well be right...
As I understand it, unless you're using a Lisp with a reader macro that calls eval, or your code calls eval, and you allow arbitrary input into that code while running (vs. loading/compiling), it's just not going to happen.
I distinctly recall, c. 1983, that 'read' was called as part of the NFILE protocol.
The reason I noticed was that I had created a package that did not use 'lisp:', and therefore 'nil' was no longer 'lisp:nil', which broke the NFILE client.
Maybe this got fixed sometime between then and 8.3. (Or maybe NFILE was ripped out altogether?)
I also recall that Jeff Schiller found some remote vulnerabilities. I don't know if they involved 'read', but they certainly could have. Again, this was long before 8.3.
One of the greatest barriers to Lisp for me is the lack of free IDEs and better tooling. Having cut my teeth on programming with VB, I generally find any language that doesn't come with a GUI builder and a half-decent IDE with IntelliSense out of the box to be dismal (scripting languages are the exception). I really hope someone will improve Lisp's tooling. I strongly believe in the mantra that you should not do something the computer can do better, and I believe smart IntelliSense-style autocompletion beats having to figure out the intricacies of emacs/vim/<insert your favourite editor here>. I really like the syntax of Lisp; it would be really sad if I couldn't be productive with it. :(
Yes, there is a lack of non-commercial IDEs for Lisp that are purpose-built and give a great "Out of the Box" experience.
However, don't dismiss SLIME because of emacs. It contains all the nice IDE functions: symbol completion, attaching to (and editing!) long-running processes, function tracing, stack tracing, finding references, etc.
Access to a REPL in a running system image is an amazing and productive experience.
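A made-up example of what that looks like: redefine a handler at the SLIME REPL while the server thread keeps running, and the next request is served by the new definition, no restart needed:

    CL-USER> (defun greet (env)
               (declare (ignore env))
               '(200 (:content-type "text/plain") ("hello")))
    GREET
    ;; edit, recompile with C-c C-c, and the live image now answers
    ;; with the new body:
    CL-USER> (defun greet (env)
               (declare (ignore env))
               '(200 (:content-type "text/plain") ("hello, v2")))
    GREET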
SLIME doesn't have a GUI, but it is pretty good tooling: it has autocompletion based on the symbols available in your image out of the box, xref (who calls this function / who sets this variable / etc.), a menu for managing threads, an object inspector, and much more.
For Clojure there's a plugin (albeit a very big one) for IntelliJ IDEA called Cursive that I've heard a lot of nice things about: https://cursiveclojure.com/
Btw, this "Hello world" example tells a little, basically, how efficiently memory allocation, and IO layer are implemented (that's why no one still could beat nginx).
Much more interesting comparison would be of some "real-world scenario", say, "implement an http look up for some public data-set, imported into a persistent local storage, say, Postgresql" and then compare not just throughput rates, but also resource usage.