Hacker News

On the performance front, I would not expect TimL compiled down to VimL to be as performant as native VimL, even when done as a preprocessor instead of at runtime. One reason is that, as stated in the README:

> TimL functions are actually VimL dictionaries (objects) containing a dictionary function (method) and a reference to the enclosing scope.

So calling a TimL function has overhead that calling a VimL function doesn't. I don't know whether there are other performance quirks too, but I would not expect this to produce optimal VimL code.
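A rough sketch of what that README line implies (hypothetical names and shape; the actual compiled output may differ):

```vim
" Hypothetical shape of a compiled TimL function: a VimL dictionary
" holding a dictionary function plus the captured enclosing scope.
let s:env = {'x': 40}

let s:fn = {'env': s:env}
function! s:fn.call(y) dict
  " Free variables resolve through self.env rather than through
  " VimL's own scoping rules.
  return self.env.x + a:y
endfunction

" Every call goes through a dictionary lookup and 'dict' dispatch,
" which a plain VimL function call avoids:
echo s:fn.call(2)   " 42
```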

That said, code that exists always performs better than code that was never written in the first place, so if this causes plugins to be written that otherwise would not, then that's a net win. It's also entirely possible that the performance concerns end up not being an issue.

Personally, I've never written a VimL plugin, and I've never had a reason to use Clojure, but now I'm tempted to try both.




So, the overhead I'm referring to isn't so much stuff like function dispatch (which is almost immeasurable, with the heavy lifting happening in C), but idiomatic overhead. Creating an anonymous function to map across a lazy sequence wrapping a persistent data structure doesn't have a chance in hell against a native for loop on a native vim list. I actually did quite a bit of optimization in this area (that's where chunked seqs came from), and it's quite usable for many tasks, but it's still a potential bottleneck, so I never really found myself "trusting" it for anything significant.
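For concreteness, this is the native baseline that idiom has to compete against (an illustrative sketch, not TimL's actual output):

```vim
" Native VimL: a tight for loop over a native list.
let s:nums = range(1, 1000)
let s:out = []
for s:n in s:nums
  call add(s:out, s:n * 2)
endfor

" Or the built-in map(), which keeps the iteration in C:
let s:out2 = map(copy(s:nums), 'v:val * 2')
```

The TimL idiom layers an anonymous function object, a lazy sequence, and a persistent data structure on top of every element visit, where the baseline above is a bare loop body.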

Of course, I am in a rather unique position of being able to bang out well optimized VimL in my sleep, so paradoxically that biases me against my own creation.


> Creating an anonymous function to map across a lazy sequence wrapping a persistent data structure doesn't have a chance in hell against a native for loop on a native vim list

... until this thing actually gets traction and someone decides to integrate a native TimL interpreter into vim alongside VimL (making this implementation a shim for older versions). Of course this would create two competing standards (until vim 8?), but the more than reasonable out-of-the-box interop, the readily available usefulness, and the unleashed power (you mentioned macros to alleviate pain points) make it an honestly very reasonable scenario compared to python/ruby/lua bindings, which are foreign and require an external dependency.

All of this is a sign of a brilliant hack: immediately useful despite having to bear with some caveats, with a clear path towards the future. Thank you sir.


I considered talking about the idiomatic angle, but I know very little about Clojure, and not much more about VimL, so I erred on the side of not listing that.

I am curious how much optimization TimL has. I haven't had a chance to actually look at it beyond the README (and I don't know if I'd understand anything if I did). Does it leverage an existing Clojure compiler, complete with optimizations, and treat VimL as a target architecture? Or is it closer to an AST transformation with all optimizations being hand-written for TimL?


> So calling a TimL function has overhead that calling a VimL function doesn't.

But that's easy to optimize away, I imagine. I read this as a way of forming closures, and it's easy to see during compilation whether the closure is actually necessary.
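A sketch of the optimization being suggested here (not anything TimL is claimed to do today): when the compiler can prove a TimL fn captures nothing from its enclosing scope, it could skip the dictionary wrapper entirely.

```vim
" No free variables: the general closure form...
let s:closure = {'env': {}}
function! s:closure.call(n) dict
  return a:n * 2
endfunction

" ...could instead be emitted as a plain script-local function,
" with ordinary (cheaper) VimL call dispatch:
function! s:double(n)
  return a:n * 2
endfunction
```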

Actually, that's not the case right now, but it is reasonable to expect TimL to eventually generate better VimL on average than handwritten code, much as many C compilers do with asm. I guess "make it work" is more important right now than "make it fast", but if it sees real use, I'm sure it will improve quickly on the performance front.



