Elisp is awesome! Emacs is truly an integrated development environment also thanks to this great extension language.
This paper provides a nice overview of past and also very recent language developments. For example, support for bignums (Section 7.3) is one of the most exciting new features.
Thank you very much for putting this together, and for all your work on Emacs!
If you are used to any reasonably modern programming language, writing any serious Elisp is an exercise in frustration. It is hard to even find out how to accomplish basic things, and once you do, there are tons of pitfalls. It is a legacy platform, and it feels like its evolution over time was largely accidental. I am talking about doing basic things: working with lists, maps and strings, opening files, running processes, making HTTP requests, etc.
For example, if you want to use a map, you have three choices: alists, plists or hash tables. There are no namespaces in Emacs Lisp, so for each of the three data types you get a bunch of functions with weird names: for alists, get is assoc and set is add-to-list; for hash tables, get is gethash and set is puthash; for plists, get is plist-get and set is plist-put. For each of those types it is easy to find basic use cases that are not covered by the standard library, and easy to run into performance pitfalls, so you end up rewriting everything several times to get something working. The experience is the same across the board: working with files, working with strings, running external processes, etc. There are third-party libraries for all those things now because using the builtins is so painful. In many ways it feels like programming some old web browser, where you either need to patch the standard library with 15 dependencies or learn complicated incantations for really basic tasks.
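To make the naming inconsistency concrete, here is a small side-by-side sketch of the three map flavors (illustrative values only):

```elisp
;; Three map flavors in Elisp, each with its own unrelated API.
;; alist: get via assoc
(cdr (assoc 'a '((a . 1) (b . 2))))        ; => 1
;; plist: get via plist-get
(plist-get '(:a 1 :b 2) :a)                ; => 1
;; hash table: set via puthash, get via gethash
(let ((m (make-hash-table :test #'equal)))
  (puthash "a" 1 m)
  (gethash "a" m))                         ; => 1
```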
The ideas behind emacs and emacs lisp are great, so I really wish someone succeeded in creating a modern implementation from scratch that gained some traction, basically what Clojure did for Common Lisp. By the way, even Common Lisp is way more modern than Emacs Lisp...
Some of the things you mention are also discussed in the linked paper. For example, quoting from Section 8.11 (Module system):
"... other than inertia one of the main arguments in favor of the status quo was that Elisp’s poor man’s namespacing makes cross-referencing easier: any simple textual search can be used to find definitions and uses of any global function or variable ... "
A few of the issues you mention can likely be improved, or resolved by using a better abstraction, or by deprecating features that are no longer needed. The Emacs community is typically very responsive and willing to discuss many issues in great detail. For example, if you run into performance issues or missing features, you can use Emacs itself to file a bug report, using:
M-x report-emacs-bug RET
As an example for switching abstractions and also addressing one of your points, I find that working on strings can be extremely frustrating and error-prone, both in Elisp and also in other languages. In Emacs, it is often much more convenient to work on buffer contents instead. So, to perform complex string operations in Elisp, I often spawn a temporary buffer (with-temp-buffer), insert the string, and then perform the necessary modifications on the buffer content. A significant advantage of this is that I can use common editing operations such as moving forward or backward by several lines or characters. When done, I fetch the whole buffer content (using for example "buffer-string"), and use that as the result. In many cases, and especially if the data are logically structured, it is better to use Lisp forms instead of plain strings.
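A minimal sketch of this buffer-based approach, using nothing beyond built-in commands:

```elisp
;; Manipulate a string by round-tripping through a temporary buffer,
;; where ordinary editing commands are available.
(with-temp-buffer
  (insert "hello world")
  (goto-char (point-min))
  (capitalize-word 1)      ; an ordinary editing command
  (buffer-string))         ; => "Hello world"
```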
However, some of the basic things you mention are in my experience quite straightforward. For example, with only a few lines of code, I can copy the front page of HN as HTML text (in fact, the whole HTTP response) into the current Emacs buffer:
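The snippet itself did not survive in this copy of the thread; a minimal version (an assumption, not necessarily the original code) might use open-network-stream, which negotiates TLS automatically:

```elisp
;; Hypothetical reconstruction: fetch the HN front page over TLS and
;; stream the raw HTTP response into the current buffer.
(let ((proc (open-network-stream "hn" (current-buffer)
                                 "news.ycombinator.com" 443
                                 :type 'tls)))
  (process-send-string
   proc
   "GET / HTTP/1.1\r\nHost: news.ycombinator.com\r\nConnection: close\r\n\r\n"))
```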
This automatically spawns a TLS connection, and I find this API quite convenient. I can then operate on the response as on any buffer content. Also, I can inspect the connection for example with list-processes, and many other functions. Spawning processes is quite similar, using for example start-process.
To write code that performs decently in Emacs Lisp, you have to work with buffer contents rather than substrings, and that is just another frustration: the primitives for working with buffer contents are low-level and heavily stateful. You end up writing imperative code that looks a lot like Pascal or whatever: while loops conditioned on regexp searches, tons of setq's, moving point and mark around, etc. It isn't a convenient way of programmatically modifying text. The basic examples of doing search and replace described here:
I think the issue is not that there are missing features, it is that there are too many features. Elisp would benefit a lot from pruning and standardization of its many, many, many builtins.
This is a very hard problem to recover from, because backwards compatibility is so important for something like Emacs. Breaking old versions would be completely unacceptable, which makes it very difficult to remove any features.
The great irony is that elisp, as with most of the languages of its venerable era, is not very good at processing text.
One of my emacs experiences is playing "escaping the regex" about once annually, where I need to use backslashes in a regex, but first have to escape an emacs string.
I'm not very good at elisp, so this is what a basic beginner regex looks like to escape a ' in a shell (so, I want the bash command to go from ' to \'). The regex in emacs I came up with was:
(replace-regexp-in-string "'" "'\\\\''" $)
Now maybe there is some sort of shortcut for passing a string straight to a regex that will make me look silly, but for a text editor with emacs' power and flexibility to make a regex look that hard brought a smile to my face.
Emacs has dedicated functions that let you handle such cases more conveniently. For example, in your case, the function regexp-quote may be useful:
(regexp-quote "\\") ⇒ "\\\\"
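For the specific shell-quoting case above, there is also a built-in that sidesteps hand-written regexps entirely:

```elisp
;; shell-quote-argument escapes a string so it can be passed safely
;; as a single shell argument (quoting rules depend on the system).
(shell-quote-argument "don't panic")
```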
In addition, structured data are often better represented as trees. In Emacs, you can use the function "rx" to translate Lisp forms from a dedicated tree-based mini language into regexp strings. The primary advantage is that you can more readily reflect the logic of the intended regular expression in this way.
For example:
(rx (or "a" "b")) ⇒ "[ab]"
and:
(rx (and (or "a" "b" "test") "c")) ⇒ "\\(?:test\\|[ab]\\)c"
I'd be curious what you think are good languages at processing text. A sibling post points out that you are probably just not using some functions that would help. To that end, look into them. I suspect the problem is that you are wanting tools from your current toolbox to look a lot more familiar, though. Hence why I am asking what languages you think are good at processing text.
Elisp is odd to me, because you need to get used to thinking in terms of buffers of text. Most commands require a trip through a buffer. Which is not at all bad, but is very foreign to my expected view.
That said, once you get used to basically programming how you interact with a buffer, it does get very fast very quickly. And can often read rather nicely.
Well, I suppose there are two ways to be 'good at processing text'. The first way is to quickly and efficiently let the user do what they want, which is how Python approaches the problem, e.g., str[0:len(str)-1] + "'\''". I've only learned Python in the last 12 months, and I need access to a lot less documentation than I need in elisp to write that.
The second way is to have a model that is so strong that it is worth the user learning that model (this is a rarer approach, but is more common in the lisp families - Clojure, for example, does this with variables). If elisp has this, it would be a good idea for someone to mention that in the documentation, because I don't recall ever seeing anyone say "Wow! This elisp model of processing text is amazing! I've been doing it wrong my whole life!". I have seen variants of "which is not at all bad, but is very foreign".
I don't know of a best language, but manipulating text strings is the foundation of most web serving. So I'd say that the popular languages for that (python, perl, php, etc) are quite a bit better than elisp at processing text.
This actually takes it in a direction I disagree with.
Once you get used to manipulating buffers, elisp actually is easier than most other languages. What it is bad at is processing strings. Because it is good at processing text buffers.
I am on my phone, but if this thread is still going this evening, I'll show roughly what that snippet would look like. Will be, roughly, (forward-word) (move -1) (insert "/"). Obviously, some differences if you were moving words or paragraphs. But basic idea is the same.
That is, don't try and manipulate strings. Manipulate the entire buffer. If needed, build a new buffer with what all you want.
The irony is that elisp has a decent dsl for manipulating text. But you have to embrace the mutable nature of it. Which is unexpected for many working with a lisp.
TXR Lisp is a dialect that has decent features for munging text. It's part of a two-language combination called TXR; the other language is the TXR Pattern Language: a notation for matching small and large scale structures in text documents and streams.
TXR Lisp allows various classic Lisp operations to also be used on vectors and strings (think car, cdr, mapcar* and numerous others). It has array referencing that support Python's negative index convention. Sequence subranges are assignable places, so if s holds "firetruck" then (set [s 1..-1] "lic") changes s to "flick".
TXR Lisp is loaded with useful features, all packed into a small executable accompanied by a small library of Lisp files.
The phrase "what Clojure did for Common Lisp" reads awkwardly for me. What did Clojure do for Common Lisp?
I think most of these are supported through generic functions. Though, I think moving beyond alists is something you probably only have to do once you are writing programs that do some heavy lifting. In which case, I am curious whether elisp is the best place to write that program.
Your list of things that are painful in Elisp: working with lists and maps? For real? Opening files? `find-file` is hard?
As for it being hard to find out how to accomplish basic things, what about the extensive in-editor help system? C-h f for functions, C-h a for apropos and C-h i for info? It would be hard to find a better-documented software system anywhere.
Sure you need a few libraries to make things better, but the fact that there is a healthy ecosystem of packages is a good thing.
Maps are fiddly in Common Lisp too, all the data structures feel a bit crufty compared to Clojure.
Elisp is being improved with every release. Lexical scoping was added without much ado; this release got concurrent threads, radix trees. It's not that broken we need to throw it away and start again. IMHO it's not broken at all.
Rewriting to modernize would effectively kill emacs lisp.
But if you work with file contents as strings in Emacs Lisp, you are going to have a bad time, because the performance of this is terrible enough for it to easily turn into a problem. So instead you insert-file-contents into the temporary buffer and then do a ton of low-level, error-prone imperative processing on the buffer contents itself using things like goto-char, search-forward, using markers etc. Most of the major emacs packages are littered with code like this, here is one random example:
That's indeed completely unreadable and I wrote it. But it also isn't typical. In this particular case I replaced an earlier "functional" implementation with this abomination to fix a performance bottleneck. Though I think I might have gone overboard -- unfortunately the commit message isn't very good either and doesn't explain why it was necessary to switch to this style of doing things in addition to what the message actually talks about.
(--map (list (substring it 3)
(aref it 0)
(aref it 1))
(magit-git-items "status" "-z" "--porcelain" "-u" "--ignored"))))
Generally speaking you don't have to write code like what you linked to unless you have a very good reason to do so (I doubt that I had one in this case, but who knows).
> creating a modern implementation from scratch and it gaining some traction, basically what Clojure did for Common Lisp
It's not a new implementation - it's a fully incompatible new language. Zero CL code runs in Clojure because the operators are incompatible/different, and any existing CL code needs to be mostly re-architected, since Clojure has a very different approach (more functional, less object-oriented, less state, lazy data structures, no linked lists, hybrid libraries/language partly using the hosted language, ...).
Well, guile supports elisp. The project was to replace the Emacs elisp runtime with the guile VM which would bring some benefits (a runtime that could be used without emacs, speed etc).
Counterpoint: Emacs lisp is effective because it’s in Emacs. Within the editor space, Elisp is an absolute star, since the nearest competitor is vimscript...
As a stand-alone language it’s pretty weak, even compared to other lisps. Default dynamic scoping comes to mind.
While I mostly agree that Elisp is fairly weak compared to other lisps, I should point out that it does support lexical scoping (and that is the recommended usage)
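Lexical scoping is opt-in per file via a file-local variable; a minimal sketch:

```elisp
;;; -*- lexical-binding: t; -*-
;; With lexical binding enabled, lambdas capture surrounding
;; variables as proper closures.
(defun make-adder (n)
  (lambda (x) (+ x n)))

(funcall (make-adder 2) 3)   ; => 5
```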
Vimscript was a mistake, if you ask me. The power of vi is its edit commands and the ability to bang text through shell commands. A very careful, considered expansion of a few features in vi to improve its integration with outside script interpreters (Perl, Python, etc.) as well as language servers could've produced an extremely compelling editor.
As it is, I've found that you can do a lot with just vi, tmux, and bash (along with all of the utilities).
> A very careful, considered expansion of a few features in vi to improve its integration with outside script interpreters (Perl, Python, etc.) as well as language servers could've produced an extremely compelling editor.
No, neovim is still the same concept: you're scripting the editor. It's still this gargantuan thing with all of these things you bolt on and reams of documentation to wade through. It still has the vim philosophy (which is the same as the Emacs philosophy, only not as good): my editor is an operating system and I want to rewrite all of my software to run inside of it.
I'm thinking of something altogether different: you leave the editor alone and you script the operating system around it, using the editor only as a flexible control interface for everything. This is the original vi philosophy. It means you don't have to rewrite anything. Any new program you install, if it can send/receive text on STDIN/STDOUT/STDERR, then you can use it in vi.
So what's wrong with vi? It has a few shortcomings: no unlimited undo, no ability to pass the cursor (row,col) position to an external command, and working with multiple buffers is more painful than it needs to be.
> This is the original vi philosophy. It means you don't have to rewrite anything.
Users will need to rewrite many things, if they use multiple platforms.
> Any new program you install, if it can send/receive text on STDIN/STDOUT/STDERR, then you can use it in vi.
Except even the unix toolspace is not cross-platform, despite POSIX.
Minimalism is good, but the OSes failed to provide good, composable, cross-platform tools.
By the way, Neovim's :terminal command enables more scripting and re-use of the many unix tools. w3m and lynx are much more useful if you can overlay editor keybindings over them instead of writing a pile of code to use their (insufficient) CLI APIs.
Personally, I regard the reflective and introspective capabilities of Emacs as extremely powerful tools for learning more about Elisp and the entire environment.
For example, every time you press a key (or key sequence) that results in any action, Emacs internally invokes a function (most of them written in Elisp, some of them in C) that achieves this action. So, when you want to automate a task and have a rough idea how to achieve it by pressing key sequences, you can inspect their associated functions and their source code, and study how they are called and even how they do it.
As a quick start, you can—using Emacs itself!—display a list of all functions that are bound to any key sequence. To get this, simply press:
C-h b
This gives a good first indication of which functions are important for editing, because functions that are very frequently needed are typically available via a predefined key sequence.
But then, your next question may be: What did C-h b actually do? That is, which function did this actually invoke? Of course, one straightforward way is to simply search for "C-h b" in the buffer that shows all these key bindings, because you know it must appear there. Another way is to use C-h k C-h b.
But then, what did C-h k actually do?
To find out, we apply C-h k to its own key binding:
C-h k C-h k
and from this, you see that C-h k internally invokes the Elisp function "describe-key".
At this point, it is essential that you also have the (Elisp) source code installed. If you have it, then you can simply browse the Elisp definition of describe-key, and see how it achieves its task.
The same goes for every other function you may be interested in: You can browse its definition, copy it (for example) into the scratch buffer, change it, evaluate it and run it. You will find that not much is needed to get started with Elisp, because many tasks can simply be written as a sequence of standard commands of which C-h b provides a good first overview. You can also press C-h l to get a list of the last few input keystrokes and the invoked commands.
In addition to that, to automate simple tasks in Elisp, you often only need a few simple commands that let you fetch text from any position in the buffer and to insert it (see for example the functions buffer-substring-no-properties and insert), the ability to launch external processes (the "Processes" chapter in the Elisp info documentation is great) and a way to perform editing operations in a temporary buffer (see with-temp-buffer). To find out more about these functions, simply use C-h f.
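As a small sketch combining these pieces (assuming a `date` binary on your PATH): run an external command in a temporary buffer and return its output as a string:

```elisp
;; call-process inserts the command's output at point when the
;; DESTINATION argument is t (meaning the current buffer).
(with-temp-buffer
  (call-process "date" nil t nil "-u")
  (buffer-string))
```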
And, of course, you can simply use C-h k C-h f to see what C-h f does!
One essential point: To make Emacs keybindings more convenient, I recommend turning Caps Lock into an additional Ctrl key.
If you're not already a lisper I would recommend learning Common Lisp from one of the many excellent books. Learning the differences for Emacs Lisp afterwards is a breeze, and you get a much more general-purpose view of Lisp.
> The only thing really good about elisp is that it is integrated into Emacs.
But that's really, really good compared to most programs, which are customized by dialog boxes or INI files. Imagine how great it would be if you could script your web browser using some semi-decent programming language, and your scripts would work 30 years from now.
Emacs has a function for transforming a window of text under ROT13, rot13-other-window. The Emacs manual [1] says, "For example, a review of a film might use rot13 to hide important plot points." This example was added in 2012. The source code [2], on the other hand, gives a different use case: "ROT13 encryption is sometimes used on USENET as a read-at-your-own-risk wrapper for material some might consider offensive, such as ethnic humor." This comment was added no later than 1992.
I think the implication is that once upon a time, Emacs had a comment/docstring which wasn't politically correct. Let the shunning or boycotting begin...
> During the early years of Emacs, the main complaints from users about the simple mark&sweep algorithm were the GC pauses. These were solved very simply in Emacs 19.31 by removing the messages that indicated when GC was in progress. Since then complaints about the performance of the GC have been rare. Most of them have to do with the amount of time wasted in the GC during initialization phases, where a lot of data is allocated without generating much garbage.
At 300 baud, I bet the GC messages took longer to display than the GC itself.
I use emacs all the time, but I don't really extend it. More recent editors are extensible via web tech such as JavaScript, which sadly often comes at the price of having an Electron base. JavaScript does have the advantage that you can console.log() everything and discover functions easily for a given object. I still don't know how to discover stuff when I need it in elisp except M-x apropos, which often fails to find something useful.
Do you use something like `counsel` or `helm`? I personally use `helm` and have found `helm-apropos` to be a much better experience mainly due to fuzzy matching. I'm sure there is a counsel analog.
Out of curiosity, do you happen to have an example of a search you've tried that doesn't produce something useful?
Did not know about helm. I just installed it and I think that this alone might solve 75% of the problem, thanks!
As an example, let's say I'm new to elisp and I am wondering how to insert a character into a buffer. `apropos` "insert" just returns too much stuff. `helm-apropos` seems much better though, with a readable output. However, you still have to know that the function is named "insert", which you can spend lots of minutes on. In a JavaScript parallel universe, there is a high probability that I would have done `console.log(buffer)` and I would have found the `buffer.appendChars()` method in seconds. You can quickly find what you can do with an object in JS/Python/PHP, but in elisp, I have yet to find out how.
> As an example, let's say I'm new to elisp and I am wondering how to insert a character to a buffer
You can use google as with any other software. If you search for 'elisp insert character' with google the first hit is the insertion section of the manual which tells you the functions:
The fact that google is your go-to method shows that the discoverability is poor. I could code in JS offline for hours if needed, because all the information is there in the objects, but I would hit a wall in 5 minutes with elisp because this "object oriented" concept and console.log(anyObject) simply do not exist.
I use google when coding JS code too, because completing functions of objects for discovery is useful only in the more trivial cases. For anything non trivial using google gives much quicker results than browsing any kind of documentation.
In case of Emacs I use the manual as a reference (looking up things which I already know about) and I use google for discovery of new elisp concepts.