Emacs Is Not Enough (project-mage.org)
151 points by lycopodiopsida on Jan 13, 2023 | 160 comments



I can't help but reflect back on another post I made today, which is that everything fails at scale. Literally everything. They just fail in different ways and make different tradeoffs along the way.

For example, this is why I find myself using "print" debugging on a process of 10^4 values. It is fun to think, "maybe I can step debug this" on that many values, but... that is well beyond my capability to keep in my head, such that any help the debugger gives adds up to zero. Which... is not good.

That is, complaints that ridiculously large forms cause issues in a debugger seem... pointless? Let's say you managed to keep that from crashing; do you actually have a way to make it workable?

Don't get me wrong, if folks are constantly opening JSON files that are gigabytes in size, it makes sense to focus on that and make sure it doesn't crash. I just, can't imagine what benefit you get from opening an X gigabyte file. I can't even see what the advantages of opening an X megabyte file are. That is literally beyond my brain power.

So, looking to rewrite an emacs always strikes me as a huge risk of throwing the baby out with the bathwater. There is more than a fair chance that what you like about the current setup is a necessary implication of choices that you think you don't like.


Print debugging is a technique that will never go out of style. It works on any system, is used by programmers of any level of experience, is very quick to use and requires no tooling to understand. And sometimes, even if you do have other great tools at your disposal, it’s still the easiest way to track down a problem.


I view it as "tracer bullets." I know it will make noise, but I'm planning to look at the pattern of the noise to see what I can reason out of it.

Is it a complete answer? Of course not. Neither is trying to spin a wheel that you are truing. It produces a ton of output that is roughly in an expected pattern, and can be acted on.


As a general rule I prefer print debugging. I’ll step in with a breakpoint when I have to but odds are I’ve gone through a round of prints first.

I do not enjoy stepping through all of the context setting up the system to fail. I've not tried a debugger that can "back up"; that may well be a different experience. But I loathe stepping through code, walking on eggshells, overstepping, "oops, thar she blows, start over". Like Bugs Bunny on the piana.

I'd rather just toss in some prints, let fly, sift through the wreckage. My builds are fast enough to make the turnaround acceptable.


Logging - for debugging or for other purposes - is a very useful tool.


Well yes, but I must quibble...

> Print debugging.... It works on any system

No it does not. It requires a console, and not all systems have a console.


Any system that is capable of emitting any data can emit bytes from print debugging. It could be a log file, a socket, or even pulses along a GPIO pin.

If the system can't emit any data, well, you're kind of stuck anyway.


> If the system can't emit any data, well, you're kind of stuck anyway.

It's also, by all accounts, functioning perfectly :)


Having worked in embedded, debugging through an oscilloscope is a cherished memory that I sometimes talk to junior colleagues about. Knowing these things are possible can sometimes help people find more novel ways to think about debugging their applications.


Pulses on a GPIO pin are not print statements.

A huge proportion of computing devices have no method of producing any output at all, other than to some form of actuator.

"print statements" are useless in this class of device.

Stretching the definition of "print statement" to recording pulses from a pin is, well, a stretch!

Poor young programmers. If the only tool you have is a hammer, every problem is a thumb.


> Pulses on a GPIO pin are not print statements.

Yes, they are. How do you think text terminals work?


Really? You are being disingenuous.

It is a long way from "pulses on GPIO pins" to a print statement.


It's precisely one text terminal or LED scroller away.


> It's precisely one text terminal or LED scroller away.

This is quite strange. Let us remember the context

> Print debugging is a technique that will never go out of style. It works on any system,

So you are suggesting that watching the blinking of a GPIO pin (I hope you have an oscilloscope, or some other probe, rather than applying it to your tongue) is the same as "print debugging"?

I contend that in any sensible use of technical English "print debugging" is text output to a console.

In that context there are a lot of systems, possibly most computer systems, that do not have access to that. It is not unusual for developers to go to elaborate lengths to attach consoles to these systems - not the same as "it works on any system".


If you added code to tickle the GPIO specifically to gain insight into the process, whether you leave it in or not, then it’s “print debugging”. Many a microcontroller developer has tickled an indicator LED in their day.

If you're simply monitoring activity through an external source (whether it's an oscilloscope or watching traffic through a network analyzer) then that's basically black box testing.


> A huge proportion of computing devices have no method of producing any output at all, other than to some form of actuator.

Actuators work too. It's not unheard of in robotics to debug by adding statements that make some actuator do specific movements that communicate the information you want to surface.

Anything you can use to output a nonzero number of bits to the environment can work for print debugging.


> No it does not. It requires a console, and not all systems have a console.

It does not. You just send the log output somewhere (often a different system). If you have a deployment with no logging infrastructure (either local for smaller setups or distributed for large scale), you have a major problem beyond just debugging visibility.


> You just send the log output somewhere

How?


Via network socket, serial port, parallel port, however the system in question communicates with the world.

If it's an actual standalone deaf, dumb, and blind black box with zero world interaction then you should, at least, have a hardware probe and step debugging capabilities and you can always flash a sign of some kind on some bus or another.


Unless you're debugging something so simple that it can only flash an LED, you can have a serial console on it.

You can print debug on an Arduino for example

So, yes, it works on pretty much any system.


Even then, you could write a print function that writes a Morse code pattern to the LED :p
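
Something like this, conceptually. A Common Lisp sketch just to show the idea; SET-LED is a made-up stand-in for whatever actually drives the pin on your board, and the timings are arbitrary:

    ;; Conceptual sketch only: SET-LED is hypothetical, not a real API.
    (defparameter *dot* 0.2)                 ; seconds per Morse "dot"

    (defparameter *morse*
      '((#\s . "...") (#\o . "---") (#\e . ".") (#\h . "....")
        (#\l . ".-..") (#\p . ".--.")))      ; tiny table, enough for a demo

    (defun blink (seconds)
      (set-led t) (sleep seconds)
      (set-led nil) (sleep *dot*))

    (defun led-print (string)
      "Poor man's print statement: emit STRING as Morse pulses on one LED."
      (loop for ch across (string-downcase string)
            for code = (cdr (assoc ch *morse*))
            when code
              do (loop for mark across code
                       do (blink (if (char= mark #\.) *dot* (* 3 *dot*))))
                 (sleep (* 3 *dot*))))       ; pause between letters

    ;; (led-print "help")  ; debug output with no console in sight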


> Even then, you could write a print function that writes a Morse code pattern to the LED :p

And you then have a different system.

My point is that consoles are not ubiquitous in computing. Computer systems that lack them are very common.


Well, you're right, of course. Another obvious one is that print debugging doesn't work for your GPU kernels.


Well, not every system has a framebuffer, but here's a Carmack tweet that comes to mind:

"gl_FragColor.x = 1.0; is the printf of graphics debugging. Stone knives and bearskins."

https://mobile.twitter.com/id_aa_carmack/status/568861553245...


He also seems to like a timing/GPU/CPU debugger called PIX from MS; I'm impressed. I wish they had started with the hands-on part, but the presenters in this video[0] all show why print debugging will never be enough. Getting back to the OP, for me even IDE-integrated debugging is worse than print debugging. You need so much data to make it useful, which makes things like PIX, APM and other application tracers the thing I go for when debugging.

[0] https://www.youtube.com/watch?v=UH-o5cG_QWo


As a side note: While using wgpu-rs to do Rust game development, I grew very fond of RenderDoc (https://renderdoc.org/).

It has great Vulkan compatibility and it helped me greatly when building an object picking buffer by visualising everything from call trees to resource contents and metadata. I believe it should come in handy for any GPU-based project, not just 3D/2D graphics.


Which is why a GPU debugger with frame tracing is a much better option.

By the way, there are actually ways to expose a print function on shader code, provided there is driver support.

https://github.com/KhronosGroup/Vulkan-ValidationLayers/blob...


You'd be hard-pressed to find any. You can even get a console over JTAG/SWD, and nearly every microcontroller has it.

Only when you get into the realm of a few cents per microcontroller do you not have it.


It requires a way to get bytes out of the system. You can connect an external console to that.


Do you have an example of one that doesn't have one?


This is in the realm of ignorant questions. It could well be that the majority of computer devices have none. E.g. the microcontroller in my washing machine.


Why is it an ignorant question? The nicer ones let you use JTAG etc connections, and debug to console using ITM or RTT. (STM, nRF etc) Others allow debugging via serial.


> Why is it an ignorant question? The nicer ones let you use JTAG etc connections, and debug to console using ITM or RTT. (STM, nRF etc) Others allow debugging via serial.

Because all those systems, even the very nicest, all over the planet, very rarely have "JTAG etc connections".

Those are things you attach to the system. You attach them in a complex, error-prone process, and at the end of it you have a different system.

A lot of (most?) computers do not have consoles.

It is a common error amongst developers (who do not work in embedded systems) to assume there is a screen supporting text of some sort. It is an error.


You connect the debug probe to your MCU (pogo pins or w/e is convenient for the form factor), connect it to a PC over USB, and print to a terminal window in Windows, Linux etc. Or a serial-USB bridge if it doesn't support debug probes. If the MCU is so basic it doesn't support UART, bit-bang it with GPIO. Too easy!


> Print debugging is a technique that will never go out of style. It works on any system,

So if you connect up a bunch of wires and everything goes well "too easy" you can get output from some, not all, systems.

What you describe is not what the comment about "print debugging" means, to me.


I think you're onto something general here.

At scale, there are two operations needed: zooming in on the details of interest, and surveying the big picture to find the details of interest. One or the other of the two are often lacking, either in power or flexibility.

Critically, I don't think you get both power and flexibility along both types of operation and retain user-friendliness.


cat, grep, awk, sed, cut, paste, column


There are ways to debug at scale that aren't step debugging. You can set conditional breakpoints, or breakpoints that run code, or even backwards debugging. You don't have to settle with print debugging, ever. But sometimes it's faster to use it.
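
For instance, the "breakpoint that runs code" idea doesn't even need debugger support; in a Lisp you can get it straight from the source. A sketch (the predicate and the names are made up):

    ;; Conditional, code-driven breakpoint: only drop into the debugger
    ;; for the one interesting case out of thousands of iterations.
    (defun process-item (item index)
      (when (and (> index 9000) (null (getf item :id)))
        (break "Suspicious item ~a at index ~a" item index))
      ;; ... normal processing ...
      item)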



> I just, can't imagine what benefit you get from opening an X gigabyte file.

Maybe I don't fully understand the context, but... why shouldn't you open a big file?

I haven't read the whole article, but I did read that the author complains that a 172 kb text file makes the editor slow. It seems that the syntax highlighting is the culprit.

I have a similar problem with my current editor of choice. I used to have a "raw" visualizer for all kind of files. Sometimes the file format is just unknown and it's useful to take a look at its contents (does it start with PK?) or you want to make a quick overwriting edit (to change some binary flags) or edit text in a huge machine-generated text file without any formatting beyond line feeds.

If I understood the author correctly, he's saying that structured editors are superior to a syntax highlighting system that's based on regexps, when you use them for programming. I agree wholeheartedly. It's a clear-cut case of the "don't repeat yourself" principle.


I’ve used emacs to open and edit binaries back in the old days - basically one long line. We often needed to install proprietary software in non standard locations so changing the embedded strings within binaries worked great and better than using vi (pre vim era). Always best if the new path had a length less than that of the existing one.

In any event I guess the whining about editors will never stop but meanwhile they seem to be GoodEnough(tm) for me and most things.

Emacs used to be mocked for being bigger than the OS, but now I think most editors are much larger. I used to work at a university lab helpdesk of sorts, where first-time users of UNIX systems would ask us for help getting started entering their first CSCI programs on a Sun or other UNIX-like OS. My unscientific but high-n observation is that vi, emacs, ed or cat all worked, and occasionally people would find a way to get confused equally on any method when starting out. Level of education didn't seem to make a difference.

Editor ergonomics seem to be very personal, like furniture so I’m glad we have so many options.

On another side note I am glad that most text input fields across many operating systems and applications usually do the right thing with Emacs cursor control sequences e.g. CTRL-a, CTRL-n, CTRL-p, CTRL-e. So hopefully that legacy lives on


> Editor ergonomics seem to be very personal, like furniture so I'm glad we have so many options.

A long time ago I read some piece by Groucho, about how it's impossible to find a beef (or ham, or turkey, not sure which it was) sandwich any more. He goes to some sandwich shop and asks for a beef sandwich, and they offer him a beef + hard-boiled eggs + lettuce sandwich with mayo, cherry tomatoes and oregano, or a beef + cheese + spinach + nutmeg sandwich or whatever. But he can't just buy a simple roast beef sandwich, maybe with a pinch of mustard.

I used Notepad in Windows for simple text, that used to work OK. Then I changed to some free editor created by a guy as a programming exercise with some interesting extras. Now both have the same problem: if I need to open big text files for some reason, it takes forever.

Edit: big as in ~ 1 MB, sometimes smaller.


> A long time ago I read some piece by Groucho, about how it's impossible to find a beef (or ham, or turkey, not sure which it was) sandwich any more. (...) But he can't just buy a simple roastbeef sandwich, maybe with a pinch of mustard.

I need to find that piece, because it resonates. For me, it's hot dogs. I can't find plain hotdogs anywhere anymore. All I want is a bun, a sausage, and some ketchup and mustard on top. But no, everyone has to add at least cucumbers and fried onions, and if you're not careful, you'll end up with bread full of a large assortment of veggies, with barely a sausage in sight.

I imagine the reason for this is economics: this green stuff is probably dirt cheap relative to the sausage, but lets the vendor triple the price of a hot dog without making you feel they're price gouging you.

For the past decade in my area, IKEA was the last bastion of pure, unadulterated hot dogs. But even they recently took that off the menu - the basic hot dog now comes loaded with useless greenery.


Where's the pragmatism?

I personally don't open GB size files (that's not true, I sometimes do but it depends on the format and I tend to use vim (which is a very minimal configuration on my system these days, since I've moved to Emacs) for them because I'm typically already in a terminal munging the thing anyway).

I prefer to use the terminal to deal with these things because those tools can help me decompose the problem to a more reasonable space. That seems like a more useful way to spend my time than complaining that a massive file of text won't open on my editor for $REASONS, when it's, IMO, most likely that I wouldn't be able to make sense of the damned thing if I had it all opened in front of me anyway.


I believe the author of the article is criticizing Emacs for being a jack-of-all-trades that uses neither the best approach for plain text (keep it simple, Syd) nor the best approach for code, which I swear is structured editing, after some time working in a system very similar to what some-mthfka describes in a sibling comment.


It's not Emacs's fault that CSV is a garbage format. Anyone who has spent time dealing with CSV understands the hell that is lurking in its shadows. I don't think the rant's solution, though I did not finish reading it because I frankly got bored, is going to fix any of this, either.


That isn't the problem he is describing though; it is that you have no primitives in emacs that correspond to a rectangle of text entry boxes like the semantics of CSV. You only have a sequence of chars.

The author wants people to rewrite emacs as a design system for arbitrary structures of data, with hooks for moving around the structure and editing the structure and, I guess, the contents.


I see. I thought the rant was a mess, personally, so I stopped reading it. There's some irony to be found in an unstructured rant about... structured editing.

Anyway, if that's the point, then maybe tree-sitter can help us get there in the future? I still fail to see how this is an Emacs problem to solve. The vim/neovim model is also using what are essentially string dumps in the form of buffers. I'm sure every other editor is, too, but without going as far as actually having buffers, though I'm not invested enough into anything else to know what it's doing.


> It seems that the syntax highlighting is the culprit.

In that case, it was the fact that adding a bullet point to a list would rescan the whole list (so it could simply update a bullet count in the header). You would need to implement incremental parsing for that, and that's not very easy. And certainly not natural.

> If I understood the author correctly, he's saying that structured editors are superior to a syntax highlighting system that's based on regexps, when you use them for programming.

Absolutely, that's one of the points.

I must add something, though. It's not just about speed. And, in fact, it's not even just about editing.

Most exciting possibilities of the structural approach stem from the fact that you can start thinking in terms of objects: then you write textual interfaces for those objects, for the purposes of textual (or even graphical) interaction.

The bare-bones example is that if you had a table, or a tree in a note-taking application, then you could query that tree, but you would still retain the textual interface. Even better: you could embed any editor within any other editor, which would directly correspond to a compound structure at hand.

To give you an idea of a note in a KR (knowledge-representation, prototype OO) system:

    (create-schema power-of-structure
      (:is-a kr-note)
      (:title "The Power of Structure")
      (:tags '("seamlessly-structural editing" "power"))
      (:introduction '("Welcome to Project Mage!"
                       "The purpose of this article is [...]"))
      (:table-of-contents nil)
      (:sections (create-schema))
      (:related-links nil)
      (:footnotes nil))

This note [1] could have any other structure, any other slots, of course. But it's just an object in memory.
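
Being just an object in memory, it is already programmatically accessible. Roughly, with KR's accessors (a sketch; the slots are just the ones from the example above):

    ;; Read and write slots of the schema above (a rough sketch).
    (gv power-of-structure :title)
    ;; => "The Power of Structure"

    (s-value power-of-structure :tags
             (cons "notes" (gv power-of-structure :tags)))
    ;; the note now carries one more tag, and any lens over it can react

No parsing, no regexps: the structure is simply there to be queried.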

Now, all you would have to do is lens that object, aka, construct a tree of embedded editors for it (which Rune will also do via KR, just like the object above).

Another example: code. A large comment block? View and edit it via a note-taking application, or a markdown application, or what have you.

Another example (leaving some details aside): attach comments to any piece of code (up to a character). That comment doesn't even have to be a part of the editing workflow. I think that's pretty powerful (and there are more use cases, see Alchemy in [1]).

[1] This example is directly from https://project-mage.org/the-power-of-structure


I had not realized that you wrote the fine article. As I told scruple in a sibling comment, I've been working in a pretty similar system for... some time :)

So I have some reading to do about that Mage Project...


What kind of system was that, if I may ask?


I'll send you an email message later, to the address on the page.


Alright : )


> I can't but reflect back on another post I made today, which is that everything fails at scale.

That reminds me of: Terry Davis' TempleOS Brutal Take Down of Linus Torvalds

https://youtu.be/gBE6glZNJuU?t=388

"All computer people today have been jedi-mindtricked. They obsess: if there's one thing they all wanna show you they know, it's scaling. Everybody is obsessed with scaling. Guess what. Scaling works both ways. You can get bigger. What happens if you look the other direction and you scale down? It doesn't get bad, it gets worse when you scale up. It gets better when you scale down."

I think it's a fundamental problem of flexibility of specialization vs generality. I laid out my thoughts on this on the website in the "On Flexibility" article (There's an article on print statements, too, by the way).


That is why debuggers and OS support trace points and events for debugging at scale.


I've been working obsessively on a trivial and currently private emacs package for a while now, and (having not read the article yet) would concur with the idea that Emacs is great, there's little else like it, but it's also not enough; the promises that it makes (re: freedom to the user, access to internals via elisp, etc) are not fulfilled. It goes a great way towards delivering its promises, and casual users will feel the freedom it gives them (great things can still be done with it! Miles ahead of anything else!), but that just makes the walls all the more frustrating when you hit them. No specifics, but I do find the performance situation and the elisp and rendering implementations to be at the center of my gripes. I am young enough to hope to see, and perhaps even contribute to, a successor to the Emacs philosophy.


It would be interesting to have such a general project go somewhere.

While in principle structural editing sounds like an incredible advance, there are 'good enough' advantages to plain-text tools that make it a much more practical solution. The other issue is of course integration with existing tooling, which you either skip entirely or compromise on the design.

What I feel is missing, between the description of "old, bad state of things" and "utopian vision" is a review of some of the projects that already tried to achieve this ideal state. It turns out there are a number of them, and most of them failed to achieve any traction or impact [0].

The rants are very long, so I quickly skimmed the one about git; I understand the complaints, although git brings me only joy and no pain: interactive rebase, absorb and a few aliases made it a breeze. But in a similar fashion there are projects trying to solve its fundamental issues, like pijul(.org); what are they missing?

[0] https://github.com/yairchu/awesome-structure-editors/blob/ma...


Really, to me, this project is all about writing a few applications for myself. This may get lost throughout the pages of writing, but really, I am doing this for myself: flexible note-taking, a comfortable REPL, and, _at last_, a Lisp IDE (especially, with comfortable print-statement debugging). Structural editing is just the means to get these specific things right.

Of the projects in the link provided (on a skim, so I may be wrong), they all seem to be doing structural editors for specific languages, and that's where their ambitions end. That's their focus: some language. My focus is: applications. Within a power-user environment, which can't be done without a GUI toolkit, which can't be done without image-based programming.

Surely, there are some efforts which I like. Glamorous Toolkit. Not particularly about structural editing, but whatever Smalltalk stuff you take, it just tends to be interesting. Ultimately, they fail to deliver in some way, for me at least. (I comment on GT in the "All Else is Not Enough" appendix article.)

I mean, yeah, look: practical applications. First of all: usable to myself. Then: flexible enough to fit anybody else.

PS The topic article is a rant, I admit. And I rant here and then a bit in the main article, sure. But, please, don't be too quick to classify everything there as a rant, even if it stylistically looks so.

PPS I have to take a look at pijul again, maybe I missed something, thank you.


Well, I think they tend to focus on a particular language as a first step to reduce the scope. I had to refactor a python codebase, and the same function was used in context managers, decorators, and functions. Because of these disparate syntactical structures there was no way to refactor things easily or entirely de-duplicate some code. This is not an issue you would have with a Lisp.

> Glamorous Toolkit

Thank you, I was just looking at knowledge management solutions, and did not find anything really satisfying, so I was just building a small prototype for myself. I'll have a look at it in more detail!

> please, don't be too quick to classify everything there as a rant

I think you could alleviate it by providing some navigation allowing readers to skim more easily (I understand how much jank and sucking goes into editing large files, so the comedic effect is a bit lost in the PTSD) or having more focus on achieving your vision. It's mostly editing. But your writing doesn't have to appeal to everyone either, it's your choice.

By all means I wish you good luck and I'd like to check in on the project in one year or something.


> Well, I think they tend to focus on a particular language as a first step to reduce the scope.

Yeah, that's a way to do this, but if you start small, you will then later find yourself walled within your initial assumptions about what you really want. Oh, well.

> By all means I wish you good luck and I'd like to check in on the project in one year or something.

Sure, and thank you!


I never cared about structural editors because the argument always seemed to be that being able to ruin the parse (going from a state where it parses to one where it doesn’t) with your editor is bad. Because I don’t care: I want the supreme flexibility of going from state A to B through some ill-formed textual editing much more than I want to be protected from ending up in a bad parse state, since syntax errors are one of the more simple things that a programmer has to deal with.

But I am perfectly willing to embrace structural editing if it makes editors much, much more efficient and less complex (e.g. you don’t need to cache things).

Maybe things become "janky" because many of us are a bit too good at limiting source files to at most 1K lines, so we tolerate the minor hiccups that we encounter?


> I want the supreme flexibility of going from state A to B through some ill-formed textual editing

Seems like the author agrees with you (albeit in a different article):

> … the traditional structural editors do tend to impose some limitations … But these are not the inherent qualities of structural editing. This is just the way someone chose to implement it. So, when I say that Rune provides seamless structural editing, what I mean is that the other kind of choices are possible: the kind where you are editing as if the structure is only apparent, but not in your face, and so, it's just like there are no seams.

https://project-mage.org/the-power-of-structure#orgeb59167


Yup. Apparently.[1] Seems that we agree that your typical structural editor is either too gimped or too inflexible to be useful. But I believe him when he writes that it is possible to make a unified language for editing all kinds of structures. I certainly wouldn't wanna learn a new editing mode for every little yaml/json/java/rust/yaml-superset/yaml-superset-superset.

[1] But I don’t have time right now to read that main article to find out, since Firefox claims that it is a 2 hour+ read. :p


I'm not an emacs power user. I use emacs solely.

Compared to me the author feels like a power user of emacs (or was one) and is making broad arguments against the existence of people like me: not power users.

I've never run into these issues they talk about and I don't really know elisp.

Mostly, over vim, I just like chord editors more than modal editors. If a more modern terminal-based chord editor came out I'd try it.


I think his point is that you cannot use Emacs solely. You probably need a browser and a spreadsheet and lots of other software.


Can nano or pico do it? Can vim do it out of the box? For that matter, how does VSCode stand up to "very large" CSV files? I really don't know but I suspect this is less of an Emacs "problem" than it is a matter of fact for most editors that aren't specialized to the specific task of "very large" CSV files.


But Emacs has browsers and spreadsheet programs and a lot of other software. Out of the box it even comes with a great editor.


When reading this article, I think there is an interesting parallel to be made with Firefox.

Emacs has questionable technical underpinnings. It is an old project; we've all learned a lot since the 1970s. ELisp wouldn't be built that way today, it wouldn't be written in C, and it'd be designed with keybindings for a modern keyboard - probably cloning vim. Maybe break from Emacs tradition and build something that is good at editing text.

Firefox faced a similar challenge from modernity with changing security and performance demands. But they chose to remove XUL, which killed off their own extension ecosystem and put them in a permanent "Chrome but worse" category that they have been unable to escape from.

In some ways it is impressive that Emacs has managed to avoid being killed off in a massive rewrite attempting to chase other text editors. The temptation must be there; it has a lot of deficiencies. But it is a unique and rewarding piece of software for anyone who wants what it does.


> it'd be designed with keybindings for a modern keyboard - probably cloning vim.

Or not. This is something some have a hard time understanding, but some of us prefer a non-modal editor and like having simpler key chords instead of key sequences. After all, vi isn't that much younger than Emacs, and the technical underpinnings of its command language are just about as old as the oldest versions of TECO Emacs. My point is that the difference between vi-style keystrokes and Emacs-style is a matter of taste.


Bill Joy, maker of Vi, was rather impressed by Emacs's input model in fact:

> I think one of the interesting things is that vi is really a mode-based editor. I think as mode-based editors go, it's pretty good. One of the good things about EMACS, though, is its programmability and the modelessness. Those are two ideas which never occurred to me.

https://web.archive.org/web/20060701083055/http://web.cecs.p...

I am used to Emacs's parallel universe of keyboard shortcuts now, but I think you could make a good Emacs using standard Windows keybinds.


> This is something some have a hard time understanding, but some of us prefer a non-modal editor and like having simpler key chords instead of key sequences.

I'll second this.

I literally grew up on VI - my dad installed it with UNIX tools on the family 486 running MS-DOS. Then I dedicated a week to learning Emacs around the turn of the century, and I've never looked back.

To be sure, I still whip out vim for quick editing of remote server config files and it's my editor in mutt. But there's just so much power in emacs - I'm reminded of Vivek Haldar's "Levels of Emacs Proficiency"[0], and the fact I do most everything in Emacs these days (I live in org-mode, play music in EMMS, run git through EGG, etc, etc). It's hard to think of anything with this level of consistency of keybindings, nor something that does so many things I need to do.

[0] - https://www.vivekhaldar.com/articles/the-levels-of-emacs-pro...


> In some ways it is impressive that Emacs has managed to avoid being killed off in a massive rewrite attempting to chase other text editors.

No point chasing when you’re the leader.

It would have to happen in a fork, and a fork attempting it would run into all the problems Richard Stallman identifies when these things are proposed (and rejected). It’s not like they’re unaware that some decades old critical components need to be redone. But this is emacs, not a FAANG throwaway, so that kind of work has constraints and expectations and standards to meet. Chunking garbage at it because emotions isn’t an option there.


The entire article comes down to this quote:

  WHY IS EVERYTHING SO JANKY AF?


And my reaction was, is it really so janky? Maybe it's not janky enough for me to notice. Except for the handling of long lines, that's really slow. I wish it were just janky.

However I don't understand one thing. Did the author switch to something else, or is he still using emacs after this long and convoluted rant?


Long lines are 'fixed' in Emacs 29.1. They've redone bits of the engine that made it slow.


There is so-long.el, which has been part of Emacs since 27.1: http://git.savannah.nongnu.org/cgit/so-long.git/tree/so-long...


> Did the author switch to something else or he's still using emacs after this long and convoluted rant?

Building my own solution which the website is dedicated to.


That’s the good part. The bad part is the lengthy complaining about needing something better than emacs before concluding that emacs is the best.


I had to laugh when I read the complaint about evil mode interactions. Running an emulator for another UI paradigm, unhappy about how it works.


> And then there are the floaters, the passers-by. They judge Emacs solely by its features.

> But that's not how a power user judges it. Instead, the power user judges a piece of software by what power it provides and what he could do with that power to help himself.

In that case I'm not a power user. I tried using emacs for a good amount of time, probably about a year of wall time has been spent with me using emacs as my primary editor. I installed a few packages, and never tried to script anything myself. My issues with emacs were not easily scriptable.

Although all-in-all, I didn't find emacs particularly lacking in features compared to vim, which is my preferred editor.

In the end I stuck with vim because it's slightly less clunky in my very subjective experience, and I am faster with it. And it is visibly slow, even if you set up an emacs server ("what is a text editor 'server'?" I can hear vim and nano users muttering).

Rather than just s**ing on emacs I want to present the things that I did like about it, compared to vim:

- The ability to quickly cycle through recently yanked/cut things. It's possible to do this on vim, but nowhere near as easily as M-y.

- Keyboard commands are universal and not modal, so I don't need to learn two totally different sets of commands for simple movement and editing.

- Keyboard commands are also available in many prompt-based tools like bash, gdb, and many REPLs. Don't tell me about vi-mode in bash unless you actually use it and like it.


I've found the emacs rant itself amusing, since Emacs is a very old (slightly younger than me) and gargantuan piece of software, and it shows. On the other side, it is still the most extensible and moddable editor out there and its crown jewels (org, magit) are unmatched.

As for the project itself, I remain sceptical. Partly because I do not see how it would be more amazing for general text-editing tasks than emacs/vim + tree-sitter, partly because it is written by an adept of a language which has for decades been more known for rants about programming than for delivering amazing software for end-users.


> text-editing tasks than emacs/vim + tree-sitter

If you like mediocrity, sure. That's what so many have OK'd, and that's fine, not everybody has to care about tools.

And you are right for being sceptical: it's only natural, I would be too if I saw what I wrote without knowing what it was. As an Emacs user, I could be dismissive of new efforts, because, you know, emacs is enough and all.

And I have said this a few times in the comments, but I am far from tired: the point of structural editing is not just editing.

If you have time, take a look at Alchemy and what I want to do in it:

https://project-mage.org/the-power-of-structure#AlchemyCL

Slime for lisp didn't do it for me.

My thoughts on tree-sitter:

https://news.ycombinator.com/item?id=34375137#34379449

> org is unmatched

Yeah, but is it even good? I am using it myself, have been for many years. It doesn't bring me joy to say this: but it's not enough either. Far from it. Miles, and galaxies away from it. It could be so much better (the experience of note-taking and computational notebooks).

> an adept of a language which is since decades more known for rants about programming than for delivering amazing software for end-users

Look, all the rest of the languages didn't deliver it for me either (and for the computing industry at large, even though many people would prefer not to think about it).

The computer industry is very young. Very young. I get the skepticism, but just think of it!

Emacs kind of did deliver, to a point, for some time -- and that's written in lisp. One of the longest-living pieces of software.

Or are you saying that Common Lisp itself is a problem? I mean, it was simply the most practical choice on my part. I don't consider myself an adept, I don't consider CL perfect, but it's suitable for what I want to do.


For a text editor, mediocrity is fine. It has to solve many tasks for many people. It has to be shown that structural editing makes an editor simpler, faster and suited for a broader range of tasks. Until then, text is the king.

> Yeah, but is it even good?

Yes, it is. It is the only solution which was able to replace several outliners + highly specialized solutions like OmniFocus for me. And that, being a simple text outliner. There is no point comparing it to some hypothetical solution which does not exist and may never exist at all. We need it here and now.

> and that's written in lisp.

It may be more of an artefact of the time when emacs was created than some intrinsic advantage of lisp in general. The way I see it, the advantage of emacs is that it is mostly written in its own scripting language. Were it written in Lua instead (a more common choice these days), not much would change as long as you can eval code on the fly.

> Or are you saying that Common Lisp itself is a problem?

I would not say that CL itself is a problem, but on the other side the lisp community likes to write essays about the perfect software in the sky but delivers mostly nothing. I cannot do otherwise than compare it to Rust, which is a very young language compared to CL, but whose community cranks out amazing and useful software on a weekly basis.

So, yeah, while in theory nothing speaks against amazing software being written in CL (as in any other language apart from brainfuck, really), in practice it raises a lot of red flags for me.


> For a text editor, mediocrity is fine.

No, not for the text editor, but maybe for some users. If a tree falls, does it make a sound?

After all, maybe you are just not a power user and don't have the same frame of reference that power users have.

> It has to be shown that structural editing makes an editor simpler, faster and in suited for a broader range of tasks. Until then, text is the king.

Text might be the king until then, yes, but so is C++ and the children running with AKs in Africa. Doesn't mean it's right or acceptable.

I have argued extensively that yes, structural editing can absolutely be simpler and faster and more versatile. Without a question.

> I would not say that CL itself is a problem, but on the other side the lisp community likes to write essays about the perfect software in the sky but delivers mostly nothing.

I am sorry, but I am not a part of that generation, if there's one. Yes, I can see how strange it is to not see what's claimed to be a very powerful language overtake the world. I agree on that, it's a bit mysterious.

But some things just take time, I think. For the people to get accustomed to it; to go through some pain first, perhaps.

Look, the only reason I am using CL is because it's an image-based language. That's simply crucial for interactivity. What are my other options? Smalltalk isn't bad at all, but I like macros too much, and the stability too.

> in praxis it raises a lot of red flags for me.

You have to account for the fact that the community hasn't been nearly as large as that of any mainstream programming language. So, aren't your expectations, perhaps, a bit too high for the output of that community? It only has something like 3k packages online. Python has 420000+. Where is all the amazing Python userland software that has lived for >5 years?

And, you know, as small as it is, people have published some pretty cool libraries. Stable libraries. Sublanguages. Macro libraries. FWIW it's pretty cool as it is. A game was published two days ago: https://store.steampowered.com/app/1261430/Kandria/

Is it amazing? Haven't played it, don't know. Maybe? But what exactly is all that software you are expecting to begin with?

I don't know the name for it, but there ought to be some cognitive bias that says: "things that don't happen, can't, and won't".

Of course, I get it. I would be sceptical too. Maybe even very sceptical. But there's no other way to deal with scepticism other than to analyze what's actually proposed, and whether it's doable.

https://project-mage.org/why-common-lisp


> After all, maybe you are just not a power user and don't have the same frame of reference that power users have.

Maybe. Or maybe your ideal of an editor has nothing to do with what everyone else wants from it, including power users.

> Text might be the king until then, yes, but so is C++ and the children running with AKs in Africa. Doesn't mean it's right or acceptable.

I will ignore the "but the children" polemics, because it is true for C++ and that is what Rust shows. The important part is "shows" - rants do not. In the end, talking is cheap - show me a better editor than emacs first, because a promise cannot edit text.

> Yes, I can see how strange it is to not see what's claimed to be a very powerful language overtake the world. I agree on that, it's a bit mysterious.

Yeah, indeed, if you have a wrong premise about a language being "very powerful" per se and refuse to re-evaluate it, then it is "strange". Or you accept that the powerful parts of lisp do not matter in the end, the disadvantages are real and that it comes from an era with little collaboration and small-scale software in general. Then you shrug and move to, you know, actually delivering something.

> Is it amazing? Haven't played it, don't know. Maybe? But what exactly is all that software you are expecting to begin with?

Ah, the inevitable CL debate :) You can find a bunch of libraries, half-written and abandoned decades ago, but the fact is also: I have software written in Rust on my mac right now, be it only the inevitable ripgrep, but I cannot remember ever using or having installed anything written in CL. And that will be true for most people. It is a pretty much dead language which is, on the other side, made undead by claims about some perceived PL superiority which does not result in any significant amount of useful software being written. When you stop drinking the lisp-superiority kool-aid, not much remains to see; such is my perception after some diving into CL and Clojure.


Alright, I get it, you don't like CL. Coolio.

> show me a better editor than emacs first

I will. Just one correction: it's not going to be an editor. It's going to be many editors forming a cohesive whole.

Rune: Rune Is Not an Editor.

Other than that, what can I say? Yes, I have to prove my words, even though, in my opinion, the writings have enough to convince you that it can be done. And I will do my best to deliver.


I wish you to succeed, of course :)


Thank you : )


> I cannot remember ever using or having something installed in CL

I get you, of course. There is not a lot of user-facing, open-source software. What there is tends to be big, not CLI utilities like ripgrep, and not well advertised. A few ideas: pgloader (from Python to CL), pgchart, OpenMusic, Opus Modus, the Scorecloud app, CEPL to play with graphics, regex-coach for Windows, the Lem editor, Maxima, Axiom, Sucle (a Minecraft clone), PTC's Creo Elements multi-million-line 3D CAD software with a free version, StumpWM…

but also you are ignoring all the big historical projects (some still active, like Maxima).

Try https://ballish.margaine.com/ for a fast code search.


I merely skimmed this article because I personally don't care much about the details, but also because I completely and fully understand his pain.

I've done quite a bit of the "searching for the perfect system," including doing Emacs + Org Mode for a few years.

I ditched it because I do like a lot of modern tools, and today I use a ton of little scripts and hacks, mostly around zim-wiki, but what I think the author is getting at is more-or-less the "hypercard" thing.

Namely, it could and should be far easier to cobble together systems of programs as individuals that work together and that don't strongly separate "user" and "developer."

(aka screw GNOME? :) )


All I can say is "Glamorous toolkit"

https://gtoolkit.com/


Wrote a review on it on the website, copypasting:

Glamorous Toolkit[1] promotes the idea of moldable development[2].

There's a talk on it: Tudor Gîrba - Moldable development.[3]

The basic idea is to have multiple views and editors for any piece of data in your system (including code). Kind of interesting, but the toolkit looks and acts more like a fancy computational notebook type of environment, but without explicitly being a computational notebook.

The site on moldable development states its difference with literate programming:

They are similar in that they both promote the use of narratives for depicting systems. However, Literate Programming offers exactly a single narrative, and that narrative is tied to the definition of the code. Through Moldable Development we recognize that we always need multiple narratives, and that those narratives must be able to address any part of the system (not only static code).

And that's a sensible viewpoint. But I still see it as an advanced version of literate programming, all done within an interactive environment.

The focus of Glamorous Toolkit seems to be on explaining a code base or a certain part of the system by presenting it via a custom tool.

But I am not too convinced with the top-level development model / workflow it assumes for you. I guess it's too narrowly-focused / opinionated.

It's also a custom fork of Pharo, so the question of long-term stability is even more unclear than that of Pharo itself.

I can't say I can compare it to Project Mage in any meaningful way, except it's also a live environment.

[1] https://gtoolkit.com/

[2] https://moldabledevelopment.com/

[3] https://www.youtube.com/watch?v=Pot9GnHFOVU


More recent follow up discussion: https://news.ycombinator.com/item?id=34380373

This whole thread was a great read, and since I have many times benefitted from finding gems like this on HN through a search engine well after the fact, I'm leaving the link above.


I very much like this article. I especially like the comparison to TempleOS. I think Terry Davis blew computing wide open. That is what a personal computer can do, when it’s treated as an instrument instead of an appliance, and the user treated like the player of said instrument, instead of a child in a china shop.

Imagine if you could plug your guitar into a computer, have that audio file, be a widget in a buffer, and then, pass it to other functions just like any other object, process it using, idk, map(car), and then sync it to a video using list functions, maybe with different implementations for the data structures under the hood. Or if you could make a game, modify it while you’re playing it, share that game as an image with a friend, and have them open it up, interact with it the same way they do with every other data structure from email to org mode.


Every time I see something like this, I'm also reminded of this demo of the Xerox Alto from a few years back. I'll link to the start of the most relevant piece: https://youtu.be/AnrlSqtpOkw?t=549

I'm always struck by the useful directions that desktop computers were going in the 70s, and how computing could be so much better than it is right now. But we've veered so off-course from all of this, away from composability and towards independent boxes ("applications") that can't talk to each other at all.

Emacs is in some ways the best thing we have today in the other direction, but it pales in comparison to what's possible.


What Emacs users want is a version of Smalltalk with a superior keyboard driven experience, featuring which-key et al.

> I'm always struck by the useful directions that desktop computers were going in the 70s, and how computing could be so much better than it is right now. But we've veered so off-course from all of this, away from composability and towards independent boxes ("applications") that can't talk to each other at all.

This is the consequence of deliberate corporate agendas, but many people are still in denial of that.

The worst thing that happened to this kind of computing was the policy of forcing everything through the web interface, which was deliberately crippled to let the industry majors get a head start in browser-based software.


> What Emacs users want is a version of Smalltalk with a superior keyboard driven experience, featuring which-key et al.

No, more like Interlisp, which is kinda like Smalltalk in terms of being an active environment, only without having to deal with... you know... Smalltalk in order to use it. Plus, there's something to be said for a plan that's probably now mostly forgotten, to re-write Emacs to use Guile as its language and compile other languages to Guile so programmers could extend it in any of multiple languages and they'd all interoperate. The target language doesn't have to be Guile, of course; the modern conception would likely involve WASM and/or LLVM bitcode.

https://www.emacswiki.org/emacs/GuileEmacs


Interestingly, TJ DeVries just did something like that for neovim. He made a transpiler from vim9script to Lua to allow compatibility. I found that around neovim there is a crafting vibe similar to the one you find around Emacs.


And they've got Fennel and some Fennel based frameworks to enable them to configure the Vim settings in a Lisp.


> Emacs is in some ways the best thing we have today in the other direction, but it pales in comparison to what's possible.

Imagine joining Blender, sensors, actuators, voice assist, AI, Lisp, Emacs.

If a project could attract the backers Blender is getting, to explore and document design possibilities for Emacs, that's a move in a better direction.


Yes, yes, yes and yes. I am glad to see this sentiment expressed here!


Very true. From one vantage point, things look like this:

1970s: The Visionaries - showing us what the future (of desktop computing) could look like.

2023: the pre-1970s state of things has become pervasive, runs faster and is cheaper, but is not nearly at the level of sophistication that the 1970s had. (State of the art is still free clones of Unix, a 1960s OS.)

So where is the course correction? And where are the 2023 ideas that look as far into the future from today as those of the J.C.R. Lickliders, D. Engelbarts, T. Nelsons and A. Kays of the day did?

(Thankfully, in some areas, things look better - e.g. mobile phones, robotics, machine learning applications etc., so the point here is only valid about desktop HCI progress.)


I wonder if the SideFX Houdini guys knew about Smalltalk and if Kay and his team ever saw Houdini. The universal connecting data is very similar (and one of the most pleasurable things ever).


>Imagine if you could plug your guitar into a computer, have that audio file, be a widget in a buffer, and then, pass it to other functions just like any other object, process it using, idk, map(car), and then sync it to a video using list functions, maybe with different implementations for the data structures under the hood.

Amazing, thank you. This comment, and the replies to it, perfectly summarize what I am looking for in a complete LISP environment. True integration will treat not only text blocks/streams, but multimedia of any kind, seamlessly and as first-class citizens. The old LISP engineers understood that, but as another person said, the limitations of that time put significant barriers to that goal.

We no longer have such limitations; it is time to revisit that model.


I find the fascination with TempleOS honestly weird, and I think people are honestly only fascinated by it because of the author. We have had smalltalk for the longest time. We have had so many attempts at "embed rich content into shell", we have had all of this!

Hell, all of what you describe is basically available in the form of Mathematica.

And "share a program with a friend" is called a webpage with Javascript. We have the universal VM, just not in the shape that you would like.

I don't want to be too dismissive of this stuff but I think at one point we gotta try using all these new fancy ideas that would be good for everyone, instead of constantly just imagining "what if they existed". Let's actualize a bit!


XEmacs was a great replacement back when UNIX environments were found lacking in IDE offerings.

I still have muscle memory for the stuff I used between 1995 and 2005, across various UNIX commercial workloads.

Nowadays thankfully all the environments I care about have quite good IDE support in some form, while the Emacs experience at its core has hardly changed, although it is much better than in the XEmacs vs Emacs days.


Hi! I am the author. I will be glad to answer any questions.

First of all, I don't want another Emacs rewrite, much less in Guile. Mixing languages is not good for power-use, which requires ease-of-use, or at least conceptual simplicity. I talk more about it in the article in the Project's Philosophy/Homogeneity section in [1] The Power of Structure.

I am proposing that we need to really start considering a different paradigm, and attempting to do it right, and that's structural editing. People are wondering if it's possible to write better structural editors. We all know there have been attempts to do those, and, well, lo and behold, those were janky too.

But they don't have to be. When people start thinking of structure-editing, they immediately jump to the "how do we do C++". Well, in fact, I could tell you how we could do exactly that: you could start small. You start with what you know. And you know that you could, say, start with structuralizing the {} brackets. That's a semantic unit. So, that's a start. Even without getting down to the compiler level.

But I am not arguing that I am about to do wonders for something as complex as some mainstream language in terms of structural editing. I believe it can certainly be attempted, though, and certainly improved, piecemeal. And what you can't do: mix it with the traditional string-based editing. Or take Python: that one would structuralize pretty nicely by indentation. Would it accomplish everything? No. But it would certainly help.

The gist of it boils down to the fact that you don't need to start at that very complex level, you can do things piecemeal and still get many benefits. Ask yourself this: can you edit a /list/ structurally, i.e. edit like in a string-based editor while maintaining an actual list behind the scenes? Sure, you can. A tree? Absolutely. Look at Paredit.
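
To make that concrete, here is a toy sketch (nothing to do with Rune's actual internals, just the shape of the idea): the "buffer" is a real list, the editing commands mutate the list, and the text you see is merely a rendering of it:

    ;; Toy sketch: the buffer *is* a list; the text is derived, not primary.
    (defstruct list-buffer
      (items (list "alpha" "beta" "gamma"))
      (point 1))                              ; index of the current item

    (defun insert-item (buf string)
      "Insert STRING right after point, like typing a new element."
      (push string (cdr (nthcdr (list-buffer-point buf) (list-buffer-items buf))))
      (incf (list-buffer-point buf)))

    (defun delete-item (buf)
      "Delete the item at point. No re-parsing: the structure was never lost."
      (let ((i (list-buffer-point buf))
            (items (list-buffer-items buf)))
        (setf (list-buffer-items buf)
              (append (subseq items 0 i) (nthcdr (1+ i) items)))))

    (defun render (buf)
      "The textual view is computed from the structure on demand."
      (format nil "(~{~a~^ ~})" (list-buffer-items buf)))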

THERE'S NO REASON FOR THAT TO BE JANKY. NO reason why that wouldn't work.

It can absolutely be done.

And, really, structural editing like I am proposing /subsumes/ string-based editing, because you can just write a specialized editor for general strings, and use that for things you don't know how to structure yet. And yet, even those string-based editors can be specialized further, as some semantic units like words, expressions and even characters are often immediately apparent.

What does that give us? At the very least: object identity and programmatic access, and having the ability to pick your own data structure.

These kinds of small things are what's actually going to be very useful for stuff like note-takers and computational notebooks and REPLs and whatnot. We don't need to start with programming languages (though I am going to do a Common Lisp IDE).

Please, ask me anything! Let's talk!

PS Another very important point is that ambiguity can be localized. Look at the Alchemy section in [1] where I discuss dealing with the reader (but I also talk about it in Rune).

PPS And thank you for posting this. It's exciting to be reading comments. Truly.

[1] https://project-mage.org/the-power-of-structure


I find most of your complaints incomprehensible.

> Well, alright, I don't see no CSV-mode to arrange everything into a pretty table and then let me filter/sort/edit the damn thing.

`csv-mode` is right there on ELPA, the default package source for Emacs. You went to MELPA, which only has packages from people who don’t want to license their code the same way as Emacs. Install `csv-mode`, then type `C-c C-a` to format it into columns, `C-c C-s` to sort by a field, and `C-c C-n` to sort numerically by a field.
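
In case it helps, here is a minimal init sketch (assuming a reasonably recent Emacs with package.el; GNU ELPA is a default archive, so no extra archive setup is needed):

    ;; Minimal sketch: install csv-mode from GNU ELPA.
    (require 'package)
    (package-initialize)
    (unless (package-installed-p 'csv-mode)
      (package-refresh-contents)
      (package-install 'csv-mode))
    ;; Opening a .csv file should now use csv-mode via its autoloads;
    ;; C-c C-a aligns the columns, C-c C-s / C-c C-n sort by a field.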

> A table editor within emacs will be janky, it will be a slow heap of cowdung, outspreading and dispersing, channeling the fumes.

This is completely wrong. There are half a dozen Emacs packages that give you variations on the theme of a table, and none of them are slow or janky in my experience.

> Well, scrolling that thing was not fun, I will tell you that. Neither was getting spammed with:

All the errors you report look like they were caused by packages you have installed, not Emacs itself.

> Timed out waiting for property-notify event

This one is actually caused by your clipboard manager rather than an Emacs package.

Etc.


> `csv-mode` is right there on ELPA

Didn't know about that one. Jesus, Elpa, really? Not that it changes anything.

>> A table editor within emacs will be janky
>
> This is completely wrong. There are half a dozen Emacs packages that give you variations on the theme of a table, and none of them are slow or janky in my experience.

Haha, have you used org-mode lately? Have you tried formatting a large table? Is it fast, you think?

Look, make a 100-column by 100,000-row table that you can edit (although something like 10 by 20 will do the trick too). Not just view, but edit. It's not a question of whether it will be janky, it's a question of how long you have to wait before exiting the damn thing in a furious fit of anger. That's just what you get with plain text, plain and simple. I don't see how this is even an argument.
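
If you want to see for yourself, here's a rough sketch (the function name is made up) that builds a table of a given size and times org's realignment, which is what you pay whenever the table gets re-aligned:

    ;; Rough sketch: build an N-row, M-column org table and time the realign.
    (require 'org)
    (require 'benchmark)
    (defun my/org-table-stress-test (rows cols)
      (with-current-buffer (get-buffer-create "*org-table-test*")
        (erase-buffer)
        (org-mode)
        (dotimes (r rows)
          (dotimes (c cols)
            (insert (format "| r%dc%d " r c)))
          (insert "|\n"))
        (goto-char (point-min))
        (benchmark-run 1 (org-table-align))))  ; => (seconds gc-count gc-seconds)

    ;; e.g. (my/org-table-stress-test 1000 100)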

And then, why do you think it is that there are half a dozen packages for it anyway? Is it, perhaps, because not one of them really does the job?

> This one is actually caused by your clipboard manager rather than an Emacs package.

I take a rant license on that one.

> Etc.

But, you see, that's the problem: these problems don't end.


I’ve used various large `org-mode` tables with great success and no rage, but a table that large means that you need a database, not a spreadsheet. Use sqlite for it instead and you will be much happier. Use the right tool for the job; nobody is forcing you to use Emacs for every single thing.
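
You don't even have to leave Emacs for that; a minimal sketch, assuming an Emacs 29 built with sqlite support:

    ;; Minimal sketch: Emacs 29's built-in sqlite bindings.
    (when (and (fboundp 'sqlite-available-p) (sqlite-available-p))
      (let ((db (sqlite-open "/tmp/big-table.db")))
        (sqlite-execute db "CREATE TABLE IF NOT EXISTS t (id INTEGER, val TEXT)")
        (sqlite-execute db "INSERT INTO t VALUES (?, ?)" '(1 "hello"))
        (sqlite-select db "SELECT val FROM t WHERE id = ?" '(1))))
    ;; => (("hello"))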

I find that whenever people complain that Emacs is slow or janky, it is usually because they have configured too much stuff to run all of the time. They got excited by the possibilities and downloaded every single shiny package they could find. After a few years of downloading shiny packages Emacs just isn’t fast any more, and so they blame Emacs.


> you need a database, not a spreadsheet

Exactly, a custom structure, a custom object. That's my point. And that point is: the ability to specialize on custom structures, which are unavoidable if you are to keep your sanity.


But why are you complaining about that? Who told you that it would be a good idea to use Emacs instead of a database? Why did you ever come to that conclusion?


Because it's often claimed that Emacs can do everything. Or, if not claimed, then assumed. And if not assumed, people go and try to do this stuff anyway. I gave an example of an object-oriented spreadsheet in my article.

You can't stop people from trying, but that's just the wrong paradigm to try those things in. My point: we can still have an environment that can do all of these things. Embed a spreadsheet and interface with a database textually. Why, is what Emacs gives us really the end of it, you think? Because I don't.


You must have misread those claims.


Yeah, right : D

In any case: there are quite obvious limitations, and, to me, they weren't very apparent when I started out in Emacs.


VisiData : tables : Python :: Emacs : text : Lisp


> But they don't have to be. When people start thinking of structure-editing, they immediately jump to the "how do we do C++".

What does that mean?

> start with structuralizing the {} brackets. That's a semantic unit.

So, vim.

Getting simple structure done is not the problem. We have it everywhere, and everyone can build their own tools in proper environments. Supporting the big picture and custom structure is the unsolved problem. The best we get in that realm would be support for XML, Lisp, maybe also JSON and YAML and the like. But all those are hyper-specialized tools, optimized for those specific cases. What we lack is something good which generalizes this.

> The gist of it boils down to the fact that you don't need to start at that very complex level,

No, there is, there always is. Because if you start simple, you always end up with wacky, unsatisfying solutions at some point. Nothing scales well to infinity. Micro-managing and macro-managing are different scopes with different solutions. Simple is good for the micro levels. Complex is good for the macro parts. Think about the text editing of advanced text editors, and the abilities of vim. Advanced text editors are simple, and not bad. But they still cannot compare to the complex editing of vim.


> What does that mean?

That means they are trying to solve some very difficult, general problem first. But, you see, that's exactly the problem: you don't want to go general. You want to go specialized. And here's the key: then you want to mix and orchestrate the interplay between your simple, specialized, well-working elements. And, indeed, you can define some general interface properties for that, once you have it.

> Getting simple structure done is not the problem.

I agree!

> What we lack is something good which generalize this.

I can't emphasize this enough, but trying to find a general structure and fit it to everything is a path to failure, and I am arguing vehemently against such structures or approaches [1]. Strings are such a general structure. Although there were others proposed, like Ted Nelson's zig-zags. That's where you do not want to go.

But again: generalization is not the problem, the problem is the ability to specialize.

But, yes, you are right: there has to be an overarching system, and that's exactly what I want to do. A system where the simple parts can interplay. And importantly: embed.

The simple structures I listed, and a few others? Those will be enough for the applications that I want to do. And that will be plenty useful to me, already at that.

See, the fact of embedding itself lets you manage the complexity, because that's where a lot of the complexity lies: in hierarchies. And then, when you are doing editing operations, you have full and easy introspection into all the structures that you are operating on, so doing them right will be possible.

PS I have been using vim for quite a few years, and I don't see it as some kind of complex editing (other than bindings and modality which takes getting used to). Maybe I am misunderstanding what you mean by this exactly, though.

PPS I am sorry for responding slowly, there are quite a few comments.

[1] https://project-mage.org/on-flexibility


> trying to find a general structure and fit it for everything is a path to failure

Depends on whether you take "everything" literally. That nothing can solve absolutely everything is obvious. But solutions which can handle the majority, or even 99.99%, of the relevant cases of your domain: I think we already have that today for structures. XML, for example, was optimized for decades to handle structures. Not all parts of it are good for the majority of cases, but the general ideas and concepts are universal enough that flavors of them will always appear when tackling such problems.

So instead of building yet another new solution, maybe just look at the existing tools and how we can make them more usable for your use cases.

> PS I have been using vim for quite a few years, and I don't see it as some kind of complex editing (other than bindings and modality which takes getting used to). Maybe I am misunderstanding what you mean by this exactly, though.

Normal editing today is: you press a key, something happens, that's all. Sometimes you add a modifier, but that's mostly it. Vim on the other side, you have modes, you have parameters for key presses, you compose commands to create new commands on the fly. That all is several steps more complex than just a simple key press. Vi-Input is a whole language in itself, highly complex on a cryptic level, all just to macro-manage micro-tasks.
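
A concrete illustration of that composition, where each piece (operator, count, motion or text object) slots together:

    d2w   delete two words            (operator d, count 2, motion w)
    ci"   change inside the quotes    (operator c, text object i")
    dap   delete a paragraph          (operator d, text object ap)
    .     repeat the last change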


> So instead of build yet another new solution, maybe just look at the existing tools and how we can make them more useable for your usecases.

Existing solutions don't integrate with an environment, nor with each other.

> XML

I went to a lot of trouble to explain my position on general data structures in the article I linked previously. You can't, in general, really optimize general data structures for specialized use-cases. That's why they are called general!

You wouldn't use a map where you could use a vector, right? But a map would still do the trick, no?

If you start doing all that, you are then trying to fit logs through a meatgrinder. You then have bloat, inefficiency, and all that other good stuff. I mean, you see, this is exactly why we have so many problems right now: people trying to use a general solution for everything. But all those solutions aren't good enough for one reason: they can't specialize. And you can't make them so. (That's the whole point of the topic article, too, just applied to strings.) And they can't interoperate.

> even 99.99% of relevant cases of your domain

Currently, that's more like 50%, and the rest is unattainable without access to, and knowledge of, the structure. And the first 50% doesn't even work that well either. I list a few applications on the website, so don't take this as a bunch of empty claims: those can't be done efficiently (or practically) via the general means.

> Normal editing today is: you press a key, something happens, that's all. Sometimes you add a modifier, but that's mostly it. Vim on the other side, you have modes, you have parameters for key presses, you compose commands to create new commands on the fly. That all is several steps more complex than just a simple key press. Vi-Input is a whole language in itself, highly complex on a cryptic level, all just to macro-manage micro-tasks.

It may be complex in comparison to Notepad, sure, but I haven't seen what's so cryptic about it, not really.


> Existing solutions don't integrate within an environment, and not within each other.

Integration is surface, the interface to the user. Not the concepts or libs.

> I go into much trouble to explain my position on general data structures in the article I have linked previously.

Sorry, but not reading that, not after the first article. Too long, not enough meat, too unfocused. You should write shorter articles and stick to the topic when you want to reach people. Life is too short and full of content.

> If you start doing all that, you are then trying to fit logs through a meatgrinder. You then have bloat, inefficiency, and all that other good stuff.

And if you don't work with a solid foundation, you will start accumulating bloat of small pieces.

> I mean, you see, this is exactly why we have so many problems right now: people trying to use a general solution for everything.

That's wrong. Problems will always exist. Everything is a compromise of tradeoffs. It's not possible to have no problems. Well, except to not do it at all. The main reason why people tend to use more generalized solutions is that it allows them to move faster, and specialize later if necessary.

Structures are also such a compromise. They enable you to handle more and press it into a form, but you lose the liberty to use forms outside your structure. Understanding and managing the tradeoffs is what's relevant here. There is a reason why we have types in the first place, and don't write everything directly into memory like on a canvas. Similarly, there is a reason why we don't have a thousand different types doing slightly the same thing, but instead use inheritance to build specialized types from a handful of basic types. XML is such a basic type, for tree structures. And it is a good starting point to build a foundation for structured editing. It is not the only type, but at the moment the problem is that there is not even this.

> It may be complex in comparison to notepad, sure, but,

Notepad is not an advanced editor. Is there even any editor which has a more complex editing than the vi-family?

> I mean I haven't seen what's so cryptic about it, not really.

Then maybe you just don't understand the problems tackled here at all.


> Integration is surface, the interface to the user. Not the concepts or libs.

That's the kind of viewpoint you get after years of using Unix or Windows.

> Sorry, but not reading that, not after the first article.

Ok. That one is not a rant, but whatever, you are arguing against its contents, and I don't see why I should be repeating myself. Sorry it's too long.

> And if you don't work with a solid foundation, you will start accumulating bloat of small pieces.

Exactly: you need a solid foundation to give you all the flexibility & meta-flexibility that you need. Otherwise meet bloat.

> That's wrong. Problems will always exist. Everything is a compromise of tradeoffs.
>
> XML is such a basic type, for tree-structures.

Hey, look, you keep arguing with the article you have decided not to read, and you are arguing with a strawman. I don't know what you think I am trying to do.

> Structures are also such a compromise. It enables you to handle more and press it into a form

You know what real flexibility means? It's the ability to decide when you want structure and when you don't. And this is the kind of structural editing I am proposing: you can simply recreate a string-based editor within it, for any of your elements. At the higher level, it can be reused. No structure where none is required? Sure. I am all for it. But there's much more structure out there than meets the eye.

> Then maybe you just don't understand the problems tackled here at all.

Or maybe you don't? Explain your point better, please.


What do you think about treesitter? https://github.com/tree-sitter/tree-sitter

The idea is to sync changes in the text to a tree structure, then have all the structure manipulation functions built on top of it. See the gif here for a visual representation: https://github.com/nvim-treesitter/playground
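
For reference, Emacs 29 now ships bindings to it as well; a minimal sketch, assuming an Emacs 29 built with tree-sitter support and (say) the python grammar installed:

    ;; Minimal sketch: parse the current buffer and inspect the node at point.
    (when (and (fboundp 'treesit-available-p)
               (treesit-available-p)
               (treesit-language-available-p 'python))
      (treesit-parser-create 'python)           ; attach a python parser to this buffer
      (let ((node (treesit-node-at (point))))   ; smallest node covering point
        (list (treesit-node-type node)
              (treesit-node-type (treesit-node-parent node))
              (treesit-node-text (treesit-node-parent node)))))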


I like that it makes people's lives easier. It's a universal solution which can be used in many editors: can't argue against reuse.

But is it perfect? I don't think so. Whatever is responsible for structural analysis should also be responsible for editing and all the extension capabilities, if you are to have a truly powerful system.

For me, the real problem lies in the integration with the client environment. Tree-sitter is fundamentally separate from the client application. That raises questions about its introspection capabilities.

I mean, suppose your comment has structure. Suppose it's a markdown document, that comment in your code. And suppose you want to edit it that way: as a markdown document (or anything more complex). So you may want a markdown parser for that. Suppose you can write that parser in tree-sitter. Suppose you can even do structural embedding. But how are you going to control and customize it? How are you going to write complex interaction policies between different structures? Embedding is where such separation starts running into limits.

Why does this happen? I believe it's simple: APIs aren't enough. What you want is for the user to have access to the building blocks of whatever is being manipulated. And keeping those blocks away from the user is never going to achieve that.

What's more, tree-sitter is still not really structural. That means it can't handle ambiguity well. For instance, in a Lisp expression, you can insert a stray parenthesis. Paredit or a tree-sitter solution is going to tell you that your structure is malformed. A truly powerful structural editor is going to simply assume that the parenthesis doesn't disrupt the structure around it, and so it will simply keep track of it, and maybe it will find a match within the same expression at some later point in time.

Yes: a structural Lisp editor doesn't have to care about a stray parenthesis. It just keeps everything around it intact. It's a local ambiguity.
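
To make that concrete, here is a toy sketch (an illustration of the idea only, not my actual design): a forgiving list builder that keeps an unmatched close paren in place as an ordinary token instead of erroring out:

    ;; Toy illustration only: a forgiving list builder that treats an
    ;; unmatched ")" as an ordinary token instead of a fatal error.
    (defun toy-parse (tokens)
      "Build nested lists from TOKENS, a flat list of strings."
      (let ((stack (list nil)))                 ; stack of partially built lists
        (dolist (tok tokens)
          (cond
           ((equal tok "(") (push nil stack))   ; open a new sublist
           ((equal tok ")")
            (if (cdr stack)                     ; there is a matching "("
                (let ((done (nreverse (pop stack))))
                  (push done (car stack)))
              (push tok (car stack))))          ; stray ")": keep it, move on
           (t (push tok (car stack)))))
        (nreverse (car stack))))

    ;; (toy-parse '("(" "a" ")" ")" "b"))  =>  (("a") ")" "b")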

And, at last, the very important idea of structural editing is not even editing: it's the ability to treat structures as objects. It's the ability to present objects using a textual interface. Suppose you have a table structure, multiple layers of it. You need direct access to all that. Editing it should be easy, but it's really secondary.

Tree sitter doesn't deal with this.

I still see tree-sitter as useful, because you still need to deal with all the plain-text files out there. Perhaps, it could be used to import plain-text into a structural-editor representation, and then it may be, again, used on small subexpressions to identify their nature, incrementally, at runtime (but not the whole document, like what a usual incremental parser would do). So, that's the value of it as a parsing tool.

So, yeah, it's a useful tool, but when it comes to some advanced points of power-use and extension, it's not really enough.

Thanks for the question, it's thought provoking in many ways.


Do you think managing CL packages in a way similar to 1968's Grail (https://www.youtube.com/watch?v=2Cq8S3jzJiQ) could work? I'm thinking of a visual overview of packages where a user can connect packages together (maybe by drawing lines, or by holding a key down and clicking on packages to link them), and can afterwards click on the vertices connecting the nodes to configure which symbols get imported.

It might be a bit UML-ish, but that's probably because I spent a bit too much time playing around with Umbrello.


Man, I love these old demos. And I haven't seen this one before. I now understand much better what Alan Kay meant when he said: "I felt like sticking my hands right through the display and actually touching the information structures directly. This is the first system I have used, and practically the only one since, that I would call truly intimate." [1] Thanks for this link.

Well, in terms of UI, I see no problem whatsoever in building such a workflow. The way it will work is: lenses (aka editors) are cells (aka widgets). So you will be able to position things freely, anywhere. You could even connect and see actual symbols get imported and such.

As for the CL side, unimporting a symbol (aka removing a connection) is possible, so I see no problem there either : )

[1] https://www.youtube.com/watch?v=QQhVQ1UG6aM


Ironically, "how we do C++" is exactly how Lucid Emacs came to be, based on their experience with Lisp Machines.

https://www.reddit.com/r/programming/comments/25r6pw/a_demo_...


Basically, when you say 'structural editing', do you mean making a parse tree for every kind of input and having a modal command language that permits traversal and editing of that parse tree? Like what sapling https://github.com/kneasle/sapling is attempting to do?


Nope. Parsing isn't even necessary for most things I want to do. Moreover, I don't really care for trees: any objects, any structures, any source can be used. There will be a common interface for it (like jump to the next semantic unit or search for a Lisp object). But any object can have its own programmatic interface, and not all objects need to implement all of the interface (e.g. a read-only object).
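
To sketch the shape of that (illustrative only; all the names here are made up), using cl-lib generic functions:

    ;; Illustrative sketch only: a common interface that objects
    ;; implement partially, via cl-lib generics.
    (require 'cl-lib)

    (cl-defstruct csv-cell row col text)
    (cl-defstruct log-entry timestamp text)     ; read-only by design

    (cl-defgeneric next-unit (object)
      "Move to the next semantic unit of OBJECT.")

    (cl-defgeneric edit-unit (object new-text)
      "Replace the current unit of OBJECT with NEW-TEXT."
      (error "%S is read-only" object))         ; default: not editable

    (cl-defmethod next-unit ((cell csv-cell))
      (message "next cell after row %d, col %d"
               (csv-cell-row cell) (csv-cell-col cell)))

    (cl-defmethod edit-unit ((cell csv-cell) new-text)
      (setf (csv-cell-text cell) new-text))

    (cl-defmethod next-unit ((entry log-entry))
      (message "next entry after %s" (log-entry-timestamp entry)))
    ;; No edit-unit method for log-entry: it falls back to the default
    ;; and stays read-only -- not every object implements everything.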

I have expounded on the point about structural editing (tangentially) in some other comments here, in this top-level thread (unless you want to read the article [1]).

[1] https://project-mage.org/the-power-of-structure


I am curious about the CL part. Can you elaborate on why the REPL input and output should be in different buffers?


In fact, I think every output should have its own buffer (or a place to end up in).

On the one hand, this is a UI problem, so that's just how I would prefer it. When you have a lot of output (like thousands of lines in a batch), you really don't want to go looking for where that thing begins.

On the other hand, you can't easily access the latest execution run programmatically. So, then, the input part shouldn't be there either.

That doesn't mean you couldn't select a run of input-output dialogue, though. Visually, the run of UI inputs and outputs could look the same; it's just that the way you work with all that stuff has to be structured.
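
Even today's Emacs can approximate the weakest form of that idea; a rough sketch (names are made up, illustration only):

    ;; Illustration only: every evaluation gets its own output buffer,
    ;; and the latest run stays programmatically reachable.
    (defvar my-run-counter 0)
    (defvar my-last-run-buffer nil
      "Buffer holding the output of the most recent run.")

    (defun my-run-and-capture (form)
      "Evaluate FORM; print it and its result into a fresh buffer.
    Return that buffer and remember it in `my-last-run-buffer'."
      (let* ((n (setq my-run-counter (1+ my-run-counter)))
             (buf (generate-new-buffer (format "*run-%d*" n))))
        (with-current-buffer buf
          (insert (format "%S\n;; =>\n%S\n" form (eval form t))))
        (setq my-last-run-buffer buf)))

    ;; (my-run-and-capture '(+ 1 2))   ; result lands in *run-1*
    ;; my-last-run-buffer              ; => #<buffer *run-1*>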


> First of all, I don't want another Emacs rewrite, much less in Guile. Mixing languages is not good for power-use, which requires ease-of-use, or at least conceptual simplicity.

This is a huge problem with GNU. Lisp is an extremely elitist language, and extremely divisive. It's not taught that well in GNU documentation either, in my opinion.

The least elitist language today is probably JavaScript. We already had Atom, and then things like VS Code, that make heavy use of JavaScript.

But does using JavaScript as the scripting language imply that the editor must be Electron-based? Surely you can have a thinner JavaScript-based app for desktop? And who cares whether it runs in a browser.


What a strange, whiney article.


After 25 years my dot emacs is _only_ 1000 lines.


I did a lot of custom bindings. Those can really bloat up the config. + Lots of packages with their configs.


so... instead of contributing, why not rewrite the whole thing? ok that would certainly help

/s


They're going to try to ruin emacs like they are with x.org. It's inevitable. Too many people want it gone. It's too good.


Who is “they“?


Big Wayland


What does that mean?


it is a joke, like Big Pharma


It's amazing how everybody knows, yet nothing gets done, and apparently the wrong people keep being elected... It's literally like an electric shock.


Others and I, without digging that deep, have expressed something similar, e.g.:

https://www.reddit.com/r/emacs/comments/so7os8/notdeft_notes...

https://www.reddit.com/r/emacs/comments/speq69/uomf_pathinde...

https://www.reddit.com/r/emacs/comments/zl8nfa/is_there_some...

https://takeonrules.com/2022/02/26/note-taking-with-org-roam...

Long story short: we need something to process text automatically in easy-to-compose and easy-to-query ways. Emacs people mostly hate databases, SOME DB models are strict and hard to keep bending, graph DBs have not succeeded much so far, so it might be an unsolved problem, a VERY OLD one, let's say:

- ~612 BC Ashurbanipal of Nineveh cataloguing system

- ~245 BC Callimachus' Pinakes cataloguing system

- ~1545 libraries of Babel by Conrad Gessner

- 1673-94 Gottfried Wilhelm Leibniz, Scrinium Literatum

- ... to the Mundaneum (Paul Otlet/Henry La Fontaine)

- ... to the web first concept, the search engine concept

...

Natural language is not very computable directly; some modern ML tools try to work on it, some classic algorithms try to do the same, and most results are not very good at small scale; they are only good at very large scale, for very limited results.

But the point of Emacs is something else: it's the LispM concept, which was also the concept of the Xerox PARC Smalltalk workstations. Just as we need ways to act on text with automation, we need OSes that are a single application, where the source is live and changeable, where anything is a function the user can use anywhere, so that if I have a CAS on my desktop I can solve an ODE with it inside an email and link that email in a paper.

Old systems were limited by the tech of their time; Emacs is limited by the size of its community and the burden of legacy it has, but they show a starting point that's effective. The rest is still to be invented, implemented and done. NO OTHER TOOLS so far have proven to be better in general, so...


Agree with all of it, but bashing "representations" and APIs seems strange and like asking for trouble.


But it is such a perfect place to start, my love


this article is hilarious because the vim csv plugin 'just works.'


Turned out the author didn’t know that csv-mode exists.


Yep. I will have to add it to the article (and why it's insufficient as well).


Just make sure that you’ve checked the documentation beforehand, so that you don’t miss anything obvious again.


So does the Emacs one.


TLDR?


Treating a document like a 2D text buffer is the root of all evil. Treating a document like a string is even worse. Emacs does both; so does everybody else. The big idea is mapping the doc to a tree of nested structures whose schemas define constraints, which in turn define the editing and viewing semantics of the parts of the tree you're looking at. Also, Lisp should have won the 70s and we're not done relitigating that. Also, he has a 5-year plan to prove all this.

Still deciding if I buy it. He only wants 1000 bucks a month to work on it full time; I think exploring the area is worth that, at least.


I understand the argument for it, but I can't make heads or tails of the implementation article discussing constraints. How do constraints help us at all? For context, I wrote a graph-based CSP-solver and I'm still completely lost.

I hit page-down 40 times and wasn't even halfway through. I feel like billing $1000 to skim the proposal.


Seems like the argument is in the "Rune" part, even further down. I think it boils down to having specialized editors which you embed, and to a common interface that you may define over them. The point on ambiguity localization is kind of curious too.

I don't think there's an argument to be had for all structures, just that you can do it for each custom structure, and that's the point.

Maybe the editors of old tried to bite off too much when they attempted embedded structures, and they didn't have the right abstractions in place. If you look at https://tylr.fun, it shows things that weren't being done back then; there are interesting approaches now.


> I think it boils down to having specialized editors which you embed, and to a common interface that you may define over that.

Emacs modes, in other words?

Like how, in Emacs, C-n nearly always does "move to next line" but it could mean "highlight next mail message" and "Enter" nearly always does something with the current line but it could mean "open currently highlighted message" or "newline and indent as per language-specific rules" or "send this line to a subprocess" or whatever.


Modes don't do embedding. Try having a fully separate mode for a comment section in your code. Try to do that contextually, too.

Modes don't do that. Neither do they let you control the underlying structure; they give you no ability to treat structures as objects. Yes, they do provide a common interface, but that's where the pros end. Of course, it's Emacs, and there are projects like Multi-Major-Modes that at least try to subdivide a document into editable areas... speaking of ridiculously slow.


Hey, I just wrote a general reply on structure editing here in this thread, please do check it out. Also, I have added a notice in the article to skip the Fern section, as it's, indeed, probably best left for last, in case you are interested in the platform as a whole. I should have really done that before.

As for constraints and prototype OO: I think those will simply do great for GUI building, and for flexibility. I think you need such abstractions to be able to deal with customization and complexity of embedded structures. I am basing Fern on the Garnet GUI framework [1], which had ~80 projects and was pretty fun to use judging by what people say.

[1] http://www.cs.cmu.edu/~garnet/


So this is another take on the whole "text as a structure" idea, where there are so many failed projects already? Doesn't Emacs even have some packages around this?


Article written by someone who experienced Emacs as being slow and janky. Yet Emacs is one of the fastest pieces of software I use. Now, I am definitely an Emacs power user: I started using the native-compilation branch as soon as it came out, I don't mind building the latest available version from source when I see something that piques my interest, I've got about 3,000 lines of custom elisp code, and I'll be using other really fast stuff, like burntsushi's amazing ripgrep, directly from Emacs, fzf too, etc.

I don't understand the criticism about Cider: I use it daily.

Emacs 29 with libjansson and native-compilation is plenty fast.

And most of all: apparently every single new version of Emacs not only runs but also compiles faster. And this is coupled with my computers, which only get faster too.

At the moment I'm running Emacs 29 on a AMD 3700X, soon to be replaced with a 7700 or 7700X. That's going to be yet another performance gain.

Most software only gets slower and slower with each release (I was already using IntelliJ IDEA back when it was at version 4 and already bloated; since then it's been downhill perf-wise): Emacs, on the contrary, always keeps getting faster.

And the goodies added to Emacs lately are just insane: native-compilation, LSP support, tree-sitter (it's just been included in Emacs and it should bring yet another round of crazy speedups)...


TLDR - Emacs sucks.



