Man, I'm conflicted. I mean, Zed works pretty damn well. So far my biggest annoyance with Zed though is that it's constantly trying to download language servers and other tools and run them. And sure, that's handy, but 1. I don't really want it, I'd much rather only use distribution-provided tools. 2. It doesn't work at all on NixOS, so it's just wasting its time and our bandwidth constantly downloading and trying to update binaries that will never run.
The thing is, I would just disable it, but you can't, as far as I can tell. There's this somewhat angry issue about it here:
They might have a point, but beyond whether or not they're right about the automatic fetching of binaries from the Internet, not having an option to disable it is just cruel.
I still like Zed a lot and I also have a big appreciation for the design of GPUI, which I think is a very well-designed UI library that puts focus on the important things. So, I hope it goes well.
Oh, thank goodness. Yeah, that's going to be a major quality-of-life improvement for me. I had a feeling it'd eventually make its way into Zed, but when I initially read the issue I was under the impression that there were no plans to add options around this, which I found confusing.
If any zed devs are in this thread: I highly, highly suggest that any auto-download or upload (be it telemetry, plugins being downloaded, or worse, plugins uploading god knows what) be opt-in, or at the very least easy to opt out of.
The eagerness to download stuff without my consent at the moment precludes me from using this e.g. in a job that touches a sensitive proprietary codebase.
This is a non-starter for many larger companies. With supply chain attacks being what they are currently, this would directly prompt Security teams to block this outright.
And with that all my interest is gone and I won’t bother with zed.
I highly recommend Little Snitch or opensnitch to protect oneself from rogue developers. Yes, anybody downloading things or uploading things without my consent is a rogue.
Oh sure, you can create an FHS and have it work, though personally I wouldn't. After all, Zed itself actually does work without an FHS, it's just that any binaries it tries to download will not. Which is actually not a huge problem in my case.
I just tried it on NixOS 24.05. It starts, but nothing happens when I click "Open a project" or Ctrl+O. It's as if it lacks the ability to show a file selection dialog.
This is almost definitely the problem they're facing, although I think that description is a little odd: it's missing the operative word, "portals". It's the XDG desktop portals service that is involved here. What you need to ensure is that you have a desktop portal provider set up that provides org.freedesktop.impl.portal.FileChooser. What's kind of neat about the way xdg-desktop-portal is architected is that you can pick and choose different implementations for different services. This is especially useful outside of desktop environments, where you might need to use e.g. the wlr provider for screenshots and screen capture, but you still want e.g. KDE file dialogs.
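For illustration, recent xdg-desktop-portal releases let you express that mix in a `portals.conf` (this is a sketch from memory; the exact backends depend on what you have installed, and the file path assumes the per-user config location):

```ini
# ~/.config/xdg-desktop-portal/portals.conf (see man portals.conf)
[preferred]
# Fall back to the GTK implementation for any interface not listed below.
default=gtk
# KDE file dialogs, wlroots-based screenshot/screencast.
org.freedesktop.impl.portal.FileChooser=kde
org.freedesktop.impl.portal.Screenshot=wlr
org.freedesktop.impl.portal.ScreenCast=wlr
```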
It's unfortunate that XDG desktop portals (and generally, setting up a complete desktop when using compositors like labwc or Sway) are relatively poorly documented. I have my feelings about the pervasiveness of DBus services everywhere, but overall I like desktop portals.
Apparently yes. I tried installing xdg-desktop-portal-gtk at first, but that didn't work. xdg-desktop-portal-kde did.
But now I get issues that are likely due to problems with downloading language server binaries and running them, as the parent comment indicated. When I open a Rust project it says "Language server rust-analyzer-2024-07-08 (id 1) status update: Failed to load workspaces."
NixOS 24.05 contains an older version of zed, as feature updates are generally not backported to stable NixOS releases. Try running the package from nixos-unstable instead.
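(For a quick test without switching your whole system, something like this should work, assuming the package is named `zed-editor`, as it currently is in nixpkgs:)

```sh
nix run github:NixOS/nixpkgs/nixos-unstable#zed-editor
```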
Well that got rid of the core dump each time I quit, and it fixed the language server issues. So together with the portals configuration it seems to be working as well as it can.
I really wish they would bundle up the basic Language Servers with the download (HTML, CSS, TypeScript) so it at least has parity with VSCode in this regard
Just a suggestion. One of the best features of pure text editors (and incredibly, not all of them implement it) is autosave keeping the "unsaved" state of the file.
For example, if you make some changes in a file (new or not), don't save the changes, and close and reopen the editor, the state of the opened files is kept as if I had never closed the editor. The unsaved files are still unsaved. Newly edited files are still there, unsaved, ready for the user to save manually.
Notepad++ works that way, and it is an amazing feature.
Similarly, I have unlimited persistent per-file undo turned on in Neovim. I can open any file I've edited previously and walk through the full history of how it got there. With Undotree [0], I can even navigate branching paths in development. I don't know how people live without this.
What are your undo settings? I set undofile and undodir, but not sure if it's unlimited.
One issue I have is if nvim is closed and the file is touched by some outside process (say git pull) it clobbers the history. Do you know if there's a fix to that?
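For reference, the usual persistent-undo recipe is only a few options. Here's a minimal Lua sketch (my example, not necessarily the parent's exact config; the directory is arbitrary, and note that Vim's undo history is bounded by 'undolevels', so "unlimited" in practice means a cap you'll never hit):

```lua
-- init.lua: persistent undo (make sure the undodir exists)
vim.opt.undofile = true                                -- write undo history to disk
vim.opt.undodir = vim.fn.stdpath("state") .. "/undo"   -- where the undo files live
vim.opt.undolevels = 100000                            -- effectively unlimited steps
vim.opt.undoreload = 100000                            -- keep history when a buffer is reloaded
```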
Just an fyi, I have shot myself in the foot with Sublime's version of this. I became dependent on using unnamed/unsaved documents for quick notes, then at some interval I would clean up. And because Sublime would remember, I could rest safe that they would be there even if closed and reopened, until I cleaned them up myself. Well, I also got so hooked on Sublime, I set it as my default system text editor. Then (more than once) I would click a downloaded text file or something that would open in another window. Then after browsing or something I would be back in my original Sublime window. I'd close it for the day and, as I was closing other windows, realize there was another Sublime window still open with that document I read earlier... and all my other temp notes were gone! If you are good at grepping you can still find the files cached on your system with a little work, but something to watch out for. Or just get used to saving files somewhere.
I'm trying to follow how this can happen as I use Sublime's cache feature for temporary notes between meetings and want to make sure there isn't some corner case I've just not run into yet. The two related scenarios I can grok from this are:
- Create unsaved or modified versions of saved documents -> close Sublime completely (no prompt, documents go to cache) -> open download.txt -> new window has tabs for the cached documents and a new tab for download.txt
- Create unsaved or modified versions of saved documents -> open download.txt in a new Sublime window (2 windows open now) -> try to close unsaved/modified documents -> get popup warning that changes will not be saved (because it isn't the last window so they won't be saved for the session persistence)
But both of these are safe (i.e. you don't lose anything unless you click the button saying you want to lose something) so there must be another path to failure I'm missing.
There is a possibility that this has been fixed in newer versions or it was just a problem for me in SublimeText3 on Linux. But it happened, more than once. Your second version above is the one that I believed caused me the issue. I still use ST, but have autosave plugins and save everything to cloud storage now just for peace of mind.
Yes, never trust features like these for anything important; we're just not yet in that era of computing where losing user state is treated as a cardinal sin.
Had the same issue.
Though you could use a shortcut to quit the editor instead of closing windows
Note that Sublime Text always prompts for each unsaved file in cases where their content could be lost. We heavily prioritize issues with data loss. That being said I still wouldn't recommend keeping important stuff unsaved, really they should be fully backed up like everything else important.
I did the same thing, with the same limitations, for years, but I've transitioned to using the tiny package `DailyOrganizer`, which can create a note for each day, along with a small custom command to open my note directory in the quickpick (to browse old notes). Having this has meant that I just throw notes down; maybe I forget them, maybe not, but at least they'll be saved properly.
I have a tab in Sublime Text for my todo list, which I created several years ago and never bothered to save. It's a great feature for indecisive procrastinators.
Emacs definitely does this. I have saved many files from power outages. M-x recover-file, but the user has to recover the file right away when he opens it again or else a new auto-save will overwrite the old one. I think that's the case.
Scratch is (I think) intended for executing "this session" elisp code, as the buffer is set to Lisp Interaction mode, not for storing your scratch text.
Other buffers behave differently; maybe scratch isn't useful for a large number of emacs users, but scratch is working as designed.
Only if you close the entire editor. Even in editors with this feature, if you close an individual file it will ask if you want to save changes; click no and the changes are lost.
It's more common than you would expect in IDEs: VS Code, Sublime, Notepad++. Though I would love to see it adopted by other types of software, such as audio and graphics editors, etc.
I have fallen in love with Zed on Mac, so glad to see it will still be an option when I switch back to Linux. My main concern is the collaboration features; just seems like a nonsensical addition. I have zero influence over what editors my teams use, and I work with dozens of different people on collaborative development every year - I'm not going to be persuading anyone to switch, and so that feature is just dead code and security risk. Even if I worked on a small and consistent team, I don't think the value-add justifies the complexity and risk.
For what it's worth, the other major code editors, JetBrains and VS Code, also offer real-time collaboration built in. For JetBrains, it's a paid feature. For VS Code, it's free.
I love the VS Code implementation (haven't reviewed the other two). If I'm pairing with someone remotely, I don't have any issue having them download VS Code. We provide a config in our project repo for VS Code, so it's really quick for people to get set up enough to join the real-time collab session with me. `brew install visual-studio-code` and then `code .` in our repo, plus OAuth with Github to authenticate the collaboration feature.
I think it really is great. Makes pairing much easier, and really speeds up drudgery like refactoring 500 cases where it doesn't quite make sense to do a codemod. It's not quite like the upgrade from Word97.exe to Google Docs, since we have git, but it feels similarly amazing to "just" be talking about some code, click the other user's icon to jump right to their cursor, and help them get unstuck.
I personally bounce between VS Code, Xcode, and nvim+tmux, and I don't have a problem with keeping a "lowest common denominator" editor around for collaboration or pairing. I also keep a regular keyboard at my desk so I don't force people to type on my Glove80/Kinesis Advantage.
My experience with the VS Code implementation has been constant desyncs between the editor state on the different users computers. At least some of them I could reliably reproduce by using language server commands.
The main reason I'm excited for zed is for an editor that built this in from the beginning and has the same feature with fewer bugs.
As someone who hates Microsoft, I just wish that other colleagues wouldn’t force me to use their editor to collaborate. I wish there was more effort to build something editor agnostic.
You could use vnc and allow the remote user control. This is how I used to do IT help for relatives in the early 2000s. It's certainly far less secure, but I'm not sure what you're asking for will ever happen.
I have felt similarly about collab tools. Even if the tools in an editor look cool, someone on the team is gonna get left out because they use a different tool. It feels a bit like the wrong layer for the collab tools to live in.
Absolutely agree, the collaboration features put me off a bit. I think it can very well become a very successful and popular editor without those features. Perhaps they can invest in those features when they have a much bigger market share.
The collab features are actually what got me to try it, and why it’s still installed on my machine.
I have no reason to use Zed over Kakoune or VS Code for working on my own (open-source VS Code, so no Liveshare).
I wanted to work on code with someone a few weeks ago, and we both downloaded Zed and started collaborating very quickly. It was a very smooth experience.
What does Zed use as the UI toolkit? Looking at the code they have a handmade UI toolkit called gpui. Does that map directly to OS/DE specific GUI bindings? I can't find where that's happening
EDIT:
Holy sh*t, they actually have bindings for each OS and built a Rust abstraction on top of that. That's pretty wild
> Holy sh*t, they actually have bindings for each OS and built a Rust abstraction on top of that. That's pretty wild
I grew up developing Windows apps using the native Win32 APIs, and there was nothing particularly daunting about it. Using what the OS provides shouldn't be considered such an outlandish idea, and being scared of it is causing stagnation and waste (looking at you, Electron). The code here is only a couple of thousand lines per platform; surely only a small fraction of the entire code base.
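To give a feel for it, the skeleton of a classic Win32 app really is this small (a from-memory C sketch, error handling omitted): register a window class, create a window, pump messages.

```c
#include <windows.h>

/* Every window receives its events through a callback like this. */
static LRESULT CALLBACK WndProc(HWND h, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(h, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show) {
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = TEXT("MainWindow");
    RegisterClass(&wc);

    HWND wnd = CreateWindow(TEXT("MainWindow"), TEXT("Hello"),
                            WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                            640, 480, NULL, NULL, inst, NULL);
    ShowWindow(wnd, show);

    /* The message pump: fetch events from the OS, hand them to WndProc. */
    MSG m;
    while (GetMessage(&m, NULL, 0, 0) > 0) {
        TranslateMessage(&m);
        DispatchMessage(&m);
    }
    return 0;
}
```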
I'm not sure how much of it is people being (irrationally) scared vs. looking into native APIs and making the call that it just isn't worth it, given how much messing around you have to do to get basic functionality working vs. the web where you can throw a UI together very quickly to validate an idea.
I've looked into native Linux development a few times, for example, and haven't even been sure what's the best toolkit to use. It feels like you'd have to invest quite a lot in a particular toolkit to even see if it can do what you need, coming fresh from web dev like a lot of people who go for Electron obviously are (myself included).
Sure, you can get stuff out faster by using a web view, but it will suck. It's the usual conflict between speed of development vs control. In the 90s we had Visual Basic for that, but if you were serious about quality you would use something lower level.
Linux is kind of a special case, because its ancient ancestry and open philosophy mean it doesn't offer standard UI components at all (and with Wayland, there's not even a standard API for creating windows). You either draw the pixels yourself on the screen or you use some library for it. But if you're targeting primarily KDE you would use Qt, and if you target primarily Gnome you use GTK.
With MacOS and Windows there is a rich set of standard UI controls implemented by the OS, that ensure integration with the OS and a consistent look & feel. When you use a webview you lose all that.
We have a couple of blog posts digging into gpui, but here is one from just after rewriting and shipping gpui2: https://zed.dev/blog/gpui-ownership
We’ve slowly been building out gpui to be super ergonomic and fluid for us to build the kind of UI we need to.
As a designer that just picked up Rust last February it’s been really nice to have something that is so comfortable to work with without compromising our performance goals.
I hope they add UI support for proportional type. I've bounced off the editor every time I've tried it since so many UI elements end up truncated or overly wide in general because of the insistence on fixed-width font.
I wonder if they ever considered using Qt. Not sure what the status of that is for rust projects. Sounds like it does the same as what Zed is doing: mapping user interaction to OS bindings and rendering the UI using the GPU.
However silly it is, I've always hated the aesthetics of VS Code. I know it's themeable but despite that the overall look and feel just isn't right on MacOS or Linux. That side bar drives me crazy.
I find that out-of-the-box Zed is much prettier and feels more native than VS Code. But for a tool that we spend hours using each day, how it looks and makes you feel really matters.
I am enjoying experimenting with Zed. I have kept my extensions and configuration to a minimum which is a refreshing change compared to the cluster that my VSCode installation has become.
The first thing I do on any VS Code fresh install is to switch the sidebar to the right. Pure heresy for many people, I know. But I want my eyes to naturally land on code, not on a file tree.
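(For anyone looking for it, it's a single setting in settings.json, if I remember the key right:)

```jsonc
// settings.json
{ "workbench.sideBar.location": "right" }
```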
It's actually a pretty obvious advantage if you often hit Ctrl+B to hide the sidebar: instead of jarringly shifting the start of every line, you reveal code without moving it.
IntelliJ is much slower than any other editor, including Zed and VS Code: it's much slower to open and navigate, much slower to work with, much slower, it's so slow! But the code completion, refactoring, code navigation, and debugging features, and endless other smart features, are incredible. For me, that extra intelligence and code-awareness boost translates to way faster development overall, even if the IDE itself takes a bit longer to load or work with, or consumes a huge amount of memory. Sometimes the smarts outweigh the raw speed.
If you're familiar with nvim, you slowly realize how bloated and unnecessary the indexing is in intellij. It makes the experience so awful and for what? A file search feature that takes multiple seconds to find a file in root
Indexing, maybe. But there's more: IntelliJ understands your code, and this makes more sense for statically/strongly typed langs. We do a lot of Kotlin and IntelliJ is indispensable.
This is very true. PyCharm is by far better than any other IDE for professional python work. With how dynamic Python is, PyCharm's completion and static analysis is pretty remarkable.
Same. I was just using (heavily customized) vim for many, many years, and was kinda ok, but then I eventually switched to IntelliJ products at work, and while I don't really like the idea of using a product with a subscription, I cannot quite switch to anything else again, and have to install them even on my personal machine. Didn't try Zed though.
And it's quite annoying, because I really don't think that the product I'm actually using is something so complicated that only a for-profit enterprise can implement it. Just the same as in my vim years, I only need a good editor that helps me type less. In interpreted languages (Python, PHP) I don't even use a debugger; debugging via print actually feels totally fine to me. What I need seems pretty simple, and is seemingly included in every modern editor I know: vim keybindings, good "code smell" highlighting, autocomplete & some refactoring automation.

The devil is in the details though. Vim emulation is never perfect, but in IntelliJ it's usable, which doesn't happen often. The static analysis IntelliJ does seems pretty basic to me, yet somehow even that level is usually lacking elsewhere (also, the ability to disable specific suggestions via annotations in IntelliJ products is great, as I feel this functionality is only really useful when I strive for a 100% "green" status, consciously disabling what I'm not going to fix).

But the most annoying thing is auto-refactoring. I mean, it feels like a simple thing; I never tried, but I think I could automate most of the common refactoring patterns I do daily quite easily, given basic syntax-tree operations already implemented. But somehow even IntelliJ is pretty poor on refactorings, and what I've seen in VSCode plugins is even worse. Again, no idea about Zed. I guess I should try it.
What do you do to make Idea slow? On my computer, everything is blazing fast. Idea is very fast: 1-2 seconds to open a project, and then it just works instantly. Same about vscode.
I tried zed for a few weeks because I'm generally sympathetic to the "use a native app" idea vs Electron. I generally liked it and its UX but:
1. VSCode is pretty damn fast to be honest. Very rarely is my slowdown in my work VSCode loading. Maybe I don't open very large files? Probably 5k lines of typescript at most.
2. Integration with the Typescript language server was just not as good as VSCode. I can't pin down exactly what was wrong but the autocompletions in particular felt much worse. I've never worked on a language server or editor so I don't know what's on zed/VSCode and what's on the TS language server.
Eventually all the little inconveniences wore on me and I switched back to VSCode.
I will probably try it again after a few more releases to see if it feels better.
I'm in the same camp, but in the end it turns out we were not putting it to the actual, hard, real-world test.
VSCode is very fast for me, when I open it in the morning and just starting my day.
But once I've opened the main project and 7 supporting libraries' projects, and I'm in a video call on Chrome sharing my screen (which is something that eats CPU for breakfast), and I'm live-debugging a difficult-to-reproduce scenario while changing code on the fly, then the test conditions are really set up where the differences between slow/heavy and fast/lightweight software can be noticed.
Things like slowness in syntax highlighting, or jankiness when opening different files. Not to mention what happened when I wanted to show the step-by-step debugging of the software to my colleagues.
In summary: our modern computers' sheer power is camouflaging poor software performance. The difference between using native and Electron apps is a huge reduction in the upper limit of how many things you can do at the same time on your machine, or having a lower ceiling on how many heavy-load work tasks your system can be doing before it breaks.
> In summary: our modern computers' sheer power is camouflaging poor software performance. The difference between using native and Electron apps is a huge reduction in the upper limit of how many things you can do at the same time on your machine, or having a lower ceiling on how many heavy-load work tasks your system can be doing before it breaks.
The same can be said about a lightweight web page versus "React" with tons of routers, all in an SPA with a vdom. Maybe the page is fine when it is the only page open, but when other SPAs are also open, then even typing becomes sluggish. Please don't use modern computers' sheer power to camouflage poor software performance. Always make sure the code uses as few resources as possible.
That brings a Python "performance" talk to mind that I was recently listening to on YouTube. The first point the presenter brought up was that he thinks the laptops of developers need to be more modern for Python to not be so slow. I had to stop the video right there, because this attitude isn't going anywhere.
You know what? I actually believe in having developers work (or maybe just test) with slower computers (when writing native apps) or with crippled networking (when doing web) in order to force them to consider the real-world cases of not being in a comfy office with top-notch computers and ultra-high-bandwidth connections for testing.
I agree with this approach. I used to always have hardware that was no more than 2 years old and mid-to-high spec. When I helped troubleshoot my family's and extended family's devices and internet connections, I saw how normal people suffered on slow systems and networks. I've since operated on older devices and don't have gig internet at home. Every web and app designer should have to build or test with constraints.
I think dev containers can help here. You have a laptop that can run your editor, and a browser. The actual build is done on a remote machine so that we're not kneecapping you by subjecting you to compiling kotlin on a mid range machine, but your laptop still needs to be able to run the site.
Heheh no. I'm in my 30s. My opinion comes from experience. I like to travel a lot, and have been several times on trips that brought me to places where the norm is a subpar connection. Taking 30 seconds to load the simplest bloatware-infested blog that doesn't even display text without JavaScript enabled, teaches you a thing or two about being judicious with technology choices.
This is giving me flashbacks to editors of yore: EMACS, Eight MB And Continually Swapping. I remember reading almost exactly the same comments on Usenet in the 80s and 90s.
It’s also 2024 and you still can’t share JavaScript objects between threads. Do not underestimate the horror that is tracing garbage collection with multiple mutator threads. (Here[1] is Guile maintainer Andy Wingo singing praises to the new, simpler way to do it... in 2023, referring to a research paper from 2008 that he came across a year before that post.)
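To make the "no shared objects" point concrete, here's a small Node sketch (my example, not from the post; run as ESM, e.g. via ts-node, or compile with tsc first): plain objects get structured-cloned across the thread boundary, and only SharedArrayBuffer memory is actually shared.

```ts
import { Worker, isMainThread, workerData } from "node:worker_threads";

if (isMainThread) {
  const obj = { n: 0 };                       // an ordinary JS object
  const shared = new SharedArrayBuffer(4);    // actually shareable memory
  const w = new Worker(new URL(import.meta.url), { workerData: { obj, shared } });
  w.on("exit", () => {
    console.log(obj.n);                       // 0: the worker only mutated a clone
    console.log(new Int32Array(shared)[0]);   // 1: the shared buffer did change
  });
} else {
  workerData.obj.n = 1;                       // invisible to the main thread
  new Int32Array(workerData.shared)[0] = 1;   // visible to the main thread
}
```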
That’s not entirely surprising. Emacs’s UI is a character-cell matrix with some toolkit-provided fluff around it; VSCode’s is an arbitrary piece of graphics. One of these is harder than the other. (Not harder by as much as VSCode is slower, but still a hell of a lot.)
Getting the same number of engineers as today, or possibly fewer, who actually care and know about performance, could work. There's a reason applications are so much slower, relatively, than they were in the 80s. It's crazy.
Anyone that believes this can prove it by taking down an existing popular product with a better engineered and better performing competitor built for the same cost.
I was using computers in the 80s. They did a very small fraction of what we ask them to do now and they didn't do it fast.
I have had to open the parent folder of all the different code bases I need in a single VSCode window, instead of having an individual window for each.
I much prefer having individual windows for each code base, but the 32GB of RAM in my laptop is not enough to do that.
If I were to run multiple instances of VSCode, then the moment I need to share my screen or run specs some of them will start crashing due to OOM.
I don't notice much of a problem from multiple windows. I sometimes have a dozen going.
It's the language extensions in the windows that can cause me problems e.g. rust-analyzer is currently using more than 10GB! If windows are just for reading code and I'm feeling some memory pressure then I kill the language server / disable the extension for that window.
I have more problems with JetBrains. 64GB isn't enough for a dev machine to work on tens of MBs of code any more...
Like the sibling, I have no problem with keeping multiple windows open and I only have 16GB RAM (MacBook Pro). It must be language extensions or something like that.
It's a Prisoner's Dilemma. Since apps are evaluated in an isolated fashion, there is an incentive to use all the resources available to appear as performant as possible. There is further incentive to be as feature-rich as possible to appeal to the biggest audience reachable.
That this is detrimental to the overall outcome is not mere misfortune; it's the structure of the game.
There's no extra apparent performance in using Electron. A truly more performant solution will still be more performant under load from other applications.
The extra performance is on the side of the developers of the app. They can use a technology they already know (the web stack) instead of learning a new one (e.g Rust) or hiring somebody that knows it.
> In summary: our modern computers' sheer power is camouflaging poor software performance
I somewhat disagree. Features sell the product, not performance [1], and for most of the history of software development you could count on the rising CPU tide to lift all poorly performing apps. But now the tide has turned to drought, and optimizing makes a hell of a lot of sense.
[1] It's more of a negative sell, and relative to other feature-parity products. No one left Adobe Photoshop for Paint, no matter how much faster Paint was. But you might if feature parity is closer, e.g. Affinity vs Photoshop.
Yes, but more in a QoL way. I say negative as in: if you don't have it you lose a customer, rather than if you have it, you gain a customer.
If performance is a feature, then it's not an important feature. Otherwise, people would use Paint for everything.
Or to put it another way: you want to do task X1, say, editing a picture to remove some blemishes from skin. You could use a console to edit individual pixels, but it would take months or years to finish the task if you are making changes blindly and then checking. It could take several days with Paint. Or you could do it with Photoshop in a few minutes. What difference do a few ms make if you otherwise lose hours?
Now this is only task X1, editing blemishes; do this for every conceivable task and take the average. What percentage of that time do the millisecond losses amount to?
> if you don't have it you lose a customer, rather than if you have it, you gain a customer
I completely agree with that take. That's exactly the reason why, for example, whenever I'm about to do some "Real Work" with my computer (read: heavyweight stuff), all Electron apps are the first to go away.
My work uses Slack for communications, and it is fine sitting there for the most part, but I close it when doing some demanding tasks because it takes an unreasonable amount of resources for what it is, a glorified chat client.
Well, I think you are missing a subtle issue. They may not switch, but they might pay more if it's faster. They also might not switch to Paint, but if Photoshop performed terribly they might switch to a dozen different tools for different purposes. This kind of thing already happens.
Yeah, all I need to do to reliably show the drastic performance difference is open 5 different windows with 5 different versions of our monorepo. I frequently need to do that when e.g. reviewing different branches and, say, running some of the test suites or whatever — work where I want to leave the computer doing something in that branch, while I myself switch to reviewing or implementing some other feature.
When I start VS Code, it often re-opens all the windows, and it is slow as hell right away (on Linux 14900K + fast SSD + 64GB RAM, or on macOS on a Mac Studio M2 Ultra with 64GB RAM).
I'll save a file and it will be like jank...jank... File Save participants running with a progress bar. (Which, tbh, is better than just being that slow without showing any indication of what it is doing, but still.)
I've tried to work with it using one window at a time, but in practice I found it is better for my needs to just quit and relaunch it a few times per day.
I try Zed (and Sublime, and lapce, and any other purportedly performant IDE or beefed-up editor that I read about on this website or similar) like every couple months.
But VS Code has a very, very large lead in features, especially if you are working with TypeScript.
The remote development features are extremely good; you can be working from one workstation doing all the actual work on remote Linux containers — builds and local servers, git, filesystem, shell. That also means you can sit down at some other machine and pick up right where you left off.
The TypeScript completion and project-wide checking is indeed way slower than we want it to be, but it's also a lot better than any other editor I've seen (in terms of picking up the right completions, jumping to definition, suggesting automatic imports, and flagging errors). It works in monorepos containing many different projects, without explicit config.
And then there's the extensions. I don't use many (because I suspect they make it even slower). But the few I do use I wouldn't want to be without (e.g. Deno, Astro, dprint). Whatever your sweet set is, the odds are they'll have a VS Code extension, but maybe not one for the less popular editors.
So there is this huge gravity pulling me back to VS Code. It is slow. It is, in fact, hella fucking slow. Like 100x slower than you want, at many basic day-to-day things.
But for me so far just buying the absolute fastest machine I can is still the pragmatic thing to do. I want Zed to succeed, I want lapce to succeed, I want to use a faster editor and still do all these same things — but not only have I failed so far to find a replacement that does all the stuff I need to have done, it also seems to me that VS Code's pace of development is pretty amazing, and it is advancing at a faster clip than any of these others.
So while it may be gated in some fundamental way on the performance problem, because of its app architecture, on balance the gap between VS Code and its competitors seems to be widening, not shrinking.
Vscode is very snappy for me on less powerful machine Ryzen 3900 (Ubuntu, X-windows). I have a good experience running multiple instances, big workspaces and 70+ actively used extensions and even more that I selectively enable when I want them. It's only the MS C# support that behaves poorly for me (intentional sabotage?!).
I wonder if you have some problem on your machine/setup? I'd investigate it; try some benchmarking. It's open source, so don't be afraid to look under the hood to see what's happening.
> I'll save a file and it will be like jank...jank... File Save participants running with a progress bar.
I don't see that at all. Saving is instant/transparent to me.
There is so much possible configuration that could cause an issue. E.g. if you have "check on save" from an extension, then you enter "js jank land", where plugins take plugins that take plugins, all configured in files with dozens of options and weird rules that change format every 6 months; e.g. your linter might take plugins from your formatter, your test framework, your UI test framework, your hot reload framework, your bundler, your transpile targets...
If saving is really slow then I would suspect something like an extension is wandering around node_modules. Probing file access when you see jank might reveal that.
I have that kind of fast, smooth experience with VS Code, too, but only when I open my small hobby monorepo, or when I don't leave it open all day. When I open a big work monorepo (250k files, maybe 10GB in size, or 200MB when you exclude all the node_modules and cache dirs), the slowness isn't instant, but it becomes slow after "a while" — an hour, or two.
I do actually regularly benchmark it and test with no/minimal extensions, because I share responsibility for tooling for my team, but the fact that it takes an hour or two to repro makes that sort of thing too cumbersome to do. (We don't mandate using any specific editor, either, but most of my team uses VS Code so I am always trying to help solve pain points if I can.)
And it's not just the file saves that become slow — it's anything, or seemingly so. Like building the auto-import suggestions, or jumping to a definition by ⌘-clicking a symbol. Right after launch, it's snappy. After 2-3 hours and a couple hundred files having been opened, it's click, wait, wait... jump.
Eventually, even typing will lag or stutter. Quitting and restarting it brings it back to snappy-ish for a while.
It is true that maybe we have some configuration that I don't change, so even with no or minimal extensions there might be something about our setup that triggers the problems. Like, we have a few settings defined at the monorepo root. But very few.
But before you think aha! the formatter! know that I have tried every formatter under the sun over the past 5 years. (Because Prettier gave my team a lot of problems. Although we now use it again.)
We have a huge spelling dictionary. I regularly disable the spelling extension though, but what if there was an edge case bug where having more than 1000 entries in your "cSpell.words" caused a memory leak on every settings lookup, even when the extension wasn't running? I mean... it's software, anything is possible.
But I suspect it is the built-in support for TypeScript itself, and that yeah, as you work with a very large number of files it has to build out a model of the connections between apps and libs and that just causes everything to slow down.
But then, like I mentioned, nothing else I've seen quite has the depth of TypeScript support. Or the core set of killer features (to us), which is mainly the remote/SSH stuff for offloading the actual dev env to some beefy machine down the hall (or across the globe).
To us, these things are worth just having to restart the app every few hours. It's kinda annoying, sure, but the feature set is truly fantastic.
> Eventually, even typing will lag or stutter. Quitting and restarting it brings it back to snappy-ish for a while.
Hmm. I've not experienced that. Something is leaking which can be identified/fixed. There are quick things you could do to narrow it down e.g. restart extension host or the language server or kill background node processes etc.
I generally have it running for weeks... although I do have to use "reload window" for my biggest/main workspace fairly often because rust-analyzer debugging gets screwed up and it's the quickest fix from a keyboard shortcut. I may be not seeing your issue for other reasons :)
FWIW I can recommend "reload window" because it only applies to the instance you have a problem with and restores more state than quit/restart e.g. your terminal windows and their content so it's not intrusive to your flow.
> but the fact that it takes an hour or two to repro makes that sort of thing too cumbersome to do
Yeah, I know what you mean. I now schedule time for "sharpening my tools" each day and making a deliberate effort to fix issues / create PRs for pain-points. I used to live with problems way too long because "I didn't have time". It's not a wall-clock productivity win.... but the intangibles about enjoying the tools more, less pain, feeling in control and learning from other projects are making me happy.
It's too bad VSCode doesn't "hydrate" features on an as-needed basis or on demand. Imagine it opens by default with just text editing and syntax highlighting, and you can opt in to all the bells and whistles as you have the need with a keystroke or click.
I think people just have very different tolerances for latency and slowness.
I keep trying different editors (including VS Code), and I always end up going back to Neovim because everything else just feels sluggish, to the point where it annoys me so much I'm willing to put up with all the configuration burden of Neovim because of it.
I tried out Zed and it actually feels fast enough for me to consider switching.
Sublime Text 3 is still one of my favorite editors. I use VSCode lately because of its excellent "Remote SSH" integration - but when it comes to latency sublime has it beat.
Zed does not feel fast on my machine, which is a 13900K/128gb ram. It is running in xwayland though, so that could be part of the problem. It feels identical to vscode.
I was always a fan of Sublime Text and I moved away from it once because VSC felt more "hassle-free". The extensions just worked, I didn't need to go through endless JSON files to configure things, I even uncluttered its interface but at the end of the day I returned to good old Sublime Text. Now with LSPs it requires way less tinkering with plugins. I only wish it had just a little bit more UI customizability for plugins to use (different panes etc). Maybe with Sublime Text 5 if that ever comes.
Also about the speed: VSC is fast but in comparison... Sublime Text is just insta-fast.
I have used Sublime Text my entire pro programming career. Before that I used emacs for a while.
I love it and will not switch it for anything. It is maybe one of the best pieces of software ever made. A lot of things such as multiple cursors, the command palette, etc. were first popularized by ST.
Today, I use it to write Rust, Go, web stuff and with LSP I get all the autocomplete I need. I also use Kitty as a separate terminal (never liked the terminal in editor thing).
Things like Cmd-R and Cmd-Shift-R to show symbols in file and symbols in project work better, faster and more reliably than many LSP symbol completions.
ST4 is my go-to for quickly viewing and editing individual files. It really is instant compared to VSC.
I don't really run ST with any complex plugins though and leave cases where I want those for VSC. The ones I have installed right now are just extra syntax highlighting and Filter Lines (which I find very handy for progressively filtering down logs)
I still use ST for opening huge files. 9 times out of 10 if a huge file cannot be opened in any other editor, I will open it in subl and it will be just fine.
I paid for Sublime, but moved to VSCode because at least at the time it had better hassle free support for more languages. Including linters, auto formatting and just generally convenient stuff.
I'm not sure where it stands now. My guess is that Sublime has caught up for mainstream languages, but the support for languages that are a bit more niche, like Clojure or Zig, is nowhere near as good.
I miss the speed and editing experience of Sublime though.
I was the same as you, but in the end I returned to Sublime. Nowadays with the LSP plugin you don't need much: just LSP + an extension to support your language, and that's about it.
They changed the license to 3 years of updates instead of lifetime though, so it's a bit of a bummer, but at the same time I get it.
Sublime's focused/minimalist UI is nice. VS Code sometimes feels like it tries to do too much.
My ideal editor would probably be something like a variation on Sublime Text that's modeled more closely after TextMate while keeping the bits that make Sublime better (like the command palette).
Sublime is the better Textmate. What would you do to subl to make it more like mate? I used textmate for years and years before switching to ST and it was a drop-in replacement.
Not that this was necessarily better in terms of capabilities, but TextMate had a very pleasing Unix-style extension model where there was no mandated language and extension commands used scripts/executables written in any language. There was even a nice graphical editor for fine-tuning exactly what input they would be given and how their output would be acted upon.
TextMate was very much "Mac OS X UI sensibilities combined with Unix power", whereas ST pretty much has its own self-contained philosophy that's then brought to Mac/Windows/Linux in a slick way.
The two are pretty close, but between the two TextMate feels more like a golden era OS X desktop app thanks to several small differences and tiny Mac-isms, and I'd like Sublime to have that feel too.
I also feel TextMate had the nicer overall UX. When I first tried Sublime, TextMate had the better text rendering (IMO). Sublime has more features but still doesn’t feel as slick somehow.
I’ve recently returned to Sublime from VSC. I prefer VSC’s UI for following links to definitions/references, but in most other ways I prefer Sublime’s nimbleness.
I'm begrudgingly stuck with VSCode because of language support in the smaller-community languages I work with, but any time it starts being a dog (and it doesn't take much, think a 20MiB test data file) I switch back for that purpose.
I'm also never letting it anywhere near a merge again, after the worst merge in my years of using git. Sublime Merge doesn't give me the same warm feelings as Sublime Text, but it works, and it won't choke on a big patch and apply a huge deletion without showing it to me first.
If you run xeyes and the eyes follow your cursor when it's above the application you want to test, it's running under xwayland. If they don't follow your cursor, the application is running under native Wayland.
Welp, looks like it is running native wayland yet the cursors are blurry. The only time I have ever experienced that is when an app is running under xwayland.
I use Helix and feel the same way. The pickers/fuzzy finder particularly have no equal for speed in any editor I’ve found. (Zed seems pretty fast but I didn’t get on well enough with it to find out how it performs with more serious use.)
fwiw I’ve also found the configuration overhead much lower with Helix than for pretty much any other editor I’ve seriously used.
This makes me want to use Helix, because while I love the idea of a terminal editor, I'm not the kind of person to whittle away a day screwing around with my config files.
It's the main reason I switched from Neovim. I didn't want to maintain a thousand lines of Lua just to have a good baseline editor. I only wanted to maintain my configuration idiosyncrasies on top of an editor with good defaults. I think there are Neovim distributions that accomplish mostly the same thing, but then I fell in love with Helix's Kakoune-inspired differences.
Helix has been stalled for a few months, and there are issues that make it frustrating to use at times. For example, :Ex and friends have been relegated to the plugin system (the root cause of the stall, it hasn't been merged). I still prefer it to the config overhead of nvim (as well as the kakoune-style movements!), but the paper cuts have hit a threshold and I've started writing my own text editor (I'd probably use Zed, were it not for lack of kakoune movement support): https://youtu.be/Nzaba0bCMdo?si=00k0D6ZfOUF8OLME
Stalled how? There was a release a couple of months ago. There's another on the way. There are regular changes merged in. There have been foundational changes (events) made to enable new features. The plugins are being worked on, and whilst the speed may not be for you, that doesn't mean it's stalled?
The Helix community is the worst part about Helix. Especially the not so benevolent dictator of the project. Way too many comments like “if you don’t like how it’s done go use a different editor” instead of listening to feedback. That’s fine if they don’t care about adoption (they publicly say they don’t), but an actively hostile community doesn’t give me confidence in the editor, despite it being quite nice.
Author here. I listen to feedback, but it's hard to incorporate every possible requested feature without the codebase becoming an unmaintainable mess.
We're a small team with limited time and I've always emphasized that helix is just one version of a tool and it's perfectly fine if there's a better alternative for some users. Someone with a fully customized neovim setup is probably going to have a better time just using their existing setup rather than getting helix to work the same way.
Code editors in particular are very subjective and helix started as a project to solve my workflow. But users don't always respond well to having feature requests rejected because they don't align with our goals. Plugins should eventually help fit those needs.
I like this response. Kudos to sticking to your vision; it's easy to be swayed by users into building a kitchen-sink-fridge-toilet. If you build for everyone, you build for no one.
The community is welcoming, and will help solve issues. However, it’s true (and good IMHO) that the project seems to have a strong idea of what is and is not a core feature. They prioritise building what you might call the Helix editing model and the Helix vision for what an editor should be.
Importantly, Helix isn’t (or doesn’t appear to be) trying to become something approaching an OS, or to be a faster, easier to configure way to get an editor that works like [your preferred configuration of] vim or emacs with lower input latency.
I applaud these things! I like the Helix model more than the vim or emacs models, and the project’s priorities for what should and shouldn’t be in an editor core are pretty well aligned with my own. I do not find I’m desperate for plugins to fix some major deficiency, though I’m sure I’ll use a few once they become available.
This is all what I want to see and fits my definition of a good “benevolent dictator”, maintaining focus and taking tough decisions.
I do maintain a reasonable set of extra keybindings and small configuration changes, as well as a very slightly modified theme [0], but I don’t think many of them are essential and I try pretty hard not to conflict with Helix defaults or radically diverge from the Helix editing model.
It works for me right now, and keeps getting better (rather quickly if you install from git as I do). I’m excited for the future, especially seeing some of the features and improvements moving through PRs.
I've found attitudes like this to be the worst parts of the community.
Maybe it's quite nice because of how they've approached building it? I've been actively watching Helix for quite a while now, and I've observed how hostile some of those who approach the project are.
From what I've seen, they do listen to feedback. Perhaps similar to the person who said it had stalled, people take not saying yes as not listening to feedback?
Yeah, I think people turning up with an attitude of entitlement or a presumption that something should be a priority for the project summons at least resistance, if not hostility. I’ve never seen anything from the project that I’d call hostile, if anything, I’ve seen patience.
For that reason, I’m glad adoption is a non-goal [0] as it allows for the explicit exclusion of popular demand and copying other “successful” projects as criteria for decision making.
[0] I wish many more projects and companies would follow suit! Something well crafted to be loved by a small, committed, and sophisticated user base/audience is, almost without fail, so much more valuable and special than something designed for mass appeal (or evolves towards it once someone smells a juicy exit). Sadly, that’s not often where the incentives lie.
> I think people just have very different tolerances for latency and slowness.
I wonder if it's because of a form of "touch typing". I'm not really looking at text appearing as I type. My fingers work off an internal buffer while my mind is planning the next problem. If not so deep in thought to almost be blind, I am reading other docs / code as I type. I am not an ultra fast typist but if I mistype, I can feel it and don't need the visual feedback to know it. I might be this way because I am old and have used tools with lag you measure in seconds.
I only care about latency if it interrupts me and I have to wait and that's typically not typing but heavier operations. I am utterly intolerant to animations. I don't want less I want zero, instant action. I don't want janky ass "smooth scrolling" I want crisp instant scrolling. I have no idea why animations are even popular.
Some of the text-editor latency discussion reminds me of high screen refresh rates for office work. When people "check the refresh rate" they have to do that violent wiggling of a window to actually have large content moving fast enough to see a difference. You have to look for it to then get upset about it.
The worst case would be if it's more of an illusion, like fancy wines - a fiction driven by context. Lie to someone that an editor is an electron app and they will complain about the latency. Software judgement also has toxic fashion and tribal aspects. Something unfashionable will accrue unjustified complaints and something cool or "on your team" will be defended from them. I'm reminded of Apple fans making all sorts of claims about rendering, unaware that they were using Apple laptops that shipped not running at their native resolution and visibly blurry. Your lying eyes can't beat what the heart wants to believe.
> people just have very different tolerances for latency and slowness
I've honestly never considered this and it's genius. I have always been surprised when people recommend kitty as a "fast" terminal when it takes 200ms (python interpreter) to start up, which is unbearable to me.
But yeah, people sometimes just open a couple and see speed in other areas that I don't care about.
I would actually say that this is more of a system/OS issue, to a point. Why doesn't my OS keep such often-used programs in memory, simply opening a new window when clicked, like mobile OSs do? Just because desktop hardware can get away with a lot more. I believe that making programs go to a background mode, pausing their threads, would make everything so much smoother, with zero, or even beneficial, effect on memory/battery consumption.
It’s not genius. It’s just very appealing to those on the side of wanting something faster, because - like all topics like this - everyone is always looking for subtle ways to signal themselves as somehow patrician. “Oh, well, some people just want more ownership of their computer, that’s why I use Linux :)”, is similarly thought-terminating. The conversation shouldn’t end there.
Interesting. That tells me there's something wrong with my neovim config. When I open a file for the first time, it takes some time before it shows the contents of the file. It's not even a big config, but maybe I'm using a plugin that slows things down or something.
> I don't know what's on zed/VSCode and what's on the TS language server.
Microsoft's latest embrace-extend-extinguish strategy is keeping just enough special sauce in (frequently closed-source) vscode extensions and out of the language servers. They do the same thing with Pyright/Pylance.
Bandwagoners are keen to class everything Microsoft does to be competitive as EEE. This is just…them building a product. Throwing their weight around, building something really good, releasing it for free, something that only a handful of other companies could do? Hell yeah! It’s shady. But it’s not EEE.
TS itself is lock-in. I mean, the entire point of JS is that it's portable, and there's certainly no lack of compile-to-JS languages that are already finished and have much more powerful type systems and existing libs/ecosystems.
Enjoy your VScode projects exclusively on Windows a couple years down the road, or rather, contribute to MS's coding ML models to make yourself obsolete even before then. Windows already phones home with everything it has gathered on you the second it connects to the net, and I'd expect vscode to do the same.
But the foot soldiers in our profession manage to get it wrong, every single time.
Erm, you do know that a founding principle of TS is that the "compile" step is literally just stripping out the type annotations? You could implement it with a regex if you really wanted to.
The only place this rule is broken is TS enums, and that's generally considered to have been a mistake, but one that's too old to rectify.
Historical accident, I think. JS has no enum concept, but early on the TS devs believed that enums were an important feature. So TS produces some small JS fragments for every enum to mimic enum behaviour. It's not exactly a lot of code, or deeply woven into the final output, but it is code that doesn't exist in the input.
Later I think people realised that enums aren’t that important, and certainly not important enough to break the golden rule. But alas it was too late. Maybe JS will get an enum concept, and then TS can drop its hack. But until then, it’s the one spot where the TS “compiler” produces output code that doesn’t exist in its input.
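To make that concrete, this is roughly what happens with a plain numeric enum (input on top; the emitted JS, which has no counterpart in the source, shown in comments):

```ts
enum Direction {
  Up,   // 0
  Down, // 1
}

// tsc emits (roughly) an IIFE that builds a two-way name<->value map:
//
// var Direction;
// (function (Direction) {
//     Direction[Direction["Up"] = 0] = "Up";
//     Direction[Direction["Down"] = 1] = "Down";
// })(Direction || (Direction = {}));
```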
Yeah, bun for example can execute TypeScript files directly. It does not include tsc or anything; it just strips out the type annotations and executes the remaining file as valid JS.
Could you perhaps consider a worldview that doesn’t place you as being better than everyone else that doesn’t share your preferences? I bet you don’t think that LLMs are going to replace you, rather you’re suspending disbelief to paint the most bleak picture of the future you can come up with, and, again, maximise the blame you place on everyone that isn’t as GOOD as you!
I agree with you -- but aiming for 1ms performance is pretty hard. That is 1/1000th of a second. Your keyboard probably has higher latency than that. Physics cannot be defeated in this regard.
Expanding on this, there's a detailed analysis of the various contributors to editor latency (from keyboard electronics to scanout) by one of the JetBrains devs at [1]. They show an average keypress-to-USB latency for a typical keyboard of 14ms!
Yes, but it takes longer than that for the signal to reach the USB port. And I doubt many of us are typing at 1000 keystrokes/second; that's 60,000 characters a minute, or around 12,000 words/minute assuming an average word length of 5 characters.
I thought so too, but for a while I had 2 144Hz monitors on my Mac Pro[1] and very much noticed it in the UI, window dragging was smoother, browser scrolling too, absolutely noticeable.
[1] Then Apple released the Pro Display and Big Sur, and people wondered "how does the math work for a 6K display and bandwidth?" The answer: they completely fucking broke DP 1.4. Hundreds of complaints, different monitors, different GPUs, all broken by Big Sur to this day, just so Apple could make their 6K display work.
My screens could do 4K HDR10 @ 144 Hz. After Big Sur? SDR @ 95 Hz, HDR @ 60 Hz. Ironically I got better results telling my monitors to only advertise DP 1.2 support; then it was SDR @ 120 Hz, HDR @ 95 Hz.
Studiously ignored by Apple because they broke the standard to eke out more bandwidth.
Properly leveraged GUI editors have the potential to use the extra refresh rate for smoother animations/smooth scrolling, though that's pretty far away from Emacs territory.
I do not notice any difference between my 120Hz work MacBook Pro and my 60Hz home MacBook Air. I might notice if I did a side-by-side comparison and looked closely. But why would I?
Honestly I don't think that the problem with VSCode is speed, even. It's bloat. It uses gobs of RAM just to open up a few text files. I compared it to Sublime Text a while back and it was something like 500 MB (for Sublime) to 1-1.5 GB (VSCode). That's not acceptable in my view.
If you type and wait for the letter, I could see that being annoying. My brain works more in waves, my hands type a block and it's there on the screen. I've never once thought of character latency, but maybe that's my HPB roots.
> Integration with the Typescript language server was just not as good as VSCode. I can't pin down exactly what was wrong but the autocompletions in particular felt much worse. I've never worked on a language server or editor so I don't know what's on zed/VSCode and what's on the TS language server.
VSCode cheats a little in this area. It has its own autocomplete engine that can be guided by extension config, which it mixes seamlessly into the autocomplete coming back from the LSP. The net result is better autocomplete in all languages, which can't be easily replicated in other editors, because the VSCode augmentations can often be better than what an LSP produces.
Mostly by being more flexible in its inputs and outputs than an LSP. An LSP is generally trying to perform deep static analysis on your code to provide suggestions. The upside is extremely accurate suggestions, with a pretty much zero false-positive rate (i.e. it never suggests anything uncompilable); the downside is that they tend to be much pickier about their inputs.
If the code is currently in an unparsable state, and a valid AST can't be produced, then the LSP is forced to work with whatever parsed version of the code it was last able to build a valid AST for, making the autocomplete results incomplete.
VSCode on the other hand is basically performing tokenisation and fuzzy search on those tokens. It doesn't really care about the validity of the code. That means more false-positive suggestions (i.e. suggesting stuff that can't compile), but very robust handling of uncompilable code. That, plus prioritising LSP suggestions over fuzzy suggestions, results in VSCode providing a very nice graceful fallback for LSP failures, one that people probably use more often than they expect.
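A minimal sketch of what such a token-based fallback could look like (my own illustration, not VSCode's actual word-based engine):

```ts
// Split the buffer into identifier-like tokens -- no parsing, so this
// works even while the code is syntactically invalid.
function tokenize(buffer: string): Set<string> {
  return new Set(buffer.match(/[A-Za-z_$][A-Za-z0-9_$]*/g) ?? []);
}

// Simple subsequence ("fuzzy") match of the typed prefix.
function fuzzyMatch(prefix: string, candidate: string): boolean {
  let i = 0;
  for (const ch of candidate.toLowerCase()) {
    if (i < prefix.length && ch === prefix[i].toLowerCase()) i++;
  }
  return i === prefix.length;
}

// False positives are possible (nothing guarantees a suggestion
// compiles), but it degrades gracefully when the LSP can't help.
function suggest(buffer: string, prefix: string): string[] {
  return [...tokenize(buffer)].filter((t) => fuzzyMatch(prefix, t));
}
```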
A few weeks ago I had this giant json text blob to debug. I tried Gedit first, and it just fell over completely. Tried vim next, and it was for some reason extremely slow too, which surprised me.
VSCode loaded it nearly immediately and didn't hang when moving around the file. I have my complaints about VSCode, but speed definitely isn't one of them.
Not to my knowledge, outside of whatever Debian comes with. Keep in mind this was on a Chromebook - so it would have been running in a VM on a rather memory restricted system. That said, VSCode would have been running in the same parameters.
Just found the file. 42MB on a single line. Takes 5 seconds to open in vim, and about 3 seconds for the right arrow to move the cursor one char over. Nothing like gedit, but slower than I expected.
I'm pretty sure this is syntax highlighting. It's known to be slow for large files in Vim because it runs synchronously. Try starting Vim with syntax highlighting off:
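Something like `vim -c 'syntax off' big.json` should do it; `vim -u NONE big.json` goes further and skips your vimrc entirely (both standard Vim flags, for anyone who wants to reproduce this).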
This makes sense. I recently learned that VSCode is clever enough to automatically disable some features (which I guess includes syntax highlighting, among other things) when it detects that the file is too big according to some heuristics (probably the length of the longest line, or maybe just the total size of the file).
So IMO vim is being "too dumb" here and should be able to adapt like VSCode does. But meanwhile, if you want to test under equal conditions, you can disable VSCode's optimization by disabling this setting:
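If I remember right, the setting is `editor.largeFileOptimizations`; turning it off in settings.json makes VSCode treat big files like any other:

```json
{
  "editor.largeFileOptimizations": false
}
```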
This makes a world of difference when your editor is configured to wrap lines, or clip, or w/e.
You probably happened to have VSCode configured to do something that mitigates the problems of having an extremely long single line, while Vim was not configured to do that.
In case you don't want to investigate the problem, but want to make a more "fair" comparison: use a language that you are comfortable with to format the file with linebreaks and indentation and then load it in different editors.
> You probably happened to have VSCode configured to do something that mitigates the problems of having an extremely long single line, while Vim was not configured to do that.
For mainstream users. Particularly in the case of vim, the end user is more likely to figure out that this is a configuration problem and can adjust it.
Sure, just tried it. This is time to open, show the initial contents, then exit.
nvim is much faster to cursor around, except when you hit the opening or closing of a json block it hangs a bit, so I'm guessing it has some kind of json plugin built in.
I did some research and it seems that this particular slowness is due to the file being a single line: if syntax highlighting is enabled, vim/neovim reads the line completely to highlight it correctly.
VSCode reads only the visible content and does not load everything for that line. It tokenizes the first 20k chars of the line at maximum, defined by the "editor.maxTokenizationLineLength" setting.
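For anyone who wants to experiment, raising that limit in settings.json should opt you back into full-line tokenization (and presumably the same slowness vim shows); the default is 20000:

```json
{
  "editor.maxTokenizationLineLength": 10000000
}
```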
Weird, I had the exact opposite experience. I had a large Markdown file I was editing and VSCode would simply hang or crash when opening it. Neovim on the other hand actually was able to navigate and edit just fine.
I work with giant jsons every day and always have to fall back to nvim as vscode is terrible. Vscode even has a default size limit where it disables editor features for json files larger than a few megabytes.
Nvim works flawlessly tho even with syntax highlight and folding.
I don't use it as my main editor (I'm far too used to the Jetbrains editors to make the switch, they're just too smart), but it's the best one for CLI apps that use EDITOR, like git. It boots up basically instantly even when it hasn't been launched in a while and I can make my commit messages and immediately close stuff up at the speed of my thought.
In the morning vscode is OK; come noon, it's the primary thing eating my battery and it's getting slower and slower; by day's end it's unusable. Sure, restart it, I know, but that's fairly terrible.
Zed looked pretty cool but the amount of extensions VSCode has makes it difficult to justify a switch. I do think that the SQL extensions for VSCode are pretty terrible, so maybe that's something where Zed can capitalize.
Interestingly the biggest issues we're having with VSCode have nothing to do with the IDE itself and are instead related to the TypeScript language server. There are so many bugs that require the TypeScript language server to be restarted, and there's little the VSCode team can do about that. Made a new file? Restart. Rename a file? Restart. Delete a directory? Restart. Refactor a couple of classes? Might need a restart.
We're also having some serious language server slowdowns because of some packages we're using. And there's not much Zed can do here for us either. It's really unfortunate because the convenience of having a full-stack TypeScript application is brought down by all of these inconveniences. Makes me miss Go's language server.
Yeah, this was mostly my experience. The Zed editor was fast, but it just felt like it wasn't as good as other editors. For me, the version control integration was particularly poor - it shows some line information, but diffing, blame, committing, staging hunks, reviewing staged changes etc are all missing.
There were a bunch of decisions that felt strange, although I can imagine getting used to them eventually. For example, ctrl-click (or jump to usages) on a definition that is used in multiple places opens up a new tab with a list of the results. In most other editors I've used, it instead opens a popover menu where I can quickly select which result I want to jump to. Opening those results in a new tab (and having that tab remain open after navigating to a result) feels like it clutters up my tabs with no benefit over a simple popover.
Like you, I'll probably try again in a few releases' time, but right now the editor has so much friction that I'm not sure I actually save any time from the speed side of things.
Have to agree on the VCS story. I’d switched over to using Zed more or less permanently, but I eventually moved back because I kept having to open Intellij to resolve conflicts.
A lot of IDEs these days offer a three-way merge interface that massively improves the conflict resolution process. Different tools have different interfaces, but generally you have three panes visible: one showing the diff original->A, one showing the diff original->B, and a third showing the current state of the merged file, without conflicts. You can typically add chunks from either of the two diffs, or freely edit your resolution based on a combination of the different options.
I find resolving conflicts through this sort of system tends to be a lot more intuitive than trying to mess around with conflict markers - it also helps with protecting against mistakes like forgetting conflicts or wanting to undo changes. If you're not used to it, I really recommend finding a good three-way merge plugin for your editor/IDE of choice.
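Even without a dedicated merge UI, you can get the most important part of that third perspective from plain git: `git config merge.conflictStyle diff3` makes the conflict markers include the common-ancestor version between the two sides, which on its own makes many conflicts far easier to reason about.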
You should give Theia IDE [1] a try. It's plugin-compatible with VSCode, same user experience. It's slower to start and takes more memory, but on my 3-year-old Intel Mac it is definitely snappier than VSCode.
> 2. Integration with the Typescript language server was just not as good as VSCode. I can't pin down exactly what was wrong but the autocompletions in particular felt much worse. I've never worked on a language server or editor so I don't know what's on zed/VSCode and what's on the TS language server.
I had a similar experience with JavaScript, where it kept showing me errors (usually for ESM imports) even though things were fine. In VSCode, things worked without fuss. I've been testing out JetBrains Fleet [1] as well, and its language support is far superior compared to Zed's.
Hah, similar here. I keep trying it out after seeing posts here and there, but I can't seem to switch from VSCode.
For nearly anything I do it is fast enough, it starts in less than 2 seconds, and the main thing I like about VSCode is ability to switch projects with fuzzy autocomplete. That means I can jump between repos also in a few seconds, which is a huge lifesaver given I switch things frequently.
Yeah, I agree about VSCode being sort of fast enough. Computers are getting faster and I'm on an M-series Mac, which makes web rendering much faster, but still, as far as Electron apps go: VSCode is basically the golden child.
I have a monorepo. There's a lot in it. And a lot of files. TypeScript. Go. Python. I have a lower-end MacBook Air. Not having any issues with VS Code.
Yeah my experience has been that you aren't going to suffer performance problems with VSCode unless you have an incredibly large codebase. Past a certain point I'm sure Vim/NeoVim/Zed are probably much more performant, but the differences in smaller codebases is barely noticeable IME.
My only problem with VSCode is that it's owned by Microsoft. I'm willing to put up with some extra friction if it allows me to escape their ecosystem even a little bit.
My general rule is if I can get at most of what I need from the open source version of something, I use it. Even if it's less user friendly.
You are able to do so, but is it allowed by the website's terms of service? It may say that you are granted the license to extensions only with Microsoft builds of vscode.
Microsoft isn't a stranger to distribution restrictions and software usage limitations. I remember uploading Visual C# Express 2010 (freely downloaded from Microsoft's website, without license keys) to a local file sharing website to ease the downloading for my local study group and got a letter from Microsoft's lawyer to take it down.
After that our study group transitioned to Mono with Monodevelop.
An actual example is that the Python LSP extension on the offical marketplace has some "DRM" that makes it pop up a fatal "You can't use this extension except with the real VSCode" error message. People have been playing whack-a-mole with it by editing the obfuscated JS to remove that check, or by using an older version from before they added the check. https://github.com/VSCodium/vscodium/discussions/1641
Sorry, I should have been more specific and said FOSS. VSCode is still encumbered by the weight of a mega corp. It's like saying Chrome is open source. Sure it is, but it still exists to serve the corporation that owns it.
There is some sort of vendor lock-in in VSCode. It at least used to be extremely difficult to make GitHub Copilot work with Codium. There is something closed-source in VSCode that makes the difference.
It was so difficult to maintain that I ended up switching to VSCode. So the "lock-in" worked.
It sounds like you are trying to define freedom as Stallman would. Based on that, here are his “4 freedoms”…
0. The freedom to run the program as you wish, for any purpose.
1. The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.
2. The freedom to redistribute copies so you can help your neighbor.
3. The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
Which of the above does MIT not provide? Honestly, which one?
What you seem to be looking for is to take away the ability for somebody who writes NEW code to be able to choose a license for it. You want to take away their freedom?
And why exactly? What “user freedom” does this serve?
Well, it ensures that users will get access to FUTURE code that developers write.
I think it is a stretch to suggest that a developer writing new code makes existing users less free. Forcing a license for the new code certainly does make the developer less free though.
If “having the freedom to take away freedom does not make a society more free” then the only morally acceptable choice is to stop using the GPL. Is that what you were trying to say?
I mean, look at the case of Spotify's Car Thing. They sell you a hardware product, and then they can discontinue it at the snap of a finger. Users are out money with little to no recourse. Luckily Spotify is refunding customers, but only if they ask for it, and that isn't always the case when hardware is discontinued. Without free (as in freedom) software, customers become enslaved to capitalism, where they have to buy the newest hardware because their OEM only supports hardware for a certain amount of time. With free software, I can take the software from the vendor and keep the product updated for much longer. But because people want to use MIT, BSD-2/3-Clause, Apache-2.0, et al., consumers cannot reap the full benefits of what Free and Open Source Software truly means.
It uses indentured neural networks to write code for you. You're a neural network! You just have rights because you ain't digital (and way larger and possibly using quantum effects). Smh
You mean except for all of the good plugins. Or the ability to use a custom plugin store. Last I read, the open builds struggled with removing all of the MS telemetry and some may still be leaking.
Completely agree. Furthermore, you could always just not pipe it to sh, read it first if you care so much. Releasing and maintaining packages across a range of distros is extremely hard and time consuming, and they just released the linux version.
I don't see how maintaining a 150-line script is more convenient and less of a hassle than having a pipeline building a flatpak, an RPM, a deb, and a plain tarball with binaries.
In 2024, everyone looking for a code editor knows how to extract a tar.gz right?
> In 2024, everyone looking for a code editor knows how to extract a tar.gz right?
I'll raise my hand and say I still get the `tar` terminal command options confused and have to pause and figure out the file format I'm dealing with and the options. So, no, I usually don't know, and have to look it up in the manpage/help. "Was it -xvfz for this one? Shit I just did this recently..."
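(For what it's worth: modern GNU tar auto-detects the compression, so `tar -xf file.tar.gz` is enough - `x` for extract, `f` for file - no need to remember `-z` vs `-j` vs `-J`.)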
It's time-consuming only if the author is interested in good UX. If the author wants to use their users as alpha testers, then they can spend a minimal amount of time on packaging.
Given that it's open source, it's not the authors' problem to package it. You can package it for your distro, or wait for someone to do it.
It will be better because you presumably use it. Chances are that the authors don't use the same distro as you do, so they are not in a good position to make a package for you.
It's the other way around. Any method of installation is insecure by default. Moreover, hackers are sometimes able to penetrate even multi-layered security defence systems (for a short period of time). What makes this zero-security system secure?
My argument is that an install method that is just piping a curl command to your shell is _no less secure_ than any other typical application install procedure, and the user experience is pretty decent.
I don't think we should be generating "loud warnings" about so called "insecure install methods" nor should we fault the Zed authors for not solving software security.
The point is that when you use a distro, you trust that distro and its maintainers. If you use the package they build for you, then you rely on this trust.
Now if you use a random script from the internet, then you don't give your distro maintainers a chance to actually review the package and instead you blindly trust this script. Arguably you increase your attack surface.
Also, a system package manager verifies the packages (there are signatures and stuff), whereas piping a curl'd script to a shell doesn't do that at all. So if the server is compromised, you just execute random code. It's harder to compromise the system package manager.
Which is not the same thing as a signature on the package, is it?
> Distro maintainers in general do not audit the code they package.
First, it depends on the distro. Second, they certainly do at least some kind of due diligence before packaging a new project. So there is some amount of selection (which you don't find in npm, cargo or pypi).
Yes, one zero-security installation method cannot be less secure than another zero-security installation method. Both are insecure.
However, when source code and compilation instructions are available, an independent maintainer can verify the source manually, compile it in isolation, test it in isolation, make patches, add SELinux rules, build a package, then sign the package, to produce a secure package which can be safely consumed by end users.
Because you don't know how the script is going to try to install the program. A double-click installer on Windows has a standard approach that results in the program being placed in C:\Program Files, the files being tracked, and an uninstaller being placed in a centralized location. On Linux, any random "installer script" could spew files all over your /usr or anywhere else, with no way to clean them up. This could even break your OS.
The Linux equivalent to double-click installer is ... a double-click installer, Flatpak. Or for even more bonus points, make the app fully portable as an AppImage. In the rare case I can't find what I'm looking for in my distribution repos, I look for an AppImage.
macOS for example checks the crypto signatures of downloaded apps, so it’s much better than randomly executing code from the internet.
I think even Windows does this nowadays.
I’m not asking to support all distros. But at least one between flatpak and snap is enough to support pretty much all distros out there in a clean manner, not with curl | sh
I always see this comment and understand its reasoning, but people who check what they are installing are the same people who can download and check a shell script.
In this case it's 150 rows with spaces and comments and the first one is
But Linux [1] has absolutely zero security measures here, and this script has basically free rein over your computer to send off your .ssh folder or your browser cache, to install a permanent keylogger, etc.
True, but where's the difference between downloading a binary and executing it vs. downloading a script and executing that which will then download a binary and execute it?
In both cases, you trust the publisher and in both cases the publisher gets equal access to your machine.
Oh - you mean you're downloading the source code, then audit it, then compile it and only then you run it?
That's super great. That has saved you from the xz backdoor and all other supply chain attacks and will be of great help to you in the future. Let's hope no backdoor ever slips past your code review.
> where's the difference between downloading a binary and executing it vs. downloading a script and executing
The difference is that the attack vector of the shell script is an easier target.
If someone were to be malicious, they could manipulate the script and inject some sort of payload in disguise. It's an easier vector to attack than, say, a compiled package, and one that's less prone to detection: a tampered script could go for days undetected.
With an executable you can compare the checksum, and a whole compiled package is less prone to tampering and trickier to alter.
Unless that script is under monitoring 24/7, I'm going for the binary. But they don't support BSD anyway.
If I were to serve a targeted exploit like this, I would certainly hide it in the binary and have the binary determine whether it's running in the targeted environment and then run the payload.
It's much, much easier to hide a malicious payload in a binary than an easily auditable shell-script. And it's much easier to make a decision of whether the payload should be enabled or not if you are already running on the local machine.
If you don't trust a publisher, you really can't run anything of theirs. Shell script or, especially, binary.
Well, the server can actually check whether the script is being downloaded from a browser or from the shell (via the user-agent), so unless you download it first and then run the downloaded copy, what gets executed might still be spoofed. Also, the script can itself download other scripts.
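A sketch of how little effort that trick takes (hypothetical server, purely to illustrate the point):

```ts
// Hypothetical Node server: the bytes served to curl (what gets piped
// straight into sh) need not match the bytes a reviewer sees in their
// browser.
import { createServer } from "node:http";

createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  if (ua.startsWith("curl/") || ua.startsWith("Wget/")) {
    // Served only when the request looks like `curl ... | sh`.
    res.end("#!/bin/sh\n# ...anything the attacker wants executed...\n");
  } else {
    // The innocuous version people audit by opening the URL.
    res.end("#!/bin/sh\necho 'Installing...'\n");
  }
}).listen(8080);
```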
See, I wouldn't. I would go for the script to either inject the payload to the package or inject to the host.
Even if it's auditable, how many people actually verify the shell script beforehand?
You've just been given a command to download and execute.
And having lots of users downloading a shell script is a quicker attack path than users downloading the package. There are custom repos holding their own distro packages for the software.
Obviously most distributions provide package managers that should be used for unified automated update mechanisms and gpg signing. Superior to curl | sh in every way.
It's not uncommon that the curl | sh method actually, among other things, detects what distro you're running and adds the repos before installing via the package manager, so in the end it depends on what the script actually does. Atuin does it well, for example: https://docs.atuin.sh/guide/installation/ -- and offers other options (as you should).
We're actually not going to be doing that for much longer. Lots of users kept querying how it was installed, where, how to remove it, etc.
The response of "it depends, we probably used your system package manager" was not often well received. Users who know how to use their package manager tended to just do that anyway, and not use the script.
I don't really understand the decision to completely stop doing it. If the script has logic to do A,B,C in different cases, why not just implement an --uninstall flag that does the opposite of A,B,C? Then users don't need to know or care what "type" of installation was done.
Of the three distros I know in any detail, Debian, Arch and Red Hat, none make it easy to install and keep updated a third-party package through the built-in package manager.
In all cases, signatures and repositories need to be configured, often requiring both root access and usage of the CLI and in all cases much harder than running an installer script (which might be doing exactly these steps).
Achieving easy installation through distro package managers means including the application in the distro itself, but then it's beholden to the distro's software update policies and thus stuck on that specific version for years or even decades.
That is not what a v0.something of an end-user centric desktop application wants for themselves.
There's flatpak, which is cross-distro, sandboxed, and is installed by default on most distros. It uses xdg-desktop-portals to request access to files through a desktop-provided file picker.
Sadly, code editors aren't really suitable for flatpaks, since they usually require access to dependencies installed on the host. This can be worked around by using dev containers, or the IDE has to be developed with sandboxing in mind (like GNOME Builder).
Do you know the difference between alpha, beta, and quality software? Linux distros have different goals, or different channels for different qualities of software, while vendors want their users to be free alpha or beta testers.
I'm never using this editor unless it can install itself and work completely offline, without going for downloads and making web requests. It is crucial, especially after the totally-not-related xz fiasco and the White House praise for Rust.
This might seem funny until you read Ken Thompson's "trusting trust" paper and realize that bootstrapping Rust is such an overwhelming task that someone implemented a Rust compiler in C++ for this purpose: https://github.com/dtolnay/bootstrap
I mean, who knows what kind of malware is transparently being injected in all Rust programs out there.
If you want a fast, low-memory-footprint editor with no spurious network connectivity and a conventional desktop UI, check out Geany: https://geany.org/
`unshare --user --net zed ~/file-to-edit.txt` seems to work fine. it just shows an "auto update failed" warning in the bottom, but seems otherwise functional. does that work for you?
Some modern compiled languages such as Zig and Go can be officially bootstrapped from a C toolchain. And a C toolchain can be bootstrapped with Guix using only a 357-byte blob. This gives some good confidence that you can bootstrap a malware free toolchain using auditable source artifacts.
Rust, however, does not have an official way to be bootstrapped from a C compiler, which means developers must use a previous version of the compiler to build a new version. In this situation, you can never be sure malware was not injected into a previous version of the compiler (see the Ken Thompson paper for an example). There's no way to know, because you are using an unauditable blob to create another blob.
The mrustc solution is not good because there are essentially 2 implementations of the same compiler that have to be kept in sync. It would be much better if Rust used a solution like Zig's: https://ziglang.org/news/goodbye-cpp/
I installed zed a couple of days ago, tried it for a Java project. It was soooo bare-bone that it vanished from the drive shortly after.
Maybe I'm doing something wrong. I got the java/maven plugins but there is no XML highlighting. Java does have highlighting, but that's it... Oh, and I installed it this time and noticed "downloading json-language-server"... (it was there before, probably, but I didn't notice)... like, WTF - it didn't even ask if I wanted to... utterly rubbish experience.
For a simple text editor I prefer BBedit on mac, which is native and blazing fast. And for something slightly more complex I usually end up with `code <file>` to quickly edit it...
That explains a lot though... I have a bit of a qualm with it:
1) the "language support" list both languages as well as runtimes (immediate facepalm)
2) Java is somewhat popular I'd say... not supporting it but having support for things like racket is even weirder...
Now see, I'm the opposite. I would like to pay a reasonable fee to drive a silver-and-oaken stake through the heart of the collab features. I will pay real money to just make it all go away. As others have said, I work in an environment with lots of different tools so collab stuff like this is just visual noise, let me turn it all off.
i.e. very similar to how other editors approach it
e.g. vscode uses network features to make people use the non-fully-open-source version instead of Codium (and it's otherwise subsidized by MS to reach the part of the editor/programmer market Visual Studio can't reach, but IMHO if it weren't, that is also how they would bring in the money)
Cool to see a new editor in the arena with a lot of resources behind it, but I'm trying to find the selling point besides "it's really quick".
Great feature but there's a lot more stuff I need for a truly outstanding editor, what are the novel pieces?
The bar is ridiculously high for editors (vim & emacs configurability, vscode just works, JetBrains can do it all) - what will/does it bring to the table to compete?
I've been looking (for years!) for an editor with syntax highlighting which can open single files as fast as Notepad++, but on Linux. I have to say I'm really happy with Zed.
I also use it to open folders with source code and markdown documents without having to boot up an intellij editor
I can see the appeal, as the demo looks really smooth; then again, I'm a terribly slow developer, so personally I find saving a few ms here and there irrelevant to my daily workflow.
That is actually really cool - I always felt (and surely am not the only one) that vim has great keybindings but is only an okay editor. If someone addresses this, that'd be incredible.
I was watching thorsten and the primeagen's chat yesterday https://www.youtube.com/watch?v=8XweSqTYdMQ and thorsten was describing a few challenges with translating vim's functionality into zed.
Part of it being that Zed doesn't have an intermediate layer between keyboard input and keybindings, so by the time the vim layer is hit, the input has already been translated to a keybinding - that limitation kind of put me off.
I downloaded ZED for a quick play-around, but was quite shocked to find out that editing and saving a file runs an auto-formatter on it _by default_...
Whoever thought that was a great idea obviously has never worked with version control, with other people on a project? Sorry, but this is such an obviously wrong default setting, I'm surprised nobody pointed this out before?
Enforcing auto-formatting is a common practice in my experience.
Currently working on a project where the repo will refuse commits that are not following the repo-specific formatting settings.
No, it isn't. And anyway, I downloaded a generic text editor which has no idea of what autoformatting settings are applicable to my repos (maybe it differs per repo?), yet is trying to autoformat anyway? For example, it decided to replace ' characters in a YAML file with ". WTF?
The _default_ setting should be to save as-is.
Evidently this is all very new to you; you're sounding slightly histrionic.
The Zed complaint is purely about it being auto-enabled. For each language there is usually a standard and at least one tool. Most people want formatting and can't stand codebases where sometimes it's a single quote, sometimes a double quote.
Or there's a "save without format" command, which I used once (when working on a pywal template for a Zed theme, which is not valid JSON but Zed really wanted to format it)
But this is the _default_ setting. If you want autoformat on save, that's perfectly fine. Just do not make it default. I can't think of any other editor that does this.
If I have to change one character, but the autoformatter reformats the whole file instead... that is a problem. My actual change will be lost in the formatting changes. And who says that I want to reformat anyway?
EDIT: I usually work on projects with a long history. File endings, tab/spaces, etc. are usually all over the place, and we haven't touched actual code yet. I usually have no authority and time to fix formatting issues, especially in "miscellaneous" files like yaml. And the PRs in most places I'd worked at are rejected if they contain something other than what is relevant to the topic of the PR. And then there is the issue of the hidden change, when you reformat a 1000 line long file, and also make an actual change - this will be very easy to overlook.
And finally, I might be using another tool for 99% of the editing (I use IDEA), yet sometimes I just want to edit a file quickly, outside this tool. So I do have an autoformatting setup in IDEA, should that mean that I can't use another editor for quick changes?
Looks like they're developing their own Apache-licensed GUI framework for this, called GPUI. I think of text handling as one of the trickier parts of building such a framework, so one specifically made to support a text editor would seem to be a pretty good foundation for a general purpose GUI toolkit. I wonder if they (or someone else) will pursue it as an alternative to Qt.
Many UI libraries being built today want to be very forward-focused, so they aim to be as general as possible. This makes some sense, especially considering that, for better or worse, using a web browser engine as a UI has become an increasingly popular decision. However, in the end this leads to almost all new "greenfield" UI projects trying to develop scalable vector UI rendering engines that need advanced and highly optimized vector rendering libraries like Skia and Pathfinder. Having everything in vector all the way through is elegant, but it's complicated.
The insight with GPUI is that it's not really necessary to be that general, the vast majority of UIs are made up of a relatively small number of different primitives that you can build on to basically do anything. So instead the vast majority of what's going on in GPUI is layers of roundrects. Text rendering is the classic approach of rendering into glyph atlases. I think this is a vastly more sustainable model for a UI library.
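If I'm reading the approach right, the scene bottoms out in something conceptually like this sketch (my own illustration, not GPUI's actual types):

```ts
// Roundrects cover most chrome: panels, buttons, cursors, selections.
type Quad = {
  x: number; y: number; width: number; height: number;
  cornerRadius: number;
  color: [r: number, g: number, b: number, a: number];
};

// Text is textured quads sampling from a pre-rasterized glyph atlas.
type Glyph = {
  x: number; y: number; width: number; height: number;
  atlasX: number; atlasY: number; // where the glyph lives in the atlas
};

// A frame is just flat lists of primitives, which maps very directly
// onto a couple of instanced GPU draw calls.
interface Scene {
  quads: Quad[];
  glyphs: Glyph[];
}
```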
I don't know if GPUI is ready to be used on its own, but it does have a spiffy if brief website.
Given that Zed actually has good "UI-feel", it tells me they are focused on the right things. A lot of new greenfield UI frameworks are spending a ton of time on trying to build extremely generic vector graphics systems but the actual widgets feel bad and are missing all kinds of tweaks and nuance. Here's a good litmus test for text editors: what happens if you double click and drag? In most good UI frameworks, this should result in word selection and then expanding that selection left or right. In a lot of smaller greenfield UI libraries, something vastly less useful will happen :(
Lots of the app’s UI right now is a layer of components on top of gpui (check out the ui crate!) that are pretty Zed-specific at the moment.
Some of these things will likely be made more general and have dedicated gpui elements built for them (button, input…)
I think not rushing to cover everything right out of the gate is giving us the time to feel out APIs that feel good to write and work well for us. Hopefully in the near future that translates to a UI library that is awesome for the whole Rust community to use.
Thanks for the links. The approach described in that blog post seems like it could actually achieve crisp, native-looking text. What a welcome improvement that would be compared to the blurry, misshapen, overlapping, or poorly laid out results I've seen from other new GUI frameworks.
Their toolkit is developed in their monorepo and is not on crates.io nor versioned, so they can make breaking changes at any time. Seems risky to use in 3rd-party projects.
Today, sure. That doesn't preclude it from maturing into something more generally useful, nor from eventually getting its own repo. I've built more than a few libraries that started out as functions and data structures within application code.
I have an old Intel MacBook Pro (2015), which slowly transitioned from my work laptop to a personal-use laptop. I'm using VSCode there and it works fine. I mean, I've never faced any slowdowns because of VSCode.
I had a small project coming up and decided to try out Zed. As it's a native app I thought it would perform better than VSCode. But to my surprise it was not the case. The performance was actually worse.
And as for the TS integration, the overall experience is worse than on VSCode. The autocompletion works in a weird way, no way to just look at available methods, I have to start typing. It's just frustrating. I even decided to give another go to Sublime Text and it felt much better than Zed.
So Zed didn't work for me, but I'm sure it will work for somebody else.
I've kept my neovim config, vscode, and zed configs in parity for a while now. To the point that the keybinds and behaviors are the same (or as similar as they can be) across all three. In my personal experience zed is eating into the time I use vscode, but not really touching neovim as much. It really has come a long way, and I'm excited I'll be able to use it on my Linux machine without having to jump through hoops.
It comes down to using the vim extension and making use of the context it adds when setting key binds. Both in settings and keybind json files you set commands for certain vim modes, or bind native VSCode commands to your leader. Zed does almost the same but with no defined leader key so you just have to be more specific about the command and the context they are executed in.
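For the VSCode side, assuming the vscodevim extension, the mode-aware bindings look roughly like this in settings.json (illustrative; check the extension's docs for the exact schema):

```json
{
  "vim.leader": "<space>",
  "vim.normalModeKeyBindingsNonRecursive": [
    {
      "before": ["<leader>", "p"],
      "commands": ["workbench.action.quickOpen"]
    }
  ]
}
```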
For me the "killer feature" is a graphical editor (like VSCode or the Jet Brains editors) but with performance more like vim. I'm also very much enjoying the modal editing, which VSCode lacks.
Wait, Zed is a modal editor? All I've seen is that it has a vim mode, which most editors have and which I generally find insufficient.
Granted, these days I prefer Kakoune-style modal editing (I use Helix, currently), so I'm not sure I could move back to vim-style anyway. Nonetheless, if Zed has real, first-class support I'd be interested... but a second-class compat layer is not sufficient in my view.
When I specified the modal editing I was referring to how the workspace search in Zed brings up each result in an editable "window" allowing me to make edits across my whole project from 1 tab. VSCode's workspace search feels much more limited in comparison.
I'm not seeing it in the docs, maybe I should write up a little something on my editing experience!
Also, to correct myself: I think I mistakenly said `modal` when I should have said `buffer` earlier.
So searching across the project brings up your results in multiple buffers, each about 5 lines (expandable to more) and you can do all of your normal editing within each/all of the buffers.
If I happen to write something up, I'll try and remember to share it in this thread.
It does, though I found learning and setting it up to be more complicated. My preferred editor is one that's very simple to setup and use (e.g. Sublime, VSCode, Zed, nano). Emacs is cool, and maybe someday I'll get around to using it but so far it hasn't met my needs.
Fair enough, I have personally spent a decent chunk of time configuring my Emacs setup (though it has mostly stabilized at this point). You may be interested in checking out Doom Emacs[0] if you want to take a stab at it in the future. It sounds like it would be an out of the box experience closer to what you would want.
- fast enough to compete with neovim. Idk if it’s my previous interest in display engineering, but I substantially notice the speed
- vim bindings…. Satisfactory. I don’t struggle to navigate at all, feels pretty native to me. I can split panes every which way till Sunday
- collaboration mode is pretty great
- Ability to have your current pane magnified
- Ability to set your terminal font size to a different font size than your editor (been looking for this for years in a terminal emulator)
- Super clean and crisp UI. TBH it was too much UI when I first tried it, and I stopped using it almost immediately. But I gave it a second try and got used to it. It's still a lot more than vim, but hey
- Outline mode (pretty sweet)
- Multi-file buffers (makes editing text across multiple files stupidly easy)
- Cracked team. Awesome people, super transparent, just some sick engineers doing sick engineering
I haven't used Zed in the last year, but Zed's search across codebase display was divine. I don't want to necessarily open the file when looking at search results to see additional context in the matching sections. Zed brings up a view with all the results where you can expand context, and IIRC even edit in the results panel without having to open the entire file.
It's also collaboration-first, and unlike VS Code, I believe the software behind the collaboration mode is open source
Have you had much success with VS Code's multiplayer extensions? I've found them buggy to the point of uselessness, but maybe things have improved. Zed, on the other hand, is developed by people who understand pair programming, which is my priority.
No not much experience there since multiplayer editing has never really been a part of my personal workflow (mostly a lot of screensharing), but I can definitely see that being useful for people that use it regularly.
Not the OP but I tried hard, looking for an easy pair programming solution. Worked decently a couple of times and inexplicably failed most of the time.
This is why I'm excited to try Zed. I regularly "pair" via Pop, but keybindings and lag make it hard to switch seats, so we basically decide at the beginning of the session who is going to hog the keyboard, and that's a crippling dynamic.
I use it as my secondary editor (after Sublime) but could easily see myself switching in the not too distant future. It's incredibly fast, possibly even more so than Sublime, and really well designed. While the UI design of an editor is possibly not that important to a lot of people, I find it really matters to me for unknown brain reasons, I get anxious if I ever have to use VS Code as it has zero attention to design details.
I'm really pleased for the Zed team on reaching this milestone. I think the only thing holding me back from it being my daily driver is the built-in Pyright (which I hate) and lack of Ruff support.
Tried it with mangohud and scrolled up and down a 100-line c++ file with no lsp enabled. 30fps. Absolutely not ready yet. Not sure I'm willing to leave Emacs, but gpui looks cool and I hope someone makes a fast Emacs client with it some day.
That seems a bit rude. You get the QA you paid for - zero.
And nevertheless, whenever Windows software doesn't work in Wine, you shouldn't think "Wow, how did you fuck that up?". They never promised it'd work in WSL.
It's a company, not volunteers. They obviously have some long-term strategy to extract money beyond support (it's an editor). They are doing a lot of marketing right now (dev-rel).
It's very much okay to have high expectation, even if the product costs zero. The user is the product, and so on.
Code that panics on bad external input (such as the OS) is incredibly sloppy. They already have the Result — they can just bubble it up and present an actual error message (and maybe even ask for diagnostics, etc).
WSL is a pretty niche version of "Linux". I would guess that close to 0% of what makes it to the front page of HN had a QA team that explicitly tested it on WSL.
It's pretty self-evident that Linux support can't be expected to mean Windows support. If something is broken in the Windows simulation of a Linux GUI stack you should be complaining to Microsoft, not to the developers of a program that works fine in a normal environment.
Maybe they didn't do any on Windows, because this is for Linux, not Windows. WSL is still not Linux. They do appear to have Windows build instructions, though[0]?
I've not heard of QA in open-source projects... unless it's something peddled by a big corp (e.g. Chrome, Go, VSCode, etc.)
You are lucky if there's some automated unit testing, but that's as far as these things go. Programmers don't like, don't know and don't want to know how to QA. Also, they generally look at QA with contempt... so, unless forced to, they won't do it.
Is there anything stopping anyone from making it a flatpak and maintaining it? I'm personally not surprised that they're reluctant to take on more maintenance responsibility than necessary.
Yeah, right on! We Linux users love dicking around getting software to work on our multi-variant systems. Why maintain a universal package when you can sit and read through issues from nerds trying to get your software to work on insert trendy distro here
Definitely looks pretty rough so far (running Debian GNOME) -- font rendering looks wonky, and resizing the window is slow and unresponsive. But I'm very optimistic for what's to come!
I like using zed when I'm on the MacBook. It's quite fast, looks good and has some neat features like multi file editing.
But I don't get the utility of all the collaboration features. It's noise to me, and feels like they could have invested that energy in other areas.
I work in a small fully remote team, and our tool of choice for collaboration is git. Why would I want to edit the same file while someone else is editing it too? Who will commit it? If I want to discuss a part of the code with someone screen sharing works perfectly. There's no need to bring in simultaneous editing.
It's such a technically hard feature to develop but just doesn't seem to have any utility for me.
I kind of feel the same about collaboration features, I never use them in any editor, just git and video calls/screen sharing. Ironically though the collaboration features are their monetization plan with the base editor as FOSS, so hopefully for them we’re in the minority on that opinion…
Zed is nice and all, but I simply cannot trust a VC backed editor of all things. Eventually, enshittification will occur and I really don't want that to happen to one of my core daily programs.
VSCode is run by a megacorp that does not need to squeeze money out of it to make revenue, whereas that is what Zed must do, as it is their only product.
IMO for a graphical program that's fine, but in general I really hate hard requirements for a GPU which I've seen in the wild multiple times. Just simulate the darn thing in software, I don't care if it takes 10x longer, I have all the time in the world.
yes. it's working fine for me on 9 y/o integrated intel graphics.
but it's kind of still a weird statement to make. i thought it was generally the OS's job to supply the vulkan layer, and that mesa -- which just about every linux OS will be using -- provides pretty robust software implementations of those things as fallback. what would cause them to require a "physical" anything?
The last collaborative editor that I could use locally successfully was gobby. Currently its development is very slow or seems abandoned. I've been waiting for Zed because it was introduced as something that was "multiplayer-first" from the beginning. Reading the docs now, it looks like I need a feature called "channels" that I couldn't confirm can be used fully locally. Is there a way to use Zed as a collaborative editor fully locally?
Keeping things simple yet powerful is the key to finding their place in the market, IMO. Don't know about the rendering speed (I never had issues with other editors), but that's a bonus anyway.
There's something interesting with the light mode / default theme I got after downloading and opening on Apple silicon:
Sidebar contrast is too low, yet spot on for the wrong contrast-ratio target (3.0, for fills, versus 4.5 for text/background).
I'll file an issue on GitHub eventually, feel free to pass along email in my profile if y'all see this and have someone who is already nerding out on this stuff.
Context on why, and before I get more fuzzy/opinionated, why I'm comfortable speaking to this in a quasi-authoritative tone: I built a new color system that ended up being launched as Material You at Google. At its heart is getting contrast while having expressive colors instead of just flat black/white, so I really appreciate the effort here.
Fuzzy/opinionated territory:
The problem with the low contrast here isn't just that it doesn't literally hit a 4.5 ratio. IMHO that isn't strictly verboten; if I thought it were, it would mean the engineer part of my brain was too in control. There's an argument to be made that it's good the sidebar isn't distracting. The problem is that disabled states traditionally lower the foreground brightness, so the sidebar crosses over into "disabled element" territory when you visually parse it.
We'd appreciate the issue and discussion! We've been aware of contrast issues for some time, and I personally have been thinking about switching our color representation from HSL to OKLCH to give us more traction on these problems. But I've been working on Linux and am not a designer, so I haven't had the chance :D
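For readers following along, the 3.0/4.5 figures come from WCAG's relative-luminance math; a quick sketch of the standard computation (my transcription):

```ts
// sRGB channel (0-255) -> linear-light value, per the WCAG definition.
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.04045 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

// Relative luminance of an sRGB color.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05).
// Targets: 4.5:1 for normal text, 3:1 for large text and UI fills.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Sanity check: pure white on pure black is the maximum, 21:1.
console.log(contrastRatio([255, 255, 255], [0, 0, 0])); // ≈ 21
```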
To the Zed folks here, can you please add a little line to say that it is an editor, for people like me who are not in the loop. There's nothing clear on the landing page or on the docs page that indicates it is so. The video shows an editor, but plenty of software has built in editors.
My first impression is the dark mode color contrast is poor compared to VSCode defaults (I tested a few things with CCA Colour Contrast Analyser). I'm sure this is all configurable but it was off-putting to me. I'm still interested in spending more time checking out Zed.
Zed is co-founded by one (or more?) original developer of Atom. So, it's a successor in a sense that it is a new project by the same author.
Atom was developed at GitHub, and GitHub Inc remains the owner of the original Atom project. From their perspective the successor of Atom is VSCode - developed by their parent company, - despite the claims by a former Atom engineer.
Cons:
- spawning nodejs whenever you edit JSON files seems overkill, i'd prefer they use something native and more lightweight, or a way to completely disable it
- text still looks a bit blurry on low DPI screens
- doesn't support LSP properly, completion items are missing some data
- Rust for plugins.. this is painful, compare it to Sublime Text's python API, it's night and day..
Pros:
- Fast and responsive
- UI is simple yet effective
- drag&drop layouting, something i wish Sublime Text had..
- built-in terminal
- built-in Debugger (not yet ready)
A few more months of development, and i'll certainly switch from Sublime Text. i'll be a little sad, because i wrote plenty of plugins for it
I however worry about their business model. i have 0 interest in their AI/collaboration stuff, so i'll probably maintain a fork to get rid of all that crap. They should set up something as a backup plan - a small paid license, just for support; i'll be happy to buy one
> - Rust for plugins.. this is painful, compare it to Sublime Text's python API, it's night and day..
Yes, this is unfortunate, as they've unsuitably chosen the barely usable & unstable "component model" for their Wasm plugin layer. It's really only half-decent in Rust (for writing the code & compiling to the component model's non-standard flavour of wasm binary; it's also really only practical to call components _from_ Rust, too).
I think they are banking on the eventual support for cross-language async - which likely could never come, or could take longer than the company stays solvent!
Is (Python) debugging on the roadmap somewhere for Zed, or will this remain out of scope?
I have a fast editor in Sublime already, but I’d consider jumping ship from VS Code to Zed if I can set some breakpoints and look at local variables and whatnot (very basic IDE stuff).
Not a very good experience after opening a simple Python script with no external dependencies in Zed for Linux. They use Pyright, and there were an error and a warning that were both incorrect. VSCode uses Pylance IIRC and it's not complaining.
Awesome. Been looking for a next-gen Atom for coding. I use PyCharm most of the time, but sometimes it's overkill with its eternal indexing ... :) So I often find myself bringing up Sublime Text for working on individual files as opposed to a whole project.
Love it. My VSCode takes 3GB of RAM and that's a single window with like 5 files open at one time. I've long been looking for a good-enough replacement (though I don't think I'll be able to leave debugpy for a while)
As a longtime vim user text editing is not an unmet need or unsolved problem. Lack of time, energy to execute on everything is a much bigger problem. And the very biggest and most dangerous unsolved problems I can see on all our plates involve democracy and climate.
Heck, I'd like to "solve" all issues with public restrooms in the US, for example, or the lack of planning for trees or shade or water conservation, first, before I'd spend time on Yet Another Hip New Text Editor. The latter is perhaps several hundred slots down (being most generous to it) in my priority list.
I do not get the focus on collaborative editing (surely niche?) while the Remote Development in VS Code (in which "remote" can mean in a docker container running on your local Docker, or a container elsewhere, or a whole-ass other computer you own, or a rented computer/instance in le cloude) seems like such a more game-changing feature, similar in some ways but probably less work.
And make that the thing you charge for. ¯\_(ಠ_ಠ)_/¯
Zed's focus on high performance might be misplaced. Compared to editors like VSCode, the performance boost feels marginal. To convince developers to switch, the emphasis should be on enhancing the overall developer experience. Marginal speed gains alone aren't enough to make me move away from VSCode, and I don't care if a tool is written in Rust or any other language.
Yep. As a VS Code user, I can’t say that improved performance has been anywhere near the top of my wish list for…half a decade, at least.
And yeah, I get it, boo hoo, Electron, blah blah. There's always going to be the rev-head-at-all-costs crowd. I don't think that appealing to them should be this prominent, though. The value proposition just isn't there.
Generally a big fan of Zed. Super fast and quite innovative in their grep UI. My biggest current gripe is that Zed's filesystem watchers are either broken or misconfigured on Mac. If I do a `git reset --hard` via terminal or the GitHub Desktop UI, Zed doesn't detect it and I'm forced to do a hard restart of the app to get back to a synced state.
I don’t think I could ever switch to a windowed app as editor, vs a TUI, eg neovim. The remote story is never great for me. It forces your editor to slowly bloat to become your entire IDE. Native remote dev using tmux is so nice. Can anyone persuade me otherwise?
Now, me personally (and this is just one man's tiny and insignificant opinion in a sea of billions of people!),
I personally, am slightly more inclined to give a slight bit of additional weight to the opinions of people closer to the vim/vi side of editor use, than I am to give to people on the Electron-based side...
The same team behind Zed created Atom and the Electron framework. But that doesn't say anything about Zed either. The only thing that's shared between Zed and Atom is tree-sitter (https://en.wikipedia.org/wiki/Tree-sitter_(parser_generator)).
I am not against Zed in any way -- note that I have upvoted and favorited this article.
Zed looks like it holds promise on several fronts -- most notably that its code (to the best of my knowledge at this point in time, and kindly correct me if I am wrong) is decoupled from JavaScript, Electron, and Chrome/Chromium and other browsers (and other slowness/bloatedness) in general...
My comment, if it was directed, was directed at all of the (posters? bots?) that claimed, directly or indirectly, expressed or implied, that one or more of the Electron-based editors are faster than one or more of the non-Electron-based editors, when clearly Electron adds a whole lot of unnecessary bloat and slowdown to editors that use it (which is one of the reasons why Zed was apparently written: "Engineered for performance Zed efficiently leverages every CPU core and your GPU to start instantly, load files in a blink, and respond to your keystrokes on the next display refresh. Unrelenting performance keeps you in flow and makes other tools feel slow." (from the Zed website: https://zed.dev/))
Whether or not the same team worked on Electron in the past is not relevant.
What is relevant to Zed is only its codebase, and whether or not that codebase is tightly coupled to other software that bloats it and slows it down.
I wouldn't want a collaborative text editor that sends all my data to their servers, but I have incredible respect that they're very open and transparent about this fact on their website.
You don't see that kind of behaviour from Microsoft and Apple.
I started using this a few hours ago and so far am really pleased with the experience. Vim keybindings mostly work as expected and TS integration works great oob. I can totally see this becoming my primary editor going forward.
Installed it on my Fedora 33 box running the AMD drivers from the kernel and a 6800 GPU and I can game no problem with proton and steam but Zed ran very very slowly. Sluggish. Immediately uninstalled. :/
Ah, I'd love to try this. But I have a hard cross-platform requirement (Windows/Linux/MacOS) and I can't seem to get this running in WSL. Will keep checking if that improves in the future.
Windows builds are out there. You can build it yourself as well. They haven’t matured as much as Linux ones yet. But your requirement of portability is definitely fulfilled.
If I understand correctly, I need a graphics card - my current Linux laptop does not have one. Until I upgrade to a newer model I will uninstall my copy of Zed - it couldn't even launch.
It's SO BAD when people say "just pipe this shell script to bash!" for their installers. I just can't take those projects seriously if they think that's acceptable.
I am curious - if they provided a link to the script instead, would it have been ok? If you want to see the code before running, you can just redirect to a file before running it.
The only reason I dropped Zed (and I'm not alone) is the archaic, Sublime-like search functionality. Please revisit that part, because I really want to use Zed.
When searching I get almost entire file snippets containing the search content, and scrolling through them would take forever. For comparison, see LazyVim's or the IntelliJ products' search UI (even VS Code is OK, though it requires the mouse a bit): you should be able to scroll through the found lines, and while you do, see the surrounding context of the selected line.
nvim+lazyvim+telescope (which uses ripgrep and/or fzf). Fantastic, that's the gold standard for finding files, grepping, looking for references to variables etc. Love it.
LazyVim or PyCharm search functionality. Even VS Code is better in that regard, though it kinda requires using the mouse. (I love Sublime btw, except for searching.)
Does anyone know what is their monetization plan, or if they even have one? Editor with even this much polish takes a lot of time and effort. How is it being funded? Can we expect useful features to progressively get locked behind subscription as it grows in popularity (a la Gitlab)?
We envision Zed as a free-to-use editor, supplemented by subscription-based, optional network features, such as:
- Channels and calls
- Chat
- Channel notes
We plan to offer our collaboration features to open source teams, free of charge.
Edit 2: They have apparently also already raised money via private equity. I am quite soured on "free" products, which will almost always be enshittified as the pressure to turn a profit grows.
Yeah, I just can't get excited by anything this foundational that has monetization plans. While neovim is a pain to configure and will probably never be a polished "product", it's completely free to use, with no weird monetization features that might start out in good faith, but slowly creep into must-have parts of the software.
I'm perfectly willing to pay for some types of software, but for something as fundamental as my text editor, I want a model that doesn't depend on a company that needs money. That may sound a bit backward, as it otherwise depends on the goodwill of volunteer contributors, but that's the model I prefer and actually believe in.
Interesting decision[1] to build against glibc instead of musl. Any reason for not using musl (and shipping a static binary)? That would avoid the compatibility issues on e.g. Alpine and Nix.
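For what it's worth, the mechanics of a musl build are usually the easy part; a hedged sketch, assuming a Rust project and a placeholder binary name (not Zed's real build setup). One plausible reason for glibc here: fully static binaries can't dlopen() the host's Vulkan/GL drivers, which a GPU-accelerated editor needs.

```sh
# Hedged sketch: a static musl build of a Rust binary is normally just a
# target switch. "myapp" is a placeholder; Zed's actual build is more involved.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
file target/x86_64-unknown-linux-musl/release/myapp   # should say "statically linked"
```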
At first I thought this might be a creation of Zed Shaw (whose Learn Ruby the Hard Way was the best introduction to that language, back in the day; and Mongrel was great).
I really don't have much to say, just wanted to thank you for officially releasing a Linux build, and supporting us at all. We, the silent majority, very much appreciate your work. Every release of every application brings out the moaners, this is to be expected. Thanks.
> To install Zed on most Linux distributions, run the shell script below.
This is not an acceptable way to install anything on Linux. If you want to target Linux users you can't distribute with a shell script for installation.
I get that the idea is to reduce friction to installation and trying it out, but most Linux users - the ones you want filing bug reports anyway - are ones who will do due diligence and inspect the shell script to see what kind of assumptions it makes about how to install the software.
For example, I see that the shell script downloads a tarball and unpacks it to `~/.local`, then tries to mess with my PATH variable.
Well, my local directory is `~/local`. So that's not where I want it. Actually, I would want it in `~/local/zed`, isolated from the rest of the installations in there. Then the PATH variable stuff just creates junk files since I don't use zsh. So I end up having to figure out the URL to the tarball and install it myself.
My point is that if you just listed the download link to the tarball, it would actually be closer to your own goal of reducing installation friction. The shell script is so much more friction because I have to read bash code instead of just clicking a download link.
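For the record, the by-hand version is short. A sketch of installing into ~/local/zed instead; the URL pattern and tarball layout are what the install script appeared to use at the time of writing, so verify both before relying on this:

```sh
# Manual install into a self-contained directory, no script involved.
# Verify the URL and the tarball's internal layout against the script first.
curl -fLO https://zed.dev/api/releases/stable/latest/zed-linux-x86_64.tar.gz
mkdir -p ~/local/zed
tar -xzf zed-linux-x86_64.tar.gz -C ~/local/zed --strip-components=1
# Then put it on PATH yourself, in whichever shell rc you actually use:
export PATH="$HOME/local/zed/bin:$PATH"
```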
"[...]And of course, the journey isn't over yet-we'd love your help, particularly if you're excited about:
- Helping bring Zed to your distro. Either by packaging Zed or by making Zed work the way it should in your environment (we know many people want to manage language servers by themselves).[...]"
I sympathize with the situation that Zed developers are in. They are thinking of the user experience first and foremost, and when trying to distribute on Linux, they are faced with an overgrown, chaotic landscape that utterly fails to provide the basic needs of application developers, such as the ability to distribute a binary that has no dependencies on any one particular distribution and can open a window and interact with the graphics driver, or the ability to request permissions from the user to do certain things.
I do think that my work helps with this use case. Looking elsewhere in this thread I see that they are having problems fetching and running a nodejs binary successfully. Fortunately, nodejs is a piece of software that can be built and distributed statically. I have not packaged up this one in such a manner, but I have done a proof of concept with CPython: https://github.com/allyourcodebase/cpython
That said, if they want to allow users to install Zed through a system package manager, they will need to cooperate with the system and rely on system nodejs instead of trying to fetch it at runtime. Fetching and running software at runtime is fundamentally incompatible with the core mission of Linux distributions (curation, vetting, and compatibility patching of all software that is to be run on the system).
> I sympathize with the situation that Zed developers are in. They are thinking of the user experience first and foremost, and when trying to distribute on Linux, they are faced with an overgrown, chaotic landscape that utterly fails to provide the basic needs of application developers, such as the ability to distribute a binary that has no dependencies on any one particular distribution and can open a window and interact with the graphics driver, or the ability to request permissions from the user to do certain things.
But Linux does provide a very simple and easy way to do this — Flatpaks. They're completely distro-independent, allow you to package up and distribute exactly the dependencies and environment your program needs to run with no distro maintainers fucking with it, allow you to request permission to talk to the graphics drivers and anything else you need, and you can build it and distribute it directly yourself without having to go through a million middlemen. It's pretty widely used and popular, and has made the silent majority of Linux users' lives much better, although there's a minority of grognards that complain endlessly about increased disk usage.
Maybe I'm just old-fashioned, but I don't like Flatpak (or Snap or AppImage). They still don't seem to have solved all the desktop integration issues. I do not like running apps that bundle their own dependencies, because I don't trust the app developers to be on top of security issues. I trust Debian maintainers (despite mistakes in the past) to keep my system's base libraries up to date and patched. Why would I trust some random developers of some random app to do the same?
> Maybe I'm just old-fashioned, but I don't like Flatpak (or Snap or AppImage).
That's certainly your prerogative, and I hope traditional distro packages stick around — I think they will, since they are the basis of so much fundamental infrastructure. And I'm sure there will be a cottage industry of converting flatpaks to .debs or .RPMs in the future if flatpaks become the dominant way of distributing GUI software :)
> They still don't seem to have solved all the desktop integration issues.
They haven't solved all of the issues yet, but while snaps and appimages are still struggling mightily, flatpaks seem to be making pretty good progress on that front, at least if you stick with modern Electron (not the old version Discord has!), QT, and GTK applications. And I think generally all of the issues are solvable, and not only that, but solving them will leave the Linux desktop in a much better place than it was before, because we can build in broker-based sandbox permissions, and things like making each GUI toolkit automatically use the native file-picker of the user's desktop environment (something GTK4 and Qt5 support via the relevant Flatpak portal).
> I don't trust the app developers to be on top of security issues. I trust Debian maintainers (despite mistakes in the past) to keep my system's base libraries up to date and patched. Why would I trust some random developers of some random app to do the same?
I understand where you're coming from here and this is a common objection to sandbox packaging solutions, but I think there are a few problems with it.
First of all, Dependabot exists: all maintainers of Flatpaks need to do to keep their dependencies up-to-date is enable it for their application repository and then just keep an eye out for emails from the bot and approve the automated pull request when those emails show up. You can do it all from your smartphone! I've done it. Importantly, there would be absolutely no need to manually patch system libraries or backport patches, or any of that nonsense, if we didn't adhere to the distribution model of packaging, because then there would be no delay in releasing libraries, you could just get the libraries directly from upstream, and there would be no point releases or anything of the sort. So a lot of the very appreciated and difficult work that distribution maintainers have to do every day is work that is made necessary by the model of distribution in the first place. So yes, we'd be expecting application maintainers to keep their dependencies up to date, but that job would itself become much easier.
You might say that part of the distribution maintainers' job is to actually inspect library updates from upstream to find vulnerabilities or whatever, but there are far too many packages and dependencies for them to actually do that. I very highly doubt they are actually trawling through all of the code to try to spot vulnerabilities, and that seems like a job best left to the far greater number of much more knowledgeable eyes directed at open source libraries upstream.
This model doesn't just eliminate a lot of unnecessary work either — it distributes the workload; now, instead of one team having to break themselves to keep every system library up to date, everyone shares the burden of keeping the libraries they use up to date. This does open up the possibility of lazy application developers not pressing the "fix my dependencies" button, to be sure, but the amount of dependency hell and cross-distribution portability problems that packaging dependencies with applications solves I think outweighs that concern. Security isn't the only consideration here, there's also other practical considerations. Otherwise, we'd all be using Qubes xP
Furthermore, it should be noted that many of the larger dependencies of Flatpaks, at least, are handled through platforms and platform extensions and SDKs, where bundles of interrelated dependencies are actually separate packages from the application Flatpaks, and thus can be updated by upstream independently. The key is that they, too, like regular applications, become installable independent of distribution, and capable of being maintained by upstream as a result, and you can also install multiple versions of them if necessary.
In the end, I think it's a trade-off. But I seriously don't think the dynamic-linking model is sustainable or sensible, especially because of how much work it foists on one single team: keeping every version of every package on your operating system in perfect lockstep so they all use the same version of a dependency; tying your system library versions, app versions, and the OS version itself into one big tangled ball of interdependency, where you can't upgrade application B because it shares a dependency with application A, and upgrading would require a newer version than application A knows how to use; and continually backporting security patches from newer versions of that dependency to the version your system is still in lockstep with.
I appreciate all your comments in this thread. I wasn't aware of how competitive Flatpak was and I still haven't played with the technology - but I am more interested in it now.
Also for the record, I wouldn't have complained about them primarily linking to a Flatpak. It seems like a perfectly reasonable alternative to a shell script installation.
It seems to me the most neutral one is AppImage.
Flatpak is the favorite of “not-Ubuntu” people, and Snap is only preferred by Ubuntu…but it still has a huge user base due to Ubuntu's enormous market share.
I have a shell script that recursively copies, and rewrites the rpaths of, every shared object that the ELF files in a subdirectory reference, to bundle it all up. It obviously can't handle dlopen(), and ld-linux cannot be specified as a path relative to the executable, but it works for many binaries.
Of course that has the problem that vendoring always has; you have pinned every dependency, which is great for making the software work, but you miss out on security updates.
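A minimal, non-recursive sketch of that approach using patchelf; paths and the binary name are illustrative, and a real script also has to walk transitive dependencies:

```sh
# Copy each direct shared-library dependency next to the binary and point
# the binary's RPATH at that directory.
BIN=./myapp
mkdir -p ./lib
for dep in $(ldd "$BIN" | awk '/=> \//{print $3}'); do
  cp -n "$dep" ./lib/
done
patchelf --set-rpath '$ORIGIN/lib' "$BIN"
# As noted above: dlopen()'d libraries are missed, and the ELF interpreter
# (ld-linux) can't be pointed at a path relative to the executable.
```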
> This is not an acceptable way to install anything on Linux
You might want to tell the rest of the software world how unacceptable it is, because a huge amount of software, and especially dev tooling, is installed in this exact way.
It's especially hard for young or fast moving projects, most distro packaging just isn't very compatible with this velocity.
I'm personally on NixOS, which usually makes it easy to always get the latest and greatest, but e.g. would I really want to add a third-party apt repository for Zed, which introduces complications and can also make changes to my whole system, rather than just having Zed install itself in a local user-owned directory? I don't want to end up with 15 different third-party apt repositories... adding those actually requires a higher amount of trust than shell scripts that only run with user permissions.
And there are similar considerations for most other distros. Arch is probably the only other one, next to nix, where it's quite easy to stay up to date.
(Zed is already an official Arch package, btw; before that it was in the AUR, and of course it is in nixpkgs already.)
It's not ideal, but whenever some pattern propagates across the ecosystem, there are probably valid reasons why.
I disagree. I’m on Linux for my main installation and I know I can inspect the bash script if I want to.
It’s impossible to please everyone. Pipe to sh is simple, transparent, and easy to do. If reading through 200 lines of installation script is too much then reading through thousands of lines of Zed’s code base will certainly be too much.
Not so transparent[1]. Packages from a package repo are signed, usually with keys not stored on the same server, so if someone nefarious breached a server they could easily replace a bash script, but they couldn't re-sign and replace a package.
Sure, it's safe if you download the script, then review it, then install it. But hey, you reviewed it last time, it's probably unchanged, so what's the harm of piping it directly to bash next time you need to get set up?
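One middle ground, far short of signed packages but better than blind re-piping, is to pin the exact bytes you reviewed. A sketch, using the install URL Zed's site shows:

```sh
# First time: download, actually read, and record a hash of what you reviewed.
curl -fsSL https://zed.dev/install.sh -o install.sh
less install.sh
sha256sum install.sh > install.sh.sha256
sh install.sh
# Next time: re-download, and only run it if it still matches what you read.
curl -fsSL https://zed.dev/install.sh -o install.sh
sha256sum -c install.sh.sha256 && sh install.sh
```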
My question is why they didn't just make a Flatpak. Then they and their users wouldn't need to go through any of this hassle and distro fragmentation at all. Even if they didn't want to publish it on Flathub, Flatpak supports single file packages people can directly install as well.
Were they not aware of `flatpak build-bundle`? They could have just built it once, run that command on the result, put that in the archives of their repository, and been done with it. It's not like a regular package build where there are different system conditions to keep an eye on. It would work no matter what.
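A hedged sketch of that single-file flow; the manifest name and app ID are hypothetical, and this assumes the app already builds under flatpak-builder:

```sh
# Build into a local OSTree repo, then export one installable bundle file.
flatpak-builder --repo=repo build-dir dev.zed.Zed.json
flatpak build-bundle repo zed.flatpak dev.zed.Zed
# Users then install the one file directly, no remote needed:
flatpak install ./zed.flatpak
```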
But that's literally already true, and at least with Flatpak they'd only need to make a single package to support all distributions and system configurations, whereas what they're already doing is supporting 15 different packaging systems and distributing a fragile install script that more people will have problems with. So this objection makes literally zero sense.
It isn't, though? Unlike the other packaging formats, it actually works across distros and independent of system setups, so if you choose it, you aren't limited to a specific distro or group of distros like with the other packaging formats. Therefore, if you choose it, you don't have to deal with any further fragmentation of the Linux desktop. So yes, while you are "technically" correct, which is the best kind of correct, you're practically speaking quite wrong. It may technically be just another packaging format, but unlike the other ones, it removes the need to worry about fragmentation entirely if you adopt it, whereas if you use a bash script, various system configurations will conflict with it, and if you use a distro package, then you'll keep having to make new packages for various distros.
As a point of clarification, the script does not edit your zshrc file; it just prints a suggested edit that you may want to make to that file in order to add zed to your PATH.
There are two schools of thought. One strives for correctness, even if that requires extra effort. Another is "anything goes as long as it somehow kind of works more than it doesn't."
(Actually it's most probably a spectrum rather than a binary division, but I'm no philosopher or sociologist, so for example's sake I'll operate with this simplified model here.)
The world en masse generally prefers the latter (picking the easiest solutions, no matter how shitty they are - that's how we ended up with what we have today*), but among engineers there are a significant number of people who believe that's how things should be.
There are numerous issues with copying and pasting `curl | bash` invocations from random webpages: all sorts of potential security issues; the installed software (if it works) could be installed in a way different from how your OS/distribution does things (or from your personal preferences), leading to all sorts of future issues; etc. etc. Someone probably has a good write-up on this already. But - yeah - on the other hand, it works for a number of people.
___
*) And, of course, opinions on whether what we have today is "good progress" or "unbearable crap" also vary.
> among engineers there are a significant number of people who believe that's how things should be
There are close to zero people who tend to think like that among actual engineers. That's why we have reliable transportation and bridges and skyscrapers that work for (soon to be) centuries. On the other hand, we have lots of them among self-professed "engineers" who have changed many monikers over the past couple of decades and will probably call themselves "gods" in a few more years down the line.
> There are close to zero people who tend to think like that among actual engineers.
Oops. My apologies - I meant exactly that, that a significant number of engineers believe in correctness and sound approaches, but I had a brain fart writing that comment. It should've been "believe in the former".
No idea about how many non-software engineers take various shortcuts, though. But I think there's a non-negligible number of electronics engineers who do so - I'm not an expert in that field, but it's not unheard of to skip coupling capacitors or to use a resistor divider instead of a voltage regulator to cut down the costs (because that still works... until it doesn't, of course).
Don't apologize; GP is being a pedant in order to pick a fight. The "real" definition of "engineer" doesn't matter; your post makes just as much sense if you'd instead used "software developers".
Can we please instead interpret people's comments in a charitable manner, as we can reasonably assume they were intended, not in the manner that allows us to pick pedantic fights with them?
> There are two schools of thought. One strives for correctness, even if that requires extra effort. Another is "anything goes as long as it somehow kind of works more than it doesn't."
> ...
> The world en masse generally prefers the latter (picking the easiest solutions, no matter how shitty they are - that's how we ended up with what we have today), but among engineers there are a significant number of people who believe that's how things should be.
I often have trouble articulating this at work. I will steal this and use something like it when advocating for correctness as opposed to shitty, short-sighted solutions. Thanks!
This also includes the links to the tar files, so you don't need to read the bash script to download the tarball.
Anyone interested in this issue will spot this page anyway. Maybe they could make it more convenient for visitors to check.
> My point is that if you just listed the download link to the tarball, it would actually be closer to your own goal of reducing installation friction. The shell script is so much more friction because I have to read bash code instead of just clicking a download link.
This comes across as rather entitled. They offer an easy installation path that works for most people. They also went out of their way to provide alternative installation methods and instructions [1]. All while gifting you and the world free and open source software.
My interest is not in using this text editor as a consumer, but in guiding software development culture in general, particularly when it comes to installation of Linux applications.
You've dedicated your adult life to massaging ABIs and openly admit your "non-polished" solution is an idealistic holy grail, and yet you expect some text editor with a non-existent path to profitability to have hewn to your every private thought about Linux binary versatility, and that, sir, is bullshit.
Pipe the script to cat before you pipe it to sh and take a look. It's downloading an executable to ~/.local/bin. If that's not your preference, there are many other options for obtaining the software, via your distribution or manually. I feel the backlash to this pattern is pretty overblown. They're not attempting to hide anything, just make the common case convenient.
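Concretely, the look-before-you-run version is only one extra step, and it runs the same bytes you just read rather than whatever a second fetch returns:

```sh
# Fetch once, read the exact bytes, then run those same bytes.
curl -fsSL https://zed.dev/install.sh -o /tmp/zed-install.sh
less /tmp/zed-install.sh
sh /tmp/zed-install.sh
```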
A lot of the backlash is around the tool downloading and running an arbitrary shell script which could contain anything, and overlooks the fact that that shell script then downloads an opaque binary which could also contain anything. If you're paranoid about security read the code and build it from source, otherwise curl | bash is trusting the authors just as much as any other method.
Probably the biggest problem with the `curl | sh` approach is it bypasses package maintainers. I agree it's really no different than if you compiled malicious code yourself (or pulled in a 3rd party bin repository). However, one of the functions of a package maintainer is finding/being notified of security issues.
I'm thinking of the recent xz attack. Imagine how bad that would have been if xz was commonly installed via `curl | sh`.
All this is to say `curl | sh` is probably fine if the org is reputable; however, you should be having second thoughts if it's a repo run with a bus factor of 1.
Yet the xz attack specifically targeted the packages and nothing else. And it worked, to a point. All I'm saying is that package maintainers are human and can't detect everything.
Sure, but that convenience will come to bite you later. What happens when you want to update it?
Their full install docs are like 5 lines of code, so it is much preferred to do it that way. Every distribution is different. The ideal install here would be to add a unique apt repo for Zed; then it becomes part of my normal update process. Updating a binary in a directory is not the end of the world... but I would prefer to know that upfront versus needing to hunt down where it was placed in order to do the updates.
Edit: it's 4 lines. Seeing this is much preferred to parsing a bash script that is intended to support all distributions:
Suggesting that users install software outside of official repos isn't more convenient than using a repo and standard package management tools. As soon as there's an update, you'll learn exactly why that is the case.
You can just read the script that you're curling rather than pipe it into sh directly. It seems like it just extracts the binary from a tar.gz and puts it into ~/.local.
"reading a script" is actually a worse user experience on Linux than just using repositories or flatpak, though. It's pretty rude of software developers to put the onus on users to verify that they're not doing something outright malicious in terms of the installer.
Most repositories have some sort of vetting process as far as I'm aware. In the case of Zed, because it's open-source, it can be examined more completely, although I don't think it's expected for every update to be heavily scrutinized.
In the end, at some point you either have to inspect every line of code yourself or trust others to have done it for you. Package managers fall into the latter category.
Instead of pasting it into the terminal, I opened a new tab and read it. There are maybe 200 lines, most of which aren't relevant to my platform. Didn't see anything unusual.
I then proceeded to install tens of thousands of lines of code I didn’t read onto my machine.
My point? People really seem to be bike shedding this install script bit. If I was a malicious actor I wouldn’t be hiding the bad parts in the install script.
200 lines versus the actual install steps which is 1. wget the tarball, 2. extract the tarball to .local/bin, 3. done, or a few more steps to add the desktop file.
That's an incredibly weird response to a comment primarily concerned with the user experience of vendoring software on Linux. Not only does it not engage with my comment but it also virtue signals quite a bit, don't you think?
It is incredibly ironic, when looking at your post history, that you state that you have "been involved on[sic] [...] the nuances of interface and user experience". Does my comment not meet that very criterion?
"reading before you run" eliminates all convenience of the one liner. Their linux docs are way better because it shows you exactly how to do it on a per-distribution basis. when it comes time to update the software I would prefer to know how exactly it is installed so that I can update it correctly.
Three months from now I won't remember using the script to install it. And the contents of the script could completely change. This is not a helpful take.
This is not a helpful take for you. The same method works fine for me over the last decade. Taking notes helps, having some helper scripts helps. If one’s invested in a technology, one finds a way to remember.
Debian packages are often old. Hence people found a way around.
> You just described how the script is less convenient to meet the preferences of the commenter you replied to.
Well… no. The person I replied to doesn't say anything about preferences. They want to know how to update the software; the script is the best reference.
You know nothing about what I do. Keep editing code. You just grind on an infrastructure brought to you on a silver platter? Like an editor is the only thing we have to fuck around with.
I am less concerned about it being malicious and more concerned about it doing something I do not want re: how the software is installed. Installing software from the distribution's package manager is always preferred to doing something manual. When it comes time to update the app, I would prefer not to have to do that in a roundabout way.
I really don't get why this is the modern editor style of choice.
20% (35 chars) of screen space permanently wasted on an always-on file browser (meanwhile the animation showcases fuzzy finding)
4% (7 chars) of screen space permanently wasted by line numbers (why are the numbers cut off on the right?)
2.7% (5 chars) of screen space taken up by a gutter
So 27% of screen space effectively dead 99% of the time.
Why do people do this to themselves?
I can't quite figure out how to get the gutter to truly only appear when needed (I can't remember why), but in my vim configuration 2 chars of space are taken up by the gutter and the rest is for the actual code. The current line number is in the bottom right, and if I need to go to a specific line number I have `G` for that. If I need a file explorer, there's the default Netrw one, there's NERDTree, there's a terminal (I actually rarely need this anyway, and while I can understand that not everyone can cope without one, I can't comprehend why you would need it on screen 100% of the time).
Why does the "modern text editor" waste so much screen space?
I have a 1200p laptop monitor which gives me 174 chars of horizontal space at a comfortable font size. If I split that in half I get two terminal windows worth of 87 characters each. If I keep my code under 85 characters per line, not only is it easier to read, I can keep a man page or another piece of code on the other half of my screen.
> 20% (35 chars) of screen space permanently wasted on a always on file browser
That is toggleable. Cmd+B on Mac. I usually keep it closed, but it's just a shortcut away when I need it.
> 4% (7 chars) of screen space permanently wasted by line numbers
You can disable that in the settings with:
"gutter": { "line_numbers": false }
> 2.7% (5 chars) of screen space taken up by a gutter
You can also disable the other items in the gutter to free up all of that space.
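For reference, a fuller gutter section might look like this in settings.json. The field names beyond line_numbers are from memory of Zed's settings and may have changed, so check the default settings via the command palette:

```json
// In ~/.config/zed/settings.json (Zed's settings file accepts comments).
// Field names other than "line_numbers" are from memory; verify them.
{
  "gutter": {
    "line_numbers": false,
    "code_actions": false,
    "folds": false
  }
}
```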
> So 27% of screen space effectively dead 99% of the time.
You can also press shift+esc at any time to toggle a fullscreen pane of whatever you are working on when you need more space without affecting your editor's state. I don't know the name of that action, I actually found that accidentally.
Edit: I forgot to mention, you can actually disable the tab bar now too if you want even more space. You would just need to rely on the tab switcher feature or file search to move around.
I would damn hope you can configure/disable this. But why is it the default?
And if the answer is "discoverability" then where is the default-on fuzzy find, default-on command palette, default-on context menu, etc?
My point was not to claim Zed was bad because I had the ignorant misapprehension that it was incapable of being cleaner, my point was to ask why people desire such a cluttered workspace by default? Most people I see using these editors _don't_ disable all this clutter.
I haven't tried Zed and am unlikely to, but I get 238 characters of fantasque sans mono 11pt on my 1200p screen, so I could give up those spaces and still have two vertical panes (assuming Zed supports vertical panes and the file-browser isn't duplicated).
I think lots of people are comfortable with smaller fonts, but I find myself genuinely straining my eyes too much and getting headaches if I go smaller than this, and I already wear glasses (although I should probably update my prescription).
Oh, there's no "right answer" to font size, but the fact that my size would work on a 1200p screen (and many of my coworkers have significantly larger screens and younger eyes than I) could go towards explaining why the sidebar is on by default and the gutter is so huge.
I agree with you and probably have a similar setup to you.
There's a % of people that like to think deeper about their tools, but I think most folks don't care enough or might be struggling with higher priority things at work. Plus, you don't know what you're missing.
For me, good setup is like compound interest that just keeps paying off over time.
There's the relative number line etc, but I've never actually encountered a situation where I felt the need to make a jump to a line number on the screen and didn't do it with basic vim motions instead. Every time I'm going to a specific line number, it's because I'm following an error message that references a file and line number.
1. curl | sh, seriously
2. The default theme is so low-contrast that I seriously struggled to read text. I could not find something that was, like, actual white on actual black.
3. I can figure out how to enable Copilot, but not how to open a file. (I had to resort to “zed file.cpp” from a terminal.)
4. vim keybindings are not bad, but also not perfect.
5. It feels… laggy? Isn't this supposed to be fast? Whenever I move the cursor over a symbol, it first moves and then like 100 ms later, it tries to highlight that symbol everywhere. And that takes time. In a 200-line file.
6. Ugh, programming ligatures. Where are the preferences to turn them off? Where are the preferences for anything?
OK, well, I guess I could use this if I had nothing better. But if the point is that it's supposed to be zero-lag, #5 really destroys the point for me.
It's pretty much guaranteed to work cross-platform, and if you're worried about it you can save the script and view it yourself. You're about to run their binary on your machine, why are you concerned about the script you're downloading?
> 2. The default theme is so low-contrast that I seriously struggled to read text.
The landing page when I opened the app had an option to choose from about 40 themes. I tried 3 of them and they were no _worse_ than VSCode's defaults.
> but not to open a file.
usual keyboard shortcuts, and system menu bar?
> Where are preferences to turn it off? Where are the preferences for anything?
I couldn't even get it to run at all. It's in the official repo of my distro, but when I installed that package and tried to execute it, the binary launched another executable, passing as an argument a 'zed-cli://' URI pointing to some socket/named pipe it tried (and failed) to create under /tmp, then just sat there doing nothing. Seems like it's doing some sort of local client-server implementation -- not sure why a standalone desktop app would be designed that way.
It never spawned a window, and possibly because the process is itself launching another process, nothing is output to stdout or stderr to indicate what's going on.
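If anyone else hits this, two things worth trying; the log path and flag are from memory, so treat this as a sketch and check `zed --help` and your distro's packaging:

```sh
# Zed's log usually lands here on Linux; tail it while reproducing the hang.
tail -f ~/.local/share/zed/logs/Zed.log
# The CLI also has a foreground mode that keeps output attached to the terminal.
zed --foreground
```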
I'm a sucker for text editors. I've used so many at this point. Notepad++ from way back. Anyone remember Komodo, the Perl focused text editor from ActiveState? BBEdit. TextMate. Sublime text. Atom. Visual Studio Code. All kinds of IDEs from Eclipse to the IntelliJ family and the full fledged Visual Studio. I've used many flavors of vim and learned emacs multiple times. I doubt I've named half of the editors I've used.
I'm at the point where I just can't motivate myself to try yet another. In my experience, they all have their strengths and weaknesses. My rule of thumb now: use whatever the majority of people on my team use. For non-team related work I find the community around Visual Studio Code to be good enough that it does what I need most of the time. I use bog-standard vim when I ssh into boxes.
Komodo the editor (I recall it as a semi-commercial alternative to Eclipse, much like IntelliJ, but based on Mozilla UI code?) was funny, because exactly when it got traction and people started to talk about it, the tech news was inundated with Comodo, the TLS certificate authority, being caught doing shady stuff (and, if I recall, blaming some hackers).
Komodo had some promise, in kind of the same way the original hype about Perl 6 had promise. For its time, it had features that were not widely available in other editors. However, I found it slow and buggy, and that was compared to Eclipse, which was notoriously clunky. (Note: at a quick look, the modern Komodo Editor, which appears to still be actively developed, is much closer to a Visual Studio Code clone and is nothing like the original I recall.)
I recall it was pitched as a slightly lighter alternative to eclipse and intellij but initially geared towards Perl development (with plugin support for all languages). However, that kind of middle-ground wasn't popular at the time and devs mostly split into the full featured IDE camp or the stripped down editor camp.
Editor hype cycles come and go. That's part of the reason I am so jaded when I see a new cycle start for a new editor.
Seems like a good VSCode alternative, but I'll stick with my editor of choice. I imagine it will be 1~2 years before Zed is bought by Microsoft and either squashed like Atom or replaces VSCode.