I just want to say that this project is amazing. At the risk of sounding hyperbolic, I think Rust is the most exciting thing happening in computing today. A project like this, which plausibly replaces software traditionally written only in C/C++ with something that has performance parity but is written in a language where contributions are relatively accessible and safe, is the most exciting thing even within an already intriguing ecosystem.
As someone who is especially concerned about the performance of my tooling these days, due to what seems to be a generally infinite willingness to accept web apps that are slower than desktop apps from decades ago and that seem to demand more resources year over year, I really appreciate that such a discerning eye has been given to Alacritty's speed and resource usage. Some contemporary alternatives like Electron-based terminals are academically interesting, but they're programs I'd never want to use due to the huge step backwards in these areas.
One question: do you have any plans to use Alacritty to try and advance the state of terminal emulators more generally? e.g. Displaying images, richer interfaces that don't depend on ASCII bar characters, graphs, properly tabulated results, etc. This is a direction that I wish we were going, but it's not clear to me how to get there without many sacrifices.
> do you have any plans to use Alacritty to try and advance the state of terminal emulators more generally?
I hadn't replied to this because others had already provided all of the info I have. To summarize, the author of notty[0] and I are talking about a collaboration[1]. notty has done a ton of pathfinding in this area, identifying how to add many of these features in a backwards-compatible way. I'm really looking forward to seeing where it goes!
Is adding those features in a backwards compatible way really that important? Couldn't you just have a program send an escape code telling your term to go into "new" mode, and implement some completely different standard?
Or I suppose use terminfo, but I like the idea of dealing with text streams better.
notty author here. notty is basically way down a yakstack for me - I wanted better CLI/TUI tools, so I wanted to write a framework for writing CLI/TUI tools, but some of the features I wanted aren't supported by terminals, so I started writing a new terminal. But I don't know anything about graphics programming & this is really far away from what I actually wanted to be doing - so when jwilm showed me alacritty & mentioned implementing notty with it I was pretty stoked.
That is not how copyleft software licenses work at all. All the AGPL guarantees is that 'Bigco' must contribute back to the community any modifications they make to the software.
Exactly! How many bigco's have a terminal emulator incorporated into a product? If you're just using this to run tmux, vim, etc. the AGPL's strengthened sharing provisions aren't going to affect you at all :-)
I was just pointing out that regardless of modification/distribution/whatever, bigco policy is to not allow ANY AGPL code within a 10 mile radius of any computer owned by said company.
The author(s) are free to use AGPL, but there are significant downsides if they care about adoption.
These aren't "weird corporate policies", they're very sensible. If they wish to use such software they need to be very careful in how, and track its use, and they just don't think having such a framework is worth it.
It's a downside if it leads to general non-adoption, either directly or because a competitor with a different license gets the market share.
I'm all for the moral stance, but moral purity in a vacuum is essentially irrelevant. Effective morality is about impact on the world. A morality that's only about the good feelings of the purist is sterile self-indulgence.
> bigco policy is to not allow ANY AGPL code within a 10 mile radius of any computer owned by said company
Wait, that seems extremely paranoid, even if only meant figuratively... Can you explain the thinking behind restricting the use of AGPL-licensed applications?
It's a very common company policy, because 'never use GPL' is a much easier rule to follow than 'only use GPL when it doesn't expose the company to risk'. Programmers aren't lawyers.
You might be thinking of Lesser GPL? It should be immediately obvious why any Bigco would treat the AGPL like an exploding canister of infected blood and sharps.
The AGPL treats web publishing the same as binary distribution. If a bigco (e.g. Google) used AGPL code as part of a web service (e.g. a web-based email client) there is a risk that they'd be required to comply with requests for source code. It's a pretty scary license. I wouldn't touch it... and I run a teeny tiny little speck of a website by comparison.
This is the mindset that led to people not realizing the impact of Shellshock. If your web service shells out to use any other tools (ImageMagick, for instance), the shell is now part of your app.
Which then begs the question why the author chose AGPL over regular GPL, if it's unlikely to ever apply in practice. What was the author worried about?
Meanwhile, it's much easier for a BigCo to have a blanket policy for a license which has incredibly high theoretical dangers and little clarity around its scope. And I don't blame them.
Is that true? My understanding of the AGPL was that any software product which uses it as a component becomes subject to the AGPL - it has the linking semantics of the GPL, not the LGPL. If that's not the case, please do disabuse me of my misconception!
Yes, it is like the GPL. But in this case, where the product is a standalone application, that distinction shouldn't matter unless you're actually planning on bundling it into your own product somehow.
The problem is that "outside of the company" can be murky. What if the company outsources? What if the company hires contractors? What if the company employs an intern - does the intern now have the right to distribute the software?
These are the legal landmines that BigCos want to avoid, mainly because they're questions that have not really been decided.
That only addresses part of the reason for the policy. Please understand that the legal world inside companies is very complicated, and smart people spend a lot of time analyzing this.
There's a big difference between "should be fine" and "will be fine". When the stakes are small, the former can be enough. But as the stakes get larger, people favor the latter.
At a large company, the stakes get large in two ways. One is that all the numbers are just larger. But more important is that an individual decision maker's career success can become dependent on a relatively small number of things. E.g., if a lawyer approves a license that should be fine but actually isn't, that could substantially harm career prospects. It still may be a small problem overall for a major company, but if it means somebody gets fired, those are pretty big stakes.
"Integration into your products" is too narrow. For example, AGPL may mean that contractors who use company internal web services must be given access to the source code of those services. That's a frightening prospect for companies.
I meant use more in "install and use, maybe fix a few bugs", which is what I'd primarily expect for a terminal emulator. It's not the kind of software you're likely going to specially interface with your systems, unless you ship it with your own OS.
Software can be "shipped" to users in ways you may not anticipate. Even a lowly terminal emulator might find its way into a POS system, factory line, etc. (These interfaces are often shockingly primitive).
And even if not, developers might use internal code search, find what they want, and then copy and paste. The pushback on AGPL code (and GPL code even) comes from the difficulty of establishing internal policies to keep the code segregated. Much easier to have simple-to-understand policies enforced at the boundaries, e.g. "no AGPL, period", instead of "AGPL code is OK for software that won't interface with our systems, as determined by either biased engineers or technically-shaky lawyers."
I think you make a good point in this instance, so I upvoted. The follow-on discussion establishes the policy of big-cos pretty well, and I don't think Alacritty benefits much from the AGPL since actual integration into another software base is unlikely (except for some of the sub-libs perhaps, but personally I feel that fundamental libraries flourish better under less-restrictive licenses anyway).
Hah, yeah I'm aware, but the potential of Rust is huge.
We talk a lot about open source these days, but meanwhile the tools that we all use are sitting on huge substrates that the vast majority of us aren't contributing to and probably never will due to the complexity hurdle that needs to be overcome.
This includes our web browsers, our terminals, our editors/IDEs, our operating systems, our security software (OpenSSL, NaCl, OpenSSH), and if you're a developer, things like our databases. Although I can ostensibly write C and C++, I still don't contribute to these projects because oftentimes a whole new set of local conventions around build tools and utility libraries needs to be learned for every project, and there's a high bar of experience required before contribution is possible without the risk of introducing a memory leak or security problem.
Rust has the potential to change all of this, and that's really, really big.
> Your last paragraph suggests what you really want is a notebook style interface (in the style of mathematica) rather than a terminal.
I definitely appreciate Mathematica, and its sort of rich prompt is probably closer to what a terminal should look like than what we have today. But most of what I'm doing all day is text editing and using companion tools like Git, which aren't a good fit for it. I'd much rather those Mathematica utilities come to my terminal than have to go to Mathematica myself.
Reading his comment, his concern isn't with C++ as a language; it's with the tooling and development practices around it. He suggests that because of the generally project-specific nature of the tooling and development idioms, a higher bar of experience is required to contribute.
I can't argue with him there, and I'm about as much of a C and C++ fan as it's possible to be.
That is far from what I meant, and the suggestion that Rust is "salvation" is the stuff of infantile delusion. I'm a big fan of Rust, too, but it isn't going to be the "salvation".
I personally use Hacker News Enhancement Suite for Chrome and tag users whenever I find myself mentally rolling my eyes. Then when I'm reading a page with lots of comments and see the "SKIP" tag I collapse the thread.
It is an amazing project! One thing to note is that the terminal emulator itself isn't GPU-accelerated (there are no parallel computations running on the GPU); only the UI graphics are rendered by the GPU (much like in the Chrome browser).
"... generally infinite willingness to accept web apps..."
Interesting.
I stay in textmode. Hence I do not need an emulator.
If I need graphics I access the files over VLAN from another computer designed for mindless consumption of graphics, like the locked-down ones they sell today with touchscreens, etc.
My understanding is that emulators like xterm can redraw the screen faster than VGA. I remember this can make textual interfaces feel snappier.
But I doubt that jobs execute any faster in X11/Wayland/whatever than they do in textmode. I cannot see how the processes would complete any sooner by virtue of using a graphics-accelerated emulator.
But I could be wrong.
I sometimes use tmux for additional virtual consoles because on the computers I control (custom kernel, devices and userland) I do not use multiple ttys, just /dev/console.
I rarely ever work within tmux. I only use it to run detached jobs. I view screen output from my tty with something like
case $1 in
  -B|-E|-S|-t)
    tmux capturep "$@" --
    exec tmux showb "$@" --
    ;;
esac
I'm not a seasoned tmux user. I was a very early adopter. tmux is useful high quality software IMHO.
Not sure why I would ever need these slow "web apps".
I guess the third parties controlling the endpoints might be able to utilise the data they gather about users. And I am sure some users appreciate the help. Thus it is a symbiotic relationship.
I am continually making my "tooling" faster by eliminating unnecessary resource consumption. It is an obsession of sorts. Constant improvement.
But given that I am working with text, graphics processing is not something I need. I would not mind being able to run my non-graphical jobs on a fast GPU, but my understanding is that the companies making these processors are not very open.
For example, the GPU in the Raspberry Pi.
Always interesting to hear how others are meeting their computing needs.
The Web is two things. First, it's the promise that a certain runtime with a specific minimum set of capabilities is available almost anywhere. Secondly, it's a staggeringly-huge installed base of stuff written for that runtime.
I don't think there's anything out there that matches the volume of deployed HTML, CSS and JS in the wild.
The horribly sad part is that HTML, CSS and JS are a gigantic Rube Goldberg implementation of "run arbitrary code in a safe sandbox," because the Web is also the world's biggest collection of legacy dependencies.
IMHO, the source of the engineering cringe making everything so much sadder and less than what it could be is that the W3C/WHATWG/IETF/etc are made up of consortiums of large, foghorn-equipped corporations - corporations that have vested interests in advertising, consumer retention, and strong guarantees of indefinite consumption.
I've never really gotten the reasoning behind the technical directions the Web's gone in; a lot of things have stuck and worked, but so many more have flopped, yet the associated implementations for both the successes and failures have to be maintained going forward indefinitely.
The iterative pace on the various Web standards is another problem - things go so fast that the implementations can never get really really good, and Chrome uses literally all of your memory (whether you have 2GB or 20GB, apparently!) as a result.
---
Regarding $terminal_emulator being faster than VGA, I can emphatically state that virtually all of them are disastrously slow. aterm had some handcoded SSE circa 2001 to support fake window shadowing (fastcopy the portion of the root window image underneath the terminal window whenever the window is moved; use SSE to darken the snagged area; apply as terminal window background), but besides that sort of thing, terminal emulators have more or less never been bastions of speed.
If by VGA you mean true textmode (the 720x400 kind, generated entirely by the video card), I don't think there's much that's faster than that. Throw setfont and the Cyr_a8x8 font in there to get 80x50 (I think it is, or 80x43) and you have something hard to beat, since spamming ASCII characters at the video card's memory will always be faster than addressing pixels in a framebuffer.
Which is why GPU-accelerated terminal emulators are so interesting: they're eliminating as many software/architectural bottlenecks as possible to make those expensive framebuffer updates as quick as possible. It's definitely the way to go; games are generally rated on their ability to push GPUs to >60fps at 1080p (and increasingly 2K/4K/8K), so the capacity is really there.
The i3 window manager could be considered one of many comparable alternatives to tmux. It's not perfect (it's not as configurable as I'd prefer), but it'd get you X and the ability to view media more easily.
I do really appreciate the tendency to want to view a computer as an industrial terminal appliance though. Task switching is still best done by associating tasks with different objects in physical space, so keeping the computer for terminal work and keeping tablets (et al) for other tasks does make legitimate sense.
---
Regarding data usage, that's a tricky one - most successful Internet companies provide some kind of service that necessarily requires the collection of arguably private information in exchange for a novel convenience. As an example, mapping services don't truly need your realtime location but having that means that they can stream the most relevant tiles of an always-up-to-date map to you. The alternative is storing an entire world map, or subsetted map(s) for the locations you think you'll need, but that'll kill almost all the storage on phones without massive SD cards.
---
I find elimination of unnecessary resource consumption a fun concept to explore, almost to the point of obsession. In this regard I often come back to Forth. I was reading this yesterday - http://yosefk.com/blog/my-history-with-forth-stack-machines.... - and it explores how Forth is essentially the mindset of eliminating ALL but the smallest functional expression of the irreducible complexity of an idea, often to the point of insanity. It's not a register-based language so it's never going to beat machine code for any modern processor, but it's a very very interesting concept to seriously explore, at least. (And I say that as someone interested in actually using Forth for something practical, as described in that article.)
---
AFAIK, the RPi actually boots off the GPU, or at least the older credit-card-sized ones did. I'm not sure about the current versions.
ATI released some documentation about their designs a while back with the subtext of enabling open-source driver development. I don't think that panned out as much as was hoped.
My understanding is that Intel has both NVIDIA and AMD beat nowadays when it comes to Linux graphics support; the two former vendors still heavily rely on proprietary drivers (on Linux) for a lot of functionality.
Sadly, since they both have to successfully compete in the market, they're unlikely to release their hardware designs in significant detail anytime soon. (And even if they did like the idea of merging, a single gigantic monopoly carries a lot of risk, and the resulting behemoth would likely be impossible for Intel to compete with.)
So, learning OpenCL and CUDA (depending on the GPU you have) is likely your best bet. There are extant established ecosystems of resources and domain knowledge for both implementations, and the relevant code is not too tragically licensed AFAIK.
> The alternative is storing an entire world map, or subsetted map(s) for the locations you think you'll need, but that'll kill almost all the storage on phones without massive SD cards.
And that's what HERE Maps does best, without needing massive storage.
Do you know of any forks or clean implementations of browsers which cut out legacy support more aggressively and/or are tuned for performance? Something like Chrome with less overhead because it doesn't bother to support deprecated features.
Unfortunately there's currently nothing out there that generally meets all of the points you've touched on. There are some projects that tick one or two boxes, but not all of them.
Dillo parses a ridiculously tiny subset of HTML and CSS, and I used it to browse the Web between 2012 and 2014 when my main workstation was an 800MHz Duron. Yes, I used it as my main browser for two years. Yes, I was using a 19 year old computer 2-4 years ago. :P
Its main issue was that it would crash at inopportune times :D taking all my open tabs with it...
The one thing it DID do right (by design) was that the amount of memory it needed to access to display a given tab was largely isolated per tab, and it didn't need to thrash around the entire process space like Chrome does, meaning 5GB+ of process image could live in swap while the program remained entirely usable. This meant I could open 1000+ tabs even though I only had 320MB RAM; switching to a tab I'd last looked at three weeks ago might take 10-15 seconds (because 100MHz SDRAM) but once the whole tab was swapped in everything would be butter-smooth again. (By "butter-smooth" I mean "20 times faster than Chrome" - on a nearly-20-year-old PC.)
I will warn you that the abstract art that the HTML/CSS parser turns webpages into is an acquired taste.
---
Another interesting project in a significantly more developed state is NetSurf, a browser that aims to target HTML5, CSS3 and JS using pure C. The binary is about 3MB right now. The renderer's quality is MUCH higher than Dillo's, but it's perceptibly laggier. This may just be because it's using GTK instead of something like FLTK; I actually suspect firmly kicking GTK out the window will improve responsiveness very significantly, particularly on older hardware.
I have high hopes for this project, but progress is very slow because it's something like a 3-6 man team; Servo has technically already superseded it and is being developed faster too. (Servo has a crash-early policy, instead of trying to be a usable browser, which is why I haven't mentioned it.)
---
The most canonical interpretation of what you've asked for that doesn't completely violate the principle of least surprise ("where did all the CSS go?!?! why is the page like THAT? ...wait, no JS!?? nooo") would have to be stock WebKit.
There are sadly very few browsers that integrate canonical WebKit; GNOME's web browser (Epiphany) apparently does, as does Midori. Thing is, you lose WebRTC and a few other goodies, and you have to lug around a laundry list of "yeah, Safari doesn't do that" (since you're using Safari's engine), but I keep hearing stories of people who switch from Chrome back to Safari on macOS with uniformly positive noises about their battery life and system responsiveness.
I've been seriously think-tanking how to build a WebKit-based web browser that's actually decent, but at this exact moment I'm keeping a close eye on Firefox. If FF manages to keep the bulk of its extension repository in functioning order and go fully multiprocess, the browser may see a bit of a renaissance, which would be really nice to witness.
I generally use rxvt because xterm is too slow. It doesn't have menus or URL highlighting, and scrollbars are nonfunctional in the presence of screen, tmux, Emacs, or generally any interesting terminal app.
On Windows I use mintty, and I turn off scrollbars there. I simply don't use the mouse to interact with the terminal other than to select text, and that's using the selection buffer to copy.
Speed is highly relevant to me. Most modern terminal emulators are very slow, most noticeably when you get a lot of output in a pane in something like tmux.
(Simplistic benchmarks that test full screen scrolling usually hand the crown to terminals that don't bother to refresh the screen with everything output, but that's not the only bit of a terminal emulator that can be slow.)
Interesting. For me, there isn't enough integration between the GUI and terminal ... they shouldn't seem separate, but should be bridged to create a coherent experience.
As a suckless terminal user I would say it is not the goal of the terminal emulator to provide scrollback; GNU screen or tmux are far better tools for that purpose.
In the case of tmux, it's only a single-line change to activate mouse scrolling. It's nice to have for the times when I switch over from a browser and instinctively try to scroll around.
Why do people still say C/C++? They are two different languages with different purposes and strengths/weaknesses. Rust might be a worthy competitor with C++, assuming many improvements down the line, but it's not even in the same category as C, no matter how much the enthusiasts like to claim otherwise.
I really disagree with the author's definition of minimal.
Terminal emulators have such a minimal user interface as it is that it's a bit boggling that I have to make the case for the following "bloat" that other terminal emulators have.
I need scrollback because I do occasionally pick up my mouse and grab things that have scrolled off the screen. Tmux doesn't help with this, but maybe there is some magic that I don't know about these days.
I need tabs. At any given time many of those tabs might have instances of tmux somewhere in their multiply nested depths, generally on remote hosts.
I'm not going to start tmux on every local prompt just so I can use Alacritty and thus intentionally start a tmux-in-tmux funshow.
I use "Monitor for Silence" and "Monitor for Activity" pretty consistently.
It's free software, so I'm glad the author is making something and hopefully enjoying the process. I can't really use this or consider it until he reconsiders. Maybe he'll get some collaborators who will argue him around on this.
> Features like ... are better provided by a terminal multiplexer
I would strongly argue that this thinking is putting the cart before the horse.
I don't use tmux, nor do I want to (though occasionally I have to use screen as a hack to keep programs running on remote servers, and I hate every second of it). Solutions like tmux arguably exist because terminals have poor UIs, and the terminal protocol is too weak to form the foundation for the kind of interactivity and statefulness provided by modern graphical UIs. If terminals were as powerful as, say, web browsers (not that I'm suggesting that anyone conflate them), the world would be a different, happier place.
I think Hyper [1] is going down the wrong path, but I strongly believe a new "terminal-oriented UI model/protocol" could be invented that would scratch every possible itch — good for text, mouse support, custom UI widgets, seamless remote connections, multiple screen regions — without sacrificing functionality at all.
There's been a lot more pushback on the scrolling decision than I had anticipated. It's not something I want in my terminal, but it seems that a simple feature like this is essential for others. Perhaps I should reconsider.
I worry that a "simple" feature like this may be overly complex internally. Performance with large amounts of output is also a concern. At least if we were to add support, it could be designed as a build feature and be removed completely if it were undesired.
Scrollback support was added to AmigaOS ca. 1987 (with 2.04), and enabled on machines with 512KB RAM and a 7.xx MHz M68k CPU... Performance was not a problem then. Of course the highest resolution most people would run it on would be 640x512 back then, with typically 2 bit planes. But data volume has grown much less than CPU speed and memory bandwidth.
Incidentally, AmigaOS' terminal design is worth exploring - it is a fascinating example of layering. And the AROS re-implementation, while not very clean, is also partially object oriented (in C; disclaimer: I wrote part of it) in how it layers the console units from the simplest to most complete (cut and paste support + scrollback). Even back in the 80's performance was good enough for this that AmigaOS started using dispatch-function based OO all over the place, and the AROS re-implementation of the console code uses that method, which is not at all the fastest way of doing it, but it's fast enough even on real M68k's.
Any modern PC is going to be at a minimum several hundred times faster.
All you need is to maintain a linked list of lines, and add code to free or reuse entries when you reach the maximum size of the scrollback buffer. Alternatively, you can just use a ring buffer and wrap around if you want to set a size limit in bytes instead of lines, and maintain indexes into it. It's trivial to do this in ways that don't cause performance issues.
The developers of the VTE widget had a problem a few years ago. The widget was using 16 open file descriptors per terminal emulation instance. This was causing problems for terminal emulator programs that had the architecture of a central server process that does all of the terminals on multiple X displays.
A GUI terminal emulator for such an architecture needs at least two open file descriptors, one for the connection to the X server and one for the master side of the pseudo-terminal. The other 14, it turned out, were being used by VTE's scrollback buffer mechanism, which involves writing data that have scrolled off the top of the screen out to (temporary) files.
They had managed to reduce this, by rearranging the structures of the scrollback files, to 8 open file descriptors per emulation by 2011, and reportedly it will soon be down to 4.
Interesting tidbit #1:
It was mentioned elsewhere in this discussion that the alternate screen on many terminal emulators has no scrollback. This is because the programs that switch between primary and alternate screens aren't actually doing that as far as they are concerned. They are switching between scrolling mode and cursor addressing mode (see http://superuser.com/a/715563/38062 for details), the latter not really having the concept of a negative row coördinate.
The VTE widget was using twice the number of open file descriptors, because both scrolling and cursor addressing modes had scrollback files.
LXTerminal has this single centralized emulator process architecture, too. It has a rather nasty open file descriptor leak with which one can render LXTerminal completely unusable in about 1 minute (if one has an open file descriptor limit of 1024).
I've already tested alacritty and it's super easy to configure; the only thing stopping me from making the move is scrollback. I understand not adding tabs or a GUI config: I use my WM to do tabbing on Linux even though my terminal implements tabs. However, scrollback is an absolute must for me. If you implemented scrollback I would be able to switch terminals from Terminology (at first glance).
I use tmux a lot, but I don't use it for every session. To me, forcing me to use tmux to get scrollback is precisely violating the idea of 'one tool doing one thing'. You're forcing me to compose a Swiss army knife (tmux) into situations where all I need is a knife (a terminal that works well with the idioms of my environment).
I don't need tabs (I use i3wm), I don't need splits or session management (when I need that I use tmux). But I do often need to scroll back on a temporary session when I didn't plan for it in advance. I open a lot of terminals. They're never all going to be tmuxed.
I respectfully disagree. Alacritty follows the Unix philosophy of doing one thing, and doing it well. I used to think that terminal scrollback and tabs were great ideas -- but switching to tmux changed my mind completely. Tmux is so much more capable for managing your session history. The terminal's tab and scrollback features can never match this. They're just bloat :p
> I respectfully disagree. Alacritty follows the Unix philosophy of doing one thing, and doing it well. I used to think that terminal scrollback and tabs were great ideas -- but switching to tmux changed my mind completely. Tmux is so much more capable for managing your session history. The terminal's tab and scrollback features can never match this. They're just bloat :p
This is the problem with the Unix philosophy: it applies differently for different people.
For example, I use i3, which is a tiling window manager, so I don't need tabs or split management from tmux. Actually, I dropped tmux when I started to use i3, because using i3 features feels much more natural, since they apply to every window. However, there is one thing that I can't get with i3, and that is a scrollback buffer, so while I don't need a terminal with tabs, I do need support for a scrollback buffer. Adding tmux just to get scrollback goes against Unix philosophy.
> Adding tmux just to get scrollback goes against Unix philosophy.
I agree. Rather than adding scrollback support to alacritty, maybe someone could write an independent program for scrollback support (a la dtach/abduco for detaching/reattaching)?
Such a program would be useful for all terminals which lack scrollback (alacritty, st, possibly others).
For the record I use st with dtach and dvtm; scrollback is supplied by dvtm, but it would be nice to decouple it some more.
100% agree, to the best of my knowledge there exists no tool that adds scrollback support to a terminal emulator without doing anything else. My solution now is to use tmux for this, but it is not really elegant. Piping everything through less is not an option :P
If you want decoupling, look at the AmigaOS design: The console (terminal) consists of a bunch of independent elements:
- Device drivers feed raw input to the input-handler
- console.device manages a single console window. It receives raw input from the window's message port (courtesy of the input-handler) and "cooks" it into escaped character streams (which can include things like mouse reporting), and processes simple output that it translates into window output.
- console-handler receives the escape codes and interprets more complex sequences before passing the result on to the application that has the console open, and writes output back to the console.device.
Most of this would run in separate threads.
This lets any application open special filenames like "CON:" to open a console window.
Within console-handler, multiple different "units" are layered - in AROS (AmigaOS-compatible open source), the basic unit (no copy and paste, no reflow, no scrollback), the unit with copy and paste and reflow, and the unit with the above plus scrollback are layered on top of each other via inheritance, using a system-wide OO system modelled after Commodore's old BOOPSI (basically a simple vtable-based approach with a "catch-all" method dispatch entrypoint for user-added methods; it's not fast with deep inheritance hierarchies, but it's fast enough for this kind of use).
The copy and paste itself is implemented via a separate tool - ConClip - started on boot (and optional; if you don't start it you simply don't have cut and paste), which receives messages about what to cut and paste and writes the data to clipboard.device, which by default writes each cut/copied piece of data to the CLIPS: volume as IFF-formatted files. CLIPS: by default maps to a directory in a ram-disk, but it can be re-mapped elsewhere. This all happens asynchronously, to accommodate cases where people e.g. remapped CLIPS: to a floppy and had to swap it in (rare, but possible if you had to deal with low-memory situations).
This is something that frustrates me to this day with Linux etc. - AmigaOS was far more decoupled, with clear, well-documented boundaries for people to hack on (e.g. several people wrote alternative console-handlers and console.devices that could totally replace the originals, to the point that any application that used terminal windows could be made to use your preferred console device). Even third-party components tended to follow this approach (e.g. compression in AmigaOS is usually done via the XPK suite of libraries, which provided a third-party API for opening compressed data streams that let you plug in any compression algorithm as a library - as a result most apps in an Amiga system that support compression can support most compression algorithms you drop in a system-wide library for).
Thanks, that's really interesting. I grew up with Amigas exclusively until getting a family PC around 2000, although I didn't tend to use the CLI or do any programming back then.
I'm aware of BOOPSI, and the datatypes system which sounds similar to what you describe.
One problem on AmigaOS was(/is?) the lack of packaging and dependencies, e.g. installing many programs on a fresh copy of Workbench won't work, due to missing libraries, etc. Thankfully that's easier to manage these days by scouring Google and Aminet, but it's still manual.
Interestingly, I've found Amigas to become more stable over time, unlike e.g. Windows where some people recommend formatting every year or so to remove cruft. The more stuff you install in Workbench, the more libraries, etc. you accumulate, so the fewer problems you encounter trying to install/use other things. I'm not sure if this is a consequence of the OS design, or from developers bending over backwards to avoid problems (e.g. conflicting names, etc.)
Yes, BOOPSI was the model for the OOP used in AROS.
> One problem on AmigaOS was(/is?)
Is, sort of. Package managers didn't enter the scene until much later, but Commodore did release Installer, which, while not a package manager, provides an s-expression-based mechanism for describing installation flow.
It's alleviated because Amiga libraries tend to insist very strictly on backwards compatibility, so you should generally be able to drop a newer version of a library over an older version and things will keep working (and the libraries and all compliant binaries contain version numbers).
But of course the community today is very small, and was smallish originally too, and so it's gotten easier and easier to deal with.
If there was to be a resurgence (there is new hardware but it's expensive niche PPC hardware; AROS runs on pretty much "anything", but is incomplete), it'd need a lot of big overhauls - in particular memory protection (some work is ongoing but it's hard due to AmigaOS APIs relying a lot on pointer passing) and SMP, but also lots of tooling we take for granted today like package management.
I'm not holding my breath for that, but I do wish more AmigaOS ideas would get picked up elsewhere. Linux still feels like a hodge-podge in comparison.
I use i3 too, but it doesn't replace tmux or screen for me for a simple reason: I can't maintain state for a remote server as i3 windows. My screen sessions outlive my laptop uptime by years. An easy API to let a remote terminal management tool create child windows/tabs would be fantastic...
When I need state in remote servers, of course I still use Tmux. I was referring to local Tmux sessions, that I mostly used for tabs/splits before I started to use i3.
> Alacritty follows the Unix philosophy of doing one thing, and doing it well.
That principle is often misapplied, and I think that's true here, too.
The "do one thing" about Unix is really about composability (e.g. "find" doesn't need to sort because you do "find | sort"), but you don't compose a terminal app with anything.
A terminal app that has terminal features doesn't violate any principles of simplicity.
That particular example of find not needing to sort persistently annoys me. sort doesn't know anything about the structure of its input, so it has to read and buffer all of it before it can sort it. find knows that its input is a tree of strings, which it could exploit to produce sorted output at the cost of buffering one directory's worth of filenames at each level of the tree.
It's rarely a significant problem in practice, but it annoys me in principle!
To avoid cluttering "find" with a sorting interface, we could use the modern technique of processing push-down:
If you do "blah | sort", then "sort" could ask its upstream processing node whether it supported sorting on the requisite fields, and "push down" the necessary sort-order descriptor into the "blah" step.
That requires two things: That the pipe API sets up a communications channel between the two programs in a way that makes them aware of each other and able to exchange information; and secondly, that the pipe protocol is based on typed, structured data. I want both things.
Imagine if you had that. Then you could conceivably also do something like:
psql -c "select firstname, lastname from foo" | sort -k2
and psql would automagically rewrite its query to:
select firstname, lastname from foo order by lastname
That's the future I want to live in, anyway.
The inability to do this sort of thing is really a product of a failure to modernize the 1970s text-oriented pipe data model. I believe PowerShell (which I've never used, only read about) provides a mechanism to accomplish this sort of thing, at the expense of being extremely Microsoft-flavoured.
I don't think there's anything even vaguely scifi about those abilities, but the Unix world is hampered by a curious reticence to innovate certain core technologies such as, well, Unix itself. That's why we still have tmux and such.
... or the downstream program could ask the upstream one (or get it automatically along with the stream) about metadata/type information for the stream it is being passed, and then it could benefit fully from already-known information. Though that does not solve the need to potentially read the full stream and buffer it before doing the processing.
I use st as my terminal application. Inside, I run dtach to provide detaching/reattaching functionality. Inside that I run dvtm to provide multiplexing and scrollback. Inside that I run bash. Inside that I run ad hoc commands.
Everything's highly composable, e.g. I can switch out bash for zsh, fish, etc. I can switch out dtach for abduco. I can switch out dvtm for tmux or screen. I can switch out st for xterm or urxvt. And so on.
Adding an extra layer for scrollback, separate from a multiplexer, wouldn't disrupt anything, and would provide more flexibility for composition.
Each to his own. Your setup sounds like a parody of the most outlandishly neckbeardy things devs can do in a shell. Most users don't want to deal with that sort of "layering".
> Your setup sounds like a parody of the most outlandishly neckbeardy things devs can do in a shell.
I shaved off my neckbeard, I'll have you know! ;)
My setup's no more outlandish than using tmux or screen, except instead of typing `tmux` or `screen` I type `shell`, which aliases a `dtach dvtm` one-liner (with a few options sprinkled around, so I don't have to bother with config files).
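For the curious, a minimal sketch of what such an alias could look like (the socket path and options here are just examples, not my exact setup):

    alias shell='dtach -A /tmp/dvtm-session -r winch dvtm'

dtach -A attaches to the named socket, creating the session (and launching dvtm inside it) if it doesn't exist yet; -r winch just picks the redraw method.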
The point is that none of these applications care if/how they're composed; if I want to add in or swap out something, it's just a change to that one-liner.
Not so if, say, my terminal application were hard-coded to rely on tmux, as some sibling comments have suggested.
As I pointed out elsewhere, in AmigaOS you actually did often compose the terminal/console handler with other software... The reason we don't in Unix-y systems is that we've gotten used to terminals implemented as standalone applications rather than as reusable components that integrate well with other things.
In AmigaOS it was fairly common for applications to reuse the standard console handler - the same one used for the shell - to provide a custom console or even editor windows, etc. For minimal integration all it takes is to read from/write to a filehandle.
That said, though as I noted elsewhere, even AmigaOS got scrollback in the console handler by '87 - the extra code measures a few KB; it'd be more hassle than it was worth to split it out.
> A terminal app that has terminal features doesn't violate any principles of simplicity.
It still depends how these features are implemented.
For example in dvtm scroll back history is made searchable by piping it to $PAGER. Similarly, copy mode is implemented by using $EDITOR as an interactive filter.
This wasn't a feature of the original video terminals these apps are emulating, though. I'm pretty pro-scrollback, but I have to say the way it's been implemented so far has frankly not been very principled.
Well, the difference all comes down to how one views tmux, and I'm on the side of hating it. I personally use tmux only to background applications, and any of its other features are "bloat" to me.
After reading everyone praising tmux, tmux, tmux, I tried to use it several times, for several days or weeks each time. The thing only got in my way, to the point that now I prefer to lose sessions than to use tmux. It has to be a really important, peculiar, and heavy operation for me to still reach for tmux on occasion.
Not your parent, but here's my take. For remote usage, tmux doesn't provide much beyond vanity over screen, and is less likely to be installed on a shared host. Locally, I generally get by with &, bg, fg, and Ctrl-Z if I absolutely can't open another TTY, and a tiling window manager (or Emacs) provides a superior tiled workflow for the applications I use.
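Concretely, that job-control workflow is just something like this (the command name is a placeholder):

    $ some_long_task     # runs in the foreground
    ^Z                   # Ctrl-Z suspends it
    $ bg                 # resume it in the background
    $ jobs               # list background jobs
    $ fg %1              # bring job 1 back to the foreground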
There are some cases where tmux could help me, but my X session and terminal emulators are stable enough where I'm not worried about them crashing and interrupting my shell session. As such, I have no need for tmux, and using it just for terminal emulator scrollback seems hamfisted.
The only use case I can think of is running large distribution updates which might potentially pull the rug out from under your graphical session, but I tend to run those on a non-X terminal if they look sketchy.
Edit: I'd totally consider using tmux as an alternative to X on my Raspberry Pi, but if I have X/Wayland, it doesn't offer me much.
It's not cheating if you gain those features back by using a separate program (tmux) and still see a performance improvement over all other tools, which is what the author is implying. All that does is suggest that terminals might not be the proper layer at which to implement those features (and for the record, I'm on the side of having scrollback, but the lack of it isn't a dealbreaker).
I'd be very happy if you more closely integrated with tmux and transparently used the features from tmux instead. I don't care how the terminal does scrollback, as long as it does. If you forwarded the scrolling commands to tmux then great. Same thing with tabs, panes, etc.
As much as I like tmux (although I exclusively use it via byobu), the single most annoying thing is that it won't let different viewers see different content. (e.g. start two terminal emulators, run tmux in both, and any switching you do in one affects the other. There are supposedly elaborate workarounds, but far too much effort.)
In Byobu, you simply need to create a new session. Ctrl-Alt-F2. Then Alt-Up/Alt-Down to move among sessions. Each user can have their own session easily like this! Shift-F1 for the hotkeys, if you need a reminder ;-)
Ctrl-ALT-F2 would take a miracle to work. Shift does as you describe, but the net effect is still nowhere what is intended.
Also note I am the same user. The functionality I want is how byobu behaves with screen. eg you can start 3 xterms, in each one run byobu. And each one can jump around as they see fit all sharing the same screen session. No work, no fuss and exactly sane.
The tmux behaviour baffles me. I can't understand why anyone would want all their viewers to change in sync. Short of a classroom demo environment, it really doesn't make sense as a default.
I'd be delighted if by default byobu did whatever was needed to make tmux behave usefully.
Interesting... Frankly, I love the tmux default of sharing the window, while also supporting the concept of "sessions". The shared view makes pair programming with a colleague across the world, while on the phone or a video conference, super easy. Different users start in the same "session", and share the view of "windows" and "panes (aka splits)". If they want separate views/control, then they start separate sessions.
For your usecase I can understand. But when it is the same user using defaults it makes absolutely no sense. Why would they want separate xterms on the same display to be in lockstep by default?
Am I the only one who runs more than one terminal at a time? What happens now is I start an xterm on monitor #1 and start byobu within that. Various windows (whatever you switch amongst on pressing F3 and F4) are started - eg one might be client code running, one might be server code, one might be a database server etc. But sometimes for example I want to look at the client code output and server output simultaneously. At that point I switch byobu to the client window, and start a new terminal on monitor #2 and tail logs or whatever is appropriate. It is an annoying pain that I can't just run byobu and switch as I see fit.
In any event this is a multi-year frustration for me. People keep coming with convoluted workarounds (pointing tmux to tmux as far as I can tell) which it then isn't possible to figure out how to apply to byobu. All the while I wonder why two xterms running next to each other would ever want to be in lockstep by default?
A concrete example of what I mean. The user starts 3 xterms each running byobu with no additional parameters:
for i in 1 2 3; do xterm -e byobu & done
Why would they want the view in each xterm to be identical, and why would they want changing the window (is that the right terminology? pressing F3 and F4) in one xterm to cause the other two to change in lockstep?
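(For reference, the usual answer with plain tmux is grouped sessions; something like the following, where the session names are just examples:

    tmux new-session -s main                 # the session that owns the windows
    tmux new-session -t main -s viewer2      # a second session in the same group

Grouped sessions share the same set of windows, but each session tracks its own current window, so two attached terminals can look at different things.)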
Ok, now make byobu do that since it is how I actually use tmux. Also make sure it happens on every system I use. This gets tedious and annoying very fast. Heck mouse support not being on by default, requiring a single line in a config file on every system is already annoying enough.
The problem isn't that it is possible in theory. It is annoying in practice, unless you only have one machine and only need to do all this once on one system.
> There's been a lot more pushback on the scrolling decision than I had anticipated.
Even if tmux is in theory a better solution, it's not such a radically better solution that the benefits outweigh the switching cost. Terminals with scrollback have been around at least 30 years at this point [1], and scrollback is by far the majority use mode. You're asking 95%+ of your potential userbase to spend a bunch of time retraining themselves for little or no benefit.
Scrolling down and back is a key part of the web user experience. Even if every terminal emulator got rid of scrollback, most of your users would still have scrolling back baked into their brains. When they want to see something that just scrolled past, they are going to do what they do in a web browser, which is to look for a scroll bar, hit a scrolling key, or perform a scrolling gesture.
That's not to say you shouldn't try radical things. But if you want user adoption, you have to make sure the benefits you offer are very much larger than the costs you impose. So radically different UI can't be about as good as the existing one; it has to be radically better.
[1] And of course terminals are made to emulate teletype machines, which had infinite scrollback to begin with.
Scrollback is the only feature missing that would block me, assuming you permit changing the colour mappings.
I generally run five or six desktop terminals, only one of which is running screen locally. I more normally run something like screen to get persistence across suspends on an SSH connection. I usually don't do much side by side stuff in terminal so tmux isn't a big win for me. Also tmux didn't run on cygwin for the longest time, and I expect the same experience on all my platforms.
However the big tmux users in my life would still use it as is.
I'm with you on this one. If you decide to add scrollback later, please at least make it optional.
I am using Firefox with Pentadactyl, and Firefox is configured to be as minimal as a terminal like st. Basically, Firefox becomes my GUI terminal. But what I want is a much lighter-weight GUI terminal than Firefox.
I am using three terminals: st for regular use, mlterm for its image support (sixel), and Firefox for heavy-GUI needs. Alacritty seems perfect to replace st now.
> I worry that a "simple" feature like this may be overly complex internally. Performance with large amounts of output is also a concern.
Just do it. I doubt it will be hard to beat tmux performance, in my experience it's slow like molasses and I only use it if I must - remote persistency, screen sharing, etc.
I love tmux since I regularly move between OSX and linux machines across several versions. With tmux I can create one .tmux.conf file that standardizes my tab/split behavior across all of the systems regardless of the emulator I use. This works a lot better for me than having to install specific emulators on each machine, several of which I do not have administrative rights over.
> I don't use tmux, nor do I want to (though occasionally I have to use screen as a hack to keep programs running on remote servers, and I hate every second of it). Solutions like tmux arguably exist because terminals have poor UIs, and the terminal protocol is too weak to form the foundation for the kind of interactivity and statefulness provided by modern graphical UIs. If terminals were as powerful as, say, web browsers (not that I'm suggesting that anyone conflate them), the world would be a different, happier place.
I have the opposite impression. I actually enjoy time spent in my tmux + vim setup, while I hate the time spent in a web browser. Web browsers seem bloated to me and I always feel like I'm forced to use the mouse too much while using them. I've heard about Vimperator, but every time I've tried it, it seemed poorly integrated (the authors did an incredible job nonetheless).
The only drawback of delegating scrolling to tmux that I see is that my experience running tmux locally and using ssh to use a remote machine with another (remote) tmux session wasn't great. Keyboard shortcuts were usually caught by the local machine and not by the remote one. So I just don't run tmux locally when I want to ssh into a remote machine. Problem solved more or less.
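For what it's worth, the usual workaround for nested tmux is to forward the prefix to the inner instance. A minimal sketch for the local (outer) ~/.tmux.conf, assuming the default C-b prefix on both ends:

    # press C-b twice; the second C-b is passed through to the inner (remote) tmux
    bind-key C-b send-prefix

With that, C-b C-b <key> drives the remote tmux while a single C-b still controls the local one.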
> Solutions like tmux arguably exist because terminals have poor UIs
Whilst I totally agree with you, I think tmux is a lot like vim in its power. Along the same lines, I'd wager a gTmux, much like gVim, would be a really nice way to multiplex terminals when we get a UI that beats the TUI.
I mean Terminator is basically gTmux, right? They aren't quite the same: tmux is a terminal multiplexer, Terminator is a terminal emulator with tabs and panes. Both approaches have benefits and drawbacks.
If you want keyboard only, I don't think GUIs can beat TUIs. But yeah, there's a learning curve.
The problem with GUIs is a very display-session centric view. tmux/vim work fine over SSH, Terminator/gVim don't. With tmux, your sessions are separate from your terminal instance. If your X session crashes, depending on how you started tmux, you just have to relaunch a terminal and reconnect to tmux. This is pretty invaluable. So this separation is powerful, and IMO very Unix-y.
The multiplexer vs. emulator distinction is the key difference, at least for me. Exactly because of the ssh and session persistence you mention.
For the thing to replace the TUI, I'd expect something in between current terminals and X11. With the simple, limited (and thus easily remotable) data of the current terminal emulators, but with much more drawing capability than the current grid-limited ASCII art.
I came to this idea when trying to get vim up to a full IDE. Trying to get even half of NetBeans' interface into vim just takes so much space in the ASCII grid. And anything dynamic moves half the screen a shitton.
Ok, so you're interactively using a program on a remote server and you don't want it to die if you lose your connection? Screen isn't a hack in that case :)
It presumably doesn't feel like a hack to you because you're used to this mindset.
To me, it's a deficiency in the lack of session management for SSH. All that SSH gives me is a two-way pipe to the other server's I/O. That's simple and elegant, but why does it create a new pipe every time? It's connection-oriented, which is a concept that hasn't seen any innovation since the 1970s.
My preferred innovation here would be a local shell that had remote access. Rather than pipe I/O to the remote shell, give me a local shell which happens to execute its verbs on the remote machine, and let the remote file system simply be a volume. All my session state (including command history) can be local, there's no need to keep that on the remote host. A remote host is just another context.
I believe Plan 9 tried something similar, but very few people have picked up on its innovations.
~ $ cd /ssh:tol-eressea:.
/ssh:tol-eressea:/home/db48x $ ls
db48x.net libvirt-sandbox rpmbuild zone.sh zone.sh~ zone.txt zone.txt~
If you've set ssh up to use control sockets then it can reuse existing connections.
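For reference, that's a few lines in ~/.ssh/config (the socket path is just an example, and the directory needs to exist):

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h:%p
        ControlPersist 10m

The first connection to a host becomes the master; later ssh/scp invocations to the same host ride over the existing connection instead of setting up a new one.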
MOSH is also nice; it decouples the program you're running from your connection, just like screen does, and it also uses udp instead of tcp so that it doesn't have to worry about dropped connections. Even your client's ip address can change and everything keeps working.
> Features like GUI-based configuration, tabs and scrollback are unnecessary. The latter features are better provided by a terminal multiplexer like tmux.
This seems kind of like saying "everyone should use their computer the same way I do". I don't use tmux; maybe I should learn to use it. However, I seem to be getting along fine without it, and no scrollback is a deal-breaker for me.
That said, this is still a cool project and I wish them success.
> This seems kind of like saying "everyone should use their computer the same way I do".
People do have that attitude from time to time; in this case I'd phrase it more like "no one seems to use their computer the way I want to, so I wrote some software".
Firstly, you insinuate that the author is requiring people to interact as they do - this evidently is not the case; a suggestion has even been made of one of many ways to behave differently.
Secondly, the "I seem to be getting along fine without it" statement pointlessly hampers progress. There is no basis or reasoning behind it, just a decision - whimsical by the looks of it - not to use it. You could say that you don't anticipate the gains of the system to be worth the transition cost, or you could actually try it and have some useful criticism, or any number of other things.
Thirdly, the linked page never mentions the 'minimal' your parent introduces. Minimal implies sufficiency (least sufficient, but sufficient nonetheless); the Alacritty page says simple - which does not.
I find the final comment hilarious. You have just denounced a tool based on an implementation triviality (which can be easily bypassed) and choose to summarise with a statement as undoubtedly false as it is trite. Did you read the page?
I think it's a cool project, but I personally wouldn't use it because it lacks a feature I expect terminal emulators to have and am not motivated to change how I use my computer. Other people may be happy with this program. If someone implements the feature I want some day, then I might use it as well.
I'm sorry if I came across as overly critical of the project, that wasn't my intent. I just think they're limiting their audience by assuming that a feature that isn't important to the developers is unimportant to users because a workaround exists that the developers are satisfied with. People really don't like to change how they do things, regardless of whether the way they're doing them now is "the right way" or "the best way". Maybe they're making the right decision and everyone really should be using tmux, and if so, great, they'll have have a community of happy users. If their target audience is someone other than me, I'm okay with that.
Moreover, tmux does its own output parsing, so when you do your `cat 1gb_file.txt` inside tmux inside a terminal, you have two layers of output parsing happening. I can't see how that doesn't impact the performance that is claimed for Alacritty. But perhaps tmux is really fast.
I wouldn't trust a third party application to be part of the performance experience I'm claiming for my own application, though.
Couldn't you say that about SSH? If whatever is reading your stdout is taking forever, your application will block when it tries to write to stdout with, say, printf or fprintf.
tmux does provide full scrollback with mousewheel support too.
# Enable mouse support including scrolling
set -g mouse on
# Versions prior to 2.1 may want this too:
set -g mouse-utf8 on
set -g history-limit 5000 # 5000 lines of history per pane. Adjust as needed.
It's not as good as native scrollback though. For example, by default, as soon as you select something in tmux, the selection goes away, and you have to hit a tmux-specific keybinding to paste it back into the terminal. That's never what I want! If I'm selecting something it's probably because I'm going to copy it to my system clipboard. I think you can disable this part, except of course what you're left with at that point is a tmux selection, which is NOT a terminal selection, meaning you still can't copy it to your system clipboard.
Also, scrolling is somewhat unreliable, although I still haven't figured out why.
In any case, I use tmux in some of my terminal tabs, and very frequently I have to hit ⌘R to disable terminal mouse support just so I can select & copy something without tmux interfering (I could also hold down the Fn key, except I use an external Das Keyboard, and the Das Keyboard folks still haven't figured out that their Fn key should actually behave like Apple's Fn key and let the system know when it's pressed by itself, as opposed to what it does now which is simply modifying the keypress events for other keys without sending any independent event for the Fn key).
Apple's Fn key isn't ideal. The USB HID usage ID is not one from the keyboard or consumer key pages; it's in one of the "vendor defined" pages, meaning that every keyboard driver supporting it has to specifically recognize the device vendor and model, because without that context one cannot know what a vendor-defined ID means. Every new keyboard supporting this requires a device driver update across many operating systems, as well as firmware updates to machines whose firmware recognizes this vendor-defined ID as a keyboard key, all to add another vendor ID to the drivers' lists of "this vendor+device has Apple's Fn key".
Still, Das Keyboard has been making a Mac-specific keyboard for many years now, you'd think they'd at least reach out to Apple to see if they can get their keyboard recognized as having a Fn key (or, alternatively, provide a kext that adds support themselves). But the one time I asked them about it, they didn't seem to even care about how the Fn key behaves, so I doubt they've even made an attempt.
Be that as it may, the same argument applies to vim (I use vim, with at least some of its keyboard-shortcut glory, and love it); it might be faster in a lot of cases, but at the end of the day editors like Sublime, Atom, Brackets, etc. all have a much larger user base because people aren't willing to learn.
No, they're criticising tmux as a proffered alternative to a full-featured terminal emulator, which is often used by non-power user programmers who are not willing to learn (like me).
I switched to tmux a few years ago and never looked back. What is your problem with mouse? I have quite basic tmux config, very basic terminal emulator (st) and enjoy mouse scrolling even in nested tmux scenarios. Selection is done with vim keybindings which is much faster than mouse especially if you scroll and look for something visually.
I can relate to your feelings. In the beginning I was _very_ skeptical about running tmux locally. But very quickly I reconfigured tmux to my liking and stopped noticing at all whether I was working locally or remotely. Everything has been very smooth and pleasant since then.
The only bad thing I remember: default tmux keybindings suck. I just redefined almost everything.
> The only bad thing I remember: default tmux keybindings suck. I just redefined almost everything.
It's true, e.g. ^B / Ctrl+B is a terrible prefix. All of this just makes tmux harder to learn, and less portable/transferable. I think there's value in having to feel out the ideal configuration for your workflow - but the prefix is still inexcusable.
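For reference, the usual fix (just an illustration of the kind of reconfiguration meant here; pick whatever prefix suits you) is a few lines of ~/.tmux.conf:
# use Ctrl+A as the prefix instead of the default Ctrl+B
unbind C-b
set -g prefix C-a
bind C-a send-prefix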
Are y'all on Linux/xterm or something? I've had trouble getting this to work in OSX with the default terminal. At one point I got it to partially work by installing some scary looking plugins, but it broke other mouse behavior for me (copy/paste I think).
I'm on OS X, and mouse support, vim bindings, clipboard, and extended scrollback in tmux "Just Work™" in both Terminal.app and iTerm2. The only external software that I had to install separately is reattach-to-user-namespace to get the clipboard working.
The only problem I encountered was really bad kernel panics when the tmux server exits (i.e. last session is closed), but it has been fixed as of tmux 2.1 as far as I can tell.
I'd say give it another try; the problems you were having might no longer be an issue now.
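For reference, the reattach-to-user-namespace bit mentioned above boils down to roughly one ~/.tmux.conf line (a sketch assuming zsh; substitute your own shell):
# macOS: run the default command through the wrapper so pbcopy/pbpaste work inside tmux
set -g default-command "reattach-to-user-namespace -l /bin/zsh"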
I never managed to actually like tmux. It breaks too much of what I'm used to, like mouse wheel scrolling, Ctrl-arrows, ESC in Vim and a lot of other small things.
The ESC thing is fixable by configuration, at least, but I really doubt I can make tmux behave like my native terminal.
I'm not quite sure what you expect Ctrl-arrows to do, but I'd be surprised if tmux can't be made to keep that behaviour.
If you spend a lot of time in the terminal, there's a lot of value to be gained from using tmux. It takes some configuration to get value from it, but there is value there.
tmux has copy-mode (^B : copy-mode <enter>) which lets you scroll through history and copy out snippets without using the mouse. There are also some tips out there for nested tmux sessions.
For what it's worth, I use tmux just like the author of this project: a single urxvt terminal with tmux running inside (no scrollbars, tabs, menubars, etc) and it works for me.
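If it helps anyone getting started with copy-mode and scrollback in tmux, the two lines I'd consider the minimum (illustrative values, adjust to taste):
setw -g mode-keys vi          # vim-style movement inside copy-mode
set -g history-limit 50000    # scrollback lines kept per pane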
At some point, every application (browser, terminal, tmux, ...) re-implementing their own version of tabs is kind of silly.
Ideally "look at 'window' X of application Y" (where window == window / terminal tab / tmux pane / whatever) is what the local window manager should be used for.
E.g. instead of 1 tmux session with 4 tabs in it (or 1 terminal window with 4 tabs in it), I will run 4 mosh/tmux sessions (or 4 terminal windows) with 1 tab each, and use my regular window manager commands (i3) to switch between them.
In theory this is a better separation of concerns, and I can have one set of key bindings to do all window switching. Not:
1. Use window manager keys ctrl-foo to get to the right desktop/terminal window
2. Use terminal keys ctrl-bar to get to the right terminal tab with my tmux session in it
3. Use tmux keys ctrl-zaz to get to the right tmux pane
(Obviously a contrived example.)
...that said, I still use a crap load of tabs in Chrome, so theory != practice.
I avoid tmux and screen too. Instead I use SSH multiplexing (great for rsync tab completion too), and let the window manager do its job.
All I need is an orthogonal solution for persistent shells (how about I persist arbitrary login process trees, graphical or textual, OK?) And I'll truly have no need for tmux.
Then again I don't ssh too often for too long, so the last part is endlessly low priority.
Surprised people consider tabs and scrollback similarly basic features. As someone using only basic Linux terminal emulators like x/u/aterm or urxvt, it's interesting to see that people consider tabs a basic feature, while I don't know a single term without some scrollback.
Really makes you realize how different people's expectations are.
Yeah, it's less "unnecessary", more "provided elsewhere" - in this case, he's saying by tmux... which does mean learning a bunch of new keybinds, which is a little upsetting.
I just wanted to say that this looks like a fantastic and very cool project! Congratulations on the speed.
Personally, the lack of scrollback and tabs is a dealbreaker for me. I know that I'm supposed to use tmux for that, but I can never remember how tmux scrollback and tab switching work without thinking about them. Plus I rely heavily on mouse selection of multi-screen text in the scrollback buffer. So I'm unlikely to be part of your target audience. (However, I could live without GUI config and menus, because I configure my terminals once every few years at most.)
Also, a silly question: Do you support color emoji in the Terminal? I've never quite managed to get it working on Linux.
I like opinionated software. If scrollback is better implemented on a different layer, leaving it out is reasonable.
But I'm not using tmux because I use i3 as my WM, which does all the tiling/splitting. Running a new tmux instance for each of my terminals (there can be dozens) only for scrollback doesn't seem like the right way, though. Any recommendations on what The Right Way would be here? Anything that only implements scrollback, maybe?
Just to present an alternate opinion here: I haven't used my terminal's scrollback on purpose in 5+ years now, and it's fine. To me, Alacritty's trade off is perfectly acceptable, and even desirable.
As far as I can tell, using Tmux's scrollback instead has no downsides of note, but some _very_ significant upsides. For example, (1) shared buffer between terminal windows, and (2) copy/paste modes that are usable with Vim shortcuts.
Hm. So, for that to work, I'd basically have to forever hardcode the terminal to launch tmux every single time. Basically, this new terminal + tmux = the old terminal behavior.
I'm not saying this is necessarily a bad thing (haven't tried it), I'm just saying this is the way to get my scrollback... uh... back.
Might as well ship the terminal with tmux as a hard dependency and launch a tmux child process by default.
> Hm. So, for that to work, I'd basically have to forever hardcode the terminal to launch tmux every single time.
Yeah, but it's not as bad as it sounds once you "move down a layer" and treat Tmux like your terminal manager rather than your terminal. When I need a new shell, I don't open a new terminal window; instead I open a new Tmux "window" (the spiritual equivalent of a tab) and do the work there. I keep the same two terminal windows open with nested Tmux sessions for months at a time. By extension, opening new Tmux sessions is also an extremely rare event because the ones I already have are persistent.
> Might as well ship the terminal with tmux as a hard dependency and launch a tmux child process by default.
I think it's still nice to leave some room for customization here. Screen is still a pretty good Tmux alternative for example (in fact, five years ago you would have said that Tmux was a screen alternative rather than vice versa), and some people might prefer to use that instead.
You can also set tmux as your default shell with chsh. I've been doing this for a while on macOS and have enjoyed it (other than user namespace issues that still drive me crazy).
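An alternative that avoids chsh entirely (a sketch, assuming bash or zsh; the session name is arbitrary) is to exec tmux from your shell rc:
# only when not already inside tmux
if command -v tmux >/dev/null 2>&1 && [ -z "$TMUX" ]; then
  exec tmux new-session -A -s main   # attach to the "main" session, or create it
fi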
I may be reconsidering this position[0]. If scrolling support can be added in a non-intrusive way behind a feature flag (so it could be completely compiled out), it could potentially have a place in Alacritty.
Sounds great! I totally understand the motivations behind the decision to leave it out, but it's a great candidate for a compile time flag. I would love to throw out gnome-terminal for this!
The use case seems to be using tmux inside the terminal emulator; with tmux you'd use its own scrollback buffer.
(edit) From the project's GitHub page, in fact:
The simplicity goal means that it doesn't have many features like tabs or scroll back as in other terminals. Instead, it is expected that users of Alacritty make use of a terminal multiplexer such as tmux.
The problem with this is using tmux screws with using mouse for selection (since tmux takes over the mouse and does its own selection thing, which usually doesn't do what I want).
This approach also means you can't do anything interesting like what Terminal.app does with detecting prompts, marking them, and letting you jump back to them (or clear history back to them).
This approach could be excused if the terminal actually natively integrated with tmux, thus providing its own gui splits/tabs that represent tmux's panes/windows, but it doesn't sound like it does that.
I just disable tmux's mouse usage by putting "bind m set -g mouse off" into my tmux.conf: tmux is useful enough just with key shortcuts, and I generally dislike terminal programs that use the mouse anyways.
Also, some terminal emulators (iTerm2 on Mac) let you disable mouse grabbing by holding a special key while clicking, maybe Alacritty could implement something like this?
It does, but it's not completely reliable for me. Also, you have to configure it to make it actually work.
For example, here's my mouse-related tmux config:
set -g mouse on
# enter copy-mode by scrolling, but don't select the pane
# The usage of #{mouse_any_flag} just forwards mouse events when in a fullscreen app that wants them
bind -n WheelUpPane if -F -t = "#{mouse_any_flag}" "send-keys -M -t =" "if -F -t = '#{alternate_on}' 'send-keys -t = Up' \"if -F -t = '#{pane_in_mode}' '' 'copy-mode -e -t ='; send-keys -M -t =\""
bind -n WheelDownPane if -F -t = "#{alternate_on}" "send-keys -t = Down" "send-keys -M -t ="
# Start copy-mode with PageUp
# For PageDown, if we're not in copy-mode, discard it
bind -n PageUp if -F "#{alternate_on}" "send-keys PageUp" "copy-mode -eu"
bind -n PageDown if -F "#{alternate_on}" "send-keys PageDown" "if -F '#{pane_in_mode}' 'send-keys PageDown'"
tmux handles mouse events very well. Scrolling, pane resizing (in tmux and vim), and selection all work once you add one line to the config. For me it works so perfectly that it even carries through nested tmux sessions over ssh.
However, some people complain about mouse issues. I would suspect that some terminal emulators are trying way too hard to properly handle mouse events.
Hey, nice project :) I do have a question: you mentioned that urxvt is difficult to configure (because of .Xresources format?), but then you also say that "GUI-based configuration is unnecessary", so how exactly is Alacritty easier than urxvt?
One feature that I really miss in rxvt is an easy/fast way to change the color scheme, or at least reverse colors (like with xterm, which, if correctly configured, is just ~3 times slower than urxvt). This is something really important when your screen receives direct sunlight.
Specifically this. Without above-average proficiency with X, the format and available options are likely to be difficult to figure out.
> "GUI-based configuration is unnecessary", so how exactly is Alacritty easier than urxvt
The config file is well documented and in a human-friendly format. Most flags will also take effect immediately without restarting the program.
> One feature that I really miss in rxvt is an easy/fast way to change the color scheme, or at least reverse colors (like with xterm, which if correctly configured it's just ~3 times slower than urxvt). This is something really important when your screen receives direct sunlight.
You could have two `colors` sections in the config file and just uncomment one or the other. Not quite as convenient I suppose. One thing I'm considering as a key-binding option is to exec a command. This could be anything like `sh swap_config.sh` and then you could bind it to whatever you like.
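To make that concrete (the colour values are placeholders and the exact schema may shift before a stable release), it could look something like this in the YAML config, with whichever block you're not using commented out:
colors:
  primary:
    background: '0x1d1f21'   # dark scheme
    foreground: '0xc5c8c6'
# colors:
#   primary:
#     background: '0xfdf6e3' # light scheme for direct sunlight
#     foreground: '0x1d1f21'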
You don't need to implement this in the terminal. You can bind macros to keys in anything that uses readline. Search for "inputrc" and the READLINE section in bash(1). While this is usually used for binding builtin functions (i.e. editing, history), you can simply provide the literal expansion.
# in ~/.inputrc
$if Bash
# <f12> - find this with "<Control-v>{KEY}"
"\e[24~": "sh \"${HOME}\"/path/to/swap_config.sh
"
# (the newline is included, which is
# usually bound to accept-line)
$endif
The string indicating the key to binding to (left of the ":") can change depending on the environment (terminal, os, etc), so use <Control-v> to investigate what is actually being sent by the terminal into readline.
Cool project, but I have literally never thought a terminal was excessively low-performing enough to prevent work from getting done. What applications benefit from a terminal that's even an order of magnitude faster than the alternative?
Have you ever had Control-C take forever to kill that `cat` you accidentally ran against a 1GB log file? I have. Most terminal emulators are 'dumb' and try to render the whole backbuffer sequentially even if what you are seeing is no longer the tail of the output stream.
It's not so much that this is 'fast' (because even gnome-terminal which is not what I'd call crazy fast is 'fast enough' most days), but that it's much more responsive as a result. By locking the terminal refresh to your screen refresh rate and only rendering 'current' data this removes a lot of headaches you can run into with other terminal emulators (like the aforementioned cat of a 1GB text file).
I've done that (we all did), but "can't reproduce". So this seems to depend a lot on the emulator, eg. I'm using Konsole, which is superb, and don't see that problem there.
Konsole is the only emulator I've used (other than an actual tty) that doesn't have this problem. It's actually been frustrating, because there is plenty about it that I don't like.
I'll be giving Alacritty a try shortly - if it does what it says on the tin, it's exactly what I've been looking for.
Thanks, though I've been on linux for the last couple of years.
On the mac side I had other performance issues with Terminal.app (particularly when using all of widescreen + tmux - lots of flicker during move/refresh operations). iTerm2 did well for me though, iirc.
The only real issue I run into is clipboard wonkiness when console apps integrate the clipboard (nvim+tmux).
Beyond that, it has a ton of features - none of which I use since I manage my sessions with tmux. So I guess it's mostly a matter of not wanting a sledgehammer when the right tool is a finishing hammer?
I use nvim inside Konsole/Yakuake and I don't have any issue with the clipboard.
"+p and "+y works fine
Also Shift+Inst and Shift+Supr keeps working fine.
Terminal performance is fine at smaller sizes and with less going on.
In a multi-pane tmux window with vim, performance issues start to become noticeable. Many people I've talked to have experienced a situation where a bunch of output is being written to the screen, they panic to hit C-c, and then all you can do is wait for it to finish. This just isn't an issue with Alacritty.
Alacritty is about having tools that don't get in your way and don't distract you from what you're trying to accomplish.
With the C-c issue specifically, I don't think that emulator throughput is the most likely culprit.
It could be that the emulator is not handling inputs fairly: maybe it tries to process all available input from the pseudoterminal before processing the next batch of keyboard input. Or it could be that the pseudoterminal (kernel driver) is not configured to send the signal as expected. Or it could be that the process you're trying to interrupt is not responsive to the signal.
I maintain a terminal emulator that is not optimized at all and I just tested interrupting a process that was dumping one gigabyte of text to the screen. The interrupt was handled instantaneously.
It isn't the most likely culprit. The mosh people point out that the place where people hit this is with a SSH session to another machine. Where the data are actually building up is the SSH connection. It's not a problem with the terminal emulators at all.
It's more of a user comfort / perception thing. Using a slow terminal is basically like playing a game with really inconsistent frame rates. It's a distraction and can cause you to make mistakes from mistimed inputs. In both cases what you're really looking for is consistency more than raw speed.
On Windows "std::cout" can take upwards of 3ms. I noticed this when writing high speed camera software that was supposed to hit my callback every 2ms. Instead it was limited by my print statement!
Windows console is very slow (and terrible in many other ways). I think it is beyond salvageable. The only workaround is probably to write to a disk file or a pipe.
However, it keeps being slower than any tty on any *nix. Sometimes fish autocomplete hangs for a few seconds, while on the same machine running Ubuntu fish autocomplete is always instantaneous. I don't know if it's related to something about the tty emulator or something weird in Cygwin.
3ms sounds pretty dire, if you weren't writing very much. Out of curiosity, was this with or without `std::ios::sync_with_stdio(false);` ?
I've found syncing makes a huge difference to IO perf in Windows, e.g. `_getc_nolock` is much faster than `getc`. (Assuming of course that you can get away with it.)
These have little effect, and omitting std::endl has little effect too. The effect is somewhat mitigated by buffering, so if the time between subsequent std::cout calls is large, you won't see the runtime overhead.
Terminal speed is the primary reason I (and some others) use 'terminology', which is a relatively quick terminal emulator out of the enlightenment project.
However, a terminal emulator + X11 and so on can eat a bit of CPU with noisy processes, e.g. the output of mpv eats maybe 5-10%, because it updates every(?) frame, so maximum work for the whole display stack. Getting the terminal emulator out of sight can buy a bit more battery life in these cases.
(Somewhat related: If you have infinite scrollback it turns out that /tmp is actually very finite and can be filled by the wrong command with a couple dozen MB/s.)
Sounds like an option to slow down output rendering to e.g. 1 frame/s might be an interesting feature for a terminal emulator. Still enough to keep an eye on a long-running process, but less overhead?
Glyphs are rasterized once and stored in a texture atlas. When rendering a glyph, the fragment shader pulls from that texture. Once loaded, the glyph stays loaded for the duration of the program.
Got it, just a heads-up that a texture atlas tends to hammer your GPU with texture uploads if you want to support the full range of Unicode or non-Latin (esp. glyph-based) character sets.
Not trying to be discouraging just something to keep in mind if that's a direction you want to go. Pretty excited to see a GPU + Rust based stuff making it out into the wild.
Sure, but you're either going to have to generate the whole font up-front (which can be many thousands of characters) or re-upload as you use/generate new characters, which can thrash unless you're very careful about the regions you lock (and you have a driver that behaves appropriately).
Most font renderers I know do a tiered LRU cache of 3-4 texture "pages" which hurts your drawcall batching but tends to be a nice tradeoff in texture usage.
Okay so I dug into this. There is a font cache on the GPU and another in CPU ram. I believe it will fall into the drawcall batching issue you are concerned about... but terminals don't need to get >60FPS in most cases.
You really have nothing better to render on your GPU and store in that video memory than your terminal? I also use a web browser, and feel like it could use a performance boost a lot more than my terminal (particularly as I don't actually believe that using textures in this way is actually the most efficient way to render fonts with OpenGL).
There's Distance Fields[1] and Loop-Blinn[2] aside from standard textures that I'm aware of.
Valve uses the first, very few people use the second because of patent issues. They're more computationally expensive in some cases so there's always tradeoffs to be made depending on your hardware.
Just another vote for scroll back - would not use without. I do use tmux occasionally but it is just too awkward beyond keeping stuff running in SSH. I also have a tiling window manager and since I use mouse a lot in browser and other GUI apps it's too much of a pain to switch to keyboard only navigation for the terminal.
I want to encourage you to keep your product vision. It makes total sense.
I am a heavy tmux + vim user, developing on a remote server. So, I have all my dev sessions always running and can access them on any client. I never needed scrollback or tabs in my terminal. tmux has it all. Excellent window and pane management + scrollback included.
And even on a remote connection I feel speed differences between terminal emulators as I wrote in another post. So, there is a strong need for such a product and great that somebody is innovating a console app in a time of locked-down fancy touch devices.
Well done and keep on going. Don't be intimidated by different requirements. Your product strategy is right (at least for me).
I put this comment elsewhere in the thread, but maybe if I put it in this subthread you'll get to see it.
I tested on my laptop (a ThinkPad X250) and alacritty is slower than xterm. xterm can display find / at 80x24 in 11 seconds, but alacritty takes 17 seconds or more, depending on how large the window is (smaller seems to be slower) and whether it's on-screen or not (off-screen seems to be slower).
There's a small subset of systems experiencing this. Do you happen to have a Radeon video card? In the profile I looked at, glClear was calling down into (through libxcb) __poll_nocancel which was eating 99% of the CPU time. I'm not sure if there's an issue open for this yet, but it's something we're looking into. One of my testers during development ran into this so we're aware of the problem.
> My current Bash prompt contains a unicode character. I guess this is unsupported?
Multibyte characters are 100% supported, but only if they are available in the chosen font. The fallback fonts feature will resolve this issue for you.
> Possible bug: on macOS when I minimize Alacritty, and I put it on focus again, it tries to select text. Strangely, not always.
Definitely a bug, and it's one that I knowingly shipped with (usually I just resize my terminal once and then it's static). The events triggering selection seem to work slightly different on macOS than Linux. The issue should be easy to resolve, but this comes down to making time.
> Multibyte characters are 100% supported, but only if they are available in the chosen font. The fallback fonts feature will resolve this issue for you.
How do I enable this feature?
The characters and ⑂ don't work in Menlo.
and do work in Cousine for Powerline font.
In Terminal.App and iTerm2 this works in both fonts. Do those applications also have a fallback fonts feature?
For some reason I can't run Emacs in terminal/unwindowed mode in Alacritty. Every "regular" character I type (ie to enter text into a buffer), Emacs says it's "unrecognized".
Control sequences work - i.e. I can exit with C-x C-c.
Works fine in regular Mac OS X terminal.
Any suggestions appreciated. Looks like an awesome project!
Nevermind -- Turns out I was just trying to type some characters in Emacs' welcome screen, which doesn't allow that in any terminal. I was so focused on kicking the tires of Alacritty I wasn't paying attention.
Nope. The idea is that perf should be good enough that it's not necessary.
withoutboats and I are hopefully going to collaborate and add notty protocol support to Alacritty. This should make text splits as performant as GUI splits.
I (withoutboats) agree. The real problem in terminals is that this goes both ways - if you let tmux or just vim/emacs split the window, you get no GUI integration - but you use the GUI to split the terminal window, you now have two disconnected shell sections, so none of the CLI integration works.
The notty screen splitting protocol should support a "GUI" interface shared by a single process, solving this problem.
cool project, but my question is why? rxvt is plenty fast for general purposes. if your bottleneck is the terminal emulator then you're doing something wrong. can you really read at ~10mbps?
No, of course you can't read all of the text at 10Mbps. The problem is that when you start some task which has a lot of spew, just having all of that text scrolling past can slow everything down to the point where the task actually takes longer! Even a few percentage points of slowdown can add up to minutes of just sitting there twiddling your thumbs.
Just a few weeks ago I accidentally ran a command on a remote machine that generated so much spew in the few seconds it ran before I hit control-c that it took 5 minutes to scroll through before I could do anything else.
But yes, you probably want to avoid that much text spew even if your terminal is super fast.
You think that your bottleneck is the terminal emulator, but you are wrong. As the mosh people pointed out a few years ago, the output from the remote machine has to scroll for 5 minutes because it is all backed up in the SSH connection between your machine and the remote one. Your bottleneck isn't in the terminal emulator at all, and changing terminal emulators to whizzy new ones will not make any difference to it.
This initial release should be considered to be pre-alpha software--it will have issues. Once Alacritty reaches an alpha level of readiness, precompiled binaries will be provided for supported operating systems.
Might be a bit early, but will the Windows support be good for Bash on Ubuntu on Windows? Right now I use xming and run xterm inside it, but that is not a perfect solution, especially for things like vim. It works for now, but I am definitely looking for a good alternative.
Generally continuously, but it depends on how you scroll. If you're in `less` for example and holding the down key, you will probably see one line at a time, and it should be very smooth. It's possible with high key repeat rates that you might get multiple lines on some frames.
It sounds like a fun project, but I don't really understand what performance issues this solves? I don't think I've ever had an issue with slow terminal rendering using the default terminals on Ubuntu or Mac OS. What sort of applications do you run where it becomes an issue?
On the other hand, something like mosh [1] seems like it could be really useful on slow network connections. But that's not about rendering faster.
Not all people are equally sensitive to graphical performance issues. The default Ubuntu terminal is capped at approximately 40fps[1]. This is deliberate and hard-coded. It's not an integer multiple of any common screen refresh rate and it looks very bad. There is no way to configure keyboard autorepeat so new input is shown with consistent timing. I consider this bad design and I'm happy that Alacritty is limited only by the monitor. But some people might never notice the timing problems in other terminals.
keyboard autorepeat doesn't have anything to do with the terminal emulator (or I would be surprised). There's an X11 setting, try "xset r rate <delay-ms> <repetitions-per-sec>", e.g. "xset r rate 170 30".
The timing of the input itself doesn't vary, but the timing of the visual feedback does. I like a fast 60Hz autorepeat, and I rely on visual feedback for precise positioning (I find this has lower cognitive load than Vim style character/word/line/etc counting). If the terminal is displaying at 40fps then some separate inputs will be merged and displayed at the same time. And if you like 30Hz autorepeat, instead of consistent 2 frames per input, you get a mixture of 1 to 3 frames per input depending on how the cycles line up. It makes it much harder to hit the exact character you want if you're using autorepeat for navigation.
Ideally the terminal should have MPV-style motion interpolation for supporting keyboard repeat rates that aren't an integer multiple of the display's refresh rate.
> I like a fast 60Hz autorepeat, and I rely on visual feedback for precise positioning (I find this has lower cognitive load than Vim style character/word/line/etc counting).
I thought I was the only one who turned the key repeat way up and then navigated things by moving one line/character at a time really fast! I've always felt a bit guilty about this due to feeling like I'm just too lazy to get used to using more typical navigation, but "preferring lower cognitive load" sounds like a more more positive spin on it.
I have slowly, over the course of 20 years moved to holding down movement keys to trying (and not always succeeding) to remember to use forward and backward search to jump around to the string I am moving to.
I think it's the "right thing to do" but it's hard to not just hold down 'k' ...
But I acknowledge there's a general problem with the slow screen refresh rates we have. What are people doing if they want an ergonomic FPS shooter, or a video decoder that must decode videos with arbitrary video and audio rates? Probably ignore the problem and drop a frame here and there.
I started up Alacritty on macOS and the improvement in responsiveness and speed is immediately obvious relative to Terminal.app. It's not like it's a huge deal, but why put up with compromised performance when you don't have to?
Have you tried it? If not, start it up and then tail -f a log of some sort or even use vim for a while. The experience is much better than in an OS X terminal or iTerm 2.
> Make sure you have the right Rust compiler installed. Alacritty is currently pinned to a certain Rust nightly
Ok, I'll... um not do that. Hope you publish a build soon though!
edit: more seriously, the nightly compiler situation on rust is going to become a problem as it gets more developer use. I really hope they're able to stabilize it.
edit 2: I'm really sorry if I derailed the conversation in a not useful way, @jwilm
Rustup automates this process so much that it costs you nothing to compile from a specific nightly release without compromising your existing configuration.
I downloaded the nightly within a minute, and I compiled a release within 125 seconds. Immediately after I changed back to stable. It took two cli commands to swap between. NPM couldn't get all my packages for my React crud app that quickly.
One thing that Rust really seems to be doing right is tooling whether it be Cargo's package and compilation management or Rustup's toolchain management, and their 2017 roadmap is pretty much entirely about tooling: https://github.com/rust-lang/rfcs/blob/master/text/1774-road...
You don't even need to swap back and forth. The README for Alacritty tells you to use `rustup override` to set the compiler, and rustup remembers overrides on a per-directory basis, meaning that invoking rust from anywhere outside the Alacritty working tree will still use your globally-configured default compiler.
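Concretely, the workflow looks roughly like this (the rustc-version file is the one the Alacritty README points at; the other paths are illustrative):
cd alacritty
rustup override set $(cat rustc-version)   # installs the pinned nightly if needed and pins it to this directory
cargo build --release                      # uses the pinned toolchain
cd ~/some-other-project && cargo build     # still uses your default toolchain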
Looking at the code, it looks like this basically only relies on two things: inclusive ranges, and clippy. But you don't have to use clippy this way; it's just one way of doing it. So it's really one feature. EDIT: Oh oops, and custom derive, which is stable in a month.
> I really hope they're able to stabilize it
In general, "nightly" is never going to be stabilized. Remember, it's how Rust development works. Some people will always want to be on the cutting edge. I elaborated here: https://news.ycombinator.com/item?id=13277438
Also, nightly-only is less of a deal here, since this is mostly an application, not a library. End-users won't need to worry about having Rust at all.
My feeling is that having a lot of people running on different rust versions is likely to raise the bar to contribute to the project; for example if I go to hack on a python project the odds that I have to change my interpreter setup are basically zero.
In this specific example, I saw that building the project required mucking with my rust version/setup and decided that the cost of that was too high for me to proceed. Totally possible that I mis-evaluated the cost, but that was what happened in my head.
I don't at all mean to tell you what is best, I do think highly of your project and wish you all the success.
It is a little unfortunate, but Rust isn't blind to the problem. The community is converging on everyone using rustup (e.g. as of a few weeks ago, the install page recommends it https://www.rust-lang.org/en-US/install.html ) because it makes managing things like cross compilation[0] and upgrading stable compilers much easier. It also makes working with pinned compiler versions smoother: run `rustup override set nightly-2017-01-05` (or whatever date is recommended) in the project's directory, and that single command will both install that compiler and ensure it is used for the project (and only that project) when one invokes cargo or rustc. (I think it's great that Alacritty even helps people with the process, as I guess you noticed: https://github.com/jwilm/alacritty#prerequisites )
In any case, the nightly split is Rust deciding that it is good to allow people to use experimental features (i.e. working out whether they're good/bug-free) while also resisting infecting code that doesn't want to risk breakage (such projects use a stable compiler, meaning even their dependencies can't accidentally rely on something unstable), and thus hopefully avoiding de facto stabilisation of low-quality features.
While it's nice that they want to converge on rustup, I greatly prefer using my distro-provided compiler in almost all cases. Right now that is rustc 1.14, which means that that alacritty is currently beyond my compilation capability.
I don't think this is nearly as bad as the grandparent suggested though. The language is young, I'd prefer experimental features remain in the experimental branch rather than get bad design stuck in the language.
I think if you're doing dev, it's completely reasonable to expect the dev to install a newer toolchain. Cargo makes all of the actually painful parts of contributing to a project pretty trivial.
Don't worry. The Rust community is sensitive to the desire to only install Rust with a package manager. There was even talk (a few days ago) of making an LTS version of the Rust compiler for use in more conservative distros that don't want to update it every 6 weeks.
On another front, Rust is working hard to get people off of nightly. This project, for example, uses 3 unstable features. Clippy can already work on stable (edit: code that is linted with Clippy can also build on stable in some setups), Custom Derive will be stable in less than a month, and inclusive ranges is a pretty minor feature.
I would expect this project to begin working on stable for Rust 1.15, but I'm not affiliated with it directly, that's just my guess.
Sorry Manish, I had meant that you can have code that can be built using a stable compiler while still using Clippy via another means. I typically use stable, but I run clippy on my code anyway. I realize that my post is confusing, I'll edit.
> for example if I go to hack on a python project the odds that I have to change my interpreter setup are basically zero.
This doesn't match my experience at all - I frequently switch between projects that require python2 or python3, and I manage that using virtualenvs, which is a different problem than setup.py solves, just as rustup is separate from Cargo.toml
I have a lot of python experience and only a little bit of rust, but in my opinion the years-long python 2/3 split has been a lot more painful than the rust nightly/stable split... the fact that Rust produces binaries that work in any case vs. the user needing the right Python interpreter installed is a big part of this.
Yes, it is absolutely a barrier to contribution. That's part of the trade-off a maintainer makes when choosing to use unstable features. It also means getting into distro packages won't work, as they're all packaging stable only.
Most users are on stable, and with 1.15, the next release, the most used nightly feature is being stabilized. We're working on it!
> In this specific example, I saw that building the project required mucking with my rust version/setup and decided that the cost of that was too high for me to proceed.
Agreed it would be nice if you could compile it with your package manager's provided version of rust.
Sibling comments mentioned rustup; with it this is the entirety of the mucking required [as described by alacritty's readme, and confirmed by compiling it myself]:
rustup override set $(cat rustc-version)
The override is local to the directory you set it in. This will also download that version of Rust if necessary.
Rustup makes it painless, though. I have a ton of versions of Rust installed. I use different versions for different projects. Some versions I just tested to find where the performance regression was, so I could just remove them, but they don't bother me.
Just for completeness, there is one other major unstable feature that Alacritty requires. #![feature(step_trait)].
I forget the precise details, but I believe this was required for using newtype wrappers as a `Range` for indexing; specifically, the grid can be indexed by a range of lines.
Using nightly for a single build and going back to stable for regular work is not hard. It's a command or two to build and switch, and a command to switch back.
Using nightly rust to build a project in development to check it out is like using a beta release of a library to build a project to check it out. In both cases, you expect that it will likely be using stable dependencies in the future, and you can still take a look now.
It seems like the Cargo.toml file for binaries should be able to specify this, and cause the crate to be built with the nightly if it is installed. This way you only have to enter cargo build --release.
Jwilm: This is great, and I'm really looking forward to following this project; beers on me and thanks for the effort!
A few findings from my side if you want some feedback, I generally work mosh'd into some beefy servers with a long running tmux I resume -- so I'm probably the use case this is aimed at (client: xps13, archlinux).
1) If I create a vertical split view (tmux_key+v) while I already have some output in the left side of the split, and have nothing but my prompt in the right side; then resizing the split is instant/snappy.. However, if I then do a find / in the 'new' (right) split, ctrl+c it after a moment and then resize it lags/judders hugely -- I'm not sure what's going on there but let me know if you'd like me to try and explain that more if you can't reproduce from that.. This doesn't happen in termite..
2) I had to set offsets and use a giant font to make it look reasonable on my (highdpi) lappy:
font:
  normal:
    family: SourceCodePro # should be "Menlo" or something on macOS.
    style: Regular
  bold:
    family: SourceCodePro # should be "Menlo" or something on macOS.
  italic:
    family: SourceCodePro # should be "Menlo" or something on macOS.
  size: 26.0
  offset:
    x: 4.0
    y: -30.0
Hm, if we're doing GPU rendering for speed, I'd suggest uploading vector glyph data to the GPU and rasterising on the GPU in the pixel shader, rather than using FreeType. See here: http://wdobbie.com/post/gpu-text-rendering-with-vector-textu... . The WebGL Demo is really impressive - it lets you zoom in and out on a multi-page PDF at speeds I haven't seen anywhere else.
This is not fast. It uses less texture memory but it is not faster than 2 triangles per character. This is why no one really uses this technique in a production system right now.
>tabs and scrollback are unnecessary. The latter features are better provided by a terminal multiplexer like tmux.
I beg to differ. I don't really know why, but whenever there's a project that's almost perfect, there's some braindead decision that cripples it for no good reason.
I'd understand it if some more advanced or exotic feature wasn't available, but scrolling?
This is a very interesting concept and another example of what can be done with Rust. However and without the intention of discouraging the author, I did not find any performance improvement from Alacritty using Ubuntu 16.04 on an i7-4500U (using integrated graphics HD 4400). Here are some numbers, simply printing the contents of 446 files:
At 80x24:
gnome-terminal:
real 0m0.848s
user 0m0.032s
sys 0m0.072s
Alacritty:
real 0m6.832s
user 0m0.032s
sys 0m0.164s
At fullscreen:
gnome-terminal:
real 0m0.819s
user 0m0.020s
sys 0m0.088s
Alacritty:
real 0m8.972s
user 0m0.064s
sys 0m0.164s
The font was a tad smaller by default on Alacritty, changing it made no significant difference in the numbers. Since the difference in performance was quite noticeable I decided not to test other possible configurations, but I could do so if it might help.
My graphics card has pretty poor performance in general, so that might be an indication that, since the performance of Alacritty is directly impacted by the graphics card, it might be useful for the author to determine the "minimum requirements" for Alacritty to outperform the competition.
In any case, it might not be a fair comparison as the author has stated that this is a pre-alpha release, but maybe he can find it helpful in some way, as he suggests he hasn't been able to find a test in which Alacritty didn't perform as well as another terminal.
thread 'pty reader' panicked at 'index out of bounds: the len is 24 but the index is 24', /buildslave/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libcollections/vec.rs:1371
or
thread 'pty reader' panicked at 'cursor fell off grid', src/term/mod.rs:634
While GPU-accelerated 3D interfaces (like those in 3D games) are a good idea (at least one could mix data visualization with controls, the way the WebGL folks do), a terminal emulator does not require any acceleration, let alone having Nvidia drivers or CUDA as a dependency.
What a decent terminal emulator should have is standards compliance and decent font rendering (and FreeType is good enough).
Lousy engineering will lead to lousy code, especially when the main objective is to show off (engineering is, obviously, not an objective here). Btw, using Rust is not engineering.
Everything that can be GPU-accelerated should be GPU-accelerated. The GPU is far more energy efficient, and every bit of offload onto the GPU leads to a decrease in CPU consumption. Terminals can be particularly CPU-heavy when running a chatty program.
Oh, just as interesting: plugging this into a VR system! It would make it way easier to multitask and work on lots of systems (at the risk of looking incredibly goofy).
Sadly VR is useless for reading :( I wish it wasn't so. You're much better off with a high resolution screen with the ability to zoom in and out between landmarks. Maybe a head tracker to make navigation more intuitive.
I wouldn't hold my breath. Maybe for casual reading, but not for day-in, day-out, 8 hours a day. It's also solving the wrong problem when it comes to immersion. I fly FPV, which is a fuzzy, intermittent, analog 640x480 screen, and it's incredibly immersive. People are fully immersed in their tiny mobile phone screens for a large portion of their day. AR peaked with Pokemon Go and we didn't even need Google Glass, Magic Leap or HoloLens. We already have the hardware for full immersion and it's sitting right in front of you (or in your hand).
> Both the utf8parse and vte crates that were written for Alacritty use table-driven parsers. The cool thing about these is that they have very little branching; utf8parse only has one branch in the entire library!
From a simplicity point of view table-driven parsing is pretty neat. However, it does mean you'll be getting a lot of branch misprediction in your single branch, since it's harder for the CPU to predict where it will branch to. You could probably go faster with some handcoding in the parser.
Rust has macros, so any sort of table can be unrolled into more efficient code, like if you wrote a switch-case in C. It should be possible to rewrite vt_state_table! later, once it becomes a problem.
I didn't even realize my (iTerm2) terminal emulator wasn't fast until I tried Alacritty. When doing non-intensive tasks, the difference is less one of vision and more a "feel". And it feels SNAPPY. And, as a heavy tmux user, I'm definitely your target audience.
But...the font rendering doesn't look as good as iTerm's, at least not yet.
I suspect I'll be swapping once you're at a public build release.
I really like this. It combines my love of tmux and vim with my interest in rust, system software, terminals, and my eternal quest for the fastest, simplest, most cross platform terminal development environment. Great job - looking forward to running nightly builds of this.
EDIT: Ah, after a little sleuthing, the recent post from OneSignal on why they chose rust for one of their services makes sense :).
How fast is it? I haven't had a terminal that was as fast as xterm with a Matrox Millennium II. That was 20 years ago, which is pretty sad. Of course the terminals look better these days.
From what I remember, and I'm possibly wrong about this, the Matrox Millennium cards were the last generation where 2D performance was a design focus - the Matrox RAMDAC was the best there was at the time. After the Voodoo 3 was released (IIRC the first combined 2D+3D accelerated video card) the 3D arms race started, but 2D performance hasn't moved a lot since then (at least in consumer-level hardware), hence the move to software being written in OpenGL etc. to benefit from hardware 3D acceleration.
I'm guessing that "workstation" cards (designed for CAD etc.) may have moved on a bit since then, though.
On my laptop this terminal is not very fast. A cached 'find /' runs in 11 seconds on an xterm, 17 seconds on maximized alacritty when it's on screen, 25 seconds on alacritty if it's off-screen, and 44 seconds on an 80x24 alacritty.
I don't know why it's slower when you make the window smaller, or when it's not being displayed. I expect the answer is "some kind of OpenGL bullshit" but beyond that...
gnome-terminal takes about 5 minutes to do this, but it has unlimited scrollback and as far as I can see alacritty has no scrollback whatsoever.
I was just having a little joke, apologies if it came off as a criticism. It seems very strange to me that people are still talking about speeds of terminals, but not my field (any more) fortunately.
One of the first things I wrote was a terminal emulator using telnet to run on PC-DOS using a port of curses to connect to our Sun 3's. To think in 2017 people are still concerned about terminals is very surprising to me.
The problem is the terminals have been getting monotonically slower over the last 20 years, whereas the amount of build spew I need to grope around in to find the relevant error has not decreased :-/
Yeah, I use IDEs (Xcode/AppCode/VS/Delphi) for all my work, so it's not really an issue. I can see that if you're stuck with command-line compilers it would be annoying. I would hate to go back to makefiles and command lines, ugh!
I would argue the reverse :-). I've used terminals in the past, had the beauty that is screen, even gave courses on vi. When bitmapped screens became available it was IDEs for me.
Looks great so far. Other than scrolling support, the one thing I miss the most is the use of the up arrow to scroll through history. Ctrl+R is great, but sometimes I just want to scroll through my most recent commands.
Wow, I am definitely the target for this. I often have tmux panes watching fast-scrolling log files while trying to continue to work in another pane. I've been trying to tweak tmux to perform better, but it really is the rendering speed that's holding it back.
The lack of scrollback/tabs/etc doesn't bother me at all - I use tmux for this exactly as suggested.
I really, really like this so far. Interestingly, its dependence on tmux (which I really like overall) for 'extra' terminal features presents some problems for performance and usability.
Tmux has its own non-trivial rendering bottlenecks, the most significant of which comes into play when you have multiple clients attached to a session. As a test, I went into a notes folder and did `grep -r e .`. When Alacritty was sized larger than mate's default terminal, mate's default terminal finished rendering first. When Alacritty was sized smaller than mate's terminal, Alacritty finished first. Also of note was that Alacritty running tmux rendered slower than mate's default terminal without tmux. This was an uncontrolled experiment, especially since this was a tmux session with a couple of windows with a couple of panes per window, on a tmux server with 2 other sessions (with a lot of vim windows etc), but something tells me the results would be the same if I used a tmux server with one session/window/pane. As a tmux user, this isn't a huge deal to me, but it should be concerning to the Alacritty devs since Alacritty requires a multiplexer to be usable.
My biggest concern, however, is not performance related, but usability related. Consider this use case: Alacritty -> ssh into remote server -> run tmux on remote server. How am I supposed to paste anything into that remote tmux session now? Am I supposed to nest my remote tmux session in a local tmux session? That sounds awful! I've found satisfactory workarounds to the lack of copy/paste when working locally, but it falls apart when I can't rely on duplicating the tmux register to the clipboard (and vice versa) because the clipboard is remote.
If I can find a workaround to the remote paste issue, I will probably use Alacritty exclusively. Otherwise, I can't use this terminal for remote work, and I'd rather not run two different terminals just so that rendering is faster _sometimes_
> concerning to the Alacritty devs since Alacritty requires a multiplexer to be usable
The project initially started to be an optimized tmux renderer. It's not supposed to appeal to everybody. That said, there's a big segment of users with tiling window managers that are only blocked by not having scrollback, and we're talking about adding it. Features like tabs/splits will likely never be introduced.
> If I can find a workaround to the remote paste issue
What platform are you on that the selection copy and mouse paste isn't working? It's also possible to configure this to another keybinding if you prefer.
I'm using Mint Mate (Sarah), can't get a context menu on right click. I installed all of the dependencies mentioned in the readme, though maybe Mint requires something that Ubuntu doesn't. Tmux mouse support works, interestingly enough.
> Features like tabs/splits will likely never be introduced.
I wholeheartedly support this. Count me among those that like the idea of an "optimized tmux renderer".
edit: I just realized you weren't referring to a context menu, but select/middle-click. Duh, didn't even think to try that as I don't really use that feature much. That'll do the trick for now, though apparently they plan to remove this feature from the next gnome release? Seems odd to me -- Anyways, thanks for the suggestion, now I can use alacritty for remote sessions!
Oh thank you so much!!! I am in dire need of a light terminal (read: st's equivalent) for macOS. iTerm2 has felt bloated for a few years now, its font rendering is kinda meh, it's slow, and it has many, many features I really do not need, and Terminal.app simply doesn't make the cut (no true color support, for example). I need speed, true color, and as minimalistic a terminal as possible, since I use tmux (tabs and GUI not needed) whenever I need more than one terminal screen. Not sure if it's worth the hassle to set up right now; I might just wait for an alpha release. But I'm watching this on GitHub and can't wait to try it! (Plus it's Rust, which almost made me dance in my room.)
As a tmux+vim user, this hits right home for me. I never use terminal tabs and do almost everything inside tmux.
The only time during development I use another app is when I start neovim-qt, just so I have faster rendering and can squeeze even more performance out of it. If Alacritty gives me the same speed without me having to spawn a graphical vim, sign me up!
I'm going to try this as my main tool for a couple of days and collect some feedback :)
Very impressive. I manipulate huge amounts of text data directly in the terminal on a regular basis, so smoother output is a huge win. I wish I could give you some money right now to support the development.
Having never thought my current terminal emulator was slow I was surprised to immediately see a difference with Alacritty!
That being said, every time I install a package using apt-get (Xubuntu), Alacritty crashes with the following:
thread 'pty reader' panicked at 'index out of bounds: the len is 24 but the index is 18446744073709551615', /buildslave/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libcollections/vec.rs:1371
I guess we're not quite at 1.0 yet but looking good otherwise!
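For what it's worth, a hedged note on that panic (my own reading, not anything from the Alacritty devs): 18446744073709551615 is usize::MAX, which is the classic signature of an unsigned subtraction wrapping below zero in a release build. A tiny illustration:

    // Hypothetical illustration, not Alacritty's actual code: how an index of
    // 18446744073709551615 (usize::MAX) typically appears when an unsigned
    // value is decremented past zero and wraps in a release build.
    fn main() {
        let rows: Vec<&str> = vec![""; 24];      // "the len is 24"
        let current_row: usize = 0;
        let above = current_row.wrapping_sub(1); // wraps to usize::MAX
        println!("len is {}, index is {}", rows.len(), above);
        // rows[above] would panic with the same shape of message as the report:
        // "index out of bounds: the len is 24 but the index is 18446744073709551615"
    }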
Awesome! This is exactly what I've been hoping for. The state of terminal emulators on macOS is particularly bad, at least when it comes to speed. Both the built-in terminal and iTerm have a lot of features, but really start to lag on big screens with a lot of text. I used to run urxvt under XQuartz for this reason, but there are scaling problems with retina screens these days.
Nice work. Hopefully this can fill a particular void for folks that want no-frills fast terminal emulation.
I can't remember the last time I thought that my terminal was too slow, except maybe when I tried that Electron-based program a while back (Hyper, I think it was). My priorities are more like the following:
1. Stability. I crashed Alacritty thirty seconds after opening it; possibly related to issue #12.
2. Emulation correctness. In Terminal.app, the cursor often gets out of sync when I "turn the corner" (i.e., backspace across a line boundary).
3. Font rendering. Text in some terminals just looks ugly.
4. Features. Alacritty doesn't seem to show the number of rows and columns when I resize. Scrollback!
This looks like a really interesting project, but it seems really strange to make performance such a high priority. I tried the find /usr test, and it seemed equally fast in Terminal.app and Alacritty.
So this uses GPU-accelerated rendering with OpenGL. TBH I have never used OpenGL, and when I read "GPU-accelerated XYZ" it still sounds like magic to me because I have no idea how it works. Could you point me to some resources where I can read up on this stuff? If it helps: I am not a newbie, I already know C, C++, and Rust, but I haven't done any graphics programming at all yet. For example, I only have a very rough idea of what shaders do.
I learned this stuff a long while ago, but I found a promising textbook for you to review [0].
tl;dr: the "easy 80%" of the effort in GPU rendering, specifically, is reformatting graphics data into forms the GPU prefers. (The "hard 20%" is dealing with confusing APIs, broken implementations, etc.) Graphics techniques themselves may apply various mathematics (geometry, trigonometry, a little bit of linear algebra, DSP), either on the CPU or the GPU. To render things, you make decisions about the processing and output format and write data and algorithms accordingly.
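To make the "reformatting data for the GPU" point concrete, here's a rough sketch (my own illustration with made-up field names, not Alacritty's actual renderer) of flattening terminal cells into a per-instance buffer that could be uploaded to OpenGL in one call and drawn with a single instanced draw:

    // Hypothetical example: pack each terminal cell into a fixed-size record of
    // per-instance attributes so the whole visible grid can be uploaded at once.
    struct Cell {
        col: u16,
        line: u16,
        glyph_index: u32, // which glyph in the font texture atlas
        fg: [f32; 3],     // foreground color
    }

    /// Pack cells into a flat f32 buffer: [x, y, glyph, r, g, b] per instance.
    fn build_instance_buffer(cells: &[Cell]) -> Vec<f32> {
        let mut buffer = Vec::with_capacity(cells.len() * 6);
        for cell in cells {
            buffer.push(cell.col as f32);
            buffer.push(cell.line as f32);
            buffer.push(cell.glyph_index as f32);
            buffer.extend_from_slice(&cell.fg);
        }
        // A renderer would hand this slice to the GPU (e.g. via glBufferData)
        // and issue one instanced draw call for all visible glyph quads.
        buffer
    }

    fn main() {
        let cells = vec![Cell { col: 0, line: 0, glyph_index: 33, fg: [1.0, 1.0, 1.0] }];
        println!("{} floats for {} cells", build_instance_buffer(&cells).len(), cells.len());
    }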
Wow. I just installed this on macOS, and the difference in speed compared to iTerm 3 is huge!
I always assumed that it was my bloated vim and tmux configs that made it feel a bit sluggish sometimes, but it turns out it was the terminal. Now everything feels instantaneous.
After some color bugs have been ironed out I'll switch full-time.
Great idea, and I hope Alacritty continues to evolve, because it should eventually be the fastest given the GPU integration. However, st is faster on my system, supports bitmap fonts like SGI Screen, handles true color, works when no GPU is present, has half the LoC, and has fewer dependencies.
I am using iTerm2 on a maxed-out MBP 15 Retina quad core and Xshell on a $150 Asus Cherry Trail netbook. You won't believe it but Xshell on the crappy netbook feels light-years faster and more responsive than iTerm2 on the MBP.
Wondering how Alacritty will perform, looking forward.
For those who rely on tabs, one great advantage of relying on the multiplexer instead is that your "tabs" live within the terminal, so when you ssh into your session, the machine has all your "tabs" ready and waiting instead of tied up in a non-accessible GUI.
Not sure why they rebuilt a clipboard library when "clipboard" exists (I think it might even be used within Servo, not sure): https://crates.io/crates/clipboard
Just installed on Linux Mint. The installation was quick and painless from the instructions, and it is noticeably faster than my previous MATE terminal. We'll see if I notice a lack of scrollback.
Love the project! Completely agree with the minimalistic philosophy. I can see why some people feel like scrollback is needed; personally, I always work in tmux sessions, but still.
This is really cool, though I can't seem to get it to build on my Mac. The stock OS X terminal is plenty fast for me anyway; maybe I don't do enough intensive work.
As Steve mentioned, it was mostly for FFI. There are also a few places where I'm doing my own bounds checking in order to provide nicer error messages. After doing that bounds check, performing the index operation without the standard library's bounds checking requires unsafe.
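For anyone unfamiliar with the pattern being described, here's a minimal sketch (my own illustration, not Alacritty's actual code) of checking bounds once yourself for a nicer message, then indexing with `get_unchecked`, which is what requires `unsafe`:

    // Check bounds explicitly with a descriptive message, then skip the
    // standard library's second check.
    fn cell_at(cells: &[char], line: usize, col: usize, cols: usize) -> char {
        let index = line * cols + col;
        assert!(
            index < cells.len(),
            "cell ({}, {}) out of bounds for a grid of {} cells",
            line, col, cells.len()
        );
        // SAFETY: `index < cells.len()` was asserted just above.
        unsafe { *cells.get_unchecked(index) }
    }

    fn main() {
        let grid = vec!['a', 'b', 'c', 'd'];     // 2x2 grid, row-major
        println!("{}", cell_at(&grid, 1, 1, 2)); // prints 'd'
    }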
The amount of time spent doing GPU work is rather small. Battery life tends to be better on my Macbook with Alacritty than with other terminals. This seems to suggest that power consumption is actually less than with CPU based renderers (and that your GPU fans shouldn't be spinning).
I didn't realize there's no Windows version available yet, I was totally thinking about my Windows laptop. My Apple (work) laptop is much better about not turning on fans willy-nilly.
Not necessarily due to "your Windows laptop"; sometimes it's due to hidden/forgotten vendor bloatware and Windows' very own background services. That is, shabby software, not your laptop. I disabled Windows Update (I'd rather check it manually every other month) and went through the list of services that will realistically never be used, even indirectly, and disabled them, and voilà: no more random fan spinning. Until I open a WebGL page or something, that is... that "3D JS" will heat up even a current mobile workstation with a 3GB-VRAM Quadro GPU!
If you use any modern OS, you are likely staring at textures composed by the GPU. This is no different and, in fact, can eliminate some of the middle-"men".
Out of curiosity, why do you love iTerm? It's always struck me as kind of ugly (especially its preferences). And AFAIK, the only real feature it has that Terminal.app doesn't (besides the native tmux integration, because I still don't really see the point) is support for apps customizing the 256-color palette on the fly (e.g. the initc capability). While I really would like to see Terminal.app gain support for that, seeing as the xterm-256color terminfo it uses declares that it works, lack of support for it isn't a good enough reason to switch away.
The coolest feature of recent versions of iTerm is the "Selection respects soft boundaries" feature: with it on, iTerm will detect vim/tmux splits and constrain the selection to just one side of that split. For my workflow, at least, it makes a huge difference.
Also, if you start tmux as "tmux -CC" iTerm will open the tmux session in a new window, with GUI tabs for tmux tabs and GUI splits for tmux splits.
iTerm has https://github.com/ravenac95/sudolikeaboss: integration of your 1Password passwords with your terminal, which makes life amazing. Typing and copy/pasting passwords is a major time-suck (not to mention that copy/paste is not exactly secure). I'd love generic support for this feature in something like Alacritty.
Interesting, but I don't think I've ever felt the pain of not having that. It's pretty damn easy for me to use the global 1Password hotkey to bring up 1Password Mini, type a few characters to identify the login I want, hit → to expand the login, ↓ to select the password, and ↩ to then copy that password, which I can now paste into the terminal. It's not that hard.
Unless you're entering your sudo password with every other command you type, it doesn't seem like it would make much difference either way, it's pretty darn fast already.
FPS, or more strictly DUPS (display updates per second), as a function of resource use - that is, what pushes the most text, the fastest, with the most displayed frames (sent to the actual display/monitor) corresponding to discrete states of display output (vis-à-vis the terminal), per second?
Some terminals bottleneck the standard-out → display path; some hardcode the display rate.
A real 500fps terminal (with 500Hz display) would be nice, because timing jitter would be very low no matter what keyboard autorepeat rate you used. Autorepeat would appear smooth even when the frame rate isn't an integer multiple of the repeat rate. Although in practice something like 120fps is probably sufficient if used with MPV-style motion interpolation.
Projects like this are so so close but fall just short of the ideal. I've been thinking about this for years but I have not been at the point in my life where I could implement my ideas which are these:
1.) A UI which is just a line/text field to enter commands. Something like the command prompt but which fuzzy matches commands like the mini-buffer in emacs or the omni text field in Chrome or Firefox or even Enso from a few years back.
2.) Each command is namespaced to an "agent" to avoid command collisions. For example, agent 'jarvis' would have a set of commands it responds to, like jarvis/foo, jarvis/bar, or jarvis/baz.
3.) The output of each command is a list of 0..N items/objects rendered in a master/detail view where navigation over the list shows a detailed view of each object/item in the list.
4.) An item/object can be anything from an email, rss entry, web page, graphic, tweet, contents from a text file. Basically anything that is renderable.
5.) The output of any command can be piped to any other command which is able to parse the list of items/objects from the prior command and render its own new list.
This UI paradigm seems to cover an incredibly large set of use cases. The only use cases I can think of which are not covered are those where the keyboard input device is not sufficient; such things as graphics manipulation where a mouse or pen & tablet are needed.
The frustrating thing for me has been to witness the vast number of systems over the years that have nibbled at the edges of this paradigm but have not gone all the way. What I'm talking about mostly here are the numerous launcher systems like Enso, Quicksilver, or dmenu. All these systems have UIs very similar to what I'm talking about, but they're restricted to launching existing apps and controlling the options exposed in the menus of existing apps.
The other class of applications I've seen that come close are the ones like that mentioned in this topic. Applications like notty where the effort is spent trying to shoehorn extra rendering capabilities into a terminal emulator.
What I want is essentially a Grand Unified User Interface (GUUI) such that applications as we know them are done away with and we only deal with commands and output.
A system where I can type web/news.ycombinator.com and a one item list comes back with that first item selected by default in the details view. And that item is the front page of Hacker News. Then I could next type email/inbox and a list of emails in my inbox are rendered. And of course while viewing one of the items in my email/inbox I could type email/reply which would render a text area to reply to my previously selected email.
As I said earlier, the use cases seem endless and this paradigm seems like it would be incredibly efficient for those who can type well.
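To make the model concrete, here's a minimal sketch (entirely hypothetical names and types on my part, not an existing framework) of the agent/command/item idea described above: every command produces a list of renderable items, so any command's output can be fed to another command.

    /// Anything renderable in the master/detail view: an email, a web page, a tweet...
    #[derive(Debug, Clone)]
    struct Item {
        title: String,  // shown in the master list
        detail: String, // shown in the detail pane when selected
    }

    /// A command is namespaced to an agent (e.g. "email/inbox") and maps an
    /// input list of items to an output list, so commands compose like pipes.
    trait Command {
        fn name(&self) -> &str;
        fn run(&self, input: Vec<Item>) -> Vec<Item>;
    }

    struct Inbox;

    impl Command for Inbox {
        fn name(&self) -> &str { "email/inbox" }
        fn run(&self, _input: Vec<Item>) -> Vec<Item> {
            vec![Item { title: "Re: lunch?".into(), detail: "See you at noon.".into() }]
        }
    }

    fn main() {
        // "email/inbox" produces 0..N items; a real UI would render them in a
        // master/detail view and let another command consume them as input.
        for item in Inbox.run(Vec::new()) {
            println!("{} -> {}", item.title, item.detail);
        }
    }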