Kitty – a fast, featureful, GPU based terminal emulator (kovidgoyal.net)
452 points by Ayey_ on Sept 6, 2018 | 315 comments



I've been using Kitty for a month now. I really like it. It's slightly less usable out of the box than iTerm 2. The default shortcuts are very counterintuitive (no CMD T for new tab or CMD W to close tab). But editing the preferences is straightforward.
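Something along these lines in kitty.conf gets you Mac-style tab shortcuts (a minimal sketch; the map syntax and action names are from memory of the kitty docs, so double-check them):

    # approximate macOS-style tab bindings
    map cmd+t new_tab
    map cmd+w close_tab
    map cmd+shift+] next_tab
    map cmd+shift+[ previous_tab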

I prefer it to iTerm because it's so. Damn. Fast. It's the only software (besides Sublime) I run on my laptop that actually feels like it's using the 40 years of transistor improvements I'm paying for.

I know iTerm recently added a Metal renderer but I need ligature support, which iTerm Metal doesn't have.

Maintainer has also been responsive on GitHub.


> It's the only software (besides Sublime) I run on my laptop that actually feels like it's using the 40 years of transistor improvements I'm paying for.

I wish more software fell into this category. It's still a little odd to me that something as conceptually simple as a terminal emulator should require a full blown GPU in order to perform smoothly.


It's more that we've introduced tons of layers between keypress and screen. This is very useful, but it introduces latency throughout the system. Fighting the latency while keeping the features means we have to adopt new techniques.


I think both Fish and Kakoune fall into this category. They're a shell and a CLI/TUI text editor that feel incredibly modern compared to their predecessors (i.e. Zsh and Vim).


Why? Everything that seems smooth is that way because of the GPU.


It's a terminal. Text on a plain background shouldn't be hard to do. It shouldn't involve lots of fancy hardware. We've been doing it for decades with significantly less latency on significantly slower hardware.

If you want to see just how fast it can be, grab a Linux instance, press Ctrl+Alt+F1 to drop to a GUI-free virtual console and see just how responsive and how fast it can run.

It's kind of nuts that we have to rely on something as powerful as hardware acceleration via GPU to make plain text rendering on a screen in a GUI approach the same speed as CPU rendered text in a straight terminal console.


To be fair, the text mode rendering you're using as a benchmark is implemented in hardware as well.


Indeed, a more fair comparison is to compare GNOME Terminal to xterm. You should see that xterm is way faster, but it's also uglier and has far fewer features. Overhead isn't always about worthless bloat.


Until you think about how complex text is in a modern graphical environment. Thousands of different characters, many of which can compose, all vectors with curves that need to be rendered and visually optimised with regard to their position relative to the screen pixels, but also with regard to their size.


It's on some crazy high resolution display. If you switch to text mode, your resolution drops a ton, and the required pixel pushing power drops accordingly.


> I prefer it to iTerm because it's so. Damn. Fast.

Yup, I was also pretty surprised by this. I switched to Kitty recently, after terminology broke for me after an update. It's surprisingly hard to find terminal emulators that run on Xorg and on Wayland without making a fuss.

Kitty does this beautifully and I had a real a-ha moment because of its speed the first time I tried it.


Have you managed to configure it to behave similar to iTerm 2 in terms of keyboard shortcuts? Would you please share your config?

One thing is, I don't get its layout system. Is it possible to create a vertical/horizontal split like what (Shift+)Cmd+D does in iTerm 2?


Ctrl-Shift-Enter opens a new "window" -- this is the terminology in the Kitty docs; it's a new terminal within the same frame, both being visible at the same time.

Ctrl-Shift-L goes through all enabled layouts. A layout specifies how the "windows" are arranged. Simple splits are available among others. I like the stacked layout: window 1 fills the full height on the left half of the screen, all other windows are stacked on the right half.

There does not seem to be a way to control the borders within the layout with the mouse; e.g. temporarily resize window 1 in the example above to be 80% of the width. That's something I sometimes miss. It's configurable, of course, but not ad hoc.
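For reference, the relevant kitty.conf bits look roughly like this (a sketch; option and action names are from memory of the docs, so adjust as needed):

    # layouts that ctrl+shift+l cycles through
    enabled_layouts tall,stack,grid
    # open a new "window" (pane) in the current tab
    map ctrl+shift+enter new_window
    # cycle to the next enabled layout
    map ctrl+shift+l next_layout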


You can freely resize layouts using the keyboard (but not the mouse), see https://sw.kovidgoyal.net/kitty/#layouts, but be aware that it currently does not work in the released kitty because of a regression, so you will need to run kitty from master.


Thanks, that's... certainly better than nothing. Works in the 0.11.3 release that I'm running.


It seems heavily inspired by xmonad.


That's something I really need as well - I like how fast kitty is compared to iterm2, but I'm just so used to using the splits in iterm2 + using a global keyboard shortcut to show and hide the terminal. I couldn't find anything similar in a quick glance over the config :/


For showing and hiding with a global hotkey, couldn't you rely on your DE/WM to handle that for you?


Sorry, DE/WM? I assume WM is Window Manager, but can't think of what DE is, except Desktop something?

I'm using macOS and unfortunately not familiar with how I'd do that, but I'm sure it would be easy with something like i3wm on linux (which I still miss after moving away from linux 3 years ago :/), but I don't know what I could use in macOS.

Maybe I could find some third-party solution just to show/hide a particular application. I think if I had that + if I could figure out tabbing/tiling behaviour of Kitty, I would switch to it. Those two things are the main things that I need from iterm.


You can run a tiling window manager like i3 and xmonad on macOS! See:

https://github.com/ianyh/Amethyst


Global shortcut to bring up iterm2 is the main reason I use it!


From a marketing perspective I was wondering: how did you discover it a month back? Product Hunt, or here itself?


It has been listed on the Arch wiki's list of available terminal emulators for a long time, and I often use the wiki to look things up even when having to configure things on non-Arch systems.


It's been mentioned on here a few times in conversation. That's how I stumbled across Kitty.



It's easy to find if you search for terminal emulators with low latency.


Different conclusion for the same article: I picked mlterm

https://anarc.at/blog/2018-05-04-terminal-emulators-2/


I wanted to try a new terminal, so I looked up the terminal latency comparison article that was making the rounds a few months ago. Of the ones that ran on OSX, Kitty seemed to be worth trying.


Do you use tmux? The tmux/vim combo on MacOS seems to have minimal performance gains when using GPU accelerated terminals, unfortunately.


This is what I was curious about as well. Screenshot shows a split screen though so perhaps some tmux-like functionality is available natively?


Indeed, it is! And I find that it's much better than tmux performance-wise.


Could you share your config that makes Kitty behave more like iTerm 2?


As another user of iTerm2, I am currently using several keyboard mappings from key to key (like Tab being interpreted as Esc), do you know whether Kitty supports that? As far as I can see in the docs, only examples of key to action are shown.


You could do that at the macOS level using karabiner for example. I have Caps lock as esc when tapped and Ctrl when held.


Why do you need ligature support?


Not OP, but they are an essential component of some languages' writing systems, so it could be that.


If you code in the console with this : https://github.com/tonsky/FiraCode ??? Dunno...


In this case, it's more of a 'nice to have' feature than a 'I need it' feature.


Ligatures make Haskell, Rust and JS roughly 18 times more readable, I've gotten so used to them that I hate reading code without them.


iTerm 2's global hotkey + half-screen pane combo has me stuck on it. Not gonna leave until I can do the same with another terminal emu.


I'm having trouble putting this into words without sounding smarmy, so please, that's not my intention.

How are you attributing speed to Kitty over iTerm? Wouldn't most of the sluggishness of anything be more attributed to the programs you're running in the terminal?


Try it. You'll be surprised. An amazing amount of lag comes from the routines needed to display the text some utility prints, not the utility itself.


I love this seemingly new tendency of re-building the most basic things with much better design choices than originally made in the '90s (or '70s) — it's sometimes baffling how people are OK with using weird, unintuitive tools all day every day, because it's always been this way and after spending a couple of months on it everyone gets used to it.

I mean, I love how user friendly (meaning configuration and all) kitty is out of the box. That's what tools made by programmers for programmers should look like.

That said, kitty behaves a little weird when scrolling back and forth in vim — at least on my laptop with my vim colorscheme. It leaves the background of a line black until I jump to that specific line or simply refresh the screen. I wonder what might be the issue here.


The challenge is that if you build something new, users will expect it to have all the features that those 30 year in development tools have. Take idk, a chat client - used to be enough to have a list with just a timestamp, username and the message itself. Make a chat client nowadays and people expect emojis, pasting all kinds of images, web site previews, bot / api integration options, two factor authentication and everything from the start. The barrier to create something new is just a lot higher nowadays. IMO anyway.

Just look at how often a new product - like this one - is compared to existing products on HN, and how many comments are about what features other products have that are dealbreakers.


> I love this seemingly new tendency

It's not new. Just one example from the 1980s is NeXTStep, which did away with the cryptic Unix directory names (/opt/sfw, /usr/ccs, /usr/mbin, /usr/rbin, /usr/5bin, and so on) in favour of /Library , /LocalLibrary , /Apps , /NextApps , and so forth.



Thank you, I just installed it and am simply playing around, so didn't actually read the docs yet. This seems to help, however redrawing is still kind of weird, with artifacts appearing when resizing windows and such. Probably has to do with laptop's quasi-GPU. Also, alt+tab doesn't switch back to kitty's window, when kitty is in fullscreen mode.


A 747 cockpit looks pretty unintuitive to me, but the recognition that I don't know how to fly a 747 prevents me from passing premature judgement on it.


Another GPU-accelerated terminal emulator is alacritty[0].

Although it has a lot fewer features than kitty and really is only usable with tmux (or GNU screen, whatever), it is more or less perfect for people who want their terminal to do just one thing and that one thing pretty fast.

Disclaimer: Haven't tried kitty yet and have been using alacritty for nearly a year now.

[0] https://github.com/jwilm/alacritty


I was pretty impressed when I found alacritty a while ago, but I stopped using it as I soon found out that any speed gain I got from using the terminal is pretty much negated by having to use tmux in order to have any form of window management.


Are you by chance on macOS?

This seems to be a general issue on macOS with any terminal emulator, according to [0].

[0] https://github.com/jwilm/alacritty#faq


Could you please provide more details on the speed issues with tmux?


It's, well, slow. Haven't done any measurements, but it's noticeable. To do its work tmux must basically be another terminal emulator, and it seems it isn't the fastest at it. It's still strange; neither a little context switching (tmux is an extra process in the pipeline) nor doing the terminal work should amount to more than a millisecond. I think.


I haven't had any speed issues with tmux. Maybe you have something in your config that's slowing it down?


It's extremely noticeable in certain situations on MacOS.

Use a full screen terminal with a 12pt font on a 4K display. Split the tmux window into quarters and pull up some logs in each pane. Now use the mouse to click and drag on the center vertical border; quickly resize it from left to right. The border takes about half a second to catch up with the mouse cursor as tmux repeatedly reflows all the text on the screen.


Different people have different sensitivities. I also don't want to use Gnome Terminal or most other terminals for that matter, because of latency. I also find Windows Cmd window slow. Never done any measurement, but xterm feels right.


> There are no benchmarks in which I've found Alacritty to be slower.

It's actually very, very slow.

https://danluu.com/term-latency/


I have a hard time believing this because it doesn't make sense that Hyper, an electron-based terminal emulator, would be faster than Alacritty.


Don't underestimate how bad latency can get when you're not targeting latency.


I prefer alacritty because it's less bloated. I always use dvtm


Alacritty seems far more popular (partly because it is written in Rust) though it is only some months older. But, beyond being GPU-based, it offers nothing more over other terminals. (Actually it offers less, being as slim feature-wise as st.) For Kitty you can even ignore GPU rendering and still have some good stuff.


I tried alacritty some time ago on Ubuntu and in my tests it was considerably slower than urxvt. In practical use both are fast enough, no perceptible difference, but urxvt is much more mature.


I also re-tried Alacritty recently, but I had severe issues using it in Sway. If it's one of two tiled windows, it warps the text in the terminal, so it isn't really usable for me.


Side note, the author's home page is an absolute relic from the past: https://kovidgoyal.net , including things like provisions for 56k modems, iframes, detection scripts for IE3 and AOL, and other gems.

It even looks like he has his own pre-jquery compliance library written about 18 years ago: https://kovidgoyal.net/scripts/VisualDocumentAPI.js


Wow, he's the creator of calibre [0].

[0]: https://calibre-ebook.com/


Oh man - there's an Array.prototype.add definition in there. This was written before browsers could be expected to have Array.push(). That's insane.


And most of it still works perfectly fine on an iPad Pro, in a completely new browser from the future. Good times!


On safari desktop and mobile, I get:

> W A R N I N G! Your browser is not supported by this site. I cannot guarantee that things will work as they should. Consider downloading either Mozilla >=1.4 or Internet Explorer >= 6


Considering Safari has only existed for about 15 years, and the site is older than that, I think it makes a very reasonable suggestion.


It's the same for Firefox (61.0.2), maybe the version number is detected wrongly on the new versions.


And it's so damn fast!


Could be even faster with modern tech like HTTP2, HTTP2/PUSH, gzip, brotli and all that while looking much nicer with some basic CSS. Some additional speed could probably be squeezed out with something like anycast DNS.


I'm not sure what you're talking about. His site is served with SPDY and HTTP2 protocols https://i.imgur.com/LqGSHUj.png


AFAIK SPDY is the predecessor of HTTP2 and the site doesn't use HTTP2/PUSH.


In theory, yes. In practice, all that is inevitably coupled with a monstrous pile of JavaScript frameworks and dynamic loading and whatnot, and takes 10 seconds to load on good days.


> DHTML

takes me back!


Not to be confused with Kitty, the terminal emulator.

http://www.9bis.net/kitty/


Beat me to it. Do open source projects just not check for name conflicts when creating their packages?

It would seem prudent for this kitty to consider changing its name to help alleviate confusion.


Yup, I was reading through thinking there was some major update to Kitty and was a bit confused when the page said it was only available for Linux and Mac.


Exactly. A bit unfortunate since this one is a successor(?) to PuTTY, which is widely used, so this name clash is kind of prominent.


Kitty is a fork of Putty, or maybe more precisely it's built on putty. I used it a lot in my previous job as we didn't have key-based auth on customer servers. With Kitty I could create profiles with login information, and where needed automatically switch user to root. Good times.



AKA: KiTTY


Please forgive my ignorance: what do these terminal emulators do better than a basic terminal application, such as Terminal.app? My work with terminal is quite basic, I edit a few files, execute a few commands and sometimes cURL something. Terminal.app has some basic colors, has tabs, and works out of box without any visible performance issues for basic usage. What improvements would these emulators bring to my daily life?


You may have better font rendering (anti-aliasing support on low-dpi screens), faster rendering and scrolling, better support for terminal attributes (crossed-out, overline, italic, etc). iTerm does a lot of optimizations so it can render and scroll faster (I volunteered to add a feature to it, and I dug a bit through the code).

For instance, iTerm supports italic, which Terminal.app doesn't. Gnome's terminal (vte, actually) does support overline (I volunteered adding it to iTerm, but I'm nowhere near close), which is handy for status lines, as well. I think there is one that takes ANSI codes seriously enough to support double-height and width.

To see what's missing, you can look into https://en.wikipedia.org/wiki/ANSI_escape_code
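A quick way to check what a given terminal supports is to print the SGR codes from that table directly, e.g. 3 = italic, 9 = crossed-out, 53 = overline:

    printf '\033[3mitalic\033[0m \033[9mcrossed-out\033[0m \033[53moverline\033[0m\n'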


Terminal.app is orders of magnitude faster than iTerm, or any other alternative. As in: less than a second v 30 seconds for cat <big file>.


With the metal renderer?


> For instance, iTerm supports italic, which Terminal.app doesn't

It does (just checked).


I believe it's only supported if the font has a defined italic version, and it's unable to generate italic from a regular face.


Which is a major no-no anyway, typographically speaking.


We are talking about terminals. Italics used to be drawn by skewing the pixel clock.


Sure but we have bitmap displays now. There’s no reason our tools can’t take advantage and adapt to the times.


True, but generating italics when they are not explicitly defined is a perfectly reasonable thing to do.


> What improvements would these emulators bring to my daily life?

If you're a light user, probably not much.

I spend probably 2/3rds of my day at a command line, and iTerm has a ton of handy usability features that make life a little nicer. With kitchen-sink apps like this, everyone uses them differently, but things that make my life easier include:

- command-up arrow to scroll up by command
- programmable appearance changing (using the server-side shell integration) to visually prompt me when I'm not my usual UID and to show machine names in the background of the window
- deeply customizable minor usability details, like the ability to copy-on-select, which I use all the time but some people hate

Nothing that is going to empower you to do things you can't now; rather, just nice features that address the quirky needs of heavy shell users.



Why render with the GPU? I don't think I've seen a visually slow terminal emulator since 2000 and barely even in the 1990's.

Text rendering is basically just blitting cached glyph bitmaps into a buffer, and CPUs have been more than overly fast doing that for eons. And CPU rendering has none of the compatibility problems/quirks that GPUs have. I can fire up a Gnome Terminal onto an unaccelerated Vesa X11 display if my graphics card doesn't work or I haven't configured it yet.
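To illustrate how little work that final step is, here's a toy sketch (not any real emulator's code): once glyphs are rasterised into a cache, drawing a character cell is just copying rows of pixels.

    /* toy example: copy a cached 8x16 glyph into an RGBA framebuffer cell */
    #include <stdint.h>
    #include <string.h>

    #define CELL_W 8
    #define CELL_H 16

    /* pre-rasterised glyphs, one per ASCII code point, filled elsewhere */
    static uint32_t glyph_cache[128][CELL_H][CELL_W];

    static void blit_cell(uint32_t *fb, int fb_width, int col, int row, unsigned char ch)
    {
        uint32_t (*glyph)[CELL_W] = glyph_cache[ch & 0x7f];
        uint32_t *dst = fb + (size_t)row * CELL_H * fb_width + (size_t)col * CELL_W;
        for (int y = 0; y < CELL_H; y++)
            memcpy(dst + (size_t)y * fb_width, glyph[y], sizeof(glyph[y]));
    }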

For better responsiveness, I suppose a terminal that behaves like mosh would be a good approach: it would maintain the tty state in memory and only render the latest view, dropping the rendering of any intermediate diffs in the buffer. I would guess most terminal emulators do work like that these days, but I haven't checked.


Input latency makes for a bad user experience. People just usually don't notice it when typing as it's so common. See https://danluu.com/term-latency/ for measurements of some terminal software and discussion of why latency is bad.

I'm not sure if Kitty is using GPU specifically to minimize latency, but it would be a good reason.


I work in video and as a consequence I tend to be very sensitive to latency (I've spent more hours than I can count trying to figure out subtle synchronization issues) yet I've just spent one minute trying to detect input latency on my st terminal (which fares pretty badly in the benchmark you linked) without success. And that's running inside tmux on top of that.

That being said I think it is valuable to have a reasonably fast terminal because it's not uncommon to have applications slow down dramatically if they output a lot of information on stdout/stderr while the terminal struggles to render everything.


I would have said the same, until I recently had to boot into an UEFI shell.

Input was just so .... immediate? It’s really hard to describe, but it was surprisingly obvious.


Yeah, the feeling is pretty much "the letter appears while I'm pushing the key down" vs. "the letter appears when the key returns to the top position". Both feel immediate in their own way.


I don't know. Try using a simple, $2 (physical) calculator against the screen of your computer.

I can definitely tell you that while I can't see the actual letter appearing on the screen - the letter on the calculator would feel as if it appeared right at the bottom position of the button, while on my PC the key would have travelled some distance off the ground before I see the letter.

It's not uncomfortable, but I could easily tell the two apart in a blind test would something like that come up.


From what I can gather from the results, st is among the faster ones when tested on linux.


I would have thought using a GPU would make latency worse. GPUs are really good at throughput, not latency.


Usually, offloading to the GPU will get you higher throughput but also higher latency. Although of course this assumes a sane software renderer; X11 API calls will take forever.


Really? It's counterintuitive considering the GPU draws the screen anyway. Is it because the shader pipeline has some inherent latency that a software renderer doesn't?


> There are no benchmarks in which I've found Alacritty to be slower.

It's depressing that he's only ever tried for throughput, when that's irrelevant next to latency. And it does have abysmal latency as per your link.


Because on modern HiDPI screens, text rendering involves pushing around a lot of pixels. Especially if you do stuff like having translucent windows, anti-aliasing, complex font rendering, smooth scrolling, etc. So, it kind of makes sense to use the GPU for this. Of course on OSX, much of the desktop is already hardware accelerated so it is easy to overstate the impact here. Apple fixed this when they launched OSX in 2000 or so.

I've been using iTerm for years; when they introduced their Metal rendering backend the impact was very subtle but noticeable. Anyway, given the amount of time I spend using CLI stuff, it kind of matters.


Text rendering is very complex. GPU text rendering is, however, unlikely to be faster than "native" text rendering. But it could likely make scrolling smoother.


Rendering the vector fonts into cached bitmaps is complex. Rendering the final buffer by blitting those bitmaps isn't very complex.

Smooth scrolling is inherently bounded by the vsync rate of the display. As long as you can render the terminal's pixel buffer with new characters and send it to the front for the next flip, all within some portion of the time you have between frames, it's as smooth as it can get. Missing a frame will cause stutter and make it non-smooth. Given that a terminal doesn't necessarily need to render if there are no changes, using the GPU seems like overkill for regular use.
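To make the "only render when something changed" point concrete, here's the rough shape of such a main loop (the helper functions are hypothetical placeholders, not any real emulator's API):

    /* toy main loop: drain all pending output, then draw at most once per refresh */
    extern int  bytes_available(int fd);
    extern void feed_parser(int fd);      /* read + parse into the in-memory screen model */
    extern int  screen_dirty(void);
    extern void draw_screen(void);        /* render only the latest state */
    extern void wait_for_vsync(void);     /* block until the next display refresh */

    void run_loop(int pty_fd)
    {
        for (;;) {
            while (bytes_available(pty_fd))
                feed_parser(pty_fd);
            if (screen_dirty())
                draw_screen();
            wait_for_vsync();             /* at most one redraw per frame */
        }
    }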


You could use it to watch youtube in ascii mode if that somehow tickles your fancy


Nearly all text rendering libraries I've checked use the GPU.


freetype definitely doesn't use the GPU and it's the most used font rendering library on Linux. What libraries did you check?


Best terminal I've found except for a complete deal-breaker for me: Lack of support for bitmap fonts (meaning I cannot comfortably use them in low DPI monitors), and the author has no intention of adding them.

A pity, maybe some day either the author changes his mind or Alacritty will be less buggy to make it a straight upgrade to Kitty. In the meantime, Termite it is.


Likewise, I found this as I was trying to make it work with Terminus:

https://github.com/kovidgoyal/kitty/issues/106

Obviously using the truetype version (which isn't pixel perfect but a mess) is not an option.

Too bad. I'll stick to libvte (sakura).


Two simple speed tests:

    time find ~
With the output redirected to /dev/null it takes 8 seconds.

alacritty takes 8.5 seconds and uses 75% CPU

kitty takes 15 seconds and uses 100% CPU

konsole takes 16 seconds and uses 100% CPU

    time for i in {1..2000000}; do echo $i; done
With the output redirected to /dev/null it takes 8 seconds.

alacritty takes 16 seconds and uses 75% CPU

kitty takes 17 seconds and uses 100% CPU

konsole takes 16 seconds and uses 98% CPU


Can we not start throwing around these bullshit `time blah` tests. They don't measure what people think they measure. It doesn't measure rendering speed at all.

It does vaguely measure two things:

1) How big the buffer is on the terminal when it reads in lots of output. If the buffer is as big as the output then your test reports a figure close to the `/dev/null` test. Not useful.

2) How good the terminal is at SKIPPING the rendering of lots of output. No normal terminal tries to render all of the text on screen.


All that, and it gets worse. "time find ~" always reported something like 2.x seconds for me, no matter which terminal I used (kitty, alacritty, urxvt, st, konsole), but I know from observing the scrolling lines that it always took about 15 seconds in each terminal. It basically means the command has sent the characters to the buffer, and then exited, but the terminal has not finished rendering its buffer.


Please suggest better methods then.


I think that https://pavelfatin.com/typometer/ gets close to checking the actual perceived quickness of a terminal. It measures the time from keyboard signal to pixels on the screen.

I would much prefer a terminal that stays interactive while text is scrolling in other parts over raw speed. Open up two panes in tmux or your favorite split screen, with text scrolling in one, measure latency in the other pane with typometer, and I will use whatever comes out on top.
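A concrete way to set that scenario up with a stock tmux (flags as per the tmux man page) would be something like:

    # left pane scrolls text continuously, right pane stays idle for typometer
    tmux new-session \; split-window -h \; \
        send-keys -t 0 'yes "scrolling filler text"' C-m \; \
        select-pane -t 1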


I think this works:

    START=`date +%s.%N` && find ~ && echo $START && date +%s.%N


Assuming the terminal programs don't 'announce' when they've finished rendering, the only way to this test properly would be a white-box approach, hooking into the code of the various terminal programs, right?


Grab framebuffer at refresh rate and measure the time until your end-of-test marker appears in the FB.


That'd do it.


A good stopwatch.


I do not care about these numbers at all.

How long between key press and character render? That's all I care about. 24 cores and I still have lag when I type. My 7 MHz Amiga 500 had a more fluid GUI than modern PCs. It's disgusting.


A significant part of that lag might be coming from the input device you are using. E.g. recently people noticed that one of the latest (?) MacBook keyboards had more latency than an external Apple keyboard.


I'm guessing you're on KDE (as am I) but it would be cool to see GNOME Terminal compared also. What I got from these results is that Konsole is already good enough to not bother switching.


Yes, I'm on KDE on X. gnome-terminal does not work (easily) on my distro so I cannot measure that. I did measure xterm which takes over a minute for both tests and also stresses out X. None of the other terminal emulators did that.

Still, I wonder if the terminal emulators could use less CPU and be faster if they refreshed only with the frequency of the monitor.

An interesting insight from the tests is that the execution speed of a program can depend on the speed of the terminal emulator.



Terminology 1.1 takes 26 seconds and 100% CPU on find and 15 seconds and 90% CPU on the bash counting.


I switched from Gnome Terminal after running a large duplicity backup with -v failed repeatedly, apparently because the terminal couldn't handle the load. Switching to urxvt256-ml solved the problem.


When the terminal is too slow, the producer is throttled. This is called flow control, which is implemented by putting the producer to sleep when the buffer between the producer and the consuming terminal is full. No backup program should fail because the terminal is too slow.


I guess that would depend on whether the backup is reading from and/or writing to an external drive without read/write buffers (e.g. the early CD writers from the '90s). Having the handler program paused might cause I/O errors there, which would legitimately cause the backup to fail.

However I do agree with your point that this "shouldn't" happen in practice (ie any decent hardware you'd expect to have buffers to prevent that kind of write errors).


It is most certainly reading from a drive, or from a TCP stream. Reading from CD-ROMs shouldn't be a problem either. In other words, I doubt that the terminal was the actual problem.


It reads from a drive, encrypts each file and writes it, so there is an element of a large buffer to hold.

I wouldn't think that the terminal is the actual problem, but running the same job side by side consistently failed on the gnome terminal and finished successfully on urxvt.

On a side note, the OP's remark on CD-ROMs was regarding writing: the CD-R write feed had to be an uninterrupted, continuous stream, and the smallest hiccup would blow the operation and render the media useless.


Yup. It wasn't fun writing CDs back in those days.


`time find ~` took 5.12 seconds and 38% cpu in gnome-terminal here. kitty took 4.32 and 48% cpu, in comparison.

EDIT: alacritty took 4.15 and 54% cpu.

I should note these results are all from second runs.


Not affiliated, but recently switched to Kitty from iTerm 2 on OS X on my main at-work MacBook Pro and I’ve been very impressed. It’s snappy, easy to configure and works well with my zsh/tmux setup with a few tweaks. I would love to drop tmux in favor of kitty’s sessions and multiplexing, but old habits die hard, so it will take some time.


What does kitty buy you on top of tmux that is worth locking yourself into a particular terminal?


Speed.

Locking how? He can keep his tmux configuration and use it when he needs to in the future.


What tweaks?


Terminal emulators have tabs/windows, tmux has tabs/windows, vim has tabs/windows. All with different keyboard shortcuts and semantics.

I wish this were all unified into a single window/tab/keybinding model. That was easy to code against and write your own interactions for.

I keep hoping I'll see a boundary-pushing project like Kitty do something new in this space.


vim-tmux-navigator[0] is probably the best thing I've installed in my terminal for transparently moving between splits. Using it has allowed switching splits to become so thoughtless that it's automatic as I move my eyes.

[0] https://github.com/christoomey/vim-tmux-navigator


This really bothers me too. For a while, I've been considering writing a window manager (or wayland compositor) that tabs all windows by default, and provides an easy model around un-nesting tabs and tiling the new window. Something like the tabs of i3 meets the simplicity of bspwm. The idea being I can then just use new windows for applications, and have the same keyboard shortcuts and semantics for tabs everywhere.


kitty has remote control (the ability to control it from scripts or the shell prompt) which you can use to integrate its windowing/layouts with other programs. Not sure what else the terminal emulator could do in this space, but if you have ideas, feel free to discuss them on GitHub:

https://sw.kovidgoyal.net/kitty/remote-control.html
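For the curious, from a shell it looks roughly like this (it needs allow_remote_control yes in kitty.conf; command and option names as I remember them from those docs, so verify there):

    kitty @ ls                                        # dump the tab/window layout as JSON
    kitty @ new-window --title logs tail -f /var/log/syslog
    kitty @ focus-window --match title:logs
    kitty @ send-text --match title:logs 'hello from a script'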


Don't forget the (tiling) window managers, that also have tabs/windows.


Wow. This is from the guy that created the calibre ebook software.


This doesn't work on half of Nvidia-powered laptops:

https://github.com/kovidgoyal/kitty/issues/456


Closed and locked with:

> I'm widely known for my extreme stupidity.

Loving the zero <expletives> given approach from the maintainer. Installing now.


Kovid Goyal is infamous/known to be a bit toxic, see Calibre: https://news.ycombinator.com/item?id=8213946 (and the setuid "issue": https://lwn.net/Articles/465311/)


Personally I'm willing to give the guy a chance -- unaware of this history, I reported an issue with kitty on Github after the last time it was posted to HN. I received a helpful and polite response. He took the time to do some investigation and explain what was going on. I was taken seriously. He may have a reputation for being difficult, but it was not in evidence.


I'm... pretty dang okay with that response on issue 456.

> To elaborate, the GL driver is incorrectly treating const variables in the shaders as uniforms.

That's a pretty serious bug in the driver. I might also simply refuse to work around something that colossally flawed.

And to which someone declares "not a smart move", and he said, roughly, "okay, I'm stupid then ┐_(ツ)_┌". Yeah. Okay. This isn't exactly flaming someone out.

(The calibre/setuid things are a little less impressive. Though on the other hand, again, I can almost see the sense; I also have some serious beefs with the core design of the linux mount API right down to the syscall level.)


He seems to be sarcastic especially when needled, opinionated, sometimes incorrect, and willing to argue a point.

On the other hand he actually fixes things.

Even if you actually were paying money for his services, which most of us are not, which would you prefer: an obsequious, kind person who didn't fix your problem, or a dick that did?

I would take the dick every day of the week.


> I would take the dick every day of the week.

When I'm asked whether it's better to hire a dickhead who is an expert in some field, or a normal person who isn't an expert in that field but shows promise, I'm probably always going with the non-dickhead person.

Dickheads are hard to communicate with, they lower team morale and make teamwork harder, so in the end expert dickheads are less valuable than non-dickheads.


Although the hypothetical rockstar asshole vs nice-but-incompetent seems like a false dichotomy, I don't have an issue with direct or abrasive people in general. Perfectly happy using e.g. Linux or systemd.

The beauty of open source is you can look at the code and even the issue tracker. I dare you to look at even `setup.py`, a file that's usually quite straightforward in Python, and tell me what it's doing. But if you're happy using kitty as your daily driver, great.

I guess you can dismiss all this as "haters gonna hate" and "what have you made?". Which is why in the comment, I refrained from stating my opinion and linked to other people's experiences/opinions instead. Just don't say you didn't know. And good luck getting the things you want fixed.


> I would take the dick every day of the week.

That came out wrong...


Not an entirely unfair reply to

> Not a smart move, imho.


What I could really use would be a terminal emulator that is:

a library that I can feed data directly into (without a pty)

LGPL, MIT, or BSD

able to quickly serialize its internal state (make me a blob)

able to restore state if I feed it a blob of saved state, even across different software builds

portable to recent versions of Linux and Windows
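Roughly the interface I have in mind (the names are made up for illustration; as far as I know no such library exists):

    /* hypothetical embeddable terminal-state library */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct termstate termstate;   /* opaque: screen cells, cursor, attributes,
                                             scrollback, partially parsed escape sequences */

    termstate *ts_new(int rows, int cols);
    void       ts_free(termstate *ts);

    /* feed raw bytes straight from the emulated UART -- no pty involved */
    void       ts_input(termstate *ts, const uint8_t *data, size_t len);

    /* snapshot support: dump the complete state to a blob and restore from one,
       ideally in a format stable across library versions */
    size_t     ts_serialize(const termstate *ts, uint8_t *buf, size_t buflen);
    int        ts_restore(termstate *ts, const uint8_t *buf, size_t len);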


There's no way you can build a terminal emulator without a PTY and still expect terminal UIs (e.g. stuff that uses ncurses) to work.

Moreover, the session state saving is not possible either since you need to save the internal state of running programs as well as their cached output (saving the output is really very easy but it's the least of your worries given the specifications you've described).

You might also find this problem is better solved with a $SHELL replacement rather than a terminal emulator.


Yes you can. I want to hook this up to an emulator (like qemu or MAME) with an emulated serial port. That is the PTY equivalent. I'm writing this emulator.

So the emulator gets bytes from the emulated virtual machine, and then what? This needs to get fed into a terminal. While it is possible to invoke xterm with the very buggy -S option, there is no way for the emulator to suck the state back out of xterm for coherent snapshots. Simics uses xterm with the -S option. It sucks.

A library interface is required. The whole point of a library interface is to allow session state saving. Using a $SHELL replacement is way off. That has nothing to do with the problem.

So, if you were hacking on an emulator (qemu, MAME, Simics, VirtualBox, etc.) and you were trying to emulate a device with a serial port or even a modem, how would you get snapshots to contain the terminal state? To make this work, you need that state within the emulator. Passing stuff over a PTY is not going to work.


You're only solving half of the problem (and doing it in a needlessly complicated way at that). As I said, capturing the TTY output is the easy part. Storing the running state of the program that's writing to the TTY is your bigger issue. That's why I suggested a custom $SHELL (I guess there's no need to reinvent the command line so you could probably use a multiplexer instead?) that could manage the running state of the executing programs and their PTY.


Storing the running state of the program that's writing to the... serial port or whatever... is a solved issue. Most emulators and virtual machines can do this. For example, VMWare Workstation can do it. Simics can do it. Qemu can do it.

The state of an entire OS, with all running programs, gets saved. It's more complete than when a laptop/notebook does suspend-to-disk. The emulator saves the CPU registers, the RAM, the disks, the parallel port mode, etc.

The trouble is that a connected terminal is not included in that list.


There isn't anything writing to the serial port. The process of writing to the serial port is fast enough that the IO isn't the problem. What you're after is capturing stuff that has already been output on the terminal, and that can be captured. It's called a scrollback buffer history. Some terminals can even make that persistent.

You keep trying to reinvent terminals with features that already exist to solve a problem that needs to be fixed in the server itself.


I didn't see a "reply" link below. This is a https://news.ycombinator.com/item?id=17949657 response.

I emulate lots of different things. I'll make up an extra-simple case to illustrate the problem.

The guest OS running in the emulator is just a bootable floppy image for testing. All it does is print a counter to the serial port, once per hour. Just after 3 hours it has written 0, 1, 2, 3. At this point I make a snapshot file named 0123.snap and let the software continue running. More numbers get printed. Eventually the terminal is showing 0, 1, 2, 3, 4, 5, 6. I decide that I wish to go back in time, so I issue a command to load the 0123.snap file into the emulator. The internal state of the emulator warps back to the point at which I took the snapshot. The CPU registers, the RAM, and everything else within the emulator are now as they were earlier. The terminal retains the old state however, because it is a separate program with no awareness of the fact that I just loaded the 0123.snap file into my emulator. The guest OS carries on from the original point of course, since that is what the CPU and RAM state dictates, so a "4" will be printed next. The terminal then shows 0, 1, 2, 3, 4, 5, 6, 4. The numbers are simply wrong. They should be 0, 1, 2, 3, 4.

I can only avoid this fate by saving full terminal state whenever I save the emulator state, and of course restoring it at the same time too. There is no reasonable way to extract terminal state from a separate terminal program. (attacking it with ptrace is not reasonable) I thus conclude that the terminal must be built into the emulator.


> I can only avoid this fate by saving full terminal state whenever I save the emulator state, and of course restoring it at the same time too. There is no reasonable way to extract terminal state from a separate terminal program. (attacking it with ptrace is not reasonable) I thus conclude that the terminal must be built into the emulator.

It's called a "terminal multiplexer" and that is exactly what I suggested with the tmux solution right from the bloody start. I've lost track of how many times I've said this needed to be solved on the server yet you repeatedly pushed back on both points when they were made, insisting it was a terminal emulator issue on the client side.


I'm normally not a BSD user. I'm more familiar with "screen" than "tmux". To me, a "terminal multiplexer" is a physical piece of hardware. It has numerous slow serial lines to which one may attach terminals, modems, or consoles. It time-multiplexes them to/from a single high-speed link, typically by interleaving the streams of data bit by bit.

I don't see tmux being available as a library that I could link into my emulator. If it can be a library, then yes my emulator could implement the tmux protocol. This turns out to be almost exactly what I was proposing, with the terminal state implemented by a library within the emulator. I'd just be missing the user-friendly aspect of automatically popping up the terminal windows when my emulator starts, but perhaps that could be arranged by having the emulator start xterm running tmux.

This is comparable to using vnc protocol for VGA. I have this implemented, but nobody likes to use it. Everybody prefers the built-in video window to show the guest's VGA.

If you meant to not link in the tmux code though, that won't work.

Instead of insisting that there is a terminal emulator issue on the client side, I insist that there not be a distinct client. The code that tracks terminal state (cursor position, current character attributes, etc.) pretty much needs to live in the emulator. That is the only reasonable way to ensure that the state can be saved and restored by the emulator. Actually displaying this state could be done by connecting a client (annoying) or just by popping up a window.

Another nice thing about having the terminal windows built into the emulator is that they all go away when the emulator does, even if that is a crash.


To be honest I think you've decided on the worst possible way to solve your problem. But good luck with your endeavour nonetheless. :)


I think it is unavoidable. It follows naturally from insisting on user-friendly snapshots.

It is typical for emulator authors to just give up, letting the terminal be inconsistent after loading a snapshot.

So far I've done just that... but my users love loading snapshots.


This whole conversation is weird, and I'm really not sure what is missing from my explanation. In case you missed it, I am an author of an emulator. I write the emulation, and my emulator supports guest software that needs terminals.

The serial port is an emulated 16550 chip. It is much more than a PTY. It has clock divider registers, an interrupt cause register, a transmit holding register, etc.

Scroll back buffer history won't let me untype something. If I take a snapshot, then a character is output to the terminal, and then I load that snapshot, the character needs to disappear from the screen. Of course, "character" could be something more destructive, like a scrolling operation or the clearing of a line.


I'm the author of several serial interfaces for 8 bit micros plus a bunch of Linux terminal solutions. If you've taken a snapshot of the server and restore that then it should resume where you left it. All the terminal emulator then needs to do is restore a scroll back session. Thus when you press backspace (or whatever) on the terminal emulator it would transmit that character to the server and the server side software will understand that a character has been deleted.

If you disable local echo and have the server side software handle that (like how the SSH protocol works) then it becomes even easier because you then also manage the terminal state (to some degree) via the server as well.

I've written quite a bit of software around this kind of area and it really doesn't sound like a problem that hasn't already largely been solved. I mean if you're feeling really lazy you could just build a terminal server that supports detachable sessions (e.g. tmux or screen) and that would get around the scrollback issues without you having to write a single line of code.

Like I said, I've done a lot of work with serial interfaces much more archaic than what you've described and have been able to get them working without having to write too much magic. So I suspect you've overthought your problem, because it still seems really straightforward to solve from this latest description. Like I said before, terminals literally are just flows of text (ironically there's even less complexity there since you're dealing with serial interfaces rather than PTYs) so all you actually need to do is ensure your terminal's output is persistent - which is actually a surprisingly easy job.



Just capture the output from the serial port in a buffer inside the emulator, and use that to save snapshots.

You can then also forward any output to a file/terminal emulator/whatever is convenient, but relying on the terminal emulator to save part of your emulator's state seems weird to me, unless you are going to ship the two bundled together.


Terminal state is far more than just text on the screen. For example, an escape code could change the color. After that, there could be megabytes of output created over many hours. Loading a snapshot should not require replaying all that data.

The point of a library is that the two are more than just bundled together. They are linked together, making one executable. The terminal window is a part of the emulator.


Your arguments don't really make any sense with regard to the technology you're discussing. E.g. why would we replay past TTY output? Also, escape codes are inlined, so they would also be captured via the parent's method. In fact, contrary to your point, TTYs effectively are just text on a screen.

The only thing that wouldn't be inlined via control characters or escape sequences would be the PTY operating mode (which is defined via kernel syscalls). I'm talking about stuff like local echo, flow control, how CR/LF characters are handled, etc. However that's solved with a multiplexer (screen or tmux type thing) or a custom shell, thus you could save that running state and reset it when needed (shells actually already need to do this to some extent so they can pass control of the TTY to forked processes. But we'd obviously need to take things a step further and store any state changes made by forked processes as well). However now we are back to my original solution you dismissed.


I'll go through what happens without replaying. The emulator runs, and the guest OS inside it boots up. For this example, suppose it is DOS running a multi-user BBS with 2 modems. This is not a UNIX clone; there is no /dev and there is no shell.

The emulator needs to pop up 2 terminal windows so that we can log in to the BBS. We do that, and a colorful menu is displayed on each of them. At this point, we save a snapshot and shut down the host computer. Later, we start the host computer, then start the emulator, and then try to resume the snapshot. Each terminal window remains blank, which is not the correct state. The guest OS (a DOS BBS) is in the state where it is showing a colorful menu, but we can't see it at all. Our terminals, being separate from the emulator and without an ability to save/restore snapshots, are inconsistent with the emulated guest OS.

In the above example, the guest OS (a DOS BBS) never did any kernel syscalls related to PTY operating mode. It directly acted on serial port hardware. There is no PTY, and adding one (where?) doesn't help.

Fundamentally, the emulator needs to be able to save/restore all terminal state. That even includes stuff like incomplete escape sequences, such as when just the first byte of the escape sequence has been passed to the terminal.


Right, that's completely different to the thing originally described. It's also not possible without running a bespoke BBS server that can handle stateless connections. Again what you're trying to solve at the client side should really be solved at the server side.

I don't know why you even mentioned PTYs (now I understand your requirements). A PTY is just a detachable interface for local terminals, but since you're connecting via serial anyway there's no need for a PTY. There's also no need for a special emulated serial interface like you suggested because the DOS VM would have one already baked into it, just use that and have the BBS listen on a couple of COM ports.

Regarding storing the previous session, there are already terminal emulators out there that will restore the scrollback session from a previous instance, so you might find your ideal terminal already exists.

It still seems like a daft problem you're trying to solve though. Putting aside that you're suspending a server for no apparent reason (why would you even want to do this?), you're effectively running a forum for one local user. But I'm sure you have your own reasons for these anti-patterns as well.


It's not different. A bespoke BBS server is not an option; the whole point is to run an existing piece of software for which there is no source code available. Suggesting a bespoke BBS server is like telling a person running Super Mario All-Stars in a Super Nintendo emulator that they should upgrade to a PC game.

I have implemented (wrote C code for) what you refer to as "a special emulated serial interface" and thus "the DOS VM would have one already baked into it". That is my code and it works fine, and I do indeed "have the BBS listen on a couple of COM ports".

I mentioned PTYs because I don't want them. Most terminal emulators will only accept data from a PTY, with themselves on the master side and a shell on the slave side. This alone makes these terminal emulators essentially worthless; to use them I'd really need to have my emulator act as a telnet server and then ask the user to telnet into the server! Perhaps a really screwy $SHELL could perform that automatically, but it still doesn't solve the replay/snapshot issue.

An xterm is very special because xterm supports the -S option. With this, my emulator can fork off a process and then exec an xterm with a file descriptor that I can feed data into. This works OK until I want to load a snapshot.

There may be "terminal emulators out there that will restore the scroll back session from a previous instance", but this gets unusable. My emulator supports multiple named snapshots. Using a terminal program with distinct snapshot functionality would require that the user save/restore snapshots both in my emulator and in each of the terminal programs. I have a case (other than a DOS BBS) where my emulator has dozens of serial ports, so this would place a heavy burden on the user.

Suspending a server is good for debugging it. I can go back to before a crash. I can load a snapshot to bypass slow and annoying start-up.


With the greatest of respect, the way you casually drop technical terms out of context suggests to me that you really don't understand how terminals work in the slightest. (That and the new revelation that you're restoring your BBS snapshots in random orders yet expecting your terminal to be smart enough to guess where to resume). This is why I think we are having such a high degree of difficulty agreeing on your problem.

The reason I commented about hacking the BBS software is because that's the only way to solve the weird requirements you're asking for. You can complain about my suggestion all you like, but it's your unusual requirements that are at fault here, not my solution.

Maybe you need to stop and think for a moment that perhaps what you're asking is silly rather than just assuming that everyone else on the internet is stupid.

The closest workaround you're going to get (and one that is buildable with your experience) would be to build a terminal server (like a minimal Linux distribution running on a VM or Raspberry Pi) which you can SSH onto from Linux or Windows (eg via PuTTY) and which has a serial interface to your BBS software (be that virtualized or physical). Install 'tmux' or 'screen' onto the terminal server and when you SSH onto the terminal server reattach to your screen session. That screen session will be connected to your BBS and you'll then have your detachable but persistent terminal sessions.

Edit: If your terminal server is a VM and the BBS software is a DOSbox instance running on the terminal server then you also only need to snapshot one host and have persistent scroll back PLUS the ability to restore from random snapshots. The only thing you would lose is your TCP/IP handshake to the xterm / PuTTY session but they're detachable anyway via tmux / screen so you'd just have to SSH back into the host and reattach to your multiplexer (screen / tmux).


I have written termcap and terminfo entries, and I am a vttest contributor. I admit that I haven't yet written a terminal. I've owned a few physical ones: VT100, VT220, VT320, VT510, and some awful Televideo thing.

I do restore snapshots in random orders, and I really want my terminal to know exactly where to resume. I don't expect the terminal to guess. I expect it to accept a blob of serialized state data, and of course I also need it to produce that sort of data on demand. I want that data in the snapshot files that my emulator creates.

I could indeed put the emulator, along with separate terminals (xterm) and the OS they require, within an emulator. This works, but adds the overhead of a whole extra OS. It gets confusing to use, takes up lots of memory, runs slowly, and takes up extra space on the screen.


Assuming all that is true, in that case you must be aware that the unusual requirements you're placing makes this completely impossible. I just don't understand how you can claim to understand how terminals work yet still expect the kind of behaviour you're expecting and making the kind of comments you've been making.

Honestly, just stick this stuff behind a terminal multiplexer and move on with your life. You're trying to over-engineer a solution to a problem you're inventing.


That sounds interesting! Out of curiosity, what is your use case for those features?


It is useful for backwards execution. This is supported in various ways by Simics, gdb, SID simulator, rr, the Moxielogic emulator, and others.

It is useful for virtual machine snapshots. You can restore the virtual machine with the terminal state intact.


Am I missing something? VM snapshots also save memory, any terminal will be intact after this?


That would be true if the terminal were within the VM, for example if the VM was a graphical OS running a terminal program.

It isn't true if the VM is communicating over an emulated serial port. In that case, the VM software must provide a terminal as hardware. Typically the VM software only provides a serial port to the VM, which the user will then associate with something else via the VM configuration. This fails when loading a snapshot because the terminal isn't part of the snapshot.

For example, suppose that the VM is a PC-AT running DR-DOS and a BBS. (from the pre-internet days, with a modem for users to dial in on) At minimum, the VM software must provide a serial port for the VM. You could configure this to connect with a real physical modem and then connect to that with another modem and some dial-up software, but then you'd have a problem with snapshots. Loading a snapshot would place the BBS in a state that is inconsistent with the dial-up software. What you really need is for the terminal emulator to be part of the hardware that the VM is emulating. It would then be properly snapshotted.


tcl.tk ?


That's what I was thinking, it sounds like the wish terminal.


This is somewhat of an aside, but the fastest terminal emulator I've ever used is rxvt-unicode. I don't even know if it utilizes the GPU (doubt it), but damn is it snappy. Try it out if you're on GNU / Linux.


This study goes into nicely thought-out detail comparing the speed of emulators; I recommend it. Urxvt is not so blazingly fast at latency, and it achieves its great scrolling speed by dropping content. (It's my current emulator of choice, though!)

https://lwn.net/Articles/751763/


Does it drop it or simply not show it? Will scrolling back through history display the "dropped" content? If so, I'm fine with that; it's annoying to finally finish a task and pipe the results to a file, only to find the bottleneck slowing you down was display refreshes.


Lovely, I came here just for benchmarking of this kind. It's a shame that kitty isn't tested in the article, though.


At one point I tried lots of different terminal emulators but none of them were better than xterm.


I'd love to try out Linux again. Is the rMBP story any better than it was in 2014? Last I heard there were way too many driver problems.


No idea, I've never tried to install it on a MBP. When it comes to Apple hardware, I tend to stick to macOS. It's Unix enough for me and while I prefer something like awesomewm, I'm willing to accept the limitations of the desktop environment and go about my day.

Linux desktop in 2018 is incredible though. On my desktop, I'm running an RX 580 on AMD's open source drivers and I can play modern Windows games at >= 60fps using WINE + dxvk. Steam is even officially supporting it now. Just wait 2-3 years and GNU / Linux will be the way to go.


I'm the same way in general, I prefer the "path of least friction" for my OS and IDE and programming languages etc. I'm using Mac OS X right now. (They may have changed the OS's name by now.) But I also like new and shiny novelties. Sometimes it's nice to configure Arch Linux with dwm and all sorts of stuff like that.

Btw, I'm a big fan of your games, like Link to the Past! The 16-bit era was the best era.


iTerm2 now supports Metal for GPU-based rendering by default (when you aren't on battery). It's now blazingly fast all of the time.


I think iTerm2 doesn't support ligatures when using Metal.

Which is odd, because Kitty does, IIUC.


Hmm. People say that, but I still find iTerm2 slower and more laggy than Apple's Terminal.app…


Yes and it's so good! The Metal API is great and I'm glad iTerm2 supports it.


I switched to kitty a few months ago and it works REALLY well. It's very fast and has the best support for ligatures I've seen in a terminal. With FiraCode, it makes code beautiful in the terminal.


Despite my efforts to try FC, I always come back to Monaco.


I'm using Yakuake (Konsole) with FiraCode. What differences would I see?


I tried it on MacOS (MBP retina) and damn, it renders text awfully.


Same problem here on Linux. It seems to be bypassing FreeType. Nearly unreadable on my 1080p screen. It may be fast and featureful, but the appearance of the GUI and text is currently pretty bad.


The text looks very nice on my rMBP on the built-in display and absolutely terrible on my attached 2k display.


I remember seeing this in iTerm, but it was fixed by changing the subpixel hinting mode. Is there an option for this in kitty? I couldn't find one in the config, unless it has a different name.


Exactly


About a year ago I was deciding between Kitty and Alacritty and I chose Kitty because of 2 important features it had over Alacritty: proper underline rendering (Alacritty just draws underscores) and text selection with Shift+Mouse. Kitty also compiles instantly as it's written in plain old C. The author is also very responsive on GitHub and addresses each issue quickly.


Tried it out and it seems OK. It does seem quite responsive and wasn't too hard to customize. It doesn't have a lot of discoverability out of the box since there are no menus: I had to search through the config file to figure out that tabs exist and that to make one I need to enable the command (see the sketch below). I would love a guide of some sort: perhaps a video showing some of the features and how they would fit into a workflow. I toyed around with tmux, but my current setup relies on panes in iTerm. Since there's no option to transfer that over, it'd be nice to see how to migrate. Looking forward to trying out the kittens later since I didn't mess with those at all.
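
For anyone else hunting: the tab bindings end up being a few lines in kitty.conf, something like this (a sketch from memory, so check the docs for the exact action names):

    # ~/.config/kitty/kitty.conf
    map cmd+t new_tab
    map cmd+w close_tab
    map ctrl+shift+right next_tab
    map ctrl+shift+left  previous_tab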


> Doesn't have a lot of discoverability out of the box since there's no menus

But it does have a comprehensive documentation page: https://sw.kovidgoyal.net/kitty/index.html#scrolling


And now I feel old. Menus in a terminal? Videos explaining a terminal emulator? Migration guide from arbitrary terminal emulators configs? Get off my lawn! Kids these days, never having had to compile their own xterm and then seeing the light of our holy lord and saviour rxvt, and its endless man pages.

There's a huge configuration doc[1], and the homepage already does a very good job of introducing you to its features. You have to read that, yeah, but maybe you can get a screen reader to read it for you. Dunno.

[1] https://sw.kovidgoyal.net/kitty/conf.html


Videos are the worst. No copy/paste. The only way to go back in the information flow is a dumb scroller without any semantic cues. It's a joke.

More importantly, making a video probably takes more time than writing a nice write-up about anything.

I am 26, BTW.


Eventually I was able to figure things out, and the process was simple enough using the config file.

However, there are several behaviors that are atypical for OSX applications, such as missing conventional system shortcuts, and some things I consider confusing, such as the fact that there are OS windows and then there are "windows", which are similar to panes in iTerm but are not actually windows (a config sketch below illustrates the distinction). Perhaps for other terminals or systems this is normal, but it isn't exactly par for the course for the tools I've worked with before.

One of the best things about OSX is that every item that is available in the menu bar has a clear shortcut associated with it and a very helpful search box available via the cmd+? key combo which works as a shortcut for anything you don't happen to know the shortcut to offhand. Menus happen to be one of the fastest ways to navigate OSX with a keyboard for me. This app does not have any menus to speak of, which is fine, but it just makes it that much harder to figure things out on your own.

As for the video suggestion: some things just don't translate as well as text for me (I am legally blind: not afraid to RTFM, but sometimes reading is asking a bit more for me than others). Sometimes it's nice to see how someone else does it because it helps when you don't know how to do something on your own. Even an asciinema style demo is quite helpful for certain tools here and there.
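
To illustrate the OS-windows-versus-"windows" point from above, kitty's config treats them as two separate actions; something like this (a sketch, not necessarily the shipped defaults):

    # ~/.config/kitty/kitty.conf
    # a real macOS window
    map cmd+n new_os_window
    # a kitty "window", i.e. a pane inside the current OS window
    map cmd+enter new_window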


Question for other tmux users: if I do not intend to replace tmux, is this still a good choice for a terminal to run your toplevel tmux session in?

(compared to Apple's Terminal.app, which I found to be the fastest and least problematic on a Mac)


GPU-powered terminal emulator? This performs almost 100x worse than macOS Terminal.app. Cat-ting a long file takes 100x longer to draw on the screen! Isn't it supposed to be faster if it's "GPU powered"?


"Some people have asked why kitty does not perform better than terminal XXX in the test of sinking large amounts of data, such as catting a large text file. The answer is because this is not a goal for kitty. kitty deliberately throttles input parsing and output rendering to minimize resource usage while still being able to sink output faster than any real world program can produce it. Reducing CPU usage, and hence battery drain while achieving instant response times and smooth scrolling to a human eye is a far more important goal."

From the performance page: https://sw.kovidgoyal.net/kitty/performance.html


That... doesn't make any sense.


I think it does if you consider "speed" as meaning response latency and perceived speed, not data throughput. From what I've read here so far, it feels fast, while not killing your battery with bulk cat'ing of text. That's my take on it anyway. Just now going to download and try out...

EDIT/Whinging: Welp, scratch that, kitty requires an OS X version one higher than what Apple will allow me to install. And while it is an older MBP from 2010, at least it's fast and reliable. AND it has the multitude of ports that I like. And the MagSafe.

I'm sure I'm venting into an echo chamber, but here goes. Why won't Apple simply provide me (an option to buy) a modern, solid, mid-level 15" rMBP for under $2000? I bought mine for $1700 and it came with a $200 iPod as a gift (sold on eBay).

Give me that. A rMBP, with all the ports, plus the new USB-C. They could leave out some of the stuff that's pricy.

Ya, give me a new version of what I have, one that will last at least another 8 years (fully supported by macOS releases), and price it under $2000. That I would buy. When I replace this one, I'm just going to have to buy something running Linux or Windows (probably both, realistically), since I've been priced out of the product I would have normally purchased and recommended.


It's simple: take, for example, scrolling a file in less. Most modern terminals are fast enough that you can do it at the key repeat rate of your computer. The difference with kitty is that the scrolling feels smoother and uses less CPU for the same task, thereby saving battery and pleasing your eyes.


I thought macOS was already GPU accelerated with Quartz? And wouldn't Windows be doing something similar by now? The CPU is certainly not writing pixels directly out to VESA buffers in 2018, right?


Most Cocoa apps use the CPU backend of Core Graphics, which doesn't use the GPU for vector graphics rendering. (CG is usually what "Quartz" refers to, though the brand is so overloaded at this point that it's hard to make any general statements about it.) Cocoa apps do frequently use Core Animation for compositing surfaces together on GPU, though.

Most of what terminals have to do is blitting of prerendered text bitmaps, which is relatively slow on CPU and does benefit from the much faster memory bandwidth of the GPU. Core Graphics generally does not use the GPU for text blitting in most Mac apps, so having a custom renderer can help here. Font rasterization on the Mac does not use the GPU either, but the glyph cache hit rate for a terminal emulator is so high that it ends up pretty much irrelevant.

On Windows, Microsoft ships multiple rendering stacks for legacy compatibility. Most terminals are old Win32 programs that use classic GDI, which as far as I know is partially accelerated but mostly CPU (and implemented in the kernel!) Direct2D, the newer API, does use the GPU for blitting text. Like macOS, Windows still does all font rasterization on CPU. In GDI, font rasterization is done on CPU in the kernel (!) (except on Windows 10, in which the kernel calls out to a userspace fontdrvhost.exe). In Direct2D, font rasterization is done on CPU in userspace.


As a software developer, I get upset when people use old versions of my software and talk about how it isn’t modern.

This also makes me wary of statements that criticize how e.g. Windows does it wrong, "except in the current three-year-old version".


What is the benefit of using a GPU-powered terminal on my integrated Intel 6000 graphics, rather than my iTerm2?


iTerm2 also supports GPU accel, FYI.


Only in beta, or did they release it?



I wish this ran in Windows so I could use it with WSL.

WSL desperately needs a good terminal.


Mintty works great for me. Their wsltty installer works wonders: https://github.com/mintty/wsltty


Take a look at wsl-terminal https://github.com/goreliu/wsl-terminal


It’ll probably get one when pty support comes out this fall.


Simply switch to native Linux. It's far faster than WSL.


Wow! Really like it. The window organisation is well done. I usually just use whatever comes with Ubuntu, but this is much nicer. As an aside, I had to add TERM=xterm-256color to my Raspberry Pi's .bashrc to get it to play nice when I SSH in from Kitty (otherwise commands like 'clear' don't work).
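
Concretely, the addition was just this line in the Pi's ~/.bashrc (the underlying issue is that kitty sets TERM to xterm-kitty, which the Pi has no terminfo entry for; copying kitty's terminfo over to the Pi would be the other fix):

    # ~/.bashrc on the Raspberry Pi
    export TERM=xterm-256color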



Thanks - missed that.


Seems to render The Emoji Demo [0] nice and buttery smooth, though the skintone cube scene appears a tad broken. Quite impressed.

[0] http://www.pouet.net/prod.php?which=76627


How does this compare to alacritty?


Alacritty is quite buggy and not very actively maintained; the author doesn't put in a lot of work beyond the occasional small merge. (They still haven't finished the scrollback support branch that's been open for ages.)

I've switched to kitty and am much happier.


This is less of an issue if you use a multiplexer like tmux, so how appropriate Alacritty is depends on how you use it.


The big idea of Alacritty is that it uses the GPU for super-fast rendering and updates. But relying on tmux for scrolling seems to be a case of "penny wise and pound foolish". Having to round-trip key presses and updates through a separate and possibly remote process (tmux) just to do scrolling introduces a bunch of latency. "Normal" emulators with built-in scrolling don't have this problem. Scrolling is a purely local operation.


It's not for scrolling, it is for scrollback.


You are right. Replace 'scrolling' with 'scrollback' when you read my comment. :)


True.

I use i3 (a tiling WM) though, so tmux is redundant for me unless I work on remote machines.

And I really don't want tmux running on top just for scrolling. It hurts performance quite a bit.


Add the fact that copy/paste into/out of vim gets a little more annoying. (not impossible, just more annoying)


Is it? I have been using the scrollback branch for months without problems (although not on mac)


Even on master, I've found alacritty to be a bit glitchy, with regular rendering issues, especially when switching to full screen, which I do quite regularly.

Also kitty definitely feels snappier to me in terms of latency.


Very similar. Kitty has scrollback. Alacritty handles double/triple-click and drag correctly, although I think kitty may have fixed this recently, making it king.


I don't even know where to start with this. We had terminals that were way faster than you could type or see 30 years ago. They didn't need a GPU and ran on a CPU 1000 times slower than what we have now.

If this is progress... I'm not sure I want it


30 years ago, we didn't have 8+ megapixel displays.

And the GPUs were character-based, with only 256 different characters, usually 16 colors/each. That's the main reason for the hardware requirements. Essentially, these shaders emulate a character-based fixed function GPU from 30 years ago, adding features like more colors, Unicode, and custom fonts.


They were also character-based displays at something like 320x240 resolution with maybe 16-256 colors.

Modern displays run at 1920x1080 in millions of colors.

30 years ago a terminal was the whole display and nothing more. Now it's a single window amid dozens of other windows, including content ranging from high-definition video to real-time rasterized 3D graphics.

Yes, that's progress.


Also, as others have noted, there are many new layers of abstraction between user and system, including the compositor, window manager, display manager, display server, and probably a host of others within the operating system and in places I'm unfamiliar with.


The resolution wasn’t that bad.

VGA was introduced 31 years ago, in 1987. The text mode was 80x25 characters at 9x16 pixels each, so the effective output resolution was 720x400 pixels. I currently develop a device with a similar resolution, 800x480, only now I have a GPU and GLES 3.2.

Back in those days, RAM was so expensive that this high resolution only worked in text mode, where the frame buffer held just 2 bytes per character: one for the character itself, another for attributes, i.e. the background and foreground colors.

Graphic modes had way lower resolution, indeed.
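
To put rough numbers on that (my arithmetic, assuming the standard 80x25 VGA text mode versus the same area as 16-color graphics):

    80 x 25 chars x 2 bytes     =   4,000 bytes of video memory
    720 x 400 pixels at 4 bpp   = 144,000 bytes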

