I was 'shocked' to discover a few months back that, when I made the horrible mistake of starting a job that wasn't easily Ctrl+C-killable and dumped a lot of output to the screen, I was better off paging to another desktop and waiting for it to go away. The limiting factor in the data dump wasn't the bandwidth between my machine and the remote, nor the buffer size of the terminal, socket, or whatever else in between, but the speed at which my terminal rendered text; that became moot on another desktop, where the text didn't need to be rendered and the xterm (or iTerm) session could page past it without disturbing my display.
It makes a lot of sense. Much more goes into your text rendering than was necessary on a VT100. Every screen I see is anti-aliased, horizontally and vertically aligned, with 'lines' of text that can wrap or be colored or do myriad other things according to the byzantine and arcane standards of the original escape codes and the ones that have come after. Redrawing even a fairly large amount of text costs non-trivial CPU and GPU time. I found I got way better performance and less lag if I disabled the alpha transparency on my terminal windows. Processing power is very high, but still far from unlimited.
If the terminal is trying to update its display faster than the frame rate of your monitor, it's just wasting CPU cycles. A sensible terminal would rate-limit its updates and prevent the process from consuming all the CPU.
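That coalescing idea can be sketched in a few lines of shell. This is a toy model, not how any real terminal is implemented: consume every line as fast as it arrives, but only "repaint" once per full screen of lines (24 here), plus once at the end.

```shell
# Toy model of coalesced repaints: read every line, but only "render"
# (print a status line standing in for a redraw) once per 24-line screenful.
seq 1 100 | awk '
  { last = $0 }                                   # always consume the line
  NR % 24 == 0 { print "repaint, bottom line: " last }
  END          { print "final repaint, bottom line: " last }
'
```

With 100 input lines that is 5 repaints instead of 100, and the final state is identical.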
If your terminal can't draw a screen of text at 60fps on a semi-modern computer, it has a very inefficient renderer.
In my limited experience, rendering vector fonts is among the slowest parts of rendering text in a terminal. The difference between an xterm with a TrueType or OpenType font and an xterm with a PCF is noticeable.
This can make a huge difference. The performance benefit can easily make it worth finding a well-designed bitmap font[1].
I also highly recommend using urxvt[2], which not only has very fast rendering and support for a lot of modern features, it has a few settings you can enable that specifically address the too-much-scrolling speed issue. From urxvt(1):
jumpScroll: boolean
[...] specify that jump scrolling should be used. When receiving
lots of lines, urxvt will only scroll once a whole screen height
of lines has been read, resulting in fewer updates while still
displaying every received line [...]
skipScroll: boolean
[...] specify that skip scrolling should be used. When receiving
lots of lines, urxvt will only scroll once in a while (around 60
times per second), resulting in far fewer updates. This can result
in urxvt not ever displaying some of the lines it receives [...]
These options can completely fix the scrolling lag problem.
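For reference, turning these on (assuming the standard URxvt resource class) looks like this in ~/.Xresources:

```
URxvt.jumpScroll: true
URxvt.skipScroll: true
```

then reload with `xrdb -merge ~/.Xresources` and start a new urxvt.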
If you're using Linux, I'd advise you to use a fast terminal like Terminology; it even has GPU acceleration, but I doubt you'd need that. The only one that comes close on CPU-rendering speed is urxvt.
Meh, waste is relative. I run KDE programs, Gnome programs, and a couple of other random hodgepodge things at my personal whimsy, and I'm still only using 2.5GB of the 8 on my system for non-cache work, and over a third of that is just Firefox. I've got 3.5GB literally sitting empty, according to free. I can't even fill the caches. Pulling in another set of dependencies to get a noticeably better terminal emulator would cost me virtually nothing real.
Of course if you're in different circumstances the answer changes. But there's no particular virtue in using a worse terminal so that more of your RAM sits there with lots of 0s in it.
Well, fwiw my local Debian Jessie reported that terminology, along with dependencies that weren't already installed, would take 24MB. Pretty steep for a terminal. I tried opening one each of Terminology and Sakura[s] -- and as far as I can tell, running:
i=0; while true; do echo $i; i=$((i+1)); done
is, if anything, slower in Terminology. Big caveat: I've just installed kernel 4.1.6 and while I've recompiled/installed the proprietary fglrx AMD driver, I'm not entirely convinced most OpenGL apps work as they should.
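A simple way to separate the terminal's rendering cost from the work itself is to time the same output with rendering taken out of the picture; if the first run is much slower, the terminal is the bottleneck:

```shell
time seq 1 200000              # the terminal has to render every line
time seq 1 200000 > /dev/null  # identical work, no rendering at all
```

The redirected run is typically near-instant, so almost all of the wall-clock difference is rendering.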
This terminal emulator is also a file browser and media center, so you can save space by removing your old ones of those ;)
(tyls is ls with thumbnails, and tycat is like cat except that if you try and cat a movie file to stdout it will display a video player instead of raw bytes)
I checked it out but couldn't figure out how to add my own color schemes. I'd prefer to use base16 flat https://chriskempson.github.io/base16/#flat but I can't even find a way to manually add color values.
I dream of the day when we start making computers where a runaway process can't take away resources from the control interface. As in, you should always be able to kill a process, always be able to move and close a window and always be able to send a goddamn ctrl+c via a terminal. Otherwise we're in a classic priority inversion scenario - a lowly process can take away control from the user.
In many cases you are able to send a Ctrl+C, but it's received by the process generating the output, not by the process rendering the text, i.e. the terminal program. You'd have to send a signal to the terminal program itself, which may cause it to exit, which is probably not what you want. And that's usually done with UI controls, like a close button or action on the window.
So it's not just priority inversion that is a potential problem, but also non-obvious layers of abstraction.
Veteran gentoo user here. I've learned the joys of --quiet on many an old laptop for exactly this reason. There is no other process that I know of that is more likely to bottleneck a terminal. Accidentally cat some-binary-file? ctrl-c that sucker asap, hope you didn't have anything else important going on in there.
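If a stray cat of a binary file does leave the terminal printing garbage (binary data can contain escape sequences that, among other things, switch the character set), it can usually be recovered without closing the window:

```shell
stty sane   # restore sane line settings (echo, canonical mode, etc.)
reset       # ncurses utility: reinitialize the terminal and clear the screen
```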
The standard Windows terminal is glacially slow for the same reason. The rendering of 80*24 characters seems too much for it when asked to display text at any speedy rate. Minimizing the window while the text is scrolling by can speed things up.
There is no getting away from it with the new "retina" screens. You will use TTF fonts, antialiasing, and most certainly some compositing underneath it all.
But I find that rxvt is speedy enough (on both Linux and OpenBSD) not to cause any noticeable lag when using "pretty fonts". Unlike OS X, where both Terminal and iTerm are still frustratingly slow at rendering text. Sometimes I'll fire up a Linux VM just to have a fast terminal and nothing else.
Text rendering has typically been the limiting factor since the early to mid 90's. Paging away is a good old trick. I usually run most things under screen, and so just switch to another screen. Obscuring the window is also often enough.
I've run into this also. I generally minimize the terminal (because that's where it would happen for me), but I have had to resort to moving to another desktop.
gnome-terminal (based on libvte) will do frame dropping for rendering updates.
Unfortunately that means you're stuck using a terminal with the most brain dead shortcut keys in the world. Invariably I sit down at a coworker's keyboard, hit alt-F, and instead of skipping forward a word, I open the File menu.
Educational article. Is LuaJIT becoming a tool of choice for this kind of hacking, rather than C or Perl? Seems like a library of helper functions or some sugar could be added to make it more suitable; note the need for a readptr function and the use of tonumber().
Arguably, the FFI gives easier access to stuff (TM) than Perl would allow. C would be easier if the aim was clear and you can just write the code down (well, string handling is probably easier in Lua, and there's the memory handling with GC). When the way there is what drives you instead, Lua(JIT) is nice. Sprinkle in some print here and there, try this and that without compiling stuff (though that is the smallest gain, I think)...
tonumber() is LuaJIT's cast for pointers to integers. There are some arguments for it to stay.
Makes sense; dynamic languages have a lot of conveniences. If having a REPL helps, there are REPLs for interpreted C.
I meant to refer to "tonumber(oldsize[0])" used to cast from a ctype size_t to a Lua number (a double); I missed that it's also used for casting pointers. Now I realise that this is needed because implicitly casting a 64-bit int to a double would reduce precision. It's just a very minor thing that "for" loop bounds must be Lua numbers.
Point taken. Seems there is some OS X and FreeBSD support effort going on in the github repo, but I am not versed enough in the BSD differences to say if that will help with OpenBSD support in any way.
How about the following approach: echo "It's meeee!" > /proc/9960/fd/0, and see on which xterm it appears?
(For those who, like me a few years ago, don't know what the above does: it writes directly to a process's file descriptor 0, which is exposed as /proc/[PID]/fd/0. Strictly speaking fd 0 is STDIN rather than STDOUT, but for an interactive process all three standard fds usually point at the same terminal device, so the text still lands on its screen.)
xterm's STDOUT isn't hooked up to the terminal window it shows. For a demonstration, start one xterm, then start another from the first. Send something to STDOUT of the second xterm and it'll appear on the first.
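This is easy to check for yourself: in an interactive shell, fds 0, 1 and 2 normally all point at the same pty, which is why writing to any of them lands on that terminal (and why the /proc/<pid>/fd/0 trick works even though fd 0 is technically stdin):

```shell
# $$ is the current shell's PID; in an xterm all three are
# typically symlinks to the same device, e.g. /dev/pts/3
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
```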
On my Linux machine I run 8-10 dedicated xterm windows, each running a tmux session [1]. So the way I would solve this problem for me would be to grep the process's /proc/$pid/environ for its unique identifier.
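A sketch of that approach; TMUX here is just a stand-in for whatever per-session variable you'd match on. One wrinkle: the entries in /proc/$pid/environ are NUL-separated, so translate them before grepping:

```shell
pid=$$   # substitute the PID of the process you're hunting for
tr '\0' '\n' < /proc/$pid/environ | grep '^TMUX=' \
  || echo "no TMUX variable in the environment of $pid"
```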
I love this kind of thing. It turns software [a]synchrony into a very physical idea. I see it as similar to engine braking: the slowest part of the system in a given context will drive the 'performance'.