'script' also does something similar, and is one of the lesser-known commands.
from the man page:
script makes a typescript of everything displayed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1).
Could you explain how you get feature parity with asciinema with script? I've always found script to be kinda useless, myself.
You can't really use it to replay things because you can't easily regulate the playback speed. It's not very helpful for its stated purpose because it keeps every keystroke in-stream, making ordinary typo corrections visible and confusing. And it's certainly not very shareable or streamable.
Pretty favorable experience working with this tool at $dayjob, everything except compiling it. Luckily they release precompiled js/css files that are fairly straightforward to consume.
The downside is that the recording format is a bit of a mess and that you end up having to hand tweak things a lot in order to make the recordings pretty. I wish the tool had more in the way of cleaning up its own recordings so that wasn't necessary. There's a small cottage industry of tools on github around this, but none of them were exceptional enough to warrant a note here.
Don't know if I want a script that records my terminal session and uploads the recording to an external server. Is there a way to keep the recording local?
asciinema is open-source and the file format, asciicast, is publicly documented[1]. You can run `asciinema rec filename.json` to save the file locally and self-host it with asciinema-player[2]. There's also asciicast2gif[3], which converts asciicast json to a GIF image.
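Since the format is just newline-delimited JSON, you can even generate or inspect a recording by hand. A rough Python sketch (the shape follows the published asciicast v2 spec; the dimensions, timings and content here are invented, not from a real session):

```python
import json

# Minimal asciicast v2 file built by hand: one JSON header line,
# then one [time, event-type, data] JSON array per line.
header = {"version": 2, "width": 80, "height": 24}
events = [
    [0.5, "o", "$ echo hello\r\n"],  # "o" = output written to the terminal
    [1.2, "o", "hello\r\n"],
]

cast = "\n".join(json.dumps(x) for x in [header] + events) + "\n"

# Playable locally with `asciinema play demo.cast`, no upload involved.
with open("demo.cast", "w") as f:
    f.write(cast)
```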
About a year ago, I hacked on an internal project that was exploring command-line interactions as code. Think of it like an animated CLI documentation factory. I believed, and continue to believe, that short CLI animations give context to CLI usage and allow pause/copy/paste (a game changer for watchers) if not in svg/gif, which helps with consumability. The problem in my mind has always been keeping them up to date at scale (or their educational value diminishes). The project had a git repo that housed a collection of files following a lightweight syntax. Some complementary tooling "understood" this syntax and could create asciinema recordings while controlling pacing and execution. We then evolved this workflow into a FaaS triggered via PR and a GitHub bot. I'd like to open-source the whole thing if I find some time to go through our internal approvals.
NET: Folks could add to the corpus following the modular syntax and issue a PR. About 3-5 minutes later, the bot would generate an animated svg/gif and add it to the PR comment stream for review. If you liked how it looked, you accepted the PR. The corpus could then grow and be updated with minimal effort. A version of the client was modded to support self-signed certs and block accidental public upload.
We ported asciinema to a helm chart that could run multiple versions on a k8s cluster per ingress path to complete the story. The whole apparatus existed entirely behind the firewall.
Using interact timeout as a sort of checkpoint to show output; though if you want to send single quotes, you'll have to make sure they get escaped in the inner command (or just put the script in a file for expect to read).
On the surface this looks just like what's needed to make this work for me:
I like the recording tools, but don't want to be forced to embed a player from a third party service.
Looks like asciicast2gif relies on PhantomJS, which brings with it an entire browser, and is also no longer maintained.
This looks very bloaty for a tool that should convert terminal input to a gif file.
I've written an incomplete ad-hoc renderer in Go: https://github.com/akavel/asciinema2gif — it worked in one case where I needed it personally, but it's missing support for a lot of escape sequences, so it most probably Won't Work For You As Is. But you're free (as in AGPL) and welcome to play with it if you like!
I've only tried it a few times via their official docker image; it's reasonably fast (a minute of asciicast took about a minute to convert), but PhantomJS/ImageMagick do put some load on the CPU while generating/combining frames. I never had any problems when I needed to make a GIF, but the conversion process does indeed seem very wasteful.
Whether it is FOSS doesn't really do anything to solve the problem GP mentions, though. The default behavior is to nudge the user to upload. One must assume this is the part of the design that allows potential monetization one day. Obviously the author may not be keen to point out this giant gap in the security of their design (it is in fact the main feature).
Users might not even think about the security risks this could create. From an enterprise security perspective this is indistinguishable from any (malicious) data-exfiltration tool. It should come with a giant warning and better defaults imo.
> One must assume this is the part of the design which allows potential monetization one day.
Oh, heavens no! Someone might at some point try to make money from a thing they made?! From their own labor?! How awful!
Seriously: it's a stupendous tool and great service, provided free of charge and totally open source. Must we complain about the fact that they try to nudge you towards their tracking-free website, so you can see the tiny "Sponsored by Brightbox" rectangle in the footer, so that the project might yield some tiny financial return (though almost certainly not anywhere close to development costs)?
It seems to me that they've done absolutely everything right except possibly live up to the standard that apparently all open source tools should be developed by starving ascetic monks. Come on.
> The default behavior is to nudge the user to upload.
It's not that bad: if you do not provide a filename, it asks for confirmation before uploading by default. It's still easy to upload by accident[1] though, which I know is a no-go in some environments.
It's a single binary (as cross platform as golang gets), which records a terminal session into a single html file which you can open for playback in the browser.
Asciinema is great and a joy to use, but one of the coolest things about how Asciinema is made is that the web player implements a virtual terminal emulator in Clojure:
I have automated a few installations of historical operating systems in qemu/bochs with the help of travis-ci. Due to all the ANSI output the job logs are unreadable. To see what's going on in the emulator I use an asciinema rec/asciinema upload in the build scripts. This helped me out a lot.
"Q: Can I provide my own wood?
A: In most cases we can handle your wood. We do require all shipments to be clean, free of parasites and pass all standard customs inspections."
I have been pronouncing it Askinnema in my head, because that's how people pronounce ASCII, and I then just added the suffix in what is the most natural way to say it for me - I'm ESL though.
You do understand this is a false dichotomy? You can have both. Some people would rather watch a video and follow along, whereas others get more out of reading text... and both fit neatly on a page right next to one another.
Sure, if you're going to be lazy and only do one, do text. HOWEVER, advocating for the lazy path amongst fellow practitioners is not a great strategy to promote quality output.
Documentation is hard. Good documentation is harder. There are many occasions where video and video-like things are good for getting a feel for something and just plain text is good for reference. We shouldn't be saying "just use text because you probably won't bother doing everything you should". We should be saying "Hey, video, still images, and text all combine to make great documentation and help people to understand and use your tool / product." They take time, but we should care about our users, and thus we should take the time to support them.
Not sure it's double the effort. Once you've written the textual version, you pretty much only need to copy and paste it into a terminal to produce the asciicast version.
That's how I did the animations for this Powershell tool.
I think it makes sense in this case because one terminal "speaks" to another.
I agree with your assessment about plain text - do you think there is a good way to present this kind of workflow using plain text?
Thank you for making this point. Plain text is really underappreciated. When I visit a project README, I don't want to wait for the entire replay. I just want to skip to the part of the documentation that is relevant to me.
A short asciinema replay can be useful for a quick demo, but don't let it substitute for your text documentation. I may watch the replay only once, but I will come back to well-written documentation again and again.
easy way to test if a web page sucks for blind people: Use Lynx[1] to browse it. It's not a perfect representation of what they experience but it's a very easy way to get a good impression.
1) Doesn't work with PowerPoint, if you have to use that
2) There's no "pause" point. A lot of the time you want the recording to play to a certain point but then pause until you click "next", so that you can explain things to the audience before continuing on.
3) End up having to fiddle a lot with the JSON so the timing feels decent vs me copy/pasting into it and hitting enter.
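One way to cut down on the hand-fiddling is to post-process the cast instead of editing delays one by one, e.g. cap every inter-event gap at some maximum. A rough Python sketch, assuming the asciicast v2 layout (header line plus one JSON event per line); `cap_pauses` is a made-up helper, not part of asciinema:

```python
import json

def cap_pauses(cast_text, max_gap=1.0):
    """Rewrite an asciicast v2 recording so no pause exceeds max_gap
    seconds. Illustrative helper only, not part of asciinema itself."""
    lines = cast_text.strip().split("\n")
    header, events = lines[0], [json.loads(l) for l in lines[1:]]
    out, last_in, last_out = [], 0.0, 0.0
    for t, kind, data in events:
        gap = min(t - last_in, max_gap)       # clamp the dead time
        last_in, last_out = t, last_out + gap
        out.append(json.dumps([round(last_out, 6), kind, data]))
    return "\n".join([header] + out) + "\n"

demo = '{"version": 2, "width": 80, "height": 24}\n' \
       '[0.2, "o", "$ ls\\r\\n"]\n' \
       '[5.0, "o", "README.md\\r\\n"]\n'
print(cap_pauses(demo))  # the 4.8 s gap before the output becomes 1.0 s
```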
I wrote termtosvg as an alternative [1]. It's a python program that records a shell session as a standalone SVG animation. Animations produced by termtosvg can be embedded in Markdown files or HTML pages.
That's great! I love the approach: a standalone output in a standard format.
Do you think you could also add support for ttyrec input?
This would allow converting existing ttyrecs, and would give a full suite of tools covering all the options, ready to upload: 1) a standard text-format recording, 2) a regular animated gif, 3) svg when saving space is important (while gaining zoom abilities as well!)
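The ttyrec framing is simple enough that a converter is mostly bookkeeping: each record is a 12-byte little-endian header (seconds, microseconds, payload length) followed by the payload. A rough Python sketch targeting asciicast v2; `ttyrec_to_asciicast` is a made-up helper that assumes UTF-8 payloads and skips every edge case:

```python
import json
import struct

def ttyrec_to_asciicast(data, width=80, height=24):
    """Convert raw ttyrec bytes to an asciicast v2 string (sketch only)."""
    out = [json.dumps({"version": 2, "width": width, "height": height})]
    pos, start = 0, None
    while pos + 12 <= len(data):
        # header: three little-endian uint32s (sec, usec, payload length)
        sec, usec, length = struct.unpack("<III", data[pos:pos + 12])
        payload = data[pos + 12:pos + 12 + length]
        pos += 12 + length
        t = sec + usec / 1e6
        if start is None:
            start = t                      # timestamps become relative
        out.append(json.dumps([round(t - start, 6), "o",
                               payload.decode("utf-8", "replace")]))
    return "\n".join(out) + "\n"

# Build a two-record ttyrec in memory and convert it.
rec = struct.pack("<III", 100, 0, 5) + b"$ ls\n" \
    + struct.pack("<III", 101, 500000, 3) + b"ok\n"
print(ttyrec_to_asciicast(rec))
```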
It's a good idea. I don't have much time for termtosvg these days so I can't promise it'll get implemented but I've opened an issue to remember it [1].
Yes. A line of the animation is a group of text elements. Somehow text selection does not work across several SVG groups. I could remove this logic but it would mean duplicating the definition of a line on every frame showing it instead of using a single definition.
I like this approach! Smaller filesize and better rendering than a gif, but just as embeddable in a GitHub markdown readme since it's pure CSS animation. Kudos, looking forward to using this.
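To see why this works without JavaScript, here's a toy sketch of the technique: each frame becomes an SVG group whose visibility is toggled by a CSS keyframe animation. `frames_to_svg` is a hypothetical helper for illustration, not termtosvg's actual output:

```python
def frames_to_svg(frames, width=400, height=50, frame_ms=800):
    """Render a list of single-line text frames as a looping SVG/CSS
    animation. Toy illustration only: real tools also handle fonts,
    colors, cursor state and escape sequences."""
    total = frame_ms * len(frames)
    css = ["g{visibility:hidden}"]
    for i in range(len(frames)):
        begin = i * 100 / len(frames)        # percentage of the loop
        end = (i + 1) * 100 / len(frames)
        css.append(f"#f{i}{{animation:k{i} {total}ms step-end infinite}}")
        css.append(f"@keyframes k{i}{{{begin:.0f}%{{visibility:visible}}"
                   f"{end:.0f}%{{visibility:hidden}}}}")
    groups = [f'<g id="f{i}"><text x="10" y="30" font-family="monospace">'
              f'{text}</text></g>' for i, text in enumerate(frames)]
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<style>{"".join(css)}</style>{"".join(groups)}</svg>')

svg = frames_to_svg(["$ make", "cc -o app main.c"])
```

Because each line is real `<text>`, the result stays selectable and zoomable in a way a gif never is.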
I've been using this for years and have been really happy with it. IMO it beats other video/gif recordings because of the way it stores the data files. It records all input and output individually, so you can copy-paste from the recordings. I think it would be invaluable in a classroom setting.
One killer feature you get with asciinema & not with an MP4 is that people watching it can copy & paste from your recording. So they don't have to type out the same command you typed, they can just copy it into their terminal directly
Not what I'd consider a killer feature, but it's definitely one of the advantages.
Now, what is killer is that the recording is just a text file, so I can edit it and check it in to my source control system, rather than having to mangle some screen recording environment and check in a binary blob that's hard/impossible to edit in the future.
It's also pretty universally guaranteed to work since it doesn't demand much of the browser as far as weird javascript features go, and means I can deliver a whole screen recording in the fraction of the size of an MP4 file. It's screen resolution agnostic, making it more functional for a wider class of users than a video. I also don't have to ship a different version of the recording if I want a high contrast version for accessibility reasons.
So, yeah, plenty of reasons to use asciinema over a conventional screen recording.
My team wrote our own implementation with a similar API to get around Asciinema's weird support for React [0]. We've been using it in our web application to play back historic terminal sessions recorded by our agent. Unfortunately it hasn't been split out from the main application code yet. Open sourcing it is on the list.
ttyrec is a standard format that most people and tools understand.
And if you want something fancy like asciinema output, you can use seq2gif, which supports all the bells and whistles asciinema does, while allowing you to also upload the ttyrec file.
However, I would suggest not using Unicode or ANSI color in your recordings.
You want to demo a feature, not show your cool terminal or the amazing gif you made. The gif is just a byproduct. Keeping to plain text as much as possible makes your recording available to people using text terminals and Braille screen readers.
(in case you wonder why I'm doing a cat /dev/random into a live postgres database block file, this is to show how I can recover the database content from the raw directory. Kids, don't try this at home. Don't try it at your parents work on their production database either. In fact, just don't do that, it is only for when your database ends up in /lost+found)
>Let me be the devil advocate: why not use ttyrec + seq2gif instead?
GIFs look poor - they don't capture colours well. They're not accessible. They don't zoom. Copy and paste doesn't work.
>And if you want something fancy like asciinema output, you can use seq2gif, which supports all the bells and whistles asciinema does, while allowing you to also upload the ttyrec file.
Unless I'm missing something obvious (possible!), seq2gif makes a GIF file. How does this "support all the bells and whistles asciinema does"?
>However, I would suggest not using Unicode or ANSI color in your recordings.
It's 2019, there's no excuse for these (and emoji!) not working fine.
>You want to demo a feature, not show your cool terminal or the amazing gif you made. The gif is just a byproduct. Keeping to plain text as much as possible makes your recording available to people using text terminals and Braille screen readers.
Agreed - how does seq2gif fit in with this? Or are you saying the ttyrecord files are more accessible?
It looks like a binary format, I have nothing on my system that can read it. I do have a web browser, though.
> Or are you saying the ttyrecord files are more accessible?
I am. You can edit them, e.g. if you made a typo. They are common, so there's an ecosystem of tools if you don't like raw edits.
Next to that, asciinema looks like reinventing the wheel, except it's no longer round, so it requires special tools and roads now. But what's a few extra dependencies? It looks better and it's new!
> I have nothing on my system that can read it. I do have a web browser, though.
In general, I prefer a standard format supported by many free software tools that lets me create a file in another standard format that I can put on my website without requiring various other dependencies, like javascripts. Especially if I can pacman / apt-get install the tools.
> seq2gif makes a GIF file. How does this "support all
asciinema's #1 use case is to show a recording with color, Unicode, etc. I think that's a bad idea for accessibility. You want to show features, not look cool. Still, seq2gif gives you that, and the output is in a standard format. You can replay the original in a normal term, copy-paste, etc.
If you think copy & paste from the browser showing an animated replay of the terminal is an important use case #2, huh, we have different use cases, but I don't see why it couldn't be done as another target (seq2something_else, maybe seq2svg?) from a well-known and accessible format like ttyrec.
Normal people want things that look cool and can be made with minimal effort. Some people can afford to stay in an abstract backend world without ever presenting anything to non-developer people, but I think most of us live in the real world and use modern computers, so they'd rather have the CPU run a few more cycles than have to think about the details themselves.
For the same reason people still refer to this reply to the Dropbox announcement - https://news.ycombinator.com/item?id=9224 . Just because you can stitch together several command-line utilities doesn't mean end users want to. This tool looks like it offers convenience: everything is in a single binary, and it's simple to use.
GIFs are worse from my perspective than asciinema. You can't select the text in a GIF, you can't pause or seek in them, and they take up more bandwidth as the recording gets longer.
Not sure about seq2gif but I think using ttyrec to capture data instead of asciinema has some benefits.
For example, ttyrec is a 33KB standalone binary while asciinema needs Python. This might be a problem if you're recording something on a remote machine with limited access or resources.
You can upload ttyrec data to asciinema so nothing really changes on that front.
Unfortunately I guess only a small minority prefers 33KB binaries that just do the job.
At least I learned about the SVG alternatives to asciinema, which both give better vector results and save size compared to a gif output. It's so obvious in retrospect! More kudos to the author for thinking about supporting various input formats, including asciinema.