I feel like this "missing middle" is common to a lot of language communities these days. Devs writing for themselves are often happy to make cool TUIs, and devs writing for "end users" are often writing webapps.
If you know TypeScript and have a component framework you like, it's about the same level of effort to write a TUI in Charm as it is to write something in Wails. I kind of alternate between them.
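For anyone wondering why the comparison holds: Charm's TUI library, Bubble Tea, uses an Elm-style Model/Update/View loop, so the shape of the code is close to what you'd write with a web component framework. A minimal counter sketch, not from any project in this thread, assuming the bubbletea module is available:

```go
package main

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
)

// model holds all UI state, much like component state in a web framework.
type model struct {
	count int
}

func (m model) Init() tea.Cmd { return nil }

// Update handles incoming messages (key presses here) and returns the new state.
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	if key, ok := msg.(tea.KeyMsg); ok {
		switch key.String() {
		case "q", "ctrl+c":
			return m, tea.Quit
		case "+":
			m.count++
		}
	}
	return m, nil
}

// View renders the current state to a string, like a render function.
func (m model) View() string {
	return fmt.Sprintf("count: %d\npress + to increment, q to quit\n", m.count)
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```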
For people with super-custom kernels, a static build might invoke a syscall assuming it's the one a newer kernel provides. Beyond that, it might not support their kernel version at all (a libc-to-syscall mismatch).
Then there's the fact that I might have compiled libraries with certain options that a static compilation will never pick up (such as native CPU instructions or other optimizations).
Even worse, static builds mostly use musl as the libc implementation, which has many issues (DNS resolution problems, and the fact that most OpenSSL library tests don't pass and are disabled). I wouldn't trust musl/Alpine/et al. in production for "serious" things.
Static builds are __convenient__, but they shouldn't be the default.
I love the fact that you can just update a single OpenSSL library and all installed apps use the updated version after a restart.
Static builds have their legitimate use cases, so maybe change that to "mandatory static builds". (IIRC, Go support for dynamic linking is being worked on and will become stable eventually.)
More than "not bad", Textual is probably best in class for TUIs atm, but I wish Python had static builds! Even Javascript can build executables now with Bun!
JS executables built with Deno or Bun aren’t terribly different from PyInstaller executables. Of course, they’ve been made more convenient by being bundled with the toolchains.
I’ve used it twice, and both times I regretted it and rewrote in Wails (that is, back to a web view for the UI). Problems included, but were not limited to:
- (Last I checked, which was <1 yr ago) Text rendering is fundamentally done wrong: it can’t access any system font and can only use the built-in font or exactly one custom font you bundle. Good luck supporting any user-generated or web content from people who don’t use the Latin script.
- Very limited selection of widgets, a problem common to all but a few native frameworks.
- Very limited styling options for the small selection of widgets.
- Looks terrible. Okay for personal/internal stuff, definitely don’t want to ship to end users. (Obviously subjective.)
- Why? Strongly suggest updating the README to explain why this is useful, or what kind of workflow makes this useful. Just a list of features isn't that compelling to me because I'm not sure I want it.
- Because this is a gh CLI extension, and not a standalone program, searching is unfortunately going to be fairly slow. There doesn't seem to be any local caching or syncing of the GitHub data.
- Not really a user-impacting issue, just a fun fact: because pagination is handled via the PageInfo cursors in GitHub's GraphQL API, you'll never be able to page through more than 1,000 results for a given search (see the sketch after this list).
- I'm happy to see XDG_CONFIG_HOME support, but the authors should consider also checking for a config file in $RepoRoot/.config/gh-dash/ and $RepoRoot/.gh-dash/, so that a single config could be more easily shared by members of a team who all check out the same repository.
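On the pagination point above: this isn't gh-dash's actual code, just a sketch of what cursor pagination through GitHub's GraphQL search connection looks like. The search string and the GITHUB_TOKEN environment variable are assumptions for illustration; the ~1,000-result cap comes from GitHub's search itself, so walking cursors won't get you past it.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Search query using pageInfo cursors, the pagination mechanism referred to above.
const searchQuery = `
query($q: String!, $cursor: String) {
  search(query: $q, type: ISSUE, first: 100, after: $cursor) {
    issueCount
    pageInfo { hasNextPage endCursor }
    nodes { ... on PullRequest { number title } }
  }
}`

func main() {
	token := os.Getenv("GITHUB_TOKEN") // assumed to hold a personal access token
	cursor := ""
	for {
		vars := map[string]any{"q": "is:pr is:open author:@me"} // example search, not gh-dash's
		if cursor != "" {
			vars["cursor"] = cursor
		}
		body, _ := json.Marshal(map[string]any{"query": searchQuery, "variables": vars})
		req, _ := http.NewRequest("POST", "https://api.github.com/graphql", bytes.NewReader(body))
		req.Header.Set("Authorization", "Bearer "+token)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		var out struct {
			Data struct {
				Search struct {
					PageInfo struct {
						HasNextPage bool
						EndCursor   string
					}
					Nodes []struct {
						Number int
						Title  string
					}
				}
			}
		}
		if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		resp.Body.Close()
		for _, n := range out.Data.Search.Nodes {
			fmt.Printf("#%d %s\n", n.Number, n.Title)
		}
		// GitHub's search only exposes the first ~1,000 results,
		// so this loop cannot page past that cap no matter the cursors.
		if !out.Data.Search.PageInfo.HasNextPage {
			break
		}
		cursor = out.Data.Search.PageInfo.EndCursor
	}
}
```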
I'm working on something like this (customizable filter views for GitHub: quickly drill down into PRs by different authors, affecting different files, across multiple repos, filtered by regex search on the changed files / PR body / PR title / PR comments), but based in the browser, not a TUI. The primary goal of my project is to help engineering leaders understand what is actually getting shipped and communicate that to other, non-technical leaders. If anyone is interested in being a beta tester, let me know; I'm hoping to have a release publicly available by the end of this week!
> Why? Strongly suggest updating the README to explain why this is useful, or what kind of workflow makes this useful. Just a list of features isn't that compelling to me because I'm not sure I want it.
I don't mean to speak for the author, but: The 10 seconds it took for the GitHub web UI to load the homepage of this project is a great justification for its existence!
I hear that, but this is fetching data from the GitHub GraphQL API the same way the website does, so if you're getting slow responses from their servers, this won't be any faster.
There is a lot of other stuff that gets loaded and parsed when you visit the site. I see 151 requests and a DOMContentLoaded time of 27.51 seconds. The slow requests aren't between me and the GraphQL API, as far as I can tell.
I'd be keen to check that out! I've been toying around with similar thoughts for the last year or so. Metrics and dashboards aren't enough, and the GH site doesn't really have an org-level view.
Has anyone thought of doing something like this for Gitlab?
I'll browse the code and see how readily it could be converted over to working with GitLab... GitLab also has a GraphQL API available for getting at various bits of data surrounding a repo.
I've not looked at this code yet; perhaps it does something similar?
I take your point. Yes, I do agree that would be clearer.
Just to restate my position for anyone else reading: I don't believe that the fact that a point could be restated with more clarity means the original version is misleading.
Everything, everywhere, could always be made better than it is. The question is not "could this be made better?" but rather "is this clear enough [to avoid accusations of misinformation]?"
Shameless plug: I developed a similar project that is web-based (source [1], demo [2]) and supports having multiple connections configured. Both github.com and GitHub Enterprise are supported, and other backends (e.g., GitLab, Azure DevOps) could be added in theory. I started this because, at work, we use no fewer than two instances of GitHub Enterprise in addition to github.com.
It is a client-side application, meaning that it is very simple to host (no backend required).
Very cool. Coincidentally, I was looking into starting to use Midnight Commander (again). When I run it in a repo, I still see my overall PRs, etc. Is there an option to get it to focus on the current repo?
Love it, thank you! I couldn't get to the help screen with '?', though. I can escape the input box with Esc, but then '?' just refocuses the search box again.
EDIT: got it now; not sure why it didn't work before.
Possibly an unpopular opinion: a TUI application should work out of the box; in other words, it shouldn't need an extra font installation (a Nerd Font in this case) to run and render properly. One TUI app that does this well is the Helix editor, and I love it.
I really agree here, as someone who uses Nerd Fonts a lot [0]. There should always be a default that works without them.
That said, what does it actually look like without one? The docs just say "render properly". I haven't tried it yet, let alone without Nerd Fonts.
That also said, I'd give this one a bigger break than most. If you're doing development work on a machine, you probably have decent control over it and at least enough space to install Fira.
[0] I suspect there are two types of people who are terminally terminal: those who live in their own shells and can always control their full environment, and those who work across many machines. I'm the latter, but I tend to notice that the latter are also the former; they just make different choices because of it, or at least keep smarter dotfiles that change things based on the environment, or (like me) source install scripts (you can always build a program locally ;). I'm glad people live in their shells, but I hope people making TUIs realize that there's a bunch of us who have to move between machines constantly.
There’s something a little funny about a CLI interface to GitHub, a website built around making it easier to use… git, a CLI program.
Which isn’t to poo-poo the project; GitHub adds a bunch of extra features, so I can see why someone might want to build on top of it. It’s just funny, the winding path our files take to get to our eyeballs.
We like creating decentralised technologies and then gravitating around centralised services built on top of them. The Internet was meant to be decentralised (it still is, in terms of the protocols that make it work), but we ended up consuming very much centralised services. Bitcoin is decentralised in terms of technology, but we are centralising around exchanges, wallets, etc.
I guess the decentralisation aspect of a given technology just means it's a resilient building block. And since the only way we make sense of things nowadays is through markets, it's inevitable that we start building and consuming centralised services.
I'm not saying centralisation is good or bad, btw, but I do share the sense of irony in your reply, mostly because I find it funny when a new tech is sold as a new, decentralised way of doing something.
Decentralization mainly grants flexibility/customization/freedom, but generally you don’t really care about this: freedom only really matters when you can’t do the thing you’re trying to do.
If everything is already covered without that freedom, or you don’t care about what isn’t covered (or never conceived of it), then you don’t really mind whether you have the freedom or not. It doesn’t change much.
Gmail is a good email client; email being decentralized is only relevant to me if I wanted to get off Gmail and go to, say, Fastmail. But if I didn’t, what do I care whether the underlying protocol is centralized or not?
> freedom only really matters when you can’t do the thing you’re trying to do.
Disagree, or at least I think there’s a need to clarify: this makes it sound like that kind of freedom is only occasionally relevant.
This kind of freedom derived from decentralization also exists as a persistent threat against bad behavior. If users can leave and bring their stuff with them, that constrains the choices that the platform can even consider.
Attempts to centralize should be seen as strategic attempts to change the landscape in a way that makes it easier to exploit users.
People don't care about whether their tools are decentralised or centralised (apart from a few principled individuals). They care about the UX of the tool, and so far centralised options have won on UX. I'm not sure how much of that is due to being centralised, rather than due to companies that invest in UX to gain market share preferring a centralised model.
Of course, centralised/decentralised is also part of the UX (what happens when the centralised service goes down?), but so far that hasn't been a net win for decentralised options.
I don’t use the site much at all, but it isn’t really a matter of how I see the site—they clearly added features like issue tracking, commenting, and CI, and people use those features.
I have something similar just for checking out PRs in a single bash function, powered by gh and fzf:
Screenshot: https://github.com/oxidecomputer/console/assets/3612203/2805...