This pains me. They saw the "what" but not the "why".
The point of using a command line is not that a VT100 emulator is an ideal way to view data (it's not). It's that we can combine commands, using pipes and redirection. I don't want to learn special new filtering flags. I want to get raw data that I can pipe to grep or cut or any of the other dozens of tools I've been learning and using for the past 25 years.
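The kind of pipeline I have in mind looks something like this (just a sketch; it assumes the list's columns -- number, title, labels -- come out tab-separated when piped):

    gh issue list | grep -i "crash"    # keep only issues whose title mentions a crash
    gh issue list | cut -f1,2          # strip everything but number and title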
The point of using a distributed version control system is so I can store it all locally. This new tool defaults to only 30 items, and cannot fetch more than 100. So even if you're willing to put up with the multi-second latency of hitting their servers for every command line operation (it feels like CVS all over again), you still can't say "give me everything and let me grep it". You have no choice but to use the 3 ways they give you to filter (assignee, label, state -- not title, comments, or date).
I've got a 50Mbps network connection, a 500GB SSD, 16GB of memory, and an 8-core CPU, and now instead of putting all that to work running JavaScript to access my bug reports, I'm fetching them as plain text at a maximum of about 5000 bytes at a time. (Net throughput is on par with an Apple II with a 14.4Kbps modem.) Is this an improvement?
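For a rough sense of where that comparison comes from (the ~5000 bytes per fetch is from above; the ~2-second round trip per request is my assumption):

    echo $(( 5000 * 8 / 2 ))    # ~20000 bit/s, i.e. dial-up-modem territory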
I do not understand why GitHub Issues aren't git repos, like most other GitHub collections. Then we wouldn't need a completely new (and slow/complex/limited) tool to read bug reports on the command line. Perlis had this figured out back in 1982: using common tools for disparate data is a killer feature. That's why we're still using the command line in a way that looks nearly identical to how it worked in the 1980s, while a few generations of other interfaces have come and gone since then without displacing it.
The pessimist in me would say that Github doesn't have issues as git repos because that makes migrations off of their platform more difficult, and hence makes the vendor lock-in stronger.
The optimist would say that limiting contributions to issues to the web platform kept the formatting (markdown) consistent, so linking back to issues and commits was more standard, which made Github Issues a nicer tool to use than the competition at the time.
>I do not understand why GitHub Issues aren't git repos, like most other GitHub collections.
Doing that would make their implementation trickier, since they'd have to find a way to make issues play nicely with the git object system, which may not be completely trivial. On top of that, they have very little incentive to do it, because having a custom API and custom tools to access it increases lock-in. If Github issues were just git repositories you'd be able to migrate them very easily. I'm sure they very much don't want to decentralize this.
I agree with your general point, though; I'm very much a shell power user, but I seldom use these tools because they don't seem particularly more efficient than just using the web browser, in my experience. As you point out, the power of the shell is combining commands. If I just want shortcuts to list various attributes of the project, I can do that with bookmark keywords in Firefox, for instance.
While the issues aren't a git repo in themselves, their API is actually quite open and usable. I've written a few migration services for a past employer where management insisted on JIRA but developers wanted Github, so my service would basically batch-migrate changes (issue state, description, comments, etc.) from Github to JIRA, so management would get the updates while devs used Github exclusively.
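For what it's worth, the calls such a service needs are plain REST; a minimal sketch (OWNER, REPO, the issue number, and the $GITHUB_TOKEN variable are all placeholders):

    # Fetch an issue's state/title/body, then its comments:
    curl -s -H "Authorization: token $GITHUB_TOKEN" \
        https://api.github.com/repos/OWNER/REPO/issues/42
    curl -s -H "Authorization: token $GITHUB_TOKEN" \
        https://api.github.com/repos/OWNER/REPO/issues/42/comments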
Well, the "why" is certainly "because you use git from the command line; no context switch; CLI is always faster than GUI".
And of course these reasons are poor and hide the fact that with systems like Github (Gitlab and Bitbucket are the same), issue tracking and the like are locked in. The wiki is the only one I know of that can be accessed as a git repository (at least in Gitlab and Github). Which other Github "collections" are exposed as git repositories?
I'm not convinced that their ad-hoc curses interface is really faster than a GUI. First you have commands like "issue view" that... open the browser to display the issue. Then you have commands like "pr create" that seem to ask for a title and then a description. What if you want to amend the title while you type the description? Well, I guess you can always "preview in browser" when you're done. Might as well just do it from the browser to begin with, honestly.
"pr checkout" is the only command that seems truly useful and a time saver to me.
Thanks for the feedback. There's also a --preview flag in the view commands if you want to see the body in your terminal. We're still working on a bunch more functionality, and we intend to add more flags to allow you to customize the things that are most useful to you. But we wanted to get this in people's hands early to get feedback to help inform where we go from here.
> The point of using a distributed version control system is so I can store it all locally. This new tool defaults to only 30 items, and cannot fetch more than 100. So even if you're willing to put up with the multi-second latency of hitting their servers for every command line operation (it feels like CVS all over again), you still can't say "give me everything and let me grep it". You have no choice but to use the 3 ways they give you to filter (assignee, label, state -- not title, comments, or date).
I love the command line too, and I get what you're saying, but why don't you just use their API?
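Paging through the whole tracker and dumping it to something greppable is a short loop, for example (a sketch; OWNER/REPO are placeholders, jq is assumed to be installed, and unauthenticated requests are rate-limited):

    # Pull all issues 100 at a time until an empty page comes back,
    # then ordinary grep/cut/awk work on the result.
    page=1
    while :; do
        body=$(curl -s "https://api.github.com/repos/OWNER/REPO/issues?state=all&per_page=100&page=$page")
        [ "$(printf '%s' "$body" | jq 'length')" -eq 0 ] && break
        printf '%s' "$body" | jq -r '.[] | "\(.number)\t\(.title)"'
        page=$((page + 1))
    done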
>I do not understand why GitHub Issues aren't git repos, like most other GitHub collections.
They are probably deliberately avoiding making Github issues available as git repos because they suspect their competitors would copy the format and reduce the effort required to migrate to a different service.
It prints to stdout so you can of course grep your "gh issue list" (or at least the 3 fields that it displays: number, title, labels), but since it silently truncates output to at most 100 records, this doesn't seem terribly useful to me.
It doesn't even print a message or use a nonzero return code to indicate that the output is incomplete. What's the point of running "gh issue list | grep foo"? There's no way to distinguish between "there are no 'foo' issues" and "there are 250 'foo' issues (they just happen not to be at the top of the list)".
Code search seems to be out of scope for the CLI. (It arguably doesn't even do issue search -- just basic filtering on a couple predefined fields.) But that's no problem because I've got all my source code on my machine already! I can use grep, git-grep, or any other tool I want (ack/ag/rg are popular).
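For example, from inside the clone:

    git grep -n "use-after-free"    # searches tracked files only
    rg -n "use-after-free"          # ripgrep; respects .gitignore by default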
I don't see why you couldn't just make a repo called <project>-issues and build the workflow into git. Put instructions on how to use it inside CONTRIBUTING.
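A rough sketch of what that could look like (the repo name, layout, and file format here are made up for illustration):

    # Issues live as plain files in a sibling repo; every ordinary git/grep workflow applies.
    git clone git@github.com:OWNER/myproject-issues.git
    cd myproject-issues
    mkdir -p issues
    cat > issues/0042-crash-on-startup.md <<'EOF'
    State: open
    Labels: bug, crash

    Crashes on startup when the config file is missing.
    EOF
    git add issues/0042-crash-on-startup.md
    git commit -m "Issue 42: crash on startup"
    grep -ril "crash" issues/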