Two years ago I was collaborating with a coworker. Most of our time was in his office. I would look over his shoulder as he typed. He had recently adopted a new IDE and spent considerable time configuring it and learning its features. He was very proud of how productive it made him.
Then one day we collaborated in my office instead. I use plain Unix tools, all independent. He watched as I did everything he did but with point-tools instead of a huge IDE. Rather than switching between files, I started a new terminal window and ran Vim. Other windows were for the build-run-crash loop, analyzers, note-taking, etc. Often I would do Vim tricks like doing a calculation by writing a formula on a line in Vim and piping the line to "bc" (`!!bc RETURN`). In fact, many of his IDE features I would access by calling out to external programs.
I had to hold back a laugh when he said:
"Oh my God! Linux IS your IDE!"
Well, duh. Isn't that what Ken and Dennis created in the 1970s? Unix started out as the ultimate development environment! It just also happens to run everything else now.
- ctrl-E for "find and open in editor" (using fzf, substitute 'edit' with your editor of choice; the history gymnastics is to basically pretend that you typed it in manually for purposes of bash history)
- and of course, for german keyboard layout devs, rebinding capslock to alt-gr.
> Wonder if there could be a shell tool to just dump the first stackoverflow hit for a search.
There should be. I think `curl cheat.sh/programming+question` does that. There might also be a way to do that with surfraw. I just use DuckDuckGo; it almost always pops up a Stack Exchange answer in the sidebar when I search for something relevant to an SE site.
I really love cht.sh, but find I never actually use it except for the occasional git syntax reminder. I think a flow that went `cht.sh -> fzf to narrow results and select a result -> vim -> OS clipboard` would have me using it all the time.
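A rough sketch of that flow might look like the function below. This is only a guess at how the pieces could fit together; it assumes curl, fzf, and vim are installed and that cheat.sh's :list endpoint returns the available topics.

    # sketch of a cht.sh -> fzf -> vim flow (assumes cheat.sh's :list endpoint)
    cheat() {
      local topic
      topic=$(curl -s cheat.sh/:list | fzf) || return  # fuzzy-pick a topic, abort on cancel
      # read the answer in a read-only vim buffer; yank with "+y to reach the OS clipboard
      curl -s "cheat.sh/$topic" | vim -R -
    }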
> and of course, for german keyboard layout devs, rebinding capslock to alt-gr.
I went the other way: use a US layout keyboard, and bind capslock to switch to German layout while capslock is being held down. I find that I need my fancy braces and brackets more than my umlauts. :)
I'm using US-intl (international) right now, supported everywhere I've seen. AltGr+q is ä, AltGr+s is ß, AltGr+y is ü, etc.
Might seem weird at first, but it has most of the special or combined symbols you might need, not just German ones. Other than that, it's your usual US layout suitable for vim/evil, shell, coding, etc.
>Though Google takes up a worryingly large fraction of it. Wonder if there could be a shell tool to just dump the first stackoverflow hit for a search.
I kind of thought it was a novel idea, but kind of pointless - if you know how to read a traceback you can Google it... but I guess it makes a lot of sense depending on your workflow.
`dirs` is a shell built-in for the directory stack. If you supply it with -v, it will number the output. So the trick is just to make these line up, which is also really just using a feature of `cd` that extracts an entry from the directory stack at position n. My aliases look like this:
alias dirs='dirs -v'
alias '1'='cd -'
alias '2'='cd -2'
alias '3'='cd -3'
etc.
Doesn't seem like it would make a big difference, but it's very fluid and adapts well to whatever you happen to be working on in the moment.
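For example, a session with those aliases might look something like this (paths are made up; it assumes a shell where `cd -n` indexes the directory stack, e.g. zsh with auto_pushd):

    $ dirs -v
    0       ~/src/app
    1       ~/src/lib
    2       /etc/nginx
    $ 2     # the alias runs `cd -2`, jumping to /etc/nginx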
Okay, the actual fzf call is just the $(fzf). But when you bind a key with `bind -x`, your shell executes the command directly. So when you press return in fzf, indicated by the exit code being zero, we additionally do four things:
- echo a facsimile of the prompt followed by the edit command so that it looks like you just typed "edit filename"
- insert that facsimile into the history buffer so that you can redo the edit command with arrow-up
- actually start the editor
- and clear the commandline of anything you may have written beforehand, which would otherwise still be there and break the illusion.
The reasoning behind this dance is that with an ordinary bind, you only get the edit_fuzzy call in your bash history, which is pretty useless. You want the actual generated edit call, not the edit_fuzzy call. So we use bind -x and "pretend that you typed in the edit call manually". We could instead make ctrl-E insert the edit command into the readline buffer, but then we'd have to press return twice.
${PS1@P} is "PS1, expanded as if it were a prompt (P)".
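The snippet itself isn't reproduced here, but a minimal sketch of such a binding could look like the following (assumptions: bash 4.4+ for ${PS1@P}, fzf on PATH, and `edit` standing in for your editor of choice):

    edit_fuzzy() {
      local file cmd
      file=$(fzf) || return                # do nothing if fzf was cancelled
      cmd="edit $(printf '%q' "$file")"
      printf '%s%s\n' "${PS1@P}" "$cmd"    # echo a facsimile of the prompt + command
      history -s "$cmd"                    # put the generated command in history
      READLINE_LINE='' READLINE_POINT=0    # clear anything typed beforehand
      edit "$file"                         # actually start the editor
    }
    bind -x '"\C-e": edit_fuzzy'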
fzf can be used as a cool selector in the terminal: it just outputs whatever you picked from the list. So if you run `ls | fzf`, the list of files/dirs is passed into fzf, where you can fuzzy-search and select, and the selection is written to STDOUT.
So if you want to edit a file from inside a directory easily, you can do:
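(The command itself appears to have been lost from the comment; presumably it was something along these lines, with vim standing in for your editor of choice.)

    vim "$(ls | fzf)"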
In general, though, Emacs can single-handedly do most of everything required in software development. One could practically spend all of one's development time inside it, so I would say Emacs workflows are more impressive than Vi ones, especially since so much great functionality is baked into it without even needing plugins.
I say this after a couple of recent days in which the GUD gdb interface was a great help in solving a few of the problems I was working on. Does Vi have something similar built in? Or anything for common workflows like these, which work out of the box without needing plugins or extra configuration:
1. Remote file editing on other computers.
2. Rectangular/columnar text edits and manipulations.
3. Essentially infinite kill ring, i.e., a very easy to use copy-paste history: I find this very useful; one becomes accustomed to building blocks of text by assembling things from the kill ring. Also plays nice with point 4, easy macro record/replay.
4. Easy macro record/replay of even complicated edit sequences.
5. Inbuilt calculator, with both RPN and algebraic modes.
6. The Dired mode provides a very convenient file manager.
> I say this after a couple of recent days in which the GUD gdb interface was a great help in solving a few of the problems I was working on. Does Vi have something similar built in? Or anything for common workflows like these, which work out of the box without needing plugins or extra configuration
Not quite, but the help page does mention a plugin called termdebug[1] that can be used with vim's new terminal feature.
> 1. Remote file editing on other computers.
The netrw plugin, which comes with the default vim installation, allows for this.
> 2. Rectangular/columnar text edits and manipulations.
You can select text in visual block (column) mode and delete, replace, or insert text before or after the block (which can be wider than a single column).
> 3. Essentially infinite kill ring: I find this very useful; one becomes accustomed to building blocks of text by assembling things from the kill ring. Also plays nice with point 4, easy macro record/replay
vim has registers that correspond to every ASCII letter (I think emacs has this as well). From what I recall, when you delete or yank text, it goes through registers 1 through 9, with each subsequent operation shifting the previous one to the next higher-numbered register. After register 9, the delete or yank is lost.
But, if you use the capital letter registers, you can append a subsequent deletion or yank to what's already in the register, which I believe would effectively act as an infinite kill ring. I'm not sure if emacs can do the same with its registers.
> Easy macro record/replay of even complicated edit sequences
Vim has this as well; you can store one macro in each of the ASCII letter and number registers.
> 5. Inbuilt calculator, with both RPN and algebraic modes.
There may be a plugin for that, but you could certainly pipe a line of text through dc or bc and get the result in vim.
> 6. The Dired mode provides a very convenient file manager.
You can get this through the netrw plugin as well.
Even without resorting to external tools, the expression register (:h @=) can act as a surprisingly good builtin calculator, and there is a moderately large builtin math library (:h functions).
Like all registers you can abuse it in many ways too, for instance as a dirty in-place calculator when you're writing docs: yank an example calculation, insert from the expression register (:h i_CTRL-R_=), and operate on your yank (:h @") inside it.
But is that functionality available natively, without plugins? All of these have been in the default emacs distribution, possibly for more than 20 years, and are in a very usable, stable state.
Sure but what’s the difference? That’s the beauty (and occasional curse) of Emacs. There’s no hard line between “native” code and plugins, outside of the Emacs Lisp interpreter.
Unix isn’t integrated enough. You can’t go very far with just throwing strings around (or bytes). You can’t safely refactor, say, Java code like you can do in Intellij.
Maybe you can piece something together manually that does the same thing, eventually. But if you have to do all of that work yourself then I wouldn’t call it integrated.
I think the point is that with *nix you aren’t just stuck with what some product manager at JetBrains thought was a cool feature.
And when you want to brew something up from scratch, or do the same task for the nth time, you don't have to learn how on earth this IDE implemented it, or learn a whole new API.
Take for instance an IDE's find usages function: often a simple grep -r from a judiciously chosen directory gets you the same thing, but faster and more flexibly (if your mind is wired that way, I suppose).
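As a rough illustration (the method name and paths are made up, and --include assumes GNU grep):

    # find call sites of a method named fooBar in Java sources under src/
    grep -rn --include='*.java' 'fooBar(' src/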
This winds up not being a big deal if you use grep a lot because you'll tend toward greppable names.
More importantly though, in a codebase big/complicated enough to need "find references" functionality beyond what you get from grep, do you not also have comments, reflection, and all manner of other garbage that you need to search for via some mechanism anyway?
For a few languages (more on the way) you also have semgrep as a way to add something a bit more powerful than "find references" into a normal shell workflow.
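If I'm remembering the invocation right, a pattern search looks roughly like this (the method name is made up):

    # match actual calls to fooBar, rather than every textual occurrence
    semgrep --lang=java -e 'fooBar(...)' src/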
I have used `grep -r` a ton on our codebase and purpose-specific tools like “find usages” are always more to the point while `grep -r` can easily be misleading.
Also you can’t just tend toward greppable names (whatever that means) if you share a codebase with developers who are on Windows and wouldn’t be able to tell grep from sudo.
> I think the point is that with [Unix] you aren’t just stuck with what some product manager at JetBrains thought was a cool feature.
But you see that’s exactly what you are, because you can’t refactor Java code with what you call “Unix tools”. So your only option is to install actual IDEs on Unix.
And “find usages” finds... actual usages (e.g. method calls), not simple text matches which can end up being 80% noise. For `grep -r` you might try `Ctrl+Shift+f` instead.
I tried this approach for a while, and the thing I couldn't get my head around is how you undo mistakes. Like, you write a messed-up sed command - are you supposed to revert your changes to the last git commit? Not make mistakes?
If I could work out a way to make undo as seamless and full featured as undo in, say, vim, then I'd be a total convert. Is there such a way?
In all fairness, is there ever really a watertight abstraction for undoing things at the file system level like you're talking about? If I'm ever trying to do some magic in a Unix-like environment, I almost always make sure that I'm using "dry run" mode if it's available, or running the command against files that are in version control. On top of that, it's also essential to invest a reasonable amount of time in ensuring that the command won't run amok and start following symlinks or something like that. These are habits you have to develop if you want to operate at the command-line level. IDEs often don't offer similar levels of power or control, or the ability to cleanly undo large sets of changes like that.
I don't know of one - that's kind of why I'm asking. I don't think it's at all infeasible (I think ZFS might do it, I've never tried it) and I think it would be an amazing development. I think the Unix model of single-purpose programs helps collaboration and maintenance, and produces higher-quality programs. I just also think that because they're working in an environment that allows pretty simplistic interop (streams of bytes) and generally provides no history of changes, they tend to be fragile when used in concert, and brutally unforgiving when used alone.
If you could undo stuff easily, suddenly the 'brutal, unforgiving' part goes away, and it would enable a much more relaxed, explorative atmosphere, while also meaning people don't have to do stuff like `alias rm='mv -t ~/.Trash'` or whatever.
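For what it's worth, filesystem snapshots get you part of the way there. A sketch, assuming a ZFS dataset named tank/home (the dataset and paths are made up):

    # take a cheap snapshot before a risky bulk edit
    zfs snapshot tank/home@before-sed
    sed -i 's/oldName/newName/g' src/*.c
    # if it went wrong, roll the whole dataset back
    zfs rollback tank/home@before-sed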