Together, vim and vi have over four times as many invocations as emacs. I like to think it's because vim is more popular, but it might also mean that vim users are more likely to pop in and out of vim while emacs users run emacs once and then never go back to bash-land.
It might also mean that vim users are more likely to check their ~/.* files into a public Git repository. You really can't do that much with the information that vim was invoked more.
In addition to the fact that emacs is usually invoked once, and then operated from within, I think history deduplication could play a role as well. Essentially, the data is not so cut and dried.
I'm a near-rabid Vim user, but I also came here to point that out. You can't draw conclusions about what people in general do from a sample of people who check in their history files.
The sample size of this experiment is also not very large. If you look at the axes on the graphs, we're not talking about numbers that would let you make solid generalizations.
"vim users are more likely to pop in and out of vim"
I suspect this is the case. I leave my emacs running for the entire work week. When I need to make a quick change and go back to the terminal I pop open vim.
This is what I do, too. I also load emacs by clicking an icon - I doubt I've run it from the terminal more than a handful of times. (I've never figured out how to make terminal emacs work properly anyway - half the keybindings don't work, and it's a different half in each terminal program.)
Going back as long as I can remember (slashdot in the late 90s), all the completely unscientific web polls I've seen [1][2][3] favour vi(m) by at least 2 to 1.
I dunno. I've done that in the past (actually using a separate init), too. Zile just seems simpler and it works fine for me. There's a billion ways to do things, it is hard to justify idiosyncrasies.
I think what started it was that I drastically simplified my emacs init file (it used to be several files, very smart and thought-out but massive and filled with rarely-used stuff) and I found myself always popping into nano for quick edits to system files. Well, this quickly got annoying, as there were conflicting key bindings with emacs and my muscle memory was causing frequent errors. So I googled an emacs-flavored nano and settled on the first one I tried. Then I aliased nano to 'echo "USE ZILE"' until that became second nature, too.
The confusion comes from understanding what the target of a link is. Of course, it's not the new file you're creating, it's the source. But if you were copying the file, then the target would be the destination.
Hence the confusion, I guess.
(I don't think I've ever read the ln man page. I learned it by trying it both ways until I learned which way was correct.)
I admit it's confusing, but I don't think it can be put much more succinctly than the man page makes it. The part where the name of the link itself is called "link name" really drives the point home.
I definitely refer to ln more than any other man page... Every time I use ln I say "target, link name" in my head.
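The mnemonic matches the man page synopsis, `ln [-s] TARGET LINK_NAME`: the existing file comes first, the new name second. A throwaway-directory sketch:

```shell
# Scratch demo of the ln argument order: existing target first, new link name second.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "hello" > target.txt
ln -s target.txt mylink    # "target, link name"
dest=$(readlink mylink)
echo "$dest"               # the link records the target's name
```

Reading the link back with `readlink` is a quick way to confirm you got the order right.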
cp has two interesting switches: -s for symbolic links and -l for hard links. Plus, it'll also link whole directory structures with -r. This might be a GNU-only feature, I'm not sure.
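A quick sketch of those three switches, assuming GNU cp (note that GNU cp only makes relative symlinks when source and destination are in the same directory):

```shell
# Scratch directory to try cp's linking switches.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo data > orig

cp -l orig hard        # hard link: hard shares orig's inode
cp -s orig sym         # symbolic link: sym points at orig
mkdir tree && echo x > tree/f
cp -rl tree tree2      # recurse, hard-linking each file instead of copying it

[ orig -ef hard ] && echo "hard shares orig's inode"
[ -L sym ]        && echo "sym is a symlink"
```

`cp -rl` is handy for cheap snapshot-style copies of large trees, since no file data is duplicated.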
Yes, but the commands are normally interpreted differently in English. For example, 'cp A B' is "copy from A to B", that is, copying source A to destination B. But 'ln A B' could easily be interpreted as "link A to B" or "make A a link to B", when it is really making B a link to A. At least, that's why I sometimes forget the order.
I think this is because 'to copy' is a transitive verb and 'to link' is intransitive, in the sense they are used here...?
But with a quick perusal of the man page, you learn how the software works, and you don't have to make assumptions based on interpretations of natural languages.
(There's probably some .bashrc setting that turns off history, but I'm making it a goal in my life to know as little about bash configuration as I possibly can.)
Github is amazing. But I don't like how exceptionally hard it is to delete commits from history. I've looked around for a long time for a way to delete commits safely, and people say things like "you shouldn't be doing it anyway."
Free software should give the user the freedom to do anything they wish. I feel like I'm chained down when I want to delete very old commits.
I'm sure there are ways to delete commits, but it is just so complicated.
Github has nothing to do with that. It's hard to delete commits from git, because of the way it fundamentally works. Part of what defines a commit is its parent commit (or commits, plural, if it's a merge), and that chain goes back to the beginning of the repo. This means that you can't remove a commit from the history without changing everything that came after.
Well, you can do that, but suddenly no one else who works with your repo will be able to pull changes, because there is no way to reconcile the histories. Think of them as divergent timelines. They're like, "last thing I saw, Obama got re-elected; what's happened since then?", and your modified repo is like, "Uh, I don't know who this 'Obama' fellow is, and don't let the Lizard King hear you talking about elections."
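You can see the parent pointer directly in the raw commit object. A sketch, assuming git is installed, using a throwaway repo:

```shell
# Throwaway repo showing that a commit object literally records its parent's hash.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > f && git add f && git commit -qm "first"
echo two >> f && git commit -qam "second"

# The raw commit object contains a "parent <hash>" line.
parent_recorded=$(git cat-file -p HEAD | awk '/^parent/ {print $2}')
parent_actual=$(git rev-parse HEAD~1)
[ "$parent_recorded" = "$parent_actual" ] && echo "HEAD embeds its parent's hash"
```

Since the commit's own hash covers that parent field, changing any ancestor necessarily changes every descendant's hash, which is exactly why the timelines diverge.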
| A free software should provide freedom to the user
| to do anything they wish.
There is freedom. It's the same freedom that allows the root user to `dd` over the master boot record on a Linux install. People tell you to shy away from it because you can shoot yourself in the foot, not because it's not possible.
Look into git-filter-branch for wholesale rewriting of commits.
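A common recipe (hedged: it's destructive, so try it on a clone first) is an --index-filter that drops one file from every commit. A self-contained demo with a throwaway repo and a hypothetical secret.txt:

```shell
# Throwaway demo: purge one file from every commit with git-filter-branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "hunter2" > secret.txt
echo "fine"    > keep.txt
git add . && git commit -qm "oops, committed a secret"
echo more >> keep.txt && git commit -qam "later work"

# Rewrite all commits, dropping secret.txt from each tree.
# FILTER_BRANCH_SQUELCH_WARNING silences newer git's "use filter-repo" advice.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
    --index-filter 'git rm --cached -q --ignore-unmatch secret.txt' \
    --prune-empty -- --all >/dev/null 2>&1

hits=$(git log --oneline -- secret.txt | wc -l)
echo "commits touching secret.txt after rewrite: $hits"
```

After a rewrite like this you'd still have to push with -f and have every collaborator re-clone or rebase, which is where the "you shouldn't be doing it anyway" advice comes from.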
Deleting commits in git is trivial compared to every other VCS I've used. Deleting the most recent commit is just 'git reset HEAD~1 --hard' and deleting an older commit is just 'git rebase -i commit-to-delete~1' and then remove the line for the appropriate commit. After deleting the commit locally, you then just have to push with -f to Github.
Resetting the head to a previous commit will not delete the commit. It will be left dangling in the commit graph (it is not a head and has no children). Rebasing does delete the commit, but it can safely be done only on local commits (commits not yet pushed anywhere else). Never mess with the history of replicated repos!
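That a reset commit merely dangles, rather than being destroyed, is easy to check in a scratch repo (a sketch, assuming git is installed):

```shell
# Demo: git reset --hard "deletes" the tip, but the object survives until gc.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo a > f && git add f && git commit -qm "first"
echo b >> f && git commit -qam "second"

doomed=$(git rev-parse HEAD)
git reset -q --hard HEAD~1       # drop the most recent commit from the branch

# The commit object is still in the object store, reachable via the reflog.
git cat-file -e "$doomed" && echo "commit object still exists"
git reflog | head -n 1           # the reflog entry records the reset
```

Until `git gc` prunes it, `git reset --hard "$doomed"` would bring the "deleted" commit right back.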
"Never" is too strong here. There are plenty of legitimate reasons for messing with replicated history. For example, what if you have copyright violations (lifted code), personal details (SSNs), or child pornography in the repository's commit history?
The advice is: "Messing with the history of replicated repos is a major undertaking and is not to be undertaken lightly." But I guess that doesn't roll off the tongue as well as "never".
It would be nice if there was a "delete all references to this file" button on Github. If suddenly a file is up there that was never supposed to be, a single click to purge it from all existence would be great.
Interesting stats. I'd be wary however, of prematurely drawing any hard conclusions.
For example, when comparing vim to emacs, one should rather look at whether a given bash history indicates a predominantly emacs or vim user, and then count that history as a single vote.
Even then we can't be sure that it supports any grandiose claim, but it should in theory be a little more accurate.
Agreed. Also, for Vim vs. Emacs, most users of both probably use the GUI variants at this point, which leave no shell history. I still run emacs on remote terminals, but then I don't make the mistake of uploading my bash history into a git repo.
OS integration -- copy-and-paste works better, scrolling works predictably (there are quirks in the OS X terminal using mouse mode in vim and in emacs), and full-screen mode behaves sensibly.
Advanced features -- Certain things can't be done with the terminal. For example, Aquamacs lets you see inline latex previews: http://aquamacs.org/latex.shtml
On my Ubuntu, pico is a link to nano, and I usually type nano when I want to use it. I wonder what the editor results would show with nano thrown in.
I'm surprised so many people use apt-get instead of aptitude - the only time I use apt-get is usually to install aptitude.
Also surprised people use find so much. I use locate probably 80% of the time, find maybe 20% of the time or less. Operations like this are what caching was made for.
I did not know tmux was as popular as it is. That's interesting. I am not surprised sublime is gaining in popularity...
> I use locate probably 80% of the time, find maybe 20% of the time or less
Absolutely true. find is way too slow. I was hacking on some AOSP code recently and finding a file took upwards of a minute. locate, on the other hand, indexes the files on your hard disk, so it is generally much quicker.
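The difference is that find walks the filesystem at query time, while locate consults a database built ahead of time by updatedb. A scratch illustration of the find side (the locate side can't really be demoed without root access to run updatedb):

```shell
# find scans the tree at query time -- no index involved.
d=$(mktemp -d)
mkdir -p "$d/a/b"
touch "$d/a/b/needle.txt" "$d/a/hay.txt"

found=$(find "$d" -name 'needle*')
echo "$found"

# locate would instead read the updatedb database (typically rebuilt nightly),
# which is why it's fast but can miss files created since the last run.
```

That staleness is the trade-off: locate is the cached answer, find is the authoritative one.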
Rubyist here; how does one install the BeautifulSoup module here (Macbook Pro with Mountain Lion)?
./ghrabber.py "path:.bash_history"
File "./ghrabber.py", line 3, in <module>
import BeautifulSoup
ImportError: No module named BeautifulSoup
I tried easy_install beautifulsoup but it gave a permissions error, so I ran it as sudo and that worked, but when I re-run the script it still throws the same error.
Does python lack 'bundle install'-like functionality?
I'm not very familiar with Ruby, but I know there are some similar pain-points wrt packages/dependencies and maintaining "pristine" (aka "actually working") project trees (it's not specific to python/ruby either - hence package managers like apt/yum, build tools like apache maven etc).
What I usually do is have a system python for packages managed with apt (there are a lot of these).
I then have a local python virtualenv for installing misc packages (eg: a more recent mercurial than the one distributed in apt on older Debian distros):
virtualenv ~/opt/python-misc
ln -s ~/opt/python-misc/bin ~/pybin
# In .bashrc:
if [ -d "$HOME/pybin" ]; then PATH="$HOME/pybin:${PATH}"; fi; export PATH
Note my "local" python virtualenv is before system in path, so any casual pip install of "shiny-package-that-might-break-system" just (at worst) means I have to recreate the virtualenv (or pip uninstall it).
But for projects -- I highly recommend looking at buildout. A nice summary I stumbled across here:
If you are using pip you can type "pip freeze" and it will spit out a list of all modules and the version currently installed. Most write this out to a file called "requirements.txt" in their project root. You can then use this list to install modules via "pip install -r requirements.txt"
Same functionality as bundle install, just pythonic.
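In shell terms, the round trip looks like this (a sketch assuming a python3 with pip on PATH; the thread's era would have used plain `pip`):

```shell
# Pin the current environment's packages to a requirements file.
workdir=$(mktemp -d)
cd "$workdir"
python3 -m pip freeze > requirements.txt   # one "name==version" line per package
head -n 3 requirements.txt

# Later, on the target machine or in a fresh virtualenv:
# python3 -m pip install -r requirements.txt
```

Checking requirements.txt into the project root gives collaborators a one-command reconstruction of your dependency set, which is the bundle-install analogy.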
What s/he said. Although keeping track of new updates is a bit of a pain, since there isn't a service like the one Rubyists have to get information about new gem versions.
There are some tasks I do infrequently enough I can't remember the paths, commands, and options... but my history always saves me. So when I know it's full of useful stuff, and especially when moving a bunch of files/functionality to a new place, I tend to create a copy of my .bash_history in another, non-hidden file.
Of course, such renamed files wouldn't have been part of this analysis, so this doesn't quite explain why files actually named .bash_history are on Github. (For that, I'd surmise many people version an entire project-specific login directory with git.) But it does hint at why someone might intentionally want to keep their history in version control.
Done intentionally, it strikes me as a potentially valuable reproducibility, auditing, and training practice – documenting what was done around the time of other file evolution, making it easier for someone else to help out in a pinch with full context.
This is arguably the fault of WordPress developers for creating a config system which forces, or at least encourages, you to fork their code to set your database location. This is why configuration shouldn't go in code, a la 12factor: