Hacker News
Please do not delete this commented-out version (emacshorrors.com)
413 points by jordigh on Jan 4, 2016 | 203 comments



There are places in Smalltalk code where you see:

    false ifTrue: [
        "Some code to debug to help understand..."
    ].
In VisualWorks, you can double-click next to the first square bracket to select the code, then instantly execute it or debug it. The reason it's not simply commented out is that automated refactorings in the Smalltalk Refactoring Browser can then still update that code, which a plain comment would hide from them. (St RB used to be guaranteed to always execute strict refactorings and produce correct code.)

Often such code can even use values from the current scope, so a separate debug session can be spawned to see what it "would have been like" then dismissed with no consequences. It's stuff like that which made Smalltalk an uber-environment back in the day.


In clojure, you see something similar at the bottom of files:

  (comment
    (def some-state (create-state))
    (some-fn [1 2 3])
    (do-something some-state)
  )
The compiler ignores the block, but in a REPL-enabled editor you can evaluate the forms to inspect current state, or run functions that test behaviors or probe runtime state.


Or the equivalent

    #_(def some-state (create-state))
       (some-fn [1 2 3])
       (do-something some-state))


I guess you meant:

  #_((def some-state (create-state))
     (some-fn [1 2 3])
     (do-something some-state))
oh, and you can't have "invalid" code in (comment), but in #_ you can.


#_ comments out the form it precedes. You do not need to surround it with an extra pair of parentheses.

Didn't know about the difference in behaviour with regard to invalid code. Thanks for that.

EDIT: Leaving this here for context. I'd missed the fact that they weren't all enclosed in a single form to begin with. Parent's solution is correct in light of that.


What your parent meant is that your #_ only comments out the first form, not the other two. Surrounding them all with an extra pair of parentheses groups them into a single form.


Oops! I missed that. I wasn't looking closely enough, so it seemed to me like they were all enclosed in a single form to begin with. Thanks.


Yup, you're right! I made the same mistake as okal and assumed everything was in one form! Also, I didn't know about the 'invalid' code part - so thanks for that :)


It is possible, or even likely, that a repository switch could lose that history on a project like emacs.

Guile now has an elisp interpreter, and guile itself has JIT capabilities as well as a number of other features elisp hackers have grumbled about over the years. It's been mostly one man's work, but it has spanned years to get guile-elisp to where it is. The discussions about migrating to it have been somewhat interesting to read. You have a remarkably solid, cross-platform, well-tested product, and they're very, very cautious about harming that, even with big speed and feature improvements on the table. It's one of the few good examples of "maybe you want to think twice about fixing something that isn't broken." Then when/if they do fix it, I bet you'll be able to build it with the old elisp for 20 years, just in case something isn't quite right...


> It is possible, or even likely, that a repository switch could lose that history on a project like emacs.

Emacs is, as far as I know, the project with the oldest history stored in git. It contains the first commit from April 18th, 1985. https://www.openhub.net/p/emacs Here, look at the first few: http://git.savannah.gnu.org/cgit/emacs.git/log/?ofs=124080 Because that link is likely to go obsolete as more commits are added, here's the first commit: http://git.savannah.gnu.org/cgit/emacs.git/commit/?id=ce5584...

That this commit even exists there is the result of 10 months of work by ESR, who perfected his RepoSurgeon tool on the Emacs repo in order to cleanly carry over all the relevant commits and branches (and clean up obsolete metadata from old version control systems). Here's one post about that work: http://esr.ibiblio.org/?p=5634

(as an aside, I think RepoSurgeon is a very interesting tool that operates on a git-fast-import operation stream, as invented by Keith Packard when he moved X.org to git. A git-fast-import stream is a wonderful VCS interoperable format)
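To give a feel for it, here's a hand-written minimal stream (one blob, one commit; the name, email and date are made up) in roughly the shape git fast-import expects:

  blob
  mark :1
  data 6
  hello

  commit refs/heads/master
  mark :2
  author A U Thor <author@example.org> 0 +0000
  committer A U Thor <author@example.org> 0 +0000
  data 15
  Initial import
  M 100644 :1 README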

Anyhow, my point stands: of all projects, I expect Emacs is the least likely to ever lose history in a repository switch.


I see your 1985 date and I lower it with this https://github.com/Vanilla-NetHack/NetHack/commit/a1b94f052f... dated Dec 17th 1984. ;-)

Okay, this is not the official repository but one I did for us fork developers. And as we learnt last year, the NetHack DevTeam itself has been using CVS only since 2002 and apparently before that, they were just mailing around patches.

But of course you can set the date for any commit, so it's just a matter of finding someone who wants to have an accurate commit history and a project that is old enough. Especially with GNU projects, there are probably several candidates that would have a history starting in the 80s.

I was appalled when I learnt that Linus threw away the complete commit history of Linux when he switched to git. That was a veritable WTF moment.


I don't find it that implausible that the cost of preserving that history would have been higher than the value of doing so. (In Linux's case it would probably have meant writing an exporter for BitKeeper, which would have had to be done through reverse engineering so as not to violate the license.)


BitKeeper has an export command that at least exports diffs of its changesets. Merges would have probably been an issue to easily model.

But at least some of the commit history would have been possible to incorporate into the new git repository.

I'm no expert on BitKeeper; there were probably more options to achieve that, but it seems Linus didn't think it was needed.


Someone ported and grafted the old history onto the current history: https://archive.org/details/git-history-of-linux You can even pull from the current linux repo.


Yeah, although git isn't really suited for stitching different repositories together.

git replace now works somewhat better than git grafts did, but it shows that git was never meant to be used to stitch multiple repositories together.

Going by the commit message, Linus didn't see lots of harm by just cutting off the history:

  Linux-2.6.12-rc2

  Initial git repository build. I'm not bothering with the full history,
  even though we have it. We can create a separate "historical" git
  archive of that later if we want to, and in the meantime it's about
  3.2GB when imported into git - space that would just make the early
  git days unnecessarily complicated, when we don't have a lot of good
  infrastructure for it.

  Let it rip!
https://github.com/torvalds/linux/commit/1da177e


How is Guile's Windows support?


> It is possible, or even likely, that a repository switch could lose that history on a project like emacs.

Surely the "D" in DVCS means that many thousands of copies of the project history would be available around the world?


When the code was commented out, it would be several decades before Emacs switched to a DVCS, and the switch came another 7 years after the "please don't delete this" comment was added. Even if it was in the DVCS history, and there was a comment pointing to a commit ID, that would have been rendered nonsensical by the conversion from bzr to git.


I'd be somewhat surprised if the bzr->git conversion did in fact mess those up. I was following ESR's blog when he was doing the conversion. He seemed to feel very strongly that the conversion should not mess things up like that. (His goal was that the history should look like it would have done, if people had always been using git.)

On the other hand, I'm less confident about the conversion to bzr. And anyway the relevant question is not "would it have broken?" but "did people have reason to expect it would break?"

And on the third hand, a commit id could be referenced by author and timestamp, which would be invariant under (competent) repo conversions.


The bzr->git conversion does look like it was always in git from a looking-at-the-repo viewpoint (apart from commit messages mentioning RCS/CVS, etc.), but in-repo data wasn't changed much: when building from VCS, the build sticks a VCS identifier in the version string, and prior to a certain commit the script for that tries to grab the commit ID from bzr instead of git. However, ESR's blog does suggest that commit hashes mentioned in the source were replaced; I'm not sure if that's true.

That said, referencing the commit by author and timestamp, instead of internal commit ID is the robust solution.


Did they not record the bzr commit IDs in the Git commit messages during the conversion? If they did, finding a Git commit that corresponds to a bzr commit ID should be trivial.


In a perfect world, sure. I once worked for a company that bought another company's assets (mainly the code base) and at some point in the past the code base had been migrated from CVS to Subversion and the commit history was simply lost. They probably just took a clean copy of the main branch and started a new repository. "Why was this written this way?" was not a question you could unearth from the repository, sadly.

(Another company I worked for had commit log entries like "I like cheese" so perhaps this company wanted to erase history for a reason. Here in the 21st century version control systems finally get the respect they deserve, though I bet there's a sad number of shops that still don't use one).


Nonsensical, but still accessible


But you could _delete it today_ and it would not be lost. Right?


No idea why this was downvoted. Perfectly legitimate question given unclear circumstances.


Generally I find all copies to be bloat.

Seeing the original code will not necessarily explain old behavior. When you make a copy, how much are you planning to copy? To debug ancient code that calls 5 functions, you might need access to the histories of those 5 functions; and the histories of any functions they call, and heck maybe you only saw correct behavior on a certain OS or with certain input (none of which you may have access to). If you don't have all those histories, the "correct old code" you are looking at may even lie to you, as it suggests it is calling functions you think you are familiar with but those functions have changed in some important way.

After enough time passes — and at the rate this industry moves, time is short — the only thing that really matters is whether or not the program currently does what its current users require. And if it doesn't, you define what behavior you need it to have and find a way to fix it.


A while back, I found the following amusing comment in the source to GNU tar:[1]

    /* Parse base-64 output produced only by tar test versions
       1.13.6 (1999-08-11) through 1.13.11 (1999-08-23).
       Support for this will be withdrawn in future releases.  */
I did some digging then and found that it had been introduced in a commit that was (at that time) almost 10 years old, to the day.[2]

1. http://git.savannah.gnu.org/cgit/tar.git/tree/src/list.c?id=...

2. http://git.savannah.gnu.org/cgit/tar.git/commit/?id=e4e62484...


:( Disappointed. I was expecting a story along the lines of the "Magic / More Magic button" story.

edit http://catb.org/jargon/html/magic-story.html


Same. I've heard some horror stories about programs failing to compile if you switch independent lines around or, even worse, remove some comments. Thought it would be one of those!


Once, on an old TI machine, my friend's code continually acted oddly. BASIC. We tried and tried to debug it. We got it down to a single if statement. After pondering for several minutes, I erased the equality `=`, put it back, and ran it. Pure grasping at straws. It worked perfectly. To this day I can't explain the magic equals. This thing had a very limited character set, so a visually identical glyph is out of the question.

This magical equals haunts me to this day.


Old BASIC implementations often stored the code in a tokenized form to reduce the memory the code needs and to slightly speed up interpretation.

One fun issue in the C64's BASIC was that you could force the interpreter to not tokenize lines, yet they were still printed verbatim on "list" - so the problem was totally invisible to the user. Printing the line, then having BASIC parse the same output, fixed the issue. Although the error was pretty obvious IIRC (some fatal interpreter error aborting operation).

I'm not sure what the TI machines did. There may have been some odd tokenization error in that line, and by rewriting it you kicked off the parser again, which made the bug go away.


I remember that in some 8-bit BASICs there were "color codes": basically invisible escape sequences, which could however be interpreted as numbers if encountered in the source text. Could be something along those lines.


There are certain places in the code where I work that fail to compile if you remove whitespace/comments - there are some static variables in macros that use __LINE__ as the unique portion of the identifier, so moving them around might conflict with another definition, causing multiply defined symbols.
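For anyone who hasn't run into this pattern, a minimal sketch of the idea (all names invented here, not the actual code in question):

  #include <stdio.h>

  /* Sketch: paste __LINE__ into an identifier so each expansion of the
     macro gets its own static counter. */
  #define CONCAT2(a, b) a##b
  #define CONCAT(a, b)  CONCAT2(a, b)
  #define COUNT_CALLS() do {                                            \
          static int CONCAT(call_count_, __LINE__) = 0;                 \
          printf("hits so far: %d\n", ++CONCAT(call_count_, __LINE__)); \
      } while (0)

  void f(void)
  {
      COUNT_CALLS();   /* expands to a name containing this line number */
      COUNT_CALLS();   /* different line, different name - fine */
      /* Two expansions on one line would generate the same identifier and
         fail to compile; deleting comments or blank lines shifts every
         __LINE__ below them, which can create or remove such collisions. */
  }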


"there are some static variables in macros that use __LINE__ as the unique portion of the identifier"

Now that's a horror story, I hope I will never have to work with such code.


If you want a short one: it's not too hard to get flex to switch from generating compiling code to non-compiling code by removing a comment. The secret is whether the comment is "interpreted" by flex or by the compiler, even though they should both treat it the same.


I was hit with a flex compiler bug on a large project days away from a deadline. I think it was due to either the depth or the shape of the AST, as I was able to get it to finally compile by reordering the functions in the class. Then I added more code and broke it again.


That sort of thing has happened to me many times while using FreeBASIC. A few years ago the FB compiler was still young and had a myriad of bugs lurking in corner cases; I've encountered dozens of compiler crashes, strange interactions between unrelated bits of code that produced nonsensical error messages and were fixed by rearranging, and parsing and lexing bugs, including the lexer accepting the most remarkably garbled input, especially when I tried writing a PEG grammar for FB. Also, the preprocessor is pretty nice - it looks like C's but with vastly better semantics, and if you find a bug in it, you can exploit it for even more advanced constructs! :) Like how // in early C behaved like the non-existent-at-the-time ## operator.

     'STRANGE: For some reason I have to add a comment here or FreeBasic 0.24-pre doesn't compile it
I've even seen what donatj said, of changing the source and then changing it back in order to remove an error. That sort of thing is generally a problem with the build system not producing reproducible results, such as files not being recompiled when necessary. Since I have my own DSL that spits out FB code, and the FB code produced in different modules can have hidden conflicts, I've actually seen that one multiple times (my own fault).

Really, the amazing thing is that just a few years later FB is actually pretty stable. I don't mean to put it down; I'm quite impressed with their work.


For compiling it's rare (though pre-processor abuse is a good way of creating evil messes like that).

But totally changing runtime behaviour with seemingly unrelated code changes is usually a good indicator that you've managed to trash the stack. A compiler that manages to trash its stack, now that has potential for some extreme levels of frustration.


>we leave this version here commented-out, because the code is very complex and likely to have subtle bugs. If bugs _are_ found, it might be of interest to look at the old code and see what did it do in the relevant situation.

Here are two better alternative solutions:

  * comments (in natural language not code) that describe what the code should do
  * test cases: to dictate what code should do and prove it does do it


That's the kind of smug naivete that makes young programmers so insufferable. Your use of "should" in your two sentences is a hidden, inappropriate abstraction. You're positing the existence of a "correct design" for expand-file-name and working backwards to how you'd write it in a modern ground-up development project.

The actual problem at hand was rewriting that function in a way that doesn't break the uncounted thousands of usages in the wild, which depend on the self-described "complex and likely to have subtle bugs" original implementation.


> That's the kind of smug naivete that makes young programmers so insufferable.

That is really toxic language. It's important to call out bad practices where they exist. Not having good comments and test cases for intended behavior is a bad practice. Yes, in the real world, plenty of code gets shipped without adhering to these practices, but it's really unfortunate to see someone talked down on for showing where good practices could have prevented the current circumstances. AFAIK, neither of you have personally written those lines of code, so why is ego getting in the way here?


Because with experience comes internalization of the maxim, "perfect is the enemy of good." Mature programmers get more done and ship more good code than they can remember, while junior programmers are so focused on following perfect standards using perfect frameworks and perfect syntax that they 1) ship very slowly, and 2) belabor every decision, causing even more bugs.

Yes, the language isn't the most ... diplomatic ... but it is also frank & honestly expresses one difference a decade of experience makes for most professional programmers.


> Mature programmers get more done and ship more good code than they can remember, while junior programmers

Excellent insight!


> so why is ego getting in the way here?

I find those kinds of comments insufferable as well, but I'll assume you actually wanted an answer and not just an excuse to chastise someone:

It's because those of us who have learned we don't know everything are annoyed by people who still act like they do know everything. It hurts because we're embarrassed by remembering our younger selves.


> It's because those of us who have learned we don't know everything are annoyed by people who still act like they do know everything.

This is a beautiful way of explaining this. And so true!


The post he's responding to is the equivalent of watching a horror movie from the 70s and asking "why didn't they just use their cell phones?" It's not "important" or helpful to anyone reading this thread.


It is important. Testing was not unknown at the time, and there are still people keeping commented out code around even today.


> That is really toxic language.

It's honest communication, something that is becoming increasingly rare in this world of language policing. It's good to be nice to other people, but history has shown us time and again that freedom of speech is a very precious and important thing.

> Not having good comments and test cases for intended behavior is a bad practice.

This might be true, but nobody really knows for sure. There isn't a single development methodology that has been shown by rigorous experiments to be any better than anything else. In this case you have experienced developers doing what they think is best. Emacs is a very old and widely used project. There are very few software projects at that level of sustained impact (which is what actually matters in the end). Some humility might be wise.


>It's honest communication, something that is becoming increasingly rare in this world of language policing.

It's political correctness gone mad!

/s


> It's important to call out bad practices where they exist

I suppose. But second-guessing the development of GNU Emacs in 1984 (let's be clear: a program on the short list of the most successful bits of software ever written in the history of computing) might be a bad place to start if you're trying to avoid embarrassing yourself.

RMS was writing this, alone, on a VAX most likely, or maybe a Sun 2 if he was really lucky. What hardware do you expect he would have dedicated to that putative "test suite" even if he had time to write it?

Kids today.


Toxic to whom or to what? Chocolate is toxic to dogs - it doesn't mean that chocolate is dangerous in all circumstances.


Is there a sector of the community that is okay to ostracize with word choice?

I can't imagine why you would have written,

> Chocolate is toxic to dogs - it doesn't mean that chocolate is dangerous in all circumstances.

if you didn't think that.


> Is there a sector of the community that is okay to ostracize with word choice?

For sure. I'm an example. And I have a reason for this.

I remember myself from some ten years ago and I remember how far from reality my thinking was. I was largely self-taught, but I read many, many books and even wrote quite a bit of working code. I wish someone had told me how naive I was, how mistaken in many areas. Instead, I went on to work - having quite good luck and joining a very capable, professional team - and was in for the most depressing half a year of my life. All my teammates helped, and gradually I started seeing things in context and learned to appreciate all the accidental complexity and real-life requirements. However, this still was a terrifying experience for me and I'd have been much better off, I think, had someone told me, without trying to be unnecessarily polite, how stupid and childish I was BEFORE I went and got bitch-slapped hard (repeatedly) by reality.

So yeah, I have no motivation to do this myself, because young programmers are indeed insufferable, but I cheer for everyone who tries to destroy the false, unfounded confidence of "young programmers". I strongly believe it will be good for these younglings in the long run.


Your post is toxic to me. Please don't ask me what that means.

Or: Sure. How about murderers? They are frequently described as inhuman, barbaric, etc. Was that really the point you were trying to make, or is there something else you are getting at?


I grant you that it's less bad to ostracize "murderers" than "[insufferable,] young programmers".

That doesn't put ostracizing young programmers in the plus column.


You asked: Is there a sector of the community that is okay to ostracize with word choice?

I responded. Explain yourself more precisely if you want me to consider your point...


And solving the actual problem would be much easier with test cases than with a commented-out algorithm. This is all the more important if the original implementation is "complex and likely to have subtle bugs", since that implies that anyone reading the comment is likely to miss 'seeing' important edge cases.

A really simple way to solve this would be to move the simplified algorithm to a test suite which iterated over some important edge conditions and validated that the simplified algorithm and the refactored code agreed. That would preserve the simplified code, but in a much more useful form where automated testing could be done against it.
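Something along these lines, say (function and test-case names are invented for illustration, not Emacs's actual API):

  #include <assert.h>
  #include <string.h>

  /* Assumed to exist elsewhere: the simple, trusted reference version and
     the optimised rewrite, each writing the expanded name into out. */
  char *expand_simple(const char *name, char *out, size_t outsz);
  char *expand_fast(const char *name, char *out, size_t outsz);

  static const char *cases[] = {
      "/foo//bar", "/foo/./bar", "/foo/../bar", "~/notes.txt"
  };

  static void test_expansion_agrees(void)
  {
      char want[1024], got[1024];
      for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
          expand_simple(cases[i], want, sizeof want);
          expand_fast(cases[i], got, sizeof got);
          assert(strcmp(want, got) == 0);  /* the rewrite must match the oracle */
      }
  }

Edge cases people trip over in the wild then get added to the case list instead of to a comment.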


That's assuming that your test cases test for all the subtle little edge cases. That's incredibly naive, and unlikely.


No test suite is complete; however, each test is one better than none. With time (30 years!) a test suite becomes complete enough to matter, as edge cases are accounted for in the wild.


>each test is one better than none

My experience has been the reverse: lots of ultimately pointless tests add nothing to a given codebase but maintenance costs and developer frustration.

The most successful projects I've worked on used (completely automated, no code getting pushed to staging unless the CI server says it passes) tests sparingly.


Having a single test for a subtle edge case that can be missed is better than a committer and a reviewer both missing that edge case and committing code that breaks it.

I can't fathom how anyone thinks that documenting the algorithm is better than running the algorithm. The only possible explanation that occurs to me is sociological and that when someone eventually breaks the code which was only documented, you can heap scorn on those who allowed it to be broken because they weren't as smart as you are to be able to see that edge case in hindsight.

Which I guess is 'better'. I'd prefer to have a test that fails. When code breaks an edge case that nobody thought of, I'd add that edge case to the test suite. It will never be perfect, but it will work at least some of the time.

And assuming that test suites must be perfect and complete to be useful is also incredibly naive and a sign of an immature and fairly novice developer. Test suites are always imperfect; you must guard against over-relying on them and not assume they do everything for you, and yet you need to take the time to build and maintain them, since they make your code quality better when they do catch edge conditions where you weren't smart enough to see the bugs you were introducing.

I guess some programmers are just smart enough to never commit buggy code though and don't need them...


> That's the kind of smug naivete that makes young programmers so insufferable.

You said it.


That might be a workable idea if the guy writing the replacement code has such a perfect understanding of the original code that he can spell out every assumption and edge condition that the code covers (as well as write tests to cover all conditions). The best you can hope for is comments and tests that cover the new author's impression of what the old code was doing, which is not necessarily the same as what the old code did.

In this case, since he apparently doesn't feel that he has that perfect understanding, he left the original code there for the day someone says "Hey! I counted on behavior XXX, this change broke my workflow!".

And any change no matter how trivial or "correct" always breaks someone's workflow.

https://xkcd.com/1172/


You missed:

* Comments referencing the original code by date / version / commit id. Date and version are relevant because the comment might not get updated if there is ever another VCS migration.


It's a bit ironic that there's a longer comment describing why the code shouldn't be removed than there are comments describing what the code _actually does_ or _why_ inline.


I wonder what the most complex algorithm you've ever implemented was. The most complex I ever implemented was a toy crypto system that I implemented first naively, then again with simple optimizations. In neither case was there a simple way to describe what the code should do because it was inherently complex.


Good thing it was a toy.


Only comment if the code is doing something tricky, or unclear and has been written in such a way for a reason worth noting.

Don't write comments about what the code should do. If you have to write something, comment on the domain/context, not the code.


How about:

The old version of this code is under source-control <here>. The code is very complex and likely to have subtle bugs. If bugs _are_ found, it might be of interest to look at the old code and see what did it do in the relevant situation.


Where is <here> and will it always be there?


What is "always" and do we have to care about always? Sounds like splitting hairs to me.

Even the comment in question admits: "it's true that it will be accessible from the repository".

Only then it goes on to say "but a few years from deletion, people will forget it is there" - well, then just leave a comment reminding them about it.

I see no plausible reason for keeping dead, commented out code in the codebase when you've got VCS.


This doesn't seem all that awful. If the old code is significantly more readable, then perhaps it serves as good documentation about the new code.

It is somewhat surprising that this would last 30 years though. In our huge codebase, barely any lines have survived 8 years. Even less so the comments.


The actual lines themselves may not have changed much in your codebase, but unless you are disciplined about stuff like not introducing whitespace change noise, it's very easy to rototill an entire codebase over a period of years.


Using git add -p helps immensely with this, since it highlights trailing whitespace in red and each change can be approved separately. It takes a few days at most to get used to using the -p flag, and it's worth it.


No matter how I look at this, it is terrible. If this is okay, where do you draw the line? How do I know which commented out code is okay to commit and which isn't? I understand the point of doing it, but there are many other solutions that are all better.


> If this is okay, where do you draw the line?

This argument can be used to ban anything. Where do you draw the line?

Jokes aside, one possible line is "it should make sense after a while of considering alternative solutions".


>How do I know which commented out code is okay to commit and which isn't?

By exercising judgment. Thank God that's still required, else we'd be out of a job ;)


There are tens of thousands of lines commented out in the codebase at my company... When I say "We should delete all of them", they defend it with "it will be good sometime". LOL. That's why we have a 500k LOC codebase and developing a feature takes weeks and months instead of days. I'm not allowed to refactor...


As something of a computational geneticist, I really appreciate this comment, because there are exon sequences in DNA, which are expressed as mRNA and often translated to proteins -- let's say "compiled" -- and intron sequences, which are spliced away and currently unused. However, introns are the stuff of past and future genes and controls, just like lines of code commented out. They are like boxes of stuff in your attic, currently not used, but very handy when conditions change.


It's so much more wonderfully complicated than this. Introns can change the local conformation of DNA, act as enhancers, impact splicing, etc. The genome is an incredibly vibrant machine (ecosystem, really) -- it's nothing at all like linear code. Just because it's non-coding doesn't make it "junk DNA".


Years ago, I encountered a language bug in Perl 5. Adding a comment made the code work; removing it caused bizarre behavior... Sounds like an intron with epigenetic effects to me. ;D


Are there any good breakdowns for laymen you could point to about this behaviour? It sounds fascinating.


The source control system is like an attic (some even use that term). Commenting out code is more like leaving those dusty boxes in the dining room. I suppose one little old box might be ok if it has sentimental value.


Not in the living room. On your desk.

This is one of the things revision control is for, after all. If you are really worried your tools aren't up to the task of finding the old versions, you leave a one line comment with tag and rev.


And just like in life, you pay for it by having to maintain and copy all of that extra genetic mass... much as the programmer pays by having more content to navigate, where most of it is not part of the program code.


I have this argument with people about comments in configuration files on production systems.

Their argument is "it explains the configuration". My argument is that it's more important to be able to quickly see what configuration is actually applied (which usually fits in a terminal window without scrolling) than it is to redundantly store help text on the server (which also makes it hard to read what is enabled).


Comments in configuration files that document the particular value of the setting ("why do we have this set this way") or other such subtleties make sense. Comments in configuration files that document the setting itself definitely don't.

Unfortunately, many distributions ship configuration files in /etc with a pile of commented-out defaults and documentation, on the theory that doing so makes them easy to edit.

Personally, I prefer the current approach of not shipping a file in /etc at all, shipping the defaults in /usr if anywhere, and thus reducing the file in /etc to actual configuration.


Can you not tell from the running system what the currently applied values are? That would address your concern. OTOH, if a nonsensical value without a comment (and, at a minimum, an indication of who made the change, when, and a reference to a bug/case/URL with the full details) is present, you have nothing to work with. Your only option is to change it and see if production breaks, and that is never an option until production is already broken. At which point, you always discover that there are, in fact, multiple configuration values set that make no sense.


An additional point is that most of the time config-dumps from a running program include ALL configuration settings. 95% of the time it's simply not important - what I actually want to know is "what is not set to its default".

Which means I need the config files.

I'm not against ALL comments - i.e. if something has a non-obvious external issue, that's worth noting. But people argue that the very verbose default comments, for example, should be left in - that's pointless, and potentially hazardous.


While I agree useless comments are bad, especially in production, is there any reason why you don't just have a macro in your editor (assuming vim here) to do

  :view
  :g/^\s*#/d
(this sets vim into read-only mode and then deletes all comment lines, assuming # for comments).


Configuration files are the main initial resource for information about the configuration. If there is no reason to suspect that reality mismatches configuration, there's no reason to go down that rabbit hole.

What the current configuration _means_, however, is a different matter. I have worked on systems where the explanatory comments are only accessible in a different system. One more extra hurdle, whether they are helpful or not...


I'd say leave the comments in the config file and instead add a sanity check quick scan output to your application itself. Dev ops surely aren't as knowledgeable about the implications of config changes as the developers and notes about the potential downsides of changes are important.


grep -v "^#" config-file


In general with code, if it needs explaining with comments then it's not written cleanly. Sometimes there are confusing external considerations, like mathematical algorithms or additional constraints - that is what comments are for, imo: explaining things external to the code that are required to understand it.

Comments like "does this" are telling you the name of the function that the following code should be in.

... so unless your scripts/configuration data are written in a very esoteric language or format, I'm inclined to agree with your point very strongly.


And by the time you uncomment it, the rest of the organism/code has mutated so much that the legacy snippet does something totally surprising or even destructive.


Indeed. DNA is the original DVCS.


When I am in such environments, what I always do is add some little cleanup to the features that I am adding. In a code review, it is always easy to justify that I am also removing these 5 lines that surround the code and are not needed anymore. It takes time, but it helps.


it's really hard (and you have to be really brave) to do that without any tests. (Of course the code bases have no tests at all...)


When I am modifying code, the assumption is that I am testing that part of the code. That's the whole idea behind the cleanup I mentioned.


I think IDEs should have a really easy way to see old code. I'm thinking arrow buttons next to each function so you can browse through older versions.

It's true that in theory all your old code is stored in the VCS, but it's such a bear to actually go look at it.


I can imagine an OSX Time Machine style interface to Git that would be quite cool.



Using Git blame with Sublime GitGutter comes close: https://github.com/jisaacks/GitGutter


Git blame?


Does your IDE not?


Not in one or two clicks away in a way that doesn't take my focus away from what I'm looking at.


It's Context menu -> Git -> Show history for function in Jetbrains products.

Gives all the commits with changes inside the function, shows each one side-by-side.


oh wow, very neat. Thanks


That source code looks HORRIBLE. Is it just because of the language, or because the person who wrote it had no other choice?

http://git.savannah.gnu.org/cgit/emacs.git/commit/?id=488759...

It must have been done for a reason, right? Emacs is incredibly old in tech years and solid as a rock.


You may not be familiar with reading and writing C. String parsing in ANSI C in the 1980s looked like this. And it is completely readable and obvious to those who are well-versed in the language and practices of the time.

For example, those naming practices were pretty essential when working with 80-character monitors.

And all those ifdefs were required for dealing with DEC minicomputers and (later) microcomputers. A minicomputer, if you didn't know, was something that in terms of power and cost existed between a mainframe and microcomputer (i.e., a PC).

Breaking out the code into different functions by platform (segmented by ifdef statements) would create unnecessary code bloat unless the implementations differed significantly. And believe it or not, this was much more readable (with the technology of the time) than calling separate functions within a master function, requiring referring to each function separately. Of course, ifdefs for different platforms are still required, and are often used in a similar way today.

I guess what I'm trying to say here is that there were very practical and pragmatic reasons code used to look like this, and that things much more obscure and (to modern eyes) hard to read were written by very smart people who knew exactly what they were doing.


To be fair, though, those smart people routinely wrote exploitable buffer overflows with code just like this (and "smart people" here is not sarcasm--they were smart!) C string handling may have been defensible 30 years ago (though I'm not sure about that), but not today.


Yep, you're absolutely right--but it was a time when the entire world wasn't networked to the degree it is today, and there was little or no motivation for someone thousands of miles away to gain access to your system. Security was simply less of a concern for most programmers, so much so that it often didn't even occur to them. Until the mid-1980s, even US federal law and its enforcement reflected that.

Saying it is indefensible to manipulate strings in C is coming pretty close to saying it is indefensible to use C (and after they went to all that work to put together C11!). For most programmers, perhaps. But it still has its uses, and as of C11 safer standard string manipulation functions, as well. A competent, security-minded C programmer can write perfectly safe string handling code in modern C.

Still, it wouldn't be my first choice. A Jedi may use a lightsaber effectively, but a novice will probably just end up cutting off his leg.
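(For the curious, a minimal sketch of the bounds-checked functions mentioned above; Annex K is optional, so a conforming libc may not ship it, hence the fallback:)

  #define __STDC_WANT_LIB_EXT1__ 1
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char buf[16];
  #ifdef __STDC_LIB_EXT1__
      /* C11 Annex K bounds-checked copy: returns nonzero instead of
         overflowing when the source doesn't fit. */
      if (strcpy_s(buf, sizeof buf, "definitely too long for this buffer") != 0)
          puts("copy rejected");
  #else
      /* Portable fallback: snprintf never writes past the buffer and
         always NUL-terminates. */
      snprintf(buf, sizeof buf, "%s", "definitely too long for this buffer");
  #endif
      return 0;
  }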


It wasn't defensible 30 years ago, either.

I say that - may be I led a charmed life. Sigh.


You say that as if we don't have people writing buffer overflows and other exploits in C today.


Thank you for the insightful comment. Too often I have worked with developers that assume the original programmer was being difficult, didn't understand how to write proper code, or was inexperienced. They proceed to rip out the code and re-write it "their" way.

When I come across code I don't understand, I base my assumptions on the belief that the original programmer was smart, knew what they were doing, and had specific reasons for writing the code as such.


String manipulation in C: not even once. It always ends up looking like this. This contains some niceties like `p[-1]` as well, which is perfectly valid but upsets people trying to check whether it's guaranteed to stay in the buffer.
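For anyone wondering what that looks like, a tiny, simplified sketch in the same spirit (not the actual expand-file-name code); p[-1] is only read once p has moved past the start of the buffer:

  /* Return the start of the "real" name: a '/' immediately following
     another '/' (as in "xxx//yyy") restarts the path. */
  const char *restart_after_double_slash(const char *nm)
  {
      const char *p;
      for (p = nm; *p; p++) {
          if (p > nm && *p == '/' && p[-1] == '/')
              nm = p;          /* p[-1] is in bounds here because p > nm */
      }
      return nm;
  }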

Quite a lot of the #ifdef damage is to deal with VMS.


Eh, the code is not that bad but it's weird that they don't scan from the right instead of the left to find 'xxx//yyy//zzz' (which means ignore 'xxx/' because you began a new path with '/yyy', but ignore '/yyy' also because you began a new path with '/zzz').

Here is the same code in JOE (which has no VMS support and which treats foo/../bar and bar as different files - the original emacs code was written before symlinks!). JOE has a variable length string library, so you see 'vsncpy' instead of the alloca / strcpy / strcat / make_string.

http://sourceforge.net/p/joe-editor/mercurial/ci/default/tre...

Edit: Holy Cow! The new version in emacs is much more involved: https://github.com/emacs-mirror/emacs/blob/master/src/fileio...

"For technical reasons, this function can return correct but non-intuitive results for the root directory; for instance, (expand-file-name ".." "/") returns "/.."... " :-)


> Eh, the code is not that bad but it's weird that they don't scan from the right instead of the left to find 'xxx//yyy//zzz' (which means ignore 'xxx/' because you began a new path with '/yyy', but ignore '/yyy' also because you began a new path with '/zzz').

expand-file-name doesn't do that; (expand-file-name "/foo//bar") produces "/foo/bar", just like any other pathname canonicalization.

I can't help but wonder why emacs doesn't write that function in lisp, rather than C.


You are right, there must be some other function which does that.

Maybe the code predates the lisp interpreter.


Old GNU software is full of code like that. If you want some nightmares, have a look at the source of tar, ar, or other tools. If you have to comment where control goes back to your while(1) in a 200-line function (https://github.com/wertarbyte/tar/blob/ignore-missing/src/li...), that's 100% a code smell.


Actually this code /is/ horrible in many ways (single-letter variable names instead of something descriptive?!?), as is 'the old style' in general - which is why those practices are uncommon or considered 'bad' today.

It can still be solid as a rock and be needlessly difficult to read, written in a poor style. They aren't mutually exclusive things, despite what many may preach.


Pretty much all portable C from that era looks like that.


FWIW, I've seen worse code in closed source, computational codes. Still, I always thought GNU were in a better category than the rest of us.


A short browse through gcc will strongly disabuse you of that notion. (Though some of that code is rotten for weird political reasons.)


It is my understanding that GCC was actually pretty clean until the EGCS fork.


Yes. You can trace much of the shitty decision making to gcc 2.95.


It is my understanding that EGCS was forked to clean GCC.


EGCS was all about generating faster code, adding features, etc.


Oh, god, so awful. All those ifdefs...

Funny: I tried to find the comment commenting it out. I found it: #if 0. Wow.


That's a pretty standard way to comment out a huge chunk of code, because /* */ will get screwed up if there's another comment using the same convention inside, and putting // in front of every single line looks bad/makes copying code around painful.

#if 0 is also a convenient way to flip bits of test/debug code on and off.


That's actually neat! Haven't touched C/C++ beyond university and didn't know about that.


  #if 0 <...> #else <...> #endif
Is a pretty standard way to switch between new/old code while refactoring to check if behavior/output matches. Just change 0 to 1 to switch between "versions".
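A minimal self-contained illustration (functions invented):

  #include <stdio.h>

  static int old_sum(int n) { int s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
  static int new_sum(int n) { return n * (n + 1) / 2; }  /* optimised rewrite */

  int main(void)
  {
  #if 0   /* flip to 1 to run the old version and compare output */
      printf("%d\n", old_sum(100));
  #else
      printf("%d\n", new_sum(100));
  #endif
      return 0;
  }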


Yes, if the function breaks, do we care what some code that was removed 30 years ago did in the same situation?

I mean, let's look at it like this: does the author of the removed code have a vote in what should happen, through the comment? If it's so important, why was it removed?

How do we even know that the new code did something wrong? We do not know that in relation to the old function; we know it in relation to some modern criterion. Maybe it completely bombs (regardless of whether or not the old code bombed), or it returns some result someone doesn't like (without regard for what the old code did), or it violates some documentation.

We can decide today what the code should do, and fix it, or its specification or both.


I'm surprised that, all these years later, we're still using this as a bit of a hack. In the project I'm working on, when we substantially optimise a section of code, we move the un-optimised, believed-good version aside. It becomes an oracle against which the optimised version is tested, on both manually and randomly generated test cases.

I'm not saying that the above solution should have been deployed at the time, but perhaps it could be moved in that direction now that we have these techniques and tools to hand.

Could such a solution work well here? Perhaps it would be even more useful, as tests would be likely to fail if the optimisation was bad, rather than just assisting with debugging after the fact.


My question is - has anyone needed to refer back to the old code in the last 30 years? I can understand wanting to have the old code around for easy reference in case you need to debug the refactored version, but if no bugs have come up in the last 30 years (or even, say, the last 5), it seems pretty safe to say the new code is solid.


Another question: is the old code even valid in any sense compared to the current code? Since the original was written before 1985, then 99.9% of the people running Emacs will be running it on an OS that did not exist in any form in 1985 (Linux, Mac OS X, Windows, Android, etc) and the rest will be running on OSes which have changed substantially or been replaced entirely since then (SunOS->Solaris).


That's the beauty of C -- you don't have to rely on the OS, so code written 30 years ago still works today.


Of course C relies on an OS. The point of an OS is securely mediating access to resources; a C program is going through the OS, using syscalls, in order to do stuff like 'write a file'. Which bears a certain resemblance to what this code is doing...


Let me repeat myself -- "you don't have to". C runs well on bare metal; that's why most bootloaders, kernels and low-level drivers use it. Of course, you can take on as many dependencies as you wish, but unlike high-level languages, C doesn't rely on an OS for its core features.


If you think that's bad, try looking at the LibreOffice source code. I think I found a 15 year old bug the other day [1] [2], but I only noticed this when I was trawling through the history of the file "outdev3.hxx", which via cgit was a pain in its own right.

There have, in fact, been 5 source code repository switches in LibreOffice over the last 30 years. [3] It started with some sort of proprietary version from 1988 to 2000 so there is no history at all from this period (which I think is a great loss...), from 2000 to 2009 they used CVS, then from 2008 to 2009 they used SVN, then from 2009 to 2011 it was Mercurial, then from 2011 to 2014 it was back to SVN, and LibreOffice started using git from the very start.

One of the worst practices that the OpenOffice.org team had was to get a whole bunch of work, bundle it up into a single commit and then note that there were multiple fixes in the commit. This makes it nigh-on impossible to untangle what fixed what. They apparently branched in SVN, then took the branch and merged it, but then blatted the original branch so now this is all lost in history.

And if you've ever read some of the most crufty parts of LibreOffice (cough VCL cough) then you'd understand that actually reading good commits is really very important for moments you think "what on EARTH is that code doing?!?"...

1. https://bugs.documentfoundation.org/show_bug.cgi?id=96826

2. http://randomtechnicalstuff.blogspot.com.au/2016/01/possible...

3. https://archive.fosdem.org/2014/schedule/event/exploring_ope...


If you think this is crazy, never ~ever~ look at what Mainframes are running.


I was just thinking the same thing... We have lines of commented out code from 1988 (yes. that is a correct date) that people flat out refuse to delete. I am so glad I got out of that mess...


In all fairness, I have had to touch somebody else's code on a few occasions where they did not use any kind of VCS, and I was very reluctant to flat out delete code, so I did the same thing described in the linked page - comment out the original code and add an explanation why I replaced or removed it, along with a date.

And when reading somebody else's code, I tend to appreciate if the code carries its history with it, at least to a degree. What I really do not like, though, is when there is a piece of commented-out code with no explanation of why it's there in the first place or why it was commented out.


Committing commented out code means you don't know how to use your version control system. A comment with hash of the commit of the old code, or a command to show an older version of the file using your vcs system, is much better, and more team-friendly.


Committing commented out code means you're trying to communicate with comments in your code. English isn't the only language that's useful in comments, especially when trying to document something of computable logic.

If you have any comments in your code at all, you could make the same argument about "not knowing how to use your version control system" as everything could be in commit logs.

And please define "team-friendly", it is absurd to suggest that the git cli makes anything friendly.


Committing commented out code clutters the codebase, clutters the commit history, degrades the searchability of the commit history, and is usually a big middle finger to your teammates by saying "this thing I use for debugging belongs in the main branch." Not knowing how to efficiently use the Git API is a fault of the committer, not an excuse to clutter the code. A comment with the exact command to run such as "// To see how this blah blah blah, run `git show 049fa0293f:path/to/file.elm`", as suggested, makes this a moot point anyway.


> [commenting the sha] makes this a moot point anyway.

No, it does not. Emacs has gone through 4 version control systems; I can't imagine how they'd refer to it in a comment. As someone who's been through several, I guarantee you that git is not the final frontier of VCS.

Also - if you intend the comment to communicate to your team (as this commit does.. did you read it?), then your other comments also make little logical sense.


> And please define "team-friendly", it is absurd to suggest that the git cli makes anything friendly.

Is this how you develop?

1. Comment out old code. Make a note of why this was removed (and maybe the date).

2. Add new code.

3. Commit.


Is this how you document your code?

1. Write comment about code, and commit it.

2. Delete comment about code, and commit the removal.

I mean hey - your team mates should be able to use a VCS right?

Facetious workflows miss the point; judgment can be used to determine what should be commented out and what shouldn't.

I strongly prefer source code that is well documented, not at some point in the past, but right there at HEAD/master/main/tip/whatever


While that might be true, keeping an old reference implementation around for 30+ years "just in case" you hit some aberrant behaviour or weird edge-case seems extreme.

It seems to me (and I may be wrong) that it would make more sense to either turn the reference implementation into a spec document (and keep that next to the code) or to dub that code the "canonical reference implementation" instead of referencing some ancient version of Emacs.

By referencing the ancient version of Emacs (and the subsequent change to the new code), the comment comes across as an anachronism. It was (seemingly) left there in case there were compatibility issues while moving to the new code. This seems out of place now because if compatibility issues haven't come up in 30+ years, then I doubt that they will now (or that there is much code relying on edge-case quirks from such ancient versions of Emacs).

At what point does the current implementation (and any quirks that deviate from that ancient version) become the reference implementation? In another 30 years?


I have seen that. I have also seen that, but without the third step.

It's the fastest way to get a bunch of comments in your code (as well as code) that are probably meaningless, if not actively bad, for you to store in your brain.


> Committing commented out code means you don't know how to use your version control system.

Read the linked article more carefully.

It's an example of how, sometimes, it's better to keep comments in the code than relying on source control to preserve them.

These situations are rare, but they happen.

People who say "Commented code should always be deleted, period" need to be a bit more open minded about the constraints in which we operate in the real world.


I don't rule out that there may be cases when something like that is justified, but the burden of proof lies on the other party, and having read most comments I remain unconvinced. The "close-minded" crowd is making more sense to me.

Of course people will go to great lengths to justify cutting corners, shortcuts, messiness, etc. - it's always called for by exceptional circumstances :) I just hope my doctor isn't open-minded about washing his hands...


Sure but then you switch Version Control systems later and the hash/vcs command may not mean anything.


Exactly. Emacs has changed VCS at least 4 times since the code was commented out. (pre-RCS (no VCS?) -> RCS -> CVS -> bzr -> git)

I've had codebases with references to bug IDs... from 3 bug trackers ago.


I can't seem to edit the comment, but: change that to 5 times, they used Arch between CVS and bzr.


With a capable vcs you don't need a version number, hash or other id. These things are always fragile.

Just add a comment saying that this is a rewrite of the old part of the code and document any intended functional differences. You can then search through the history for the first appearance of that specific comment. With git this is done with the pickaxe search (git log -S).


I think the issue here is that, as nice as you might find your VCS tools when you write code, the code you are writing - particularly on code bases that are already old - may survive significantly longer than the current hot VCS. If the entire commit history gets squashed when the code base is ported over to a new VCS, then VCS history may not be robust enough to communicate with maintainers in the distant future (that is, potentially a decade or less from now, given the typical half-life of these sorts of tools).


I don't blame the devs 30 years ago for being cautious, but Emacs has now survived 5(?) VCS migrations with all the data intact; the tools are more mature now and it shouldn't be a worry anymore.


Perhaps what we need are reliable tools that will robustly port commits forward from old VCSs to new ones. Does something like that exist? The most common defense of this code seems to stem from a vague sense that VCS history is more ephemeral than committed code. With proper tooling (or better awareness of the appropriate tools if they do in fact exist), this anxiety could potentially be allayed.


Eric S. Raymond wrote a tool called reposurgeon[0] for just this purpose. Coincidentally, he used it to migrate Emacs to Git and clean up the repo, although I'm not sure how much manual work he had to do besides running reposurgeon.

[0] http://www.catb.org/esr/reposurgeon/

[1] http://esr.ibiblio.org/?p=6524


I wasn't around when RCS was a thing, but CVS to SVN and SVN to Git preserved history.


It will automagically scan through all files and replace what looks like a revision number with the correct git hash?


I don't know about the CVS to SVN migration, but when migrating from SVN to Git you can generally keep the original SVN revision numbers in the history, such that you can look up SVN revision numbers like that.


He means replacing comments mentioning revision numbers. No source control migration does this - it would be harmful if done unchecked (revision numbers like 1 would turn into compile errors or very confused comments in the most naive implementation, for example).


Respectfully disagree. If it's not "in your face", you'll forget it's there - or never have known about it at all, if you weren't on the project since the beginning. Also, it won't show up in simple greps. Not saying you should leave every old method around, but for some routines (or parts of routines) it can be very instructive to surface what DIDN'T work.


And including ANY comment in code - as in, mixing code and comments in the same file - means that your workflow doesn't support external documentation keyed to branches, hashes, filenames, and line numbers.

Likewise, naming variables, functions, and classes, and continuously typing out the same name over and over again, means that your language and IDE are not rich enough to contain pronouns, making you type tediously instead of something even slightly more natural.

Why don't we improve languages so that they have pronouns for the stuff you're doing right there? Why don't comments get linked to line numbers in some external layer? And why would anyone commit commented-out code to a repository?

The only answer to all of the above is that we are living in 2015 and not some future year. You work with what you have.


>Why don't we improve languages so that they have pronouns for the stuff you're doing right there?

Interestingly, we do sort of have that - in most CAS interfaces "Ans" can be used to refer to the last calculated value. A lot of programming REPLs can do this sort of thing too.

Another example might perhaps be Bash's $1, $2, $3 etc for positional arguments.
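
For instance, the Python REPL binds the last printed result to the underscore (a tiny illustration of the same idea, nothing Emacs-specific):

    >>> 2 + 3
    5
    >>> _ * 10    # _ refers to the previous result, 5
    50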


We have those "tedious" things because programmers don't like ambiguity.


You might notice the commented code mentions CVS -- even though it's now on Git. If they had done what you're suggesting, it would almost assuredly lead to a dead end for someone trying to look up the code.


Yes, a VCS doesn't give you a guarantee of storing your history (in an accessible way, I mean - "if a tree falls in a forest...") FOREVER, especially if you switch VCS systems along the way.

However, commenting some code out also doesn't mean it won't rot. Methods and procedures around it will be refactored, variables renamed - over a long enough time, in a different landscape, it can become incomprehensible and useless anyway.

Just because some abstraction (version control) can be a bit leaky and shouldn't be relied on absolutely doesn't justify succumbing to some other illusion.

Commenting code out is not the same as hibernation, and it doesn't make the code immune to the flow of time. You uncomment it one day and discover it stinks to high heaven - especially if it was complex from the start - and isn't informative at all.

"But that's not the case here!" - well, guess what: version control failing to do what it's supposed to also isn't the case here, if you can still access commits from 1985, as someone pointed out: http://git.savannah.gnu.org/cgit/emacs.git/log/?ofs=124080


There's an exception to every rule, like in this case. It drives me perpetually batty when folks blindly stick to process, best practices, and convention even when it doesn't make sense.


Modern version control systems (Git, Hg, etc.) do a great job of storing file commits. They're very "flat," however. A file is a file is a file, and there's no good mechanism for storing tightly-linked documentation about the evolution of a project. Commented-out code is just one ad hoc way developers try to tack on that kind of information (often in clumsy, likely-to-be-obsoleted-or-forgotten ways).


For all the folks saying the comment itself is bad... please explain why? All I can think is that adding a date and an author might be nice, so the current maintainers can give some thought to whether we care about those old subtle bugs x decades later without digging through 3+ (a guess) version control migrations.


Trying to use this to catch subtle bugs is the worst kind of "the implementation is the spec". If you can't document what it does except in executable code, you don't understand what it does, and you need to go back to requirements gathering and then write some tests for the important cases. Or, better still, write the actual code in a way that's readable and expresses the intent.


Well, the other question is: does it even need to be in the code? Could there be a documentation/spec file somewhere that uses this code as the reference implementation, with just a comment that says "see spec file" next to the code?


After 15+ years writing software, I can guarantee you that keeping 2 files in sync (e.g. the source file linked to the spec file) for more than foo days is not going to happen.


Funny how the Emacs vs. Vim battle ended:

"Vim is the worst-written software in existence today. So we deleted a ton of it and rewrote it as Neovim"... Waiting on neoemacs.


Emacs has pretty decent code quality. Contributing is also still doable, unlike with VIM.

NeoVIM is mainly about the personal cost of contributing to VIM, protection of ancient platforms, etc.

Emacs doesn't have these problems for a contributor.


> Waiting on neoemacs

You mean Guile Emacs: http://www.emacswiki.org/emacs/GuileEmacs


There is an editor, Zile I think, that started out as a lightweight Emacs clone but was eventually rewritten to be to Lua what GNU Emacs/XEmacs is to Lisp.

I am not sure how far along they are, but I like the idea, because Lua is a really sweet language for this kind of task.


Why would you leave old code commented out when you can easily look at earlier versions of the code with your source code manager?


It's true that it will be accessible from the repository, but a few years from deletion, people will forget it is there.

I think that they should've put this right there in the comment, really.


If the comment says where to find it in the repository, when you change repositories you have to remember to change the comment. If enough time has passed since you changed repositories, that becomes a problem.

Emacs is an old codebase. It predates modern repositories, or close to it, and it certainly predates the repository it currently uses.


You don't need to update the comment. You just need to make sure that when you change repositories you preserve the mapping of old commit IDs to new commit IDs. git-svn does this by including the SVN revision in the git commit message.
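
As an illustration, a commit imported with git-svn typically ends with a trailer along these lines (the URL, revision number, and repository UUID below are invented):

    Fix off-by-one error in buffer resize

    git-svn-id: https://svn.example.org/repo/trunk@1234 f3b2a1c0-1234-4f6a-9abc-def012345678

That trailer is what lets you map an old SVN revision back to the corresponding Git commit.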


Ahh, but does git-svn-cvs-rcs do this? It's worth remembering that there were still 15 years to go from when this code was commented out to when SVN was invented.

And for that matter, does burguggle-git-svn keep SVN revisions traceable? Or are we assuming Git is the pinnacle of software revision control and Emacs will never, ever change again?


The comment itself addresses that: it says people will forget that it exists.


The GP's point still stands. Just because the comment offers an explanation doesn't mean it's right.


I feel like there is a big difference between "right" and practical. The world isn't black and white; the authors have clearly had problems when the comment was removed, so it's probably better to just keep it there.

I personally hate when commented-out code is checked in to source, but put in a situation where the "right" thing increases the likelihood of human error, I'll pick the "wrong" way every time.


How about a smaller comment that says something like:

    /* Here's the deal...Check out what this used to
    look like at http://example.com. */


Except in this case, when the comment was written http hadn't been invented yet :)


That just makes it easier to skip reading.


We have a winner of this discussion!


Putting this into context: in the 1980s and 1990s, though source version control existed, most source was downloaded via FTP and built directly. Those source files didn't necessarily contain historic revision information, so keeping the commented code there passes along the context even when version control is not present.


Per the article, the comment wasn't added until 2001.


Not exactly.

Per the article, the comment in the comment was added in 2001, when the comment was recreated after someone else deleted it.

The original commented-out version predates the first public release back in the mid-'80s.


That's the comment saying why not to remove the commented-out code. The commented-out code itself was presumably there in 1985.


From the linked article:

    Don't remove this code: it's true that it will be accessible from the repository, but a few years from deletion, people will forget it is there.


Eh, I've seen some pretty long and ugly commit histories. Finding the useful snapshot in there might be tricky on a very old project, plus the added problem of not knowing to look for it (how would you know which old version was the "simple, good" old version?).

Usually leaving commented-out code in source control is bad, but in this case it's good.


Because you can more easily look at it by pressing M-v/C-v a couple of times. Anyway, in this case, it isn't earlier versions: it's one specific earlier version.

From the way people carry on when they see commented-out code like this, you'd get the impression that people actually paid attention to what was in comments! But if only that were so...


I do. On the particular codebase I'm working on, the amount of commented-out code is ridiculous and distracts me from the actual code and the real comments.



Maybe a long time ago this was a sensible function to have... but I wonder why this is here at all? Myself from the 1990s would wonder as well. File I/O functions in the C standard library make this a pointless exercise today...

Emacs: an anachronism made of other anachronisms...



