My all-time favorite bug was the infamous "Bumblebee" commit of 2011[1]. IIRC it was the first GitHub commit to go viral. Make sure you have 30 minutes to read the thread. It is gold.
I have written variants of "GIANT BUG... causing /usr to be deleted... so sorry...." into commit messages over the years. Such a classic part of history.
Hah, that is a good one (and earns sympathetic winces). Though my continuing "favorite" (more amusing in hindsight) along similar lines was when Apple did that, back in 2001, with iTunes 1? iTunes 2? Or maybe it was one of the Mac OS X 10.1 updates, because it was the end of 2001 and I think we'd already had 10.1 by that point; I moved to 10.0 from the Public Beta right away, and 10.1 came out real fast after that. Edit: actually, yeah, it was iTunes 2.0, end of October/November; I did have that saved somewhere.
At any rate, lots of people at Apple back then were brand new to Unix, NeXT was still integrating, everything was still coming together. And they made one of the absolute most classic Unix newbie whoops moments: they wanted to clean up old versions of iTunes, so they used rm -rf... without quoting the path. IIRC the installer had this:
rm -rf $2Applications/iTunes.app 2
with $2 as the path. But of course classic Mac users were used to having spaces in drive names and folders and so on. If you only had the startup drive, no problem. But if you'd partitioned, or had an external drive, and it had a space in the name, i.e. "Disk 1", then that'd become rm -rf Disk 1/Applications/iTunes.app 2 and you were off to the races. There were some fun discussion threads about it, although unfortunately the only Apple Discussions bookmarks I have saved from back then all seem to be dead. Not sure if they're still archived somewhere and the links just no longer redirect, or it was all cleared out at some point.
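The failure mode is plain word splitting on the unquoted variable. It's easy to see with Python's shlex, which follows shell splitting rules (the drive name here is just the hypothetical one from above):

```python
import shlex

# An unquoted $2 containing "Disk 1" makes the shell split the path in two,
# so rm sees "Disk" (the whole drive) as its own argument.
unquoted = shlex.split('rm -rf Disk 1/Applications/iTunes.app')
print(unquoted)
# -> ['rm', '-rf', 'Disk', '1/Applications/iTunes.app']

# Quoting keeps the path intact as a single argument:
quoted = shlex.split('rm -rf "Disk 1/Applications/iTunes.app"')
print(quoted)
# -> ['rm', '-rf', 'Disk 1/Applications/iTunes.app']
```

Which is why the standing advice is to always write "$2" in shell scripts, never bare $2.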
They got that pulled real quick too, but I always secretly wondered if the genesis of Time Machine was somewhere around there... backups were pretty rough at that stage of the game. Well, everything about Mac OS X was pretty rough, though exciting too.
Edit 2: Did find an old /. discussion about it that still works. Bit of a blast from the past reading through some of those, both in what has changed and what hasn't:
Whoa, I’d entirely forgotten about this. Hard to believe that almost 20 years have passed. It wiped an external drive filled with MP3s and other media. I still have a slight aversion to any spaces in partition/folder/file names.
Reminds me of the time when Steam could remove your whole home directory under some circumstances. This kind of thing happened pretty often in the past, and people are now actually starting to follow best practices (e.g. adding `set -euxo pipefail` at the top of the script before doing anything else) when writing bash scripts to avoid this kind of accidental deletion. Also, I regard `rm` as taboo when I have to do something as root, and use `mv` to move things to a temporary folder instead. I can then clean up the temp folder as an unprivileged user later.
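The mv-instead-of-rm habit is easy to script. A minimal sketch in Python (the function name and trash location are made up for illustration):

```python
import pathlib
import shutil
import time

def soft_delete(path, trash_dir="/tmp/trash"):
    """Move path into a trash directory instead of deleting it outright."""
    trash = pathlib.Path(trash_dir)
    trash.mkdir(parents=True, exist_ok=True)
    # Timestamp the entry so repeated deletes of the same name don't collide.
    dest = trash / f"{pathlib.Path(path).name}.{time.time_ns()}"
    shutil.move(str(path), str(dest))
    return dest
```

An unprivileged cron job can then empty the trash directory later, which matches the "clean up as a normal user" step above.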
Or NixOS/Guix, which treat the system as immutable (i.e. in Nix pretty much everything but the Nix store at /nix is built after boot anyway). Or macOS, by using a read-only system volume.
The grandparent bug is as much a failing of the grandparent's script as of the OS, which exposes itself as one global mutable namespace.
Removing /usr isn't all that bad in the grand scope of things. /home is where the important bits typically live, so this would be more of an annoyance than an unmitigated disaster.
In an early experience with Go, while debugging a problem, I realized (with strace) that one of the go commands was attempting to write into the system tree (/usr, /etc--I can't remember now). f###, f###ity, f### f### f###!!!
It didn't succeed because I wasn't 'root'. Maybe it was a local misconfiguration in this company. Not sure. But I've never looked at Go the same way again.
This is why in general I am not the first to get an update, unless I've been waiting for a fix for a while.
I also think about how shitty the person must have felt for the screw-up. We all screwed up at some point too. I remember removing a database by accident (it was Dev, luckily not Prod, and I was a junior, 20 years ago), but having that stick the way it does nowadays is not a pleasant thought. I learned my lesson, and as a rule of thumb I always comb through the code a couple of times before I commit.
This reminds me of a bug I ran into in some beta software on Windows many years ago:
1. Create the directory "C:\APPNAME-tmp"
2. Change the current directory to "C:\APPNAME-tmp"
3. Recursively delete everything under the current directory.
This worked fine for thousands of users until I came along -- running Windows 2000, with the "C:\" directory read-only for unprivileged users, running as an unprivileged user. At that point it became:
1. Try, and fail, to create the directory "C:\APPNAME-tmp"
2. Try, and fail, to change the current directory to "C:\APPNAME-tmp"
3. Recursively delete everything under the current directory, which is my home directory.
The authors were very apologetic when I pointed out the bug.
Oh man, I did something similar once. A folder with customer files also had a temp folder that needed cleaning, so I wrote a program that went into the temp folder and deleted everything recursively. For some reason, the temp folder had a folder named ".." (no idea how anyone managed to create that). So I ran the script, and upon hitting the ".." folder, it started deleting regular customer files. Good times...
This has been a big fear for me - it's why, in my scripts, I obsessively check whether I am where I should be after a cd before I try any changes. It has saved my back many times.
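That check is cheap to make explicit. A sketch in Python of the guard described above (function name is my own invention):

```python
import os
import tempfile

def enter_or_die(target):
    """chdir to target and verify we actually landed there before doing
    anything destructive; raise instead of silently staying put."""
    os.chdir(target)  # raises FileNotFoundError if target doesn't exist
    here = os.path.realpath(os.getcwd())
    if here != os.path.realpath(target):
        raise RuntimeError(f"expected to be in {target}, but am in {here}")
    return here

# Only after this succeeds would a cleanup routine start deleting things.
workdir = tempfile.mkdtemp()
print(enter_or_die(workdir))
```

Note that os.chdir already raises on a missing directory, which is exactly the failure the Windows bug upthread ignored; the realpath comparison additionally guards against landing somewhere unexpected via symlinks.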
You can have his fortitude in the face of all your files being deleted if you have a robust, automated backup policy.
One of my professors used to say that he should be able to destroy your laptop, buy you an equivalent new one, and you should be up and running again within a few hours. Hard drives fail all the time and computers get lost/damaged/stolen. Losing your home directory on a computer should be expected, and definitely not the end of the world.
I prefer the raid5 approach, I leave my junk strewn all over a sufficiently large number of computers- odds are I'll still have a recent copy somewhere if my laptop explodes.
You might be interested in Syncthing [1] with the option to keep previous file versions enabled. I use it as one layer in my backup policy by having it on my computers+phone. If I take a picture on my phone, it is synced within minutes to my desktop at home. If I make a document on my laptop, I have a copy on my phone in case something happens to the laptop.
I've been using actual raid5 (via adaptec raid card) for years and very recently had one of my trusty 5TB HGST drives fail (after 3+ yrs of uptime).
Fortunately the rebuild worked, but there are so many horror stories of raid5 rebuilds NOT working it has me contemplating going back to simple mirroring.
If you are going to use hardware raid, please do make sure you have a spare raid controller, same firmware, same model. If not, you will be SOL when your controller dies.
RAID5 these days (with our very large disks) is basically asking for trouble - the odds of a second disk failing during the reconstruction are very high. But I guess you already know that!
I used to run with simple mirrored drives. One time the master drive experienced some corruption and wouldn't boot. In attempting to fix the issue I mixed up the identifiers of each drive and ended up hosing the mirror as well.
Now I use a more sophisticated RAID10 setup and really appreciate the way failed and replaced drives automatically rebuild themselves without my dumb interaction.
Mirrors are generally better because you get shorter resilver times and you don't stress all the disks in your pool when you resilver (which is why a lot of disk failures happen during RAID5 rebuilds -- the rest of the pool's disks are old). The downside is that it uses more disks for the same level of redundancy.
I would also mention it's probably a good idea to use something like ZFS rather than a hardware RAID card. In general, free software RAID is better audited and you don't need specific hardware for the recovery process. But also, ZFS has proper checksumming, so you won't have to worry about all sorts of silent corruption (which RAID blissfully ignores -- most implementations will just recompute the parity in case of a mismatch, which is the wrong thing to do (1-1/n) of the time).
You probably want to bump that to RAID 6 at least. Rebuild times on arrays that big can be long, especially if it takes a while to get a replacement drive. Plus, most RAID controllers can protect against bit rot when in RAID 6, because they can determine if bits get flipped. On RAID 5, you don't have that option.
RAID is not a replacement for backups. In the case with Steam the top level comment mentioned, all files writable by the user were deleted (including mounted drives in /media). RAID might protect you against hardware failure, but you also have to consider software (bugs/hackers/ransomware).
The author of that issue said they had backups, on an external drive... that Steam also wiped.
You can say 'robust' but there are limits of what I think one can reasonably expect from a user. That they had backups at all is not necessarily the norm.
Right, that's why I said robust. But I'd push back against the idea that a normal user shouldn't be expected to have backups that are safe in such an event.
If you want to guard against rogue software (or clumsy fingers in the terminal), you'll probably need to have remote backups. It sounds like copies on the cloud saved this user, and it's not unrealistic to suggest users backup to the cloud (I think many already do with OneDrive/iCloud/Dropbox). If you're a Linux user who likes to tinker, you can set up a Raspberry Pi with a hard drive attached and use restic over SFTP (or any of the other numerous choices).
> I'd push back against the idea that a normal user shouldn't be expected to have backups that are safe in such an event.
That is fair. I was commenting from the perspective of what is rather than what should be. Alongside making software as safe as possible, we should also be encouraging and expecting people to do this.
Cloud is not backup! Rogue software can erase your cloud data too. You can't call Google or Dropbox and ask for last week's version of your cloud files.
Not sure if rsync can do local encryption these days? So I guess 'for the paranoid' (as Tarsnap's tagline goes), Tarsnap with write-only keys might be better.
I used to set the append-only extended file attribute on important files
No one can delete the files afterwards, not even root, without removing the attribute first
It worked well with Mercurial, which is also append only. So I could commit as usually to my Mercurial repository, but it could not be deleted.
Ironically, I stopped doing that, because it messed up my backups. When running backups as root, some tools would add the flag to the backed-up files, but then later backups as non-root could not replace those files with new versions.
Today, with GitHub + Google Drive + Steam or whatever flavors you use, all you should be limited by is download speed. I wipe my hdd every 6 months or so just to get the fresh feeling of no random junk. The biggest chore is downloading all the dev environments for all the programming languages one uses at the same time.
> One of my professors used to say that he should be able to destroy your laptop, buy you an equivalent new one, and you should be up and running again within a few hours.
Like the sibling comment, I switched to a mental mode where a file not backed up doesn't exist. So anything on my machines is indeed in a Nextcloud or Resilio share. Files outside of that might as well be in /tmp.
Secondly, treat all machines as cattle. No customization, unless done programmatically or repeated easily, and absolutely essential.
In practice, I have a tiny dir in a Resilio share from which I bootstrap. It contains some cfg files/dirs, a bashrc, passwords and share keys, and some written notes for customizations that are not possible (or not reliable) to automate. In my notes you will find, for instance, a package list for fresh installs, notes on which Firefox extensions to install, how to reliably configure certain software that only has a GUI (and I test such instructions before I consider the tool ready for use), a zip of my Thunderbird profile, and so on.
I started this way of thinking when my dad accidentally formatted my drive in 2000, and it has been bulletproof since. When I use new software, I do not consider it usable until I have made a note or cfg backup in my 'bootstrap' share. I do not rely on my memory and I do not rely on a particular machine for anything, and it costs barely any effort other than being disciplined about never customizing a machine without recording and testing how to repeat that someday. If that is too much work? Then I will not use that software; apparently it's not worth it.
I use Syncthing [1] and restic [2]. I use restic to backup the important things to several backends, including a home server (Raspberry Pi equivalent) and cloud storage. If you worry about getting your software/operating system up and running in the same way as before, you can use a declarative operating system such as NixOS [3].
I do have everything backed up but I doubt I would exhibit anywhere near that equanimity. I get annoyed when a kernel upgrade leaves me with a low res login screen.
This is an amusing story, but I disagree with not using numeric literals. Most of the time, sure, you don't want to use numeric literals, but, sometimes, you do. I posted my example in another comment here.
As for the other conclusion, I have a story of my own. I once shut down all of my company's asynchronous task processing for about 20 minutes, completely by mistake. Our web servers kept serving, and nobody external would have noticed a difference, but, for 20 minutes, our scrapers stopped scraping, our NLP classifiers stopped classifying, and a bunch of stuff that made us money wasn't happening. Had I not noticed it and fixed it ASAP, we probably would have lost money. Instead, I immediately announced what I had done, got help, and we fixed it.
The real moral of the story is that an honest mistake is nothing to be ashamed of. I make mistakes all the time. Some of them make it into production. Big deal. Everybody does.
There used to be an absolutely fascinating bug with Wine and Diablo II - if you had raw write access to /dev/hda, running Diablo II would nuke your MBR, so on next reboot, you'd be staring at No Operating System Found. [1] is a Gentoo forum post about it, I can't find the Wine bug.
This didn't happen on Windows, just in Wine. It also went away in the 1.10 patch IIRC, so whatever mysterious behavior resulted in that got fixed.
edit: Found it! [2] It was even more recent than I remembered.
My recollection (which has already proven faulty) was that the problem eventually went away with a later patch; it could also be that as most distros don't put users in a "disk" group any more by default, that it's simply gone away as a side effect.
"Threadreader" is a service which aggregates a thread of Twitter posts by a single author (no replies), without the annoyances and repetitive elements of individual tweets. It also integrates all images into the thread, something Twitter itself often fails to do. The textual content is the payload of the original Twitter thread, other than as noted. While it often has the appearance of a blog post, it is in fact the original content, only presented vastly more sanely.
We were a young startup, and we went to a conference... when the site went down and we found someone DoSing us. But... there was little network traffic; the database, however, was loaded beyond any reason. Turns out the image importer from mail had a bug in its error handling, and when a broken email came in, it just kept retrying creating an image entity in the database. Ten thousand broken images before we stopped the party; in 2005, that was enough to crash our little server. We had DoS'd ourselves.
(Trying to keep it anonymous.) A large international IT company kept backups for the company off-site. Consultants from the company were called in, and the orders were eventually restored - I think it took days, but they got there. Orders were delayed and it cost the company a lot of money. No one was fired.
The iTunes 2 installer would delete hard drives in some cases, if the installation destination contained a space character and the portion before the space matched another drive (e.g. "Seagate" and "Seagate 2"). https://www.wired.com/2001/11/glitch-in-itunes-deletes-drive...
There once was an operating system that allowed you to format your entire hard drive when you only intended to format a floppy disk, if you typed format c: instead of format a:. It was called Microsoft DOS 2.x.
That's not that much different from a plausible error on Linux, with a USB drive and a hard drive.
(At least, the man page for my mkfs.vfat doesn't include any options suggesting it protects against formatting already-formatted or mounted filesystems.)
I lost about 5 hours work by getting the order of input/output parameters wrong in a tar command. No warnings, nothing.
"Yes, by all means, use a nonexistent file as input to overwrite an existing file that doesn't even have a .tar extension. That's exactly what I wanted."
Myth was one of my favorite games as a teenager. The dark fantasy atmosphere really captured my imagination, and I still have a little giggle when I think about exploding dwarves. :D
I still like the one where the Ubuntu devs decided that netcat's behavior of keeping the connection open and waiting for the answer when connecting to a port was outrageous, and made it close the connection after sending the request. We used netcat to debug network connectivity. That was the day I learned that I cannot trust any piece of software that is part of Ubuntu. Later the purify-complains bug followed, which was also pretty good. Those guys might have been the Debian folks, though.
Because Ubuntu included proprietary drivers by default, notably for ATi and NVIDIA cards and for Wi-Fi cards, all of which were so tedious to install on Debian (which was one of the most popular Linux distributions at the time).
It also shipped the latest free software from two months prior (GNOME, X) that actually worked, on a 6-month schedule! (Remember Debian releases back in the day? If you wanted your USB flash drives to just work, you'd otherwise be recompiling half of your system, and GNOME with GARNOME or jhbuild.)
And it shipped actual, physical CDs across the world for free. It was 2004!
Then it shipped a free LTS that may not have been as stable as RHEL, but it was definitely fresher and with guaranteed updates (for main) for 5 years (3 on desktops early on). It was 2006.
Oh I laughed so hard!!
Computing has come a loooooong way since the early 80s.
Oh, what a gift for humanity, what a great tool (not the specific word processor, but hey, we've all made mistakes).
We once shipped a product (2011) which would delete a user's account (and all their data) from the website when all they wanted to do was 'unfollow' someone. It was a hilarious MySQL issue: instead of deleting the 'follow' record, it removed the user's record, and thus all their files and entries.
Recursive email: Very early in my career, I made something accidentally recursive.
There's a reporting & data extraction language called Focus (nowadays WebFocus). I was working in a very old version, and had a few nested GOTO statements to form a loop where the final output was a customized email sent for each row of a query returned. I had to make a simple update, but accidentally deleted a line first. I reinserted it... one line off from where it had been. My careful GOTO structure went from a loop, to recursive. Instead of sending an email to each row returned, I sent an email to the first person. Then I sent an email to the second person and the first. Then to the third, and the second, and the first.
The list was about 1,000 recipients... but luckily it usually finished very quickly, and I would monitor it as it went; I noticed the long run time after about 100 iterations and killed it. I then checked the logs to see why it ran so long, traced back my error and fixed it... and sent an apology email to the 100 recipients I had accidentally spammed.
I've been writing code for about as long as him, but because I started with Asm, which makes one become really careful with buffer sizes, doing something like making one thing larger would not be done without carefully looking "down the line" to see if any further changes were required; and they almost certainly were. That's not to say I haven't corrupted files before, but fortunately nothing quite as catastrophic as wiping the disk.
Recently I got a phone call during breakfast: texstudio had scrambled a file when saving, by rearranging blocks of it randomly. The user had triple backups: on the hdd, a usb stick, and another one (Dropbox?). All copies were useless after he saved them at the same time :/
Might it be some memory corruption caused by a wrong pointer somewhere other than the saving code?
I never documented the "quit" command for a MUD client I wrote on VMS back in '89. For years I got emails from users who couldn't figure out how to exit the app.
Well, I was testing something on a clone version of a game and ended up publishing that version to the app store. When I loaded the game on my phone, my progress was gone, all my saves. The clone version had created a new database (thank God without deleting the old one). My game only had like 1k installations, and I found out about my mess quickly enough; I just published a new update with the original database name and nothing was lost.
Hmm, I never knew this was redefined. "Bricked" always quite literally meant your device becomes permanently non-functional, i.e. a brick. What is the new word for this state?
Given that it's almost impossible to do this with modern hardware I don't see what the distinction is?
I'm not aware of any way to make a modern x86 processor self-destruct. Apparently you can make an FPGA fry itself but even then not with good heat conduction away from it.
Not with a general purpose PC no. But I don't doubt that it is possible to brick any of the current gen consoles if you try hard enough. Actually many general purpose laptops probably also can be bricked since the amount of non-standard hardware is increasing every year.
In '87 my first 256 computer had 640k of memory with a hd that was less than 1mb IIRC. It cost nearly 5 thousand dollars including the $700 dot matrix printer.
The IBM PC/XT, the first IBM PC to be delivered with a hard drive, already had a 10mb hard drive in 1983, so 1mb sounds too small for your hard drive. 1mb was roughly the capacity of a floppy disk.
My first PC (in 1991), a no-name luggable with a monochrome plasma display had a 200MB HD. I remember one of my former professors telling me it was nuts to get such a large hard drive, that I'd never fill it up. When I bought its replacement in 1994, the new computer had a 1G hard drive which I split into 200MB partitions with one partition dedicated to a copy of the old computer's hard drive.
I'm going to assume you're not deliberately trying to misunderstand him for the sake of a hot take, because, that would be pretty lame. So instead I'll explain for you and anyone else who doesn't understand him what he meant.
What he is suggesting is that you generally not include literals in your code, and instead, use a constant/variable that can be traced back to a single place in order to make changes more visible and easier to deal with. That he makes exceptions for 1, 0, and -1 is explained by conventions in various languages where those values in context end up as generic and meaningful as any other language keyword and thus would not benefit from being referenced from a constant. That doesn't mean you wouldn't ever end up with constants that are 0, 1 or -1 though, just that you wouldn't assume every random place you might use those values (such as sorting algorithms) justifies a constant placeholder.
The spirit of the advice is mostly correct. However, there are reasons to use numeric literals in code. One is the PRNG example given downthread. Another is one I recently wrote.
I have some code where I needed to calculate what the 1st percentile of a list of numbers was. Since it doesn't make a lot of statistical sense to calculate a 1st percentile of fewer than 100 numbers, I inserted a condition like
if len(numbers) < 100:
    # skip the calculation
and included a comment stating that 100 is explicitly not a magic number here. By that, I meant that you'll never want to change this 100, so, why bother obfuscating it behind a name?
It's perfectly readable, if you understand why the calculation is skipped in the first place. If you don't understand why the calculation is skipped, giving it a name like TOO_FEW_NUMBERS_TO_CALCULATE_1ST_PERCENTILE isn't really going to give you much insight into why the calculation is skipped, anyway.
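In context, a runnable version of that guard looks something like this (the function name and the nearest-rank percentile definition are my own; the original comment only showed the if-statement):

```python
import math

def first_percentile(numbers):
    # The 100 here is explicitly not a magic number: below 100 samples
    # a 1st percentile is statistically meaningless, and that threshold
    # will never change, so a name would add nothing.
    if len(numbers) < 100:
        return None  # skip the calculation
    ordered = sorted(numbers)
    # nearest-rank definition of the 1st percentile
    rank = math.ceil(0.01 * len(ordered))
    return ordered[rank - 1]
```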
You'd name it MINIMUM_NUMBERS_FOR_1ST_PERCENTILE (your name was for the entire if).
To me "if len(numbers) < MINIMUM_NUMBERS_FOR_1ST_PERCENTILE" still reads better and requires me not to need to refer to comments, which I fall back on when I do not immediately get the code.
So maybe the risk is not lowered, but readability is still improved.
(Oh, and I wonder what you return for a list of all equal numbers when there are more than 100? If it's that number, why would suddenly going from 100 to 99 change that? ;)
> I'm going to assume you're not deliberately trying to misunderstand him for the sake of a hot take, because, that would be pretty lame.
I'll admit, my example was overwrought with comedic intent, but I stand firm on this issue. Use a magic number twice? Yeah, go ahead and put that numeric literal somewhere convenient for its consumers and easy for your readers to find (not in a top-level magic_numbers.h), but you still have a numeric literal.
I'm a mathematician. The fear of numbers in code is an affront to the domain that I work in. YMMV. I put a lot of work into documenting my code, but for the love of pete, 2 is not a magic number: my example was no more overwrought than OP's rule.
In some cases, I'd say that 1 can be a magic number. For example, in C (where file descriptor 1 is standard output), it would be better to write
write(STDOUT_FILENO, output_string, len)
than
write(1, output_string, len)
However, 0 and ±1 have a special role in picking out the first/next/previous elements of a sequence (and related things), which is so common that it'd be silly to insist on defining POSSIBLE_NEXT_INDEX and POSSIBLE_PREDECESSOR. That said, people do seem to love Python's itertools, so....maybe that's where we're headed. I had somebody aggressively complain about the readability of the all-pairs for-loop, which I thought was basically standard.
I guess so. I don't think it's atrocious: it looks like iterating over the upper/lower triangular part of a matrix to me, which is fairly common (in some domains, I guess). Plus, I like that the double for-loop indicates slowness.
However, Python has itertools.combinations(X,2) and Julia has IterTools.subsets(X,2) if that's what you want.
Some things only/obviously make sense over pairs of items. For those, go with a literal 2
for x, y in itertools.combinations(obj, 2):
    if is_overlapping(x, y):
        raise OverlapError(x, y)
However, sometimes the subset’s size just happens to be two (but you might change it), maybe a constant would be good. Ditto if you’re doing some math where there are “real” 2s that are part of a formula and incidental ones that are due to the subset size. For example:
subset_size = 2
for subset in itertools.combinations(X, subset_size):
    mse.append(sum((x - target)**2 for x in subset) / subset_size)
is a bit more flexible and more clear than using all 2s, IMO, and at very little cost (YAGNI blah blah, but I think that’s an argument for not making a whole configuration system that lets you set the size at runtime).
Only in my choice of variable name. OP drew a line at 0, 1 and -1. What I did there was highlight that the implications of that rule are absurd. See how your sum of squares also contains a 2? VERBOTEN!!!! And don't you dare re-use "subset_size" ;)
This is somewhat akin to Dijkstra's opinion on goto. Which is actually great advice when you're doing apps in javascript, but doesn't get you very far if you're writing or generating assembly. When such advice is promoted to a taboo, I side with Churchill: this is the type of arrant pedantry up with which I will not put. Or Emerson: foolish consistency is the hobgoblin of little minds.
Yes, too many numeric literals can make code hard to read. But math is hard to read[1] because you need to really think about it -- you can't avoid the complexity; you can only rearrange it. Where you put it is a matter of taste, and absolutes have absurd consequences.
As for the original notion of indexing a triangular array, I have a greater concern:
for i in range(n):
    for j in range(i):
        foo(array[i][j])
this is self-documenting in the sense that I immediately know the relationship between i and j -- without reading documentation, I can't recall if itertools.combinations will give me the upper or lower triangle. In this case, I'd avoid the 2 for entirely different reasons :)
[1] and I don't mean "let's go shopping!" -- I mean that reading a math paper, even for experts, can take days per page.
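For what it's worth, the two spellings visit the same pairs, just with the indices swapped; itertools.combinations(range(n), 2) yields (j, i) with j < i, i.e. the transpose of the loop's (i, j):

```python
import itertools

n = 4
# the triangular double loop from above: pairs (i, j) with j < i
loop_pairs = [(i, j) for i in range(n) for j in range(i)]
# the itertools spelling: pairs in increasing index order
combo_pairs = list(itertools.combinations(range(n), 2))

# swapping each loop pair gives exactly the combinations output
assert sorted((j, i) for (i, j) in loop_pairs) == combo_pairs
print(combo_pairs)  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

So combinations gives you the "upper triangle" in index terms; whether that matters depends on whether foo() is symmetric in its arguments.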
I'm not sure where the misunderstanding is coming from.
Yes, of course if you have a formula that involves dividing by 2 or something you just include the number directly.
But anything that's a magic number, even if only used once, is better to have a named constant defined. The name of the constant becomes the documentation itself.
The other reason is that if someone needs to come in and maintain the code and change a magic number for whatever reason... if it's a defined constant, they know it should only need to be changed in that one place, assuming all magic numbers are defined in a sufficiently wide scope. If it's just a number, they have to search the whole codebase for all instances of that number, and investigate each and every one to see if it needs to be replaced as well or not.
/* From Park and Miller (1988, pg 1195), using their notation */
const int A = 16807;
const int M = 2147483647;
const int q = 127773;
const int r = 2836;
...
instead of "inlining" them into the code like
test = 16807 * (seed % 127773) - 2836 * (seed/127773)
The "ban" on literals obviously exempts their definition. No one, outside a number theory textbook, is defining A as successor(successor(...successor(1)...)).
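A runnable sketch of that generator with the named constants (a Python translation; Schrage's q and r exist to avoid 32-bit overflow, which Python doesn't strictly need, but the structure is the same):

```python
# Constants from Park and Miller (1988), using their notation
A = 16807          # multiplier
M = 2147483647     # modulus, 2**31 - 1
Q = 127773         # M // A, for Schrage's overflow-free form
R = 2836           # M % A

def minstd_next(seed):
    """One step of the Park-Miller 'minimal standard' PRNG."""
    test = A * (seed % Q) - R * (seed // Q)
    return test if test > 0 else test + M

# Schrage's rearrangement agrees with the direct modular multiply
assert minstd_next(1) == (A * 1) % M
assert minstd_next(16807) == (A * 16807) % M
```

The named constants plus the citation comment make the inlined version's 16807/127773/2836 soup much easier to audit against the paper.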
I'm flattered you think there's even a tiny possibility I'd remember the constants or page :-) In reality, I vaguely remembered the authors and part of the title, then guess-and-googled the rest. It's a nice paper: https://dl.acm.org/doi/10.1145/63039.63042 though obviously no longer the state of the art.
I recommend the citation-in-the comments idea though. We do it for lab stuff and it's very helpful for everything from debugging to writing up results.
In case you somehow haven't figured it out by now, he meant in the body of one's code. He's fine with named constants.
Protip: If a smart person says something that seems obviously dumb to you, it's worth trying to find an interpretation that isn't dumb. Doubly so when, as here, it's from a piece that others are clearly finding smart and useful.
Also, you are referring to an orthogonal problem—you would presumably want to put magic constants into a static constant to reduce the chances of disagreement across references. Hell even if there’s a single usage I would expect magic constants to be clearly demarcated and documented.
Edit: didn’t see the sibling comment; didn’t mean to dupe the reply. This certainly refers to in-line use of numeric literals vs. those used in constant definitions.
Obviously not. So since you found a counterexample, we should discount the original advice and feel free to use numeric constants all over our code without any explanation?
According to the thread, there was a size of something, and that size was used in two places. The size was specified as a literal number in both those places. The bug was caused by changing the literal in one place but not the other. One way that could have been avoided is to use a named constant for the size, so that when the literal number was changed, both uses of the name would see the new value.
The reverse is also dangerous: when you have the same numeric value doing different jobs in different parts of the code. If they're literals, you've got no clue whether they should both be changed or not.
The idea is not to replace your constant 2048 with A_2048_NUMBER. But eg. MAX_BLOCK_SIZE in one place, and STACK_SIZE in another, even if they are the same value today.
Of course, if you are repurposing the code to do something new, you need to be extra careful not to use the constant just because "it's the right number". And sure, it sometimes gets tricky, but using literals will not make it any less so.
Does my balanced ternary expansion obviate the need for documentation? No. Drop a reference to the paper/website describing the algorithm you're doing. If it's, say, trig that you're doing, use clear variable names and explain how you derived the formula.
Or you could go the Amazon AWS-for-C route and make a define named MillisecondsPerSecond, and a separate define that sets it to 1000.
Let's call that a "natural constant": while not strictly coming from nature, it is coming from a very familiar and widespread convention.
There are other numbers like that (12, 24, 60 come to mind; 365 has a similar familiarity but has lots of gotchas; I am sure many developers will immediately understand 1024 too).
But that's also mostly due to how they will be used. Contrast and compare:
— a = b * 24
— a = b * HOURS_PER_DAY
— uptime_in_hours = 24 * uptime_in_days
I like the last one best, but if given a choice between the first or the second, I'd prefer to be reading the second.
FWIW, I am not advocating for use of units in variable names at all times — but if you are converting between units, put them in either your variable or constant names.
Basically, I interpret the no-literals rule as a reminder to think about the readability of any statement involving them.
[1] Commit thread: https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/commi...
[2] Issue thread (not as good): https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/issue...
EDIT: Github is timing out trying to load it. May want to use archive.org: https://web.archive.org/web/20130613012555/https://github.co...