“As it is assumed that there is an even distribution of bugs through all software, it is safe to consider any piece of software to be bug free once a certain number of bugs have been found”
Lol genius. Anyone else ever do that thing where you write some good code and it works the first time you test it, but somehow that makes you feel less confident. Other times you write some crap code and it has several immediately found bugs but weirdly you emotionally feel more confident in that code because you “fixed” the bugs.
> Anyone else ever do that thing where you write some good code and it works the first time you test it, but somehow that makes you feel less confident
I did this yesterday. I was writing a variation on a function to draw some stuff in the terminal. When I ran it, it worked fine, no crash or anything like that.
Turns out I wasn't running the new version of the function.
When I was a larval programmer, I collected the different types of bugs I found as I debugged my code. This category was my favourite.
I once progressively commented out more and more of a file, trying to find the bit with a bug, until the whole thing was commented out, no source was being compiled, and the original buggy behavior was still there. Only THEN did I figure out I was editing a different file than I was compiling.
Echoes of this original sin have shown up repeatedly throughout my life, to the point where it's one of the first things I check when my corrective measure has no effect on the problematic behavior.
Fun fact: the Unix make utility was invented as a result of exactly this kind of incident.
> Make originated with a visit from Steve Johnson (author of yacc, etc.), storming into my office, cursing the Fates that had caused him to waste a morning debugging a correct program (bug had been fixed, file hadn't been compiled, cc *.o was therefore unaffected). As I had spent a part of the previous evening coping with the same disaster on a project I was working on, the idea of a tool to solve it came up. - Stuart Feldman
Oh the irony. Makefiles have been the number one source of this very issue, usually due to badly specified transitive dependencies: rebuilding .o files on changes to .c files but forgetting completely about changes to .h files, and so on. Removed .c files leaving stale .o files that still get linked in is another favorite.
But yeah, I guess the predecessor it replaced must have been even worse. Today we can have higher expectations.
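For anyone bitten by the missing-header-dependency problem above, here's a minimal sketch (assuming GNU make and gcc; the target name `prog` is made up) that has the compiler emit the .h dependencies itself, so editing a header actually triggers a rebuild:

```make
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes a .d file listing every header each .c actually included;
# -MP adds phony targets so deleting a header doesn't break the build.
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# Pull the generated .d files in; the leading '-' ignores them on a fresh build.
-include $(OBJS:.o=.d)
```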
My favorite solution is Tup, which uses FUSE to observe exactly which files each command is reading. That way it's impossible to write a Tupfile for which incremental builds don't work properly.
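For the curious, a Tupfile for the classic compile-and-link case looks roughly like this (a sketch from memory of Tup's rule syntax, where %f is the input, %o the output, and %B the basename; Tup discovers the header dependencies itself by watching what gcc opens):

```
: foreach *.c |> gcc -c %f -o %o |> %B.o
: *.o |> gcc %f -o %o |> prog
```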
I also remember changing the makefile, mindlessly running 'make' without looking at the output, then running the program and wondering why it wasn't behaving differently.
I've been hit enough times with this bug, now it's one of the first things I test. Just printing something in order to check I'm editing the right file, even before trying to fix the actual bug!
Getting control means making the system do something of your choosing. It’s not the thing you really want (until you’re done), but it’s something you decided. In your case, printing.
Extending control means making it do something else you want. This should be a small step. It's surprising how few small steps you need to get a pretty complex system going. Here you might print from the function you're fixing.
Sometimes when you go to extend control, the system won’t do what you want. You’re surprised. This means you’ve lost control. The good thing is because you took a small step (right?) the problem is now in a narrow space.
Get control again. Either go back to what you did before, or take the smallest possible step forward from there.
You will spend all your time with a system you either can control or nearly can. This way of debugging is slower than how most people do it in the median case, but much faster in the mean. Debugging time is dominated by the worst cases, and this really cuts down on those. Especially the ones that make you feel dumb.
A critical thing is to not just “check” but “do”. If you have a computer that won’t power up, you don’t just check that it’s plugged in; that’s not controlling anything. You plug in another computer or an electric heater.
This way of working is rather peaceful. Much less frustrating than how most people do it. The most frustrating part is when someone asks for your help and tells you not to bother checking that thing, they know it works. And that part, and that part, and that part. But somehow this system they have complete confidence in as parts doesn’t do what they want in the whole. Best phrase for this situation:
Not really a bug though. That's operator error. This week I've seen multiple people stumbling over that, and it's gotten to the point I question whether people understand filesystems anymore.
Half the reason for them is to provide an abstraction that maps a spacelike addressable quality to data to be operated on. It's an extension of the more general concept of namespaces. What's the difference between data HERE and data THERE? A different path.
Even with stuff like git or other VCS's where you fudge the spacelike mapping with an extra layer of time-like addressing...
This is the content HERE from 3 revisions ago vs. the content HERE now vs. the content over THERE, either now or 3 revisions ago...
I have to train people to break them of the habit/hubris of thinking they are "smarter" or more "accurate" at computation than the computer is. If you take the meaning of compute to be "creatively navigating to a solution in a problem space", humans win hands down.
If you define it as reliably doing the same thing over and over again, give up. The silicon has you beat. Always check your assumptions against what the machine is actually doing. Checklist if necessary. That puts you on a footing where you can actually outdo the computer in both realms.
Or don't, and be reminded once again of why being human is the most pitiable form of cognitive logic, where it takes hard work just to ensure a consistent response to similar classes of stimulus over time, and where remembering and recalling things is totally non-trivial. In the quest for staying alive in a dynamic world doing its best to kill us, our brain does great. In a world in which it'd be nice not to be reminded by my equipment of my own idiocy on a regular basis, not so much.
You're right, it's a meta-bug. It's a bug in your debugging process.
You think you have a problem.
You change the wrong file to find out.
Now you have two problems.
And if you're lucky, it won't take you fifteen minutes to figure that out.
A programmer's view of code isn't exactly one-to-one with a filesystem, though, so I don't think it's that programmers have trouble understanding filesystems. Instead, I think many languages push us into particular paradigms of code organization, and the filesystem ends up reflecting the programmer's mental model of the application within the confines of the language. Often in those paradigms there are layers to a single feature; for instance, it may be implemented via a hook that calls a controller that calls a service. Those layers exist to group code and to allow for future development, but in large codebases they can also make it confusing to know what you're actually touching.
This class of issue is so common in my personal experience that I very often intentionally introduce some sort of panic/assertion in the place I want to fix, just to be 100% sure that a) I'm editing the right source and b) the tooling (compiler, CI, ...) is actually doing what I expect.
Most of the time it's unnecessary, but when it catches such issues it saves countless hours of frustration.
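A minimal sketch of that deliberate-panic trick (the function name render_frame is invented for illustration): step zero of any fix is planting an assertion that MUST fire on the next run.

```python
def render_frame(buffer):
    # Step 0 of the fix: a sentinel that has to fire before anything else.
    raise AssertionError("SENTINEL: you are editing the file that actually runs")
    # ... the real fix goes here, added only after the sentinel has fired ...

try:
    render_frame([])
except AssertionError as e:
    # Seeing the sentinel proves the edit -> build -> run loop is honest.
    print(e)
```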
I had an nginx "bug" where it was serving an old version of a file. I deleted the file and hard-refreshed, and it still served it up happily. Turns out I had static gzip compression enabled and had never deleted the corresponding .gz file.
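For context, this is the (real) gzip_static directive from nginx's ngx_http_gzip_static_module; the location path here is just an example. With it on, a precompressed foo.js.gz silently shadows foo.js, so deleting or updating the original alone isn't enough:

```nginx
location /static/ {
    # Serve a precompressed .gz file instead of compressing on the fly.
    # Footgun: the .gz must be deleted/regenerated whenever the original changes.
    gzip_static on;
}
```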
I do that all the time. And I just did the reverse: I tried to track down why a bugfix wasn't working, only to realize my user was running a version that predated the fix. D'oh.
The best refactor is the one you accidentally commit using the smooth, virtuoso commit command only for masters of the art, "git reset --hard". Fingers have a mind of their own sometimes.
(Next was something to snapshot the VM, followed by "apt-get install ext4magic". It worked!)
I really wish it would warn you with a y/n prompt before doing this. Same with `git checkout` when overwriting unsaved files. You should need a `--force` option to bypass the prompt. Or it could simply refuse to make the change without `--force`.
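Git doesn't offer that prompt, but it does ship a cautious cousin of --hard: `git reset --keep`, which aborts instead of destroying uncommitted edits to files the reset would touch. A self-contained demo (assumes git is installed; everything happens in a throwaway temp dir):

```shell
set -eu
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > f && git add f && git commit -qm one
echo v2 > f && git add f && git commit -qm two
echo uncommitted > f            # unsaved work that --hard would vaporize
if git reset --keep HEAD~1 2>/dev/null; then
    echo "reset went through"
else
    echo "reset refused; uncommitted edit survives: $(cat f)"
fi
```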
This happens to me all the time. “Dammit, I thought I fixed that damn stupid bug an hour ago” before realising I effectively hit “run” instead of “build and run”.
> before realising I effectively hit "run" instead of "build and run".
This used to happen to me all the time too. The solution I ended up going with was to standardize all my build scripts on `./build run` as the way to (build and) run the program, with the actual binary getting shoved in ./bin/ or something to discourage running it directly (which also reduces clutter). Basically, get rid of the "run without building" command entirely.
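A sketch of that `./build run` convention (the "compile" step here is just a copy so the script is self-contained; a real project would invoke cc/cargo/go build at the marked line):

```shell
#!/bin/sh
set -eu
mkdir -p src bin
[ -f src/hello.sh ] || printf '#!/bin/sh\necho "built and run"\n' > src/hello.sh

cp src/hello.sh bin/hello    # <- real compile step goes here; it ALWAYS runs
chmod +x bin/hello

# './build run' builds and then runs; there is no run-without-building mode
# to hit by accident.
if [ "${1:-}" = run ]; then
    exec bin/hello
fi
```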
I think the Red-Green-Refactor loop is pretty close. I learned TDD via obeythetestinggoat.com and it really instilled in me the urge to see a test fail before I trust it when it passes.
Sometimes I forget to check that my test fails the first time, so I’ll go back and deliberately break the function under test just to see it fail.
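A sketch of that "see it fail before you trust it" step: sabotage the unit under test, confirm the test can go red, then restore it and confirm green. slugify and its test are invented names for illustration.

```python
def slugify(title):
    return title.strip().lower().replace(" ", "-")

def test_slugify():
    assert slugify("  Hello World ") == "hello-world"

# Red: a test that cannot fail proves nothing, so force a failure once.
real_slugify, slugify = slugify, lambda title: title
went_red = False
try:
    test_slugify()
except AssertionError:
    went_red = True
assert went_red, "test never failed -- is it exercising the right code?"

# Green: restore the real implementation; the same test now passes.
slugify = real_slugify
test_slugify()
```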
>> Anyone else ever do that thing where you write some good code and it works the first time you test it, but somehow that makes you feel less confident
> I did this yesterday. I was writing a variation on a function to draw some stuff in the terminal. When I ran it, it worked fine, no crash or anything like that.
No, the exact opposite for me: code that works on first run is almost always actually bug free; if I find one bug in a piece of code, then I almost always find more eventually.
> Anyone else ever do that thing where you write some good code and it works the first time you test it, but somehow that makes you feel less confident. Other times you write some crap code and it has several immediately found bugs but weirdly you emotionally feel more confident in that code because you “fixed” the bugs.
Ever since I started adopting TDD, FP style, and "Railway-oriented programming", and always running static analyzers as part of the build process, far more often than not my code not only works correctly on first run but has stopped getting bug reports. It felt weird at first, but now it's weird to see my code not work on the first run.
April Fools' joke: we'll pretend that bugs in software are distributed as a homogeneous Poisson process, AND that Poisson distributions are bounded, while we're at it.
Bugs are like mice: when you see one there will be tens; when you see tens there will be hundreds. But if you see none, there really might be none. Though that's pretty rare; usually it just means you have friendly, well-formatted input.
Well, although this RFC is probably an April 1st joke, there is some truth in the first part: assuming we are talking about a state-of-the-art open source project, where many people can actually inspect the code and are interested in doing so, then as bugs per LoC are found, the chances of finding more bugs in the future decrease, provided you don't add new ones. This can be seen in projects considered "mature", where no new features are being implemented.
Of course the second part is ridiculous: unless we are talking about a system formally proven to be bug free, which has enormous limitations, there always will be bugs.
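The "decreasing chances" intuition can even be quantified. A hedged sketch of the classic capture-recapture (Lincoln-Petersen) estimator, sometimes borrowed from ecology to guess the total defect count from two independent reviews of the same code:

```python
def estimate_total_defects(found_by_a, found_by_b):
    """Lincoln-Petersen: total ~= |A| * |B| / |A intersect B|.

    Each argument is a collection of bug identifiers found by one
    independent reviewer. A large overlap suggests few bugs remain;
    a small overlap suggests many were missed by both.
    """
    a, b = set(found_by_a), set(found_by_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no overlap: samples too small to estimate")
    return len(a) * len(b) // overlap

# Reviewer A found 10 bugs, reviewer B found 8, with 4 in common:
print(estimate_total_defects(range(10), range(6, 14)))  # -> 20
```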
There have been a few times when I wrote complicated Rust code involving mutable borrows moving around lots of scopes, expecting the compiler will yell at me and I'll have to rewrite it a lot, but it compiled fine the first try. It makes me suspicious every time that I've done something wrong.
Blockchain can help with this. We can record all defects on the blockchain and have tamper proof evidence of those who consistently fail to adhere to this RFC. It will encourage developers to write bug proof code everywhere. Employers can further encourage developers to stop writing bugs by docking the gas fees from their paycheck. This works doubly so on devs who are blockchain "non-believers" because they desperately will not want to contribute to the heat death of our planet. (Those people clearly don't get the bigger picture. The colossal waste of energy will obviously be offset by the fact that code throughout the world will no longer have bugs and people will be more efficient.)
This seems like it could be one of the best things to happen to the software industry in decades, and it has no possible chance of unintended consequences.
TBH it had me going. "Well, I guess we finally need to define this in an RFC." "Maybe this will help against TLA spies hiding backdoors in critical infrastructure." "Surely automatic theorem proving could help with this." And "Oh..."
I have had a terribly humorless day. The dryness made me chuckle in some deep hidden place.
This, of course, is achieved by writing the software in Rust.
Writing software in Rust avoids, not only memory corruption type bugs, but indeed it is impossible to implement any sort of defect, as long as you write your software in Rust.
> Or "write it" in Java, there already is a framework to solve your very problem, all you need to do is to configure it in X.. ehm, YAML.
Nah, modern Java still needs configuration in XML, YAML, _and_ environment variables set in shell scripts somewhere. And maybe stored in your LDAP server too ;)
Alternatively write your software in Go. Go is not only memory safe, it's also easy. So easy that it's basically impossible to write bugs in any Go program.
Alternatively write your software in Java. Java is not only memory safe, it's also approved by your corporate overlords. So authorized that it's basically impossible to write bugs in any Java program, except for ones like Mad Gadget vulnerability and Log4j.
This of course is achieved by writing software in a safe language. It would be nice if Rust would someday join this group; its bug tracker speaks otherwise, and its docs and specs ditto.
Software defects are not harmful. Backwards long jumps are just a bounds-checking error, and without them, it would take over five minutes to beat Super Mario 64. The A-Button Challenge wouldn't be possible at all if it weren't for bugs. Bugs are an important part of the software ecosystem and should be included whenever necessary.
Yes! A speedrunner. Bugs are an endless source of amusement, except when it's ones like Pokemon Yellow https://youtu.be/RswDekKF3TQ?t=68 or Zelda: Link to the Past https://youtu.be/6s0L0kkFwd8?t=166. With Yellow it's kind of interesting because it's not so much a speedrun as him reprogramming raw memory, probably using a carefully designed sequence of joystick movements. I heard airline ticket saleswomen used to do that with Sabre terminals back in the day. If you have a TAS, you can even exploit the video game so hard that it gets replaced with another video game, like in Mario 3 https://youtu.be/oWbwmxVpqVI?t=163
I think the same argument that is used against "GOTO considered harmful" applies here - deep down, at the machine level, every single decision that is made by the code is simply just a bug.
Pretending that "higher level code" avoids the bug because you don't see it is just tricking yourself. At the base level, code is just 0, 1, and bug.
From my quotes file: "Every program has at least one bug and can be shortened by at least one instruction---from which, by induction, one can deduce that every program can be reduced to one instruction which doesn't work." Proof: http://en.wikipedia.org/wiki/IEFBR14
“There are two ways to write code: write code so simple there are obviously no bugs in it, or write code so complex that there are no obvious bugs in it.”
Can the publication of a document stating the very obvious be considered a sort of light joke? Is this an April Fools? This seems like the ideal sort of April Fools to be put in an RFC:
* Harmless
* Technically correct
* Hypothetically I guess it might be useful to have a specific rule related to something obvious if you are arguing with a real super-pedant.
If you extend the protocol to support SD-cards instead of rolls of paper as the data medium, and appropriate frame and packet sizes, you can get phenomenal bandwidth even if latency and packet drop are still not great.
“As an example of the detrimental effects of bugs in physically hard to reach systems: the [NASA] Deep Impact spacecraft [DEEPIMPACT] was rendered inoperable due to a fault in the fault-protection software, which in turn triggered endless computer reboots. Mission control was unable to recover the system from this error condition because no engineers were available on-site. The commute was deemed infeasible due to a lack of reasonably priced commercial transport options in that region of the solar system.”
I think this means that software bugs will be acceptable once there is low cost commercial transportation to every part of the solar system? Has a cost/benefit analysis been done to determine which option is cheaper, eliminating all software bugs versus introducing low cost commercial transportation to every part of the solar system? Is it possible that actively promoting the creation of bugs, while funding remediation efforts for those bugs, including all efforts but especially including low cost commercial transportation to every part of the solar system, would have side effects that would actually pay for the remediation efforts?
Plainly, this document is the response to my complaint that my ability to make money from my art (poetry, music, etc) through Patreon supporters was improperly removed, similar to the closure of my Stripe account.
The document is written as though it's just a joke like, haha, of course all software is going to have bugs, it's just a bug bro.
This is false. Software generally doesn't have these kinds of bugs by the time it ships and gets to be the top service in its category.
I am unhappy to read such a glib dismissal of serious, repeated issues that impact my ability to put food in my mouth. 0 out of 10.
> Plainly, this document is the response to my complaint that my ability to make money from my art (poetry, music, etc) through Patreon supporters was improperly removed, similar to the closure of my Stripe account.
I'm not sure how you arrive at this conclusion seeing as how none of the authors work for Patreon or Stripe.
We're laughing at this because it's obvious satire and tongue-in-cheek, and we're laughing at ourselves. No one here that's a programmer strives to produce buggy code, quite the opposite. Frankly, your attack on programmers at large is quite offensive and unnecessary. We're fallible human beings, the same as you (unless you're a bot, but I'm assuming you're a person).
I've personally been woken up overnight 3 times this week to address defects in code I support. Do you think I want that, enjoy it, or celebrate it? Of course not. My full time job is to make the software I work on better: more reliable, faster and use less computing resources to do the work.
> I am unhappy to read such a glib dismissal of serious, repeated issues that impact my ability to put food in my mouth. 0 out of 10.
> We're laughing at this because it's obvious satire and tongue-in-cheek, and we're laughing at ourselves. No one here that's a programmer strives to produce buggy code, quite the opposite. Frankly, your attack on programmers at large is quite offensive and unnecessary. We're fallible human beings, the same as you (unless you're a bot, but I'm assuming you're a person).
I think logicallee is a person trying to be satirical. But from your comment and its sibling, I guess that wasn’t obvious. Poe’s law strikes again (and now I’m wondering if I could be missing satire in your comment… I need more sleep).