The unreasonable effectiveness of print debugging (buttondown.email/geoffreylitt)
349 points by goranmoomin on April 24, 2021 | 354 comments



Whenever this comes up, I think of this quote from The Practice of Programming by Brian W. Kernighan and Rob Pike [0]:

> As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

I found this lines up with my personal experience. I used to lean on interactive debuggers a lot, and still enjoy using them. They're fun and make for good exploring. But the act of figuring out where you want to print really makes you think in ways that interactive debugging cannot. I find the two forms really complement each other.

[0] https://logging.apache.org/log4j/2.x/manual/index.html


There's a part of me that wants to say that that opinion has to be taken with a grain of salt and a lump of paying attention to who is offering it.

Circa 1999, one would assume that Brian Kernighan and Rob Pike are largely drawing experience from working with C, which is a relatively verbose language. Single stepping through C code in a debugger is indeed a laborious process.

If you read accounts from Smalltalk developers, on the other hand, it's clear that they very nearly live their entire lives inside the debugger, and consider it to be an enormous productivity booster.

I would guess that there are several effects going on there. One would be that, in terms of how much effective work it accomplishes, a line of Smalltalk code is generally not equivalent to a line of C code. That has a big impact on just how many steps are involved in single-stepping your way through a region. The other is the nature of the debugger itself. Gdb and Smalltalk debuggers are wildly different pieces of software.


Linus used to be against kernel debugging for the longest time.

The core of his position (as I understand it) was that regularly needing a debugger is a sign that your software has "gotten away from you". You've let the software get to a state where it cannot easily be understood from the architecture and program text alone.

I do think debuggers can be useful when building up comprehension - particularly of other people's software.

It's very time consuming though, and at each "trace" you're only seeing one path through the program. Good architecture and documentation lets you understand all the possible paths at once.


I hold the very same opinion: interactive debugging in general should be a rare need.

If it's being used too often then it points to the fact that the software has to be run in order to understand it. Its representation is not sufficient to convey its runtime behaviour.

Also, I would like to point out that dynamic languages in general require more debugging than statically typed ones, since one can't be certain of the data flow within functions.


I usually find that debuggers are more useful in dynamic languages. As an example, in Python many bugs are due to an incorrect structure being passed into a method because the method or input is underspecified/abused.

In such a case it's difficult to decide what to print, or more particularly how to print it, if its type/shape is unknown.


I find that it's not specifically the kind of programming language, so much as the way the data is structured. Passing around big generic heterogeneous data structures makes it harder to just reason about state. That's a lot more common in dynamic languages, but I also see it happen plenty in, e.g., enterprise Java code.

In those situations, it's often just so much easier to set a breakpoint and take a peek than it is to waste brain cycles on thinking about what exactly you should even be printing in the first place.


I’ve felt the same way about IDEs in general.

We need more tooling to help people understand and mitigate necessary complexity, not tools that help one muddle through or — I shudder to think — extend complexity.

I’ve changed my mind on this recently only because some IDEs have indeed become good at the latter.


Once you traipse into concurrency land, too, debuggers get much more tricky. You now have to consider the state of the thread you are in, plus all the others.


And whether the act of debugging has changed how everything works


Agreed. The only time I've found a debugger useful is when the print statements aren't immediately giving clarity, and cognitive dissonance is settling in.


I think it's the same as cognitive dissonance, but I always find debuggers help when my code appears to be doing something impossible, like I’ll be thinking "how could if(true) ever fail?" and then remember that this whole function isn’t even getting called and I had fixated on the wrong place. Debuggers will sort it out faster than a bunch of print statements.


I guess you could view it that way although I prefer to think of what Smalltalk developers do as living inside a REPL. And that is indeed something I can relate to. I program Julia and I basically stay in a REPL all day. But I don't regard that as the same as using a debugger. Like a Smalltalk developer, I evaluate specific functions/methods with particular values. I don't step through code. I am pretty sure Smalltalk developers don't step through code a lot.

Rather they do live changes of their code, and then evaluate various objects to verify that things work as expected. That's how I seem to remember working in Smalltalk many years ago.

I am not a fan of debuggers, although I do use the REPL a lot. I suppose like Rob Pike, I used them only for very limited tasks, such as getting a stack trace or getting some sense of control flow. But as soon as I have that, I spend more time looking at code and reasoning about it, than stepping in a debugger. With a REPL I can try out assumptions I make about the code, rather than being forced to step through it.


Perhaps it has to do with dynamism? I find myself "debugging" way more in dynamic languages that emphasize interactive development. At that point, the line between running code and debugging gets pretty blurry, since "stopping" execution at some arbitrary place is not very different from normal development anyway.


I think this is a really good split in methodology to identify. I've noticed the same in the way I debug static vs dynamic languages. It seems to reflect the nature of the language; dynamic languages are powerful because they are fuzzy, but that comes at the cost of comprehension, and static languages tend to be the opposite.


It also matters which debugger you are using and which features it has. I personally find gdb to be completely awful, while some IDE debuggers are delightful. With Java, you can hotswap code into a running program. And you could write the majority of your program like that. It becomes an interactive experience a little bit similar to using a jupyter notebook.

You can also make a weird sort of UI this way, where the way you interact with your program is by changing the code and/or state while it's running. Breakpoints prompt for input.


> we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places.

Uh, me too. That's why I don't single-step through huge chunks of a program.

I use code breakpoints.


I'm getting the impression a lot of people don't know basic debugger features like "continue" or setting breakpoints while paused at a breakpoint.


In a way, print debugging is a sign of 'owning' the code, when a developer is very much familiar with the structure and internal workings of the project. This is akin to a surgeon precisely pointing the scope and scalpel.

Add to this a need to build a debugging-enabled version of the project - an often long-running process, compared to a few edits to some 'release' build.

On the other hand, when dealing with an unfamiliar or complex and well-forgotten project, debuggers become that discovery and exploration tool that offers a wider context and potentially better situational awareness.

Of course, mix in some concurrency and either debugging approach can equally become cumbersome without proper understanding of the project.


I still resort to gdb, but mostly when I need memory access breakpoints for when someone (occasionally me) stomped on memory.


I think the key phrase from that quotation is "thinking harder". A debugger gives you all of the programme's state as a kind of vast animation, so it's easy to start working with one thinking "Something's going wrong here, so I'll just step through the whole programme and see when things start looking like they're going wrong". It's then easy to miss the problem due to the vast quantity of data you have to parse. Using print statements, in contrast, forces you to formulate simple hypotheses and then verify or falsify them, e.g. "I think this variable in this function is causing the problem" or "I think everything's fine with the execution up to this point". I.e. the very fact that a debugger works so well at giving you an insight into the programme's state can itself be part of the problem: it can be overwhelming.


Why would one step through the whole program? Use the approach you describe for print statements, but for breakpoints instead. Set them, inspect the relevant state when one is hit, then resume execution until the next is hit.


Why would I do this manually using some fiddly UI instead of automating it using the programming language I already have at my fingertips?

Using the programming language itself, I can extract exactly the information I need to see, transform it into exactly the shape that's easiest for me to inspect and combine output from different places in the code to create a compact list of state changes that's easy for my eyes to scan.

What I have found is that my debugging problems are either too simple to require anything more than thinking or too data dependent for a debugger to be the best tool for the job.


The Chrome devtools make adding a logpoint as easy as setting a breakpoint. So instead of adding your print statement, and rebuilding the app you can do it live. I think this is strictly better as you still get to enjoy the mental exercise of deciding where to put the logpoint. Even better, you can run anything you want in a logpoint, my favorite is console.profile/profileEnd() to get a cpu profile between two lines of code.


There's another methodology that Brian Kernighan and Rob Pike miss.

Rather than stepping through a program, add breakpoints and just go from breakpoint to breakpoint.

With a good IDE, adding a breakpoint and hitting a shortcut key is faster than a print statement, and in a GUI IDE your debugging sessions are not transient.

The only time their advice makes sense to me is when I'm in an environment without a gui. Even then jetbrains has a "ssh full remote mode." However at my company I have found that this feature doesn't work under Nix (nix-shell) so we all just use print statements.


Personally, I think my biggest reason for using print debugging is.. it works.

In C++ I often find the debugger doesn't find symbols on projects built with configure/Make. If I have a Java Gradle project I have no idea how to get it into a debugger. Python debuggers always seem fragile. Rust requires I install and use a "rust-gdb" script -- except on my current machine that doesn't work and I don't know why.

I'm sure in each case I could get a debugger working given enough time, but the only error I've ever had in print debugging is failing to flush output before a crash, and it's never been hard to search for "how to flush output" in whatever language I'm currently using.
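(To make the flush issue concrete, here's a rough Python sketch; the os._exit call just stands in for a hard crash that skips normal buffer flushing:)

    import os

    print("entering the suspect code", flush=True)  # flush=True writes the line out
                                                    # immediately; without it, block-
                                                    # buffered output (e.g. when piped
                                                    # to a file) can vanish in the crash
    os._exit(1)  # stand-in for a hard crash that bypasses normal cleanup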


Print debugging always works, but also: it lets the programmer customize their view of the program’s (very, very large) hidden state in any way imaginable. Step debuggers are the “no-code” equivalent: extremely useful for the purposes for which they were designed—and often the better choice there—but inherently limited.

Geoff’s not wrong in invoking Bret Victor’s Learnable Programming argument that being able to track state over time is critical to debugging, and Geoff’s right that print debugging makes this easier than almost any existing step-debugger.

Bret’s deeper point, though, is that a major challenge in debugging is hidden state in general, and that variable state changing over time is just one example.

Not only is there a ton of hidden state in the execution of a program—the full execution trace PLUS state of every variable at each point in that trace—but there is also a ton of interpretation of that state that the programmer needs to do while debugging: “what does this sequence of events imply?” - “why is this pointer pointing here?” - etc.

Doing that interpretation is much easier when the programmer gets to selectively view that (again, HUGE amount of) hidden state. Print debugging gives the programmer complete control over what state is shown. No other debugger does that: they all show a ton of data and context (often useful!) and make certain operations easy (inspecting single variables! viewing call stack snapshot!), and these are often just the right things.

But sometimes they’re not. And often, when you start debugging, you don’t know if the fancy debuggers will be too much or not enough.

Print debugging gives you the power to write code to selectively view the (again, HUGE!) hidden state of your program, and this scales from the smallest code-tracing bug to the largest distributed systems.

Step debuggers, on the other hand, are essentially “no-code” debuggers — extremely useful for the purpose for which they are designed, still useful for adjacent purposes, and a great place to start if you know the tool well, but ultimately not as powerful if your needs exceed their capacities.

A good programmer will know how to use all these tools for what they’re best at!


If you've chased heap corruption printf doesn't really help you much but a data breakpoint is a godsend.

Same thing with watch windows, memory views and the like. There are classes of problems that do well with printf but calling them "no-code" is vastly underselling them.


I’m not sure when no-code became a pejorative, but that wasn’t my intent! Only that most of these tools, unlike print, are special-purpose, exceptional for that purpose, and often even useful in other circumstances.

A data breakpoint is a great example of something useful that print doesn’t do well.


You seemed to be drawing a parallel between no-code -> "not as powerful"; in my experience they're different tools for different use cases.

I also don't think they're nearly as no-code as you call out. VS' watch window has very few limitations compared to printf back when I was working on win32 things.

Also important to consider iteration time. I once worked on a system where adding a printf was a 20 minute process due to the need to heavily optimize for a memory constrained platform (scripting fit in a 400kb block with the asset bake step).


> in my experience they're different tools for different use cases

Exactly!

Debuggers are very useful tools, and typically not as general-purpose as print. I don’t view “not as powerful” as a meaningful distinction, because it requires that you ask “powerful at what?”

VS’ watch window is great but (I assume) doesn’t work across distributed systems, etc. — as a general technique, print is universal in the sense that there are very few problems that can’t be diagnosed by modifying your code and printing some (possibly a manipulation!) of the hidden state. This is going to be harder than using a special-purpose tool designed for exactly your problem.

In the same way, “no-code” tools are typically better and/or easier than writing code to solve the same problem, but special-purpose.


> typically not as general-purpose as print

In my domain, which doesn't usually cover distributed systems, printf can be worse because it introduces synchronization primitives that have caused race conditions to disappear (and that race condition causes second-order heap corruption or the like). On one platform system memory was so small (8mb total) that each output to stdout went over the serial link, slowing performance down to 1/20th of a realtime process under any real logging.

Like I said, different tools for different uses, and really depends on the context. If there was one size fits all then we'd just use that but the diversity of debugging tools I think shows that you need a variety of techniques to approach the problems we encounter.


> Like I said, different tools for different uses, and really depends on the context

We totally agree, and I'm not sure what we're arguing about--perhaps you can fill me in.

I'm arguing that print is almost always worse than any specialized tool. (After all, who would use a specialized tool worse than print?) There is not a one-size-fits-all tool, and print is not a one-size-fits-all tool.

Indeed almost every seasoned developer has a story about print failing. Whether it's the mysterious "Heisenbug" that disappears when you measure it (like the sync issues you mention) -- my personal story is when I was trying to debug a (class project) kernel scheduler. Printing to the console was so slow that by the time I'd printed anything, the scheduler had moved on to the next time slice!

It's worth noting that "print debugging" is not literally just using the "print" function; it's a style of debugging that involves logging specific information using some logging function (usually, but not always, print) and then analyzing it after the fact (usually, but not always, by reading the printed output).

This strategy of "get data out, then analyze it" is the general form of print debugging, and in the small-memory case, or the sync Heisenbug case, this often means collecting data in the appropriate variables before outputting it to be visible. Isn't this still print debugging, even though it doesn't use a "print" function?


I think we're mostly arguing about how useful the various approaches are. At least for me print debugging is a measure of last resort unless I want to extract some historical data out and I know it won't influence the timing of the issue I'm trying to chase down.

With print debugging you're inserting the whole build + deploy + repro setup loop into your debugging; if that's a long time (say 20 minutes in one job I had with production hardware) you're in for a world of pain. I find that just about any other tool is usually an order of magnitude more efficient.

Also even the "step debugger" tools do the same thing you'd do with a print. LLVM for instance uses the IR JIT API to generate watch/eval values: https://releases.llvm.org/9.0.0/docs/ORCv2.html#use-cases

IMO you should relentlessly optimize your iteration times, that's the inner loop of development speed and print debugging fares pretty poorly in that area for all the reasons above.


> I think we're mostly arguing about how useful the various approaches are.

Ah, that's fair.

> At least for me print debugging is a measure of last resort

Right, and I think this depends on the domain. For lots of mature environments, this makes sense -- there's been years for tooling to catch up to the kinds of bugs people run into, possibly corporate money being put into developing debugging tools, etc.

> IMO you should relentlessly optimize your iteration times, that's the inner loop of development speed and[...]

Agreed, though the effect of print debugging on iteration time is very environment-dependent.

> [...]print debugging fares pretty poorly in that area for all the reasons above

Adding console.log to a web app can be a trivial change (though of course reproducing app state is another issue) -- again very environment-dependent.


This is a lot of words but I wonder if you've ever worked with a debugger with watchable variables or immediate mode code execution. I find it odd you say print debugging is more flexible.


I have. What information exactly can you get from a debugger with watchable variables and immediate mode code execution, that you can't get from print?

I'm not making the argument that people shouldn't use debuggers -- obviously if there's a good one that does what you need, it'd be silly not to use it. And good debuggers are great.

But what happens when you're working on a distributed system? Or multiple processes on a single system? Or...etc. etc. -- debuggers are almost always built to support a particular set of use cases for a particular set of domains. Step outside, and they're not usually as useful.

print always works. It's almost never the best. It's more flexible because it can almost always be used.


Silly example I made up on the spot, but not too far from real life. I'm iterating a list of 10000+ objects with about 20 or more nested properties. Sometimes one of them behaves strangely. This function runs several times per second.

Option 1: Print all of them; requires a rebuild and would log 200000 lines every second. Unless I wrap the print inside conditions, requiring yet another rebuild.

Option 2: Conditional breakpoint: user[i].department.balance <= expected_value. Bang, I can now inspect both the complete nested user object and the previous/next item in the list, other local and global variables, the call stack of how it reached there, the state of all other threads in this moment, and so on. With the really good debuggers like .NET or JVM I'm even able to rewind the execution pointer to the start of the function or hot-swap the code as it's running.

In short, a debugger allows you to see all the state of the program at once, and you can retroactively choose what is relevant, as opposed to printf where you must select upfront. Maybe even more importantly, printf lacks stack traces, unless you add a log and request-id at every function enter/exit.

There is also the case of third party code which you can't edit to add printf to. Many environments usually by default give you pretty good context of the symbol names or even complete source code which you can step through (except c++ land where you'd need a debug-build of the lib, which isn't impossible either).
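(For readers in Python land, a rough sketch of that conditional-breakpoint idea from option 2, using the invented names above; pdb can also attach the same condition to a `b file:line, condition` breakpoint without touching the source:)

    # Stop only for the odd object out of 10000+; everything else runs at full speed.
    for i, user in enumerate(users):                      # `users` is the hypothetical list
        if user.department.balance <= expected_value:     # the suspicious condition
            breakpoint()   # pdb opens here with `user`, `i`, locals and the
                           # whole call stack available to inspect
        handle(user)       # hypothetical per-item work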


> What information exactly can you get from a debugger with watchable variables and immediate mode code execution, that you can't get from print?

If you have some completely undocumented object oriented code with tons of relationships, peeling the state apart in a debugger is a good way to get an idea of what you even have that might be worth printing.


>What information exactly can you get from a debugger with watchable variables and immediate mode code execution, that you can't get from print?

Lots of things but an obvious example is private member info.


Using a debugger is largely passive - it shows you what is actually happening.

Debugging via print allows you to step outside and peer in, i.e. it is active.

Print debugging can be prone to bugs within itself which may cause additional ignorance about the potential bugs being diagnosed. How meta can you get? 8) There's also the effect of the effort of actually looking - that may or may not have an effect.

Anyway, the discussion here is largely ignorant of language and function. At the moment I spend time fiddling with Python and OpenSCAD scripts if I dig out a programming language. For me, print is really handy. For a Linux low-level latency-sensitive driver in highly optimised ASM and C I suspect this matter is moot.


With interactive debuggers like python's pdb you can actively manipulate data, print it out, basically do whatever you need to do in a REPL. It is way more effective than adding print statements, rerunning your script, etc. That being said I typically use a combination of both: I use a print to get me to where I think the problem begins, then an interactive debugger (pdb.set_trace / breakpoint / etc) to drill down into the details.
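Roughly like this, with the names invented for illustration:

    def process(records):
        for i, record in enumerate(records):
            print(i, record.status)            # cheap print to narrow down where
                                               # things start to look wrong
            if record.status == "corrupt":     # once narrowed down, drop into pdb
                import pdb; pdb.set_trace()    # poke at `record` in a REPL, call
                                               # methods on it, then `c` to continue
            handle(record)                     # hypothetical downstream work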


(most) debuggers can print - I often use conditional breakpoints with the "condition" "print(thing)". It works great, doesn't require re-compiling, can be enabled/disabled with a single click, etc. It's handy when you want to see a lengthy sequence all at once.


Discovering this capability changed how I debug. Conditional breakpoints that only log create an always useful, easily enabled/disabled log of critical method results without littering the code itself with logging statements.


"Print debugging always works..."

Nope, print debugging does not always work. Please stop saying that. Sure, it's often useful and often works. But go try to debug a race condition, or bug in some locking mechanism with print statements. I'll wait as you add a bunch of print statements, and then learn the harsh lesson that you are now debugging a completely different program with very different performance characteristics that now doesn't deadlock where it used to deadlock because you slowed down one of the threads massively as it spends time it used to not spend logging stuff.

"Step debuggers, on the other hand, are essentially “no-code” debuggers — extremely useful for the purpose for which they are designed, still useful for adjacent purposes, and a great place to start if you know the tool well, but ultimately not as powerful if your needs exceed their capacities."

I guess you'd think this if you thought print debugging "always works". But I promise you it is equally easy (if not easier) to exceed the capabilities of print statements in some scenarios.


You don't put printf() in critical sections; instead you collect some statistics. Once the critical section is over you can freely print the collected data for further analysis. Or you can print it once a second. This simple technique works well for debugging race conditions, performance issues (profiling), memory leaks, kernel drivers and bare-metal stuff where timing is a concern. So, yes, "Print debugging always works...", but it must be used wisely. :-)
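Something like this sketch (the lock, worker and timings are placeholders): record inside the critical section, print only after the threads are done.

    import threading, time

    lock = threading.Lock()
    samples = []                    # cheap in-memory trace instead of printf

    def worker(n):
        for i in range(n):
            t0 = time.perf_counter()
            with lock:
                # ... real critical section work goes here ...
                samples.append((threading.get_ident(), i, time.perf_counter() - t0))

    threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Only now, outside the critical section, do the slow I/O:
    for tid, i, dt in samples[:20]:
        print(f"thread {tid}, iteration {i}: {dt * 1e6:.1f} us to acquire and use the lock")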


Collecting statistics can change race conditions too, it's just less likely


Debuggers don't work in the situation you described either.


Yes, my reaction entirely, I do a lot of real-time work, an interactive debugger is quite useless when stopping everything is not an option


Where did I say that they did work?


> it lets the programmer customize their view of the program’s (very, very large) hidden state in any way imaginable.

I’ve been trying to articulate this to myself for a long time. Thank you.


> A good programmer will know how to use all these tools for what they’re best at!

Yeah, I run through valgrind; 9 times out of 10 I don't even get to the point of starting the debugger or recompiling with prints. The remaining 1 time where valgrind output is clean, I go with printing. About half the time the prints aren't enough and then I fire up the debugger.

Also, it depends. I'm working on a legacy C# project where the previous dev read somewhere that passwords shouldn't be stored in plain-text, so when changing the stored DB credentials you have to set a breakpoint where the hash is calculated, change the variable holding the cleartext password to the new password you want to use, step the line that calculates the new hash, copy the text out of the watch window and finally paste that text into the file holding the credentials.

I do not know how this helps. I also don't care enough to write a command-line utility that generates the hash as a base-64 string, because we've changed the DB credentials for the webapp only once since he left. We may have changed it more often if we had an easy way to do so :-/


That suggests that debuggers should let you write visualizers and they should be so easy to write that you don't hesitate to write them.

That seems to be the idea behind Glamorous Toolkit https://gtoolkit.com/

I'd be curious what it would take to add any of those concepts to existing debuggers for other languages.


As a counter-point, I think there's an argument that folks don't spend enough time in the debugger. There's a lot of value there, and in fact one could even use a debugger environment for unit testing, since even native debuggers have scripting environments.

Personally, I think folks should master the debugger _first_, and during all steps of learning a programming language.

But similar to test-driven-development it’s a different way of thinking, and most books scarcely discuss the debuggers.

That being said, I do use print-debugging a lot too—in C++ a lot of functionality can be compiled-out, allowing one to, for instance, print hex dumps of serialized data going to the network.

On that note, there is a distinction between trace debugging that is part of the source code and general print statements that are hacked in and removed.


I can believe there is value in learning a debugger, but debuggers could stand to improve significantly. Debugger UX is almost universally awful and per the parent it’s often difficult to get one up and running. Moreover, if you do your “print debugging” with log statements of the appropriate level, they can be useful in production which is perhaps the biggest value.
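In Python terms, something like this sketch (the logger name and fields are invented): the same line that serves as a debug print locally becomes a useful production log later.

    import logging

    log = logging.getLogger("orders")   # hypothetical module-level logger

    def apply_discount(order, discount):
        # "print debugging" that can stay in the code:
        log.debug("applying %s to order %s (total=%s)", discount, order.id, order.total)
        ...

    # While debugging, turn the firehose on:
    logging.basicConfig(level=logging.DEBUG)
    # In production, configure INFO or WARNING and the same lines cost almost nothing.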


Also, in almost all languages debuggers are an afterthought. Take e.g. the situation with Golang, Haskell or Python. Either there is no useful debugger or there is one, but it came late and still cannot debug everything the language does.


Print debugging not (really) working in Haskell is... non-ideal, and a bad pairing with the lack of good real debugging. But test cases are usually easier to figure out. Presumably there's a balance discovered by people on big projects, but it never seemed as good as normal approaches to me.


Haskell debugging by testing is great for small functions where you can use quickcheck. But larger tests for the more complicated stuff don't work in quickcheck and there isn't much else that one can easily do.


Not sure what you mean, there's e.g. Tasty for non-QC testing. It can do all sorts of variations of test, e.g. traditional unit tests, "golden" tests, etc.


I haven't actually used quickcheck in Haskell, but I've used it for very complicated tests in other languages including Racket, TypeScript, Rust, and Java. The nicest thing about quickcheck is that it lets you easily create test data without imposing too many constraints on it. Regular fuzzing or randomized testing is almost as good, but the narrowing done by quickcheck is sometimes nice for understanding test failures.


On the other hand, it is really great in the case of the JVM


Not sure what situation you are talking about. Debugging Python is as easy as right-clicking a file in Pycharm and pressing debug. Why care if it was an afterthought when it for the past decade has worked perfectly.


I care. That it has worked for the last decade only means that Python was without a working debugger for 2/3 of its existence, 1/3 of which I had to suffer through.

Also, PyCharm isn't really what I would call a proper debugger yet; attaching to remote running processes, for example, just doesn't work reliably yet and is very new anyway. Debugging embedded targets just doesn't work. Multithreading is iffy (but that's unfortunately normal in Python).


Also needing to use a particular text editor to use a decent debugger is bananas.


The problem is that it's not just a matter of "learning the debugger for Java." In practice there are many different projects that configure debugging many different ways, and it doesn't matter that you know which keys to press in IntelliJ if it will take you an hour to figure out how to attach it to the project. This speaks to OP's point, where it's hard to use a real debugger to casually investigate random projects.

Having said that, it is absolutely a requirement when working on a project for any length of time (especially professionally) to set up and figure out a debugging environment, because it is significantly more productive than printing. But the startup cost is certainly there.


The java case is actually pretty universal ... you run the JVM with debugging enabled (fixed string of flags) and then tell your IDE to attach to the JVM on the port you gave it. You don't need compilable source, can be on a remote server, different OS etc - if you have just the source for the bit you want to debug you can set a breakpoint in it and it'll stop there.

Being able to debug third party code in remote / hostile environments (even when its mixed with proprietary vendor code) is one of the things I like about Java.


The Java case is arguably the least difficult out there thanks to reasons you outline. But still, the other day I had to debug a Gradle plugin written in Java. It's possible! But it took an hour or so of effort to figure out which options to use and which program to give them to.


> The problem is that it's not just a matter of "learning the debugger for Java."

In the Java case, for standalone projects (i.e. not something deployed on a server), and if it is your own project and you don't do anything unreasonable, it is mostly just: set a breakpoint and hit "run with debugging".

Probably the least painful debugging experience I know.

Doing it for Tomcat/Tomee was slightly more advanced IMO but still utterly trivial compared to wrangling css or js ;-)

There are reasons why we "old folks" like Java so much despite its verboseness.


In a world with perfect optimizing compilers that never introduce bugs, we should never "need" print debugging. But that's not where I live, so I'll keep using print debugging.

On the other hand... adding print statements can also invalidate certain optimizations (an excellent source of heisenbugs), so I'll never stop using debuggers either


Print debugging is essential in distributed systems. Sitting in the debugger waiting for human input often leads to timeouts and not covering the case you want. Of course, sometimes adding the prints, or even just collecting values to be printed later also changes execution flow, but like do the best you can.


All the more reason for folks to spend more time using the debugger; for instance, break points are only one feature.

Setting up remote debugging I’ll agree is more difficult than a local application, but each remote machine can automatically run startup commands and not require user input; commands can be run at particular places too (to print output etc) with conditional trace points, all while not impacting the code itself.

Main point is that folks don’t spend enough time learning the debugger, as print statements are easier. But using the debugger is a better practice in my opinion in the case where print statements are added just for a quick test, then removed.


What Python debuggers are you talking about? Have you tried the built-in CLI debugger? Just drop breakpoint() in your code and you're in. I have been using it daily for over a decade and am really happy with it - it's actually one of my favorite features of the language, amongst the many super useful features that Python and its excellent stdlib have to offer.


Honestly, I didn't know that and I'll try it. Last time I had to debug I remember adding "-m pdb" (as at the start of https://docs.python.org/3/library/pdb.html , first result in Google), but for some reason that immediately threw an error instead of starting the program, so I just chucked some prints in instead.


I have mapped a key to insert "import pdb;pdb.set_trace()" in my editor. Also use it daily, and not just for debugging. It is useful when working on a new project and you just want to interrogate some object you got back from a library to see what valid operations you can do with it. Or to double check some math operations.


I just tried python -m pdb and it works for me https://dpaste.org/9EQo#L26 but I really always use breakpoint(). You can even configure it to use other debuggers with an environment variable, ie.: PYTHONBREAKPOINT=ipdb.set_trace


I just quickly went and checked the program I was trying to debug. I was running 'python3 package --arguments', where 'package' is the name of a directory which contains a package I was working on. 'python -m pdb package --arguments' just complains that 'package' is a directory.

Adding a 'breakpoint()' at the start of the program does get me into the debugger. I'll remember that for future (but, it's not easy to find by googling if you don't already know what you are looking for!)


For complex problems, `import pdb; pdb.set_trace()` instead of a print statement can be super handy. It basically launches the debugger from the context of wherever you stuck the line.

For large unwieldy data-structures, you can go ipython: `import IPython; IPython.embed()` launches the ipython REPL from the calling line's context.

I use the latter a fair bit when spelunking around in other people's code. `pdb.set_trace()` lets you continue execution more easily.
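For instance (the function and its argument are made up):

    def handle_response(resp):
        import IPython; IPython.embed()   # full IPython shell with `resp` in scope:
                                          # tab completion, `resp?`, dir(resp), etc.
        # ...or, when you want to keep stepping/continuing through the program:
        # import pdb; pdb.set_trace()
        return resp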


Never felt that Python debugging was "fragile". BTW if you're not using pudb you're missing out.


Only thing I hate about this .... regular point of code review: remove the debugger breakpoint you left in your code!

We haven't had one hit production yet, but it came close. Print statement is a lot more harmless.


>> If I have a Java Gradle project I have no idea how to get it into a debugger

You download Intellij IDEA, run it, choose File->Open and select the build.gradle file, right click the main class and there's a Debug option.


I agree with this sentiment. Print debugging is essentially universal. The same goes for compiler directives or simple constant driven conditionals to toggle it (in a brute force way) on and off.
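e.g. the constant-driven version is just a few lines (a sketch; `DEBUG` can be whatever flag or env var fits the project):

    import os

    DEBUG = os.environ.get("MYAPP_DEBUG") == "1"   # hypothetical env var toggle

    def dprint(*args, **kwargs):
        if DEBUG:                    # brute-force on/off switch for all debug prints
            print(*args, **kwargs)

    dprint("cache miss for key", "user:42")   # silent unless MYAPP_DEBUG=1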


> Personally, I think my biggest reason for using print debugging is.. it works.

I agree.

A similar statement is that using a debugger often does not work.

the sort of ratholes I've run into are:

- debug build
- proper symbols
- interrupts and debugger
- kernel and debugger
- unfamiliarity with debugger
- limitations of debugger

Without a proper debug build, you can't run the debugger effectively. You have to set up a whole debug environment.

Sometimes you need to do broad work to get proper symbols and stack trace information. By broad, I mean a debug build for everything.

I've also found that many debug builds, apart from altering some behavior, also turn on a lot of printfs.

If you're using interrupts/timers or debugging a kernel module, many times debuggers don't work or may alter, move or suppress the problem.

and then there's the... I haven't used a debugger in 6 months, how do I do (very simple thing). And sometimes the debugger is just not the right tool or a tedious tool to use. "If I just load this one macro and somehow get the right address maybe I can decode this one kernel data structure..."

Personally, many times an ephemeral printf isn't crude and meaningless, it's a precise scalpel getting to the core of the problem.


This is the only argument for print debugging I can understand. Given the debugger experience I've had at every job in the last 10 years (IDE-driven, set breakpoints in editor, inspect state visually) it's mind-boggling that anyone thinks print statements are superior, but getting that experience can be frustrating depending on your tools and environment.

FWIW my experience debugging Python in VS Code required no setup and I've encountered zero issues.


Python easily has the best debugger I've ever seen in a language.

    import pdb; pdb.set_trace()

that's literally all you have to do at any point in your code. Run your code in the foreground of a terminal, and boom you have a debugger exactly where you want it.


In recent Python, it's even easier -- just put `breakpoint()` in your code.
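For example (`transform` is a stand-in):

    def buggy_function(data):
        result = transform(data)   # `transform` is a placeholder
        breakpoint()               # opens pdb by default; PYTHONBREAKPOINT=ipdb.set_trace
                                   # swaps in ipdb, and PYTHONBREAKPOINT=0 disables them all
        return result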


Fwiw, it's not too hard to approximate that in c/c++. Print out the pid (I often print out "gdb -p %d") and then sleep (and perhaps send SIGSTOP to other processes in more complicated scenarios).


Pry with Ruby allows you to drop in to a repl anywhere with binding.pry

You can edit the state and keep running after altering the program.

But usually I just end up printing a few things from the repl and figuring out what is fouled up.


Part of the reason it works is it forces you to reason about the code, otherwise you won't know where to put the print statements.


Agreed. Also certain classes of bugs (such as bugs in parallelism) are easier to debug via print than using a debugger.


Especially when you need to run your code on a remote server as part of a bigger platform like some Cloud or Serverless system.

These systems likely already have a way to get logs, but good luck getting a debugger to work there.


I’m sorry, but this is just ignorance; learning to use the debugger for the platform at hand is a basic skill every developer should master. So many times I have seen developers use the debugger to troubleshoot and fix issues and be perceived as a “ninja” (I despise that term but that was the effect) because they knew how to use the debugger. I mean yeah, keep printing lines, and keep being outperformed by your debugger-using peers. That’s the choice.

Yes, I am dying on this hill.


OK. And I've also cringed watching ninjas step through code slowly, reading everything, spending 20 mins catching something 2 print statements would have achieved.

Debuggers aren't bad. But neither is printing. Knowing when to reach for them is probably a bit more key.


> And I've also cringed watching ninjas step through code slowly, reading everything, spending 20 mins catching something 2 print statements would have achieved.

The problem is that the two print statements will only catch the bug if they are the right two, based on a correct hypothesis of what the bug is. Which, with a debugger, won’t require stepping, but setting two breakpoints, doing a run-to-breakpoint, and inspecting values.

Stepping is required when you are exploring behavior because you don’t have an easily testable hypothesis about the source of the bug.


But with print statements if the first place I put them doesn't work, I can start doing bisects and quickly find the right place to print.

As you note, debugger breakpoints aren't magically better than print statements when I'm investigating a hypothesis – I'm no more likely to put them the right place than I would have put print statements.

And then there's a class of problems that neither debugger nor print statements will help: many years ago a very junior co-worker was wondering why his C code was giving the wrong answer for some math. It took me pointing out that one of the numeric types he was using in his code was different from the rest (I think it was #defined elsewhere, in some library, as an integer type). When the compiler did the math it had to do some type coercing.


> And then there's a class of problems that neither debugger nor print statements will help: many years ago a very junior co-worker was wondering why his C code was giving the wrong answer for some math. It took me pointing out that one of the numeric types he was using in his code was different from the rest

A debugger and watches on the values of concern absolutely will help with that (so will properly placed print statements), so it's a really bad example. (Of course, strong typing helps even more with that particular case.)


No, my co-worker was so junior he didn't understand why that was happening. It took me a moment to glance at the types in the source and point out the problem, no debugger needed.


> But with print statements if the first place I put them doesn't work, I can start doing bisects and quickly find the right place to print.

Can't I also bisect with breakpoints?


You can, but the comment was talking specifically about stepping through the code line-by-line.


I can bisect faster with breakpoints. Plus with a time traveling debugger like rr the time is further reduced.


Tools in a toolbox.

The worst developers I've ever known always have their "this is the best way to do everything" hill they die on.


They weren't ninjas, otherwise they would have used breakpoint actions for doing those print statements without modifying the source code.


What's a breakpoint action? Is it like inserting a printf before the breakpoint?


Breaking is just the default behavior when a breakpoint is hit, you can generally attach whatever behavior / conditions you want using the debugger's scripting language.


Reading through the majority of this comment section, I get the impression that those who like print statements find value because they aren’t proficient with modern debuggers, rather than they find print statement valuable even though they’re proficient with debuggers.


I once saw an interview with Visual Studio team members saying that one reason they started doing talks about how to use the debugger was the continuous stream of requests for features that Visual Studio has had almost since it has existed.

Same applies to other debuggers.

It is not only the debuggers, but also OS and language runtime tracing facilities like DTrace, eBPF, ETW, JFR, ....

Many devs aren't 10x because of IQ, but rather because they learn and make use of the tools available to increase their knowledge of the platform.


I agree, but my feeling is, if one person is bad at using debuggers it is their fault. If (as it seems to me) most developers are bad at using debuggers, then it's probably the debugger's (and associated tooling's) fault.


For me it is the teacher's fault, given that a large majority never teaches anything related to debuggers.

So we get generations that use vim and Emacs like notepad, create Makefiles by copy-paste and barely know gdb beyond set breakpoint, run, step and continue.

Using C as example, but feel free to extrapolate to another language.

And no I am not exaggerating, this was the kind of students I would get into my lab when I spent a year as TA in 1999/2000, and naturally would have to get them up to speed into good programming practices.


Reminds me of this diagram http://i.imgur.com/ZOuf9hg.png (which I don't necessarily agree with)


It maps breakpoints to debugger actions that are triggered instead of actually stopping execution, like formatted output of whatever variables are in scope.


maybe this is sometimes called assert()? or in some debuggers you set a watch on a var and the BP triggers only on the watch-condition, so the BP doesn't trigger on each loop, only when x=7


You are missing out on the important point: printing forces you to formulate a hypothesis: what you expect and to compare with what you actually get. Debugging encourages less modeling and more trying random stuff until something sticks.

It is an exaggeration. In practice, it is useful to apply both. Novices can get some insight using debugging, more experienced with code base people should exercise their understanding of the code and use well picked prints.


You can also do print debugging with a debugger. Just have a breakpoint that doesn't halt, but instead simply prints the values of interest to the debugger console. This is particularly nice for things like debugging interrupt handlers where the time taken to print output normally is too much to accept.
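In pdb that looks roughly like this (the file, line and variable are invented, and the # notes are annotations rather than debugger input); gdb's breakpoint `commands` with `silent` and `continue` work the same way:

    (Pdb) break handler.py:42       # hypothetical spot inside the handler of interest
    (Pdb) commands 1
    (com) silent                    # suppress the usual "stopped at breakpoint" banner
    (com) p count                   # print the value of interest to the console...
    (com) continue                  # ...and resume immediately, so nothing ever halts
    (Pdb) continue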


Watched variables they were called once upon a time.

Also, I think what you suggest to do here is way harder to learn than printing.


Perhaps, but that lends credence to the theory that people using print debugging may just not have learned how to effectively use a debugger yet.


I do use debuggers when I have a hard core problem, I've found some horrible memory corruptions using RR and gdb for reverse debugging. However, sometimes someone throws a horrible gradle java project at you and asks for help, and figuring out how to debug is a pain.

I'd throw your comment back -- maybe all languages should be "debug first", make it as easy to get code into a debugger as it is to just build and run it.


I’m also willing to bet you live way up the stack.

Try debugging something in the embedded world, and you’ll see why a lot of bare metal programmers use printfs. Turns out timing is critical most of the time, so using a debugger hides a LOT of bugs from your eyes.

Debuggers are very useful, but so are prints; they're just different tools with different purposes.


I've used print debugging extensively doing embedded development, when I could reasonably hook up a serial port to capture the output, or put a crude console on a tiny screen. These systems can't always be debugged in the traditional sense, and if you're troubleshooting some bug that happens on hardware but not in your simulator, then you use the tools you have available.


Worst thing with Java is, especially when taking over a project or extending some open source thing: finding where that goddamn log4j config is. Is it in web-inf, in tomcat/glassfish config, somewhere entirely else (e.g. specified in the run config), or is it configured in one of the five wrapper layers.

And then you have to figure out the syntax... does it want package names, class names, or (hello Intellij plugins!) need a fucking # at the beginning to be recognized.

And then you have stuff like a "helpful" IDE that by default only shows WARN and above levels without telling you somewhere "there might be stuff you don't see" like Chrome does.

For actual debuggers, shit is worse, across the board. Running in Docker is always a recipe for issues, not to mention many applications actively messing around with stuff like ports.

A System.out.println always ends up somewhere sane.


I think the point about seeing the state over time is a great one.

But also I want to nitpick because the title is one of my “favorite” pet peeves: “The Unreasonable Effectiveness of ...” thing is now used (as in this article) by people who are trying to say that something is remarkably or surprisingly effective, but that’s not what the original essay was about at all!

“The unreasonable effectiveness of the mathematics in the natural sciences” was a philosophy of science piece whose thesis was that there is no reasonable (rational, provable) basis for the degree to which our math abstractions and syllogisms happen to correspond to the physical universe.

It is self evident that they do in fact correspond super well, but the original piece was about how weird and spooky that actually is, if you think about it at all. Math is super effective, and there is no reasonable basis that we yet know that it should be so effective. It’s unreasonably effective.

It’s such a perfect title for that piece, and it feels dirty or diluting when it’s just used to mean “remarkably effective.”


IDE vs Text editor. OOP vs Functional. Logger vs debugger. The holy wars that shouldn't be. Why can't we all be friends and accept that Vim is better than emacs.


I used to think like you, friend! For over a decade, then I discovered doom emacs :)


For those wondering, Doom Emacs is a better vim (from evil-mode) than vim and so much more (easy out of the box community configs for most languages and tools, and way more cool stuff) inside the Emacs OS.


What always prevented me from actually switching to Emacs was how it's so huge it seems impossible to get an overview of how to do what. Every programming-language-specific mode comes with its own unique features that surprise me when I just want to write code, meanwhile just entering a single tab without it getting deleted again is an odyssey of reading documentation. At the same time it's slow, and despite it having the best Vim emulation it cannot hide that Emacs just doesn't work like that. As soon as you leave the file's buffer you discover how Evil mode's illusion falls apart on all sides, and you always land in situations where you have to use a mix of Vim and Emacs keybindings.

I love the concept behind Emacs, I just think at least 80% of its code should actually be in plugins, and the program itself and a lot of large expansions are really bogged down by the sheer size and lack of simplicity.

Oh, and Emacs-Lisp...it's much better than Vimscript, but it's a disappointment nonetheless. Loops instead of recursion in Lisp, really? And last time I tried it the parser could not handle unmatched brackets in comments.


> Loops instead of recursion in Lisp, really?

That's pretty common in Common Lisp as well. Specifically do loops (and lest we forget the loop macro).

I think you might be thinking of the scheme branch of lisps, but not all of them work that way.


I tried using it but got stuck on having to learn Lisp to understand my config file.


I'm not sure what's difficult about

    (map! :n "ff" #'save-buffer)          ; Save
    (map! :n "fq" #'kill-current-buffer)  ; Quit a buffer
or

    (defun my/org-buffer-check ()
      "Check that we are in org-directory, and the buffer name is in that directory"
      (and (string= org-directory default-directory)
           (seq-contains (directory-files org-directory nil)
                                (buffer-name)
                                'string=)))
the latter being easily represented as:

    function my/org-buffer-check()
        return string=(org-directory, default-directory)
           and seq-contains(directory-files(org-directory, nil),
                            buffer-name(),
                            &string=)
    end
(seq-contains looks through the sequence returned by directory-files, for the result of buffer-name(), and compares them using the string equality function)


Sorry but that is not clear or obvious at all to me.


I understand the "!", I use it when it fails the first time and I'm trying to hint that I really really want it to work.


Apparently all the cool kids are using neovim + Lua these days. Lisp turned me off of emacs years ago as well. Recently I started digging into Neovim and have found Lua much easier to parse/internalize than Lisp, and kind of a joy to work with.


Great, I am at least halfway on the right track since I am using neovim. I know Lua a bit, I actually didn’t realize it was integrated.


Wow, by looking at this screenshot (https://raw.githubusercontent.com/hlissner/doom-emacs/screen...), is doom emacs a terminal/console program or a GUI program?!


It's a number of extremely, extremely well crafted layers on top of Emacs.

I switched last July and after 8+ years of using variations of Vim, Vi, Ex-Vi, Ed(1) (yes), NeoVim, et al., it is by far the smoothest experience I've ever had.

Unlike my experience with Spacemacs, I haven't had any problems adapting from Vim -- there are no points where the Vim interaction layer breaks down, and it genuinely feels like an editor that I'll be using for the next 20+ years. Like something that can grow around me.


Emacs (Doom Emacs included) can run in a terminal session or in a GUI.


It's configuration boilerplate for emacs. Emacs itself can run in both terminal and GUI mode, and doom should broadly look similar in both settings.


Spacemacs is the one true way, heretic scum!


Let us not forget the war to end all wars:

Tabs vs spaces


Tabs are the clear winner here, since they unambiguously denote which level of indent is in use, take only one character (octet) per indent level, and can be visually adjusted in any moderately advanced editing program to an end user's taste WITHOUT modifying the source code.

This __wouldn't matter__ if we all just used TABS for indent level and if spaces were ignored for that: I also prefer tabs to show in a GUI code editor at ~4 characters, but be equivalent to 8 display characters in terminal modes (I guess EM size, but when I care about those I really want a 'fixed width' font, so toss all of the complexity aside please).


> unambiguously

Multi people teams (or just one person editing on different platforms depending on need) using different editors and with different preferences would like to have a word with you here.


I've been a vim print debuggerer for like 30 years, and last year picked up writing C# in an IDE (Rider) and it's been quite nice really.


How's file exploring and method/class definition lookup these days in Vim?


The quality of the language servers vary, but you could get a decent IDE-like experience.


LSP is a great leveler, for that. From what I hear, vim has great LSP support, these days.


For the junk I write and work with, Ag/Rg has been sufficient.


I heard the latter sentiment earlier today, but I don’t think anyone is actually passionate about what editor others use. Opinionated sometimes.


Passion about the tools others use can be called for if you can see they're obviously struggling to meet their goals with the tools they've chosen.

The hard part is that, unlike a screwdriver, whose use you can demonstrate, editors and "IT" in general are mental tools: the mindset is an invisible, nontransferable "handle" attached to the visible portion that everyone can see and use.


A previous manager was snarky about me using Emacs to write Python instead of “a proper tool”. Every time he’d pass my desk, “a real IDE could do that for you”. We finally had a conversation along the lines of “can it save me more time than you waste pestering me about meaningless stuff? Also, STFU until I miss a deadline for the first time since I’ve been here.”

I would not hire a carpenter who doesn’t believe in using hammers. Neither would I constantly bug a hired carpenter to use the hammer I think they should be using instead of the one they like.


Most of the time, I suppose. Watching someone try to write java in vim (or generally, without an IDE) gives me anxiety though, even with a language server :)


Meh, it's fine. In general, I find that vim is more productive most of the time (now that I have a language server; before, I wouldn't ever have considered it!)

The fluid and consistent (java is only a portion of what I write at work) editing experience is mostly more valuable to me than the slightly better autocompletion.

I keep IntelliJ installed, but I only open it if I want to do a fancy mechanical refactor, like extracting an interface from an existing class. Smaller niceties like creating a local variable from an expression are only a handful of keystrokes in vim anyway, and just feel like naturally describing what I want (lexically rather than semantically, I'll admit).


That's perfectly doable. I routinely navigate / extend / debug / refactor a 600 KLOC Java codebase with nvim + ctags + ripgrep and will have the job done well before the language server has even completed digging through those 600 KLOC.


Speed of iteration beats quality of iteration.

You can step through the program, reason about what's going on, tracking values as they change. But if you missed the moment, you have to start again from the beginning (time traveling debuggers being rare). Or maybe you're looking at the wrong part entirely at this stage, and just wasting time.

With print debugging you write a bit of code to test a hypothesis. Then you run it, and you keep running it, and especially if it's a UI program you play with the UI and see how the values change during that run. Ideally the change-the-code -> see-the-result loop should be a few seconds.

You can then git commit or stash your prints, switch branches and compare behavior with the same changes applied. And at the end of the day if you walk away, your prints will still be there the next morning. The debugger doesn't produce any comparable tangible artifacts.

Once you do know where the problem is, and if it's not apparent what the problem is (most problems are pretty trivial once located), that's IMO the time to break out the debugger and slowly step through it. But the vast majority of problems are faster to solve through rapid iterative exploration with prints in my experience (C, C++ for over a decade, Python, now JS/TS).
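
For example, the kind of hypothesis-testing print I mean (a minimal Python sketch; the slider/model names are made up):

    class Model:
        def __init__(self):
            self.value = 0

        def update(self, value):
            self.value = value

    def on_slider_changed(model, value):
        # hypothesis: the model lags one event behind the UI
        print(f"[dbg] slider={value} model.value={model.value}")
        model.update(value)

    m = Model()
    for v in (1, 2, 3):
        on_slider_changed(m, v)  # output shows model.value trailing the slider by one event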


> Speed of iteration beats quality of iteration.

I totally agree but for me that means using a debugger and make full use of its features.

> But if you missed the moment, you have to start again from the beginning

As already mentioned in another comment, "drop frame" is a standard Java debugger feature. You can easily go back to the start of any method and go through everything again (side effects of already executed code can give some trouble though).

> Or maybe you're looking at the wrong part entirely at this stage, and just wasting time.

You have the same issue when printing in the wrong parts. Of course you can plaster the code with lots of print statements to see which gets executed. But you can do the same with breakpoints and see where the debugger stops.

> With print debugging you write a bit of code to test a hypothesis. Then you run it, and you keep running it, and especially if it's a UI program you play with the UI and see how the values change during that run.

I really like conditional breakpoints for this. You write a condition for a state that interests you. Then play around in the UI until it stops for that condition and you can easily inspect the complete state at that moment. This is quite useful for debugging methods that are executed very often. Trigger breakpoints (which disable all other breakpoints until they themselves are hit) are also useful in those situations without requiring any code.

> Once you do know where the problem is, and if it's not apparent what the problem is (most problems are pretty trivial once located), that's IMO the time to break out the debugger and slowly step through it. But the vast majority of problems are faster to solve through rapid iterative exploration with prints in my experience [...]

I can just say that I usually locate issues way faster with a debugger. "Rapid iterative exploration" could also kind of describe my workflow using breakpoints. Maybe it's actually less about the tool and more about your approach for locating issues in the code.


I often use prints to find the suspect, and then debugger to weed it out. Conditional breakpoints make it easy to stop at the correct place.

About your debugging from the beginning: with IntelliJ on the JVM one can "drop frame", which basically discards the current function and starts over with the stack as it was. Since I mostly write Kotlin, my objects are immutable, so rerunning most stuff actually works fine. And by hot-swapping the function while the debugger is paused, I can even try multiple implementations without having to rerun everything: just drop frame, hot swap, and step into the new and updated function.

I'd say knowing the debugger well and using it is a faster way to iterate than not.


> Speed of iteration beats quality of iteration.

Right. That’s why printf debugging sucks.

If you’re in a compiled language with a 2-minute iteration it can take an hour to do a binary search to track down an issue that would take 5 minutes with a proper step debugger.

Print debugging is great because it works and is the ultimate fallback. But it sucks and I hate when I am forced to use it.


> Speed of iteration beats quality of iteration.

That's especially true if you're doing some form of TDD/unit testing. With IntelliJ, I can easily set it to watch for changes and cycle one unit test while I make changes. If something weird happens I can just drop a printf in there, understand and rectify the issue, then take it out. Much faster than step-through debugging.


Good for simple stuff, but if you find yourself changing your print statements over and over, a breakpoint probably would have been faster


The Java debugger can easily drop frames, which helps tremendously in going over a function multiple times.

Hot code replacement, which I have used since 2005 and still use, works very well too.

In PHP, debuggers are half as good, but code replacement takes effect immediately.

I would rarely use print debugging, and in Java never.


Print debugging is the only way in a distributed system the way we are building micro services these days. We just call it logging.

Edit: ..and do it in production


Amen. “What do you use for debugging prod services?” “CloudWatch.”


Yes. The same applies to a lot of embedded systems. When you can't stop the system, you'd better learn how to debug from logs. And put the right logging support in place ahead of time: it may be impossible to replace the software in place with a new debug version, and then it's only the preexisting logs and adapting the log configuration that you have to work with.


It doesn't have to be that way, record-and-replay debugging can overcome this.


Completely agree.


Print debugging is not that different from setting the logging level to DEBUG, and those logging calls should already be there in the code and give meaningful insight, so I don't get printing being often ridiculed.
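
For example, in Python (a minimal sketch; the logger name, env var, and place_order function are made up):

    import logging
    import os

    # set the level once at startup, e.g. from an env var; flipping it to DEBUG
    # turns the existing calls into "print debugging" without touching the code
    logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
    log = logging.getLogger("orders")

    def place_order(order_id, total):
        # silent at INFO, but effectively a print statement at DEBUG
        log.debug("placing order id=%s total=%s", order_id, total)
        return total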

For over ten years of commercial work I used a debugger only a couple of times and in most cases it was against someone else's code, usually when things were completely broken and I needed to get backtraces from multiple deadlocked threads or lacked debugging symbols and things like radare were also required. There were also times when I manually called a syscall using gdb.

My opinion is that if you can't reason about the code helping yourself with just a couple of additional messages the code is probably broken/too complicated to begin with and requires serious refactoring. I've never understood people stepping through a program hoping to find some mysterious creature somewhere along a huge stack of calls. In my career I have often seen people always debugging an application as a whole instead of separated modules. Dividing a problem is the key. The same key that allows me to still program using vim without autocompletion, keep APIs sane and coherent, and avoid dead code.

One really useful exception is when dealing with electronics. My friends programming hardware use debuggers all the time and in this case it actually makes perfect sense because there is no way to print anything and things like hardware interrupts come into play.


> My opinion is that if you can't reason about the code helping yourself with just a couple of additional messages the code is probably broken/too complicated to begin with and requires serious refactoring. I've never understood people stepping through a program hoping to find some mysterious creature somewhere along a huge stack of calls. In my career I have often seen people always debugging an application as a whole instead of separated modules. Dividing a problem is the key. The same key that allows me to still program using vim without autocompletion, keep APIs sane and coherent, and avoid dead code.

The big thing here is that you seem to only work with your own code, where you can arbitrarily refactor it and keep the entire thing in your head, as well as quickly find which module does what. But when working with a large foreign project, none of this works. You have to start working at the scope of the entire program, because you have no idea of the internal structure yet. Of course, people who use debuggers divide the code up as they go, but the point here is that they place a few choice breakpoints at central points in the application logic, inspect the stacktraces when one gets hit, and use them to further dig in to the part of the code they need to look at.


> The big thing here is that you seem to only work with your own code, where you can arbitrarily refactor it and keep the entire thing in your head, as well as quickly find which module does what.

Not at all. Due to lack of documentation I look at the code of foreign libraries and applications all the time to check what really happens inside and what the guarantees are. The latter is often the case when it comes to concurrency problems.


> I used a debugger only a couple of times and in most cases it was against someone else's code

The vast majority of code I investigate is "someone else's" code. In most cases, it's a historical accumulation by multiple authors. If you generally only work in your own code, that's quite a different experience, and debugging is generally easier (because you were there when it was written).


That is not the point I was trying to make here. I was talking about extreme cases of faulty code which luckily are not that common. I meant that as long as you stick with sane libraries and adhere to standards in your own team, extreme measures such as a debugger are not necessary.


Actually, using the UART interface to send text breadcrumbs out the port is a standard technique in embedded, too ...

The article hits the point of print debugging: you get to see backward in time.

By the time you hit "the problem", the pointer is NULL, the memory is trashed, the system is deadlocked, etc. You need to reason about how you got there.

There is a reason why the next step up from basic debugging in embedded is "streaming trace" -- effectively print on steroids.


Memory watches let you see exactly when the pointer becomes null, and the call stack. In that particular case, the debugger is much faster and easier. Not saying print debugging is useless, but many of the arguments in these comments seem to assume that the primary feature of a debugger is to step through code.


I pretty much agree with all of these. One thing I wanted to add is decorators: there is code you might not have easy access to edit in order to add print statements. I don’t love the Spring Boot docs, and reading the code isn’t as useful as stepping through your specific autowired code tree. There are definitely use cases, but 95% of the time prints will get you there. IMO you should learn the debugger because it will save you a bunch of time and headache when you need it.


When I start to use a new server framework, I like to step thru the main loop, just to see how it works with system calls/listens/accepts/reads and how it dispatches up the stack. But for debugging, I like to a) make it reproducible, b) read the code, c) add logging to help with any deductions that b yields. (Sometimes I will just go as far as b if it's a simple bug.)


> I don't get printing being often ridiculed

I just told one of my co-workers last week that I was going to print-debug an issue. He paused for a moment before saying, "Uh, I can just debug this for you if you like."

So yeah, there's definitely some kind of stigma against print-debugging.


You’ve only used a debugger a couple of times in 10 years? Yikes.


Why "yikes"? Note that doesn't mean I don't know how to use eg. gdb outside or inside IDE. My debugging record proves my methods are quite efficient but there's something even more important - there are ways to systematically avoid the need to use a debugger by keeping projects' sanity level high.


I've never understood print debugging, at least in a web dev/nodejs context.

I don't begrudge people having their own approach to things, but almost universally when I see people use print debugging they seem to take quite a bit longer than just break pointing at the problem area.

If your code is in an unexpected state, it's much easier to hit a breakpoint, examine local values, and then backstep through the call stack to see what went wrong. I dare to say that in a single threaded context, it's almost objectively more effective.

With the alternative of using print lines, you basically need to map/model the state flow out in your head, which is prone to error (human working memory has limited capacity).

Is it not easier to directly see the problem rather than doing mental math to make assumptions about the problem? I can't see a case for that being more effective.

Most of the time I see people print debugging it seems to be because they haven't used the debugger much... either they aren't comfortable with it, or didn't bother to set it up, or see the mental mapping approach as more "mathematical/logical"... or something. Takes you back to the school days of solving algorithms on paper :)

That being said, for simple problems I've used print debugging myself (again, usually because I'm too lazy to set up the full debugger). Or for multithreaded contexts etc., where thinking it through can actually be more effective than looking directly at the problem (multiple contexts).


> almost universally when I see people use print debugging they seem to take quite a bit longer than just break pointing at the problem area.

Could it be... because they don't know where the problem area is yet? Which is what the original article and most comments in favor of print debugging say.


How can they not know? If you see unexpected behavior in some subsystem, it should be easy to pinpoint.

e.g. in a UI context, timeline shows wrong data. Start there and work backwards.

I can only imagine it's hard to pinpoint if the code is not factored well.


Probably the most interesting thing about development as a discipline is the near radio silence on how to debug.

There is a decided lack of academic success in engaging with debugging as an object that can be studied. There are few channels for learning about debugging as a stand-alone topic, and programmers don't often talk about debugging techniques in my experience.

For something that takes up the overwhelming bulk of a developer's time, the silence is in many ways deafening. It may be that nobody has a method superior to print debugging.


Debugging is impossibly difficult to teach. It's much closer to "how to solve an escape room" than it is to "how to build X".

Debugging requires a deep understanding of what you are doing and of your system. It's different every time. And while there's a general algorithm you can follow:

1. Guess what's wrong

2. Ask "How would I prove that's wrong?"

3. Try it

4. If bug found, fix, if not go back to 1

How would you teach that other than asking people to go solve a bunch of real world problems in real world systems for a few years?


I think this is the standard algorithm and it's absolutely terrible. People poke at things, which ends up giving a linear search across a possibly huge system. Even if the "guess" is intelligent, it's not like you can trust it. If you actually fully understood the system, you would know what's wrong and you wouldn't be debugging.

Do a bisect instead. The complexity is O(log n). It's probably slower than if you guess right on the very first time, but that's less important. Debugging time is dominated by the worst cases.

1. Do something you're 90% sure will work that's on the path towards your actual goal.

2. If it works, move forward in complexity towards your actual goal. Else, move halfway back to the last working thing.

3. When you've trapped the bug between working and non-working to the point that you understand it, stop.

"The weather data isn't getting logged to the text files. Can I ping the weather servers? Yes. Can I do a get on the report endpoint? Yes. Can I append to a text file? Yes. Can I append a line to the weather log file? No. Ok, that narrows it a lot."

The real point of this is that you should spend most of your time with working code, not non-working code. You methodically increment the difficulty of tasks. This is a much more pleasant experience than fucking around with code that just won't work and you don't know why. Most importantly, it completely avoids all those times you wasted hours chasing a bug because of a tiny assumption. It's sorta like TDD but without the massive test writing overhead.

A modification for the disciplined: give yourself one (1) free pass at just taking a stab at the answer. This saves time on easy fixes. "Oh, it must have been that the country setting in the config file is off." Give it a single check. And if it's not that, go back to the slow and steady mode. Cause you don't understand the system as well as you thought.
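
Concretely, the weather-log bisect above might look something like this (a Python sketch; the URL and paths are made up):

    import urllib.request

    # Can I reach the report endpoint at all?
    resp = urllib.request.urlopen("https://weather.example.com/report")
    print("report endpoint:", resp.status)

    # Can I append to *a* text file in that directory?
    with open("/var/log/weather/probe.txt", "a") as f:
        f.write("probe\n")
    print("scratch file: ok")

    # Can I append to *the* weather log file?  A PermissionError here narrows it a lot.
    with open("/var/log/weather/weather.log", "a") as f:
        f.write("probe\n")
    print("weather log: ok")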


You're basically describing the scientific method. Particularly the practical application of Occam's Razor: Starting from simple theories, and working your way up towards more complex ones until the theory is just complex enough to describe the system behavior you're trying to understand.


Your algorithm is for creating programs or adding new features, not really for debugging already broken stuff.

I would agree that going from less to more complexity is a great heuristic for making those guesses on what to check.


This is normally described as a way to write code, but it works for debugging if you can modify the system or at least give arbitrary inputs. It doesn’t really apply to “read only” debugging.


It feels like there are basic heuristics that too often people either don't know or forget to apply.

I recommend http://debuggingrules.com/ - it's a good book that lays out some rules that have always helped me. When people come to me for help debugging something, invariably they've skipped some of these concepts, and applying them usually gets to the bottom of things faster than randomly changing things (which seems to be a common, but ineffective, way to debug a problem).

    UNDERSTAND THE SYSTEM
    MAKE IT FAIL
    QUIT THINKING AND LOOK
    DIVIDE AND CONQUER
    CHANGE ONE THING AT A TIME
    KEEP AN AUDIT TRAIL
    CHECK THE PLUG
    GET A FRESH VIEW
    IF YOU DIDN’T FIX IT, IT AIN’T FIXED


This doesn't seem to be a different problem in kind than teaching people to write programs. How do we do that other than teaching them some mechanics (syntax, how to run the compiler) and then setting them a lot of exercises to gain experience?

The same seems to apply to debugging. A student needs to be introduced to the basic concepts and commands, and then practice. Just same as with a writing exercise, the instructor can have specific problems to practice specific techniques.


The one beginners often miss:

What output do you expect?

Ask yourself this before starting the debugger. Without it, it is very easy to glance right past the point where things go funny.


For a system of moderate complexity, there are myriad answers to question (1). A skilled programmer has the intuition to guess the most likely causes. This ranges from the mundane (e.g. recompile everything from scratch) to the occasional almost magical lucky guess which immediately leads to a solution.


I might suggest a few of the adventures of Sherlock Holmes.


So, having conveniently written a monograph on the exact topic that provides extra information, the extraordinary ability to know exactly which minor features of the situation are contradictory, and access to knowledge unavailable to the reader?


> 1. Guess what's wrong 2. Ask "How would I prove that's wrong?" 3. Try it 4. If bug found, fix, if not go back to 1

I think this is exactly right. Teaching 1 is impossible I guess but the general method (it's basically the scientific method in an ideal environment) seems teachable.

Come up with a hypothesis of what's wrong, try to prove or disprove the hypothesis.


My general approach is simply to ask: what changed?

This assumes that I'm working with an already working system, but if I'm implementing a new feature, debugging is easier since I have more flexibility to explore alternative approaches.


There's also:

1. Pick a spot in the code

2. Figure out what you expect the state to be there

3. Check and see that the actual state matches

You can kinda binary search your way to the location of a bug by looking in different places
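
For example (a tiny self-contained Python sketch of that kind of check; the data is made up):

    def check(label, actual, expected):
        # one line per chosen spot: what you expected vs. what you actually got
        print(f"{label}: got {actual!r}, expected {expected!r}")

    data = [3, 1, 2, 2]
    deduped = sorted(set(data))
    check("length after dedup", len(deduped), 3)  # matches -> the bug is further along
    check("max after dedup", max(deduped), 3)     # matches -> keep moving forward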


> For something that takes up the overwhelming bulk of a developer's time...

It isn't the bulk of my time.

Most of my time is spent figuring out what to do.

Once I have decided that, telling the computer is usually straightforward.

> Programmers don't often talk about debugging techniques in my experience.

No they don't, but look at it this way: Bugs are mistakes, and nobody wants to be told they're making too many mistakes. Anyone who discovers some amazing debugging technique will struggle to share it with anyone else for a lot of reasons, and this is one.

> It may be that nobody has a method superior to print debugging.

Print debugging always works. Getting "a debugger" to work isn't always easy, and if you aren't already comfortable using the debugger to track down the kind of bug you're facing, you will find it very difficult to find the bug and cure it faster than with print debugging. And since people don't tend to make the same mistakes over and over again, debuggers tend to have a very limited utility in those few mistakes made frequently.

My experience is that mistakes like that are the fault of some kind of fundamental misunderstanding, and rather than spend time to learn all of the different fundamental misunderstandings that the debugger was designed to work around, time is better spent simply correcting your misunderstandings.


It's one of those things that is acquired with experience and from the many peers one has had. I think I've learned a bit from all my peers; almost everyone had something interesting or an interesting way to tackle a problem. Once a lot of time is spent in the industry, one starts seeing patterns as a new cycle starts.

As far as teaching debugging, it is one thing to show some examples and another to run into a bug yourself and get from having no idea how to debug it to actually fixing it. That whole experience is hard to replicate in artificial ways. When I was in school they told me not to worry too much about debugging, that I'd run into issues in the real world and figure out ways to debug depending on the system, and that turned out to be quite correct.


I mean there are other types of debuggers (step-through, time travel[1]) but I agree with you that I have never seen any research on which is better/faster. It seems an obvious topic so it makes me suspect that it comes down to individual style.

[1] https://docs.microsoft.com/en-us/windows-hardware/drivers/de...


“ which is better/faster”

Because the answer is “it depends”. As you mentioned it’s a lot about individual style and preference. I have seen a lot of good coders that got things done but worked in totally different ways.


It's a very interesting point. I suspect it's hard to study because it's a very high level capability of our brains. If you understand how we debug, you understand a lot about how reasoning itself works. I did read a book on debugging as a general practice once, but it wasn't helpful, as it was mostly anecdotes and generalities.


ITT: a non-controversial opinion shared by most programmers.

Print debugging is fast in many cases and requires little mental overhead to get going.

But for some/many systems, there's a huge startup and cooldown time for their applications - and compiling in a print, deploying the service, and then running through the steps necessary to recreate a bug is a non-trivial exercise. Think remote debugging of a deployed system with a bug that requires select network and data states that are hard or impossible to replicate in local/dev.

For things like this, being able to isolate the exact point of breakage by stepping through deployed code, and doing immediate evaluation at various points to interrogate state can't be beat.

This post strikes me as either (a) a younger programmer who still thinks that tool choice is a war rather than different tools for different jobs (b) someone making a limp effort at stoking controversy for attention.


> I should emphatically mention: I’m not saying that print debugging is the best end state for debugging tools, far from it. I’m just saying that we should reflect deeply on why print debugging is so popular, beyond mere convenience, and incorporate those lessons into new tools.

I'm not sure what about the article makes you think either a or b. They are trying to critically examine why some people reach for print debugging first, and I think it's spot on.


Probably explains why java has such a rich set of logging and debugging tools. Startup time, plus the idea that printing to stderr/stdout doesn't help you figure out where that goes in many java environments :)


Or c) someone just making comments from observed experience. There's nothing special 'senior developers' have when it comes to 'having had to compile something that takes a while' - that's the purview of everyone, or at least of those who have worked on larger projects. And though remotely debugging code definitely happens, it's, in relative terms, very rare. This is just someone making a comment on their blog, that's it.


On the other hand, when you are working in an example like you are discussing (a service, or multiple services, which must all be deployed), it can be hard to figure out how to get the debugger attached.

It possibly depends on the kind of programming you do -- I find myself doing little bits of work on projects in many languages, so learning how to get the debugger going often takes longer than finding + fixing the bug.


In languages where you build a deeply nested call stack, advanced debugging looks more promising. But in simpler setups like ASP/PHP/JSP etc, simply printing works fine.


Almost all the reasons people use print debugging can be overcome by improving debuggers --- and to some extent already have been (in the words of William Gibson, the future is already here, it's just not evenly distributed yet). I think it's important for people to understand that the superiority of print debugging is contingent and, for many developers, will not persist.

Record-and-replay debuggers like rr [0] (disclaimer: I started and help maintain it), Undo, TTD, replay.io, etc address one set of problems. You don't have to stop the program; you can examine history without rerunning the program.

Pernosco [1] (disclaimer: also my baby) goes much further. Complaints about step debuggers (even record-and-replay debuggers) only showing you one point in time are absolutely right, so Pernosco implements omniscient debugging: we precompute all program states and implement some novel visualizations of how program state changes over time. One of our primary goals (mostly achieved, I think) is that developers should never feel the need to "step" to build up a mental picture of state evolution. One way we do this is by supporting a form of "interactive print debugging" [2].

Once you buy into omniscient debugging a world of riches opens to you. For example omniscient debuggers like Pernosco let you track dataflow backwards in time [3], a debugging superpower print debugging can't touch.

rr, Pernosco and similar tools can't be used by everyone yet. A lot of engineering work is required to support more languages and operating systems, lower overhead, etc. But it's important to keep in mind that the level of investment in these tools to date has been incredibly low, basically just a handful of startups and destitute open source projects. If the software industry took debugging seriously --- instead of just grumbling about the tools and reverting to print debugging (or, at best, building a polished implementation of the features debuggers have had since the 1980s) --- and invested accordingly we could make enormous strides.

[0] https://rr-project.org

[1] https://pernos.co/about/overview

[2] https://pernos.co/about/expressions

[3] https://pernos.co/about/dataflow


Post author here — just wanted to say I emphatically agree with this and have found your work on rr and Pernosco very inspiring! Anyone who hasn’t seen this work should check it out.


Thank you! I've recently started learning how to use rr and it's been amazing so far. I've written a tiny wrapper to parse cargo's output and run rr on the appropriate binary to reduce the friction a little[1]

I'd love to use pernosco, but it's too expensive for me. Do you have any sort of student discount?

[1]: https://crates.io/crates/cargo-rr


Use up your free sessions and if you want to use it more, email us at inquiries@pernos.co and we'll work something out.

Thanks for the cargo-rr crate, that looks nice.


I’ve said this before, but rr really is a superpower, and Pernosco doubly so. Definitely worth every penny.


> Almost all the reasons people use print debugging can be overcome by improving debuggers

speed and simplicity

https://www.youtube.com/watch?v=JXQZhyPK3Zw&t=1410s


These are important, but for larger projects "speed" is often not a feature of print debugging when you need multiple iterations of refining your logging statements.

"Simplicity", sure ... it's difficult to beat the simplicity of not using tools.


I recently discovered a Linux debugger & tool which allowed me to solve problems 10x faster than print statements: pernos.co (which is layered over Mozilla's rr time-travel debugger).

Pernosco's tool is described pretty well on their website, but basically it allows you to view a program inside and out, forwards /and/ backwards, with zero replay lag. Everything from stack traces to variable displays (at any point in time in your code execution) is extremely easy to view and understand. The best part is the lightning fast search functionality (again: zero lag).

On top of this: extraordinary customer service if anything breaks (in my experience, they fix bugs within 24 hours and are highly communicative).

If you value your time I highly recommend you check out this tool.


I had a crazy idea the other day that perhaps there could be something like "CSS for program execution traces". If you think of function identifiers as XML/HTML tags and arguments for individual function activations as element attributes, then perhaps something similar to CSS selectors but acting on the tree representation of a program's execution could trigger at certain clearly defined points during the execution and format some human-readable output of what the program was actually doing, or a "cross-section" of it at least.


Sounds a lot like syntactic sugar or a DSL for symbolic breakpoints combined with conditionals. That's certainly doable.

Something like: func1(4) > func2(null) debug;

Semantically: upon func1 called with arg 4 and some descending path that calls func2 with arg null, enter the debugger

Neat idea!


I got the idea when I was thinking about the applicability of computer algebra systems to math education. Some way of visualizing the decisions and steps of a logically complicated program seemed necessary for that. Getting a readable trace of the computation in a similar way to the one that some logical programs or expert systems can justify their reasoning with seemed like a usable form of such a visualization, and some time later then the analogy with CSS/XSLT struck me. I was thinking of collecting all the steps into an output, but setting breakpoints in a similar fashion with individual "selectors" could be useful for debugging, too.


The most underappreciated aspect of proper debuggers is not the code line of interest but the context they give you about the whole application, i.e. the stack frames and their state. When handed a new codebase I often fire up the debugger, attach, set various breakpoints in interesting places, and then execute the application to see where / when they get hit. It's a great way to learn a codebase - things that are hard to discover ("when is the database driver created and how does it know its password") just pop out, whereas you might have to spend ages working them out if you were just examining the source tree.


The problem with async programming nowadays is that stack traces become meaningless.


That depends on your tooling! Lots of async programming has debuggers that track what you might consider a synthetic, but more useful, backtrace.


In my experience, people who downplay debuggers don’t have the option to use effective debuggers. Debugging C++ and especially C# in Visual Studio is wonderful. Debugging Java in Eclipse can be great. Meanwhile GDB and most other language debuggers are painful and every IDE integration I’ve seen of them has been horribly unreliable.

I’ve heard there’s a culture in parts of Google where kids go through uni using GDB because “Woo Linux!” then go straight into Google where everyone is “Woo Linux!” (I do like Linux, btw) so they are either still using GDB, or more likely have given up on it and reverted to printf. So, everything takes forever to figure out and that’s just “normal”. This was coming from a console gamedev who was shocked by the transition after moving to Google.

Meanwhile, I’ve spent a good part of the past couple decades debugging large volumes of code that I will literally only see once ever. With a good debugger, that can be done effectively because watching and even modifying the code’s behavior can be done at a glance rather than a re-compile.

I’ve also worked on a very big project that used extensive logging because they had a very bad debugger setup and productivity was in the toilet compared to every other job I’ve had. The only way I could keep productive was to take the time to break out systems into small independent programs in my own environment so that I could use a debugger on that rather the run the code where it is.


I dunno. I was a C# dev for 7 years and exclusively used Visual Studio's debugger. Then went to a JRuby project which had abysmal debugger support at the time. Learned to use printf style and it's now been my go-to for the last 8 years. This despite coding in Node.js for the last 4, which has pretty good support. I only reach for the step-through debugger when the problem is tricky, mainly because of having to do the setup.


The Visual Studio debugger is great, but there are some limitations. Anything very serious is going to be multithreaded, and if you block a thread poking in the debugger, other things are going to start timing out and the real flow of the program is interrupted and impossible to reproduce.

Log heavily, and log systematically - imagine you're going to need to grep through days of logfiles to find the needle in the haystack - you will eventually. Build in runtime switches to dial log verbosity up and down. Err on the side of providing more context than less. If something throws exceptions, catch them, log exactly where it was, what it was supposed to be doing, and any relevant parameters or state.
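
A sketch of what that can look like (Python here for brevity; the worker/job names and the -v flag are made up):

    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action="count", default=0)
    args = parser.parse_args()

    # runtime switch to dial verbosity up and down: -v for INFO, -vv for DEBUG
    logging.basicConfig(level=max(logging.WARNING - 10 * args.verbose, logging.DEBUG))
    log = logging.getLogger("worker")

    def process(job_id, payload):
        log.debug("processing job=%s size=%d", job_id, len(payload))
        try:
            return payload.decode("utf-8")
        except UnicodeDecodeError:
            # log exactly where it failed, what it was doing, and the relevant state
            log.exception("decode failed for job=%s first_bytes=%r", job_id, payload[:16])
            raise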

If you can get them, process dump files are unreasonably effective, too.


> Debugging C++ and especially C# in Visual Studio is wonderful.

... and it's still a royal pain to get your program to display stuff on some text console when you want those printfs.

Also, you have to use Windows. I'd rather avoid that if I can.


I actually find GDB to be a fairly good debugger, but it takes a bit of work to translate what your IDE is doing into something you can do in GDB.


I feel like the author gets close to the point but fails to drive it home: step-through debugging is unbelievably cumbersome. During a typical step-through debugging session, 90% of the time is spent on lines you are completely not interested in. Oh, did you accidentally skip the important point because of how tedious it was to keep spamming step-over/step-in? Better start over again. With print debugging, you set up your print statements strategically and -zing-, you get your results back. Feedback loop shorter. 100% of the lines are the ones you are interested in, because you put the print statements there.

I'm still waiting for the feature where you can conditionally stop at some breakpoint -only- if some other breakpoint/watchpoint was crossed over. It's not a conditional breakpoint, because conditional breakpoints can only watch variables, not other breakpoints. You could of course set some variable depending on whether some section was entered and then conditionally break based on that variable. But then you're back to print debugging land, having to manually insert code in order to debug the program.

Debuggers are superior when it comes to interrogating the exact state of some variables, as well as the decision paths the program takes. For anything simpler, print debugging simply offers the better developer experience.


> I'm still waiting for the feature where you can conditionally stop at some breakpoint -only- if some other breakpoint/watchpoint was crossed over.

PyCharm 2021.1 has this, so I would guess that other members of the IntelliJ family probably have it too.

Set a breakpoint and then right-click the red dot, and click More to open the full Breakpoints dialog. Open the drop-down under "Disable until hitting the following breakpoint:" and select the other breakpoint that should enable this one.

And thank you for mentioning this! I didn't know PyCharm had this feature until I took a look after seeing your comment. This will be super useful.


Why would I spend any time stepping through lines I'm not interested in? I set breakpoints on the important parts and let the program run until a breakpoint is hit.

I stop at a breakpoint only after another breakpoint is hit all the time. You set the first breakpoint and run the program. It gets hit and pauses, you set the second breakpoint, then resume.

I'm just not getting how print debugging is the better experience.


>During a typical step-through debugging session, 90% of the time is spent on lines you are completely not interested in. Oh, did you accidentally skip the important point because of how tedious it was to keep spamming step-over/step-in? Better start over again.

No offense, but this sounds like you just really need to learn how to use a debugger - This is in no way a "typical step-through debugging session." I've been a professional software developer for 16 years and I've never once in my life "spamm[ed] step-over/step-in"

>I'm still waiting for the feature where you can conditionally stop at some breakpoint -only- if some other breakpoint/watchpoint was crossed over.

This is trivial to do: place two breakpoints and disable one. When the first breakpoint is hit, enable the second (and optionally, disable the first).


With lldb you can do that: you have the option of running commands when a given breakpoint is hit, so you can just make it place another breakpoint, which will then exist only if the first breakpoint was hit. I assume you can do something like this in gdb as well.


Another aspect where printf debugging can be better than debuggers is use cases where timing is relevant. Some bugs don't occur when breakpoints stop the program at certain points in time. For completeness it should be added that there are also cases where the printf can change the performance and make it impossible to find a bug.

I think the two methods are complementary and should be used in combination.

However, the big issue is that basic printf debugging is very simple to use and debuggers have a steeper learning curve in the beginning. Therefore, people start using printf debugging and don't invest into learning how to use debuggers. And when developers don't invest into learning how to use debuggers properly, they are missing the skills to utilize them and still use printf debugging in cases when debuggers are clearly superior.


Debuggers don't have to halt execution on hitting a breakpoint. They can do other things, like print the contents of memory, letting the host system handle the formatting. They're actually usually better for timing-sensitive prints than printf debugging.

That said, most people don't know this is possible (the learning curve issue you mentioned), even though it's an important part of how to use debuggers!
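
For instance, gdb's Python scripting lets you define a breakpoint that logs and keeps going instead of halting (a sketch meant to be source'd inside gdb; the file, line, and variable names are made up):

    import gdb  # only available inside gdb's embedded Python

    class TraceBreakpoint(gdb.Breakpoint):
        def stop(self):
            # log a value and the location, then let the program keep running
            val = gdb.parse_and_eval("buffer_len")
            gdb.write("packet.c:142 buffer_len = %s\n" % val)
            return False  # False tells gdb not to halt at this breakpoint

    TraceBreakpoint("packet.c:142")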


In my experience, GDB running commands on a breakpoint is generally much, much slower and more prone to materially changing the timing of things than printf.


If you have a timing related issue fixed by a debugger it's probably going to be fixed by printing/logging too.


Not at all. Stepping through a function with a debugger usually takes on the order of seconds and minutes. Printing some stuff in the function usually takes on the order of microseconds or milliseconds. That's a difference of at least three, possibly eight or nine orders of magnitude. It's very easy to imagine a timing-related issue that's affected by a delay of X seconds, but not a delay of X microseconds (RPC calls are typically on the order of milliseconds, for instance).


“I think the two methods are complementary and should be used in combination”

This should be repeated many times. I am getting very tired of the constant need of people who want to have a strict ideology and find the one true way of doing things.


There are two separate questions: whether you want to see some kind of trace of the program or you want to step around in its state, and whether to use a "real" debugger or not.

In most cases I prefer to do something trace-based, and in the IDEs I've used the debuggers have much weaker support for that than they do for stepping around.

In particular, setting up tracepoints tends to involve fiddly dialog boxes which are much less convenient than using the main text-editor interface to say what you want.

I think there's plenty of scope for debuggers to provide a better interface for trace-style debugging. For example I'd like to be able to toggle a tracepoint after capturing the run, and have the lines it created appear or disappear, or add a filter expression or additional information to display without having to rerun the program.


The Qt Creator debugger fails on me constantly; it's 2021 and the leading C++ platform is still completely unreliable in that many cases.

That's why 'I must' use print debugging, because the 'powers that be' still provide a broken, half-baked solution 30 years in.

Print debugging is however so powerful, I think there almost should be a mechanism built into languages and tooling around it so that it becomes part of the process instead of a 'kind of workaround'. It's something we all do, constantly, and yet you'll never hear about it when people are arguing about Rust or Go.


This kind of thing actually got added to a Microcontroller and its native SPIN language.

There is "SEND" which can be used as an ad hoc comms channel, aimed at a program method.

And a debug system that can both stream data, text and graphics to a client, as well as capture and report on the state of the 8 CPU cores possibly running.

https://www.parallax.com/propeller-2-graphical-debug-tools-i...

The debug output is something like using an xterm with Tektronix emulation turned on, and with all the lower level bits packaged away. The user can do a lot, from a "print" type operation to sophisticated graphics, static or animated.

On the capture side, a sort of supervisor region of RAM is reserved for an ISR to capture processor state, or anything in memory really. It can be time- or event-driven.


Have you tried Clion?


Well I just tried it and thanks for reminding me it was an option. I like it so far; we'll see how the debugger works, but IDEs cannot rid us of the cobwebs of arcane languages. C/C++ are, I think, the worst: the number of pitfalls and the amount of needless complexity are byzantine and an enormous strain on mental energy.


I think it has most to do with the way the user thinks.

I need to see the big picture, the whole state, all the stuff, and rapidly jump back and forth. I also, supposedly, have the ability to keep a lot of state / scope / abstraction in my head. So I find print debugging sufficient and fast. I rarely encounter a situation where I feel the need for a "stronger" tool.

Where other people focus on one thing, all that simultaneous output is just noise and distraction to them. And based on the continued use and popularity of step-based debuggers, these people are much more productive (and happier) using those type of tools.

It's very important to understand that neither system is inherently superior, although one or the other is superior for each individual. [BTW, over 35 years in the tech industry / software development I've found this to be true for many subjects - all the ones that have internal debates in techdom: tools/paradigms are not universally superior, but are superior based on the individual.]


printf debugging always has a place, but for some reason, I find the debugging experience to be worse than it was 20 years ago. Tools like Visual Studio still have great debuggers, but I haven't noticed significant improvement since the early days, and newer toolchains are worse.

A couple of years ago, I had to maintain a bit of Java code using Eclipse. That is, the old IDE everyone loves to hate. And while some of that hate is well deserved, for debugging, it was the most pleasant experience I had in a long time. Nice object inspector, edit-and-continue, conditional breakpoints, and step-by-step that works. Much better than fumbling around with GDB or one of its less-than-perfect UIs.

Also note that printf debugging and the step-by-step and breakpoint kind are not mutually exclusive. With an edit-and-continue feature, you can get the best of both worlds, but that's not something common these days, unfortunately.


Maybe it was because I was exposed to it early in my career, but I have yet to find anything that rivals Visual Studio debugging, either from a "just works" perspective or in its ability to deep-dive into gnarly memory corruption (memory windows, robust watch windows, and data breakpoints).


The beauty of printf debugging for a novice C programmer is that recompiling with printfs changes the memory layout so your buffer overflow no longer segfaults you.

Alternatively, your printf can use the wrong format string and cause unrelated crashes. Such joy!

Makes me nostalgic for the good old days.


> Alternatively, your printf can use the wrong format string and cause unrelated crashes. Such joy!

What compiler are you using? Aztec C? Prehistoric C?


I agree about being able to see the whole program execution. This is particularly useful for multithreaded code since it provides a linear view into how the program actually executed. How are you supposed to figure out that A happened before B in a multithreaded program using only a debugger? With adequate logging, even if you don't log the precise times for A and B, you can often infer the ordering of these events based on other logged data.

For a lot of glue type code, I don't actually care about stepping through something line by line. I really want to see how components interact, not each step of execution. Though I do wish languages had better support for doing something like printing out all local variables in the current function along with the stack trace, sort of like a very shallow, low-cost dump.
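
That shallow dump is mostly achievable in Python already - a sketch (sys._getframe is CPython-specific, and the handler here is made up):

    import sys
    import traceback

    def dump_here(label=""):
        # a shallow, low-cost "dump": the caller's locals plus a short stack trace
        print(label, sys._getframe(1).f_locals)
        print("".join(traceback.format_stack(limit=8)))

    def handle_message(msg_id, payload):
        total = len(payload)
        dump_here(f"[dbg] before dispatch of {msg_id}")
        return total

    handle_message("m-1", b"hello")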

Another big advantage is that logging is usually much easier to turn on (or even keep on by default) for production scenarios. Good luck getting some bank to let you run a debugger or even get a dump for anything.


> How are you supposed to figure out that A happened before B in a multithreaded program using only a debugger?

Setting printpoints, letting them be hit and continuing... this whole thread seems to arise from the fact people have not learned to use debuggers.


> How are you supposed to figure out that A happened before B in a multithreaded program using only a debugger?

Breakpoint at A, breakpoint at B, both automatically continue when hit.


Both suck. With a debugger, you need to set up a debugger and step through (and often, they don't work quite as well as you hope). With print debugging, you need to add the print statements.

In both, you can't retroactively debug already executed code.

This is one of the areas where I'm really proud of what we did in Dark. In Dark (https://darklang.com), all execution is traced and you can see the value of any expression on any trace by putting your cursor in the expression. Advantages:

- no struggle to reproduce the error

- no need to set up a debugger

- no need to add print statements

When I write Dark, I can debug in seconds. When I work on the Dark implementation (F# or ReScript), I spend at least minutes on each bug because I need to do a bunch of setup to find enough information to diagnose the error.


A few more reasons why print debugging is used. If you are debugging multiple things at once, you’ll have breakpoints set that aren’t necessarily needed at the moment, meaning you have to continue a bunch of times to get to the right spot. Or your breakpoint needs to be in a loop that is called multiple times and conditional breakpoints are a pain and subject to code errors in the condition itself. Many debuggers are not great at examining state of objects, for instance a deeply nested object for which you want array index 42 within a dictionary of an object. Or you need to see a value that is calculated rather than just present in the current state.


> you’ll have breakpoints set that aren’t necessarily needed at the moment, meaning you have to continue a bunch of times to get to the right spot.

Python: if something: breakpoint()

Js: if (something) debugger;

Much easier than breakpoint conditions in visual debuggers imho.


That runs the risk of forgetting to remove it.


If you don't read your commits before pushing or even merging them... But I use `git add -p` and `git checkout -p`, which work well against this.


The idea that print debugging is about being able to understand the time dimension of your code resonates, definitely. It reminded me of how the redux dev tools browser plug-in is an interesting pointer to a better kind of debugging. And essentially all that is is a rich UI around printing out the entire redux state after each operation. But because the redux state advances in discrete steps it’s very easy to express exactly what happened, and explore precisely what state change happened in response to each action. I do find myself wondering whether there’s a much richer debugging capability along those lines that could be applied more generally.


I have never spent much time learning debuggers honestly. I'm not sure if what I want exists:

I would love to have a debugger that offers a partial text editor experience, eg. it shows my code, I move the cursor to some statement, then I press some key binding and the debugger starts printing (in another window) all the state changes in that statement. Another key binding prints all the state changes in the entire function, etc. All of this while the program is running.

Are there debuggers that can do this? I have used gdb in the past, but having to set up breakpoints by hand and remembering names makes it too tedious.


Yes, it's called pernosco, and it's quite remarkable. However, it works from a recording of your program.

https://www.pernos.co


I think there's a conflation of processes and tools which leads to the false comparison. Print debugging is a process, which uses a tool called print statements. Stepping through code is a process, which uses a tool called the debugger.

Print debugging excels at triaging the problem. And every language has print statements. Ubiquitous first tier support. They help you narrow down where your assumptions about the program behavior may be wrong.

Once you know what area to focus on, you pull out the debugger and step thru the code.


For Python: I specifically recommend https://github.com/zestyping/q a lot, which is like print debugging on steroids:

  All output goes to /tmp/q (or on Windows, to $HOME/tmp/q). You can watch the output with this shell command while your program is running:

  tail -f /tmp/q
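
Rough usage, as I understand the project's README (the functions here are just illustrative):

    import q

    @q                   # trace this function's arguments and return value
    def parse(line):
        return line.split(",")

    def handle(line):
        q(line)          # log the value to /tmp/q instead of cluttering stdout
        return parse(line)

    handle("a,b,c")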


I find that the greater majority of the time there are better tools to solve a problem than print statements, even when considering the fact that a project may need to be refactored to be debuggable.

If I have a bug I can reproduce, I can write a unit or integration test, try narrowing down the issue, and use a debugger on the test itself for further help. IntelliJ has great support here, VS as well, and there are plenty of others.

If the bug exists only in production, I can use a debugger to connect to it remotely and dump the state (thread dumps in Java or core dumps with Delve for Go). If there's an option of using a profiler, it makes the experience even better, especially for diagnosing performance issues.

For distributed systems, monitoring libraries and log aggregators are much more useful than raw logs. Proper metrics allow fast pinpointing of issues, and log aggregators give me an option to look for rare/common errors easily.

The only case I'd resort to prints nowadays is as a last resort if there are no better options.


I stopped doing step debugging altogether many years ago. For me it looks the same as visual vs. text programming. Text and text search tools are just miles ahead of clicking buttons.


There's always gdb/lldb from the command line.


So (for instance) in PowerShell my code breaks right as it is about to fail, with the state intact (ErrorActionPreference Break) which allows me to effectively fix whatever problem is about to occur and immediately have the state at the time of failure.

I don't understand how printing text could EVER approach this, given I can test my assumptions right away and generally only need 1 error to happen to understand the totality of the circumstances.


The Python package PySnooper is pretty good for "fancy" print debug statements: https://github.com/cool-RR/pysnooper

I've caught quite a few bugs using this show-me-all-locals() approach...
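
A rough sketch of the decorator usage (assuming pysnooper is installed; the wrapped function gets every executed line and every local variable change printed):

  import pysnooper

  @pysnooper.snoop()             # or pysnooper.snoop('/tmp/debug.log') to write to a file
  def average(numbers):
      total = 0
      for n in numbers:
          total += n
      return total / len(numbers)

  average([3, 5, 10])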


Looks equivalent to python -m trace --trace, which is part of standard installation.


I didn't know about this option, and I just tried it. It seems to be waaaaay more verbose, but I guess it can be tweaked/customized. I tried it on a command line script that uses google libs and I'm still waiting for stdin to come back...

What I liked about pysnooper is the ability to snoop on a specific function and/or code block to focus on the work-in-progress section of code.


This looks pretty neat indeed, thanks!


Personally the biggest obstacle to using a debugger is that it cannot be automated easily. You have to be present when it's triggered. You have to navigate it manually. When the program crashes again, you have to repeat the process. I know some debuggers can be automated, but then you'll have to debug the debugger script.

Logging is much nicer because you can turn the exploration process into a text analysis problem. Logs can be searched, stored and compared. For me, sifting the log is much easier.

Whenever I try to write a medium-sized program for any serious kind of purpose, the first thing I do is set up a nice and reliable logging system. This is a decision that you won't regret for the rest of development.
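
For what it's worth, a minimal sketch of the kind of setup I mean, in Python (module-level loggers everywhere, one basicConfig call at the entry point, so verbosity can be flipped without touching call sites; names are just placeholders):

  import logging

  logger = logging.getLogger(__name__)

  def transfer(amount, account):
      logger.debug("transferring %s to %s", amount, account)
      logger.info("transfer complete")

  if __name__ == "__main__":
      logging.basicConfig(
          level=logging.DEBUG,    # flip to INFO/WARNING once things are stable
          format="%(asctime)s %(levelname)s %(name)s: %(message)s",
      )
      transfer(100, "savings")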

I would argue that the use case of a debugger is much narrower than logging/printf debugging.


I usually only do print debugging when I encounter a Heisenbug. I mainly develop in Java; maybe my choice is related to its really, really great debugging tooling.


The worst is when you put print statements in and a bug goes away, and you realize there's some kind of instruction reordering bullshit at work.


A simple toString() with side effects can be insidious when mixed with other bugs...


I find it mildly disturbing that so many comments are saying "But breakpoints!"

One would assume that anybody who used a debugger for more than a day knows about breakpoints. TFA isn't saying you have to step through every line in a debugger.

It's saying that, even if you employ your amazing debugging skill to find exactly the point you want to look at, you will only be looking at that exact point in execution, and not other points at the same time. Sure, it will be a very detailed representation of that particular point, which can be extremely handy, but sometimes you want to look at a hundred different points in execution, at once. That's when printf comes handy - you just need a large monitor (or a small font and good eyes).


Then I would say that logging is useful. But stack traces in debuggers, conditional break points and stopping at exceptions and the like are the best.


Interesting hypothesis.

I think a big part of the issue is that printf debugging has always been "good enough" for me. I have used gdb in the past, but I've never felt the incentive to become good at it, so my knowledge of it atrophies and it has become a less interesting option over time. On the other hand, my knowledge of how to printf messages and extract them from the running process never atrophies because I do exactly that every day.

So maybe the situation changes if ever I come across a bug that's so mindbogglingly convoluted that printf debugging is not viable. Then I'll be forced to learn to use a step debugger well, and that could change my choice of tools going forward.


Personally, using a good debugger and knowing how to use it have been more useful to me than anything else. I mainly code in C and C++, and the Visual Studio integrated debugger and GDB are my main debuggers (depending on what I'm doing).

For me it is faster to double-click the margin of a line in VS, or to write "break 123" or "break fooFunction" in GDB, and then step and watch how some values change, than to keep adding and removing "printf" lines.

Adding asserts is another thing. They are always good, and often necessary to find some "Heisenbugs".

In other languages I probably won't think the same, but I haven't done anything big enough outside C or C++ to give a proper opinion.


You call it print debugging, I call it a powerful experimental framework for testing hypotheses about the behaviour of the program.


> I do want to point out that print debugging has one critical feature that most step-based debuggers don’t have: you can see program state from multiple time steps all at once.

At Google, we have time-traveling debuggers neatly integrated into our cloud IDE: You can step forwards and backwards, you can inspect variables for all the values they've had or will have until program termination, and you can also see all invocations of methods (along with their parameters) that have happened or will happen.

I still use logging for debugging. Cool tech aside, I think what you really need, above everything else, is the fastest possible iteration cycles.


I disagree slightly with the emphasis on "print debugging". I think what is missing is a body of theory around logging as a methodology. When I write code, I like to be able to look at the log file and "see" what the code is doing, when on DEBUG or higher. I think logging is a difficult but very important skill, and one which we are losing over time. If anyone is aware of any good books on logging (even if very old), do let me know. Seems like "logging theory" is a missing subject in Software Engineering.

I also don't see any contradiction between liking good logs and using the debugger when needed.


Absolutely.

When people say they use 'print statements,' are they talking about log points (a debugger construct), logging or something else?

I should hope that, in most cases, they're not literally modifying their source code to achieve this. While there are a handful of scenarios in which this is necessary, on the whole it strikes me as inefficient, time-consuming and error-prone. In most environments, there are better ways to make this data observable.


I'd say that, except for some heavily multithreaded cases, the print approach may come down to a lack of mature tooling.

I can't understand why anyone would prefer to write some print when you can have Visual Studio's

* breakpoints

* conditional breakpoints

* ability to place more breakpoints while you're already stopped at another

* expression evaluation on the fly!!

* decent ability to modify code on the fly

I still remember a case where I modified a function that had a line with bad SQL (breakpoint after executing that SQL), added a call to the same function with the same parameters after the breakpoint, let it execute again, caught the breakpoint once again and removed that call to itself

and all of that without recompiling the program! It felt like magic


Print statements let you see many things at once. Breakpoint only breaks at one thing. They are good for different things.


> Breakpoint only breaks at one thing.

One breakpoint breaks at one thing, that's why you can have many of them


But it only stops at one thing at a time. With print debugging I can print 100 things in different places and look at all of them at once, giving me a temporal overview; I can't do that in a debugger.


Debuggers are next to useless when dealing with today's distributed systems, all operating asynchronously in parallel. For the kind of bugs (race conditions, corner cases) that aren't easily caught by compilers, linters, unit tests or code review (in other words, the "Heisenbugs" that can stop a release in its tracks), aggressive logging is the only tool I've ever seen that is useful in-the-wild.

I would put forward that proficiency with this style of debugging (closely related to useful performance profiling) is a major factor separating mediocre programmers from the quasi-mythical 10X rockstars.


Print debugging is useful for the same reason backtraces are useful: both allow you to see what happened in the past, which is usually where the problem you're trying to fix actually happened.
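
In Python, for instance, you can combine the two and print a stack trace at any point without raising anything, which answers "how did we even get here?" alongside the usual value dumps (a small sketch, names are placeholders):

  import traceback

  def handler(event):
      if event is None:                    # the suspicious case we want to understand
          print("handler got None; how did we get here?")
          traceback.print_stack()          # dumps the call stack, no exception needed

  def process(events):
      for e in events:
          handler(e)

  process(["click", None, "scroll"])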


This is pretty much the only way I debug:

- cross language/platform

- forces you to come up with hypotheses up front, then test them very systematically

- you know what you are doing

- the debugger doesn't interfere

- works across threads and processes


Besides "behaviour in time", print debugging is effective because it's typically an extract of the most interesting for programmer things. I have a debugger window open this very moment and I can see about a hundred lines with various information about one structure, but I'm interested only in two (and I have to rerun this several times, because state got corrupted somewhere earlier).


One thing I haven’t seen mentioned here yet: I use print debugging all the time in Haskell, and find it works really well there compared to other languages. There’s a couple of reasons for this, I think:

• Nearly everything is immutable, so once I print the value of an expression I know it won’t change in the future. This is not the case in other programming languages, where a variable can be mutated after I print it.

• The base library provides a really nice range of functions for print debugging [0] — so I can just wrap any expression I want printed in ‘traceShowId’, and it’ll get printed. (Yes, these functions break purity; that’s why the module is marked ‘Debug’!)

Of course, sometimes print debugging isn’t sufficient, in which case I fire up the GHCi stepper debugger. But for the vast majority of cases print debugging works well.

[0] https://hackage.haskell.org/package/base-4.15.0.0/docs/Debug...


There is a dimension that gets overlooked in these discussions: tests. Every bug should start with the mindset of creating a new test: unit, integration, or end-to-end. These are regression tests. Whether the test is actually needed is a decision that falls out of the bug fix.

There is a distinct difference between the skill of debugging and the skill of writing tests. I focus most of my efforts on writing test code. Someday perhaps IDEs will be the test platform for all the test types; that's not today though. The question in my mind is not print debugging versus the IDE, but test-code debugging versus ad-hoc debugging. IDEs encourage ad-hoc debugging because once a bug is fixed, the test code still needs to be written from the ground up, a step that is often left out due to time limits. I debug in test code, and when the debugging is done the test is written. This applies to new code as well, and mirrors the pair-programming notion of starting new development with test code.


The limitation of print incentivizes me to write smaller functions and code that is generally free of mutations, so traces don't go stale fast.

Debugging, on the other hand... well, I've just been told by my senior to write bigger functions, because the line-by-line debugging tool jumps around too much when moving from function to function.


(I have already replied to another comment with the same suggestion.)

Pernosco offers the best of both worlds (debugger, print), along with a few magical features.

https://www.pernos.co

With it you can print anything present in your recording, step, and do anything you'd do in a regular debugging session.


It's good to see that they hope to provide « the best printf debugging experience you've ever had », but I'm disappointed at the UI they show on the "Condition and print expressions" page.

The video shows the user using their mouse and typing the expression to be printed into a tiny text input.

Part of the attraction of print-style debugging is the convenience of being able to use your main editor UI, along with all the conveniences that provides, to write that expression.

(That might be fancy completion or vi-style editing commands or keyboard macros; it will be different for different programmers.)


I suggest you tell them! They'll probably be happy to get some feedback.

If I'm honest, in spite of it not being perfect, it's much better than regular printf. The other very nice feature is dataflow (click on a variable value, and it tells you where it comes from, and it handles copies seamlessly), which makes a large number of debugging tasks trivial.


A lot of debuggers will also print/log and can even inject those statements into a running app where hot reloading manual print statements would otherwise not work.

From there, there are situations where a debugger will save a LOT of time. I'm thinking of trying to figure out what's causing a behavior in a large dependency-injected application with plugins when you have little to no familiarity with all the code involved. And then of course all the other things a debugger can do for you.

> Clearly Real Debuggers offer a superior experience to print debugging in so many ways. But print debugging is just easier to get started with, and it reliably works anywhere, so that’s why we use print debugging so much.

I think the tone of the first sentence and the word "superior" unnecessarily create a strawman.


I've not figured out a way to effectively debug a distributed system except via printf. Debuggers are basically a nonstarter, because stopping one component to inspect it almost always triggers knock-on effects in other components that change the overall state of the system.


I'm working on a Swift interpreter and the codebase is fairly difficult to debug. There are a lot of reused bits, so if you put a debug point somewhere trying to capture one behavior, odds are that that line will run 10 times for other work before the relevant part uses it.

So I tend to write a LOT of print statements that flush debug variables right before where I want to debug. Then I set a conditional breakpoint so that I can have the logs "stop" right where I want the program to.

Example:

  // debug print
  let someValueICareAbout = variable...
  print(someValueICareAbout)
  print("")  // <- conditional debug point here: "if someValueICareAbout == 3"

I think it's technically still "print debugging", because I'm only using the debugger to stop the program so I get a chance to read my output.


Why not just add an action to the conditional breakpoint that prints the value?


I’m usually constructing those values just for that one-off test. They aren’t there at runtime.

It’s an AST interpreter so sometimes I want to see the value of something that’s 5 properties away inside a syntax node


I hate coding in an environment that does not easily support step-wise debugging. And yet, I use printf 10x-100x more frequently. Printf actually makes you do some thinking, and write a little bit of code to conduct an experiment that will hopefully tell you in one shot, on a simple run, what the problem is. Step-wise debugging forces you to think about the problem too, but then you have to go through carefully and carry a lot of mental load at each "next step" press to figure it out.

That being said, there's almost no good reason for a platform to not support step-wise debugging, so it's a big code smell that you're going to have a bad time in general there (even if in practice you'd largely use printf anyway).


There are environments where printf is not possible - e.g. MCU development. For instance, if the code breaks before the serial port is set up, printf won't work at all.


You can always just send the output to a block of memory that is reserved for debugging, and dump out that block when necessary.


If you think print debugging is "unreasonably effective", it's probably because you have a shitty debugger.

Try Visual Studio under Windows. Go on, try it. You'll be surprised at just how stone-knives-and-bearskins the standard tools on Linux really are.


Print debugging is great but you'll pry the IntelliJ debugger from my cold dead hands.


Completely agree. When implementing new functionality in my Kotlin Spring Boot apps, I find the debugger crucial for fixing any exception that isn’t immediately clear. I’ll simply rerun my test with a breakpoint on the failing line, peruse the values of local variables (often spelunking deep into nested objects), and test theories with the window that lets me evaluate arbitrary expressions. Occasionally, I’ll change the value of a local variable and let the program continue to see if that value would fix the issue.

It’s a workflow that makes “Wait, why did that happen” such an easy question to answer.


For me this isn't an either/or.

I constantly use both together. For problems that are quickly and reliably reproducible I'll often just use the debugger (if rr is suitable, even better).

But there are plenty of problems that take a while to reproduce, involve many threads/processes, etc., where the initial set of potential issues is too wide to easily target with a debugger. There, sprinkling printfs around can provide data at a lower overhead than is doable with a debugger.

Just yesterday I was debugging something where rr couldn't finish, within an hour, replaying a workload that originally takes 10s (loads of IO). Switching to print debugging I pinpointed the issue in < 10 min.


This is why products like OzCode for Visual Studio[0] are interesting. With their ability to put in a breakpoint and see multiple variables' values instantly and "time travel" (i.e. a limited step back through the logic), it kind of gives you the print debugging benefits in regular debugging.

I've not seen anyone else try anything like this. There's a YouTube demo here:

https://youtu.be/82jq5cvl67E?t=1561

[0] https://oz-code.com/ozcode-production-debugger


IDEs are amber, and the effort put into them is thrown away. All the breakpoints, data integration and whatnot are thrown away after the bug is fixed. Further, this IDE effort is not shared between developers. Log debugging is reusable out of the gate: a debugging log statement can easily be promoted to a production statement if deemed important. It seems to me that IDE developers need to work on how all the time and energy developers spend on a bug can be generalized to the point that these things can be shipped with the code itself. Until then it is throwaway work trapped in the amber of the IDE.


Recently I tried out rr, the time travelling debugger. It blew my mind. I never imagined you could just run until an assertion fails, set a breakpoint on the variable the assertion checks, and then run backwards until the last time the variable was modified.

Shameless plug: If you're writing rust I wrote a tiny wrapper that finds the appropriate binaries and provides the right config to make it as easy as `cargo rr test my_test`. https://crates.io/crates/cargo-rr


I don't get the usefulness/effectiveness of print debugging. I work in Ruby and JavaScript and I find it much more efficient to know the whole state of the world and the objects in it at a certain place, because I generally know where the problem may be. For example I use pry in Ruby and the debugger; statement in JS.

Maybe it is just the way my brain works? I'd rather stop and see what I need behind a condition than have to filter through a lot of possibly unformatted console output.


I learned early on with an expensive microprocessor emulator: just have the code raise the voltage on an IO pin as a print debugger rather than spend days debugging the emulator.


Print debugging is basically variable watch points, but IMHO easier.

The only time to really beware is embedded and real time systems where printing can throw timing way off or cause other side effects.

I heard of a case once where printing via JTAG caused an issue due to the power draw of sending all the extra data out. But that was trying to debug a novel board design and its software at once.

You won’t hit that kind of thing on normal computers like desktop, mobile, or cloud unless you are writing drivers.


This should be a non-debate.

A debugger is for when you want to inspect local state in detail. That can indeed often be very useful, and they are sophisticated technology.

However, the people who think that a debugger is the only way to debug just aren't good programmers: often you want a picture of the overall behavior of your program. As has been said by someone other than me, a debugger allows you to fix a bug; print statements allow you to think about the right fix for a bug.


Perhaps the people using a debugger have it set so it can give them a picture of the overall behavior of your program.


I think that's just playing with definitions. Correct me if you think this is wrong, but for most people in this thread:

"debugger" := a thing that pauses and allows you to inspect the local stack frame, and step into/over, evaluate code in the frame context, etc.

"print statements" := any technique involving letting your program run to completion and having it output debugging information to screen or file for examination after it has finished.


One of the things a debugger can do is print things once it hits a breakpoint and continue execution automatically.


So I'd be curious. I usually work in scripting languages: Bash, Ruby, JS (bleh), and a bit of Python.

Sometimes I do some Java work though, and I usually end up going back to print debugging, because otherwise I'm either trying to figure out all the Java logging frameworks or ending up 40 layers deep in some magic framework dependency that is intercepting my code, which is what always happens when I use a debugger.

That being said, do those who work in compiled languages make more heavy use of debuggers?


I wonder if C# is a bit of an outlier here. I work mainly in C# but also sometimes do Type/JavaScript. While I've got the VS Code debugger running for JavaScript projects I'll rarely use it and generally use print debugging.

In C# I'd almost never use print debugging and the whole thing seems ridiculously antiquated (it's partly why I hate JS work). [Assuming you're working in Visual Studio...] You literally hit 1 key, F5 and then you can step through, time travel, edit and continue. I wonder if people just haven't experienced the ease of debugging in .NET with VS. I'd say I write probably 50% of my code in a debugging session, edit and continue is a game changer.

I did a little Java work in Intellij and it was similar but I think partly due to a lack of familiarity with the UI didn't feel quite as powerful.


> do those who work in compiled languages make more heavy use of debuggers?

I work in both quite a bit. I actually think I end up using a debugger more in e.g. Python because I'm more often asking questions like "what is the type of the thing being passed here", which is not a thing I need to seek out in something like Go.

That said, I think it's more a style difference than anything. I use debuggers in both compiled and noncompiled languages when I need a deeper look, and I'd guess people who don't use debuggers in scripting languages wouldn't use them in compiled languages. Probably also has to do with the ecosystem and how easy/effective debuggers are.


You can already do print debugging at runtime using tools such as Google's Stackdriver, Lightrun and Rookout (probably others too). These tools let you inject new logs into your running server so you can do print debugging without redeploying a cluster. Pretty darn cool.

They also let you place a breakpoint which doesn't stop execution so you can get a stack trace, variable states etc. without the pain.


With the JetBrains products breakpoint debugging is so easy that I use it for development all the time. Evaluating expressions inside a breakpoint while developing provides many answers in a much tighter feedback loop, even with Go or something equally fast. If I don't have the JetBrains tools I default to print debugging because everything else is too much of a hassle.


I wish that evaluating expressions in a C++ debugger worked more often. It fails half the time (due to "optimized out") in Visual Studio C++, and 80+% of the time (for various reasons) in Qt Creator, even in debug builds.

Maybe it works better in non-C++ languages.


Maybe we need a debug-oriented programming language? Say:

`{ ... }?` denotes a scope we want to inspect; the debugger gets launched and we get a generic reification of the tree path at that point, with the ability to tweak parameters up that path and see multiple new trees rapidly (think Bret Victor live coding).

Honestly I think printf debugging is a pity. I do it... but it feels like processing XML with sed.


Modern debuggers allow you to execute log statements on breakpoints. Much better than modifying your program to output something.


I used this a lot! Combined with scripting support, you can make the experience even more interactive.

I've used gdb scripts in the past to make debug sessions repeatable. Stuck beyond the point you're interested in? No problem, just restart the session with your gdb script and you're right back on track! You can also add custom functions to output your state in a more meaningful way or to mock some state. In longer debugging sessions, a good debugger can be a lifesaver!

Still, for shorter sessions, reading logs and adding occasional prints are hard to beat.


For print debugging in Python I recently discovered a nice little time-saver: the icecream package. Rather than having to type "print( "x: ", x )", you can instead type "ic(x)".

[1] https://github.com/gruns/icecream
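
A quick sketch of what that looks like in practice (assuming icecream is installed; ic() prints both the expression and its value, and returns the value so it can be dropped into existing code):

  from icecream import ic

  x = 3
  ic(x)                        # prints something like: ic| x: 3
  ic(x * 2 + 1)                # prints something like: ic| x * 2 + 1: 7

  total = ic(sum(range(5)))    # ic() returns its argument, so total is still 10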


Also, “print(f’{x=}’)” without an external dependency.
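
Concretely (Python 3.8+), the trailing = makes the f-string include the expression text along with its value:

  x = 3
  print(f"{x=}")               # prints: x=3
  print(f"{x * 2 + 1 = }")     # prints: x * 2 + 1 = 7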


Cool! Where can I read about what's going on in that?


Go to https://docs.python.org/3/reference/lexical_analysis.html#f-... and search for “ New in version 3.8: The equal sign '='.”


This feels like an instance of "worse is better". It works well enough, it is easy to start, it is robust, and it is naturally integrated into your workflow (which is, run the code you wrote.) Debuggers are like a perfectionist approach, and still lacks things like timeline-like view that the articles mentions.


I love tracepoints, which are basically print statements but added dynamically from the debugger. Sadly I almost always end up having performance problems, so I still need to add an if or so to the code for the tracepoint to perform well. And then we're back at printf debugging again...


Yeah it works great until you install 30 frameworks and they all tell you so much useless crap that you can't see your own messages. Why do they log these useless messages? Because they're bad programmers who are 1000 AU from being able to realize it.


The best way of finding faults for me is writing a test that fails on the problematic condition, and then using prints in all the parts that I think are being executed and may have key information to help solve the mystery.

I tried using debuggers, but it was always too much hassle.


Here's a hack I do when I'm running a tight loop. In something like a video game at 60 fps, print is useless cause it spams so much in the terminal it's unreadable. So I use my hack: If math.random() > 0.99 print(debug_msg)
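
A slightly more controllable variant of the same hack in Python (everything here is a stand-in for the real game loop): sample randomly like the original, or print every Nth frame so the output rate is predictable:

  import random

  def noisy_value(frame):
      # stand-in for whatever you actually want to inspect each frame
      return random.gauss(0, 1)

  for frame in range(600):                 # pretend: 10 seconds at 60 fps
      value = noisy_value(frame)

      if random.random() > 0.99:           # the original hack: print on ~1% of frames
          print(f"sampled frame {frame}: {value:.3f}")

      if frame % 60 == 0:                  # alternative: exactly once per "second"
          print(f"frame {frame}: {value:.3f}")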


Stupid question: why don't more programming languages and/or compilers natively support the alternative to print debugging, which is (afaik) tracing? I guess some languages have it, but some don't, or they are onerous add-ons?
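
Some languages do expose it. Python, for example, ships a trace hook in the standard library; a rough sketch using sys.settrace to log every line executed in your own code (the same hook the trace module and pdb are built on):

  import sys

  def tracer(frame, event, arg):
      if event == "line" and "site-packages" not in frame.f_code.co_filename:
          print(f"{frame.f_code.co_filename}:{frame.f_lineno} in {frame.f_code.co_name}")
      return tracer                        # keep tracing nested calls

  def buggy(n):
      total = 0
      for i in range(n):
          total += i
      return total

  sys.settrace(tracer)
  buggy(3)
  sys.settrace(None)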


It works, it's convenient, it's easier to learn, easier to set up, and it has fewer side effects in multithreaded programs, meaning you can debug those too. You can even log to a file and then get these logs from your end users... The article does make a good point: errors that only cause a failure a few hundred calls after the originating problem are easier to find this way too. Every few years I make an effort to learn to use whatever the currently most popular debuggers are, but at the end of the day it's really just very specific kinds of errors that the debugging tools are better at finding, and I generally go back to debug output soon enough.


Print debugging is a tool in the toolkit. It’s good enough for many scenarios, and much easier to deploy most of the time. I still recommend setting up and familiarizing yourself with a step-through debugger, but use both.


I wrote my own print debugging tracer. It’s now my goto for debugging most things.

https://github.com/elonvolo/logitall


Have never found writing to logs to be "effective". More a necessary evil ;)

What is really effective is "visual debugging". Say for example you are testing for bias in an RNG. Rendering a large format image of random rgb values will immediately show any cycles, even to the untrained eye.
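
A sketch of that technique in Python with numpy and Pillow (both assumed installed): fill an image from the generator under test and any striping or banding jumps out immediately.

  import numpy as np
  from PIL import Image

  def rng_under_test(n):
      # stand-in for the generator you actually want to check
      return np.random.randint(0, 256, size=n, dtype=np.uint8)

  w, h = 1024, 1024
  pixels = rng_under_test(w * h * 3).reshape(h, w, 3)    # one byte per RGB channel
  Image.fromarray(pixels).save("rng_bias_check.png")     # stripes or banding = biased generator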

Consider GPGPU workloads, for ML or ray tracing for example. There are myriad levels of variables to track: resources, allocations, command buffer state, synchronization, compute kernel per vector, and so on. All primitives that very much lend themselves to graphical representations!

Right now editing live code in a profiler usually involves textual editing of the graphical shaders. But it's easy to see how this evolves to a purely visual shader editor, not unlike those found in Unreal or Godot.


I can just write print(whatever) and get the job done; I don't want to set breakpoints and search for the data structure I need. Why can't I write something like this: breakpoint { debug(var) }

?


Assuming you mean something like "leverage the debugger to print, so I don't have to do it in code": you can, in most debuggers. This is effectively a "log breakpoint", or a "conditional breakpoint" where your condition is "print(thing)".

I use debuggers even for print-debugging for this kind of reason. No need to re-compile between changing prints, just re-run - the debugger session will hold them from previous runs, you can temporarily disable them with a single click, etc. It's FAR faster and more flexible.


I agree with the point about time travel debugging. I find it so intriguing that I've been playing with it for a little tool to make VR games. Anecdotally, it has helped me a lot with debugging.


I don't usually resort to a debugger to hunt for bugs, but I use them a lot to explore APIs in "real time". I find them much more convenient than the likes of Postman.


I have been developing large-scale Django apps on EC2 for a while and the solution that has been working best for me is a lot of logger.** statements sent to Papertrail.


Why "unreasonable"? There's nothing unreasonable nor wrong about print debugging. Moreover, it's a great first step towards logging and testing.


"Unreasonable" here means that it works way better than it should for something so simple. It's a somewhat common phrasing. Example: "The Unreasonable Effectiveness of Data" https://static.googleusercontent.com/media/research.google.c...


It's a meme, like "_____ Considered Harmful". That one started with "GOTO Considered Harmful". This one started with: https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...


It’s “unreasonably effective”, meaning that a cost/benefit analysis is very clearly tilted towards benefit.


I always feel like I should get better with a debugger, but whenever I'm debugging I always fall back on printf and crashing. It just feels so immediate.


For production, the only way is logs (print debugging).


Good points. Makes me think that if print debugging is primitive, the more sophisticated alternative isn’t a step debugger, but a logging system.


“With enough print statements, all bugs are shallow”


The reason I like print debugging is that it is repeatable. The debugger requires too much interaction for it to be automatable.


If you've ever used visual studio to debug c/c++ code you just know why the linux crowd mentally works around it.


GPU equivalent: putting stuff into vertex buffers so you can inspect the buffer and see if the expected values are there.


Don’t let debugger heavy users tell you off


I definitely think we should create debuggers that can step backwards. Would be incredibly helpful


I'm shocked to see no mention of DTrace in the comments.


In one word, search. You can search the output over time.


set -o xtrace ftw!

More languages should have that.



