Yes, it is definitely inspired by Rust's `dbg!(..)` macro (see the bottom part of the README). A lot of my open source projects are written in Rust, and I really liked the idea of the `dbg!(..)` macro, so I wanted something similar for my work in C++.
Rust's `dbg!()` macro includes the source file name and line number; this C++ `dbg(…)` macro appears to add the source file name, line number, and function name.
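For the curious, here is a minimal sketch of how a macro can pick up all three in C++ (just an illustration using the standard `__FILE__`/`__LINE__`/`__func__` facilities, not the actual dbg-macro implementation):

```cpp
#include <iostream>

// Minimal sketch: evaluate the expression once inside a lambda, print
// location info, and return the value so the macro can be used inline.
// __func__ is passed in as an argument because inside the lambda body
// it would name the lambda's own operator() instead of the caller.
#define my_dbg(expr)                                                 \
    ([&](const char* func) {                                         \
        auto value = (expr);                                         \
        std::cerr << __FILE__ << ":" << __LINE__ << " (" << func     \
                  << ") " << #expr << " = " << value << '\n';        \
        return value;                                                \
    }(__func__))

int main() {
    int answer = my_dbg(6 * 7);  // e.g. "example.cpp:16 (main) 6 * 7 = 42"
    return answer == 42 ? 0 : 1;
}
```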
Wow, I had no idea you could do that with the logging class. I agree it would be smarter to at least have a flag for printing the rest. Overall I prefer logging to print.
I do think you are right though: if you are trying to debug, it is useful to know the line, not just a variable name. Otherwise you have to add boilerplate just to say "hey, this came from this method and line"...
> Which in turn was inspired by Haskell's `Debug.Trace` functions [2].
According to the RFC, the only inspiration taken from `Debug.Trace` was returning the input. The only other mention of Haskell in RFCs 2173 and 2361 is the rejection of `show!`.
That might be true, but I'd still want a source on that, because the assertion makes no sense: aside from the value-returning property, all the overlap between dbg! and Debug.Trace is already covered by println!.
And as far as I can see Debug.Trace doesn't contain anything like dbg!: the closest would be traceShowId, which neither shows the traced expression (before evaluation) nor the location information.
Why don't you commence a formal inquiry, analyze the complete discussion history, acquire a sworn statement from Centril and report your findings here?
It seems weird to get so hung up on the word inspired... but I'm looking forward to the report!
Debuggers are practically useless for me; they aren't complex enough to meet my minimum use case. If I use a debugger, that's approximately two hours wasted when I have no idea where the problem is. If I instead spend ten minutes logically explaining the program, I usually fix the problem and also gain a deeper understanding of the mechanics of the program. The program is usually multi-threaded and real-time.
A debugger gives you the state of the entire program at one point in time. A log allows you to focus, which gives you the state you actually care about, at all the points in time where you care about it.
The time-evolution of the state of the program is practically the only thing I'm interested in.
I find that code written mainly with debuggers is often riddled with errors, frequently simple negations that have been worked around an even number of times. Then you're stuck with the debugger, and there's no way to reason about the program. The result is low quality.
> If I use a debugger, that's approximately two hours wasted when I have no idea where the problem is. If I instead spend ten minutes logically explaining the program, I usually fix the problem and also gain a deeper understanding of the mechanics of the program.
Er... I'm pretty sure debugger advocates aren't advocating using debuggers without logical thought. Usually, it's more that a debugger is a useful tool for when one has spent three hours logically explaining the problem but still can't figure it out.
If you can find all kinds of bugs in ten minutes, then great! You're officially an endorsed 10x programmer. :-) I can't, for some bugs (reproducing bugs in complex programs is a difficult hurdle for me too), so I use a debugger.
> I find that code written mainly with debuggers is often riddled with errors, frequently simple negations that have been worked around an even number of times.
Usually, I find that code that is thoughtfully written and extensively debugged in the debugger is the most robust part of the program, but YMMV.
I'm not a 10x programmer; I just spend on logging the time you spend in the debugger. And if I'm right, it's slightly less time than you spend debugging.
> Usually, I find that code that is thoughtfully written and extensively debugged in the debugger is the most robust part of the program, but YMMV.
Have you ever worked in a code base that is thoughtfully written but not extensively debugged? I'd wager they're pretty rare, given the widespread use of debuggers, but perhaps such code bases are even more robust, since in my experience they're more thoughtfully written. Preconditions and postconditions everywhere!
The trap I've seen people fall into is thinking they can just blindly step through code and eventually work out the problem. I think that's the usage of debuggers you're describing, and it is a bit of a trap: maybe you start out with a targeted breakpoint but then wind up stepping through stuff without any real idea of what you're doing, mostly treading water.
Effective use of printf and breakpoints is, IMO, a tool for logically thinking through the problem. You work out some belief about the problem: you can see how this belief being wrong could cause the bug, but you can't see how the belief could be wrong, so you use debugging tools to check your assumption. This is, in my opinion, the easiest and quickest way to find something surprising. And once the assumption is either validated or invalidated, it's time to go back to thinking through the problem logically.
Personally, I prefer printf to breakpoints most of the time, because printf statements leave a trail that I can keep going back to when thinking through the problem logically, and because they accumulate as I work through a problem. I think printf debugging suffers from the "trap" issue less for this reason, but sometimes it's nice to do a little exploratory debugging once an assumption is invalidated.
> A debugger gives you the state of the entire program at one point in time.
That's only if you're using just the most basic features. A good debugger lets you run dynamic analysis and introspection at a level that static log statements cannot match.
You are referring to the ability to hop around the stack during debugging and inspect variables at will? Whereas with logging, you have to log something before you can see it.
This would produce a better discussion if you explained what complexity you find lacking in debuggers. Nowadays they are incredibly complex pieces of software (e.g., https://rr-project.org), if you take the time to learn how to apply what they offer to the kind of software you write.
Simple things that people like reasoning about and visualizing with printf() statements can often be done more easily in a debugger, after using your brain and logic to think about what the issue might be. Think a variable got assigned a value it shouldn't have? You could add log statements, rebuild, relink, and reproduce; or you could just set a hardware watchpoint that prints the backtrace whenever the variable changes. In my experience, people don't reach for the debugger way of doing this simply because it isn't needed all the time and the syntax gets forgotten. Diving into `man` pages or docs is harder than printf(). Use those skills more regularly and/or make a cheatsheet, macros, etc., and it becomes less of a problem.
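Concretely, the watchpoint version of that workflow looks roughly like this in gdb (the variable name is made up):

```
(gdb) watch g_counter          # hardware watchpoint on the variable
Hardware watchpoint 1: g_counter
(gdb) commands 1               # run these whenever it fires
> backtrace
> continue
> end
(gdb) run                      # prints a backtrace at every write
```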
Debuggers must be used in certain situations. Personally, the canonical example I use is tracking down compiler bugs. Colleagues and I have been hit by a number of them, and it would be utterly hopeless to attempt to reason about what is going on without a debugger. (One particular past case that stands out was the compiler, under very specific circumstances, not restoring a register that it was obligated to restore when emitting a catch block. Isolating the root cause and generating a reduced example was loads of fun /s; would not recommend.) Then there are kernels, runtime engines with JIT, diagnosing unreproducible problems in situ where the executable can't be replaced or restarted, and the list goes on...
It's more of a joke: sometimes a simpler solution can do much more than a complex one. Complex solutions aren't necessary; in fact, I actively avoid them in my product.
It's also not for lack of use: I've developed software for more than fifteen years, I've always had access to some sort of debugger, and I've always used them, just very rarely.
> You could add log statements, rebuild, relink, and reproduce; or you could just set a hardware watchpoint that prints the backtrace whenever the variable changes. In my experience, people don't reach for the debugger way of doing this simply because it isn't needed all the time and the syntax gets forgotten. Diving into `man` pages or docs is harder than printf(). Use those skills more regularly and/or make a cheatsheet, macros, etc., and it becomes less of a problem.
Hardware watchpoints work horribly for objects: I have to create and maintain views for all the practical objects I use, unless they come from very popular libraries like OpenCV. Ultimately, these views could simply be printf or cout statements, maintained with a common function or method defined on all objects.
These log statements typically live right next to assert statements, so they can be maintained along with the code. I'll usually use something more sophisticated than plain printf, and I'll have many different types of logs for common debugging cases. They can be enabled or disabled at runtime, so no recompilation is needed. They give me access to all the state I could possibly require, and they get committed to the code base.
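Roughly, what I mean looks like this sketch (the category names and the LOG macro are made up, not from any particular library):

```cpp
#include <atomic>
#include <cstdio>

// Hypothetical log categories, switchable at runtime: no rebuild needed.
enum LogCategory { LOG_NET, LOG_RENDER, LOG_PHYSICS, LOG_COUNT };
static std::atomic<bool> g_log_enabled[LOG_COUNT];  // zero-initialized: all off

inline void set_log_enabled(LogCategory cat, bool on) {
    g_log_enabled[cat].store(on, std::memory_order_relaxed);
}

// Cheap when disabled: one relaxed load and a highly predictable branch.
#define LOG(cat, ...)                                                \
    do {                                                             \
        if (g_log_enabled[cat].load(std::memory_order_relaxed))      \
            std::fprintf(stderr, __VA_ARGS__);                       \
    } while (0)

// Usage, typically right next to the assert documenting the invariant:
//   assert(queue_size >= 0);
//   LOG(LOG_NET, "queue_size = %d\n", queue_size);
```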
I've had coworkers spend entire days in debuggers (often in groups) on problems where stepping back and adding a simple log would easily have solved the problem.
I would use hardware watchpoints if they were standardized across debuggers, but they don't have all the features I need even in the best debuggers, and I'd still be stuck on other platforms. Also, I usually need the program to run at near real-time speed, otherwise the bug may not reproduce.
edit: I should add that I do use some of the more advanced debugger features in those rare cases. So debuggers are still necessary, just not day-to-day.
I should also say I've worked in environments where the software has to be pretty much bug-free, and where small bugs may not be immediately noticeable. So there's a lot of planning, design, and even informal proofs of correctness happening. If it's acceptable for your software to have some bugs, maybe you can take a shortcut with a debugger.
Finally, I'm not against all tooling, just debuggers. I use visual profilers all the time.
> Simple things that people like reasoning about and visualizing with printf() statements can often be done more easily in a debugger, after using your brain and logic to think about what the issue might be.
A big problem that sometimes makes me avoid debuggers is that for relatively large software split across many DLLs, adding a printf and rebuilding takes a few seconds, while loading the program in gdb or lldb sometimes takes north of a minute (or, in my experience with MSVC's "symbol loading", sometimes fifteen minutes).
The key idea is that modern CPUs' branch predictors effectively skip over if-statements that are rarely or never taken, such as the check guarding a dynamically disabled dbg() call.
With the performance difference magnified 10x by loop unrolling, this is what I'm seeing on my Mac laptop:
830000000 iterations in 3 secs = 2.76667e+08 iters/sec (compiled-out).
840000000 iterations in 3 secs = 2.8e+08 iters/sec (dynamically disabled).
And this is the worst case, where the whole program does nothing but call dbg(); real-world programs do lots of other real work, drowning out the minute difference in performance. In other words, in a real program littered with dynamic dbg() statements, I doubt you'd see even a 0.1% difference in total performance.
p.s. my C/C++ is pretty rusty - feedback welcome, but please be kind.
Yes, correct: my code is just the performance test, which IMHO is the first step, and it's surprisingly tricky due to effects from the loop around it.
The code for dynamic enable/disable is trivial but the design is a bit subjective. For example, another API might be set_dbg_output(bool) and then maybe get_dbg_output() to inspect the current state.
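A minimal sketch of that hypothetical API (set_dbg_output/get_dbg_output are the names suggested above; the implementation details here are just one way to do it):

```cpp
#include <atomic>

// Runtime switch behind a function-local static, so it is safely
// initialized before first use.
inline std::atomic<bool>& dbg_output_flag() {
    static std::atomic<bool> enabled{true};
    return enabled;
}

inline void set_dbg_output(bool on) { dbg_output_flag().store(on); }
inline bool get_dbg_output() { return dbg_output_flag().load(); }

// Inside the dbg() machinery, printing would then be guarded by the
// branch the predictor quickly learns to skip when output is off:
//   if (get_dbg_output()) { /* format and print */ }
```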
An alternative route might be to find a way to integrate this code with a logging library like spdlog.
This way you'd get the cool short notation of dbg(...) and the ability to have more control over the sinks (including, indeed, dynamic disabling/enabling).
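A sketch of what that integration might look like (a hypothetical wrapper, not an existing spdlog feature; it assumes the value is formattable by fmt):

```cpp
#include <spdlog/spdlog.h>

// Forward dbg()-style output to spdlog at debug level and return the
// value, so the macro stays usable inside larger expressions. __func__
// is passed in because inside the lambda it would name the lambda itself.
#define dbg(expr)                                                     \
    ([&](const char* func) {                                          \
        auto value = (expr);                                          \
        spdlog::debug("{}:{} ({}) {} = {}", __FILE__, __LINE__,       \
                      func, #expr, value);                            \
        return value;                                                 \
    }(__func__))

// Dynamic enable/disable then falls out of spdlog's level machinery:
//   spdlog::set_level(spdlog::level::debug);  // dbg() output on
//   spdlog::set_level(spdlog::level::info);   // dbg() output off
```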
That's probably closer to a best-case scenario. A properly predicted branch is still an extra instruction, plus a couple of entries in the branch target buffer and a global(?) history (I'm not really up to date on the newer perceptron predictors yet).
These are limited resources, so you'd rather not pollute them and cause other branches to mispredict because they are aliased (shared) by many different instruction addresses.
Also, the fetcher can only get instructions in blocks, and if there are multiple branches, it will have to predict based on the first branch.
There is a lot of machinery in branch prediction because keeping the pipeline full of instructions is critically important for performance.
And while throughput may not suffer much, the chances of mispredictions in the hot path increase, adversely harming latency.
These really need to be compiled out for release builds, because you can't assume others can afford the cycles.
I wrote something similar a couple of years ago, except my macro automatically compiled to a no-op in release mode, which is very convenient for switching back and forth while testing.
Thank you for the feedback. We deliberately chose not to do this (see discussion in https://github.com/sharkdp/dbg-macro/issues/26), mainly for the reasons given in the Rust documentation:
> The dbg! macro works exactly the same in release builds. This is useful when debugging issues that only occur in release builds or when debugging in release mode is significantly faster.
Note, however, that the C++ dbg(..) macro can easily be disabled to a no-op (identity-op, to be precise) with the DBG_MACRO_DISABLE flag.
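For illustration, the identity-op behavior boils down to something like this sketch (simplified, not the library's exact code):

```cpp
#include <iostream>

#ifdef DBG_MACRO_DISABLE
// Disabled: evaluate the argument exactly once and pass its value
// through unchanged, so e.g. `int x = dbg(compute());` still works.
#define dbg(expr) (expr)
#else
// Enabled (greatly simplified): print and pass the value through.
#define dbg(expr)                                                \
    ([&] {                                                       \
        auto value = (expr);                                     \
        std::cerr << #expr << " = " << value << '\n';            \
        return value;                                            \
    }())
#endif
```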
The name reminds me of https://metacpan.org/pod/DBG, which, full disclosure, I wrote. Not that it's nearly as cool or as likely to be used by anyone, but... it has a superficial resemblance.
Nice idea! I would appreciate a note from the author specifying whether dbg evaluates its argument twice, since if so, one must be careful with side effects when using it.
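To make the hazard concrete, here's a toy comparison of a naive double-evaluating macro and a single-evaluation one (the macro names are made up):

```cpp
#include <iostream>

// BAD: `expr` is pasted twice, so its side effects run twice.
#define DBG_TWICE(expr) \
    (std::cerr << #expr << " = " << (expr) << '\n', (expr))

// OK: a lambda evaluates `expr` exactly once.
#define DBG_ONCE(expr)                                   \
    ([&] {                                               \
        auto value = (expr);                             \
        std::cerr << #expr << " = " << value << '\n';    \
        return value;                                    \
    }())

int main() {
    int i = 0;
    int a = DBG_TWICE(i++);  // side effect runs twice: a == 1, i == 2
    int b = DBG_ONCE(i++);   // side effect runs once:  b == 2, i == 3
    std::cerr << "a=" << a << " b=" << b << " i=" << i << '\n';
}
```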
Colorization is just a matter of inserting some trivial ANSI escape codes. Arguably, it slows down the program even more and increases the size of the log files; it could instead be done as a post-processing filter.
Which in turn was inspired by Haskell's `Debug.Trace` functions [2].
It's definitely a convenient tool if you can't use a debugger or want to follow more complex interactions.
[1] https://doc.rust-lang.org/std/macro.dbg.html
[2] https://hackage.haskell.org/package/base-4.12.0.0/docs/Debug...