Thanks for the link and the original article. To me, OutputTracker looks a lot like a spy, except that instead of verifying that the unit under test took a particular action, you're verifying that it logged that it took the action. That seems to create a risk of missing cases where the events emitted by the code don't match its actual behavior.
Output tracking and spies solve the same problem in different ways. Spies record which methods are called. Output tracking records behavior that's otherwise invisible to callers (such as inserting something into a database).
There's no risk of missing cases. The output tracking happens at the same semantic level as the rest of the code and is a binary "tracked / not tracked" type of thing. There's no behavior to match, and the code is tested anyway.
Edit: By "no behavior to match," I mean that the thing doing the behavior is the thing tracking the behavior. The tracker is driven by events you emit when you perform a behavior.
Take this code:

    payload = prepare_payload
    if verify_payload?(payload)
      mailer.deliver(payload)
      emitter.emit(:sent_the_email)
    end
There is a risk that a later change to this code will mean that `mailer` and `emitter` are no longer guaranteed to be called together. I have seen this bug in production, and I don't see how your approach catches it. I'm also not sure how I'm supposed to test for different desired values of `payload` here.
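To be concrete about the first point, imagine a later change like this (the `delivery_enabled?` guard is hypothetical). The event keeps firing even when delivery is skipped, so a test that only consults the tracker stays green:

    # Hypothetical later change: delivery becomes conditional on a new
    # flag, but the event still fires unconditionally.
    payload = prepare_payload
    if verify_payload?(payload)
      mailer.deliver(payload) if delivery_enabled?  # new guard added here
      emitter.emit(:sent_the_email)                 # still fires either way
    end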
I mean, sure, if you program the output tracker incorrectly, it won't work. Not sure what else you expect. You're expected to have tests of the output tracking code itself. They catch changes that break the output tracker, just like you have tests to catch any other regressions.
Regarding testing different values, I think what you're missing is that you don't just emit an event; you emit an event with data. Typically it's whatever data you're delivering.
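Something like this, to sketch it (still the made-up tracker API from my sketch above; `send_welcome_email` and the recipient address are stand-ins, and the assertions are Minitest-style):

    # Production code emits the event together with the data it delivers.
    mailer.deliver(payload)
    emitter.emit(:sent_the_email, payload)

    # A test attaches a tracker, runs the real code path, and asserts on
    # the tracked data -- which also exercises the tracking code itself.
    tracker = emitter.track
    send_welcome_email(user)
    event, data = tracker.events.last
    assert_equal :sent_the_email, event
    assert_equal "welcome@example.com", data.recipient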
Looking at your example, I'm still not seeing how this isn't just ad hoc mocking implemented separately for each component. The reason I'm interested is that the overall approach is very similar to what I've settled on over the years, apart from the aversion to labor-saving DI and mocking frameworks. I'm not sure why I should prefer to write more code (which itself needs to be tested) rather than rely on a well-tested and well-understood library.
I don’t know what else to say, man. Maybe try it for yourself so you can see how it works?
Mocks lead to solitary, interaction-based tests.
My approach leads to sociable, state-based tests.
These are polar-opposite testing approaches, with different tradeoffs. I don’t care which approach you use, but saying they’re the same thing means you don’t understand the distinction.
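To make the contrast concrete, here’s the same behavior tested both ways. RSpec syntax for the mock half; `EmailSender`, `Mailer`, and `real_mailer` are stand-ins:

    # Solitary, interaction-based: replace the collaborator with a mock
    # and assert that a particular method call happened.
    mock_mailer = instance_double(Mailer)
    expect(mock_mailer).to receive(:deliver).with(payload)
    EmailSender.new(mock_mailer, emitter).send_email(payload)

    # Sociable, state-based: run the real object graph and assert on the
    # tracked output afterward.
    tracker = emitter.track
    EmailSender.new(real_mailer, emitter).send_email(payload)
    expect(tracker.events.last).to eq [:sent_the_email, payload]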
Other people haven’t had the trouble understanding the fundamentals that you’re having. You’re asking very basic questions, which makes me think you haven’t taken the time to read the article carefully. I’m happy to help, but your dismissive attitude makes me think you’re less interested in understanding the material and more interested in proving that you don’t need to understand it. Your shallow comments about capitalization and wallets didn’t exactly endear you to me, either.
I’ve provided a lot of material online. An article with tons of details and examples. Links to additional full-fledged examples. Multiple video series. Now it’s on you to take advantage of these resources. Or not; no skin off my nose either way.