> re: Write Your Tests, I've never been successful with this ... I wouldn't bother to write unit tests etc for code that is likely to be culled, replaced.
I think you misread the author. He says "Before you make any changes at all write as many end-to-end and integration tests as you can." (emphasis mine)
> My go-to strategy has been blackbox (comparison) testing. Capture as much input & output as I can. Then use automation to diff output.
That's an interesting strategy! Similar to the event logs OP proposes?
The thing about end-to-end and integration tests is that at some point, your test has to assert something about the code, which requires knowing what the correct output even is. E.g., let's say I've inherited a "micro"service; it has some endpoints. The documentation essentially states that "they take JSON" and "they return JSON" (well, okay, that's at least one test) — that's it!
The next three months are spent learning what anything in the giant input blob even means, and the same for the output blob, and realizing that a certain field in the output comes directly from a `SELECT … NULL as column_name …` in the SQL, and now you're quietly wondering whether any downstream consumer even uses it.
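For what it's worth, even that one documented fact is worth pinning down as a test early. A minimal sketch in Python; the port, endpoint path, and sample payload are all hypothetical placeholders:

```python
# A minimal sketch of "that's at least one test": the endpoint, port,
# and payload below are made-up placeholders for the inherited service.
import json
import requests

def test_endpoint_accepts_and_returns_json():
    # Send some JSON -- we don't yet know what a *meaningful* input looks like.
    resp = requests.post(
        "http://localhost:8080/some-endpoint",  # hypothetical inherited endpoint
        json={"probably": "wrong", "but": "valid JSON"},
        timeout=10,
    )
    # The only documented contract: it returns JSON.
    assert resp.headers.get("Content-Type", "").startswith("application/json")
    json.loads(resp.text)  # raises if the body isn't valid JSON
```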
Methinks I've prioritized writing tests, of any kind, based on perceived (or acknowledged) risks.
Hmmm, not really like event logs. More of a data processing view of the world. Input, processing, output. When/if possible, decouple the data (protocol and payloads) from the transport.
First example: my team inherited some PostScript processing software. To start, we greedily found all the test reference files we could, captured the output, and called those the test suite. Capturing input and output requires lots of manual inspection upfront.
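Mechanically that suite can be as dumb as a golden-file diff. A sketch, assuming pytest; the directory layout and the `ps_process` command are made up for illustration:

```python
# Blackbox comparison sketch: run the inherited processor over captured
# reference inputs and diff against captured "golden" outputs.
# Paths and the `ps_process` command name are assumptions, not the real tool.
import subprocess
from pathlib import Path

INPUTS = Path("tests/reference")   # the reference .ps files we scavenged
GOLDEN = Path("tests/golden")      # previously captured outputs

def test_outputs_match_golden():
    for ps_file in sorted(INPUTS.glob("*.ps")):
        result = subprocess.run(
            ["ps_process", str(ps_file)],   # hypothetical processor CLI
            capture_output=True,
            check=True,
        )
        expected = (GOLDEN / (ps_file.stem + ".out")).read_bytes()
        assert result.stdout == expected, f"{ps_file.name} diverged from golden output"
```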
Second, sorta example: whenever I inherit an HTTP-based something (WSDL, SOAP, REST), I capture validated requests and the generated responses.
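Same idea over HTTP: replay the captured requests and diff the responses. Another sketch, where the capture file format and base URL are my own assumptions rather than anything standard:

```python
# Replay captured request/response pairs against the service and diff responses.
# The JSON capture format and base URL here are illustrative assumptions.
import json
from pathlib import Path
import requests

BASE_URL = "http://localhost:8080"   # placeholder for the inherited service

def test_replayed_responses_match_captures():
    for capture in sorted(Path("tests/captures").glob("*.json")):
        recorded = json.loads(capture.read_text())
        resp = requests.request(
            recorded["method"],
            BASE_URL + recorded["path"],
            json=recorded.get("body"),
            timeout=30,
        )
        # Compare parsed JSON rather than raw bytes so key order doesn't matter.
        assert resp.json() == recorded["response"], f"{capture.name} diverged"
```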