Code with Engineering Playbook (microsoft.github.io)
195 points by nrsapt on Nov 27, 2021 | 19 comments



This was written by the organization I used to work for at Microsoft. It was developed over the last 6 years by the Commercial Software Engineering organization through hundreds of real, production-ready engineering engagements with Microsoft customers and partners.

There’s nothing groundbreaking for seasoned engineers, but it serves as a very robust set of reminders, especially the secondary and tertiary elements that often fall off when projects get pressured to move faster, cheaper, or with more features.

Also important: this isn’t Microsoft’s home-grown process, but rather an amalgamation of working with hundreds of companies - some tech companies, many not - with wide ranges of software engineering maturity. It’s also written and maintained directly by the engineers, with a singular purpose: no marketing, no fluff, no technology or vendor implementations.


As much as one could "nitpick" about omissions or unnecessary points, for large companies these things are obvious yet necessary.

I also liked the writing style guides that Microsoft and Google put out in recent years:

- https://docs.microsoft.com/en-us/teamblog/style-guide

- https://developers.google.com/style/highlights


Glad you shared these valuable resources. As a self-taught coder, I always admired the quality of the writing published by these big orgs but couldn't emulate it. The links give access to various clear guidelines which are quite helpful for open-source technical docs and project descriptions. Thank you, Ser.


Lots of good stuff in there. One that caught my eye is “able to change logging level without code changes”. I would take that one step further to “without redeploying”. Going to try to implement that in my own projects.
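For what it's worth, here's a minimal sketch of one way to get there in Python, assuming a Unix process; the SIGUSR1 handler and the "app" logger name are just illustrative choices, not anything the playbook prescribes:

    import logging
    import signal

    logger = logging.getLogger("app")
    logging.basicConfig(level=logging.INFO)

    def toggle_debug(signum, frame):
        # Flip between INFO and DEBUG while the process keeps running.
        new_level = (logging.DEBUG
                     if logger.getEffectiveLevel() > logging.DEBUG
                     else logging.INFO)
        logger.setLevel(new_level)
        logger.warning("log level is now %s", logging.getLevelName(new_level))

    # `kill -USR1 <pid>` changes the level with no redeploy and no restart.
    signal.signal(signal.SIGUSR1, toggle_debug)

Watching a config file or exposing an admin endpoint would work just as well; the point is only that the level lives outside the deploy cycle.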

I disagree strongly with “90%+ unit test coverage” though. Diminishing returns are very real there.


>I disagree strongly with “90%+ unit test coverage” though. Diminishing returns are very real there.

Hi, stupid junior here. Can you elaborate a bit or give me some reading material? I'm currently struggling to adopt good testing practices and understand the associated metrics. Thank you!


I'll take any chance to repost this:

https://testing.googleblog.com/2010/07/code-coverage-goal-80...

That said, it's very easy to write tests that produce high code coverage but don't test much that is useful (do they assert the things we truly need to check?). It's also a pointless exercise to try to cover every single branch (e.g., does every defensive error check need to be asserted on?).

It's a delicate balance, and I'm sure some people will disagree even with that.
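To make the "high coverage, useless test" point concrete, here's a toy pytest sketch (parse_port is a made-up function, not from the playbook). Both tests produce identical coverage, but only the second can fail for the right reasons:

    import pytest

    def parse_port(value: str) -> int:
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    def test_parse_port_coverage_only():
        # Executes the happy path, so coverage goes up, but asserts nothing.
        parse_port("8080")

    def test_parse_port_with_assertions():
        # Checks the result and the failure mode we actually care about.
        assert parse_port("8080") == 8080
        with pytest.raises(ValueError):
            parse_port("70000")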


“Stupid senior” here! Don’t listen to them :) Always go for 90%+. Lower coverage is laziness and may hurt sooner than expected. Sure, full coverage doesn’t prevent you from writing a bad test.


Another “stupid senior” here and I say go for 75-80%. It’s almost exactly a classic Pareto situation in my book.

Parent post is right that lower coverage degrades rapidly... the difference between 65% and 75% is huge. But the ancestor post is right that there are large diminishing returns too.

I’ll qualify this by recommending leaning hard into lints, type checkers, and the like. Eliminating whole classes of errors is what gives you the edge: rather than writing test cases in, say, Python to ensure that a string-mangling function raises the “correct” exception if passed an int, just enforce mypy checking instead - then get your type coverage up to 75-80 percent. Fuzzers too. Get more overall coverage by letting the computer do the work.
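A quick sketch of what that looks like (shout is a hypothetical string-mangling function, purely for illustration). The annotation plus a mypy run replaces a whole family of "what if someone passes an int?" unit tests:

    # A hypothetical string-mangling function with type annotations.
    def shout(text: str) -> str:
        return text.upper() + "!"

    shout(42)  # mypy rejects this statically, no test needed:
               # error: Argument 1 to "shout" has incompatible type "int"; expected "str"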


I'm a stupid principal and I go for 100%... when I can.

When I can't, I ponder.

I see how my design can be aided by coverage and ask "Hey, I don't have coverage for this, why did I write it?"

It's a worthwhile question. I wrote code, but it is hard to exercise easily... why did I write it?

Don't get me wrong, there is a bunch of stupid shit to contend with (I'm looking at you, MessageDigest's NoSuchAlgorithmException).


Of course I prefer 100% coverage with well-written unit tests. But I’d also prefer to ship software and create value. There is a point of diminishing returns in trying to exercise all code paths. Guessing at some hard rule (70%, no 80%!) seems futile, as it varies depending on your own work. Experience tends to guide us to critical paths that should be tested at 100% and, as an example, code-generated RPC stubs that we may skip. This is a trade-off we make, like so many in this field.

Edit: we may be speaking past each other. I agree that your code should be testable, and if it isn’t, it’s a code smell worth investigating. This is separate from the question of whether every thrown error should be exercised and every getter verified, etc. With unlimited time, sure, 100% for some parts of the code; a hard 100% rule, though, leaves zero wiggle room.


A challenge with targets is that they invite gaming the metric, which isn't great.

My gut is that we are in a dark age of the field, stuck between the low-hanging fruit and doing this exceptionally well. I'm not sure how we achieve balance, but I do ponder it.

For example, in the domain of building a small game, coverage is not really needed, as quality is measured more by playing the game.

In the domain of infrastructure, where I have toiled for over a decade, I have come to expect amazing coverage, since so many CoE/SEV/prod issues come down to "did we have a test for that?". I've worked with teams getting not just 100% but all sorts of E2E and other testing on top.

A key problem is that what is good in one domain is awful for another, and general metrics don't make a good rule. Is 70% good? Well, maybe... Maybe not...

As I reflect on simple games, I am working on a way to automatically build unit tests for a class of games I care about: board games. As I look into how to do AI for arbitrary games (I have a random player model for giggles at the moment), I find that I could use a bit of AI to build a minimal spanning set of cases which exercise all the code paths in different combinations.

This is possible because my language tightly couples state and compute ( http://www.adama-lang.org/ ), but I believe it provides an interesting clue out of this mess. As we look to AI to write code, why don't we start by using AI to figure out how to automate testing, then bring a human into the loop to ask "Yo, is this the right behavior?" and go from there.


I find any of these numbers pointless.

I’ve seen enough applications at or near 100% coverage that were completely broken.


Agreed on the 90% unit test coverage. I'm all for high test coverage, but I care more about having solid assertions. Too often I see tests that expect an error, but not the specific error they're getting.

Major business rules need tests too, but they are much harder to write - so we get high unit test coverage instead of hard conversations about test quality.
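A small pytest sketch of that trap (parse_port is a hypothetical helper): the first test passes for any exception at all, including one from an unrelated bug; the second pins down the failure we actually expect.

    import pytest

    def parse_port(value: str) -> int:
        return int(value)  # raises ValueError("invalid literal ...") on bad input

    def test_rejects_bad_input_loosely():
        # Any error passes here, even a TypeError from a genuine bug.
        with pytest.raises(Exception):
            parse_port("not-a-number")

    def test_rejects_bad_input_precisely():
        # Only the specific, expected failure passes.
        with pytest.raises(ValueError, match="invalid literal"):
            parse_port("not-a-number")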


I prefer "Write tests. Not too many. Mostly integration." - Guillermo Rauch.


I know this is meant more for backend applications than for software that runs on end-user hardware, but...

>able to change logging level without code changes

The folks at Suckless would like a word. ;-)

Personally, I think having to recompile software just to change settings is a royal pain in the butt.


Agreed. I think this one comes from the Twelve-Factor App methodology; see part III about config: https://12factor.net
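In that spirit, a minimal Python sketch; the LOG_LEVEL variable name is just a common convention, not something 12factor mandates:

    import logging
    import os

    # Twelve-factor style: read the log level from the environment, default to INFO.
    level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
    logging.basicConfig(level=getattr(logging, level_name, logging.INFO))

Run with `LOG_LEVEL=debug python app.py` to get verbose output, with no code change and no rebuild.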


I feel a bit of “we value processes over people” on this one. Not disagreeing with a lot of what’s said, but it just feels a bit prescriptive, and I think agile (as in the manifesto rather than big consultancy) is very much context-specific.


I work at a very small company, but the funny part is that most of the points from the playbook apply and fit well here.


Thanks. This applies perfectly to the organization I currently consult for and their newly spun-up ML team.



