
I think people who try to make this point are usually just pouting over how they shouldn't be expected to know basic, fundamental things. Like https://twitter.com/mxcl/status/608682016205344768



Honestly, I had things like WiX and CMake in my mind when responding to you, but the tweet you linked to seems pretty fair too.

So many "nerdy" circles seem to be plagued by this subculture where your competence or intelligence is measured by how much obscure and useless knowledge you can accrue.

The fact that Max's experience creating and maintaining the single best package manager on macOS (by a country mile, too) was considered less important than whether or not he knew the minutiae of a particular form of logic puzzle is bullshit, and I can completely sympathise with him in that regard.


I actually do think you have a point about the fetishization of trivia knowledge in this field. It's really not ideal, and I think it's driven by social dysfunction, or by overcompensation for various kinds of imposter syndrome.

But that is not this. In the case of inverting a binary tree, what you call logic-puzzle minutiae is just taking a fundamental building block in computer science (binary trees) and asking the person to demonstrate even the faintest ability when it comes to writing an incredibly basic algorithm. Max Howell not only can't do it, but he doesn't even see why he should need to know how to do it!
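
(For reference, the whole exercise fits in a few lines. A minimal sketch in Python; the TreeNode class here is illustrative, not from any particular library:)

    class TreeNode:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def invert(node):
        # Mirror the tree by swapping the children at every node.
        if node is not None:
            node.left, node.right = invert(node.right), invert(node.left)
        return node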

That kind of proud ignorance is what grinds my gears. I'm sure someone can gather requirements and deliver value to customers and fix bugs and string together code and everything without knowing how to work with trees, but I don't really care. If they've somehow gotten that far without even a glimmer of curiosity about the fundamentals of computer science then something is disturbingly wrong, and I would worry about what other mammoth blind spots they inexplicably have.


>> But that is not this. In the case of inverting a binary tree, what you call logic-puzzle minutiae is just taking a fundamental building block in computer science (binary trees) and asking the person to demonstrate even the faintest ability when it comes to writing an incredibly basic algorithm.

The thing is that most programmers are not computer scientists anymore, in the same way that most computer scientists are not mathematicians anymore. In many CS101 courses there's only a very brief study of algorithms and data structures, and most of the course is about using this or that language (probably Python these days, Java back in the day, Ada further back, etc).

This is partly the fault of universities, in a "the road to hell is paved with good intentions" kind of way. Universities try to prepare their students for the industry, except they seem to track the industry's requirements with a ten-year lag. So they try to teach students programming, rather than computer science, because they believe that's what the industry is asking for; then the students go to interviews and find themselves staring at a binary tree on a whiteboard.

Also, to be fair, the majority of programmers nowadays are not nerds anymore, and they're not even that interested in computers, or even programming. Most of my class in my degree and in my Master's just wanted a cozy job at an office. In one company where I was hired through a graduate programme, all of the guys in my cohort came in with a qualification in CS, then immediately sought the better-paying manager jobs in the corp (and I left to go to academia because [edit:] they didn't let me train neural nets on their mainframes :P).


>That kind of proud ignorance is what grinds my gears.

I do get where you're coming from there, but I interpreted the Tweet very differently. He wasn't just asked "how does a binary tree work?", but asked to go through an extremely specific process manually, on a whiteboard.

And if I can just interject one little quibble (I come from an EEE background, stumbled into software engineering, and then left after a couple of years to do my own thing): all of this knowledge of the academic aspects of CompSci doesn't seem to help people build code that is reliable and performant. We weren't taught it in EEE (we learned about programming and digital logic, but in a different, much more concrete way), and yet EEE-written code runs flawlessly on 8-bit micros in safety-critical systems for decades at a time without a single crash or missed timing constraint.


Because it's very simple, compared to a web app backend (which is typically an unholy mess of unnecessary complication, and much more complex for it).


That may be an oversimplification. I've done both, and found most web app backends more trivial than embedded programming projects.


Fully agreed.

I usually don't need complex algorithms in my line of work, but I can't count how many times I've needed to implement topological sorts, for instance, or non-trivial tree traversals, or to rewrite code to increase parallelism, or to be able to quickly spot that a poorly performing algorithm was O(n^2) or O(n^3).
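
(A topological sort, for instance, really is only a few lines. A minimal sketch of Kahn's algorithm in Python; the dict-of-lists graph representation is just an assumption for illustration:)

    from collections import deque

    def topological_sort(graph):
        # graph maps each node to the nodes it points to.
        indegree = {u: 0 for u in graph}
        for successors in graph.values():
            for v in successors:
                indegree[v] = indegree.get(v, 0) + 1
        queue = deque(u for u, d in indegree.items() if d == 0)
        order = []
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in graph.get(u, ()):
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
        if len(order) < len(indegree):
            raise ValueError("graph has a cycle")
        return order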

And sometimes, it actually gets complicated. Sometimes, it's about increasing cache hits. Sometimes, it's about making sure that stuff gets allocated in the right order or in the right place in memory. Sometimes, it's about rewriting the IPC layer. Sometimes, it's about reimplementing foreign key logic in a low-level database/file system. Sometimes, it's about writing custom locking data structures or non-blocking algorithms, or a custom memory allocator or GC to match specific performance requirements. Sometimes, you need to do all of this without a debugger or a profiler or even logging.

If you can't handle the simple tasks from the first list, well, how are you going to tackle the issues from the second?

And if you're not curious, how are you going to learn all of this?


I agree, uses for this kind of knowledge do come up from time to time, but when you're implementing something like this on the job, are you starting from nothing, a blank piece of paper/whiteboard?

Personally I wouldn't start writing a single line without doing some research first. I'd look around on the web for some sample implementations or at least pseudocode. I'd probably get one of my algorithm books down from the shelf to make sure I understand the basics -- and check for "gotcha" edge cases.

So this is still wildly different from Max's interview environment, where the expectation is that you can effectively invent the algorithm.


I'd argue that, for many problems, researching the issue is hard if you don't have a starting point. For instance, I currently have no clue about natural language processing with machine learning. I would have absolutely no idea where to start, or where to start looking. I might be lucky and stumble upon some literature that I could understand, but then again I might not.


Couldn't a good lead suggest an engineer learn this after the fact?

Also, do most software problems involve optimizations at scale like at Netflix or Google? It seems mismatched to assume that an esoteric use case is the gold standard.


I'm talking of my own experience. I've had to do most of these things (or variants thereof) either while working on Firefox or various experiments at Mozilla, or while working at various startups.

I'm sure that not all companies need this kind of skill. But I'm not surprised if Google feels that they do.


I would go down a slightly different track, after seeing comments on the OP's linked article making fun of people who know what ACID compliance means WRT databases.

The point of interviewing about algos or ACID or SOLID is fast inter-team communication and less wasted effort.

So you have two sysadmins talking about some database's filesystem options. They talk about sync writes vs. non-sync writes, and they immediately see a problem they need to research WRT ACID's "D" and sync writes; there are of course many historical solutions and workarounds. On the whiteboard they just look at each other and say "ACID's D", the other nods knowingly, they both know what to research, and they're on the same page, all in about 30 seconds of total interaction. Two people who know about durability and sync writes are going to "wink and nod" and both be productive in minutes.
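
(For anyone following along, the "D" boils down to something like this. A minimal Python sketch; without the fsync, a crash right after the write can lose data that is still sitting in the OS page cache:)

    import os

    def durable_write(path, data):
        with open(path, "wb") as f:
            f.write(data)         # may still live only in userspace/page cache
            f.flush()             # push Python's buffer down to the OS
            os.fsync(f.fileno())  # force the OS to flush to the device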

Then you meet some joker who don't wanna learn nothing about nothing, who just does stuff that seems to work and googles for how to fix it later. The guy doesn't even know the concept of what the "D" in ACID means, and when the tickets start rolling in, he tries to reinvent the wheel all by himself, and it's just awful to see. He can't explain it as well as a textbook or Wikipedia, and when you try to point out that this is all old stuff "everybody doing DBMS work should know", you get "eh, it's just some trivia thing from the old days nobody knows, like back when you wrote Perl for money", and it's just a train wreck watching the guy. OR he doesn't get hired for the position, because even though 50 bazillion social media posts explain that this employer loves "useless trivia stuff", and even though, if you know anything about DBMSs, the ACID acronym makes so much obvious sense that a DBA worth anything could learn it well enough to BS past an interview in like two minutes, nah, they're too lazy. Which is how it's gonna be day to day if they get hired accidentally, and nobody wants to deal with laziness either.

The guy who invented Homebrew and famously couldn't get hired at Google wasn't not-hired because he couldn't flip a binary tree; he was not-hired because he couldn't talk like a programmer, with programmers, about programmer stuff, quickly, in a standard documentable format, in the trivial and non-commercial sense of flipping a binary tree. He could have memorized every algo just in case, or he could have learned to talk like a programmer with programmers about programmer stuff, but he never did either, so... that didn't work out well for him.

As a terrible car analogy: you wanna hire two car mechanics to change tires at a tire shop. One guy knows all the names for all the tools and parts and can at least BS plausibly at the interview about how to fix something obscure like a malfunctioning TPMS (tire pressure monitoring system); his answer sounded believable and rational, if not perfect. The other guy doesn't know the names of any of the parts ("this is the growly thing that twists the shiny things off the side of the heavy bouncy circles"), and when you ask him how he'd troubleshoot a broken TPMS in a general sense, he fires back that he don't know no fancy words but he's been changing tires for years and it'll probably be OK eventually. And you're like "Dude, you interviewed at a F-ing tire shop and read online that we'd ask about TPMS, but couldn't be arsed to even look up what the acronym means?"


That is a big part of it, I think, and just being able to provide meaningful names for code/documentation/discussions makes a huge difference, though I'd say communication problems alone don't tell the whole story.

Even when you're just working alone, having seen something before and knowing the theory/literature means you can build on it instead of wasting time thinking about how to reinvent the wheel. You can imagine someone who knows nothing about ACID or transaction isolation being frustrated by apparent concurrency bugs in their database and hand-rolling some abomination of a locking mechanism to manually serialize database transactions. They're going to waste a ton of time on that, and it'll be fragile and have terrible performance. Or if you're working on some task-tracking application and wondering how to untangle dependency relationships for users, you should have a basic suite of graph algorithms in your head, so that you can use them effortlessly (see the sketch below) and focus on the actual problem instead of being mired in retreading basics that every CS graduate already knows.
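
(To make that second example concrete, here's a minimal cycle check for a task dependency graph, a three-color DFS sketched in Python; the dict-of-lists representation is again just an assumption for illustration:)

    def has_cycle(graph):
        # White = unvisited, gray = on the current DFS path, black = done.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {node: WHITE for node in graph}

        def visit(node):
            color[node] = GRAY
            for dep in graph.get(node, ()):
                if color.get(dep, WHITE) == GRAY:
                    return True  # back edge onto the current path: a cycle
                if color.get(dep, WHITE) == WHITE and visit(dep):
                    return True
            color[node] = BLACK
            return False

        return any(visit(n) for n in graph if color[n] == WHITE)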

Also it doesn't seem like an issue of communication in the case of the homebrew guy; he was just stumped.


I'm a VP of Engineering now and I made it here without ever having to do any of the above and most engineers I've worked with haven't had to do any of that either. Your experience isn't the norm and you also sound incredibly arrogant and out of touch.


Well, thank you very much for the last part.

However, I have needed to do most of the above. Usually as part of a team, thankfully. Happy to learn that you didn't. YMMV.

I'd guess that Google wants to hire engineers who can do that, too.


Exactly. The technical interview has become like an arms race: harder and harder questions. Whereas on the management track, it's all hustlin'; any bozo can and will become a manager, go up the ladder, and gain more power and control over all us technical folks.


I shouldn't comment on this chain, but just to add some clarity on my behalf... all I've done in my "career" is web-related stuff, mostly JavaScript paired with a backend. I've never used a BT/BST; I only recently learned about them myself, just to know what they are.

If you went through a CS path, then I would say you should know about it. I'm trying to pick it up due to FOMO; I keep hearing about it.


Notably, he wasn't actually asked to invert a binary tree in the interview (that would not be a desirable question internally for anything other than screens), and he wasn't told why he failed the interview process. So, given the response, I'm not certain that it wasn't his attitude during the process that sank him.


It's hard to tell whether your comment was sarcastic or serious, but the guy invented Homebrew and was passed over for, as you say, not knowing fundamental things. I'd flip the script in this narrative: by passing over the author of a wildly popular piece of software, the interviewers showed their own missing knowledge of fundamental things - like good business sense. No thanks, Google.

And what does Google invent, anyway, given that their biggest financial successes have been acquisitions?


I can count on no hands the number of times I have ever needed to invert a binary tree in my entire career.

I'm sure someone knows. I'm also sure that I don't care: why didn't they use the standard library function for it?


Beyond parody.


For your consideration, I direct you to the struggles of developing StarCraft: https://www.codeofhonor.com/blog/tough-times-on-the-road-to-...

A huge proportion of the early bugs came from programmers who believed they were easily smart enough to write doubly linked list handling methods, and who proceeded to continually screw up writing those methods while refusing to use standard functions.

This isn't hypothetical.


There is no "standard library function" for inverting a binary tree. That's not a thing that exists, probably anywhere in any standard library for any language, and even if it did you should still know how to write it yourself.

Also, the article you linked doesn't argue that you shouldn't know how to write a linked list; it argues that you should implement ADT operations in functions instead of trying to wing it with ad-hoc logic all over the place. Blizzard's developers were definitely smart enough to implement a doubly linked list one time; they were just fallible humans who got tripped up consistently reimplementing the same logic in a hundred different places.
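
(The difference, roughly: write the pointer surgery once, in one function, and call it everywhere. A minimal sketch in Python; the Node class is illustrative, not from the article:)

    class Node:
        def __init__(self, value):
            self.value = value
            self.prev = None
            self.next = None

    def unlink(node):
        # One place that knows how to remove a node from its list,
        # instead of ad-hoc pointer updates at every call site.
        if node.prev is not None:
            node.prev.next = node.next
        if node.next is not None:
            node.next.prev = node.prev
        node.prev = node.next = None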



