As someone who has recruited hundreds of candidates at this point, I would urge caution with lists like this.
By all means read the list, but rather than trying to grind through as many of the questions as possible, use it as inspiration: think about what you're actually curious about and what might tip you one way or the other if you're on the fence. It's far better to have a good conversation about one thing that might affect your decision than a superficial sprint through a bunch of topics you don't actually care that much about.
Secondly, consider that if you do well you are likely to have multiple rounds of interviews. Think about which questions are likely to be most enlightening when you ask each interviewer, and don't shy away from asking two people the same question if you think you'll get different perspectives. If you ask a peer engineer the diversity questions, you may well get quite a different answer than from the HR/recruiting person (e.g. one might give you a cultural insight while the other gives you the policy/factual answer).
I once interviewed a candidate[1] who had printed off a whole list like this and insisted on grinding through the whole thing. It was weirdly off-putting, but he was such a monster programmer that he was already a clear hire. It did give an insight into his thoroughness, though, and showed that he was absolutely perfect for the (critical) role we were hiring him for, where mistakes were extremely costly.
[1] Hi Fluffy! If you see this I hope you're doing well.
This is not a checklist; this is not a shopping list. If you send this entire list to an employer, they probably won't be calling you back. This list is intended to serve as a reference point for things to be aware of during your interview process. Not all of these questions will be relevant to every person or position; choose the ones that are relevant to you and what you are interviewing for. It's OK for there to be questions on this list that you personally do not care about.
They seem well aware of what you're talking about.
> I once interviewed a candidate[1] who had printed off a whole list like this and insisted on grinding through the whole thing. It was weirdly off-putting, but he was such a monster programmer that he was already a clear hire. It did give an insight into his thoroughness, though, and showed that he was absolutely perfect for the (critical) role we were hiring him for, where mistakes were extremely costly.
Would you share some more information on what kind of role it was? And what kind of qualities would you look for when hiring someone for that kind of position?
I aspire to work in such a role (where pedantry is appreciated instead of merely tolerated) one day, and I'd be grateful for any such information or hints.
He was looking after and developing a key set of extremely sensitive servers (custom key/value stores that served all the global risk and pricing infrastructure for a major investment bank).
The desirable qualities (besides being a very good programmer) for such a role include:
1) Being very calm and having exceptional judgement in a crisis. Crises happen a lot.
2) Being sufficiently frightened of breaking things that you proceed with caution.
3) Being good enough, and having enough confidence in your abilities, that you actually manage to get things done in spite of the very significant consequences of mistakes.
For the longest time I viewed kernel programming as an exalted realm of "optimized bare-metal software written by visionaries and wizards" (like operating system core developers of the Dave Cutler variety). Having had the misfortune of dabbling in reverse-engineering Windows drivers and debugging in-tree Linux kernel drivers, I now believe they're often messy hack jobs written by low-paid, average-skilled embedded developers, no less fallible than the average enterprise Java programmer (https://blog.ffwll.ch/2022/08/locking-hierarchy.html expresses a similar sentiment). And I feel the maintainers are sometimes spread far too thin: Dmitry Torokhov maintains a good chunk of Linux's entire input subsystem, so it's no wonder mistakes slip by in the touchpad driver. I'm just disappointed I reported a bug to the mailing lists and never got a reply after like a month.
EDIT: The wizards are out there, for example https://pvk.ca/Blog/2019/01/09/preemption-is-gc-for-memory-r..., but they're not evenly distributed, and perhaps the insane cutting-edge code they write (and the less insane code I write) isn't necessarily dependable, since by definition they lie on the edge of human understanding.
However, the kernel is fairly hard to test because it's the kernel:
- A lot of code paths are fully dependent on hardware that can't be mocked. How would you test the bootloader, for example? Or code that depends on a certain processor architecture? The only way is to actually run the code on that hardware.
- Similarly, it's hard to test the drivers. A lot of them are made by vendors, and the only way to test is against the devices themselves. Even if you could emulate the device and connect that to the driver, there are a lot of things you won't cover (the communication path, the device misbehaving...)
- It's hard to test things that the runtime depends on. For example, if you try and test the memory subsystem, how do you manage the memory to actually run the tests?
Kernel testing is hard for the same reason kernel development itself is hard: you're building the thing that all other tools require in order to run.
Interesting, I quickly checked the source before posting that comment and didn't find any, but I was expecting them to be co-located for some reason. Thanks for sharing.
> - A lot of code paths are fully dependent on hardware that can't be mocked. How would you test the bootloader, for example? Or code that depends on a certain processor architecture? The only way is to actually run the code on that hardware.
My incorrect assumption was that one could run those tests in some sort of a virtualised environment where the host VM provides the tested component with the correct virtual hardware. The "hardware" itself contains mocks the test can access to run assertions. In other words, testing would involve checking the side effects. Come to think of it, that doesn't sound very practical. In fact, it sounds almost impossible to scale.
Virtual machines are a really poor rip-off of the original thing. Users rarely notice, though, because the kernel hides that away.
Like GP said, real hardware is difficult to mock well enough to be useful for testing. If you try it in practice, your emulator will only be one bullet point in the list of platforms to support...
I had to attempt to write an interface to the floppy and serial controller for IBM PC compatibles to learn this.
The direction people are moving towards is re-hosting and directed emulation, where you run the code in an environment where the analysis tools are better and emulate just enough of the platform that the code thinks it's executing natively. Some of the published work even looks at the expectations encoded in the source to emulate what it's expecting from the "hardware". Since the HW is fake, testing new configurations is just a config change away. Take a look at [1], [2], [3] for published work. I also have a tool, used in a couple of companies, that can rehost simpler Linux device drivers and embedded firmware without source code modifications. Everything's encoded as DT files, so you can just use the configs already in the kernel tree and even do fun things like simulating physical hardware in a game engine.
I know some developers test with VMs with emulated architectures, but that almost requires manual testing and seeing how things work. As you say, it's fairly impractical and impossible to scale; you can't really get much useful information out of VM memory and registers.
Not to mention, if the emulation isn't 100% accurate, you could still run into something odd. Especially if you only emulate what's in the spec and not the actual hardware.
Naive question: Couldn't you manually test against the hardware/devices, confirm the code is passing, identify the salient behavior that is hardware-specific and needs to remain invariant under refactoring, and write unit tests for just that?
I realize that's not perfect -- not even close -- and I'm sure smarter people than me have considered that idea and rejected it. But, out of curiosity, I wonder if you know why it wouldn't at least add some marginal value?
> identify the salient behavior that is hardware-specific and needs to remain invariant under refactoring, and write unit tests for just that?
I don't think you really can. There will be things that can be different between runs (e.g., a DMA transfer) and not necessarily mean bugs. I don't think there are many fully stateless drivers where that kind of effort would both cover a significant part of hardware behavior and yield good enough code coverage.
Can you record the register writes and DMA transfers of a successful run, then run the kernel in an emulated environment and ensure it makes the same writes and presents the same results to userspace given the same data returned back? This is probably a lot more practical for a mouse than for a high-bandwidth webcam or display, where the "expected" logs can run into the megabytes or gigabytes for a few seconds of recording.
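The comparison half of that idea is easy enough to sketch on the host side. A minimal sketch in Python, where the JSON trace format, the load_trace helper, and the file names are all invented for illustration:

    # Hypothetical record-and-replay check for a driver's MMIO traffic.
    # A "golden" trace is captured once from a known-good run against real
    # hardware; a replayed run (e.g. under an emulator) must match it.
    import json

    def load_trace(path):
        # Each event: {"op": "write", "reg": "0x40", "value": "0xff"}
        with open(path) as f:
            return json.load(f)

    def diff_traces(golden_path, replay_path):
        golden = load_trace(golden_path)
        replay = load_trace(replay_path)
        for i, (g, r) in enumerate(zip(golden, replay)):
            if g != r:
                return f"divergence at event {i}: expected {g}, got {r}"
        if len(golden) != len(replay):
            return f"length mismatch: {len(golden)} vs {len(replay)} events"
        return None  # traces match

    mismatch = diff_traces("golden_mouse_trace.json", "replay_trace.json")
    assert mismatch is None, mismatch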
Yeah, sure it's hard, or even harder. But think about browsers. We run applications and we want to make sure they look and behave the same, but there are so many OSes, browsers, versions, devices, ...
So do we have to buy a laptop for each? Maybe in the past, but today we "just" emulate it. The same can be done with kernel testing. Assume that the whole kernel runs emulated (which is the hard part) and then test the output of that (in the emulator).
Browser and kernel development couldn't be more different.
> Maybe in the past, but today we "just" emulate it.
"Just emulating" is almost impossible. For testing to be actually reliable, you'd need to recreate all the hardware behavior in code. OS virtualization relies on re-exposing the hardware and kernel primitives to VMs, no VM is reimplementing the Intel architecture for memory management, for example.
> then test the output of that (in the emulator).
Another hard topic. What's the output of a kernel? In an emulator the only thing you have is access to the memory areas (RAM, screen, devices). You can't really rebuild the state of the kernel, or access the internal structures that would tell you what state it's in. If you're lucky, and advanced enough in the boot sequence that the kernel has configured the serial port for comms and debugging, you might be able to get some information, but not that much, and it'd still be impossible to test certain parts of the boot sequence.
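For what it's worth, that serial-console channel is what VM-based smoke tests typically scrape. A rough sketch of the idea in Python, assuming qemu-system-x86_64 is installed; the image paths are hypothetical placeholders, and it assumes the initrd's init powers the VM off once userspace is up (otherwise the run ends at the timeout):

    # Boot a kernel under QEMU and scrape the serial console for a panic.
    # bzImage/rootfs paths are hypothetical placeholders.
    import subprocess

    def boot_smoke_test(kernel="bzImage", initrd="rootfs.cpio.gz", timeout=120):
        proc = subprocess.run(
            ["qemu-system-x86_64",
             "-kernel", kernel,
             "-initrd", initrd,
             # console=ttyS0 routes the boot log to the serial port;
             # panic=-1 plus -no-reboot makes QEMU exit on a panic.
             "-append", "console=ttyS0 panic=-1",
             "-no-reboot",
             "-nographic"],          # serial console goes to stdio
            capture_output=True, text=True, timeout=timeout,
        )
        assert "Kernel panic" not in proc.stdout, "kernel panicked during boot"
        return proc.stdout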
> OS virtualization relies on re-exposing the hardware and kernel primitives to VMs, no VM is reimplementing the Intel architecture for memory management, for example.
That's fair. But in the same way that the VM relies on the host OS handling operations correctly, the kernel can assume the same, as long as there is a clean separation (so that one kernel doesn't influence the other).
Of course, if you have a bug in the (old/other) kernel that influences the tests themselves then that wouldn't work, but I would say that this probably makes up a very small minority of cases.
Just like if Intel runs tests for their CPUs on other CPUs and those other CPUs are buggy...
> Another hard topic. What's the output of a kernel?
The kernel has an interface, and if I ask it to write something and then read it back, I have an expectation about what happens. I don't need to care about the state of the kernel (nor should I) as long as the result is good. Only for certain performance-relevant things can I see this being a problem.
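That black-box view needs no kernel internals at all. A trivial sketch in that spirit (real suites such as LTP or the kernel's selftests are of course far more thorough):

    # Black-box test of the kernel's write/read path via the syscall
    # interface: no knowledge of kernel state, only the visible result.
    import os, tempfile

    def test_write_read_roundtrip():
        payload = os.urandom(4096)
        fd, path = tempfile.mkstemp()
        try:
            os.write(fd, payload)
            os.fsync(fd)                     # push it through the page cache
            os.lseek(fd, 0, os.SEEK_SET)
            assert os.read(fd, len(payload)) == payload
        finally:
            os.close(fd)
            os.unlink(path)

    test_write_read_roundtrip()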
> and it'd still be impossible to test certain parts of the boot sequence
I'm neither a kernel nor a browser developer ;) but I would be interested in an example. You are probably right, but I fail to see what that would be.
Yeah, I think that's true. From my experience it's the same for testing software in general: you can mock things, but having actual tests that exercise the behaviour of dependencies is tricky, since you have to emulate those for your tests and that can be a lot of work.
But since it covers you for all future changes, it might still be worth it, because otherwise you either have to manually retest with actual hardware or accept a higher risk of bugs.
Last I checked, the testing strategy was “all maintainers’ computers should be on the nightly build and report bugs/crashes they experience”.
I wonder how Microsoft does this with Windows? A friend of mine worked on Intel GPU drivers, and they had entire farms of PCs where the tested driver would be installed on top of a fresh Windows installation. For kernels, you’d probably need something like that, but times a thousand…
If you haven't any experience of doing proofs of correctness, it may be worth getting some before being certain about what you prefer. Proofs are hard. Proofs of something like the kernel... dream on.
Time to unlearn some things about English that aren’t true. ‘Less’ is fine to use, ‘myriad’ can be a noun, ‘literally’ has always been used figuratively, and responding to ‘how are you?’ with ‘I’m good’ is perfectly fine. I’ve tried correcting people too, but the older I get, the more I find out that it’s me who’s wrong and that policing language is almost never correct.
“This rule is simple enough and looks easy enough to follow, but it's not accurate for all usage. The fact is that less is also sometimes used to refer to number among things that are counted.
“This isn't an example of how modern English is going to the dogs. Less has been used this way for well over a thousand years—nearly as long as there's been a written English language.”
This is what I can gather about "fewer versus less" and "myriad" from my Garner's Modern English Usage.
Fewer emphasizes number; less emphasizes quantity. So, I would ask myself if I prefer saying the number of bugs or the quantity of bugs, and then choose accordingly. I prefer number, here. On account of that, I prefer fewer.
As to myriad, though it can be used as a noun (and apparently its use as a noun predates its use as an adjective), written language for the past 150-plus years prefers the adjective. (When Garner is talking about written language, he's talking about quality writing.)
I'm not saying you're wrong. But, style is as much about a sense of style as anything. Keeping language from going to the dogs requires a bit of dressing up.
“This isn’t an example of how modern English is going to the dogs.”
It’s fine to hone your style and exercise your own preference, I have absolutely nothing against that. Correcting people is something completely different.
arwhatever made a specific comment about the use of "less mistakes" vs "fewer mistakes", not a broader claim about the use of fewer/less for countable/uncountable amounts.
I think the relevant part of the "Exceptions to the Rule" section on that page is:
> The use of less to modify ordinary plural count nouns (as in "made less mistakes") is pretty rare in writing and is usually better avoided, though it does occur frequently in speech.
The exceptions (referring to distances/sums of money/units of time and weight/statistical enumerations, phrases like "or less", and uses of "less" immediately following a number) aren't relevant here.
But both are "fine" in that they will be correctly understood.
I don’t follow your distinction about broad vs specific. The specific correction was offered based solely and entirely on the general rule of thumb, no? What other line do you see here?
> both are “fine” in that they will be correctly understood.
Agree! So much! I would go further, because I think this is the most important point: there is not a way to misunderstand the use of “less” in this context, which means that correcting people is purely pedantic, and not a functional issue or helping avoid potential misunderstanding.
Not only that, but sometimes people use “less” rather than “fewer” intentionally for countable things that can have a qualitative weight to them, as @danielheath pointed out. If the parent was trying to say that the bugs themselves could be both fewer in number and also less damaging in scope, then “less” and “fewer” have two different meanings, and “less” is the more correct word to use.
> use of less […] is pretty rare in writing and is usually better avoided
Yes, I chose to emphasize that the rule is not absolute. Lots of people claim the rule is absolute, and a correction implies that point of view. My point is that even if using ‘less’ is better avoided, unsolicited corrections in public are also better avoided, because the rule is not absolute and in this case cannot be mistaken or misunderstood.
Worth noting that HN comments are not formal writing, and social media by and large is closer to informal speech, so the snippet you quoted can be viewed as validating use of less in this specific case.
I mean I'm reading that and I just wonder if the grandparent only hired one person for that role; if it's that critical, you should never rely on one person. You need a team and thorough QA.
It does remind me of this article I once read about how things happened at NASA, with every line of code extensively documented and discussed, and teams competing with each other to find bugs.
For hiring orgs, I have a phrase that I repeat quite often on LinkedIn:
"How you hire is whom you hire."
Along the same lines, for the candidates, I might now have to add:
“The questions you ask are as important as the answers you give.”
That said, unfortunately, quite often the candidates' side of the process is a second class citizen. If I had $20 for every time the candidate was tossed a token “We have 5 minutes...got any questions...”, I'd have Bezos' FY money.
And now we're back to where we started: How you hire is whom you hire.
> quite often the candidates' side of the process is a second class citizen
One thing I find increasingly strange is that it is extremely taboo for the candidate to ever ask their interviewers technical questions of the same variety the candidate is asked.
As I've grown more senior in my experience it has become increasingly important to me that my peers and coworkers are as technically competent as they expect me to be.
An example: I was interviewing at a company that has a reputation for being fairly elite. The role wasn't my ideal role, but given the reputation of the company I would happily take a less desirable role if the team was truly world class and passionate about engineering.
While some of the interviewers were clearly excellent, there was one point in the process where I was being grilled on Python internals. It became increasingly clear that the interviewer's depth of knowledge was very likely limited to the set of questions they were asking me. The topic of threading in Python came up, and so I gave the usual mentions of the GIL and IO-bound vs CPU-bound tasks, the trade-off of multiprocessing versus threads, etc.
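(For context, the standard demonstration of that trade-off is something like the sketch below; the timings are machine-dependent, and it assumes stock CPython with the GIL.)

    # CPU-bound work barely gains from threads under the GIL (only one
    # thread runs Python bytecode at a time) but scales with processes.
    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def burn(n):
        while n:                 # pure-Python CPU-bound loop
            n -= 1

    def timed(executor_cls, workers=4):
        start = time.perf_counter()
        with executor_cls(max_workers=workers) as ex:
            list(ex.map(burn, [5_000_000] * workers))
        return time.perf_counter() - start

    if __name__ == "__main__":   # guard needed for ProcessPoolExecutor
        print(f"threads:   {timed(ThreadPoolExecutor):.2f}s")
        print(f"processes: {timed(ProcessPoolExecutor):.2f}s")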
However, I personally find the design decisions behind the GIL really interesting, since it brings up a nice discussion around memory management in Python. I brought some of this up casually at the end just to chat with the interviewer a bit, but it was clear that this was well outside the interviewer's understanding.
I just find it odd that it's fine for companies to aggressively grill you on a range of topics and walk through complex algorithms cases, but you aren't really supposed to try to get a feel for how technically skilled your potential new colleagues are.
But, to your point, the companies that I've enjoyed the most are ones where a technical discussion (rather than grilling) naturally breaks out during the interview.
I conducted an interview recently where I started by giving the candidate time for questions, and he occupied nearly all of the allotted time that way.
I wasn't certain whether he was trying to avoid being asked questions, but the questions he asked were insightful and useful to help evaluate him.
The solution is simple. Make the meeting more conversational, at least initially.
It's a form of dating, if you will. No one wants to be interrogated. No one wants to do all the talking or all the listening. Yet most interviews are some derivation of what would be a bad date by most reasonable accounts.
More eHarmony. Less Tinder.
p.s. I'd even recommend that prior to the first meeting, the company send an email / doc titled FAQs that covers the obvious stuff. Is it a new position or a backfill? How long has it been open? Next steps in the process? Not only does this inform the candidate and give great context, it shows a level of thoroughness and professionalism, etc.
Absolutely. I've done a lot of interviews for different companies, and it is clear when someone is asking a pre-canned list of questions vs. someone who has real questions and genuine interest. I wouldn't give a no-hire recommendation just because someone did that, but it is definitely off-putting. Genuine questions asked out of interest can lead to an interesting discussion which could give a boost to a candidate. (Obviously, actual skills are the most important thing, but when choosing between a couple of candidates, both seeming fits for a role, it could tip the scales.)
These days the companies are interviewed just as much as, if not more than, the candidates. It's the new reality, or at least current reality. Pushing back against it will push that position to the bottom of the pile for (anecdotally) many candidates.
I've certainly gotten into the habit of "interviewing the interviewer", in a reasonable sense. After all I'm looking to see if the job suits me just as much as the employer wants to make sure I suit the job.
But I've never considered asking a full interview's worth of questions over multiple sessions. "As much as, if not more than" seems really exaggerated. That implies that candidates are asking 30-60 minutes of questions straight, then doing it again a week or two later. Maybe even asking the employer to do a take-home? I can't imagine the look on an interviewer's face if I asked them to do a take-home.
This is the best interpretation of this list. It’s a great guide to understand what topics you care about. Maybe there are a couple questions in here that would raise red flags for you. You should pursue those as conversation points.
I don't think anyone would want or need to plow through 100-200 questions, but I'd certainly not take a job without getting good, clear answers to perhaps 30 of these.
It's also possible to get answers to many of these questions just by talking to your interviewer (no need to ask them interrogation-style), and as a candidate you might get answers to them as part of the interview process anyway. Many of the questions about the company can (and probably should) be something you research before interviewing, or even applying.
Another way to phrase this is that you met someone who was so much more talented than you expected and understood (in one dimension) that you almost rejected them to cover up your insecure ego, instead of being open to learning from someone who behaved differently. Congratulations on overcoming that and making the right decision.
The last interview loop I went through, I met with 10 different people along the way. I asked every single one of them: "What are the qualities that make for a successful engineer at XYZ?"
I got 10 different answers but with several common themes and threads running through them. Each individual is biased by their own experiences and perspective, but the commonalities are what speak to the culture.
No one person's answer was sufficient to give me a clear picture of the truth.