
There's no shortage of criticism of leetcode-style interview questions, but I found the system design interviews even more asinine.

I have never in my career had to do anything like designing a large scale system. Maybe I'm inadequate, maybe I've been insufficiently motivated, but it hasn't happened. If that's a requirement, say so and don't waste the time of applicants who don't know what a ring tokenizer is.

As it was, it turned into a ridiculous charade session where I watched a bunch of videos and regurgitated them as though I knew what I was talking about. "Oh yes, I'd use a column oriented database and put a load balancer in front".

Without any real-world experience it's just a bunch of BS. I'd never let someone like me design a large scale system - not even close. I don't want to design large scale systems; it sounds boring, and like the type of job where you're expected to be on call 24/7.

I've worked with the Linux kernel, I've written device drivers, I've programmed in everything from C to Go, and that's what I want to keep doing. Why put me through this?




> I have never in my career had to do anything like designing a large scale system.

Giving large scale system design interview questions for a role where someone never has to work with large scale systems would be a weird cargo cult choice.

However, when a job involves working with large scale systems, it's important to understand the bigger picture even if you're never going to be the one designing the entire thing from scratch. Knowing why decisions were made and the context within which you're operating is important for being able to make good decisions.

> I've worked with the Linux kernel, I've written device drivers, I've programmed in everything from C to Go, and that's what I want to keep doing. Why put me through this?

If you were applying to a job for Linux kernel development, device driver development, and C, then I wouldn't expect your interviewers to ask about large scale distributed web development either. However, if you're applying to a job that involves doing large scale web development, then your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.


Oddly, knowing the limitations of last year's designs can just as often limit you to last year's solutions. That is to say, the reasons things were done a certain way in the past almost always come down to resourcing constraints.

Yes, it is good to understand constraints. It is also incredibly valuable to be respectful of the constraints that folks were working under before you got there. Even better to be mindful of the constraints you are working under today, as well. With an eye for constraints coming down the line.

But the evidence is absurdly clear that large systems are grown far more effectively than they are designed. My witticism in the past has been that none of the large companies were built with the architectures that we seem to claim are required for growth and success. Worse, many of them were built with active derision for "best practices" coming from larger companies. Consider: do you know all of the design choices and reasons behind such things as the old Java GlassFish server?

Even more amusing is watching the slow tread of JSON down the path that was already covered by XML. In particular, the attempts at schemas and namespaces.


> large systems are grown far more effectively than they are designed

It's easy to bake in poorly scaling technical decisions at an early stage that take an obscene amount of engineering effort to undo once the scaling problem becomes obvious. I've seen intern-days of "savings" turn into senior-years of rework, and the scale in my corner of the world is tiny by SV standards.

I always assumed that SV companies experienced similar traumatic misadventures, multiplied up by scale, and baked "thinking at scale" into their technical interviews as a crude (but probably somewhat effective) countermeasure. Even if you only ever use the knowledge one time, indirectly and accidentally, by peer-pressuring your buddy into thinking before coding and therefore avoid a $10M landmine, it was all worthwhile.


It is just as easy to bake in large maintenance and runtime costs in early stage development. Worse, it is easy to bake aspirational growth ideas into the architecture that make it difficult to adjust as you go.

It's akin to thinking you need a large truck when a very cheap pickup will do. Will the pickup scale to larger jobs you may grow to take on? Of course not. But it will be far cheaper to operate and own at the start, so that you can spare the resources to get there.

Now, oddly, this can be taken in several directions. ORM is the poster child that folks love to hate on for how rigid it can be in a mid sized project. And it is also the poster child for how rapidly you can get moving with a database. Which is more important for a project? Really really hard to say, all told.


In contrast, there are a lot of systems out there designed to scale up really quickly, but never achieve the product-market fit to ever need this.

All that engineering for scalability would have been better applied toward things to find the right product-market fit.

It’s hard to strike the right balance of engineering in all aspects of a product. But I’d rather be at a company forced to pour hours of senior engineering effort into fixing scalability than one where things can scale to hundreds of millions of users, but you never attract more than a few thousand.


If they know the code base well, it shouldn't be that hard to undo intern-level shortcuts.

There's another failing here which is that quality wasn't gated well enough.


Hmm. This view works except where it doesn't. For example, if you don't pick the right ID/account/object # scheme so you can shard later on, good luck figuring out how to distribute and/or scale the issuing of said IDs years down the road. Some things will never need to be sharded. Some things will kill you if you can't. Every bit of your code is going to make assumptions about this, and you're maybe going to end up with a hot key that's hard to fix, or have to do weird contortions to split your infrastructure by country or region where there are laws or regulations about data residency.
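To make the ID point concrete, here's a minimal Snowflake-style sketch (the field widths and epoch are my own illustration, not any particular company's scheme). Because the shard is baked into every ID, any service can route a lookup without a central directory - but only if you picked a layout like this before handing out your first billion sequential IDs:

    import time

    EPOCH_MS = 1_577_836_800_000  # arbitrary custom epoch (2020-01-01 UTC)

    def make_id(shard_id: int, sequence: int) -> int:
        # 41 bits of milliseconds | 10 bits of shard | 12 bits of sequence
        ms = int(time.time() * 1000) - EPOCH_MS
        return (ms << 22) | (shard_id << 12) | sequence

    def shard_of(id_: int) -> int:
        # Any service can recover the shard from the ID alone.
        return (id_ >> 12) & 0x3FF

    uid = make_id(shard_id=7, sequence=0)
    assert shard_of(uid) == 7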

Here are a few others, without explanation of how they'll blow up on you: not being careful about the process around managing feature flags, doing all your testing manually, not doing system-level design reviews (including the outline of a scaling plan for the system as planned) prior to building systems, not building and testing every check-in, doing system releases when it seems good or by marketing requirements instead of on a regular cadence, not having dev/test/production built via IaC or at least by scripts that work in all envs. Not having runbooks.


>This view works except where it doesn't.

It only stops working in the worst case scenario though: LOTS of hastily written code (by interns?) that suddenly needs to scale and will take senior-level people years to rework.

Given that situation, most folks here would run the other way. That's years of toil for little career payoff, and a company in this situation is unlikely to be willing to pay for the best people to do it, since they didn't want to pay for that in the first place.

It's very likely something like this will just die or get rewritten and it's probably for the best.


But some things are obvious once you build up scar tissue from previous experience.

And scaling could mean "it might work on a developer's PC with 50 rows of data, but it won't work with our current production load because he didn't index a table".
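That failure mode is easy to demo. A toy sketch in Python/SQLite (the table and names are made up) showing the query plan flip from a full scan to an index search:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT)")

    q = "SELECT * FROM orders WHERE customer_id = 42"
    print(con.execute("EXPLAIN QUERY PLAN " + q).fetchall())
    # -> SCAN orders (full table scan: fine for 50 rows, death at 50M)

    con.execute("CREATE INDEX idx_customer ON orders(customer_id)")
    print(con.execute("EXPLAIN QUERY PLAN " + q).fetchall())
    # -> SEARCH orders USING INDEX idx_customer (customer_id=?)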


To me this is an entirely separate problem.

I’ve noticed that when less experienced people try to solve a problem, they have to look up how other people do it first.

But someone more experienced has a strong understanding of technologies on an abstract level, so they can whiteboard a solution without even involving any specific software (then compare to how others do it). When you think that way, you're not worrying about JSON or XML. You become neither tied to last year's tech nor too eager to try new tech. You just build something solid that's reliable and long-lasting.

Knowing about different tech used in different designs expands the pool of legos that you can snap together and so it can’t hurt.


There is a similar learning style. Basically, guess the answer and then compare to the other answers. Even before you know anything about it, all told.

That said, I have as often fallen into the trap of trying to build it myself first. So-called "first principles" thinking. That works far less often than folks think it does.


You missed the key statement in the commenter's post:

"If that's a requirement just say so"

Clearly the roles they're applying for are not concerned with the ab initio design of large-scale systems. Which is why they said what they said. They're not whining for the sake of whining.

> Your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.

A drop-in substitute, no. But an engineer who has the wherewithal to truly master the grisly low-level stuff can easily ramp up reasonably quickly in the large scale stuff as well, if needed. To not understand this is to not understand what makes good engineers tick.

We get the fact that, yeah, sometimes, for certain roles, a certain level of battle-tested skills is needed in any domain. Nonetheless, there's an epidemic of overtesting (in everything from algorithms, to system design, to "culture fit") coursing through the industry's veins at present. Combined with a curious (and sometimes outright bizarre) inability of these companies to think about what's truly required for the roles -- and to explain in simple, plain English what those requirements are in the job description, and to design the interview process accordingly.


The problem is that the system design interview somehow became a necessary component of the FAANG hiring process.


FAANG and similar companies typically subscribe to something like the "T shaped engineer" philosophy. They're making a conscious choice that their engineers should be comfortable in discussions about distributed systems, performance tradeoffs, etc. regardless of whether they do such things on a regular basis.


Certainly not at the FAANG I work at. We hire specialized engineers to work on device drivers and OS kernels and absolutely do not ask them questions on how to design distributed web services.

I encourage you to apply: https://www.apple.com/careers/us/


Why would you interview for a role at a FAANG company in the first place?


They want to exchange the most money possible for their labor?


This isn’t true, and even if it were, “most money possible” isn’t a meaningful metric.


So what companies pay more on average than FAANG[1] for developers?

The most money possible is far from meaningless. If I work for a company, which one will deposit the most in my bank account in a year and/or the most in my brokerage account when my stock vests?

[1] not literally FAANG, the most profitable public tech companies


Most Fortune 100 companies are competitive now, and the most money is extremely meaningless.


I assure you that the other Fortune 100 companies are not paying even in the same ball park as Facebook, Apple, Amazon, and Google.

https://finasko.com/fortune-100-companies/

What do you think the average compensation at those companies is?

I’m well aware of the comp levels at at least three of those companies because they are based in my former home town - Delta, Home Depot and Coke.

They pay their senior devs about the same amount as an intern I mentored got as a return offer (cash + stock).


I can assure you they are absolutely paying in the same ballpark as Facebook, Apple, Amazon, and Google; you just don't know how to limit your searches to the tech roles.


So give some numbers on what the F100 companies pay compared to FAANG.

I’m very familiar with at least three of the companies that are based in Atlanta - Delta, Coke and Home Depot.

Seeing that I’ve worked for corp dev for almost 25 years before joining BigTech.


Depends on the role. $500k total comp is common on the tech scale of the companies I'm familiar with. While not the $800k+ you might see at some FAANG in specific roles, it absolutely is enough to no longer consider FAANG if you're bothered even one iota by their hiring process.


Name a company. You keep being obtuse.


Fortune... 3. Not FAANG, but in retail. Can hit $500k total comp easily, if you live on either coast.


Not according to Levels

https://www.levels.fyi/companies/walmart/salaries

The highest one in retail is Walmart.

Level 5 comp for Walmart is about the same as a mid level developer at Amazon and Amazon is in the middle of the pack for tech compensation.

Again, you still refuse to name a company and level and numbers.


If you knew anything about this you'd know "Walmart" isn't what you search.

But if you knew anything about English you'd know "ballpark" doesn't mean "exactly equal".


You said top 3 company in retail. Walmart is the highest in the F100 in retail. Again, name names. If you can't, you're obviously full of it.


Like I said, if you don't understand what's going on here, you're not qualified to have this conversation.


You’re right, I’ve only been in the industry for 27 years across 8 companies, including my current one at BigTech, where I do cloud consulting for other large enterprise companies.

What do I know?

And yet you still haven’t been able to prove your claims that “most” pay about the same.


You linked to a site that literally showed Walmart's non-tech arm (hint: the majority of the tech jobs at Walmart aren't under 'Walmart' on levels.fyi) paying comparable salaries (you yourself compared them)...

But yeah, all those 27 years of experience totally weren't just you sitting in a chair somewhere, being "part of the team". Got it.

> prove your claims that “most” pay about the same.

first prove I claimed that (hint: I didn't)

I'm starting to think you got fooled into thinking BigTech was the only option, and are now discovering how untrue that actually was.


> You linked to a site that literally showed Walmart's non-tech arm (hint: the majority of the tech jobs at Walmart aren't under 'Walmart' on levels.fyi) paying comparable salaries (you yourself compared them)...

Now I’m still waiting for you to prove your claims which were “Most Fortune 100 companies are competitive now” and you haven’t provided a single link.

> I'm starting to think you got fooled into thinking BigTech was the only option, and are now discovering how untrue that actually was.

Well

A) seeing that when I started working, only one of the current FAANG companies existed, I know that FAANG isn’t my only option.

B) seeing that I specialize in cloud architecture and modernization + dev - i.e., “system design” - I think I’m at the right “F100” company.


I don't owe you any links, I am not here to do your research for you. You already found one company that pays similarly, levels.fyi will give you dozens of others, you even named a few who do pay in a similar ballpark as FAANG.

Besides, it's not actually at all clear if you care about this topic genuinely or are just being a jackass online, so you can forgive me if I don't take your demand for my effort more seriously.


They pay a lot?


In retrospect it was a horrible mistake.


They do some impressive stuff


Salary, I suppose?


My understanding is that this is no longer the case for the more junior software engineer positions at Google and Amazon, where engineers are expected to learn system design before being promoted. If you are applying to a more senior position, then yes, there should be a question about system design, and yes, you will probably be doing system design in your work, so it's completely fair game.


And the second part of this is that just as non-rich people tend to consider themselves soon-to-be millionaires temporarily down on their luck, most startups (especially the VC funded ones making big promises to investors) tend to consider themselves soon-to-be FAANGs temporarily in the early phase of their inevitable hockey stick growth.


Is this a problem? I would argue that this style of interviewing is much more realistic to day-to-day activities than leetcode.


You'd be arguing wrong, imo. No one sits down solo and has to design a system to scale in isolation, and if you do, then something up the chain from that moment went very wrong.

It's a pointless academia-by-proxy situation that encourages filling out teams with the kinds of people who can architect and tinker forever but have no capability to actually deliver software anyone wants to use. This becomes more clear when you look across the last 10 years in FAANG and list out what products have actually been delivered that are improvements for users and customers vs what's just infrastructure padding or brought in through acquisition.


In both FAANG jobs I had I was expected to design systems solo, and then review them with my team. If the system is complex enough, I would probably whiteboard it first with some teammates while I was designing it.

It is something that was asked of me in interviews, and comes up often in my day to day job. And being able to design systems, and to help review systems others are designing, is probably the single biggest impact thing I do regularly.

It is more useful day to day than the algorithmic knowledge that was also asked of me during interviews. While there are people who do use complex algorithms at both companies, most software at Google is converting a protocol buffer into another protocol buffer, and at Amazon it's the same thing but with JSON. If you are a frontend engineer, you might convert into HTML by plugging the values into a template engine.


I have really good analytical skills, which I leverage to tackle issues in large scale systems piecewise. I have to suspend my skeptical mind and switch to blue hat thinking to come up with something from scratch. Then I take it apart and iterate over it. I don't think large scale system design is a straightforward process, and pretending it is may very well lead to living in interesting times.


Agreed 100%. I spent ten years at Google, got promoted three times, never did any distributed systems stuff. When I decided to leave about 18 months ago, trying to cram for these interviews and memorize how WhatsApp works was the worst part of interviewing. But I jumped through the hoop and got a couple of offers, neither of which involves doing distributed systems work. Those were definitely my weakest interviews, and in the case of my current employer, I literally said to my HM, "I've never done this kind of work, and I'm not going to shine in these interviews. Here's the kind of work I have done, and I'd love to talk to you about it instead." I still had to do the sys design interview, but I think maybe that helped get it down-weighted?


I never did any system design, and got offers at Meta and Google (I even aced the system design at Google). It's not a very discriminating interview, I believe. And I found it quite fun to prepare for.


>And I found it quite fun to prepare for.

What resources did you use?


Quite a lot. I read "Designing Data-Intensive Applications" many times, which I highly recommend. There's a book called "System Design Interview", I believe, that is a summary of the most typical designs. There are also a bunch of videos on YouTube. I read some research papers on classic designs. I played with some typical components, such as NoSQL DBs. I even implemented some prototypes.


Do you have any pointers to these research papers?


Not off the top of my head, but they are usually cited in the blogs or videos that present some classic systems. And "Designing Data-Intensive Applications" has all the references. That being said, I don't think it's worth getting that deep for system design interview preparation unless you're already quite advanced. Retrospectively, I think I spent too much time on advanced material, overestimating what was required for these roles.


I'm also a systems programmer for the most part (kernel, hardware bring-up, low level firmware, performance tuning of embedded systems), and I got invited a couple of times to FAANG interviews, and all of them had this system design interview with NoSQL column databases and load balancers behind nginx proxies of some sort. The problem is that it's pretty far from my field.

Are you going to ask a cloud architect who connects frontend to backend with databases at scale a lot of questions about how to write a PCI Express network driver in the Linux kernel with great performance too?

I would like to be hired by a specialized hiring team at those big companies instead of going through the general hiring process, where you're expected to be a technical god who knows everything at an expert level.

I rejected all those interview requests.


I have designed large scale systems, but I often feel most of my system design interviewers haven’t and are much more pedantic/nitpicky than is reasonable. Leetcode is better because most people at least understand the questions they’re asking.

Last time I did a system design interview I mentioned database triggers as a way to maintain some kind of data invariant, which flustered my interviewer and I guess made it so their canned follow up questions didn’t work, so they asked if I could think of any other approach (the one they had in mind). I couldn’t and it made the interview very painful.
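For the curious, the kind of thing I had in mind, sketched with SQLite for brevity (the overdraft invariant is a made-up example, not the interview question):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL);
        -- Invariant: no negative balances. Enforced in the database itself,
        -- so app code, backfills and ad-hoc SQL are all covered.
        CREATE TRIGGER no_overdraft
        BEFORE UPDATE OF balance ON accounts
        WHEN NEW.balance < 0
        BEGIN
            SELECT RAISE(ABORT, 'balance would go negative');
        END;
    """)
    con.execute("INSERT INTO accounts VALUES (1, 100.0)")
    try:
        con.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
    except sqlite3.IntegrityError as e:
        print(e)  # balance would go negative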


This is a good example of a bad interviewer. Never have a specific concrete answer in mind when asking an open-ended question.


For the really good candidates (i.e., ones who can answer basic programming questions and explain how a hash table works), I have an open-ended question which is really an open question in an active field. I really hoped I'd get candidates who would get that far (https://arxiv.org/abs/1309.2975) so that I could finally get some interesting answers I haven't heard before, but most candidates I end up interviewing struggle to explain a hash table and what its advantages/disadvantages for counting unique k-mers are.
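For reference, the baseline hash-table answer fits in a few lines (a minimal sketch; the interesting discussion is about its memory behavior, not the code):

    from collections import defaultdict

    def count_kmers(seq: str, k: int) -> dict:
        # One O(n) pass with O(1) expected insert/lookup per k-mer.
        # The catch: the table holds every distinct k-mer, so memory
        # can dwarf the input for large k on real genomes.
        counts = defaultdict(int)
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
        return dict(counts)

    print(count_kmers("ACGTACGT", 4))
    # {'ACGT': 2, 'CGTA': 1, 'GTAC': 1, 'TACG': 1}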


Or, if you want the candidate to talk about a certain approach, maybe think of some ways you can nudge them in that direction. If something's not at top of mind, some subtle hints from the interviewer could trigger a discussion of what they want to talk about. The interviewer shouldn't be tied to a "script".


I was once interviewing someone who came up with a design I didn’t understand and he walked me through it on the board. He was an immediate hire.

I learned a lot from him and asked him for advice when I had an architectural decision to make at my next job even though we didn’t work together.


You just reminded me of the time I was explaining how to do consensus failover, and the interviewer asked me to do distributed single thread design. Great use of everyone's time. The market is full of LARPers who wish they had the chance to design something real, and they will take it out on you.


I find system design interviews generally provide significantly more signal than coding interviews, but still less signal than a lunch interview.

It's a test of breadth and depth and it takes an apt, possibly experienced interviewer to navigate the candidate's domain knowledge effectively. The goal isn't to build some elaborate, buzzwordy house of cards, but to understand where tools are appropriate and where they are not, to think on your feet and work with an interviewer to build a system that makes sense. And, just like the real world, criteria should shift and change as you flesh out the design.

In particular people trip over the same things every time: reaching for things they do not understand, not understanding fundamental properties of infrastructure (CPUs, memory, networking), and cache invalidation.

When I interview folks I always preface the prompt with an offer to provide advice or information, acting as if I were a trusted colleague or stakeholder.

If you want to be low level, then I'd question why we're conducting a large system design interview anyways. We could certainly frame it as a small system design instead, and focus on the universe contained within the injection-molded exoskeleton of the widget.

If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.

Building large systems in the real world well, and watching them scale up under load with grace (and without contorting your opex to have only lunar aspirations) is somewhat akin to watching your child ride their bike for the first time after the training wheels are off. It feels good. Just like seeing your hardware in the field produce a low failure yield.

There is satisfaction in doing good work, or at least there should be.


> but still less signal than a lunch interview

Careful with that line of thinking. There’s a significant body of research showing that people feel like a “chat about tech” interview provides the best signal, but it empirically performs the worst with a roughly 50% correlation to on-the-job performance. You’re better off flipping a coin because at least then you’re not biased.

source: https://en.wikipedia.org/wiki/Noise:_A_Flaw_in_Human_Judgmen...


> but it empirically performs the worst with a roughly 50% correlation to on-the-job performance. You’re better off flipping a coin because at least then you’re not biased.

I was going to point out that a correlation of 50% is pretty good (especially for predicting job performance from a single interview), whereas flipping a coin has 0% correlation with anything that is independent of the coin flip (such as job performance).

You probably meant to say the probability you rank job-performance of two random candidates correctly based on an interview is about 50% (what your source calls "percent concordant"), a correlation of 0%.
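A quick simulation makes the distinction concrete (toy numbers with uniform scores; nothing here is about real interviews):

    import random

    # Zero correlation between interview score and job performance still
    # gives ~50% "percent concordant": the better-scoring of two random
    # candidates is also the better performer half the time - a coin flip.
    random.seed(0)
    n = 100_000
    trials = [((random.random(), random.random()),   # candidate A: (score, perf)
               (random.random(), random.random()))   # candidate B: (score, perf)
              for _ in range(n)]
    concordant = sum((a[0] - b[0]) * (a[1] - b[1]) > 0 for a, b in trials)
    print(concordant / n)  # ~0.50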

Out of curiosity: do you remember which section of the book looks at "chat about tech" interviews compared to other kinds of tech interviews in regards to their job performance prediction capabilities?


You are right, I got my stats terms mixed up.

The "Improving Judgements" portion uses interviews as a case study and builds up from "just have a chat" to the typical multi-round panel interview with pre-defined rubrics that we see in tech these days. When done correctly, the book suggests this is the best we can do short of hiring everyone and firing low performers soon after.

I remember they specifically mention Google as one of the companies where they ran a study linking interview practices/decisions to on-the-job performance.


A lunch interview is often a great way to get some signal about the company and your potential coworkers. It's not so much asking questions and getting answers, but you often get a chance to see the team interacting _with each other_, and sometimes you can get a view into what sort of issues they are really dealing with.


Hiring well is all about collecting signal. I can reliably collect more signal in a lunch interview than a coding interview. But I never said anything about what the signal represented, and it certainly doesn't represent anything that correlates to performance.

Passing a lunch interview should be the easiest thing in the world. Just don't be brazenly unethical or immoral, a complete douchebag, a sexist scumbag or a racist shithead. It's amazing how many people fail at this simple task. It doesn't take a lot of signal to fail.

It's far more important that you can have a civil conversation with a reasonable, level headed person than it is for them to be able to solve fizz buzz in 30 minutes.


> If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.

And I would have happily repeated something from a video I'd watched on YouTube two nights earlier. It's a cram test.


I don't know how I could prove or verify this, but in my experience it feels very easy to detect a difference between people who understand a systems topic and people who've treated it like a cram test. I recall one interview in particular where the guy gave textbook answers to anything like "what technology would you use here" or "what are the benefits of X vs. Y", but fell apart completely whenever I scratched the surface for an implementation detail.


You verify it by asking them to walk through their previous experience and why they made the tradeoffs they did.

You can also ask them “knowing what you know now, what would you have done differently?”.

That lets them talk about practical experience and theoretical knowledge.

When I used to interview infrastructure people, I could tell quickly the ones who only knew anything based on cramming with ACloudGuru.

On a related note: when I work with customers consulting in cloud application development, I am quick to distinguish between what I know well where I have practical experience, what I can ramp up on quickly based on related knowledge and what I only know from watching a video.


I built one large scale system in my career (when I worked at Google, I made a folding@home screensaver that used up idle cycles in production).

When I built it I ignored 95% of what Google knew about large scale system design because that knowledge was really about building scalable web services and storage systems, while I needed to build a batch scheduler which could handle many millions of tasks simultaneously. We depended on a few core scalable resources available in production (borg, chubby, bigtable, colossus) and tried as hard as possible to avoid spamming them with excessive traffic without adding lots of complex caches and other hard-to-debug systems. In fact, "simplicity" was the primary design goal. The system worked, it scaled, and if I'd followed all the normal Google guidelines, it wouldn't have (because scientific computation and web load balancers differ). Not sure what to take away from that.

These days in system design interviews I usually focus on limiting the use cases for the system so that I can architect something that: has linear resource consumption for linear workloads over 2-3 orders of magnitude, is simple enough that a small group of engineers can understand the whole system and debug it when it breaks, and doesn't try to optimize for future use cases (clearly documenting the limits of the system) or accommodate too many oddball one-off user requests.


I have designed a few systems, but my issue with the system design interview is that this is not how it works. There is never a blank page in real life like it is in one of these interviews, and the stuff that's actually on the page matters more than the stuff you're "supposed" to say in these sessions.

Yes, column-oriented databases absolutely do work better for OLAP use cases, but is the improvement enough for the specific use case to be worth introducing a new database technology into the organization, or would a new database within the existing managed psql instance be good enough for now? Those detailed organizational questions usually matter more in the first few iterations of systems than "principled" architectural concerns.
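The technical half of that tradeoff is small enough to sketch (a toy illustration of the access pattern, not of real storage engines):

    # Row store: each row's fields travel together, so an aggregate over
    # one column drags every other field through memory. Column store:
    # one contiguous array per column, so the aggregate touches only the
    # values it needs (and they compress far better).
    rows = [(i, "user%d" % i, i * 1.5) for i in range(1_000_000)]
    totals = [r[2] for r in rows]          # same data, column-oriented

    olap_row = sum(r[2] for r in rows)     # scans 3 fields to use 1
    olap_col = sum(totals)                 # scans exactly what it needs
    assert olap_row == olap_col

The organizational half - who runs it, who gets paged for it, who already knows it - is the part these interviews rarely get to.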

The useful kind of design is: What is the next best iteration of this system? Maybe with an appendix at the end discussing some ideas for the iteration after that. Sometimes that next iteration is actually the first iteration of the system, in which case you should definitely not be drawing 20 boxes with different components of how it will fit together, you should be looking for the simplest possible thing that could work.

One of the fun things at big successful companies is that there are actually a lot of systems that are quite a few iterations in, with a stable baseline of usage properties. With these, it actually can make sense to draw a bunch of boxes with different components targeting different well-known pain points in a way that avoids trading off important existing capabilities. But again, that's exactly the opposite of a blank page, and no amount of digging into the interviewer's toy system design question can get deep enough for that.

All of my answers to these questions - which have always been very well-received - have been over-engineered solutions that I'd never actually pursue in a real job. But interviewers aren't really prepared for questions like "what frameworks are already being used and familiar to most teams at the company?" followed by "since we already have familiarity with postgresql+rails+react, we should set up a new non-replicated but periodically backed up database in that database instance, start with a few tables with some reasonable relationships, use activerecord and its migrations, and implement the front-end in react, then we should launch it and see what bottlenecks and pain points arise".

I get it, these interviews are trying to see if you have the knowledge and chops to solve those pain points and bottlenecks as they arise, but I'm sorry, "do an up-front design of a huge fancy system" just doesn't answer that question.


I was asked how I would replace a large scale but inadequate logging infrastructure, and I started with "In a way that minimally disrupts everything currently in place for monitoring and alerting".

I'm not sure how well that answer played out but it's still the correct answer.


I agree.

I am shocked, even though I shouldn't be, that this is part of the technical interview for devs and that devs can pass these interviews without demonstrating practical experience.

I’ve had interviews and jobs at three smaller companies where I was actually coming in as an architect (2016-2020). But they wanted to know about my real world experience.

Luckily, at my second job out of college way back in 1999, I actually had to manage servers as well as develop, so I could talk both theory and practice.

From 1999-2012, I was managing infrastructure as part of my job at two companies.

I’ve never interviewed at BigTech for a software engineering position. But I did do a slight pivot and interview (and presently work) in cloud consulting at BigTech specializing in “application modernization” - cloud DevOps + development.

Sure I had the one initial phone screen where I had to talk theory about system design. But my entire loop consisted of my walking through my past system design experience - and not all centered around AWS.

And yes I can talk intelligently about all of the sections that the page covers including your example of columnar vs row databases. But I wouldn’t expect that from most devs.

I was never on call at my last job. We had “follow the sun” support. But our site was only business critical during the day. One of the first things I insisted on with my CTO is that we hire a managed service provider for non-business-hours support.

Sort of related: at most tech companies, the difference between mid and Senior is not coding ability. It’s system design and “scope” and “impact”


Interesting take. Different strokes for different folks? You aren't right or wrong here, it's preference.

That being said I am in the opposite camp and I find that more and more, the systems that I am building and maintaining are large and distributed.

Despite what a lot of commenters here on HN will say - yes there absolutely are businesses out there that need tooling like Kubernetes and huge column databases.


IMO being able to think about the wider implications for smaller subsets of a system is an important ability. That being said, if your organization allows ICs to make technical decisions without any sort of review from someone whose job it is to architect large systems, idk, that review seems like something you should have.

It also depends greatly on what 'layer' of the stack you work in.


There's a difference between talking about the wider implications of a system and acting like software can be planned top-down with any sense of accuracy.


If you want to ask me about the wider implications for smaller subsystems, then ask me about the Linux kernel. How did it come to be that system design is the only way to demonstrate this particular skill?


I think they're BS also. What you should be looking for is whether the person understands how the project(s) they have worked on fit into a larger system. For instance, I have a high level understanding of how the other systems at the Co I work for work. I know what they do, I know what data they ingest and expel. I know how the data I consume is generated. I know how it all fits together, and I can talk conversantly about it. I know when a change request comes in whether it affects other teams, and I speak up when it does.

I think this is what you should look for in most candidates except high level system engineers and jr developers.


It's a good way to filter out experienced candidates. The requirement is to be a junior dev that can take any BS that is dumped on him.


System design questions are way more relevant to day-to-day work than leetcode questions.

Nothing stops the interviewer from asking you even more relevant system design questions like:

* How would you build a Linux kernel from scratch?

* How would you design a common interface for any device drivers (that you are familiar with)?


If you apply to a generalist SWE role at FAANG, you're expected to have some knowledge about these things since you're likely to encounter them. I didn't have such experience either, but it's now part of my day to day job.

If you apply for a targeted specialized role, you may get a system design question that relates to your domain. If there's a general system design interview, it should have less weight in the process. It's still an indicator of how candidates communicate and think through a design. Plus, these kinds of things should be general knowledge for a computer scientist nowadays.


> If you apply for a targeted specialized role, you may get a system design question that relates to your domain.

If you apply for a specialized targeted role.... you still get the same generalized interview loop


Depends on the company. Sometimes you get extra interviews in addition to the generalist ones.



