Learn how to design systems at scale and prepare for system design interviews (github.com/karanpratapsingh)
215 points by vinnyglennon 12 months ago | 143 comments



There's no shortage of criticism of leetcode-style interview questions, but I found the system design interviews even more asinine.

I have never in my career had to do anything like designing a large scale system. Maybe I'm inadequate, maybe I've been insufficiently motivated, but it hasn't happened. If that's a requirement, say so and don't waste the time of applicants who don't know what a ring tokenizer is.

As it was, it turned into a ridiculous charade session where I watched a bunch of videos and regurgitated them as though I knew what I was talking about. "Oh yes, I'd use a column oriented database and put a load balancer in front".

Without any real-world experience it's just a bunch of BS. I'd never let someone like me design a large scale system - not even close. I don't want to design large scale systems; it sounds boring and like the type of job where you're expected to be on call 24/7.

I've worked with the Linux kernel, I've written device drivers, I've programmed in everything from C to Go, and that's what I want to keep doing. Why put me through this?


> I have never in my career had to do anything like designing a large scale system.

Giving large scale system design interview questions for a role where someone never has to work with large scale systems would be a weird cargo cult choice.

However, when a job involves working with large scale systems, it's important to understand the bigger picture even if you're never going to be the one designing the entire thing from scratch. Knowing why decisions were made and the context within which you're operating is important for being able to make good decisions.

> I've worked with the Linux kernel, I've written device drivers, I've programmed in everything from Fortran to Go, and that's what I want to keep doing. Why put me through this?

If you were applying to a job for Linux kernel development, device driver development, and Fortran then I wouldn't expect your interviewers to ask about large scale distributed web development either. However, if you're applying to a job that involves doing large scale web development, then your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.


Oddly, knowing the limitations of last year's designs can just as often limit you to last year's solutions. That is to say, the reasons things were done the way they were in the past almost always come down to resourcing constraints.

Yes, it is good to understand constraints. It is also incredibly valuable to be respectful of the constraints that folks were working under before you got there, and even better to be mindful of the constraints you are working under today, with an eye toward the constraints coming down the line.

But the evidence is absurdly clear that large systems are grown far more effectively than they are designed. My witticism in the past was that none of the large companies were built with the architectures we now claim are required for growth and success. Worse, many of them were actively built with derision for "best practices" coming from larger companies. Consider: do you know all of the design choices and reasons behind something like the old Java GlassFish server?

Even more amusing is watching the slow tread of JSON down the path that was already covered by XML, in particular the attempts at schemas and namespaces.


> large systems are grown far more effectively than they are designed

It's easy to bake in poorly scaling technical decisions at an early stage that take an obscene amount of engineering effort to undo once the scaling problem becomes obvious. I've seen intern-days of "savings" turn into senior-years of rework, and the scale in my corner of the world is tiny by SV standards.

I always assumed that SV companies experienced similar traumatic misadventures, multiplied up by scale, and baked "thinking at scale" into their technical interviews as a crude (but probably somewhat effective) countermeasure. Even if you only ever use the knowledge one time, indirectly and accidentally, by peer-pressuring your buddy into thinking before coding and therefore avoid a $10M landmine, it was all worthwhile.


It is just as easy to bake large maintenance and runtime costs into early-stage development. Worse, it is easy to bake aspirational growth ideas into the architecture that make it difficult to adjust as you go.

It is akin to thinking you need a large truck when a very cheap pickup will do. Will the pickup scale to larger jobs you may grow to take on? Of course not. But it will be far cheaper to operate and own at the start, so that you can spare the resources to get there.

Now, oddly, this can be taken in several directions. ORM is the poster child that folks love to hate for how rigid it can be in a mid-sized project, and it is also the poster child for how rapidly you can get moving with a database. Which is more important for a project? Really hard to say, all told.


In contrast, there are a lot of systems out there designed to scale up really quickly that never achieve the product-market fit to ever need it.

All that engineering for scalability would have been better applied toward things to find the right product-market fit.

It’s hard to strike the right balance of engineering in all aspects of a product. But I’d rather be at a company forced to pour hours of senior engineering effort into fixing scalability than one where things can scale to hundreds of millions of users, but you never attract more than a few thousand.


If they know the code base well, it shouldn't be that hard to undo intern-level shortcuts.

There's another failing here which is that quality wasn't gated well enough.


Hmm. This view works except where it doesn't. For example, if you don't pick the right ID/account/object # scheme so you can shard later on, good luck figuring out how to distribute and/or scale the issuing of said IDs years down the road. Some things will never need to be sharded. Some things will kill you if you can't. Every bit of your code is going to make assumptions about this, and you may end up with a hot key that's hard to fix, or have to do weird contortions to split your infrastructure by country or region where there are laws or regulations about data residency.
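For concreteness, here is a minimal sketch of the kind of ID scheme that is easy to shard later, loosely Snowflake-style (the field widths and epoch are assumptions for illustration, not anyone's production layout):

    import time

    # Hypothetical 64-bit ID: 41 bits of milliseconds since a custom epoch,
    # 10 bits identifying the shard, 12 bits of per-millisecond sequence.
    EPOCH_MS = 1_600_000_000_000  # assumed custom epoch

    def make_id(shard: int, sequence: int) -> int:
        """Pack timestamp, shard, and sequence into one roughly time-sortable integer."""
        ms = int(time.time() * 1000) - EPOCH_MS
        return (ms << 22) | ((shard & 0x3FF) << 12) | (sequence & 0xFFF)

    def shard_of(id_: int) -> int:
        """Recover the owning shard from the ID itself, no lookup table needed."""
        return (id_ >> 12) & 0x3FF

    uid = make_id(shard=7, sequence=1)
    assert shard_of(uid) == 7

If instead a plain auto-increment integer gets baked into every table and every consumer, adding that shard information years later means touching all of them.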

Here are a few others, without explanation of how they'll blow up on you: not being careful about the process around managing feature flags, doing all your testing manually, not doing system-level design reviews (including the outline of a scaling plan for the system as planned) prior to building, not building and testing every check-in, doing releases when it seems good or by marketing requirements instead of on a regular cadence, not having dev/test/production built via IaC or at least by scripts that work in all environments. Not having runbooks.


> This view works except where it doesn't.

It only stops working in the worst-case scenario though: LOTS of hastily written code (by interns?) that suddenly needs to scale and will take senior-level people years to untangle.

If given that situation, most folks here would run the other way. That's years of toil for little career payoff, and a company in this situation is unlikely to be willing to pay for the best people to do it since they didn't want to pay for that in the first place.

It's very likely something like this will just die or get rewritten and it's probably for the best.


But some things are obvious once you build up scar tissue from previous experience.

And scaling could mean “it might work on a developer's PC with 50 rows of data, but it won’t work with our current production load because he didn’t index a table”.
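For a tiny, concrete illustration of that failure mode, here is a sketch using Python's built-in sqlite3 (the table and column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                     [(i % 1000, i * 0.5) for i in range(50_000)])

    # Without an index this is a full table scan: fine at 50 rows, painful at 50M.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())

    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # Now the planner can walk the index instead of scanning every row.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())

The fix is one line; finding it in production, under load, is the expensive part.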


To me this is an entirely separate problem.

I’ve noticed that when less experienced people try to solve a problem, they have to look up how other people do it first.

But someone more experienced has a strong understanding of technologies on an abstract level, so they can whiteboard a solution without even involving any specific software (then compare to how others do it). When you think that way, you're not worrying about JSON or XML. You become neither tied to last year's tech nor too eager to try new tech. You just build something solid that's reliable and long-lasting.

Knowing about different tech used in different designs expands the pool of legos that you can snap together and so it can’t hurt.


There is a similar learning style: basically, guess the answer and then compare it to the other answers, even before you know anything about the topic, all told.

That said, I have just as often fallen into the trap of trying to build it myself first. So-called "first principles" thinking. That works far less often than folks think it does.


You missed the key statement in the commenter's post:

"If that's a requirement just say so"

Clearly the roles they're applying for are not concerned with the ab initio design of large-scale systems. Which is why they said what they said. They're not whining for the sake of whining.

> Your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.

A drop-in substitute, no. But an engineer who has the wherewithal to truly master the grisly low-level stuff can ramp up reasonably quickly on the large scale stuff as well, if needed. To not understand this is to not understand what makes good engineers tick.

We get the fact that, yeah, sometimes, for certain roles, a certain level of battle-tested skills is needed in a given domain. Nonetheless, there's an epidemic of overtesting (on everything from algorithms to system design to "culture fit") coursing through the industry's veins at present, combined with a curious (and sometimes outright bizarre) inability of these companies to think about what's truly required for the roles, to explain in simple, plain English what those requirements are in the job description, and to design the interview process accordingly.


The problem is that the system design interview somehow became a necessary component of the FAANG hiring process.


FAANG and similar companies typically subscribe to something like the "T shaped engineer" philosophy. They're making a conscious choice that their engineers should be comfortable in discussions about distributed systems, performance tradeoffs, etc. regardless of whether they do such things on a regular basis.


Certainly not at the FAANG I work at. We hire specialized engineers to work on device drivers and OS kernels and absolutely do not ask them questions on how to design distributed web services.

I encourage you to apply: https://www.apple.com/careers/us/


Why would you interview for a role at a FAANG company in the first place?


They want to exchange the most money possible for their labor ?


This isn’t true, and even if it were, “most money possible” isn’t a meaningful metric.


So which companies pay more on average than FAANG[1] for developers?

The most money possible is far from meaningless. If I work for a company, which one will deposit the most in my bank account in a year and/or the most in my brokerage account when my stock vests?

[1] not literally FAANG, the most profitable public tech companies


Most Fortune 100 companies are competitive now, and "the most money possible" is largely meaningless.


I assure you that the other Fortune 100 companies are not paying even in the same ball park as Facebook, Apple, Amazon, and Google.

https://finasko.com/fortune-100-companies/

What do you think the average compensation at those companies is?

I’m well aware of the comp levels at at least three of those companies because they are based in my former home town - Delta, Home Depot and Coke.

They pay their senior devs about the same amount as an intern I mentored got as a return offer (cash + stock).


I can assure you they are absolutely paying in the same ballpark as Facebook, Apple, Amazon, Google, you just don't know how to limit your searches to the tech roles.


So give some numbers for what the F100 companies pay compared to FAANG.

I’m very familiar with at least three of the companies that are based in Atlanta - Delta, Coke and Home Depot.

Seeing that I’ve worked for corp dev for almost 25 years before joining BigTech.


Depends on the role. $500k total comp is common on the tech scale of the companies I'm familiar with. While not the $800k+ you might see at some FAANG in specific roles, it absolutely is enough to no longer consider FAANG if you're bothered even one iota by their hiring process.


Name a company. You keep being obtuse.


Fortune... 3. Not FAANG, but in retail. You can hit $500k total comp easily, if you live on either coast.


Not according to Levels

https://www.levels.fyi/companies/walmart/salaries

The highest one in retail is Walmart.

Level 5 comp for Walmart is about the same as a mid-level developer at Amazon, and Amazon is in the middle of the pack for tech compensation.

Again, you still refuse to name a company, a level, and numbers.


If you knew anything about this you'd know "Walmart" isn't what you search.

But if you knew anything about English you'd know "ballpark" doesn't mean "exactly equal".


You said a top 3 company in retail. Walmart is the highest in the F100 in retail. Again, name names; if you can't, you're obviously full of it.


Like I said, if you don't understand what's going on here, you're not qualified to have this conversation.


You’re right, I’ve only been in the industry for 27 years across 8 companies including my current one working at BigTech where I do cloud consulting for other large enterprise companies.

What do I know?

And yet you still haven’t been able to prove your claims that “most” pay about the same.


You linked to a site that literally showed Walmart's non-tech arm (hint: the majority of the tech jobs at Walmart aren't under 'Walmart' on levels.fyi) paying comparable salaries (you yourself compared them)...

But yeah, all those 27 years of experience totally weren't just you sitting in a chair somewhere, being "part of the team". Got it.

> prove your claims that “most” pay about the same.

first prove I claimed that (hint: I didn't)

I'm starting to think you got fooled into thinking BigTech was the only option, and are now discovering how untrue that actually was.


> You linked to a site that literally showed Walmart's non-tech arm (hint: the majority of the tech jobs at Walmart aren't under 'Walmart' on levels.fyi) paying comparable salaries (you yourself compared them)...

Now I’m still waiting for you to prove your claims which were “Most Fortune 100 companies are competitive now” and you haven’t provided a single link.

> I'm starting to think you got fooled into thinking BigTech was the only option, and are now discovering how untrue that actually was.

Well

A) seeing that when I started working, only one of the current FAANG companies existed, I know that FAANG isn’t my only option.

B) seeing that I specialize in cloud architecture and modernization + dev - i.e., "system design" - I think I'm at the right "F100" company.


They pay a lot?


In retrospect it was a horrible mistake.


They do some impressive stuff


Salary, I suppose?


My understanding is that this is not the case anymore for the more junior software engineer positions at Google and Amazon, where engineers are expected to learn system design before being promoted. If you are applying to a more senior position, then yes, there should be a question about system design, and yes, you will probably be doing system design in your work, so it's completely fair game.


And the second part of this is that, just as all non-rich people tend to consider themselves soon-to-be millionaires temporarily down on their luck, most startups (especially the VC-funded ones making big promises to investors) tend to consider themselves soon-to-be FAANGs temporarily in the early phase of their inevitable hockey-stick growth.


Is this a problem? I would argue that this style of interviewing is much closer to day-to-day activities than leetcode.


You'd be arguing wrong, imo. No one sits down solo and has to design a system to scale in isolation, and if you do, then something up the chain from that moment went very wrong.

It's a pointless academia-by-proxy situation that encourages filling out teams with the kinds of people who can architect and tinker forever but have no capability to actually deliver software anyone wants to use. This becomes clearer when you look across the last 10 years at FAANG and list out which products have actually been delivered that are improvements for users and customers vs. what's just infrastructure padding or bought in through acquisition.


In both FAANG jobs I had I was expected to design systems solo, and then review them with my team. If the system is complex enough, I would probably whiteboard it first with some teammates while I was designing it.

It is something that was asked of me in interviews, and comes up often in my day to day job. And being able to design systems, and to help review systems others are designing, is probably the single biggest impact thing I do regularly.

It is more useful day to day than the algorithmic knowledge that was also asked of me during interviews. While there are people who do use complex algorithms at both companies, most software at Google is converting one protocol buffer into another protocol buffer, and at Amazon it's the same thing but with JSON. If you are a frontend engineer, you might convert into HTML by plugging the values into a template engine.


I have really good analytical skills, which I leverage to tackle issues in large scale systems piecewise. I have to suspend my skeptical mind and switch to blue-hat thinking to come up with something from scratch; then I take it apart and iterate over it. I don't think large scale system design is a straightforward process, and pretending it is may very well lead to living in interesting times.


Agreed 100%. I spent ten years at Google, got promoted three times, never did any distributed systems stuff. When I decided to leave about 18 months ago, trying to cram for these interviews and memorize how WhatsApp works was the worst part of interviewing. But I jumped through the hoop and got a couple offers, neither of which involves doing distributed systems work. Those were definitely my weakest interviews, and in the case of my current employer, I literally said to my HM, "I've never done this kind of work, and I'm not going to shine in these interviews. Here's the kind of work I have done and I'd love to talk to you about it instead." I still had to do the sys design interview, but I think maybe that helped get it down-weighted?


I never did any system design, and got offers at Meta and Google (I even aced the system design at Google). It's not a very discriminating interview I believe. And I found it quite fun to prepare.


>And I found it quite fun to prepare.

What resources did you use?


Quite a lot. I read "Designing Data-Intensive Applications" many times, which I highly recommend. There's a book called "System Design Interview", I believe, that is a summary of the most typical designs. There are also a bunch of videos on YouTube. I read some research papers on classical designs. I played with some typical components, such as NoSQL DBs. I even implemented some prototypes.


Do you have any pointers to these research papers?


Not off the top of my head, but they are usually cited in the blogs or videos that present classic systems. And "Designing Data-Intensive Applications" has all the references. That being said, I don't think it's worth getting that deep for system design interview preparation unless you're already quite advanced. Retrospectively, I think I spent too much time on advanced material, overestimating what was required for these roles.


I'm also a systems programmer for the most part (kernel, hardware bring-up, low-level firmware, performance tuning of embedded systems), and I got invited a couple of times for FAANG interviews, and all of them had this system design interview with NoSQL column databases and load balancers behind nginx proxies of some sort. Problem is, it's pretty far from my field.

Are you going to ask a cloud architect who connects frontends to backends with databases at scale a lot of questions about how to write a PCI Express network driver in the Linux kernel with great performance, too?

I would like to be hired by a specialized hiring team in those big companies instead of going through the general hiring process, where you're expected to be a technical god who knows everything at an expert level.

I rejected all those interview requests.


I have designed large scale systems, but I often feel most of my system design interviewers haven’t and are much more pedantic/nitpicky than is reasonable. Leetcode is better because most people at least understand the questions they’re asking.

Last time I did a system design interview I mentioned database triggers as a way to maintain some kind of data invariant, which flustered my interviewer and I guess made it so their canned follow up questions didn’t work, so they asked if I could think of any other approach (the one they had in mind). I couldn’t and it made the interview very painful.
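For anyone unfamiliar with the approach being described, a rough sketch of a trigger-maintained invariant, using SQLite via Python's sqlite3 (the schema and the particular invariant are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);

    -- Invariant: a balance may never go negative. Enforcing it in the
    -- database covers every writer, not just well-behaved application code.
    CREATE TRIGGER no_overdraft
    BEFORE UPDATE OF balance ON accounts
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'balance would go negative');
    END;
    """)

    conn.execute("INSERT INTO accounts VALUES (1, 100)")
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")   # fine
    try:
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
    except sqlite3.DatabaseError as exc:
        print("rejected:", exc)

Whether that belongs in a trigger, a CHECK constraint, or application code is exactly the kind of trade-off a good interviewer should be able to discuss rather than pattern-match against.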


This is a good example of a bad interviewer. Never have a specific concrete answer in mind when asking an open-ended question.


For the really good candidates (i.e., ones who can answer basic programming questions and explain how a hash table works), I have an open-ended question which is really an open question in an active field. I really hoped I'd get candidates who would get that far (https://arxiv.org/abs/1309.2975) so that I could finally get some interesting answers I haven't heard before, but most candidates I end up interviewing struggle to explain a hash table and what its advantages/disadvantages for counting unique k-mers are.
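The baseline answer, for reference, is roughly this minimal Python sketch of the textbook hash-table approach; the trade-offs at genomic scale are the interesting part:

    from collections import Counter

    def kmer_counts(seq: str, k: int) -> Counter:
        """Count every length-k substring (k-mer) with a hash table.

        Expected O(n) time, but memory grows with the number of distinct
        k-mers, which is what blows up on real sequencing data and motivates
        probabilistic alternatives like Bloom filters or count-min sketches.
        """
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    counts = kmer_counts("ACGTACGTGACG", k=3)
    print(len(counts), counts.most_common(3))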


Or, if you want the candidate to talk about a certain approach, maybe think of some ways you can nudge them in that direction. If something's not at top of mind, some subtle hints from the interviewer could trigger a discussion of what they want to talk about. The interviewer shouldn't be tied to a "script".


I was once interviewing someone who came up with a design I didn’t understand and he walked me through it on the board. He was an immediate hire.

I learned a lot from him and asked him for advice when I had an architectural decision to make at my next job even though we didn’t work together.


You just reminded me of the time I was explaining how to do consensus failover, and the interviewer asked me to do distributed single thread design. Great use of everyone's time. The market is full of LARPers who wish they had the chance to design something real, and they will take it out on you.


I find system design interviews generally provide significantly more signal than coding interviews, but still less signal than a lunch interview.

It's a test of breadth and depth and it takes an apt, possibly experienced interviewer to navigate the candidate's domain knowledge effectively. The goal isn't to build some elaborate, buzzwordy house of cards, but to understand where tools are appropriate and where they are not, to think on your feet and work with an interviewer to build a system that makes sense. And, just like the real world, criteria should shift and change as you flesh out the design.

In particular people trip over the same things every time: reaching for things they do not understand, not understanding fundamental properties of infrastructure (CPUs, memory, networking), and cache invalidation.

When I interview folks I always preface the prompt with an offer to provide advice or information, acting as if I were a trusted colleague or stakeholder.

If you want to be low level, then I'd question why we're conducting a large system design interview anyways. We could certainly frame it as a small system design instead, and focus on the universe contained within the injection-molded exoskeleton of the widget.

If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.

Building large systems well in the real world, and watching them scale up under load with grace (and without contorting your opex to have only lunar aspirations), is somewhat akin to watching your child ride their bike for the first time after the training wheels are off. It feels good. Just like seeing your hardware in the field produce a low failure rate.

There is satisfaction in doing good work, or at least there should be.


> but still less signal than a lunch interview

Careful with that line of thinking. There’s a significant body of research showing that people feel like a “chat about tech” interview provides the best signal, but it empirically performs the worst with a roughly 50% correlation to on-the-job performance. You’re better off flipping a coin because at least then you’re not biased.

source: https://en.wikipedia.org/wiki/Noise:_A_Flaw_in_Human_Judgmen...


> but it empirically performs the worst with a roughly 50% correlation to on-the-job performance. You’re better off flipping a coin because at least then you’re not biased.

I was going to point out that a correlation of 50% is pretty good (especially for predicting job performance from a single interview), whereas flipping a coin has 0% correlation with anything that is independent of the coin flip (such as job performance).

You probably meant to say the probability you rank job-performance of two random candidates correctly based on an interview is about 50% (what your source calls "percent concordant"), a correlation of 0%.
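For the curious, the conversion involved, assuming the bivariate-normal model the book uses to relate percent concordant (PC) to the correlation r (my reading of it, not a quote):

    PC = 1/2 + arcsin(r) / pi

so r = 0 gives PC = 50%, i.e. exactly the coin flip, and r = 1 gives PC = 100%.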

Out of curiosity: do you remember which section of the book looks at "chat about tech" interviews compared to other kinds of tech interviews in regards to their job performance prediction capabilities?


You are right, I got my stats terms mixed up.

The "Improving Judgements" portion uses interviews as a case study and builds up from "just have a chat" to the typical multi-round panel interview with pre-defined rubrics that we see in tech these days. When done correctly, the book suggests this is the best we can do short of hiring everyone and firing low performers soon after.

I remember they specifically mention Google as one of the companies where they ran a study linking interview practices/decisions to on-the-job performance.


A lunch interview is often a great way to get some signal about the company and your potential coworkers. It's not so much asking questions and getting answers, but you often get a chance to see the team interacting _with each other_, and sometimes you can get a view into what sort of issues they are really dealing with.


Hiring well is all about collecting signal. I can reliably collect more signal in a lunch interview than a coding interview. But I never said anything about what the signal represented, and it certainly doesn't represent anything that correlates to performance.

Passing a lunch interview should be the easiest thing in the world. Just don't be brazenly unethical or immoral, a complete douchebag, a sexist scumbag or a racist shithead. It's amazing how many people fail at this simple task. It doesn't take a lot of signal to fail.

It's far more important that you can have a civil conversation with a reasonable, level headed person than it is for them to be able to solve fizz buzz in 30 minutes.


> If you say "column oriented" I'm going to ask you to explain why. I'm going to challenge what the load balancer is doing or what you expect it to do, and why.

And I would have happily repeated something from a video I'd watched on YouTube two nights earlier. It's a cram test.


I don't know how I could prove or verify this, but in my experience it feels very easy to detect a difference between people who understand a systems topic and people who've treated it like a cram test. I recall one interview in particular where the guy gave textbook answers to anything like "what technology would you use here" or "what are the benefits of X vs. Y", but fell apart completely whenever I scratched the surface for an implementation detail.


You verify it by asking them to walk through their previous experience and why they made the tradeoffs they did.

You can also ask them “knowing what you know now, what would you have done differently?”.

That lets them talk about practical experience and theoretical knowledge.

When I used to interview infrastructure people, I could tell quickly the ones who only knew anything from cramming with ACloudGuru.

On a related note: when I work with customers consulting in cloud application development, I am quick to distinguish between what I know well where I have practical experience, what I can ramp up on quickly based on related knowledge and what I only know from watching a video.


I built one large scale system in my career (when I worked at Google, I made a folding@home screensaver that used up idle cycles in production).

When I built it I ignored 95% of what Google knew about large scale system design because that knowledge was really about building scalable web services and storage systems, while I needed to build a batch scheduler which could handle many millions of tasks simultaneously. We depended on a few core scalable resources available in production (borg, chubby, bigtable, colossus) and tried as hard as possible to avoid spamming them with excessive traffic without adding lots of complex caches and other hard-to-debug systems. In fact, "simplicity" was the primary design goal. The system worked, it scaled, and if I'd followed all the normal Google guidelines, it wouldn't have (because scientific computation and web load balancers differ). Not sure what to take away from that.

These days in system design interviews I usually focus on limiting the use cases for the system so that I can architect something that: has linear resource consumption for linear workloads over 2-3 orders of magnitude, is simple enough that a small group of engineers can understand the whole system and debug it when it breaks, and doesn't try to optimize for future use cases (clearly documenting the limits of the system) or accommodate too many oddball one-off user requests.


I have designed a few systems, but my issue with the system design interview is that this is not how it works. There is never a blank page in real life like it is in one of these interviews, and the stuff that's actually on the page matters more than the stuff you're "supposed" to say in these sessions.

Yes, column-oriented databases absolutely do work better for OLAP use cases, but is it better enough for the specific use case to be worth introducing a new database technology into the organization, or would a new database within the existing managed psql instance be good enough for now? Those detailed organizational questions usually matter more in the first few iterations of systems than "principled" architectural concerns.
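To make the first clause concrete, the columnar advantage in miniature, in plain Python (toy in-memory layouts, not any particular engine):

    # Row-oriented: each record stored together, typical OLTP layout.
    rows = [
        {"order_id": i, "customer": i % 100, "total": i * 0.5, "note": "x" * 50}
        for i in range(100_000)
    ]

    # Column-oriented: each field stored contiguously, typical OLAP layout.
    columns = {
        "order_id": [r["order_id"] for r in rows],
        "customer": [r["customer"] for r in rows],
        "total":    [r["total"] for r in rows],
        "note":     [r["note"] for r in rows],
    }

    # An analytic aggregate like SUM(total) touches every whole record in the
    # row layout, but only the one contiguous array in the column layout
    # (which also compresses far better on disk).
    assert sum(r["total"] for r in rows) == sum(columns["total"])

Whether that advantage is worth introducing a new engine for is, again, the organizational question, not the technical one.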

The useful kind of design is: What is the next best iteration of this system? Maybe with an appendix at the end discussing some ideas for the iteration after that. Sometimes that next iteration is actually the first iteration of the system, in which case you should definitely not be drawing 20 boxes with different components of how it will fit together, you should be looking for the simplest possible thing that could work.

One of the fun things at big successful companies is that there are actually a lot of systems that are quite a few iterations in, with a stable baseline of usage properties. With these, it actually can make sense to draw a bunch of boxes with different components targeting different well-known pain points in a way that avoids trading off important existing capabilities. But again, that's exactly the opposite of a blank page, and no amount of digging into the interviewer's toy system design question can get deep enough for that.

All of my answers to these questions - which have always been very well-received - have been over-engineered solutions that I'd never actually pursue in a real job. But interviewers aren't really prepared for questions like "what frameworks are already being used and familiar to most teams at the company?" followed by "since we already have familiarity with postgresql+rails+react, we should set up a new non-replicated but periodically backed up database in that database instance, start with a few tables with some reasonable relationships, use activerecord and its migrations, and implement the front-end in react, then we should launch it and see what bottlenecks and pain points arise".

I get it, these interviews are trying to see if you have the knowledge and chops to solve those pain points and bottlenecks as they arise, but I'm sorry, "do an up-front design of a huge fancy system" just doesn't answer that question.


I was asked how I would replace a large scale but inadequate logging infrastructure, and I started with "In a way that minimally disrupts everything currently in place for monitoring and alerting".

I'm not sure how well that answer played out but it's still the correct answer.


I agree.

I am shocked, even though I shouldn't be, at this part of the technical interview for devs, and that devs can pass these interviews without demonstrating practical experience.

I’ve had interviews and jobs for three smaller companies where I was actually coming in as an architect 2016-2020). But they wanted to know about my real world experience.

Luckily, at my second job out of college way back in 1999, I actually had to manage servers as well as develop, so I could talk both theory and practice.

From 1999-2012 I was managing infrastructure as part of my job at two jobs.

I’ve never interviewed at BigTech for a software engineering position. But I did do a slight pivot and interview (and presently work) in cloud consulting at BigTech specializing in “application modernization” - cloud DevOps + development.

Sure I had the one initial phone screen where I had to talk theory about system design. But my entire loop consisted of my walking through my past system design experience - and not all centered around AWS.

And yes I can talk intelligently about all of the sections that the page covers including your example of columnar vs row databases. But I wouldn’t expect that from most devs.

I was never on call at my last job. We had “follow the sun” support. But our site was only business critical during the day. One of the first things I insisted on with my CTO is that we hire a managed service provider for non-business-hours support.

Sort of related: at most tech companies, the difference between mid and Senior is not coding ability. It’s system design and “scope” and “impact”


Interesting take. Different strokes for different folks? You aren't right or wrong here, it's preference.

That being said I am in the opposite camp and I find that more and more, the systems that I am building and maintaining are large and distributed.

Despite what a lot of commenters here on HN will say - yes there absolutely are businesses out there that need tooling like Kubernetes and huge column databases.


IMO, being able to think about the wider implications for smaller subsets of a system is an important ability. That being said, if your organization allows ICs to make technical decisions without any sort of review from someone whose job it is to architect large systems, that review seems like something you should have.

It also depends greatly on what 'layer' of the stack you work in.


There's a difference between talking about the wider implications of a system and acting like software can be planned top-down with any sense of accuracy.


If you want to ask me about the wider implications for smaller subsystems, then ask me about the Linux kernel. How did it come to be that system design is the only way to demonstrate this particular skill?


I think they're BS also. What you should be looking for is whether the person understands how the project(s) they have worked on fit into a larger system. For instance, I have a high level understanding of how the other systems at the company I work for operate. I know what they do, I know what data they ingest and emit. I know how the data I consume is generated. I know how it all fits together and I can talk conversantly about it. I know when a change request comes in whether it affects other teams, and I speak up when it does.

I think this is what you should look for in most candidates, except high-level systems engineers and junior developers.


It's a good way to filter out experienced candidates. The requirement is to be a junior dev that can take any BS that is dumped on him.


System design questions are way more relevant to day-to-day work than leetcode questions.

Nothing stops the interviewer from asking you even more relevant system designs questions like:

* How would you build a Linux kernel from scratch?

* How would you design a common interface for any device drivers (that you are familiar with)?


If you apply to a generalist SWE role at FAANG, you're expected to have some knowledge about these things since you're likely to encounter them. I didn't have such experience either, but it's now part of my day to day job.

If you apply for a targeted specialized role, you may get a system design question that relates to your domain. If there's a general system design interview, it should have less weight in the process. It's still an indicator of how candidates communicate and think through a design. Plus, these kinds of things should be general knowledge for a computer scientist nowadays.


> If you apply for a targeted specialized role, you may get a system design question that relates to your domain.

If you apply for a specialized targeted role.... you still get the same generalized interview loop


Depends on the company. Sometimes you get extra interviews in addition to the generalist ones.


Interviewer: How do you design twitter?

Interviewee: Long answer with novice RDBMS choices.

Interviewer(implicitly): I will judge you based on how you can "Problem Solve" many-many conundrum.

Interviewee: Use Cache, scaling, edge nodes, kafka because....

....

HIRED

....

Interviewee spends the first 6 months writing a test framework to automatically run tests of the deployment of an internal tool that runs a Python ingestion pipeline. Next 6 months figuring out how to help users add type hints to the IDE for pipelines to help catch errors faster, and how to roll it out to 100+ ICs in the company. Sadly fails.

Never ever writes a line of code to make "twitter", learns a ton about how to work with people.

....

Interviews again: "How do you design twitter"?


Not sure what the point is here, but ... in my experience as the hiring manager I haven't seen huge success in testing a candidate on whether they can do the exact thing they will be doing in the first 6 months (in software engineering). I had better results asking questions that bias towards a certain set of behavioral traits (yes, this has other long-term problems) and not certain skills.

I understand that some candidates don't like it, because they are obviously good at something and want to be asked questions in that field. But ... I already know you are good at it because you said so on the CV, so I'm not wasting time with that (yes, if you are a good liar it takes us longer to find out).

So your example, working as intended?


And the culture also promotes gameable interviews that tell the interviewer absolutely nothing about a candidate except that he has managed to parrot an answer from an online guide or DDIA.

I am sure a skilled interviewer can tell the difference between a guy who knows his system design and one who does not. But it's paradoxical to have a guide prepare someone on so many topics, of which only 1% is what any one person has done in their life.


My point was that the culture of system design interviews is not a good one. It's repeated to infinity at many companies.

> I had better results in asking questions that bias towards a certain set of behavior traits

I am curious. What do you ask? Like do you ask tech centric questions around Proactiveness, Empathy, Motivation, Conflict Resolution etc?


I still ask to design Twitter, but make it clear that it's about the process, and follow-ups are driven by where the candidate leads.

The material from OP won't prepare you for a systems design interview (which is also your point?). Only experience will, so we measure experience and leadership.

Juniors can apply to systems design openings and will be interviewed, but they end up as generic SWEs if hired. If you think you can work on distributed systems straight out of university, you will have a hard time interviewing, or you won't like it.


See also "The System Design Primer": https://github.com/donnemartin/system-design-primer


ByteByteGo is also an incredible resource: https://www.youtube.com/@ByteByteGo


Very nice thanks.


When I see something like this I wonder if there are people out there who actually go chapter-by-chapter from start to finish, spending dozens of hours learning the content a random GitHub repo claimed was important.


You can always go for the fundamentals and go through DDIA. It is heavy on foundations but does not really give examples of specific real world systems. For that, Alex Xu’s books are probably the most popular.


This repo is a very high level collection of topics with short descriptions rather than a learning source. You can skim it in a couple of minutes. But if you're curious about a topic, why wouldn't you read a free resource on it if the quality seems good?

Are you always this dismissive? How do you learn?


“Quality seems good” is the point of contention here; how would you know? You're either already knowledgeable about this topic, and therefore necessarily didn't learn it from here, or you're not, and therefore unqualified to judge its quality at all.


That applies to everything we learn. Most of us use previous experience to make an educated guess. Someone just coming into software won't be reading this, but it can be a good start for knowing what one could dig deeper into.

Most content on any topic is bullshit, and in tech quickly outdated. What's your gripe with this particular one? I have several similar repos starred for future reference, because why not? Could be useful.

You never answered how you learn? Because I assume you still do? Since you have opinions on how others do it maybe you have better approaches you'd like to share?


Could be useful? Could be misleading, wrong, outdated, manipulated… there’s a reason these people are self publishing.

You asking me how I learn is the very problem, in fact. When the consequences of being wrong are basically zero, the incentives are not aligned for me to provide you with anything resembling quality.

Avoid individuals entirely, and focus on institutions who are meaningfully harmed by their inaccuracy if discovered.


> there’s a reason these people are self publishing.

??? What in the paranoia is this, do you think the same of Jeff Erickson who self publishes his book on Algorithms? Or no, because he's a professor?

I get that you need to use surface-level signals to determine if something is worth your time, but the opposite is not "this is crap"; the opposite is "I don't know if this is good".


Where did I say this was crap?


You seem to have high views of institutions, and low self-esteem on your own ability to assess information. Take nothing at face value, and learn from multiple sources. Those institutions you claim have no incentives are just as likely trying to monetize you and lock you into their walled gardens. To each their own I guess.


Nope! You entirely misunderstand; it’s about incentive and consequence, not self esteem or reverence.

Those institutions exist as a result of their reputation, and that reputation is harmed by giving bad instruction. This random Internet stranger has no reputation whatsoever, so there is zero incentive for them to operate honestly, beyond whatever self imposed morality they may have.

Your method relies on good nature and blind faith, and leaves you open to manipulation, ignorance and misunderstanding; what I describe uses societal forces and self interest to ensure quality. I’m sure you can guess which had better outcomes…


> Your method relies on good nature and blind faith, and leaves you open to manipulation, ignorance and misunderstanding; what I describe uses societal forces and self interest to ensure quality. I’m sure you can guess which had better outcomes…

Is that how you read "take nothing at face value"? I question everything, so should you. Which are these infamous institutions you praise? Maybe some of them are among the sources this random internet stranger referenced at the end?

Linus Torvalds was once a random internet stranger; now the world runs on his kernel. Your world view is very black/white. Most of the best content I've read is from random internet strangers, not institutions. It's often from hands-on experience and not written for PR purposes.


You continue to misunderstand.

The concept is a simple one; stop blindly trusting strangers, start finding groups that have a monetary incentive to be accurate.

If you can’t grasp the concept of incentive alignment, system design is probably the least of your worries.


I'm not misunderstanding anything, you're claiming I am just to "win" some internet points. How many times do I have to write "don't take anything at face value"? I assess the content (from multiple sources) not the messenger, of course the messenger counts but that goes both ways (bias is a thing).

You claim Docker is pushing Docker Desktop over Podman or even their own daemon out of goodwill and not for telemetry? You have to go out of your way to find the instructions for just the daemon without the bloat. But you're paranoid some blog will trick you into running 'rm -rf /' as root because the source is an "individual"? (Straw man, yes, but lacking examples I have to make one up.)

You won't even provide a single example of a better source when asked, just hand-waving "institutions good, individuals bad". There's nothing constructive in this chain, and we're going in circles. I'm genuinely curious of your examples, so if you want to be constructive instead of focusing on my worries please provide some. If not, I wish you the best day.


Either you are an expert on the topic of systems engineering and can assess the quality of this submission but don’t need it as you have better sources, or you are not an expert and can therefore not assess its quality (it comes from a random person on the Internet, after all) and therefore can’t safely use it.

As for Internet points, or arguments, or winning, why do you care? Why even bring it up?


I pity you for having such a simple view of things. Even if I've designed systems for a decade I still find different approaches and other peoples' experience interesting. Don't tell me your time is too precious, because obviously you have time for this useless discussion.

You don't have to be an expert to make assumptions of the content, learning is incremental. I'll give you an example, I've never dabbled with AI, but I could get a decent feel of the quality of a blog post on the subject. Maybe there's a mistake in there that I'll encounter and have to fix. To me that's a win, because then I know WHY I do something, as opposed to just being told THE way to do something and missing the details.

As for why I care? I'm trying to help you broaden your perspective, if possible; maybe it brings me good (real life) karma. The way you word yourself sounds like you're still early on the Dunning-Kruger curve.

Some random internet stranger puts time and effort into creating this repository and sharing it for free. You see it, find no interest in reading it but instead of scrolling past you have to question why anyone else would. I tell you what value it could give others, and you double down, leaning on "trust" and "reputation" of mysterious "institutions".

Still waiting for your reputable institutions, or anything resembling a useful response. I've asked what gripes you have with the repository, but crickets. If you aren't even going to try to be constructive why are you wasting your precious time? What are you gaining from this?


> I still find different approaches and other peoples' experience interesting

...except this is the literal opposite of that, this is presented as an introductory resource. The literal opposite of "other peoples' experience", it's an intro into the topic.

> You don't have to be an expert to make assumptions of the content, learning is incremental.

My entire argument is that you must make evaluations about the source of the content, so I think it's clear you completely missed what I actually said for what you wanted to believe I said.

> As for why I care? I'm trying to help you broaden your perspective, if possible, maybe it brings me good (real life) karma.

No, I didn't ask why you cared about talking to me, I asked why you cared about Internet Argument Points. You brought them up, then got pissy about me caring about them.

> Still waiting for your reputable institutions, or anything resembling a useful response.

I don't owe you shit and this is manipulative. You don't get to tell me what to do or how I engage in a conversation.

> I've asked what gripes you have with the repository, but crickets.

"This is from an anonymous source who has no incentive to be accurate, so it is of unknown quality." is my "gripe". It's the gripe you replied to originally, so stop gaslighting.

> If you aren't even going to try to be constructive why are you wasting your precious time? What are you gaining from this?

I presume you wrote this to yourself.


> ...except this is the literal opposite of that, this is presented as an introductory resource. The literal opposite of "other peoples' experience", it's an intro into the topic.

We're discussing this in a broader sense than the OP right? Individuals vs institutions (whatever those are) and if the former can be useful. You claim they never are, because experts don't need them and novices can't assess the quality.

> My entire argument is that you must make evaluations about the source of the content, so I think it's clear you completely missed what I actually said for what you wanted to believe I said.

I understand you as dismissing anything not from a reputable institution. Is that not what you meant by "Avoid individuals entirely, and focus on institutions who are meaningfully harmed by their inaccuracy if discovered."? I read you literally. I don't see examples in the real world of institutions being meaningfully harmed by mistakes, ever.

> No, I didn't ask why you cared about talking to me, I asked why you cared about Internet Argument Points. You brought them up, then got pissy about me caring about them.

What?

> I don't owe you shit and this is manipulative. You don't get to tell me what to do or how I engage in a conversation.

No, you don't, but it would be a much better use of your time to just define these institutions. I claim the "monetary incentive" you mentioned means they push you into their walled gardens, like AWS, Oracle and others. As I see it, unbiased sources don't exist, and all sources have to be taken with a grain of salt. I would love to be proven wrong, so why the hesitation to share an example? Was I close with the Docker one?

Maybe we're even in agreement but you've painted yourself in a corner? What other conclusions can I draw after all this back and forth?

> "This is from an anonymous source who has no incentive to be accurate, so it is of unknown quality." is my "gripe". It's the gripe you replied to originally, so stop gaslighting.

"Hence nobody else in the entire world should see any value in it either, because anonymous internet stranger." - Zetice

With such deep thoughts you should consider running for president, you'll be in good company. You probably understand that everyone starts as a nobody, how are they supposed to build said reputation if nobody ever evaluates them on their work? Chicken and the egg. Maybe that's his incentive? Build reputation, or land a job, or a million other reasons. I don't judge anything on shallow requirements like reputation, it can be a useful metric but never necessary. I would miss a lot of great content if I did.


> We're discussing this in a broader sense than the OP right?

No.

> I understand you as dismissing anything not from a reputable institution.

Nope.

> What?

I said, I didn't ask why you cared about talking to me, I asked why you cared about Internet Argument Points. You brought them up, then got pissy about me caring about them.

> it would be much better use of your time to just define these institutions.

Nope.

> I claim their "monetary incentive" you mentioned mean they push you into their walled gardens, like AWS, Oracle and others. As I see it unbiased sources don't exist, and all sources have to be taken with a grain of salt. I would love to be proven wrong, why the hesitation to share an example? Was I close with the Docker one?

...no. Literally any top 100 university (and their related CS departments) in the world would be a more correct example.

> Maybe we're even in agreement but you've painted yourself in a corner?

Nope.

> "Hence nobody else in the entire world should see any value in it either, because anonymous internet stranger." - Zetice

Nope.

Turns out I was right, and you have no clue at all what I wrote, despite it being plain and straightforward. What should concern you is the dozen or so people who agreed with me enough to upvote my original comment; what are they getting that you aren't?


> No.

> Nope.

Yes we are, at least I am. Or did someone borrow your account when you said "Avoid individuals entirely"? If we discussed the OP we would be discussing the details of the repo, but when I asked you to elaborate on that all I got was "This is from an anonymous source who has no incentive to be accurate, so it is of unknown quality.". I don't see any other way to read you.

> ...no. Literally any top 100 university (and their related CS departments) in the world would be a more correct example.

Nice, we're getting somewhere. There's good content coming from universities, but relying only on them isn't going to work in a fast moving industry like this. They do not cover everything. My experience is that it's a good starting point, but the work is learned by practice, making mistakes and improving. Only then does that knowledge reach universities, and by that time it might be (probably is) outdated. They rarely go deep into the nitty gritty; why PostgreSQL over SQLite, and similar?

I'm sure you've heard the saying "Those who can, do; those who can't, teach." Hyperbole, but there's some truth to it; universities aren't really the innovators in the field these days. Don't misunderstand me as dismissing them, but I couldn't get anything done if that was the only source I'm allowed to read.

> Turns out I was right, and you have no clue at all what I wrote, despite it being plain and straightforward.

Yes, and I read you literally, see my quotes above. You can always correct me or rephrase them if you want.

> What should concern you is the dozen or so people who agreed with me enough to upvote my original comment; what are they getting that you aren't?

I don't find comfort in internet consensus. The majority of people are reactionary and can't form an original thought to save their lives. Tell them the internet is bad and they'll ask for a ban. Sure I find that concerning but not for the reasons you think, it's the reason democracy will never work, but that's a topic for another discussion.

Studies have shown that negative/inflammatory comments get more engagement than constructive ones. Do what you will with that. This post did reach the front page, but I'm sure you'll dismiss that.


Restate what you think I said.


I've quoted you several times asking for clarification, in almost every reply, but I'll bite because I appreciate the more constructive turn this is taking.

> Avoid individuals entirely, and focus on institutions who are meaningfully harmed by their inaccuracy if discovered.[1]

> "This is from an anonymous source who has no incentive to be accurate, so it is of unknown quality." is my "gripe". It's the gripe you replied to originally, so stop gaslighting.[2]

The incentive in this case being monetary (even though the OP sells the book to those willing to pay, as I understand you, he has no incentive to be as correct as possible). And because I supposedly either know nothing or everything about a subject, the content is either completely useless to me because I already know it all, or dangerous because I won't spot the mistakes.

> Either you are an expert on the topic of systems engineering and can assess the quality of this submission but don’t need it as you have better sources, or you are not an expert and can therefore not assess its quality (it comes from a random person on the Internet, after all) and therefore can’t safely use it.[3]

I don't agree it's that simple; knowledge is a spectrum and I will never be done learning. I'm also confident in my ability to sift through bullshit, and I don't need to be spoon-fed by an institution with a stamp of approval to find value.

Based on that I'm not even extrapolating; I'm reading you literally: content should be avoided purely based on the messenger, not on the content itself.

And to avoid being misunderstood: I understand bias is a thing, but it goes both ways. I'm not trying to argue individuals are better than corporations or institutions. Good content exists from all sources. I enjoy reading post-mortems from FAANG, but take them with a grain of salt. And I also have some repos starred containing similar content I'd like to dig deeper into at some point.

Some of the best content I've read is from passionate individuals who like to poke around the edges; they are the innovators as I see it. But I'm bullish on open-source and free software, so there's my bias. Universities are often outdated, institutions (for example the FSF) have their agenda, and corporations push their stack. I still read them all, but with a conscious mind.

To summarize: I'm arguing against your claim that individuals can't be trusted and should (always) be dismissed based on WHO they are, not WHAT they write. You're of course free to have that view, and I get the gist of what you're trying to say, but to dismiss that content completely is extreme. If you had just said "not for me" and moved on, or commented on some of the content in the OP, I would have no opinion whatsoever. But you're claiming it should have no value, for anyone, ever. That's what I'm questioning.

[1] https://news.ycombinator.com/item?id=36602841

[2] https://news.ycombinator.com/item?id=36609072

[3] https://news.ycombinator.com/item?id=36606230


Ah, so you have no clue what I said. Thanks for clarifying, makes it easy to dismiss your emotional outbursts as such.

Let me know if you figure out what my argument was, and maybe we can discuss, but it honestly seems out of your grasp at the present moment, given how focused you are on "winning Internet points".


So what did you say, if not what I quoted? Why is it up to me to figure it out? Just clarify for me. There's no emotion from me; in fact I find this somewhat stimulating, otherwise I would've left a long time ago. But I see the urge to dismiss me entirely as being emotional: I'm just a random internet stranger, after all, and it's easier that way. Anyway, it was fun, I wish you the best.


The fact that you can't even tell how emotionally invested you got here is probably the main takeaway from this conversation, for you.

You took what I said (that blindly trusting a random source on the Internet, to the point of spending hours judiciously studying its contents, is, to me, not a wise way to spend time) and continually interpreted a vastly more extreme and broader version of it, with the clear intent to pick a fight.

If you can't see how you took this incredibly personally, that's going to be a major issue for you moving forward. You can't behave like this in the real world and expect positive outcomes.


Until recently I was a principal engineer at Amazon, so maybe my opinion has some weight.

A system design interview is more about the interviewee asking questions: taking time to understand the problem, asking about product features or SLAs, understanding functional and non-functional requirements.

Then it's about the candidate showing some breadth of knowledge and that they can think and reason beyond the immediate coding task: demonstrating the ability to make judgment calls, simplifying where possible, discussing costs and trade-offs.

This interview is not about the candidate building some system at scale themselves. Building and supporting one has trials and lessons you only learn by doing and failing, not through interview prep or YouTube videos.


Candidates can't read minds.

The best technical interviews I've been on as an interviewee have been those where the expectations are clear. In your example:

    "We're not expecting you to create Twitter in 15 minutes, but we want to understand how you think about the challenges and key considerations of building a large system like Twitter"
Many interviewers fail to provide enough context, and that leaves the interpretation of the prompt too wide open. At that point the interview has failed, since whether a candidate provides an answer aligned with the interviewer's expectations becomes partly a matter of chance.


>>>> Many interviewers fail to provide enough context and that leaves the interpretation of the prompt too wide open.

Yes, but this is not a defect, the way you're viewing it.

In the real world at Amazon, your job is to deal with ambiguity. The hand-holding phase, where you're given or told exactly what to do, lasts maybe 1-2 years for a college-level hire. You work with ambiguity or you move out.

If you do not want the ambiguity challenge, then Amazon is not the best fit for you. It's not for everyone, and Amazon certainly has big problems in its culture. I'm not defending any of it, just telling you what it is.


The difference is that in an interview context, there's almost always a defined endpoint: a narrow path that defines success. A 30-60 minute interview isn't the same thing as a 6-month project where you get a chance to meet with multiple stakeholders, digest the inputs, ask follow-ups, and so on.

This is why we see the rise of the "never-ending interview"[0]. If you want an effective interview -- as an interviewer -- then understand what output you are measuring (like any good experiment) and then see if your subject can arrive at that outcome when given the context and 30-60 minutes.

Don't waste your own time disqualifying perfectly good candidates by playing games with ambiguity when you already know what you are looking for.

[0] https://www.bbc.com/worklife/article/20210727-the-rise-of-ne...


> Many interviewers fail to provide enough context and that leaves the interpretation of the prompt too wide open

So do clients/customers. What's your point? That an interview to assess whether a developer can elicit requirements should be less hard than dealing with an actual customer?


An interview with a customer for requirements has a very clear context.

An interview for a technical position could focus on high-level design, specific technical aspects, or process and approach to problem solving; maybe a bit of each. An interviewer who more clearly defines the context will get better responses.


I decline non-big-tech interviews that involve some kind of "system design". I've worked with so-called staff engineers who designed systems with textbook antipatterns (I literally studied them). This idea that you need to regurgitate some bloated mess onto the cloud to solve even the simplest problems needs to stop.


This is not how people learn. And I always found these books more useful after I had actually worked on a system than before. Do you know why people ask such questions at any company? It's because there is a rubric at every company, and half the people don't have a clue what else to ask: either they have no other good ideas, or they may have done it themselves and it was a nice problem, without realizing that the experience it gave them is never actually passed on to the new guy by asking about it.

Building skill is not additive; it is about comparing choices and critically analyzing risks on things you *have already worked on*. It's as much about knowing what won't work, i.e. subtractive. If you have never worked on a distributed system and are asked to work on some part of it, the reasonable way it happens is:

1. You will initially be asked to start working on a small part.

2. You will/should have someone to help you if you have no clue. This is called seeking feedback, and it is done in Google Docs, where people critique your design and you iterate.

3. You should have the time and sense to read up as you do (1) and (2).

4. Maintain mental sanity and find mentors to help you out.

Notice that system design interview prep will help you with exactly *zero* of the steps above. For any situation where you are forced to do more than the above, run away in the opposite direction.

I think if someone just published a cheat sheet on the actual rubric, it would be so thoroughly gamed that it would prove the point that these things are pointless.


I've seen people cram their heads with all kinds of prep problems, like designing an elevator, a parking lot, or a social media site, and I have never understood the rationale behind such questions. Design isn't something you can do in an hour. Design is also a trial-and-error thing. Slight changes in data flow might change the entire system architecture.


The problem we have today is that the Cracking the Coding Interview/leetcode-grinder approach to interviewing has now overrun the original purpose of system design interviews. People who lack an understanding of their purpose, or the experience to conduct these interviews properly, now run them, and often run the show, writing the books/articles/etc. that perpetuate the cargo-cult behavior of regurgitating patterns described in some influencer's book/article/video.

This is somewhat associated with the title-inflation we've seen in our industry where "senior" and "staff" level engineers today seem to have much less experience than persons with the same titles a decade ago, and the people running these interviews are today's "senior and staff" engineers, or managers with very shallow engineering experience.

Systems design interviews were meant to be non-grindable, open-ended interview questions which allow a very experienced engineer as interviewer to understand the interviewee's experience and approach in solving systems-level problems.

The original purpose of the systems design question in engineering interviews was to: 1) give the candidate an open-ended question that is not meant to be memorized or fully solved within the allotted time, and 2) give the candidate an opportunity to shine -- to expose and demonstrate areas where they have particular depth from their past work experience.

Systems Design interviews employ the judgment of the highly experienced engineer as interviewer, rather than checking for a specific answer.

As an interviewee, the first phase of a systems design interview is really about asking questions to better understand context/constraints/considerations/etc before formulating solutions.

The second phase of a systems design interview is about communicating your understanding of the "whys" and clarifying/refining those understandings with your interviewer.

Ironically, we also have a post on Hacker News today ("The hardest part of building software is not coding, it's requirements") [https://news.ycombinator.com/item?id=36597709] which speaks to the point that systems design interviews are really a test of how a candidate elicits requirements and considers trade-offs when developing solutions to problems.


Can someone suggest a book or resource for learning more about designing a software system, but not the "scalable" system design that so many interview prep books talk about? I'm especially interested in something that is _not_ web related.

I've spent most of my career fixing bugs or taking over apps/systems that someone else started. I really don't know how to design something new. (I mostly work on APIs, Android apps/services, and a bit of game middleware).


Nothing like learning from popular software architecture

https://aosabook.org/en/


Has...the person who wrote this designed large scale systems?


Doesn't matter. The person conducting the interview has probably never done that. And the job itself probably won't involve large scale system design.


My last three jobs involved large-scale systems. I've designed numerous ones. The last company I was at was a small startup and still did billions of transactions, a non-trivial amount that certainly can't be handled by a single machine.
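
For a rough sense of that scale, here's a back-of-the-envelope sketch; the daily volume and peak multiplier below are assumptions for illustration, not actual figures from the company:

    # Rough, illustrative numbers (assumed, not real figures)
    tx_per_day = 2_000_000_000             # "billions" of transactions, taken as per day
    seconds_per_day = 24 * 60 * 60         # 86,400
    avg_tps = tx_per_day / seconds_per_day
    peak_tps = avg_tps * 3                 # traffic is rarely flat; assume ~3x peak
    print(f"average ~{avg_tps:,.0f} tx/s, peak ~{peak_tps:,.0f} tx/s")
    # ~23,000 tx/s average and ~69,000 tx/s at peak is well past what a single
    # commodity box comfortably handles once each transaction touches disk and network.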


There are things in that doc that are completely wrong and ridiculous, such as:

https://github.com/karanpratapsingh/system-design#streaming-...

It reminds me of a YouTube video on how to create a Netflix clone: for streaming, they were creating a Java class for each frame of the video.
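
Just to illustrate how absurd one object per frame is, here's a quick hypothetical count (the runtime and frame rate are assumed, not taken from that video):

    # Hypothetical numbers to show the scale of "one Java class/object per frame"
    runtime_minutes = 120     # a typical feature-length film
    fps = 24                  # common cinema frame rate
    frames = runtime_minutes * 60 * fps
    print(frames)             # 172,800 per-frame objects for a single playback
    # Real streaming setups serve pre-encoded segments of a few seconds each
    # (e.g. HLS/DASH chunks) over HTTP, not per-frame application objects.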


Honestly, a better resource for this is the following book. It is very easy to read, and it helps you understand what to use where.

https://www.oreilly.com/library/view/designing-data-intensiv...


And for those not too sure about buying yet another O'Reilly book they may or may not read, I recommend checking your local library or school (if you're a student).

They may have the book physically or, even better, have a subscription to O'Reilly's online platform, letting you access any of their books, some even in pre-release.

I find that I only use a few of my books enough to justify the purchase, and I prefer skimming chapters I find interesting through the platform access I get from my library account.


That particular book has a long reservation line in my library network, because it is so popular.


My local library has a subscription to O’Reilly books included in being a card-holder. No waiting needed if you are not set on the dead-tree version.


I'm not sure if I would be prepared to design systems at scale after reading this ... extended glossary. I like the short explanations though.



