
I lost it at this one. As if there is some single point of failure that's going to bring all of Google down, and some intrepid director of engineering has to inspect some TCP packets by hand to fix it.



This is where I threw up my hands too. The Director of Engineering does not need to know the difference between SIGTERM and SIGKILL, or how many bytes are in a MAC address. I guess it's a nice bonus if he does, but he'll be spending 10 hours per day in meetings talking about roadmaps, shielding his team from the execs, and removing productivity roadblocks. "Third engineer from the left" is doing the packet inspection--ask HIM about SYN and ACK.


Reading more closely, it sounds like they are not interviewing him for a director of engineering position; it just sounds like he thinks his current role, CEO-who-writes-code of a very small software company (http://www.gwan.com/about), qualifies him for a director-of-engineering-level position. He's probably being interviewed for an SRE team lead or thereabouts.

Why he's being interviewed for that position is a different question entirely, and I can imagine Google being totally right or totally wrong.


How can you imagine Google being totally right here? The disconnect between the questions being asked and the interviewer's lack of knowledge made the interview a waste of time no matter WHAT role they are interviewing for.

Take, for example, the sorting question. "Why is QuickSort the best sorting algorithm?" The answer being looked for was, "It has the best Big O."

And this is wrong. Its average case is O(n log(n)). Its worst case is O(n^2). Which do you call its big-O? Moving on, the average case of O(n log(n)) is matched by a wide variety of sorting algorithms. How do you choose one?

Here is a better answer.

QuickSort is a very simple algorithm to implement, and it achieves the lowest average number of operations on a randomly ordered list, which is why it is so widely adopted despite sometimes being very slow.

However, Timsort appears to be the fastest general-purpose sorting algorithm for the mix of random and partially sorted lists seen in practice.

When I do notice that sorting is slow, it's generally on a larger workload where some type of merge sort would be more appropriate.
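If you want to see that concretely, here's a rough Python sketch (a deliberately naive textbook quicksort with a first-element pivot; the input size and timing harness are made up for illustration):

    import random, sys, time

    sys.setrecursionlimit(20000)  # the worst case recurses ~n deep

    def quicksort(xs):
        # textbook quicksort; the first-element pivot is the naive
        # choice that hits O(n^2) on already-sorted input
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    def clock(f, xs):
        t0 = time.perf_counter()
        f(xs)
        return time.perf_counter() - t0

    n = 5000
    data = [random.random() for _ in range(n)]
    already_sorted = sorted(data)

    print("quicksort, random input: %.3fs" % clock(quicksort, data))
    print("quicksort, sorted input: %.3fs" % clock(quicksort, already_sorted))
    print("sorted(),  sorted input: %.3fs" % clock(sorted, already_sorted))

The sorted input pushes the naive pivot into its quadratic case, while the built-in sorted() (Timsort) handles the same input almost instantly because it exploits existing runs.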


I can imagine Google being totally right because I can imagine the interviewee not accurately remembering the conversation here. (I expect, for instance, that he did not write down the interview as it was happening.) In fact, conditioned on the assumption that Google is right that this guy's experience is better suited for SRE than director-level, it is pretty likely that he did not understand the questions being asked / thought the questions were beneath him / etc. and therefore wrote them down inaccurately.

For instance, perhaps the interviewer asked "What makes quicksort a good sorting method," instead of "What makes quicksort the best sorting method"—a very small difference in phrasing. In that case, the answer of "It's not always the best, or even suitable" is still technically true, but much more wrong. (And an answer like the one you started with, "Its average case is O(n log (n)), its worst case is O(n^2)," would have been enough to pass... but sitting on the phone and arguing about storage topology is itself a failure.)

As I mentioned in another comment https://news.ycombinator.com/item?id=12702130 , my (five-year-old, faulty) memory of Google's SRE phone interview is that they asked another question here with a very small but important phrasing difference: "What is the signal sent by the kill command" instead of "What is the kill signal". If you make that change, the interviewee's answer of "SIGKILL" becomes wrong, and the interviewer is right to insist on SIGTERM (which would otherwise make no sense). It is a quite literal game of telephone.

(Again, I can also imagine Google being totally wrong and the interviewer mangling the questions.)


> they asked another question here with a very small but important phrasing difference: "What is the signal sent by the kill command" instead of "What is the kill signal". If you make that change, the interviewee's answer of "SIGKILL" becomes wrong, and the interviewer is right to insist on SIGTERM (which would otherwise make no sense).

But... the kill command is the command to send arbitrary signals. It sends them all.
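You can watch that from a process's own perspective with a small Python sketch (Unix-only; it signals itself for illustration). kill delivers whatever signal you name: SIGTERM can be caught, while SIGKILL can't even have a handler installed:

    import os, signal, time

    def on_term(signum, frame):
        print("caught SIGTERM, shutting down cleanly")

    signal.signal(signal.SIGTERM, on_term)  # installing a SIGTERM handler is allowed
    os.kill(os.getpid(), signal.SIGTERM)    # what a bare "kill <pid>" sends by default
    time.sleep(0.1)                         # let the handler run

    try:
        signal.signal(signal.SIGKILL, on_term)  # the kernel refuses this
    except (OSError, ValueError) as e:
        print("cannot install a SIGKILL handler:", e)

    os.kill(os.getpid(), signal.SIGKILL)    # "kill -9 <pid>": the process dies here
    print("never printed")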


You are right that several of these questions could be due to his misunderstanding the questions asked at the time, answering the wrong one, and then remembering what he thought he was asked. But it is beyond my imagination to reconstruct a plausible conversation that could result in the one recorded without there being considerable ill will on both sides.


Here's a thought experiment: read the article, replacing the interviewer-side questions with ones that make them sound more plausible. (This is the side that we should believe to be less accurate, if only because the interviewer isn't reporting their questions.) Pretend you're the interviewer, and ignore the internal monologue.

For question 5, you asked about an inode, were answered with a description of an inumber, and then heard the candidate insist that the inode itself was an index.

For question 6, maybe change "inode" to "information in the inode". The interviewee still has not figured out the distinction between an inode and an inumber. (A small stat() sketch of that distinction follows at the end of this comment.)

For questions 7 and 8, apply the changes I suggested.

At what point do you decide that the interviewee is hopelessly arrogant and not worth your further time? And how do you get them off the phone gracefully?

Maybe around question 10, when they're quoting bits to show off and not saying the words that would actually let them communicate with other engineers like "SYN" and "ACK"?

No ill will is required on the interviewee's side, unless you consider refusing to waste time on bad candidates "ill will".
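For readers who don't have the inode/inumber distinction handy, here's the promised Python sketch (the path is arbitrary; any existing file works). The inumber is just the index; the inode is the structure it indexes, holding everything stat() reports except the name, which lives in the directory entry:

    import os

    st = os.stat("/etc/hosts")  # the name -> inumber mapping lives in the directory

    print("inumber (the index of the inode):", st.st_ino)
    print("metadata stored in the inode itself:")
    print("  mode/permissions:", oct(st.st_mode))
    print("  owner uid / gid: ", st.st_uid, st.st_gid)
    print("  size in bytes:   ", st.st_size)
    print("  hard-link count: ", st.st_nlink)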


Your imagination is better than mine. :-)

The conversation that I'd have to reconstruct has a very combative interviewee. Which would also fit said interviewee deciding to write up the article that I read. Which would mean that Google dodged a bullet.

I find it interesting to note that his site is down. There are a lot of possible causes, but it isn't good advertising for his webserver software.


> Its average case is O(n log(n)). Its worst case is O(n^2). Which do you call its big-O?

I know this is irrelevant to the larger point of your post, and I'm sure that you know this already, but the worst case is the big-O. This is just another reason why "big-O" is not the most helpful thing to discuss in practice.


Technically, no.

Big-O is a way of categorizing the growth of mathematical functions. Those functions can represent anything. It is wrong to talk about the big-O of an algorithm without specifying what you are measuring, be it average operations, worst-case operations, average memory, worst-case memory, amortized average time, amortized average memory, and so on.

It happens to be that when we talk informally, we're usually talking about the worst case we are willing to think about. Quicksort's worst case (with a naive pivot choice) is an already-sorted list, so we think of it as O(n^2). But then we turn around and cheerfully call a hash lookup O(1) because its O(n) worst case is incredibly rare in practice.
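A contrived Python sketch makes the "what are you measuring" point visible: dict lookups that are O(1) with well-distributed hashes degrade to linear bucket scans when every key collides (the Collider class below is made up purely to force that rare worst case):

    import time

    class Collider:
        # every instance hashes to the same bucket, forcing the
        # O(n)-per-lookup worst case we normally ignore
        def __init__(self, v):
            self.v = v
        def __hash__(self):
            return 42
        def __eq__(self, other):
            return self.v == other.v

    def probe_time(keys):
        d = {k: None for k in keys}
        t0 = time.perf_counter()
        for k in keys:
            d[k]
        return time.perf_counter() - t0

    n = 2000
    print("distinct hashes:  %.4fs" % probe_time(list(range(n))))
    print("colliding hashes: %.4fs" % probe_time([Collider(i) for i in range(n)]))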


Apologies. Throughout my CS undergrad I had only been given the impression and understanding that Big-O measured the worst case (an upper bound: no worse than), Big-Theta the average case, and Big-Omega the best case (a lower bound: no better than). Looking into it more now, I see that there are some more subtleties I either missed in class or was never taught.

Thanks for correcting me!


The subtlety here is basically that big-O and big-omega and friends are ways of characterizing functions, and functions map one input to one output. "Running time of a problem of size n" is not a function; it has a range of possible values for a given n. "Maximum running time of a problem of size n" is a function. That function itself, an² + bn + c for some constants a, b, and c, has lower and upper asymptotic bounds.

I thought you were right at first but then realized what was going on. This is a pretty subtle point and mostly uninteresting for well-understood algorithms like quicksort. But one slightly less subtle point is that big-theta isn't average case, it is the combination of big-O and big-omega, i.e., bounded from above and below (possibly with different constant factors) by the same asymptotic behavior.
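To make the "one input, one output" point concrete, here's a tiny numerical sketch (the constants are made up): for T(n) = an² + bn + c, the ratio T(n)/n² settles at a positive constant, which is exactly what it means for T to be big-theta of n²:

    a, b, c = 3.0, 100.0, 7.0  # arbitrary constants for illustration

    def T(n):
        # the single-valued function "maximum running time for size n"
        return a * n * n + b * n + c

    for n in (10, 100, 1000, 10000, 100000):
        print(n, T(n) / (n * n))  # ratio approaches a = 3.0, so T is Theta(n^2)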



