Show HN: Anonymous interview evals of strong software/ML engineering candidates
95 points by ngptprad on July 31, 2017 | 47 comments
We are a recruiting startup with a small twist. We represent engineering candidates who receive a recommendation through our personalized vetting process, which includes a technical interview with an unbiased third-party senior engineer. We match the candidates with interviewers based on their background and the roles they are looking for. We pay for the senior engineer's time to interview so that the feedback is completely unbiased.

These interviews are unique in that they are not set up to test if a candidate is a fit for a pre-set role, but instead they are personalized to extract each candidate's strengths and the roles in which they'd excel.

The senior engineers who interview for us have collectively interviewed 1000s of candidates and have built and led engineering teams at top tier startups and bigger companies (e.g., Google, Facebook, Uber, etc.).

Here are the two evals:

Candidate 1: https://goo.gl/U4jPR7

Candidate 2: https://goo.gl/VXMXRF

We'd love feedback on the two candidates and our interviewers' evaluations of them.

- Does the feedback give you a good sense of the candidate's strengths and the environments they'll do well in?

- Would this save time in your evaluation process because the candidate has already been recommended after a technical interview?

- Do you want to interview this candidate for your own team? Why or why not?

If interested in these candidates or other vetted candidates with full evaluations and interviewer identity, please feel free to reach out to ngptprad@gmail.com




> However, he was less comfortable jumping into [and] thinking about a problem outside of an area of expertise.

These kinds of snap judgements I find very problematic. As if anyone -- even if they're an "industry leader", even if they've interviewed hundreds of candidates; heck, even if they're a truly towering figure in their field (or otherwise a truly brilliant person that no one knows about yet) -- can make that kind of an assessment from a (highly contrived and stressful) 45-minute or so interaction with someone.

Maybe after working alongside someone for several months, you could say that. But from their off-the-cuff answers to your made up puzzle problems (or even from unstructured conversation)? I just don't buy it.

BTW you should be doing a lot more to anonymize these profiles. Blurring the school name is a good start, but you definitely should not include company names either, and the gender should be obscured as well. Even from just a small tuple of attributes like these, it wouldn't be too hard to identify, and perhaps cause considerable harm to, some of these candidates.


I don't think it's a "snap judgement".

Interviewing is trying to get as much signal as possible in a short window. Any good interviewer knows there is high variance; however, you still need to make an assessment quickly.


It kind of sounds like you said:

"I don't think it's a snap judgement. Interviewing is [making snap judgements]. Any good interviewer knows there is high variance, however you still need to make [a snap judgement]."


Apologies, to clarify:

"These kinds of snap judgements I find very problematic. As if anyone... can make that kind of an assessment from a (highly contrived and stressful) 45-minute or so interaction with someone."

"snap judgement" in my reading, implies a rashness, lack of carelessness, or impulsiveness, (or arrogance, which is implied by OP)

My counter is that although it is not a lot of evidence, it is generally a very deliberate act.


> My counter is that although it is not a lot of evidence, it is generally a very deliberate act.

Is not "lacking in evidence, yet deliberate" the very definition of... rash and impulsive?

BTW I wouldn't say these people are arrogant (for making these kinds of judgements), per se.

But it does seem that they're biting off more than they can realistically chew.


I'd recommend switching over to the singular "they" for referencing candidates, otherwise you lose a lot of benefits of anonymity.[0][1]

[0] https://www.nytimes.com/2016/02/28/magazine/is-blind-hiring-...

[1] https://www.analyticsinhr.com/blog/blind-hiring-increases-wo...


I'm wondering what parts of the interviews had anything to do with engineering. I saw questions about CS textbook trivia in the first interview and vague references to questions in the second.

I didn't see much substance with respect to problem solving (especially in the context of machine learning or data science), analytical thinking in terms of systems, or anything else that would benefit me were I a hiring manager using these reports from my team to inform my decision about moving forward.

If the first interviewer were on my team, our next one-on-one would have some time devoted to what "engineering" means to the person, and if over time it didn't stop being synonymous with "can regurgitate CS trivia", the interviewer would be removed from the pool of interviewers I select from to interview candidates.

The second interviewer needs to include more detail regarding the questions asked and the discussions that followed. It's far too vague to really have any weight attached to it.


> saw questions about CS textbook trivia

But that's the point - this is really a filter for people who have recently been cramming textbooks, e.g. someone just out of college. It's a way to sneak ageism past HR, that's all.


It could be. Some hiring teams at the place I work use these kinds of silly interviews, but they have a number of "older" people on the team. Some of my teammates use these questions for their interviews, and they haven't indicated any preference for younger developers (e.g. to save on wages).

I think ageism might play a role, but a bigger factor is that this industry is filled with people who have an academic degree and desperately want to think of themselves as engaged in sophisticated, math-y work, even when what most of them do most of the time is closer to the work of a glorified handyman. This includes myself, of course, and I work on a data science team.

The industry is just too enamored with itself.


I'm interested to know whether the context of the role being interviewed for is taken into account. I mean, the skills required to implement a large-scale distributed database engine are very different from those required to implement, say, an accounting solution (an app).

I have personally been interviewed by a "software scientist" for a role that was primarily high-level business logic, but because of the interviewer's background, the interview was full of puzzles, algorithms, and whiteboard tests, completely divorced from the actual role being hired for. I.e., he was interviewing for a software scientist (like himself) when the company clearly needed someone with experience in building products.

This is basically my biggest gripe with interviews in the past ten or so years: companies that think they are Google and hire like they are Google, but are not solving anything near the scope of what Google does.


I find the two evaluations useful and valuable. I am a hiring manager; some comments, if I may:

1. Always use HE, even if it is a SHE. (1)

2. We test our candidates with custom-made case studies that help them SHINE and are not focused on making them FAIL. This mindset has helped us evaluate candidates in the waters they currently swim in, rather than throwing them into our reality. It is the job of the interviewer to retrofit our problems into the candidate's universe. We have heard from candidates, both from hired and not-hired pools, that their stress level is reduced a lot by this approach.

(1) https://www.theguardian.com/women-in-leadership/2013/oct/14/...


I think the idea is novel enough, but the examples provided came across as "fluff" with very little useful material.

The meat of the form is the "Engineer Attributes" radio button scale for 14 different topics, not one of which was backed by a concrete example from the interview, a resume, a portfolio, etc. Those need to read "Agree, because ___," or "Disagree, because ___." Without that, these metrics undermine the credibility of the whole thing.

I wouldn't use this service in its current form.


I would move the interviewer profile to the end (sort of an "appendix" to the document). Yes, it's important context for some, but it's not about the candidate and putting that near the top interrupted my flow of understanding. (I confess that I was a little bit skimming, but I couldn't figure out how a VP/C-level person was being recommended for a junior IC role at first.)


I have some questions: How is leadership measured in these interviews? How is the ability to explain hard terminology and principles in simpler terms measured? How is reliability as a team member measured?

For the first: you can't assess leadership ability out of the blue, since each company has a different set of values and approaches (for example, Army vs. Google leadership styles).

For the second: you need a third person who has relatively little understanding of the field you are testing the candidate in, then you ask the candidate to explain something to that person, and then you ask the third person what they learned or understood from the conversation. So far I see no other way about it; if you have better ideas, please share.

And for the third: only time can reveal an attribute like reliability as a team player. There are way too many things that could go wrong in the chemistry between the team and the candidate.


In the last week or two, the idea of hosting mock interviews has taken the Twitch.TV "Programming" Community by storm. A number of willing victims have volunteered to go through the process of solving a problem within a set time limit. In general the interviewers have been able to stay positive even when people are so inexperienced they are clearly overwhelmed by the adventure.

My impression is that going through a mock interview in this way has pulled back the curtain on a process that, previously, only a select few could experience by shaking their network (school alumni, etc.) to arrange something similar privately.


Looks a lot like Triplebyte. What's the difference?


Yes, I'd say they are similar in that they also pre-vet candidates. One major difference is that we are more personalized to each candidate as we can match them to a senior engineer in their field (or field they are looking to enter). I think this should result in a more tailored and useful experience for the candidate (interview prep and feedback relevant for the role they want).

Also, our interviewers are independent, so there's no risk of conflict of interest or bias. My understanding of TB is that they do the interviews in house with their own engineers, or people they contract.


At what level are you trying to place candidates? For example:

Engineering manager: https://careers.mufgamericas.com/job-incubation-engineering-...

Engineering lead: https://careers.mufgamericas.com/job-engineering-lead-platfo...

We’re a bigcorp (top 10 global bank) but I’m actively kicking tires on novel and/or data-driven candidate matching techniques. If you’ve got something interesting, bring it on.


Even if there's no difference, what's wrong with creating something that already exists? More competition in any space can only be a good thing.


Why are people with PhD and VPE-level experience being slotted as junior engineers? It seems like a waste of time to recruit a senior person for a junior role.


I think those are both explained directly in the evaluations?

"​She does have 3 years of work experience prior to graduate school, but plays down the experience as her job search is focused on a specialization she has less direct experience in." Not to mention, the significance of "VPE" at a startup depends highly on the startup.

"However, he was less comfortable jumping into thinking about a problem outside of an area of expertise." Where the area of expertise is not software engineering.


Take a closer look - I had the same confusion at first.

The PhD and the VPE title belong to the interviewer.


You are confusing the interviewers with the candidates.


For interviewer 1's question of sorting an array with only two unique elements: am I missing something? The solution just requires counting the lower value, writing it out the number of times it occurs, and then writing the other value until the end.

https://en.m.wikipedia.org/wiki/Counting_sort
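A minimal sketch of that counting approach (hypothetical code, assuming the array really does hold at most two distinct values):

  # Count the lower value, then write it out followed by the other value.
  def sort_two_values(arr):
      lo, hi = min(arr), max(arr)            # the two unique values
      n_lo = sum(1 for x in arr if x == lo)  # occurrences of the lower one
      return [lo] * n_lo + [hi] * (len(arr) - n_lo)

  print(sort_two_values([5, 2, 5, 2, 2]))  # [2, 2, 2, 5, 5]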


For some reason many senior engineers cannot answer questions of this type. These types of questions are meant to see if someone can write programs in language X. They are also followed up with exploratory 'what if' questions that enable the interviewer to inspect the thought processes of the candidate. They are also used in behavioral interviews.


The person interviewed isn't senior. The one interviewing is.

I think her solution for an array larger than memory is very inefficient. Even in the smaller case, one might achieve a speedup by excluding the unnecessary read/temp/move in the else clause and just writing the two values, which will likely be in registers.
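A hedged sketch of what I mean (my own hypothetical code, not the candidate's): since the array contains only the two known values, the else branch can write them directly instead of doing a read/temp/move swap.

  # Two-pointer pass over an array containing only `lo` and `hi`.
  def sort_in_place(arr, lo, hi):
      i, j = 0, len(arr) - 1
      while i < j:
          if arr[i] == lo:
              i += 1
          elif arr[j] == hi:
              j -= 1
          else:
              # arr[i] == hi and arr[j] == lo: write the two constant
              # values directly (they likely sit in registers); no temp
              # variable or extra array read needed.
              arr[i], arr[j] = lo, hi
              i += 1
              j -= 1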


Thank you. My comment was not about the candidates but was an explanation for why these types of questions are asked by interviewers.


In the past year I've been in multiple conversations about how the future of tech recruiting could be shaped to best help both candidates and employers, and I am confident the future of this idea is extraordinarily bright.


Thanks, that's very encouraging feedback!


I think providing more information on the questions for Candidate 2 would be pretty helpful. The description is really vague as "easy" and "hard" could mean different things to different people.


That makes sense. I will reach out to the interviewer to get more detailed feedback. We also share the full code + questions as part of the full evaluation. Would you be interested in interviewing the candidate on the basis of this evaluation? Is the interviewer's identity important?


If a recording of the interview could be provided I think that would be best.

I think gathering enough trust that people start accepting these dossiers as fact will take a long time, unfortunately. But if you had a recording of the interview, I would be able to say much more definitively whether I liked the candidate or not. And as my opinions aligned with your dossiers, I'd trust your opinions more.


We have thought quite a bit about this and decided not to go down this route because we weren't sure hiring managers would actually have the time to listen to a 1-hour conversation. Also, would it help if the identity of the candidate (resume data) and the interviewer's profile were available (so that it's more credible)?


I second this! But also because an audio recording is more personal information to go off of.


The interviewer for candidate 1 isn't very technically proficient. In the "if the array was too big to fit in memory" case (where "disk" is explicitly mentioned), the proposed solution starts by moving the disk head from track 1 to track N, then back to track 2, then track N-1, 3, N-2, etc., oscillating until it stops at track N/2. That takes a grand total of O(N^2) seek time. The correct solution is to do one sequential pass over the file to count the a's and b's, and then a second pass to write the proper number of a's followed by b's, which takes O(N) total head-movement time (which dominates everything else).
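A sketch of that two-pass approach (hypothetical code, assuming a byte file of a's and b's streamed in fixed-size chunks):

  # Pass 1: stream the file sequentially, counting the a's.
  # Pass 2: rewrite it as all a's followed by all b's.
  def external_two_value_sort(path, a=b"a", b=b"b", chunk=1 << 20):
      n_a = total = 0
      with open(path, "rb") as f:
          while block := f.read(chunk):
              n_a += block.count(a)
              total += len(block)
      with open(path, "wb") as f:
          for value, count in ((a, n_a), (b, total - n_a)):
              while count:
                  n = min(count, chunk)
                  f.write(value * n)  # write in chunks to stay within memory
                  count -= n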


I think the code was written for the case where the array fits in memory, and for that case the solution is O(n) in time complexity. From the feedback, it seems they only talked through the solution for the case where the data doesn't fit in memory.


"To test further, I asked her how she will approach the problem if the array was too big to fit in memory ... She then worked on adapting her original code to work with left and right batches, swapping ... back to disk." (Emphasis mine.)


The problem statement did not mention anything about optimizing the real-world throughput of the algorithm. For the things you are talking about to matter, there would need to be a far more complete problem specification.


I find it odd the scale goes directly from Junior to Senior Engineer. Is this a Bay thing or is that a traditional transition?


Both interviews are highly biased towards interviewing skills: BS and basic algorithms. Judging whether someone has the skill to build a large system based on a conversation is silly. You cannot make an informed decision.

You should look for people with a portfolio. If someone has made a project in TensorFlow to paint in Picasso style, then you should hire them regardless of education.

This is an unpopular opinion. You cannot become good in software engineering if you only read HN and work 9-5.

My ideas for interview:

  1. Ask the candidate to bring a laptop; people running Linux get a +1.
  2. Pick a random testing framework and ask if the candidate can set up and run some tests for fizzbuzz.
  3. Ask a difficult question on system design, the CAP theorem, compilers, or OS internals. They should not be able to answer these.
  4. If they have lasted this far, make them comfortable before the finale. Pat their ego a bit.
  5. Confront the candidate, essentially troll them a bit. For example: describe a fictional homemade crypto system built in PHP. Judge their response.
Instead of a comfy 30-minute BS talk about their previous experience, you will have a better picture of the person if you put them in the hot seat. The above is just an opinion, as I do not have interviewing experience.


Are you serious with your ideas for an interview? You have:

1. Someone running linux? What about gamers who dual boot or VM into linux? What about people using the new windows+ubuntu thing? What about people who do windows/android/whatever dev that can be done completely fine in windows?

2. Fine, but extremely easy. Not much more than a small filter

3. You don't expect an answer? What are you getting out of this?

4. What?

5. What? Why?


Yeah, OP has some serious issues. Lul.


> You cannot become good in software engineering if you only read HN and work 9-5.

Yes, you can. I know several people who are and do.

> Ask the candidate to bring a laptop

Which would screw over people who have desktop PCs or do not own their own laptops (some do not; it's expensive for some).

> people running Linux +1

Ew. Checking for Linux skills is fine; judging someone based on their personal setup is elitist and arbitrary.

> Ask a difficult question on system design, CAP theorem, compilers and OS internals. They should not be able to answer these.

Then why ask?

I would absolutely not be able to get a clear picture of a candidate from these questions. There's no strategy; no context. And not to get too personal, but, yeah -- you do need some interviewing experience.


> Confront the candidate, essentially troll them a bit. For example: Describe fictional homemade crypto system build in PHP. Judge their response.

I would respond I have no idea about your fictional system if I had never heard of it before. What would be your takeaway from my response? Please do not ever interview if you want to just "troll" candidates and waste their time.

> Instead of comfy 30 min BS talk about their previous experience you will have a better picture of the person if you put them in the hot seat.

Oh boy, I bet you believe in those high pressure interviews designed to see how the candidate performs under difficult conditions.


Sounds like you are more interested in hazing than interviewing.


The BEST way to figure out if someone has built large systems is to ask them about it: how it was designed, who initiated which part, edge cases, people involved, etc. The candidate will often start talking more than they would if asked a question with a simple yes-or-no answer. BS sniffing here is fairly simple.

It also tests soft skills: whether the candidate can communicate with others (even engineers); large systems often cannot be built and maintained by a single developer, so the communication needed to build and scale a team is another sign of large-systems (people and tech) experience.

How would a portfolio demonstrate large system experience, unless the system was completely online? It's much more difficult to put a large system into a portfolio than small code samples. And if the large system is of value - why interview at all!? =P

For pure programmer/tech chops, sure, the interview ideas are fine. However, that's only one aspect of finding the right person for the right position at a company.

The hotseat approach is not right for all companies.


>If someone has made a project in TensorFlow to paint in Picasso style then you should hire them regardless of education.

Hmm, I definitely don't agree. Once the Style Transfer paper was released, it became pretty trivial to implement in a DL framework. It's certainly a good indicator that someone knows how to implement deep learning algorithms, but it's a pretty small facet of someone's skill. You can implement Style Transfer while still writing ugly, buggy, mediocre code. You could also copy most of the code from different open-source DL repos, even before anyone had implemented Style Transfer.

That said, someone taking the time to implement Style Transfer does show that they're reading recent research and are curious about deep learning, which are definitely good signals.



