
So, who feels like building a cloud-based "work-sample tests" platform with me?



Funny, last week I wrote some notes about doing just this :). It would be hard to build a general purpose platform, but I think if it focused on just replacing the initial technical phone screen, it could be both reliable and useful.

There's a common set of skills that most any software engineer should have. I think those skills can probably be checked via a web tool, and in higher fidelity than you can get over the phone.

As a candidate applying to N jobs, I'd much rather take a web screen once instead of doing N technical phone interviews. And as an interviewer, I'd much rather not have to ask FizzBuzz again.
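For anyone who hasn't had to ask it: FizzBuzz is the canonical phone-screen warm-up, and it's the kind of basic-skills check a web screen could trivially automate. A minimal version:

```python
def fizzbuzz(n: int) -> str:
    """Return "Fizz" for multiples of 3, "Buzz" for multiples of 5,
    "FizzBuzz" for multiples of both, and the number itself otherwise."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

if __name__ == "__main__":
    for i in range(1, 101):
        print(fizzbuzz(i))
```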


I envision something akin to the HTTP pipeline in a web app, composed of several "microservices".

One stage of the pipeline is intentionally left out. The candidate is then required to build an actual service. They can use any language they want, even any cloud host they choose. All that's provided is the entry points, exit points, wire protocol, data encoding, and the spec to be implemented. After that's passed, advanced topics like logging, telemetry, and performance provide grounds for further discussion to assess development "philosophy".
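To make that concrete, here's a minimal sketch of what a candidate's missing stage might look like, under wholly hypothetical assumptions: the wire protocol is JSON over HTTP, the entry point is POST /, and the spec says "drop duplicate events by id". Only the shape of the exercise matters; the spec, port, and field names are made up for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def dedupe(events):
    """Core of the hypothetical spec: keep the first event seen for each "id"."""
    seen, unique = set(), []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            unique.append(event)
    return unique

class StageHandler(BaseHTTPRequestHandler):
    """Entry point: POST a JSON array of events, receive the deduplicated array."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        events = json.loads(self.rfile.read(length))
        body = json.dumps(dedupe(events)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the stage locally (blocks until interrupted):
# HTTPServer(("", 8080), StageHandler).serve_forever()
```

The platform would then feed test traffic through the candidate's stage and check the output against the rest of the pipeline.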

But the real genius would be selling this work-sample platform-as-a-service to enterprise customers, who could be prompted into building a catalog of microservices based on their APIs and real data. Out of that, some innovative, hackathon-esque mini-products might hopefully arise ;)

I definitely never thought I would become an "HR guy", but I really like this idea. I'll include a contact email in my profile if anyone is interested in discussing it further off HN as well...


I've done a few online tests recently, and I was thinking of how to do a better one.

1) The main idea was to have the results of unit tests visible live. I've done a few tests where I found out I didn't get through even though the test rig said everything worked: it compiled, it got the examples right, but some hidden test failed. You never find out why, even though it's probably nothing surprising.

So just make it explicit. A bunch of lights for each test, a description of what the test is, and there you go.

2) That solves little problems like "given some set of numbers, how do I find some ridiculous property of those numbers?" type questions.

What you really need is something that tells you how people deal with complexity. I have a large problem, how do I solve it? There's always more than one right answer.

Here you might be better off doing some kind of subjective voting. People who are looking for work might find it comforting that other people in their situation are judging them. Or gratifying to be able to judge other people's skills. Perhaps there's some incentive structure that cleverly aligns this.




