spaghetti's comments (Hacker News)

Convincing yourself that future success isn't predictable is actually pretty easy. Regardless of how technically awesome someone is, there's a random variable that can dramatically affect one's future success: the candidate's co-workers.

Anyone with a few months experience at a big company knows that a new hire's coworkers are, for the most part, selected randomly. When the co-workers match, everything can work out, even if the candidate isn't technically strong. If the co-workers don't match, for whatever reason, the working relationship can easily fall apart.

I don't know what the ultimate solution is. Even work trials aren't perfect since company and team structure change. Accepting the unpredictability of future co-worker relationships makes it easy to see there's no magical solution. Understanding this might help things move in the right direction.


> If the co-workers don't match, for whatever reason, the working relationship can easily fall apart.

> I don't know what the ultimate solution is.

Two approaches to this problem that might make sense are: (1) hire based on your current employees' recommendations; or (2) hire entire existing teams all at once.

Based on what I read, both of those are fairly popular approaches already.


I have a few thoughts on this in no particular order:

- It would be interesting if the top-list gaming mentioned in the article were eliminated by replacing download or install count with something more meaningful, like "frequency of app opens" or "average time spent in the app per day". These can still be gamed, but it's harder than gaming a metric that relies only on raw download or install counts.

- There should still be some sort of collaborative filtering. I favor improving the top list's ranking algorithm over getting rid of the top list.

- Giving new apps more exposure and a better chance to succeed could be implemented similarly to how HN includes job postings: the listing is put in the top spot of "the list" for a bit and slowly falls down. In the App Store, the new app should probably fall rapidly unless user engagement is high. How to measure that? Download count is too easy to game, so other metrics would be needed.

- I think developers should be mindful that they're selling to (in general) everyone and not just other developers. When Marco mentions "high-quality" apps he's speaking from a developer's perspective. And developers often care a lot about details that the general public doesn't notice or doesn't care about if they do notice.

- The App Store should be faster. The iTunes load time feels too long these days.
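The HN-style "start at the top, decay unless engagement is high" idea in the third bullet can be sketched with a gravity-style decay score. This is a toy illustration in Python; the exponent, the +2 offset, and the engagement numbers are all made up for the example, not real App Store or HN values.

```python
import math

def rank_score(engagement, age_hours, gravity=1.8):
    """Decay-based ranking: score falls as the listing ages unless
    engagement (not raw downloads) keeps pushing it back up.
    gravity and the +2 offset are illustrative constants."""
    return engagement / math.pow(age_hours + 2, gravity)

# A brand-new listing outranks itself a day later at the same engagement:
print(rank_score(engagement=10, age_hours=1) > rank_score(engagement=10, age_hours=24))
```

The appeal of this shape is that a new app with zero traction falls off quickly, while one that users keep opening can hold its slot, which matches the bullet's intent better than a pure download-count top list.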


That would involve sending a message to Apple every time you opened and closed an app. How would you feel if Apple stored details about how often you used your various apps on their servers? I'm sure a lot of people would be outraged.


My experience participating in TopCoder SRMs (quick algorithm development competitions) has paid off quite a bit when it comes to whiteboard-style coding interviews (both as the interviewee and the interviewer). Just thought I'd share a few pointers:

- Participating in SRMs is great practice for interviews because you're forced to learn to think clearly while under intense time pressure.

- The TopCoder tutorials are a great resource. In addition to reviewing useful mathematics, data structures, and algorithms, they'll introduce you to how to compete well, for example time management across the easy, medium, and hard problems.

- You have four language choices for SRMs. My preference is C#. I like using Visual Studio because the auto-complete/IntelliSense comes in handy when you have to write a bit of boilerplate code (like "new Dictionary<int, int>();"). I've also used MonoDevelop on OS X, and it worked well. IIRC almost all problems are solvable using C#, except a few where, to my knowledge, C++ was necessary. Read up on Petr's advice; I believe he's a fan of C#.

- My knowledge of C# isn't particularly deep, so don't let learning a new language scare you away. IMO you just have to effectively use dynamic arrays (List<T>), hashtables (Dictionary<TKey, TValue>), static arrays, and strings. Also, a sorted key-value store like SortedDictionary comes in handy once in a while.

- Last bit of advice: solve lots of problems and have fun! You can access a vast archive of problems from previous competitions and match editorials which usually have decent explanations for the problem solutions.
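The "hashtable for lookups" pattern from the toolkit above carries over to any of the SRM languages. Here's a small Python sketch (Python rather than C#, purely for illustration) of a classic easy-problem shape, counting pairs that sum to a target — a hypothetical example, not an actual TopCoder problem:

```python
from collections import defaultdict

def pair_count(nums, target):
    """Count pairs (i, j) with i < j and nums[i] + nums[j] == target.
    Single pass: a hashtable of values seen so far replaces the
    O(n^2) double loop you'd otherwise write under time pressure."""
    seen = defaultdict(int)  # value -> occurrences so far
    pairs = 0
    for x in nums:
        pairs += seen[target - x]  # each earlier complement forms a pair
        seen[x] += 1
    return pairs

print(pair_count([1, 4, 2, 3, 3], 5))  # 1+4, 2+3, 2+3 -> 3
```

In C# the same idea is `Dictionary<int, int>` with `TryGetValue`; the data-structure vocabulary transfers almost one-to-one.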


I believe Petr has switched to Java now, though.


One option is essentially a programming test framed as a first quick milestone. Say the app is for taking and sharing photos and has a $10k budget. Start with a milestone for an app with a single button. When the button is tapped the user can take a photo using the built-in, default camera view. Price this milestone at $300 or some other small fraction of the total cost.

If the contractor quickly sends you something simple that works without too many lines of code, then you have a decent signal that the skill level is sufficient. Anything else (slow response time, app doesn't work, below-expectations code quality, etc.) is a decent signal that the skill level is insufficient. In that case, just pay out the first small milestone, thank the contractor, cancel the project, and try someone else.


Yes, there are great developers and other professionals on Elance. Think about it from a developer's perspective: I can work at one company, on a relatively small variety of projects, probably maintaining some legacy code, with the same people at the same place every day for years on end, and earn $X/year. Or I can work for a few clients that I choose, on a wide variety of products, rarely or never maintaining legacy code other than my own, and work from almost anywhere in the world while earning something close to $X/year (probably +/- 20%).


Milestones are great. However, I like to have reasonably consistent prices. For a $4000 project you can have, say, four milestones priced at $500, $1000, $1000, and $1500. This way the client can bail out after the first milestone if things aren't going well, without losing too much money. The client also gets the "carrot on a stick" of a hefty fourth milestone without it being obnoxious (a $3200 fourth milestone, for example, would be).


Do salary deltas of even 100k make a substantial difference to the bottom line for large tech companies like FB and GOOG? Of course the difference is greater than zero but as a percentage of operating cost I imagine it's relatively small.

My guess is that large tech companies are eager to import foreign talent primarily because of cultural and socioeconomic differences. Who would you rather hire to maintain your legacy codebases? A skilled immigrant who's thankful for the six-figure salary and will obediently complete their tasks without much fuss? Or the native-English-speaking citizen who can find a six-figure salary at any of his friends' companies and will walk as soon as the legacy codebase's tech stack starts to reveal itself as less than inspiring?


I like to partition companies with products that gain traction into two groups. In the first group are technically mediocre founders who cobbled together a product that's almost unmaintainable. The second group consists of relatively experienced founders who cobbled together a product that's maintainable.

The two groups use distinct interview styles. Since the group with nearly unmaintainable code has many more members than the second, we see their interview style far more often. This is the familiar all-day brainteaser and relentless-quizzing style. It makes sense for these companies because they need warm bodies to throw at legacy codebases. The quality to seek in employees is perseverance and obedience, and this familiar interview style tests for just that.

The second group of companies uses the more thoughtful interview style. The founders value insight, creativity, and experience more than obedience. Hence their interview style focuses on these qualities.


> The quality to seek in employees is perseverance and obedience. And this familiar interview style tests for just that.

I know some actual interviewers who ask these types of CS-heavy, puzzle-based, brainteaser questions. They say their rationale is to gauge the candidate's attitude and to see how they respond behaviorally when presented with this kind of problem. In many ways, they are testing for submission in candidates. Basically, if you up and leave when asked how many manhole covers there are in New York, their system worked perfectly: they weeded out a candidate who they think would be an entitled prima donna. Corporate environments often require foot soldiers who will do the job without complaining, hence the testing for absolute obedience.

I disagree with this tactic, but unfortunately it is quite widely employed during the selection process.


If the bar for "absolute obedience" is not storming out of the room when asked a dumb question, then yay for absolute obedience.

Someone who really was the superstar candidate who shouldn't be bothered to answer pointless riddles would surely have the communication skills to tactfully change the subject to how they're the best candidate for the job.


I've never designed large-scale systems; however, I'm still fascinated. I wonder if LBs could learn over time which requests will probably take a while and which are likely to be fast? Then the requests could be uniformly distributed amongst the servers.

A system like this could have all sorts of knobs to turn. Requests could be partitioned into two groups: "probably fast" and "probably slow". Or three, four, n groups etc. Then the way in which the LBs distribute the requests could be tweaked. For example a ratio of 1/5 slow/fast requests per server.

This does require some feedback from the servers to the LBs. However it doesn't have to be fast. The servers could push some (request, time) pairs to some aggregating system at their leisure. Then the response time prediction algorithms used by the LBs are updated at some point. Probably doesn't have to be immediate.
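The idea in these three paragraphs — servers push (request, time) pairs back lazily, the LB learns per-route averages and buckets incoming requests as "probably fast" or "probably slow" — can be sketched in a few lines. This is a toy Python model; all the names and the 200 ms threshold are invented for illustration, and a real LB would need the caveats raised in the reply below (many balancers, sudden spikes, high URL cardinality).

```python
from collections import defaultdict

class LatencyPredictor:
    """Toy model: learn average latency per route from feedback the
    servers push at their leisure, then classify new requests."""

    def __init__(self, slow_threshold_ms=200.0):
        self.slow_threshold_ms = slow_threshold_ms
        self.totals = defaultdict(float)  # route -> summed latency (ms)
        self.counts = defaultdict(int)    # route -> samples seen

    def record(self, route, latency_ms):
        # Feedback path: doesn't need to be immediate or on the hot path.
        self.totals[route] += latency_ms
        self.counts[route] += 1

    def classify(self, route):
        if self.counts[route] == 0:
            return "probably fast"  # optimistic default for unseen routes
        avg = self.totals[route] / self.counts[route]
        return "probably slow" if avg > self.slow_threshold_ms else "probably fast"

p = LatencyPredictor()
p.record("/report", 900.0)
p.record("/ping", 5.0)
print(p.classify("/report"), p.classify("/ping"))  # probably slow probably fast
```

With buckets like these, the LB could then enforce something like the 1/5 slow/fast ratio per server mentioned above, instead of treating all requests as equal cost.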


You have good instincts here, but

1) In very large systems there are many many load balancers

2) Server "spikes" (e.g. swapping) can happen very suddenly, so distributing health info is a hard problem

3) Determining a priori which requests are heavy and light is also hard. Think about "SEO urls": they blow up the cardinality.

4) Even assuming perfect global knowledge, you are proposing to solve a variant of the knapsack problem.


I agree this is true for small companies. However, Dropbox is not a small company by most measures. Also, pg's remarks about how you couldn't acquire Dropbox or Airbnb for (some large dollar amount I forget) lead me to believe Google won't be acquiring them.

I'd imagine Google wants to acquire them. However I doubt Drew Houston would go that route.

