
I'm no expert in scalable algorithms, but my rudimentary understanding is that there are a few different ways to handle a problem with n inputs when n becomes a very large number:

1. Run it on a faster processor

2. Rewrite your algorithm so that it can run in parallel and the tasks can be divided among more cores/processors/servers

3. Rewrite your algorithm so that it's non-locking and the tasks can be completed asynchronously

4. Rewrite your algorithm so that each task takes less time (fewer/faster queries, fewer array accesses, fewer object instantiations, etc.).

In terms of the analogy, #1 is equivalent to working harder on the problem--just throwing more time/money/energy at it. Staying late at the office. This is the wrong approach in programming and in life, because it does not scale with the size of the problem.

#2 and #3 are basically division of labor and task delegation: you cut the wall-clock time it takes to complete the problem by splitting the work up. This gets the job done faster, but at a cost--you're paying for more heads to work on the job.
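
To make #2 and #3 concrete, here's a minimal sketch in Python (the names process_item, fetch_item, and items are made up for illustration, and it assumes the n tasks are independent):

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    def process_item(x: int) -> int:
        # Stand-in for one expensive, CPU-bound unit of work.
        return x * x

    def run_in_parallel(items):
        # #2: divide the work among worker processes; wall-clock time
        # shrinks with the number of workers, but you pay for more heads.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(process_item, items))

    async def fetch_item(x: int) -> int:
        # Stand-in for one I/O-bound task (a query, an HTTP call, ...).
        await asyncio.sleep(0.01)
        return x * x

    async def run_async(items):
        # #3: no task blocks waiting on another; they all run
        # concurrently and the results are gathered at the end.
        return await asyncio.gather(*(fetch_item(x) for x in items))

    if __name__ == "__main__":
        print(run_in_parallel(range(10)))
        print(asyncio.run(run_async(range(10))))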

#4 is finding a better/cheaper/faster way to do each individual task or sub-task in the problem. It's improving your big-O. Having a better big-O alone will not make #2 and #3 unnecessary, but its gains multiply with theirs--it's what business types call a "force multiplier." It's really just reducing the number of sub-tasks an individual worker has to do in order to satisfactorily complete a task. It's achieving the same thing by doing less.
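
As a minimal illustration of #4 (again with made-up names), the same question--"are there any duplicates in this list?"--can be answered in O(n^2) or in O(n):

    def has_duplicates_slow(records):
        # O(n^2): every item gets compared against every other item.
        for i in range(len(records)):
            for j in range(i + 1, len(records)):
                if records[i] == records[j]:
                    return True
        return False

    def has_duplicates_fast(records):
        # O(n): one pass, with a set lookup replacing the inner loop.
        seen = set()
        for r in records:
            if r in seen:
                return True
            seen.add(r)
        return False

Both return the same answer; the second just does less work per input, which is exactly the kind of saving that then gets multiplied by #2 and #3.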



