
Always remember that the total time between having a problem and achieving results includes the coding (and recoding) time as well as the run time. There's a tendency for people to dismiss "slow" languages because they focus only on the run time.

I am well aware that there are good reasons to optimize things in languages like C (and I use them), but consider...

If I take several extra weeks to code, debug, and test a C solution when I could have had a script done much sooner, then my results don't arrive any faster overall. Why? Well, the script may be slow as dirt, but if it has a few extra weeks to churn through the data and produce results, it may be done before the C program is even ready.

It's also important to remember that not all bugs are in software. Suppose I was looking at the entire problem in the wrong way, and this didn't become apparent until I started seeing results. In that case, the earlier start with a "slow" program means the mistake is found much sooner, so the script can be thrown out and redone, producing correct results with not much of a time penalty.




Also, I thought I would mention that if you take a day to write a slow program that takes a week to run, that's cheaper than spending a week writing a fast program that finishes in a day. Either way the answer arrives roughly eight days after you start, but in the first case you spent one day of your own time and in the second you spent seven. After all, your time is much more expensive than the computer's!


You're both making a pretty odd assumption here, which is that a program typically runs exactly once and that I am the only user of my own program.

Users' time is also much more expensive than the computer's. That's why we write software in the first place.


No, I expect my scripts to run for a long time with many users. This doesn't preclude optimization; some of it is automatic (new hardware, interpreter and library improvements), and some of it is well established (using SWIG and C to replace only the tiny piece of the program that actually needs to be faster).
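
For example (a minimal sketch, assuming Python as the scripting language; the module name "fastpart" and the function "collatz_steps" are made up for illustration), the C side and the SWIG interface are only a few lines:

    /* fastpart.c -- the one routine that was too slow as script code */
    long collatz_steps(long n) {
        long steps = 0;
        while (n > 1) {
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            steps++;
        }
        return steps;
    }

    /* fastpart.i -- SWIG interface; "swig -python fastpart.i" generates the glue code */
    %module fastpart
    %{
    extern long collatz_steps(long n);
    %}
    extern long collatz_steps(long n);

Compile fastpart.c together with the generated wrapper into a shared library (_fastpart.so), and the script just does "import fastpart" and calls fastpart.collatz_steps() where it used to call the pure-script version. The other 99% of the program never changes.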

In some respects, having long-lived software with lots of users makes speed the least of my concerns, because they're always asking for new features, and those are relatively easy to add to scripts.

And the relationship between software speed and productivity isn't linear, because people multitask. If a program takes 10 seconds to run, I might sit and wait for it to complete without doing anything else. Whereas if the program takes a minute, I may decide to switch to another quick task and then return to see the results. In that case, both tasks needed to be done; one took longer, but it soaked up the "slow" runtime of the program and was only parallelized because of that long runtime.


I was referring to this statement: "Well, the script could be slow as dirt, but if it has a few extra weeks to churn through data and produce results, it may be done before the C program is even ready."

Comparing development time with running time simply makes no sense if you assume the script is going to run a thousand times. I agree that the relationship isn't linear; that's exactly why it's pointless to compare the two numbers as if it were. The only number that's really comparable is probably the profit you make in each case.


Yes, good point. I guess he was referring more to the scientific end of things, where you write a program that runs for days on massive data and there really isn't a user.



