
It continues to surprise me how much IT/sysadmin work is done by heuristic rather than actual measurement.

Applications slow? -> Add RAM

Database is slow? -> must be network/load

Service failures? -> reboot

It becomes very problematic in large IT organizations. Teams play hot potato with issues, and everyone makes excuses. Desktop support blames the DB team, the DB team blames the server team, and then they all blame the network. All the while, no one is actually measuring anything.
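For example, checking whether memory is actually the bottleneck takes only a few lines before anyone orders RAM. A minimal sketch in Python using the third-party psutil library (the thresholds are illustrative, not a rule):

    # Check actual memory pressure before concluding "add RAM".
    # Requires the third-party psutil package (pip install psutil).
    import psutil

    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()

    print(f"RAM used: {mem.percent:.0f}%  available: {mem.available / 2**30:.1f} GiB")
    print(f"Swap used: {swap.percent:.0f}%")

    # Sustained swapping, not high RAM usage alone, suggests a real shortage.
    if swap.percent > 25 and mem.percent > 90:
        print("Evidence of memory pressure; more RAM may actually help.")
    else:
        print("No clear memory pressure; look elsewhere before buying RAM.")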




It is very frustrating when one group gathers statistics and measures everything it can, and the others do not.

I administer servers. Our group tries to measure what it can and keeps historical data. Then we get a call from the application owners:

'Users are complaining about slow performance.'

Who?

'A few. I didn't keep the email. It's just slow, look into it, will you?'

So what can I say but 'Server x did this at such and such a time, and measure y is normal, but I don't think that's your problem because of reason z'.

In other words, yes, I finger point because I _know_ it's not my problem.

And as often as not, 'a few' users turn out to be one or two people whose problem went away when they restarted their PC.
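For what it's worth, the 'measure y is normal' answer is only possible because of the historical data. A crude version of such a collector, run periodically from cron, can be a few lines of Python with psutil (the file path and metric choices here are just illustrative):

    # Append a timestamped snapshot of basic server metrics to a CSV,
    # so "is this normal?" can be answered against history, not memory.
    # Requires psutil; run periodically, e.g. from cron.
    import csv, time
    import psutil

    row = [
        time.strftime("%Y-%m-%d %H:%M:%S"),
        psutil.cpu_percent(interval=1),
        psutil.virtual_memory().percent,
        psutil.disk_usage("/").percent,
        len(psutil.pids()),
    ]

    with open("/var/log/metrics_history.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)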


Surprisingly, almost no one cares about performance. When was the last time you saw a web framework or a database that routinely runs performance benchmarks on every iteration? I don't know of any. I was very impressed with the continuous measurements that PyPy does.
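The PyPy-style setup is simple enough to bolt onto any CI pipeline: time a representative workload on every commit and fail the build if it regresses past a stored baseline. A minimal sketch in Python (the workload, baseline file, and 20% threshold are my own placeholders):

    # Fail CI if a representative workload gets noticeably slower
    # than a previously recorded baseline.
    import json, sys, timeit

    def workload():
        # Stand-in for a representative operation of the project.
        sorted(range(100_000), key=lambda x: -x)

    # Best of 5 runs reduces noise from a shared CI machine.
    best = min(timeit.repeat(workload, number=10, repeat=5))

    try:
        with open("baseline.json") as f:
            baseline = json.load(f)["best"]
    except FileNotFoundError:
        with open("baseline.json", "w") as f:
            json.dump({"best": best}, f)
        sys.exit(0)  # First run: record the baseline and pass.

    if best > baseline * 1.2:  # Allow 20% jitter before failing.
        sys.exit(f"Performance regression: {best:.3f}s vs baseline {baseline:.3f}s")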


Can you name a web framework or database?

There are many benchmarks available (I'm not familiar with Python):

http://codeigniter.com/user_guide/libraries/benchmark.html

http://codeigniter.com/user_guide/general/profiling.html
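CodeIgniter's Benchmark class is essentially named timing marks. The same idea fits in a few lines of Python (the class and method names below are my own, not any framework's API):

    # Named timing marks in the style of CodeIgniter's Benchmark class.
    import time

    class Benchmark:
        def __init__(self):
            self.marks = {}

        def mark(self, name):
            self.marks[name] = time.perf_counter()

        def elapsed(self, start, end):
            return self.marks[end] - self.marks[start]

    bm = Benchmark()
    bm.mark("query_start")
    time.sleep(0.05)  # Stand-in for the code being profiled.
    bm.mark("query_end")
    print(f"query took {bm.elapsed('query_start', 'query_end'):.4f}s")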



The site is great, but ironically a bit slow.



