I agree that this should be the standard, not an exception.



I've been on /. once and on Digg twice, and every time my server crashed. I've improved it a lot since then, using different techniques, but I've never been hit by one of those again :(

New self-hosted blogs can't handle the load at peak times, because new bloggers start a WordPress blog, install a lot of plugins on it and never think about performance. I think most of us have made that mistake at some point.


Sure. But it's not a mistake; it's a correct choice given current expectations.

What I don't understand is why the various CMSs don't offer automatic on-the-fly reconfiguration. It's 2011. We should be able to have the best of both worlds:

-- when load is light, your blog software hits the database 42 times with every page loaded, no problem

-- when site load shifts from 1 page per hour to ten pages per second, the CMS should automatically, with no user intervention, say "oh shit" (perhaps audibly in the server room) and then automatically generate a static front page, and static article pages for whatever pages are being linked to, turn off the bells, turn off the whistles, email and tweet at the server administrator, etc. The CMS should cheerfully weather a storm all by itself. And when the load dies down, the static page should revert to a dynamic, 42-queries-on-the-database page, again without any intervention from the server administrator.
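Something like this would probably be enough. A rough sketch in Python (the thresholds, the sliding window, and helpers like render_dynamic/write_static/notify_admin are all made up for illustration, not any real CMS's API):

    import time

    WINDOW_SECONDS = 60
    PANIC_THRESHOLD = 600   # requests per window that trigger "oh shit" mode
    CALM_THRESHOLD = 60     # requests per window below which we go dynamic again

    request_times = []
    panic_mode = False

    def handle_request(path, render_dynamic, read_static, write_static, notify_admin):
        """Serve a page, switching between dynamic and static rendering by load."""
        global panic_mode
        now = time.time()
        request_times.append(now)
        # Keep only the requests inside the sliding window.
        while request_times and request_times[0] < now - WINDOW_SECONDS:
            request_times.pop(0)
        load = len(request_times)

        if not panic_mode and load > PANIC_THRESHOLD:
            panic_mode = True
            write_static(path, render_dynamic(path))   # freeze the hot page
            notify_admin("oh shit: %d requests/min, serving static" % load)
        elif panic_mode and load < CALM_THRESHOLD:
            panic_mode = False                          # back to 42 queries per view

        if panic_mode:
            return read_static(path) or render_dynamic(path)
        return render_dynamic(path)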

Does this exist anywhere, out of the box?


I don't know, but I do know that my architectural approach is to get any web app I write to hit the database only ONCE per page view. Ten calls is a lot for me. I've seen apps that hit the DB 100 times on each view, and I was able to reduce all 100 calls down to a single one.
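The usual trick is to push the per-row lookups into one join/aggregate. A toy sketch with sqlite3 and an invented posts/authors/comments schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE authors  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE posts    (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
        CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
    """)

    # Instead of one query for the posts, then one per post for its author and
    # another per post for its comment count (2N+1 round trips), join and
    # aggregate so the whole page view costs a single query.
    rows = conn.execute("""
        SELECT p.id, p.title, a.name, COUNT(c.id) AS comment_count
        FROM posts p
        JOIN authors a ON a.id = p.author_id
        LEFT JOIN comments c ON c.post_id = p.id
        GROUP BY p.id, p.title, a.name
        ORDER BY p.id DESC
        LIMIT 10
    """).fetchall()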


I heard one specific Joomla site used 800 queries to render the front page. A dev needs natural talent to reach that point, I guess.


I once worked on a site that loaded at an okay rate in production, but on the staging server a page would take about 15 minutes (seriously) to load, once every other hour or so. I thought this was strange, same code and all, so I looked into it.

There was a loop on the page that instantiated objects by id, with the number of ids dependent on the results of a previous query. What did that object instantiation do under the hood? It performed a query to fetch state. I calculated that 5000 queries were being run, and it only cropped up every once in a while (and seemingly 'never' on the live site) because queries were automatically cached for a set amount of time.

I was new on the project, and in what is probably poor form, went around the whole office, letting my horror be fully known. People just shrugged though.

edit: I forgot to add, modifying it to only perform one query was trivial.
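For anyone who hasn't hit this before, it's the classic N+1 query pattern. A toy reconstruction in Python/sqlite3 (the Item class and schema are invented, just to show the shape of the bug and the one-query fix):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, state TEXT)")
    conn.executemany("INSERT INTO items VALUES (?, ?)",
                     [(i, "state-%d" % i) for i in range(5000)])

    class Item:
        # The anti-pattern: every instantiation quietly runs its own query.
        def __init__(self, item_id):
            self.state = conn.execute(
                "SELECT state FROM items WHERE id = ?", (item_id,)).fetchone()[0]

    ids = [row[0] for row in conn.execute("SELECT id FROM items")]

    # 5000 hidden queries, one per constructed object:
    slow = [Item(i) for i in ids]

    # The trivial fix: fetch all the needed state in one query up front.
    states = dict(conn.execute("SELECT id, state FROM items"))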


Fewer queries aren't always better. Sometimes a couple of small, fast queries (easily optimized by the DB server) are faster than a join (which could lock several tables at once instead of just one).


This is true; you have to decide when that is. Of course, multiple queries across tables can be placed into a view and queried at once.
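E.g. something like this, sketched with sqlite3 and a made-up users/orders schema; the join lives in the view, so the application still issues a single simple query:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);

        -- The multi-table logic lives in the schema; callers query the view.
        CREATE VIEW user_order_totals AS
            SELECT u.id AS user_id, u.name, SUM(o.total) AS lifetime_total
            FROM users u LEFT JOIN orders o ON o.user_id = u.id
            GROUP BY u.id, u.name;
    """)

    rows = conn.execute("SELECT * FROM user_order_totals").fetchall()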


I was slashdotted at the beginning of the year. Spent quite a bit of time at #1 for https://grepular.com/Abusing_HTTP_Status_Codes_to_Expose_Pri...

I managed to have it up and running again in about half an hour by converting some content that didn't need to be dynamically generated for every request into static content.

Once things calmed down, I spent some time optimising it further, making static lots more of the content that didn't need to be dynamically generated for every request.

I'm glad I did, because I hit the #2 spot on Slashdot for most of a day this very weekend because of this article:

https://grepular.com/Protecting_a_Laptop_from_Simple_and_Sop...

My lowly Linode VPS didn't even break a sweat this time.
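The static conversion can be as dumb as a one-off pre-render pass that writes pages out to disk, so the web server serves plain files and never touches the application. A sketch (render_page, STATIC_ROOT and the example paths are assumptions for illustration, not my actual setup):

    import os

    STATIC_ROOT = "/var/www/static"

    def prerender(paths, render_page):
        """Render each page once and write it where the web server can serve it as a file."""
        for path in paths:
            html = render_page(path)   # the expensive dynamic render, done once
            out = os.path.join(STATIC_ROOT, path.lstrip("/"), "index.html")
            os.makedirs(os.path.dirname(out), exist_ok=True)
            with open(out, "w") as f:
                f.write(html)

    # e.g. prerender(["/", "/some_article"], render_page)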



