Let's make the web faster: Tools to discover and improve your site performance (code.google.com)
93 points by mace on Dec 8, 2010 | 24 comments



For performance benchmarking, httperf is at the top of their list. I wouldn't recommend it, but only because its configuration parameters are very confusing for anything beyond the simplest tests. For quick gut checks I prefer siege (Apache Bench is viable too, but it's quirky). For sophisticated or heavy-duty tests I prefer Grinder or Tsung (JMeter is a time sink). They also recommend Pylot, which I haven't used, but its author Corey Goldberg is active on Twitter and elsewhere.
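
A rough gut check doesn't need much, either. Something like this sketch (standard library only; the URL and numbers are placeholders, and it's no substitute for siege or Grinder) gives you a quick latency read:

    # Quick-and-dirty gut check: N requests, C at a time, report latencies.
    # Standard library only; URL and counts are placeholders.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/"     # placeholder target
    REQUESTS, CONCURRENCY = 100, 10

    def fetch(_):
        start = time.time()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        times = sorted(pool.map(fetch, range(REQUESTS)))

    print("median: %.0f ms  95th: %.0f ms" %
          (times[len(times) // 2] * 1000, times[int(len(times) * 0.95)] * 1000))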

For monitoring they recommend mon.itor.us. They're pretty good, but they don't have a free plan. http://browsermob.com/ is pretty cool because they use real browsers and can run your Selenium tests.

Some free options for monitoring a single site: http://wasitup.com, http://pingdom.com and http://blamestella.com (that last one is my product and all the plans are currently free).

And remember: you get what you measure!


Mon.itor.us is free. It's the free version of Monitis.


Yes, but the monitoring interval is 30 minutes, which isn't useful in most cases.


I've found trying to make your websites faster is kind of a drug. They can just never be fast enough. You start with Google Page Speed, and before you know it you're profiling your CSS for slow rules in Safari ...

For most database-driven applications, though, the real latency is almost always in the database. You really need something like memcached to totally remove the bottleneck. I think oftentimes frontend speed demons are trying to answer the wrong question.
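
The basic cache-aside pattern isn't much code, either. A minimal sketch with the python-memcached client (the key scheme and the query helper are made up for illustration):

    # Cache-aside sketch: check memcached first, fall back to the database,
    # and store the result with a TTL. Names here are illustrative only.
    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])

    def get_user_row(db, user_id):
        key = "user:%d" % user_id          # hypothetical key scheme
        row = mc.get(key)
        if row is None:
            row = db.query_user(user_id)   # the slow database hit (made up)
            mc.set(key, row, time=300)     # cache it for five minutes
        return row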


I totally agree that it's like a drug... eventually you'll start getting excited about a request time improvement from 15ms to 10ms by tuning your memcache client library...

Realistically, though, basic client-side optimization (e.g. expires headers, minification and concatenation, unobtrusive JavaScript) is low-hanging performance fruit that can make a website seem a lot faster with very little work. Since tuning a database-driven app takes a lot more skill and time, and most people haven't even done the client-side stuff, Google focuses on the client side.
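
Far-future caching headers, for instance, are nearly free. A sketch of the idea as a Flask hook (paths and lifetimes are placeholders; normally this lives in the web server config):

    # Far-future caching headers on static assets, sketched as a Flask hook.
    # In practice this is usually set in the Apache/nginx config instead.
    from flask import Flask, request

    app = Flask(__name__)

    @app.after_request
    def add_cache_headers(response):
        if request.path.startswith("/static/"):
            # Versioned/fingerprinted assets can safely be cached for a year.
            response.headers["Cache-Control"] = "public, max-age=31536000"
        return response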


Backend tuning also involves a lot of black magic, whereas going down the YSlow checklist and knocking things off will improve your users' perceived load speed. If you don't gzip, gzip. Bam. Done. It will work. Tuning the database, on the other hand, crikey -- how many chickens does one have to sacrifice to become an expert DBA anyhow?
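
The gzip win is easy to convince yourself of, too. A rough illustration from a Python prompt (actual savings vary by page):

    # Rough illustration of the gzip payload savings on a text-heavy page.
    import gzip
    import urllib.request

    html = urllib.request.urlopen("http://example.com/").read()  # placeholder page
    print(len(html), "bytes raw")
    print(len(gzip.compress(html)), "bytes gzipped")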


I think the DB equivalent of turning on gzip is slapping an index on your primary keys [EDIT: should be "on your primary data column"]. Not exactly expert material, and quite useful.


I am unaware of a commonly used database where primary keys are not indexed by default. Granted, on a good day I can spell SQL correctly, but it was my impression that primary keys imply an index by definition. Can someone rectify my understanding?


Yeah, you're right. Call it slapping an index on the column holding actual data. My bad.

(The reason that primary keys tend to be auto-indexed is simple: to verify uniqueness, you need to check whether the primary key of the record being inserted already appears in any existing record. Doing this without indices would be painfully slow.)
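
For illustration, a tiny sqlite3 sketch (table and column names are made up) of what an index on a frequently-queried data column buys you:

    # Index-on-a-data-column sketch with sqlite3; table/column names are made up.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")

    # Without an index this lookup scans the whole table:
    print(db.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                     ("a@example.com",)).fetchall())

    db.execute("CREATE INDEX idx_users_email ON users (email)")

    # With the index, SQLite reports a search using idx_users_email instead:
    print(db.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                     ("a@example.com",)).fetchall())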


Up to a certain point, a lot of backend issues can be mitigated by throwing hardware at the problem. At Torbit we've seen a lot of people realize their site was slow, and their first response was to add more servers and upgrade to beefier boxes. By the time we get to them, they've got their backend serving up HTML in 200ms but their site still takes 20 seconds to load. More often than not the front end is the problem.


No, but a request time improvement from 150ms to 50ms could have a demonstrable impact on your bottom line.


I'd have to disagree. A ton of the slowdown in modern sites is due to lack of front-end optimization. Optimizing CSS rules is probably a waste of time, but the other basics (outlined by Steve Souders) can make a huge difference. As he says, 80% of the time is spent on the front end.

A lot of web sites and apps that depend on the database are aware that this is the bottleneck and optimize for that without giving the front end much consideration.

Go look at the waterfall chart for techcrunch.com and tell me where you think their problem is.


Anecdotally, optimizing BCC took user-visible load time for e.g. the front page from seven seconds back in the day to two or so these days (and it becomes interactive faster than that). That page spends no significant amount of time at the DB.

Total time I've spent on front end performance in four years: maybe six hours if that.

This is one of those things whose ROI is so staggeringly high that not doing it is just wrong. It is like source control.


Very true, but although these optimizations may not increase the actual speed of a database-heavy app by much, they can certainly improve the perceived speed. I've found this to be especially true if you can leave the expensive queries until after the initial page load.
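
Something like this Flask sketch (all names are made up) is what I mean: render the shell immediately and let the page fetch the slow numbers afterwards:

    # Deferring an expensive query past the first paint (Flask; names made up):
    # the shell renders immediately, and the browser fetches the slow stats
    # from /stats afterwards (e.g. via XMLHttpRequest).
    from flask import Flask, jsonify

    app = Flask(__name__)

    def run_expensive_report():
        # Stand-in for the slow aggregate query.
        return {"signups_today": 42}

    @app.route("/")
    def index():
        # Fast: just the page shell, nothing expensive happens here.
        return "<html><body>Loading stats...<script src='/static/stats.js'></script></body></html>"

    @app.route("/stats")
    def stats():
        # The slow query runs here, out of the critical path.
        return jsonify(run_expensive_report())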


My point is more that I've known more than a couple of frontend developers who leap on nifty frontend profiling tools like these before they've even tried a couple of well-placed table indices, or asked the database developer or systems guy about them. In my experience it's a lot easier to wrest 500ms from a bad query than 5ms from some bad JavaScript.


In reality, however, you will usually wrest 50ms from the DB but can shave off whole seconds on the front end. It's usually not about optimizing JavaScript loops; it's about optimizing resource loading and minimizing the effect of network latency. Even if you shaved 500ms off a DB request, hardly anyone will notice if the page waits 15 seconds before it has fetched everything it needs to render.


I think part of the thing is that it's relatively easy to measure how long it takes to generate your page. You don't need fancy tools or guidance from Google to time how long your queries take (even a rough wrapper like the sketch below will do).

Figuring out how to make the page appear to load faster or how to encourage browsers to cache your files in a smarter way is quite a bit more complicated and subtle.
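
For the backend half, even a rough timing wrapper like this is enough to see where the time goes (the threshold and logging setup are just placeholders):

    # Rough-and-ready query timing, no fancy tools: wrap the functions that
    # hit the database and log anything slow.
    import functools
    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    def timed(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.time() - start) * 1000
                if elapsed_ms > 100:   # arbitrary "this is slow" threshold
                    logging.info("%s took %.0f ms", fn.__name__, elapsed_ms)
        return wrapper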


This is what I don't quite get: why make the distinction between "real speed" and "perceived speed"? Really, as far as the vast majority of your users are concerned, the speed of the page is measured from when they click you in a Google result to when Internet Explorer's status bar says "Done". How long your queries take relative to the page's rendering speed is irrelevant.

My main point is that the database and queries to it usually offer much lower-hanging fruit for speed hacks.


Using perceived speed as a goal, I can load the most prominent and interesting aspects of a page first. Ads and whatnot can show up a second later. So even before IE says "Done", users can be gratified and already interacting with my site.


I don't think that's true. Most people aren't watching the status bar, they're waiting to see a page appear.

Also, I can make the status bar say "Done" before my page is actually finished loading.


I agree the front end is where the majority of the latency occurs. Luckily there are automated solutions out there to tweak performance, but my favorite is http://blaze.io because they do all the performance adjustments in real time and you get the power of a leading CDN.


Can someone shed some light on what JSLint has to do with performance?


JSMin (and other compressors) often cause terrible, hard-to-understand bugs in sloppy code. Passing JSLint helps avoid a lot of those problems.


It is a bit of a stretch. I'd guess that proper use of variable declarations, which JSLint enforces, could make more efficient use of memory. It also doesn't tolerate idiosyncrasies like missed semicolons, which might amount to a touch less stress on processing.



