The point of that blog post is that sites should track the client-side errors that occur. It's hardly a new idea, but it's worth repeating, since so few people seem to do it.
Remember, folks: for apps that use the back-end as a JSON service, nearly all the code runs on the client. If you have no feedback about errors, you're assuming your 15 minutes of testing with Safari on a MacBook is representative of the entire Internet, including that guy running IE7 on XP with the Bing Toolbar. It's not a good bet.
They mention tinyfeedback, but there are also DamnIT and the YC-funded proxino.
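For reference, the client half of what these services do is tiny. Here's a rough TypeScript sketch; the /client-errors endpoint is hypothetical, so point it at whatever your back-end exposes. (Ironically, that IE7 guy would need an ES3 version of this, but the shape is the same.)

    // Catch uncaught errors and POST them somewhere you actually read.
    // The /client-errors endpoint is hypothetical.
    window.onerror = (message, source, line, col, error) => {
      const report = {
        message: String(message),
        source,                          // script URL
        line,
        col,
        stack: error?.stack,
        userAgent: navigator.userAgent,  // spot that IE7-on-XP guy
        page: location.href,
        when: new Date().toISOString(),
      };
      const body = JSON.stringify(report);
      // sendBeacon survives page unloads; fall back to fetch otherwise
      if (navigator.sendBeacon) {
        navigator.sendBeacon("/client-errors", body);
      } else {
        fetch("/client-errors", { method: "POST", body, keepalive: true })
          .catch(() => { /* never let the reporter itself throw */ });
      }
      return false; // let the default error handling run too
    };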
Our Java server (Jetty) logs 200s even when it's generating 500s.
Teach me to trust my own fucking logs, will you.
One of the more useful monitoring tools I've got is a simple shell-wrapped "HEAD" script that polls our cluster and reports an "OK" or "ERR" (slow responses trigger a "Hrm.."), along with the current, median, and standard deviation of response times, and a total error count. It sits in an omnipresent, always-on-top, small-font terminal window.
Something like:
    2012-03-30 12:03  i=9948
    Host  Status  Cur   Med   sd    Err
    www   OK      0.22  0.24  0.44  6
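For anyone who wants to replicate it: a rough sketch of the same idea in TypeScript for Node 18+. The host, poll interval, and the slow-response threshold that triggers "Hrm.." are my guesses, not the original script's.

    // Poll a host with HEAD requests and print a one-line status:
    // OK / Hrm.. / ERR, plus current, median, and standard deviation
    // of response times and a running error count.
    const HOST = "https://www.example.com/"; // stand-in for the real cluster
    const SLOW_SECS = 1.0;                   // guessed "Hrm.." threshold
    const samples: number[] = [];
    let errors = 0;
    let i = 0;

    const median = (xs: number[]): number => {
      const s = [...xs].sort((a, b) => a - b);
      const m = Math.floor(s.length / 2);
      return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
    };

    const stddev = (xs: number[]): number => {
      const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
      return Math.sqrt(xs.reduce((a, b) => a + (b - mean) ** 2, 0) / xs.length);
    };

    // Print "-" until we have at least one sample
    const fmt = (x: number) => (Number.isFinite(x) ? x.toFixed(2) : "-");

    async function poll(): Promise<void> {
      i++;
      const t0 = Date.now();
      let status = "ERR";
      try {
        const res = await fetch(HOST, { method: "HEAD" });
        const secs = (Date.now() - t0) / 1000;
        if (res.ok) {
          samples.push(secs);
          status = secs > SLOW_SECS ? "Hrm.." : "OK";
        } else {
          errors++;
        }
      } catch {
        errors++;
      }
      const cur = samples.at(-1) ?? NaN;
      const stamp = new Date().toISOString().slice(0, 16).replace("T", " ");
      console.log(
        `${stamp} i=${i}  www ${status} ` +
        `${fmt(cur)} ${fmt(median(samples))} ${fmt(stddev(samples))} ${errors}`
      );
    }

    setInterval(poll, 5000); // poll every five seconds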