
Thanks so much for your detailed response! This does sound very interesting!

If I may ask a few more questions, out of pure curiosity: What are, in your experience, the biggest challenges when it comes to frontend error monitoring from the point of view of a company / a developer using it? How can these be addressed? And what are good criteria to evaluate frontend monitoring services by these days?

(Obviously, you'll be biased here but that's alright. :))

EDIT: I should add: Obviously I've got some experience when it comes to frontend monitoring (though not a ton) and could – in theory – partly answer these questions myself. However, sometimes I wonder whether the problems we face in our day-to-day are the "correct" problems. (I.e. is everyone else struggling, too? How do other people solve their problems? How do other people choose their frontend monitoring solution?) Hence my curiosity.




I think we see "frontend monitoring" in two different buckets: "metrics-oriented" and "incident-oriented". Metrics are about things like Lighthouse scores, optimizing your frontend bundle, etc., which tend to become relevant only for larger companies, since only at that point do these things affect conversions to a significant degree. Incidents, however, are more about one-off issues happening in your web app, and that's what we're focusing on at highlight.io (at least right now). Examples include customer support issues, bugs that affect a user's experience, etc.

Both are important for larger companies, but for smaller companies, the only thing we see as relevant is frontend "incidents".

So to answer your question, I'd ask yourself what types of issues you're trying to track down. Does that make sense?


My question was more about the error/incident monitoring part. For us, the biggest challenges in this area have been:

1) grouping errors correctly. Often, identical errors are not grouped together because the browser messes up the stack trace, because error messages are not completely identical (they're browser-dependent), or because the stack trace (line numbers etc.) has changed slightly from one version to another. So we often need to group errors by hand (see the normalization sketch below).

2) identifying what the impact of an error is. Is it critical? Is it enough if we look into it next week? Is it relevant at all? On top of that, our web shop sees a significant number of automated interactions by resellers, so errors caused by broken browser extensions, their bots doing crazy things, etc. are quite common and often have to be ignored by hand.

I really hope session replays are going to help us with 2).
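To make 1) concrete, this is roughly the kind of normalization I mean; the specific regexes and field names are made up for illustration, not any vendor's actual grouping logic:

    // Illustrative only: derive a browser-agnostic grouping key for an error so the
    // "same" error lands in one bucket across browsers and releases.

    interface CapturedError {
      message: string;
      stack?: string;
    }

    function groupingKey(err: CapturedError): string {
      // Normalize messages that browsers word differently, e.g. Chrome's
      // "Cannot read properties of undefined" vs Safari's "undefined is not an object".
      const message = err.message
        .replace(/Cannot read propert(y|ies) of (null|undefined).*/i, "null/undefined property access")
        .replace(/.* is not an object.*/i, "null/undefined property access")
        .replace(/\d+/g, "N"); // drop volatile numbers (ids, timestamps, counts)

      // Keep only function/file names from the top stack frames; strip line:column
      // numbers, which shift between releases, and content hashes in bundle names.
      const frames = (err.stack ?? "")
        .split("\n")
        .slice(0, 5)
        .map((line) =>
          line
            .replace(/:\d+:\d+\)?$/, "")             // trailing line:column
            .replace(/[-.][a-f0-9]{8,}\.js/g, ".js") // hashed bundles like app.3f2c9a1b.js
            .trim()
        );

      return [message, ...frames].join(" | ");
    }

    // Both of these produce the same key despite different browsers and bundle hashes:
    // groupingKey({ message: "Cannot read properties of undefined (reading 'id')",
    //               stack: "at render (app.3f2c9a1b.js:10:5)" })
    // groupingKey({ message: "undefined is not an object (evaluating 'user.id')",
    //               stack: "at render (app.9bd01c77.js:12:9)" })

The idea is that the key only keeps the parts of an error that survive a browser change or a new release; everything volatile (line numbers, bundle hashes, numbers in messages) is stripped first.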


In my experience, the biggest problem we have is that it’s hard to figure out exactly what to log. There’s sensitive information in the system that we are not supposed to store outside designated locations.

Then there is storing the session data itself. Do we log every request? Do we log even very large responses?

We’re using https://openreplay.com
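One pattern that addresses both problems is to run every captured request/response through a sanitizer before it leaves the browser: redact the sensitive fields and cap the body size. A rough sketch follows; the field names, header list, and size cap are just examples, and this isn't tied to OpenReplay's actual API:

    // Illustrative sketch: scrub captured network data before handing it to a
    // session-replay/monitoring SDK. Field names, the size cap, and the hook shape
    // are assumptions, not any specific vendor's API.

    interface CapturedRequest {
      url: string;
      headers: Record<string, string>;
      body?: string;
    }

    const SENSITIVE_HEADERS = ["authorization", "cookie", "x-api-key"];
    const SENSITIVE_FIELDS = ["password", "ssn", "creditCard"]; // example field names
    const MAX_BODY_CHARS = 10_000; // cap instead of deciding log-or-don't-log per request

    function sanitize(req: CapturedRequest): CapturedRequest {
      // Mask auth material in headers rather than dropping the headers entirely,
      // so the replay still shows that they were sent.
      const headers = Object.fromEntries(
        Object.entries(req.headers).map(([k, v]) =>
          SENSITIVE_HEADERS.includes(k.toLowerCase()) ? [k, "[redacted]"] : [k, v]
        )
      );

      let body = req.body;
      if (body) {
        // Redact known-sensitive JSON fields; if the body isn't JSON, leave it alone.
        try {
          const parsed = JSON.parse(body);
          if (parsed && typeof parsed === "object") {
            for (const field of SENSITIVE_FIELDS) {
              if (field in parsed) parsed[field] = "[redacted]";
            }
            body = JSON.stringify(parsed);
          }
        } catch {
          /* not JSON, keep as-is */
        }
        // Truncate very large payloads so storage stays bounded.
        if (body.length > MAX_BODY_CHARS) {
          body = body.slice(0, MAX_BODY_CHARS) + `...[truncated ${body.length - MAX_BODY_CHARS} chars]`;
        }
      }

      return { url: req.url, headers, body };
    }

That turns "do we log this request at all" into "log everything, but only after it has been scrubbed and capped", which is an easier rule to reason about.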



