One quick note to people writing release notes pages:
Often these pages get posted to news aggregators, and the people who click on those links are not as likely to be familiar with your product. A quick 1-2 line elevator pitch at the top would do wonders.
How does that compare to the ELK (ElasticSearch/LogStash/Kibana) stack? (Is it even right to compare?)
I'm currently evaluating Elasticsearch + Kibana to handle metrics/errors/application logs in our web app. It's impressive, but Kibana is lacking a lot of features to be complete, IMO.
A good point for ELK in my case is that it runs on Windows, which does not seem to be the case for InfluxDB (running on Windows is a requirement at my company, unfortunately).
ELK is about log analytics; Grafana is about time series & metric analytics and monitoring. The difference might not seem that big, but currently it is quite a big difference.
Time series are usually more about being able to collect huge amounts of metrics. Metrics that can then be combined, averaged, filtered, put through a processing pipeline (analytical functions), summarized by different intervals. All in order to visualize (usually through graphs) recent live trends or long term trends and statistics.
Grafana is all about maximizing the power and ease of use of the underlying time series store so the user can focus on building informative and nice looking dashboards. It is also about letting users define generic dashboards through variables that can be used in metric queries; this allows users to reuse the same dashboard for different servers, apps or experiments as long as the metric naming follows a consistent pattern.
Grafana also uses Elasticsearch but not for log analytics, but for annotating graphs with event/log information.
At some point in the coming 1-3 years, log analytics and metric analytics & visualization are going to converge and be solved/addressed by the same piece of software. But that is tricky right now without sacrificing either domain.
It's a similar web interface (Grafana is based on Kibana), but for different things.
ELK is best for deciphering large amounts of data from logs IMHO. Yes logstash has tons of plugins for third party monitoring systems, but generally speaking you're going to dump your web server access/error logs and syslogs into your ELK setup.
Grafana makes graphite/influxdb more useful and is far and away the best option available. Usually you're going to be tracking things like cpu/memory/diskio/network traffic, but also anything from StatsD in your Grafana setup.
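Getting app metrics into that pipeline is pretty trivial, since StatsD's wire format is plain text over UDP. A minimal sketch (the metric names here are made up):

```python
import socket

def statsd_counter(name, value=1, rate=1.0):
    """Format a StatsD counter line, e.g. 'web.hits:1|c'."""
    line = "%s:%d|c" % (name, value)
    if rate < 1.0:
        # Sampled counters carry the sample rate so StatsD can scale them back up.
        line += "|@%g" % rate
    return line

def send_udp(line, host="127.0.0.1", port=8125):
    # StatsD listens on UDP 8125 by default; fire-and-forget, no response.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("ascii"), (host, port))
    sock.close()

send_udp(statsd_counter("web.hits"))               # increment a counter
send_udp(statsd_counter("web.errors", rate=0.1))   # sampled at 10%
```

StatsD then aggregates these and flushes the rollups into Graphite, where Grafana can query them.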
Combining the two systems into one could be neat, but they work great on their own and have their own benefits.
Thanks for the response. I should probably give Kibana 3.x a shot; I'm on 4.x and I suspect it's still pretty new and lacking in features because of it.
I could already track system metrics on Windows with WMI (well, I know CPU and memory are possible at least) and send them to ES, but I will need to hand-code it to avoid Logstash (too big for this simple job in my case).
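The hand-coded path is mostly just POSTing JSON documents to Elasticsearch's index API. A stdlib-only sketch (the index name and document fields are illustrative, not any standard schema):

```python
import json
import time
import urllib.request

def metric_doc(name, value, host):
    # Shape of one metric document; field names are illustrative.
    return {
        "@timestamp": int(time.time() * 1000),
        "metric": name,
        "value": value,
        "host": host,
    }

def index_metric(doc, es_url="http://localhost:9200/metrics/sample"):
    # POST a single document to an Elasticsearch index
    # (ES will auto-create the index on first write by default).
    req = urllib.request.Request(
        es_url,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

doc = metric_doc("cpu.user", 12.5, host="web01")
# index_metric(doc)  # uncomment with a reachable Elasticsearch instance
```

For any real volume you'd batch these through the bulk API instead of one request per metric, but the per-document version shows the idea.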
I think the most useful feature in Grafana for me has been Templating. It's really powerful and works well in an AutoScaling environment where things come and go. With graphite-web it would be a fight to keep the graphs collecting the right data points, or to update them if your hostname scheme changes, etc.
In an autoscaling environment, you just need to keep your metric paths consistent. Then all you need is wildcard queries. E.g., CPU userspace cycles on all your instances should be like:
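For instance, with a hypothetical `servers.<hostname>.cpu.user` naming scheme, a single wildcard target covers every instance:

```
servers.*.cpu.user
```

New instances start matching as soon as they report a data point, and terminated ones simply age out, with no dashboard edits needed.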
That's true. I guess what I prefer about doing it in Grafana is that the interface for managing complex graphs with overlays of different metrics, and ones using wildcards to match groups of metrics is just way easier.
> The Graph panel now supports 3 logarithmic scales, log base 10, log base 32, log base 1024. Logarithmic y-axis scales are very useful when rendering many series of different order of magnitude on the same scale (eg. latency, network traffic, and storage)
Have been waiting eagerly for logarithmic scale support.
I remember wanting to use graphite for a pet project and stumbled across early-release Grafana. I'm amazed that all the nagging items I had (even though I still used it) have been resolved.
The CORS handling, shipping with its own backend, not needing Elasticsearch (such a pain!), and sharing graphs are stellar.
Truly amazing.
On a very tangentially related note: we've recently open-sourced our .NET port of StatsD/Graphite and it's available under MIT license at https://bitbucket.org/aeroclub-it/statsify
Interesting! I am working on a Windows build of Grafana 2 right now. Does Statsify have a /render and /metrics/find API that is compatible with Graphite? If it did, Grafana could use it.
It has something similar to "/render"[0], but it's not Graphite-compatible. There's also support for "/find", but it is not currently part of the public API.
Adding Graphite-compatible endpoints should be fairly easy and this can really open up a lot of integration opportunities.
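For reference, the two Graphite-web endpoints Grafana queries look roughly like this; a sketch that just builds the URLs (the host and target paths are made up):

```python
from urllib.parse import urlencode

BASE = "http://graphite.example.com"  # hypothetical graphite-web host

def render_url(target, frm="-1h", fmt="json"):
    # /render returns datapoints for one or more target expressions.
    return BASE + "/render?" + urlencode({"target": target, "from": frm, "format": fmt})

def find_url(query):
    # /metrics/find expands a path query into matching metric tree nodes.
    return BASE + "/metrics/find?" + urlencode({"query": query})

print(render_url("servers.*.cpu.user"))
print(find_url("servers.*"))
```

Implementing those two routes with compatible JSON responses is essentially all a store needs for Grafana's Graphite data source to work against it.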
The delay could be because you are far from New York, where the graphite server is. But I am as well (in Stockholm), and the delay is between 200ms and 600ms for me, then maybe 100ms for the canvas rendering.
Also, the delay is higher than usual right now because the demo site is busy with the release announcement and lots of traffic from Hacker News and Twitter.
The rendering is very fast; switching dashboards can take between 500ms and 2 seconds. Just zooming in on a graph, if you have a fast metric store, is usually almost instant.
What's the recommended backend for this? I tried it with InfluxDB but felt that the feature set wasn't really up there, I couldn't get graphs to display the data I wanted.
We have a graphite instance elsewhere but it's running into disk space issues - will that be an issue here, too?
Grafana is a wholly JavaScript "chart display" front end that hooks onto a Graphite-query-compatible server. This is usually the graphite-web server, which stores time series data via Carbon into Whisper (an RRD-like database).
However, it seems there is a wide variety of choices. My view is that if any form of column-oriented data structure is struggling, try sharding your time series long before you swap technologies.
Picture phpMyAdmin, but for metrics and time series data. You point it to a time series database (such as Graphite or InfluxDB) and the app draws the charts for you with whatever settings you need, and puts them together in pretty dashboards for easy consumption.
It's a dashboard and graph composer for real-time and historical metric analytics (mostly via rich graphs). It's usually used for monitoring or application metrics analytics but can be used in many other scenarios as well.