Great article! I always love hearing Stripe talk about their internals.
I've been using this practice and I agree that it's incredibly useful. I think because people tend to think in terms of "logs", they end up overlooking the much more useful construct of "canonical logs". A pile of fine-grained logs is almost always less useful than a smaller number of fully-described canonical logs. Other observability tools often call these "events" instead of "logs" for that reason.
There's a tool called Honeycomb [1] that gives you exactly what this article's talking about in a really nicely designed package out of the box. And since it handles all of the ingestion and visualization, you don't have to worry about setting up Kafka, or the performance of logplexes, or teaching everyone SQL, or how to get nice graphs. I was a little skeptical at first, but after using it for over a year now I'm completely converted.
If you record fully-described "events" for each request, and you use sub-spans for the smaller segments of requests, you also get a waterfall-style trace visualization, which eliminates the last remaining need for fine-grained logs.
If this article seems interesting to you, I'd highly, highly recommend Honeycomb. (Completely unaffiliated, I just think it's a great product.)

[1]: https://www.honeycomb.io/
> The most effective way to structure your instrumentation, so you get the maximum bang for your buck, is to emit a single arbitrarily wide event per request per service hop.
> We're talking wiiiide. We usually see 200-500 dimensions in a mature app. But just one write.
Honeybee here. Feel free to just try it: there's a 14-day free trial, and a free community edition for small amounts of data :) Experiment away, and our community Slack is super friendly!
We certainly wouldn't fit into the community edition :)
Our main project is running on Django 1.11. I'm going to wait until we're on Django > 2 for the database tracing integration.
What I'd love to see is a screencast, demo, or series of screenshots that digs into the (out of the box) Django integration. NewRelic gives us a lot of insight into our database performance, including EXPLAIN traces for slow queries. Does Honeycomb provide something similar?
Yes, Honeycomb is great. It's one of those "I wish I had more big projects, just so I could use this more" services. Other APMs / logging systems are just not really comparable.
I am wondering how things like OpenTracing-esque spans and sub-spans fit into the format Stripe describes. Are they just logged as `subspan1`, `subspan2`, `subspan3` in the log format?
It seems like that could work, but I'm also unclear whether each sub-span would be better off as its own log line. But that carries its own problems.
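For concreteness, a made-up sketch of the two options I'm weighing (field names invented, not anything Stripe has described):

```
# option A: fold span timings into the single canonical line as prefixed keys
canonical-log-line request_id=req_123 http_path=/v1/charges duration_ms=120 span.db.duration_ms=40 span.external_api.duration_ms=65

# option B: one line per span, correlated by a shared request_id
canonical-log-line request_id=req_123 http_path=/v1/charges duration_ms=120
span-log-line request_id=req_123 span=db duration_ms=40
span-log-line request_id=req_123 span=external_api duration_ms=65
```

Option A keeps everything queryable from a single row; option B keeps per-span keys out of the main line but puts you back into join/correlation territory.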
It's interesting that they've found denormalizing their log data so useful. I'm surprised to hear that it performs better for practical queries than a database with appropriate indexes, and that they've been able to build more ergonomic interfaces to query it than the standard relational approach a lot of people already have experience with. But I don't know much about log management at scale, so I'm only mildly surprised.
Denormalization typically improves performance. Normalization isn't done for performance reasons but for consistency reasons, i.e. so that data isn't duplicated and there is only one source of truth.
Yes, exactly — normalization is really useful for reasons of quality and correctness, but generally not so important for data like logs that's rotating through the system on a pretty constant basis.
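As a contrived illustration (invented fields): a normalized setup keeps user attributes in one place and forces a join at query time, while a denormalized line carries everything a query needs:

```
# normalized: the log line only carries a key; user details live in a separate users table
event=api_request user_id=usr_123 duration_ms=84

# denormalized: attributes copied onto every line, so no join is needed at query time
event=api_request user_id=usr_123 user_plan=enterprise user_country=DE duration_ms=84
```

The copied attributes can drift from the source of truth, which is exactly the consistency cost normalization guards against, and which matters much less for logs that age out anyway.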
And addressing the parent's point on databases: they don't look like an RDBMS, but you can kind of think of log management/querying systems like Splunk et al. as a specialized database with specific properties:
- Flexible indexing: Logs change frequently, which makes keys come and go, so it's convenient not to have to be constantly building new indexes to make them searchable.
- Optimized for recent data: Newer logs tend to be accessed relatively frequently and older logs much more rarely (if ever), so these systems generally rotate data through different tiers of storage as it ages — the new on fast machines with fast disks, the old on slower machines with large disks, and the very old probably just in S3 or something.
- High volume: Any of the traditional relational databases would have a lot of trouble with the volume of data that we put through Splunk. (That said, its problem domain is more constrained — it scales horizontally much more easily because it doesn't have to concern itself with things like consensus around write consistency.)
How many columns does the average canonical log entry at Stripe have? What does the mix of low/high-cardinality string fields look like vs. the number of metric/counter fields?
Logs can be treated as database rows regardless of source format (plaintext, CSV, JSON, etc). The modern approach for dealing with large-scale tables is column-oriented storage: these databases can easily handle billions of log lines without indexes by using ordering, partition maps, compression, etc.
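As a sketch of what that can look like in practice, here's an illustrative ClickHouse-style schema (table and column names made up):

```sql
-- Wide, denormalized log lines stored column-by-column; no secondary indexes.
CREATE TABLE canonical_log_lines (
    ts          DateTime,
    service     LowCardinality(String),
    http_method LowCardinality(String),
    http_path   String,
    http_status UInt16,
    duration_ms Float64
) ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)  -- old partitions are cheap to drop or move to cold storage
ORDER BY (service, ts);      -- data stays sorted, so service/time range scans are fast
```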
It's also about DRY and accurately modelling the data for general operations. Imagine the difference between a data environment where a GDPR deletion request comes in, and everything has a relation back to customer identity, and one where the customer identity is denormalised out to many places or only implicitly present.
Strange that they went with plain text when the industry is converging on (newline-delimited) JSON logs for structured data. This also serves as the backbone of observability, with metrics and tracing being folded in and output as JSON as well.
Call them events and you can claim all the event-sourcing buzzwords too.
I wouldn't put too much emphasis on the plain text — we started logging back when carrying everything via JSON would've been going against the grain. These days it might've gone the other way (I'm not sure).
One point that I'd try to convey is that the canonical line technique works for any kind of structured format. We use logfmt in all our examples, but JSON would work just as well.
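For example, the same canonical line (field values made up here) carries identical information in either encoding:

```
# logfmt
canonical-log-line http_method=POST http_path=/v1/charges http_status=200 duration_ms=84 team=payments

# the same event as a JSON line
{"event": "canonical-log-line", "http_method": "POST", "http_path": "/v1/charges", "http_status": 200, "duration_ms": 84, "team": "payments"}
```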
Related tangent: I can't say enough good things about [lnav](https://lnav.org). It's like a mini-ETL powertool at your fingertips, w/ an embedded SQLite db and a terrific API. As of mid-2016 when I first used it, querying logs was extremely easy, and reasonably fast (w/ up to several million rows). Highest recommendation.
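For instance (made-up query, but it captures the flavor), you can drop to the prompt and run SQLite directly over the parsed log tables:

```sql
-- lnav exposes recognized log formats as virtual SQLite tables,
-- e.g. standard web access logs show up as access_log
;SELECT c_ip, count(*) AS hits
   FROM access_log
  GROUP BY c_ip
  ORDER BY hits DESC
  LIMIT 10;
```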
Disclaimer: I have no affiliation w/ the project or its maintainer -- but out of gratitude I mention it pretty much every time it's appropriate.
We've been using logging like this but with jsonl lines. Still easy to grep as straight text, but very handy to be able to parse with jq or other tools and be able to have rich values (or even substructures) as part of the log lines.
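For example (our own field names, nothing standard), a quick slice with plain grep plus jq:

```
# grep keeps the scan cheap; jq handles the structure
grep '"canonical":true' app.log \
  | jq -c 'select(.duration_ms > 500) | {request_id, http_path, duration_ms}'
```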
Log structure is really important. From the examples provided, I would suggest the same approach can be taken using a full 'logfmt' style, so the timestamp and the event type are set as keys, e.g. (illustrative values):
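```
# hypothetical canonical line in logfmt; the keys are whatever your app cares about
ts=2019-08-21T17:30:49Z event=api_request http_method=POST http_path=/v1/charges http_status=200 duration_ms=84
```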
The main difference is that it makes parsing easier, since many tools can parse logfmt without problems.
One interesting use case here for me is the ability to perform queries in a schema-less fashion, so I'll give a quick pitch for what we are working on in Fluent Bit [0] (an open source log project): pretty much the ability to query your data while it's still in motion (stream processing on the edge [1]). Consider the following data samples in a log file (keys and values here are illustrative):
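```
{"path": "/v1/charges", "status": 200, "duration_ms": 80}
{"path": "/v1/charges", "status": 500, "duration_ms": 1250}
{"path": "/v1/refunds", "status": 200, "duration_ms": 95}
```

You can then run an aggregation against that data while it's still flowing, something along these lines (see [1] for the exact query syntax):

```sql
SELECT path, COUNT(*), AVG(duration_ms)
  FROM TAG:'app.*'
  WINDOW TUMBLING (5 SECOND)
  GROUP BY path;
```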
The results come back in a raw mode but can be exported to stdout as JSON, to Elasticsearch, Kafka, or any other supported output destination.
One of the great things about the stream processor engine is that you can create new streams of data based on results, use time windows (tumbling) for aggregation queries, and so on.
This is not unlike what we've been doing for years. We generate billions of log lines like this daily as JSON and inspect them with Splunk. By having consistent values across log lines, we can query and do neat things. "What was our system timing in relation to users who have feature x?" "What correlations can we find between users whose requests took too long and were not throttled? -> ah, 99% of those requests show $correlation_in_other_kv_pair!"
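A made-up example of the kind of query that becomes trivial once every line carries the same fields (Splunk SPL, with our own field names):

```
index=prod event=api_request feature_x=true
| stats count, perc95(duration_ms) AS p95_ms BY endpoint
| sort -p95_ms
```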
(I'm the author, and) Yeah, whatever you might call them, canonical lines are an "obvious" enough idea that I'd expect a lot of shops to have arrived at them independently. Besides yourself, I've heard from a number of people where that's been the case.
That said, it's also a surprisingly non-obvious idea in many respects — a lot of people are used to just traditional trace-style logging and never come up with a construct like them, so we felt they were worth calling out as something that might be worth doing.
I feel that logs have been around for so long that it's easy to take their capabilities for granted and not go much further. This is another example that there's more that can be done. It reminds me of RFC 5424.
At LogSense.com we actually tackled this problem too and came up with automatic pattern discovery that pretty much converts all logs into structured data. I just posted about it here: https://news.ycombinator.com/item?id=20569879 and I'm really curious whether this is something you'd consider helpful; any feedback is very welcome.
Oh, for sure. A lot of folks are doing really interesting things that others could learn from, but they don't stop and write something up that would help the industry grow that much more. I'm glad y'all wrote this up; I might do a similar one!
Is there any legal restriction on how long you can keep internal systems logs? If it's done right they don't contain PII, but they _can_ be used to track people if you have enough logs.
Not to my knowledge. At least not in the US. CCPA (coming into effect Jan 1) does give users the right to be deleted, which presents some challenges with this sort of data but nothing insurmountable.
I'm not sure if it's due to a legal requirement or not, but at my workplace (a university in Canada) we are required to keep all log files we produce in prod for 7 years.
There's some vague stuff. For example, GDPR requires you only keep data for a "reasonable" period of time, so many, many years would likely not be reasonable in most logging scenarios.
I suspect as a payment processor, though, being able to look back far when investigating breaches etc. would be important.