
By comparison, I wanted to use OpenTelemetry for a series of projects, but could find absolutely no useful documentation on how to do anything other than "send data from a webapp to a server / other cloud service that some vendor wants to sell you".

All I wanted to do was instrument an application and write its telemetry data to a file in a standard way, and have some story regarding combining metrics, traces, and logs as necessary. Ideally this would use minimal system resources when idle. That's it.
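The shape of what's being asked for can be sketched in a few lines. This is not the OpenTelemetry API, just a minimal stdlib-only illustration of the idea: record timed spans and append them to a local file as JSON lines. The file name and field names are made up for the example.

```python
# Sketch (stdlib only, NOT the real OpenTelemetry SDK): append each span
# to a local file as one JSON line, so telemetry survives offline.
import json
import time
import uuid
from contextlib import contextmanager

TELEMETRY_FILE = "telemetry.jsonl"  # hypothetical path

@contextmanager
def span(name, **attributes):
    """Time a block of work and append the resulting span record to disk."""
    record = {
        "trace_id": uuid.uuid4().hex,
        "name": name,
        "start_ns": time.time_ns(),
        "attributes": attributes,
    }
    try:
        yield record
    finally:
        record["end_ns"] = time.time_ns()
        with open(TELEMETRY_FILE, "a") as f:
            f.write(json.dumps(record) + "\n")

with span("fetch", url="https://example.com"):
    pass  # the work being traced goes here
```

Nothing here needs a collector or a network; shipping the file elsewhere becomes a separate, independent task.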




It doesn't read from files unfortunately, but https://openobserve.ai/ is very easy to set up locally (single binary) and send otel logs/metrics/traces to.

Here's how I run it locally for my little shovel project - https://github.com/bbkane/shovel#run-the-webapp-locally-with... .

Also linked from that README is an Ansible playbook to start OpenObserve as a systemd service on a Linux VM.

Alternatively, see the shovel codebase I linked above for a "stdout" TracerProvider. You could do something like that to save to a file, and then use a tool to prettify the JSON. I have a small script to format JSON logs at https://github.com/bbkane/dotfiles/blob/2df9af5a9bbb40f2e101...
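The prettifying step is simple enough to sketch (this is an illustration, not the linked script): read JSON-lines output, re-emit each record indented, and pass anything that isn't JSON through untouched.

```python
# Sketch: pretty-print JSON-lines log output, one indented record per line.
import json

def prettify(lines):
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            out.append(json.dumps(json.loads(line), indent=2, sort_keys=True))
        except json.JSONDecodeError:
            out.append(line)  # non-JSON lines (e.g. stack traces) pass through
    return "\n".join(out)

print(prettify(['{"level":"info","msg":"started"}', 'plain text line']))
```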


That's actually a neat little analysis platform, thanks!

Amusingly, if I run my application, generate custom-formatted JSON, and write it to a file, I can bulk ingest it... which is pretty much what I do now, minus the fancy visualization app. I think this speaks to my point that the OpenTelemetry part of the pipeline wouldn't be doing much of anything in this case. (The reason I care about files is that applications run in places where internet connectivity is intermittent, so generating and exporting telemetry from an application/process needs to be independent from the task of transferring the collected data to another host.)
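The "write JSON to a file, bulk ingest later" pipeline can be sketched like this. The batching is stdlib-only; the endpoint mentioned in the comment is an assumption based on OpenObserve's Elasticsearch-compatible ingestion API, so check its docs for the real URL and auth for your setup.

```python
# Sketch: turn a local JSON-lines telemetry file into NDJSON batches
# suitable for a bulk-ingest endpoint. File name and contents are
# illustrative; the HTTP step is deliberately left as a comment.
import json

def build_bulk_payload(path, batch_size=500):
    """Yield NDJSON string batches read from a JSON-lines file."""
    batch = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Round-trip through json to drop malformed lines early.
            batch.append(json.dumps(json.loads(line)))
            if len(batch) >= batch_size:
                yield "\n".join(batch)
                batch = []
    if batch:
        yield "\n".join(batch)

# Build a tiny sample file and batch it.
with open("telemetry_sample.jsonl", "w") as f:
    f.write('{"msg": "a"}\n{"msg": "b"}\n')

batches = list(build_bulk_payload("telemetry_sample.jsonl", batch_size=1))

# Each batch could then be POSTed (e.g. with urllib.request) to an
# ingest URL such as http://localhost:5080/api/default/mystream/_json
# -- an assumed path; consult the OpenObserve API docs.
```

Because the file is the source of truth, ingestion can happen hours or days after collection, whenever connectivity is available.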


For that use-case, you almost want the file to be rotated daily and just ... never sent ... at least until a customer has an issue, or you're investigating that hardware.
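Daily rotation with a bounded history is already in the Python stdlib; a minimal sketch of "rotate daily, keep it local until someone asks" (file name and retention count are arbitrary):

```python
# Sketch: rotate the local telemetry file at midnight and keep two weeks
# of history, so data accumulates on the device and is only shipped when
# a customer has an issue.
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "telemetry.log",   # hypothetical filename
    when="midnight",   # rotate once per day
    backupCount=14,    # keep 14 old files, then delete the oldest
)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

logger = logging.getLogger("telemetry")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("device booted")
```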


Maybe part of the issue is that the vendors working on it usually have time limits for ingesting data into their backends (e.g. timestamps must be no more than -18/+2h from submission time), so they don't really care about this use case.


The major tracing library in Rust suggests a consumer that prints to stdout, but it's at the end of the introductory documentation: https://docs.rs/tracing/latest/tracing/

EDIT: it's what I've used when bridging between "this is a CLI app for maybe 3 people" and "this will need to be monitored"



