
It doesn't read from files unfortunately, but https://openobserve.ai/ is very easy to set up locally (single binary) and send otel logs/metrics/traces to.
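If you want a rough idea of what sending traces to it from Go looks like, here's a minimal sketch using the standard OTLP/HTTP exporter. The endpoint path and auth header below are assumptions about a local OpenObserve instance - check your instance's ingestion docs for the real values:

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func initTracing(ctx context.Context) (*sdktrace.TracerProvider, error) {
        // Assumptions: OpenObserve on localhost:5080, the "default" org,
        // and basic-auth credentials in the Authorization header.
        exp, err := otlptracehttp.New(ctx,
            otlptracehttp.WithEndpoint("localhost:5080"),
            otlptracehttp.WithURLPath("/api/default/v1/traces"), // guessed path - verify locally
            otlptracehttp.WithHeaders(map[string]string{
                "Authorization": "Basic <base64 user:password>", // placeholder
            }),
            otlptracehttp.WithInsecure(), // local instance, no TLS
        )
        if err != nil {
            return nil, err
        }
        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
        otel.SetTracerProvider(tp)
        return tp, nil
    }

    func main() {
        ctx := context.Background()
        tp, err := initTracing(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer tp.Shutdown(ctx)

        _, span := otel.Tracer("example").Start(ctx, "hello")
        span.End()
    }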

Here's how I run it locally for my little shovel project - https://github.com/bbkane/shovel#run-the-webapp-locally-with... .

Also linked from that README is an Ansible playbook to start OpenObserve as a systemd service on a Linux VM.

Alternatively, see the shovel codebase I linked above for a "stdout" TracerProvider. You could do something like that to save spans to a file, and then use a tool to prettify the JSON. I have a small script to format JSON logs at https://github.com/bbkane/dotfiles/blob/2df9af5a9bbb40f2e101...
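A minimal file-writing version of that, assuming the OTel Go SDK (the filename and the pretty-print option are just illustrative):

    package main

    import (
        "context"
        "log"
        "os"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func main() {
        // Write spans as JSON to a local file instead of a network backend.
        f, err := os.OpenFile("traces.json", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        exp, err := stdouttrace.New(
            stdouttrace.WithWriter(f),
            stdouttrace.WithPrettyPrint(), // or keep it compact and prettify later
        )
        if err != nil {
            log.Fatal(err)
        }

        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
        defer tp.Shutdown(context.Background())
        otel.SetTracerProvider(tp)

        _, span := otel.Tracer("example").Start(context.Background(), "hello")
        span.End()
    }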




That's actually a neat little analysis platform, thanks!

Amusingly, if I run my application, generate custom-formatted .json, and write it to a file, I can bulk ingest it... which is pretty much what I do now, just without the fancy visualization app. I think this speaks to my point that the OpenTelemetry part of the pipeline wouldn't be doing much of anything in this case. (The reason I care about files is that my applications run in places where internet connectivity is intermittent, so generating and exporting telemetry from an application/process needs to be independent of the task of transferring the collected data to another host.)
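For concreteness, the bulk ingest step I'm describing is roughly this - the stream name, endpoint path, and credentials are placeholders, and OpenObserve's actual ingestion API may differ, so treat it as a sketch:

    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Assumptions: local OpenObserve, "default" org, a stream named "applogs",
        // and a file containing a JSON array of log records.
        f, err := os.Open("applogs.json")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        req, err := http.NewRequest(http.MethodPost,
            "http://localhost:5080/api/default/applogs/_json", f)
        if err != nil {
            log.Fatal(err)
        }
        req.SetBasicAuth("root@example.com", "password") // placeholder credentials
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("ingest status:", resp.Status)
    }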


For that use-case, you almost want the file to be rotated daily and just ... never sent ... at least until a customer has an issue, or you're investigating that hardware.
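The rotation part could be as simple as a writer that switches files when the date changes - rough Go sketch, names made up, which you could hand to the stdouttrace exporter mentioned upthread via WithWriter:

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    // dailyWriter appends to telemetry-YYYY-MM-DD.json and rolls to a new file
    // when the date changes. Old files just sit on disk until someone needs them.
    type dailyWriter struct {
        mu   sync.Mutex
        day  string
        file *os.File
    }

    func (w *dailyWriter) Write(p []byte) (int, error) {
        w.mu.Lock()
        defer w.mu.Unlock()
        today := time.Now().Format("2006-01-02")
        if w.file == nil || w.day != today {
            if w.file != nil {
                w.file.Close()
            }
            f, err := os.OpenFile(fmt.Sprintf("telemetry-%s.json", today),
                os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
            if err != nil {
                return 0, err
            }
            w.day, w.file = today, f
        }
        return w.file.Write(p)
    }

    func main() {
        w := &dailyWriter{}
        fmt.Fprintln(w, `{"msg":"hello"}`) // e.g. pass w to stdouttrace.WithWriter(w)
    }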


Maybe part of the issue is that all the vendors working on it usually have time limits for ingesting data into their backends (e.g. timestamps must be no more than -18h/+2h from submission time), so they don't really care about this use case.



