Nice project; I've been following it casually for a while.
The standout feature is tracing RPC flows across network connections via packet tracing.
VP of DeepFlow here. Thank you for your interest in DeepFlow!
Yes, we have implemented distributed tracing using eBPF. In simple terms, we use thread-id, coroutine-id, and tcp-seq to automatically correlate all spans. Most importantly, we use eBPF to calculate a syscall-trace-id (without the need to propagate it between upstream and downstream), enabling automatic correlation of a service's ingress and egress requests. For more details, you can refer to our paper presented at SIGCOMM'23: https://dl.acm.org/doi/10.1145/3603269.3604823.
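Conceptually (this is an illustrative sketch, not DeepFlow's actual code; the span fields and `correlate` function here are hypothetical), correlating a service's ingress and egress requests by a shared syscall trace id looks like a simple group-by, with no TraceID ever propagated between services:

```python
# Hypothetical sketch: group spans by an eBPF-derived syscall_trace_id
# so that one ingress request is linked to the egress requests the
# service made while handling it. Field names are illustrative only.

def correlate(spans):
    """Group spans that share the same syscall_trace_id."""
    by_trace = {}
    for span in spans:
        by_trace.setdefault(span["syscall_trace_id"], []).append(span)
    return by_trace

spans = [
    {"name": "GET /order",   "direction": "ingress", "syscall_trace_id": 101},
    {"name": "SELECT users", "direction": "egress",  "syscall_trace_id": 101},
    {"name": "GET /health",  "direction": "ingress", "syscall_trace_id": 102},
]

groups = correlate(spans)
# The ingress "GET /order" and the egress "SELECT users" land in the
# same group without the application injecting any trace headers.
```

The point of the real mechanism is that the id is computed in the kernel from syscall context, so the application code and its wire protocol stay untouched.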
would it be reasonable to assume that, because this is entirely network-based, it works best with systems that really emphasize the "micro" in microservices?
how well does this work if, say, my system has a legacy monolith in addition to microservices?
The advantage of eBPF lies in *request-granularity* (i.e. RPC, API, SQL, etc.) distributed tracing. To trace an application's internal functions, instrumentation is still required for coverage. Therefore, the finer the service decomposition, the more effective eBPF's distributed tracing becomes.
It looks like it depends on applications using either threads or goroutines for concurrency:
> When collecting invocation logs through eBPF and cBPF, DeepFlow calculates information such as syscall_trace_id, thread_id, goroutine_id, cap_seq, tcp_seq based on the system call context. This allows for distributed tracing without modifying application code or injecting TraceID and SpanID. Currently, DeepFlow can achieve Zero Code distributed tracing for all cases except for cross-thread communication (through memory queues or channels) and asynchronous invocations.
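Among the keys the quote lists, `tcp_seq` is what stitches spans together *across* hosts: both ends of a connection observe the same sequence number for the same segment. A minimal sketch of that matching step (the `match_by_tcp_seq` helper and span fields are hypothetical, not DeepFlow's schema):

```python
# Hypothetical sketch: pair the client-side and server-side captures of
# the same request by TCP sequence number, which both ends observe
# identically for the same segment.

def match_by_tcp_seq(client_spans, server_spans):
    server_by_seq = {s["tcp_seq"]: s for s in server_spans}
    pairs = []
    for c in client_spans:
        s = server_by_seq.get(c["tcp_seq"])
        if s is not None:
            pairs.append((c, s))
    return pairs

client = [{"svc": "frontend", "tcp_seq": 4242}]
server = [{"svc": "orders",   "tcp_seq": 4242}]
pairs = match_by_tcp_seq(client, server)
```

This also makes the stated limitation intuitive: a memory queue or channel handoff crosses threads without any syscall or TCP segment in between, so there is no shared key to join on.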
It uses eBPF to instrument kernel calls and also hooks into networking protocols (HTTP/2, PostgreSQL, etc.). Since the instrumentation runs in eBPF, it's essentially sandboxed, and observing memory, kernel function calls, and even profiling are all options. They have an agent that collects this information and sends it to the server over RPC (protobuf/gRPC). You should check it out (though some of the docs are in Chinese).
Some of our users' legacy processes are running on kernel 2.6, and the operations staff don't dare upgrade the kernel. There are indeed many limitations in 2.6, but even simple traffic analysis has brought users surprising insights. However, this also brings some trouble: even when a problem is known, no one dares to modify the code to fix it unless absolutely necessary :)
The biggest difference: DeepFlow enables *Distributed* Tracing.
In addition, DeepFlow combines the capabilities of eBPF and cBPF to achieve full-stack tracing of syscall + network_forward. You can take a look at our documentation: https://deepflow.io/docs/about/features/
heh, GitHub also has "symbol navigation" turned on for that license file, but I didn't dig into it to find out what source language it thinks the file is