Hacker News

> An async method basically does the same, but it means your logic doesn't have to handle the polling, orchestration, selection, ordering, etc.

In order to take advantage of that, I'd need to create one `async` method for each handshake with the TURN server, right? Like every `allocate`, every `refresh`, every `bind_channel` would need to be its own async function that takes in a `Sink` and `Stream` in order to get the benefit of having the Rust compiler generate the suspension point for me while waiting on the response. Who is driving these futures? Would you expect these to be spawned into a runtime, or would you use structured concurrency a la `FuturesUnordered`?

A single `Allocation` sends several of these requests, and in the case of `bind_channel`, concurrently. How would I initialise them? As far as I understand, I would need to be able to "split" an existing `Stream` + `Sink` pair by inserting another multiplexer in order to get a `Stream` that will only return responses for the messages that have been sent via its `Sink`, right?

I just don't see the appeal of all this infrastructure code to "gain" the code-gen that the Rust compiler can do for these request-response protocols. It took me a while to wrap my head around how `Node` should work, but the state isn't that difficult to manage because it is all just request-response protocols. If the protocols had more steps (i.e. more `.await` points in an async style), then it might get cumbersome and we'd need to think about an alternative.

> > It is doable but creates the kind of spaghetti code of channels where concerns are implemented in many different parts of the code. It also makes these things harder to test because I now need to wire up this orchestrating task with the TURN clients themselves to ensure this response mapping works correctly.

> I read through the node code and it feels a bit spaghetti to me as an outsider, because of the sans-io abstractions, not in spite of it.

> That comes from the nature of the driving code being external to the code implementing the parts.

It is interesting you say that. I've found that I never need to think about the driving code because it is infrastructure that is handled on a completely different layer. Instead, I can focus on the logic I want to implement.

The aspect I really like about the structure is that the call-stack in the source code corresponds to the data flow at runtime. That means I can mentally trace a packet via "jump to definition" in my editor. To me, that is the opposite of spaghetti code. Channels and tasks obfuscate the data flow: you end up jumping between the definition site of a channel, tracing where it was initialised, then navigating to where the other end is handled, etc.



