Hacker News

Why reinvent the wheel? Just look at some older protocol, like SLIP.



SLIP uses byte stuffing to reserve its end-of-frame sequence, which makes packet transmission time depend on the payload contents; that isn't acceptable in my application.


Is this a big deal? Say byte 0 is reserved: you can encode everything in base 255 and transmit the encoded bytes shifted up by 1. (Or, for a simpler scheme, send an appropriately encoded bitmask of which bytes are 0, followed by a copy of the data with each 0 replaced by anything else.)
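To make the bitmask idea concrete, here's one sketch in Python. The names and the choice of "appropriate encoding" are mine: each mask byte carries 7 zero-flags with the high bit forced to 1, so the mask itself never contains a zero, and the decoder assumes the payload length is known (e.g. fixed-size messages).

```python
def mask_encode(data: bytes) -> bytes:
    # Each mask byte carries 7 "was this byte zero?" flags; the high
    # bit is forced to 1 so the mask itself never contains a zero
    # (one way to read "appropriately encoded").
    mask = bytearray()
    for start in range(0, len(data), 7):
        m = 0x80
        for j, b in enumerate(data[start:start + 7]):
            if b == 0:
                m |= 1 << j
        mask.append(m)
    body = bytes(1 if b == 0 else b for b in data)  # 1 is an arbitrary nonzero
    return bytes(mask) + body

def mask_decode(frame: bytes, n: int) -> bytes:
    # n = original payload length (assumed known out of band)
    mask_len = -(-n // 7)  # ceil(n / 7)
    mask, body = frame[:mask_len], bytearray(frame[mask_len:])
    for i in range(n):
        if mask[i // 7] & (1 << (i % 7)):
            body[i] = 0
    return bytes(body)
```

Overhead is a fixed 1 byte per 7 payload bytes (~14%), which is worse than COBS's worst case but perfectly data-independent.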

Edit: this HN comment by KMag suggests a much simpler encoding https://news.ycombinator.com/item?id=12550584 (you'd need to process your packets in 254-byte chunks)

> replace first null with 255. Every later null, replace with the index of the previous null. Make the final byte the index of the last null (or 255 if no nulls were replaced). In this way, you've replaced the nulls with a linked list of the locations where nulls used to be. To invert the transformation, just start at the final byte and walk the linked list backward until you hit a 255.
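A sketch of that transform in Python (my reading of the quote, not KMag's code): I store the indices 1-based, so a stored index is never itself a zero byte, which is presumably why chunks are limited to 254 bytes.

```python
def kmag_encode(chunk: bytes) -> bytes:
    # Replace each null with a backward linked list of null positions:
    # the first null becomes 255, each later null becomes the 1-based
    # index of the previous null, and a trailer byte points at the last.
    assert 0 < len(chunk) <= 254    # 1-based indices must stay below 255
    out = bytearray(chunk)
    prev = 255                      # sentinel: "no earlier null"
    for i, b in enumerate(chunk):
        if b == 0:
            out[i] = prev
            prev = i + 1            # 1-based, so never the byte 0
    out.append(prev)                # trailer: index of last null, or 255
    return bytes(out)

def kmag_decode(frame: bytes) -> bytes:
    # Start at the trailer and walk the list backward until hitting 255.
    out = bytearray(frame[:-1])
    pos = frame[-1]
    while pos != 255:
        nxt = out[pos - 1]
        out[pos - 1] = 0
        pos = nxt
    return bytes(out)
```

Fixed overhead: exactly one trailer byte per chunk, regardless of content.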

Looks like this is https://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuff...
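For reference, a minimal sketch of the standard COBS algorithm (my own illustration, not from the linked page): each code byte gives the distance to the next zero, with 0xFF marking a maximal 254-byte run that has no implied zero.

```python
def cobs_encode(data: bytes) -> bytes:
    out, block = bytearray(), bytearray()
    for b in data:
        if b == 0:
            out.append(len(block) + 1)   # code byte: distance to next zero
            out += block
            block.clear()
        else:
            block.append(b)
            if len(block) == 254:        # maximal run: code 0xFF, no implied zero
                out.append(255)
                out += block
                block.clear()
    out.append(len(block) + 1)           # final (possibly empty) block
    out += block
    return bytes(out)

def cobs_decode(frame: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(frame):
        code = frame[i]
        out += frame[i + 1:i + code]
        i += code
        if code < 255 and i < len(frame):
            out.append(0)                # a zero was elided here
    return bytes(out)
```

Worst-case overhead is one byte per 254 payload bytes, and the output never contains a zero, so 0x00 is free to delimit frames.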


Yeah, COBS works. In my case, I can even go simpler, since messages are fixed size. But:

1) This is now part of the line code. And "uart + slip but modified" starts losing some of the "simplest thing" charm of "just do what everyone else does."

2) Looking at this without reference to prior work, it seems unlikely to be the simplest thing. Magic numbers everywhere: 8N1 uses 8-bit bytes to tolerate roughly 5% clock skew, which has nothing to do with the application; COBS forces sub-packets at 255-ish byte intervals, which doesn't match any inherent concept; and so on. It can work, but does it make sense in isolation?



