> receive side uses a per-packet interrupt to finalize a received packet

This is what kept much faster systems from processing packets at line rate. A classic example: standard Gigabit NICs and contemporary CPUs could not handle VoIP packets (which are tiny) at line rate, even though they could easily download files (which arrive as basically MTU-sized packets) at line rate.




Fortunately, the receive ISR isn't cracking packets open, just calculating a checksum and passing the packet on to lwIP. I wish there were two DMA sniffers, so that the checksum could be calculated by the DMA engine(s), as that's where a lot of processor time is spent (even with a table-driven CRC routine).


You can do it using PIO. I did that to emulate a Memory Stick slave on the RP2040: one PIO state machine plus two DMA channels with chained descriptors. XOR is achieved by writing through the atomic-XOR alias of any I/O register you don't otherwise need, at the +0x1000 offset (the datasheet documents this as the XOR alias).


Luckily the RP2040 has a dual-core CPU, so one core can be dedicated entirely to servicing the interrupts, passing packets to user code on the other core via a FIFO or whatever else you fancy.


almost

moving the packets between processors will hurt cache locality and hence hit rates, but yeah, it might be worth the tradeoff on busy systems


Why would there be context switching? One core is exclusively running user code and polls for new pre-processed packets in some loop; the other core is exclusively running low-level network code and dealing with interrupts.

It's a Cortex-M33, so there's no meaningful cache to speak of. Access to all memory takes essentially the same amount of time. If you're really worried about access time you could probably use SRAM banks 8 and 9 (each 4k, with their own connection to the AHB crossbar) and flip-flop between the two - but I highly doubt it would have a measurable impact.


If interrupt and userspace code run on the same core, there is a chance that the data will still be in that core's cache lines and it won't have to go through main memory.



