A TCP/IP stack is not an "implementation of syscalls". The things most netstack users do with netstack have nothing to do with wanting to move the kernel into userland and everything to do with the fact that the kernel features they want to access are either privileged or (in a lot of IP routing cases) not available at all. Netstack (like any user-mode IP stack) allows programs to do things they couldn't otherwise do at all.
The gVisor/perf thing is a tendentious argument. You can have whatever opinion you like about whether running a platform under gVisor supervision is a good idea. But the post we're commenting on is obviously not about gVisor; it's about a library inside of gVisor that is probably a lot more popular than gVisor itself.
Interesting to dismiss it as such. The gvisor netstack is a (big) part of gvisor and this article is discussing how the performance of that component was, and could well still be, garbage.
These tools bring marginal capability and performance gains, shoved down people's throats by manufactured security paranoia. Oh, and it all happens to cost you something like 10x the time, but look at the shiny capabilities, trust me it couldn't be done before! A netsec and infra peddler's wet dream.
> The gvisor netstack ... this article is discussing how the performance of that component was ... garbage.
The article and a related GitHub discussion (linked from TFA) point out that the default congestion-control algorithm (Reno) wasn't a good fit for long-distance (over-the-Internet) workloads. The gvisor team never noticed because they test/tune for in-datacenter use cases.
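For context on the Reno-vs-CUBIC contrast: on Linux the stock kernel defaults to CUBIC (set via net.ipv4.tcp_congestion_control), and a process can query or override the algorithm per socket. This sketch uses the Linux-only TCP_CONGESTION socket option as the kernel analogue of what was misconfigured in netstack; it is not gvisor's own API.

```python
import socket

# Linux-only sketch: inspect and override the per-socket congestion-control
# algorithm via TCP_CONGESTION (exposed in Python 3.6+ on Linux).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The default comes from the net.ipv4.tcp_congestion_control sysctl --
# CUBIC on modern Linux, whereas netstack defaulted to Reno.
default = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("default:", default.rstrip(b"\x00").decode())

# Request CUBIC explicitly. Unprivileged processes may pick any algorithm
# the admin lists in net.ipv4.tcp_allowed_congestion_control.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
chosen = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("now:", chosen.rstrip(b"\x00").decode())

s.close()
```

An application running under gvisor talks to netstack instead of the host kernel, so it inherits whatever default netstack ships with -- which is how the Reno default went unnoticed until someone ran a long-RTT workload.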
> These tools bring marginal capability and performance gains
I get your point (ex: the app sandbox on Android hurts battery & perf, the website sandbox in Chrome wastes memory, etc). But while 0-days continue to sell for millions, opsec folks are right to be skeptical about a very critical component (the kernel) that runs on 50%+ of all servers & personal devices.
The netstack stuff here has nothing to do with the rest of gVisor.