... and trying them (the features) out in VMs and in other network simulators ( http://mininet.org/overview/ comes to mind, but simply using Vagrant / VirtualBox is okay ).
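A throwaway Mininet topology is only a few lines of Python. This is a minimal sketch, assuming Mininet is installed and you run it as root; the two-hosts-behind-one-switch layout is just an example:

    #!/usr/bin/env python3
    # Two hosts on one switch, a quick ping test, then an interactive
    # prompt so you can poke at networking features by hand.
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo
    from mininet.cli import CLI

    net = Mininet(topo=SingleSwitchTopo(k=2))  # h1 and h2 attached to s1
    net.start()
    net.pingAll()                              # basic reachability check
    CLI(net)                                   # drops you into the mininet> shell
    net.stop()

From the mininet> prompt you can run commands on each host (e.g. h1 ping h2) and watch how the kernel behaves, all inside a single VM.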
Also, networking is very standardized (duh :)), so if you follow network-related stuff ( https://www.reddit.com/r/networking/ ), you can sort of get a feel for what might come to the Linux kernel.
LWN is still the best in this regard, because it deals with new developments, so reading the archives will tell you what might be in the kernel.
Furthermore, since nowadays everything is kube-cloud-virtual-container-open-netes-stack, looking at network developments for these technologies will get you in the know. (Seemingly bonkers stuff, like BGP for containers, is now bog standard with things like Calico. Running a full switch in the kernel with an awesome distributed overlay network for "cloud", without OpenFlow? OVN from the OVS project has got your back. Doing all of this fast? DPDK is uber fast, but XDP is just so conveniently clever.)
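To get a taste of the XDP side, here's a rough sketch using BCC's Python bindings (assumptions: bcc installed, run as root, and "eth0" is just a placeholder interface name). It attaches a trivial program that counts packets and passes everything through:

    #!/usr/bin/env python3
    import time
    from bcc import BPF

    prog = r"""
    #include <uapi/linux/bpf.h>

    BPF_ARRAY(pkt_count, u64, 1);            // one shared counter slot

    int xdp_count(struct xdp_md *ctx) {
        int key = 0;
        u64 *value = pkt_count.lookup(&key);
        if (value)
            __sync_fetch_and_add(value, 1);  // atomic per-packet increment
        return XDP_PASS;                     // never drop, just observe
    }
    """

    device = "eth0"                          # placeholder: use your NIC
    b = BPF(text=prog)
    fn = b.load_func("xdp_count", BPF.XDP)
    b.attach_xdp(device, fn, 0)

    try:
        while True:
            time.sleep(1)
            total = sum(v.value for v in b["pkt_count"].values())
            print("packets seen:", total)
    except KeyboardInterrupt:
        b.remove_xdp(device, 0)

The same scaffolding is where you'd start returning XDP_DROP or rewriting headers instead, which is the "conveniently clever" part: it runs in the kernel at the driver level, no DPDK-style kernel bypass needed.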
Plus there's the datacenter networking stuff, like TRILL/SPB ( https://networkingnerd.net/2016/05/11/the-death-of-trill/ ), but those haven't been integrated into the kernel, because of the aforementioned Calico, OVN and other overlay stuff.
I'm sure you mean well, but this comment epitomizes the problem. Formerly, Unix was fairly modular, open to comprehension, and had excellent documentation guiding the user. Now, everyone who isn't a paying Red Hat support customer is not just on their own but thrown to the wolves.
But why would anyone be on their own in the age of Stack Overflow and a thousand other user support avenues?
And there are good books and endless awesome posts/blogs about Linux. The new low-level stuff is not covered as well, obviously. (And I think it's a shame devs don't communicate well, but they are not perfect, nor are they paid to write good docs, only code.)
Furthermore, was the modularity of Unix ever really exercised? Are there success stories of replacing dd, cp, ls, or parts of the lower-level stuff?