WSL1 accesses both Windows files and Linux files with direct IO through the NT kernel. This is "slow" because NTFS makes a different CAP-theorem-style trade-off than the POSIX file system semantics Linux software expects. (Since it's direct file access, working with one big file is sometimes faster: the trick is that this is what NTFS is better optimized for: fewer, bigger files and atomic transactions. POSIX semantics work better for lots of small files and don't guarantee atomicity in the same way.)
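To make the "lots of small files" workload concrete, here's a minimal C sketch (the file count and names are arbitrary, purely for illustration) that creates thousands of tiny files; time it with time(1) once on an NTFS-backed path and once on a native Linux filesystem, and the per-file open/create cost is what dominates:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Create N tiny files in the current directory. The point is that
       each file costs a full open/write/close round trip, which is
       exactly the pattern NTFS (and the NT open path) handles worse
       than a typical Linux filesystem does. */
    int main(void) {
        const int N = 10000;   /* arbitrary */
        char name[64];
        char byte = 'x';
        for (int i = 0; i < N; i++) {
            snprintf(name, sizeof(name), "tiny_%05d.dat", i);
            int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            write(fd, &byte, 1);
            close(fd);
        }
        return 0;
    }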
From Windows (such as in Explorer), the safe way to access WSL1 Linux files passes through a Plan9-derived file server as an intermediary. This is surprisingly quick, but not without overhead. (You can, if you need to, do some unsafe operations directly on the files in NTFS.)
WSL2 accesses Windows files through a Plan9-derived file server as an intermediary. This is surprisingly quick, but not without overhead. WSL2 keeps its Linux files on a Linux filesystem inside a virtual hard disk file (VHD), much like any other VM technology. Because it's a real Linux filesystem, it naturally exhibits POSIX semantics and is fast in the way Linux is expected to be in lots-of-little-files scenarios.
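You can actually see that split from inside a WSL2 distro with statfs(2): the Windows drive mounts report the 9P filesystem magic while the Linux side reports ext4. A quick sketch, assuming the default /mnt/c mount point:

    #include <stdio.h>
    #include <sys/vfs.h>

    /* Print the filesystem magic number for a path. Under WSL2,
       /mnt/c is typically a 9P mount (V9FS_MAGIC, 0x01021997) while
       the Linux home directory lives on ext4 (EXT4_SUPER_MAGIC,
       0xEF53) inside the VHD. */
    static void show(const char *path) {
        struct statfs s;
        if (statfs(path, &s) == 0)
            printf("%-10s f_type = 0x%lx\n", path, (unsigned long)s.f_type);
        else
            perror(path);
    }

    int main(void) {
        show("/mnt/c");   /* Windows drive via the Plan9 server */
        show("/home");    /* native Linux filesystem in the VHD */
        return 0;
    }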
From Windows (such as in Explorer), accessing WSL2 Linux files passes through a Plan9-derived file server as an intermediary. This is surprisingly quick, but not without overhead. Some operations Windows can do directly via its VHD support.
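From the Windows side that intermediary sits behind the \\wsl$ UNC path, so even a plain Win32 open is served by the Plan9 file server. A small sketch; the distro name "Ubuntu" and the file path are placeholders:

    #include <windows.h>
    #include <stdio.h>

    /* Opening a file under \\wsl$\<distro>\... from Win32 is routed
       through the Plan9 file server that WSL exposes, not read out
       of the VHD directly. */
    int main(void) {
        HANDLE h = CreateFileW(
            L"\\\\wsl$\\Ubuntu\\home\\user\\notes.txt",
            GENERIC_READ, FILE_SHARE_READ, NULL,
            OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        char buf[256];
        DWORD got = 0;
        if (ReadFile(h, buf, sizeof(buf) - 1, &got, NULL)) {
            buf[got] = '\0';
            printf("%s", buf);
        }
        CloseHandle(h);
        return 0;
    }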
The issue isn't NTFS as far as I understand (based on what the WSL team themselves have explained). The problem is that the NT kernel is simply slow at opening files. (Windows has transactional NTFS, but it's deprecated and hardly used.) The slowness can't be fixed because the open codepath goes through a lot of different filter drivers that can hook the open sequence, combined with the fact that there's no equivalent of a UNIX dentry cache: file path parsing is done by the FS driver, not the upper layers of the kernel. Even if the default filters were made as fast as possible - which is hard because they're things like virus scanners - third-party filter drivers are common and would tank performance again.
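For a sense of what "filter drivers can hook the open sequence" means, here is the rough shape of a file system minifilter registering a pre-create callback; every open on the volume passes through code like this before NTFS ever sees it. (A sketch of the WDK boilerplate, not a complete driver project.)

    #include <fltKernel.h>

    static PFLT_FILTER gFilterHandle;

    /* Pre-operation callback invoked on every IRP_MJ_CREATE, i.e.
       every file open/create. A real filter (virus scanner, etc.)
       would inspect the path here, possibly doing IO of its own,
       before letting the open proceed. */
    static FLT_PREOP_CALLBACK_STATUS
    PreCreate(PFLT_CALLBACK_DATA Data,
              PCFLT_RELATED_OBJECTS FltObjects,
              PVOID *CompletionContext)
    {
        UNREFERENCED_PARAMETER(Data);
        UNREFERENCED_PARAMETER(FltObjects);
        UNREFERENCED_PARAMETER(CompletionContext);
        return FLT_PREOP_SUCCESS_NO_CALLBACK;
    }

    static NTSTATUS
    Unload(FLT_FILTER_UNLOAD_FLAGS Flags)
    {
        UNREFERENCED_PARAMETER(Flags);
        FltUnregisterFilter(gFilterHandle);
        return STATUS_SUCCESS;
    }

    static const FLT_OPERATION_REGISTRATION Callbacks[] = {
        { IRP_MJ_CREATE, 0, PreCreate, NULL },
        { IRP_MJ_OPERATION_END }
    };

    static const FLT_REGISTRATION FilterRegistration = {
        sizeof(FLT_REGISTRATION),   /* Size */
        FLT_REGISTRATION_VERSION,   /* Version */
        0,                          /* Flags */
        NULL,                       /* ContextRegistration */
        Callbacks,                  /* OperationRegistration */
        Unload                      /* FilterUnloadCallback */
    };

    NTSTATUS
    DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        NTSTATUS status = FltRegisterFilter(DriverObject,
                                            &FilterRegistration,
                                            &gFilterHandle);
        if (NT_SUCCESS(status)) {
            status = FltStartFiltering(gFilterHandle);
            if (!NT_SUCCESS(status))
                FltUnregisterFilter(gFilterHandle);
        }
        return status;
    }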
It's a pity because Windows would benefit from faster file IO in general but it seems like they gave up on improving things as being too hard.
What I mean here by "transaction" semantics is not "transactional NTFS" (or the other distributed transaction engines that replaced it) but shorthand for all the various ways that file locking mechanics and file consistency guarantees differ between NT/Windows/NTFS and the POSIX "inode" model. All of that has a lot of complex moving parts (filter drivers are indeed one part of that complex dance, both affecting and affected by Windows' file "transaction" semantics).
"Transaction" is a useful analogical word here for all of that complexity because how a mini-version of the CAP theorem trade-off space can be seen to apply to file systems. Windows heavily, heavily favors Consistency above all else in file operations. Consistency checks of course slow down file opening (and other operations too). POSIX heavily favors Availability above all else and will among other things partition logical files across multiple physical inodes to do that. Neither approach to "file transactions" is "better" or "perfect", they are different trade-offs. They both have their strengths and weaknesses. Using tools designed for one is always going to have some problems operating in the other. POSIX tools are always going look at Windows as "slow file IO" because it doesn't hustle for availability. Windows tools are always going to look at POSIX as willfully dangerous when it comes to file consistency. At the end of the day these filesystem stacks are always going to be different tools for different jobs.
Yup, nothing too magic, just the usual symptoms of Windows and Linux having always had different ideas of how files are supposed to work, so give Linux its own (virtual) hard drive instead.
I don't know anything directly about Microsoft's 9P plans, but the blogs give the impression they are quite pleased with the 9P file server for what they've been using it for (especially this kind of cross-platform communication) and might use it for other things.
I really "like" or at best have mixed feeling of Linux/POSIX way of handling file in use, can be deleted/moved/edited, like EXCLUSIVE LOCK means nothing to the system.
Windows took a very different path from POSIX for a lot of reasons. It frustrates me sometimes when some Linux fans insist that POSIX file system semantics are "the best" and "the only right option" simply because they've been ingrained by more than a half-century of Unix development practice. The NT kernel team was certainly aware of POSIX file systems and their trade-offs when they built the Windows file IO libraries, and they made different choices for good reasons. POSIX isn't the last word on how file systems should work; some open source developers should learn that, I think.