Rclone uses WinFSP on Windows to mount cloud storage systems via the FUSE compatibility layer. This enables one implementation of a FUSE file system to work on lots of OSes.
Bill Zissimopoulos also wrote cgofuse, a FUSE library for Go, which rclone uses to access the FUSE compatibility layer. We use that on Windows and on macOS (via macFUSE or FUSE-T) too. It runs on Linux as well, but isn't the default there as it needs cgo (linking with C).
I've worked a lot with Bill and he knows more about Windows file systems and Windows internals than anyone I've ever met!
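For a sense of the API shape a cgofuse filesystem implements, here's a dependency-free sketch of the read path. The signature mimics cgofuse's convention (copy into the supplied buffer at an offset, return the byte count or a negative errno), but the types here are stand-ins for illustration, not the real cgofuse package:

```go
package main

import "fmt"

const ENOENT = 2 // errno for "no such file or directory"

// A tiny in-memory, read-only "filesystem": path -> contents.
type MemFS struct {
	files map[string][]byte
}

// Read mirrors the cgofuse-style convention: copy file bytes starting
// at offset ofst into buff, returning bytes read (or -errno on error).
func (fs *MemFS) Read(path string, buff []byte, ofst int64) int {
	data, ok := fs.files[path]
	if !ok {
		return -ENOENT
	}
	if ofst >= int64(len(data)) {
		return 0 // EOF
	}
	return copy(buff, data[ofst:])
}

func main() {
	fs := &MemFS{files: map[string][]byte{"/hello": []byte("hello, world\n")}}
	buf := make([]byte, 5)
	n := fs.Read("/hello", buf, 7)
	fmt.Printf("%d %q\n", n, buf[:n]) // prints: 5 "world"
}
```

The real library then hands an implementation like this to a mount host, which routes kernel requests (via WinFSP, macFUSE/FUSE-T, or libfuse) into these methods.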
I'm not familiar with Box, but this is a concern with most backup solutions. Many backup storage providers offer snapshot features that protect you from that, because the snapshots are immutable. Just make sure you check your backups regularly, as there have been cases of data-ransom attackers managing to detect and work around such protections¹. Snapshots like this also protect you from a number of accidents that might lose data, such as the contents of a file being corrupted and the broken version being backed up over the last good one.
If the storage provider doesn't offer snapshots, you could back up to a rolling target, say one subdirectory per day of the week/month/year, though here you lose the immutability of snapshots: someone taking control of the machine can kill all the backups, since it has full write access to them.
I do a soft-offline backup³ where the source machines don't have direct access to the backups, nor to the snapshots. The source machines back up to an external service (just a server connected to via ssh), the backups being maintained with rsync (other tools, like the already-mentioned rclone, would do just as well). Then another external server pulls the backups from there itself and creates snapshots. The source and backup machines both have credentials for the intermediate server but can't authenticate against each other, so malware or a human attacker gaining access to the source can't corrupt the backups from there, nor corrupt the source if they hack into the backups. It isn't perfect: there isn't the immutability promise given by some storage providers; verifying backups takes extra steps (but can still be automated); a really determined attacker could still do damage if they get both sets of keys, or access to my accounts with the server providers (though that would have to be a very targeted attack, and I'm not interesting enough to attract one!); and I have to be careful about those passwords/keys, as I need to be able to authenticate against the final backup service for maintenance and in case all else is compromised.
--
[1] breaking the backups subtly, so you don't notice, and delaying flipping the encrypt-and-ransom switch until after the standard snapshot retention window² has expired
[2] assuming this is short, like a month or less; it's less of a concern with longer retention, but you still stand to lose recent data
[3] I must write properly about that somewhere permanent-ish, so I can just post a link instead of re-describing it in every backup discussion I have!
I'm not the person you asked, but one way would be to mount the backup drive just for the backup, then unmount it once the backup is complete. This would limit the time the backup is exposed to ransomware.
ProjFS can solve some of the same use cases, but it is different by design. It is basically local file storage with a virtual backend that can provide files and data to the local file system. Once files are local, the provider is not asked when files are accessed or modified. The provider can monitor changes and synchronize them with the backend (with a chance of conflicts), and it also needs to manually update the local view when the backing store changes. It appears to be built for OneDrive. It is way more efficient for many use cases, but also requires more effort to synchronize and resolve conflicts.
I wonder how this compares to Dokan. Dokan doesn't integrate with the Windows Cache Manager, so file performance is constrained, e.g. when re-reading something that was recently read.
I'd like to know this too. Dokan wasn't the most stable when I used it years ago (maybe it's improved since then); would love to know how this is different.
I tried out Dokan a couple of years ago, and it seemed decently stable, just underperforming. There was a big fix around that time that tremendously increased performance and also fixed a race condition.
On a completely different note, there's also "Windows Projected File System", which sounds a lot like a user-mode file system, but it's not. It requires that files be backed by real files on your disk.
By my reading of the docs, Windows Projected File System (ProjFS) only requires reliability and performance characteristics comparable to a local filesystem.
I think it would work if your data is in memory or backed by a 10G+ LAN.
At least in the latest versions, coming from the store or just grabbed from https://github.com/microsoft/WSL/releases. As per [1], one invokes

  Get-CimInstance -query "SELECT * from Win32_DiskDrive"

in PowerShell to list devices, and then

  wsl --mount "\\.\PHYSICALDRIVE1" --bare

or whatever.
Why would somebody need to use this instead of like, smb or however normal Windows machines on the same network share drives/files/folders (even with other Mac machines on the network, or Linux, etc.)?
This isn't for network shares; it's for reading/writing programmatically generated files. The possibilities are endless.
For example, you could make a "filesystem" where everything inside looks like an uncompressed BMP file, but the actual files backing them are some other image format. Then ancient software would be able to use the newest image formats.
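A sketch of the conversion such a filesystem would do per read: wrap raw RGB pixels in a minimal 24-bit uncompressed BMP (14-byte file header plus 40-byte BITMAPINFOHEADER). Decoding the actual modern source format is left out; this just shows the BMP side the old software would see:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// EncodeBMP wraps top-down RGB pixels in a minimal 24-bit BMP.
func EncodeBMP(w, h int, rgb []byte) []byte {
	rowSize := (3*w + 3) &^ 3 // each pixel row is padded to 4 bytes
	imgSize := rowSize * h
	fileSize := 14 + 40 + imgSize
	buf := make([]byte, fileSize)
	le := binary.LittleEndian

	// BITMAPFILEHEADER
	buf[0], buf[1] = 'B', 'M'
	le.PutUint32(buf[2:], uint32(fileSize))
	le.PutUint32(buf[10:], 14+40) // offset of pixel data

	// BITMAPINFOHEADER
	le.PutUint32(buf[14:], 40)          // header size
	le.PutUint32(buf[18:], uint32(w))   // width
	le.PutUint32(buf[22:], uint32(h))   // height (positive = bottom-up)
	le.PutUint16(buf[26:], 1)           // planes
	le.PutUint16(buf[28:], 24)          // bits per pixel
	le.PutUint32(buf[34:], uint32(imgSize))

	// Pixel rows: BMP stores them bottom-up, in BGR order.
	for y := 0; y < h; y++ {
		dst := buf[14+40+(h-1-y)*rowSize:]
		for x := 0; x < w; x++ {
			r, g, b := rgb[(y*w+x)*3], rgb[(y*w+x)*3+1], rgb[(y*w+x)*3+2]
			dst[x*3], dst[x*3+1], dst[x*3+2] = b, g, r
		}
	}
	return buf
}

func main() {
	bmp := EncodeBMP(2, 1, []byte{255, 0, 0, 0, 255, 0}) // red, green
	fmt.Println(len(bmp), string(bmp[:2]))               // prints: 62 BM
}
```

The filesystem layer would run this in its read handler, so the "files" never exist on disk in BMP form at all.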
This is for exposing other data as a virtual file system. Let's say you want to expose the Git versions as folders and files. You need this to write the file system layer on top of Git.
That is a reasonable approach; one of the two macOS FUSE implementations (FUSE-T) uses NFS.
I've never written an SMB server, but I assume it is a lot more complicated than using WinFsp or FUSE.
WSL 2 uses the 9P network filesystem to share files between Linux & Windows. Supposedly this is a simple protocol to implement. I've searched, but can't find any documentation on using 9P without WSL.
Cryptomator suggests WinFSP for its Windows installation, macFUSE for Intel Macs, and FUSE-T (the NFS-based FUSE implementation) for Apple silicon Macs. I've used all of them, and I've also used Dokan with encfs.
Both OSes treat network filesystems differently from local disks. The FUSE-T implementation is very speedy, but it shows up in the Finder as a connection to "localhost", not as a properly named mount. In addition, macOS creates .DS_Store files on network drives by default (there is a setting you can change from the CLI to prevent this: defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true). On Windows, the filesystems can be mounted as local drives or network drives; when mounted as network drives, they can have issues with indexing and trust. All in all, drives you expect to function as local drives work best when they're actually local device filesystems and not network mounts.
Personally, I use it with rclone to browse cloud file shares (S3, OneDrive, ...) as folders/files in Windows Explorer. You could also use something like this to trick ancient software into working across a variety of network protocols/file-share types, even if it was designed for, or is limited to, local filesystems only.
As for the question of implementing a user-mode filesystem by implementing an SMB server, you can't really do it. Windows itself binds the SMB port (445) on every interface of your own host. You'd need a lot of trickery to stop Windows from grabbing that port, and to install a "Loopback Adapter".
Apache Cassandra + this would give high availability and reliability, with the ability to add/remove nodes at will.
Think transparent backups like RAID 1, or, depending on how you structure the data, you could even use joins to do RAID 0 and get a performance boost.
Obviously this won't be as fast as a real RAID cluster, but throwing random storage at it, like old USB sticks etc., might be a bit of fun and an easy way to expand it over time to bigger devices.
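The RAID 0 flavour of that idea can be sketched independently of Cassandra: stripe data round-robin into fixed-size chunks across n "nodes" (each chunk could be a row keyed by, say, (node, index) in whatever store you use, which is an assumption here, not Cassandra's actual layout), then reassemble on read. There's no redundancy in this sketch; mirroring each chunk to a second node would give the RAID 1 flavour instead:

```go
package main

import "fmt"

// Stripe splits data into fixed-size chunks dealt round-robin across
// n nodes, RAID 0 style. nodes[i] holds that node's chunks in order.
func Stripe(data []byte, n, chunk int) [][][]byte {
	nodes := make([][][]byte, n)
	for i := 0; i*chunk < len(data); i++ {
		end := (i + 1) * chunk
		if end > len(data) {
			end = len(data)
		}
		nodes[i%n] = append(nodes[i%n], data[i*chunk:end])
	}
	return nodes
}

// Unstripe reassembles the original data by visiting the nodes in the
// same round-robin order until the chunks run out.
func Unstripe(nodes [][][]byte) []byte {
	var out []byte
	for i := 0; ; i++ {
		node, idx := i%len(nodes), i/len(nodes)
		if idx >= len(nodes[node]) {
			return out
		}
		out = append(out, nodes[node][idx]...)
	}
}

func main() {
	nodes := Stripe([]byte("abcdefghij"), 3, 4)
	fmt.Printf("%q\n", Unstripe(nodes)) // prints: "abcdefghij"
}
```

Reads of the striped chunks can then be issued to all nodes in parallel, which is where the RAID 0-style speedup would come from.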
I use this along with https://github.com/winfsp/sshfs-win, which lets me mount SSH filesystems as Windows network shares, and it's awfully slow. So slow that operations which open several files (like using git) or even list files (like Ctrl+P in VS Code) make you want to punch your screen. Does anybody know of an alternative?