The problem with that is that, for good performance reasons, the checksum typically covers smaller blocks rather than the whole file. For example, ZFS splits files into blocks of at most "recordsize" (128K by default) and checksums each block.
It's not very likely that whatever remote target you are looking at uses the same block boundaries and checksum algorithm.
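A minimal Python sketch of the block-level scheme, assuming a 128K block size and SHA-256 (ZFS actually defaults to fletcher4; both the block size and the algorithm here are illustrative parameters):

```python
import hashlib

def block_checksums(path, block_size=128 * 1024, algo="sha256"):
    """Checksum a file in fixed-size blocks, recordsize-style."""
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            sums.append(hashlib.new(algo, block).hexdigest())
    return sums

# Two sides only produce comparable lists if BOTH parameters match:
# 128K/sha256 here vs. say 1M/fletcher4 on the remote end gives you
# digest lists that cannot be compared at all.
```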
On the upside, you can often ask such file systems to take two snapshots and report what changed between them, or to export some kind of differential that transforms the original snapshot into the newer snapshot. Both ZFS and Btrfs can do that.
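For ZFS those two operations are `zfs diff` and incremental `zfs send -i`; Btrfs has the analogous `btrfs send -p`. A hedged sketch, assuming a dataset named `tank/data` (the Python wrapper is purely illustrative; the zfs subcommands are the actual mechanism):

```python
import subprocess

DATASET = "tank/data"  # hypothetical pool/dataset name

def snapshot(name):
    """Take a named snapshot of the dataset."""
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{name}"], check=True)

def changed_between(old, new):
    """List what changed between two snapshots (zfs diff)."""
    out = subprocess.run(
        ["zfs", "diff", f"{DATASET}@{old}", f"{DATASET}@{new}"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.splitlines()

def export_incremental(old, new, dest="incremental.zstream"):
    """Export a differential stream that turns snapshot `old` into `new`."""
    with open(dest, "wb") as f:
        subprocess.run(
            ["zfs", "send", "-i", f"@{old}", f"{DATASET}@{new}"],
            check=True, stdout=f,
        )
```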
For ZFS, we have the pool-wide transaction group number, which increments with every batch of disk writes. Couldn't you use that as the "time" of the last modification to each file, so long as you have a starting transaction number to compare it against?
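That is roughly what ZFS's incremental send relies on internally: every block pointer records its birth txg, so anything born after the baseline txg must have changed. A purely hypothetical sketch of the idea; `get_birth_txg` is a stand-in for however you would actually read a file's newest birth txg (e.g. by poking at it with `zdb`):

```python
def modified_since(files, baseline_txg, get_birth_txg):
    """Treat the pool-wide txg as a logical clock: any file whose most
    recent birth txg is newer than the baseline must have changed."""
    return [f for f in files if get_birth_txg(f) > baseline_txg]

# Usage: record the pool's txg when you take the baseline snapshot,
# then pass that number in later. No mtimes or checksums consulted.
# changed = modified_since(all_files, baseline_txg=123456,
#                          get_birth_txg=my_txg_reader)
```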