At some point you may get bitten by just nc'ing files around, as TCP checksums alone are not that great. Over enough transfers, or with large enough files, errors will eventually slip through. This is where an application-layer error check may be desirable.
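A rough sketch of what I mean (the port, filename, and hostname are made up; note that some netcat variants want -l -p instead of just -l):

    # receiving end: listen, write to disk, then hash the result
    nc -l 9000 > backup.tar
    sha256sum backup.tar

    # sending end: stream the file, then hash the local copy
    nc receiver.example.com 9000 < backup.tar
    sha256sum backup.tar

    # compare the two digests (by eye or in a script);
    # a mismatch means the transfer was corrupted and should be redone

That's all the "application layer check" amounts to in practice: hash on both ends and compare.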
Is that really the case? (I mean, I know it can happen in theory, but does it matter in practice?) It's not like HTTP has its own checksum, and I've never had problems downloading over HTTP. (Yes, HTTPS does have better integrity checking, but I suspect it would just fail rather than auto-retry, and that has never happened to me.)
I've always verified the checksums of files transmitted via netcat, and occasionally I have had different checksums. It's rare, and only on gigabyte+ files, but it has happened.
You should just use rsync - it's been the best tool for transferring files between UNIXoids for the last 15-20 years for me, and it comes preinstalled on most distros (even macOS). It will take care of transferring only the required parts, and if you supply -c (slow!) it will even perform checksumming on both ends to verify files were correctly transferred and are bit-identical.
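Something like this (the paths and host are just placeholders; -a, -v, and -c are standard rsync flags):

    # -a preserves permissions/times, -v is verbose,
    # -c compares files by checksum on both ends instead of size/mtime
    rsync -avc /data/backup.tar user@remote.example.com:/data/

    # running it again with -c re-checksums everything and transfers
    # nothing if the remote copy is already bit-identical
    rsync -avc /data/backup.tar user@remote.example.com:/data/

And if a transfer gets interrupted, rerunning the same command picks up where it left off instead of starting over, which is most of the reason to prefer it over nc in the first place.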
TCP checksums aren't that great, but you've usually also got lower layer CRCs which are much better. Though you can still get tripped up by misbehaving network hardware that corrupts a packet and then masks it by recomputing the CRC as the packet goes out again.