Almost every application I've written atop a TCP socket batches up writes into a buffer and then flushes out the buffer. I'd be curious to see how often this doesn't happen.
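Roughly what I mean, as a minimal sketch in C (the struct, names, and the 4 KiB size are illustrative, not from any real codebase): accumulate bytes in a user-space buffer and only call write() when the buffer fills or you explicitly flush.

    #include <string.h>
    #include <unistd.h>

    /* Toy write buffer -- sizes and names are made up for illustration. */
    struct wbuf {
        int    fd;
        size_t len;
        char   buf[4096];
    };

    /* Push everything accumulated so far to the socket. */
    static int wbuf_flush(struct wbuf *b) {
        size_t off = 0;
        while (off < b->len) {
            ssize_t n = write(b->fd, b->buf + off, b->len - off);
            if (n < 0)
                return -1;          /* caller inspects errno (EINTR, EAGAIN, ...) */
            off += (size_t)n;
        }
        b->len = 0;
        return 0;
    }

    /* Append to the buffer; only hit the kernel when the buffer is full. */
    static int wbuf_append(struct wbuf *b, const void *data, size_t n) {
        if (b->len + n > sizeof(b->buf) && wbuf_flush(b) < 0)
            return -1;
        if (n > sizeof(b->buf)) {
            /* Payload bigger than the buffer: write it straight through. */
            const char *p = data;
            while (n > 0) {
                ssize_t w = write(b->fd, p, n);
                if (w < 0)
                    return -1;
                p += w;
                n -= (size_t)w;
            }
            return 0;
        }
        memcpy(b->buf + b->len, data, n);
        b->len += n;
        return 0;
    }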
Are you replying to the right person? I don't think I ever said how you should write a program. I only said that assuming users have a good internet connection is a naive idea nowadays. (GTA 5 is the worst example in my opinion: lose a few UDP packets and your whole game exits to the main menu. How the f**k did the devs assume UDP packets are never lost?)
What I mean to say is that whether or not your mobile device has bad internet shouldn't matter. Most applications buffer their reads and writes, which makes TCP_NODELAY a non-issue.
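For reference, turning Nagle off is just a socket option on the connected fd; if the application already coalesces its writes, flipping it mostly changes nothing (standard POSIX calls, sketch only):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm on a connected TCP socket.
     * With application-level buffering in place, this is largely a no-op in practice. */
    static int disable_nagle(int fd) {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }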
Most importantly, buffering avoids spending a whole bunch of CPU time context switching into the kernel. Even if you're relying on Nagle's algorithm, every call to write() is a syscall, which crosses into the kernel to perform the write. On a mobile device that would tank your battery, and it's the main reason applications buffer their writes.
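To make the syscall point concrete, here's a sketch (not from any of the apps mentioned above): where a naive sender issues one write() per field, writev() hands the kernel several pieces in a single crossing, and a user-space buffer gets you down to one write() per flush.

    #include <string.h>
    #include <sys/uio.h>

    /* Two logical pieces, one kernel crossing, via scatter-gather I/O.
     * Error and partial-write handling omitted to keep the sketch short. */
    static ssize_t send_record(int fd, const char *hdr, const char *body) {
        struct iovec iov[2] = {
            { .iov_base = (void *)hdr,  .iov_len = strlen(hdr)  },
            { .iov_base = (void *)body, .iov_len = strlen(body) },
        };
        return writev(fd, iov, 2);   /* versus two separate write() syscalls */
    }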
This is basically the first thing I check when diagnosing performance issues with network apps. Most probably buffer now, but surprisingly many don't. MySQL's client library didn't for years, for example (it's probably been fixed for a decade or more at this point).