Someone else on HN probably has firsthand experience with the systems that birthed FTP, and I will be speculating a bit. But here's an example, and it's an interesting one because it infects TCP to this day: presumably because systems at the time didn't have workable socket multiplexing, FTP (and TCP) supports an "URGent pointer" that lets one TCP endpoint flag another that important command-and-control data needs to be read during a file transfer. This despite the fact that FTP already (pointlessly) allocates an additional socket connection for each data transfer. The URG wart lives on in TCP, unused by any modern protocol.
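You can still poke at this from ordinary sockets APIs. Here's a minimal sketch in Python on a platform that exposes MSG_OOB; the host and port are placeholders, and the exact ABOR byte sequence is simplified from RFC 959:

    import socket

    # Placeholder peer; any TCP listener will do for demonstration.
    sock = socket.create_connection(("ftp.example.com", 21))

    # Ordinary in-band bytes flow as usual...
    sock.sendall(b"NOOP\r\n")

    # ...while MSG_OOB marks a byte "urgent": the segment carrying it
    # goes out with the URG flag set and the urgent pointer marking
    # where the urgent data ends (implementations famously disagree on
    # the exact offset). RFC 959's ABOR procedure leans on this so a
    # server notices the abort mid-transfer.
    sock.send(b"\xf2", socket.MSG_OOB)  # 0xF2 = Telnet DM ("Synch")

    sock.close()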
FTP LIST responses are "successfully parsed" by predicting that servers will return a circa-1991 ftpd "ls" listing. Which means that to be parsed by those clients, you need to be bug-compatible with those servers. That was the point DJB was making with his (parsable) publicfile output.
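To make that concrete, here's a hedged sketch of what "parsing LIST" means in practice: a regex tuned to the output of a circa-1991 BSD ftpd shelling out to ls. The pattern and sample line are illustrative only; real clients carry piles of variants for servers that format differently:

    import re

    # Matches a classic Unix-style listing line:
    # mode, link count, owner, group, size, date, name.
    LS_LINE = re.compile(
        r"^(?P<mode>[\-dlbcps][rwxsStT\-]{9})\s+\d+\s+\S+\s+\S+\s+"
        r"(?P<size>\d+)\s+(?P<date>\w{3}\s+\d{1,2}\s+[\d:]{4,5})\s+(?P<name>.+)$"
    )

    line = "-rw-r--r--  1 ftp  ftp      1024 Jan 15 03:14 README"
    m = LS_LINE.match(line)
    if m:
        print(m.group("name"), m.group("size"))  # README 1024

Any server whose listing deviates from this shape, even legitimately, simply fails to parse in such clients, which is exactly the bug-compatibility trap.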
For a good starting point on FTP's security design, read up on [FTP bounce attacks]. But the key thing to remember is: this design is pointless. There is no reason for a file transfer protocol to be allocating new connections like this.
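To see where the bounce comes from, consider the PORT command: the client encodes an arbitrary IPv4 address and port as six decimal octets, and the server connects there, no questions asked. A sketch (the victim address is made up):

    # RFC 959's PORT argument is six decimal octets: four for the IPv4
    # address, two for the port (p1*256 + p2). Nothing binds that
    # address to the client that actually sent the command.
    def port_command(ip: str, port: int) -> str:
        h1, h2, h3, h4 = ip.split(".")
        return f"PORT {h1},{h2},{h3},{h4},{port // 256},{port % 256}"

    # A client can just as easily name a third party's mail port, and a
    # naive server will open its data connection straight to the victim.
    print(port_command("10.0.0.5", 25))  # PORT 10,0,0,5,0,25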