Each iteration from the 8080 through x64 has a parity bit in the flags register for backwards compatibility with the previous generation. The 8008 was a microprocessor implementation of the Datapoint 2200 architecture.
Early protocols didn't have error correction in the lower layers; the parity flag filled the role that a CRC check fills today. Presumably, if the parity was incorrect, that meant the byte had been transmitted incorrectly.
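To make that concrete, here's a rough sketch (function names are mine, not from any spec) of even-parity framing on a serial link: the sender packs a parity bit so the byte has an even number of 1 bits, and the receiver's check mirrors what the 8080's P flag reports after loading the byte.

```python
def popcount(b: int) -> int:
    """Count the set bits in an 8-bit value."""
    return bin(b & 0xFF).count("1")

def add_even_parity(ch7: int) -> int:
    """Pack a 7-bit character plus an even-parity bit in the MSB,
    so the transmitted byte always has an even number of 1 bits."""
    p = popcount(ch7 & 0x7F) % 2  # 1 iff the 7 data bits have odd parity
    return (p << 7) | (ch7 & 0x7F)

def parity_ok(byte: int) -> bool:
    """Receiver-side check: like the 8080/x86 P flag, which is set
    when the low byte of a result has an even number of 1 bits."""
    return popcount(byte) % 2 == 0

frame = add_even_parity(ord("A"))   # 'A' = 0b1000001, two 1 bits, parity bit stays 0
assert parity_ok(frame)
corrupted = frame ^ 0b00000100      # a single bit flipped in transit
assert not parity_ok(corrupted)     # single-bit errors are always caught
```

On the 8080 the same check cost one instruction: load the byte, then branch on the P flag with `JPE`/`JPO`, which is exactly the kind of terminal workload the chip was commissioned for.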
Could you elaborate on this? How does the 8008 being designed to run a terminal relate to the parity bit in the flags register?