I'm nearly old enough to try to put my brain back to that time (I used Unix V7 on a PDP-11/45), and I'm not sure the replies here are quite on the mark. Yes, if someone had suggested a 64-bit time_t back then, the obvious counterargument would have been that the storage space for all time-related data would double, and that would be a bad thing. It's also true that there was no native language support for 64-bit ints, but I don't think that's a show-stopper, because plenty of kernel data isn't handled as compiler-native types.
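For a sense of what "not compiler-native" means in practice: V7's compiler already synthesized 32-bit longs out of 16-bit words on the PDP-11, and the same trick works one level up. A minimal sketch in modern C (the time64 type and function names here are made up for illustration, not anything from V7):

    /* Hypothetical 64-bit seconds counter built from two 32-bit
     * words, with the carry propagated by hand -- the same trick
     * V7's compiler used to build 32-bit longs on the 16-bit PDP-11. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t hi;   /* high 32 bits */
        uint32_t lo;   /* low 32 bits  */
    } time64;

    static time64 time64_add(time64 t, uint32_t secs)
    {
        uint32_t old = t.lo;
        t.lo += secs;
        if (t.lo < old)    /* unsigned wraparound means a carry out */
            t.hi++;
        return t;
    }

    int main(void)
    {
        time64 t = { 0, 0xFFFFFFFFu };  /* low word about to wrap */
        t = time64_add(t, 1);
        printf("hi=%lu lo=%lu\n", (unsigned long)t.hi, (unsigned long)t.lo);
        return 0;
    }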
I think the main reason nobody pushed back on a 32-bit time_t is that back then much less was done with date and time data. I don't think time rollover would have been perceived as a big problem, given that it would only happen after roughly 68 years (a signed 32-bit count of seconds from the 1970 epoch runs out in January 2038).
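For the curious, the rollover arithmetic itself is a two-liner with standard library calls:

    /* Where a signed 32-bit time_t actually runs out: 2^31 - 1
     * seconds after the 1970 epoch is about 68 years, landing in
     * January 2038. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t last = 2147483647;  /* 2^31 - 1 */
        printf("2^31-1 seconds is %.1f years\n",
               2147483647.0 / (365.25 * 86400.0));
        printf("last representable moment: %s", asctime(gmtime(&last)));
        return 0;
    }

This prints "Tue Jan 19 03:14:07 2038".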
In the decades since we have become used to, for example, computers being connected to each other and so in need of a consistent picture of time; to constant use of calendaring and scheduling software; to the retention of important data in computers over time periods of many decades. None of these things was done or thought about much back then.
This is a great point. Time synchronization between systems that don't share a clock line is a pretty recent thing. It didn't use to matter at all if your clock was wrong, and many people would never notice or bother to fix it. Now if your clock is wrong you can't even load anything in a web browser: your clock-sync daemon has to fix your clock before the certs will be accepted as valid. HTTPS is a bummer, maaan.
Not OP, but the obvious issue is that with very large offsets the certificates all look like they've either expired or are future dated; either way they're not accepted. I had a laptop with a dead clock battery for a while; I would sometimes fumble the time when booting it and would discover the mistake when I couldn't load my webmail or Google. (Also, the filesystem would fsck itself because it was marked as last fscked either in the future or the far past, but I didn't always notice that.)
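To make the failure mode concrete: validation compares the system clock against the certificate's validity window, so a clock that's far off in either direction fails. A sketch of the check (the struct and field names here are made up; OpenSSL does the equivalent comparison with X509_cmp_time):

    /* Why a dead clock battery breaks HTTPS: the certificate is only
     * accepted if 'now' falls inside [not_before, not_after]. A clock
     * stuck in the past makes every cert look not-yet-valid; one set
     * far in the future makes every cert look expired. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    struct cert_validity {
        time_t not_before;  /* start of validity period */
        time_t not_after;   /* end of validity period   */
    };

    static bool cert_time_ok(const struct cert_validity *v, time_t now)
    {
        return now >= v->not_before && now <= v->not_after;
    }

    int main(void)
    {
        /* Hypothetical one-year cert: 2024-01-01 .. 2025-01-01 UTC. */
        struct cert_validity v = { 1704067200, 1735689600 };
        printf("correct clock: %d\n", cert_time_ok(&v, time(NULL)));
        printf("dead battery:  %d\n", cert_time_ok(&v, (time_t)0));
        return 0;
    }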
One of the hard problems we already had to handle was that Unix also used long for file sizes. So if nobody would reach for 64-bit types early on to break the 2 GB file-size barrier (off_t was a signed 32-bit long, so 2^31 - 1 bytes), nobody was going to do it for time either.
Even the 32-bit Unix versions shipped with this limitation for a very long time.
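And the eventual way out was an opt-in, not a flag day: glibc lets 32-bit programs request a 64-bit off_t with a feature macro. A quick sketch of the limit and the escape hatch:

    /* A signed 32-bit off_t tops out at 2^31 - 1 bytes, just under
     * 2 GiB. On 32-bit glibc, defining _FILE_OFFSET_BITS=64 (before
     * any header is included) widens off_t to 64 bits; on 64-bit
     * platforms it is 8 bytes already. */
    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
        printf("32-bit limit  = %ld bytes\n", 2147483647L);
        return 0;
    }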