Doesn't sound like a lot, but where I am now we routinely work on very large infrastructure projects, and the plans, documents and so on mostly come as PDFs. We are talking thousands of documents, often with thousands of pages, per project, and even very big projects almost never break 20 GB.
If you like, you could say PDFs are information dense, but data sparse. After all, it's mostly white space ;)
They often aren't like you're describing, though. For example, PDFs with high-res images embedded that are drafts of future book or pamphlet prints. These can be hundreds of MB for a single PDF with fewer than 100 pages, and they're so common in marketing departments that it's hard to imagine you could fit anywhere close to all the PDFs on 8 TB.
True, we get plenty of high-res film scans in PDFs here, and some of them are ridiculously large, easily approaching gigabyte sizes, like you said. But that's more a problem of the user creating the PDF than anything inherent to PDFs. A raw 36-megapixel reproduction of an ISO 400 film frame (for comparison, our fancy 4K displays are only about 8.3 megapixels) takes only about 70 MB, which tells us that something went wrong in the conversion if a PDF containing 10 pages of them cracks 1 GB.
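Back-of-the-envelope, assuming roughly 2 bytes per pixel for an uncompressed 16-bit scan (exact figures obviously depend on bit depth and compression):

    # Rough sanity check on image sizes inside a PDF (assumed round numbers).
    pixels_4k = 3840 * 2160            # ~8.3 megapixels on a 4K display
    pixels_scan = 36_000_000           # 36 MP film reproduction
    bytes_per_pixel = 2                # ~16-bit raw, before any compression

    per_page_mb = pixels_scan * bytes_per_pixel / 1e6    # ~72 MB per page
    ten_pages_gb = 10 * per_page_mb / 1000                # ~0.72 GB for 10 pages

    print(per_page_mb, ten_pages_gb)   # cracking 1 GB means something bloated it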
So, yeah, there are these monsters that send even beefy computers thrashing. But in my experience that means something in the creation process went wrong, and it's appallingly common for a trade where PDFs are the go-to transfer format (I'm looking at you, AutoCAD users!). I'd guess the archive does the same thing we do: reprocess them into something sensible and store that. If you're assuming the archive doesn't, then I'd agree with you. One determined civil engineer with AutoCAD can fill 8 TB in a week ;)
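For what it's worth, the reprocessing can be as simple as re-rendering through Ghostscript; a minimal sketch, assuming gs is installed (file names and the quality preset are just examples):

    import subprocess

    def shrink_pdf(src, dst, preset="/ebook"):
        """Re-render a bloated PDF with Ghostscript, downsampling embedded images."""
        subprocess.run([
            "gs",
            "-sDEVICE=pdfwrite",
            f"-dPDFSETTINGS={preset}",   # /screen, /ebook, /printer, /prepress
            "-dNOPAUSE", "-dBATCH", "-dQUIET",
            f"-sOutputFile={dst}",
            src,
        ], check=True)

    shrink_pdf("monster_from_autocad.pdf", "monster_sane.pdf")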
I'm doing some work for a company that handles scanned documents (PDFs that are purely images), and they accumulate about 15 TB/year. Of course the actual amount of information is relatively small, just inflated by being scanned. Probably 80% of them were typed up, printed, and then scanned or faxed, and of course the first thing we do is OCR them to try to recover the original text and formatting...
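A minimal sketch of that OCR step, assuming pdf2image (which needs poppler) and pytesseract; the real pipeline does a lot more to recover formatting:

    import pytesseract
    from pdf2image import convert_from_path

    def ocr_scanned_pdf(path):
        """Rasterize each page of an image-only PDF and run Tesseract on it."""
        pages = convert_from_path(path, dpi=300)
        return "\n\f\n".join(pytesseract.image_to_string(p) for p in pages)

    text = ocr_scanned_pdf("scanned_invoice.pdf")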
I've been doing some work for an infrastructure company as well. They have a total of about 1 billion pages of PDF documents in their archives. If we assume even just 30 KB per page (which is quite low; all the PDFs I just randomly checked were higher, some by quite a bit), that's already 30 TB of PDFs, just for that one company with 1B in annual sales.
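Just to spell out that estimate (all assumed round numbers):

    pages = 1_000_000_000          # ~1 billion pages in the archive
    bytes_per_page = 30_000        # 30 KB/page, on the low side
    total_tb = pages * bytes_per_page / 1e12
    print(total_tb)                # 30.0 TB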