The main issue is that the standard was written back when file formats and tape formats were essentially the same thing. Data was read from tape, processed, and written to a second tape. The memory of the machines at the time was on the order of one block.
Decoupling the tape format from the file format effectively resolves the issue, that is: read the tape in its legacy ~6 KB blocks and write the file to a modern file system. Then set the tape block size to 10 MB and write the file back to tape as a tar. You get the bandwidth limit of the tape drive.
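A minimal sketch of that workflow in Python, assuming a Linux st-style tape drive at /dev/nst0 in variable-block mode, legacy ~6 KB records, and a hypothetical output name survey_0001.sgy; tape positioning and block-size setup (mt fsf, mt setblk, etc.) are left out:

```python
import os
import tarfile

# Hypothetical paths and sizes -- adjust for your environment.
TAPE_DEV = "/dev/nst0"            # non-rewinding tape device (Linux convention)
READ_BLOCK = 6 * 1024             # legacy ~6 KB records coming off the old tape
WRITE_BLOCK = 10 * 1024 * 1024    # 10 MB blocks for the modern write-back

def tape_to_file(tape_dev: str, out_path: str) -> None:
    """Copy one tape file to disk, reading it record by record."""
    with open(tape_dev, "rb", buffering=0) as tape, open(out_path, "wb") as out:
        while True:
            # In variable-block mode each read() returns one tape record;
            # an empty read means we hit a filemark (end of the tape file).
            record = tape.read(READ_BLOCK)
            if not record:
                break
            out.write(record)

def file_to_tape(in_path: str, tape_dev: str) -> None:
    """Write the disk file back to tape as a tar stream in large blocks."""
    with open(tape_dev, "wb", buffering=0) as tape:
        # tarfile's stream mode ("w|") emits the archive in fixed blocks of
        # `bufsize` bytes, which is what a tape drive wants -- assuming the
        # driver's maximum block size allows 10 MB writes.
        with tarfile.open(fileobj=tape, mode="w|", bufsize=WRITE_BLOCK) as archive:
            archive.add(in_path, arcname=os.path.basename(in_path))

if __name__ == "__main__":
    tape_to_file(TAPE_DEV, "survey_0001.sgy")   # hypothetical file name
    file_to_tape("survey_0001.sgy", TAPE_DEV)
```

The disk copy is the part that matters: once the data sits on a real file system you can read it back in whatever block size the drive streams fastest at, instead of the record size the original software chose decades ago.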
The issue is that almost all legacy industry software is still designed to read and write directly to the tape device, so your rewritten tape isn't readable by the software this data is meant for.
Couldn’t this be addressed with virtual tape devices? The application gets presented with something that is indistinguishable in behaviour from a physical tape drive, but is actually backed by a file on SSD or hard disk. Then you can copy that file to a real tape device separately.
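For illustration only, here is a toy user-space stand-in (all names hypothetical) showing the record/filemark semantics such an emulation would have to reproduce. Real virtual tape libraries (e.g. mhvtl on Linux) sit below the OS tape driver, so an application opening the device genuinely can't tell the difference; this sketch only mimics the behaviour at the Python level:

```python
import struct

class FileBackedTape:
    """Toy stand-in for a tape volume: variable-length records plus
    filemarks, stored sequentially in an ordinary disk file."""

    _FILEMARK = 0xFFFFFFFF  # sentinel length value marking a filemark

    def __init__(self, path: str, mode: str = "rb"):
        self._f = open(path, mode)

    def write_record(self, data: bytes) -> None:
        # Each record is stored as a 4-byte length prefix followed by its data.
        self._f.write(struct.pack("<I", len(data)))
        self._f.write(data)

    def write_filemark(self) -> None:
        self._f.write(struct.pack("<I", self._FILEMARK))

    def read_record(self):
        """Return the next record, or None at a filemark / end of medium."""
        header = self._f.read(4)
        if len(header) < 4:
            return None                      # end of medium
        (length,) = struct.unpack("<I", header)
        if length == self._FILEMARK:
            return None                      # filemark: end of this tape file
        return self._f.read(length)

    def close(self) -> None:
        self._f.close()

if __name__ == "__main__":
    vt = FileBackedTape("virtual_volume.bin", "wb")   # hypothetical backing file
    vt.write_record(b"header record")
    vt.write_record(b"trace data ...")
    vt.write_filemark()
    vt.close()

    vt = FileBackedTape("virtual_volume.bin", "rb")
    while (rec := vt.read_record()) is not None:
        print(len(rec), "byte record")
    vt.close()
```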
I can't say for oil and gas, but I encountered the same thing in semiconductor manufacturing. CMP machines still use tape formats even though the files are now written to different media. The challenge during the transition was proving reliability and covering the transition costs. Sometimes letting engineers deal with the hassle is cheaper than letting them fix it.