Isn't this a property of the implementation rather than the language, though? Couldn't you have a compiler option to keep the type tags attached to the bits, sacrificing some of the speed benefits of static typing but keeping the primary advantage of proofs/testing?
You would still need to verify that the data is typed correctly. You just can't trust bits in general.
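To make that tradeoff concrete, here's a minimal sketch (hypothetical names, Python standing in for whatever runtime) of what "keeping the type tags attached to the bits" looks like. The point is that every access now pays for a tag check that a statically-typed compiler would have erased:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    """A value that carries its type tag at runtime (hypothetical sketch)."""
    tag: type
    bits: object

    def as_(self, expected: type):
        # This check runs on every access; that's the speed you sacrifice.
        if self.tag is not expected:
            raise TypeError(f"expected {expected.__name__}, got {self.tag.__name__}")
        return self.bits

x = Tagged(int, 42)
x.as_(int)    # fine
# x.as_(str)  # would raise TypeError
```

You do keep the checkability benefit, but the tag only tells you what the writer claimed the bits were, which is the "can't trust bits" problem again.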
I've come to think of a strongly-typed program as being like a cell. Once you cross the cell boundary and it lets you pass, you can trust it and do all the wonderful type-based magic and optimizations that we know and love, but you have to get past that cell boundary first. And as annoying as that may be, poking a gaping wound in the cell wall and just jamming stuff in is likely to have bad results. Even between two cells of the same type, you really ought to do the checking, lest it turn out they're not quite as identical as you thought (versioning issues).
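In code, the "cell boundary" is just validation at the edge. A hedged sketch (field names and the version constant are made up for illustration): once the check passes, everything inside works with a plain typed value and never re-checks it:

```python
from dataclasses import dataclass

SCHEMA_VERSION = 2  # hypothetical: the version of the schema this "cell" speaks

@dataclass
class Point:
    x: float
    y: float

def admit(raw: dict) -> Point:
    """The cell boundary: verify untrusted input, then hand back a trusted value."""
    # Even data from a same-type cell gets checked, because its idea
    # of the schema may have drifted (the versioning issue).
    if raw.get("version") != SCHEMA_VERSION:
        raise ValueError(f"version mismatch: {raw.get('version')!r}")
    if not all(isinstance(raw.get(k), (int, float)) for k in ("x", "y")):
        raise TypeError("x and y must be numbers")
    return Point(float(raw["x"]), float(raw["y"]))

p = admit({"version": 2, "x": 1.0, "y": 2.5})
# Inside the cell, p is just a Point; no further checks needed.
```

Jamming a raw dict straight into code that expects a `Point` is the "gaping wound in the cell wall" case.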
(And of course, in both theory and practice you still can't fully "trust" it even after it gets past the cell wall, but at some point you hit the limits of what you can verify. Real-world cells have the same problem.)