Not necessarily. The issue at hand here is false positives: if you move a check into the type system, you want no false negatives, so you commonly accept false positives instead. The developer must prove, to the type system's satisfaction, that <incorrect behaviour> cannot occur; failing that proof doesn't mean the behaviour could occur, just that the type system wasn't able to see why it can't.
A well-known modern example is Rust's borrow checker: there are programs it rejects even though they are correct (there is no actual issue in them), because the borrow checker can't be sure.
At a fundamental level, "UB" simply means the compiler assumes it doesn't happen and carries on from there; asking for predictable behaviour from UB is nonsensical. And incidentally, while C is rife with UB, most languages have some to an extent, e.g. sorting assumes your comparator is correct/stable, and feeding a nonsensical comparator to a sorting routine is usually UB (neither the language nor the function defines what will happen); likewise hashmaps and hash/eq being coherent. Of course the consequences are also more significant in C, given the language is not memory-safe either (and even then… "safe" Rust is memory safe, but if you create UB in unsafe code and leak it into safe Rust, all bets are off) (and yes, Rust does have UB, though not that much compared to C: https://doc.rust-lang.org/beta/reference/behavior-considered...)