String slicing using byte indices has to exist in some form, since it is the only thing that is efficient (O(1)). But I suppose it could have used syntax other than `somestring[...]`.
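For reference, this is roughly what the current behavior looks like: a slice at valid char boundaries is O(1) (just pointer-and-length arithmetic), and an index that lands inside a code point panics at runtime:

```rust
fn main() {
    let s = "héllo"; // 'é' is 2 bytes in UTF-8, so the string is 6 bytes long
    let first = &s[..3]; // O(1), checked against char boundaries
    assert_eq!(first, "hé");
    // &s[..2] would panic: byte index 2 falls inside the 'é' code point
}
```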
That means one loses all the conveniences and guarantees of the string types and, in many cases, forces an immediate revalidation of the byte slice as UTF-8 to get back to &str, which is O(n). It is also rather clunky.
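For what it's worth, the round trip being described looks something like this, with the O(n) cost hiding in the `from_utf8` re-scan:

```rust
fn main() {
    let s = "héllo";
    let bytes: &[u8] = &s.as_bytes()[..3]; // slicing raw bytes: no boundary check, no panic
    // Getting back to &str requires re-scanning the slice for UTF-8 validity: O(n)
    let back: &str = std::str::from_utf8(bytes).expect("not valid UTF-8");
    assert_eq!(back, "hé");
}
```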
I suppose one could have it return a `StrWithInvalidSurrounds`, where only the first (at most) 3 and last (at most) 3 bytes might be invalid, which would then allow for O(1) revalidation to a &str, and even other operations like continuing to slice... But this is even more clunky for actual use!
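A rough sketch of what that hypothetical type could look like (names and layout are entirely made up here, just to illustrate why the revalidation becomes O(1)):

```rust
// The interior is known-valid UTF-8; only the edges may hold partial code points.
struct StrWithInvalidSurrounds<'a> {
    head: &'a [u8],  // at most 3 bytes: possible tail of a split code point
    valid: &'a str,  // the interior, already known to be valid UTF-8
    tail: &'a [u8],  // at most 3 bytes: possible start of a split code point
}

impl<'a> StrWithInvalidSurrounds<'a> {
    // O(1) "revalidation": just drop the possibly-invalid edges.
    fn as_str(&self) -> &'a str {
        self.valid
    }
}
```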
I think a moderately less clunky API might have been to not use integers for byte indexing, but instead some `ByteIndex` wrapper type that string operations return, meaning one can't just write `s[..5]` in an attempt to get the first 5 characters of the string.
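A minimal sketch of that idea (the `MyStr`/`ByteIndex` names are just for illustration, not a proposal of an actual API):

```rust
#[derive(Clone, Copy)]
struct ByteIndex(usize); // opaque: only string operations can produce one

struct MyStr<'a>(&'a str); // stand-in for a string type indexed by ByteIndex

impl<'a> MyStr<'a> {
    // Searching returns a ByteIndex instead of a bare usize...
    fn find(&self, needle: char) -> Option<ByteIndex> {
        self.0.find(needle).map(ByteIndex)
    }
    // ...and slicing only accepts ByteIndex, so `s.slice_to(5)` doesn't compile.
    fn slice_to(&self, end: ByteIndex) -> &'a str {
        &self.0[..end.0]
    }
}

fn main() {
    let s = MyStr("héllo world");
    if let Some(space) = s.find(' ') {
        assert_eq!(s.slice_to(space), "héllo");
    }
}
```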