Then don't use them when you have arbitrary, unbounded inputs, or skip them entirely if you care about security.
In many applications you don't care about those things (say, a game engine), and they come in handy there.
I mean, it's not some kind of critical feature, but some code uses it and it's nice to be able to compile that code. VLAs are also quite convenient and, in rare cases, the best solution performance-wise.
In reality, VLAs are just as dangerous as allocating from the heap. I agree that not everything should be allocated on the stack, but you can exhaust the heap just as you can overflow the stack.
Unless people are completely irresponsible, they query the stack limit, subtract the current stack usage to see how much room is left, and fall back to the heap for large objects. Freeing is even simpler than with the heap (it happens automatically when the VLA goes out of scope), and faster as well. I think the only real concern is whether people use them responsibly.
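Something like this sketch, assuming POSIX getrlimit(); the function names and the safety margin are made up, and for brevity it checks the request against the whole stack limit rather than also subtracting the current stack depth as described above:

    /* Hypothetical "responsible VLA" pattern: check available stack,
     * fall back to the heap for large objects. */
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    #define STACK_SAFETY_MARGIN (64 * 1024) /* arbitrary headroom */

    double sum(const double *v, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }

    double sum_scratch(const double *src, size_t n) {
        size_t bytes = n * sizeof *src;
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) == 0 &&
            bytes + STACK_SAFETY_MARGIN < rl.rlim_cur) {
            double tmp[n];           /* VLA: freed automatically on return */
            memcpy(tmp, src, bytes);
            return sum(tmp, n);
        }

        double *tmp = malloc(bytes); /* heap fallback for large objects */
        if (!tmp) return 0.0;
        memcpy(tmp, src, bytes);
        double s = sum(tmp, n);
        free(tmp);
        return s;
    }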
In my current project I get a significant (about 3%) performance regression when I change my VLAs to fixed allocations of the biggest possible size.
My guess is that it's because of cache effects (a lot of useless zeros occupy the cache when you allocate a lot of unused space on the stack), but I haven't investigated it deeply, just measured it.
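For illustration, something like this (MAX_N and the functions are made up, not the actual project): a zero-initialized worst-case buffer forces every call to write the full array, while the VLA only touches what's needed.

    #include <stddef.h>
    #include <string.h>

    #define MAX_N 4096  /* hypothetical worst case: 32 KiB of doubles */

    /* Fixed worst-case buffer: the = {0} initializer writes all 32 KiB
     * of stack on every call, dragging useless zeroed lines through the
     * cache even when n is tiny. */
    double sum_fixed(const double *src, size_t n) {
        double tmp[MAX_N] = {0};
        memcpy(tmp, src, n * sizeof *src);
        double s = 0.0;
        for (size_t i = 0; i < n; i++) s += tmp[i];
        return s;
    }

    /* VLA: only the n elements actually needed are allocated and touched. */
    double sum_vla(const double *src, size_t n) {
        double tmp[n];
        memcpy(tmp, src, n * sizeof *src);
        double s = 0.0;
        for (size_t i = 0; i < n; i++) s += tmp[i];
        return s;
    }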
A. They make stack allocation much more dynamic and its bounds harder to reason about, and arbitrary inputs may blow the stack later.
B. They make sizeof dynamic! sizeof can no longer always be constant folded, and evaluating it can even have side effects!
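For example, assuming C99 VLAs (bump() is a made-up helper to show the side effect):

    #include <stdio.h>

    static int bump(void) {
        static int calls;
        return 4 + ++calls;      /* side effect: counts how often it runs */
    }

    int main(void) {
        int n = 4;
        int a[n];                /* VLA */

        /* Evaluated at run time; cannot be constant folded. */
        printf("%zu\n", sizeof a);             /* n * sizeof(int) */

        /* The size expression is actually evaluated, so bump() runs
         * and the two results differ. */
        printf("%zu\n", sizeof(int[bump()]));  /* 5 * sizeof(int) */
        printf("%zu\n", sizeof(int[bump()]));  /* 6 * sizeof(int) */
        return 0;
    }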