
They made it optional because the committee has to cater to every interest group. They are as safe as the rest of C/C++: 100% rock-solid if you know what you are doing and catastrophically unsafe if you don't.

The point is that a lot of code uses them, and that makes porting it a pain. I mean, they spend so many resources implementing all the new toys in C++, which just introduce new ways of doing the same thing, but actually implementing something that is used in real-world code, and which they already almost have (_alloca), is somehow a problem because it's "unsafe". No one sane can actually believe that explanation.




VLAs are just a bad idea.

A. They make stack allocation much more dynamic, harder to reason about its bounds, and arbitrary inputs may blow the stack later.

B. They make sizeof a dynamic thing! sizeof can no longer always be constant folded and can even cause side effects!
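
A minimal sketch of point B (illustrative, not from the comment): the ordinary array type below is a compile-time constant, while the VLA forms force sizeof to be evaluated at run time, including the function call inside the type name.

    /* Illustrative only: sizeof on a VLA type is evaluated at run time. */
    #include <stdio.h>

    static int calls = 0;

    static int get_n(void) {
        calls++;                                 /* side effect observable through sizeof */
        return 8;
    }

    int main(void) {
        int n = 4;
        int vla[n];

        size_t fixed   = sizeof(int[10]);        /* ordinary array: compile-time constant  */
        size_t dynamic = sizeof(int[get_n()]);   /* VLA type: get_n() is actually called   */

        printf("fixed=%zu dynamic=%zu calls=%d\n", fixed, dynamic, calls);  /* calls == 1 */
        printf("sizeof vla = %zu (computed at run time)\n", sizeof vla);
        return 0;
    }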


Then don't use them when you have arbitrary unbounded inputs, or maybe don't use them at all if you care about security.

In many applications you don't care about those things (say, when writing a game engine), and they come in handy there. I mean, it's not some kind of critical feature, but some code uses it, and it's nice to be able to compile that code. They are also quite convenient and in rare cases the best solution performance-wise.
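
A hedged sketch of that kind of use, with hypothetical names (Matrix4, blend_poses, bone_count): the count is engine-controlled and small, so the VLA is just an exactly-sized scratch buffer with no malloc/free.

    #include <string.h>

    typedef struct { float m[16]; } Matrix4;

    /* Blend two skeletal poses into a temporary buffer before writing out. */
    void blend_poses(const Matrix4 *pose_a, const Matrix4 *pose_b,
                     int bone_count, Matrix4 *out)
    {
        Matrix4 scratch[bone_count];             /* VLA sized exactly to the data */

        for (int i = 0; i < bone_count; i++)
            for (int j = 0; j < 16; j++)
                scratch[i].m[j] = 0.5f * (pose_a[i].m[j] + pose_b[i].m[j]);

        memcpy(out, scratch, sizeof scratch);    /* sizeof computed at run time */
    }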


VLAs in reality are just as dangerous as allocating from the heap. I would agree not everything should be allocated on the stack, but you can overflow the heap just as you can overflow the stack.

Unless people are completely irresponsible, they query the stack limit, check how much stack is left, and fall back to the heap for large objects. Freeing the memory is even simpler than with the heap, and faster as well. I think the only real concern is whether people are responsible about using them.
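
A minimal sketch of that approach, assuming POSIX getrlimit; the budget and the check (a fixed cap plus a fraction of the total stack limit, rather than the exact remaining headroom) are illustrative assumptions.

    #include <stdlib.h>
    #include <sys/resource.h>

    #define STACK_BUDGET (64 * 1024)             /* assumed soft cap for one buffer */

    double sum_of_squares(const double *in, size_t n)
    {
        size_t bytes = n * sizeof(double);
        struct rlimit rl;
        int fits = getrlimit(RLIMIT_STACK, &rl) == 0
                   && bytes < STACK_BUDGET
                   && bytes < rl.rlim_cur / 4;

        if (fits) {
            double scratch[n];                   /* VLA: released automatically on return */
            double s = 0;
            for (size_t i = 0; i < n; i++) { scratch[i] = in[i] * in[i]; s += scratch[i]; }
            return s;
        }

        double *scratch = malloc(bytes);         /* heap fallback for large inputs */
        if (!scratch) return 0;
        double s = 0;
        for (size_t i = 0; i < n; i++) { scratch[i] = in[i] * in[i]; s += scratch[i]; }
        free(scratch);
        return s;
    }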


What's the benefit over just allocating the worst case size? If worst case size is too large, your program is broken anyway on some of your inputs.


In my current project I get a significant (about 3%) performance difference when I change my VLAs to allocating the biggest possible size. My guess is that it's because of cache effects (a lot of useless zeros occupying the cache when you allocate a lot of unneeded space on the stack), but I haven't investigated it deeply, just measured it.
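
Roughly, the two variants look like this (an illustrative sketch rather than the actual project code; MAX_ITEMS is a made-up worst case). The fixed version always reserves the full worst-case frame, while the VLA only takes what it needs.

    #define MAX_ITEMS 4096

    float average_vla(const float *in, int n)    /* n is usually much smaller */
    {
        float tmp[n];                            /* frame grows by n * 4 bytes */
        float s = 0;
        for (int i = 0; i < n; i++) { tmp[i] = in[i]; s += tmp[i]; }
        return s / n;
    }

    float average_fixed(const float *in, int n)
    {
        float tmp[MAX_ITEMS];                    /* frame always grows by 16 KiB */
        float s = 0;
        for (int i = 0; i < n; i++) { tmp[i] = in[i]; s += tmp[i]; }
        return s / n;
    }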



