What you can do is run over a large corpus of programs (crates.io? GitHub?) to identify the places where the different alternatives would impact code, collect metrics on them, and pull some sample programs to investigate how the different alternatives would affect them. Is anything like that being done?
The HTML5 spec was developed in this way, and some of the code-health folks at Google were just starting to operate this way when I left, but I haven't heard of many other language-design efforts working like this. The technology & data exist now, with large public corpora of open-source code, parsing libraries that can identify patterns in it, and big-data tools to run such analyses over a large corpus. It's a bit of a paradigm shift from how languages are typically developed, though.
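To make that concrete, here's a minimal sketch of the kind of corpus scan I mean, assuming a local checkout of crate sources under `./corpus` and the `syn` (with its `visit` feature) and `walkdir` crates as dependencies; the pattern counted (`as` casts) is just an illustrative stand-in for whatever construct a proposed change would affect:

```rust
use std::fs;
use syn::visit::Visit;
use walkdir::WalkDir;

// Visitor that counts `expr as Type` casts, standing in for whatever
// construct a proposed language change would touch.
struct CastCounter {
    casts: usize,
}

impl<'ast> Visit<'ast> for CastCounter {
    fn visit_expr_cast(&mut self, node: &'ast syn::ExprCast) {
        self.casts += 1;
        // Keep walking nested expressions.
        syn::visit::visit_expr_cast(self, node);
    }
}

fn main() {
    let mut total_casts = 0;
    let mut files = 0;

    // Assumes ./corpus holds unpacked crate sources (hypothetical layout).
    for entry in WalkDir::new("corpus")
        .into_iter()
        .filter_map(Result::ok)
        .filter(|e| e.path().extension().map_or(false, |ext| ext == "rs"))
    {
        // Skip files that fail to read or parse rather than aborting the scan.
        let Ok(src) = fs::read_to_string(entry.path()) else { continue };
        let Ok(file) = syn::parse_file(&src) else { continue };

        let mut counter = CastCounter { casts: 0 };
        counter.visit_file(&file);
        total_casts += counter.casts;
        files += 1;
    }

    println!("{files} files parsed, {total_casts} `as` casts found");
}
```

The same skeleton scales up by sharding the corpus across machines and aggregating the per-file counts, which is where the big-data tooling comes in.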
We use crater to check potentially breaking changes against our entire package ecosystem, but it's not as easy to test which of two syntaxes for a new feature will be easier to use by looking at existing code.