
Mainly for extension authors and people writing tools that interact with source code. Having a "compiler as a service" lets you generate data structures and query the structure of the source code you are working with without having to write your own parser.

For example, you could use the "compiler as a service" to create a semantic model from source code and then query it for "all classes that contain a public method that accepts an int and returns a bool". This might sound trivial, but doing it with a tool like Roslyn is much easier than writing your own parser for the language and then doing the same thing.
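A rough sketch of what that query could look like with the Roslyn APIs (the Microsoft.CodeAnalysis.CSharp package); the source text and class names are invented for illustration:

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class Program
    {
        static void Main()
        {
            // In a real tool this source would come from your project, not a string.
            var source = @"
                class Widget { public bool IsValid(int id) { return id > 0; } }
                class Other  { private bool Hidden(int x) { return false; } }";

            var tree = CSharpSyntaxTree.ParseText(source);

            // The core-library reference lets the semantic model resolve int/bool.
            var compilation = CSharpCompilation.Create("Demo",
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
            var model = compilation.GetSemanticModel(tree);

            // All classes containing a public method that accepts an int and returns a bool.
            var matches = tree.GetRoot()
                .DescendantNodes()
                .OfType<ClassDeclarationSyntax>()
                .Where(cls => cls.Members
                    .OfType<MethodDeclarationSyntax>()
                    .Select(m => model.GetDeclaredSymbol(m))
                    .Any(s => s != null
                        && s.DeclaredAccessibility == Accessibility.Public
                        && s.ReturnType.SpecialType == SpecialType.System_Boolean
                        && s.Parameters.Length == 1
                        && s.Parameters[0].Type.SpecialType == SpecialType.System_Int32));

            foreach (var cls in matches)
                Console.WriteLine(cls.Identifier.Text);   // prints "Widget"
        }
    }

The compiler resolves accessibility, parameter types and return types into symbols for you, so the query itself is just a LINQ filter.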




To build on this:

1. It's nice to have the same view of the code that your compiler has. This matters less for C# than it would for C++, but it's nice to know you handle the edge cases in a consistent fashion.

2. Invalid code. Since Roslyn is designed to be used in VS while you're in the middle of typing, it has advanced functionality for handling incomplete or erroneous code. If building your own parser for valid code is hard, building one that also gracefully handles half-finished lines is an order of magnitude harder. And having it handle half-lines in a fashion consistent with how VS handles them is another order of magnitude (see the sketch below).
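To illustrate point 2 with a small made-up snippet: parsing a half-finished line still yields a complete syntax tree, with the gaps showing up as missing tokens and diagnostics rather than an outright parse failure.

    using System;
    using Microsoft.CodeAnalysis.CSharp;

    class Program
    {
        static void Main()
        {
            // A half-finished line, as you'd have mid-keystroke in the editor.
            var tree = CSharpSyntaxTree.ParseText("class C { void M() { var x = ");

            // You still get a full tree you can walk and query.
            Console.WriteLine(tree.GetRoot().ContainsDiagnostics);   // True
            foreach (var d in tree.GetDiagnostics())
                Console.WriteLine(d);                                // the parse errors
        }
    }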


As far as visibility to the end user: for those of us using something like Resharper, there won't be any discrepancy between the errors reported by Resharper and the errors reported by the compiler.

This also means that the general bar for building code analysis tools has been lowered quite a bit.
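For a sense of how low that bar is now, here's a sketch of a small project-specific check (an empty-catch-block finder, picked arbitrarily) built on CSharpSyntaxWalker; it's illustrative only, not a real analyzer anyone ships:

    using System;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    // Walks a syntax tree and reports empty catch blocks -- the kind of small,
    // team-specific rule that used to require a hand-rolled parser.
    class EmptyCatchWalker : CSharpSyntaxWalker
    {
        public override void VisitCatchClause(CatchClauseSyntax node)
        {
            if (node.Block.Statements.Count == 0)
                Console.WriteLine("Empty catch at " + node.GetLocation().GetLineSpan());
            base.VisitCatchClause(node);
        }
    }

    class Program
    {
        static void Main()
        {
            var tree = CSharpSyntaxTree.ParseText(
                "class C { void M() { try { } catch (System.Exception) { } } }");
            new EmptyCatchWalker().Visit(tree.GetRoot());
        }
    }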


The two parent answers are completely correct.


Does this mean the C# compiler has now been refactored so that type inference on, say, fields can be done?


Probably not: http://blogs.msdn.com/b/ericlippert/archive/2009/01/26/why-n...

EDIT: On second thought, Mads said primary constructors are being considered, and I think that would make implicitly-typed fields possible.


That post says "Doing so would actually require a deep re-architecture of the compiler." I thought Roslyn would satisfy that part.


I've talked to members of the Roslyn team and asked about some features I would have found interesting to see added to C# (immutable, non-nullable data types, type inference on return types, etc.).

They seem very much on top of their game, and their response has been that many of these features would probably introduce problems with the existing base class libraries. If they were to introduce those features, they would have had to do it at release 1, not 10 years later. They agreed it would have made the platform better, but now it's too late.

As for the "deep architectural" changes required, Roslyn is indeed that change. But the team was very clear about wanting to deliver in stages: 1) ship the new compiler, and 2) once it has proven to work well, start using the new architecture to deliver new features.


This is generally accurate, but the last paragraph is a little suspect -- we're iterating on C# 6 right now and using the Roslyn compiler to build the features necessary for it.

I don't think compiler architecture is seriously stopping features in their tracks -- it's only a cost/benefit analysis. 'MichaelGG's suggestion of inferred members, for example, has implications for the contracts of types and for usability and user interaction with the language, so it's not just a question of whether we can do it, but whether we want to do it.



