Rejuvenating the Microsoft C/C++ Compiler (msdn.com)
203 points by ingve on Sept 25, 2015 | 134 comments



This is excellent news. MSVC is a big pain in the ass for me, as I tend to write C code (not ++ usually) and Microsoft's C compiler has the worst support for language-level features of the widely used compilers. E.g. having to move variables to the beginning of a block (C89 style) is just miserable. A popular option is to compile C code as C++ inside an "extern C" block to stop name mangling, but this isn't too clean either.
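A small illustration of the difference (first_index and compute are just placeholder functions):

    int first_index(void);          /* placeholder declarations for illustration */
    double compute(int i);

    void c89_style(void)
    {
        int i;                      /* C89: all declarations before any statement */
        double t;
        i = first_index();
        t = compute(i);
        (void)t;
    }

    void c99_style(void)
    {
        int i = first_index();      /* C99/C++: declare where the value is produced */
        double t = compute(i);
        (void)t;
    }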

I'd also like to have compatibility for some language extension features, such as built-in functions and attributes. In particular, CPU-agnostic intrinsics for common instructions like atomics, popcount, or count leading zeros, as well as SIMD arithmetic, would be great.

My favorite C language extension is SIMD vector arithmetic with infix operators. You can get really pretty and portable (!) vector math code written in Clang and GCC using vector extensions, but again, it's not available in MSVC.
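For reference, a minimal sketch of what that looks like with the GCC/Clang vector_size extension (the typedef name here is just illustrative; MSVC rejects this syntax):

    /* Portable across GCC and Clang, and across x86/ARM/PPC targets. */
    typedef float v4f __attribute__((vector_size(16)));   /* four packed floats */

    v4f muladd(v4f a, v4f b, v4f c)
    {
        return a * b + c;    /* infix operators map onto SIMD instructions */
    }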

For most of my current projects, I have no intent of supporting the (current) Microsoft C compiler. It's not worth the effort.


VC++ has historically been the odd man out, but it's improved greatly in the last couple of versions. Certainly, VS2013 has reasonable support for a lot of C99 code. My C99 code builds on clang (OS X), gcc (Linux), and VC++ (Windows). Maybe I need to get more adventurous, but what I see as the key improvements (designated initializers, compound literals, declarations (nearly) anywhere) are now in place.

Main omissions I've noticed: printf isn't quite the same (no 'z' modifier, the 'n' forms are noticeably inferior), and no VLAs. Also, the standard library isn't POSIX (which is not an omission, but a lot of C code assumes POSIX, so you'll probably end up having to deal with this).

I'm surprised you mention intrinsics, since VC++ has a wide selection - https://msdn.microsoft.com/en-us/library/hh977022.aspx - and I'd expect, though without proof, the differences between many of those available on both VC++ and gcc/clang to be something you could work around with #define or, failing that, a small inline function (and cross your fingers).
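For instance, a tiny sketch of that wrapper idea, assuming the usual __popcnt (MSVC, <intrin.h>) and __builtin_popcount (GCC/Clang) intrinsics:

    /* popcount32 is a made-up wrapper name; pick whichever spelling you like. */
    #if defined(_MSC_VER)
    #  include <intrin.h>
    #  define popcount32(x) ((int)__popcnt(x))
    #else
    #  define popcount32(x) __builtin_popcount(x)
    #endif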

Aside from VLAs, which is annoying, you can work around most of these with wrapper functions and #defines. So I've been pleased enough with the latest VC++. As a long time programmer for multiple platforms, I can put up with a certain amount of deprivation, and having to put in a bit of effort doesn't bother me (all that much). But I've noticed a tendency for people to sometimes assume that "multiple platforms" means "ten different types of gcc+POSIX+fork+pthreads". I'm sure that even shiny new C99-friendly VC++ won't make them happy.


"multiple platforms" means "ten different types of gcc+POSIX+fork+pthreads"

This. Unfortunate, but some even seem to think C99 support automatically means POSIX, and then start complaining that a platform isn't compatible if it doesn't handle what they thought was portable code.


Why is it bad to expect a portable operating system interface? Almost all languages have been doing this over the last decade (C, C++, Java, Go, Python, Ruby, PHP, etc...).


I once had the pleasure of writing a game engine that ran on Windows, Xbox, Xbox360, PS2, PS3, GameCube and the Wii. The 3 Microsoft systems shared some very basic Win32-style system API, but the other 4 were completely unique. When I see people talk about "cross platform" code only to find they mean "Linux and OSX!" I chuckle a little.


I can only guess what the portable code for PS3 looked like :).



That was some funny stuff, and enlightening. I remember having the PS2 experience myself when studying its architecture: so many individual processors stitched together, it was like re-inventing the 60s-80s computing experience, lol. That said, I thought it was cool that they threw in some scratchpad. It's better than cache in many ways and so underutilized in special-purpose, performance designs. It helps with real-time, too, as it's predictable. The PS2's mainly just helped with I/O hacks, IIRC.

Also interesting to contrast your description of the PS3 with Naughty Dog's comments. Seems like they got a lot more out of them but had to work hard to do it. Curiously, there were products that auto-synthesized code for SPUs, like RapidMind. I had planned on trying them had I used PS3s for acceleration. Did you or anyone in that industry try those tools to see if they delivered performance similar to hand-coded algorithms in gaming workloads?

http://www.theregister.co.uk/Print/2007/05/08/rapidmind_two/


The big difference between the project I was on vs Naughty Dog was that ND was 100% committed to the PS3 exclusively and eternally. If you know you can't escape then you are motivated to put in the effort to make it as good a fate as possible. The PS3 rewarded effort, but only on a very steep curve...

Meanwhile, my team was multiplatform. That meant most people could hide in the easy spaces on the PC or maybe the 360. We had a small group of Russians and Europeans who enjoyed the challenge of hand-optimizing the SPU. They would not have tolerated synthesized code ;)


That makes sense. The Russian angle, too: prior experience showed them to be good at programming and optimization. One told me it's because access to good hardware was limited for many, so people got the most out of what they had. He said some even coded and debugged algorithms on paper before going to Internet cafes.

I'd probably try to get more of them into OSS projects, but they can be a rowdy bunch. Gotta have a manager or leader who can keep the egos and nationalism in check. ;)


You mean half of the code compiled with __cdecl, the other half with __stdcall, some calls as pascal... oh, that's cute.

and 32/64bit..


Because C and C++ never really had one.

UNIX was seen as the C runtime. When the standard came, ANSI C adopted what was considered the minimum portable bits one could use on non-UNIX systems.

Then came POSIX, which isn't as portable as many think. It also enjoys some of the UB and implementation-specific behavior of the C world.

C++, while trying to cater to C developers, adopted the same attitude.

Thankfully the C++ committee is changing that with C++17.


It's a bit disappointing C seems to be getting left behind in some areas. It's simpler to compile, which in theory should mean that it's more portable.


No it is not.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

Many equate "C == what my compiler does", but just like any standardized technology with holes left open for implementers to decide upon, portability isn't what people think.


I read Chris Lattner's posts. There are definitely things the standards don't specify that people take for granted (two's complement, for example). C is still an order of magnitude easier to parse than C++ though (especially modern C++). In theory, implementing and maintaining a compiler for it for a platform should be simpler.

Implementing an effective compiler is another concern entirely, but I would still argue that there tend to be more incompatibilities between C++ implementations than C.


Yes, parsing is way harder because C++ requires a context-aware grammar.

However, many C++ incompatibilities are actually caused by C compatibility and the goal of having higher-level constructs that are copy-paste compatible with C but with additional semantics, e.g. structs.

So in both cases, to achieve your portability goal the languages would have to be fully specified.


C and C++ are admittedly different languages, but I would think most of the code for compiling both languages should be the same. I personally can't see how one language would impede the other.

As a C/C++ user, I'm honestly still not sure why a number of C features have yet to be formalized in C++ (e.g. restrict).
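As a sketch of what restrict expresses in C99 (the function here is just an illustration; C++ only has vendor spellings like __restrict):

    #include <stddef.h>

    void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
    {
        /* restrict promises x and y don't alias, so the compiler can
           vectorize the loop without runtime overlap checks. */
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }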


> Thankfully the C++ committee is changing that with C++17.

Which C++17 feature are you referring to? Modules?


The work being done by the SGs (study groups):

https://isocpp.org/std/status

Filesystems, networking, concurrency, ....

Sadly databases died out apparently.


Looks like they're doing what they should've done some time ago and what helped many competing languages get ahead. Between that and recent standards, C++ programming might get really interesting again. Even I might take a stab at it eventually.


Even though I hardly use it at work (JVM/.NET), it was my go-to language after Turbo Pascal. So I still enjoy following and dabbling with it.

Some of the issues C++ had were:

- C compilers were still catching up with ANSI C89 and it was a mess

- C++ being in the process of becoming standardised was even worse. It was quite hard to find out what the common working features between compiler vendors were, especially in terms of semantics

- The C culture, which prevented many nice frameworks from succeeding because they were too high level; hence why MFC is such a thin wrapper over Win32.

- Lack of a standard ABI in an age when people only shared binary libraries.

C++14 and C++17 look really nice, but I doubt C++ will recover its place in the enterprise beyond performance-critical libraries/modules.


Expecting it isn't bad; blindly assuming POSIX is available everywhere is bad, e.g. calling your code cross-platform because it uses just C99 and POSIX.


The majority of the time it's a safe assumption though, so I'm not sure developers can really be blamed.

The reality is, without it, developing software for a platform takes more effort, which I think is one of the reasons languages have taken on the initiative of building portable interfaces. Even C11 has added threads.h.
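For what it's worth, a minimal sketch of the C11 <threads.h> interface (standard API only; availability varies by toolchain, and MSVC didn't ship it at the time):

    #include <stdio.h>
    #include <threads.h>

    static int worker(void *arg)
    {
        printf("hello from thread %d\n", *(int *)arg);
        return 0;
    }

    int main(void)
    {
        thrd_t t;
        int id = 1, res;
        if (thrd_create(&t, worker, &id) == thrd_success)
            thrd_join(t, &res);   /* wait for the thread, collect its result */
        return 0;
    }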


> But I've noticed a tendency for people to sometimes assume that "multiple platforms" means "ten different types of gcc+POSIX+fork+pthreads".

Having done multiple-platform C development across AIX, HP-UX, Solaris, GNU/Linux and Windows in the first .com wave, as well as having some embedded experience, it is always interesting that for some people only gcc and now clang exist.

As for C99, Microsoft is pretty clear that they are only supporting what is required by the C++ standard.


I believe they have started explicitly targeting C99 support in more recent versions in order to make it easier to port C code developed against gcc or clang.

Here's an older article - http://blogs.msdn.com/b/vcblog/archive/2013/07/19/c99-librar...

(I've seen more recent ones, but I can't find an example to hand)


Given that the video was already a bit old, this is the latest official statement I could find, from 29 Apr 2015.

http://blogs.msdn.com/b/vcblog/archive/2015/04/29/c-11-14-17...

"Q. What about the C99 Core Language, or the C11 Core Language and Standard Library?

A. Our top priority is C++ conformance. We've implemented the C99 Standard Library because C++11 incorporated it by reference. C++ (up to the current Working Paper) hasn't incorporated the C99 Core Language in its entirety (only a handful of features, listed above), nor is it ever likely to. It may incorporate the C11 Standard Library at some point in the future, but that hasn't happened yet.

In VS 2013, the compiler team implemented some C99 Core Language features in C mode (e.g. designated initializers, see MSDN for the full list), in order to support some popular libraries. "


No, they were pretty clear that it is only what is required by the C++ standard, plus what some high-profile customers need.

There was a Channel 9 interview about it; I need to search for it.


Could not find the interview video, but this was another one I remembered having watched:

https://channel9.msdn.com/Events/Visual-Studio/Launch-2013/V...

Starting at 00:18:40, note the "not an intent to conform to C99/C11".

FFmpeg was one of the customers.


Out of curiosity, who are MSVC's high-profile customers?


They didn't mention it in the interview.

If I recall correctly it was at a Visual Studio event last year, in those floor interviews they publish with the teams on Channel 9.


Adobe is almost certainly one:

https://helpx.adobe.com/creative-suite/kb/microsoft-visual-c...

There's probably quite a few internal-only users of Visual C++ too.

EDIT: Well, they're a high-profile user of Visual C++, I have no idea if they require C99 support but they ship a Mac version so it's possible.


One of them was FFmpeg, from the video I dug up.


When I was in that situation (maybe ten years ago), we ran gcc on Aix, HP-UX, Solaris, Linux, Tru64, SGI, and maybe a couple of others. Then of course there was Windows, which used the Microsoft compiler. But using only two compilers made the ten or so platforms easier to deal with. (There still was a bunch of code that was "#ifdef SGI/#else/#endif", and similarly for other platforms, but at least the compilers behaved the same.)


We had to use the vendor compilers, so no fun.

This was the only time period I used C instead of C++ at work.

However, I do confess the situation for C++ support was even worse, so although I am not a fan of C, the decision made sense from a business point of view.


How long ago was your experience? Was gcc an option at that time? Or did management require you to use the vendor compiler?

What we did was, we used the vendor compilers to compile gcc for a platform (or maybe used gcc to cross-compile itself, I forget). Then we used that gcc to compile gcc for that platform. Then we used the new gcc to compile gcc for that platform, and made sure that it was the same as the previous gcc. This wasn't easy - we had one guy for whom this was pretty much all he did for months. But it got us to a place where we were free from the vendor compilers.


> How long ago was your experience?

It was back in the first .com wave, so 1999 - 2002.

> Was gcc an option at that time? Or did management require you to use the vendor compiler?

Customers did. Our software needed to integrate with their existing toolchains.

At least for our customers gcc was not a serious compiler, given the license and being that funny open source thingy.


FYI, C99 printf including 'z' for size_t was fully implemented in VS 2015.


Most likely you are not allowed to comment on it, but has the official position from the last C9 VS team interviews changed?


I believe we're hoping that Clang/C2 will satisfy those who want to build C99 TUs on Windows. Personally, I have absolutely no interest in C99 (I wasted a year and a half of my life learning C before C++).


Thanks for replying. I had a similar experience back in the MS-DOS days, and share the same opinion.

Turbo Pascal -> C (meh) -> C++ (great back in business!)

Even though I mostly do JVM and .NET nowadays, I still enjoy playing with C++ on the side, especially for mobile OS coding.

I see dropping C support on Windows as a means to force developers to migrate to more secure languages, but of course Microsoft should do what makes business sense.


correct.


Thanks for the note. Switching to VS2015 just worked its way up my to do list.



E.g. having to move variables to the beginning of a block (C89 style) is just miserable

Since VS2013 there has been partial C99 support and as such this is not needed anymore, finally.


>My favorite C language extension is SIMD vector arithmetic with infix operators. You can get really pretty and portable (!) vector math code written in Clang and GCC using vector extensions, but again, it's not available in MSVC.

Forgive me as I might be wrong but aren't system intrinsics the same in each compiler? I was under this impression because I wrote a lot of SSE2/3 C99 code for GCC using MSVC++ documentation and it compiles/runs without an issue. (This was also before finding the MMX/SSE/AVX docs that Intel has, which are beautiful.)

I mean the function/variable type names (for x86_64 at least) are the same for Clang/ICC/GCC/MSVC.


>> Forgive me as I might be wrong but aren't system intrinsics the same in each compiler?

>> I mean the function/variable type names (for x86_64 at least) are the same for Clang/ICC/GCC/MSVC.

Yes, all those compilers support the same names for x86_64. Then there's a different set for ARM NEON, and a different set for PPC AltiVec. What GCC has done is implement another set of names that can be used across all those hardware architectures. To clarify the difference, you get to choose: do you want to move your code between different compilers, or between different hardware architectures but always with GCC?


Honestly the non-intrinsic SIMD code is a mixed bag. I like the portability compared to intrinsics, but it encourages SIMD antipatterns like using a vector register to store one object (as opposed to using it to step over loops 4/8/16 at a time). It's also less reliable about generating good code IME.


E.g. having to move variables to the beginning of a block (C89 style) is just miserable.

I actually find this style clearer since all the variables are accounted for at the beginning and it helps when looking for them (and associated comments, if any). You can always open a new block when you need a new set of variables. Do you happen to have started with a more dynamic language which didn't require variables to be declared? That was the case I've seen with most other programmers who preferred creating (often very many) variables only at the point of use.


I learned C in the C89 days as my 3rd or 4th language (after Modula-2 and Pascal and maybe Fortran) and now, in 2015, I can confirm that beginning-of-block variables are in fact just miserable.


I share the view that it's clearer to have all the variables declared at the top. The only exception I make in my code is loop variables, but the point doesn't apply to those: you want the smallest scope possible, so declaring them at the top of the block where the loop is located is just inferior.

I find the style easier to read, as you can instantly see what variables will be operated on, what arrays filled, etc. It's also quite important for me to know how much stuff is allocated in a given block so I don't blow out the stack, and having all the things at the top makes it easier to think about. It's probably about being used to it, but I find code with mixed declarations and logic inelegant and more difficult to understand.


You're forced to declare them before they have a sensible value to use.

You're exposing your code to unnecessary bugs where you use the variable prematurely.

Variables tend to be declared further from their use, making it harder to find them and see their types.

The stack size argument is wrong, because variables aren't necessarily on the stack, and non variable allocations may be put on the stack (i.e it's neither necessary nor sufficient to predict stack use).

You're discouraged from creating intermediate variables that aid readability.

You're discouraged from declaring your variables const, losing extra safety.

All in all, it's a terrible way to code.


>>You're forced to declare them before they have a sensible value to use.

As I said I like it because I like seeing what will be operated on when I start reading a block. I don't need to initialize them just yet.

>>You're exposing your code to unnecessary bugs where you use the variable prematurely.

The compiler warns when you use an uninitialized variable. Just don't initialize them to random things; leave them uninitialized.

>>Variables tend to be declared further from their use, making it harder to find them and see their types.

I mean, I can see how that could be a problem, but I don't have blocks bigger than one screen so I can always see them anyway. I like short functions/blocks. It's also way easier to visualize what is happening if I allocate memory in my brain for the variables, so to speak.

>>The stack size argument is wrong, because variables aren't necessarily on the stack, and non variable allocations may be put on the stack (i.e it's neither necessary nor sufficient to predict stack use).

It might be technically wrong, but the real problems are arrays, as they are the only thing which could blow out the stack (especially in recursive functions). So yeah, I don't see it as a problem, although I imagine it could be for some.

>>You're discouraged from creating intermediate variables that aid readability.

I am not discouraged. Again, maybe it sounds super strange to you, but I am actually discouraged if I have to put them in the middle of the code logic, as I find it ugly and I need to readjust my picture of what is going on once I encounter some new variable I didn't know was going to be there when glancing at the block.

>>You're discouraged from declaring your variables const, losing extra safety.

I think const is as good as useless in C when it comes to declaring variables. I don't think it gives any extra safety; it's just more typing and additional pain.

>>All in all, it's a terrible way to code.

I don't know. I think you've lost perspective. One way or the other it won't be a big difference, and a lot of good code is written in the old/traditional style. You may think it's worse, but it is not terrible.


All excellent points, and I would add that it changes the semantics of the code when it comes to threading, which will be more and more of a major issue going forward. If the variable is declared outside the parallel section (such as a for loop), it's assumed that it is shared (and therefore synchronized), while if it's declared inside, it is clearly private to each thread and no synchronization is required.
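For concreteness, a small sketch of those default data-sharing rules (plain OpenMP; n, out and the scaling are placeholders):

    void fill(int n, double *out)
    {
        double scale = 2.0;                /* declared outside: shared by default */
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            double tmp = scale * i;        /* declared inside: private per thread */
            out[i] = tmp;
        }
    }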


We are talking about top of the block. When you are creating a parallel section you have a new block/scope there so that argument doesn't apply. Both styles are the same when it comes to scope of the variables.


> It's also quite important for me to know how much stuff is allocated in a given block so I don't blow out the stack, and having all the things at the top makes it easier to think about.

With modern compiler backends you haven't been able to reliably reason about stack usage by looking at the number of local variables for quite some time. Even ignoring IR-level optimizations and register allocation, modern compilers will do coloring on the stack slots that remain, meaning that the number of stack locations you end up using is flow-sensitive (and thus moving to the top is actually technically obscuring things).


I just wanted to say that I agree with you, but that's because of age. These days I do more C#, but I still use this style even in that language although this is becoming less frequent as I replace code and use C#'s "var" declaration and assignment keyword.


I believe creating variables at the point of use was retroactively inherited from C++ in C99. It can be a useful feature when not abused.


What would be considered abuse here?

Beginning-of-block variables are generally worse than declaring variables as late as possible.


Unfortunately, Microsoft has stated publicly on a few occasions that they have no plans of supporting C99 or C11 outside of what they have to implement anyway for new versions of C++.

The article isn't very clear, but I would guess backward compatibility is going to trump adding new features as far as the C compiler goes.

I wouldn't mind being proved wrong, but the article mentions a lot of C++ features they're working on, and doesn't mention any C features they're working on.


We are targeting C99 via the new Clang/C2 hybrid compiler. We demoed it on stage at CppCon this week.


Well, I guess I stand corrected!

Microsoft keeps surprising me lately!


MSVC has those things: __popcnt64, _mm_add_ps, etc. Or do you mean something else?


I hope Microsoft never decides to ditch their compiler and use something existing like Clang. I may have had headaches due to Microsoft incompatibilities before, but having a real alternative (or even three, MinGW still works fine) is great.

That said I applaud their decision to fix their compiler architecture problems!


There is also the Intel compiler (a bit pricy, but comes with enough bells and whistles that for anything performance-related it is easily worth the money). At least in my experience, its code is a couple percent faster than GCC on modern Intel CPUs. Diagnostics are rather horrible, though.


I believe it has switched to use clang as its frontend, though, so it's not a fully independent alternative anymore.


It's good for detecting and working around bugs if nothing else. That's one of reasons OpenBSD development uses so many different pieces of hardware.


That's great. Please don't forget to finish the C99 implementation!

For tiny values I really like being able to use int arr[runtime_size]; rather than risking buffer overflows with int arr[MAX_SIZE] or arr = malloc(size * sizeof(oops)); — and MSVC is the last compiler that still doesn't support that.


VLAs do not absolve you of the need to ensure the buffer is the right size. In fact, they can be worse since at least malloc()'s failure can be determined from its return value. A VLA will just overflow the stack silently:

https://www.clarkcox.com/blog/2009/04/07/c99s-vlas-are-evil/

Edit: to expand on this point, if you really need to keep all that data in memory at once, it's often better to just malloc() a buffer of the appropriate size and reuse it, reallocating if it should be bigger. (Obviously, do not forget to free() it eventually.)
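A minimal sketch of that reuse-and-grow pattern (names are illustrative, error handling kept minimal):

    #include <stdlib.h>

    static char  *buf = NULL;
    static size_t cap = 0;

    void *get_scratch(size_t need)
    {
        if (need > cap) {
            char *p = realloc(buf, need);   /* grow (or first allocate) the buffer */
            if (p == NULL)
                return NULL;                /* failure is detectable, unlike a VLA */
            buf = p;
            cap = need;
        }
        return buf;
    }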


The point is that you sometimes want them on the stack and malloc won't do it for you. MSVC has this already though: https://msdn.microsoft.com/en-us/library/wb1s57t5.aspx

So I really don't think there is any good will there. It seems someone in management is set on the "C++ is the future, C is obsolete" view; at least that's the impression I've got from the press releases about it.


If you want them on the stack, it's safer to use MAX_SIZE so that you don't blow the stack when some arbitrary input arrives in the future.


1) There is sometimes a very slight performance hit, as there is more space between arrays and you get more cache misses.

2) Using something smaller than MAX_SIZE is never worse, unless you write safety-critical software where a bug that writes to an unintended but empty location is better than writing randomly somewhere and/or crashing. I prefer the latter in what I am doing (as it's easier to notice something is very wrong).


The memory hit is an interesting point; it could be useful if there's static validation of worst-case stack size.

The smaller allocation is worse as it hides potential bugs.


Yeah, they are so safe that they were made optional in C11.


They made it optional because the committee has to cater to interest groups. They are as safe as the rest of the C/C++ language: 100% rock solid safe if you know what you are doing and catastrophically unsafe if you don't.

The point is that a lot of code uses them and that makes porting it a pain. I mean, they spend so many resources to implement all the new toys in C++, which just introduce new ways of doing the same thing, but actually implementing something which is used in real-world code, which they already almost have (_alloca), is somehow a problem because it's "unsafe". No one sane can actually believe that explanation.


VLAs are just a bad idea.

A. They make stack allocation much more dynamic, harder to reason about its bounds, and arbitrary inputs may blow the stack later.

B. They make sizeof a dynamic thing! sizeof can no longer always be constant folded and can even cause side effects!
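To make point B concrete, a small sketch (standard C99/C11 behavior; the printed values assume a 4-byte int):

    #include <stdio.h>

    int main(void)
    {
        int n = 4;
        size_t s = sizeof(int [n++]);  /* VLA type: the size expression is
                                          evaluated at run time, so n changes */
        printf("%zu %d\n", s, n);      /* typically prints "16 5" */
        return 0;
    }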


Then don't use them when you have arbitrary unbounded inputs or maybe even don't use them if you care about security.

In many applications you don't care about those things (like, say, writing a game engine). They come in handy there. I mean, it's not some kind of critical feature, but some code uses it and it's nice to be able to compile it. They are also quite convenient and in rare cases the best solution performance-wise.


VLAs in reality are just as dangerous as allocating from the heap. I would agree not everything should be allocated on the stack, but you can overflow the heap just as you can overflow the stack.

Unless people are completely irresponsible, they query the stack limit and subtract the current stack usage before deciding to use the heap for large objects. Freeing the memory is even simpler than with the heap, and faster as well. I think the only real concern is people being responsible about using them.


What's the benefit over just allocating the worst case size? If worst case size is too large, your program is broken anyway on some of your inputs.


In my current project I get a significant (about 3%) performance difference when I change my VLAs to allocating the biggest possible size. My guess is that it's because of cache implications (a lot of useless zeros occupy the cache when you allocate a lot of unneeded space on the stack), but I haven't investigated it too deeply, just measured it.


You can use _alloca(); however, be careful not to overflow the thread stack. This applies to C99 variable-length arrays as well.
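A minimal sketch, assuming MSVC's _alloca from <malloc.h> (other toolchains spell it alloca in <alloca.h>):

    #include <stddef.h>
    #include <malloc.h>

    void use_scratch(size_t n)
    {
        /* Like a VLA, there is no failure return: n must be bounded by the
           caller or this will silently overflow the thread stack. */
        int *tmp = (int *)_alloca(n * sizeof *tmp);
        if (n)
            tmp[0] = 0;   /* ... use tmp; it is released when the function returns */
    }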


Actually, alloca is even less portable than VLAs.


I guess I must have experienced some of this rewrite. With VC++ in 2015, a piece of code that had a very long if/elseif chain no longer compiled. Parser stack overflow: program too complex. Easy enough fix (break the if/elseif in half) but funny in a way.

I hope they'll be able to continue to keep up C support as well as C++. It's been so nice having C99 in MSVC, as cross-platform projects can now adopt C99 features.


> I hope they'll be able to continue to keep up C support as well as C++. It's been so nice having C99 in MSVC, as cross-platform projects can now adopt C99 features.

I hope that they only keep it as much as required by the ANSI C++ standard.

Annex K was a joke.


Annex K is optional.


Hence why it is a joke.

Not only has the standards committee not done anything serious about preventing bounds overflows, they made it optional.

Sizes still aren't guaranteed to be correct for the string/buffer.

No portable C code can rely on its presence.


AFAIK Annex K originated as a Microsoft proposal, so I find it somewhat strange that you're using it as an example of a C feature that Microsoft shouldn't adopt (though I agree that it's a terrible feature that never should have made the cut).


Yes, as such there is nothing to adopt. It is already there.

What I am in favor of is Microsoft pushing C aside.

C++ provides safer constructs and if one wants to keep on doing unsafe C style coding, it is still there.


So when will they support C99 (I'm not even asking about C11)?



Note, this is only because the C++ standard grandfathered in changes to the standard library in C99.

It does not actually allow you to write compliant C99 and compile it.


They added several C99-only features that are not part of any version of C++ in VS 2013, such as designated initializers.


Yes, but they still don't plan for full support.

https://channel9.msdn.com/Events/Visual-Studio/Launch-2013/V...


You don't have to run to Microsoft. If you do not like mainstream solutions, the PellesC and OrangeC compilers are worth a look.

I would like to see a similar article for OpenWatcom v2.


They already do, to the extent required by the ANSI C++ standard.


So in other words, they don't.


I mean C99 standard.


Partial C99 support is required by ANSI C++11 and C++14. This is what Microsoft is currently supporting.

C++ is now supported on the kernel level.

Good luck doing WinRT applications with pure C.


When they build a C compiler.


So when will they? They had plenty of time.


When they think they have good commercial reason to do so (and, apparently, not before).


See replies elsewhere. We demoed C99 support via Clang/C2 on stage at CppCon.


With such an attitude they should stop developing any tools, and everyone would be better for it.


This was some excellent information from Microsoft themselves.

Geoff Chappell has been uncovering various hidden Microsoft compiler and linker (plus other bintools) command-line options and environment variables (like _CL, _CL_, _LINK, and _LINK_) - http://www.geoffchappell.com/studies/msvc/cl/index.htm?tx=23


> "we knew new features such as constexpr were going to need a different approach."

Stephan Lavavej (STL) also mentioned this a while ago. It is nice to see they have been working on this so maybe MSVC can catch up with new standard features faster from now on.

https://news.ycombinator.com/item?id=9462644


> The target of the callback could not be found


For posterity: the link used to pop an alert() saying that. Doesn't happen for me anymore...


It still happens if you use Ctrl+F5 (STRG+F5 on a German keyboard) to force a complete reload.


I get this when I open up on mobile chrome..


I see it on desktop firefox.


Well I don't know if it's my machine upgrade or the VC++ upgrade, but my C++ code compiles an awful lot faster on VC2015. I also find the IDE tools very responsive and useful. In particular you can now look at the implementation next to the declaration.


We made a bunch of improvements to build throughput, especially for incremental builds. Many improvements to our handling of template code as well. -- Steve, VC dev mgr


Now if they can just figure out how to make project files that aren't completely and utterly incompatible with each other when they're only a few years apart.


CMake comes with its own drudgery, but I find it better than trying to keep Visual Studio project files in source control.


The last change I'm aware of was around VS2010 when they changed to MSBuild-based project files. Anything since then?


Do you mean backwards compatible? I feel like I have opened VC++6 files in VS2013.


Does static analysis still choke on Boost or CGAL?


Interesting that they mention using clang as the front end for their code generation backend.

Glad that they're finally undertaking this; it'll take cross-platform C/C++ code a long way if there's more standardization across all major compilers.


Come on guys, just use clang!

It's state of the art, all the tooling already exists, and it's under a very permissive license (http://clang.llvm.org/features.html#license).

You don't need to rewrite all of these tools.

/me shakes head.


They're not rewriting all of these tools; they're improving their own tool, which predates Clang.

Now, I know that's quibbling, but this really isn't: what do you think is less work and less risk for Microsoft - throwing enough resources at MSVC to make it better, or throwing enough resources at Clang to make it so that Windows and Office compile on it without introducing regressions?


> It's state of the art

Does the officially released version support OpenMP by now? A few weeks ago at least I still failed at installing that properly from official sources. There seems to be an OpenMP-in-clang project going on, but it was far from obvious what had to go where. So about this “state of the art”…


It took them forever, but yes, they support OpenMP 3.1 as of clang 3.7.

MSVC still only supports OpenMP 2.0.


Thanks for the information. I must be doing something wrong then, as I've recently downloaded 3.7.0 for Windows, and while it compiles with -fopenmp it doesn't work correctly (only one thread is launched). I guess some investigation is needed - any ideas why that could be?

The code works correctly when compiled with GCC so that can't be an issue. I am launching OpenMP threads from a Windows thread (not the main one) so maybe that's the issue (MinGW packages had problems with it forever and it's pretty tough to find one without this bug present).


This:

    ./clang++ --version
    clang version 3.8.0 (trunk 246030)
    Target: x86_64-unknown-linux-gnu
    Thread model: posix
does not support OpenMP to my knowledge.



Hmm, interesting. It works in a simple test case, but fails to compile my codebase and throws an error

    ./inc/bla.h:163:9: error: unexpected '#pragma omp ...' in program [-Werror,-Wsource-uses-openmp]
    #pragma omp parallel for num_threads(Threading::num), schedule(dynamic, 1)
Unfortunately I wasn't able to build a minimally failing example so far; I will have to investigate this further. Thanks for the heads-up in any case!

Edit: Turns out everything fails if -Weverything is supplied. There didn't seem to be a runtime library provided (nor did the compiler define the _OPENMP macro even with -fopenmp on the command line, otherwise it would have complained about a missing <omp.h>…). If -Weverything is not there, the thing compiles but only runs with one thread.


Competition is good


Have you ever supported a large C++ code base that has to compile with both MS VC++ and gcc (or clang)? It's not too bad, but you are constantly working around minor incompatibilities and different levels of language support - time that could be productively spent elsewhere.

I agree that competition is good; I'd much rather prefer compilers competing on the speed, quality of their optimization, static analysis, supported platforms - something that does not give me much of a headache.

Having said that, I acknowledge that the MS compiler team has made great progress in the last year catching up on C++ standards, compatibility, and compilation speed.


Can't speak for the parent, but all code I work on professionally or for personal projects compiles cleanly on GCC, Clang and VC (all in all about 2 million lines of code). I think it is good for code quality to compile on different compilers, since they all catch different bugs through their warnings and static code analyzers. For this I can forgive minor inconveniences like VC not fully supporting C++11, or clang on iOS not supporting std::thread_local, or (as a random example that I stumbled upon) the GCC version I am locked to not having full support for variadic templates as lambda arguments, etc... all compilers have their little problems and corner cases, and it would be worse to lose one of the big three compilers.


> Have you ever supported a large C++ code base that has to compile with both MS VC++ and gcc (or clang)? It's not too bad, but you are constantly working around minor incompatibilities and different levels of language support - time that could be productively spent elsewhere.

That is what happens to any technology based on standards instead of gold implementation.

Let's have one OS, one browser, one compiler, ....


Maybe they would compete on:

>>the speed, quality of their optimization, static analysis, supported platforms

if the C++ committee weren't so "prolific" about adding yet more new ways to do the same thing to the language. It sucks up a lot of resources to implement that monstrosity, and every 3 years or so there is a new set of toys to play with.


There were 8 years between C++03 and C++11, and in terms of substantial changes to the language you have to go back to C++98.


It's so state of the art it currently can't even generate profiling data on ARM devices.


Is there an LLVM/Clang integration for Visual Studio yet?


Yes. It integrates as a platform toolset in the project files, which is the same mechanism you use to change between the compilers VS provides.



