You could make one hell of a good C++ web framework with this.
http::controller mypage
{
    // Browsing to http://server/mypage/test?some_str=blah&param=5 calls this
    http::response test( const std::string& some_str, const unsigned param )
    {
        return http::response{ 200, "You did awesome stuff" };
    }

    // This method can only be called with HTTP POST
    [[POST]] http::response save_some_data( const std::string& blah )
    {
        return http::response{ 200 };
    }
};
int main()
{
    http::server server( "localhost", 80 );
    server.add_route( "mypage", std::make_unique< mypage >() );
    server.run();
}
It's all the magic of a modern web framework but in a fast language.
Due to WebAssembly you can easily have your server-side and client-side scripting done in the same language, with the same libraries, which is also a huge bonus.
Benchmarks are tricky and hard to find for D as it's not that popular, but I have common sense to back that statement. D's garbage collector is really, really slow for now [0], and C# has a very fast GC and overall is almost as fast as zero-overhead languages like Rust or C++ [1]. So it's reasonable that for anything that involves a GC (and we can all agree a web server does), D is not yet as fast as manual-memory-management languages.
You can do exactly this with the Haskell "servant" framework. Works great. It's in fact even more powerful than this. Your routes are first class, so you can pass them around and generate type-safe links.
It also doesn't require any "meta" level programming, just plain Haskell.
And Haskell is blazing fast, not far behind C++ (especially when the C++ uses high-level APIs like std::string)
It's more that C++ is the only widely adopted fast language. Rust and D are the only real challengers there but D doesn't seem to have caught on and Rust's growth, while impressive, isn't up to widely-adopted language status yet.
Granted, most things don't /need/ the raw throughput potential of C++, but if parts of your code really do need it, then it tends to be easier to just build the whole thing in C++ than to deal with bindings to other languages.
Sorry but can someone explain why C++ by itself can't be good enough for a web framework? OKCupid wrote one back in the day.
ALSO: Your example is much slower than the same server written in Node.js, haha. Because the language doesn't support evented programming. The language may be fast, but the paradigm of blocking while waiting for I/O is slow!
Back in 2001 I was working on what was at the time the fastest webserver in the world: Zeus. It was written in C++ and built around select(), which gave it effectively evented programming. One process per core. Primarily for serving static content, although there was a fastcgi interface and, god help you, IBM Websphere integration.
There were a few quirks to the codebase. No STL; instead our own containers and length-based string classes. There was also a "stringrange" type class, which referred to a range of a longer string. So it parsed in the entire HTTP header as an octet array, then created objects to refer to individual headers. Since the input header was immutable this worked beautifully. Considerable effort was put into avoiding unnecessary copies.
It also had at least a dozen target platforms: Linux, SunOS, IRIX, and architectures from ARM to Itanium. I'm not sure how many of those we ever sold a license for.
I remember that server! It's really cool stuff, and select() can definitely be used in C++. But there is no real evented ecosystem for mysql, pgsql, websockets, and all the other types of I/O unless it is all done through sockets. I think all languages can do an event loop, it's just a question of the ecosystem. Javascript is designed around async and Node.js was heavily promoting it from the beginning. So the default pattern is async. That's what I meant.
Yes, sure you can do the web server and even middleware as async, but there are many services (redis, memcached etc.) and having async modules for them all is the key.
> ALSO: Your example is much slower than the same server written in Node.js, haha. Because the language doesn't support evented programming. The language may be fast, but the paradigm of blocking while waiting for I/O is slow!
there are plenty of event libraries for C++, many predating Node.js (which is written in C++ btw) by decades.
There is no reason why the OP's example can't be implemented on top of such a library.
Unless yours was sarcasm, in which case Poe's law strikes again.
That may be true, but his example is not evented programming. Unless the function he is calling is badly named, I am pretty sure the style he was thinking about was about blocking the thread until I/O completed.
I can't know what the OP had in mind, but I don't see anything that would prevent the example from being fully asynchronous. http::response could be the moral equivalent of a future, or it could be running on top of a coroutine.
It's not about how the bytecode is compiled. It's about the paradigms the language encourages.
Look, even PHP has frameworks to do evented programming: Kraken, amphp, ReactPHP. But it lacks an ecosystem of modules that actually do things in an async way and work with the same framework.
It's the ecosystem of modules and the ethos that matters.
CGI scripts used to be fairly popular back in the day; it was not uncommon to have C or C++ code facing the web directly.
The main problem is that web apps have a huge attack surface: a huge part of any web application is handling untrusted user input. For this reason you're taking significant risks by using a memory-unsafe language. Also, web services are rarely CPU-bound, so the added performance matters less.
> The main problem is that web apps have a huge attack surface: a huge part of any web application is handling untrusted user input.
That's not actually true; not all web frameworks work by having pages that parse query params on every navigation. Callback-based frameworks like Seaside let you write apps where form input is the only attack vector, and messing with URLs will only make you lose your session. That pretty much removes the largest attack vector in most web apps.
Well you're just delegating the issue to the framework then, it doesn't really change what I said. If the framework is written in C++ it'll have to be written very carefully and thoroughly validated.
If the framework is written in a safer language then it's not much different than if you write a web application in, say, Django where you call native C and C++ modules. That's very common.
Sure, but that's why we use frameworks to begin with, to create such abstractions. My point was the main attack vector many web apps suffer from are merely due to frameworks that leave too much to the developer. Abstracting page to page linking, as Seaside does or as this very website's Arc framework does with callbacks kills that vector by changing the very nature of what links do.
More seriously Node.js has some pretty major performance problems, like the fact that it's single threaded. You'll make excellent use of one core, but that's about it. It looks great on I/O bound request/second benchmarks, but as soon as your code needs to do meaningful work it doesn't scale anymore. The solution to that is to spawn multiple node instances to make use of the actual machine resources, but now you're on your own and why did you bother with node.js then in the first place?
Anything Mr. Sutter says is well worth consideration, and this is no exception. I would like to raise one concern, however, that applies to all such proposals for simplifying programming by adding more powerful abstractions to the language: I hope as much effort is put into debugging, at the semantic level of the source code, as is put into generating code. In this regard, this quote from the paper leaves me feeling uneasy:
'Enable writing many new “specialized types” features (e.g., as we did in C++11 with enum class) as ordinary library code instead of pseudo-English standardese, with equal usability and efficiency, so that they can be unit-tested and debugged using normal tools' (my emphasis.)
Sometimes, of course, you have to go below that level to figure out what is wrong, but it is better if you do not have to do so unless it is unavoidable in principle.
A snippet from the paper that made me understand what this is about:
// $ is the reflection operator and T is a type.
constexpr {                              // execute this at compile time
    for... (auto m : $T.variables())     // examine each member variable m in T
        if (m.name() == "xyzzy")         // if there is one with name "xyzzy"
            -> { int plugh; }            // then inject also an int named "plugh"
}
This would inject "int plugh;" in the scope surrounding the constexpr block.
I have hopes that Jai will be at a very nice local optimum regarding performance and productivity, but I think just like in C++, safety is not a major design goal (you need to be unsafe if working with memory mapped structs).
> you need to be unsafe if working with memory mapped structs
Perhaps when defining their address and layout, or if they are a hardware structure that affects memory like a page table or TLB or a DMA controller, but beyond that, why? Or is that just the main use case you're describing?
Just as others have noted, I'm worried about the intermediate representation hiding necessary details. Here's an example:
If I'm looking at one of these classes that have been compile time injected with new functionality, is there an easy way to see what's been added without looking at these compile time injections? Specifically, it would be great if there was support for looking at both the source code before the injections and the source code after the injections in an easy way (built in IDE support for example).
While different from what's being proposed here, C/C++ macros can be used for massive amounts of code generation, but looking behind the curtain of the macros involves running your source code through the preprocessor. Getting to the intermediate representation for macros feels tedious, and most IDEs have limited support to aid the programmer in pulling the veil off of macro code generation. I can imagine (though I may be wrong) that a similar problem exists for certain kinds of template metaprogramming.
Note that I think this is still a great idea. In fact, I'm rooting for it to come to C++! However, I think IDE plugins will need to catch up (shouldn't be too bad if metaclasses are adopted and once clang et al add support).
Are they trying to catch up to Rust while maintaining backwards compatibility? Or what?
C++ needs the full definition of a class, including its private members, before you can use its methods. This is a huge dependency headache. Most later languages have interfaces or traits or something to decouple class definition and interface definition. This looks like an attempt to deal with that. Of course, this is a retrofit, so it's going to be messy.
C++ has a lot of baggage. It may have exceeded its excess baggage allowance. The basic problem with C++ is that it has heavy encapsulation without memory safety. No other language has that. Either nothing is hidden, as with C, or the language is memory-safe, as is almost every language since C++.
Rust still needs to catch up with C++ in many areas, including HKT, value parameters in generics, compile-time code evaluation, compilation speed on par with VS's incremental compilation and linking, and GUI and mixed-language tooling across all major desktop OSes.
Trying to use Gtk-rs made me see that lifetimes still need some ergonomic improvements for any reasonable GUI programming in Rust. In fact, some of the non-lexical lifetimes proposals address exactly the problem I had with callback handling.
By the way, interfaces in C++ are pure abstract classes: only pure virtual methods, no private data.
Is GUI programming in C++ still a thing, though? On Mac, you're better off with Swift/ObjC; on Windows, there's C#; and Gtk has bindings to about any language. On mobile, C++ isn't even an option.
The only thing coming to my mind is Qt, but development of Qt Widgets has basically stopped in favor of Qt Quick, and for cross-platform GUIs there are other options like Xamarin or Electron (not that I'd prefer either).
On Windows you might have C#, but WPF and UWP lower layers are actually written in C++ and based on COM.
Xamarin uses C and C++ to bind the nice .NET code into the platforms they support.
Qt Quick is just the layout engine, many important parts of the stack keep being C++.
Apple's Metal shaders are C++14.
On Android Vulkan is only exposed to the NDK and SurfaceFlinger and hardware compositor are in C++.
Plus, if you compare Rust with Objective-C, Swift, C#, or Java, the productivity story regarding GUI programming doesn't get any better than when comparing with C++; on the contrary.
> On Windows you might have C#, but WPF and UWP lower layers are actually written in C++ and based on COM.
Well, this doesn't really influence the language programmers have to use to build applications, considering you can use C++, C#, VB.NET, or JavaScript to your heart's content.
Windows GDI is a C API, but not every application using it is written in C.
Still, the issue stands: from my experience, Rust isn't yet at the same productivity level as C++, C#, VB.NET, or JavaScript for writing Windows GUI applications, for both the Win32 and UWP sets of APIs.
Yeah, it will eventually get there, but it is a disservice to Rust to promote the language as if this were already possible today, when most toolkits are still WIP and lifetimes aren't that friendly to the usual GUI coding patterns.
This will only lead to disillusionment and drive away contributors who might otherwise enjoy using the language, had they understood the current state before diving in.
> Is GUI programming in C++ still a thing, though?
All of the new Windows 10 UI is C++ (using the UWP XAML framework, which is implemented in C++). The paper says metaclasses are a way to avoid language extensions like UWP's C++/Cx, which adds C# things like "interface" and "event" to C++.
Oh, that explains why I can't navigate in "Settings" with Alt-Left (and there's no Back button either), yet it looks like an IEView (MSWebView, Electron, etc.).
> C++ needs the full definition of a class, including its private members, before you can use its methods. This is a huge dependency headache. Most later languages have interfaces or traits or something to decouple class definition and interface definition.
C++ has pure virtual classes, as well as the pimpl idiom. Both decouple the class definition from the interface.
Yes, they have a theoretical runtime overhead. To get rid of that, you'd need a special "sizeof" member function instead of the list of private member variables. I guess that would be possible to introduce to save compilation times.
In the past I've sometimes taken this a step further and defined a pure abstract class plus factory function in the header and then defined a derived implementation class in the source file, kind of a COM lite approach. (Is there an agreed name for this pattern? I don't doubt that, like most things I've invented, it was first thought of in the '60s or '70s. :P )
It's good for libraries because it separates all implementation details from the public interface while adding very little overhead for the library dev. Not so good for smaller utilities which are more natural as header-only classes, though.
These days I don't bother with that kind of messing around until it actually becomes necessary, which it almost never does. I feel like my code's become way simpler over the years.
I wouldn't know, but some Dylan book taught me that, along the abstract-concrete dimension, there are not two, but three kinds of classes:
- concrete, instantiable
- abstract, instantiable
- abstract, not instantiable
where a type <foo> is instantiable if a function make(<foo>) exists (because <foo> is abstract, that function must return an instance of a subclass of <foo>)
That factory function makes your abstract class instantiable.
Also, in Java, Foo would be an interface, and that factory would be a FooFactory.
Rather than specifically single it out as "trying to catch up to Rust", I'd say that C++ is doing what it has always been doing. Reaching for the future while maintaining backwards compatibility. It's been a great strategy for 30 years and I hope they keep at it.
Perhaps. It's like that old joke about perspective. The length of a minute depends on which side of the bathroom door you're on. :-)
From my standpoint, I'm _still_ trying to get my head around all the ramifications of what arrived in C++11 (and it's a lot!). And I still have '14 and '17 to plow through too. '20 will be here before I know it. So for me, there is so much to absorb.
To my eyes the C++ leadership has done a great job at setting up standards processes for regularly delivering new versions and the compiler community has done an amazing job at keeping up too.
I expect to see more movement in the world of standard libraries which is where C++ still feels like a desert compared to other languages.
But yes it definitely is not fast when compared to other newer language communities.
The only language today with more independent implementations than C++ is C. Compared to C, C++ evolution is not slow — processes involving so many implementers are inherently not in fast-forward mode.
Most major languages only have a handful of active implementations, if it's not just one implementation. Most don't have specs.
I would argue that you're overlooking Lisp and Scheme in terms of independent, active implementations; however, in a more general sense I agree.
Moving fast (and breaking things) is just not what the standardization committee does, and that's fine. They can move faster since C++11/14, but sometimes you really don't want a giant moving target with new features all the time. I think their approach works for the kind of environment you find C++ used in.
Exactly, and even if I complained about the current ergonomics of lifetimes, the amazing work of Rust developers is already influencing design decisions on Haskell, Swift, .NET, Pony and C++ regarding resource management.
Any chance you could throw up an email for me to contact you, or use the one in my profile? I'd be interested in chatting with you "offline" as it were.
While your comment is a bit snarky, there is some truth to it in that if rust successfully evangelizes a technique, I'd expect the C++ world to bring it on board if there is room and I'm sure room can be found.
The one topic to consider is this - is there a world where a language's existing code base is so large and the community so diverse that even _you_ would hesitate to break things for them? And at the same time, doesn't it make sense for that community to keep marching forward?
Yes the language has baggage and the path forward isn't easy, but there are still paths ahead for C++.
My advice to the rust community. Model yourself after the C++ strategy of evolution. Do not repeat the mistakes of the Python 2->3 world or the perl 5->6 world or what appears to be happening to Go. Don't break the past.
In general, we in the Rust community are modelling ourselves after that. There was even a recent RFC to explicitly move to a model of language change patterned after the C++ model, but people are concerned that it breaks too much.
This proposal isn't about adding interfaces to C++: it's about removing boilerplate through compile-time metaprogramming during the class definition process.
> It's not simple to write memory safe code in C++, otherwise we would have a lot fewer vulnerabilities in C++ codebases.
Using the STL after using Java collections or Qt collections is a shock. I think there really needs to be a "Safe STL" in addition to "Performant STL" that performs range checks. You simply cannot recommend using the STL for safe code.
> I think there really needs to be a "Safe STL" in addition to "Performant STL" that performs range checks.
SaferCPlusPlus[1] provides compatible, memory-safe versions of the most commonly used STL containers. (The compiler switches mentioned by others add range checking, but do not, for example, catch use-after-free bugs.)
> > It's not simple to write memory safe code in C++, otherwise we would have a lot fewer vulnerabilities in C++ codebases.
Using SaferCPlusPlus, it now is simple to write memory-safe code in C++. And it's not much different from writing traditional (unsafe) C++ code.
> I think there really needs to be a "Safe STL" in addition to "Performant STL" that performs range checks.
Indeed. For gcc, you can compile with '-D_GLIBCXX_DEBUG' to enable iterator and bounds checking in libstdc++. Moreover, random-access containers have 'at(...)' for bounds-checked element access.
But, yeah, a standardized way would be very useful.
> Using the STL after using Java collections or Qt collections is a shock.
Why?
I prefer having a free-standing generic find function as opposed to it being implemented in each and every container. Even better example: copy. You can use it with containers, but also with input/output streams (using stream iterators).
You mentioned safety, is iterator invalidation bothering you?
> Pointers and virtual classes have runtime overhead.
If you are sensitive to this overhead, C++ is pretty much your only choice. Or maybe Rust, but I do not know Rust well enough to say whether it avoids such overhead.
> It's not simple to write memory safe code in C++
No, in C++11 or newer, memory safe code is the default. That's the design goal of the newer standards of C++. IMHO, memory safety is no longer a central issue for C++.
Rust doesn't have classes, and therefore doesn't have virtual classes. So it ends up being different. Dynamic dispatch is used pretty rarely in Rust.
If you want dynamic dispatch, you don't use a virtual class, you use a "trait object". Basically it's a double pointer: a (pointer to vtable, pointer to data). Technically the C++ standard (as far as I know) doesn't define the exact implementation of this stuff, but in my understanding the vtable pointer is stored with the data.
Yep, they just don't care and keep using pointers instead of references or std::vector/std::array, #define instead of const, char* instead of std::string, and so on.
I have been preaching how to write proper C++ rather than "nicer C" since the late '90s, but without much success, especially to enterprise developers.
Many of the so-called modern C++ features were already possible with C++98.
As a good example, MFC 1.0 could have been like Turbo Vision, OWL, VCL, Qt and so on, but the test group at Microsoft found Afx too high-level and wanted just a thin layer over Win16, so it was reborn as MFC.
> Yep, they just don't care and keep throwing pointers instead references or std::vector/array, #define instead of const, char* instead of std::string and so on.
> I have been preaching how to write proper C++ code vs "nicer C" since the late 90's, but without much success, specially to enterprise developers.
In no practical sense is it easy then. Even if you find it easy to write perfect code, you still have to work with other people that don't and use libraries which have problems.
C++ has so many ways to shoot yourself in the foot I literally cannot think of a mainstream language that is actually more difficult to master. Would you say it was easy compared to Python, Java or JavaScript to write safe code in? Those languages have no undefined behaviour and they all have automatic memory management compared to C++.
It is harder yes, and actually nowadays I spend most of my time on Java/.NET, just diving into C++ for system stuff.
However, the mechanisms for safer code are there, if one cares to use them in first place.
The usual: warnings as errors, CI builds with static analyzers that break on failure, unit tests with sanitizers that break on failure.
But many companies don't care to set this workflow.
They don't care to do that for Python, Java, or JavaScript either.
JavaScript might have automatic memory management, but it has its own collection of WTF moments and implementation issues across browsers.
Java is way better, yet there is a reason why Java Puzzles are so entertaining.
While a tracing GC is much more productive than RAII/smart pointers, sometimes you also need to track down the references that prevent the collector from doing its job.
Android developers have lots of "fun" doing that regarding context and activities.
One should not forget that safe languages only free us from memory corruption. There is still the whole spectrum of logical errors, which are especially bad in dynamic languages.
In any case, until we finally get a mainstream OS written in Swift, .NET Native, Rust or D, we need to make the best use of our tooling for low level system coding activities and in this regard C++ is so much better than C.
I agree tools and workflows exist to make it easier to write good C++ code but it's still an order of magnitude harder to do this compared to other languages. I'd only use C++ if there was no other practical choice. It's simply far too much effort to make your code robust.
Sure. But now you're retreating to a rather small set of applications. Ada has nice tooling and supports a pretty large number of platforms. I don't think you'll find many niches where not one of Rust, D, Swift, Ada, Java, Go, or the dozens of other memory safe(r) languages could do the job and only C or C++ are adequate.
Practical example: we use C++ as the systems language for projects done in Java (standard, Android), .NET (WPF, UWP, Xamarin), and Swift/Objective-C (iOS).
There isn't much Ada tooling available for those environments, and even using GNAT would increase project costs, since we'd have to implement ourselves the stuff we get for free with the SDK tools that support C++ out of the box.
> C++ needs the full definition of a class, including its private members, before you can use its methods.
This is more a limitation of the ABI than the language (and even then, only comes up if you are using multiple inheritance or have private virtual functions). Objective-C had the same problem, and then they introduced the non-fragile ABI: no language change was needed, just a compile flag.
(I do realize that there is some awkwardness in C++ if you attempt to declare these things in that private-decoupling sense; but to me the reason this is most brutal is not hiding of state, where we have solutions already, but being able to add and remove members without having to recompile users.)
> The basic problem with C++ is that it has heavy encapsulation without memory safety. No other language has that.
I think in general these things are unrelated, unless you stretch the definition of memory safety rather far.
Also, you can hide things in C, it's just not always zero-cost. Put private fields in a forward-declared struct that isn't defined in any header, and use static for private functions (also kept out of headers).
It is similar to several Rust features, but has its differences, as you'd expect. They're also in-development features, so I agree that "catching up to Rust" feels like a bad way to frame this.
Wow, not sure what to think of this. Seems like this merely grows the monster that is the C++ specification, yet at the same time I wouldn't mind wiping out all that boilerplate.
I'm not sure you can compare the complexity of using the majority of Python 3 to modern C++. Besides, the switch to Python 3 happened once in ten years. The complexity of that change is, I think, lower than the introduction of C++11 or 14.
BTW I like C++ but its complexity is daunting.
Which subset is relatively simple to use? How would you start now?
I'm excited by this feature, along with all the other compile-time programming stuff (reflection, concepts). The downside I see is having to remember how each metaclass works, e.g. the default access for member variables of struct, class, interface, value, and so on. But if we keep the number of metaclasses low, this may be manageable.
The thing that makes me uneasy about the proposed implementation of metaclasses is that imperative code is used for code generation instead of declarative specifications.
Simple examples are relatively simple and predictable, but anything more complex might blow up into an incomprehensible mess pretty quickly.
Metaclasses for things like "interface" and "value" sound fantastic, and will get rid of huge amounts of boilerplate, but I wonder how flexible they really are.
I don't immediately see a way to implement something like C#'s extension methods, or Objective-C categories, for example -- extending classes with new methods. Those are incredibly useful when integrating with an existing codebase.
Metaclasses just seem like a way to do existing C++ stuff in a cleaner and terser way. Definitely very useful but maybe not a huge game changer.
I'm also concerned about the potential impact on compile times once people start using metaclasses very heavily; but maybe it won't be as bad as templates in that respect.
With the function-style syntax and general compile-time programming support this is getting awfully close to full hygienic macro capacity, even if restricted to outputting type declarations at first.
Did I miss something, or is this restricted to "inheritance" instead of composition?
As an example, I can't make a constraint/metaclass that is plain_struct, ordered, and final all at the same time; maybe that particular example doesn't make sense practically, but I'm sure there are others that do.
I like the flexibility that this feature adds. However, I find that C++ is hard to use without good IDE support, and it looks like this feature will not be properly supported in IDEs for a very long time.
> Due to web assembly you can easily have your server and client side scripting be done in the same language, with the same libraries which is also huge bonus.
I honestly can't think of a better stack.