This is basically the work of one tireless man: Iain Buclaw. Many thanks to him for putting so much time and energy into this. It took 6 years from first submission to get it in. Here are the slides from his 2017 DConf talk about the work that went into making this happen: http://dconf.org/2017/talks/buclaw.pdf
I'm so, so happy that after close to a decade of tireless work D is finally in gcc. I don't have much to add except congratulations to Iain for his mostly solo work on this. Having D "standardised" in this fashion within the GNU project can do nothing but benefit both D and GNU.
Slowly but steadily. Given the fact that D has not managed to die when it had so many chances to, I'd say that D has a better chance at mass adoption as "a better C++" than the current wave of Swift/Rust/Go that rely on "life support" from a corporate donor.
It's a bit of a stretch to cast corporate funding as a negative, especially when the directions of Swift/Rust/Go are all mostly set by community consensus rather than by the corporate middlemen signing the paychecks (see the Swift community's rejection of SE-0110, the Rust community's rejection of struct inheritance, and the Go community's rejection of aliases). Given the opportunity, I'm sure that D would delight in having a corporate sponsor; financial security of D development and infrastructure was a large part of the impetus for the D Language Foundation, after all. Casting sponsorship as "life support" when all three of these languages have thriving communities simply comes across as petty.
It's much more anti-fragile to have corporations who sponsor projects (as do we) than one or two key official sponsors. And it suits D's breadth too, because there's no single company that uses D for everything that's possible to do with it. It makes sense for Google to sponsor Go. Who is supposed to sponsor a language whose versatility encompasses C, C++ and Python, and that runs on the whole gamut of platforms? A single sponsor using D on Linux might direct energy away from Windows and embedded...
Mozilla is not a normal corporation though. I donate, and I suspect many others do, as well. So maybe rust could be seen to have a lot of tiny sponsors rather than one big one?
D also has a foundation model, which could (I don't know anything about their financials) provide similar support to a company, but spread among companies instead.
I don't think it's meaningful to put Swift, Rust and Go in a single bucket when comparing them. They're languages targeting different levels of abstraction, with very different approaches as a result.
Some people refer to all of these as "better C++", but it's meant in very different and incomparable ways. For example, Rust is ostensibly a "better C++" in the sense that it retains the low-level, zero-overhead (no GC) etc nature and powerful metaprogramming facilities. Go is ostensibly a "better C++" because it promotes higher-level, safer abstractions that are still "fast enough" (but not zero-overhead). Ditto Swift.
D is arguably in the same bucket as Swift, and partly as Go. I don't think it's very useful to compare and contrast it against Rust. And I think that of all of these, Rust is the only one that can truly claim to be a "better C++" in a meaningful way. Others are "something better than C++ for most apps you'd write in C++ today".
> D is arguably in the same bucket as Swift, and partly as Go. I don't think it's very useful to compare and contrast it against Rust.
It is easy to claim how things "arguably" are, but without providing any justification that is not a very meaningful statement to make.
D very much matches your definition of "low-level, zero-overhead (no GC) etc nature and powerful metaprogramming facilities" – the GC can be avoided easily enough. It can certainly claim to be a "better C++" in this sense. For instance, Weka.IO (a storage startup founded on D) heavily relies on D to offer exactly that, in order to implement a distributed file system with sub-100µs latency.
The fact that you can also write Python-esque code during prototyping (playing fast and loose with the GC, etc.) doesn't detract from the core identity of the language as a tool for systems programming with zero-cost abstractions.
It seems like use-after-free is still a potential issue, even with those guard rails.
Rust deals with those through named lifetimes -- would those ideas be useful in D as well?
I don't think a "better Python" can be statically typed - dynamic typing, and all the associated runtime tricks that you can do with it, is kinda part of what makes Python distinctive (whether for good or bad, opinions differ there).
This is consumer-side duck typing. Python also provides producer-side duck typing - you can make any of your objects a duck dynamically, by making them behave like a duck at runtime. I'm not aware of anything similar in Go wrt interfaces.
Rust also shines when it comes to high-level abstractions. This is what attracts me to the language: as in C++, it covers the super low-level stuff, but it is also powerful for business-logic abstractions.
Go has far more in common with Java than it does with python. Arcane APIs that are incredibly difficult to use correctly while having a very simple base language.
I found the Go APIs I have worked with far simpler and more idiomatic than any Java libraries I have worked with (which is admittedly not a lot).
It is, to a degree, a matter of taste, of course, but Go APIs tend to be rather simple, whereas Java libraries often seem to have FileOpenDialogStyleTemplateProcessorFactoryFactory-type interfaces.
The major problem I have with Go currently is lack of function overloading, which means you either have to convert your data around a lot or have many similar functions for different data types that do the same thing (basically), but have different names.
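For contrast, here's what that looks like with D's overloading (a minimal sketch; the describe function is made up for illustration):

    // one name, overloaded on the parameter type -- no need for
    // describeInt/describeString variants as in Go
    string describe(int x)    { return "an int"; }
    string describe(string s) { return "a string"; }

    void main()
    {
        assert(describe(1) == "an int");
        assert(describe("hi") == "a string");
    }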
I think the comparison with Python is quite fitting, because both languages encourage a certain way of thinking about and implementing things. For Python it's called pythonic; in Go it's called idiomatic.
No, as Go has a GC. It's just supposed to be a fast server-side language without very complicated code. Go is mostly successful due to Google and due to filling a niche (fast, simple, concurrent, all thrown into a fat statically linked binary, I think).
> Others are "something better than C++ for most apps you'd write in C++ today".
As a big C++ fan and language geek, I am pretty confident that if Java and .NET had taken the route of all the other alternatives in the 90's (Oberon, Eiffel, Modula-3, Delphi, ...), C and C++ would be less relevant today than they turned out to be.
This is because many people use them simply because they are the only languages they know of whose compilers produce AOT executables, with the option of statically linking them.
Maybe even Longhorn would have been a success, instead of having to wait for UWP with .NET Native.
Indeed. With C# especially it is rather ironic, because the timeline of its development closely overlaps with D - I remember learning the newly released C# 1.0 as I was experimenting with those early 0.x versions of D.
I agree with your points. I do like to bring Nim up, as it uses Python-like syntax with types and then transpiles to C. You get a fast and small native binary and can choose from two different GCs or no GC. It has a small but very active community.
> D has not managed to die when it had so many chances to
One interesting thing about D is that we don't have to answer to anybody, so nobody can kill D other than us. We just keep steadily pushing forward regardless.
> the current wave of Swift/Rust/Go that rely on "life support" from a corporate donor.
Most Rust contributors are not Mozilla employees. Losing people to work on Rust full-time would be a setback, but we're not really reliant on it; we'd evolve more slowly.
I don't think that's true. Despite their corporate life support and relatively young ages, Go, Rust and Swift are all in the top 25 most active languages on Github (https://gist.github.com/alysonla/e14c01ec7a0d2823e7317f7b58b...) and D is not (and never has been).
The masses have had a long time to adopt D and I wouldn't get my hopes up simply because it's not dead yet.
The DCompute kernel language (which for the most part is plain D; the compiler is completely reused) has the usual set of restrictions that any kernel language has:
no exceptions, recursion, runtime (which is what betterC is about), or function pointers.
It still has all the features that make D great: sane templates (which will work across both host and device, to a degree), ranges, CTFE (no need to precompile your lookup tables) and so on.
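To illustrate the CTFE point, a minimal sketch (the table here is hypothetical, but the mechanism is exactly this):

    // an ordinary function...
    int[256] makeSquares()
    {
        int[256] table;
        foreach (i; 0 .. 256)
            table[i] = i * i;
        return table;
    }

    // ...run entirely at compile time: the table is computed by the compiler
    // and baked into the binary, with no offline precomputation step
    immutable int[256] squares = makeSquares();
    static assert(squares[12] == 144);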
Garbage in, garbage out. Github's language detection isn't perfect, and people in the D community simply aren't particularly focused on marketing and popularity contests. See here for example:
If I'm not mistaken, Rust is being sold as a better C, not C++. Go is being marketed as a higher level language than C and Rust focused on developing network-aware concurrent applications to fill specific server needs. I may be wrong, but I'm not really sure that they target C++ or aim to replace it, unlike D.
Perhaps that's the reason why both Rust and Go managed to gain so much traction and goodwill, because they actually offer entirely different takes on programming. D, on the other hand, tried to fix problems that existed only due to the lack of work developing C++ beyond the 98 standard. Once the C++11 standardization process took off, D was pushed away into irrelevance.
I emphatically disagree that C++11 has made D irrelevant. With C++11, I still have to think about fifty possible ways to do something, old and new, and all of the possible different implications and risks and segfaults I could get with each. I don't have the luxury of ignoring all C++ written before 2011, nor can I ignore all of the design decisions before 2011 that resulted in the current status of C++ features.
After working with C++11 (or C++14 or C++17), D is such a breath of fresh air. Everything is so... easy. I can do what I want, and I don't have to worry about doing just the wrong kind of thing to get a segfault or UB. I might get stack traces at runtime if I make a mistake, or even better, compiler errors because my types didn't check correctly or my compile-time evaluations had errors, but the language is much simpler and cleaner because it's been able to throw out the baggage of C++. In fact, it's thrown it out twice, once for D1 and again for D2, what is now simply called "D".
And being able to actually throw out cruft without concerns for backwards compatibility is why D isn't irrelevant.
> I'm not really sure that they target C++ or aim to replace it, unlike D.
The evolution of both Go's and Rust's strategies as "replacement" languages is actually pretty interesting.
While Go's official marketing no longer calls it a systems language, the use of that phrase when Go was originally released does indicate that they envisioned Go to be a "better C" in a sense (perhaps specifically in the sense of being used for userspace "system" utilities in the same vein as Rob Pike's C-like Alef on Plan 9). At the same time, much of the public rationale given for Go was obviously to avoid the pitfalls of using C++ at Google-like scales (e.g. an aversion to junior-dev footguns and a fanatical focus on fast compilation), which implies that they saw some potential for replacing C++, though they later acknowledged that very little of Go's growth appeared to be coming from C++ programmers (see https://commandcenter.blogspot.com/2012/06/less-is-exponenti... : "Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.").
Meanwhile, given that Mozilla intended to write a browser engine in Rust, old versions of Rust absolutely intended to replace C++. Like D and Go, ancient pre-0.1 Rust was willing to impose a runtime by default in order to guarantee memory safety (originally intending for both green threads and a garbage collector to be baked into the language), until years of experimentation proved that its static checks were capable of providing memory safety without a runtime, which is the pivotal moment in Rust history (actually a series of pivotal moments, but let me be romantic). Nowadays, rather than market itself as a C++ replacement exclusively, Rust tends to position itself outside of the traditional spectrum of languages as simply a zero-overhead memory-safe systems language. While it's still true Rust competes with C++, is inspired by C++, and can be used to replace C++ (e.g. its usage at Dropbox and in Firefox), Rust also intends to compete directly with C, e.g. for system utilities (e.g. ripgrep), reusable low-level libraries (e.g. librsvg), and extending high-level languages (e.g. Helix).
> While Go's official marketing no longer calls it a systems language, the use of that phrase when Go was originally released does indicate that that they envisioned Go to be a "better C" in a sense
Rob Pike uses the phrase within the first 5 minutes or so of the very first presentation video announcing Go, and explicitly mentions that they mean "systems" in the sense of webservers and the like. Since then, Go being a systems language has endlessly (and often maliciously) been misrepresented, so under these circumstances, it's only understandable why this phrase was dropped, even though it was absolutely appropriate since its first use gave plenty of context.
I don't think web servers clarifies much, though. The most important web servers (Apache, nginx, IIS, etc.) are all written in C or C++, and they try to squeeze out every bit of performance they can. I guess we're talking about applications communicating over HTTP, but that's not my first thought when I hear someone talking about web servers without any more context.
(I'm aware of Caddy, but my understanding is that it's aiming for ease of use over performance.)
I would like to contact Go's "official marketing" department and tell them they have done a horrible job at marketing, because most of the lauding for go comes from developers I know that love getting work done with it!
In any case, I will +1 you on coming from Python and Ruby, and add that another major bucket Go developers come from is Node.js.
Indeed, but as I say in my comment, Go originally foresaw itself replacing C++ as well. The idea that one needed to add a runtime in order to "replace" C++ stems back to the original (and massively successful!) C++ replacement: Java.
> Once the C++11 standardization process took off, D was pushed away into irrelevance.
No doubt about it, many existing C++ users feel this way. I've read the same comment many times. Most programmers, however, are not C++ programmers, and they are not looking for a better C++. They are looking for the best programming language for their needs (which means in most cases they won't go anywhere near C++). Your comment about D being irrelevant doesn't make much sense.
And that's precisely why D fails. D was marketed as nothing more than C++'s successor, right down to the naming choice. No one is looking for a better C++, therefore no one bothers with D.
Meanwhile, those who have to work with C++ keep their eyes on the C++ standardization process. Since D's inception, the C++ standardization committee has already produced three standards which, oddly enough, do include stuff that is sought after by the C++ community.
D was marketed as 'Mars', but for some reason people insisted on calling it 'D' instead, and Walter relented and changed the name of the language.
It's great that C++ is catching up. It's almost got static if, except not quite, because C++'s version introduces a new scope. Maybe next time...
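To illustrate the scope point, a minimal sketch of D's static if (the Storage struct is made up for illustration):

    struct Storage(int bits)
    {
        // static if introduces no new scope, so the alias declared inside
        // remains visible afterwards -- which is what makes it useful
        static if (bits <= 32)
            alias T = int;
        else
            alias T = long;

        T value; // T deliberately "leaks" out of the static if
    }

    static assert(is(Storage!16.T == int));
    static assert(is(Storage!64.T == long));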
The C++ programmers I know are attached to native code, but not exactly content with C++. Ethan Watson of Remedy Games (Quantum Break) said "we are an industry looking for salvation".
> If I'm not mistaken, Rust is being sold as a better C, not C++.
We generally try to market Rust as a good language on its own merits rather than a "better X." That said, there are undoubtedly comparisons to make, but the latest iteration of our marketing is "confident, productive systems programming."
D may be a sort of "SpaceX" of languages, designed to improve the competition up to a point it's no longer needed, more or less like Scala did with Java. But even that is a huge success, maybe better than finishing C++ off.
It is usually the only way some programmers are forced to adopt new programming languages, in spite of their religious beliefs regarding how programming should be.
A notorious example: game developers moving from Assembly to C/Pascal, and later from C to C++.
Or systems programmers being forced to move to C++ instead of staying with C (e.g. IOKit and UMDF).
Most people I know have not heard of D, but have definitely heard of Go, Swift and (less often, but still frequently) Rust. If in 16 years D has not achieved mass adoption, what is different now, when there are more competitors that offer better features?
It's a big world out there. Would you ever imagine that D would be taking market share from... Extended Pascal? But there's a naval architect who designs great big ships with a 500k sloc codebase he is exploring porting to D. Web guys get the attention but enterprise users are a much bigger world than just that.
If something is growing very quickly, then saying it hasn't yet dethroned C, so it won't ever be significant, seems to me to be a bit of brittle thinking. Compound growth and the passage of time - that's what has been underway for some time now.
I did not compare D to C, but rather to Rust, Swift, and Go. Compared to D, Go already has huge marketshare. Rust, similarly, excels in spaces that D does not endeavor to work in (no-managed-runtime settings), and is beginning to see adoption in areas where performance is critical.
From what I have seen of D, it provides a C++ without some of the cruft, but without trying to solve other issues with C++-like languages (e.g. the usefulness of algebraic data types). Why should someone choose D today over Go or Rust for any project? I have yet to see a convincing answer.
> Why should someone choose D today over Go or Rust for any project?
D over Go: D has much better language mechanisms for abstraction and programming in the large.
Go over D: Go has an incremental, mature GC.
Rust and D do not quite target the same application domains (though there's overlap), so it's difficult to compare them. Insofar as they do (D with @nogc and @safe), the tradeoffs become rather complicated.
Silly me, I was under the impression that gdc was already part of gcc...
Well, so at least now I am no longer mistaken. ;-)
I have tried to learn D repeatedly over the last couple of years, but I was usually scared off by how complex this language is. Even so, the syntax is far cleaner than C++'s[1].
Also, the last time I gave it a try, something finally clicked. I am not all there yet, but I am beginning to like it. The community is very friendly and helpful. Being able to ask stupid questions without being shouted at makes learning a language a lot easier.
[1] To be fair, C++ has carried the baggage of backwards compatibility around since its birth, while D did not and could learn from what C++ got right and wrong.
From a language-user point of view, trigraphs are an arcane but irrelevant feature, insofar as ignoring them or even being unaware of them won't cause any harm. [I only happen to know what a trigraph is because I read a lot and have a brain that randomly holds on to pieces of information with an iron grip.]
Never having written a compiler, I can only imagine how tedious supporting them must have been; and I would be willing to bet that trigraphs weren't even used much, because, among other reasons, hardly anyone knew of them, so it must have been really frustrating for people to support them. An exercise in futility.
I have read many posts saying D was better than C++ as a language but there were no libraries for use case X. Now this may help popularise the language.
One of D's goals is to be ABI compatible with C and C++. ABI compatibility with C is 100%, as I understand it; C++ is a work in progress.
What is ABI compatibility? In a nutshell, it lets you use libraries from another programming language. In D, all you need is a D file that describes the C/C++ library as a wrapper, so the D compiler knows what you are using.
This means D will build upon everything that already exists for C/C++. You can intermix C/C++ and D in the same project if you so choose to. Which is really a big deal because you don't necessarily have to stop and rewrite years of code you might already have to start using D.
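A minimal sketch of what such a wrapper file contains, using a real libc function so there's nothing extra to build:

    // the "D file describing the C library" is just declarations: extern(C)
    // tells the compiler to use C name mangling and the C calling convention
    extern(C) int abs(int x);

    void main()
    {
        assert(abs(-42) == 42); // links directly against the C library's abs
    }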
Syntax-wise, D is akin to C#/Java, so if you know C# or Java, jumping over to D is just a matter of learning the language-specific gotchas and core libraries. So the learning curve is quite low if you know C/C++, C#, or Java.
How would that even work? Modern C++ libraries are largely templates, templates which must be instantiated at compile-time i.e. you need a full C++ compiler to use C++ libraries directly.
Said compiler must somehow work together with the D compiler here, which should lead to god-awful memory use, compilation time, and error messages.
Most useful C++ code is either not templates, or it's templates which can be instantiated a finite number of times and then used like non-templated code. Most "header-only"/header-heavy C++ libraries which present the problem you mention solve problems like making C++'s range-based for loop work like "for x in xs" does in Python, but not like "for x, y in zip(xs, ys)" or "for i, x in enumerate(xs)" do, etc.; you don't really need these outside C++ (whether you need them inside C++ is another question, but it's irrelevant here).
> templates which can be instantiated a finite number of times and then used like non-templated code
So you have a C++ file that uses vector<int>, vector<struct c_struct>, vector<CppClass>, vector<DClass>, etc. once and then you can link to them from D code.
The only case where that doesn't work is when you have lots of one-off instantiations, where requiring an instantiation in C++ code for each use in D would get annoying quickly.
Does the D compiler know how to mangle/demangle C++ names?
Do you have to write some kind of D header file for foreign C++ functions? Otherwise, how would D even know about the existence of e.g. vector<int>::push?
You should probably read up on ABI compatibility, and on compiling vs. linking as well.
Let's be clear: a compiler builds object files that have to be linked to create an executable binary. I.e., the compiler makes .o files; the linker takes the object files output by the compiler and puts them (.o, .so/.dll, .a) together to make executables.
This all works because you are linking to the C/C++ library binaries (the .o, .so, .a, .dll files) generated by C/C++ compilation and linking. In most cases this is a shared object library/DLL provided by the operating system or a pre-built library you install on the system. There is no need to compile it because it is pre-built. If you do compile the library yourself, it is a one-time thing.
In D you create a .d file that describes what is in a C++ library's .o/.so/.a/.dll file, so that upon linking, you take your D code's .o files + the C/C++ .o files to put together a full executable binary.
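A minimal sketch of what that .d declaration file can look like for a C++ free function (the add function here is hypothetical):

    // C++ side, compiled separately into its own .o:
    //     int add(int a, int b) { return a + b; }

    // D side: extern(C++) makes the D compiler emit the C++-mangled symbol
    // name, so the linker can match it against the C++ object file
    extern(C++) int add(int a, int b);

    void main()
    {
        assert(add(2, 3) == 5);
    }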
That's not how C++ templates work. You cannot deliver them as binaries.
Templates are delivered as C++ include files (i.e. source code). When you use them, the compiler fills in the types you want and compiles the result.
For example, there's the std::vector<> template. You can fill in any type that meets the specific requirements of the template and use it like std::vector<int> or std::vector<YourC++ClassType>.
However, you probably won't be able to do std::vector<YourDClassType>. ABI compatibility is not enough to do this as you would be effectively mixing C++ and D source code here.
> However, you probably won't be able to do std::vector<YourDClassType>. ABI compatibility is not enough to do this as you would be effectively mixing C++ and D source code here.
I think this should be working. You don't have to 'mix source code'; you can implement YourDClassType in D and use it in C++. You simply have to write a C++ header describing the class, then write a D file describing the external C++ std::vector<YourDClassType> instantiation and link everything together.
Not sure if anybody actively tested this though ;-)
As mentioned before, C++ compatibility is not 100% yet. My comments are purely about what works now, which is describing to D what is in a C/C++ .o file so you can link it in. In your .d file that describes the C/C++ you want to use, sometimes you will have to describe the template to D by creating a D template mirroring the C++ template.
Also, if your "library" is all templates, it's not a library, it's a framework that uses other libraries. There still has to be, at some point, tangible code that is compiled to an object file to be linked. You will have to find all of the dependencies and make sure that they are also described in a .d file.
This might mean you have a bunch of work to do in order to use the C++ code you want to bring into D. But then again, if it's a public/open source library, and you go down that rabbit hole and make it work, you've just made D even better.
I think you misunderstood your parent. A lot of modern C++ 'libraries' are header-only, because they heavily use templates. There is no object code or library to link against until you instantiate templates.
That's not really relevant to their question. Many templates are implemented via inlining/instantiation in client code by default, and as such they won't be present in the output of the C++ compiler.
> D understands how C++ function names are "mangled" and the correct C++ function call/return sequence.
How does it work in practice? C++ name mangling is not standardized, so every compiler can implement its own scheme. Does that mean there is a list of D-compatible C++ compilers somewhere? I have the feeling that if you are using some exotic proprietary C++ compiler it won't work well.
I assume the idea is to be able to reuse libraries like Box2d etc where it's mostly concrete code, and that other libraries where it's mostly templates kind of just have to be rewritten.
Anything that you can do with C++ templates could be done with D templates + CTFE, and it would be far easier to read and understand than C++ templates.
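As a small illustration of the CTFE half of that claim: where C++98-style metaprogramming needed recursive templates, D can just evaluate an ordinary function at compile time (hypothetical example):

    // an ordinary function...
    ulong factorial(uint n)
    {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    // ...forced to run at compile time by using it in a static context
    enum fact10 = factorial(10);
    static assert(fact10 == 3_628_800);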
Just more pragmatic, I'd say. D isn't a "big agenda language" (to steal a line from Jonathan Blow). It's extremely multi-paradigm (some might argue to a fault). I think that's why you find people saying D is like C++ or D is like C# or D is like Go or D is like Rust. You get a little taste of everything using D.
Want functional programming and purity? Check.
Want C style/low abstraction code? Check.
Want extreme C++ metaprogramming? Check.
Want C#'s LINQ? Check.
Want an improved version of C++'s STL? Check.
Want low cognitive load memory management through a GC? Check.
Want highly tailored memory management? Check.
Want high level object oriented abstractions? Check.
Want memory safety? Check.
Want systems programming? Check.
Want rapid prototyping? Check.
D, fundamentally, assumes the programmer knows what approach they should take and lets them do it. There are no "we know better" design decisions in the language. I think this might be because D is so community driven. With no real company backing D was left in the hands of enthusiasts coming from all sorts of different backgrounds to implement ideas they liked.
> I think that's why you find people saying D is like C++ or D is like C# or D is like Go or D is like Rust. You get a little taste of everything using D.
That's a major problem, and not a feature.
One of C++'s main drawbacks is its size and its arcane features, to the point that the language is known for being impossible to master. If all D brings to the table is an agenda to pick up C++'s complexity and drive it up even further, then I fail to see what problems that will solve while it creates many others.
But it didn't drive up the complexity. It drastically simplified how a lot of features work. C++ is very difficult to master not because of the number of features in the language (it really isn't even all that featureful compared to other modern languages) but because of the thousands of unexpected details you have to know. Scott Meyers made a career out of explaining them (and implored D not to make the same mistake of needing someone like him). That doesn't mean an overarching feature can't be implemented in a simple way that avoids the unintended complexity, though. Anybody you ask with knowledge of both D and C++ would say that D's metaprogramming facilities are both drastically easier and more powerful than what C++ offers, for instance. It's actually shocking how much you need to know to fully understand things like template/regular type deduction (which aren't the same), initialization, rvalue behavior, forwarding references (or is it universal references... they came up with the feature before they gave it a name), what is constexpr-able, reference collapsing, etc. These things are all straightforward in D because they were either designed without the edge cases and legacy behavior or left out entirely because the problem was tackled in a different, simpler way at a fundamental level.
And it's built in at the compiler level, so it happens when you compile your program as you normally would; the tests show up while compiling, and the release output does not contain the test code. Always impressed me.
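Presumably that's referring to D's built-in unittest blocks; a minimal sketch:

    int triple(int x) { return 3 * x; }

    unittest
    {
        // included only when compiling with -unittest (and run before main);
        // regular builds leave this block out of the binary entirely
        assert(triple(2) == 6);
    }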
The combination of ranges and UFCS led to it just naturally falling out of the language design. It looks like this (adapted from a LINQ example[1]):
    import std.algorithm, std.array, std.stdio, std.uni;

    auto names = [ "Burke", "Connor", "Frank", "Everett", "Albert", "George", "Harris", "David" ];

    names.filter!(a => a.length == 5)
         .array // convert from lazy range to array so we can sort
         .sort!()
         .map!(a => a.asUpperCase)
         .joiner("\n")
         .writeln;
Ranges enable lazy processing with efficient static dispatch against arbitrary types of ranges. Those individual algorithm functions are basically all template functions that return types tailored to match the input which allows the lazy evaluation to work. When writeln asks for the first element to print it asks joiner which asks map which asks asUpperCase and so on. The results are calculated upon request, not in advance, which helps you forgo a lot of memory allocations for storing temporary results.
UFCS lets you call a function as if it were a member of the first parameter (i.e. fun(x, y) -> x.fun(y)). This lets you write it as a chain rather than as a series of inside-out function calls (i.e. `writeln(joiner(map!(a => asUpperCase(a))(sort!()(array(filter!(a => a.length == 5)(names)))), "\n"))`).
There are exactly two memory allocations in all of that: once for the initial array and again prior to sorting, because it's not reasonable to sort a lazy range. We could have reused the initial array by eagerly removing the filtered-out items from it if we wanted.
In my experience, D is like a more powerful and feature-packed C#. It doesn't impose any limitations on you or your coding style/paradigm, and it strives to be "the one to rule them all" tool which has everything and can be used for everything from scripting to systems programming (once the stdlib becomes more @nogc friendly).
And yes, templates are so much better in D than in C++ or in C# :)
How does @nogc work in D? Is it easy to keep track of what needs freeing and what does not, or is it easy to mix things up and get hard bugs? Also, what do these bugs look like? Is use-after-free possible, and what is the failure mode in that case? Is it possible to call free on an object after it's been garbage collected?
@nogc just causes the compiler to error out when a GC allocation occurs in the region marked @nogc and its call graph. You are then expected to manage memory yourself. You could rename it @c_or_cpp_style_memory_management_only. You can malloc/free (scope(exit) is useful here), use smart pointers, alloca, static arrays, use Andrei's allocators[1], or whatever else you'd use in C or C++.
The answer to all of your questions is basically the same as they are in C and C++. D does have @safe though which prevents unsafe memory operations and Walter is in the process of ratcheting up the memory safety with DIP1000[2].
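A minimal sketch of what that looks like in practice (malloc/free from core.stdc plus the scope(exit) idiom mentioned above):

    import core.stdc.stdlib : malloc, free;

    @nogc void work()
    {
        // auto a = new int[3]; // compile error in @nogc code: 'new' allocates with the GC
        auto p = cast(int*) malloc(3 * int.sizeof);
        if (p is null) return;
        scope(exit) free(p); // pairs the free with the malloc, even on early returns
        p[0] = 1;
        p[1] = 2;
        p[2] = 3;
    }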
If you put @safe: at the top of your code, it'll be memory safe (excluding issues, which we plug 'em when we find 'em). I wouldn't say that makes it fundamentally different from Rust. What is fundamentally different from Rust is the approach D uses to implement memory safety.
It could eventually be an option I guess, but it would still require recompiling all the code anyway.
As always in such cases, to validate binary libraries, they need some kind of metadata to indicate they are safe libraries (i.e. they only make use of @safe or @trusted code).
.NET does this with MSIL metadata, Modula-3 does it directly on the module definition section, for example.
Well, if you actually use the GC and array range checks (also enabled by default), it gives you memory safety in the vast majority of cases. Though, indeed, the language does let you break that by default, it isn't something you are likely to do accidentally.
GC + range checks provide memory safety to most programs without the kind of extra work you need in Rust. This is a big reason why they are so common in industry.
Isn't borrow-checking a sort of compile-time reference counting? Not what people usually think of when they say "reference counting", but I wonder if it's a good way to think of borrow-checking.
Not really, it doesn't count references in the same way. The borrow checker maintains a set of rules that are more expansive than reference counting. For instance the rule that you may only have one &mut at a time, and no other references as long as &mut is alive. It also has something like linear types with ownership rules, where owned values can be used only once.
Some people make this analogy, but it has so many caveats, and is so far away from what people think about as RC, and has very serious and significant differences, that I don't think it's a useful analogy, personally. I even might go so far as to say "actively harmful." Not totally sure though.
One of the biggest differences in practice between reference counting and GC (or "other forms of GC" if you consider RC a type of GC) is that nodes involved in a cycle will never be freed.
I like Rust, but I hate with a passion the borrow system. So I was looking for an unsafe-by-default Rust and I found D, thanks to the suggestion of a kind HN user. I must admit that I love it!
To me, D's compile-time features like templates, code generation, etc. are some of its most important features. Lacking those and exceptions and many other useful bits, Go cannot compare to D. (I finally can claim experience on both languages after having coded Go for about a month.)
How do you figure? D is C++ with lessons learned. I mean, it's right there in the name. I find very few similarities between Go and D. Also, D is much older than Go so if anything Go would be D if D decided to not be a "systems" programming language (but it's plainly obvious that isn't the case).
Go is still a systems programming language, in the original sense of how Rob Pike explained it in the introductory presentation video about Go from 2009.
I think D is more evolutionary than Rust. Rust tries to radically change the way programmers reason about their code. D is much more conventional in that regard, but accumulates all the power features from other languages on top of a "C++ fast" core and with a syntax that's familiar to someone coming from C++ (or Java and C#, for that matter).
If you have to share some data across your program and can't determine the lifetime at compilation time, you can't simply rely on the Rust compiler for memory management.
The next option is to use reference counting by wrapping your data in Rc/Arc (depending on whether you need atomicity or not).
But that can still leak memory if you have cyclic data structures and can't break the cycle with Weak pointers.
At this point, what you need is a garbage collector.
IIRC, Rust and D have different target uses. Rust is for safe systems programming, so that you don't get the UB you get in C, while D is a better C++. Also, Rust is new compared to D.
From what I've read (and I may be off), it does not seem that D provides many of the features that make Rust useful: algebraic data types, typeclasses, memory safety without a managed runtime.
I remember trying out D a few years ago. One of the things that threw me off was the bare-bones compiler. I think I was using the reference compiler at the time. The language itself I thought was pretty cool, slices are neat.
I could really see D take off now that it's getting gcc support.
These days you'll probably want to be using dub, no matter which compiler you're using. It makes managing dependencies a lot easier, and makes compiling a simple `dub build` for projects of any complexity.
The frontend has been Free Software since the beginning. The reference backend was only Open Source (not freely redistributable).
So, alternative backends were plugged into the frontend: LLVM and GCC. They all still share the same frontend, although the LLVM and GCC versions slightly lag behind the reference (LLVM is nearly in sync).
This announcement is official recognition after Iain and helpers did the grunt work consistently for years now.
Pedantically, you are correct. But making the whole thing Free Software has cleared up a lot of confusion about this issue, and has undoubtedly helped. Perception matters.
I think they meant lower case "open source" (as in the source is open (anyone can read/compile it)). There are plenty of software that are open source but not freely distributable, such as Unreal Engine.
"open source", no matter the capitalization, means what the OSI has defined. Unreal Engine is neither Open Source nor open source. It is simply proprietary. To say otherwise is a practice known as "open washing" that companies use to appear community friendly when in fact they are not.
This. While OSI may not have been successful in getting an official trademark on the phrase "Open Source", in practice almost everybody treats "Open Source" or "open source" as meaning what the Open Source Definition states. Other "source available" approaches are more correctly termed "shared source" or something.
Since when does "open-source" mean "community-friendly"? Why insist on using words that don't say what you want them to mean, when you can use ones that do?
The first time I said that on HN somewhere else, I got quite a lot of flak. I don't know when OSI's definition became mainstream. The first time I heard "open source", it meant the same as what you said. Nowadays you have to use a different term for that.
Collins still has the old definition, "free to use or modify". Merriam-Webster and Oxford have since included "redistribution" in the definition.
The term "shared source" is not in any dictionary though.
> I don't know when OSI's definition became mainstream. The first time I heard open source, it meant the same as what you said. Nowadays you have to use a different term for that.
The notion that the phrase "open source" started out with broader definition but was later revised by OSI is itself revisionism.
OSI's definition is important because the phrase "open source" originates with the folks behind OSI—the term did not exist before 1998 when they created it. The phrase resulted from a public awareness effort leading up to the release of the Mozilla code on March 31, because the only alternative that had legs at the time was the FSF's term "free software", which had marketability problems. A few months later, a bunch of those involved in the brainstorming session where "open source" was coined went on to start the OSI as an advocacy group primarily concerned with:
a) marketing the term "open source" to get the public to adopt it en masse
b) advocating for the adoption of open source ideals themselves
To sum up, there has never been a time when "open source" meant something besides how the OSI defined it, except for sloppy usage in instances of someone co-opting the OSI's term but not their definition for it.
(Disagree with whether it's a good definition or not, but telling history and facts wrong is something to be avoided.)
"Shared source" seems to originate from Microsoft, from the time period before it embraced OSS proper. At least the first time I've seen it in any context was the "shared source CLI", which was released under the look-but-do-not-touch license as a reference implementation to back the ECMA CLI and C# standards.
That said, it's a surprisingly good term, because it's so accurate - the source is shared with you, but there isn't much you can do with it other than look at it.
You can do a lot more than look at it. You can modify it, compile it, share your modifications, and share your binaries with your game.
The one major thing you can't do is redistribute the base source code. That's a big thing, and disqualifies it from being open source, but let's not pretend there is no value in the source code access.
How does this work exactly from a licensing perspective? Will they merge the dmd sources into the gcc tree? Does this mean that the D compiler in gcc will now be GPL(v?) licensed? This is certainly possible since dmd is licensed permissively.
In general, it's possible to create a combined work made from parts with different licenses, provided that the licenses are compatible. So if you have part A licensed under, say, the MIT license, and part B licensed under the GPL, then the combined work falls under the GPL (due to the GPL's "virality", and since the GPL and MIT are compatible in the sense that the GPL's restrictions are a superset of MIT's). You can still later on extract part A and distribute that under the MIT license.
I don't know the details of the D compiler frontend that has now become part of GCC, but I'd imagine it's something like that. E.g. the GCC Go frontend is, AFAIK, BSD-licensed.
Although the recent DMD backend license change does not affect GDC, GDC is not a complete standalone implementation of a D compiler.
GDC (and LDC, the LLVM D compiler) use the frontend (lexer, parser, semantic analysis) provided by the DMD reference compiler. GDC and LDC then map the AST produced by the frontend to something GCC and LLVM understand.
The DMD frontend has always been Boost-licensed. So the licensing question is valid here, as we'll include Boost-licensed frontend code in GCC. But I don't really have an answer to the question ;-)
My guess is that that's also why: gcc wants to be assured there are sufficient maintainers. If a crew has been maintaining out-of-tree support with a good track record, that demonstrates maintainability.
Is this what happened to gcj, lack of maintainers? I always thought it was a shame support for it lapsed; a to-native Java compiler would be awfully nice to have.
Could be. I don't know the specifics. I do know that the bar is pretty high, in general, before gcc will commit. I think that simply goes with the territory with such a large project where a lot of people depend on it to just work.
For example, an acquaintance plays (at least used to) a large role in the AVR (8 bit) back-end. He went to a gcc summit to lobby for avr-gcc being upgraded to a support tier where functional (not performance) regressions would be release show-stoppers. His argument: Over a million downloads per month. It took that much to get the gcc steering committee to even consider it. gcc is a big project, they can't afford to make commitments lightly.
The out-of-tree codebase was extensively updated, with development being led by Iain Buclaw. Iain has given some presentations on all the changes and updates at the recent D language conference, DConf 2017. See https://www.youtube.com/watch?v=g-5T4zlc_bc
Likely none. Well, I guess that's kind of an oversimplification though so here's the detailed story:
We require some per-target integration to provide correct version(ARM), version(linux) statements in D code and we need to know the size of a mutex on the target system. Iain is currently rewriting this code here:
https://github.com/D-Programming-GDC/GDC/pull/500
However, we already provide this integration for many targets, and the new, rewritten code will continue to provide it. For example, all Debian targets should be working: https://packages.debian.org/experimental/gdc
The main problem with architecture support is porting the runtime library and Phobos. Here ARM is supported, MIPS and PPC have seen some work from LDC devs (though making them work fully on GDC might require some small changes), and MinGW is broken (and was never fully supported).
One caveat though is that all D compilers mainly target 32- or 64-bit systems. So things won't just work without changes for 8-bit targets, although we've seen proof-of-concept D code on 8-bit and 16-bit CPUs as well.
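For reference, the version() blocks mentioned above look like this on the user side (a minimal sketch):

    // which branch gets compiled is decided by the per-target integration
    version (ARM)
        enum arch = "arm";
    else version (X86_64)
        enum arch = "x86_64";
    else
        enum arch = "other";

    version (linux)
        pragma(msg, "building for Linux on " ~ arch);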
Glad to hear there's work in progress. Last time I tried to configure gcc + gdc on AIX on PowerPC, I found out it was explicitly disabled in the configure script, and I didn't dig any deeper to find out why or the potential work involved.
Back then we had a shell script hack that was called by the GCC configure script and parsed the target triplet to set these versions.
Fortunately that's long gone and we now simply add some D-specific code to the platform configuration files in gcc/config/. The shell script hack would never have passed GCC review ;-)
While I like D and use it for scientific work, two "omissions" leave a bitter taste: nothrow and pure are not part of the type signature of functions, and consequently the compiler has limited inference for these. However, I will keep using it unless something better comes up. C++/C# are not an option for me since I found D.
> nothrow and pure are not part of the type signature of functions
Actually they are. Take a look at this example:
    void main()
    {
        int delegate(int) @safe nothrow @nogc pure f;

        // This looks like something bad is happening, but the delegate literal
        // actually has the proper attributes inferred, so it Just Works
        f = delegate(int) { return 0; };

        int delegate(int) g;
        f = g; // Fails, as it should
    }
Is there a D IDE with good code completion and refactoring support? Last time I tried it, all that I've seen were pretty bad at it - handling the simple stuff fine, but breaking down on more complicated stuff, metaprogramming especially (kinda like most C++ IDEs did 8 years ago or so).
Code completion, yes (DCD provides this as a library). d-mode for Emacs is nice, and both of the VS Code plugins I know of work. There are plugins/extensions for (I think) Xcode and (I know) JetBrains's stuff, but I don't use those so I can't really comment.
Refactoring, no.
Edit: there is also a D-specific IDE called Coedit, which is pretty good AFAIK.
18 years is an ok commitment, especially given that there are other implementations available on those same platforms. This is likely not the case with 'D' for all the platforms that gcc targets.
It seemed like Facebook was interested in D for a while, but then it also seems they dropped it in favour of OCaml.
I think so, because I believe OCaml and D do compete, and it is clear Facebook stopped using D and now uses OCaml in several projects; they even created Reason.
I am learning rust now, but D seems quite promising.
Rust guides you more toward certain approaches. That makes rust easier to learn, but it makes it harder to integrate with existing projects. On paper, rust could work great in a lot of environments, but I'm finding that it takes a bit more work to integrate with a real codebase. It can be done, and I feel like that's well supported, but it takes some real work to port the concepts of a C API into a good and safe rust API.
D might make that easier because it's an "everything" language, so there is likely a corresponding D equivalent of almost any existing API. Of course, you won't get the same level of safety or other benefits, but it could be a smoother path.
Wow I literally just installed GDC today (as opposed to DMD which I've tried in the past) in order to support ARM as well as x86. Congrats on the inclusion!
Is it? Last time I checked, a few years ago, there were huge caveats to allocating objects on the stack (I forget what they were), so unless something's drastically changed this doesn't seem like an honest assessment.
The pitfalls aren't really any different than C++... either use some kind of smart pointer, or use caution not to escape references to stack memory (very easy mistake to make in D) or slice up the object by value (very difficult to do in D) and you'll be fine.
The built-in `scope` syntax is deprecated (though making a comeback recently as more stuff gets implemented around it), but you can also do a library type fairly easily, just slightly heavier in syntax (you need to prepare the memory and construct the object in two separate lines) or prep work (write a struct which does both).
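The library type in question is presumably std.typecons.scoped; a minimal sketch:

    import std.typecons : scoped;

    class Widget
    {
        int x;
        this(int x) { this.x = x; }
    }

    void main()
    {
        // constructed in place in a stack buffer; the destructor runs at scope
        // exit, and the usual caveat applies: don't let references escape
        auto w = scoped!Widget(42);
        assert(w.x == 42);
    }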
I just started looking into D, and I was wondering about its future as part of my evaluation. Now, I feel it will benefit from being part of the GCC 'canon', and gain more of a user-base, and contributors.
Maybe because it takes most people hundreds or thousands of hours to learn a natural language, but the average programmer can get a basic understanding of a new programming language over a weekend? Plus, there's arguably no advantage of one natural language over another, whereas different programming languages can make very different tradeoffs.
Humans started out with only one very limited language, then spread out to cover the world, and new languages flourished. As technology progressed and the world became smaller, having fewer languages (or at least, a few common languages) made more sense. With the advent of the internet, the world has become so small a single language might be the best option.
Programming started out with one very limited language (machine code), then spread out to cover the world, and new languages flourished. As technology progresses and the world becomes smaller, having fewer languages (or at least, a few common languages) will make more sense. With the advent of the singularity, the world will have become so small a single language might be the best option.