Facebook puts bounties on bugs in the D programming language implementation (dlang.org)
158 points by andralex on Nov 15, 2013 | 97 comments



The author of the post also had a Reddit AMA recently, related to Facebook's usage of D: http://www.reddit.com/r/IAmA/comments/1nl9at/i_am_a_member_o...

Among other things, he wrote the book "The D Programming Language".


I believe you're not very much into C++; I mean, Andrei is a real celebrity here [as well] ;)


On top of all this, he's one of the top D contributors. He has written a lot of D articles with Walter Bright (D's creator) and others.


Google supports Go. Mozilla supports Rust. Facebook should actively support D.


Looking back over the D forums, D implementers have been promising to support shared libraries since at least 2006, and as far as I can tell, none of the major D compilers support them yet. D is a very nice language, but without shared library support on Linux or Windows, its use to me as a low-level alternative to Python or Lua is limited.

Go is clearly committed to static-only compilation, and its developers have said as much. Rust is still too alpha to use for serious projects, and to be frank I find it carries much of the complexity that turns me off from C++ (I really wanted to like Rust).

Meanwhile, I've found that Nimrod happily handles shared libraries, meaning I can easily make nice extensions for Python and Lua. Nimrod is just as fast as D, and probably a bit faster than Go and Rust. The syntax is very nice for Pythonistas, and it has a very full standard library.

I've also noticed that Nimrod is getting some very good exposure here on HN, and people smarter than I am are starting to play around with it (in addition to the people smarter than I am who invented it and brought it to this level).

In the end, the only new low-level language without big company support may turn out to be the winner here.


> Rust is still too alpha to use for serious projects, and to be frank I find it carries much of the complexity that turns me off from C++ (I really wanted to like Rust).

Could you elaborate? I personally think that Rust carries no complexity that we didn't need to get the job done, and we're still simplifying the language (for example, removing conditions and glob imports), but I'm certainly open to hearing other ways that the language could be simplified.

> Nimrod is just as fast as D, and probably a bit faster than Go and Rust.

I think that for numerics all four languages will be roughly on par (although Go will be a bit slower if you're using 6g/8g, as they don't do much optimization), but for memory management the performance characteristics are quite different. Rust doesn't rely on garbage collection or reference counting for safe memory management, unlike all the others (although you can opt out in D, at the cost of safety). So I don't think you can really make an unqualified statement like that in all cases.


>Could you elaborate?

A major source of complexity, in my opinion, is the different pointers: owned pointers, managed pointers, borrowed pointers (mutable and immutable), each with its own syntax and rules for use. And all three seem to be everywhere. I actually understand pointers in C (mostly!), yet I struggle to remember all the use cases for pointers in Rust. Here's the tutorial on just borrowed pointers: http://static.rust-lang.org/doc/master/tutorial-borrowed-ptr... I'm sure I could learn the rules given some time, but now that I've played around with D and Nimrod, I'm wondering why I should. I can get excellent performance without using pointers, yet I can access a pointer if I absolutely need one.

>So I don't think you can really make an unqualified statement like that.

I agree that unqualified statements about the relative performances of language implementations are tenuous, and so I retract that statement. But it's clear that Nimrod can reach ballpark C++/Rust/Go speeds, even without GC specifically turned off, based on both my own usage and this benchmark:

1. http://togototo.wordpress.com/2013/08/23/benchmarks-round-tw...

Now, it's quite possible that we're not getting optimal performance out of Rust because we're not writing idiomatic Rust, but this hurdle is at least partly a function of my first point.

EDIT: I should add that I think Rust may well find itself in a very strong position among systems and embedded programmers, given the fine control and safety it offers. But for your average programmer just looking for a faster language to use when Python or Ruby won't suffice, Nimrod (and D with shared library support) is a very nice alternative.


FWIW, the rules about pointers are actually pretty simple. Once you've worked out what you're attempting to do with a given function, the following basically covers it:

- I need ownership (e.g. because I want to push the value onto a vector): take it either directly by value (i.e. `x: Type`), or (very rarely) via an ~ (i.e. `x: ~Type`).

- I don't need ownership, but wish to mutate: `x: &mut Type`

- I don't need ownership and don't need to mutate: `x: &Type` (the most common case)

That is, once you've understood which values need to be owned by what, it all just falls out from there. (Note that this "ownership" concept exists in other languages too, e.g. which function should free this pointer in C, or which function should close this file handle in any language; it's just not explicit. Rust forces the programmer to make the relationships explicit, which reduces mistakes.)
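
To make the three cases concrete, here's a minimal sketch in current Rust syntax (Box<T> is what the ~ owned pointer became; Widget and the function names are made up purely for illustration):

    struct Widget {
        label: String,
    }

    // Case 1: I need ownership (e.g. to push the value onto a vector).
    fn store(inventory: &mut Vec<Widget>, w: Widget) {
        inventory.push(w); // `w` is moved in; the caller can't use it anymore
    }

    // Case 2: I don't need ownership, but I want to mutate.
    fn relabel(w: &mut Widget, new_label: &str) {
        w.label = new_label.to_string();
    }

    // Case 3: I don't need ownership and don't need to mutate (most common).
    fn describe(w: &Widget) -> String {
        format!("widget: {}", w.label)
    }

    fn main() {
        let mut inventory = Vec::new();
        let mut w = Widget { label: "first".to_string() };
        relabel(&mut w, "renamed");
        println!("{}", describe(&w));
        store(&mut inventory, w); // ownership transferred here
    }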

In general the most common pointer is &, then &mut, and then, if you're writing a data-structure, ~, but otherwise ~ is very rare. (Dynamically sized vectors and strings are written as ~[T] and ~str respectively, and aren't included when I say "~ is very rare" since there's no way to put them on the stack.) I've ignored @, because it's (1) rare in idiomatic code, and (2) isn't implemented fully (it's currently behind a flag, to make it clear that it's not ready for prime-time). The "Boxes" and "Move semantics" parts[1] of the main tutorial are more modern, and may help.

TL;DR: the borrowed pointer tutorial overcomplicates things.

[1]: http://static.rust-lang.org/doc/master/tutorial.html#boxes


I'm not the downvoter (in fact, I upvoted you since you provided a non-emotional, useful, rational comment), but I disagree that those rules are pretty simple. They may be useful to know, but even the heuristic you provide above is fairly complicated for a beginner. But it's nice to know there are parallels in C.


Without knowing Rust, I think it would be nice if the sigils were replaced with keywords.

Eg.

    x: owned mut Type

    x: mut Type

    x: Type
Reasoning: My memory is not what it used to be. I hate having to constantly look stuff up to remind myself what this or that sigil means in this or that language.


I agree actually—I'm not a huge fan of sigils. The GC sigil (@) is being removed, bringing us down to ~ (pointers) and & (references).


> I'm sure I could learn the rules given some time, but now that I've played around with D and Nimrod, I'm wondering why I should. I can get excellent performance without using pointers, yet I can access a pointer if I absolutely need one.

Because they let you avoid garbage collection and data races while remaining safe.

If you're just starting to use each language, then having just one globally garbage-collected type will seem to make everything simpler. And benchmarks of numerics will show that hey, it doesn't seem to matter what your memory story is. But in my experience those who have struggled with garbage collection pauses or data races come to appreciate the ability to declare the ownership semantics at a fine-grained level. Large, production-quality, performance-critical software in Java, for example, very frequently starts running into issues with the garbage collector. And I don't think I need to go into the headaches of sorting out data races.

Last I looked, Nimrod in particular loses all memory safety once you share objects between threads. (I hear this is changing though; I'm interested to see what they come up with.) Go loses some memory safety around maps and slices if GOMAXPROCS>1. D remains memory safe in @safe code, but at the cost of concurrent, stop-the-world garbage collection. None of these languages offer support for avoiding data races at compile time. Those decisions have real costs (as well as benefits; they're all tradeoffs).

It's very easy to brush off features like safe manual memory management with "I don't need that feature in some other language, why would I need it here?" But nothing you've posted indicates any understanding of why Rust has smart pointers. If you think GC, data races, or safety are not worth solving at a language level (which, to be clear, is a position that reasonable people can take!) then argue that instead of handwaving around pointer type complexity. I don't mind if you like Go/Nimrod/D better than Rust; they're all fine languages and I have great respect for their creators. But be informed with your criticisms.
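
For a concrete flavour of the data-race point, here's a small made-up sketch in current Rust syntax: to mutate shared state from two threads you have to reach it through something like Arc<Mutex<..>>; handing both threads a plain &mut simply doesn't compile.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // The Vec is only reachable through the Arc<Mutex<..>>, so the
        // compiler can guarantee no unsynchronized access exists.
        let shared = Arc::new(Mutex::new(Vec::new()));

        let handles: Vec<_> = (0..2)
            .map(|id| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || {
                    shared.lock().unwrap().push(id);
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{:?}", shared.lock().unwrap());
    }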

> But it's clear that Nimrod can reach ballpark C++/Rust/Go speeds, even without GC specifically turned off, based on both my own usage and this benchmark:

This is a perfect example of how single benchmarks can be misleading. First of all, that benchmark was found to be mostly testing the performance of the random number generator, which is cryptographically secure by default in Rust and not in other languages. Second, that benchmark probably never even triggers the GC in any of the languages. It's totally numeric-bound. Try a benchmark of arena allocation, for example. Or compare max pause times.


>Because they let you avoid garbage collection and data races while remaining safe.

Yes, and as I wrote, I think there's a very significant place for Rust among embedded and systems programmers. But for someone who's just looking for a fast alternative to C, Nimrod has been quite nice.

> ...handwaving...handwave....

I was trying to give you the specifics you asked for, not handwave. I don't think I could have answered your question any clearer. And I don't think pointing out the complexity inherent in 3 different pointer types is handwaving. It's a very specific statement.

>...but be informed with your criticisms

You asked me to elaborate on my criticisms, and then you criticize my criticisms? Look, I understand that Rust is your baby, and apologies if I've treated it unfairly, but why ask why I found Rust complex if you're just going to discount it?

To be clear, I'm talking from the perspective of an intermediate-level programmer, with a couple years of experience, who taught himself to program in his 30s. I can certainly see the value of the design choices you've made for an expert programmer like yourself, with many years of experience, who can benefit from the kind of fine control Rust provides. But to someone like me, who's just looking for some caveman-level (but very significant) speedup from Python, and something I can call from Python, Nimrod has proven more than adequate and a whole lot easier to use than C/++.

>This is a perfect example of how single benchmarks can be misleading

So to be clear, are you saying Nimrod can't reach ballpark C/Rust/D/Go speeds?

EDIT: Wow, I've made comments critical of Apple and Google and not gotten as many downvotes as I'm getting here. But just to be clear, I'm not at all being sarcastic when I say "expert programmer". I'm aware that pcwalton is a very talented programmer and language designer.


I don't disagree with you that if you're looking for a fast language that integrates well with Python, then Nimrod may be a great choice. Pervasive GC certainly makes the language easier, with a performance cost that can be totally reasonable in many scenarios.

I do think that there are applications for which GC will always be slower than alternative systems such as arena allocation that Rust provides safe support for. (As an extreme example, the binary-trees shootout benchmark.) A language that does not provide safe manual memory management will lose in either performance or safety for these applications. I prefer not to make statements like "GC is slower than manual memory management", since there are many applications for which this is not true (for example, the level generation benchmark). Rather, I'd say that manual memory management provides a level of control that can allow skilled programmers to write applications that outperform those in languages with a "one size fits all" memory management scheme.
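
As a rough sketch of what arena allocation looks like in Rust (this assumes the third-party typed_arena crate rather than anything built in, and the Node type is made up), building something like the binary-trees benchmark's tree in an arena means every node is freed in one cheap bulk operation instead of being traced by a GC:

    use typed_arena::Arena;

    struct Node<'a> {
        left: Option<&'a Node<'a>>,
        right: Option<&'a Node<'a>>,
    }

    // Build a complete tree of the given depth; every node lives in the
    // arena, and they are all freed together when the arena is dropped.
    fn build<'a>(arena: &'a Arena<Node<'a>>, depth: u32) -> &'a Node<'a> {
        if depth == 0 {
            arena.alloc(Node { left: None, right: None })
        } else {
            arena.alloc(Node {
                left: Some(build(arena, depth - 1)),
                right: Some(build(arena, depth - 1)),
            })
        }
    }

    fn count(node: &Node) -> u64 {
        1 + node.left.map_or(0, count) + node.right.map_or(0, count)
    }

    fn main() {
        let arena = Arena::new();
        let root = build(&arena, 10);
        println!("{} nodes, freed in one shot when the arena drops", count(root));
    }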


You may choose to go the C++ STL way for a wildly usable subset.

What I mean is this: we all knew that the red-black tree implementation in the STL wasn't that hot, and neither was its sort (note: this is 7 years back); however, that library enabled a C++ project to be jumpstarted in significantly less time than building everything from scratch.

As a language creator, you may not want to touch the design choices you made to make it more "accessible" for a simpler use case: as the previous poster wrote, upgrading from Python.

However, you could potentially expose an accessible subset of the language as a standardised library. Of course, this may lead to language rust (!!) at scale, but I would argue that's a good problem to have.

I would be in your target market then: significantly better than Python, inherently memory safe, easy to use, worse than C++.


> What I mean is this: we all knew that the red-black tree implementation in the STL wasn't that hot, and neither was its sort (note: this is 7 years back); however, that library enabled a C++ project to be jumpstarted in significantly less time than building everything from scratch.

It's not that simple. You can't start with a language that lacks memory safety and add it later. Nor can you start with a language that uses global concurrent GC and try to reel it back in without losing safety. C++ is in the former category and I think there is no way for them to add memory safety at this point. Languages in the latter category really have no way of going back on global concurrent GC; the entire ecosystem is built around garbage collection.

Safe manual memory management is balanced on a very delicate precipice. You must carefully design your language around it for it to work. It is not something that can just be added later, like a faster red-black tree algorithm.


Actually, I'm proposing it the other way around. In fact, what you just wrote above (about the RNG being cryptographically secure in Rust) makes me really, really, really excited to try out the language.

For a large portion of my career I was building EDA (silicon design automation) tools, so I have plenty of experience trying to optimize one bit at a time with unsafe pointers constantly biting my back. IMHO partitioning an EDA netlist is an NP-hard problem, so there have been lots of very interesting startups that tried to solve that problem and failed.

I now work with Ruby and Python.

Trust me, I do know the value of everything you wrote. I am just wondering, requesting even, whether there is a way to bridge the "accessibility" gap. Is there anything (a "quickstart" library, a safe-but-suboptimal subset, etc.) that would let me start hacking with Rust in a matter of minutes?


Rust isn't even trying to do the same things Python has tried to do. That's not an apples to apples comparison and the fact that you would apply the two in the same cases betrays a lack of sensitivity to which languages are appropriate for what.


Yes, I prefer to have various built-in smart pointers over GC. But I'm not really sure where you are at the moment: one day I hear that Rust is GC-free, then there is news that you decided to enable GC again, etc.

So how are things now, and what are the plans for the future?

As a side note: in C++, I prefer to use raw pointers (and unique_ptr for convenience) when the ownership is 100% defined. I believe there are no raw pointers in Rust, right? But at least the built-in smart pointers might be much easier to use (in C++ they affect the code too much; you really can't use them "transparently").


I'd be interested to know where you heard that Rust was enabling GC again, because that's never been the case. The plan for quite a while now has been to move the machinery for garbage-collected pointers out of the language itself and into a library. In the bleeding-edge version of Rust, it's already the case that you have to explicitly opt-in to using the old built-in managed pointer scheme.

As for your side note, I just want to point out something: Rust's two remaining built-in pointers (owned pointers (~) and borrowed references (&)) are raw pointers at runtime. There's no dynamic overhead, all their magic happens at compile-time.
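
A tiny illustration of the "no dynamic overhead" point, in current Rust syntax (Box<T> being what ~ became): both an owned Box and a borrowed reference are a single machine word, the same size as a raw pointer.

    use std::mem::size_of;

    fn main() {
        // An owned Box and a borrowed reference are both one machine
        // word at runtime, the same size as a raw pointer.
        assert_eq!(size_of::<Box<u64>>(), size_of::<*const u64>());
        assert_eq!(size_of::<&u64>(), size_of::<*const u64>());
        println!("Box<u64>: {} bytes, &u64: {} bytes",
                 size_of::<Box<u64>>(), size_of::<&u64>());
    }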


If I may, although I'm not the OP.

Rust feels heavy and complex from the get-go. It has :, ;;, ->, and =>. It has multiple types of pointers, and immutable-by-default bindings that require mut. I could go on, but that's about as much as I could take.

But why care about me, or people like me? Perhaps I'm unique. It feels like Rust was designed for people as smart as the authors to use, and there's not a thing wrong with that. Myself, I prefer a small language that I can 'keep in my head', and build from there. With that mindset, Rust feels like a -very- complicated language, even at a glance.

I hope Rust succeeds, as I'm a big fan of Mozilla's mission, and appreciate all the work that you(they) do. But please don't take offense to the assertion that the language is pretty complex compared to Go, Ruby, Python, or C.


> But why care about me, or people like me? Perhaps I'm unique. It feels like Rust was designed for people as smart as the authors to use, and there's not a thing wrong with that. Myself, I prefer a small language that I can 'keep in my head', and build from there.

Rust was not designed to be difficult. It's designed to be as easy as possible without sacrificing the goals of memory safety and data race freedom. Building on a foundation like that of those other languages would result in a language that isn't safe or that uses global concurrent GC.

> But please don't take offense to the assertion that the language is pretty complex compared to Go, Ruby, Python, or C.

I think that Rust actually has simpler semantics and fewer special cases than all of those.


Shared libraries are now supported on Linux as of 2.064. Other platforms to follow.


That's great to hear, congratulations on getting that done. I think a lot of us in communities like Python and Ruby are really interested in an alternative to C/++ for writing extensions.


Have a look at PyD and CeleriD.

http://pyd.dsource.org/


Go supports shared C libraries; people have linked it against things like SDL and GTK+.

I'm not so sure it's really necessary for new things. Disk and memory are now cheap and plentiful enough that most of the advantages of shared libraries don't outweigh the advantages of static ones.


I'm talking about being able to call into Go code comprising a shared library via some sort of C calling convention. What you're referring to is something almost any modern language can do, which is call a C library via some sort of library loading mechanism (ctypes, FFI, etc.).

You can see a recent proposal for shared library support in Go and some reactions here:

https://groups.google.com/forum/#!topic/golang-nuts/zmjXkGrE...

So unless what you're talking about has happened in the past few months, I'm not aware of true shared library support in Go.

Practically speaking, shared library support is absolutely necessary if you're looking to use a low-level language as a means of speeding up a dynamic language. Or at least highly desirable (you can use IPC, but you lose a lot of speed, which is often the reason you're using the lower-level language in the first place).


I doubt Go will ever get it, given the religiosity against them from the community.


Replying to myself, since I can no longer edit.

From the gonuts-dev discussions, Go's main toolchain might actually get support for dynamic loading in the future.

So I was wrong it seems.


One advantage of shared libraries is that applications that link against them benefit from bugfixes when the system updater updates the library. (Well, it might also cause regressions, obviously.)


On the other hand, applications also "benefit" when the system updater introduces breakage (library incompatibilities, regressions).

To be clear, D has been able to use your system's shared C libraries just fine since the beginning.

When D people talk about shared library support, they mean creating a D shared library that can be used from C/C++ code. The tricky part is stuff like initializing the runtime.


...which is of zero importance to the primary users of Go, i.e., Google, as they have a unified code base.

But nobody prevents the gccgo people from implementing things differently. (It's just that for Google, this is really not a pressing issue.)


> ...which is of zero importance to the primary users of Go, i.e., Google, as they have a unified code base.

And this is why Android will never get official Go support.

- Android only uses dynamic loading

- The Android team does not seem to care about Go

- The Go team is religiously against dynamic loading

So I bet the D and Rust compilers will be able to support Android before Google supports Go in the Android SDK.


Rust already supports Android, thanks to the efforts of Samsung.


Thanks for the heads-up; from the mailing list traffic, I thought it was still not fully there.

Do you have any pointers about it?


The traffic on the mailing list might lament the inconsistent state of Rust on ARM in general, but AFAIK Android should work ( https://github.com/mozilla/rust/issues/1859 ). For pointers, I encourage you to either post to the mailing list yourself or check out the IRC channel (#rust on irc.mozilla.org).


I just found this,

https://github.com/mozilla/rust/wiki/Doc-building-for-androi...

it does not look like it is production ready, if this is really the latest state.

This is not something I can put into an APK.


Go is a language. There's no problem with dynamic loading as such; it's just that the primary implementation uses static linking for convenience. You'd simply have to write a dynamically loading runtime (and come up with some semantics for dynamic loading, which sounds tricky but probably can be handled somehow). The simple-yet-efficient model of implementing Go would require some things to be taken care of - for example, the method dispatch tables.

Also, "Android only uses dynamic loading" - given the existence of NDK, how is that true? As far as I know, you can link statically whatever you want.


> Go is a language. There's no problem with dynamic loading as such;

True. However, the people responsible for writing the main compiler are against providing such support in the official compiler toolchain.

> Also, "Android only uses dynamic loading" - given the existence of NDK, how is that true? As far as I know, you can link statically whatever you want.

The NDK only produces shared objects as final binaries. You are allowed to produce static libraries to link into your final .so, but that is about it. You cannot produce pure executables.

The code compiled with the NDK is loaded and executed from a Dalvik VM instance.

The NativeActivity that so many people without NDK knowledge think is native code is actually a Java class that inherits from Activity, loads the produced .so, and delegates the Android events into native code.


Again, Go can deal with dynamic loading of native libraries just fine.

Point two, that nobody is that interested, is the real issue.


> Again, Go can deal with dynamic loading of native libraries just fine.

The problem is producing a .so to be loaded by others.


Shared libraries work on GNU/Linux (since 2.063?); they just haven't been announced (the devs want some "beta" testing first, I seem to remember). EDIT: OK, since 2.064.


> Nimrod is just as fast as D, and probably a bit faster than Go and Rust

Note that Rust is/can be as fast as D/C/C++.


And in many ways Microsoft is now supporting C++. Though that language sees a lot more cross pollination between companies because of its ubiquity and age.

As a programmer who loves to operate in the space that uses useful abstractions but still runs on raw hardware, I find it interesting that, as C++ improves rapidly with the new release model since C++11, my interest in D wanes. I hope that most of what D is eventually seeps into C++, but nowadays I don't mind my C++ syntax and don't feel like I'm fighting with it as much.


Same here; I used to be a huge fan of D but then C++11 came along and solved most of my problems (and some that I didn't know D had).


The problem is when you work on big teams and are forced to work in C++98 with lots of style guide restrictions, or when management won't ever allow updating the compilers on the build infrastructure.

Also, compiling C++ code in C++11 mode won't make many developers use modern C++ instead of C compiled with a C++ compiler, as many still do.

Maybe it works for your context, but in a global context we need safer languages for systems programming, which were already available back when C didn't have any meaning outside UNIX.


That's true, but you can write C code in D as well, so switching to D won't make people write idiomatic code either. If they won't learn how to use one correctly then they won't care enough to use another correctly.

And if your company won't let you upgrade compilers, then it certainly won't let you switch languages altogether.


> That's true, but you can write C code in D as well, so switching to D won't make people write idiomatic code either

With Pascal-like type safety, though. You need to explicitly mark your code as @system to be allowed to do C-like tricks.

This alone is a very big advantage.


C++11 adds a little welcome syntactic sugar to the language. Yet C++ is nowhere near supporting functional style as well as D does. It really doesn't look and feel the same.


Would be great, but the situation is a bit different. Google creates Go. Mozilla creates Rust. Facebook employs one of the designers of D.


And Go/Rust represent new paradigms, whereas D aims to be a less sucky C++. It would be interesting to know why Facebook took this approach.


"New" paradigms? Care to elaborate?


In brief, all three of Go, Rust and D are designed to replace C++.

Go is built around concurrency, so applications you write are almost instantly parallelisable. It uses a simple garbage collector, to make it easier to program. It was developed at Google, with none other than Rob Pike being involved. http://golang.org/doc/faq.

Rust is designed to help prevent memory leaks, while still enabling manual memory management. https://github.com/mozilla/rust/wiki/Doc-language-FAQ. It was written by mozilla, because using C++ for a web browser leaves quite a few memory leaks. Mozilla are also building a prototype browser engine in Rust, called Servo, http://www.mozilla.org/en-US/research/projects/#servo. Although its future is still quite uncertain at this point.

I haven't had too much contact with D, but from what I understand, it's designed to be C++ done right, while keeping compatibility with many parts of C. http://dlang.org/overview.html


> Rust is designed to help prevent memory leaks, while still enabling manual memory management. https://github.com/mozilla/rust/wiki/Doc-language-FAQ. It was written by mozilla, because using C++ for a web browser leaves quite a few memory leaks.

Nit: It's not just memory leaks, it's memory safety in general. Leaks are actually somewhat less of a problem than issues like use-after-free, because leaks are usually not exploitable, but use-after-free can easily lead to exploitable security vulnerabilities (for example, the one that brought down Chrome a couple of days ago).

In general, only Rust is designed to allow totally memory-safe usage without a garbage collector.


Google has been steadily moving away from "Go is a C++ replacement" (because it's not) and toward "Go is a Python/Ruby replacement." (which fits it _much_ better)

Also, your descriptions of Go and D are great, but I'd say that Rust is more "Concurrency of Erlang, speed of C++, type safety of ML/Haskell".


> Google has been steadily moving away from "Go is a C++ replacement" (because it's not) and toward "Go is a Python/Ruby replacement." (which fits it _much_ better)

I don't see Go as a particularly good general replacement for C++ or Python/Ruby. It seems more like a good tool for the niche where Python's or Ruby's performance characteristics and less well developed support for concurrency/parallelism make them unattractive, but where at the same time the relative heaviness and complexity of C++ makes it unattractive too, so neither seems to be the right choice.


I think we're in violent agreement. I think people have been building stuff in Python/Ruby, and are seeing some of the drawbacks, and realize they're bigger than they thought.


Go is not particularly DRY. It's an entirely different design, not a superior version of the same thing.


Where does a language like Dart fall in?


Dart has a broadly similar server-side role to Go (though the approach is very different), in that it falls between C++ and currently popular dynamic languages; plus it has a client-side role where it aims to be a better JS.

Obviously on the server side it overlaps with Go, but Google has the resources to do multiple efforts in parallel in areas they think are important to find what works best with real world experience.


In the history footnotes, unless Google manages to convince other browser vendors to adopt it.


It's a replacement for Java. C++ is still suited for situations where GC is impossible; Python/Ruby are still suitable for quick scripting. I think Go aims for everything in between.


I quite agree with that. Java & Go have similar type systems (static but not very elaborate, and Java < 5 lacked generics just like Go), share a similar philosophy (a simple language with no tricky corners, safer memory management, designed for programming in the large), have built-in concurrency mechanisms, and are even quite close when it comes to raw performance.


The history of many languages has been the same with regard to generics. C++ didn't have them; then templates were added. C# 1 didn't have them, but they were quickly added (correctly) in 2.0. D didn't have them; later they were added. It is very common for a new language to avoid generics in the beginning, but it seems most come to agree it was a mistake.


C# already had them internally in 1999, before the 1.0 release; they just decided to focus on other issues for the first release of .NET.

http://blogs.msdn.com/b/dsyme/archive/2011/03/15/net-c-gener...

In the functional languages world, parametric polymorphism has usually been part of the first version.


I'm not sure that this is a distinction (C++ vs. Python replacement) that means all that much.


For me, it helped me get less mad about Go. ;)

A Python replacement accepts that GC is mandatory, while a C++ replacement needs to be able to do without a GC; that's the most prominent example of this distinction.


For some use cases that would matter, but for a lot of C++ use cases, avoiding GC isn't really fundamental. Large codebases increasingly use the "C++ GC", aka RAII with std::shared_ptr, as the de-facto-standard way of managing memory. For those kinds of applications, how D's GC compares just becomes a matter of performance tradeoffs between different GC implementations, not a matter of GC vs. no GC.


Absolutely. GCs have been improving a lot over recent years, and there's lots of software that's written in C++ that doesn't have to be.

You can write D without any GC at all, though: you lose a lot of the standard library at present, but I've done it. It's not possible, in my understanding, to turn the GC off in Go.


Set the env. var. `GOGC=off`.


D provides a little bit more control.

http://qznc.github.io/d-tut/memory.html


> Large codebases increasingly use the "C++ GC", aka RAII with std::shared_ptr, as the de-facto-standard way of managing memory.

Which large codebases? This is definitely not true for any of the browser engines (Gecko being the smallest at 6M LOC, and going up from there) for example.


WebKit makes relatively heavy use of reference counted objects, though a custom reference counting implementation is used rather than std::shared_ptr. That said, there has been an increasing emphasis on preferring unique ownership to shared ownership of objects where possible.

I'm a little confused by your throwaway comment about Gecko being the smallest browser engine though. WebKit weighs in at less than 3 million lines of code, much smaller than the size you quote for Gecko.


Yeah, I was thinking Chromium when I wrote that and should have been more specific. 6M is all of Firefox.

And Gecko uses reference counting a lot too, of course... but most objects are stack allocated or uniquely owned. It's a very different situation from a language in which all objects are reference counted.


It shouldn't be the standard way of managing memory; it should be a part of the toolkit. In order, it should look like:

1) Stack allocation 2) std::unique_ptr 3) std::shared_ptr

You should be thinking properly about ownership semantics, and unique_ptr should be the default, not shared_ptr.


From what I've read, Google is using Go to replace many of their C++ systems. While it may have been developed independently, I still consider this enough rationale to call it a "C++ replacement". Of course, C++ currently occupies so many different roles that the term "C++ replacement" is not very specific. I appreciate your point though; it has many more properties in common with Python/Ruby.


Absolutely: there are many programs which _were_ being written in C++ but didn't necessarily have to be, and Go does have a place there. What I mean is that where C++ is a _requirement_, not a preference, Go won't make inroads.


If Rust is that, and I suspect you may be right, I'll be putting some serious time into it. pcwalton, could you comment on whether or not you also feel this accurately describes Rust?


What you present as new paradigms are features of many languages that any compiler design scholar could present examples of.

They are only new paradigms for those that only know mainstream programming languages released after 2000.

Go is a nice example of how new it really is:

http://cowlark.com/2009-11-15-go/


Facebook is the only one smart enough not to reinvent the wheel. Not a fan of D though.


Isn't it better for the community if everyone supports the same language and adds a lot of solid standard libraries?


Good for them. I was a bit surprised by the amounts though. Is it enough to motivate?

Also, I notice the BountySource site is blank without JavaScript on. I'm not one who demands that every site work without it, but they should at least show their banner and a message that it needs to be turned on. The noscript tag is twenty years old, no?


I noticed this too, but I wouldn't single them out. A lot of sites I visit have problems without JS. It's even worse when the site isn't blank: _some_ things just don't work or show as they should...


http://forum.dlang.org/thread/l65mvq$du0$1@digitalmars.com#p...

    > The D Programming Language? $80? Ha... Fail.

    How do you mean that? The budget is of course much larger than that. I'd 
    just started assigning it.

    Andrei


And when can we put bounties on Facebook bugs?


Good question. What Facebook bug would you put a bounty on?

(ps I work on Facebook platform so this is not an idle question)


Wish I had seen this earlier. I run a game on the Facebook platform and we oftentimes have bugs that you guys likely consider corner cases because they only affect a very small subset of your users.

These tend to happen to our heaviest users, who account for a large portion of our revenue. These power users either use obscure edges of the platform or behave in ways unlike other users.

For instance, many of our top players have been completely unable to access their friends list for the past 2 weeks now. Playing with friends is a hugely vital component of our game and this bug makes it completely unusable for these players (many of whom have been active, paying users of our game for several years now).

See Bug 1420634571501183 (recently marked as a duplicate of 681781621832581, which I have no way of verifying since that user didn't post an error message and was using FQL rather than the Graph API... I'm hoping this bug doesn't just get lost now that it's been marked as a duplicate).

This isn't the first time we've had a bug like this happen. Allowing developers to put a bounty out would be a strong signal that it's an important bug to fix. It's not necessarily about the bonus revenue to Facebook, it's about giving developers a way to get truly important bugs prioritized.

Or heck, being able to buy 5 minutes of a real Facebook engineer's time would be valuable as well.

See also:

Bug 1394861264086269, which is marked as Invalid but has to do with the (relatively obscure) "Game Groups" Graph API not matching what's actually shown on the web interface.

Bugs 177435992460351 and 242967435851521 which will probably be closed in 5 months because "We are prioritizing bugs based on impact to the developer community. As this bug report has not received much attention from other developers, we are closing it so as to better focus on the top issues." even though they've been confirmed by FB just like countless other bugs I have reported over the years.


Oh, another of those bounty programs... but maybe this one is different from the rest?

If you ever find a security issue, don't expect to obtain a bounty from the big corps easily, or even a "thanks". Once, the co-founder of one of the most important security companies told me, "Do not expect to receive a bounty without sending a minimum of 10 emails explaining the same thing in 10 different ways... 20 on average." It's a sad truth, and I think it means that legitimate critical security reports usually won't be properly rewarded, because most people get tired quickly.

One year ago I discovered a session hijacking vulnerability on Facebook; the guy who responded to my messages didn't even know what the secure flag is. After asking me how to solve the bug (the solution was actually pretty simple), they never replied to me again.

With Google it was the same thing: last year I found a leak of sensitive user information because of a bad cookie configuration; zero bounty, zero thanks.

Another bad experience I had with Google, though maybe a bit off topic (sorry): almost two years ago Gmail's cert changed for no apparent reason, using a new CA, and it seemed that nobody else was having this issue (i.e. no mentions of this new cert on the web; googling the fingerprint returned 0 results) except me. I accepted this new cert on my laptop at home, but then the "funniest" thing happened: when I connected to Gmail from my university, the previous cert appeared again ("it's OK... nothing strange is happening here"), but when I went back home the new cert showed up again! My paranoia level went over 9000, and I immediately connected to Gmail through Tor (yup, the old nice cert was there again) and sent an encrypted mail to Google's security team explaining everything, with the fingerprints and cert info, _including_ my PGP pubkey at the end of my message. A week and a half later I received an email from the "security team": they replied to my message in plain text, with my message quoted unencrypted (!), and asked how I had discovered this. I told them that my browser checks every new cert, and I also asked whether it would be possible not to quote encrypted mails in plain text. Then, two days later, I got a new email from them, again in plain text, and it was pretty minimalistic: "We checked and the new certificate is ok." EOF. No digital signature, no nothing, wtf! Oh well... at least the next day I connected to Gmail at home and the good old cert was there again :) (and the strange new cert never appeared again). A late Halloween story.


Did you even read the article? Did anyone who upvoted you read the article?

This has nothing to do with security issues. It is to do with fixing specific bugs in the D compiler and libraries.


I did, but although my examples are about security issues, this is another bounty program by the same company; why would they behave differently?


Your example is about bugs in Facebook's own application. This D bounty program is more like Facebook throwing some money around to support an open source project they use.


The first part of your comment specifically mentions bounty rewards for bugs, so I don't see what the problem is. The rest may be tangentially related, but it's still interesting.

I have to wonder if refusing to award bounties is a symptom of intentional maliciousness or if it is instead some degree of bureaucratic ineptitude. The adage "Never attribute to malice that which can best be explained by stupidity" comes to mind, but I think "stupidity" would best be explained away as the lumbering beast with little coordination among its myriad appendages that is most enterprises.

Your experiences suggest to me that it's possibly also a matter of embarrassment. It's a shame, really. Improving user experience through safety ought to be a matter of importance, but it often isn't...


The article is about a bounty program, and the comment is about how likely these programs are to actually pay out.


A bounty program about bug fixes, not a bounty program about bug reports.


surface interpretation: they want them fixed

sneaky possible reason: recruitment strategy that's more likely to suss out the more "elite" devs

likely: my bet, a mix of both


The sneaky possibility has already begun. You just have to follow the depths of the newsgroup :) http://forum.dlang.org/post/l54orn$tn3$1@digitalmars.com


Maybe they'll put a bounty on getting 13-year-olds to make wall posts instead of sending self-destructing photographs.



