D as a Better C (dlang.org)
242 points by ingve on Aug 23, 2017 | 184 comments



I tinkered with D a bit (along with Nim, Dart, and C as a comparison) while writing a Clojure interpreter.

D seems to be a nice language, but I found the editor integration wasn't the best. I also found it annoying that the docs used `auto` all the time, so you could never figure out the right type annotations for their APIs (e.g. I want to call the foo API and return its value from my method, but the docs all use `auto` to refer to its return value, so I don't know how to annotate my method.)

I liked Nim better in almost every way but one: the compiler was fickle and would just fail silently sometimes.


(Disclaimer: one of the Nim core devs here)

Happy to see you paint Nim in a (sort of) positive light, I hope I can help with this one negative.

> I liked Nim better in almost every way but one: the compiler was fickle and would just fail silently sometimes.

Can you give some examples and elaborate on what you mean by "fail silently"? Did you at least get a segfault?


Since it's already brought up in this thread, I recently had (another) look at nim and was a little disappointed too.

It's been a while, but as I recall, on Windows - it was difficult to find a supported way to output Unicode on the console in a sane, portable way (write utf8 nim source code, get wide strings in Windows console and utf8 under eg Linux) - and I think I also had some problems getting off the ground with statically linking sdl2 (again on Windows).

The build/package/config system appears to be in a bit of flux?

[ED: and I think there were old open issues and forum posts for both problems, hence no new bug reports...]


Indeed there are some bugs with unicode output[1]. Just to be clear the standard way to output anything to the console is `echo` (similar to Python's `print`).

It seems that the progress on that issue has stalled. It's unfortunate but we do have limited man-power and rely on the community to help us out, this does mean that at this stage it helps if you don't mind getting your hands dirty with potential bugs.

Unless one of us is directly affected by a bug, or we see a large number of users affected by it, it's unlikely that we will focus on it. Not much we can do there, sadly. I hope you can understand.

Regarding static linking I would say that it's challenging for most languages. Especially on Windows. If you pop into our IRC/Gitter channel[2] though I'm sure somebody would help you out. You can also probably get pretty far by asking C users for help.

Regarding the build system, perhaps you are referring to the fairly recent addition of NimScript? I think that has stabilised by now. But if you've got any specific questions about it then feel free to ask them.

1 - https://github.com/nim-lang/Nim/issues/2348

2 - https://nim-lang.org/community.html


As I'm now on my laptop with access to the two simple programs; for text output:

  import encodings
  var hello1 = convert("Hellø, wørld!", "850", "UTF-8")

  # Doesn't work - seems to think current codepage is utf8.
  var hello2 = convert("Hellø, wørld!", getCurrentEncoding(), "UTF-8")

  # Outputs correct text:
  echo hello1
  # Outputs corrupted text:
  echo hello2
(And simply outputting an unconverted string fails, like hello2 does).

As for the linking problem I had, the main issue is that it seems to be harder than it probably should be to get started with GUI programming with nim on Windows. The following code works, provided one manually obtains sdl2.dll (and sdl2.lib for static linking) - however, even with static linking flags (no errors), the resulting exe still depends on the presence of sdl2.dll:

  import nimx.window
  import nimx.text_field
  import nimx.system_logger

  proc startApp() =
    # Create an 800x600 window at (40, 40) and add a label with non-ASCII text.
    var window = newWindow(newRect(40, 40, 800, 600))
    let label = newLabel(newRect(20, 20, 150, 20))
    label.text = "Hellø, wørld!"
    window.addSubview(label)

  # Hand startApp to nimx's application runner.
  runApplication:
    startApp()
With sdl2.dll (and .lib) in the same folder both of these work:

nim --threads:on --dynlibOverride:SDL2 --passL:SDL2.lib c -r win.nim

nim --threads:on c -r win.nim

but both fail without sdl2.dll present (ie: the "statically" linked exe still depends on dynamically loading sdl2.dll).

And there's so far no easy way of getting a "supported" sdl2.dll to go with the nim compiler - as far as I can tell, neither "nimble install sdl2" or "nimble install nimx" provide a way to get the sdl2.dll and/or C source code to compile it.

But perhaps managing DLLs and such is considered to be out-of-scope for nimble/nim package manager for now.


Ha, your comment was my first introduction to Nim, but it looks interesting, so I ran "brew install nim" and tried your test (thanks!) with "nim c test.nim && ./test" on macOS, and my results are the opposite of yours: hello1 is garbled while hello2 works.

A bit of googling found the next step, "nimble install nimx"; for compiling your win.nim example I had to use "nim --noMain --threads:on c -r win.nim", but the result worked. However, AFAIK there's no official way to make statically linked binaries on the latest macOS.


It's not really Nim's fault that cmd.exe has crap Unicode support.


If you have an "echo" primitive/function and "out of the box Unicode support", then respecting the host OS codepage/output encoding by default is the sane thing to do (as indicated in the bug linked by the sibling comment: "just use the win32 api").

It's not that nim can't output wide characters from a utf8 source, it's just that it's not obvious how to do it in a standard way - one might think that a utf8 Unicode "hello world" should "just work" on Windows, and it doesn't.

It doesn't really make much sense to only do the right thing on systems that happen to have a utf8 locale (it's not that windows doesn't handle wide strings, it just doesn't have a utf8 locale by default).

It's not my impression that the nim community doesn't want to be cross-platform and beginner friendly - it's just that they're going through a phase of modernizing the win(32) sub-system.

Correct text handling isn't "just use utf8", as fun as that would be - correct handling is figuring out "what encoding is the source text", "what encoding is the destination file/device" and "how do I put the source into the destination".

I'd expect a latin1, a utf16 and a utf8 string to all be output correctly with "echo" - on all supported platforms. It's kind of why you would want to use a higher level language with "batteries included" in the first place.


Nim is not a Win32 runtime. It outputs strings to stdout. Any sane OS would do the right thing in 2017.


I should perhaps clarify that "just use the win32 api" refers to Windows specifically. Just as one might use syscall 4/write() on Linux if one doesn't want to or can't use libc printf.

See also, for example:

https://stackoverflow.com/questions/15528359/printing-utf-8-...

https://stackoverflow.com/questions/26106647/c11-unicode-sup...

http://www.cprogramming.com/tutorial/unicode.html

Note: I'm not convinced using utf8 internally is a great idea - especially for a "beginner friendly" language. Playing with anything from palindromes to reversing strings and character/grapheme counts and

  "d o u b l e  s p a c i n g"
strings can be fun learning exercises - that might be easier with a 32-bit (or even 64 bit) representation.

But no matter how you look at it, there's no such thing as "simple" handling of international text.


I'm not sure how there can be "one true sane thing". There's more to strings than Unicode, and more to Unicode than utf8. In fact the OS behaves pretty sanely here - not checking for utf16 or utf32 isn't really great on Linux either (haven't checked how/if nim handles locales on Linux). Afaik, e.g. python3 does the right thing on Windows and Linux w/print(). It's just one of the many little things where one may wish for "one, simple, sane way" - but have to deal with the realities.

Same goes for things like handling paths, line endings for text files, resource forks (or not) for files, changing file meta-data...


Unfortunately, I don't have any examples off-hand. It was a month ago. I'll see if I can dig the repo back up and reproduce the error.

As for how it failed, it did not segfault. It just stopped and dumped me back out on the terminal with no output at all.

(I'm on OSX 10.12.6.)


That's a pity. Let me know if you dig it up.

Next time please create an issue on GitHub[1], you don't have to worry about creating a small example either, as long as you give us all the code to compile it.

1 - https://github.com/nim-lang/nim


I tracked down the repo and wasn't able to get it to bomb on me as before. I honestly can't remember what scenarios triggered that, unfortunately, and I'm no longer working on the project (due to actually being back at the 9-5 job!).


Thank you for taking a look. Maybe the problem was already fixed :)

If you do find anything else then do post it on GitHub, or at least let us know on IRC/Gitter.


Hey. I appreciate you showing this level of concern. I'll track down that project and open a few issues if I can reproduce it... It may not be until next week, due to traveling and other things coming up.


Can you compare your D vs Nim experience in more detail? What are the key points that you like better about Nim?


The D standard library is generally designed for generic code: you often aren't supposed to see the type returned by a function.

Also, you can use auto too, and template constraints would help you here.


I understand. That's why I mention it as one of the downsides of D. It's probably subjective, but I like to see the types of function signatures. It serves as documentation, and makes it easier for me to say, "What does this function return, and what can I do with it?"


The problem is, with a lot of the range code in D you can't actually specify the return type since it's an internal struct or similar.

Seeing MapResult!R map(R, alias fn)(R range) makes it seem like MapResult is a thing you can reference/use. But it's an internal struct, so it's neither accessible nor documented. That's a whole lot more confusing than having auto and documenting that the result is an input range.
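For what it's worth, the way this usually plays out in user code is sketched below (squared is just an illustrative name): the caller captures the result with auto, and generic code constrains on the range interface rather than the concrete struct.

    import std.algorithm : map;
    import std.range : isInputRange;
    import std.stdio : writeln;

    // Documented as "returns an input range"; the concrete struct stays hidden.
    auto squared(R)(R r) if (isInputRange!R)
    {
        return r.map!(x => x * x);
    }

    void main()
    {
        auto result = [1, 2, 3].squared; // the type is internal, so just capture it with auto
        writeln(result);                 // [1, 4, 9]
    }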


Idk about the editor integration problem, but usually when I run into the 'auto' problem what helps me is printing out the type as follows:

typeof(var).stringof.writeln;
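For example, a minimal self-contained sketch (squares stands in for whatever variable you're inspecting):

    import std.stdio;
    import std.algorithm : map;

    void main()
    {
        auto squares = [1, 2, 3].map!(x => x * x);
        // Prints the inferred type at runtime, e.g. something like "MapResult!(__lambda..., int[])".
        typeof(squares).stringof.writeln;
    }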


You can also emit the type during compilation, rather than at runtime:

pragma(msg, "var has type: ", typeof(var));
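As a minimal sketch (squares stands in for your variable; the message is printed by the compiler, not the program):

    import std.algorithm : map;

    void main()
    {
        auto squares = [1, 2, 3].map!(x => x * x);
        // Emitted by the compiler during compilation; nothing happens at runtime.
        pragma(msg, "squares has type: ", typeof(squares));
    }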


You can also emit the type as a compile error by assigning it to a list or something. A trick I often used when studying C++ back when it was cool.


There's some debate about where the sweet spot is in the use of `auto`, between self-documentation and minimizing refactoring costs.

Fortunately, the programmer gets to decide which to use.


It sounds like OP's issue isn't that auto exists or that people write code using it, it's that the documentation uses auto, which makes it hard to figure out the types of things when you look them up.


That could be solved by turning function calls in example code into hyperlinks to the relevant documentation, which is a good idea in general.


OP here. Yes, that would be very helpful.


>Nim, Dart, and C as a comparison)

Could I ask about your thoughts regarding Go (which has much less syntax and fewer features than all of the above). Have you looked into it?


Yeah. It should have been in that list, actually. I wanted to like Go. It seems simple (which is what I like so much about Clojure), but honestly, I found it to be frustrating for all the standard reasons the Go team is tired of hearing (e.g. no generics, error handling littering my code and obscuring intent, etc).

In fact, I think Go doesn't quite deserve the reputation it has as being a simple language. It's a familiar language, and it's a bare-bones language, but that's not the same thing as being simple.

One last thing that was interesting: I got more null pointer runtime exceptions in Go than I expected (vs none in Nim, for example). Granted, I'm a Go n00b, but I'm also a Nim and Dart n00b, and didn't have nearly as much trouble with those two languages (aside from the aforementioned compiler bugs).

I really wish modern statically typed languages would rid us of null pointer errors once and for all. This is totally possible with Option types, but seems to rarely be done. I wonder why?
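(For a rough sense of what option-style handling looks like in D, std.typecons has Nullable; the findPort lookup below is just a made-up example:)

    import std.stdio : writeln;
    import std.typecons : Nullable, nullable;

    // Absence is encoded in the return type instead of a null pointer.
    Nullable!int findPort(string service)
    {
        if (service == "http")
            return nullable(80);
        return Nullable!int.init; // "none"
    }

    void main()
    {
        auto port = findPort("http");
        if (port.isNull)
            writeln("unknown service");
        else
            writeln("port: ", port.get);
    }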


Have you looked at Rust? It uses Option types, has generics, also uses return values for error handling but has a lot more syntactic sugar to make it cleaner than in Go.


I have tried Rust in the past, though not on this particular hobby project (my Clojure interpreter). I found it hard to get into, and I've gotten into a lot of languages in my time-- some of which are considered challenging (e.g. Haskell). With Rust, I always felt I spent too much time wrestling with it, and not enough time being productive. I imagine that you hit a threshold at some point and that begins to change. But for my side projects, that bar wasn't worth jumping.


Rust's major goals this year are around learnability and productivity; maybe come check it out again someday :)


Could you point to any materials (e.g. blog post, roadmap document...) that deal with this? Thanks.


https://blog.rust-lang.org/2017/07/05/Rust-Roadmap-Update.ht... is our "the year is half over" update, and has a link to the overall goals right at the top.


Awesome, thanks!


Inferred lifetimes? A man can hope :).


Inferring lifetimes from the implementation of a function won't happen for the same reason that inferring types won't happen: it is really nice for the signature of a function to not automatically be changed if the body is changed, instead programmers should be told when they might be breaking their users.

However, the compiler could (and did, I believe) run inference and suggest a fix that the programmer can explicitly include.


If it were always possible to infer all lifetimes, we'd do it. Unfortunately that's not the case.

There are a number of RFCs in the pipeline to help with lifetimes, including more elision. In no particular order:

https://github.com/rust-lang/rfcs/pull/2115

https://github.com/rust-lang/rfcs/pull/2094

https://github.com/rust-lang/rfcs/pull/2093


Rust's documentation is very well written. The authors have put in great effort to make things clear.

Granted, Rust has some paradigms and constructs which are not easy to learn, but I found it rather exciting. I think it has future scope.

Wonder what's happening with Redox. I hope Rust adds more libraries, especially network-related ones, since the networked world is increasingly looking like a distributed OS of sorts. Browsers are looking like an OS, or a shell inside the OS, and are getting more complex with more functionality added every few months. Not sure I expressed my thoughts correctly, but I guess one can understand what I mean.

Rust should move away from C standard library, if possible.


> Wonder whats happening with Redox.

They've been involved in Google Summer of Code, and so have had lots of status update posts lately, here's the latest: https://redox-os.org/news/gsoc-self-hosting-final/

> Rust should move away from C standard library, if possible.

It's not always possible, but on systems where it is, there's some ability to do so, see https://github.com/japaric/steed


Thanks Steve.

>It's not always possible, but on systems where it is, there's some ability to do so, see https://github.com/japaric/steed

Doesn't linking to it make a Rust program "unsafe", in the Rust sense? Or my understanding is probably not correct.


Using unsafe inside of safe is totally fine. See https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html for more.


Cheers, thanks.


I sometimes feel Go went too far, as in the classic quote "Make it as simple as possible, but no simpler".


I understand what you mean by your quote. I will give you a toy example, then show the single example I have from Go, and ask if you know of any more.

Toy example, a language that is simpler than possible:

- If a language does not have functions, you must copy and paste the code every time you call it. Such a language would be "simpler than possible".

The only example I know from Go:

- Many people feel that the lack of generic functions can be solved similarly - such as copying and pasting code or doing search/replace - but makes the language "simpler than possible". (in the exact sense you mean.)

(A generic function is one that works on different types without knowing what they are when it is written, or similar behavior.)
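(For concreteness, here is a minimal sketch of such a generic function, written in D since that's the language under discussion; myMax is just an illustrative name:)

    // Written once, instantiated by the compiler for each type it is used with.
    T myMax(T)(T a, T b)
    {
        return a > b ? a : b;
    }

    void main()
    {
        assert(myMax(3, 7) == 7);             // works with int
        assert(myMax(2.5, 1.0) == 2.5);       // with double
        assert(myMax("abc", "abd") == "abd"); // and with string
    }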

So I've given you a single real example. But it's the only one I can think of.

Other than this example, can you list the ways in which you feel it's "simpler than possible" or went "too far"? I would really appreciate if you could be specific.


I'll repeat my usual comment about what I found annoying in Go:

It seems like strings are basically the only data type of unlimited size that is supported as map keys (or at least that seemed to be the case a few years ago).

I think that only types for which equality is defined in the language could be used as keys. Those types were numbers, characters and all other "small" fixed-size types, structs all of whose members had equality defined, arrays of a type with equality, and strings. The size of an array is part of the type, so in that list of types with equality the only source of unlimited sizedness is strings.

I guess that covers most cases where you want a type of unknown size as a key, but I found it pretty annoying. If you want to use, say, polynomials with integer coefficients as keys in a map, you'd have to either choose a fixed maximum size (degree, for example, or number of monomials) for the polynomials or convert back and forth between strings.

It's not the only language that forces you to convert things to strings and back for use as map keys (AWK and Lua come to mind), but I wish it didn't have that limitation.


You can actually dynamically generate struct types at runtime with the `reflect` package.[1] That might work for you. Not that I'm saying this is an ideal solution, I'm just giving you another option to evaluate. :-) (I suspect I'd sooner choose "serialize to a string" if I were in your shoes.)

[1] - https://golang.org/pkg/reflect/#StructOf


thanks!


EDIT: this is getting downvotes for some reason even though it's just addressed to OP. OP, you could email me at the link on my profile if you didn't want to answer here. I'm not on the Go team or anything, just curious.

---

Thanks for your answers!

So I know you don't want to rehash what's easy to find elsewhere, but your perspective is different because you wrote a Clojure interpreter. It's not the same as what I can find from people who just use languages.

You just list:

>(e.g. no generics, error handling littering my code and obscuring intent, etc).

But could you flesh out this litany of complaints, even if it's common? I just want to know your version of that list, without the 'etc'. It's not going to be the same as other people's - different things will bother you or come to mind for you. I'm trying to understand these from your perspective and background. (which is unusual.)

Secondly, you write "but that's not the same thing as being simple". I don't really understand. Here on HN, people say you can pick up Go in an "afternoon" and become proficient in "a week".

You write that it is not simple but I don't know what you mean in specific, even when you write as a "Go n00b". This reply will be particularly interesting to me, because, since you've literally written an interpreter, compared to an average user, everything is simple for you.

Like, think of a standard designer who has first dabbled in Javascript and for some reason picks up Go. Whatever isn't "simple" for them would ordinarily be super-simple for you. So what's not "simple" for you is like obscure string physics to a person who is barely a programmer at all. I'm really curious what parts you didn't find simple.

Thank you.


I feel a little bad writing up a full list of what I don't like about Go since I haven't used it enough to be as informed as I should be to write such a list. But... since you asked.

No generics, error-handling obscuring the flow of logic, dependency management, the compiler complaining about unused vars and imports (which is great, except when you're prototyping; it should be togglable), it felt like I was producing a lot of code to do something small (I felt the same about C, but not about e.g. Nim), and then, I think, the little oddities I mentioned in my comments on simplicity added up to make me feel like I was building an inelegant solution.

That's off the top of my head. There were possibly other gripes, and I am possibly mis-remembering, so take it all with a grain of salt.


Hey, thanks! This is exactly what I was asking from you and the fact that these points remain in your mind despite being distant memories makes your reply particularly interesting.

(What I mean is, if you worked with Go every day you likely would not have the same perspective anymore.)

so thank you, perfect reply.


Well, building an interpreter isn't that difficult. Even less so, if you're building one on top of a good dynamic language (which was the case with Dart in my original list). If you're interested in getting started, these are good intros to the basics:

- http://www.buildyourownlisp.com/

- https://github.com/kanaka/mal


[EDIT: I didn't down-vote you, and I appreciate the question.]

Simplicity is not the same as familiarity. It's not the same as a small feature-set, either. Gratuitous Rich Hickey reference for what I mean by simple[1]. So most languages aren't simple by that definition, but I'd say that Go's inconsistencies prevent it from being what I call simple. Some examples:

Instead of having a general mechanism for describing datatypes, they have several one-off, exceptional syntaxes (such as for how you define the type of a map).

Go's dependency management / vendoring (or lack thereof) is complexity-inducing, even though it appears simple at the outset.

Sometimes you do things via method (such as `fmt.Println(message)`) and sometimes via function (such as `len(message)`).

And so on. Coming from a language like Clojure that is regular, possibly to a fault, Go seemed to be littered with one-offs and inconsistencies.

[1] https://github.com/matthiasn/talk-transcripts/blob/master/Hi...


What is "vendoring"?


Vendoring is when you include a copy of your dependencies inside of your project.


I'll let the parent answer you (though I have similar sentiments as someone with language implementation experience, and FWIW, I have professional experience with Go, Ruby, JavaScript, Clojure, C#, and I've used Haskell, F#, and much more in my free time).

I will, however, point out that you're likely using a different definition of simple ("easily understood or done; presenting no difficulty. ") than is often used in technical circles ("composed of a single element; not compound"). Simplicity and ease are distinct, and one does not necessarily imply the other; it's not hard to find convoluted designs that feel easy due to familiarity, while a much smaller, consistent design will feel difficult because the concepts (while fewer in number) are foreign.

Rich Hickey (creator of Clojure) gave a great talk on easy vs simple: https://www.youtube.com/watch?v=rI8tNMsozo0

Here's something familiar to most professional web developers: JavaScript.

But it's also complicated: https://www.destroyallsoftware.com/talks/wat

Here's the spec: https://www.ecma-international.org/ecma-262/8.0/index.html

And here's something unfamiliar and often considered difficult: lambda calculus.

And yet it's very simple. Here's the entire definition of lambda expressions (lifted from Wikipedia):

---

Lambda expressions are composed of:

* variables v1, v2, ..., vn, ...

* the abstraction symbols lambda 'λ' and dot '.'

* parentheses ( )

The set of lambda expressions, Λ, can be defined inductively:

1. If x is a variable, then x ∈ Λ

2. If x is a variable and M ∈ Λ, then (λx.M) ∈ Λ

3. If M, N ∈ Λ, then (M N) ∈ Λ

Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.

---

So, while JavaScript programmers might find the lambda calculus unapproachable, it would be difficult to argue that the former has fewer gotchas than the latter, as the former is wildly complicated and the latter is so simple that you could fit its definition on a business card. Once you appreciate the difference between simplicity and ease you can better evaluate your options, as it might be worth investing time in learning an intimidating, foreign solution if you believe that it will provide better stability/correctness guarantees/etc. Also, simple things are generally easier to work with compared to complex things, assuming a similar degree of experience -- so investing in unfamiliar-yet-simple things will often save you a lot of effort in the long run. Strictly adhering to familiar things is a recipe for being stuck in a local optimum.

To take this full circle, Go being easy doesn't mean that it's simple.


This is a better answer than the one I gave. And is also a good explanation of why leaving Clojure for anything else always feels so icky. :)


Thank you, this was excellent and very informative.

You weren't kidding about the lambda calculus definition!

https://en.wikipedia.org/wiki/Lambda_calculus#Formal_definit...

With that said, your comment seems quite technical, about the single point of simplicity. You don't talk much about Go at all, and although the other poster has also answered, I'd be curious in your answer as well (if you've worked with Go):

You've given a definition of simplicity (correcting mine). Did you find Go simple? What other thoughts did you have?


D is betting hard on memory safety and, apparently, systems programming.

Honestly I think, had they gone in the direction of D as a better Python, and application development, they would have made bigger wins (in terms of popularity) ... better tooling, better IDEs, refactoring, better GC, better libraries ... better, faster programs.


The better Python market isn't an easy one to crack because it's a bit crowded. Go (despite its perceived and real faults) has succeeded in this space by delivering better GC, good libraries, static typing and faster programs. Python itself is improving rapidly, for example with the addition of Type Hints. It's pretty difficult to be a better Python in 2017.

The better C market, on the other hand, hasn't seen any real contender other than C++ gain traction for decades. Rust is trying now and it could get there, but it's still a massive opportunity.


I really, really don't mean to pile on Python... but every time I've had to interact with it I've been shocked at how slow it is compared with C or C++. I tend to write scientific code to process datasets in the range of 10 GB; for simple operations Python code can take hours as opposed to just minutes or seconds in C.

I'm sure it's possible to write more highly optimized code in Python, but it never seems to be the case with the code bases I've worked with. With my own code (a few years ago now), I spent significant time optimizing and the final result was something that was still significantly slower than C/C++. Overall, for my work, I couldn't see an advantage.

Am I just doing it wrong? Currently Go seems far more appealing to me and I've enjoyed using it (maybe I should also try Rust).


Python is a high-level interpreted language, and C/C++ are low-level compiled languages (even compared to other compiled languages). While you might be able to optimize your Python code to run faster than it does now, it's never going to match the performance of C/C++, nor is it intended to.

Go will be a significant speedup over Python, but likely won't quite match the speed of C/C++ for most tasks. Then again, the ease of development in Go will likely be noticeably better than in C or C++. Rust could potentially match the speed of C++, but it's a much more complex language than Go, so it will take some time to master. (Personally, having spent a large amount of time programming in Rust, I find myself more productive than in Go due to the powerful abstractions present in the former and lacking in the latter, but conventional wisdom is that this won't be the case for most people, and it certainly won't be when you're first learning Rust).

EDIT: Anecdotally I've heard that D is a nice language, but I have no experience with it, so I can't comment on how productive it is or how fast D code will run.


> Go will be a significant speedup over Python, but likely won't quite match the speed of C/C++ for most tasks.

It's true, Go won't match the speed of C, but it comes pretty damn close. Say you have 4 cores: 1 would be dedicated to GC and the other 3 would be executing your program. That sounds sub-optimal, meaning that a Go program could only be 75% as fast as an equivalent C program. But you have to wonder how many developers are capable of writing multi-threaded, correct C code. Go is simple to learn, lightning quick to compile and runs reasonably fast. By no means is it perfect, and it is not the tool I'd use for every task in front of me, but it is good enough for most tasks.

I agree that Rust is the future in the low level space.


> Say you have 4 cores, 1 would be dedicated to GC

In my opinion, idiomatic golang is not that GC heavy. A slice (a bit like ArrayList) of structs can be a contiguous chunk of memory, as opposed to an array of object references, which requires allocating each object separately.

There are an order of magnitude fewer allocated objects than in Java, for example. (Perhaps the situation will change once Java gains value types, in maybe Java 9.)


Javascript is also a high-level interpreted language, but it's much faster than Python. Python is pretty hard to optimize in a JIT (and it gets much less attention than Javascript).


True, but it's also miles behind C/C++. While Python is arguably the slowest of the mainstream interpreted languages, they're all orders of magnitude slower than C.


Hmmm. I kind of expect them to be 1 or 2x slower. Maybe even a single order of magnitude... But Python seems often to be 100 or 1000x slower. For my applications this is kind of unfortunate.


> Go will be a significant speedup over Python, but likely won't quite match the speed of C/C++ for most tasks. Then again, the ease of development in Go will likely be noticeably better than in C or C++.

In my experience you're right, though one might think: just learn to make C and C++ development as easy as Go development and enjoy the best of both worlds.

I agree with you, but it seems illogical on the surface.


> Go...but likely won't quite match the speed of C/C++ for most tasks.

Last I checked golang's generated code, 3 years ago, it was full of redundant instructions, most notably redundant range checks. It seemed to be about half as fast as equivalent C/C++ code.

I believe golang can approach something like 80-95% of C/C++ performance once codegen is good.


You can easily get the best of both worlds with Lisp-derived languages.


It's been my understanding that Lisp and its relatives aren't designed to compete against C for speed either. Is this wrong?


C was designed on a PDP-11, a computer much more powerful than the mainframes where Lisp had already been running for 10 years, so Lisp implementations were already quite good for most tasks by then.

While C designers were trying to create UNIX at AT&T, Xerox PARC was busy creating Smalltalk, Interlisp-D and Mesa/Cedar.

Other companies were creating Genera and the Connection Machine.

In the end Worse is Better won, because the machines were too expensive for most pockets, normal developers could not grasp Lisp, those companies were mismanaged, and UNIX was kind of cheap when comparing prices, with source code available almost for free (AT&T was prevented from selling it early on).


"How to make Lisp go faster than C" (2006) http://www.iaeng.org/IJCS/issues_v32/issue_4/IJCS_32_4_19.pd...

And these figures aren't using the SBCL Lisp compiler which should currently be faster than the one cited.

You are correct, it wasn't designed to compete against C. And it has a garbage collector. But make the right choices and a compiler like SBCL can produce surprisingly "clean" (optimized) machine language code.


C compilers have presumably gotten faster since then as well, of course


Common Lisp has some very fast implementations (I think SBCL is the fastest) and it was designed to be a systems language. It can be optimized to be as fast as C.


> It can be optimized to be as fast as C

Sorry if dumb question but how is this possible when it's garbage collected?

Now I really feel like I need to learn a Lisp though.. Is there a certain Lisp you would recommend that's practical enough I'd use it for projects? Racket?


You can start by reading about Genera and Interlisp-D

https://en.wikipedia.org/wiki/Genera_(operating_system)

https://www.ics.uci.edu/~andre/ics228s2006/teitelmanmasinter...

http://www.softwarepreservation.org/projects/LISP

Just check the hardware capabilities of systems having Lisp 1.5 support, several years before C was created.

Lisp is not only about manipulating lists; it grew to natively support all relevant data structures, including primitives to handle systems programming tasks.

Regarding GC and systems programming, there are plenty of GC enabled systems programming languages, the main point is that they offer features to control the GC and also do stack and GC free allocation if required.

Some examples would be Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Oberon-07, Active Oberon, Component Pascal, Sing#, System C#, D


> Sorry if dumb question but how is this possible when it's garbage collected?

On a big or complex system you end up doing at least one of these two things:

a. Having to manage many temporal (transient) objects in memory. Thus you end up doing some sort of garbage collection, either doing it yourself, using the facilities provided by the language, or using a library.

b. Allocating big fixed blocks of memory (such as arrays, or doing a "malloc") and then using this block of memory as your storage area, which you then manage.

Usually in C programming you do (b); you can also do (a), but it is not so easy.

Usually in Common Lisp programming you do (a) very easily, but you can also do (b) by allocating an array and a bit more stuff. It is not difficult, really.


> how is this possible when it's garbage collected?

Common Lisp lets you give "hints" to the garbage collector, so you end up with pretty much what you'd have in C but without the manual memory management.

> Is there a certain Lisp you would recommend that's practical enough I'd use it for projects?

Racket is very nice and comes with great documentation, a huge standard library and a very handy IDE. Clojure was hyped a lot so it has lots of libraries, which however might be of dubious quality or unmaintained by now, and it's on the JVM, which might or might not be a good thing for you. Common Lisp is very versatile and "pragmatic", but lacking in external libraries.


I like Lisp, though Scheme in particular. However, I have no idea where I would turn to in order to write an application with GUI that needs to be cross-platform and, possibly, derive a mobile version for iOS and Android from the same codebase.



Given that GP said "cross-platform" and then separately "possibly derive mobile", I'm guessing that only supporting OS X on the desktop might not be enough for them. That being said, I'd be very surprised if there weren't cross-platform desktop GUI libraries available for Common Lisp.


I'd be shocked if WxWidgets weren't supported, maybe even Qt.


Interesting, but the premise seems to be that UI code should be done outside of Lisp. Whereas, at least in my experience, this is where most code and trouble lives.


I suggest taking a long, hard look at Julia.

It's squarely aimed at high performance numerics and scientific programming, but is in fact a general purpose language with great potential. It's already very, very efficient (written using LLVM), yet has a lot of the clean elegance of Python.


I am in a similar situation, as I develop a lot of scientific code. I've tested several solutions, and so far the most viable for me (though far from perfect) is Python+NumPy+Fortran, the latter exposed to Python using f2py (much easier than binding C/C++ to Python). Not sure if this might be a good solution for you, it depends whether the bottleneck in your code is in I/O or in raw calculations (in the former case you're not going to get any significant advantage from Fortran).

I've tested several other languages that might be good for my purposes. From what I have seen, Rust is not ideal because of the acknowledged poor support for multi-dimensional arrays [1] and the strange semantics for floats (due to the need to be «correct» in the presence of NaNs).

Two options that might become interesting once they gain a stable status are Julia [2] and Chapel [3]. The former might be exactly what you are looking for; the latter is ok if you usually run your codes on HPC platforms. But be prepared to face some friction when sharing your work with colleagues or publishing papers: the little penetration of these two languages in the scientific world implies that people will face difficulties in using/understanding how your code works.

[1] Look for «array» in the page https://users.rust-lang.org/t/on-rust-goals-in-2018-and-beyo...

[2] https://julialang.org/

[3] http://chapel.cray.com/


Julia could have had 10x the adoption if they had painted themselves as more of a general purpose language, like "write the web app and your neural networks (GPU-accelerable, bien sûr) in the same language ftw!"... if only they'd bolted in some support for "non-weird-looking classic OOP", as in "wanna type `window.` and have even the dumbest editor autocomplete methods".

Imho they got it backwards: Python got so popular in scientific computing because it was first and foremost a general-purpose scripting language and didn't isolate (library-wise) sci-coders from all the other developers; second because it was designed as a "teaching language", so simple examples looked close enough to pseudocode and were liked by people in the business of having to explain their code (academia); and only lastly because it happened to be easily extensible for things like matrix operations (operator overloading...) and had non-weird general syntax (as in "functions are values and that's that", not Ruby-like weird blocks and procs that spooked math and physics folks...).

My only hope for a "general purpose & sci computing" language would be Go, if they dragged their heads out of their asses and added some damn sugar to the language - like operator overloading, some parametric types or templates (good luck getting a physicist to understand when to use type switches and type assertions...), and something to allow "casual" developers to not be bitten by things like "not all nils are equal, wtf?!". I like Go, but it's a hard language to sell to a non-100%-professional developer...


> if only they'd have bolted in some support for "non-weird-looking classic OOP" like in "wanna type `window.

Ah, so you want an inferior way of doing OOP? Because the reason Julia is the way it is, is that it supports OOP based on multiple dispatch (and over more than one object per method). I think this is a major quantum leap over the OOP you are requesting.

Sincerely, if that's what you want, there is always Java to make you happy.


The "inferior" way has a clear advantage: discoverability!

What if I want to call a method but I totally forgot its name and I'm not sure how to search for what it does? What if I don't know what I want to do, but I have this "thingy" and I want to see "what can it do"? Or "what messages can it receive"? This is generally how you think when you build "interfaces": you start somewhere in the middle and figure out how & what to do along the way, regardless of whether it's "glue APIs" or "web GUIs" or "desktop GUIs".

OOP is "message passing" first of all in my view. This is how it all started in Smalltalk, and single dispatch makes sense if you want to be able to "ask an object/actor what messages it can understand".

Now yeah, truly retarded languages like Java and C++ crapped all over the "OOP as message passing" idea, and also rejected multiple-dispatch... resulting in a truly horrible experience that forces anyone to drown in design patterns.

Julia is not retarded, multiple dispatch makes sense in a functional or in a math-oriented language, but I'd prefer things like operators to be multiple-dispatch and methods single dispatch. This would be the "having the cake and eating it too" solution that would please both "math coders" and "professional software engineers", otherwise they'll keep using different languages and we'll keep wasting time writing "glue" between them imho. It's incredibly hard to reason about functions that dispatch on more than 2 arguments anyway, and those that dispatch on 2 operands could be written just fine in a Haskell-like operator syntax like:

    M1 `mySpecialXVectorOperation` M2
and if you truly need n-ary dispatch in some special cases, add a syntax like this:

    {M1, M1, V3}.mySpecialXVectorOperation(42, false)
that could at least in theory allow IDEs and such to provide some assistance to the developer! (not sure any IDE developers would bother developing so much introspection though...)

(But practically thinking, I still think you can get away with single-dispatch + a few tricks, and have easier-to-reason-about code as a side effect ;) )

EDIT+: And I get it that math-people have a more "first you truly understand it, then you apply it, then maybe you try and expand on it" mindset that makes my kind of reasoning completely alien to them, but lots of software is and will be written in a "jump in the middle of it, hack your way to the solution, in the end re-re-re-refactor it until it not only works, but it also makes sense and you can explain it and maybe even partially prove it works" way.

EDIT+2: And don't get me wrong, Julia is a great language, its developers did some unbelievably awesome work... it's just not a "bridge the gap between physicists/engineers and software-people" language, despite being like 90% of the way there... I wish I could "sell it" to my "software-people" colleagues better :)


They're going after Matlab, Octave and R rather than Python.


That's why I think "they get things backwards" :| ... Python went after Matlab, Octave and R, and for ML/AI it has almost completely replaced them. For stats and data exploration it shares the pie with R because they are such different languages, with different strengths.

It's a shame that the best language designers seem to also be the worst at "market positioning", and programming languages are a pretty much fashion- and promotion-driven area. Anyway, maybe I should shut up, find the time to delve into Julia and, when I know enough, try to give them a helping hand instead of whining...


Python (as in the language designers or core people) did not "go after Matlab, Octave and R". Non-affiliated or loosely-affiliated people who wanted to do their data-oriented work in Python wrote a bunch of libraries, and the result after many years was a pretty competitive set of tools. So yeah, I think something similar can be done for Julia to improve it for more general purpose tasks.

In an educational/academic setting, I think even more important for adoption than Python being a general-purpose language is that it is already relatively widely adopted in various industries as a tooling language. For good and bad, universities today are very keen on providing skills with immediate relevance to employers. The same network effect is kinda what is keeping Matlab alive...


> more important for adoption than Python being a general-purpose language is that it is already relatively widely adopted in various industries as a tooling language. For good and bad, universities today are very keen on providing skills with immediate relevance to employers

Maybe I phrased it confusingly, but you're saying the same things I thought. I don't mean "go after" in a conscious/targeted way; I meant "it evolved towards taking over". And "immediate relevance to employers" is provided by being a "general purpose language" (as in "look, you can quickly whip up web apps, general server admin scripts and even Excel plugins with it"). And "tooling" mostly equals "general purpose" in my book: there is no clear definition of "tooling", it's about "glue code" that needs to do "a bit of everything" to tie things together... so you need a "general purpose" language for tooling.

(Now I see that maybe some people use "general purpose" as in "you can write anything from device drivers and OSs to web apps if you really want to", but this is "systems languages" in my terms, i.e. C and C++ and the newcomers Rust and D. By this definition even Go would be very far from "general purpose"...)


D has quite good multi-dimensional array support, though probably not as flexible as Python's. One of the best libraries in this area is ndslice (part of mir-algorithm) - http://docs.algorithm.dlang.io/latest/index.html. See the "Image Processing" example to get a sense of the API - http://docs.algorithm.dlang.io/latest/mir_ndslice.html. mir-algorithm is the base for the high-performance glas implementation written in pure D that outperforms libraries with hand-written assembly like OpenBLAS in single-threaded mode (multi-threaded is not yet ready). See http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/.... (Note that this blog post is a bit outdated - since then the author rewrote large parts of the library, IIRC.)


The trick to high-performance scientific calculations in Python is to use libraries like NumPy (possibly via Pandas) and their large set of vectorized operations. Then the majority of the number crunching happens in optimized C/C++, with Python primarily 'orchestrating' it.

For the cases where the data-manipulation functionality desired is missing and pure Python is problematic performance wise, one can use Cython to bridge the gap.

Or if that is not enough, write those functions in C against the NumPy C API, and expose that as Python APIs.


If you're interested in high-performance scientific computing, then I guess you'll be pleased to know that many use D for this domain, specifically for its high performance and also the high-level features that make prototyping as easy as (if not easier than) in Python. In addition to good C/C++ interop, there are also libraries for interfacing with Python and R. Be sure to check http://blog.mir.dlang.io/, https://github.com/kaleidicassociates/lubeck, https://github.com/libmir, https://github.com/DlangScience and https://www.youtube.com/watch?v=edjrSDjkfko


What everyone in this subthread is looking for is LuaJIT. It's the fastest JITed language. Comparable to C for many tasks. It has a very tiny memory footprint and integrates into C and C++ well. And there's torch if you need to do fast matrix math or machine learning.


That's open to interpretation. Pony, for example, is faster and safer than LuaJIT or C.


Python is interpreted, C and C++ are compiled. They target different niches. Python is more focused on ease of use than performance.

Usually when writing scientific code in Python you're going to want to at the very least use numpy. It wraps various C functions for many time consuming tasks.

To get the most out of numpy you're going to have to vectorize your code, as Python's looping constructs are notoriously slow. Instead of writing something like

    for i in range(0, len(v)):
        v[i] = w[i]**2 + w[i]
You'd write something more along the lines of

    v[:] = w**2 + w


It is up to the implementation of a language whether that language is compiled or interpreted. For instance, there are C interpreters and Python compilers. Neither language is limited to being one or the other. It's just that usually people use C compilers and Python interpreters, but there's nothing inherent in those languages that limits them to those. It's all up to the implementation.


This is true, but not always in a useful sense. There exist language features whose semantics are expressly dynamic, and if your language includes those features it will frustrate static compilation. For example, if your language includes `eval` (as Python does), then any precompiled binary would also need to include a complete Python interpreter. This is why efforts to compile Python (and other languages that are traditionally interpreted) often omit such features, and why languages that intend to be compiled tend to forgo them in the first place.


Which is why for me the best way is to be able to choose between JIT/AOT deployments, instead of a plain interpreter.

Maybe one day PyPy will finally become the default option.


> It is up to the implementation of a language whether that language is compiled or interpreted.

While that's perfectly true, it didn't seem necessary to go into it.


This seems like it would be great for a list comprehension.

I believe the main appeal of using comprehensions is that they speed up the iteration process at run time, so wouldn't a list comprehension be just as useful here, if not more so?

You could also use the functools and itertools libraries for really fast iteration, possibly, depending on what it is you are trying to achieve.

If you are specifically talking about lists (arrays), why not use a deque? It is set matched, and Python saves run time by automatically knowing each value will be a finite forward sequence.


But would that be faster than numpy? numpy in a typical setup would use SIMD or another math kernel (MKL, BLAS, etc.).


It wouldn't.

Pure Python code is never going to out-perform a low-level optimized C library. Especially not when comparing iterative vs vectorized code.


Oh no, of course not. Sorry, I must have missed something here. I was speaking specifically of iterating over/with values. Numpy is optimized for this kind of thing. Though you can help yourself out and make sure your Python code is also optimized by using comprehensions and itertools, I would suppose.


Have you tried out any of the options for speeding up Python apps, like Cython, writing parts that are in the perf bottleneck in C (if that is applicable), using ctypes / cffi to call C libraries for some tasks, etc.? You can use ctypes or cffi to call either existing libraries or ones you write.


Try out the Numba package: https://numba.pydata.org/


Have you tried Cython? That seems like a much smaller leap than a new language.


According to their respective Wikipedia articles, D is 8 years older than Go. Given that you're saying that Go had a lot of success breaking into the Python market, why couldn't D have instead if they had put their efforts there from the beginning?


I suspect a large part of go's initial success is the names attached to it; both Google and the team have great reputations. Go's continued success is a credit to the language, but maybe D has had some discoverability issues?


>According to their respective Wikipedia articles, D is 8 years older than Go.

And Go itself is some years old now (created in either 2007 or 2009, according to its Wikipedia article - maybe those two values mean initial creation and first public release). So Go is either about 8 or 10 years old. And that makes D either 16 or 18 years old.

>Go had a lot of success breaking into the Python market, why couldn't D have instead if they had put their efforts there from the beginning?

So when D was new or just a few years old (13 to 16 years ago, say), the Python market was a lot smaller than it is today or has been for the last few years. It might not have even been a target for them, for that or any other reason - another reason could be that Walter is from a systems and compiler background, so he might not have been too interested in the domain of interpreted languages (just guessing here, maybe he will comment on this). But BTW, D can be used almost like a scripting language due to its speed of compilation. See the rdmd command and the "Why D?" and "D as a scripting language" articles on the Infognition blog (a google away).


I've always been attracted to systems programming, and it's what I know best. Hence D is angled that way.


Cool!


And on embedded even C++ hardly managed to overtake C.

https://youtu.be/D7Sd8A6_fYU


>Go (despite its perceived and real faults) has succeeded in this space by delivering better GC, good libraries, static typing and faster programs.

Also fast compilation, which D also has.


Most Algol-derived languages have it; it became a lost art as C and C++ pushed them aside.


Interesting, I didn't know that most had it - I only knew about Pascal and Delphi.


Ironically, there are very few choices for programmers who want to write memory safe, high performance systems code. There's plenty of choice at the Python level.

BetterC is a way for C programmers to gradually move to memory safety without disrupting their valuable, tested investment in working code.


D's objective is/was not to be popular but rather to be exactly what it is: a better low-level language.

From what I can gather, it succeeded pretty well at that, lacking maybe some hype. The recent open-sourcing of the toolchain might help though.


> D's objective is/was not to be popular but rather to be exactly what it is: a better low-level language.

I don't believe that for one second; every language strives to be popular and reach large adoption, since more developers == more maintainers == bigger ecosystem == more reach into enterprise.


It's not so simple, because large adoption at too early a stage causes the language definition to freeze before it reaches maturity.

Simon Peyton-Jones from the Haskell community explaining exactly that: "What happens if [a language] become successful too quickly? You can't change anything!" https://www.youtube.com/watch?v=re96UgMk6GQ&t=1395


Is this the same as what I've read of as the (humorous) Haskellers' motto: Anything but success! or Avoid success at all costs? or something like that? :) Meaning maybe that they do not want to dilute the purity of their language for industry success?

Edit: A quick google finds this, which is enlightening:

https://news.ycombinator.com/item?id=12056169


I would hold up Haskell as another counter example to your assertion.


Would you really, though?

I can definitely picture some proponents taking enjoyment from their programming language being perceived as inaccessible to the masses. Insofar as it makes them feel more elite for their knowledge of it.

But even then, the ego gratification depends on there being a wide audience of people who know "of" the language and its inaccessibility. If people don't know that you're elite, then well... are you?

Proponents of programming languages notwithstanding though, I don't believe for a moment that the creators of any programming language do not wish for it to obtain wide adoption.


Wow! I think you're being extremely uncharitable to SPJ, who I believe expressed the sentiment about Haskell originally.

There are tons of reasons not to wish for adoption. For instance, supporting commercial use of a programming language is a lot of effort and takes a very different shape from pure language/compiler experimentation and research: more software engineering and less computer science.

It is true that the creators probably wish for the product of the research (as in, the successful ideas) to be adopted widely, but that's not the same as wishing for popularity of the framework that proved the ideas. The researchers can focus on their strengths as can commercial compiler/language developers.


Well, you're wrong about Haskell but in the general case you are right.

The idea that the creator of Haskell tried to put forth when he said he did not want the language to be popular was that it would mean losing the freedom to change things. As soon as a programming language achieves general adoption you immediately have the issue of technical debt and backwards compatibility to contend with, which would be a huge drawback in a research language.

It says something about the qualities of Haskell that it achieved substantial popularity in spite of this.


You're right. The aim for D is total world domination.


Of course, but while still being itself.

Just like you would like to be personally successful, but won't sell your soul or betray your principles for it.


The HolyC language was mandated by God and therefore needs none of those things.


The two are clearly not mutually exclusive. The standard library, IMO of course, is already structured in such a way that it can be used in a very high level way.


The only difference I see is in marketing.

D could be marketed as a better Python and it has enough features to qualify.


There are many D programmers who use D as a fast Python.


What do people mean by 'a fast Python' in this context, when the language syntax and features look nothing like Python?


Walter and others who know D much better can give a fuller answer, but I'll add my 2c, based on what I've read and some use I've made of D:

- The Wikipedia article about D says that Python was one of the languages that influenced it:

https://en.wikipedia.org/wiki/D_(programming_language)

- After programming in D for a while, I noticed that a) it felt comfortable to program in it, and b) that it did feel a bit like Python, in the sense that, once you've learned some of the basic features [1] and are writing small but non-trivial programs, the language does not seem to get much in your way, as they say. Also there are a few things where its library functions are a bit like Python, like using File(filename) to open a file and get a File object to operate on. Not exactly similar syntax (though there may be some of that too, if you overlook the superficial things like braces instead of indentation to delimit blocks of code), but more of a "feel" - maybe it is the way some of the functions are designed, at a somewhat higher level of abstraction than C. Also, both Python and D were influenced by C.
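
For instance, something along these lines reads much like Python's open()/iteration idiom (a minimal sketch from memory, so treat the details as illustrative rather than authoritative):

  import std.stdio;

  void main()
  {
      auto f = File("data.txt");     // opens for reading, much like Python's open()
      foreach (line; f.byLine())     // lazily iterate over lines
          writeln(line);
  }   // f's destructor closes the file when it goes out of scope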

Incidentally, Python seems to have both influenced and been influenced by many languages (14 and 11 respectively):

https://en.wikipedia.org/wiki/Python_(programming_language)

[1] I've not tried many of the advanced language features yet (and D has many language features, including some relatively advanced ones), but I found it somewhat easy to grasp even basic use of function templates. But you can do a lot even with the basic language features and standard library.


I like Walter's writing. He is very articulate; terse but in a polished way. Relies on you knowing a thing or two first though.


I have really been enjoying working with Rust lately. I also enjoyed working with Go prior to that. Maybe I will give D a try too, see if I have similar luck.

I can find things I like and dislike about each of the new languages I try, but it kind of feels like there is a glut of options for me right now, and each one has held mostly positive surprises. Good problem to have: too many new languages that don't give me a headache, hard to choose.


Why remove RAII?

It's not fundamentally incompatible with "Better C" semantics, especially if there are no exceptions.


RAII requires exceptions to be correct. I hesitate to say a language has RAII otherwise.

Exceptions have two issues:

1. D exceptions currently require the GC. There is a Pull Request to fix that, but it isn't incorporated yet.

2. More problematic is that the Dwarf exception handling mechanism requires a language-specific "personality" handler. This is supplied by the D runtime library. Trying to trick the one from the C runtime library into working with D isn't a solved problem at the moment.


Rust uses RAII all over the place and doesn't have exceptions :).


Why does RAII require exceptions? RAII is routinely used in C++ with -fno-exceptions.


Because D code may sit in between code that throws an exception and code that handles it. Without EH support, the RAII destructor won't get called when the stack is unwound.
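
To make that concrete, here's a minimal sketch of such a middle frame (mayThrow is a hypothetical callback into code compiled with exception support):

  struct Lock
  {
      ~this() { /* release the lock */ }    // RAII destructor
  }

  void middle(void function() mayThrow)     // D frame in the middle of the call stack
  {
      Lock l;
      mayThrow();   // if this throws and this frame has no EH tables,
  }                 // unwinding skips ~this() and the lock leaks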


Isn't it generally undefined behaviour to have exceptions pass through code that isn't compiled with support for them? I.e. not running some destructors is the least of your worries if an exception hits C/-fno-exceptions/-betterC code.


Undefined behavior or not, implementations of languages in gcc and Windows tend to support it, even if the language itself does not. I consider it my job to make this work if at all practical, because users do not like ugly surprises.

I don't know how Rust's RAII deals with exceptions thrown by lower level code. If someone better acquainted with Rust could chime in here, it would be interesting.


> I don't know how Rust's RAII deals with exceptions thrown by lower level code.

Unwinding across an FFI boundary is considered undefined behavior.

https://doc.rust-lang.org/reference/behavior-considered-unde... last bullet point.


That's exactly what I wanted to know. Thanks!

Full D is compatible with foreign exceptions and unwinding. I'll see about making D as Better C also compatible, but I suppose Rust's precedent makes it acceptable to not handle it.


No problem!

This isn't my super strong area of expertise, but don't you have to specify how the unwinding works? I believe this is part of the reason why Rust punted; you'd have to specify exactly what kind is supported. (Rust uses libunwind for panics, incidentally.)


The how is handled by providing a language-specific "personality" function in the generated exception tables. The D runtime library has one for D. But with D as Better C, I'd have to find a way to make the C personality function work for D.


Ah right, makes sense. Thanks :)


If you mean lower-level Rust code, there are no exceptions. Errors are just regular values encoded in the type system. If you're familiar with OCaml options and results, it's the same thing.


There's a subtlety here: panics are exceptions, but they can be disabled without also disabling RAII.


How does it work if an exception gets thrown through a C stack frame that has outstanding resources (for instance, pointers that are freed later in the frame)? Are you referring to manual SJLJ?

In any case, it seems like C manual destruction has exactly the same problems as RAII does, because fundamentally the issue is passing exceptions through something that doesn't understand them, not RAII itself.


Gotcha. C++ has the same problem when its exceptions are thrown through C or -fno-exceptions C++ frames.

If that's the only hold-up, IMO the way to go is to abort at the boundary, when an exception would propagate into frames without unwind tables. RAII is too valuable to do without!


> RAII requires exceptions to be correct.

I'm not exactly sure what you mean here, but all I really want is guaranteed destructors. Can we just have those?


> Can we just have those?

It's a good thought. D as Better C is new for us, and we'll be flexible in making it work better.


RAII is not removed by design. It is just work in progress. It will be added to -betterC when its implementation is more mature for this mode.


"Exceptions, typeid, static construction/destruction, RAII, and unittests are removed"

Thank you guys

"But it is possible we can find ways to add them back in."

Over my dead body


Does anyone know what the package management story is like for D? And also what is the D ecosystem like for embedded systems?


It is called dub.

https://code.dlang.org

Regarding embedded, there has been some progress lately, but it still needs some improvements.


I have tried D and loved the language, but I feel the ecosystem and environment issues need to be addressed ASAP. Here are a few inputs: (1) Setting up the environment on different operating systems, e.g. GNU/Linux variants. Compilers such as LDC are not really easy to set up on all systems.

(2) Memory usage during compilation can get out of control, resulting in mid-build crashes if sufficient memory is not available.

(3) Setting up a simple HTTP server should be made simple (ideally the way Go lets any regular Go program become an HTTP server).


High memory usage could be caused by CTFE code. There is an existing project to improve the CTFE interpreter, called newCTFE.


> What may be initially most important to C programmers is memory safety in the form of array overflow checking, no more stray pointers into expired stack frames, and guaranteed initialization of locals.

Are there any docs about this anywhere? http://dlang.org/spec/betterc.html doesn't really describe this stuff.


It is not described in the betterC page as these features are not new for D. The betterC page only describes the differences versus the full D feature set.

Documentation for the individual features mentioned:

http://dlang.org/spec/function.html#safe-functions

> array overflow checking

http://dlang.org/spec/arrays.html#bounds

> guaranteed initialization of locals

http://dlang.org/spec/type.html http://dlang.org/spec/declaration.html#void_init

> no more stray pointers into expired stack frames

DIP1000 describes D's scoped-pointers approach to memory safety - https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md. This is a work in progress and the document doesn't reflect the latest state of things. Conceptually, scoped slices/pointers/references are similar to the concept of borrowing that you guys have in Rust.
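
A tiny illustration of the default-initialization and bounds-checking behaviour (the exact diagnostics vary by compiler and flags):

  void main()
  {
      int x;          // locals get their type's default value; int.init == 0
      assert(x == 0);

      int[3] a;
      size_t i = 5;
      // a[i] = 1;    // would fail the runtime bounds check instead of
                      // silently overwriting unrelated memory
  }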


Great, thanks! That makes sense, but it wasn't clear to me in the post.


Is "BetterC" only for DMD? I see that LCD (the LLVM front end) also appears to have "--betterC" flag.


I'd love to see someone prove D as a Better C by porting the "small" SQLite codebase (124K lines of C) to D and running some benchmark/test code against it.

I used to work on an OO database engine that handles billions of records.

I ended up rewriting/overloading my own new/delete and redesigning how data is loaded/stored around it. That shortened the open/close time for large documents from hours to seconds.

Basically, one can mmap billions of records directly into data structures accessible via the C API in a few ms. One can also dispose of billions of records in a few ms. I saw similar design patterns in the SQLite code base.

A language designed with memory safety (GC) in mind won't allow anything like that, as far as I know.

I'd love to see some D experts prove me wrong.


D provides the same low-level access as C. Heck, in D you can even use inline assembly, so there are really no barriers to what you can achieve. MMapping files and casting their contents to structures is an often-used technique. The GC is only one of the many ways of getting memory in D. You'll just have to manage the other ways yourself, like in C or C++ ;)
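
Something along these lines, for example (a rough sketch using std.mmfile; error handling omitted and the Record layout is made up):

  import std.mmfile;

  struct Record { uint id; double value; }

  void main()
  {
      auto mm = new MmFile("records.bin");   // memory-map the file (read-only)
      auto records = cast(Record[]) mm[];    // reinterpret the mapping, no copying
      // records[0].id, records[1].value, ... are read straight from the mapping
  }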


Could you recommend some resources on these techniques?



Funnily, the D code looks more verbose than the C code.


Yes, in this example, but once you start looking at more interesting pieces of code it becomes close to impossible for C to beat D on the code-readability front without heavy use of macros. Check the runnable examples on the front page - http://dlang.org/.
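
To give a flavour of what I mean, compare how a typical filter/transform/accumulate loop reads as a D range pipeline (illustrative snippet, not taken from the linked page):

  import std.algorithm, std.stdio;

  void main()
  {
      auto data = [1, 2, 3, 4, 5, 6];
      auto total = data.filter!(x => x % 2 == 0)   // keep the even numbers
                       .map!(x => x * x)           // square them
                       .sum;                       // add them up
      writeln(total);   // 56
  }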


D is one better than C, but K goes all the way to 11.


(This is a slightly funnier joke than it may appear at first glance, because as well as being the name of another programming language K is in fact the 11th letter of the alphabet.)


M is 2 steps ahead of K


If I'm writing C, it's usually a small module or algorithm that I link into another language. Since D is supposed to be in gcc now, I wonder if it's reasonable to start including D code in e.g. Perl or Ruby or Haskell libraries.

The great thing about C is once you write it, you never really have to worry about the language changing 15 years down-the-line and breaking your code. The GNU version might have a similar property, since GNU tools stay around for a long time.


Now we need another flag to enable a better Java mode. By default D is in better C++ mode.


How about betterC# instead? Given that C++/CLI is a thing, I imagine that it would be possible to get D working on it. The JVM is a bit more limited in what it allows vs. the CLR. For instance, C# has the unsafe keyword that allows pointer arithmetic, which would correspond to @system in D. @safe D would correspond to the normal C# behavior on that front.
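
Roughly the correspondence I mean (a minimal sketch; D's @safe rules have more nuance than this):

  @system void raw(int* p)
  {
      auto q = p + 1;   // pointer arithmetic is fine in @system code,
      *q = 42;          // much like a C# unsafe block
  }

  @safe void checked(int* p)
  {
      // auto q = p + 1;   // error: pointer arithmetic not allowed in @safe code
      *p = 42;             // plain dereference is still permitted
  }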

Probably a lot of work though and I'm not sure why anybody would bother.


D already is and has been a better C# when taken as a language. No need for a command-line flag.


Microsoft is.

That is why .NET Native and CoreRT exist.

.NET 4.6 got a few new ways to control when the GC is allowed to run.

Also, C# got its reference types improved in version 7, and the roadmap up to 8.0 includes a few more goodies, like spans.


  int main(char** argv, int argc) {
Really? :-)


How embarrassing :-)

But it doesn't affetc the opertaion of the prorgam (I did test it).

Edit: fixed it


shrug if an argv falls in the lexer and there's no backend to hear it, did it really matter to the AST? ;)


Makes me wonder if the D programs work, as they also have the order reversed, but might possibly a) have different order defined, or b) do something clever based on the types (but then, what about

  **env
?)...



