Why is Rust difficult? (vorner.github.io)
309 points by chewbacha on Jan 22, 2018 | 254 comments



I like Rust so far, but there are a few things I think aren't true:

* That Rust is only harder because it enforces 'correctness.' It certainly is harder because it enforces correctness, but it's also harder because of how. I'm not saying there's a better approach to this, but I think a lot of people are implying that there isn't, and I don't think that's a safe assumption. I think that we could find ways to make equally memory-safe languages that go about enforcing safety in entirely different manners than with ownership and lifetime semantics.

* In fact, the entire idea that Rust enforces correctness. That only holds if you take your definition of 'correctness' to be memory safety; I would normally define 'correctness' to include rigorous mathematical proofs. Rust's safety guarantees are often accidentally blown out of proportion; they mainly aid in preventing security and concurrency bugs, but only a certain class of each. This is still useful, but this caveat really needs to be in your face more often, as a lot of people will not mention it when touting the benefits of Rust, and beginners can easily get confused about what exactly Rust prevents you from doing.

* The idea that Rust's approach is always worth the trade-offs. Go is another programming language I like, and there are definitely things that are simply easier to write in Go with few disadvantages. Fearless concurrency is a wonderful feature, but for embarrassingly parallel problems - like, often, web servers, where each thread is usually independent in terms of mutable state - Go works wonderfully. It also lets you shoot yourself in the foot in a way that Rust wouldn't, but often for a lot of simpler apps it still ends up being easier.

* The idea that solving the compiler errors makes you understand the problems correctly. For example, you could always just clone memory at every occasion, return the input instead of borrowing, etc. In fact, these things might be easier for a beginner to do. There will probably be a ton of Rust anti-patterns that come about from trying to resolve compiler errors.


> Go is another programming language I like, ... Fearless concurrency is a wonderful feature, but for embarrassingly parallel problems Go works wonderfully.

I adore Go's concurrency model but loathe Go's actual language. The constant repetition in error handling, the lack of generics, and the lack of parameterised types and Option<> make it feel like a children's toy-set version of C instead of a useful modern language akin to Rust and Swift (and modern JavaScript).
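
To make the repetition concrete, here's a minimal sketch of the pattern (readPort and port.txt are made up for illustration) - every fallible call is followed by the same three-line check:

    package main

    import (
        "fmt"
        "io/ioutil"
        "strconv"
        "strings"
    )

    // readPort reads a file that should contain a port number.
    // Every fallible step repeats the same `if err != nil` ceremony.
    func readPort(path string) (int, error) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            return 0, err
        }
        port, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            return 0, err
        }
        return port, nil
    }

    func main() {
        port, err := readPort("port.txt")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("port:", port)
    }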

I really really love the concurrency model though. And as far as I can tell there isn't much like it available elsewhere. You can make a similar concurrency model in Rust, but you have to use much heavier OS-level threads to do it, and the rest of the crate ecosystem doesn't support it. Erlang/Elixir do it but come with a much higher runtime performance penalty. Pony is interesting but very new - there aren't a lot of libraries for it, and when I was playing with it the compiler seemed crazy slow.

I'm a little bit tempted to make a simple compile-to-go language. I'd get lynched at Go meetups for fragmenting the ecosystem, but it might be worth it. I like almost everything about go except the language itself.


> I'm a little bit tempted to make a simple compile-to-go language. I'd get lynched at Go meetups for fragmenting the ecosystem, but it might be worth it. I like almost everything about go except the language itself.

I would love this; I feel entirely the same. The runtime is pretty good, performance is nice, concurrency is great, the tooling is wonderful (if you ignore the GOPATH nonsense and the lacklustre package management solution). The language itself is quite frustrating to use, full of needless repetitive boilerplate and a mediocre type system.

I'd love a similar language with all the goodies that I find make development safer and more productive: proper enums, pattern matching, sum types, generics, and a handful of other features.


The Elixir performance penalty compared to Go isn't as big as people think. It largely depends on what you are doing, but the perk that you get is consistency of response time.

This is one of the better articles showing both, comparing Python, Go and Elixir.

https://medium.com/unbabel-dev/a-tale-of-three-kings-e0be17a...


The nice thing about Rust not having Async IO built into the language is that arbitrary third-party implementations are possible on a level playing field with the async framework being developed by the core team.

For example, there is the May[1] concurrency library. It provides alternative implementations of the standard library's IO interface, but does Go-style automatic suspend/resume, so it still looks like blocking code, which is nice. From what I can tell it's still early days for a one-person project, but it is interesting, at least.

[1]: https://blog.zhpass.com/2017/12/23/may-announcement/


Let me offer a counterpoint. Every language I have used that has gone the route of "let the community make their own concurrency libraries" has turned out to be a mess; specifically Ruby and Python.

On the other hand, the languages with concurrency out of the box have all had much better ecosystems and experiences: C#, Node.js (now and with TypeScript), F#, Golang, etc.


"Batteries included" is almost always better than "level playing field" because of network effects. The larger the set of common types / protocols available, the higher up the abstraction stack interfacing between two unrelated third party components can be. Minimalist environments have to pay taxes in the form of glue code and adapters for different ways of representing the same underlying concepts (though the costs are reduced in duck-typed languages). Even worse things happen when you try and use two third-party components and they share different versions of a common dependency - sometimes there's no easy way out.

Inferior approaches sometimes do get baked into standard libraries. Having a culture of versioning, deprecating, and migrating is better IMO. But it's even better to do a really good job the first time around. Not easy, but I didn't say it was!


It also has possibly irreconcilable soundness issues :/

libfringe is another extremely interesting player in this space.


I'd like to read more about those soundness issues - do you have a link?



> I'm a little bit tempted to make a simple compile-to-go language. I'd get lynched at Go meetups for fragmenting the ecosystem, but it might be worth it.

I don't know about the "might be worth it" part, and I don't think Go's success is because of its concurrency model. Instead, I think Go wins because it can get things done quickly, in one stop.

The concurrency model has been designed to help achieve that, and the huge battery pack that comes with the language is also there to help achieve that.

I'm not discouraging you from designing your language, though. In fact, I would be very happy to see a new language which can give me all the benefits that Go gives me and at the same time just ... simply be a better language.

My ideal language is a combination of Go and Rust: big standard batteries (help me get things done and encourage a coherent ecosystem - and by the way, they don't have to be in the standard library), and fearless programming (help me avoid mistakes).

Sadly, Rust doesn't want to have big batteries for some reason :(


Rust does want to have big batteries. There has been work on promoting some crates as "the" tool to handle certain tasks. See the Rust cookbook[0].

Rust is against pulling these into the standard library because that forces them to make additional stability guarantees. When you do that, you end up with Python's `urllib`/`urllib2` situation.

[0] https://rust-lang-nursery.github.io/rust-cookbook/


Have you looked at Clojure’s core.async? It’s a port of Go’s concurrency model to Clojure.


> I'm a little bit tempted to make a simple compile-to-go language.

That's what I did for my current project. I love Go, but for web apps it's not the best in my opinion. I had some spare time and ended up building a VM that compiles TypeScript to bytecode. The performance loss is minimal, I still use the Go std library, and it is a pleasure to get VS Code support and generics (the VM ignores types, but the TS compiler gives you static type safety, autocompletion, refactoring, etc...). Also, exceptions are great for web apps, where usually there is nothing else you can do but save all the info that you can and show an error to the user.


I write somewhat simple programs and webapps for my job, from time to time. I use Python and its standard library, some modules, and the Bottle Framework. Pulling data from APIs, doing analysis, taking some user input, editing configs, etc.

I hardly ever use classes unless I'm extending a vendor library. I have never used generics. Why are generics such a critical component of a programming language that every thread about Go mentions them? It's an honest question from me.


Python is a dynamically typed language so when you say you've never used generics, that makes sense -- the concept does not apply to dynamically typed languages. But I bet you frequently use lists and dictionaries, and perhaps occasionally use higher-order functions like map, filter, and reduce. All of those would be generics in a typical statically typed language, because with static typing you don't just have "a list" but "a list of ints" or "a list of strings".

Go has built-in generic arrays, slices, and dictionaries, but nothing else, and you can't define your own functions that work on those without specifying the type. So say you write a function that shuffles the elements of an array, like Python's random.shuffle(list). You can't make your function work for all kinds of arrays. You have to have a different function to shuffle arrays of strings, arrays of ints, arrays of FooBarClass, etc. even though the shuffling logic doesn't give a damn about what sort of thing is in the array.
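
A rough sketch of that duplication (the function names are made up, and this is just one way to write it) - the shuffle body is identical, only the element type changes:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Fisher-Yates shuffle for a slice of ints.
    func shuffleInts(a []int) {
        for i := len(a) - 1; i > 0; i-- {
            j := rand.Intn(i + 1)
            a[i], a[j] = a[j], a[i]
        }
    }

    // The exact same logic again, just for strings.
    func shuffleStrings(a []string) {
        for i := len(a) - 1; i > 0; i-- {
            j := rand.Intn(i + 1)
            a[i], a[j] = a[j], a[i]
        }
    }

    func main() {
        xs := []int{1, 2, 3, 4}
        ss := []string{"a", "b", "c"}
        shuffleInts(xs)
        shuffleStrings(ss)
        fmt.Println(xs, ss)
    }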

I have a suspicion that this occasionally leads developers to write code inline that they previously would've extracted into a utility function. See the answers here:

https://stackoverflow.com/questions/12264789/shuffle-array-i...


Thanks. I took a year of CS ten years ago and haven't had to do statically typed development of any size since then - certainly not developing libraries.

So essentially, generic containers and function overloading are damned near impossible. Got it.


Python doesn't need generics because it doesn't try to statically type your code in the first place.

    def example(a):
        return a
Would be perfectly legal Python code. But in Go you would have to choose the type of `a` and duplicate the function under a different name if you want it to work with a different type.
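
For illustration (a rough sketch, and the names are made up), the closest Go equivalents are either an interface{} version that discards the static type, or one copy per concrete type:

    package main

    import "fmt"

    // Generic, but throws away static type information: the caller
    // gets back an interface{} and has to assert the type.
    func example(a interface{}) interface{} { return a }

    // Statically typed, but must be duplicated for every type used.
    func exampleInt(a int) int          { return a }
    func exampleString(a string) string { return a }

    func main() {
        fmt.Println(example(42), exampleInt(42), exampleString("hi"))
    }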

So, when a language implements static typing without generics they actually mean 'lots of code and approaches legal in Python would be rejected'.

So generics don't add a feature to the expressiveness of the code; they fix a bug in the type system so that the expressive code is considered legal. Of course, better type systems have existed since the 70s, but the set of people who know the ins and outs of how to implement them and the trade-offs behind them does not include Rob Pike. His interests and skills are different (and the cause of some of the better features of Go).


When you don’t have type safety you don’t need generics. Example: in python you can call a function with any arguments (numbers, strings, whatever) and it can return anything (usually something of the same type). With go you can do this as well, but you lose compile time type safety, have to add a bunch of gross code, and incur a small performance penalty.


> Why are generics such a critical component of a programming language that every thread about Go mentions it?

Personally, I do not feel that strongly about generics; it would be nice to have them, but for my purposes, I can live without them.

But still: generic container types would be very nice. I don't need to use it very often, but Go's sort.Sort is very uncomfortable to use. If Go had proper generics, it would be easier to define a "generic" interface to iterate over things - currently, Go's builtin slice and map types are privileged over user defined types. There are probably more issues I cannot think of right now.
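
As a sketch of the sort.Sort discomfort (the types here are made up): sorting a slice of structs by one field means defining a named type with three boilerplate methods just to satisfy sort.Interface:

    package main

    import (
        "fmt"
        "sort"
    )

    type person struct {
        name string
        age  int
    }

    // byAge exists only so that sort.Sort can be called.
    type byAge []person

    func (a byAge) Len() int           { return len(a) }
    func (a byAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    func (a byAge) Less(i, j int) bool { return a[i].age < a[j].age }

    func main() {
        people := []person{{"bob", 42}, {"alice", 30}}
        sort.Sort(byAge(people))
        fmt.Println(people)
    }

(sort.Slice with a closure eases this particular case, but the same pattern shows up whenever an API wants to abstract over a collection.)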

None of those are deal breakers for me. Go is highly compatible with the way my mind works, to such a degree I can easily forgive it all the things I do not like about it. But I also work in C# from time to time, and seeing how the .Net framework uses generics makes me wish Go had them, too.


Here's a framing I found useful: as a user of a library, you may not particularly use generics. But as an implementor of a library, on the other hand, they're extremely useful.

However, comparing to Python won't make much sense; people see generics as essential to statically typed languages. You don't need them for dynamically typed ones!


> Here's a framing I found useful: as a user of a library, you may not particularly use generics. But as an implementor of a library, on the other hand, they're extremely useful.

Agreed. Something that's been underlined for me since picking up TypeScript in addition to JavaScript. I can take or leave TS when writing a script, but I consider it indispensable when writing a library.


In Python, duck typing provides the benefits of generics, minus the static guarantees. If you're using duck typing in python, then you should be able to understand why generics can be useful.


Go uses structural typing, which feels like duck typing but is actually checked at compile time.

That's the reason why the Go community feels generics are not the top priority (euphemism.)

Only in very specific cases (implementing data structures, for example) do you feel the need for generics. Maybe also in serialization, although there it's really normal to pass a generic container (object, void *, interface{}) and use introspection.
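
A minimal illustration of that compile-time "duck typing" (types made up): nothing declares that Duck implements Quacker, the compiler just checks that the method is there:

    package main

    import "fmt"

    type Quacker interface {
        Quack() string
    }

    // Duck never mentions Quacker; having the right method is enough.
    type Duck struct{}

    func (Duck) Quack() string { return "quack" }

    func greet(q Quacker) { fmt.Println(q.Quack()) }

    func main() {
        greet(Duck{})
    }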


I do hope it's not true that people from the community would attack you for writing a compile-to-go language.

I don't think it's such a bad idea; considering how stable the language itself is, it's a pretty solid bet.


Spend some time on golang-nuts, proposing ideas from "languages for academics".


>writing a compile-to-go language.

Why not fix Go instead?


You can't fix Go directly. You have to convince the language designers that it needs fixing. The fastest way of doing that would likely be at least a proof of concept, or better yet, a complete tool that gets significant community uptake.


Ah, unlike programmable languages like Scheme or Lisp, where I can modify the language as needed, by using code.


> Pony is interesting but very new - there aren't a lot of libraries

The lack of libraries is what drove me away, too. The type system is gorgeous, though. I really hope Pony grows a decent library ecosystem soon, the language itself was very pleasant to use once I got past the initial learning curve.


> I'm a little bit tempted to make a simple compile-to-go language. I'd get lynched at Go meetups for fragmenting the ecosystem, but it might be worth it. I like almost everything about go except the language itself.

If you rephrased that as 'a templating language for generating Go code', it would slot into the 'go generate' build stage and people would love you (if you got it right ;) ). The only current solution to boilerplate and the lack of generics is code generation, but the templating languages all seem to suck for Go code generation, so people are stuck with writing Go programs to generate Go code (like the various enum generators), or attempting to marry Go code with the standard text templating, which is awful to work with (like, say, Xo).


I haven't used Go all that much, but aren't the interfaces meant to be used as generics? I.e. in your function you need some data X, and you'll do something with that X. The way you achieve this is to expect an X that satisfies a given interface, say Exampler. If a particular X does not support it, you write the method(s) on X's type to satisfy the Exampler interface. IIRC Pike said somewhere that you don't need generics because there should be at least a singleton interface that all possible values of a function parameter satisfy.

(Edit: BTW, I do not necessarily agree with that.)


There is certainly some overlap between those features, but I think they largely cover different usecases.

For example, consider trying to implement a List data structure in Go. What type does the List hold? It should pose no constraints on the type of things put in it, except that they must all share the same type.

The List can store `interface{}`, but then there's nothing to stop you from adding two different types to the list. And what type of value would a method like `getFirstItem()` return? Just an `interface{}`, forcing the user to cast.

Go's interfaces express constraints on individual types (e.g. type A has methods Foo and Bar) but not across functions defined on a struct (e.g. the type that a List stores is the SAME type that the getHead() function returns).
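
A short sketch of that interface{}-based List (hypothetical, just to show the problem): mixing element types compiles fine, and every read needs a type assertion that can fail at runtime:

    package main

    import "fmt"

    type List struct {
        items []interface{}
    }

    func (l *List) Add(v interface{})  { l.items = append(l.items, v) }
    func (l *List) First() interface{} { return l.items[0] }

    func main() {
        l := &List{}
        l.Add(1)
        l.Add("two")         // nothing stops us from mixing types
        n := l.First().(int) // caller has to assert; a wrong guess panics at runtime
        fmt.Println(n)
    }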


Pike clearly doesn't even believe his own argument because he added generics for lists and maps. He just doesn't allow users access to the same capabilities.


Kotlin has coroutines, channels and select.


They are a very fresh addition in a very fresh language. But they look pretty well implemented (explicitly designed after those of Go.)


I don’t like any of these things either. But if adding them made compilation take as long as Rust or Swift, I’d rather not have them personally.


On the other hand we ripped out channels and went back to mutexes. And my experienced Go-developer friends seem to have all the same reluctance to use channels after ending up in channel hell.

There are good aspects of Go concurrency like how the whole ecosystem is async by default (like Node's), but truly praising the model is something I mainly hear from beginners.


This is something I noticed with Go. Channels are great and all and occupy a decent chunk of the tutorials and whatnot, but they seem to be rarely used apart from a fairly cumbersome way of handling timeouts and cancellation. It seems that if you stick concurrency in your libraries you end up in knots, and it's left up to the code that uses the libraries to wire up all the synchronous & blocking bits.
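
For what it's worth, the timeout pattern referred to above usually looks something like this (a sketch; slowWork is made up):

    package main

    import (
        "fmt"
        "time"
    )

    // slowWork stands in for some blocking operation; it reports its
    // result on a channel so the caller can select on it.
    func slowWork() <-chan string {
        out := make(chan string, 1)
        go func() {
            time.Sleep(2 * time.Second)
            out <- "done"
        }()
        return out
    }

    func main() {
        select {
        case res := <-slowWork():
            fmt.Println(res)
        case <-time.After(1 * time.Second):
            fmt.Println("timed out")
        }
    }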


I don't know Go, but could you point to more details on the "channel hell"? I would have assumed it was a good way to structure parallelism so I would like to see why not.


You make some good points but I think it's incorrect to compare Rust & Go. Rust is a systems programming language. It competes with C/C++ more than other high-level languages. In fact, while Go was originally positioned as a systems language, it ended up attracting people from scripting languages like Python because its performance characteristics put it there. You'd probably never bother building a serious web browser in Go, but you would (& Mozilla is) in Rust.

In terms of correctness, I've never heard claims about improving security issues, except in so far as those caused by memory/concurrency - think of it as necessary but not sufficient for security. This "limited" class of bugs is responsible for quite a large number of runtime issues & they can be frustratingly difficult to find/fix (+ be confident that you did actually fix it). It's hard to say how much better things will play out in real-world software development as at some fundamental point there are always unsafe calls which weakens the guarantees Rust can make (but does put an explicit boundary on where you should go looking for bugs).

As for compiler errors not helping you understand the problem, I have yet to encounter a compiler that does that. What Rust does do extremely well is explain errors very clearly in the terminal (with an error code that has pretty good online documentation), and also give you hints on simple ways to potentially alter the code to fix it. As a beginner, I've found it way quicker to fix those bugs (even within macros) than the compiler issues I encountered learning C++ - granted, back then compilers were a lot worse on that front, but even these days with C++ I've struggled fighting the compiler/preprocessor.

Also, the Rust compiler appears pretty vibrant with lots of improvements being made to help with user friendliness, so it's possible that further improvements in inference might reduce the problem spots (it's already pretty magic to me).

I started learning Rust a few weeks ago & those are my impressions so far.


>You make some good points but I think it's incorrect to compare Rust & Go. Rust is a systems programming language. It competes with C/C++ more than other high-level languages. In fact, while Go was originally positioned as a systems language, it ended up attracting people from scripting languages like Python because its performance characteristics put it there. You'd probably never bother building a serious web browser in Go, but you would (& Mozilla is) in Rust.

I'm only comparing Rust and Go where they overlap. For example, Rust webservers versus Go webservers. There are other languages that overlap with different parts of Rust, such as, in fact, Ruby and Python.

Go and Rust are both more general than the languages people compare them to. Both have C interop of various levels. Both allow unsafe code that touches memory directly. Both are high performance, relatively low level, and both provide some level of memory safety (though, Go provides much less.) I think they overlap a whole lot more than people think.

>In terms of correctness, I've never heard claims about improving security issues, except in so far as those caused by memory/concurrency - think of it as necessary but not sufficient for security. This "limited" class of bugs is responsible for quite a large number of runtime issues & they can be frustratingly difficult to find/fix (+ be confident that you did actually fix it). It's hard to say how much better things will play out in real-world software development as at some fundamental point there are always unsafe calls which weakens the guarantees Rust can make (but does put an explicit boundary on where you should go looking for bugs).

Well, I personally would claim it helps a lot of security and crash issues. Buffer overflows, use-after-frees, race conditions, and more. Also, most people do not overstate the safety of Rust, but beginners frequently misunderstand it. This is because people often list the benefits without listing the caveats.


> As for compiler errors not helping you understand the problem, I have yet to encounter a compiler that does that.

Not very mainstream, but take a look at Elm's compiler errors [1]. They worked hard in this direction, and the result is both helpful and beautiful.

You can try loading up any of the examples in the online editor [2] and introducing a random bug, just to see how the compiler <del>barks</del> tries to gently teach you.

[1] http://elm-lang.org/blog/compiler-errors-for-humans

[2] http://elm-lang.org/examples


Rust's error messages are (well, try to be) similar, and in fact explicitly inspired by Elm: https://blog.rust-lang.org/2016/08/10/Shape-of-errors-to-com...


I think my point was that they may teach you if you already have some fundamental knowledge (or help remind you of the rules anyway) but they're not instructive in and of themselves (you could copy-paste the suggestion to "fix" your problem quickly but that's not really increasing your understanding I think).


The error message catalogue [1] mentioned at the end of that first blog is interesting, wonder if Rust would benefit from something similar.

[1] https://github.com/elm-lang/error-message-catalog


Rust has a strong policy of adding tests when adding/changing code, so almost all error messages are tested in some form, including "UI" tests, that check the exact formatting.

Additionally, there's https://doc.rust-lang.org/error-index.html


>In terms of correctness, I've never heard claims about improving security issues, except in so far as those caused by memory/concurrency - think of it as necessary but not sufficient for security.

Huh? Those might not be sufficient, but are the source of 99% of security issues.


Spectre & meltdown would like to have a word with you. Yes, memory/concurrency are a large class of problems for C/C++, but plenty of security issues still exist in other languages (check out the CVE count for Django for instance). The thing about security is that attackers will predominantly use the path of least resistance. As prevention evolves so does the sophistication & vector of attacks (e.g. timing attacks attack high-level implementation details rather than buffer overflows or algorithmic flaws). Do you have any supporting evidence for your 99% claim? That seems vastly overstated.


>Spectre & meltdown would like to have a word with you.

Both are highly atypical, once-in-a-decade CPU issues, so they represent 0% of software-related bugs.


Those 2 bugs are part of the class of timing attacks, which are quite common in security, & no amount of memory safety will help you there (as Spectre & Meltdown have shown, they're not even restricted to the SW domain). The same goes for not sanitizing inputs: SQL injection, XSS, etc. AFAIK Rust doesn't do much on that front either. I don't disagree that memory-related errors are the cause of a lot of problems. However, I think that's because C/C++ is so common & it's such low-hanging fruit, why bother? I see no evidence that attackers are running out of tricks to pull to exploit SW regardless of the language it's written in or the countermeasures you have deployed.


> In terms of correctness, I've never heard claims about improving security issues

I would argue that the rest of your paragraph talks about how Rust (indirectly) improves security issues.

> As for compiler errors not helping you understand the problem, I have yet to encounter a compiler that does that.

Try misplacing a { in an average LaTeX document. But don't say that you haven't been warned. ;)

Alternatively, write some C++ code that uses std::map<std::string, std::string> or something like that incorrectly, and marvel at the page-long error messages with all the default template arguments expanded into an unreadable mess.

> Also, the Rust compiler appears pretty vibrant with lots of improvements being made to help with user friendliness, so it's possible that further improvements in inference might reduce the problem spots (it's already pretty magic to me).

I also recently got into Rust (coming from Go), and the thing I miss most is `gofmt`. Is there a standard tool-enforced coding style for Rust that the community agrees on, in the same way that the Go community has by and large agreed on gofmt?


> I would argue that the rest of your paragraph talks about how Rust (indirectly) improves security issues.

I think perhaps I didn't communicate my meaning clearly enough. Rust does significantly reduce the risk of a certain class of security problems (reduce not eliminate since it's highly unlikely you'll have 0 unsafe{} blocks anywhere in your dependency chain). That's not disputable since that's part of the language design. That's certainly an advantage it has over C/C++. However, security is far more than just memory safety & I have read nowhere that writing more secure code is a design goal for Rust (I'm not even sure yet such a thing is possible).

> Try misplacing a { in an average LaTeX document. But don't say that you haven't been warned. ;)

> Alternatively, write some C++ code that uses std::map<std::string, std::string> or something like that incorrectly, and marvel at the page-long error messages with all the default template arguments expanded into an unreadable mess.

I agree 100%. I think perhaps you misread what I wrote? I said I have not encountered a compiler where the errors help you understand the language.

> Is there a standard tool-enforced coding style for Rust that the community agrees on, in the same way that the Go community has by and large agreed on gofmt?

rustfmt


I think Elm has some of the best engineering around that: http://elm-lang.org/blog/compiler-errors-for-humans and http://elm-lang.org/blog/compilers-as-assistants

While it's not perfect and can't correctly identify all possible syntax/intent errors, it's a valuable tool for learning the language nonetheless.


> Is there a standard tool-enforced coding style for Rust that the community agrees on, in the same way that the Go community has by and large agreed on gofmt?

https://github.com/rust-lang-nursery/rustfmt

And it is planned to be distributed with the compiler and the cargo stack by default.



Ugh, C++ template errors, the bane of my existence (and why I personally avoid using anything beyond dead-simple ones unless I have to interface with STL).

The only thing worse is Java generics, if only because they literally tried to tack them on 10 years ago (heck, getting close to 15 years now) and we are still feeling the consequences of those design choices today (unless Java 8+ made major fixes to this - I haven't used it in enough detail to make a judgement).


Template errors produce a big wall of text, but if you know how to read it, it only takes a few seconds to find where in your code the problem is. You generally just have to look at the "required from here" message (in clang and gcc at least), which points to your own code 9 times out of ten.

https://my.mixtape.moe/xdkchm.webm



> I think that we could find ways to make equally memory-safe languages that go about enforcing safety in entirely different manners than with ownership and lifetime semantics.

Of course, but at the cost of requiring a garbage collector. Mandatory GC has three main drawbacks:

* It makes it harder and much less convenient to call libraries in this language from other languages.

* Low-resource embedded systems are a no-go.

* GC is typically a tradeoff between speed and memory usage. Manual memory management can be very fast and very lean.


> Low-resource embedded systems are a no-go.

Only if talking about microcontrollers with a few hundred KB, where even C is a challenge.

There are Java and Oberon implementations for single-digit-MB systems, like the Cortex-M4.


> There are Java and Oberon implementations for single digit MB, like Cortex-M4.

So? Just because you can use a language on a given system doesn't mean that you should use it in any serious context.

A GC typically makes memory usage and execution latency non-deterministic, or at the least very hard to analyze. If your washing machine software OOMs whenever the GC didn't run between two button pushes, you'll have a great Heisenbug.


Well, I happen to consider factory control management, weapon targeting, and missile tracking systems a pretty big, serious context.

Here is just one of them.

http://www.militaryaerospace.com/articles/2006/10/lockheed-m...

"PERC Ultra offered Lockheed Martin the responsiveness it needed to meet its most demanding timing requirements. In addition to real-time threading and deterministic garbage collection, PERC Ultra provided the instrumentation and VM management tools necessary to support the mission-critical real-time requirements of the Aegis Weapon System."

"The Lockheed Martin-developed Aegis Weapon System is the sea-based element of the U.S. Ballistic Missile Defense System. The Aegis Weapon System is a radar and missile system integrated with its own command and control system, capable of simultaneous operation defending against advanced air, surface, and subsurface threats."

Is that serious enough for you?


Is this where we bring in the anecdote about the missile flight control system which never freed any memory, because the minimum time to OOM was longer than the maximum flight time of the missile?


Ask and ye shall receive[0].

[0] https://news.ycombinator.com/item?id=14233542


> A GC typically makes memory usage and execution latency non-deterministic, or at the least very hard to analyze.

You could say the same about malloc and free. They're not deterministic at all either. Memory fragmentation is a huge issue as well, and can bring down the whole system.

Typical GC (not all flavors) has a huge advantage on microcontrollers: you gain ability to compact heap. No more fragmentation.

That said, I mostly try not to dynamically allocate anything in firmware or kernel drivers, whenever possible. Sometimes I write my own specialized allocators - for example, a simple wait-free allocator fast and rugged enough to call from an interrupt service routine.


> Only if talking about microcontrolers with few hundred KB, where even C is a challenge.

C is the default and not a challenge on 8-bitters.


So which compiler does offer 100% ANSI C compliance on something like a Z80 or PIC?


SDCC is standard compliant (even up to C11) and has support for Z80: http://sdcc.sourceforge.net/


Thanks for pointing it out, I wasn't aware of it and it does look good, but they clearly state the cases in which it isn't fully ANSI C compliant on page 24 of the documentation.


I think you're nitpicking.

That's close enough in my book. Those MCUs are tricky targets and I can certainly live for example with not being able to pass structs as return values. Or without re-entrancy. Perfectly understandable once you take into account limited IRAM space, 128 or 256 bytes, where stack, register banks and most of your temporaries and globals need to reside.


Which validates my point of using C on 8 bit CPUs being a challenge.


"Less than absolutely total standard compliance" != "a significant challenge compared to absolutely total standard compliance", especially if the differences are clearly documented.


The challenge is not being able to write idiomatic ANSI C, but rather having to tame the compiler into producing code comparable to hand-tuned assembly so it fits into those processors, most of the time using compiler-specific extensions.


You're sounding like a propagandist - like you want to just argue your talking points, rather than actually have a conversation where you listen to what the other side is actually saying. It makes you a real pain to talk to.

Just in case you're actually trying to engage in good faith, though, I'll try this one more time. If I have a compiler that is less-than-100% standards compliant, I may not be able to use a few features of the standard. That means there may be a few ANSI C idioms that I can't use. Of those, the number that I would choose to use on that size of processor is very, very few. So in practice, there is no "challenge".

Why would I use very few of these features? Because on a processor that size, you're not writing a huge app. You don't use all the functions in the standard library, you don't use all the keywords, you usually don't push the language very far at all. At worst, you might have to develop one or two idioms of your own. It's... mildly annoying, rather than the big deal you're trying to make it.


It is not a challenge, it is by far the most common way to program them.


Yeah: not being able to write idiomatic ANSI C, but rather having to convince the compiler to produce code comparable to hand-tuned assembly so it fits into those processors, most of the time using compiler-specific extensions.


SDCC is also full of bugs and the compiled code is not well optimized. For a low level CPU like that, you have to write assembly at some point.


I wrote smaller programs with it for a Z80 home computer which originally didn't have C support. I didn't encounter bugs, but it's true that the code generation is less than optimal for the Z80 when compared to manually written assembly (understandable because of the small register set), but the compiler supports mixing assembly and C very well, so no complaints here :)


Literally the whole industry doesn't care two bits about "100% ANSI C compliance", which I'd guess isn't even possible on a pure Harvard architecture like AVR or PIC.


Even the ESP8266, where RAM & flash are measured in kilobytes, can run a minimal version of Python. This whole "low resource can't run heavy languages" trope needs to die.


Comparison of a sensor fusion algorithm the likes of which are used in quadcopters:

  | impl | c     | py3    | mpy     | pypy3 | cy3   |
  |------|-------|--------|---------|-------|-------|
  | [ns] | 0.176 | 11.462 |  37.633 | 0.818 | 0.590 |
  | .c   | 1.000 | 65.126 | 213.824 | 4.648 | 3.352 |

mpy - MicroPython

cy3 - Cython

So, ... low resource can't run heavy languages fast (yet?). The good thing is that by using MicroPython and writing the time-critical stuff in C you can get the best of both worlds while paying the least.

edit: sigh, markdown table


If you add two spaces in front of each line, it formats as monospace. Like this:

  | impl | c     | py3    | mpy     | pypy3 | cy3   |
  |------|-------|--------|---------|-------|-------|
  | [ns] | 0.176 | 11.462 |  37.633 | 0.818 | 0.590 |
  | .c   | 1.000 | 65.126 | 213.824 | 4.648 | 3.352 |
Also, I'm taking a guess here and assuming that the values in the second line mean "execution time as multiples of the execution time for C".


Yes, it was `*c`, but the star screwed up formatting even more ;)


Why does GC make it harder to call libraries in other languages?


When you're working in a GCed language you assume your language's GC is responsible for freeing memory. When you interoperate with a language where owners are responsible for freeing memory, you have to have a way to "disown" structures you've created but passed into the ownership language (e.g. a callback you've passed to a library function) so that your GC doesn't free them, and a way to "own" structures you've received from the ownership language (e.g. values returned from library functions) so that your GC does free them, and neither of these things will be easy/natural in your language.

When you interoperate with another GCed language it's even harder, virtually impossible, because both languages' GCs assume they own everything and so anything that's visible in both languages will be freed twice.


Well, declaring untraced references in Modula-3 or Active Oberon, or doing native heap allocations in Nim, D, Java and C# is relatively simple.

Using GC languages doesn't mean doing 100% memory allocation via GC.


I've done it in Java; it's not officially supported in the language standard (or has only recently been added if so - certainly they were talking about it for years), and the wider language does not generally have the support or idioms you would want (e.g. try-with-resources was only introduced a couple of versions ago), libraries aren't oriented towards that style.... It's certainly doable but I'd stand by it not being easy or natural.


Although I mentioned Java, due to ByteBuffers and Unsafe, it is clearly not the best one of the set of languages that I mentioned.

Others on that list have specific language features for GC free allocation.


Languages interfacing through an ABI implicitly need to agree on their memory models, calling conventions, exception unwinding, etc.

There's lots of detail in the D language garbage collection docs:

https://dlang.org/spec/garbage.html

The whole thing might be instructive, but 28.3.3 has a bunch of particular expectations of the D GC.


Because you have to add support for GCing of foreign structures from these libraries, or keep the interface low-level and force the user of the GCed language to manually manage these foreign objects. Adding support for GC can be difficult if not impossible, because GCs often arrange memory in special ways (different pools, etc.) whereas the foreign library probably just uses malloc.


What I meant was: If language X needs a GC, then writing a library in X makes it difficult and very inconvenient to use that library from another language.

The comments of jonathanstrange and humanrebar are also spot-on.


As an occasional Ada programmer, I've taken a look at Rust several times and have decided to skip it every time. Ada might be a pain in the ass sometimes (alias rules...), but it's way easier than Rust. In my opinion Rust is a classical case of technology that gets in humans' way rather than serving humans. For me it's just not worth the hassle, especially since most of my programs do not require any soft realtime performance guarantees and therefore work well with way more convenient garbage-collected languages like Go, Common Lisp, and Racket.

That being said, Rust is already so obscure that it can easily replace C++, and I predict a great future for it. Programmers love obscure programming languages with steep, long learning curves that allow them to show off.


Second the Ada note. That language is so well constructed and thought out on many levels.

...except the outer, most superficial level. I'm genuinely afraid that it will never "catch on" because it just looks weird. (But not weird enough to attract that kind of people.)

It's a shame because

- Ada generic packages are exactly what C++ templates should have been

- Derived types and record extension is inheritance that makes sense

- Access types are obvious in hindsight

- Correct terminology for procedures and functions does help a lot in communication

- RAII in the shape of controlled types feels a bit bolted on but it works very well

- Named blocks and explicit closing is easy but very useful

- Class-wide types are deemphasized the way polymorphism should be

- The -gnatyy flag is almost as good as gofmt.

- Tasks and protected types form a very intuitive and safe concurrency mechanism

- Having contract-based programming as an option built into the language is way superior to relying on asserts in the procedure

- While not always budgeted for, when there is time to spend, SPARK is uh-mazing. Strong guarantees at relatively low cost, and reasonably easy to learn as well.


The thing that kept me from trying Ada (back a couple of years ago when I was writing firmware and so was kind of on its home turf) was a lack of good resources for learning it. I got a copy of the book "Programming in Ada" by Barnes, and did not find it very helpful. Often the advice seems to be to go read the Reference Manual, which I agree is highly readable for a language standard, but it's not aimed at users of the language. If anyone here has any other resources to recommend, I'd be happy to hear about them.


> it's not aimed at users of the language.

Is it not? It doesn't contain recipes, no, but it fully documents the fundamental units of the language you'll express your solution in.


Thanks for the rundown. Sounds like Ada has a lot of good ideas to steal!


Definitely! My C coding improved vastly after spending some time with Ada. Though it's worth keeping in mind that it's not only about each individual thing being good – it's also that they work really well together.


And the STL first implementation was actually done in Ada. :)


Yes and no.

Yes, Stepanov's first attempt at writing something like the STL was done in Ada. But no, it wasn't "the STL" - it was neither Standard nor Template.

And Stepanov abandoned Ada for C++, because Ada wasn't expressive enough for what Stepanov was trying to write.


Meaning that if Stepanov had not played with Ada for its first implementation, followed by Bjarne advocating that he use C++ instead, the STL would never have happened in its current form.

And yes, it wasn't quite the STL; we had quite a few variations of it, the most well known coming from SGI, until things kind of settled at ANSI.


I disagree. Stepanov wanted to write that kind of software. He would have done it in any vehicle he found suitable. If he hadn't started with Ada, he still would have wound up writing it in some language.

And C++ was among the better candidates for the language to use. It was more suitable than Ada.

And, do you have any basis for the statement that Stroustrup advocated C++ to Stepanov?


From the source itself,

"And, of course, Andy and Bjarne Stroustrup are responsible for putting STL into the standard."

"The support of Bjarne Stroustrup was crucial. Bjarne really wanted STL in the standard and if Bjarne wants something, he gets it. He is as stubborn as a mule. He even forced me to make changes in STL that I would never make for anybody else - I am also stubborn, but he is the most single minded person I know. He gets things done. It took him a while to understand what STL was all about, but when he did, he was prepared to push it through. He also contributed to STL by standing up for the view that more than one way of programming was valid - against no end of flak and hype for more than a decade, and pursuing a combination of flexibility, efficiency, overloading, and type-safety in templates that made STL possible. I would like to state quite clearly that Bjarne is the preeminent language designer of my generation."

http://www.stlport.org/resources/StepanovUSA.html


That says that, once Stepanov had written the STL in C++, Stroustrup advocated the STL becoming part of the C++ standard.

But you said,

> Bjarne advocating him to use C++ instead [of Ada]

which your quote here doesn't substantiate at all.


I got the names mixed up, it was Andrew Koenig not Bjarne.

"My attempts to implement algorithms that work on any sequential structure (both lists and arrays) failed because of the state of Ada compilers at the time."

"In 1987 at Bell Labs Andy Koenig taught me the semantics of C. The abstract machine behind C was a revelation. I also read lots of UNIX and Plan 9 code: Ken Thompson’s and Rob Pike’s programming style certainly influenced STL. In any case, in 1987 C++ was not ready for STL and I had to move on. "

"In 1993, after 5 years working on unrelated projects, I returned to generic programming. Andy Koenig suggested that I write a proposal for including my library into the C++ standard, Bjarne Stroustrup enthusiastically endorsed the proposal and in less than a year STL was accepted into the standard. STL is the result of 20 years of thinking but of less than 2 years of funding."

http://stepanovpapers.com/history%20of%20STL.pdf


> I got the names mixed up

Fair enough. I forgot he was encouraged to go to C++ by anybody.


I have done Pascal before and I wish Ada were more easily available.


> I have done Pascal before and I wish Ada were more easily available

Then you might be interested in https://nim-lang.org/ as a Pascal-like Rust alternative.


There is a free and open Ada implementation these days: GNAT, the GNU Ada Translator, is a full and complete implementation of Ada.


> That being said, Rust is already so obscure that it can easily replace C++

An obtuse syntax is a big downside of C++. Implying that a steep learning curve and the ability to 'show off' help keep the language where it is grossly misrepresents the vast majority of C++ programmers. Those traits were born out of necessity. In C++98 you simply had to be 'clever' to keep up with modern languages because the language was stagnant for a decade. If the committee can't add things, you do something clever and do it yourself.

C++ was stagnant for the dot-com era, Web 2.0, and the rise of mobile apps, and today it thrives. There's more to that than being entrenched in legacy code bases.

Rust emulating C++'s difficulty as a way to cultivate an elitist community, if that's what you're implying, will not work now that C++ is moving in the opposite direction.


We are certainly not trying to do that. Our stated goals for this past year, and possibly some of our goals this year, are the direct opposite.


I've taken a brief look at Ada several times, but gave up on it because it seemed difficult to get a cohesive set of documentation and examples (and ideally a good book) that were all in sync. Since Ada has been around for so long, there's an awful lot of outdated material. Do you have any suggestions on materials one should use while learning Ada for hobbyist purposes? I'd like to give it another go.


There are four versions of Ada, named after the years they were released: 83, 95, 2005, and 2012. Each new version adds features on top of the previous one.

Ada 83 has

- arrays,

- records (structs),

- derived types (subtypes carrying the same data as parent type),

- subtypes,

- access types (thick pointers),

- procedures (subprograms executed for their side effects) and functions (subprograms that return values),

- Named parameters

- reference arguments (greatly reducing the amount of pointers you have to deal with)

- Default values for arguments

- overloading by type

- in/out parameters

- private package parts (encapsulation)

- generic packages (kinda like C++ templates)

- exceptions

- tasks (concurrent processes that communicate with message passing)

-----

Ada 95 adds a lot of things that make it easier to do Java-style OOP:

- Record extension (inheritance; subtypes carrying additional data their parent type does not)

- Dynamic dispatch of subtypes

- Abstract types

- Subprogram access types (function pointers)

- Sophisticated package hierarchy (though potentially somewhat unintuitive: child packages extend their parents, parent packages are not umbrellas for children the way they are in Python)

- Protected types (protected objects)

- Modular types (unsigned ints with defined overflow characteristics)

- Unbounded_String (the std::string of Ada, compared to the char[] situation in Ada 83)

-----

In Ada 2005, the greatest addition is probably the Collections packages, which added several common data structures to the standard library:

- Instance.Method(Param) syntax sugar for Object.Method(Instance, Param)

- Interfaces

- Controlled types (RAII)

- Removed silly restrictions on access types

- Keyword "limited" to import only declarations

- Keyword "raise" takes exception message directly

- Pragma Assert

...and this is where I need to get off the train. Anyway, bookmark the Ada 2012 Reference Manual. It has everything in it.

Edit: ...but look what I found! https://www.adacore.com/about-ada/comparison-chart


This PDF introduces Ada for C++ and Java developers. After a quick perusal, it appears that it would be valuable. http://www.adacore.com/uploads_gems/Ada_for_the_C++_or_Java_...


Ada is another non-C language with odd semantics, so the problem is that it goes both too far and not far enough: it's not enough like C to be familiar, but it's not far enough from C to be able to go head-to-head with Haskell. If I want something that's very safe and don't care about it being similar to C, I'm going all the way to Haskell and not bothering with half-measures.


But it's a big deal to say, "Whenever safety matters, you always have room for a runtime with a garbage collector." I feel like you haven't substantiated that claim.


I feel exactly the same. It's so strange seeing everyone screaming about safety, yet they never gave Ada a try.


I haven't programmed in Ada, but reading the Ada intro has left me with a good impression of the language.

It's focused on safety even more than Rust and yet it has kept an understandable syntax.


> I would normally define 'correctness' to include rigorous mathematical proofs.

Do you have an example of a language that does what you're looking for?

On the proof side, Rust's built in unit testing is great, and allows for quick validation of code (proofs). But I think you mean something different.

> Go works wonderfully

Go does work wonderfully, you should definitely use what you like. For me personally, though, Go never excited me. Rust on the other hand continues to be exciting, I always feel like I'm learning something new (and I've been using it for the past 3 years).

> For example, you could always just clone memory at every occasion, return the input instead of borrowing, etc. In fact, these things might be easier for a beginner to do.

I think most Rust beginners (I know it was true of me, anyway) do often just clone everywhere. Eventually, you then replace that with Rc or Arc... And then one day you decide you want the fastest thing on the block, and now you know the ins and outs of Rust, so you decide to up the ante and put lifetimes on everything.

I recommend this pattern to all Rust beginners.


> Do you have an example of a language that does what you're looking for?

Dependently typed programming languages such as Coq and Idris let you write proofs about your programs. These languages are fairly academic, and most engineers probably won't find it worthwhile to use these tools. Personally, I love them. I wrote a short article about my experience with Coq: https://www.stephanboyer.com/post/134/my-unusual-hobby


Rust has the same level of rigour, other than lacking totality. Not having dependent types makes properties a lot more cumbersome and less practical to encode, but the actual proofs you get of the properties you do model are just as valid as those in Coq and Idris (modulo nontermination).


>> I would normally define 'correctness' to include rigorous mathematical proofs.

> Do you have an example of a language that does what you're looking for?

You mean, like SPARK?

> On the proof side, Rust's built in unit testing is great, and allows for quick validation of code (proofs).

No, unit testing is not proof of correctness; at best it's a hint that your code has no regressions. I'd take a proof over unit tests any time of day.


I think they mean something like what proof assistants like Coq and Idris do. ATS is another language that enables proofs, and it's more geared to the systems programming use case, as it's similar to C.

Note that unit tests prove only that the code works for those inputs and those code paths tested. Mathematical proofs are supposed to be exhaustive.


Sorry this comment is so late; ATS is also interesting because it has linear types that allow compile-time resource tracking similar to Rust's ownership and lifetime tracking. If you use linear types you can opt out of the garbage collector. That makes it very attractive for systems programming, and in particular embedded systems. I believe there is a demo of ATS running on an Arduino of some sort.

I haven't had the time to learn it yet, though I would like to.


You'd have to look at niche languages like Coq, Idris, or Agda if you want mathematical proofs. There's a lot of research that needs to be done before proven programs can become the norm.


> There's a lot of research that needs to be done before proven programs can become the norm.

At least equally importantly, we need a new generation of developers to grow up with these tools before they can become the norm.

In fact, the biggest contribution that academia could make to safe programs is to replace Java and C by Rust in all the programming courses, so that the next generation of developers is raised on Rust and makes that language (and its strictness mindset) popular in the industry in the same way that Java's relevance in the industry is based (in part) on its prevalence in curriculums.


As Zalastax was saying, by the standards of Coq, Rust doesn't have a strict mindset.

Rust isn't the first language to emphasise correctness. We've had Ada for decades, but it's not taken over the world.

Going to the extreme, full-bore formal methods will never be taught as introductory material on programming courses for the masses, but they will continue to be taught at good universities.

I'm not sure this is a bad thing. For most applications, ad-hoc develop-and-test makes good sense. RAD is important in some domains. Both formal methods and highly strict languages have their downsides.

There's more to correctness than language, of course. A shift toward correctness could be as simple as encouraging students to put runtime asserts in their Java code.


I hope we'll see more advanced type features added to languages. I find it very annoying when I can't be expressive enough and have to resort to comments and runtime errors. It's a very fine line to walk though. TypeScript is the language that I think is best at walking that line currently.


> On the proof side, Rust's built in unit testing is great, and allows for quick validation of code (proofs). But I think you mean something different.

Unit testing can only prove one instance of the input domain, e.g. that the function square() returns 4 under the input 2. Languages like Coq allow you to prove that the function square returns the squared input for every possible input.
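
To make the contrast concrete, here's a tiny sketch in Lean (not Coq, but the same idea): the first fact is what a unit test checks, the second is a statement about every input:

    def square (n : Nat) : Nat := n * n

    -- a unit test only checks one point of the input domain
    example : square 2 = 4 := rfl

    -- a proof covers every possible input
    theorem square_spec (n : Nat) : square n = n * n := rfl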


And for anything more or less complex and optimised (e.g. Egalitarian Paxos), you'll end up needing to prove not only the correctness of the algorithm, but also the correctness of the implementation (and implementations vary hugely).

I see much more of a future in being able to write proofs of correctness in the comments before the function definition in any language, rather than preferring a specific language for the sake of proofs.


C# Code Contracts can specify constraints (like range check, nullability, list sizes, etc) on inputs/outputs of functions and enforces these statically based on the code inside the function.
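
Rust has no built-in equivalent of Code Contracts, but a rough runtime analogue (my own hedged sketch, with made-up names) is asserting pre- and postconditions at function boundaries:

  fn percent_of(part: u32, whole: u32) -> u32 {
      // Preconditions, checked at runtime rather than statically:
      assert!(whole > 0, "whole must be non-zero");
      assert!(part <= whole, "part must not exceed whole");
      let result = (u64::from(part) * 100 / u64::from(whole)) as u32;
      // Postcondition:
      debug_assert!(result <= 100);
      result
  }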


> but I would normally define 'correctness' to include rigorous mathematical proofs

Well, Rust does push for strong algebraic types. They aren't as expressive as in other languages (the lack of higher-kinded types irks me all the time - no, hygienic macros are not a good alternative) but they are the best ones you will find in any bare-metal language.
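
For readers who haven't seen it, a small sketch of what "algebraic types" means in day-to-day Rust (my own example, not the commenter's code): a sum type plus exhaustive matching.

  enum Shape {
      Circle { radius: f64 },
      Rect { w: f64, h: f64 },
  }

  fn area(s: Shape) -> f64 {
      // If a new variant is added and not handled here, this stops compiling.
      match s {
          Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
          Shape::Rect { w, h } => w * h,
      }
  }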


I agree with everything, just wanted to nuance with a little thought - if cloning memory at every occasion makes the code safer and more correct but slower, that might be worth it. Always favor correct over broken. If you are coming from C to Rust in a domain that is actually better suited for Go... well at least you are arguably better off with Rust than C, right?

So I think that Rust will advance the overall state of the art, just a bit.


Oh yes, as long as the anti-patterns that emerge don't cause other bugs, they're OK; but some may lead to logical bugs. I'm not sure how likely that is, though- not enough history to go off of yet.

Rust is certainly a step in the right direction. I hope the toolchain and so forth end up in a state where all kinds of users can love it, though; I have seen some reasonable opposition to including Rust in kernel code, and that sucks.


I wonder if rust could be transpiled to somewhat idiomatic C... It should be easier than the other way around.


mrustc compiles Rust to C; I don’t think idiomatic is a goal though.


Regarding your first point, I have never seen anyone on the Rust side of things claim that Rust can solve most correctness bugs. The attitude has always been to take things one step at a time, and to slowly strengthen the type system to make it expressive enough to write more statically enforceable constraints.


"Most" is a problematic word because it requires a baseline to compare against. When you're coming from something like Ruby, Rust most definitely solves "most" correctness bugs. When you come from Go, probably not.


I do think a lot of the difficulty when starting Rust was all about it forcing you to do everything correctly, even when the 'correctness' wasn't needed for how the program was currently being used.

However, my experience learning Rust was that it was pretty easy to work through these things because of how precise the compiler is at describing the issue. I feel like the compiler was my teacher, and it kept on teaching me new concepts to look up in the docs. It was a grind to learn, but not a 'difficult' grind in the sense of getting stuck and banging my head against the same thing for a long time.


> I feel like the compiler was my teacher, and it kept on teaching me new concepts to look up in the docs.

I work primarily in F#, a functional .Net language, which also enforces strict rules at compile time to ensure correctness... With the benefit of experience it's amazing and comforting and lets you reason happily about applications at a higher level. It's one of my favorite parts of the language, TBH.

And while I see its strength over time, and see the benefit it gives to code bases, I also know that for your average C# dev used to firing up their app and debugging at run-time, or working in certain ways with their code, it feels like putting on a straight-jacket. It hurts the "first week" story to the benefit of long term concerns.

I totally agree: it's forced learning, which makes it a bit of a grind. But damned if I'm not a much better programmer for the effort :)


F# is nice, if only Microsoft would actually provide feature parity with C#, VB.NET and C++ tools.

I bet C# will sooner copy F# features than Blend, GUI designers or .NET Native will support F#.


That response is kinda offtopic for Rust... but:

F# is a first class .Net language that interops rather well with the others... If you have some burning need to use Blend or a GUI designer then you can always have a C# shim feeding into an F# domain/core with little problem. It's the same interop story as VB.Net, fundamentally.

That said: if you track MS's track record with those technologies and their Enterprise market viability, outside of resume-driven development there isn't a lot of reason to be throwing hard money at their new shiney shiney... C# lags drastically behind F# in areas not focused on the client: scientific computing, cloud computing, parallel computing, agent/actor systems, etc... Being a release cycle off on the coolest new drag-n-drop tooling from MS is a moderating influence that has created a much better cross-platform story, and OSS story, for F# than C# has ever achieved, despite the scale difference.

Oh, and C# has been actively trying to copy F# for many release cycles, including reworking basic language design decisions... It is, however, fundamentally unable to achieve that goal, given the language's overriding design. I'm impressed they're retrofitting non-nullable reference types, for example, but the language can't give the guarantees that ML languages can. Pattern matching is nice, but exhaustive pattern matching is what you want for the hard things, IoW. And those deficiencies make C# a non-starter or also-ran for a lot of the BigData, streaming, cross-platform systems you see in big data halls and scientific computing.

It's almost like different tools are good for different things ;)


I went to F# from C#, but I felt the transition was easier than going from C to Rust. I kind of noticed that I used some functional concepts a lot in my C# anyway. I always liked side-effect free functions, I tended to do a lot with LINQ transformations already, etc. F# made a lot of these things nicer to use (+ pattern matching + discriminated unions +++)


The two languages are approaching a superficial parity, as C# eats up more and more functional concepts. I find it interesting, too, because many of the arguments about things being 'too hard' compared to C# make no sense when those concepts are baseline knowledge in both languages. The positive parts of the functional approach will make code more solid in any language, and C# is working hard to support that approach :)

I've got over a decade's C# experience, and it's easily one of my favorite pragmatic languages for real-world work. MS has done an exemplary job of actively and aggressively improving the language. That said: barring the sole use case of trying hot-new-tech from Microsoft, I never pick it up anymore... My F# code is always more concise, smaller, more robust, and provides execution guarantees that save me metric buttloads of work and testing. The scripting support kills any need for those Python utility scripts. Type providers give structural guarantees for handling external data, and proper use of Discriminated Unions empowers meaningful domain modelling better than anything I've seen.

It's also unusual to me, with... well... a dog's age in the industry, to find myself compelled to open up projects from 6+ months ago just to stare at how deeply pretty some of my algorithms are. So pretty that if git didn't blame me, I'd have to conclude someone else wrote them ;)


I agree. I just wish more other people knew F# so I could use it for more projects. I've also almost stopped using Python for my scripting purposes. F# is just so much more elegant, and if the scope of the script grows, I can grow it to a proper project.

The only reason why I still use Python every once in a while is pandas. If I have to analyze some big tables, it's still far ahead of anything in any other language I know. Sadly F# Deedle feels very clunky, especially when processing a lot of text (which my data frames are usually full of).


F# is my language crush. Been following it and hoping to use it for a long time. I'm in a leadership position and could MAYBE push to standardize on it however the current more likely scenario is TypeScript(I know, I know) due to my having already successfully pushed that out haha. Unfortunately the ship is already a bit far from shore.

Trying to move us completely away from Python for infra due to its tooling/language services being a post-apocalyptic wasteland. We will always have it around for pandas and ML work though.


Big bang transitions never work, IME, but step-by-step is viable :)

Maybe you've seen it before, but if you're looking for conservative ways to introduce F#: https://fsharpforfunandprofit.com/posts/low-risk-ways-to-use...


>I'm in a leadership position and could MAYBE push to standardize on it however the current more likely scenario is TypeScript(I know, I know) due to my having already successfully pushed that out haha.

If you're currently on JavaScript that's definitely an improvement. You could push for F# + Fable in that case. The F# Ionide VSCode extension is written with Fable. It's like TypeScript, just better.

http://fable.io/


It's a good article, but it touches on a pet peeve of mine:

It bothers me when people who have written C or C++ mention lifetimes as a novel concept. What's novel is that in Rust the compiler cares about lifetimes. As a concept, though, lifetimes aren't novel and thinking about lifetimes conceptually is essential to writing C or C++ that's correct enough that it can be exposed to untrusted input. It's great that the Rust compiler relieves this cognitive load on the programmer compared to C and C++ leaving it up to the programmer to take care of.


Yes. And tackling vague conceptual problems by introducing a uniform mechanism and syntax leads to overcomplicated designs. That's from my experience using C++ and other languages, at least -- Rust has always intimidated me so I haven't tried it.

Many problems are actually not that hard if we start only with the truly given things. Namely at C level, or even assembler level. For example, I recently had the opportunity to convert a design based on OS threads to one employing userspace threads ("cooperative multithreading"). It was a big success, since the need for synchronisation primitives completely went away, and it was much easier to program than e.g. Javascript -- which requires callback chaining. Any language that can't give as good control as C does needs to build huge infrastructure to support approaches like that (if it at all suits the language's semantics). And strictly, even C is too much abstraction - to support green threads some non-portable parts are needed.


Because it forces you to handle edge cases.

Java is also harder than JavaScript because of this.

Yes, the edge cases Rust forces you to handle are different than those of Java and probably make more sense to solve, but I think this is still the root of the "problem".

While edge cases are part of "the rule" they feel like exceptions of it.

Most of the time you write 80% of the program code in 20% of the time and the remaining 20% eats 80% of the time because of the edge cases.

In many cases you can simply ignore the edge cases: yes, they will fail, but 90% of the time nobody cares. If the remaining 10% cause too many problems, invest some time in workarounds; but 10% of the remaining 20% isn't that much, so this will often cost you only about 10% of that 80% of the time.

If you can't ignore edge cases in your software, because even a 2% chance of people getting injured is a huge problem, you use Rust and fight with these cases.


You can also opt to skip handling these edge cases by either copying (decreasing performance) or unwrapping (decreasing reliability). The great thing is that Rust gives you the choice.
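
A minimal sketch of those two escape hatches (variable names are mine, purely for illustration):

  fn main() {
      let shared_name = String::from("report.txt");
      // Option 1: clone so the new owner doesn't borrow from the old one
      // (costs an allocation, but sidesteps the lifetime questions).
      let copy_for_logging = shared_name.clone();

      // Option 2: unwrap to skip the error-handling path
      // (the program panics right here if the parse ever fails).
      let port: u16 = "8080".parse().unwrap();

      println!("{} {} {}", shared_name, copy_for_logging, port);
  }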


Good article, although there are also some unforced errors of what the Rust community calls "ergonomics" - many of which have been addressed, and hopefully many others will be soon - that make things more difficult to learn than they absolutely have to be. But the author is quite correct that some things are just hard, and if a language is being "honest", it must expose them to the user.


What we could do better than we do today - and your comment about ergonomics alludes to our work to improve this - is ease the onboarding of that complexity we have to be "honest" about. It's a design constraint of Rust that it must maximize user control, but that does not imply that users have to be faced with all of those choices as soon as they first try to write Rust. In some respects I think this article is an attempt at a counterargument to that work (suggesting that it is "dishonest"), but I fundamentally do not believe that there is a contradiction between giving advanced users control and making it easier for new users to write correct code before they have a full understanding of the entire system.


Indeed, I used the term "ergonomics" specifically as such an allusion. But I am not sure I read the article as claiming that such efforts are "dishonest", I instead see it as a reference to some other languages and runtimes that run up technical debt for the sake of easy living in the short term.


I would like a language that has knobs (e.g. file-level pragmas) for strictness, which you could turn all to one side (to get something as strict as Rust) or all the way to the other side (to get something like Ruby) or somewhere in between.

For example, a REPL would be pretty non-strict: When you define a function, you don't have to annotate types on the arguments, and everything gets passed around as generic objects. However, the standard library would be pretty strict [1], so in your unstrict REPL code, types are checked (and implicit clones are made to satisfy the borrow checker) as soon as you call into the strict code.

[1] In fact, the package repository for this hypothetical language (the analog to crates.io etc.) should only accept strict code, or alternatively put HUGE warning signs on libraries that contain unstrict code.


For the record, in PL jargon, "strictness" commonly refers to how soon the arguments in a function call are to be scheduled for evaluation. Using it to talk about the extent of static correctness checks can be slightly confusing!


So Rust has a lot of that, both in turning off some checks and in the ability to write whole blocks of unsafe code.


I worked on some small learning projects in Rust about 6 months ago. When I got compiler errors related to the borrow checker and concurrency, I found that it was often easiest to look at whatever variable or function the compiler was pointing to, and then ask myself what could possibly go wrong that would somehow fall under the purview of the borrow checker, because the actual error messages didn't always point to the root of the problem.

I don't have the specific examples anymore, but I think my errors were caused by things like passing around or trying to mutate references created from a data structure inside an Arc pointer or safety issues relating to unscoped threads. At the time, I felt like those things would have been very difficult to debug from the compiler errors alone if I hadn't had past experience with those same issues in other languages (which is what the article says about needing to learn quite a bit about writing programs correctly)


I opened this link expecting someone to complain about the difficulty, verbosity, or other issues commonly mentioned by those learning Rust. I was pleasantly surprised!

I learned Rust by implementing an interpreter for one of my graduate classes. I really enjoyed it, especially their compiler which continues to become even more awesome every day.

If you want to get into systems level programming, or already work at that level, you owe it to yourself to explore this as an option.


I feel Go and Rust are great options for writing modern server-side applications today, but they exist on two different ends of the spectrum. The Go language is highly minimal and constrained, and forces developers to write unified code that performs extremely fast idiomatically. This uniformity is helpful for open source collaboration. Rust is a much more robust language (eg generics), but is more complicated to pick up and harder to read by comparison. Imho, Go is ideal for general web services, while Rust is better for system apps, tools, or complex web services like real-time exchanges or financial systems. This is just a general guideline in my mind, though.


Go does not perform "extremely fast idiomatically". There are a lot of cases where replacing channels with semaphores or atomics, avoiding too many go routines etc. can lead to significantly better performance.

But Go does perform well enough if one uses a simple idiomatic code style, while keeping things relatively safe and maintainable. That is its huge advantage.


Rust is simple enough for web app needs. It took me about 4 months to start writing JavaScript code no worse than that of open source libraries. It took the same time to learn Rust and start writing code that was clean and DRY enough, with almost 100% test coverage.

Rust gives you something Go can't: when you receive the warning "variable doesn't need to be mutable", and you know you made it mutable intentionally, you know you just caught a huge error (made by a human, not the compiler).
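
A tiny sketch of that situation (my own example):

  fn main() {
      // Declared mutable because an accumulation loop was intended...
      let mut total = 0;
      // ...but the `total += item` loop was never written, so rustc warns
      // "variable does not need to be mutable" - a hint that logic is missing.
      println!("{}", total);
  }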

It's just one example; in general I feel the compiler is my best friend, always 100% honest with me.


You contrast them only by complexity, but the actual reason for it goes further. Go achieves simplicity through the major trade-off of using a garbage collector. That's a price you pay just for using Go. Rust, on the other hand, doesn't impose it, following the approach of letting you decide what you want to pay for. This results in higher complexity.

So it's good to remember that simplicity of Go doesn't come for free.


>Imho, Go is ideal for

... tons of boilerplate working fast yet slower than C/Rust/C++.


One of these days I'll get around to Rust. I don't think it's fundamentally difficult, just different from most people's cowboy programming. I'm much more excited these days about program extraction from verified programs written in Idris or Coq, which is on a whole other level of pain and suffering.


Mind you, even correctness checkers aren't a panacea - they won't go all the way down to check whether you are using IEEE 754 correctly and handle all its edge cases; they rather assume "well-defined math properties". And as Donald Knuth used to say: "Beware of bugs in the above code; I have only proved it correct, not tried it."
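
A classic example of the kind of IEEE 754 edge case that "math is math" reasoning glosses over (a small Rust sketch of my own):

  fn main() {
      let x: f64 = 0.1 + 0.2;
      println!("{}", x == 0.3);   // prints "false"
      println!("{:.17}", x);      // prints something like 0.30000000000000004
  }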


Made me think, maybe they should check if you are using IEEE 754 correctly...


Numeric 'correctness' is a business-problem property.

For a game engine, correctness is speed and "right enough". For a research physics simulation, it may be "as right as possible".

You can't check whether floating point inaccuracy is correct generically because the definition is entirely dependent on the business needs of the program in question.


One could enforce the strictest correctness by default, and then allow/require to opt-out in cases where programmer decides it is OK, like a game.


An "ancient" conversation demonstrating this issue with early Coq:

http://grouper.ieee.org/groups/754/email/msg00574.html


Does anyone have any experience with TLA+? Does it make formal verification at least slightly less painful?


Will Swift ever be able to span all the way down to enable Rust-like performance in places where it’s necessary?

Or does it take much more than that for a language to be a viable choice for starting new C++-type projects?


To (potentially) match Rust/C++ performance Swift needs

1. Stack and static-allocated arrays.

2. True pass-by-reference for value types.

3. A sufficiently-smarter compiler that's more often able to elide dynamic allocations, reference counting, bounds checks, vtables etc. (Lifetime semantics will help with this).


A minor problem with Rust is aesthetics. The syntax might look cryptic at times, and error messages the compiler outputs are not everybody's cup of tea, I'd guess. These are certainly not all that important when you're considering it for the actual benefits of the language, but otherwise that is a little con for me. If you write:

  println!("Hello, {what}!", "world");
in your program, the error message you get is 16 lines long. And there seems to be no option to have terser error output.


Rust error handling along with the From trait is so good. I've been trying to replicate similar behavior in Node.js but it's not as ergonomic.
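
For readers who haven't seen the pattern, a minimal sketch (the error type and function names are mine, just for illustration) of how `From` plus `?` keeps error handling terse:

  use std::num::ParseIntError;

  #[derive(Debug)]
  enum AppError {
      BadNumber(ParseIntError),
  }

  impl From<ParseIntError> for AppError {
      fn from(e: ParseIntError) -> Self {
          AppError::BadNumber(e)
      }
  }

  fn parse_port(s: &str) -> Result<u16, AppError> {
      // `?` converts a ParseIntError into AppError via the From impl above.
      let port: u16 = s.parse()?;
      Ok(port)
  }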


After years of using Python, it's been difficult to wrap my head around GO and Rust.

I really wish there was a course on Rust / Go for python programmers.


There are several Go for Python programmers tutorials. A google search would throw up many. And it makes sense, because Go is, in my opinion, a good alternative to python as a scripting language. It is faster and safer although slightly more verbose.

There are some Rust for python guides as well, but I think it is the wrong approach in case of Rust. Rust is really a more systems programming language. It makes sense to either learn it from scratch, or learn from a C++ background. Even coming from a Java background, I found it easier to just forget about my knowledge of Java and start looking at Rust as something completely fresh. In case of Go, I was easily able to map it to the python and Java concepts I already knew. I think Go has a nice niche where it can replace both Java and python, but not C++. Rust can replace C++, but not Java or python.


> After years of using Python, it's been difficult to wrap my head around GO and Rust.

I had a similar post-Python problem, until I settled on Nim [0] - it has Python-like syntax, significant whitespace, and ease of writing (though don't expect the Python level of it), while compiling to C and providing significant speed benefits.

Also there is a brief "Nim for Python Programmers" [1], which might interest you.

[0] https://nim-lang.org/

[1] https://github.com/nim-lang/Nim/wiki/Nim-for-Python-Programm...


Ditto.. Nim was just easier for me to absorb as a Python person myself and I'm really enjoying it.



Go is IMO much simpler than Rust.


Definitely. I got my first Go program from zero to prod in about a week, where "zero" is the point where I started the tutorial (the "tour" on golang.org).


Yeah, that tutorial was time well spent!


(for newcomers) nothing works out of the box or as expected

program[0]="+"; | ^^^^^^^^^^ the type `str` cannot be mutably indexed by `{integer}` error: use of unstable library feature 'collections': needs investigation to see if to_string() can match perf

error: borrowed value does not live long enough reference must be valid for the block suffix following statement 0 When you use an index operator ([]) you get the actual object at index location. You do not get a reference, pointer or copy. cannot move out of indexed content


Rust is especially difficult if you bring your C-like assumptions to it.

You first need to learn the difference between a borrow (a view, which may be read-only) and an owned type, and know that Rust enforces stdlib strings to be UTF-8. Random mutation of a byte in a string is possible, but it has been deliberately put behind an unsafe function call, and there are better alternatives available (like char iterators).
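
A minimal sketch of what that looks like in practice (my own example):

  fn main() {
      let s = String::from("héllo");
      // s[0] = b'H';  // does not compile: no random byte mutation of a String
      for (i, ch) in s.char_indices() {
          println!("byte offset {}: {}", i, ch); // iterate over chars, not raw bytes
      }
  }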

It's not hard once you "get" the way Rust works with data.


> Rust is especially difficult if you bring your C-like assumptions to it.

To be fair, it's very difficult if you bring your JS/Python/C#/Java assumptions there too. As a lifelong Java/C# dev, I just never imagined having objects on the stack. In C# knowing what goes on the stack is important - but the thing with everything on the stack is that it's copied - so it's basically just trivial values that sit on the stack. In Rust and C++ you can make entire complex applications whose data is entirely on the stack. And getting over that threshold is a big mental barrier.
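
A small sketch of what "entirely on the stack" means here (my own illustration):

  struct Point { x: f64, y: f64 }
  struct Rect { top_left: Point, bottom_right: Point }

  fn main() {
      // Nested structs stored by value: no heap allocation anywhere in this program.
      let r = Rect {
          top_left: Point { x: 0.0, y: 0.0 },
          bottom_right: Point { x: 4.0, y: 3.0 },
      };
      let area = (r.bottom_right.x - r.top_left.x) * (r.bottom_right.y - r.top_left.y);
      println!("{}", area);
  }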

I'm used to just thinking of my app state as a blob on the heap. A functional programmer or a C++ programmer thinks differently.


Please reformat your comment so it's more readable. Adding 4 spaces in front of a line will make it monospace.


Ouch. I still can't wrap my mind around the idea that string traversal becomes intuitive in Rust. The entire iterator thing seems completely counter-intuitive.


Isn't it basically the same in Go?


I don't know Go, so I can't really tell.


Could someone point me to an article explaining how the Rust compiler reasons about the program, and how different kinds of objects and pointers are represented at runtime - in a word, something that would help me grasp what is really going on when I compile and run a program? The tutorials I saw are written on the assumption that you don't need to know that, which makes me feel like a total idiot.


The introductory manual "The Rust Programming Language" is a good place to get started. https://doc.rust-lang.org/book/second-edition/


Isn't the problem with Rust that it uses crappy pessimistic heuristics (the borrow checker) for an essentially NP-complete problem, causing complaints where the code is perfectly OK? IMO an interesting design decision, but I am staying away precisely because of this.


"Crappy pessimistic heuristics"

Are you talking about the borrow checker from half a decade ago? [0] Have you used the language since then? This is FUD -- it's really not hard to get code past the borrow checker anymore. Maybe every thousand lines or so you'll need to add a block or pull a temporary variable out of a one-liner, but the compiler basically tells you exactly what to do when something like that is relevant.

[0] https://pcwalton.github.io/blog/2013/01/21/the-new-borrow-ch...


Oh, I'd love if that were true. But in my limited experience with Rust it was still the borrow checker that absolutely made writing in the language hard. That was in 2017. My mental model of what should be safe was not at all the model the borrow checker had. And I really read the documentation about that part of the language.

I guess a confirmation that there were problems there is that they tried to improve the borrow checker in the releases after that.


I started working with Rust last month, and wrote about 2000 LOC since then. While I had my fair share of errors coming from the borrow checker, I've always found its complaints to be valid and easily resolved. Except for that one time where I tried to shuffle data between different threads and shouted at the borrow checker for two hours, until I realized that the borrow checker was actually right and what I tried to do was literally impossible without data races.


Were you writing functions/procedures under the implicit assumption that they would be executed under a single threaded context? Because Rust does not allow this assumption without using "unsafe". I've run into this before, and it can be a bit irritating, but if there's even a small chance you'll later want to execute the code in a multithreaded context, it will more than pay for the inconvenience. And if you're sure you won't, then you can just throw the data into an UnsafeCell and call it a day.


It's not even inherently about multi-threading https://manishearth.github.io/blog/2015/05/17/the-problem-wi...


Try to use the borrow checker of this decade to write callbacks in GUI frameworks, without polluting the code with Rc and RefCell, even though it is obvious there is only one path to the data being used.


Callbacks by definition create multiple paths to the data.

(You can also often get away with Cell rather than RefCell.)


> Callbacks by definition create multiple paths to the data.

How come, when you have a struct method accessing field members visible only to that specific method?

Something that is very easy to do with moving in lambda contexts in C++.


Because the closure itself contains a reference either to the struct or to its fields. That's the second path, and it can be invalidated by the first path either a) freeing the struct or b) changing its "shape" in memory (resizing a `Vec`, changing enum variants, etc.)

You can alternatively move the struct or its fields into the closure, but then you lose the first path. I know you're not doing that because if you were you wouldn't be having lifetime problems (and you wouldn't be able to share the state across event handlers, so it's not a great solution anyway).
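
A minimal sketch of that "second path" (my own example, standing in for a real GUI callback):

  struct Counter { count: u32 }

  fn main() {
      let mut counter = Counter { count: 0 };
      {
          // The closure mutably borrows `counter` - that's the second path to the data.
          let mut on_click = || counter.count += 1;
          // println!("{}", counter.count); // rejected: `counter` is already borrowed
          on_click();
          on_click();
      }
      println!("{}", counter.count); // fine again once the closure is gone; prints 2
  }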


The quoted sentence is not true, but most practical callbacks will create multiple paths to objects via borrows. Or you can move your structs into closures (like in C++). But I think what you originally wrote (using callbacks without Rc/RefCell) is fundamentally hard. You have to guarantee somehow that the struct is still alive when you access it through the borrow in the callback, otherwise you have a memory safety problem.
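
For completeness, a sketch of the usual Rc/RefCell shape people end up with when several callbacks need the same state (names are mine, for illustration only):

  use std::cell::RefCell;
  use std::rc::Rc;

  fn main() {
      // Shared, mutable state for two "callbacks", kept alive by reference counting.
      let clicks = Rc::new(RefCell::new(0u32));

      let clicks_a = Rc::clone(&clicks);
      let on_click = move || *clicks_a.borrow_mut() += 1;

      let clicks_b = Rc::clone(&clicks);
      let on_reset = move || *clicks_b.borrow_mut() = 0;

      on_click();
      on_click();
      println!("{}", *clicks.borrow()); // 2
      on_reset();
      println!("{}", *clicks.borrow()); // 0
  }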


The situation I was having issues is explicitly acknowledged by the NLL RFC as being an issue.

It is related to closure desugaring.

https://github.com/nikomatsakis/nll-rfc/blob/master/0000-non...


I think this comes down to philosophy; do you want the compiler to just check that the program works for all the current parts, or to have the compiler make sure that the program will work for all the possible ways the interfaces can be used as described?

Basically, Rust makes you ensure that your code can actually do all the things it says it can do, not just the things it is currently doing. Personally, I find that very helpful; I would rather do the work up front than have to go back and fix a lot of things later as my code grows. This might be based on my experience designing and maintaining very long lived systems, where I wish I had been forced to do things correctly from the beginning.


I'd love it if it were able to predict the correctness of the program, but it can't (halting problem and all that). It can only approximate it, and to be meaningful, it has to be pessimistic. Which is one more headache for casual developers.


Are you saying that Rust is trying to predict the correctness of the program? Sure, parts like deadlocks and memory use, but the whole thing?

This seems to be a common misconception, maybe from interacting with overzealous Rust fan clubbers.


Do you stay away from all type systems? This would be an accurate negative framing for any type system, which all necessarily reject some correct programs.


Not really. It's just super difficult to understand how exactly the borrow checker operates and when it will allow a program to compile or not, causing all kinds of unpredictable situations and, unless you are "black belt"-level, difficulty in providing any estimates. Not to mention the loss of morale when you have to fight it every single day.


This is not my experience using Rust. I had six months of programming experience when I first tried Rust. I definitely got confusing errors at first, but I grasped the system within a month or two. And that was in 2014 - the borrow checker and especially its errors have improved greatly since then!

But this is also a very different objection from your initial objection, which was just a statement of fact about type systems for turing complete languages.


I'll take a look once I am done learning another hot language ;-) Thanks!


So your issue isn't about Rust, it's about changing popularity and language trendiness?


Not at all; I am currently able to write non-trivial programs in more than 60 languages, ranging from low-level distributed transactions (I've already implemented my own Paxos) through 3D visualizations, business software, mobile apps, deep learning models and AI, and system software, all the way to advanced ETL pipelines, using imperative, functional, logical, coroutine-based, generic, reactive, declarative, etc. concepts, and I am always on the lookout for a better language, as I can't find the perfect one ;-) So I want to understand where Rust stands and whether it is a meaningful investment to learn, both in terms of programming "pleasure" and in a business sense.


Have you tried to write GUI code in Rust?


Is it? Could you give an example? I've found a borrow checker bug in the current release but the rest of the time its behavior is perfectly predictable. Granted, I wouldn't be surprised if the documentation was horrible, my understanding of the checker is based on one way I would do it.


It's simply one more thing you need to keep in your internal "context" while writing a program, taking mental resources you could spend elsewhere. Perhaps an IDE guiding you and outright rejecting the code or offering code completion that is compatible with borrow checker might be a solution, though we aren't there yet I believe.


You have to keep the kind of problems the borrow checker nags you about in your mind anyway. Memory errors and concurrency problems don't go away because the compiler doesn't complain about them. Personally, I find it easier to use Rust than C++, because I don't have to worry so much about iterator invalidation and stuff like that.


Experienced Rust programmers usually say the opposite: the borrow checker frees you from thinking about it since the compiler lets you know when you mess up.


You have the same thing in C++, right?


Yes, and sometimes I wish C++ stayed somewhere in pre C99 levels where a single person could master it. As much as new features are useful, they make codebases unreadable to anyone that doesn't grasp all concepts. Even Google internally "javaizes" C++ and uses a strict subset to keep some sanity. Scala is another language that can go insane in the same fashion if teams don't enforce strict rules.


Rust is a "Javaization" of C++ checked by the compiler.


What do you guys think about documentation like https://learning-rust.github.io? Would it be difficult to learn Rust with this type of documentation?


It's similar to learning Stateflow: it has a bigger upfront cost in program design, but once the design is fully specified, the implementation will likely be correct (fewer degrees of freedom).


Rust's problem is not correctness. The borrow checker is the good part of the language. It's feeping creaturism. The language started out as imperative, and then became semi-functional. It started out as thread-oriented, and now is acquiring "green threads"/coroutines/cooperative multitasking. The generics system and library started where C++ and Boost left off. The "trait" system was more "objects bad, must do something else" than an improved idea.

The Go guys knew when to stop. The Rust guys don't.


Er, no, Rust started out pretty solidly as a functional language.

It used to look like an ML variant, with purity and stuff. And it had a GC. It's lost most of that now.

It also used to have green threads up till 1.0 (they were removed pre-1.0), we're now adding them back. And they're being added back as a library, not as part of the language.

It moved away from both of these.


This is a misnomer. This hasn't been an issue since 1.0. Before then, the language designers specifically warned people that they were taking the time to learn what works together.

Lightweight threading isn't being re-added to the language. It might emerge from some of the asynchronous I/O work, but that's part of a third party library.


It's not getting green threads per se, it's getting co-routines. Even then, the co-routines will be mostly a library feature rather than a language feature. As for the distinction between coroutines and green threads - coroutines explicitly yield whilst green threads can be pre-empted. Implementing coroutines without a runtime is hard, implementing green threads without a runtime is harder.


Is there a way to avoid the true-believer syndrome for Rust?

I want to embrace Rust, but everyone ~100% of the time comes away chanting about how awesome Rust is. So much so that it's a bit unsettling. Zealotry in general is bad, but especially in programming: once you identify as an X programmer, you lose out on ideas from Y and Z.

Every tool has its flaws, but for whatever reason it seems extremely rare to discuss Rust's. When you try, it's like you're the Pied Piper: your song brings seemingly the entire community scurrying towards you, ready to force-push you why you're wrong and why Rust is awesome.

How about no REPL? We Go now. Slow compiler? Check. Performance is nice, and so is parallelism, but it costs static typing. It's a nice tool to have in your pocket as a fallback, but where's the niche? The ecosystem is huge, too: run `du -hs ~/.cargo/` and see how many GBs you're losing. This is a really tough sell on a 256GB SSD.

"So get a bigger SSD!" "Yeah but the compiler is always improving" "Yes but real programmers don't really need a REPL" "Yeah but static typing is worth the tradeoffs"

These refrains have been identical for nearly a year. Is there anything new to say, or is it time to agree to disagree?


- We are directing material resources (as in, hours of paid work) to improving compiler performance. We consider this a serious problem, and "the compiler performance is always improving" is not an accurate gloss of the amount of work people are putting into solving it.

- We would love to have a REPL but there are nontrivial technical challenges, and it has not been a major requested feature by our users.

- I have never heard the complaint about the size of packages before now; I have no idea how we compare to other languages in this regard.

I've witnessed some of the conversations you're talking about (I think I engaged with you about REPLs in the past), and I think the "true-believer syndrome" has more to do with how you interpret the answers you receive than with the behavior of anyone else.


Only listening to current users for feature requests is selection bias.

I'm not currently using rust, and lack of a repl is a big strike against it. Not as much of a strike as the lack of a good stable async story, but still.


We explicitly reach out to willing non-users to try and counteract said bias.


Remember that Rust is competing with C and C++.

Neither of them have REPLs. Both are statically typed (actually a lot of C programs are pseudo-dynamically typed with void * but pretty much nobody thinks that is better than actually using a dynamic language, or using something that has better generics). C++ is famous for long compilation times.

For the class of problems in which your choices are C or C++, Rust is such a compelling 3rd option that it's hard not to gush. It was clear to me within hours of first learning rust that the designers had exactly the same problems with C++ that I had.

Many communities tend to push back against questions about a language's weaknesses (both real and imagined). To me, this appears to be mainly a case of it being the n-100th time someone has said something like "Language X sounds interesting to me, but I heard it has a problem with Y" to a group of people who are all either not at all worried about Y, or have decided that giving up Y is worth it. This is amplified on a group discussion board (such as usenet or irc[1]), since you get multiple responses simultaneously, and is made worse when people who don't actually understand the tradeoffs try to repeat points they've heard from people who do. Also, the less-hot-headed members of the community are typically uninterested in getting involved in a flamewar, so it's not "the entire community scurrying towards you" but rather the most obnoxious subset of the community.

If you want a reasonable discussion about strengths and weaknesses of a language, your best bet is to have a one-on-one discussion with someone who actually had to make these trade-offs.

Note also that the question of typed vs untyped languages is always going to be a distraction, and "agree to disagree" is the only likely consensus here[2].

1: Usenet is often better than IRC because threading means that after the dust has settled, there is sometimes a thread or two with gems that can be mined from the muck.

2: I'm firmly in the "typed languages are better" camp, but I write almost all of my hobby software in Common Lisp, which is quite firmly in the untyped camp, despite having some language support for static typing. Static analysis (of which typing is a subset) is one of the less fun parts of writing software, but if correctness is important, then, IMO, it's reckless not to do it.


> Neither of them have REPLs.

Sure they do, FOSS developers just don't look into the right places.

Already in the 90's there were companies selling C interpreters.

CERN created CINT and Cling for C++.

Microsoft has quite a good interactive debugger, and edit-and-continue, which is a kind of very poor man's REPL.



I find it pretty strange to read you say "it costs static typing". Static typing is a feature. There's a reason people are starting to switch to TypeScript instead of JavaScript.


> Static typing is a feature.

A feature is just an objective quality of a product that isn't contrary to the creator's intent.

Features often have costs; but the desirable ones have benefits in some use that outweigh the costs in that use.

It's true that mandatory static typing with a sufficiently expressive type system is beneficial in some uses. It's also true that pure dynamic typing or optional static typing (of the mypy/TypeScript sort) are. Now, obviously, the circumstances and users for which each is most beneficial are different.


(ง'̀-'́)ง

Static typing is useful in large codebases at big companies because it forces you to communicate your intentions clearly. It's also useful in large OSS projects because it allows your IDE to auto generate tooling and documentation. If someone is using TypeScript, it's far more likely the codebase will have clear, documented interfaces.

But TypeScript is optional. It doesn't hold you down and yell at you until you obey it. Even when you're using it, you can simply ignore the warnings/errors if you want to be naughty, and TypeScript will simply shrug and say "if you insist. You'll realize later I was right." But in the meantime your code actually runs.

In Rust, it's the opposite. You have to think about every single aspect of what you're doing. Making a cathedral out of toothpicks would also force you to plan your actions very carefully, but you wouldn't really say it's a feature per se. More like the tradeoffs of the medium.


> Static typing is useful in large codebases at big companies because it forces you to communicate your intentions clearly.

I disagree with the implication that static typing is not useful for small applications. Last week, I was writing a tiny GTK+ application (still < 1KLOC). GTK+ is not thread-safe, so you can only call GTK+ functions on the main thread. In this application, I wanted a separate worker thread, so that I could already visualize data while the application is still reading.

In Rust, you can only send data of a type that implements Send to another thread, and share data of a type that implements Sync. Since Send/Sync are not implemented for gtk-rs types, sending/sharing data to/with another thread is a compile error. This was immensely helpful in writing a program that uses GTK+ in a safe way. One could see this as the compiler being annoying, but having lived through enough C++/Qt threading bugs, I would rather have the compiler bail out with a clear error than debug these issues at runtime.
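
Setting the gtk-rs specifics aside, here is a minimal sketch of the same Send check using Rc, which - like the GTK+ wrapper types - is not Send:

  use std::rc::Rc;
  use std::sync::Arc;
  use std::thread;

  fn main() {
      let not_thread_safe = Rc::new(42);
      // Rejected at compile time: `Rc<i32>` does not implement `Send`.
      // thread::spawn(move || println!("{}", not_thread_safe));
      drop(not_thread_safe);

      // Arc<T> is Send + Sync, so the same code with Arc compiles and runs.
      let thread_safe = Arc::new(42);
      thread::spawn(move || println!("{}", *thread_safe)).join().unwrap();
  }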


Sounds like you might like Lua+Rust or Scheme+Rust. Rust is still extremely young, and all the first people to show up are, by definition, the early, willing converts. As it attracts more folks, it will get a wider spectrum of programmers and opinions. You could make an `Any` enum type and implement a ton of default traits for it; it might feel a little dynamic.


LuaJIT with its FFI library for zero-overhead FFI is amazing.

Even if you don't use Rust you can increase performance and reduce memory usage massively by simply defining the memory layout for your existing lua code.

http://luajit.org/ext_ffi.html#cdata


Rust is consciously designed for large code bases. shrug


> Even when you're using it, you can simply ignore the warnings/errors if you want to be naughty, and TypeScript will simply shrug and say "if you insist. You'll realize later I was right." But in the meantime your code actually runs.

This is because, well, TypeScript is sufficiently close to the target language, JavaScript. There exists some reasonable JavaScript translation for a lot of "bad" TypeScript code. Rust (among other languages) is compiled down to machine code, and you don't have that luxury there. After all, you need exact enough types [1] to translate the damn thing to a binary anyway.

[1] Yeah the generic type "erasure" is a thing. That's reasonable only because those languages target slightly above the machine code (managed environments).


> but for whatever reason it seems extremely rare to discuss Rust's

At least within Rust spaces these get discussed at a regular cadence, in my experience. The usual pushback you see is "we're working on it", which is a valid response. I'm curious to see examples of what you're talking about, we try to push back hard on language zealotry in Rust spaces.


there is a culty vibe.

if i had to speculate why, it's that a lot of the more zealous Rust programmers simply have never had the skills or wherewithal to write in a systems programming language before. (I'd count myself in this category. I knew how to write half-assed scientific C++ but that's it.)

it's pretty intoxicating to finally be able to write code without a garbage collector and not screw it up, and be able to use a modern package manager at the same time to get all kinds of nice dependencies.

and getting over the borrow checker hump actually is one of those mind-expanding things like learning Lisp or Haskell is supposed to be. when I have to write C or C++ code now I have a little borrow checker living in my head helping me avoid mistakes.

these are very exciting experiences and the kind of thing that inspires zealous evangelism.

more experienced systems programmers, people who maintain major packages, and the Rust core team all seem to be a little more moderate in their enthusiasm even if they are huge fans of the language.


Seems like it’s time to decide for yourself?

I don’t spend much time or thought on what others have to say about which tech is the best for X/YEverything. I just learn it use it, and then make up my mind.


I like to look beyond to other languages, like Idris, ATS, Sixten, F*, Lean, Koka, etc. Rust has many clunky parts that become more obvious through use, even though it is still my goto language. I find often the most zealous folks are those who come from scripting language or Java/C#-ish backgrounds, and have not yet delved further into the possibilities of static types and higher order abstractions.


"Yes but real programmers don't really need a REPL" I'm hearing this for the first time. I don't call myself a real programmer but REPL is a godsend. There's no quicker way to test something - a small function, a numpy matmul etc


This is the exact reason I won't learn Rust beyond a quick look at the docs: the promotion by some very vocal people is a huge turn-off. Notably, by saying that everything must be rewritten in Rust, it implies that every other language is sh*t and people using them are dumb. Of course I don't want that view for my work, nor do I want to be associated with that kind of people. This is a bit sad, because the language probably has some interesting points, but the social aspect of it just doesn't fit the bill.


> every other language is sh*t

Looking at how many CVEs are caused by undefined behavior and insecure properties of C, there is merit to the claim that Rust is objectively better than C in the strictness department and if safety/security is a major concern to you, you should absolutely choose Rust over C in new projects.

As for rewriting existing C software in Rust, the usual caveats for rewrites apply. Decades-old software like coreutils has had many eyes on the code, so the chance of catastrophic bugs decreases over time (but never to zero).


Just for the record, the annoying vocal promotion is not coming from the Rust team. The people who actually know the language know its limitations, and discourage overhyping it. Language flamewars are banned on all forums moderated by the core Rust community.


Where are you seeing these kinds of comments? Can you link me to them? I’d like to tell them to cut it out.


It's calmed down the past year or so but before that it was so insane (especially here on HN) that the impression will be hard to erase for those who were around to see it.


I was around, but still didn't actually see it. I see people complaining about it more than I actually see the comments themselves. That's why I always ask for concrete examples, and people never actually show them to me.



